In home theatre news, this article on XLR vs RCA – which cable is better? makes for good reading, particularly if you’re starting to convince yourself that you need to take things in a certain direction.
Welcome to Random Short Take #57. Only one player has worn 57 in the NBA. So it looks like this particular bit is done. Let’s get random.
In the early part of my career I spent a lot of time tuning up old UNIX workstations. Lifting those SGI CRTs from desk to desk was never a whole lot of fun, though. This article about a Sun Ultra 1 project brought back a hint of nostalgia for those days (but not enough to really get into it again). Hat tip to Scott Lowe for the link.
As you get older, you realise that people talk a whole lot of rubbish most of the time. This article calling out audiophiles for the practice was great.
This article on the Backblaze blog about one company’s approach to building its streaming media capability on B2 made for interesting reading.
DH2i recently announced the general availability of DxEnterprise (DxE) for Containers, enabling cloud-native Microsoft SQL Server container Availability Groups outside and inside Kubernetes.
Speaking of press releases, Zerto has made a few promotions recently. You can keep up with that news here.
I’m terrible when it comes to information security, but if you’re looking to get started in the field, this article provides some excellent guidance on what you should be focussing on.
We all generally acknowledge that NTP is important, and most of us likely assume that it’s working. But have you been checking? This article from Tony does a good job of outlining some of the reasons you should be paying some more attention to NTP.
Welcome to Random Short Take #52. A few players have worn 52 in the NBA including Victor Alexander (I thought he was getting dunked on by Shawn Kemp but it was Chris Gatling). My pick is Greg Oden though. If only his legs were the same length. Let’s get random.
This article on data mobility from my favourite Chris Evans was great. We talk a lot about data mobility in this industry, but I don’t know that we’ve all taken the time to understand what it really means.
I’m a big fan of Tech Field Day, and it’s nice to see presenting companies taking on feedback from delegates and putting out interesting articles. Kit’s a smart fellow, and this article on using VMware Cloud for application modernisation is well worth reading.
Preston wrote about some experiences he had recently with almost failing drives in his home environment, and raised some excellent points about resilience, failure, and caution.
Speaking of people I worked with briefly, I’ve enjoyed Siobhán’s series of articles on home automation. I would never have the patience to do this, but I’m awfully glad that someone did.
Welcome to Random Short Take #34. Some really good players have worn 34 in the NBA, including Ray Allen and Sir Charles. This one, though, goes out to my favourite enforcer, Charles Oakley. If it feels like it’s only been a week since the last post, that’s because it has.
April Fool’s is always a bit of a trying time, what with a lot of the world being a few timezones removed from where I live. Invariably I stop checking news sites for a few days to be sure. Backblaze recognised that these are strange times, and decided to have some fun with their releases, rather than trying to fool people outright. I found the post on Catblaze Cloud Backup inspiring.
VMware vSphere 7 recently went GA. Here’s a handy article covering what it means for VMware cloud providers.
Speaking of VMware things, John Nicholson wrote a great article on SMB and vSAN (I can’t bring myself to write CIFS, even when I know why it’s being referred to that way).
Scale is infinite, until it isn’t. Azure had some minor issues recently, and Keith Townsend shared some thoughts on the situation.
StorMagic recently announced that it has acquired KeyNexus. It also announced the availability of SvKMS, a key management system for edge, DC, and cloud solutions.
Joey D’Antoni, in collaboration with DH2i, is delivering a webinar titled “Overcoming the HA/DR and Networking Challenges of SQL Server on Linux”. It’s being held on Wednesday 15th April at 11am Pacific Time. If that timezone works for you, you can find out more and register here.
Disclaimer: I recently attended VMworld 2017 – US. My flights were paid for by ActualTech Media, VMware provided me with a free pass to the conference and various bits of swag, and Tech Field Day picked up my hotel costs. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event. Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.
Here are my rough notes from “STO1179BU – Understanding the Availability Features of vSAN”, presented by GS Khalsa (@gurusimran) and Jeff Hunter (@jhuntervmware). You can grab a PDF of the notes from here. Note that these posts don’t provide much in the way of opinion, analysis, or opinionalysis. They’re really just a way of providing you with a snapshot of what I saw. Death by bullet point if you will.
Components and Failure
vSAN Objects Consist of Components
VM Home – multiple components
Virtual Disk – multiple components
Swap File – multiple components
vSAN has a cache tier and capacity tier (objects are stored here)
Greater than 50% must be online to achieve quorum
Each component has one vote by default
Odd number of votes required to break tie – preserves data integrity
Greater than 50% of components (votes) must be online
Components can have more than one vote
Votes added by vSAN, if needed, to ensure odd number
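As a back-of-the-napkin sketch (my own Python, definitely not VMware code), the quorum rule in the notes above looks something like this:

```python
# Hypothetical sketch of the vSAN quorum rule described above:
# an object is accessible only while the components that remain
# online hold greater than 50% of the total votes.

def has_quorum(online_votes, total_votes):
    """Return True if the online components hold >50% of all votes."""
    return online_votes * 2 > total_votes

# Example: a mirrored object with two data components (1 vote each)
# plus a witness component (1 vote) added to keep the total odd.
components = {"replica_1": 1, "replica_2": 1, "witness": 1}
total = sum(components.values())

# Lose one replica: 2 of 3 votes remain online, quorum holds.
assert has_quorum(total - components["replica_1"], total)

# Lose a replica and the witness: 1 of 3 votes, quorum lost.
assert not has_quorum(1, total)
```

Note that exactly 50% isn’t enough, which is why vSAN adds votes to keep the total odd: an even split can never break the tie.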
Disclaimer: I recently attended Storage Field Day 7. My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event. Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.
For each of the presentations I attended at SFD7, there are a few things I want to include in the post. Firstly, you can see video footage of the VMware presentation here. You can also download my raw notes from the presentation here. Finally, here’s a link to the VMware website that covers some of what they presented.
I’d like to say a few things about the presentation. Firstly, it was held in the “Rubber Chicken” Room at VMware HQ.
Secondly, Rawlinson was there, but we ran out of time to hear him present. This seems to happen each time I see him in real life. Still, it’s not every day you get to hear Christos Karamanolis (@XtosK) talk about this stuff, so I’ll put my somewhat weird @PunchingClouds fanboy thing to the side for the moment.
Thirdly, and I’ll be upfront about this, I was a bit disappointed that VMware didn’t go outside some fairly fixed parameters as far as what they could and couldn’t talk about with regards to Virtual SAN. I understand that mega software companies have to be a bit careful about what they can say publicly, but I had hoped for something fresher in this presentation. In any case, I’ve included my notes on Christos’s view on the VSAN architecture – I hope it’s useful.
VMware adopted the following principles when designing VSAN.
Compute + storage scalability
Unobtrusive to existing data centre architecture
Distributed software running on every host
Pools local storage (flash + HDD) on hosts (virtual shared datastore)
Symmetric architecture – no single point of failure, no bottleneck
The hypervisor opens up new opportunities, with the virtualisation platform providing:
Visibility into individual VMs and application storage
Management of all applications’ resource requirements
A position directly in the I/O path
A global view of the underlying infrastructure
Support for an extensive hardware compatibility list (HCL)
Critical paths in ESX kernel
The cluster service allows for
Fast failure detection
High performance (especially for writes)
The data path provides
Minimal CPU per IO
Minimal memory consumption
Physical access to devices
This equals minimal impact on consolidation rates. This is a Good Thing™.
Optimised network protocol
As ESXi is both the “consumer” and “producer” of data there is no need for a standard data access protocol.
Per-object coordinator = client
Distributed “metadata server”
Transactions span only object distribution
Efficient reliable data transport (RDT)
Protocol agnostic (now TCP/IP)
Standard protocol for external access?
Two tiers of storage: Hybrid
Optimise the cost of physical storage resources
HDDs: cheap capacity, expensive IOPS
Flash: expensive capacity, cheap IOPS
Combine best of both worlds
Performance from flash (read cache + write back)
Capacity from HDD (capacity tier)
Optimise workload per tier
Random IO to flash (high IOPS)
Sequential IO to HDD (high throughput)
Storage is organised in disk groups (a flash device plus magnetic disks) – up to 5 disk groups per host, each with 1 SSD + up to 7 HDDs – and the disk group is the fault domain. 70% of the flash is read cache, 30% is write buffer. Writes are accumulated, then staged to the magnetic disks in a disk-friendly fashion. Proximal IO – writing blocks within a certain number of cylinders. The filesystem on the magnetic disks is slightly different to the one on the SSDs. It uses the back-end of the Virsto filesystem, but doesn’t use the log-structured filesystem component.
Flash device: cache of disk group (70% read cache, 30% write-back buffer)
No caching on “local” flash where VM runs
Flash latencies 100x network latencies
No data transfers, no perf hit during VM migration
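To make that write path a bit more concrete, here’s a toy model (my own sketch, nothing like VMware’s actual implementation) of a write-back buffer that accumulates writes and destages them in LBA order – the sort of “magnetic disk-friendly” staging described above:

```python
# Toy model (not VMware's code) of the write path described above:
# writes land in the flash write-back buffer and are later destaged
# to the capacity tier in LBA order, which is friendlier to spinning
# disks than scattered random writes.

class WriteBackBuffer:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.dirty = {}  # LBA -> data

    def write(self, lba, data):
        self.dirty[lba] = data  # rewrites of the same LBA coalesce here
        if len(self.dirty) >= self.capacity:
            return self.destage()
        return []

    def destage(self):
        # Flush in ascending LBA order to approximate sequential IO.
        batch = sorted(self.dirty.items())
        self.dirty.clear()
        return batch

buf = WriteBackBuffer(capacity_blocks=4)
buf.write(9, b"i")
buf.write(2, b"b")
buf.write(9, b"i2")           # coalesced: still one dirty block
buf.write(5, b"e")
flushed = buf.write(1, b"a")  # buffer full, destage in LBA order
# flushed == [(1, b"a"), (2, b"b"), (5, b"e"), (9, b"i2")]
```

The coalescing is the point: absorbing rewrites in flash and flushing sorted batches is what lets the HDDs do what they’re good at (sequential throughput) rather than what they’re bad at (random IOPS).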
VM consists of a number of objects – each object individually distributed
VSAN doesn’t know about VMs and VMDKs
Up to 62TB useable
Single namespace, multiple mount points
VMFS created in sub-namespace
The VM Home directory object is formatted with VMFS to allow a VM’s config files to be stored on it. Mounted under the root dir vsanDatastore.
Availability policy reflected on number of replicas
Performance policy may include a stripe width per replica
Object “components” may reside in different disks and / or hosts
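As a rough illustration (my arithmetic, not vSAN’s actual placement logic), the way policy settings drive component counts can be sketched like this:

```python
# Hypothetical sketch: failures-to-tolerate (FTT) determines the
# replica count, and stripe width splits each replica across devices,
# so data component counts grow multiplicatively. Witness components
# (for quorum) come on top of this and are placed by vSAN as needed.

def data_components(failures_to_tolerate, stripe_width):
    replicas = failures_to_tolerate + 1  # one extra copy per tolerated failure
    return replicas * stripe_width

# FTT=1 with a stripe width of 2: two replicas, each striped across
# two devices, giving four data components spread over disks/hosts.
assert data_components(1, 2) == 4
```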
VSAN cluster = vSphere cluster
Ease of management
Piggyback on vSphere management workflows, e.g. EMM (Enter Maintenance Mode)
Ensure coherent configuration of hosts in vSphere cluster
Adapt to the customer’s data centre architecture while working with network topology constraints.
Maintenance mode – planned downtime.
Full data migration; and
No data migration.
VM-centric monitoring and troubleshooting
Configure, manage, monitor
Policy compliance reporting
Combination of tools for monitoring in 5.5
Ruby vSphere console
More to come soon …
Real *software* defined storage
Software + hardware – component based (individual components), Virtual SAN ready node (40 OEM validated server configurations are ready for VSAN deployment)
VMware EVO:RAIL = Hyper-converged infrastructure
It’s a big task to get all of this working with everything (supporting the entire vSphere HCL).
Closing Thoughts and Further Reading
I like VSAN. And I like that VMware are working so hard at getting it right. I don’t like some of the bs that goes with their marketing of the product, but I think it has its place in the enterprise and is only going to go from strength to strength with the amount of resources VMware is throwing at it. In the meantime, check out Keith’s background post on VMware here. In my opinion, you can’t go past Cormac’s posts on VSAN if you want a technical deep dive. Also, buy his book.