Welcome to Random Short Take #81. Last one for the year, because who really wants to read this stuff over the holiday season? Let’s get random.
Curtis did a podcast on archive and retrieve as part of his “Backup to Basics” series. It’s something I feel pretty strongly about, so much so that I wrote a chapter in his book about it. You can listen to it here.
More William Gibson content is coming to our screens, with the announcement that Neuromancer is coming to Apple TV. I hope it’s as well done as The Peripheral. Hat tip to John Birmingham for posting the news on his blog.
It’s been a while since I looked at Dell storage, but my friend Max wrote a great article over at Gestalt IT on Dell PowerStore.
I love Backblaze. Not in the sense that I want to marry the company, but I really like what the folks there do. And I really like the transparency with which they operate. This article giving a behind-the-scenes look at its US East Data Center is a fantastic example of that.
And, to “celebrate” 81 Random Short Takes (remember when I used to list my favourite NBA players and the numbers they wore?), let’s take a stroll down memory lane with two of my all-time, top 5, favourite NBA players – Kobe Bryant and Jalen Rose. The background for this video is explained by Jalen here.
Take care of yourselves and each other, and I’ll hopefully see you all on the line or in person next year.
Speaking of data protection software releases and enhancements, we’ve barely recovered from the excitement of Veeam v10 being released and Anthony is already talking about v11. More on that here.
Speaking of Veeam, Rhys posted a very detailed article on setting up a Veeam backup repository on NFS using a Pure Storage FlashBlade environment.
Sticking with the data protection theme, I penned a piece over at Gestalt IT for Druva talking about OneDrive protection and why it’s important.
OpenDrives has some new gear available – you can read more about that here.
The nice folks at Spectro Cloud recently announced that the company’s first product is generally available. You can read the press release here.
William Lam put out a great article on passing through the integrated GPU on Apple Mac minis with ESXi 7.
Welcome to Random Short Take #37. Not a huge amount of players have worn 37 in the NBA, but Metta World Peace did a few times. When he wasn’t wearing 15, and other odd numbers. But I digress. Let’s get random.
Pavilion Data recently added S3 capability to its platform. It’s based on a variant of MinIO, and adds an interesting dimension to what Pavilion Data has traditionally offered. Mellor provided some good coverage here.
Speaking of object storage, Dell EMC recently announced ECS 3.5. You can read more on that here. The architectural white paper has been updated to reflect the new version as well.
Online events are all the rage at the moment, and two noteworthy events are coming up shortly: Pure//Accelerate and VeeamON 2020. Speaking of online events, we’re running a virtual BNEVMUG next week. Details on that here. ZertoCON Virtual is also a thing.
Speaking of Pure Storage, this article from Cody Hosterman on NVMe and vSphere 7 is lengthy, but definitely worth the read.
I can’t recall whether I mentioned that this white paper covering VCD on VCF 3.9 is available now, and I can’t be bothered checking. So here it is.
I’m not just a fan of Backblaze because of its cool consumer backup solution and object storage platform, I’m also a big fan because of its blog. Articles like this one are a great example of companies doing corporate culture right (at least from what I can see).
I have the impression that Datadobi has been doing some cool stuff recently, and this story certainly seems to back it up.
Welcome to Random Short Take #36. Not a huge amount of players have worn 36 in the NBA, but Shaq did (at the end of his career), and Marcus Smart does. This one, though, goes out to one of my favourite players from the modern era, Rasheed Wallace. It seems like Boston is the common thread here. Might have something to do with those hall of fame players wearing numbers in the low 30s. Or it might be entirely unrelated.
Scale Computing recently announced its all-NVMe HC3250DF as a new appliance targeting core data centre and edge computing use cases. It offers higher performance storage, networking and processing. You can read the press release here.
Dell EMC PowerStore has been announced. Chris Mellor covered the announcement here. I haven’t had time to dig into this yet, but I’m keen to learn more. Chris Evans also wrote about it here.
StorCentric’s Nexsan recently announced the E-Series 32F Storage Platform. You can read the press release here.
In what can only be considered excellent news, Preston de Guise has announced the availability of the second edition of his book, “Data Protection: Ensuring Data Availability”. It will be available in a variety of formats, with the ebook already out. I bought the first edition a few times to give as a gift, and I’m looking forward to giving away a few copies of this one too.
Backblaze B2 has been huge for the company, and Backblaze B2 with S3-compatible API access is even huger. Read more about that here. Speaking of Backblaze, it just released its hard drive stats for Q1, 2020. You can read more on that here.
Hal recently upgraded his NUC-based home lab to vSphere 7. You can read more about the process here.
Jon recently posted an article on a new upgrade command available in OneFS. If you’re into Isilon, you might just be into this.
Disclaimer: I recently attended Storage Field Day 19. My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event. Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.
Dell EMC describes PowerOne as “all-in-one autonomous infrastructure”. It’s converged infrastructure, meaning your storage, compute, and networking are all built into the rack. It’s a transportation-tested package and fully assembled when it ships. When it arrives, you can plug it in, fire up the API, and be up and going “within a few hours”.
Trey Layton is no stranger to Vblock / VxBlock, and he was very clear with the delegates that PowerOne is not replacing VxBlock. After all, VxBlock lets them sell Dell EMC external storage into Cisco UCS customers.
So What Is It Then?
It’s a rack or racks full of gear. All of which is now Dell EMC gear. And it’s highly automated and has some proper management around it too.
Dynamically provision compute resources into clusters
Automated chassis expansion
Telemetry aggregation
Kinetic infrastructure
System Fabrics
Switches are 32Gbps
98% reduction in network configuration steps
System fabric visibility and lifecycle management
Intent-based automated deployment and provisioning
PowerSwitch open networking
PowerOne Controller
Highly automates 1000s of tasks
Powered by Kubernetes and Ansible
Delivers next-gen autonomous outcomes via robust API capabilities
From a scalability perspective, you can go to 275 nodes in a pod, and you can look after up to 32 pods (I think). The technical specifications are here.
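To give a flavour of what “fire up the API” might mean in practice, here’s a minimal, entirely hypothetical sketch of submitting a desired cluster definition to the PowerOne Controller. The endpoint path, payload fields, and authentication scheme below are assumptions I’ve made up for illustration – they are not the actual PowerOne API – but they capture the intent-based idea: you describe the outcome you want and let the controller orchestrate the individual tasks.

```python
import requests

CONTROLLER = "https://powerone-controller.example.com"  # hypothetical address

# Entirely illustrative payload: describe the cluster you want (the intent)
# and let the controller work out the individual provisioning steps.
desired_cluster = {
    "name": "prod-vsphere-01",
    "compute_nodes": 8,
    "storage_policy": "gold",
    "networks": ["mgmt", "vmotion", "nfs"],
}

resp = requests.post(
    f"{CONTROLLER}/api/clusters",                 # made-up endpoint path
    json=desired_cluster,
    headers={"Authorization": "Bearer <token>"},  # placeholder token
    timeout=60,
)
resp.raise_for_status()
print("Provisioning request accepted:", resp.json())
```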
Thoughts and Further Reading
Converged infrastructure has always been an interesting architectural choice for the enterprise. When VCE first came into being 10+ years ago via Acadia, delivering consistent infrastructure experiences in the average enterprise was a time-consuming endeavour and not a lot of fun. It was also hard to do well. VCE changed a lot of that with Vblock, but you paid a premium. The reason you paid that premium was that VCE did a pretty decent job of putting together an architecture that was reliable and, more importantly, supportable by the vendor. It wasn’t just the IP behind this that made it successful though, it was the effort put into logistics and testing. And yes, a lot of that was built on the strength of spreadsheets and the blood, sweat and tears of the deployment engineers out in the field.
PowerOne feels like a very different beast in this regard. Dell EMC took us through a demo of the “unboxing” experience, and talked extensively about the lifecycle of the product. They also demonstrated many of the automation features included in the solution that weren’t always there with Vblock. I’ve been responsible for Vblock environments over the years, and a lot of the lifecycle management activities were very thoroughly documented, and extremely manual. PowerOne, on the other hand, doesn’t look like it relies extensively on documentation and spreadsheets to be managed effectively. But maybe that’s just because Trey and the team were able to demonstrate things so effectively.
So why would the average enterprise get tangled up in converged infrastructure nowadays? What with all the kids and their HCI solutions, and the public cloud, and the plethora of easy to consume infrastructure solutions available via competitive consumption models? Well, some enterprises don’t like relying on people within the organisation to deliver solutions for mission critical applications. These enterprises would rather leave that type of outcome in the hands of one trusted vendor. But they might still want that outcome to be hosted on-premises. Think of big financial institutions, and various government agencies looking after very important things. These are the kinds of customers that PowerOne is well suited to.
That doesn’t mean that what Dell EMC is doing with PowerOne isn’t innovative. In fact I think what they’ve managed to do with converged infrastructure is very innovative, within the confines of converged infrastructure. This type of approach isn’t for everyone though. There’ll always be organisations that can do it faster and cheaper themselves, but they may or may not have as much at stake as some of the other guys. I’m curious to see how much uptake this particular solution gets in the market, particularly in environments where HCI and public cloud adoption is on the rise. It strikes me that Dell EMC has turned a corner in terms of system integration too, as the out of the box experience looks really well thought out compared to some of its previous attempts at integration.
Disclaimer: I recently attended Storage Field Day 19. My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event. Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.
The data centre is changing, as is the way we manage it. There’s been an observable evolution of the applications we run in the DC and a need for better tools. The traditional approach to managing infrastructure, with siloed teams of storage, network, and compute administrators, is also becoming less common. One of the key parts of this story is the growing need for automation. As operational organisations in charge of infrastructure and applications, we want to:
Manage large scale operations across the hybrid cloud;
Enable DevOps and CI/CD models with infrastructure as code (operational discipline); and
Deliver a self-service experience.
Automation has certainly gotten easier, and as an industry we’re moving from brute force scripting to assembling pre-built modules.
Enablers for Dell EMC Storage (for Programmers)
REST
All of our automation Power Tools use REST
Arrays have a REST API
REST APIs are versioned APIs
Organised by resource for simple navigation
Secure
HTTPS, TLS 1.2 or higher
Username / password or token based
Granular RBAC
With REST, development is accelerated
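To make those bullet points a little more concrete, here’s a minimal sketch of what driving a versioned storage REST API over HTTPS with token-based authentication tends to look like. The hostname, endpoint paths, and JSON field names are illustrative assumptions on my part rather than the actual API of any particular Dell EMC array.

```python
import requests

ARRAY = "https://array.example.com:8443"   # hypothetical management endpoint
API_VERSION = "v1"                         # versioned API path (illustrative)
CA_BUNDLE = "/etc/ssl/certs/array-ca.pem"  # validate TLS properly rather than disabling it

# Authenticate with username / password and receive a session token.
# The /auth/token path and JSON field names are assumptions for illustration.
auth = requests.post(
    f"{ARRAY}/api/{API_VERSION}/auth/token",
    json={"username": "storage-admin", "password": "not-a-real-password"},
    verify=CA_BUNDLE,
    timeout=30,
)
auth.raise_for_status()
token = auth.json()["token"]

# Resources are organised by type, which keeps navigation predictable:
# list the storage pools, then drill into whatever you're interested in.
pools = requests.get(
    f"{ARRAY}/api/{API_VERSION}/storage-pools",
    headers={"Authorization": f"Bearer {token}"},
    verify=CA_BUNDLE,
    timeout=30,
)
pools.raise_for_status()

for pool in pools.json().get("entries", []):
    print(pool.get("name"), pool.get("free_capacity"))
```

The specifics (basic auth versus tokens, resource naming, RBAC scopes) vary between the array families, but because it’s all just HTTPS and JSON it’s straightforward to wrap in whatever tooling you already use – which is exactly why the Ansible and vRO integrations below exist.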
Ansible for Storage?
Ansible is a pretty cool automation engine that’s already in use in a lot of organisations.
Minimal Setup
Install from yum or apt-get on a Linux server / VM
No agents anywhere
Low bar of entry to automation
Near zero programming
Simple syntax
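As a rough illustration of that low barrier to entry, the sketch below kicks off an existing playbook from Python using the ansible-runner package (one of several ways to run Ansible programmatically; plain ansible-playbook from the shell works just as well). The playbook name, paths, and variables are placeholders I’ve invented – in practice the playbook would call whichever Dell EMC storage modules you have installed.

```python
import ansible_runner

# Run a (hypothetical) playbook that provisions a volume on an array.
# ansible-runner expects a "private data dir" containing the playbook,
# inventory, and any roles or collections it needs.
result = ansible_runner.run(
    private_data_dir="/opt/automation/storage",   # placeholder path
    playbook="provision_volume.yml",              # placeholder playbook name
    inventory="/opt/automation/storage/inventory",
    extravars={
        "volume_name": "app01_data",
        "volume_size_gb": 500,
    },
)

print("Status:", result.status)  # e.g. "successful" or "failed"
print("Return code:", result.rc)

# Each task event is available if you want to log or audit what actually ran.
for event in result.events:
    task = event.get("event_data", {}).get("task")
    if task:
        print("Ran task:", task)
```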
Dell EMC and vRO for storage
VMware’s vRealize Orchestrator has been around for some time. It has a terrible name, but does deliver on its promise of simple automation for VMware environments.
Plugins allow full automation, from storage to VM
Easily integrated with other automation tools
The cool thing about the plugin is that you can replace homegrown scripts with a pre-written set of plugins fully supported by Dell EMC.
You can also use vRO to implement automated policy based workflows:
Automatic extension of datastores;
Configure storage the same way every time; and
Tracking of operations in a single place.
vRO plugs in to vRealize Automation as well, giving you self service catalogue capabilities along with support for quotas and roles.
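vRO workflows themselves are built in the vRO client rather than written as scripts, but the policy logic behind something like “automatic extension of datastores” is easy to sketch. Here’s a rough, hypothetical version of that check using pyVmomi; the extend_datastore() function is a placeholder for whatever the Dell EMC plugin workflow would actually do (grow the backing volume on the array, then expand the datastore), not a real API.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

FREE_SPACE_THRESHOLD = 0.20  # extend when less than 20% free (assumed policy)


def extend_datastore(ds):
    # Placeholder: this is where the vRO workflow / storage plugin would
    # grow the backing volume on the array and expand the datastore.
    print(f"Would extend datastore {ds.summary.name}")


# Lab-only certificate handling; use proper certificates in production.
context = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="automation@vsphere.local",
                  pwd="not-a-real-password", sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        summary = ds.summary
        if summary.capacity and summary.freeSpace / summary.capacity < FREE_SPACE_THRESHOLD:
            extend_datastore(ds)
finally:
    Disconnect(si)
```

Wrap that logic in a schedule (or a vRO policy) and you get the “configure storage the same way every time” outcome, with every run tracked in one place.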
What does the vRO plugin support?
Supported Arrays
PowerMax / VMAX All-Flash (Enterprise)
Unity (Midrange)
XtremIO
Storage Provisioning Operations
Adds
Moves
Changes
Array Level Data Protection Services
Snapshots
Remote replication
Thoughts and Further Reading
DevOps means a lot of things to a lot of people. Which is a bit weird, because some smart folks have written a handbook that lays it all out for us to understand. But the point is that automation is a big part of what makes DevOps work at a functional level. The key to a successful automation plan, though, is that you need to understand what you want to automate, and why you want to automate it. There’s no point automating every process in your organisation if you don’t understand why you do that process in the first place.
Does the presence of a vRO plugin mean that Dell EMC will make it super easy for you to automate daily operations in your storage environment? Potentially. As long as you understand the need for those operations and they’re serving a function in your organisation. I’m waffling, I know, but the point I’m attempting to make is that having a tool bag / shed / whatever is great, and automating daily processes is great, but the most successful operations environments are mature enough to understand not just the how but the why. Taking what you do every day and automating it can be a terrifically time-consuming activity. The important thing to understand is why you do that activity in the first place.
I’m really pleased that Dell EMC has made this level of functionality available to end users of its storage platforms. Storage administration and operations can still be a complicated endeavour, regardless of whether you’re a storage administrator comfortably ensconced in an operational silo, or one of those cool site reliability engineers wearing jeans to work every day and looking after thousands of cloud-native apps. I don’t think this is the final version of what these tools look like, or what Dell EMC wants to deliver in terms of functionality, but it’s definitely a step in the right direction.
Disclaimer: I recently attended Storage Field Day 19. My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event. Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.
One of the key features of the Isilon platform has been its scalability. OneFS automatically expands the filesystem across additional nodes. This scalability is impressive, and the platform has the ability to linearly scale both capacity and performance. It supports up to 252 nodes, petabytes of capacity and millions of file operations. My favourite thing about the scalability story, though, is that it’s non-disruptive. Dell EMC says it takes less than 60 seconds to add a node. That assumes you’ve done a bit of pre-work, but it’s a good story to tell. Even better, Isilon supports automated workload rebalancing – so your data is automatically redistributed to take advantage of new nodes when they’re added.
One Filesystem
They call it OneFS for a reason. Clients can read / write from any Isilon node, and client connections are distributed across the cluster. Each file is automatically distributed across the cluster. This means that the larger the cluster, the better the efficiency and performance. OneFS is also natively multi-protocol – clients can read / write the same data over multiple protocols.
Always-on
There are some neat features in terms of resiliency too.
The cluster can sustain multiple failures with no impact – up to 4 node failures, or 4 drive failures in each pool
Non-disruptive tech refresh – non-disruptively add, remove or replace nodes in the cluster
No dedicated spare nodes or drives – better efficiency as no node or drive is unused
There is support for an ultra dense configuration: 4 nodes in 4U, offering up to 240TB raw per RU.
Comprehensive Enterprise Software
SmartDedupe and Compression – storage efficiency
SmartPools – Automated Tiering
CloudPools – Cloud tiering
SmartQuotas – Thin provisioning
SmartConnect – Connection rebalancing
SmartLock – Data integrity
SnapshotIQ – Rapid Restore
SyncIQ – Disaster Recovery
Three Approaches to Data Reduction
Inline compression and deduplication
Post-process deduplication
Small file packing
Configurable tiering based on time
Policy based tiering at file level
Transparent to clients / apps
Other Cool Stuff
SmartConnect with NFS Failover
High Availability
No RTO or RPO
SnapshotIQ
Very fast file recovery
Low RTO and RPO
SyncIQ via LAN
Disk-based backup and business continuity
Medium RTO and RPO
SyncIQ via WAN
Offsite DR
Medium – high RTO and RPO
NDMP Backup
Backup to tape
FC backup accelerator
Higher RTO and RPO
Scalability
Key Features
Support for files up to 16TB in size
Increase of 4X over previous versions
Benefits
Support applications and workloads that typically deal with large files
Use Isilon as a destination or temporary staging area for backups and databases
Isilon in the Cloud
All this Isilon stuff is good, but what if you want to leverage those features in a more cloud-friendly way? Dell EMC has you covered. There’s a good story with getting data to and from the major public cloud providers (in a limited amount of regions), and there’s also an interesting solution when it comes to running OneFS in the cloud itself.
[image courtesy of Dell EMC]
Thoughts and Further Reading
If you’re familiar with Isilon, a lot of what I’ve covered here wouldn’t be news, and would likely be a big part of the reason why you might even be an existing customer. But the OneFS in the public cloud stuff may come as a bit of a surprise. Why would you do it? Why would you pay over the odds to run appliance-like storage services when you could leverage native storage services from these cloud providers? Because the big public cloud providers expect you to have it all together, and run applications that can leverage existing public cloud concepts of availability and resiliency. Unfortunately, that isn’t always the case, and many enterprises find themselves lifting and shifting workloads to public clouds. OneFS gives those customers access to features that may not be available to them using the platform natively. These kinds of solutions can also be interesting in the verticals where Isilon has traditionally proven popular. Media and entertainment workloads, for example, often still rely on particular tools and workflows that aren’t necessarily optimised for public cloud. You might have a render job that you need to get done quickly, and the amount of compute available in the public cloud would make that a snap. So you need storage that integrates nicely with your render workflow. Suddenly these OneFS in X Cloud services are beginning to make sense.
It’s been interesting to watch the evolution of the traditional disk slingers in the last 5 years. I don’t think the public cloud has eaten their lunch by any means, but enterprises continue to change the way they approach the need for core infrastructure services, across all of the verticals. Isilon continues to do what it did in the first place – scale out NAS – very well. But Dell EMC has also realised that it needs to augment its approach in order to keep up with what the hyperscalers are up to. I don’t see on-premises Isilon going away any time soon, but I’m also keen to see how the product portfolio develops over the next few years. You can read some more on OneFS in Google Cloud here.
Welcome to my semi-regular, random news post in a short format. This is #26. I was going to start naming them after my favourite basketball players. This one could be the Korver edition, for example. I don’t think that’ll last though. We’ll see. I’ll stop rambling now.
Do you know Cody? Cody’s a smart guy, and very good at expressing technical things on his blog. This article on deploying the Pure Storage OVA using PowerShell is a good example of that.
InfiniteIO has been doing some cool stuff. I spoke to them recently and will be writing something about them in the near future. In the meantime, here’s their most recent press release.
I wrote about Excelero recently, but neglected to mention some of what it’s been doing with NVIDIA. You can read more about that here.
The August edition of the Brisbane VMUG meeting will be held on Tuesday 20th August at Fishburners from 4 – 6pm. It’s sponsored by Dell EMC and should be a great afternoon.
Here’s the agenda:
VMUG Intro
VMware Presentation: TBA
Dell EMC Presentation: Protecting Your Critical Assets With Dell EMC
Q&A
Refreshments and drinks.
Dell EMC has gone to great lengths to make sure this will be a fun and informative session and I’m really looking forward to hearing about its data protection portfolio. You can find out more information and register for the event here. I hope to see you there. Also, if you’re interested in sponsoring one of these events, please get in touch with me and I can help make it happen.
Disclaimer: I recently attended Dell Technologies World 2019. My flights, accommodation and conference pass were paid for by Dell Technologies via the Media, Analysts and Influencers program. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event. Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.
Here’s a quick post with links to the other posts I did surrounding Dell Technologies World 2019, as well as links to other articles I found interesting.
Product Announcements
Here are the posts I did covering the main product-related announcements from the show.
I had a busy but enjoyable week. I would have liked to get to some of the technical breakout sessions, but being given access to some of the top executives in the company via the Media, Analysts and Influencers program was invaluable. Thanks again to Dell Technologies (particularly Debbie Friez and Konnie) for having me along to the show. And big thanks to Stephen and the Tech Field Day team for having me along to the Tech Field Day event as well.