Cisco Introduces HyperFlex 4.5

Disclaimer: I recently attended Storage Field Day 20.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Cisco presented a sneak preview of HyperFlex 4.5 at Storage Field Day 20 a little while ago. You can see videos of the presentation here, and download my rough notes from here. Note that this preview was done some time before the product was officially announced, so there may be a few things that did or didn’t make it into the final product release.

 

Announcing HyperFlex 4.5

4.5: Meat and Potatoes

So what are the main components of the 4.5 announcement?

  • iSCSI Block storage
  • N:1 Edge data replication
  • New edge platforms / SD-WAN
  • HX Application Platform (KVM)
  • Intersight K8s Service
  • Intersight Workload Optimizer

Other Cool Stuff

  • HX Boost Mode – virtual CPU configuration change in HX controller VM, the boost is persistent (scale up).
  • ESXi & vCenter 7.0 support (6.0 is EoS), along with the HX native HTML5 vCenter plugin (available since HX 4.0)
  • Secure Boot – protect the hypervisor against bootloader attacks with secure boot anchored in Cisco hardware root of trust
  • Hardened SDS Controller – reduce the attack surface and mitigate against compromised admin credentials

The HX240 Short Depth nodes have been available since HX 4.0, but there’s now a new Edge option – the HX240 Edge. This is a new 2RU form factor option for HX Edge (2N / 3N / 4N), available in all-flash and hybrid configurations, with 1 or 2 sockets, up to 3TB RAM and 175TB capacity, and PCIe slots for dense GPUs.

 

iSCSI in HX 4.5(1a)

[image courtesy of Cisco]

iSCSI Topologies

[image courtesy of Cisco]

 

Thoughts and Further Reading

Some of the drama traditionally associated with HCI marketing seems to have died down now, and people have mostly stopped debating what it is or isn’t, and started focusing on what they can get from the architecture over more traditional infrastructure deployments. Hyperconverged has always had a good story when it comes to compute and storage, but the networking piece has proven problematic in the field.

When I think of Cisco HyperFlex I think of it as the little HCI solution that could. It doesn’t dominate industry conversation like some of the other vendors, but it’s certainly had an impact, in much the same way UCS has. I’ve been a big fan of Springpath for some time, and HyperFlex has taken a solid foundation and turned it into something even more versatile. I think the key thing to remember with HyperFlex is that it’s a networking company selling this stuff – a networking company that knows what’s up when it comes to connecting all kinds of infrastructure together.

The addition of iSCSI keeps the block storage crowd happy, and the new edge form-factor will have appeal for customers trying to squeeze these boxes into places they probably shouldn’t be going. I’m looking forward to seeing more HyperFlex from Cisco over the next 12 months, as I think it finally has a really good story to tell, particularly when it comes to integration with other Cisco bits and pieces.

East Coast (Virtual) VMUG – December 2020

hero_vmug_express_2011

The East Coast VMUGs are hosting a virtual event on December 10, powered by the Melbourne VMUG. The event starts at 4pm AEST (5pm AEDT). Details as follows:

 

MVMUG Leader Update – Jeremy Drossinis

  • recent changes to the MVMUG Committee.
  • announcement in regards to MVMUG’s end of year function.
  • updates from the Brisbane and Sydney VMUGs.
  • 2020, the year that was

 

Sponsor Session by Michael Lang, Solutions Architecture Manager, NVIDIA

Accelerated VDI Desktops

Virtual desktops in 2020 – with Windows 10, Office 365, a variety of browsers, and video and conferencing tools – created unprecedented challenges around scale, performance and, of course, user experience. With the increasing adoption of work from home, the pressure on IT to deliver has never been higher.

Come and find out how a vGPU accelerated solution can help you deliver a high quality Horizon desktop anywhere, any time and delight your users.

What workload elements make a modern VDI desktop and what are the challenges?

  • How does vGPU accelerate the Horizon desktop and ensure a good UX?
  • HW and SW elements of vGPU.
  • Sizing and deployment considerations.
  • Some example use cases (Education and FSI)
  • Q&A

One attendee will win a prize draw of NVIDIA goodies, consisting of an NVIDIA Star Wars branded , a  (great for office bragging rights) as well as a PC care kit with stickers, webcam cover and cleaning cloth.

 

Please note to enter the prize draw by NVIDIA the following apply:

  • You must attend the virtual meeting
  • In order to be in the running, you must complete a short sign up form, which will be shared by NVIDIA during their presentation.

VMware Session – to be announced!

NVIDIA has gone to great lengths to make sure this will be a fun and informative session and I’m really looking forward to hearing all about vGPU. You can find out more information and register for the event here. I hope to see you there. Also, if you’re interested in sponsoring one of these events, please get in touch with me and I can help make it happen.

Druva Update – Q3 2020

I caught up with my friend W. Curtis Preston from Druva a little while ago to talk about what the company has been up to. It seems like quite a bit, so I thought I’d share some notes here.

 

DXP and Company Update

Firstly, Druva’s first conference, DXP, is coming up shortly. There’s an interesting range of topics and speakers, and it looks to be jam packed with useful info. You can find out more and register for that here. The company seems to be going from strength to strength, enjoying 50% year-on-year growth, and 70% for Phoenix in particular (its DC product).

If you’re into Gartner Peer Insights – Druva has taken out the top award in 3 categories – file analysis, DRaaS, and data centre backup. Preston also tells me Druva is handling around 5 million backups a day, for what it’s worth. Finally, if you’re into super fluffy customer satisfaction metrics, Druva is reporting an “industry-leading NPS score of 88” that has been third-party verified.

 

Product News

It’s Fun To Read The CCPA

If you’re unfamiliar, California has released its version of the GDPR, known as the California Consumer Privacy Act. Druva has created a template for data types that shouldn’t be stored in plain text and can flag them as they’re backed up. It can also do the same thing in email, and you can now do a federated search against both of these things. If anything turns up that shouldn’t be there, you can go and remove the problematic files.
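To give a rough sense of how pattern-based flagging like this works, here’s a minimal sketch in Python. The data types and regexes are invented for the example – they’re not Druva’s actual template:

```python
import re

# Hypothetical sensitive data types – illustrative only, not Druva's template.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def flag_sensitive(text):
    """Return the names of any sensitive data types found in the text."""
    return sorted(name for name, rx in PATTERNS.items() if rx.search(text))
```

A backup pipeline would run something like this over file contents as they’re ingested, and surface any flagged documents for review.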

ServiceNow Automation

Druva now has support for automated SNOW ticket creation. It’s based on some advanced logic, too. For example, if a backup fails 3 times, a ticket will be created and can be routed to the people who should be caring about such things.
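As a rough illustration of that kind of logic, a threshold-based ticket trigger might look something like this – the `create_ticket` callback and the counting approach are assumptions for the example, not Druva’s actual implementation:

```python
# Threshold-based ticketing sketch; the create_ticket callback stands in for
# a real ServiceNow integration and is an assumption for this example.
FAILURE_THRESHOLD = 3

def handle_backup_result(failure_counts, job_id, succeeded, create_ticket):
    """Track consecutive failures per job and raise a ticket on the third."""
    if succeeded:
        failure_counts[job_id] = 0
        return
    failure_counts[job_id] = failure_counts.get(job_id, 0) + 1
    if failure_counts[job_id] == FAILURE_THRESHOLD:
        create_ticket(f"Backup job {job_id} has failed {FAILURE_THRESHOLD} times in a row")
```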

More APIs

There’s been a lot of work done to deliver more APIs, along with a more robust RBAC implementation.

DRaaS

DRaaS is currently only for VMware, VMC, and AWS-based workloads. Preston tells me that users are getting an RTO of 15-20 minutes, and an RPO of 1 hour. Druva added failback support a little while ago (one VM at a time). That feature has now been enhanced, and you can failback as many workloads as you want. You can also add a prefix or suffix to a VM name, and Druva has added a failover prerequisite check as well.

 

Other Notes

In other news, Druva is now certified on VMC on Dell. It’s added support for Microsoft Teams and support for Slack. Both useful if you’ve stopped storing your critical data in email and started storing it in collaboration apps instead.

Storage Insights and Recommendations

There’s also a storage insights feature that is particularly good for unstructured data. If, for example, 30% of your backups are media files, you might not want to back them up (unless you’re in the media streaming business, I guess). You can delete bad files from backups, and automatically create an exclusion for those file types.
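A toy version of that kind of insight might look like this – the media extensions and the 30% threshold are assumptions for illustration, not product defaults:

```python
# Illustrative only: estimate how much of a backup set is media by size, and
# suggest an exclusion list when it crosses a threshold. The extensions and
# the 30% cut-off are assumptions, not product defaults.
MEDIA_EXTS = {".mp4", ".mov", ".mkv", ".mp3"}

def media_share(files):
    """files: list of (name, size_bytes) tuples; returns media fraction by size."""
    total = sum(size for _, size in files)
    media = sum(size for name, size in files
                if any(name.lower().endswith(ext) for ext in MEDIA_EXTS))
    return media / total if total else 0.0

def exclusions_if_heavy(files, threshold=0.30):
    """Suggest excluding media extensions when they dominate the backup set."""
    return sorted(MEDIA_EXTS) if media_share(files) > threshold else []
```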

Support for K8s

Support for everyone’s favourite container orchestration system has been announced, but not yet released. Read about that here. You can now do a full backup of an entire K8s environment (AWS only in v1), including Docker containers, mounted volumes, and DBs referenced in those containers.

NAS Backup

Druva has enhanced its NAS backup in two ways, the first of which is performance. Preston tells me the current product is at least 10X faster than it was a year ago. Also, for customers already using a native recovery mechanism such as snapshots, Druva has added the option to back up directly to Glacier, which cuts the cost in half.

Oracle Support

For Oracle, Druva has what Preston describes as “two solid options”. Right now there’s an OVA that provides a ready-to-go, appliance-like experience and uses the image copy format (supporting block-level incrementals and incremental merge). The other option will be announced next week at DXP.

 

Thoughts and Further Reading

Some of these features seem like incremental improvements, but when you put it all together, it makes for some impressive reading. Druva has done a really solid job, in my opinion, of sticking with the “built in the cloud, for the cloud” mantra that dominates much of its product design. The big news is the support for K8s, but things like multi-VM failback with the DRaaS solution are nothing to sneeze at. There’s more news coming shortly, and I look forward to covering that. In the meantime, if you have the time, be sure to check out DXP – I think it will be quite an informative event.

 

 

Random Short Take #46

Welcome to Random Short Take #46. Not a great many players have worn 46 in the NBA, but one player who has is one of my favourite Aussie players: Aron “Bangers” Baynes. So let’s get random.

  • Enrico recently attended Cloud Field Day 9, and had some thoughts on NetApp’s identity in the new cloud world. You can read his insights here.
  • This article from Chris Wahl on multi-cloud design patterns was fantastic, and well worth reading.
  • I really enjoyed this piece from Russ on technical debt, and some considerations when thinking about how we can “future-proof” our solutions.
  • The Raspberry Pi 400 was announced recently. My first computer was an Amstrad CPC 464, so I have a real soft spot for jamming computers inside keyboards.
  • I enjoyed this piece from Chris M. Evans on hybrid storage, and what it really means nowadays.
  • Working from home a bit this year? Me too. Tom wrote a great article on some of the security challenges associated with the new normal.
  • Everyone has a quadrant nowadays, and Zerto has found itself in another one recently. You can read more about that here.
  • Working with VMware Cloud Director and wanting to build a custom theme? Check out this article.

ANZ VMUG Virtual Event – November 2020

hero_vmug_express_2011

The November edition of the Brisbane VMUG meeting is a special one – we’re doing a joint session with a number of the other VMUG chapters in Australia and New Zealand. It will be held on Tuesday 17th November on Zoom from 3pm – 5pm AEST. It’s sponsored by Google Cloud for VMware and promises to be a great afternoon.

Here’s the agenda:

  • VMUG Intro
  • VMware Presentation: VMware SASE
  • Google Presentation: Google Cloud VMware Engine Overview
  • Q&A

Google Cloud has gone to great lengths to make sure this will be a fun and informative session and I’m really looking forward to hearing about Google Cloud VMware Engine. You can find out more information and register for the event here. I hope to see you there. Also, if you’re interested in sponsoring one of these events, please get in touch with me and I can help make it happen.

Random Short Take #45

Welcome to Random Short Take #45. The number 45 has taken a bit of a beating in terms of popularity in recent years, but a few pretty solid players have nonetheless worn 45 in the NBA, including MJ and The Rifleman. My favourite from this list is A.C. Green (“slam so hard, break your TV screen“). So let’s get random.

StorCentric Announces Data Mobility Suite

StorCentric recently announced its Data Mobility Suite (DMS). I had the opportunity to talk to Surya Varanasi (StorCentric CTO) about the news, and thought I’d share some of my notes here.

 

What Is It?

DMS is being positioned as a suite of “data cloud services” by StorCentric, with a focus on:

  • Data migration;
  • Data consistency; and
  • Data operation.

It has the ability to operate across heterogeneous storage, clouds, and protocols. It’s a software solution based on subscription licensing and uses a policy-driven engine to manage data in the enterprise. It can run on bare metal or as a VM appliance. Object storage platform / cloud support is fairly robust, with AWS, Backblaze B2, and Wasabi, amongst others, all being supported.

[image courtesy of StorCentric]

Use Cases

There are a number of scenarios where a solution like DMS makes sense. You might have a bunch of NFS storage on-premises, for example, and want to move it to a cloud storage target using S3. Another use case cited involved collaboration across multiple sites, with the example being a media company creating content in three places, working in different time zones, and wanting to move the data back to a centralised location.

Big Ideas

Speaking to StorCentric about the announcement, it was clear that there’s a lot more on the DMS roadmap. Block storage is something the team wants to tackle, and they’re also looking to deliver analytics and ransomware alerting. There’s also a strong desire to provide governance as well. For example, if I want to copy some data somewhere and keep it for 10 years, I’ll configure DMS to take care of that for me.
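As a hedged sketch of what such a “copy and keep for 10 years” policy might capture, here’s an illustrative example – the field names and schema are invented for the sake of the example, not DMS’s actual policy format:

```python
from datetime import date, timedelta

# Invented policy schema – DMS's real configuration will differ.
def make_retention_policy(source, target, years):
    """Describe a copy job with a multi-year retention horizon."""
    return {
        "source": source,
        "target": target,
        "retain_until": date.today() + timedelta(days=365 * years),
    }

# Hypothetical endpoints for illustration.
policy = make_retention_policy("nfs://filer01/projects", "s3://archive-bucket", 10)
```

The appeal of the policy-driven approach is that once something like this is defined, the engine owns the copy and the retention clock, rather than an operator having to remember to do it.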

 

Thoughts and Further Reading

Data management means a lot of things to a lot of people. Storage companies often focus on moving blocks and files from one spot to another, but don’t always do a solid job of capturing why data needs to be stored where it does, or how, for that matter. There’s a lot more to data management than keeping ones and zeroes in a safe place: it’s about understanding the value of your data, and understanding where it needs to be to deliver the most value to your organisation. Whilst it seems like DMS is focused primarily on moving data from one spot to another, there’s plenty of potential here to develop a broader story in terms of data governance and mobility. There’s built-in security, and the ability to apply levels of data governance to data in various locations. The greater appeal here is also the ability to automate the movement of data to different places based on policy. This policy-driven approach becomes really interesting when you start to look at complicated collaboration scenarios, or need to do something smart with replication or data migration.

Ultimately, there are a bunch of different ways to get data from one point to another, and a bunch of different reasons why you might need to do that. The value in something like DMS is the support for heterogeneous storage platforms, as well as the simple-to-use GUI. Plenty of data migration tools come with extremely versatile command line interfaces and API support, but the trick is delivering an interface that is both intuitive and simple to navigate. It’s also nice to have a few different use cases met with one tool, rather than having to reach into the bag a few different times to solve very similar problems. StorCentric has a lot of plans for DMS moving forward, and if those plans come to fruition it’s going to form a very compelling part of the typical enterprise’s data management toolkit. You can read the press release here.

Zerto Announces 8.5 and Zerto Data Protection

Zerto recently announced 8.5 of its product, along with a new offering, Zerto Data Protection (ZDP). I had the good fortune to catch up with Caroline Seymour (VP, Product Marketing) about the news and thought I’d share some thoughts here.

 

ZDP, Yeah You Know Me

Global Pandemic for $200 Please, Alex

In “these uncertain times”, organisations are facing new challenges

  • No downtime, no data loss, 24/7 availability
  • Influx of remote work
  • Data growth and sprawl
  • Security threats
  • Acceleration of cloud

Many of these things were already a problem, and the global pandemic has done a great job highlighting them.

“Legacy Architecture”

Zerto paints a bleak picture of the “legacy architecture” adopted by many traditional data protection solutions, positing that many IT shops need to use a variety of tools to get to a point where operations staff can sleep better at night. Disaster recovery, for example, is frequently handled via replication for mission-critical applications, with backup being performed via periodic snapshots for all other applications. ZDP aims to bring all this together under one banner of continuous data protection, delivering:

  • Local continuous backup and long-term retention (LTR) to public cloud; and
  • Pricing optimised for backup.

[image courtesy of Zerto]

Features

[image courtesy of Zerto]

So what do you get with ZDP? Some neat features, including:

  • Continuous backup with journal
  • Instant restore from local journal
  • Application consistent recovery
  • Short-term SLA policy settings
  • Intelligent index and search
  • LTR to disk, object or Cloud (Azure, AWS)
  • LTR policies, daily incremental with weekly, monthly or yearly fulls
  • Data protection workflows
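To illustrate how a “daily incrementals with weekly, monthly or yearly fulls” LTR policy might decide what to run on a given day, here’s a simple sketch (the precedence of yearly over monthly over weekly fulls is an assumption for the example, not Zerto’s documented behaviour):

```python
from datetime import date

# Precedence (yearly > monthly > weekly > daily) is an assumption here.
def backup_type(d):
    """Decide what kind of backup a simple LTR schedule would take on day d."""
    if d.month == 1 and d.day == 1:
        return "yearly-full"
    if d.day == 1:
        return "monthly-full"
    if d.weekday() == 6:  # Sunday
        return "weekly-full"
    return "daily-incremental"
```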

 

New Licensing

It wouldn’t be a new software product without some mention of new licensing. If you want to use ZDP, you get:

  • Backup for short-term retention and LTR;
  • On-premises or backup to cloud;
  • Analytics; and
  • Orchestration and automation for backup functions.

If you’re sticking with (the existing) Zerto Cloud Edition, you get:

  • Everything in ZDP;
  • Disaster Recovery for on-premises and cloud;
  • Multi-cloud support; and
  • Orchestration and automation.

 

Zerto 8.5

A big focus of Zerto’s recently has been VMware on public cloud support, including the various flavours of VMware on Azure, AWS, and Oracle Cloud. There are a bunch of reasons why this approach has proven popular with existing VMware customers looking to migrate from on-premises to public cloud, including:

  • Native VMware support – run existing VMware workloads natively on IaaS;
  • Policies and configuration don’t need to change;
  • Minimal changes – no need to refactor applications; and
  • IaaS benefits – reliability, scale, and operational model.

[image courtesy of Zerto]

New in 8.5

With 8.5, you can now back up directly to Microsoft Azure and AWS. You also get instant file and folder restores to production. There’s now support for VMware on public cloud disaster recovery and data protection, covering Microsoft Azure VMware Solution, Google Cloud VMware Engine, and the Oracle Cloud VMware Solution. You also get platform automation and lifecycle management features, including:

  • Auto-evacuate for recovery hosts;
  • Auto-populate for recovery hosts; and
  • Encryption capabilities.

And finally, a Zerto PowerShell Cmdlets Module has also been released.

 

Thoughts and Further Reading

The writing’s been on the wall for some time that Zerto might need to expand its solution offering to incorporate backup and recovery. Continuous data protection is a great feature and my experience with Zerto has been that it does what it says on the tin. The market, however, is looking for ways to consolidate solution offerings in order to save a few more dollarydoos and keep the finance department happy. I haven’t seen the street pricing for ZDP, but Seymour seemed confident that it stacks up well against the more traditional data protection options on the market, particularly when compared against offerings that incorporate components that deal with CDP and periodic data protection with different tools. There’s a new TCO calculator on the Zerto website, and there’s also the opportunity to talk to a Zerto account representative about your particular needs.

I’ve always treated regular backup and recovery and disaster recovery as very different things, mainly because they are. Companies frequently make the mistake of trying to cobble together some kind of DR solution using traditional backup and recovery tools. I’m interested to see how Zerto goes with this approach. It’s not the first company to converge elements that fit in the data protection space together, and it will be interesting to see how much of the initial uptake of ZDP is with existing customers or net new logos. The broadening of support for the VMware on X public cloud workloads is good news for enterprises too (putting aside my thoughts on whether or not that’s a great long term strategy for said enterprises). There’s some interesting stuff happening, and I’m looking forward to seeing how the story unfolds over the next 6 – 12 months.

Quobyte Announces 3.0

Quobyte recently announced Release 3.0 of its software. I had the opportunity to speak to Björn Kolbeck (Co-Founder and CEO) about the release, and thought I’d share some thoughts here.

 

About Quobyte

If you haven’t heard of Quobyte before, it was founded in 2013 by some ex-Googlers and HPC experts. The folks at Quobyte were heavily influenced by Google’s scale-out software model and wanted to bring that to the enterprise. Quobyte has had software in production since 2016 and has customers across a range of industry verticals, including financial services and media streaming. It’s not really object storage, more a parallel file system or, at a stretch, scale-out NAS.

 

The Tech

Kolbeck describes Quobyte as “storage for Generation Scale-Out” and says the company is focussed on “getting storage out of the ugly corner of specialised appliances”.

Unlimited Performance

  • Linear scaling delivers unlimited performance
  • No bottlenecks – scale from small to 1000s of servers
  • No more NFS – it’s part of the problem

Deploy Anywhere

  • True software storage runs anywhere – bare metal, containers, cloud
  • Almost any x86 server – no appliances

Unconditional Simplicity

  • Anyone can do storage, it’s just another Linux application
  • All in user space, installs in minutes

 

The Announcement

Free Edition

The first part of the announcement is that there’s a free edition (previously there was a 45-day trial on offer). It’s limited in terms of capacity, support, and file system clients, but could be useful in labs and smaller environments.

[image courtesy of Quobyte]

3.0 Release

The 3.0 release is also a big part of Quobyte’s news, with the new version delivering a bunch of new features, most of which are outlined below.

360 Security

  • Holistic data protection
  • End to end AES encryption (in transit / at rest / untrusted storage nodes)
  • Selective TLS support
  • Access keys for the file system
  • X.509 certificates
  • Event stream (metadata, file access)

Policy Engine

Powerful Policy Engine

  • For: Tenant, volume, file, client
  • Control: Layout, tiering, QoS, recoding, caching
  • Dynamic: Runtime re-configurable

Automated

  • Auto file layout: replication + EC and Flash + HDD
  • Auto selection of replication factor, EC schema
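As an illustration of what automated layout selection might involve, here’s a simplified sketch – the 64 MiB cut-off, the minimum node count, and the replication/EC schemas are assumptions for the example, not Quobyte’s actual heuristics:

```python
# Illustrative layout chooser: replication for small files or small clusters,
# erasure coding for large files on bigger clusters. Thresholds and schemas
# are assumptions, not Quobyte's actual heuristics.
EC_THRESHOLD = 64 * 1024 * 1024  # 64 MiB

def pick_layout(file_size, cluster_nodes):
    """Pick a storage layout for a file based on size and cluster width."""
    if file_size < EC_THRESHOLD or cluster_nodes < 6:
        return {"scheme": "replication", "copies": 3}
    return {"scheme": "erasure-coding", "data": 4, "parity": 2}
```

The general trade-off is real even if the numbers here aren’t: replication is cheap to read and repair for small or hot files, while erasure coding is far more capacity-efficient for large files once there are enough nodes to spread the stripes across.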

Self-Service

Quobyte is looking to deliver a “cloud-like experience” with its self-service capabilities.

Login for users

  • Manage access keys
  • Check resource consumption

Authenticate using access keys

  • S3
  • File system driver
  • K8s / CSI
  • User-space drivers: HDFS, TF, MPI-IO

Multi-Cluster

Data Mover

  • Bi-directional sync (eventual consistency)
  • Policy-based data tiering between clusters
  • Recoding

TLS between clusters

More Native Drivers

HDFS

MPI-IO

Benefit of kernel bypass

  • Lower latency
  • Less memory bandwidth

 

Thoughts and Further Reading

One of the challenges with software-defined storage is invariably the constraint that poor hardware choices can put on performance. Kolbeck acknowledged that Quobyte is “as fast as your hardware”. I asked him whether Quobyte provided guidance on hardware choices that worked well with the platform. There’s a bunch of recommended (and tested) hardware listed on this page. He did mention that whichever way you decide to go, it’s recommended to stick with either Mellanox or Broadcom NICs, due to issues observed with other vendors’ Linux drivers. There are also recommendations on the site for public cloud instance sizing covering AWS, GCP, and Oracle.

Quobyte is being deployed to support scale-out workloads in the enterprise across a number of sectors including financial services, life sciences, media and entertainment, and manufacturing in Europe and Asia. Kolbeck noted that one of the interesting things about the advent of smart everything is that “car manufacturers are suddenly in the machine learning field” and looking for new ways to support their businesses.

There are a lot of reasons to like software-defined storage offerings. You can generally run them on anything, and performance enhancements can frequently be had via code upgrades. That’s not to say that you don’t get that with the big box slingers, but the flexibility of hardware choice has tremendous appeal, particularly in the enterprise market where it can feel like the margin on commodity hardware can be exorbitant. Quobyte hasn’t been around forever, but the folks over there seem to have a pretty solid heritage in software-defined and scale-out storage solutions – a good sign if you’re in the market for a software-defined, scale-out storage solution. Some folks are going to rue the lack of NFS support, but I’m sure Kolbeck and the team would be happy to sit down and discuss with them why that’s no great loss. There’s some pretty cool stuff in this release, and the free edition is definitely worth taking for a spin. I’m looking forward to hearing more from Quobyte over the next little while.

StorONE Q3-2020 Update

StorONE recently announced details of its Q3-2020 software release. I had the opportunity to talk about the announcement with George Crump and thought I’d share some brief thoughts here.

 

Release Highlights

Performance Improvements

One of the key highlights of this release is a significant performance improvement for the platform, based purely on code optimisations. Crump tells me that customers with Intel Optane and NVMe SSDs will be extremely happy with what they see, while customers using higher-latency media such as hard disk drives will still see a performance improvement of 15 – 20%.

Data Protection

StorONE has worked hard on introducing some improved resilience for the platform as well, with two key features being made available:

  • vRack; and
  • vReplicate.

vRack provides the ability to split S1 storage across more than one rack (or row, for that matter) to mitigate any failures impacting the rack hosting the controllers and disk enclosures. You can now also set tolerance for faults at an enclosure level, not just a drive level.

[image courtesy of StorONE]

vReplicate extends S1:Replicate’s capabilities to provide cascading replication. You can now synchronously replicate between data centres or campus sites and then asynchronously send that data to another site, hundreds of kilometres away if necessary. Primary systems can be an All-Flash Array.next, traditional All-Flash Array, or a Hybrid Array, and the replication target can be an inexpensive hard disk only S1 system.

[image courtesy of StorONE]

There’s now full support for Volume Shadow Copy Service (VSS) for S1:Snap users.

 

Other Enhancements

Some of the other enhancements included with this release are:

  • Improved support for NVMe-oF (including the ability to simultaneously support iSCSI and FC along with NVMe);
  • Improved NAS capability, with support for quotas and NIS / LDAP; and
  • Downloadable stats for increased insights.

 

Thoughts

Some of these features might seem like incremental improvements, but this is an incremental release. I like the idea of supporting legacy connections while supporting the ability to add newer tech to the platform, and providing a way forward in terms of hardware migration. The vRack resiliency concept is also great, and a salient reminder that the ability to run this code on commodity hardware makes some of these types of features a little more accessible. I also like the idea of being able to download analytics data and do things with it to gain greater insights into what the system is doing. Sure, it’s an incremental improvement, but an important one nonetheless.

I’ve been a fan of the StorONE story for some time now (and not just because the team slings a few dollars my way to support the site every now and then). I think the key to much of StorONE’s success has been that it hasn’t gotten caught up trying to be a storage appliance vendor, and has instead focussed on delivering reliable code on commodity systems – resulting in a performance-oriented storage platform that continues to improve from a software perspective without being tied to a particular hardware platform. The good news, though, is that when new hardware becomes available (such as Optane), it’s not a massive problem to incorporate it into the solution.

StorONE has always talked a big game in terms of raw performance numbers, but I think it’s the addition of features such as vRack and improvements to the replication capability that really makes it a solution worth investigating. It doesn’t hurt that you can check the pricing calculator out for yourself before you decide to go down the path of talking to StorONE’s sales team. I’m looking forward to seeing what StorONE has in store in the next little while, as I get the impression it’s going to be pretty cool. You can read details of the update here.