Random Short Take #61

Welcome to Random Short Take #61.

  • VMworld is on this week. I still find the virtual format (and timezones) challenging, and I miss the hallway track and the jet lag. There’s nonetheless some good news coming out of the event. One thing that was announced prior to the event was Tanzu Community Edition. William Lam talks more about that here.
  • Speaking of VMworld news, Viktor provided a great summary on the various “projects” being announced. You can read more here.
  • I’ve been a Mac user for a long time, and there’s stuff I’m learning every week via Howard Oakley’s blog. Check out this article covering the Recovery Partition. While I’m at it, this presentation he did on Time Machine is also pretty ace.
  • Facebook had a little problem this week, and the Cloudflare folks have provided a decent overview of what happened. As someone who works for a service provider, this kind of stuff makes me twitchy.
  • Fibre Channel? Cloud? Chalk and cheese? Maybe. Read Chin-Fah’s article for some more insights. Personally, I miss working with FC, but I don’t miss the arguing I had to do with systems and networks people when it came to the correct feeding and watering of FC environments.
  • Remote working has been a challenge for many organisations, with some managers not understanding that their workers weren’t just watching streaming video all day, but actually being more productive. Not everything needs to be a video call, however, and this post / presentation has a lot of great tips on what does and doesn’t work with distributed teams.
  • I’ve had to ask this question before. And Jase has apparently had to answer it too, so he’s posted an article on vSAN and external storage here.
  • This is the best response to a trio of questions I’ve read in some time.

CTERA – Storage The Way Your Users Want It

Disclaimer: I recently attended Storage Field Day 22. Some expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event. Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

CTERA recently presented at Storage Field Day 22. You can see their videos from Storage Field Day 22 here, and download a PDF copy of my rough notes from here.

 

CTERA?

In a nutshell, CTERA is:

  • Enterprise NAS over Object
  • 100% Private
  • Multi-cloud, hybrid consistent
  • Delivers data placement policy and mobility
  • Caching, not tiering
  • Zero-trust

 

The Problem

So what’s the problem we’re trying to solve with unstructured data?

  • Every IT environment is hybrid
  • More data is being generated at the edge
  • Workload placement strategies are driving storage placement
  • Storage must be instrumented and accessible anywhere

Seems simple enough, but edge storage is hard to get right.

[image courtesy of CTERA]

What Else Do You Want?

We want a lot from our edge storage solutions, including the ability to:

  • Migrate data to cloud, while keeping a fast local cache
  • Connect branches and users over a single namespace
  • Enjoy a HQ-grade experience regardless of location
  • Achieve 80% cost savings with global dedupe and cloud economics

 

The Solution?

CTERA Multi-cloud Global File System – a “software-defined file over object with distributed SMB/NFS edge caching and endpoint collaboration”.

[image courtesy of CTERA]

CTERA Architecture

  • Single namespace connecting HQ, branches and users with ACL support
  • Object-native backend with cache accelerated access for remote sites
  • Multi-cloud scale-out to customer’s private or public infrastructure
  • Source-based encryption and global deduplication
  • Multi-tenant administration scalable to thousands of sites
  • Data management ecosystem for content security, analytics and DevOps automation

[image courtesy of CTERA]

Use Cases?

  • NAS Modernisation – Hybrid Edge Filer, Object-based Filesystem, Elastic Scaling, Built-in Backup & DR
  • Remote Workforce – Endpoint Sync, Share, Backup & Cached Drive; Distributed VDI Clusters; Small-Form-Factor Filer; Mobile Collaboration
  • Media – Large Dataset Handling, Ultra-Fast Cloud Sync, macOS Experience, Cloud Streaming
  • Multi-site Collaboration – Global File System, Distributed Sync, Scalable Central Management
  • Edge Data Processing – Integrated HCI Filers, Distributed Data Analysis, Machine-Generated Data
  • Container-native – Global File System Across Distributed Kubernetes Clusters and Tethered Cloud Services

 

Thoughts and Further Reading

It should come as no surprise that people expect data to be available to them everywhere nowadays. And that’s not just sync and share solutions or sneaker net products on USB drives. No, folks want to be able to access corporate data in a non-intrusive fashion. It gets worse for the IT department though, because your end users aren’t just “heavy spreadsheet users”. They’re also editing large video files, working on complicated technical design diagrams, and generating gigabytes of log files for later analysis. And it’s not enough to say “hey, can you download a copy and upload it later and hope that no-one else has messed with the file?”. Users are expecting more from their systems. There are a variety of ways to deal with this problem, and CTERA seems to have provided a fairly robust solution, with many ways of accessing data, collaborating, and storing data in the cloud and at the edge. The focus isn’t limited to office automation data, with key verticals such as media and entertainment, healthcare, and financial services all having solutions suited to their particular requirements.

CTERA’s Formula One slide is impressive, as is the variety of ways it works to help organisations address the explosion of unstructured data in the enterprise. With large swathes of knowledge workers now working more frequently outside the confines of the head office, these kinds of solutions are only going to become more in demand, particularly those that can leverage cloud in an effective (and transparent) fashion. I’m excited to see what’s to come with CTERA. Check out Ray’s article for a more comprehensive view of what CTERA does.

Random Short Take #60

Welcome to Random Short Take #60.

  • VMware Cloud Director 10.3 went GA recently, and this post will point you in the right direction when it comes to planning the upgrade process.
  • Speaking of VMware products hitting GA, VMware Cloud Foundation 4.3 became available about a week ago. You can read more about that here.
  • My friend Tony knows a bit about NSX-T, and certificates, so when he bumped into an issue with NSX-T and certificates in his lab, it was no big deal to come up with the fix.
  • Here’s everything you wanted to know about creating an external bootable disk for use with macOS 11 and 12 but were too afraid to ask.
  • I haven’t talked to the good folks at StarWind in a while (I miss you Max!), but this article on the new All-NVMe StarWind Backup Appliance by Paolo made for some interesting reading.
  • I loved this article from Chin-Fah on storage fear, uncertainty, and doubt (FUD). I’ve seen a fair bit of it slung about having been a customer and partner of some big storage vendors over the years.
  • This whitepaper from Preston on some of the challenges with data protection and long-term retention is brilliant and well worth the read.
  • Finally, I don’t know how I came across this article on hacking Playstation 2 machines, but here you go. Worth a read if only for the labels on some of the discs.

Cohesity DataProtect Delivered As A Service – SaaS Connector

I recently wrote about my experience with Cohesity DataProtect Delivered as a Service. One thing I didn’t really go into in that article was the networking and resource requirements for the SaaS Connector deployment. It’s nothing earth-shattering, but I thought it was worthwhile noting nonetheless.

In terms of the VM that you deploy for each SaaS Connector, it has the following system requirements:

  • 4 CPUs
  • 10 GB RAM
  • 20 GB disk space (100 MB/s throughput, 100 IOPS)
  • Outbound Internet connection

In terms of scalability, the advice from Cohesity at the time of writing is to deploy “one SaaS Connector for each 160 VMs or 16 TB of source data. If you have more data, we recommend that you stagger their first full backups”. Note that this is subject to change. The outbound Internet connectivity is important. You’ll (hopefully) have some kind of firewall in place, so the following ports need to be open.

| Port | Protocol | Target | Direction (from Connector) | Purpose |
|---|---|---|---|---|
| 443 | TCP | helios.cohesity.com | Outgoing | Connection used for control path |
| 443 | TCP | helios-data.cohesity.com | Outgoing | Used to send telemetry data |
| 22, 443 | TCP | rt.cohesity.com | Outgoing | Support channel |
| 11117 | TCP | *.dmaas.helios.cohesity.com | Outgoing | Connection used for data path |
| 29991 | TCP | *.dmaas.helios.cohesity.com | Outgoing | Connection used for data path |
| 443 | TCP | *.cloudfront.net | Outgoing | To download upgrade packages |
| 443 | TCP | *.amazonaws.com | Outgoing | For S3 data traffic |
| 123, 323 | UDP | ntp.google.com or internal NTP | Outgoing | Clock sync |
| 53 | TCP & UDP | 8.8.8.8 or internal DNS | Bidirectional | Host resolution |
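If you need to turn those requirements into a firewall change request, it can help to hold them as data and group destinations by port. The sketch below is my own illustration (not a Cohesity tool); the rules are transcribed from the table above.

```python
from collections import defaultdict

# Egress rules from the table above: (ports, protocol, target, purpose).
RULES = [
    ((443,), "TCP", "helios.cohesity.com", "control path"),
    ((443,), "TCP", "helios-data.cohesity.com", "telemetry"),
    ((22, 443), "TCP", "rt.cohesity.com", "support channel"),
    ((11117,), "TCP", "*.dmaas.helios.cohesity.com", "data path"),
    ((29991,), "TCP", "*.dmaas.helios.cohesity.com", "data path"),
    ((443,), "TCP", "*.cloudfront.net", "upgrade packages"),
    ((443,), "TCP", "*.amazonaws.com", "S3 data traffic"),
    ((123, 323), "UDP", "ntp.google.com or internal NTP", "clock sync"),
    ((53,), "TCP & UDP", "8.8.8.8 or internal DNS", "host resolution"),
]

def targets_by_port(rules):
    """Group destination hosts by (port, protocol) for a firewall request."""
    grouped = defaultdict(set)
    for ports, proto, target, _purpose in rules:
        for port in ports:
            grouped[(port, proto)].add(target)
    return grouped

for (port, proto), targets in sorted(targets_by_port(RULES).items()):
    print(f"{port}/{proto}: {', '.join(sorted(targets))}")
```

Handy for spotting at a glance that most of the traffic collapses to a handful of 443/TCP destinations.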

Cohesity recommends that you deploy more than one SaaS Connector, and you can scale them out depending on the number of VMs / how much data you’re protecting with the service.
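The per-connector guidance (one per 160 VMs or 16 TB of source data, whichever bites first) makes the sizing arithmetic easy to sketch. This is my own back-of-the-envelope helper, not a Cohesity sizing tool, and the ratios are the ones quoted at the time of writing:

```python
import math

def connectors_needed(vm_count: int, source_tb: float,
                      vms_per_connector: int = 160,
                      tb_per_connector: float = 16.0) -> int:
    """Suggested SaaS Connector count: whichever constraint needs more,
    with a floor of one connector."""
    by_vms = math.ceil(vm_count / vms_per_connector)
    by_data = math.ceil(source_tb / tb_per_connector)
    return max(1, by_vms, by_data)

print(connectors_needed(200, 10))  # VM count dominates -> 2
print(connectors_needed(100, 40))  # source data dominates -> 3
```

Remember Cohesity also recommends more than one connector for resilience, so treat the result as a minimum.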

If you have concerns about bandwidth consumption, you can configure the bandwidth used by the SaaS Connector via Helios.

Navigate to Settings -> SaaS Connections and click on Bandwidth Usage Options. You can then add a rule.

You then schedule bandwidth usage, potentially for quiet times (particularly useful in small environments where Internet connections may be shared with end users). There’s support for upload and download traffic, and multiple schedules as well.
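Conceptually a schedule rule is just a day set, a time window, and a cap. The following is a hypothetical sketch of that logic (the field names are mine, not the Helios API):

```python
from datetime import datetime, time

# Hypothetical rule: cap uploads to 50 Mbit/s during weekday business hours,
# leaving evenings and weekends unthrottled.
BW_RULES = [
    {"days": {0, 1, 2, 3, 4},            # Monday=0 .. Friday=4
     "start": time(8, 0), "end": time(18, 0),
     "limit_mbps": 50},
]

def upload_limit(now: datetime, rules=BW_RULES):
    """Return the cap (Mbit/s) in effect at `now`, or None if unthrottled."""
    for rule in rules:
        if now.weekday() in rule["days"] and rule["start"] <= now.time() < rule["end"]:
            return rule["limit_mbps"]
    return None

print(upload_limit(datetime(2021, 10, 6, 9, 30)))   # Wednesday morning -> 50
print(upload_limit(datetime(2021, 10, 6, 20, 0)))   # Wednesday evening -> None
```

Helios supports separate upload and download rules and multiple schedules; the sketch would just carry a list per direction.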

And that’s pretty much it. Once you have your SaaS Connectors deployed you can monitor everything from Helios.

 

Cohesity DataProtect Delivered As A Service – A Few Notes

As part of a recent vExpert giveaway the folks at Cohesity gave me a 30-day trial of the Cohesity DataProtect Delivered as a Service offering. This is a component of Cohesity’s Data Management as a Service (DMaaS) offering and, despite the slightly unwieldy name, it’s a pretty neat solution. I want to be clear that it’s been a little while since I had any real stick time with Cohesity’s DataProtect offering, and I’m looking at this in a friend’s home lab, so I’m making no comments or assertions regarding the performance of the service. I’d also like to be clear that I’m not making any recommendation one way or another with regards to the suitability of this service for your organisation. Every organisation has its own requirements and it’s up to you to determine whether this is the right thing for you.

 

Overview

I’ve added a longer article here that explains the setup process in more depth, but here’s the upshot of what you need to do to get up and running. In short, you sign up, select the region you want to back up workloads to, configure your SaaS Connectors for the particular workloads you’d like to protect, and then go nuts. It’s really pretty simple.

Workloads

In terms of supported workloads, the following environments are currently supported:

  • Hypervisors (VMware and Hyper-V);
  • NAS (generic SMB and NFS, Isilon, and NetApp);
  • Microsoft SQL Server;
  • Oracle;
  • Microsoft 365;
  • Amazon AWS; and
  • Physical hosts.

This list will obviously grow as support for particular workloads in DataProtect and Helios improves over time.

Regions

The service is currently available in seven AWS Regions:

  • US East (Ohio)
  • US East (N. Virginia)
  • US West (Oregon)
  • US West (N. California)
  • Canada (Central)
  • Asia Pacific (Sydney)
  • Europe (Frankfurt)

You’ve got some flexibility in terms of where you store your data, but it’s my understanding that the telemetry data (i.e. Helios) goes to one of the US East Regions. It’s also important to note that once you’ve put data in a particular Region, you can’t then move that data to another Region.

Encryption

Data is encrypted in-flight and at rest, and you have a choice of KMS solutions (Cohesity-managed or DIY AWS KMS). Note that once you choose a KMS, you cannot change your mind. Well, you can, but you can’t do anything about it.

 

Thoughts

Data protection as a service offerings are proving increasingly popular with customers, data protection vendors, and service providers. The appeal for the punters is that they can apply some of the same thinking to protecting their investment in their cloud as they did to standing it up in the first place. The appeal for the vendors and SPs is that they can deliver service across a range of platforms without shipping tin anywhere, and build up annuity business as well.

With regards to this particular solution, it still has some rough edges, but it’s great to see just how much can already be achieved. As I mentioned, it’s been a while since I had some time with DataProtect, and some of the usability and functionality of both it and Helios has really come along in leaps and bounds. And the beauty of this being a vendor-delivered as a Service offering is that features can be rolled out on a frequent basis, rather than waiting for quarterly improvements to arrive via regularly scheduled software maintenance releases. Once you get your head around the workflow, things tend to work as expected, and it was fairly simple to get everything set up and working in a short period of time.

This isn’t for everyone, obviously. If you’re not a fan of doing things in AWS, then you’re really not going to like how this works. And if you don’t operate near one of the currently supported Regions, then the tyranny of bandwidth (i.e. physics) may prevent reasonable recovery times from being achievable for you. It might seem a bit silly, but these are nonetheless things you need to consider when looking at adopting a service like this. It’s also important to think about the security posture of these kinds of services. Sure, things are encrypted, and you can use MFA with Helios, but folks outside the US sometimes don’t really dig the idea of any of their telemetry data living in the US. Sure, it’s a little bit tinfoil hat, but you’d be surprised how often it comes up. And it should be noted that this is the same for on-premises Cohesity solutions using Helios. Then again, Cohesity is by no means alone in sending telemetry data back for support and analysis purposes. It’s fairly common, and something your infosec team will likely already know how to deal with.

If you’re fine with that (and you probably should be), and looking to move away from protecting your data with on-premises solutions, or looking for something that gives you some flexible deployment and management options, this could be of interest. As I mentioned, the beauty of SaaS-based solutions is that they’re more frequently updated by the vendor with fixes and features. Plus you don’t need to do a lot of the heavy lifting in terms of care and feeding of the environment. You’ll also notice that this is the DataProtect component, and I imagine that Cohesity has plans to fill out the Data Management part of the solution more thoroughly in the future. If you’d like to try it for yourself, I believe there’s a trial you can sign up for. Finally, thanks to the Cohesity TAG folks for the vExpert giveaway and making this available to people like me.

Random Short Take #53

Welcome to Random Short Take #53. A few players have worn 53 in the NBA including Mark Eaton, James Edwards, and Artis Gilmore. My favourite though was Chocolate Thunder, Darryl Dawkins. Let’s get random.

  • I love Preston’s series of articles covering the basics of backup and recovery, and this one on backup lifecycle is no exception.
  • Speaking of data protection, Druva has secured another round of funding. You can read Mellor’s thoughts here, and the press release is here.
  • More data protection press releases? I’ve got you covered. Zerto released one recently about cloud data protection. Turns out folks like cloud when it comes to data protection. But I don’t know that everyone has realised that there’s some work still to do in that space.
  • In other press release news, Cloud Propeller and Violin Systems have teamed up. Things seem to have changed a bit at Violin Systems since StorCentric’s acquisition, and I’m interested to see how things progress.
  • This article on some of the peculiarities associated with mainframe deployments in the old days by Anthony Vanderwerdt was the most entertaining thing I’ve read in a while.
  • Alastair has been pumping out a series of articles around AWS principles, and this one on understanding your single points of failure is spot on.
  • Get excited! VMware Cloud Director 10.2.2 is out now. Read more about that here.
  • A lot of people seem to think it’s no big thing to stretch Layer 2 networks. I don’t like it, and this article from Ethan Banks covers a good number of reasons why you should think again if you’re that way inclined.

Random Short Take #52

Welcome to Random Short Take #52. A few players have worn 52 in the NBA including Victor Alexander (I thought he was getting dunked on by Shawn Kemp but it was Chris Gatling). My pick is Greg Oden though. If only his legs were the same length. Let’s get random.

  • Penguin Computing and Seagate have been doing some cool stuff with the Exos E 5U84 platform. You can read more about that here. I think it’s slightly different to the AP version that StorONE uses, but I’ve been wrong before.
  • I still love Fibre Channel (FC), as unhealthy as that seems. I never really felt the same way about FCoE though, and it does seem to be deader than tape.
  • VMware vSAN 7.0 U2 is out now, and Cormac dives into what’s new here. If you’re in the ANZ timezone, don’t forget that Cormac, Duncan and Frank will be presenting (virtually) at the Sydney VMUG *soon*.
  • This article on data mobility from my preferred Chris Evans was great. We talk a lot about data mobility in this industry, but I don’t know that we’ve all taken the time to understand what it really means.
  • I’m a big fan of Tech Field Day, and it’s nice to see presenting companies take on feedback from delegates and putting out interesting articles. Kit’s a smart fellow, and this article on using VMware Cloud for application modernisation is well worth reading.
  • Preston wrote about some experiences he had recently with almost failing drives in his home environment, and raised some excellent points about resilience, failure, and caution.
  • Speaking of people I worked with briefly, I’ve enjoyed Siobhán’s series of articles on home automation. I would never have the patience to do this, but I’m awfully glad that someone did.
  • Datadobi appears to be enjoying some success, and have appointed Paul Repice to VP of Sales for the Americas. As the clock runs down on the quarter, I’m going two for one, and also letting you know that Zerto has done some work to enhance its channel program.

Random Short Take #47

Welcome to Random Short Take #47. Not a great many players have worn 47 in the NBA, but Andrei “AK-47” Kirilenko did. So let’s get random.

  • I’ve been doing some stuff with Runecast in my day job, so this post over at Gestalt IT really resonated.
  • I enjoyed this article from Alastair on AWS Design, and the mention of “handcrafted perfection” in particular has put an abrupt end to any yearning I’d be doing to head back into the enterprise fray.
  • Speaking of AWS, you can now hire Mac mini instances. Frederic did a great job of documenting the process here.
  • Liking VMware Cloud Foundation but wondering if you can get it via your favourite public cloud provider? Wonder no more with this handy reference from Simon Long.
  • Ransomware. Seems like everyone’s doing it. This was a great article on the benefits of the air gap approach to data protection. Remember, it’s not a matter of if, but when.
  • Speaking of data protection and security, BackupAssist Classic v11 launched recently. You can read the press release here.
  • Using draw.io but want to use some VVD stencils? Christian has the scoop here.
  • Speaking of VMware Cloud Director, Steve O has a handy guide on upgrading to 10.2 that you can read here.

Druva Update – Q3 2020

I caught up with my friend W. Curtis Preston from Druva a little while ago to talk about what the company has been up to. It seems like quite a bit, so I thought I’d share some notes here.

 

DXP and Company Update

Firstly, Druva’s first conference, DXP, is coming up shortly. There’s an interesting range of topics and speakers, and it looks to be jam packed with useful info. You can find out more and register for that here. The company seems to be going from strength to strength, enjoying 50% year-on-year growth, and 70% for Phoenix in particular (its DC product).

If you’re into Gartner Peer Insights – Druva has taken out the top award in 3 categories – file analysis, DRaaS, and data centre backup. Preston also tells me Druva is handling around 5 million backups a day, for what it’s worth. Finally, if you’re into super fluffy customer satisfaction metrics, Druva is reporting an “industry-leading NPS score of 88” that has been third-party verified.

 

Product News

It’s Fun To Read The CCPA

If you’re unfamiliar, California has released its version of the GDPR, known as the California Consumer Privacy Act. Druva has created a template for data types that shouldn’t be stored in plain text and can flag them as they’re backed up. It can also do the same thing in email, and you can now do a federated search against both of these things. If anything turns up that shouldn’t be there, you can go and remove problematic files.

ServiceNow Automation

Druva now has support for automated SNOW ticket creation. It’s based on some advanced logic, too. For example, if a backup fails 3 times, a ticket will be created and can be routed to the people who should be caring about such things.
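The "fail three times, then raise a ticket" logic is simple to picture. Here’s a hypothetical sketch of that trigger (my own illustration of the behaviour described, not Druva’s implementation or the ServiceNow API):

```python
from collections import Counter

FAILURE_THRESHOLD = 3  # open a ticket after three consecutive failures

class TicketTrigger:
    """Open a ServiceNow-style ticket once a job fails N times in a row."""

    def __init__(self, threshold: int = FAILURE_THRESHOLD):
        self.threshold = threshold
        self.failures = Counter()   # consecutive failures per job
        self.tickets = []

    def record(self, job: str, succeeded: bool):
        """Record a job result; return a ticket ID if one was just opened."""
        if succeeded:
            self.failures[job] = 0          # a success resets the streak
            return None
        self.failures[job] += 1
        if self.failures[job] == self.threshold:
            ticket = f"INC-{job}-{len(self.tickets) + 1}"
            self.tickets.append(ticket)     # route to the on-call backup queue
            return ticket
        return None                         # below threshold, or already raised
```

Firing only when the streak *equals* the threshold avoids opening a duplicate ticket on every subsequent failure, which is the behaviour you’d want when routing to a human queue.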

More APIs

There’s been a lot of work done to deliver more APIs, along with a more robust RBAC implementation.

DRaaS

DRaaS is currently only for VMware, VMC, and AWS-based workloads. Preston tells me that users are getting an RTO of 15-20 minutes, and an RPO of 1 hour. Druva added failback support a little while ago (one VM at a time). That feature has now been enhanced, and you can failback as many workloads as you want. You can also add a prefix or suffix to a VM name, and Druva has added a failover prerequisite check as well.

 

Other Notes

In other news, Druva is now certified on VMC on Dell. It’s added support for Microsoft Teams and support for Slack. Both useful if you’ve stopped storing your critical data in email and started storing it in collaboration apps instead.

Storage Insights and Recommendations

There’s also a storage insights feature that is particularly good for unstructured data. Say, for example, that 30% of your backups are media files, you might not want to back them up (unless you’re in the media streaming business, I guess). You can delete bad files from backups, and automatically create an exclusion for those file types.
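That exclusion idea amounts to partitioning the backup set by file type. A minimal sketch, assuming a simple extension-based rule (the extension list is my own example, not Druva’s):

```python
import os

# Hypothetical exclusion rule: skip common media formats during backup scans.
MEDIA_EXTENSIONS = {".mp4", ".mov", ".mkv", ".avi", ".mp3", ".wav"}

def partition_backup_set(paths, excluded=MEDIA_EXTENSIONS):
    """Split candidate files into (to_back_up, skipped) by extension."""
    keep, skip = [], []
    for path in paths:
        ext = os.path.splitext(path)[1].lower()   # case-insensitive match
        (skip if ext in excluded else keep).append(path)
    return keep, skip

files = ["report.xlsx", "promo.MP4", "design.vsdx", "demo.mov"]
keep, skip = partition_backup_set(files)
print(keep)   # ['report.xlsx', 'design.vsdx']
print(skip)   # ['promo.MP4', 'demo.mov']
```

In the real product the insight comes first (what percentage of your backups are media files) and the exclusion is created from that, rather than being hand-rolled like this.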

Support for K8s

Support for everyone’s favourite container orchestration system has been announced, but not yet released. Read about that here. You can now do a full backup of an entire K8s environment (AWS only in v1). This includes Docker containers, mounted volumes, and DBs referenced in those containers.

NAS Backup

Druva has enhanced its NAS backup in two ways, the first of which is performance. Preston tells me the current product is at least 10X faster than one year ago. Also, for customers already using a native recovery mechanism like snapshots, Druva has added the option to back up directly to Glacier, which cuts your cost in half.

Oracle Support

For Oracle, Druva has what Preston describes as “two solid options”. Right now there’s an OVA that provides a ready-to-go, appliance-like experience and uses the image copy format (supporting block-level incremental backups and incremental merge). The other option will be announced next week at DXP.

 

Thoughts and Further Reading

Some of these features seem like incremental improvements, but when you put it all together, it makes for some impressive reading. Druva has done a really impressive job, in my opinion, of sticking with the “built in the cloud, for the cloud” mantra that dominates much of its product design. The big news is the support for K8s, but things like multi-VM failback with the DRaaS solution are nothing to sneeze at. There’s more news coming shortly, and I look forward to covering that. In the meantime, if you have the time, be sure to check out DXP – I think it will be quite an informative event.

 

 

Random Short Take #46

Welcome to Random Short Take #46. Not a great many players have worn 46 in the NBA, but one player who has is one of my favourite Aussie players: Aron “Bangers” Baynes. So let’s get random.

  • Enrico recently attended Cloud Field Day 9, and had some thoughts on NetApp’s identity in the new cloud world. You can read his insights here.
  • This article from Chris Wahl on multi-cloud design patterns was fantastic, and well worth reading.
  • I really enjoyed this piece from Russ on technical debt, and some considerations when thinking about how we can “future-proof” our solutions.
  • The Raspberry Pi 400 was announced recently. My first computer was an Amstrad CPC 464, so I have a real soft spot for jamming computers inside keyboards.
  • I enjoyed this piece from Chris M. Evans on hybrid storage, and what it really means nowadays.
  • Working from home a bit this year? Me too. Tom wrote a great article on some of the security challenges associated with the new normal.
  • Everyone has a quadrant nowadays, and Zerto has found itself in another one recently. You can read more about that here.
  • Working with VMware Cloud Director and wanting to build a custom theme? Check out this article.