I recently had the opportunity to deploy a Cohesity C2500 4-node appliance and thought I’d run through the basics of the installation. There’s a new document outlining the process on the articles page.
Disclaimer: I recently attended VMworld 2017 – US. My flights were paid for by ActualTech Media, VMware provided me with a free pass to the conference and various bits of swag, and Tech Field Day picked up my hotel costs. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event. Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.
Here are my rough notes on “STO3331BUS – Cohesity Hyperconverged Secondary Storage: Simple Data Protection for VMware and vSAN” presented by Gaetan Castelein of Cohesity and Shawn Long, CEO of viLogics. You can grab a PDF of my notes from here.
Secondary Storage Problem
SDS has changed for the better, and primary storage has improved dramatically. It has moved from:
- High CapEx costs
- Device-centric silos
- Complex processes
To:
- Policy-based management
- Cost-efficient performance
- Modern storage architectures
But secondary storage is still problematic
Rapidly growing data
- 6ZB in 2016
- 93ZB in 2025
- 80% unstructured
Too many copies
- 45% – 60% of capacity for copy data
- 10 – 12 copies on average
- $50B problem
Legacy storage can’t keep up
- Doesn’t scale
- Fragmented silos
Cohesity Hyperconverged Secondary Storage
You can use this for a number of different applications, including:
- File shares
- Test / Dev
It also offers native integration with the public cloud and Cohesity have been clear that you shouldn’t consider it to be just another backup appliance.
Consolidate Secondary Storage Silos at Web-Scale
- Data Protection with Cohesity DataProtect;
- Third-party backup DB copies with CommVault, Oracle RMAN, Veritas, IBM and Veeam;
- Files.
Deliver Data Instantly
Want to make the data useful (via SnapTree)?
Software defined from Edge to Cloud
You can read more about Cohesity’s cloud integration here.
- Simple Data Protection
- Distributed File Services
- Object Services
- Multicloud Mobility
- Test / Dev Copies
You can use Cohesity with existing backup products if required or you can use Cohesity DataProtect.
Always-Ready Snapshots for Instant Restores
- Sub-5 minute RPOs
- Fully hydrated images (linked clones)
- Catalogue of always-ready images
- Instant recoveries (near-zero RTOs)
- Integration with Pure Storage
Tight Integration with VMware
- vCenter Integration
- VADP for snap-based CBT backups
- vRA plugin for self-service, policy-based management
- Policy-based archival
- Dedupe, compression, encryption
- Everything is indexed before it goes to the cloud – search files and VMs
- Individual file recovery
- Recover to a different Cohesity cluster
- Replicate backup data to cloud
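To make the dedupe bullet above a little more concrete, here's a toy fixed-block deduplication sketch. This is purely illustrative of the general idea – it has nothing to do with Cohesity's actual implementation, which I haven't seen the internals of.

```python
# Toy fixed-block dedupe: each unique block is stored once, keyed by its
# hash, and files become "recipes" of fingerprints. Real platforms use
# far more sophisticated (often variable-length, global) schemes.
import hashlib

BLOCK = 4096
store = {}  # fingerprint -> block bytes


def write(data):
    """Store `data` and return the list of fingerprints describing it."""
    recipe = []
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        fp = hashlib.sha256(block).hexdigest()
        store.setdefault(fp, block)  # only new blocks consume space
        recipe.append(fp)
    return recipe


def read(recipe):
    """Reassemble data from its recipe."""
    return b"".join(store[fp] for fp in recipe)


r1 = write(b"A" * 8192)  # two identical 4KB blocks...
print(len(store))        # ...stored once
```

Two identical blocks land in the store as a single entry, which is the whole trick: the more repetitive your backup data, the less you actually keep.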
Deploy Cohesity to the cloud (available on Azure currently, other platforms soon).
You can move from “Legacy backup”, where you’re paying maintenance on backup software and deduplication appliances, to paying just for Cohesity.
Shawn Long from viLogics then took the stage to talk about their experiences with Cohesity.
- People want to consume IT
- “Product’s only as good as the support behind it”
This was a useful session. I do enjoy the sponsored sessions at VMworld. It’s a useful way for the vendors to get their message across in a way that needs to tie back to VMware. There’s often a bit of a sales pitch, but there’s usually also enough information in them to get you looking further into the solution. I’ve been keeping an eye on Cohesity since I first encountered them a few years ago at Storage Field Day, and their story has improved in clarity and coherence since then. If you’re looking at secondary storage solutions it’s worth checking them out. You’ll find some handy resources here. 3.5 stars.
It’s only been 13 months since I did one of these, so clearly the frequency is increasing. Here’re a few things that I’ve noticed and thought may be worthy of your consideration:
- This VMware KB article on Calculating Bandwidth Requirements for vSphere Replication is super helpful and goes into a lot of depth on planning for replication. There’s also an appliance available via a Fling, and a calculator you can use.
- NetWorker 9.1 was recently released and Preston has a great write-up on some of the new features. “Things are hotting up”, as they say.
- Rawlinson has now joined Cohesity, and I look forward to hearing more about that in the near future. In the meantime, this article on automated deployment for HCI is very handy.
- My hosting provider moved me to a new platform in September. By October I’d decided to move somewhere else based on the poor performance of the site and generally shoddy support experience. I’m now with SiteGround. They’re nice, fast and cheap enough for me. I’ve joined their affiliate program, so if you decide to sign up with them I can get some cash.
- My blog got “hacked” yesterday. Someone put a redirect in place to a men’s performance pill site. Big thanks to Mike Yurick for pointing it out to me and to my colleague Josh for answering my pleas for help and stepping in and cleaning it up while I was on a plane inter-state. He used Wordfence to scan and clean up the site – check them out and make sure your backups are up to date. If it happens to you, and you don’t have a Josh, check out this guidance from WordPress.
- The next Brisbane VMUG will be held on Tuesday February 21st. I’ll be putting up an article on it in the next few weeks. It will be sponsored by Veeam and should be great.
- I always enjoy spending time with Stephen Foskett, and when I can’t be with him I like to read his articles (it’s not stalky at all). This one on containers was particularly good.
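On the vSphere Replication bandwidth point above: the core arithmetic is simple enough to sketch, although this is my own back-of-the-envelope version rather than VMware's formula – use the KB article and the official calculator for real planning.

```python
# Rough sketch of replication bandwidth estimation: ship one RPO's worth
# of changed data within the RPO window, padded for protocol overhead.
# My own arithmetic, not VMware's formula.

def replication_bandwidth_mbps(changed_gb_per_day, rpo_minutes, overhead=1.3):
    """Estimate the link bandwidth (Mbit/s) needed to meet the RPO."""
    changed_per_rpo_gb = changed_gb_per_day * (rpo_minutes / (24 * 60))
    mbits = changed_per_rpo_gb * 8 * 1024  # GB -> Mbit
    seconds = rpo_minutes * 60
    return (mbits / seconds) * overhead    # pad for protocol overhead


# e.g. 100GB of daily change with a 60-minute RPO
print(round(replication_bandwidth_mbps(100, 60), 1))  # -> 12.3
```

The real calculator accounts for a lot more (change-rate distribution across the day, multiple VMs, link sharing), but this gives you a feel for why daily change rate and RPO are the two numbers people ask for first.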
That’s it for the moment. Hopefully you all have an enjoyable and safe holiday season. And if my site looks like this in the future – let me know.
I’ve been following Cohesity for some time now, have covered a number of their product announcements, and saw them in action at Storage Field Day 8. They announced version 3.0 at the end of June, and Gaetan Castelein kindly offered to give me a brief on where they’re at in the lead up to VMworld US.
What’s a Cohesity?
Cohesity’s goal is to take the complexity out of secondary storage. They argue that SDS has done a good job of this on primary storage platforms, but we’ve all ignored the issues around running secondary storage. The primary vehicle for this is Cohesity DataPlatform, combined with Cohesity DataProtect. Cohesity have a number of use cases for the platform that they cover, and I thought it might be handy to go over these here.
Use Case 1 – DataPlatform as a “better backup target”
Cohesity are taking aim at the likes of Data Domain, and are keen to replace them as backup targets. Cohesity tell me that DataPlatform offers the following features:
- Scale-out platform (with no single point of failure), simple capacity planning, no forklift upgrades;
- Global deduplication;
- Native cloud integration;
- High performance with parallelized ingest; and
- QoS and multitenancy.
These all seem like nice things to have.
Use Case 2 – Simpler Data Protection
Cohesity tell me that the DataPlatform also makes a great option for VMware-based backups, providing data protection folks with the ability to leverage the following features:
- Converged infrastructure with single pane of glass;
- Policy-based automation;
- Fast SLAs (15 min RPOs and instantaneous RTOs); and
- Productive data (instant clones for test/dev, deep visibility into the data for indexing, custom analytics, etc).
While the single pane of glass often becomes the single pain, the last point about making data productive is particularly important, depending on the environment you’re working in. There’re a tonne of enterprises out there where people are following some mighty cumbersome processes to run analytics on snapshots of their data. Any platform that makes this easier and more accessible seems like a great idea.
Use Case 3 – NFS & SMB Interfaces
You can also use the DataPlatform for file consolidation. Cohesity have even started positioning a combination of VMware VSAN as your primary storage platform (great for running VMs), with Cohesity offering secondary storage and the ability to deliver it over SMB or NFS. You can read more about this here.
Use Case 4 – Test/Dev
Cohesity’s first foray into the market revolved around providing enhanced capabilities for developers, and this remains a key selling point of the platform, with a full set of APIs exposed (which can be easily leveraged for use with Chef, Puppet, etc).
Use Case 5 – Analytics
Analytics have also been a major part of Cohesity’s early forays into secondary storage, with native reporting providing:
- Utilization metrics (storage utilization, capacity forecasting); and
- Performance metrics (ingest rates, data reduction, IOPS, latency).
There’s also content indexing and search, providing data indexing (index upon ingest, VM and file metadata, files within VMs), and “Google-like” search. You can also access an analytics workbench with built-in MapReduce.
What Have You Done For Me Lately?
So with the Cohesity 3.0 Announcement a bunch of expanded application and OS integrations were announced, with a particular focus on SQL, Exchange, SharePoint, MS Windows, Linux, Oracle DBs (RMAN and remote adapter). Here’s a table that Cohesity provided that covers off a lot of the new features.
In addition to the DataProtect enhancements, a number of enhancements have been made to both the DataPlatform and File Services components of the product. I’m particularly interested in the ROBO solution, and I think this could end up being a very clever attempt by Cohesity at capturing the secondary storage market at a very broad level.
Cohesity have been moving ahead in leaps and bounds, and I’ve been impressed by what they’ve had to say, and the development of their narrative compared to some of the earlier messaging. It remains to be seen whether they’ll get to where they want to be, but I think they’re giving it a good shake. They’ll be present at VMworld US next week (Booth 827), where you can hear more about what they’re doing with VSAN and vRealize Automation.
I’ve posted previously about the opportunity I had to talk in depth with some of the folks from Cohesity at Storage Field Day 8. They’ve now come out with their “Hybrid Cloud Strategy”, and I thought it was worthwhile putting together a brief post covering the announcement.
As you’ve probably been made aware countless times by various technology sales people, analysts and pundits, enterprises are moving workload to the cloud. Cohesity are offering what they call a complete approach via the following features:
- Cohesity CloudArchive;
- Cohesity CloudTier; and
- Cohesity CloudReplicate.
Cohesity CloudArchive is, as the name implies, a mechanism to “seamlessly archive datasets for extended retention from the Cohesity Data Platform through pre-built integrations with Google Nearline, Microsoft Azure, and Amazon S3 and Glacier”. This feature was made available as part of the 2.0 release, which I covered here.
Cohesity CloudTier allows you to use public cloud as an extension of your on-premises storage. It “dynamically increases local storage capacity, by moving seldom-accessed data blocks into the cloud”. The cool thing about this is that, via the policy-based waterfall model, transparent cloud tiering can be managed from the Cohesity Data Platform console. Cohesity suggest that the main benefit is that end users no longer have to worry about exceeding their on-premises capacity during temporary or seasonal demand spikes.
Cohesity CloudReplicate allows Cohesity users to “replicate local storage instances to remote public or private cloud services”. This has the potential to provide a lower-cost disaster recovery solution for their on-premises installations. Cohesity have said that this feature will be released for production use later this year.
Further Reading and Thoughts
Everyone and their dog is doing some kind of cloud storage play nowadays. This isn’t a bad thing by any stretch, as CxOs and enterprises are really super keen to move some (if not all) of their workloads off-premises in order to reduce their reliance on in-house IT systems. Every cloud opportunity comes with caveats though, and you need to be mindful of the perceived versus actual cost of storing a bunch of your data off-premises. You also need to look at things like security, bandwidth and accessibility before you take the leap. But this is all stuff you know, and I’m sure that a lot of people have thought about the impact of off-premises storage for large datasets before blindly signing up with Amazon and the like. The cool thing about Cohesity’s secondary storage hybrid cloud solution is that Cohesity are focussed on the type of data that lends itself really well to off-premises storage.
I’ve been a fan of Cohesity since they first announced shipping product. And it’s been great to see the speed with which new features are being added to the product. As well as this, Cohesity’s responsiveness to criticism and suggestions for improvements has been exciting to see play out. You can check out a video of Cohesity’s Hybrid Cloud demo here, while the cloud integration demo from Storage Field Day 9 is available here. Alex also has a nice write-up here.
I’ve previously blogged about Cohesity’s Data Platform here. I also had the good fortune to sit in on their presentation at SFD8 and got to mix with some of their product people that week. So I was pleased to hear news from Nick that Cohesity Data Platform 2.0 was ready to roll.
- Site-to-Site Replication: Cohesity have introduced improved resiliency with the new capability for site-to-site replication between Cohesity Clusters. I’m looking forward to checking this one out in more depth.
- SMB Protocol: Support for SMB 3.0 – this is big, but no AD-integration yet (it’s coming, and part of a bigger RBAC push).
- Stronger Security: Hardware-accelerated AES 256-bit FIPS-compatible encryption – I know a few people who’ll be excited about this.
- Automated VM Cloning for Test/Dev: To deliver a more streamlined test/dev workflow that repurposes backup data, automated cloning of backup VMs is now available to more quickly spin up zero-space clones.
- Cloud Enabled: A newly added public cloud archival tier enables spill over of the least used data to Google Cloud Storage Nearline, Microsoft Azure, and Amazon S3 and Glacier. Cohesity says that this will help in cutting on-premises storage costs. The cynic in me suggests that this will also increase your off-premises costs, but your mileage might vary.
The UI has had a bit of touch-up, which is always nice. Nick provided me with a sample screenshot.
There’s a cool video on YouTube that provides a good overview of the dashboard – you can see it here.
Nick also said that they’ve modified “the data protection UI to now deal in profiles that are assigned policies. Jobs are still a thing that happen, but they happen as a result of policies attached to a profile, where the policy determines RPO/RTO, indexing, location, etc.” Here’s a screenshot of the Replication & Archival policy that can be attached to profiles.
Further Reading and Conclusion
Cohesity are doing a bit of a tour around the US to let us know more about what they’re all about. You can sign up for a session here. Mohit Aron (the CEO) is the featured speaker, and I recommend getting along to listen to him if you can.
I’ve been intrigued by Cohesity since they came out of stealth and I had the opportunity to talk to them in more depth late last year. I think the concept is interesting, and the execution technically has been really good from what I’ve seen. I’ve criticised them previously for some mixed messaging in the marketplace, but I put that down to the version 1 flavour of everything. That said, Cohesity are listening to customers, pundits, and the marketplace in general, and they’re actively developing features based on that feedback. If you’re looking for a different approach to “secondary storage” in general, I recommend having a chat to Cohesity about what they can do for you.
Disclaimer: I recently attended Storage Field Day 8. My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event. Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.
For each of the presentations I attended at SFD8, there are a few things I want to include in the post. Firstly, you can see video footage of the Cohesity presentation here. You can also download my raw notes from the presentation here. Finally, here’s a link to the Cohesity website that covers some of what they presented.
Cohesity gave us Mai Tais before the presentation kicked off. This may or may not have been a great idea.
So what’s All This About Secondary Storage?
I previously posted an article about Cohesity’s General Availability here. But what I found really interesting is Cohesity’s interpretation of what secondary storage is and how the built-in analytics engine can be leveraged to make even better use of all that data you’ve just got lying around.
Cohesity talk about primary storage as the tip of the iceberg, with “secondary storage” being comprised of:
- File shares
- DevOps (this is Test / Dev more than DevOps)
The problem, as Cohesity see it, is that all this data is “fragmented, inefficient and dark data”. Cohesity’s aim has been to build an “infinitely” scalable storage platform and consolidate this secondary storage into one platform. What really struck me about Cohesity’s presentation was the focus on running analytics across all of this “secondary storage”. It’s really not an approach I’d considered previously.
As part of OASIS, Cohesity provides an Analytics WorkBench (AWB) with the following features:
- Deep analysis using native tools;
- A MapReduce Engine that is distributed and runs natively; and
- Customisable (you can inject custom code, run specific jobs and search specific file types).
You can use custom apps with the AWB, and there’s a few Cohesity-delivered pre-configured apps available as well, including distributed GREP. You can read more about it here. They have a lot of plans for the future, including extensive third-party integration.
Closing Thoughts and Further Reading
If you’ve watched the Cohesity presentation, you’ll have noticed a few things. Firstly, some of the delegates were a bit annoyed that the message was a little muddled. They may have gotten a bit shirtier than they should have, but I think a fair bit of the criticism was valid. Secondly, you’ll have noticed that a few of the presenters weren’t your standard-issue tech marketing types. Instead, they were some of the developers that have been working on the product for some time now. The cool thing about this is you see at times that their vision for the product perhaps doesn’t always easily align with what we, the delegates, are seeing out in the field. But that’s okay, because this is their first crack at it, and it’s only going to improve, in terms of both messaging and customer feedback.
I’m excited by what Cohesity has managed to achieve thus far, and look forward to seeing the platform develop, particularly the analytics capability. Sure, the message and marketing needs a little work, but I think that 12 months from now we’ll be seeing something pretty exciting in the marketplace.
This is slightly old news, as the Cohesity announcement went up a few weeks ago, but Nick Howell was nice enough to reach out and give me some background on what Cohesity were all about prior to me seeing them at SFD8. Cormac also has a typically enlightening post over here that is worth your time. Note that I’ve not had any stick time with this gear, nor do I have any customer data on how well it performs, but based on the briefing I received I think it’s worth investigating further.
People have started to twig that “secondary data” is becoming a problem in enterprises. Secondary data could be your backup and archive data, or it could be snapshots in test / dev environments that are current or stale, or it could be a bunch of data used for analytics. In large organisations, particularly ones with active developers, this can be a drain on storage resources. Understanding where everything is and what it does can also be a problem. This is where Cohesity step in.
“To bring order to data management, Cohesity has launched the first converged secondary storage solution, which scales seamlessly while maintaining the management flexibility necessary to handle a wide variety of workloads. Cohesity empowers companies to control the growing volume of data by replacing the sprawl of point solutions with a single, consolidated platform.”
Sounds pretty good …
The Cohesity Data Platform is a combination of hardware and software. I have a soft spot for this approach, as I still feel that we’re not quite there with the “oh, you can run it on anything” software-defined storage nirvana that everyone is constantly chirping about. That said, just because you’ve slapped some code on some tin doesn’t mean you’ve automatically fixed the problem. Here’s a look at the Data Platform architecture. I’m going to focus on a bit of the hardware and software in this post, but there’s a really good white paper on the Cohesity site that goes into some depth about how it all ties together.
The hardware appliance is an Intel-based 2RU appliance comprised of 4 nodes (4 nodes to a Block). There are 2 versions – the C2300 and the C2500.
Ostensibly the difference is in the available capacity of the nodes. Here’s a table that says this more clearly than I have.
The software foundation is OASIS, which stands for Open Architecture for Scalable, Intelligent Storage. I am a big fan of companies that can come up with meaningful acronyms to represent their products, so kudos to Cohesity for this one. OASIS offers a number of features, including:
- Copy Data Management: Global, policy-based deduplication and patented snapshot / cloning technology;
- Data Protection: Data protection for applications through integrated backup and recovery features, including unlimited snapshotting and thin cloning, with built-in global indexing for full searchability;
- DevOps: Quickly deploy DevOps clones from backup data, efficiently repurposing passive data in legacy environments for faster development workflows; and
- In-Place Analytics: Cohesity offers built-in and programmable data analytics. Cohesity’s native analytics capabilities provide real-time metrics and forecasting, while its programmable Analytics Workbench empowers businesses to run custom queries against their datasets. All analytics are run in-place on the Cohesity cluster to minimize time-to-insight and eliminate the need for separate data analytics infrastructure. Support for integrated third-party analytics applications will be available in 2016.
The UI looks pretty, and seems easy enough to navigate (although I’ve only seen screenshots at this stage). I like the thumbs up. Deep down that’s all we really want to know from our storage gear.
Closing Thoughts and Further Reading
Mohit Aron has some pretty good experience up his sleeve, and I think Cohesity have come up with a solid approach to the problem they’re trying to solve. I’m really looking forward to hearing what Cohesity have to say at Storage Field Day 8, and I’m keen to see how they make some of the things on their roadmap happen in the near future. In the meantime, check out Nick’s announcement post here, and Mohit’s announcement is here.