Pure Storage Announces Second Generation FlashArray//C with QLC

Disclaimer: I recently attended Storage Field Day 20.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Pure Storage recently announced its second generation FlashArray//C – an all-QLC offering that delivers scads of capacity in a dense form factor. Pure Storage presented on this topic at Storage Field Day 20. You can see videos of the presentation here, and download my rough notes from here.

 

It’s A Box!

FlashArray//C burst onto the scene last year as an all-flash, capacity-optimised storage option for customers looking for storage that didn’t need to go quite as fast as the FlashArray//X, but that wasn’t built on spinning disk. Available capacities range from 1.3PB to 5.2PB (effective).

[image courtesy of Pure Storage]

There are a number of models available, with a variety of capacities and densities.

  • //C60-366 – Up to 1.3PB effective capacity**; 366TB raw capacity**; 3U; 1000–1240 watts (nominal–peak); 97.7 lbs (44.3 kg) fully loaded; 5.12” x 18.94” x 29.72” chassis
  • //C60-494 – Up to 1.9PB effective capacity**; 494TB raw capacity**; 3U; 1000–1240 watts (nominal–peak); 97.7 lbs (44.3 kg) fully loaded; 5.12” x 18.94” x 29.72” chassis
  • //C60-840 – Up to 3.2PB effective capacity**; 840TB raw capacity**; 6U; 1480–1760 watts (nominal–peak); 177.0 lbs (80.3 kg) fully loaded; 10.2” x 18.94” x 29.72” chassis
  • //C60-1186 – Up to 4.6PB effective capacity**; 1.2PB raw capacity**; 6U; 1480–1760 watts (nominal–peak); 185.4 lbs (84.1 kg) fully loaded; 15.35” x 18.94” x 29.72” chassis
  • //C60-1390 – Up to 5.2PB effective capacity**; 1.4PB raw capacity**; 9U; 1960–2280 watts (nominal–peak); 273.2 lbs (123.9 kg) fully loaded; 15.35” x 18.94” x 29.72” chassis

Workloads

There are reasons why the FlashArray//C could be a really compelling option for workload consolidation. More and more workloads are “business critical” in terms of both performance and availability. There’s a requirement to do more with less, while battling complexity, and a strong desire to manage everything via a single pane of glass.

There are some other cool things you could use the //C for as well, including:

  • Automated policy-based VM tiering between //X and //C arrays (sketched below);
  • DR using the //X at production and //C at your secondary site;
  • Consolidating multiple //X array workloads on a single //C array for test and dev; and
  • Consolidating multiple //X array snapshots to a single //C array for long-term retention.
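To make the first of those ideas a touch more concrete, here’s a minimal sketch of what a placement policy could look like. To be clear, this is purely illustrative: the thresholds, the `VMProfile` shape, and the `place_vm` helper are my own inventions, not Pure Storage’s tiering logic.

```python
from dataclasses import dataclass

@dataclass
class VMProfile:
    name: str
    iops_p95: int            # 95th percentile IOPS observed for the VM
    latency_sensitive: bool  # e.g. tagged as such by the application owner

# Hypothetical threshold; real tiering would be driven by array analytics.
HOT_IOPS = 5_000

def place_vm(vm: VMProfile) -> str:
    """Pick a target array family for a VM under this toy policy."""
    if vm.latency_sensitive or vm.iops_p95 > HOT_IOPS:
        return "FlashArray//X"  # performance tier
    return "FlashArray//C"      # capacity tier

print(place_vm(VMProfile("sql-prod-01", 12_000, True)))  # FlashArray//X
print(place_vm(VMProfile("file-archive", 300, False)))   # FlashArray//C
```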

 

It’s a QLC World, Sort Of

The second generation of the FlashArray//C means you can potentially now have flash all through the data centre.

  • Apps and VMs – provision your high performance workloads to //X, lower performance / high capacity workloads to //C
  • Modern Data Protection & Disaster Recovery – on-premises production applications on //X efficiently replicated or backed up to //C at DR site
  • User File Shares – User file access with Purity 6.0 via SMB, NFS

QLC nonetheless presents significant engineering challenges, with traditionally high write latency and low endurance (when compared to SLC, MLC, and TLC). Pure Storage’s answer to that problem has been to engineer the crap out of DirectFlash to get the required results. I’d do a bad job of explaining it, so instead I recommend you check out Pete Kirkpatrick’s explanation.

 

Thoughts And Further Reading

I covered the initial FlashArray//C announcement here and many of the reasons why this type of offering is appealing remain the same. The knock on Pure Storage in the last few years has been that, while FlashArray//X is nice and fast and a snap to use, it couldn’t provide the right kind of capacity (i.e. cheap and deep) that a number of price-sensitive punters wanted.  Sure, they could go and buy the FlashArray//X and then look to another vendor for a dense storage option, but the motivation to run with a number of storage vendors in smaller enterprise shops is normally fairly low. The folks in charge of technology in these environments are invariably stretched in terms of bodies on the floor to run the environments, and cash in the bank to procure those solutions. A single vendor solution normally makes sense for them (as opposed to some of the larger shops, or specialist organisations that really have very specific requirements that can only be serviced by particular solutions).

So now Pure Storage has the FlashArray//C, and you can get it with some decent density, some useful features (thanks in part to some new features in Purity 6), and integration with the things you know and like about Pure Storage, such as Pure1 and Evergreen storage. It seems like Pure Storage has done an awful lot of work to squeeze performance out of QLC whilst ensuring that the modules don’t need replacing every other week. There’s a lot to like about the evolving Pure Storage story, and I’m interested to see how they tie it all together as the portfolio continues to expand. You can read the press release here, access the data sheet here, and read Mellor’s take on the news here.

Datadobi Announces DobiProtect

Datadobi recently announced DobiProtect. I had the opportunity to speak with Michael Jack and Carl D’Halluin about the announcement, and thought I’d share some thoughts here.

 

The Problem

Disaster Recovery

Modern disaster recovery solutions tend more towards business continuity than DR. The challenge with data replication solutions is that it’s a trivial thing to replicate corruption from your primary storage to your DR storage. Backup systems are vulnerable too, and in most instances you need to make some extra effort to ensure you’ve got a replicated catalogue, and that your backup data is not isolated. Invariably, you’ll be looking to restore to like hardware in order to reduce the recovery time. Tape is still a pain to deal with, and you’re also at the mercy of people and processes going wrong.

What Do Customers Need?

To get what you need out of a robust DR system, there are a few criteria that need to be met, including:

  • An easy way to select business-critical data;
  • A simple way to make a golden copy in native format;
  • A bunker site in a DC or cloud;
  • A manual air-gap procedure;
  • A way to restore to anything; and
  • A way to failover if required.

 

Enter DobiProtect

What Does It Do?

The idea is that you have two sites with a manual air-gap between them, usually controlled by a firewall of some type. The first site is where you run your production workload, and there’ll likely be a subset of data that is really quite important to your business. You can use DobiProtect to get that data from your production site to DR (it might even be in a bunker!). In order to get the data from production to DR, DobiProtect scans the data before it’s pulled across to DR. Note that the data is pulled, not pushed. This is important as it means that there’s no obvious trace of the bunker’s existence in production.

[image courtesy of Datadobi]
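Datadobi hasn’t published the mechanics at this level of detail, but the pull principle is easy to illustrate. In this sketch, a job at the bunker site initiates every connection outbound to a hypothetical read-only export at production (the URL, manifest format, and paths are all made up); production never connects to, or even knows about, the bunker.

```python
import pathlib
import requests

PRODUCTION = "https://prod-export.example.com"  # hypothetical read-only export
BUNKER_ROOT = pathlib.Path("/bunker/golden-copy")

def pull_golden_copy() -> None:
    """Bunker-initiated sync: the firewall only permits this outbound
    direction, so nothing at production can reach (or locate) the bunker."""
    manifest = requests.get(f"{PRODUCTION}/manifest.json", timeout=30).json()
    for entry in manifest["files"]:  # e.g. {"path": "finance/q2.xlsx"}
        dest = BUNKER_ROOT / entry["path"]
        dest.parent.mkdir(parents=True, exist_ok=True)
        with requests.get(f"{PRODUCTION}/files/{entry['path']}",
                          stream=True, timeout=300) as resp:
            resp.raise_for_status()
            with open(dest, "wb") as f:
                for chunk in resp.iter_content(chunk_size=1 << 20):
                    f.write(chunk)
```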

If things go bang, you can recover to any NAS or object platform.

  • Browse golden copy
  • Select by directory structure, folder, or object patterns
  • Mounts and shares
  • Specific versions

Bonus Use Case

One of the more popular use cases that Datadobi spoke to me about was heterogeneous edge-to-core protection. Data on the edge is usually more vulnerable, and not every organisation has the funding to put robust protection mechanisms in place at every edge site to protect critical data. With the advent of COVID-19, many organisations have been pushing more data to the edge in order for remote workers to have better access to data. The challenge then becomes keeping that data protected in a reliable fashion. DobiProtect can be used to pull data from the core once data has been pulled back from the edge. Because it’s a software-only product, your edge storage can be anything that supports object, SMB, or NFS, and the core could be anything else. This provides a lot of flexibility, and sidesteps much of the expense traditionally associated with DR at edge sites.

[image courtesy of Datadobi]

 

Thoughts and Further Reading

The idea of an air-gapped site in a bunker somewhere is the sort of thing you might associate with a James Bond story. In Australia these aren’t exactly a common thing (bunkers, not James Bond stories), but Europe and the US are riddled with them. As Jack pointed out in our call, “[t]he first rule of bunker club – you don’t talk about the bunker”. Datadobi couldn’t give me a list of customers using this type of solution because none of its customers wanted people to know that they were doing things this way. It seems a bit like security via obscurity, but there’s no point painting a big target on your back or giving clues out for would-be crackers to get into your environment and wreak havoc.

The idea that your RPO is a day, rather than minutes, is also confronting for some folks. But the idea of this solution is that you’ll use it for your absolutely mission critical can’t live without it data, not necessarily your virtual machines that you may be able to recover normally if you’re attacked or the magic black smoke escapes from one of your hosts. If you’ve gone to the trouble of looking into acquiring some rack space in a bunker, limited the people in the know to a handful, and can be bothered messing about with a manual air-gap process, the data you’re looking to protect is clearly pretty important.

Datadobi has a rich heritage in data migration for both file and object storage systems. It makes sense that eventually customer demand would drive them down this route to deliver a migration tool that ostensibly runs all the time as a sort of data protection tool. This isn’t designed to protect everything in your environment, but for the stuff that will ruin your business if it goes away, it’s very likely worth the effort and expense. There are some folks out there actively looking for ways to put you over a barrel, so it’s important to think about what it’s worth to your organisation to avoid that if possible.

BackupAssist Announces BackupAssist ER

BackupAssist recently announced BackupAssist ER. I had the opportunity to speak with Linus Chang (CEO), Craig Ryan, and Madeleine Tan about the announcement.

 

BackupAssist

Founded in 2001, BackupAssist is focussed primarily on the small to medium enterprise (under 500 seats). They sell the product via a variety of mechanisms, including:

  • Direct
  • Partners
  • Distribution channels

 

Challenges Are Everywhere

Some of the challenges faced by the average SME when it comes to data protection include the following:

  • Malware
  • COVID-19
  • Compliance

So what does the average SME need when it comes to selecting a data protection solution?

  • Make it affordable
  • Automatic offsite backups with history and retention
  • Most recoveries are local – make them fast!
  • The option to recover in the cloud if needed (the fallback to the fallback)

 

What Is It?

So what exactly is BackupAssist ER? It’s backup and recovery software.

[image courtesy of BackupAssist]

It’s deployed on Windows servers, and has support for disk-to-disk-to-cloud as a protection topology.

CryptoSafeGuard

Another cool feature is CryptoSafeGuard, providing the following features:

  • Shield from unauthorised access
  • Detect – Alert – Preserve

Disaster Recovery

  • VM Instant boot (converting into a Hyper-V guest)
  • BMR (catering for dissimilar hardware)
  • Download cloud backup anywhere

Data Recovery

The product supports the granular recovery of files, Exchange, and applications.

Data Handling and Control

A key feature of the solution is the approach to data handling, offering:

  • Accessibility
  • Portability
  • Retention

It uses the VHDX file format to store protection data. It can also back up to Blob storage. Chang also advised that they’re working on introducing S3 compatibility at some stage.
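I can’t speak to BackupAssist’s actual transport, but shipping a VHDX into Blob storage is easy to picture with the Azure SDK for Python. A minimal sketch, assuming a hypothetical container and connection string:

```python
from azure.storage.blob import BlobServiceClient

# Hypothetical connection string; substitute your storage account's own.
CONN_STR = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=..."

def upload_vhdx(path: str, container: str = "backups") -> None:
    """Stream a local VHDX image into a Blob storage container."""
    service = BlobServiceClient.from_connection_string(CONN_STR)
    blob = service.get_blob_client(container=container,
                                   blob=path.rsplit("/", 1)[-1])
    with open(path, "rb") as data:
        blob.upload_blob(data, overwrite=True)  # SDK handles chunking

upload_vhdx("/backups/server01.vhdx")
```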

Retention

The product supports a couple of different retention schemes, including:

  • Local – Keep N copies (GFS is coming)
  • Cloud – Keep X copies
  • Archival – Keep a backup on an HDD, and retain for years

Pricing

BackupAssist ER is licensed in a variety of ways. Costs are as follows:

  • Per physical machine – $399 US annually;
  • Per virtual guest machine – $199 US annually; and
  • Per virtual host machine – $699 US annually.

There are discounts available for multi-year subscriptions, as well as discounts to be had if you’re looking to purchase licensing for more than 5 machines.
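As a back-of-the-envelope example, here’s what a small hypothetical fleet would cost annually at the list prices above (the multi-year and 5+ machine discounts aren’t modelled, and I’m assuming hosts and guests are licensed independently, which may not match BackupAssist’s actual rules):

```python
# Annual list pricing (US$) from the figures above.
PHYSICAL, GUEST, HOST = 399, 199, 699

# Hypothetical small shop: 2 physical servers, 1 virtual host, 6 guests.
total = 2 * PHYSICAL + 1 * HOST + 6 * GUEST
print(f"US ${total:,} per year")  # US $2,691 per year
```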

 

Thoughts and Further Reading

Chang noted that BackupAssist is “[n]ot trying to be the best, but the best fit”. You’ll see that a lot of the capability is Microsoft-centric, with support for Windows and Hyper-V. This makes sense when you look at what the SME market is doing in terms of leveraging Microsoft platforms to deliver their IT requirements. Building a protection product that covers every platform is time-consuming and expensive in terms of engineering effort. What Chang and the team have been focussed on is getting data protection products to customers at a particular price point while delivering the right amount of technology.

The SME market is notorious for wanting to consume quality product at a particular price point. Every interaction I’ve had with customers in the SME segment has given me a crystal clear understanding of “Champagne tastes on a beer budget”. But in much the same way that some big enterprise shops will never stop doing things at a glacial pace, so too will many SME shops continue to look for high value at a low cost. Ultimately, compromises need to be made to meet that price point, hence the lack of support for platforms such as VMware. That doesn’t mean that BackupAssist can’t meet your requirements, particularly if you’re running your business’s IT on a couple of Windows machines. For this it’s well suited, and the flexibility on offer in terms of disk targets, retention, and recovery should be motivation to investigate further. It’s a bit of a nasty world out there, so anything you can do to ensure your business data is a little safer should be worthy of further consideration. You can read the press release here.

Pure Storage and Cohesity Announce Strategic Partnership and Pure FlashRecover

Pure Storage and Cohesity announced a strategic partnership and a new joint solution today. I had the opportunity to speak with Amy Fowler and Biswajit Mishra from Pure Storage, along with Anand Nadathur and Chris Wiborg from Cohesity, and thought I’d share my notes here.

 

Friends In The Market

The announcement comes in two parts, with the first being that Pure Storage and Cohesity are forming a strategic partnership. The idea behind this is that, together, the companies will deliver “industry-leading storage innovations from Pure Storage with modern, flash-optimised backup from Cohesity”.  There are plenty of things in common between the companies, including the fact that they’re both, as Wiborg puts it, “keenly focused on doing the right thing for the customer”.

 

Pure FlashRecover Powered By Cohesity

Partnerships are exciting and all, but what was of more interest was the Pure FlashRecover announcement. What is it exactly? It’s basically Cohesity DataProtect running on Cohesity-certified compute nodes (the whitebox gear you might be familiar with if you’ve bought Cohesity tin previously), using Pure’s FlashBlades as the storage backend.

[image courtesy of Pure Storage]

FlashRecover is targeted for general availability in Q4 CY2020 (October). It will be released in the US initially, with other regions to follow. From a go-to-market perspective, Pure will handle level 1 and level 2 support, with Cohesity support being engaged for escalations. Cohesity DataProtect will be added to the Pure price list, and Pure becomes a Cohesity Technology Partner.

 

Thoughts

My first thought when I heard about this was: why would you? I’ve traditionally associated scalable data protection and secondary storage with slower, high-capacity appliances. But as we talked through the use cases, it started to make sense. FlashBlades by themselves aren’t super-high-capacity devices, but neither are the individual nodes in Cohesity appliances. String a few together and you have enough capacity to do data protection and fast recovery in a predictable fashion. FlashBlade supports 75 nodes (I think) [Edit: it scales up to 150x 52TB nodes. Thanks for the clarification from Andrew Miller] and up to 1PB of data in a single namespace. Throw in some of the capabilities that Cohesity DataProtect brings to the table and you’ve got an interesting solution. The knock on some of the next-generation data protection solutions has been that recovery can still be quite time-consuming. The use of all-flash takes away a lot of that pain, especially when coupled with a solution like FlashBlade that delivers some pretty decent parallelism in terms of getting data recovered back to production quickly.

An evolving use case for protection data is data reuse. For years, application owners have been stuck with fairly clunky ways of getting test data into environments to use with application development and testing. Solutions like FlashRecover provide a compelling story around protection data being made available for reuse, not just recovery. Another cool thing is that when you invest in FlashBlade, you’re not locking yourself into a particular silo; you can use the FlashBlade solution for other things too.

I don’t work with Pure Storage and Cohesity on a daily basis anymore, but in my previous role I had the opportunity to kick the tyres extensively with both the Cohesity DataProtect solution and the Pure Storage FlashBlade. I’m an advocate of both of these companies because of the great support I received from both, from pre-sales through to post-sales. They are relentlessly customer focused, and that really translates in both the technology and the field experience. I can’t speak highly enough of the engagement I’ve had with both companies, both as a blogger and as an end user.

FlashRecover isn’t going to be appropriate for every organisation. Most places, at the moment, can probably still get away with taking a little time to recover large amounts of data if required. But for industries where time is money, solutions like FlashRecover can absolutely make sense. If you’d like to know more, there’s a comprehensive blog post over at the Pure Storage website, and the solution brief can be found here.

StorCentric Announces Nexsan Unity 3300 And 7900

StorCentric recently announced new Nexsan Unity storage arrays. I had the opportunity to speak to Surya Varanasi, CTO of StorCentric, about the announcement, and thought I’d share some thoughts here.

 

Speeds And Feeds

[image courtesy of Nexsan]

The new Unity models announced are the 3300 and 7900. Both models use two controllers and vary in capacity between 1.6PB and 6.7PB. They both use the Intel Xeon E5 v4 Family processors, and have between 256GB and 448GB of system RAM. There are hybrid storage options available, and both systems support RAID 5, 6, and 10. You can access the spec sheet here.

 

Use Cases

Unbreakable

One of the more interesting use cases we discussed was what StorCentric refer to as “Unbreakable Backup”. The idea behind Nexsan Unbreakable Backup is that you can use your preferred data protection vendor to send backup data to a Unity array. This data can then be replicated to Nexsan’s Assureon platform. The cool thing about the Assureon is that it’s locked down. So even if you’re hit with a ransomware attack, it’s going to be mighty hard for the bad guys to crack the Assureon platform as well, as the Key Management System is hosted inside StorCentric and end users are given minimal privileges.

Data Migration

There’s also a Data Mobility Suite coming at the end of Q3, including:

  • Cloud Connector, giving you the ability to replicate data from Unity to 18 public clouds, including Amazon and Google (for unstructured data and cloud-based backup); and
  • Flexible Data Migrations – streamline Unity implementations, migrate data from heterogeneous systems.

 

Thoughts and Further Reading

I’ve written enthusiastically about Assureon in the past, so it was nice to revisit the platform via this announcement. Ransomware is a scary prospect for many organisations, so a system that can integrate nicely to help with protecting protection data seems like a pretty good idea. Sure, having to replicate the data to a second system might seem like an unnecessary expense, but organisations should be assessing the value of that investment against the cost of having corporate data potentially irretrievably corrupted. Insurance against ransomware attacks probably seems like something that you shouldn’t need to spend money on, until you need to spend money recovering, or sending bitcoin to some clown because you need your data back. It’s not appealing by any stretch, but it’s also important to take precautions wherever possible.

Midrange storage is by no means a sexy topic to talk about. In my opinion it’s a well understood architecture that most tier 1 companies do pretty well nowadays. But that’s the beauty of the midrange system in a lot of ways – it’s a well understood architecture. So you generally know what you’re getting with hybrid (or all-flash) dual controller systems. The Unity range from Nexsan is no different, and that’s not a bad thing. There are a tonne of workloads in the enterprise today that aren’t necessarily well suited to cloud (for the moment), and just need some block or file storage and a bit of resiliency for good measure. The Unity series of arrays from Nexsan offer a bunch of useful features, including tiering and a variety of connectivity options. It strikes me that these arrays are a good fit for a whole lot of workloads that live in the data centre, from enterprise application hosting through to data protection workloads. If you’re after a reliable workhorse, it’s worth looking into the Unity range.

StorONE Announces AFA.next

StorONE recently announced the All-Flash Array.next (AFAn). I had the opportunity to speak to George Crump (StorONE Chief Marketing Officer) about the news, and thought I’d share some brief thoughts here.

 

What Is It? 

It’s a box! (Sorry, I’ve been re-watching Silicon Valley with my daughter recently).

[image courtesy of StorONE]

More accurately, it’s an Intel Server with Intel Optane and Intel QLC storage, powered by StorONE’s software.

S1:Tier

S1:Tier is StorONE’s tiering solution. It operates within the parameters of a high and low watermark. Once the Optane tier fills up, the data is written out, sequentially, to QLC. The neat thing is that when you need to recall the data on QLC, you don’t necessarily need to move it all back to the Optane tier. Rather, read requests can be served directly from QLC. StorONE call this a multi-tier capability, because you can then move data to cloud storage for long-term retention if required.

[image courtesy of StorONE]
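Mechanically, watermark-based tiering is straightforward to sketch. The snippet below is a toy model of the behaviour described above; the watermark values, the oldest-first eviction order, and the object-counting are my own assumptions, not StorONE’s implementation.

```python
HIGH_WATERMARK = 0.80  # start demoting when the Optane tier is 80% full (assumed)
LOW_WATERMARK = 0.60   # stop once it is back down to 60% (assumed)

class TieredStore:
    """Toy two-tier store; capacity is counted in objects for simplicity."""

    def __init__(self, optane_capacity: int):
        self.capacity = optane_capacity
        self.optane = {}  # hot tier; dicts preserve insertion order
        self.qlc = {}     # cold tier, written sequentially on demotion

    def write(self, key, data):
        self.optane[key] = data  # writes always land on Optane first
        if len(self.optane) / self.capacity > HIGH_WATERMARK:
            self._demote()

    def _demote(self):
        # Flush the oldest entries out to QLC until the low watermark is hit.
        while len(self.optane) / self.capacity > LOW_WATERMARK:
            oldest = next(iter(self.optane))
            self.qlc[oldest] = self.optane.pop(oldest)

    def read(self, key):
        # Served from whichever tier holds the data; a QLC hit is not
        # promoted back to Optane, matching the S1:Tier description above.
        return self.optane.get(key, self.qlc.get(key))
```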

S1:HA

Crump noted that the Optane drives are single-ported, leading some customers to look for highly available configurations. These are catered for with a variation of S1:HA, where the HA solution is now a synchronous mirror between two stacks.

 

Thoughts and Further Reading

I’m not just a fan of StorONE because the company occasionally throws me a few dollarydoos to keep the site running. I’m a fan because the folks over there do an awful lot of storage type stuff on what is essentially commodity hardware, and they’re getting results that are worth writing home about, with a minimum of fuss. The AFAn uses Optane as a storage tier, not just read cache, so you get all of the benefit of Optane write performance (many, many IOPS). It has the resilience and data protection features you see in many midrange and enterprise arrays today (namely vRAID, replication, and snapshots). Finally, it has varying support for all three use cases (block, file, and object), so there’s a good chance your workload will fit on the box.

More and more vendors are coming to market with Optane-based storage solutions. It still seems that only a small number of them are taking full advantage of Optane as a write medium, instead focusing on its benefit as a read tier. As I mentioned before, Crump and the team at StorONE have shared some pretty decent numbers coming out of the AFAn. I think the best thing is that it’s now available as a configuration item on the StorONE TRUprice site as well, so you can see for yourself how much the solution costs. If you’re after a whole lot of performance in a small box, this might be just the thing. You can read more about the solution and check out the lab report here. My friend Max wrote a great article on the solution that you can read here.

Komprise Announces Cloud Capability

Komprise recently made some announcements around extending its product to cloud. I had the opportunity to speak to Krishna Subramanian (President and COO) about the news and I thought I’d share some of my thoughts here.

 

The Announcement

Komprise has traditionally focused on unstructured data stored on-premises. It has now extended the capabilities of Komprise Intelligent Data Management to include cloud data. There’s currently support for Amazon S3 and Wasabi, with Google Cloud, Microsoft Azure, and IBM support coming soon.

 

Benefits

So what do you get with this capability?

Analyse data usage across cloud accounts and buckets easily

  • Single view across cloud accounts, buckets, and storage classes
  • Analyse AWS usage by various metrics accurately based on access times
  • Explore different data archival, replication, and deletion strategies with instant cost projections

Optimise AWS costs with analytics-driven archiving

  • Continuously move objects by policy across Cloud Network Attached Storage (NAS), Amazon S3, Amazon S3 Standard-IA, Amazon S3 Glacier, and Amazon S3 Glacier Deep Archive
  • Minimise costs and penalties by moving data at the right time based on access patterns (see the sketch below)

Bridge to Big Data/Artificial Intelligence (AI) projects

  • Create virtual data lakes for Big Data, AI – search for exactly what you need across cloud accounts and buckets
  • Native access to moved data on each storage class with full data fidelity

Create Cyber Resiliency with AWS

  • Copy S3 data to AWS to protect from ransomware with an air-gapped copy

[image courtesy of Komprise]
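Komprise wraps this in analytics and policy, but the raw S3 mechanics of the archiving step look something like the sketch below. The bucket name and 180-day cutoff are my own assumptions, and because S3 doesn’t expose last-access time natively (the exact gap called out later in this post), LastModified stands in as a crude proxy:

```python
import boto3
from datetime import datetime, timedelta, timezone

s3 = boto3.client("s3")
CUTOFF = datetime.now(timezone.utc) - timedelta(days=180)  # assumed policy

def archive_cold_objects(bucket: str) -> None:
    """Transition objects untouched since CUTOFF from Standard to Glacier."""
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            if (obj["LastModified"] < CUTOFF
                    and obj.get("StorageClass", "STANDARD") == "STANDARD"):
                # An in-place copy with a new storage class performs the move.
                s3.copy_object(
                    Bucket=bucket,
                    Key=obj["Key"],
                    CopySource={"Bucket": bucket, "Key": obj["Key"]},
                    StorageClass="GLACIER",
                )

archive_cold_objects("my-archive-candidate-bucket")  # hypothetical bucket
```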

 

Why Is This Good?

The move to cloud storage hasn’t been all beer and skittles for enterprise. Storing large amounts of data in public cloud presents enterprises with a number of challenges, including:

  • Poor visibility – “Bucket sprawl”
  • Insufficient data – Cloud does not easily track last access / data use
  • Cost complexity – Manual data movement can lead to unexpected retrieval cost surprises
  • Labour – Manually moving data is error-prone and time-consuming

Sample Use Cases

Some other reasons you might want to have Komprise manage your data include:

  • Finding ex-employee data stored in buckets.
  • Data migration – you might want to take a copy of your data from Wasabi to AWS.

There’s support for all unstructured data (file and object), so the benefits of Komprise can be enjoyed regardless of how you’re storing your unstructured data. It’s also important to note that there’s no change to the existing licensing model; you’re just now able to use the product on public cloud storage.

 

Thoughts

Effective data management remains a big challenge for enterprises. It’s no secret that public cloud storage is really just storage that lives in another company’s data centre. Sure, it might be object storage, rather than file based, but it’s still just a bunch of unstructured data sitting in another company’s data centre. The way you consume that data may have changed, and certainly the way you pay for it has changed, but fundamentally it’s still your unstructured data sitting on a share or a filesystem. The problems you had on-premises though, still manifest in public cloud environments (i.e. data sprawl, capacity issues, etc). That’s why the Komprise solution seems so compelling when it comes to managing your on-premises storage consumption, and extending that capability to cloud storage is a no-brainer. When it comes to storing unstructured data, it’s frequently a bin fire of some sort or another. The reason for this is that it doesn’t scale well. I don’t mean the storage doesn’t scale – you can store petabytes all over the place if you like. But if you’re still hand-crafting your shares and manually moving data around, you’ll notice that it becomes more and more time consuming as time goes on (and your data storage needs grow).

One way to address this challenge is to introduce a level of automation, which is something that Komprise does quite well. If you’ve got many terabytes of data stored on-premises and in AWS buckets (or you’re looking to move some old data from on-premises to the cloud) and you’re not quite sure what it’s all for or how best to go about it, Komprise can certainly help you out.

Datadobi Announces S3 Migration Capability

Datadobi recently announced S3 migration capabilities as part of DobiMigrate 5.9. I had the opportunity to speak to Carl D’Halluin and Michael Jack about the announcement and thought I’d share some thoughts on it here.

 

What Is It?

In short, you can now use DobiMigrate to perform S3 to S3 object storage migrations. It’s flexible too, offering the ability to migrate data from a variety of on-premises object systems up to public cloud object storage, between on-premises systems, or back to on-premises from public cloud storage. There’s support for a variety of S3 systems.

In the future Datadobi is looking to add support for AWS Glacier, object locks, object tags, and non-current object versions.

 

Why Would You?

There are quite a few reasons why you might want to move S3 data around. You could be seeing high egress charges from AWS because you’re accessing more data in S3 than you’d initially anticipated. You might be looking to move to the cloud and have a significant on-premises footprint that needs to go. Or you might be looking to replace your on-premises solution with a solution from another vendor.

 

How Would You?

The process used to migrate objects is fairly straightforward, and follows a pattern that will be familiar if you’ve done anything with any kind of storage migration tool before. In short, you set up a migration pair (source and destination), run a scan and first copy, then do some incremental copies. Once you’ve got a maintenance window, there’s a cutover where the final scan and copy is done. And then you’re good to go. Basically.

[image courtesy of Datadobi]
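To give a feel for that scan/copy/incremental pattern, here’s a rough sketch of one pass between a source and destination pair. This is emphatically not DobiMigrate’s implementation: the endpoint, the ETag shortcut, and the bucket handling are all simplifying assumptions (DobiMigrate also preserves metadata and verifies integrity in ways this toy loop doesn’t).

```python
import boto3
from botocore.exceptions import ClientError

# Hypothetical pair: an on-premises S3-compatible system as the source,
# AWS S3 as the destination. Credentials come from the usual config chain.
src = boto3.client("s3", endpoint_url="https://objects.onprem.example.com")
dst = boto3.client("s3")

def sync_pass(src_bucket: str, dst_bucket: str) -> None:
    """One scan-and-copy pass; re-running it gives the incremental copies."""
    for page in src.get_paginator("list_objects_v2").paginate(Bucket=src_bucket):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            try:
                head = dst.head_object(Bucket=dst_bucket, Key=key)
                # ETag comparison is a simplification (multipart ETags differ).
                if head["ETag"] == obj["ETag"]:
                    continue  # already copied and unchanged
            except ClientError:
                pass  # not at the destination yet; fall through and copy
            body = src.get_object(Bucket=src_bucket, Key=key)["Body"]
            dst.upload_fileobj(body, dst_bucket, key)
```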

 

Final Thoughts

Why am I so interested in these types of offerings? Part of it is that it reminds me of all of the time I burnt through earlier in my career migrating data from various storage platforms to other storage platforms. One of the funny things about storage is that there’s rarely enough to service demand, and it rarely delivers the performance you need after it’s been in use for a few years. As such, there’s always some requirement to move data from one spot to another, and to keep that data intact in terms of its permissions and metadata.

Amazon’s S3 offering has been amazing in terms of bringing object storage to the front of mind of many storage consumers who had previously only used block or file storage. Some of those users are now discovering that, while S3 is great, it can be expensive if you haven’t accounted for egress costs, or you’ve started using a whole lot more of it than initially anticipated. Some companies simply have to take their lumps, as everything is done in public cloud. But for those organisations with some on-premises footprint, the idea of being able to do performance oriented object storage in their own data centre holds a great deal of appeal. But how do you get it back on-premises in a reliable fashion? I believe that’s where Datadobi’s solution really shines.

I’m a fan of software that makes life easier for storage folk. Platform migrations can be a real pain to deal with, and are often riddled with risky propositions and daunting timeframes. Datadobi can’t necessarily change the laws of physics in a way that will keep your project manager happy, but it can do some stuff that means you won’t be quite as broken after a storage migration as you might have been previously. They already had a good story when it came to file storage migration, and the object to object story enhances it. Worth checking out.

StorONE Announces S1:TRUprice

StorONE recently announced S1:TRUprice. I had the opportunity to talk about the announcement with George Crump, and thought I’d share some of my notes here.

 

What Is It?

A website that anyone can access, providing a transparent view of StorONE’s pricing. There are three things you’ll want to know when doing a sample configuration:

  • Capacity
  • Use case (All-Flash, Hybrid, or All-HDD); and
  • Preferred server hardware (Dell EMC, HPE, Supermicro)

There’s also an option to do a software-only configuration if you’d rather roll your own. In the following example, I’ve configured HPE hardware in a highly available fashion with 92TB of capacity. This costs US $97,243.14. Simple as that. Once you’re happy with the configuration, you can have a formal quote sent to you, or choose to get on a call with someone.
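For a rough sense of the unit economics, assuming the quoted price covers the full 92TB:

```python
# Back-of-the-envelope unit cost for the example configuration above.
total_usd = 97_243.14
capacity_tb = 92
print(f"US ${total_usd / capacity_tb:,.2f} per TB")  # ≈ US $1,056.99 per TB
```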

 

Thoughts and Further Reading

Astute readers will notice that there’s a StorONE banner on my website, and the company has provided funds that help me pay the costs of running my blog. This announcement is newsworthy regardless of my relationship with StorONE though. If you’ve ever been an enterprise storage customer, you’ll know that getting pricing is frequently a complicated endeavour. There’s rarely a page hosted on the vendor’s website that provides the total cost of whatever array / capacity you’re looking to consume. Instead, there’ll be an exercise involving a pre-sales engineer, possibly some sizing and analysis, and a bunch of data is put into a spreadsheet. This then magically determines the appropriate bit of gear. This specification is sent to a pricing team, some discounts to the recommended retail price are usually applied, and it’s sent to you to consider. If it’s a deal that’s competitive, there might be some more discount. If it’s the end of quarter and the sales person is “motivated”, you might find it’s a good time to buy. There are a whole slew of reasons why the price is never the price. But the problem with this is you can never know the price without talking to someone working for the vendor. Want to budget for some new capacity? Or another site deployment? Talk to the vendor. This makes a lot of sense for the vendor. It gives the sales team insight into what’s happening in the account. There’s “engagement” and “partnership”. Which is all well and good, but does withholding pricing need to be the cost of this engagement?

The Cloud Made Me Do It

The public availability of cloud pricing is changing the conversation when it comes to traditional enterprise storage consumption. Not just in terms of pricing transparency, but also equipment availability, customer enablement, and time to value. Years ago we were all beholden to our storage vendor of choice to deliver storage to us under the terms of the vendor, and when the vendor was able to do it. Nowadays, even enterprise consumers can go and grab the cloud storage they want or need with only a small modicum of fuss. This has changed the behaviours of the traditional storage vendors in a way that I don’t think was foreseen. Sure, cloud still isn’t the answer to every problem, and if you’re selling big tin into big banks, you might have a bit of runway before you need to show your customers too much of what’s happening behind the curtain. But this move by StorONE demonstrates that there’s a demand for pricing transparency in the market, and customers are looking to vendors to show some innovation when it comes to the fairly boring business of enterprise storage. I’m very curious to see what other vendors decide to follow suit.

We won’t automatically see the end of some of the practices surrounding enterprise storage pricing, but initiatives like this certainly put some pressure back on the vendors to justify the price per GB they’re slinging gear for. It’s a bit easier to keep prices elevated when your customers have to do a lot of work to go to a competitor and find out what it charges for a similar solution. There are reasons for everything (including high prices), and I’m not suggesting that the major storage vendors have been colluding on price by any means. But something like S1:TRUprice is another nail in the coffin of the old way of doing things, and I’m happy about that. For another perspective on this news, check out Chris M. Evans’ article here.

Spectro Cloud – Profile-Based Kubernetes Management For The Enterprise

 

Spectro Cloud launched in March. I recently had the opportunity to speak to Tenry Fu (CEO) and Tina Nolte (VP, Products) about the launch and what Spectro Cloud is, and thought I’d share some notes here.

 

The Problem?

I was going to start this article by saying that Kubernetes in the enterprise is a bin fire, but that’s too harsh (and entirely unfair on the folks who are doing it well). There is, however, a frequent compromise being made between ease of use, control, and visibility.

[image courtesy of Spectro Cloud]

According to Fu, the way that enterprises consume Kubernetes shouldn’t just be on the left or the right side of the diagram. There is a way to do both.

 

The Solution?

According to the team, Spectro Cloud is “a SaaS platform that gives Enterprises control over Kubernetes infrastructure stack integrations, consistently and at scale”. What does that mean though? Well, you get access to the “table stakes” SaaS management, including:

  • Managed Kubernetes experience;
  • Multi-cluster and environment management; and
  • Enterprise features.

Profile-Based Management

You also get some cool stuff that heavily leverages profile-based management, including infrastructure stack modelling and lifecycle management that can be done based on integration policies. In short, you build cluster profiles and then apply them to your infrastructure. The cluster profile usually describes the OS flavour and version, Kubernetes version, storage configuration, networking drivers, and so on. The Pallet orchestrator then ensures these profiles are used to maintain the desired cluster state. There are also security-hardened profiles available out of the box.
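As a rough mental model (the field names and values below are invented for illustration, not Spectro Cloud’s actual schema), a cluster profile is essentially a declared desired state that the orchestrator reconciles running clusters against:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ClusterProfile:
    """Hypothetical stand-in for a profile: OS, Kubernetes, storage, network."""
    os_image: str
    k8s_version: str
    storage_driver: str
    network_driver: str

prod_profile = ClusterProfile(
    os_image="ubuntu-20.04",
    k8s_version="1.18.6",
    storage_driver="vsphere-csi",
    network_driver="calico",
)

def drift(declared: ClusterProfile, observed: ClusterProfile) -> list:
    """Fields where a running cluster differs from its declared profile;
    this is the sort of delta an orchestrator would reconcile."""
    return [field for field in declared.__dataclass_fields__
            if getattr(declared, field) != getattr(observed, field)]
```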

If you’re a VMware-based cloud user, the appliance (deployed from an OVA file) sits in your on-premises VMware cloud environment and communicates with the Spectro Cloud SaaS offering over TLS, and the cloud properties are dynamically propagated.

Licensing

The solution is licensed on the number of worker node cores under management. This is tiered based on the number of cores, and it follows a simple model: more cores and a longer commitment equal a bigger discount.

 

The Differentiator?

Current Kubernetes deployment options vary in their complexity and maturity. You can take the DIY path, but you might find that this option is difficult to maintain at scale. There are packaged options available, such as VMware Tanzu, but you might find that multi-cluster management is not always a focus. The managed Kubernetes option (such as those offered by Google and AWS) has its appeal to the enterprise crowd, but those offerings are normally quite restricted in terms of technology offerings and available versions.

Why does Spectro Cloud have appeal as a solution then? Because you get control over the integrations you might want to use with your infrastructure, but also get the warm and fuzzy feeling of leveraging a managed service experience.

 

Thoughts

I’m no great fan of complexity for complexity’s sake, particularly when it comes to enterprise IT deployments. That said, there are always reasons why things get complicated in the enterprise. Requirements come from all parts of the business, legacy applications need to be fed and watered, rules and regulations seem to be in place simply to make things difficult. Enterprise application owners crave solutions like Kubernetes because there’s some hope that they, too, can deliver modern applications if only they used some modern application deployment and management constructs. Unfortunately, Kubernetes can be a real pain in the rear to get right, particularly at scale. And if enterprise IT has taught us anything, it’s that most enterprise shops are struggling to do the basics well, let alone the needlessly complicated stuff.

Solutions like the one from Spectro Cloud aren’t a silver bullet for enterprise organisations looking to modernise the way applications are deployed, scaled, and managed. But something like Spectro Cloud certainly has great appeal given the inherent difficulties you’re likely to experience if you’re coming at this from a standing start. Sure, if you’re a mature Kubernetes shop, chances are slim that you really need something like this. But if you’re still new to it, or are finding that the managed offerings don’t give you the flexibility you might need, then something like Spectro Cloud could be just what you’re looking for.