Big Switch Announces AWS Public Cloud Monitoring

Big Switch Networks recently announced Big Mon for AWS. I had the opportunity to speak with Prashant Gandhi (Chief Product Officer) about the announcement and thought I’d share some thoughts here.

The Announcement

Big Switch describe Big Monitoring Fabric Public Cloud (its real product name) as “a seamless deep packet monitoring solution that enables workload monitoring within customer specified Virtual Private Clouds (VPCs). All components of the solution are virtual, with elastic scale-out capability based on traffic volumes.”

[image courtesy of Big Switch]

There are some real benefits to be had, including:

  • Complete AWS Visibility;
  • Multi-VPC support;
  • Elastic scaling; and
  • Consistent with the On-Prem offering.

Capabilities

  • Centralised packet and flow-based monitoring of all VPCs of a user account
  • Visibility-related traffic is kept local for security purposes and cost savings
  • Monitoring and security tools are centralised and tagged within the dedicated VPC for ease of configuration
  • Role-based access control enables multiple teams to operate Big Mon 
  • Supports centralised AWS VPC tool farm to reduce monitoring cost
  • Integrated with Big Switch’s Multi-Cloud Director for centralised hybrid cloud management
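
Big Switch haven’t published the underlying mechanics as part of this announcement, but to make the “all VPCs of a user account” point in the list above a bit more concrete, here’s a minimal boto3 sketch of the kind of account-wide VPC inventory that any multi-VPC visibility deployment starts from. It’s illustrative only, not Big Mon itself, and the region is an assumption.

```python
# Hedged sketch: not Big Mon, just the account-wide VPC inventory that multi-VPC
# visibility starts from. The region is an assumption; swap in your own.
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")

for vpc in ec2.describe_vpcs()["Vpcs"]:
    vpc_id = vpc["VpcId"]
    subnets = ec2.describe_subnets(
        Filters=[{"Name": "vpc-id", "Values": [vpc_id]}]
    )["Subnets"]
    azs = sorted({s["AvailabilityZone"] for s in subnets})
    print(f"{vpc_id} ({vpc['CidrBlock']}): {len(subnets)} subnets across {azs}")
```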

Thoughts and Further Reading

It might seem a little odd that I’m covering news from a network platform vendor on this blog, given the heavy focus I’ve had over the years on storage and virtualisation technologies. But the world is changing. I work for a Telco now and cloud is dominating every infrastructure and technology conversation I’m having. Whether it’s private or public or hybrid, cloud is everywhere, and networks are a big part of that cloud conversation (much as they have been in the data centre), as is visibility into those networks.

Big Switch have been around for under 10 years, but they’ve already made some decent headway with their switching platform and east-west monitoring tools. They understand cloud networking, and particularly the challenges facing organisations leveraging complicated cloud networking topologies. 

I’m the first guy to admit that my network chops aren’t as sharp as they could be (if you watched me set up some Google WiFi devices over the weekend, you’d understand). But I also appreciate that visibility is key to having control over what can sometimes be an overly elastic / dynamic infrastructure. It’s been hard to see traffic between availability zones, between instances, and contained within VPCs. I also like that they’ve focussed on a consistent experience between the on-premises offering and the public cloud offering.

If you’re interested in learning more about Big Switch Networks, I also recommend checking out their labs.

Pure Storage Goes All In On Hybrid … Cloud

I recently had the opportunity to hear from Chadd Kenney about Pure Storage’s Cloud Data Services announcement and thought it worthwhile covering here. But before I get into that, Pure have done a little re-branding recently. You’ll now hear them referring to Cloud Data Infrastructure (their on-premises instances of FlashArray, FlashBlade, FlashStack) and Cloud Data Management (being their Pure1 instances).

 

The Announcement

So what is “Cloud Data Services”? It comprises three main pieces, each covered in more detail below:

  • Cloud Block Store for AWS;
  • CloudSnap; and
  • StorReduce for AWS.

According to Kenney, “[t]he right strategy is and not or, but the enterprise is not very cloudy, and the cloud is not very enterprise-y”. If you’ve spent time in any IT organisation, you’ll see that there is, indeed, a “Cloud divide” in play. What we’ve seen in the last 5 – 10 years is a marked difference in application architectures, consumption and management, and even storage offerings.

[image courtesy of Pure Storage]

 

Cloud Block Store

The first part of the puzzle is probably the most interesting for those of us struggling to move traditional application stacks to a public cloud solution.

[image courtesy of Pure Storage]

According to Pure, Cloud Block Store offers:

  • High reliability, efficiency, and performance;
  • Hybrid mobility and protection; and
  • Seamless APIs on-premises and cloud.

Kenney likens building a Purity solution on AWS to the approach Pure took in the early days of their existence, when they took off-the-shelf components and used optimised software to make them enterprise-ready. Now they’re doing the same thing with AWS, and addressing a number of the shortcomings of the underlying infrastructure through the application of the Purity architecture.

Features

So why would you want to run virtual Pure controllers on AWS? The idea is that Cloud Block Store:

  • Aggregates performance and reliability across many cloud stores;
  • Can be deployed HA across two availability zones (using active cluster);
  • Is always thin, deduplicated, and compressed;
  • Delivers instant space-saving snapshots; and
  • Is always encrypted.

Management and Orchestration

If you have previous experience with Purity, you’ll appreciate the management and orchestration experience remains the same.

  • Same management, with Pure1 managing on-premises instances and instances in the cloud
  • Consistent APIs on-premises and in cloud
  • Plugins to AWS and VMware automation
  • Open, full-stack orchestration
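
To give a flavour of what “consistent APIs on-premises and in cloud” might look like in practice, here’s a minimal sketch using the existing purestorage Python SDK. The management address and API token are placeholders, and I’m assuming (based on the announcement) that a Cloud Block Store instance presents the same Purity REST API as an on-premises FlashArray, so treat it as a sketch rather than a supported recipe.

```python
# Minimal sketch using the purestorage Python SDK (pip install purestorage).
# Endpoint and token are placeholders. The assumption, per the announcement, is that
# Cloud Block Store exposes the same Purity REST API as an on-premises FlashArray,
# so the same calls should work against either.
import purestorage

array = purestorage.FlashArray("array-or-cbs.example.com", api_token="YOUR-API-TOKEN")

array.create_volume("app01-data", "1T")                       # thin, deduplicated, encrypted
array.create_snapshot("app01-data", suffix="pre-migration")   # space-saving snapshot

print(array.get())  # basic array details, same call on-prem or in AWS
```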

Use Cases

Pure say that you can use this kind of solution in a number of different scenarios, including DR, backup, and migration in and between clouds. If you want to use ActiveCluster between AWS regions, you might have some trouble with latency, but in those cases other replication options are available.

[image courtesy of Pure Storage]

Note that Cloud Block Store is available in a few different deployment configurations:

  • Test/Dev – using a single controller instance (EBS can’t be attached to more than one EC2 instance)
  • Production – ActiveCluster (2 controllers, either within or across availability zones)

 

CloudSnap

Pure tell us that we’ve moved away from “disk to disk to tape” as a data protection philosophy and we now should be looking at “Flash to Flash to Cloud”. CloudSnap allows FlashArray snapshots to be easily sent to Amazon S3. Note that you don’t necessarily need FlashBlade in your environment to make this work.

[image courtesy of Pure Storage]

For the moment, this is only being certified on AWS.

 

StorReduce for AWS

Pure acquired StorReduce a few months ago and now they’re doing something with it. If you’re not familiar with them, “StorReduce is an object storage deduplication engine, designed to enable simple backup, rapid recovery, cost-effective retention, and powerful data re-use in the Amazon cloud”. You can leverage any array, or existing backup software – it doesn’t need to be a Pure FlashArray.

Features

According to Pure, you get a lot of benefits with StorReduce, including:

  • Object fabric – secure, enterprise ready, highly durable cloud object storage;
  • Efficient – Reduces storage and bandwidth costs by up to 97%, enabling cloud storage to cost-effectively replace disk & tape;
  • Fast – Fastest Deduplication engine on the market. 10s of GiB/s or more sustained 24/7;
  • Cloud Native – Native S3 interface enabling openness, integration, and data portability. All Data & Metadata stored in object store;
  • Single namespace – Stores in a single data hub across your data centre to enable fast local performance and global data protection; and
  • Scalability – Software nodes scale linearly to deliver 100s of PBs and 10s of GBs bandwidth.
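
Because StorReduce fronts everything with a native S3 interface, existing S3 tooling should (in theory) just point at it. Here’s a hedged boto3 sketch, with an endpoint URL and credentials I’ve made up rather than documented StorReduce values, showing that consuming an S3-compatible deduplication front end is just a matter of overriding the endpoint.

```python
# Hedged sketch: consuming an S3-compatible endpoint (e.g. a StorReduce front end)
# with standard boto3 by overriding endpoint_url. Endpoint and credentials below are
# placeholders -- use whatever your deployment actually exposes.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://storreduce.example.internal",  # hypothetical endpoint
    aws_access_key_id="PLACEHOLDER_KEY",
    aws_secret_access_key="PLACEHOLDER_SECRET",
)

# Backup software or scripts write objects exactly as they would to native S3;
# deduplication happens behind the S3 API.
s3.put_object(Bucket="backups", Key="daily/db01-2018-12-01.dump", Body=b"...")

for obj in s3.list_objects_v2(Bucket="backups", Prefix="daily/").get("Contents", []):
    print(obj["Key"], obj["Size"])
```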

 

Thoughts and Further Reading

The title of this post was a little misleading, as Pure have been doing various cloud things for some time. But sometimes I give in to my baser instincts and like to try and be creative. It’s fine. In my mind the Cloud Block Store for AWS piece of the Cloud Data Services announcement is possibly the most interesting one. It seems like a lot of companies are announcing these kinds of virtualised versions of their hardware-based appliances that can run on public cloud infrastructure. Some of them are just encapsulated instances of the original code, modified to deal with a VM-like environment, whilst others take better advantage of the public cloud architecture.

So why are so many of the “traditional” vendors producing these kinds of solutions? Well, the folks at AWS are pretty smart, but it’s a generally well understood fact that the enterprise moves at enterprise pace. As a result, enterprises may not be terribly well positioned to spend a lot of time and effort refactoring their applications to a more cloud-friendly architecture. But that doesn’t mean that the CxOs haven’t already been convinced that they don’t need their own infrastructure anymore. So the operations folks are being pushed to migrate out of their DCs and into public cloud provider infrastructure. The problem is that, if you’ve spent a few minutes looking at what the likes of AWS and GCP offer, you’ll see that they’re not really doing things in the same way that their on-premises comrades are. AWS expects you to replicate your data at an application level, for example, because those EC2 instances will sometimes just up and disappear.

So how do you get around the problem of forcing workloads into public cloud without a lot of the safeguards associated with on-premises deployments? You leverage something like Pure’s Cloud Block Store. It overcomes a lot of the issues associated with just running EC2 on EBS, and has the additional benefit of giving your operations folks a consistent management and orchestration experience. Additionally, you can still do things like run ActiveCluster between and within Availability Zones, so your mission critical internal kitchen roster application can stay up and running when an EC2 instance goes bye bye. You’ll pay a bit more (or less) than you would with normal EBS, but you’ll get some other features too.

I’ve argued before that if enterprises are really serious about getting into public cloud, they should be looking to work towards refactoring their applications. But I also understand that the reality of enterprise application development means that this type of approach is not always possible. After all, enterprises are (generally) in the business of making money. If you come to them and can’t show exactly how they’ll save money by moving to public cloud (and let’s face it, it’s not always an easy argument), then you’ll find it even harder to convince them to undertake significant software engineering efforts simply because the public cloud folks like to do things a certain way. I’m rambling a bit, but my point is that these types of solutions solve a problem that we all wish didn’t exist, but does.

Justin did a great write-up here that I recommend reading. Note that both Cloud Block Store and StorReduce are in Beta with planned general availability in 2019.

Scale Computing Have Been Busy

I recently had the opportunity to get on a call with Alan Conboy to talk about what’s been happening with Scale Computing lately. It was an interesting chat, as always, and I thought I’d share some of the news here.

 

Detroit Rock City

It’s odd how sometimes I forget that pretty much every type of business in existence uses some form of IT. Arts and performance organisations, such as the Detroit Symphony Orchestra, are no exception. They are also now very happy Scale customers. There’s a YouTube video detailing their experiences that you can check out here.

 

Lenovo Partnership

Scale and Lenovo recently announced a strategic partnership, focussed primarily on edge workloads, with particular emphasis on retail and industrial environments. You can download a solution brief here. This doesn’t mean that Lenovo are giving up on some of their other HCI partnerships, but it does give them a competent partner to attack the edge infrastructure market.

 

GCG, Yeah You Know Me

Grupo Colón Gerena is a Puerto Rico-based “restaurant management company that owns franchises of brands including Wendy’s, Applebee’s, Famous Dave’s, Sizzler’s, Longhorn Steakhouse, Olive Garden and Red Lobster throughout the island”. You may recall Puerto Rico suffered through some pretty devastating weather in 2017 thanks to Hurricane Maria. GCG have been running the bulk of their workload in Google Cloud since just before the event, and are still deciding whether they really want to move it back to an on-premises solution. There’s definitely a good story with Scale delivering workloads from the edge to the core and through to Google Cloud. You can read the full case study here.

 

Thoughts

It’s no big secret that I’m a fan of Scale Computing. And not just because I have an old HC1000 in my office that I fire up every now and then (Collier I’m still waiting on those SSDs you promised me a few years ago). They are relentlessly focussed on delivering easy to use solutions that work well and deliver great resiliency and performance, particularly in smaller environments. Their DRaaS play, and partnership with Google, has opened up some doors to customers that may not have considered Scale previously. The Lenovo partnership, and success with customers like GCG and DSO, is proof that Scale are doing a lot of good stuff in the HCI space.

Anyone who’s had the good fortune to deal with Scale, from their executives and founders through to their support staff, will tell you that they’re super easy to deal with and pretty good at what they do. It’s great to see them enjoying some success. It strikes me that they go about their business without a lot of the chest beating and carry on associated with some other vendors in the industry. This is a good thing, and I’m looking forward to seeing what comes next for them.

Elastifile Announces v3.0

Elastifile recently announced version 3.0 of their product. I had the opportunity to speak to Jerome McFarland (VP of Marketing) and thought I’d share some information from the announcement here. If you haven’t heard of them before, “Elastifile augments public cloud capabilities and facilitates cloud consumption by delivering enterprise-grade, scalable file storage in the cloud”.

 

The Announcement

ClearTier

One of the major features of the 3.0 release is “ClearTier”, delivering integration between file and object storage in public clouds. With ClearTier, you have object storage expanding the file system namespace. The cool thing about this is that Elastifile’s ECFS provides transparent read / write access to all data. No need to re-tool applications to take advantage of the improved economics of object storage in the public cloud.

How Does It Work?

All data is accessible through ECFS via a standard NFS mount, and application access to object data is routed automatically. Data tiering occurs automatically according to user-defined policies specifying:

  • Targeted capacity ratio between file and object;
  • Eligibility for data demotion (i.e. minimum time since last access); and
  • Promotion policies controlling the response to object data access.
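
Elastifile haven’t shared how the policy engine is built, but the demotion side of the list above is easy to picture. Here’s a toy Python sketch of the decision logic only: which files are eligible to move from the file tier to object, based on last-access age and a targeted file-tier capacity. It’s purely illustrative and doesn’t touch ECFS at all.

```python
# Toy illustration of the tiering decision described above -- NOT Elastifile's code.
# Given file metadata, pick demotion candidates: anything not accessed within
# `min_idle_days`, coldest first, until the file tier is back under the targeted
# file-tier capacity.
import os
import time

def demotion_candidates(root, min_idle_days=30, target_file_bytes=10**12):
    cutoff = time.time() - min_idle_days * 86400
    eligible = []
    total = 0
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            total += st.st_size
            if st.st_atime < cutoff:              # idle long enough to be eligible
                eligible.append((st.st_atime, st.st_size, path))

    eligible.sort()                               # coldest (oldest access time) first
    candidates = []
    for _, size, path in eligible:
        if total <= target_file_bytes:            # file tier back under target ratio
            break
        candidates.append(path)
        total -= size
    return candidates

if __name__ == "__main__":
    for path in demotion_candidates("/mnt/ecfs", min_idle_days=30):
        print("demote ->", path)
```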

Bursting

ClearTier gets even more interesting when you combine it with Elastifile’s CloudConnect, by using CloudConnect to get data to the public cloud in the first place, and then using ClearTier to push data to object storage.

[image courtesy of Elastifile]

It becomes a simple process, and consists of two steps:

  1. Move on-premises data (from any NAS) to cloud-based object storage using CloudConnect; and
  2. Deploy ECFS with pointer to designated object store.

Get Snappy

ClearTier also provides the ability to store snapshots on an object tier. Snapshots occur automatically according to user-defined policies specifying:

  • Data to include;
  • Destination for snapshot (i.e. file storage / object storage); and
  • Schedule for snapshot creation.

The great thing is that all snapshots are accessible through ECFS via the same NFS mount.

 

Thoughts And Further Reading

I was pretty impressed with Elastifile’s CloudConnect solution when they first announced it. When you couple CloudConnect with something like ClearTier, and have it sitting on top of the ECFS foundation, it strikes me as a pretty cool solution. If you’re using applications that rely heavily on NFS, for example, ClearTier gives you a way to leverage the traditionally low cost of cloud object storage with the improved performance of file. I like the idea that you can play with the ratio of file and object, and I’m a big fan of not having to re-tool my file-centric applications to take advantage of object economics. The ability to store a bunch of snapshots on the object tier also adds increased flexibility in terms of data protection and storage access options.

The ability to burst workloads is exactly the kind of technical public cloud use case that we’ve been talking about in slideware for years now. The reality, however, has been somewhat different. It looks like Elastifile are delivering a solution that competes aggressively with some of the leading cloud providers’ object solutions, whilst also giving the storage array vendors, now dabbling in cloud solutions, pause for thought. There are a bunch of interesting use cases, particularly if you need to access a bunch of compute, and large data sets via file-based storage, in a cloud environment for short periods of time. If you’re looking for a cost-effective, scalable storage solution, I think that Elastifile are worth checking out.

Cohesity Announces Helios

I recently had the opportunity to hear from Cohesity (via a vExpert briefing – thanks for organising this TechReckoning!) regarding their Helios announcement and thought I’d share what I know here.

 

What Is It?

If we’re not talking about the god and personification of the Sun, what are we talking about? Cohesity tells me that Helios is a “SaaS-based data and application orchestration and management solution”.

[image courtesy of Cohesity]

Here is the high-level architecture of Helios. There are three main features:

  • Multi-cluster management – Control all your Cohesity clusters located on-premises, in the cloud or at the edge from a single dashboard;
  • SmartAssist – Gives critical global operational data to the IT admin; and
  • Machine Learning Engine – Gives IT Admins machine-driven intelligence so that they can make informed decisions.

All of this happens when Helios collects, anonymises, aggregates, and analyses globally available metadata and gives actionable recommendations to IT Admins.

 

Multi-cluster Management

Multi-cluster management is just that: the ability to manage more than one cluster through a unified UI. The cool thing is that you can roll out policies or make upgrades across all your locations and clusters with a single click. It also provides you with the ability to monitor your Cohesity infrastructure in real-time, as well as being able to search and generate reports on the global infrastructure. Finally, there’s an aggregated, simple to use dashboard.

 

SmartAssist

SmartAssist is a feature that provides you with the ability to have smart management of SLAs in the environment. The concept is that if you configure two protection jobs in the environment with competing requirements, the job with the higher SLA will get priority. I like this idea as it prevents people doing silly things with protection jobs.

 

Machine Learning

The Machine Learning part of the solution provides a number of things, including insights into capacity consumption. And proactive wellness? It’s not a pitch for some dodgy natural health product, but instead gives you the ability to perform:

  • Configuration validations, preventing you from doing silly things in your environment;
  • Blacklist version control, stopping known problematic software releases spreading too far in the wild; and
  • Hardware health checks, ensuring things are happy with your hardware (important in a software-defined world).

 

Thoughts and Further Reading

There’s a lot more going on with Helios, but I’d like to have some stick time with it before I have a lot more to say about it. People are perhaps going to be quick to compare this with other SaaS offerings, but I think they might be doing some different things, with a bit of a different approach. You can’t go five minutes on the Internet without hearing about how ML is changing the world. If nothing else, this solution delivers a much needed consolidated view of the Cohesity environment. This seems like an obvious thing, but probably hasn’t been necessary until Cohesity landed the type of customers that had multiple clusters installed all over the place.

I also really like the concept of a feature like SmartAssist. There’s only so much guidance you can give people before they have to do some thinking for themselves. Unfortunately, there are still enough environments in the wild where people are making the wrong decision about what priority to place on jobs in their data protection environment. SmartAssist can do a lot to take away the possibility that things will go awry from an SLA perspective.

You can grab a copy of the data sheet here, and read a blog post by Raj Dutt here. El Reg also has some coverage of the announcement here.

Rubrik Announces Polaris Radar

Polaris?

I’ve written about Rubrik’s Polaris offering in the past, with GPS being the first cab off the rank.  You can think of GPS as the command and control platform, offering multi-cloud control and policy management via the Polaris SaaS framework. I recently had the opportunity to hear from Chris Wahl about Radar and thought it worthwhile covering here.

 

The Announcement

Rubrik announced recently (fine, a few weeks ago) that Polaris Radar is now generally available.

 

The Problem

People don’t want to hear about the problem, because they already know what it is and they want to spend time hearing about how the vendor is going to solve it. I think in this instance, though, it’s worth re-iterating that security attacks happen. A lot. According to the Cisco 2017 Annual Cybersecurity Report, ransomware attacks are growing by more than 350% annually. It’s Rubrik’s position that security is heavily focused on the edge, with firewalls and desktop protection being the main tools deployed. “Defence in depth is lopsided”, with a focus on prevention, not necessarily on recovery. According to Wahl, “it’s hard to bounce back fast”.

 

What It Does

So what does Radar do (in the context of Rubrik Polaris)? The idea is that it adds the intelligence to know when you’ve been hit, and helps you to recover faster. The goal of Radar is fairly straightforward, with the following activities being key to the solution:

  • Detection – identify all strains of ransomware;
  • Analysis – understand impact of an attack; and
  • Recovery – restore as quickly as possible.

Radar achieves this by:

  • Detecting anomalies – leverage insights on suspicious activity to accelerate detection;
  • Analysing threat impact – spend less time discovering which applications and files were impacted; and
  • Accelerating recovery – minimise downtime by simplifying manual processes into just a few clicks.

 

How?

Rubrik tell me they use (drumroll please) Machine Learning for detection. Is it really machine learning? That doesn’t really matter for the purpose of this story.

[image courtesy of Rubrik]

The machine learning model learns the baseline behaviour, detects anomalies and alerts as they come in. So how does that work then?

1. Detect anomalies – apply machine learning on application metadata to detect and alert unusual change activity with protected data, such as ransomware.

What happens post anomaly detection?

  • Email alert is sent to user
  • Radar inspects snapshot for encryption
  • Results uploaded to Polaris
  • User informed of results (via the Polaris UI)

2. Analyse threat impact – Visualise how an attack impacted the system with a detailed view of file content changes at the time of the event.

3. Accelerate recovery – Select all impacted resources, specify the desired location, and restore the most recent clean versions with a few clicks. Rubrik automates the rest of the restore process.
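
Rubrik haven’t shared the details of their model, so take this with a grain of salt, but the “learn a baseline, flag unusual change activity” idea in step 1 can be illustrated with a very small sketch: track the changed-data volume per protected object per day, and flag anything that sits well above the rolling baseline. It’s a toy, not Radar.

```python
# Toy anomaly detector over backup metadata -- illustrative only, not Rubrik's model.
# Flag a day whose changed-data volume sits well above the rolling baseline, which is
# the sort of spike mass encryption by ransomware tends to produce.
from statistics import mean, stdev

def flag_anomalies(daily_changed_gb, window=14, threshold=3.0):
    """Return indices of days whose change rate exceeds baseline + threshold*stdev."""
    anomalies = []
    for i in range(window, len(daily_changed_gb)):
        baseline = daily_changed_gb[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            sigma = 1e-9  # avoid division issues on a perfectly flat baseline
        if daily_changed_gb[i] > mu + threshold * sigma:
            anomalies.append(i)
    return anomalies

# Example: a fortnight of normal churn, then a huge spike on the last day
history = [12, 15, 11, 14, 13, 12, 16, 14, 13, 15, 12, 14, 13, 15, 480]
print(flag_anomalies(history))  # -> [14]
```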

 

Thoughts and Further Reading

I think there’s a good story to tell with Polaris. SaaS is an accessible way of delivering features to the customer base without the angst traditionally associated with appliance platform upgrades. Data security should be a big part of data protection. After all, data protection is generally critical to recovery once there’s been a serious breach. We’re no longer just protecting against users inside the organisation accidentally deleting large chunks of data, or having to recover from serious equipment failures. Instead, we’re faced with the reality that a bunch of idiots with bad intentions are out to wreck some of our stuff and make a bit of coin on the side. The sooner you know something has gone awry, the quicker you can hopefully recover from the problem (and potentially re-evaluate some of your security). Being attacked shouldn’t be about being ashamed, but it should be about being able to quickly recover and get on with whatever your company does to make its way in the world. With this in mind, I think that Rubrik are on the right track.

You can grab the data sheet from here, and Chris has an article worth checking out here. You can also register to access the Technical Overview here.

Datrium Announces CloudShift

I recently had the opportunity to speak to Datrium’s Brian Biles and Craig Nunes about their CloudShift announcement and thought it was worth covering some of the highlights here.

 

DVX Now

Datrium have had a scalable protection tier and focus on performance since their inception.

[image courtesy of Datrium]

The “mobility tier”, in the form of Cloud DVX, has been around for a little while now. It’s simple to consume (via SaaS), yields decent deduplication results, and the Datrium team tells me it also delivers fast RTO. There’s also solid support for moving data between DCs with the DVX platform. This all sounds like the foundation for something happening in the hybrid space, right?

 

And Into The Future

Datrium pointed out that disaster recovery has traditionally been a good way of finding out where a lot of the problems exist in your data centre. There’s nothing like failing a failover to understand where the integration points in your on-premises infrastructure are lacking. Disaster recovery needs to be a seamless, integrated process, but data centres are still built on various silos of technology. People are still using clouds for a variety of reasons, and some clouds do some things better than others. It’s easy to pick and choose what you need to get things done. This has been one of the big advantages of public cloud and a large reason for its success. As a result of this, however, the silos are moving to the cloud, even as they’re fixed in the DC.

As a result of this, Datrium are looking to develop a solution that delivers on the following theme: “Run. Protect. Any Cloud”. The idea is simple, offering up an orchestrated DR offering that makes failover and failback a painless undertaking. Datrium tell me they’ve been a big supporter of VMware’s SRM product, but have observed that there can be problems with SRM being an orchestration-only layer: adapters can have issues from time to time, and managing the solution can be complicated. With CloudShift, Datrium are taking a vertical stack approach, positioning CloudShift as an orchestrator for DR as a SaaS offering. Note that it only works with Datrium.

[image courtesy of Datrium]

The idea behind CloudShift is pretty neat. With Cloud DVX you can already backup VMs to AWS using S3 and EC2. The idea is that you can leverage data already in AWS to fire up VMs on AWS (using on-demand instances of VMware Cloud on AWS) to provide temporary disaster recovery capability. The good thing about this is that converting your VMware VMs to someone else’s cloud is no longer a problem you need to resolve. You’ll need to have a relationship with AWS in the first place – it won’t be as simple as entering your credit card details and firing up an instance. But it certainly seems a lot simpler than having an existing infrastructure in place, and dealing with the conversion problems inherent in going from vSphere to KVM and other virtualisation platforms.

[image courtesy of Datrium]

Failover and failback is a fairly straightforward process as well, with the following steps required for failover and failback of workloads:

  1. Backup to Cloud DVX / S3 – This is ongoing and happens in the background;
  2. Failover required – the CloudShift runbook is initiated;
  3. Restart VM groups on VMC – VMs are rehydrated from data in S3; and
  4. Failback to on-premises – CloudShift reverses the process with deltas using change block tracking.
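
CloudShift delivers this as SaaS, so nobody will be scripting it by hand, but to make the sequencing above concrete here’s a hedged sketch of what the runbook boils down to. Every function in it is a hypothetical stub of my own; none of these are real Datrium or VMware Cloud APIs.

```python
# Hedged sketch of the runbook sequencing only. Every helper below is a hypothetical
# stub standing in for a CloudShift / VMC operation -- not a real Datrium or VMware API.

def rehydrate_from_s3(vm_group):
    print(f"rehydrating {vm_group} from Cloud DVX data in S3")          # placeholder

def power_on(vm_group, target):
    print(f"powering on {vm_group} in {target}")                         # placeholder

def replay_deltas_on_premises(vm_group):
    print(f"replaying changed blocks for {vm_group} back on-premises")   # placeholder

def failover(vm_groups):
    # Step 1 (ongoing backup to Cloud DVX / S3) is assumed to already be happening.
    for group in vm_groups:                        # ordered VM groups from the runbook
        rehydrate_from_s3(group)
        power_on(group, target="VMware Cloud on AWS")

def failback(vm_groups):
    for group in vm_groups:
        replay_deltas_on_premises(group)           # deltas only, via change block tracking
        power_on(group, target="on-premises")

failover(["db-tier", "app-tier", "web-tier"])
failback(["db-tier", "app-tier", "web-tier"])
```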

It’s being pitched as a very simple way to run DR, something that has been notorious for being a stressful activity in the past.

 

Thoughts and Further Reading

CloudShift is targeted for release in the first half of 2019. The economic power of DRaaS in the cloud is very strong. People love the idea that they can access the facility on-demand, rather than having passive infrastructure doing nothing on the off chance that it will be required. There’s obviously some additional cost when you need to use on demand versus reserved resources, but this is still potentially cheaper than standing up and maintaining your own secondary DC presence.

Datrium are focused on keeping inherently complex activities like DR simple. I’ll be curious to see whether they’re successful with this approach. The great thing about a generic orchestration framework like VMware SRM is that you can use a number of different vendors in the data centre and not have a huge problem with interoperability. The downside to this approach is that this broader ecosystem can leave you exposed to problems with individual components in the solution. Datrium is taking a punt that their customers are going to see the advantages of having an integrated approach to leveraging on demand services. I’m constantly astonished that people don’t get more excited about DRaaS offerings. It’s really cool that you can get this level of protection without having to invest a tonne in running your own passive infrastructure. If you’d like to read more about CloudShift, there’s a blog post that sheds some more light on the solution on Datrium’s site, and you can grab a white paper here too.

Nexsan Announces Assureon Cloud Transfer

Announcement

Nexsan announced Cloud Transfer for their Assureon product a little while ago. I recently had the chance to catch up with Gary Watson (Founder / CTO at Nexsan) and thought it would be worth covering the announcement here.

 

Assureon Refresher

Firstly, though, it might be helpful to look at what Assureon actually is. In short, it’s an on-premises storage archive that offers:

  • Long term archive storage for fixed content files;
  • Dependable file availability, with files being audited every 90 days;
  • Unparalleled file integrity; and
  • A “policy” system for protecting and stubbing files.

Notably, there is always a primary archive and a DR archive included in the price. No half-arsing it here – which is something that really appeals to me. Assureon also doesn’t have a “delete” key as such – files are only removed based on defined Retention Rules. This is great, assuming you set up your policies sensibly in the first place.
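
The 90-day audit and immutability claims are the interesting bits for me. I have no visibility into how Nexsan implement it, but the general idea of a fixed-content integrity audit is simple enough to sketch: record a fingerprint for each file at ingest, then periodically re-hash and flag anything that no longer matches (in a real system you’d repair from the DR archive). Purely illustrative.

```python
# Toy illustration of a fixed-content integrity audit -- not Nexsan's implementation.
# A manifest maps each archived file to the SHA-256 recorded at ingest; an audit pass
# re-hashes the content and reports anything that no longer matches.
import hashlib
import json
import os

def sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def audit(archive_root, manifest_path):
    with open(manifest_path) as f:
        manifest = json.load(f)                      # {relative_path: expected_sha256}
    for rel_path, expected in manifest.items():
        if sha256(os.path.join(archive_root, rel_path)) != expected:
            print(f"INTEGRITY FAILURE: {rel_path}")  # would trigger repair from the DR archive

audit("/archive/primary", "/archive/manifest.json")  # placeholder paths
```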

 

Assureon Cloud Transfer

Cloud Transfer provides the ability to move data between on-premises and cloud instances. The idea is that it will:

  • Provide reliable and efficient cloud mobility of archived data between cloud server instances and between cloud vendors; and
  • Optimise cloud storage and backup costs by offloading cold data to on-premises archive.

It’s being positioned as useful for clients who have a large unstructured data footprint on public cloud infrastructure and are looking to reduce their costs for storing data up there. There’s currently support for Amazon AWS and Microsoft Azure, with Google support coming in the near future.

[image courtesy of Nexsan]

There’s stub support for those applications that support it. There’s also an optional NFS / SMB interface that can be configured in the cloud as an Assureon archiving target that caches hot files and stubs cold files. This is useful for those non-Windows applications that have a lot of unstructured data that could be moved to an archive.

 

Thoughts and Further Reading

The concept of dedicated archiving hardware and software bundles, particularly ones that live on-premises, might seem a little odd to some folks who spend a lot of time failing fast in the cloud. There are plenty of enterprises, however, that would benefit from the level of rigour that Nexsan have wrapped around the Assureon product. It’s my strong opinion that too many people still don’t understand the difference between backup and recovery and archive data. The idea that you need to take archive data and make it immutable (and available) for a long time has great appeal, particularly for organisations getting slammed with a whole lot of compliance legislation. Vendors have been talking about reducing primary storage use for years, but there seems to have been some pushback from companies not wanting to invest in these solutions. It’s possible that this was also a result of some kludgy implementations that struggled to keep up with the demands of the users. I can’t speak for the performance of the Assureon product, but I like the fact that it’s sold as a pair, and with a lot of the decision-making around protection taken away from the end user. As someone who worked in an organisation that liked to cut corners on this type of thing, it’s nice to see that.

But why would you want to store stuff on-premises? Isn’t everyone moving everything to the cloud? No, they’re not. I don’t imagine that this type of product is being pitched at people running entirely in public cloud. It’s more likely that, if you’re looking at this type of solution, you’re probably running a hybrid setup, and still have a footprint in a colocation facility somewhere. The benefit of this is that you can retain control over where your archived data is placed. Some would say that’s a bit of a pain, and an unnecessary expense, but people familiar with compliance will understand that business is all about a whole lot of wasted expense in order to make people feel good. But I digress. Like most on-premises solutions, the Assureon offering compares well with a public cloud solution on a $/GB basis, assuming you’ve got a lot of sunk costs in place already with your data centre presence.

The immutability story is also a pretty good one when you start to think about organisations that have been hit by ransomware in the last few years. That stuff might roll through your organisation like a hot knife through butter, but it won’t be able to do anything with your archive data – that stuff isn’t going anywhere. Combine that with one of those fancy next generation data protection solutions and you’re in reasonable shape.

In any case, I like what the Assureon product offers, and am looking forward to seeing Nexsan move beyond the Windows-only platform support that it currently offers. You can read the Nexsan Assureon Cloud Transfer press release here. David Marshall covered the announcement over at VMblog and ComputerWeekly.com did an article as well.

Cloudistics, Choice and Private Cloud

I’ve had my eye on Cloudistics for a little while now.  They published an interesting post recently on virtualisation and private cloud. It makes for an interesting read, and I thought I’d comment briefly and post this article if for no other reason than you can find your way to the post and check it out.

TL;DR – I’m rambling a bit, but it’s not about X versus Y, it’s more about getting your people and processes right.

 

Cloud, Schmoud

There are a bunch of different reasons why you’d want to adopt a cloud operating model, be it public, private or hybrid. These include the ability to take advantage of:

  • On-demand service;
  • Broad network access;
  • Resource pooling;
  • Rapid elasticity; and
  • Measured service, or pay-per-use.

Some of these aspects of cloud can be more useful to enterprises than others, depending in large part on where they are in their journey (I hate calling it that). The thing to keep in mind is that cloud is really just a way of doing things slightly differently to address deficiencies in areas that are normally not tied to one particular piece of technology. What I mean by that is that cloud is a way of dealing with some of the issues that you’ve probably seen in your IT organisation. These include:

  • Poor planning;
  • Complicated network security models;
  • Lack of communication between IT and the business;
  • Applications that don’t scale; and
  • Lack of capacity planning.

Operating Expenditure

These are all difficult problems to solve, primarily because people running IT organisations need to be thinking not just about technology problems, but also people and business problems. And solving those problems takes resources, something that’s often in short supply. Couple that with the fact that many businesses feel like they’ve been handing out too much money to their IT organisations for years, and you start to understand why many enterprises are struggling to adapt to new ways of doing things. One thing that public cloud does give you is a way to consume resources via OpEx rather than CapEx. The benefit here is that you’re only consuming what you need, and not paying for the whole thing to be built out on the off chance you’ll use it all over the five year life of the infrastructure. Private cloud can still provide this kind of benefit to the business via “showback” mechanisms that can really highlight the cost of infrastructure being consumed by internal business units. Everyone has complained at one time or another about the Finance group having 27 test environments; now they can let the executives know just what that actually costs.
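
Showback doesn’t have to be sophisticated to be useful, either. As a trivial illustration (the unit rates and usage figures below are entirely made up), this is the sort of arithmetic that turns resource consumption into a monthly number an executive can react to.

```python
# Trivial showback illustration -- unit rates and usage figures are made up.
# The point is simply that per-business-unit consumption becomes a monthly dollar
# figure someone can be asked to justify.
RATES = {"vcpu": 15.0, "ram_gb": 5.0, "storage_gb": 0.10}   # $/unit/month (assumed)

usage = {
    "Finance":   {"vcpu": 216, "ram_gb": 864, "storage_gb": 40_000},  # those 27 test envs
    "Marketing": {"vcpu": 16,  "ram_gb": 64,  "storage_gb": 2_000},
}

for unit, consumed in usage.items():
    cost = sum(RATES[resource] * amount for resource, amount in consumed.items())
    print(f"{unit}: ${cost:,.2f}/month")
```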

Are You Really Cloud Native?

Another issue with moving to cloud is that a lot of enterprises are still looking to leverage Infrastructure-as-a-Service (IaaS) as an extension of on-premises capabilities rather than using cloud-native technologies. If you’ve gone with lift and shift (or “move and improve“) you’ve potentially just jammed a bunch of the same problems you had on-premises in someone else’s data centre. The good thing about moving to a cloud operating model (even if it’s private) is that you’ll get people (hopefully) used to consuming services from a catalogue, and taking responsibility for how much their footprint occupies. But if your idea of transformation is running SQL 2005 on Windows Server 2003 deployed from VMware vRA then I think you’ve got a bit of work to do.

 

Conclusion

As Cloudistics point out in their article, it isn’t really a conversation about virtualisation versus private cloud, as virtualisation (in my mind at least) is the platform that makes a lot of what we do nowadays with private cloud possible. What is more interesting is the private versus public debate. But even that one is no longer as clear cut as vendors would like you to believe. If a number of influential analysts are right, most of the world has started to realise that it’s all about a hybrid approach to cloud. The key benefits of adopting a new way of doing things are more about fixing up the boring stuff, like process. If you think you can get your house in order simply by replacing the technology that underpins it, then you’re in for a tough time.

SwiftStack Announces 1space

SwiftStack recently announced 1space, and I was lucky enough to snaffle some time with Joe Arnold to talk more about what it all means. I thought it would be useful to do a brief post, as I really do like SwiftStack, and I feel like I don’t spend enough time talking about them.

 

The Announcement

So what exactly is 1space? It’s basically SwiftStack delivering access to their storage across both on-premises and public cloud. But what does that mean? Well, you get some cool features as a result, including:

  • Integrated multi-cloud access
  • Scale-out & high-throughput data movement
  • Highly reliable & available policy execution
  • Policies for lifecycle, data protection & migration
  • Optional, scale-out containers with AWS S3 support
  • Native access in public cloud (direct to S3, GCS, etc.)
  • Data created in public cloud accessible on-premises
  • Native format enabling cloud-native services

[image courtesy of SwiftStack]

According to Arnold, one of the really cool things about this is that it “provides universal access over both file protocols and object APIs to a single storage namespace; it is increasingly used for distributed workflows across multiple geographic regions and multiple clouds”.
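
To make that file-and-object duality a little more concrete, here’s a hedged sketch of the consumption pattern being described: an application writes through the mounted file namespace, and a cloud-native service reads the same data natively over the S3 API. The mount point, endpoint, bucket, and credentials are all placeholders of mine, not anything SwiftStack has documented.

```python
# Hedged sketch of the consumption model: write via the file interface, read the same
# data natively over the S3 API. Mount point, endpoint, bucket, and credentials are
# placeholders -- not SwiftStack-documented values.
import boto3

# An application writes through the mounted file namespace (placeholder path)
with open("/mnt/1space/projects/renders/frame-0001.exr", "wb") as f:
    f.write(b"rendered frame data")

# A cloud-native service reads the same object via S3, with no copy or conversion step
s3 = boto3.client(
    "s3",
    endpoint_url="https://onespace.example.com",  # hypothetical endpoint
    aws_access_key_id="PLACEHOLDER_KEY",
    aws_secret_access_key="PLACEHOLDER_SECRET",
)
obj = s3.get_object(Bucket="projects", Key="renders/frame-0001.exr")
print(len(obj["Body"].read()), "bytes read via the object API")
```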

 

Metadata Search

But wait …

One of the really nice things that SwiftStack has done is add integrated metadata search via a desktop client for Windows, macOS, and Linux. It’s called MetaSync.

 

Thoughts

This has been a somewhat brief post, but something I did want to focus on was the fact that this product has been open-sourced. SwiftStack have been pretty keen on open source as a concept, and I think that comes through when you have a look at some of their contributions to the community. These contributions shouldn’t be underestimated, and I think it’s important that we call out when vendors are contributing to the open source community. Let’s face it, a whole lot of startups are taking advantage of code generated by the open source community, and a number of them have the good sense to know that it’s most certainly a two-way street, and they can’t relentlessly pillage the community without it eventually falling apart.

But this announcement isn’t just me celebrating the contributions of neckbeards from within the vendor community and elsewhere. SwiftStack have delivered something that is really quite cool. In much the same way that storage types won’t shut up about NVMe over Fabrics, cloud folks are really quite enthusiastic about the concept of multi-cloud connectivity. There are a bunch of different use cases where it makes sense to leverage a universal namespace for your applications. If you’d like to see SwiftStack in action, check out this YouTube channel (there’s a good video about 1space here) and if you’d like to take SwiftStack for a spin, you can do that here.