Elastifile Announces v3.0

Elastifile recently announced version 3.0 of their product. I had the opportunity to speak to Jerome McFarland (VP of Marketing) and thought I’d share some information from the announcement here. If you haven’t heard of them before, “Elastifile augments public cloud capabilities and facilitates cloud consumption by delivering enterprise-grade, scalable file storage in the cloud”.

 

The Announcement

ClearTier

One of the major features of the 3.0 release is “ClearTier”, delivering integration between file and object storage in public clouds. With ClearTier, you have object storage expanding the file system namespace. The cool thing about this is that Elastifile’s ECFS provides transparent read / write access to all data. No need to re-tool applications to take advantage of the improved economics of object storage in the public cloud.

How Does It Work?

All data is accessible through ECFS via a standard NFS mount, and application access to object data is routed automatically. Data tiering occurs automatically according to user-defined policies specifying:

  • Targeted capacity ratio between file and object;
  • Eligibility for data demotion (i.e. minimum time since last access); and
  • Promotion policies controlling the response to object data access.
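
To make that a little more concrete, here’s a minimal Python sketch of how a tiering engine might act on those three policy settings. To be clear, this isn’t Elastifile’s implementation or API; the class names, fields, and defaults are entirely hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical policy object, loosely modelled on the three ClearTier knobs above.
@dataclass
class TierPolicy:
    target_file_ratio: float = 0.2                            # desired share of data on the file tier
    min_idle_before_demotion: timedelta = timedelta(days=30)  # how cold data must be before demotion
    promote_on_read: bool = True                              # pull object data back to file on access

@dataclass
class FileRecord:
    path: str
    size_bytes: int
    last_access: datetime
    tier: str = "file"                                        # "file" or "object"

def demotion_candidates(files, policy, now=None):
    """Files eligible to move to the object tier, coldest first."""
    now = now or datetime.utcnow()
    eligible = [f for f in files
                if f.tier == "file"
                and (now - f.last_access) >= policy.min_idle_before_demotion]
    return sorted(eligible, key=lambda f: f.last_access)

def rebalance(files, policy):
    """Demote cold files until the file tier sits at or below the target ratio."""
    total = sum(f.size_bytes for f in files) or 1
    file_bytes = sum(f.size_bytes for f in files if f.tier == "file")
    for f in demotion_candidates(files, policy):
        if file_bytes / total <= policy.target_file_ratio:
            break
        f.tier = "object"
        file_bytes -= f.size_bytes

def on_read(f, policy):
    """Reads are transparent to the application; optionally promote on access."""
    f.last_access = datetime.utcnow()
    if f.tier == "object" and policy.promote_on_read:
        f.tier = "file"
```

The real product obviously does this inline and at scale behind the NFS mount, but the levers are the same three listed above: the capacity ratio, the idle time, and the promotion behaviour.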

Bursting

ClearTier gets even more interesting when you combine it with Elastifile’s CloudConnect, by using CloudConnect to get data to the public cloud in the first place, and then using ClearTier to push data to object storage.

[image courtesy of Elastifile]

It becomes a simple process, and consists of two steps:

  1. Move on-premises data (from any NAS) to cloud-based object storage using CloudConnect; and
  2. Deploy ECFS with pointer to designated object store.

Get Snappy

ClearTier also provides the ability to store snapshots on an object tier. Snapshots occur automatically according to user-defined policies specifying:

  • Data to include;
  • Destination for snapshot (i.e. file storage / object storage); and
  • Schedule for snapshot creation.

The great thing is that all snapshots are accessible through ECFS via the same NFS mount.

 

Thoughts And Further Reading

I was pretty impressed with Elastifile’s CloudConnect solution when they first announced it. When you couple CloudConnect with something like ClearTier, and have it sitting on top of the ECFS foundation, it strikes me as a pretty cool solution. If you’re using applications that rely heavily on NFS, for example, ClearTier gives you a way to leverage the traditionally low cost of cloud object storage with the improved performance of file. I like the idea that you can play with the ratio of file and object, and I’m a big fan of not having to re-tool my file-centric applications to take advantage of object economics. The ability to store a bunch of snapshots on the object tier also adds increased flexibility in terms of data protection and storage access options.

The ability to burst workloads is exactly the kind of technical public cloud use case that we’ve been talking about in slideware for years now. The reality, however, has been somewhat different. It looks like Elastifile are delivering a solution that competes aggressively with some of the leading cloud providers’ object solutions, whilst also giving the storage array vendors, now dabbling in cloud solutions, pause for thought. There are a bunch of interesting use cases, particularly if you need to access a bunch of compute, and large data sets via file-based storage, in a cloud environment for short periods of time. If you’re looking for a cost-effective, scalable storage solution, I think that Elastifile are worth checking out.

Imanis Data Overview and 4.0 Announcement

I recently had the opportunity to speak with Peter Smails and Jay Desai from Imanis Data. They provided me with an overview of what the company does and a view of their latest product announcement. I thought I’d share some of it here as I found it pretty interesting.

 

Overview

Imanis Data provides enterprise data management for Hadoop and NoSQL running on-premises or in the public cloud.

Data Management

A big part of the Imanis Data story revolves around the “three pillars” of data management, namely:

  • Protection – providing redundancy in case of a disaster;
  • Orchestration – moving data around for different use cases (e.g. test and dev, cloud migration, archival); and
  • Automation – using machine learning to automate the data management functions, e.g. detecting anomalies (ThreatSense) and SmartPolicies for backups based on RPO/RTO.

The software itself is hardware-agnostic, and can run on any virtual, physical, or container-based platform. It can also run on any cloud, and hence on any storage. You start with 3 nodes, and scale out from there. Imanis Data tell me that everything runs in parallel, and it’s agentless, using native APIs for the platforms. This is a big plus when it comes to protecting these kinds of workloads, as there’s usually a large number of hosts involved, and managing agents everywhere is a real pain.

It also delivers storage optimisation services, and supports erasure coding, compression, and content-aware deduplication. There’s a nice paper on the architecture that you can grab from here.

 

What’s New?

So what’s new with 4.0?

Any Point-in-time Recovery

Imanis Data now provides APITR for Couchbase, MongoDB, and Cassandra:

  • APITR can be enabled at bucket level for Couchbase;
  • APITR can be enabled at repository level for Cassandra and MongoDB;
  • Aggressively collects transaction information from primary database; and
  • At time of recovery, user can pick a date & time.
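
Putting those bullets together, the general shape of any point-in-time recovery is a base backup plus an ordered transaction stream that gets replayed up to the timestamp the user picks. Here’s a toy Python illustration of that idea; it has nothing to do with Imanis Data’s actual mechanism, and all of the data structures are made up.

```python
from datetime import datetime

# Hypothetical: a base backup plus a stream of captured transactions.
base_backup = {"taken_at": datetime(2018, 10, 1, 0, 0), "state": {"doc1": "v1"}}
transactions = [
    {"ts": datetime(2018, 10, 1, 9, 15), "op": ("set", "doc1", "v2")},
    {"ts": datetime(2018, 10, 1, 11, 3), "op": ("set", "doc2", "hello")},
    {"ts": datetime(2018, 10, 1, 14, 47), "op": ("delete", "doc1", None)},
]

def restore_to(point_in_time):
    """Rebuild state as at the user-selected date and time."""
    state = dict(base_backup["state"])
    for txn in sorted(transactions, key=lambda t: t["ts"]):
        if txn["ts"] > point_in_time:
            break                                  # stop at the chosen point in time
        action, key, value = txn["op"]
        if action == "set":
            state[key] = value
        elif action == "delete":
            state.pop(key, None)
    return state

# Recover to just before the accidental delete at 14:47.
print(restore_to(datetime(2018, 10, 1, 14, 0)))    # {'doc1': 'v2', 'doc2': 'hello'}
```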

ThreatSense

ThreatSense “learns” from human input and updates the anomaly model. It’s a smart way of doing malware and ransomware detection.

SmartPolicies

What?

  • Autonomous RPO-based backup powered by machine learning;
  • Machine learning model built based on cluster workloads and utilisation;
  • Model determines backup frequency & resource prioritisation;
  • Continuously adapts to meet required RPO; and
  • Provides guidance on required resources to achieve desired RPOs.
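
To illustrate the adaptive idea in those bullets with something concrete (a deliberately naive Python sketch, not the actual SmartPolicies model, which is learned from cluster workload and utilisation data): if a backup plus its runtime can’t fit inside the RPO window, the scheduler has to run jobs more often, throw more resources at them, or tell you the RPO isn’t achievable.

```python
# Hypothetical sketch of RPO-driven backup scheduling. All names and numbers
# are illustrative only.

def plan_backups(rpo_minutes, observed_backup_minutes, max_parallel_streams):
    """Pick a backup interval and stream count that can meet the target RPO."""
    for streams in range(1, max_parallel_streams + 1):
        # Assume (simplistically) that runtime shrinks with more parallel streams.
        est_runtime = observed_backup_minutes / streams
        # Worst-case data loss is roughly interval + runtime, so keep that <= RPO.
        interval = rpo_minutes - est_runtime
        if interval > 0:
            return {"interval_minutes": interval,
                    "streams": streams,
                    "estimated_runtime_minutes": est_runtime}
    return {"warning": f"RPO of {rpo_minutes} minutes not achievable "
                       f"with {max_parallel_streams} streams"}

# Example: a 60-minute RPO when recent backups have been taking ~45 minutes.
print(plan_backups(rpo_minutes=60, observed_backup_minutes=45, max_parallel_streams=4))
```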

 

Thoughts

I do a lot with a number of data protection vendors in various on-premises and cloud incarnations, but I’m the first to admit that my experience with protection mechanisms for things like NoSQL is non-existent. It seems like that’s not an uncommon problem, and Imanis Data has spent the last 5 or so years working on fixing that for folks.

I’m intrigued by the idea that policies could be applied to objects based on criteria beyond a standard RPO requirement. In the enterprise I frequently run into situations where the RPO is often at odds with the capabilities of the protection system, or clashing with some critical processing activity that happens at a certain time each night. Getting the balance right can be challenging at the best of times. Like most things related to automation, if the system can do what I need it to do in the time I need it to happen, I’m going to be happy. Particularly if I don’t need to do anything after I’ve set it to run.

Imanis Data seems to be offering up a pretty cool solution that scales well and does a lot of things that are important for protecting critical workloads. Imanis Data tell me they’re not interested in the relational side of things, and are continuing to focus on their core competency for the moment. It looks like pretty neat stuff and I’m looking forward to seeing what they come up with in the future.

Violin Systems Announces Violin XVS 8

Violin Systems recently announced their new XVS 8 platform. I had the opportunity to speak to Gary Lyng (Chief Marketing Officer) and thought I’d share some thoughts here.

 

Background

A few things have changed for Violin since they folded as Violin Memory and were acquired by Soros in 2017. Firstly, they’re now 100% channel focused. And secondly, according to Lyng, they’re “all about microseconds”.

What Really Matters?

Violin are focused on extreme performance, specifically:

  • Low latency;
  • Consistent performance (24x7x365); and
  • Enterprise data services.

The key use cases they’re addressing are:

  • Tier 0;
  • Realtime insight;
  • OLTP, DB, VDI;
  • AI / ML;
  • Commercial IoT; and
  • Trading, supply chain.

 

The Announcement

The crux of the announcement is the Violin XVS 8.

[image courtesy of Violin Systems]

Specifications

  • Performance – latency from as low as 50µs up to 800µs; dedupe LUN performance improved by >40%
  • Capacity – 44.3TB – 88.7TB usable; 256TB – 512TB effective

 

Enterprise Data Services
  • Efficiency – 6:1 dedupe + compression reduction ratio; low-impact snapshots, thin provisioning, thin and thick clones
  • Continuity – synchronous replication (local/metro), asynchronous replication, stretch clusters (0 RPO & RTO – 7700), and NDU
  • Protection – crash-consistent snapshots, consistency groups (snaps & replication), and transparent LUN mirroring
  • Scalability – online LUN expansion, capacity pooling across shelves, and a single namespace
  • Hosts – 8x 32Gb FC (NVMe ready) or 8x 10GbE iSCSI

Feature Summary

Performance & Experience Advances

  • Consistent-Performance Guarantee
  • Cloud-based predictive analytics providing insight into future performance needs
  • NVMe over FC

Flexibility & Efficiency

  • Single Platform with selectable dedupe per LUN / Application
  • Snap-Dedupe

Application Infrastructure Ecosystems

Other Neat Features

32Gbps FC connectivity

Concerto OS updates (expected early Q1 2019)

  • Simple software upgrade for existing systems
  • Lower I/O latency and higher bandwidth
  • Lower CPU usage, enabling cost savings through compute and software consolidation
  • Optimised for transporting data from solid state storage to numerous processors

Everyone Has An App Now

All the cool storage vendors have an app. You can walk into your DC and (assuming you have the right credentials) scan a code on the front of the box. This will get you access to cloud-based analytics to see just how your system is performing.

[image courtesy of Violin Systems]

 

Thoughts

Violin Memory were quite the pioneers in the all-flash storage market many years ago. The pundits lamented the issues that Violin had with keeping pace with some of the smaller start-ups and big box sellers in recent times. The decision to focus on the “extreme performance” space is an interesting one. Violin certainly have some decent pedigree when it comes to the enterprise data services that these types of high-end customers would be looking for. And it’s not just about speed, it’s also about resilience and reliability. I asked about the decision to pursue NVMe over FC, and Lyng said that the feeling was that technologies such as RoCE weren’t quite there yet.

I’m curious to see whether Violin can continue to have an impact on the market. This isn’t their first rodeo, and if the box can deliver the numbers that have been touted, it will make for a reasonably compelling offering. Particularly in the financial services / transactional space where time is money.

Zerto Announces ZVR 6.5

Zerto recently announced version 6.5 of their Zerto Virtual Replication (ZVR) product and I had the opportunity to speak with Steve Blow and Caroline Seymour about the announcement.

 

Announcement

More Multi-cloud

Zerto 6.5 includes a number of features aimed at accelerating multi-cloud adoption.

Backup Capabilities

Zerto’s Long Term Retention feature has also been enhanced. You now have the ability to do incremental backups – effectively delivering a forever-incremental capability – with synthetic fulls as required. There’s also:

  • Support for Microsoft Data Box Edge using standard storage protocols; and
  • The ability to recover individual VMs out of Virtual Protection Groups.
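
For readers who haven’t come across the term, a synthetic full is a full backup that’s built by merging the last full with the incrementals taken since, rather than by re-reading everything from the source again. A toy Python illustration of the idea (nothing to do with Zerto’s actual implementation):

```python
# Toy "forever incremental with synthetic fulls" illustration. Each backup is
# modelled as a dict of {file: version}; incrementals only record changes.

full_backup = {"a.txt": 1, "b.txt": 1, "c.txt": 1}
incrementals = [
    {"a.txt": 2},                 # a.txt changed
    {"d.txt": 1},                 # d.txt created
    {"b.txt": 2, "a.txt": 3},     # two more changes
]

def synthesise_full(full, increments):
    """Merge the last full with every incremental to produce a new full,
    without touching the production source again."""
    synthetic = dict(full)
    for inc in increments:
        synthetic.update(inc)
    return synthetic

new_full = synthesise_full(full_backup, incrementals)
print(new_full)   # {'a.txt': 3, 'b.txt': 2, 'c.txt': 1, 'd.txt': 1}
# Older incrementals can now be retired and the chain restarts from new_full.
```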

Analytics

Zerto have worked hard to improve their analytics capabilities, providing:

  • Data for quarterly reports, including SLA compliance;
  • Troubleshooting of monthly data anomalies;
  • Enhanced data about VMs including journal size, throughput, IOPS and WAN; and
  • Cloud Service Provider Client Organisational Filter with enhanced visibility to create customer reports and automatically deliver real-time analysis to clients.

 

Events

Zerto have been busy at Microsoft’s Ignite event recently, and are also holding “IT Resilience Roadshow” events in the U.S. and Europe in the next few months in collaboration with Microsoft. There’s a Zerto+Azure workshop being held at each event, as well as the ability to sit for “Zerto+Azure Specialist” Certification. The workshop will give you the opportunity to use Zerto+Azure to:

  • Create a Disaster Recovery environment in Azure;
  • Migrate End of Life Windows Server 2008/SQL Server 2008 to Azure;
  • Migrate your on-premises data centre to Azure; and
  • Move or protect Linux and other workloads to Azure.

 

Thoughts

I’ve been a fan of Zerto for some time. They’ve historically done a lot with DR solutions and are now moving nicely beyond just DR into “IT Resilience”, with a solution that aims to incorporate a range of features. Zerto have also been pretty transparent with the market in terms of their vision for version 7. There’s an on-demand webinar you can register for that will provide some further insights into what that will bring. I’m a fan of their multi-cloud strategy, and I’m looking forward to seeing that continue to evolve.

I like it when companies aren’t afraid to show their hand a little. Too often companies focus on keeping these announcements a big secret until some special event or arbitrary date in a marketing team’s calendar. I know that Zerto haven’t quite finished version 7 yet, but they have been pretty upfront about the direction they’re trying to head in and some of the ways they’re intending to get there. In my opinion this is a good thing, as it gives their customer base time to prepare, and an idea of what capabilities they’ll be able to leverage in the future. Ultimately, Zerto are providing a solution that is geared up to help protect critical infrastructure assets and move data around to where you need it to be (whether it is planned or not). Zerto seem to understand that the element of surprise isn’t really what their customers are into when looking at these types of solutions. It isn’t always about being the first company to offer this or that capability. Instead, it should be about offering capabilities that actually work reliably.

Scale Computing Announces Partnership With APC by Schneider Electric For DCIAB

(I’m really hoping the snappy title will bring in a few more readers). I recently had a chance to speak with Doug Howell, Senior Director Global Alliances at Scale Computing about their Data Centre In A Box (DCIAB) offering in collaboration with APC by Schneider Electric and thought I’d share some thoughts.

 

It’s A Box

Well, a biggish box. The solution is built on APC’s Micro Data Centre solution, combined with 3 Scale HC3 1150 nodes. The idea is that you have 1 SKU to deal with, which includes the Scale HC3 nodes, UPS, PDUs, and rack. You can then wheel it in, plug it into the wall and network, and it’s ready to go. Howell mentioned that they have a customer that is in the process of deploying a significant number of these things in the wild.

Note that this is slightly different to the EMEA campaign with Lenovo from earlier in the year and is focused, at this stage, on the North American market. You can grab the solution brief from here.

 

Thoughts

The “distributed enterprise” has presented challenges to IT organisations for years now. Not everyone works in a location that is nicely co-located with headquarters. And these folks need compute and storage too. You’ve no doubt heard about how the “edge” is the new hotness in IT, and I frequently hear pitches from vendors talking about how they handle storage or compute requirements at the edge in some kind of earth-shattering way. It’s been a hard problem to solve, because locality (either for storage or compute or both) is generally a big part of the success of these solutions, particularly from the end user’s perspective. This is oftentimes at odds with traditional enterprise deployments, where all of the key compute and storage components are centrally located for ease of access, management and protection. Improvements in WAN technologies and distributed application availability are changing that story to an extent though, hence the requirement for these kinds of edge solutions. Sometimes, you just need to have stuff close to where your main business activity is occurring.

So what makes the Scale and APC offering any different? Nothing really, except that Scale have built their reputation on being able to deliver simple to operate hyper-converged infrastructure to small and medium enterprises with a minimum of fuss and at a reasonable price point. The cool thing here is that you’re also leveraging APC’s ability to deliver robust micro DC services with Scale’s offering that can fit in well with their other solutions, such as DRaaS.

Not every solution from every vendor needs to be unique for it to stand out from the crowd. Scale have historically demonstrated a relentless focus on quality products, excellent after-sales support and market focus. This collaboration will no doubt open up some more doors for them with APC customers who were previously unaware of the Scale story (and vice versa). This can only be a good thing in my opinion.

Dell EMC News From VMworld US 2018

I’m not at VMworld US this year, but I had the opportunity to be briefed by Sam Grocott (Dell EMC Cloud Strategy) on some of Dell EMC‘s key announcements during the event, and thought I’d share some of my rough notes and links here. You can read the press release here.

TL;DR?

It is a multi-cloud world. Multi-cloud requires workload mobility. The market requires a consistent experience between on-premises and off-premises. Dell EMC are doing some more stuff around that.

 

Cloud Platforms

Dell EMC offer a number of engineered systems to run both IaaS and cloud native applications.

VxRail

Starting with vSphere 6.7, Dell EMC are saying they’re delivering “near” synchronous software releases between VMware and VxRail. In this case that translates to a less-than-30-day delta between releases. There’s also support for:

VxRack SDDC with VMware Cloud Foundation

  • Support for the latest VCF releases – VCF 2.3.2, and future-proofing for next-generation VMware cloud technologies
  • Alignment with VxRail hardware options – P, E, and V series VxRail models, now including the storage-dense S-series
  • Configuration flexibility

 

Cloud-enabled Infrastructure

Focus is on the data

  • Cloud data mobility;
  • Cloud data protection;
  • Cloud data services; and
  • Cloud control.

Cloud Data Protection

  • DD Cloud DR – keep copies of VM data from on-premises DD to public cloud and orchestrate failover of workloads to the cloud
  • Data Protection Suite – use cloud storage for backup and retention
  • Cloud Snapshot Manager – backup and recovery for public cloud workloads (now including Microsoft Azure)
  • Data Domain virtual edition running in the cloud

DD VE 4.0 Enhancements

  • KVM support added for DD VE on-premises
  • In-cloud capacity expanded to 96TB (was 16TB)
  • Can run in AWS, Azure and VMware Cloud

Cloud Data Services

Dell EMC have already announced services such as:

And now you can get Dell EMC UnityVSA Cloud Edition.

UnityVSA Cloud Edition

[image courtesy of Dell EMC]

  • Up to 256TB file systems
  • VMware Cloud on AWS

CloudIQ

  • No cost, SaaS offering
  • Predictive analytics – intelligently project capacity and performance
  • Anomaly detection – leverage ML to pinpoint deviations
  • Proactive health – identify risks before they impact the environment

Enhancements include:

Data Domain Cloud Tier

There are some other Data Domain related enhancements, including new AWS support (meaning you can have a single vendor for Long Term Retention).

ECS

ECS enhancements have also been announced, with a 50%+ increase in storage capacity and compute.

 

Thoughts

As would be expected from a company with a large portfolio of products, there’s quite a bit happening on the product enhancement front. Dell EMC are starting to get that they need to be on-board with those pesky cloud types, and they’re also doing a decent job of ensuring their private cloud customers have something to play with as well.

I’m always a little surprised by vendors offering “Cloud Editions” of key products, as it feels a lot like they’re bolting on something to the public cloud when the focus could perhaps be on helping customers get to a cloud-native position sooner. That said, there are good economic reasons to take this approach. By that I mean that there’s always going to be someone who thinks they can just lift and shift their workload to the public cloud, rather than re-factoring their applications. Dell EMC are providing a number of ways to make this a fairly safe undertaking, and products like Unity Cloud Edition provide some nice features such as increased resilience that would be otherwise lacking if the enterprise customer simply dumped its VMs in AWS as-is. I still have hope that we’ll stop doing this as an industry in the near future and embrace some smarter ways of working. But while enterprises are happy enough to spend their money on doing things like they always have, I can’t criticise Dell EMC for wanting a piece of the pie.

Cohesity Announces Helios

I recently had the opportunity to hear from Cohesity (via a vExpert briefing – thanks for organising this TechReckoning!) regarding their Helios announcement and thought I’d share what I know here.

 

What Is It?

If we’re not talking about the god and personification of the Sun, what are we talking about? Cohesity tells me that Helios is a “SaaS-based data and application orchestration and management solution”.

[image courtesy of Cohesity]

Here is the high-level architecture of Helios. There are three main features:

  • Multi-cluster management – Control all your Cohesity clusters located on-premises, in the cloud or at the edge from a single dashboard;
  • SmartAssist – Gives critical global operational data to the IT admin; and
  • Machine Learning Engine – Gives IT Admins machine driven intelligence so that they can make an informed decision.

All of this happens when Helios collects, anonymises, aggregates, and analyses globally available metadata and gives actionable recommendations to IT Admins.

 

Multi-cluster Management

Multi-cluster management is just that: the ability to manage more than one cluster through a unified UI. The cool thing is that you can roll out policies or make upgrades across all your locations and clusters with a single click. It also provides you with the ability to monitor your Cohesity infrastructure in real-time, as well as being able to search and generate reports on the global infrastructure. Finally, there’s an aggregated, simple-to-use dashboard.

 

SmartAssist

SmartAssist is a feature that provides you with the ability to have smart management of SLAs in the environment. The concept is that if you configure two protection jobs in the environment with competing requirements, the job with the higher SLA will get priority. I like this idea as it prevents people doing silly things with protection jobs.
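
As a trivial illustration of that kind of arbitration (purely hypothetical, and not Cohesity’s scheduler), competing jobs could simply be dispatched in order of how tight their SLAs are:

```python
# Hypothetical example: when protection jobs contend for the same window,
# the job with the tighter SLA runs first.

jobs = [
    {"name": "dev-vms",   "sla_minutes": 1440},   # daily is fine
    {"name": "prod-sql",  "sla_minutes": 60},     # tight SLA
    {"name": "fileshare", "sla_minutes": 240},
]

def dispatch_order(pending_jobs):
    """Tighter SLA (fewer minutes) wins the contention."""
    return sorted(pending_jobs, key=lambda j: j["sla_minutes"])

for job in dispatch_order(jobs):
    print(f"{job['name']} runs next ({job['sla_minutes']} minute SLA)")
```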

 

Machine Learning

The Machine Learning part of the solution provides a number of things, including insights into capacity consumption. And proactive wellness? It’s not a pitch for some dodgy natural health product, but instead gives you the ability to perform:

  • Configuration validations, preventing you from doing silly things in your environment;
  • Blacklist version control, stopping known problematic software releases spreading too far in the wild; and
  • Hardware health checks, ensuring things are happy with your hardware (important in a software-defined world).

 

Thoughts and Further Reading

There’s a lot more going on with Helios, but I’d like to have some stick time with it before I have a lot more to say about it. People are perhaps going to be quick to compare this with other SaaS offerings, but I think they might be doing some different things, with a bit of a different approach. You can’t go five minutes on the Internet without hearing about how ML is changing the world. If nothing else, this solution delivers a much-needed consolidated view of the Cohesity environment. This seems like an obvious thing, but probably hasn’t been necessary until Cohesity landed the type of customers that had multiple clusters installed all over the place.

I also really like the concept of a feature like SmartAssist. There’s only so much guidance you can give people before they have to do some thinking for themselves. Unfortunately, there are still enough environments in the wild where people are making the wrong decision about what priority to place on jobs in their data protection environment. SmartAssist can do a lot to take away the possibility that things will go awry from an SLA perspective.

You can grab a copy of the data sheet here, and read a blog post by Raj Dutt here. El Reg also has some coverage of the announcement here.

Rubrik Announces Polaris Radar

Polaris?

I’ve written about Rubrik’s Polaris offering in the past, with GPS being the first cab off the rank.  You can think of GPS as the command and control platform, offering multi-cloud control and policy management via the Polaris SaaS framework. I recently had the opportunity to hear from Chris Wahl about Radar and thought it worthwhile covering here.

 

The Announcement

Rubrik announced recently (fine, a few weeks ago) that Polaris Radar is now generally available.

 

The Problem

People don’t want to hear about the problem, because they already know what it is and they want to spend time hearing about how the vendor is going to solve it. I think in this instance, though, it’s worth reiterating that security attacks happen. A lot. According to the Cisco 2017 Annual Cybersecurity Report, ransomware attacks are growing by more than 350% annually. It’s Rubrik’s position that security is heavily focused on the edge, with firewalls and desktop protection being the main tools deployed. “Defence in depth is lopsided”, with a focus on prevention, not necessarily recovery. According to Wahl, “it’s hard to bounce back fast”.

 

What It Does

So what does Radar do (in the context of Rubrik Polaris)? The idea is that it increases the intelligence available to know when you’ve been hit, and helps you to recover faster. The goal of Radar is fairly straightforward, with the following activities being key to the solution:

  • Detection – identify all strains of ransomware;
  • Analysis – understand impact of an attack; and
  • Recovery – restore as quickly as possible.

Radar achieves this by:

  • Detecting anomalies – leverage insights on suspicious activity to accelerate detection;
  • Analysing threat impact – spend less time discovering which applications and files were impacted; and
  • Accelerating recovery – minimise downtime by simplifying manual processes into just a few clicks.

 

How?

Rubrik tell me they use (drumroll please) Machine Learning for detection. Is it really machine learning? That doesn’t really matter for the purpose of this story.

[image courtesy of Rubrik]

The machine learning model learns the baseline behaviour, detects anomalies and alerts as they come in. So how does that work then?

1. Detect anomalies – apply machine learning on application metadata to detect and alert unusual change activity with protected data, such as ransomware.

What happens post anomaly detection?

  • Email alert is sent to user
  • Radar inspects snapshot for encryption
  • Results uploaded to Polaris
  • User informed of results (via the Polaris UI)

2. Analyse threat impact – Visualise how an attack impacted the system with a detailed view of file content changes at the time of the event.

3. Accelerate recovery – Select all impacted resources, specify the desired location, and restore the most recent clean versions with a few clicks. Rubrik automates the rest of the restore process.
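
To give a feel for the “learn a baseline, flag the outliers” part of the detection step, here’s a deliberately simple Python sketch. It’s not Rubrik’s model; it just compares the latest snapshot’s change rate against a baseline learned from previous snapshots.

```python
import statistics

# Hypothetical change rates: the fraction of files modified between snapshots.
change_rates = [0.02, 0.03, 0.02, 0.04, 0.03, 0.02, 0.61]   # the last one looks like trouble

def is_anomalous(history, latest, threshold=3.0):
    """Flag the latest observation if it sits far outside the learned baseline."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1e-9
    return (latest - mean) / stdev > threshold

history, latest = change_rates[:-1], change_rates[-1]
if is_anomalous(history, latest):
    print(f"Alert: change rate {latest:.0%} is well outside the baseline "
          f"(~{statistics.mean(history):.0%}); inspect the latest snapshot for encryption.")
```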

 

Thoughts and Further Reading

I think there’s a good story to tell with Polaris. SaaS is an accessible way of delivering features to the customer base without the angst traditionally associated with appliance platform upgrades. Data security should be a big part of data protection. After all, data protection is generally critical to recovery once there’s been a serious breach. We’re no longer just protecting against users inside the organisation accidentally deleting large chunks of data, or having to recover from serious equipment failures. Instead, we’re faced with the reality that a bunch of idiots with bad intentions are out to wreck some of our stuff and make a bit of coin on the side. The sooner you know something has gone awry, the quicker you can hopefully recover from the problem (and potentially re-evaluate some of your security). Being attacked shouldn’t be about being ashamed, but it should be about being able to quickly recover and get on with whatever your company does to make its way in the world. With this in mind, I think that Rubrik are on the right track.

You can grab the data sheet from here, and Chris has an article worth checking out here. You can also register to access the Technical Overview here.

Nexsan Announces Assureon Cloud Transfer

Announcement

Nexsan announced Cloud Transfer for their Assureon product a little while ago. I recently had the chance to catch up with Gary Watson (Founder / CTO at Nexsan) and thought it would be worth covering the announcement here.

 

Assureon Refresher

Firstly, though, it might be helpful to look at what Assureon actually is. In short, it’s an on-premises storage archive that offers:

  • Long term archive storage for fixed content files;
  • Dependable file availability, with files being audited every 90 days;
  • Unparalleled file integrity; and
  • A “policy” system for protecting and stubbing files.

Notably, there is always a primary archive and a DR archive included in the price. No half-arsing it here – which is something that really appeals to me. Assureon also doesn’t have a “delete” key as such – files are only removed based on defined Retention Rules. This is great, assuming you set up your policies sensibly in the first place.

 

Assureon Cloud Transfer

Cloud Transfer provides the ability to move data between on-premises and cloud instances. The idea is that it will:

  • Provide reliable and efficient cloud mobility of archived data between cloud server instances and between cloud vendors; and
  • Optimise cloud storage and backup costs by offloading cold data to on-premises archive.

It’s being positioned as useful for clients who have a large unstructured data footprint on public cloud infrastructure and are looking to reduce their costs for storing data up there. There’s currently support for Amazon AWS and Microsoft Azure, with Google support coming in the near future.

[image courtesy of Nexsan]

There’s stub support for those applications that support it. There’s also an optional NFS / SMB interface that can be configured in the cloud as an Assureon archiving target that caches hot files and stubs cold files. This is useful for those non-Windows applications that have a lot of unstructured data that could be moved to an archive.
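
The hot-cache / cold-stub pattern is worth a quick illustration. In rough Python terms (hypothetical, and not the Assureon implementation), hot files are served from a local cache while a read against a stub transparently recalls the file from the archive:

```python
# Hypothetical sketch of a cache-and-stub interface sitting in front of an archive.

archive = {"old-report.pdf": b"...archived bytes..."}      # cold data held in the archive
cache = {"todo.txt": b"current working notes"}             # hot data kept locally
stubs = {"old-report.pdf"}                                  # placeholders for archived files

def read_file(name):
    """Serve hot files from cache; recall stubbed files from the archive on access."""
    if name in cache:
        return cache[name]
    if name in stubs:
        data = archive[name]          # recall from the archive (the slow path)
        cache[name] = data            # the file is hot again, so cache it locally
        stubs.discard(name)
        return data
    raise FileNotFoundError(name)

print(read_file("todo.txt"))          # served from cache
print(read_file("old-report.pdf"))    # recalled transparently from the archive
```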

 

Thoughts and Further Reading

The concept of dedicated archiving hardware and software bundles, particularly ones that live on-premises, might seem a little odd to some folks who spend a lot of time failing fast in the cloud. There are plenty of enterprises, however, that would benefit from the level of rigour that Nexsan have wrapped around the Assureon product. It’s my strong opinion that too many people still don’t understand the difference between backup and recovery and archive data. The idea that you need to take archive data and make it immutable (and available) for a long time has great appeal, particularly for organisations getting slammed with a whole lot of compliance legislation. Vendors have been talking about reducing primary storage use for years, but there seems to have been some pushback from companies not wanting to invest in these solutions. It’s possible that this was also a result of some kludgy implementations that struggled to keep up with the demands of the users. I can’t speak for the performance of the Assureon product, but I like the fact that it’s sold as a pair, and with a lot of the decision-making around protection taken away from the end user. As someone who worked in an organisation that liked to cut corners on this type of thing, it’s nice to see that.

But why would you want to store stuff on-premises? Isn’t everyone moving everything to the cloud? No, they’re not. I don’t imagine that this type of product is being pitched at people running entirely in public cloud. It’s more likely that, if you’re looking at this type of solution, you’re probably running a hybrid setup, and still have a footprint in a colocation facility somewhere. The benefit of this is that you can retain control over where your archived data is placed. Some would say that’s a bit of a pain, and an unnecessary expense, but people familiar with compliance will understand that business is all about a whole lot of wasted expense in order to make people feel good. But I digress. Like most on-premises solutions, the Assureon offering compares well with a public cloud solution on a $/GB basis, assuming you’ve got a lot of sunk costs in place already with your data centre presence.

The immutability story is also a pretty good one when you start to think about organisations that have been hit by ransomware in the last few years. That stuff might roll through your organisation like a hot knife through butter, but it won’t be able to do anything with your archive data – that stuff isn’t going anywhere. Combine that with one of those fancy next generation data protection solutions and you’re in reasonable shape.

In any case, I like what the Assureon product offers, and am looking forward to seeing Nexsan move beyond the Windows-only platform support that it currently offers. You can read the Nexsan Assureon Cloud Transfer press release here. David Marshall covered the announcement over at VMblog and ComputerWeekly.com did an article as well.

NetApp Announces NetApp ONTAP AI

As a member of NetApp United, I had the opportunity to sit in on a briefing from Mike McNamara about NetApp‘s recently announced AI offering, the snappily named “NetApp ONTAP AI”. I thought I’d provide a brief overview here and share some thoughts.

 

The Announcement

So what is NetApp ONTAP AI? It’s a “proven” architecture delivered via NetApp’s channel partners. It’s comprised of compute, storage and networking. Storage is delivered over NFS. The idea is that you can start small and scale out as required.

Hardware

Software

  • NVIDIA GPU Cloud Deep Learning Stack
  • NetApp ONTAP 9
  • Trident, dynamic storage provisioner

Support

  • Single point of contact support
  • Proven support model

 

[image courtesy of NetApp]

 

Thoughts and Further Reading

I’ve written about NetApp’s Edge to Core to Cloud story before, and this offering certainly builds on the work they’ve done with big data and machine learning solutions. Artificial Intelligence (AI) and Machine Learning (ML) solutions are like big data from five years ago, or public cloud. You can’t go to any industry event, or take a briefing from an infrastructure vendor, without hearing all about how they’re delivering solutions focused on AI. What you do with the gear once you’ve bought one of these spectacularly ugly boxes is up to you, obviously, and I don’t want to get in to whether some of these solutions are really “AI” or not (hint: they’re usually not). While the vendors are gushing breathlessly about how AI will conquer the world, if you tone down the hyperbole a bit, there’re still some fascinating problems being solved with these kinds of solutions.

I don’t think that every business, right now, will benefit from an AI strategy. As much as the vendors would like to have you buy one of everything, these kinds of solutions are very good at doing particular tasks, most of which are probably not in your core remit. That’s not to say that you won’t benefit in the very near future from some of the research and development being done in this area. And it’s for this reason that I think architectures like this one, and those from NetApp’s competitors, are contributing something significant to the ongoing advancement of these fields.

I also like that this is delivered via channel partners. It indicates, at least at first glance, that AI-focused solutions aren’t simply something you can slap a SKU on and sells 100s of. Partners generally have a better breadth of experience across the various hardware, software and services elements and their respective constraints, and will often be in a better position to spend time understanding the problem at hand rather than treating everything as the same problem with one solution. There’s also less chance that the partner’s sales people will have performance accelerators tied to selling one particular line of products. This can be useful when trying to solve problems that are spread across multiple disciplines and business units.

The folks at NVIDIA have made a lot of noise in the AI / ML marketplace lately, and with good reason. They know how to put together blazingly fast systems. I’ll be interested to see how this architecture goes in the marketplace, and whether customers are primarily from the NetApp side of the fence, from the NVIDIA side, or perhaps both. You can grab a copy of the solution brief here, and there’s an AI white paper you can download from here. The real meat and potatoes though, is the reference architecture document itself, which you can find here.