Elastifile Announces v3.0

Elastifile recently announced version 3.0 of their product. I had the opportunity to speak to Jerome McFarland (VP of Marketing) and thought I’d share some information from the announcement here. If you haven’t heard of them before, “Elastifile augments public cloud capabilities and facilitates cloud consumption by delivering enterprise-grade, scalable file storage in the cloud”.

 

The Announcement

ClearTier

One of the major features of the 3.0 release is “ClearTier”, delivering integration between file and object storage in public clouds. With ClearTier, you have object storage expanding the file system namespace. The cool thing about this is that Elastifile’s ECFS provides transparent read / write access to all data. No need to re-tool applications to take advantage of the improved economics of object storage in the public cloud.

How Does It Work?

All data is accessible through ECFS via a standard NFS mount, and application access to object data is routed automatically. Data tiering occurs automatically according to user-defined policies specifying:

  • Targeted capacity ratio between file and object;
  • Eligibility for data demotion (i.e. minimum time since last access); and
  • Promotion policies controlling the response to object data access.
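As a rough sketch of how the demotion side of such a policy might be evaluated (the field and function names here are mine for illustration, not Elastifile's API):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class TierPolicy:
    # Illustrative fields only -- not Elastifile's actual interface
    target_object_ratio: float  # fraction of capacity targeted for object storage
    min_idle: timedelta         # minimum time since last access before demotion

def eligible_for_demotion(last_access: datetime, policy: TierPolicy,
                          now: datetime) -> bool:
    # A file becomes a demotion candidate once it has been idle long enough
    return (now - last_access) >= policy.min_idle

policy = TierPolicy(target_object_ratio=0.7, min_idle=timedelta(days=30))
now = datetime(2019, 1, 1)
print(eligible_for_demotion(datetime(2018, 11, 1), policy, now))  # True (idle ~61 days)
```

The promotion side would run the equivalent check in reverse, pulling data back to the file tier once it starts being accessed again.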

Bursting

ClearTier gets even more interesting when you combine it with Elastifile’s CloudConnect: use CloudConnect to get data into the public cloud in the first place, then use ClearTier to push data down to object storage.

[image courtesy of Elastifile]

It becomes a simple process, and consists of two steps:

  1. Move on-premises data (from any NAS) to cloud-based object storage using CloudConnect; and
  2. Deploy ECFS with pointer to designated object store.

Get Snappy

ClearTier also provides the ability to store snapshots on an object tier. Snapshots occur automatically according to user-defined policies specifying:

  • Data to include;
  • Destination for snapshot (i.e. file storage / object storage); and
  • Schedule for snapshot creation.
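A minimal sketch of what such a policy object might look like, assuming a simple daily schedule (the names are mine, not Elastifile's):

```python
from dataclasses import dataclass
from datetime import datetime, time, timedelta

@dataclass
class SnapshotPolicy:
    # Illustrative policy holding the three fields listed above
    paths: list                  # data to include
    destination: str             # "file" or "object"
    daily_at: time = time(2, 0)  # simple daily schedule

    def next_run(self, now: datetime) -> datetime:
        # Next occurrence of the daily schedule relative to `now`
        run = datetime.combine(now.date(), self.daily_at)
        return run if run > now else run + timedelta(days=1)

policy = SnapshotPolicy(["/projects"], destination="object")
print(policy.next_run(datetime(2019, 1, 1, 12, 0)))  # 2019-01-02 02:00:00
```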

The great thing is that all snapshots are accessible through ECFS via the same NFS mount.

 

Thoughts And Further Reading

I was pretty impressed with Elastifile’s CloudConnect solution when they first announced it. When you couple CloudConnect with something like ClearTier, and have it sitting on top of the ECFS foundation, it strikes me as a pretty cool solution. If you’re using applications that rely heavily on NFS, for example, ClearTier gives you a way to leverage the traditionally low cost of cloud object storage with the improved performance of file. I like the idea that you can play with the ratio of file and object, and I’m a big fan of not having to re-tool my file-centric applications to take advantage of object economics. The ability to store a bunch of snapshots on the object tier also adds increased flexibility in terms of data protection and storage access options.

The ability to burst workloads is exactly the kind of technical public cloud use case that we’ve been talking about in slideware for years now. The reality, however, has been somewhat different. It looks like Elastifile are delivering a solution that competes aggressively with some of the leading cloud providers’ object solutions, whilst also giving the storage array vendors, now dabbling in cloud solutions, pause for thought. There are a bunch of interesting use cases, particularly if you need to access a bunch of compute, and large data sets via file-based storage, in a cloud environment for short periods of time. If you’re looking for a cost-effective, scalable storage solution, I think that Elastifile are worth checking out.

Imanis Data Overview and 4.0 Announcement

I recently had the opportunity to speak with Peter Smails and Jay Desai from Imanis Data. They provided me with an overview of what the company does and a view of their latest product announcement. I thought I’d share some of it here as I found it pretty interesting.

 

Overview

Imanis Data provides enterprise data management for Hadoop and NoSQL running on-premises or in the public cloud.

Data Management

A big part of the Imanis Data story revolves around the “three pillars” of data management, namely:

  • Protection – providing redundancy in case of a disaster;
  • Orchestration – moving data around for different use cases (e.g. test and dev, cloud migration, archival); and
  • Automation – using machine learning to automate data management functions, e.g. detecting anomalies (ThreatSense) and applying SmartPolicies for backups based on RPO/RTO.

The software itself is hardware-agnostic, and can run on any virtual, physical, or container-based platform. It also runs on any cloud, and hence on any storage. You start with three nodes, and scale out from there. Imanis Data tell me that everything runs in parallel, and it’s agentless, using native APIs for the platforms. This is a big plus when it comes to protecting these kinds of workloads, as there’s usually a large number of hosts involved, and managing agents everywhere is a real pain.

It also delivers storage optimisation services, and supports erasure coding, compression, and content-aware deduplication. There’s a nice paper on the architecture that you can grab from here.

 

What’s New?

So what’s new with 4.0?

Any Point-in-time Recovery

Imanis Data now provides any point-in-time recovery (APITR) for Couchbase, MongoDB, and Cassandra:

  • APITR can be enabled at the bucket level for Couchbase;
  • APITR can be enabled at the repository level for Cassandra and MongoDB;
  • Transaction information is aggressively collected from the primary database; and
  • At recovery time, the user can pick a date and time.
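Conceptually, APITR pairs the newest snapshot before the chosen point in time with the transactions captured after it. A toy recovery-point selection might look like this (my sketch of the concept, not Imanis Data's implementation):

```python
from bisect import bisect_right
from datetime import datetime

def recovery_plan(snapshots, transactions, target):
    # Pick the newest snapshot at or before `target`, then the transactions
    # recorded after that snapshot up to `target`
    snaps = sorted(snapshots)
    idx = bisect_right(snaps, target) - 1
    if idx < 0:
        raise ValueError("no snapshot before the requested point in time")
    base = snaps[idx]
    replay = [t for t in sorted(transactions) if base < t <= target]
    return base, replay

snaps = [datetime(2019, 1, 1, h) for h in (0, 6, 12)]
txns = [datetime(2019, 1, 1, 12, m) for m in (5, 20, 40)]
base, replay = recovery_plan(snaps, txns, datetime(2019, 1, 1, 12, 30))
# base is the 12:00 snapshot; replay holds the 12:05 and 12:20 transactions
```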

ThreatSense

ThreatSense “learns” from human input and updates the anomaly model. It’s a smart way of doing malware and ransomware detection.

SmartPolicies

What?

  • Autonomous RPO-based backup powered by machine learning;
  • Machine learning model built based on cluster workloads and utilisation;
  • Model determines backup frequency & resource prioritisation;
  • Continuously adapts to meet required RPO; and
  • Provides guidance on required resources to achieve desired RPOs.
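One way to think about RPO-driven scheduling: the backup interval plus the backup's own runtime bounds the worst-case data loss, so the interval has to leave headroom for the backup itself. A deliberately simple, non-ML sketch (the real SmartPolicies model is ML-driven; this just shows the constraint):

```python
from datetime import timedelta

def backup_interval(rpo: timedelta, observed_backup_duration: timedelta,
                    safety_factor: float = 0.5) -> timedelta:
    # Choose an interval that keeps worst-case data loss inside the RPO,
    # leaving room for the backup to actually complete
    interval = rpo * safety_factor
    if interval <= observed_backup_duration:
        raise ValueError("RPO unachievable with current backup duration")
    return interval

print(backup_interval(timedelta(hours=4), timedelta(minutes=30)))  # 2:00:00
```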

 

Thoughts

I do a lot with a number of data protection vendors in various on-premises and cloud incarnations, but I’m the first to admit that my experience with protection mechanisms for things like NoSQL is non-existent. It seems like that’s not an uncommon problem, and Imanis Data has spent the last five or so years working on fixing that for folks.

I’m intrigued by the idea that policies could be applied to objects based on criteria beyond a standard RPO requirement. In the enterprise I frequently run into situations where the RPO is often at odds with the capabilities of the protection system, or clashing with some critical processing activity that happens at a certain time each night. Getting the balance right can be challenging at the best of times. Like most things related to automation, if the system can do what I need it to do in the time I need it to happen, I’m going to be happy. Particularly if I don’t need to do anything after I’ve set it to run.

Imanis Data seems to be offering up a pretty cool solution that scales well and does a lot of things that are important for protecting critical workloads. Imanis Data tell me they’re not interested in the relational side of things, and are continuing to focus on their core competency for the moment. It looks like pretty neat stuff and I’m looking forward to seeing what they come up with in the future.

Violin Systems Announces Violin XVS 8

Violin Systems recently announced their new XVS 8 platform. I had the opportunity to speak to Gary Lyng (Chief Marketing Officer) and thought I’d share some thoughts here.

 

Background

A few things have changed for Violin since they folded as Violin Memory and were acquired by Soros in 2017. Firstly, they’re now 100% channel focused. And secondly, according to Lyng, they’re “all about microseconds”.

What Really Matters?

Violin are focused on extreme performance, specifically:

  • Low latency;
  • Consistent performance (24x7x365); and
  • Enterprise data services.

The key use cases they’re addressing are:

  • Tier 0;
  • Realtime insight;
  • OLTP, DB, VDI;
  • AI / ML;
  • Commercial IoT; and
  • Trading, supply chain.

 

The Announcement

The crux of the announcement is the Violin XVS 8.

[image courtesy of Violin Systems]

Specifications

  • Performance – latency as low as 50µs, ranging up to 800µs;
  • Dedupe – LUN performance improved by >40%;
  • Capacity (usable) – 44.3TB to 88.7TB; and
  • Capacity (effective) – 256TB to 512TB.
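As a quick sanity check, the quoted effective capacities work out to a shade under the 6:1 reduction ratio Violin quote for dedupe and compression:

```python
def implied_ratio(effective_tb: float, usable_tb: float) -> float:
    # Data-reduction ratio implied by an effective/usable capacity pair
    return effective_tb / usable_tb

print(round(implied_ratio(256, 44.3), 2))  # 5.78
print(round(implied_ratio(512, 88.7), 2))  # 5.77
```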

 

Enterprise Data Services

  • Efficiency – dedupe + compression with a 6:1 reduction ratio; low-impact snapshots, thin provisioning, and thin and thick clones;
  • Continuity – synchronous replication (local/metro), asynchronous replication, stretch clusters (0 RPO & RTO – 7700), and NDU;
  • Protection – snapshots (crash consistent), consistency groups (snaps & replication), and transparent LUN mirroring;
  • Scalability – online LUN expansion, capacity pooling across shelves, and a single namespace; and
  • Hosts – 8x 32Gb FC (NVMe-ready) or 8x 10GbE iSCSI.

Feature Summary

Performance & Experience Advances

  • Consistent-Performance Guarantee
  • Cloud-based predictive analytics providing insight into future performance needs
  • NVMe over FC

Flexibility & Efficiency

  • Single Platform with selectable dedupe per LUN / Application
  • Snap-Dedupe

Application Infrastructure Ecosystems

Other Neat Features

32Gbps FC connectivity

Concerto OS updates (expected early Q1 2019)

  • Simple software upgrade to existing systems
  • Lowered IO Latency, Higher Bandwidth
  • Lower CPU usage, enabling cost savings through compute and software consolidation
  • Optimised for transporting data from solid state storage to numerous processors

Everyone Has An App Now

All the cool storage vendors have an app. You can walk into your DC and (assuming you have the right credentials) scan a code on the front of the box. This will get you access to cloud-based analytics to see just how your system is performing.

[image courtesy of Violin Systems]

 

Thoughts

Violin Memory were quite the pioneers in the all-flash storage market many years ago. The pundits lamented the issues that Violin had with keeping pace with some of the smaller start-ups and big box sellers in recent times. The decision to focus on the “extreme performance” space is an interesting one. Violin certainly have some decent pedigree when it comes to the enterprise data services that these types of high-end customers would be looking for. And it’s not just about speed, it’s also about resilience and reliability. I asked about the decision to pursue NVMe over FC, and Lyng said that the feeling was that technologies such as RoCE weren’t quite there yet.

I’m curious to see whether Violin can continue to have an impact on the market. This isn’t their first rodeo, and if the box can deliver the numbers that have been touted, it will make for a reasonably compelling offering. Particularly in the financial services / transactional space where time is money.

Zerto Announces ZVR 6.5

Zerto recently announced version 6.5 of their Zerto Virtual Replication (ZVR) product and I had the opportunity to speak with Steve Blow and Caroline Seymour about the announcement.

 

Announcement

More Multi-cloud

Zerto 6.5 includes a number of features designed to accelerate multi-cloud adoption.

Backup Capabilities

Zerto’s Long Term Retention feature has also been enhanced. You now have the ability to do incremental backups – effectively delivering a forever-incremental capability – with synthetic fulls as required. There’s also:

  • Support for Microsoft Data Box Edge using standard storage protocols; and
  • The ability to recover individual VMs out of Virtual Protection Groups.
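The forever-incremental idea is that full restore points are synthesised from a base plus change sets, rather than re-read from production. A toy illustration of a synthetic full (my sketch of the concept, not Zerto's implementation):

```python
def synthetic_full(full: dict, incrementals: list) -> dict:
    # Merge a base full with incremental change sets into a new synthetic full;
    # in this sketch, a value of None marks a deletion
    merged = dict(full)
    for inc in incrementals:  # apply oldest-to-newest
        for path, data in inc.items():
            if data is None:
                merged.pop(path, None)
            else:
                merged[path] = data
    return merged

base = {"a.txt": "v1", "b.txt": "v1"}
result = synthetic_full(base, [{"a.txt": "v2"}, {"b.txt": None, "c.txt": "v1"}])
# result == {"a.txt": "v2", "c.txt": "v1"}
```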

Analytics

Zerto have worked hard to improve their analytics capabilities, providing:

  • Data for quarterly reports, including SLA compliance;
  • Troubleshooting of monthly data anomalies;
  • Enhanced data about VMs including journal size, throughput, IOPS and WAN; and
  • Cloud Service Provider Client Organisational Filter with enhanced visibility to create customer reports and automatically deliver real-time analysis to clients.

 

Events

Zerto have been busy at Microsoft’s Ignite event recently, and are also holding “IT Resilience Roadshow” events in the U.S. and Europe in the next few months in collaboration with Microsoft. There’s a Zerto+Azure workshop being held at each event, as well as the ability to sit for “Zerto+Azure Specialist” Certification. The workshop will give you the opportunity to use Zerto+Azure to:

  • Create a Disaster Recovery environment in Azure;
  • Migrate End of Life Windows Server 2008/SQL Server 2008 to Azure;
  • Migrate your on-premises data centre to Azure; and
  • Move or protect Linux and other workloads to Azure.

 

Thoughts

I’ve been a fan of Zerto for some time. They’ve historically done a lot with DR solutions and are now moving nicely beyond just DR into “IT Resilience”, with a solution that aims to incorporate a range of features. Zerto have also been pretty transparent with the market in terms of their vision for version 7. There’s an on-demand webinar you can register for that will provide some further insights into what that will bring. I’m a fan of their multi-cloud strategy, and I’m looking forward to seeing that continue to evolve.

I like it when companies aren’t afraid to show their hand a little. Too often companies focus on keeping these announcements a big secret until some special event or arbitrary date in a marketing team’s calendar. I know that Zerto haven’t quite finished version 7 yet, but they have been pretty upfront about the direction they’re trying to head in and some of the ways they’re intending to get there. In my opinion this is a good thing, as it gives their customer base time to prepare, and an idea of what capabilities they’ll be able to leverage in the future. Ultimately, Zerto are providing a solution that is geared up to help protect critical infrastructure assets and move data around to where you need it to be (whether it is planned or not). Zerto seem to understand that the element of surprise isn’t really what their customers are in to when looking at these types of solutions. It isn’t always about being the first company to offer this or that capability. Instead, it should be about offering capabilities that actually work reliably.

Scale Computing Announces Partnership With APC by Schneider Electric For DCIAB

(I’m really hoping the snappy title will bring in a few more readers). I recently had a chance to speak with Doug Howell, Senior Director Global Alliances at Scale Computing about their Data Centre In A Box (DCIAB) offering in collaboration with APC by Schneider Electric and thought I’d share some thoughts.

 

It’s A Box

Well, a biggish box. The solution is built on APC’s Micro Data Centre solution, combined with 3 Scale HC3 1150 nodes. The idea is that you have 1 SKU to deal with, which includes the Scale HC3 nodes, UPS, PDUs, and rack. You can then wheel it in, plug it in to the wall and network, and it’s ready to go. Howell mentioned that they have a customer that is in the process of deploying a significant number of these things in the wild.

Note that this is slightly different to the EMEA campaign with Lenovo from earlier in the year and is focused, at this stage, on the North American market. You can grab the solution brief from here.

 

Thoughts

The “distributed enterprise” has presented challenges to IT organisations for years now. Not everyone works in a location that is nicely co-located with headquarters. And these folks need compute and storage too. You’ve no doubt heard about how the “edge” is the new hotness in IT, and I frequently hear pitches from vendors talking about how they handle storage or compute requirements at the edge in some kind of earth-shattering way. It’s been a hard problem to solve, because locality (either for storage or compute or both) is generally a big part of the success of these solutions, particularly from the end user’s perspective. This is oftentimes at odds with traditional enterprise deployments, where all of the key compute and storage components are centrally located for ease of access, management and protection. Improvements in WAN technologies and distributed application availability are changing that story to an extent, though, hence the requirement for these kinds of edge solutions. Sometimes, you just need to have stuff close to where your main business activity is occurring.

So what makes the Scale and APC offering any different? Nothing really, except that Scale have built their reputation on being able to deliver simple to operate hyper-converged infrastructure to small and medium enterprises with a minimum of fuss and at a reasonable price point. The cool thing here is that you’re also leveraging APC’s ability to deliver robust micro DC services with Scale’s offering that can fit in well with their other solutions, such as DRaaS.

Not every solution from every vendor needs to be unique for it to stand out from the crowd. Scale have historically demonstrated a relentless focus on quality products, excellent after-sales support and market focus. This collaboration will no doubt open up some more doors for them with APC customers who were previously unaware of the Scale story (and vice versa). This can only be a good thing in my opinion.

Cohesity Announces Helios

I recently had the opportunity to hear from Cohesity (via a vExpert briefing – thanks for organising this TechReckoning!) regarding their Helios announcement and thought I’d share what I know here.

 

What Is It?

If we’re not talking about the god and personification of the Sun, what are we talking about? Cohesity tells me that Helios is a “SaaS-based data and application orchestration and management solution”.

[image courtesy of Cohesity]

Here is the high-level architecture of Helios. There are three main features:

  • Multi-cluster management – Control all your Cohesity clusters located on-premises, in the cloud or at the edge from a single dashboard;
  • SmartAssist – Gives critical global operational data to the IT admin; and
  • Machine Learning Engine – Gives IT Admins machine driven intelligence so that they can make an informed decision.

All of this happens when Helios collects, anonymises, aggregates, and analyses globally available metadata and gives actionable recommendations to IT Admins.

 

Multi-cluster Management

Multi-cluster management is just that: the ability to manage more than one cluster through a unified UI. The cool thing is that you can rollout policies or make upgrades across all your locations and clusters with a single click. It also provides you with the ability to monitor your Cohesity infrastructure in real-time, as well as being able to search and generate reports on the global infrastructure. Finally, there’s an aggregated, simple to use dashboard.

 

SmartAssist

SmartAssist is a feature that provides you with the ability to have smart management of SLAs in the environment. The concept is that if you configure two protection jobs in the environment with competing requirements, the job with the higher SLA will get priority. I like this idea as it prevents people doing silly things with protection jobs.
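The core of that idea can be sketched in a few lines: order jobs so the tighter SLA wins contention (the job structure here is hypothetical, not Cohesity's API):

```python
def schedule(jobs: list) -> list:
    # Order protection jobs so the tighter SLA (smaller RPO, in minutes)
    # gets priority when jobs compete for resources
    return sorted(jobs, key=lambda j: j["sla_minutes"])

jobs = [{"name": "dev-vms", "sla_minutes": 1440},
        {"name": "prod-db", "sla_minutes": 15}]
print([j["name"] for j in schedule(jobs)])  # ['prod-db', 'dev-vms']
```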

 

Machine Learning

The Machine Learning part of the solution provides a number of things, including insights into capacity consumption. And proactive wellness? It’s not a pitch for some dodgy natural health product, but instead gives you the ability to perform:

  • Configuration validations, preventing you from doing silly things in your environment;
  • Blacklist version control, stopping known problematic software releases spreading too far in the wild; and
  • Hardware health checks, ensuring things are happy with your hardware (important in a software-defined world).
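The blacklist version control idea, for instance, reduces to a simple gate on upgrades (the version numbers here are made up for illustration):

```python
BLACKLISTED = {"6.1.0", "6.1.1"}  # hypothetical known-bad releases

def upgrade_allowed(target_version: str, blacklist=BLACKLISTED) -> bool:
    # Block upgrades to releases the wellness service has flagged as problematic
    return target_version not in blacklist

print(upgrade_allowed("6.1.1"))  # False
print(upgrade_allowed("6.2.0"))  # True
```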

 

Thoughts and Further Reading

There’s a lot more going on with Helios, but I’d like to have some stick time with it before I have a lot more to say about it. People are perhaps going to be quick to compare this with other SaaS offerings, but I think Cohesity might be doing some different things, with a bit of a different approach. You can’t go five minutes on the Internet without hearing about how ML is changing the world. If nothing else, this solution delivers a much needed consolidated view of the Cohesity environment. This seems like an obvious thing, but probably hasn’t been necessary until Cohesity landed the type of customers that had multiple clusters installed all over the place.

I also really like the concept of a feature like SmartAssist. There’s only so much guidance you can give people before they have to do some thinking for themselves. Unfortunately, there are still enough environments in the wild where people are making the wrong decision about what priority to place on jobs in their data protection environment. SmartAssist can do a lot to take away the possibility that things will go awry from an SLA perspective.

You can grab a copy of the data sheet here, and read a blog post by Raj Dutt here. El Reg also has some coverage of the announcement here.

Rubrik Announces Polaris Radar

Polaris?

I’ve written about Rubrik’s Polaris offering in the past, with GPS being the first cab off the rank.  You can think of GPS as the command and control platform, offering multi-cloud control and policy management via the Polaris SaaS framework. I recently had the opportunity to hear from Chris Wahl about Radar and thought it worthwhile covering here.

 

The Announcement

Rubrik announced recently (fine, a few weeks ago) that Polaris Radar is now generally available.

 

The Problem

People don’t want to hear about the problem, because they already know what it is and they want to spend time hearing about how the vendor is going to solve it. I think in this instance, though, it’s worth re-iterating that security attacks happen. A lot. According to the Cisco 2017 Annual Cybersecurity Report, ransomware attacks are growing by more than 350% annually. It’s Rubrik’s position that security is heavily focused on the edge, with firewalls and desktop protection being the main tools deployed. “Defence in depth is lopsided”, with a focus on prevention, not necessarily on recovery. According to Wahl, “it’s hard to bounce back fast”.

 

What It Does

So what does Radar do (in the context of Rubrik Polaris)? The idea is that it increases the intelligence available to know when you get hit, and helps you to recover faster. The goal of Radar is fairly straightforward, with the following activities being key to the solution:

  • Detection – identify all strains of ransomware;
  • Analysis – understand impact of an attack; and
  • Recovery – restore as quickly as possible.

Radar achieves this by:

  • Detecting anomalies – leverage insights on suspicious activity to accelerate detection;
  • Analysing threat impact – spend less time discovering which applications and files were impacted; and
  • Accelerating recovery – minimise downtime by simplifying manual processes into just a few clicks.

 

How?

Rubrik tell me they use (drumroll please) Machine Learning for detection. Is it really machine learning? That doesn’t really matter for the purpose of this story.

[image courtesy of Rubrik]

The machine learning model learns the baseline behaviour, detects anomalies and alerts as they come in. So how does that work then?

1. Detect anomalies – apply machine learning to application metadata to detect and alert on unusual change activity with protected data, such as ransomware.

What happens post anomaly detection?

  • Email alert is sent to user
  • Radar inspects snapshot for encryption
  • Results uploaded to Polaris
  • User informed of results (via the Polaris UI)

2. Analyse threat impact – Visualise how an attack impacted the system with a detailed view of file content changes at the time of the event.

3. Accelerate recovery – Select all impacted resources, specify the desired location, and restore the most recent clean versions with a few clicks. Rubrik automates the rest of the restore process.
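The detection step above boils down to comparing the latest change activity against a learned baseline. A toy z-score check as a stand-in for Radar's model (not Rubrik's actual implementation):

```python
from statistics import mean, stdev

def is_anomalous(history: list, latest: float, threshold: float = 3.0) -> bool:
    # Flag a backup whose changed-file count sits far outside the baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu
    return (latest - mu) / sigma > threshold

daily_changes = [120, 98, 110, 105, 130, 101, 115]  # typical churn per backup
print(is_anomalous(daily_changes, 25000))  # True -- ransomware-style mass change
print(is_anomalous(daily_changes, 118))    # False -- within normal variation
```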

 

Thoughts and Further Reading

I think there’s a good story to tell with Polaris. SaaS is an accessible way of delivering features to the customer base without the angst traditionally associated with appliance platform upgrades. Data security should be a big part of data protection. After all, data protection is generally critical to recovery once there’s been a serious breach. We’re no longer just protecting against users inside the organisation accidentally deleting large chunks of data, or having to recover from serious equipment failures. Instead, we’re faced with the reality that a bunch of idiots with bad intentions are out to wreck some of our stuff and make a bit of coin on the side. The sooner you know something has gone awry, the quicker you can hopefully recover from the problem (and potentially re-evaluate some of your security). Being attacked shouldn’t be about being ashamed, but it should be about being able to quickly recover and get on with whatever your company does to make its way in the world. With this in mind, I think that Rubrik are on the right track.

You can grab the data sheet from here, and Chris has an article worth checking out here. You can also register to access the Technical Overview here.

NetApp Announces NetApp ONTAP AI

As a member of NetApp United, I had the opportunity to sit in on a briefing from Mike McNamara about NetApp‘s recently announced AI offering, the snappily named “NetApp ONTAP AI”. I thought I’d provide a brief overview here and share some thoughts.

 

The Announcement

So what is NetApp ONTAP AI? It’s a “proven” architecture delivered via NetApp’s channel partners. It comprises compute, storage and networking. Storage is delivered over NFS. The idea is that you can start small and scale out as required.

Hardware

Software

  • NVIDIA GPU Cloud Deep Learning Stack
  • NetApp ONTAP 9
  • Trident, dynamic storage provisioner

Support

  • Single point of contact support
  • Proven support model

 

[image courtesy of NetApp]

 

Thoughts and Further Reading

I’ve written about NetApp’s Edge to Core to Cloud story before, and this offering certainly builds on the work they’ve done with big data and machine learning solutions. Artificial Intelligence (AI) and Machine Learning (ML) solutions are like big data from five years ago, or public cloud. You can’t go to any industry event, or take a briefing from an infrastructure vendor, without hearing all about how they’re delivering solutions focused on AI. What you do with the gear once you’ve bought one of these spectacularly ugly boxes is up to you, obviously, and I don’t want to get in to whether some of these solutions are really “AI” or not (hint: they’re usually not). While the vendors are gushing breathlessly about how AI will conquer the world, if you tone down the hyperbole a bit, there’re still some fascinating problems being solved with these kinds of solutions.

I don’t think that every business, right now, will benefit from an AI strategy. As much as the vendors would like to have you buy one of everything, these kinds of solutions are very good at doing particular tasks, most of which are probably not in your core remit. That’s not to say that you won’t benefit in the very near future from some of the research and development being done in this area. And it’s for this reason that I think architectures like this one, and those from NetApp’s competitors, are contributing something significant to the ongoing advancement of these fields.

I also like that this is delivered via channel partners. It indicates, at least at first glance, that AI-focused solutions aren’t simply something you can slap a SKU on and sell hundreds of. Partners generally have a better breadth of experience across the various hardware, software and services elements and their respective constraints, and will often be in a better position to spend time understanding the problem at hand rather than treating everything as the same problem with one solution. There’s also less chance that the partner’s sales people will have performance accelerators tied to selling one particular line of products. This can be useful when trying to solve problems that are spread across multiple disciplines and business units.

The folks at NVIDIA have made a lot of noise in the AI / ML marketplace lately, and with good reason. They know how to put together blazingly fast systems. I’ll be interested to see how this architecture goes in the marketplace, and whether customers are primarily from the NetApp side of the fence, from the NVIDIA side, or perhaps both. You can grab a copy of the solution brief here, and there’s an AI white paper you can download from here. The real meat and potatoes though, is the reference architecture document itself, which you can find here.

Puppet Announces Puppet Discovery, Can Now Find and Manage Your Stuff Everywhere

Puppet recently wrapped up their conference, PuppetConf2017, and made some product announcements at the same time. I thought I’d provide some brief coverage of one of the key announcements here.

 

What’s a Discovery Puppet?

No, it’s Puppet Discovery, and it’s the evolution of Puppet’s focus on container and cloud infrastructure discovery, and the result of feedback from their customers on what’s been a challenge for them. Puppet describe it as “a new turnkey approach to traditional and cloud resource discovery”.

It also provides:

  • Agentless service discovery for AWS EC2, containers, and physical hosts;
  • Actionable views across the environment; and
  • The ability to bring unmanaged resources under Puppet management.

Puppet Discovery currently allows for discovery of VMware vSphere VMs, AWS and Azure resources, and containers, with support for other cloud vendors, such as Google Cloud Platform, to follow.
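At its core, the "bring unmanaged resources under management" workflow starts with a set difference between what discovery found and what is already managed. A trivial sketch (the resource IDs are made up; this is not Puppet's API):

```python
def unmanaged(discovered: set, managed: set) -> list:
    # Resources that discovery found but that aren't under management yet --
    # the gap a tool like Puppet Discovery is meant to surface
    return sorted(set(discovered) - set(managed))

discovered = {"i-0abc", "i-0def", "vm-web01", "container-7f3"}
managed = {"i-0abc", "vm-web01"}
print(unmanaged(discovered, managed))  # ['container-7f3', 'i-0def']
```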

 

Conclusion and Further Reading

Puppet have been around for some time and do a lot of interesting stuff. I haven’t covered them previously on this blog, but that doesn’t mean they’re not worth watching. I have a lot of customers leveraging Puppet in the wild, and any time companies make the discovery, management and automation of infrastructure easier, I’m all for it. I’m particularly enthusiastic about the hybrid play, as I agree with Puppet’s claim that a lot of these types of solutions work particularly well on static, internal networks but struggle when technologies such as containers and public cloud come into play.

Just like VM sprawl before it, cloud sprawl is a problem that enterprises, in particular, are starting to experience with more frequency. Tools like Discovery can help identify just what exactly has been deployed. Once users have a better handle on that, they can start to make decisions about what needs to stay and what should go. I think this is key to good infrastructure management, regardless of whether you wear jeans and a t-shirt to work or prefer a suit and tie.

The press release for Puppet Discovery can be found here. You can apply to participate in the preview phase here. There’s also a blog post covering the announcement here.

Tintri Announces New Scale-Out Storage Platform

I’ve had a few briefings with Tintri now, and talked about Tintri’s T5040 here. Today they announced a few enhancements to their product line, including:

  • Nine new Tintri VMstore T5000 all flash models with capacity expansion capabilities;
  • VM Scale-out software;
  • Tintri Analytics for predictive capacity and performance planning; and
  • Two new Tintri Cloud offerings.

 

Scale-out Storage Platform

You might be familiar with the T5040, T5060 and T5080 models, with the Tintri VMstore T5000 all-flash series being introduced in August 2015. All three models have been updated with new capacity options ranging from 17 TB to 308 TB. These systems use the latest in 3D NAND technology and high density drives to offer organizations both higher capacity and lower $/GB.

Tintri03_NewModels

The new models have the following characteristics:

  • Federated pool of storage. You can now treat multiple Tintri VMstores—both all-flash and hybrid-flash nodes—as a pool of storage. This makes management, planning and resource allocation a lot simpler. You can have up to 32 VMstores in a pool.
  • Scalability and performance. The storage platform is designed to scale to more than one million VMs. Tintri tell me that the “[s]eparation of control flow from data flow ensures low latency and scalability to a very large number of storage nodes”. This allows you to scale from small to very large with new and existing systems, all-flash and hybrid, partially or fully populated.
  • The VM Scale-out software works across any standard high performance Ethernet network, eliminating the need for proprietary interconnects. The VM Scale-out software automatically provides best-placement recommendations for VMs.
  • Scale compute and storage independently. Loose coupling of storage and compute provides customers with maximum flexibility to scale these elements independently. I think this is Tintri’s way of saying they’re not (yet) heading down the hyperconverged path.

VM Scale-out Software

Tintri’s new VM Scale-out software (included with the Tintri Global Center Advanced license) provides the following capabilities:

  • Predictive analytics derived from one million statistics collected every 10 minutes across 30 days of history, accounting for peak loads instead of average loads, providing (according to Tintri) the most accurate predictions. Deep workload analysis identifies VMs that are growing rapidly and applies sophisticated algorithms to model the growth ahead and avoid resource constraints.
  • Least-cost optimization based on multi-dimensional modelling. The control algorithm constantly optimizes across the thousands of VMs in each pool of VMstores, taking into account space savings, the resources required by each VM, and the cost in time and data to move VMs, and makes the least-cost recommendation for VM migration that optimizes the pool.
  • Retain VM policy settings and stats. When a VM is moved, not only are the snapshots moved with the VM, the statistics, protection and QoS policies migrate as well, using an efficient compressed and deduplicated replication protocol.
  • Supports all major hypervisors.
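To make the least-cost optimization idea above a little more concrete, here’s a minimal sketch of how a placement recommendation engine of this kind might work. To be clear, this is not Tintri’s actual algorithm; the class names, fields and figures are all invented for illustration. The core idea is weighing each VM’s resource needs against the cost (in data moved) of migrating it, and recommending the cheapest move that a target node can actually absorb.

```python
# Hypothetical sketch of a least-cost VM placement recommendation.
# NOT Tintri's actual algorithm; names, weights and figures are invented
# to illustrate weighing per-VM resource needs against migration cost.

from dataclasses import dataclass

@dataclass
class VM:
    name: str
    space_gb: float      # capacity the VM needs
    iops: float          # performance the VM needs
    move_cost_gb: float  # data that must be copied to migrate it

@dataclass
class VMstore:
    name: str
    free_space_gb: float
    free_iops: float

def recommend_migration(vms, overloaded, stores):
    """Pick the (vm, target) pair that relieves the overloaded node
    for the least migration cost, respecting target headroom."""
    best = None
    for vm in vms:
        for store in stores:
            if store is overloaded:
                continue
            if store.free_space_gb < vm.space_gb or store.free_iops < vm.iops:
                continue  # target can't absorb this VM
            if best is None or vm.move_cost_gb < best[0].move_cost_gb:
                best = (vm, store)
    return best

vms = [VM("db01", 500, 8000, 450), VM("web02", 80, 1200, 60)]
stores = [VMstore("t5040-a", 0, 0), VMstore("t5080-b", 1000, 20000)]
rec = recommend_migration(vms, stores[0], stores)
print(rec[0].name, "->", rec[1].name)  # web02 -> t5080-b
```

The real product presumably optimizes across many more dimensions (and thousands of VMs at once), but the shape of the problem — feasibility check plus cost minimization — is the same.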

Tintri04_ScaleOut

You can check out a YouTube video on Tintri VM Scale-out (covering optimal VM distribution) here.

Tintri Analytics

Tintri has always offered real-time, VM-level analytics as part of its Tintri Operating System and Tintri Global Center management system. This has now been expanded to include a SaaS offering of predictive analytics that provides organizations with the ability to model both capacity and performance requirements. Powered by big data engines such as Apache Spark and Elasticsearch, Tintri Analytics is capable of analyzing stats from 500,000 VMs over several years in one second. By mining the rich VM-level metadata, Tintri Analytics provides customers with information about their environment to help them make better decisions about applications’ behaviours and storage needs.

Tintri Analytics is a SaaS tool that allows you to model storage needs up to 6 months into the future based on up to 3 years of historical data.

Tintri01_Analytics

Here is a shot of the dashboard. You can see a few things here, including:

  • Your live resource usage for your entire footprint (up to 32 VMstores);
  • Average consumption per VM (bottom left); and
  • The types of applications that are your largest consumers of Capacity, Performance and Working Set (bottom center).

Tintri02_Analytics

Here you can see exactly how your usage of capacity, performance and working set has been trending over time. You can also see when you can expect to run out of these resources (and which is on the critical path). It also provides the ability to change the timeframe to alter the projections, or to drill into specific application types to understand their impact on your footprint.
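The run-out projections described above are, at their simplest, trend extrapolation. As a rough sketch of the idea (with invented figures, and certainly far simpler than whatever Tintri Analytics actually does with three years of data), you can fit a linear trend to historical usage and solve for the day it crosses the capacity ceiling:

```python
# Hypothetical illustration of capacity run-out projection: fit a
# least-squares linear trend to historical usage and estimate when it
# crosses the capacity limit. Figures are invented for the example.

def runout_day(usage, capacity):
    """Return the day the fitted trend reaches `capacity`,
    or None if usage isn't growing."""
    n = len(usage)
    days = range(n)
    mean_x = sum(days) / n
    mean_y = sum(usage) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(days, usage)) \
            / sum((x - mean_x) ** 2 for x in days)
    intercept = mean_y - slope * mean_x
    if slope <= 0:
        return None
    return (capacity - intercept) / slope

# 30 days of capacity usage (TB), growing ~0.5 TB/day from 100 TB
history = [100 + 0.5 * d for d in range(30)]
print(round(runout_day(history, 160)))  # trend hits 160 TB around day 120
```

A production tool would account for seasonality and peak (rather than average) loads, but even this naive version shows why surfacing the critical-path resource on a dashboard is useful.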

There are a number of videos covering Tintri Analytics that I think are worth checking out:

Tintri Cloud Suites

Tintri have also come up with a new packaging model called “Tintri Cloud”. Aimed at folks still keen on private cloud deployments, Tintri Cloud combines the Tintri Scale-out platform and the all-flash VMstores.

Customers can start with a single Tintri VMstore T5040 with 17 TB of effective capacity and scale out to the Tintri Foundation Cloud with 1.2 PB in as few as 8 rack units. Or they can grow all the way to the Tintri Ultimate Cloud, which delivers a 10 PB cloud-ready storage infrastructure for up to 160,000 VMs, delivering over 6.4 million IOPS in 64 RU for less than $1/GB effective. Both the Foundation Cloud and Ultimate Cloud include Tintri’s complete set of software offerings for storage management, VM-level analytics, VM Scale-out, replication, QoS, and lifecycle management.
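Out of curiosity, the quoted Ultimate Cloud figures are easy to sanity-check with a little arithmetic. (I’m assuming binary units here, i.e. 1 PB = 1024 × 1024 GB; Tintri may well quote decimal, which would shift the per-VM number slightly.)

```python
# Back-of-the-envelope check on the quoted Ultimate Cloud figures:
# 10 PB effective, 160,000 VMs, 6.4 million IOPS, 64 RU.
# Binary units assumed (1 PB = 1024*1024 GB); decimal would give ~62 GB/VM.

pb, vms, iops, ru = 10, 160_000, 6_400_000, 64

print(pb * 1024 * 1024 // vms)  # effective GB per VM: 65
print(iops // vms)              # IOPS per VM: 40
print(iops // ru)               # IOPS per rack unit: 100000
```

Roughly 65 GB and 40 IOPS per VM is modest per workload, but 100K IOPS per rack unit is a respectable density figure for a general-purpose VM farm.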

Further Reading and Thoughts

There’s another video covering setting policies on groups of VMs in Tintri Global Center here. You might also like to check out the Tintri Product Launch webinar.

Tintri have made quite a big deal about their “VM-aware” storage in the past, and haven’t been afraid to call out the bigger players on their approach to VM-centric storage. While I think they’ve missed the mark with some of their comments, I’ve enjoyed the approach they’ve taken with their own products. I’ve also certainly been impressed with the demonstrations I’ve been given of the capability built into the arrays and available via Global Center. Deploying workloads to the public cloud isn’t for everyone, and Tintri are doing a bang-up job of going after those who still want to run their VM storage decoupled from their compute and in their own data centre. I love the analytics capability, and the UI looks to be fairly straightforward and informative. Trending still seems to be something companies struggle with, so if a dashboard can offer further insight then it can’t be a bad thing.