Maxta Announces MxIQ

Maxta recently announced MxIQ. I had the opportunity to speak to Barry Phillips (Chief Marketing Officer) and Kiran Sreenivasamurthy (VP, Product Management) and thought I’d share some information from the announcement here. It’s been a while since I’ve covered Maxta, and you can read my previous thoughts on them here.

 

Introducing MxIQ

MxIQ is Maxta’s support and analytics solution and it focuses on four key aspects:

  • Proactive support through data analytics;
  • Preemptive recommendation engine;
  • Forecasting of capacity and performance trends; and
  • Resource planning assistance.

Historical data trends for capacity and performance are available, as well as metadata concerning cluster configuration, licensing information, VM inventory and logs.

Architecture

MxIQ is a client-server solution, with the server component currently hosted by Maxta in AWS. This can be decoupled from AWS and hosted in a private DC environment if customers don’t want their data sitting in AWS. The downside of this is that Maxta won’t have visibility into the environment, and you’ll lose a lot of the advantages of aggregated support data and analytics.

[image courtesy of Maxta]

There is a client component that runs on every node in the cluster at the customer site. Note that only one agent in each cluster is active, with the other agents communicating with the active agent. From a security perspective, you only need to configure an outbound connection, as the server responds to client requests but doesn’t initiate communications with the client. This may change in the future as Maxta adds increased functionality to the solution.

From a heartbeat perspective, the agent talks to the server every minute or so. If, for some reason, it doesn’t check in, a support ticket is automatically opened.
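
To make the architecture a little more concrete, here’s a rough sketch of what an outbound-only heartbeat agent might look like. This is my own illustration in Python, not Maxta’s code – the endpoint, payload, and interval details are all assumptions on my part.

```python
import json
import time
import urllib.request

SERVER_URL = "https://mxiq.example.com/api"  # hypothetical endpoint
HEARTBEAT_INTERVAL = 60                      # the agent checks in roughly every minute

def heartbeat_loop(cluster_id: str) -> None:
    """Outbound-only heartbeat: the agent always initiates the connection and
    the server only ever responds, so no inbound firewall rule is required."""
    while True:
        payload = json.dumps({"cluster": cluster_id, "ts": time.time()}).encode()
        request = urllib.request.Request(
            f"{SERVER_URL}/heartbeat",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        try:
            # The server can piggyback pending instructions on its response,
            # since it never initiates contact with the agent itself.
            urllib.request.urlopen(request, timeout=10)
        except OSError:
            # A missed check-in is detected server-side (no heartbeat received
            # within the window), which is what triggers the automatic ticket.
            pass
        time.sleep(HEARTBEAT_INTERVAL)
```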

[image courtesy of Maxta]

Privileges

There are three privilege levels available with the MxIQ solution.

  • Customer
  • Partner
  • Admin

Note that Admin (Maxta support) access needs to be approved by the customer.

[image courtesy of Maxta]

The dashboard provides an easy-to-consume overview of what’s going on with managed Maxta clusters, and you can tell at a glance if there are any problems or areas of concern.

[image courtesy of Maxta]

 

Thoughts

I asked the Maxta team if they thought this kind of solution would result in more work for support staff as there’s potentially more information coming in and more support calls being generated. Their opinion was that, as more and more activities were automated, the workload would decrease. Additionally, logs are collected every four hours. This saves Maxta support staff time chasing environmental information after the first call is logged. I also asked whether the issue resolution was automated. Maxta said it wasn’t right now, as it’s still early days for the product, but that’s the direction it’s heading in.

The type of solution that Maxta are delivering here is nothing new in the marketplace, but that doesn’t mean it’s not valuable for Maxta and their customers. I’m a big fan of adding automated support and monitoring to infrastructure environments. It makes it easier for the vendor to gather information about how their product is being used, and it provides the ability for them to be proactive, and super responsive, to customer issues as they arise.

From what I can gather from my conversation with the Maxta team, it seems like there’s a lot of additional functionality they’ll be looking to add to the product as it matures. The real value of the solution will increase over time as customers contribute more and more telemetry data to the environment. This will obviously improve Maxta’s ability to respond quickly to support issues, and, potentially, give them enough information to avoid some of the more common problems in the first place. Finally, the capacity planning feature will no doubt prove invaluable as customers continue to struggle with growth in their infrastructure environments. I’m really looking forward to seeing how this product evolves over time.

NVMesh 2 – A Compelling Sequel From Excelero

The Announcement

Excelero recently announced NVMesh 2 – the next iteration of their NVMesh product. NVMesh is a software-only solution designed to pool NVMe-based PCIe SSDs.

[image courtesy of Excelero]

Key Features

There are three key features that have been added to NVMesh.

  • MeshConnect – adds support for traditional network technologies (TCP/IP and Fibre Channel) alongside the already supported InfiniBand, RoCE v2, RDMA, and NVMe-oF, giving NVMesh the widest selection of supported protocols and fabrics among software-defined storage platforms.
  • MeshProtect – offering flexible protection levels for differing application needs, including mirrored and parity-based redundancy.
  • MeshInspect – with performance analytics for pinpointing anomalies quickly and at scale.

Performance

Excelero have said that NVMesh delivers “shared NVMe at local performance and 90+% storage efficiency that helps further drive down the cost per GB”.

Protection

There’s also a range of protection options available now. Excelero tell me that you can start at level 0 (no protection, lowest latency) and go all the way to “MeshProtect 10+2” (distributed dual parity). This allows customers to “choose their preferred level of performance and protection”, while distributing data redundancy services to eliminate the storage controller bottleneck.
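
As a back-of-the-envelope illustration of why distributed parity beats mirroring on capacity efficiency (my own arithmetic – the 18+2 stripe width is a made-up example of how a scheme gets past 90%, not a configuration I’ve confirmed with Excelero):

```python
def storage_efficiency(data_strips: int, parity_strips: int) -> float:
    """Usable fraction of raw capacity for an N+M protection scheme."""
    return data_strips / (data_strips + parity_strips)

print(f"Mirroring (1+1):     {storage_efficiency(1, 1):.0%}")   # 50%
print(f"Dual parity (10+2):  {storage_efficiency(10, 2):.1%}")  # 83.3%
print(f"Wider stripe (18+2): {storage_efficiency(18, 2):.0%}")  # 90%
```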

Visibility

One of my favourite things about NVMesh 2 is the MeshInspect feature, with a “built-in statistical collection and display, stored in a scalable NoSQL database”.

[image courtesy of Excelero]

 

Thoughts And Further Reading

Excelero emerged from stealth mode at Storage Field Day 12. I was impressed with their offering back then, and they continue to add features while focussing on delivering top-notch performance via a software-only solution. It feels like there’s a lot of attention on NVMe-based storage solutions, and with good reason. These things can go really, really fast. There are a bunch of startups with an NVMe story, and the bigger players are all delivering variations on these solutions as well.

Excelero seem well placed to capitalise on this market interest, and their decision to focus on a software-only play seems wise, particularly given that some of the standards, such as NVMe over TCP, haven’t been fully ratified yet. This approach will also appeal to the aspirational hyperscalers, because they can build their own storage solution, source their own devices, and still benefit from a fast software stack that can deliver performance in spades. Excelero also supports a wide range of transports now, with the addition of NVMe over FC and TCP support.

NVMesh 2 looks to be smoothing some of the rougher edges that were present with version 1, and I’m pumped to see the focus on enhanced visibility via MeshInspect. In my opinion these kinds of tools are critical to the uptake of solutions such as NVMesh in both the enterprise and cloud markets. The broadening of the connectivity story, as well as the enhanced resiliency options, make this something worth investigating. If you’d like to read more, you can access a white paper here (registration required).

Vembu BDR Suite 4.0 Is Coming

Disclaimer

Vembu are a site sponsor of PenguinPunk.net. They’ve asked me to look at their product and write about it. I’m in the early stages of evaluating the BDR Suite in the lab, but thought I’d pass on some information about their upcoming 4.0 release. As always, if you’re interested in these kind of solutions, I’d encourage you to do your own evaluation and get in touch with the vendor, as everyone’s situation and requirements are different. I can say from experience that the Vembu sales and support staff are very helpful and responsive, and should be able to help you with any queries. I recently did a brief article on getting started with BDR Suite 3.9.1 that you can download from here.

 

New Features

So what’s coming in 4.0?

Hyper-V Cluster Backup

Vembu will support backing up VMs in a Hyper-V cluster. Even if VMs configured for backup are moved from one host to another, incremental backups will continue without any interruption.

Shared VHDx Backup

Vembu now supports backup of shared VHDx files in Hyper-V.

CheckSum-based Incrementals

Vembu uses Changed Block Tracking (CBT) for incremental backups. In cases where CBT fails, a checksum-based comparison will be used instead, so that incrementals can continue without interruption.
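
To give you a feel for how checksum-based change detection generally works when CBT data can’t be trusted, here’s a minimal sketch of the technique. It’s my own illustration, not Vembu’s implementation – the block size and hash choice are assumptions.

```python
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # assumed 4MB block size

def changed_blocks(disk_path: str, previous_checksums: list[bytes]) -> list[int]:
    """Fallback change detection: hash every block of the virtual disk and
    compare against the checksums recorded at the last backup."""
    changed = []
    with open(disk_path, "rb") as disk:
        index = 0
        while True:
            block = disk.read(BLOCK_SIZE)
            if not block:
                break
            digest = hashlib.sha256(block).digest()
            if index >= len(previous_checksums) or digest != previous_checksums[index]:
                changed.append(index)  # block is new or modified since the last backup
            index += 1
    return changed
```

The trade-off is obvious enough: the whole disk has to be read and hashed, so it’s slower than CBT, but the incremental chain stays intact.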

Credential Manager

No need to enter credentials every time: Vembu Credential Manager now allows you to manage the credentials of hosts and the VMs running on them. This will be particularly handy if you’re doing a lot of application-aware backup job configuration.

 

Thoughts

I had a chance to speak with Vembu about the product’s functionality. There’s a lot to like in terms of breadth of features. I’m interested in seeing how 4.0 goes when it’s released and hope to do a few more articles on the product then. If you’re looking to evaluate the product, this evaluator’s guide is as good a place as any to start. As an aside, Vembu are also offering 10% off their suite this Halloween (until November 2nd) – see here for more details.

For a fuller view of what’s coming in 4.0, you can read Vladan’s coverage here.

Elastifile Announces v3.0

Elastifile recently announced version 3.0 of their product. I had the opportunity to speak to Jerome McFarland (VP of Marketing) and thought I’d share some information from the announcement here. If you haven’t heard of them before, “Elastifile augments public cloud capabilities and facilitates cloud consumption by delivering enterprise-grade, scalable file storage in the cloud”.

 

The Announcement

ClearTier

One of the major features of the 3.0 release is “ClearTier”, delivering integration between file and object storage in public clouds. With ClearTier, you have object storage expanding the file system namespace. The cool thing about this is that Elastifile’s ECFS provides transparent read/write access to all data. No need to re-tool applications to take advantage of the improved economics of object storage in the public cloud.

How Does It Work?

All data is accessible through ECFS via a standard NFS mount, and application access to object data is routed automatically. Data tiering occurs automatically according to user-defined policies (sketched in code after the list below) specifying:

  • Targeted capacity ratio between the file and object tiers;
  • Eligibility for data demotion (i.e. minimum time since last access); and
  • Promotion behaviour in response to object data access.
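
To make the policy model a bit more concrete, here’s a toy sketch of how such tiering rules might be evaluated. The field names are invented for illustration and don’t reflect Elastifile’s actual policy schema.

```python
import time
from dataclasses import dataclass

@dataclass
class TieringPolicy:
    target_object_ratio: float  # desired fraction of total capacity on the object tier
    min_idle_seconds: float     # minimum time since last access before demotion

@dataclass
class FileInfo:
    path: str
    size_bytes: int
    last_access_ts: float

def plan_demotions(files: list[FileInfo], object_tier_bytes: int,
                   total_bytes: int, policy: TieringPolicy) -> list[FileInfo]:
    """Demote eligible files (coldest first) until the object tier holds
    the targeted share of overall capacity."""
    shortfall = policy.target_object_ratio * total_bytes - object_tier_bytes
    now = time.time()
    plan = []
    for f in sorted(files, key=lambda f: f.last_access_ts):  # coldest first
        if shortfall <= 0:
            break
        if now - f.last_access_ts >= policy.min_idle_seconds:
            plan.append(f)
            shortfall -= f.size_bytes
    return plan
```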

Bursting

ClearTier gets even more interesting when you combine it with Elastifile’s CloudConnect: using CloudConnect to get data to the public cloud in the first place, and then using ClearTier to push data to object storage.

[image courtesy of Elastifile]

It becomes a simple process, and consists of two steps:

  1. Move on-premises data (from any NAS) to cloud-based object storage using CloudConnect; and
  2. Deploy ECFS with a pointer to the designated object store.

Get Snappy

ClearTier also provides the ability to store snapshots on an object tier. Snapshots occur automatically according to user-defined policies specifying:

  • Data to include;
  • Destination for snapshot (i.e. file storage / object storage); and
  • Schedule for snapshot creation.

The great thing is that all snapshots are accessible through ECFS via the same NFS mount.

 

Thoughts And Further Reading

I was pretty impressed with Elastifile’s CloudConnect solution when they first announced it. When you couple CloudConnect with something like ClearTier, and have it sitting on top of the ECFS foundation, it strikes me as a pretty cool solution. If you’re using applications that rely heavily on NFS, for example, ClearTier gives you a way to leverage the traditionally low cost of cloud object storage with the improved performance of file. I like the idea that you can play with the ratio of file and object, and I’m a big fan of not having to re-tool my file-centric applications to take advantage of object economics. The ability to store a bunch of snapshots on the object tier also adds increased flexibility in terms of data protection and storage access options.

The ability to burst workloads is exactly the kind of technical public cloud use case that we’ve been talking about in slideware for years now. The reality, however, has been somewhat different. It looks like Elastifile are delivering a solution that competes aggressively with some of the leading cloud providers’ object solutions, whilst also giving the storage array vendors, now dabbling in cloud solutions, pause for thought. There are a bunch of interesting use cases, particularly if you need to access a bunch of compute, and large data sets via file-based storage, in a cloud environment for short periods of time. If you’re looking for a cost-effective, scalable storage solution, I think that Elastifile are worth checking out.

Imanis Data Overview and 4.0 Announcement

I recently had the opportunity to speak with Peter Smails and Jay Desai from Imanis Data. They provided me with an overview of what the company does and a view of their latest product announcement. I thought I’d share some of it here as I found it pretty interesting.

 

Overview

Imanis Data provides enterprise data management for Hadoop and NoSQL running on-premises or in the public cloud.

Data Management

A big part of the Imanis Data story revolves around the “three pillars” of data management, namely:

  • Protection – providing redundancy in case of a disaster;
  • Orchestration – moving data around for different use cases (e.g. test and dev, cloud migration, archival); and
  • Automation – using machine learning to automate data management functions, e.g. detecting anomalies (ThreatSense) and SmartPolicies for backups based on RPO/RTO.

The software itself is hardware-agnostic, and can run on any virtual, physical, or container-based platform. It can also run on any cloud, and hence on any storage. You start with three nodes, and scale out from there. Imanis Data tell me that everything runs in parallel, and it’s agentless, using native APIs for the platforms. This is a big plus when it comes to protecting these kinds of workloads, as there’s usually a large number of hosts involved, and managing agents everywhere is a real pain.

It also delivers storage optimisation services, and supports erasure coding, compression, and content-aware deduplication. There’s a nice paper on the architecture that you can grab from here.

 

What’s New?

So what’s new with 4.0?

Any Point-in-time Recovery

Imanis Data now provides any point-in-time recovery (APITR) for Couchbase, MongoDB, and Cassandra:

  • APITR can be enabled at the bucket level for Couchbase;
  • APITR can be enabled at the repository level for Cassandra and MongoDB;
  • Transaction information is aggressively collected from the primary database; and
  • At recovery time, the user can pick a date and time (see the sketch below).
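
The general mechanics of any point-in-time recovery are worth a quick sketch: restore the newest snapshot at or before the chosen time, then replay the collected transactions up to that point. This is a generic illustration, not Imanis Data’s implementation.

```python
from bisect import bisect_right

def recover_to_time(snapshots, transactions, target_ts):
    """`snapshots` is a time-sorted list of (timestamp, snapshot_id) tuples;
    `transactions` is a time-sorted list of (timestamp, operation) tuples.
    Returns the base snapshot to restore plus the operations to replay."""
    snapshot_times = [ts for ts, _ in snapshots]
    index = bisect_right(snapshot_times, target_ts) - 1
    if index < 0:
        raise ValueError("no snapshot exists at or before the requested time")
    base_ts, base_snapshot = snapshots[index]
    replay = [op for ts, op in transactions if base_ts < ts <= target_ts]
    return base_snapshot, replay
```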

ThreatSense

ThreatSense “learns” from human input and updates the anomaly model. It’s a smart way of doing malware and ransomware detection.

SmartPolicies

So what do SmartPolicies deliver?

  • Autonomous RPO-based backup powered by machine learning;
  • Machine learning model built based on cluster workloads and utilisation;
  • Model determines backup frequency & resource prioritisation;
  • Continuously adapts to meet required RPO; and
  • Provides guidance on required resources to achieve desired RPOs (a simplified sketch of the scheduling arithmetic follows below).
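
By way of illustration, here’s the kind of arithmetic an RPO-driven scheduler has to get right. This is my own simplified sketch – the actual product uses a machine learning model built on workload data, which this doesn’t attempt to reproduce.

```python
def next_backup_interval(rpo_seconds: float,
                         change_rate_mbps: float,
                         backup_throughput_mbps: float) -> float:
    """Pick an interval that still meets the RPO: the interval plus the time
    needed to back up the data written during it must fit within the RPO,
    i.e. solve interval + interval * (change_rate / throughput) <= RPO."""
    overhead = change_rate_mbps / backup_throughput_mbps
    return rpo_seconds / (1.0 + overhead)

# A 1-hour RPO with changes arriving at 20% of backup throughput:
print(next_backup_interval(3600, 20, 100))  # 3000.0 -> back up every 50 minutes
```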

 

Thoughts

I do a lot with a number of data protection vendors in various on-premises and cloud incarnations, but I’m the first to admit that my experience with protection mechanisms for things like NoSQL is non-existent. It seems like that’s not an uncommon problem, and Imanis Data has spent the last five or so years working on fixing that for folks.

I’m intrigued by the idea that policies could be applied to objects based on criteria beyond a standard RPO requirement. In the enterprise I frequently run into situations where the RPO is often at odds with the capabilities of the protection system, or clashing with some critical processing activity that happens at a certain time each night. Getting the balance right can be challenging at the best of times. Like most things related to automation, if the system can do what I need it to do in the time I need it to happen, I’m going to be happy. Particularly if I don’t need to do anything after I’ve set it to run.

Imanis Data seems to be offering up a pretty cool solution that scales well and does a lot of things that are important for protecting critical workloads. Imanis Data tell me they’re not interested in the relational side of things, and are continuing to focus on their core competency for the moment. It looks like pretty neat stuff, and I’m looking forward to seeing what they come up with in the future.

Violin Systems Announces Violin XVS 8

Violin Systems recently announced their new XVS 8 platform. I had the opportunity to speak to Gary Lyng (Chief Marketing Officer) and thought I’d share some thoughts here.

 

Background

A few things have changed for Violin since they folded as Violin Memory and were acquired by Soros in 2017. Firstly, they’re now 100% channel focused. And secondly, according to Lyng, they’re “all about microseconds”.

What Really Matters?

Violin are focused on extreme performance, specifically:

  • Low latency;
  • Consistent performance (24x7x365); and
  • Enterprise data services.

The key use cases they’re addressing are:

  • Tier 0;
  • Realtime insight;
  • OLTP, DB, VDI;
  • AI / ML;
  • Commercial IoT; and
  • Trading, supply chain.

 

The Announcement

The crux of the announcement is the Violin XVS 8.

[image courtesy of Violin Systems]

Specifications

Performance

  • Latency – as low as 50µs, up to 800µs
  • Dedupe – LUN performance improved by >40%

Capacity

  • Usable – 44.3TB – 88.7TB
  • Effective – 256TB – 512TB

Enterprise Data Services

  • Efficiency – dedupe + compression reduction ratio of 6:1; low-impact snapshots, thin provisioning, and thin and thick clones
  • Continuity, protection, and scalability – synchronous replication (local/metro); asynchronous replication; stretch clusters (0 RPO & RTO – 7700); NDU; snapshots (crash consistent); consistency groups (snaps & replication); transparent LUN mirroring; online LUN expansion; capacity pooling across shelves; and a single namespace

Hosts

  • 8x 32Gb FC (NVMe ready) or 8x 10GbE iSCSI

Feature Summary

Performance & Experience Advances

  • Consistent-Performance Guarantee
  • Cloud-based predictive analytics providing insight into future performance needs
  • NVMe over FC

Flexibility & Efficiency

  • Single Platform with selectable dedupe per LUN / Application
  • Snap-Dedupe

Application Infrastructure Ecosystems

Other Neat Features

32Gbps FC connectivity

Concerto OS updates (expected early Q1 2019)

  • Simple software upgrade to existing systems
  • Lowered IO Latency, Higher Bandwidth
  • Lower CPU usage, enabling cost savings through compute and software consolidation
  • Optimised for transporting data from solid state storage to numerous processors

Everyone Has An App Now

All the cool storage vendors have an app. You can walk into your DC and (assuming you have the right credentials) scan a code on the front of the box. This will get you access to cloud-based analytics to see just how your system is performing.

[image courtesy of Violin Systems]

 

Thoughts

Violin Memory were quite the pioneers in the all-flash storage market many years ago. The pundits lamented the issues that Violin had keeping pace with some of the smaller start-ups and big box sellers in recent times. The decision to focus on the “extreme performance” space is an interesting one. Violin certainly have some decent pedigree when it comes to the enterprise data services that these types of high-end customers would be looking for. And it’s not just about speed, it’s also about resilience and reliability. I asked about the decision to pursue NVMe over FC, and Lyng said that the feeling was that technologies such as RoCE weren’t quite there yet.

I’m curious to see whether Violin can continue to have an impact on the market. This isn’t their first rodeo, and if the box can deliver the numbers that have been touted, it will make for a reasonably compelling offering. Particularly in the financial services / transactional space where time is money.

Zerto Announces ZVR 6.5

Zerto recently announced version 6.5 of their Zerto Virtual Replication (ZVR) product, and I had the opportunity to speak with Steve Blow and Caroline Seymour about the announcement.

 

Announcement

More Multi-cloud

Zerto 6.5 includes a number of features that will accelerate multi-cloud adoption, particularly around backup and analytics.

Backup Capabilities

Zerto’s Long Term Retention feature has also been enhanced. You now have the ability to do incremental backups – effectively delivering an incremental-forever capability – with synthetic fulls created as required (there’s a toy sketch of the technique after the list below). There’s also:

  • Support for Microsoft Data Box Edge using standard storage protocols; and
  • The ability to recover individual VMs out of Virtual Protection Groups.
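
To illustrate what incremental-forever with synthetic fulls actually means, here’s a toy sketch of the technique. It’s a generic illustration, not Zerto’s implementation – real products work at the block or object level with far more bookkeeping.

```python
def synthesize_full(base_full: dict, incrementals: list[dict]) -> dict:
    """Build a synthetic full by overlaying each incremental's changed blocks
    onto the previous full -- no new full ever has to cross the wire."""
    synthetic = dict(base_full)      # block_index -> block data
    for increment in incrementals:   # apply oldest first
        synthetic.update(increment)  # newer blocks win
    return synthetic

# Blocks 0-2 in the original full; block 1 rewritten in a later incremental.
full = {0: b"aaaa", 1: b"bbbb", 2: b"cccc"}
print(synthesize_full(full, [{1: b"BBBB"}]))  # {0: b'aaaa', 1: b'BBBB', 2: b'cccc'}
```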

Analytics

Zerto have worked hard to improve their analytics capabilities, providing:

  • Data for quarterly reports, including SLA compliance;
  • Troubleshooting of monthly data anomalies;
  • Enhanced data about VMs including journal size, throughput, IOPS and WAN; and
  • Cloud Service Provider Client Organisational Filter with enhanced visibility to create customer reports and automatically deliver real-time analysis to clients.

 

Events

Zerto have been busy at Microsoft’s Ignite event recently, and are also holding “IT Resilience Roadshow” events in the U.S. and Europe in the next few months in collaboration with Microsoft. There’s a Zerto+Azure workshop being held at each event, as well as the ability to sit for “Zerto+Azure Specialist” Certification. The workshop will give you the opportunity to use Zerto+Azure to:

  • Create a Disaster Recovery environment in Azure;
  • Migrate End of Life Windows Server 2008/SQL Server 2008 to Azure;
  • Migrate your on-premises data centre to Azure; and
  • Move or protect Linux and other workloads to Azure.

 

Thoughts

I’ve been a fan of Zerto for some time. They’ve historically done a lot with DR solutions and are now moving nicely beyond just DR into “IT Resilience”, with a solution that aims to incorporate a range of features. Zerto have also been pretty transparent with the market in terms of their vision for version 7. There’s an on-demand webinar you can register for that will provide some further insights into what that will bring. I’m a fan of their multi-cloud strategy, and I’m looking forward to seeing that continue to evolve.

I like it when companies aren’t afraid to show their hand a little. Too often companies focus on keeping these announcements a big secret until some special event or arbitrary date in a marketing team’s calendar. I know that Zerto haven’t quite finished version 7 yet, but they have been pretty upfront about the direction they’re trying to head in and some of the ways they’re intending to get there. In my opinion this is a good thing, as it gives their customer base time to prepare, and an idea of what capabilities they’ll be able to leverage in the future. Ultimately, Zerto are providing a solution that is geared up to help protect critical infrastructure assets and move data around to where you need it to be (whether it is planned or not). Zerto seem to understand that the element of surprise isn’t really what their customers are into when looking at these types of solutions. It isn’t always about being the first company to offer this or that capability. Instead, it should be about offering capabilities that actually work reliably.

Scale Computing Announces Partnership With APC by Schneider Electric For DCIAB

(I’m really hoping the snappy title will bring in a few more readers). I recently had a chance to speak with Doug Howell, Senior Director Global Alliances at Scale Computing about their Data Centre In A Box (DCIAB) offering in collaboration with APC by Schneider Electric and thought I’d share some thoughts.

 

It’s A Box

Well, a biggish box. The solution is built on APC’s Micro Data Centre solution, combined with 3 Scale HC3 1150 nodes. The idea is that you have 1 SKU to deal with, which includes the Scale HC3 nodes, UPS, PDUs, and rack. You can then wheel it in, plug it into power and the network, and it’s ready to go. Howell mentioned that they have a customer in the process of deploying a significant number of these in the wild.

Note that this is slightly different to the EMEA campaign with Lenovo from earlier in the year and is focused, at this stage, on the North American market. You can grab the solution brief from here.

 

Thoughts

The “distributed enterprise” has presented challenges to IT organisations for years now. Not everyone works in a location that is nicely co-located with headquarters. And these folks need compute and storage too. You’ve no doubt heard about how the “edge” is the new hotness in IT, and I frequently hear pitches from vendors talking about how they handle storage or compute requirements at the edge in some kind of earth-shattering way. It’s been a hard problem to solve, because locality (of storage, compute, or both) is generally a big part of the success of these solutions, particularly from the end user’s perspective. This is oftentimes at odds with traditional enterprise deployments, where all of the key compute and storage components are centrally located for ease of access, management and protection. Improvements in WAN technologies and distributed application availability are changing that story to an extent though, hence the requirement for these kinds of edge solutions. Sometimes, you just need to have stuff close to where your main business activity is occurring.

So what makes the Scale and APC offering any different? Nothing really, except that Scale have built their reputation on being able to deliver simple to operate hyper-converged infrastructure to small and medium enterprises with a minimum of fuss and at a reasonable price point. The cool thing here is that you’re also leveraging APC’s ability to deliver robust micro DC services with Scale’s offering that can fit in well with their other solutions, such as DRaaS.

Not every solution from every vendor needs to be unique for it to stand out from the crowd. Scale have historically demonstrated a relentless focus on quality products, excellent after-sales support and market focus. This collaboration will no doubt open up some more doors for them with APC customers who were previously unaware of the Scale story (and vice versa). This can only be a good thing in my opinion.

Cohesity Announces Helios

I recently had the opportunity to hear from Cohesity (via a vExpert briefing – thanks for organising this TechReckoning!) regarding their Helios announcement and thought I’d share what I know here.

 

What Is It?

If we’re not talking about the god and personification of the Sun, what are we talking about? Cohesity tells me that Helios is a “SaaS-based data and application orchestration and management solution”.

[image courtesy of Cohesity]

Here is the high-level architecture of Helios. There are three main features:

  • Multi-cluster management – Control all your Cohesity clusters located on-premises, in the cloud or at the edge from a single dashboard;
  • SmartAssist – Gives critical global operational data to the IT admin; and
  • Machine Learning Engine – Gives IT admins machine-driven intelligence so that they can make informed decisions.

All of this happens as Helios collects, anonymises, aggregates, and analyses globally available metadata, giving actionable recommendations to IT admins.

 

Multi-cluster Management

Multi-cluster management is just that: the ability to manage more than one cluster through a unified UI. The cool thing is that you can roll out policies or make upgrades across all your locations and clusters with a single click. It also provides you with the ability to monitor your Cohesity infrastructure in real time, as well as being able to search and generate reports on the global infrastructure. Finally, there’s an aggregated, simple-to-use dashboard.

 

SmartAssist

SmartAssist is a feature that provides you with the ability to have smart management of SLAs in the environment. The concept is that if you configure two protection jobs in the environment with competing requirements, the job with the higher SLA will get priority. I like this idea as it prevents people doing silly things with protection jobs.
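
Conceptually, the contention resolution could be as simple as the following sketch (my own illustration, not Cohesity’s code):

```python
from dataclasses import dataclass

@dataclass
class ProtectionJob:
    name: str
    sla_minutes: int  # a tighter SLA means higher priority

def resolve_contention(jobs: list[ProtectionJob]) -> list[ProtectionJob]:
    """When jobs compete for the same window, run the tightest SLA first."""
    return sorted(jobs, key=lambda job: job.sla_minutes)

jobs = [ProtectionJob("file-shares", 240), ProtectionJob("prod-sql", 15)]
print([job.name for job in resolve_contention(jobs)])  # ['prod-sql', 'file-shares']
```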

 

Machine Learning

The Machine Learning part of the solution provides a number of things, including insights into capacity consumption. And proactive wellness? It’s not a pitch for some dodgy natural health product, but instead gives you the ability to perform:

  • Configuration validations, preventing you from doing silly things in your environment;
  • Blacklist version control, stopping known problematic software releases spreading too far in the wild (a trivial sketch follows below); and
  • Hardware health checks, ensuring things are happy with your hardware (important in a software-defined world).
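
The blacklist idea, at its core, is as simple as the following sketch (my illustration – the version strings are made up):

```python
BLACKLISTED_RELEASES = {"6.1.0", "6.1.1"}  # hypothetical known-bad versions

def upgrade_allowed(target_version: str) -> bool:
    """Block rollout of releases that have been flagged as problematic."""
    return target_version not in BLACKLISTED_RELEASES

assert upgrade_allowed("6.1.2")
assert not upgrade_allowed("6.1.0")
```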

 

Thoughts and Further Reading

There’s a lot more going on with Helios, but I’d like to have some stick time with it before I have a lot more to say about it. People are perhaps going to be quick to compare this with other SaaS offerings, but I think Cohesity might be doing some different things, with a bit of a different approach. You can’t go five minutes on the Internet without hearing about how ML is changing the world. If nothing else, this solution delivers a much-needed consolidated view of the Cohesity environment. This seems like an obvious thing, but probably hasn’t been necessary until Cohesity landed the type of customers that had multiple clusters installed all over the place.

I also really like the concept of a feature like SmartAssist. There’s only so much guidance you can give people before they have to do some thinking for themselves. Unfortunately, there are still enough environments in the wild where people are making the wrong decision about what priority to place on jobs in their data protection environment. SmartAssist can do a lot to take away the possibility that things will go awry from an SLA perspective.

You can grab a copy of the data sheet here, and read a blog post by Raj Dutt here. El Reg also has some coverage of the announcement here.

Rubrik Announces Polaris Radar

Polaris?

I’ve written about Rubrik’s Polaris offering in the past, with GPS being the first cab off the rank.  You can think of GPS as the command and control platform, offering multi-cloud control and policy management via the Polaris SaaS framework. I recently had the opportunity to hear from Chris Wahl about Radar and thought it worthwhile covering here.

 

The Announcement

Rubrik announced recently (fine, a few weeks ago) that Polaris Radar is now generally available.

 

The Problem

People don’t want to hear about the problem, because they already know what it is and they want to spend time hearing about how the vendor is going to solve it. I think in this instance, though, it’s worth re-iterating that security attacks happen. A lot. According to the Cisco 2017 Annual Cybersecurity Report, ransomware attacks are growing by more than 350% annually. It’s Rubrik’s position that security is heavily focused on the edge, with firewalls and desktop protection being the main tools deployed. “Defence in depth is lopsided”, with a focus on prevention, not necessarily recovery. According to Wahl, “it’s hard to bounce back fast”.

 

What It Does

So what does Radar do (in the context of Rubrik Polaris)? The idea is that it adds the intelligence to know when you’ve been hit, and helps you to recover faster. The goal of Radar is fairly straightforward, with the following activities being key to the solution:

  • Detection – identify all strains of ransomware;
  • Analysis – understand impact of an attack; and
  • Recovery – restore as quickly as possible.

Radar achieves this by:

  • Detecting anomalies – leverage insights on suspicious activity to accelerate detection;
  • Analysing threat impact – spend less time discovering which applications and files were impacted; and
  • Accelerating recovery – minimise downtime by simplifying manual processes into just a few clicks.

 

How?

Rubrik tell me they use (drumroll please) Machine Learning for detection. Is it really machine learning? That doesn’t really matter for the purpose of this story.

[image courtesy of Rubrik]

The machine learning model learns the baseline behaviour, detects anomalies and alerts as they come in. So how does that work then?

1. Detect anomalies – apply machine learning to application metadata to detect and alert on unusual change activity within protected data, such as ransomware. (A toy illustration of this style of detection follows the steps below.)

What happens post anomaly detection?

  • Email alert is sent to user
  • Radar inspects snapshot for encryption
  • Results uploaded to Polaris
  • User informed of results (via the Polaris UI)

2. Analyse threat impact – Visualise how an attack impacted the system with a detailed view of file content changes at the time of the event.

3. Accelerate recovery – Select all impacted resources, specify the desired location, and restore the most recent clean versions with a few clicks. Rubrik automates the rest of the restore process.
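
To make the detection step a little more concrete, here’s a toy example of baseline-based anomaly detection using a z-score. This is my own simplified sketch – Rubrik’s actual model is doubtless more sophisticated than a single statistic.

```python
import statistics

def is_anomalous(change_rates: list[float], latest: float,
                 threshold: float = 3.0) -> bool:
    """Flag a snapshot whose changed-file rate sits far outside the learned
    baseline -- mass encryption typically touches far more files than
    normal daily churn does."""
    mean = statistics.mean(change_rates)
    stdev = statistics.stdev(change_rates)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > threshold

history = [0.8, 1.2, 1.0, 0.9, 1.1]  # % of files changed per snapshot
print(is_anomalous(history, 42.0))   # True: a ransomware-like spike
```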

 

Thoughts and Further Reading

I think there’s a good story to tell with Polaris. SaaS is an accessible way of delivering features to the customer base without the angst traditionally associated with appliance platform upgrades. Data security should be a big part of data protection. After all, data protection is generally critical to recovery once there’s been a serious breach. We’re no longer just protecting against users inside the organisation accidentally deleting large chunks of data, or having to recover from serious equipment failures. Instead, we’re faced with the reality that a bunch of idiots with bad intentions are out to wreck some of our stuff and make a bit of coin on the side. The sooner you know something has gone awry, the quicker you can hopefully recover from the problem (and potentially re-evaluate some of your security). Being attacked shouldn’t be about being ashamed, but it should be about being able to quickly recover and get on with whatever your company does to make its way in the world. With this in mind, I think that Rubrik are on the right track.

You can grab the data sheet from here, and Chris has an article worth checking out here. You can also register to access the Technical Overview here.