Formulus Black Announces Forsa 3.0

Formulus Black recently announced version 3.0 of its Forsa product. I had the opportunity to speak with Mark Iwanowski and Jing Xie about the announcement and wanted to share some thoughts here.

 

So What’s A Forsa Again?

It’s a software solution for running applications in memory without needing to re-tool your applications or hardware. You can present persistent memory (think Intel Optane) or volatile memory (think DRAM) as a block device to the host and run your applications on that. Here’s a look at the architecture.

[image courtesy of Formulus Black]

Is This Just a Linux Thing?

No, not entirely. There’s Ubuntu and CentOS support out of the box, and Red Hat support is imminent. If you don’t use those operating systems though, don’t stress. You can also run this using a KVM-based hypervisor. So anything supported by that can be supported by Forsa.

But What If My Memory Fails?

Formulus Black has a technology called “BLINK” which provides the ability to copy your data down to SSDs, or you can failover the data to another host.

Won’t I Need A Bunch Of RAM?

Formulus Black uses Bit Markers – a memory-efficiency technology (similar to deduplication) – to make efficient use of the available memory. They call it “amplification” rather than deduplication, as it amplifies the available space.
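Formulus Black hasn’t published the internals of Bit Markers, so the sketch below is only a generic illustration of pattern-based data reduction (fixed-size blocks plus content hashing), not their actual algorithm. It just shows why a vendor might describe the result as an “amplification” ratio: repeated patterns only need to be stored once, so the same physical memory can present a larger logical device.

```python
import hashlib

def amplification_ratio(data: bytes, block_size: int = 4096) -> float:
    """Generic illustration of pattern-based data reduction (not
    Formulus Black's actual Bit Marker algorithm): split the data into
    fixed-size blocks, keep one copy of each unique block, and report
    logical vs. physical space as an 'amplification' ratio."""
    unique_blocks = set()
    total_blocks = 0
    for offset in range(0, len(data), block_size):
        block = data[offset:offset + block_size]
        unique_blocks.add(hashlib.sha256(block).digest())
        total_blocks += 1
    return total_blocks / max(len(unique_blocks), 1)

# Example: highly repetitive data reduces very well, so the same DRAM
# footprint can present a much larger logical block device.
sample = (b"A" * 4096) * 900 + bytes(range(256)) * 16
print(f"Amplification: {amplification_ratio(sample):.1f}x")
```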

Is This Going To Cost Me?

A little, but not as much as you’d think (because nothing’s ever free). The software is licensed on a per-socket basis, so if you decide to add memory capacity you’re not up for additional licensing costs.

 

Thoughts and Further Reading

I don’t do as much work with folks requiring in-memory storage solutions as I’d like to, but I do appreciate the requirement for these kinds of solutions. The big appeal here is that there’s no requirement to re-tool your applications to work in-memory. All you need is something that runs on Linux or KVM and you’re pretty much good to go. Sure, I’m over-simplifying things a little, but it looks like there’s a good story here in terms of the minimal integration required to get some serious performance improvements.

Formulus Black came out of stealth around four and a bit months ago and have already introduced a raft of improvements over version 2.0 of the product. It’s great to see the speed with which they’ve been able to execute on new features. I’m curious to see what’s next, as there’s obviously been a great focus on performance and simplicity.

The cool kids are all talking about the benefits of NVMe-based, centralised storage solutions. And they’re right to do this, as most applications will do just fine with these kinds of storage platforms. But there are still going to be minuscule bottlenecks associated with these devices. If you absolutely need things to run screamingly fast, you’ll likely want to run them in-memory. And if that’s the case, Formulus Black’s Forsa solution might be just what you’re looking for. Plus, it’s a pretty cool name for a company, or possibly an aspiring wizard.

Burlywood Tech Announces TrueFlash Insight

Burlywood Tech came out of stealth a few years ago, and I wrote about their TrueFlash announcement here. I had another opportunity to speak to Mike Tomky recently about Burlywood’s TrueFlash Insight announcement and thought I’d share some thoughts here.

 

The Announcement

Burlywood’s “TrueFlash” product delivers what they describe as a “software-defined SSD”. Since they’ve been active in the market they’ve gained traction in what they call the Tier 2 service provider segments (not necessarily the “Big 7” hyperscalers).

They’ve announced TrueFlash Insight because, in a number of cases, customers don’t know what their workloads really look like. The idea behind TrueFlash Insight is that it can be run in a production environment for a period of time to collect metadata and drive telemetry. Engineers can also be sent on site if required to do the analysis. The data collected with TrueFlash Insight helps Burlywood with the process of designing and tuning the TrueFlash product for the desired workload.
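Burlywood hasn’t published the telemetry format, so the sketch below assumes a hypothetical CSV of per-LBA write counts purely to illustrate the kind of workload profiling described above (bucket the writes, find the hot regions); it’s not the actual Insight tooling.

```python
import csv
from collections import Counter

def hottest_regions(telemetry_csv: str, top_n: int = 5):
    """Toy workload profiler. Assumes a hypothetical CSV with columns
    'lba_start' and 'write_count' (Burlywood's real telemetry format is
    not public). Buckets writes into 1GiB regions and returns the
    busiest ones - the kind of hotspot an SSD might be tuned for."""
    region_size_lbas = (1 << 30) // 512          # 1GiB in 512-byte LBAs
    writes_per_region = Counter()
    with open(telemetry_csv, newline="") as f:
        for row in csv.DictReader(f):
            region = int(row["lba_start"]) // region_size_lbas
            writes_per_region[region] += int(row["write_count"])
    return writes_per_region.most_common(top_n)

if __name__ == "__main__":
    for region, writes in hottest_regions("insight_telemetry.csv"):
        print(f"1GiB region {region}: {writes} writes")
```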

How It Works

  • Insight is available only on Burlywood TrueFlash drives
  • Enabled upon execution of a statement of work (SOW) for Insight analysis services
  • Run your application as normal in a system with one or more Insight-enabled TrueFlash drives
  • Follow the instructions to download the telemetry files
  • Send telemetry data to Burlywood for analysis
  • Burlywood parses the telemetry, analyses data patterns, shares performance information, and identifies potential bottlenecks and trouble spots
  • This information can then be used to tune the TrueFlash SSDs for optimal performance

 

Thoughts and Further Reading

When I wrote about Burlywood previously I was fascinated by the scale that would be required for a company to consider deploying SSDs with workload-specific code sitting on them. And then I stopped and thought about my comrades in the enterprise space struggling to get the kind of visibility into their gear that’s required to make these kinds of decisions. But when your business relies so heavily on good performance, there’s a chance you have some idea of how to get information on the performance of your systems. The fact that Burlywood are making this offering available to customers indicates that even those customers that are on board with the idea of “Software-defined SSDs (SDSSDs?)” don’t always have the capabilities required to make an accurate assessment of their workloads.

But this solution isn’t just for existing Burlywood customers. The good news is it’s also available for customers considering using Burlywood’s product in their DC. It’s a reasonably simple process to get up and running, and my impression is that it will save a bit of angst down the track. Tomky made the comment that, with this kind of solution, you don’t need to “worry about masking problems at the drive level – [you can] work on your core value”. There’s a lot to be said for companies, even the ones with very complex technical requirements, not having to worry about the technical part of the business as much as the business part of the business. If Burlywood can make that process easier for current and future customers, I’m all for it.

StorONE Announces S1-as-a-Service

StorONE recently announced its StorONE-as-a-Service (S1aaS) offering. I had the opportunity to speak to Gal Naor about it and thought I’d share some thoughts here.

 

The Announcement

StorONE’s S1-as-a-Service (S1aaS) is a usage-based solution integrating StorONE’s S1 storage services with Dell Technologies and Mellanox hardware. The idea is they’ll ship you an appliance (available in a few different configurations) and you plug it in and away you go. There’s not a huge amount to say about it as it’s fairly straightforward. If you need more than the 18TB entry-level configuration, StorONE can get you up and running with 60TB thanks to overnight shipping.

Speedonomics

The as-a-Service bit is what most people are interested in, and S1aaS starts at US $999 per month for the 18TB all-flash array that delivers up to 150,000 IOPS. There are a couple of other configurations available as well, including 36TB at $1,797 per month, and 54TB at $2,497 per month. If, for some reason, you decide you don’t want the device any more, or you no longer have that particular requirement, you can cancel your service with 30 days’ notice.
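As a quick sanity check on where the value sits, here’s the per-TB arithmetic on those published tiers (using only the prices quoted above):

```python
# Back-of-the-envelope $/TB/month for the published S1aaS tiers.
tiers = {
    "18TB all-flash": (999, 18),
    "36TB": (1797, 36),
    "54TB": (2497, 54),
}

for name, (monthly_usd, capacity_tb) in tiers.items():
    print(f"{name}: ${monthly_usd / capacity_tb:.2f} per TB per month")

# 18TB: ~$55.50/TB, 36TB: ~$49.92/TB, 54TB: ~$46.24/TB - the larger
# tiers get cheaper per TB, as you'd expect.
```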

 

Thoughts and Further Reading

The idea of consuming storage from vendors on-premises via flexible finance plans isn’t a new one. But S1aaS isn’t a leasing plan. There’s no 60-month commitment and payback plan. If you want to use this for three months for a particular project and then cancel your service, you can. Just as you could with cable. From that perspective, it’s a reasonably interesting proposition. A number of the major storage vendors would struggle to put that much capacity and speed in such a small footprint on-premises for $999 per month. This is the major benefit of a software-based storage product that, by all accounts, can get a lot out of commodity server hardware.

I wrote about StorONE when they came out of stealth mode a few years ago, and noted the impressive numbers they were posting. Are numbers the most important thing when it comes to selecting storage products? No, not always. There’s plenty to be said for “good enough” solutions that are more affordable. But it strikes me that solutions that go really fast and don’t cost a small fortune to run are going to be awfully compelling. One of the biggest impediments to deploying on-premises storage solutions “as-a-Service” is that there’s usually a minimum spend required to make it worthwhile for the vendor or service provider. Most attempts previously have taken more than 2RU of rack space as a minimum footprint, and have required the customer to sign up for minimum terms of 36 – 60 months. That all changes (for the better) when you can run your storage on a server with NVMe-based drives and an efficient, software-based platform.

Sure, there are plenty of enterprises that are going to need more than 18TB of capacity. But are they going to need more than 54TB of capacity that goes at that speed? And can they build that themselves for the monthly cost that StorONE is asking? Maybe. But maybe it’s just as easy for them to look at what their workloads are doing and decide whether they want to run everything on that one solution. And there’s nothing to stop them deploying multiple configurations either.

I was impressed with StorONE when they first launched. They seem to have a knack for getting good performance from commodity gear, and they’re willing to offer that solution to customers at a reasonable price. I’m looking forward to seeing how the market reacts to these kinds of competitive offerings. You can read more about S1aaS here.

Scale Computing Announces HE500 Range

Scale Computing recently announced its “HC3 Edge Platform”. I had a chance to talk to Alan Conboy about it, and thought I’d share some of my thoughts here.

 

The Announcement

The HE500 series has been introduced to provide smaller customers and edge infrastructure environments with components that better meet the sizing and pricing requirements of those environments. There are a few different flavours of nodes, with every node offering Intel Xeon E-2100 CPUs, 32 – 64GB of RAM, and dual power supplies. There are a couple of minor differences with regards to other configuration options.

  • HE500 – 4x 1, 2, 4, or 8TB HDD; 4x 1GbE; 4x 10GbE
  • HE550 – 1x 480GB or 960GB SSD; 3x 1, 2, or 4TB HDD; 4x 1GbE; 4x 10GbE
  • HE550F – 4x 240GB, 480GB, or 960GB SSD; 4x 1GbE; 4x 10GbE
  • HE500T – 4x 1, 2, 4, or 8TB HDD, or 8x 4TB or 8TB HDD; 2x 1GbE
  • HE550TF – 4x 240GB, 480GB, or 960GB SSD; 2x 1GbE

The “T” version comes in a tower form factor, and offers 1GbE connectivity. Everything runs on Scale’s HC3 platform, and offers all of the features and support you expect with that platform. In terms of scalability, you can run up to 8 nodes in a cluster.

 

Thoughts And Further Reading

In the past I’ve made mention of Scale Computing and Lenovo’s partnership, and the edge infrastructure approach is also something that lends itself well to this arrangement. If you don’t necessarily want to buy Scale-badged gear, you’ll see that the models on offer look a lot like the SR250 and ST250 models from Lenovo. In my opinion, the appeal of Scale’s hyper-converged infrastructure story has always been the software platform that sits on the hardware, rather than the specifications of the nodes they sell. That said, these kinds of offerings play an important role in the market, as they give potential customers simple options to deliver solutions at a very competitive price point. Scale tell me that an entry-level 3-node cluster comes in at about US $16K, with additional nodes costing approximately $5K. Conboy described it as “[l]owering the barrier to entry, reducing the form factor, but getting access to the entire stack”.
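Using those ballpark figures (roughly US $16K for a 3-node cluster and roughly US $5K per additional node, remembering they’re approximate numbers rather than a quote), the outlay for larger clusters up to the 8-node maximum works out something like this:

```python
# Rough cluster pricing based on the approximate figures Scale quoted:
# ~US$16K for an entry-level 3-node cluster, ~US$5K per additional node.
BASE_3_NODE_USD = 16_000
PER_EXTRA_NODE_USD = 5_000

for nodes in range(3, 9):             # HC3 Edge clusters run from 3 to 8 nodes
    cost = BASE_3_NODE_USD + (nodes - 3) * PER_EXTRA_NODE_USD
    print(f"{nodes}-node cluster: ~US${cost:,}")
```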

Combine some of these smaller solutions with various reference architectures and you’ve got a pretty powerful offering that can be deployed in edge sites for a small initial outlay. People often deploy compute at the edge because they have to, not because they necessarily want to. Anything that can be done to make operations and support simpler is a good thing. Scale Computing are focused on delivering an integrated stack that meets those requirements in a lightweight form factor. I’ll be interested to see how the market reacts to this announcement. For more information on the HC3 Edge offering, you can grab a copy of the data sheet here, and the press release is available here. There’s a joint Lenovo – Scale Computing case study that can be found here.

Zerto – News From ZertoCON 2019

Zerto recently held their annual user conference (ZertoCON) in Nashville, TN. I had the opportunity to talk to Rob Strechay about some of the key announcements coming out of the event and thought I’d cover them here.

 

Key Announcements

Licensing

You can now acquire Zerto either as a perpetual license or via a subscription. Zerto has offered a form of subscription pricing before, with customers renting the product via managed service providers, but this is the first time it’s being offered directly to customers. Strechay noted that Zerto is “[n]ot trying to move to a subscription-only model”, but they are keen to give customers further flexibility in how they consume the product. Note that the subscription pricing also includes maintenance and support.

7.5 Is Just Around The Corner

If it feels like 7.0 was only just delivered, that’s because it was (in April). But 7.5 is already just around the corner. They’re looking to add a bunch of features, including:

  • Deeper integration with StoreOnce from HPE using a Catalyst-based API, leveraging source-side deduplication
  • Qualification of Azure’s Data Box
  • Cloud mobility – in 7.0 they started down the path with Azure. Zerto Cloud Appliances now autoscale within Azure.

Azure Integration

There’s a lot more focus on Azure in 7.5, and Zerto are working on:

  • Managed failback / managed disks in Azure
  • Integration with Azure Active Directory
  • Adding encryption at rest in AWS, and doing some IAM integration
  • Automated driver injection on the fly as you recover into AWS (with Red Hat)

Resource Planner

Building on their previous analytics work, you’ll also (shortly) be able to download Zerto Virtual Manager. This talks to vCenter and can gather data to help customers plan their VMware to VMware (or to Azure / AWS) migrations.

VAIO

Zerto has now completed the initial certification to use VMware’s vSphere APIs for I/O Filtering (VAIO) and they’ll be leveraging these in 7.5. Strechay said they’ll probably have both versions in the product for a little while.

 

Thoughts And Further Reading

I’d spoken with Strechay previously about Zerto’s plans to compete against the “traditional” data protection vendors, and asked him what the customer response has been to Zerto’s ambitions (and execution). He said that, as they’re already off-siting data (as part of the 3-2-1 data protection philosophy), how hard is it to take it to the next level? He said a number of customers were very motivated to use long term retention, and wanted to move on from their existing backup vendors. I’ve waxed lyrical in the past about what I thought some of the key differences were between periodic data protection, disaster recovery, and disaster avoidance. That doesn’t mean that companies like Zerto aren’t doing a pretty decent job of blurring the lines between the types of solution they offer, particularly with the data mobility capabilities built in to their offerings. I think there’s a lot of scope with Zerto to move into spaces that they’ve previously only been peripherally involved in. It makes sense that they’d focus on data mobility and off-site data protection capabilities. There’s a good story developing with their cloud integration, and it seems like they’ll just continue to add features and capabilities to the product. I really like that they’re not afraid to make promises on upcoming releases and have (thus far) been able to deliver on them.

The news about VAIO certification is pretty big, and it might remove some of the pressure that potential customers have faced previously about adopting protection solutions that weren’t entirely blessed by VMware.

I’m looking forward to seeing what Zerto ends up delivering with 7.5, and I’m really enjoying the progress they’re making with both their on-premises and public cloud focused solutions. You can read Zerto’s press release here, and Andrea Mauro published a comprehensive overview here.

Dell EMC Announces PowerProtect Software (And Hardware)

Disclaimer: I recently attended Dell Technologies World 2019.  My flights, accommodation and conference pass were paid for by Dell Technologies via the Media, Analysts and Influencers program. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Last week at Dell Technologies World there were a number of announcements made regarding Data Protection. I thought I’d cover them here briefly. Hopefully I’ll have the chance to dive a little deeper into the technology in the next few weeks.

 

PowerProtect Software

The new PowerProtect software is billed as Dell EMC’s “Next Generation Data Management software platform” and provides “data protection, replication and reuse, as well as SaaS-based management and self-service capabilities that give individual data owners the autonomy to control backup and recovery operations”. It currently offers support for:

  • Oracle;
  • Microsoft SQL;
  • VMware;
  • Windows Filesystems; and
  • Linux Filesystems.

More workload support is planned to arrive in the next little while. There are some nice features included, such as automated discovery and on-boarding of databases, VMs and Data Domain protection storage. There’s also support for tiering protection data to public cloud environments, and support for SaaS-based management is a nice feature too. You can view the data sheet here.

 

PowerProtect X400

The PowerProtect X400 is being positioned by Dell EMC as a “multi-dimensional” appliance, with support for both scale out and scale up expansion.

There are three “bits” to the X400 story. There’s the X400 cube, which is the brains of the operation. You then scale it out using either X400F (All-Flash) or X400H (Hybrid) cubes. The All-Flash version can be configured from 64 – 448TB of capacity, delivering up to 22.4PB of logical capacity. The Hybrid version runs from 64 – 384TB of capacity, and can deliver up to 19.2PB of logical capacity. The logical capacity calculation is based on “10x – 50x deduplication ratio”. You can access the spec sheet here, and the data sheet can be found here.
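Those logical capacity figures are just the usable capacity multiplied out by the quoted deduplication range; a quick check of the arithmetic (using only the numbers above):

```python
# Sanity-check the quoted X400 logical capacities against the
# "10x - 50x deduplication ratio" claim.
configs = {
    "X400F (all-flash)": (64, 448),   # usable TB, min - max
    "X400H (hybrid)":    (64, 384),
}

for name, (min_tb, max_tb) in configs.items():
    best_case_pb = max_tb * 50 / 1000    # 50x dedupe on a full configuration
    worst_case_pb = min_tb * 10 / 1000   # 10x dedupe on the entry configuration
    print(f"{name}: ~{worst_case_pb:.2f}PB to ~{best_case_pb:.1f}PB logical")

# 448TB x 50 = 22.4PB and 384TB x 50 = 19.2PB, matching the quoted figures.
```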

Scale Up and Out?

So what do Dell EMC mean by “multi-dimensional” then? It’s a neat marketing term that means you can scale up and out as required.

  • Scale-up with grow-in-place capacity expansion (16TB); and
  • Scale-out compute and capacity with additional X400F or X400H cubes (starting at 64TB each).

This way you can “[b]enefit from the linear scale-out of performance, compute, network and capacity”.

 

IDPA

Dell EMC also announced that the Integrated Data Protection Appliance (IDPA) was being made available in an 8-24TB version, providing a lower capacity option to service smaller environments.

 

Thoughts and Further Reading

Everyone I spoke to at Dell Technologies World was excited about the PowerProtect announcement. Sure, it’s their job to be excited about this stuff, but there’s a lot here to be excited about, particularly if you’re an existing Dell EMC data protection customer. The other “next-generation” data protection vendors seem to have given the 800-pound gorilla the wake-up call it needed, and the PowerProtect offering is a step in the right direction. The scalability approach used with the X400 appliance is potentially a bit different to what’s available in the market today, but it seems to make sense in terms of reducing the footprint of the hardware to a manageable amount. There were some high numbers being touted in terms of performance but I won’t be repeating any of those until I’ve seen this for myself in the wild. The all-flash option seems a little strange at first, as flash isn’t normally associated with data protection, but I think it’s a competitive nod to some of the other vendors offering top of rack, all-flash data protection.

So what if you’re an existing Data Domain / NetWorker / Avamar customer? There’s no need to panic. You’ll see continued development of these products for some time to come. I imagine it’s not a simple thing for an established company such as Dell EMC to introduce a new product that competes in places with something it already sells to customers. But I think it’s the right thing for them to do, as there’s been significant pressure from other vendors when it comes to telling a tale of simplified data protection leveraging software-defined solutions. Data protection requirements have seen significant change over the last few years, and this new architecture is a solid response to those changes.

The supported workloads are basic for the moment, but a cursory glance through most enterprise environments would be enough to reassure you that they have the most common stuff covered. I understand that existing DPS customers will also get access to PowerProtect to take it for a spin. There’s no word yet on what the migration path for existing customers looks like, but I have no doubt that people have already thought long and hard about what that would look like and are working to make sure the process is field ready (and hopefully straightforward). Dell EMC PowerProtect Software platform and PowerProtect X400 appliance will be generally available in July 2019.

For another perspective on the announcement, check out Preston’s post here.

Axellio Announces Azure Stack HCI Support

Microsoft recently announced their Azure Stack HCI program, and I had the opportunity to speak to the team from Axellio (including Bill Miller, Barry Martin, and Kara Smith) about their support for it.

 

Azure Stack Versus Azure Stack HCI

So what’s the difference between Azure Stack and Azure Stack HCI? You can think of Azure Stack as an extension of Azure – designed for cloud-native applications. The Azure Stack HCI is more for your traditional VM-based applications – the kind of ones that haven’t been refactored (or can’t be) for public cloud.

[image courtesy of Microsoft]

The Azure Stack HCI program has fifteen vendor partners on launch day, of which Axellio is one.

 

Axellio’s Take

Miller describes the Axellio solution as “[n]ot your father’s HCI infrastructure”, and Axellio tell me it “has developed the new FabricXpress All-NVMe HCI edge-computing platform built from the ground up for high-performance computing and fast storage for intense workload environments. It delivers 72 NVMe SSDs per server, and packs 2 servers into one 2U chassis”. Cluster sizes start at 4 nodes and run up to 16. Note that the form factor measurement in the table below includes any required switching for the solution. You can grab the data sheet from here.

[image courtesy of Axellio]

It uses the same Hyper-V based software-defined compute, storage and networking as Azure Stack and integrates on-premises workloads with Microsoft hybrid data services including Azure Site Recovery and Azure Backup, Cloud Witness and Azure Monitor.

 

Thoughts and Further Reading

When Microsoft first announced plans for a public cloud presence, some pundits suggested they didn’t have the chops to really make it. It seems that Microsoft has managed to perform well in that space despite what some of the analysts were saying. What Microsoft has had working in its favour is that it understands the enterprise pretty well, and has made a good push to tap that market and help get the traditionally slower moving organisations to look seriously at public cloud.

Azure Stack HCI fits nicely in between Azure and Azure Stack, giving enterprises the opportunity to host workloads that they want to keep in VMs hosted on a platform that integrates well with public cloud services that they may also wish to leverage. Despite what we want to think, not every enterprise application can be easily refactored to work in a cloud-native fashion. Nor is every enterprise ready to commit that level of investment into doing that with those applications, preferring instead to host the applications for a few more years before introducing replacement application architectures.

It’s no secret that I’m a fan of Axellio’s capabilities when it comes to edge compute and storage solutions. In speaking to the Axellio team, what stands out to me is that they really seem to understand how to put forward a performance-oriented solution that can leverage the best pieces of the Microsoft stack to deliver an on-premises hosting capability that ticks a lot of boxes. The ability to move workloads (in a staged fashion) so easily between public and private infrastructure should also have a great deal of appeal for enterprises that have traditionally struggled with workload mobility.

Enterprise operations can be a pain in the backside at the best of times. Throw in the requirement to host some workloads in public cloud environments like Azure, and your operations staff might be a little grumpy. Fans of HCI have long stated that the management of the platform, and the convergence of compute and storage, helps significantly in easing the pain of infrastructure operations. If you then take that management platform and integrate it successfully with your public cloud platform, you’re going to have a lot of fans. This isn’t Axellio’s only solution, but I think it does fit in well with their ability to deliver performance solutions in both the core and the edge.

Thomas Maurer wrote up a handy article covering some of the differences between Azure Stack and Azure Stack HCI. The official Microsoft blog post on Azure Stack HCI is here. You can read the Axellio press release here.

Komprise Continues To Gain Momentum

I first encountered Komprise at Storage Field Day 17, and was impressed by the offering. I recently had the opportunity to take a briefing with Krishna Subramanian, President and COO at Komprise, and thought I’d share some of my notes here.

 

Momentum

Funding

The primary reason for our call was to discuss Komprise’s Series C funding round of US $24 million. You can read the press release here. Some noteworthy achievements include:

  • Revenue more than doubled every single quarter, with existing customers steadily growing how much they manage with Komprise; and
  • Some customers now managing hundreds of PB with Komprise.

 

Key Verticals

Komprise are currently operating in the following key verticals:

  • Genomics and health care, with rapidly growing footprints;
  • Financial and Insurance sectors (5 of the 10 largest insurance companies in the world apparently use Komprise);
  • A lot of universities (research-heavy environments); and
  • Media and entertainment.

 

What’s It Do Again?

Komprise manages unstructured data over three key protocols (NFS, SMB, S3). You can read more about the product itself here, but some of the key features include the ability to “Transparently archive data”, as well as being able to put a copy of your data in another location (the cloud, for example).

 

So What’s New?

One of Komprise’s recent announcements was NAS to NAS migration. If, for example, you’d like to migrate your data from an Isilon environment to FlashBlade, all you have to do is set one as the source and the other as the target. The ACLs are fully preserved across all scenarios, and Komprise does all the heavy lifting in the background.

They’re also working on what they call “Deep Analytics”. Komprise already aggregates file analytics data very efficiently. They’re now working on indexing metadata on files and exposing that index. This will give you “a Google-like search on all your data, no matter where it sits”. The idea is that you can find data using any combination of metadata. The feature is in beta right now, and part of the new funding is being used to expand and grow this capability.
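Komprise hasn’t said how the Deep Analytics index is built, so the snippet below is purely a conceptual sketch of what “search by any combination of metadata” looks like, using a toy in-memory index with made-up records; it’s not their implementation or API.

```python
# Toy illustration of metadata-based search across files in multiple
# locations - a conceptual sketch only, not Komprise's implementation.
from dataclasses import dataclass

@dataclass
class FileRecord:
    path: str
    location: str      # e.g. "nas-01", "s3://archive-bucket"
    owner: str
    size_bytes: int
    modified_year: int

index = [
    FileRecord("/projects/genome/run42.bam", "nas-01", "lab-team", 1 << 34, 2017),
    FileRecord("/finance/q3-report.xlsx", "nas-02", "finance", 2 << 20, 2019),
    FileRecord("archive/old-media.mov", "s3://archive-bucket", "media", 5 << 30, 2014),
]

def search(**criteria):
    """Return records matching every supplied metadata attribute."""
    return [r for r in index
            if all(getattr(r, k) == v for k, v in criteria.items())]

# e.g. everything the lab team still has sitting on nas-01
print(search(owner="lab-team", location="nas-01"))
```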

 

Other Things?

Komprise can be driven entirely from an API, making it potentially interesting for service providers and VARs wanting to add support for unstructured data and associated offerings to their solutions. You can also use Komprise to “confine” data. The idea behind this is that data can be quarantined (if you’re not sure it’s being used by any applications). Using this feature you can perform staged deletions of data once you understand what applications are using what data (and when).

 

Thoughts

I don’t often write articles about companies getting additional funding. I’m always very happy when they do, as someone thinks they’re on the right track, and it means that people will continue to stay employed. I thought this was interesting enough news to cover though, given that unstructured data, and its growth and management challenges, is an area I’m interested in.

When I first wrote about Komprise I joked that I needed something like this for my garage. I think it’s still a valid assertion in a way. The enterprise, at least in the unstructured file space, is a mess based on what I’ve seen in the wild. Users and administrators continue to struggle with the sheer volume and size of the data they have under their management. Tools such as this can provide valuable insights into what data is being used in your organisation, and, perhaps more importantly, who is using it. My favourite part is that you can actually do something with this knowledge, using Komprise to copy, migrate, or archive old (and new) data to other locations to potentially reduce the load on your primary storage.

I bang on all the time about the importance of archiving solutions in the enterprise, particularly when companies have petabytes of data under their purview. Yet, for reasons that I can’t fully comprehend, a number of enterprises continue to ignore the problem they have with data hoarding, instead opting to fill their DCs and cloud storage with old data that they don’t use (and very likely don’t need to store). Some of this is due to the fact that some of the traditional archive solution vendors have moved on to other focus areas. And some of it is likely due to the fact that archiving can be complicated if you can’t get the business to agree to stick to their own policies for document management. In just the same way as you can safely delete certain financial information after an amount of time has elapsed, so too can you do this with your corporate data. Or, at the very least, you can choose to store it on infrastructure that doesn’t cost a premium to maintain. I’m not saying “Go to work and delete old stuff”. But, you know, think about what you’re doing with all of that stuff. And if there’s no value in keeping the “kitchen cleaning roster May 2012.xls” file any more, think about deleting it? Or, consider a solution like Komprise to help you make some of those tough decisions.

Imanis Data and MDL autoMation Case Study

Background

I’ve covered Imanis Data in the past, but am the first to admit that their focus area is not something I’m involved with on a daily basis. They recently posted a press release covering a customer success story with MDL autoMation. I had the opportunity to speak with both Peter Smails from Imanis Data, as well as Eric Gutmann from MDL autoMation. Whilst I enjoy speaking to vendors about their successes in the market, I’m even more intrigued by customer champions and what they have to say about their experience with a vendor’s offering. It’s one thing to talk about what you’ve come up with as a product, and how you think it might work well in the real world. It’s entirely another thing to have a customer take the time to speak to people on your behalf and talk about how your product works for them. Ultimately, these are usually interesting conversations, and it’s always useful for me to hear about how various technologies are applied in the real world. Note that I spoke to them separately, so Gutmann wasn’t being pushed in a certain direction by Imanis Data – he’s just really enthusiastic about the solution.

 

The Case Study

The Customer

Founded in 2006, MDL autoMation (MDL) is “one of the automotive industry’s leaders in the application of IoT and SaaS-based technologies for process improvement, automated customer recognition, vehicle tracking and monitoring, personalised customer service and sales, and inventory management”. Gutmann explained to me that for them, “every single customer is a VIP”. There’s a lot of stuff happening on the back-end to make sure that the customer’s experience is an extremely smooth one. MongoDB provides the foundation for the solution. When they first deployed the environment, they used MongoDB Cloud Manager to protect the environment, but struggled to get it to deliver the results they required.

 

Key Challenges

MDL moved to another provider, and spent approximately six months getting it running. It worked well at the time, and met their requirements, saving them money and delivering quick backups on-premises and quick restores. There were a few issues though, including the:

  • Cost and complexity of backup and recovery for 15-node, sharded, MongoDB deployment across three data centres;
  • Time and complexity associated with daily refresh to non-sharded QA test cluster (it would take 2 days to refresh QA); and
  • Inability to use Active Directory for user access control.

 

Why Imanis Data?

So what got Gutmann and MDL excited about Imanis Data? There were a few reasons that Eric outlined for me, including:

  • 10x backup storage efficiency;
  • 26x faster QA refresh time – incremental restore;
  • 95% reduction in the number of policies to manage – thanks to the enterprise policy engine, the number of policies was reduced from 40 to 2; and
  • Native integration with Active Directory.

It was cheaper again than the previous provider, and, as Gutmann puts it, “[i]t took literally hours to implement the Imanis product”. MDL are currently protecting 1.6TB of data, and it takes 7 minutes every hour to back up any changes.

 

Conclusion and Further Reading

Data protection is a problem that everyone needs to deal with at some level. Whether you have “traditional” infrastructure delivering your applications, or one of those fancy new NoSQL environments, you still need to protect your stuff. There are a lot of built-in features with MongoDB to ensure it’s resilient, but keeping the data safe is another matter. Coupled with that is the fact that developers have relied on data recovery activities to get data into quality assurance environments for years now. Add all that together and you start to see why customers like MDL are so excited when they come across a solution that does what they need it to do.

Working in IT infrastructure (particularly operations) can be a grind at times. Something always seems to be broken or about to break. Something always seems to be going a little bit wrong. The best you can hope for at times is that you can buy products that do what you need them to do to ensure that you can produce value for the business. I think Imanis Data have a good story to tell in terms of the features they offer to protect these kinds of environments. It’s also refreshing to see a customer that is as enthusiastic as MDL is about the functionality and performance of the product, and the engagement as a whole. And as Gutmann pointed out to me, his CEO is always excited about the opportunity to save money. There’s no shame in being honest about that requirement – it’s something we all have to deal with one way or another.

Note that neither of us wanted to focus on the previous / displaced solution, as it serves no real purpose to talk about another vendor in a negative light. Just because that product didn’t do what MDL wanted it to do, doesn’t mean that that product wouldn’t suit other customers and their particular use cases. Like everything in life, you need to understand what your needs and wants are, prioritise them, and then look to find solutions that can fulfil those requirements.

Elastifile Announces Cloud File Service

Elastifile recently announced a partnership with Google to deliver a fully-managed file service delivered via the Google Cloud Platform. I had the opportunity to speak with Jerome McFarland and Dr Allon Cohen about the announcement and thought I’d share some thoughts here.

 

What Is It?

Elastifile Cloud File Service delivers a self-service SaaS experience, providing the ability to consume scalable file storage that’s deeply integrated with Google infrastructure. You could think of it as similar to Amazon’s EFS.

[image courtesy of Elastifile]

 

Benefits

Easy to Use

Why would you want to use this service? It:

  • Eliminates manual infrastructure management;
  • Provisions turnkey file storage capacity in minutes; and
  • Can be delivered in any zone, and any region.

 

Elastic

It’s also cloudy in a lot of the right ways you want things to be cloudy, including:

  • Pay-as-you-go, consumption-based pricing;
  • Flexible pricing tiers to match workflow requirements; and
  • The ability to start small and scale out or in as needed and on-demand.

 

Google Native

One of the real benefits of this kind of solution though, is the deep integration with Google’s Cloud Platform.

  • The UI, deployment, monitoring, and billing are fully integrated;
  • You get a single bill from Google; and
  • The solution has been co-engineered to be GCP-native.

[image courtesy of Elastifile]

 

What About Cloud Filestore?

With Google’s recently announced Cloud Filestore, you get:

  • A single storage tier selection (Standard or SSD);
  • In-cloud availability only; and
  • The ability to grow capacity or performance up to the tier’s limits.

With Elastifile’s Cloud File Service, you get access to the following features:

  • Aggregates performance & capacity of many VMs
  • Elastically scale-out or -in; on-demand
  • Multiple service tiers for cost flexibility
  • Hybrid cloud, multi-zone / region and cross-cloud support

You can also use ClearTier to perform tiering between file and object without any application modification.

 

Thoughts

I’ve been a fan of Elastifile for a little while now, and I thought their 3.0 release had a fair bit going for it. As you can see from the list of features above, Elastifile are really quite good at leveraging all of the cool things about cloud – it’s software only (someone else’s infrastructure), reasonably priced, flexible, and scalable. It’s a nice change from some vendors who have focussed on being in the cloud without necessarily delivering the flexibility that cloud solutions have promised for so long. Couple that with a robust managed service and some preferential treatment from Google and you’ve got a compelling solution.

Not everyone will want or need a managed service to go with their file storage requirements, but if you’re an existing GCP and / or Elastifile customer, this will make some sense from a technical assurance perspective. The ability to take advantage of features such as ClearTier, combined with the simplicity of keeping it all under the Google umbrella, has a lot of appeal. Elastifile are in the box seat now as far as these kinds of offerings are concerned, and I’m keen to see how the market responds to the solution. If you’re interested in this kind of thing, the Early Access Program opens December 11th with general availability in Q1 2019. In the meantime, if you’d like to try out ECFS on GCP – you can sign up here.