Elastifile Announces Cloud File Service

Elastifile recently announced a partnership with Google to deliver a fully managed file service via the Google Cloud Platform. I had the opportunity to speak with Jerome McFarland and Dr Allon Cohen about the announcement and thought I’d share some thoughts here.


What Is It?

Elastifile Cloud File Service delivers a self-service SaaS experience, providing the ability to consume scalable file storage that’s deeply integrated with Google infrastructure. You could think of it as similar to Amazon’s EFS.

[image courtesy of Elastifile]


Benefits

Easy to Use

Why would you want to use this service? It:

  • Eliminates manual infrastructure management;
  • Provisions turnkey file storage capacity in minutes; and
  • Can be delivered in any zone, and any region.


Elastic

It’s also cloudy in a lot of the right ways, including:

  • Pay-as-you-go, consumption-based pricing;
  • Flexible pricing tiers to match workflow requirements; and
  • The ability to start small and scale out or in as needed and on-demand.


Google Native

One of the real benefits of this kind of solution, though, is the deep integration with Google’s Cloud Platform.

  • The UI, deployment, monitoring, and billing are fully integrated;
  • You get a single bill from Google; and
  • The solution has been co-engineered to be GCP-native.

[image courtesy of Elastifile]


What About Cloud Filestore?

With Google’s recently announced Cloud Filestore, you get:

  • A single storage tier selection (Standard or SSD);
  • In-cloud availability only; and
  • The ability to grow capacity or performance up to the tier’s capacity.

With Elastifile’s Cloud File Service, you get access to the following features:

  • Aggregated performance and capacity across many VMs;
  • Elastic scale-out or scale-in, on demand;
  • Multiple service tiers for cost flexibility; and
  • Hybrid cloud, multi-zone / region, and cross-cloud support.

You can also use ClearTier to perform tiering between file and object without any application modification.
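
To make the “no application modification” point a bit more concrete, here’s a rough sketch of what consumption looks like from a client’s perspective, assuming ECFS is exposed as a standard NFS export (the address and export name below are made up). The application just sees a POSIX filesystem; any ClearTier movement between file and object happens underneath it.

```python
# A minimal sketch, assuming ECFS presents a standard NFS export. The server
# address and export name are hypothetical; the point is that applications see
# ordinary files and need no changes for tiering to work underneath them.
import subprocess
from pathlib import Path

MOUNT_POINT = Path("/mnt/ecfs")
MOUNT_POINT.mkdir(parents=True, exist_ok=True)

# Equivalent to: mount -t nfs 10.0.0.100:/default /mnt/ecfs
subprocess.run(
    ["mount", "-t", "nfs", "10.0.0.100:/default", str(MOUNT_POINT)],
    check=True,
)

# Ordinary file I/O from here on; tiering to object storage is transparent.
(MOUNT_POINT / "results.csv").write_text("run_id,value\n1,42\n")
```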


Thoughts

I’ve been a fan of Elastifile for a little while now, and I thought their 3.0 release had a fair bit going for it. As you can see from the list of features above, Elastifile are really quite good at leveraging all of the cool things about cloud – it’s software only (someone else’s infrastructure), reasonably priced, flexible, and scalable. It’s a nice change from some vendors who have focussed on being in the cloud without necessarily delivering the flexibility that cloud solutions have promised for so long. Couple that with a robust managed service and some preferential treatment from Google and you’ve got a compelling solution.

Not everyone will want or need a managed service to go with their file storage requirements, but if you’re an existing GCP and / or Elastifile customer, this will make some sense from a technical assurance perspective. The ability to take advantage of features such as ClearTier, combined with the simplicity of keeping it all under the Google umbrella, has a lot of appeal. Elastifile are in the box seat now as far as these kinds of offerings are concerned, and I’m keen to see how the market responds to the solution. If you’re interested in this kind of thing, the Early Access Program opens December 11th with general availability in Q1 2019. In the meantime, if you’d like to try out ECFS on GCP – you can sign up here.

Big Switch Announces AWS Public Cloud Monitoring

Big Switch Networks recently announced Big Mon for AWS. I had the opportunity to speak with Prashant Gandhi (Chief Product Officer) about the announcement and thought I’d share some thoughts here.

The Announcement

Big Switch describe Big Monitoring Fabric Public Cloud (its real product name) as “a seamless deep packet monitoring solution that enables workload monitoring within customer specified Virtual Private Clouds (VPCs). All components of the solution are virtual, with elastic scale-out capability based on traffic volumes.”

[image courtesy of Big Switch]

There are some real benefits to be had, including:

  • Complete AWS Visibility;
  • Multi-VPC support;
  • Elastic scaling; and
  • Consistent with the On-Prem offering.

Capabilities

  • Centralised packet and flow-based monitoring of all VPCs of a user account
  • Visibility-related traffic is kept local for security purposes and cost savings
  • Monitoring and security tools are centralised and tagged within the dedicated VPC for ease of configuration
  • Role-based access control enables multiple teams to operate Big Mon 
  • Supports centralised AWS VPC tool farm to reduce monitoring cost
  • Integrated with Big Switch’s Multi-Cloud Director for centralised hybrid cloud management
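
I’m not privy to exactly how Big Switch get the packets from the workload VPCs across to the centralised tool farm, but if you want a feel for the sort of plumbing involved, here’s a rough sketch using AWS’s native VPC traffic mirroring APIs via boto3. The ENI IDs are hypothetical and this is purely illustrative; it’s not a description of Big Mon’s internals.

```python
# Illustrative only, not how Big Mon is implemented. Mirrors a workload ENI's
# traffic to a monitoring appliance ENI sitting in a dedicated tool-farm VPC.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# The ENI of the monitoring appliance (hypothetical ID).
target = ec2.create_traffic_mirror_target(
    NetworkInterfaceId="eni-0aaaabbbbccccdddd",
    Description="Central monitoring tool farm",
)

# A catch-all filter: mirror everything in both directions.
mirror_filter = ec2.create_traffic_mirror_filter(Description="Mirror all traffic")
filter_id = mirror_filter["TrafficMirrorFilter"]["TrafficMirrorFilterId"]
for direction in ("ingress", "egress"):
    ec2.create_traffic_mirror_filter_rule(
        TrafficMirrorFilterId=filter_id,
        TrafficDirection=direction,
        RuleNumber=100,
        RuleAction="accept",
        SourceCidrBlock="0.0.0.0/0",
        DestinationCidrBlock="0.0.0.0/0",
    )

# Attach the session to the workload ENI you want visibility into.
ec2.create_traffic_mirror_session(
    NetworkInterfaceId="eni-0111122223333aaaa",
    TrafficMirrorTargetId=target["TrafficMirrorTarget"]["TrafficMirrorTargetId"],
    TrafficMirrorFilterId=filter_id,
    SessionNumber=1,
)
```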

Thoughts and Further Reading

It might seem a little odd that I’m covering news from a network platform vendor on this blog, given the heavy focus I’ve had over the years on storage and virtualisation technologies. But the world is changing. I work for a Telco now and cloud is dominating every infrastructure and technology conversation I’m having. Whether it’s private or public or hybrid, cloud is everywhere, and networks are a big part of that cloud conversation (much as they have been in the data centre), as is visibility into those networks.

Big Switch have been around for under 10 years, but they’ve already made some decent headway with their switching platform and east-west monitoring tools. They understand cloud networking, and particularly the challenges facing organisations leveraging complicated cloud networking topologies. 

I’m the first guy to admit that my network chops aren’t as sharp as they could be (if you watched me set up some Google WiFi devices over the weekend, you’d understand). But I also appreciate that visibility is key to having control over what can sometimes be an overly elastic / dynamic infrastructure. It’s been hard to see traffic between availability zones, between instances, and contained in VPNs. I also like that they’ve focussed on a consistent experience between the on-premises offering and the public cloud offering.

If you’re interested in learning more about Big Switch Networks, I also recommend checking out their labs.

Pure Storage Goes All In On Hybrid … Cloud

I recently had the opportunity to hear from Chadd Kenney about Pure Storage’s Cloud Data Services announcement and thought it worthwhile covering here. But before I get into that, Pure have done a little re-branding recently. You’ll now hear them referring to Cloud Data Infrastructure (their on-premises instances of FlashArray, FlashBlade, FlashStack) and Cloud Data Management (being their Pure1 instances).


The Announcement

So what is “Cloud Data Services”? It’s made up of Cloud Block Store, CloudSnap, and StorReduce for AWS, each of which I’ll cover below.

According to Kenney, “[t]he right strategy is ‘and’, not ‘or’, but the enterprise is not very cloudy, and the cloud is not very enterprise-y”. If you’ve spent time in any IT organisation, you’ll see that there is, indeed, a “cloud divide” in play. What we’ve seen in the last 5 – 10 years is a marked difference in application architectures, consumption and management, and even storage offerings.

[image courtesy of Pure Storage]


Cloud Block Store

The first part of the puzzle is probably the most interesting for those of us struggling to move traditional application stacks to a public cloud solution.

[image courtesy of Pure Storage]

According to Pure, Cloud Block Store offers:

  • High reliability, efficiency, and performance;
  • Hybrid mobility and protection; and
  • Seamless APIs on-premises and cloud.

Kenney likens building a Purity solution on AWS to the approach Pure took in the early days of their existence, when they took off-the-shelf components and used optimised software to make them enterprise-ready. Now they’re doing the same thing with AWS, and addressing a number of the shortcomings of the underlying infrastructure through the application of the Purity architecture.

Features

So why would you want to run virtual Pure controllers on AWS? The idea is that Cloud Block Store:

  • Aggregates performance and reliability across many cloud stores;
  • Can be deployed HA across two availability zones (using ActiveCluster);
  • Is always thin, deduplicated, and compressed;
  • Delivers instant space-saving snapshots; and
  • Is always encrypted.

Management and Orchestration

If you have previous experience with Purity, you’ll appreciate the management and orchestration experience remains the same.

  • Same management, with Pure1 managing on-premises instances and instances in the cloud
  • Consistent APIs on-premises and in cloud
  • Plugins to AWS and VMware automation
  • Open, full-stack orchestration
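
The “consistent APIs” point is worth dwelling on. Because Cloud Block Store presents the same Purity REST API as a physical FlashArray, tooling you’ve already written should, in theory, carry over unchanged. Here’s a minimal sketch using the purestorage Python SDK; the management endpoint, API token, and object names are all made up, and I haven’t run this against a real Cloud Block Store instance.

```python
# A sketch only: the same calls you'd make against an on-premises FlashArray,
# pointed at a (hypothetical) Cloud Block Store management endpoint.
import purestorage

array = purestorage.FlashArray("cbs-mgmt.example.internal", api_token="YOUR-API-TOKEN")

# Carve out a volume, exactly as you would on-premises.
array.create_volume("prod-db-vol01", "2T")

# Connect it to a host object (assumed to already exist on the array).
array.connect_host("prod-db-host01", "prod-db-vol01")

# Take an instant, space-efficient snapshot of the volume.
array.create_snapshot("prod-db-vol01", suffix="pre-upgrade")
```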

Use Cases

Pure say that you can use this kind of solution in a number of different scenarios, including DR, backup, and migration in and between clouds. If you want to use ActiveCluster between AWS regions, you might have some trouble with latency, but in those cases other replication options are available.

[image courtesy of Pure Storage]

Note that Cloud Block Store is available in a few different deployment configurations:

  • Test/Dev – using a single controller instance (EBS can’t be attached to more than one EC2 instance)
  • Production – ActiveCluster (2 controllers, either within or across availability zones)


CloudSnap

Pure tell us that we’ve moved away from “disk to disk to tape” as a data protection philosophy and we now should be looking at “Flash to Flash to Cloud”. CloudSnap allows FlashArray snapshots to be easily sent to Amazon S3. Note that you don’t necessarily need FlashBlade in your environment to make this work.

[image courtesy of Pure Storage]

For the moment, this is only certified on AWS.


StorReduce for AWS

Pure acquired StorReduce a few months ago and now they’re doing something with it. If you’re not familiar with them, “StorReduce is an object storage deduplication engine, designed to enable simple backup, rapid recovery, cost-effective retention, and powerful data re-use in the Amazon cloud”. You can leverage any array, or existing backup software – it doesn’t need to be a Pure FlashArray.

Features

According to Pure, you get a lot of benefits with StorReduce, including:

  • Object fabric – secure, enterprise-ready, highly durable cloud object storage;
  • Efficient – reduces storage and bandwidth costs by up to 97%, enabling cloud storage to cost-effectively replace disk and tape;
  • Fast – the fastest deduplication engine on the market, sustaining tens of GiB/s or more 24/7;
  • Cloud native – a native S3 interface enabling openness, integration, and data portability, with all data and metadata stored in the object store (see the sketch after this list);
  • Single namespace – a single data hub across your data centre to enable fast local performance and global data protection; and
  • Scalability – software nodes scale linearly to deliver hundreds of PBs of capacity and tens of GB/s of bandwidth.
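
The native S3 interface is the interesting bit from an integration perspective: in principle, anything that already speaks S3 just gets pointed at the StorReduce endpoint instead of at Amazon, and the deduplication happens transparently. A quick, hedged sketch with boto3 (the endpoint, bucket, credentials, and file names are all hypothetical):

```python
# Sketch only: any S3-speaking backup tool could do the equivalent. Nothing here
# is StorReduce-specific beyond the (hypothetical) endpoint URL.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://storreduce.example.internal",   # dedupe front-end
    aws_access_key_id="EXAMPLE-ACCESS-KEY",
    aws_secret_access_key="EXAMPLE-SECRET-KEY",
)

# Writes land on the dedupe engine, which stores reduced data in the backing
# object store; reads are rehydrated transparently.
s3.upload_file("/backups/exchange-full.vbk", "backups", "exchange/full-2018-12-01.vbk")
s3.download_file("backups", "exchange/full-2018-12-01.vbk", "/restore/exchange-full.vbk")
```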


Thoughts and Further Reading

The title of this post was a little misleading, as Pure have been doing various cloud things for some time. But sometimes I give in to my baser instincts and like to try and be creative. It’s fine. In my mind the Cloud Block Store for AWS piece of the Cloud Data Services announcement is possibly the most interesting one. It seems like a lot of companies are announcing these kinds of virtualised versions of their hardware-based appliances that can run on public cloud infrastructure. Some of them are just encapsulated instances of the original code, modified to deal with a VM-like environment, whilst others take better advantage of the public cloud architecture.

So why are so many of the “traditional” vendors producing these kinds of solutions? Well, the folks at AWS are pretty smart, but it’s a generally well understood fact that the enterprise moves at enterprise pace. As a result, enterprises may not be terribly well positioned to spend a lot of time and effort refactoring their applications to a more cloud-friendly architecture. That doesn’t mean the CxOs haven’t already been convinced that they no longer need their own infrastructure, though. So the operations folks are being pushed to migrate out of their DCs and into public cloud provider infrastructure. The problem is that, if you’ve spent a few minutes looking at what the likes of AWS and GCP offer, you’ll see that they’re not really doing things in the same way that their on-premises comrades are. AWS expects you to replicate your data at the application level, for example, because those EC2 instances will sometimes just up and disappear.

So how do you get around the problem of forcing workloads into public cloud without a lot of the safeguards associated with on-premises deployments? You leverage something like Pure’s Cloud Block Store. It overcomes a lot of the issues associated with just running EC2 on EBS, and has the additional benefit of giving your operations folks a consistent management and orchestration experience. Additionally, you can still do things like run ActiveCluster between and within Availability Zones, so your mission critical internal kitchen roster application can stay up and running when an EC2 instance goes bye bye. You’ll pay a bit less or more than you would with normal EBS, but you’ll get some other features too.

I’ve argued before that if enterprises are really serious about getting into public cloud, they should be looking to work towards refactoring their applications. But I also understand that the reality of enterprise application development means that this type of approach is not always possible. After all, enterprises are (generally) in the business of making money. If you come to them and can’t show exactly how they’ll save money by moving to public cloud (and let’s face it, it’s not always an easy argument), then you’ll find it even harder to convince them to undertake significant software engineering efforts simply because the public cloud folks like to do things a certain way. I’m rambling a bit, but my point is that these types of solutions solve a problem that we all wish didn’t exist, but it does.

Justin did a great write-up here that I recommend reading. Note that both Cloud Block Store and StorReduce are in Beta with planned general availability in 2019.

Cloudtenna Announces DirectSearch GA

I’ve covered Cloudtenna in the past and had the good fortune to chat with Aaron Ganek about the general availability of Cloudtenna’s universal search product – DirectSearch. I thought I’d share some of my thoughts here.


About Cloudtenna

Cloudtenna are focussed on delivering “[t]urn-key search infrastructure designed specifically for files”. If you think of Elasticsearch as being synonymous with log search, then you might also like to think of Cloudtenna delivering an equivalent capability with file search.

The Challenge

According to Cloudtenna, the problem is that “[e]nterprises can’t keep track of files that are scattered across on-premises, cloud, and SaaS apps” and traditional search is a one-size-fits-all solution. In Cloudtenna’s opinion though, file search requires personalised search that reflects things such as ACLs, and that kind of search is expensive and difficult to scale.

Cloudtenna’s Solution

So what do Cloudtenna do then? The key features are the ability to:

  • Efficiently ingress massive amounts of data
  • Understand and adhere to user permissions
  • Return queries in near real-time
  • Reduce index storage and compute costs

“DirectSearch” is now generally available, and allows for cross-silo search across services such as DropBox, Gmail, Slack, Confluence, and so on. It seems reasonably priced at $10 US per user per month. Note that users who sign up before December 1st 2018 can get a 3-month free trial (no credit card details required).

DirectSearch CORE

In parallel to the release of DirectSearch, Cloudtenna are also announcing DirectSearch CORE – delivered via an OEM Model. I asked Ganek where he thought this kind of solution was a good fit. He told me that he saw it falling into three main categories:

  • Digital workspace category – eg. VMware, Citrix. Companies that want to be able to connect files into virtual digital workspaces;
  • Storage space – large storage vendors with SMB and NFS solutions – they might want to provide a global namespace over those transports; and
  • SaaS collaboration – eg. companies delivering chat, bug tracking, word processing – unify those offerings and give a single view of files.

Cloudtenna describe DirectSearch CORE as a turn-key file search infrastructure offering:

  • Fast query latency;
  • ACL crunching;
  • Deduplication; and
  • Contextual intelligence.

ACLs

One of the big challenges with delivering a solution like DirectSearch is that every data source has its own permissions and ACL enforcement is a big challenge. Keep in mind that all of these different applications have their own version of authentication mechanisms, with some using open directory standards, and others doing proprietary stuff. And once you have authentication sorted out, you still need to ensure that users only get access to what they’re allowed to see. Cloudtenna tackle this challenge by ingesting “native ACLs” and normalising those ACLs with metadata.
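
Ganek didn’t walk me through the internals, but you can imagine the normalisation step looking very roughly like this: each connector translates its application-specific permissions into a common record at ingest time, and query results are then filtered against the identity of whoever is searching. This is purely illustrative; the names and data shapes below are mine, not Cloudtenna’s.

```python
# Purely illustrative, not Cloudtenna's code. The idea: normalise each app's
# native ACL into a common (file, allowed-principals) record at ingest time,
# then filter search hits against the querying user's identity.
from dataclasses import dataclass, field

@dataclass
class IndexedFile:
    source: str                                      # e.g. "dropbox", "confluence"
    path: str
    allowed: set[str] = field(default_factory=set)   # normalised principal IDs

def normalise_dropbox_acl(sharing: dict) -> set[str]:
    # Dropbox-style sharing metadata -> normalised user IDs (shape is invented).
    return {m["email"].lower() for m in sharing.get("members", [])}

def normalise_confluence_acl(perms: dict) -> set[str]:
    # Confluence-style space permissions -> normalised user IDs (shape is invented).
    return {u.lower() for u in perms.get("space_users", [])}

def visible_to(user: str, hits: list[IndexedFile]) -> list[IndexedFile]:
    """Trim a raw result set down to what the querying user is allowed to see."""
    return [h for h in hits if user.lower() in h.allowed]

hits = [
    IndexedFile("dropbox", "/roadmap.xlsx",
                normalise_dropbox_acl({"members": [{"email": "dan@corp.com"}]})),
    IndexedFile("confluence", "HR/policies",
                normalise_confluence_acl({"space_users": ["jane@corp.com"]})),
]
print(visible_to("dan@corp.com", hits))   # only the Dropbox hit comes back
```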


Thoughts

Search is hard to do well. You want it to be quick, accurate, and easy to use. You also generally want it to be able to find stuff in all kinds of places. One of the problems with modern infrastructure is that we have access to a whole bunch of content repositories as part of our everyday corporate endeavours. I work with Slack, Dropbox, Box, OneDrive, SharePoint, file servers, Microsoft Teams, iMessage, email, and all kinds of systems as part of my job. I’m the first to admit that I don’t always have a good handle on where some stuff is. And sometimes I use the wrong system because it’s more convenient to access than the correct one is. Now multiply this problem out by the thousands of users in a decent-sized enterprise and you’ve got a recipe for disaster in terms of finding corporate knowledge in a timely fashion. Combine that with billions of files and you’re a passenger on Terry Tate’s pain train. Cloudtenna has quite a job on its hands in terms of delivering on the promise of “[b]ringing order to file chaos”, but if they can do that, it’ll be pretty cool. I’ll be signing up for a trial in the very near future and, if chaotic files aren’t your bag, then maybe you should give it a spin too.

Dell EMC News From VMworld US 2018

I’m not at VMworld US this year, but I had the opportunity to be briefed by Sam Grocott (Dell EMC Cloud Strategy) on some of Dell EMC‘s key announcements during the event, and thought I’d share some of my rough notes and links here. You can read the press release here.

TL;DR?

It is a multi-cloud world. Multi-cloud requires workload mobility. The market requires a consistent experience between on-premises and off-premises. Dell EMC are doing some more stuff around that.


Cloud Platforms

Dell EMC offer a number of engineered systems to run both IaaS and cloud native applications.

VxRail

Starting with vSphere 6.7, Dell EMC are saying they’re delivering “near” synchronous software releases between VMware and VxRail. In this case that translates to a delta of less than 30 days between releases. There’s also support for:

VxRack SDDC with VMware Cloud Foundation

  • Support for latest VCF releases – VCF 2.3.2, and future proof for next generation VMware cloud technologies
  • Alignment with VxRail hardware options – P, E, V series VxRail models, now including Storage Dense S-series
  • Configuration flexibility


Cloud-enabled Infrastructure

Focus is on the data

  • Cloud data mobility;
  • Cloud data protection;
  • Cloud data services; and
  • Cloud control.

Cloud Data Protection

  • DD Cloud DR – keep copies of VM data from on-premises DD to public cloud and orchestrate failover of workloads to the cloud
  • Data Protection Suite – use cloud storage for backup and retention
  • Cloud Snapshot Manager – Backup and recovery for public cloud workloads (Now MS Azure)
  • Data Domain virtual edition running in the cloud

DD VE 4.0 Enhancements

  • KVM support added for DD VE on-premises
  • In-cloud capacity expanded to 96TB (was 16TB)
  • Can run in AWS, Azure and VMware Cloud

Cloud Data Services

Dell EMC have already announced services such as:

And now you can get Dell EMC UnityVSA Cloud Edition.

UnityVSA Cloud Edition

[image courtesy of Dell EMC]

  • Up to 256TB file systems
  • VMware Cloud on AWS

CloudIQ

  • No cost, SaaS offering
  • Predictive analytics – intelligently project capacity and performance
  • Anomaly detection – leverage ML to pinpoint deviations
  • Proactive health – identify risks before they impact the environment

Enhancements include:

Data Domain Cloud Tier

There are some other Data Domain related enhancements, including new AWS support (meaning you can have a single vendor for Long Term Retention).

ECS

ECS enhancements have also been announced, with a 50%+ increase in storage capacity and compute.


Thoughts

As would be expected from a company with a large portfolio of products, there’s quite a bit happening on the product enhancement front. Dell EMC are starting to get that they need to be on-board with those pesky cloud types, and they’re also doing a decent job of ensuring their private cloud customers have something to play with as well.

I’m always a little surprised by vendors offering “Cloud Editions” of key products, as it feels a lot like they’re bolting on something to the public cloud when the focus could perhaps be on helping customers get to a cloud-native position sooner. That said, there are good economic reasons to take this approach. By that I mean that there’s always going to be someone who thinks they can just lift and shift their workload to the public cloud, rather than re-factoring their applications. Dell EMC are providing a number of ways to make this a fairly safe undertaking, and products like Unity Cloud Edition provide some nice features such as increased resilience that would be otherwise lacking if the enterprise customer simply dumped its VMs in AWS as-is. I still have hope that we’ll stop doing this as an industry in the near future and embrace some smarter ways of working. But while enterprises are happy enough to spend their money on doing things like they always have, I can’t criticise Dell EMC for wanting a piece of the pie.

Nexsan Announces Assureon Cloud Transfer

Announcement

Nexsan announced Cloud Transfer for their Assureon product a little while ago. I recently had the chance to catch up with Gary Watson (Founder / CTO at Nexsan) and thought it would be worth covering the announcement here.


Assureon Refresher

Firstly, though, it might be helpful to look at what Assureon actually is. In short, it’s an on-premises storage archive that offers:

  • Long term archive storage for fixed content files;
  • Dependable file availability, with files being audited every 90 days;
  • Unparalleled file integrity; and
  • A “policy” system for protecting and stubbing files.

Notably, there is always a primary archive and a DR archive included in the price. No half-arsing it here – which is something that really appeals to me. Assureon also doesn’t have a “delete” key as such – files are only removed based on defined Retention Rules. This is great, assuming you set up your policies sensibly in the first place.


Assureon Cloud Transfer

Cloud Transfer provides the ability to move data between on-premises and cloud instances. The idea is that it will:

  • Provide reliable and efficient cloud mobility of archived data between cloud server instances and between cloud vendors; and
  • Optimise cloud storage and backup costs by offloading cold data to on-premises archive.

It’s being positioned as useful for clients who have a large unstructured data footprint on public cloud infrastructure and are looking to reduce their costs for storing data up there. There’s currently support for Amazon AWS and Microsoft Azure, with Google support coming in the near future.

[image courtesy of Nexsan]

There’s stub support for those applications that support it. There’s also an optional NFS / SMB interface that can be configured in the cloud as an Assureon archiving target that caches hot files and stubs cold files. This is useful for those non-Windows applications that have a lot of unstructured data that could be moved to an archive.
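
If you’re not familiar with stubbing, the gist is that cold files are moved into the archive and replaced in place with tiny pointer files, so applications and users still “see” them where they expect to. I have no visibility into how Assureon actually implements this, but a toy sketch of the policy side might look something like the following (the thresholds, paths, and stub format are entirely made up).

```python
# Toy illustration of a stubbing policy; this is not Assureon's implementation.
# Files untouched for more than STUB_AFTER_DAYS are copied to the archive and
# replaced in place with a small pointer ("stub") file.
import shutil
import time
from pathlib import Path

STUB_AFTER_DAYS = 90
SOURCE = Path("/data/projects")      # hypothetical source share
ARCHIVE = Path("/archive/projects")  # hypothetical archive target

def stub_cold_files(source: Path, archive: Path, max_age_days: int) -> None:
    cutoff = time.time() - max_age_days * 86400
    for f in source.rglob("*"):
        if not f.is_file() or f.name.endswith(".stub"):
            continue
        if f.stat().st_atime < cutoff:
            dest = archive / f.relative_to(source)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, dest)                        # archive copy first
            stub = f.with_name(f.name + ".stub")
            stub.write_text(str(dest))                   # pointer to the archived copy
            f.unlink()                                   # then remove the original

stub_cold_files(SOURCE, ARCHIVE, STUB_AFTER_DAYS)
```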


Thoughts and Further Reading

The concept of dedicated archiving hardware and software bundles, particularly ones that live on-premises, might seem a little odd to some folks who spend a lot of time failing fast in the cloud. There are plenty of enterprises, however, that would benefit from the level of rigour that Nexsan have wrapped around the Assureon product. It’s my strong opinion that too many people still don’t understand the difference between backup and recovery and archive data. The idea that you need to take archive data and make it immutable (and available) for a long time has great appeal, particularly for organisations getting slammed with a whole lot of compliance legislation. Vendors have been talking about reducing primary storage use for years, but there seems to have been some pushback from companies not wanting to invest in these solutions. It’s possible that this was also a result of some kludgy implementations that struggled to keep up with the demands of the users. I can’t speak for the performance of the Assureon product, but I like the fact that it’s sold as a pair, and with a lot of the decision-making around protection taken away from the end user. As someone who worked in an organisation that liked to cut corners on this type of thing, it’s nice to see that.

But why would you want to store stuff on-premises? Isn’t everyone moving everything to the cloud? No, they’re not. I don’t imagine that this type of product is being pitched at people running entirely in public cloud. It’s more likely that, if you’re looking at this type of solution, you’re probably running a hybrid setup, and still have a footprint in a colocation facility somewhere. The benefit of this is that you can retain control over where your archived data is placed. Some would say that’s a bit of a pain, and an unnecessary expense, but people familiar with compliance will understand that business is all about a whole lot of wasted expense in order to make people feel good. But I digress. Like most on-premises solutions, the Assureon offering compares well with a public cloud solution on a $/GB basis, assuming you’ve got a lot of sunk costs in place already with your data centre presence.

The immutability story is also a pretty good one when you start to think about organisations that have been hit by ransomware in the last few years. That stuff might roll through your organisation like a hot knife through butter, but it won’t be able to do anything with your archive data – that stuff isn’t going anywhere. Combine that with one of those fancy next generation data protection solutions and you’re in reasonable shape.

In any case, I like what the Assureon product offers, and am looking forward to seeing Nexsan move beyond the Windows-only platform support that it currently offers. You can read the Nexsan Assureon Cloud Transfer press release here. David Marshall covered the announcement over at VMblog and ComputerWeekly.com did an article as well.

Cloudistics, Choice and Private Cloud

I’ve had my eye on Cloudistics for a little while now.  They published an interesting post recently on virtualisation and private cloud. It makes for an interesting read, and I thought I’d comment briefly and post this article if for no other reason than you can find your way to the post and check it out.

TL;DR – I’m rambling a bit, but it’s not about X versus Y, it’s more about getting your people and processes right.


Cloud, Schmoud

There are a bunch of different reasons why you’d want to adopt a cloud operating model, be it public, private or hybrid. These include the ability to take advantage of:

  • On-demand service;
  • Broad network access;
  • Resource pooling;
  • Rapid elasticity; and
  • Measured service, or pay-per-use.

Some of these aspects of cloud can be more useful to enterprises than others, depending in large part on where they are in their journey (I hate calling it that). The thing to keep in mind is that cloud is really just a way of doing things slightly differently to improve deficiencies in areas that are normally not tied to one particular piece of technology. What I mean by that is that cloud is a way of dealing with some of the issues that you’ve probably seen in your IT organisation. These include:

  • Poor planning;
  • Complicated network security models;
  • Lack of communication between IT and the business;
  • Applications that don’t scale; and
  • Lack of capacity planning.

Operating Expenditure

These are all difficult problems to solve, primarily because people running IT organisations need to be thinking not just about technology problems, but also people and business problems. And solving those problems takes resources, something that’s often in short supply. Couple that with the fact that many businesses feel like they’ve been handing out too much money to their IT organisations for years and you start to understand why many enterprises are struggling to adapt to new ways of doing things. One thing that public cloud does give you is a way to consume resources via OpEx rather than CapEx. The benefit here is that you’re only consuming what you need, and not paying for the whole thing to be built out on the off chance you’ll use it all over the five year life of the infrastructure. Private cloud can still provide this kind of benefit to the business via “showback” mechanisms that can really highlight the cost of infrastructure being consumed by internal business units. Everyone has complained at one time or another about the Finance group having 27 test environments; now they can let the executives know just what that actually costs.
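
A showback report doesn’t have to be sophisticated to be effective, either. Even a rough tally of what each business unit is consuming, multiplied by some agreed unit rates, is usually enough to start the right conversations. Something along these lines would do it (the rates and the inventory format are invented for illustration).

```python
# A deliberately simple showback sketch: per-business-unit monthly cost from a
# VM inventory. The unit rates and the inventory format are invented.
from collections import defaultdict

RATES = {"vcpu": 25.0, "ram_gb": 10.0, "disk_gb": 0.30}   # $ per unit per month

inventory = [
    {"bu": "Finance", "name": "fin-test-01", "vcpu": 4, "ram_gb": 16, "disk_gb": 200},
    {"bu": "Finance", "name": "fin-test-02", "vcpu": 4, "ram_gb": 16, "disk_gb": 200},
    {"bu": "HR",      "name": "hr-prod-01",  "vcpu": 2, "ram_gb": 8,  "disk_gb": 100},
]

costs = defaultdict(float)
for vm in inventory:
    costs[vm["bu"]] += sum(vm[resource] * rate for resource, rate in RATES.items())

for bu, monthly in sorted(costs.items(), key=lambda kv: -kv[1]):
    print(f"{bu:<10} ${monthly:,.2f} / month")
```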

Are You Really Cloud Native?

Another issue with moving to cloud is that a lot of enterprises are still looking to leverage Infrastructure-as-a-Service (IaaS) as an extension of on-premises capabilities rather than using cloud-native technologies. If you’ve gone with lift and shift (or “move and improve“) you’ve potentially just jammed a bunch of the same problems you had on-premises in someone else’s data centre. The good thing about moving to a cloud operating model (even if it’s private) is that you’ll get people (hopefully) used to consuming services from a catalogue, and taking responsibility for how much their footprint occupies. But if your idea of transformation is running SQL 2005 on Windows Server 2003 deployed from VMware vRA then I think you’ve got a bit of work to do.


Conclusion

As Cloudistics point out in their article, it isn’t really a conversation about virtualisation versus private cloud, as virtualisation (in my mind at least) is the platform that makes a lot of what we do nowadays with private cloud possible. What is more interesting is the private versus public debate. But even that one is no longer as clear cut as vendors would like you to believe. If a number of influential analysts are right, most of the world has started to realise that it’s all about a hybrid approach to cloud. The key benefits of adopting a new way of doing things are more about fixing up the boring stuff, like process. If you think you get your house in order simply by replacing the technology that underpins it then you’re in for a tough time.

Cloudtenna Announces DirectSearch


I had the opportunity to speak to Aaron Ganek about Cloudtenna and their DirectSearch product recently and thought I’d share some thoughts here. Cloudtenna recently announced $4M in seed funding, have Citrix as a key strategic partner, and are shipping a beta product today. Their goal is “[b]ringing order to file chaos!”.


The Problem

Ganek told me that there are three major issues with file management and the plethora of collaboration tools used in the modern enterprise:

  • Search is too much effort
  • Security tends to fall through the cracks
  • Enterprise IT is dangerously non-compliant

Search

Most of these collaboration tools are geared up for search, because people don’t tend to remember where they put files, or what they’ve called them. So you might have some files in your corporate Box account, and some in Dropbox, and then some sitting in Confluence. The problem with trying to find something is that you need to search each application individually. According to Cloudtenna, this:

  • Wastes time;
  • Leads to frustration; and
  • Often yields poor results.

Security

Security also becomes a problem when you have multiple storage repositories for corporate files.

  • There are too many apps to manage
  • It’s difficult to track users across applications
  • There’s no consolidated audit trail

Exposure

As a result of this, enterprises find themselves facing exposure to litigation, primarily because they can’t answer these questions:

  • Who accessed what?
  • When and from where?
  • What changed?

As some of my friends like to say “people die from exposure”.


Cloudtenna – The DirectSearch Solution

Enter DirectSearch. At its core it’s a SaaS offering that:

  • Catalogues file activity across disparate data silos; and
  • Delivers machine learning services to mitigate the “chaos”.

Basically you point it at all of your data repositories and you can then search across all of those from one screen. The cool thing about the catalogue is not just that it tracks metadata and leverages full-text indexing, it also tracks user activity. It supports a variety of on-premises, cloud and SaaS applications (6 at the moment, 16 by September). You only need to log in once and there’s full ACL support – so users can only see what they’re meant to see.

According to Ganek, it also delivers some pretty fast search results, in the order of 400 – 600ms.

[image courtesy of Cloudtenna]

I was interested to know a little more about how the machine learning could identify files that were being worked on by people in the same workgroup. Ganek said they didn’t rely on Active Directory group membership, as these were often outdated. Instead, they tracked file activity to create a “Shadow IT organisational chart” that could be used to identify who was collaborating on what, and tailor the search results accordingly.
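
The “Shadow IT organisational chart” idea is essentially a collaboration graph built from activity data rather than from the directory. A toy version of the concept (and I stress this is my illustration, not Cloudtenna’s algorithm) might just count how often pairs of users touch the same files.

```python
# Toy illustration of inferring collaborators from file activity rather than
# AD group membership; this is not Cloudtenna's actual algorithm.
from collections import defaultdict
from itertools import combinations

# (user, file) activity events, e.g. pulled from each connector's audit feed.
events = [
    ("dan", "roadmap.xlsx"), ("jane", "roadmap.xlsx"), ("dan", "budget.docx"),
    ("jane", "budget.docx"), ("sam", "logo.png"), ("dan", "roadmap.xlsx"),
]

users_per_file = defaultdict(set)
for user, filename in events:
    users_per_file[filename].add(user)

# Edge weight = number of files a pair of users have both touched.
edge_weight = defaultdict(int)
for users in users_per_file.values():
    for a, b in combinations(sorted(users), 2):
        edge_weight[(a, b)] += 1

# Rank a user's likely collaborators; search results could then be boosted
# towards files that person's strongest collaborators are active in.
def collaborators(user: str) -> list[tuple[str, int]]:
    scores = defaultdict(int)
    for (a, b), w in edge_weight.items():
        if user in (a, b):
            scores[b if a == user else a] += w
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(collaborators("dan"))   # [('jane', 2)]
```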


Thoughts and Further Reading

I’ve spent a good part of my career in the data centre providing storage solutions for enterprises to host their critical data on. I talk a lot about data and how important it is to the business. I’ve worked at some established companies where thousands of files are created every day and terabytes of data is moved around. Almost without fail, file management has been a pain in the rear. Whether I’ve been using Box to collaborate, or sending links to files with Dropbox, or been stuck using Microsoft Teams (great for collaboration but hopeless from a management perspective), invariably files get misplaced or I find myself firing up a search window to try and track down this file or that one. It’s a mess because we don’t just work from a single desktop and carefully curated filesystem any more. We’re creating files on mobile devices, emailing them about, and gathering data from systems that don’t necessarily play well on some platforms. It’s a mess, but we need access to the data to get our jobs done. That’s why something like Cloudtenna has my attention. I’m looking forward to seeing them progress with the beta of DirectSearch, and I have a feeling they’re on to something pretty cool with their product. You can also read Rich’s thoughts on Cloudtenna over at the Gestalt IT website.

Nexenta Announces NexentaCloud

I haven’t spoken to Nexenta in some time, but that doesn’t mean they haven’t been busy. They recently announced NexentaCloud in AWS, and I had the opportunity to speak to Michael Letschin about the announcement.


What Is It?

In short, it’s a version of NexentaStor that you can run in the cloud. It’s ostensibly an EC2 machine running in your virtual private cloud using EBS for storage on the backend. It’s:

  • Available in the AWS Marketplace;
  • Is deployed on preconfigured Amazon Machine Images; and
  • Delivers unified file and block services (NFS, SMB, iSCSI).

According to Nexenta, the key benefits include:

  • Access to a fully-featured file (NFS and SMB) and block (iSCSI) storage array;
  • Improved cloud resource efficiency through
    • data reduction
    • thin provisioning
    • snapshots and clones
  • Seamless replication to/from NexentaStor and NexentaCloud;
  • Rapid deployment of NexentaCloud instances for test/dev operations;
  • Centralised management of NexentaStor and NexentaCloud;
  • Advanced Analytics across your entire Nexenta storage environment; and
  • The ability to migrate legacy applications to the cloud without re-architecting them.

There’s an hourly or annual subscription model, and I believe there are also capacity-based licensing options available.
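
Given it’s delivered as a preconfigured AMI, standing up an instance is really just an EC2 launch with some EBS volumes attached for the storage pool. A rough boto3 sketch follows; the AMI ID, instance type, subnet, and volume sizes are all placeholders rather than Nexenta’s actual recommendations.

```python
# Rough sketch: AMI ID, subnet, instance type and volume sizes are placeholders.
# In practice you'd launch from the AWS Marketplace listing for NexentaCloud.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",          # Marketplace AMI (placeholder)
    InstanceType="m5.2xlarge",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0abc1234",               # a subnet in your VPC
    BlockDeviceMappings=[
        # EBS volumes that will back the storage pool.
        {"DeviceName": "/dev/sdf", "Ebs": {"VolumeSize": 1000, "VolumeType": "gp2"}},
        {"DeviceName": "/dev/sdg", "Ebs": {"VolumeSize": 1000, "VolumeType": "gp2"}},
    ],
    TagSpecifications=[
        {"ResourceType": "instance", "Tags": [{"Key": "Name", "Value": "nexentacloud-01"}]},
    ],
)
```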


But Why?

Some of the young people reading this blog who wear jeans to work every day probably wonder why on earth you’d want to deploy a virtual storage array in your VPC in the first place. Why would your cloud-native applications care about iSCSI access? It’s very likely they don’t. But one of the key reasons you might consider the NexentaCloud offering is that you haven’t got the time or resources to re-factor your applications and you’ve simply lifted and shifted a bunch of your enterprise applications into the cloud. These are likely applications that depend on infrastructure-level resiliency rather than delivering their own application-level resiliency. In this case, a product like NexentaCloud makes sense in that it provides some of the data services and resiliency that are otherwise lacking with those enterprise applications.


Thoughts

I’m intrigued by the NexentaCloud offering (and by Nexenta the company, for that matter). They have a solid history of delivering interesting software-defined storage solutions at a reasonable cost and with decent scale. If you’ve had the chance to play with NexentaStor (or deployed it in production), you’ll know it’s a fairly solid offering with a lot of the features you’d look for in a traditional storage platform. I’m curious to see how many enterprises take advantage of the NexentaCloud product, although I know there are plenty of NexentaStor users out in the wild, and I have no doubt their CxOs are placing a great amount of pressure on them to don the cape and get “to the cloud” post haste.

Druva Announces Cloud Platform Enhancements

Druva Cloud Platform

Data protection has been on my mind quite a bit lately. I’ve been talking to a number of vendors, partners and end users about data protection challenges and, sometimes, successes. With World Backup Day coming up I had the opportunity to get a briefing from W. Curtis Preston on Druva’s Cloud Platform and thought I’d share some of the details here.


What is it?

Druva Cloud Platform is Druva’s tool for tying together their as-a-Service data protection solution within a (sometimes maligned) single pane of glass. The idea behind it is you can protect your assets – from end points through to your cloud applications (and everything in between) – all from the one service, and all managed in the one place.

[image courtesy of Druva]


Druva Cloud Platform was discussed at Tech Field Day Extra at VMworld US 2017, and now fully supports Phoenix (the DC protection offering), inSync (end point & SaaS protection), and Apollo (native EC2 backup). There’s also some nice Phoenix integration with VMware Cloud on AWS (VMC).

[image courtesy of Druva]


Druva’s Cloud Credentials

Druva provide a nice approach to as-a-Service data protection that’s a little different from a number of competing products:

  • You don’t need to see or manage backup server nodes;
  • Server infrastructure security is not your responsibility;
  • Server nodes are spawned / stopped based on load;
  • S3 is less expensive (and faster with parallelisation);
  • There are no egress charges during restore; and
  • No on-premises component or CapEx is required (although you can deploy a cache node for quicker restore to on-premises).
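
On the “faster with parallelisation” point, the effect is easy enough to demonstrate with plain boto3: multipart uploads split a large backup image into chunks that are pushed concurrently. This isn’t Druva’s code, just an illustration of why the approach helps (the bucket and file names are made up).

```python
# Illustration of parallelised S3 transfers; not Druva's implementation.
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Split anything over 64 MiB into 64 MiB parts and push 16 parts at a time.
config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,
    multipart_chunksize=64 * 1024 * 1024,
    max_concurrency=16,
)

s3.upload_file(
    "/backups/fileserver-image.img",    # hypothetical backup image
    "example-backup-bucket",            # hypothetical bucket
    "restorepoints/fileserver-image.img",
    Config=config,
)
```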


Thoughts

I first encountered Druva at Tech Field Day Extra VMworld US in 2017 and was impressed by both the breadth of their solution and the cloudiness of it all compared to some of the traditional vendor approaches to protecting cloud-native and traditional workloads via the cloud. They have great support for end point protection, SaaS and traditional, DC-flavoured workloads. I’m particularly a fan of their willingness to tackle end point protection. When I was first starting out in data protection, a lot of vendors were speaking about how they could protect business from data loss. Then it seemed like it all became a bit too hard and maybe we just started to assume that the data was safe somewhere in the cloud or data centre (well, not really, but we’re talking feelings, not facts, for the moment). End point protection is not an easy thing to get right, but it’s a really important part of data protection, because ultimately you’re protecting data from bad machines, bad events and, yes, bad people. Sometimes the people aren’t bad at all, just a little bit silly.

Cloud is hard to do well. Lifting and shifting workloads from the DC to the public cloud has proven to be a challenge for a lot of enterprises. And taking a lift and shift approach to data protection in the cloud is also proving to be a bit of a challenge, not least because people struggle with the burstiness of cloud workloads and need protection solutions that can accommodate those requirements. I like Druva’s approach to data protection, at least from the point of view of their “cloud-nativeness” and their focus on protecting a broad spectrum of workloads and scenarios. Not everything they do will necessarily fit in with the way you do things in your business, but there are some solid, modern foundations there to deliver a comprehensive service. And I think that’s a nice thing to build on.

Druva are also presenting at Cloud Field Day 3 in early April. I recommend checking out their session. Justin also did a post in anticipation of the session that is well worth a read.