Cloudtenna Announces DirectSearch

 

I had the opportunity to speak to Aaron Ganek about Cloudtenna and their DirectSearch product recently and thought I’d share some thoughts here. Cloudtenna recently announced $4M in seed funding, have Citrix as a key strategic partner, and are shipping a beta product today. Their goal is “[b]ringing order to file chaos!”.

 

The Problem

Ganek told me that there are three major issues with file management and the plethora of collaboration tools used in the modern enterprise:

  • Search is too much effort
  • Security tends to fall through the cracks
  • Enterprise IT is dangerously non-compliant

Search

Most of these collaboration tools are geared up for search, because people don’t tend to remember where they put files, or what they’ve called them. So you might have some files in your corporate Box account, and some in Dropbox, and then some sitting in Confluence. The problem with trying to find something is that you need to search each application individually. According to Cloudtenna, this:

  • Wastes time;
  • Leads to frustration; and
  • Often yields poor results.

Security

Security also becomes a problem when you have multiple storage repositories for corporate files.

  • There are too many apps to manage
  • It’s difficult to track users across applications
  • There’s no consolidated audit trail

Exposure

As a result of this, enterprises find themselves facing exposure to litigation, primarily because they can’t answer these questions:

  • Who accessed what?
  • When and from where?
  • What changed?

As some of my friends like to say, “people die from exposure”.

 

Cloudtenna – The DirectSearch Solution

Enter DirectSearch. At its core it’s a SaaS offering that:

  • Catalogues file activity across disparate data silos; and
  • Delivers machine learning services to mitigate the “chaos”.

Basically you point it at all of your data repositories and you can then search across all of those from one screen. The cool thing about the catalogue is not just that it tracks metadata and leverages full-text indexing; it also tracks user activity. It supports a variety of on-premises, cloud and SaaS applications (6 at the moment, 16 by September). You only need to log in once and there’s full ACL support – so users can only see what they’re meant to see.
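As a rough illustration of the fan-out-and-filter idea (query every silo, then enforce ACLs before merging results), here’s a minimal Python sketch. All class and function names here are hypothetical – this is my mental model of the approach, not Cloudtenna’s actual API.

```python
"""Toy federated search across file silos, assuming a common
search(query) interface per connector and a per-user ACL check."""
from dataclasses import dataclass


@dataclass
class Hit:
    silo: str
    path: str
    allowed_users: frozenset  # crude stand-in for a real ACL


class Connector:
    """Hypothetical adapter for one repository (Box, Dropbox, etc.)."""

    def __init__(self, name, files):
        self.name = name
        self._files = files  # {path: set of users allowed to see it}

    def search(self, query):
        for path, users in self._files.items():
            if query.lower() in path.lower():
                yield Hit(self.name, path, users)


def federated_search(query, connectors, user):
    """Fan the query out to every silo, then trim hits the user
    isn't entitled to see before merging the results."""
    hits = []
    for c in connectors:
        hits.extend(h for h in c.search(query) if user in h.allowed_users)
    return hits


box = Connector("box", {"/q3/budget.xlsx": frozenset({"alice"})})
dropbox = Connector("dropbox", {"/budget-draft.docx": frozenset({"alice", "bob"})})

# bob only sees the file his ACL permits
print([h.path for h in federated_search("budget", [box, dropbox], "bob")])
# -> ['/budget-draft.docx']
```

The ACL filtering happens before results are merged, which is the property that lets a single search screen stay safe across repositories.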

According to Ganek, it also delivers some pretty fast search results, on the order of 400 – 600ms.

[image courtesy of Cloudtenna]

I was interested to know a little more about how the machine learning could identify files that were being worked on by people in the same workgroup. Ganek said they didn’t rely on Active Directory group membership, as these were often outdated. Instead, they tracked file activity to create a “Shadow IT organisational chart” that could be used to identify who was collaborating on what, and tailor the search results accordingly.

 

Thoughts and Further Reading

I’ve spent a good part of my career in the data centre providing storage solutions for enterprises to host their critical data on. I talk a lot about data and how important it is to the business. I’ve worked at some established companies where thousands of files are created every day and terabytes of data are moved around. Almost without fail, file management has been a pain in the rear. Whether I’ve been using Box to collaborate, or sending links to files with Dropbox, or been stuck using Microsoft Teams (great for collaboration but hopeless from a management perspective), invariably files get misplaced or I find myself firing up a search window to try and track down this file or that one.

It’s a mess because we don’t just work from a single desktop and carefully curated filesystem any more. We’re creating files on mobile devices, emailing them around, and gathering data from systems that don’t necessarily play well on some platforms. It’s a mess, but we need access to the data to get our jobs done. That’s why something like Cloudtenna has my attention. I’m looking forward to seeing them progress with the beta of DirectSearch, and I have a feeling they’re on to something pretty cool with their product. You can also read Rich’s thoughts on Cloudtenna over at the Gestalt IT website.

Nexenta Announces NexentaCloud

I haven’t spoken to Nexenta in some time, but that doesn’t mean they haven’t been busy. They recently announced NexentaCloud in AWS, and I had the opportunity to speak to Michael Letschin about the announcement.

 

What Is It?

In short, it’s a version of NexentaStor that you can run in the cloud. It’s ostensibly an EC2 machine running in your virtual private cloud, using EBS for storage on the backend. It:

  • Is available in the AWS Marketplace;
  • Is deployed on preconfigured Amazon Machine Images; and
  • Delivers unified file and block services (NFS, SMB, iSCSI).

According to Nexenta, the key benefits include:

  • Access to a fully-featured file (NFS and SMB) and block (iSCSI) storage array;
  • Improved cloud resource efficiency through
    • data reduction
    • thin provisioning
    • snapshots and clones
  • Seamless replication to/from NexentaStor and NexentaCloud;
  • Rapid deployment of NexentaCloud instances for test/dev operations;
  • Centralised management of NexentaStor and NexentaCloud;
  • Advanced Analytics across your entire Nexenta storage environment; and
  • Migration of legacy applications to the cloud without the need to re-architect them.

There’s an hourly or annual subscription model, and I believe there are also capacity-based licensing options available.

 

But Why?

Some of the young people reading this blog who wear jeans to work every day probably wonder why on earth you’d want to deploy a virtual storage array in your VPC in the first place. Why would your cloud-native applications care about iSCSI access? It’s very likely they don’t. But one of the key reasons why you might consider the NexentaCloud offering is because you’ve not got the time or resources to re-factor your applications and you’ve simply lifted and shifted a bunch of your enterprise applications into the cloud. These are likely applications that depend on infrastructure-level resiliency rather than delivering their own application-level resiliency. In this case, a product like NexentaCloud makes sense in that it provides some of the data services and resiliency that are otherwise lacking with those enterprise applications.

 

Thoughts

I’m intrigued by the NexentaCloud offering (and by Nexenta the company, for that matter). They have a solid history of delivering interesting software-defined storage solutions at a reasonable cost and with decent scale. If you’ve had the chance to play with NexentaStor (or deployed it in production), you’ll know it’s a fairly solid offering with a lot of the features you’d look for in a traditional storage platform. I’m curious to see how many enterprises take advantage of the NexentaCloud product, although I know there are plenty of NexentaStor users out in the wild, and I have no doubt their CxOs are placing a great amount of pressure on them to don the cape and get “to the cloud” post haste.

Druva Announces Cloud Platform Enhancements

Druva Cloud Platform

Data protection has been on my mind quite a bit lately. I’ve been talking to a number of vendors, partners and end users about data protection challenges and, sometimes, successes. With World Backup Day coming up I had the opportunity to get a briefing from W. Curtis Preston on Druva’s Cloud Platform and thought I’d share some of the details here.

 

What is it?

Druva Cloud Platform is Druva’s tool for tying together their as-a-Service data protection solution within a (sometimes maligned) single pane of glass. The idea behind it is you can protect your assets – from end points through to your cloud applications (and everything in between) – all from the one service, and all managed in the one place.

[image courtesy of Druva]

 

Druva Cloud Platform was discussed at Tech Field Day Extra at VMworld US 2017, and now fully supports Phoenix (the DC protection offering), inSync (end point & SaaS protection), and Apollo (native EC2 backup). There’s also some nice Phoenix integration with VMware Cloud on AWS (VMC).

[image courtesy of Druva]

 

Druva’s Cloud Credentials

Druva provide a nice approach to as-a-Service data protection that’s a little different from a number of competing products:

  • You don’t need to see or manage backup server nodes;
  • Server infrastructure security is not your responsibility;
  • Server nodes are spawned / stopped based on load;
  • S3 is less expensive (and faster with parallelisation);
  • There are no egress charges during restore; and
  • No on-premises component or CapEx is required (although you can deploy a cache node for quicker restore to on-premises).
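One of the bullets above – server nodes being spawned and stopped based on load – boils down to a simple capacity calculation. The sketch below is purely my own illustration of that kind of autoscaling decision, with made-up numbers and function names; it is not Druva’s implementation.

```python
"""Back-of-envelope sketch of load-based worker scaling: spawn nodes
while the backup queue outpaces capacity, stop them as it drains."""
import math

JOBS_PER_NODE = 50  # assumed per-node throughput, purely illustrative


def nodes_needed(queued_jobs, min_nodes=0, max_nodes=20):
    """Return how many server nodes to run for the current queue depth,
    clamped between a floor (can be zero when idle) and a ceiling."""
    want = math.ceil(queued_jobs / JOBS_PER_NODE)
    return max(min_nodes, min(max_nodes, want))


print(nodes_needed(0))     # -> 0, nothing runs (and nothing bills) when idle
print(nodes_needed(120))   # -> 3
print(nodes_needed(5000))  # -> 20, capped at the configured ceiling
```

The appeal of this model for the customer is the first line of output: when there’s no backup activity, there’s no server infrastructure to pay for or secure.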

 

Thoughts

I first encountered Druva at Tech Field Day Extra VMworld US in 2017 and was impressed by both the breadth of their solution and the cloudiness of it all compared to some of the traditional vendor approaches to protecting cloud-native and traditional workloads via the cloud. They have great support for end point protection, SaaS and traditional, DC-flavoured workloads. I’m particularly a fan of their willingness to tackle end point protection. When I was first starting out in data protection, a lot of vendors were speaking about how they could protect business from data loss. Then it seemed like it all became a bit too hard, and maybe we just started to assume that the data was safe somewhere in the cloud or data centre (well, not really, but we’re talking feelings, not facts, for the moment). End point protection is not an easy thing to get right, but it’s a really important part of data protection, because ultimately you’re protecting data from bad machines, bad events and, ultimately, bad people. Sometimes the people aren’t bad at all, just a little bit silly.

Cloud is hard to do well. Lifting and shifting workloads from the DC to the public cloud has proven to be a challenge for a lot of enterprises. And taking a lift and shift approach to data protection in the cloud is also proving to be a bit of a challenge, not least because people struggle with the burstiness of cloud workloads and need protection solutions that can accommodate those requirements. I like Druva’s approach to data protection, at least from the point of view of their “cloud-nativeness” and their focus on protecting a broad spectrum of workloads and scenarios. Not everything they do will necessarily fit in with the way you do things in your business, but there are some solid, modern foundations there to deliver a comprehensive service. And I think that’s a nice thing to build on.

Druva are also presenting at Cloud Field Day 3 in early April. I recommend checking out their session. Justin also did a post in anticipation of the session that is well worth a read.

2018 AKA The Year After 2017

I said last year that I don’t do future prediction type posts, and then I did one anyway. This year I said the same thing and then I did one around some Primary Data commentary. Clearly I don’t know what I’m doing, so here we are again. This time around, my good buddy Jason Collier (Founder at Scale Computing) had some stuff to say about hybrid cloud, and I thought I’d wade in and, ostensibly, nod my head in vigorous agreement for the most part. Firstly, though, here’s Jason’s quote:

“Throughout 2017 we have seen many organizations focus on implementing a 100% cloud focused model and there has been a push for complete adoption of the cloud. There has been a debate around on-premises and cloud, especially when it comes to security, performance and availability, with arguments both for and against. But the reality is that the pendulum stops somewhere in the middle. In 2018 and beyond, the future is all about simplifying hybrid IT. The reality is it’s not on-premises versus the cloud. It’s on-premises and the cloud. Using hyperconverged solutions to support remote and branch locations and making the edge more intelligent, in conjunction with a hybrid cloud model, organizations will be able to support highly changing application environments”.

 

The Cloud

I talk to people every day in my day job about what their cloud strategy is, and most people in enterprise environments are telling me that there are plans afoot to go all in on public cloud. No one wants to run their own data centres anymore. No one wants to own and operate their own infrastructure. I’ve been hearing this for the last five years too, and have possibly penned a few strategy documents in my time that said something similar. Whether it’s with AWS, Azure, Google or one of the smaller players, public cloud as a consumption model has a lot going for it. Unfortunately, it can be hard to get stuff working up there reliably. Why? Because no one wants to spend time “re-factoring” their applications. As a result, a lot of people want to lift and shift their workloads to public cloud. This is fine in theory, but a lot of those applications are running crusty versions of Microsoft’s flagship RDBMS, or they’re using applications that are designed for low-latency, on-premises data centres, rather than being addressable over the Internet.

And why is this? Because we all spent a lot of the business’s money in the late nineties and early noughties building these systems to a level of performance and resilience that we thought people wanted. Except we didn’t explain ourselves terribly well, and now the business is tired of spending all of this money on IT. And they’re tired of having to go through extensive testing cycles every time they need to do a minor upgrade. So they stop doing those upgrades, and after some time passes, you find that a bunch of key business applications are suddenly approaching end of life and in need of some serious TLC. As a result, those same enterprises looking to go cloud first also find themselves struggling mightily to get there. This doesn’t mean public cloud isn’t necessarily the answer, it just means that people need to think things through a bit.

 

The Edge

Another reason enterprises aren’t necessarily lifting and shifting every single workload to the cloud is the concept of data gravity. Sometimes, your applications and your data need to be close to each other. And sometimes that closeness needs to occur closest to the place you generate the data (or run the applications). Whilst I think we’re seeing a shift in the deployment of corporate workloads to off-premises data centres, there are still some applications that need everything close by. I generally see this with enterprises working with extremely large datasets (think geo-spatial stuff or perhaps media and entertainment companies) that struggle to move large amounts of the data around in a fashion that is cost effective and efficient from a time and resource perspective. There are some neat solutions to some of these requirements, such as Scale Computing’s single node deployment option for edge workloads, and X-IO Technologies’ neat approach to moving data from the edge to the core. But physics is still physics.

 

The Bit In Between

So back to Jason’s comment on hybrid cloud being the way it’s really all going. I agree that it’s very much a question of public cloud and on-premises, rather than one or the other. I think the missing piece for a lot of organisations, however, doesn’t necessarily lie in any one technology or application architecture. Rather, I think the key to a successful hybrid strategy sits squarely with the capability of the organisation to provide consistent governance throughout the stack. In my opinion, it’s more about people understanding the value of what their company does, and the best way to help it achieve that value, than it is about whether HCI is a better fit than traditional rackmount servers connected to fibre channel fabrics. Those considerations are important, of course, but I don’t think they have the same impact on a company’s potential success as the people and politics do. You can have some super awesome bits of technology powering your company, but if you don’t understand how you’re helping the company do business, you’ll find the technology is not as useful as you hoped it would be. You can talk all you want about hybrid (and you should, it’s a solid strategy) but if you don’t understand why you’re doing what you do, it’s not going to be as effective.

Scale Computing and WinMagic Announce Partnership, Refuse to Sit Still

Scale Computing and WinMagic recently announced a partnership improving the security of Scale’s HC3 solution. I had the opportunity to be briefed by the good folks at Scale and WinMagic and thought I’d provide a brief overview of the announcement here.

 

But Firstly, Some Background

Scale Computing announced their HC3 Cloud Unity offering in late September this year. Cloud Unity, in a nutshell, lets you run embedded HC3 instances in Google Cloud. Coupled with some SD-WAN smarts, you can move workloads easily between on-premises infrastructure and GCP. It enables companies to perform lift and shift migrations, if required, with relative ease, and removes a lot of the complexity traditionally associated with deploying hybrid-friendly workloads in the data centre.

 

So the WinMagic Thing?

WinMagic have been around for quite some time, and offer a range of security products aimed at various sizes of organisation. This partnership with Scale delivers SecureDoc CloudVM as a mechanism for encryption and key management. You can download a copy of the brochure from here. The point of the solution is to provide a secure mechanism for hosting your VMs either on-premises or in the cloud. Key management can be a pain in the rear, and WinMagic provides a fully-featured solution for this that’s easy to use and simple to manage. There’s broad support for a variety of operating environments and clients. Authentication and authorised key distribution take place prior to workloads being deployed, ensuring that the right person is accessing data from an expected place and device, and there’s support for password-only or multi-factor authentication.

 

Thoughts

Scale Computing have been doing some really cool stuff in the hyperconverged arena for some time now. The new partnership with Google Cloud, and the addition of the WinMagic solution, demonstrates their focus on improving an already impressive offering with some pretty neat features. It’s one thing to enable customers to get to the cloud with relative ease, but it’s a whole other thing to be able to help them secure their assets when they make that move to the cloud.

It’s my opinion that Scale Computing have been the quiet achievers in the HCI marketplace, with reported fantastic customer satisfaction and a solid range of products on offer at a very reasonable RRP. Couple this with an intelligent hypervisor platform and the ability to securely host assets in the public cloud, and it’s clear that Scale Computing aren’t interested in standing still. I’m really looking forward to seeing what’s next for them. If you’re after an HCI solution where you can start really (really) small and grow as required, it would be worthwhile having a chat to them.

Also, if you’re into that kind of thing, Scale and WinMagic are hosting a joint webinar on November 28 at 10:30am EST. Registration for the webinar “Simplifying Security across your Universal I.T. Infrastructure: Top 5 Considerations for Securing Your Virtual and Cloud IT Environments, Without Introducing Unneeded Complexity” can be found here.

 

 

Aparavi Comes Out Of Stealth. Dazzles.

Santa Monica-based (I love that place) SaaS data protection outfit, Aparavi, recently came out of stealth, and I thought it was worthwhile covering their initial offering.

 

So Latin Then?

What’s an Aparavi? It’s apparently Latin and means “[t]o prepare, make ready, and equip”. The way we consume infrastructure has changed, but a lot of data protection products haven’t changed to accommodate this. Aparavi are keen to change that, and tell me that their product is “designed to work seamlessly alongside your business continuity plan to ease the burden of compliance and data protection for mid market companies”. Sounds pretty neat, so how does it work?

 

Architecture

Aparavi uses a three-tiered architecture written in Node.js and C++. It consists of:

  • The Aparavi hosted platform;
  • An on-premises software appliance; and
  • A source client.

[image courtesy of Aparavi]

The platform is available as a separate module if required, otherwise it’s hosted on Aparavi’s infrastructure. The software appliance is the relationship manager in the solution. It performs in-line deduplication and compression. The source client can be used as a temporary recovery location if required. AES-256 encryption is done at the source, and the metadata is also encrypted. Key storage is all handled via keyring-style encryption mechanisms. There is communication between the web platform and the appliance, but the appliance can operate when the platform is off-line if required.
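The in-line deduplication and compression the appliance performs can be illustrated with a toy chunk-hashing scheme. This is a sketch of the general technique (content-addressed chunks, stored once, compressed), not Aparavi’s implementation, and the chunk size is deliberately tiny so the effect is visible.

```python
"""Toy in-line deduplication: split data into chunks, store each
unique chunk once (compressed), keep a recipe of hashes to rebuild."""
import hashlib
import zlib

CHUNK = 4  # tiny for demonstration; real systems use KB-scale chunks


def dedup_store(data: bytes, store: dict) -> list:
    """Chunk, hash, and store data; return the recipe of chunk hashes."""
    recipe = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:                   # in-line dedup: skip known chunks
            store[digest] = zlib.compress(chunk)  # compression after dedup
        recipe.append(digest)
    return recipe


def restore(recipe: list, store: dict) -> bytes:
    """Rebuild the original bytes from the recipe and chunk store."""
    return b"".join(zlib.decompress(store[d]) for d in recipe)


store = {}
recipe = dedup_store(b"AAAABBBBAAAA", store)
assert restore(recipe, store) == b"AAAABBBBAAAA"
print(len(store))  # -> 2: the duplicate "AAAA" chunk is stored only once
```

Because the recipe is just a list of hashes, the chunks themselves could in principle live in any (or several) storage locations, which is the property that makes the multi-cloud retention features below plausible.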

 

Cool Features

There are a number of cool features of the Aparavi solution, including:

  • Patented point-in-time recovery – you can recover data from any combination of local and cloud storage (you don’t need the backup set to live in one place);
  • Cloud active data pruning – automatically removes files, and portions of files, that are no longer needed from cloud locations;
  • Multi-cloud agile retention (this is my favourite) – you can use multiple cloud locations without the need to move data from one to the other;
  • Open data format – open source published, with Aparavi providing a reader so data can be read by any tool; and
  • Multi-tier, multi-tenancy – Aparavi are very focused on delivering a multi-tier and multi-tenant environment for service providers and folks who like to scale.

 

Retention Simplified

  • Policy Engine – uses file exclusion and inclusion lists
  • Comprehensive Search – search by user name and appliance name as well as file name
  • Storage Analytics – how much you’re saving by pruning, data growth / shrinkage over time, % change monitor
  • Auditing and Reporting Tools
  • RESTful API – anything in the UI can be automated
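A policy engine driven by inclusion and exclusion lists, as described above, might look something like this in miniature. This is my own sketch using stdlib glob matching, and the rule that exclusions win over inclusions is my assumption, not documented Aparavi behaviour.

```python
"""Minimal include/exclude policy matcher using glob patterns."""
from fnmatch import fnmatch


def selected(path, include, exclude):
    """Return True if path should be protected: it must match an
    inclusion pattern and no exclusion pattern (exclusions win)."""
    if any(fnmatch(path, pat) for pat in exclude):
        return False
    return any(fnmatch(path, pat) for pat in include)


include = ["*.docx", "reports/*"]
exclude = ["*/tmp/*", "~$*"]  # skip temp dirs and Office lock files

print(selected("reports/q3.pdf", include, exclude))  # -> True
print(selected("~$draft.docx", include, exclude))    # -> False
```

Evaluating exclusions first keeps the common case cheap and makes the policy’s behaviour easy to reason about when the two lists overlap.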

 

What Does It Run On?

Aparavi runs on all Microsoft-supported Windows platforms as well as most major Linux distributions (including Ubuntu and Red Hat). They use the Amazon S3 API, support GCP, and are working on OpenStack and Azure. They’ve also got some good working relationships with Cloudian and Scality, amongst others.

[image courtesy of Aparavi]

 

Availability?

Aparavi are having a “soft launch” on October 25th. The product is licensed on the amount of source data protected. From a pricing perspective, the first TB is always free. Expect to pay US $999/year for 3TB.

 

Conclusion

Aparavi are looking to focus on the mid-market to begin with, and stressed to me that it isn’t really a tool that will replace your day to day business continuity tool. That said, they recognize that customers may end up using the tool in ways that they hadn’t anticipated.

Aparavi’s founding team of Adrian Knapp, Rod Christensen, Jonathan Calmes and Jay Hill have a whole lot of experience with data protection engines and a bunch of use cases. Speaking to Jonathan, it feels like they’ve certainly thought about a lot of the issues facing folks leveraging cloud for data protection. I like the open approach to storing the data, and the multi-cloud friendliness takes the story well beyond the hybrid slideware I’m accustomed to seeing from some companies.

Cloud has opened up a lot of possibilities for companies that were traditionally constrained by their own ability to deliver functional, scalable and efficient infrastructure internally. It’s since come to people’s attention that, much like the days of internal-only deployments, a whole lot of people who should know better still don’t understand what they’re doing with data protection, and there’s crap scattered everywhere. Products like Aparavi are a positive step towards taking control of data protection in fluid environments, potentially helping companies to get it together in an effective manner. I’m looking forward to diving further into the solution, and am interested to see how the industry reacts to Aparavi over the coming months.

Puppet Announces Puppet Discovery, Can Now Find and Manage Your Stuff Everywhere

Puppet recently wrapped up their conference, PuppetConf2017, and made some product announcements at the same time. I thought I’d provide some brief coverage of one of the key announcements here.

 

What’s a Discovery Puppet?

No, it’s Puppet Discovery, and it’s the evolution of Puppet’s focus on container and cloud infrastructure discovery, and the result of feedback from their customers on what’s been a challenge for them. Puppet describe it as “a new turnkey approach to traditional and cloud resource discovery”.

It also provides:

  • Agentless service discovery for AWS EC2, containers, and physical hosts;
  • Actionable views across the environment; and
  • The ability to bring unmanaged resources under Puppet management.

Puppet Discovery currently allows for discovery of VMware vSphere VMs, AWS and Azure resources, and containers, with support for other cloud vendors, such as Google Cloud Platform, to follow.

 

Conclusion and Further Reading

Puppet have been around for some time and do a lot of interesting stuff. I haven’t covered them previously on this blog, but that doesn’t mean they’re not doing interesting stuff. I have a lot of customers leveraging Puppet in the wild, and any time companies make the discovery, management and automation of infrastructure easier I’m all for it. I’m particularly enthusiastic about the hybrid play, as I agree with Puppet’s claim that a lot of these types of solutions work particularly well on static, internal networks but struggle when technologies such as containers and public cloud come into play.

Just like VM sprawl before it, cloud sprawl is a problem that enterprises, in particular, are starting to experience with more frequency. Tools like Discovery can help identify just what exactly has been deployed. Once users have a better handle on that, they can start to make decisions about what needs to stay and what should go. I think this is key to good infrastructure management, regardless of whether you wear jeans and a t-shirt to work or prefer a suit and tie.

The press release for Puppet Discovery can be found here. You can apply to participate in the preview phase here. There’s also a blog post covering the announcement here.

Scale Computing Announces Cloud Unity – Clouds For Everyone

 

The Announcement

Scale Computing recently announced the availability of a new offering: Cloud Unity. I had the opportunity to speak with the Scale Computing team at VMworld US this year to run through some of the finer points of the announcement and thought I’d cover it off here.

 

Cloud What?

So what exactly is Cloud Unity? If you’ve been keeping an eye on the IT market in the last few years, you’ll notice that everything has cloud of some type in its product name. In this case, Cloud Unity is a mechanism by which you can run Scale Computing’s HC3 hypervisor nested in Google Cloud Platform (GCP). The point of the solution, ostensibly, is to provide a business with disaster recovery capability on a public cloud platform. You’re basically running an HC3 cluster on GCP, with the added benefit that you can create an encrypted VXLAN connection between your on-premises HC3 cluster and the GCP cluster. The neat thing here is that everything runs as a small instance to handle replication from on-premises and only scales up when you’re actually needing to run the VMs in anger. The service is bought through Scale Computing, and starts from as little as $1000US per month (for 5TB). There are other options available as well and the solution is expected to be Generally Available in Q4 this year.

 

Conclusion and Further Reading

This isn’t the first time nested virtualisation has been released as a product, with AWS, Azure and Ravello all doing similar things. The cool thing here is that it’s aimed at Scale Computing’s traditional customers, namely small to medium businesses. These are the people who’ve bought into the simplicity of the Scale Computing model and don’t necessarily have time to re-write their line of business applications to work as cloud native applications (as much as it would be nice if that were the case). Whilst application lift and shift isn’t the ideal outcome, the other benefit of this approach is that companies who may not have previously invested in DR capability can now leverage this product to solve the technical part of the puzzle fairly simply.

DR should be a simple thing to have in place. Everyone has horror stories of data centres going off line because of natural disasters or, more commonly, human error. The price of good DR, however, has traditionally been quite high. And it’s been pretty hard to achieve. The beauty of this solution is that it provides businesses with solid technical capabilities for a moderate price, and allows them to focus on people and processes, which are arguably the key parts of DR that are commonly overlooked. Disasters are bad, which is why they’re called disasters. If you run a small to medium business and want to protect yourself from bad things happening, this is the kind of solution that should be of interest to you.

A few years ago, Scale Computing sent me a refurbished HC1000 cluster to play with, and I’ve had first-hand exposure to the excellent support staff and experience that Scale Computing tell people about. The stories are true – these people are very good at what they do and this goes a long way in providing consumers with confidence in the solution. This confidence is fairly critical to the success of technical DR solutions – you want to leverage something that’s robust in times of duress. You don’t want to be worrying about whether it will work or not when your on-premises DC is slowly becoming submerged in water because building maintenance made a boo boo. You want to be able to focus on your processes to ensure that applications and data are available when and where they’re required to keep doing what you do.

If you’d like to read what other people have written, Justin Warren posted a handy article at Forbes, and Chris Evans provided a typically insightful overview of the announcement and the challenges it’s trying to solve that you can read here. Scott D. Lowe also provided a very useful write-up here. Scale Computing recently presented at Tech Field Day 15, and you can watch their videos here.

Oracle Announces Ravello on Oracle Cloud Infrastructure

It seems to be the season for tech company announcements. I was recently briefed by Oracle on their Ravello on Oracle Cloud Infrastructure announcement and thought I’d take the time to provide some coverage.

 

What’s a Ravello?

Ravello is “an overlay cloud that enables enterprises to run their VMware and KVM workloads with DC-like (L2) networking ‘as-is’ on public cloud without any modifications”. It’s pretty cool stuff, and I’ve covered it briefly in the past. They’ve been around for a while and were acquired by Oracle last year. They held a briefing day for bloggers in early 2017, and Chris Wahl did a comprehensive write-up here.

 

HVX

The technology components are:

  • A high-performance nested virtualisation engine (or nested hypervisor);
  • A software-defined network; and
  • A storage overlay.

[image courtesy of Oracle]

The management layer manages the technology components, provides the user interface and API for all environment definitions and deployments and handles image management and monitoring. Ravello in its current iteration is software-based, nested virtualisation. This is what you may have used in the past to run ESXi on AWS or GCP.

[image courtesy of Oracle]

 

Ravello on Oracle Cloud Infrastructure

Ravello on Oracle Cloud Infrastructure (OCI) provides you with the option of leveraging either “hardware-assisted, nested virtualisation” or bare-metal.

[images courtesy of Oracle]

Oracle are excited about the potential performance gains from running Ravello on OCI, stating that there is up to a 14x performance improvement over running Ravello on other cloud services. The key here is that they’ve developed extensions that integrate directly with Oracle’s Cloud platform. Makes sense when you consider they purchased Ravello for reasons.

 

Why Would You?

So why would you use Ravello? It provides enterprises with the ability to “take any VMware based multi-VM application and run it on public cloud without making any changes”. You don’t have to worry about:

  • Re-platforming – You normally can’t run VMware VMs on public clouds.
  • P2V Conversions – Your physical hosts can’t go to the public cloud.
  • Re-networking – Layer 2? Nope.
  • Re-configuration – What about all of your networking and security appliances?

This is all hard to do and points to the need to re-write your applications and re-architect your platforms. That sounds expensive and time-consuming, and there are other things people would rather be doing.

 

Conclusion and Further Reading

I am absolutely an advocate for architecting applications to run natively on cloud infrastructure. I don’t think that lift and shift is a sustainable approach to cloud adoption by any stretch. That said, I’ve worked in plenty of large enterprises running applications that are poorly understood and nonetheless critical to the business. Yes, it’s silly. But if you’ve spent any time in any enterprise you’ll start to realise that silly is quite a common modus operandi. Couple this with increasing pressure on CxOs to reduce their on-premises footprint and you’ll see that this technology is something of a life vest for enterprises struggling to make the leap from on-premises to public cloud with minimal modification to their existing applications.

I don’t know what this service will cost you, so I can’t tell you whether it will provide you with value for money. That’s something you’re better off speaking to Oracle about. Sometimes return on investment is hard to judge unless you’re against the wall with no alternatives. I’ll always say you should re-write your apps rather than lift and shift, but sometimes you don’t have the choice. If you’re in that position, you should consider Ravello’s offering. You can sign up for a free trial here. You can read Oracle’s post on the news here, and Tim’s insights here.

VMware – VMworld 2017 – STO3194BU – Protecting Virtual Machines in VMware Cloud on AWS

Disclaimer: I recently attended VMworld 2017 – US.  My flights were paid for by ActualTech Media, VMware provided me with a free pass to the conference and various bits of swag, and Tech Field Day picked up my hotel costs. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Here are my rough notes from “STO3194BU – Protecting Virtual Machines in VMware Cloud on AWS”, presented by Brian Young and Anita Thomas. You can grab a PDF copy of my notes from here.

VMware on AWS Backup Overview

VMware Cloud on AWS

  • VMware is enabling the VADP backup partner ecosystem on VMC
  • Access to native AWS storage for backup target
  • Leverages high performance network between Virtual Private Clouds

You can read more about VMware Cloud on AWS here.

 

Backup Partner Strategy

VMware Certified – VMware provides the highest level of product endorsement

  • Product certification with VMware Compatibility Guide Listing
  • Predictable Life Cycle Management
  • VMware maintains continuous testing of VADP APIs on VMC releases

Customer Deployed – Same solution components for both on-premises and VMC deployments

  • Operational Consistency
  • Choice of backup methods – image-level, in-guest
  • Choice of backup targets – S3, EBS, EFS

Partner Supported – Partner provides primary support

  • Same support model as on-premises

 

VADP / ENI / Storage Targets

VADP

  • New VDDK supports both on-premises and VMC
  • VMware backup partners are updating existing products to use new VDDK to enable backup of VMC based VMs

Elastic Network Interface (ENI)

  • Provides access to a high speed, low latency network between VMC and AWS Virtual Private Clouds
  • No ingress or egress charges within the same availability zone

Backup Storage Targets

  • EC2 based backup appliance – EBS and S3 storage
  • Direct to S3

 

Example Backup Topology

  • Some partners will support in-guest and image level backups direct to S3
  • Deduplicates, compresses and encrypts on EC2 backup appliance
  • Store or cache backups on EBS
  • Some partners will support vaulting older backups to S3
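To make the appliance steps above a little more concrete, here’s a minimal Python sketch of fixed-size chunk deduplication and compression. This is purely illustrative and not Dell EMC’s implementation – real appliances typically use variable-length chunking and would also encrypt each compressed chunk (e.g. with AES) before it lands on EBS or S3:

```python
import hashlib
import zlib

def dedupe_and_compress(data: bytes, chunk_size: int = 4096, store=None):
    """Split data into fixed-size chunks, keep only unique chunks (dedupe),
    and compress each unique chunk before it would be written out.
    A real appliance would also encrypt each compressed chunk."""
    store = {} if store is None else store  # chunk hash -> compressed chunk
    recipe = []                             # ordered hashes to rebuild the stream
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:             # only new chunks consume storage
            store[digest] = zlib.compress(chunk)
        recipe.append(digest)
    return recipe, store

def restore(recipe, store):
    """Rebuild the original stream from the recipe and chunk store."""
    return b"".join(zlib.decompress(store[d]) for d in recipe)

backup = b"A" * 8192 + b"B" * 4096 + b"A" * 4096  # repeated blocks dedupe away
recipe, store = dedupe_and_compress(backup)
assert restore(recipe, store) == backup
print(len(recipe), "chunks referenced,", len(store), "chunks stored")
```

With the sample data above, four chunks are referenced but only two are actually stored – which is the whole point of doing this work on the appliance before paying for cloud storage.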

 

Summary

  • VADP based backup products for VMC are available now
  • Elastic Network Interface connection to native AWS services is available now
  • Dell EMC Data Protection Suite is the first VADP data protection product available on VMC
  • Additional VADP backup solutions will be available in the coming months

 

Dell EMC Data Protection for VMware Cloud on AWS

Data Protection Continuum – Where you need it, how you want it

Dell EMC Data Protection is a launch partner for VMware Cloud on AWS. Data Protection Suite protects VMs and enterprise workloads whether on-premises or in VMware Cloud.

  • Same data protection policies
  • Leveraging best-in-class Data Domain Virtual Edition
  • AWS S3 integration for cost efficient data protection

 

Dell EMC Data Domain and DP Suite

Data Protection Suite

  • Protects across the continuum – replication, snapshot, backup and archive
  • Covers all consumption models
  • Broadest application and platform support
  • Tightest integration with Data Domain

Data Domain Virtual Edition

  • Deduplication ratios up to 55x
  • Supports on-premises and cloud
  • Data encryption at rest
  • Data Invulnerability Architecture – best-in-class reliability
  • Includes DD Boost, DD Replicator
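That quoted dedupe ratio is easy to turn into a back-of-the-envelope capacity estimate. Bear in mind 55x is the vendor’s best case – real-world ratios depend heavily on the data being protected:

```python
def protected_capacity(logical_tb: float, dedupe_ratio: float = 55.0) -> float:
    """Physical storage needed after deduplication.
    The 55x default is Dell EMC's quoted best case, not a guarantee."""
    return logical_tb / dedupe_ratio

# 110 TB of logical backups at the quoted best-case 55x ratio
physical = protected_capacity(110)
print(f"{physical:.1f} TB physical")  # 2.0 TB physical
```

In other words, at the best-case ratio 110TB of logical backup data would consume around 2TB of physical capacity, which is where the “cost efficient” claims come from.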

 

Dell EMC Solution Highlights

Unified

  • Single solution for enterprise applications and virtual machines
  • Works across on-premises and cloud deployments

Efficient

  • Direct application backup to S3
  • Minimal compute costs in cloud
  • Storage-efficient: deduplication up to 55x to DD/VE

Scalable

  • Highly scalable solution using lightweight stateless proxies
  • Virtual synthetic full backups – lightning fast daily backups, faster restores
  • Uses CBT for faster VM-image backup and restore
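A virtual synthetic full is fast because it merges block maps rather than re-reading the whole VM. This toy Python sketch (my illustration, not the product’s actual data structures) shows the idea – each incremental contributes only the blocks that CBT flagged as changed:

```python
def synthesize_full(base: dict, incrementals: list) -> dict:
    """Merge a base full backup's block map with the changed-block maps
    (CBT) from each subsequent incremental; later writes win.
    No data is re-read from the VM, which is why synthetic fulls are fast."""
    full = dict(base)
    for inc in incrementals:
        full.update(inc)  # only blocks CBT flagged as changed
    return full

base = {0: "v1", 1: "v1", 2: "v1"}        # Sunday's full backup
incs = [{1: "v2"}, {2: "v3", 3: "v3"}]    # two nightly incrementals
print(synthesize_full(base, incs))        # {0: 'v1', 1: 'v2', 2: 'v3', 3: 'v3'}
```

The synthesized result is a complete, restorable full backup, so restores don’t need to replay a chain of incrementals.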

 

Solution Detail

Backup of VMs and applications in VMC to a DD/VE or AWS S3. The solution supports:

  • VM image backup and restore
  • In-guest backup and restore of applications using agents for consistency
  • Application direct to S3

 

ESG InstaGraphic

  • ESG Lab has confirmed that the efficiency of the Dell EMC architecture can be used to reduce monthly in-cloud data protection costs by 50% or more
  • ESG Research has confirmed that public cloud adoption is on the rise. More than 75% of IT organisations report they are using the public cloud and 41% are using it for production applications
  • There is a common misconception that an application, server, or data moved to the cloud is automatically backed up the same way it was on-premises
  • Architecture matters when choosing a public cloud data protection solution

Source – ESG White Paper – Cost-efficient Data Protection for Your Cloud – to be published.

 

Manage Backups Using a Familiar Interface

  • Consistent user experience in cloud and on-premises
  • Manage backups using familiar data protection UI
  • Extend data protection policies to cloud
  • Detailed reporting and monitoring

 

Software Defined Data Protection Policies

Dynamic Policies – Keeping up with VM data growth and smart policies

Supported Attributes

  • DS Clusters
  • Data Center
  • Tags
  • VMname
  • Data Store
  • VMfolder
  • VM resource group
  • vApp
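To give a sense of how attribute-based (dynamic) policies work, here’s a hypothetical sketch in Python. The attribute names mirror the slide, but the rule syntax and matching logic are entirely my invention, not the product’s:

```python
def matches(vm: dict, rule: dict) -> bool:
    """A VM joins a protection policy when every rule attribute matches.
    'tags' is treated as a subset test; everything else is an exact match.
    (Hypothetical semantics for illustration only.)"""
    for attr, wanted in rule.items():
        if attr == "tags":
            if not set(wanted) <= set(vm.get("tags", [])):
                return False
        elif vm.get(attr) != wanted:
            return False
    return True

# A policy that dynamically picks up tagged production VMs in one data center
gold_policy = {"datacenter": "DC1", "tags": ["prod"]}
vms = [
    {"vmname": "sql01", "datacenter": "DC1", "tags": ["prod", "sql"]},
    {"vmname": "dev02", "datacenter": "DC1", "tags": ["dev"]},
]
protected = [vm["vmname"] for vm in vms if matches(vm, gold_policy)]
print(protected)  # ['sql01']
```

The appeal of this approach is that newly deployed VMs with the right attributes fall under protection automatically, rather than someone having to remember to add them to a static backup job.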

 

Technology Preview

The vision they are building towards (screenshot demos).

 

Further Reading

You can read more in Chad’s post on the solution. Dell EMC put out a press release that you can see here. There’s a blog post from Dell EMC that also provides some useful information. I found this to be a pretty useful overview of what’s available and what’s coming in the future. 4 stars.