Datera and the Rise of Enterprise Software-Defined Storage

Disclaimer: I recently attended Storage Field Day 18.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Datera recently presented at Storage Field Day 18. You can see videos of their presentation here, and download my rough notes from here.

 

Enterprise Software-Defined Storage

Datera position themselves as delivering “Enterprise Software-Defined Storage”. But what does that really mean? Enterprise IT gives you:

  • High Performance
  • Enterprise Features
    • QoS
    • Fault Domains
    • Stretched Cluster
    • L3 Networking
    • Deduplication
    • Replication
  • HA
  • Resiliency

Software-defined storage gives you:

  • Automation
  • DC Awareness and Agility
  • Continuous Availability
  • Targeted Data Placement
  • Continuous Optimisation
  • Rapid technology adoption

Combine both of these and you get Datera.

[image courtesy of Datera]

 

Why Datera?

There are some other features built in to the platform that differentiate Datera’s offering, including:

  • L3 Networking – Datera brings standard protocols with modern networking to data centre storage. Resources are designed to float to allow for agility, availability, and scalability.
  • Policy-based Operations – Datera was built from day 1 with policy controls and policy templates to ease operations at scale while maintaining agility and availability.
  • Targeted Data Placement – ensure data is distributed correctly across the physical infrastructure to meet policies around performance, availability, and data protection, while controlling cost.
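
To make the targeted data placement idea a little more concrete, here's a rough sketch of what policy-driven placement looks like in principle. The field names and logic below are mine, not Datera's template format; it's just an illustration of declaring what you want (replicas, media, fault domain spread) and letting the system work out where the data lands.

```python
from dataclasses import dataclass

# Hypothetical placement policy - field names are illustrative, not Datera's schema.
@dataclass
class PlacementPolicy:
    replicas: int               # number of synchronous copies to keep
    media: str                  # "flash" or "hybrid"
    spread_fault_domains: bool  # never co-locate replicas in one fault domain

def place(policy, nodes):
    """Pick a node for each replica according to the policy."""
    candidates = [n for n in nodes if n["media"] == policy.media]
    candidates.sort(key=lambda n: n["used_pct"])  # favour the least utilised nodes
    chosen, used_domains = [], set()
    for node in candidates:
        if policy.spread_fault_domains and node["fault_domain"] in used_domains:
            continue
        chosen.append(node["name"])
        used_domains.add(node["fault_domain"])
        if len(chosen) == policy.replicas:
            return chosen
    raise RuntimeError("policy cannot be satisfied with the available nodes")

nodes = [
    {"name": "node1", "media": "flash", "fault_domain": "rack-a", "used_pct": 40},
    {"name": "node2", "media": "flash", "fault_domain": "rack-b", "used_pct": 55},
    {"name": "node3", "media": "flash", "fault_domain": "rack-c", "used_pct": 20},
]
print(place(PlacementPolicy(replicas=2, media="flash", spread_fault_domains=True), nodes))
# ['node3', 'node1']
```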

 

Thoughts and Further Reading

I’ve waxed lyrical about Datera’s intent-based approach previously. I like the idea that they’re positioning themselves as “Enterprise SDS”. While my day job is now at a service provider, I spent a lot of time in enterprise shops getting crusty applications to keep on running, as best as they could, on equally crusty storage arrays. Something like Datera comes along with a cool hybrid storage approach and the enterprise guys get a little nervous. They want replication, they want resiliency, they want to apply QoS policies to it.

The software-defined data centre is the darling architecture of the private cloud world. Everyone wants to work with infrastructure that can be easily automated, highly available, and extremely scalable. Historically, some of these features have flown in the face of what the enterprise wants: stability, performance, resiliency. The enterprise guys aren’t super keen on updating platforms in the middle of the day. They want to buy multiples of infrastructure components. And they want multiple sets of infrastructure protecting applications. They aren’t that far away from those software-defined folks in any case.

The ability to combine continuous optimisation with high availability is a neat part of Datera’s value proposition. Like a number of software-defined storage solutions, the ability to rapidly iterate new features within the platform, while maintaining that “enterprise” feel in terms of stability and resiliency, is a pretty cool thing. Datera are working hard to bring the best of both worlds together, and are managing to deliver the agility that the enterprise wants while maintaining the availability it craves.

I’ve spoken at length before about the brutally slow pace of working in some enterprise storage shops. Operations staff are constantly being handed steamers from under-resourced or inexperienced project delivery staff. Change management people are crippling the pace. And the CIO wants to know why you’ve not moved your SQL 2005 environment to AWS. There are some very good reasons why things work the way they do (and also some very bad ones), and innovation can be painfully hard to make happen in these environments. The private cloud kids, on the other hand, are all in on the fast paced, fail fast, software-defined life. They’ve theoretically got it all humming along without a whole lot of involvement on a daily basis. Sure, they’re living on the edge (do I sound old and curmudgeonly yet?). In my opinion, Datera are doing a pretty decent job of bringing these two worlds together. I’m looking forward to seeing what they do in the next 12 months to progress that endeavour.

Nexenta Announces NexentaCloud

I haven’t spoken to Nexenta in some time, but that doesn’t mean they haven’t been busy. They recently announced NexentaCloud in AWS, and I had the opportunity to speak to Michael Letschin about the announcement.

 

What Is It?

In short, it’s a version of NexentaStor that you can run in the cloud. It’s ostensibly an EC2 machine running in your virtual private cloud using EBS for storage on the backend. It’s:

  • Available in the AWS Marketplace;
  • Deployed on preconfigured Amazon Machine Images; and
  • Capable of delivering unified file and block services (NFS, SMB, iSCSI).
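
Deployment-wise it’s standard AWS plumbing: launch the Marketplace AMI into your VPC and hang some EBS volumes off it for the appliance to carve up. Here’s a rough boto3 sketch; the AMI ID, subnet and sizing below are placeholders rather than Nexenta’s documented values, so treat it as an illustration only.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch the NexentaCloud appliance from its Marketplace AMI into your VPC.
# ami-0123456789abcdef0 and subnet-0abc123 are placeholders, not real IDs.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="m5.2xlarge",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0abc123",
    # EBS volumes the appliance will carve up into NFS/SMB/iSCSI shares.
    BlockDeviceMappings=[
        {"DeviceName": "/dev/sdf", "Ebs": {"VolumeSize": 500, "VolumeType": "gp2"}},
        {"DeviceName": "/dev/sdg", "Ebs": {"VolumeSize": 500, "VolumeType": "gp2"}},
    ],
)
print(response["Instances"][0]["InstanceId"])
```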

According to Nexenta, the key benefits include:

  • Access to a fully-featured file (NFS and SMB) and block (iSCSI) storage array;
  • Improved cloud resource efficiency through
    • data reduction
    • thin provisioning
    • snapshots and clones
  • Seamless replication to/from NexentaStor and NexentaCloud;
  • Rapid deployment of NexentaCloud instances for test/dev operations;
  • Centralised management of NexentaStor and NexentaCloud;
  • Advanced Analytics across your entire Nexenta storage environment; and
  • Migration of legacy applications to the cloud without re-architecting them.

There’s an hourly or annual subscription model, and I believe there’s also capacity-based licensing options available.

 

But Why?

Some of the young people reading this blog who wear jeans to work every day probably wonder why on earth you’d want to deploy a virtual storage array in your VPC in the first place. Why would your cloud-native applications care about iSCSI access? It’s very likely they don’t. But one of the key reasons why you might consider the NexentaCloud offering is because you’ve not got the time or resources to re-factor your applications and you’ve simply lifted and shifted a bunch of your enterprise applications into the cloud. These are likely applications that depend on infrastructure-level resiliency rather than delivering their own application-level resiliency. In this case, a product like NexentaCloud makes sense in that it provides some of the data services and resiliency that are otherwise lacking with those enterprise applications.

 

Thoughts

I’m intrigued by the NexentaCloud offering (and by Nexenta the company, for that matter). They have a solid history of delivering interesting software-defined storage solutions at a reasonable cost and with decent scale. If you’ve had the chance to play with NexentaStor (or deployed it in production), you’ll know it’s a fairly solid offering with a lot of the features you’d look for in a traditional storage platform. I’m curious to see how many enterprises take advantage of the NexentaCloud product, although I know there are plenty of NexentaStor users out in the wild, and I have no doubt their CxOs are placing a great amount of pressure on them to don the cape and get “to the cloud” post haste.

Data Virtualisation is More Than Just Migration for Primary Data

Disclaimer: I recently attended Storage Field Day 10.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

[Image: Primary Data logo]

Before I get started, you can find a link to my raw notes on Primary Data’s presentation here. You can also see videos of the presentation here. I’ve seen Primary Data present at SFD7 and SFD8, and I’ve typically been impressed with their approach to Software-Defined Storage (SDS) and data virtualisation generally. And I’m also quite a fan of David Flynn’s whiteboarding chops.

[Image: David Flynn at the whiteboard]

 

Data Virtualisation is More Than Just Migration

Primary Data spent some time during their presentation at SFD10 talking about Data Migration vs Data Mobility.

[Image: Data Migration vs Data Mobility]

[image courtesy of Primary Data]

Data migration can be a real pain to manage. It’s quite often a manual process and is invariably tied to the capabilities of the underlying storage platform hosting the data. The cool thing about Primary Data’s solution is that it offers dynamic data mobility, aligning “data’s needs (objectives) with storage capabilities (service levels) through automated mobility, arbitrated by economic value and reported as compliance”. Sounds like a mouthful, but it’s a nice way of defining pretty much what everyone’s been trying to achieve with storage virtualisation solutions for the last decade or longer.

What I like about this approach is that it’s data-centric, rather than focused on the storage platform. Primary Data supports “anything that can be presented to Linux as a block device”, so the options to deploy this stuff are fairly broad. Once you’ve presented your data to DSX, there are some smart service level objectives (SLOs) that can be applied to the data. These can be broken down into the categories of protection, performance, and price/penalty:

Protection

  • Durability
  • Availability
  • Recoverability
  • Security
  • Priority
  • Sovereignty

Performance

  • IOPS / Bandwidth / Latency – Read / Write
  • Sustained / Burst

Price / Penalty

  • Per File
  • Per Byte
  • Per Operation

Access Control can also be applied to your data. With Primary Data, “[e]very storage container is a landlord with floorspace to lease and utilities available (capacity and performance)”.
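
If you want a mental model for that “objectives versus service levels” arbitration, here’s a toy sketch. The objective and service level fields are made up rather than Primary Data’s actual SLO schema; the point is simply that data carries objectives, the system finds the cheapest storage that satisfies them, and mobility is just re-running that decision as things change.

```python
# Toy model of objectives vs service levels - field names are illustrative only.
objectives = {"durability_nines": 5, "read_iops": 20000, "latency_ms": 2}

service_levels = [
    {"name": "nvme-tier",   "durability_nines": 5, "read_iops": 100000, "latency_ms": 0.5, "cost_per_gb": 0.60},
    {"name": "hybrid-tier", "durability_nines": 5, "read_iops": 25000,  "latency_ms": 1.5, "cost_per_gb": 0.25},
    {"name": "object-tier", "durability_nines": 6, "read_iops": 500,    "latency_ms": 20,  "cost_per_gb": 0.03},
]

def meets(tier, obj):
    return (tier["durability_nines"] >= obj["durability_nines"]
            and tier["read_iops"] >= obj["read_iops"]
            and tier["latency_ms"] <= obj["latency_ms"])

# "Arbitrated by economic value": of everything that satisfies the objectives,
# pick the cheapest. Automated mobility is just re-evaluating this over time.
eligible = [t for t in service_levels if meets(t, objectives)]
print(min(eligible, key=lambda t: t["cost_per_gb"])["name"])  # hybrid-tier
```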

 

Further Reading and Final Thoughts

I like the approach to data virtualisation that Primary Data have taken. There are a number of tools on the market that claim to fully virtualise storage and offer mobility across platforms. Some of them do it well, and some focus more on the benefits provided around ease of migration from one platform to another.

That said, there’s certainly some disagreement in the market place on whether Primary Data could be considered a fully-fledged SDS solution. Be that as it may, I really like the focus on data, rather than silos of storage. I’m also a big fan of applying SLOs to data, particularly when it can be automated to improve the overall performance of the solution and make the data more accessible and, ultimately, more valuable.

Primary Data have a bunch of use cases that extend beyond data mobility as well, with deployment options ranging from hyperconverged to software-defined NAS to clustering across existing storage platforms. Primary Data want to “do for storage what VMware did for compute”. I think the approach they’ve taken has certainly gotten them on the right track, and the platform has matured greatly in the last few years.

If you’re after some alternative (and better thought out) posts on Primary Data, you can read Jon‘s post here. Max also did a good write-up here, while Chris M.Evans did a nice preview post on Primary Data that you can find here.

The Cool Thing About Datera Is Intent

Disclaimer: I recently attended Storage Field Day 10.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

[Image: Datera logo]

Before I get started, you can find a link to my raw notes on Datera’s presentation here. You can also see videos of their presentation here.

 

What’s a Datera?

Datera’s Elastic Data Fabric is “software defined storage appliance that takes over the hardware”. It’s currently available in two flavours:

  • Software bundled with qualified hardware (this is prescriptive, and currently based on a SuperMicro platform); and
  • Software-only licensing, with two SKUs available in 50TB or 100TB chunks.

 

What Can I Do With a Datera?


[image courtesy of Datera]

There are a couple of features that make Datera pretty cool, including:

  • Intent defined – you can use templates to enable intelligent placement of application data;
  • Economic flexibility – heterogeneous nodes can be deployed in the same cluster (capacity, performance, media type);
  • Works with an API-first or DevOps model – treating your infrastructure as code, programmable/composable;
  • Multi-tenant capability – this includes network isolation and QoS features;
  • Infrastructure awareness – auto-forming, optimal allocation of infrastructure resources.

 

What Do You Mean “Intent”?

According to Datera, Application Intent is “[a] way of describing what your application wants and then letting the system allocate the data”. You can define the following capabilities with an application template:

  • Policies for management (e.g. QoS) – data redundancy, data protection, data placement;
  • Storage template – defines how many volumes you want and the size you want; and
  • Pools of resources that will be consumed.

I think this is a great approach, and really provides the infrastructure operator with a fantastic level of granularity when it comes to deploying their applications.
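
For what it’s worth, here’s roughly how I picture expressing that intent. The endpoint, template structure and field names below are invented for illustration (they’re not Datera’s actual API), but they capture the idea of declaring policies, volumes and resource pools in one template and handing the placement problem to the system.

```python
import requests

# Hypothetical application intent template - structure and names are illustrative only.
app_template = {
    "name": "oracle-prod",
    "policies": {
        "replicas": 3,                 # data redundancy / protection
        "placement": "all-flash",      # data placement hint
        "qos": {"max_iops": 50000, "max_bandwidth_mb": 500},
    },
    "storage_template": {
        "volumes": [
            {"name": "data", "size_gb": 2048, "count": 4},
            {"name": "redo", "size_gb": 256, "count": 2},
        ],
    },
    "resource_pool": "prod-pool",
}

# Hypothetical endpoint; the platform would allocate volumes to satisfy the intent.
resp = requests.post("https://datera.example.local/api/app_templates",
                     json=app_template, timeout=30)
resp.raise_for_status()
```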

Datera don’t use RAID, currently using 1->5 replication (synchronous) within the cluster to protect data. Snapshots are copy on write (at an application intent level).

Further Reading and Final Thoughts

I know I’ve barely scratched the surface of some of the capabilities of the Datera platform. I am super enthusiastic about the concept of Application Intent, particularly as it relates to scale-out, software-defined storage platforms. I think we spend a lot of time talking about how fast product X can go, and why technology Y is the best at emitting long beeps or performing firmware downgrades. We tend to forget about why we’re buying product X or deploying technology Y. It’s to run the business, isn’t it? Whether it’s teaching children or saving lives or printing pamphlets, the “business” is the reason we need the applications, and thus the reason we need the infrastructure to power those applications. So it’s nice to see vendors such as Datera (and others) working hard to build application-awareness as a core capability of their architecture. When I spoke to Datera, they had four customers announced, with more than 10 “not announced”. They’re obviously keen to get traction, and as their product improves and more people get to know about them, I’ve no doubt that this number will increase dramatically.

While I haven’t had stick-time with the product, and thus can’t talk to the performance or otherwise, I can certainly vouch for the validity of the approach from an architectural perspective. If you’re looking to read up on software-defined storage, I wouldn’t hesitate to recommend Enrico‘s recent post on the topic. Chris M. Evans also did a great write-up on Datera as part of his extensive series of SFD10 preview posts – you can check it out here. Finally, if you ever need to get my attention in presentations, the phrase “no more data migration orgies” seems to be a sure-fire way of getting me to listen.

It’s Hedvig, not Hedwig

Disclaimer: I recently attended Storage Field Day 10.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

[Image: Hedvig logo]

Before I get started, you can find a link to my raw notes on Hedvig’s presentation here. You can also see videos of the presentation here.

 

It’s Hedvig, not Hedwig

I’m not trying to be a smart arse. But when you have a daughter who’s crazy about Harry Potter, it’s hard not to think about Hedwig when seeing the Hedvig brand name. I’m sure in time I’ll learn not to do this.

If you’re unfamiliar with Hedvig, it’s software-defined storage. The Hedvig Distributed Storage Platform is made up of standard servers and the Hedvig software.

Some of the key elements of the Hedvig solution are as follows:

  • Software is completely decoupled from commodity hardware;
  • Application-specific storage policies; and
  • Automated and API-driven.

 

Capabilities

Hedvig took us through their 7 core capabilities, which were described as follows:

  • Seamless scaling with x86 or ARM (haven’t seen an ARM-64 deployment yet);
  • Hyperconverged and hyperscale architectures (can mix and match in the same cluster);
  • Support for any hypervisor, container or OS (Xen, KVM, Hyper-V, ESX, containers, OpenStack, bare-metal Windows or Linux);
  • Block (iSCSI), file (NFS) and object (S3, Swift) protocols in one platform;
  • Enterprise features: dedupe, compression, tiering, caching, snaps/clones;
  • Granular feature provisioning per virtual disk; and
  • Multi-DC and cloud replication.

 

Components

[Image: Hedvig Distributed Storage Platform components]

The Hedvig solution is comprised of the following key components:

  • Hedvig Storage Proxy – presents the block and file storage; runs as VM, container, or bare metal;
  • Hedvig Storage Service – forms an elastic cluster using commodity servers and/or cloud infrastructure; and
  • RESTful APIs – provide object access via S3 or Swift, and instrument the control and data planes.
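
The object side of that is probably the easiest to picture: anything that already speaks S3 can be pointed at the cluster. Here’s a rough boto3 sketch with a made-up endpoint and credentials; the real endpoint, port and keys would come from your Hedvig deployment, so this is illustrative rather than prescriptive.

```python
import boto3

# Point a standard S3 client at the cluster's S3-compatible endpoint.
# The endpoint URL and credentials below are placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="https://hedvig.example.local:9000",
    aws_access_key_id="HEDVIG_ACCESS_KEY",
    aws_secret_access_key="HEDVIG_SECRET_KEY",
)

s3.create_bucket(Bucket="backups")
s3.put_object(Bucket="backups", Key="db/nightly.dump", Body=b"...")
print([o["Key"] for o in s3.list_objects_v2(Bucket="backups").get("Contents", [])])
```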

 

How Does It Work?

This is oversimplifying things, but here’s roughly how it works:

  • Create and present virtual disks to the application tier;
  • Hedvig Storage Proxy captures and directs I/O to storage cluster;
  • Hedvig Storage Service distributes and replicates data across nodes;
  • The cluster caches and balances across nodes and racks; and
  • The cluster replicates for DR across DCs and/or clouds.

 

Use Cases?

So where would you use Hedvig? According to Hedvig, they’re seeing uptake in a number of both “traditional” and “new” areas:

Traditional

  • Server virtualisation
  • Backup and BC/DR
  • VDI

New workloads

  • Production clouds
  • Test/Dev
  • Big data/IoT

 

Further Reading and Final Thoughts

Before I wrap up, a quick shout-out to Chris Kranz for his use of Hedvig flavoured magnetic props during his whiteboard session – it was great. Here’s a shonky photo of Chris.

[Image: Chris Kranz at the whiteboard]

Avinash Lakshman is a super smart dude with a tonne of experience in doing cloud and storage things at great scale. He doesn’t believe that traditional storage has a future. When you watch the video of the Hedvig presentation at SFD10 you get a real feel for where the company’s coming from. The hyper-functional API access versus the GUI that looks a little rough around the edges certainly gives away the heritage of this product. That said, I think Avinash and Hedvig are onto a good thing here. The “traditional” storage architectures are indeed dying, as much as we might enjoy the relative simplicity of selling someone a dual-controller, midrange, block array with limited scalability.

As with many of these solutions I feel like we’re on the cusp of seeing something really cool being developed right in front of us. For some of us, the use cases won’t strike a chord, and the need for this level of scalability may not be there. But if you’re all in on SDS, Hedvig certainly has some compelling pieces of the puzzle that I think are worthy of further investigation.

The Hedvig website contains a wealth of information. You should also check out Chris M. Evans’s SFD10 preview post on Hedvig here, while Rick Schlander did a great overview post that I recommend reading. Max did a really good deep dive post, along with a higher level view that you can see here.

 

‘Building a Modern Data Center’ Now Available

In my post on the Atlantis CX-4 announcement last week I mentioned that ActualTech Media would be releasing a new book in conjunction with Atlantis Computing – “Building a Modern Data Center: Principles and Strategies of Design”. The book is now available for download here and I highly recommend you check it out. If you have anything to do with data centres then this is an invaluable resource that covers a bunch of different aspects, not just the marketecture of hyperconvergence.  I’ve said on the record that it’s a ripping yarn, and there are a number of people who agree. A Kindle version is available here for US $2.99, with print copies (US $9.99) available from Amazon next month.  ActualTech Media are also running a webinar on February 2 that I’d recommend checking out if you have the time.

Atlantis Computing Announces HyperScale CX-4 and Dell Partnership

It’s been a little while since I talked about Atlantis Computing and things have developed a bit since then. They’ve added a bunch of new features to USX, including, amongst other things:

I was recently lucky enough to have the opportunity to be briefed on their latest developments by Priyadarshi Prasad, Senior Director of Product Management at Atlantis Computing.

 

HyperScale CX-4

Atlantis Computing recently announced a new addition to their HyperScale range of products – the CX-4. If you’re familiar with the existing HyperScale line-up, you’ll realise that this is aimed at the smaller end of the market. Atlantis have stated that “[t]he CX-4 appliance is a two-node hyperconverged integrated system with compute, all-flash storage, networking and virtualisation designed for remote offices, branch offices (ROBO) and “micro” data centres”.

[Image: Atlantis HyperScale appliance]

Atlantis Computing have previously leveraged Cisco, HP, Lenovo and SuperMicro for their hardware offerings and this has continued with the CX-4. The SuperMicro specs are as follows:

[Image: Atlantis CX-4 specifications]

 

Dell FX2

Atlantis also let me know that “Dell is teaming with Atlantis to provide the entire line of Atlantis HyperScale all-flash hyperconverged appliances on their PowerEdge FX2 platform. Atlantis HyperScale CX-4, CX-12 and CX-24 appliances are now available on Dell servers through Dell distributors and channel partners in the U.S., Europe and Middle East, shipped directly to customers”. Here’s an artist’s interpretation of the FX2.

[Image: Dell PowerEdge FX2]

As far as the CX-4 goes, the Dell differences are as follows:

  • Form factor – 2U 2N or 2U 4N
  • Memory per Node – 256GB to 768GB
  • Redundant Integrated 10GbE switch

 

Resiliency

Resiliency for the cluster comes by way of a mirror relationship between the two nodes in the CX-4 appliance. Atlantis also provides the ability to define an external tie-breaker virtual machine (VM). In keeping with the ROBO theme, this can be run at a central site, and multiple data centres / appliances can use the same tie-breaker VM. There is also high availability logic in the CX-4 system itself.

The tie-breaker is ostensibly there to keep in contact with the nodes and understand whether they’re up or not. In the event of a split-brain scenario, there is a fight for the tie-breaker (a single token). But what happens if the tie-breaker VM is unavailable (e.g. the WAN link is down)? In that case, an internal tie-breaker operating between the nodes, handled by a service VM on each node, takes over.
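
The arbitration pattern being described is a pretty common one, and a toy version of it might look like the sketch below. To be clear, this is my illustration of the “fight for a single token, fall back to an internal tie-breaker” idea rather than Atlantis’s actual implementation.

```python
# Toy split-brain arbitration: whoever gets the single token keeps serving I/O.
class Witness:
    """External tie-breaker VM holding a single token."""
    def __init__(self):
        self.holder = None
        self.reachable = True

    def claim(self, node_id):
        if not self.reachable:
            raise ConnectionError("witness unreachable (WAN down?)")
        if self.holder is None:
            self.holder = node_id
        return self.holder == node_id

def survives_partition(node_id, witness, internal_winner):
    """Decide whether this node keeps serving I/O after losing its peer."""
    try:
        return witness.claim(node_id)          # normal case: external witness decides
    except ConnectionError:
        return node_id == internal_winner      # fall back to the internal tie-breaker

witness = Witness()
print(survives_partition("node-a", witness, internal_winner="node-a"))  # True
print(survives_partition("node-b", witness, internal_winner="node-a"))  # False - token already held
```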

[Image: Atlantis tie-breaker architecture]

 

Simplicity and Scale

One of the key focus areas for Atlantis has been on simplicity, and they’ve gone to great lengths to build a solution and supporting framework ensuring that the deployment, operation and support of these appliances is simple. There’s a single point of support (Atlantis), network connectivity is straightforward, you can have IP configuration done at the factory, and everything can be managed either centrally via USX Manager or individually if required.

The CX-4 can be used as a gateway to the CX-12 if you like, simply by adding another CX-4 (2 nodes). Or you can choose to scale out, depending on your particular use case.

 

Further Reading and Final Thoughts

Atlantis also recently commissioned a survey that was conducted by Scott D. Lowe at ActualTech Media. You can read the results of “From the Field: Software Defined Storage and Hyperconverged Infrastructure in 2016” here. It provides an interesting insight into what is happening out there in the big, bad world at the moment, and is definitely worth a read. Scott, along with David M. Davis and James Green, has also written a book – “Building a Modern Data Center – Principles and Strategies of Design”. You can reserve your copy here. While I’m linking to articles of interest, this white paper from DeepStorage.net on the Atlantis USX solution is worth a look (registration required).

Firstly, I really like the focus by Atlantis on simplicity, particularly if you’re looking to deploy these things in a fairly remote destination.

Secondly, the built-in resiliency of the solution allows for operational efficiencies (you don’t have to get someone straight out to the site in the event of a node failure). I also like the fact that you can use these as a starting point for a HCI deployment, without a significant up-front investment. Finally, the use of all-flash helps with power and cooling, which can be a real problem in remote sites that don’t have high quality data centre infrastructure options available.

I’ve been impressed with Atlantis in the discussions I’ve had with them, and I like the look of what they’ve done with the CX-4. It strikes me that they’ve thought about a number of different scenarios and use cases, and they’ve also thought about working with customers beyond the purchase of the first appliance. Given the street price of these things, it would be worthwhile investigating further if you’re in the market for a hyperconverged solution.

Atlantis Computing – Product Announcement

I haven’t previously talked a lot about Atlantis Computing but I have a friend who joined the company a while ago and he’s been quite enthusiastic about what they’re doing in the SDS / hyperwhat space, so I figured it was worth checking out.

 

Overview

You may have heard that Atlantis Computing recently announced the availability of Atlantis HyperScale appliances. In a nutshell, this is a software-defined storage solution on a choice of server hardware from HP, Cisco, Lenovo or SuperMicro using a hypervisor from VMware or Citrix. It sure does look pretty.

 

[Image: Atlantis HyperScale]

Atlantis says it’s an all-flash, hyper-converged, turnkey hardware appliance. If that seems like a mouthful, it is. It’s also built on the Atlantis USX platform with end-to-end support provided by Atlantis. If you’re not familiar with USX, I encourage you to check out some details on it here. In short, it:

  • Is 100% software-defined storage;
  • Is an enterprise-class storage platform;
  • Pools SAN, NAS, DAS, Flash; and
  • Runs on any server, any storage (within some very specific limits).

 

HyperScale and Hardware-defined Software

Here’s a pretty snazzy shot of the features in HyperScale. There’s a nice overview of the available data services and REST capability.

[Image: HyperScale features and data services]

The cool thing is you can run this on a mix of hardware vendors as well, including Cisco, HP, Lenovo and SuperMicro. The appliances are:

  • Offered in fixed configurations of 12TB and 24TB;
  • Delivered as turnkey 4-node appliances; and
  • Supported by Atlantis (24x7x365 with 4-hour response).


The CX-12 and CX-24 have the following specs (depending on the vendor you choose):

[Image: CX-12 and CX-24 specifications]

Some of the models of servers cited in the briefing included (everything is 4 nodes):

  • Cisco UCS C220 M4;
  • Lenovo x3550 M5;
  • SuperMicro TwinPro; and
  • HP DL360 Gen9.

This is not an exhaustive list, but gives you an idea of the type of appliance hardware in play.

 

Final Thoughts

Atlantis are very excited about some of the potential TCO benefits to be had in comparison with Nutanix, SimpliVity and VMware’s EVO:RAIL. I’m not going to go into numbers here, but I think it would be worth your while, if you’re in the market for a hyperconverged solution, to have a conversation with these guys.

Storage Field Day 7 – Day 3 – Maxta

Disclaimer: I recently attended Storage Field Day 7.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

For each of the presentations I attended at SFD7, there are a few things I want to include in the post. Firstly, you can see video footage of the Maxta presentation here. You can also download my raw notes from the presentation here. Finally, here’s a link to the Maxta website that covers some of what they presented.

 

Company Overview

Yoram Novick, CEO of Maxta, took us through a little of the company’s history and an overview of the products they offer.

Founded in 2009, Maxta “maximises the promise of hyper-convergence” through:

  • Choice;
  • Simplicity;
  • Scalability; and
  • Cost.

They currently offer a buzzword-compliant storage platform via their MxSP product, while producing hyper-converged appliances via the MaxDeploy platform. They’re funded by Andreessen Horowitz, Intel Capital, and Tenaya Capital amongst others and are seeking to “[a]lign the storage construct with the abstraction layer”. They do this through:

  • Dramatically simplified management;
  • “World class” VM-level data services;
  • Eliminating storage arrays and storage networking; and
  • Leveraging flash / disk and capacity optimisation.

 

Solutions

MaxDeploy is Maxta’s Hyper-Converged Appliance, running on a combination of preconfigured servers and Maxta software. Maxta suggest you can go from zero to running VMs in 15 minutes. They offer peace of mind through:

  • Interoperability;
  • Ease of ordering and deployment; and
  • Predictability of performance.

MxSP is Maxta’s Software-Defined Storage product. Not surprisingly, it is software only, and offered via a perpetual license or via subscription. Like a number of SDS products, the benefits are as follows:

  • Flexibility
    • DIY – your choice in hardware
    • Works with existing infrastructure – no forklift upgrades
  • Full-featured
    • Enterprise class data services
    • Support for the latest and greatest technologies
  • Customised configuration for users
    • Major server vendors supported
    • Proposed configuration validated
    • Fulfilled by partners

 

Architecture

[Image: Maxta MaxDeploy architecture]

The Maxta Architecture is built around the following key features:

Data Services

  • Data integrity
  • Data protection / snapshots / clones
  • High availability
  • Capacity optimisation (thin / deduplication / compression)
  • Linear scalability

Simplicity

  • VM-centric
  • Tight integration with orchestration software / tools
  • Policy based management
  • Multi-hypervisor support (VMware, KVM, OpenStack integration)

What’s the value proposition?

  • Maximise choice – any server, hypervisor, storage, workload
  • Maximise IT simplicity – manage VMs, not storage
  • Maximise Cost Savings – standard components and capacity optimisation
  • Provide high levels of data resiliency, availability and protection

I get the impression that Maxta thought a bit about data layout, with the following points being critical to the story:

  • Cluster-wide capacity balancing
  • Favours placement of new data on new / under-utilised disks / nodes
  • Periodic rebalancing across disks / nodes
  • Proactive data relocation
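
Those layout points translate into fairly intuitive logic. Here’s a toy sketch of the balancing idea (new writes favour under-utilised nodes, and data gets moved when a node drifts too far from the cluster average); the threshold and the logic are mine for illustration, not Maxta’s actual algorithm.

```python
# Toy capacity balancer - threshold and logic are illustrative, not Maxta's.
REBALANCE_THRESHOLD_PCT = 10

def placement_target(nodes):
    """Favour the least utilised node for newly written data."""
    return min(nodes, key=lambda n: n["used_pct"])

def rebalance_moves(nodes):
    """Suggest moves from nodes well above the cluster average to nodes well below it."""
    average = sum(n["used_pct"] for n in nodes) / len(nodes)
    donors = [n["name"] for n in nodes if n["used_pct"] > average + REBALANCE_THRESHOLD_PCT]
    receivers = [n["name"] for n in nodes if n["used_pct"] < average - REBALANCE_THRESHOLD_PCT]
    return [(d, r) for d in donors for r in receivers]

nodes = [{"name": "node1", "used_pct": 82},
         {"name": "node2", "used_pct": 45},
         {"name": "node3", "used_pct": 51}]
print(placement_target(nodes)["name"])  # node2
print(rebalance_moves(nodes))           # [('node1', 'node2')]
```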

 

Closing Thoughts and Further Reading

I like Maxta’s story. I like the two-pronged  approach they’ve taken with their product set, and appreciate the level of thought they’ve put into their architecture. I have no idea how much this stuff costs, so can’t say whether it represents good value or no, but on the basis of the presentation I saw I certainly think they’re worth looking at if you’re looking to get into either mega-converged appliances or buzzword-storage platforms. You should also check out Keith’s preview blog post on Maxta here, while Cormac did a great write-up late last year that is well worth checking out.