Tintri Announces New Scale-Out Storage Platform

I’ve had a few briefings with Tintri now, and talked about Tintri’s T5040 here. Today they announced a few enhancements to their product line, including:

  • Nine new Tintri VMstore T5000 all flash models with capacity expansion capabilities;
  • VM Scale-out software;
  • Tintri Analytics for predictive capacity and performance planning; and
  • Two new Tintri Cloud offerings.

 

Scale-out Storage Platform

You might be familiar with the T5040, T5060 and T5080 models; the Tintri VMstore T5000 all-flash series was introduced in August 2015. All three models have been updated with new capacity options ranging from 17 TB to 308 TB. These systems use the latest 3D NAND technology and high-density drives to offer organizations both higher capacity and a lower $/GB.

Tintri03_NewModels

The new models have the following characteristics:

  • Federated pool of storage. You can now treat multiple Tintri VMstores—both all-flash and hybrid-flash nodes—as a pool of storage. This makes management, planning and resource allocation a lot simpler. You can have up to 32 VMstores in a pool.
  • Scalability and performance. The storage platform is designed to scale to more than one million VMs. Tintri tell me that the “[s]eparation of control flow from data flow ensures low latency and scalability to a very large number of storage nodes”. This allows you to scale from small to very large with any mix of new and existing, all-flash and hybrid, partially or fully populated systems.
  • The VM Scale-out software works across any standard high-performance Ethernet network, eliminating the need for proprietary interconnects, and automatically provides best-placement recommendations for VMs.
  • Scale compute and storage independently. Loose coupling of storage and compute gives customers maximum flexibility to scale each element as needed. I think this is Tintri’s way of saying they’re not (yet) heading down the hyperconverged path.

 

VM Scale-out Software

Tintri’s new VM Scale-out Software (*included with Tintri Global Center Advanced license) provides the following capabilities:

  • Predictive analytics derived from one million statistics collected every 10 minutes over 30 days of history, based on peak loads rather than average loads, which Tintri say yields the most accurate predictions. Deep workload analysis identifies VMs that are growing rapidly and applies sophisticated algorithms to model that growth ahead of time and avoid resource constraints.
  • Least-cost optimization based on multi-dimensional modelling. The control algorithm constantly optimizes across the thousands of VMs in each pool of VMstores, taking into account space savings, the resources required by each VM, and the cost in time and data of moving VMs, and makes the least-cost migration recommendation that optimizes the pool (a toy sketch of this kind of placement problem follows this list).
  • Retain VM policy settings and stats. When a VM is moved, not only are its snapshots moved with it; the statistics, protection and QoS policies migrate as well, using an efficient compressed and deduplicated replication protocol.
  • Supports all major hypervisors.
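
To make the “least-cost” idea a bit more concrete, here’s a toy sketch of that style of placement problem. To be clear, this is not Tintri’s algorithm: the names, thresholds and cost model below are my own assumptions, purely to illustrate the general shape of weighing the data that has to move against the headroom left at the destination.

```python
# Illustrative only: a crude "least-cost" VM placement recommendation across a
# pool of stores. Not Tintri's algorithm; thresholds and cost model are made up.
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    used_gb: float   # space the VM consumes (drives the cost of moving it)
    iops: float      # performance footprint on its current store

@dataclass
class Store:
    name: str
    capacity_gb: float
    perf_budget_iops: float
    vms: list

def headroom(store):
    """Remaining capacity and performance headroom, normalised to 0..1."""
    used = sum(vm.used_gb for vm in store.vms)
    load = sum(vm.iops for vm in store.vms)
    return min(1 - used / store.capacity_gb,
               1 - load / store.perf_budget_iops)

def recommend_move(pool, pressure_threshold=0.2):
    """Return the cheapest (vm, source, destination) move that relieves the
    most constrained store, or None if every store has enough headroom."""
    source = min(pool, key=headroom)
    if headroom(source) >= pressure_threshold:
        return None  # nothing to do
    best = None
    for vm in source.vms:
        for dest in pool:
            if dest is source:
                continue
            # Crude cost: data that has to cross the network, discounted by
            # how much headroom the destination would still have afterwards.
            dest_headroom_after = headroom(dest) - vm.used_gb / dest.capacity_gb
            if dest_headroom_after <= pressure_threshold:
                continue  # don't just move the problem elsewhere
            cost = vm.used_gb / dest_headroom_after
            if best is None or cost < best[0]:
                best = (cost, vm, source, dest)
    return best and (best[1], best[2], best[3])
```

The real thing obviously has to account for deduplication, QoS and the cost of the move itself, but the flavour is the same: pick the recommendation that relieves the pressure for the least data movement.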

Tintri04_ScaleOut

You can check out a YouTube video on Tintri VM Scale-out (covering optimal VM distribution) here.

 

Tintri Analytics

Tintri has always offered real-time, VM-level analytics as part of its Tintri Operating System and Tintri Global Center management system. This has now been expanded to include a SaaS offering of predictive analytics that provides organizations with the ability to model both capacity and performance requirements. Powered by big data engines such as Apache Spark and Elasticsearch, Tintri Analytics can analyze stats from 500,000 VMs over several years in one second. By mining the rich VM-level metadata, Tintri Analytics provides customers with information about their environment to help them make better decisions about their applications’ behaviours and storage needs.

Tintri Analytics is a SaaS tool that allows you to model storage needs up to 6 months into the future based on up to 3 years of historical data.

Tintri01_Analytics

Here is a shot of the dashboard. You can see a few things here, including:

  • Your live resource usage across your entire footprint of up to 32 VMstores;
  • Average consumption per VM (bottom left); and
  • The types of applications that are your largest consumers of Capacity, Performance and Working Set (bottom center).

Tintri02_Analytics

Here you can see exactly how your usage of capacity, performance and working set has been trending over time. You can also see when you can expect to run out of these resources (and which is on the critical path). The tool also provides the ability to change the timeframe to alter the projections, or to drill into specific application types to understand their impact on your footprint.
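
As a purely illustrative aside (and definitely not Tintri’s model, which accounts for peak loads and per-application behaviour), the basic run-out projection behind a view like this can be as simple as fitting a trend to historical consumption and finding where it crosses the available capacity:

```python
# Toy run-out projection: fit a straight line to capacity history and report
# when the fitted line crosses the system's capacity. Numbers are invented.
import numpy as np

def days_until_exhausted(daily_used_tb, capacity_tb):
    """Least-squares linear fit over the history; returns projected days of
    headroom remaining, or None if usage is flat or shrinking."""
    days = np.arange(len(daily_used_tb))
    slope, intercept = np.polyfit(days, daily_used_tb, 1)
    if slope <= 0:
        return None
    return (capacity_tb - intercept) / slope - days[-1]

# e.g. 90 days of history trending from 120 TB to 150 TB on a 308 TB system
history = np.linspace(120, 150, 90)
print(f"~{days_until_exhausted(history, 308):.0f} days of capacity headroom left")
```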

There are a number of videos covering Tintri Analytics that I think are worth checking out.

 

Tintri Cloud Suites

Tintri have also come up with a new packaging model called “Tintri Cloud”. Aimed at folks still keen on private cloud deployments, Tintri Cloud combines the Tintri Scale-out platform and the all-flash VMstores.

Customers can start with a single Tintri VMstore T5040 with 17 TB of effective capacity and scale out to the Tintri Foundation Cloud with 1.2 PB in as few as 8 rack units. Or they can grow all the way to the Tintri Ultimate Cloud, a 10 PB cloud-ready storage infrastructure for up to 160,000 VMs that delivers over 6.4 million IOPS in 64 RU for less than $1/GB effective. Both the Foundation Cloud and Ultimate Cloud include Tintri’s complete set of software offerings for storage management, VM-level analytics, VM Scale-out, replication, QoS, and lifecycle management.

 

Further Reading and Thoughts

There’s another video covering setting policies on groups of VMs in Tintri Global Center here. You might also like to check out the Tintri Product Launch webinar.

Tintri have made quite a big deal about their “VM-aware” storage in the past, and haven’t been afraid to call out the bigger players on their approach to VM-centric storage. While I think they’ve missed the mark with some of their comments, I’ve enjoyed the approach they’ve taken with their own products. I’ve also certainly been impressed with the demonstrations I’ve been given on the capability built into the arrays and available via Global Center. Deploying workload to the public cloud isn’t for everyone, and Tintri are doing a bang-up job of going for those who still want to run their VM storage decoupled from their compute and in their own data centre. I love the analytics capability, and the UI looks to be fairly straightforward and informative. Trending still seems to be a thing that companies are struggling with, so if a dashboard can help them with further insight then it can’t be a bad thing.

Scale Computing Announces Support For Hybrid Storage and Other Good Things

Scale_Logo_High_Res

If you’re unfamiliar with Scale Computing, they’re a hyperconverged infrastructure (HCI) vendor out of Indianapolis delivering a solution aimed squarely at the small to mid-size market. They’ve been around since 2008, and launched their HC3 platform in 2012. They have around 1600 customers, and about 6000 units deployed in the field. Justin Warren provides a nice overview here as part of his research for Storage Field Day 5, while Trevor Pott wrote a comprehensive review for El Reg that you can read here. I was fortunate enough to get a briefing from Alan Conboy from Scale Computing and thought it worthy of putting pen to paper, so to speak.

 

So What is Scale Computing?

Scale describes the HC3 as a scale-out system. It has the following features:

  • 3 or more nodes – fully automated Active/Active architecture;
  • Clustered virtualization compute platform with no virtualization licensing (KVM-based, not VMware);
  • Protocol-less pooled storage resources eliminate external storage requirements entirely with no SAN or VSA;
  • +60% efficiency gains built into the IO path – Scale made much of this in my briefing, and it certainly looks good on paper;
  • Cluster is self healing and self load balancing – the nodes talk directly to each other;
  • Scale’s State Machine technology makes the cluster self-aware, with no need for external or out-of-band management servers. When you’ve done as many vSphere deployments as I have, this becomes very appealing.

You can read a bit more about how it all hangs together here. Here’s a simple diagram of how it looks from a networking perspective. Each node has 4 NICs, with two ports going to the back-end and two to the front-end. You can read up on recommended network switches here.

Scale01_HC3

Each node contains:

  • 8 to 40 vCores;
  • 32 to 512GB VM Memory;
  • Quad Network interface ports in 1GbE or 10GbE;
  • 4 or 8 spindles in 7.2k, 10k, or 15k RPM and SSD as a tier.

Here’s an overview of the different models, along with list prices in $US. You can check out the specification sheet here.

Scale02_Node_Models

 

So What’s New?

Flash. Scale tell me “it’s not being used as a simple cache, but as a proper, fluid tier of storage to meet the needs of a growing and changing SMB to SME market”. There are some neat features that have been built into the interface, and I was able to test these during the briefing with Scale. In a nutshell, there’s a level of granularity that the IT generalist should be pleased with. You can:

  • Set different priorities for VMs on a per virtual disk basis;
  • Change these on the fly as needed;
  • Make use of SLC SSD as a storage tier, not just a cache; and
  • Keep unnecessary workloads off the SSD tier completely.

Scale is deploying its new HyperCore Enhanced Automated Tiering (HEAT) technology across the HC3 product line and is introducing a flash storage tier as part of its HC2150 and HC4150 appliances. Scale tell me that they are “[a]vailable in 4- or 8-drive units”, and “Scale’s latest offerings include one 400 or 800GB SSD with three NL-SAS HDD in 1-6TB capacities and memory up to 256GB, or two 400 or 800GB SSD with 6 NL-SAS HDD in 1-2TB capacities and up to 512 GB memory respectively. Network connectivity for either system is achieved through two 10GbE SFP+ ports per node”.

It’s also worth noting that the new products can be used to form new clusters, or they can be added to existing HC3 clusters. Existing workloads on those clusters will automatically utilize the new storage tier when the new nodes are added. You can read more on what’s new here.

 

Further Reading and Feelings

As someone who deals with reasonably complex infrastructure builds as part of my day job, it was refreshing to get a briefing from a company whose focus is on simplicity for a certain market segment, rather than trying to be the HCI vendor everyone goes to. I was really impressed with the intuitive nature of the interface, the simplicity with which tasks could be achieved, and the thought that’s gone into the architecture. The price, for what it offers, is very competitive as well, particularly in the face of more traditional compute + storage stacks aimed at SMEs. I’m working with Scale to get myself some more stick time in the near future and am looking forward to reporting back with the results.

Storage Field Day – I’ll Be At SFD10

SFD-Logo2-150x150

Woohoo! I’ll be heading to the US in just over a fortnight for another Storage Field Day event. If you haven’t heard of the very excellent Tech Field Day events, you should check them out. I’m looking forward to time travel and spending time with some really smart people for a few days. It’s also worth checking back on the SFD10 website during the event as there’ll be video streaming and updated links to additional content. You can also see the list of delegates and event-related articles that they’ve published.

I think it’s a great line-up of companies this time around, with some I’m familiar with and some not so much.

SFD10_Companies

I’d also like to publicly thank in advance the nice folk from Tech Field Day (Stephen, Claire and Tom) who’ve seen fit to have me back, as well as my employer for giving me time to attend these events. Also big thanks to the companies presenting.

 


EMC Announces Unity – Part 2

I covered EMC‘s announcement of their new Unity platform in an earlier post, and thought it would be worthwhile following up on a few key points around data protection and protocol support.

 

Data Protection with Unity

You can do a bunch of the high level local and remote protection management through Unisphere, including:

  • Scheduling snapshots
  • Viewing system defined schedules
  • Modifying protection options
  • Customizing schedules based on your SLAs
  • Configuring replication
  • Managing replication operations such as session failover and failback
  • Viewing replication session states and statuses

Unified Snapshots provide:

  • Point-in-time snapshot copies of data;
  • Snapshots for both block and file resources (finally!); and
  • The foundation for native asynchronous replication on Unity.

The following table provides information on the limits with snapshots on the Unity platform.

Unity014_Snapshot_Limits

You can asynchronously replicate file and block data from Unity to another Unity, a UnityVSA, a VNXe, or a vVNX. How do I get my VNX data onto the Unity array? EMC say that RecoverPoint is your best bet for array replication from the VNX1 or VNX2 to the Unity platform. If you’re looking at data migration options, the following table may help.

Unity009_Migration

 

Protocols and Filesystems

There’s a fair bit of support for more “modern” iterations of SMB and NFS. These are outlined below:

SMB share options

  • Continuous Availability
  • Protocol Encryption
  • Access Based Enumeration (ABE)
  • Distributed File System (DFS)
  • Branch Cache
  • Offline Availability
  • Umask

Supported Features

  • Dynamic access control
  • Hyper-V shared VHDX
  • Antivirus

NFS V4.0 & 4.1

Unity introduces support for NFS v4.0 & 4.1

  • Functionality described in RFC 3530 & RFC 5661
  • Includes NFS ACL
  • Stateful protocol unlike earlier NFS versions

Note, however, the following exceptions:

  • No pNFS
  • No directory delegation

FTP/SFTP

Unity supports accessing NAS Servers via FTP and SFTP:

  • This can be enabled and disabled independently
  • Accessible by Windows, Unix, and anonymous users

Access control lists

  • Enable or disable access for users, groups, and hosts

FTP/SFTP auditing can be configured on the NAS Server:

  • Client IP, time of connection, uploaded/downloaded files
  • Log directory and maximum log size are configurable

EMC have also delivered a new, scalable 64-bit filesystem that delivers a range of file services, including:

  • Scalability to 64TBs;
  • Space efficient Snapshots;
  • The ability to shrink a file system and reclaim that space;
  • Support for up to 256 VMDK clones;
  • Fast failover;
  • In-Memory Log Replay, which improves the filesystem’s ability to quickly recover its state in the event of an ungraceful shutdown, the advantage being faster failover times; and
  • Improved quota management

The following table provides some more information on the supported configuration maximums for filesystems across the Unity platform.

Unity013_FS_NAS_Limits

 

FAST Cache

The following options are available for FAST Cache configuration on the new Unity arrays.

Unity007_FASTCache

Note also the following improvements (both of which I think are pretty neat from an operational perspective):

  • FAST Cache supports online expansion – up to the system maximum; and
  • FAST Cache supports online shrink – you now have the ability to remove all but one FAST Cache pair.

 

Maintenance Options

EMC have been paying attention to the likes of Pure and Nimble with their long-life maintenance programs designed to be a little kinder to customers wanting to keep their systems for more than five minutes. As such, EMC customers can now “Xpect More” for all-flash systems, with Unity (all-flash) customers being guaranteed:

  • Lifetime maintenance pricing for their Unity all-flash;
  • Investment protection on flash drives that need to be replaced or repaired; and
  • Lifetime flash endurance protection.

Obviously I recommend reading the fine print about this program, but on the face of it it certainly warrants further investigation.

 

CLI

You’re probably asking if there is a CLI available for Unity, like naviseccli (Navisphere Secure CLI). After all, naviseccli is pretty awesome, and you’ve no doubt spent hours getting a bunch of stuff automated with just naviseccli and a dream. The good news is that yes, you can run UEMCLI commands from your workstation or via SSH on the system. The bad news is that your existing custom scripts written for naviseccli will not work with Unity’s UEMCLI (a rough illustration of the difference follows).
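
By way of illustration, here’s roughly what re-tooling one of those scripts might look like. A word of warning: the exact UEMCLI object path and switches below are from memory and should be treated as assumptions to be checked against the Unisphere CLI user guide for your OE release, rather than gospel.

```python
# Rough sketch of wrapping UEMCLI from a script instead of naviseccli.
# The command syntax below is assumed, not verified; check the CLI user guide.
import subprocess

UNITY_MGMT_IP = "10.0.0.50"  # hypothetical management address

def list_luns():
    # The old VNX habit looked something like:
    #   naviseccli -h <SP IP> -user admin -password <pass> -scope 0 lun -list
    # The assumed Unity equivalent:
    cmd = [
        "uemcli", "-d", UNITY_MGMT_IP,
        "-u", "Local/admin", "-p", "MyPassword",
        "/stor/prov/luns/lun", "show", "-detail",
    ]
    return subprocess.run(cmd, capture_output=True, text=True).stdout

if __name__ == "__main__":
    print(list_luns())
```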

 

Other Notes

Here are a few other points that I found interesting:

  • Inline compression is due before the end of the calendar year, and a deduplication option is yet to be made available for the platform.
  • There is a limit of 10 DAEs and 250 drives per bus (the same as the VNX2).
  • Unity doesn’t have 60 or 120-drive DAEs, but there is a plan under consideration to support a higher number of drives.
  • Data At Rest Encryption (D@RE) is optional software that is only offered at the point of sale and cannot be enabled after the system is purchased. EMC don’t offer D@RE in certain restricted countries, including China and Russia.

 

Further Reading and Conclusion

[Update] There are a few nice articles that I didn’t see at the time of publication that I think are worth looking at. Dave Henry has a comprehensive write-up on Unity here, Rob Koper has some good coverage here, and Chris Evans has a typically thought-provoking article here that I recommend reading. Finally, Chad Sakac has a comprehensive write-up here that is well worth your time.

If you’ve had to use local protection tools on a unified VNX, you’ll be pleased to see the improvements that EMC have made with regards to coherent features and toolsets across file and block. Likewise if you’ve struggled with the lack of modern protocol support on previous unified offerings, then Unity will be a refreshing change. It’s a bummer that the CLI has changed, but this might be an opportunity to re-evaluate a number of the scripts you’ve been using to get things done previously. If nothing else, it should give me fodder for a few more blog posts along the lines of “I used to do x with naviseccli, now I do y with UEMCLI”. I’m looking forward to digging in further.

EMC Announces Unity – Part 1

EMC recently announced their new midrange array “Unity”. The message from EMC that I’ve heard during various briefings has been that it “eclipses” the VNX and VNXe, by which they mean there is no VNX3 platform planned – Unity is EMC’s new midrange storage platform. Of interest, though, is that there are currently no VNX2 or VNXe EOL dates. EMC are positioning the Unity arrays between the VNXe1600/VNXe3200 and the VNX 7600 and 8000 hybrids. This will make a bit more sense as you read on. And while I’m at it, here’s a box shot, only because it wouldn’t be a product announcement without one of those.

Unity007_Box

 

Major Highlights

So what are the exciting parts of the announcement? Well, there are a few good bits that I’ll cover in depth further on.

  • HTML5 GUI – This is big. Java can finally go die in a fire. Or at least get updated on my laptop to something sensible;
  • Native block, file and VVOLS;
  • A new filesystem that goes to 64TB;
  • Unified block and file snapshots and replication;
  • Everything is now in 2RU of rack space – there are no more Control Stations, no more Data Movers.

Also of note is that VCE will be delivering these solutions as well within 90 days of GA.

 

New Models

There are four new models, with every model having an all-flash and hybrid option (all-flash being denoted by the F).

Unity001

All models feature:

  • Proactive support
  • Self-service portal
  • System monitoring
  • CloudIQ dashboard and management platform.

EMC talked a bit about the density improvements as well, using the change from a base VNX5800 to the Unity 600F. In this example:

  • The footprint goes from 7 RU to 2 RU;
  • Cabling goes from 30 cables down to 6;
  • Power consumption is reduced from 1495 W to 703 W;
  • Rack installation time goes from 60 minutes to 2 minutes; and
  • The hero number increases as well, from 101K to 295K IOPS on a benchmark of small-block random workloads against thin LUNs.

I haven’t put one of these things in a rack yet, nor have I had a chance to do my own testing, so I can only report what EMC are telling me. As always, your mileage might vary.

 

Architecture

Are we finally rid of Windows-based FLARE running on SPs? EMC tells me we are. If you’ve been following Chad’s blog you’d have a feel for some of the background architecture that’s gone into Unity. In short, it’s a SUSE-based operating platform with everything (block, VVOLS and file) in a common pool. In my opinion this is kind of what we were hoping to see with VNX2, and it’s good to see it’s finally here.

Unity005_Architecture

Some of the features of the new architecture include:

  • A 64-bit, 64TB filesystem (wheee!);
  • Support for IP multi-tenancy;
  • Unified snapshots and replication (it was previously a bit of a mess of different tools);
  • Integrated data copy management (I need to read up on this);
  • Improved Quality of Service (QoS) and quota management;
  • Encryption and anti-virus services; and
  • “Modern” data protection choices.

 

Storage Pools

Storage Pools have been around since Release 30 of FLARE, but these ones are a bit more capable than their predecessors. All storage resources are built from storage pools. A few of the features include:

  • Pool operations include create, expand, modify, and delete (still no shrink, as best I can tell); and
  • Users can monitor and configure storage pools (good for shops with odd requirements).

Users can also view:

  • Current and historical capacity usage;
  • FAST VP relocation and data distribution across storage pool tiers; and
  • Snapshot storage consumption thresholds and deletion policies.

Here’s a handy table listing the maximum capacities for Storage Pools on each Unity model.

Unity012_StoragePools_Limits

Note that the file components live inside what EMC calls “NAS Servers”, which are like virtualised data movers. I’ll be looking into these in more depth in the near future.

Unity006_StoragePools

 

Speeds and Feeds

Here’s a table covering the configurations for the various models (excluding the UnityVSA, which I’ll cover later). Note that the Unity 500 (F) supports 350 drives initially, with 500 supported in 2H16, and the Unity 600 (F) supports 500 drives initially, with 1000 supported in 2H16.

Unity002_HybridMaximums

 

A Disk Processor Enclosure (DPE) has two Storage Processors (SPs), each with:

  • A single-socket Intel Haswell CPU with 6-12 cores
  • DDR4 DIMM slots
  • Embedded ports:
    • 2x 1GbE RJ45 ports (management and service)
    • 2x 10GbE RJ45 ports (front-end)
    • 2x CNA ports (front-end; configured during OE install for either FC or Ethernet)
    • 2x mini-HD SAS ports (12Gb SAS DAE connectivity)
    • 1x USB port
  • Front-end connectivity is IP/iSCSI & Fibre Channel
  • Back-end connectivity to drives is 12Gb SAS

All Unity hybrid models support the 2U drive enclosure, which holds up to twenty-five 2.5” drives, and/or the 3U drive enclosure, which holds fifteen 3.5” drives. Note that the all-flash models support only the 2U drive enclosure; the 3U enclosure is for SAS and NL-SAS drives, so there is no need for it to be supported.

Here’s a table providing an overview of the (pretty reasonable) range of drives supported.

Unity003_DriveSupport

 

UnityVSA

You’ve already heard about vVNX. I even wrote about it. The UnityVSA takes that same concept and applies it to Unity, which is pretty cool. The following tables provide information on the basic configuration you’ll need in place to get it up and running.

Unity010_VSA_Reqs

There are a few different editions as well, with the 10TB and greater versions being made available on a yearly subscription basis with EMC Enhanced support. Pricing and capacity are as follows (note that these are US list prices):

  • 4TB – Free, Community supported
  • 10TB – $2995, EMC supported
  • 25TB – $3995, EMC supported
  • 50TB – $4995, EMC supported

Feature parity is there as much as it can be for a virtual system.

Unity011_VSA_features

 

Unity Unisphere

I mentioned at the start of this post that Unisphere no longer uses Java. This is seriously good news in my opinion. As well as this, Unity’s new user interface has the following benefits:

  • Eliminates the security concerns that come with browser plugins (that’s right, no one likes you, Java);
  • A sleek and clean look and feel; and
  • A flat UI, allowing all functions to be accomplished on the first screen in a category (be it file, block or VMware VVOLS).

As a result of the move to HTML5, a wide range of browsers are now supported, including:

  • Google Chrome v33 or later;
  • Internet Explorer v10 or later;
  • Mozilla Firefox v28 or later; and
  • Apple Safari v6 or later.

Here’s a screenshot of the new UI, and you can see that it’s a lot different to Navisphere and Unisphere.

Unity004_Unisphere

 

Conclusion

I’ve worked with EMC midrange gear for a long time now, and it forms the bread and butter of a number of the solutions I sell and work on on a daily basis. While the VNX2 has at times looked a little long in the tooth, the Unity platform has (based on what I’ve been told so far) shaken off the rust and delivers a midrange array that feels a whole lot more modern than previous iterations of the EMC midrange. I’ll be interested to see how these things go in the field and am looking forward to putting them through their paces from a technical perspective. If you’re in the market for a new mid-range solution it wouldn’t hurt to talk to EMC about the Unity platform.

 

SwiftStack Announces Object Storage Version 4.0

If you’ve not heard of SwiftStack before, they do “object storage for the enterprise”, with the core product built on OpenStack Swift. I recently had the opportunity to be briefed by Mario Blandini on their 4.0 announcement. Mario describes them as “Like Amazon cloud but inside your DC and behind your firewall”.

New SwiftStack 4.0 innovations introduced today (and available now or in the next 90 days) include:

  • Integrated load balancing, reducing the need for expensive dedicated network hardware and minimizing latency and bandwidth costs while scaling to larger numbers of storage nodes;
  • Metadata search, which increases business value with integrated third-party indexing and search services to make stored object data analytics-ready;
  • SwiftStack Drive, an optional desktop client that enables access to objects directly from desktops or laptops; and
  • Enhanced management, with new IPv6 support, capacity planning and advanced data migration tools.

Swift00

One of the key points in this announcement is the metadata search capability. Object storage is not just about “cheap and deep”, and the way we use metadata can have a big impact on the value of the data, often to applications that didn’t necessarily generate the data in the first place.

Like all good scale out solutions, you don’t need to buy everything up front, just what you need to get started. SwiftStack aren’t in the hardware business though, so you’ll be rolling your own. The hardware requirements for SwiftStack are here, and there’s also a reference architecture for Cisco.

 

Futures

SwiftStack have plans to introduce “Swift File Access” in 2016.

Swift00_File

Some of the benefits of this include:

  • Scale-out file services for SMB and NFS – minimizes the need for gateways;
  • Fully bimodal – files can come in over SMB and be accessed through object APIs, and vice versa; and
  • Integrated into the proxy role – performance scales independently of capacity.

SwiftStack also have plans to introduce “Object Synchronization” in 2016.

Swift00_ObjectSync

This will provide S3 Synchronization capability, including:

  • Replication of objects to S3 buckets;
  • Policy-driven protection and access using centralized policies; and
  • Support for any cloud compatible with the S3 API (a generic sketch of this kind of copy follows this list).
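
SwiftStack haven’t shipped Object Synchronization yet, so the snippet below is only a generic illustration of the kind of S3-compatible copy the feature implies: point an S3 client at any endpoint that speaks the S3 API and push the object. The endpoint, bucket and credentials are made up.

```python
# Generic S3-compatible object copy using boto3; endpoint/bucket/keys invented.
import boto3

def sync_object(key, body):
    s3 = boto3.client(
        "s3",
        endpoint_url="https://s3.example-cloud.com",  # any S3-compatible target
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )
    # Write (or overwrite) the object in the destination bucket
    s3.put_object(Bucket="dr-copy", Key=key, Body=body)

sync_object("reports/2016-05.csv", b"replicated payload")
```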

This is pretty cool as there’s a lot of momentum within enterprises to consume data in places where it’s needed, not necessarily where it’s created.

 

Final Thoughts

Object storage is hot, because folks love cloud, and object is a big part of that. I like what object can do for storage, particularly as it relates to metadata and scale-out performance. I’m happy to see SwiftStack making a decent play inside the enterprise, rather than aiming to be just another public cloud storage provider. I think they’re worth checking out, particularly if you have data that could benefit from object storage without necessarily having to live in the public cloud.

VMware vSphere Next Beta Applications Are Now Open

VMware recently announced that applications for the next VMware vSphere Beta Program are now open. People wishing to participate in the program can now indicate their interest by filling out this simple form. The vSphere team will grant access to the program to selected candidates in stages. This vSphere Beta Program leverages a private Beta community to download software and share information. There will be discussion forums, webinars, and service requests to enable you to share your feedback with VMware.

So what’s involved? Participants are expected to:

  • Accept the Master Software Beta Test Agreement prior to visiting the Private Beta Community;
  • Install beta software within 3 days of receiving access to the beta product;
  • Provide feedback within the first 4 weeks of the beta program;
  • Submit Support Requests for bugs, issues and feature requests;
  • Complete surveys and beta test assignments; and
  • Participate in the private beta discussion forum and conference calls.

All testing is free-form and you’re encouraged to use the software in ways that interest you. This will provide VMware with valuable insight into how you use vSphere in real-world conditions and with real-world test cases.

Why participate? Some of the many reasons to participate include:

  • Receiving early access to the vSphere Beta products;
  • Interacting with the vSphere Beta team consisting of Product Managers, Engineers, Technical Support, and Technical Writers;
  • Providing direct input on product functionality, configurability, usability, and performance;
  • Providing feedback influencing future products, training, documentation, and services; and
  • Collaborating with other participants, learning about their use cases, and sharing advice and learnings.

I’m a big fan of public beta testing. While we’re not all experts on how things should work, it’s a great opportunity to at least have your say on the direction vSphere takes. The folks in vSphere product management may not be able to incorporate every idea you have, but you’ll have an opportunity to contribute feedback and give VMware some insight into how their product is being used in the wild. In my opinion this is extremely valuable for both VMware and us, the consumers of their product. Plus, you’ll get a sneak peek at what’s coming up.

So, if you’re good with NDAs and have some time to devote to testing next-generation vSphere, this is the program for you. Head over to the website and check it out.

Cohesity Announces Hybrid Cloud Strategy

I’ve posted previously about the opportunity I had to talk in depth with some of the folks from Cohesity at Storage Field Day 8. They’ve now come out with their “Hybrid Cloud Strategy”, and I thought it was worthwhile putting together a brief post covering the announcement.

As you’ve probably been made aware countless times by various technology sales people, analysts and pundits, enterprises are moving workload to the cloud. Cohesity are offering what they call a complete approach via the following features:

  • Cohesity CloudArchive;
  • Cohesity CloudTier; and
  • Cohesity CloudReplicate.

 

Cohesity CloudArchive

Cohesity CloudArchive is, as the name implies, a mechanism to “seamlessly archive datasets for extended retention from the Cohesity Data Platform through pre-built integrations with Google Nearline, Microsoft Azure and Amazon S3, Glacier”. This feature was made available as part of the 2.0 release, which I covered here.

CloudArchive

 

Cohesity CloudTier

Cohesity CloudTier allows you to use public cloud as an extension of your on-premises storage. It “dynamically increases local storage capacity, by moving seldom-accessed data blocks into the cloud”. The cool thing about this is that, via the policy-based waterfall model, transparent cloud tiering can be managed from the Cohesity Data Platform console. Cohesity suggest that the main benefit is that end users no longer have to worry about exceeding their on-premises capacity during temporary or seasonal demand spikes.
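
For what it’s worth, here’s a hand-wavy sketch of what a waterfall-style tiering decision looks like in general: once the local tier fills past a watermark, the coldest blocks spill over to the cloud. The thresholds and data structures are invented for illustration; Cohesity’s actual behaviour is driven by policies set in the Data Platform console.

```python
# Illustrative waterfall tiering decision; thresholds and structures invented.
import time

LOCAL_HIGH_WATERMARK = 0.80      # start spilling at 80% local utilisation
COLD_AFTER_SECONDS = 30 * 86400  # a block is "cold" if untouched for 30 days

def blocks_to_spill(blocks, local_used, local_capacity, now=None):
    """Return cold blocks to push to the cloud tier, coldest first, until
    local utilisation would drop back under the watermark."""
    now = now or time.time()
    if local_used / local_capacity <= LOCAL_HIGH_WATERMARK:
        return []
    cold = sorted(
        (b for b in blocks if now - b["last_access"] > COLD_AFTER_SECONDS),
        key=lambda b: b["last_access"],
    )
    spill, freed = [], 0
    for block in cold:
        if (local_used - freed) / local_capacity <= LOCAL_HIGH_WATERMARK:
            break
        spill.append(block)
        freed += block["size"]
    return spill
```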

CloudTier

 

Cohesity CloudReplicate

Cohesity CloudReplicate allows Cohesity users to “replicate local storage instances to remote public or private cloud services”. This has the potential to provide a lower-cost disaster recovery solution for on-premises installations. Cohesity have said that this feature will be released for production use later this year.

CloudReplicate

 

Further Reading and Thoughts

Everyone and their dog is doing some kind of cloud storage play nowadays. This isn’t a bad thing by any stretch, as CxOs and enterprises are really super keen to move some (if not all) of their workloads off-premises in order to reduce their reliance on in-house IT systems. Every cloud opportunity comes with caveats though, and you need to be mindful of the perceived versus actual cost of storing a bunch of your data off-premises. You also need to look at things like security, bandwidth and accessibility before you take the leap. But this is all stuff you know, and I’m sure that a lot of people have thought about the impact of off-premises storage for large datasets before blindly signing up with Amazon and the like. The cool thing about Cohesity’s secondary storage hybrid cloud solution is that Cohesity are focussed on the type of data that lends itself really well to off-premises storage.

I’ve been a fan of Cohesity since they first announced shipping product. And it’s been great to see the speed with which new features are being added to the product. As well as this, Cohesity’s responsiveness to criticism and suggestions for improvements has been exciting to see play out. You can check out a video of Cohesity’s Hybrid Cloud demo here, while the cloud integration demo from Storage Field Day 9 is available here. Alex also has a nice write-up here.

EMC Announces VxRail

Yes, yes, I know it was a little while ago now. I’ve been occupied by other things and wanted to let the dust settle on the announcement before I covered it off here. And it was really a VCE announcement. But anyway. I’ve been doing work internally around all things hyperconverged and, as I work for a big EMC partner, people have been asking me about VxRail. So I thought I’d cover some of the more interesting bits.

So, let’s start with the reasonably useful summary links:

  • The VxRail datasheet (PDF) is here;
  • The VCE landing page for VxRail is here;
  • Chad’s take (worth the read!) can be found here; and
  • Simon from El Reg did a write-up here.

 

So what is it?

Well it’s a re-envisioning of VMware’s EVO:RAIL hyperconverged infrastructure in a way. But it’s a bit better than that, a bit more flexible, and potentially more cost effective. Here’s a box shot, because it’s what you want to see.

VxRail_002

Basically it’s a 2RU appliance housing 4 nodes. You can scale these nodes out in increments as required. There’s a range of hybrid configurations available.

VxRail_006

As well as some all flash versions.

VxRail_007

By default the initial configuration must be fully populated with 4 nodes, with the ability to scale up to 64 nodes (with qualification from VCE). Here are a few other notes on clusters:

  • You can’t mix All Flash and Hybrid nodes in the same cluster (this messes up performance);
  • All nodes within the cluster must have the same license type (Full License or BYO/ELA); and
  • First-generation VSPEX BLUE appliances can be used in the same cluster as second-generation appliances, but EVC must be set to align with the G1 appliances for the whole cluster.

 

On VMware Virtual SAN

I haven’t used VSAN/Virtual SAN enough in production to have really firm opinions on it, but I’ve always enjoyed tracking its progress in the marketplace. VMware claim that the use of Virtual SAN over other approaches has the following advantages:

  • No need to install Virtual Storage Appliances (VSA);
  • CPU utilization <10%;
  • No reserved memory required;
  • Provides the shortest path for I/O; and
  • Seamlessly handles VM migrations.

If that sounds a bit like some marketing stuff, it sort of is. But that doesn’t mean they’re necessarily wrong either. VMware state that the placement of Virtual SAN directly in the hypervisor kernel allows it to “be fast, highly efficient, and be able to scale with flash and modern CPU architectures”.

While I can’t comment on this one way or another, I’d like to point out that this appliance is really a VMware play. The focus here is on the benefit of using an established hypervisor (vSphere), an established management solution (vCenter) and a (soon-to-be) established software-defined storage solution (Virtual SAN). If you’re looking for the flexibility of multiple hypervisors or incorporating other storage solutions this really isn’t for you.

 

Further Reading and Final Thoughts

Enrico has a good write-up on El Reg about Virtual SAN 6.2 that I think is worth a look. You might also be keen to try something that’s NSX-ready. This is as close as you’ll get to that (although I can’t comment on the reality of one of those configurations). You’ve probably noticed there have been a tonne of pissing matches on the Twitters recently between VMware and Nutanix about their HCI offerings and the relative merits (or lack thereof) of their respective architectures. I’m not telling you to go one way or another. The HCI market is reasonably young, and I think there’s still plenty of change to come before the market has determined whether this really is the future of data centre infrastructure. In the meantime though, if you’re already slow-dancing with EMC or VCE and get all fluttery when people mention VMware, then the VxRail is worth a look if you’re HCI-curious but looking to stay with your current partner. It may not be for the adventurous amongst you, but you already know where to get your kicks. In any case, have a look at the datasheet and talk to your local EMC and VCE folk to see if this is the right choice for you.

New eBook from Dell

I recently had the opportunity to contribute to an eBook from Dell (just quietly, it feels more like a pamphlet) called “10 Ways to Flash Forward: Future-Ready Storage Insights from the Experts”. Besides the fact that I need to get a headshot that isn’t the same as my work ID card, I think it’s worth checking out, if only for the insights that other people have provided. You can grab a PDF copy here. It’s also available via SlideShare.