Elastifile Announces Cloud File Service

Elastifile recently announced a partnership with Google to deliver a fully managed file service via Google Cloud Platform. I had the opportunity to speak with Jerome McFarland and Dr Allon Cohen about the announcement and thought I’d share some notes here.

 

What Is It?

Elastifile Cloud File Service delivers a self-service SaaS experience, providing the ability to consume scalable file storage that’s deeply integrated with Google infrastructure. You could think of it as similar to Amazon’s EFS.

[image courtesy of Elastifile]

 

Benefits

Easy to Use

Why would you want to use this service? It:

  • Eliminates manual infrastructure management;
  • Provisions turnkey file storage capacity in minutes; and
  • Can be delivered in any zone and any region.

 

Elastic

It’s also cloudy in a lot of the right ways, including:

  • Pay-as-you-go, consumption-based pricing;
  • Flexible pricing tiers to match workflow requirements; and
  • The ability to start small and scale out or in as needed and on-demand.

 

Google Native

One of the real benefits of this kind of solution, though, is the deep integration with Google Cloud Platform.

  • The UI, deployment, monitoring, and billing are fully integrated;
  • You get a single bill from Google; and
  • The solution has been co-engineered to be GCP-native.

[image courtesy of Elastifile]

 

What About Cloud Filestore?

With Google’s recently announced Cloud Filestore, you get:

  • A single storage tier selection (Standard or SSD);
  • In-cloud availability only; and
  • The ability to grow capacity or performance up to the tier’s capacity limit.

With Elastifile’s Cloud File Service, you get access to the following features:

  • Aggregated performance and capacity across many VMs;
  • Elastic scale-out and scale-in, on demand;
  • Multiple service tiers for cost flexibility; and
  • Hybrid cloud, multi-zone / multi-region, and cross-cloud support.

You can also use ClearTier to perform tiering between file and object without any application modification.

 

Thoughts

I’ve been a fan of Elastifile for a little while now, and I thought their 3.0 release had a fair bit going for it. As you can see from the list of features above, Elastifile are really quite good at leveraging all of the cool things about cloud – it’s software only (someone else’s infrastructure), reasonably priced, flexible, and scalable. It’s a nice change from some vendors who have focussed on being in the cloud without necessarily delivering the flexibility that cloud solutions have promised for so long. Couple that with a robust managed service and some preferential treatment from Google and you’ve got a compelling solution.

Not everyone will want or need a managed service to go with their file storage requirements, but if you’re an existing GCP and / or Elastifile customer, this will make some sense from a technical assurance perspective. The ability to take advantage of features such as ClearTier, combined with the simplicity of keeping it all under the Google umbrella, has a lot of appeal. Elastifile are in the box seat now as far as these kinds of offerings are concerned, and I’m keen to see how the market responds to the solution. If you’re interested in this kind of thing, the Early Access Program opens December 11th, with general availability in Q1 2019. In the meantime, if you’d like to try out ECFS (Elastifile Cloud File System) on GCP, you can sign up here.

Random Short Take #9

Here are a few links to some random news items and other content that I found interesting. You might find it interesting too. Maybe.

 

 

Pure Storage Goes All In On Hybrid … Cloud

I recently had the opportunity to hear from Chadd Kenney about Pure Storage’s Cloud Data Services announcement and thought it worthwhile covering here. But before I get into that, Pure have done a little re-branding recently. You’ll now hear them referring to Cloud Data Infrastructure (their on-premises instances of FlashArray, FlashBlade, FlashStack) and Cloud Data Management (being their Pure1 instances).

 

The Announcement

So what is “Cloud Data Services”? It comprises three pieces, each covered below: Cloud Block Store for AWS, CloudSnap, and StorReduce for AWS.

According to Kenney, “[t]he right strategy is and not or, but the enterprise is not very cloudy, and the cloud is not very enterprise-y”. If you’ve spent time in any IT organisation, you’ll see that there is, indeed, a “Cloud divide” in play. What we’ve seen in the last 5 – 10 years is a marked difference in application architectures, consumption and management, and even storage offerings.

[image courtesy of Pure Storage]

 

Cloud Block Store

The first part of the puzzle is probably the most interesting for those of us struggling to move traditional application stacks to a public cloud solution.

[image courtesy of Pure Storage]

According to Pure, Cloud Block Store offers:

  • High reliability, efficiency, and performance;
  • Hybrid mobility and protection; and
  • Seamless APIs on-premises and cloud.

Kenney likens building a Purity solution on AWS to the approach Pure took in the early days of their existence, when they took off-the-shelf components and used optimised software to make them enterprise-ready. Now they’re doing the same thing with AWS, and addressing a number of the shortcomings of the underlying infrastructure through the application of the Purity architecture.

Features

So why would you want to run virtual Pure controllers on AWS? The idea is that Cloud Block Store:

  • Aggregates performance and reliability across many cloud stores;
  • Can be deployed HA across two availability zones (using ActiveCluster);
  • Is always thin, deduplicated, and compressed;
  • Delivers instant space-saving snapshots; and
  • Is always encrypted.

Management and Orchestration

If you have previous experience with Purity, you’ll appreciate that the management and orchestration experience remains the same.

  • Same management, with Pure1 managing on-premises instances and instances in the cloud
  • Consistent APIs on-premises and in cloud
  • Plugins to AWS and VMware automation
  • Open, full-stack orchestration

Use Cases

Pure say that you can use this kind of solution in a number of different scenarios, including DR, backup, and migration in and between clouds. If you want to use ActiveCluster between AWS regions, you might have some trouble with latency, but in those cases other replication options are available.

[image courtesy of Pure Storage]

Note that Cloud Block Store is available in a few different deployment configurations:

  • Test/Dev – using a single controller instance (EBS can’t be attached to more than one EC2 instance)
  • Production – ActiveCluster (2 controllers, either within or across availability zones)

 

CloudSnap

Pure tell us that we’ve moved away from “disk to disk to tape” as a data protection philosophy and we now should be looking at “Flash to Flash to Cloud”. CloudSnap allows FlashArray snapshots to be easily sent to Amazon S3. Note that you don’t necessarily need FlashBlade in your environment to make this work.

[image courtesy of Pure Storage]

For the moment, this is only being certified on AWS.

 

StorReduce for AWS

Pure acquired StorReduce a few months ago and now they’re doing something with it. If you’re not familiar with them, “StorReduce is an object storage deduplication engine, designed to enable simple backup, rapid recovery, cost-effective retention, and powerful data re-use in the Amazon cloud”. You can leverage any array, or existing backup software – it doesn’t need to be a Pure FlashArray.

Features

According to Pure, you get a lot of benefits with StorReduce, including:

  • Object fabric – secure, enterprise-ready, highly durable cloud object storage;
  • Efficient – reduces storage and bandwidth costs by up to 97% (see the quick calculation after this list), enabling cloud storage to cost-effectively replace disk and tape;
  • Fast – the fastest deduplication engine on the market, sustaining 10s of GiB/s or more 24/7;
  • Cloud native – a native S3 interface enabling openness, integration, and data portability, with all data and metadata stored in the object store;
  • Single namespace – a single data hub across your data centre enabling fast local performance and global data protection; and
  • Scalability – software nodes scale linearly to deliver 100s of PBs of capacity and 10s of GB/s of bandwidth.
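
The “up to 97%” figure is easier to reason about as an effective reduction ratio. Here’s a rough back-of-the-envelope sketch; the 100TB logical capacity is just an assumed example, not a StorReduce figure.

```python
# Back-of-the-envelope: interpreting an "up to 97%" reduction as a ratio.
logical_tb = 100          # hypothetical logical (pre-deduplication) capacity in TB
reduction = 0.97          # the "up to 97%" figure quoted above

stored_tb = logical_tb * (1 - reduction)
ratio = logical_tb / stored_tb

print(f"Stored footprint: {stored_tb:.0f} TB")        # ~3 TB
print(f"Effective reduction ratio: {ratio:.0f}:1")    # ~33:1
```

In other words, a 97% reduction works out to roughly a 33:1 effective data reduction ratio.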

 

Thoughts and Further Reading

The title of this post was a little misleading, as Pure have been doing various cloud things for some time. But sometimes I give in to my baser instincts and like to try and be creative. It’s fine. In my mind the Cloud Block Store for AWS piece of the Cloud Data Services announcement is possibly the most interesting one. It seems like a lot of companies are announcing these kinds of virtualised versions of their hardware-based appliances that can run on public cloud infrastructure. Some of them are just encapsulated instances of the original code, modified to deal with a VM-like environment, whilst others take better advantage of the public cloud architecture.

So why are so many of the “traditional” vendors producing these kinds of solutions? Well, the folks at AWS are pretty smart, but it’s a generally well-understood fact that the enterprise moves at enterprise pace. That being the case, enterprises may not be terribly well positioned to spend the time and effort required to refactor their applications into a more cloud-friendly architecture. But that doesn’t mean the CxOs haven’t already been convinced that they no longer need their own infrastructure. So the operations folks are being pushed to migrate out of their DCs and into public cloud provider infrastructure. The problem is that, if you’ve spent a few minutes looking at what the likes of AWS and GCP offer, you’ll see that they’re not really doing things in the same way that their on-premises comrades are. AWS expects you to replicate your data at an application level, for example, because those EC2 instances will sometimes just up and disappear.

So how do you get around the problem of forcing workloads into public cloud without a lot of the safeguards associated with on-premises deployments? You leverage something like Pure’s Cloud Block Store. It overcomes a lot of the issues associated with just running EC2 on EBS, and has the additional benefit of giving your operations folks a consistent management and orchestration experience. Additionally, you can still do things like run ActiveCluster between and within Availability Zones, so your mission-critical internal kitchen roster application can stay up and running when an EC2 instance goes bye bye. You might pay a bit more or less than you would with plain EBS, but you’ll get some other features too.

I’ve argued before that if enterprises are really serious about getting into public cloud, they should be looking to work towards refactoring their applications. But I also understand that the reality of enterprise application development means that this type of approach is not always possible. After all, enterprises are (generally) in the business of making money. If you come to them and can’t show exactly how they’ll save money by moving to public cloud (and let’s face it, it’s not always an easy argument), then you’ll find it even harder to convince them to undertake significant software engineering efforts simply because the public cloud folks like to do things a certain way. I’m rambling a bit, but my point is that these types of solutions solve a problem that we all wish didn’t exist, but does.

Justin did a great write-up here that I recommend reading. Note that both Cloud Block Store and StorReduce are in Beta with planned general availability in 2019.

OT – I Voted. Now It’s Over To You

Eric Siebert has opened up voting for the Top vBlog 2018. I’m listed on the vLaunchpad and you can vote for me under storage and independent blog categories as well. There are a bunch of great blogs listed on Eric’s vLaunchpad, so if nothing else you may discover someone you haven’t heard of before, and chances are they’ll have something to say that’s worth checking out. If this stuff seems a bit needy, it is. But it’s also nice to have people actually acknowledging what you’re doing. I’m hoping that people find this blog useful, because it really is a labour of love (random vendor t-shirts notwithstanding).

NVMesh 2 – A Compelling Sequel From Excelero

The Announcement

Excelero recently announced NVMesh 2 – the next iteration of their NVMesh product. NVMesh is a software-only solution designed to pool NVMe-based PCIe SSDs.

[image courtesy of Excelero]

Key Features

There are three key features that have been added to NVMesh.

  • MeshConnect – adding support for traditional network technologies TCP/IP and Fibre Channel, giving NVMesh the widest selection of supported protocols and fabrics among software-defined storage platforms, alongside the already supported InfiniBand, RoCE v2, RDMA, and NVMe-oF.
  • MeshProtect – offering flexible protection levels for differing application needs, including mirrored and parity-based redundancy.
  • MeshInspect – with performance analytics for pinpointing anomalies quickly and at scale.

Performance

Excelero have said that NVMesh delivers “shared NVMe at local performance and 90+% storage efficiency that helps further drive down the cost per GB”.

Protection

There’s also a range of protection options available now. Excelero tell me that you can start at level 0 (no protection, lowest latency) and go all the way to “MeshProtect 10+2 (distributed dual parity)”. This allows customers to “choose their preferred level of performance and protection. [While] Distributing data redundancy services eliminates the storage controller bottleneck.”

Visibility

One of my favourite things about NVMesh 2 is the MeshInspect feature, with a “built-in statistical collection and display, stored in a scalable NoSQL database”.

[image courtesy of Excelero]

 

Thoughts And Further Reading

Excelero emerged from stealth mode at Storage Field Day 12. I was impressed with their offering back then, and they continue to add features while focussing on delivering top-notch performance via a software-only solution. It feels like there’s a lot of attention on NVMe-based storage solutions, and with good reason. These things can go really, really fast. There are a bunch of startups with an NVMe story, and the bigger players are all delivering variations on these solutions as well.

Excelero seem well placed to capitalise on this market interest, and their decision to focus on a software-only play seems wise, particularly given that some of the standards, such as NVMe over TCP, haven’t been fully ratified yet. This approach will also appeal to the aspirational hyperscalers, because they can build their own storage solution, source their own devices, and still benefit from a fast software stack that can deliver performance in spades. Excelero also supports a wide range of transports now, with the addition of NVMe over FC and TCP.

NVMesh 2 looks to be smoothing some of the rougher edges that were present with version 1, and I’m pumped to see the focus on enhanced visibility via MeshInspect. In my opinion these kinds of tools are critical to the uptake of solutions such as NVMesh in both the enterprise and cloud markets. The broadening of the connectivity story, as well as the enhanced resiliency options, make this something worth investigating. If you’d like to read more, you can access a white paper here (registration required).

Random Short Take #8

Here are a few links to some news items and other content that might be useful. Maybe.

Elastifile Announces v3.0

Elastifile recently announced version 3.0 of their product. I had the opportunity to speak to Jerome McFarland (VP of Marketing) and thought I’d share some information from the announcement here. If you haven’t heard of them before, “Elastifile augments public cloud capabilities and facilitates cloud consumption by delivering enterprise-grade, scalable file storage in the cloud”.

 

The Announcement

ClearTier

One of the major features of the 3.0 release is “ClearTier”, delivering integration between file and object storage in public clouds. With ClearTier, you have object storage expanding the file system namespace. The cool thing about this is that Elastifile’s ECFS provides transparent read / write access to all data. No need to re-tool applications to take advantage of the improved economics of object storage in the public cloud.

How Does It Work?

All data is accessible through ECFS via a standard NFS mount, and application access to object data is routed automatically. Data tiering occurs automatically according to user-defined policies specifying:

  • Targeted capacity ratio between file and object;
  • Eligibility for data demotion (i.e. min time since last access); and
  • Promotion behaviour in response to object data access.
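
Elastifile haven’t published the policy syntax in this announcement, so purely as an illustration, here’s a minimal sketch of how those three policy dimensions might be expressed. The field names and values are hypothetical, not Elastifile’s actual API.

```python
# Hypothetical representation of a ClearTier-style tiering policy.
# Field names and values are illustrative only, not Elastifile's actual API.
cleartier_policy = {
    "capacity_ratio": {                    # targeted split between file and object
        "file_percent": 20,
        "object_percent": 80,
    },
    "demotion": {                          # when data becomes eligible for the object tier
        "min_days_since_last_access": 30,
    },
    "promotion": {                         # how to respond when object-tier data is accessed
        "on_access": "promote_to_file_tier",
    },
}

print(cleartier_policy)
```

The point is simply that the knobs are policy-driven rather than baked into the application, which is what lets existing NFS clients stay untouched.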

Bursting

ClearTier gets even more interesting when you combine it with Elastifile’s CloudConnect, by using CloudConnect to get data to the public cloud in the first place, and then using ClearTier to push data to object storage.

[image courtesy of Elastifile]

It becomes a simple process, and consists of two steps:

  1. Move on-premises data (from any NAS) to cloud-based object storage using CloudConnect; and
  2. Deploy ECFS with a pointer to the designated object store.

Get Snappy

ClearTier also provides the ability to store snapshots on an object tier. Snapshots occur automatically according to user-defined policies specifying:

  • Data to include;
  • Destination for snapshot (i.e. file storage / object storage); and
  • Schedule for snapshot creation.
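
Again purely as an illustration (the schema below is assumed, not Elastifile’s), a snapshot policy covering those three dimensions might look something like this:

```python
# Hypothetical snapshot policy covering the three dimensions above.
# The schema is assumed for illustration; it is not Elastifile's actual API.
snapshot_policy = {
    "include": ["/projects/genomics", "/projects/rendering"],  # data to include
    "destination": "object",     # where the snapshot lands: "file" or "object" storage
    "schedule": "0 2 * * *",     # cron-style schedule: daily at 02:00
}

print(snapshot_policy)
```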

The great thing is that all snapshots are accessible through ECFS via the same NFS mount.

 

Thoughts And Further Reading

I was pretty impressed with Elastifile’s CloudConnect solution when they first announced it. When you couple CloudConnect with something like ClearTier, and have it sitting on top of the ECFS foundation, it strikes me as a pretty cool solution. If you’re using applications that rely heavily on NFS, for example, ClearTier gives you a way to leverage the traditionally low cost of cloud object storage with the improved performance of file. I like the idea that you can play with the ratio of file and object, and I’m a big fan of not having to re-tool my file-centric applications to take advantage of object economics. The ability to store a bunch of snapshots on the object tier also adds increased flexibility in terms of data protection and storage access options.

The ability to burst workloads is exactly the kind of technical public cloud use case that we’ve been talking about in slideware for years now. The reality, however, has been somewhat different. It looks like Elastifile are delivering a solution that competes aggressively with some of the leading cloud providers’ object solutions, whilst also giving the storage array vendors, now dabbling in cloud solutions, pause for thought. There are a bunch of interesting use cases, particularly if you need to access a bunch of compute and large data sets via file-based storage in a cloud environment for short periods of time. If you’re looking for a cost-effective, scalable storage solution, I think that Elastifile are worth checking out.

Storage Field Day 17 – Wrap-up and Link-o-rama

Disclaimer: I recently attended Storage Field Day 17.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

This is a quick post to say thanks once again to Stephen and Ben, and the presenters at Storage Field Day 17. I had a super fun and educational time. For easy reference, here’s a list of the posts I did covering the events (they may not match the order of the presentations).

Storage Field Day – I’ll Be At Storage Field Day 17

Storage Field Day 17 – (Fairly) Full Disclosure

I Need Something Like Komprise For My Garage

NGD Systems Are On The Edge Of Glory

Intel’s Form Factor Is A Factor

StarWind Continues To Do It Their Way

 

Also, here’s a number of links to posts by my fellow delegates (in no particular order). They’re all very smart people, and you should check out their stuff, particularly if you haven’t before. I’ll attempt to keep this updated as more posts are published. But if it gets stale, the Storage Field Day 17 landing page will have updated links.

 

Max Mortillaro (@DarkkAvenger)

I will be at Storage Field Day 17! Wait what is a “Storage Field Day”?

The Rise of Computational Storage

Komprise: Data Management Made Easy

What future for Intel Optane?

 

Ray Lucchesi (@RayLucchesi)

Screaming IOP performance with StarWind’s new NVMeoF software & Optane SSDs

GreyBeards talk Computational Storage with Scott Shadley VP Marketing NGD Systems

 

Howard Marks (@DeepStorageNet)

 

Arjan Timmerman (@ArjanTim)

EP10 – Computational Storage: A Paradigm Shift In The Storage Industry with Scott Shadley and NGD Systems

 

Aaron Strong (@TheAaronStrong)

Komprise Systems Overview from #SFD17

NGD Systems from #SFD17

StarWind NVMeoF

 

Jeffrey Powers (@Geekazine)

Komprise Transforming Data Management with Disruption at SFD17

Starwind NVMe Over Fabrics for SMB and ROBO at SFD17

NGD Systems Help Make Cat Searches Go Faster with Better Results at SFD17

 

Joe Houghes (@JHoughes)

 

Luigi Danakos (@NerdBlurt)

Tech Stand UP Episode 8 – SFD17 – Initial Thoughts on Komprise Podcast

Tech Stand Up Episode 9 – SFD17 – Initial thoughts NGD Systems Podcast

 

Mark Carlton (@MCarlton1983)

 

Enrico Signoretti (@ESignoretti)

Secondary Storage Is The New Primary

The Era of Composable Storage Infrastructures is Coming

The Fascinating World Of Computational Storage

Secondary Data and Komprise with Krishna Subramanian

 

Jon Hudson (@_Desmoden)

 

[photo courtesy of Ramon]

StarWind Continues To Do It Their Way

Disclaimer: I recently attended Storage Field Day 17.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

 

StarWind recently presented at Storage Field Day 17. You can see their videos from Storage Field Day 17 here, and download a PDF copy of my rough notes from here.

 

StarWind Do All Kinds Of Stuff

I’ve written enthusiastically about StarWind previously. If you’re unfamiliar with them, they have three main focus areas:

They maintain a strict focus on the SMB and Enterprise ROBO markets, and aren’t looking to be the next big thing in the enterprise any time soon.

 

So What’s All This About NVMe [over Fabrics]?

According to Max and the team, NVMe over Fabrics is “the next big thing in [network] storage”. Here’s a photo of Max saying just that.

Why Hate SAS?

It’s not that people hate SAS, it’s just that the SAS protocol was designed for disk, and NVMe was designed for Flash devices.

SAS (iSCSI / iSER)                        | NVMe [over Fabrics]
Complex driver built around archaic SCSI  | Simple driver built around block device (R/W)
Single short queue per controller         | One device = one controller, no bottlenecks
Single short queue per device             | Many long queues per device
Serialised access, locks                  | Non-serialised access, no locks
Many-to-One-to-Many                       | Many-to-Many, true Point-to-Point

 

You Do You, Boo

StarWind have developed their own NVMe SPDK for Windows Server (as Intel doesn’t currently provide one). In early development they had some problems with high CPU overheads. CPU might be a “cheap resource”, but you still don’t want to use up 8 cores dishing out IO for a single device. They’ve managed to move a lot of the work to user space and cut down on core consumption. They’ve also built their own Linux (CentOS) based initiator for NVMe over Fabrics. They’ve developed an NVMe-oF initiator for Windows by combining a Linux initiator and stub driver in the hypervisor. “We found the elegant way to bring missing SPDK functionality to Windows Server: Run it in a VM with proper OS! First benefit – CPU is used more efficiently”. They’re looking to do something similar with ESXi in the very near future.

 

Thoughts And Further Reading

I like to think of StarWind as the little company from Ukraine that can. They have a long, rich heritage in developing novel solutions to everyday storage problems in the data centre. They’re not necessarily trying to take over the world, but they’ve demonstrated before that they have an ability to deliver solutions that are unique (and sometimes pioneering) in the marketplace. They’ve spent a lot of time developing software storage solutions over the years, so it makes sense that they’d be interested to see what they could do with the latest storage protocols and devices. And if you’ve ever met Max and Anton (and the rest of their team), it makes even more sense that they wouldn’t necessarily wait around for Intel to release a Windows-based SPDK to see what type of performance they could get out of these fancy new Flash devices.

All of the big storage companies are coming out with various NVMe-based products, and a number are delivering NVMe over Fabrics solutions as well. There’s a whole lot of legacy storage that continues to dominate the enterprise and SMB storage markets, but I think it’s clear from presentations such as StarWind’s that the future is going to look a lot different in terms of the performance available to applications (both at the core and edge).

You can check out this primer on NVMe over Fabrics here, and the ratified 1.0a specification can be viewed here. Ray Lucchesi, as usual, does a much better job than I do of explaining things, and shares his thoughts here.

Intel’s Form Factor Is A Factor

Disclaimer: I recently attended Storage Field Day 17.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

 

The Intel Optane team recently presented at Storage Field Day 17. You can see their videos from Storage Field Day 17 here, and download a PDF copy of my rough notes from here. I urge you to check out the videos, as there was a heck of a lot of stuff in there. But rather than talk about benchmarks and the SPDK, I’m going to focus on what’s happening with Intel’s approach to storage in terms of the form factor.

 

Of Form Factors And Other Matters Of Import

An Abbreviated History Of Drive Form Factors

Let’s start with a little bit of history to get you going. IBM introduced the first hard drive – the IBM 350 disk storage unit – in 1956. Over time we’ve gone from a variety of big old drives to smaller form factors. I’m not old enough to reminisce about the Winchester drives, but I do remember the 5.25″ drives in the XT. Wikipedia provides as good a place to start as any if you’re interested in knowing more about hard drives. In any case, we now have the following prevailing form factors in use as hard drive storage:

  • 3.5″ drives – still reasonably common in desktop computers and “cheap and deep” storage systems;
  • 2.5″ drives (SFF) – popular in laptops and used as a “dense” form factor for a variety of server and storage solutions;
  • U.2 – mainstream PCIe SSD form factor that has the same dimensions as 2.5″ drives; and
  • M.2 – designed for laptops and tablets.

Challenges

There are a number of challenges associated with the current drive form factors. The most notable of these is the density issue. Drive (and storage) vendors have been struggling for years to try and cram more and more devices into smaller spaces whilst increasing device capacities as well. This has led to problems with cooling, power, and overall reliability. Basically, there’s only so much you can put in 1RU without the whole lot melting.

 

A Ruler? Wait, what?

Intel’s “Ruler” is a long (or short) ruler-like drive based on the EDSFF (Enterprise and Datacenter SSD Form Factor) specification. There’s a tech brief you can view here. There are a few different versions (basically long and short), and it still leverages NVMe via PCIe.

[image courtesy of Intel]

It’s Denser

You can cram a lot of these things in a 1RU server, as Super Micro demonstrated a few months ago.

  • Up to 32 E1.L 9.5mm drives per 1RU
  • Up to 48 E1.S drives per 1RU

Which means you could be looking at around a petabyte of raw storage in 1RU (using 32TB E1.L drives). This number is only going to go up as capacities increase. Instead of half a rack of 4TB SSDs, you can do it all in 1RU.
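
To put a rough number on that, here’s the quick arithmetic using the drive counts above and the 32TB E1.L example from the post (actual drive capacities will obviously vary):

```python
# Rough raw-capacity-per-1RU check using the figures above.
e1l_drives_per_ru = 32      # E1.L 9.5mm drives per 1RU, per the list above
e1l_capacity_tb = 32        # example E1.L drive capacity quoted in the post

raw_tb_per_ru = e1l_drives_per_ru * e1l_capacity_tb
print(f"{raw_tb_per_ru} TB raw per 1RU (roughly {raw_tb_per_ru / 1000:.1f} PB)")  # 1024 TB, ~1.0 PB
```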

It’s Cooler

Cooling has been a problem for storage systems for some time. A number of storage vendors have found out the hard way that jamming a bunch of drives in a small enclosure has a cost in terms of power and cooling. Intel tell us that they’ve had some (potentially) really good results with the E1.L and E1.S based on testing to date (in comparison to traditional SSDs). They talked about:

  • Up to 2x less airflow needed per E1.L 9.5mm SSD vs. U.2 15mm (based on Intel’s internal simulation results); and
  • Up to 3x less airflow needed per E1.S SSD vs. U.2 7mm.

Still Serviceable

You can also replace these things when they break. Intel say the drives:

  • Are fully front serviceable with an integrated pull latch;
  • Support integrated, programmable LEDs; and
  • Support remote, drive-specific power cycling.

 

Thoughts And Further Reading

SAN and NAS became popular in the data centre because you could jam a whole bunch of drives in a central location and you weren’t limited by what a single server could support. For some workloads though, having storage decoupled from the server can be problematic either in terms of latency, bandwidth, or both. Some workloads need their storage as close to the processor as possible. Technologies such as NVMe over Fabrics are addressing that issue to an extent, and other vendors are working to bring the compute closer to the storage. But some people just want to do what they do, and they need more and more storage to do it. I think the “ruler” form factor is an interesting approach to the issue traditionally associated with cramming a bunch of capacity in a small space. It’s probably going to be some time before you see this kind of thing in data centres as a matter of course, because it takes a long time to change the way that people design their servers to accommodate new standards. Remember how long it took for SFF drives to become as common in the DC as they are? No? Well it took a while. Server designs are sometimes developed years (or at least months) ahead of their release to the market. That said, I think Intel have come up with a really cool idea here, and if they can address the cooling and capacity issues as well as they say they can, this will likely take off. Of course, the idea of having 1PB of data sitting in 1RU should be at least a little scary in terms of failure domains, but I’m sure someone will work that out. It’s just physics after all, isn’t it?

There’s also an interesting article at The Register on the newer generation of drive form factors that’s worth checking out.