Dell Technologies World 2018 – storage.27 – Isilon: What’s New in 2018 & Future Directions Notes

Disclaimer: I recently attended Dell Technologies World 2018.  My flights, accommodation and conference pass were paid for by Dell Technologies via the Press, Analysts and Influencers program. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Here are my rough notes from the storage.27 session. This was presented by John Hayden, VP Software Engineering, Unstructured Data Storage, and covered Isilon: What’s New in 2018 & Future Directions. This was a futures session, so some of this may not come to pass exactly as described here, and there are no dates. The choice was either to talk dates with no technical details, or technical details with no dates. Option 2 made for a more entertaining session.

Momentum with Isilon – more than 3.5EB shipped in calendar year 2017

 

What’s new since Dell EMC World 2017?

  • Release of OneFS 8.1
  • New generation of Isilon hardware products
    • All-Flash
    • Hybrid
    • Archive
  • 3 year satisfaction guarantee

 

Isilon hardware design: compute

Features

  • 4 nodes in 4U chassis
  • Intel Broadwell CPUs with optimised compute-to-drive ratios
  • Up to 6TB cache per node
  • No single points of failure
  • Networking flexibility: InfiniBand, 10GbE / 40GbE

Benefits

  • 4:1 reduction in RU
  • Optimised IOPS and throughput
  • Future-proof, enduring design: Snap-in next-gen compute, networks
  • New levels of modular, hot-swappable serviceability

 

Isilon hardware design: storage

Features

  • From 72TB to 924TB in 4RU
  • 5 drive sleds per node. 3 to 6 drives per sled
  • Front aisle, hot swap sleds and drives
  • Media flexibility: Flash, SAS, and SATA media

The Isilon Family – mix of performance and capacity

 

Isilon OneFS Enhancements

  • Improved performance, driven by software enhancements, enables new workloads
  • Updated support with leading analytics vendors like Cloudera and Hortonworks
  • New cloud tiering options along with multi-cloud support
  • Ethernet back-end infrastructure support
  • Improved handling of small files for healthcare PACS
  • IsilonSD Edge software defined storage now supported on PowerEdge servers along with vSAN and VxRail
  • Data in flight encryption for improved security

 

Behind the Scenes

Relentless focus on quality & customer experience

  • Isilon engineering organised into CX, 8.X & Innovation acceleration
  • Corporate Dell-wide quality standards and KPIs
  • Tremendous improvement in support and support capabilities
  • Half of the code base in the field comes from releases shipped in the last 18 months, and that pace continues to accelerate

Results: explosive growth in 8.1 and Gen6 implementation

 

Isilon Pillars of Innovation

  • Flash
  • Archive – great integration with ECS
  • Cloud – CloudPools v1 is already out
  • Analytics – Nautilus, IoT – a lot sits on Isilon and ECS

 

Where we’re investing

Storage challenges

  • Rapid data growth
  • Data silos
  • Cost of infrastructure
  • Insufficient performance
  • Limited IT resources

 

Addressing data growth

Isilon Today

  • Scale-out architecture
  • Scales from 10s of TB to 50+ PB in a single cluster
  • 144 nodes maximum cluster size
  • Provision storage only as needed
  • Simplified management at PB scale
  • Automated tiering between storage tiers and cloud

Future

  • 250+ node cluster size and L/S
  • Cluster aware NDMP
  • Partitioned performance for insights
  • System services QoS
  • Rich filesystem analytics

 

Addressing Data silos

Isilon Today

  • Multi-protocol namespace ideal for infrastructure consolidation
  • Supports unified data lake
  • Enterprise grade features
  • Edge-to-core-to-cloud solution
  • In-place analytics support

Future

  • File and object integration – CloudPools v2 – snapshots and quotas will work
  • Increased support for emerging data analytics technologies including streaming analytics*

 

Addressing Infrastructure Cost

Isilon Today

  • Up to 80% storage efficiency
  • Optimised data placement and tiering
  • Flexible deduplication
  • Seamless cloud integration
  • Investment protection – no rip and replace

Future

  • Cloud co-location
  • Inline compression and dedupe for Flash
  • CloudPools v2 with tiering snaps and quotas
  • Writable snapshots

 

Addressing System Performance Needs

Isilon Today

  • OneFS optimised for Flash
  • New All-Flash configuration
  • 6x IOPS and 11x throughput
  • Optimised compute to drive ratios
  • Reduced latency / write optimisation

Future

  • Inline compression and dedupe for Flash
  • File create / delete improvements
  • Streaming improvements (multi-block reads, degraded reads)
  • Performance at scale (cluster, snaps, domains)
  • Large file size
  • Partitioned performance

 

Addressing Manageability Needs

Isilon Today

  • Simple to manage at any scale
  • Single file system
  • Policy based automation
  • Choice of management tools
  • API / web services “first” model

Future

  • On cluster and off cluster UI revolution – including next-generation cluster management technology
  • Cluster integrated authentication
  • Unchained delivery cadence and innovation posture
  • Predictive and prescriptive management

 

Addressing Data protection and security needs

Isilon Today

  • Enterprise data protection features
  • Tolerate up to 4 simultaneous failures (see the quick sketch after this list)
  • WORM, SEDs, RBAC, access zones, compliance
  • Improved failure domain with mirrored journal and boot drive
  • Improved handling of drive failure
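
Those two claims – tolerating up to 4 simultaneous failures, and the “up to 80% storage efficiency” from the cost slide earlier – line up if you think of protection in terms of N+M layouts, where M is the number of failures you can ride out. A quick back-of-the-envelope sketch (my own arithmetic, not something from the session):

    # Usable efficiency of an N+M protection layout, where M is the
    # number of simultaneous failures tolerated. Illustrative only -
    # OneFS chooses layouts per file, so real numbers will vary.
    def efficiency(n_data, m_protection):
        return n_data / (n_data + m_protection)

    for n, m in [(16, 4), (8, 2), (4, 2)]:
        print(f"{n}+{m}: {efficiency(n, m):.0%} usable")

    # 16+4: 80% usable  <- matches the 4-failure / 80% efficiency claims
    # 8+2:  80% usable
    # 4+2:  67% usable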

Future

  • Multi-factor auth for SSH
  • RBAC per AZ
  • Secure protocols optimisations
  • Enhanced Federal support

 

“Simple is Smart” with Isilon

Simplicity

  • Single volume, single file system architecture

Scalability

  • Expands easily from 10s of TB to 10s of PB without disruption

Efficiency

  • Up to 80% storage utilisation
  • Automated tiering and cloud integration

Data Analytics

  • In-place, high performance data analytics

Data Protection

  • Highly resilient architecture, data replication and snapshots

Security and compliance

  • WORM, data at rest encryption, role-based access control, and more

 

Top session. 5 stars.

Dell EMC’s Isilon All-Flash Is Starting To Make Sense

Disclaimer: I recently attended Storage Field Day 13.  My flights, accommodation and other expenses were paid for by Tech Field Day and Pure Storage. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

 

I’ve written about Dell EMC Isilon All-Flash before (here and here). You can see Dell EMC’s Storage Field Day presentation video here and you can grab a copy of my rough notes from here.

 

The Problem?

Dell EMC’s Isilon (and OneFS) has been around for a while now, and Dell EMC tell us it offers the following advantages over competing scale-out NAS offerings:

  • Single, scalable file system;
  • Fully symmetric, clustered architecture;
  • Truly multi-protocol data lake;
  • Transparent tiering with heterogeneous clusters; and
  • Non-disruptive platform and OneFS upgrades.

While this is most likely true, the world (and its workloads) is changing. To this end, Dell EMC have been working with Isilon customers to address some key industry challenges, including:

  • Electronic Design Automation – 7nm and 3D Chip designs;
  • Life Sciences – population-scale genomics;
  • Media and Entertainment – 4K Content and Distribution; and
  • Enterprise – big data and analytics.

 

The Solution?

To cope with the ever-increasing throughput requirements, Dell EMC have developed an all-flash offering for their Isilon range of NAS devices, along with some changes in their OneFS operating environment. The idea of the “F” series of devices is that you can “start small and scale”, with capacities ranging from 72TB to 924TB (raw) in 4RU. Dell EMC tell me you can go to over 33PB in a single file system. From a performance perspective, Dell EMC say that you can push 250K IOPS and 15GB/s of throughput in just 4RU, and scale to 9M IOPS. These are pretty high numbers, and pointless if your editing workstation is plugged into a 1Gbps switch. But that’s generally not the case nowadays.

One of the neater resilience features that Dell EMC discussed was that the file system layout is “sled-aware” (there are 5 drive sleds per node, so 20 sleds per 4RU chassis), meaning that a given file uses at most one drive per sled. This allows a sled to be removed for service without data unavailability, with the affected drives treated as temporarily offline.
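
To make the sled-aware idea concrete, here’s a toy placement sketch (my own illustration, not actual OneFS code). The rule is simply that a file’s stripe never lands on two drives in the same sled, so pulling any one sled takes at most one drive out from under each file:

    # Toy "sled-aware" placement: each node has 5 sleds, and a given
    # file's stripe uses at most one drive per sled. My own sketch,
    # not OneFS code - the drive choice here is arbitrary.
    SLEDS_PER_NODE = 5
    DRIVES_PER_SLED = 3  # Gen6 sleds hold 3 to 6 drives

    def place_stripe(file_id, width):
        """Pick (sled, drive) slots so no two land in the same sled."""
        assert width <= SLEDS_PER_NODE
        return [((file_id + i) % SLEDS_PER_NODE,       # sled index
                 (file_id * 7 + i) % DRIVES_PER_SLED)  # drive within sled
                for i in range(width)]

    stripe = place_stripe(file_id=42, width=4)
    sleds_used = [sled for sled, _ in stripe]
    assert len(set(sleds_used)) == len(sleds_used)  # one drive per sled
    print(stripe)  # [(2, 0), (3, 1), (4, 2), (0, 0)]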

 

Is All-Flash the Answer (Or Just Another Step?)

I’ve been fascinated with the storage requirements (and IT requirements in general) for media and entertainment workloads for some time. I have absolutely no real-world experience with these types of environments, and it would be silly for me to position myself as any kind of expert in the field. [I am, of course, happy for people working in M&E to get in touch with me and tell me all about what they do]. What I do have is a lot of information that tells me that the move from 2K to 4K (and 8K) is forcing people to rethink their requirements for high bandwidth storage in the ranges of capacities that studios are now starting to look at.

Whilst I was initially a little confused about the move to all-flash on the Isilon platform, the more I think about it, the more it makes sense. You’re always going to have a bunch of data hanging around that you might want to keep online for a long time, but it may not need to be retrieved at great speed (think “cheap and deep” storage). For this, it seems that the H (Hybrid) series of Isilon does the job, and does it well. But for workloads where large amounts of data need to be processed in a timely fashion, all-flash options are starting to make a lot more sense.

Is an all-flash offering the answer to everything? Probably not. Particularly not if you’re on a budget. And no matter how much money people have invested in the movie / TV show / whatever, I can guarantee that most of that is going to talent and content, not infrastructure. But there’s definitely a shift from spinning disk to Flash and this will continue as Flash media prices continue to fall. And then we’ll wonder how we ever did anything with those silly spinning disks. Until the next magic medium comes along. In the meantime, if you want to take OneFS for a spin, you can grab a copy of the version 8.1 simulator here. There’s also a very good Isilon overview document that I recommend you check out if that’s the kind of thing you’re into.

Dell EMC Announces Isilon Update (with cameo from ECS)

Disclaimer: I recently attended Dell EMC World 2017.  My flights, accommodation and conference pass were paid for by Dell EMC via the Dell EMC Elect program. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Isilon

The latest generation of Isilon (previewed at Dell EMC World in Austin) was announced today. It’s a modular, in-chassis, flexible platform capable of hosting a mix of all-flash, hybrid and archive nodes. The smaller nodes, with a single socket driving 15 or 20 drives (so they can granularly tune the socket:spindle ratio), come in a 4RU chassis. Each chassis can accommodate 4 “sub-nodes”. Dell EMC are claiming some big improvements over the previous generation of Isilon hardware, with 6x file IOPS, 11x throughput, and 2x the capacity. Density is up too, and you can have up to 80 drives in a single chassis (using SATA drives). Dell EMC note that it’s NVMe ready, but the CPU power to drive that isn’t there just yet. At launch it supports up to 144 nodes (in 36 chassis) and they’re aiming to get to 400 later in the year. Interestingly, there are now dual modes of back-end connectivity (InfiniBand and Ethernet) to accommodate this increased number of nodes.
From a compute perspective, you’ll see the following specs:

  • Intel Broadwell CPU (with optimised compute to drive ratios)
  • Up to 6TB cache per node (2 Flash cards)
  • No SPoF
  • Networking flexibility – InfiniBand, 10GbE/40GbE

Nodes can borrow power from neighbours if required too.

Dell EMC tell me this provides the following benefits:

  • 4:1 reduction in RU
  • Optimised IOPS and throughput
  • Future-proof, enduring design – snap in next-gen CPUs, networks
  • New levels of modular, hot-swappable serviceability

From a storage perspective, you’ll see a range of configurations:

  • From 72TB to 924TB in 4RU
  • 5 drive sleds per node. 3-6 drives per sled.
  • Front aisle, hot swap sleds and drives
  • Media flexibility: Flash, SAS and SATA media

Dell EMC tell me this provides the following benefits:

  • Start small and scale
  • Breakthrough density
  • Simplified serviceability and upgrades
  • Future-proof storage

Other Benefits?

Well, you get access to OneFS 8.1. You also get OPEX reductions by occupying a lot less space in the DC, along with the ability to host a greater diversity of workloads. Dell EMC are also claiming this release provides unmatched resilience, availability, and security.

Scale? They’ve got that too.

From a capacity standpoint, you can start as small as 72TB (in one chassis) and expand that to over 33PB in a single volume and file system. In terms of performance, Dell EMC are telling me they’re getting up to 250K IOPS and 15GB/s per chassis, which scales to 9M IOPS and 540GB/s of aggregate throughput. Your mileage might vary, of course.
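
Those figures hang together arithmetically, too. Multiplying the per-chassis numbers out to the 144-node (36-chassis) launch maximum gets you the quoted aggregates:

    # Back-of-the-envelope check on the launch numbers quoted above.
    chassis = 144 // 4            # 144 nodes, 4 nodes per chassis = 36
    print(chassis * 250_000)      # 9,000,000 -> the "9M IOPS" figure
    print(chassis * 15)           # 540       -> the "540GB/s" figure
    print(chassis * 924)          # 33,264TB  -> the "over 33PB" figure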

 

Speeds and Feeds

So what do the new models look like? You can guess, but I’ll say it anyway. F nodes are all flash, H nodes are hybrid, and A nodes are archive nodes.

F800 (All Flash)

  • 1.6TB, 3.2TB and 15.4TB Flash
  • 60 drives, up to 924TB per chassis
  • 250K IOPS per chassis
  • 15GB/s throughput

H600 (Hybrid)

  • 600GB and 1.2TB SAS drives
  • 120 drives and up to 144TB per chassis
  • 117K IOPS per chassis

H500, H400 (Hybrid)

  • 2/4/8TB SATA drives
  • 60 drives per chassis
  • Up to 480TB per chassis

A200 (Archive)

  • 2/4/8TB SATA drives
  • 60 drives per chassis
  • Up to 480TB per chassis

A2000 (Archive)

  • 10TB SATA drives
  • 80 drives per chassis
  • 800TB per chassis

 

“No node left behind”

One of the great things about Isilon is that you can seamlessly add “Next Gen” nodes to existing clusters. You’ve been able to do this with Isilon clusters for a very long time, obviously, and it’s nice to see Dell EMC maintain that capability. The benefits of this approach are that you can:

  • Beef up your existing Isilon clusters with Isilon all flash nodes; and
  • Consolidate your DC footprint by retiring older nodes.

 

OneFS

OneFS has always been pretty cool and it’s now “optimised [for the] performance benefits of flash – without compromising enterprise features”. According to Dell EMC, flash wear is “yesterday’s problem”: the F800 can sustain writes exceeding its total capacity every day for over 5 years before approaching endurance limits. OneFS is now designed to go “From Edge to Core to Cloud” with IsilonSD Edge, the Next Generation Core, and Cloud (with CloudPools tiering to AWS, Azure, and Virtustream).
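
Taking that endurance claim at face value – at least one full capacity written every day for 5 years – the implied write budget for a 924TB chassis is enormous. My arithmetic, not Dell EMC’s:

    # Rough translation of the endurance claim: >= 1x capacity written
    # per day, every day, for 5+ years. For a full 924TB chassis:
    capacity_tb = 924
    tbw = capacity_tb * 365 * 5   # total TB written over 5 years
    print(f"{tbw:,} TB (~{tbw / 1_000_000:.1f}EB) before approaching limits")
    # 1,686,300 TB, i.e. roughly 1.7 exabytes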

 

IsilonSD Edge

IsilonSD Edge has some new and improved features now too:

  • VMware ESXi Hypervisor
  • Full vCenter integration
  • Scale up to 36TB
  • Single server deployment
  • Back-end SAN: ScaleIO, vSAN and VxRail
  • Dell PowerEdge 14G Support

 

ECS

Dell EMC also talked about their vision for ECS.Next, coming in the next year.

  • Data streaming
  • Enterprise Hardening
  • Certifications
  • Compliance
  • Economics
  • Hybrid Cloud

Big bets?

  • Hybrid Cloud
  • Batch and real-time analytics and stream processing
  • Massive scale @ low cost with new enterprise capabilities

 

Hybrid Cloud

Dell EMC are launching an ECS Dedicated Cloud (ECS DC) Service. This is on-demand ECS storage, managed by Dell EMC and running on dedicated, single-tenant servers hosted in a Virtustream DC. It’s available in hybrid and fully hosted multi-site configurations.

So what’s in the box?

You get some dedicated infrastructure:

  • Customer owned ECS rack
  • Dedicated network / firewall / load balancer

You also get 24×7 support of hosted sites from a professional DevOps team:

  • Strong expertise in operating ECS
  • Proactive monitoring and fast response

As well as broad geo coverage:

  • 5 DCs available across US (Las Vegas, Virginia) and Europe (France, London, Netherlands)
  • Coming to APJ by end of 2017

It will run on a subscription model, with a 1 year or 3 year contract available.

 

Project Nautilus

The team also took us through “Project Nautilus”, a batch and real-time analytics and stream processing solution.

Streaming storage and analytics engine

  • Scale to manage 1000s of high-volume IoT data sources
  • Eliminate real-time and batch analytics silos
  • Tier inactive data seamlessly and cost effectively

I hope to cover more on this later. They’re also working on certifications in terms of Hadoop and Centera migrations too (!). I’m definitely interested in the Centera story.

 

Conclusion

I’ve been a fan of Isilon for some time. It does what it promises on the tin and does it well. The Nitro announcement last year left a few of us scratching our heads (myself included), but I’m on board with a number of the benefits from adopting this approach. Some people are just going to want to consume things in a certain way (VMAX AF is a good example of this), and Dell EMC have been pretty good at glomming onto those market opportunities. And, of course, in much the same way as we’re no longer running SCSI disks everywhere, Flash does seem to be the medium of the future. I’m looking forward to seeing ECS progress as well, given the large numbers of scale-out, object-based storage solutions on the market today. If you’d like to read more about the new Isilon platform, head over to Dell EMC’s blog to check it out.

 

Dell EMC Announces Isilon All-Flash

You get a flash, you get a flash, you all get a flash

Last week at Dell EMC World it was announced that the Isilon All-Flash NAS (formerly “Project Nitro”) offering was available for pre-order (with GA in early 2017). You can check out the specs here, but basically each chassis is comprised of 4 nodes in 4RU. Dell EMC says this provides “[e]xtreme density, modular and incredibly scalable all-flash tier” with the ability to have up to 100 systems with 400 nodes, storing 92.4PB of capacity, 25M IOPS and up to 1.5TB/s of total aggregate bandwidth – all within a single file system and single volume. All OneFS features are supported, and a OneFS update will be required to add these nodes to existing clusters.

isilon_all-flash_001

[image via Dell EMC]

 

Why?

Dell EMC are saying this solution provides 6x greater IOPS per RU over existing Isilon nodes. It also helps in areas where Isilon hasn’t been as competitive, providing:

  • High throughput for large datasets of large files for parallel processing;
  • IOPS-intensive work on billions of small files;
  • Predictable latency and performance for mixed workloads; and
  • Improved cost of ownership, with higher density flash providing some relief in terms of infrastructure and energy efficiency.

 

Use Cases?

Dell EMC covered the usual suspects – but with greater performance:

  • Media and entertainment;
  • Life sciences;
  • Geoscience;
  • IoT; and
  • High Performance Computing.

 

Thoughts and Further Reading

If you followed along with the announcements from Dell EMC last week you would have noticed that there have been some incremental improvements in the current storage portfolio, but no drastic changes. While it might make for an exciting article when Dell EMC decide to kill off a product, these changes make a lot more sense (FluidFS for XtremIO, enhanced support for Compellent, and the addition of a PowerEdge offering for VxRail). The addition of an all-flash offering for Isilon has been in the works for some time, and gives the platform a little extra boost in areas where it may have previously struggled. I’ve been a fan of the Isilon platform since I first heard about it, and while I don’t have details of pricing, if you’re already an Isilon shop the all-flash offering should make for interesting news.

Vipin V.K did a great write-up on the announcement that you can read here. The press release from Dell EMC can be found here. There’s also a decent overview from ESG here. Along with the above links to El Reg, there’s a nice article on Nitro here.

Random Short Take #2

I did one of these 7 years ago – so I guess I never really got into the habit – but here are a few things that I’ve noticed and thought you might be interested in:

And that’s about it, thanks for reading.


EMC – Isilon – Joining Active Directory

I’ve been doing some work in the EMC vLabs and I thought I’d take note of how to join an Isilon cluster to Active Directory. The cluster in this example is running 3 Isilon virtual nodes with OneFS 7.1.0.0.

Once you’ve logged in, click on Cluster Management, then Access Management. Under Access Management, click on Active Directory. Note that there are no Active Directory providers configured in this example.

Isilon_AD1

Click on “Join a domain”. You can then specify the domain name and join credentials, along with the OU and machine account if required.

Isilon_AD2

Once you click Join, the cluster will be joined to the domain.

Isilon_AD3

To confirm this, you can run isi auth status from the CLI.

Isilon_AD4

And that’s it. As always, I recommend you use a directory service of some type on all of your devices for authentication.
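
If you look after more than one cluster, the same check is easy to script. Here’s a minimal sketch that just runs the isi auth status command shown above over SSH (it assumes key-based SSH access to a node on each cluster, and the hostnames are made up):

    # Run "isi auth status" against a list of clusters over SSH.
    # Assumes key-based SSH access; hostnames are placeholders.
    import subprocess

    clusters = ["isilon-01.example.com", "isilon-02.example.com"]

    for host in clusters:
        result = subprocess.run(
            ["ssh", f"root@{host}", "isi auth status"],
            capture_output=True, text=True, timeout=30,
        )
        print(f"--- {host} ---")
        print(result.stdout or result.stderr)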

EMC – VSI for VMware vSphere 6.5 Linked Mode Issue – Redux

I wrote in a previous post about having some problems with EMC’s VSI for VMware vSphere 6.5 when running in vCenter 5.5 in Linked Mode. I spoke about deploying the appliance in just one site as a workaround. Turns out that wasn’t much of a workaround. Because workaround implies that I was able to get some functionality out of the situation. While the appliance deployed okay, I couldn’t get it to recognise the deployed volumes as EMC volumes.

 

A colleague of mine had the same problem as me and a little more patience and logged a call with EMC support. Their response was “[c]urrent VSI version does not support for Linked mode, good news is recently we have several customers requesting that enhancement and Dev team are in the process of evaluating impact to their future delivery schedule. So, the linked mode may will be supported in the future. Thanks.”

 


While this strikes me as non-optimal, I am hopeful, but not optimistic, that it will be fixed in a later version. My concern is that Linked Mode isn’t the problem at all, and it’s something else stupid that I’m doing. But I’m short of places I can test this at the moment. If I come across a site where we’re not using Linked Mode, I’ll be sure to fire up the appliance and run it through its paces, but for now it’s back in the box.

EMC – VSI for VMware vSphere 6.5 Linked Mode Issue

As part of a recent deployment I’ve been implementing EMC VSI for VMware vSphere Web Client v6.5 in a vSphere 5.5 environment. If you’re not familiar with this product, it “enables administrators to view, manage, and optimize storage for VMware ESX/ESXi servers and hosts and then map that storage to the hosts.” It covers a bunch of EMC products, and can be really useful in understanding where your VMs sit in relation to your EMC storage environment. It also really helps non-storage admins get going quickly in an EMC environment.

To get up and running, you:

  • Download the appliance from EMC;
  • Deploy the appliance into your environment;
  • Register the plug-in with vCenter by going to https://ApplianceIP:8443/vsi_usm/admin;
  • Register the Solutions Integration Service in the vCenter Web Client; and
  • Start adding arrays as required.

So this is all pretty straightforward. BTW the default username is admin, and the default password is ChangeMe. You’ll be prompted to change the password the first time you log in to the appliance.
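
One small thing I find handy before registering the plug-in is confirming the Solutions Integration Service admin endpoint is actually reachable. A hedged sketch (the IP is made up, and verify=False assumes the appliance is still presenting an out-of-the-box self-signed certificate – an assumption on my part):

    # Quick reachability probe of the SIS admin page before you try
    # to register it in vCenter. The IP is a placeholder; verify=False
    # assumes a self-signed certificate (swap in proper verification
    # if you've installed a real cert).
    import requests

    appliance_ip = "192.168.1.50"  # placeholder - use your appliance IP
    url = f"https://{appliance_ip}:8443/vsi_usm/admin"

    resp = requests.get(url, verify=False, timeout=10)
    print(resp.status_code, url)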

 

So the problem for me arose when I went to register a second SIS appliance.

VSI1

By way of background, there are two vCenter 5.5 U2 instances running at two different data centres. I do, however, have them running in Linked Mode. And I think this is the problem. I know that you can only register one instance at a time with one vCenter. While it’s not an issue to deploy a second appliance at the second DC, every time I go to register the service in vCenter, regardless of where I’m logged in, it always points to the first vCenter instance. Which is a bit of a PITA, and not something I’d expected to be a problem. As a workaround, I’ve deployed one instance of the appliance at the primary DC and added both arrays to it to get the client up and running. And yes, I agree, if I have a site down I’m probably not going to be super focused on storage provisioning activities at my secondary DC. But I do enjoy whinging about things when they don’t work the way I expected them in the first instance.

 

I’d read that Linked Mode wasn’t supported in previous versions, but figured this was no longer an issue as it’s not mentioned in the 6.5 Product Guide. This thread on ECN seems to back up what I suspect. I’d be keen to hear if other people have run into this issue.

 

EMC announces Isilon enhancements

I sat in on a recent EMC briefing regarding some Isilon enhancements and I thought my three loyal readers might like to read through my notes. As I’ve stated before, I am literally one of the worst tech journalists on the internet, so if you’re after insight and deep analysis, you’re probably better off looking elsewhere. Let’s focus on skimming the surface instead, yeah? As always, if you want to know further about these announcements, the best place to start would be your local EMC account team.

Firstly, EMC have improved what I like to call the “Protocol Spider”, with support for the following new protocols:

  • SMB 3.0
  • HDFS 2.3*
  • OpenStack SWIFT*

* Note that these will be available by the end of the year.

Here’s a picture that says pretty much the same thing as the words above.

isilon_protocols


In addition to the OneFS updates, two new hardware models have also been announced.

S210


 

  • Up to 13.8TB globally coherent cache in a single cluster (96GB RAM per node);
  • Dual Quad-Core Intel 2.4GHz Westmere Processors;
  • 24 * 2.5” 300GB or 600GB 10Krpm Serial Attached SCSI (SAS) 6Gb/s Drives; and
  • 10GbE (Copper & Fiber) Front-end Networking Interface.

 

Out with the old and in with the new.

S200vsS210_cropped

X410


 

  • Up to 6.9TB globally coherent cache in a single cluster (48GB RAM per node);
  • Quad-Core Intel Nehalem E5504 Processor;
  • 12 * 3.5” 500GB, 1TB, 2TB, 3TB 7.2Krpm Serial ATA (SATA) Drives; and
  • 10GbE (Copper & Fiber) Front-end Networking Interface.

Some of the key features include:

  • 50% more DRAM in baseline configuration than current 2U X-series platform;
  • Configurable memory (6GB to 48GB) per node to suit specific application & workflow needs;
  • 3x increase in density per RU thus lowering power, cooling and footprint expenses;
  • Enterprise SSD support for latency sensitive namespace acceleration or file storage apps; and
  • Redesigned chassis that delivers superior cooling and vibration control.

 

Here’s a picture that does a mighty job of comparing the new model to the old one.

X400vsX410_cropped
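
As a sanity check, both of those “globally coherent cache” figures are just the per-node RAM multiplied out to the 144-node maximum cluster size:

    # Cluster-wide cache = per-node RAM x 144-node maximum.
    max_nodes = 144
    print(f"S210: {max_nodes * 96 / 1000:.1f}TB")  # 13.8TB at 96GB/node
    print(f"X410: {max_nodes * 48 / 1000:.1f}TB")  # 6.9TB at 48GB/node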

 

Isilon SmartFlash

EMC also announced SmartFlash for Isilon, which uses SSDs to extend the DRAM-based cache. The upshot is that you can have up to 1PB of flash cache versus 37TB of DRAM. It’s also globally coherent, unlike some of my tweets.

Here’s a picture.

Isilon_SmartFlash
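
The DRAM-plus-SSD idea is easy to picture as a two-tier cache, where DRAM evictions get demoted to flash rather than dropped, and flash hits get promoted back. Here’s a toy sketch – purely conceptual, and definitely not how OneFS implements it:

    # Toy two-tier (DRAM + flash) cache to illustrate the SmartFlash
    # idea. Purely conceptual - not OneFS code.
    from collections import OrderedDict

    class TwoTierCache:
        def __init__(self, dram_slots, flash_slots):
            self.dram = OrderedDict()   # small, fast tier
            self.flash = OrderedDict()  # large, slower tier
            self.dram_slots, self.flash_slots = dram_slots, flash_slots

        def get(self, key):
            if key in self.dram:
                self.dram.move_to_end(key)      # LRU refresh
                return self.dram[key]
            if key in self.flash:               # promote on a flash hit
                self.put(key, self.flash.pop(key))
                return self.dram[key]
            return None                         # miss - go to disk

        def put(self, key, value):
            self.dram[key] = value
            self.dram.move_to_end(key)
            if len(self.dram) > self.dram_slots:
                # demote the coldest DRAM entry to flash, don't drop it
                old_key, old_val = self.dram.popitem(last=False)
                self.flash[old_key] = old_val
                if len(self.flash) > self.flash_slots:
                    self.flash.popitem(last=False)

    cache = TwoTierCache(dram_slots=2, flash_slots=8)  # toy sizes
    for block in ["a", "b", "c"]:
        cache.put(block, block.upper())
    print(cache.get("a"))  # "A" - served via the flash tier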