Dell EMC Announces Midrange Storage Line Enhancements

Disclaimer: I recently attended Dell EMC World 2017.  My flights, accommodation and conference pass were paid for by Dell EMC via the Dell EMC Elect program. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Midrange Overview

Dell EMC today announced a number of new midrange storage models and enhancements. According to Dell EMC, the midrange is still a big market, and they estimate 7% growth over the next five years (I may have misheard though). As such, they’re positioning the new midrange family, comprising the Unity and SC (Compellent) platforms. The goal is to provide common tools for management, mobility and protection (namely PowerPath, ViPR, VPLEX, RecoverPoint, Connectrix, and Data Domain).


So where does each platform fit? Glad you asked. Dell EMC are positioning the two sides of the family as follows:


Unity

  • All Flash – simple, flash density, inline efficiency, consistent response time, cloud tier
  • Unified – unified file and block, app, software defined or converged, data in place upgrades

SC Series

  • Hybrid – granular tiering, 0-100% Flash, efficiency for hybrid
  • Best economics – intelligent compression and dedupe, persistent software licenses

It’s obviously not always going to be that cut and dried, but it’s a start.



So, what’s new with Unity? There’ll be new “F” models available from Q2 2017. There’ll also be new code released to support the new line. This will be installable on previous-generation Unity models as well. Note that, according to Dell EMC, Unity hybrid isn’t going anywhere.


Speeds and Feeds

            Unity 350F        Unity 450F        Unity 550F        Unity 650F
Processor   6-core / 1.7GHz   10-core / 2.2GHz  14-core / 2.4GHz  14-core / 2.4GHz
Memory      96GB              128GB             256GB             512GB
Capacity    150 drives        250 drives        500 drives        1000 drives
            2.4PB             4PB               8PB               16PB
Volumes     1000 @ 256TB      1500 @ 256TB      2000 @ 256TB      6000 @ 256TB
            9000 @ 64TB       9000 @ 64TB       13500 @ 64TB      30000 @ 64TB
            1000 @ 256TB      1500 @ 256TB      2000 @ 256TB      4000 @ 256TB
Snapshots   8000              14000             20000             30000
            256 per volume    256 per volume    256 per volume    256 per volume



Do you hate Java? I do. As do most people who had to use the Java-based Unisphere. With Unity, Dell EMC have provided a more modern, user-friendly approach to array management.

  • HTML5-based Unisphere
  • CloudIQ
  • Unified CLI and REST API


Architected for All Flash

Dell EMC tell me the Unity array is “architected for all flash”. It certainly has a lot of the features you’d expect from an all flash array, including:

  • 3D TLC NAND flash drives for all IO types;
  • Multi-core optimized for best CPU utilisation and low latency;
  • Automatic flash wear balance;
  • Zero impact drive firmware based garbage collection;
  • Per object in-memory log for consistent low response time;
  • Write coalescing with full stripe writes to minimise IO;
  • Inline compression; and
  • Mix different flash drive types and capacities for lowest cost.



If you’ve been tracking the Unity you may have noticed the continuous introduction of support for larger drives. With the introduction of the “Dense Shelf”, you’re now looking at 500TB of capacity per RU. That, as they say, is a lot of capacity.

  • Q2 2016 – 3.2TB drives (32TB usable per RU)
  • Q3 2016 – 7.6TB drives (76TB usable per RU) and 3.84TB drives (38TB usable per RU)
  • Q4 2016 – inline compression (300TB effective per RU) and 15.4TB drives (152TB usable per RU)
  • Q2 2017 – dense shelf: 500TB effective per RU, with 80 drives in a 3RU form factor
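
As a sanity check on that dense shelf number, the raw maths is straightforward. This is a back-of-envelope sketch only: I’m assuming the 15.4TB drive quoted above is the usual 15.36TB SSD, and I’m ignoring RAID overhead.

```python
# Back-of-envelope density maths for the Q2 2017 dense shelf.
# Assumes 15.36TB drives (rounded to 15.4TB in the list above);
# RAID and hot spare overhead are ignored.
drives_per_shelf = 80
shelf_ru = 3
drive_tb = 15.36

raw_tb_per_ru = drives_per_shelf * drive_tb / shelf_ru
print(f"{raw_tb_per_ru:.1f} TB raw per RU")
```

On those assumptions you land at roughly 410TB raw per RU, so the quoted 500TB “effective” per RU implies a modest compression gain on top of raw capacity.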


Dynamic Pools
Unlike standard pools, dynamic pools let you add single drives (spare capacity is distributed across the pool, which improves rebuild times). I’ll be digging into this feature a bit more in the future (hopefully).


File System

The u64 file system was introduced with the Unity and has had a bit of an uplift in terms of capacity. It now scales to 256TB usable capacity per file system with 10M+ sub-directories and files. The cool thing is it also supports inline compression on the file system using pointer-based snaps with simple space reclaim and low IO impact. There’s also a cloud archiving and tiering capability. This provides policy-based transparent archival of files to public or private cloud (Virtustream by preference, but I believe there’s also support for Azure and AWS).


Snapshot Mobility

As of Q2 you’ll have the ability to move snapshots from array to array (local to remote to cloud).


Thin clones

  • Deduplicate / shared data set
  • Independent LUNs
  • Independent snap / replication schedules
  • Fast create / populate and restore


Dell EMC are keen as beans for you to have a good experience getting stuff onto your shiny new Unity array. As such they offer a built-in, integrated migration tool (that you run from Unisphere). It:

  • Supports FC, iSCSI, NFS (2H 2016) and SMB (H1 2017) migration from VNX;
  • Migrates LUNs, file systems, quotas, ACLs and exports; and is
  • Transparent to file applications and minimally disruptive for block.

Existing Unity customers will also be able to do data in place (DIP) upgrades online (from 2H 2017).


SC Series

Speeds and Feeds

I haven’t kept up with the SC line in recent years, so I found this table handy. You might too.

            SCv20X0           SC 5020           SC 7020             SC 9000
Processor   4-core / 3.6GHz   8-core / 2.4GHz   2x 8-core / 2.5GHz  2x 8-core / 3.2GHz
Memory      16GB              128GB             256GB               512GB
Capacity    168 drives        222 drives        500 drives          1024 drives
            672TB             2PB               3PB                 4PB
Volumes     1000 LUNs/VVols   2000 LUNs/VVols   2000 LUNs/VVols     2000 LUNs/VVols
            500TB per volume  500TB per volume  500TB per volume    500TB per volume
Snapshots   2000              4096              16384               32000

Note that a DIP upgrade from the SC 4020 can also get you to the SC 5020.


Flexible Configuration

Dell EMC are positioning the SC line of arrays as a flexible approach to configuration, offering a range of performance, pricing and configuration options.

  • All flash, some flash or no flash
  • Start with one configuration and convert to another
  • Designed to fit any workload and budget


Drive Efficiency

  • Activate on the lowest tier of media
  • Easy on/off, selectable by volume
  • Data efficiency works in the background on “inactive” data
  • Post-process operation ensures no impact to active data IO after data has been moved from the active to the inactive tier
  • Best for (hybrid) environments that do not require 24x7x365 consistent response time


Intelligent Compression and Deduplication

The SC range has always been about the efficient storage of data. Dell EMC think they’re onto a good thing with intelligent deduplication and compression. I’m keen to see it working for myself before I get too excited.

  • Directs all incoming writes to dynamically partitioned “write” space (R10) on Tier 1 drives
  • Moves inactive data from “write” space to space efficient space (R5/6) on same or other tiers
  • Post-process operation compresses / dedupes inactive data


Investment Protection

Dell EMC don’t want you to feel like you can’t have your cake and eat it too. You may have been an EMC customer before Dell acquired them. Or maybe you’ve dabbled with EqualLogic arrays. That’s okay, you get a certain level of “Investment protection” via cross platform replication.

  • SC Series <-> PS Series via Replication
  • SC Series <-> VMAX, XtremIO, Unity via RecoverPoint VM Replication



Time to throw out your MD devices? Never fear, there’s a built-in migration path from “Legacy”.

  • Migrates LUNs (no snaps) from PS and MD (2H 2017) series to SC Series
  • Built-in solution: self-service without requiring any third-party tool
  • Offers both offline and online thin import depending on use case
  • Online minimally disruptive: requires unmount and mount



Dell EMC offer both centralised and web-based management for the SC series.

HTML5-based Unisphere for SC (that’s right!)

  • Compatible with most modern browsers
  • Unisphere style modern look and feel
  • No separate download or install required


CloudIQ

  • Central monitoring and reporting for midrange
  • Cloud-based
  • Support for planning and optimisation

Dell Storage Manager

  • Central management and monitoring of SC and PS arrays
  • Advanced features
  • Supports up to 10 arrays



In terms of “family”, this announcement positions the midrange offering from Dell EMC as more Brady Bunch than Manson family. This is a good thing in my opinion. I’ve seen firsthand some of the opposition put up by EMC or Dell customers prior to the merger, and other vendors have certainly been licking their chops hoping the whole thing would prove too hard and Dell EMC would lose their way. Whilst it would be overly optimistic (and naïve) to expect them to consolidate the midrange platform to one line of arrays in such a short amount of time, the Unity and SC lines cover all the bases and show signs of further streamlining to come.

I cut my teeth (figuratively) on an old CLARiiON FC4700 and have watched the progression over the years of the EMC midrange offering. Similarly I have plenty of customers who’ve helped themselves to PS, SC and MD arrays. It’s nice to see all this cool tech coming together. While midrange isn’t anywhere near as sexy as massively scalable object storage, it performs an important function for a wide range of businesses small and large and shouldn’t be ignored. As with other product announcements I cover here, if you have particular queries about the products I recommend you engage with your local Dell EMC team in the first instance. The new Dell EMC Unity All-Flash models will be orderable this month and available in July. The SC5020 is orderable this month and will be generally available in June. If you want it from the horse’s mouth, you can read blog posts from Dell EMC covering the announcements around Unity here and SC series here.

Dell Compellent – ESXi HBA Queue Length

This is a quick post that is more for my reference than for anything else. When I had the Compellent installed, Dell passed on a copy of the “Dell Compellent Storage Center Best Practices with vSphere 5.x” document (Dell P/N 680-041-020). One of the interesting points I noted was around modifying the queue depth, and where that should be done. As with any best practice document, there are going to be factors that may influence the outcomes of these activities in a positive or negative fashion. In other words, YMMV, but I found it useful. As always, test it before launching into production.

Firstly, set the HBA queue depth to 255 via the HBA BIOS. The thinking here is that the VMkernel driver module ultimately controls the HBA’s queue depth. Now, set the queue depth on the driver module. I use QLogic HBAs in my environment.

To find the correct driver name for the loaded module, run the following command.

esxcli system module list | grep qla

The output should be something like qla2xxx

Now run the following command.

esxcli system module parameters set -m qla2xxx -p "ql2xmaxqdepth=255 ql2xloginretrycount=60 qlport_down_retry=60"
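
Note that module parameter changes only take effect after the host is rebooted, since they’re read at driver load time. Once it’s back up, you can confirm the values stuck with a quick (unofficial) check like this:

```shell
# Reboot the host first; qla2xxx module parameters are read at driver load.
# Then confirm the options took effect:
esxcli system module parameters list -m qla2xxx | grep -E 'ql2xmaxqdepth|ql2xloginretrycount|qlport_down_retry'
```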

Note that you can also set this via Disk.SchedNumReqOutstanding (DSNRO), where the default value is 32. Keep in mind that this setting is only enforced when more than one VM is active on the datastore. It’s also a global setting, so if you’ve set the DSNRO value to 64, for example, and you have two datastores in place, one with 4 VMs and one with 6 VMs, each VM will get 64 as its queue depth value. VMware recommend that this value be set to the same as the VMkernel module driver value.
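
On ESXi 5.x you can check and set the global DSNRO value from the CLI as well as the client. A rough sketch (note that later ESXi releases moved this to a per-device setting, so your mileage may vary):

```shell
# Check the current global DSNRO value (default is 32)
esxcfg-advcfg -g /Disk/SchedNumReqOutstanding

# Set it to match the VMkernel module driver queue depth from above
esxcfg-advcfg -s 255 /Disk/SchedNumReqOutstanding
```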

You can also modify the queue depth in the Windows guest OS by modifying the registry settings of the OS.

In any case, go check out the document. It’s one of the more useful white papers I’ve seen from a vendor in some time.

Dell Compellent – Storage provisioning with CompCU.jar

I covered getting started with the CompCU.jar tool here. This post is a quick one that covers provisioning storage on the Compellent and then presenting it to hosts. In this example, I create a 400GB volume named Test_Volume1 and place it in the iLAB_Gold2 folder.

E:\CU060301_002A>java -jar CompCU.jar -defaultname saved_default.cli -c "volume create -name "Test_Volume1" -size 400g -folder iLAB_Gold2"
Compellent Command Utility (CompCU)
User Name: Admin
Host/IP Address:
Single Command: volume create -name Test_Volume1 -size 400g -folder iLAB_Gold2
Connecting to Storage Center: with user: Admin
java.lang.IllegalStateException: TrustManagerFactoryImpl is not initialized
Running Command: volume create -name Test_Volume1 -size 400g -folder iLAB_Gold2
Creating Volume using StorageType 1: storagetype='Assigned-Redundant-4096', redundancy=Redundant, pagesize=4096, diskfolder=Assigned.
Successfully created Volume 'Test_Volume1'
Successfully finished running Compellent Command Utility (CompCU) application.

Here’s what it looks like now.


Notice that Test_Volume1 has been created but is inactive – it needs to be mapped to a server before it can be brought online.

E:\CU060301_002A>java -jar CompCU.jar -defaultname saved_default.cli -c "volume map -name 'Test_Volume1' -server 'iLAB_Gold2'"
Compellent Command Utility (CompCU)
User Name: Admin
Host/IP Address:
Single Command: volume map -name 'Test_Volume1' -server 'iLAB_Gold2'
Connecting to Storage Center: with user: Admin
java.lang.IllegalStateException: TrustManagerFactoryImpl is not initialized
Running Command: volume map -name 'Test_Volume1' -server 'iLAB_Gold2'
Successfully mapped Volume 'Test_Volume1' to Server 'iLAB_Gold2'
Successfully finished running Compellent Command Utility (CompCU) application.

Wouldn’t it make more sense to create and map the volume at the same time? Yes, yes it would. Here’s another example where I present the volume to a folder of servers.

E:\CU060301_002A>java -jar CompCU.jar -defaultname saved_default.cli -c "volume create -name "Test_Volume2" -size 400g -folder iLAB_Gold2 -server iLAB_Gold2"
Compellent Command Utility (CompCU)
User Name: Admin
Host/IP Address:
Single Command: volume create -name Test_Volume2 -size 400g -folder iLAB_Gold2 -server iLAB_Gold2
Connecting to Storage Center: with user: Admin
java.lang.IllegalStateException: TrustManagerFactoryImpl is not initialized
Running Command: volume create -name Test_Volume2 -size 400g -folder iLAB_Gold2 -server iLAB_Gold2
Creating Volume using StorageType 1: storagetype='Assigned-Redundant-4096', redundancy=Redundant, pagesize=4096, diskfolder=Assigned.
Successfully mapped Volume 'Test_Volume2' to Server 'iLAB_Gold2'
Successfully created Volume 'Test_Volume2', mapped it to Server 'iLAB_Gold2' on Controller 'SN 22641'
Successfully finished running Compellent Command Utility (CompCU) application.

Note that these commands don’t specify replays. If you want replays configured you should use the -replayprofile option or manually create replays with the replay create command.
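
As a sketch of what that might look like – bearing in mind that the profile name “Daily” and the replay create flags shown here are my assumptions, so check CompCU’s built-in help for the exact syntax on your Storage Center release:

```shell
# Create and map a volume with a replay profile attached
# ('Daily' is an example profile name, not a default)
java -jar CompCU.jar -defaultname saved_default.cli -c "volume create -name 'Test_Volume3' -size 400g -folder iLAB_Gold2 -server iLAB_Gold2 -replayprofile 'Daily'"

# Or take a manual replay of an existing volume
# (-volume / -expire flags assumed; verify against your CompCU version)
java -jar CompCU.jar -defaultname saved_default.cli -c "replay create -volume 'Test_Volume1' -expire 1440"
```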

Dell Compellent – Getting started with CompCU.jar

CompCU.jar is the Compellent Command Utility. You can download it from Compellent’s support site (registration required). This is a basic article that demonstrates how to get started.

The first thing you’ll want to do is create an authentication file that you can re-use, similar to what you do with EMC’s naviseccli tool. The file I specify is saved in the directory I’m working from, and the Storage Center IP is the cluster IP, not the IP address of the controllers.

E:\CU060301_002A>java -jar CompCU.jar -default -defaultname saved_default -host StorageCenterIP -user Admin -password SCPassword

Now you can run commands without having to input credentials each time. I like to output to a text file, although you’ll notice that CompCU also dumps output on the console at the same time. The “system show” command provides a brief summary of the system configuration.

E:\CU060301_002A>java -jar CompCU.jar -defaultname saved_default.cli -c "system show -txt 'outputfile.txt'"
Compellent Command Utility (CompCU)
User Name: Admin
Host/IP Address:
Single Command: system show -txt 'outputfile.txt'
Connecting to Storage Center: with user: Admin
java.lang.IllegalStateException: TrustManagerFactoryImpl is not initialized
Running Command: system show -txt 'outputfile.txt'
SerialNumber Name ManagementIP Version OperationMode PortsBalanced MailServer BackupMailServer
----------------- -------------------------------- ---------------- ---------------- -------------- -------------- -------------------- --------------------
22640 Compellent1 Normal Yes
Save to Text (txt) File: outputfile.txt
Successfully finished running Compellent Command Utility (CompCU) application.

Notice I get Java errors every time I run these commands. I think that’s related to an expired certificate, but I need to research that further. Another useful command is “storagetype show”. Here’s one I prepared earlier.

E:\CU060301_002A>java -jar CompCU.jar -defaultname saved_default.cli -c "storagetype show -txt 'storagetype.txt'"
Compellent Command Utility (CompCU)
User Name: Admin
Host/IP Address:
Single Command: storagetype show -txt 'storagetype.txt'
Connecting to Storage Center: with user: Admin
java.lang.IllegalStateException: TrustManagerFactoryImpl is not initialized
Running Command: storagetype show -txt 'storagetype.txt'
Index Name DiskFolder Redundancy PageSize PageSizeBlocks SpaceUsed SpaceUsedBlocks SpaceAllocated SpaceAllocatedBlocks
------ -------------------------------- -------------------- -------------------- ---------- --------------- -------------------- -------------------- -------------------- --------------------
1 Assigned-Redundant-4096 Assigned Redundant 2.00 MB 4096 1022.51 GB 2144350208 19.67 TB 42232291328
Save to Text (txt) File: storagetype.txt
Successfully finished running Compellent Command Utility (CompCU) application.

There’s a bunch of useful things you can do with CompCU, particularly when it comes to creating volumes and allocating them to hosts, for example. I’ll cover these in the next little while. In the meantime, I hope this was a useful introduction to CompCU.

Dell Compellent – Preallocating storage

In my first post on the Dell Compellent, I’d mentioned that it was possible to preallocate storage on the array, even though it was thin by default. I can’t really think of a reason why you’d want to do this on this particular array, but here’s how to do it.

Firstly, create the volume but don’t map it to anything. Then wait a little while. I can’t remember how long, but it’s a little while. Then right-click on the volume and the option to “Preallocate Storage” will be there.


You will be warned that you’ve basically turned your back on years of hard work by the Compellent developers. Think about that.


When it says it will take several minutes, it’s not lying.


It will probably take a lot longer than several minutes, particularly if you’re making a 20TB volume, like I am.


In fact, the array will get a bit concerned about how long it’s taking as well.


Five hours later, and about 7.5TB has been preallocated.


Here’s a picture of it when it’s finished. Note the lack of space.


You’ll also notice that the storage is in conservation mode now, basically because there’s not a lot of space left to work with.


You can right-click on these alerts to “Clear Emergency”. Note, however, that you need to have actually cleared the emergency (made space, for example), before you can, er, clear the emergency.


Note also that it takes a little while to delete a 20TB volume.



And there you have it. It is possible, and there might even be a reason for doing it. But then you might just have bought the wrong array for the job.

Dell Compellent – A brief introduction

I don’t really do product evaluations on this blog for a few reasons. Firstly, I’m not a tech journalist, and don’t get sent review samples of equipment to look at all day, nor do I get paid to do product evaluations (generally speaking). Secondly, I’m a bit crap at hardware evaluations, and tend to approach these things with a set of requirements that don’t always match up with what other people find useful in evaluating kit. In my day job, however, I sometimes have the opportunity to look at kit, but usually during the evaluation stage there can be a bit of a problem with commercial sensitivity and me shooting my mouth off about Vendor X’s gear while negotiating the purchase of said gear. That said, I’ve been evaluating some Dell Compellent equipment at work lately and thought it might be worthy of a write-up or two. Please note that I’m not suggesting for a minute that you go out and spend yours or someone else’s cash on Compellent arrays without doing your own evaluation. The point of this post is more to talk about some of the things I like and dislike, based on the opportunity I’ve had to have some hands-on experience with it.

For a decent, if now slightly out-dated, review of the Compellent, check out Chris Evans’ reviews here and here. For Dell’s overview of the Dell Compellent Architecture, have a look here. Also worth looking at is the Compellent Software Overview, which can be found here. Finally, a cool feature is “Portable Volume”, you can read about that here.

The array was installed in early February and hasn’t really missed a beat. The system came with 2 SC8000 controllers with 64GB of cache each. We also got 2 trays of disk; one with 24 300GB 15K SAS drives and one with 12 2TB 7.2K NL-SAS drives. Each controller has one 4-port 8Gb FC front-end card and one 4-port 6Gb SAS back-end card installed. It came with the basic Storage Center software licenses, Data Progression licenses and a Virtual Ports Base license.

Here’re some pictures of it installed in the rack in our lab. Note that these pictures probably don’t look too different from a lot of other Compellent arrays installed in racks around the world. They do, however, prove that the person installing the gear was competent.



So, I thought I’d cover off briefly on a few things now, and if anything else comes to mind I’ll do some more posts.

Firstly, Virtual Port technology is kind of cool, once you get your head around it. Basically, NPIV (check out one of Scott Lowe’s very useful articles on NPIV here) is used to enable multiple virtual ports on a physical port. Dell suggest that this means you need fewer ports for failover. While this is true, keep in mind that there may well be bandwidth issues when ports do failover. Obviously, you’d be hoping that the local Dell support technician turned up to replace failed cards in a timely fashion. Still, the idea of not having to worry as much about jumpy host-based failover software is neat (I’m looking at you, older versions of EMC PowerPath). I’m going to do a brief article on virtual ports in the future. In the meantime, this is what it looks like from a zoning perspective:

fcalias name Compellent1_Virtual_Ports_Zone vsan 2
    member pwwn 50:00:d3:10:00:58:70:05
    member pwwn 50:00:d3:10:00:58:70:07
    member pwwn 50:00:d3:10:00:58:70:19
    member pwwn 50:00:d3:10:00:58:70:1b
fcalias name Compellent1_Physical_Ports_Zone vsan 2
    member pwwn 50:00:d3:10:00:58:70:2d
    member pwwn 50:00:d3:10:00:58:70:2f
    member pwwn 50:00:d3:10:00:58:70:31
    member pwwn 50:00:d3:10:00:58:70:33
zone name HOST-023_HBA0_Compellent1_Zone vsan 2
    member fcalias HOST-023_HBA0
    member fcalias Compellent1_Virtual_Ports_Zone
zone name HOST-024_HBA0_Compellent1_Zone vsan 2
    member fcalias HOST-024_HBA0
    member fcalias Compellent1_Virtual_Ports_Zone

And the same again for VSAN 3, obviously with different host HBAs and different physical and virtual ports.

Secondly, note that LUNs / volumes on the Compellent are always configured as thin. You can choose not to do this, but you’ll have Dell people scratching their heads and wondering what you’re doing. People in the street might call you a disk hugger, or worse. It can get nasty. Let’s just say that thin is the new thick. If you absolutely have to configure thick, you used to be able to do so by ticking a box when you configured the volume. I made a 20TB volume that way. Sales people were briefly excited until they realised it was an evaluation array. For the life of me I can’t see where to do that now; maybe it was removed as an option.

While you’re thinking about how you might have just wasted a bit of money on advanced Compellent features by insisting on configuring thick-provisioned volumes, you can do some benchmarks. The great thing about benchmarks is that you can have them do whatever you want them to do. We asked for a system that could deliver approximately 10000 IOPS. And that’s what we got. I’m going to do a longer article on synthetic benchmarks and how they can be useful, but let’s just say that if you’re sitting down to do an equipment evaluation and you’ve fired up IOmeter (or whatever you like to use), make sure you have an idea of what it is you want to prove before you click on start. We were also able to get a lot more IOPS out of the system, because 512-byte 100% sequential reads are pretty common, right? Here’s a picture that represents two volumes running 2 workers with IOmeter. Each worker was doing 70000 IOPS. OMG that’s over 140000 IOPS! Yeah. Which equals approximately 70 MBps for this particular benchmark. It’s been said before, but don’t forget that an IOPS figure in isolation is meaningless.
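
To put numbers on that last point, the conversion from an IOPS figure at a given block size to actual throughput is trivial, and it shows why the big number above is less impressive than it sounds:

```python
def iops_to_mbps(iops: int, block_size_bytes: int) -> float:
    """Convert an IOPS figure at a fixed block size into MB/s of throughput."""
    return iops * block_size_bytes / 1_000_000

# The synthetic benchmark above: big IOPS number, tiny blocks, modest throughput
print(iops_to_mbps(140_000, 512))     # ~71.7 MB/s
# A far smaller IOPS figure at 64KB blocks moves much more data
print(iops_to_mbps(5_000, 65_536))    # ~327.7 MB/s
```

Same array, wildly different “headline” numbers, which is exactly why you should decide what you’re trying to prove before you hit start.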


I’m not in the business of shilling for Dell. But sometimes I get to look at their stuff. If you like the sound of some of this stuff, and think you might be in the market for a new array, it might be worth giving them a call.

What the Dell just happened? – Dell Storage Forum Sydney 2012 – Part 2

Disclaimer: I recently attended the Dell Storage Forum Sydney 2012.  My flights and accommodation were covered by Dell, however there is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Part 2

In this post I’d like to touch briefly on some of the sessions I went to and point you in the direction of some further reading. I’m working on some more content for the near future.


Dell AppAssure Physical, Virtual and Cloud Recovery

If you’re unfamiliar with AppAssure, head over to their website for a fairly comprehensive look at what they can do. Version 5 was recently released. Dan Moz has been banging on about this product to me for a while, and it actually looks pretty good. Andrew Diamond presented a large chunk of the content while battling some time constraints thanks to the keynote running over time, while Dan was demo boy. Here’s a picture with words (a diagram, if you will) that gives an idea of what AppAssure can do.


Live Recovery is one of my favourite features. With this it’s “not even necessary to wait for a complete restore to be able to access and use the data”. This is really handy when you’re trying to recover 100s of GB of file data but don’t know exactly what the users will want to access first.

Recovery Assure “detects the presence of Microsoft Exchange and SQL and its respective databases and log files and automatically groups the volumes with dependency for comprehensive protection and rapid recovery”. The cool thing here is that you’re going to be told if there’s going to be a SNAFU when you recover, before you recover. It’s not going to save your bacon every time, but it’s going to help with avoiding awkward conversations with the GM.

In the next few weeks I’m hoping to put together a more detailed brief on what AppAssure can and can’t do.


A Day in the Life of a Dell Compellent Page: How Dynamic Capacity, Data Instant Replay and Data Progression Work Together

Compellent brought serious tiering tech to Dell upon acquisition, and has really driven the Fluid Data play that’s going on at the moment. This session was all about “closely following a page from first write to demotion to low-cost disk”. Sound dry? I must admit it was a little. It was also, however, a great introduction to how pages move about the Compellent and what that means to storage workloads and efficiency. You can read some more about the Compellent architecture here.

The second half of the session comprised a customer testimonial (an Australian on-line betting company) and brief Q & A with the customer. It was good to see that the customer was happy to tell the truth when pushed about some of the features of the Compellent stack and how it had helped and hurt in his environment. Kudos to my Dell AE for bringing up the question of how FastTrack has helped only to watch the customer reluctantly admit it was one of the few problems he’d had since deploying the solution.


Media Lunch ‘Fluid Data and the Storage Evolution’

When I was first approached about attending this event, the idea was that there’d be a blogger roundtable. For a number of reasons, including availability of key people, that had to be canned and I was invited to attend the media lunch instead. Topics covered during the lunch were basically the same as the keynote, but in a “lite” format. There were also two customers providing testimonials about Dell and how happy they were with their Compellent environments. It wasn’t quite the event that Dell had intended, at least from a blogger perspective, but I think they’re very keen to get more of this stuff happening in the future, with some more focus on the tech rather than the financials. At least, I hope that’s the case.


On the Floor

In the exhibition hall I got to look at some bright shinies and talk to some bright folks about new products that have been released. FluidFS (registration required) is available across the Equallogic, Compellent and PowerVault range now. “With FluidFS, our unified storage systems can manage up to 1PB of file data in a single namespace”. Some people were quite excited about this. I had to check out the FS8600, which is the new Compellent Unified offering.

I also had a quick look at the Dell EqualLogic PS-M4110 Blade Array which is basically a PS4000 running in a blade chassis. You can have up to 4 of these things in a single M1000e chassis, and they support 14 2.5″ drives in a variety of combinations. Interestingly you can only have 2 of these in a single group, so you would need 2 groups per chassis if you fully populated it.

Finally I took a brief gander at a PS6500 Series machine. These are 4RU EQL boxes that take up to 48 spindles and basically can give you a bunch of tiering in a big box with a fairly small footprint.



As an attendee at the event I was given a backpack, water bottle, some pens, a SNIA Dictionary and a CommVault yo-yo. I’ll let you know if I won a laptop.

I may or may not have had some problems filling out my registration properly though.

Thanks, etc

For an inaugural event, I thought the Dell Storage Forum was great, and I’m stoked that vendors are starting to see the value in getting like-minded folk in the same place to get into useful tech stuff, rather than marketing fluff. Thanks to @DanMoz for getting me down there as a blogger in the first place and for making sure I had everything I needed while I was there. Thanks also to the Dell PR and Events people and the other Dell folks who took the time to say hi and check that everything was cool. It was also nice to meet Simon Sharwood in real life, after reading his articles on The Register and stalking him on twitter.

What the Dell just happened? – Dell Storage Forum Sydney 2012 – Part 1

Disclaimer: I recently attended the Dell Storage Forum Sydney 2012.  My flights and accommodation were covered by Dell, however there is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.


Rather than give you an edited transcript of the sessions I attended, I thought it would be easier if I pointed out some of the highlights. In the next few weeks I’m going to do some more detailed posts, particularly on AppAssure and some of the new Compellent stuff. This is the first time I’ve paid attention to what was going on on stage in terms of things to blog about, so it might be a bit rough around the edges. If it comes across as a bit of propaganda from Dell, well, it was their show. There was a metric shedload of good information presented on the day and I don’t think I could do it justice in one post. And if I hear one more person mention “fluid architecture” I’ll probably lose it.

Part 1 


Dell is big on the Dell Fluid Data Architecture and they’re starting to execute on that strategy. Introducing the keynote speakers was Jamie Humphrey, Director of Storage and Data Management for Australia & New Zealand. The first speaker introduced was Joe Kremer, Vice President and Managing Director, Dell Australia & New Zealand. He spent some time on the global Dell transformation which involved intellectual property (acquisition and development), progressing Dell’s strategy, and offering solution completeness to customers. He’s also keen to see increased efficiency in the enterprise through standards adoption rather than the use of proprietary systems. Dell are big on simplicity and automation.

Dell is now all about shifting its orientation towards solutions with outcomes rather than the short-term wins they’d previously been focussed on. There have been 24 acquisitions since 2008 (18 since 2010). Perot Systems has apparently contributed significantly in terms of services and reference architectures. There have been 6 storage acquisitions in the last 3 years. Joe also went on to talk about why they went for EqualLogic, Compellent, Ocarina, InSite One (a public medical cloud), RNA Networks, AppAssure, Wyse, Force10, and Quest. The mantra seems to be “What do you need? We’ll make it or buy it”. Services people make up the biggest part of the team in Australia now, which is a refreshing change from a few years ago. Dell have also been doing some “on-shoring” of various support teams in Australia, presumably so we’ll feel warm and fuzzy about being that little bit closer to a throat we can choke when we need to.

When Joe was finished, it was time for the expert panel. First up was Brett Roscoe, General Manager and Executive Director, PowerVault and Data Management. He discussed Dell’s opportunity to sell a better “together” story through servers and storage. Nowadays you can buy a closed stack, build it yourself, or do it Dell’s way. Dell wants to put together open storage, server and network to keep costs down, and to drive automation, ease of use and integration across the product line. The fluid thing is all about everything finding its own level, fitting into whatever container you put it in. Brett also raised the point that enterprise features from a few years ago are now available in today’s midrange arrays, with midrange prices to match. Dell is keen to keep up the strategy using the following steps: Acquire, Integrate and Innovate. They also see themselves as the biggest storage start-up in the world, which is a novel concept but makes some sense when you consider the nature of their acquisitions. Dedupe and compression in the filesystem is “coming”. Integration will be the key to Dell successfully executing its strategy. Brett also made some product availability announcements (see On The Floor in Part 2).

Brett also had one of the funnier lines of the day – “Before I bring up the smart architect guys, I want to bring up one of our local guys” – when introducing Phil Davis, Vice President, Enterprise Solutions Group, Dell Asia Pacific & Japan to the stage.

They then launched into a series of video-linked whiteboard sessions with a number of “Enterprise Technologists”: a whiteboard set up in front of them was filmed and projected onto the screens in the auditorium so the audience could see it clearly. It was a nice way to present, and a little more engaging than the standard videos and slide deck we normally see with keynotes.

The first discussion was on flash, with a focus on the RNA Networks acquisition. Tim Plaud, Principal Storage Architect at Dell, talked about the move of SSD from the array into the server to avoid the latency. The problem with this? It’s not shared. So why not use it as cache (Fluid Cache)? Devices can communicate with each other over a low-latency network using Remote DMA to create a cache pool. Take a 15,000 IOPS device in the array, remove the latency (network, controller, SAS) and put it out on the PCI bus, and you can get 250,000 IOPS per device. Now put four per server (for Dell 12G servers). How do you protect the write cache? Use cache partners in a physically different server, de-staging in the background in “near real-time”. You can also pick your interface for the cache network, and I’m assuming that Force10 and 40Gb would help here. Servers without the devices can still participate in the cache pool through the use of the software. Cache is de-staged before Replays (snapshots) happen, so the Replays are application- or crash-consistent. Tim also talked about working replication – “asynchronously, semi-synchronously or truly synchronously”. I’m not sure I want to guess what semi-synchronous is. Upward tiering (to the host) and tiering down / out (to the cloud) is another strategy they’re working on.
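To make the cache-partner idea concrete, here’s a toy sketch of a partner-mirrored write-back cache with de-staging. This is purely illustrative – the class and method names are mine, not anything from Fluid Cache – but it shows why a write acknowledged on two nodes survives a single server failure, and why de-staging before a Replay keeps the snapshot consistent.

```python
# Illustrative sketch of a partner-mirrored write-back cache, loosely
# based on the Fluid Cache description above. All names are my own
# invention; this is not Dell's implementation.

class CacheNode:
    def __init__(self, name):
        self.name = name
        self.cache = {}      # local dirty write cache: block -> data
        self.partner = None  # partner node in a different server
        self.mirror = {}     # mirror copies held on behalf of the partner

    def write(self, block, data):
        # A write is only "done" once it sits in local cache AND on the
        # partner, so losing one server loses no acknowledged data.
        self.cache[block] = data
        self.partner.mirror[block] = data

    def destage(self, array):
        # Background de-stage: flush dirty blocks down to the array,
        # then release the partner's mirror copies.
        for block, data in list(self.cache.items()):
            array[block] = data
            del self.cache[block]
            del self.partner.mirror[block]

# De-staging before a Replay (snapshot) means the array holds a
# consistent image when the snapshot is taken.
array = {}
a, b = CacheNode("a"), CacheNode("b")
a.partner, b.partner = b, a

a.write(7, "hello")
assert 7 in b.mirror            # acknowledged write is protected on b
a.destage(array)                # flush before taking a Replay
assert array[7] == "hello"
assert not a.cache and not b.mirror
```

In the real product the mirror traffic rides the low-latency RDMA cache network rather than Python method calls, but the invariant is the same: nothing is acknowledged until it exists in two fault domains.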

The second discussion was around how data protection is changing – with RPOs and RTOs getting ever more demanding – driving the adoption of snapshots and replication as protection mechanisms. Mike Davis, Director of Marketing, Storage, was called up on stage to talk about AppAssure. He talked about how quickly the application can be back on-line after a failure, this being the primary driver in a number of businesses. AppAssure promises to capture not only the data, but the application state as well, while providing flexible recovery options. AppAssure also promises efficiency through the use of incremental-forever backups plus dedupe and compression. AppAssure uses a “Core” server as the primary component – just set one up wherever you might want to recover to, be that a Disaster Recovery site, the cloud, or another environment within the same data centre. You can also use AppAssure to replicate from Compellent to EqualLogic to the cloud, and so on.
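The “incremental forever” approach mentioned above is worth a quick illustration: after one full base image, each backup stores only changed blocks, and any point in time is rebuilt by replaying increments onto the base. This sketch is my own toy model, not AppAssure’s actual on-disk format.

```python
# Toy model of incremental-forever backup: one full base image, then
# per-snapshot dictionaries of changed blocks only. Illustrative names
# and data; not AppAssure's real format.

base = {0: "A", 1: "B", 2: "C"}   # full backup, taken once
increments = [
    {1: "B2"},                    # snapshot 1: block 1 changed
    {0: "A2", 2: "C2"},           # snapshot 2: blocks 0 and 2 changed
]

def restore(base, increments, point):
    """Rebuild the image as of snapshot `point` by replaying increments."""
    image = dict(base)
    for inc in increments[:point]:
        image.update(inc)
    return image

assert restore(base, increments, 1) == {0: "A", 1: "B2", 2: "C"}
assert restore(base, increments, 2) == {0: "A2", 1: "B2", 2: "C2"}
```

The appeal is obvious: each backup window only pays for changed blocks, and dedupe/compression then shrink even those, while every snapshot remains a full recovery point.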

The final topic – software architecture to run in a cloud environment on EqualLogic – was delivered by Mark Keating, Director of Storage QA at Dell. He talked about how the array is traditionally comprised of the management layer / virtualisation (abstraction) layer / platform (controllers, drives, RAID, fans). Dell want to de-couple these layers in the future. With Host Virtualized Storage (HVS) they’ll be able to do this, and it’s expected sometime next year. Take the management and virtualisation layers and put them in the cloud as a virtual workload. Use any hardware you want but keep the application integration and scalability of EqualLogic (because they love the software on the EqualLogic; the rest is just tin). Use cases? Tie it to a virtual application. Make a SAN for Exchange, make one for SQL. Temporary expansion of EQL capacity in the cloud is possible. Use it as a replication target. Run multiple “SANs” on the same infrastructure as a means of providing simple multi-tenancy. It’s an interesting concept, and something I’d like to explore further. It also raises a lot of questions about the underlying hardware platform, and just how much you can do with software before being limited by, presumably, the cheap, commodity hardware that it sits on.