Dell EMC Announces Midrange Storage Line Enhancements

Disclaimer: I recently attended Dell EMC World 2017.  My flights, accommodation and conference pass were paid for by Dell EMC via the Dell EMC Elect program. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Midrange Overview

Dell EMC today announced a number of new midrange storage models and enhancements. According to Dell EMC the midrange is still a big market and they estimate 7% growth over the next 5 years (I may have misheard though). As such, they’re positioning the new midrange family, comprised of the Unity and SC (Compellent) platforms. The goal is to provide common tools for management, mobility and protection (namely PowerPath, ViPR, VPLEX, RecoverPoint, Connectrix, and Data Domain).


Glad you asked. Dell EMC are positioning the two sides of the family as follows:

Unity

  • All Flash – simple, flash density, inline efficiency, consistent response time, cloud tier
  • Unified – unified file and block, app, software defined or converged, data in place upgrades

SC Series

  • Hybrid – granular tiering, 0-100% Flash, efficiency for hybrid
  • Best economics – intelligent compression and dedupe, persistent software licenses

It’s obviously not always going to be that cut and dried, but it’s a start.



So, what’s new with Unity? There’ll be new “F” models available from Q2 2017, along with a new code release to support the new line. The new code will also be installable on previous-generation Unity models. Note that, according to Dell EMC, Unity hybrid isn’t going anywhere.


Speeds and Feeds

             Unity 350F       Unity 450F       Unity 550F       Unity 650F
Processor    6-core / 1.7GHz  10-core / 2.2GHz 14-core / 2.4GHz 14-core / 2.4GHz
Memory       96GB             128GB            256GB            512GB
Capacity     150 drives       250 drives       500 drives       1000 drives
             2.4PB            4PB              8PB              16PB
Volumes      1000 @ 256TB     1500 @ 256TB     2000 @ 256TB     6000 @ 256TB
             9000 @ 64TB      9000 @ 64TB      13500 @ 64TB     30000 @ 64TB
             1000 @ 256TB     1500 @ 256TB     2000 @ 256TB     4000 @ 256TB
Snapshots    8000             14000            20000            30000
             256 per volume   256 per volume   256 per volume   256 per volume



Do you hate Java? I do. As do most people who’ve had to use Java-based Unisphere. With Unity, Dell EMC have taken a more modern, user-friendly approach to array management:

  • HTML5-based Unisphere
  • CloudIQ
  • Unified CLI and REST API


Architected for All Flash

Dell EMC tell me the Unity array is “architected for all flash”. It certainly has a lot of the features you’d expect from an all flash array, including:

  • 3D TLC NAND flash drives for all IO types;
  • Multi-core optimized for best CPU utilisation and low latency;
  • Automatic flash wear balance;
  • Zero impact drive firmware based garbage collection;
  • Per object in-memory log for consistent low response time;
  • Write coalescing with full stripe writes to minimise IO;
  • Inline compression; and
  • Mix different flash drive types and capacities for lowest cost.



If you’ve been tracking the Unity you may have noticed the continuous introduction of support for larger drives. With the introduction of the “Dense Shelf”, you’re now looking at 500TB of capacity per RU. That, as they say, is a lot of capacity.

Q2 2016   3.2TB drives (32TB usable per RU)
Q3 2016   7.6TB drives (76TB usable per RU)
          3.84TB drives (38TB usable per RU)
Q4 2016   Inline compression (300TB effective per RU)
          15.4TB drives (152TB usable per RU)
Q2 2017   Dense shelf – 500TB effective per RU – 80 drives in a 3RU form factor


Dynamic Pools
Unlike standard pools, dynamic pools let you add single drives (spare capacity is distributed across the pool, which also improves rebuild times). I’ll be digging into this feature a bit more in the future (hopefully).


File System

The UFS64 file system was introduced with Unity and has had a bit of an uplift in terms of capacity. It now scales to 256TB usable capacity per file system, with 10M+ sub-directories and files. The cool thing is that it also supports inline compression, along with pointer-based snapshots offering simple space reclaim and low IO impact. There’s also a cloud archiving and tiering capability, providing policy-based, transparent archival of files to public or private cloud (Virtustream by preference, but I believe there’s also support for Azure and AWS).


Snapshot Mobility

As of Q2 you’ll have the ability to move snapshots from array to array (local to remote to cloud).


Thin clones

  • Deduplicate / shared data set
  • Independent LUNs
  • Independent snap / replication schedules
  • Fast create / populate and restore


Dell EMC are keen as beans for you to have a good experience getting stuff onto your shiny new Unity array. As such they offer a built-in, integrated migration tool (that you run from Unisphere). It:

  • Supports FC, iSCSI, NFS (2H 2016) and SMB (H1 2017) migration from VNX;
  • Migrates LUNs, file systems, quotas, ACLs and exports; and is
  • Transparent to file applications and minimally disruptive for block.

Existing Unity customers will also be able to do data in place (DIP) upgrades online (from 2H 2017).


SC Series

Speeds and Feeds

I haven’t kept up with the SC line in recent years, so I found this table handy. You might too.

             SCv20X0             SC 5020             SC 7020             SC 9000
Processor    4-core / 3.6GHz     8-core / 2.4GHz     2x 8-core / 2.5GHz  2x 8-core / 3.2GHz
Memory       16GB                128GB               256GB               512GB
Capacity     168 drives          222 drives          500 drives          1024 drives
             672TB               2PB                 3PB                 4PB
Volumes      1000 LUNs / VVols   2000 LUNs / VVols   2000 LUNs / VVols   2000 LUNs / VVols
             500TB per volume    500TB per volume    500TB per volume    500TB per volume
Snapshots    2000                4096                16384               32000

Note that a data in place (DIP) upgrade can also get you from the SC4020 to the SC5020.


Flexible Configuration

Dell EMC are positioning the SC line of arrays as a flexible approach to configuration, offering a range of performance options, price points and configurations.

  • All flash, some flash or no flash
  • Start with one configuration and convert to another
  • Designed to fit any workload and budget


Drive Efficiency

  • Activate on the lowest tier of media
  • Easy on/off, selectable by volume
  • Data efficiency works in the background on “inactive” data
  • Post-process operation ensures no impact to active data IO after data has been moved from the active to the inactive tier
  • Best for (hybrid) environments that do not require 24x7x365 consistent response times


Intelligent Compression and Deduplication

The SC range has always been about the efficient storage of data. Dell EMC think they’re onto a good thing with intelligent deduplication and compression. I’m keen to see it working for myself before I get too excited.

  • Directs all incoming writes to dynamically partitioned “write” space (R10) on Tier 1 drives
  • Moves inactive data from “write” space to space efficient space (R5/6) on same or other tiers
  • Post-process operation compresses / dedupes inactive data


Investment Protection

Dell EMC don’t want you to feel like you can’t have your cake and eat it too. You may have been an EMC customer before Dell acquired them. Or maybe you’ve dabbled with EqualLogic arrays. That’s okay, you get a certain level of “Investment protection” via cross platform replication.

  • SC Series <-> PS Series via Replication
  • SC Series <-> VMAX, XtremIO, Unity via RecoverPoint VM Replication



Time to throw out your MD devices? Never fear, there’s a built-in migration path from “Legacy”.

  • Migrates LUNs (no snaps) from PS and MD (2H 2017) series to SC Series
  • Built-in solution: self-service without requiring any third-party tool
  • Offers both offline and online thin import depending on use case
  • Online minimally disruptive: requires unmount and mount



Dell EMC offer both centralised and web-based management for the SC series.

HTML5-based Unisphere for SC (that’s right!)

  • Compatible with most modern browsers
  • Unisphere style modern look and feel
  • No separate download or install required


CloudIQ

  • Central monitoring and reporting for midrange
  • Cloud-based
  • Support for planning and optimisation

Dell Storage Manager

  • Central management and monitoring of SC and PS arrays
  • Advanced features
  • Supports up to 10 arrays



In terms of “family”, this announcement positions the midrange offering from Dell EMC as more Brady Bunch than Manson family. This is a good thing in my opinion. I’ve seen firsthand some of the opposition put up by EMC or Dell customers prior to the merger, and other vendors have certainly been licking their chops hoping the whole thing would prove too hard and Dell EMC would lose their way. Whilst it would be overly optimistic (and naïve) to expect them to consolidate the midrange platform to one line of arrays in such a short amount of time, the Unity and SC lines cover all the bases and show signs of future, further streamlining activities.

I cut my teeth (figuratively) on an old CLARiiON FC4700 and have watched the progression over the years of the EMC midrange offering. Similarly I have plenty of customers who’ve helped themselves to PS, SC and MD arrays. It’s nice to see all this cool tech coming together. While midrange isn’t anywhere near as sexy as massively scalable object storage, it performs an important function for a wide range of businesses small and large and shouldn’t be ignored. As with other product announcements I cover here, if you have particular queries about the products I recommend you engage with your local Dell EMC team in the first instance. The new Dell EMC Unity All-Flash models will be orderable this month and available in July. The SC5020 is orderable this month and will be generally available in June. If you want it from the horse’s mouth, you can read blog posts from Dell EMC covering the announcements around Unity here and SC series here.

Random Short Take #2

I did one of these 7 years ago – so I guess I never really got into the habit – but here’re a few things that I’ve noticed and thought you might be interested in:

And that’s about it, thanks for reading.


Dell Compellent – ESXi HBA Queue Length

This is a quick post that is more for my reference than for anything else. When I had the Compellent installed, Dell passed on a copy of the “Dell Compellent Storage Center Best Practices with vSphere 5.x” document (Dell P/N 680-041-020). One of the interesting points I noted was around modifying the queue depth, and where that should be done. As with any best practice document, there are going to be factors that may influence the outcomes of these activities in a positive or negative fashion. In other words, YMMV, but I found it useful. As always, test it before launching into production.

Firstly, set the HBA queue depth to 255 via the HBA BIOS. The thinking here is that the VMkernel driver module ultimately controls the HBA’s queue depth. Now, set the queue depth on the driver module. I use QLogic HBAs in my environment.

To find the correct driver name for the loaded module, run the following command.

esxcli system module list |grep qla

The output should be something like qla2xxx

Now run the following command.

esxcli system module parameters set -m qla2xxx -p "ql2xmaxqdepth=255 ql2xloginretrycount=60 qlport_down_retry=60"
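The parameter change only takes effect after a reboot of the host. Once it’s back up, it’s worth confirming what the driver module actually picked up; a quick sketch, assuming the qla2xxx module name from above:

```shell
# List the options currently set on the QLogic driver module
esxcli system module parameters list -m qla2xxx | grep -E "ql2xmaxqdepth|qlport_down_retry"
```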

Note that you can also set it via Disk.SchedNumReqOutstanding (DSNRO), where the default value is 32. Keep in mind that this setting is only enforced when more than one VM is active on the datastore. It’s also a global setting, so if you’ve set the DSNRO value to 64, for example, and you have two datastores in place, one with 4 VMs and one with 6 VMs, each VM will get a queue depth of 64. VMware recommend that this value be set to the same as the VMkernel driver module value.
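The global Disk.SchedNumReqOutstanding value can also be changed from the command line on ESXi 5.0/5.1. A sketch, using the 64 from the example above (note that in 5.5 and later this option went away as a global setting and became per-device via esxcli storage core device set, so check which release you’re on first):

```shell
# Set the global Disk.SchedNumReqOutstanding value (ESXi 5.0/5.1)
esxcli system settings advanced set -o /Disk/SchedNumReqOutstanding -i 64
# And confirm it stuck
esxcli system settings advanced list -o /Disk/SchedNumReqOutstanding
```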

You can also modify the queue depth in a Windows guest OS by modifying the registry settings of the OS.

In any case, go check out the document. It’s one of the more useful white papers I’ve seen from a vendor in some time.

Dell Compellent – Storage provisioning with CompCU.jar

I covered getting started with the CompCU.jar tool here. This post is a quick one that covers provisioning storage on the Compellent and then presenting it to hosts. In this example, I create a 400GB volume named Test_Volume1 and place it in the iLAB_Gold2 folder.

E:\CU060301_002A>java -jar CompCU.jar -defaultname saved_default.cli -c "volume create -name "Test_Volume1" -size 400g -folder iLAB_Gold2"
Compellent Command Utility (CompCU)
User Name: Admin
Host/IP Address:
Single Command: volume create -name Test_Volume1 -size 400g -folder iLAB_Gold2
Connecting to Storage Center: with user: Admin
java.lang.IllegalStateException: TrustManagerFactoryImpl is not initialized
Running Command: volume create -name Test_Volume1 -size 400g -folder iLAB_Gold2
Creating Volume using StorageType 1: storagetype='Assigned-Redundant-4096', redundancy=Redundant, pagesize=4096, diskfolder=Assigned.
Successfully created Volume 'Test_Volume1'
Successfully finished running Compellent Command Utility (CompCU) application.

Here’s what it looks like now.


Notice that Test_Volume1 has been created but is inactive – it needs to be mapped to a server before it can be brought online.

E:\CU060301_002A>java -jar CompCU.jar -defaultname saved_default.cli -c "volume map -name 'Test_Volume1' -server 'iLAB_Gold2'"
Compellent Command Utility (CompCU)
User Name: Admin
Host/IP Address:
Single Command: volume map -name 'Test_Volume1' -server 'iLAB_Gold2'
Connecting to Storage Center: with user: Admin
java.lang.IllegalStateException: TrustManagerFactoryImpl is not initialized
Running Command: volume map -name 'Test_Volume1' -server 'iLAB_Gold2'
Successfully mapped Volume 'Test_Volume1' to Server 'iLAB_Gold2'
Successfully finished running Compellent Command Utility (CompCU) application.

Wouldn’t it make more sense to create and map the volume at the same time? Yes, yes it would. Here’s another example where I present the volume to a folder of servers.

E:\CU060301_002A>java -jar CompCU.jar -defaultname saved_default.cli -c "volume create -name "Test_Volume2" -size 400g -folder iLAB_Gold2 -server iLAB_Gold2"
Compellent Command Utility (CompCU)
User Name: Admin
Host/IP Address:
Single Command: volume create -name Test_Volume2 -size 400g -folder iLAB_Gold2 -server iLAB_Gold2
Connecting to Storage Center: with user: Admin
java.lang.IllegalStateException: TrustManagerFactoryImpl is not initialized
Running Command: volume create -name Test_Volume2 -size 400g -folder iLAB_Gold2 -server iLAB_Gold2
Creating Volume using StorageType 1: storagetype='Assigned-Redundant-4096', redundancy=Redundant, pagesize=4096, diskfolder=Assigned.
Successfully mapped Volume 'Test_Volume2' to Server 'iLAB_Gold2'
Successfully created Volume 'Test_Volume2', mapped it to Server 'iLAB_Gold2' on Controller 'SN 22641'
Successfully finished running Compellent Command Utility (CompCU) application.
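The create-and-map form lends itself nicely to scripting if you need a batch of volumes. Here’s a sketch using the same syntax as above, with echo in place of the real invocation so you can eyeball the commands before committing (the names and sizes are just illustrative):

```shell
# Generate create-and-map commands for five 400GB test volumes.
# Drop the "echo" to actually run them against the array.
for i in 1 2 3 4 5; do
  echo java -jar CompCU.jar -defaultname saved_default.cli \
    -c "volume create -name Test_Volume$i -size 400g -folder iLAB_Gold2 -server iLAB_Gold2"
done
```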

Note that these commands don’t specify replays. If you want replays configured you should use the -replayprofile option or manually create replays with the replay create command.

Dell Compellent – Getting started with CompCU.jar

CompCU.jar is the Compellent Command Utility. You can download it from Compellent’s support site (registration required). This is a basic article that demonstrates how to get started.

The first thing you’ll want to do is create an authentication file that you can re-use, similar to what you do with EMC’s naviseccli tool. The file I specify is saved in the directory I’m working from, and the Storage Center IP is the cluster IP, not the IP address of the controllers.

E:\CU060301_002A>java -jar CompCU.jar -default -defaultname saved_default -host StorageCenterIP -user Admin -password SCPassword

Now you can run commands without having to input credentials each time. I like to output to a text file, although you’ll notice that CompCU also dumps output to the console at the same time. The “system show” command provides a brief summary of the system configuration.

E:\CU060301_002A>java -jar CompCU.jar -defaultname saved_default.cli -c "system show -txt 'outputfile.txt'"
Compellent Command Utility (CompCU)
User Name: Admin
Host/IP Address:
Single Command: system show -txt 'outputfile.txt'
Connecting to Storage Center: with user: Admin
java.lang.IllegalStateException: TrustManagerFactoryImpl is not initialized
Running Command: system show -txt 'outputfile.txt'
SerialNumber Name ManagementIP Version OperationMode PortsBalanced MailServer BackupMailServer
----------------- -------------------------------- ---------------- ---------------- -------------- -------------- -------------------- --------------------
22640 Compellent1 Normal Yes
Save to Text (txt) File: outputfile.txt
Successfully finished running Compellent Command Utility (CompCU) application.

Notice I get java errors every time I run this command. I think that’s related to an expired certificate, but I need to research that further. Another useful command is “storagetype show“. Here’s one I prepared earlier.

E:\CU060301_002A>java -jar CompCU.jar -defaultname saved_default.cli -c "storagetype show -txt 'storagetype.txt'"
Compellent Command Utility (CompCU)
User Name: Admin
Host/IP Address:
Single Command: storagetype show -txt 'storagetype.txt'
Connecting to Storage Center: with user: Admin
java.lang.IllegalStateException: TrustManagerFactoryImpl is not initialized
Running Command: storagetype show -txt 'storagetype.txt'
Index Name DiskFolder Redundancy PageSize PageSizeBlocks SpaceUsed SpaceUsedBlocks SpaceAllocated SpaceAllocatedBlocks
------ -------------------------------- -------------------- -------------------- ---------- --------------- -------------------- -------------------- -------------------- --------------------
1 Assigned-Redundant-4096 Assigned Redundant 2.00 MB 4096 1022.51 GB 2144350208 19.67 TB 42232291328
Save to Text (txt) File: storagetype.txt
Successfully finished running Compellent Command Utility (CompCU) application.

There’s a bunch of useful things you can do with CompCU, particularly when it comes to creating volumes and allocating them to hosts, for example. I’ll cover these in the next little while. In the meantime, I hope this was a useful introduction to CompCU.

Dell Compellent – Preallocating storage

In my first post on the Dell Compellent, I’d mentioned that it was possible to preallocate storage on the array, even though it was thin by default. I can’t really think of a reason why you’d want to do this on this particular array, but here’s how to do it.

Firstly, create the volume but don’t map it to anything. Then wait a little while. I can’t remember how long, but it’s a little while. Then right-click on the volume and the option to “Preallocate Storage” will be there.


You will be warned that you’ve basically turned your back on years of hard work by the Compellent developers. Think about that.


When it says it will take several minutes, it’s not lying.


It will probably take a lot longer than several minutes, particularly if you’re making a 20TB volume, like I am.


In fact, the array will get a bit concerned about how long it’s taking as well.


Five hours later, and about 7.5TB has been preallocated.


Here’s a picture of it when it’s finished. Note the lack of space.


You’ll also notice that the storage is in conservation mode now, basically because there’s not a lot of space left to work with.


You can right-click on these alerts to “Clear Emergency”. Note, however, that you need to have actually cleared the emergency (made space, for example), before you can, er, clear the emergency.


Note also that it takes a little while to delete a 20TB volume.



And there you have it. It is possible, and there might even be a reason for doing it. But then you might just have bought the wrong array for the job.

Dell Compellent – A brief introduction

I don’t really do product evaluations on this blog for a few reasons. Firstly, I’m not a tech journalist, and don’t get sent review samples of equipment to look at all day, nor do I get paid to do product evaluations (generally speaking). Secondly, I’m a bit crap at hardware evaluations, and tend to approach these things with a set of requirements that don’t always match up with what other people find useful in evaluating kit. In my day job, however, I sometimes have the opportunity to look at kit, but usually during the evaluation stage there can be a bit of problem with commercial sensitivity and me shooting my mouth off about Vendor X’s gear while negotiating the purchase of said gear. That said, I’ve been evaluating some Dell Compellent equipment at work lately and thought it might be worthy of a write-up or two. Please note that I’m not suggesting for a minute that you go out and spend yours or someone else’s cash on Compellent arrays without doing your own evaluation. The point of this post is more to talk about some of the things I like and dislike, based on the opportunity I’ve had to have some hands-on experience with it.

For a decent, if now slightly out-dated, review of the Compellent, check out Chris Evans’ reviews here and here. For Dell’s overview of the Dell Compellent Architecture, have a look here. Also worth looking at is the Compellent Software Overview, which can be found here. Finally, a cool feature is “Portable Volume”, you can read about that here.

It was installed in early February and hasn’t really missed a beat. The system came with 2 SC8000 controllers with 64GB of cache each.  We also got 2 trays of disk; one with 24 300GB 15K SAS drives and one with 12 2TB 7.2K NL-SAS drives. Each controller has one 4-port 8Gb FC front-end card and one 4-port 6Gb SAS back-end card installed. It came with the basic Storage Center software licenses, Data Progression licenses and a Virtual Ports Base license.

Here’re some pictures of it installed in the rack in our lab. Note that these pictures probably don’t look too different from a lot of other Compellent arrays installed in racks around the world. They do, however, prove that the person installing the gear was competent.



So, I thought I’d cover off briefly on a few things now, and if anything else comes to mind I’ll do some more posts.

Firstly, Virtual Port technology is kind of cool, once you get your head around it. Basically, NPIV (check out one of Scott Lowe’s very useful articles on NPIV here) is used to enable multiple virtual ports on a physical port. Dell suggest that this means you need fewer ports for failover. While this is true, keep in mind that there may well be bandwidth issues when ports do fail over. Obviously, you’d be hoping that the local Dell support technician turned up to replace failed cards in a timely fashion. Still, the idea of not having to worry as much about jumpy host-based failover software is neat (I’m looking at you, older versions of EMC PowerPath). I’m going to do a brief article on virtual ports in the future. In the meantime, this is what it looks like from a zoning perspective:

fcalias name Compellent1_Virtual_Ports_Zone vsan 2
    member pwwn 50:00:d3:10:00:58:70:05
    member pwwn 50:00:d3:10:00:58:70:07
    member pwwn 50:00:d3:10:00:58:70:19
    member pwwn 50:00:d3:10:00:58:70:1b
fcalias name Compellent1_Physical_Ports_Zone vsan 2
    member pwwn 50:00:d3:10:00:58:70:2d
    member pwwn 50:00:d3:10:00:58:70:2f
    member pwwn 50:00:d3:10:00:58:70:31
    member pwwn 50:00:d3:10:00:58:70:33
zone name HOST-023_HBA0_Compellent1_Zone vsan 2
    member fcalias HOST-023_HBA0
    member fcalias Compellent1_Virtual_Ports_Zone
zone name HOST-024_HBA0_Compellent1_Zone vsan 2
    member fcalias HOST-024_HBA0
    member fcalias Compellent1_Virtual_Ports_Zone

And the same again for VSAN 3, obviously with different host HBAs and different physical and virtual ports.
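If you want to confirm what the switch actually thinks is zoned, the usual MDS show commands apply (a quick sketch, run from the NX-OS CLI):

```
show fcalias name Compellent1_Virtual_Ports_Zone vsan 2
show zoneset active vsan 2
show flogi database vsan 2
```

The virtual ports turn up in the FLOGI database as additional logins on the same physical interface, which is a handy way to confirm NPIV is doing its thing.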

Secondly, note that LUNs / volumes on the Compellent are always configured as thin. You can choose not to do this, but you’ll have Dell people scratching their heads and wondering what you’re doing. People in the street might call you a disk hugger, or worse. It can get nasty. Let’s just say that thin is the new thick. If you absolutely have to configure thick, you used to be able to do so by ticking a box when you configured the volume. I made a 20TB volume that way. Sales people were briefly excited until they realised it was an evaluation array. For the life of me I can’t see where to do that now; maybe it was removed as an option.

While you’re thinking about how you might have just wasted a bit of money on advanced Compellent features by insisting on thick-provisioned volumes, you can do some benchmarks. The great thing about benchmarks is that you can have them do whatever you want them to do. We asked for a system that could deliver approximately 10000 IOPS, and that’s what we got. I’m going to do a longer article on synthetic benchmarks and how they can be useful, but let’s just say that if you’re sitting down to do an equipment evaluation and you’ve fired up IOmeter (or whatever you like to use), make sure you have an idea of what it is you want to prove before you click start. We were also able to get a lot more IOPS out of the system, because 512-byte 100% sequential reads are pretty common, right? Here’s a picture that represents two volumes running 2 workers with IOmeter. Each worker was doing 70000 IOPS. OMG, that’s over 140000 IOPS! Yeah. Which equals approximately 70 MBps for this particular benchmark. It’s been said before, but don’t forget that an IOPS figure in isolation is meaningless.
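To put numbers on that last point, throughput is just IOPS multiplied by IO size; a quick sanity check in the shell:

```shell
# 140000 IOPS of 512-byte reads is a shade under 70 MB/s
echo "$((140000 * 512 / 1024 / 1024)) MB/s"
# The same IOPS figure at a 32KB block size would be ~4.3 GB/s,
# which is why IOPS without a block size tells you very little.
```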


I’m not in the business of shilling for Dell. But sometimes I get to look at their stuff. If you like the sound of some of this stuff, and think you might be in the market for a new array, it might be worth giving them a call.