OpenMediaVault – Expanding the Filesystem

I recently had the opportunity to replace a bunch of ageing 2TB drives in my OpenMediaVault NAS with some 3TB drives. I run it in a 6+2 RAID-6 configuration (yes, I know, RAID is dead). I was a bit cheeky and replaced 2 drives at a time and let it rebuild. This isn’t something I recommend you do in the real world. Everything came up clean after the drives were replaced. I even got to modify the mdadm.conf file again to tell it I had 0 spares. The problem was that the filesystem in OpenMediaVault was the same size as it was before. When you click on Grow it expects you to be adding drives, so before the filesystem can grow, you need to expand the md device to fill the larger drives. I recommend taking a backup before you do this. I also unmounted my shares first.
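
For reference, the (sensible, one-at-a-time) swap cycle for each disk looks something like this. This is a sketch only – device names are examples, and you should let each rebuild finish before touching the next drive:

mdadm --manage /dev/md0 --fail /dev/sdb --remove /dev/sdb
# physically swap the drive, then add the new one back in
mdadm --manage /dev/md0 --add /dev/sdb
cat /proc/mdstat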

If you’re using a bitmap, you’ll need to remove it first.

mdadm --grow /dev/md0 --bitmap none
mdadm --grow /dev/md0 --size max
mdadm --wait /dev/md0
mdadm --grow /dev/md0 --bitmap internal
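
If you want to keep an eye on the reshape from a second session while --wait blocks, the standard views work fine:

cat /proc/mdstat
mdadm --detail /dev/md0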

In this example, /dev/md0 is the device you want to grow, and it’s likely that yours is called that too. Note, also, that this will take some time to complete. The next step is to expand the filesystem to fit the RAID device. It’s a good idea to run a forced filesystem check before you do this; resize2fs will want to see a recent one anyway.

fsck -f /dev/md0

Then it’s time to resize (assuming you had no problems in the last step).

resize2fs /dev/md0

You should then be able to remount the device and see the additional capacity. Big thanks to kernel.org for having some useful instructions here.
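
For completeness, the remount and check look something like this (the mount point is an assumption – OMV mounts filesystems under /media by UUID, so substitute your own path):

mount /dev/md0 /media/<your-filesystem-uuid>
df -h | grep md0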

OpenMediaVault – Updating from 2.2.x to 3.0.x

I recently upgraded my home-brew NAS from OpenMediaVault 2.2.14 (Stone burner) to OpenMediaVault 3.0.86 (Erasmus). It’s recommended that you do a fresh install, but I thought I’d give the upgrade a shot as it was only a 10TB recovery if it went pear-shaped (!). They also recommend you disable all your plugins before you upgrade.

Apt-get all of the things

It’s an OS upgrade as well as an application upgrade. In an ssh session I ran

apt-get update && apt-get dist-upgrade && omv-update

This refreshes the package lists, upgrades your distro (Debian), and then pulls down the necessary OMV packages. I then ran the OMV release upgrade.

omv-release-upgrade

This seemed to go well. I rebooted the box and could still access the shared data. Happy days. When I tried to access the web UI, however, I could enter my credentials but I couldn’t get in. I then ran

omv-firstaid

And tried to reconfigure the web interface. It kept complaining about a file not being found. So I ran

dpkg -l | grep openmediavault

This told me that there was still a legacy plugin (openmediavault-dnsmasq) installed. I’d read on the forums that this might cause some problems. So I used apt-get to remove it.

apt-get remove openmediavault-dnsmasq

The next time I ran apt-get it told me there were some legacy packages present that I could remove. So I did.

apt-get autoremove dnsmasq dnsmasq-base libnetfilter-conntrack3

After that, I was able to log in to the web UI with no problems and everything now seems to be in order. When my new NAS arrives I’ll evacuate this one and rebuild it from scratch. There are a fair few changes in version 3 and it’s worth checking out. You can download the ISO image from here.

DNS Matters

The reason I had the dnsmasq plugin installed in the first place was that I’d been using the NAS as a DHCP / DNS server. This had been going reasonably well, but I’d heard about Pi-hole and wanted to give that a shot. That’s a story for another time, but I did notice that my OMV box hadn’t updated its /etc/resolv.conf file correctly, despite the fact that I’d reconfigured DNS via the web GUI. If you run into this issue, just run

dpkg-reconfigure resolvconf

And you’ll find that resolv.conf is correctly updated. Incidentally, if you’re a bit old-fashioned and don’t like to run everything through DHCP reservations, you can add a second set of static host entries to dnsmasq on your Pi-hole machine by following these instructions.
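
If you just want the gist of those instructions: dnsmasq can be pointed at an extra hosts file with its addn-hosts option. Here’s a sketch, with file paths and addresses that are my own examples rather than Pi-hole gospel:

# /etc/dnsmasq.d/02-lan-hosts.conf
addn-hosts=/etc/pihole/lan.list

# /etc/pihole/lan.list (standard hosts file format)
192.168.0.10 nas
192.168.0.11 printer

Restart the dnsmasq (or pihole-FTL) service afterwards and the extra names will resolve.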

OpenMediaVault – Modifying Monit Parameters

You can file this article under “not terribly useful but something I may refer to again in the future”. I’ve been migrating a bunch of data from one of my QNAP NAS devices at home to my OpenMediaVault NAS. Monit, my “faithful employee”, sent me an email to let me know I was filling up the filesystem on the OMV NAS.

By default OMV alerts at 80% full. You can change this though. Just jump on a terminal and run the following:

nano /etc/default/openmediavault

Add this line to the file

OMV_MONIT_SERVICE_FILESYSTEM_SPACEUSAGE=95

Then run the following commands to update the configuration

omv-mkconf collectd
monit restart collectd
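
If you want to double-check that monit is happy with the regenerated configuration, these stock monit commands do the trick:

monit -t
monit status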

Of course, you need to determine what level of filesystem usage you’re comfortable with. In this example, I’ve set it to 95% as it’s a fairly static environment. If, however, you’re capable of putting a lot of data on the device quickly, then a 5% buffer may be insufficient. I’d also like to clarify that I’m not unhappy with QNAP, but the device I’m migrating off is 8 years old now and it would be a pain to have to recover if something went wrong. If you’re interested in reading more about Monit you can find documentation here.

Exablox Isn’t Just Pretty Hardware

Disclaimer: I recently attended Storage Field Day 10.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Before I get started, you can find a link to my raw notes on Exablox‘s presentation here. You can also see videos of the presentation here.  You can find a preview post from Chris M. Evans here.

It’s Not Just the Hardware

I waxed lyrical about the Exablox hardware platform after seeing it at Storage Field Day 7. But while the OneBlox hardware is indeed pretty cool (you can see the specifications here), the cloud-based monitoring platform, OneSystem, is really the interesting bit.

According to Exablox, the “OneSystem application is used to combine OneBlox appliances into Rings as well as configuring shares, user access, and remote replication”. It’s the mechanism used for configuration, as well as monitoring, alerting and reporting.

OneSystem is built on a cloud-based, multi-tenant architecture. There’s nothing to install for organisations, VARs, and MSPs. Although if you feel a bit special about how your data is treated, there is an optional, private OneSystem deployment available for on-premises management. Exablox pride themselves on the “world-class” support they provide to customers, with a customer-first culture being one of the dominant themes when talking to them about support capability. Some of the other benefits of the OneSystem approach are:

  • The ability to globally manage OneBlox from anywhere; and
  • Seamless delivery of OneBlox software upgrades.

Exablox also provide 24×7 proactive monitoring, with insight into, amongst other things:

  • Storage utilisation and analysis;
  • Storage health and alerts; and
  • OneBlox drive health.

The cool thing about this platform is that it offers the ability to configure custom storage policies and simple scaling for individual applications. In this manner you can configure the following data services on a “per application” basis:

  • Variable or fixed-length deduplication;
  • Compression on/off;
  • Continuous data protection on/off and retention; and
  • Remote replication on/off.

I Want My Data Everywhere

While the OneBlox ring is currently limited to 7 systems per cluster, you can have two or more (up to 10) clusters operating in a mesh for replication. You can then conceivably have a whole bunch of different data protection schemes in place depending on what you need to protect and where you need it protected. The great thing is that, with the latest version of OneSystem, you can have a one-to-many replication relationship between directories as well. This kind of flexibility is really neat in my opinion. Note that replication is asynchronous.

SFD10_Exablox_Mutli-siteReplication

Further Reading and Final Thoughts

If you’ve read any of my recent posts on the likes of Pure, Nimble and Tintri, you’ll have noticed that everyone and their dog is into cloud-based monitoring and analytics systems for storage platforms. This is in no way a bad thing, and something I’m glad we’re seeing become a prevalent feature of these “modern” storage architectures. We store a whole bunch of data on these things. And sometimes it’s even data that is vital to the success of the various business endeavours we undertake on a daily basis. So it’s great to see vendors taking this requirement seriously. It also helps somewhat that people are a little more comfortable with the concept of keeping information in “the cloud”. This certainly helps the vendors control the end user experience from a support viewpoint, rather than relying on arcane systems deployed across multiple VMs that invariably fail right when you need to dig into the data to find out what’s really going on in the environment.

Exablox have come up with a fairly unique approach to scale-out NAS, and I’m keen to see where they take it from here. Features such as remote replication and the continuing maturity of the OneSystem platform make me think that they’re gearing up to push things a little beyond the BYO drives SMB space. I’ll be interested to see just how that plays out.

Ray Lucchesi did a thorough write-up on Exablox that you can read here, while Francesco Bonetti did a great write-up here. Exablox has also published a technical overview of OneBlox and OneSystem that is worth checking out.

OpenMediaVault – Annoying mdadm e-mails after a rebuild

My homebrew NAS running OpenMediaVault (based on Debian) started writing to me recently. I’d had a disk failure and replaced the disk in the RAID set with another one. Everything rebuilt properly, but then this mdadm chap started sending me messages daily.

"This is an automatically generated mail message from mdadm
 running on openmediavault
A SparesMissing event had been detected on md device /dev/md0.
Faithfully yours, etc.
P.S. The /proc/mdstat file currently contains the following:
Personalities : [raid6] [raid5] [raid4] 
 md0 : active raid6 sdi[0] sda[8] sdb[6] sdc[5] sdd[4] sde[3] sdf[2] sdh[1]
 11720297472 blocks super 1.2 level 6, 512k chunk, algorithm 2 [8/8] [UUUUUUUU]
unused devices: <none>"

It was nice of it to get in touch, but I’d never had spares configured on this md device. The fix is simple, and is outlined here and here. In short, you’ll want to edit /etc/mdadm/mdadm.conf and change spares=1 to spares=0. This is assuming you don’t want spares configured and are relying on parity for resilience. If you do want spares configured then it’s probably best you look into the problem a little more.
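
For illustration, the relevant ARRAY line in /etc/mdadm/mdadm.conf ends up looking something like this (the UUID here is invented for the example):

ARRAY /dev/md0 metadata=1.2 spares=0 name=openmediavault:0 UUID=a1b2c3d4:e5f6a7b8:c9d0e1f2:a3b4c5d6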

OpenMediaVault – A few notes

Following on from my brief look at FreeNAS here, I thought I’d do a quick article on OpenMediaVault as well. While it isn’t quite as mature as FreeNAS, it is based on Debian. I’ve had a soft spot for Debian ever since I was able to get it running on a DECpc AXP 150 I had lying about many moons ago. The Jensen is no longer with us, but the fond memories remain. Anyway …

Firstly, you can download OpenMediaVault here. It’s recommended that you install it on a hard drive (ideally in a RAID 1 configuration) rather than on USB or SD cards. Theoretically you could put it on a stick and redirect the more frequently written stuff to a RAM disk if you really didn’t want to give up the SATA ports on your board (there’s a sketch of this below). I decided to use an SSD I had lying about as I couldn’t be bothered with more workarounds and “tweaks”. You can follow this guide to set up some semi-automated backup of the configuration.
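
On the RAM disk idea: one crude way to do it is tmpfs entries in /etc/fstab for the chattier directories. This is a sketch only (sizes and paths are examples), and a plugin like openmediavault-flashmemory is the tidier route:

tmpfs /var/log tmpfs defaults,noatime,size=64m 0 0
tmpfs /tmp tmpfs defaults,noatime,size=128m 0 0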

Secondly, here’s a list of the hardware I used for this build:

  • Mainboard – ASRock N3700-ITX
  • CPU – Intel Quad-Core Pentium Processor N3700 (on-board)
  • RAM – 2 * Kingston 8GB 1600MHz DDR3 Non-ECC CL11 SODIMM
  • HDDs – 1 * SSD, 8 * Seagate Constellation ES 2TB drives
  • SATA Controller – PCIe x1 4-port SATA III controller (non-RAID), using a Marvell 88SE9215 chipset
  • SATA Controller – IO Crest Mini PCIe 2-port SATA III controller (RAID capable), using a Syba (?) chipset
  • Case – Fractal Design Node 804
  • PSU – Silverstone Strider Essential 400W

IMG_3054

You’ll notice the lack of ECC RAM, and the board is limited in SATA ports, hence the requirement for a single-lane, 4-port SATA card. I’m really not the best at choosing the right hardware for the job. The case is nice and roomy, but there’s no hot-swap for the disks. A better choice would have been a workstation-class board with support for ECC RAM, a decent CPU and a bunch of SATA ports in a micro-ATX form-factor. I mean, it works, but it could have been better. I’d like to think it’s because the market is a bit more limited in Australia, but it’s more because I’m not very good at this stuff.

Thirdly, if you do end up with the ASRock board, you’ll need to make a change to your grub configuration so that the board will boot headless. To do this, ssh or console onto the machine and edit /etc/default/grub. Uncomment GRUB_TERMINAL=console (by removing the #). You’ll then need to run update-grub and you should be right to boot the machine without a monitor connected.
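
In command form, that boils down to:

nano /etc/default/grub
# uncomment the GRUB_TERMINAL=console line, save, then:
update-grub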

Finally, the OMV experience has been pretty good thus far. None of these roll-your-own options are as pretty as their QNAP or Synology brethren from a UX perspective, but they do the job in a functional, if somewhat sparse fashion. That said, having been a QNAP user for about 7 years now, I remember that it wasn’t always the eye candy that it is nowadays. Also of note, OMV has a pretty reasonable plugin ecosystem you can leverage, with Plex and a bunch of extras being fairly simple to install and configure. I’m looking forward to running this thing through its paces and posting the performance and usability results.

FreeNAS – A few notes

Mat and I have been talking about FreeNAS a lot recently. My QNAP TS-639 Pro is approaching 7 years old and I’m reluctant to invest further money in drives for it. So we’ve been doing a bit of research on what might be good hardware and so forth. I thought I’d put together a few links that I found useful and share some commentary.

Firstly, FreeNAS has been around for a while now, and there is a plethora of useful documentation available via the official documentation, forums and blog posts. While digging through the comments on a post I noticed someone saying that the FreeNAS crowd like to patronise people a lot. It might be a little unfair, although they do sometimes come across as a bit dickish, so be prepared. It’s like anything on the internet really.

Secondly, most of the angst comes about through the choices people make for their DIY hardware builds. There’s a lot of talk about ECC RAM and why it’s critical to a decent build. I have a strong dislike of the word “noobs” and variants, but there are some good points made in this thread. Brian Moses has an interesting counter here, which I found insightful as well. So, your mileage might vary. For what it’s worth, I’m not using ECC RAM in my current build, but I am by no means a shining light when it comes to best practice for IT in the home. If I was going to store data on it that I couldn’t afford to reload from another source (I’m using it to stream mkv files around the house) I would look at ECC.

Thirdly, one of the “folk of the forum”, as I’ll now call them, has a handy primer on FreeNAS that you can view in a few different ways here. It hasn’t been updated in a little while, but it covers off a lot of the salient points when looking at doing your own build and getting started with FreeNAS. If you want a few alternative approaches to what may or may not work for you, have a look at Brian’s post here, as well as this one and this one. Also, if you’re still on the fence about FreeNAS, take a look at Brian’s DIY NAS Software Roundup – it’s well written and covers a number of the important points. The key takeaways when looking at doing your own build are as follows:

  • Do your research before you buy stuff;
  • Don’t go cheap on RAM (ECC if you can);
  • Think about the real requirement for ZIL or L2ARC; and
  • Not everyone on the internet is a prick, but sometimes it will seem like that.

Finally, my experience with FreeNAS itself has been pretty good. I admit that I haven’t used FreeBSD or its variants in quite a few years, but the web interface is pretty easy to navigate. I’ve mucked about a bit with the different zpool configurations, and how to configure the ZIL and L2ARC on a different drive (that post is coming shortly, but there’s a quick sketch below). The installation is straightforward and once I got my head around the concept of jails it was easy to set up Plex and give it a spin too. Performance was good given the hardware I’ve tested on (when the drives weren’t overheating due to the lack of airflow and an Aussie summer). I’m hoping to do the real build this week or next, so I’ll see how it goes then and report back. I might give NexentaStor Community Edition a crack as well. I have a soft spot for them because they gave me some shoes once. In the meantime, if anyone at iXsystems wants to send me a FreeNAS Mini, just let me know.
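
As a taste of the zpool side of things, creating a RAID-Z2 pool with separate log (ZIL / SLOG) and cache (L2ARC) devices looks something like this from the shell. Device names are examples only, and the FreeNAS web UI will do the same job for you:

zpool create tank raidz2 ada1 ada2 ada3 ada4 ada5 ada6
zpool add tank log ada7
zpool add tank cache ada8
zpool status tank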

QNAP – Upgrading Firmware via the CLI

For some reason, I keep persisting with my QNAP TS-639 II, despite the fact that every time something goes wrong with it I spend hours trying to revive it. In any case, I recently had an issue with a disk showing SMART warnings. I figured it would be a good idea to replace it before it became a big problem. I had some disks on the shelf from the last upgrade. When I popped one in, however, it sent me this e-mail.

Server Name: qnap639
IP Address: 192.168.0.110
Date/Time: 28/05/2015 06:27:00
Level: Warning
The firmware versions of the system built-in flash (4.1.3 Build 20150408) and the hard drive (4.1.2 Build 20150126) are not consistent. It is recommended to update the firmware again for higher system stability.

Not such a great result. I ignored the warning and manually rebuilt the /dev/md0 device. When I rebooted, however, I still had the warning. And a missing disk from the md0 device (but that’s a story for later). To get around this problem, it is recommended that you reinstall the array firmware via the shell. I took my instructions from here. In short, you copy the image file to a share, copy that to an update directory, run a script, and reboot. It fixed my problem as it relates to that warning, but I’m still having issues getting a drive to join the RAID device. I’m currently clearing the array again and will put in a new drive next week. Here’s what it looks like when you upgrade the firmware this way.

[/etc/config] # cd /
[/] # mkdir /mnt/HDA_ROOT/update
mkdir: Cannot create directory `/mnt/HDA_ROOT/update': File exists
[/] # cd /mnt/HDA_ROOT/update
[/mnt/HDA_ROOT/update] # ls
[/mnt/HDA_ROOT/update] # cd /
[/] # cp /share/Public/TS-639_20150408-4.1.3.img /mnt/HDA_ROOT/update/
[/] # ln -sf /mnt/HDA_ROOT/update /mnt/update
[/] # /etc/init.d/update.sh /mnt/HDA_ROOT/update/TS-639_20150408-4.1.3.img 
cksum=238546404
Check RAM space available for FW update: OK.
Using 120-bit encryption - (QNAPNASVERSION4)
len=1048576
model name = TS-639
version = 4.1.3
boot/
bzImage
bzImage.cksum
config/
fw_info
initrd.boot
initrd.boot.cksum
libcrypto.so.1.0.0
libssl.so.1.0.0
qpkg.tar
qpkg.tar.cksum
rootfs2.bz
rootfs2.bz.cksum
rootfs_ext.tgz
rootfs_ext.tgz.cksum
update/
update_img.sh
4.1.3 20150408 
OLD MODEL NAME = TS-639
Allow upgrade
Allow upgrade
/mnt/HDA_ROOT/update
1+0 records in
1+0 records out
tune2fs 1.41.4 (27-Jan-2009)
Setting maximal mount count to -1
Setting interval between checks to 0 seconds
Update image using HDD ...
bzImage cksum ... Pass
initrd.boot cksum ... Pass
rootfs2.bz cksum ... Pass
rootfs_ext.tgz cksum ... Pass
rootfs_ext.tgz cksum ... Pass
qpkg.tar cksum ... Pass
Update RFS1...
mke2fs 1.41.4 (27-Jan-2009)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
13832 inodes, 55296 blocks
0 blocks (0.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=56623104
7 block groups
8192 blocks per group, 8192 fragments per group
1976 inodes per group
Superblock backups stored on blocks: 
8193, 24577, 40961
Writing inode tables: done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 21 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
Checking bzImage ... ok
Checking initrd.boot ... ok
Checking rootfs2.bz ... ok
Checking rootfs_ext.tgz ... ok
Update RFS2...
mke2fs 1.41.4 (27-Jan-2009)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
13832 inodes, 55296 blocks
0 blocks (0.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=56623104
7 block groups
8192 blocks per group, 8192 fragments per group
1976 inodes per group
Superblock backups stored on blocks: 
8193, 24577, 40961
Writing inode tables: done                            
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 31 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
1+0 records in
1+0 records out
Update Finished.
Make a Backup
/share/MD0_DATA
qpkg.tar cksum ... Pass
set cksum [238546404]
[/] # reboot
[/] #


Storage Field Day 7 – Day 3 – Exablox

Disclaimer: I recently attended Storage Field Day 7.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

For each of the presentations I attended at SFD7, there are a few things I want to include in the post. Firstly, you can see video footage of the Exablox presentation here. You can also download my raw notes from the presentation here. Finally, here’s a link to the Exablox website that covers some of what they presented.

Brief Overview

Exablox was founded in 2010 and launched publicly in April 2013. There are two key elements to their solution:

  • OneBlox – scale-out storage for the enterprise, offering converged storage for primary and backup / archival data; and
  • OneSystem – manage on-premises storage exclusively from anywhere, providing visibility, control, and security without the cost / complexity of traditional management.

Here’s a photo of Tad Hunt (CTO and Co-founder) showing us the internals of the Exablox appliance.

IMG_1214_11

Architecture

Exablox started the presentation by talking about what we want from storage re-imagined (my words, not theirs):

  • Scale out;
  • Deduplication;
  • Snapshots;
  • Replication;
  • Be simple yet powerful; and
  • Be managed from everywhere.

The Exablox approach is not your father’s standard storage presentation play. Instead of providing traditional file storage via SMB / NFS, or object storage via APIs, it presents file protocols on the front-end and services these with object storage on the back-end.

exablox-architecture-diagram

Technology Vision

Exablox’s approach revolves around software-defined storage (SDS) and storage management, with the following goals:

  • Manage the policy, not the technology;
  • SDS “wrapped in tin” for the mid market;
  • Eliminate complexity;
  • Plug-and-play; and
  • Next generation features.

They deliver NAS features atop object storage:

  • Without metadata servers;
  • Without bolt-on NAS gateways;
  • Without separate data and metadata servers; and
  • To scale capacity, performance, or resilience: just add a node.

Technology Benefits

Exablox say they can create scale-out NAS and object clusters atop mixed media – HDD, SSD, Shingled drives. This approach delivers the benefits of object storage technology to traditional applications:

  • By using standard file protocols; and
  • By eliminating forklift upgrades – a single namespace across the scale of the cluster.

They also use “RAID-free” data protection:

  • Self-healing from multiple drive and node failures;
  • Rebalancing time proportional to the quantity of objects on the failed drive;
  • Mix and match drive types, capacities, technologies; and
  • Introduce next generation drives without long validation cycles.

This provides the ability to scale capacity from TB to PB easily, whilst also offering:

  • Zero configuration expansion; and
  • Manage from anywhere capability.

Exablox say they are able to support all NAS workloads well. Whereas other object stores are designed primarily for large files, a OneBlox 3308 can handle 1B objects. All nodes perform all functions: storage, control, NAS interface, with a node being a single failure domain.

Hardware Notes and Thoughts

For the purposes of this post, I wanted to focus on the OneBlox appliance. While the OneSystem architecture is super neat, I still get a bit of a nerd tingle when I see some nice hardware. (BTW if Exablox want me to test one long-term I’d be happy to oblige).

Exablox claims to be the sole provider of the following features in a single storage solution:

  • Scale-out deduplication;
  • Scale-out, continuous snapshots;
  • Scale-out, RAID-less capacity;
  • Scale-out, site-to-site disaster recovery; and
  • Bring any drive – one at a time at retail pricing.

They also support auto-clustering, with each node adding:

  • Capacity;
  • Performance; and
  • Resiliency.

The OneBlox 3308 appliance:

  • Is seriously bloody quiet;
  • Uses 100W under peak load;
  • Has 8 * 3.5” drive bays, supporting up to 48TB raw; and
  • Can use a mix of SATA & SAS drives.

Here is a picture of some appliances on a rack.

IMG_1213_cropped

Further Reading

I was impressed with the strategy presented to me by Exablox, and the apparent ease of deployment and overall design of the appliance seemed great on the surface. I’d like to be clear that I haven’t used these in the wild, nor have I had any view of any benchmark data, so I can’t comment as to the effective performance of these devices. Like most things in storage, your mileage might vary. But I will say they seem quite inexpensive for what they do, and I recommend taking a more detailed look at them.

I also recommend you check out Keith’s preview post on Exablox.  For a different perspective on the hardware, have a look at Storage Review’s take on things as well.

EMC announces Isilon enhancements

I sat in on a recent EMC briefing regarding some Isilon enhancements and I thought my three loyal readers might like to read through my notes. As I’ve stated before, I am literally one of the worst tech journalists on the internet, so if you’re after insight and deep analysis, you’re probably better off looking elsewhere. Let’s focus on skimming the surface instead, yeah? As always, if you want to know further about these announcements, the best place to start would be your local EMC account team.

Firstly, EMC have improved what I like to call the “Protocol Spider”, with support for the following new protocols:

  • SMB 3.0
  • HDFS 2.3*
  • OpenStack SWIFT*

* Note that this will be available by the end of the year.

Here’s a picture that says pretty much the same thing as the words above.

isilon_protocols

In addition to the OneFS updates, two new hardware models have also been announced.

S210

  • Up to 13.8TB globally coherent cache in a single cluster (96GB RAM per node);
  • Dual Quad-Core Intel 2.4GHz Westmere Processors;
  • 24 * 2.5” 300GB or 600GB 10Krpm Serial Attached SCSI (SAS) 6Gb/s Drives; and
  • 10GbE (Copper & Fiber) Front-end Networking Interface.

Out with the old and in with the new.

S200vsS210_cropped

X410

  • Up to 6.9TB globally coherent cache in a single cluster (48GB RAM per node);
  • Quad-Core Intel Nehalem E5504 Processor;
  • 12 * 3.5” 500GB, 1TB, 2TB, 3TB 7.2Krpm Serial ATA (SATA) Drives; and
  • 10GbE (Copper & Fiber) Front-end Networking Interface.

Some of the key features include:

  • 50% more DRAM in baseline configuration than current 2U X-series platform;
  • Configurable memory (6GB to 48GB) per node to suit specific application & workflow needs;
  • 3x increase in density per RU thus lowering power, cooling and footprint expenses;
  • Enterprise SSD support for latency sensitive namespace acceleration or file storage apps; and
  • Redesigned chassis that delivers superior cooling and vibration control.

Here’s a picture that does a mighty job of comparing the new model to the old one.

X400vsX410_cropped

Isilon SmartFlash

EMC also announced SmartFlash for Isilon, which uses SSDs in addition to DRAM for cache capacity. The upshot is that you can have up to 1PB of flash cache versus 37TB of DRAM. It’s also globally coherent, unlike some of my tweets.

Here’s a picture.

Isilon_SmartFlash