Cohesity – NAS Data Migration Overview

Data Migration

Cohesity NAS Data Migration, part of SmartFiles, was recently announced as a generally available feature in the Cohesity DataPlatform 6.4 release (after being mentioned in the 6.3 release blog post). The idea is that you can use it to migrate NAS data from a primary source to the Cohesity DataPlatform. It’s supported for NAS storage registered as SMB or NFS, so it doesn’t necessarily need to be a NAS appliance as such; it can also be a file share hosted somewhere.

 

What To Think About

There are a few things to think about when you configure your migration policy, including:

  • The last time the file was accessed;
  • The last time the file was modified; and
  • The size of the file.

You also need to think about how frequently you want to run the job. Finally, it’s worth considering which View you want the archived data to reside on.

 

What Happens?

When the data is migrated, an SMB2 symbolic link with the same name as the original file is left in its place, and the data itself is moved to the Cohesity View. Note that on Windows boxes, remote-to-remote symbolic link evaluation is disabled by default, so you need to run these commands:

C:\Windows\system32>fsutil behavior set SymlinkEvaluation R2R:1
C:\Windows\system32>fsutil behavior query SymlinkEvaluation

Once the data is migrated to the Cohesity cluster, subsequent read and write operations are performed on the Cohesity host. You can move data back to the environment by mounting the Cohesity target View on a Windows client, and copying it back to the NAS.
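
If you do need to bring data back in bulk, a minimal sketch of the process from a Windows client might look like the following. The cluster name, View name, and NAS share path are placeholders for illustration, not values from a real environment:

:: Map the Cohesity target View (hypothetical cluster and View names)
net use Z: \\cohesity-cluster\migrated-view

:: Copy the data back to the original NAS share, preserving attributes and ACLs
robocopy Z:\ \\nas01\share /E /COPYALL /R:1 /W:1

You’d obviously want to test this on something unimportant first, and keep an eye on the symbolic links that were left behind during the migration.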

 

Configuration Steps

To get started, select File Services, and click on Data Migration.

Click on Migrate Data to configure a migration job.

You’ll need to give it a name.

 

The next step is to select the Source. If you already have a NAS source configured, you’ll see it here. Otherwise you can register a Source.

Click on the arrow to expand the registered NAS mount points.

Select the mount point you’d like to use.

Once you’ve selected the mount point, click on Add.

You then need to select the Storage Domain (formerly known as a ViewBox) to store the archived data on.

You’ll need to provide a name, and configure schedule options.

You can also configure advanced settings, including QoS and exclusions. Once you’re happy, click on Migrate and the job will be created.

You can then run the job immediately, or wait for the schedule to kick in.

 

Other Things To Consider

You’ll need to think about your anti-virus options as well. You can register external anti-virus software or install the anti-virus app from the Cohesity Marketplace.

 

Thoughts And Further Reading

Cohesity have long positioned their secondary storage solution as something more than just a backup and recovery solution. There’s some debate about the difference between storage management and data management, but Cohesity seem to have done a good job of introducing yet another feature that can help users easily move data from their primary storage to their secondary storage environment. Plenty of backup solutions have positioned themselves as archive solutions, but many have been focused on moving protection data, rather than primary data from the source. You’ll need to do some careful planning around sizing your environment, as there’s always a chance that an end user will turn up and start accessing files that you thought were stale. And I can’t say with 100% certainty that this solution will transparently work with every line of business application in your environment. But considering it’s aimed at SMB and NFS shares, it looks like it does what it says on the tin, and moves data from one spot to another.

You can read more about the new features in Cohesity DataPlatform 6.4 (Pegasus) on the Cohesity site, and Blocks & Files covered the feature here. Alastair also shared some thoughts on the feature here.

OpenMediaVault – Good Times With mdadm

Happy 2019. I’ve been on holidays for three full weeks and it was amazing. I’ll get back to writing about boring stuff soon, but I thought I’d post a quick summary of some issues I’ve had with my home-built NAS recently and what I did to fix it.

Where Are The Disks Gone?

I got an email one evening with the following message.

I do enjoy the “Faithfully yours, etc” and the post script is the most enlightening bit. See where it says [UU____UU]? Yeah, that’s not good. There are 8 disks that make up that device (/dev/md0), so it should look more like [UUUUUUUU]. But why would 4 out of 8 disks just up and disappear? I thought it was a little odd myself. I had a look at the ITX board everything was attached to and realised that those 4 drives were plugged into a PCI SATA-II card. It seems that either the slot on the board or the card is now failing intermittently. I say “seems” because that’s all I can think of, as the S.M.A.R.T. status of the drives is fine.
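
For what it’s worth, this is roughly what I mean by the S.M.A.R.T. status being fine; a quick check with smartctl (from the smartmontools package, with the device name adjusted to suit) showed the drives reporting healthy:

# Overall health self-assessment for one of the member drives
sudo smartctl -H /dev/sda
# Full attribute dump, including reallocated sectors and error logs
sudo smartctl -a /dev/sda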

Resolution, Baby

The short-term fix to get the filesystem back online and usable was the classic “assemble” switch with mdadm. Long time readers of this blog may have witnessed me doing something similar with my QNAP devices from time to time. After I panic rebooted the box a number of times (a silly thing to do, really), it finally responded to pings. Checking out /proc/mdstat wasn’t good though.

dan@openmediavault:~$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
unused devices: <none>

Notice the lack of, erm, devices there? That’s non-optimal. The fix requires a forced assembly of the devices comprising /dev/md0.

dan@openmediavault:~$ sudo mdadm --assemble --force --verbose /dev/md0 /dev/sd[abcdefhi]
[sudo] password for dan:
mdadm: looking for devices for /dev/md0
mdadm: /dev/sda is identified as a member of /dev/md0, slot 0.
mdadm: /dev/sdb is identified as a member of /dev/md0, slot 1.
mdadm: /dev/sdc is identified as a member of /dev/md0, slot 3.
mdadm: /dev/sdd is identified as a member of /dev/md0, slot 2.
mdadm: /dev/sde is identified as a member of /dev/md0, slot 5.
mdadm: /dev/sdf is identified as a member of /dev/md0, slot 4.
mdadm: /dev/sdh is identified as a member of /dev/md0, slot 7.
mdadm: /dev/sdi is identified as a member of /dev/md0, slot 6.
mdadm: forcing event count in /dev/sdd(2) from 40639 upto 40647
mdadm: forcing event count in /dev/sdc(3) from 40639 upto 40647
mdadm: forcing event count in /dev/sdf(4) from 40639 upto 40647
mdadm: forcing event count in /dev/sde(5) from 40639 upto 40647
mdadm: clearing FAULTY flag for device 3 in /dev/md0 for /dev/sdd
mdadm: clearing FAULTY flag for device 2 in /dev/md0 for /dev/sdc
mdadm: clearing FAULTY flag for device 5 in /dev/md0 for /dev/sdf
mdadm: clearing FAULTY flag for device 4 in /dev/md0 for /dev/sde
mdadm: Marking array /dev/md0 as 'clean'
mdadm: added /dev/sdb to /dev/md0 as 1
mdadm: added /dev/sdd to /dev/md0 as 2
mdadm: added /dev/sdc to /dev/md0 as 3
mdadm: added /dev/sdf to /dev/md0 as 4
mdadm: added /dev/sde to /dev/md0 as 5
mdadm: added /dev/sdi to /dev/md0 as 6
mdadm: added /dev/sdh to /dev/md0 as 7
mdadm: added /dev/sda to /dev/md0 as 0
mdadm: /dev/md0 has been started with 8 drives.

In this example you’ll see that /dev/sdg isn’t included in my command. That device is the SSD I use to boot the system. Sometimes Linux device conventions confuse me too. If you’re in this situation and you think this is just a one-off thing, then you should be okay to unmount the filesystem, run fsck over it, and re-mount it. In my case, this has happened twice already, so I’m in the process of moving data off the NAS onto some scratch space and have procured a cheap little QNAP box to fill its role.
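
For completeness, that one-off recovery path looks roughly like this once the array has been re-assembled (the mount point and filesystem type will vary depending on how you’ve set things up):

# Unmount the filesystem sitting on the re-assembled array
sudo umount /dev/md0
# Check (and repair, if needed) the filesystem
sudo fsck -f /dev/md0
# Re-mount everything defined in /etc/fstab
sudo mount -a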

 

Conclusion

My rush to replace the homebrew device with a QNAP isn’t a knock on the OpenMediaVault project by any stretch. OMV itself has been very reliable and has done everything I needed it to do. Rather, my ability to build semi-resilient devices on a budget has simply proven quite poor. I’ve seen some nasty stuff happen with QNAP devices too, but at least any issues will be covered by some kind of manufacturer’s support team and warranty. My NAS is only covered by me, and I’m just not that interested in working out what could be going wrong here. If I’d built something decent I’d get some alerting back from the box telling me what’s happened to the card that keeps failing. But then I would have spent a lot more on this box than I would have wanted to.

I’ve been lucky thus far in that I haven’t lost any data of real import (the NAS devices are used to store media that I have on DVD or Blu-Ray – the important documents are backed up using Time Machine and Backblaze). It is nice, however, that a tool like mdadm can bring you back from the brink of disaster in a pretty efficient fashion.

Incidentally, if you’re a macOS user, you might have a bunch of .DS_Store files on your filesystem. Or stuff like .@Thumb or some such. These things are fine, but macOS doesn’t seem to like them when you’re trying to move folders around. This post provides some handy guidance on how to get rid of those files in a jiffy.
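
I won’t reproduce that post here, but if you just want the .DS_Store files gone before a big move, something along these lines (run from the top of the share, at your own risk) does the trick:

# Recursively delete macOS .DS_Store files from the current directory down
find . -type f -name '.DS_Store' -delete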

As always, if the data you’re storing on your NAS device (be it home-built or off the shelf) is important, please make sure you back it up. Preferably in a number of places. Don’t get yourself in a position where this blog post is your only hope of getting your one copy of your firstborn’s pictures from the first day of school back.

Panasas Overview

A good friend of mine is friends with someone who works at Panasas and suggested I might like to hear from them. I had the opportunity to speak to some of the team, and I thought I’d write a brief overview of what they do. Hopefully I’ll have the opportunity to cover them in the future as I think they’re doing some pretty neat stuff.

 

It’s HPC, But Not As You Know It

I don’t often like to include that slide where the vendor compares themselves to other players in the market. In this case, though, I thought Panasas’s positioning of themselves as “commercial” HPC versus the traditional HPC storage (and versus enterprise scale-out NAS) is an interesting one. We talked through this a little, and my impression is that they’re starting to deal more and more with the non-traditional HPC-like use cases, such as media and entertainment, oil and gas, genomics folks, and so forth. A number of these workloads fall outside HPC, in the sense that traditional HPC has lived almost exclusively in government and the academic sphere. The roots are clearly in HPC, but there are “enterprise” elements creeping in, such as ease of use (at scale) and improved management functionality.

[image courtesy of Panasas]

 

Technology

It’s Really Parallel

The real value in Panasas’s offering is the parallel access to the storage. The more nodes you add, the more performance improves. In a serial system, a client can access data via one node in the cluster, regardless of the number of nodes available. In a parallel system, such as this one, a client accesses data that is spread across multiple nodes.

 

What About The Hardware?

The current offering from Panasas is called ActiveStor. The platform comprises PanFS running on Director Blades and Storage Blades. Here’s a picture of the Director Blades (ASD-100) and the Storage Blades (ASH-100). The Director has been transitioned to a 2U4N form factor (it used to sit in the blade chassis).

[image courtesy of Panasas]

 

Director Nodes are the Control Plane of PanFS, and handle:

  • Metadata processing: directories, file names, access control checks, timestamps, etc.
  • Use of a transaction log to ensure atomicity and durability of structural changes
  • Coordination of client system actions to ensure single-system view and data-cache-coherence
  • “Realm” membership (Panasas’s name for the storage cluster), realm self-repair, etc.
  • Realm maintenance: file reconstruction, automatic capacity balancing, scrubbing, etc.

Storage Nodes are the Data Plane of PanFS, and deal with:

  • Storage of bulk user data for the realm, accessed in parallel by client systems
  • Storage of, but no operations on, all of the system’s metadata on behalf of the Director Nodes
  • An API based upon the T10 SCSI “Object-Based Storage Device” standard that Panasas helped define

Storage nodes offer a variety of HDD capacities (4TB, 6TB, 8TB, 10TB, or 12TB) and SSD capacities (480GB, 960GB, or 1.9TB) depending on the type of workload you’re dealing with. The SSD is used for metadata and files smaller than 60KB. Everything else is stored on the larger drives.

 

DirectFlow Protocol

DirectFlow is a big part of what differentiates Panasas from your average scale-out NAS offering. It does some stuff that’s pretty cool, including:

  • Support for parallel delivery of data to / from Storage Nodes
  • Support for fully POSIX-compliant semantics, unlike NFS and SMB
  • Support for strong data cache-coherency across client systems

It’s a proprietary protocol between clients and ActiveStor components, and there’s an installable kernel module for each client system (Linux and macOS). They tell me that pNFS is based upon DirectFlow, and they had a hand in defining pNFS.

 

Resilience

Scale-out NAS is exciting, but we enterprise types want to know about resilience. It’s all fun and games until someone fat fingers a file, or a disk dies. Well, Panasas, as it happens, have a little heritage when it comes to disk resilience. They use an N + 2 RAID 6 (10 wide + P & Q). You could have more disks working for you, but this number seems to work best for Panasas customers. In terms of realms, there are 3, 5 or 7 “rep sets” per realm. There’s also a “realm president”, and every Director has a backup Director. Other resilience features include:

  • Per-file erasure coding of striped files allows the whole cluster to help rebuild a file after a failure;
  • Only the data protection on specific files needs to be rebuilt, rather than entire drive(s); and
  • The percentage of files in the cluster affected by any given failure approaches zero at scale.

 

Thoughts and Further Reading

I’m the first to admit that my storage experience to date has been firmly rooted in the enterprise space. But, much like my fascination with infrastructure associated with media and entertainment, I fancy myself as an HPC-friendly storage guy. This is for no other reason than I think HPC workloads are pretty cool, and they tend to scale beyond what I normally see in the enterprise space (keeping in mind that I work in a smallish market). You say genomics to someone, or AI, and they’re enthusiastic about the outcomes. You say SQL 2012 to someone and they’re not as interested.

Panasas are positioning themselves as being suitable, primarily, for commercial HPC storage requirements. They have a strong heritage with traditional HPC workloads, and they seem to have a few customers using their systems for more traditional, enterprise-like NAS deployments as well. This convergence of commercial HPC, traditional HPC, and enterprise NAS requirements has presented some interesting challenges, but it seems like Panasas have addressed those in the latest iteration of their hardware. Dealing with stonking great big amounts of data at scale is a challenge for plenty of storage vendors, but Panasas have demonstrated an ability to adapt to the evolving demands of their core market. I’m looking forward to seeing the next incarnation of their platform, and how they incorporate technologies such as InfiniBand into their offering.

There’s a good white paper available on the Panasas architecture that you can access here (registration required). El Reg also has some decent coverage of the current hardware offering here.

OpenMediaVault – Expanding the Filesystem

I recently had the opportunity to replace a bunch of ageing 2TB drives in my OpenMediaVault NAS with some 3TB drives. I run it in a 6+2 RAID-6 configuration (yes, I know, RAID is dead). I was a bit cheeky and replaced 2 drives at a time and let it rebuild. This isn’t something I recommend you do in the real world. Everything came up clean after the drives were replaced. I even got to modify the mdadm.conf file again to tell it I had 0 spares. The problem was that the size of the filesystem in OpenMediaVault was the same as it was before, and when you click on Grow it expects you to be adding drives. So you can grow the filesystem, but first you need to expand the underlying RAID device to fill the larger drives. I recommend taking a backup before you do this, and I unmounted my shares before I did this too.

If you’re using a bitmap, you’ll need to remove it first.

# Remove the internal write-intent bitmap before growing the array
mdadm --grow /dev/md0 --bitmap none
# Grow the array to use all of the available space on the member drives
mdadm --grow /dev/md0 --size max
# Wait for the resync to finish (this takes a while)
mdadm --wait /dev/md0
# Add the internal bitmap back
mdadm --grow /dev/md0 --bitmap internal

In this example, /dev/md0 is the device being grown; it’s likely that yours is called /dev/md0 too. Note, also, that this will take some time to complete. The next step is to expand the filesystem to fit the RAID device. It’s a good idea to run a filesystem check before you do this.

fsck /dev/md0

Then it’s time to resize (assuming you had no problems in the last step).

resize2fs /dev/md0

You should then be able to remount the device and see the additional capacity. Big thanks to kernel.org for having some useful instructions here.

OpenMediaVault – Updating from 2.2.x to 3.0.x

I recently upgraded my home-brew NAS from OpenMediaVault 2.2.14 (Stone burner) to OpenMediaVault 3.0.86 (Erasmus). It’s recommended that you do a fresh install, but I thought I’d give the upgrade a shot as it was only a 10TB recovery if it went pear-shaped (!). They also recommend you disable all your plugins before you upgrade.

 

Apt-get all of the things

It’s an OS upgrade as well as an application upgrade. In an ssh session I ran

apt-get update && apt-get dist-upgrade && omv-update

This refreshes the package lists, upgrades the distro (Debian), and then pulls in the latest OMV packages. I then ran the OMV release upgrade.

omv-release-upgrade

This seemed to go well. I rebooted the box and could still access the shared data. Happy days. When I tried to access the web UI, however, I could enter my credentials but I couldn’t get in. I then ran

omv-firstaid

And tried to reconfigure the web interface. It kept complaining about a file not being found. So I ran

dpkg -l | grep openmediavault

This told me that there was still a legacy plugin (openmediavault-dnsmasq) installed. I’d read on the forums that this might cause some problems. So I used apt-get to remove it.

apt-get remove openmediavault-dnsmasq

The next time I ran apt-get it told me there were some legacy packages present that I could remove. So I did.

apt-get autoremove dnsmasq dnsmasq-base libnetfilter-conntrack3

After that, I was able to log in to the web UI with no problems and everything now seems to be in order. When my new NAS arrives I’ll evacuate this one and rebuild it from scratch. There are a fair few changes in version 3 and it’s worth checking out. You can download the ISO image from here.

 

DNS Matters

The reason I had the dnsmasq plugin installed in the first place was that I’d been using the NAS as a DHCP / DNS server. This had been going reasonably well, but I’d heard about Pi-hole and wanted to give that a shot. That’s a story for another time, but I did notice that my OMV box hadn’t updated its /etc/resolv.conf file correctly, despite the fact that I’d reconfigured DNS via the web GUI. If you run into this issue, just run

dpkg-reconfigure resolvconf

And you’ll find that resolv.conf is correctly updated. Incidentally, if you’re a bit old-fashioned and don’t like to run everything through DHCP reservations, you can add a second set of static host entries to dnsmasq on your pi-hole machine by following these instructions.

OpenMediaVault – Modifying Monit Parameters

You can file this article under “not terribly useful but something I may refer to again in the future”. I’ve been migrating a bunch of data from one of my QNAP NAS devices at home to my OpenMediaVault NAS. Monit, my “faithful employee”, sent me an email to let me know I was filling up the filesystem on the OMV NAS.

By default OMV alerts at 80% full. You can change this though. Just jump on a terminal and run the following:

nano /etc/default/openmediavault

Add this line to the file

OMV_MONIT_SERVICE_FILESYSTEM_SPACEUSAGE=95

Then run the following commands to update the configuration

omv-mkconf collectd
monit restart collectd

Of course, you need to determine what level of filesystem usage you’re comfortable with. In this example, I’ve set it to 95% as it’s a fairly static environment. If, however, you’re capable of putting a lot of data on the device quickly, then a 5% buffer may be insufficient. I’d also like to clarify that I’m not unhappy with QNAP, but the device I’m migrating off is 8 years old now and it would be a pain to have to recover if something went wrong. If you’re interested in reading more about Monit you can find documentation here.

Exablox Isn’t Just Pretty Hardware

Disclaimer: I recently attended Storage Field Day 10.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.


Before I get started, you can find a link to my raw notes on Exablox‘s presentation here. You can also see videos of the presentation here.  You can find a preview post from Chris M. Evans here.

 

It’s Not Just the Hardware

I waxed lyrical about the Exablox hardware platform after seeing it at Storage Field Day 7. But while the OneBlox hardware is indeed pretty cool (you can see the specifications here), the cloud-based monitoring platform, OneSystem, is really the interesting bit.

According to Exablox, the “OneSystem application is used to combine OneBlox appliances into Rings as well as configuring shares, user access, and remote replication”. It’s the mechanism used for configuration, as well as monitoring, alerting and reporting.

OneSystem is built on a cloud-based, multi-tenant architecture. There’s nothing to install for organisations, VARs, and MSPs. Although if you feel a bit special about how your data is treated, there is an optional, private OneSystem deployment available for on-premises management. Exablox pride themselves on the “world-class” support they provide to customers, with a customer-first culture being one of the dominant themes when talking to them about support capability. Some of the other benefits of the OneSystem approach are:

  • The ability to globally manage OneBlox anywhere; and
  • Seamless delivery of OneBlox software upgrades.

Exablox also provide 24×7 proactive monitoring, offering insight into, amongst other things:

  • Storage utilisation and analysis;
  • Storage health and alerts; and
  • OneBlox drive health.

The cool thing about this platform is that it offers the ability to configure custom storage policies and simple scaling for individual applications. In this manner you can configure the following data services on a “per application” basis:

  • Variable or fixed-length deduplication;
  • Compression on/off;
  • Continuous data protection on/off and retention; and
  • Remote replication on/off.

 

I Want My Data Everywhere

While the OneBlox ring is currently limited to 7 systems per cluster, you can have two or more (up to 10) clusters operating in a mesh for replication. You can then conceivably have a whole bunch of different data protection schemes in place depending on what you need to protect and where you need it protected. The great thing is that, with the latest version of OneSystem, you can have a one-to-many replication relationship between directories as well. This kind of flexibility is really neat in my opinion. Note that replication is asynchronous.

[image: Exablox multi-site replication]

 

Further Reading and Final Thoughts

If you’ve read any of my recent posts on the likes of Pure, Nimble and Tintri, it would feel like everyone and their dog is into cloud-based monitoring and analytics systems for storage platforms. This is in no way a bad thing, and something that I’m glad we’re seeing become a prevalent feature with these “modern” storage architectures. We store a whole bunch of data on these things. And sometimes it’s even data that is vital to the success of the various business endeavours we undertake on a daily basis. So it’s great to see vendors are taking this requirement seriously. It also helps somewhat that people are a little more comfortable with the concept of keeping information in “the cloud”. This certainly helps the vendors control the end user experience from a support viewpoint, rather than relying on arcane systems deployed across multiple VMs that invariably fail at the time you need to dig into the data to find out what’s really going on in the environment.

Exablox have come up with a fairly unique approach to scale-out NAS, and I’m keen to see where they take it from here. Features such as remote replication and the continuing maturity of the OneSystem platform make me think that they’re gearing up to push things a little beyond the BYO drives SMB space. I’ll be interested to see just how that plays out.

Ray Lucchesi did a thorough write-up on Exablox that you can read here, while Francesco Bonetti did a great write-up here. Exablox has also published a technical overview of OneBlox and OneSystem that is worth checking out.

 

OpenMediaVault – Annoying mdadm e-mails after a rebuild

My homebrew NAS running OpenMediaVault (based on Debian) started writing to me recently. I’d had a disk failure and replaced the disk in the RAID set with another one. Everything rebuilt properly, but then this mdadm chap started sending me messages daily.

"This is an automatically generated mail message from mdadm
 running on openmediavault
A SparesMissing event had been detected on md device /dev/md0.
Faithfully yours, etc.
P.S. The /proc/mdstat file currently contains the following:
Personalities : [raid6] [raid5] [raid4] 
 md0 : active raid6 sdi[0] sda[8] sdb[6] sdc[5] sdd[4] sde[3] sdf[2] sdh[1]
 11720297472 blocks super 1.2 level 6, 512k chunk, algorithm 2 [8/8] [UUUUUUUU]
unused devices: <none>"

Which was nice of it to get in touch. But I’d never had spares configured on this md device. The fix is simple, and is outlined here and here. In short, you’ll want to edit /etc/mdadm/mdadm.conf and change spares=1 to spares=0, as illustrated below. This is assuming you don’t want spares configured and are relying on parity for resilience. If you do want spares configured then it’s probably best you look into the problem a little more.
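
By way of illustration, the relevant ARRAY line in /etc/mdadm/mdadm.conf ends up looking something like this (the array name and UUID below are placeholders, not real values):

# Before: mdadm expects a spare that was never there
ARRAY /dev/md0 metadata=1.2 spares=1 name=openmediavault:0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
# After: no spares expected, parity only
ARRAY /dev/md0 metadata=1.2 spares=0 name=openmediavault:0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx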

OpenMediaVault – A few notes

Following on from my brief look at FreeNAS here, I thought I’d do a quick article on OpenMediaVault as well. While it isn’t quite as mature as FreeNAS, it is based on Debian. I’ve had a soft spot for Debian ever since I was able to get it running on a DECpc AXP 150 I had lying about many moons ago. The Jensen is no longer with us, but the fond memories remain. Anyway …

Firstly, you can download OpenMediaVault here. It’s recommended that you install it on a hard drive (ideally in a RAID 1 configuration) rather than on USB or SD cards. Theoretically you could put it on a stick and redirect the more frequently written stuff to a RAM disk if you really didn’t want to give up the SATA ports on your board. I decided to use an SSD I had lying about as I couldn’t be bothered with more workarounds and “tweaks”. You can follow this guide to set up some semi-automated backup of the configuration.
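
I won’t re-hash that guide here, but the basic idea is straightforward; a minimal sketch (assuming you’re happy with dated copies of the OMV configuration database, and noting that the destination path is just an example) would be something like:

# Keep a dated copy of the OMV configuration database somewhere other than the boot drive
sudo cp /etc/openmediavault/config.xml /srv/backup/config.xml.$(date +%Y%m%d)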

Secondly, here’s a list of the hardware I used for this build:

  • Mainboard – ASRock N3700-ITX
  • CPU – Intel Quad-Core Pentium Processor N3700 (on-board)
  • RAM – 2 * Kingston 8GB 1600MHz DDR3 Non-ECC CL11 SODIMM
  • HDDs – 1 * SSD, 8 * Seagate Constellation ES 2TB drives
  • SATA Controller – PCIe x1 4-port SATA III controller (non-RAID), using a Marvell 88SE9215 chipset
  • SATA Controller – IO Crest Mini PCIe 2-port SATA III controller (RAID capable), using a Syba (?) chipset
  • Case – Fractal Design Node 804
  • PSU – Silverstone Strider Essential 400W


You’ll notice the lack of ECC RAM, and the board is limited in SATA ports, hence the requirement for a single-lane, 4-port SATA card. I’m really not the best at choosing the right hardware for the job. The case is nice and roomy, but there’s no hot-swap for the disks. A better choice would have been a workstation-class board with support for ECC RAM, a decent CPU and a bunch of SATA ports in a micro-ATX form-factor. I mean, it works, but it could have been better. I’d like to think it’s because the market is a bit more limited in Australia, but it’s more because I’m not very good at this stuff.

Thirdly, if you do end up with the ASRock board, you’ll need to make a change to your grub configuration so that the board will boot headless. To do this, ssh or console onto the machine and edit /etc/default/grub. Uncomment GRUB_TERMINAL=console (by removing the #). You’ll then need to run update-grub and you should be right to boot the machine without a monitor connected.
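
In practice that change amounts to something like this (assuming the line is commented out exactly as shipped):

# Uncomment GRUB_TERMINAL=console and regenerate the grub configuration
sudo sed -i 's/^#GRUB_TERMINAL=console/GRUB_TERMINAL=console/' /etc/default/grub
sudo update-grub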

Finally, the OMV experience has been pretty good thus far. None of these roll-your-own options are as pretty as their QNAP or Synology brethren from a UX perspective, but they do the job in a functional, if somewhat sparse, fashion. That said, having been a QNAP user for about 7 years now, I remember that it wasn’t always the eye candy that it is nowadays. Also of note, OMV has a pretty reasonable plugin ecosystem you can leverage, with Plex and a bunch of extras being fairly simple to install and configure. I’m looking forward to running this thing through its paces and posting the performance and usability results.

 

 

FreeNAS – A few notes

Mat and I have been talking about FreeNAS a lot recently. My QNAP TS-639 Pro is approaching 7 years old and I’m reluctant to invest further money in drives for it. So we’ve been doing a bit of research on what might be good hardware and so forth. I thought I’d put together a few links that I found useful and share some commentary.

Firstly, FreeNAS has been around for a while now, and there is a plethora of useful information available via the official documentation, forums and blog posts. While digging through the comments on a post I noticed someone saying that the FreeNAS crowd like to patronise people a lot. It might be a little unfair, although they do sometimes come across as a bit dickish, so be prepared. It’s like anything on the internet really.

Secondly, most of the angst comes about through the choices people make for their DIY hardware builds. There’s a lot of talk about ECC RAM and why it’s critical to a decent build. I have a strong dislike of the word “noobs” and variants, but there are some good points made in this thread. Brian Moses has an interesting counter here, which I found insightful as well. So, your mileage might vary. For what it’s worth, I’m not using ECC RAM in my current build, but I am by no means a shining light when it comes to best practice for IT in the home. If I was going to store data on it that I couldn’t afford to reload from another source (I’m using it to stream mkv files around the house) I would look at ECC.

Thirdly, one of the “folk of the forum”, as I’ll now call them, has a handy primer on FreeNAS that you can view in a few different ways here. It hasn’t been updated in a little while, but it covers off a lot of the salient points when looking at doing your own build and getting started with FreeNAS. If you want a few alternative approaches to what may or may not work for you, have a look at Brian’s post here, as well as this one and this one. Also, if you’re still on the fence about FreeNAS, take a look at Brian’s DIY NAS Software Roundup – it’s well written and covers a number of the important points. The key takeaways when looking at doing your own build are as follows:

  • Do your research before you buy stuff;
  • Don’t go cheap on RAM (ECC if you can);
  • Think about the real requirement for ZIL or L2ARC; and
  • Not everyone on the internet is a prick, but sometimes it will seem like that.

Finally, my experience with FreeNAS itself has been pretty good. I admit that I haven’t used FreeBSD or its variants in quite a few years, but the web interface is pretty easy to navigate. I’ve mucked about a bit with the different zpool configurations, and how to configure the ZIL and L2ARC on a different drive (that post is coming shortly). The installation is straightforward, and once I got my head around the concept of jails it was easy to set up Plex and give it a spin too. Performance was good given the hardware I’ve tested on (when the drives weren’t overheating due to the lack of airflow and an Aussie summer). I’m hoping to do the real build this week or next, so I’ll see how it goes then and report back. I might give NexentaStor Community Edition a crack as well. I have a soft spot for them because they gave me some shoes once. In the meantime, if anyone at iXsystems wants to send me a FreeNAS Mini, just let me know.
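
That post will have the detail, but conceptually, adding a separate log (ZIL) and cache (L2ARC) device to a pool is as simple as the following (the pool and device names are made up for the example, and on FreeNAS you’d normally do this through the web UI rather than at the shell):

# Add a dedicated log (SLOG/ZIL) device to the pool named "tank"
zpool add tank log /dev/ada1
# Add an L2ARC cache device
zpool add tank cache /dev/ada2
# Confirm the new layout
zpool status tank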