Random Short Take #16

Here are a few links to some random news items and other content that I recently found interesting. You might find them interesting too. Episode 16 – please enjoy these semi-irregular updates.

  • Scale Computing has been doing a bit in the healthcare sector lately – you can read news about that here.
  • This was a nice roundup of the news from Apple’s recent WWDC from Six Colors. Hat tip to Stephen Foskett for the link. Speaking of WWDC news, you may have been wondering what happened to all of your purchased content with the imminent demise of iTunes on macOS. It’s still a little fuzzy, but this article attempts to shed some light on things. Spoiler: you should be okay (for the moment).
  • There’s a great post on the Dropbox Tech Blog from James Cowling discussing the mission versus the system.
  • The more things change, the more they remain the same. For years I had a Windows PC running Media Center and recording TV, with IceTV as the XMLTV-based program guide provider. I then started to mess about with some HDHomeRun devices; the PC died, and I went back to a traditional DVR arrangement. Plex now has DVR capabilities and has been doing a reasonable job with guide data (and recording in general), but they’ve decided it’s all a bit too hard to curate guides themselves and want users (at least in Australia) to use XMLTV-based guides instead. So I’m back to using IceTV, this time with Plex. They’re offering a free trial at the moment for Plex users, and setup instructions are here. No, I don’t get paid if you click on the links.
  • On the social side of things, the Cohesity team in Queensland is organising an axe-throwing event on Friday 21st June from 2 – 4 pm at Maniax Axe Throwing in Newstead. You can get in contact with Casey if you’d like to register.
  • VeeamON Forum Australia is coming up soon. It will be held at the Hyatt Regency Hotel in Sydney on July 24th and should be a great event. You can find out more information and register for it here. The Vanguards are also planning something cool, so hopefully we’ll see you there.
  • Speaking of Veeam, Anthony Spiteri recently published his longest title in the Virtualization is Life! catalogue – Orchestration Of NSX By Terraform For Cloud Connect Replication With vCloud Director. It’s a great article, and worth checking out.
  • There’s a lot of talk and slideware devoted to digital transformation, and a lot of it is rubbish. But I found this article from Chin-Fah to be particularly insightful.

OpenMediaVault – Good Times With mdadm

Happy 2019. I’ve been on holidays for three full weeks and it was amazing. I’ll get back to writing about boring stuff soon, but I thought I’d post a quick summary of some issues I’ve had with my home-built NAS recently and what I did to fix them.

Where Are The Disks Gone?

I got an email one evening with the following message.
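I no longer have the original, but mdadm’s monitoring emails follow a standard template, so it looked something like this (reconstructed rather than copied verbatim):

This is an automatically generated mail message from mdadm running on openmediavault

A Fail event had been detected on md device /dev/md0.

Faithfully yours, etc.

P.S. The /proc/mdstat file currently contains the following:

Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 … [8/4] [UU____UU]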

I do enjoy the “Faithfully yours, etc”, and the postscript is the most enlightening bit. See where it says [UU____UU]? Yeah, that’s not good. There are 8 disks that make up that device (/dev/md0), so it should look more like [UUUUUUUU]. But why would 4 out of 8 disks just up and disappear? I thought it was a little odd myself. I had a look at the ITX board that everything was attached to and realised that those 4 drives were plugged into a PCI SATA-II card. It seems that either the slot on the board or the card is now failing intermittently. I say “seems” because that’s all I can think of, as the S.M.A.R.T. status of the drives is fine.
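As an aside, if you want to check SMART health from the shell yourself, smartmontools makes it straightforward (the device name here is just an example):

dan@openmediavault:~$ sudo smartctl -H /dev/sdc

smartctl -a gives you the full attribute dump if you want more detail than a simple PASSED.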

Resolution, Baby

The short-term fix to get the filesystem back online and usable was the classic “assemble” switch with mdadm. Long-time readers of this blog may have witnessed me doing something similar with my QNAP devices from time to time. After I panic-rebooted the box a number of times (a silly thing to do, really), it finally responded to pings. Checking out /proc/mdstat wasn’t good though.

dan@openmediavault:~$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
unused devices: <none>

Notice the lack of, erm, devices there? That’s non-optimal. The fix requires a forced assembly of the devices comprising /dev/md0.

dan@openmediavault:~$ sudo mdadm --assemble --force --verbose /dev/md0 /dev/sd[abcdefhi]
[sudo] password for dan:
mdadm: looking for devices for /dev/md0
mdadm: /dev/sda is identified as a member of /dev/md0, slot 0.
mdadm: /dev/sdb is identified as a member of /dev/md0, slot 1.
mdadm: /dev/sdc is identified as a member of /dev/md0, slot 3.
mdadm: /dev/sdd is identified as a member of /dev/md0, slot 2.
mdadm: /dev/sde is identified as a member of /dev/md0, slot 5.
mdadm: /dev/sdf is identified as a member of /dev/md0, slot 4.
mdadm: /dev/sdh is identified as a member of /dev/md0, slot 7.
mdadm: /dev/sdi is identified as a member of /dev/md0, slot 6.
mdadm: forcing event count in /dev/sdd(2) from 40639 upto 40647
mdadm: forcing event count in /dev/sdc(3) from 40639 upto 40647
mdadm: forcing event count in /dev/sdf(4) from 40639 upto 40647
mdadm: forcing event count in /dev/sde(5) from 40639 upto 40647
mdadm: clearing FAULTY flag for device 3 in /dev/md0 for /dev/sdd
mdadm: clearing FAULTY flag for device 2 in /dev/md0 for /dev/sdc
mdadm: clearing FAULTY flag for device 5 in /dev/md0 for /dev/sdf
mdadm: clearing FAULTY flag for device 4 in /dev/md0 for /dev/sde
mdadm: Marking array /dev/md0 as 'clean'
mdadm: added /dev/sdb to /dev/md0 as 1
mdadm: added /dev/sdd to /dev/md0 as 2
mdadm: added /dev/sdc to /dev/md0 as 3
mdadm: added /dev/sdf to /dev/md0 as 4
mdadm: added /dev/sde to /dev/md0 as 5
mdadm: added /dev/sdi to /dev/md0 as 6
mdadm: added /dev/sdh to /dev/md0 as 7
mdadm: added /dev/sda to /dev/md0 as 0
mdadm: /dev/md0 has been started with 8 drives.

In this example you’ll see that /dev/sdg isn’t included in my command. That device is the SSD I use to boot the system. Sometimes Linux device naming conventions confuse me too. If you’re in this situation and you think this is just a one-off, you should be okay to unmount the filesystem, run fsck over it, and re-mount it. In my case this has happened twice already, so I’m in the process of moving data off the NAS onto some scratch space, and I’ve procured a cheap little QNAP box to fill its role.
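For the record, the one-off recovery sequence looks something like this. The mount point below is hypothetical (check where your array is actually mounted with mount or df first), and I’m assuming an ext4 filesystem on /dev/md0.

dan@openmediavault:~$ sudo umount /srv/nas
dan@openmediavault:~$ sudo fsck -f /dev/md0
dan@openmediavault:~$ sudo mount /dev/md0 /srv/nas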


Conclusion

My rush to replace the homebrew device with a QNAP isn’t a knock on the OpenMediaVault project by any stretch. OMV itself has been very reliable and has done everything I needed it to do. Rather, my ability to build semi-resilient devices on a budget has simply proven quite poor. I’ve seen some nasty stuff happen with QNAP devices too, but at least any issues will be covered by some kind of manufacturer’s support team and warranty. My NAS is only covered by me, and I’m just not that interested in working out what could be going wrong here. If I’d built something decent I’d get some alerting back from the box telling me what’s happened to the card that keeps failing. But then I would have spent a lot more on this box than I would have wanted to.

I’ve been lucky thus far in that I haven’t lost any data of real import (the NAS devices are used to store media that I have on DVD or Blu-ray – the important documents are backed up using Time Machine and Backblaze). It is nice, however, that a tool like mdadm can bring you back from the brink of disaster in a pretty efficient fashion.

Incidentally, if you’re a macOS user, you might have a bunch of .DS_Store files on your filesystem. Or stuff like .@Thumb or some such. These things are fine, but macOS doesn’t seem to like them when you’re trying to move folders around. This post provides some handy guidance on how to get rid of those files in a jiffy.

As always, if the data you’re storing on your NAS device (be it home-built or off the shelf) is important, please make sure you back it up. Preferably in a number of places. Don’t get yourself in a position where this blog post is your only hope of getting your one copy of your firstborn’s pictures from the first day of school back.

Google WiFi – A Few Notes

Like a lot of people who work in IT as their day job, the IT situation at my house is a bit of a mess. I think the real reason for this is that, once the working day is done, I don’t want to put any thought into doing this kind of stuff. As a result, like a lot of tech folk, I have way more devices and blinking lights in my house than I really need. And I’m always sure to pile on a good helping of technical debt any time I make any changes at home. It wouldn’t be any fun without random issues to deal with from time to time.

Some Background – Apple AirPort

I’ve been running an Apple AirPort Extreme and a number of AirPort Express devices in my house for a while in a mesh network configuration. Our house is 2 storeys and it was too hard to wire up properly with Ethernet after we bought it. I liked the Apple devices primarily because of the easy-to-use interface (via browser or phone), and AirPlay, in my mind at least, was a killer feature. So I’ve stuck with these things for some time, despite the frequent flakiness I experienced with the mesh network (I’d often end up connected to an isolated access point with no network access – a reboot of the base station seemed to fix this) and the sometimes frustrating lack of visibility into what was going on in the network.

Enter Google Wifi

I had some frequent flyer points available, which meant I could get a 3-pack of Google access points for under $200 AU (I think that’s about $15 in US currency). I’d already put up the Christmas tree, so I figured I could waste a few hours re-doing the home network. I’m not going to do a full review of the Google Wifi solution, but if you’re interested in that kind of thing, Josh Odgers does a great job of that here. In short, it took me about an hour to place the three access points in the house and get everything connected. I have about 30 – 40 devices running, some of which are hardwired to a switch connected to my ISP’s NBN gateway, and most of which connect wirelessly.

So What’s The Problem?

The problem was that I’d kind of just jammed the primary Google Wifi point into the network (attached to a dumb switch downstream of the modem). As a result, everything connecting wirelessly via the Google network had an IP address in the 192.168.86.x range, while all of my other devices were in the existing 10.x.x.x range. This wasn’t a massive problem, as the Google solution does a great job of routing between the “wan” and “lan” subnets, but I started to notice that my pi-hole device wasn’t picking up hostnames properly, and some devices were getting confused about which DNS server to use. Oh, and my port mapping for Plex was a bit messed up too. I also had wired devices (i.e. my desktop machine) that couldn’t see AirPlay devices on the wireless network without turning on Wifi.

The Solution?

After a lot of Googling, I found part of the solution via this Reddit thread. Basically, what I needed to do was follow a more structured topology, with my primary Google device hanging off my ISP’s switch (and connected via the “wan” port on the Google Wifi device). I then connected the “lan” port on the Google device to my downstream switch (the one with the pi-hole, NAS devices, and other stuff connected to it). 
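If a picture helps, the revised topology looks roughly like this:

NBN gateway / modem
      |
 [wan port]
Google Wifi (primary) ~~~ mesh ~~~ other Google Wifi points
 [lan port]
      |
downstream switch --- pi-hole, NAS devices, desktop, etc.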

Now the pi-hole could play nicely on the network, and I could point my devices to it as the DNS server via the Google interface. I added a few more reservations to my existing list of hostnames on the pi-hole (instructions here) so that it could correctly identify any non-DHCP clients. I also changed the DHCP range on the Google Wifi to a single IP address (the one used by the pi-hole) and made sure there was a reservation set for the pi-hole on the Google side of things. The reason for this (I think) is that you can’t disable DHCP on the Google Wifi device. To solve the Plex port mapping issue, I set a manual port mapping on my ISP modem and pointed it at the static IP address of the primary Google Wifi device. I then created a port mapping on the Google side of things to point to my Plex Media Server. It took a little while, but eventually everything started to work.
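The Plex fix is effectively a double port forward, because there are two layers of NAT in play. Plex listens on TCP 32400; the private addresses below are made up for illustration.

ISP modem: forward TCP 32400 -> Google Wifi WAN address (e.g. 192.168.1.2)
Google Wifi: forward TCP 32400 -> Plex Media Server (e.g. 192.168.86.20)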

It’s also worth noting that I was able to reconfigure the AirPort Express devices connected to speakers to join the new Wifi network, and I can still use AirPlay around the house as I did before.

Conclusion 

This seems like a lot of mucking about for what is meant to be a plug-and-play wireless solution. In Google’s defence though, my home network topology is a bit more fiddly than the average punter’s. If I wasn’t so in love with pi-hole, and didn’t have devices that needed static IP addresses and DNS, I wouldn’t have had as many problems with the setup. From a performance and usability standpoint, I think the Google solution is excellent. Of course, this might all go to hell in a handbasket when I ramp up IPv6 in the house, but for now it’s been working well. Couple that with the fact that my networking skills are pretty subpar, and we should all just be happy I was able to post this article on the Internet from my house.

HTPC – Replacing The PC With macOS

The Problem

My HTPC (running Windows 7 Media Center) died a few months ago after around 5 or 6 years of service. It was a shame, as I had used it as a backup for my TV recordings and also to rip optical discs to my NAS. At the time, I was about to depart on a business trip and couldn’t be bothered trying to work out what had killed it. So I gave the carcass of the machine to my nephew (who’s learning about PC hardware) and tried to put something together using other machines in my house (namely an iMac and some other odds and sods). I’m obviously not the first person to use a Mac for these activities, but I thought it would be useful to capture my experiences.


Requirements

Requirements? Isn’t this just for home entertainment? Whatever, relaxation is important, and so is understanding your users. We record a lot of free-to-air TV with a Fetch Mighty box, and sometimes things clash. Or don’t record. “Catch-up” TV services in Australia are improving a lot, but our Netflix catalogue is nowhere near as extensive as the US one. So I like to have a backup option for TV recording. The HTPC provided that. And it had that cool Media Center extender option with the Xbox 360, which was actually quite useable and meant I didn’t need a PC sitting in the lounge room.

From a movie consumption perspective, we mostly watch stuff using various clients such as AppleTV or WDTV devices, so the HTPC didn’t really impact anything there, although the way I got data off Blu-ray / DVD / HD-DVD / VCD discs was affected, as my iMac didn’t have a Blu-ray drive attached. Okay, the SuperDrive could deal with VideoCDs, but you get my point.

So, in short, I needed something that could:

  • Record free-to-air TV shows via a schedule;
  • Rip Blu-ray and other content to mkv containers (or similar); and
  • Grab accurate metadata for that media for use with Plex.


Solution?

The solution was fairly easy to put together when I thought about what I actually needed.

TV

I backed a Kickstarter for the HDHomeRun Connect some time ago and hadn’t really used the device very effectively, save for the odd VLC stream on an iPad. It’s a dual-tuner device that sits on your wired network and is addressable by a few different applications over IP. The good news is that Elgato EyeTV runs on macOS, works with IceTV (a local TV guide provider), and supports the HDHomeRun. I haven’t tested multiple HDHomeRun devices with the same EyeTV software, and I’m not 100% convinced that would work. I had 8 tuners on the HTPC, so this is a bit of a bummer, but as it’s not the primary recording device I can live with it. The EyeTV software supports multiple export options too, so I can push shows into iTunes and have the AppleTV pick them up there.

Optical Discs

I bought a USB-based Pioneer (BDR-XS06) Blu-ray drive that works with macOS, and MakeMKV is still my go-to in terms of mkv ripping. It maxes out at 2x. I’m not sure if this is a macOS limitation, firmware crippling, or something else. Now that I think about it, it’s likely the USB bus on my iMac. If anyone wants to send me a Thunderbolt dock, I’m happy to try that out as an alternative. In any case, a standard movie Blu-ray takes just shy of an hour to rip. It does work though. If I still need to rip HD-DVDs, I can use the drive from the Xbox 360 that I have lying about. What? I like to have options.
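Incidentally, MakeMKV also ships with a command-line binary, makemkvcon, buried inside the app bundle, if you’d rather script your rips than drive the GUI. Something like this should rip all titles from the first optical drive (the output path is just an example):

$ /Applications/MakeMKV.app/Contents/MacOS/makemkvcon mkv disc:0 all ~/rips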

Metadata

For movie metadata I settled on MediaElch for scraping and have found it to be easy to use and reliable. Why bother with scraping metadata? It sometimes saves a bit of effort when Plex gets the match wrong.

Plex

I run Plex as the primary media server for the house (with clients on iPad, RasPlex, or AppleTV), with content served from a number of NAS devices. It took me a while to get on board with Plex, but the ability to install the app on a 4th-generation AppleTV and the portability of media have been useful at times (think plane trips on economy airlines where entertainment is an extra charge). Plex are working on a bunch of cool new features, and I’m going to try to make some time to test out the DVR functionality in the near future.

I’ve also recently started to use the auto_master file as the mechanism for mounting my SMB shares automatically on the iMac. I found the user-based Login Items method a bit flaky – shares would disappear every now and then and confuse Plex. I have three NAS devices, all using a variation of the name “Multimedia” as their main share (no, I didn’t really think this deployment through). As a result, my iMac mounts them under /Volumes as “Multimedia-1”, “Multimedia-2”, etc. This is fine, but the order they’re mounted in when the machine reboots can mess things up a bit. The process to use auto_master is fairly simple and can be found here. I’ve included an overview for your enjoyment. Firstly, fire up a terminal session and make a /mnt directory if you don’t have one already.

Last login: Mon Aug 14 05:10:12 on ttys000
imac27:~ dan$ pwd
/Users/dan
imac27:~ dan$ cd /
imac27:/ dan$ sudo mkdir mnt
Password:

You’ll then want to edit the auto_master file so that it references the auto_nas map when automount runs.

imac27:/ dan$ sudo nano /etc/auto_master
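The change itself is a one-liner at the bottom of the file, pointing the /mnt/NAS directory at the auto_nas map:

/mnt/NAS    auto_nas

(I later added a -nosuid option to that line – more on that towards the end of this post.)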

You should also ensure the NAS directory exists (if it doesn’t already).

imac27:/ dan$ cd /mnt
imac27:mnt dan$ sudo mkdir NAS

You can now create / modify the auto_nas file and include mount points, credentials and the shares.

imac27:/ dan$ sudo nano /etc/auto_nas
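The map file has one line per share: the mount name, the filesystem type, and then the SMB URL with credentials embedded. Mine looks roughly like this, with usernames, passwords and server names changed to protect the innocent:

831multimedia    -fstype=smbfs    ://dan:password@831nas/Multimedia
412multimedia    -fstype=smbfs    ://dan:password@412nas/Multimedia
omvmultimedia    -fstype=smbfs    ://dan:password@openmediavault/Multimedia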

Now run your automount command.

imac27:/ dan$ sudo automount -vc
automount: /net updated
automount: /home updated
automount: /mnt/NAS updated
automount: no unmounts

Once this is done you’ll want to test that it works.

imac27:/ dan$ cd /mnt/NAS/
imac27:NAS dan$ ls
831multimedia
imac27:NAS dan$ cd 831multimedia/
-bash: cd: 831multimedia/: Too many users

Note that if you’re having issues with a “Too many users” error, it may be because you’re using a special character (like @) in your password, which messes up the syntax in the map file. Check this post for a resolution.
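The short version of the fix is to percent-encode the offending character – an @ in the password, for example, becomes %40:

831multimedia    -fstype=smbfs    ://dan:p%40ssword@831nas/Multimedia

Once that’s sorted, the listing will look more like this.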

imac27:NAS dan$ ls
412multimedia    831multimedia    omvmultimedia
imac27:NAS dan$ cd 412multimedia/
imac27:412multimedia dan$ ls
tv
imac27:412multimedia dan$ cd ..
imac27:NAS dan$ cd omvmultimedia/
imac27:omvmultimedia dan$ ls
basketball    music        music video    skateboarding    star wars    tv

It’s also a good idea to apply some permissions to that auto_nas file because you’re storing credentials in there in plain text.

imac27:/ dan$ sudo chmod 600 /etc/auto_nas
Password:

And now you can point Plex at these mount points and you shouldn’t have any problems with SMB shares disappearing randomly.

There’s just one thing though …

I’ve read a lot of reports of this functionality being broken in more recent versions of macOS. I’ve also witnessed shares disappear and get remounted with root permissions. This is obviously not ideal. There are a number of solutions floating around, including running a bash script to unmount the devices as root and then change directory as a normal user (prompting autofs to remount the shares as that user). I found a suggestion in a forum somewhere that I use -nosuid in my auto_master file. I did this and rebooted, and the drives seem to have mounted as me rather than root. I’ll keep an eye on this and see whether it continues to work or whether autofs remounts the shares as root. It seems a non-ideal solution in a multi-user environment, but that’s outside my ken at this stage.
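For reference, the amended auto_master entry just grows a third column:

/mnt/NAS    auto_nas    -nosuid

A sudo automount -vc (or a reboot) picks up the change.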

*Update – 2017/08/20*

I ended up having problems with the auto_master method as well, and decided to create new shares on the various NAS devices with different names. These then mount as /Volumes/nas1data, /Volumes/nas2data, etc. So theoretically even if I lose the connection I can remount them manually and know that they’ll come up with a consistent path name and Plex won’t get too confused. I dragged these mounted volumes into my Login Items so that they mount every time the machine reboots. It still bites that they disappear every now and then though.


HTPCs Are So Last Decade

I recently threw out my RealMagic Hollywood+ XP cards. Those things were great at a time when playing DVDs on a PC was a real challenge. I remember being excited to play full-screen MPEG-2 movies on my NT 4 Workstation running on a Pentium 133. Fast forward to 8 – 10 years ago, and it seemed everyone was running some kind of “Media Center” PC at home and consuming everything via that. Nowadays people tell me to “just stream it”. But I don’t live in a terribly bandwidth-rich environment, and downloading 4GB of data from the Internet to watch a movie needs a little more prior planning than I’d like. So I’m still a big fan of physical discs, and I’m still recording television shows via a PVR or the iMac.

I still have a dream that one day I’ll happen upon the perfect user experience, where I can consume whatever digital media I want in the fashion I want to. I’ve tried an awful lot of combinations of backend and frontend, and Plex is pretty close to doing everything I need. They don’t like ISO files though (which is justifiable, for sure). I rip most things to mkv containers now, and all of my iTunes content (well, the music at least) is pretty easy to consume, but there are still some things that aren’t as easy to view. I still haven’t found a reliable way to consume UltraViolet content on a big screen (although I think I could do something with AirPlay). I’ve been reading a bit about various cable boxes in the US that can search both local and streaming sources for whatever you’re after and point you in the right direction. I guess this would be the way to go if you had access to reasonable bandwidth and decent content provider choices.

In any case, it’s still possible to run an HTPC the old-fashioned way with macOS.

Apple – I know too much about iPad recovery after iOS 8

So I now know far too much about how to recover old files from iPad backups. I know this isn’t exactly my bread and butter, but I found the process fascinating, and thought it was worth documenting here. It all started when I upgraded my wife’s iPad 2 to iOS 8. Bad idea. Basically, it ran like rubbish and was pretty close to unusable. So I rolled it back, using the instructions here. Ok, so that’s cool, but it turned out I couldn’t restore the data from a backup, because that backup was made with iOS 8 and wasn’t compatible with iOS 7.1.2. Okay, fine, it was probably time to clear out some apps anyway, and all of the photos were saved on the desktop, so no big deal. Fast forward a few days, and we realised that all of her notes were on that device. Now for the fun bit. Note that I’m using a Mac. No idea what you need to do on a Windows machine, but I imagine it’s not too dissimilar.

Step 1. Recover the iPad backup from before the iOS upgrade using Time Machine. Note that you’ll need to be able to see hidden files in Finder, as the backup is stored under ~/Library/Application Support/MobileSync/Backup and Time Machine uses Finder’s settings for file visibility. I used these instructions. Basically, fire up a terminal and type:

$ defaults write com.apple.finder AppleShowAllFiles TRUE
$ killall Finder

You’ll then see the files you need with Time Machine. When you’re finished, type:

$ defaults write com.apple.finder AppleShowAllFiles FALSE
$ killall Finder

Step 2. Now you can browse to ~/Library/Application Support/MobileSync/Backup and recover your backup files. If you have more than one iDevice backed up, you might need to dig through the folders a bit to find the right one. I used these instructions to locate the correct backup files. You’ll want to look for a file called “Info.plist”. In that file, you’ll see something like:

<key>Device Name</key>
<string>My iPhone</string>
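If you’ve accumulated a pile of backups, a quick grep across the Info.plist files will tell you which folder belongs to which device (substitute your own device’s name, obviously):

$ grep -l "My iPhone" ~/Library/Application\ Support/MobileSync/Backup/*/Info.plist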

And from there you can restore the correct files. It will look something like this when recovered:

[Screenshot: the recovered backup files]

Step 3. Now you’ll want to go to the normal location of your iPad backups and rename your current backup to something else. Then copy the files that you recovered from Time Machine to this location.

[Screenshot: the renamed backup alongside the files copied back from Time Machine]

Step 4. At this point, I followed these quite excellent instructions from Chris Taylor and used the pretty neat iPhone Backup Extractor to extract the files I needed. Once you’ve extracted the files, you’ll have something like this. Note the path of the files is iOS Files/Library/Notes.

[Screenshot: the extracted files under iOS Files/Library/Notes]

Step 5. At this point, fire up MesaSQLite and open the “notes.sqlite” file as per the instructions in Chris’s post. Fantastic, I’ve got access to the text from the notes. Except it’s full of HTML tags and generally unformatted. Well, I’m pretty lazy, so I used the tool at Web 2.0 Generators to decode the HTML into formatted text for insertion into Notes.app. And that’s it.
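If you’d rather skip the GUI entirely, the sqlite3 tool that ships with macOS can pull the note bodies straight out of the database. On iOS 7-era databases the body text lives in the ZNOTEBODY table, but the schema shifts between iOS versions, so treat this as a starting point rather than gospel:

$ sqlite3 notes.sqlite "SELECT ZCONTENT FROM ZNOTEBODY;"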

Conclusion. As it happens, I’ve now set this iPad up with iCloud synchronisation. *Theoretically* I won’t need to do this again. Nor should I have had to do it in the first place. But I’ve never come across an update that was quite so ugly on particular iDevices. Thanks to Apple for the learning opportunity.