macOS Catalina and Frequent QNAP SMB Disconnections

TL;DR – I have no idea why this is happening as frequently as it is, what’s causing it, or how to stop it. So I’m using AFP for the moment.

Background

I run a Plex server on my Mac mini 2018 running macOS Catalina 10.15.6. I have all of my media stored on a QNAP TS-831X NAS running QTS 4.4.3.1400 with volumes connected to macOS over SMB. This has worked reasonably well with Catalina for the last year (?) or so, but with the latest Catalina update I’ve had frequent (as in every few hours) disconnections from the shares. I’ve tried a variety of fixes, and thought I’d document them here. None of them really worked, so what I’m hoping is that someone with solid macOS chops will be able to tell me what I’m doing wrong.

 

Possible Solutions

QNAP and SMB

I made sure I was running the latest QNAP firmware version. I noticed in the release notes for 4.4.3.1381 that it fixed a problem where “[u]sers could not mount NAS shared folders and external storage devices at the same time on macOS via SMB”. This wasn’t quite the issue I was experiencing, but I was nonetheless hopeful the update would help. It didn’t. This thread talked about SMB support levels. I was running my shares with support for SMB 2.1 through 3.0. I’ve since changed that to 3.0 only. No dice.

macOS

This guy on this thread thinks he’s nailed it. He may have, but not for me. I’ve included some of the text for reference below.

Server Performance Mode

https://support.apple.com/en-us/HT202528

The description reads:

Performance mode changes the system parameters of your Mac. These changes take better advantage of your hardware for demanding server applications. A Mac that needs to run high-performance services can turn on performance mode to dedicate additional system resources for server applications.

“Solution:

1 – First check to see if server performance mode is enabled on your machine using this Terminal command. You should see the command return serverperfmode=1 if it is enabled.

nvram boot-args

2 – If you do not see serverperfmode=1 returned, enter this following line of code to enable it. (I recommend rebooting your system afterwards)

sudo nvram boot-args="serverperfmode=1 $(nvram boot-args 2>/dev/null | cut -f 2-)"

I’ve also tried changing the power settings on the Mac mini and disabling Power Nap. No luck there either. I’ve also tried using the FQDN of the NAS, as opposed to the short name of the device, when I map the drives. Nope, nothing.
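One other tweak that gets suggested a lot for flaky SMB on recent versions of macOS is an /etc/nsmb.conf file on the Mac client. I can’t promise it fixes this particular disconnection issue, so treat it as an experiment rather than a cure, but Apple has documented the file, and disabling SMB packet signing has a history of resolving performance and stability grief:

```
# /etc/nsmb.conf – create it with: sudo nano /etc/nsmb.conf
[default]
# Disable client-side SMB signing (weigh the security trade-off first)
signing_required=no
```

Unmount and remount the shares afterwards for the change to take effect.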

 

Solution

My QNAP still supports the Apple Filing Protocol (AFP), and it supports multiple protocols for the same share. So I turned on AFP and mapped the drives that way. I’m pleased to say that I haven’t had the shares disconnect since (and have thus had a much smoother Plex experience), but I’m sad to say that this is the only solution I have to offer for the moment. And if your storage device doesn’t support AFP? Sod knows. I haven’t tried doing it via NFS, but I’ve heard reports that NFS was its own special bin fire in recent versions of Catalina. It’s an underwhelming situation, and maybe one day I’ll happen across the solution. And I can share it here and we can all do a happy dance.

Random Short Take #42

Welcome to Random Short Take #42. A few players have worn 42 in the NBA, including Vin Baker, but my favourite from this list is Walt Williams. A big man with a jumpshot and a great tube sock game. Let’s get random.

  • Datadobi has formed a partnership with Melillo Consulting to do more in the healthcare data management space. You can read the release here.
  • It’s that time of the year when Backblaze releases its quarterly hard drive statistics. It makes for some really interesting reading, and I’m a big fan of organisations that are willing to be as transparent as Backblaze is about its experience in the field. It has over 142,000 drives deployed, across a variety of vendors, and the insights it delivers with this report are invaluable. In my opinion this is nothing but a good thing for customers and the industry in general. You can read more about the report here.
  • Was AirPlay the reason you littered your house with AirPort Express boxes? Same here. Have you been thinking it might be nice to replace the AirPort Express with a Raspberry Pi since you’ve moved on to a different wireless access point technology? Same here. This article might just be the thing you’ve been looking for. I’m keen to try this out.
  • I’ve been trying to optimise my weblog, and turned on Cloudflare via my hosting provider. The website ran fine, but I had issues accessing the WordPress admin page after a while. This article got me sorted out.
  • I’ve been a bit loose with the security of my home infrastructure from time to time, but even I don’t use WPS. Check out this article if you’re thinking it might somehow be a good idea.
  • This article on caching versus tiering from Chris Evans made for some interesting reading.
  • This was a thorough review of the QNAP QSW-308-1C, an 11-port (!) unmanaged switch boasting three 10Gbps ports and eight 1Gbps ports. It’s an intriguing prospect, particularly given the price.
  • DH2i has announced it’s extending free access to DxOdyssey Work From Home (WFH) Software until December 31st. Read more about that here.

 

Storage Field Day 20 – (Fairly) Full Disclosure

Disclaimer: I recently attended Storage Field Day 20. My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event. Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Here are my notes on gifts, etc, that I received as a conference attendee at Storage Field Day 20. This is by no stretch an interesting post from a technical perspective, but it’s a way for me to track and publicly disclose what I get and how it looks when I write about various things. With all of … this stuff … happening, it’s not going to be as lengthy as normal, but I did receive a couple of boxes of stuff in the mail, so I wanted to disclose it.

The Tech Field Day team sent:

  • Keyboard cloth – a really useful thing to save the monitor on my laptop from being bashed against the keyboard.
  • Tote bag
  • Patch
  • Badge
  • Stickers

VAST Data contributed:

  • h2go 17 oz Stainless Steel bottle
  • Trucker cap
  • Pen
  • Notepad
  • Sunglasses
  • Astronaut figurine
  • Stickers

Nebulon chipped in with a:

  • Hip flask
  • Shoelaces
  • Socks
  • Stickers

Qumulo dropped in some socks and a sticker, while Pure Storage sent over a Pure Storage branded Timbuk2 Backpack and a wooden puzzle.

The Intel Optane team kindly included:

  • Intel Optane Nike golf shirt
  • Intel Optane travel tumbler
  • Intel Optane USB phone charging cable
  • Stickers
  • Tote bag
  • Notepad
  • Flashing Badge
  • Pin
  • Socks

My Secret Santa gift was also in one of the boxes.

It wasn’t fancy food and limos this time around. But it was nonetheless an enjoyable event. Thanks again to Stephen and the team for having me back.

Pure Storage and Cohesity Announce Strategic Partnership and Pure FlashRecover

Pure Storage and Cohesity announced a strategic partnership and a new joint solution today. I had the opportunity to speak with Amy Fowler and Biswajit Mishra from Pure Storage, along with Anand Nadathur and Chris Wiborg from Cohesity, and thought I’d share my notes here.

 

Friends In The Market

The announcement comes in two parts, with the first being that Pure Storage and Cohesity are forming a strategic partnership. The idea behind this is that, together, the companies will deliver “industry-leading storage innovations from Pure Storage with modern, flash-optimised backup from Cohesity”. There are plenty of things in common between the companies, including the fact that they’re both, as Wiborg puts it, “keenly focused on doing the right thing for the customer”.

 

Pure FlashRecover Powered By Cohesity

Partnerships are exciting and all, but what was of more interest was the Pure FlashRecover announcement. What is it exactly? It’s basically Cohesity DataProtect running on Cohesity-certified compute nodes (the whitebox gear you might be familiar with if you’ve bought Cohesity tin previously), using Pure’s FlashBlades as the storage backend.

[image courtesy of Pure Storage]

FlashRecover is targeted for general availability in Q4 CY2020 (October). It will be released in the US initially, with other regions to follow. From a go-to-market perspective, Pure will handle level 1 and level 2 support, with Cohesity support being engaged for escalations. Cohesity DataProtect will be added to the Pure price list, and Pure becomes a Cohesity Technology Partner.

 

Thoughts

My first thought when I heard about this was: why would you? I’ve traditionally associated scalable data protection and secondary storage with slower, high-capacity appliances. But as we talked through the use cases, it started to make sense. FlashBlades by themselves aren’t super high capacity devices, but neither are the individual nodes in Cohesity appliances. String a few together and you have enough capacity to do data protection and fast recovery in a predictable fashion. FlashBlade supports 75 nodes (I think) [Edit: it scales up to 150x 52TB nodes. Thanks for the clarification from Andrew Miller] and up to 1PB of data in a single namespace. Throw in some of the capabilities that Cohesity DataProtect brings to the table and you’ve got an interesting solution. The knock on some of the next-generation data protection solutions has been that recovery can still be quite time-consuming. The use of all-flash takes away a lot of that pain, especially when coupled with a solution like FlashBlade that delivers some pretty decent parallelism in terms of getting data recovered back to production quickly.

An evolving use case for protection data is data reuse. For years, application owners have been stuck with fairly clunky ways of getting test data into environments to use with application development and testing. Solutions like FlashRecover provide a compelling story around protection data being made available for reuse, not just recovery. Another cool thing is that when you invest in FlashBlade, you’re not locking yourself into a particular silo, you can use the FlashBlade solution for other things too.

I don’t work with Pure Storage and Cohesity on a daily basis anymore, but in my previous role I had the opportunity to kick the tyres extensively with both the Cohesity DataProtect solution and the Pure Storage FlashBlade. I’m an advocate of both of these companies because of the great support I received from both companies from pre-sales through to post-sales support. They are relentlessly customer focused, and that really translates in both the technology and the field experience. I can’t speak highly enough of the engagement I’ve experienced with both companies, from both a blogger’s experience, and as an end user.

FlashRecover isn’t going to be appropriate for every organisation. Most places, at the moment, can probably still get away with taking a little time to recover large amounts of data if required. But for industries where time is money, solutions like FlashRecover can absolutely make sense. If you’d like to know more, there’s a comprehensive blog post over at the Pure Storage website, and the solution brief can be found here.

Random Short Take #41

Welcome to Random Short Take #41. A few players have worn 41 in the NBA, but it’s hard to go past Dirk Nowitzki for a quality big man with a sweet, sweet jumpshot. So let’s get random.

  • There have been a lot of articles written by folks about various home office setups since COVID-19 became a thing, but this one by Jason Benedicic deserves a special mention. I bought a new desk and decluttered a fair bit of my setup, but it wasn’t on this level.
  • Speaking of COVID-19, there’s a hunger for new TV content as people across the world find themselves confined to their homes. The Ringer published an interesting article on the challenges of diving in to the archives to dig up and broadcast some television gold.
  • Backblaze made the news a while ago when they announced S3 compatibility, and this blog post covers how you can move from AWS S3 to Backblaze. And check out the offer to cover your data transfer costs too.
  • Zerto has had a bigger cloud presence with 7.5 and 8.0, and Oracle Public Cloud is now a partner too.
  • Speaking of cloud, Leaseweb Global recently announced the launch of its Leaseweb Cloud Connect product offering. You can read the press release here.
  • One of my favourite bands is The Mark Of Cain. It’s the 25th anniversary of the Ill At Ease album (the ultimate gym or breakup album – you choose), and the band has started publishing articles detailing the background info on the recording process. It’s fascinating stuff, and you can read Part 1 here and Part 2 here.
  • The nice folks over at Scale Computing have been doing some stuff with various healthcare organisations lately. You can read more about that here. I’m hoping to check in with Scale Computing in the near future when I’ve got a moment. I’m looking forward to hearing about what else they’ve been up to.
  • Ray recently attended Cloud Field Day 8, and the presentation from Igneous prompted this article.

Storage Field Day 20 – I’ll Be At Storage Field Day 20

Here’s some news that will get you excited. I’ll be virtually heading to the US this week for another Storage Field Day event. If you haven’t heard of the very excellent Tech Field Day events, you should check them out. I’m looking forward to time travel and spending time with some really smart people for a few days. It’s also worth checking back on the Storage Field Day 20 website during the event (August 5 – 7) as there’ll be video streaming and updated links to additional content. You can also see the list of delegates and event-related articles that have been published.

I think it’s a great line-up of both delegates and presenting companies this time around. I know most of them, but there may still be a few more companies added to the line-up. I’ll update this if and when they’re announced.

I’d like to publicly thank in advance the nice folks from Tech Field Day who’ve seen fit to have me back, as well as my employer for letting me take time off to attend these events. Also big thanks to the companies presenting. It’s going to be a lot of fun. And a little weird to be doing this virtually, rather than in person. But I’m really looking forward to this, even if it means doing the night shift for a few days. If you’d like to follow along at home, here’s the current schedule (all times are in US/Pacific).

Wednesday, Aug 5 8:00-10:00 Pensando Presents at Storage Field Day 20
Wednesday, Aug 5 11:00-13:00 Cisco Presents at Storage Field Day 20
Thursday, Aug 6 8:00-9:00 Qumulo Presents at Storage Field Day 20
Thursday, Aug 6 10:00-12:00 Nebulon Presents at Storage Field Day 20
Thursday, Aug 6 13:00-14:00 Intel Presents at Storage Field Day 20
Friday, Aug 7 8:00-9:30 VAST Data Presents at Storage Field Day 20
Friday, Aug 7 11:00-13:00 Pure Storage Presents at Storage Field Day 20

Random Short Take #40

Welcome to Random Short Take #40. Quite a few players have worn 40 in the NBA, including the flat-top king Frank Brickowski. But my favourite player to wear number 40 was the Reign Man – Shawn Kemp. So let’s get random.

  • Dell EMC PowerProtect Data Manager 19.5 was released in early July and Preston covered it pretty comprehensively here.
  • Speaking of data protection software releases and enhancements, we’ve barely recovered from the excitement of Veeam v10 being released and Anthony is already talking about v11. More on that here.
  • Speaking of Veeam, Rhys posted a very detailed article on setting up a Veeam backup repository on NFS using a Pure Storage FlashBlade environment.
  • Sticking with the data protection theme, I penned a piece over at Gestalt IT for Druva talking about OneDrive protection and why it’s important.
  • OpenDrives has some new gear available – you can read more about that here.
  • The nice folks at Spectro Cloud recently announced that its first product is generally available. You can read the press release here.
  • Wiliam Lam put out a great article on passing through the integrated GPU on Apple Mac minis with ESXi 7.
  • Time passes on, and Christian recently celebrated 10 years on his blog, which I think is a worthy achievement.

Happy Friday!

StorCentric Announces Nexsan Unity 3300 And 7900

StorCentric recently announced new Nexsan Unity storage arrays. I had the opportunity to speak to Surya Varanasi, CTO of StorCentric, about the announcement, and thought I’d share some thoughts here.

 

Speeds And Feeds

[image courtesy of Nexsan]

The new Unity models announced are the 3300 and 7900. Both models use two controllers and vary in capacity between 1.6PB and 6.7PB. They both use Intel Xeon E5 v4 family processors, and have between 256GB and 448GB of system RAM. There are hybrid storage options available, and both systems support RAID 5, 6, and 10. You can access the spec sheet here.

 

Use Cases

Unbreakable

One of the more interesting use cases we discussed was what StorCentric refer to as “Unbreakable Backup”. The idea behind Nexsan Unbreakable Backup is that you can use your preferred data protection vendor to send backup data to a Unity array. This data can then be replicated to Nexsan’s Assureon platform. The cool thing about Assureon is that it’s locked down. So even if you’re hit with a ransomware attack, it’s going to be mighty hard for the bad guys to crack the Assureon platform as well, as StorCentric uses a Key Management System hosted inside StorCentric, and provides minimal privileges to end users.

Data Migration

There’s also a Data Mobility Suite coming at the end of Q3, including:

  • Cloud Connector, giving you the ability to replicate data from Unity to 18 public clouds, including Amazon and Google (for unstructured data and cloud-based backup); and
  • Flexible Data Migrations – streamline Unity implementations, migrate data from heterogeneous systems.

 

Thoughts and Further Reading

I’ve written enthusiastically about Assureon in the past, so it was nice to revisit the platform via this announcement. Ransomware is a scary prospect for many organisations, so a system that can integrate nicely to help with protecting protection data seems like a pretty good idea. Sure, having to replicate the data to a second system might seem like an unnecessary expense, but organisations should be assessing the value of that investment against the cost of having corporate data potentially irretrievably corrupted. Insurance against ransomware attacks probably seems like something that you shouldn’t need to spend money on, until you need to spend money recovering, or sending bitcoin to some clown because you need your data back. It’s not appealing by any stretch, but it’s also important to take precautions wherever possible.

Midrange storage is by no means a sexy topic to talk about. In my opinion it’s a well understood architecture that most tier 1 companies do pretty well nowadays. But that’s the beauty of the midrange system in a lot of ways – it’s a well understood architecture. So you generally know what you’re getting with hybrid (or all-flash) dual controller systems. The Unity range from Nexsan is no different, and that’s not a bad thing. There are a tonne of workloads in the enterprise today that aren’t necessarily well suited to cloud (for the moment), and just need some block or file storage and a bit of resiliency for good measure. The Unity series of arrays from Nexsan offer a bunch of useful features, including tiering and a variety of connectivity options. It strikes me that these arrays are a good fit for a whole lot of workloads that live in the data centre, from enterprise application hosting through to data protection workloads. If you’re after a reliable workhorse, it’s worth looking into the Unity range.

Rancher Labs Announces Longhorn General Availability

This happened a little while ago, and the news about Rancher Labs has shifted to Suse’s announcement regarding its intent to acquire Rancher Labs. Nonetheless, I had a chance to speak to Sheng Liang (Co-founder and CEO) about Longhorn’s general availability, and thought I’d share some thoughts here.

 

What Is It?

Described by Rancher Labs as “an enterprise-grade, cloud-native container storage solution”, Longhorn has been in development for around six years, in beta for a year, and is now generally available. It comprises around 40,000 lines of Go code, and each volume is a set of independent micro-services orchestrated by Kubernetes.

Liang described this to me as “enterprise-grade distributed block storage for K8S”, and the features certainly seem to line up with those expectations. There’s support for:

  • Thin-provisioning, snapshots, backup, and restore
  • Non-disruptive volume expansion
  • Cross-cluster disaster recovery volume with defined RTO and RPO
  • Live upgrade of Longhorn software without impacting running volumes
  • Full-featured Kubernetes CLI integration and standalone UI
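As a rough sketch of what consuming those features looks like in practice, an application just asks for a volume through a normal PersistentVolumeClaim. The `longhorn` StorageClass name below is the default the project’s deployment creates, but check your own cluster; the claim name and size are purely illustrative:

```yaml
# Request a replicated Longhorn volume via the default "longhorn" StorageClass.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-volume
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 2Gi
```

Longhorn then provisions the backing micro-services for the volume and handles replication underneath, which is the point: it looks like any other Kubernetes storage to the application.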

From a licensing perspective, Longhorn is free to download and use, and customers looking for support can purchase a premium support model with the same SLAs provided through Rancher Support Services. There are no licensing fees, and node-based subscription pricing keeps costs to a minimum.

Use Cases

Why would you use it?

  • Bare metal workloads
  • Persistent storage at the edge
  • Geo-replicated storage for Amazon EKS
  • Application backup and disaster recovery

 

Thoughts

One of the barriers to entry when moving from traditional infrastructure to cloud-native is that concepts seem slightly different to the comfortable slippers you may have been used to in enterprise infrastructure land. The neat thing about Longhorn is that it leverages a lot of the same concepts you’ll see in traditional storage deployments to deliver resilient and scalable persistent storage for Kubernetes.

This doesn’t mean that Rancher Labs is trying to compete with traditional storage vendors like Pure Storage and NetApp when it comes to delivering persistent storage for cloud workloads. Liang acknowledges that those shops can offer more storage features than Longhorn can. Nonetheless, there seems to be a requirement for this kind of accessible and robust solution. Plus, it’s 100% open source.

Rancher Labs already has a good story to tell when it comes to making Kubernetes management a whole lot simpler. The addition of Longhorn simply improves that story further. If you’re feeling curious about Longhorn and would like to know more, this website has a lot of useful information.

StorONE Announces AFA.next

StorONE recently announced the All-Flash Array.next (AFAn). I had the opportunity to speak to George Crump (StorONE Chief Marketing Officer) about the news, and thought I’d share some brief thoughts here.

 

What Is It? 

It’s a box! (Sorry, I’ve been re-watching Silicon Valley with my daughter recently.)

[image courtesy of StorONE]

More accurately, it’s an Intel Server with Intel Optane and Intel QLC storage, powered by StorONE’s software.

S1:Tier

S1:Tier is StorONE’s tiering solution. It operates between a high and a low watermark. Once the Optane tier fills to the high watermark, data is written out, sequentially, to QLC. The neat thing is that when you need to recall the data on QLC, you don’t necessarily need to move it all back to the Optane tier. Rather, read requests can be served directly from QLC. StorONE call this a multi-tier capability, because you can then move data to cloud storage for long-term retention if required.
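To make the watermark idea concrete, here’s a minimal Python sketch of how this style of tiering behaves. To be clear, this is my own illustration of the general technique, not StorONE’s implementation; the class name, thresholds, and FIFO destaging order are all assumptions:

```python
class TieredStore:
    """Toy two-tier store: a small fast tier that destages sequentially
    to a large capacity tier between high and low watermarks."""

    def __init__(self, fast_capacity, high_pct=0.9, low_pct=0.5):
        self.high = high_pct * fast_capacity   # start destaging above this
        self.low = low_pct * fast_capacity     # stop destaging below this
        self.fast = {}       # Optane-style tier: block id -> data
        self.capacity = {}   # QLC-style tier: block id -> data
        self.order = []      # write order, so destaging is sequential

    def write(self, block_id, data):
        self.fast[block_id] = data
        if block_id not in self.order:
            self.order.append(block_id)
        if len(self.fast) > self.high:
            self._destage()

    def _destage(self):
        # Move the oldest blocks out, in order, until below the low watermark.
        while len(self.fast) > self.low and self.order:
            oldest = self.order.pop(0)
            self.capacity[oldest] = self.fast.pop(oldest)

    def read(self, block_id):
        # Reads are served from whichever tier holds the block; a hit on the
        # capacity tier does NOT promote the block back to the fast tier.
        if block_id in self.fast:
            return self.fast[block_id]
        return self.capacity[block_id]
```

The key behaviour matching the description above is in `read()`: recalled data stays on the capacity tier rather than being pulled back up, so a big sequential read doesn’t evict your hot working set.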

[image courtesy of StorONE]

S1:HA

Crump noted that the Optane drives are single-ported, leading some customers to look at highly available configurations. These are catered for with a variation of S1:HA, where the HA solution is now a synchronous mirror between two stacks.

 

Thoughts and Further Reading

I’m not just a fan of StorONE because the company occasionally throws me a few dollarydoos to keep the site running. I’m a fan because the folks over there do an awful lot of storage type stuff on what is essentially commodity hardware, and they’re getting results that are worth writing home about, with a minimum of fuss. The AFAn uses Optane as a storage tier, not just read cache, so you get all of the benefit of Optane write performance (many, many IOPS). It has the resilience and data protection features you see in many midrange and enterprise arrays today (namely vRAID, replication, and snapshots). Finally, it has varying support for all three use cases (block, file, and object), so there’s a good chance your workload will fit on the box.

More and more vendors are coming to market with Optane-based storage solutions. It still seems that only a small number of them are taking full advantage of Optane as a write medium, instead focusing on its benefit as a read tier. As I mentioned before, Crump and the team at StorONE have positioned some pretty decent numbers coming out of the AFAn. I think the best thing is that it’s now available as a configuration item on the StorONE TRUprice site as well, so you can see for yourself how much the solution costs. If you’re after a whole lot of performance in a small box, this might be just the thing. You can read more about the solution and check out the lab report here. My friend Max wrote a great article on the solution that you can read here.