StorONE Announces AFA.next

StorONE recently announced the All-Flash Array.next (AFAn). I had the opportunity to speak to George Crump (StorONE Chief Marketing Officer) about the news, and thought I’d share some brief thoughts here.


What Is It? 

It’s a box! (Sorry, I’ve been re-watching Silicon Valley with my daughter recently).

[image courtesy of StorONE]

More accurately, it’s an Intel server with Intel Optane and Intel QLC storage, powered by StorONE’s software.

S1:Tier

S1:Tier is StorONE’s tiering solution. It operates between a high and a low watermark: once the Optane tier fills to the high watermark, data is written out, sequentially, to QLC. The neat thing is that when you need to recall data sitting on QLC, you don’t necessarily need to move it all back to the Optane tier; read requests can be served directly from QLC. StorONE calls this a multi-tier capability, because you can also move data out to cloud storage for long-term retention if required.

[image courtesy of StorONE]
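
To make the tiering mechanics a bit more concrete, here’s a toy Python sketch of a two-tier, watermark-driven store. To be clear, this is my own illustration with made-up names and thresholds, not StorONE’s implementation; the point is just the behaviour described above: destage the coldest data sequentially when the high watermark is hit, and serve reads from whichever tier holds the data, with no promotion back to the fast tier.

    from collections import OrderedDict

    HIGH_WATERMARK = 0.80   # start destaging once the fast tier is this full
    LOW_WATERMARK = 0.50    # stop destaging once usage drops below this

    class TieredStore:
        def __init__(self, fast_capacity):
            self.fast = OrderedDict()   # "Optane" tier; ordering approximates warmth
            self.slow = {}              # "QLC" tier; flushed to sequentially
            self.fast_capacity = fast_capacity

        def write(self, key, value):
            self.fast[key] = value
            self.fast.move_to_end(key)   # most recently written = warmest
            self._destage_if_needed()

        def read(self, key):
            # Served from whichever tier has the data; no promotion on read.
            return self.fast[key] if key in self.fast else self.slow[key]

        def _destage_if_needed(self):
            if len(self.fast) / self.fast_capacity < HIGH_WATERMARK:
                return
            while len(self.fast) / self.fast_capacity > LOW_WATERMARK:
                key, value = self.fast.popitem(last=False)  # coldest first
                self.slow[key] = value                      # sequential flush

    store = TieredStore(fast_capacity=10)
    for i in range(12):
        store.write(f"block{i}", b"...")
    print(store.read("block0"))   # served straight from the slow tier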

S1:HA

Crump noted that the Optane drives are single-ported, leading some customers to look at highly available configurations. These are catered for with a variation of S1:HA, where the HA solution is now a synchronous mirror between two stacks.


Thoughts and Further Reading

I’m not just a fan of StorONE because the company occasionally throws me a few dollarydoos to keep the site running. I’m a fan because the folks over there do an awful lot of storage type stuff on what is essentially commodity hardware, and they’re getting results that are worth writing home about, with a minimum of fuss. The AFAn uses Optane as a storage tier, not just read cache, so you get all of the benefit of Optane write performance (many, many IOPS). It has the resilience and data protection features you see in many midrange and enterprise arrays today (namely vRAID, replication, and snapshots). Finally, it has varying support for all three use cases (block, file, and object), so there’s a good chance your workload will fit on the box.

More and more vendors are coming to market with Optane-based storage solutions. It still seems that only a small number of them are taking full advantage of Optane as a write medium, with most focusing on its benefit as a read tier. As I mentioned before, Crump and the team at StorONE have been posting some pretty decent numbers out of the AFAn. I think the best thing is that it’s now available as a configuration item on the StorONE TRUprice site as well, so you can see for yourself how much the solution costs. If you’re after a whole lot of performance in a small box, this might be just the thing. You can read more about the solution and check out the lab report here. My friend Max wrote a great article on the solution that you can read here.

Backup Awareness Month, Backblaze, And A Simple Question

Last month was Backup Awareness Month (at least according to Backblaze). It’s not formally recognised by any government entities; it’s just something Backblaze made up. But I’m a big fan of backup awareness, so I endorse making up stuff like this. I had a chance to chat to Yev over at Backblaze about the results of a survey Backblaze runs annually and thought I’d share my thoughts here. Yes, I know I’m a bit behind, but I’ve been busy.

As I mentioned previously, as part of the Backup Awareness Month celebrations, Backblaze reaches out to folks in the US and asks a basic question: “How often do you backup all the data on your computer?”. The answers reveal some interesting things about consumer backup habits. There has been an encouraging decrease in the number of people stating that they have never backed up their data (down to around one fifth of respondents), and the frequency with which people back up has increased.

Other takeaways from the results include:

  • Almost 50% of people lose their data each year;
  • 41% of people do not completely understand the difference between cloud backup and cloud storage;
  • Millennials are the generation most likely to back up their data daily; and
  • Seniors (65+) have gone from being the best age group at backing up data to the worst.


Thoughts

I bang on a lot about how important backup (and recovery) is across both the consumer and enterprise space. Surveys like this are interesting because they highlight, I think, the importance of regularly backing up our data. We’re making more and more of it, and it’s not magically protected by the benevolent cloud fairies, so it’s up to us to protect it. Particularly if it’s important to us. It’s scary to think that one in two people are losing data on a regular basis, and scarier still that most folks don’t understand the distinction between cloud storage and cloud backup. I was surprised that Millennials are most likely to back up their data, but my experience with younger generations really only extends to my children, so they’re maybe not the best indicator of what the average consumer is doing. It’s also troubling that older folk are struggling to keep on top of backups. Anecdotally, that lines up with my experience as well. So I think it’s great that Yev and the team at Backblaze have been on something of a crusade to educate people about cloud backup and how it can help them. I love that the company is all about making it easier for consumers, not harder.

As an industry we need to be better at making things simple for people to consume, and more transparent in terms of what can be achieved with technology. I know this blog isn’t really focused on consumer technology, and it might seem a bit silly that I carry on a bit about consumer backup. But you all have data stored some place or another that means something to you. And I know not all of you are protecting it appropriately. Backup is like insurance. It’s boring. People don’t like paying for it. But when something goes bang, you’ll be glad you have it. If these kinds of posts can raise some awareness, and get one more person to protect the data that means something to them in an effective fashion, then I’ll be happy with that.

Komprise Announces Cloud Capability

Komprise recently made some announcements around extending its product to cloud. I had the opportunity to speak to Krishna Subramanian (President and COO) about the news and I thought I’d share some of my thoughts here.


The Announcement

Komprise has traditionally focused on unstructured data stored on-premises. It has now extended the capabilities of Komprise Intelligent Data Management to include cloud data. There’s currently support for Amazon S3 and Wasabi, with Google Cloud, Microsoft Azure, and IBM support coming soon.


Benefits

So what do you get with this capability?

Analyse data usage across cloud accounts and buckets easily

  • Single view across cloud accounts, buckets, and storage classes
  • Analyse AWS usage by various metrics accurately based on access times
  • Explore different data archival, replication, and deletion strategies with instant cost projections

Optimise AWS costs with analytics-driven archiving

  • Continuously move objects by policy across Cloud Network Attached Storage (NAS), Amazon S3, Amazon S3 Standard-IA, Amazon S3 Glacier, and Amazon S3 Glacier Deep Archive
  • Minimise costs and penalties by moving data at the right time based on access patterns (there’s a rough sketch of this idea below the benefits list)

Bridge to Big Data/Artificial Intelligence (AI) projects

  • Create virtual data lakes for Big Data, AI – search for exactly what you need across cloud accounts and buckets
  • Native access to moved data on each storage class with full data fidelity

Create Cyber Resiliency with AWS

  • Copy S3 data to AWS to protect from ransomware with an air-gapped copy
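
To give a sense of what “moving objects by policy” can look like in practice, here’s a rough boto3 sketch that demotes objects untouched for six months to Glacier. To be clear, this is not Komprise’s mechanism, just my own illustration: the bucket name and the 180-day threshold are made up, and it leans on last-modified time as a crude stand-in for last access (which, as noted below, S3 doesn’t readily expose).

    from datetime import datetime, timedelta, timezone
    import boto3

    BUCKET = "example-bucket"          # hypothetical bucket name
    COLD_AFTER = timedelta(days=180)   # arbitrary policy threshold

    s3 = boto3.client("s3")
    cutoff = datetime.now(timezone.utc) - COLD_AFTER

    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET):
        for obj in page.get("Contents", []):
            if obj["LastModified"] < cutoff and obj.get("StorageClass") == "STANDARD":
                # An in-place copy with a new StorageClass transitions the object.
                s3.copy_object(
                    Bucket=BUCKET,
                    Key=obj["Key"],
                    CopySource={"Bucket": BUCKET, "Key": obj["Key"]},
                    StorageClass="GLACIER",
                )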

[image courtesy of Komprise]


Why Is This Good?

The move to cloud storage hasn’t been all beer and skittles for enterprise. Storing large amounts of data in public cloud presents enterprises with a number of challenges, including:

  • Poor visibility – “Bucket sprawl”
  • Insufficient data – Cloud does not easily track last access / data use
  • Cost complexity – Manual data movement can lead to unexpected retrieval cost surprises
  • Labour – Manually moving data is error-prone and time-consuming

Sample Use Cases

Some other reasons you might want to have Komprise manage your data include:

  • Finding ex-employee data stored in buckets.
  • Data migration – you might want to take a copy of your data from Wasabi to AWS.

There’s support for all unstructured data (file and object), so the benefits of Komprise can be enjoyed regardless of how you’re storing your unstructured data. It’s also important to note that there’s no change to the existing licensing model; you’re just now able to use the product with public cloud storage.


Thoughts

Effective data management remains a big challenge for enterprises. It’s no secret that public cloud storage is really just storage that lives in another company’s data centre. Sure, it might be object storage rather than file-based, and the way you consume and pay for that data may have changed, but fundamentally it’s still your unstructured data sitting on a share or a filesystem somewhere. The problems you had on-premises, though, still manifest in public cloud environments (data sprawl, capacity issues, and so on). That’s why the Komprise solution seems so compelling when it comes to managing your on-premises storage consumption, and extending that capability to cloud storage is a no-brainer. Unstructured data storage is frequently a bin fire of some sort or another, and the reason is that it doesn’t scale well. I don’t mean the storage doesn’t scale – you can store petabytes all over the place if you like. But if you’re still hand-crafting your shares and manually moving data around, you’ll notice it becomes more and more time-consuming as time goes on (and your data storage needs grow).

One way to address this challenge is to introduce a level of automation, which is something that Komprise does quite well. If you’ve got many terabytes of data stored on-premises and in AWS buckets (or you’re looking to move some old data from on-premises to the cloud) and you’re not quite sure what it’s all for or how best to go about it, Komprise can certainly help you out.

Brisbane (Virtual) VMUG – July 2020

[image: VMUG Express banner]

The July edition of the Brisbane VMUG meeting will be held online via Zoom on Tuesday 28th July from 4pm to 6pm. We have speakers from Runecast and VMware presenting and it promises to be a great afternoon.

Here’s the agenda:

  • VMUG Intro (by me)
  • Runecast Presentation – Predict, Upgrade and Secure your VMware environment proactively with Runecast
  • VMware Presentation – Deploying Tanzu Kubernetes Grid with vRealize Automation with Michael Francis
  • Q&A

The speakers have gone to great lengths to make sure this will be a fun and informative session and I’m really looking forward to hearing what they have to say. You can find out more information and register for the event here. I hope to see you there (online). Also, if you’re interested in sponsoring one of these events, please get in touch with me and I can help make it happen.

Random Short Take #39

Welcome to Random Short Take #39. Not a huge number of players have worn 39 in the NBA, and I’m not going to pretend I’m any real fan of The Dwightmare. But things are tough all around, so let’s remain optimistic and push through to number 40. Anyway, let’s get random.

  • VeeamON 2020 was online this week, and Anthony Spiteri has done a great job of summarising the major technical session announcements here.
  • I’ve known Howard Marks for a while now, and always relish the opportunity to speak with him when I can. This post is pretty hilarious, and I’m looking forward to reading the followup posts.
  • This is a great article from Alastair Cooke on COVID-19 and what En-Zed has done effectively to stop the spread. It was interesting to hear his thoughts on returning to the US, and I do agree that it’s going to be some time until I make the trip across the Pacific again.
  • Sometimes people get crazy ideas about how they might repurpose some old bits of technology. It’s even better when they write about their experiences in doing so. This article on automating an iPod Hi-Fi’s volume control over at Six Colors was fantastic.
  • Chris M. Evans put out a typically thought-provoking piece on data migration challenges recently that I think is worth checking out. I’ve been talking a lot to customers that are facing these challenges on a daily basis, and it’s interesting to see how, regardless of the industry vertical they operate in, it’s sometimes just a matter of the depth varying, so to speak.
  • I frequently bump into Ray Lucchesi at conferences, and he knows a fair bit about what does and doesn’t work. This article on his experiences recently with a number of virtual and online conferences is the epitome of constructive criticism.
  • Speaking of online conferences, the Australian VMUG UserCon will be virtual this year and will be held on the 30th July. You can find out more and register here.
  • Finally, if you’ve spent any time with me socially, you’ll know I’m a basketball nut. And invariably I’ll tell you that Deftones is my favouritest band ever. So it was great to come across this article about White Pony on one of my favourite sports (and popular culture) websites. If you’re a fan of Deftones, this is one to check out.


Random Short Take #38

Welcome to Random Short Take #38. Not a huge number of players have worn 38 in the NBA, and I’m not going to pretend I was ever a Kwame Brown fan. Although it did seem like he had a tough time of it. Anyway, let’s get random.

  • Ransomware is the new hotness. Or, rather, protecting storage systems from ransomware is the new hotness. My man Chin-Fah had a writeup on that here. It’s not a matter of if, but rather when you’ll run into a problem. It’s been interesting to see the various approaches being taken by the storage vendors and the data protection companies.
  • Applications for the vExpert program intake for the second half of 2020 are open, but closing soon. It’s a fantastic program to be a part of, so if you think you’ve got the goods, you can apply here. I also recommend this article from Christopher on his experiences.
  • This was a great article from Alastair on some of the differences between networking with AWS and VMC on AWS. As someone who works for a VMware Cloud Provider, I can confirm that NSX (T or V, I don’t care) has a whole slew of capabilities and a whole slew of integration challenges.
  • Are you Zoomed out? I am. Even when you think the problem can’t be the network, it might just be the network (I hope my friends in networking appreciate that it’s not always the storage). John Nicholson posted a typically comprehensive overview of how your bandwidth might be one of the things keeping you from demonstrating excellent radio voice on those seemingly endless meetings you’re doing at the moment. It could also be that you’re using crap audio devices, but I think John’s going to cover that in the future.
  • Scale Computing has a good story to tell about what it’s been doing with a large school district in the U.S. Read more about that here.
  • This is one of those promotions aimed at my friends in North America more than folks based where I am, but I’m always happy to talk about deals on data protection. StorCentric has launched its “Retrospect Dads & Grads Promotion” offering a free 90-day subscription license for every Retrospect Backup product. You can read more about that here.
  • Pure//Accelerate Online was this week, and Max did a nice write-up on Pure Storage File Services over at Gestalt IT.
  • Rancher Labs recently announced the general availability of Longhorn (a cloud-native container storage solution). I’m looking forward to digging in to this a bit more over the next little while.


Datadobi Announces S3 Migration Capability

Datadobi recently announced S3 migration capabilities as part of DobiMigrate 5.9. I had the opportunity to speak to Carl D’Halluin and Michael Jack about the announcement and thought I’d share some thoughts on it here.


What Is It?

In short, you can now use DobiMigrate to perform S3 to S3 object storage migrations. It’s flexible too, offering the ability to migrate data from a variety of on-premises object systems up to public cloud object storage, between on-premises systems, or back to on-premises from public cloud storage. There’s support for a variety of S3-compatible systems.

In the future, Datadobi is looking to add support for AWS Glacier, object locks, object tags, and non-current object versions.


Why Would You?

There are quite a few reasons why you might want to move S3 data around. You could be seeing high egress charges from AWS because you’re accessing more data in S3 than you’d initially anticipated. You might be looking to move to the cloud and have a significant on-premises footprint that needs to go. Or you might be looking to replace your on-premises solution with a solution from another vendor.


How Would You?

The process used to migrate objects is fairly straightforward, and follows a pattern that will be familiar if you’ve done anything with any kind of storage migration tool before. In short, you set up a migration pair (source and destination), run a scan and first copy, then do some incremental copies. Once you’ve got a maintenance window, there’s a cutover where the final scan and copy is done. And then you’re good to go. Basically.

[image courtesy of Datadobi]
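
For the curious, the scan / first copy / incremental pattern sketched above looks something like this in boto3 terms. This is a bare-bones illustration of the pattern, not DobiMigrate itself; the endpoint and bucket names are hypothetical, and a real tool also deals with multipart objects, metadata, versioning, retries, and verification.

    import boto3

    # Hypothetical endpoints: an on-premises S3-compatible source, AWS as target.
    src = boto3.client("s3", endpoint_url="https://objects.onprem.example.com")
    dst = boto3.client("s3")
    SRC_BUCKET, DST_BUCKET = "source-bucket", "destination-bucket"

    def scan(client, bucket):
        """Scan pass: return {key: etag} for every object in the bucket."""
        listing = {}
        for page in client.get_paginator("list_objects_v2").paginate(Bucket=bucket):
            for obj in page.get("Contents", []):
                listing[obj["Key"]] = obj["ETag"]
        return listing

    def copy_keys(keys):
        """Copy pass: naive whole-object copies (real tools stream and verify)."""
        for key in keys:
            body = src.get_object(Bucket=SRC_BUCKET, Key=key)["Body"].read()
            dst.put_object(Bucket=DST_BUCKET, Key=key, Body=body)

    # First copy: everything found by the initial scan.
    baseline = scan(src, SRC_BUCKET)
    copy_keys(baseline)

    # Incremental: re-scan and copy only what changed; repeat until the delta
    # is small, then run one final pass inside the cutover window.
    current = scan(src, SRC_BUCKET)
    copy_keys([k for k, etag in current.items() if baseline.get(k) != etag])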


Final Thoughts

Why am I so interested in these types of offerings? Part of it is that it reminds me of all the time I burnt through earlier in my career migrating data from various storage platforms to other storage platforms. One of the funny things about storage is that there’s rarely enough of it to service demand, and it rarely delivers the performance you need after it’s been in use for a few years. As such, there’s always some requirement to move data from one spot to another, and to keep that data intact in terms of its permissions and metadata.

Amazon’s S3 offering has been amazing in terms of bringing object storage front of mind for many storage consumers who had previously only used block or file storage. Some of those users are now discovering that, while S3 is great, it can be expensive if you haven’t accounted for egress costs, or you’ve started using a whole lot more of it than initially anticipated. Some companies simply have to take their lumps, as everything is done in public cloud. But for those organisations with some on-premises footprint, the idea of being able to do performance-oriented object storage in their own data centre holds a great deal of appeal. But how do you get it back on-premises in a reliable fashion? I believe that’s where Datadobi’s solution really shines.
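
The egress arithmetic is worth doing up front, too. As a rough, illustrative example (the rate here is an assumption for the sake of the sums, not a quote from any price list):

    tb_to_repatriate = 50                 # data you want to bring back on-premises
    cost_per_gb = 0.09                    # assumed per-GB internet egress rate, USD
    cost = tb_to_repatriate * 1024 * cost_per_gb
    print(f"~${cost:,.0f} in egress alone")   # ~$4,608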

I’m a fan of software that makes life easier for storage folk. Platform migrations can be a real pain to deal with, and are often riddled with risky propositions and daunting timeframes. Datadobi can’t necessarily change the laws of physics in a way that will keep your project manager happy, but it can do some stuff that means you won’t be quite as broken after a storage migration as you might have been previously. They already had a good story when it came to file storage migration, and the object-to-object story enhances it. Worth checking out.

OT – Upgrading From macOS Mojave To Catalina (The Hard Way)

This post is really about the boring stuff I do when I have a day off and isn’t terribly exciting. TL;DR I had some problems upgrading to Catalina, and had to start from scratch.


Background

I’ve had an Apple Mac since around 2008. I upgraded from a 24″ iMac to a 27″ iMac and was super impressed with the process of migrating between machines, primarily because of Time Machine’s ability to recover settings, applications, and data in a fairly seamless fashion. I can’t remember what version of macOS I started with (maybe Leopard?), but I’ve moved steadily through the last few versions with a minimal amount of fuss. I was running Mojave on my iMac late last year when I purchased a refurbished 2018 Mac mini. At the time, I decided not to upgrade to Catalina, as I’d had a few issues with my work laptop and didn’t need the aggravation. So I migrated from the iMac to the Mac mini and kept on keeping on with Mojave.

Fast forward to April this year, and the Mac mini gave up the ghost. With Apple shutting down its stores here in response to COVID-19, it was a two-week turnaround at the local repair place to get the machine fixed. In the meantime, I was able to use Time Machine to load everything onto a 2012 MacBook Pro that was being used sparingly. It was a bit clunky, but had an internal SSD and 16GB of RAM, so it could handle the basics pretty comfortably. When the Mac mini was repaired, I used Time Machine once again to move everything back. It’s important to note that this is everything (settings, applications, and data) that had been accumulated since 2008. So there’s a bit of cruft associated with this build. A bunch of 32-bit applications that I’d lost track of, widgets that were no longer really in use, and so on.


The Big Update

I took the day off on Friday last week. I’d been working a lot of hours since COVID-19 restrictions kicked in here, and I’d been filling my commuting time with day job work (sorry blog!). I thought it would be fun to upgrade the Mac mini to Catalina. I felt that things were in a reasonable enough state that I could work with what it had to offer, and I get twitchy when there’s an upgrade notification on the Settings icon. Just sitting there, taunting me.

I downloaded the installer and pressed on. No dice: my system volume wasn’t formatted with APFS. How could this be? Well, even though APFS has been around for a little while now, I’d been moving my installation across various machines. Back when the APFS conversion was part of the macOS upgrade, I was running an iMac with a spinning disk as the system volume, so it never prompted me to do that conversion. When I moved to the Mac mini, I didn’t do any macOS upgrade, so I guess it just kept working with the HFS+ volume. It seems a bit weird that Catalina doesn’t offer a workaround for this, but I may just have been looking in the wrong place. Now, there was a lot of chatter in the forums about rebooting into Recovery Mode and converting the drive to an APFS volume. No matter what I tried, I was unable to do this effectively (either using the Recovery Mode console with Mojave or with Catalina booting from USB). I followed articles like this one but just didn’t have the same experience. And when I erased the system drive and attempted to recover from Time Machine backups, it would re-erase the volume as HFS+. So, I don’t know, I guess I’m an idiot. The solution that finally worked for me was to erase the drive, format it as APFS, install Mojave from scratch, and recover from a Time Machine backup. Unfortunately, though, it only seemed to want to transfer around 800KB of settings data. The normal “wait a few hours while we copy your stuff” process just didn’t happen. Sod knows why, but what I did know was that I was really wasting my day off with this stuff.

I also ran into an issue trying to do the installation from USB. You can read about booting from external devices and the T2 security chip here, here, and here. I lost patience with the process and took a different approach.


Is That So Bad?

Not really. I have my Photos library and iTunes media on a separate volume. I have one email account that we have used POP with over the years, but I installed Thunderbird, recovered the profile from my Time Machine data, and modified profiles.ini to point to that profile (causing some flashbacks to my early days on a help desk supporting a Netscape user base). The other thing I had to do was recover my Plex database. You can read more on that here. It actually went reasonably well. I’d been storing my iPhone backups on a separate volume too, and had to follow this process to relocate those backup files. Otherwise, Microsoft, to its credit, has made the reinstallation process super simple with Microsoft 365. Once I had most everything set up again, I was able to perform the upgrade to Catalina.
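
For anyone attempting the same Thunderbird dance, the relevant stanza of profiles.ini looks roughly like this (the profile directory name here is a made-up example; yours will differ, and you point Path at the directory you recovered):

    [Profile0]
    Name=default
    IsRelative=1
    Path=Profiles/abcd1234.default
    Default=1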


Conclusion

If this process sounds like it was a bit of a pain, it was. I don’t know that Apple has necessarily dropped the ball in terms of usability in the last few years, but sometimes it feels like it. I think I just had really high expectations based on some good fortune I’d enjoyed over the past 12 years. I’m not sure what the term is exactly (post-purchase rationalisation, perhaps?), but it’s possible that because I’ve invested this much money in a product, I’m more forgiving of the issues associated with the product. Apple has done a great job historically of masking the complexity of technology from the end user. Sometimes, though, you’re going to come across odd situations that potentially push you down an odd path. That’s what I tell myself anyway as I rue the time I lost on this upgrade. Was anyone else’s upgrade to Catalina this annoying?

Random Short Take #37

Welcome to Random Short Take #37. Not a huge number of players have worn 37 in the NBA, but Metta World Peace did a few times. When he wasn’t wearing 15, or other odd numbers. But I digress. Let’s get random.

  • Pavilion Data recently added S3 capability to its platform. It’s based on a variant of MinIO, and adds an interesting dimension to what Pavilion Data has traditionally offered. Mellor provided some good coverage here.
  • Speaking of object storage, Dell EMC recently announced ECS 3.5. You can read more on that here. The architectural white paper has been updated to reflect the new version as well.
  • Speaking of Dell EMC, Preston posted a handy article on Data Domain Retention Lock and NetWorker. Have you pre-ordered Preston’s book yet? I’ll keep asking until you do.
  • Online events are all the rage at the moment, and two noteworthy events are coming up shortly: Pure//Accelerate and VeeamON 2020. Speaking of online events, we’re running a virtual BNEVMUG next week. Details on that here. ZertoCON Virtual is also a thing.
  • Speaking of Pure Storage, this article from Cody Hosterman on NVMe and vSphere 7 is lengthy, but definitely worth the read.
  • I can’t recall whether I mentioned that this white paper covering VCD on VCF 3.9 is available now, and I can’t be bothered checking. So here it is.
  • I’m not just a fan of Backblaze because of its cool consumer backup solution and object storage platform, I’m also a big fan because of its blog. Articles like this one are a great example of companies doing corporate culture right (at least from what I can see).
  • I have the impression that Datadobi has been doing some cool stuff recently, and this story certainly seems to back it up.

Brisbane (Virtual) VMUG – June 2020

[image: VMUG Express banner]

The June edition of the Brisbane VMUG meeting will be held online via Zoom on Tuesday 2nd June. We have speakers from VMware and StorageCraft presenting and it promises to be a great afternoon.

Here’s the agenda:

  • VMUG Intro (by me)
  • StorageCraft Presentation – Object Storage with Jack Alsop
  • VMware Presentation – vRealize Automation with Mark Foley
  • VMware Presentation – Project Pacific with Michael Francis
  • Q&A

The speakers have gone to great lengths to make sure this will be a fun and informative session and I’m really looking forward to hearing what they have to say. You can find out more information and register for the event here. I hope to see you there (online). Also, if you’re interested in sponsoring one of these events, please get in touch with me and I can help make it happen.