I’ll Be At Storage Field Day 20

Here’s some news that will get you excited. I’ll be virtually heading to the US this week for another Storage Field Day event. If you haven’t heard of the very excellent Tech Field Day events, you should check them out. I’m looking forward to time travel and spending time with some really smart people for a few days. It’s also worth checking back on the Storage Field Day 20 website during the event (August 5 – 7) as there’ll be video streaming and updated links to additional content. You can also see the list of delegates and event-related articles that have been published.

I think it’s a great line-up of both delegates and presenting companies this time around. I know most of them, and a few more companies may still be added to the line-up. I’ll update this post if and when they’re announced.

I’d like to publicly thank in advance the nice folks from Tech Field Day who’ve seen fit to have me back, as well as my employer for letting me take time off to attend these events. Also big thanks to the companies presenting. It’s going to be a lot of fun. And a little weird to be doing this virtually, rather than in person. But I’m really looking forward to this, even if it means doing the night shift for a few days. If you’d like to follow along at home, here’s the current schedule (all times are in US/Pacific).

Wednesday, Aug 5 8:00-10:00 Pensando Presents at Storage Field Day 20
Wednesday, Aug 5 11:00-13:00 Cisco Presents at Storage Field Day 20
Thursday, Aug 6 8:00-9:00 Qumulo Presents at Storage Field Day 20
Thursday, Aug 6 10:00-12:00 Nebulon Presents at Storage Field Day 20
Thursday, Aug 6 13:00-14:00 Intel Presents at Storage Field Day 20
Friday, Aug 7 8:00-9:30 VAST Data Presents at Storage Field Day 20
Friday, Aug 7 11:00-13:00 Pure Storage Presents at Storage Field Day 20

Random Short Take #40

Welcome to Random Short Take #40. Quite a few players have worn 40 in the NBA, including the flat-top king Frank Brickowski. But my favourite player to wear number 40 was the Reign Man – Shawn Kemp. So let’s get random.

  • Dell EMC PowerProtect Data Manager 19.5 was released in early July and Preston covered it pretty comprehensively here.
  • Speaking of data protection software releases and enhancements, we’ve barely recovered from the excitement of Veeam v10 being released and Anthony is already talking about v11. More on that here.
  • Speaking of Veeam, Rhys posted a very detailed article on setting up a Veeam backup repository on NFS using a Pure Storage FlashBlade environment.
  • Sticking with the data protection theme, I penned a piece over at Gestalt IT for Druva talking about OneDrive protection and why it’s important.
  • OpenDrives has some new gear available – you can read more about that here.
  • The nice folks at Spectro Cloud recently announced that its first product is generally available. You can read the press release here.
  • William Lam put out a great article on passing through the integrated GPU on Apple Mac minis with ESXi 7.
  • Time passes on, and Christian recently celebrated 10 years on his blog, which I think is a worthy achievement.

Happy Friday!

StorCentric Announces Nexsan Unity 3300 And 7900

StorCentric recently announced new Nexsan Unity storage arrays. I had the opportunity to speak to Surya Varanasi, CTO of StorCentric, about the announcement, and thought I’d share some thoughts here.

 

Speeds And Feeds

[image courtesy of Nexsan]

The new Unity models announced are the 3300 and 7900. Both models use two controllers, with capacities ranging from 1.6PB to 6.7PB. They both use Intel Xeon E5 v4 family processors, and have between 256GB and 448GB of system RAM. There are hybrid storage options available, and both systems support RAID 5, 6, and 10. You can access the spec sheet here.

 

Use Cases

Unbreakable

One of the more interesting use cases we discussed was what StorCentric refer to as “Unbreakable Backup”. The idea behind Nexsan Unbreakable Backup is that you can use your preferred data protection vendor to send backup data to a Unity array. This data can then be replicated to Nexsan’s Assureon platform. The cool thing about the Assureon is that it’s locked down. So even if you’re hit with a ransomware attack, it’s going to be mighty hard for the bad guys to crack the Assureon platform as well, as the Key Management System is hosted inside StorCentric, and end users are given minimal privileges.
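Assureon’s locking mechanism is StorCentric’s own, but if you want a feel for the general “write once, and nobody gets to delete it” idea, S3 Object Lock is a rough public cloud analogy. Here’s a minimal sketch using boto3 – the bucket and key names are hypothetical, and the target bucket needs to have been created with Object Lock enabled:

```python
import datetime

import boto3

s3 = boto3.client("s3")

# Write a backup image with a compliance-mode retention date. Until that
# date passes, the object version can't be deleted or overwritten -- not
# even by the account root user.
with open("2020-08-01.bak", "rb") as backup:
    s3.put_object(
        Bucket="example-backup-target",  # hypothetical bucket (Object Lock enabled)
        Key="backups/2020-08-01.bak",    # hypothetical key
        Body=backup,
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=datetime.datetime(2021, 8, 1),
    )
```

Assureon layers key management and minimal user privileges on top of this kind of immutability, but the end goal is the same: backup data the bad guys can’t quietly encrypt or delete.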

Data Migration

There’s also a Data Mobility Suite coming at the end of Q3, including:

  • Cloud Connector, giving you the ability to replicate data from Unity to 18 public clouds, including Amazon and Google (for unstructured data and cloud-based backup); and
  • Flexible Data Migrations – streamlining Unity implementations and migrating data from heterogeneous systems.

 

Thoughts and Further Reading

I’ve written enthusiastically about Assureon in the past, so it was nice to revisit the platform via this announcement. Ransomware is a scary prospect for many organisations, so a system that integrates nicely with your data protection solution to help protect backup data seems like a pretty good idea. Sure, having to replicate the data to a second system might seem like an unnecessary expense, but organisations should be assessing the value of that investment against the cost of having corporate data potentially irretrievably corrupted. Insurance against ransomware attacks probably seems like something that you shouldn’t need to spend money on, until you need to spend money recovering, or sending bitcoin to some clown because you need your data back. It’s not appealing by any stretch, but it’s also important to take precautions wherever possible.

Midrange storage is by no means a sexy topic to talk about. In my opinion it’s a well understood architecture that most tier 1 companies do pretty well nowadays. But that’s the beauty of the midrange system in a lot of ways – it’s a well understood architecture. So you generally know what you’re getting with hybrid (or all-flash) dual controller systems. The Unity range from Nexsan is no different, and that’s not a bad thing. There are a tonne of workloads in the enterprise today that aren’t necessarily well suited to cloud (for the moment), and just need some block or file storage and a bit of resiliency for good measure. The Unity series of arrays from Nexsan offer a bunch of useful features, including tiering and a variety of connectivity options. It strikes me that these arrays are a good fit for a whole lot of workloads that live in the data centre, from enterprise application hosting through to data protection workloads. If you’re after a reliable workhorse, it’s worth looking into the Unity range.

Rancher Labs Announces Longhorn General Availability

This happened a little while ago, and the news about Rancher Labs has since shifted to SUSE’s announcement regarding its intent to acquire Rancher Labs. Nonetheless, I had a chance to speak to Sheng Liang (Co-founder and CEO) about Longhorn’s general availability, and thought I’d share some thoughts here.

 

What Is It?

Described by Rancher Labs as “an enterprise-grade, cloud-native container storage solution”, Longhorn has been in development for around 6 years, in beta for a year, and is now generally available. It comprises around 40k lines of Golang code, and each volume is a set of independent micro-services, orchestrated by Kubernetes.

Liang described this to me as “enterprise-grade distributed block storage for K8S”, and the features certainly seem to line up with those expectations. There’s support for:

  • Thin-provisioning, snapshots, backup, and restore
  • Non-disruptive volume expansion
  • Cross-cluster disaster recovery volume with defined RTO and RPO
  • Live upgrade of Longhorn software without impacting running volumes
  • Full-featured Kubernetes CLI integration and standalone UI
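To make that a little more concrete, here’s a rough sketch of what consuming Longhorn looks like from the Kubernetes side, using the official Python client. This is my own illustration rather than anything from Rancher Labs, and it assumes Longhorn has been installed with its default “longhorn” StorageClass:

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

# Request a 10GiB Longhorn-backed volume, just like you would from any
# other CSI-provisioned storage. Names and sizes here are hypothetical.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-volume"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="longhorn",  # Longhorn's default StorageClass
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```

The point being that from the consumer’s perspective it’s just another StorageClass – the replication and resilience happen behind the scenes.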

From a licensing perspective, Longhorn is free to download and use, and customers looking for support can purchase a premium support model with the same SLAs provided through Rancher Support Services. There are no licensing fees, and node-based subscription pricing keeps costs to a minimum.

Use Cases

Why would you use it?

  • Bare metal workloads
  • Edge persistent storage
  • Geo-replicated storage for Amazon EKS
  • Application backup and disaster recovery

 

Thoughts

One of the barriers to entry when moving from traditional infrastructure to cloud-native is that concepts seem slightly different to the comfortable slippers you may have been used to in enterprise infrastructure land. The neat thing about Longhorn is that it leverages a lot of the same concepts you’ll see in traditional storage deployments to deliver resilient and scalable persistent storage for Kubernetes.

This doesn’t mean that Rancher Labs is trying to compete with traditional storage vendors like Pure Storage and NetApp when it comes to delivering persistent storage for cloud workloads. Liang acknowledges that these shops can offer more storage features than Longhorn can. Nonetheless, there seems to be a requirement for this kind of accessible and robust solution. Plus, it’s 100% open source.

Rancher Labs already has a good story to tell when it comes to making Kubernetes management a whole lot simpler. The addition of Longhorn simply improves that story further. If you’re feeling curious about Longhorn and would like to know more, this website has a lot of useful information.

StorONE Announces AFA.next

StorONE recently announced the All-Flash Array.next (AFAn). I had the opportunity to speak to George Crump (StorONE Chief Marketing Officer) about the news, and thought I’d share some brief thoughts here.

 

What Is It? 

It’s a box! (Sorry, I’ve been re-watching Silicon Valley with my daughter recently.)

[image courtesy of StorONE]

More accurately, it’s an Intel server with Intel Optane and Intel QLC storage, powered by StorONE’s software.

S1:Tier

S1:Tier is StorONE’s tiering solution. It operates between a high and a low watermark. Once the Optane tier fills to the high watermark, data is written out, sequentially, to QLC. The neat thing is that when you need to access data residing on QLC, you don’t necessarily need to move it all back to the Optane tier. Rather, read requests can be served directly from QLC. StorONE call this a multi-tier capability, because you can then move data to cloud storage for long-term retention if required.

[image courtesy of StorONE]
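StorONE hasn’t published the internals of S1:Tier, so treat this as nothing more than a toy sketch of the watermark behaviour described above – the class, watermark values, and method names are all invented for illustration:

```python
class Tier:
    """A toy storage tier: extent_id -> (size_gb, last_access_time)."""

    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.extents = {}

    def used_fraction(self):
        return sum(size for size, _ in self.extents.values()) / self.capacity_gb

    def coldest_first(self):
        # Extents with the oldest last-access times are the best destage candidates.
        return sorted(self.extents, key=lambda e: self.extents[e][1])


def maybe_destage(optane, qlc, high=0.80, low=0.60):
    """Once Optane usage crosses the high watermark, sequentially flush the
    coldest extents to QLC until usage drops back under the low watermark.
    Destaged data is then read directly from QLC -- no promotion required."""
    if optane.used_fraction() < high:
        return
    for extent_id in optane.coldest_first():
        qlc.extents[extent_id] = optane.extents.pop(extent_id)
        if optane.used_fraction() <= low:
            break
```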

S1:HA

Crump noted that the Optane drives are single-ported, leading some customers to look at highly available configurations. These are catered for with a variation of S1:HA, where the HA solution is now a synchronous mirror between two stacks.

 

Thoughts and Further Reading

I’m not just a fan of StorONE because the company occasionally throws me a few dollarydoos to keep the site running. I’m a fan because the folks over there do an awful lot of storage type stuff on what is essentially commodity hardware, and they’re getting results that are worth writing home about, with a minimum of fuss. The AFAn uses Optane as a storage tier, not just read cache, so you get all of the benefit of Optane write performance (many, many IOPS). It has the resilience and data protection features you see in many midrange and enterprise arrays today (namely vRAID, replication, and snapshots). Finally, it has varying support for all three use cases (block, file, and object), so there’s a good chance your workload will fit on the box.

More and more vendors are coming to market with Optane-based storage solutions. It still seems that only a small number of them are taking full advantage of Optane as a write medium, instead focusing on its benefit as a read tier. As I mentioned before, Crump and the team at StorONE have posted some pretty decent numbers coming out of the AFAn. I think the best thing is that it’s now available as a configuration item on the StorONE TRUprice site as well, so you can see for yourself how much the solution costs. If you’re after a whole lot of performance in a small box, this might be just the thing. You can read more about the solution and check out the lab report here. My friend Max wrote a great article on the solution that you can read here.

Backup Awareness Month, Backblaze, And A Simple Question

Last month was Backup Awareness Month (at least according to Backblaze). It’s not formally recognised by any government entities, and it’s more something that was made up by Backblaze. But I’m a big fan of backup awareness, so I endorse making up stuff like this. I had a chance to chat to Yev over at Backblaze about the results of a survey Backblaze runs annually and thought I’d share my thoughts here. Yes, I know I’m a bit behind, but I’ve been busy.

As I mentioned previously, as part of the Backup Awareness Month celebrations, Backblaze reaches out to folks in the US and asks a simple question: “How often do you back up all the data on your computer?”. The answers reveal some interesting facts about consumer backup habits. There has been a welcome decrease in the number of people stating that they have never backed up their data (down to around one fifth of respondents), and the frequency of backups has increased.

Other takeaways from the results include:

  • Almost 50% of people lose their data each year;
  • 41% of people do not completely understand the difference between cloud backup and cloud storage;
  • Millennials are the generation most likely to back up their data daily; and
  • Seniors (65+) have gone from being the best age group at backing up data to the worst.

 

Thoughts

I bang on a lot about how important backup (and recovery) is across both the consumer and enterprise space. Surveys like this are interesting because they highlight, I think, the importance of regularly backing up our data. We’re making more and more of it, and it’s not magically protected by the benevolent cloud fairies, so it’s up to us to protect it. Particularly if it’s important to us. It’s scary to think that one in two people are losing data on a regular basis, and scarier still that most folks don’t understand the distinction between cloud storage and cloud backup. I was surprised that Millennials are most likely to backup their data, but my experience with younger generations really only extends to my children, so they’re maybe not the best indicator of what the average consumer is doing. It’s also troubling that older folk are struggling to keep on top of backups. Anecdotally that lines up with my experience as well. So I think it’s great that Yev and the team at Backblaze have been on something of a crusade to educate people about cloud backup and how it can help them. I love that the company is all about making it easier for consumers, not harder.

As an industry we need to be better at making things simple for people to consume, and more transparent in terms of what can be achieved with technology. I know this blog isn’t really focused on consumer technology, and it might seem a bit silly that I carry on a bit about consumer backup. But you all have data stored some place or another that means something to you. And I know not all of you are protecting it appropriately. Backup is like insurance. It’s boring. People don’t like paying for it. But when something goes bang, you’ll be glad you have it. If these kinds of posts can raise some awareness, and get one more person to protect the data that means something to them in an effective fashion, then I’ll be happy with that.

Komprise Announces Cloud Capability

Komprise recently made some announcements around extending its product to cloud. I had the opportunity to speak to Krishna Subramanian (President and COO) about the news and I thought I’d share some of my thoughts here.

 

The Announcement

Komprise has traditionally focused on unstructured data stored on-premises. It has now extended the capabilities of Komprise Intelligent Data Management to include cloud data. There’s currently support for Amazon S3 and Wasabi, with Google Cloud, Microsoft Azure, and IBM support coming soon.

 

Benefits

So what do you get with this capability?

Analyse data usage across cloud accounts and buckets easily

  • Single view across cloud accounts, buckets, and storage classes
  • Analyse AWS usage by various metrics accurately based on access times
  • Explore different data archival, replication, and deletion strategies with instant cost projections
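To give you a feel for the gap being filled here, this is roughly what a do-it-yourself version of that analysis looks like with boto3 (the bucket name is hypothetical). Note that a bucket listing only exposes LastModified, not last access time – which is exactly the insufficient-data problem I touch on below:

```python
from collections import Counter

import boto3

s3 = boto3.client("s3")
bytes_by_class = Counter()

# Walk the whole bucket and total capacity per storage class.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="example-unstructured-data"):
    for obj in page.get("Contents", []):
        bytes_by_class[obj.get("StorageClass", "STANDARD")] += obj["Size"]

for storage_class, total in bytes_by_class.items():
    print(f"{storage_class}: {total / 1024**3:.1f} GiB")
```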

Optimise AWS costs with analytics-driven archiving

  • Continuously move objects by policy across Cloud Network Attached Storage (NAS), Amazon S3, Amazon S3 Standard-IA, Amazon S3 Glacier, and Amazon S3 Glacier Deep Archive
  • Minimise costs and penalties by moving data at the right time based on access patterns
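The native AWS building block being automated here is the S3 lifecycle policy. A hand-rolled equivalent via boto3 might look something like this (the bucket name and transition ages are assumptions on my part):

```python
import boto3

s3 = boto3.client("s3")

# Age out objects through progressively cheaper storage classes.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-unstructured-data",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-cold-objects",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                    {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
                ],
            }
        ]
    },
)
```

The key difference is that a native lifecycle rule transitions objects based on age since creation, whereas Komprise moves data based on how it’s actually being accessed.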

Bridge to Big Data/Artificial Intelligence (AI) projects

  • Create virtual data lakes for Big Data, AI – search for exactly what you need across cloud accounts and buckets
  • Native access to moved data on each storage class with full data fidelity

Create Cyber Resiliency with AWS

  • Copy S3 data to AWS to protect from ransomware with an air-gapped copy

[image courtesy of Komprise]

 

Why Is This Good?

The move to cloud storage hasn’t been all beer and skittles for enterprise. Storing large amounts of data in public cloud presents enterprises with a number of challenges, including:

  • Poor visibility – “Bucket sprawl”
  • Insufficient data – Cloud does not easily track last access / data use
  • Cost complexity – Manual data movement can lead to unexpected retrieval cost surprises
  • Labour – Manually moving data is error-prone and time-consuming

Sample Use Cases

Some other reasons you might want to have Komprise manage your data include:

  • Finding ex-employee data stored in buckets.
  • Data migration – you might want to take a copy of your data from Wasabi to AWS.

There’s support for all unstructured data (file and object), so the benefits of Komprise can be enjoyed regardless of how you’re storing your unstructured data. It’s also important to note that there’s no change to the existing licensing model; you’re just now able to use the product on public cloud storage.

 

Thoughts

Effective data management remains a big challenge for enterprises. It’s no secret that public cloud storage is really just storage that lives in another company’s data centre. Sure, it might be object storage, rather than file-based, but it’s still just a bunch of unstructured data sitting in another company’s data centre. The way you consume that data may have changed, and certainly the way you pay for it has changed, but fundamentally it’s still your unstructured data sitting on a share or a filesystem. The problems you had on-premises, though, still manifest in public cloud environments (data sprawl, capacity issues, and so on). That’s why the Komprise solution seems so compelling when it comes to managing your on-premises storage consumption, and extending that capability to cloud storage is a no-brainer. When it comes to storing unstructured data, it’s frequently a bin fire of some sort or another. The reason is that it doesn’t scale well. I don’t mean the storage doesn’t scale – you can store petabytes all over the place if you like. But if you’re still hand-crafting your shares and manually moving data around, you’ll notice that it becomes more and more time consuming as time goes on (and your data storage needs grow).

One way to address this challenge is to introduce a level of automation, which is something that Komprise does quite well. If you’ve got many terabytes of data stored on-premises and in AWS buckets (or you’re looking to move some old data from on-premises to the cloud) and you’re not quite sure what it’s all for or how best to go about it, Komprise can certainly help you out.

Brisbane (Virtual) VMUG – July 2020


The July edition of the Brisbane VMUG meeting will be held online via Zoom on Tuesday 28th July from 4pm to 6pm. We have speakers from Runecast and VMware presenting and it promises to be a great afternoon.

Here’s the agenda:

  • VMUG Intro (by me)
  • Runecast Presentation – Predict, Upgrade and Secure your VMware environment proactively with Runecast
  • VMware Presentation – Deploying Tanzu Kubernetes Grid with vRealize Automation, with Michael Francis
  • Q&A

The speakers have gone to great lengths to make sure this will be a fun and informative session and I’m really looking forward to hearing what they have to say. You can find out more information and register for the event here. I hope to see you there (online). Also, if you’re interested in sponsoring one of these events, please get in touch with me and I can help make it happen.

Random Short Take #39

Welcome to Random Short Take #39. Not a huge number of players have worn 39 in the NBA, and I’m not going to pretend I’m any real fan of The Dwightmare. But things are tough all around, so let’s remain optimistic and push through to number 40. Anyway, let’s get random.

  • VeeamON 2020 was online this week, and Anthony Spiteri has done a great job of summarising the major technical session announcements here.
  • I’ve known Howard Marks for a while now, and always relish the opportunity to speak with him when I can. This post is pretty hilarious, and I’m looking forward to reading the followup posts.
  • This is a great article from Alastair Cooke on COVID-19 and what En-Zed has done effectively to stop the spread. It was interesting to hear his thoughts on returning to the US, and I do agree that it’s going to be some time until I make the trip across the Pacific again.
  • Sometimes people get crazy ideas about how they might repurpose some old bits of technology. It’s even better when they write about their experiences in doing so. This article on automating an iPod Hi-Fi’s volume control over at Six Colors was fantastic.
  • Chris M. Evans put out a typically thought-provoking piece on data migration challenges recently that I think is worth checking out. I’ve been talking a lot to customers that are facing these challenges on a daily basis, and it’s interesting to see how, regardless of the industry vertical they operate in, it’s sometimes just a matter of the depth varying, so to speak.
  • I frequently bump into Ray Lucchesi at conferences, and he knows a fair bit about what does and doesn’t work. This article on his experiences recently with a number of virtual and online conferences is the epitome of constructive criticism.
  • Speaking of online conferences, the Australian VMUG UserCon will be virtual this year and will be held on the 30th July. You can find out more and register here.
  • Finally, if you’ve spent any time with me socially, you’ll know I’m a basketball nut. And invariably I’ll tell you that Deftones is my favouritest band ever. So it was great to come across this article about White Pony on one of my favourite sports (and popular culture) websites. If you’re a fan of Deftones, this is one to check out.

 

Random Short Take #38

Welcome to Random Short Take #38. Not a huge number of players have worn 38 in the NBA, and I’m not going to pretend I was ever a Kwame Brown fan. Although it did seem like he had a tough time of it. Anyway, let’s get random.

  • Ransomware is the new hotness. Or, rather, protecting storage systems from ransomware is the new hotness. My man Chin-Fah had a writeup on that here. It’s not a matter of if, but rather when you’ll run into a problem. It’s been interesting to see the various approaches being taken by the storage vendors and the data protection companies.
  • Applications for the vExpert program intake for the second half of 2020 are open, but closing soon. It’s a fantastic program to be a part of, so if you think you’ve got the goods, you can apply here. I also recommend this article from Christopher on his experiences.
  • This was a great article from Alastair on some of the differences between networking with AWS and VMC on AWS. As someone who works for a VMware Cloud Provider, I can confirm that NSX (T or V, I don’t care) has a whole slew of capabilities and a whole slew of integration challenges.
  • Are you Zoomed out? I am. Even when you think the problem can’t be the network, it might just be the network (I hope my friends in networking appreciate that it’s not always the storage). John Nicholson posted a typically comprehensive overview of how your bandwidth might be one of the things keeping you from demonstrating excellent radio voice on those seemingly endless meetings you’re doing at the moment. It could also be that you’re using crap audio devices too, but I think John’s going to cover that in the future.
  • Scale Computing has a good story to tell about what it’s been doing with a large school district in the U.S. Read more about that here.
  • This is one of those promotions aimed at my friends in North America more than folks based where I am, but I’m always happy to talk about deals on data protection. StorCentric has launched its “Retrospect Dads & Grads Promotion” offering a free 90-Day subscription license for every Retrospect Backup product. You can read more about that here.
  • Pure//Accelerate Online was this week, and Max did a nice write-up on Pure Storage File Services over at Gestalt IT.
  • Rancher Labs recently announced the general availability of Longhorn (a cloud-native container storage solution). I’m looking forward to digging in to this a bit more over the next little while.