Pure Storage – Pure1 Makes Life Easy

Disclaimer: I recently attended Storage Field Day 22.  Some expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Pure Storage recently presented at Storage Field Day 22. You can see videos of the presentation here, and download my rough notes from here.

 

What You Need

If you’ve spent any time working with storage infrastructure, you’ll know that it can be a pain to manage and operate in an efficient manner. Pure1 has always been a great tool to manage your Pure Storage fleet. But Pure has taken that idea of collecting and analysing a whole bunch of telemetry data even further. So what is it you need?

Management and Observation

  • Setup needs to be easy to reduce risk and accelerate delivery
  • Alerting needs to be predictive to prevent downtime
  • Management has to be possible from anywhere to be responsive

Planning and Upgrades

  • Determining when to buy requires forecasting to manage costs
  • Workload optimisations should be intuitive to help keep users happy
  • Non-disruptive upgrades are critical to avoid downtime

Purchasing and Scaling

  • Resources should be available as a service for on-demand scaling
  • Data service purchasing should be self-service for speed and simplicity
  • Hybrid cloud should be available from one vendor, in one place

 

Pure1 Has It

Sounds great, so how do you get that with Pure1? Pure breaks it down into three key areas:

  • Optimise
  • Recommend
  • Empower

Optimise

Reduce the time you spend on management and take the guesswork out of support. With aggregated fleet / group metrics, you get:

  • Capacity utilisation
  • Performance
  • Data reduction savings
  • Alerts and support cases

[image courtesy of Pure Storage]

Recommend

Every organisation wants to improve the speed and accuracy of resource planning while enhancing user experience. Pure1 provides the ability to use “What-If” modelling to stay ahead of demands.

  • Select application to be added
  • Provide sizing details
  • Get recommendations based on Pure best practices and AI analysis of our telemetry databases

[image courtesy of Pure Storage]

The process is alarmingly simple:

  • Pick a Workload Type – Choose a preset application type from a list of the most deployed enterprise applications, including SAP HANA, Microsoft SQL, and more.
  • Set Application Parameters – Define the size of the deployment. Attributes are auto-populated based on Pure1 analytics across its global database. Adjust as needed for your environment.
  • Simulate Deployment – Identify where you want to deploy the application data. Pure1 analyses the impact on performance and capacity.
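
To give a sense of what that simulation boils down to (the numbers and logic below are invented purely for illustration – they’re not Pure1’s actual analytics, which draw on its global telemetry), here’s a toy what-if check in Python: take a workload preset, scale it by the number of instances, and see whether a given array has the headroom.

    # Toy what-if sizing check. The per-unit figures are invented for this
    # example only; Pure1 derives the real ones from fleet-wide telemetry.
    WORKLOAD_PRESETS = {
        "sql":      {"gib_per_db": 500,  "iops_per_db": 4000},
        "sap_hana": {"gib_per_db": 2048, "iops_per_db": 10000},
    }

    def simulate(array_free_gib, array_free_iops, workload, count):
        """Return whether a hypothetical deployment fits on the target array."""
        preset = WORKLOAD_PRESETS[workload]
        needed_gib = preset["gib_per_db"] * count
        needed_iops = preset["iops_per_db"] * count
        return {
            "fits": needed_gib <= array_free_gib and needed_iops <= array_free_iops,
            "capacity_needed_gib": needed_gib,
            "iops_needed": needed_iops,
        }

    # Could this array take four more SQL databases?
    print(simulate(array_free_gib=10000, array_free_iops=50000, workload="sql", count=4))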

Empower

Build your hybrid-cloud infrastructure your way and on demand without the headaches of legacy purchasing. Pure has a great story to tell when it comes to Pure as-a-Service and OpEx acquisition models.

 

Thoughts and Further Reading

In a previous job, I was a Pure1 user and found the overall experience to be tremendous. Much has changed with Pure1 since I first installed it on my phone, and it’s my opinion that the integration and usefulness of the service have both increased exponentially. The folks at Pure have always understood that it’s not enough to deliver high-performance storage solutions built on All-Flash. This is considered table-stakes nowadays. Instead, Pure has done a great job of focussing on the management and operation of these high-performance storage solutions to ensure that users get what they need from the system. I sound like a broken record, I’m sure, but it’s this relentless focus on the customer experience that I think sets Pure apart from many of its competitors.

Most of the tier 1 storage vendors have had a chop at delivering management and operations systems that make extensive use of field telemetry data and support knowledge to deliver proactive support for customers. Everyone is talking about how they use advanced analytics, AI / ML, and so on to deliver a great support experience. But I think it’s the other parts of the equation that really bring it together nicely for Pure: the “evergreen” hardware lifecycle options, the consumption flexibility, and the focus on constantly improving the day 2 operations experience that’s required when managing storage at scale in the enterprise. Add to that the willingness to embrace hybrid cloud technologies, and the expanding product portfolio, and I’m looking forward to seeing what’s next for Pure. Finally, shout out to Stan Yanitskiy for jumping in at the last minute to present when his colleague had a comms issue – I think the video shows that he handled it like a real pro.

Intel – It’s About Getting The Right Kind Of Fast At The Edge

Disclaimer: I recently attended Storage Field Day 22.  Some expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Intel recently presented at Storage Field Day 22. You can see videos of the presentation here, and download my rough notes from here.

 

The Problem

A lot of countries have used lockdowns as a way to combat the community transmission of COVID-19. Apparently, this has led to an uptick in the consumption of streaming media services. If you’re somewhat familiar with streaming media services, you’ll understand that your favourite episode of Hogan’s Heroes isn’t being delivered from a giant storage device sitting in the bowels of your streaming media provider’s data centre. Instead, it’s invariably being delivered to your device from a content delivery network (CDN) node located much closer to you.

 

Content Delivery What?

CDNs are not a new concept. The idea is that you have a bunch of web servers geographically distributed delivering content to users who are also geographically distributed. Think of it as a way to cache things closer to your end users. There are many reasons why this can be a good idea. Your content will load faster for users if it resides on servers in roughly the same area as them. Your bandwidth costs are generally a bit cheaper, as you’re not transmitting as much data from your core all the way out to the end user. Instead, those end users are getting the content from something close to them. You can also more easily deliver multiple versions of content (in terms of resolution). It can also be beneficial in terms of resiliency and availability – an outage on one part of your network, say in Palo Alto, doesn’t necessarily need to impact end users living in Sydney. Cloudflare does a fair bit with CDNs, and there’s a great overview of the technology here.
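
If you want to picture the caching part in code, here’s a minimal sketch in Python of what an edge node does conceptually: serve from a local cache when the content is fresh, and only go back to the origin on a miss or when an entry has expired. The origin URL and TTL are made-up values, and real CDNs are obviously far more sophisticated than this.

    import time
    import urllib.request

    ORIGIN = "https://origin.example.com"   # hypothetical origin server
    TTL_SECONDS = 300                       # how long an object stays "fresh" at the edge

    _cache = {}  # path -> (fetched_at, body)

    def get(path):
        """Serve from the edge cache if fresh, otherwise fetch from the origin."""
        now = time.time()
        entry = _cache.get(path)
        if entry and now - entry[0] < TTL_SECONDS:
            return entry[1]                          # cache hit - nothing leaves the edge
        with urllib.request.urlopen(ORIGIN + path) as resp:
            body = resp.read()                       # cache miss - one trip back to the core
        _cache[path] = (now, body)
        return body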

 

Isn’t All Content Delivery The Same?

Not really. As Intel covered in its Storage Field Day presentation, there are some differences between the performance requirements of video on demand and live-linear streaming CDN solutions.

Live-Linear Edge Cache

Live-linear video streaming is similar to the broadcast model used in television. It’s basically programming content streamed 24/7, rather than stuff that the user has to search for. Several minutes of content are typically cached to accommodate out-of-sync users and pause / rewind activities. You can read a good explanation of live-linear streaming here.

[image courtesy of Intel]

In the example above, Intel Optane PMem was used to address the needs of live-linear streaming.

  • Live-linear workloads consume a lot of memory capacity to maintain a short-lived video buffer.
  • Intel Optane PMem is less expensive than DRAM.
  • Intel Optane PMem has extremely high endurance, to handle frequent overwrite.
  • Flexible deployment options – Memory Mode or App-Direct, consuming zero drive slots.

With this solution, Intel was able to achieve better channel and stream density per server than with DRAM-based solutions.
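
The “short-lived video buffer” is easy to picture in code. The sketch below is my own illustration in Python (it has nothing to do with Intel’s actual implementation): a fixed-size buffer keeps the last few minutes of segments for a channel and continually overwrites the oldest ones, which is exactly the frequent-overwrite pattern that rewards PMem’s endurance.

    from collections import deque

    class LiveLinearBuffer:
        """Keep only the most recent video segments for a channel.

        Old segments are evicted as new ones arrive, so the same memory
        (or PMem) region is overwritten continuously.
        """

        def __init__(self, segment_seconds=4, window_seconds=300):
            # e.g. 4-second segments, 5 minutes of cache to cover pause/rewind
            self.capacity = window_seconds // segment_seconds
            self.segments = deque(maxlen=self.capacity)

        def ingest(self, segment_id, data):
            # appending beyond maxlen silently drops the oldest segment
            self.segments.append((segment_id, data))

        def read(self, segment_id):
            # out-of-sync viewers can still fetch a slightly older segment
            for sid, data in self.segments:
                if sid == segment_id:
                    return data
            return None  # too old - it fell out of the buffer

    buf = LiveLinearBuffer()
    for i in range(100):
        buf.ingest(i, b"...")            # placeholder payload
    print(buf.read(99) is not None)      # recent segment is still cached
    print(buf.read(0) is None)           # oldest segments have been overwritten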

Video on Demand (VoD)

VoD providers typically offer a large library of content allowing users to view it at any time (e.g. Netflix and Disney+). VoD servers are a little different to live-linear streaming CDNs. They:

  • Typically require large capacity and drive fanout for performance / failure domains; and
  • Have a read-intensive workload, with typically large IOs.

[image courtesy of Intel]

 

Thoughts and Further Reading

I first encountered the magic of CDNs years ago when working in a data centre that hosted some Akamai infrastructure. Windows Server updates were super zippy, and it actually saved me from having to spend a lot of time standing in the cold aisle. Fast forward about 15 years, and CDNs are being used for all kinds of content delivery on the web. With whatever the heck this is in terms of the new normal, folks are putting more and more strain on those CDNs by streaming high-quality, high-bandwidth TV and movie titles into their homes (except in backwards places like Australia). As a result, content providers are constantly searching for ways to tweak the throughput of these CDNs to serve more and more customers, and deliver more bandwidth to those users.

I’ve barely skimmed the surface of how CDNs help providers deliver content more effectively to end users. What I did find interesting about this presentation was that it reinforced the idea that different workloads require different infrastructure solutions to deliver the right outcomes. It sounds simple when I say it like this, but I guess I’ve thought about streaming video CDNs as being roughly the same all over the place. Clearly they aren’t, and it’s not just a matter of jamming some SSDs into 1RU servers and hoping that your content will be delivered faster to punters. It’s important to understand that Intel Optane PMem and Intel 3D NAND SSDs can give you different results depending on what you’re trying to do, with PMem arguably giving you better value for money (per GB) than DRAM. There are some great papers on this topic available on the Intel website. You can read more here and here.

Random Short Take #60

Welcome to Random Short Take #60.

  • VMware Cloud Director 10.3 went GA recently, and this post will point you in the right direction when it comes to planning the upgrade process.
  • Speaking of VMware products hitting GA, VMware Cloud Foundation 4.3 became available about a week ago. You can read more about that here.
  • My friend Tony knows a bit about NSX-T, and certificates, so when he bumped into an issue with NSX-T and certificates in his lab, it was no big deal to come up with the fix.
  • Here’s everything you wanted to know about creating an external bootable disk for use with macOS 11 and 12 but were too afraid to ask.
  • I haven’t talked to the good folks at StarWind in a while (I miss you Max!), but this article on the new All-NVMe StarWind Backup Appliance by Paolo made for some interesting reading.
  • I loved this article from Chin-Fah on storage fear, uncertainty, and doubt (FUD). I’ve seen a fair bit of it slung about having been a customer and partner of some big storage vendors over the years.
  • This whitepaper from Preston on some of the challenges with data protection and long-term retention is brilliant and well worth the read.
  • Finally, I don’t know how I came across this article on hacking PlayStation 2 machines, but here you go. Worth a read if only for the labels on some of the discs.

Fujifilm Object Archive – Not Your Father’s Tape Library

Disclaimer: I recently attended Storage Field Day 22.  Some expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Fujifilm recently presented at Storage Field Day 22. You can see videos of the presentation here, and download my rough notes from here.

 

Fujifilm Overview

You’ve heard of Fujifilm before, right? They do a whole bunch of interesting stuff – batteries, cameras, copiers. Nami Matsumoto, Director of DMS Marketing and Operations, took us through some of Fujifilm’s portfolio. Fujifilm’s slogan is “Value From Innovation”, and it certainly seems to be looking to extract maximum value from its $1.4B annual spend on research and development. The Recording Media Products Division is focussed on helping “companies future proof their data”.

[image courtesy of Fujifilm]

 

The Problem

The challenge, as always (it seems), is that data growth continues apace while budgets remain flat. As a result, both security and scalability are frequently sacrificed when solutions are deployed in enterprises.

  • Rapid data creation: “More than 59 Zettabytes (ZB) of data will be created, captured, copied, and consumed in the world this year” (IDC 2020)
  • Shift from File to Object Storage
  • Archive Market – 60 – 80%
  • Flat IT budgets
  • Cybersecurity concerns
  • Scalability

 

Enter The Archive

FUJIFILM Object Archive

Chris Kehoe, Director of DMS Sales and Engineering, spent time explaining what exactly FUJIFILM Object Archive was. “Object Archive is an S3 based archival tier designed to reduce cost, increase scale and provide the highest level of security for long-term data retention”. In short, it:

  • Works like Amazon S3 Glacier in your DC
  • Simply integrates with other object storage
  • Scales on tape technology
  • Secure with air gap and full chain of custody
  • Predictable costs and TCO with no API or egress fees

Workloads?

It’s optimised to handle the long-term retention of data, which is useful if you’re doing any of these things:

  • Digital preservation
  • Scientific research
  • Multi-tenant managed services
  • Storage optimisation
  • Active archiving

What Does It Look Like?

There are a few components that go into the solution, including a:

  • Storage Server
  • Smart cache
  • Tape Server

[image courtesy of Fujifilm]

Tape?

That’s right, tape. The tape library supports LTO7, LTO8, and TS1160 drives. The data is written using the “OTFormat” specification (you can read about that here). The idea is that it packs a bunch of objects together so they get written to tape efficiently.

[image courtesy of Fujifilm]

Object Storage Too

It uses an “S3-compatible” API – the S3 server is built on Scality’s Zenko. From an object storage perspective, it works with Cloudian HyperStore, Caringo Swarm, NetApp StorageGRID, and Scality RING. It also has Starfish and Tiger Bridge support.
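
Because the front end is S3-compatible, talking to it should look much like talking to any other object store. Here’s a hedged sketch using Python and boto3 – the endpoint address, bucket name, and credentials are placeholders I’ve invented for the example, not anything documented by Fujifilm.

    import boto3

    # Point a standard S3 client at the on-premises Object Archive endpoint
    # (address and credentials below are illustrative placeholders).
    s3 = boto3.client(
        "s3",
        endpoint_url="https://object-archive.example.internal",
        aws_access_key_id="ARCHIVE_ACCESS_KEY",
        aws_secret_access_key="ARCHIVE_SECRET_KEY",
    )

    # Archive an object: the same PUT you'd use against any S3 target.
    s3.upload_file("project-2012.tar", "cold-archive", "research/project-2012.tar")

    # Recall it later - the client just makes S3 calls; the tape handling
    # behind the API is the archive's problem, not the client's.
    s3.download_file("cold-archive", "research/project-2012.tar", "restored.tar")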

Other Notes

The product starts at 1PB of licensing. You can read the Solution Brief here. There’s an informative White Paper here. And there’s one of those nice Infographic things here.

Deployment Example

So what does this look like from a deployment perspective? One example was a typical primary storage deployment, with data archived to an on-premises object storage platform (in this case NetApp StorageGRID). When your archive got really “cold”, it would be moved to the Object Archive.

[image courtesy of Fujifilm]

[image courtesy of Fujifilm]

 

Thoughts

Years ago, when a certain deduplication storage appliance company was acquired by a big storage slinger, stickers with “Tape is dead, get over it” were given out to customers. I think I still have one or two in my office somewhere. And I think the sentiment is spot on, at least in terms of the standard tape library deployments I used to see in small, midsize, and large enterprises. The problem that tape was solving for those organisations at the time has largely been dealt with by various disk-based storage solutions. There are nonetheless plenty of use cases where tape is still considered useful. I’m not going to go into every single reason, but the cost per GB of tape, at a particular scale, is hard to beat. And when you want to safely store files for a long period of time, even offline? Tape, again, is hard to beat. This podcast from Curtis got me thinking about the demise of tape, and I think this presentation from Fujifilm reinforced the thinking that it was far from being on life support – at least in very specific circumstances.

Data keeps growing, and we need to keep it somewhere, apparently. We also need to think about keeping it in a way that means we’re not continuing to negatively impact the environment. It doesn’t necessarily make sense to keep really old data permanently online, despite the fact that it has some appeal in terms of instant access to everything ever. Tape is pretty good when it comes to relatively low energy consumption, particularly given the fact that we can’t yet afford to put all this data on All-Flash storage. And you can keep it available in systems that can be relied upon to get the data back, just not straight away. As I said previously, this doesn’t necessarily make sense for the home punter, or even for the small to midsize enterprise (although I’m tempted now to resurrect some of my older tape drives and see what I can store on them). It really works better at large scale (dare I say hyperscale?). Given that we seem determined to store a whole bunch of data with the hyperscalers, and for a ridiculously long time, it makes sense that solutions like this will continue to exist, and evolve. Sure, Fujifilm has sold something like 170 million tapes worldwide. But this isn’t simply a tape library solution. This is a wee bit smarter than that. I’m keen to see how this goes over the next few years.

Komprise – It’s About Data, Not Storage

Disclaimer: I recently attended Storage Field Day 22.  Some expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Komprise recently presented at Storage Field Day 22. You can see their videos from Storage Field Day 22 here, and download a PDF copy of my rough notes from here.

 

The Age Of Data, Not Storage

It’s probably been the age of data for some time now, but I couldn’t think of a catchy heading. One comment from the Komprise folks during the presentation that really stood out to me was “Data outlives its storage infrastructure”. If I think back ten years to how I thought about managing data movement, it was certainly tied to the storage platform hosting the data, rather than what the data did. Whenever I had to move from one array to the next, or one protocol to another, I wasn’t thinking in terms of where the data would necessarily be best placed to serve the business. Generally speaking, I was approaching the problem in terms of getting good performance for blocks and files, but rarely was I thinking in terms of the value of the data to the business. Nowadays, it seems that there’s an improved focus on getting the “[d]ata in the right place at the right time – not just for efficiency – but to extract maximum value”. We’re no longer thinking about data in terms of old stuff living on slow storage, and fresh bits living on the fast stuff. As the amount of data being managed in enterprises continues to grow at an insane rate, it’s becoming more important than ever to understand just how useful the data actually is to the business.

[image courtesy of Komprise]

The variety of storage platforms available now is also a little more extensive than it was last century, and that presents some more interesting challenges in getting the data to where it needs to be. As I mentioned earlier, data growth is going berserk the world over. Add to this the problem of ubiquitous cloud access (and IT departments struggling to keep up with the governance necessary to wrangle these solutions into some sensible shape), and most enterprises looking to save money wherever possible, and data management can present real problems to most enterprise shops.

[image courtesy of Komprise]

 

Analytics To The Rescue!

Komprise has come up with an analytics-driven approach to data management that is built on some sound foundational principles. The solution needs to:

  1. Go beyond storage efficiency – it’s not just about dedupe and compression at a certain scale.
  2. Be multi-directional – you need to be able to get stuff back.
  3. Not disrupt users and workflows – do that and you may as well throw the solution in the bin.
  4. Create new uses for your data – it’s all about value, after all.
  5. Put your data first.

The final point is possibly the most critical one. If I think about the storage-centric approaches to data management that I’ve seen over the years, there’s definitely been a viewpoint that the underlying storage infrastructure would heavily influence how the data is used, rather than the data dictating how the storage platforms should be architected. Some of that is a question of visibility – if you don’t understand your data, it’s hard to come up with tailored solutions. Some of the problem is also the disconnect that seems to exist between “the business” and IT departments in a large number of enterprises. It’s not an easy problem to solve, by any stretch, but it does explain some of the novel approaches to data management that I’ve seen over the years.
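
Visibility is usually where this starts, and the unglamorous first pass is just finding out how old and how big things actually are. Here’s a trivial sketch of that step in Python – deliberately generic, and in no way representative of how Komprise’s analytics actually work.

    import os
    import time

    def cold_data_report(root, cold_after_days=365):
        """Walk a directory tree and total up data that hasn't been read in a while."""
        cutoff = time.time() - cold_after_days * 86400
        cold_bytes = total_bytes = 0
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                try:
                    st = os.stat(path)
                except OSError:
                    continue                      # skip files that vanish mid-scan
                total_bytes += st.st_size
                if st.st_atime < cutoff:          # last access time, if the mount records it
                    cold_bytes += st.st_size
        return cold_bytes, total_bytes

    cold, total = cold_data_report("/data/projects")   # hypothetical share path
    print(f"{cold / max(total, 1):.0%} of {total} bytes untouched for a year")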

 

Thoughts and Further Reading

Data management is hard, and it keeps getting harder because we keep making more and more data. And we frequently don’t have the time, or take the time, to work out what value the data actually has. This problem isn’t going to go away, so it’s good to see Komprise moving the conversation past that and into the realm of how we can best focus on deriving value from the data itself. There was certainly some interesting discussion during the presentation about the term analytics,  and what that really meant in terms of the Komprise solution. Ultimately, though, I’m a fan of anything that elevates the conversation beyond “I can move your terabytes from this bucket to that bucket”. I want something that starts to tell me more about what type of data I’m storing, who’s using it, and how they’re using it. That’s when it gets interesting from a data management perspective. I think there’s a ways to go in terms of getting this solution right for everyone, but it strikes me that Komprise is on the right track, and I’m looking forward to seeing how the solution evolves alongside the storage technologies it’s using to get the most from everyone’s data. You can read more on the Komprise approach here.

Random Short Take #59

Welcome to Random Short Take #59.

  • It’s been a while since I’ve looked at Dell Technologies closely, but Tech Field Day recently ran an event and Pietro put together a pretty comprehensive view of what was covered.
  • Dr Bruce Davie is a smart guy, and this article over at El Reg on decentralising Internet services made for some interesting reading.
  • Clean installs and Time Machine system recoveries on macOS aren’t as nice as they used to be. I found this out a day or two before this article was published. It’s worth reading nonetheless, particularly if you want to get your head around the various limitations with Recovery Mode on more modern Apple machines.
  • If you follow me on Instagram, you’ll likely realise I listen to records a lot. I don’t do it because they “sound better” though, I do it because it works for me as a more active listening experience. There are plenty of clowns on the Internet ready to tell you that it’s a “warmer” sound. They’re wrong. I’m not saying you should fight them, but if you find yourself in an argument this article should help.
  • Speaking of technologies that have somewhat come and gone (relax – I’m joking!), this article from Chris M. Evans on HCI made for some interesting reading. I always liked the “start small” approach with HCI, particularly when comparing it to larger midrange storage systems. But things have definitely changed when it comes to available storage and converged options.
  • In news via press releases, Datadobi announced version 5.12 of its data mobility engine.
  • Leaseweb Global has also made an announcement about a new acquisition.
  • Russ published an interesting article on new approaches to traditional problems. Speaking of new approaches, I was recently a guest on the On-Premise IT Podcast discussing when it was appropriate to scrap existing storage system designs and start again.

 

Ransomware? More Like Ransom Everywhere …

Stupid title, but ransomware has been in the news quite a bit recently. I’ve had some tabs open in my browser for over twelve months with articles about ransomware that I found interesting. I thought it was time to share them and get this post out there. This isn’t comprehensive by any stretch, but rather it’s a list of a few things to consider when you’re looking into anti-ransomware solutions, particularly for NAS environments.

 

It Kicked Him Right In The NAS

The way I see it (and I’m really not the world’s strongest security person), there are (at least) three approaches to NAS and ransomware concerns.

The Endpoint

This seems to be where most companies operate – addressing ransomware as it enters the organisation via the end users. There are a bunch of solutions out there that are designed to protect humans from themselves. But this approach doesn’t always help with alternative attack vectors, and it’s only as good as the processes you have in place to keep those endpoints up to date. I’ve worked in a few shops where endpoint protection solutions were deployed and then inadvertently clobbered by system updates or users with too many privileges. The end result was that the systems didn’t do what they were meant to and there was much angst.

The NAS Itself

There are things you can do with NetApp solutions, for example, that are kind of interesting. Something like Stealthbits looks neat, and Varonis also uses FPolicy to get a similar result. Your mileage will vary with some of these solutions, and, again, it comes down to the ability to effectively ensure that these systems are doing what they say they will, when they will.

Data Protection

A number of the data protection vendors are talking about their ability to recover quickly from ransomware attacks. The capabilities vary, as they always do, but most of them have a solid handle on quick recovery once an infection is discovered. They can even help you discover that infection by analysing patterns in your data protection activities. For example, if a whole bunch of data changes overnight, it’s likely that you have a bit of a problem. But some of the effectiveness of these solutions is limited by the frequency of data protection activity, and whether anyone is reading the alerts. The challenge here is that it’s a reactive approach, rather than something preventative. That said, Rubrik, for example, is working hard to enhance its Radar capability into something a whole lot more interesting.
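
That “whole bunch of data changed overnight” signal is simple to reason about, even if the products implement it far more cleverly. Here’s a toy sketch in Python of the general idea (my own illustration, not how Radar or any other product works): compare the latest backup’s change rate against recent history and flag anything that looks well out of line.

    from statistics import mean, stdev

    def looks_suspicious(changed_files_history, latest_changed_files, threshold=3.0):
        """Flag a backup run whose change rate is well outside recent history.

        changed_files_history: per-run counts of changed files from recent backups
        latest_changed_files: count from the most recent run
        """
        if len(changed_files_history) < 2:
            return False  # not enough history to judge
        baseline = mean(changed_files_history)
        spread = stdev(changed_files_history) or 1.0
        # A z-score style check: flag runs several deviations above the norm.
        return (latest_changed_files - baseline) / spread > threshold

    history = [1200, 1350, 1100, 1250, 1300]       # typical nightly change counts
    print(looks_suspicious(history, 1280))          # False - business as usual
    print(looks_suspicious(history, 250000))        # True - worth a phone call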

Other Things

Other things that can help limit your exposure to ransomware include adopting generally robust security practices across the board, monitoring all of your systems, and talking to your users about not clicking on unknown links in emails. Some of these things are easier to do than others.

 

Thoughts

I don’t think any of these solutions provide everything you need in isolation, but the challenge is going to be coming up with something that is supportable and, potentially, affordable. It would also be great if it works too. Ransomware is a problem, and becoming a bigger problem every day. I don’t want to sound like I’m selling you insurance, but it’s almost not a question of if, but when. But paying attention to some of the above points will help you on your way. Of course, sometimes Sod’s Law applies, and things will go badly for you no matter how well you think you’ve designed your systems. At that point, it’s going to be really important that you’ve set up your data protection systems correctly, otherwise you’re in for a tough time. Remember, it’s always worth thinking about what your data is worth to you when you’re evaluating the relative value of security and data protection solutions. This article from Chin-Fah had some interesting insights into the problem. And this article from Cohesity outlined a comprehensive approach to holistic cyber security. This article from Andrew over at Pure Storage did a great job of outlining some of the challenges faced by organisations when rolling out these systems. This list of NIST ransomware resources from Melissa is great. And if you’re looking for a useful resource on ransomware from VMware’s perspective, check out this site.

Random Short Take #56

Welcome to Random Short Take #56. Only three players have worn 56 in the NBA. I may need to come up with a new bit of trivia. Let’s get random.

  • Are we nearing the end of blade servers? I’d hoped the answer was yes, but it’s not that simple, sadly. It’s not that I hate them, exactly. I bought blade servers from Dell when they first sold them. But they can present challenges.
  • 22dot6 emerged from stealth mode recently. I had the opportunity to talk to them and I’ll post something soon about that. In the meantime, this post from Mellor covers it pretty well.
  • It may be a Northern Hemisphere reference that I don’t quite understand, but Retrospect is running a “Dads and Grads” promotion offering 90 days of free backup subscriptions. Worth checking out if you don’t have something in place to protect your desktop.
  • Running VMware Cloud Foundation and want to stretch your vSAN cluster across two sites? Tony has you covered.
  • The site name in VMware Cloud Director can look a bit ugly. Steve O gives you the skinny on how to change it.
  • Pure//Accelerate happened recently / is still happening, and there was a bit of news from the event, including the new and improved Pure1 Digital Experience. As a former Pure1 user I can say this was a big part of the reason why I liked using Pure Storage.
  • Speaking of press releases, this one from PDI and its investment intentions caught my eye. It’s always good to see companies willing to spend a bit of cash to make progress.
  • I stumbled across Oxide on Twitter and fell for the aesthetic and design principles. Then I read some of the articles on the blog and got even more interested. Worth checking out. And I’ll be keen to see just how it goes for the company.

Bonus Round

I was recently on the Restore it All podcast with W. Curtis Preston and Prasanna Malaiyandi. It was a lot of fun as always, despite the fact that we talked about something that’s a pretty scary subject (data (centre) loss). No, I’m not a DC manager in real life, but I do have responsibility for what goes into our DC so I sort of am. Don’t forget there’s a discount code for the book in the podcast too.

Random Short Take #55

Welcome to Random Short Take #55. A few players have worn 55 in the NBA. I wore some Mutombo sneakers in high school, and I enjoy watching Duncan Robinson light it up for the Heat. My favourite ever to wear 55 was “White Chocolate” Jason Williams. Let’s get random.

  • This article from my friend Max around Intel Optane and VMware Cloud Foundation provided some excellent insights.
  • Speaking of friends writing about VMware Cloud Foundation, this first part of a 4-part series from Vaughn makes a compelling case for VCF on FlashStack. Sure, he gets paid to say nice things about the company he works for, but there is plenty of info in here that makes a lot of sense if you’re evaluating which hardware platform pairs well with VCF.
  • Speaking of VMware, if you’re a VCD shop using NSX-V, it’s time to move on to NSX-T. This article from VMware has the skinny.
  • You want an open source version of BMC? Fine, you got it. Who would have thought securing BMC would be a thing? (Yes, I know it should be)
  • Stuff happens, hard drives fail. Backblaze recently published its drive stats report for Q1. You can read about that here.
  • Speaking of drives, check out this article from Netflix on its Netflix Drive product. I find it amusing that I get more value from Netflix’s tech blog than I do from its streaming service, particularly when one is free.
  • The people in my office laugh nervously when I say I hate being in meetings where people feel the need to whiteboard. It’s not that I think whiteboard sessions can’t be valuable, but oftentimes the information on those whiteboards should be documented somewhere and easy to bring up on a screen. But if you find yourself in a lot of meetings and need to start drawing pictures about new concepts or whatever, this article might be of some use.
  • Speaking of office things not directly related to tech, this article from Preston de Guise on interruptions was typically insightful. I loved the “Got a minute?” reference too.

 

Random Short Take #54

Welcome to Random Short Take #54. A few players have worn 54 in the NBA, but my favourite was Horace Grant. Let’s get random.

  • This project looked like an enjoyable, and relatively accessible, home project – building your own NVMe-based storage server.
  • When I was younger I had nightmares based on horror movies and falling out of bed (sometimes with both happening at the same time). Now this is the kind of thing that keeps me awake at night.
  • Speaking of disastrous situations, the OVH problem was a real problem for a lot of people. I wish them all the best with the recovery.
  • Tony has been doing things with vSAN in his lab and in production – worth checking out.
  • The folks at StorageOS have been hard at work improving their Kubernetes storage platform. You can read more about that here.
  • DH2i has a webinar coming up on SQL Server resilience that’s worth checking out. Details here.
  • We’re talking more about burnout in the tech industry, but probably not enough still. This article from Tom was insightful.