Want some news? In a shorter format? And a little bit random? This listicle might be for you. Welcome to #24 – The Kobe Edition (not a lot of passing, but still entertaining). 8 articles too. Which one was your favourite Kobe? 8 or 24?
I wrote an article about how architecture matters years ago. It has nothing to do with this one from Preston, but he makes some great points about the importance of architecture when looking to protect your public cloud workloads.
Commvault GO 2019 was held recently, and Chin-Fah had some thoughts on where Commvault’s at. You can read all about that here. Speaking of Commvault, Keith had some thoughts as well, and you can check them out here.
This is a great article covering QoS enhancements in Purity 5.3. Speaking of Pure Storage, I’m looking forward to attending Pure//Accelerate in Austin in the next few weeks. I’ll be participating in a Storage Field Day Exclusive event as well – you can find more details on that here.
My friends at Scale Computing have entered into an OEM agreement with Acronis to add more data protection and DR capabilities to the HC3 platform. You can read more about that here.
DH2i are presenting a webinar on September 10th at 11am Pacific, “On the Road Again – How to Secure Your Network for Remote User Access”. I’ve spoken to the people at DH2i in the past and they’re doing some really interesting stuff. If your time zone lines up with this, check it out.
I’ve been working with Pure Storage’s ObjectEngine in our lab recently, and wanted to share a few screenshots from the Commvault configuration bit, as it had me stumped for a little while. This is a quick one, but hopefully it will help those of you out there who are trying to get it working. I’m assuming you’ve already created your bucket and user in the ObjectEngine environment, and you have the details of your OE environment at hand.
The first step is to add a Cloud Storage Library to your Libraries configuration.
You’ll need to provide a name, and select the type as Amazon S3. You’ll see in this example that I’m using the fully qualified domain name as the Service Host.
At this point you should be able to click on Detect to detect the bucket you’ll be storing data in. For some reason, though, I kept getting an error when I did this.
The trick is to put http:// in front of the FQDN. Note that this doesn’t work with https://.
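The normalisation above is simple enough to sketch in a few lines. This is just an illustrative helper I’ve written to show the logic – the function name is mine, and it’s not part of Commvault or ObjectEngine:

```python
from urllib.parse import urlparse

def normalise_service_host(fqdn: str) -> str:
    """Prefix a bare FQDN with http:// so bucket detection works
    against an S3-compatible endpoint like ObjectEngine.
    (Illustrative only -- not a Commvault or Pure Storage API.)"""
    parsed = urlparse(fqdn)
    if parsed.scheme == "https":
        # https:// didn't work in my setup, so fall back to plain http
        return "http://" + (parsed.netloc or parsed.path)
    if parsed.scheme == "http":
        # Already in the form Commvault's Detect was happy with
        return fqdn
    # Bare FQDN: add the scheme explicitly
    return "http://" + fqdn

# e.g. normalise_service_host("oe.example.com") -> "http://oe.example.com"
```

The hostname `oe.example.com` above is a placeholder – substitute the FQDN of your own OE environment.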
Now when you click on Detect, you’ll see the Bucket that you’ve configured on the OE environment (assuming you haven’t fat-fingered your credentials).
And that’s it. You can then go on and configure your storage policies and SubClient policies as required.