Intel – It’s About Getting The Right Kind Of Fast At The Edge

Disclaimer: I recently attended Storage Field Day 22.  Some expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Intel recently presented at Storage Field Day 22. You can see videos of the presentation here, and download my rough notes from here.

 

The Problem

A lot of countries have used lockdowns as a way to combat the community transmission of COVID-19. Apparently, this has led to an uptick in the consumption of streaming media services. If you’re somewhat familiar with streaming media services, you’ll understand that your favourite episode of Hogan’s Heroes isn’t being delivered from a giant storage device sitting in the bowels of your streaming media provider’s data centre. Instead, it’s invariably being delivered to your device from a content delivery network (CDN) node.

 

Content Delivery What?

CDNs are not a new concept. The idea is that you have a bunch of web servers geographically distributed delivering content to users who are also geographically distributed. Think of it as a way to cache things closer to your end users. There are many reasons why this can be a good idea. Your content will load faster for users if it resides on servers in roughly the same area as them. Your bandwidth costs are generally a bit cheaper, as you’re not transmitting as much data from your core all the way out to the end user. Instead, those end users are getting the content from something close to them. You can potentially also deliver more versions of content (in terms of resolution) easily. It can also be beneficial in terms of resiliency and availability – an outage on one part of your network, say in Palo Alto, doesn’t necessarily need to impact end users living in Sydney. Cloudflare does a fair bit with CDNs, and there’s a great overview of the technology here.

 

Isn’t All Content Delivery The Same?

Not really. As Intel covered in its Storage Field Day presentation, there are some differences between the performance requirements of video on demand and live-linear streaming CDN solutions.

Live-Linear Edge Cache

Live-linear video streaming is similar to the broadcast model used in television. It’s basically programming content streamed 24/7, rather than stuff that the user has to search for. Several minutes of content are typically cached to accommodate out-of-sync users and pause / rewind activities. You can read a good explanation of live-linear streaming here.

[image courtesy of Intel]

In the example above, Intel Optane PMem was used to address the needs of live-linear streaming.

  • Live-linear workloads consume a lot of memory capacity to maintain a short-lived video buffer.
  • Intel Optane PMem is less expensive than DRAM.
  • Intel Optane PMem has extremely high endurance, to handle frequent overwrite.
  • Flexible deployment options – Memory Mode or App-Direct, consuming zero drive slots.

With this solution, Intel was able to achieve better channel and stream density per server than with DRAM-based solutions.
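The “short-lived buffer with frequent overwrite” behaviour described above can be sketched as a rolling window of video segments. This is my own illustrative Python, not anything from Intel’s presentation, and the window and segment sizes are made-up numbers:

```python
from collections import OrderedDict

class LiveEdgeCache:
    """Rolling cache for live-linear video segments.

    Only the most recent `window_seconds` of content is kept, so
    slightly out-of-sync viewers (or a quick pause / rewind) can
    still be served, while older segments are constantly evicted
    and overwritten - which is why media endurance matters here.
    """

    def __init__(self, window_seconds=300, segment_seconds=4):
        self.window_segments = window_seconds // segment_seconds
        self.segments = OrderedDict()  # segment start time -> payload

    def ingest(self, ts, payload):
        """Store a freshly encoded segment, evicting the oldest."""
        self.segments[ts] = payload
        while len(self.segments) > self.window_segments:
            self.segments.popitem(last=False)  # drop the oldest segment

    def serve(self, ts):
        """Return a cached segment, or None if the viewer fell too far behind."""
        return self.segments.get(ts)
```

Every incoming segment triggers an eviction once the window is full, so the underlying media is rewritten continuously – exactly the access pattern where PMem’s endurance pays off.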

Video on Demand (VoD)

VoD providers typically offer a large library of content allowing users to view it at any time (e.g. Netflix and Disney+). VoD servers are a little different to live-linear streaming CDNs. They:

  • Typically require large capacity and drive fanout for performance / failure domains; and
  • Have a read-intensive workload, with typically large IOs.

[image courtesy of Intel]

 

Thoughts and Further Reading

I first encountered the magic of CDNs years ago when working in a data centre that hosted some Akamai infrastructure. Windows Server updates were super zippy, and it actually saved me from having to spend a lot of time standing in the cold aisle. Fast forward about 15 years, and CDNs are being used for all kinds of content delivery on the web. With whatever the heck this is in terms of the new normal, folks are putting more and more strain on those CDNs by streaming high-quality, high-bandwidth TV and movie titles into their homes (except in backwards places like Australia). As a result, content providers are constantly searching for ways to tweak the throughput of these CDNs to serve more and more customers, and deliver more bandwidth to those users.

I’ve barely skimmed the surface of how CDNs help providers deliver content more effectively to end users. What I did find interesting about this presentation was that it reinforced the idea that different workloads require different infrastructure solutions to deliver the right outcomes. It sounds simple when I say it like this, but I guess I’ve thought about streaming video CDNs as being roughly the same all over the place. Clearly they aren’t, and it’s not just a matter of jamming some SSDs into 1RU servers and hoping that your content will be delivered faster to punters. It’s important to understand that Intel Optane PMem and Intel Optane 3D NAND can give you different results depending on what you’re trying to do, with PMem arguably giving you better value for money (per GB) than DRAM. There are some great papers on this topic available on the Intel website. You can read more here and here.

Random Short Take #60

Welcome to Random Short Take #60.

  • VMware Cloud Director 10.3 went GA recently, and this post will point you in the right direction when it comes to planning the upgrade process.
  • Speaking of VMware products hitting GA, VMware Cloud Foundation 4.3 became available about a week ago. You can read more about that here.
  • My friend Tony knows a bit about NSX-T, and certificates, so when he bumped into an issue with NSX-T and certificates in his lab, it was no big deal to come up with the fix.
  • Here’s everything you wanted to know about creating an external bootable disk for use with macOS 11 and 12 but were too afraid to ask.
  • I haven’t talked to the good folks at StarWind in a while (I miss you Max!), but this article on the new All-NVMe StarWind Backup Appliance by Paolo made for some interesting reading.
  • I loved this article from Chin-Fah on storage fear, uncertainty, and doubt (FUD). I’ve seen a fair bit of it slung about having been a customer and partner of some big storage vendors over the years.
  • This whitepaper from Preston on some of the challenges with data protection and long-term retention is brilliant and well worth the read.
  • Finally, I don’t know how I came across this article on hacking PlayStation 2 machines, but here you go. Worth a read if only for the labels on some of the discs.

Fujifilm Object Archive – Not Your Father’s Tape Library

Disclaimer: I recently attended Storage Field Day 22.  Some expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Fujifilm recently presented at Storage Field Day 22. You can see videos of the presentation here, and download my rough notes from here.

 

Fujifilm Overview

You’ve heard of Fujifilm before, right? They do a whole bunch of interesting stuff – batteries, cameras, copiers. Nami Matsumoto, Director of DMS Marketing and Operations, took us through some of Fujifilm’s portfolio. Fujifilm’s slogan is “Value From Innovation”, and it certainly seems to be looking to extract maximum value from its $1.4B annual spend on research and development. The Recording Media Products Division is focussed on helping “companies future proof their data”.

[image courtesy of Fujifilm]

 

The Problem

The challenge, as always (it seems), is that data growth continues apace while budgets remain flat. As a result, both security and scalability are frequently sacrificed when solutions are deployed in enterprises.

  • Rapid data creation: “More than 59 Zettabytes (ZB) of data will be created, captured, copied, and consumed in the world this year” (IDC 2020)
  • Shift from File to Object Storage
  • Archive Market – 60 – 80%
  • Flat IT budgets
  • Cybersecurity concerns
  • Scalability

 

Enter The Archive

FUJIFILM Object Archive

Chris Kehoe, Director of DMS Sales and Engineering, spent time explaining what exactly FUJIFILM Object Archive was. “Object Archive is an S3 based archival tier designed to reduce cost, increase scale and provide the highest level of security for long-term data retention”. In short, it:

  • Works like Amazon S3 Glacier in your DC
  • Simply integrates with other object storage
  • Scales on tape technology
  • Secure with air gap and full chain of custody
  • Predictable costs and TCO with no API or egress fees

Workloads?

It’s optimised to handle the long-term retention of data, which is useful if you’re doing any of these things:

  • Digital preservation
  • Scientific research
  • Multi-tenant managed services
  • Storage optimisation
  • Active archiving

What Does It Look Like?

There are a few components that go into the solution, including a:

  • Storage Server
  • Smart cache
  • Tape Server

[image courtesy of Fujifilm]

Tape?

That’s right, tape. The tape library supports LTO7, LTO8, and TS1160. The data is written using the “OTFormat” specification (you can read about that here). The idea is that it packs a bunch of objects together so they get written efficiently.
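To illustrate the packing idea, here’s a rough sketch of coalescing small objects into fixed-size records so they can be streamed to tape sequentially. This is my own illustration of the general technique, not the actual OTFormat, and the record size is an arbitrary number:

```python
def pack_objects(objects, record_size=1 << 20):
    """Coalesce (name, data) objects into records of at most
    `record_size` bytes, so many small objects become a few large
    sequential writes - the access pattern tape drives are good at.
    Illustrative only; the real on-tape layout is defined by the
    OTFormat specification.
    """
    records, current, used = [], [], 0
    for name, data in objects:
        # Start a new record once this object would overflow the current one.
        if used + len(data) > record_size and current:
            records.append(current)
            current, used = [], 0
        current.append((name, data))
        used += len(data)
    if current:
        records.append(current)
    return records
```

The win is that the drive streams a handful of large records instead of stopping and starting for every tiny object, which is what kills tape throughput.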

[image courtesy of Fujifilm]

Object Storage Too

It uses an “S3-compatible” API – the S3 server is built on Scality’s Zenko. From an object storage perspective, it works with Cloudian HyperStore, Caringo Swarm, NetApp StorageGRID, and Scality Ring. It also has Starfish and Tiger Bridge support.

Other Notes

The product starts at 1PB of licensing. You can read the Solution Brief here. There’s an informative White Paper here. And there’s one of those nice Infographic things here.

Deployment Example

So what does this look like from a deployment perspective? One example was a typical primary storage deployment, with data archived to an on-premises object storage platform (in this case NetApp StorageGRID). When your archive got really “cold”, it would be moved to the Object Archive.

[image courtesy of Fujifilm]

[image courtesy of Fujifilm]

 

Thoughts

Years ago, when a certain deduplication storage appliance company was acquired by a big storage slinger, stickers with “Tape is dead, get over it” were given out to customers. I think I still have one or two in my office somewhere. And I think the sentiment is spot on, at least in terms of the standard tape library deployments I used to see in small to mid to large enterprise. The problem that tape was solving for those organisations at the time has largely been dealt with by various disk-based storage solutions. There are nonetheless plenty of use cases where tape is still considered useful. I’m not going to go into every single reason, but the cost per GB of tape, at a particular scale, is hard to beat. And when you want to safely store files for a long period of time, even offline? Tape, again, is hard to beat. This podcast from Curtis got me thinking about the demise of tape, and I think this presentation from Fujifilm reinforced the thinking that it was far from on life support – at least in very specific circumstances.

Data keeps growing, and we need to keep it somewhere, apparently. We also need to think about keeping it in a way that means we’re not continuing to negatively impact the environment. It doesn’t necessarily make sense to keep really old data permanently online, despite the fact that it has some appeal in terms of instant access to everything ever. Tape is pretty good when it comes to relatively low energy consumption, particularly given the fact that we can’t yet afford to put all this data on All-Flash storage. And you can keep it available in systems that can be relied upon to get the data back, just not straight away. As I said previously, this doesn’t necessarily make sense for the home punter, or even for the small to midsize enterprise (although I’m tempted now to resurrect some of my older tape drives and see what I can store on them). It really works better at large scale (dare I say hyperscale?). Given that we seem determined to store a whole bunch of data with the hyperscalers, and for a ridiculously long time, it makes sense that solutions like this will continue to exist, and evolve. Sure, Fujifilm has sold something like 170 million tapes worldwide. But this isn’t simply a tape library solution. This is a wee bit smarter than that. I’m keen to see how this goes over the next few years.

Infrascale Puts The Customer First

Disclaimer: I recently attended Storage Field Day 22.  Some expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Infrascale recently presented at Storage Field Day 22. You can see videos of the presentation here, and download my rough notes from here.

 

Infrascale and Customer Experience

Founded in 2011, Infrascale is headquartered in Reston, Virginia, with around 170 employees and offices in Ukraine and India as well. As COO Brian Kuhn points out in the presentation, the company is “[a]ll about customers and their data”. Infrascale’s vision is “to be the most trusted data protection provider”.

Build Trust via Four Ps

Predictable

  • Reliable connections, response time, product
  • Work side by side like a dependable friend

Personal

  • People powered – partners, not numbers
  • Your success is our success

Proficient

  • Support and product experts with the right tools
  • Own the issue from beginning to end

Proactive

  • Onboarding, outreach to proactively help you
  • Identify issues before they impact your business

“Human beings dealing with human beings”

 

Product Portfolio

Infrascale Cloud Application Backup (ICAB)

SaaS Backup

  • Back up Microsoft 365, Google Workspace, Salesforce, Box, and Dropbox
  • Recover individual items (mail, file, or record) or entire mailboxes, folders, or databases
  • Close the retention gap between the SaaS provider and corporate, legal, and / or regulatory policy

Infrascale Cloud Backup (ICB)

Endpoint Backup

  • Back up desktop, laptop, or mobile devices directly to the cloud – wherever you work
  • Recover data in seconds – and with ease
  • Optimised for branch office and remote / home workers
  • Provides ransomware detection and remediation

Infrascale Backup and Disaster Recovery (IBDR)

Backup and DR / DRaaS for Servers

  • Back up mission-critical servers to both an on-premises appliance and a bootable cloud appliance
  • Boot ready in ~2 minutes (locally or in the cloud)
  • Restore system images or files / folders
  • Optimised for VMware and Hyper-V VMs and Windows bare metal

 

Digging Deeper with IBDR

What Is It?

Infrascale describes IBDR as a hybrid-cloud solution, with hardware and software on-premises, and service infrastructure in the cloud. In terms of DR as a service, Infrascale provides the ability to back up and replicate your data to a secondary location. In the event of a disaster, customers have the option to restore individual files and folders, or the entire infrastructure if required. Restore locations are flexible as well, with a choice of on-premises or in the cloud. Importantly, you also have the ability to failback when everything’s sorted out.

One of the nice features of the service is unlimited DR and failover testing, and there are no fees attached to testing, recovery, or disaster failover.

Range

The IBDR solution also comes in a few different versions, as the table below shows.

[image courtesy of Infrascale]

The appliances are also available in a range of shapes and sizes.

[image courtesy of Infrascale]

Replication Options

In terms of replication, there are multiple destinations available, and you can fairly easily fire up workloads in the Infrascale cloud if need be.

[image courtesy of Infrascale]

 

Thoughts and Further Reading

Anyone who’s worked with data protection solutions will understand that it can be difficult to put together a combination of hardware and software that meets the needs of the business from a commercial, technical, and process perspective – particularly when you’re starting at a small scale and moving up from there. Putting together a managed service for data protection and disaster recovery is possibly harder still, given that you’re trying to accommodate a wide variety of use cases and workloads. And doing this using commercial off-the-shelf offerings can be a real pain. You’re invariably tied to the roadmap of the vendor in terms of features, and your timeframes aren’t normally the same as your vendor (unless you’re really big). So there’s a lot to be said for doing it yourself. If you can get the software stack right, understand what your target market wants, and get everything working in a cost-effective manner, you’re onto a winner.

I commend Infrascale for the level of thought the company has given to this solution, its willingness to work with partners, and the fact that it’s striving to be the best it can in the market segment it’s targeting. My favourite part of the presentation was hearing the phrase “we treat [data] like it’s our own”. Data protection, as I’ve no doubt rambled on about before, is hard, and your customers are trusting you with getting them out of a pickle when something goes wrong. I think it’s great that the folks at Infrascale have this at the centre of everything they’re doing. I get the impression that it’s “all care, all responsibility” when it comes to the approach taken with this offering. I think this counts for a lot when it comes to data protection and DR as a service offerings. I’ll be interested to see how support for additional workloads gets added to the platform, but what they’re doing now seems to be enough for many organisations. If you want to know more about the solution, the resource library has some handy datasheets, and you can get an idea of some elements of the recommended retail pricing from this document.

Komprise – It’s About Data, Not Storage

Disclaimer: I recently attended Storage Field Day 22.  Some expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Komprise recently presented at Storage Field Day 22. You can see their videos from Storage Field Day 22 here, and download a PDF copy of my rough notes from here.

 

The Age Of Data, Not Storage

It’s probably been the age of data for some time now, but I couldn’t think of a catchy heading. One comment from the Komprise folks during the presentation that really stood out to me was “Data outlives its storage infrastructure”. If I think back ten years to how I thought about managing data movement, it was certainly tied to the storage platform hosting the data, rather than what the data did. Whenever I had to move from one array to the next, or one protocol to another, I wasn’t thinking in terms of where the data would necessarily be best placed to serve the business. Generally speaking, I was approaching the problem in terms of getting good performance for blocks and files, but rarely was I thinking in terms of the value of the data to the business. Nowadays, it seems that there’s an improved focus on getting the “[d]ata in the right place at the right time – not just for efficiency – but to extract maximum value”. We’re no longer thinking about data in terms of old stuff living on slow storage, and fresh bits living on the fast stuff. As the amount of data being managed in enterprises continues to grow at an insane rate, it’s becoming more important than ever to understand just what usefulness the data offers the business.

[image courtesy of Komprise]

The variety of storage platforms available now is also a little more extensive than it was last century, and that presents some more interesting challenges in getting the data to where it needs to be. As I mentioned earlier, data growth is going berserk the world over. Add to this the problem of ubiquitous cloud access (and IT departments struggling to keep up with the governance necessary to wrangle these solutions into some sensible shape), and most enterprises looking to save money wherever possible, and data management can present real problems to most enterprise shops.

[image courtesy of Komprise]

 

Analytics To The Rescue!

Komprise has come up with an analytics-driven approach to data management that is built on some sound foundational principles. The solution needs to:

  1. Go beyond storage efficiency – it’s not just about dedupe and compression at a certain scale.
  2. Be multi-directional – you need to be able to get stuff back.
  3. Avoid disrupting users and workflows – do that and you may as well throw the solution in the bin.
  4. Create new uses for your data – it’s all about value, after all.
  5. Put your data first.

The final point is possibly the most critical one. If I think about the storage-centric approaches to data management that I’ve seen over the years, there’s definitely been a viewpoint that the underlying storage infrastructure would heavily influence how the data is used, rather than the data dictating how the storage platforms should be architected. Some of that is a question of visibility – if you don’t understand your data, it’s hard to come up with tailored solutions. Some of the problem is also the disconnect that seems to exist between “the business” and IT departments in a large number of enterprises. It’s not an easy problem to solve, by any stretch, but it does explain some of the novel approaches to data management that I’ve seen over the years.

 

Thoughts and Further Reading

Data management is hard, and it keeps getting harder because we keep making more and more data. And we frequently don’t have the time, or take the time, to work out what value the data actually has. This problem isn’t going to go away, so it’s good to see Komprise moving the conversation past that and into the realm of how we can best focus on deriving value from the data itself. There was certainly some interesting discussion during the presentation about the term analytics,  and what that really meant in terms of the Komprise solution. Ultimately, though, I’m a fan of anything that elevates the conversation beyond “I can move your terabytes from this bucket to that bucket”. I want something that starts to tell me more about what type of data I’m storing, who’s using it, and how they’re using it. That’s when it gets interesting from a data management perspective. I think there’s a ways to go in terms of getting this solution right for everyone, but it strikes me that Komprise is on the right track, and I’m looking forward to seeing how the solution evolves alongside the storage technologies it’s using to get the most from everyone’s data. You can read more on the Komprise approach here.

Storage Field Day 22 – (Fairly) Full Disclosure

Disclaimer: I recently attended Storage Field Day 22.  Some expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Here are my notes on gifts, etc, that I received as a conference attendee at Storage Field Day 22. This is by no stretch an interesting post from a technical perspective, but it’s a way for me to track and publicly disclose what I get and how it looks when I write about various things. With all of this stuff happening (waves hands around), it’s not going to be as lengthy as normal, but I did receive a box of stuff in the mail, so I wanted to disclose it.

The Tech Field Day team sent over some stickers, a TFD tote bag, a TFD pin, and a TFD patch. Fujifilm kindly gave me a 16GB USB drive (with both USB 2 and Lightning connectors), a webcam cover, stylus, USB charging cable, a Bluetooth tracker, a phone cradle, and a beach towel. Komprise sent over some neat socks, three Komprise-branded Titleist golf balls, and a sticker.

It wasn’t fancy food and limos this time around, but it was nonetheless an enjoyable event. Hopefully we can get back to in-person events some time this decade. Thanks again to Stephen and the team for having me back. Thanks also to my employer for giving me time away from the office to attend.

Random Short Take #59

Welcome to Random Short Take #59.

  • It’s been a while since I’ve looked at Dell Technologies closely, but Tech Field Day recently ran an event and Pietro put together a pretty comprehensive view of what was covered.
  • Dr Bruce Davie is a smart guy, and this article over at El Reg on decentralising Internet services made for some interesting reading.
  • Clean installs and Time Machine system recoveries on macOS aren’t as nice as they used to be. I found this out a day or two before this article was published. It’s worth reading nonetheless, particularly if you want to get your head around the various limitations with Recovery Mode on more modern Apple machines.
  • If you follow me on Instagram, you’ll likely realise I listen to records a lot. I don’t do it because they “sound better” though, I do it because it works for me as a more active listening experience. There are plenty of clowns on the Internet ready to tell you that it’s a “warmer” sound. They’re wrong. I’m not saying you should fight them, but if you find yourself in an argument this article should help.
  • Speaking of technologies that have somewhat come and gone (relax – I’m joking!), this article from Chris M. Evans on HCI made for some interesting reading. I always liked the “start small” approach with HCI, particularly when comparing it to larger midrange storage systems. But things have definitely changed when it comes to available storage and converged options.
  • In news via press releases, Datadobi announced version 5.12 of its data mobility engine.
  • Leaseweb Global has also made an announcement about a new acquisition.
  • Russ published an interesting article on new approaches to traditional problems. Speaking of new approaches, I was recently a guest on the On-Premise IT Podcast discussing when it was appropriate to scrap existing storage system designs and start again.

 

Storage Field Day 22 – I’ll Be At Storage Field Day 22

Here’s some news that will get you excited. I’ll be virtually heading to the US this week for another Storage Field Day event. If you haven’t heard of the very excellent Tech Field Day events, you should check them out. It’s also worth visiting the Storage Field Day 22 website during the event (August 4-6) as there’ll be video streaming and updated links to additional content. You can also see the list of delegates and event-related articles that have been published.

I think it’s a great line-up of both delegates and presenting companies this time around.

 

I’d like to publicly thank in advance the nice folks from Tech Field Day who’ve seen fit to have me back, as well as my employer for letting me take time off to attend these events. Also big thanks to the companies presenting. It’s going to be a lot of fun. Last time was a little weird doing this virtually, rather than in person, but I think it still worked. As things open back up in the US you’ll start to see a blend of in-person and virtual attendance for these events. I know that Komprise will be filming its segment from the Doubletree. Hopefully we’ll get things squared away and I’ll be allowed to leave the country next year. I’m really looking forward to this, even if it means doing the night shift for a few days. Presentation times are below, and all times are US/Pacific.

Wednesday, Aug 4, 8:00-9:30 – Infrascale Presents at Storage Field Day 22
Wednesday, Aug 4, 11:00-13:30 – Intel Presents at Storage Field Day 22
Presenters: Allison Goodman, Elsa Asadian, Kelsey Prantis, Kristie Mann, Nash Kleppan, Sagi Grimberg
Thursday, Aug 5, 8:00-10:00 – CTERA Presents at Storage Field Day 22
Presenters: Aron Brand, Jim Crook, Liran Eshel
Thursday, Aug 5, 11:00-13:00 – Komprise Presents at Storage Field Day 22
Presenters: Krishna Subramanian, Mike Peercy, Mohit Dhawan
Friday, Aug 6, 8:00-9:00 – Fujifilm Presents at Storage Field Day 22
Friday, Aug 6, 10:00-11:30 – Pure Storage Presents at Storage Field Day 22
Presenters: Ralph Ronzio, Stan Yanitskiy

Cohesity DataProtect Delivered As A Service – SaaS Connector

I recently wrote about my experience with Cohesity DataProtect Delivered as a Service. One thing I didn’t really go into in that article was the networking and resource requirements for the SaaS Connector deployment. It’s nothing earth-shattering, but I thought it was worthwhile noting nonetheless.

In terms of the VM that you deploy for each SaaS Connector, it has the following system requirements:

  • 4 CPUs
  • 10 GB RAM
  • 20 GB disk space (100 MB/s throughput, 100 IOPS)
  • Outbound Internet connection

In terms of scalability, the advice from Cohesity at the time of writing is to deploy “one SaaS Connector for each 160 VMs or 16 TB of source data. If you have more data, we recommend that you stagger their first full backups”. Note that this is subject to change. The outbound Internet connectivity is important. You’ll (hopefully) have some kind of firewall in place, so the following ports need to be open.

| Port | Protocol | Target | Direction (from Connector) | Purpose |
|------|----------|--------|----------------------------|---------|
| 443 | TCP | helios.cohesity.com | Outgoing | Connection used for control path |
| 443 | TCP | helios-data.cohesity.com | Outgoing | Used to send telemetry data |
| 22, 443 | TCP | rt.cohesity.com | Outgoing | Support channel |
| 11117 | TCP | *.dmaas.helios.cohesity.com | Outgoing | Connection used for data path |
| 29991 | TCP | *.dmaas.helios.cohesity.com | Outgoing | Connection used for data path |
| 443 | TCP | *.cloudfront.net | Outgoing | To download upgrade packages |
| 443 | TCP | *.amazonaws.com | Outgoing | For S3 data traffic |
| 123, 323 | UDP | ntp.google.com or internal NTP | Outgoing | Clock sync |
| 53 | TCP & UDP | 8.8.8.8 or internal DNS | Bidirectional | Host resolution |
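If you want a quick sanity check that the firewall rules are actually in place before deploying, something like this rough Python sketch (my own, not a Cohesity tool) can probe the non-wildcard TCP endpoints from the table:

```python
import socket

# Non-wildcard TCP endpoints from the table above. The wildcard
# targets (*.dmaas.helios.cohesity.com, *.cloudfront.net,
# *.amazonaws.com) can only be probed once you know a concrete
# hostname, so they're left out here.
ENDPOINTS = [
    ("helios.cohesity.com", 443),
    ("helios-data.cohesity.com", 443),
    ("rt.cohesity.com", 22),
    ("rt.cohesity.com", 443),
]

def check(host, port, timeout=5):
    """Return True if an outbound TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Usage: for host, port in ENDPOINTS: print(host, port, check(host, port))
```

Run it from the network segment where the SaaS Connector will live; a `False` usually means a firewall rule (or proxy) still needs attention.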

Cohesity recommends that you deploy more than one SaaS Connector, and you can scale them out depending on the number of VMs / how much data you’re protecting with the service.
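That sizing guidance translates into a trivial back-of-the-envelope calculation. This is my own sketch based on the quoted ratios, which Cohesity may change at any time:

```python
import math

def connectors_needed(vm_count, source_tb):
    """Connectors required under the '160 VMs or 16 TB per connector'
    guidance quoted above - whichever dimension demands more wins.
    Subject to change; check Cohesity's current sizing documentation.
    """
    by_vms = math.ceil(vm_count / 160)
    by_data = math.ceil(source_tb / 16)
    return max(by_vms, by_data, 1)  # always deploy at least one
```

Since Cohesity also recommends deploying more than one connector for resilience, in practice you might add one to whatever this returns.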

If you have concerns about bandwidth, you can configure the bandwidth used by the SaaS Connector via Helios.

Navigate to Settings -> SaaS Connections and click on Bandwidth Usage Options. You can then add a rule.

You then schedule bandwidth usage, potentially for quiet times (particularly useful in small environments where Internet connections may be shared with end users). There’s support for upload and download traffic, and multiple schedules as well.

And that’s pretty much it. Once you have your SaaS Connectors deployed you can monitor everything from Helios.

 

Random Short Take #58

Welcome to Random Short Take #58.

  • One of the many reasons I like Chin-Fah is that he isn’t afraid to voice his opinion on various things. This article on what enterprise storage is (and isn’t) made for some insightful reading.
  • VMware Cloud Director 10.3 is now GA – you can read more about it here.
  • Feeling good about yourself? That’ll be quite enough of that thanks. This article from Tom on Value Added Resellers (VARs) and technical debt goes in a direction you might not expect. (Spoiler: staff are the technical debt). I don’t miss that part of the industry at all.
  • Speaking of work, this article from Preston on being busy was spot on. I’ve worked in many places in my time where it’s simply alarming how much effort gets expended in not achieving anything. It’s funny how people deal with it in different ways too.
  • I’m not done with articles by Preston though. This one on configuring a NetWorker AFTD target with S3 was enlightening. It’s been a long time since I worked with NetWorker, but this definitely wasn’t an option back then.  Most importantly, as Preston points out, “we backup to recover”, and he does a great job of demonstrating the process end to end.
  • I don’t think I talk about data protection nearly enough on this weblog, so here’s another article from a home user’s perspective on backing up data with macOS.
  • Do you have a few Rubrik environments lying around that you need to report on? Frederic has you covered.
  • Finally, the good folks at Backblaze are changing the way they do storage pods. You can read more about that here.

*Bonus Round*

I think this is the 1000th post I’ve published here. Thanks to everyone who continues to read it. I’ll be having a morning tea soon.