Speaking of old things, El Reg had some info on running (hobbyist) x86-64 editions of OpenVMS. I ran OpenVMS on a DEC Alpha AXP-150 at home for a brief moment, but that feels like it was a long time ago.
This article from JB on the Bowlo was excellent. I don’t understand why Australians are so keen on poker machines (or gambling in general), but it’s nice when businesses go against the grain a bit.
VMware Cloud on AWS has been around for just over 5 years now, and in that time it’s proven to be a popular platform for a variety of workloads, industry verticals, and organisations of all sizes. However, one of the challenges of a hyper-converged architecture is that resource growth is generally linear (depending on the node types you have available). In the case of VMware Cloud on AWS, there are (now) three node types available for use: the I3, I3en, and I4i. Each of these provides a fixed amount of CPU, RAM, and vSAN storage for use within your VMC cluster. So when your storage grows past a certain threshold (80%), you need to add another node. This is a long-winded way of saying that, even if you don’t need the additional CPU and RAM, you have to add it anyway. To address this challenge, VMware now offers what’s called “Supplemental Storage” for VMware Cloud on AWS. This is essentially external datastores presented to the VMC hosts over NFS. It comes in two flavours: Amazon FSx for NetApp ONTAP and VMware Cloud Flex Storage. I’ll cover each in a little more detail below.
[image courtesy of VMware]
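To put some rough numbers around that, here’s a quick back-of-the-napkin sketch. The per-node capacities below are placeholders rather than official VMware figures, but the arithmetic is the point: once you factor in the 80% threshold, storage alone can dictate your host count.

```python
import math

# Hypothetical per-node capacities (cores, RAM GiB, usable vSAN TiB).
# Illustrative numbers only, not official VMware figures.
NODE_TYPES = {
    "i3.metal":   (36, 512, 10.0),
    "i3en.metal": (48, 768, 45.0),
    "i4i.metal":  (64, 1024, 20.0),
}

def hosts_needed(storage_tib: float, node: str, threshold: float = 0.8) -> int:
    """Hosts required to keep used storage below the scale-out threshold."""
    usable_per_node = NODE_TYPES[node][2]
    return math.ceil(storage_tib / (usable_per_node * threshold))

# 100 TiB of data on i3.metal-class nodes: storage alone drives the host
# count, whether or not you need the CPU and RAM that come along with it.
print(hosts_needed(100, "i3.metal"))  # -> 13 hosts
```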
Amazon FSx for NetApp ONTAP
The first cab off the rank is Amazon FSx for NetApp ONTAP (or FSxN to its friends). This is ONTAP-like storage made available to your VMC environment as a native AWS service. The service itself is fully customer managed, while the networking connecting it to your SDDC is managed by VMware.
[image courtesy of VMware]
There’s a 99.99% Availability SLA attached to the service. It’s based on NetApp ONTAP, and offers support for:
Multi-Tenancy
SnapMirror
FlexClone
Note that it currently requires the VMware Managed Transit Gateway (vTGW) for Multi-AZ deployment (the only deployment architecture currently supported), and can connect to multiple clusters and SDDCs for scale. You’ll need to be on SDDC version 1.20 (or greater) to leverage this service, and there is currently no support for attachment to stretched clusters. While datastores can only be connected to VMC hosts using NFSv3, guests can connect directly via other protocols. More information can be found in the FAQ here. There’s also a simulator you can access here that runs you through the onboarding process.
VMware Cloud Flex Storage
The other option for supplemental storage is VMware Cloud Flex Storage (sometimes referred to as VMC-FS). This is a datastore presented to your hosts over NFSv3.
Overview
VMware Cloud Flex Storage is:
A natively integrated cloud storage service for VMware Cloud on AWS that is fully managed by VMware;
A cost-effective multi-cloud storage solution built on the Scale-out Cloud File System (SCFS);
Delivered via a two-tier architecture (AWS S3 and a local NVMe cache) for elasticity and performance; and
Integrated with data management capabilities.
In short, VMware has taken a lot of the technology used in VMware Cloud Disaster Recovery (the result of the Datrium acquisition in 2020) and used it to deliver up to 400 TiB of storage per SDDC.
[image courtesy of VMware]
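If you’re wondering how a two-tier design like that behaves, here’s a conceptual sketch of the general pattern – a fast local cache fronting a durable capacity tier. To be clear, this is an illustration of the idea, not VMware’s actual implementation.

```python
from collections import OrderedDict

class TwoTierReader:
    """Fast local cache (think NVMe) in front of a durable capacity tier (think S3)."""

    def __init__(self, capacity_store, cache_blocks: int = 1024):
        self.store = capacity_store    # slow, durable tier
        self.cache = OrderedDict()     # fast local tier, LRU eviction
        self.cache_blocks = cache_blocks

    def read(self, block_id: str) -> bytes:
        if block_id in self.cache:
            # Cache hit: served at local latency.
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        # Cache miss: fetch from the capacity tier and populate the cache.
        data = self.store.get(block_id)
        self.cache[block_id] = data
        if len(self.cache) > self.cache_blocks:
            self.cache.popitem(last=False)  # evict the least-recently-used block
        return data
```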
The intent of the solution, at this stage at least, is that it is offered only as a datastore for hosts via NFSv3, rather than via other protocols directly to guests. There are some limitations around the supported topologies too, with stretched clusters not currently supported. From a disaster recovery perspective, it’s important to note that VMware Cloud Flex Storage is currently only offered on a single-AZ basis (although the supporting components are spread across multiple Availability Zones), and there is currently no support for VMware Cloud Disaster Recovery co-existence with this solution.
Thoughts
I’ve only been at VMware for a short period of time, but I’ve had numerous conversations with existing and potential VMware Cloud on AWS customers looking to solve their storage problems without necessarily putting everything on vSAN. There are plenty of reasons why you wouldn’t want to use vSAN for high-capacity storage workloads, and I believe these two initial solutions go some way towards solving that issue. Many of the caveats wrapped around these two products at General Availability will be removed over time, and with them the traditional objection that VMware Cloud on AWS isn’t great for high-capacity, cost-effective storage.
Finally, if you’re an existing NetApp ONTAP customer, and were wondering what to do with that petabyte of unstructured data you had lying about when you moved to VMware Cloud on AWS, or wanted to take advantage of the sweat equity you’ve poured into managing your ONTAP environment over the years, I think we’ve got you covered as well.
In this episode of Things My Customers Have Asked Me (TMCHAM), I’m going to delve into some questions around the lifecycle of the VMware-managed VMware Cloud on AWS platform, and what customers need to know to make sense of it all.
The SDDC
If you talk to VMware folks about VMware Cloud on AWS, you’ll hear a lot of talk about software-defined data centres (SDDCs). This is the logical construct in place that you use within your Organization to manage your hosts and clusters, in much the same fashion as you would your on-premises workloads. Unlike most on-premises workloads, however, the feeding and watering of the SDDC, from a software currency perspective, is done by VMware.
“Beginning with the SDDC version 1.11 release, odd-numbered releases of the SDDC software are optional and available for new SDDC deployments only. By default, all new SDDC deployments and upgrades will use the most recent even-numbered release. If you want to deploy an SDDC with an odd-numbered release version, contact your VMware TAM, sales, or customer success representative to make the request.”
Updated on: 5 April 2022
Essential Release: VMware Cloud on AWS (SDDC Version 1.18) | 5 April 2022
Optional Release: VMware Cloud on AWS (SDDC Version 1.17) | 19 November 2021
Basically, when you deploy onto the platform, you’ll usually be put on what VMware calls an “Essential” release. From time to time, customers may have requirements that qualify them to be deployed on an “Optional” release. This might be because they have a software integration requirement that isn’t handled in 1.16, for example, but is available in 1.17. It’s also important to note that each major release will have a variety of minor releases, depending on issues that need to be resolved or features that need to be rolled out, so you’ll also see references to releases like 1.16v5 in places.
Upgrades and Maintenance
So what happens when your SDDC is going to be upgraded? Well, we let you know in advance, and it’s done in phases, as you’d imagine.
[image courtesy of VMware]
You can read more about the process here, and there’s a blog post that covers the release cadence here. VMware also does the rollout of releases in waves, so not every customer has the upgrade done at the same time. If you’re the type of customer that needs to be on the latest version of everything, or perhaps you have a real requirement to be near the front of the line, you should talk to your account team and they’ll liaise with the folks who can make it happen for you. When the upgrades are happening, you should be careful not to:
Perform hot or cold workload migrations. Migrations fail if they are started or in progress during maintenance.
Perform workload provisioning (New/Clone VM). Provisioning operations fail if they are started or in progress during maintenance.
Make changes to Storage-based Policy Management settings for workload VMs.
You should also ensure that there is enough storage capacity (> 30% slack space) in each cluster.
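If you want a quick sanity check before a maintenance window, the arithmetic is simple enough. The capacity figures below are placeholders – in practice you’d pull them from the vSAN capacity view or via the vSphere API.

```python
# Pre-maintenance sanity check: does each cluster have > 30% slack space?
clusters = {
    "Cluster-1": {"capacity_tib": 40.0, "used_tib": 24.0},
    "Cluster-2": {"capacity_tib": 40.0, "used_tib": 31.0},
}

for name, c in clusters.items():
    slack = 1 - c["used_tib"] / c["capacity_tib"]
    status = "OK" if slack > 0.30 else "ADD CAPACITY BEFORE MAINTENANCE"
    print(f"{name}: {slack:.0%} slack -> {status}")
# Cluster-1: 40% slack -> OK
# Cluster-2: 22% slack -> ADD CAPACITY BEFORE MAINTENANCE
```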
How Long Will It Take?
As usual, it depends. But you can make some (very) rough estimates by following the guidance on this page.
Will My SDDC Expire?
Yes, your SDDC version will expire some day, but it will be upgraded before that happens. There’s a page where you can look up the expiration dates of the various SDDC releases. It’s all part of the SDDC lifecycle.
Correlating VMware Cloud on AWS with Component Releases
Ever found yourself wondering what component versions are being used in VMware Cloud on AWS? Wonder no more with this very handy reference.
Conclusion
There’s obviously a lot more that goes on behind the scenes to keep everything running in tip-top shape for our customers. All of this talk of phases, waves, and release notes can be a little confusing if you’re new to the platform. Having worked in a variety of (managed and unmanaged) service providers over the years, I do like that VMware has bundled up all of this information and put it out there for people to check out. As always, if you’ve got questions about how the various software integrations work, and you can’t find the information in the documentation, reach out to your local account team and they’ll be able to help.
In this episode of “Things My Customers Have Asked Me” (or TMCHAM for short), I’m going to dive into a few questions around VMware Cloud Disaster Recovery (VCDR), a service we offer as an add-on to VMware Cloud on AWS. If you’re unfamiliar with VCDR, you can read a bit more about it here.
VCDR Roles and Permissions
Can RBAC roles be customised? Not really, as these are cascaded down from the Cloud Services hub. As I understand it, you don’t have granular control over them; there are just the pre-defined, default roles outlined here, so you need to be careful about what you hand out to folks in your organisation. To see what Service Roles have been assigned to your account, go to My Account in the VMware Cloud Services console, and then click on My Roles. Under Service Roles, you’ll see a list of services, such as VCDR, Skyline, and so on. You can then check what roles have been assigned.
VCDR Protection Groups
VCDR Protection Groups are the way that we logically group together workloads to be protected with the same RPO, schedule, and retention. There are two types of protection group: standard-frequency and high-frequency. Standard-frequency snapshots can be run as often as every 4 hours, while high-frequency snapshots can go as often as every 30 minutes. You can read more on protection groups here. It’s important to note that there are some caveats to be aware of with high-frequency snapshots. These are outlined here.
30-minute RPOs were introduced in late 2021, but there are some caveats that you need to be aware of. Some of these are straightforward, such as the minimum software levels for on-premises protection. But you also need to be mindful that VMs with existing vSphere snapshots will not be included, and, more importantly, high-frequency snapshots can’t be quiesced.
Can you have a VM in both a standard- and a high-frequency snapshot protection group? Would this give us the best of both worlds – e.g. an RPO as low as 30 minutes, but with a guaranteed snapshot every 4 hours? Once a VM has had a high-frequency snapshot taken, it keeps using that mechanism thereafter, even if it sits in a protection group using standard protection. Note also that you set a schedule per protection group, so you can have snapshots running every 30 minutes and kept for a particular period of time (you select the retention), and also run snapshots every 4 hours and keep those for a different period. While you can technically have a VM in multiple groups, you’re better off configuring a variety of schedules across your protection groups to meet those different RPOs.
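To make the schedule discussion a little more concrete, here’s a rough sketch of how you might model complementary schedules. The structure and values are illustrative only – this isn’t the VCDR API.

```python
from dataclasses import dataclass

@dataclass
class ProtectionGroup:
    name: str
    snapshot_type: str       # "standard" or "high-frequency"
    frequency_minutes: int   # standard: down to every 4 hours; high-frequency: down to 30 minutes
    retention_days: int

# Complementary schedules rather than one VM in two groups:
groups = [
    ProtectionGroup("tier1-short-term", "high-frequency", 30, 3),  # 30-minute RPO, kept 3 days
    ProtectionGroup("tier1-long-term", "standard", 240, 30),       # 4-hourly snapshots, kept 30 days
]

for g in groups:
    per_day = (24 * 60) // g.frequency_minutes
    print(f"{g.name}: {per_day} snapshots/day, retained for {g.retention_days} days")
```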
Quiesced Snapshots
What happens to a VM during a quiesced state – would we experience micro service outages? The best answer I can give is “it depends”. The process for the standard, quiesced snapshot is similar to the one described here. The VM will be stunned by the process, so depending on what kind of activity is happening on the VM, there may be a micro outage to the service.
Other Considerations
The documentation talks about not changing anything when a scheduled snapshot is being run – how do we manage configuration of the SDDC if jobs are running 24/7? It seems odd that nothing can be changed while a scheduled snapshot is running. This guidance refers to the VM being snapshotted, i.e. don’t change that VM’s configuration or make changes that would impact it mid-snapshot. It’s not a blanket rule for the whole environment.
Like most things, success with VCDR relies heavily on understanding the outcomes your organisation wants to achieve, and then working backwards from there. It’s also important to understand that this is a great way to do DR, but not necessarily a great way to do standard backup and recovery activities. Hopefully this article helps clarify some of the questions folks have around VCDR, and if it doesn’t, please don’t hesitate to get in contact.
I’m starting a new series on the blog. It’s called “Things My Customers Have Asked Me” (or TMCHAM for short). There are frequently occasions where the customer collateral I present on VMware Cloud on AWS doesn’t cover every single use case that my customers are interested in, or perhaps it doesn’t dive deeply enough into some of the material people would like to know more about. The idea behind these posts is that if I have one customer asking about this stuff, chances are another one might like to know about it too. I won’t be talking about internal-only stuff, or roadmap details in these posts (or anywhere publicly, for that matter), but hopefully these articles will be a useful point of information consolidation for folks who are into that sort of thing.
The PCI DSS compliance capability for VMware Cloud on AWS was covered in March 2021, and you can see some of the details in the VMware Cloud on AWS Release Notes. You can also read my learned colleague Greg Vinton’s take on it here, and there’s a YouTube video for people who prefer that sort of thing. To enable PCI compliance on your Organization, you need to request the capability via your VMware account team. It’s not something that’s configured by default, as some of the requirements around PCI DSS might be considered an unnecessary overhead by some folks. The account team will get it enabled on your Organization, and you can then deploy your SDDC. It’s important to note that your Organization needs to be empty – PCI DSS can’t be enabled on an Organization with SDDCs already deployed.
Configuration Changes
There are a number of configuration changes needed to ensure that your SDDC is PCI-compliant too. This includes disabling add-on services like HCX and Site Recovery. To do this, go to Inventory – Settings, and scroll down to Compliance Hardening.
Note that you’ll only see the “Compliance Hardening” section if your Organization has been configured for PCI DSS compliance. You’ll need to finish your HCX migrations before your Organization is compliant. You’ll also need to change your NSX configuration (Network & Security Tab Access). There is some more info on that here and there’s a blog post that also runs through it step by step that you can read here. Note that you’ll need to use the API to change the local NSX Manager user password every 90 days. Information on that can be found here.
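For the password rotation piece, something along these lines should work. I’m using the NSX-T node users API as I understand it, with the CSP token exchange for authentication – treat the endpoint paths, the user ID, and the NSX URL as assumptions to verify against the documentation linked above.

```python
import requests

CSP_TOKEN_URL = ("https://console.cloud.vmware.com/csp/gateway/am/api/"
                 "auth/api-tokens/authorize")
# Your SDDC's NSX Manager URL, from the VMC console -- placeholder here.
NSX_URL = "https://nsx-manager.example.vmwarevmc.com"

def rotate_nsx_local_user_password(refresh_token: str, new_password: str,
                                   user_id: int = 10000) -> None:
    # Exchange a CSP API (refresh) token for a short-lived access token.
    resp = requests.post(CSP_TOKEN_URL, data={"refresh_token": refresh_token})
    resp.raise_for_status()
    token = resp.json()["access_token"]

    # Update the local node user's password. On NSX-T, userid 10000 is
    # typically the 'admin' user -- verify this for your environment.
    resp = requests.put(
        f"{NSX_URL}/api/v1/node/users/{user_id}",
        headers={"csp-auth-token": token},
        json={"password": new_password},
    )
    resp.raise_for_status()
```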
Other Considerations
One final thing to note is that this process doesn’t automatically make your Virtual Machines PCI compliant. You’ll still need to ensure that you’ve done the work in that respect. And I can’t repeat this enough – your Organization will only pass a PCI audit if you’ve done these additional steps. Merely requesting that VMware enable this at an Organization level won’t be enough.
If you’ve been following along at home, you may have noticed that the blog has been a little quiet recently. There were a few reasons for that, but the main one was that I joined VMware this year as a Cloud Solutions Architect focussed on VMware Cloud on AWS. It’s an interesting role, and an interesting place to work. I’ve been busy onboarding and thought I’d share some brief notes on VMware Cloud on AWS. I still intend to talk about other things on this blog too, but figured this has been front of mind for me recently, and it might be useful to someone looking to find out more. If you have any questions, or want to know more about something, I’m happy to help where I can. And it doesn’t need to be a sales call.
Overview
In short, VMware Cloud on AWS is “an integrated cloud offering jointly developed by Amazon Web Services (AWS) and VMware.” The idea is that you run VMware’s SDDC stack on AWS bare metal hosts and enjoy the best of both worlds – VMware’s software and access to a broad range of AWS services. I won’t be covering too much of the basics here, but you can read more about it on the product website. I do recommend checking out the product walkthroughs, as these are a great way to get familiar with how the product behaves. Once you’ve done that, you should also check out the solutions index – it’s a great collection of information about various things that run on VMware Cloud on AWS, including things like SQL performance, DNS configuration, and stuff like that. Once you’ve got a handle on the platform and some of the things it can do, it’s also worth running through the Evaluation Guide. This will give you the opportunity to perform a self-guided evaluation of the platform’s features and functionality. There’s also a pretty comprehensive FAQ that you can find here.
Hardware
Node Types
There are two node types available at this time: i3.metal and i3en.metal. Storage for the nodes is provided by VMware vSAN.
One of the neat things is support for custom core counts on a per-cluster basis. You still pay full price for the hosts, but the idea is that your core licensing for BigDBVendor, or whatever, is under control. Note that you can’t change this core count once your hosts are deployed.
Other Cool Features
Elastic DRS lets you expand your SDDC as required, based on configured thresholds for CPU, RAM, and storage. You can read more about that here.
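To give you a feel for how threshold-based scaling works, here’s an illustrative sketch of the decision logic. The thresholds and rules are made up for the example – the real policies and their defaults are covered in the documentation linked above.

```python
# Made-up thresholds for illustration; real policies are in the documentation.
HIGH = {"cpu": 0.90, "ram": 0.80, "storage": 0.80}
LOW = {"cpu": 0.50, "ram": 0.50, "storage": 0.40}

def edrs_style_recommendation(cpu: float, ram: float, storage: float) -> str:
    # Scale out when *any* resource breaches its high threshold.
    if cpu > HIGH["cpu"] or ram > HIGH["ram"] or storage > HIGH["storage"]:
        return "scale-out"
    # Scale in only when *all* resources sit below their low thresholds.
    if cpu < LOW["cpu"] and ram < LOW["ram"] and storage < LOW["storage"]:
        return "scale-in"
    return "no-change"

print(edrs_style_recommendation(0.40, 0.45, 0.85))  # -> scale-out (storage-driven)
```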
Configuration Backups
If you’re using HCX, you might want to back up your HCX Manager. You can read more on that here. There’s also a VMware Fling that provides a level of SDDC import / export capability. You can check that out here. (Hat tip to my colleague Michael for telling me about these).
Sizing It Up
If you’re curious about what your current on-premises estate might look like from a sizing perspective, you can run it through the online sizing tool. This has a variety of input options, including support for RVTools imports. It’s fairly easy to use, but for complex scenarios I’d always recommend you get VMware or a partner involved. Pricing for the platform is also publicly available, and you can check that out here. There are a few different ways to consume the platform, including 1-year, 3-year, and on-demand options, and the discounting levels vary according to the commitment.
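If you want to sanity-check your RVTools data before feeding it into the sizer, a few lines of Python will summarise it. This assumes the usual ‘vInfo’ tab and column names from recent RVTools builds – check your export, as column names have changed between versions.

```python
import pandas as pd

# Read the vInfo tab of an RVTools export (column names are assumptions).
df = pd.read_excel("rvtools-export.xlsx", sheet_name="vInfo")
powered_on = df[df["Powerstate"] == "poweredOn"]

summary = {
    "vm_count": len(powered_on),
    "total_vcpus": int(powered_on["CPUs"].sum()),
    "total_ram_gib": round(powered_on["Memory"].sum() / 1024, 1),        # MiB -> GiB
    "provisioned_tib": round(powered_on["Provisioned MB"].sum() / 1024**2, 1),
    "in_use_tib": round(powered_on["In Use MB"].sum() / 1024**2, 1),
}
print(summary)
```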
Note that there are a number of other capabilities sold separately, including:
VMware Site Recovery
VMware Cloud Disaster Recovery
VMware NSX Advanced Firewall
VMware vRealize Automation Cloud
VMware vRealize Operations Cloud
VMware vRealize Log Insight Cloud
VMware vRealize Network Insight Cloud
VMware Tanzu Standard
Lifecycle
One of the things I like about VMware Cloud on AWS is that the release notes for the platform are publicly available, and provide a great summary of new features as they get rolled out to customers.
What Now?
I’ve barely scratched the surface of what I’d like to talk about with VMware Cloud on AWS, and I hope in the future to post articles on some of the stuff that gets me excited, like migration options with HCX, and using VMware Cloud Disaster Recovery. In the meantime, the team (it’s mainly Greg doing the hard work, if I’m being honest) is running a series of webinars next week. If you’re interested in VMware Cloud on AWS and want to know more, you could do worse than checking these out. Details below, and registration is here.
Design and Deploy a VMware Cloud on AWS SDDC
28 February 2022, Monday
9:30am IST | 12:00pm SGT | 1:00pm KST | 3:00pm AEDT
Join us as we walk through the process of architecting and deploying a VMware Cloud on AWS SDDC. We will cover: SDDC sizing for an application, sizing of the management CIDR block, connectivity design, VPN vs Direct Connect, and basic networking and dependencies.
Application Migration to VMC on AWS
1 March 2022, Tuesday
9:30am IST | 12:00pm SGT | 1:00pm KST | 3:00pm AEDT
In this session we will demonstrate the process of migrating a live application. Topics include: a walkthrough of the HCX architecture, the HCX deployment process, HCX configuration, extending an L2 network, mobility (location) aware networking, and a conversation about the various migration types.
Disaster Recovery – Protecting VMC on AWS or On-Prem Based Applications
2 March 2022, Wednesday
9:30am IST | 12:00pm SGT | 1:00pm KST | 3:00pm AEDT
Listen to experts demonstrate the process of architecting and deploying VMware Cloud Disaster Recovery (VCDR) with VMC on AWS to protect an application. We will cover: a walkthrough of the VCDR architecture, the VCDR deployment process, considerations around VCDR, building a protection group, building a DR plan, executing DR, and a discussion of failback options.
Speaking of cloud, I enjoyed this article from Chris M. Evans on the AWS “wobble” (as he puts it) in us-east-1 recently. Speaking of articles Chris has written recently, check out his coverage of the Pure Storage FlashArray//XL announcement.
Speaking of Pure Storage, my friend Jon wrote about his experience with ActiveCluster in the field recently. You can find that here. I always find these articles to be invaluable, if only because they demonstrate what’s happening out there in the real world.
Want some press releases? Here’s one from Datadobi announcing it has released new Starter Packs for DobiMigrate ranging from 1PB up to 7PB.
Data protection isn’t just something you do at the office – it’s a problem for home too. I’m always interested to hear how other people tackle the problem. This article from Jeff Geerling (and the associated documentation on Github) was great.
John Nicholson is a smart guy, so I think you should check out his articles on benchmarking (and what folks are getting wrong). At the moment this is a 2-part series, but I suspect that could be expanded. You can find Part 1 here and Part 2 here. He makes a great point that benchmarking can be valuable, but benchmarking like it’s 1999 may not be the best thing to do (I’m paraphrasing).
Speaking of smart people, Tom Andry put together a great article recently on dispelling myths around subwoofers. If you or a loved one are getting worked up about subwoofers, check out this article.
I had people ask me if I was doing a predictions post this year. I’m not crazy enough to do that, but Mellor is. You can read his article here.
In some personal news (and it’s not LinkedIn official yet) I recently quit my job and will be taking up a new role in the new year. I’m not shutting the blog down, but you might see a bit of a change in the content. I can’t see myself stopping these articles, but it’s likely there’ll be less of the data protection howto articles being published. But we’ll see. In any case, wherever you are, stay safe, happy holidays, and see you on the line next year.
22dot6 sprang from stealth in May 2021, and recently announced its TASS Cloud Suite. I had the opportunity to once again catch up with Diamond Lauffin about the announcement, and thought I’d share some thoughts here.
The Product
If you’re unfamiliar with the 22dot6 product, it’s basically a software or hardware-based storage offering that delivers:
File and storage management
Enterprise-class data services
Data and systems profiling and analytics
Performance and scalability
Virtual, physical, and cloud capabilities, with NFS, SMB, and S3 mixed protocol support
According to Lauffin, it’s built on a scale-out, parallel architecture, and can deliver great pricing and performance per GiB.
Components
It’s Linux-based, and can leverage any bare-metal machine or VM. Metadata services live on scale-out, redundant nodes (VSR nodes), and data services are handled via single, clustered, or redundant nodes (DSX nodes).
[image courtesy of 22dot6]
TASS
The key to this all making some kind of sense is TASS (the Transcendent Abstractive Storage System). 22dot6 describes this as a “purpose-built, objective based software integrating users, applications and data services with physical, virtual and cloud-based architectures globally”. Sounds impressive, doesn’t it? Valence is the software that drives everything, providing the ability to deliver NAS and object over physical and virtual storage, in on-premises, hybrid, or public cloud deployments. It’s multi-vendor capable, offering support for third-party storage systems, and does some really neat stuff with analytics to ensure your storage is performing the way you need it to.
The Announcement
22dot6 has announced the TASS Cloud Suite, an “expanded collection of cloud specific features to enhance its universal storage software Valence”. Aimed at solving many of the typical problems users face when using cloud storage, it addresses:
Private cloud, with a “point-and-click transcendent capability to easily create an elastic, scale-on-demand, any storage, anywhere, private cloud architecture”
Hybrid cloud, by combining local and cloud resources into one big pool of storage
Cloud migration and mobility, with a “zero stub, zero pointer” architecture
Cloud-based NAS / Block / S3 Object consolidation, with a “transparent, multi-protocol, cross-platform support for all security and permissions with a single point-and-click”
There’s also support for cloud-based data protection, WORM encoding of data, and a comprehensive suite of analytics and reporting.
Thoughts and Further Reading
I’ve had the pleasure of speaking to Lauffin about 22dot6 on two occasions now, and he’s one of the most enthusiastic storage company founders / CEOs I’ve ever been briefed by. He’s certainly been around for a while, and has seen a whole bunch of stuff. In writing this post I’ve had a hard time articulating everything that Lauffin tells me 22dot6 can do while staying focused on the cloud part of the announcement. Clearly I should have done an overview post in May, and then I could just point you to that. In short, go have a look at the website and you’ll see that there’s quite a bit going on with this product.
The solution seeks to address a whole raft of issues that anyone familiar with modern storage systems will have come across at one stage or another. I remain continually intrigued by how various solutions work to address storage virtualisation challenges, while still making a system that works in a seamless manner. Then try and do that at scale, and in multiple geographical locations across the world. It’s not a terribly easy problem to solve, and if Lauffin and his team can actually pull it off, they’ll be well placed to dominate the storage market in the near future.
Spend any time with Lauffin and you realise that everything about 22dot6 speaks to lessons learned over years in the storage industry, and it’s refreshing to see a company take on such a wide range of challenges in an attempt to fix what’s wrong with modern storage systems. What I can’t say for sure, having never had any real stick time with the solution, is whether it works. In Lauffin’s defence, he has offered to get me in contact with some folks for a demo, and I’ll be taking him up on that offer. There’s a lot to like about what 22dot6 is trying to do here, with the TASS Cloud Suite being a small part of the bigger picture. I’m looking forward to seeing how this goes for 22dot6 over the next year or two, and will report back after I’ve had a demo.
VMworld is on this week. I still find the virtual format (and timezones) challenging, and I miss the hallway track and the jet lag. There’s nonetheless some good news coming out of the event. One thing that was announced prior to the event was Tanzu Community Edition. William Lam talks more about that here.
Speaking of VMworld news, Viktor provided a great summary on the various “projects” being announced. You can read more here.
I’ve been a Mac user for a long time, and there’s stuff I’m learning every week via Howard Oakley’s blog. Check out this article covering the Recovery Partition. While I’m at it, this presentation he did on Time Machine is also pretty ace.
Facebook had a little problem this week, and the Cloudflare folks have provided a decent overview of what happened. As someone who works for a service provider, this kind of stuff makes me twitchy.
Fibre Channel? Cloud? Chalk and cheese? Maybe. Read Chin-Fah’s article for some more insights. Personally, I miss working with FC, but I don’t miss the arguing I had to do with systems and networks people when it came to the correct feeding and watering of FC environments.
Remote working has been a challenge for many organisations, with some managers not understanding that their workers weren’t just watching streaming video all day, but actually being more productive. Not everything needs to be a video call, however, and this post / presentation has a lot of great tips on what does and doesn’t work with distributed teams.
I’ve had to ask this question before. And Jase has apparently had to answer it too, so he’s posted an article on vSAN and external storage here.
This is the best response to a trio of questions I’ve read in some time.
Disclaimer: I recently attended Storage Field Day 22. Some expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event. Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.
CTERA recently presented at Storage Field Day 22. You can see their videos from Storage Field Day 22 here, and download a PDF copy of my rough notes from here.
CTERA?
In a nutshell, CTERA is:
Enterprise NAS over Object
100% Private
Multi-cloud, hybrid consistent
Delivers data placement policy and mobility
Caching, not tiering
Zero-trust
The Problem
So what’s the problem we’re trying to solve with unstructured data?
Every IT environment is hybrid
More data is being generated at the edge
Workload placement strategies are driving storage placement
Storage must be instrumented and accessible anywhere
Seems simple enough, but edge storage is hard to get right.
[image courtesy of CTERA]
What Else Do You Want?
We want a lot from our edge storage solutions, including the ability to:
Migrate data to cloud, while keeping a fast local cache
Connect branches and users over a single namespace
Enjoy an HQ-grade experience regardless of location
Achieve 80% cost savings with global dedupe and cloud economics
The Solution?
CTERA Multi-cloud Global File System – a “software-defined file over object with distributed SMB/NFS edge caching and endpoint collaboration”.
[image courtesy of CTERA]
CTERA Architecture
Single namespace connecting HQ, branches and users with ACL support
Object-native backend with cache accelerated access for remote sites
Multi-cloud scale-out to customer’s private or public infrastructure
Source-based encryption and global deduplication
Multi-tenant administration scalable to thousands of sites
Data management ecosystem for content security, analytics and DevOps automation
[image courtesy of CTERA]
Use Cases?
NAS Modernisation – Hybrid Edge Filer, Object-based Filesystem, Elastic Scaling, Built-in Backup & DR
Media – Large Dataset Handling, Ultra-Fast Cloud Sync, macOS Experience, Cloud Streaming
Multi-site Collaboration – Global File System, Distributed Sync, Scalable Central Management
Edge Data Processing – Integrated HCI Filers, Distributed Data Analysis, Machine-Generated Data
Container-native – Global File System Across Distributed Kubernetes Clusters and Tethered Cloud Services
Thoughts and Further Reading
It should come as no surprise that people expect data to be available to them everywhere nowadays. And that’s not just sync and share solutions or sneaker net products on USB drives. No, folks want to be able to access corporate data in a non-intrusive fashion. It gets worse for the IT department though, because your end users aren’t just “heavy spreadsheet users”. They’re also editing large video files, working on complicated technical design diagrams, and generating gigabytes of log files for later analysis. And it’s not enough to say “hey, can you download a copy and upload it later and hope that no-one else has messed with the file?”. Users are expecting more from their systems. There are a variety of ways to deal with this problem, and CTERA seems to have provided a fairly robust solution, with many ways of accessing data, collaborating, and storing data in the cloud and at the edge. The focus isn’t limited to office automation data, with key verticals such as media and entertainment, healthcare, and financial services all having solutions suited to their particular requirements.
CTERA’s Formula One slide is impressive, as is the variety of ways it works to help organisations address the explosion of unstructured data in the enterprise. With large swathes of knowledge workers now working more frequently outside the confines of the head office, these kinds of solutions are only going to become more in demand, particularly those that can leverage cloud in an effective (and transparent) fashion. I’m excited to see what’s to come with CTERA. Check out Ray’s article for a more comprehensive view of what CTERA does.