Veeam Basics – Configuring A Scale-Out Backup Repository

I’ve been doing some integration testing with Pure Storage and Veeam in the lab recently, and thought I’d write an article on configuring a scale-out backup repository (SOBR). To learn more about SOBR configurations, you can read the Veeam documentation here. This post from Rick Vanover also covers the what and the why of SOBR. In this example, I’m using a couple of FlashBlade-based NFS repositories that I’ve configured as per these instructions. Each NFS repository is mounted on a separate Linux virtual machine. I’m using a Windows-based Veeam Backup & Replication server running version 9.5 Update 4.
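As an aside, getting each FlashBlade NFS export attached to its Linux VM is straightforward. Here's a minimal sketch (the hostname, data VIP and export path are made up for illustration; Pure's instructions cover the recommended mount options):

[danf@repo01 ~]$ sudo mkdir -p /mnt/veeamrepo1
[danf@repo01 ~]$ sudo mount -t nfs flashblade-data:/veeamrepo1 /mnt/veeamrepo1

Add a matching entry to /etc/fstab if you want the mount to persist across reboots.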

 

Process

Start by going to Backup Infrastructure -> Scale-out Repositories and click on Add Scale-out Repository.

Give it a name, maybe something snappy like “Scale-out Backup Repository 1”?

Click on Add to add the backup repositories.

When you click on Add, you’ll have the option to select the backup repositories you want to use. You can select them all, but for the purpose of this exercise, we won’t.

In this example, Backup Repository 1 and 2 are the NFS locations I configured previously. Select those two and click on OK.

You’ll now see the repositories listed as Extents.

Click on Advanced to check the advanced settings are what you expect them to be. Click on OK.

Click Next to continue. You’ll see the following message.

You then choose the placement policy. It’s strongly recommended that you stick with Data locality.

You can also pick object storage to use as a Capacity Tier.

You’ll also have an option to configure the age of the files to be moved, and when they can be moved. And you might want to encrypt the data uploaded to your object storage environment, depending on where that object storage lives.

Once you’re happy, click on Apply. You’ll be presented with a summary of the configuration (and hopefully there won’t be any errors).

 

Thoughts

The SOBR feature, in my opinion, is pretty cool. I particularly like the ability to put extents in maintenance mode. And the option to use object storage as a capacity tier is a very useful feature. You get some granular control in terms of where you put your backup data, and what kind of performance you can throw at the environment. And as you can see, it’s not overly difficult to configure the environment. There are a few things to keep in mind though. Make sure your extents are stored on resilient hardware. If you keep your backup sets together with the data locality option, you’ll be a sad panda if that extent goes bye bye. And the same goes for the performance option. You’ll also need Enterprise or Enterprise Plus editions of Veeam Backup & Replication for this feature to work. And you can’t use this feature for these types of jobs:

  • Configuration backup job;
  • Replication jobs (including replica seeding);
  • VM copy jobs; and
  • Veeam Agent backup jobs created by Veeam Agent for Microsoft Windows 1.5 or earlier and Veeam Agent for Linux 1.0 Update 1 or earlier.

There are any number of reasons why a scale-out backup repository can be a handy feature to use in your data protection environment. I’ve had the misfortune in the past of working with products that were difficult to manage from a data mobility perspective. Too many times I’ve been stuck going through all kinds of mental gymnastics working out how to migrate data sets from one storage platform to the next. With this, it’s a simple matter of a few clicks and you’re on your way with a new bucket. The tiering to object feature is also useful, particularly if you need to keep backup sets around for compliance reasons. There’s no need to spend money keeping these on performance disk if you can comfortably have them sitting on capacity storage after a period of time. And if you can control this movement through a policy-driven approach, then that’s even better. If you’re new to Veeam, it’s worth checking out a feature like this, particularly if you’re struggling with media migration challenges in your current environment. And if you’re an existing Enterprise or Enterprise Plus customer, this might be something you can take advantage of.

Using A Pure Storage FlashBlade As A Veeam Repository

I’ve been doing some testing in the lab recently. The focus of this testing has been primarily on Pure Storage’s ObjectEngine and its associated infrastructure. As part of that, I’ve been doing various things with Veeam Backup & Replication 9.5 Update 4, including setting up a FlashBlade NFS repository. I’ve documented the process here. One thing that I thought worthy of noting separately was the firewall requirements. For my Linux Mount Server, I used a CentOS 7 VM, configured with 8 vCPUs and 16GB of RAM. I know, I normally use Debian, but for some reason (that I didn’t have time to investigate) it kept dying every time I kicked off a backup job.

In any case, I set everything up as per Pure’s instructions, but kept getting timeout errors on the job. The error I got was “5/17/2019 10:03:47 AM :: Processing HOST-01 Error: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond NFSMOUNTHOST:2500”. It felt like it was probably a firewall issue of some sort. I tried to make an exception on the Windows VM hosting the Veeam Backup server, but that didn’t help. The problem was with the Linux VM’s firewall. I used the instructions I found here to add in some custom rules. According to the Veeam documentation, Backup Repository access uses TCP ports 2500 – 5000. Your SecOps people will no doubt have a conniption, but here’s how to open those ports on CentOS.
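As an aside, a quick way to confirm it really is a port problem before you start fiddling is to test TCP reachability to the mount server from another Linux box (assuming your nc build supports zero-I/O mode; a timeout here points at a firewall rather than the storage):

[danf@jumpbox ~]$ nc -zv nfsmounthost 2500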

Firstly, is the firewall running?

[danf@nfsmounthost ~]$ sudo firewall-cmd --state
[sudo] password for danf:
running

Yes it is. So let’s stop it to see if this line of troubleshooting is worth pursuing.

[danf@nfsmounthost ~]$ sudo systemctl stop firewalld

The backup job worked after that. Okay, so let’s start it up again and open up some ports to test.

[danf@nfsmounthost ~]$ sudo systemctl start firewalld
[danf@nfsmounthost ~]$ sudo firewall-cmd --add-port=2500-5000/tcp
success

That worked, so I wanted to make it a more permanent arrangement.

[danf@nfsmounthost ~]$ sudo firewall-cmd --permanent --add-port=2500-5000/tcp
success
[danf@nfsmounthost ~]$ sudo firewall-cmd --permanent --list-ports
2500-5000/tcp
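One thing worth noting: --permanent on its own only updates the saved configuration, not the running one. It worked here because I’d already added the port range to the runtime configuration in the previous step. If you go straight to --permanent, follow it with a reload:

[danf@nfsmounthost ~]$ sudo firewall-cmd --reload
success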

Remember, it’s never the storage. It’s always the firewall. Also, keep in mind this article is about the how. I’m not offering my opinion about whether it’s really a good idea to configure your host-based firewalls with more holes than Swiss cheese. Or whatever things have lots of holes in them.

Random Short Take #14

Here are a few links to some random news items and other content that I found interesting. You might find them interesting too. Episode 14 – giddy-up!

Dell Technologies World 2019 – Wrap-up and Link-o-rama

Disclaimer: I recently attended Dell Technologies World 2019.  My flights, accommodation and conference pass were paid for by Dell Technologies via the Media, Analysts and Influencers program. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Here’s a quick post with links to the other posts I did surrounding Dell Technologies World 2019, as well as links to other articles I found interesting.

 

Product Announcements

Here’re the posts I did covering the main product-related announcements from the show.

Dell EMC Announces Unity XT And More Cloudy Things

Dell EMC Announces PowerProtect Software (And Hardware)

Dell Announces Dell Technologies Cloud (Platforms and DCaaS)

 

Event-Related

Here’re the posts I did during the show. These were mainly from the media sessions I attended.

Dell – Dell Technologies World 2019 – See You Soon Las Vegas

Dell Technologies World 2019 – Monday General Session – The Architects of Innovation – Rough Notes

Dell Technologies World 2019 – Tuesday General Session – Innovation to Unlock Your Digital Future – Rough Notes

Dell Technologies World 2019 – Media Session – Architecting Innovation in a Multi-Cloud World – Rough Notes

Dell Technologies World 2019 – Wednesday General Session – Optimism and Happiness in the Digital Age – Rough Notes

Dell Technologies World 2019 – (Fairly) Full Disclosure

 

Dell Technologies Announcements

Here are some of the posts from Dell Technologies covering the major product announcements and news.

Dell Technologies and Orange Collaborate for Telco Multi-Access Edge Transformation

Dell Technologies Brings Speed, Security and Smart Design to Mobile PCs for Business

Dell Technologies Powers Real Transformation and Innovation with New Storage, Data Management and Data Protection Solutions

Dell Technologies Transforms IT from Edge to Core to Cloud

Dell Technologies Cloud Accelerates Customers’ Multi-Cloud Journey

Dell Technologies Unified Workspace Revolutionizes the Way People Work

Dell Technologies and Microsoft Expand Partnership to Help Customers Accelerate Their Digital Transformation

 

Tech Field Day Extra

I also had the opportunity to participate in Tech Field Day Extra at Dell Technologies World 2019. Here are the articles I wrote for that part of the event.

Liqid Are Dynamic In The DC

Big Switch Are Bringing The Cloud To Your DC

Kemp Keeps ECS Balanced

 

Other Interesting Articles

TFDx @ DTW ’19 – Get To Know: Liqid

TFDx @ DTW ’19 – Get To Know: Kemp

TFDx @ DTW ’19 – Get to Know: Big Switch

Connecting ideas and people with Dell Influencers

Game Changer: VMware Cloud on Dell EMC

Dell Technologies Cloud and VMware Cloud on Dell EMC Announced

Run Your VMware Natively On Azure With Azure VMware Solutions

Dell Technologies World 2019 recap

Scaling new HPC with Composable Architecture

Object Stores and Load Balancers

 

Conclusion

I had a busy but enjoyable week. I would have liked to get to some of the technical breakout sessions, but being given access to some of the top executives in the company via the Media, Analysts and Influencers program was invaluable. Thanks again to Dell Technologies (particularly Debbie Friez and Konnie) for having me along to the show. And big thanks to Stephen and the Tech Field Day team for having me along to the Tech Field Day event as well.

Big Switch Are Bringing The Cloud To Your DC

Disclaimer: I recently attended Dell Technologies World 2019.  My flights, accommodation and conference pass were paid for by Dell Technologies via the Media, Analysts and Influencers program. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

As part of my attendance at Dell Technologies World 2019 I had the opportunity to attend Tech Field Day Extra sessions. You can view the videos from the Big Switch Networks session here, and download my rough notes from here.

 

The Network Is The Cloud

Cloud isn’t a location, it’s a design principle. And networking needs to evolve with the times. The enterprise is hamstrung by:

  • Complex and slow operations
  • Inadequate visibility
  • Lack of operational consistency

It’s time on-premises infrastructure was built the same way the service providers build theirs:

  • Software-defined;
  • Automated with APIs;
  • Open Hardware; and
  • Integrated Analytics.

APIs are not an afterthought for Big Switch.

A Better DC Network

  • Cloud-first infrastructure – design, build and operate your on-premises network with the same techniques used internally by public cloud operators
  • Cloud-first experience – give your application teams the same “as-a-service” network experience on-premises that they get with the cloud
  • Cloud-first consistency – use the same tool chain to manage both on-premises and in-cloud networks

 

Thoughts and Further Reading

There are a number of reasons why enterprise IT folks are looking wistfully at service providers and the public cloud infrastructure setups and wishing they could do IT that way too. If you’re a bit old fashioned, you might think that fast and loose isn’t really how you should be doing enterprise IT – something that’s notorious for being slow, expensive, and reliable. But that would be selling the SPs short (and I don’t just say that because I work for a service provider in my day job). What service providers and public cloud folks are very good at is getting maximum value from the infrastructure they have available to them. We don’t necessarily adopt cloud-like approaches to infrastructure to save money, but rather to solve the same problems in the enterprise that are being solved in the public clouds. Gone are the days when the average business will put up with vast sums of cash being poured into enterprise IT shops with little to no apparent value being extracted from said investment. It seems to be no longer enough to say “Company X costs this much money, so that’s what we pay”. For better or worse, the business is both more and less savvy about what IT costs, and what you can do with IT. Sure, you’ll still laugh at the executive challenging the cost of core switches by comparing them to what can be had at the local white goods slinger. But you’d better be sure you can justify the cost of that badge on the box that runs your network, because there are plenty of folks ready to do it for cheaper. And they’ll mostly do it reliably too.

This is the kind of thing that lends itself perfectly to the likes of Big Switch Networks. You no longer necessarily need to buy badged hardware to run your applications in the fashion that suits you. You can put yourself in a position to get control over how your spend is distributed and not feel like you’re feeding some mega company’s profit margins without getting a return on your investment. It doesn’t always work like that, but the possibility is there. Big Switch have been talking about this kind of choice for some time now, and have been delivering products that make that possibility a reality. They recently announced an OEM agreement with Dell EMC. It mightn’t seem like a big deal, as Dell like to cosy up to all kinds of companies to fill apparent gaps in the portfolio. But they also don’t enter into these types of agreements without having seriously evaluated the other company. If you have a chance to watch the customer testimonial at Tech Field Day Extra, you’ll get a good feel for just what can be accomplished with an on-premises environment that has service provider-like scalability, management, and performance challenges. There’s a great tale to be told here. Not every enterprise is working at “legacy” pace, and many are working hard to implement modern infrastructure approaches to solve business problems. You can also see one of their customers talk with my friend Keith about the experience of implementing and managing Big Switch on Dell Open Networking.

Kemp Keeps ECS Balanced

Disclaimer: I recently attended Dell Technologies World 2019.  My flights, accommodation and conference pass were paid for by Dell Technologies via the Media, Analysts and Influencers program. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

As part of my attendance at Dell Technologies World 2019 I had the opportunity to attend Tech Field Day Extra sessions. You can view the videos from the Kemp session here, and download my rough notes from here.

 

Kemp Overview

Established in the early 2000s, Kemp has some 25,000+ customers globally, with 60,000+ app deployments in over 115 countries. Their main focus is an ADC (Application Delivery Controller) that you can think of as a “fancy load balancer”. Here’s a photo of Frank Yue telling us more about that.

Application Delivery – Why?

  • Availability – transparent failover when application resources fail
  • Scalability – easily add and remove application resources to meet changing demands
  • Security – authenticate users and protect applications against attack
  • Performance – offload security processing and content optimisation to Load Balancer
  • Control – visibility on application resource availability, health and performance

Product Overview

Kemp offer a few different products.

LoadMaster – scalable, secure apps

  • Load balancing
  • Traffic optimisation 
  • Security

There are a few different flavours of the LoadMaster, including cloud-native, virtual, and hardware-based.

360 Central – control, visibility

  • Management
  • Automation
  • Provisioning

360 Vision – Shorter MTTD / MTTR

  • Predictive analytics
  • Automated incident response
  • Observability

Yue made the point that “[l]oad balancing is not networking. And it’s not servers either. It’s somehow in between”. Kemp look to “[d]eal with the application from the networking perspective”.

 

Dell EMC ECS

So what’s Dell EMC ECS then? ECS stands for “Elastic Cloud Storage”, and it’s Dell EMC’s software-defined object storage offering. If you’re unfamiliar with it, here are a few points to note:

  • Objects are bundled data with metadata;
  • The object storage application manages the storage;
  • No real file system is needed;
  • Easily scale by just adding disks;
  • Delivers a low TCO.

It’s accessible via an API and offers the following services:

  • S3
  • Atmos
  • Swift
  • NFS
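If you haven’t played with object storage before, the S3-compatible interface means you can poke at an ECS bucket with standard tooling. Here’s a rough sketch using the AWS CLI (the endpoint hostname and bucket name are made up, and I believe ECS defaults to port 9021 for S3 over HTTPS, but check your own deployment):

$ aws s3 ls --endpoint-url https://ecs.example.internal:9021
$ aws s3 cp backup.tar s3://my-bucket/backup.tar --endpoint-url https://ecs.example.internal:9021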

 

Kemp / Dell EMC ECS Solution

So how does a load balancing solution from Kemp help? One of the ideas behind object storage is that you can lower primary storage costs. You can also use it to accelerate cloud native apps. Kemp helps with your ECS deployment by:

  • Maximising value from infrastructure investment
  • Improving service availability and resilience
  • Enabling cloud storage scalability for next generation apps

Load Balancing Use Cases for ECS

High Availability

  • ECS Node redundancy in the event of failure
  • A load balancer is required to allow for automatic failover and even distribution of traffic

Global Balancing

[image courtesy of Kemp]

  • Multiple clusters across different DCs
  • Global Server Load Balancing provides distribution of connections across these clusters based on proximity

Security

  • Offloading encryption from the Dell EMC ECS nodes to Kemp LoadMaster can greatly increase performance and simplify the management of transport layer security certificates
  • IPv6 to IPv4 – Dell EMC ECS does not support IPv6 natively – Kemp will provide that translation to IPv4

 

Thoughts and Further Reading

The first thing that most people ask when seeing this solution is “Won’t the enterprise IT organisation already have a load-balancing solution in place? Why would they go to Kemp to help with their ECS deployment?”. It’s a valid point, but the value here is more that Dell EMC are recommending that customers use the Kemp solution over the built-in load balancer provided with ECS. I’ve witnessed plenty of (potentially frustrating) situations where enterprises deploy multiple load balancing solutions depending on the application requirements or where the project funding was coming from. Remember that things don’t always make sense when it comes to enterprise IT. But putting those issues aside, there are likely plenty of shops looking to deploy ECS in a resilient fashion that haven’t yet had the requirement to deploy a load balancer, and ECS is the first thing driving that requirement. Kemp are clearly quite good at what they do, and have been in the load balancing game for a while now. The good news is that if you adopt their solution for your ECS environment, you can look to leverage their other offerings to provide additional load balancing capabilities for other applications that might require it.

You can read the deployment guide from Dell EMC here, and check out Adam’s preparation post on Kemp here for more background information.

Liqid Are Dynamic In The DC

Disclaimer: I recently attended Dell Technologies World 2019.  My flights, accommodation and conference pass were paid for by Dell Technologies via the Media, Analysts and Influencers program. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

As part of my attendance at Dell Technologies World 2019 I had the opportunity to attend Tech Field Day Extra sessions. You can view the videos from the session here, and download my rough notes from here.

 

Liqid

One of the presenters at Tech Field Day extra was Liqid, a company that specialises in composable infrastructure. So what does that mean then? Liqid “enables Composable Infrastructure with a PCIe fabric and software that orchestrates and manages bare-metal servers – storage, GPU, FPGA / TPU, Compute, Networking”. They say they’re not disaggregating DRAM as the industry’s not ready for that yet. Interestingly, Liqid have made sure they can do all of this with bare metal, as “[c]omposability without bare metal, with disaggregation, that’s just hyper-convergence”.

 

[image courtesy of Liqid]

The whole show is driven through Liqid Command Center, and there’s a switching PCIe fabric as well. You then combine this with various hardware elements, such as:

  • JBoF – Flash;
  • JBoN – Network;
  • JBoG – GPU; and
  • Compute nodes.

There are various expansion chassis options (network, storage, and graphics) and you can add in standard x86 servers. You can read about Liqid’s announcement around Dell EMC PowerEdge servers here.

Other Interesting Use Cases

Some of the more interesting use cases discussed by Liqid included “brownfield” deployments where customers don’t want to disaggregate everything. If they just want to disaggregate GPUs, for example, they can add a GPU pool to a fabric. This can be done with storage as well. Why would you want to do this kind of thing with networking? There are apparently a few service providers that like the composable networking use case. You can also have multiple fabric types, with Liqid managing cross-composability.

[image courtesy of Liqid]

Customers?

Liqid have customers across a variety of workload types, including:

  • AI & Deep Learning
    • GPU Scale out
    • Enable GPU Peer-2-Peer at scale
    • GPU Dynamic Reallocation/Sharing
  • Dynamic Cloud
    • CSP, ISP, Private Cloud
    • Flexibility, Resource Utilisation, TCO
    • Bare Metal Cloud Product Offering
  • HPC & Clustering
    • High Performance Computing
    • Lowest Latency Interconnect
    • Enables Massive Scale Out
  • 5G Edge
    • Utilisation & Reduced Foot Print
    • High Performance Edge Compute
    • Flexibility and Ease of Scale Out

Thoughts and Further Reading

I’ve written enthusiastically about composable infrastructure in the past, and it’s an approach to infrastructure that continues to fascinate me. I love the idea of being able to move pools of resources around the DC based on workload requirements. This isn’t just moving VMs to machines that are bigger as required (although I’ve always thought that was cool). This is moving resources to where they need to be. We have the kind of interconnectivity technology available now that means we don’t need to be beholden to “traditional” x86 server architectures. Of course, the success of this approach is in no small part dependent on the maturity of the organisation. There are some workloads that aren’t going to be a good fit with composable infrastructure. And there are going to be some people that aren’t going to be a good fit either. And that’s fine. I don’t think we’re going to see traditional rack mount servers and centralised storage disappear off into the horizon any time soon. But the possibilities that composable infrastructure present to organisations that have possibly struggled in the past with getting the right resources to the right workload at the right time are really interesting.

There are still a small number of companies that are offering composable infrastructure solutions. I think this is in part because it’s viewed as a niche requirement that only certain workloads can benefit from. But as companies like Liqid are demonstrating, the technology is maturing at a rapid pace and, much like our approach to on-premises infrastructure versus the public cloud, I think it’s time that we take a serious look at how this kind of technology can help businesses worry more about their business and less about the resources needed to drive their infrastructure. My friend Max wrote about Liqid last year, and I think it’s worth reading his take if you’re in any way interested in what Liqid are doing.

Dell Announces Dell Technologies Cloud (Platforms and DCaaS)

Disclaimer: I recently attended Dell Technologies World 2019.  My flights, accommodation and conference pass were paid for by Dell Technologies via the Media, Analysts and Influencers program. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Dell Technologies recently announced their Dell Technologies Cloud Platforms and Dell Technologies DCaaS offerings and I thought I’d try and dig in a little more to the announcements here.

 

DTC DCaaS

[image courtesy of Dell Technologies]

Dell Technologies Cloud Data Center-as-a-Service (DTC DCaaS) is all about “bringing public cloud simplicity to your DCs”. So what do you get with this? You get:

  • Data residency and regulatory compliance;
  • Control over critical workloads;
  • Proximity of data with cloud resources;
  • Self-service resource provisioning;
  • Fully managed, maintained and supported; and
  • Increased developer velocity.

VMware Cloud on Dell

At its core, DTC DCaaS is built on VMware Cloud Foundation and Dell EMC VxRail. VMware Cloud on Dell EMC is “cloud infrastructure installed on-premises in your core and edge data centres and consumed as a cloud service”.

[image courtesy of Dell Technologies]

  • Cloud infrastructure delivered as-a-service on-premises
  • Co-engineered and delivered by Dell Technologies; ongoing service fully managed by VMware
  • VMware SDDC including compute, storage and networking
  • Built on VxRail – Dell EMC’s enterprise-grade cloud platform
  • Hybrid cloud control plane to provision and monitor resources
  • Monthly subscription model

How Does It Work?

  • Firstly, you sign into the VMware Cloud service account to create an order. Dell Technologies will then deliver and install your new cloud infrastructure in your core or edge DC location.
  • Next, the system will self-configure and register with VMware Cloud servers, so you can immediately begin provisioning and managing workloads with VMware’s hybrid cloud control plane.

Moving forward, the hardware and software are fully managed, just like your public cloud resources.

Speeds And Feeds 

As I understand it there are two configuration options: DC and Edge. The DC configuration is as follows:

  • 1x 42U APC NetShelter rack
  • 4 – 15x E560 VxRail Nodes
  • 2x S5248F 25GbE ToR Switches, OS10EE
  • 1x S3048 1GbE Management Switch, OS9EE
  • 2x VeloCloud 520
  • 6x Single-phase 30 AMP PDU
  • No UPS option

The Edge Location configuration is as follows:

  • 1x 24U APC NetShelter rack
  • 3 – 6x E560 VxRail Nodes
  • 2x S4128F 10GbE ToR Switches, OS10EE
  • 1x S3048-ON 1GbE Management Switch, OS9EE
  • 2x VeloCloud 520
  • 2x Single-phase 30 AMP PDU
  • 2x UPS with batteries for 30 min hold-up time for 6x E560F

 

Thoughts And Further Reading

I haven’t explained it very clearly in this article, but there are two parts to the announcement. There’s the DTC Platforms announcement, and the DTC DCaaS announcement. You can read a slightly better explanation here, but the Platforms announcement is VCF on VxRail, and VMware Cloud on AWS. DTC DCaaS, on the other hand, is kit delivered into your DC or Edge site and consumed as a managed service.

There was a fair bit of confusion when I spoke to people at the show last week about what this announcement really meant, both for Dell Technologies and for their customers. At the show last year, Dell was bullish on the future of private cloud / on-premises infrastructure. It seems apparent, though, that this kind of announcement is something of an admission that Dell has customers that are demanding a little more activity when it comes to multi-cloud and hybrid cloud solutions.

Dell’s ace in the hole has been (since the EMC merger) the close access to VMware that they’ve enjoyed via the portfolio of companies. It makes sense that they would have a story to tell when it comes to VMware Cloud Foundation and VMware Cloud on AWS. The box slingers at Dell EMC are happy because they can still sell VxRail appliances for use with the DCaaS offering. I’m interested to see just how many customers take up Dell on their vision of seamless integration between on-premises and public cloud workloads.

The public cloud vendors will tell you that eventually (in 5, 10, 20 years?) every workload will be “cloud native”. I think it’s more likely that we’ll always have some workloads that need to remain on-premises. Not necessarily because they have performance requirements that require that level of application locality, but rather because some organisations will have security requirements that will dictate where these workloads live. I think the shelf life of something like VMConAWS is still more limited than some people will admit, but I can see the need for stuff like this.

My only concern is that the DTC story can be complicated to tell in places. I’ve spent some time this week and last digging into this offering, and I’m not sure I’ve explained it terribly well at all. I also wonder how the organisations (Dell EMC and VMware) will work together to offer a cohesive offering from a technology and support perspective. Ultimately, these types of solutions are appealing because companies want to focus on their core business, rather than operating as a poorly resourced IT organisation. But there’s no point entering into these kinds of agreements if the vendor can’t deliver on their vision. “Fully managed services” mean different things to different vendors, so I’ll be interested to see how that plays out in the market.

Dell Technologies Cloud Data Center-as-a-Service, delivered as VMware Cloud on Dell EMC with VxRail, is currently available in beta deployments, with limited customer availability planned for the second half of 2019. You can read the solution overview here.

Brisbane VMUG – May 2019


The May 2019 edition of the Brisbane VMUG meeting will be held on Tuesday 28th May at Fishburners from 4pm – 6pm. It’s sponsored by Cohesity and promises to be a great afternoon.

Here’s the agenda:

  • VMUG Intro
  • Cohesity Presentation: Changing Data Protection from Nightmares to Sweet Dreams
  • vCommunity Presentation – Introduction to Hyper-converged Infrastructure
  • Q&A
  • Light refreshments.

Cohesity have gone to great lengths to make sure this will be a fun and informative session and I’m really looking forward to hearing about how they can make recovery simple. You can find out more information and register for the event here. I hope to see you there. Also, if you’re interested in sponsoring one of these events, please get in touch with me and I can help make it happen.

VMware – Unmounting NFS Datastores From The CLI

This is a short article, but hopefully useful. I did a brief article a while ago linking to some useful articles about using NFS with VMware vSphere. I recently had to do some maintenance on one of the arrays in our lab and I was having trouble unmounting the datastores using the vSphere client. I used some of the commands in this KB article (although I don’t have SIOC enabled) to get the job done instead.

The first step was to identify if any of the volumes were still mounted on the individual host.

[root@esxihost:~] esxcli storage nfs list
Volume Name  Host            Share                 Accessible  Mounted  Read-Only   isPE  Hardware Acceleration
-----------  --------------  --------------------  ----------  -------  ---------  -----  ---------------------
Pav05        10.300.300.105  /nfs/GB000xxxxxbbf97        true     true      false  false  Not Supported
Pav06        10.300.300.106  /nfs/GB000xxxxxbbf93        true     true      false  false  Not Supported
Pav01        10.300.300.101  /nfs/GB000xxxxxbbf95        true     true      false  false  Not Supported

In this case there are three datastores that I haven’t been able to unmount.
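Before removing anything, it’s worth double-checking that no registered VMs are still living on those datastores. This lists every VM registered on the host along with its datastore path:

[root@esxihost:~] vim-cmd vmsvc/getallvms

With that confirmed, the datastores can be removed.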

[root@esxihost:~] esxcli storage nfs remove -v Pav05
[root@esxihost:~] esxcli storage nfs remove -v Pav06
[root@esxihost:~] esxcli storage nfs remove -v Pav01

Now there should be no volumes mounted on the host.

[root@esxihost:~] esxcli storage nfs list
[root@esxihost:~]

See, I told you it would be quick.