At VMware Explore 2022 in the US, VMware announced a number of new offerings for VMware Cloud on AWS, including a new bare-metal instance type: the I4i.metal. You can read the official blog post here. I thought it would be useful to provide some high-level details and cover some of the caveats that punters should be aware of.
By The Numbers
What do you get from a specifications perspective?
Can I use custom core counts? Yep — the I4i.metal supports physical custom core counts of 8, 16, 24, 30, 36, 48, and 64.
Is there stretched cluster support? Yes, you can deploy these in stretched clusters (of the same host type).
Can I do in-cluster conversions? Yes, read more about that here.
Other Considerations
Why does the sizer say 20 TiB usable for the I4i? Around 7 TiB is currently consumed by the cache tier, so you’ll see different numbers in the sizer. Your usable storage will also be affected by the usual constraints around failures to tolerate (FTT) and RAID settings.
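To give a feel for how those policy settings eat into raw capacity, here’s a rough back-of-the-envelope sketch (not the official sizer) using the standard vSAN space-efficiency multipliers for mirroring and erasure coding:

```python
# Rough sketch (not an official sizer): estimate usable vSAN capacity
# after storage-policy overheads. The multipliers are the standard
# vSAN figures for each policy; raw_tib is whatever raw capacity
# remains after the cache tier is accounted for.

OVERHEAD = {
    ("RAID-1", 1): 2.0,   # FTT=1 mirroring: 2x raw per usable TiB
    ("RAID-1", 2): 3.0,   # FTT=2 mirroring: 3x
    ("RAID-5", 1): 1.33,  # FTT=1 erasure coding (3+1): ~1.33x
    ("RAID-6", 2): 1.5,   # FTT=2 erasure coding (4+2): 1.5x
}

def usable_tib(raw_tib: float, raid: str, ftt: int) -> float:
    """Return estimated usable TiB for a given storage policy."""
    return round(raw_tib / OVERHEAD[(raid, ftt)], 2)

# e.g. 20 TiB raw under each policy
for (raid, ftt) in OVERHEAD:
    print(f"{raid} FTT={ftt}: {usable_tib(20.0, raid, ftt)} TiB usable")
```

So the same 20 TiB of raw capacity lands anywhere between roughly 6.7 and 15 TiB usable depending on the policy you pick — which is why sizer numbers rarely match the spec sheet.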
Region Support?
The I4i.metal instances will be available in the following Regions (and Availability Zones):
US East (N. Virginia) – use1-az1, use1-az2, use1-az4, use1-az5, use1-az6
US West (Oregon) – usw2-az1, usw2-az2, usw2-az3, usw2-az4
US West (N. California) – usw1-az1, usw1-az3
US East (Ohio) – use2-az1, use2-az2, use2-az3
Canada (Central) – cac1-az1, cac1-az2
Europe (Ireland) – euw1-az1, euw1-az2, euw1-az3
Europe (London) – euw2-az1, euw2-az2, euw2-az3
Europe (Frankfurt) – euc1-az1, euc1-az2, euc1-az3
Europe (Paris) – euw3-az1, euw3-az2, euw3-az3
Asia Pacific (Singapore) – apse1-az1, apse1-az2, apse1-az3
Asia Pacific (Sydney) – apse2-az1, apse2-az2, apse2-az3
Asia Pacific (Tokyo) – apne1-az1, apne1-az2, apne1-az4
Other Regions will have availability over the coming months.
Thoughts
The i3.metal isn’t going anywhere, but it’s nice to have an option that supports more cores and a bit more storage and RAM. The I4i.metal is great for SQL workloads and VDI deployments where core count can really make a difference. Coupled with the addition of supplemental storage via VMware Cloud Flex Storage and Amazon FSx for NetApp ONTAP, there are some great options available to deal with the variety of workloads customers are looking to deploy on VMware Cloud on AWS.
On another note, if you want to hear more about all the cloudy news from VMware Explore US, I’ll be presenting at the Brisbane VMUG meeting on October 12th, and my colleague Ray will be doing something in Sydney on October 19th. If you’re in the area, come along.
The October 2022 edition of the Brisbane VMUG meeting will be held on Wednesday 12th October at the Cube (QUT) from 5pm – 7pm. It’s sponsored by NetApp and promises to be a great evening.
Two’s Company, Three’s a Cloud – NetApp, VMware and AWS
NetApp has had a strategic relationship with VMware for over 20 years, and with AWS for over 10 years. Recently at VMware Explore we made a significant announcement about VMC support for NFS datastores provided by the Amazon FSx for NetApp ONTAP service.
Come and learn about this exciting announcement and more on the benefits of NetApp with VMware Cloud. We will discuss architecture concepts, use cases and cover topics such as migration, data protection and disaster recovery as well as Hybrid Cloud configurations.
There will be a lucky door prize as well as a prize for best question on the night. Looking forward to seeing you there!
Wade Juppenlatz – Specialist Systems Engineer – QLD/NT
Chris (Gonzo) Gondek – Partner Technical Lead QLD/NT
PIZZA AND NETWORKING BREAK!
This will be followed by:
All the News from VMware Explore – (without the jet lag)
We will cover a variety of cloudy announcements from VMware Explore, including:
vSphere 8
vSAN 8
VMware Cloud on AWS
VMware Cloud Flex Storage
GCVE, OCVS, AVS
Cloud Universal
VMware Ransomware Recovery for Cloud DR
Dan Frith – Staff Solutions Architect – VMware Cloud on AWS, VMware
And we will be finishing off with:
Preparing for VMware Certifications
With the increase in position requirements over the last few years, certifications help you demonstrate your skills and move you a step closer to landing better jobs. In this Community Session we will help you understand how to prepare for a VMware certification exam, along with some useful tips you can use during the exam.
We will talk about:
Different types of exams
How to schedule an exam
Where to get material to study
Lessons learned from the field per type of exam
Francisco Fernandez Cardarelli – Senior Consultant (4 x VCIX)
Soft drinks and vBeers will be available throughout the evening! We look forward to seeing you there!
Doors open at 5pm. Please make your way to The Atrium, on Level 6.
You can find out more information and register for the event here. I hope to see you there. Also, if you’re interested in sponsoring one of these events, please get in touch with me and I can help make it happen.
Welcome to Random Short Take #76. Summer’s almost here. Let’s get random.
The nice folks at StorPool have announced StorPool Storage v20. I was lucky enough to catch up with Boyan and the team recently, and they told me about their work on supporting NVMe/TCP, StorPool on AWS, and NFS File Storage. It’s great stuff and worth checking out.
Long term retention – all the kids are doing it, but there are some things you need to think about. Preston has posted a great article on it here.
Tony is finally back blogging, with this succinct piece on DHCP with NSX VLAN segments.
Lots going on at VMware Explore 2022 in San Francisco this week. Danny Alvarez has put together a series of links covering the major announcements. I hope to get time to cover the major VMware Cloud on AWS news in the next few weeks.
VMworld is on this week. I still find the virtual format (and timezones) challenging, and I miss the hallway track and the jet lag. There’s nonetheless some good news coming out of the event. One thing that was announced prior to the event was Tanzu Community Edition. William Lam talks more about that here.
Speaking of VMworld news, Viktor provided a great summary on the various “projects” being announced. You can read more here.
I’ve been a Mac user for a long time, and there’s stuff I’m learning every week via Howard Oakley’s blog. Check out this article covering the Recovery Partition. While I’m at it, this presentation he did on Time Machine is also pretty ace.
Facebook had a little problem this week, and the Cloudflare folks have provided a decent overview of what happened. As someone who works for a service provider, this kind of stuff makes me twitchy.
Fibre Channel? Cloud? Chalk and cheese? Maybe. Read Chin-Fah’s article for some more insights. Personally, I miss working with FC, but I don’t miss the arguing I had to do with systems and networks people when it came to the correct feeding and watering of FC environments.
Remote working has been a challenge for many organisations, with some managers not understanding that their workers weren’t just watching streaming video all day, but actually being more productive. Not everything needs to be a video call, however, and this post / presentation has a lot of great tips on what does and doesn’t work with distributed teams.
I’ve had to ask this question before. And Jase has apparently had to answer it too, so he’s posted an article on vSAN and external storage here.
This is the best response to a trio of questions I’ve read in some time.
Welcome to Random Short Take #44. A few players have worn 44 in the NBA, including Danny Ainge and Pistol Pete, but my favourite from this list is Keith Van Horn. A nice shooting touch and strong long sock game. Let’s get random.
ATMs are just computers built to give you money. And it’s scary to think of the platforms that are used to deliver that functionality. El Reg pointed out a recent problem with one spotted in the wild in Ngunnawal.
Speaking of computing at the edge, I found this piece from Keith interesting. As much as things change they stay the same. I think he’s spot on when he says “[m]anufacturers and technology companies must come together with modular solutions that enable software upgrades for these assets’ lives”. We need to be demanding more from the industry when it comes to some of this stuff.
I enjoyed this article from Preston about the difference between bunkers and vaults – worth checking out even if you’re not a Dell EMC customer.
Cloud – it can be tough to know which way to go. And a whole bunch of people have an interest in you using their particular solution. This article from Chris Evans was particularly insightful.
DH2i has launched DxOdyssey for IoT – you can read more about that here.
Speaking of news, Retrospect recently announced Backup 17.5 too. There are some cloud improvements, and support for the macOS Big Sur beta.
It’s the 30th anniversary of Vanilla Ice’s “Ice Ice Baby“, and like me you were probably looking for a comprehensive retrospective on Vanilla Ice’s career. Look no further than this article over at The Ringer.
Storage Field Day 18 was a little while ago, but that doesn’t mean that the things that were presented there are no longer of interest. Stephen Foskett wrote a great piece on IBM’s approach to data protection with Spectrum Protect Plus that’s worth a read.
Speaking of data protection, it’s not just for big computers. Preston wrote a great article on the iOS recovery process that you can read here. As someone who had to recently recover my phone, I agree entirely with the idea that re-downloading apps from the app store is not a recovery process.
NetApp were recently named a leader in the Gartner Magic Quadrant for Primary Storage. Say what you will about the MQ, a lot of folks are still reading this report and using it to help drive their decision-making activities. You can grab a copy of the report from NetApp here. Speaking of NetApp, I’m happy to announce that I’m now a member of the NetApp A-Team. I’m looking forward to doing a lot more with NetApp in terms of both my day job and the blog.
Disclaimer: I recently attended VMworld 2019 – US. My flights and accommodation were paid for by Digital Sense, and VMware provided me with a free pass to the conference and various bits of swag. There is no requirement for me to blog about any of the content presented and I am not compensated by VMware for my time at the event. Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.
A quick post to provide some closing thoughts on VMworld US 2019 and link to the posts I did during the event. Not in that order. I’ll add to this as I come across interesting posts from other people too.
This was my fourth VMworld US event, and I had a lot of fun. I’d like to thank all the people who helped me out with getting there, the people who stopped and chatted to me at the event, everyone participating in the vCommunity, and VMware for putting on a great show. I’m looking forward to (hopefully) getting along to it again in 2020 (August 30 – September 3).
As part of my attendance at VMworld US 2019 I had the opportunity to attend Tech Field Day Extra sessions. You can view the videos from the Apstra session here, and download my rough notes from here.
More Than Meets The Eye
A lot of people like to talk about how organisations need to undertake “digital transformation”. One of the keys to success with this kind of transformation comes in the form of infrastructure transformation. The idea is that, if you’re doing it right, you can improve:
Business agility;
Application reliability; and
Cost control.
Apstra noted that “a lot of organisations start with choosing their hardware and all other choices are derived from that choice, including the software”. As a result of this, you’re constrained by the software you’ve bought from that vendor. The idea is you need to focus on business-oriented outcomes, which are then used to determine the technical direction you’ll need to take to achieve those outcomes.
But even if you’ve managed to get yourself a platform that helps you achieve the outcomes you’re after, if you don’t have an appropriate amount of automation and visibility in your environment, you’re going to struggle with slow deployments. You’ll likely also find that a lack of efficient automation can lead to:
Physical and logical topologies that are decoupled but dependent;
Error-prone deployments; and
No end to end validation.
When you’re in that situation, you’ll invariably find that you’ll struggle with reduced operational agility and a lack of visibility. This makes it hard to troubleshoot issues in the field, and people generally feel sad (I imagine).
Intent, Is That What You Mean?
So how can Apstra help? Will they magically make everything work the way you want it to? Not necessarily. There are a bunch of cool features available within the Apstra solution, but you need to do some work up front to understand what you’re trying to achieve in the first place. But once you have the framework in place, you can do some neat stuff, using AOS to accelerate initial and day 2 fabric configuration. You can, for example, deploy new racks and L2 / L3 fabric VLANs at scale in a few clicks:
Streamline new rack design and deployment;
Automate fabric VLAN deployment;
Closed-loop validation (endpoint configuration, EVPN routes expectations); and
Include jumbo frame configuration for overlay networks.
The idea behind intent-based networking (IBN) is fairly straightforward:
Collect intent;
Expose intent;
Validate; and
Remediate.
You can read a little more about IBN here. There’s also a white paper on intent-based DCs available here.
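The four-step loop above is easier to picture as code. This is a purely illustrative sketch of the closed-loop idea — the function names and the VLAN check are my own invention, not the AOS API:

```python
# Hypothetical sketch of the IBN closed loop: collect intent,
# observe actual state, validate for drift, remediate.
# All names here are illustrative assumptions, not Apstra's API.

def collect_intent() -> dict:
    # Desired state, e.g. "VLAN 200 present on every leaf switch"
    return {"vlan": 200, "switches": ["leaf1", "leaf2"]}

def observe(intent: dict) -> dict:
    # Poll the fabric for actual state (stubbed here: leaf2 has drifted)
    return {"leaf1": {200}, "leaf2": set()}

def validate(intent: dict, state: dict) -> list:
    # Return the switches whose actual state doesn't match intent
    return [sw for sw in intent["switches"] if intent["vlan"] not in state[sw]]

def remediate(intent: dict, drifted: list) -> None:
    for sw in drifted:
        print(f"re-applying VLAN {intent['vlan']} on {sw}")

intent = collect_intent()
drifted = validate(intent, observe(intent))
remediate(intent, drifted)  # only the drifted switch gets touched
```

The point of the loop is that validation runs continuously against the declared intent, so remediation is targeted at drift rather than being a blanket re-push of configuration.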
Thoughts
I don’t deal with complicated network deployments on a daily basis, but I do know some people who play that role on TV. Apstra delivered a really interesting session that had me thinking about the effectiveness of software solutions to control infrastructure architecture at scale. There’s been a lot of talk during conference keynotes about the importance of digital transformation in the enterprise and how we all need to be leveraging software-defined widgets to make our lives better. I’m all for widgets making life easier, but they’re only going to be able to do that when you’ve done a bit of work to understand what it is you’re trying to do with all of this technology. The thing that struck me about Apstra is that they seem to understand that, while they’re selling some magic software, it’s not going to be any good to you if you haven’t done some work to prepare yourself for it.
I rabbit on a lot about how technology organisations struggle to understand what “the business” is trying to achieve. This isn’t a one-way problem either, and the business frequently struggles with the idea that technology seems to be a constant drain on an organisation’s finances without necessarily adding value to the business. In most cases though, technology is doing some really cool stuff in the background to make businesses run better, and more efficiently. Apstra is a good example of using technology to deliver reliable services to the business. Whether you’re an enterprise networker, or toiling away at a cloud service provider, I recommend checking out how Apstra can make things easier when it comes to keeping your network under control.
Here’s a semi-regular listicle of random news items that might be of some interest.
This is a great article covering QoS enhancements in Purity 5.3. Speaking of Pure Storage, I’m looking forward to attending Pure//Accelerate in Austin in the next few weeks. I’ll be participating in a Storage Field Day Exclusive event as well – you can find more details on that here.
My friends at Scale Computing have entered into an OEM agreement with Acronis to add more data protection and DR capabilities to the HC3 platform. You can read more about that here.
Commvault just acquired Hedvig for a pretty penny. It will be interesting to see how they bring them into the fold. This article from Max made for interesting reading.
DH2i are presenting a webinar on September 10th at 11am Pacific, “On the Road Again – How to Secure Your Network for Remote User Access”. I’ve spoken to the people at DH2i in the past and they’re doing some really interesting stuff. If your timezone lines up with this, check it out.
I caught up with Zerto while I was at VMworld US last week, and they talked to me about their VAIO announcement. Justin Paul did a good job of summarising it here.
Speaking of VMworld, William has posted links to the session videos – check it out here.
As part of my attendance at VMworld US 2019 I had the opportunity to attend Tech Field Day Extra sessions. You can view the videos from the NetApp session here, and download my rough notes from here.
Enhanced DC Workloads
In The Beginning There Were Workloads
Andy Banta started his presentation by talking about the evolution of the data centre (DC). The first-generation DCs were resource-constrained. As long as something was limiting (disk, CPU, memory), things didn’t get done. The later first-generation DCs consisted of standalone hosts running applications. Andy described hosts that could run multiple workloads as “2nd-generation DCs”. The evolution of these 2nd-generation DCs was virtualisation: now you could run multiple applications and operating systems on one host.
The DC though, is still all about compute, memory, throughput, and capacity. As Andy described it, “the DC is full of boxes”.
[image courtesy of NetApp]
But There’s Cool Stuff Happening
Things are changing in the DC though, primarily thanks to a few shifts in key technologies that have developed in recent times.
Persistent Memory
Persistent memory has become more mainstream, and application vendors are developing solutions that can leverage this technology effectively. There’s also technology out there that will let you slice this stuff up and share it around, just like you would a pizza. And it’s resilient too, so if you drop your pizza, there’ll be some still left on your plate (or someone else’s plate). Okay I’ll stop with the tortured analogy.
Microvisors
Microvisors are being deployed more commonly in the DC (and particularly at the edge). What’s a microvisor? “Imagine a Hypervisor stripped down to only what you need to run modern Linux based containers”. The advent of the microvisor is leading to different types of workloads (and hardware) popping up in racks where they may not have previously been found.
Specialised Cores on Demand
You can now also access specialised cores on demand from most service providers. You need access to some GPUs to get some particular work done? No problem. There are a bunch of different ways you can slice this stuff up, and everyone’s hip to the possibility that you might only need them for a short time, but you can pay a consumption fee for however long that time will be.
HPC
Even High Performance Compute (HPC) is doing stuff with new technology (in this case NVMeoF). What kinds of workloads?
Banking – low-latency transactions
Fluid dynamics – lots of data being processed quickly in a parallel stream
Medical and nuclear research
Thoughts
My favourite quote from Andy was “NVMe is grafting flesh back on to the skeleton of fibre channel”. He (and most of us in the room) are of the belief that FC (in its current incarnation at least) is dead. Andy went on to say that “[i]t’s out there for high margin vendors” and “[t]he more you can run on commodity hardware, the better off you are”.
The DC is changing, and not just in the sense that a lot of organisations aren’t running their own DCs any more, but also in the sense that the types of workloads in the DC (and their form factor) are a lot different to those we’re used to running in first-generation DC deployments.
Where does NetApp fit in all of this? The nice thing about having someone like Andy speak on their behalf is that you’re not going to get a product pitch. Andy has been around for a long time, and has seen a lot of different stuff. What he can tell you, though, is that NetApp have started developing (or selling) technology that can accommodate these newer workloads and newer DC deployments. NetApp will be happy to sell you storage that runs over IP, but they can also help you out with compute workloads (in the core and edge), and show you how to run Kubernetes across your estate.
The DC isn’t just full of apps running on hosts accessing storage any more – there’s a lot more to it than that. Workload diversity is becoming more and more common, and it’s going to be really interesting to see where it’s at in ten years from now.