Random Short Take #57

Welcome to Random Short Take #57. Only one player has worn 57 in the NBA. So it looks like this particular bit is done. Let’s get random.

  • In the early part of my career I spent a lot of time tuning up old UNIX workstations. I remember that lugging those SGI CRTs from desk to desk was never a whole lot of fun. This article about a Sun Ultra 1 project brought back a hint of nostalgia for those days (but not enough to really get into it again). Hat tip to Scott Lowe for the link.
  • As you get older, you realise that people talk a whole lot of rubbish most of the time. This article calling out audiophiles for the practice was great.
  • This article on the Backblaze blog about one company’s approach to building its streaming media capability on B2 made for interesting reading.
  • DH2i recently announced the general availability of DxEnterprise (DxE) for Containers, enabling cloud-native Microsoft SQL Server container Availability Groups outside and inside Kubernetes.
  • Speaking of press releases, Zerto has announced a few promotions recently. You can keep up with that news here.
  • I’m terrible when it comes to information security, but if you’re looking to get started in the field, this article provides some excellent guidance on what you should be focussing on.
  • We all generally acknowledge that NTP is important, and most of us likely assume that it’s working. But have you been checking? This article from Tony does a good job of outlining some of the reasons you should be paying more attention to NTP (I’ve sketched a quick way to check your own clock after this list).
  • This is likely the most succinct article from John you’ll ever read, and it’s right on the money too.
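
Speaking of NTP, here’s the kind of quick sanity check Tony’s article argues for, sketched in Python. To be clear, this is my own minimal example rather than anything from the article: it assumes the third-party ntplib package is installed, uses pool.ntp.org as the reference, and the half-second threshold is an arbitrary placeholder.

```python
# Minimal NTP sanity check -- a sketch, not a monitoring solution.
# Assumes the third-party 'ntplib' package (pip install ntplib).
import ntplib

def check_clock_offset(server: str = "pool.ntp.org", max_offset: float = 0.5) -> bool:
    """Query an NTP server and compare its clock against the local one."""
    client = ntplib.NTPClient()
    response = client.request(server, version=3)
    print(f"Offset from {server}: {response.offset:+.4f} seconds")
    return abs(response.offset) <= max_offset  # threshold is arbitrary

if __name__ == "__main__":
    if not check_clock_offset():
        print("Clock drift exceeds threshold -- time to look at your NTP config.")
```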

OT – I Voted. Now It’s Over To You

Eric Siebert has opened up voting for the Top vBlog 2018. I’m listed on the vLaunchpad and you can vote for me under storage and independent blog categories as well. There are a bunch of great blogs listed on Eric’s vLaunchpad, so if nothing else you may discover someone you haven’t heard of before, and chances are they’ll have something to say that’s worth checking out. If this stuff seems a bit needy, it is. But it’s also nice to have people actually acknowledging what you’re doing. I’m hoping that people find this blog useful, because it really is a labour of love (random vendor t-shirts notwithstanding).

Oracle Announces Ravello on Oracle Cloud Infrastructure

It seems to be the season for tech company announcements. I was recently briefed by Oracle on their Ravello on Oracle Cloud Infrastructure announcement and thought I’d take the time to provide some coverage.


What’s a Ravello?

Ravello is “an overlay cloud that enables enterprises to run their VMware and KVM workloads with DC-like (L2) networking ‘as-is’ on public cloud without any modifications”. It’s pretty cool stuff, and I’ve covered it briefly in the past. They’ve been around for a while and were acquired by Oracle last year. They held a briefing day for bloggers in early 2017, and Chris Wahl did a comprehensive write-up here.


HVX

The technology components are:

  • A high-performance nested virtualisation engine (or nested hypervisor);
  • A software-defined network; and
  • A storage overlay.

[image courtesy of Oracle]

The management layer manages the technology components, provides the user interface and API for all environment definitions and deployments, and handles image management and monitoring. Ravello in its current iteration is software-based nested virtualisation. This is what you may have used in the past to run ESXi on AWS or GCP.

[image courtesy of Oracle]
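
As an aside, if you’re curious whether your own Linux lab host is set up for this sort of thing, the KVM modules expose a “nested” parameter you can inspect. This is purely an illustrative sketch of the nested virtualisation concept on a stock Linux/KVM box; it has nothing to do with the internals of HVX, which Oracle haven’t published.

```python
# Illustrative only: check whether a Linux/KVM host has nested
# virtualisation enabled. Unrelated to Ravello's proprietary HVX engine.
from pathlib import Path

def nested_virt_enabled() -> bool:
    """True if the loaded KVM module reports nested support."""
    for module in ("kvm_intel", "kvm_amd"):
        param = Path(f"/sys/module/{module}/parameters/nested")
        if param.exists():
            # Intel reports 'Y'/'N'; AMD has historically reported '1'/'0'.
            return param.read_text().strip() in ("Y", "1")
    return False  # no KVM module loaded

if __name__ == "__main__":
    print("Nested virtualisation enabled:", nested_virt_enabled())
```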


Ravello on Oracle Cloud Infrastructure

Ravello on Oracle Cloud Infrastructure (OCI) provides you with the option of leveraging either “hardware-assisted, nested virtualisation” or bare-metal.

[images courtesy of Oracle]

Oracle are excited about the potential performance gains from running Ravello on OCI, stating that there is up to a 14x performance improvement over running Ravello on other cloud services. The key here is that they’ve developed extensions that integrate directly with Oracle’s Cloud platform. Makes sense when you consider they purchased Ravello for reasons.


Why Would You?

So why would you use Ravello? It provides enterprises with the ability to “take any VMware based multi-VM application and run it on public cloud without making any changes”. You don’t have to worry about:

  • Re-platforming – You normally can’t run VMware VMs on public clouds.
  • P2V Conversions – Your physical hosts can’t go to the public cloud.
  • Re-networking – Layer 2? Nope.
  • Re-configuration – What about all of your networking and security appliances?

This is all hard to do, and points to the need to re-write your applications and re-architect your platforms. That sounds expensive and time-consuming, and there are other things people would rather be doing.


Conclusion and Further Reading

I am absolutely an advocate for architecting applications to run natively on cloud infrastructure. I don’t think that lift and shift is a sustainable approach to cloud adoption by any stretch. That said, I’ve worked in plenty of large enterprises running applications that are poorly understood and nonetheless critical to the business. Yes, it’s silly. But if you’ve spent any time in any enterprise you’ll start to realise that silly is quite a common modus operandi. Couple that with increasing pressure on CxOs to reduce their on-premises footprint, and you’ll see that this technology is something of a life vest for enterprises struggling to make the leap from on-premises to public cloud with minimal modification to their existing applications.

I don’t know what this service will cost you, so I can’t tell you whether it provides value for money. That’s something you’re better off speaking to Oracle about. Sometimes return on investment is hard to judge unless you’re up against the wall with no alternatives. I’ll always say you should re-write your apps rather than lift and shift, but sometimes you don’t have the choice. If you’re in that position, you should consider Ravello’s offering. You can sign up for a free trial here. You can read Oracle’s post on the news here, and Tim’s insights here.

Uila Are Using Your Network (And Some Smart Analytics) To Understand What’s Really Going On

I frequently get briefing invitations from various companies focused on storage and data centre infrastructure. Sometimes their product isn’t directly related to things I might write about, but I like to take these briefings when I can because it gives me something new to learn. Whilst infrastructure and application monitoring plays a big part in the data centre, it’s not something I write about with any great frequency. All this is a long way of saying that I took a briefing with Uila recently and was pleasantly surprised.


What’s a Uila Then?

Pronounced “wee-luh”, Uila is focused on full-stack visibility. They aim to provide you with the ability to:

  • Troubleshoot complex issues to root cause quickly
  • Monitor end user and application performance, availability, and infrastructure health
  • Perform planning, optimisation and issue prevention

The cool thing is they don’t just focus on virtualised workloads.

[image courtesy of Uila]


Application and Network Intelligence

The key to Uila is the network-centric approach to monitoring. This is done primarily via the Virtual Smart Traffic Taps (vST):

  • Distributed VMs (vSTs) that sniff packets from the (D)vSwitch
  • Deep packet inspection
  • Network performance and flow analysis
  • Identification of 4,000+ applications and metadata analysis
  • Application transaction response time & volume tracking
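
Uila’s vST implementation is proprietary, so the sketch below is just my attempt to illustrate the general idea of a passive tap with application classification, using the third-party scapy package. The interface name and the port-to-application map are placeholders, and real deep packet inspection matches payload signatures for thousands of protocols rather than guessing from ports.

```python
# Conceptual sketch of a passive network tap -- not Uila's actual vST.
# Assumes the third-party 'scapy' package (pip install scapy) and
# sufficient privileges to capture; 'eth0' is a placeholder interface.
from scapy.all import IP, TCP, sniff

PORT_MAP = {80: "http", 443: "https", 1433: "mssql"}  # toy example only

def classify(pkt):
    """Crude port-based 'application identification' for each TCP packet."""
    if IP in pkt and TCP in pkt:
        app = PORT_MAP.get(min(pkt[TCP].sport, pkt[TCP].dport), "unknown")
        print(f"{pkt[IP].src} -> {pkt[IP].dst} [{app}] len={len(pkt)}")

# Capture 20 TCP packets and classify them as they arrive.
sniff(iface="eth0", filter="tcp", prn=classify, count=20)
```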


Compute & Storage & OS Process Intelligence

The Virtual Information Controller (vIC) takes care of all the integration pieces, offering:

  • API integration with cloud virtualisation systems;
  • SNMP integration with network switches;
  • SSH & WMI API integration with application servers; and
  • Service availability monitoring via active tests.
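
As a rough illustration of one of those integration styles, here’s what polling a switch over SNMP might look like with the third-party pysnmp package. This is my own sketch, not Uila’s code: the community string, switch address and OID are all placeholders.

```python
# Sketch of SNMP polling (one of the integration styles listed above).
# Assumes the third-party 'pysnmp' package (pip install pysnmp); the
# community string and switch address are placeholders.
from pysnmp.hlapi import (
    CommunityData, ContextData, ObjectIdentity, ObjectType,
    SnmpEngine, UdpTransportTarget, getCmd,
)

error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData("public", mpModel=1),       # SNMPv2c community
        UdpTransportTarget(("192.0.2.1", 161)),   # placeholder switch address
        ContextData(),
        ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
    )
)

if error_indication:
    print(f"Poll failed: {error_indication}")
else:
    for name, value in var_binds:
        print(f"{name} = {value}")
```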


Management & Analytics System

There’s also a Management & Analytics System, available as either a SaaS offering from Uila or on-premises. It offers:

  • Scalability and redundancy via Hadoop/HBase;
  • Full stack correlation for root cause identification; and
  • An analytics visualisation engine.


IT is Hard

IT operations can be hard at the best of times. At any given time in all but the most mature infrastructure organisations, something is on fire. Sometimes literally. Understanding where to look for the problems is difficult. It’s also difficult to identify the root cause of these issues in a fast and efficient manner. The first reaction is often to treat the symptom, not the cause. Another thing I’ve noticed is that the various silos of support staff (storage, virtualisation, OS support, network, security, etc.) all like to use their own tools to do their troubleshooting. I once worked in a place that had deployed 4 or 5 monitoring platforms in various states of usefulness. When there was a problem it took hours just to get everyone to look at the issue in the same place.

As much as I’m reluctant to trust a lot of what networking folks say, I think Uila’s approach to monitoring and root cause analysis is a smart one. This isn’t the nineties; networks are everywhere in your enterprise nowadays. Why not leverage that pervasiveness and get a real feel for what exactly is going on in your environment? But it’s not just about collecting data, it’s about what you do with that data. And this is where I think Uila shines, based on the demonstration I saw and what I’ve read thus far. Having a bunch of data at hand is great, but oftentimes we need to get to the root cause of a problem to understand what’s really happening (and how to fix it). Uila are heavily focused on making this a quick and easy process. I’m looking forward to looking at their offering some more in the future (when I get my act together and put the lab back into production).

Uila presented at Tech Field Day 13, and you can see video of their presentations here, and read Thom Greene’s thoughts here. You can also read more about Uila’s architecture here.

OT – You Vote Now

Eric Siebert has opened up voting for the Top vBlog 2017. I’m listed on the vLaunchpad under the top 100, and you can vote for me under storage and independent blog categories as well. I climbed the heady heights to number 78 last year. So thanks to my mother for voting for me. You can go directly to the voting survey here. There are a bunch of great blogs listed on Eric’s vLaunchpad, so if nothing else you may discover someone you haven’t heard of before, and chances are they’ll have something to say that’s worth hearing. Or reading. Look, you know what I mean. If this stuff seems a bit needy, it is. But it’s also nice to have people actually acknowledging what you’re doing. This all means nothing without your validation.

2017 – The New What Next

I’m not terribly good at predicting the future, particularly when it comes to technology trends. I generally prefer to leave that kind of punditry to journalists who don’t mind putting it out there and are happy to be proven wrong on the internet time and again. So why do a post referencing a great Hot Water Music album? Well, one of the PR companies I deal with regularly sent through a few quotes from companies that I’m generally interested in talking about. And let’s face it, I haven’t had a lot to say in the last little while, due to day job commitments and the general malaise I seem to suffer from during the onset of summer in Brisbane (no, I really don’t understand the concept of Christmas sweaters in the same way my friends in the Northern Hemisphere do).

Long intro for a short post? Yes. So I’ll get to the point. Here’s one of the quotes I was sent. “As concerns of downtime grow more acute in companies around the globe – and the funds for secondary data centers shrink – companies will be turning to DRaaS. While it’s been readily available for years, the true apex of adoption will hit in 2017-2018, as prices continue to drop and organizations become more risk-averse. There are exceptional technologies out there that can solve the business continuity problem for very little money in a very short time.” This was from Justin Giardina, CTO of iland. I was fortunate enough to meet Justin at the Nimble Storage Predictive Flash launch event in February this year. Justin is a switched-on guy, and while I don’t want to give his company too much air time (they compete in places with my employer), I think he’s bang on the money with his assessment of the state of play with DR and market appetite for DR as a Service.

I think there are a few things at play here, and it’s not all about technology (because it rarely is). The CxO’s fascination with cloud has been (rightly or wrongly) fiscally focused, with a lot of my customers thinking that public cloud could really help reduce their operating costs. I don’t want to go too much into the accuracy of that idea, but I know that cost has been front and centre for a number of customers for some time now. Five years ago I was working in a conservative environment where we had two production DCs and a third site dedicated to data protection infrastructure. They’ve since reduced that to one production site and are leveraging outsourced providers for both DR and data protection capabilities. The workload hasn’t changed significantly, nor has the requirement to have the data protected and recoverable.

Rightly or wrongly, the argument for appropriate disaster recovery infrastructure seems to be a difficult one to make in organisations, even those that have been exposed to disaster and have (through sheer dumb luck) survived the ordeal. I don’t know why it is so difficult for people to understand that good DR and data protection is worth it. I suppose it’s the same as me taking a calculated risk on my insurance every year: paying a lower annual rate and gambling that I won’t have to make a claim and be exposed to higher premiums.

It’s not just about cost though. I’ve spoken to plenty of people who just don’t know what they’re doing when it comes to DR and data protection. And some of these people have been put in the tough position of having lost some data, or had a heck of a time recovering after a significant equipment failure. In the same way that I have someone come and look at my pool pump when water is coming out of the wrong bit, these companies are keen to get people in who know what they’re doing. If you think about it, it’s a smart move. While it can be hard to admit, sometimes knowing your limitations is actually a good thing.

It’s not that we don’t have the technology, or the facilities (even in BrisVegas), to do DR and data protection pretty well nowadays. In most cases it’s easier and more reliable than it ever was. But, like on-premises email services, it seems to be a service that people are happy to make someone else’s problem. I don’t have an issue with that as a concept, as long as you understand that you’re only outsourcing some technology and processes; you’re not magically doing away with the risk (and the fallout) when something goes pear-shaped. If you’re a small business without a dedicated team of people to look after your stuff, it makes a lot of sense. Even the bigger players can benefit from making it someone else’s thing to worry about. Just make sure you know what you’re getting into.

Getting back to the original premise of this post, I agree with Justin that we’re at a tipping point regarding DRaaS adoption, and I think 2017 is going to be really interesting in terms of how companies make use of this technology to protect their assets and keep costs under control.

Tech Field Day – I’ll Be At TFD Extra at VMworld US 2016

[Tech Field Day Extra at VMworld logo]

Sure, the title is a bit of a mouthful. But I think it gets the point across. I mentioned recently that I’ll be heading to the US in less than a week for VMworld. This is a quick post to say that I’ll also have the opportunity to participate in my first Tech Field Day Extra event while at VMworld. If you haven’t heard of the very excellent Tech Field Day events, you should check them out. You can also check back on the TFDx website during the event as there’ll likely be video streaming along with updated links to additional content. You can also see the list of delegates and event-related articles that they’ve published.

I think it’s a great line-up of companies this time around, with some I’m familiar with and some not so much. I’m attending the Tuesday session and will be hearing from ClearSky Storage, NooBaa and Paessler.

[TFDx Tuesday session line-up]

It should be a lot of fun!

OT – Top 78

Eric Siebert recently published (okay, fine, it was three weeks ago) the full results of the Top vBlog voting. I was pleased to find I’d made a jump up from last year.

[Top vBlog 2016 results]

I’ve previously changed my tune on asking for votes in this competition, not because I don’t think it’s a good bit of fun, but because I think there are a bunch of other bloggers you should be voting for. A few people like to huff and puff about it being a popularity contest, but if nothing else I’ve found these types of lists (and Eric’s site in general) to be extremely useful when tracking down links to things on the internet that I know I need but can’t remember how I googled them in the first place. A lot of work goes into the site, so thanks Eric, and please keep it up! Thanks also to anyone who did throw a vote my way; I do actually appreciate it.

VMware vSphere Next Beta Applications Are Now Open

VMware recently announced that applications for the next VMware vSphere Beta Program are now open. People wishing to participate in the program can now indicate their interest by filling out this simple form. The vSphere team will grant access to the program to selected candidates in stages. This vSphere Beta Program leverages a private Beta community to download software and share information. There will be discussion forums, webinars, and service requests to enable you to share your feedback with VMware.

So what’s involved? Participants are expected to:

  • Accept the Master Software Beta Test Agreement prior to visiting the Private Beta Community;
  • Install beta software within 3 days of receiving access to the beta product;
  • Provide feedback within the first 4 weeks of the beta program;
  • Submit Support Requests for bugs, issues and feature requests;
  • Complete surveys and beta test assignments; and
  • Participate in the private beta discussion forum and conference calls.

All testing is free-form and you’re encouraged to use the software in ways that interest you. This will provide VMware with valuable insight into how you use vSphere in real-world conditions and with real-world test cases.

Why participate? Some of the many reasons to participate include:

  • Receiving early access to the vSphere Beta products;
  • Interacting with the vSphere Beta team consisting of Product Managers, Engineers, Technical Support, and Technical Writers;
  • Providing direct input on product functionality, configurability, usability, and performance;
  • Providing feedback influencing future products, training, documentation, and services; and
  • Collaborating with other participants, learning about their use cases, and sharing advice and learnings.

I’m a big fan of public beta testing. While we’re not all experts on how things should work, it’s a great opportunity to at least have your say on how you think vSphere should work. The folks in vSphere product management may not be able to incorporate every idea you have, but you’ll at least have an opportunity to contribute feedback and give VMware some insight into how their product is being used in the wild. In my opinion this is extremely valuable for both VMware and us, the consumers of their product. Plus, you’ll get a sneak peek at what’s coming up.

So, if you’re good with NDAs and have some time to devote to testing next-generation vSphere, this is the program for you. Head over to the website and check it out.

Ravello – Basics – Deploying an ESXi instance

I’ve been using Ravello a bit recently, thanks primarily to their kind offer of free time for vExperts. I thought it would be worthwhile doing a few posts on what you need to do to get started. While this information is available via a number of sources already, I thought I’d update it a little to reflect the steps required when using the latest version of the dashboard and ESXi 6. Documentation is also a good way for me to learn things, and it’s my blog so I can afford to be self-indulgent.

In any case, the original steps I followed are here. The article I did is available here. Justin Warren did a nice series on using Ravello, and his post on “How To Import OVA/OVF Into Ravello” was particularly useful. Emad Younis also has an excellent article on deploying the vCenter Server Appliance 6 on Ravello – you can read it here.

I like what Ravello does, so much so that I put a little badge on my blog. And I think there’s a crapload of cool use cases for this technology. If you’re a vExpert and not taking advantage of Ravello’s offer – what’s wrong with you? Get on there and check it out.