VMUG – Getting Started With The New VMUG Website

I wrote previously about the next Brisbane VMUG meeting coming up in February. Registration numbers have been very low. Some of that is no doubt due to summer holidays and a lack of motivation to think about “work-related” things. Some of it is due to local folk being interested in other events, or not having time to go to an event on a Tuesday afternoon. And some of it might be that they aren’t into the topic. However, some of it might also be due to the fact that people are finding the new VMUG website a little confusing to navigate. At least, I hope that’s why. In any case, I’ve put together a brief article that shows you, in 22 or so simple steps, how to sign up for a VMUG account and register for a local meeting. Now you have no excuse. You can check it out here.

Tintri ChatOps – Because All I Do Is Hang Out On Slack Anyway


I’m a bit behind the times with my tech news, but Tintri sent me a link to a video they did demonstrating their new “ChatOps” feature. I was going to make fun of it, but it’s actually pretty neat. If you’ve used Slack before, you probably know it’s got a fairly extensible engine that you can use to do a bunch of cool things. With ChatOps, you can send commands to your Tintri arrays from a chat channel and things get done. Not only does it do things for you, it does them in a sensible, efficient fashion as well. And since I spend a lot of time on Slack in any case, this feature just might take off.
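If you haven’t seen this kind of thing before, here’s a rough idea of the moving parts. To be clear, this is a generic sketch of a Slack slash command handler rather than Tintri’s actual integration – the /storage command and the snapshot_vm helper are entirely made up – but it shows how little glue is needed to drive infrastructure from a chat channel.

```python
# A toy Slack slash command handler - not Tintri's implementation.
# Assumes a slash command (e.g. /storage) has been configured in Slack to POST here.
from flask import Flask, request, jsonify

app = Flask(__name__)

def snapshot_vm(vm_name):
    """Placeholder for a call to the storage array's API."""
    # In a real bot this would authenticate to the array and request the snapshot.
    return f"Snapshot of {vm_name} requested."

@app.route("/slack/storage", methods=["POST"])
def storage_command():
    # Slack slash commands arrive as form-encoded POSTs with a 'text' field.
    args = request.form.get("text", "").split()
    if len(args) == 2 and args[0] == "snapshot":
        message = snapshot_vm(args[1])
    else:
        message = "Usage: /storage snapshot <vm-name>"
    # Returning JSON like this posts the reply back into the channel.
    return jsonify({"response_type": "in_channel", "text": message})

if __name__ == "__main__":
    app.run(port=3000)
```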

You can read more about this and some other new features from Tintri at El Reg. And I agree with Chris that a focus by Tintri beyond table stakes is a smart move.

Rubrik – Cloud Data What?

I’ve done a few posts on Cohesity in the past, and I have some friends who work at Rubrik. So it seemed like a good idea to put up a short article on what Rubrik do. Thanks to Andrew Miller at Rubrik for helping out with the background info.


The Platform

It’s converged hardware and software (the appliances are called “Briks”). There are different models, but the 2RU, four-node form factor is the most common.

[image via Rubrik’s website]

The Rubrik solution:

  • Is fundamentally built on a scale-out architecture;
  • Provides a built-in backup application/catalogue with deduplication and compression;
  • Uses a custom file system, distributed task scheduler, distributed metadata, and so on;
  • Delivers cloud-native archiving and is policy-driven at the core, taking a declarative rather than imperative approach;
  • Can leverage cloud-native archive targets (with native hooks into AWS/Azure/etc.);
  • Has a custom VSS provider to help with VM stun (super VMware friendly);
  • Has provided a native, REST-based API since day one, and along with vSphere (VADP, CBT, NBDSSL) handles SQL and Linux natively (there’s apparently more to come on that front) – there’s a simple sketch of talking to the API after this list; and
  • Offers an edge appliance for ROBO, amongst other things.
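Since that API gets a lot of airtime in Rubrik’s messaging, here’s a very rough sketch of what driving it looks like. Treat this as illustrative only – the cluster address and credentials are made up, and the endpoint paths and field names are from memory – so check the API documentation on your own cluster before relying on any of it.

```python
# A rough sketch of talking to a Rubrik cluster's REST API.
# Endpoint paths and field names are illustrative - check your cluster's API docs.
import requests

CLUSTER = "https://rubrik.example.com"   # hypothetical cluster address

# Authenticate and grab an API token (basic auth against the session endpoint).
session = requests.post(
    f"{CLUSTER}/api/v1/session",
    auth=("admin", "password"),
    verify=False,  # lab only - use proper certificates in production
)
token = session.json()["token"]
headers = {"Authorization": f"Bearer {token}"}

# List the VMware VMs the cluster knows about, along with their SLA assignment.
vms = requests.get(
    f"{CLUSTER}/api/v1/vmware/vm",
    headers=headers,
    verify=False,
).json()

for vm in vms.get("data", []):
    print(vm.get("name"), vm.get("effectiveSlaDomainName"))
```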


Cloud Data Management

Rubrik position their solution as “Cloud Data Management”.

In a similar fashion to Cohesity, Rubrik are focused on more than just backup and recovery or copy data management. There’s a lot you can do around archive and compliance, and Rubrik tell me the search capabilities are pretty good too.

It also works well with technologies such as VMware vSAN. Chris Wahl and Cormac Hogan wrote a whitepaper on the integration that you can get here (registration required).


Thoughts

As you can see from this post there’s a lot to look into with Rubrik (and Cohesity for that matter) and I’ve really only scratched the surface. The rising popularity of smarter secondary storage solutions such as these points to a desire in the marketplace to get sprawling data under control via policy rather than simple tiers of disk. This is a good thing. Add in the heavy focus on API-based control and I think we’re in for exciting times (or as exciting as this kind of stuff gets in any case). If you’re interested in some of what you can do with Rubrik, there’s a playlist on YouTube with demos that give a reasonable view of the capabilities. I’m hoping to dig a little deeper into the Rubrik solution in the next little while, and I’m particularly interested to see what it can do from an archiving perspective, so stay tuned.

Brisbane VMUG – February 2017

[VMUG Express logo]

The February 2017 edition of the Brisbane VMUG meeting will be held on Tuesday 21st February at the Fox Hotel (71-73 Melbourne St, South Brisbane QLD 4101) from 2pm. It’s sponsored by Veeam and promises to be a great afternoon.

Here’s the agenda:

  • VMUG Intro (by me)
  • VMware Presentation: Cloud Foundation and SDDC Manager (Bruce Perram)
  • Veeam Presentation: What’s new with Veeam 9.5 (Dilupa Ranatunga)
  • Veeam Customer Presentation
  • Q&A
  • Refreshments and drinks.

Veeam have gone to great lengths to make sure this will be a fun and informative session and I’m really looking forward to hearing about their Veeam Availability Suite 9.5 updates. You can find out more information and register for the event here. I hope to see you there. Also, if you’re interested in sponsoring one of these events, please get in touch with me and I can help make it happen.

Random Short Take #3

It’s only been 13 months since I did one of these, so clearly the frequency is increasing. Here’re a few things that I’ve noticed and thought may be worthy of your consideration:

  • This VMware KB article on Calculating Bandwidth Requirements for vSphere Replication is super helpful and goes into a lot of depth on planning for replication. There’s also an appliance available via a Fling, and a calculator you can use (I’ve included a simplified version of that calculation after this list).
  • NetWorker 9.1 was recently released and Preston has a great write-up on some of the new features. “Things are hotting up”, as they say.
  • Rawlinson has now joined Cohesity, and I look forward to hearing more about that in the near future. In the meantime, this article on automated deployment for HCI is very handy.
  • My hosting provider moved me to a new platform in September. By October I’d decided to move somewhere else based on the poor performance of the site and generally shoddy support experience. I’m now with SiteGround. They’re nice, fast and cheap enough for me. I’ve joined their affiliate program, so if you decide to sign up with them I can get some cash.
  • My blog got “hacked” yesterday. Someone put a redirect in place to a men’s performance pill site. Big thanks to Mike Yurick for pointing it out to me and to my colleague Josh for answering my pleas for help and stepping in and cleaning it up while I was on a plane inter-state. He used Wordfence to scan and clean up the site – check them out and make sure your backups are up to date. If it happens to you, and you don’t have a Josh, check out this guidance from WordPress.
  • The next Brisbane VMUG will be held on Tuesday February 21st. I’ll be putting up an article on it in the next few weeks. It will be sponsored by Veeam and should be great.
  • I always enjoy spending time with Stephen Foskett, and when I can’t be with him I like to read his articles (it’s not stalky at all). This one on containers was particularly good.
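As an aside on that first item, the basic arithmetic behind a replication bandwidth estimate looks something like the sketch below. This is a deliberately simplified back-of-the-envelope version rather than the KB’s exact method, and the numbers are made up – use the VMware calculator for real planning.

```python
# Back-of-the-envelope vSphere Replication bandwidth estimate.
# This is a simplification - use the VMware KB article and calculator for real planning.

def required_bandwidth_mbps(changed_gb_per_rpo: float, rpo_minutes: float,
                            overhead_factor: float = 1.25) -> float:
    """Estimate the link bandwidth (Mbps) needed to ship the data changed
    within one RPO window before the next window starts."""
    changed_megabits = changed_gb_per_rpo * 1024 * 8      # GB -> megabits
    window_seconds = rpo_minutes * 60
    return (changed_megabits / window_seconds) * overhead_factor

# Example: 5 GB of changed data per 60-minute RPO window.
print(f"{required_bandwidth_mbps(5, 60):.1f} Mbps")   # roughly 14 Mbps
```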

That’s it for the moment. Hopefully you all have an enjoyable and safe holiday season. And if my site looks like this in the future – let me know.

OT – Career Advice

If you’ve ever checked out my LinkedIn profile you’ll know I’m not necessarily a shining light of consistency in terms of the work I do and who I do it for. That said, while I’m not a GreyBeard yet, my sideburns have silvered somewhat and I’m nothing if not opinionated when it comes to giving advice about working in IT (for good and bad). Funnily enough, someone I know on the Internet (Neil) was curious about what IT folk had to say about getting into the industry and put together a brief article with quotes from me and 110 other people who know a bit about this stuff. I hate the term “guru”, but there are certainly a bunch of smart folk giving out some great advice here. Check it out when you have a moment.

Faith-based Computing – Just Don’t

I’d like to be very clear up front that this post isn’t intended as a swipe at people with faith. I have faith. Really, it’s a swipe at people who can’t use the tools available to them.


The Problem

I get cranky when IT decisions are based on feelings rather than data. As an example, I’ve been talking to someone recently who has outsourced support of their IT to a third party. However, they’re struggling immensely with their inability to trust someone else to look after their infrastructure. I asked them why it was a problem. They told me they didn’t think the other party could do it as well as they did. I asked for evidence of this assertion. There was none forthcoming. Rather, they just didn’t feel that the other party could do the job.


The Data

In IT organisations / operations there’s a lot of data available. You can get uptime statistics, performance statistics, measurements of performance against time allocated for case resolution, all kinds of stuff. And you can get it not only from your internal IT department, but also from your vendors, and across most technology in the stack from the backend to the client-facing endpoint. Everyone’s into data nowadays, and everyone wants to show you theirs. So what I don’t understand is why some people insist on ignoring the data at hand, making decisions based solely on “feelings” rather than the empirical evidence laid out in front of them.


What’s Happening?

I call this focus on instinct “faith-based computing”. It’s similar to faith-based medicine. While I’m a believer, I’m also a great advocate of going to my doctor when I’m suffering from an ailment. Pray for my speedy recovery by all means, but don’t stop me from talking to people of science. Faith-based computing is the idea that you can make significant decisions regarding IT based on instinct rather than the data in front of you. I’m not suggesting that in life there aren’t opportunities for instinct to play a bigger part than scientific data in how you do things, but IT has technology in the name. Technology is a science, not a pseudo-science like numerology. Sure, I agree there are a bunch of factors that influence our decision-making, including education, cultural background, shiny things, all kinds of stuff.


Conclusion

I come across organisations on a daily basis operating without making good use of the data in front of them. This arguably keeps me in business as a consultant, but it doesn’t necessarily make things fun for you. Use the metrics at hand. If you must make a decision based on instinct or non-technical data, at least be sure that you’ve evaluated the available data first. Don’t just dismiss things out of hand because they don’t feel right.

VMware – vSphere Basics – vCenter 6.5 Upgrade Scenarios

I did an article on the vSphere 6 Platform Services Controller a while ago. After attending a session on changes in vSphere 6.5 at vFORUM, I thought it would be a good idea to revisit this and frame it in the context of vCenter 6.5 upgrades.


vSphere Components

In vCenter 6.5, the architecture is a bit different to 5.x. With the PSC, you get:

  • VMware vCenter Single Sign-On
  • License service
  • Lookup service
  • VMware Directory Services
  • VMware Certificate Authority

And the vCenter Server Service gives you:

  • vCenter Server
  • VMware vSphere Web Client
  • VMware vSphere Auto Deploy
  • VMware vSphere ESXi Dump Collector
  • vSphere Syslog Collector on Windows and vSphere Syslog Service for VMware vCenter Server Appliance
  • vSphere Update Manager


Architecture Choices

There are some basic configurations that you can go with, though I generally don’t recommend these for anything outside of a lab or test environment. In these configurations, the PSC is either embedded in or external to the vCenter Server, and the choice will depend on the sizing and feature requirements of your environment.

If you want to use Enhanced Linked Mode, an external PSC is recommended. If you want that PSC to be highly available, you’ll still need to use a load balancer. This VMware KB article provides some handy insights and updates from 6.0.


vCenter Upgrade Scenarios

The upgrade architecture you’ll end up with depends on where your vCenter services currently reside. If your vCenter Server has SSO installed on the same machine, it becomes a vCenter Server with an embedded PSC after the upgrade.

If, however, some of the vSphere components are installed on separate VMs, then the Web Client and Inventory Service become part of the “Management Node” (your vCenter box) and the PSC (with SSO) remains separate/external.

Note also that vSphere 6.5 still requires a load balancer if you want your external PSCs to be highly available.
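If you’re not sure what you’re starting with, you can poke at the environment programmatically before you plan anything. Here’s a minimal sketch using pyVmomi – the hostname and credentials are obviously made up, and I’m assuming the config.vpxd.sso.admin.uri advanced setting still points at the PSC in your build, so verify it against your own environment:

```python
# A minimal pre-upgrade check with pyVmomi: what version is vCenter,
# and where does it think SSO (the PSC) lives?
# Hostname, credentials and the advanced setting name are assumptions - verify in your environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect

context = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=context)

about = si.content.about
print(f"{about.fullName} (API {about.apiVersion})")

# The vpxd advanced settings include the SSO admin endpoint; if it points at the
# vCenter's own FQDN the PSC is embedded, otherwise it's external.
options = si.content.setting.QueryOptions(name="config.vpxd.sso.admin.uri")
for opt in options:
    print(f"{opt.key} = {opt.value}")

Disconnect(si)
```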


Final Thoughts

This is not something that’s necessarily going to come up each day. But whether you’re working directly with VMware, going through an integrator, or doing it yourself, your choice of vCenter architecture should be a key consideration in your planning activities. As with most upgrades to key infrastructure components, you should take the time to plan appropriately.

2017 – The New What Next

I’m not terribly good at predicting the future, particularly when it comes to technology trends. I generally prefer to leave that kind of punditry to journalists who don’t mind putting it out there and are happy to be proven wrong on the internet time and again. So why do a post referencing a great Hot Water Music album? Well, one of the PR companies I deal with regularly sent me a few quotes through from companies that I’m generally interested in talking about. And let’s face it, I haven’t had a lot to say in the last little while due to day job commitments and the general malaise I seem to suffer from during the onset of summer in Brisbane (no, I really don’t understand the concept of Christmas sweaters in the same way my friends in the Northern Hemisphere do).

Long intro for a short post? Yes. So I’ll get to the point. Here’s one of the quotes I was sent. “As concerns of downtime grow more acute in companies around the globe – and the funds for secondary data centers shrink – companies will be turning to DRaaS. While it’s been readily available for years, the true apex of adoption will hit in 2017-2018, as prices continue to drop and organizations become more risk-averse. There are exceptional technologies out there that can solve the business continuity problem for very little money in a very short time.” This was from Justin Giardina, CTO of iland. I was fortunate enough to meet Justin at the Nimble Storage Predictive Flash launch event in February this year. Justin is a switched on guy and while I don’t want to give his company too much air time (they compete in places with my employer), I think he’s bang on the money with his assessment of the state of play with DR and market appetite for DR as a Service.

I think there are a few things at play here, and it’s not all about technology (because it rarely is). The CxO’s fascination with cloud has been (rightly or wrongly) fiscally focused, with a lot of my customers thinking that public cloud could really help reduce their operating costs. I don’t want to go too much into the accuracy of that idea, but I know that cost has been front and centre for a number of customers for some time now. Five years ago I was working in a conservative environment where we had two production DCs and a third site dedicated to data protection infrastructure. They’ve since reduced that to one production site and are leveraging outsourced providers for both DR and data protection capabilities. The workload hasn’t changed significantly, nor has the requirement to have the data protected and recoverable.

Rightly or wrongly, the argument for appropriate disaster recovery infrastructure seems to be a difficult one to make in organisations, even those that have been exposed to disaster and have (through sheer dumb luck) survived the ordeal. I don’t know why it is so difficult for people to understand that good DR and data protection is worth it. I suppose it’s the same as the calculated risk I take on my insurance every year, paying a lower annual rate and gambling that I won’t have to make a claim and be exposed to higher premiums.

It’s not just about cost though. I’ve spoken to plenty of people who just don’t know what they’re doing when it comes to DR and data protection. And some of these people have been put in the tough position of having lost some data, or having had a heck of a time recovering after a significant equipment failure. In the same way that I have someone come and look at my pool pump when water is coming out of the wrong bit, these companies are keen to get people in who know what they’re doing. If you think about it, it’s a smart move. While it can be hard to admit, sometimes knowing your limitations is actually a good thing.

It’s not that we don’t have the technology, or the facilities (even in BrisVegas), to do DR and data protection pretty well nowadays. In most cases it’s easier and more reliable than it ever was. But, like on-premises email services, it seems to be a service that people are happy to make someone else’s problem. I don’t have an issue with that as a concept, as long as you understand that you’re only outsourcing some technology and processes; you’re not magically doing away with the risk, or the fallout when something goes pear-shaped. If you’re a small business without a dedicated team of people to look after your stuff, it makes a lot of sense. Even the bigger players can benefit from making it someone else’s thing to worry about. Just make sure you know what you’re getting into.

Getting back to the original premise of this post, I agree with Justin that we’re at a tipping point regarding DRaaS adoption, and I think 2017 is going to be really interesting in terms of how companies make use of this technology to protect their assets and keep costs under control.

Scale Computing and single-node configurations for the discerning punter

[Scale Computing logo]

I had the opportunity to speak to Jason Collier and Alan Conboy from Scale Computing about a month ago after they announced their “single-node configuration of its HC3 virtualization platform”. This is “designed to deliver affordable, flexible hyperconverged infrastructure for distributed enterprise and Disaster Recovery (DR) use cases”. I’m a big fan of Scale’s HC3 platform (and not just because they gave me an HC1000 demo cluster to trial). It’s simple, it’s cheap, and it works well for its target market. Traditionally (!), HCI and other converged offerings have called for 3 nodes (and sometimes 2) to be considered resilient at a cluster level. When Scale sell you their HC3 solution you start at 3 nodes and work up from there. So how do they do the single-node thing?


Use Cases

In the market I operate in, the use of a single node at the edge makes quite a bit of sense. Customers with edge sites generally don’t have access to IT resources or even suitable facilities, but they sometimes have a reasonable link back to the core or decent WAN optimisation in place. So why not use the core to provide the resiliency you would have normally configured locally? Scale call this the “distributed enterprise” model, with the single-node HC3 solution replicating workload back to the core. If something goes wrong at the edge you can point users to your workload in the core and keep on going [do you like how I made that sound really simple? There’s a rough sketch below of what “simple” actually involves].
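To be clear about how un-simple that can be in practice, here’s a toy sketch of the sort of logic that sits behind “point users to the core”. It’s entirely hypothetical – nothing Scale-specific – and the health check and the repoint step are stand-ins for whatever monitoring and DNS or load balancer tooling you actually run:

```python
# Toy edge-failover logic - hypothetical, not Scale-specific.
# The health check and the "repoint users" step are placeholders for whatever
# monitoring and DNS/load-balancer tooling you actually have in place.
import socket
import time

EDGE_HOST = ("edge-site.example.com", 443)   # hypothetical edge endpoint
CHECK_INTERVAL = 30                          # seconds between checks
FAILURE_THRESHOLD = 3                        # consecutive failures before failing over

def edge_is_healthy(host, timeout=5):
    """Crude reachability check against the edge node."""
    try:
        with socket.create_connection(host, timeout=timeout):
            return True
    except OSError:
        return False

def fail_over_to_core():
    """Placeholder: repoint users (DNS, load balancer, etc.) at the replica in the core."""
    print("Edge looks dead - repointing users at the core replica.")

failures = 0
while True:
    if edge_is_healthy(EDGE_HOST):
        failures = 0
    else:
        failures += 1
        if failures >= FAILURE_THRESHOLD:
            fail_over_to_core()
            break
    time.sleep(CHECK_INTERVAL)
```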

You can also use the single-node configuration as a DR target for an HC3 cluster. The built-in replication can be configured quickly and without extra software to a single HC3 node located locally or remotely. This, again, is a cheap and cheerful solution to what can sometimes be seen as an expensive insurance policy against infrastructure failure.


Thoughts and Further Reading

Risk versus cost is a funny thing. Some people are very good at identifying risk but not very good at understanding what it will cost them if something goes wrong. Some people like to talk a lot about risk, but then their only mitigation is to note it in a “risk register”. In any case, I think Scale have come up with a smart solution because they’re selling to people in a cost-sensitive market. This solution helps to alleviate some of the expense related to providing DR and distributed compute (and thus reduces some of the risk for SMBs). That said, everyone’s situation is different, and you need to assess the risk of running workload at the edge on a single node. If that node fails, you can fail over your workload to the core. Assuming you have a process in place for this. And applications that support it. And sufficient bandwidth from the core to the edge. In any case, if you’re in the market for low-cost HCI, Scale should be on your radar. And if you’re looking at distributed HCI you should definitely be looking at the single-node solution. You can read some more in Scale’s press release here. The solution brief can be found here. El Reg also has something to say about it here. Some generally very useful technical resources from Scale can be found here.