Random Short Take #20

Here are some links to some random news items and other content that I recently found interesting. You might find them interesting too. Episode 20 – feels like it’s becoming a thing.

  • Scale Computing seems to be having a fair bit of success with their VDI solutions. Here’s a press release about what they did with Harlingen WaterWorks System.
  • I don’t read Corey Quinn’s articles enough, but I am glad I read this one. Regardless of what you think about the enforceability of non-compete agreements (and regardless of where you’re employed), these things have no place in the modern workforce.
  • If you’re getting along to VMworld US this year, I imagine there’s plenty in your schedule already. If you have the time – I recommend getting around to seeing what Cody and Pure Storage are up to. I find Cody to be a great presenter, and Pure have been doing some neat stuff lately.
  • Speaking of VMworld, this article from Tom about packing the little things for conferences in preparation for any eventuality was useful. And if you’re heading to VMworld, be sure to swing past the VMUG booth. There’s a bunch of VMUG stuff happening at VMworld – you can read more about that here.
  • I promise this is pretty much the last bit of news I’ll share regarding VMworld. Anthony from Veeam put up a post about their competition to win a pass to VMworld. If you’re on the fence about going, check it out now (as the competition closes on the 19th August).
  • It wouldn’t be a random short take without some mention of data protection. This article about tiering protection data from George Crump was bang on the money.
  • Backblaze published their quarterly roundup of hard drive stats – you can read more here.
  • This article from Paul on freelancing and side gigs was comprehensive and enlightening. If you’re thinking of taking on some extra work in the hopes of making it your full-time job, or just wanting to earn a little more pin money, it’s worthwhile reading this post.

Automation Anywhere – The Bots Are Here To Help

Disclaimer: I recently attended Tech Field Day 19.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

 

Automation Anywhere recently presented at Tech Field Day 19. You can see videos of their presentation here, and download my rough notes from here.

 

Robotic What?

Robotic Process Automation (RPA) is the new hotness in enterprise software. Automation Anywhere raised over $550 million in funding in the last 12 months. That’s a lot of money. But what is RPA? It’s a way to develop workflows so that business processes can be automated. One of the cool things, though, is that it can build these automations by observing a user perform a task in the GUI and then repeating those steps. That has the potential to make automation accessible to people who aren’t necessarily software development types.
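RPA suites are proprietary, but the record-and-replay idea at the heart of the pitch is simple enough to sketch. Here’s a toy Python illustration (the names and structure are my own invention, not Automation Anywhere’s product): a recorder captures a sequence of GUI actions as a user performs them, and a bot can then replay those steps unattended.

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    """A single recorded UI step, e.g. a click or a keystroke."""
    kind: str           # "click", "type", "submit", ...
    target: str         # identifier of the GUI element acted on
    value: str = ""     # payload for "type" actions

@dataclass
class Recorder:
    """Watches the user and captures each action, in order."""
    actions: list = field(default_factory=list)

    def observe(self, kind, target, value=""):
        self.actions.append(Action(kind, target, value))

def replay(actions, app):
    """Replay recorded actions against an application object that
    exposes click/type/submit methods."""
    for a in actions:
        if a.kind == "click":
            app.click(a.target)
        elif a.kind == "type":
            app.type(a.target, a.value)
        elif a.kind == "submit":
            app.submit(a.target)

# Record once, by watching a user fill in an invoice form...
rec = Recorder()
rec.observe("click", "invoice_form")
rec.observe("type", "amount_field", "199.95")
rec.observe("submit", "invoice_form")
# ...and the bot can then replay the same steps on demand.
```

The point of the sketch is that the "developer" never writes the workflow; they just perform it once while the recorder watches.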

Automation Anywhere started back in 2003, and the idea was to automate any application. Automation Anywhere want to “democratise automation”, and believe that “anything that can be automated, should be automated”. The real power of this kind of approach is that it, potentially, allows you to do things you never did before. Automation Anywhere want us to “imagine a world where every job has a digital assistant working side by side, allowing people to do what they do best”.

[image courtesy of Automation Anywhere]

 

Humans are the Resource

This whole automating all the things mantra has been around for some time, and the idea has always been that we’re “[m]oving humans up the value chain”. Not only that, but RPA isn’t about digital transformation in the sense that a lot of companies see it currently, i.e. as a way to change the way they do things to better leverage digital tools. What’s interesting is that RPA is more focused on automating what you already have. You can then decide whether the process is optimal or whether it should be changed. I like this idea, if only because of the number of times I’ve witnessed small and large companies go through “transformations”, only to realise that what they were doing previously was pretty good, and they’d just made a few mistakes in terms of manual process creeping in.

Automation Anywhere told us that some people start with “I know that my job cannot be automated”, but it turns out that about 80% of their job is business tools based, and a lack of automation is holding them back from thinking strategically. We’ve seen this problem throughout the various industrial revolutions that have occurred, and people have invariably argued against steam-powered devices, and factory lines, and self-healing infrastructure.

 

Thoughts and Further Reading

Automation is a funny thing. It’s often sold to people as a means to give them back time in their day to do “higher order” activities within the company. This has been a message that has been around as long as I’ve been in IT. There’s an idea that every worker is capable of doing things that could provide more value to the company, if only they had more time. Sometimes, though, I think some folks are just good at breaking rocks. They don’t want to do anything else. They may not really be capable of doing anything else. And change is hard, and is going to be hard for them in particular. I’m not anticipating that RPA will take over every single aspect of the workplace, but there’s certainly plenty of scope for it to have a big presence in the modern enterprise. So much time is wasted on process that should really be automated. Automating it gives you back a lot of your day, and it provides the consistency that human resources lack.

As Automation Anywhere pointed out in their presentation, “every piece of software in the world changes how we work, but rarely do you have the opportunity to change what the work is”. And that’s kind of the point, I think. We’re so tied to doing things in a business a certain way, and oftentimes we fill the gaps in workflows with people because the technology can’t keep up with what we’re trying to do. But if you can introduce tools into the business that can help you move past those shortfalls in workflow, and identify ways to improve those workflows, that could really be something interesting. I don’t know if RPA will solve all of our problems overnight, because humans are unfortunately still heavily involved in the decision making process inside the enterprise, but it seems like there’s scope to do some pretty cool stuff with it.

If you’d like to read some articles that don’t just ramble on, check out Adam’s article here, Jim’s view here, and Liselotte’s article here. Marina posted a nice introduction to Automation Anywhere here, and Scott’s impression of Automation Anywhere’s security approach made for interesting reading. There’s a wealth of information on the Automation Anywhere website, and a community edition you can play with too.

Random Short Take #18

Here are some links to some random news items and other content that I recently found interesting. You might find them interesting too. Episode 18 – buckle up kids! It’s all happening.

  • Cohesity added support for Active Directory protection with version 6.3 of the DataPlatform. Matt covered it pretty comprehensively here.
  • Speaking of Cohesity, Alastair wrote this article on getting started with the Cohesity PowerShell Module.
  • In keeping with the data protection theme (hey, it’s what I’m into), here’s a great article from W. Curtis Preston on SaaS data protection, and what you need to consider to not become another cautionary tale on the Internet. Curtis has written a lot about data protection over the years, and you could do a lot worse than reading what he has to say. And that’s not just because he signed a book for me.
  • Did you ever stop and think just how insecure some of the things that you put your money into are? It’s a little scary. Shell are doing some stuff with Cybera to improve things. Read more about that here.
  • I used to work with Vincent, and he’s a super smart guy. I’ve been at him for years to start blogging, and he’s started to put out some articles. He’s very good at taking complex topics and distilling them down to something that’s easy to understand. Here’s his summary of VMware vRealize Automation configuration.
  • Tom’s take on some recent CloudFlare outages makes for good reading.
  • Google Cloud has announced it’s acquiring Elastifile. That part of the business doesn’t seem to be as brutal as the broader Alphabet group when it comes to acquiring and discarding companies, and I’m hoping that the good folks at Elastifile are looked after. You can read more on that here.
  • A lot of people are getting upset with terms like “disaggregated HCI”. Chris Mellor does a bang up job explaining the differences between the various architectures here. It’s my belief that there’s a place for all of this, and assuming that one architecture will suit every situation is a little naive. But what do I know?

NetApp Wants You To See The Whole Picture

Disclaimer: I recently attended Tech Field Day 19.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

NetApp recently presented at Tech Field Day 19. You can see videos of their presentation here, and download my rough notes from here.

 

Management or Monitoring?

James Holden (Director, Cloud Analytics) delivered what I think was a great presentation on NetApp Cloud Insights. Early on he made the comment that “[w]e’re as read-only as we possibly can be. Being actionable puts you in a conversation where you’re doing something with the infrastructure that may not be appropriate.” It’s a comment that resonated with me, particularly as I’ve been on both sides of the infrastructure management and monitoring fence (yes, I know, it sounds like a weird fence – just go with it). I remember vividly providing feedback to vendors that I wanted their fancy single pane of glass monitoring solution to give me more management capabilities as well. And while they were at it, it would be great if they could develop software that would automagically fix issues in my environment as they arose.

But do you want your cloud monitoring tools to really have that much control over your environment? Sure, there’s a lot of benefit to be had deploying solutions that can reduce the stick time required to keep things running smoothly, but I also like the idea that the software won’t just dive in and fix what it perceives as errors in an environment based on a bunch of pre-canned constraints that have been developed by people who may or may not have a good grip on what’s really happening in these types of environments.

Keep Your Cloud Happy

So what can you do with Cloud Insights? As it turns out, all kinds of stuff, including cost optimisation. It doesn’t always sound that cool, but customers are frequently concerned with the cost of their cloud investment. What they get with Cloud Insights is:

Understanding

  • What was my cost over the last few months?
  • What’s my current month’s running cost?
  • What’s my cost broken down by AWS service, account, and region?
  • Does it meet the budget?

Analysis

  • Real time cost analysis to alert on sudden rise in cost
  • Project cost over period of time
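Neither of those analyses needs to be exotic. As a rough sketch of what “alert on a sudden rise in cost” and “project cost over a period of time” can mean under the hood (my own simplification, not NetApp’s actual implementation):

```python
def spike_alert(daily_costs, window=7, threshold=1.5):
    """Flag a sudden rise: is the latest day's spend more than
    `threshold` times the trailing `window`-day average?
    Assumes at least window + 1 days of data."""
    baseline = sum(daily_costs[-window - 1:-1]) / window
    return daily_costs[-1] > threshold * baseline

def project_month(daily_costs, days_in_month=30):
    """Naive linear projection: average daily spend so far,
    scaled to a full month."""
    return sum(daily_costs) / len(daily_costs) * days_in_month

costs = [100, 102, 98, 101, 99, 103, 100, 250]  # sudden jump on the last day
print(spike_alert(costs))  # True
```

Real tooling would be smarter about seasonality and noise, but the shape of the problem is the same.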

Optimisation

  • Save costs by using “reserved instances”
  • Right sizing compute resources
  • Remove waste: idle EC2 instances, unattached EBS volumes, unused reserved instances
  • Spot instance use
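To make the “remove waste” item concrete, here’s a minimal sketch of the kind of inventory check a tool like this performs. It’s plain Python over an example inventory; in a real deployment the volume and instance data would come from the cloud provider’s APIs rather than hard-coded dictionaries:

```python
def find_waste(volumes, instances, cpu_idle_threshold=2.0):
    """Return unattached EBS volume IDs and idle EC2 instance IDs
    (average CPU below the threshold, in percent)."""
    unattached = [v["id"] for v in volumes if not v.get("attached_to")]
    idle = [i["id"] for i in instances if i["avg_cpu"] < cpu_idle_threshold]
    return unattached, idle

volumes = [
    {"id": "vol-01", "attached_to": "i-aa"},
    {"id": "vol-02", "attached_to": None},   # orphaned volume, still billed
]
instances = [
    {"id": "i-aa", "avg_cpu": 35.0},
    {"id": "i-bb", "avg_cpu": 0.4},          # effectively idle
]
```

The value of a product here isn’t the check itself, it’s running checks like this continuously across every account and region you own.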

There are a heap of other features, including:

  • Alerting and impact analysis; and
  • Forensic analysis.

It’s all wrapped up in an alarmingly simple SaaS solution, meaning quick deployment and faster time to value.

The Full Picture

One of my favourite bits of the solution though is that NetApp are striving to give you access to the full picture:

  • There are application services running in the environment; and
  • There are operating systems and hardware underneath.

“The world is not just VMs on compute with backend storage”, and NetApp have worked hard to ensure that the likes of microservices are also supported.

 

Thoughts and Further Reading

One of the recurring themes of Tech Field Day 19 was that of management and monitoring. When you really dig into the subject, every vendor has a different take on what can be achieved through software. And it’s clear that every customer also has an opinion on what they want to achieve with their monitoring and management solutions. Some folks are quite keen for their monitoring solutions to take action as events arise to resolve infrastructure issues. Some people just want to be alerted about the problem and have a human intervene. And some enterprises just want an easy way to report to their C-level what they’re spending their money on. With all of these competing requirements, it’s easy to see how I’ve ended up working in enterprises running 10 different solutions to monitor infrastructure. Those enterprises also had little idea what the money was being spent on, and had a large team of operations staff dealing with issues that either weren’t reported by the tools or got buried in someone’s inbox.

IT operations has been a hard nut to crack for a long time, and it’s not always the fault of the software vendors that it isn’t improving. It’s not just about generating tonnes of messages that no-one will read. It’s about doing something with the data that people can derive value from. That said, I think NetApp’s solution is a solid attempt at providing a useful platform to deliver on some pretty important requirements for the modern enterprise. I really like the holistic view they’ve taken when it comes to monitoring all aspects of the infrastructure, and the insights they can deliver should prove invaluable to organisations struggling with the myriad of moving parts that make up their (private and public) cloud footprint. If you’d like to know more, you can access the data sheet here, and the documentation is hosted here.

Random Short Take #15

Here are a few links to some random news items and other content that I recently found interesting. You might find them interesting too. Episode 15 – it could become a regular thing. Maybe every other week? Fortnightly even.

Random Short Take #14

Here are a few links to some random news items and other content that I found interesting. You might find them interesting too. Episode 14 – giddy-up!

Big Switch Are Bringing The Cloud To Your DC

Disclaimer: I recently attended Dell Technologies World 2019.  My flights, accommodation and conference pass were paid for by Dell Technologies via the Media, Analysts and Influencers program. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

As part of my attendance at Dell Technologies World 2019 I had the opportunity to attend Tech Field Day Extra sessions. You can view the videos from the Big Switch Networks session here, and download my rough notes from here.

 

The Network Is The Cloud

Cloud isn’t a location, it’s a design principle. And networking needs to evolve with the times. The enterprise is hamstrung by:

  • Complex and slow operations
  • Inadequate visibility
  • Lack of operational consistency

It’s time on-premises infrastructure was built the same way the service providers build theirs:

  • Software-defined;
  • Automated with APIs;
  • Open Hardware; and
  • Integrated Analytics.

APIs are not an afterthought for Big Switch.
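That “automated with APIs” point is really about driving the fabric declaratively rather than configuring boxes one at a time. Here’s a generic sketch of the pattern (my own illustration, not Big Switch’s actual API): compare desired state against actual state, and emit only the changes that need to be pushed.

```python
def plan_changes(desired, actual):
    """Compare desired vs actual VLAN-to-port maps and return the
    API calls an automation layer would need to make."""
    calls = []
    for vlan, ports in desired.items():
        current = set(actual.get(vlan, []))
        for port in set(ports) - current:
            calls.append(("add", vlan, port))      # port missing from fabric
        for port in current - set(ports):
            calls.append(("remove", vlan, port))   # port no longer wanted
    return sorted(calls)

desired = {"vlan10": ["eth1", "eth2"], "vlan20": ["eth3"]}
actual = {"vlan10": ["eth1"], "vlan20": ["eth3", "eth4"]}
print(plan_changes(desired, actual))
# [('add', 'vlan10', 'eth2'), ('remove', 'vlan20', 'eth4')]
```

Idempotency falls out for free: if the fabric already matches the desired state, the plan is empty and nothing gets touched.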

A Better DC Network

  • Cloud-first infrastructure – design, build and operate your on-premises network with the same techniques used internally by public cloud operators
  • Cloud-first experience – give your application teams the same “as-a-service” network experience on-premises that they get with the cloud
  • Cloud-first consistency – use the same tool chain to manage both on-premises and in-cloud networks

 

Thoughts and Further Reading

There are a number of reasons why enterprise IT folks are looking wistfully at service provider and public cloud infrastructure setups and wishing they could do IT that way too. If you’re a bit old fashioned, you might think that fast and loose isn’t really how you should be doing enterprise IT – something that’s notorious for being slow, expensive, and reliable. But that would be selling the SPs short (and I don’t just say that because I work for a service provider in my day job). What service providers and public cloud folks are very good at is getting maximum value from the infrastructure they have available to them. We don’t necessarily adopt cloud-like approaches to infrastructure to save money, but rather to solve the same problems in the enterprise that are being solved in the public clouds. Gone are the days when the average business will put up with vast sums of cash being poured into enterprise IT shops with little to no apparent value being extracted from said investment. It seems to be no longer enough to say “Company X costs this much money, so that’s what we pay”. For better or worse, the business is both more and less savvy about what IT costs, and what you can do with IT. Sure, you’ll still laugh at the executive challenging the cost of core switches by comparing them to what can be had at the local white goods slinger. But you’d better be sure you can justify the cost of that badge on the box that runs your network, because there are plenty of folks ready to do it for cheaper. And they’ll mostly do it reliably too.

This is the kind of thing that lends itself perfectly to the likes of Big Switch Networks. You no longer necessarily need to buy badged hardware to run your applications in the fashion that suits you. You can put yourself in a position to get control over how your spend is distributed, and not feel like you’re feeding some mega company’s profit margins without getting a return on your investment. It doesn’t always work like that, but the possibility is there. Big Switch have been talking about this kind of choice for some time now, and have been delivering products that make that possibility a reality. They recently announced an OEM agreement with Dell EMC. It mightn’t seem like a big deal, as Dell like to cosy up to all kinds of companies to fill apparent gaps in the portfolio. But they also don’t enter into these types of agreements without having seriously evaluated the other company. If you have a chance to watch the customer testimonial at Tech Field Day Extra, you’ll get a good feel for just what can be accomplished with an on-premises environment that has service provider-like scalability, management, and performance challenges. There’s a great tale to be told here. Not every enterprise is working at “legacy” pace, and many are working hard to implement modern infrastructure approaches to solve business problems. You can also see one of their customers talk with my friend Keith about the experience of implementing and managing Big Switch on Dell Open Networking.

Dell Announces Dell Technologies Cloud (Platforms and DCaaS)

Disclaimer: I recently attended Dell Technologies World 2019.  My flights, accommodation and conference pass were paid for by Dell Technologies via the Media, Analysts and Influencers program. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Dell Technologies recently announced their Dell Technologies Cloud Platforms and Dell Technologies DCaaS offerings and I thought I’d try and dig in a little more to the announcements here.

 

DTC DCaaS

[image courtesy of Dell Technologies]

Dell Technologies Cloud Data Center-as-a-Service (DTC DCaaS) is all about “bringing public cloud simplicity to your DCs”. So what do you get with this? You get:

  • Data residency and regulatory compliance;
  • Control over critical workloads;
  • Proximity of data with cloud resources;
  • Self-service resource provisioning;
  • Fully managed, maintained and supported; and
  • Increased developer velocity.

VMware Cloud on Dell

At its core, DTC DCaaS is built on VMware Cloud Foundation and Dell EMC VxRail. VMware Cloud on Dell EMC is “cloud infrastructure installed on-premises in your core and edge data centres and consumed as a cloud service”.

[image courtesy of Dell Technologies]

  • Cloud infrastructure delivered as-a-service on-premises
  • Co-engineered and delivered by Dell Technologies; ongoing service fully managed by VMware
  • VMware SDDC including compute, storage and networking
  • Built on VxRail – Dell EMC’s enterprise-grade cloud platform
  • Hybrid cloud control plane to provision and monitor resources
  • Monthly subscription model

How Does It Work?

  • Firstly, you sign into the VMware Cloud service account to create an order. Dell Technologies will then deliver and install your new cloud infrastructure in your core or edge DC location.
  • Next, the system will self-configure and register with VMware Cloud servers, so you can immediately begin provisioning and managing workloads with VMware’s hybrid cloud control plane.

Moving forward, the hardware and software are fully managed, just like your public cloud resources.

Speeds And Feeds 

As I understand it there are two configuration options: DC and Edge. The DC configuration is as follows:

  • 1x 42U APC NetShelter rack
  • 4 – 15x E560 VxRail Nodes
  • 2x S5248F-ON 25GbE ToR Switches, OS10EE
  • 1x S3048 1GbE Management Switch, OS9EE
  • 2x VeloCloud 520
  • 6x Single-phase 30 AMP PDU
  • No UPS option

The Edge Location configuration is as follows:

  • 1x 24U APC NetShelter rack
  • 3 – 6x E560 VxRail Nodes
  • 2x S4128F 10GbE ToR Switches, OS10EE
  • 1x S3048-ON 1GbE Management Switch, OS9EE
  • 2x VeloCloud 520
  • 2x Single-phase 30 AMP PDU
  • 2x UPS with batteries for 30 min hold-up time for 6x E560F

 

Thoughts And Further Reading

I haven’t explained it very clearly in this article, but there are two parts to the announcement. There’s the DTC Platforms announcement, and the DTC DCaaS announcement. You can read a slightly better explanation here, but the Platforms announcement is VCF on VxRail, and VMware Cloud on AWS. DTC DCaaS, on the other hand, is kit delivered into your DC or Edge site and consumed as a managed service.

There was a fair bit of confusion when I spoke to people at the show last week about what this announcement really meant, both for Dell Technologies and for their customers. At the show last year, Dell was bullish on the future of private cloud / on-premises infrastructure. It seems apparent, though, that this kind of announcement is something of an admission that Dell has customers that are demanding a little more activity when it comes to multi-cloud and hybrid cloud solutions.

Dell’s ace in the hole has been (since the EMC merger) the close access to VMware that they’ve enjoyed via the portfolio of companies. It makes sense that they would have a story to tell when it comes to VMware Cloud Foundation and VMware Cloud on AWS. The box slingers at Dell EMC are happy because they can still sell VxRail appliances for use with the DCaaS offering. I’m interested to see just how many customers take up Dell on their vision of seamless integration between on-premises and public cloud workloads.

The public cloud vendors will tell you that eventually (in 5, 10, 20 years?) every workload will be “cloud native”. I think it’s more likely that we’ll always have some workloads that need to remain on-premises. Not necessarily because they have performance requirements that require that level of application locality, but rather because some organisations will have security requirements that will dictate where these workloads live. I think the shelf life of something like VMC on AWS is still more limited than some people will admit, but I can see the need for stuff like this.

My only concern is that the DTC story can be complicated to tell in places. I’ve spent some time this week and last digging in to this offering, and I’m not sure I’ve explained it terribly well at all. I also wonder how the organisations (Dell EMC and VMware) will work together to offer a cohesive offering from a technology and support perspective. Ultimately, these types of solutions are appealing because companies want to focus on their core business, rather than operating as a poorly resourced IT organisation. But there’s no point entering in to these kinds of agreements if the vendor can’t deliver on their vision. “Fully managed services” mean different things to different vendors, so I’ll be interested to see how that plays out in the market.

Dell Technologies Cloud Data Center-as-a-Service, delivered as VMware Cloud on Dell EMC with VxRail, is currently available in beta deployments, with limited customer availability planned for the second half of 2019. You can read the solution overview here.

Dell EMC Announces Unity XT And More Cloudy Things

Disclaimer: I recently attended Dell Technologies World 2019.  My flights, accommodation and conference pass were paid for by Dell Technologies via the Media, Analysts and Influencers program. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

 

Dell EMC Unity XT

As part of their storage announcements this week, Dell EMC announced the new Unity XT. Here’s a photo of one from the show floor at Dell Technologies World.

There are two variants of Unity XT, and you can grab the All-Flash data sheet here, and the Hybrid data sheet here. The spec sheet for both flavours is here. There are 8 models in all, and the smallest one in hybrid and all-flash won’t support NVMe (to keep the cost down for smaller customers). I’m told the largest model will scale up to 1500 drives, with Dell EMC revisiting the kind of specs that they had with the VNX 7600 and 8000 range.

From an efficiency perspective, Dell EMC are claiming:

  • Up to 5:1 data reduction
  • 85% system efficiency
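As a worked example of what claims like these mean for sizing (the numbers are purely illustrative – real reduction ratios depend entirely on your data):

```python
def effective_capacity(raw_tb, reduction_ratio, system_efficiency):
    """Usable logical capacity, from raw capacity, a data reduction
    ratio (e.g. 5 for 5:1), and system efficiency as a fraction."""
    return raw_tb * system_efficiency * reduction_ratio

# 100 TB raw at 85% system efficiency and 5:1 reduction:
usable = effective_capacity(100, 5, 0.85)
print(f"{usable:.1f} TB logical")  # 425.0 TB logical
```

Which is also why the caveats matter: at 2:1 reduction the same box stores 170 TB, not 425 TB.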

Wait, what about performance? Dell EMC are telling me the Unity XT delivers up to:

  • 2x More Performance (IOPS)*
  • 75% Lower Latency**
  • 67% Faster performance than competition***

Like all performance claims, there are a few caveats:

  • *100% reads, 100% writes & mixed workload – compared to previous generation
  • ** @ 150K IOPS, 8K block size, 70/30 R/W ratio
  • *** Compared to leading vendor

 

Dell Storage and the Cloud

It’s a multi-cloud world. And Dell EMC have been working to make sure they’re involved in various cloud things, including:

  • Dell Technologies Cloud Platform (certified with Unity and PowerMax);
  • Cloud Data Services;
  • Cloud Connected Systems; and
  • Cloud Data Insights.

Dell Technologies Cloud Platform

This was a reasonably significant announcement, and I’ll be covering it in a separate article.

 

Cloud Data Services

Dell EMC are also offering a range of storage and data protection services available in the public cloud provider of your choice.

Dell EMC Cloud Storage Services

Dell EMC have announced that Early Access is coming soon for Dell EMC Cloud Storage Services Integrated with Google Cloud Platform (GCP) for File.

[image courtesy of Dell EMC]

  • Ideal for HPC applications, analytics, media and entertainment, life sciences, etc.
  • Backed by enterprise SLAs
  • Pay-as-you-use pricing
  • Proactive monitoring, maintenance, and hardware lifecycle management

They’ve also announced that Dell EMC Cloud Storage Services is now available.

[image courtesy of Dell EMC]

  • Fast – High-speed, low latency connection to the cloud;
  • Trusted – Durable, persistent storage with up to six nines (99.9999%) availability and enterprise-grade security; and
  • Flexible – Control your data with multi-cloud agility; Independently scale capacity and compute.

 

DR Services

The cool thing about cloud data services is that you can do cool things with them, such as using VMC on AWS for automated disaster recovery.

[image courtesy of Dell EMC]

Dell EMC tell me it’s a:

  • Seamlessly integrated VMware environment;
  • Delivering automated DR operations;
  • With enterprise-grade, pay-as-you-go DRaaS;
  • You only pay for compute in the cloud when failover occurs; and
  • This gives you access to lower RPOs and RTOs.

It’s a multi-cloud world though, so you can also access multiple cloud providers for Disaster Recovery.

[image courtesy of Dell EMC]

The benefits of this approach are numerous, including:

  • No secondary DC to manage;
  • Enterprise-grade infrastructure;
  • A Pay-as-you-go model;
  • Only pay for compute in the cloud in the event of a failure; and
  • Lower RPOs.
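The “only pay for compute in the cloud in the event of a failure” model is easy to put numbers against. A back-of-the-envelope sketch (all rates here are made up, purely for illustration):

```python
def draas_monthly_cost(storage_gb, storage_rate, compute_hourly,
                       failover_hours=0):
    """Steady-state DRaaS cost: replicated storage is always billed,
    but compute only for the hours a failover (or DR test) runs."""
    return storage_gb * storage_rate + compute_hourly * failover_hours

quiet_month = draas_monthly_cost(5000, 0.02, 4.0)                   # no failover
dr_test = draas_monthly_cost(5000, 0.02, 4.0, failover_hours=8)     # 8-hour test
print(f"{quiet_month:.2f} vs {dr_test:.2f}")  # 100.00 vs 132.00
```

Compare that with keeping a secondary DC powered, cooled, and staffed all year in case of an event that may never happen, and the appeal is obvious.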

And it wouldn’t be multi-cloud capable if you couldn’t do other cool stuff like workload migration, analytics and more:

  • Flexible, multi-cloud support;
  • No vendor lock-in with data independent of the cloud;
  • Leverage cloud(s) of choice based on application needs;
  • Reduce risk with centralised, durable storage; and
  • Fast, low cost set up – no additional infrastructure to setup or manage.

Cloud Data Insights

Proactively monitor and manage infrastructure and data with intelligent cloud-based analytics. With CloudIQ you get access to a few neat things, including:

Predictive Modelling

  • Capacity Forecasting
  • Competing Workload Analysis

Accelerated Resolution

  • 3X Faster Insight
  • Performance Anomaly Detection

Broader Support

  • Primary Storage Portfolio
  • VMware
  • Connectrix
  • Isilon and PowerVault*

[image courtesy of Dell EMC]

Dell EMC ClarityNow

  • Single pane of glass view of all file and object storage;
  • Accelerated scan and indexing of unstructured data;
  • High-speed search across heterogeneous storage;
  • Detailed reporting with chargeback views; and
  • Data mobility for self-service archive in cloud.

[image courtesy of Dell EMC]

 

Thoughts and Further Reading

The Unity XT is an evolution of the Unity line, rather than a revolutionary array. Dell EMC are doing all the things you’d expect them to do with their midrange line, including improving performance and adding support for NVMe on most of the models. I imagine people still have questions about the breadth of Dell EMC’s storage portfolio, with a range of products available from Unity to SC to XtremIO to PowerMax. There’s also Isilon dominating the file options, and ECS delivering some interesting object capabilities. It’s clear there’s still some room for consolidation, but I think it’s smart that Dell EMC have stuck with the “portfolio company” line. Rather than seeing it as too many options, the idea is that they can sell you exactly what you want. They are, after all, in the business of making money. And if people want to keep buying Compellent, then Dell EMC are going to keep selling it to them. At least in the near term.

The Cloud Data Services announcements are also interesting. I’ve seen plenty of those cloud-native folks question why you’d want something like Isilon running on GCP. But those people aren’t really the ones who’ll benefit from these types of solutions. Rather, it’s the enterprises who’ve built up particular workloads that rely on file, but still need to shift some of those workloads to a public cloud provider. Remember, not every tech company goes out and builds products without having a user base that has asked for said products. Dell EMC are very much in the camp of not doing things without having a quantifiable appetite from the customer base.

I’m glad I don’t work in a job where I have to manage lots of storage devices anymore. Because I’m not so sure I’d like to do it on my mobile phone. But the ability to view the health of these devices via an app is appealing. Sure, you’re not necessarily going to want to use element managers on your phone, but when you need to know the status of something without diving too deep, something like CloudIQ becomes super useful. As does the ability to see all of your devices in one place with ClarityNow.

I didn’t hear anything revolutionary in Dell EMC’s storage announcements this year, but they continue to stay the course, and they’re setting the scene for bigger things to come. For another perspective, you can read Max’s thoughts on the storage announcements here. I’m looking forward to digging into what Dell Technologies Cloud really means, and hope to have something out on that in the next week or so.

Axellio Announces Azure Stack HCI Support

Microsoft recently announced their Azure Stack HCI program, and I had the opportunity to speak to the team from Axellio (including Bill Miller, Barry Martin, and Kara Smith) about their support for it.

 

Azure Stack Versus Azure Stack HCI

So what’s the difference between Azure Stack and Azure Stack HCI? You can think of Azure Stack as an extension of Azure – designed for cloud-native applications. Azure Stack HCI is more for your traditional VM-based applications – the kind that haven’t been (or can’t be) refactored for public cloud.

[image courtesy of Microsoft]

The Azure Stack HCI program has fifteen vendor partners on launch day, of which Axellio is one.

 

Axellio’s Take

Miller describes the Axellio solution as “[n]ot your father’s HCI infrastructure”, and Axellio tell me it “has developed the new FabricXpress All-NVMe HCI edge-computing platform built from the ground up for high-performance computing and fast storage for intense workload environments. It delivers 72 NVMe SSDs per server, and packs 2 servers into one 2U chassis”. Cluster sizes start at 4 nodes and run up to 16. Note that the form factor measurement in the table below includes any required switching for the solution. You can grab the data sheet from here.

[image courtesy of Axellio]

It uses the same Hyper-V based software-defined compute, storage and networking as Azure Stack and integrates on-premises workloads with Microsoft hybrid data services including Azure Site Recovery and Azure Backup, Cloud Witness and Azure Monitor.
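For those curious what that Hyper-V and Storage Spaces Direct foundation looks like in practice, here’s a minimal sketch of standing up an Azure Stack HCI cluster with the standard Windows failover clustering PowerShell cmdlets. The node and cluster names are hypothetical, and Axellio’s own deployment tooling may well handle these steps for you:

```shell
# Hypothetical node names -- Axellio cluster sizes start at 4 nodes.
$nodes = "hci-node1", "hci-node2", "hci-node3", "hci-node4"

# Validate hardware and configuration before forming the cluster.
Test-Cluster -Node $nodes -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"

# Create the failover cluster without automatically claiming eligible disks.
New-Cluster -Name "hci-cluster" -Node $nodes -NoStorage

# Enable Storage Spaces Direct to pool the local NVMe devices in each node.
Enable-ClusterStorageSpacesDirect
```

From there, volumes are carved out of the pool and VMs land on them via Hyper-V, with the Azure hybrid services (Azure Site Recovery, Azure Backup, Cloud Witness, Azure Monitor) wired up against the cluster.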

 

Thoughts and Further Reading

When Microsoft first announced plans for a public cloud presence, some pundits suggested they didn’t have the chops to really make it. It seems that Microsoft has managed to perform well in that space despite what some of the analysts were saying. What Microsoft has had working in its favour is that it understands the enterprise pretty well, and has made a good push to tap that market and help get the traditionally slower moving organisations to look seriously at public cloud.

Azure Stack HCI fits nicely in between Azure and Azure Stack, giving enterprises the opportunity to host workloads that they want to keep in VMs on a platform that integrates well with the public cloud services they may also wish to leverage. Despite what we want to think, not every enterprise application can be easily refactored to work in a cloud-native fashion. Nor is every enterprise ready to commit that level of investment to those applications, preferring instead to host them for a few more years before introducing replacement application architectures.

It’s no secret that I’m a fan of Axellio’s capabilities when it comes to edge compute and storage solutions. In speaking to the Axellio team, what stands out to me is that they really seem to understand how to put forward a performance-oriented solution that can leverage the best pieces of the Microsoft stack to deliver an on-premises hosting capability that ticks a lot of boxes. The ability to move workloads (in a staged fashion) so easily between public and private infrastructure should also have a great deal of appeal for enterprises that have traditionally struggled with workload mobility.

Enterprise operations can be a pain in the backside at the best of times. Throw in the requirement to host some workloads in public cloud environments like Azure, and your operations staff might be a little grumpy. Fans of HCI have long stated that the management of the platform, and the convergence of compute and storage, helps significantly in easing the pain of infrastructure operations. If you then take that management platform and integrate it successfully with your public cloud platform, you’re going to have a lot of fans. This isn’t Axellio’s only solution, but I think it does fit in well with their ability to deliver performance solutions in both the core and edge.

Thomas Maurer wrote up a handy article covering some of the differences between Azure Stack and Azure Stack HCI. The official Microsoft blog post on Azure Stack HCI is here. You can read the Axellio press release here.