Datadobi Announces StorageMAP

Datadobi recently announced StorageMAP – a “solution that provides a single pane of glass for organizations to manage unstructured data across their complete data storage estate”. I recently had the opportunity to speak with Carl D’Halluin about the announcement, and thought I’d share some thoughts here.


The Problem

So what’s the problem enterprises are trying to solve? They have data all over the place, and it’s no longer a simple activity to work out what’s useful and what isn’t. Consider the data on a typical file / object server inside BigCompanyX.

[image courtesy of Datadobi]

As you can see, there’re all kinds of data lurking about the place, including data you don’t want to have on your server (e.g. Barry’s slightly shonky home videos), and data you don’t need any more (the stuff you can move down to a cheaper tier, or even archive for good).

What’s The Fix?

So how do you fix this problem? Traditionally, you’ll try and scan the data to understand things like capacity, categories of data, age, and so forth. You’ll then make some decisions about the data based on that information and take actions such as relocating, deleting, or migrating it. Sounds great, but it’s frequently tough to make decisions about business data without understanding the business drivers behind it.

[image courtesy of Datadobi]
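If you’ve never had to do this at scale, here’s a rough idea of what that scanning step looks like in miniature. This is my own Python sketch against a hypothetical mount point, not any particular vendor’s tool (and real scanners collect far more metadata than this):

```python
import os
import time

def scan(root: str, old_after_days: int = 365):
    """Walk a tree and work out how much data hasn't been touched
    in a while. Note that atime isn't always tracked reliably."""
    now = time.time()
    total_bytes = 0
    old_bytes = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # file vanished or unreadable; skip it
            total_bytes += st.st_size
            age_days = (now - st.st_atime) / 86400
            if age_days > old_after_days:
                old_bytes += st.st_size
    return total_bytes, old_bytes

total, old = scan("/data")  # hypothetical share mount
print(f"{old / max(total, 1):.0%} of {total} bytes untouched for a year")
```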

What’s The Real Fix?

The real fix, according to Datadobi, is to add a bit more automation and smarts to the process, and this relies heavily on accurate tagging of the data you’re storing. D’Halluin pointed out to me that they don’t suggest you create complex tags for individual files, as you could be there for years trying to sort that out. Rather, you add tags to shares or directories, and let the StorageMAP engine make recommendations and move stuff around for you.

[image courtesy of Datadobi]

Tags can represent business ownership, the role of the data, any action to be taken, or other designations, and they’re user definable.

[image courtesy of Datadobi]
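To make the tagging idea a little more concrete, here’s a minimal Python sketch of how share-level tags might drive recommendations. The tag names, thresholds, and actions are all hypothetical, and this is just the shape of the idea rather than anything resembling the actual StorageMAP engine:

```python
from dataclasses import dataclass

@dataclass
class Share:
    path: str
    owner: str               # business ownership tag
    role: str                # role of the data, e.g. "records" or "scratch"
    days_since_access: int

def recommend(share: Share) -> str:
    """Suggest an action based on share-level tags and age."""
    if share.role == "scratch" and share.days_since_access > 90:
        return "delete"
    if share.role == "records" and share.days_since_access > 365:
        return "archive"
    if share.days_since_access > 180:
        return "move to cheaper tier"
    return "keep"

shares = [
    Share("/data/finance/records", "Accounting", "records", 400),
    Share("/data/eng/tmp", "Engineering", "scratch", 120),
]
for s in shares:
    print(f"{s.path}: {recommend(s)}")
```

The point is that the human effort goes into tagging at the share or directory level, and the per-file decisions fall out of the rules.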

How Does This Fix It?

You’ll notice that the process above looks awfully similar to the one before – so how does this fix anything? The key, in my opinion at least, is that StorageMAP takes away the requirement for intervention from the end user. Instead of going through some process every quarter to “clean up the server”, you’ve got a process in place that does the work for you. As a result, you’ll hopefully see improved cost control, better storage efficiency across your estate, and a little bit more value from your data.


Thoughts

Tools that take care of everything for you have always had massive appeal in the market, particularly as organisations continue to struggle with data storage at any kind of scale. Gone are the days when your admins had an idea where everything on a 9GB volume was stored, or why it was stored there. We now have data stored all over the place (both officially and unofficially), and it’s becoming impossible to keep track of it all.

The key thing to consider with these kinds of solutions is that you need to put in the work to tag your data correctly in the first place. So there needs to be some thought put into what your data looks like in terms of business value. Remember that mp4 video files might not be warranted in the Accounting department, but your friends in Marketing will be underwhelmed if you create some kind of rule to automatically zap mp4s. The other thing to consider is that you need to put some faith in the system. This kind of solution will be useless if folks insist on not deleting anything, or refuse to “believe” the output of the analytics and reporting. I used to work with customers who didn’t want to trust a vendor’s automated block storage tiering because “what does it know about my workloads?”. Indeed. The success of these kinds of intelligence and automation tools relies, to a certain extent, on folks moving away from faith-based computing as an operating model.

But enough ranting from me. I’ve covered Datadobi a bit over the last few years, and it makes sense that all of these announcements have finally led to the StorageMAP product. These guys know data, and how to move it.

Random Short Take #62

Welcome to Random Short Take #62. It’s Friday afternoon, so I’ll try and keep this one brief.

  • Tony was doing some stuff in his lab and needed to clean up a bunch of ports in his NSX-T segment. Read more about what happened next here.
  • Speaking of people I think of when I think about automation, Chris Wahl wrote a thought-provoking article on deep work that is well worth checking out.
  • While we’re talking about work, Nitro has published its 2022 Productivity Report. You can read more here.
  • This article from Backblaze on machine learning and predicting hard drive failure rates was interesting. Speaking of Backblaze, if you’re thinking about signing up with them, use my code and we’ll both get some free time.
  • Had a security problem? Need to recover? How do you know when to hit the big red button? Preston can help.
  • Speaking of doom and gloom (i.e. losing data), Curtis’s recent podcast episode covering ZFS and related technologies made for some great listening.
  • Have you been looking for “A Unique Technology to Scan and Interrogate Petabyte-Scale Unstructured Data Lakes”? Maybe, maybe not. If you have, Datadobi has you covered with Datadobi Query Language. You can read the press release here.
  • I love when bloggers take the time to do hands-on articles, and this one from Dennis Faucher covering VMware Tanzu Community Edition was fantastic.

Random Short Take #52

Welcome to Random Short Take #52. A few players have worn 52 in the NBA including Victor Alexander (I thought he was getting dunked on by Shawn Kemp but it was Chris Gatling). My pick is Greg Oden though. If only his legs were the same length. Let’s get random.

  • Penguin Computing and Seagate have been doing some cool stuff with the Exos E 5U84 platform. You can read more about that here. I think it’s slightly different to the AP version that StorONE uses, but I’ve been wrong before.
  • I still love Fibre Channel (FC), as unhealthy as that seems. I never really felt the same way about FCoE though, and it does seem to be deader than tape.
  • VMware vSAN 7.0 U2 is out now, and Cormac dives into what’s new here. If you’re in the ANZ timezone, don’t forget that Cormac, Duncan and Frank will be presenting (virtually) at the Sydney VMUG *soon*.
  • This article on data mobility from my preferred Chris Evans was great. We talk a lot about data mobility in this industry, but I don’t know that we’ve all taken the time to understand what it really means.
  • I’m a big fan of Tech Field Day, and it’s nice to see presenting companies take on feedback from delegates and putting out interesting articles. Kit’s a smart fellow, and this article on using VMware Cloud for application modernisation is well worth reading.
  • Preston wrote about some experiences he had recently with almost failing drives in his home environment, and raised some excellent points about resilience, failure, and caution.
  • Speaking of people I worked with briefly, I’ve enjoyed Siobhán’s series of articles on home automation. I would never have the patience to do this, but I’m awfully glad that someone did.
  • Datadobi appears to be enjoying some success, and has appointed Paul Repice as VP of Sales for the Americas. As the clock runs down on the quarter, I’m going two for one, and also letting you know that Zerto has done some work to enhance its channel program.

Back To The Future With Tintri

Disclaimer: I recently attended Storage Field Day 21.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Tintri recently presented at Storage Field Day 21. You can see videos of the presentation here, and download my rough notes from here.


Tintri? 

Remember Tintri? The company was founded in 2008, fell upon difficult times in 2018, and was acquired by DDN. It’s still going strong, and now offers a variety of products under the Tintri brand, including VMstore, IntelliFlash, and NexentaStor. I’ve had exposure to all of these different lines of business over the years, and was interested to see how it was all coming together under the DDN acquisition.


Does Your Storage Drive Itself?

Ever since I got into the diskslinger game, self-healing infrastructure has been talked about as the next big thing in terms of reducing operational overheads. We build this stuff, can teach it how to do things, surely we can get it to fix itself when it goes bang? As those of you who’ve been in the industry for some time would likely know, we’re still some ways off that being a reality across a broad range of infrastructure solutions. But we do seem closer than we were a while ago.

Autonomous Infrastructure

Tintri spent some time talking about what it was trying to achieve with its infrastructure by comparing it to autonomous vehicle development. If you think about it for a minute, it’s a little easier to grasp the concept of a vehicle driving itself somewhere, using a lot of telemetry and little computers to get there, than it is to think about how disk storage might be able to self-repair and redirect resources where they’re most needed. Of most interest to me was the distinction made between analytics and intelligence. It’s one thing to collect a bunch of telemetry data (something that storage companies have been reasonably good at for some time now) and analyse it after the fact to come to conclusions about what the storage is doing well and what it’s doing poorly. It’s quite another thing to use that data on the fly to make decisions about what the storage should be doing, without needing the storage manager to intervene.
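The difference is easier to see in code. Here’s a contrived Python sketch of my own (nothing to do with Tintri’s actual implementation) that contrasts after-the-fact analytics with an inline decision loop:

```python
import statistics

telemetry = [
    {"volume": "vm-01", "latency_ms": 2.1},
    {"volume": "vm-02", "latency_ms": 14.7},
    {"volume": "vm-03", "latency_ms": 1.8},
]

def analyse(samples):
    """Analytics: summarise what happened, after the fact."""
    avg = statistics.mean(s["latency_ms"] for s in samples)
    print(f"average latency was {avg:.1f} ms")

def decide(sample, threshold_ms=10.0):
    """Intelligence: act on each sample as it arrives,
    with no storage manager in the loop."""
    if sample["latency_ms"] > threshold_ms:
        print(f"redirecting resources to {sample['volume']}")

analyse(telemetry)          # a report for a human to read later
for s in telemetry:
    decide(s)               # an action taken on the fly
```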

[image courtesy of Tintri]

If you look at the various levels of intelligence, you’ll see that autonomy eventually kicks in and the concept of supervision and management moves away. The key to the success of this is making sure that your infrastructure is doing the right things autonomously.

So What Do You Really Get?

[image courtesy of Tintri]

You get an awful lot from Tintri in terms of information that helps the platform decide what it needs to do to service workloads in an appropriate fashion. It’s interesting to see how the different layers deliver different outcomes in terms of frequency as well. Some of this is down to physics, and time to value. The info in the cloud may not help you make an immediate decision on what to do with your workloads, but it will certainly help when the hapless capacity manager comes asking for the 12-month forecast.


Conclusion

I was being a little cheeky with the title of this post. I was a big fan of what Tintri was able to deliver in terms of storage analytics with a virtualisation focus all those years ago. It feels like some things haven’t changed, particularly when looking at the core benefits of VMstore. But that’s okay, because all of the things that were cool about VMstore back then are still cool, and absolutely still valuable in most enterprise storage shops. I don’t doubt that there are VMware shops that have taken up vVols and wouldn’t get as much out of VMstore as those shops running oldey timey LUNs, but there are plenty of organisations that just need storage to host VMs on, storage that gives them insight into how it’s performing. Maybe it’s even storage that can move some stuff around on the fly to make things work a little better.

It’s a solid foundation upon which to add a bunch of pretty cool features. I’m not 100% convinced that what Tintri is proposing is the reality in a number of enterprise shops (have you ever had to fill out a change request to storage vMotion a VM before?), but that doesn’t mean it’s not a noble goal, and it’s certainly one worth pursuing. I’m a fan of any vendor that is actively working to take the work out of infrastructure, allowing people to focus on the business of doing business (or whatever it is they need to focus on). It looks like Tintri has made some real progress towards reducing the overhead of infrastructure, and I’m keen to see how that plays out across the product portfolio over the next year or two.


Automation Anywhere – The Bots Are Here To Help

Disclaimer: I recently attended Tech Field Day 19.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.


Automation Anywhere recently presented at Tech Field Day 19. You can see videos of their presentation here, and download my rough notes from here.


Robotic What?

Robotic Process Automation (RPA) is the new hotness in enterprise software. Automation Anywhere raised over $550 million in funding in the last 12 months. That’s a lot of money. But what is RPA? It’s a way to develop workflows so that business processes can be automated. One of the cool things, though, is that it can build these automations by observing the user perform the actions in the GUI, and then repeating those actions. There’s potential to make this more accessible to people who aren’t necessarily software development types.
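As a toy illustration of the record-and-replay idea, here’s a short Python sketch of my own. It has nothing to do with Automation Anywhere’s actual engine, and the action types are made up; a real RPA tool hooks GUI events rather than being fed a list:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    kind: str      # e.g. "click" or "type"
    target: str    # e.g. a field or button name
    value: str = ""

class Recorder:
    def __init__(self):
        self.actions: list[Action] = []

    def observe(self, action: Action):
        # A real tool would capture these from the GUI as the
        # user works; here we just append to a list.
        self.actions.append(action)

    def replay(self, perform: Callable[[Action], None]):
        for action in self.actions:
            perform(action)

bot = Recorder()
bot.observe(Action("click", "New Invoice"))
bot.observe(Action("type", "Amount", "42.00"))
bot.observe(Action("click", "Submit"))
bot.replay(lambda a: print(f"{a.kind} {a.target} {a.value}".strip()))
```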

Automation Anywhere started back in 2003, and the idea was to automate any application. Automation Anywhere want to “democratise automation”, and believe that “anything that can be automated, should be automated”. The real power of this kind of approach is that it, potentially, allows you to do things you never did before. Automation Anywhere want us to “imagine a world where every job has a digital assistant working side by side, allowing people to do what they do best”.

[image courtesy of Automation Anywhere]


Humans are the Resource

This whole automating all the things mantra has been around for some time, and the idea has always been that we’re “[m]oving humans up the value chain”. Not only that, but RPA isn’t about digital transformation in the sense that a lot of companies see it currently, i.e. as a way to change the way they do things to better leverage digital tools. What’s interesting is that RPA is more focused on automating what you already have. You can then decide whether the process is optimal or whether it should be changed. I like this idea, if only because of the number of times I’ve witnessed small and large companies go through “transformations”, only to realise that what they were doing previously was pretty good, and they’d just made a few mistakes in terms of manual process creeping in.

Automation Anywhere told us that some people start with “I know that my job cannot be automated”, but it turns out that about 80% of their job is based on business tools, and a lack of automation is holding them back from thinking strategically. We’ve seen this problem throughout the various industrial revolutions that have occurred, and people have invariably argued against steam-powered devices, and factory lines, and self-healing infrastructure.


Thoughts and Further Reading

Automation is a funny thing. It’s often sold to people as a means to give them back time in their day to do “higher order” activities within the company. This has been a message that’s been around as long as I’ve been in IT. There’s an idea that every worker is capable of doing things that could provide more value to the company, if only they had more time. Sometimes, though, I think some folks are just good at breaking rocks. They don’t want to do anything else. They may not really be capable of doing anything else. And change is hard, and is going to be hard for them in particular. I’m not anticipating that RPA will take over every single aspect of the workplace, but there’s certainly plenty of scope for it to have a big presence in the modern enterprise. So much time is wasted on process that should really be automated, and automating it can give you back a lot of time in your day. It also provides the consistency that human resources lack.

As Automation Anywhere pointed out in their presentation, “every piece of software in the world changes how we work, but rarely do you have the opportunity to change what the work is”. And that’s kind of the point, I think. We’re so tied to doing things in a business a certain way, and oftentimes we fill the gaps in workflows with people because the technology can’t keep up with what we’re trying to do. But if you can introduce tools into the business that help you move past those shortfalls in workflow, and identify ways to improve those workflows, that could really be something interesting. I don’t know if RPA will solve all of our problems overnight, because humans are unfortunately still heavily involved in the decision-making process inside the enterprise, but it seems like there’s scope to do some pretty cool stuff with it.

If you’d like to read some articles that don’t just ramble on, check out Adam’s article here, Jim’s view here, and Liselotte’s article here. Marina posted a nice introduction to Automation Anywhere here, and Scott’s impression of Automation Anywhere’s security approach made for interesting reading. There’s a wealth of information on the Automation Anywhere website, and a community edition you can play with too.

Random Short Take #7

Here are a few links to some random things that I think might be useful, to someone. Maybe.

Puppet Announces Puppet Discovery, Can Now Find and Manage Your Stuff Everywhere

Puppet recently wrapped up their conference, PuppetConf2017, and made some product announcements at the same time. I thought I’d provide some brief coverage of one of the key announcements here.


What’s a Discovery Puppet?

No, it’s Puppet Discovery, and it’s the evolution of Puppet’s focus on container and cloud infrastructure discovery, as well as the result of feedback from customers on what’s been a challenge for them. Puppet describe it as “a new turnkey approach to traditional and cloud resource discovery”.

It also provides:

  • Agentless service discovery for AWS EC2, containers, and physical hosts;
  • Actionable views across the environment; and
  • The ability to bring unmanaged resources under Puppet management.

Puppet Discovery currently allows for discovery of VMware vSphere VMs, AWS and Azure resources, and containers, with support for other cloud vendors, such as Google Cloud Platform, to follow.
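To give a flavour of what agentless cloud discovery involves, here’s a short sketch of my own using boto3 against AWS EC2. The “managed-by” tag convention is hypothetical, and this is obviously not how Puppet Discovery is implemented; it just shows the enumerate-and-flag pattern:

```python
import boto3  # AWS SDK for Python

MANAGEMENT_TAG = "managed-by"  # hypothetical tagging convention

ec2 = boto3.client("ec2", region_name="us-east-1")

def find_unmanaged_instances():
    """Yield EC2 instance IDs that lack a management tag."""
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
                if MANAGEMENT_TAG not in tags:
                    yield instance["InstanceId"]

for instance_id in find_unmanaged_instances():
    print(f"unmanaged: {instance_id}")
```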


Conclusion and Further Reading

Puppet have been around for some time and do a lot of interesting stuff. I haven’t covered them previously on this blog, but that doesn’t mean they’re not worth covering. I have a lot of customers leveraging Puppet in the wild, and any time companies make the discovery, management and automation of infrastructure easier, I’m all for it. I’m particularly enthusiastic about the hybrid play, as I agree with Puppet’s claim that a lot of these types of solutions work well on static, internal networks but struggle when technologies such as containers and public cloud come into play.

Just like VM sprawl before it, cloud sprawl is a problem that enterprises, in particular, are starting to experience with more frequency. Tools like Discovery can help identify just what exactly has been deployed. Once users have a better handle on that, they can start to make decisions about what needs to stay and what should go. I think this is key to good infrastructure management, regardless of whether you wear jeans and a t-shirt to work or prefer a suit and tie.

The press release for Puppet Discovery can be found here. You can apply to participate in the preview phase here. There’s also a blog post covering the announcement here.

Brisbane VMUG – September 2016


The September version of the Brisbane VMUG meeting will be held on Thursday 22nd September at Telstra in the city from 2 – 4 pm. It’s sponsored by SimpliVity and should make for a great session.

Here’s the agenda:

  • 2:00 – 2:15pm: Registration and Welcome
  • 2:15 – 2:30pm: VMUG and VMworld Update
  • 2:30 – 3:15pm: SimpliVity HCI market announcements from VMworld 2016
    1. SimpliVity All Flash
    2. Enterprise Scale for VDI
    3. Database backup enhancements
    4. Automated DR testing
    5. Analyst Coverage


  • 3:15 – 3:45pm: vROps and SimpliVity live demo
  • 3:45 – 4:00pm: Q&A with customers

SimpliVity have done some great presentations for VMUG in the past and I’m really looking forward to hearing about their recent product announcements and seeing their vROps demo. You can find out more information and register for the event here. I hope to see you there. Also, if you’re interested in sponsoring one of these events, please get in touch with me and I can help make it happen.

Brisbane VMUG – August 2016


The August edition of the Brisbane VMUG will be held on Tuesday 8th August at EMC’s office in the city (Level 11, 345 Queen Street, Brisbane) from 2:30 – 4:30 pm. It’s sponsored by VMware and should be a lot of fun.

Here’s the agenda:

I’m really looking forward to Michael Francis continuing his enablement series on vRO. You can find out more information and register for the event here. I hope to see you there. Also, if you’re interested in sponsoring one of these events, please get in touch with me and I can help make it happen.