VMware – VMworld 2019 – HBI2537PU – Cloud Provider CXO Panel with Cohesity, Cloudian and PhoenixNAP

Disclaimer: I recently attended VMworld 2019 – US.  My flights and accommodation were paid for by Digital Sense, and VMware provided me with a free pass to the conference and various bits of swag. There is no requirement for me to blog about any of the content presented and I am not compensated by VMware for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Here are my rough notes from “HBI2537PU – Cloud Provider CXO Panel with Cohesity, Cloudian and PhoenixNAP”, a panel-type presentation with the following people:

You can grab a PDF copy of my notes from here.

Introductions are done.

YR: William, given your breadth of experience, what are some of the emerging trends you’ve been seeing?

WB: Companies are struggling to keep up with the pace of information generation. Understanding the data, storing and retaining it, and protecting it. Multi-cloud adds a lot of complexity. We’ve heard studies that say 22% of data generated is actually usable. It’s just sitting there. Public cloud is still hot, but it’s settling down a little.

YR: William comes from a massive cloud provider. What are you guys using?

WB: We’ve standardised on vCloud Director (vCD) and vSphere. We came from building our own, but it wasn’t providing the value we hoped it would. Customers want a seamless way to manage multiple cloud resources.

YR: Are you guys familiar with VCPP?

AP: VCPP is the crown jewel of our partner program at VMware. 4000+ providers, 120+ countries, 10+ million VMs, 10000+ DCs. We help you save money and make money (things are services-ready). We’re continuing to invest in vCD. Kubernetes, GPUs, etc. Lots of R&D.

YR: William, you mentioned you standardised on the VMware platform. Talk to us about your experience. Why vCD?

WB: It’s been a checkered past for vCD. We were one of the first five on the vCloud Express program in 2010 / 11. We didn’t like vCD in its 1.0 version. We thought we could do this better. And we did. We launched the first on-demand, pay by the hour public cloud for enterprise in 2011. But it didn’t really work out. 2012 / 13 we started to see investments being made in vCD. 5.0 / 5.5 improved. Many people thought vCD was going to die. We now see a modern, flexible portal that can be customised. And we can take our devs and have them customise vCD, rather than build a customised portal. That’s where we can put our time and effort. We’ve always done things differently. Always been doing other things. How do we bring our work in visual cloud into that cloud provider portal with vCD?

YR: You have an extensive career at VMware.

RR: I was one of the first people to take vCD out to the world. But Enterprise wasn’t mature enough. When we focused on SPs, it was the right thing to do. DIY portals need a lot of investment. VMware allows a lot of extensibility now. For us, as Cohesity, we want to be able to plug in to that as well.

WB: At one point we had 45 devs working on a proprietary portal.

YR: We’ve been doing a lot on the extensibility side. What role are services playing in cloud providers?

AP: It takes away the complexities of deploying the stack.

JT: We’re specifically in object. A third of our customers are service providers. You guys know that object is built for scale, easy to manage, cost-effective. 20% of the data gets used. We hear that customers want to improve on that. People are moving away from tape. There’s a tremendous opportunity for services built on storage. Amazon has shown that. Data protection like Cohesity. Big data with Splunk. You can offer an industry standard, but differentiate based on other services.

YR: As we move towards a services-oriented world, William how do you see cloud management services evolving?

WB: It’s not good enough to provide some compute infrastructure any more. You have to do something more. We’re stubbornly focussed on different types of IaaS. We’re not doing generic x86 on top of vSphere. Backup, DR – those are in our wheelhouse. From a platform perspective, more and more customers want some kind of single pane of glass across their data. For some that’s on-premises, for some it’s public, for some it’s SaaS. You have to be able to provide value to the customer, or they will disappear. Object storage, backup with Cohesity. You need to keep pace with data movement. Any cloud, any data, anywhere.

AP: I’ve been at VMware long enough not to drink the Kool-Aid. Our whole cloud provider business is rooted in some humility. vCD can help other people doing better things to integrate. vCD has always been about reducing OPEX. Now we’re hitting the top line. Any cloud management platform today needs to be open, extensible, and not try to do everything.

YR: Is the crowd seeing pressure on pure IaaS?

Commentator: Coming from an SP to enterprise is different. Economics. Are you able to do showback with vCD 9 and vROps?

WB: We’re putting that in the hands of customers. Looking at CloudHealth. There’s a benefit to being in the business management space. You have the opportunity to give customers a better service. That, and more flexible business models. Moving into flexible billing models gives more freedom to the enterprise customer. Unless you’re the largest of the large, enterprises have difficulty acting as a service provider. Citibank are an exception to this. Honeywell do it too. If you’re Discount Tire – it’s hard. You’re the guy providing the service, and you’re costing them money. There’s animosity – and there’s no choice.

Commentator: Other people have pushed to public because chargeback is more effective than internal showback with private cloud.

WB: IT departments are poorly equipped to offer a breadth of services to their customers.

JT: People are moving workloads around. They want choice and flexibility. VMware with S3 compatible storage. A common underlying layer.

YR: Economics, chargeback. Is VMware (and VCPP) doing enough?

WB: The two guys to my right (RR and JT) have committed to building products that let me do that. I’ve been working on object storage use cases. I was talking to a customer. They’re using our IaaS and connected to Amazon S3. You’ve gone to Amazon. They didn’t know about it though. Experience and cost that can be the same or better. Egress in Amazon S3 is ridiculous. You don’t know what you don’t know. You can take that service and deliver it cost-effectively.

YR: RR talk to us about the evolution of data protection.

RR: Information has grown. Data is fragmented. Information placement is almost unmanageable. Services have now become available in a way that can be audited, secured, managed. At Cohesity, first thing we did was data protection, and I knew the rest was coming. Complexity’s a problem.

YR: JT. We know Cloudian’s a leader in object storage. Where do you see object going?

JT: It’s the underlying storage layer of the cloud. Brings down cost of your storage layer. It’s all about TCO. What’s going to help you build more revenue streams? Cloudian has been around since 2011. New solutions in backup, DR, etc, to help you build new revenue streams. S3 users on Amazon are looking for alternatives. Many of Cloudian’s customers are ex-Amazon customers. What are we doing? vCD integration. Search Cloudian and vCD on YouTube. Continuously working to drive down the cost of managing storage. 1.5PB in a 4RU box in collaboration with Seagate.

WB: Expanding service delivery, specifically around object storage, is important. You can do some really cool stuff – not just backup, it’s M&E, it’s analytics. Very few of our customers are using object just to store files and folders.

YR: We have a lot of providers in the room. JT can you talk more about these key use cases?

JT: It runs the gamut. You can break it down by verticals. M&E companies are offering editing suites via service providers. People are doing that for the legal profession. Accounting – storing financial records. Dental records and health care. The back end is the same thing – compute with S3 storage behind it. Cloudian provides multi-tenanted, scalable performance. Cost is driven down as you get larger.

YR: RR your key use cases?

RR: DRaaS is hot right now. When I was at VMware we did stuff with SRM. DR is hard. It’s so simple now. Now every SP can do it themselves. Use S3 to move data around from the same interface. And it’s very needed too. Everyone should have ubiquitous access to their data. We have that capability. We can now do vulnerability scans on the data we store on the platform. We can tell you if a VM is compromised. You can orchestrate the restoration of an environment – as a service.

YR: WB what are the other services you want us to deliver?

WB: We’re an odd duck. One of our major practices is information security. The idea that we have intelligent access to data residing in our infrastructure. Being able to detect vulnerabilities, taking action, sending an email to the customer, that’s the type of thing that cloud providers have. You might not be doing it yet – but you could.

YR: Security, threat protection. RR – do you see Cohesity as the driver to solve that problem?

RR: Cohesity will provide the platform. Data is insecure because it’s fragmented. Cohesity lets you run applications on the platform. Virus scanners, run books, all kinds of stuff you can offer as a service provider.

YR: William, where does the onus lie, how do you see it fitting together?

WB: The key for us is being open. E.g. Cohesity integration into vCD. If I don’t want to – I don’t have to. Freedom of choice to pick and choose where we want to deliver our own IP to the customer. I don’t have to use Cohesity for everything.

JT: That’s exactly what we’re into. Choice of hardware, management. That’s the point. Standards-based top end.

YR: Security

*They had 2 minutes to go but I ran out of time and had to get to another meeting. Informative session. 4 stars.

VMware – VMworld 2019 – HCI2888BU – Site Recovery Manager 8.2: What’s New and Demo

Here are my rough notes from “HCI2888BU – Site Recovery Manager 8.2: What’s New and Demo”, presented by Cato Grace and Velina Krasteva (Senior PM for SRM and vSphere Replication, VMware). You can grab a PDF copy of my notes from here.

 

SRM Product Overview

When you hear “disaster recovery” what do you think of? Natural disasters? DR is not just about natural disasters. It can also be power, networking, or people. Site Recovery Manager supports hypervisor-based and array-based replication. SRM is about adding value on top of your replication.

Workflows?

Non-disruptive testing

  • Automated testing in an isolated network
  • Ensures predictability of RTO

Automated Failback

  • Re-protect using original recovery plan
  • Streamlines bi-directional migrations

Automated Failover

  • Runbook automation
  • Single-click initiation
  • Emphasises fastest possible recovery after an outage

Planned Migration

  • Ensures zero data loss and app consistency
  • Enables disaster avoidance and DC maintenance or migration

*Demo

VMware Site Recovery (DRaaS) for VMware Cloud on AWS

DRaaS

  • Accelerate time to protection
  • Cloud economics with on-demand pricing
  • Integrated into VMware Cloud console
  • Post-failover cluster scaling with Elastic DRS
  • Inter-region protection

 

What’s New In 8.2?

Simplified deployment and operations with SRM as an appliance

  • Parity with Windows version
  • Simple OVF deployment
  • SRAs are set up as Docker containers within the appliance
  • Greatly simplifies SRM deployment, maintenance, and upgrades

Built on Photon OS

Upgrading to the appliance – upgrade to 8.2 on Windows first, then migrate to the appliance. There’s a blog post on that here, and documentation here.

Improved ease of use with config import / export UI

  • Now entirely UI based
  • Export / backup and import / restore capabilities for entire SRM configuration
  • Includes entire SRM configuration (VMs, PGs, RPs, IP customisation, array managers, etc)
  • Enables simple DB migration

API and vRO workflow enhancements

  • Configure IP customisation
  • Add / remove datastores from array-based replication PGs
  • Remove post-power-on tasks
  • Check status of VR replication
  • List replicated VMs
  • Get VR configuration
  • List replicated RDMs and Array Managers

New Workflows

New in SRM

  • Set IP settings
  • Update group datastore
  • Delete callouts

New in vSphere Replication

  • Check replication stats

Enhancements to SRM pack for vROps

  • Overcome DR monitoring challenges with global visibility into SRM environment
  • Mitigate risk associated with SRM component downtime
  • New views displaying
    • Recovery status
    • Count of VMs in recovery plans
    • Lots more
  • New alarms for VMs that are in Protection Groups but not part of recovery plans

vSphere Replication Pack for vROps

Ability to monitor

  • RPO violations
  • Per VM metrics
  • Incoming replications
  • Outgoing replications
  • Replication status
  • Transferred bytes
  • Alerts
  • Replication Settings

UI Enhancements

  • Adjust colour schemes for optimal viewing
  • Capacity information available in the Protection Groups Datastores tab
  • Ability to provide in-product feedback – the smiley face icon

Support for NSX-T

  • Integration with NSX-T lets you use network virtualisation to simplify the creation and execution of recovery plans and accelerate recovery

Encrypted VMs Support

  • Full support for replicating, protecting, and recovering encrypted VMs

Encryption of replication traffic available per VM

Improved Logging Options with Syslog Support

  • Increased awareness of potential issues
  • Easier to troubleshoot issues
  • More opportunity for analysis

 

Tech Preview

We also went through a tech preview of what might be on the horizon with SRM. Note that this is all futures, and VMware may or may not end up delivering this as part of a future product.

  • SRM Support for vVols with Array-based Replication
  • Support protection and orchestrated recovery of VMs that are running on a Virtual Volumes datastore and are replicated by policy-based native array replication
  • Automatic protection
  • Disk resizing feature for vSphere Replication

 

Thoughts

I always enjoy these SRM sessions. Every time I make it along to VMworld US I try and get to Cato’s sessions. Even if you’re familiar with SRM, they’re a great summary of current, latest, and future product capability. SRM is a really cool solution for managing both migration and DR activities. And I don’t want to think about the number of times vSphere Replication has gotten us out of a spot doing cross-platform storage migrations. Cato and the team really know their stuff, so if you get a chance, do check out their other sessions this week.

VMware – VMworld 2019 – Monday General Session Notes

Here are my rough notes from the Monday General Session at VMworld US 2019. You can grab a PDF of them here.

Pat Gelsinger takes the stage. Welcome to VMworld 2019 and thank you. Thrilled to be back in San Francisco, the hub of tech on the planet. Sorry to those Las Vegas lovers out there.

Tech in the age of any. There’s a natural tension between choice and complexity. We create that choice, but we have to make it work.  “Strength lies in differences, not similarities” (Stephen Covey).

At VMworld there are 100 countries represented, 5000 different organisations, 22% are learning Klingon (!). NuqneH!

[image courtesy of VMware]

Welcome the Pivotal family to team VMware. Welcome Carbon Black to the family too. Read more about those acquisitions here.

In October it will be 4 decades in technology for Pat. Each of these areas is permeating our everyday lives.

2009

  • 52 Million apps in the world
  • 5 million app developers

2019

  • 335 Million apps
  • 13.5 million developers

 

Technology for Good

Tech amplifies the good and the bad. But what does it mean to shape technology as a force for good? We, as technologists, need to participate in the shaping. For example, schools in Nairobi, Kenya. On Friday, Callum Eade swam the English Channel to raise money for cancer research. Callum takes the stage.

PG: Why do it?

CE: It was an insurmountable goal. Something I wanted to do was “Make My Mark”. I’m also wearing a jacket from Tour de Cure. A specific cancer – childhood cancer. No one should have to bury a child. A program for a tumour called DIPG. Mortality rate unchanged – every diagnosed child dies within the first 12 months.

PG: My son battled Hodgkin’s lymphoma. It takes a team.

CE: It does. My wife, son, and daughter are here in the front row. The training was arduous. Pat, you stood up at our all hands meeting in APJ six months ago and reinforced the importance of career and family balance.

PG: Why the English Channel?

CE: 10 years ago it seemed like a really good idea. The distance is the same as from SFO to Palo Alto and it was really cold.

PG: What’s next?

CE: As far as the charity was concerned, a lot of people in this room contributed. The request from the pledge was $100K. We raised $130K. We’re going to go on holiday as a family, do something together.

CE leaves the stage.

PG: From floating hospitals to flying hospitals. Let’s hear from Angel MedFlight – Video.

Higher success bringing the patient to the organ.

But what about the law of unanticipated consequences? We’ve brought together communities online, and then created platforms for disinformation. The end of privacy. Bitcoin is bad – it’s not okay. Apparently. VMware wants to “Do good engineering, and engineering for good”. There are 12 million non-profits in the world. The non-profit sector is 4.5% of global GDP – larger than the GDP of Germany. They do great work, but their technology sucks. Video – working with TechSoup enables all of those other missions out there. Grow the tech talent pool. Historically served the smallest organisations. Working with a million coders. 24% are women (not enough). Rebecca Masisak (CEO) is here. Working with them to scale their mission.

Who’s Doing It?

“There’s never been a more exciting time to be a technologist”. There’s also never been a more important time to be a technologist. Who will operationalise the powers of edge, AI, IoT in the world? You. No one is more qualified, more capable to do that. If you’re a VCP, you’re qualified. Shout out to VCIX folks, VMUG leaders, the VCDXs.

The VMware vision remains unchanged. Any cloud, any app, any device, with intrinsic security. Is this innovation helping us or harming us? Does it inspire us? Empower us?

Customers

  • Comcast – getting a handle on chaos. Built a modern private cloud with connection to the public cloud.
  • FedEx – leveraging VCF and Pivotal.
  • IHS Markit

 

Multi-Cloud Strategy

VMware have worked hard on multi-cloud, and have built a preferred partnership with Amazon. They’re also working with IBM, Google, and Microsoft, enabling an “any-cloud” environment. They want you to be able to “Build, Run, Manage, Connect, Protect” your workloads. Kubernetes has emerged as ubiquitous infrastructure, joining developers and IT operators. Now Pat’s going to “get some help from a Kubernetes celebrity”. Joe Beda – the first committer to Kubernetes, this guy’s a rock star – and he’s now a Principal Engineer at VMware. Joe takes the stage. PG: “Can we take a selfie?”. Joe talks about “Goldilocks level between devs and IT ops”. Wanted it to span any infrastructure.

Complexity vs flexibility.

What enterprises need is an opinionated, secure path through this journey, and VMware is well placed to provide it, having acquired Heptio, Pivotal, and Bitnami. We’re excited to be bringing this to market – VMware Tanzu (Swahili for branch).

  • Build Modern Apps
  • Run Enterprise Kubernetes
  • Manage Kubernetes for developers AND IT

Pivotal and Kubernetes like peanut butter and jelly. What if we build it into vSphere? Announcing Project Pacific.

  • Uniting vSphere and Kubernetes
  • Extending vSphere for ALL modern apps
  • Enabling Dev and IT Ops collaboration
  • 30% faster than a traditional Linux VM, 8% faster than bare metal

VMware Tanzu Mission Control. How do you get started right now? PKS is the answer.

Managing Multi-cloud

Talked about build, but what about run? The challenge is how we manage across a diverse environment. There’s a loss of efficiency, security. This is why VMware acquired CloudHealth and they’ve now announced “CloudHealth Hybrid”. But what’s the difference between multi-cloud and hybrid cloud? The goal is to give you the tools to manage or have consistent ops in a multi-cloud world. What about hybrid cloud? You also want a consistent infrastructure experience. VMware Cloud Foundation is the platform for Hybrid cloud.

Migrate or modernise? Migrate and modernise?

Cost of migration is expensive

  • Native cloud – $1M to migrate 1000 VMs
  • Years to refactor apps

Migrate to VMware Cloud – it’s easy :)

Video – Jensen Huang.

Also working with MS Azure (announced at DTW earlier this year). Now generally available in the US and Europe, with Australia by the end of the year. Announced Project Dimension with Dell. VMware Cloud on Dell EMC is now available. DCaaS. Partnership with Equinix.

Operate in a hybrid cloud world?

Taking vRealize hybrid

  • Automation
  • Operation
  • Network insight
  • Log insight

DRaaS and DPaaS – doing a lot with Dell EMC.

 

Edge and Telco

What’s the edge?

1. Where the physical and the digital worlds intersect

2. Distributed, low-latency infrastructure located close […]

Thin edge, Medium Edge, Thick Edge

5G

Massive capital scale out

Vertical hardware to horizontal software

Thrilled to acquire Uhana.

Video – Verizon – 5G will save the world

Started this journey a number of years ago. Started in the DC. Now connecting all the clouds together.

NSX-T supports all types of workloads

VeloCloud – #1 in SD-WAN marketshare, 150K+ sites

AVI Networks acquisition

vRealize Network Insight

It’s time to move all that hardware into software

  • 59% reduction in CAPEX
  • 55% reduction in OPEX

 

Customer Testimonials

Sanjay Poonen (COO) takes the stage. You fired up?

2 Customers

  • Rathi Murthy – CTO of GAP
  • Tim Snyder – Deputy CTO of Freddie Mac

RM: 80% of purchases still happen in store. We’re all getting more and more impatient by the day. Speed, responsiveness, and scale are all becoming critical. 90% of our production workloads are on VMware.

TS: We have 600 apps. 5 or 10 SaaS. The others needed to move to cloud. Why not refactor? We’re a heavily regulated environment. We can’t take the risks to transform our apps. Sitting on 100 million lines of code. Moved all apps to VMC on AWS. Well, 95% +. We’re 2/3 through the journey.

SP: Do you migrate, modernise, do both?

RM: 1000 apps across the board. Critical to us to migrate what’s critical to the business without disruption. There hasn’t been a database we didn’t like. Being able to modernise without disrupting was a challenge. Pivotal played a critical role. 60 – 70% of apps modernised through PKS.

SP: We don’t call it lift and shift, we prefer migrate.

SP: Your migrate strategy is Azure. But in the middle is VMware. Yes.

SP: Tim, in the M&M (migrate and modernise) world.

TS: Running 5% of apps in containers now. We wanted to push to migrate this year.

Why is consumer life so easy and enterprise so hard?

Consumer simple, enterprise secure. “Every employee in the world should be using Workspace ONE”. Dell unified workspace product.

Why invest in digital employee experience?

  • 23% more likely to be an industry leader
  • 60% more likely to be a growth company
  • 41% more likely to have a positive Employee Net Promoter Score (eNPS)

Video plays. SP: My latest favourite topic is security.

“Intrinsic security”

5 control points

  • Endpoint device
  • Endpoint workload
  • Network infrastructure
  • Apps
  • Data

Talked briefly about the Carbon Black acquisition.

Solid session. 3.5 stars. Watch the replay here. Read more about Project Pacific and Tanzu here. Scott Lowe’s coverage of the general session can be found here.

Pure Storage – Configuring ObjectEngine Bucket Security

This is a quick post as a reminder for me next time I need to do something with basic S3 bucket security. A little while ago I was testing Pure Storage’s ObjectEngine (OE) device with a number of data protection products. I’ve done a few articles previously on what it looked like from the Cohesity and Commvault perspective, but thought it would be worthwhile to document what I did on the OE side of things.

The first step is to create the bucket in the OE dashboard.

You’ll need to call it something, and there are rules around the naming convention and length of the name.

In this example, I’m creating a bucket for Commvault to use, so I’ve called this one “commvault-test”.

Once the bucket has been created, you should add a security policy to the bucket.

Click on “Add” and you’ll be prompted to get started with the Bucket Policy Editor.

I’m pretty hopeless with this stuff, but fortunately there’s a policy generator on the AWS site you can use.

Once you’ve generated your policy, click on Save and you’ll be good to go. Keep in mind that any user you reference in the policy will need to exist in OE for the policy to work.

Here’s the policy I applied to this particular bucket. The user is commvault, and the bucket name is commvault-test.

{
  "Id": "Policy1563859773493",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1563859751962",
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::commvault-test",
      "Principal": {
        "AWS": [
          "arn:aws:iam::0:user/commvault"
        ]
      }
    },
    {
      "Sid": "Stmt1563859771357",
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::commvault-test/*",
      "Principal": {
        "AWS": [
          "arn:aws:iam::0:user/commvault"
        ]
      }
    }
  ]
}

You can read more about the policy elements here.
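If you’d rather not click through the AWS policy generator every time, the same two-statement structure can be produced programmatically. Here’s a minimal Python sketch (standard library only) that mirrors the policy above; the Sid values it produces are arbitrary labels, and it just prints JSON to paste into the Bucket Policy Editor.

```python
import json

def make_bucket_policy(bucket, user_arn):
    """Build an S3 bucket policy granting one user full access to the
    bucket itself and to every object inside it (two statements)."""
    statements = []
    for suffix in ("", "/*"):  # bucket ARN first, then object ARN
        statements.append({
            "Sid": "Stmt-" + bucket + ("-objects" if suffix else ""),
            "Action": "s3:*",
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::" + bucket + suffix,
            "Principal": {"AWS": [user_arn]},
        })
    return {"Version": "2012-10-17", "Statement": statements}

policy = make_bucket_policy("commvault-test", "arn:aws:iam::0:user/commvault")
print(json.dumps(policy, indent=2))
```

As with the generator output, any user referenced in the Principal still needs to exist in OE for the policy to work.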

Formulus Black Announces Forsa 3.0

Formulus Black recently announced version 3.0 of its Forsa product. I had the opportunity to speak with Mark Iwanowski and Jing Xie about the announcement and wanted to share some thoughts here.

 

So What’s A Forsa Again?

It’s a software solution for running applications in memory without needing to re-tool your applications or hardware. You can present persistent storage (think Intel Optane) or non-persistent memory (think DRAM) as a block device to the host and run your applications on that. Here’s a look at the architecture.

[image courtesy of Formulus Black]

Is This Just a Linux Thing?

No, not entirely. There’s Ubuntu and CentOS support out of the box, and Red Hat support is imminent. If you don’t use those operating systems though, don’t stress. You can also run this using a KVM-based hypervisor. So anything supported by that can be supported by Forsa.

But What If My Memory Fails?

Formulus Black has a technology called “BLINK” which provides the ability to copy your data down to SSDs, or you can failover the data to another host.

Won’t I Need A Bunch Of RAM?

Formulus Black uses Bit Markers – a memory efficient technology (like deduplication) – to make efficient use of the available memory. They call it “amplification” as opposed to deduplication, as it amplifies the available space.
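Bit Markers itself is proprietary, but the general idea behind deduplication-style space savings can be illustrated with a toy Python sketch: identical blocks are stored once and referenced by content hash, so logical capacity is “amplified” relative to physical usage. This is purely illustrative and not how Formulus Black’s technology actually works.

```python
import hashlib

class DedupStore:
    """Toy content-addressed block store: identical blocks are kept once.
    Illustrative only - not Formulus Black's actual Bit Marker mechanism."""
    def __init__(self):
        self.blocks = {}   # digest -> block bytes (stored once)
        self.refs = []     # logical volume: ordered list of digests

    def write(self, block):
        digest = hashlib.sha256(block).hexdigest()
        self.blocks.setdefault(digest, block)  # store only if new
        self.refs.append(digest)

    def ratio(self):
        """Logical bytes written divided by physical bytes stored."""
        logical = sum(len(self.blocks[d]) for d in self.refs)
        physical = sum(len(b) for b in self.blocks.values())
        return logical / physical

store = DedupStore()
for block in (b"A" * 4096, b"B" * 4096, b"A" * 4096, b"A" * 4096):
    store.write(block)
print(store.ratio())  # 4 logical blocks, 2 unique blocks -> 2.0
```

In this toy example, writing four 4KiB blocks with only two unique payloads yields a 2:1 space “amplification”.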

Is This Going To Cost Me?

A little, but not as much as you’d think (because nothing’s ever free). The software is licensed on a per-socket basis, so if you decide to add memory capacity you’re not up for additional licensing costs.

 

Thoughts and Further Reading

I don’t do as much work with folks requiring in-memory storage solutions as I’d like to, but I do appreciate the requirement for these kinds of solutions. The big appeal here is the lack of requirement to re-tool your applications to work in-memory. All you need is something that runs on Linux or KVM and you’re pretty much good to go. Sure, I’m over-simplifying things a little, but it looks like there’s a good story here in terms of the lack of integration required to get some serious performance improvements.

Formulus Black came out of stealth around 4 and a bit months ago and have already introduced a raft of improvements over version 2.0 of their offering. It’s great to see the speed with which they’ve been able to execute on new features in their offering. I’m curious to see what’s next, as there’s obviously been a great focus on performance and simplicity.

The cool kids are all talking about the benefits of NVMe-based, centralised storage solutions. And they’re right to do this, as most applications will do just fine with these kinds of storage platforms. But there are still going to be minuscule bottlenecks associated with these devices. If you absolutely need things to run screamingly fast, you’ll likely want to run them in-memory. And if that’s the case, Formulus Black’s Forsa solution might be just what you’re looking for. Plus, it’s a pretty cool name for a company, or possibly an aspiring wizard.

Random Short Take #20

Here are some links to some random news items and other content that I recently found interesting. You might find them interesting too. Episode 20 – feels like it’s becoming a thing.

  • Scale Computing seems to be having a fair bit of success with their VDI solutions. Here’s a press release about what they did with Harlingen WaterWorks System.
  • I don’t read Corey Quinn’s articles enough, but I am glad I read this one. Regardless of what you think about the enforceability of non-compete agreements (and regardless of where you’re employed), these things have no place in the modern workforce.
  • If you’re getting along to VMworld US this year, I imagine there’s plenty in your schedule already. If you have the time – I recommend getting around to seeing what Cody and Pure Storage are up to. I find Cody to be a great presenter, and Pure have been doing some neat stuff lately.
  • Speaking of VMworld, this article from Tom about packing the little things for conferences in preparation for any eventuality was useful. And if you’re heading to VMworld, be sure to swing past the VMUG booth. There’s a bunch of VMUG stuff happening at VMworld – you can read more about that here.
  • I promise this is pretty much the last bit of news I’ll share regarding VMworld. Anthony from Veeam put up a post about their competition to win a pass to VMworld. If you’re on the fence about going, check it out now (as the competition closes on the 19th August).
  • It wouldn’t be a random short take without some mention of data protection. This article about tiering protection data from George Crump was bang on the money.
  • Backblaze published their quarterly roundup of hard drive stats – you can read more here.
  • This article from Paul on freelancing and side gigs was comprehensive and enlightening. If you’re thinking of taking on some extra work in the hopes of making it your full-time job, or just wanting to earn a little more pin money, it’s worthwhile reading this post.

Brisbane VMUG – September 2019

The September 2019 edition of the Brisbane VMUG meeting will be held on Tuesday 10th September at Fishburners (Level 2, 155 Queen Street, Brisbane City) from 4 – 6pm. It’s sponsored by StorageCraft and promises to be a great afternoon.

Here’s the agenda:

  • VMUG Intro
  • VMware Presentation
  • StorageCraft Presentation
  • Q&A
  • Light refreshments

StorageCraft have gone to great lengths to make sure this will be a fun and informative session and I’m really looking forward to hearing about what they’ve been up to. You can find out more information and register for the event here. I hope to see you there. Also, if you’re interested in sponsoring one of these events, please get in touch with me and I can help make it happen.

Burlywood Tech Announces TrueFlash Insight

Burlywood Tech came out of stealth a few years ago, and I wrote about their TrueFlash announcement here. I had another opportunity to speak to Mike Tomky recently about Burlywood’s TrueFlash Insight announcement and thought I’d share some thoughts here.

The Announcement

Burlywood’s “TrueFlash” product delivers what they describe as a “software-defined SSD” drive. Since becoming active in the market, they’ve gained traction in what they call the Tier 2 service provider segment (not necessarily the “Big 7” hyperscalers).

They’ve announced TrueFlash Insight because, in a number of cases, customers don’t know what their workloads really look like. The idea behind TrueFlash Insight is that it can be run in a production environment for a period of time to collect metadata and drive telemetry. Burlywood can also send engineers on site to do the analysis if required. The data collected with TrueFlash Insight helps Burlywood design and tune the TrueFlash product for the desired workload.

How It Works

  • Insight is available only on Burlywood TrueFlash drives
  • Enabled upon execution of a SOW for Insight analysis services
  • Run your application as normal in a system with one or more Insight-enabled TrueFlash drives
  • Follow the instructions to download the telemetry files
  • Send telemetry data to Burlywood for analysis
  • Burlywood parses the telemetry, analyses data patterns, shares performance information, and identifies potential bottlenecks and trouble spots
  • This information can then be used to tune the TrueFlash SSDs for optimal performance
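The collection side of that workflow is straightforward: run your workload, gather the telemetry files, and send them off. Burlywood hasn’t published details of its tooling, so the snippet below is purely a hypothetical sketch of the “download and send” steps — gathering telemetry files from a directory and bundling them into a single archive ready for upload. The paths, file extension, and function name are all invented for illustration.

```python
# Hypothetical sketch only: bundle drive telemetry files for analysis.
# Paths and file naming are invented; Burlywood's actual tooling may differ.
import tarfile
from pathlib import Path

def bundle_telemetry(telemetry_dir: str, archive_path: str) -> int:
    """Pack all telemetry files found in telemetry_dir into a gzipped tarball.

    Returns the number of files bundled.
    """
    files = sorted(Path(telemetry_dir).glob("*.bin"))  # assumed extension
    with tarfile.open(archive_path, "w:gz") as archive:
        for f in files:
            archive.add(f, arcname=f.name)
    return len(files)

# Usage (paths are placeholders):
# count = bundle_telemetry("/var/log/trueflash", "telemetry_bundle.tar.gz")
```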

Thoughts and Further Reading

When I wrote about Burlywood previously I was fascinated by the scale that would be required for a company to consider deploying SSDs with workload-specific code sitting on them. And then I stopped and thought about my comrades in the enterprise space struggling to get the kind of visibility into their gear that’s required to make these kinds of decisions. But when your business relies so heavily on good performance, there’s a chance you have some idea of how to get information on the performance of your systems. The fact that Burlywood are making this offering available to customers indicates that even those customers that are on board with the idea of “Software-defined SSDs (SDSSDs?)” don’t always have the capabilities required to make an accurate assessment of their workloads.

But this solution isn’t just for existing Burlywood customers. The good news is it’s also available for customers considering using Burlywood’s product in their DC. It’s a reasonably simple process to get up and running, and my impression is that it will save a bit of angst down the track. Tomky made the comment that, with this kind of solution, you don’t need to “worry about masking problems at the drive level – [you can] work on your core value”. There’s a lot to be said for companies, even the ones with very complex technical requirements, not having to worry about the technical part of the business as much as the business part of the business. If Burlywood can make that process easier for current and future customers, I’m all for it.

StorONE Announces S1-as-a-Service

StorONE recently announced its StorONE-as-a-Service (S1aaS) offering. I had the opportunity to speak to Gal Naor about it and thought I’d share some thoughts here.

The Announcement

StorONE’s S1-as-a-Service (S1aaS) is a usage-based solution integrating StorONE’s S1 storage services with Dell Technologies and Mellanox hardware. The idea is they’ll ship you an appliance (available in a few different configurations) and you plug it in and away you go. There’s not a huge amount to say about it as it’s fairly straightforward. If you need more than the 18TB entry-level configuration, StorONE can get you up and running with 60TB thanks to overnight shipping.

Speedonomics

The as-a-Service bit is what most people are interested in, and S1aaS starts at $999 US per month for the 18TB all-flash array that delivers up to 150,000 IOPS. There are a couple of other configurations available as well, including 36TB at $1797 per month, and 54TB at $2497 per month. If, for some reason, you decide you don’t want the device any more, or you no longer have that particular requirement, you can cancel your service with 30 days’ notice.
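For a rough sense of the economics, the quoted list prices work out to a declining cost per TB as the configurations scale up. A quick back-of-the-envelope calculation, using only the US monthly prices mentioned above:

```python
# Back-of-the-envelope $/TB/month for the quoted S1aaS tiers.
tiers = [
    (18, 999),   # 18TB all-flash, $999 US/month
    (36, 1797),  # 36TB, $1797/month
    (54, 2497),  # 54TB, $2497/month
]

for capacity_tb, price_usd in tiers:
    per_tb = price_usd / capacity_tb
    print(f"{capacity_tb}TB @ ${price_usd}/month -> ${per_tb:.2f}/TB/month")

# 18TB @ $999/month -> $55.50/TB/month
# 36TB @ $1797/month -> $49.92/TB/month
# 54TB @ $2497/month -> $46.24/TB/month
```

Nothing surprising there — bigger configurations are cheaper per TB — but it shows the entry tier carries only a modest premium over the largest one.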

Thoughts and Further Reading

The idea of consuming storage from vendors on-premises via flexible finance plans isn’t a new one. But S1aaS isn’t a leasing plan. There’s no 60-month commitment and payback plan. If you want to use this for three months for a particular project and then cancel your service, you can. Just as you could with cable. From that perspective, it’s a reasonably interesting proposition. A number of the major storage vendors would struggle to put that much capacity and speed in such a small footprint on-premises for $999 per month. This is the major benefit of a software-based storage product that, by all accounts, can get a lot out of commodity server hardware.

I wrote about StorONE when they came out of stealth mode a few years ago, and noted the impressive numbers they were posting. Are numbers the most important thing when it comes to selecting storage products? No, not always. There’s plenty to be said for “good enough” solutions that are more affordable. But it strikes me that solutions that go really fast and don’t cost a small fortune to run are going to be awfully compelling. One of the biggest impediments to deploying on-premises storage solutions “as-a-Service” is that there’s usually a minimum spend required to make it worthwhile for the vendor or service provider. Most attempts previously have taken more than 2RU of rack space as a minimum footprint, and have required the customer to sign up for minimum terms of 36 – 60 months. That all changes (for the better) when you can run your storage on a server with NVMe-based drives and an efficient, software-based platform.

Sure, there are plenty of enterprises that are going to need more than 18TB of capacity. But are they going to need more than 54TB of capacity that goes at that speed? And can they build that themselves for the monthly cost that StorONE is asking for? Maybe. But maybe it’s just as easy for them to look at what their workloads are doing and decide whether they want everything to live on that one solution. And there’s nothing to stop them deploying multiple configurations either.

I was impressed with StorONE when they first launched. They seem to have a knack for getting good performance from commodity gear, and they’re willing to offer that solution to customers at a reasonable price. I’m looking forward to seeing how the market reacts to these kinds of competitive offerings. You can read more about S1aaS here.

Random Short Take #19

Here are some links to some random news items and other content that I recently found interesting. You might find them interesting too. Episode 19 – let’s get tropical! It’s all happening.

  • I seem to link to Alastair’s blog a lot. That’s mainly because he’s writing about things that interest me, like this article on data governance and data protection. Plus he’s a good bloke.
  • Speaking of data protection, Chris M. Evans has been writing some interesting articles lately on things like backup as a service. Having worked in the service provider space for a piece of my career, I wholeheartedly agree that it can be a “leap of faith” on the part of the customer to adopt these kinds of services.
  • This post by Raffaello Poltronieri on VMware’s vRealize Operations session at Tech Field Day 19 makes for good reading.
  • This podcast episode from W. Curtis Preston was well worth the listen. I’m constantly fascinated by the challenges presented to infrastructure in media and entertainment environments, particularly when it comes to data protection.
  • I always enjoy reading Preston’s perspective on data protection challenges, and this article is no exception.
  • This article from Tom Hollingsworth was honest and probably cut too close to the bone with a lot of readers. There are a lot of bad habits that we develop in our jobs, whether we’re coding, running infrastructure, or flipping burgers. The key is to identify those behaviours and work to address them where possible.
  • Over at SimplyGeek.co.uk, Gavin has been posting a number of Ansible-related articles, including this one on automating vSphere VM and OVA deployments. A number of folks in the industry talk a tough game when it comes to automation, and it’s nice to see Gavin putting it on wax and setting a great example.
  • The Mark Of Cain have announced a national tour to commemorate the 30th anniversary of their Battlesick album. Unfortunately I may not be in the country when they’re playing in my part of the woods, but if you’re in Australia you can find out more information here.