Dell Technologies World 2019 – Wrap-up and Link-o-rama

Disclaimer: I recently attended Dell Technologies World 2019.  My flights, accommodation and conference pass were paid for by Dell Technologies via the Media, Analysts and Influencers program. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Here’s a quick post with links to the other posts I did surrounding Dell Technologies World 2019, as well as links to other articles I found interesting.

 

Product Announcements

Here’re the posts I did covering the main product-related announcements from the show.

Dell EMC Announces Unity XT And More Cloudy Things

Dell EMC Announces PowerProtect Software (And Hardware)

Dell Announces Dell Technologies Cloud (Platforms and DCaaS)

 

Event-Related

Here’re the posts I did during the show. These were mainly from the media sessions I attended.

Dell – Dell Technologies World 2019 – See You Soon Las Vegas

Dell Technologies World 2019 – Monday General Session – The Architects of Innovation – Rough Notes

Dell Technologies World 2019 – Tuesday General Session – Innovation to Unlock Your Digital Future – Rough Notes

Dell Technologies World 2019 – Media Session – Architecting Innovation in a Multi-Cloud World – Rough Notes

Dell Technologies World 2019 – Wednesday General Session – Optimism and Happiness in the Digital Age – Rough Notes

Dell Technologies World 2019 – (Fairly) Full Disclosure

 

Dell Technologies Announcements

Here are some of the posts from Dell Technologies covering the major product announcements and news.

Dell Technologies and Orange Collaborate for Telco Multi-Access Edge Transformation

Dell Technologies Brings Speed, Security and Smart Design to Mobile PCs for Business

Dell Technologies Powers Real Transformation and Innovation with New Storage, Data Management and Data Protection Solutions

Dell Technologies Transforms IT from Edge to Core to Cloud

Dell Technologies Cloud Accelerates Customers’ Multi-Cloud Journey

Dell Technologies Unified Workspace Revolutionizes the Way People Work

Dell Technologies and Microsoft Expand Partnership to Help Customers Accelerate Their Digital Transformation

 

Tech Field Day Extra

I also had the opportunity to participate in Tech Field Day Extra at Dell Technologies World 2019. Here are the articles I wrote for that part of the event.

Liqid Are Dynamic In The DC

Big Switch Are Bringing The Cloud To Your DC

Kemp Keeps ECS Balanced

 

Other Interesting Articles

TFDx @ DTW ’19 – Get To Know: Liqid

TFDx @ DTW ’19 – Get To Know: Kemp

TFDx @ DTW ’19 – Get to Know: Big Switch

Connecting ideas and people with Dell Influencers

Game Changer: VMware Cloud on Dell EMC

Dell Technologies Cloud and VMware Cloud on Dell EMC Announced

Run Your VMware Natively On Azure With Azure VMware Solutions

Dell Technologies World 2019 recap

Scaling new HPC with Composable Architecture

Object Stores and Load Balancers

Tech Field Day Extra with Liqid and Kemp

 

Conclusion

I had a busy but enjoyable week. I would have liked to get to some of the technical breakout sessions, but being given access to some of the top executives in the company via the Media, Analysts and Influencers program was invaluable. Thanks again to Dell Technologies (particularly Debbie Friez and Konnie) for having me along to the show. And big thanks to Stephen and the Tech Field Day team for including me in the Tech Field Day event as well.

Big Switch Are Bringing The Cloud To Your DC

Disclaimer: I recently attended Dell Technologies World 2019.  My flights, accommodation and conference pass were paid for by Dell Technologies via the Media, Analysts and Influencers program. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

As part of my attendance at Dell Technologies World 2019 I had the opportunity to attend Tech Field Day Extra sessions. You can view the videos from the Big Switch Networks session here, and download my rough notes from here.

 

The Network Is The Cloud

Cloud isn’t a location, it’s a design principle. And networking needs to evolve with the times. The enterprise is hamstrung by:

  • Complex and slow operations
  • Inadequate visibility
  • Lack of operational consistency

It’s time on-premises infrastructure was built the same way the service providers build theirs:

  • Software-defined;
  • Automated with APIs;
  • Open Hardware; and
  • Integrated Analytics.

APIs are not an afterthought for Big Switch.

A Better DC Network

  • Cloud-first infrastructure – design, build and operate your on-premises network with the same techniques used internally by public cloud operators
  • Cloud-first experience – give your application teams the same “as-a-service” network experience on-premises that they get with the cloud
  • Cloud-first consistency – use the same tool chain to manage both on-premises and in-cloud networks

 

Thoughts and Further Reading

There are a number of reasons why enterprise IT folks are looking wistfully at service providers and the public cloud infrastructure setups and wishing they could do IT that way too. If you’re a bit old fashioned, you might think that fast and loose isn’t really how you should be doing enterprise IT – something that’s notorious for being slow, expensive, and reliable. But that would be selling the SPs short (and I don’t just say that because I work for a service provider in my day job). What service providers and public cloud folks are very good at is getting maximum value from the infrastructure they have available to them. We don’t necessarily adopt cloud-like approaches to infrastructure to save money, but rather to solve the same problems in the enterprise that are being solved in the public clouds. Gone are the days when the average business would put up with vast sums of cash being poured into enterprise IT shops with little to no apparent value being extracted from said investment. It’s no longer enough to say “Company X costs this much money, so that’s what we pay”. For better or worse, the business is both more and less savvy about what IT costs, and what you can do with IT. Sure, you’ll still laugh at the executive challenging the cost of core switches by comparing them to what can be had at the local white goods slinger. But you’d better be sure you can justify the cost of that badge on the box that runs your network, because there are plenty of folks ready to do it for cheaper. And they’ll mostly do it reliably too.

This is the kind of thing that lends itself perfectly to the likes of Big Switch Networks. You no longer necessarily need to buy badged hardware to run your applications in the fashion that suits you. You can put yourself in a position to get control over how your spend is distributed and not feel like you’re feeding some mega company’s profit margins without getting a return on your investment. It doesn’t always work like that, but the possibility is there. Big Switch have been talking about this kind of choice for some time now, and have been delivering products that make that possibility a reality. They recently announced an OEM agreement with Dell EMC. It mightn’t seem like a big deal, as Dell like to cosy up to all kinds of companies to fill apparent gaps in the portfolio. But they also don’t enter into these types of agreements without having seriously evaluated the other company. If you have a chance to watch the customer testimonial at Tech Field Day Extra, you’ll get a good feel for just what can be accomplished with an on-premises environment that has service provider-like scalability, management, and performance challenges. There’s a great tale to be told here. Not every enterprise is working at “legacy” pace, and many are working hard to implement modern infrastructure approaches to solve business problems. You can also see one of their customers talk with my friend Keith about the experience of implementing and managing Big Switch on Dell Open Networking.

Kemp Keeps ECS Balanced

Disclaimer: I recently attended Dell Technologies World 2019.  My flights, accommodation and conference pass were paid for by Dell Technologies via the Media, Analysts and Influencers program. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

As part of my attendance at Dell Technologies World 2019 I had the opportunity to attend Tech Field Day Extra sessions. You can view the videos from the Kemp session here, and download my rough notes from here.

 

Kemp Overview

Established in the early 2000s, Kemp has 25,000+ customers globally, with 60,000+ app deployments in over 115 countries. Their main focus is an ADC (Application Delivery Controller) that you can think of as a “fancy load balancer”. Here’s a photo of Frank Yue telling us more about that.

Application Delivery – Why?

  • Availability – transparent failover when application resources fail
  • Scalability – easily add and remove application resources to meet changing demands
  • Security – authenticate users and protect applications against attack
  • Performance – offload security processing and content optimisation to Load Balancer
  • Control – visibility on application resource availability, health and performance

Product Overview

Kemp offer a few key products.

LoadMaster – scalable, secure apps

  • Load balancing
  • Traffic optimisation 
  • Security

There are a few different flavours of the LoadMaster, including cloud-native, virtual, and hardware-based.

360 Central – control, visibility

  • Management
  • Automation
  • Provisioning

360 Vision – Shorter MTTD / MTTR

  • Predictive analytics
  • Automated incident response
  • Observability

Yue made the point that “[l]oad balancing is not networking. And it’s not servers either. It’s somehow in between”. Kemp look to “[d]eal with the application from the networking perspective”.

 

Dell EMC ECS

So what’s Dell EMC ECS then? ECS stands for “Elastic Cloud Storage”, and it’s Dell EMC’s software-defined object storage offering. If you’re unfamiliar with it, here are a few points to note:

  • Objects are bundled data with metadata;
  • The object storage application manages the storage;
  • No real file system is needed;
  • Easily scale by just adding disks;
  • Delivers a low TCO.

It’s accessible via an API and offers the following services:

  • S3
  • Atmos
  • Swift
  • NFS

 

Kemp / Dell EMC ECS Solution

So how does a load balancing solution from Kemp help? One of the ideas behind object storage is that you can lower primary storage costs. You can also use it to accelerate cloud native apps. Kemp helps with your ECS deployment by:

  • Maximising value from infrastructure investment
  • Improving service availability and resilience
  • Enabling cloud storage scalability for next generation apps

Load Balancing Use Cases for ECS

High Availability

  • ECS Node redundancy in the event of failure
  • A load balancer is required to allow for automatic failover and even distribution of traffic
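The failover behaviour described above is straightforward to sketch: the balancer keeps a pool of ECS nodes, skips any node that fails its health check, and spreads requests evenly across the rest. This is a toy Python illustration of the idea, not how Kemp’s LoadMaster is actually implemented.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Toy round-robin balancer with health checking.

    Illustrates the ECS use case above: traffic is spread evenly
    across nodes, and unhealthy nodes are skipped automatically.
    """

    def __init__(self, nodes, health_check):
        self.nodes = list(nodes)
        self.health_check = health_check
        self._ring = cycle(self.nodes)

    def next_node(self):
        # Try each node at most once per call; skip unhealthy ones.
        for _ in range(len(self.nodes)):
            node = next(self._ring)
            if self.health_check(node):
                return node
        raise RuntimeError("no healthy ECS nodes available")

# Example: ecs-node2 has failed, so traffic flows to the remaining nodes.
healthy = {"ecs-node1": True, "ecs-node2": False, "ecs-node3": True}
lb = RoundRobinBalancer(healthy, health_check=lambda n: healthy[n])
picks = [lb.next_node() for _ in range(4)]
print(picks)  # ecs-node2 never appears
```

The node names and health-check mechanism here are invented for the sketch; in practice the health check would be an HTTP or TCP probe against each ECS node.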

Global Balancing

[image courtesy of Kemp]

  • Multiple clusters across different DCs
  • Global Server Load Balancing provides distribution of connections across these clusters based on proximity

Security

  • Offloading encryption from the Dell EMC ECS nodes to Kemp LoadMaster can greatly increase performance and simplify the management of transport layer security certificates
  • IPv6 to IPv4 – Dell EMC ECS does not support IPv6 natively – Kemp will provide that translation to IPv4

 

Thoughts and Further Reading

The first thing that most people ask when seeing this solution is “Won’t the enterprise IT organisation already have a load-balancing solution in place? Why would they go to Kemp to help with their ECS deployment?”. It’s a valid point, but the value here is more that Dell EMC are recommending that customers use the Kemp solution over the built-in load balancer provided with ECS. I’ve witnessed plenty of (potentially frustrating) situations where enterprises deploy multiple load balancing solutions depending on the application requirements or where the project funding was coming from. Remember that things don’t always make sense when it comes to enterprise IT. But putting those issues aside, there are likely plenty of shops looking to deploy ECS in a resilient fashion that haven’t yet had the requirement to deploy a load balancer, and ECS is the first workload to require one. Kemp are clearly quite good at what they do, and have been in the load balancing game for a while now. The good news is if you adopt their solution for your ECS environment, you can look to leverage their other offerings to provide additional load balancing capabilities for other applications that might require it.

You can read the deployment guide from Dell EMC here, and check out Adam’s preparation post on Kemp here for more background information.

Liqid Are Dynamic In The DC

Disclaimer: I recently attended Dell Technologies World 2019.  My flights, accommodation and conference pass were paid for by Dell Technologies via the Media, Analysts and Influencers program. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

As part of my attendance at Dell Technologies World 2019 I had the opportunity to attend Tech Field Day Extra sessions. You can view the videos from the session here, and download my rough notes from here.

 

Liqid

One of the presenters at Tech Field Day Extra was Liqid, a company that specialises in composable infrastructure. So what does that mean then? Liqid “enables Composable Infrastructure with a PCIe fabric and software that orchestrates and manages bare-metal servers – storage, GPU, FPGA / TPU, Compute, Networking”. They say they’re not disaggregating DRAM as the industry’s not ready for that yet. Interestingly, Liqid have made sure they can do all of this with bare metal, as “[c]omposability without bare metal, with disaggregation, that’s just hyper-convergence”.

 

[image courtesy of Liqid]

The whole show is driven through Liqid Command Center, and there’s a switching PCIe fabric as well. You then combine this with various hardware elements, such as:

  • JBoF – Flash;
  • JBoN – Network;
  • JBoG – GPU; and
  • Compute nodes.
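As a mental model, composing a bare-metal server amounts to checking devices out of shared pools over the PCIe fabric, and decomposing it returns them for reuse. This is purely illustrative Python of my own, not Liqid’s actual Command Center API.

```python
class Fabric:
    """Hypothetical sketch of composable infrastructure bookkeeping.

    Pools of devices (GPUs, NICs, flash) sit on a PCIe fabric;
    a "composed" server is just a claim on some of each pool.
    """

    def __init__(self, pools):
        self.pools = {kind: list(devs) for kind, devs in pools.items()}

    def compose(self, **wants):
        # e.g. compose(gpu=2, nic=1): claim devices from each pool.
        server = {}
        for kind, count in wants.items():
            if len(self.pools[kind]) < count:
                raise RuntimeError(f"not enough {kind}s free")
            server[kind] = [self.pools[kind].pop() for _ in range(count)]
        return server

    def decompose(self, server):
        # Return the devices to their pools for use elsewhere.
        for kind, devs in server.items():
            self.pools[kind].extend(devs)

fabric = Fabric({"gpu": ["gpu0", "gpu1", "gpu2", "gpu3"], "nic": ["nic0", "nic1"]})
node = fabric.compose(gpu=2, nic=1)
print(len(fabric.pools["gpu"]))  # 2 GPUs left in the pool
fabric.decompose(node)
print(len(fabric.pools["gpu"]))  # back to 4
```

The appeal of the real thing is that this reallocation happens to running bare-metal servers over the fabric, rather than in a Python dictionary.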

There are various expansion chassis options (network, storage, and graphics) and you can add in standard x86 servers. You can read about Liqid’s announcement around Dell EMC PowerEdge servers here.

Other Interesting Use Cases

Some of the more interesting use cases discussed by Liqid included “brownfield” deployments where customers don’t want to disaggregate everything. If they just want to disaggregate GPUs, for example, they can add a GPU pool to a fabric. This can be done with storage as well. Why would you want to do this kind of thing with networking? There are apparently a few service providers that like the composable networking use case. You can also have multiple fabric types, with Liqid managing cross-composability.

[image courtesy of Liqid]

Customers?

Liqid have customers across a variety of workload types, including:

  • AI & Deep Learning
    • GPU Scale out
    • Enable GPU Peer-2-Peer at scale
    • GPU Dynamic Reallocation/Sharing
  • Dynamic Cloud
    • CSP, ISP, Private Cloud
    • Flexibility, Resource Utilisation, TCO
    • Bare Metal Cloud Product Offering
  • HPC & Clustering
    • High Performance Computing
    • Lowest Latency Interconnect
    • Enables Massive Scale Out
  • 5G Edge
    • Utilisation & Reduced Footprint
    • High Performance Edge Compute
    • Flexibility and Ease of Scale Out

Thoughts and Further Reading

I’ve written enthusiastically about composable infrastructure in the past, and it’s an approach to infrastructure that continues to fascinate me. I love the idea of being able to move pools of resources around the DC based on workload requirements. This isn’t just moving VMs to machines that are bigger as required (although I’ve always thought that was cool). This is moving resources to where they need to be. We have the kind of interconnectivity technology available now that means we don’t need to be beholden to “traditional” x86 server architectures. Of course, the success of this approach is in no small part dependent on the maturity of the organisation. There are some workloads that aren’t going to be a good fit with composable infrastructure. And there are going to be some people that aren’t going to be a good fit either. And that’s fine. I don’t think we’re going to see traditional rack mount servers and centralised storage disappear off into the horizon any time soon. But the possibilities that composable infrastructure present to organisations that have possibly struggled in the past with getting the right resources to the right workload at the right time are really interesting.

There are still a small number of companies that are offering composable infrastructure solutions. I think this is in part because it’s viewed as a niche requirement that only certain workloads can benefit from. But as companies like Liqid are demonstrating, the technology is maturing at a rapid pace and, much like our approach to on-premises infrastructure versus the public cloud, I think it’s time that we take a serious look at how this kind of technology can help businesses worry more about their business and less about the resources needed to drive their infrastructure. My friend Max wrote about Liqid last year, and I think it’s worth reading his take if you’re in any way interested in what Liqid are doing.

Dell – Dell Technologies World 2019 – See You Soon Las Vegas

This is a quick post to let you all know that I’ll be heading to Dell’s annual conference (Dell Technologies World) this year in Las Vegas, NV. I’m looking forward to catching up with some old friends and meeting some new ones. If you haven’t registered yet but feel like that’s something you might want to do – the registration page is here. To get a feel for what’s on offer, you can check out the agenda here. I’m looking forward to hearing more about stuff like this.

I’ll also be participating in a Tech Field Day Extra event at Dell Technologies World. You can check out the event page for that here.

Massive thanks to Konstanze and Debbie from Dell for organising the “influencer” pass for me. Keep an eye out for me at the conference and surrounding events and don’t be afraid to come and say hi (if you need a visual – think Grandad Wolverine).

Kingston’s NVMe Line-up Is The Life Of The Party

Disclaimer: I recently attended VMworld 2017 – US.  My flights were paid for by ActualTech Media, VMware provided me with a free pass to the conference and various bits of swag, and Tech Field Day picked up my hotel costs. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

You can view the video of Kingston‘s presentation at Tech Field Day Extra VMworld US 2017 here, and download a PDF copy of my rough notes from here.

 

It’s A Protocol, Not Media

NVMe has been around for a few years now, and some people get it confused for a new kind of media that they plug into their servers. But it’s not really, it’s just a standard specification for accessing Flash media via the PCI Express bus. There’re a bunch of reasons why you might choose to use NVMe instead of SAS, including lower latency and less CPU overhead. My favourite thing about it though is the plethora of form factors available to use. Kingston touched on these in their presentation at Tech Field Day Extra recently. You can get them in half-height, half-length (HHHL) add-in cards (AIC), U.2 (2.5″) and M.2 sizes. To give you an idea of the use cases for each of these, Kingston suggested the following applications:

  • HHHL (AIC) card
    • Server / DC applications
    • High-end workstations
  • U.2 (2.5″)
    • Direct-attached, server backplane, just a bunch of flash (JBOF)
    • White box and OEM-branded
  • M.2
    • Client applications
    • Notebooks, desktops, workstations
    • Specialised systems

 

It’s Pretty Fast

NVMe has proven to be pretty fast, and a number of companies are starting to develop products that leverage the protocol in an extremely efficient manner. Couple that with the rise of NVMe-oF solutions and you’ve got some pretty cool stuff coming to market. The price is also becoming a lot more reasonable, with Kingston telling us that their DCP1000 NVMe HHHL comes in at around “$0.85 – $0.90 per GB at the moment”. It’s obviously not as cheap as things that spin at 7200RPM but the speed is mighty fine. Kingston also noted that the 2.5″ form factor would be hanging around for some time yet, as customers appreciated the serviceability of the form factor.

 

[Kingston DCP1000 – Image courtesy of Kingston]

 

This Stuff’s Everywhere

Flash media has been slowly but surely taking over the world for a little while now. The cost per GB is reducing (slowly, but surely), and the range of form factors means there’s something for everyone’s needs. Protocol advancements such as NVMe make things even easier, particularly at the high end of town. It’s also been interesting to see these “high end” solutions trickle down to affordable form factors such as PCIe add-in cards. With the relative ubiquity of operating system driver support, NVMe has become super accessible. The interesting thing to watch now is how we effectively leverage these advancements in protocol technologies. Will we use them to make interesting advances in platforms and data access? Or will we keep using the same software architectures we fell in love with 15 years ago (albeit with dramatically improved performance specifications)?

 

Conclusion and Further Reading

I’ll admit it took me a little while to come up with something to write about after the Kingston presentation. Not because I don’t like them or didn’t find their content interesting. Rather, I felt like I was heading down the path of delivering another corporate backgrounder coupled with speeds and feeds and I know they have better qualified people to deliver that messaging to you (if that’s what you’re into). Kingston do a whole range of memory-related products across a variety of focus areas. That’s all well and good but you probably already knew that. Instead, I thought I could focus a little on the magic behind the magic. The Flash era of storage has been absolutely fascinating to witness, and I think it’s only going to get more interesting over the next few years. If you’re into this kind of thing but need a more comprehensive primer on NVMe, I recommend you check out J Metz’s article on the Cisco blog. It’s a cracking yarn and enlightening to boot. Data Centre Journal also provide a thorough overview here.

Druva Is Useful, And Modern

Disclaimer: I recently attended VMworld 2017 – US.  My flights were paid for by ActualTech Media, VMware provided me with a free pass to the conference and various bits of swag, and Tech Field Day picked up my hotel costs. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

You can view the video of Druva‘s presentation here, and you can download a PDF copy of my rough notes from here.

 

DMaaS

Druva have been around for a while, and I recently had the opportunity to hear from them at a Tech Field Day Extra event. They have combined their Phoenix and inSync products into a single platform, yielding Druva Cloud Platform. This is being positioned as a “Data Management-as-a-Service” offering.

 

Data Management-as-a-Service

Conceptually, it looks a little like this.

[image via Druva]

According to Druva, the solution takes into account all the good stuff, such as:

  • Protection;
  • Governance; and
  • Intelligence.

It works with both:

  • Local data sources (end points, branch offices, and DCs); and
  • Cloud data sources (such as IaaS, Cloud Applications, and PaaS).

The Druva cloud is powered by AWS, and provides, amongst other things:

  • Auto-tiering in the cloud (S3/S3IA/Glacier); and
  • Easy recovery to any location (servers or the cloud).
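In S3 terms, that auto-tiering behaviour maps onto a bucket lifecycle configuration that transitions objects through S3-IA to Glacier as they age. Below is a sketch of the structure boto3’s `put_bucket_lifecycle_configuration` expects; the day thresholds and rule name are invented for illustration, and with Druva this tiering is managed for you rather than configured by hand.

```python
# Hypothetical lifecycle rule illustrating S3 auto-tiering. With boto3
# you would apply it as:
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="backups", LifecycleConfiguration=lifecycle)
lifecycle = {
    "Rules": [
        {
            "ID": "tier-aging-backups",       # made-up rule name
            "Status": "Enabled",
            "Filter": {"Prefix": ""},         # apply to every object
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
        }
    ]
}

storage_classes = [t["StorageClass"] for t in lifecycle["Rules"][0]["Transitions"]]
print(storage_classes)  # ['STANDARD_IA', 'GLACIER']
```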

 

Just Because You Can Put A Cat …

With everything there’s a right way and a wrong way to do it. Sometimes you might do something and think that you’re doing it right, but you’re not. Wesley Snipes’s line in White Men Can’t Jump may not be appropriate for this post, but Druva came up with one that is: “A VCR in the cloud doesn’t give you Netflix”. When you’re looking at cloud-based data protection solutions, you need to think carefully about just what’s on offer. Druva have worked through a lot of these requirements and claim their solution:

  • Is fully managed (no need to deploy, manage, or support software);
  • Offers predictable, lower costs;
  • Delivers linear and infinite (!) scalability;
  • Provides automatic upgrades and patching; and
  • Offers seamless data services.

I’m a fan of the idea that cloud services can offer a somewhat predictable cost model to customers. One of the biggest concerns faced by the C-level folk I talk to is the variability of cost when it comes to consuming off-premises services. The platform also offers source-side global deduplication, with:

  • Application-aware block-level deduplication;
  • Only unique blocks being sent; and
  • Forever incremental and efficient backups.

The advantage of this approach is that, as Druva charge based on “post-globally deduped storage consumed”, chances are you can keep your costs under control.
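The source-side dedup flow described above can be sketched in a few lines: chunk the data into blocks, hash each block, and only transmit blocks whose hash the service hasn’t seen before. This is a toy illustration using fixed-size blocks and SHA-256; Druva’s actual chunking, indexing, and application awareness are more sophisticated.

```python
import hashlib

BLOCK_SIZE = 4096  # toy fixed-size blocks; real systems often chunk variably

def dedup_backup(data: bytes, seen: set) -> list:
    """Return only the blocks not already present in the global index."""
    new_blocks = []
    for off in range(0, len(data), BLOCK_SIZE):
        block = data[off:off + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in seen:      # only unique blocks get sent
            seen.add(digest)
            new_blocks.append(block)
    return new_blocks

seen = set()  # stands in for the service's global dedupe index
first = dedup_backup(b"A" * 8192 + b"B" * 4096, seen)
print(len(first))   # 2 blocks sent: the two "A" blocks dedupe to one
second = dedup_backup(b"A" * 8192 + b"C" * 4096, seen)
print(len(second))  # 1 block sent: only the new "C" block
```

This is also why the billing model works in the customer’s favour: the second backup above only adds one block’s worth of post-dedupe storage.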

 

It Feels Proper Cloudy

I know a lot of people who are in the midst of the great cloud migration. A lot of them are only now (!) starting to think about how exactly they’re going to protect all of this data in the cloud. Some of them are taking their existing on-premises solutions and adapting them to deal with hybrid or public cloud workloads. Others are dabbling with various services that are primarily cloud-based. Worse still are the ones assuming that the SaaS provider is somehow magically taking care of their data protection needs. Architecting your apps for multiple geos is a step in the right direction towards availability, but you still need to think about data protection in terms of integrity, not just availability. The impression I got from Druva is that they’ve taken some of the best elements of their on-premises and cloud offerings, sprinkled some decent security in the mix, and come up with a solution that could prove remarkably effective.

Tech Field Day – I’ll Be At TFD Extra at VMworld US 2017

The name says it all. I mentioned recently that I’ll be heading to the US in less than a week for VMworld. This is a quick post to say that I’ll also have the opportunity to participate in the Tech Field Day Extra event while at VMworld.  If you haven’t heard of the very excellent Tech Field Day events, I recommend you check them out. You can also check back on the TFDx website during the event as there’ll likely be video streaming along with updated links to additional content. You can also see the list of delegates and event-related articles that they’ve published.

I think it’s a great line-up of companies this time around, with some I’m familiar with and some not so much. I’m attending three or four of the sessions and recommend you tune in if you can to hear from Druva, Kingston, Pluribus Networks and NetApp.

Here’s the calendar of events as it stands (note that this might change).

August 29, 2017 11:00-12:00 Kingston Presents at Tech Field Day Extra at VMworld US 2017
Delegate Panel: Dan Frith, Mike Preston, TBD
August 29, 2017 13:00-14:00 Druva Presents at Tech Field Day Extra at VMworld US 2017
Delegate Panel: Dan Frith, Mike Preston, Sean Thulin, TBD
August 29, 2017 14:30-15:30 Pluribus Networks Presents at Tech Field Day Extra at VMworld US 2017
Delegate Panel: Carl Fugate, Eyvonne Sharp, Mike Preston, Rob Coote, Sean Thulin, TBD
August 29, 2017 16:00-17:00 NetApp Presents at Tech Field Day Extra at VMworld US 2017
Delegate Panel: Carl Fugate, Dan Frith, Eyvonne Sharp, Mike Preston, Rick Schlander, Rob Coote, Sean Thulin, TBD

 

Storage Field Day – I’ll Be At Storage Field Day 13 and Pure Accelerate

Storage Field Day 13

In what can only be considered excellent news, I’ll be heading to the US in early June for another Storage Field Day event. If you haven’t heard of the very excellent Tech Field Day events, you should check them out. I’m looking forward to time travel and spending time with some really smart people for a few days. It’s also worth checking back on the Storage Field Day 13 website during the event (June 14 – 16) as there’ll be video streaming and updated links to additional content. You can also see the list of delegates and event-related articles that have been published.

I think it’s a great line-up of presenting companies this time around. There are a few I’m very familiar with and some I’ve not seen in action before.

*Update – NetApp have now taken the place of Seagate. I’ll update the schedule when I know more.

 

I won’t do the delegate rundown, but having met a number of these people I can assure you the videos will be worth watching.

Here’s the rough schedule (all times are ‘Merican Pacific and may change).

Wednesday, Jun 14 09:30-10:30 StorageCraft Presents at Storage Field Day 13
Wednesday, Jun 14 16:00-17:30 NetApp Presents at Storage Field Day 13
Thursday, Jun 15 08:00-12:00 Dell EMC Presents at Storage Field Day 13
Thursday, Jun 15 13:00-14:00 SNIA Presents at Storage Field Day 13
Thursday, Jun 15 15:00-17:00 Primary Data Presents at Storage Field Day 13
Friday, Jun 16 10:30-12:30 X-IO Technologies Presents at Storage Field Day 13

 

Storage Field Day Exclusive at Pure Accelerate 2017

You may have also noticed that I’ll be participating in the Storage Field Day Exclusive at Pure Accelerate 2017. This will be running from June 12 – 14 in San Francisco and promises to be a whole lot of fun. Check the landing page here for more details of the event and delegates in attendance.

I’d like to publicly thank in advance the nice folks from Tech Field Day who’ve seen fit to have me back, as well as Pure Storage for having me along to their event as well. Also big thanks to the companies presenting. It’s going to be a lot of fun. Seriously.

ClearSky Data Are Here To Help

Disclaimer: I recently attended VMworld 2016 – US.  My flights were paid for by myself, VMware provided me with a free pass to the conference and various bits of swag, and Tech Field Day picked up my hotel costs. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.


ClearSky Data presented recently at Tech Field Day Extra VMworld US 2016. You can see video from the presentation here. My rough notes on the session are here.

 

Overview

Lazarus Vekiarides, CTO and Co-founder, took us through an overview. “ClearSky’s Global Storage Network delivers enterprise storage, spanning the entire data lifecycle, as a fully-managed service”. Sounds good. I like it when people talk about lifecycles and fully-managed services. These things are hard to do though.

ClearSky are aiming to provide “the performance and availability of on-premises storage with the economics and scale of the cloud”. They do this with:

  • economics
  • scalability
  • reliability
  • security
  • performance

According to ClearSky, we’ve previously used a “Fragmented Hybrid” model when it comes to cloud storage.

tfdx-clearskydata-fragmented_hybrid

I must have been watching too much Better Off Ted with my eldest daughter, but when I heard of the Global Storage Network, it sounded a lot like something from a Veridian Dynamics advertisement. It’s not though, it’s cooler than that. With the Global Storage Network, ClearSky brings it all together.

tfdx-clearskydata_globalstoragenetwork

You can read a whitepaper from ClearSky here, and there’s a data sheet here.

 

These Pictures are Compelling, But What Is It?

ClearSky say they are changing how enterprises access data:

  • eliminate storage silos
  • pay only for what you use (up to 100% usable storage)
  • guaranteed 100% uptime
  • multi-site data access without replication
  • a maximum 30-minute response time for Sev 1 and Sev 2 tickets

tfdx-clearsky_data_at_a_glance

This is all delivered via a consumption-based model. The idea is that you get charged only for the capacity you use, while your applications have all the performance they need. Like all good consumption models, if you delete data, you give the space back to ClearSky and are no longer billed for it.
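To make the billing model concrete, here's a minimal sketch of pay-for-what-you-use metering with immediate credit on deletion. The class name and the per-GB rate are made up for illustration; this isn't ClearSky's actual pricing or implementation.

```python
RATE_PER_GB_MONTH = 0.25  # assumed illustrative rate, not ClearSky's actual pricing


class ConsumptionMeter:
    """Toy model of consumption-based storage billing."""

    def __init__(self):
        self.used_gb = 0

    def write(self, gb):
        # You're billed only on capacity actually consumed
        self.used_gb += gb

    def delete(self, gb):
        # Freed capacity is given back and stops being billed immediately
        self.used_gb = max(0, self.used_gb - gb)

    def monthly_bill(self):
        return self.used_gb * RATE_PER_GB_MONTH


meter = ConsumptionMeter()
meter.write(500)   # consume 500 GB
meter.delete(200)  # free 200 GB; billing drops straight away
print(meter.monthly_bill())  # 300 GB billed -> 75.0
```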

“Customers simply plug into the ClearSky service to get the storage they need, when and where they need it, with the security, scalability and resilience that a business depends on.”

 

I’m Still Not Sure

That’s because I’m bad at explaining things. There’s a 2RU edge appliance (24 slots, with around 6TB of flash cache). The cache sits on resilient storage, but isn’t copied elsewhere. ClearSky POPs then offer distributed, optimised storage, with multiple copies pushed to the cloud. Maybe a picture will explain it a bit better.

tfdx-clearskydata-architecture

With this architecture, ClearSky manages the entire data lifecycle. Active data lives either next to your applications or in the metro area nearby. Cold data, backups and DR copies are stored as multiple, geographically dispersed copies in the network.

There’s support for iSCSI and FC today, and the write-back cache is processed every 10 minutes, with data pushed to the metro cache or the cloud.
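The write-back pattern described above can be sketched in a few lines: writes land in the local (edge) cache immediately, and dirty data is pushed upstream in batches on a timer. The 10-minute interval matches the description; everything else (names, structures) is purely illustrative, not ClearSky's implementation.

```python
import time

FLUSH_INTERVAL = 600  # seconds; ClearSky flushes roughly every 10 minutes


class WriteBackCache:
    """Toy write-back cache: acknowledge writes locally, batch-push upstream."""

    def __init__(self, upstream, clock=time.monotonic):
        self.dirty = {}           # block -> data not yet pushed upstream
        self.upstream = upstream  # stand-in for the metro cache / cloud tier
        self.clock = clock
        self.last_flush = clock()

    def write(self, block, data):
        self.dirty[block] = data  # acknowledged at the edge, so writes are fast
        if self.clock() - self.last_flush >= FLUSH_INTERVAL:
            self.flush()

    def flush(self):
        # Push all dirty blocks upstream in one batch, then reset the timer
        self.upstream.update(self.dirty)
        self.dirty.clear()
        self.last_flush = self.clock()


cloud = {}
cache = WriteBackCache(cloud)
cache.write("blk-1", b"hello")  # lands in the edge cache only
cache.flush()                   # forced here for the example
print(cloud)                    # {'blk-1': b'hello'}
```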

 

What Do I Use It For?

Data in the ClearSky network can be accessed from multiple locations without replication, offering mobility and availability.

Multi-site availability

  • Load balancing and disaster recovery

Workload mobility

  • In-metro and cross-metro
  • Application data can be accessed from other metros

And you can use it in all the ways you think you would, including DR, DC migration, and load balancing.

 

Make it Splunky

You probably know that companies use Splunk to analyse machine data. I’ve used it at home to munge squid logs when trying to track my daughter’s internet use. Splunk captures, indexes and correlates machine data in a searchable repository from which it can generate graphs, reports, alerts, and visualisations. Splunk demands high-performance, agile storage, and ClearSky have some experience with this. There’s also a Splunk Reference Architecture. ClearSky say they’re a good fit for Splunk Enterprise: the indexers simply write to the ClearSky Edge Cache, and ClearSky manages index migration through the cache and storage layers, greatly simplifying the solution. They also offer “[h]ighly consistent ingest performance, cloud capacity, and integrated backup using ClearSky snapshot technology”.
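For anyone curious what "munging squid logs" looks like before a tool like Splunk does it for you, here's a minimal parse of Squid's native access.log format. The sample line is invented; the field order follows Squid's default native log layout.

```python
# A made-up line in Squid's native access.log format:
# timestamp elapsed client result/status bytes method URL ...
sample = ("1474000000.123    250 192.168.1.10 TCP_MISS/200 5120 "
          "GET http://example.com/ - HIER_DIRECT/93.184.216.34 text/html")


def parse_squid_line(line):
    """Pull the commonly-used fields out of a native-format Squid log line."""
    fields = line.split()
    return {
        "timestamp": float(fields[0]),  # Unix time with millisecond precision
        "elapsed_ms": int(fields[1]),   # request duration in milliseconds
        "client": fields[2],            # client IP address
        "result": fields[3],            # cache result / HTTP status, e.g. TCP_MISS/200
        "bytes": int(fields[4]),        # bytes sent to the client
        "method": fields[5],
        "url": fields[6],
    }


entry = parse_squid_line(sample)
print(entry["client"], entry["url"])  # 192.168.1.10 http://example.com/
```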

 

Conclusion

This was the first time I’d encountered ClearSky Data, and I liked the sound of a lot of what I heard. They make some big claims on performance, but the architecture seems to support these, at least on the face of it. I’m a fan of people who are into fully-managed data lifecycles. I hope to have the opportunity to dig further into this technology at some stage to see if they’re the real deal. People use caching solutions because they have the ability to greatly improve the perceived (and actual) performance of infrastructure. And managed services are certainly popular with enterprises looking at alternatives to their current, asset-heavy, models of storage consumption. If ClearSky can do everything it says it can, they are worth looking into further.