Elastifile Announces v3.0

Elastifile recently announced version 3.0 of their product. I had the opportunity to speak to Jerome McFarland (VP of Marketing) and thought I’d share some information from the announcement here. If you haven’t heard of them before, “Elastifile augments public cloud capabilities and facilitates cloud consumption by delivering enterprise-grade, scalable file storage in the cloud”.

 

The Announcement

ClearTier

One of the major features of the 3.0 release is “ClearTier”, delivering integration between file and object storage in public clouds. With ClearTier, you have object storage expanding the file system namespace. The cool thing about this is that Elastifile’s ECFS provides transparent read / write access to all data. No need to re-tool applications to take advantage of the improved economics of object storage in the public cloud.

How Does It Work?

All data is accessible through ECFS via a standard NFS mount, and application access to object data is routed automatically. Data tiering occurs automatically according to user-defined policies specifying:

  • Targeted capacity ratio between file and object;
  • Eligibility for data demotion (i.e. min time since last access); and
  • Promotion behaviour in response to object data access.
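If it helps to think about this in code, here’s a rough sketch of what such a tiering policy might look like. To be clear, the field names here are mine, not Elastifile’s actual policy schema — it’s only meant to illustrate the three knobs described above.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical sketch only -- field names are illustrative, not Elastifile's.
@dataclass
class TieringPolicy:
    target_object_ratio: float    # fraction of the namespace to hold on object
    demotion_min_idle: timedelta  # minimum time since last access
    promote_on_access: bool       # move object data back to file when accessed

def eligible_for_demotion(last_access: datetime, now: datetime,
                          policy: TieringPolicy) -> bool:
    """Data can be demoted to the object tier once it has been idle long enough."""
    return (now - last_access) >= policy.demotion_min_idle

policy = TieringPolicy(target_object_ratio=0.7,
                       demotion_min_idle=timedelta(days=30),
                       promote_on_access=True)

now = datetime(2018, 11, 1)
print(eligible_for_demotion(datetime(2018, 9, 1), now, policy))   # True
print(eligible_for_demotion(datetime(2018, 10, 20), now, policy)) # False
```

The point of the ratio is that you can tune how much of the namespace sits on cheap object storage versus faster file storage, and let the policy engine do the shuffling.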

Bursting

ClearTier gets even more interesting when you combine it with Elastifile’s CloudConnect: use CloudConnect to get data into the public cloud in the first place, and then use ClearTier to push data down to object storage.

[image courtesy of Elastifile]

It becomes a simple process, and consists of two steps:

  1. Move on-premises data (from any NAS) to cloud-based object storage using CloudConnect; and
  2. Deploy ECFS with a pointer to the designated object store.

Get Snappy

ClearTier also provides the ability to store snapshots on an object tier. Snapshots occur automatically according to user-defined policies specifying:

  • Data to include;
  • Destination for snapshot (i.e. file storage / object storage); and
  • Schedule for snapshot creation.

The great thing is that all snapshots are accessible through ECFS via the same NFS mount.
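Here’s a similarly hypothetical sketch of a snapshot policy based on the bullets above. Again, the field names are illustrative only — this isn’t Elastifile’s actual schema.

```python
from dataclasses import dataclass

# Hypothetical sketch only -- field names are illustrative, not Elastifile's.
@dataclass
class SnapshotPolicy:
    include_paths: list    # data to include
    destination: str       # "file" or "object"
    interval_hours: int    # how often to create a snapshot

def snapshot_due(hours_since_last: float, policy: SnapshotPolicy) -> bool:
    """A new snapshot is due once the schedule interval has elapsed."""
    return hours_since_last >= policy.interval_hours

policy = SnapshotPolicy(include_paths=["/projects"],
                        destination="object",
                        interval_hours=24)

print(snapshot_due(25, policy))  # True -- time to snap to the object tier
print(snapshot_due(3, policy))   # False
```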

 

Thoughts And Further Reading

I was pretty impressed with Elastifile’s CloudConnect solution when they first announced it. When you couple CloudConnect with something like ClearTier, and have it sitting on top of the ECFS foundation, it strikes me as a pretty cool solution. If you’re using applications that rely heavily on NFS, for example, ClearTier gives you a way to leverage the traditionally low cost of cloud object storage with the improved performance of file. I like the idea that you can play with the ratio of file and object, and I’m a big fan of not having to re-tool my file-centric applications to take advantage of object economics. The ability to store a bunch of snapshots on the object tier also adds increased flexibility in terms of data protection and storage access options.

The ability to burst workloads is exactly the kind of technical public cloud use case that we’ve been talking about in slideware for years now. The reality, however, has been somewhat different. It looks like Elastifile are delivering a solution that competes aggressively with some of the leading cloud providers’ object solutions, whilst also giving the storage array vendors, now dabbling in cloud solutions, pause for thought. There are a bunch of interesting use cases, particularly if you need to access a bunch of compute, and large data sets via file-based storage, in a cloud environment for short periods of time. If you’re looking for a cost-effective, scalable storage solution, I think that Elastifile are worth checking out.

Storage Field Day 17 – Wrap-up and Link-o-rama

Disclaimer: I recently attended Storage Field Day 17.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

This is a quick post to say thanks once again to Stephen and Ben, and the presenters at Storage Field Day 17. I had a super fun and educational time. For easy reference, here’s a list of the posts I did covering the events (they may not match the order of the presentations).

Storage Field Day – I’ll Be At Storage Field Day 17

Storage Field Day 17 – (Fairly) Full Disclosure

I Need Something Like Komprise For My Garage

NGD Systems Are On The Edge Of Glory

Intel’s Form Factor Is A Factor

StarWind Continues To Do It Their Way

 

Also, here are a number of links to posts by my fellow delegates (in no particular order). They’re all very smart people, and you should check out their stuff, particularly if you haven’t before. I’ll attempt to keep this updated as more posts are published. But if it gets stale, the Storage Field Day 17 landing page will have updated links.

 

Max Mortillaro (@DarkkAvenger)

I will be at Storage Field Day 17! Wait what is a “Storage Field Day”?

The Rise of Computational Storage

Komprise: Data Management Made Easy

 

Ray Lucchesi (@RayLucchesi)

Screaming IOP performance with StarWind’s new NVMeoF software & Optane SSDs

 

Arjan Timmerman (@ArjanTim)

EP10 – Computational Storage: A Paradigm Shift In The Storage Industry with Scott Shadley and NGD Systems

 

Aaron Strong (@TheAaronStrong)

Komprise Systems Overview from #SFD17

NGD Systems from #SFD17

 

Jeffrey Powers (@Geekazine)

Komprise Transforming Data Management with Disruption at SFD17

Starwind NVMe Over Fabrics for SMB and ROBO at SFD17

NGD Systems Help Make Cat Searches Go Faster with Better Results at SFD17

 

Joe Houghes (@JHoughes)

 

Luigi Danakos (@NerdBlurt)

Tech Stand UP Episode 8 – SFD17 – Initial Thoughts on Komprise Podcast

Tech Stand Up Episode 9 – SFD17 – Initial thoughts NGD Systems Podcast

 

Mark Carlton (@MCarlton1983)

 

Enrico Signoretti (@ESignoretti)

Secondary Storage Is The New Primary

The Era of Composable Storage Infrastructures is Coming

The Fascinating World Of Computational Storage

Secondary Data and Komprise with Krishna Subramanian

 

Jon Hudson (@_Desmoden)

 

[photo courtesy of Ramon]

StarWind Continues To Do It Their Way


 

StarWind recently presented at Storage Field Day 17. You can see their videos from Storage Field Day 17 here, and download a PDF copy of my rough notes from here.

 

StarWind Do All Kinds Of Stuff

I’ve written enthusiastically about StarWind previously. If you’re unfamiliar with them, they have three main focus areas:

They maintain a strict focus on the SMB and Enterprise ROBO markets, and aren’t looking to be the next big thing in the enterprise any time soon.

 

So What’s All This About NVMe [over Fabrics]?

According to Max and the team, NVMe over Fabrics is “the next big thing in [network] storage”. Here’s a photo of Max saying just that.

Why Hate SAS?

It’s not that people hate SAS, it’s just that the SAS protocol was designed for disk, and NVMe was designed for Flash devices.

SAS (iSCSI / iSER)                       | NVMe [over Fabrics]
-----------------------------------------|-----------------------------------------------
Complex driver built around archaic SCSI | Simple driver built around block device (R/W)
Single short queue per controller        | One device = one controller, no bottlenecks
Single short queue per device            | Many long queues per device
Serialised access, locks                 | Non-serialised access, no locks
Many-to-One-to-Many                      | Many-to-Many, true Point-to-Point
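The queueing rows in that comparison are worth a quick back-of-the-envelope illustration. This isn’t a benchmark, just toy arithmetic: SCSI-era transports give you one short queue per device, while NVMe allows many long queues, so the number of outstanding IOs can scale with host cores. The specific NVMe queue count and depth below are a modest example configuration I’ve picked, not StarWind’s numbers.

```python
# Toy illustration of the queueing difference (not a benchmark).
def max_outstanding_ios(queues: int, queue_depth: int) -> int:
    # Total IOs that can be in flight at once across all queues.
    return queues * queue_depth

sas = max_outstanding_ios(queues=1, queue_depth=254)     # typical SAS limit
nvme = max_outstanding_ios(queues=64, queue_depth=1024)  # a modest NVMe config

print(f"SAS outstanding IOs:  {sas}")   # 254
print(f"NVMe outstanding IOs: {nvme}")  # 65536
```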

 

You Do You, Boo

StarWind have developed their own NVMe SPDK for Windows Server (as Intel doesn’t currently provide one). In early development they had some problems with high CPU overheads. CPU might be a “cheap resource”, but you still don’t want to use up 8 cores dishing out IO for a single device. They’ve managed to move a lot of the work to user space and cut down on core consumption. They’ve also built their own Linux (CentOS) based initiator for NVMe over Fabrics. They’ve developed an NVMe-oF initiator for Windows by combining a Linux initiator and stub driver in the hypervisor. “We found the elegant way to bring missing SPDK functionality to Windows Server: Run it in a VM with proper OS! First benefit – CPU is used more efficiently”. They’re looking to do something similar with ESXi in the very near future.

 

Thoughts And Further Reading

I like to think of StarWind as the little company from the Ukraine that can. They have a long, rich heritage in developing novel solutions to everyday storage problems in the data centre. They’re not necessarily trying to take over the world, but they’ve demonstrated before that they have an ability to deliver solutions that are unique (and sometimes pioneering) in the marketplace. They’ve spent a lot of time developing software storage solutions over the years, so it makes sense that they’d be interested to see what they could do with the latest storage protocols and devices. And if you’ve ever met Max and Anton (and the rest of their team), it makes even more sense that they wouldn’t necessarily wait around for Intel to release a Windows-based SPDK to see what type of performance they could get out of these fancy new Flash devices.

All of the big storage companies are coming out with various NVMe-based products, and a number are delivering NVMe over Fabrics solutions as well. There’s a whole lot of legacy storage that continues to dominate the enterprise and SMB storage markets, but I think it’s clear from presentations such as StarWind’s that the future is going to look a lot different in terms of the performance available to applications (both at the core and edge).

You can check out this primer on NVMe over Fabrics here, and the ratified 1.0a specification can be viewed here. Ray Lucchesi, as usual, does a much better job than I do of explaining things, and shares his thoughts here.

Intel’s Form Factor Is A Factor


 

The Intel Optane team recently presented at Storage Field Day 17. You can see their videos from Storage Field Day 17 here, and download a PDF copy of my rough notes from here. I urge you to check out the videos, as there was a heck of a lot of stuff in there. But rather than talk about benchmarks and the SPDK, I’m going to focus on what’s happening with Intel’s approach to storage in terms of the form factor.

 

Of Form Factors And Other Matters Of Import

An Abbreviated History Of Drive Form Factors

Let’s start with a little bit of history to get you going. IBM introduced the first hard drive – the IBM 350 disk storage unit – in 1956. Over time we’ve gone from a variety of big old drives to smaller form factors. I’m not old enough to reminisce about the Winchester drives, but I do remember the 5.25″ drives in the XT. Wikipedia is as good a place to start as any if you’re interested in knowing more about hard drives. In any case, we now have the following prevailing form factors in use as hard drive storage:

  • 3.5″ drives – still reasonably common in desktop computers and “cheap and deep” storage systems;
  • 2.5″ drives (SFF) – popular in laptops and used as a “dense” form factor for a variety of server and storage solutions;
  • U.2 – mainstream PCIe SSD form factor that has the same dimensions as 2.5″ drives; and
  • M.2 – designed for laptops and tablets.

Challenges

There are a number of challenges associated with the current drive form factors. The most notable of these is the density issue. Drive (and storage) vendors have been struggling for years to try and cram more and more devices into smaller spaces whilst increasing device capacities as well. This has led to problems with cooling, power, and overall reliability. Basically, there’s only so much you can put in 1RU without the whole lot melting.

 

A Ruler? Wait, what?

Intel’s “Ruler” is a ruler-like, long (or short) drive based on the EDSFF (Enterprise and Datacenter Storage Form Factor) specification. There’s a tech brief you can view here. There are a few different versions (basically long and short), and it still leverages NVMe via PCIe.

[image courtesy of Intel]

It’s Denser

You can cram a lot of these things in a 1RU server, as Super Micro demonstrated a few months ago.

  • Up to 32 E1.L 9.5mm drives per 1RU
  • Up to 48 E1.S drives per 1RU

Which means you could be looking at around a petabyte of raw storage in 1RU (using 32TB E1.L drives). This number is only going to go up as capacities increase. Instead of half a rack of 4TB SSDs, you can do it all in 1RU.
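The arithmetic behind that petabyte claim is straightforward enough to sanity-check:

```python
# Back-of-the-envelope check on the 1RU density claim above.
e1l_drives_per_ru = 32   # E1.L 9.5mm drives per 1RU
drive_capacity_tb = 32   # using 32TB drives

raw_tb_per_ru = e1l_drives_per_ru * drive_capacity_tb
print(raw_tb_per_ru)  # 1024 -- call it a petabyte of raw storage in 1RU
```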

It’s Cooler

Cooling has been a problem for storage systems for some time. A number of storage vendors have found out the hard way that jamming a bunch of drives in a small enclosure has a cost in terms of power and cooling. Intel tell us that they’ve had some (potentially) really good results with the E1.L and E1.S based on testing to date (in comparison to traditional SSDs). They talked about:

  • Up to 2x less airflow needed per E1.L 9.5mm SSD vs. U.2 15mm (based on Intel’s internal simulation results); and
  • Up to 3x less airflow needed per E1.S SSD vs. U.2 7mm.

Still Serviceable

You can also replace these things when they break. Intel say they’re:

  • Fully front serviceable with an integrated pull latch;
  • Support integrated, programmable LEDs; and
  • Support remote, drive specific power cycling.

 

Thoughts And Further Reading

SAN and NAS became popular in the data centre because you could jam a whole bunch of drives in a central location and you weren’t limited by what a single server could support. For some workloads though, having storage decoupled from the server can be problematic either in terms of latency, bandwidth, or both. Some workloads need their storage as close to the processor as possible. Technologies such as NVMe over Fabrics are addressing that issue to an extent, and other vendors are working to bring the compute closer to the storage. But some people just want to do what they do, and they need more and more storage to do it. I think the “ruler” form factor is an interesting approach to the issue traditionally associated with cramming a bunch of capacity in a small space. It’s probably going to be some time before you see this kind of thing in data centres as a matter of course, because it takes a long time to change the way that people design their servers to accommodate new standards. Remember how long it took for SFF drives to become as common in the DC as they are? No? Well it took a while. Server designs are sometimes developed years (or at least months) ahead of their release to the market. That said, I think Intel have come up with a really cool idea here, and if they can address the cooling and capacity issues as well as they say they can, this will likely take off. Of course, the idea of having 1PB of data sitting in 1RU should be at least a little scary in terms of failure domains, but I’m sure someone will work that out. It’s just physics after all, isn’t it?

There’s also an interesting article at The Register on the newer generation of drive form factors that’s worth checking out.

Axellio Announces FX-WSSD

 

Axellio (a division of X-IO Technologies) recently announced their new FX-WSSD appliance based on Windows Server 2019. I had the opportunity to speak to Bill Miller (CEO) and Barry Martin (Product Manager for the HCI WSSD product) and thought I’d share some thoughts here.

 

What Is It?

Axellio recently announced the new FabricXpress Hyper-Converged Infrastructure (HCI) | Windows Server Software-Defined Datacenter (known as FX-WSSD to its friends). It’s built on the Axellio Edge FX-1000 platform and comes licensed with Windows Server Datacenter Edition 2019 and runs Microsoft Storage Spaces Direct. You can manage it with Windows Admin Center and the (optional) 5nine management suite.

 

Density

A big part of the Axellio story here revolves around density. You get 4 nodes in 4 RU, and up to 36 NVMe drives per server. Axellio tell me you can pack up to 920TB of raw NVMe-based storage in these things (assuming you’re deploying 6.4TB NVMe drives). You can also have a minimum of 4 drives per server if you have a requirement that is more reliant on processing. There’s a full range of iWARP adapters from Chelsio Communications available with support for 4x 10, 40, or 100GbE connections.

[image courtesy of Axellio]

You can start small and scale up (or out) if required. There’s support for up to 16 nodes in a cluster, and you can manage multiple clusters together if need be.

 

Not That Edge

When I think of edge computing I think of scientific folks doing funky things with big data and generally running Linux-type workloads. While this type of edge computing is still common (and well-catered for with Axellio’s solutions), Axellio are going after what they refer to as the “enterprise edge” market as opposed to the non-Windows workloads. The Windows DC Edition licensing makes sense if you want to run Hyper-V and a number of Windows-based workloads, such as Active Directory domain controllers, file and print services, small databases (basically the type of enterprise workloads traditionally found in remote offices).

 

Thoughts and Further Reading

I’m the first to admit that my working knowledge of current Windows technologies is nowhere near what it was 15 years ago. But I understand why choosing Windows as the foundation platform for the edge HCI appliance makes sense for Axellio. There’s a lot less investment they need to make in terms of raw product development, the Windows virtualisation platform continues to mature, there’s already a big install base of Windows in the enterprise, and operations folks will be fairly comfortable with the management interface.

I’ve written about Axellio’s Edge solution previously, and this new offering is a nice extension of that with some Windows chops and “HCI” sensibilities. I’m not interested in getting into a debate about whether this is really a hyper-converged offering or not, but there’s a bunch of compute, storage and networking stuck together with a hypervisor and management tier to help keep it running. Whatever you want to call it, I can see this being a useful (and flexible) solution for those shops who need to have certain workloads close to the edge, and are already leveraging the Windows operating platform to do it.

You can grab the Axellio Data Sheet from here, and a copy of the press release can be found here.

NGD Systems Are On The Edge Of Glory


 

NGD Systems recently presented at Storage Field Day 17. You can see their videos from Storage Field Day 17 here, and download a PDF copy of my rough notes from here.

 

Edgy

Storage and compute / processing requirements at the edge aren’t necessarily new problems. People have been trying to process data outside of their core data centres for some time now. NGD Systems have a pretty good handle on the situation, and explained it thusly:

  • A massive amount of data is now produced at the edge;
  • AI algorithms demand large amounts of data; and
  • Moving data to cloud is often not practical.

They’ve taken a different approach with “computational storage”, moving the compute to the storage itself. The problem then becomes one of power per TB, cost per GB, and in-situ processing capability. Their focus has been on delivering a power-efficient, low-cost computational storage solution.

A Novel Solution – Move Compute to Storage

Key attributes:

  • Maintain familiar methodology (no new learning)
  • Use standard protocols (NVMe) and processes (no new commands)
  • Minimise interface traffic (power and time savings)
  • Enhancing limited footprint with maximum benefit (customer TCO)

Moving Computation to Data is Cheaper than moving Data

  • A computation requested by an application is much more efficient if it is executed near the data it operates on
    • Minimises network traffic
    • Increases effective throughput and performance of the system (e.g. the Hadoop Distributed File System)
    • Enables distributed processing
  • Especially true for big data (analytics): large sets and unstructured data
  • Traditional approach: high-performance servers coupled with SAN/NAS storage – Eventually limited by networking bottlenecks
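The points above can be illustrated with a toy model: if the drive can run a filter in situ and return only the matching records, the interface traffic drops in proportion to the selectivity of the query. The numbers below are made up purely for illustration.

```python
# Toy model of compute-near-data: pushing the filter down to the drive ships
# only the matching records over the interface instead of the whole data set.
records = [{"id": i, "hot": i % 100 == 0} for i in range(10_000)]
record_size_bytes = 512

# Traditional approach: move everything to the host, then filter there.
bytes_moved_traditional = len(records) * record_size_bytes

# Computational storage: the drive filters in situ and returns only matches.
matches = [r for r in records if r["hot"]]
bytes_moved_in_situ = len(matches) * record_size_bytes

print(bytes_moved_traditional)  # 5120000
print(bytes_moved_in_situ)      # 51200 -- 100x less interface traffic
```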

 

Thoughts and Further Reading

NGD are targeting some interesting use cases, including:

  • Hyperscalers;
  • Content Delivery Networks; and
  • The “Fog Storage” market.

They say that these storage solutions solve the low power, more efficient compute needs without placing strain on the edge and “fog” platforms. I found the CDN use case to be particularly interesting. When you have a bunch of IP-addressable storage sitting in a remote point of presence it can sometimes be a pain to have them talking back to a centralised server to get decryption keys for protected content, for example. In this case you can have the drives do the key handling and authentication, providing faster access to content than would be possible in latency-constrained environments.

It seems silly to quote Gaga Herself when writing about tech, but I think NGD Systems are taking a really interesting approach to solving some of the compute problems at the edge. They’re not just talking about jamming a bunch of disks together with some compute. Instead, they’re jamming the compute in each of the disks. It’s not a traditional approach to solving some of the challenges of the edge, but it seems like it has legs for those use cases mentioned above. Edge compute and storage is often deployed in reasonably rugged environments that are not as well-equipped as large DCs in terms of cooling and power. The focus on delivering processing at storage that relies on minimal power and uses standard protocols is intriguing. They say they can do it at a reasonable price too, making the solution all the more appealing for those companies facing difficulties using more traditional edge storage and compute solutions.

You can check out the specifications of the Newport Platform here. Note the various capacities depend on the form factor you are consuming. There’s also a great paper on computational storage that you can download from here. For some other perspectives on computational storage, check out Max’s article here, and Enrico’s article here.

Violin Systems Announces Violin XVS 8

Violin Systems recently announced their new XVS 8 platform. I had the opportunity to speak to Gary Lyng (Chief Marketing Officer) and thought I’d share some thoughts here.

 

Background

A few things have changed for Violin since they folded as Violin Memory and were acquired by Soros in 2017. Firstly, they’re now 100% channel focused. And secondly, according to Lyng, they’re “all about microseconds”.

What Really Matters?

Violin are focused on extreme performance, specifically:

  • Low latency;
  • Consistent performance (24x7x365); and
  • Enterprise data services.

The key use cases they’re addressing are:

  • Tier 0;
  • Realtime insight;
  • OLTP, DB, VDI;
  • AI / ML;
  • Commercial IoT; and
  • Trading, supply chain.

 

The Announcement

The crux of the announcement is the Violin XVS 8.

[image courtesy of Violin Systems]

Specifications

Performance | Latency as low as 50µs (up to 800µs); dedupe LUN performance improved by >40%
Capacity    | Usable: 44.3TB – 88.7TB; effective: 256TB – 512TB
 

Enterprise Data Services

Efficiency  | Dedupe + compression (6:1 reduction ratio); low-impact snapshots, thin provisioning, thin and thick clones
Continuity  | Synchronous replication (local/metro); asynchronous replication; stretch clusters (0 RPO & RTO – 7700); NDU
Protection  | Snapshots (crash consistent); consistency groups (snaps & replication); transparent LUN mirroring
Scalability | Online LUN expansion; capacity pooling across shelves; single namespace
Hosts       | 8x 32Gb FC (NVMe ready) or 8x 10GbE iSCSI

Feature Summary

Performance & Experience Advances

  • Consistent-Performance Guarantee
  • Cloud-based predictive analytics providing insight into future performance needs
  • NVMe over FC

Flexibility & Efficiency

  • Single Platform with selectable dedupe per LUN / Application
  • Snap-Dedupe

Application Infrastructure Ecosystems

Other Neat Features

32Gbps FC connectivity

Concerto OS updates (expected early Q1 2019)

  • Simple software upgrade to existing systems
  • Lowered IO Latency, Higher Bandwidth
  • Lower CPU usage and enable cost savings through compute and software consolidation
  • Optimised for transporting data from solid state storage to numerous processors

Everyone Has An App Now

All the cool storage vendors have an app. You can walk into your DC and (assuming you have the right credentials) scan a code on the front of the box. This will get you access to cloud-based analytics to see just how your system is performing.

[image courtesy of Violin Systems]

 

Thoughts

Violin Memory were quite the pioneers in the all-flash storage market many years ago. The pundits lamented the issues that Violin had keeping pace with some of the smaller start-ups and big box sellers in recent times. The decision to focus on the “extreme performance” space is an interesting one. Violin certainly have some decent pedigree when it comes to the enterprise data services that these types of high-end customers would be looking for. And it’s not just about speed, it’s also about resilience and reliability. I asked about the decision to pursue NVMe over FC, and Lyng said that the feeling was that technologies such as RoCE weren’t quite there yet.

I’m curious to see whether Violin can continue to have an impact on the market. This isn’t their first rodeo, and if the box can deliver the numbers that have been touted, it will make for a reasonably compelling offering. Particularly in the financial services / transactional space where time is money.

I Need Something Like Komprise For My Garage


 

Komprise recently presented at Storage Field Day 17. You can see their videos from Storage Field Day 17 here, and download a PDF copy of my rough notes from here. Here’s a blurry photo (love that iPhone camera quality) of Kumar K. Goswami (Founder and CEO of Komprise) presenting.

 

What’s In Your Garage?

My current house has a good sized garage, and we only have one car. So I have a lot of space to store things in it. When we moved in we added some storage cupboards and some additional shelving to accommodate our stuff. Much like Parkinson’s Law (and the corollary for storage systems), the number of things in my garage has expanded to fill the available space. I have toys from when my children were younger, old university assignments, clothes, Christmas decorations, oft-neglected gym equipment. You get the idea. Every year I give a bunch of stuff away to charities or throw it out. But my primary storage (new things) keeps expanding too, so I need to keep moving stuff to my garage for storage.

If you’ve ever had the good (!) fortune of managing file servers, you’ll understand that there’s a lot of data being stored in corporate environments that people don’t know what to do with. As Komprise pointed out in their presentation, we’re “[d]rowning in unstructured data”. Komprise wants to help out by “[i]dentifying cold data and syphoning it off before it goes into the data workflow and data protection systems”. The idea is that it delivers non-disruptive data management. Unlike cleaning up my garage, things just move about based on policies.

 

How’s That Work Then?

Komprise works by moving unstructured data about the place. It’s a hybrid SaaS solution, with a console in the cloud, and “observers” running in VMs on-premises.

[image courtesy of Komprise]

I don’t want to talk too much about how the product works, as I think the video presentation does a better job of that than I would. And there’s also an excellent article on their website covering the Komprise Filesystem. From a visualisation perspective though, the dashboard presents a “green doughnut”, providing information including:

  • Data by age;
  • File analytics (size, types, top users, etc); and
  • Policy modelling, with projected ROI based on the customer’s own costs.
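The “data by age” view is conceptually simple, even if Komprise’s analytics are far richer than this. Here’s a minimal sketch of the idea — bucketing files by days since last access — purely to show the concept; it’s my code, not theirs.

```python
import os
import time

# Minimal sketch of the "data by age" idea: bucket files by days since
# last access, based on each file's atime.
def age_buckets(paths, now=None):
    now = now or time.time()
    buckets = {"<30d": 0, "30-365d": 0, ">365d": 0}
    for path in paths:
        idle_days = (now - os.stat(path).st_atime) / 86400
        if idle_days < 30:
            buckets["<30d"] += 1
        elif idle_days <= 365:
            buckets["30-365d"] += 1
        else:
            buckets[">365d"] += 1
    return buckets
```

Run that over a file server share and you quickly get a feel for how much genuinely cold data is sitting on expensive primary storage.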

When files are moved around, Komprise leaves a “breadcrumb” on the source storage. They were careful not to call it a stub – it’s a Komprise Dynamic Link – a 4KB symbolic link.
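The breadcrumb concept itself can be illustrated with a plain symbolic link. To be clear, Komprise’s Dynamic Link is their own mechanism, not a bare symlink like this — the sketch below only shows the general idea of moving data to a cheaper tier while leaving something resolvable at the original path.

```python
import os
import shutil

# Sketch of the breadcrumb idea: relocate a cold file to a cheaper tier and
# leave a symbolic link behind so applications still find it at the old path.
def demote_with_link(src_path: str, cold_tier_dir: str) -> str:
    os.makedirs(cold_tier_dir, exist_ok=True)
    dest = os.path.join(cold_tier_dir, os.path.basename(src_path))
    shutil.move(src_path, dest)  # move the data to the cheaper tier
    os.symlink(dest, src_path)   # breadcrumb at the original location
    return dest
```

Applications reading the original path transparently follow the link, which is the whole point of non-disruptive data management.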

 

It’s A Real Problem

One thing that really struck me about Komprise’s presentation was when they said they wanted to “[m]ove things you don’t want to cheaper storage”. It got me thinking that a lot of corporate file servers are very similar to my garage. There’s an awful lot of stuff being stored on them. Some of it is regularly used (much like my Christmas decorations), and some of it not as much (more like my gym equipment). So why don’t we throw stuff out? Well, when you’re in business, you generally have to work within the confines of various frameworks and regulations. So it’s not as simple as saying “Let’s get rid of the old stuff we haven’t used in 24 months”. Unlike those particularly unhelpful self-help books on decluttering, trashing corporate data isn’t the same as throwing out old boxes of magazines.

This is a real problem for corporations, and is only going to get worse. More and more data is being generated every day, much of it simply dumped on unstructured file stores with little to no understanding of the data’s value. Komprise seem to be doing a good job of helping to resolve an old problem. I still naively like to think that this would be better if people would use document management systems properly and take some responsibility for their stuff. But, much like the mislabelled boxes of files in my garage, it’s often not that simple. People move on, don’t know to do with the data, and assume that the IT folks will take care of it. I think solutions like the one from Komprise, while being technically very interesting, also have an important role to play in the enterprise. I’m just wondering if I can do something like this with all of the stuff in my garage.

 

Further Reading

I heartily recommend checking out Enrico’s post, as well as Aaron’s take on the data management problem.

Storage Field Day 17 – (Fairly) Full Disclosure


Here are my notes on gifts, etc, that I received as a delegate at Storage Field Day 17. I’d like to point out that I’m not trying to play companies off against each other. I don’t have feelings one way or another about receiving gifts at these events (although I generally prefer small things I can fit in my suitcase). Rather, I’m just trying to make it clear what I received during this event to ensure that we’re all on the same page as far as what I’m being influenced by. Some presenters didn’t provide any gifts as part of their session – which is totally fine. I’m going to do this in chronological order, as that was the easiest way for me to take notes during the week. Whilst every delegate’s situation is different, I took 5 days of unpaid leave to be at this event.


Saturday

I took one of those “ride-sharing” services (that seem awfully similar to taxis) to the airport. I have some status with Qantas so I get a little bit of special treatment and lounge access. In Sydney I had some cheese and crackers and a couple of Coopers Original Pale Ales in the lounge. I flew BNE – SYD – SFO, and paid for my own upgrade to Premium Economy for the SYD – SFO leg of the trip. It was very nice to be at the front, rather than the back, of the plane for a change.


Tuesday

I dropped in at Georgiana Comsa‘s house for a coffee, and we then had lunch at Una Mas Mexican Grill prior to meeting with a vendor. I had the Foghead chicken and bacon burrito, with home-made BBQ sauce, cheese, guacamole, sour cream, black beans, rice and corn salsa in a tomato tortilla, along with an agua fresca. This was paid for by Georgiana. After our meeting Georgiana very kindly dropped me off at Mountain View, saving me the hassle of getting a Caltrain there.

I then met up with Craig Waters at the Pure Storage office in Mountain View. I had a few Modelo Especial beers while we caught up. We then went to QBB for dinner. I had a pulled pork sandwich, pickles, potato salad, and a Kölsch beer. It seems like this would be the place to go if you’re into bourbon too. We followed this up with a cheeky brew at Bierhaus. I had a Paulaner Hefe-Weißbier. I then took a ride sharing service back to Menlo Park. This was all covered by Pure Storage. It was great to catch up with Craig again as it had been a while.


Wednesday

While waiting for the other delegates to join us for dinner I had 2 Modelo Negra beers at the hotel bar with Howard Marks. I paid for these myself. When everyone turned up I had another Modelo, paid for by Tech Field Day. We had dinner at Spencer’s for Steaks and Chops at the hotel. I had some prawns and cheese and crackers prior to the main. Howard picked out a nice chianti to accompany the meal. I had a wedge salad with cranberries, blue cheese, candied pecans, bacon, apples and balsamic vinaigrette. For the main I had a 14oz USDA prime boneless ribeye with asparagus and potato mousseline. This was followed up by praline cake for dessert. I was feeling pretty full by this stage, but, hoping to ensure I got a poor night’s sleep, I had one more Modelo in the hotel bar before bed.

I was also lucky enough to score two signed George R. R. Martin books as part of the Yankee gift swap we did. Well, I hunted them down, as Howard had hinted he’d be bringing something like that along as a gift. Ben from Tech Field Day also gave each delegate a gift bag of various snacks, etc. I left most of these in my room when I left, but I did take the bag with me, as it had a Minion riding a unicorn on it, and this seemed like something I’d want to take grocery shopping.


Thursday

We had breakfast at the hotel on Thursday morning. This was a buffet-style affair and I had bacon, scrambled eggs, sausage, fruit and strawberry yoghurt, along with some orange juice and coffee. We were all given a Gestalt IT clear bag with a few bits and pieces in it. Stephen also gave us a 3D-printed SFD17 souvenir. I’ll leave you to work out what it is.

Komprise gave each delegate a sticker and pen.

We had lunch at the hotel. It was Mexican style. I had salmon, lettuce, a chipotle chicken fajita, guacamole, sour cream and whole pinto beans in a flour tortilla, a cheese enchilada, some iced tea, and a Mexican chocolate caramel bite.

For dinner we went to Loft Bar and Bistro. I had some crispy calamari, tomato and fresh mozzarella salad, Chinese chicken salad, pesto salmon and grilled bistro filet. I also had four 805 beers.

I followed this up with a Modelo at the hotel bar before retiring for the evening.


Friday

We had breakfast at Mikayla’s. I’ve been here before and really like it. I had the supreme breakfast wrap with bacon, freshly squeezed orange juice and coffee. I had some water during the Intel session. Intel also gave each delegate an Intel SPDK carry bag and Optane socks. Lunch at Intel was from Dish n’ Dash. I had a beef shawarma wrap and some water.

NGD Systems gave each of us a USB fan (something like this), a cap, sticker, and webcam cover (I don’t have that many webcams, but maybe one day). I had 3 Modelo Negra beers at the hotel before we headed out to dinner at Mexicali Grill. I decided to play it safe this time and didn’t load up with a jumbo shrimp burrito prior to a 14-hour flight. Instead I had the Camarones Cancun and a Modelo beer, along with some guacamole.

Stephen and Ben dropped me at SFO. I had 2 Heinekens and some crackers and cheese in the Cathay Pacific lounge (yay status!). All in all, it was a great trip. Thanks again to Tech Field Day for having me, thanks to the other delegates for being super nice and smart, and thanks to the presenters for some educational and engaging sessions.

Storage Field Day 17 – Day 0

Disclaimer: I recently attended Storage Field Day 17.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

This is just a quick post to share some thoughts on day zero at Storage Field Day 17. Thanks again to Stephen and the team at Gestalt IT for having me back, making sure everything is running according to plan, and for just being really decent people. I’ve really enjoyed catching up with the people I’ve met before and meeting the new delegates. Look out for some posts related to the Tech Field Day sessions in the next few weeks. And if you’re in a useful timezone, check out the live streams from the event here, or the recordings afterwards.

Here’s the rough schedule for the next three days (all times are ‘Merican Pacific).

  • Thursday, Sep 20, 11:00-12:00 – Komprise Presents at Storage Field Day 17
  • Thursday, Sep 20, 13:00-15:00 – StarWind Presents at Storage Field Day 17
  • Friday, Sep 21, 9:30-11:30 – Intel Optane Presents at Storage Field Day 17
  • Friday, Sep 21, 13:00-15:00 – NGD Systems Presents at Storage Field Day 17

You could also follow along with the livestream here.