StorONE Announces AFA.next

StorONE recently announced the All-Flash Array.next (AFAn). I had the opportunity to speak to George Crump (StorONE Chief Marketing Officer) about the news, and thought I’d share some brief thoughts here.

 

What Is It? 

It’s a box! (Sorry, I’ve been re-watching Silicon Valley with my daughter recently.)

[image courtesy of StorONE]

More accurately, it’s an Intel Server with Intel Optane and Intel QLC storage, powered by StorONE’s software.

S1:Tier

S1:Tier is StorONE’s tiering solution. It operates between a high and a low watermark. Once the Optane tier fills to the high watermark, data is written out, sequentially, to QLC. The neat thing is that when you need to recall data that lives on QLC, you don’t necessarily need to move it all back to the Optane tier. Rather, read requests can be served directly from QLC. StorONE calls this a multi-tier capability, because you can then move data to cloud storage for long-term retention if required.
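To make the watermark behaviour concrete, here’s a minimal sketch in Python. To be clear, this isn’t StorONE’s implementation; the thresholds, tier names, and eviction policy are assumptions purely for illustration.

```python
# Illustrative high/low watermark tiering (not StorONE's implementation).
# Writes land on the Optane tier; once utilisation crosses the high
# watermark, the oldest data is drained sequentially to QLC until the
# low watermark is reached. Reads are served from whichever tier holds
# the data - nothing is promoted back to Optane on a read.
from collections import OrderedDict

class TieredStore:
    def __init__(self, optane_capacity_bytes, high_pct=0.9, low_pct=0.6):
        self.high = high_pct * optane_capacity_bytes
        self.low = low_pct * optane_capacity_bytes
        self.optane = OrderedDict()   # key -> bytes, insertion order ~ age
        self.qlc = {}

    def write(self, key, data):
        self.optane[key] = data
        self.optane.move_to_end(key)            # newest data is hottest
        if self._optane_used() >= self.high:
            self._drain_to_qlc()

    def read(self, key):
        # Serve the read from QLC directly if that's where the data lives.
        return self.optane.get(key, self.qlc.get(key))

    def _optane_used(self):
        return sum(len(v) for v in self.optane.values())

    def _drain_to_qlc(self):
        # Evict the oldest entries (a sequential write pattern on QLC)
        # until we're back under the low watermark.
        while self._optane_used() > self.low and self.optane:
            key, data = self.optane.popitem(last=False)
            self.qlc[key] = data
```

The point of the sketch is the shape of the behaviour: writes always land on Optane, the flush to QLC happens as a large sequential drain, and reads are served from wherever the data happens to live.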

[image courtesy of StorONE]

S1:HA

Crump noted that the Optane drives are single-ported, leading some customers to look at highly available configurations. These are catered for with a variation of S1:HA, where the HA solution is now a synchronous mirror between two stacks.

 

Thoughts and Further Reading

I’m not just a fan of StorONE because the company occasionally throws me a few dollarydoos to keep the site running. I’m a fan because the folks over there do an awful lot of storage type stuff on what is essentially commodity hardware, and they’re getting results that are worth writing home about, with a minimum of fuss. The AFAn uses Optane as a storage tier, not just read cache, so you get all of the benefit of Optane write performance (many, many IOPS). It has the resilience and data protection features you see in many midrange and enterprise arrays today (namely vRAID, replication, and snapshots). Finally, it has varying support for all three use cases (block, file, and object), so there’s a good chance your workload will fit on the box.

More and more vendors are coming to market with Optane-based storage solutions. It still seems that only a small number of them are taking full advantage of Optane as a write medium, instead focusing on its benefit as a read tier. As I mentioned before, Crump and the team at StorONE have pointed to some pretty decent numbers coming out of the AFAn. I think the best thing is that it’s now available as a configuration item on the StorONE TRUprice site as well, so you can see for yourself how much the solution costs. If you’re after a whole lot of performance in a small box, this might be just the thing. You can read more about the solution and check out the lab report here. My friend Max wrote a great article on the solution that you can read here.

Komprise Announces Cloud Capability

Komprise recently made some announcements around extending its product to cloud. I had the opportunity to speak to Krishna Subramanian (President and COO) about the news and I thought I’d share some of my thoughts here.

 

The Announcement

Komprise has traditionally focused on unstructured data stored on-premises. It has now extended the capabilities of Komprise Intelligent Data Management to include cloud data. There’s currently support for Amazon S3 and Wasabi, with Google Cloud, Microsoft Azure, and IBM support coming soon.

 

Benefits

So what do you get with this capability?

Analyse data usage across cloud accounts and buckets easily

  • Single view across cloud accounts, buckets, and storage classes
  • Analyse AWS usage by various metrics accurately based on access times
  • Explore different data archival, replication, and deletion strategies with instant cost projections

Optimise AWS costs with analytics-driven archiving

  • Continuously move objects by policy across Cloud Network Attached Storage (NAS), Amazon S3, Amazon S3 Standard-IA, Amazon S3 Glacier, and Amazon S3 Glacier Deep Archive
  • Minimise costs and penalties by moving data at the right time based on access patterns (see the sketch after this list)

Bridge to Big Data/Artificial Intelligence (AI) projects

  • Create virtual data lakes for Big Data, AI – search for exactly what you need across cloud accounts and buckets
  • Native access to moved data on each storage class with full data fidelity

Create Cyber Resiliency with AWS

  • Copy S3 data to AWS to protect from ransomware with an air-gapped copy
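As a rough point of comparison for the analytics-driven archiving item above, here’s what policy-based movement between those storage classes looks like using nothing more than a native S3 lifecycle rule via boto3. This is plain AWS tooling, not how Komprise does it (Komprise’s pitch is deciding what to move based on observed access patterns rather than a fixed age), and the bucket name, prefix, and day thresholds are invented for the example.

```python
# A plain S3 lifecycle rule, configured with boto3, that transitions objects by
# age through the storage classes listed above. This is native AWS tooling, not
# Komprise; the bucket, prefix, and day thresholds are invented for the example.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",                        # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-cold-data",
                "Filter": {"Prefix": "projects/"},  # hypothetical prefix
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                    {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
                ],
            }
        ]
    },
)
```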

[image courtesy of Komprise]

 

Why Is This Good?

The move to cloud storage hasn’t been all beer and skittles for enterprise. Storing large amounts of data in public cloud presents enterprises with a number of challenges, including:

  • Poor visibility – “Bucket sprawl”
  • Insufficient data – Cloud does not easily track last access / data use
  • Cost complexity – Manual data movement can lead to unexpected retrieval cost surprises
  • Labour – Manually moving data is error-prone and time-consuming

Sample Use Cases

Some other reasons you might want to have Komprise manage your data include:

  • Finding ex-employee data stored in buckets.
  • Data migration – you might want to take a copy of your data from Wasabi to AWS.

There’s support for all unstructured data (file and object), so the benefits of Komprise can be enjoyed regardless of how you’re storing your unstructured data. It’s also important to note that there’s no change to the existing licensing model; you’re just now able to use the product on public cloud storage.

 

Thoughts

Effective data management remains a big challenge for enterprises. It’s no secret that public cloud storage is really just storage that lives in another company’s data centre. Sure, it might be object storage, rather than file based, but it’s still just a bunch of unstructured data sitting in another company’s data centre. The way you consume that data may have changed, and certainly the way you pay for it has changed, but fundamentally it’s still your unstructured data sitting on a share or a filesystem. The problems you had on-premises, though, still manifest in public cloud environments (i.e. data sprawl, capacity issues, etc.). That’s why the Komprise solution seems so compelling when it comes to managing your on-premises storage consumption, and extending that capability to cloud storage is a no-brainer. When it comes to storing unstructured data, it’s frequently a bin fire of some sort or another. The reason for this is that it doesn’t scale well. I don’t mean the storage doesn’t scale – you can store petabytes all over the place if you like. But if you’re still hand-crafting your shares and manually moving data around, you’ll notice that it becomes more and more time-consuming as time goes on (and your data storage needs grow).

One way to address this challenge is to introduce a level of automation, which is something that Komprise does quite well. If you’ve got many terabytes of data stored on-premises and in AWS buckets (or you’re looking to move some old data from on-premises to the cloud) and you’re not quite sure what it’s all for or how best to go about it, Komprise can certainly help you out.

Datadobi Announces S3 Migration Capability

Datadobi recently announced S3 migration capabilities as part of DobiMigrate 5.9. I had the opportunity to speak to Carl D’Halluin and Michael Jack about the announcement and thought I’d share some thoughts on it here.

 

What Is It?

In short, you can now use DobiMigrate to perform S3 to S3 object storage migrations. It’s flexible too, offering the ability to migrate data from a variety of on-premises object systems up to public cloud object storage, between on-premises systems, or back to on-premises from public cloud storage. There’s support for a broad range of S3-compatible systems.

In the future Datadobi is looking to add support for AWS Glacier, object locks, object tags, and non-current object versions.

 

Why Would You?

There are quite a few reasons why you might want to move S3 data around. You could be seeing high egress charges from AWS because you’re accessing more data in S3 than you’d initially anticipated. You might be looking to move to the cloud and have a significant on-premises footprint that needs to go. Or you might be looking to replace your on-premises solution with a solution from another vendor.

 

How Would You?

The process used to migrate objects is fairly straightforward, and follows a pattern that will be familiar if you’ve done anything with any kind of storage migration tool before. In short, you set up a migration pair (source and destination), run a scan and first copy, then do some incremental copies. Once you’ve got a maintenance window, there’s a cutover where the final scan and copy are done. And then you’re good to go. Basically.
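For a feel of that scan / first copy / incremental / cutover pattern, here’s a bare-bones boto3 sketch of an S3-to-S3 sync. It is emphatically not DobiMigrate, which layers scheduling, verification, chain of custody, and reporting on top; the bucket names are hypothetical and the ETag comparison is a simplification.

```python
# Bare-bones S3-to-S3 sync (not DobiMigrate): scan the source, copy anything
# the destination lacks or that differs, and re-run for incrementals until the
# final pass at cutover. ETag comparison is a simplification (multipart uploads
# complicate it), and the bucket names are hypothetical.
import boto3

s3 = boto3.client("s3")
SRC, DST = "source-bucket", "destination-bucket"

def list_objects(bucket):
    """Return {key: ETag} for every object in the bucket."""
    objects = {}
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            objects[obj["Key"]] = obj["ETag"]
    return objects

def sync_pass():
    src, dst = list_objects(SRC), list_objects(DST)
    for key, etag in src.items():
        if dst.get(key) != etag:     # missing on, or different at, the destination
            s3.copy_object(
                Bucket=DST,
                Key=key,
                CopySource={"Bucket": SRC, "Key": key},
            )

sync_pass()   # first copy; repeat for incrementals, then one last time at cutover
```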

[image courtesy of Datadobi]

 

Final Thoughts

Why am I so interested in these types of offerings? Part of it is that it reminds me of all of the time I burnt through earlier in my career migrating data from various storage platforms to other storage platforms. One of the funny things about storage is that there’s rarely enough to service demand, and it rarely delivers the performance you need after it’s been in use for a few years. As such, there’s always some requirement to move data from one spot to another, and to keep that data intact in terms of its permissions and metadata.

Amazon’s S3 offering has been amazing in terms of bringing object storage to the front of mind of many storage consumers who had previously only used block or file storage. Some of those users are now discovering that, while S3 is great, it can be expensive if you haven’t accounted for egress costs, or you’ve started using a whole lot more of it than initially anticipated. Some companies simply have to take their lumps, as everything is done in public cloud. But for those organisations with some on-premises footprint, the idea of being able to do performance-oriented object storage in their own data centre holds a great deal of appeal. But how do you get it back on-premises in a reliable fashion? I believe that’s where Datadobi’s solution really shines.

I’m a fan of software that makes life easier for storage folk. Platform migrations can be a real pain to deal with, and are often riddled with risky propositions and daunting timeframes. Datadobi can’t necessarily change the laws of physics in a way that will keep your project manager happy, but it can do some stuff that means you won’t be quite as broken after a storage migration as you might have been previously. They already had a good story when it came to file storage migration, and the object to object story enhances it. Worth checking out.

StorONE Announces S1:TRUprice

StorONE recently announced S1:TRUprice. I had the opportunity to talk about the announcement with George Crump, and thought I’d share some of my notes here.

 

What Is It?

It’s a website that anyone can access, providing a transparent view of StorONE’s pricing. There are three things you’ll want to know when doing a sample configuration:

  • Capacity;
  • Use case (All-Flash, Hybrid, or All-HDD); and
  • Preferred server hardware (Dell EMC, HPE, or Supermicro).

There’s also an option to do a software-only configuration if you’d rather roll your own. In the following example, I’ve configured HPE hardware in a highly available fashion with 92TB of capacity. This costs US $97,243.14. Simple as that. Once you’re happy with the configuration, you can have a formal quote sent to you, or choose to get on a call with someone.

 

Thoughts and Further Reading

Astute readers will notice that there’s a StorONE banner on my website, and the company has provided funds that help me pay the costs of running my blog. This announcement is newsworthy regardless of my relationship with StorONE though. If you’ve ever been an enterprise storage customer, you’ll know that getting pricing is frequently a complicated endeavour. There’s rarely a page hosted on the vendor’s website that provides the total cost of whatever array / capacity you’re looking to consume. Instead, there’ll be an exercise involving a pre-sales engineer, possibly some sizing and analysis, and a bunch of data put into a spreadsheet. This then magically determines the appropriate bit of gear. The specification is sent to a pricing team, some discounts to the recommended retail price are usually applied, and it’s sent to you to consider. If it’s a competitive deal, there might be some more discount. If it’s the end of quarter and the salesperson is “motivated”, you might find it’s a good time to buy. There are a whole slew of reasons why the price is never the price. But the problem with this is that you can never know the price without talking to someone working for the vendor. Want to budget for some new capacity? Or another site deployment? Talk to the vendor. This makes a lot of sense for the vendor. It gives the sales team insight into what’s happening in the account. There’s “engagement” and “partnership”. Which is all well and good, but does withholding pricing need to be the cost of this engagement?

The Cloud Made Me Do It

The public availability of cloud pricing is changing the conversation when it comes to traditional enterprise storage consumption. Not just in terms of pricing transparency, but also equipment availability, customer enablement, and time to value. Years ago we were all beholden to our storage vendor of choice to deliver storage to us under the terms of the vendor, and when the vendor was able to do it. Nowadays, even enterprise consumers can go and grab the cloud storage they want or need with only a small modicum of fuss. This has changed the behaviours of the traditional storage vendors in a way that I don’t think was foreseen. Sure, cloud still isn’t the answer to every problem, and if you’re selling big tin into big banks, you might have a bit of runway before you need to show your customers too much of what’s happening behind the curtain. But this move by StorONE demonstrates that there’s a demand for pricing transparency in the market, and customers are looking to vendors to show some innovation when it comes to the fairly boring business of enterprise storage. I’m very curious to see which other vendors decide to follow suit.

We won’t automatically see the end of some of the practices surrounding enterprise storage pricing, but initiatives like this certainly put some pressure back on the vendors to justify the price per GB they’re slinging gear for. It’s a bit easier to keep prices elevated when your customers have to do a lot of work to go to a competitor and find out what it charges for a similar solution. There are reasons for everything (including high prices), and I’m not suggesting that the major storage vendors have been colluding on price by any means. But something like S1:TRUprice is another nail in the coffin of the old way of doing things, and I’m happy about that. For another perspective on this news, check out Chris M. Evans’ article here.

Spectro Cloud – Profile-Based Kubernetes Management For The Enterprise

 

Spectro Cloud launched in March. I recently had the opportunity to speak to Tenry Fu (CEO) and Tina Nolte (VP, Products) about the launch, and what Spectro Cloud is, and thought I’d share some notes here.

 

The Problem?

I was going to start this article by saying that Kubernetes in the enterprise is a bin fire, but that’s too harsh (and entirely unfair on the folks who are doing it well). There is, however, a frequent compromise being made between ease of use, control, and visibility.

[image courtesy of Spectro Cloud]

According to Fu, the way that enterprises consume Kubernetes shouldn’t just be on the left or the right side of the diagram. There is a way to do both.

 

The Solution?

According to the team, Spectro Cloud is “a SaaS platform that gives Enterprises control over Kubernetes infrastructure stack integrations, consistently and at scale”. What does that mean though? Well, you get access to the “table stakes” SaaS management, including:

  • Managed Kubernetes experience;
  • Multi-cluster and environment management; and
  • Enterprise features.

Profile-Based Management

You also get some cool stuff that heavily leverages profile-based management, including infrastructure stack modelling and lifecycle management that can be done based on integration policies. In short, you build cluster profiles and then apply them to your infrastructure. The cluster profile usually describes the OS flavour and version, Kubernetes version, storage configuration, networking drivers, and so on. The Pallet orchestrator then ensures these profiles are used to maintain the desired cluster state. There are also security-hardened profiles available out of the box.
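To give a sense of what a cluster profile captures, here’s a paraphrased example as a Python structure. The field names and values are my own invention for illustration rather than Spectro Cloud’s actual schema; the point is that the profile is declarative, and the orchestrator’s job is to detect and reconcile drift against it.

```python
# A paraphrased cluster profile - illustrative field names only, not Spectro
# Cloud's actual schema. The profile declares the desired infrastructure stack;
# the orchestrator's job is to apply it to clusters and reconcile any drift.
cluster_profile = {
    "name": "prod-baseline",
    "os": {"flavour": "ubuntu", "version": "20.04"},
    "kubernetes": {"version": "1.18.6"},
    "storage": {"csi_driver": "vsphere-csi"},
    "networking": {"cni": "calico"},
    "addons": ["metrics-server", "prometheus"],
    "security_hardened": True,
}

def drift(profile, observed):
    """Return the keys where a cluster's observed state differs from the profile."""
    return {key: (profile[key], observed.get(key))
            for key in profile if observed.get(key) != profile[key]}

# Apply the same profile to many clusters; anything drift() reports needs
# to be reconciled back to the declared state.
print(drift(cluster_profile, {**cluster_profile, "kubernetes": {"version": "1.17.9"}}))
```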

If you’re a VMware-based cloud user, the appliance (deployed from an OVA file) sits in your on-premises VMware cloud environment and communicates with the Spectro Cloud SaaS offering over TLS, and the cloud properties are dynamically propagated.

Licensing

The solution is licensed on the number of worker node cores under management. This is tiered based on the number of cores and it follows a simple model: More cores and a longer commitment equals a bigger discount.
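As a toy illustration of how a core-based, tiered model with commitment discounts might shake out, consider the following sketch. Every number in it is invented; the post doesn’t include Spectro Cloud’s actual tiers or rates.

```python
# Toy per-core, tiered licensing model. Every number here is invented for
# illustration only - the actual Spectro Cloud tiers and rates aren't public
# in this post. The shape is the point: more cores and a longer commitment
# both push the effective per-core rate down.
def annual_cost(cores, commit_years, base_rate=10.0):
    if cores > 1000:          # hypothetical tier breakpoints
        rate = base_rate * 0.70
    elif cores > 250:
        rate = base_rate * 0.85
    else:
        rate = base_rate
    commitment_discount = min(0.05 * (commit_years - 1), 0.20)
    return cores * rate * (1 - commitment_discount)

print(annual_cost(cores=500, commit_years=3))   # e.g. 500 cores on a 3-year term
```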

 

The Differentiator?

Current Kubernetes deployment options vary in their complexity and maturity. You can take the DIY path, but you might find that this option is difficult to maintain at scale. There are packaged options available, such as VMware Tanzu, but you might find that multi-cluster management is not always a focus. The managed Kubernetes option (such as those offered by Google and AWS) has its appeal to the enterprise crowd, but those offerings are normally quite restricted in terms of technology offerings and available versions.

Why does Spectro Cloud have appeal as a solution then? Because you get control over the integrations you might want to use with your infrastructure, but also get the warm and fuzzy feeling of leveraging a managed service experience.

 

Thoughts

I’m no great fan of complexity for complexity’s sake, particularly when it comes to enterprise IT deployments. That said, there are always reasons why things get complicated in the enterprise. Requirements come from all parts of the business, legacy applications need to be fed and watered, rules and regulations seem to be in place simply to make things difficult. Enterprise application owners crave solutions like Kubernetes because there’s some hope that they, too, can deliver modern applications if only they used some modern application deployment and management constructs. Unfortunately, Kubernetes can be a real pain in the rear to get right, particularly at scale. And if enterprise has taught us anything, it’s that most enterprise shops are struggling to do the basics well, let alone the needlessly complicated stuff.

Solutions like the one from Spectro Cloud aren’t a silver bullet for enterprise organisations looking to modernise the way applications are deployed, scaled, and managed. But something like Spectro Cloud certainly has great appeal given the inherent difficulties you’re likely to experience if you’re coming at this from a standing start. Sure, if you’re a mature Kubernetes shop, chances are slim that you really need something like this. But if you’re still new to it, or are finding that the managed offerings don’t give you the flexibility you might need, then something like Spectro Cloud could be just what you’re looking for.

Datadobi Announces DobiMigrate 5.8 – Introduces Chain of Custody

Datadobi recently announced version 5.8 of its DobiMigrate software and introduced a “Chain of Custody” feature. I had the opportunity to speak to Carl D’Halluin and Michael Jack about the announcement and thought I’d share some thoughts on it here.

 

Don’t They Do File Migration?

If you’re unfamiliar with Datadobi, it’s a company that specialises in NAS migration software. It tends to get used a lot by the major NAS vendors as a rock-solid method of moving data off a competitor’s box and onto theirs. Datadobi has been around for quite a while, and a lot of the founders have heritage with EMC Centera.

Chain of Custody?

So what exactly does the Chain of Custody feature offer?

  • Tracking files and objects throughout an entire migration
  • Full photo-finish of source and destination system at cutover time
  • Forensic input which can serve as future evidence of tampering
  • Available for all migrations.
    • No performance hit.
    • No enlarged maintenance window.

[image courtesy of Datadobi]

Why Is This Important?

Organisations the world over are subject to a variety of legislative requirements to ensure that data presented as evidence in courts of law hasn’t been tampered with. Some of them spend an inordinate amount of money on document management systems (and the hardware those systems reside on) that offer all kinds of compliance and governance features, so that you can reliably get up in front of a judge and say that nothing has been messed with. Or you can reliably say that it has been messed with. Either way, it’s reliable. Unfortunately, nothing lasts forever (not even those Centera cubes we put in years ago).

So what do you do when you have to migrate your data from one platform to another? If you’ve just used rsync or robocopy to get the data from one share to another, how can you reliably prove that you’ve done so, without corrupting or otherwise tampering with the data? Logs are just files, after all, so what’s to stop someone “losing” some data along the way?
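For a sense of the difference between a copy log and something closer to verifiable evidence, consider a cryptographic manifest of both sides of the migration. The sketch below is very much not Datadobi’s chain of custody (which is built into the migration engine itself, with no performance hit); it just illustrates the kind of artefact you’d want to be able to produce, and the paths are hypothetical.

```python
# A minimal integrity check (not Datadobi's chain of custody): hash every file
# under the source and destination trees, then compare the digests. A signed,
# timestamped manifest like this is the sort of artefact you'd want to be able
# to produce at cutover time. Paths are hypothetical.
import hashlib
from pathlib import Path

def manifest(root):
    """Map each relative file path under root to its SHA-256 digest."""
    digests = {}
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            digests[str(path.relative_to(root))] = hashlib.sha256(
                path.read_bytes()
            ).hexdigest()
    return digests

src = manifest("/mnt/old_share")    # hypothetical source mount
dst = manifest("/mnt/new_share")    # hypothetical destination mount

missing = set(src) - set(dst)
mismatched = {p for p in src if p in dst and dst[p] != src[p]}
print(f"{len(missing)} missing, {len(mismatched)} mismatched")
```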

It turns out that a lot of folks in the legal profession have been aware that this was a problem for a while, but they’ve looked the other way. I am no lawyer, but as it was explained to me, if you introduce some doubt into the reliability of the migration process, it’s easy enough for the other side to counter that your stuff may not have been so reliable either, and the whole thing becomes something of a shambles. Of course, there’s likely a more coherent way to explain this, but this is a tech blog and I’m being lazy.

 

Thoughts

I’ve done all kinds of data migrations over the years. I think I’ve been fortunate that I’ve never specifically had to deal with a system that was being relied on seriously for legislative reasons, because I’m sure that some of those migrations were done more by the seat of my pants than anything else. Usually the last thing on the organisation’s mind (?) was whether the migration activity was compliant or not. Instead, the focus of the project manager was normally to get the data from the old box to the new box as quickly as possible and with as little drama / downtime as possible.

If you’re working on this stuff in a large financial institution though, you’ll likely have a different focus. And I’m sure the last thing your corporate counsel want to hear is that you’ve been playing a little fast and loose with data over the years. I anticipate this announcement will be greeted with some happiness by people who’ve been saddled with these kinds of daunting tasks in the past. As we move to a more and more digital world, we need to carry some of the concepts from the physical world across. It strikes me that Datadobi has every reason to be excited about this announcement. You can read the press release here.

 

FalconStor Announces StorSafe

Remember FalconStor? You might have used its VTL product years ago? Or perhaps the Network Storage Server product? Anyway, it’s still around, and recently announced a new product. I had the opportunity to speak to Todd Brooks (CEO) and David Morris (VP Products) to discuss StorSafe, something FalconStor is positioning as “the industry’s first enterprise-class persistent data storage container”.

 

What Is It?

StorSafe is essentially a way to store data via containers. It has the following features:

  • Long-term archive storage capacity reduction drives low cost;
  • Multi-cloud archive storage;
  • Automatic archive integrity validation & journaling in the cloud;
  • Data egress fee optimisation; and
  • Unified Management and Analytics Console.

Persistent Virtual Storage Container

StorSafe is a bit different to the type of container you might expect from a company with VTL heritage.

  • Does not rely on traditional tape formats (e.g. LTO constraints)
  • Variable payload capacity for archival optimisation by type
  • Execution capabilities for advanced features
  • Encryption, compression, and best-in-class deduplication
  • Erasure coding for redundancy across on-premises and cloud targets
  • Portable – transfer a container to a storage system or any S3 cloud
  • Archival retention for 10, 25, 50, and 100 years

[image courtesy of FalconStor]

Multi-Cloud Erasure Coding

  • The VSC is sharded into multiple Mini-Containers that are protected with erasure coding (a toy sketch of the idea follows this list)
  • These Mini-Containers can then be moved to multiple local, private data centres, or cloud destinations for archive
  • Tier Containers depending on Access Criticality or Limited Access needs
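To make the sharding idea a little more tangible, here’s a toy single-parity example. Real erasure coding schemes (Reed-Solomon and friends) tolerate multiple lost shards and I don’t know which scheme FalconStor uses, so treat this purely as an illustration of the principle:

```python
# Toy erasure coding: split a container into k data shards plus one XOR parity
# shard, spread the shards across destinations, and rebuild if any single shard
# is lost. Real schemes (e.g. Reed-Solomon) tolerate multiple losses; this only
# illustrates the principle behind the mini-containers.
from functools import reduce

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def shard(data: bytes, k: int):
    size = -(-len(data) // k)                       # ceiling division
    padded = data.ljust(size * k, b"\0")
    shards = [padded[i * size:(i + 1) * size] for i in range(k)]
    shards.append(reduce(xor, shards))              # parity shard
    return shards

def rebuild(shards, lost_index):
    """Recover a single lost shard by XOR-ing all of the survivors."""
    survivors = [s for i, s in enumerate(shards) if i != lost_index]
    return reduce(xor, survivors)

pieces = shard(b"archive payload " * 100, k=4)      # 4 data shards + 1 parity
assert rebuild(pieces, lost_index=2) == pieces[2]   # lose one shard, rebuild it
```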

[image courtesy of FalconStor]

 

Thoughts And Further Reading

If you’re familiar with my announcement posts, you’ll know that I try to touch on the highlights provided to me by the vendor about its product, and add my own interpretation. I feel like I haven’t really done StorSafe justice here, however. It’s a cool idea in a lot of ways: take a bunch of storage, dump it all over the place in a distributed fashion, and have it remain highly accessible and resilient. This isn’t designed for high performance storage requirements. This is very much focused on the kinds of data you’d be keen to store long-term, maybe on tape. I can’t tell you what this looks like from an implementation or performance perspective, so I can’t tell you whether the execution matches up with the idea that FalconStor has had. I find the promise of portability, particularly for data that you want to keep for a long time, extremely compelling. So let’s agree that this idea seems interesting, and watch this space for more on this as I learn more about it. You can read the press release here, and check out Mellor’s take on it here.

Retrospect Announces Backup 17 And Virtual 2020

Retrospect recently announced new versions of its Backup (17) and Virtual (2020) products. I had the opportunity to speak to JG Heithcock (GM, Retrospect) about the announcement and thought I’d share some thoughts here.

 

What’s New?

Retrospect Backup 17 has the following new features:

  • Automatic Onboarding: Simplified and automated deployment and discovery;
  • Nexsan E-Series / Unity Certification;
  • 10x Faster ProactiveAI; and
  • Restore Preflight for restores from cold storage.

Retrospect Virtual 2020 has the following enhancements:

  • Automatic Onboarding: Physical and Virtual monitoring from a single website;
  • 50% Faster;
  • Wasabi Cloud Support;
  • Backblaze B2 Cloud Support; and
  • Flexible licensing between VMware and Hyper-V.

Automatic Onboarding?

So what exactly is automatic onboarding? You can onboard new servers and endpoints for faster deployment and automatic discovery.

  • Share one link with your team. No agent password required.
  • Retrospect Backup finds and protects new clients with ProactiveAI.
  • Add servers, desktops, and laptops to Retrospect Backup.
  • Single pane of glass for entire backup infrastructure with Retrospect Management Console.
  • Available for Windows, Mac, and Linux.

You can also onboard a new Retrospect Backup server for faster, simplified deployment.

  • Protect group or site.
  • Customised installer with license built-in.
  • Seamless Management Console integration.
  • Available for Windows and Mac.

You can also onboard a new Retrospect Virtual server for complete physical and virtual monitoring.

  • Customised installer
  • Seamless Management Console integration.
  • Monitor Physical + Virtual

Pricing

There’s a variety of pricing available. When you buy a perpetual license, you have access to any new minor or major version upgrades for 12 months. With the monthly subscription model you have access to the latest version of the product for as long as you keep the subscription active.

[image courtesy of Retrospect]

 

Thoughts And Further Reading

Retrospect was acquired by StorCentric in June 2019 after bouncing around a few different owners over the years. It’s been around for a long time, and has a rich history of delivering data protection solutions for small business and “prosumer” markets. I have reasonably fond memories of Retrospect from the time when it was shipped with Maxtor OneTouch external hard drives. Platform support is robust, with protection options available across Windows, macOS and some Linux, and the pricing is competitive. Retrospect is also benefitting from joining the StorCentric family, and I’m looking forward to hearing about more product integrations as time goes on.

Why would I cover a data protection product that isn’t squarely targeted at the enterprise or cloud market? Because I’m interested in data protection solutions across all areas of IT. I think the small business and home market is particularly under-represented when it comes to easy to deploy and run solutions. There is a growing market for cloud-based solutions, but simple local protection options still seem to be pretty rare. The number of people I talk to who are just manually copying data from one spot to another is pretty crazy. Why is it so hard to get good backup and recovery happening on endpoints? It shouldn’t be. You could argue that, with the advent of SaaS services and cloud-based storage solutions, the requirement to protect endpoints the way we used to has changed. But local protection options still make it a whole lot quicker and easier to recover.

If you’re in the market for a solution that is relatively simple to operate, has solid support for endpoint operating systems and workloads, and is competitively priced, then I think Retrospect is worth evaluating. You can read the announcement here.

StorCentric Announces QLC E-Series 18F

Nexsan recently announced the release of its new E-Series 18F (E18F) storage platform. I had the chance to chat with Surya Varanasi, CTO of StorCentric, about the announcement and thought I’d share some thoughts here.

 

Less Disk, More Flash

[image courtesy of Nexsan]

The E18F is designed and optimised for quad-level cell (QLC) NAND technology. If you’re familiar with the Nexsan E-Series range, you’d be aware of the E18P that preceded this model. This is the QLC Flash version of that.

Use Cases

We spoke about a couple of use cases for the E18F. The first of these was with data lake environments. These are the sort of storage environments, with 20 to 30PB installations, that are subjected to random workload pressures. The idea of using QLC is to increase the performance without significantly increasing the cost. That doesn’t mean that you can do a like for like swap of HDDs for QLC Flash. Varanasi did, however, suggest that Nexsan had observed a 15x improvement over a hard drive installation for around 3-4 times the cost, and he’s expecting that to go down to 2-3 times in the future. There is also the option to use just a bit of QLC Flash with a lot of HDDs to get some performance improvement.
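Those figures are worth doing the arithmetic on. Taking the quoted 15x performance improvement at 3-4x the cost of an HDD installation (and the expected drop to 2-3x), the performance per dollar comes out well ahead of disk:

```python
# Rough performance-per-dollar check using the figures quoted above: roughly
# 15x the performance of an HDD installation at 3-4x the cost today, with the
# cost multiple expected to fall to 2-3x. Baseline HDD = 1.0 on both axes.
def perf_per_dollar(perf_multiple, cost_multiple):
    return perf_multiple / cost_multiple

for cost_multiple in (4, 3, 2):
    ratio = perf_per_dollar(15, cost_multiple)
    print(f"QLC at {cost_multiple}x the cost: {ratio:.1f}x the perf per dollar of HDD")
```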

The other use case discussed was the use of QLC in test and dev environments. Users are quite keen, obviously, on getting Flash in their environments at the price of HDDs. This isn’t yet a realistic goal, but it’s more achievable with QLC than it is with something like TLC.

 

QLC And The Future

We spoke briefly about more widespread adoption of QLC across the range of StorCentric storage products. Varanasi said the use “will eventually expand across the portfolio”, and they were looking at how it might be adopted with the larger E-Series models, as well as with the Assureon and Vexata range. They were treating Unity more cautiously, as the workloads traditionally hosted on that platform were a little more demanding.

 

Thoughts and Further Reading

The kind of workloads we’re throwing at what were once viewed as “cheap and deep” platforms is slowly changing. Where once it was perhaps acceptable to wait a few days for reporting runs to finish, there’s no room for that kind of performance gap now. So it makes sense that we look to Flash as a way of increasing the performance of the tools we’re using. The problem, however, is that when you work on data sets in the petabyte range, you need a lot of capacity to accommodate that. Flash is getting cheaper, but it’s still not there when compared to traditional spinning disks. QLC is a nice compromise between performance and capacity. There’s a definite performance boost to be had, and the increase in cost isn’t eye-watering.

I’m interested to see how this solution performs in the real world, and whether QLC has the expected durability to cope with the workloads that enterprises will throw at it. I’m also looking forward to seeing where else Nexsan decides to use QLC in its portfolio. There’s a good story here in terms of density, performance, and energy consumption – one that I’m sure other vendors will also be keen to leverage. For another take on this, check out Mellor’s article here.

Scale Computing Makes Big Announcement About Small HE150

Scale Computing recently announced the HE150 series of small edge servers. I had the chance to chat with Alan Conboy about the announcement, and thought I’d share some thoughts here.

 

Edge, But Smaller

I’ve written in the past about additions to the HC3 Edge Platform. But those things had a rack-mount form factor. The newly announced HE150 runs on Intel NUC devices. Wait, what? That’s right, hyper-converged infrastructure on really small PCs. But don’t you need a bunch of NICs to do HC3 properly? There’s no backplane switch requirement, as Scale uses software-defined networking to tunnel the backplane network across the NIC. The HC3 platform uses less than 1GB RAM per node, and each node has 2 cores. The storage sits on an NVMe drive and you can get hold of this stuff at a retail price of around US $5K for 3 nodes.

[image courtesy of Scale Computing]

Scale at Scale?

How do you deploy these kinds of things at scale then? Conboy tells me there’s full Ansible integration, RESTful API deployment capabilities, and they come equipped with Intel AMT. In short, these things can turn up at the remote site, be plugged in, and be ready to go.

Where would you?

The HE150 solution is 100% specific to multi-site edge implementations. It’s not trying to go after workloads that would normally be serviced by the HE500 or HE1000. Where it can work, though, is with:

  • Oil and Gas exploration – with one in each ship (they need 4-5 VMs to handle sensor data to make command decisions)
  • Grocery and retail chains
  • Manufacturing platforms
  • Telcos – pole-side boxes

In short, think of environments that require some amount of compute and don’t have IT people to support it.

 

Thoughts

I’ve been a fan of what Scale Computing has been doing with HCI for some time now. Scale’s take on making things simple across the enterprise has been refreshing. While this solution might surprise some folks, it strikes me that there’s an appetite for this kind of thing in the marketplace. The edge is often a place where less is more, and there’s often not a lot of resources available to do basic stuff, like deploy a traditional, rackmounted compute environment. But a small, 3-node HCI cluster that can be stacked away in a stationery cupboard? That might just work. Particularly if you only need a few virtual machines to meet those compute requirements. As Conboy pointed out to me, Scale isn’t looking to use this as a replacement for the higher-performing options it has available. Rather, this solution is perfect for highly distributed retail environments where they need to do one or two things and it would be useful if they didn’t do those things in a data centre located hundreds of kilometres away.

If you’re not that excited about Intel NUCs though, you might be happy to hear that solutions from Lenovo will be forthcoming shortly.

The edge presents a number of challenges to enterprises, in terms of both its definition and how to deal with it effectively. Ultimately, the success of solutions like this will hinge on ease of use, reliability, and whether it really is fit for purpose. The good folks at Scale don’t like to go off half-cocked, so you can be sure some thought went into this product – it’s not just a science project. I’m keen to see what the uptake is like, because I think this kind of solution has a place in the market. The HE150 is available for purchase from Scale Computing now. It’s also worth checking out the Scale Computing presentations at Tech Field Day 20.