Random Short Take #70

Welcome to Random Short Take #70. Let’s get random.

Intel Optane And The DAOS Storage Engine

Disclaimer: I recently attended Storage Field Day 20.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Intel recently presented at Storage Field Day 20. You can see videos of the presentation here, and download my rough notes from here.

 

Intel Optane Persistent Memory

If you’re a diskslinger, you’ve very likely heard of Intel Optane. You may have even heard of Intel Optane Persistent Memory. It’s a little different to Optane SSD, and Intel describes it as “memory technology that delivers a unique combination of affordable large capacity and support for data persistence”. It looks a lot like DRAM, but the capacity is greater, and there’s data persistence across power losses. This all sounds pretty cool, but isn’t it just another form factor for fast storage? Sort of, but the application of the engineering behind the product is where I think it starts to get really interesting.
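If you're curious what "memory-like but persistent" means in practice, here's a minimal conceptual sketch in Python. It uses an ordinary memory-mapped file as a stand-in; the path is hypothetical, and real App Direct deployments expose PMem through a DAX-mounted filesystem and typically use PMDK (libpmem) rather than msync-style flushes, so treat this purely as an illustration of the programming model rather than how you'd actually code against Optane PMem.

```python
import mmap
import os

PATH = "/mnt/pmem0/counter.bin"   # hypothetical DAX-mounted PMem namespace
SIZE = 4096

# Create or open the backing file (on a real PMem deployment this would live
# on a DAX-mounted filesystem backed by an App Direct namespace).
fd = os.open(PATH, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, SIZE)

# Map it into the address space: from here on it's loads and stores,
# not read()/write() system calls.
buf = mmap.mmap(fd, SIZE)

# Byte-addressable update, just like DRAM.
count = int.from_bytes(buf[0:8], "little") + 1
buf[0:8] = count.to_bytes(8, "little")

# Flush so the update survives a power loss. PMDK's pmem_persist() does the
# equivalent with CPU cache-line flushes instead of an msync() call.
buf.flush()
buf.close()
os.close(fd)
print(f"persisted counter value: {count}")
```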

 

Enter DAOS

Distributed Asynchronous Object Storage (DAOS) is described by Intel as “an open source software-defined scale-out object store that provides high bandwidth, low latency, and high I/O operations per second (IOPS) storage containers to HPC applications”. It’s ostensibly a software stack built from the ground up to take advantage of the crazy speeds you can achieve with Optane, and at scale. There’s a handy overview of the architecture available on Intel’s website. Traditional object (and other) storage systems haven’t really been built to take advantage of Optane in quite the same way DAOS has.

[image courtesy of Intel]

There are some cool features built into DAOS, including:

  • Ultra-fine grained, low-latency, and true zero-copy I/O
  • Advanced data placement to account for fault domains
  • Software-managed redundancy supporting both replication and erasure code with online rebuild
  • End-to-end (E2E) data integrity
  • Scalable distributed transactions with guaranteed data consistency and automated recovery
  • Dataset snapshot capability
  • Security framework to manage access control to storage pools
  • Software-defined storage management to provision, configure, modify, and monitor storage pools

Exciting? Sure is. There’s also integration with Lustre. The best thing about this is that you can grab it from GitHub under the Apache 2.0 license.

 

Thoughts And Further Reading

Object storage is in its relative infancy when compared to some of the storage architectures out there. It was designed to be highly scalable and generally does a good job of cheap and deep storage at “web scale”. It’s my opinion that object storage becomes even more interesting as a storage solution when you put a whole bunch of really fast storage media behind it. I’ve seen some media companies do this with great success, and there are a few of the bigger vendors out there starting to push the All-Flash object story. Even then, though, many of the more popular object storage systems aren’t necessarily optimised for products like Intel Optane PMEM. This is what makes DAOS so interesting – the ability for the storage to fundamentally do what it needs to do at massive scale, and have it go as fast as the media will let it go. You don’t need to worry as much about the storage architecture being optimised for the storage it will sit on, because the folks developing it have access to the team that developed the hardware.

The other thing I really like about this project is that it’s open source. This tells me that Intel are focused both on Optane being successful and on the industry making the most of the hardware it’s putting out there. It’s a smart move – come up with some super fast media, and then give the market as much help as possible to squeeze the most out of it.

You can grab the admin guide from here, and check out the roadmap here. Intel has plans to release a new version every 6 months, and I’m really looking forward to seeing this thing gain traction. For another perspective on DAOS and Intel Optane, check out David Chapa’s article here.

 

 

Pure Storage Announces Second Generation FlashArray//C with QLC

Disclaimer: I recently attended Storage Field Day 20.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Pure Storage recently announced its second generation FlashArray//C – an all-QLC offering that delivers scads of capacity in a dense form factor. Pure Storage presented on this topic at Storage Field Day 20. You can see videos of the presentation here, and download my rough notes from here.

 

It’s A Box!

FlashArray//C burst onto the scene last year as an all-flash, capacity-optimised storage option for customers looking for storage that didn’t need to go quite as fast as the FlashArray//X, but that wasn’t built on spinning disk. Available capacities range from 1.3PB to 5.2PB (effective).

[image courtesy of Pure Storage]

There are a number of models available, with a variety of capacities and densities.

  • //C60-366: Up to 1.3PB effective capacity**; 366TB raw capacity**; 3U; 1000–1240 watts (nominal–peak); 97.7 lbs (44.3 kg) fully loaded; 5.12” x 18.94” x 29.72” chassis
  • //C60-494: Up to 1.9PB effective capacity**; 494TB raw capacity**; 3U; 1000–1240 watts (nominal–peak); 97.7 lbs (44.3 kg) fully loaded; 5.12” x 18.94” x 29.72” chassis
  • //C60-840: Up to 3.2PB effective capacity**; 840TB raw capacity**; 6U; 1480–1760 watts (nominal–peak); 177.0 lbs (80.3 kg) fully loaded; 10.2” x 18.94” x 29.72” chassis
  • //C60-1186: Up to 4.6PB effective capacity**; 1.2PB raw capacity**; 6U; 1480–1760 watts (nominal–peak); 185.4 lbs (84.1 kg) fully loaded; 15.35” x 18.94” x 29.72” chassis
  • //C60-1390: Up to 5.2PB effective capacity**; 1.4PB raw capacity**; 9U; 1960–2280 watts (nominal–peak); 273.2 lbs (123.9 kg) fully loaded; 15.35” x 18.94” x 29.72” chassis
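As a quick sanity check on those effective figures (my arithmetic from the numbers above, not a Pure Storage claim), the quoted capacities imply a data reduction assumption of roughly 3.6–3.9:1 across the range; raw capacities for the two largest models are assumed from the model numbers (1,186TB and 1,390TB):

```python
# Implied effective:raw ratios from the capacities quoted above (values in TB).
# Raw figures for //C60-1186 and //C60-1390 are assumed from the model numbers.
models = {
    "//C60-366": (366, 1300),
    "//C60-494": (494, 1900),
    "//C60-840": (840, 3200),
    "//C60-1186": (1186, 4600),
    "//C60-1390": (1390, 5200),
}
for name, (raw_tb, effective_tb) in models.items():
    print(f"{name}: {effective_tb / raw_tb:.1f}:1 implied data reduction")
```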

Workloads

There are reasons why the FlashArray//C could be a really compelling option for workload consolidation. More and more workloads are “business critical” in terms of both performance and availability. There’s a requirement to do more with less, while battling complexity, and a strong desire to manage everything via a single pane of glass.

There are some other cool things you could use the //C for as well, including:

  • Automated policy-based VM tiering between //X and //C arrays;
  • DR using the //X at production and //C at your secondary site;
  • Consolidating multiple //X array workloads on a single //C array for test and dev; and
  • Consolidating multiple //X array snapshots to a single //C array for long-term retention.

 

It’s a QLC World, Sort Of

The second generation FlashArray//C means you can potentially now have flash all through the data centre.

  • Apps and VMs – provision your high performance workloads to //X, lower performance / high capacity workloads to //C
  • Modern Data Protection & Disaster Recovery – on-premises production applications on //X efficiently replicated or backed up to //C at DR site
  • User File Shares – User file access with Purity 6.0 via SMB, NFS

QLC nonetheless presents significant engineering challenges, given its traditionally high write latency and low endurance (when compared to SLC, MLC, and TLC). Pure Storage’s answer to that problem has been to engineer the crap out of DirectFlash to get the required results. I’d do a bad job of explaining it, so instead I recommend you check out Pete Kirkpatrick’s explanation.

 

Thoughts And Further Reading

I covered the initial FlashArray//C announcement here and many of the reasons why this type of offering is appealing remain the same. The knock on Pure Storage in the last few years has been that, while FlashArray//X is nice and fast and a snap to use, it couldn’t provide the right kind of capacity (i.e. cheap and deep) that a number of price-sensitive punters wanted.  Sure, they could go and buy the FlashArray//X and then look to another vendor for a dense storage option, but the motivation to run with a number of storage vendors in smaller enterprise shops is normally fairly low. The folks in charge of technology in these environments are invariably stretched in terms of bodies on the floor to run the environments, and cash in the bank to procure those solutions. A single vendor solution normally makes sense for them (as opposed to some of the larger shops, or specialist organisations that really have very specific requirements that can only be serviced by particular solutions).

So now Pure Storage has the FlashArray//C, and you can get it with some decent density, some useful features (thanks in part to some new features in Purity 6), and integration with the things you know and like about Pure Storage, such as Pure1 and Evergreen storage. It seems like Pure Storage has done an awful lot of work to squeeze performance out of QLC whilst ensuring that the modules don’t need replacing every other week. There’s a lot to like about the evolving Pure Storage story, and I’m interested to see how they tie it all together as the portfolio continues to expand. You can read the press release here, access the data sheet here, and read Mellor’s take on the news here.

StorONE Announces AFA.next

StorONE recently announced the All-Flash Array.next (AFAn). I had the opportunity to speak to George Crump (StorONE Chief Marketing Officer) about the news, and thought I’d share some brief thoughts here.

 

What Is It? 

It’s a box! (Sorry, I’ve been re-watching Silicon Valley with my daughter recently.)

[image courtesy of StorONE]

More accurately, it’s an Intel Server with Intel Optane and Intel QLC storage, powered by StorONE’s software.

S1:Tier

S1:Tier is StorONE’s tiering solution. It operates within the parameters of a high and low watermark. Once the Optane tier fills up, the data is written out, sequentially, to QLC. The neat thing is that when you need to recall the data on QLC, you don’t necessarily need to move it all back to the Optane tier. Rather, read requests can be served directly from QLC. StorONE call this a multi-tier capability, because you can then move data to cloud storage for long-term retention if required.
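To make the watermark mechanics a little more concrete, here's a minimal sketch in Python. The class, thresholds, and eviction policy are all my own assumptions for illustration, not S1:Tier internals; the point is simply that new writes land on the fast tier, hitting a high watermark triggers sequential destaging to QLC until a low watermark is reached, and reads can be served straight from QLC without promoting data back to Optane.

```python
# Toy watermark-based tiering model (illustrative only; names, thresholds,
# and the destaging policy are assumptions, not StorONE's implementation).

HIGH_WATERMARK = 0.80   # start destaging when the fast tier is this full
LOW_WATERMARK = 0.50    # stop destaging once utilisation drops back to this

class TieredStore:
    def __init__(self, fast_capacity):
        self.fast_capacity = fast_capacity
        self.fast = {}   # Optane tier: block_id -> data
        self.slow = {}   # QLC tier: block_id -> data

    def write(self, block_id, data):
        self.fast[block_id] = data   # new writes always land on the fast tier
        if len(self.fast) / self.fast_capacity >= HIGH_WATERMARK:
            self._destage()

    def _destage(self):
        # Drain oldest-written blocks (a stand-in policy) to QLC until the
        # fast tier is back under the low watermark.
        target = int(self.fast_capacity * LOW_WATERMARK)
        while len(self.fast) > target:
            oldest = next(iter(self.fast))   # dicts preserve insertion order
            self.slow[oldest] = self.fast.pop(oldest)

    def read(self, block_id):
        if block_id in self.fast:
            return self.fast[block_id]
        return self.slow.get(block_id)   # served straight from QLC, no promotion

store = TieredStore(fast_capacity=10)
for i in range(12):
    store.write(i, f"block-{i}".encode())
print(store.read(0))   # destaged to the QLC tier, still read directly from there
```

The real product obviously tracks data at a very different granularity and with a smarter placement policy; the sketch just shows where the watermarks sit in the write path and why reads don't have to trigger a recall.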

[image courtesy of StorONE]

S1:HA

Crump noted that the Optane drives are single ported, leading some customers to look for highly available configurations. These are catered for with a variation of S1:HA, where the HA solution is now a synchronous mirror between 2 stacks.

 

Thoughts and Further Reading

I’m not just a fan of StorONE because the company occasionally throws me a few dollarydoos to keep the site running. I’m a fan because the folks over there do an awful lot of storage type stuff on what is essentially commodity hardware, and they’re getting results that are worth writing home about, with a minimum of fuss. The AFAn uses Optane as a storage tier, not just read cache, so you get all of the benefit of Optane write performance (many, many IOPS). It has the resilience and data protection features you see in many midrange and enterprise arrays today (namely vRAID, replication, and snapshots). Finally, it has varying support for all three use cases (block, file, and object), so there’s a good chance your workload will fit on the box.

More and more vendors are coming to market with Optane-based storage solutions. It still seems that only a small number of them are taking full advantage of Optane as a write medium, instead focusing on its benefit as a read tier. As I mentioned before, Crump and the team at StorONE are seeing some pretty decent numbers come out of the AFAn. I think the best thing is that it’s now available as a configuration item on the StorONE TRUprice site as well, so you can see for yourself how much the solution costs. If you’re after a whole lot of performance in a small box, this might be just the thing. You can read more about the solution and check out the lab report here. My friend Max wrote a great article on the solution that you can read here.

Pure Storage Expands Portfolio, Adds Capacity And Performance

Disclaimer: I recently attended Pure//Accelerate 2019.  My flights, accommodation, and conference pass were paid for by Pure Storage. There is no requirement for me to blog about any of the content presented and I am not compensated by Pure Storage for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Pure Storage announced two additions to its portfolio of products today: FlashArray//C and DirectMemory Cache. I had the opportunity to hear about these two products at the Storage Field Day Exclusive event at Pure//Accelerate 2019 and thought I’d share some thoughts here.

 

DirectMemory Cache

DirectMemory Cache is a high-speed caching system that reduces read latency for high-locality, performance-critical applications.

  • High speed: based on Intel Optane SCM drives
  • Caching system: repeated accesses to “hot data” are sped up automatically – no tiering = no configuration
  • Read latency: only reads are accelerated – write latency is unchanged
  • High-locality: only workloads that frequently re-read a dataset that fits in the cache will benefit
  • Performance-Critical: high-throughput, latency-sensitive workloads

According to Pure, “DirectMemory Cache is the functionality within Purity that provides direct access to data and accelerates performance critical applications”. Note that this is only for read data; write caching is still done via DRAM.
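As a mental model only (my own sketch, not Pure's implementation), DirectMemory Cache behaves like a read-only LRU cache sitting in front of the flash: repeat reads of hot data come back from the Optane layer, while writes go straight through to the backing store (write caching stays in DRAM, per the note above) and simply invalidate any cached copy.

```python
from collections import OrderedDict

class ReadOnlyCache:
    """Toy LRU read cache: accelerates repeat reads, never absorbs writes."""

    def __init__(self, backend, capacity):
        self.backend = backend            # dict standing in for the flash layer
        self.capacity = capacity
        self.cache = OrderedDict()

    def read(self, key):
        if key in self.cache:             # hot data: served from the cache
            self.cache.move_to_end(key)
            return self.cache[key]
        value = self.backend[key]         # cold data: fetched from flash...
        self.cache[key] = value           # ...and cached for the next read
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)
        return value

    def write(self, key, value):
        self.backend[key] = value         # writes bypass the cache entirely
        self.cache.pop(key, None)         # drop any stale cached copy
```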

How Can This Help?

Pure has used Pure1 Meta analysis to arrive at the following figures:

  • 80% of arrays can achieve 20% lower latency
  • 40% of arrays can achieve 30-50% lower latency (up to 2x boost)

So there’s some real potential to improve existing workloads via the use of this read cache.

DirectMemory Configurations

Pure Storage DirectMemory Modules plug directly into the FlashArray//X70 and //X90 chassis and are available in the following configurations:

  • 3TB (4x750GB) DirectMemory Modules
  • 6TB (8x750GB) DirectMemory Modules

Top of Rack Architecture

Pure are positioning the “top of rack” architecture as a way to compete with some of the architectures that have jammed a bunch of flash into DAS or into compute to gain increased performance. The idea is that you can:

  • Eliminate data locality;
  • Bring storage and compute closer;
  • Provide storage services that are not possible with DAS;
  • Bring the efficiency of FlashArray to traditional DAS applications; and
  • Offload storage and networking load from application CPUs.

 

FlashArray//C

Typical challenges in Tier 2

Things can be tough in the tier 2 storage world. Pure outlined some of the challenges they were seeking to address by delivering a capacity optimised product.

Management complexity

  • Complexity / management
  • Different platforms and APIs
  • Interoperability challenges

Inconsistent Performance

  • Variable app performance
  • Anchored by legacy disk
  • Undersized / underperforming

Not enterprise class

  • <99.9999% resiliency
  • Disruptive upgrades
  • Not evergreen

The C Stands For Capacity Optimised All-Flash Array

Flash performance at disk economics

  • QLC architecture enables tier 2 applications to benefit from the performance of all-flash – predictable 2-4ms latency, 5.2PB (effective) in 9U delivers 10x consolidation for racks and racks of disk.

Optimised end-to-end for QLC Flash

  • Deep integration from software to QLC NAND solves QLC wear concerns and delivers market-leading economics. Includes the same evergreen maintenance and wear replacement as every FlashArray

“No Compromise” enterprise experience

  • Built for the same 99.9999%+ availability, Pure1 cloud management, API automation, and AI-driven predictive support of every FlashArray

Flash for every data workflow

  • Policy driven replication, snapshots, and migration between arrays and clouds – now use Flash for application tiering, DR, Test / Dev, Backup, and retention

Configuration Details

Configuration options include:

  • 366TB RAW – 1.3PB effective
  • 878TB RAW – 3.2PB effective
  • 1.39PB RAW – 5.2PB effective

Use Cases

  • Policy based VM tiering between //X and //C
  • Multi-cloud data protection and DR – on-premises and multi-site
  • Multi-cloud test / dev – workload consolidation

*File support (NFS / SMB) coming in 2020 (across the entire FlashArray family, not just //C)

 

Thoughts

I’m a fan of companies that expand their portfolio based on customer requests. It’s a good way to make more money, and sometimes it’s simplest to give the people what they want. The market has been in Pure’s ear for some time about delivering some kind of capacity storage solution. I think it was simply a matter of time before the economics and the technology intersected at a point where it made sense for it to happen. If you’re an existing Pure customer, this is a good opportunity to deploy Pure across all of your tiers of storage, and you get the benefit of Pure1 keeping an eye on everything, and your “slow” arrays will still be relatively performance-focused thanks to NVMe throughout the box. IT isn’t just about speeds and feeds though, so I think this announcement is more important in terms of simplifying the story for existing Pure customers that may be using other vendors to deliver Tier 2 capabilities.

I’m also pretty excited about DirectMemory Cache, if only because it’s clear that Pure has done its homework (i.e. they’ve run the numbers on Pure1 Meta) and realised that they could improve the performance of existing arrays via a reasonably elegant solution. A lot of the cool kids do DAS, because that’s what they’ve been told will yield great performance. And that’s mostly true, but DAS can be a real pain in the rear when you want to move workloads around, or consolidate performance, or do useful things like data services (e.g. replication). Centralised storage arrays have been doing this stuff for years, and it’s about time they were also able to deliver the performance required in order for those companies not to have to compromise.

You can read the press release here, and the Tech Field Day videos can be viewed here.

Burlywood Tech Announces TrueFlash

Burlywood Tech came out of stealth late last year and recently announced their TrueFlash product. I had the opportunity to speak with Mike Tomky about what they’ve been up to since emerging from stealth and thought I’d cover the announcement here.

 

Burlywood TrueFlash

So what is TrueFlash? It’s a “modular controller architecture that accelerates time-to-market of new flash adoption”. The idea is that Burlywood can deliver a software-defined solution that will sit on top of commodity Flash. They say that one size doesn’t fit all, particularly with Flash, and this solution gives customers the opportunity to tailor the hardware to better meet their requirements.

It offers the following features:

  • Multiple interfaces (SATA, SAS, NVMe)
  • FTL Translation (Full SSD to None)
  • Capacity up to 100TB+
  • Traffic optimisation
  • Multiple Protocols (Block (NVMe, NVMe/F), File, Object, Direct Memory)

[image courtesy of Burlywood Tech]

 

Who’s Buying?

This isn’t really an enterprise play – enterprises aren’t the types of companies that would buy Flash at the scale where this would make sense. This is really aimed at the hyperscalers, cloud providers, and AFA / HCI vendors. They sell the software, controller and SSD Reference Design to the hyperscalers, but treat the cloud providers and AFA vendors a little differently, generally delivering a completed SSD for them. All of their customers benefit from:

  • A dedicated support team (in-house drive team);
  • Manufacturing assembly & test;
  • Technical & strategic support in all phases; and
  • Collaborative roadmap planning.

The key selling point for Burlywood is that they claim to be able to reduce costs by 10 – 20% through better capacity utilisation, improved supply chain and faster product qualification times.

 

Thoughts

You know you’re doing things at a pretty big scale if you’re thinking it’s a good idea to be building your own SSDs to match particular workloads in your environment. But there are reasons to do this, and from what I can see, it makes sense for a few companies. It’s obviously not for everyone, and I don’t think you’ll be seeing this in the enterprise anytime soon. Which is the funny thing, when you think about it. I remember when Google first started becoming a serious search engine and they talked about some of their earliest efforts with DIY servers and battles with doing things at the scale they needed. Everyone else was talking about using appliances or pre-built solutions “optimised” by the vendors to provide the best value for money or best performance or whatever. As the likes of Dropbox, Facebook and LinkedIn have shown, there is value in going the DIY route, assuming the right amount of scale is there.

I’ve said it before, very few companies really qualify for the “hyper” in hyperscalers. So a company like Burlywood Tech isn’t necessarily going to benefit them directly. That said, these kinds of companies, if they’re successful in helping the hyperscalers drive the cost of Flash in a downwards direction, will indirectly help enterprises by forcing the major Flash vendors to look at how they can do things more economically. And sometimes it’s just nice to peek behind the curtain to see how this stuff comes about. I’m oftentimes more interested in how networks put together their streaming media services than a lot of the content they actually deliver on those platforms. I think Burlywood Tech falls in that category as well. I don’t care for some of the services that the hyperscalers deliver, but I’m interested in how they do it nonetheless.

Storbyte Come Out Of Stealth Swinging

I had the opportunity to speak to Storbyte‘s Chief Evangelist and Design Architect Diamond Lauffin recently and thought I’d share some information on their recent announcement.

 

Architecture

ECO-FLASH

Storbyte have announced ECO-FLASH, positioning it as “a new architecture and flash management system for non-volatile memory”. Its ASIC-based architecture abstracts independent SSD memory modules within the flash drive and presents them as a single flash storage device.

 

Hydra

Each ECO-FLASH module comprises 16 mSATA modules running in RAID 0. Each Hydra manages 4 modules, with 4 “sub-master” Hydras being managed by a master Hydra. This makes up one drive that supports RAID 0, 5, 6 and N, so if you were only running a single-drive solution (think out at the edge), you can configure the modules to run in RAID 5 or 6.
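To picture the hierarchy, here's a small illustrative model in Python (my own sketch of the numbers above, not Storbyte's firmware): one master Hydra fronts four sub-master Hydras, each sub-master manages four mSATA modules, and together they make up the sixteen modules in a single ECO-FLASH drive.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MSataModule:
    module_id: int

@dataclass
class SubMasterHydra:
    modules: List[MSataModule]            # 4 mSATA modules per sub-master

@dataclass
class MasterHydra:
    sub_masters: List[SubMasterHydra]     # 4 sub-masters per master

def build_eco_flash_drive() -> MasterHydra:
    # 4 sub-masters x 4 modules = 16 mSATA modules behind one master Hydra.
    return MasterHydra(sub_masters=[
        SubMasterHydra(modules=[MSataModule(module_id=s * 4 + m) for m in range(4)])
        for s in range(4)
    ])

drive = build_eco_flash_drive()
total = sum(len(sm.modules) for sm in drive.sub_masters)
print(f"mSATA modules in one ECO-FLASH drive: {total}")   # 16
```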

 

[image courtesy of Storbyte]

 

Show Me The Product

[image courtesy of Storbyte]

 

The ECO-FLASH drives come in 4, 8, 16 and 32TB configurations, and these fit into a variety of arrays. Storbyte is offering three ECO-FLASH array models:

  • 131TB raw capacity in 1U (using 4 drives);
  • 262TB raw capacity in 2U (using 16 drives); and
  • 786TB raw capacity in 4U (using 48 drives).

Storbyte’s ECO-FLASH supports a blend of Ethernet, iSCSI, NAS and InfiniBand primary connectivity simultaneously. You can also add Storbyte’s 4U 1.18PB spinning disk JBOD expansion units to deliver a hybrid solution.

 

Thoughts

The idea behind Storbyte came about because some people were working in forensic security environments that had a very heavy write workload, and they needed to find a better way to add resilience to the high performance storage solutions they were using. Storbyte are offering a 10 year warranty on their product, so they’re clearly convinced that they’ve worked through a lot of the problems previously associated with the SSD Write Cliff (read more about that here, here, and here). They tell me that Hydra is the primary reason that they’re able to mitigate a number of the effects of the write cliff and can provide performance for a longer period of time.

Storbyte’s is not a standard approach by any stretch. They’re talking some big numbers out of the gate and have a pretty reasonable story to tell around capacity, performance, and resilience as well. I’ve scheduled another session with Storbyte to talk some more about how it all works and I’ll be watching these folks with some interest as they enter the market and start to get some units running workload on the floor. There’s certainly interesting heritage there, and the write cliff has been an annoying problem to solve. Couple that with some aggressive economics and support for a number of connectivity options and I can see this solution going into a lot of DCs and being used for some cool stuff. If you’d like to read another perspective, check out what Rich over at Gestalt IT wrote about them and you can read the full press release here.

X-IO Announces ISE 900 Series G4

X-IO Technologies recently announced the ISE 900 Series G4. I had the chance to speak to Bill Miller about it and thought I’d provide some coverage of the announcement here. If you’re unfamiliar with X-IO, ISE stands for Intelligent Storage Elements. This is X-IO Technologies’ “next-generation ISE”, and X-IO will also be continuing to support their disk-based and hybrid arrays. They will, however, be discontinuing the 800 series AFAs.

 

What’s In The Box?

There are two boxes – the ISE 920 and ISE 960. You get all of the features of ISE hardware and software, such as:

  • High Availability
  • QoS
  • Encryption (at rest)
  • Management REST API
  • Simple Web-based Management
  • Monitored Telemetry
  • Predictive Analytics

They used to use sealed “DataPacs” in the disk drive days, but these aren’t needed in the all-flash world. ISE still manages SSDs in groups of 10 and still overprovisions capacity up to a point. The individual drives are now hot-swappable though.

You also get features such as “Performance-Optimized Deduplication”, and deduplication can be disabled by volume.

The ISE also uses Enhanced Matrixed RAID Data Allocation, where you get:

  • Up to 60 individually hot-swappable SSDs (for the 960, 20 for the 920)
  • Writes to SSDs balanced across drives for better wear and performance

ISE Software for “resilient in-place media loss”, meaning:

  • Less frequent drive replacement
  • Global parity and spare allocation
  • Failed drives do not have the same urgency for replacement as traditional arrays

Web-based Management Interface

  • Simplified management with X-IO’s OptimISE
  • Support for multi-system management through a single session
  • At-a-glance and in-depth performance metrics
  • Customizable widget based layout

As with most modern storage arrays, the user interface is clean and simple to navigate. OptimISE replaces ISE Manager, although you’ll still need ISE Manager to manage your Gen1 – Gen3 arrays. X-IO are considering adding support for Gen3 arrays to OptimISE, but they’re waiting to see whether there’s customer demand.

[image courtesy of X-IO Technologies]

 

X-IO tell me that snapshots and replication are on the roadmap and will be added in the future, with X-IO aiming to have these features available in H1 next year (but don’t hold them to that). They’ll also be aiming to add support for iglu systems.

 

Show Me Your Specs

It wouldn’t be a product announcement without a box shot.

 

[image courtesy of X-IO Technologies]

 

2U Dual-Controller Active/Active

  • 8Gbps FC (16Gbps field upgradeable in the future)
  • 4 ports per controller (8 ports will be field upgradeable in the future)

Hot-Swappable FRUs

  • Controller
  • Power Supplies
  • Fans
  • Regulators
  • SSDs min – max
    • ISE 920: 10 – 20
    • ISE 960: 10 – 60
  • Two hot-swappable 1600 Watt PSUs

Capacity (*Effective capacity assumes 5:1 deduplication ratio)

  • ISE 920: 9.6TB – 242TB
  • ISE 960: 9.6TB – 725TB

Capacity expansion (up to 60 drives) is done in 10 drive increments.
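Working backwards from the stated 5:1 assumption (my arithmetic, not X-IO's figures), the maximum effective capacities imply around 48TB of usable flash in a full ISE 920 and around 145TB in a full ISE 960, or roughly 2.4TB usable per drive in both cases:

```python
# Back-of-the-envelope check on the quoted maximum effective capacities,
# using the stated 5:1 deduplication assumption (my arithmetic, not X-IO's).
DEDUPE_RATIO = 5
configs = {
    "ISE 920": (242, 20),   # max effective TB, max drive count
    "ISE 960": (725, 60),
}
for model, (effective_tb, drives) in configs.items():
    usable_tb = effective_tb / DEDUPE_RATIO
    print(f"{model}: ~{usable_tb:.0f}TB usable, ~{usable_tb / drives:.1f}TB per drive")
```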

Performance

X-IO tell me they can get performance along the lines of:

  • Up to 400,000 IOPS; and
  • Access Time <1ms.

 

Conclusion and Further Reading

X-IO released a really good overview of the Intelligent Storage Element (ISE) platform a while ago that I think is worth checking out. X-IO’s deduplication solution promises to deliver some pretty decent results at a highly efficient clip. If you want some insight into how they go about doing it, check out Richard Lary’s presentation from Storage Field Day 13. This is their first array with deduplication built in, and I’m interested to see how it performs in the field. The goal is to deliver the same results as their competitors, but with improved efficiency. This seems to be the goal behind much of the hardware design, with X-IO telling me that they come in around 60 cents (US) per effective GB of capacity. That seems mighty efficient.

X-IO have been around for a while, and I’ve found their Axellio Edge product to be fascinating. The AFA market is crowded with vendors saying that they do all things for all people. It’s nice to see that X-IO aren’t promising the world to customers, but they are offering some decent features at a compelling price.