Random Short Take #36

Welcome to Random Short Take #36. Not a huge number of players have worn 36 in the NBA, but Shaq did (at the end of his career), and Marcus Smart does. This one, though, goes out to one of my favourite players from the modern era, Rasheed Wallace. It seems like Boston is the common thread here. Might have something to do with those Hall of Fame players wearing numbers in the low 30s. Or it might be entirely unrelated.

  • Scale Computing recently announced its all-NVMe HC3250DF as a new appliance targeting core data centre and edge computing use cases. It offers higher performance storage, networking and processing. You can read the press release here.
  • Dell EMC PowerStore has been announced. Chris Mellor covered the announcement here. I haven’t had time to dig into this yet, but I’m keen to learn more. Chris Evans also wrote about it here.
  • Rubrik Andes 5.2 was recently announced. You can read a wrap-up from Mellor here.
  • StorCentric’s Nexsan recently announced the E-Series 32F Storage Platform. You can read the press release here.
  • In what can only be considered excellent news, Preston de Guise has announced the availability of the second edition of his book, “Data Protection: Ensuring Data Availability”. It will be available in a variety of formats, with the ebook already out. I bought the first edition a few times to give as a gift, and I’m looking forward to giving away a few copies of this one too.
  • Backblaze B2 has been huge for the company, and Backblaze B2 with S3-compatible API access is even huger. Read more about that here. Speaking of Backblaze, it just released its hard drive stats for Q1, 2020. You can read more on that here.
  • Hal recently upgraded his NUC-based home lab to vSphere 7. You can read more about the process here.
  • Jon recently posted an article on a new upgrade command available in OneFS. If you’re into Isilon, you might just be into this.

Dell EMC Isilon – Cloudy With A Chance Of Scale Out

Disclaimer: I recently attended Storage Field Day 19.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Dell EMC recently presented at Storage Field Day 19. You can see videos of the presentation here, and download my rough notes from here.


It’s A Scaling Thing

Unbounded Scaling

One of the key features of the Isilon platform has been its scalability. OneFS automatically expands the filesystem across additional nodes, and it scales both capacity and performance linearly – up to 252 nodes, petabytes of capacity and millions of file operations. My favourite thing about the scalability story, though, is that it’s non-disruptive. Dell EMC says it takes less than 60 seconds to add a node. That assumes you’ve done a bit of pre-work, but it’s a good story to tell. Even better, Isilon supports automated workload rebalancing – so your data is automatically redistributed to take advantage of new nodes when they’re added. A rough sketch of what that looks like from the CLI is below.
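A minimal sketch of the node-add story from the CLI. I’m recalling the OneFS 8.x command names from memory, so treat the exact syntax as an assumption and check isi --help on your own cluster:

    # Check cluster health and node count before and after the join
    isi status

    # Once the new node has joined (typically via the configuration
    # wizard on the node itself), the job engine rebalances existing
    # data across the cluster. You can watch the jobs run with:
    isi job jobs list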

One Filesystem

They call it OneFS for a reason. Clients can read and write from any Isilon node, and client connections are distributed across the cluster. Each file is automatically distributed across the cluster, which means the larger the cluster, the better the efficiency and performance. OneFS is also natively multi-protocol – clients can read and write the same data over multiple protocols, as the sketch below illustrates.
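Here’s a rough illustration of that multi-protocol story from the CLI – the same directory presented over SMB and NFS at once. Flag and argument spellings are from my memory of the 8.x CLI, so treat them as assumptions and check the reference for your release:

    # Share a OneFS directory to Windows clients over SMB ...
    isi smb shares create data --path=/ifs/data

    # ... and export the same directory to *nix clients over NFS.
    # Both sets of clients read and write the same files, with
    # OneFS mediating permissions across protocols.
    isi nfs exports create /ifs/data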

Always-on

There are some neat features in terms of resiliency too.

  • The cluster can sustain multiple failures with no impact – up to 4 nodes, or 4 drives in each pool
  • Non-disruptive tech refresh – non-disruptively add, remove or replace nodes in the cluster
  • No dedicated spare nodes or drives – better efficiency as no node or drive is unused

There is support for an ultra-dense configuration: 4 nodes in 4U, offering up to 240TB raw per RU.
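On the protection point, OneFS expresses resiliency as requested protection levels rather than traditional RAID – +2d:1n, for example, tolerates two simultaneous drive failures or one node failure. If you’re curious about what protection a given file actually landed with, the isi get command will show you. A quick sketch, with a hypothetical path:

    # Show the requested protection and layout details for a file
    isi get /ifs/data/somefile.dat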


Comprehensive Enterprise Software

  • SmartDedupe and Compression – storage efficiency
  • SmartPools – Automated Tiering
  • CloudPools – Cloud tiering
  • SmartQuotas – Thin provisioning (see the CLI sketch after this list)
  • SmartConnect – Connection rebalancing
  • SmartLock – Data integrity
  • SnapshotIQ – Rapid Restore
  • SyncIQ – Disaster Recovery
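To give a flavour of how a couple of these are driven from the CLI, here’s a minimal sketch of a SmartQuotas directory quota (the thin provisioning piece) and a look at SnapshotIQ. Flag names are from my memory of the 8.x CLI and should be treated as assumptions:

    # Hard quota on a directory - clients see the quota as the size
    # of the filesystem, which is the thin provisioning trick
    isi quota quotas create /ifs/home/dan directory --hard-threshold=1T

    # List the snapshots SnapshotIQ has taken
    isi snapshot snapshots list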

Three Approaches to Data Reduction

  1. Inline compression and deduplication
  2. Post-process deduplication (see the sketch after this list)
  3. Small file packing
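The post-process flavour runs as a job under the OneFS job engine. A minimal sketch, assuming the job names haven’t changed since the 8.x releases I’ve used:

    # Estimate potential savings first, without touching any data
    isi job jobs start DedupeAssessment

    # Then run the actual post-process deduplication job
    isi job jobs start Dedupe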

Configurable tiering based on time

  • Policy-based tiering at the file level (see the sketch below)
  • Transparent to clients / apps
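Under the covers these time-based policies are file pool policies. The sketch below is roughly what a “move cold data” policy might look like – the filter flags are assumptions based on my recollection of the 8.x CLI, and archive-tier is a hypothetical tier name:

    # Hypothetical policy: files not accessed for 90 days get moved
    # to the tier named "archive-tier"; paths don't change, so the
    # move is transparent to clients and apps
    isi filepool policies create move-cold-data \
        --begin-filter --accessed-time=90D --operator=gt --end-filter \
        --data-storage-target=archive-tier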


Other Cool Stuff

SmartConnect with NFS Failover

  • High Availability
  • No RTO or RPO

SnapshotIQ

  • Very fast file recovery
  • Low RTO and RPO

SyncIQ via LAN

  • Disk-based backup and business continuity
  • Medium RTO and RPO

SyncIQ via WAN

  • Offsite DR
  • Medium – high RTO and RPO

NDMP Backup

  • Backup to tape
  • FC backup accelerator
  • Higher RTO and RPO
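As a rough idea of what the middle rungs of that ladder look like in practice, here’s a sketch of a scheduled SyncIQ policy replicating to a DR cluster. The argument order and schedule syntax are recalled from the 8.x CLI and should be verified, and the target host is hypothetical:

    # Replicate /ifs/data to the DR cluster every hour - the RPO is
    # then roughly the schedule interval plus the sync run time
    isi sync policies create dr-data sync /ifs/data \
        dr-cluster.example.com /ifs/dr/data \
        --schedule="every 1 hours"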

Scalability

Key Features

  • Support for files up to 16TB in size
  • Increase of 4X over previous versions

Benefits

  • Support applications and workloads that typically deal with large files
  • Use Isilon as a destination or temporary staging area for backups and databases


Isilon in the Cloud

All this Isilon stuff is good, but what if you want to leverage those features in a more cloud-friendly way? Dell EMC has you covered. There’s a good story around getting data to and from the major public cloud providers (in a limited number of regions), and there’s also an interesting solution for running OneFS in the cloud itself.

[image courtesy of Dell EMC]


Thoughts and Further Reading

If you’re familiar with Isilon, a lot of what I’ve covered here wouldn’t be news, and would likely be a big part of the reason why you might even be an existing customer. But the OneFS in the public cloud stuff may come as a bit of a surprise. Why would you do it? Why would you pay over the odds to run appliance-like storage services when you could leverage native storage services from these cloud providers? Because the big public cloud providers expect you to have it all together, and run applications that can leverage existing public cloud concepts of availability and resiliency. Unfortunately, that isn’t always the case, and many enterprises find themselves lifting and shifting workloads to public clouds. OneFS gives those customers access to features that may not be available to them using the platform natively. These kinds of solutions can also be interesting in the verticals where Isilon has traditionally proven popular. Media and entertainment workloads, for example, often still rely on particular tools and workflows that aren’t necessarily optimised for public cloud. You might have a render job that you need to get done quickly, and the amount of compute available in the public cloud would make that a snap. So you need storage that integrates nicely with your render workflow. Suddenly these OneFS in X Cloud services are beginning to make sense.

It’s been interesting to watch the evolution of the traditional disk slingers in the last 5 years. I don’t think the public cloud has eaten their lunch by any means, but enterprises continue to change the way they approach the need for core infrastructure services, across all of the verticals. Isilon continues to do what it did in the first place – scale-out NAS – very well. But Dell EMC has also realised that it needs to augment its approach in order to keep up with what the hyperscalers are up to. I don’t see on-premises Isilon going away any time soon, but I’m also keen to see how the product portfolio develops over the next few years. You can read some more on OneFS in Google Cloud here.

Dell Technologies World 2018 – storage.27 – Isilon: What’s New in 2018 & Future Directions Notes

Disclaimer: I recently attended Dell Technologies World 2018.  My flights, accommodation and conference pass were paid for by Dell Technologies via the Press, Analysts and Influencers program. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Here are my rough notes from the storage.27 session. This was presented by John Hayden, VP Software Engineering, Unstructured Data Storage, and covered Isilon: What’s New in 2018 & Future Directions. This is a futures session, so some of this may not come to pass exactly as it was described here, and there are no dates. The choice was to talk about dates, with no technical details, or talk technical details, but no dates. Option 2 made for a more entertaining session.

Momentum with Isilon – more than 3.5EB shipped in calendar year 2017


What’s new since Dell EMC World 2017?

  • Release of OneFS 8.1
  • New generation of Isilon hardware products
    • All-Flash
    • Hybrid
    • Archive
  • 3 year satisfaction guarantee


Isilon hardware design: compute

Features

  • 4 nodes in 4U chassis
  • Intel Broadwell CPUs with optimised compute-to-drive ratios
  • Up to 6TB cache per node
  • No single points of failure
  • Networking flexibility: InfiniBand, 10GbE / 40GbE

Benefits

  • 4:1 reduction in RU
  • Optimised IOPS and throughput
  • Future-proof, enduring design: Snap-in next-gen compute, networks
  • New levels of modular, hot-swappable serviceability


Isilon hardware design: storage

Features

  • From 72TB to 924TB in 4RU
  • 5 drive sleds per node. 3 to 6 drives per sled
  • Front aisle, hot swap sleds and drives
  • Media flexibility: Flash, SAS, and SATA media

The Isilon Family – mix of performance and capacity


Isilon OneFS Enhancements

  • Improved performance, driven by software improvements, enables new workloads
  • Updated support with leading analytics vendors like Cloudera and Hortonworks
  • New cloud tiering options along with multi-cloud support
  • Ethernet back-end infrastructure support
  • Improved handling of small files for healthcare PACS
  • IsilonSD Edge software defined storage now supported on PowerEdge servers along with vSAN and VxRail
  • Data in flight encryption for improved security


Behind the Scenes

Relentless focus on quality & customer experience

  • Isilon engineering organised into CX, 8.X & Innovation acceleration
  • Corporate Dell-wide quality standards and KPIs
  • Tremendous improvement in support and support capabilities
  • Half of the code base in the field comes from releases shipped in the last 18 months, and that is continuing to accelerate dramatically

Results: explosive growth in 8.1 and Gen6 implementation


Isilon Pillars of Innovation

  • Flash
  • Archive – great integration with ECS
  • Cloud – CloudPools v1 is already out
  • Analytics – Nautilus, IoT – a lot sits on Isilon and ECS


Where we’re investing

Storage challenges

  • Rapid data growth
  • Data silos
  • Cost of infrastructure
  • Insufficient performance
  • Limited IT resources


Addressing data growth

Isilon Today

  • Scale-out architecture
  • Scales from 10s of TB to 50+ PB in a single cluster
  • 144 nodes maximum cluster size
  • Provision storage only as needed
  • Simplified management at PB scale
  • Automated tiering between storage tiers and cloud

Future

  • 250+ node cluster size and L/S
  • Cluster aware NDMP
  • Partitioned performance for insights
  • System services QoS
  • Rich filesystem analytics


Addressing Data silos

Isilon Today

  • Multi-protocol namespace ideal for infrastructure consolidation
  • Supports unified data lake
  • Enterprise grade features
  • Edge-to-core-to-cloud solution
  • In-place analytics support

Future

  • File and object integration – CloudPools 2 – snapshots and quotas will work
  • Increased support for emerging data analytics technologies, including streaming analytics


Addressing Infrastructure Cost

Isilon Today

  • Up to 80% storage efficiency
  • Optimised data placement and tiering
  • Flexible deduplication
  • Seamless cloud integration
  • Investment protection – no rip and replace

Future

  • Cloud co-location
  • Inline compression and dedupe for Flash
  • CloudPools v2 with tiering snaps and quotas
  • Writable snapshots


Addressing System Performance Needs

Isilon Today

  • OneFS optimised for Flash
  • New All-Flash configuration
  • 6x IOPS and 11x throughput
  • Optimised compute to drive ratios
  • Reduced latency / write optimisation

Future

  • Inline compression and dedupe for Flash
  • File create / delete improvements
  • Streaming improvements (multi-block reads, degraded reads)
  • Performance at scale (cluster, snaps, domains)
  • Large file size
  • Partitioned performance


Addressing Manageability Needs

Isilon Today

  • Simple to manage at any scale
  • Single file system
  • Policy based automation
  • Choice of management tools
  • API / web services “first” model

Future

  • On cluster and off cluster UI revolution – including next-generation cluster management technology
  • Cluster integrated authentication
  • Unchained delivery cadence and innovation posture
  • Predictive and prescriptive management


Addressing Data protection and security needs

Isilon Today

  • Enterprise data protection features
  • Tolerate up to 4 simultaneous failures
  • WORM, SEDs, RBAC, access zones, compliance
  • Improved failure domain with mirrored journal and boot drive
  • Improved handling of drive failure

Future

  • Multi-factor auth for SSH
  • RBAC per AZ
  • Secure protocols optimisations
  • Enhanced Federal support


“Simple is Smart” with Isilon

Simplicity

  • Single volume, single file system architecture

Scalability

  • Expands easily from 10s of TB to 10s of PB without disruption

Efficiency

  • Up to 80% storage utilisation
  • Automated tiering and cloud integration

Data Analytics

  • In-place, high performance data analytics

Data Protection

  • Highly resilient architecture, data replication and snapshots

Security and compliance

  • WORM, data at rest encryption, role-based access control, and more


Top session. 5 stars.

Dell EMC’s Isilon All-Flash Is Starting To Make Sense

Disclaimer: I recently attended Storage Field Day 13.  My flights, accommodation and other expenses were paid for by Tech Field Day and Pure Storage. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.


I’ve written about Dell EMC Isilon All-Flash before (here and here). You can see Dell EMC’s Storage Field Day presentation video here and you can grab a copy of my rough notes from here.


The Problem?

Dell EMC’s Isilon (and OneFS) has been around for a while now, and Dell EMC tell us it offers the following advantages over competing scale-out NAS offerings:

  • Single, scalable file system;
  • Fully symmetric, clustered architecture;
  • Truly multi-protocol data lake;
  • Transparent tiering with heterogeneous clusters; and
  • Non-disruptive platform and OneFS upgrades.

While this is most likely true, the world (and its workloads) is changing. To this end, Dell EMC have been working with Isilon customers to address some key industry challenges, including:

  • Electronic Design Automation – 7nm and 3D Chip designs;
  • Life Sciences – population-scale genomics;
  • Media and Entertainment – 4K Content and Distribution; and
  • Enterprise – big data and analytics.


The Solution?

To cope with the ever-increasing throughput requirements, Dell EMC have developed an all-flash offering for their Isilon range of NAS devices, along with some changes in their OneFS operating environment. The idea of the “F” series of devices is that you can “start small and scale”, with capacities ranging from 72TB to 924TB (raw) in 4RU. Dell EMC tell me you can go to over 33PB in a single file system. From a performance perspective, Dell EMC say that you can push 250K IOPS (or 15GB/s) in just 4RU and scale to 9M IOPS. These are pretty high numbers, and pointless if your editing workstation is plugged into a 1Gbps switch. But that’s generally not the case nowadays.

One of the neater resilience features that Dell EMC discussed was that the file system layout is “sled-aware” (there are 5 drive sleds per node and 20 sleds per 4RU chassis) meaning that a given file uses one drive per sled, allowing for sled removal for service without data unavailability, with these being treated as temporarily-offline drives.


Is All-Flash the Answer (Or Just Another Step?)

I’ve been fascinated with the storage requirements (and IT requirements in general) for media and entertainment workloads for some time. I have absolutely no real-world experience with these types of environments, and it would be silly for me to position myself as any kind of expert in the field. [I am, of course, happy for people working in M&E to get in touch with me and tell me all about what they do]. What I do have is a lot of information that tells me that the move from 2K to 4K (and 8K) is forcing people to rethink their requirements for high bandwidth storage in the ranges of capacities that studios are now starting to look at.

Whilst I was initially a little confused around the move to all-flash on the Isilon platform, the more I think about it, the more it makes sense. You’re always going to have a bunch of data hanging around that you might want to keep on-line for a long time, but it may not need to be retrieved at great speed (think “cheap and deep” storage). For this, it seems that the H (Hybrid) series of Isilon does the job, and does it well. But for workloads where large amounts of data need to be processed in a timely fashion, all-flash options are starting to make a lot more sense.

Is an all-flash offering the answer to everything? Probably not. Particularly not if you’re on a budget. And no matter how much money people have invested in the movie / TV show / whatever, I can guarantee that most of that is going to talent and content, not infrastructure. But there’s definitely a shift from spinning disk to Flash and this will continue as Flash media prices continue to fall. And then we’ll wonder how we ever did anything with those silly spinning disks. Until the next magic medium comes along. In the meantime, if you want to take OneFS for a spin, you can grab a copy of the version 8.1 simulator here. There’s also a very good Isilon overview document that I recommend you check out if that’s the kind of thing you’re into.

Dell EMC Announces Isilon Update (with cameo from ECS)

Disclaimer: I recently attended Dell EMC World 2017.  My flights, accommodation and conference pass were paid for by Dell EMC via the Dell EMC Elect program. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Isilon

The latest generation of Isilon (previewed at Dell EMC World in Austin) was announced today. It’s a modular, in-chassis, flexible platform capable of hosting a mix of all-flash, hybrid and archive nodes. The smaller nodes, with a single socket driving 15 or 20 drives (so they can granularly tune the socket:spindle ratio), come in a 4RU chassis. Each chassis can accommodate 4 “sub-nodes”. Dell EMC are claiming some big improvements over the previous generation of Isilon hardware, with 6X file IOPS, 11X throughput, and 2X the capacity. Density is in too, and you can have up to 80 drives in a single chassis (using SATA drives). Dell EMC notes that it’s NVMe ready, but the CPU power to drive that isn’t there just yet. At launch it supports up to 144 nodes (in 36 chassis) and they’re aiming to get to 400 later in the year. Interestingly, there are now dual modes of backend connectivity (InfiniBand and Ethernet) to accommodate this increased number of nodes.
From a compute perspective, you’ll see the following specs:

  • Intel Broadwell CPU (with optimised compute to drive ratios)
  • Up to 6TB cache per node (2 Flash cards)
  • No SPoF
  • Networking flexibility – Infiniband, 10GbE/40GbE

Nodes can borrow power from neighbours if required too.

Dell EMC tell me this provides the following benefits:

  • 4:1 reduction in RU
  • Optimised IOPS and throughput
  • Future-proof, enduring design – snap in next-gen CPUs, networks
  • New levels of modular, hot-swappable serviceability

From a storage perspective, you’ll see a range of configurations:

  • From 72TB to 924TB in 4RU
  • 5 drive sleds per node. 3-6 drives per sled.
  • Front aisle, hot swap sleds and drives
  • Media flexibility: Flash, SAS and SATA media

Dell EMC tell me this provides the following benefits:

  • Start small and scale
  • Breakthrough density
  • Simplified serviceability and upgrades
  • Future-proof storage

Other Benefits?

Well, you get access to OneFS 8.1. You also get an OPEX reduction by occupying a lot less space in the DC, and the ability to host a much more diverse set of workloads. Dell EMC are also claiming this release provides unmatched resilience, availability, and security.

Scale? They’ve got that too.

From a capacity standpoint, you can start as small as 72TB (in one chassis) and expand that to over 33PB in a single volume and file system. In terms of performance, Dell EMC are telling me they’re getting up to 250K IOPS and 15GB/s per chassis, scaling to 9M IOPS and 540GB/s of aggregate throughput. Your mileage might vary, of course.


Speeds and Feeds

So what do the new models look like? You can guess, but I’ll say it anyway. F nodes are all flash, H nodes are hybrid, and A nodes are archive nodes.

F800 (All Flash)

  • 1.6TB, 3.2TB and 15.4TB Flash
  • 60 drives, up to 924TB per chassis
  • 250K IOPS per chassis
  • 15GB/s throughput

H600 (Hybrid)

  • 600GB and 1.2TB SAS drives
  • 120 drives and up to 144TB per chassis
  • 117K IOPS per chassis

H500, H400 (Hybrid)

  • 2/4/8TB SATA drives
  • 60 drives per chassis
  • Up to 480TB per chassis

A200 (Archive)

  • 2/4/8TB SATA drives
  • 60 drives per chassis
  • Up to 480TB per chassis

A2000 (Archive)

  • 10TB SATA drives
  • 80 drives per chassis
  • 800TB per chassis


“No node left behind”

One of the great things about Isilon is that you can seamlessly add “Next Gen” nodes to existing clusters. You’ve been able to do this with Isilon clusters for a very long time, obviously, and it’s nice to see Dell EMC maintain that capability. The benefits of this approach are that you can:

  • Beef up your existing Isilon clusters with Isilon all flash nodes; and
  • Consolidate your DC footprint by retiring older nodes.


OneFS

OneFS has always been pretty cool and it’s now “optimised [for the] performance benefits of flash – without compromising enterprise features”. According to Dell EMC, flash wear is “yesterday’s problem”, and the F800 can sustain more writes per day than its total capacity every day for over 5 years before approaching limits. OneFS is now designed to go “From Edge to Core to Cloud” with IsilonSD Edge, the Next Generation Core and Cloud (with CloudPools -> AWS, Azure, Virtustream).


IsilonSD Edge

IsilonSD Edge has some new and improved features now too:

  • VMware ESXi Hypervisor
  • Full vCenter integration
  • Scale up to 36TB
  • Single server deployment
  • Back-end SAN: ScaleIO, vSAN and VxRail
  • Dell PowerEdge 14G Support


ECS

Dell EMC also talked about their vision for ECS.Next, coming in the next year.

  • Data streaming
  • Enterprise Hardening
  • Certifications
  • Compliance
  • Economics
  • Hybrid Cloud

Big bets?

  • Hybrid Cloud
  • Batch and real-time analytics and stream processing
  • Massive scale @ low cost with new enterprise capabilities


Hybrid Cloud

Dell EMC are launching an ECS Dedicated Cloud (ECS DC) Service. This is on-demand ECS storage, managed by Dell EMC and running on dedicated, single-tenant servers hosted in a Virtustream DC. It’s available in hybrid and fully hosted multi-site configurations.

So what’s in the box?

You get some dedicated infrastructure:

  • Customer owned ECS rack
  • Dedicated network / firewall / load balancer

You also get 24×7 support of hosted sites from a professional DevOps team:

  • Strong expertise in operating ECS
  • Proactive monitoring and fast response

As well as broad geo coverage:

  • 5 DCs available across US (Las Vegas, Virginia) and Europe (France, London, Netherlands)
  • Coming to APJ by end of 2017

It will run on a subscription model, with a 1 year or 3 year contract available.


Project Nautilus

The team also took us through “Project Nautilus”, a batch and real-time analytics and stream processing solution.

Streaming storage and analytics engine

  • Scale to manage 1000s of high-volume IoT data sources
  • Eliminate real-time and batch analytics silos
  • Tier inactive data seamlessly and cost effectively

I hope to cover more on this later. They’re also working on certifications in terms of Hadoop and Centera migrations too (!). I’m definitely interested in the Centera story.


Conclusion

I’ve been a fan of Isilon for some time. It does what it says on the tin and does it well. The Nitro announcement last year left a few of us scratching our heads (myself included), but I’m on board with a number of the benefits from adopting this approach. Some people are just going to want to consume things in a certain way (VMAX AF is a good example of this), and Dell EMC have been pretty good at glomming onto those market opportunities. And, of course, in much the same way as we’re no longer running SCSI disks everywhere, Flash does seem to be the medium of the future. I’m looking forward to seeing ECS progress as well, given the large number of scale-out, object-based storage solutions on the market today. If you’d like to read more about the new Isilon platform, head over to Dell EMC’s blog to check it out.


Dell EMC Announces Isilon All-Flash

You get a flash, you get a flash, you all get a flash

Last week at Dell EMC World it was announced that the Isilon All-Flash NAS (formerly “Project Nitro”) offering was available for pre-order (and GA in early 2017). You can check out the specs here, but basically each chassis is comprised of 4 nodes in 4RU. Dell EMC says this provides an “[e]xtreme density, modular and incredibly scalable all-flash tier”, with the ability to have up to 100 systems with 400 nodes, storing 92.4PB of capacity, 25M IOPS and up to 1.5TB/s of total aggregate bandwidth – all within a single file system and single volume. All OneFS features are supported, and a OneFS update will be required to add these nodes to existing clusters.


[image via Dell EMC]


Why?

Dell EMC are saying this solution provides 6x greater IOPS per RU over existing Isilon nodes. It also helps in areas where Isilon hasn’t been as competitive, providing:

  • High throughput for large datasets of large files for parallel processing;
  • IOPS-intensive workloads: you can now work on billions of small files;
  • Predictable latency and performance for mixed workloads; and
  • Improved cost of ownership, with higher density flash providing some level of relief in terms of infrastructure and energy efficiency.


Use Cases?

Dell EMC covered the usual suspects – but with greater performance:

  • Media and entertainment;
  • Life sciences;
  • Geoscience;
  • IoT; and
  • High Performance Computing.


Thoughts and Further Reading

If you followed along with the announcements from Dell EMC last week you would have noticed that there have been some incremental improvements in the current storage portfolio, but no drastic changes. While it might make for an exciting article when Dell EMC decide to kill off a product, these changes make a lot more sense (FluidFS for XtremIO, enhanced support for Compellent, and the addition of a PowerEdge offering for VxRail). The addition of an all-flash offering for Isilon has been in the works for some time, and gives the platform a little extra boost in areas where it may have previously struggled. I’ve been a fan of the Isilon platform since I first heard about it, and while I don’t have details of pricing, if you’re already an Isilon shop the all-flash offering should make for interesting news.

Vipin V.K did a great write-up on the announcement that you can read here. The press release from Dell EMC can be found here. There’s also a decent overview from ESG here. Along with the above links to El Reg, there’s a nice article on Nitro here.

Random Short Take #2

I did one of these 7 years ago – so I guess I never really got into the habit – but here’re a few things that I’ve noticed and thought you might be interested in:

And that’s about it, thanks for reading.


EMC – Isilon – Joining Active Directory

I’ve been doing some work in the EMC vLabs and I thought I’d take note of how to join an Isilon cluster to Active Directory. The cluster in this example is running 3 Isilon virtual nodes with OneFS 7.1.0.0.

Once you’ve logged in, click on Cluster Management and Access Management. Under Access Management, click on Active Directory. Note that there are no Active Directory providers configured in this example.

[screenshot: Isilon_AD1]

Click on “Join a domain”. You can then specify the Domain Name, etc. You can also specify the OU and Machine Account if required.

[screenshot: Isilon_AD2]

Once you click on Join, you’ll be joined to the AD.

[screenshot: Isilon_AD3]

You can also run isi auth status to confirm the join.

[screenshot: Isilon_AD4]
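If you’d rather do the whole thing from the CLI instead of the web UI, OneFS has an equivalent command. A sketch based on the 7.x syntax as I remember it (treat the flags as assumptions – you’ll be prompted for the domain account’s password):

    # Join the cluster to the domain
    isi auth ads create FQDN.DOMAIN.COM --user=Administrator

    # And verify, as above
    isi auth status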

And that’s it. As always, I recommend you use a directory service of some type on all of your devices for authentication.

EMC – VSI for VMware vSphere 6.5 Linked Mode Issue – Redux

I wrote in a previous post about having some problems with EMC’s VSI for VMware vSphere 6.5 when running in vCenter 5.5 in Linked Mode. I spoke about deploying the appliance in just one site as a workaround. Turns out that wasn’t much of a workaround. Because workaround implies that I was able to get some functionality out of the situation. While the appliance deployed okay, I couldn’t get it to recognise the deployed volumes as EMC volumes.


A colleague of mine had the same problem as me and a little more patience and logged a call with EMC support. Their response was “[c]urrent VSI version does not support for Linked mode, good news is recently we have several customers requesting that enhancement and Dev team are in the process of evaluating impact to their future delivery schedule. So, the linked mode may will be supported in the future. Thanks.”



While this strikes me as non-optimal, I am hopeful, but not optimistic, that it will be fixed in a later version. My concern is that Linked Mode isn’t the problem at all, and it’s something else stupid that I’m doing. But I’m short of places I can test this at the moment. If I come across a site where we’re not using Linked Mode, I’ll be sure to fire up the appliance and run it through its paces, but for now it’s back in the box.

EMC – VSI for VMware vSphere 6.5 Linked Mode Issue

As part of a recent deployment I’ve been implementing EMC VSI for VMware vSphere Web Client v6.5 in a vSphere 5.5 environment. If you’re not familiar with this product, it “enables administrators to view, manage, and optimize storage for VMware ESX/ESXi servers and hosts and then map that storage to the hosts.” It covers a bunch of EMC products, and can be really useful in understanding where your VMs sit in relation to your EMC storage environment. It also really helps non-storage admins get going quickly in an EMC environment.

To get up and running, you:

  • Download the appliance from EMC;
  • Deploy the appliance into your environment;
  • Register the plug-in with vCenter by going to https://ApplianceIP:8443/vsi_usm/admin;
  • Register the Solutions Integration Service in the vCenter Web Client; and
  • Start adding arrays as required.

So this is all pretty straightforward. BTW the default username is admin, and the default password is ChangeMe. You’ll be prompted to change the password the first time you log in to the appliance.
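If the registration page doesn’t come up, it’s worth checking that the Solutions Integration Service is actually answering on that port before digging any further. A quick check from any machine with curl – the -k skips certificate validation, which you’ll likely need given the appliance certificate is self-signed (an assumption worth verifying):

    # Substitute your appliance's address; any HTTP response
    # (even a redirect) means the service is up and reachable
    curl -k -I https://ApplianceIP:8443/vsi_usm/admin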


So the problem for me arose when I went to register a second SIS appliance.

[screenshot: VSI1]

By way of background, there are two vCenter 5.5 U2 instances running at two different data centres. I do, however, have them running in Linked Mode. And I think this is the problem. I know that you can only register one instance at a time with one vCenter. While it’s not an issue to deploy a second appliance at the second DC, every time I go to register the service in vCenter, regardless of where I’m logged in, it always points to the first vCenter instance. Which is a bit of a PITA, and not something I’d expected to be a problem. As a workaround, I’ve deployed one instance of the appliance at the primary DC and added both arrays to it to get the client up and running. And yes, I agree, if I have a site down I’m probably not going to be super focused on storage provisioning activities at my secondary DC. But I do enjoy whinging about things when they don’t work the way I expected them in the first instance.


I’d read in previous versions that Linked Mode wasn’t supported, but figured this was no longer an issue as it’s not mentioned in the 6.5 Product Guide. This thread on ECN seems to back up what I suspect. I’d be keen to hear if other people have run into this issue.