Cisco Introduces HyperFlex 4.5

Disclaimer: I recently attended Storage Field Day 20.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Cisco presented a sneak preview of HyperFlex 4.5 at Storage Field Day 20 a little while ago. You can see videos of the presentation here, and download my rough notes from here. Note that this preview was given some time before the product was officially announced, so a few things may have changed between the preview and the final product release.

 

Announcing HyperFlex 4.5

4.5: Meat and Potatoes

So what are the main components of the 4.5 announcement?

  • iSCSI Block storage
  • N:1 Edge data replication
  • New edge platforms / SD-WAN
  • HX Application Platform (KVM)
  • Intersight K8s Service
  • Intersight Workload Optimizer

Other Cool Stuff

  • HX Boost Mode – a virtual CPU configuration change in the HX controller VM; the boost is persistent (scale up).
  • ESXi & vCenter 7.0 support with the HX native HTML5 vCenter plugin (available since HX 4.0); 6.0 is EoS.
  • Secure Boot – protects the hypervisor against bootloader attacks, with secure boot anchored in the Cisco hardware root of trust.
  • Hardened SDS Controller – reduces the attack surface and mitigates against compromised admin credentials.

The HX240 Short Depth nodes have been available since HX 4.0, but there’s now a new Edge option – the HX240 Edge. This is a new 2RU form factor for HX Edge (2, 3, or 4 nodes), available in All-Flash and hybrid configurations, with 1 or 2 sockets, up to 3TB RAM and 175TB capacity, and PCIe slots for dense GPUs.

 

iSCSI in HX 4.5(1a)

[image courtesy of Cisco]

iSCSI Topologies

[image courtesy of Cisco]

 

Thoughts and Further Reading

Some of the drama traditionally associated with HCI marketing seems to have died down now, and people have mostly stopped debating what it is or isn’t, and started focusing on what they can get from the architecture over more traditional infrastructure deployments. Hyperconverged has always had a good story when it comes to compute and storage, but the networking piece has proven problematic in the field. Sure, there have been attempts at making software-defined networking more effective, but some of these efforts have run into trouble when they’ve hit the northbound switches.

When I think of Cisco HyperFlex I think of it as the little HCI solution that could. It doesn’t dominate the industry conversation like some of the other vendors, but it’s certainly had an impact, in much the same way UCS has. I’ve been a big fan of Springpath for some time, and HyperFlex has taken a solid foundation and turned it into something even more versatile and fully featured. I think the key thing to remember with HyperFlex is that it’s a networking company selling this stuff – a networking company that knows what’s up when it comes to connecting all kinds of infrastructure together.

The addition of iSCSI keeps the block storage crowd happy, and the new edge form-factor will have appeal for customers trying to squeeze these boxes into places they probably shouldn’t be going. I’m looking forward to seeing more HyperFlex from Cisco over the next 12 months, as I think it finally has a really good story to tell, particularly when it comes to integration with other Cisco bits and pieces.

Cisco MDS, NVMe, and Flexibility


Cisco recently presented at Storage Field Day 20. You can see videos of the presentation here, and download my rough notes from here.

 

NVMe, Yeah You Know Me

Non-Volatile Memory Express, known more commonly as NVMe, is a protocol designed for high-performance access to SSD storage. In the olden days, we used to associate Fibre Channel and iSCSI networking options with high-performance block storage. Okay, maybe not the 1Gbps iSCSI stuff, but you know what I mean. Time has passed, and the storage networking landscape has changed significantly with the introduction of All-Flash and NVMe. But NVMe’s adoption hasn’t been all smooth sailing. There have been plenty of vendors willing to put NVMe drives in storage arrays while doing some translation on the backend that negated the real benefits of NVMe. And, like many new technologies, it’s been a gradual process to get end-to-end NVMe in place, because enterprises, and the vendors that sell to them, only move so fast. Some vendors support NVMe, but only over FC. Others have adopted the protocol to run over RoCEv2. There’s also NVMe over TCP, in case you weren’t confused enough about what you could use. I’m doing a poor job of explaining this, so you should really just head over to Dr J Metz’s article on NVMe for beginners at SNIA.

 

Cisco Are Ready For Anything

As you’ve hopefully started to realise, you’ll see a whole bunch of NVMe implementations available in storage fabrics, along with a large number of enterprises continuing to talk about and deploy new storage equipment that uses traditional block fabrics, such as iSCSI or FC or, perish the thought, FCoE. The cool thing about Cisco MDS is that it supports all this crazy and more. If you’re running the latest and greatest end-to-end NVMe implementation and have some old block-only 8Gbps FC box sitting in the corner, Cisco can likely help you with connectivity. The diagram below hopefully demonstrates that point.

[image courtesy of Cisco]

 

Thoughts and Further Reading

Very early in my storage career, I attended a session on MDS at Cisco Networkers Live (when they still ran those types of events in Brisbane). Being fairly new to storage, and running a smallish network of one FC4700 and 8 Unix hosts, I’d tended to focus more on the storage part of the equation than the network part of the SAN. Cisco was still relatively new to storage at that stage, and it felt a lot like it had adopted a very network-centric view of the storage world. I was a little confused about why all the talk was about backplanes and port density, as I was more interested in the optimal RAID configuration for mail server volumes and how I should protect the data being stored on this somewhat sensitive piece of storage. As time went on, I was invariably exposed to larger and larger environments where decisions around core and edge storage networking devices started to become more and more critical to getting optimal performance out of the environment. A lot of the information I was exposed to in that early MDS session started to make more sense (particularly as I was tasked with deploying larger and larger MDS-based fabrics).

Things have obviously changed quite a bit since those heady days of a network upstart making waves in the storage world. We’ve seen increases in network speeds become more and more common in the data centre, and we’re no longer struggling to get as many IOPS as we can out of 5400 RPM PATA drives with an interposer and some slightly weird firmware. What has become apparent, I think, is the importance of the fabric when it comes to getting access to storage resources in a timely fashion, and with the required performance. As enterprises scale up and out, and more and more hosts and applications connect to centralised storage resources, it doesn’t matter how fast those storage resources are if there’s latency in the fabric.

The SAN still has a place in the enterprise, despite what the DAS huggers will tell you, and you can get some great performance out of your SAN if you architect it appropriately. Cisco certainly seems to have an option for pretty much everything when it comes to storage (and network) fabrics. It also has a great story when it comes to fabric visibility, and the scale and performance at the top end of its MDS range is pretty impressive. In my mind, though, the key really is the variety of options available when building a storage network. It’s something that shouldn’t be underestimated given the plethora of requirements in the market.

Brisbane VMUG – November 2018

hero_vmug_express_2011

The November 2018 edition of the Brisbane VMUG meeting (and last one of the year) will be held on Tuesday 20th November at Toobirds at 127 Creek Street from 4:30 pm – 6:30 pm. It’s sponsored by Cisco and promises to be a great afternoon.

Here’s the agenda:

  • VMUG Intro
  • VMware Presentation: Workspace ONE UEM Modern Management for Windows 10
  • Cisco Presentation: Cloud First in a Multi-cloud world
  • Q&A
  • Refreshments and drinks.

Cisco have gone to great lengths to make sure this will be a fun and informative session and I’m really looking forward to hearing what they’ve been up to. You can find out more information and register for the event here. I hope to see you there. Also, if you’re interested in sponsoring one of these events, please get in touch with me and I can help make it happen.

Random Short Take #4

Welcome to the 2017 edition of the Random Short Take. Here are a few links to a few things that I think might be useful, to someone. Maybe.

I’ve been doing some vSphere designs lately, and found these links handy:

I don’t think we’re talking enough about protecting the vCenter Server Appliance. I found these links to be pretty handy.

Need some info on Cisco UCS? Go here.

And if you’re working out power draw in the DC, this might be helpful.

Oracle VM came up in a project I was working on recently. This overview page was a reasonable starting point. Finally, check out Stephen Foskett’s article on ZFS. I thought it was well-balanced and a good read, and the article comments reminded me why I’ve stayed the hell away from that particular community. In any case, if you’re going to be at VMworld US this year, come and say hi.

 

Cisco – Reset snmp user password

More often than not, I have problems with Cisco MDS switches because I’ve done something stupid. For example, last week I replaced some switch configs and managed to break the password for the SNMP admin user. As a result, I could log into the switch with admin credentials, and I could see the switch in DCNM, but I couldn’t access it using SNMP credentials. It’s a simple fix, for I’m a simple fellow.

switch1# conf t
Enter configuration commands, one per line.  End with CNTL/Z.
switch1(config)# snmp-server user admin network-admin auth md5 yourpasswordgoeshere
switch1(config)# exit
switch1# copy run start
[########################################] 100%
Copy complete, now saving to disk (please wait)...
switch1#
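For the curious, the `auth md5` password set above isn’t used on the wire as-is; SNMPv3 (RFC 3414) derives a localized key from the password and the switch’s engine ID, which is why SNMP access breaks if the two get out of sync. A rough Python sketch of that derivation, using the RFC’s own “maplesyrup” test vector rather than any real credential:

```python
import hashlib

def localize_md5_key(password: bytes, engine_id: bytes) -> bytes:
    """RFC 3414 key localization (MD5 variant): repeat the password to
    1,048,576 octets, hash it, then hash (digest + engineID + digest)."""
    expanded = (password * (1048576 // len(password) + 1))[:1048576]
    ku = hashlib.md5(expanded).digest()
    return hashlib.md5(ku + engine_id + ku).digest()

# Test vector from RFC 3414 Appendix A.3.1 -- not a real credential.
key = localize_md5_key(b"maplesyrup", bytes.fromhex("000000000000000000000002"))
print(key.hex())  # 526f5eed9fcce26f8964c2930787d82b
```

The practical takeaway is that the same password produces a different key on every switch (the engine ID differs), which is why tools like DCNM care about the SNMP credentials being set properly on each device.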

Cisco – DCNM, why are you like this?

I thought it would be a good idea to upgrade the copy of Cisco DCNM installed on my laptop (in standalone mode) from 5.2(2) to 6.1(1) the other day. I ran the 32-bit installer and got an error about “Upgradation” being unsupported from this mode.

dcnm_upgrade

This probably should have set off alarm bells. But I hadn’t read the release notes, and wasn’t really paying attention. So I dutifully uninstalled 5.2(2) and had another go at it.

dcnm_upgrade2

Sigh. I know that in production you wouldn’t be using a Windows 7 laptop to run this software. And I know that I should have carefully read the requirements before I attempted installation. If I had, I would have read this: “Cisco DCNM SAN Release 6.1(1a) and later releases do not support running the Cisco DCNM SAN client in standalone mode. If you were running the SAN client in standalone mode in Release 5.2(x), you should uninstall it and install Cisco DCNM SAN server Release 6.1(1a) or a later release. You cannot upgrade the standalone SAN client from DCNM Release 5.2(x) to Release 6.1(1a) or a later release”. But surely it could have popped up with this warning before telling me that I had to uninstall 5.2(2) first? DCNM developers have moved back to the top of my list. If they can’t code around my ignorance and laziness then I want no part of their product. And what the hell is “Upgradation” anyway?

Cisco – Restoring MDS configurations from somewhere else

We recently had to replace a Cisco MDS 9124e in our lab. I used to use this method to copy and restore configuration files to MDS switches.

switch# copy tftp://192.168.0.20/switch.cfg startup-config
Trying to connect to tftp server......
Connection to server Established. Copying Started.....
|
TFTP get operation was successful
This command is deprecated. To obtain the same results, please use
the sequence 'write erase' + 'reload' + 'copy <file> running-config' + 'copy running-config startup-config'.

It was rough, but it used to work. So now I do this.

switch# copy tftp://192.168.0.20/switch.cfg bootflash:
Trying to connect to tftp server......
Connection to server Established. Copying Started.....
|
TFTP get operation was successful
switch# dir
      15155    Feb 05 21:37:37 2013  switch.cfg

write erase
reload
copy switch.cfg running-config
copy run start

It makes sense, as the write erase and reload commands make you think about what you’re doing, and you need to be sure that you want to overwrite the running or startup config.
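Since the restore applies a saved file over a freshly-erased config, it can be worth eyeballing a diff between the saved config and what was running before you commit to the overwrite. A quick sketch using Python’s difflib (the config snippets here are made up for illustration):

```python
import difflib

# Hypothetical saved config pulled off the TFTP server.
saved = """snmp-server user admin network-admin auth md5 xxxx
interface fc1/1
  no shutdown
""".splitlines()

# Hypothetical running config captured from the switch.
running = """snmp-server user admin network-admin auth md5 yyyy
interface fc1/1
  shutdown
""".splitlines()

# Show what would change if the saved config replaced the running one.
diff = list(difflib.unified_diff(running, saved,
                                 fromfile="running-config",
                                 tofile="switch.cfg", lineterm=""))
print("\n".join(diff))
```

Lines prefixed with `-` are what you’d lose and lines prefixed with `+` are what the saved file would bring in, which is a handy sanity check before typing `write erase`.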

Updated Articles page

I’ve added a brief article covering the steps involved in installing the Cisco Prime DCNM in standalone mode – used for management and maintenance of Cisco fabrics. I had to re-install this software after a workstation replacement and thought it might be useful to document the steps required.

Cisco MDS Scheduler with AAA

This is probably very old news but it’s here more for my reference than anything else. A little while ago we introduced 2 new MDS 9513 switches into our core and needed to set up a simple scheduled backup task to copy the configs to a tftp server daily. For some reason I wasn’t able to create the job in the scheduler when I was logged in as a user that had authenticated against AAA.

MDS9513(config)# scheduler enable
MDS9513(config)# scheduler job name backup_config
Error: AAA authentication password not configured (for logged in user)

I may have the reason behind this arse-backwards, but it seems like I’ve probably never been able to do this. I think what I’ve been doing is setting up the configs on the switches and then adding them to ACS. I could be wrong about that too, but I’m really just interested in workarounds, not understanding the problem.

For some information on using the scheduler with a AAA user, have a look at this link on Cisco’s website.  So here’s how to give the AAA user privileges to configure scheduled tasks.

login as: username
User Access Verification
Using keyboard-interactive authentication.
Password:

Cisco Nexus Operating System (NX-OS) Software
TAC support: http://www.cisco.com/tac
Copyright (c) 2002-2009, Cisco Systems, Inc. All rights reserved.
The copyrights to certain works contained in this software are
owned by other third parties and used and distributed under
license. Certain components of this software are licensed under
the GNU General Public License (GPL) version 2.0 or the GNU
Lesser General Public License (LGPL) Version 2.1. A copy of each
such license is available at
http://www.opensource.org/licenses/gpl-2.0.php and
http://www.opensource.org/licenses/lgpl-2.1.php

MDS9513# conf t
Enter configuration commands, one per line. End with CNTL/Z.
MDS9513(config)# scheduler enable
MDS9513(config)# scheduler aaa-authentication user username password password
MDS9513(config)# scheduler job name backup_config
MDS9513(config-job)# copy running-config startup-config
MDS9513(config-job)# copy startup-config tftp://tftphost/Backup/MDS9513_cfg_$(TIMESTAMP).txt
MDS9513(config-job)# end
MDS9513# show scheduler job name backup_config

Job Name: backup_config
-----------------------
copy running-config startup-config
copy startup-config tftp://tftphost/Backup/MDS9513_cfg_$(TIMESTAMP).txt
==============================================================================
 

The problem with this is that you might prefer to use a service account to get this done. But perhaps you’re lazy and can’t be bothered asking for a service account. So if you’ve used your admin account you might want to remove it. Note that this *shouldn’t* have an impact on your scheduler configuration.

MDS9513# conf t
Enter configuration commands, one per line. End with CNTL/Z.
MDS9513(config)# no scheduler aaa-authentication username username password password
MDS9513(config)# end
MDS9513# show running-config | include "scheduler aaa-authentication"
MDS9513# show scheduler job name backup_config
Job Name: backup_config
-----------------------
copy running-config startup-config
copy startup-config tftp://tftphost/Backup/MDS9513_cfg_$(TIMESTAMP).txt
==============================================================================

MDS9513#
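As an aside, the `$(TIMESTAMP)` variable in the backup filename expands to a date-time string on the switch, so each daily backup gets a unique name. If you wanted to generate matching names off-box (say, for cleanup scripts on the TFTP server), a quick sketch follows; note the exact `YYYY-MM-DD-hh.mm.ss` layout here is an assumption from memory, so check it against your platform’s output:

```python
from datetime import datetime

def backup_filename(switch: str, now: datetime) -> str:
    """Mimic a scheduler $(TIMESTAMP)-style backup name, assuming
    the YYYY-MM-DD-hh.mm.ss layout (verify against your switch)."""
    return f"{switch}_cfg_{now.strftime('%Y-%m-%d-%H.%M.%S')}.txt"

print(backup_filename("MDS9513", datetime(2013, 2, 5, 21, 37, 37)))
# → MDS9513_cfg_2013-02-05-21.37.37.txt
```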

Cisco MDS blades are being returned …

I was going to write a long and angsty post about how I think Cisco should be publicly vilified for their continued publication of specs that don’t add up, but I’ll leave that to analysts who know more about such things than I do. I’m sure a lot of our issues arise from the fact that our procurement guy asks the vendor for a number of ports and then buys them, rather than checking with the technical guys. Suffice to say that we’re sending four 48-port blades back because, well, if we wanted to run the ports at 4Gbps we’d have to disable 24 of the 48 ports. Hey Cisco, 2005 called and they want their shitty bandwidth back. I’m sure these blades are great for hosting providers who promise a lot and count on oversubscription to get by with less, but it doesn’t work for us.