Cohesity Basics – Excluding VMs Using Tags

I’ve been doing some work with Cohesity in our lab and thought it worth covering some of the basic features that I think are pretty neat. In this edition of Cohesity Basics, I thought I’d quickly cover off how to exclude VMs from protection jobs based on assigned tags. In this example I’m using version 6.0.1b_release-20181014_14074e50 (a “feature release”).

 

Process

The first step is to find the VM in vCenter that you want to exclude from a protection job. Right-click on the VM and select Tags & Custom Attributes. Click on Assign Tag.

In the Assign Tag window, click on the New Tag icon.

Assign a name to the new tag, and add a description if that’s what you’re into.

In this example, I’ve created a tag called “COH-Test”, and put it in the “Backup” category.
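If you prefer to script this side of things, the same tag setup can be done with PowerCLI. This is just a rough sketch – the vCenter address and VM name below are made up, and I’m assuming you already have PowerCLI installed and an account that’s allowed to create and assign tags.

Connect-VIServer vcenter.lab.local
# Create the category and the tag (one tag per VM in this category)
New-TagCategory -Name "Backup" -Cardinality Single -EntityType VirtualMachine
New-Tag -Name "COH-Test" -Category "Backup" -Description "Exclude from Cohesity protection jobs"
# Assign the tag to the VM you want excluded
New-TagAssignment -Tag "COH-Test" -Entity (Get-VM -Name "Test-VM01")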

Now go to the protection job you’d like to edit.

Click on the Tag icon on the right-hand side. You can then select the tag you created in vCenter. Note that you may need to refresh your vCenter source for this new tag to be reflected.

When you select the tag, you can choose to Auto Protect or Exclude the VM based on the applied tags.

If you drill in to the objects in the protection job, you can see that the VM I wanted to exclude from this job has been excluded based on the assigned tag.

 

Thoughts

I’ve written enthusiastically about Cohesity’s Auto Protect feature previously. Sometimes, though, you need to exclude VMs from protection jobs. Using tags is a quick and easy way to do this, and it’s something that your virtualisation admin team will be happy to use too.

Getting Started With The Pure Storage CLI

I used to write a lot about how to manage CLARiiON and VNX storage environments with EMC’s naviseccli tool. I’ve been doing some stuff with Pure Storage FlashArrays in our lab and thought it might be worth covering off some of the basics of their CLI. This will obviously be no replacement for the official administration guide, but I thought it might come in useful as a starting point.

 

Basics

Unlike EMC’s CLI, there’s no executable to install – it’s all on the controllers. If you’re using Windows, PuTTY is still a good choice as an ssh client. Otherwise the macOS ssh client does a reasonable job too. When you first set up your FlashArray, a virtual IP (VIP) will have been configured. It’s easiest to connect to the VIP, and Purity then directs your session to whichever controller is currently primary. Note that you can also connect via the physical IP addresses if that’s how you want to do things.

The first step is to log in to the array as pureuser, with the password that you’ve definitely changed from the default one.

login as: pureuser
pureuser@10.xxx.xxx.30's password:
Last login: Fri Aug 10 09:36:05 2018 from 10.xxx.xxx.xxx

Mon Aug 13 10:01:52 2018
Welcome pureuser. This is Purity Version 4.10.4 on FlashArray purearray
http://www.purestorage.com/

“purehelp” is the command to run to list available commands.

pureuser@purearray> purehelp
Available commands:
-------------------
pureadmin
purealert
pureapp
purearray
purecert
pureconfig
puredns
puredrive
pureds
purehelp
purehgroup
purehost
purehw
purelog
pureman
puremessage
purenetwork
purepgroup
pureplugin
pureport
puresmis
puresnmp
puresubnet
puresw
purevol
exit
logout

If you want to get some additional help with a command, you can run “command -h” (or --help).

pureuser@purearray> purevol -h
usage: purevol [-h]
               {add,connect,copy,create,destroy,disconnect,eradicate,list,listobj,monitor,recover,remove,rename,setattr,snap,truncate}
               ...

positional arguments:
  {add,connect,copy,create,destroy,disconnect,eradicate,list,listobj,monitor,recover,remove,rename,setattr,snap,truncate}
    add                 add volumes to protection groups
    connect             connect one or more volumes to a host
    copy                copy a volume or snapshot to one or more volumes
    create              create one or more volumes
    destroy             destroy one or more volumes or snapshots
    disconnect          disconnect one or more volumes from a host
    eradicate           eradicate one or more volumes or snapshots
    list                display information about volumes or snapshots
    listobj             list objects associated with one or more volumes
    monitor             display I/O performance information
    recover             recover one or more destroyed volumes or snapshots
    remove              remove volumes from protection groups
    rename              rename a volume or snapshot
    setattr             set volume attributes (increase size)
    snap                take snapshots of one or more volumes
    truncate            truncate one or more volumes (reduce size)

optional arguments:
  -h, --help            show this help message and exit

There’s also a facility to access the man page for commands. Just run “pureman command” to access it.

Want to see how much capacity there is on the array? Run “purearray list --space”.

pureuser@purearray> purearray list --space
Name        Capacity  Parity  Thin Provisioning  Data Reduction  Total Reduction  Volumes  Snapshots  Shared Space  System  Total
purearray  12.45T    100%    86%                2.4 to 1        17.3 to 1        350.66M  3.42G      3.01T         0.00    3.01T

Need to check the software version or the status of the controllers? Run “purearray list --controller”.

pureuser@purearray> purearray list --controller
Name  Mode       Model   Version  Status
CT0   secondary  FA-450  4.10.4   ready
CT1   primary    FA-450  4.10.4   ready

 

Connecting A Host

To connect a host to an array (assuming you’ve already zoned it to the array), you’d use the following commands.

purehost create hostname
purehost create --wwnlist WWNs hostname
purehost list
purevol connect --host [host] [volume]
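To make that a little more concrete, here’s roughly what it looks like end to end for a single host. The host name, volume name and WWNs below are made up, and I’ve used purehost setattr to add the WWNs after creating the host – check purehost -h on your array if your Purity release does things differently.

purehost create ESX01
purehost setattr --wwnlist 21:00:00:24:ff:4c:4b:01,21:00:00:24:ff:4c:4b:02 ESX01
purevol create --size 1T ESX01-DS01
purevol connect --host ESX01 ESX01-DS01
purehost list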

 

Host Groups

You might need to create a Host Group if you’re running ESXi and want to have multiple hosts accessing the same volumes. Here are the commands you’ll need. Firstly, create the Host Group.

purehgroup create [hostgroup]

Add the hosts to the Host Group (these hosts should already exist on the array)

purehgroup setattr --hostlist host1,host2,host3 [hostgroup]

You can then assign volumes to the Host Group

purehgroup connect --vol [volume] [hostgroup]
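Assuming ESX01 and ESX02 already exist as hosts on the array, a worked example (with made-up host group and volume names) looks something like this.

purehgroup create ESX-Cluster01
purehgroup setattr --hostlist ESX01,ESX02 ESX-Cluster01
purevol create --size 2T ESX-Cluster01-DS01
purehgroup connect --vol ESX-Cluster01-DS01 ESX-Cluster01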

 

Other Volume Operations

Some other neat (and sometimes destructive) things you can do with volumes are listed below.

To resize a volume, use the following commands.

purevol setattr --size 500G [volume]
purevol truncate --size 20GB [volume]

Note that a snapshot is available for 24 hours to roll back if required. This is good if you’ve shrunk a volume to be smaller than the data on it and have consequently munted the filesystem.
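If you do need to roll back, something like the following should get you there. The volume and snapshot names are made up, and the --overwrite option on purevol copy is my assumption for your Purity release – list the snapshots first and check purevol copy -h before running anything in anger.

purevol list --snap Test-Vol01
purevol copy --overwrite Test-Vol01.rollback Test-Vol01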

When you destroy a volume it immediately becomes unavailable to hosts, but remains on the array for 24 hours. Note that you’ll need to disconnect the volume from any hosts it’s connected to first.

purevol disconnect [volume] --host [hostname]
purevol destroy [volume]

If you’re running short of capacity, or are just curious about when a deleted volume will disappear, use the following command.

purevol list --pending
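And if it turns out you destroyed the wrong volume, you can bring it back within that 24-hour window with the recover operation (volume name made up).

purevol recover Test-Vol01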

If you need the capacity back immediately, the deleted volume can be eradicated with the following command.

purevol eradicate [volume]

 

Further Reading

The Pure CLI is obviously not a new thing, and plenty of bright folks have already done a few articles about how you can use it as part of a provisioning workflow. This one from Chadd Kenney is a little old now but still demonstrates how you can bring it all together to do something pretty useful. You can obviously extend that to do some pretty interesting stuff, and there’s solid parity between the GUI and CLI in the Purity environment.

It seems like a small thing, but the fact that there’s no need to install an executable is a big thing in my book. Array vendors (and infrastructure vendors in general) insisting on installing some shell extension or command environment is a pain in the arse, and should be seen as an act of hostility akin to requiring Java to complete simple administration tasks. The sooner we get everyone working with either HTML5 or simple ssh access the better. In any case, I hope this was a useful introduction to the Purity CLI. Check out the Administration Guide for more information.

Rubrik Basics – Role-based Access Control

I’ve been doing some work with Rubrik in our lab and thought it worth covering some of the basic features that I think are pretty neat. In this edition of Rubrik Basics, I thought I’d quickly cover off how to get started with the Role Based Access Control (RBAC) feature.

 

Roles

The concept of RBAC is not a new one. It is, however, one of the first things that companies with more than one staff member ask for when they have to manage infrastructure. Rubrik uses the concept of Roles to control the level of access users have to the environment. The available roles are as follows:

  • Administrator role – Full access to all Rubrik operations on all objects;
  • End User role – For assigned objects: browse snapshots, recover files and Live Mount; and
  • No Access role – Cannot log in to the Rubrik UI and cannot make REST API calls.

The End User role has a set of privileges that align with the requirements of a backup operator role:

  • Download data from backups – Data download only from assigned object types:
      • vSphere virtual machines
      • Hyper-V virtual machines
      • AHV virtual machines
      • Linux & Unix hosts
      • Windows hosts
      • NAS hosts
      • SQL Server databases
      • Managed volumes
  • Live Mount or Export virtual machine snapshot – Live Mount or Export a snapshot only from specified virtual machines and only to specified target locations.
  • Export data from backups – Export data only from specified source objects.
  • Restore data over source – Write data from backups to the source location, overwriting existing data, only for assigned objects, and only when ‘Allow overwrite of original’ is enabled for the user account or group account.

The good news is that Rubrik supports local authentication as well as Active Directory. You can then tie these roles to particular groups within your organisation. You can have more than one domain that you use for authentication, but I’ll cover that in a future post on multi-tenancy.

I don’t believe that the ability to create custom roles is present (at least in the UI). I’m happy for people from Rubrik to correct me if I’ve gotten that wrong.

 

Configuration

Configuring access to the Rubrik environment for users is fairly straightforward. In this example I’ll be giving my domain account access to the Brik as an administrator. To get started, click on the Gear icon in the UI and select Users (under Access Management).

I don’t know who Grant Authorization is in real life, but he’s the guy who can help you out here (my dad jokes are both woeful and plentiful – just ask my children).

In this example I’m granting access to a domain user.

This example also assumes that you’ve added the domain to the appliance in the first place (and note that you can add multiple domains). In the dropdown box, select the domain the user resides in.

You can then search for a name. In this example, the user I’m searching for is danf. Makes sense, if you think about it.

Select the user account and click on Continue.

By default users are assigned No Access. If you have one of these accounts, the UI will let you enter a username and password and then kick you back to the login screen.

If I assign the user the End User role, I can assign access to various objects in the environment. Note that I can also provide access to overwrite original files if required. This is disabled by default.

In this example, however, I’m providing my domain account with full access via the Administrator role. Click on Assign to continue.

I can now log in to the Rubrik UI with my domain user account and do things.

And that’s it. In a future post I’ll be looking in to multi-tenancy and fun things you can do with organisations and multiple access levels.

Cohesity Basics – Cloud Tier

I’ve been doing some work with Cohesity in our lab and thought it worth covering some of the basic features that I think are pretty neat. In this edition of Cohesity Basics, I thought I’d quickly cover off how to get started with the “Cloud Tier” feature. You can read about Cohesity’s cloud integration approach here. El Reg did a nice write-up on the capability when it was first introduced as well.

 

What Is It?

Cohesity have a number of different technologies that integrate with the cloud, including Cloud Archive and Cloud Tier. With Cloud Archive you can send copies of snapshots up to the cloud to keep as a copy separate to the backup data you might have replicated to a secondary appliance. This is useful if you have some requirement to keep a monthly or six-monthly copy somewhere for compliance reasons. Cloud Tier is an overflow technology that allows you to have cold data migrated to a cloud target when the capacity of your environment exceeds 80%. Note that “coldness” is defined in this instance as older than 60 days. That is, you can’t just pump a lot of data into your appliance to see how this works (trust me on that). The coldness level is configurable, but I recommend you engage with Cohesity support before you go down that track. It’s also important to note that once you turn on Cloud Tier for a View Box, you can’t turn it off again.

 

How Do I?

Here’s how to get started in 10 steps or less. Apologies if the quality of some of these screenshots is not great. The first thing to do is register an External Target on your appliance. In this example I’m running version 5.0.1 of the platform on a Cohesity Virtual Edition VM. Click on Protection – External Target.

Under External Targets you’ll see any External Targets you’ve already configured. Select Register External Target.

You’ll need to give it a name and choose whether you’re using it for Archival or Cloud Tier. This choice also impacts some of the types of available targets. You can’t, for example, configure a NAS or QStar target for use with Cloud Tier.

Selecting Cloud Tier will provide you with more cloudy targets, such as Google, AWS and Azure.

 

In this example, I’ve selected S3 (having already created the bucket I wanted to test with). You need to know the Bucket name, Region, Access Key ID and your Secret Access Key.
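If you haven’t created the bucket yet, the AWS CLI will do the job. The bucket name and region below are made up, and you’ll want a dedicated IAM user with appropriate S3 permissions rather than your root account keys.

aws s3 mb s3://my-cohesity-cloudtier --region ap-southeast-2
aws s3 ls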

If you have it all correct, you can click on Register and it will work. If you’ve provided the wrong credentials, it won’t work. You then need to enable Cloud Tier on the View Box. Go to Platform – Cluster.

Click on View Boxes and then click on the three dots on the right to Edit the View Box configuration.

You can then toggle Cloud Tier and select the External Target you want to use for Cloud Tier.

Once everything is configured (and assuming you have some cold data to move to the cloud and your appliance is over 80% full) you can click on the cluster dashboard and you’ll see an overview of Cloud Tier storage in the Storage part of the overview.

 

 

Thoughts?

All the kids are getting into cloud nowadays, and Cohesity is no exception. I like this feature because it can help with managing capacity on your on-premises appliance, particularly if you’ve had a sudden influx of data into the environment, or you have a lot of old data that you likely won’t be accessing. You still need to think about your egress charges (if you need to get those cold blocks back) and you need to think about what the cost of that S3 bucket (or whatever you’re using) really is. I don’t see the default coldness level being a problem, as you’d hope that you sized your appliance well enough to cope with a certain amount of growth.

Features like this demonstrate both a willingness on the part of Cohesity to embrace cloud technologies, as well as a focus on ease of use when it comes to reasonably complicated activities like moving protection data to an alternative location. My thinking is that you wouldn’t necessarily want to find yourself in the position of having to suddenly shunt a bunch of cold data to a cloud location if you can help it (although I haven’t done the maths on which is the better option), but it’s nice to know that the option is available and easy enough to set up.

Cohesity Basics – Auto Protect

I’ve been doing some work with Cohesity in our lab and thought it worth covering some of the basic features that I think are pretty neat. In this edition of Cohesity Basics, I thought I’d quickly cover off the “Auto Protect” feature. If you read their white paper on data protection, you’ll find the following line: “As new virtual machines are added, they are auto discovered and included in the protection policy that meets the desired SLAs”. It seems like a pretty cool feature, and was introduced in version 4.0. I wanted to find out a bit more about how it works.

 

What Is It?

Auto Protect will “protect new VMs that are added to a selected parent Object (such as a Datacenter, Folder, Cluster or Host)”. The idea behind this is that you can add a source and have Cohesity automatically protect all of the VMs in a folder, cluster, etc. The cool thing is that it will also protect any new VMs added to that source.

When you’re adding Objects to a Protection Job, you can select what to auto protect. In the screenshot below you can see that the Datacenter in my vCenter has Auto Protect turned off.

The good news is that you can explicitly exclude Objects as well. Here’s what the various icons mean.

[Image courtesy of Cohesity]

 

What Happens?

When you create a Protection Job in Cohesity you add Objects to the job. If you select to Auto Protect an Object, anything under that Object will automatically be protected. Every time the Protection Job runs, if the Object hierarchy has been refreshed on the Cohesity Cluster, new VMs are also backed up, even though they haven’t been manually added to the Protection Job. There are two ways that the Object hierarchy gets refreshed. It is automatically done every 4 hours by the cluster. If you’re in a hurry though, you can do it manually. Go to Protection -> Sources and click on the Source you’d like to refresh. There’s a refresh button to click on and you’ll see your new Objects showing up.

 

Why Wouldn’t You?

As part of my testing, I’ve been creating “catchall” Protection Jobs and adding all the VMs in the environment into the jobs. But we have some VMware NSX Controller VMs in our lab, and VMware “only supports backing up the NSX Edge and controller through the NSX Manager”. Not only that, but it simply won’t work.

In any case, you can use FTP to back up your NSX VMs if you really feel like that’s something you want to do. More info on that is here. You also want to be careful that you’re not backing up stuff you don’t need to, such as clones and odds and sods. Should I try protecting the Cohesity Virtual Edition appliance VM? I don’t know about that …

 

Thoughts

I generally prefer data protection configurations that “protect everything and exclude as required”. While Auto Protect is turned off by default, it’s simple enough to turn on when you get started. And it’s a great feature, particularly in dynamic environments where there’s no automation of data protection when new workloads are provisioned (a problem for another time). Hat tip to my Cohesity SE Pete Marfatia for pointing this feature out to me.

VMware – vSphere Basics – Create a Custom Role

I’ve been evaluating a data protection solution in the lab recently and wanted to create a custom role in vCenter for the solution to use. It’s a basic thing, but if you don’t do it often it might not be that obvious where to click. The VMware documentation site has more information on creating a custom role as well. Why would you do this? In the same way it’s a bad idea to give every service Domain Administrator privileges, it’s also a bad idea to give your data protection solutions elevated privileges in your environment. If you’re into that kind of thing, read this guidance on roles and permissions too. In this example, I created a “CohesityTest” user as a domain user in Active Directory. I then wanted to assign that user to a custom role in vCenter and assign it certain privileges. In this example I’m using vCenter 6.5 with the Web Client. The process is as follows.

Go to the Home screen in vCenter and click on “Administration”.

In this example, I’ve already created a Role called Cohesity (following the instructions above) and assigned privileges to that Role.

Click on “Global Permissions” and then click on the green plus sign.

I want to add a user to a role. Click on “Add”.

The user I want to add is a domain user, so I use the drop down box to select the domain the user resides in.

Typing “coh” into the search field yields the only user with those letters in their name.

Once the user is selected, you can click on “Add” and then “OK”.

Make sure the user has the appropriate Role assigned. In this example, I’m assigning the CohesityTest user to the Cohesity Role and propagating these changes to child objects. Click “OK”. And then you’re done.

To check your role has the correct privileges, click on “Roles”, “Role Name”, and then “Privileges” and you can expand the items to check the correct privileges are assigned.
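If you’d rather do the role and permission assignment from PowerCLI, a rough equivalent is below. The privilege IDs are just examples rather than the full set a data protection product needs, the vCenter address and domain are made up, and note that New-VIPermission applies the permission at the root of the vCenter inventory rather than as a Global Permission.

Connect-VIServer vcenter.lab.local
# Create the role with a couple of example privileges
New-VIRole -Name "Cohesity" -Privilege (Get-VIPrivilege -Id "VirtualMachine.State.CreateSnapshot","VirtualMachine.State.RemoveSnapshot")
# Assign the domain user to the role at the top of the inventory and propagate to child objects
New-VIPermission -Entity (Get-Folder -NoRecursion) -Principal "LAB\CohesityTest" -Role "Cohesity" -Propagate:$true
# Check the privileges assigned to the role
(Get-VIRole -Name "Cohesity").PrivilegeList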

Once I’d done this I went back and re-added the vCenter to the Cohesity environment using the CohesityTest user and I was good to go.

Dell Compellent – Storage provisioning with CompCU.jar

I covered getting started with the CompCU.jar tool here. This post is a quick one that covers provisioning storage on the Compellent and then presenting it to hosts. In this example, I create a 400GB volume named Test_Volume1 and place it in the iLAB_Gold2 folder.

E:\CU060301_002A>java -jar CompCU.jar -defaultname saved_default.cli -c "volume create -name "Test_Volume1" -size 400g -folder iLAB_Gold2"
Compellent Command Utility (CompCU) 6.3.1.2
=================================================================================================
User Name: Admin
Host/IP Address: 192.168.0.10
Single Command: volume create -name Test_Volume1 -size 400g -folder iLAB_Gold2
=================================================================================================
Connecting to Storage Center: 192.168.0.10 with user: Admin
java.lang.IllegalStateException: TrustManagerFactoryImpl is not initialized
Running Command: volume create -name Test_Volume1 -size 400g -folder iLAB_Gold2
Creating Volume using StorageType 1: storagetype='Assigned-Redundant-4096', redundancy=Redundant, pagesize=4096, diskfolder=Assigned.
Successfully created Volume 'Test_Volume1'
Successfully finished running Compellent Command Utility (CompCU) application.

Here’s what it looks like now.


Notice that Test_Volume1 has been created but is inactive – it needs to be mapped to a server before it can be brought online.

E:\CU060301_002A>java -jar CompCU.jar -defaultname saved_default.cli -c "volume map -name 'Test_Volume1' -server 'iLAB_Gold2'"
Compellent Command Utility (CompCU) 6.3.1.2
=================================================================================================
User Name: Admin
Host/IP Address: 192.168.0.10
Single Command: volume map -name 'Test_Volume1' -server 'iLAB_Gold2'
=================================================================================================
Connecting to Storage Center: 192.168.0.10 with user: Admin
java.lang.IllegalStateException: TrustManagerFactoryImpl is not initialized
Running Command: volume map -name 'Test_Volume1' -server 'iLAB_Gold2'
Successfully mapped Volume 'Test_Volume1' to Server 'iLAB_Gold2'
Successfully finished running Compellent Command Utility (CompCU) application.

Wouldn’t it make more sense to create and map the volume at the same time? Yes, yes it would. Here’s another example where I present the volume to a folder of servers.

E:\CU060301_002A>java -jar CompCU.jar -defaultname saved_default.cli -c "volume create -name "Test_Volume2" -size 400g -folder iLAB_Gold2 -server iLAB_Gold2"
Compellent Command Utility (CompCU) 6.3.1.2
=================================================================================================
User Name: Admin
Host/IP Address: 192.168.0.10
Single Command: volume create -name Test_Volume2 -size 400g -folder iLAB_Gold2 -server iLAB_Gold2
=================================================================================================
Connecting to Storage Center: 192.168.0.10 with user: Admin
java.lang.IllegalStateException: TrustManagerFactoryImpl is not initialized
Running Command: volume create -name Test_Volume2 -size 400g -folder iLAB_Gold2 -server iLAB_Gold2
Creating Volume using StorageType 1: storagetype='Assigned-Redundant-4096', redundancy=Redundant, pagesize=4096, diskfolder=Assigned.
Successfully mapped Volume 'Test_Volume2' to Server 'iLAB_Gold2'
Successfully created Volume 'Test_Volume2', mapped it to Server 'iLAB_Gold2' on Controller 'SN 22641'
Successfully finished running Compellent Command Utility (CompCU) application.

Note that these commands don’t specify replays. If you want replays configured you should use the -replayprofile option or manually create replays with the replay create command.
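As a rough sketch, manually creating a replay with a one-day expiry might look like the command below. I haven’t verified the -volume and -expire parameters against the CompCU command reference, so treat them as assumptions and run "replay create -h" (or check the user guide) before relying on it.

E:\CU060301_002A>java -jar CompCU.jar -defaultname saved_default.cli -c "replay create -volume 'Test_Volume1' -expire 1440"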

Dell Compellent – Getting started with CompCU.jar

CompCU.jar is the Compellent Command Utility. You can download it from Compellent’s support site (registration required). This is a basic article that demonstrates how to get started.

The first thing you’ll want to do is create an authentication file that you can re-use, similar to what you do with EMC’s naviseccli tool. The file I specify is saved in the directory I’m working from, and the Storage Center IP is the cluster IP, not the IP address of the controllers.

E:\CU060301_002A>java -jar CompCU.jar -default -defaultname saved_default -host StorageCenterIP -user Admin -password SCPassword

Now you can run commands without having to input credentials each time. I like to output to a text file, although you’ll notice that CompCU also dumps output on the console at the same time. The “system show” command provides a brief summary of the system configuration.

E:\CU060301_002A>java -jar CompCU.jar -defaultname saved_default.cli -c "system show -txt 'outputfile.txt'"
Compellent Command Utility (CompCU) 6.3.1.2
=================================================================================================
User Name: Admin
Host/IP Address: 192.168.0.10
Single Command: system show -txt 'outputfile.txt'
=================================================================================================
Connecting to Storage Center: 192.168.0.10 with user: Admin
java.lang.IllegalStateException: TrustManagerFactoryImpl is not initialized
Running Command: system show -txt 'outputfile.txt'
SerialNumber      Name                             ManagementIP     Version          OperationMode  PortsBalanced  MailServer           BackupMailServer
----------------- -------------------------------- ---------------- ---------------- -------------- -------------- -------------------- --------------------
22640             Compellent1                      192.168.0.10     6.2.2.15         Normal         Yes            192.168.0.200        192.168.0.201
Save to Text (txt) File: outputfile.txt
Successfully finished running Compellent Command Utility (CompCU) application.

Notice I get java errors every time I run this command. I think that’s related to an expired certificate, but I need to research that further. Another useful command is “storagetype show“. Here’s one I prepared earlier.

E:\CU060301_002A>java -jar CompCU.jar -defaultname saved_default.cli -c "storagetype show -txt 'storagetype.txt'"
Compellent Command Utility (CompCU) 6.3.1.2
=================================================================================================
User Name: Admin
Host/IP Address: 192.168.0.10
Single Command: storagetype show -txt 'storagetype.txt'
=================================================================================================
Connecting to Storage Center: 192.168.0.10 with user: Admin
java.lang.IllegalStateException: TrustManagerFactoryImpl is not initialized
Running Command: storagetype show -txt 'storagetype.txt'
Index  Name                             DiskFolder           Redundancy           PageSize   PageSizeBlocks  SpaceUsed            SpaceUsedBlocks      SpaceAllocated       SpaceAllocatedBlocks
------ -------------------------------- -------------------- -------------------- ---------- --------------- -------------------- -------------------- -------------------- --------------------
1      Assigned-Redundant-4096          Assigned             Redundant            2.00 MB    4096            1022.51 GB           2144350208           19.67 TB             42232291328
Save to Text (txt) File: storagetype.txt
Successfully finished running Compellent Command Utility (CompCU) application.
E:\CU060301_002A>

There’s a bunch of useful things you can do with CompCU, particularly when it comes to creating volumes and allocating them to hosts, for example. I’ll cover these in the next little while. In the meantime, I hope this was a useful introduction to CompCU.

Cisco MDS 9XXX Basics – Part 1

So we’ve finally started delivering on the project that I’ve been working on for the last 12 – 18 months. It’s fun to see my detailed designs turn into running infrastructure.

As part of this, I’ve been doing some configuration of some new Cisco 9513 and 9124e switches for our fabric. I have every intention of writing a downloadable article with some of the basic stuff, but I thought I’d do a few, smaller articles for my own reference more than anything else.

Now, most Cisco nerds will already know this stuff, but for someone like me who cut their teeth on Brocade Fabric OS, it’s a little different.

To connect to a 9124e (Cisco’s blade switch), I recommend using the HP OA’s serial connection.

Connect to the active OA via serial, log in using your normal credentials and run

connect interconnect 3

This will connect you to the serial console of the first 9124e switch in the chassis. This assumes that you have other devices in bays 1 and 2, such as Cisco 3120s, or whatever.

If this is the first time you’ve connected to the switch, or if you’ve not configured it yet, you’ll get to a very useful first setup screen.

Press [Enter] to display the switch console:
  Enter the password for "admin":
  Confirm the password for "admin":

         ---- Basic System Configuration Dialog ----

This setup utility will guide you through the basic configuration of
the system. Setup configures only enough connectivity for management
of the system.

Please register Cisco MDS 9000 Family devices promptly with your
supplier. Failure to register may affect response times for initial
service calls. MDS devices must be registered to receive entitled
support services.

Press Enter at anytime to skip a dialog. Use ctrl-c at anytime
to skip the remaining dialogs.

Would you like to enter the basic configuration dialog (yes/no): yes

 

  Create another login account (yes/no) [n]:

  Configure read-only SNMP community string (yes/no) [n]:

  Configure read-write SNMP community string (yes/no) [n]:

  Enter the switch name : FCswitch1

  Continue with Out-of-band (mgmt0) management configuration? (yes/no) [y]:

    Mgmt0 IPv4 address : 192.168.0.10

    Mgmt0 IPv4 netmask : 255.255.255.0

  Configure the default gateway? (yes/no) [y]:

    IPv4 address of the default gateway : 192.168.0.254

  Configure advanced IP options? (yes/no) [n]:

  Enable the ssh service? (yes/no) [y]:

    Type of ssh key you would like to generate (dsa/rsa) [rsa]:

    Number of rsa key bits <768-2048> [1024]:

  Enable the telnet service? (yes/no) [n]:

  Enable the http-server? (yes/no) [y]:

 Configure clock? (yes/no) [n]:

 Configure timezone? (yes/no) [n]:

 Configure summertime? (yes/no) [n]:

  Configure the ntp server? (yes/no) [n]:

  Configure default switchport interface state (shut/noshut) [shut]:

  Configure default switchport trunk mode (on/off/auto) [on]:

  Configure default switchport port mode F (yes/no) [n]:

  Configure default zone policy (permit/deny) [deny]:

  Enable full zoneset distribution? (yes/no) [n]:

  Configure default zone mode (basic/enhanced) [basic]:

The following configuration will be applied:
  password strength-check
  switchname FCswitch1
  interface mgmt0
    ip address 192.168.0.10 255.255.255.0
    no shutdown
  ip default-gateway 192.168.0.254
  ssh key rsa 1024 force
  feature ssh
  no feature telnet
  feature http-server
  system default switchport shutdown
  system default switchport trunk mode on
  no system default zone default-zone permit
  no system default zone distribute full
  no system default zone mode enhanced

Would you like to edit the configuration? (yes/no) [n]:

Use this configuration and save it? (yes/no) [y]:

At this point, the switch does a copy run start and reboots. For some reason we’ve been getting this error.

 Error: There was an error executing at least one of the commands
Please verify the following log for the command execution errors.
Disabling ssh: as its enabled right now:
 ssh: Cannot disable both telnet and SSH

I’ve been ignoring this error. So, too, has NX-OS. You’ll then see the following:

Would you like to save the running-config to startup-config? (yes/no) [n]: y

[########################################] 100%

The switch then reboots and you can monitor it for any errors. Once you’re satisfied with the config, use CTRL-SHIFT-_ and press d to disconnect from the 9124e terminal. The process is identical for the Cisco MDS 9513, except for the bit about it being a blade switch :)
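Once the switch is back up and you’ve logged in again, a few standard NX-OS commands are handy for a quick sanity check – nothing exotic, just the usual suspects, and don’t forget to save again if you change anything.

show module
show interface brief
show flogi database
show zoneset active
copy running-config startup-config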