EMC – naviseccli – checking your iSCSI ports are running at the correct speed

It’s been a while since I wrote about naviseccli and I admit I’ve missed it. I once wrote about using naviseccli to identify MirrorView ports on a CLARiiON array. Normally the MirrorView port is consistently located, but in that example we’d upgraded from a CX3-80 to a CX4-960 and it was in a different spot. Oh how we laughed when we realised what the problem was. Anyway, we’ve been doing some work on an ever so slightly more modern VNX5300 and needed to confirm that some newly installed iSCSI SLICs were operating at the correct speed. (Note that these commands were run from the Control Station.)

The first step is to list the ports

$ navicli -h A_VNXSP connection -getport

SP:  A
Port ID:  8
Port WWN:  iqn.1992-04.com.emc:cx.cetv2223700017.a8
iSCSI Alias:  0017.a8
IP Address:  192.168.0.13
Subnet Mask:  255.255.255.0
Gateway Address:  192.168.0.254
Initiator Authentication:  false

SP:  A
Port ID:  9
Port WWN:  iqn.1992-04.com.emc:cx.cetv2223700017.a9
iSCSI Alias:  0017.a9

SP:  A
Port ID:  10
Port WWN:  iqn.1992-04.com.emc:cx.cetv2223700017.a10
iSCSI Alias:  017.a10

SP:  A
Port ID:  11
Port WWN:  iqn.1992-04.com.emc:cx.cetv2223700017.a11
iSCSI Alias:  017.a11

SP:  B
Port ID:  8
Port WWN:  iqn.1992-04.com.emc:cx.cetv2223700017.b8
iSCSI Alias:  0017.b8
IP Address:  192.168.0.14
Subnet Mask:  255.255.255.0
Gateway Address:  192.168.0.254
Initiator Authentication:  false

SP:  B
Port ID:  9
Port WWN:  iqn.1992-04.com.emc:cx.cetv2223700017.b9
iSCSI Alias:  0017.b9

SP:  B
Port ID:  10
Port WWN:  iqn.1992-04.com.emc:cx.cetv2223700017.b10
iSCSI Alias:  017.b10

SP:  B
Port ID:  11
Port WWN:  iqn.1992-04.com.emc:cx.cetv2223700017.b11
iSCSI Alias:  017.b11

Once you’ve done that, you can list the port speed for a particular port

$ navicli -h A_VNXSP connection -getport -sp a -portid 8 -speed
SP:  A
Port ID:  8
Port WWN:  iqn.1992-04.com.emc:cx.cetv2223700017.a8
iSCSI Alias:  0017.a8
IP Address:  192.168.0.13
Subnet Mask:  255.255.255.0
Gateway Address:  192.168.0.254
Initiator Authentication:  false
Port Speed:  1000 Mb
Auto-Negotiate:  Yes
Available Speeds:  10 Mb
-               :  100 Mb
-               :  1000 Mb
-               :  Auto

If you have a lot of ports to check this may not be the most efficient way to do it (ioportconfig may be more sensible), but if your network team is reporting an issue with one particular port, this is a great way to narrow it down.
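If you do want to sweep every port in one pass, a small loop and filter will do it. This is a sketch: it assumes navicli is on the Control Station PATH and that A_VNXSP resolves, as in the examples above.

```shell
# Sketch: check the negotiated speed of every iSCSI port in one pass.
# Against a live array you would run something like:
#   for sp in a b; do
#     for port in 8 9 10 11; do
#       navicli -h A_VNXSP connection -getport -sp $sp -portid $port -speed
#     done
#   done | parse_speeds
#
# parse_speeds just pulls the SP, port ID and negotiated speed out of the
# -getport output format shown above:
parse_speeds() {
  awk '/^SP:/        {sp=$2}
       /^Port ID:/   {port=$3}
       /^Port Speed:/{print sp, port, $3, $4}'
}

# Demonstrated here on a captured fragment of the output:
parse_speeds <<'EOF'
SP:  A
Port ID:  8
Port Speed:  1000 Mb
EOF
# -> A 8 1000 Mb
```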

EMC – Next-Generation VNX – Data In Place Upgrades

Approximately 4 or 500 years ago, I spent a number of nights in various data centres around the country upgrading customers’ CLARiiON arrays from CX200s to CX500s, CX300s to CX3-20s, and so on. The neat thing about the CLARiiON was that EMC had a pretty reasonable way of doing data in place (DIP) upgrades, including across generations if required. With the introduction of the VNX, that changed. Primarily because of the switch from FC to SAS on the back-end. And with the “Next-Generation” VNX (VNX2), you also can’t go from VNX to VNX2. Which some people have been understandably unhappy about. The procedure hasn’t changed much over the years, and you can read Rob’s post here for a pretty thorough look at what’s involved.

So why would you want to do this anyway? Especially given that, if you’re upgrading a VNX5200 for example, you’ve probably only had the array in operation for a few years. Well, requirements change, companies grow, people need more horsepower. Sometimes EMC makes it a commercially viable option to do a DIP upgrade rather than replace the array with another one. There are a bunch of reasons.

I don’t want to go into exactly what the steps are, as your friendly EMC service folk or partner will be able to go through that with you, but I thought it might be an idea to share a few things to know prior to launching into one of these procedures (or even making the decision to upgrade in this fashion).

The supported source systems include:

  • VNX5200;
  • VNX5400;
  • VNX5600; and
  • VNX5800.

Note that you cannot convert a VNX7600, nor can you go from VNX to VNX2 (as I mentioned before). Also, the VNX8000 can’t be a source system, because that’s already as big as the VNX goes.

Supported targets for upgrade include:

  • VNX5400;
  • VNX5600;
  • VNX5800; and
  • VNX7600.

You can’t go to a VNX8000. You can also upgrade the type of array as follows:

  • Block to block;
  • File to file; and
  • Unified to unified.

You can’t convert a block system directly to a higher-performing unified system in a single step. You can, however, do the block conversion first, and then a block-to-unified upgrade. A DIP conversion generally takes about six hours to complete. As always, if you’re considering this approach, talk to EMC about it.

 

EMC – Next-Generation VNX – Block Deduplication Caveats

Ever since the VNX2 was announced, customers have asked me about using deduplication with their configs. I did an article on it around the time of the product announcement but have been meaning to talk a bit more about it for some time. But before I do, check out Joel Cason’s great post on this. Anyway, here’s a brief article listing some of the caveats and things to look out for with block deduplication. A few of my clients have used this feature in the field, and have learnt the hard way that if you don’t follow EMC’s guidelines, you may have a sub-optimal experience. Most of the information here has been taken from the “EMC VNX2 Deduplication and Compression” white paper, which can be downloaded here.

 

  • If you’re running a workload with more than 30% writes, compression and deduplication may be a problem. EMC state that, “[f]or applications requiring consistent and predictable performance, EMC recommends using Thick pool LUNs or Classic LUNs. If Thin LUN performance is not acceptable, then do not use Block Deduplication”. I can’t stress this enough – know your workload!
  • Block deduplication is done on a per pool LUN basis. EMC recommends that deduplication be enabled at the time of LUN creation. If you enable it on an existing LUN, the LUN is migrated into the deduplication container using a background process. The data must reside in the container before the deduplication process can run on the dataset.
  • There is only one deduplication container per storage pool. This is where your deduplicated data is stored. When a deduplication container is created, the SP owning the container needs to be determined. The container owner is matched to the Allocation Owner of the first deduplicated LUN within the pool. As a result of this process, EMC recommends that all LUNs with Block Deduplication enabled within the same pool should be owned by the same SP. This can be a big problem in smaller environments where you’ve only deployed one pool.
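The last caveat above is easy to audit: every deduplicated LUN in a pool should report the same Allocation Owner. A minimal sketch follows; the “Allocation Owner” field name is taken from typical `naviseccli lun -list` output, so treat it as an assumption and check it against what your own array actually returns.

```shell
# Sketch: count the distinct Allocation Owners across a set of LUNs.
# If the count is 1, all deduplicated LUNs in the pool are owned by the
# same SP, as EMC recommends. On a live array you would pipe the output of
#   naviseccli -h SP-IP lun -list
# (filtered to the deduplicated LUNs in the pool) into count_owners.
count_owners() {
  awk -F': *' '/^Allocation Owner/{print $2}' | sort -u | wc -l
}

# Demonstrated on a captured fragment:
n=$(count_owners <<'EOF'
Allocation Owner:  SP A
Allocation Owner:  SP A
EOF
)
[ "$n" -eq 1 ] && echo "OK: one SP owns all deduplicated LUNs"
```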

 

There’s a bit more to consider, particularly if you’re looking at leveraging compression as well. But if you can’t get past these first few considerations, it’s likely that the VNX2’s version of deduplication on primary storage is not for you. Read the whitepaper – it’s readily accessible and fairly clear about what can and can’t be achieved within the constraints of the product.

EMC – Using naviseccli to configure a VNX domain

The concept of domains has been with CLARiiON and (later) VNX arrays since the early part of the 21st century. The configuration is fairly simple, and, in keeping with the idea that you can do anything with naviseccli, I thought I’d do a quick post on using naviseccli to join SPs to a domain. This assumes you have security set up in your naviseccli environment, and that you know the IPs of the SPs you’re trying to add to the domain.

You can then set the master node for a domain with this command. Note that the nominated node can’t be a member of another domain at the time.

naviseccli -h SPA-IP-Address domain -setmaster SPB-IP-Address
 WARNING: You are about to set the following node as the master of the domain: SPB-IP-Address
 Proceed? (y/n) y

If a node is a problem, or you’re about to remove an array from your environment, it’s a good idea to remove it from the domain before you rip it out of the rack.

naviseccli -h SPA-IP-Address domain -remove SPA-IP-Address
 WARNING: You are about to remove the following node from the domain: SPA-IP-Address
 Proceed? (y/n) y

You may also wish to add another couple of nodes, particularly if you have a number of arrays in the environment.

naviseccli -h SPB-IP-Address domain -add SPA-IP-Address
 WARNING: You are about to add the following node to the domain: SPA-IP-Address
 Proceed? (y/n) y
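After changing domain membership it’s worth confirming the result with `domain -list` against one of the SPs. The output fragment below is illustrative only – the exact fields vary between releases – but picking out the master node is a quick sanity check either way.

```shell
# Sketch: pull the master node's IP out of `naviseccli domain -list`
# output. On a live array you would pipe in:
#   naviseccli -h SPA-IP-Address domain -list
# The sample below is an illustrative fragment, not verbatim output.
list_master() {
  grep -B2 '(Master)' | awk -F': *' '/^IP Address/{print $2}'
}

list_master <<'EOF'
Node:        APM00000000001
IP Address:  SPB-IP-Address
(Master)
Node:        APM00000000002
IP Address:  SPA-IP-Address
EOF
# -> SPB-IP-Address
```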

And that’s it. I recommend you check out EMC’s white paper – Domain Management with EMC Unisphere for VNX (p/n h8853.4) – for more information on VNX domain management.

EMC – VNX – Slow Disk Rebuild Times

I’ve been a bit behind on my VNX OE updates, and have only recently read docu59127_VNX-Operating-Environment-for-Block-05.33.000.5.102-and-for-File-8.1.6.101,-EMC-Unisphere-1.3.3.1.0096-Release-Notes covering VNX OE 5.33…102. Checking out the fixed problems, I noticed the following item.

VNX_OE_RN

The problem, you see, came to light some time ago when a few of our (and no doubt other) VNX2 customers started having disk failures on reasonably busy arrays. EMC have a KB on the topic on the support site – VNX2 slow disk rebuild speeds with high host I/O (000187088). To quote EMC “The code has been written so that the rebuild process is considered a lower priority than the Host IO. The rebuild of the new drive will take much longer if the workload from the hosts are high”. Which sort of makes sense, because host I/O is a pretty important thing. But, as a number of customers pointed out to EMC, there’s no point prioritising host I/O if you’re in jeopardy of having a data unavailable or data loss event because your private RAID groups have taken so long to rebuild.

Previously, the solution was to “[r]educe the amount of host I/O if possible to increase the speed of the drive rebuild”. Now, however, updated code comes to the rescue. So, if you’re running a VNX2, upgrade to the latest OE if you haven’t already.
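Either way, if you suspect you’re being bitten by this, you can keep an eye on rebuild progress from the CLI. `getdisk` reports a “Prct Rebuilt” figure per disk (the field name here is from memory – verify it against your own array’s output before scripting around it).

```shell
# Sketch: poll rebuild progress for a disk. On a live array you would run
# something like:
#   naviseccli -h SP-IP getdisk 0_0_5 -rb
# and filter the output. pct_rebuilt pulls out the percentage:
pct_rebuilt() {
  awk -F': *' '/^Prct Rebuilt/{print $2}'
}

# Demonstrated on an illustrative fragment:
pct_rebuilt <<'EOF'
Bus 0 Enclosure 0  Disk 5
Prct Rebuilt:  42
EOF
# -> 42
```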

 

 

EMC – vVNX – A Brief Introduction

A few people have been asking me about EMC’s vVNX product, so I thought I’d share a few thoughts, feelings and facts. This isn’t comprehensive by any stretch, and the suitability of this product for use in your environment will depend on a whole shedload of factors, most of which I won’t be going into here. I do recommend you check out the “Introduction to the vVNX Community Edition” white paper as a starting point. Chad, as always, has a great post on the subject here.

 

Links

Firstly, here are some links that you will probably find useful:

When it comes time to license the product, you’ll need to visit this page.

vVNX_license

 

Hardware Requirements

A large number of “software-defined” products have hardware requirements, and the vVNX is no different. You’ll need to be running VMware vSphere 5.5 or later to get this running too. I haven’t tried this with Fusion yet.

  • Hardware – Processor: Xeon E5 Series Quad/Dual Core CPU 64-bit x86 Intel 2 GHz (or greater);
  • Hardware – Memory: 16GB (minimum);
  • Hardware – Network: 2×1 GbE or 2×10 GbE;
  • Hardware – RAID (for Server DAS): Xeon E5 Series Quad/Dual Core CPU 64-bit x86 Intel 2 GHz (or greater);
  • Virtual – Processor Cores: 2 (2GHz+);
  • Virtual – System Memory: 12GB; and
  • Virtual – Network Adapters: 5 (2 ports for I/O, 1 for Unisphere, 1 for SSH, 1 for CMI).

There are a few things to note with the disk configuration. Obviously, the appliance sits on a disk subsystem attached to the ESXi host and is comprised of a number of VMDK files. EMC recommends that the disk provisioning used is “Thick Provisioned Eager Zeroed”. You also need to manually select the tier when you add disk to the pool as the vVNX just sees a number of VMDKs. The available tiers will be familiar to VNX users – extreme performance, performance and capacity. These correspond to SSD, SAS and NL-SAS.

 

Connectivity

The vVNX offers block connectivity via iSCSI, and file connectivity via Multiprotocol / SMB / NFS. No, there is no “passthrough FC” option as such. Let it go already.

 

Features

What’s pretty cool, in my opinion, is that the vVNX supports native asynchronous block replication with other vVNXs as well as with the VNXe3200. As well as this, vVNX systems have integrated deduplication and compression support for file-based storage (file systems and VMware NFS Datastores). Note that this is file-based, so it operates on whole files that are stored in a file system. The file system is scanned for files that have not been accessed in 15 days. Files can be excluded from deduplication and compression operations on either a file extension or path basis.

 

Big Brother

The VNXe3200 is ostensibly the vVNX’s big brother, and EMC use it as a comparison model when discussing vVNX capabilities. But, as EMC point out in their introductory whitepaper, there are still a few differences.

VNXe3200 vs. vVNX:

  • Maximum Drives: 150 (Dual SP) vs. 16 vDisks (Single SP);
  • Total System Memory: 48 GB vs. 12 GB;
  • Supported Drive Type: 3.5”/2.5” SAS, NL-SAS, Flash vs. vDisk;
  • Supported Protocols: SMB, NFS, iSCSI & FC vs. SMB, NFS, iSCSI;
  • Embedded IO Ports per SP: 4 x 10GbE vs. 2 x 1GbE or 2 x 10GbE;
  • Backend Connectivity per SP: 1 x 6 Gb/s x4 SAS vs. vDisk;
  • Max. Drive/vDisk Size: 4TB vs. 2TB;
  • Max. Total Capacity: 500TB vs. 4TB;
  • Max. Pool LUN Size: 16TB vs. 4TB;
  • Max. Pool LUNs Per System: 500 vs. 64;
  • Max. Pools Per System: 20 vs. 10;
  • Max. NAS Servers: 32 vs. 4;
  • Max. File Systems: 500 vs. 32;
  • Max. Snapshots Per System: 1000 vs. 128; and
  • Max. Replication Sessions: 16 vs. 256.

There are a few other key differences as well, before you get too carried away with replacing all of your VNXe3200s (not that I think people will get too carried away with this). The following points are taken from the “Introduction to the vVNX Community Edition” white paper:

  • MCx – Multicore Cache on the vVNX is for read cache only. Multicore FAST Cache is not supported by the vVNX and Multicore RAID is not applicable as redundancy is provided via the backend storage.
  • FAST Suite – The FAST Suite is not available with the vVNX.
  • Replication – RecoverPoint integration is not supported by the vVNX.
  • Unisphere CLI – Some commands, such as those related to disks and storage pools, will be different in syntax for the vVNX than the VNXe3200. Features that are not available on the vVNX will not be accessible via Unisphere CLI.
  • High Availability – Because the vVNX is a single instance implementation, it does not have the high availability features seen on the VNXe3200.
  • Software Upgrades – System upgrades on a vVNX will force a reboot, taking the system offline in order to complete the upgrade.
  • Fibre Channel Protocol Support – The vVNX does not support Fibre Channel.

 

Conclusion

I get excited whenever a vendor offers up a virtualised version of their product, either as a glorified simulator, a lab tool, or a test bed. It’s no doubt taken a lot of people inside EMC a lot of work to convince the people in charge to release this thing into the wild. I’m looking forward to doing some more testing with it and publishing some articles that cover what it can and can’t do.

EMC – VNX – Configuring LDAP Authentication

I’m surprised that I haven’t done an article on configuring Active Directory (AD) authentication on the VNX. It’s pretty easy to do, and a good idea. Big thanks to Sean Thulin for documenting this in a clear and concise fashion, and to EMC Support‘s website for filling in some of the blanks I had (via Primus emc308583).

DNS

Firstly, you should have DNS configured on your array. This is just a basic thing that you should do. Stop making excuses.

Dsquery

For AD authentication, you need the following information:

  • Domain Controller (DC) hostname;
  • A basic account on AD with read permission on AD on Users and Group containers – this account is called the Bind DN; and
  • Full path information for the Bind DN, the User container, and the Group container.

To obtain this, log in to a Windows computer with dsquery installed. You don’t need Domain Admin rights to get this information.

To determine the DC hostname, run the following, which returns the hostname:

C:\Users\dan>set | findstr "LOGONSERVER"

If there isn’t a Bind DN account created, you’ll need one. This can be a normal user account, preferably with “Password never expires” set to avoid issues down the track. Once the user is created anywhere in AD, use dsquery thusly:

C:\Users\dan>dsquery user -name ldap_account

You’ll get this:

"CN=ldap_account,OU=Service Accounts,DC=domain,DC=com"

The above is the fully qualified path name for the account “ldap_account”, which will be used as the Bind DN. You’ll need access to the password of this service account.

The User container is where the VNX will look for the user login to be used for authentication. In this example the user name is “Storage User”.

C:\Users\dan>dsquery user -name "Storage User"
"CN=Storage User,OU=Storage Admins,OU=Administrators,DC=domain,DC=com"

The User container path that you need to note here is: OU=Storage Admins,OU=Administrators,DC=domain,DC=com

For the group, you can do the same thing. In a number of environments, this will be the same location as the Users.

C:\Users\dan>dsquery group -name "Storage Admins"
"CN=Storage Admins,OU=Storage Admin Groups,OU=Administrators,DC=domain,DC=com"

The path name for the group container is: OU=Storage Admin Groups,OU=Administrators,DC=domain,DC=com
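The pattern in both cases is the same: take the DN that dsquery returns and drop the leading CN= component to get the search path Unisphere wants. If you’re collecting this for several arrays, that’s trivial to script (note this simple approach assumes the CN itself contains no escaped commas):

```shell
# Strip the leading CN= component from a dsquery DN to get the
# container search path. ${dn#*,} removes everything up to and
# including the first comma.
dn='CN=Storage Admins,OU=Storage Admin Groups,OU=Administrators,DC=domain,DC=com'
search_path="${dn#*,}"
echo "$search_path"
# -> OU=Storage Admin Groups,OU=Administrators,DC=domain,DC=com
```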

Manage LDAP

Now you’re ready to set things up. Go to Domain -> Manage LDAP and configure it using the information collected above.

LDAP1

You can configure two service connections. These would usually be DCs that are at discrete data centres.

LDAP2

Click on Add or Modify.

LDAP3

Here’s what you need to fill in:

  • Host Name or IP Address – Use the FQDN, it’s 2015 and DNS should work in your environment;
  • Port 389 for LDAP, 636 for LDAPS – This will change depending on whether you select LDAP or LDAPS as the protocol;
  • Server Type – Choose “Active Directory”;
  • Domain Name – Specify the domain name;
  • BindDN – This is where you put the distinguished name of the LDAP service account;
  • Bind Password – The password for the LDAP service account;
  • Confirm Bind Password – Confirmed;
  • User Search Path – This is the info we got earlier;
  • Group Search Path – Ditto; and
  • Add certificate – If you’re using LDAPS, you’ll need this.

Role Mapping

LDAP4

Note that it is recommended to use group names with no special characters and with fewer than 32 characters. The main roles include:

  • Operator – Read-only privilege for storage and domain operations; no privilege for security operations.
  • Network Administrator – All operator privileges and privileges to configure DNS, IP settings, and SNMP.
  • NAS Administrator – Full privileges for file operations. Operator privileges for block and security operations.
  • SAN Administrator – Full privileges for block operations. Operator privileges for file and security operations.
  • Storage Administrator – Full privileges for file and block operations. Operator privileges for security operations.
  • Security Administrator – Full privileges for security operations including domains. Operator privileges for file and block operations.
  • Administrator – Full privileges for file, block, and security operations. This role is the most privileged role.
  • VM Administrator – Enables you to view and monitor basic storage components of your VNX system through vCenter by using VMware’s vSphere Storage APIs for Storage Awareness (VASA).

Note that some of these roles apply to “Unified” configs (NAS), rather than block-only.

Conclusion

Don’t forget to synchronise the information once you’ve created the connections. And that’s it. You should now be able to log in to your VNX with your AD credentials. Just make sure “Use LDAP” is ticked.

EMC – VSI for VMware vSphere 6.5 Linked Mode Issue – Redux

I wrote in a previous post about having some problems with EMC’s VSI for VMware vSphere 6.5 when running in vCenter 5.5 in Linked Mode. I spoke about deploying the appliance in just one site as a workaround. Turns out that wasn’t much of a workaround. Because workaround implies that I was able to get some functionality out of the situation. While the appliance deployed okay, I couldn’t get it to recognise the deployed volumes as EMC volumes.

 

A colleague of mine had the same problem as me and a little more patience and logged a call with EMC support. Their response was “[c]urrent VSI version does not support for Linked mode, good news is recently we have several customers requesting that enhancement and Dev team are in the process of evaluating impact to their future delivery schedule. So, the linked mode may will be supported in the future. Thanks.”

 


While this strikes me as non-optimal, I am hopeful, but not optimistic, that it will be fixed in a later version. My concern is that Linked Mode isn’t the problem at all, and it’s something else stupid that I’m doing. But I’m short of places I can test this at the moment. If I come across a site where we’re not using Linked Mode, I’ll be sure to fire up the appliance and run it through its paces, but for now it’s back in the box.

EMC – Using naviseccli to create a VNX Snapshot

If you’re a VNX customer you’ve probably heard someone bang on about how easy to use VNX Snapshots are, particularly if they’ve used SnapView in the past. If you’re after the good word on VNX Snapshots, check out this whitepaper from EMC here. Tomek has a reasonable write-up here as well.

In any case I’ve been working with a customer on some migration scripts and they wanted to take VNX Snapshots as well as VM snapshots while they update their OS and apps. I wrote about creating SnapView Clones with naviseccli some time ago, but I find VNX Snapshots a shedload easier to work with. This will, as always, be dictated by your own set of requirements, circumstances and religious beliefs.

So here’s what you need to do to get from start to finish. Note that I haven’t covered creating Snapshot Mount Points (SMPs) in this, nor do I talk about using host-based tools such as SnapCLI. I’ll follow up in the future with some words around this.

[Update] I forgot to mention @Dynamoxxx / Storage Monkey‘s excellent posts on this subject too – have a look here for Linux and here for Windows.

Microsoft Windows [Version 6.3.9600]
(c) 2013 Microsoft Corporation. All rights reserved.

C:\Program Files (x86)\EMC\Navisphere CLI>NaviSECCli.exe
Not enough arguments
  Usage:
    [-User <username>] [-Password <password>]
    [-Scope <0 - global; 1 - local; 2 - LDAP>]
    [-Address <IPAddress | NetworkName> | -h <IPAddress | NetworkName>]
    [-Port <portnumber>] [-Timeout <timeout> | -t <timeout>]
    [-AddUserSecurity | -RemoveUserSecurity | -DeleteSecurityEntry]
    [-Parse | -p] [-NoPoll | -np] [-cmdtime]
    [-Xml] [-f <filename>] [-Help] CMD <Optional Arguments>
    [security -certificate]

You’ll need to set yourself up if you’re using a fresh installation.

C:\Program Files (x86)\EMC\Navisphere CLI>NaviSECCli.exe -addusersecurity -scope 0 -user sysadmin

You can then create a snapshot of LUN 7 called “testsnap1” which is read/write and will be kept for 4 hours.

C:\Program Files (x86)\EMC\Navisphere CLI>NaviSECCli.exe -address 192.168.0.100 snap -create -res 7 -resType LUN -name "testsnap1" -descr "snap via CLI" -keepFor 4h -allowreadwrite yes
Unable to validate the identity of the server.  There are issues with the certificate presented.
Only import this certificate if you have reason to believe it was sent by a trusted source.
Certificate details:
Subject:        CN=192.168.0.100,CN=SPA,OU=CLARiiON
Issuer: CN=192.168.0.100,CN=SPA,OU=CLARiiON
Serial#:        fcd99068
Valid From:     2015:01:15:02:55:01
Valid To:       2020:01:14:02:55:01
Would you like to [1]Accept the certificate for this session, [2] Accept and store, [3] Reject the certificate?
Please input your selection(The default selection is [1]):
2

Note that there’s no output from this command. If you want to check out the snapshots you have, you can list them.

C:\Program Files (x86)\EMC\Navisphere CLI>naviseccli -address 192.168.0.100 snap -list

Name:  testsnap1
Description:  snap via CLI
Creation time:  05/19/15 10:22:37
Source LUN(s):  7
Source CG:  N/A
State:  Ready
Allow Read/Write:  Yes
Modified:  No
Allow auto delete:  No
Expiration date:  05/19/15 14:22:37

Want to change the ID of the snapshot or change the autodelete setting?

C:\Program Files (x86)\EMC\Navisphere CLI>naviseccli -address 192.168.0.100 snap -modify -id "testsnap1" -name "testsnap2" -allowautodelete yes
Setting auto-delete on this Snapshot will clear expiration date on it. Are you sure you want to perform this operation?(y/n): n
C:\Program Files (x86)\EMC\Navisphere CLI>naviseccli -address 192.168.0.100 snap -modify -id "testsnap1" -name "testsnap2"

Great, now let’s get rid of it.

C:\Program Files (x86)\EMC\Navisphere CLI>naviseccli -address 192.168.0.100 snap -destroy -id "testsnap2"
Are you sure you want to perform this operation?(y/n): y
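For migration scripts like the ones mentioned above, the whole lifecycle can be wrapped in one function. This is a sketch: set DRY_RUN=1 to echo the naviseccli calls instead of running them (handy for testing without an array), and note the -o flag on -destroy is assumed to suppress the confirmation prompt – verify that against your naviseccli version.

```shell
# Sketch: snap a LUN, run some work, then clean up the snapshot.
SP=192.168.0.100
run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$@"; else "$@"; fi; }

snap_wrap() {  # usage: snap_wrap <lun-id> <snap-name> <command...>
  lun=$1; name=$2; shift 2
  run naviseccli -address "$SP" snap -create -res "$lun" -resType LUN \
      -name "$name" -keepFor 4h -allowreadwrite yes
  "$@"                            # the actual migration work goes here
  run naviseccli -address "$SP" snap -destroy -id "$name" -o
}

# Dry run: just prints the two naviseccli commands around the work step.
DRY_RUN=1 snap_wrap 7 testsnap1 echo "...migration steps..."
```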

And that’s about it.

EMC – VSI for VMware vSphere 6.5 Linked Mode Issue

As part of a recent deployment I’ve been implementing EMC VSI for VMware vSphere Web Client v6.5 in a vSphere 5.5 environment. If you’re not familiar with this product, it “enables administrators to view, manage, and optimize storage for VMware ESX/ESXi servers and hosts and then map that storage to the hosts.” It covers a bunch of EMC products, and can be really useful in understanding where your VMs sit in relation to your EMC storage environment. It also really helps non-storage admins get going quickly in an EMC environment.

To get up and running, you:

  • Download the appliance from EMC;
  • Deploy the appliance into your environment;
  • Register the plug-in with vCenter by going to https://ApplianceIP:8443/vsi_usm/admin;
  • Register the Solutions Integration Service in the vCenter Web Client; and
  • Start adding arrays as required.

So this is all pretty straightforward. BTW the default username is admin, and the default password is ChangeMe. You’ll be prompted to change the password the first time you log in to the appliance.

 

So the problem for me arose when I went to register a second SIS appliance.

VSI1

By way of background, there are two vCenter 5.5 U2 instances running at two different data centres. I do, however, have them running in Linked Mode. And I think this is the problem. I know that you can only register one instance at a time with one vCenter. While it’s not an issue to deploy a second appliance at the second DC, every time I go to register the service in vCenter, regardless of where I’m logged in, it always points to the first vCenter instance. Which is a bit of a PITA, and not something I’d expected to be a problem. As a workaround, I’ve deployed one instance of the appliance at the primary DC and added both arrays to it to get the client up and running. And yes, I agree, if I have a site down I’m probably not going to be super focused on storage provisioning activities at my secondary DC. But I do enjoy whinging about things when they don’t work the way I expected them to in the first instance.

 

I’d read in previous versions that Linked Mode wasn’t supported, but figured this was no longer an issue as it’s not mentioned in the 6.5 Product Guide. This thread on ECN seems to back up what I suspect. I’d be keen to hear if other people have run into this issue.