HP MSA array failover

I’ve blogged briefly about the MSA array before; I thought it was a reasonable piece of kit for the price, assuming your expectations were low. But I ran into a problem recently with a particular MSA2012fc, and I don’t know whether I’ve understood it correctly or whether I’m missing something fundamental.

I had it set up in a DAS configuration: the internal interconnect was enabled, and the host ports were left at the default loop topology. This worked fine for the two RHEL boxes attached directly to the array. Later I connected the array and both hosts to two Brocade 300 switches forming discrete fabrics, changed the topology from loop to point-to-point, and changed the interconnect setting to straight-through. This seemed like a reasonable thing to do based on my reading of the admin, user and reference guides.
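As an aside, you can verify what topology the hosts actually negotiated from the RHEL side; the fc_host sysfs attributes below are standard with the qla2xxx and lpfc drivers, so this is a generic sanity check rather than anything MSA-specific:

    # On each RHEL host, show the negotiated FC topology and state per HBA port.
    # "NPort (fabric via point-to-point)" means the fabric login succeeded;
    # "LPort (private loop)" means the port is still running loop.
    for h in /sys/class/fc_host/host*; do
        echo "$h: $(cat $h/port_type), $(cat $h/port_state)"
    done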

In a switched topology / straight-through / point-to-point configuration, LUNs on a vdisk owned by controller A are only presented via controller A’s ports. If controller A fails, however, I don’t believe the vdisk fails over to controller B. If a cable or switch fails, on the other hand, you’re covered, because each controller is cabled to both fabrics. I believe this is why I saw exactly two paths to everything: the two fibre ports of the controller that owns the LUN’s vdisk.
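For anyone wanting to reproduce the observation, this is roughly how I’d count paths per LUN with device-mapper-multipath; the awk one-liner assumes the default mpathN device naming, so adjust it if you’ve set user_friendly_names or aliases:

    # List every multipath device with its constituent paths.
    multipath -ll

    # Crude per-device path count: in the switched/point-to-point setup
    # described above, expect two paths per LUN (both via the owning
    # controller's ports) rather than four.
    multipath -ll | awk '/^mpath/ {name=$1} /sd[a-z]/ {n[name]++} END {for (m in n) print m, n[m], "paths"}'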

In a direct-attach / interconnect / loop setup, each controller mirrors its peer’s LUNs via the higher-numbered ports, so controller A presents paths to controller B’s LUNs via port A1. In this setup you could sustain a controller failure, as the surviving controller would continue to present the failed controller’s vdisks. The problem is that the interconnect is never used in a switched environment. I don’t believe changing the ports back to loop would help, nor would removing the switches.
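For completeness, the multipath.conf device stanza I was working from looked roughly like the one below. I’m reconstructing it from memory of HP’s RHEL guidance for the MSA2000 family, so treat the values as a starting point and check the current docs; note that multibus lumps every path into one group, which only buys you anything if the array actually presents paths via both controllers:

    device {
        vendor                 "HP"
        product                "MSA2012fc"
        getuid_callout         "/sbin/scsi_id -g -u -s /block/%n"
        path_grouping_policy   multibus
        path_selector          "round-robin 0"
        path_checker           tur
        failback               immediate
        no_path_retry          18
        rr_min_io              100
    }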

Have I totally missed the point here? Has anyone else seen this? Is there a workaround, or was something fixed in later revs of the firmware? It seems strange that HP would advertise this as an active-active array, but only for DAS configs.