What have I been doing? – Part 2

Dell memory configuration

The Dell PowerEdge 2950 has some reasonably specific rules for memory installation. You can read them here.

“FBDs must be installed in pairs of matched memory size, speed, and technology, and the total number of FBDs in the configuration must total two, four, or eight. For best system performance, all four, or eight FBDs should be identical memory size, speed, and technology.”

Dear Sales Guy, that means you can’t sell 6, or some other weird number, and expect it to just work.

SAN Copy Push from CX200 to AX4-5

SAN Copy is a neat bit of software that EMC (or DG, I forget which) developed to rip data off competitors' arrays when it came time to trade in your HDS system for something a little more EMC-like. It's also very useful for other things, like once-off backups, but I won't bore you with that right now. The catch is that, if the two CLARiiONs you're migrating between are not in the same management domain, you need to enter the WWN of the remote LUN manually. In this instance I was doing a SAN Copy push from a CX200 to an AX4-5 to avoid using host-based copy techniques on the NCS cluster (my ncopy skills are not so good).

So when you are selecting the destination storage and can’t see the AX4-5, you’ll need the WWN:

[Screenshot: sancopy1 – selecting the destination storage]

Click on “Enter WWN…”

[Screenshot: sancopy2 – the Enter WWN dialog]

And you should then be able to see the destination LUN.
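
If you'd rather grab that WWN from the command line than squint at a GUI, getlun with the -uid switch on the destination array returns the LUN's unique ID. The hostname, LUN number and credentials below are made-up examples – substitute your own:

# Ask the destination array for LUN 23's unique ID (WWN)
naviseccli -h ax4-spa -user admin -password mypassword -scope 0 getlun 23 -uid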

You’ll also need an enabler loaded on the AX4-5 to use SAN Copy – and it’s not the same enabler you would load on a CX, CX3 or CX4. Trust me on that one …

Setting Up MV/A

I set up MV/A between two AX4-5 arrays recently (woot! baby SANs) and thought I’d shed some light on the configuration requirements:

MirrorView/A Setup
– MirrorView/A software must be loaded on both the Primary and Secondary storage systems (i.e. the enabler file, installed via ndu)
– Secondary LUN must be exactly the same size as the Primary LUN (use the block count when creating the mirrors – see the CLI sketch after this list)
– Secondary LUN does not need to be the same RAID type as the Primary (although it’s a nice-to-have)
– Secondary LUN is not directly accessible to host(s) (the Mirror must be removed, or the Secondary promoted to Primary, for a host to have access)
– Bi-directional mirroring is fully supported (hooray for confusing DR setups and active/active sites!)
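
Since the block-count thing trips people up, here’s roughly what the whole dance looks like via Secure CLI. This is a sketch from memory – the hostnames, LUN numbers, RAID group and mirror name are all made up, and credentials are omitted (use a Navisphere security file or -user/-password/-scope) – so verify the syntax with mirror -async -help on your FLARE revision:

# 1. Get the primary LUN's capacity in blocks
naviseccli -h prod-spa getlun 20 -capacity

# 2. Bind the secondary LUN on the remote array using that exact block count
naviseccli -h dr-spa bind r5 20 -rg 0 -cap 419430400 -sq bc

# 3. Create the async mirror on the primary array
naviseccli -h prod-spa mirror -async -create -name fs01_mirror -lun 20

# 4. Add the remote LUN as the secondary image
naviseccli -h prod-spa mirror -async -addimage -name fs01_mirror -arrayhost dr-spa -lun 20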

Management via Navisphere Manager and Secure CLI
– Provides ease of management
– Returns detailed status information
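
The Secure CLI equivalent of that detailed status information is something like the following – again a sketch (the mirror name is my made-up example from earlier), so check mirror -async -help for the exact switches:

# List the state of the async mirror and its images
naviseccli -h prod-spa mirror -async -list -name fs01_mirror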

So what resources are used by MirrorView/A? I’m glad you asked.

SnapView
– Makes a golden copy of the remote image before the update starts

Incremental SAN Copy
– Transfers data to the secondary image; SnapView is used as part of ISC

MirrorView/A license (enabler – the bit you paid for)
– MirrorView/A is licensed for user operation
– SnapView and SAN Copy are licensed for system use only (meaning you can’t use them to make your own snapshots or copies)

Adequate Reserved LUN Pool space
– Local and remote system (this is critical to getting it working and often overlooked during the pre-sales and design phases)

Provisioning of adequate space in the Reserved LUN Pool on the primary and secondary storage systems is vital to the successful operation of MirrorView/A. The exact amount of space needed may be determined in the same manner as the required space for SnapView is calculated.

So how do we do that?

Determine average Source LUN size
– Total size of Source LUNs / # of Source LUNs

Reserved LUN size = 10% of average Source LUN size
– COFW (copy on first write) factor

Create 2x as many Reserved LUNs as Source LUNs
– Overflow LUN factor

Example
– LUNs to be snapped: 10 GB, 20 GB, 30 GB, 100 GB
– Average LUN size = 160 GB/4 = 40 GB
– Make each Reserved LUN 4 GB in size
– Make 8 Reserved LUNs
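
Translating that example into Secure CLI, binding the Reserved LUNs and adding them to the pool looks roughly like this – the RAID group and LUN numbers are assumptions on my part, and the reserved -lunpool syntax is from memory, so verify it against your release:

# Bind eight 4GB LUNs (200 through 207) for the Reserved LUN Pool
naviseccli -h prod-spa bind r5 200 -rg 1 -cap 4 -sq gb
# ... repeat the bind for LUNs 201 through 207 ...

# Add them all to the Reserved LUN Pool
naviseccli -h prod-spa reserved -lunpool -addlun 200 201 202 203 204 205 206 207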

Due to the dynamic nature of Reserved LUN assignment per Source LUN, it may be better to have many smaller LUNs that can be used as a pool of individual resources. A limiting factor here is that the total number of Reserved LUNs allowed varies by storage system model. Each Reserved LUN can be a different size, and allocation to Source LUNs is based on which is the next available Reserved LUN, without regard to size. This means that there is no mechanism to ensure that a specified Reserved LUN will be allocated to a specified Source LUN. Because of the dynamic nature of the SnapView environment, assignment may be regarded as a random event (though, in fact, there are rules governing the assignment of Reserved LUNs).

Frequency of synchronization is the other big question. What are you using MirrorView/A for? If you promote the secondary image at the DR site, what are you hoping to see? Setting up MV/A is normally a question of understanding the Recovery Point Objective (RPO) of the business data you’re trying to protect, as well as the Recovery Time Objective (RTO).

With MV/A (and MV/S), the RTO can be quite quick – normally as long as it takes to promote the secondary image and mount the data on a DR host, assuming you have a warm standby DR site in operation. Of course, what data you’ve replicated will dictate how fast you get back up and running. If it’s a simple fileserver, then presenting the promoted LUN to another host is fairly trivial. But if you’ve replicated a bunch of VMs sitting on a VMFS volume, you need to do some other things (discussed later) to get back up and running.

So how do you decide on an acceptable RPO? Well, start with how much money you can spend on a link (kidding). It’s probably a good idea to look at what the business can tolerate in terms of data loss before you go off and configure mirrors with a 5-day replication cycle. Once you’ve worked that out, and tested it, you should be good to go. Oh, and keep in mind that, if you’re going to fail over an entire site to DR, you need to consider the storage subsystem you’ve put in place at the DR site. Imagine trying to run a full-blown ESX environment on eleven 1TB SATA-II spindles in RAID 5. Uh-huh.
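
Speaking of promoting the secondary image: via Secure CLI it’s roughly a one-liner, something like the sketch below. The mirror name is my made-up example, the image UID is a placeholder (get the real one from mirror -async -list), and the -promoteimage syntax is from memory, so double-check it before your DR test:

# Promote the secondary image on the DR array to primary
naviseccli -h dr-spa mirror -async -promoteimage -name fs01_mirror -imageuid 50:06:01:60:AA:BB:CC:DD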
