I got sucked into this job, and the only benefit of it is that it’s in California, which, when it comes down to it, is not a bad place to be when it’s snowing back home. Bygones.
Anyway, my job, whether I want it or not, is to figure out a way to move about 4 terabytes of Celerra data from one set of disks (6+1 RAID-5, 7.2K SATA) to newer, faster disks (4+1 RAID-5, 15K Fibre Channel).
And the rub is that I have to do it online.
This is one of those places where I hate the Celerra. I found a great Primus article (emc144545, if you’re interested) that states quite unequivocally that you can only use the back-end CLARiiON LUN migration if you are migrating to an identical RAID group, which, to me, negates the reason for doing it in the first place.
Identical RAID group. If it’s SATA, the target has to be SATA. If it’s 4+1 RAID-5, the target MUST be 4+1 RAID-5.
Near as I can figure (and this is not stated clearly in the article), it has to do with how the Celerra builds its RAID pools. Since you USUALLY build filesystems and set them to expand into a RAID pool, my guess is that changing the make-up of the disks underneath the filesystem screws up the pool database.
Come on, guys, this should be an easy fix. (This, and the ability to easily shrink a filesystem, would be nice.) When a customer makes one of those mistakes (you know, buying the wrong disks from the outset because they’re focused on capacity and forget a little thing called performance), the hardware should offer an easy way to fix it.
In the case of the customer I’m working with now, CLARiiON LUN migration was out because of the disk-mark issue, and the standard SecureCopy was out because only minimal downtime is allowed.
The long and the short of it is that I’m getting ready to do an internal CDMS migration. Now, anyone who has used CDMS knows it’s not the fastest product in the world. You also know it can be maddening, because one of the things you *STILL* can’t see is what percentage of the migration is complete.
But as far as the technology goes, its usefulness is awe-inspiring.
CDMS is a “copy-on-access” file-level clone. Essentially, it builds a duplicate inode table pointing to the old files and presents it to the client. Browsing the new directory structure shows you the entire filesystem structure exactly as it is on the old source box. When you attempt to access a file for the first time, CDMS copies that file from the old filesystem to the new one and then passes it to the end client.
Now, this is where it’s a pain: it’s a slow process. Depending on the speed of the source system, network, etc., your initial access time can increase 20-fold. (Subsequent accesses come from the new disks, so it’s a one-time hit.)
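To make the copy-on-access idea concrete, here’s a toy Python sketch of the pattern. This is not CDMS itself (the class and method names are my own invention for illustration); it just shows the mechanism: the first open of a file pulls it across from the source tree, and every later open is served from the new disks.

```python
import shutil
from pathlib import Path

class CopyOnAccessFS:
    """Toy model of a copy-on-access file-level clone (illustrative only,
    not the actual CDMS implementation)."""

    def __init__(self, source: Path, target: Path):
        self.source = Path(source)
        self.target = Path(target)
        self.migrated: set[Path] = set()  # files already pulled across

    def open(self, relpath: str):
        dst = self.target / relpath
        if dst not in self.migrated:
            # First access: slow path, copy the file from the old disks.
            src = self.source / relpath
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)
            self.migrated.add(dst)
        # Later accesses come straight from the new disks.
        return dst.open("rb")
```

The one-time hit the post describes is that slow branch: the client pays the source-side read cost exactly once per file.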
So tomorrow, I start moving almost 4TB this way. Running 32 threads internally (this is an intra-Celerra migration), it should run fairly fast, depending on how quickly the network stack can process it.
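For a rough picture of what a 32-thread background sweep looks like, here’s a minimal Python sketch using a thread pool. Again, this is an assumption-laden stand-in, not how CDMS is actually implemented; `migrate_tree` and its parameters are hypothetical names.

```python
import shutil
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def migrate_tree(source: Path, target: Path, threads: int = 32) -> int:
    """Copy every file under source to target using a pool of worker
    threads; returns the number of files copied. Illustrative only."""
    source, target = Path(source), Path(target)
    files = [p for p in source.rglob("*") if p.is_file()]

    def copy_one(src: Path) -> None:
        dst = target / src.relative_to(source)
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)  # each worker copies one file at a time

    with ThreadPoolExecutor(max_workers=threads) as pool:
        # map() drains the iterator so exceptions in workers surface here
        list(pool.map(copy_one, files))
    return len(files)
```

As the post notes, the thread count only helps up to whatever the network stack and source disks can actually sustain.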
To EMC: fix the disk-mark database to allow a Celerra LUN to be migrated without penalty (or at least with easy-to-moderate reconfiguration). You’ll sell more disks, because people won’t feel married to the disks they’ve got or worry about committing to a disk type if they’re not absolutely sure of its performance numbers.
To all salespeople: don’t sell SATA disks for production-level applications. They don’t work. (See my next post: SATA, SAS, and Fibre Channel.)