To boot or not to boot?

One of the most common questions I’m asked is “Can I boot from SAN?” As with most technology questions, the answer is generally “It depends.”

Booting from SAN offers a great deal of flexibility and usually a significant performance boost when you’ve got the right hardware behind it.

My first experience with a Symmetrix was using an HP3000 on a Symmetrix 4.8. (We had an older Symm 3 at the time, but as a lowly operator (and a fairly uninformed one, no less) I didn’t know it as much more than “the big cabinet.”)

We had this old HP3000 running MPE/ix, which was HP’s old answer to the mainframe. I was working at Intuit and we ran 90% of our order processing through a single system at the time, and during tax time that was quite an amazing load for a system that had eight 100MHz PA-RISC processors. (Remember, this was 1996.)

So the HP3000 series 995/800 took two floor tiles by itself. Next to it were four racks of 2GB hard drives. The “spread it wide and not deep” mentality was alive and well at the time.

Enter the Symmetrix. We migrated all 80 or so 2GB hard drives to a single Symm frame with what I believe was 8GB of cache (which was a lot at the time). And most importantly, we were booting from the Symm.

We saw an amazing improvement, something like 20 to 30 percent. It took me a while to realize (I was kind of daft at that point in time) that there were three reasons for this.

First, because the system was now paging to cache instead of to a physical volume, write times that used to be measured in the 20ms to 30ms range were now being measured in the 5-10ms range. (Before you ask, this is an estimate based on my memory of marketing data provided to me over 10 years ago, so don’t quote me.) The operating system can get a lot more done when it’s not waiting on virtual memory.
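To put rough numbers on that first effect, here’s a quick sketch using the latency ranges from the paragraph above. These are my recollected estimates, not measured data, so treat the result as an illustration of why paging to cache matters rather than a benchmark.

```python
# Compare page-out service times on raw spindles vs. a cached array,
# using the ranges quoted above: 20-30ms uncached, 5-10ms cached.

def speedup(slow_ms: float, fast_ms: float) -> float:
    """How many times faster the cached write completes."""
    return slow_ms / fast_ms

# Worst case for the cache vs. best case for the disk, and vice versa:
print(speedup(20, 10))  # 2.0x at the pessimistic end
print(speedup(30, 5))   # 6.0x at the optimistic end
```

Even at the pessimistic end, every page-out completes in half the time, and the OS spends correspondingly less time stalled on virtual memory.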

Second, the disk mirroring that used to be handled by the operating system was now being handled by the Symmetrix. As far as the OS was concerned it was connected to unprotected drives, so it no longer had to expend any I/O cycles doing multiple writes per transaction.
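That second effect is simple write amplification, and a tiny model makes it concrete. This is a hypothetical sketch, not anything specific to MPE/ix or the Symmetrix internals:

```python
# With host-side (OS) mirroring, every logical write costs the host two
# physical writes; with array-side mirroring, the host issues one write
# and the array duplicates it internally, invisibly to the OS.

def host_ios(logical_writes: int, os_mirrored: bool) -> int:
    copies = 2 if os_mirrored else 1
    return logical_writes * copies

print(host_ios(1000, os_mirrored=True))   # 2000 I/Os the OS must drive
print(host_ios(1000, os_mirrored=False))  # 1000 - mirroring offloaded to the array
```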

Third, reads and writes were distributed across many spindles, enabling the array to grab data from multiple sources at the same time.
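The third effect, spreading I/O across spindles, scales roughly linearly until some other component becomes the bottleneck. The numbers in this sketch are invented for illustration; the point is the shape of the curve, not the absolute figures:

```python
# Aggregate read bandwidth grows with the number of spindles a volume is
# striped across, until the channel (or cache) caps it.

def striped_throughput(spindles: int, per_spindle_mb_s: float,
                       channel_limit_mb_s: float) -> float:
    return min(spindles * per_spindle_mb_s, channel_limit_mb_s)

print(striped_throughput(1, 5.0, 100.0))   # 5.0 MB/s from a single drive
print(striped_throughput(16, 5.0, 100.0))  # 80.0 MB/s across 16 spindles
print(striped_throughput(32, 5.0, 100.0))  # 100.0 - capped by the channel
```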

Well, the same holds true today, in fact even more so. While application performance demands have increased hundreds-fold, physical drives can only spin so fast, and as such can only provide data so fast regardless of the speed of processors and memory.

So I’m in Kansas, working on setting up an IBM BladeCenter to boot from a Clariion CX3-20. There are better ways to go than Clariion when it comes to boot from SAN: HBA registration is required on the Clariion, and when you’re booting from SAN this has to be done manually before you install the OS. So it’s a lot of extra typing at first. (It’s also more typing when you have to replace an HBA, but that’s a different story.) The step-by-step, in case you’re interested, is:

  1. Connect the HBA to the switch and enable the HBA BIOS.
  2. Boot the system and let it time out; this forces the HBA to log into the switch.
  3. Zone the HBA to the required storage. (If your OS isn’t multi-path aware, you’ll need to zone *ONLY* to a single path at first, and enable the redundant paths after the fact.)
  4. Assign the storage:
     * For Clariion:
       1. Reboot the host to force it to log into the Clariion.
       2. Register the HBA.
       3. Assign it to a host.
       4. Assign the host to a storage group with the boot LUN in it.
     * For Symmetrix:
       1. Mask the HBA to the boot LUN on a single path.
  5. Reboot the host – storage should be visible.
The downside to booting from SAN shows up especially in the Windows environment. When a Windows host loses access to the swap file, even for a second, you’re running the very real risk of BSODing your system. This means that when the Clariion trespasses a device, it’s a crap shoot as to whether or not your system is going to stay up. The good news is that PowerPath increases the timeout by adding its own timeout, so you’re best off running PowerPath, even in unlicensed mode, to handle the failover time.


      Comment, September 16, 2007:

      Good timing. I’m going through this exact same evaluation of booting from blade servers right now at my company. It sounds like you’re using the optical pass-through modules for your blade config instead of the blade switch modules? I’ve discovered a potential “con” (if I’m correct) about booting from these blades. From my discussions with our Wintel engineer, in the H series blade model that we are looking at, there is only 1 PCI/FC daughter card in the blade. That means that *if* the daughter card fails, now you’ve got both your “boot” FC port as well as your “data” FC port down. Unless you stock extra daughter boards, you’re stuck waiting for a part shipment to get the blade back up and running. In the “traditional” server boot-from-SAN world, at least you typically had the 2nd HBA in the server already, and could switch it over to be the boot card in an emergency. I’m still learning about blade technology though, so your model and mileage may vary.

    1. It’s very true, and a definite shortcoming of the blade technology as a whole. Personally I prefer to use something along the lines of the Dell 1850 “Pizza Box” since you can put two distinct PCI cards in it.

      However, the obvious drawback is density. If you want to go full density, you have to go blades.

      That being said, the dual-port QLogic cards that are on the IBM BladeCenter H21 blades are pretty solid. The main reason is that there are no optics on the card, either fixed or otherwise. It’s straight copper to the switch module.

      We didn’t use the optical pass-through in this case, instead opting for the 20-port McData FCSW modules. It makes life easier and cheaper in the long run, as we were able to test by direct-connecting the storage to the McData switches, then bringing the “core” fabric online once we were satisfied with the performance. All of the zoning we created on the McData fabrics automatically merged with the core fabric when we connected them.

      Piece of cake.

      The main reason to use the McData modules as opposed to the optical pass-through is cost: when you are connecting directly to your core switches, you’re paying between $1,000 and $2,000 per port for your connections. Even dual-pathing the McData modules to the core means you’re spending only $2,000 – $4,000 per ISL, a number which quickly pays for itself when you think of the cost of dual-connecting 14 servers per chassis directly to the core switches.
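      The arithmetic behind that cost argument is worth spelling out. This back-of-the-envelope sketch uses the per-port figures quoted above (the $1,500 midpoint and the ISL count are my own assumptions for illustration):

```python
# Core-port cost: direct-connecting every blade path vs. uplinking the
# embedded McData modules to the core over a handful of ISLs.

def direct_connect_cost(servers: int, paths: int, per_port: float) -> float:
    """Every server path burns a core switch port."""
    return servers * paths * per_port

def isl_cost(isls: int, per_port: float) -> float:
    """Only the McData-module uplinks consume core ports."""
    return isls * per_port

# 14 blades, dual-pathed, at $1,500/port, vs. 2 ISLs per fabric (4 total):
print(direct_connect_cost(14, 2, 1500.0))  # 42000.0
print(isl_cost(4, 1500.0))                 # 6000.0
```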

      I don’t know what the McData modules cost, but I’m guessing they’re not cheap. They are very versatile, and the ones we used were capable of 4Gbit/sec.

      In short it was a great setup. 18 servers in 16U of rack space – with much space to spare. (The H series holds 14 blades per chassis, for a total of 28 possible in this scenario)

      Reply, September 20, 2007:

      They run about US$12,000 list for either the Cisco, Brocade, or McData 20-port switches, and about $7k for the 10-port ones. That’s excellent when you consider the density you’re getting out of those 8Us (almost a 200% improvement over 1U “pizza” boxes).

      Reply, September 21, 2007:

      @storagedude: I suggest taking a look at the Sun blade server then. It uses PCI-type cards that are easily swapped rather than the traditional mezzanine cards that Dell/HP/IBM use 😉

      Jesse, September 28, 2007:

      How many cards does each blade support? In order to put two PCI cards in a blade it would either have to be about 10″ wide or 4″ deep.

      Reply, October 1, 2007:

      Better than two: try six 🙂 You have to remember, say what you want about Sun, their systems engineers know what they’re doing.

      Key Specifications

      * 4 dual-core AMD Opteron processors per server module
      * 6 PCI-Express interfaces per server module
