
Aug 31

VMWare Booting…

Ok, I’m curious as to whether anyone has an answer for this.

Why don’t more people boot VMWare ESX from the SAN?

It occurred to me the other night that I have two 36GB drives in each of my servers, of which I use maybe 10GB, when I already have a high-availability storage solution at my fingertips.  I’ve got plenty of storage space, not even counting the vault drives.

So I tried it.  I took one of my offline VMware boxes (I use DPM, so at any given time two of my three VMware hosts are probably in standby mode) and popped the drives out of it.

I turned it on, went into the BIOS, disabled the onboard RAID controller, and enabled the boot BIOS on one of the Emulex HBAs.

I created an 18GB LUN on the Clariion, assigned it to the host as LUN 0, and poof, I have a boot disk.
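For reference, here’s roughly what that provisioning step looks like from the Navisphere CLI side, wrapped in a small Python script so it’s repeatable per host. This is a sketch from memory, not a tested recipe: the SP address, RAID group, LUN number, and storage group name are all made up, and the exact navicli flags should be checked against your own FLARE/Navisphere CLI documentation.

```python
#!/usr/bin/env python
"""Sketch: carve an 18GB boot LUN on a Clariion and present it to one
host's storage group as host LUN 0 via navicli. All names/IDs below
are hypothetical; verify the flags against your Navisphere CLI docs."""
import subprocess

SP = "10.0.0.10"            # SP A management address (assumption)
RAID_GROUP = "0"            # existing RAID group with free space
LUN_ID = "42"               # array LUN number to bind (assumption)
STORAGE_GROUP = "esx02_sg"  # storage group for the host (assumption)

def navicli(*args):
    """Echo and run a navicli command against the SP."""
    cmd = ["navicli", "-h", SP] + list(args)
    print(" ".join(cmd))
    subprocess.check_call(cmd)

# Bind an 18GB RAID-5 LUN in the chosen RAID group.
navicli("bind", "r5", LUN_ID, "-rg", RAID_GROUP, "-cap", "18", "-sq", "gb")

# Present it to the host's storage group as HLU 0 so the Emulex boot
# BIOS sees it as LUN 0.
navicli("storagegroup", "-addhlu", "-gname", STORAGE_GROUP,
        "-hlu", "0", "-alu", LUN_ID)
```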

Worked like a charm.  The one (pleasant) surprise is that VMware seems aware of the multi-pathed boot device even without any form of PowerPath on the system.  (That was my biggest concern.)

So now I have my VMWare infrastructure running on a host with ZERO fixed-disk drives spinning in it.

So has anyone else tried this, and do you know of any gotchas that I may not have run across yet?  I’ve done Windows and Linux native boot-from-SAN many, many times, but this is my first attempt with VMware.

I’ve not, however, tried pulling a path to see just HOW resilient it is… I probably should try that before I convert the other two systems to diskless operation, right?

😉

12 comments


  1. brerrabbit

    There used to be a couple of features that were not supported if you did boot-from-SAN in 3.0 (haven’t checked since); the one that sticks out is that Windows clustering between VMs was a no-no. I can’t say that I know anyone who does that, so I’m not sure how much of a show-stopper it would be.

    I always liked the idea of BFS because on a mid-to-high-range SAN you can do things like snapshots/clones of your hosts, which can be nice for code updates and such. Routine backups of the host have always seemed like overkill to me; there’s just not that much unique data on there, so having the ability to do a quick disk-based copy on demand is an elegant solution to me…

    1. Jesse

      Windows clustering within VMware is a complete and total waste of time and money when it comes down to it. A single VMware HA license can do the work of dozens of Windows clusters, without all of the complexity and licensing cost of Windows.

      So far so good: 24 hours running on the SAN boot, and even with a power failure today (it wasn’t long enough to cause the systems to stop; the twin 5kVA batteries powering my rack held out) it’s running like clockwork.

      My big benefit is power consumption and heat generation; anything that cuts the number of spinning disks in my office is going to save me pennies, and they all add up.

      Now if I can just get snap to work instead of VCB, that would save me an arm and a leg, and four more spinning disks. (The local landing zone for my VCB backups is 4x 146GB drives in RAID-5; I back it up from there to the portable/removable drive to take offsite.)

  2. william bishop

    My biggest issue is that the boot volume is so small that an internal drive is more than adequate. Meanwhile, I have to do all the associated disk tasks (zoning, mapping, creating a small disk only visible to that host), and when you hit over a hundred hosts, that all adds up. More administrative overhead… Pass.

    1. Jesse

      Yeah, I can see that – I think for me it was more of a “can I do it” rather than “should I do it”…

      I could see using something like this in a smallish cluster, 6-12 hosts, where you’re booting from SAN to allow for a QUICK replacement of a failed host. Swap the hardware, zone/mask, and you’re back up with no new configuration needed (rough sketch of the zoning side below).
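      Purely as a sketch of the zoning side of that: if the zones reference an alias rather than a raw WWN, swapping the hardware mostly means repointing the alias at the replacement HBA. The alias name, WWNs, and config name below are hypothetical, the script just prints the Brocade Fabric OS commands you’d run on the switch, and the array-side initiator registration/masking still has to be updated separately.

      ```python
      #!/usr/bin/env python
      """Sketch: fabric-side change when a failed host's hardware is
      swapped -- repoint the existing zone alias at the new HBA's WWN so
      the zones themselves never change. Everything below is made up."""

      OLD_WWN = "10:00:00:00:c9:aa:bb:cc"  # failed HBA (hypothetical)
      NEW_WWN = "10:00:00:00:c9:dd:ee:ff"  # replacement HBA (hypothetical)
      ALIAS = "ESX02_HBA0"                 # existing alias for this host's HBA
      ZONE_CFG = "PROD_CFG"                # active zoning configuration name

      commands = [
          'aliremove "{0}", "{1}"'.format(ALIAS, OLD_WWN),
          'aliadd "{0}", "{1}"'.format(ALIAS, NEW_WWN),
          "cfgsave",
          'cfgenable "{0}"'.format(ZONE_CFG),  # prompts for confirmation
      ]

      for cmd in commands:
          print(cmd)
      ```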

  3. InsaneGeek

    It used to be that there were lots of restrictions on booting from the SAN. I think a good portion of them have been removed, but back in the 2.x days it was fairly hairy, so it’s got a bit of history to burn through before people will go there again. Additionally, boot from SAN in general scares people; I don’t know exactly why, but it seems to make people in the organization nervous.

    What I think might make it take off is being able to abstract the HBA’s WWN from its physical hardware, so it truly becomes a physically replaceable bit of equipment: have a datacenter monkey pull a CPU and push in a new one in a blade chassis. This is also where I think they are trying to take FCoE (which scares the living crap out of me… my FC switches don’t even take a one-second outage, but every other day the networking guys are causing some minute-long spanning tree event).

    1. Jesse

      Booting from SAN provides *SO* many benefits over fixed/local disks, especially when you’re willing to spend the extra money and do it from a Symm. Booting Windows from a Clariion adds its own limitations: it’s easy to saturate 2 gigs of cache with random memory paging, thereby decreasing performance array-wide.

      However, I’ve booted Windows from the Symm before, and given that you’re swapping directly to cache, the OS performance boost is amazing.

      The other bonus is being able to replicate (using SRDF) the boot volumes to a cold DR site. Add to that fixed DHCP (assigning your IP by DHCP based on the MAC address of the box) and you can have the same host boot at two different sites, grab different IP addresses, and register that IP with DNS, giving you, god forbid, an actual zero RPO on the ENTIRE server. (A rough sketch of the DHCP piece is below.)

      Which of course you can do much more easily using VMWare.
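      A minimal sketch of the fixed-DHCP piece of that, assuming ISC dhcpd at both sites. The MAC, host label, and addressing are made up; the point is just that the same MAC gets a different fixed-address depending on which site’s DHCP server answers, and dynamic DNS registers whichever one actually boots.

      ```python
      #!/usr/bin/env python
      """Sketch: emit per-site ISC dhcpd host reservations so the same
      physical server (same MAC) picks up a site-appropriate IP wherever
      it boots. The MAC, label, and subnets are hypothetical."""

      SERVER_MAC = "00:11:22:33:44:55"

      SITES = {
          "production": "10.1.10.25",
          "dr": "10.2.10.25",
      }

      TEMPLATE = """host app01 {{
          hardware ethernet {mac};
          fixed-address {ip};
      }}
      """

      for site, ip in sorted(SITES.items()):
          # Each site's dhcpd.conf gets the same MAC with its own address.
          print("# {0} site dhcpd.conf fragment".format(site))
          print(TEMPLATE.format(mac=SERVER_MAC, ip=ip))
      ```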

  4. william bishop

    I’m with you, Insane. My experience with the network side (I’m also Cisco certified on the networking side) is that there are far too many opportunities for things to go sideways… plus, no matter what, that Ethernet packet payload just keeps coming up in front of me.

    Really, I’d go there again with SAN boot, but it’s so much of a headache, and I don’t really need it. I lose a blade, no biggie: another blade takes the workload, I rebuild the host and move the workload back. We build in N+1, so I don’t have any heartburn over the outage to begin with.

    1. Jesse

      I’ve seen it done using port zoning – which of course is…well…not recommended…

      It’s cool that way, a blade fails, server monkey pulls it, drops a replacement in, hits the power, and presto, new server.

    2. InsaneGeek

      I’ve often thought about exploring using the SAN as a software mirror of the local drive, outside of VMware. It seems slightly insane in that you are paying for even more storage and then using software mirroring on top of it, but I see these benefits (rough sketch after the list):

      1) Initial SAN configuration is much easier; I actually have an OS I can interact with rather than just an HBA BIOS
      2) I can now roll back the entire system to a snapshot during patching, etc.
      3) I now have a complete DR copy of the system
      4) Sysadmins are happy in that if some storage event occurs, they can still get to the system logs
      5) Backup windows… how about snapshots or clones instead
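      Purely as an illustration of that idea, here’s roughly what the software mirror could look like on a plain Linux host (not the ESX service console specifically), using md RAID-1 with the SAN leg marked write-mostly so reads stay on the local spindle. The device names are hypothetical, and converting an already-populated system disk into a mirror takes more steps than this; it’s a sketch of the array layout, not a procedure I’ve run in this setup.

      ```python
      #!/usr/bin/env python
      """Sketch: mirror a local system disk against a SAN LUN with Linux
      md RAID-1, keeping reads on the local spindle. /dev/sda2 (local)
      and /dev/sdc2 (SAN LUN) are hypothetical device names."""
      import subprocess

      LOCAL_DEV = "/dev/sda2"  # local partition (assumption)
      SAN_DEV = "/dev/sdc2"    # matching partition on the SAN LUN (assumption)
      MD_DEV = "/dev/md0"

      def run(*cmd):
          print(" ".join(cmd))
          subprocess.check_call(cmd)

      # Build a two-way mirror; --write-mostly on the SAN leg tells md to
      # prefer the local disk for reads while still mirroring every write.
      run("mdadm", "--create", MD_DEV,
          "--level=1", "--raid-devices=2",
          LOCAL_DEV, "--write-mostly", SAN_DEV)

      # Record the array so it assembles at boot.
      run("sh", "-c", "mdadm --detail --scan >> /etc/mdadm.conf")
      ```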

      1. Jesse

        Exactly – it’s amazing how much people *THINK* they’re backing up, only to find that without an image copy of their boot volume they are still going to end up rebuilding an OS.

        On Saturday morning, at about 4:30am in an IHOP in Arlington, VA, I was tasked with showing how VMware would provide DR.

        I set up the remote mirrors yesterday afternoon; it took me about 10 minutes.

        I set up the snaps of the secondary volumes today (since this is a Clariion) and brought them online in about 10 minutes.

        I mounted a server from the downtown VMware host on the DR VMware host and booted it. All you have to do is change the IPs and you have an EXACT image of your production server, up to the MINUTE of the split/failure.

        Why don’t more people do this?

  5. Tom

    Hey Jesse,
    I have a question concerning ESX performance in a BFS environment. It still seems to me that having a local boot instead would allow you to have a separate disk for swap. This would keep that traffic local, off the SAN, and away from any possible latency. While I see all the advantages, when it comes down to pure performance BFS may not be the best. Any thoughts, anyone??

    1. Jesse

      Hey – Long time. 🙂

      First off, VMWare BFS and a regular OS install BFS are two different monkeys.

      In most environments, booting from SAN is almost always faster than booting from internal disks, because instead of the typical 128MB-512MB of cache on an internal controller, you have storage arrays like the Clariion with 4-8 *GIGS* of cache. All swapping is done directly to cache, and as such I’ve seen as much as a 20%-30% performance improvement just from swap.

      However, that being said, boot from SAN on VMware isn’t like booting from SAN with a natively installed OS. VMware itself swaps to the boot disks, and as I said above, a caching array will almost always outperform physical drives on cached writes. The read-hit rate on swap data is also quite high (estimated 80% or better), because by definition memory thrown to swap isn’t there for long; it’s almost always recalled to active memory quickly.

      For the VMs themselves, swapping is done within the VMFS volume (remember the checkbox for where you want to store your paging file), so even in a situation where you have VMware booting off internal volumes, your VM swapping is being done to SAN disks and not internal ones. (Remember, this is also a requirement for VMotion, because it allows active-state memory to get offloaded to a common area quickly, then handed over to the receiving system.)

      If you right-click on the cluster, click Edit Settings, and then go down to “Swapfile Location” in the tree on the left, you’ll see what I’m getting at.

      Hope this helps. 🙂 Give me a call some time, I owe you a beer.
