Ok, I’m curious as to whether anyone has an answer for this.
Why don’t more people boot VMware ESX from the SAN?
It occurred to me the other night that I have two 36 GB drives in each of my servers, of which I use maybe 10 GB, when I already have a high-availability storage solution at my fingertips. I’ve got plenty of storage space, not even counting the vault drives.
So I tried it. I took one of my offline VMware boxes (I use DPM, so at any given time two of my three VMware hosts are probably in standby mode) and popped the drives out of it.
I turned it on, went into the BIOS, disabled the onboard RAID controller, and enabled the boot BIOS on one of the Emulex HBAs.
I created an 18 GB LUN on the CLARiiON and assigned it to the host as LUN 0, and poof, I have a boot disk.
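For anyone who’d rather script the CLARiiON side of that, it can be done from Navisphere CLI. A rough sketch only: the SP address, RAID group, storage group name, and LUN numbers are all placeholder examples, and the exact naviseccli syntax can vary by FLARE release:

```shell
# Bind an 18 GB RAID-5 LUN (ALU 18 here) out of RAID group 0
# -- SP address, RAID group, and LUN number are all examples
naviseccli -h <sp-address> bind r5 18 -rg 0 -cap 18 -sq gb

# Present it to the host's storage group as host LUN 0,
# so the HBA's boot BIOS sees it as the boot device
naviseccli -h <sp-address> storagegroup -addhlu -gname ESX-HOST1 -hlu 0 -alu 18
```

The important detail is the `-hlu 0` part: the HBA boot BIOS expects the boot device at host LUN 0.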
Worked like a charm. The one (pleasant) surprise is that VMware seems aware of the multipathed boot device even without any form of PowerPath on the system. (That was my biggest concern.)
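If you want to see what ESX actually thinks of those paths, the service console can show you. A quick check, assuming ESX 3.x-style tooling (the vmhba name below is just an example):

```shell
# List every path ESX sees; the boot LUN should show up with
# multiple paths, one per fabric, under the native multipathing policy
esxcfg-mpath -l

# Narrow the output to one adapter if the list is long
# (vmhba1 is an example adapter name)
esxcfg-mpath -l | grep -A3 vmhba1
```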
So now I have my VMWare infrastructure running on a host with ZERO fixed-disk drives spinning in it.
So has anyone else tried this and know of any gotchas I may not have run across yet? I’ve done Windows and Linux native boot-from-SAN many, many times, but this is my first attempt with VMware.
I haven’t, however, tried pulling a path to see just HOW resilient it is… I should probably try that before I convert the other two systems to diskless operation, right?
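For what it’s worth, here is the path-pull test I have in mind, sketched out. The commands assume ESX 3.x service-console tooling, and pulling the fiber is the low-tech part; the file path is just a scratch location:

```shell
# 1. Note which path is currently active for the boot LUN
esxcfg-mpath -l

# 2. Pull the fiber on the active HBA (or disable its switch port),
#    then generate some I/O from the service console to force a failover
dd if=/dev/zero of=/tmp/failover-test bs=1M count=100

# 3. Confirm the standby path took over and nothing hung
esxcfg-mpath -l
rm -f /tmp/failover-test
```

The thing to watch for is whether the failover happens fast enough that the service console and running VMs never notice, or whether anything stalls long enough to throw SCSI errors.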