HomeLab – The Next Generation

HomeLab – Before

This is the “Lab” I’ve used for probably the last 5 years or so. It’s a Dell Precision T7400 with 64GB of PC2-5300F memory, dual quad-core Xeon X5460 processors, 8x 2TB Seagate SATA drives, and 2x 500GB Samsung 850 EVO SSDs, and it runs VMware ESXi 6.0 nicely.

Not a bad toy, in the grand scheme of things. Yeah, it’s *WAY* out of date… but it has served me well for playing with VMware and as a dumping ground for the various “test cases” I’ve needed over the years.

For example:

  • A 4-node Isilon cluster for testing and scripting
  • A full Windows AD/Exchange infrastructure for backup testing
  • An EMC Control Center infrastructure (back in the day), 5 hosts
  • Various coding environments: CentOS 7, Fedora 28
  • A Windows Server 2012 box running Veritas Volume Manager (for migration testing)

You get my drift… It’s great to have a “burn down” environment that is entirely within your own control (and to have the power switch within reach for emergencies).

But there have been a few things I’ve not been able to play with.

The X5460 processor isn’t supported by ESXi 6.5, so I’ve been hamstrung there; I’d hit a plateau in what I could test and play with as far as upgrades go.

And it’s all still a single host, so anything involving vMotion, HA/DRS, or vSAN was beyond my reach.

So I decided I needed an upgrade. A friend had what can only be described as an early “blade” enclosure floating around, and donated it to the cause. It’s a Dell C6100 enclosure: 4 blades, with distinct cross-connects for the disks. Not a bad toy.

The first new member of my family.

These are great little blades: dual E5540 (quad-core, 2.5GHz) processors and 32GB of PC3-10600 RAM in each blade. I did some research and found that these can be had for anywhere from $250 (for a 2-blade unit) up to about $1,000 (for a fully loaded 4-blade unit).

Dell C6100 – Side-by-Side/Over-Under configuration (top panel removed)

They’ll support up to 12 sticks of PC3-10600 RAM in each node, so if you wanted to, you could fill all 4 nodes with 96GB of RAM each (12x 8GB) without really breaking the bank. (I bought 4x 8GB sticks for about $50 to fill out the last node.)

So placement was my next issue. I don’t have a rack in my basement anymore (don’t judge), so I needed a place to put them that gave me easy access and kept them stable. A $25 wire shelf from Lowes did the trick nicely. I added a Dell PowerConnect 5324 managed gigabit switch (also something I had lying around) to the rack as both my interconnect and “back-end” switch.

Because I had a specific purpose in mind, I also found a second C6100 on eBay so that I would have 8 nodes to play with… and “mounted” both enclosures in the rack with the network switch.

That’s 8 VMware nodes in 4U of rack space.

The front view – each enclosure has 12 3.5″ disk bays. I found 120GB SSDs on Amazon for $21 each for the cache volumes, and repurposed the 8x 2TB drives from my old lab box so that each node got one 2TB volume. (The original box had 8x 500GB disks in it; after redistributing everything, each node gets 1x 120GB SSD, 1x 500GB SATA, and 1x 2TB SATA.)

The back-end… (ugly, but functional)

So I carved the switch into 3 parts… sort of. The blue links are the “Primary” network (vmnic0), used for data and external access; they’re in VLAN 2. The white links are the “Storage” back-end network (vmnic1), used for vMotion, HA/DRS, and vSAN; those are in VLAN 100, which doesn’t have an uplink.

Same gigabit switch, so performance isn’t great at the moment, but it works.
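
If it helps to picture the ESXi side of that carve-up, here’s a rough pyVmomi sketch of what stamping the storage network onto each node could look like. The vCenter name, credentials, port-group name, and IP range are placeholders rather than my actual config; and if the PowerConnect ports are plain untagged access ports, the port group’s vlanId stays 0 (you’d tag 100 there if you trunked instead).

```python
# Rough sketch, not my exact config: pyVmomi against vCenter, run once per node.
# The vCenter name, credentials, port-group name, and IP range are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def build_storage_network(esx, ip):
    net = esx.configManager.networkSystem
    # vSwitch1 rides on vmnic1 (the white "storage" links)
    net.AddVirtualSwitch(vswitchName="vSwitch1", spec=vim.host.VirtualSwitch.Specification(
        numPorts=128,
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic1"])))
    # Port group for the back-end traffic; vlanId stays 0 with untagged access
    # ports on the physical switch (tag 100 here if you trunk instead)
    net.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
        name="Storage", vlanId=0, vswitchName="vSwitch1",
        policy=vim.host.NetworkPolicy()))
    # VMkernel port on that port group for the back-end traffic
    vmk = net.AddVirtualNic(portgroup="Storage", nic=vim.host.VirtualNic.Specification(
        ip=vim.host.IpConfig(dhcp=False, ipAddress=ip, subnetMask="255.255.255.0")))
    # Flag it for vMotion (vSAN traffic gets enabled on it separately)
    esx.configManager.virtualNicManager.SelectVnicForNicType("vmotion", vmk)

si = SmartConnect(host="vcenter.example.lab", user="administrator@vsphere.local",
                  pwd="CHANGEME", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    hosts = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for n, esx in enumerate(hosts.view, start=11):
        build_storage_network(esx, "192.168.100.%d" % n)
    hosts.Destroy()
finally:
    Disconnect(si)
```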

The black links are for IPMI/management; I put them (also) in VLAN 2 so I can get to them from my desktop. I screwed up my math, though: I forgot that 8×3 = 24 and I have a 24-port switch, which doesn’t leave room for an uplink, so I pulled one link and will move things around as I need to. I have a keyboard and mouse that I can move between the units as necessary, so it’s not like that’s hyper-critical.
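
One nice side effect of having the BMCs reachable from my desktop is that checking on all 8 nodes becomes a throwaway script. A quick sketch using ipmitool; the addresses and credentials below are placeholders for whatever your BMCs are actually set to.

```python
#!/usr/bin/env python3
# Quick power check of all 8 C6100 BMCs over the management VLAN.
# The IPs and credentials are placeholders for whatever your BMCs are set to.
import subprocess

BMC_IPS = ["192.168.2.%d" % n for n in range(111, 119)]   # 8 nodes
USER, PASSWORD = "root", "CHANGEME"

for ip in BMC_IPS:
    cmd = ["ipmitool", "-I", "lanplus", "-H", ip, "-U", USER, "-P", PASSWORD,
           "chassis", "power", "status"]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(ip, "->", (result.stdout.strip() or result.stderr.strip()))
```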

My 4-node vSAN cluster

So here you see each node’s 2TB SATA disk and each node’s 120GB SSD. This is the part I’m still learning about. It’s my understanding (and I encourage anyone to correct me if I’m wrong) that the SSD is used as a sort of flash cache: writes land on the local SSD first and are then de-staged from there to the capacity disks, with copies kept on other nodes in the cluster. I still haven’t quite figured out how the back-end protection is handled… I just know there is some level of redundancy to guard against a single node failure. I’ll keep reading.
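
For what it’s worth, here’s the sort of pyVmomi snippet I’d use to double-check that each node is actually presenting its SSD and its spinning disks the way the screenshot shows. The vCenter name and credentials are, again, placeholders.

```python
# Sketch: walk every host and flag which local disks report as SSD, to sanity-check
# the one-SSD-plus-spinners layout before claiming disks for vSAN.
# The vCenter name and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.lab", user="administrator@vsphere.local",
                  pwd="CHANGEME", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    hosts = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for esx in hosts.view:
        print(esx.name)
        for lun in esx.configManager.storageSystem.storageDeviceInfo.scsiLun:
            if not isinstance(lun, vim.host.ScsiDisk):
                continue
            size_gb = lun.capacity.block * lun.capacity.blockSize / (1024 ** 3)
            kind = "SSD (cache)" if lun.ssd else "HDD (capacity)"
            print("  %-14s %6.0f GB  %s" % (kind, size_gb, lun.canonicalName))
    hosts.Destroy()
finally:
    Disconnect(si)
```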

Before you go trying to hack in, 50micron.net is my internal network. No, I’m not stupid enough to make it available to the outside world. 😉

The goal, in all of this, is for me to have a platform where I can easily simulate a “production” vSAN environment and see what breaks it, what works, and what doesn’t, so that when someone asks me “Have you ever done xxxx?” I can answer honestly. (I’ve never, in my career, told a customer that something worked when I hadn’t actually seen it work – something that drove my sales people nuts at times – but there is often a bit of a disconnect between marketing and reality, and the one thing I’ve got to my name that no one can take away is my sense of ethics.)

So, next steps… I need a better back-end. I’ve run storage on Gig-E before, and while it works in a pinch when you don’t have other options, it isn’t a great option either. Looking around for a better back-end for the storage, I started thinking about InfiniBand… and a little digging provided the win. I’m waiting on a 32-port InfiniBand switch I found on eBay for $56, and 8 low-profile QLogic 7340 InfiniBand adapters. It was a shot in the dark, but I think the 40Gbit back-end will be a big step up… and it will be fun to see if I can get it configured without breaking my current storage. 🙂

I’ll keep you in the loop.
