In case you’re wondering…

Point of reference – A few months ago I wrote a post, never published, that started with the line:

“My gods I need to work with technology that wasn’t conceived of in the 1990s.”

With that in mind, in case you’re wondering where I’ve been this past month or so…

I’ve been playing with this beast…

8 Engine VMAX

225 400GB SSD drives (90 TB raw)

Direct Attached to *ONE* host.

Biggest.  Thumbdrive.  Ever.

Well, I had been saying I needed to get some serious hands-on VMAX experience. When you put a request like that out there, sometimes the universe answers LOUDLY. 😉



  1. Damn, that’s an insane amount of IOPS (got to be over a million) for a single direct-attached host. What’s the workload of the server?

    The power savings compared to the equivalent 6-7k fibre-channel drive solution must be ridiculous…

    Jealous. :p

  2. I know –

    Without giving the customer away, it’s essentially a giant Point of Sale system >:-) I’m guessing that when all is said and done we’ll be between 10 and 20 percent utilized when it comes to performance. 😉

    Power… and floor space. The existing system is using IBM DS series… 🙂

    They’re going to reclaim like 70% of their floorspace. 🙂

  3. This is a really interesting looking beast. Given that the SSDs appear to be overkill (if you’re only using 10-20% of the IOPS), why not use EMC’s FAST technologies to reduce the spindle count (and footprint)? Was the performance concern that big? Or was the price right?

    I am also curious about the 70% floorspace saving. A three frame DS8300 with 146GB disks offers about the same raw capacity.

    Great photo BTW, most clients I work with don’t let you take photos in their datacenters.

    1. The concern was that high…

      They have MANY frames in the existing setup… I wasn’t the one who architected the sale so I’m not sure what the logic was behind it.

      I know they have *50* or so racks of IBM disk that are being replaced with this + one Three-tier VMAX running FAST-VP for the open systems side of the house.

      It will be a thing of beauty once it’s done.. 🙂

    2. “I am also curious about the 70% floorspace saving. A three frame DS8300 with 146GB disks offers about the same raw capacity.”

      Capacity != Performance
      I’ll be generous and assume the DS8300 with 90TB of capacity you’re referring to has 650 x 146GB 15K disks; that’ll generate <10% of the IOPS of the above VMAX, thus forcing you to buy an additional 30 or so *FRAMES* of DS8300 FC disks just so you can match the performance. Hence the recovery of 70% of the floor space from the previous solution.

      EFDs (without auto-tiering) make the most sense when you're buying disks strictly for IOPS. I've seen OLTP systems where they were using 1-2% of the drive's capacity, but needed the disks to meet the performance requirements of the system.
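To make the “<10% of the IOPS” claim concrete, here’s a back-of-envelope sketch. The per-drive figures are illustrative assumptions (roughly typical for the era), not vendor specs; the drive counts come from the comment above.

```python
# Back-of-envelope IOPS comparison (assumed per-drive figures, not vendor specs):
# a 15K RPM FC spindle delivers very roughly ~180 random IOPS; an enterprise
# flash drive (EFD) of that era delivers on the order of ~5,000 random IOPS.

FC_IOPS_PER_DISK = 180     # assumed per-spindle figure for a 15K FC disk
SSD_IOPS_PER_DRIVE = 5000  # assumed per-drive figure for an EFD

ds8300_disks = 650         # 650 x 146 GB ≈ 90 TB raw, as in the comment
vmax_ssds = 225            # 225 x 400 GB = 90 TB raw

ds8300_iops = ds8300_disks * FC_IOPS_PER_DISK   # ~117,000
vmax_iops = vmax_ssds * SSD_IOPS_PER_DRIVE      # ~1,125,000

print(f"DS8300: ~{ds8300_iops:,} IOPS")
print(f"VMAX:   ~{vmax_iops:,} IOPS")
print(f"DS8300 as a fraction of the VMAX: {ds8300_iops / vmax_iops:.0%}")
```

Under those assumptions the spinning-disk array lands around 10% of the flash array’s random IOPS at the same raw capacity, which is exactly the capacity-vs-performance gap the comment is pointing at.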

  4. Everything you say could well be exactly correct and true, but there are way too many assumptions here to actually know what the ‘before’ picture actually looked like… or whether the ‘after’ picture is horrendous over-kill in terms of IOPS. The ‘before’ picture could be lots of modular systems with tiny caches.

    Would love to see what sort of IOPS the client actually achieves. I hope they have lots of server grunt to actually drive all those SSDs.

    1. Sorry, I’ve just heard the “why do I need 200 drives dedicated for my 10,000 mailbox mail system when I can buy 10x2TB SATA drives and still have space left over” argument too many times over the years and jumped at it without fully understanding what you were hinting at. 🙂

      It does sound like a crazy amount of IOPS, but I’ve seen worse (but there aren’t a lot of workloads that can drive that amount of IOPS from a single host – I assume this “single host” is System z). I agree the 10-20% utilization is “concerning”, and because of that you might be right in that it was designed with a “I don’t care how much it costs just make sure performance isn’t an issue” guideline from the business without any kind of sanity check.

    2. There *ARE* a lot of assumptions here. As I said, I wasn’t here for the architecture, so I can’t speak to that. There may be more servers coming in down the road, planning for performance / growth / etc. There are too many questions.

      Actually the really funny part is that this array is not my primary focus, the Open Systems one is. I’m decidedly NOT a mainframe / iOS guy, so I couldn’t tell you. 🙂

      It’s cool though, from the purely ‘toy’ perspective.

      1. That and saying the phrase “Biggest. Thumbdrive. Ever.” is just damn fun. :p

  5. Sub-millisecond response time on reads and writes.

    Gotta love it. 🙂
