May 26

Multivendor or Single Source? Is there a right answer?

Every time I turn around, I seem to be running into the same question.

Is it better to be multi-vendor or single source?

Well, the easy answer to that is: it depends.  Different vendors do things differently, work better or worse with certain hardware, etc.

The argument in favor of a single-vendor solution is easy: cost, simplicity, management, interoperability.

Even if you’re buying a more expensive solution, there can STILL be major cost savings.

First, in staffing.  When you maintain multiple vendors, you have to maintain support staff knowledgeable in each vendor's products.

If you’ve got a storage team of five people, and two of them work almost exclusively on Veritas NetBackup, you *MIGHT* be lucky to get one subject matter expert for Tier 1 (i.e. Symmetrix), one for Tier 2 (Clariion), and one for NAS (Celerra).

But throw in HDS, IBM DSxxxx, XiV, IBM GPFS, IBM HPSS, NetApp, SONAS, Sun StorEdge, etc. etc. etc.  And what do you have?

You either have an overworked staff (and as I’ve discussed, union-protected salaried federal employees aren’t known for 70-hour weeks) or stuff just plain doesn’t get done.

If you don’t spend the money on staffing, you *WILL* spend the money on support and professional services.  Now, support is one thing.  If my XiV or Symm or whatever loses a hard drive, I expect the vendor to own that problem and fix it.

They will *NOT* however send people out to help with day-to-day provisioning without a pretty hefty P.O. associated with it.

And the last reason for a single-vendor option is simple: I want stuff that is going to work together.  Now yes, functionality costs, but one of the things I like about EMC is that when it comes down to it, it *ALL* works together.  I can move data from Symm to Clariion or vice versa using SAN Copy, and I can migrate file servers to Celerra and between storage tiers as needed.

There is nothing worse than needing to expand one storage system by 20TB while having the capacity somewhere else, but unusable.  It means you’re wasting money buying storage you already own.  (Especially when your purchase cycle is 4-6 months on average.)

Not a happy thing to explain to the boss.

“Yes, we have 80TB of Clariion available, but the IBM DS4800 is running short, so I need to spend an extra $100k on disks.”

“Yes, I know this isn’t budgeted, but the data grew faster than we’d expected.”

(Of course, you can span filesystems across arrays, as long as it’s not replicated data, because you can’t get a consistent split when half of your extents are on one array and half on another.)

2 comments

  1. Han Solo

Bah, even INSIDE single-vendor shops you have to have experts in totally different software stacks if you use that vendor’s different tiers.  E.g., DMX storage has a TOTALLY, radically different tool set than Clariion storage does.

    The answer is simple, put the storage all behind an IBM SVC Cluster.

Single interface to do ALL your storage provisioning, snapshots, replication, migrations, etc., no matter what the vendor or storage array type.  It’s a no-brainer for having multiple tiers.  Not to mention you can stop paying all of that maintenance on all those different pieces of software for each tier.

You can buy whatever you want for Tier 1, add in some cheap Tier 2, and even go wild and buy some ultra, ultra cheap Tier 3-4 storage from someone like HP or LSI, or heck, even a fly-by-night vendor, who cares… and it’s ALL managed from the same interface by the storage team.

    Of course the vendors hate this idea because:

a) it totally ruins their income stream on all the different pieces of software they sell you and charge maintenance on, e.g. purchasing SRDF *and* MirrorView from EMC to do replication.  Instead you can use your replication/snapshot licenses on whatever storage you want, in whatever capacity you want.

b) it totally ruins the income stream they get from making you REPURCHASE all your software each time you upgrade the array, since their licenses (SRDF, TimeFinder, SnapView, MirrorView) are all tied to THAT array.  With something like SVC, you buy the replication/snapshot licenses ONCE and keep them forever.

c) it totally ruins their income stream by forcing you to replicate to the same device on the other side.  E.g., if you want to replicate DMX storage to your DR site, you have to purchase expensive DMX storage to just SIT THERE forever.  With something like SVC, you could do sync/async replication off your DMX to a very low-cost storage solution at the DR site if you wanted to.

    1. Jesse

      Bah back atcha. 🙂

      I’m on record as not being a big fan of appliances. (Even the EMC ones)

Appliances are nothing but a hardware vendor’s feeble attempt to increase profit margin by putting software on consumer-grade servers and calling it a hardware solution.

With SVC you’re forcing everything through another appliance, and that adds latency.  (Though I’ve never been able to get IBM to commit to a number for the inter/intra-device latency added when you throw a device in place to interpret/redirect IOs.)  And since *all* of these boxes are nothing but glorified PCs… you get the picture.

When you go from ‘Storage–>Switch–>Host’ to ‘Storage–>Switch–>SVC appliance–>Switch–>Host’, you are actually adding two full hops *PLUS* the processing time of the SVC appliance to every IO.

If you must do it, an Invista blade for Cisco is a better way to go, though not by much.

That also doesn’t solve the problem of having different hardware vendors.  I mean, the storage STILL has to be provisioned; the only difference is that you’re provisioning it to the SVC appliance and THEN provisioning it to the host from there.

      While you do gain some flexibility, you double the time it takes to add new storage.
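The extra-hop argument above can be put into rough numbers.  This is a minimal back-of-envelope sketch, not measured SVC data; the per-hop and per-appliance figures are illustrative assumptions, since (as noted) the vendor never committed to a number:

```python
# Back-of-envelope model of the per-IO latency an inline virtualization
# appliance adds versus a direct Storage->Switch->Host path.
# All constants below are assumed/illustrative values, not vendor specs.

SWITCH_HOP_US = 2        # assumed switch forwarding delay per extra hop (microseconds)
APPLIANCE_PROC_US = 60   # assumed appliance processing time per IO (microseconds)

def added_latency_us(extra_hops: int, appliance_us: int) -> int:
    """Latency added on top of the direct path, in microseconds."""
    return extra_hops * SWITCH_HOP_US + appliance_us

# Direct path: Storage->Switch->Host.
# Virtualized path: Storage->Switch->SVC->Switch->Host = 2 extra hops + processing.
print(added_latency_us(extra_hops=2, appliance_us=APPLIANCE_PROC_US), "us added per IO")
```

With these assumed numbers the appliance's own processing time dominates the switch hops, which is why pinning down that processing figure matters more than counting hops.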
