EMC/Cisco Announce 8Gigabit Fibrechannel – Will there be a rush?

I doubt it actually.

From everything I’ve seen, customers aren’t using the 2 or 4 gigabit they already have – so why the sudden rush to faster and faster fibrechannel?

Mostly it’s to keep fibrechannel looking relevant in a world of 10G Ethernet and Infiniband.  While I personally view SCSI over FC as a much more mature protocol, I guess there is always the possibility that other options will be adopted.  (I’ve done enough iSCSI implementations to know the powerful draw of cheap crap.)

My supposition is that while the upgrade to 8Gig FC will happen, it will not be a rush.  Much as the push to 4Gig from 2Gig happened – through attrition.

I’ve been consulting for a long time.  I don’t know a single customer that went out and bought new switches, new HBAs and new storage just for the 4Gig technology.  They did it the way they all do it – when replacing hardware that was going to be upgraded anyway.

I foresee the same adoption curve with 8Gbit.  It will happen because it’s a logical progression.

I would start finding a graceful way to explain to customers why they aren’t seeing a performance improvement, though (outside of coincidentally adopting SSD at the same time).  If you’re not using all of your 4Gigabit (or even your 2Gigabit, for that matter), you’re going to end up not using all of your 8Gigabit….

…even faster.

11 comments

    • InsaneGeek on January 12, 2009 at 11:01 am

    I think the only people that are really interested in this are the ones that have to use ISL ports between their core switches. If I have 1000 ports, and it helps me halve the quantity (or double the overall speed) of my ISLs, that can be a pretty big win… but the ratio of those organizations compared to all the others in the world is, I’m guessing, a fairly low number. It could also be a boon to storage arrays that have fewer connectivity ports compared to a DMX, Tagmastore, etc., where you’ve got only a couple of FC ports per node (whether or not they actually would be requested or can support that level of bandwidth is something else). To the host it seems a rather narrow point solution for when you truly are out of I/O ports and bandwidth and can’t physically add any more.
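(Quick aside: here is the ISL math InsaneGeek is describing, as a rough sketch. The 40 Gbit/s aggregate is just a made-up example figure.)

```python
import math

def isls_needed(aggregate_gbps, link_gbps):
    """How many ISLs it takes to carry a given aggregate bandwidth."""
    return math.ceil(aggregate_gbps / link_gbps)

# Hypothetical requirement: 40 Gbit/s of traffic between two core switches.
aggregate = 40
for speed in (4, 8):
    print(f"{speed}G ISLs needed: {isls_needed(aggregate, speed)}")
# 4G -> 10 links, 8G -> 5 links: half the core ports burned on switch-to-switch hops.
```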

  1. That does make sense. In a Core/Edge topology where the storage is connected to the core and the hosts are connected to the edge, there is the potential for saturating a 2Gig link during, for example, a non-TimeFinder/Snap-based backup in which the backup is being taken right from the host (NDMP or SSO).

    This was normally mitigated, of course, by trunking ISL connections, but when ports cost on average about $1,000 each, throwing multiple pairs of ports at an interconnect can become an expensive solution.
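To put some rough numbers on that (the per-stream rate, the usable-payload figures, and the $1,000-per-port price are ballpark assumptions, not quotes):

```python
# Back-of-envelope on ISL saturation during host-based (non-snap) backups.
# Assumptions: ~150 MB/s per backup stream, ~$1,000 per switch port, and an
# ISL burns one port on each end. Usable payload figures are rough (8b/10b
# encoding and framing eat some of the nominal rate).

usable_mb_s = {2: 200, 4: 400, 8: 800}   # approximate usable MB/s per link speed
stream_mb_s = 150                        # one host streaming a backup to the core
port_cost = 1000                         # dollars per port

streams = 6
demand = streams * stream_mb_s           # 900 MB/s crossing the edge-to-core ISLs

for speed, usable in usable_mb_s.items():
    links = -(-demand // usable)         # ceiling division
    print(f"{speed}G: {links} ISL(s), roughly ${links * 2 * port_cost} in ports")
# 2G: 5 ISLs (~$10,000), 4G: 3 ISLs (~$6,000), 8G: 2 ISLs (~$4,000)
```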

    I usually don’t bother myself with costs; knowing how much the toys cost never really contributed much to the joy of playing with them. 🙂

    • JM on January 13, 2009 at 7:25 pm

    Agreed on the cost statement Jesse. Speaking of cost, if you needed the speed across a single link you’ve had the option of 10Gbps FC for years (on Cisco FC gear anyhow). However, if you’re just connecting two switches in the same datacenter you can get 48Gbps of ISL bandwidth in a 12x4Gbps line card vs. 40Gbps in a 4x10Gbps line card. It always bugged me that by going with 10Gbps FC you’re leaving 8Gbps of bandwidth per slot on the table. The 4Gbps card buys you a bit more flexibility too since you can add or remove 4Gbps from a port channel at a time rather than 10Gbps. Both options have their pluses and minuses.
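JM’s slot math, spelled out:

```python
# Per-slot ISL bandwidth and port-channel granularity for the two line cards
# JM mentions.
cards = {"12 x 4Gbps": (12, 4), "4 x 10Gbps": (4, 10)}

for name, (ports, speed) in cards.items():
    print(f"{name}: {ports * speed} Gbps per slot, "
          f"port channel grows or shrinks in {speed} Gbps steps")
# 12 x 4Gbps: 48 Gbps per slot, 4 Gbps steps
# 4 x 10Gbps: 40 Gbps per slot, 10 Gbps steps
```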

  2. Agreed. I’ve been monitoring fiber channel fabrics for a few years using tools like Cacti & MRTG and haven’t noticed a dent even in a 2Gb FC link, let alone a 4Gb link.

    I’ve heard people from Brocade say that 8Gb and above would see more use in virtualization.
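For anyone who hasn’t graphed this themselves, the arithmetic behind an MRTG/Cacti-style utilization graph is just two counter samples and a division; how you poll the counter depends on your switch, and the numbers below are invented:

```python
def link_utilization(bytes_t0, bytes_t1, interval_s, link_gbps):
    """Percent utilization between two polls of a port's byte counter."""
    bits_per_second = (bytes_t1 - bytes_t0) * 8 / interval_s
    return 100.0 * bits_per_second / (link_gbps * 1e9)

# Example: a port moved 7.5 GB during a 5-minute polling window.
moved = int(7.5 * 1024**3)
print(f"2G link: {link_utilization(0, moved, 300, 2):.1f}% busy")
print(f"4G link: {link_utilization(0, moved, 300, 4):.1f}% busy")
# Roughly 10.7% and 5.4% -- barely a dent, which matches what the graphs show.
```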

    1. I could see that, especially in bladecenter environments where 10-15 blades’ worth of traffic is moving up two or four uplinks from the internal switches to the enterprise/core switches. I also know, however, that VMware (and presumably other virtualization engines, *cough*Microsoft*cough*) uses some pretty nifty write-combining algorithms that reduce the total bandwidth by packaging up multiple smaller I/Os into larger frames.

      Just a thought.
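On that write-combining point: I don’t have VMware’s actual code in front of me, so treat this as a toy illustration of the general idea (merging adjacent small guest writes into fewer, larger I/Os before they hit the HBA), not their implementation:

```python
# Toy illustration of write-combining: adjacent small writes queued by guests
# get merged into fewer, larger I/Os before they ever reach the HBA.

def coalesce(writes, max_io=256 * 1024):
    """Merge (offset, length) writes that are contiguous, up to max_io bytes."""
    merged = []
    for off, length in sorted(writes):
        if merged and off == merged[-1][0] + merged[-1][1] \
                and merged[-1][1] + length <= max_io:
            merged[-1] = (merged[-1][0], merged[-1][1] + length)
        else:
            merged.append((off, length))
    return merged

# Sixteen contiguous 8 KB guest writes become one 128 KB I/O on the wire.
guest_writes = [(i * 8192, 8192) for i in range(16)]
print(len(guest_writes), "->", len(coalesce(guest_writes)), "I/O(s)")
```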

    • TimC on January 23, 2009 at 6:49 pm

    Cisco? How many 8Gb ports can they service in a blade without being oversubscribed? 10? Are they still holding out hope that everyone will jump on their Nexus and FCoE bandwagon? (PSST Cisco: I wouldn’t put all your eggs in that basket.)

  3. Tim – Very good point. How many 8Gbit ports can they realistically have on a blade? What kind of crossbar traffic is that going to entail? Are we going to have 24-port blades with 192Gbit of available bandwidth on the back end?

    You get back, in this case, to having to design fabrics down to the ASIC level: making sure you plug your host and storage ports into the same chip to cut down on inter-ASIC traffic.
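Some rough numbers behind that question (the 96 Gbit/s per-slot figure is purely hypothetical, picked only to make the ratio obvious):

```python
# What a director line card would need toward the crossbar to run 8G wire-rate.
def backend_demand_gbps(ports, port_speed_gbps):
    return ports * port_speed_gbps

SLOT_BANDWIDTH = 96  # hypothetical Gbit/s available from one slot to the crossbar

for ports in (24, 48):
    need = backend_demand_gbps(ports, 8)
    print(f"{ports} x 8G ports -> {need} Gbit/s needed, "
          f"{need / SLOT_BANDWIDTH:.0f}:1 oversubscription on a {SLOT_BANDWIDTH}G slot")
# 24 ports -> 192 Gbit/s (2:1), 48 ports -> 384 Gbit/s (4:1)
```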

    • TimC on January 24, 2009 at 10:41 am

    Ahhh, but that’s simply Cisco. Brocade isn’t nearly as bad backplane-wise. AND on their new blades, if the storage and hosts are on the same blade, the traffic never hits the backplane and there’s zero oversubscription, so you only have to get them onto the same blade in that case. Obviously in some cases you can’t get EVERYTHING on the same backplane, but it’s STILL got massively more bandwidth there as well.

    I was a huge Cisco fan at first and figured that after the Brocade/McData merger we’d see Brocade belly-up by now, but they’ve really pulled it off (IMO). The DCX is pretty darn nice, and I’m still left wondering if Cisco is actually going to counter. The sad part, in this case, is that I think they can actually get by doing absolutely nothing “because they’re Cisco”. Hopefully the market starts punishing them sooner or later so they get off their laurels and put out a serious update to the 95xx line-up.

    • TimC on January 24, 2009 at 10:42 am

    er,

    *sometimes you can’t get EVERYTHING on the same blade, but it’s STILL got massively more bandwidth on the backplane as well.

    • william bishop on February 1, 2009 at 4:41 pm

    I can tell you that in a serious blade environment, using virtualization, I still don’t peg my ISLs or my FC switches at 4G… Will I go to 8G? Yep. Because InsaneGeek is right: I can move my bottleneck further up the line with 8G on my array and my core connections. I don’t need it downriver at all, just like gig at the desktop doesn’t do a lot of good in many instances (the bottleneck is further up).

  4. William – If you don’t peg your ISLs at 4G, I don’t think you’re going to come anywhere close anywhere else. 99.99% of the time the ISL *IS* the bottleneck.

    The long and the short of it is that, very much like CPU speeds, I believe switching speeds have already outpaced what most of us actually use. The only real difference between 4G and 8G is the signaling rate: how fast bits get clocked onto the wire. The bits still travel down the glass at the same speed, and the latency is still basically the same, because they haven’t changed the speed of light that I know of.
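A quick back-of-envelope on that, using round numbers (a 2 KB frame over a 100 m run): the flight time through the glass doesn’t change at all, only the time to clock the frame onto the wire does, and at these scales that’s a couple of microseconds against millisecond-class disk latency.

```python
# Propagation delay is fixed by the glass; only serialization time scales
# with link speed. Round numbers: 2 KB frame, 100 m run, light at ~2/3 c in fibre.
FIBER_METERS_PER_SEC = 2e8
frame_bytes = 2048
distance_m = 100

flight_us = distance_m / FIBER_METERS_PER_SEC * 1e6
for gbps in (4, 8):
    serialize_us = frame_bytes * 8 / (gbps * 1e9) * 1e6
    print(f"{gbps}G: {serialize_us:.1f} us to serialize + {flight_us:.1f} us of flight time")
# 4G: ~4.1 us + 0.5 us;  8G: ~2.0 us + 0.5 us
```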

    I guess, on thinking about it, I can see the biggest benefit of 8G Fibrechannel coming when it’s paired with SSD disks. When you take the mechanical latency out of the mix, I think that if you can run 8G end-to-end (HBA all the way to disk), along with enough CPU power to push the data at top speed, you might actually start to fill the pipe.

    I’m still not sold on the benefits of SSD though, because until they overcome the limitations of NAND memory it’s still “volatile” in my book.
