SAN vs iSCSI. Any opinions?

I get email questions about SAN vs. iSCSI all the time.  Funnily enough, within minutes of finishing a post on the subject on another site earlier today, I got another email query about it.

I don’t see iSCSI making great inroads because it’s constrained by the limitations of IP, namely that IP is a connectionless protocol that doesn’t guarantee delivery of packets at all, let alone in-order delivery.  (FC doesn’t guarantee in-order delivery either, but instead reassembles the packets in order on receipt.)

Also, I think it’s universal that no one wants to put storage traffic and user traffic on the same network, especially since, to date, Gigabit is it.  Think about it: a client requests data, the request is sent to the server, which then pulls the information across the same network from another server or iSCSI appliance (such as a NetApp or Clariion), and then sends it back to the client over the SAME NETWORK.

That’s 3-4 times the network traffic.
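To put rough numbers on that round trip, here’s a back-of-envelope sketch.  The block size and hop count are purely illustrative assumptions, not measurements from any real setup:

```python
# Rough traffic model for one read when storage (iSCSI) and users share a
# single Gigabit network. All numbers are illustrative, not measured.

request_kb = 1           # client's read request (small)
payload_mb = 64          # data being read (large)

# Traversals of the shared wire for one read:
#   1. client    -> server      (request)
#   2. server    -> appliance   (request)
#   3. appliance -> server      (payload)
#   4. server    -> client      (payload)
traversals = 4

# Byte-wise, the big payload crosses the shared network twice,
# versus once if the storage hop lived on a dedicated SAN.
shared_bytes = 2 * payload_mb + 2 * request_kb / 1024
dedicated_bytes = 1 * payload_mb + 1 * request_kb / 1024

print(round(shared_bytes / dedicated_bytes, 2))  # ~2.0x the LAN bytes
```

So counting traversals you get the 3-4x figure, and counting bytes the heavy payload still crosses the shared wire twice.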

Unless you’re going to have dedicated networks for storage and IP, it doesn’t make much sense.  You’re still doubling your investment.  Now granted, with iSCSI you probably don’t need a dedicated SAN person (gasp, I might be out of a job soon!), but you do double the load on your network admins (and they are a feisty sort who don’t like having extra work heaped on them).

Though I’m guessing that in the long run, when Ethernet reaches 10Gbit, it will bring the total cost down quite a lot, but by then Fibre Channel will be so much the mainstream that it won’t matter.

I am surprised that in all my installs I’ve not seen more people using the IP-over-Fibre-Channel option that the full-port drivers offer.  If you could run IP over a 2Gbit fibre link, say for dedicated backup purposes, wouldn’t that make a lot more sense?  Especially since it would free up the extra Gigabit interface that comes on most servers for load balancing and failover of network connections.

I alpha-tested some of the first iSCSI drivers in 2000 and didn’t find them to be stable (or worthwhile), but then again, that was before the protocol was really refined.  In the interest of fairness I should probably revisit it, especially since we bought the iSCSI license on our Celerra. :)


I’m going to do something I’ve not done yet: I’d like to ask for comments on the subject.  As highly as I do think of myself, I don’t pretend to know everything.  If iSCSI is proving itself out, I’d like to know, and I’ll re-evaluate my stance on it.




    • on September 12, 2006 at 7:44 am

    iSCSI uses TCP, which is not a connectionless protocol.. (connection-full? ;). And TCP enables in-order delivery by reassembling segments in order on the receiving end, just like FC.
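    A toy sketch of that reassembly idea, just the ordering logic, not real TCP or FC:

```python
# Toy illustration of in-order reassembly: pieces tagged with sequence
# numbers arrive out of order, and the receiver reorders them before
# delivering to the application. Both TCP and FC do something like this;
# this is a simplification of the idea, not either protocol.

def reassemble(segments):
    """segments: list of (sequence_number, payload) in arrival order."""
    return b"".join(payload for _, payload in sorted(segments))

# Payloads arrive out of order but carry their sequence numbers:
arrived = [(1, b"C"), (0, b"iS"), (3, b" over TCP"), (2, b"SI")]
print(reassemble(arrived))  # b'iSCSI over TCP'
```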

    Also, most vendors recommend going with a separate network for iSCSI traffic. But keep in mind that Ethernet gear is MUCH cheaper than FC gear. Just look at the cost of FC HBAs vs. Ethernet NICs.. or FC directors/switches vs. Ethernet switches.

    The big benefit of FC over TCP/IP is lower latency and higher bandwidth. IP has more protocol overhead, which tends to slow things down.

    I’ve never seen anyone use IP-FC.. I don’t think it really makes much sense in most cases, because you’re generally only going to be able to communicate with other hosts attached to the same FC fabric. I’m not aware of any way to route your IP-FC connections out of the fabric and into your main LAN, so you won’t be able to talk to other servers on the LAN, or the Internet.. and even if you could route out to the rest of the LAN, you’d probably run into issues of control, since FC networks are generally run by a storage admin group and IP networks by a network admin group.

    And for SAN-based backups, most people just share tape/VTL devices over the SAN with regular FC and use some kind of device-sharing software option, like NetBackup SSO or Networker DDS.

    That being said.. iSCSI is still kind of “cutting edge” in my opinion: not the kind of technology most enterprises are going to want to rely on for tier-1 services. But there are many people experimenting with it successfully, and using it in tier 2/3..

    FCIP is used far and wide for replication, but when you wrap a SCSI packet in an FC packet and then wrap all of that in an IP packet, your overhead has to be through the roof.  I’m more partial to spending the extra money and putting the Gig-E boards in the back of the Symm; that way it hooks directly into the Ethernet network and can be routed accordingly.  You don’t get the performance that you do with RDF over Fibre, but it’s at least fewer physical devices to route through.
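    To get a feel for those stacked headers, here’s a rough calculation using approximate, typical header sizes (the exact byte counts vary by implementation, so treat these as ballpark figures only):

```python
# Ballpark per-frame overhead when tunneling FC over IP (FCIP).
# Header sizes are approximate/typical, for illustration only.

def fcip_overhead_pct(payload_bytes):
    fc = 36                      # FC frame header + CRC + EOF (approx.)
    encap = 14 + 20 + 20 + 28    # Ethernet + IP + TCP + FCIP headers (approx.)
    wire = payload_bytes + fc + encap
    return 100 * (fc + encap) / wire

print(round(fcip_overhead_pct(2048), 1))  # large data frame: ~5.4%
print(round(fcip_overhead_pct(64), 1))    # small command frame: ~64.8%
```

The byte overhead on full-size data frames is modest, but on the small command/status frames that make up much of SCSI chatter, the stacked headers really do dwarf the payload.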

    As I said, I’m always open to new ideas. We have an iSCSI-compatible Celerra (on Clariion) NAS box. Once we have our production data migrated over to the Symm (long story, the Clariion was a stop-gap measure because we didn’t have the data center ready for the Symm in time for our busy season), I might be forced to play with it a bit.

    We’re still in “start-up” mode so play-time is limited.  Sometimes I have to choose between this site and….oh….SLEEP. 🙂

    Thanks for your reply, and welcome. 🙂

    • on September 12, 2006 at 11:29 am

    I was actually talking about IP-FC in my original comment, which is IP over FC.. in response to your original point on why people don’t often use the IP functionality of their HBA drivers.

    FCIP is FC over IP, as you stated.. and then there’s iFCP, which is another protocol used to extend SANs, typically for long distance replication (used by Nishan).. There are way too many acronyms and protocols in the storage industry 😉

    Like you said, Gig-E is definitely the way to go with RDF.. FC is pretty weak for long-distance replication, because of the additional gear & encapsulation, and because there are two round trips required per write.. that can be mitigated with write acceleration on the gateways, at least.

    The Gig-E/multiprotocol boards on the DMX can actually do iSCSI natively now.. and some kind of VTL emulation as well from what I hear. I think that’s only in 5771 code.. could be wrong on that one though.

    • Jesse on September 12, 2006 at 11:56 am

    I’m just not sure I would ever want to run iSCSI, at this time an inherently underperforming protocol, on the “really fast, expensive, redundant” storage.

    The same goes for VTL emulation. Why would you want to take your $4 million DMX-3 (assuming you are silly enough to pay full retail and buy the petabyte configuration up front) and use it to emulate a $20,000 tape drive? I have enough emotional hang-ups about the 17TB Veritas disk staging unit that I’ve got on 500GB SATA Clariion devices, and those were free. (Sort of; they threw them in with the last quarter-million-dollar order we placed.)

    As far as the IP/FC mix-up goes: sorry, still suffering from my own brand of insanity, called sleep deprivation. It’s what you get when you have a wife, three kids, a full-time job, and an active online presence. :)

    Yes, I’m starting to move the 10.0.x.x addresses that Veritas is using now over to the Emulex cards. I figure it gives me faster backup times on my production databases and frees up the second internal Gig-E connection for load balancing.

    Enabling Spanning on the two LP1050’s will be my next trick. 🙂
