DWDM Limitations – how far is too far?

I saw this post on http://lordegg.wordpress.com and felt that the comment I posted there would make a pretty good topic here.

Most people don’t understand that the speed of light has become a serious limitation in computing. Even the original Cray, installed at Los Alamos in 1976, had something like a million individual wires pushing data, and no single one of them was more than about a foot long, because of the time it took to push electrons across them. (I wish I could remember the exact numbers, but I’ve been up for going on 20 hours now and my brain is shutting down.)


DWDM (dense wavelength-division multiplexing) is a great technology – it allows 4–8 different signals to travel down the same link, each on its own wavelength.

The catch is that when you get, say, 8 channels going down a 60 km link, you’ve created a very wide path indeed.

But you’ve not fixed the latency problem. Under ideal circumstances, latency over Fibre Channel is about 2 ms per kilometre.

2 ms per km over 60 km is 120 ms. And that’s each way – there’s a return trip for each ACK transmission as well.

Now when you add multiple data paths, the only thing that changes is that instead of having one I/O outstanding, waiting for its ACK, you’ve got four or eight.
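A back-of-the-envelope sketch of that point: the 60 km distance is the link above, but the 8 KiB synchronous write size and the channel counts are purely illustrative assumptions, and the per-kilometre delay uses fibre at roughly two thirds of c – the microsecond-scale figure worked out further down (and in the comments), rather than the millisecond figure above. The point it shows is that every write still waits a full round trip for its ACK; extra channels only let more of those waits overlap.

    # Back-of-the-envelope: more channels raise throughput, not per-I/O latency.
    # Assumptions: 60 km link (as above), 8 KiB synchronous writes, light at
    # ~2/3 c in fibre (~200,000 km/s). All figures are illustrative only.

    PROPAGATION_KM_PER_S = 200_000      # ~2/3 of c in glass
    DISTANCE_KM = 60
    IO_SIZE_BYTES = 8 * 1024

    one_way_s = DISTANCE_KM / PROPAGATION_KM_PER_S   # ~300 microseconds
    round_trip_s = 2 * one_way_s                     # write out + ACK back, ~600 us

    for outstanding in (1, 4, 8):
        # Each I/O still spends a full round trip in flight; parallel channels
        # just mean more of them are in flight at once.
        iops = outstanding / round_trip_s
        mb_per_s = iops * IO_SIZE_BYTES / 1e6
        print(f"{outstanding} outstanding I/Os: {round_trip_s * 1e6:.0f} us per I/O, "
              f"~{iops:,.0f} IOPS, ~{mb_per_s:.0f} MB/s")

With a single outstanding write you top out at roughly 1,700 synchronous writes per second no matter how many wavelengths the link carries; the extra channels only help if the application can keep several writes in flight at once.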

60 km is more than twice what I, as an engineer, would recommend without some sort of repeater, especially when you consider that optical cable is not an “ideal” transmission medium.

The speed of light has some profound implications for networking technology. Light, or electromagnetic radiation, travels at 299,792,458 meters per second in a vacuum. Within a copper conductor the propagation speed is some three quarters of this speed, and in a fibre optic cable the speed of propagation is slightly slower, at two thirds of this speed.

At two thirds of the speed of light, the latency actually works out to roughly 5 microseconds per kilometre.
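Here is a quick sketch of what those propagation speeds mean per kilometre, and over the 60 km link discussed above (the copper and fibre fractions are the rough three-quarters and two-thirds figures from the paragraph above):

    # Per-kilometre propagation delay in vacuum, copper (~3/4 c) and fibre (~2/3 c),
    # plus the one-way figure for a 60 km link. Rough figures only.

    C_KM_PER_S = 299_792.458   # speed of light in vacuum, km/s
    DISTANCE_KM = 60

    for medium, fraction in [("vacuum", 1.0), ("copper", 0.75), ("fibre", 2 / 3)]:
        us_per_km = 1e6 / (C_KM_PER_S * fraction)
        one_way_us = us_per_km * DISTANCE_KM
        print(f"{medium:7s}: {us_per_km:.1f} us/km, "
              f"one way over {DISTANCE_KM} km: {one_way_us:.0f} us")

So the 60 km link costs roughly 300 µs one way in fibre, or about 600 µs for a write plus its ACK, before any switching, framing, or protocol overhead is added.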

3 comments

    • on March 21, 2007 at 11:36 am

    I think your math is a bit off. Are you sure you don’t mean microseconds vs. milliseconds?

  1. I stand corrected, it’s microseconds I believe – more like 0.000005 seconds (5 µs) per kilometre.

    It still adds up, and it can still cause the losses described. I know that when I was working up on Capitol Hill, they were replicating in full synchronous mode over 35 km. The only reasons they got away with it were that they were using a real operating system (AIX) and that the customers never saw the system with replication switched off, as that would have ruined the illusion.

    If they didn’t know how fast the system could be, they would never complain about its speed. 😉

    • on March 24, 2007 at 4:30 pm

    Thanks for your comments – I’m really grateful for the insights. I’ve got an update on my blog now. Shorter links, greater pain 🙁
