OSI: The Internet That Wasn’t (ieee.org)
110 points by florent_k on July 29, 2013 | 62 comments



As usual, /usr/games/fortune provides some insightful commentary:

"On the other hand, the TCP camp also has a phrase for OSI people. There are lots of phrases. My favorite is `nitwit' -- and the rationale is the Internet philosophy has always been you have extremely bright, non-partisan researchers look at a topic, do world-class research, do several competing implementations, have a bake-off, determine what works best, write it down and make that the standard.

The OSI view is entirely opposite. You take written contributions from a much larger community, you put the contributions in a room of committee people with, quite honestly, vast political differences and all with their own political axes to grind, and four years later you get something out, usually without it ever having been implemented once.

So the Internet perspective is implement it, make it work well, then write it down, whereas the OSI perspective is to agree on it, write it down, circulate it a lot and now we'll see if anyone can implement it after it's an international standard and every vendor in the world is committed to it. One of those processes is backwards, and I don't think it takes a Lucasian professor of physics at Oxford to figure out which."

-- Marshall Rose, "The Pied Piper of OSI"


This is why every time Mozilla starts whining about Google playing with new things without consulting everybody else first (e.g. Dart, NaCl), I lose more respect for them (actually, at this point, they've pretty well exhausted it all). Basic advancement comes from somebody deciding to try something new, not from getting everybody to agree on what comes next.


It's slightly different because Google is in a relatively unique position of being able to unilaterally deploy new technology, as they both serve a large fraction of the traffic on the internet, and have a very popular browser.

Mozilla is understandably wary about new technologies from Google, because it would be very easy for them to make it so that the best youtube experience can only happen in chrome, which would be quite detrimental to Mozilla.


Just because they can do large deployments doesn't mean it will be successful. Look at what happened with Buzz and Wave for example.


Likewise, Mozilla is in a unique position in that they prefer the status quo - having Brendan Eich on board with them - and they can easily maintain the status quo by refusing to support other parties' efforts.


>Mozilla is understandably wary about new technologies from Google, because it would be very easy for them to make it so that the best youtube experience can only happen in chrome, which would be quite detrimental to Mozilla.

Surely Mozilla adoption of Dart or NaCl would make this less likely, not more.


If they can keep up. Google has the deep pockets to add a new, complex feature to the browser every other week; they don't have to be better than current tools (though they very well may be), just completely different. If Mozilla just tried to chase and implement every one, not only would they set a pattern of being willing to follow, which would lead to them being expected to follow, but with less money and manpower they would also implement those new GoogleThings(TM) badly. They wouldn't have time for their own stuff, so their only option left would be to follow badly and be nothing but the company that implements Google stuff with a slightly more liberal license. They would lose all market share to Chrome for being a shitty imitation of Chrome, and eventually cease to exist.

So there's an incentive to intensely scrutinize everything that Google is trying to do, and only to implement the best stuff. And let Mozilla's projects be specs, or be in JavaScript, or be protocols, and sell the other browsers on them, hopefully getting other vendors like MS, Apple, Google, and Opera to think they're neat enough to buy in.

Though friendly, there's an inherent asymmetry in the relationship between Mozilla and Google. Mozilla should throw around its little market power as hard as it can.


So, invent new things, just don't get anyone else to use them. Got it. Please surrender your lightbulbs immediately.


Thanks for replying to a strawman version of what I said.


Hardly. Your argument is a caricature of itself. Success and the mere capability to engage in purely hypothetical anti-competitive behaviors suddenly means ordinary innovation practices are evil.

Presumably at this point Mozilla would like to join Microsoft in one of its various anti-trust complaints against Google for having dared to offer users better products.


So long as google is storing the password to my network in plain text on its servers just because I let one of my friends access my network, they do not get to boast about innovation. I cannot clearly express the loss of respect that action has engendered.


Not to defend Google, but... set up a guest network.


Heh. Yes, I could set up a guest network to work around the fact that Google stores my network access passwords in plain text on their servers whenever I allow someone using their software onto my network.

Or they could stop betraying the trust of their users. I mean, seriously. It would be one thing if they were 'only' totally compromising the network security of people who chose to use their software. But they are compromising the security of anyone with a network who lets their friends and family on board.

My network isn't a problem one way or another; I am actually fairly comfortable with the protections it has. OTOH, the networks of non-techy people and family across the entire world that have been compromised by Google's astounding arrogance(?) and stupidity(?) kind of bother me.

Honestly, I do not understand how an entirely tech-oriented organisation like Google could do something like this. I am Jack's bewildered confusion.


Like I said, I am not arguing that Google did right. I'm just saying, if you're handing out a single key to people with arbitrary configurations you should probably assume it's compromised. Your friends could just as well have had some other malware.


Yes, that is a generally correct statement.


That isn't the same at all: Mozilla and Google are producing actual products that embody competing standards, and both Google and Mozilla have a lot of power/control over "what comes next". Given both Google and Mozilla's track records, I'd say that Mozilla's concern is warranted.

Your analogy might work better if Mozilla was a non-practicing committee that brought a competing standard with no implementation out much later.


I don't see how this statement applies to Dart or NaCl:

"the Internet philosophy has always been you have extremely bright, non-partisan researchers look at a topic, do world-class research, do several competing implementations, have a bake-off, determine what works best, write it down and make that the standard."

I think it fails on "non-partisan" and "competing implementations" for sure. Maybe Mozilla is whiny. I don't know. But these aren't great examples of the benefits of the Internet philosophy as stated above.


Lucasian professor of physics at Oxford

Lucasian Professor of Mathematics at Cambridge, surely?


Your question intrigued me. You are literally correct, but as it turns out the position since 1849 has been held by physicists (at least, if we count one 10-year term held by someone specializing in what Wikipedia calls "fluid mechanics", which it files under physics), so as an imprecise conversational term it's not entirely wrong. I was unable to turn up the original source of the quote in Google, but odds are quite good it was written prior to 2009 (and almost certainly post-1979; the timeline theoretically works for something earlier, but it would have been amazingly prescient for that time :) ), in which case the Lucasian Professor of Mathematics at Cambridge was Stephen Hawking.


Doesn't this description of the OSI way look similar to how the W3C works?


Not anymore. It used to work like that, which is why things like XHTML 2.0 never got traction.


Isn't the W3C working that way (at least at a certain point in the past) the reason why the WHATWG exists outside of the W3C?


Sssssh, you'll give away the joke.


To sum up my reply to replies to my OP: the biggest thing I take away from this quote, personally, is that you should have a good, practical knowledge of something before you go about making a standard, and input from the real world is also a very good idea. Some upfront design is necessary, yes, but when politics dominates technical concerns, failure is bound to follow.


Because, of course, TCP/IP doesn't have any bloody stupid, boneheaded decisions that it continues to perpetuate through multiple revisions (hey, isn't it great we have to read a whole header to work out the address information!)


Yet, here you are, using TCP/IP to criticize it and to read my response.

Say, you're not a LISPer, are you?


C'est la normes


I think one of the subtleties that is overlooked is that by being the "future" of networking, OSI drew in all the "big guns" who worked hard to control it to their advantage. Whether it was the folks at IBM who were pushing Token Ring/SNA, or Netware, or anything really. Since it was the standard to be, influencing it was the high priority.

That allowed the IETF to continue in relative calm and without serious interference during the 80's. That all changed in the 90's when people started figuring out they were betting on the wrong horse.

In 1993 at the 27th IETF [1] I put forth the proposal that this code we had all been using (RPC/XDR), described in various informational RFCs (RFC1057/RFC1014) and treated by everyone like a 'standard', actually be blessed as a standard. Pretty much the consensus was that it was a fine idea, except that forces that were at that point rather anti-"Sun" went out of their way to kill it. It was sad to watch, and folks who had been going to IETF meetings for a decade or more were appalled, but damned if it hadn't become impossible for the IETF to bless anything as a standard any more. That really soured me on 'standards' for a long time.

[1] "Advances in ONC", p. 533, http://www.ietf.org/proceedings/27.pdf


There were also some differences in design taste. For example, OSI went with ASN.1 [1] for binary descriptions rather than keeping things plain and simple. TCP and IP just use 8/16/32-bit fields in a predefined byte ordering, and that is it. That makes TCP/IP considerably easier to eyeball.
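
(To make "easier to eyeball" concrete, here's a rough Python sketch of pulling apart the fixed 20-byte TCP header. parse_tcp_header is just an illustrative name, and real code would also have to deal with TCP options and the surrounding IP header.)

  import struct

  def parse_tcp_header(raw):
      # The first 20 bytes of a TCP header are fixed-width fields in
      # big-endian ("network") byte order: no tags, no length prefixes.
      (src_port, dst_port, seq, ack,
       offset_flags, window, checksum, urgent) = struct.unpack('!HHIIHHHH', raw[:20])
      return {'src_port': src_port, 'dst_port': dst_port,
              'seq': seq, 'ack': ack, 'window': window}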

In higher-level protocols like email, the TCP/IP world tended towards ASCII-based protocols, which are far more flexible and future-proof, while OSI again did ASN.1. I once worked on an X.400 (OSI email) gateway and was highly amused that they defined a different error code for every possible reason an email could be refused. There were pages of them (including "recipient is dead"!), while SMTP allowed for arbitrary text and an overall numeric code to indicate the type of error. Again you can see which was easier to eyeball and diagnose.
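
(For contrast, an SMTP refusal is just a three-digit code plus whatever human-readable text the server cares to send; the wording after the code is the server's choice, e.g.:)

  550 Requested action not taken: mailbox unavailable
  550 No such user here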

Practicality tends to be very effective.

[1] http://en.wikipedia.org/wiki/Abstract_Syntax_Notation_One


I've implemented ASN.1 BER from scratch before (don't ask). If "three different string formats" doesn't say designed by committee, I don't know what does.

Sounds like OSI would have been an internet built on what is essentially binary-encoded XML. I'd say we dodged a bullet.
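
(For anyone curious what that committee flavor tastes like: a minimal, hypothetical Python sketch of decoding one primitive BER element. Real BER also has long-form and indefinite lengths, constructed types, and a pile of distinct string types, none of which this handles.)

  def decode_tlv(data):
      # One primitive BER element with a short-form length: tag, length (< 0x80), value.
      tag = data[0]
      length = data[1]
      assert length < 0x80, "long-form lengths not handled in this sketch"
      value = data[2:2 + length]
      return tag, value, data[2 + length:]

  # A few of ASN.1's many universal string tags:
  #   0x0C UTF8String, 0x13 PrintableString, 0x16 IA5String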


The danger of committees is that it's easy for a few members to add tremendous complexity. Why would anyone do that? Isn't "easy to understand and simple to implement" always a goal of everyone working on tech standards?

Not if you're a big company competing with small companies. Adding 10 man-years of implementation effort is a small thing for IBM- or Cisco-sized companies, but a giant barrier to entry for upstart competitors. Also, vagueness in the spec requires extensive testing to make things interoperate reliably, which favors big incumbents.

These aren't just emergent, subconscious effects. I've been in on discussions about making a spec harder to implement to frustrate competitors. (not with any YC companies)

The dark forces of competitive advantage affect non-committee specs too. Microsoft SMB is an example. The x86 instruction set is another.


> Why would anyone do that? Isn't "easy to understand and simple to implement" always a goal of everyone working on tech standards?

Patents and "competitive advantage".


"Everything was up for debate—even trivial nuances of language, like the difference between 'you will comply' and 'you should comply,' triggered complaints."

That's not actually a trivial nuance---those are significantly different semantic behaviors. IETF RFCs use "MUST" and "SHOULD" (yes, in caps) for the same distinction.


…but we're going to keep asking questions about the 7 layer model in interviews as if it were useful in the real world because that's what we learned about in school.


"I hear they're going to 42 layers because it's a sacred number in Bali."

Not from this guy. I might ask you to critique the 7 layer model, or provide me with ideas for speeding things up by skipping layers.

It's all mushed together in real stacks anyway, with dogs down in the cat layer, and mice doing double duty both at the metal and up there in the UI. I swear to god, OSI would have been standardizing finger lengths for keyboard interaction if someone had let them.


I counter with the 9-layer model and that causes them to ask a more relevant question. This happens a lot when you're receiving a generic interview from a US Megacorp that has only recently ditched irrelevant (save for "how does this person think") quizzes for cross-functional interviews from people whose function is far removed from what you thought you were going to be interviewing for. (They all seem to have migrated to this strategy, for what that's worth.)


Came here looking for the 7-layer model. It seems to be the only surviving legacy of OSI. From memory, Computer Networks by Tanenbaum mapped every protocol onto that damn model. Typical of committees to have 3 more layers than needed.


X.500 was tried and deemed too complex/unworkable and has since been mostly abandoned. (Who remembers X.500 DNs leaking out as e-mail addresses some twenty years ago?)

The idea of a directory service is a good one, so we ended up with LDAP and AD.


Obviously the best job candidates are the ones that scoff at the prospect of learning something more than is absolutely necessary.


Straw man. Curiosity is good, but it's foolish to rule people out because an obsolete network model hasn't caught their interest or made it to the top of their research queue yet.


It's not just obsolete, it was proven to be unworkable. What's worth knowing about it isn't the details of the model, it's why the model should be ridiculed. The fact that the model keeps getting brought up as something that one should know because it's good is an example of why we have problems.


I remember reading with great amusement an article written by an X.400 proponent, sometime in the late '80s, asserting that X.400 would start to dominate SMTP once the telecoms got the price of sending an X.400 message under a dollar per e-mail. (And this was back in the '80s, when dollars were bigger!)

Thus proving that the OSI folks really didn't have a clue....


I can never see the term OSI and not think of this classic. It gave me and my friends a good laugh back in the day ;p (original site is down but this is a mirror)

http://pablotron.org/files/7_layer_burrito.html


I had always looked at the OSI model as a _frame_ through which a communication channel can be analysed. If I intercept my internet cable, I assume I can ideally measure the signal and fit it onto the 7 layers of the OSI model. I fail (honestly) to see how that is a failure, as the article seems to take as its starting point. Isn't the OSI model _one_ way to look at a medium?


That was not the original intention. That is the story that for some reason network professors continue to tell themselves as a reason to teach this model. The original intent was for there to be 7 clear layers, with various protocols for each.

Why the profs stick so close to this instead of teaching a more realistic model is beyond me. Why use an inaccurate theoretical model when you could use an accurate one instead? It's going to be simplified relative to reality either way, as befits a model, but it might as well be a correct simplification. I mean, it's not like the OSI 7-layer model is a mathematical truth or anything... it's just an artifact produced by a committee a long time ago.

To anyone leaping up to defend it, let me set the frame I'll be judging the defenses by in advance: If the real world was as it is today except there was no such thing as the OSI model, and someone proposed the 7-layer model today as a model for understanding the network for the very first time, would you really consider your defense as a reason to go with it, even despite the fact the model is actively inaccurate? I don't think inertia is an adequate defense to stick with something, when we aren't even using it anyhow... what real inertia does it have?


Why do you say the OSI model is unrealistic? Writing the simplest Hello World web service call takes advantage of the OSI model, even though you might not notice it unless you also write network drivers and manufacture network cards. In fact, I've worked at companies where it would be accidentally accurate if you simply labelled the dev teams with the layer of the OSI model their work corresponds with. You practically can't read a network sniffer trace unless you understand the OSI reference model.


"Why do you say the OSI model is unrealistic?"

The layers do not reflect reality. The higher up you go, the less obvious the mapping to the things that really exist gets, and the mapping is getting fuzzier over time (or, phrased another way, the OSI model is not only unrealistic, but increasingly unrealistic).

This is as opposed to possible meanings of the phrase, like "impractical". It is probably theoretically possible to write something that really would have seven clear layers, though I have to hedge; a lot of really high-performance stuff is even fuzzier than the consumer stuff. The recent trend towards userland-level networking at the highest performance end pretty much collapses layer 3 and above into one application. But it would at least ship bytes from here to there, it's certainly not an impossible design. It just isn't the real one. (And I'd have grave concerns about its performance at the top end.)

I have to admit this is another trend in programming that I just Do Not Get. This bizarre insistence on taking some inappropriate model, then with malice aforethought deliberately squinting at things that manifestly do not fit into the model until your vision is so fuzzy that they do seem to fit together, then yelling at anyone who dares point out you've damn near closed your eyes and probably aren't seeing clearly. See also every web framework's desperate need to insist that they are MVC, even as the lines that must be drawn between the various components to show where the M, V, and C are wildly and drunkenly veer hither and yon in a terrifically convoluted manner, criss-crossing dozens of components, instead of simply explaining what they actually are. I just don't get it. Models aren't blueprints, let alone the very definition of virtue. If they don't work, dispose of them.


> You practically can't read a network sniffer trace unless you understand the OSI reference model

I read these all the time. I can't remember the last time I needed to know what the OSI layers were called; they're utterly irrelevant to networking as near as I can tell.


I was referring to the whole concept of the model and the purpose of the layers, not simply reciting from memory the names of the layers.


You don't need the OSI model to understand layers of abstraction. Since the OSI model doesn't fit the observed layers very well, it seems pretty useless from that perspective.


Interesting, and we can see the outcome of the palace revolt is IPv6. The OSI/telecoms guys would have insisted on having a workable inter-operation plan.

Ex-X.400 hacker here; I used to have root on the UK ADMD back in the day :-)


Not too impressed with this account, which I will admit I started skimming around halfway through. E.g.:

I'm not sure the author realizes TCP is a virtual circuit protocol (then again I'm sure OSI had one or more much heavier weight ones).

The real fatal flaw of OSI, before even getting to the point of finding out if their protocols worked---many of them did get far enough in the standardization process---was their going to ISO in the first place. If you just needed to get things done, the difference between spending around $2,000 ($1,000 in 1988 dollars) to buy a shelf of the standards documents and spending $0 or thereabouts to get all the RFCs (chicken and egg: you might need to buy a CD or a tape to get them onto your systems) was a very big one.

If you're really interested in all this, I highly recommend Padlipsky's very opinionated "The Elements of Networking Style: And Other Essays & Animadversions on the Art of Intercomputer Networking" (http://www.amazon.com/Elements-Networking-Style-Animadversio...), a very colorful work with rhetorical gems like "gilding the ragweed".


> spending around $2,000 ($1,000 in 1988 dollars) to buy a shelf of the standards documents

The ISLISP standardization group did an interesting workaround for this: http://www.islisp.info/history.html

An ISO committee to standardize the language was constituted but stalled on doing any real work drafting a standard. While they waited around, the community drew up a draft standard, which was published as a public recommendation to the ISO committee, with the draft put into the public domain. The committee then voted to adopt the community's draft as the standard unchanged. So now an official ISO document is available for the usual fee, but you can get a predecessor document that looks surprisingly similar, for free as a PDF.

Of course, this requires everyone on the committee agreeing to not really take the ISO process seriously. You might wonder why one would bother with it at all then, and in this case it seems to have been to reassure clients that ISLISP is stable by getting it an ISO standard.


Reminds me of how I always used drafts of the ANSI SCSI standard, or vendor documents.

"in this case it seems to have been to reassure clients that ISLISP is stable by getting it an ISO standard"

Which is a nice hat trick if nobody uses the actual standards document ... although I suppose Franz and perhaps a few others did buy a copy....


Common Lisp also has an interesting history with regard to copyright, which resulted in the HyperSpec. I can't find the history here, but someone (KMP maybe?) wrote about the discussion when it came time to transfer the copyright to ISO.


TCP really isn't a virtual circuit protocol in the same sense that X.25 PLP is a virtual circuit protocol: In VC protocols you say something like "network, I want to talk to address X," establish a channel (getting a channel ID allocated) and from then on send packets to the channel (not mentioning the remote address again), knowing that they'll arrive there. This means the routers on the way are stateful. TCP theoretically doesn't care about routing, and IP is stateless.
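
(A toy sketch of that difference, not any real protocol's API; the class and method names here are made up. With a virtual circuit the destination is named once and the network keeps per-connection state; with datagrams every packet carries the full address and the routers remember nothing.)

  class ToyVirtualCircuitNetwork:
      # X.25-ish: the network (here, this object) remembers which channel maps
      # to which destination, so later packets name only the channel.
      def __init__(self):
          self.circuits = {}        # channel_id -> remote address (network-side state)
          self.next_channel = 0

      def open_circuit(self, remote_address):
          self.next_channel += 1
          self.circuits[self.next_channel] = remote_address
          return self.next_channel

      def send_on_circuit(self, channel_id, payload):
          remote = self.circuits[channel_id]   # the network looks up the address
          print(f"to {remote} via channel {channel_id}: {payload!r}")

  class ToyDatagramNetwork:
      # IP-ish: every packet is self-contained; no per-connection state anywhere.
      def send_datagram(self, remote_address, payload):
          print(f"to {remote_address}: {payload!r}")

  vc = ToyVirtualCircuitNetwork()
  ch = vc.open_circuit("10.0.0.2")
  vc.send_on_circuit(ch, b"hello")

  dg = ToyDatagramNetwork()
  dg.send_datagram("10.0.0.2", b"hello")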


"not sure the author realizes TCP is a virtual circuit protocol"

That struck me too -- kind of undermining the simple narrative of "connection-full telecom biggies vs. connection-less insurgents". It would have been worth a note in the article, because it's in IEEE Spectrum after all.

Not a networking pro, but I believe the converse is also true, that OSI has specifically connection-less protocol elements (e.g., CLNS) that are at a lower level in its own hierarchy, and more equivalent to IP in TCP/IP.

I just saw part of an interview with Cerf (http://www.internet-history.info/media-library/mediaitem/100...), in which he said that part of the motivation for packet switching is to maintain command and control in the chaos of a post-nuclear-strike world. That explains why the person connected with packet switching (Paul Baran) was at RAND, which was not a networking place, but very much a "let's plan for the apocalypse" place.


Well, there was a big controversy over basing your protocols on datagrams with stateless nodes in the middle. The CCITT's X.25 didn't.

When I was working for Lisp Machines Inc. (LMI) in 1982-3, when TCP/IP was being mandated for the Arpanet (and therefore for Lisp Machines, something I worked on at LMI), I can remember the father of the Lisp Machine, Richard Greenblatt, arguing that it wouldn't work because too many packets would get lost in the middle. With stateless nodes---a necessary virtue for the small-memory minicomputers the Arpanet was built on---re-transmission has to come from the endpoints, and if the error rate was too high it would have failed to be practical. Look at TCP over ATM for a modern example of this sort of lossage.

I assume Vint Cerf et al. did the math based on observed error rates, then tested it to make sure (TCP/IP and NCP ran side by side for some time on the Arpanet), and Greenblatt, who was rather distracted designing his 3rd-generation Lisp Machine, was going by general principles.


Actually, it seems ARPANET was not designed to survive nuclear war:

http://www.internetsociety.org/internet/what-internet/histor...

> It was from the RAND study that the false rumor started claiming that the ARPANET was somehow related to building a network resistant to nuclear war. This was never true of the ARPANET, only the unrelated RAND study on secure voice considered nuclear war. However, the later work on Internetting did emphasize robustness and survivability, including the capability to withstand losses of large portions of the underlying networks.

Charles Herzfeld, ARPA Director (1965–1967), said:

http://inventors.about.com/library/inventors/bl_Charles_Herz...

> The ARPANET was not started to create a Command and Control System that would survive a nuclear attack, as many now claim. To build such a system was, clearly, a major military need, but it was not ARPA's mission to do this; in fact, we would have been severely criticized had we tried. Rather, the ARPANET came out of our frustration that there were only a limited number of large, powerful research computers in the country, and that many research investigators, who should have access to them, were geographically separated from them.

Paul Baran:

http://www.wired.com/wired/archive/9.03/baran.html

> Wired: The myth of the Arpanet - which still persists - is that it was developed to withstand nuclear strikes. That's wrong, isn't it?

> Paul Baran: Yes. Bob Taylor had a couple of computer terminals speaking to different machines, and his idea was to have some way of having a terminal speak to any of them and have a network. That's really the origin of the ARPANET. The method used to connect things together was an open issue for a time.


Cerf does say that part of the reasoning behind packet switching was robust command and control in the event of nuclear war. Cerf doesn't say that ARPANET, in itself, was designed for that (and neither did I).

This is also consistent with the following quote in the Baran interview you cite:

  Baran: But the origin of packet switching itself is very much Cold War.
  The argument was: To have a credible defense, you had to be able to 
  withstand an attack and at least be able to show you had the capability 
  to return the favor in kind.
So there is no disagreement on that point, and your corrective "Actually, ..." is misplaced.


> I'm not sure the author realizes TCP is a virtual circuit protocol (then again I'm sure OSI had one or more much heavier weight ones).

You've missed that they're referring to level 2 and 3 virtual circuits, i.e. "data is always delivered along the same network path, i.e. through the same nodes" ( https://en.wikipedia.org/wiki/Virtual_circuits#Layer_2.2F3_v... )


I was hoping someone (else) would mention Padlipsky.

In my dissertation, Elements of Networking Style is the reference for the OSI section (which was needed because everybody is still using the damn terminology).



