Hacker News

Bob has been an active member of the Austin startup community for 10+ years and I've talked with him many times. As an EE, it was cool meeting him the first time, and once I'd chatted with him a few times, I finally asked the question I'd been dying to ask: how'd you come up with "Metcalfe's Law"?

Metcalfe's Law states that the value of a network is proportional to the square of the number of connected devices in the system.

When I finally asked him, he looked at me and said "I made it up."

Me: .. what?

Him: I was selling network cards and I wanted people to buy more.

Me: .. what?

Him: If I could convince someone to buy 4 instead of 2, that was great. So I told them buying more made each of them more valuable.

It was mind blowing because so many other things were built on that "law" that began as a sales pitch. Lots of people have proven out "more nodes are more valuable" but that's where it started.

He also tells a story about declining a job with Steve Jobs to start 3Com and Steve later coming to his wedding. He also shared a scan of his original pitch deck for 3Com, which was a set of transparencies because PowerPoint hadn't been invented yet. I think I kept a copy of it..



Btw, when I say "an active member of the Austin startup community" - I mean that seriously.

Not only did he teach a class on startups at the University of Texas but regularly came to a coffee meetup for years, attended Startup Weekend demo time, came to Techstars Demo Day, and was generally present. I even got to do the Twilio 5 Minute Demo for one of his classes (circa 2012).

It was always cool to have someone who shaped our industry just hanging out and chatting with people.


Absolutely correct. I chatted with him several times circa 2015 to 2016 when working out of Capital Factory in Austin. He was present for all sorts of events such as mentor hours, startup pitches, etc. Funnily enough, he would give you a very stern look if he thought you were taking him for a ride. I haven't been there recently as much as I would like, but I imagine he is still around to be found.


Had a very similar experience hanging out with him and his equally-brilliant wife Robyn in ATX between 2011-2012. Very approachable guy -- impressively so, given his stature in the industry -- but could be quick with the "what the hell are you talking about?" look.


I respect Metcalfe a lot, but halfway through undergraduate discrete math it was pretty obvious to most people in the class, even before seeing a formal proof, that a fully connected graph has O(n^2) edges. I just figured that people wowed by "Metcalfe's Law" were business types who hadn't brought any formal theory into computing.
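For the curious, the counting is simple enough to sketch in a few lines of Python (illustrative only):

```python
# Edge count of a complete (fully connected) graph on n nodes:
# every unordered pair of nodes is one edge, i.e. "n choose 2".
def complete_graph_edges(n: int) -> int:
    return n * (n - 1) // 2

for n in (2, 4, 10, 100):
    print(n, complete_graph_edges(n))
# the count grows roughly as n**2 / 2, hence O(n^2)
```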


Metcalfe's law is about network impact or value, not about connections.


Yeah, but basically it’s a statement that value scales linearly with the number of pairwise connections.


But it's a loose approximation, so it's not good to overanalyze it.

The number of pairwise connections grows as the square of the number of nodes, and connections ("how many people can you talk to") are valuable, so value grows. And each individual node added to a network of N grows the pairwise connections by N, so that's even better.

Broadcast (one-to-many connections, like giving a speech to a crowd) is an efficiency hack, which is good, and efficiency hacks grow as the number of connections grows, so that's good too...

... is more how I think about what Metcalfe was talking about. Which aspects are x, which are x squared, which are log x is interesting, but that's not all bound up in his simple statement, despite his "as the square" wording.

And Bob Metcalfe is personally a great guy in all the ways people are saying, but it's not soooo unique; that's the way a lot of tech types were as the mantle passed from the Greatest Generation to the Boomers (and what was that one in the middle, "lost" or "invisible" or something). I'm not suggesting we've lost that (we may have), just saying that's how it was; for instance, as an undergrad you could walk into any professor's office and get serious attention.


It counts connections and uses them as an estimate of value.

However not all connections are equally valuable. And therefore the "law" is incorrect. An estimate in far better agreement with the data is O(n log(n)), and you can find multiple lines of reasoning arriving at that in https://www-users.cse.umn.edu/~odlyzko/doc/metcalfe.pdf.
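To see how different the two claims are in practice, here's a quick Python comparison (a sketch of mine, not from the paper; function names are just labels):

```python
import math

# Growth factors when a network grows 1000x, under the two proposed
# scaling laws (unitless; names are mine, not from the paper).
def metcalfe(n):   # value ~ n^2
    return n * n

def odlyzko(n):    # value ~ n log n
    return n * math.log(n)

n_small, n_big = 1_000, 1_000_000
print(metcalfe(n_big) / metcalfe(n_small))  # a millionfold increase
print(odlyzko(n_big) / odlyzko(n_small))    # only ~2,000x
```

The gap between a 1,000,000x and a ~2,000x value increase is the whole dispute in one number.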


I see only two real 'quantitative' arguments in https://www-users.cse.umn.edu/~odlyzko/doc/metcalfe.pdf#page... . Your first argument, 'connections aren't of equal value', doesn't defeat Metcalfe. Your second argument, that Metcalfe's law would mean efficient markets would merge all networks into one, is both the most amazing overestimate of the competence & economic rationality of telecom giants I've ever seen and also not actually true as a matter of economic theory (https://gwern.net/doc/economics/automation/metcalfes-law/201...). So neither of your handwaving arguments was very good to begin with.

Better agreement with what data, exactly? It's definitely not in that paper, and every single paper I find with data empirically testing your nlogn proposal against Metcalfe finds your nlogn doesn't fit the data at all while Metcalfe can fit well: Facebook (https://gwern.net/doc/economics/automation/metcalfes-law/201...), the entire EU (https://gwern.net/doc/economics/automation/metcalfes-law/201...), Tencent (https://gwern.net/doc/economics/automation/metcalfes-law/201...), and Bitcoin (https://gwern.net/doc/economics/automation/metcalfes-law/201...).


Only two?

The gravity law argument based on the geographic distribution of traffic, Zipf's Law, and Bradford's Law all have empirical evidence behind them. That's three. Additionally, in another version of the same paper, Bob Briscoe contributed data from British Telecom usage that supported the same scaling rule.

The second paper that you gave is interesting. Odlyzko was the one who contributed that particular argument. It is right that there are rational reasons not to interconnect. But Metcalfe would imply more of a first-mover advantage than we actually see. In social networks we had Friendster, MySpace, and Facebook, each of which overtook its predecessor. How could a new entrant supplant the king? Not once, but twice?

Since then new social networks have continued to sprout and succeed. Facebook managed to stay on top, in part through purchasing other networks. One of which (Instagram) is on track to surpass Facebook in revenue.

Now let's look at the 4 papers that you collected.

The first and third have the same flaw. They are looking at revenue over time as the network grew. But the growth of the network is not the only change that happened over time.

1. The Facebook product improved to become more compelling, even for the same users. In part by adding new channels through purchasing other networks.

2. Facebook kept adding new ways to monetize people, improving revenue.

3. People's behavior has shifted to more online over time. Thus it was easier to get value from the same users in 2014 than in 2008.

Because so much has changed, comparing users in 2008 to users in 2014 is not apples to apples.

Next, let's turn to the last paper. I'm in agreement with patio11 that Bitcoin's valuation has been driven by the largest Ponzi scheme in history. Therefore I view most of its valuation as fake. And so am not inclined to accept arguments from that valuation as valid.

And I saved the best for last. In section 2.4 the EU paper argues that Briscoe's law (I think Odlyzko should be credited, but Bob Briscoe is in the EU) is more accurate than Metcalfe's law after you hit scale.

Their argument in effect is a variant of one that was discussed privately before we wrote our paper. Our immediate perception of the size of a network is based on how much of our personal social groups are on it. The value we get from that network is based on the same. Therefore our perception of the size of the network is correlated with the value we get from it. If the network mostly contains parts of groups, you do get something like Metcalfe's Law out of this. But once the network contains a lot of completed social groups, members of those groups slow down how much value they gain as the network continues to grow.

In other words when the connections in the network are a random sampling of the connections that matter to us, growing the network adds valuable connections. Once the network contains the connections that we know matters to us, most of us only benefit marginally from continued growth.


Wasn't it specifically "compatibly communicating devices" or something, and not users, like how it was marketed?


I thought it was a "combinatorial explosion?"

https://en.wikipedia.org/wiki/Combinatorial_explosion#Commun...


well, according to Alonzo Church, if this is x squared and that is x squared, then this is that.


(n²-n)/2 is O(n²) as GP claimed (not that it's exactly n², unless edited before I saw it), you're both right.


HN comment of the year winner right here! Makes you wonder how many other laws are built on nothing.

If there's one thing I learned doing a Ph.D., it's that if you dig deep enough, you find many foundational laws of nature rely on some necessary assumption that, if proven incorrect, would topple the whole thing


It's worth mentioning Moore's Law, which was actually a short-term prediction, arguably turned into a business goal. The "law" states that the number of transistors in integrated circuits (like CPUs) doubles every two years (or 18 months by some variations). It wasn't entirely made up, as it was mostly based on advances in manufacturing technology, but it was a prediction made in 1965 that was supposed to hold for ten years. However, reality kept up with this prediction for far longer than anticipated, until the physical limits of silicon miniaturization became apparent in recent-ish years, around the mid-00s (maybe later?).
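As a toy illustration of what a clean two-year doubling implies, starting from the Intel 4004's oft-cited ~2,300 transistors in 1971 (the numbers are illustrative, not a claim about any specific later chip):

```python
# Toy projection: transistor count doubling every two years, starting
# from the Intel 4004's oft-cited ~2,300 transistors (1971). Purely
# illustrative; real chips don't follow the curve this cleanly.
def projected_transistors(year: int, base: int = 2300, base_year: int = 1971) -> float:
    return base * 2 ** ((year - base_year) / 2)

for year in (1971, 1981, 1991, 2001, 2011):
    print(year, f"{projected_transistors(year):,.0f}")
```

Forty years of doubling takes ~2,300 to the billions, which is roughly the order of magnitude real CPUs reached.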


I think it technically kept going into the early 2010s due to additional advancements, and technically it hasn’t yet stopped, but people are generally skeptical that TSMC and Samsung can keep this party going (a party that seems to have stopped for Intel itself, apparently).

Dennard scaling though did end in the mid 00s and this impacted Koomey’s law which talks about performance/watt and saw a similar petering out.

Apparently the bound at even a conservative rate puts the thermodynamic doubling limit at 2080. After that we’ll have to really hope smart people have figured out how to make reversible computing a thing.


CPU clock speed stopped improving slightly sooner than that. Performance continued to improve, but they switched from making single threaded code faster to adding more cores.

This was a bit of a bummer for programmers working in single threaded languages who found that their code stopped getting faster for free.


Lots of public funding (not only European) made the progress to EUV by ASML, Zeiss, and others possible.


Moore's law is still going, or so I thought. It's Dennard scaling that stopped around 2006.

https://ourworldindata.org/grapher/transistors-per-microproc...


Sorta. It didn't hit a hard boundary per se, but SRAM has practically stopped scaling and even random logic is only scaling at about 1.5x every three years or so. Like a lot of cases it's an s-curve, and we're on the other side of the bend at this point.


Moore's law is almost the opposite of Metcalfe's - Metcalfe's encourages you to build out the network as fast as possible to get the most value; Moore's implies you should wait as long as possible before buying processing power to get the most you can.


Moore’s law isn’t even dead. It says that the number of transistors per dollar rises at that rate, which is still going. Commenters tend to omit the cost component of Moore’s remark.


CS "laws" like Metcalfe's are closer to Murphy's Law than Newton's...


> Makes you wonder how many other laws are built on nothing.

variance/standard deviation (also btw, a sum of squares concept)

It marks the inflection points on the Gaussian curve, but so what? The 2nd derivative points to something significant about the integral? Not really. But even if we accept that it does, what does two standard deviations mean? A linear double on the x coordinate says what about a hairy population density function? Nothing.
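(For what it's worth, the inflection-point claim itself checks out numerically; a quick Python sketch for the standard normal:)

```python
import math

# Standard normal pdf f(x) = exp(-x^2/2) / sqrt(2*pi).
# Its second derivative is f''(x) = (x^2 - 1) * f(x), which changes
# sign exactly at x = +/-1, i.e. at one standard deviation.
def pdf(x: float) -> float:
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def second_derivative(x: float) -> float:
    return (x * x - 1) * pdf(x)

print(second_derivative(0.99))  # negative: still concave down
print(second_derivative(1.01))  # positive: concave up
```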

Or, similar to Metcalfe's Law, the very widely used Herfindahl index (also squares!). It's a cross between a hash and a compression: it says something about the original numbers, but wildly different scenarios can collide.
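A quick sketch of such a collision (the market shares are made up for illustration):

```python
import math

# Herfindahl index: sum of squared market shares. Two structurally
# different markets can collide on the same value.
def hhi(shares):
    return sum(s * s for s in shares)

duopoly = [0.5, 0.5]                 # HHI = 0.5
b = (0.4 + math.sqrt(0.12)) / 2      # a three-firm market engineered
c = (0.4 - math.sqrt(0.12)) / 2      # so that its HHI is also 0.5
three_firms = [0.6, b, c]

print(round(hhi(duopoly), 6), round(hhi(three_firms), 6))
```

An even 50/50 split and a lopsided three-firm market hash to the same index.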


When observation is translated to "law". That is an act of judgment on the part of the law-maker, purely. Call it "built on nothing" if you like. But as opposed to what?


Do you know of any in particular?


Read "The Unlocking Project"


"IF proven incorrect" is the important part.

This "law" isn't somehow less true just because it was originally used as a sales tactic.


How would you even test such a vague law, let alone disprove it?


The law implies testable consequences, such as what the economic incentives should be from interconnecting networks. They are good enough that we should expect to see more drive to interconnect, and stronger barriers to entry for future networks, than history actually shows.

https://www-users.cse.umn.edu/~odlyzko/doc/metcalfe.pdf offers this and several other lines of evidence that the law is wrong, and O(n log(n)) is a more accurate scaling law.


It is quite telling that when Bob Metcalfe 'makes stuff up' he still hits it out of the park.


A little confirmation bias on this one. In addition to the infamous "the internet will collapse" prediction, he was also pretty whole hog on the Segway scooter revolutionizing transit.


So let me enlighten you a bit: we did collapse the internet, and got a testy email from a bunch of backbone maintainers saying they were going to block our live video streams (on port 2047) in four weeks' time or so. Which resulted in us moving to the other side of the Atlantic to relieve the transatlantic cable. So even if it didn't make the news, Metcalfe was 100% on the money on that particular prediction. The Segway never had a chance as far as I'm concerned, but the other thing he got just right. But maybe he never knew (and I never knew about his bet).


He made a very specific prediction, and it didn’t pan out. That there have been near misses and even global degradation events multiple times in the past 3 decades is not relevant. He admitted he was wrong and literally ate his words.


> But I predict the Internet, which only just recently got this section here in InfoWorld, will soon go spectacularly supernova and in 1996 catastrophically collapse.

- Bob Metcalfe


Not only did he make it up, but it is false! Multiple lines of evidence point to a O(n log(n)) law instead.

https://www-users.cse.umn.edu/~odlyzko/doc/metcalfe.pdf has the details.


From the paper:

> In general, connections are not used with the same intensity... so assigning equal value to them is not justified. This is the basic objection to Metcalfe’s Law...

In my architectonic opinion, the perfect network comprises all nodes operating equally. Ergo the ideal is indeed Metcalfe's law, but architecture and design can be costly, which is simply the inefficient use of resources. These being very precise machines, anything less than 99.999% is amateur; ergo the law obtains.


We are talking about computer systems that connect a network of humans. Humans are notoriously imprecise and unreliable machines. Anything more than 0.00001% is therefore a miracle.


Lol, networking people have produced little of real value except the paradigm itself, and social networking is little more than making humans more efficient at marketing to each other. Networking is for DATA. When people behave like networked machines... well, that's global capital communism tbqh.


I remember trying to get NICs to work in Linux and the best advice was usually “just try the 3c509 driver”.


I remember when I bought my first Fast Ethernet card, there was some Linux HOWTO that discussed various Ethernet NICs and, crucially, their Linux drivers in excruciating detail. And the takeaway was that if you had a choice, you picked either a 3Com 5xx(?) or an Intel card. The 3Com card was slightly cheaper at the local computer shop, so that's what I ended up with (595 Vortex, maybe?).


Yeah, I had gold-plated 100Mb 3Com cards and they were the best (something-905-series?), with full duplex, hardware offloading, good drivers. I still have one lying around somewhere.


As a poor college student I scavenged 3c509 cards to build a computer network in an apartment I shared with two other chronic internet users.

That was right about the time someone had solved a bug with certain revisions of the card behaving differently. So suddenly the availability jumped considerably.


Similar to "try the HP 4si driver" for printers?


Practically a mantra.


It was well known when I started that you got a card that would work with that (and later for gigabit it was e1000).


Although he made it up, there's an argument that the value goes up more than linearly. But as the network grows, every node doesn't necessarily need to talk to every other node except in rare circumstances, or they can reach each other through an intermediate point. So maybe O(n log n) would be closer.


I recall seeing an article a number of years ago that argued just that. That the network effect is nlogn. Still enough to help explain why large networks grow larger, but it also means that overcoming the incumbent is not the insurmountable wall it may seem to be. You may only need to work twice as hard to catch up, rather than orders of magnitude harder.


He may have "made it up" to improve sales, but from a certain viewpoint it's correct. If you decide to measure the "value" of a network based on the number of node connections, then the number of connections for n nodes is n(n-1)/2 = 0.5n^2 - 0.5n, which is O(n^2).

Of course, the value of something is hard to measure. Typically you measure value as "benefits - costs" and try to convert everything to a currency. E.g., see: https://www.investopedia.com/terms/c/cost-benefitanalysis.as... . But there are often many unknowns, as well as intangible benefits and costs. That makes the process - which seems rigorous at first - a lot harder to do in reality.

So while he may have "made it up" on the spot, he had a deep understanding of networking, and I'm sure he knew that the number of connections is proportional to the square of the number of nodes. So I suspect his intuition grabbed a quick way to estimate value, using what he knew about connection growth. Sure, it's nowhere near as rigorous as "benefits - costs", but that is hard to really measure, and many decisions simply need adequate enough information to make a reasonable decision. In which case, he both "made it up" and made a claim that you can justify mathematically.


And yet it's trivially true. Value accrues with connectivity, and the number of edges in a fully connected graph is n(n-1)/2, which as n grows larger approximates to n^2. I would be surprised if he said he "made it up" as anything other than a joke about elementary computer science.


As n grows larger, the number of edges approximates n²/2. I may be pedantic, but I feel the difference between something and its half is non-negligible.
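A quick check of how close the exact count stays to each approximation (Python, illustrative):

```python
# Ratio of the exact edge count n(n-1)/2 to the approximation n**2/2.
# It equals (n-1)/n, which tends to 1, so "approximates n^2/2" is the
# tighter claim; plain n^2 overshoots by a constant factor of two.
def ratio(n: int) -> float:
    return (n * (n - 1) / 2) / (n * n / 2)

for n in (10, 100, 10_000):
    print(n, ratio(n))
```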


You're assuming complete connectivity; no one builds networks of nontrivial size that way.


If that copy still exists, having it on archive.org might be interesting.


Love the story, man!



