Shameless plug for a research project going on at my school: https://priv.io/ can replace most social networks you can think of in a decentralized, private, encrypted manner.
This is a terrible solution. If you think the situation right now is bad, your solution will only exacerbate the problem.
Right now, these exchange points you discuss already exist in the form of IXPs (Internet eXchange Points), which are privately owned and operated. Anyone who wants to peer at an IXP can do so (usually, and assuming you lay your own cable to get there), and whether or not a peering is made is decided by the would-be peers. In the world you illustrate, these peering decisions are made by government regulators.
Why is this bad?
If everyone has to peer with each other for free, why would anyone ever lay their own cables to anything but an IXP? More importantly, the profitability of being a transit provider (think AT&T, Level3, etc.) is already dropping off a cliff due to ridiculously low margins. In your world, there is literally zero money to be made as a transit provider. In fact, you strictly lose money providing that service. Short of a government takeover of all Internet infrastructure, there is no feasible way to implement your solution without putting companies out of business in droves.
You effectively make the entire backbone of the Internet a public service. Your solution asks companies, which have expended enormous amounts of capital on infrastructure, to share their capital expenditure with anyone and everyone. That's entirely unfair to those companies, and it renders their investment useless.
>Anyone who wants to peer at an IXP can do so (usually, and assuming you lay your own cable to get there), and whether or not a peering is made is decided by the would-be peers.
Right, and the three rules are about regulating how those decisions get made.
>In the world you illustrate, these peering decisions are made by government regulators.
No, they're still made by the ISPs; it's just that as long as one of them wants it, the other can't say no.
>In your world, there is literally zero money to be made as a transit provider.
Not really. Unless you're willing to connect to all the interchange points in the world, you will be paying a transit provider to access all the interchange points you don't peer at directly, just as happens today.
>Your solution asks companies, which have expended enormous amounts of capital on infrastructure, to share their capital expenditure with anyone and everyone. That's entirely unfair to those companies, and it renders their investment useless.
Which companies are you referring to? The end-user ISPs will continue to charge their clients. The transit providers will continue to charge ISPs to connect to other interchange points. The CDNs will continue to provide the same service. No one is getting to use infrastructure for free. In fact the exchange points are just physical points; the fibers going into them are paid for by each of the ISPs peering. Everyone pays their own way.
>So, if I'm Netflix and I operate an AS based in California, I connect to an IXP and every customer facing ISP has to take my traffic.
Every customer-facing ISP that operates in the area of that IXP, yes.
>So then I send my traffic over AT&T's line to NYC where AT&T connects to an IXP
AT&T has no obligation to provide you with that transit. It just has to accept traffic at your IXP and deliver it inside their network, not to other IXPs. To make it even fairer to large ISPs you could even say that they only have to deliver it to the part of their network covered by the IXP where you inject the traffic.
>and each ISP at that IXP has to pay AT&T for access to AT&T's backbone.
I don't follow. What's forcing them to pay anything? Definitely not my peering rules. If AT&T has decided to transit traffic between IXPs, the ISPs are definitely not being forced to pay for it.
Here's a more descriptive version of my proposal. All customer-facing links need to be connected with free peering to a regional IXP, making the last mile net-neutral. To get actual global routing of traffic you need a route to all IXPs in the world. You can build infrastructure to every IXP in the world yourself, or you can pay a transit provider to do that for you, possibly in coordination with other transit providers. This way ISPs can't use their last-mile monopoly to extract rents, and there's still competition between transit providers to create good backbones.
Do you guys store passwords in plain text? Shouldn't you only be able to get password hashes from a vulnerable server? I might be reading too much into your statement, but I'd like to know if I'm misunderstanding the situation.
Heartbleed doesn't give you access to storage; it gives you access to the raw heap of the process linked against OpenSSL. Passwords are typically transmitted unhashed, albeit encrypted by TLS, and the application decrypts the TLS stream to the heap, which means an unhashed version of the password is in process memory for some amount of time. An attacker using Heartbleed has a chance to see that memory, and could therefore see hashes OR unhashed passwords.
Example from a memory dump on a vulnerable server (username & password changed to protect the innocent):
```
..?...t?.R...t>
...ned....userna
me=0000000+0ew+0
user&password=my
passw0rD.~Jt....
.3z..a..........
```
One of the things that caught me off guard, though it isn't surprising, is that some hosting companies don't use VM isolation, so it was possible to pull memory from other sites which may themselves have been patched. Hopefully hosting vendors that don't have isolated VMs don't also allow users to install their own OpenSSL, as this would become a vector to compromise neighboring hosts. Of course, allowing any custom software install in such an environment is just asking for it.
"[H]osting vendors that don't have isolated VMs don't also allow users to install their own OpenSSL, as this would become a vector to compromise neighboring hosts."
Could you explain this please?
Here's a possible scenario... I root virtual machine X running on host Z (using heartbleed). Another machine running on Z is virtual machine Y. Because X and Y are not isolated, and I am running whatever I want on X, I can find some uncleared memory (somehow -- how?) that was previously used by Y, thus giving me access to Y. (Seems a bit handwavy, and I'm not sure this is what you meant, so any details would be helpful.)
Actually, another likely scenario is a load balancer shared by multiple sites. As long as SSL is terminated at the load balancer, it's vulnerable.
You are underestimating the severity of the bug. The bug leaks server memory, and unencrypted passwords are being sent to the server by the user's browser in order to be hashed and compared against the hashed versions in storage.
Normally this is protected by TLS, but as you can see, for servers that suffer from this hole, that protection is as good as naught.
Note that this occurs for "any" connection hitting the vulnerable server, meaning that a patient attacker can just run this in a script and scoop up passwords, credit card numbers, and form information POSTed in by all users of the web service, all day long, until the hole is closed. And even then there's a good chance that the private keys were already exposed, in which case the attacker can now masquerade as the server.
Perhaps you should overwrite them with garbage before doing this. Simply deallocating them (or assigning them to new objects) could leave them hanging around in memory until something else overwrites the same memory space.
If the passwords were POSTed over and left on the heap, then they were vulnerable to being scooped up via Heartbleed, even if they are stored hashed in the database.
Your original post is a huge tangent from the topic of discussion. That article is pure fearmongering and is exactly the type of propagandized information that stifles innovation.
Obviously it makes sense to talk about the ramifications of new technology, but the article you linked is doing this in a heavy-handed way to get people to read the article. The result is disproportionate value being placed on that idea for the average Daily Mail reader, which necessarily detracts from a balanced discussion of this research.
No article stifles innovation unless some idiot reads it as scripture!
Meanwhile, what do you know about the average Daily Mail reader? Presumably you've researched the roughly 70 million readers (an exaggeration, to be sure, but it's a lot of people) who take a daily look at this popular newspaper. I rather think not. But that doesn't stop people like you making predictable and patronizing observations of this kind at every opportunity. No need to comment. We can all 'read' and draw our own conclusions.
I'm lumping the "average Daily Mail reader" in with the average person. Surely you are not ignorant of the effects that propaganda can have on a population. The linked article does nothing but narrowly discuss one possible consequence of this type of technology, in a manner meant to elicit fear or a general negative association. This is by definition propaganda, and it detracts from a thoughtful analysis of the larger topic of discussion. I'm not patronizing anyone by drawing attention to the idea that plenty of people use news articles as end points for information; I'm simply drawing the connection between that phenomenon and the type of biased reporting seen in the article above.
I know you're being sarcastic, but it's upsetting that people are capable of having this attitude. There is no excuse that can ameliorate this behavior in the name of democracy or any other alleged moral acumen the propaganda machine spits out. It is strictly immoral to attack freedom of the press, which is why we have so many protections in place, whether genuine or artificial, to prevent just that.
I'm not personally involved in the project.