
> I remember how Node when it was just rising in popularity was usually demonstrated by writing a primitive HTTP server that served a "hello world" HTTP page.

That is still possible in the exact same way.

But a toy is just a toy. All websites should encrypt their content with TLS. In fact, all protocols should encrypt their communications. The result? Sure, it is a binary stream of random-looking bits.

Yet to me, what matters about text protocols is not the ASCII encoding. It is the ability to read and edit the raw representation.

As long as your protocol has an unambiguous one-to-one textual representation with two-way conversion, I can inspect it and modify it with no headache.

An outstanding example of that is WASM, which converts to and from WAT: https://en.wikipedia.org/wiki/WebAssembly#Code_representatio...
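The round-trip property can be sketched with a toy binary format in Python. The record layout, field names, and text syntax below are invented purely for illustration (far simpler than real WASM/WAT), but they show the point: as long as the binary and text forms convert losslessly in both directions, you can inspect and edit the readable one.

```python
# Toy illustration of an unambiguous one-to-one textual representation,
# in the spirit of WASM <-> WAT. The format here is made up for the demo:
# a binary record of two big-endian u16 fields, and a one-line text form.
import struct

def to_text(blob: bytes) -> str:
    """Disassemble the binary record into a readable text line."""
    rec_id, flags = struct.unpack(">HH", blob)
    return f"record id={rec_id} flags={flags:#06x}"

def to_binary(text: str) -> bytes:
    """Reassemble the text line back into the exact same bytes."""
    fields = dict(part.split("=") for part in text.split()[1:])
    return struct.pack(">HH", int(fields["id"]), int(fields["flags"], 16))

blob = struct.pack(">HH", 7, 0x0A)
assert to_binary(to_text(blob)) == blob  # lossless round trip
```

Edit the text form, convert it back, and you have modified the "binary" protocol with no headache; that is exactly the workflow wasm2wat/wat2wasm give you for WebAssembly.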



> All websites should encrypt their content with TLS. In fact, all protocols should encrypt their communications.

I reject the notion that encryption should be mandatory for all websites. It should be best practice, especially for a "modern" website with millions of users, but we don't need every single website encrypted.


Strongly disagree. At the very least, all sites should serve HTTPS. I don't want to get ads and spyware injected from my ISP, nor do I want everyone tracking what I read, on news sites for example. Provide HTTP if you want, but only for backcompat.


> I don't want to get ads and spyware injected from my ISP

Honestly, this is a notion that I do not understand. Why do you accept your ISP doing this? Why aren't people banding together and complaining about it in an organized manner? If there aren't alternative ISPs in your area, it means a lot of people are affected by this, so there are more voices to be heard. Why are you just accepting ISPs adding ads and spyware to your content as some force of nature that everyone else must work to keep at bay?


In many places there are no alternatives; all available vendors have a history of hijinks. It's quite hard to band many small voices together, and even when we do, government agencies pick the wrong thing.

It's not really accepting. It's more like picking the least-shitty of a shitty set of options.


But as I wrote, if there are no alternatives, then there is a larger pool of people to band together. And you can start by making noise toward the company as one voice, not with the government. Honestly, this sounds like you've given up without trying and are blaming the sites for not working around your ISP being shitty. The blame here lies with your ISP, not the sites.


These are brilliant ideas. It's amazing it doesn't actually play out this way.


> Why aren't people banding together and complaining about it in an organized manner?

Because the notion of collective organizing has been completely eroded and squeezed out of modern society at every level and replaced with "consumer choice". Is this supposed to be a difficult question or a rhetorical one? I'm not being facetious; that's the actual answer, and it's very easy to observe if you look around. (Check your watch: I'm sure it's only a matter of time until someone here literally replies telling you to get a new ISP, in fact.)

That said, even beyond the need for collective organizing of regulation for cases like ISPs, it's been about a decade since Firesheep made waves across the internet, because it turns out that just being able to snoop passwords at Starbucks was, in fact, not good, and actually quite bad. So ISPs are not the only unscrupulous actors out there, and unless you want to get into the realm of "this software is illegal to possess" (normally a pretty hot-button topic here), someone has to deal with it. The whole pathway between a user and their destination is, by design, insecure; combine that with the fact that the internet is practically a wild west, and you have a somewhat different problem.

Making system administrators adopt TLS en masse was probably the right course of action anyway, all things considered, and happens to help neutralize an array of problems here, even if you regulated ISPs and punished them excessively for hijinks like content manipulation (which I would wholeheartedly love to see, honestly.)

(The other histrionics about "simplicity" of HTTP/2 or text vs binary whatever are all masturbatory red herrings IMO so I'm just ignoring them)


This isn't about having a shitty ISP specifically; it's the fact that the network path between your machine and the server is by definition untrusted. The much harder problem is securing the entire internet so that you don't need encryption. Or you could just encrypt the content and be sure you have a clean connection.


> Why do you accept your ISP doing this?

You say this as if ISPs don't exist as natural monopolies that can unilaterally ignore customer complaints because "Who cares what you think? You're stuck with us."


Indeed. When it comes to technology, I think resiliency and robustness in general should trump almost all other concerns.

It would be nice if HTTP were extended to accommodate the inverse of the Upgrade header. Something to signal to the server something like, "Please, I insist. I really need you to just serve me the content in clear text. I have my reasons." The server would of course be free to sign the response.
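A sketch of what that might look like on the wire. The `Insecure-Downgrade` header name and its token are entirely hypothetical, invented here for illustration; the signed response is gestured at with a `Signature` field in the spirit of HTTP message signatures, not a worked-out scheme:

```
GET /article.html HTTP/1.1
Host: example.com
Insecure-Downgrade: cleartext        (hypothetical: "please serve me plain HTTP, I have my reasons")

HTTP/1.1 200 OK
Content-Type: text/html
Signature: sig=...                   (hypothetical: server signs the cleartext response)
```

The signature would let the client detect tampering in transit even though the body itself travels unencrypted, which is the integrity half of TLS without the confidentiality half.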


While I agree with you, it is best to be on the safe side. The damage from leaving the wrong website unencrypted could be massive compared to the cost of simply encrypting everything. Demanding 100% encryption is an extra layer to protect against human mistakes.


Demanding 100% encryption also locks out some retrocomputing hardware whose browsers date from the early days of the Internet. Not all sites need encryption. Where it's appropriate, most certainly; HTTPS should be the overwhelming standard. But there is a place for HTTP, and there should always be. Same for other unencrypted protocols. Unencrypted FTP still has a place.


HTTP/FTP certainly have their place, but that is not on the open internet. For retro computing and otherwise special cases a proxy on the local network can do HTTP->HTTPS conversion.
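A minimal sketch of such a gateway in Python, assuming the retro client is configured to use this machine as a plain-HTTP proxy. The hostnames, port, and handler are illustrative only; a real deployment would need error handling, header forwarding, and sensible timeouts:

```python
# HTTP -> HTTPS gateway sketch for a local network: the old machine speaks
# cleartext HTTP to this box, which fetches the real resource over TLS.
import http.server
import urllib.request

def upgrade_url(url: str) -> str:
    """Rewrite the plain-HTTP URL the old client asked for to HTTPS."""
    if url.startswith("http://"):
        return "https://" + url[len("http://"):]
    return url

class GatewayHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # For proxy-style requests, self.path is the full URL.
        with urllib.request.urlopen(upgrade_url(self.path)) as resp:
            body = resp.read()
        # Relay the fetched body to the LAN client in clear text.
        self.send_response(resp.status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To run on the gateway box (not started here):
#   http.server.HTTPServer(("0.0.0.0", 8080), GatewayHandler).serve_forever()
```

The retro machine never sees TLS at all, while everything that crosses the open internet stays encrypted.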


> HTTP/FTP certainly have their place, but that is not on the open internet.

Then it ceases to be the "open" internet.


It's unfortunate that there doesn't seem to be a turn-key solution for this at the moment. I'm currently using Squid so I can use web-enabled applications on an older version of OS X, and it's great, but figuring out how to set it up took a solid day of work (partly because their documentation isn't very good), and the result will only work on macOS.

Mitmproxy is much easier to set up, but too heavy for 24/7 use.

Ideally this would be a DD-WRT package, or maybe a Raspberry Pi image, all preconfigured and ready to go...


You can always use a MITM proxy that presents an unencrypted view of the web. As long as you stick to HTML+CSS, that should be enough. Some simple JS as well, so long as it doesn't generate HTTPS URLs on the client side. Which, for retrocomputing, is probably fine.

You wouldn't want to expose these "retro" machines to the Internet anyways.



