It's now been about five years since I first heard about the Rust programming language. It was when I was starting to write an operating system in C and the "Rust people" (as they often seem to refer to themselves, which should already be a red flag) told me that I should write it in Rust. (Later, they have told me several times to rewrite it in Rust.) Rust is "memory safe", which in the context of Rust means that the whole language is designed in such a way that it is impossible to have memory-related bugs in programs written in Rust.
More than a year ago I wrote a document that I named "UEFI fact sheet". The purpose was to create a more truthful counterpart to a similarly named document which the UEFI forum was spreading on various Internet sites. For a long time my document was the first search result on most search engines when searching for "UEFI fact sheet". Recently I noticed that Bing (which is owned and maintained by Microsoft) had moved my document to the second page of search results, and the first result now points to a disinformation document published by the UEFI forum.
ST-DOS is a DOS implementation, but it is not meant to be a clone of MS-DOS. It is mostly syscall-compatible with MS-DOS, but the driver API and many other things are completely different. After all, the definition of DOS is just "disk operating system".
All real-mode programs that are compiled with Watcom C/C++ should work. The most recent versions of Watcom's protected-mode runtime don't currently work, because they use some undocumented MS-DOS syscalls that are not implemented in ST-DOS. I intend to create a compatibility TSR that will solve most issues with those MS-DOS programs.
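To make "syscall-compatible" concrete, here is a minimal sketch of my own (not taken from ST-DOS or its documentation) of the kind of call a real-mode Watcom program makes: a classic INT 21h request issued through Watcom's intdos() helper, which looks the same to the program whether the kernel underneath is MS-DOS or ST-DOS.

    #include <dos.h>

    int main(void)
    {
        union REGS r;

        /* DOS function 02h: write the character in DL to standard output. */
        r.h.ah = 0x02;
        r.h.dl = '!';
        intdos(&r, &r);   /* issues INT 21h with the registers above */

        return 0;
    }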
> I was there too. People always say this, but just because a thing changed once does not mean it will happen again.
The problem is that the web standards have now grown so much that it is impossible to write a completely new web browser from scratch. Firefox is not coming back, because Mozilla seems to prioritize things other than code quality and the actual usability of its software.
And yes, I know that the SerenityOS developers are trying to do it, but while some very advanced things work "good enough" in their browser for Twitter and Discord's web client to work to some extent, the more basic things are so broken that it cannot even render basic HTML 3.2 sites properly.
Google's end goal is probably to "deprecate" HTTP 1.x and force everyone to use their own replacement for the protocol. Their protocol is going to be like the thing they call "HTTP/2", an insanely complex protocol that is impossible for a small developer team to implement. In the end their own protocol becomes a "rolling release" protocol that only works with Google's own app, at which point they can completely stop releasing RFCs for it.
> Non-corporate-supported browsers might transition to being more friendly to this process instead of the unhelpful and scary SSL warnings provided now.
That's exactly what I would like to see happen. The current warnings make no sense, and they only make security worse.
Browsers are no different from any other applications in this regard.
When I'm talking about "lookalike domains", I mean domain names that look exactly the same as the original. That is simply not possible with 7-bit ASCII.
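As an illustration of my own (not a quote of any browser's actual logic), the check that would enforce this is tiny: if a hostname contains only 7-bit ASCII, it cannot be a visually identical homoglyph copy of a different ASCII name, so rejecting or flagging anything outside ASCII rules out that whole class of lookalikes.

    #include <stddef.h>

    /* Sketch only: return 1 if the hostname consists purely of 7-bit ASCII,
     * 0 if it contains bytes that could belong to a homoglyph lookalike. */
    static int hostname_is_plain_ascii(const char *host)
    {
        size_t i;

        for (i = 0; host[i] != '\0'; i++) {
            if ((unsigned char)host[i] > 0x7F)
                return 0;
        }
        return 1;
    }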
A self-signed certificate can also be used to make sure that the connection is private. Sometimes the private key may have leaked, and then the certificate can be "trusted" without the connection actually being private - though it's easier to just register a lookalike domain and get a certificate for it than to obtain a leaked private key.
1: You can always use stronger encryption. You don't have to use decades-old encryption that has already been compromised.
2: So clearly in this case the route wasn't trusted. The encryption was, however, used correctly, but the users were ignorant and continued using the service even after the certificate suddenly changed.
3: Intranets are vulnerable only if there are untrusted devices in the network.
As I wrote, encryption is a good thing and improves security when used correctly, but all software must respect the user's choices. Nothing can fix stupidity and ignorance.
Which? As I said, I assumed you were talking about MPPE.
> So clearly in this case the route wasn't trusted.
Then trusted routes don't exist.
> Intranets are vulnerable only if there is untrusted devices in the network.
I don't think I would ever be willing to assert an intranet is free of untrusted devices.
> software must respect the user's choices
Software performing security functions generally should not give users (or even developers) choices where they are unlikely to understand the potential consequences. If someone can supply a "--insecure-tell-fvey-my-kinks" command line argument, fine, but otherwise no.
Any choice a user can freely make is one that they can be manipulated into making. Failing to protect them accordingly because of "stupidity and ignorance" is effectively social Darwinism.
Please consider that most people don't have your level of technical sophistication, nor is it reasonable to expect them to.
> The encryption was however used correctly, but the users were ignorant and continued using the service even after the certificate suddenly changed.
When designing anything that's going to be used by the general public on the Internet, you have to keep in mind that that's the entire public, including grandma and grandpa who don't even realize that their Facebook app is not Google and post their search queries as status updates.
For fuck's sake, we can't even get professional office workers to not fall for painfully obvious phishing campaigns, and now you want to try to teach them how to recognize a bad SSL certificate?
I am living in reality. I don't want to limit the user's freedom. Sometimes people have to learn the hard way, but the other option (giving away your freedoms) is always worse.
This isn't reasonable or ethical - and it's telling that you didn't respond to my other comment replying to you.
If someone experiences harm due to use of unencrypted HTTP because they didn't understand the implications of it, they're not going to be able to make a causal association. Without the causal association, there is no opportunity to learn.
The way to "preserve freedoms" in these sorts of situations is to require a "warranty voiding" action.
After the OOM handler had halted everything for the 10th time, I finally decided to do something. (At least the kernel and the TCP/IP stack seem to be very stable, because the system still did not crash!)
I set the maximum number of sockets to 32, the maximum number of file handles in the DOS kernel to 40, and the maximum number of file descriptors per VPU process to 40. Now it should (maybe) be able to do its work without randomly running out of memory.
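For illustration only - the names below are hypothetical and not the actual ST-DOS or VPU configuration symbols - the limits above amount to a handful of compile-time caps along these lines:

    /* Hypothetical names, shown only to illustrate the limits described above. */
    #define MAX_SOCKETS              32   /* TCP/IP stack: simultaneous sockets     */
    #define MAX_KERNEL_FILE_HANDLES  40   /* DOS kernel: globally open file handles */
    #define MAX_FDS_PER_VPU_PROCESS  40   /* per-process file descriptor cap        */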