Sure, but the same goes for verbatim strings, or leading whitespace in Python. "Code formatting" questions usually only concern themselves with those degrees of freedom that do not alter the semantic meaning of the code.
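To make that concrete, here is a toy sketch (the function names and values are made up): in Python, changing only the leading whitespace changes what the code computes, so a formatter is not free to normalize it.

```python
def sum_all(xs):
    s = 0
    for x in xs:
        s += x      # indented into the loop body: adds every element
    return s

def sum_last(xs):
    s = 0
    for x in xs:
        pass
    s += x          # dedented out of the loop: adds only the last element
    return s

print(sum_all([1, 2, 3]))   # 6
print(sum_last([1, 2, 3]))  # 3
```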
That is only true if your path to AGI is to take models similar to the current ones and feed them tons of data.
Advances in architecture and training protocols can and will easily dwarf "more data". I think that is obvious from the fact that humans learn to be quite intelligent using only a fraction of the data available to current LLMs. Our advantage is a very good pre-baked model plus feedback-based training.
It should be solved by smoothing the image first, to remove frequencies above half the new sampling rate that would otherwise alias during downsampling.
The term to search for is the Nyquist–Shannon sampling theorem.
This is a well-understood part of digital signal processing.
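A minimal sketch of that approach, using Pillow purely as an example (the file names, downsampling factor, and blur radius are placeholders, not recommendations):

```python
from PIL import Image, ImageFilter

factor = 4                       # downsampling factor (placeholder)
img = Image.open("input.png")    # placeholder file name
new_size = (img.width // factor, img.height // factor)

# Naive downsampling: keeping every 4th pixel aliases high-frequency detail.
naive = img.resize(new_size, Image.NEAREST)

# Anti-aliased downsampling: blur away content above the new Nyquist limit
# first, then subsample. radius = factor / 2 is a rough rule of thumb.
smoothed = img.filter(ImageFilter.GaussianBlur(radius=factor / 2))
antialiased = smoothed.resize(new_size, Image.NEAREST)

naive.save("naive.png")
antialiased.save("antialiased.png")
```

In practice, the high-quality resampling filters in most libraries (e.g. Pillow's LANCZOS) already do this low-pass filtering internally; the explicit blur above is only to make the step visible.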
Machines suffer from mutations too; cosmic-ray-induced bitflips are entirely possible. Although since we are all spitballing anyhow, maybe you can handwave a cosmic shield along with your cosmic explorer.
It would also be interesting if the host system collapsed. That would make for great sci-fi fodder: an advanced civilization sends out probes, but by the time the FTL visitors show up, that civilization has already collapsed back to the stone age.
Strongly disagree with this. It is the default go-to for companies that cannot use cloud-based services for IP or regulatory reasons (think of defense contractors). Isn't that the main reason to use "open" models, which are still weaker than closed ones?
Just wondering how SQLite would ever work if it had zero control over this. Surely there must be some "flush" operation that guarantees that everything so far is written to disk? Otherwise, any "old" block that contains data might not have been written. SQLite says:
> Local devices also have a characteristic which is critical for enabling database management software to be designed to ensure ACID behavior: When all process writes to the device have completed, (when POSIX fsync() or Windows FlushFileBuffers() calls return), the filesystem then either has stored the "written" data or will do so before storing any subsequently written data.
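For what it's worth, the application-side pattern behind that guarantee looks roughly like this (a Python sketch; the file name and payload are made up):

```python
import os

# flush() only empties the userspace buffer; os.fsync() asks the kernel and
# drive to actually persist the data before returning.
with open("journal.dat", "wb") as f:
    f.write(b"committed record")
    f.flush()               # push Python's buffer into the OS page cache
    os.fsync(f.fileno())    # ask the OS to push it to stable storage

# On POSIX systems, making the file's existence durable may additionally
# require fsync() on the containing directory.
dir_fd = os.open(".", os.O_RDONLY)
try:
    os.fsync(dir_fd)
finally:
    os.close(dir_fd)
```

SQLite issues these fsync() calls itself at the right points in its journal/WAL protocol; whether the drive actually honors them is a separate question.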
A "flush" command does indeed exist... but disk and controller vendors are like patients in Dr. House [1] - everybody lies. Especially if there are benchmarks to be "optimized". Other people here have written up that better than I ever could [2].
You're joking, but many of the code bases I've seen that were produced by or with AI support are not maintainable by any sane human. The further you go down the AI route, the harder it is to turn back.