
Instead of compiling and running code, just replace all of your code with composable functions that are each small enough to look up the results for in a precomputed hardware dictionary. In this way you have a Turing-complete CPU with only one instruction.

I started this comment as a tongue-in-cheek satire of yours, but now I’m honestly wondering if it could be a viable idea for a radically simplified CPU (I’m not a computer engineer). I suppose the lookup tables rapidly become too large, possibly before they are useful?
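To make the idea concrete, here's a rough sketch in Python of what "compute by lookup" means (the 4-bit width, names, and dictionary layout are all arbitrary choices for illustration):

    # Precompute every 4-bit multiply into a dictionary, truncating to
    # 4 bits. "Executing" an operation is then a single lookup.
    MUL4 = {(a, b): (a * b) & 0xF for a in range(16) for b in range(16)}

    def mul4(a, b):
        # The machine's only "instruction": look the answer up.
        return MUL4[(a, b)]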



For an 8-bit CPU you could maybe do it. For a 64-bit CPU, the lookup table for addition alone would be astronomically large: two 64-bit operands means 2^128 possible inputs. (You can of course do addition in smaller increments, like adding one digit at a time and keeping track of the carry, but then you've just built a normal full adder.)
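A rough Python sketch of that incremental approach, byte at a time (table layout and names are my own invention): the byte-pair add table already has 2^16 = 64K entries, and tracking the carry-in bit doubles it.

    # Precompute (low byte, carry out) for every byte pair plus a
    # carry-in bit: 2 * 256 * 256 = 128K entries.
    ADD_LUT = {
        (a, b, c): ((a + b + c) & 0xFF, (a + b + c) >> 8)
        for a in range(256) for b in range(256) for c in (0, 1)
    }

    def add64(x, y):
        # Add two 64-bit values one byte at a time, carrying between steps.
        result, carry = 0, 0
        for i in range(8):
            a = (x >> (8 * i)) & 0xFF
            b = (y >> (8 * i)) & 0xFF
            byte, carry = ADD_LUT[(a, b, carry)]
            result |= byte << (8 * i)
        return result & 0xFFFFFFFFFFFFFFFF, carry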

The biggest issue is that this CPU would be conceptually simple but very difficult to make fast. Memory is slow, and accessing a 64K-entry lookup table uses more transistors than just doing the addition.


Just build eight of them and chain them together to get 64 bits.

(Beyond just being a joke, this is often how things like this actually get scaled: 1-bit adders become 2-bit with a carry line between them, then 3-bit with another carry line, and so forth, ad infinitum and beyond. The real joke is the complexity hidden in however you "just" chain them together.)
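For illustration, a minimal Python sketch of exactly that chaining, a ripple-carry adder built from 1-bit full adders (the width and names are mine):

    def full_adder(a, b, carry_in):
        # One-bit full adder: the unit being chained.
        s = a ^ b ^ carry_in
        carry_out = (a & b) | (carry_in & (a ^ b))
        return s, carry_out

    def ripple_add(x, y, width=64):
        # Chain `width` one-bit adders together via the carry line.
        result, carry = 0, 0
        for i in range(width):
            bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
            result |= bit << i
        return result, carry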


Again, I'm no expert here, but I find this stuff fascinating. Could the simplicity possibly allow some radically different CPU design that makes the lookups alone nearly instantaneous? I could imagine some types of optical/photonic physical 3D hash-table structures, even ones where the same physical structure could support a large number of read operations in parallel if pre-arranged by a compiler to not physically interfere. I imagine a CPU with only one instruction could be physically minuscule, and could therefore pack a lot of cores into a small space.

Hypothetically, if it were orders of magnitude faster than a normal CPU, one could still perform rapid computation on larger numbers as needed while keeping the CPU physically 8-bit, yet be able to revert to even higher performance when less precision is needed.


If you're interested in One Instruction Computers, check out: https://en.m.wikipedia.org/wiki/One-instruction_set_computer


Thanks! I half expected to be told one instruction computers were impossible.


There are packages for those things on npm, but then you'll have to start using JavaScript...

https://www.npmjs.com/package/@samuelmarina/is-even

The JavaScript crowd. They are always one step ahead!


Haha, a 100 MB library. It doesn't say so, but is this really a dictionary of numbers with even/odd as values? I love that they have an entirely separate is-odd package.

//edit: looked at the code, and it's literally 100 MB of if-else statements, and a bunch of them are wrong! The GitHub pull requests to add additional integers are hilarious.
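For anyone who doesn't want to click through: the package itself is JavaScript, but the pattern amounts to something like this (sketched abridged in Python; the fallback message is made up), versus the one-line modulo check:

    def is_even(n):
        # One hand-written branch per integer, as in the package.
        if n == 0:
            return True
        if n == 1:
            return False
        if n == 2:
            return True
        # ... roughly 100 MB more of these ...
        raise ValueError("integer not supported yet")  # hypothetical fallback

    def is_even_the_boring_way(n):
        return n % 2 == 0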



I've been (very slowly) working on this for a 4-bit CPU. Not for any practical reason, just for fun.


Any more details you could share? Is this a physical CPU design, or a VM/assembler?


CPUs do use a ton of LUTs under the hood.


I wonder if this was inspired by this discussion; it just made the front page. I didn’t realize the famous Pentium FDIV bug was due to an error in a lookup table used for division: https://news.ycombinator.com/item?id=42391079



