We usually ask around on the NANOG mailing list; someone there typically already knows the right contact method or a person at an ISP, datacenter, or hyperscaler.
Let's hope we don't see any 'brown trout' in there :-)
Seriously, though, I love a nice green algae for some good old oxygen.
I just saw some along the shore of a small tributary at a local park, covered in tons of little (but not tiny) bubbles. I thought it might be oxygen.
We could use some Complex Adaptive System architecture to create galaxies. Or we could call them Gestalts instead of CAS.
In gestalts, entities use rules to interact with a read/write message bus.
Ants. Pheromone trails. Write when you find food, not if you don't.
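A toy sketch of that loop in Python (my illustration, not from any of these books): the bus is a shared blackboard, and found_food is a hypothetical stand-in for the environment.

    import random

    bus = {}  # shared read/write blackboard: position -> pheromone strength

    def found_food(pos):
        return pos == 10  # hypothetical environment: food lives at cell 10

    def ant_step(pos):
        # read the bus: prefer neighbours with stronger pheromone
        neighbours = [pos - 1, pos + 1]
        weights = [1 + bus.get(n, 0) for n in neighbours]
        pos = random.choices(neighbours, weights)[0]
        if found_food(pos):
            bus[pos] = bus.get(pos, 0) + 1  # write on success only
        return pos

    def evaporate():
        for k in list(bus):
            bus[k] *= 0.9  # old trails fade, so the gestalt can forget

    pos = 0
    for _ in range(100):
        pos = ant_step(pos)
        evaporate()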
We have the pieces for this. Entities - people. We have an r/w bus - the internet. What we don't have are rules. We can think of rules - the kind Wolfram talks about in A New Kind of Science - instead as a language. A language in the sense of Sapir-Whorf. "Twitter and Tear Gas" (Zeynep Tufekci) is a good illustration of Sapir-Whorf in internet languages. (Available as a pdf, I think.)
That language is the missing link. In Real Life there is a language for collaboration. Perhaps the task is to understand the elements of IRL collaboration and then transpose it.
One could make some guesses about the fundamentals. As we have all discovered, one fundamental is trust. IRL, trust is identity- and reputation-based. With many caveats. Perhaps we could start by considering
A New Kind of Identity?
I was born and grew up in sight of this lock where the fish doorbell is, 61 years ago.
I never knew we had this many fish swimming by until this camera surprised me.
My boat was moored 30 meters away.
My ISP's offices were a little further upstream.
2000 years ago the city of Utrecht grew around this river branch, now called the Kromme Rijn/Vecht, but originally it was the main course of the Rhine. The city started out as a Roman frontier fort at the river crossing.
Let’s be honest, saying “just fix the page tables” is like telling someone they can fly if they “just rewrite gravity.”
Yes, on Apple Silicon, the hardware supports shared physical memory, and with enough “convincing”, you can rig up a contiguous virtual address space for both the CPU and GPU. Apple’s unified memory architecture makes that possible, but Apple’s APIs and memory managers don’t expose this easily or safely for a reason. You’re messing with MMU-level mappings on a tightly integrated system that treats memory as a first-class citizen of the security model.
Oh yes, I programmed all the Amiga models, mostly at the assembly level. I reprogrammed the ROMs. I also published a magazine on the internals of all the Commodore computers and built lots of hardware for these machines.
We had the parallel Inmos Transputer systems during the heyday of the Amiga; they were much better designed than any of the custom Amiga chips.
Inmos was a disaster. No application ever shipped on one. EVER. It used a serial bus to resolve the problems that should have never been problems. Clearly you never wrote code for one. Each oslink couldn't reach more than 3 feet. What a disaster that entire architecture was.
I shipped 5 applications on an 800-Transputer Inmos supercomputer. I sold my parallel C compilers and macro assembler, as well as an OS, a Macintosh NuBus interface card, Transputer graphics cards, and a full paper copier and laser printer. I know of dozens of successful products.
Hey don't shit on my retro alternative timeline nostalgia. We were all writing Lisp programs on 64 CPU Transputer systems with FPGA coprocessors, dynamically reconfigured in realtime with APL.
s/LISP/Prolog/ and you've basically described the old "Fifth Generation" research project. Unfortunately, it turns out that trying to parallelize Prolog is quite a nightmare; the language is really, really not built for it. So the whole thing was a dead end in practice. Arguably we didn't have a real "fifth-gen" programming language prior to Rust, given how it manages to uniquely combine ease of writing parallel+concurrent code with bare-metal C-like efficiency. (And Rust is now being used to parallelize database queries, which comfortably addresses the actual requirement that Prolog had been intended for back then: performing "search" tasks on large and complex knowledge bases.)
I haven't yet read the full blog post, but so far my response is: you can have this good parallel computer. See my previous HN comments from the past months on building an M4 Mac mini supercomputer.
For example, reverse engineering the Apple M3 Ultra GPU and Neural Engine instruction sets, plus the IOMMU and page tables that prevent you from programming all processor cores in the chip (146 cores to over ten thousand, depending on how you delineate what a core is), and making your own Abstract Syntax Tree-to-assembly compiler for these undocumented cores will unleash at least 50 trillion operations per second. I still have to benchmark this chip and make the roofline graphs for the M4 to be sure; it might be more.
Lots of later follow-up research has been published.
I am proposing to fund a secure parallel operating system, GUI, applications and hardware from scratch in 20 KLOC for the European Community to gain computational independence from the US. I consider it the production version of the STEPS research.
We are in the sign-up stage for the researchers, programmers and chip designers and have regular meetings and presentations [1].
Half a trillion euros is the wider funding pool: several hundred million for European chips and operating systems, billions for European chip fabs, tens of billions for buying the secure EU software and chips for government, schools and the military.
An unsolved problem is how to program a web browser in less than 20 KLOC.
I think that the STEPS research was a resounding success, as was proven by the demonstration of the software system in Alan Kay's talks [2] and confirmed by studying the source code. As mentioned in my earlier HN post, I have a working version of Frank and most other parts of the STEPS research.
> An unsolved problem is how to program a web browser in less than 20 KLOC.
Can you even specify a modern web browser in under 20k lines of English? Between the backward compatibility and huge multimedia APIs, including all the references, I'd be surprised.
...Yields about 36k lines of C++. With the libraries, the LOC count balloons to 310k.
If a still-in-alpha browser already has 300k lines of code to deal with, there's very little chance that a spec-compliant browser will be able to do the same within 30k lines.
I made the conversion of the Squeak version to Pharo many years ago, and I just tried to make it work in the latest version (which was not straightforward because Pharo deprecated and removed some Morphic parts it used). So, mostly curiosity about whether it can still work and, if yes, how well or poorly.
I am definitely interested, as someone who has been doing independent research on the work of STEPS, and particularly Piumarta and Warth, for the past few years — I'm not sure how to get in contact with this initiative. Any pointers?
Honestly, I think the focus should move beyond Smalltalk; it showed what computers could be like in the 80s, but in the age of multi-core and pervasive networking, some of its ideas do not map well.
My research these days is on L4-type microkernels, capabilities (which are an improvement over "basic" object orientation), and RISC-V processors with CHERI technology. Incidentally, I just learned there is a European company working on this integration (Codasip), which would finally allow one to write distributed and secure systems starting from simple primitives such as message passing.
If you know where to contact people working on these kinds of problems, EU-based, I am most interested. Email in profile.
Free (libre) software is already independent of the US by virtue of being open source and free. In what way would your solution offer more/better independence?
I am all for a production 20K trusted free+open computing base, but … I don’t understand the logic.
It's humanly impossible to know what a program does when it grows beyond the size anyone can read in a reasonable amount of time.
For comparison, consider this: I'm in my late 40s and I've never read In Search of Lost Time. My memory isn't what it used to be in my 20s... The volumes together run about 5K pages, so about 150K lines. I can read about 100 pages per day, so it will take me about two months to read the whole thing (likely a lot longer, since I won't be reading every day, won't read as many as 100 pages every time I do, etc.). By the time I'm done, I will have already lost some memories of what happened two months ago. Also, the beginning of the novel will have to be reinterpreted in the light of what came next.
Reading programs is substantially harder than reading prose. Of course, people are different, and there is no hard limit on how much program code one can internalize... but there's definitely a number of lines beyond which most programmers become unable to process the entire program. If we want programs to be practically understandable, we need to keep them shorter than that number.
You have just given the rationale for STEPS, which I am aware of and agree with.
But the claim was that the EU should embark on and fund this to "gain independence from the US", even though free software already gives you that independence.
So, my question is: in what way would this project make the EU less dependent?
North Korea reportedly has a Linux distribution, for example.
> even though free software already gives you that independence.
No, not in the way I'd want (and probably not in the way the parent wants), for all the same reasons. If you are given something you cannot understand, you depend on the provider to support it. Even if your PC were shipped with the blueprints for the CPU, you'd still depend on the CPU manufacturer to make your PCs. The fact that you can sort of figure out how the manufacturer made one doesn't make you the real owner of the PC, because the complexity of the manufacturing process makes it prohibitively expensive for you to become the PC's true owner.
But let's move this back into the software world, where the problem is just as real (if not more so). Realistically, there are only two web browsers, and the second one makes every effort to alienate its users and die forgotten and irrelevant. Chrome (or Chromium and co.) is "free", but it is so complex that if you wanted a substantial change to its behavior, you alone wouldn't really be able to effect that change. (Hey, remember user scripts? Like in Opera before it folded and became a Chromium clone? They were super useful, but adding that functionality back would be impossible nowadays without a major team effort.)
So... Chromium and co. aren't really free. They are sort-of free.
There are, unfortunately, many novel and insidious ways in which software freedom is attacked; subversion attempts come in a relentless tide. Complexity is one of the enemies of software freedom.
There are a lot of people in Europe working on KDE, including the really open web browser. They are the provider.
The problem of a web implementation not being a small thing is inherent, due to the size of the spec. You can definitely make a browser with many of the same functional capabilities in 20K lines, but it won't render the existing web or be a replacement for Chrome.
Many companies have a customized Linux kernel which means you aren’t actually dependent on the provider.
In my opinion, the GGP's claim that the EU should fund their STEPS-like project because "it will help avoid American dependence" is... not in line with reality, just a straw-man argument to grab available funds.
Other than that, I agree it’s desirable for everyone to have such a thing. But not in any way because of American hegemony over Chrome.
>An unsolved problem is how to program a web browser in less than 20 KLOC.
How about instead of a full web runtime, you change the problem to be implementing (or inventing) server and client protocols for common web services? (Vlogging, Q&A forums, microblogging, social bookmarking, wikis, etc.)
The reason the web won is that it does NOT need specific clients for every single thing.
Essentially every kind of service (e.g. email, blogging, Q&A, live news) is available without JavaScript, that is, through a pure HTML-over-HTTP interface. The problem with a-standard-protocol-per-service is that new uses arrive in a distributed, unplanned manner.
Looking at instant messaging history is instructive: there were 3 protocols in major use (AIM, MSN, ICQ) and about 20 others in common use. The "standards" committee was sabotaged by the major players for years and eventually disbanded, culminating in the only open option, XMPP, winning by default (not major use, just some use), except the providers explicitly chose NOT to interop (Facebook, WhatsApp when it was independent, Google Chat).
People still use clients for certain services; otherwise many sites wouldn't still be making them.
Of course you're right that this would all be hardcoded and would not allow new types of sites to work right away.
I don't know that you'd even need a protocol or service for each category of site. It would probably make more sense to use the same architecture for all types of services with something like a manifest to determine the mode. I think the challenge would be making the APIs public in a way that would be practical for the servers implementing them.
I think you misunderstood me. I'm not at all describing anything like an application runtime such as the web. By architecture I meant something more like server APIs that are flexible enough to be used as the backend for different kinds of sites.
For example: instead of an API for microblogs and another for blogs and another for news sites, it could just be one API with flags or something that determines which other calls are used and how.
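A rough sketch of that idea (all the names here are invented for illustration): the server publishes a manifest of which calls it supports, and one generic client adapts to it.

    # Hypothetical manifest-driven service API; every name is made up.
    MANIFEST = {
        "service": "example.org",
        "features": ["posts", "follows"],  # no "live_chat" on this site
        "post_max_len": 280,               # microblog-style flag
    }

    def client_capabilities(manifest):
        # A generic client reads the manifest and enables features,
        # instead of shipping one bespoke client per kind of site.
        ui = []
        if "posts" in manifest["features"]:
            ui.append("compose box (limit %d)" % manifest["post_max_len"])
        if "live_chat" in manifest["features"]:
            ui.append("chat panel")
        return ui

    print(client_capabilities(MANIFEST))  # ['compose box (limit 280)']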
So let’s say we have Blogger with a Blogger API, and Twitter with a Twitter API, and by some miracle they agree on a merged-with-flags API.
Along comes Tumblr, and … the APIs don’t fit, in the sense that they significantly limit the planned functionality.
Now what? A new API! Or a committee to update the existing API!
Central planning doesn’t work for these things. And when you do finally have agreement on something (e.g. email), you either stagnate because no one can improve things, or get incompatible implementations because a significant player decides they will break the standard for some reason (e.g. recall and TNEF in Outlook, which only work in Outlook).
The internet started the way you describe (with finger .plan for “social” status, Unix talk for one-to-one, IRC for chat, email, NNTP, Gopher, FTP, X Windows, RDP, etc. etc.). All of these are 98% web-based these days, because the protocol+client+server model cannot adapt quickly enough. The modern web, with presentation layer + runtime code delivery + transport protocol, does allow lightning-fast evolution.
So we had finger, talk, irc, smtp, nntp, ftp, etc, all layered on TCP.
And now we have dozens of popular implementations of roughly the same functionality layered on REST/HTTP or JSON/WebSockets or whatever.
I suggest that the complexity is basically the same, from a programmer’s point of view. You define messages and a state machine and a bunch of event handling …
The UI is now universal (mostly) but the programming model for HTML/CSS isn’t simpler than say Xaw/Xt: it is more capable, but putting together a decent UI for a browser-based email client is not substantially easier than doing it in Xaw/Xt.
With one exception: our programming languages are better, and the ecosystem of libraries and frameworks makes what would once have been weeks of work an import statement.
We could do the same things in the same time using custom protocols over raw TCP as we do using JSON over WebSockets, using modern tooling, but the world has moved on. The entire ecosystem of libraries and services and network infrastructure channels you into using a vastly more complex stack.
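To make “messages and a state machine” concrete, here’s a toy sketch (my illustration, not anyone’s real protocol): the protocol logic doesn’t care whether the bytes arrive as TCP lines or WebSocket messages; only the framing around it differs.

    import socketserver

    def make_session():
        # transport-agnostic state machine for a toy line protocol
        state = {"authed": False}
        def handle(msg):
            if not state["authed"]:
                if msg.startswith("HELLO "):
                    state["authed"] = True
                    return "OK " + msg[6:]
                return "ERR expected HELLO"
            return "ECHO " + msg
        return handle

    class LineHandler(socketserver.StreamRequestHandler):
        # framing for raw TCP: one message per line; a WebSocket server
        # would feed the same make_session() handler instead
        def handle(self):
            session = make_session()
            for line in self.rfile:
                reply = session(line.decode().rstrip("\r\n"))
                self.wfile.write((reply + "\n").encode())

    # socketserver.TCPServer(("", 7000), LineHandler).serve_forever()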
The point of the flags, manifests or whatever is so functionality can be set by the site. Like, one site wants to support live chat and another wants to support bulletin-board-style posting.
The web is the best application runtime for making clients. However, I don't think its existence invalidates the creation of these kinds of protocols and server APIs. In fact, some web standards, such as RSS feeds, could be described as such.
This is something I often think about — if I understood you correctly. It sounds like an evolution of Gopher, with predefined structures for navigation, documents, and media. When we browse, we care more about the content than the presentation. There’s no real need for different layouts for blogs, documentation, news, forums, CRUD apps, streaming, email, shops, banking, and so on. If the scope were tightly restricted, implementing clients and servers would be much simpler. But it's just something I wonder about...
Yeah, that's right. Though it need not be just one protocol. Many sites already have clients. It's just that the APIs are typically controlled by the site, are not client-neutral, and require credentials, as opposed to something like an RSS feed.
I feel that it's worth mentioning that Kay and others believe the web browser has a fundamental flaw: you send data formats that the browser interprets rather than self-contained "objects" that know how to execute themselves.
This is why we've been stuck with tech like CSS, JavaScript, and HTML and it's so hard to break out.
Their version of a browser would likely be an address bar with the ability to safely run arbitrary programs (objects) in the space below.
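A toy contrast, purely my sketch of the idea (a real system would need a proper sandbox): the "data format" document needs the browser to already understand it, while the "object" carries its own behavior and only needs a safe place to run.

    # A data-format document: inert until the browser knows its format.
    data_document = {"format": "markdown-ish", "body": "# hello"}

    # A self-contained "object": it brings its own renderer along.
    object_document = {
        "code": "def render(screen): screen.append('HELLO, drawn by me')",
        "state": {},
    }

    def run_object(obj, screen):
        env = {}                # stand-in; a real browser would sandbox this
        exec(obj["code"], env)  # the object defines its own behavior
        env["render"](screen)

    screen = []
    run_object(object_document, screen)
    print(screen)  # ['HELLO, drawn by me']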
> ... studying the source code. As mentioned before in my earlier HN post, I have a working version of Frank and most other parts of the STEPS research.
If you've managed to get most of it working or recompiled, please consider writing a detailed blog post documenting your process.
This would be an invaluable resource for the people who are interested in the results of the STEPS project, showing your methodology and providing step-by-step instructions for others to follow.
I don't think you realize how many people have attempted this before and failed.
> An unsolved problem is how to program a web browser in less than 20 KLOC.
That would be amazing if possible, but I wonder: since "the web" is so full of workarounds and hacks, would it really be usable in most scenarios if done so succinctly...
I propose a different lens to look at this problem.
A neural net can be defined in less than 100 LoC. The knowledge is in the weights. What if we went from the source code of the web (HTML, CSS, JS, WASM) directly to a generated interactive simulation of said web? https://gamengen.github.io
What if this blob of weights could interpret way more stuff, not just the web?
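For a sense of scale on the "under 100 LoC" point, here is a complete two-layer net with training (a minimal sketch, assuming numpy; XOR is just a stand-in task). Everything it learns ends up in the two weight matrices:

    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(2, 8))   # all the "knowledge" will live in
    W2 = rng.normal(size=(8, 1))   # these two weight matrices

    def forward(x):
        h = np.tanh(x @ W1)
        return h, h @ W2

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
    y = np.array([[0], [1], [1], [0]], float)
    for _ in range(2000):
        h, out = forward(X)
        g_out = 2 * (out - y) / len(X)      # d(mean sq. error)/d(out)
        g_h = g_out @ W2.T * (1 - h ** 2)   # back through tanh
        W2 -= 0.1 * (h.T @ g_out)
        W1 -= 0.1 * (X.T @ g_h)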
Yes, what if instead of the computer being an Internet Communications Device (as Steve Jobs called the iPhone), it would just pretend to allow us to communicate with other humans while actually trapping us in a false reality, as if we were all in the Truman Show?
It might work, as indicated by the results in your link ("Human raters are only slightly better than random chance at distinguishing short clips of the game from clips of the simulation."), but the result would be a horrific dystopian nightmare, so why would we do this to ourselves?
Anyway, there is one aspect where the STEPS work is similar to this idea, in that it tries to build a more concise model of the system. But it does this using domain-specific languages rather than lossy blobs of model weights, so the result is (ideally) the complete opposite of what you proposed: A less blobby, more transparent and more comprehensible expression of what the system does.
We already interact with false reality through our "old school" computers; the internet is full of bots arguing with each other and with us. But my proposition doesn't have to distort the interpreted content.
Neural nets (RNNs) are Turing-complete, so they can simulate a web browser 1:1. In theory, of course. Let's say we find a way to train a neural net to identically simulate a web browser. The weights of this blob might at first seem like opaque nonsense, but in reality it would/could contain a more efficient implementation than whatever we have come up with.
Alan Kay believed computer science should take its cues from biology. Rather than constructing software like static buildings, we ought to cultivate it like living, evolving organisms.
I’m very fascinated by this, and I hope that your proposal gets approved! I’m a community college instructor in Silicon Valley, and my plan this summer (which is when I have nearly three months off from teaching) is to work on a side project involving domain-specific languages for systems software. I’ve been inspired by the STEPS project, and I dream of systems being built with higher levels of abstraction, with very smart compilers optimizing them.
Why not collaborate? It will help you avoid reinventing some wheels.
For example make a 3D version of the 2.5D graphics Nile/Gezira. You could do it in less than 500 lines of code and within 3 months.
Other system software could be a new filesystem in 600 lines of code or a TCP/IP stack in 120 LOC.
I also think a SPICE or physics simulator could be around 600 lines of code.
I'll do the parallelizing optimizing adaptive compilers and autotuners (in OMeta2).
I intend to target a cluster of M3/M4 Macs, each with a 32-core CPU, an 80-core GPU and a 32-core Neural Engine, with an estimated 80 trillion operations per second and 800 GB/s memory bandwidth each. A smaller $599 base-model M4 Mac mini would do between a fifth and a third of that performance.
Together we could beat NVIDIA's complexity and performance per dollar per watt in a few thousand lines of code.
It is exciting. The life's work of a dozen people.
Imagine proving the entire IT business field, Silicon Valley and computer science wrong: you can write a complete operating system and all the functionality of the major apps (word processing, graphics, spreadsheets, social media, WYSIWYG, browsers) and the hardware it runs on in less than 20000 lines of (high-level language) code. They achieved it a few times before: in 10000 lines (Smalltalk-80 and earlier versions), a little over 20000 (Frank) and 300000 lines (Squeak/Etoys/Croquet), with a few programmers in a few years.
Not like Unix/Linux/Android/MacOS/iOS or Windows in hundreds of millions of lines of code but in orders of magnitude less.
> They achieved it a few times before in 10000 lines (Smalltalk-80 and earlier versions), a little over 20000 (Frank) and 300000 lines (Squeak/Etoys/Croquet)
Smalltalk-80 is significantly more than 10kloc. 6–10kloc gets you a Blue Book virtual machine without the compiler, editor, GUI, class library, etc. The full system is about 80kloc (Cuis is a currently runnable version of it, plus color: https://cuis.st/). Nobody's seen Frank (though you say you have a mostly working copy, I note that you haven't included a link). The 300kloc estimate for Squeak with all the trappings is at least in the ballpark but probably also a bit low.
None of the three contained spreadsheets, "social media" similar to Mastodon or MySpace, or a web browser, though they did contain a different piece of software they called a "browser" (the currently popular term for such software is "IDE" or "editor".)
You could implement those features inside that complexity budget, as long as you weren't trying to implement all of JS or HTML5, but they didn't. In the case of Smalltalk-80, those things hadn't been invented yet—though hypertext had, and Smalltalk-80 didn't even include a hypertext browser, though it has some similar features in its IDE. In the other cases it's because they didn't think it was a good idea for their purposes. The other two systems do support hypertext, at least.
These systems are indeed very inspirational, and they pack a lot of functionality into a small amount of code, but exaggerating their already remarkable achievements does nobody any favors.
Smalltalk-76 was around 10k lines, though probably you need to leave out the microcode/VM to get that number, I forget. (I have the source I'm thinking of on another computer powered down at the moment.) -80 was definitely bigger but -76 was a lot more like it than -72 was.
Yeah, that seems about right. The Smalltalk-76 VM was pretty small, though. A lot smaller than the -80 VM. I think it's fair to say that Smalltalk-76 had WYSIWYG word processing and graphics, including things like paint programs. Like Smalltalk-80, I think, it's missing spreadsheets, social media, and hypertext browsers.
You can see for yourself, e.g. by looking at the Smalltalk emulators that run in the browser, reading Smalltalk books, etc.
I think it's the "Blue Book" that was used by the Smalltalk group to revive Smalltalk-80 in the form of Squeak; it's well documented, for instance, in the "Back to the Future" paper. I haven't had the fortune of studying Squeak or other Smalltalks in depth, but it seems fairly clear to me that there are very powerful ideas being expressed very concisely in these systems. Likewise with VPRI/STEPS.
So although it might be somewhat comparing apples to oranges, I do think that when, e.g., Alan Kay mentions in a talk that his group built a full personal computing system (operating system, "apps", etc.) in ~20 kLOC (iirc, but it's the same order of magnitude anyway), it is important to take this seriously and consider the implications.
Similarly when one considers Sutherland's Sketchpad, Engelbart's mother of all demos, HyperCard, etc. and contrasts them with (pardon my French) the absolute trash that is most of what we use today (web browsers - not to knock the people who work on them, some of whom are clearly extremely capable and intelligent - generally no WYSIWYG, text and parsing all over the place, etc. etc.)
Like, I just saw a serious rendering glitch just now while typing this, where some text that came first was being displayed after text that came later, which made me go back and erase text just to realize the text was fine, type it again, and see the same glitch again. That to me seems completely insane. How is there such a rendering error in a textbox in 2025 on an extremely simple website?
And this all points to a great deal of things that Alan Kay points out. Some of his quips: "point of view is worth 80 IQ points", "stop reinventing the flat tire", and "most ideas are mediocre down to bad".
Guess I misunderstood your question, and also went on a bit of a rant.
morphle said "you can write a complete operating system and all the functionality of the major apps (word processing, graphics, spreadsheets, social media, WYSIWYG, browsers) and the hardware it runs on in less than 20000 lines of (high level language) code. They achieved it a few times before in 10000 lines (Smalltalk-80 and earlier versions), a little over 20000 (Frank) and 300000 lines (Squeak/Etoys/Croquet) and a few programmers in a few years."
To which you replied "did they?"
To which I replied something along the lines of "you can take a look at Smalltalk systems" to answer your question. To clarify, I meant you can look at the extent of what they were capable of, and look at their code. Which, again, to me is a bit apples to oranges, but is nonetheless something that ought not to be dismissed.
I think one difference between these big pieces of software and the older systems is that the old systems ran on bespoke hardware and only one platform, whereas UNIX & Windows must support a lot of different hardware.
That's not to say they don't seem far too large for what they do, but it is a factor. How much code is in ROM vs loaded at runtime?
It also depends on what you include in those LoC. Solitaire, paint, etc? I18N files?
Not that I've done the LOC calculations, but I would guess Alan Kay, etc. include, e.g., Etoys, the equivalent of paint, and some games in what they consider to be their personal computing system, and therefore in the ~20 kLOC.
And hardware is one of the crucial points Alan makes in his talks. The way he describes it, hardware is a part of your system and is something you should be designing, not buying from vendors. The situation would be improved if vendors made their chips configurable at runtime with microcode. It doesn't seem like a coincidence to me that a lot of big tech companies are now making their own chips (Apple, Google, Amazon, and Microsoft are all doing this now). Part of it is the AI hype (a mistake in my opinion, but I might be completely wrong there; time will tell). But maybe they are also discovering that, while you can optimize your software for your hardware, you can also optimize your hardware for the type of software you are trying to write.
Another point is that any general-purpose computer can be used to simulate any other computer, i.e. a virtual machine. Meaning if software is bundled with its own VM and your OS doesn't get in the way, all you need for your software to run on a given platform is an implementation of the VM for that platform. Which I think raises many questions, such as "how small can you make your OS" and "is it possible to generate and optimize VM implementations for given hardware".
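The core of such a VM can be startlingly small; here's a minimal stack-machine sketch (a toy of mine, not any particular VM) where porting means reimplementing only this loop:

    def run(program):
        # a minimal stack VM: programs ship as (opcode, argument) pairs
        stack, pc = [], 0
        while pc < len(program):
            op, arg = program[pc]
            if op == "push":
                stack.append(arg)
            elif op == "add":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "print":
                print(stack.pop())
            pc += 1

    run([("push", 2), ("push", 3), ("add", None), ("print", None)])  # 5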
Also, something that came to mind is a general point on architecture, again from Alan Kay's ideas. He argues that biological systems (and the Internet, specifically TCP/IP, which he argues takes inspiration from biology) have the only architecture we know of that scales by many orders of magnitude. Other architectures stop working when you try to make them significantly bigger or significantly smaller. Which makes me wonder about much of hardware architecture being essentially unchanged for decades (with a growing number of exceptions), and likewise with software architecture (again with exceptions, but it seems to me that modern-day Linux, for instance, is not all that different in its core ideas from decades-old Unix systems).
In this respect, Kay's methods seem to be merging with those of Chuck Moore: the difference lies in that Moore doesn't seem to perceive software as a "different thing" from hardware - the Forth systems he makes always center on extension from the hardware directly into the application, with no generalization to an operating system in between.
I don't think supporting a lot of different hardware is a big factor, but if you think it is, building standardized hardware to your liking is a pretty affordable thing to do today. You can use a highly standardized Raspberry Pi, you can implement custom digital logic on an FPGA, or you can wire together some STM32 or ESP32 microcontrollers programmed to do what you want. Bitbanging a VGA signal is well within their capacity, you can easily get US$3 microcontrollers with more CPU power than a SPARC 5, and old DRAM is basically free.
I think it's probably possible to achieve the level of simplification we're talking about, but as I explained in https://news.ycombinator.com/item?id=43332195, the older systems we're talking about here are in fact significantly more code than they are being represented as here.
L10n message catalogs (.po files, which I think is what your remark about i18n is intended to refer to) are not conventionally considered to contain lines of source code.
You can write a lot of games in not much code, especially when efficiency isn't a major concern. I've made some efforts in this direction myself.
Space Invaders without a game engine, using only a color-rectangle-fill primitive for its graphics, is about 160 lines of JS http://canonical.org/~kragen/sw/dev3/invaders ; when I factored out a simple game engine to make the code more readable, and added some elaborate explosion effects, it came to closer to 170 lines of JS http://canonical.org/~kragen/sw/dev3/qvaders. (These have a bug I need to fix where they're unplayably fast on displays running at 120Hz or more.)
And, though it doesn't quite rise to the level of being a game, a Wolf3D-style raycasting engine with an editable maze world is about 130 lines of JS http://canonical.org/~kragen/sw/dev3/raycast.
My experience writing these tiny games is that a lot of the effort is tuning parameters. If Tetris speeds up too fast, it's too frustrating; if it speeds up too slowly, it's boring. If the space invaders drop too many bombs, or advance too quickly, it's too frustrating; if too few or too slowly, it's boring. I spent twice as much time on the explosions in Qvaders as on writing the entire rest of the game. This gameplay-tuning work doesn't show up as extra source code. All of these games significantly lack variety and so don't reward more than a few minutes of play, but that's potentially fixable without much extra code, for example with interesting phenomena that stochastically occur more rarely, or with more level designs.
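In code, that tuning is just a handful of constants (these particular numbers are invented for illustration, not taken from Invaders or Qvaders), which is why weeks of playtesting barely register in the line count:

    # Invented example numbers: the tuning that eats the time barely
    # shows up as code.
    BOMB_RATE      = 0.012  # bombs per invader per frame; 0.02 felt unfair
    ADVANCE_STEP   = 4      # pixels per edge bounce; 6 was too fast
    SPEEDUP_FACTOR = 1.15   # per wave; 1.3 made wave 3 unwinnable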
MacPaint, for what it's worth, is about 2000 lines of assembly and 3500 lines of Pascal: https://computerhistory.org/blog/macpaint-and-quickdraw-sour... but you could clearly put together a usable paint program in much less code than that, especially if you weren't trying to run on an 8MHz 68000 with 128KiB of RAM.