
I don’t really understand the initial impetus. I like scripting in Python. That’s one of the things it’s good at. You can extremely quickly write up a simple script to perform some task, not worrying about types, memory, yada yada yada. I don’t like using Python as the main language for a large application.


I love scripting in Python too. I just hate trying to install other people’s scripts.


> hate trying to install other people’s scripts.

This phrasing sounds contradictory to me. The whole idea of scripts is that there's nothing to install (besides one standard interpreter). You just run them.


By that logic, you don't install an OS, you just put the bootloader and other supporting files on your storage medium of choice and run it


> The whole idea of scripts is that there's nothing to install

and yet without fail, when I try to run basically any `little-python-script.py`, it needs 14 other packages that aren't installed by default and I either need to install some debian packages or set up a virtual environment.


It's up to the programmer whether they use external packages or not. I don't see the problem with setting up a venv, but if this is packaged correctly you could do uv run.


You're hand waving away the core problem - if I'm running someone else's script, it's not up to me if they used external packages or not, and in python using someone else's script that relies on external packages is a pain in the ass because it either leaves setting up a venv and dealing with python's shortcomings up to me, or I have to juggle system packages.

I'm sure uv can handle this _if_ a given script is packaged "correctly", but most random python scripts aren't - that's the issue we're talking about in this thread.

The whole point of a scripting language IMO is that scripts should _just run_ without me doing a bunch of other crap to get the system into some blessed state. If I need to learn a bunch of python specific things, like "just add __pycache__ to all of your .gitignore files in every directory you might run this script", then it isn't a useful scripting language to me.


Anytime I have the need to write a script, I write it myself. When I do this, I like to do it in Python, rather than Go.

I very rarely, if ever, “install” or run other people’s scripts. Because typically, a script is a specialized piece of code that does something specific the user was trying to do when they wrote it.

However, I do often install applications or libraries written by other people in Python. Typically, I don’t have a problem, but sometimes I run into dependency hell. But this is something different than me just writing scripts. For scripting, Python is great.


If they use https://packaging.python.org/en/latest/specifications/inline... then it becomes a breeze to run with uv. Not even a thing.
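For reference, the metadata is just a TOML block in comments at the top of the script, per the spec linked above (requests here is just an example dependency):

    # /// script
    # requires-python = ">=3.11"
    # dependencies = ["requests"]
    # ///
    import requests
    print(requests.get("https://example.com").status_code)

Then `uv run script.py` resolves the dependencies into an ephemeral environment and runs it.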


but then you need uv

it's not as portable


Inline script metadata itself is not tied to uv because it's a Python standard. I think the association between the two comes from people discovering ISM through uv and from their simultaneous rise.

pipx can run Python scripts with inline script metadata. pipx is implemented in Python and packaged by Linux distributions, Free/Net/OpenBSD, Homebrew, MacPorts, and Scoop (Windows): https://repology.org/project/pipx/versions.


Yes, many things can use inline script metadata.

But a script only has one shebang.


Perhaps a case for standardizing on an executable name like `python-script-runner` that will invoke uv, pipx, etc. as available and preferred by the user. Scripts with inline metadata can put it in the shebang line.

I see it has been proposed: https://discuss.python.org/t/standardized-shebang-for-pep-72....
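i.e. a script could hypothetically start with:

    #!/usr/bin/env python-script-runner
    # /// script
    # requires-python = ">=3.11"
    # ///

(`python-script-runner` being the proposed meta-runner name, not anything that exists today.)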


I get the impression that others didn't really understand your / the OP's idea there. You mean that the user should locally configure the machine to ensure that the standardized name points at something that can solve the problem, and then accepts the quirks of that choice, yes?

A lot of people seem to describe a PEP 723 use case where the recipient maybe doesn't even know what Python is (or how to check for a compatible version), but could be instructed to install uv and then copy and run the script. This idea would definitely add friction to that use case. But I think in those cases you really want to package a standalone executable (using PyInstaller, pex, Briefcase or any of countless other options) anyway.


> You mean that the user should locally configure the machine to ensure that the standardized name points at something that can solve the problem, and then accepts the quirks of that choice, yes?

I was thinking that until I read the forum thread and Stephen Rosen's comments. Now I'm thinking the most useful meta-runner would just try popular runners in order.

I have put up a prototype at https://github.com/dbohdan/python-script-runner.


Neat. Of course it doesn't have much value unless it's accepted as a standard and ships with Python ;) But I agree with your reasoning. Might be worth reviving that thread to talk about it.


> I just hate trying to install other people’s scripts.

This notion is still strange to me. Just... incompatible with how I understand the term "script", I guess.


You don't understand the concept of people running software written by other people?

One of my biggest problems with python happens to be caused by the fact that a lot of freecad is written in python, and python3 writes __pycache__ directories everywhere a script executes (which means everywhere, including all over the inside of all my git repos, so I have to add __pycache__ to all the .gitignore) and the env variable that is supposed to disable that STUPID behavior has no effect because freecad is an appimage and my env variable is not propagating to the environment set up by freecad for itself.

That is me "trying to install other people's scripts"; the other people's script is just a little old thing called FreeCAD, no big.


> That is me "trying to install other people's scripts"; the other people's script is just a little old thing called FreeCAD, no big.

What I don't understand is why you call it a "script".

> and python3 writes __pycache__ directories everywhere a script executes (which means everywhere, including all over the inside of all my git repos, so I have to add __pycache__ to all the .gitignore)

You're expected to do that anyway; it's part of the standard "Python project" .gitignore files offered by many sources (including GitHub).

But you mean that the repo contains plugins that FreeCAD will import? Because otherwise I can't fathom why it's executing .py files that are within your repo.

Anyway, this seems like a very tangential rant. And this is essentially the same thing as Java producing .class files; I can't say I run into a lot of people who are this bothered by it.


This is 99% of the complaints in these threads. "I had this very specific problem and I refuse to handle it by using best practise, and I have not used python for anything else, but I have very strong opinions".


I've had many problems with many python scripts over the many years. Problems that I did not have with other scripts from other ecosystems. And that's just sticking to scripts and not languages in general.

A random sh script from 40 years ago usually works or works with only the tiniest adjustment on any current system of any unix-like.

A random py script from 6 months ago has a good chance of not working even on the same system let alone another system of the same platform, let alone another system of a different platform.

Now please next by all means assume that I probably am only complaining about the 2->3 transition and nothing that actually applies since 15 years ago.


>A random py script from 6 months ago has a good chance of not working even on the same system

This just isn't true, and it's nothing I have experienced over many years of python programming.

Maybe your problems with python scripts (is it scripts or programs?) are an issue with the code?


I wouldn't use "script" to describe FreeCAD. Regardless, this problem is much more with FreeCAD than with Python.

> I have to add __pycache__ to all the .gitignore

I just add that, once, in my global gitignore.


It seems to be Linux specific (does it even work on other Unix-like OSes?) and Linux usually has a system Python which is reasonably stable for things you need scripting for, whereas this requires Go to be installed.

You could also use shell scripting or Python or another scripting language. While Python is not great at backward compatibility, most scripts will have very few issues. Shell scripts are backward compatible, as are many other scripting languages (e.g. Tcl), and they are more likely to be preinstalled. If you are installing Go you could just install uv and use Python.

The article does say "I started this post out mostly trolling" which is part of it, but mostly the motivation would be that you have a strong preference for Go.


Works on macOS too (Unix by way of BSD)


I just don't get how JS is any worse as a scripting language.

bla bla bla

node bla.js


It’s not worse, but Python has better batteries out of the box. TOML, CSV, real multi-threading (since 3.13), a rudimentary GUI, a much better REPL (both out of the box and the excellent, installable IPython), argparse, and a lot more.
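e.g. a sketch touching a few of those batteries, with no third-party installs (tomllib is stdlib since 3.11):

    import argparse, csv, sys, tomllib

    # parse a positional arg, read a TOML config, dump it as CSV to stdout
    parser = argparse.ArgumentParser()
    parser.add_argument("config")
    args = parser.parse_args()

    with open(args.config, "rb") as f:   # tomllib wants binary mode
        cfg = tomllib.load(f)

    writer = csv.writer(sys.stdout)
    for key, value in cfg.items():
        writer.writerow([key, value])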


Bun has a lot of this built in, plus Bun shell


You do have to worry about types; you always do. You have to know what a function returns and what you can do with it.

When you know the language well, you don't need to look this info up for basic types, because you remember them.

But that's also true for typed languages.


This is more than just trivially true for Python in a scripting context, too, because it doesn’t do things like type coercion that some other scripting languages do. If you want to concat an int with a string you’ll need to cast the int first, for example. It also has a bunch of list-ish and dict-ish built in types that aren’t interchangeable. You have to “worry about types” more in Python than in some of its competitors in the scripting-language space.
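Concretely:

    >>> "total: " + 3
    Traceback (most recent call last):
      ...
    TypeError: can only concatenate str (not "int") to str
    >>> "total: " + str(3)
    'total: 3'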


Python is great for the coder, and unholy garbage for everyone else.

If you care about anyone but yourself, don't write things in python for other people to distribute, install, integrate, run, live with.

If you don't care about anyone else, enjoy python.


Nonsense.


Ah yes, the Nextjs app with access to personally identifiable information for every federal employee.


I did find that troubling too. I can see the logic of a short-lived / well-funded project using nextjs, but for something like this that's meant to be a simple form that needs to be reliable, easy to maintain, and long lived, my first thought would be to make a classic RESTful MPA. Introduction of a complex frontend framework like next seems like it would lead to more headaches than it's worth. Had similar thoughts about the Azure vendor lock-in. I seriously doubt they had the traffic to justify needing something like Azure Functions and batch processing. I'd love to hear some more justification for why they chose this stack, and whether they considered any alternatives first.


If it was really down to two engineers, it's almost certainly what one or both of them were already comfortable or familiar with and no other reason. Six months is such a short time frame for long term projects like this that I imagine they could not spare much time for analysis of alternatives.


If you're comfortable with nextjs, you should be even more comfortable with a nodejs SSR application. It's the same thing, but simpler. The HTML doesn't even have to be pretty. We're really just querying a DB, showing forms, and saving forms. Hell, use PHP if you want.


I had assumed that these people were not junior devs left unsupervised to handle important government work.


Huh, hacker news keeps telling me we should run the government like a business though?


It's definitely a step up from PowerApps though.


is it? msft have enterprise level RBAC, what does this next.js app have?


you telling me when I stick my pen up my nose I’m hacking it?


I would call that a hack, not a good one though.


Yeah but there was no lock; somebody put a box around the doorknob without anything holding it there, and somebody removed the box and opened the door.


I’m still waiting to be able to move 2 layers at once in GIMP


No need to wait! In GIMP 3.0, you can shift-click multiple layers in the layers dockable, then drag them around on the canvas with the Move tool.


The whole “simulation hypothesis” thing has always irked me. To me, the question of whether our universe was [“intentionally” “created” by some other “being(s)”] vs [“naturally” happened] is meaningless. Whatever it was on the other side is way too insanely unfathomable to be classified into those 2 human-created ideas. Ugh the whole thing is so self-centered.


It appeals to sophomoric modern atheists who can't comprehend that infinity and nothing exists at the same time. People seek a reason "why" not realizing the question is the answer. The universe exists because 'why not?' because Infinity seeks to prevail over nothing. Nothing strikes at the heel of infinity. The truth is not in these lines or that theory but betwixt here and there and once "you" realize it, it realizes "you." Because it is you and you are it for it is itself. This may sound like my mumbo jumbo woo but once you know it knows you know it knows you know.


yes haha, it is mumbo jumbo to the uninitiated (which can mean many different things!)


You're not uninitiated, you're just testing the hypothesis. That things—including yourself—seek meaning is the meaning. Math is language, language is math as LLMs are showing us.


Came here to say most of this, also worth calling out the note at the bottom:

> Note: This research was presented as an abstract at the ACS Clinical Congress Scientific Forum. Research abstracts presented at the ACS Clinical Congress Scientific Forum are reviewed and selected by a program committee but are not yet peer reviewed.

My guess is when it gets to peer review, one of the reviewers will request at least mentioning these limitations. As it was only an abstract, it’s possible the paper itself does mention these limitations already as well.


They must have access to the full data distribution, right?


Error tolerance in this context means the parser produces a walkable AST even if the input code is syntactically invalid, instead of just throwing/reporting the error. It’s useful for IDEs, where the code is often in an invalid state as the developer is typing, but you still want to be able to report diagnostics on whatever parts of the code are syntactically valid.
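A toy sketch of the idea (not modeled on any particular parser's internals):

    # Error-tolerant parsing of a comma-separated integer list:
    # a bad token becomes an Error node instead of aborting the parse.
    def parse_int_list(text):
        ast = []
        for token in (t.strip() for t in text.split(",")):
            try:
                ast.append(("Int", int(token)))
            except ValueError:
                ast.append(("Error", token))  # record it and keep going
        return ast

    parse_int_list("1, 2, oops, 4")
    # -> [('Int', 1), ('Int', 2), ('Error', 'oops'), ('Int', 4)]

An IDE can still walk the three valid nodes for diagnostics while the user is mid-edit.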


Me: oh cool, this is interesting, I don’t quite understand what exactly that means, let me read the thread to learn more…

The thread: > Replacing ECCA1 by version with step after the direction change could save something like 1% of the ecca1 bits size. Compiling agnosticized program instead of fixed lane program by ecca1 could save something like 1% as well (just guesses). Build of smaller ECCA1 would shorten binary portion, but it would be hardly seen in the ship size.

> Using agnosticized recipe in the fuse portion would definitely reduce its size. Better cordership seed and better salvo for gpse90 would help…

Dear lord I had no idea there’s this much jargon in the game of life community. Gonna be reading the wiki for hours


Their free book "Conway’s Game of Life: Mathematics and Construction" is a great starting point - https://conwaylife.com/book/conway_life_book.pdf


You sent me down a rabbit hole: https://esolangs.org/wiki/APGsembly is mentioned in the book


And for a related rabbit hole where people actually went all the way to the bottom, there's of course the full implementation of Tetris in GoL, which was nerd-sniped by a CodeGolf challenge

https://codegolf.stackexchange.com/questions/11880/build-a-w...


Sometimes you see something that makes you wonder how it is that you get to exist in the same world as people with the drive and intelligence to do truly awesome (in the original sense) things like this. I am proud of myself when the compiler works on the first try.


I think it's awesome that they can do this amazing fun esoteric stuff, but at the same time a small part of me thinks maybe they need to be doing something more meaningful in the real world.


This small part is what makes broken people. Whoever reads this, go have fun! :)


You know what? I think I will.


I wonder, what would that be, that thing that is more meaningful?

I would make the case that, zoomed out far enough, nothing at all is meaningful, so you might as well make beautiful things, and this is a delightfully beautiful thing.


the only thing that's meaningful is having fun, everything else is a waste of time


Once a year or so I find myself on those forums and I'm always astounded how many people there are that dedicate massive amounts of time and brain power to this.


I think it appeals to the same itch that languages like Brainfuck scratch.

There's something exceedingly interesting about how you can model complexity with something extremely simple. Brainfuck is fun because it forces you to think extremely low level, because ultimately it is basically just a raw implementation of a Turing machine. I wouldn't want to write a big program in it, but it is fun to think about how you might express a complicated algorithm with it.

Similarly with CGOL, it is really interesting to see how far you can stretch really simple rules into something really complex.

I've written CGOL dozens of times, it's a common project that I do to "break in" a language I've learned, since it's not completely trivial but it's simple enough to not be frustrating, and I completely understand why math/computability-theory folks find it something to dedicate brain power to.
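The naive version really is tiny; a minimal sketch using a sparse set of live cells in Python:

    from collections import Counter

    def step(live):
        # live is a set of (x, y) tuples
        neighbours = Counter((x + dx, y + dy)
                             for x, y in live
                             for dx in (-1, 0, 1)
                             for dy in (-1, 0, 1)
                             if (dx, dy) != (0, 0))
        # born with exactly 3 neighbours, survive with 2 or 3
        return {cell for cell, n in neighbours.items()
                if n == 3 or (n == 2 and cell in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = step(glider)  # after 4 steps the glider has shifted by (1, 1)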


I have a weird love for Brainfuck. It's a tiny, incredibly simple language that you can write an interpreter for in an hour, it's effectively a super-simple byte code that can easily be extended and used as a compilation target for simple languages.

Honestly, as an educational tool, the only thing wrong with it is the name!
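It really is about an hour's work. A minimal interpreter sketch in Python (no error handling, byte cells, EOF reads as 0):

    def bf(src, data=b""):
        tape, ptr, out, inp = [0] * 30000, 0, [], iter(data)
        jumps, stack = {}, []
        for j, c in enumerate(src):  # pre-match the brackets
            if c == "[":
                stack.append(j)
            elif c == "]":
                k = stack.pop()
                jumps[k], jumps[j] = j, k
        i = 0
        while i < len(src):
            c = src[i]
            if c == ">": ptr += 1
            elif c == "<": ptr -= 1
            elif c == "+": tape[ptr] = (tape[ptr] + 1) % 256
            elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
            elif c == ".": out.append(chr(tape[ptr]))
            elif c == ",": tape[ptr] = next(inp, 0)
            elif c == "[" and tape[ptr] == 0: i = jumps[i]
            elif c == "]" and tape[ptr] != 0: i = jumps[i]
            i += 1
        return "".join(out)

    print(bf("++++++++[>++++++++<-]>+."))  # prints "A" (8 * 8 + 1 = 65)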


for those who think brainfuck is too pedestrian, have a browse through the esolang wiki:

https://esolangs.org/wiki/Language_list


StupidStackLanguage is by far my favorite:

https://esolangs.org/wiki/StupidStackLanguage


Piet is mine - the programs are 2D images: https://esolangs.org/wiki/Piet

Primarily because of the note on the "calculating pi" example program:

> Richard Mitton supplies this amazing program which calculates an approximation of pi... literally by dividing a circular area by the radius twice.

> Naturally, a more accurate value can be obtained by using a bigger program.

https://www.dangermouse.net/esoteric/piet/samples.html


One of my favorite calculations of pi is to pick random coordinates in a unit square and count how many of them are in a circle. it's so stupid and so clever at the same time.

This was recreated from memory. I think it is close but I may have a bounding bug.

    import random

    def pi(count):
      inside = 0
      for i in range(count):
        test_x = random.random() 
        test_y = random.random()
        if test_x ** 2 + test_y ** 2 < 1:
          inside += 1
        return inside / count * 4 #above is a quarter circle
    
    print(pi(2 ** 30) )


With the metatopic of this thread being obscure languages, I had some fun squeezing this into some list comprehensions (maybe someone's got an idea of how to keep track of the state within the list):

```

$ cat << EOF > pi.py

state = [0, 0, 2**8, 2**12]; _ = [print(f'\rRun {state.__setitem__(0, state[0] + 1) or state[0]}/{state[3]} | Last \u03c0: {current_pi:.6f} | *Average \u03c0: {(state.__setitem__(1, state[1] + current_pi) or state[1]) / state[0]:.6f}*', end='', flush=True) for current_pi in [(4 * sum([1 for _ in range(state[2]) if __import__("random").random()**2 + __import__("random").random()**2 < 1]) / state[2]) for _ in range(state[3])]]; print()

EOF

$ time python3 pi.py

Run 4096/4096 | Last π: 3.140625 | *Average π: 3.143051*

python3 pi.py 0.41s user 0.01s system 99% cpu 0.429 total

```

Play around with the `2**8` and `2**12` values in the state, they control the number of samples per run and the number of runs respectively.


I'm not sure about a bounding bug, but there's definitely an indent error on the return line (good old Python!)
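Dedented so the return happens after the loop finishes, it gives sensible answers:

    import random

    def pi(count):
        inside = 0
        for i in range(count):
            test_x = random.random()
            test_y = random.random()
            if test_x ** 2 + test_y ** 2 < 1:
                inside += 1
        return inside / count * 4  # above is a quarter circle

    print(pi(2 ** 20))  # 2 ** 30 as in the original works too, just slowly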



I have no idea how I'd be able to pitch this to a university (or even who I could pitch it to), but I would absolutely love to teach a computability course using Brainfuck as the language, just to really show students how low-level logic can be.

I would probably need to find a similar language with a different name though.


You might find mlatu-6[0] interesting: it’s convertible to SKI calculus but concatenative (like Forth) rather than applicative. It’s actually a subset of Mlatu, a language I created for similar reasons, to explore “how low can you go.”

[0]: https://esolangs.org/wiki/Mlatu-6


When I was an undergrad at Georgia Tech, one of my intro computer science classes had us implement something in brainfuck. Turns out college kids are quite comfortable with swear words.


Of course they are ... It's college administration that are uncomfortable


How about Assembly?


Assembly is higher level logic than brainfuck, especially on modern chips. You have built in instructions for arithmetic and conditionals/branches and you can allocate memory and point to it.

You don’t really get any of that with brainfuck. You have a theoretical tape and counters and that’s basically it.


SKI calculus is pretty neat, too. You get no tape, no counters. (But it's not quite as bad to program in as brainfuck, because you can built more ergonomic contraptions to help you along.)


SKI can, of course, be de-optimised a bit further by replacing I with SKK. You are right though that it is relatively simple to go from something that looks like a normal programming language to a pile of S and K combinators. Not the most efficient way to compute though!


Unlambda solves that problem.

"What if we had the lambda calculus without the lambda forms?" asked no one.

http://www.madore.org/~david/programs/unlambda/


Oh my god, that list lacks DATAFlex!


Dear Lord.


I remain utterly baffled how they made a lisp compiler with malbolge


> I've written CGOL dozens of times, it's a common project that I do to "break in" a language I've learned, since it's not completely trivial but it's simple enough to not be frustrating, and I completely understand why math/computability-theory folks find it something to dedicate brain power to.

Writing a naive CGOL is fun and quick. But writing a _fast_ one can get arbitrarily complicated.

https://en.wikipedia.org/wiki/Hashlife is one particular example of where you can go for a faster-than-naive CGOL.


Yeah, a million years ago I did that one as a convolution so I could run it on the GPU when I was learning OpenCL. That was my first exposure to “optimizing” CGOL.


In the music world there are people who will build whole symphonies out of one sample, one filter, and one delay patch.


"So did Achilles lose his friend in war, and Homer did no injustice to his grief by writing about it in dactylic hexameters" - Tobias Wolff, Old School


Same with Your World of Text [0], still going strong.

[0] https://www.yourworldoftext.com/


if you find that fascinating then you'll be blown away by the 'Wolfram Physics Project'. It's basically trying to recreate all of physics using baseline 'graph update' rules like 'Game of Life'. So far no predictions yet, but very interesting.


Wolfram is kind of obsessed with cellular automata, even went and wrote a whole book about them titled "A New Kind of Science". The reception to it was a bit mixed. CA are Turing-complete, so yeah, you can compute anything with them, I'm just not sure that in itself leads to any greater Revealed Truths. Does make for some fun visualizations though.


A new kind of science is one of my favorite books, I read the entirety of the book during a dreadful vacation when I was 19 or 20 on an iPod touch.

It goes much beyond just cellular automata, the thousand pages or so all seem to drive down the same few points:

- "I, Stephen Wolfram, am an unprecedented genius" (not my favorite part of the book) - Simple rules lead to complexity when iterated upon - The invention of field of computation is as big and important of an invention as the field of mathematics

The last one is less explicit, but it's what I took away from it. Computation is of course part of mathematics, but it is a kind of "live" mathematics. Executable mathematics.

Super cool book and absolutely worth reading if you're into this kind of thing.


I would give the same review, without seeing any of this as a positive. NKS was bloviating, grandiose, repetitive, and shallow. The fact that Wolfram himself didn’t show that CA were Turing complete when most theoretical computer scientists would say “it’s obvious, and not that interesting” kinda disproves his whole point about him being an underappreciated genius. Shrug.


That CA in general were Turing complete is 'obvious'. What was novel is that Wolfram's employee proved something like Turing completeness for a 1d CA with two states and only three cells total in the neighbourhood.

I say something-like-Turing completeness, because it requires a very specially prepared tape to work that makes it a bit borderline. (But please look it up properly, this is all from memory.)

Having said all that, the result is a nice optimisation / upper bound on how little you need in terms of CA to get Turing completeness, but I agree that philosophically nothing much changes compared to having to use a slightly more complicated CA to get to Turing completeness.
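(For reference: the result is Matthew Cook's universality proof for Rule 110, and the specially prepared tape is the infinite periodic background it needs, which is why it's usually called "weak" universality.) The rule itself takes only a few lines to simulate:

    # Elementary CA "Rule 110": 2 states, 3-cell neighbourhood.
    # Bit k of the rule number is the new state for the
    # neighbourhood whose cells spell k in binary.
    RULE = 110

    def step(cells):
        padded = [0] + cells + [0]  # fixed-0 boundary for a finite row
        return [(RULE >> (4 * padded[i] + 2 * padded[i + 1] + padded[i + 2])) & 1
                for i in range(len(cells))]

    row = [0] * 40 + [1]
    for _ in range(20):
        print("".join(".#"[c] for c in row))
        row = step(row)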


The question really ultimately resolves to whether the universe can be quantized at all levels or whether it is analog. If it is quantized I demand my 5 minutes with god, because I would see that as proof of all of this being a simulation. My lack of belief in such a being makes me hope that it is analog.


Computation does not necessarily need to be quantized and discrete; there are fully continuous models of computation, like ODEs or continuous cellular automata.


That's true, but we already know that a bunch of stuff about the universe is quantized. The question is whether or not that holds true for everything or rather not. And all 'fully continuous models of computation' in the end rely on a representation that is a quantized approximation of an ideal. In other words: any practical implementation of such a model that does not end up being a noise generator or an oscillator and that can be used for reliable computation is - as far as I know - based on some quantized model, and then there are still the cells themselves (arguably quanta) and their location (usually on a grid, but you could use a continuous representation for that as well). Now, 23 or 52 bits (depending on the size of the float representation you use for the 'continuous' values) is a lot, but it is not actually continuous. That's an analog concept and you can't really implement that concept with a fidelity high enough on a digital computer.

You could do it on an analog computer but then you'd be into the noise very quickly.

In theory you can, but in practice this is super hard to do.


If your underlying system is linear and stable, you can pick any arbitrary precision you are interested in and compute all future behaviour to that precision on a digital computer.

Btw, quantum mechanics is both linear and stable--and even deterministic. Admittedly it's a bit of a mystery how the observed chaotic nature of eg Newtonian billiard balls emerges from quantum mechanics.

'Stable' in this case means that small perturbations in the input only lead to small perturbations in the output. You can insert your favourite epsilon-delta formalisation of that concept, if you wish.

To get back to the meat of your comment:

You can simulate such a stable system 'lazily'. Ie you simulate it with any given fixed precision at first, and (only) when someone zooms in to have a closer look at a specific part, you increase the precision of the numbers in your simulation. (Thanks to the finite speed of light, you might even get away with only re-simulating that part of your system with higher fidelity. But I'm not quite sure.)

Remember those fractal explorers like Fractint that used to be all the rage: they were digital at heart---obviously---but you could zoom in arbitrarily as if they had infinite continuous precision.


> If your underlying system is linear and stable

Sure, but that 'If' isn't true for all but the simplest analog systems. Non-linearities are present in the most unexpected places and just about every system can be made to oscillate.

That's the whole reason digital won out: not because we can't make analog computers but because it is impossible to make analog computers beyond a certain level of complexity if you want deterministic behavior. Of course with LLMs we're throwing all of that gain overboard again but the basic premise still holds: if you don't quantize you drown in an accumulation of noise.


> Sure, but that 'If' isn't true for all but the simplest analog systems.

Quantum mechanics is linear and stable. Quantum mechanics is behind all systems (analog or otherwise), unless they become big enough that gravity becomes important.

> That's the whole reason digital won out: not because we can't make analog computers but because it is impossible to make analog computers beyond a certain level of complexity if you want deterministic behavior.

It's more to do with precision: analog computers have tolerances. It's easier and cheaper to get to high precision with digital computers. Digital computers are also much easier to make programmable. And in the case of analog vs digital electronic computers: digital uses less energy than analog.


Just be careful with how aligned to reality your simulation is. When you get it exactly right, it's no longer just a simulation.


"It looks designed" means nothing. It could be our ignorance at play (we have a long proven track record of being ignorant about how things work).


Yes. Or it could be an optimisation algorithm like evolution.

Or even just lots and lots of variation and some process selecting which one we focus our attention on. Compare the anthropic principle.


For all we know, it could be distinct layers all the way down to infinity. Each time you peel one, something completely different comes up. Never truly knowable. The universe has thrown more than a few hints that our obsession with precision and certainty could be seen cosmically as "silly".

In our current algorithmic-obsessed era, this is reminiscent of procedural generation (but down/up the scale of complexity, not "one man's sky" style of PG).

However, we also have a long track record of seeing the world as nails for our latest hammer. The idea of an algorithm, or even computation in general, could be in reality conceptually closer to "pointy stone tool" than "ultimate substrate".


> For all we know, it could be distinct layers all the way down to infinity. Each time you peel one, something completely different comes up. Never truly knowable. The universe has thrown more than a few hints that our obsession with precision and certainty could be seen cosmically as "silly".

That's a tempting thing to say, but quantum mechanics suggests that we don't have infinite layers at the bottom. Mostly thermodynamic arguments combined with quantum mechanics. See eg also https://en.wikipedia.org/wiki/Bekenstein_bound about the amount of information that can even in theory be contained in a specific volume of space time.


From the link you shared:

> the maximum amount of information that is required to perfectly describe a given physical system _down to the quantum level_

(emphasis added by me)

It looks like it makes predictions for the quantum layer and above.

--

Historically, we humans have a long proven track record of missing layers at the bottom that were unknown but now are known.


If you're interested in science fiction based on this concept, Greg Egan has a book called Permutation City which is pretty interesting.


I think I've read it, or maybe a portion of it; it was not very captivating. Will try again.


That's because it's not "game of life jargon", it's "cellular automata" jargon. Which is a field of math and comes along with a bunch of math jargon from related fields.


I searched several of these terms and they are all specifically jargon of game of life enthusiasts (i.e. search results are all on fansites related to game of life), not general cellular automata jargon.


I assume there's fanfic shipping of automata...

(If you don't recognize that use of "shipping", don't google it at work.)


https://tvtropes.org/pmwiki/pmwiki.php/Main/Shipping should be reasonably safe for work. As long as you can avoid getting sucked in to an all-day wiki bender.


short for relationship


conwaylife.com isn't just for GoL, although that is the main subject; there is an Other Cellular Automata forum. Also, it's not really a fansite, it's a bit more academic than what would be considered a "fansite".


mmmh I don't think so. I've read several papers on cellular automata and I don't recognize the terms


I use this operator all the time in a similar but not quite the same way:

<input type="text" defaultValue={user.email ?? ""} />

The user is an entity from the DB, where the email field is nullable, which makes perfect sense. The input component only accepts a string for the defaultValue prop. So you coalesce the possible null to a string.

