When TCO recursion was first developed, it was very explicitly called out as a syntactically and structurally improved GOTO, but still fundamentally a GOTO that could take params.
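A minimal sketch of what that means in practice (Python-flavored and purely illustrative, since CPython doesn't actually perform TCO): the tail call is just "rebind the params, jump back to the top."

    # Tail-recursive form: the recursive call is the last thing done.
    def gcd_rec(a, b):
        if b == 0:
            return a
        return gcd_rec(b, a % b)  # tail call

    # What TCO effectively turns it into: a GOTO that takes params.
    def gcd_goto(a, b):
        while True:          # the "label" we jump back to
            if b == 0:
                return a
            a, b = b, a % b  # rebind the params, then "GOTO" the top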
Recursion isn't physically real; any book that teaches the abstraction before explaining either the call stack (for non-TCO recursion) or the GOTO context is a book actively trying to gatekeep CS and weed out readers. (Not that any of those old Turbo Pascal/Borland books from the 90s actually shipped code that compiled.)
I had several people on HN of all places try to "teach me" recursion after this simple line inside a larger comment:
> It's like acting like recursion is physically real (it's not) when it smuggles in the call stack.
Recursion IS real abstractly. It's just not physically real; it was developed before we knew how DNA/RNA encoding handles the same issues in a more performant way.
Recursive functions are a mathematical concept, like "imaginary" numbers, or "transcendental" numbers. Or negative numbers, for that matter.
A simple example: the Fibonacci sequence.
FIB(1) = 1
FIB(2) = 1
FIB(N) = FIB(N-1) + FIB(N-2)
There's no programming language or "physical" implementation needed in order to calculate FIB(N) for arbitrary N. Pencil and paper will do for small numbers.
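And if you do want it in code, here's the definition transcribed directly (Python used just for illustration; nothing about the math depends on it):

    # The mathematical definition above, transcribed one-to-one.
    def fib(n):
        if n == 1 or n == 2:
            return 1
        return fib(n - 1) + fib(n - 2)

    print(fib(10))  # 55, the same answer pencil and paper gives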
> Recursion isn't physically real; any book that teaches the abstraction before explaining either the call stack (for non-TCO recursion) or the GOTO context
Do you also believe that loops and functions should only be taught after the call stack and goto are taught? Neither of them is real by your measure, either.
Loops and functions can be physically represented standalone; they can be physically carved onto a mechanical surface and observed.
They don't smuggle anything in conceptually, and their abstraction doesn't leave out anything critical to their structure. They are real and can be physicalized as standalone objects.
I see you've never tried to teach a software class to children or other learners. Historically, recursion is _very_ poorly taught by those who already understand the concept. But I'm not saying you have to care about that; a lot of people think there are too many programmers already.
> It's just not physically real, it was developed before we knew how DNA/RNA encoding handles the same issues in a more performant way.
That was a sharp left turn -- how do you figure DNA/RNA are relevant here? I feel like iteration pre-dates our modern understanding of RNA in particular (though I could be mistaken), so I struggle to imagine how DNA/RNA were particularly informative in this regard.
Open Source is one of the most humanistic, progress-driving cultures people have ever come up with.
I mean, I am fucking shocked that people don't get this: our whole fucking modern world, all the parts that make stuff work, every last bit of it, is built on top of or dependent on OSS.
There isn't a single lab, company, person, or country that doesn't use and benefit from open source, whether they know it or not.
It is what has supported widespread fractal improvement starting at the individual level. It's the greatest grassroots movement story ever, and it's still driven by grassroots adoption.
It's like programmatic peer review writ large, with no gatekeeping journals, and it's changed humanity forever. If we ever deflect that asteroid headed towards earth, or make it to the stars, or figure out how to avoid the heat death, it's because open source got us there.
It's funny that there are so many innovations right now that the recent part of the chart just has to arbitrarily exclude an insane amount of the innovation that's happening.
No HIV vaccine. The mRNA vaccine gets a single entry instead of an entry per disease like prior vaccines. No battery stuff since 1985. Just amazing; fractal improvement is everywhere.
Great phrase - fractal improvement. It's kind of the idea of this book [0]
Even more cool: commercial progress trails tech. It takes a long time for companies to figure out how to turn a new idea or a cheaper input into a new product/industry, and then for related companies to grow into an economic ecosystem.
So one would expect to see some spectacular economics over the next couple of centuries.
Topos theory can be added to type theory to provide a categorical semantics.
But even with a Grothendieck topology, total and co-total categories require sets with a least upper bound (join) and a greatest lower bound (meet).
The problem is that you get semi-algorithms, which will only halt if the answer is true.
IMHO it is better to think of recursion as enabling realizability and forcing fixed points. The Y combinator can be used for recursion because it is a fixed-point combinator.
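A small illustration of that last point (in Python; strictly this is the Z combinator, the variant of Y that works under eager evaluation): the factorial below never refers to itself by name, and the recursion falls out of taking a fixed point.

    # Z combinator: a fixed-point combinator usable in a strict language.
    def Z(f):
        return (lambda x: f(lambda v: x(x)(v)))(
                lambda x: f(lambda v: x(x)(v)))

    # No self-reference: 'rec' is supplied by the fixed point.
    fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))

    print(fact(5))  # 120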
From descriptive complexity, FO+LFP = SO-Horn = P (SO-Krom captures NL), while FO by itself is AC^0, IIRC?
Using modal logic + relations is another path.
The problem I have found is that those paths tend to require someone to at least entertain intuitionistic concepts, and that is a huge barrier.
Mostly people tend to use type theory to convert semantic properties to trivial semantic properties for convenience.
Blackburn/Rijke/Venema's “Modal Logic” is the most CS-friendly book for that if you want to try.
Yes, it's interesting to see when coders start dealing with physical-world problems and sensor systems and don't realize they are relying on constructivity to do anything at all. I've mostly only met other devs who have also worked in the CRDT space who grok intuitionistic reasoning.
Funny, I fought for a good 5 min trying to convert everything to "semantic" on my phone, and finally gave up, figuring it also proved I wasn't a bot :) I have fixed it now.
But yes: semantic, run-time, extensional, etc., from Rice/Gödel/etc.
There's no call stack in a tail-recursive function. It's equivalent to a loop.
And that, if you think about it a bit, might help you understand what recursion is, because your current understanding seems a bit off.
The only time you need a "call stack" with recursive calls is if the calls need to ultimately return to the call site and continue executing. It's still recursive.
By contrast, the model in which every function call pushes a return address, whether or not it's really needed, is a kind of mistake that comes from not understanding recursion.
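To make that concrete (Python for illustration only, since CPython keeps frames for both):

    # Tail position: nothing is left to do after the call, so no frame
    # has to survive it. A TCO-capable compiler reuses the frame (a jump).
    def count_down(n):
        if n == 0:
            return "done"
        return count_down(n - 1)

    # NOT tail position: after the recursive call returns, we still have
    # to multiply by n, so each pending multiply needs a frame to return
    # to. This is the only case where the stack is actually earning its keep.
    def fact(n):
        if n == 0:
            return 1
        return n * fact(n - 1)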
Yes, this is correct: tail-call-optimized recursion is physically different from other types of recursion. It's much-needed syntactic and structural sugar for goto.
Yes, well, we all have our pet theories about the world. Thankfully the people who think the natural numbers aren't "real" or that recursion isn't "real" haven't won out and destroyed our ability to make meaningful progress.
I'm curious about what you mean by recursion not being physically real. Do you mean it doesn't convert to CPU instructions, or something about its occurrence in nature?
I've never seen something so egregious before; it made the article impossible to read without covering it with my hand.
But I realized something by attempting to read this article several times first.
If I ever want to write an article and reduce people's ability to critically engage with the argument in it, I should add a focus-pulling animation that thwarts concerted focus.
It's like the blog equivalent of public speakers who ramble their audience into a coma.
Do you mean the live chat? Those are, appropriately, for live streams. They do replay afterwards since, depending on the type of stream, the video may not make complete sense without them (and they're easy enough to fold if they don't have any value, e.g. premieres).
The real problem is that even though most problems are actually text problems (or can be represented by text chars), we don't have version control with the same level of granularity for spreadsheets.
It is often said that xlsx and docx files are basically zipped folders of XML files and assets. I wonder how effective it would be to add the decompressed forms to git to see and track changes (or, similarly, for git to show diffs for the files inside a tar/zip).
Not hard at all as it turns out. I had to do this a few years back to extract PBI metadata (metric names, queries, data sources, etc.) from our dashboards.
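For anyone who wants to try it, a rough sketch of the git side (the script and attribute names here are made up; the textconv hook itself is a standard git mechanism):

    #!/usr/bin/env python3
    # ooxml_textconv.py -- dump the XML parts of an OOXML container
    # (xlsx/docx are zip files) as text so git can diff the decompressed
    # form. Hook it up with:
    #   echo '*.xlsx diff=ooxml' >> .gitattributes
    #   git config diff.ooxml.textconv "python3 ooxml_textconv.py"
    import sys
    import zipfile

    # git passes the path of a temp copy of the file as the sole argument
    with zipfile.ZipFile(sys.argv[1]) as z:
        for name in sorted(z.namelist()):
            print(f"===== {name} =====")
            data = z.read(name)
            try:
                print(data.decode("utf-8"))
            except UnicodeDecodeError:
                print(f"<binary part, {len(data)} bytes>")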
If bouncers copied my ID, my home address, and a bunch of private data every time I went to a bar, I'd never go out.
This whole premise is absurd. There is tons of research and empirical and historical evidence that living in a surveillance state stifles free expression and thus narrows the richness of human creation and experimentation.
How old are you that you think constant surveillance is any kind of way to live? It's a thin gruel of a life.
This seems like such a lost cause to carry on about.
The fact that the post originates from what appears to be a furry-aligned individual is probably not going to help get a majority of people to be sympathetic.
There appears to be no formidable organized resistance against the recent decades of surveillance boom.
With tech and many tech employees actively accelerating surveillance.
Horrible? Yes. And extremely unlikely to be rolled back anytime soon.
(Disagree? I'd love to believe you are right!)
Lost causes are worth fighting for and keeping in the public eye. In the history of ideas being written off, or challenges being considered impossible to overcome, many end up swinging back hard the other way, as long as they are kept ripe (the framework is preserved and still championed) and there is an inciting incident that swings public sentiment.
Defeatist attitudes and throwing in the towel almost never make sense. Engaging at a lower degree of time commitment is sensible for some, but the meta-commentary about it being hopeless is one of the worst types of self-defeating comments a person can make, especially if they aren't in opposition. Whose time are you trying to optimize with the comment? To what point and purpose are you saying this, except to further deflate sails on an already still day?
The zeitgeist isn't purely rational or stable, and change is often non-linear. I've seen small subcultures with "impossible" headwinds completely own the space within my lifetime. We're just at the heel turn now, and it's not universally popular; many people don't speak up because they are just getting VPNs or moving to other forms of non-violent non-compliance.
I suspect a lot of doomposting online is someone writing down their negative self talk hoping some stranger will finally provide a convincing argument that they can use to fight their own feelings on the matter. It's like... involuntary group therapy?
You keep making this comparison, but it's not appropriate. The closest real-world analogy: in order to buy alcohol, you need to wear a tracking bracelet at all times and be identified at every store you enter, even if you choose to purchase nothing. If our automated systems can't identify you with certainty, you'll be limited to only being able to do things a child could do.
And the real world has a huge gap between a child and an adult. If an 8-year-old walked into Home Depot and bought a circular saw, there's no law against it, but the store might have questions. If a 14-year-old did it, you might get a different result. At 17, they'd almost certainly be allowed.
The real world has people who are observing things and using judgement. Submitting to automated age checks online is not that.
It's appropriate (to me) as a limit society has decided it wants, and we should consider whether there is a reason similar limits should, or should not, apply to the internet. The whole article we are discussing is about how that could be implemented in a much more privacy-safe way.
But my point is that it won't be. The laws are getting passed, and there is no privacy preservation, there are no ZKPs, there's nothing except "submit your ID". You keep holding out for good faith, but the folks making the rules aren't acting in good faith. I very much appreciate the discussion here, but I think we're coming into the discussion with a different set of priors, so even if our values match, we might not agree.
Just to emphasize the point, the EU's age verification laws are actively preventing Android users from utilizing third party app stores because the implementation is tied to Google Play integrity services.
The greatest use of LLMs is the ability to get accurate answers to queries in a normalized format without having to wade through UI distraction like ads and social media.
It's the opposite of finding an answer on reddit, insta, tvtropes.
I can't wait for the first distraction-free OS that is a thinking and imagination helper and not a consumption device where I have to block URLs on my router so my kids don't get sucked into a Skinner box.
I love being able to get answers from documentation and work questions without having to wade through some arbitrary UI bs a designer has implemented in ad hoc fashion.
I don't find the "AI" answers all that accurate, and in some cases they are bordering on a liability even if way down below all the "AI" slop it says "AI responses may include mistakes".
>It's the opposite of finding an answer on reddit, insta, tvtropes.
Yeah, it really is, because I can tell when someone doesn't know the topic well on reddit or other forums, but usually someone does and the answer is there. Unfortunately the "AI" was trained on all of this, and the "AI" is just as likely to spit out the wrong answer as the correct one. That is not an improvement on anything.
> wade through UI distraction like ads and social media
Oh, so you think "AI" is going to be free and clear forever? Enjoy it while it lasts, because these "AI" companies are in way over their heads, they are bleeding money like their aorta is a fire hose, and there will be plenty of ads and social whatever coming to brighten your day soon enough. The free ride won't go on forever - think of it as a "loss leader" to get you hooked.
I agree with the whole first half, but I disagree that LLM usage is doomed to ad-filled shittiness. AI companies may be hemorrhaging money, but that's because their product costs so much to run; it's not like they don't have revenue. The thing that will bring profitability isn't ads, it will be innovations that let current-gen-quality LLMs run at a fraction of the electricity and power cost.
Will some LLMs have ads? Sure, especially at a free tier. But I bet the option to pay $20/month for ad-free LLM usage will always be there.
Silicon will improve, but not fast enough to calm investors. And better silicon won't change the fact that the current zeitgeist is basically a word guessing game.
$20/month won't get you much if you're paying above what it costs to run the "AI", and for what? Answers that are in the ballpark of suspicious and untrustworthy?
Maybe they just need to keep spending until all the people who can tell slop from actual knowledge are all dead and gone.