
One is deterministic, the other is not. I leave it to you to determine which is which in this scenario.


Humans writing code are also non-deterministic. When you vibe code you're basically a product owner / manager. Vibe coding isn't a higher-level programming language; it's an abstraction over a software engineer / engineering team.


> Humans writing code are also non-deterministic

That's not what determinism means though. A human coding something, irrespective of whether the code is right or wrong, is deterministic. We have a well defined cause and effect pathway. If I write bad code, I will have a bug - deterministic. If I write good code, my code compiles - still deterministic. If the coder is sick, he can't write code - deterministic again. You can determine the cause from the effect.

Every behavior in the physical world has a cause-and-effect chain.

On the other hand, you cannot determine why an LLM hallucinated. There is no way to retrace the path taken from input parameters to generated output, at least as of now. Maybe that will change in the future, when we have tools that can retrace the path taken.


You misunderstand. A coder will write different code for the same problem each time unless they have the solution 100% memorised. And even then, any number of factors can keep them from remembering 100% of the memorised code, or lead them to opt for different variations.

People are inherently nondeterministic.

The code they (and AI) write, once written, executes deterministically.


> The code they (and AI) write, once written, executes deterministically.

very rarely :)
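A toy sketch of why that wink is earned (my own illustration, not from the thread): even trivially "deterministic" code stops behaving reproducibly once threads are involved. In Python, a non-atomic read-modify-write across threads can yield a different final value on every run.

```python
import threading

counter = 0

def bump(n: int) -> None:
    global counter
    for _ in range(n):
        tmp = counter      # read
        counter = tmp + 1  # write: a thread switch between these two
                           # lines silently loses an increment

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Often 400000 on a lightly loaded machine, but nothing guarantees it;
# the scheduler's interleaving differs from run to run.
print(counter)
```

The program text is fixed, yet the output depends on scheduling the programmer never specified.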


> A coder will write... or opt for different variations.

Agreed.

> People are inherently nondeterministic.

We are getting into the realm of philosophy here. I, for one, believe that living organisms have no free will (or limited will, to be more precise; one could even say "dependent will"). So one can philosophically argue that people are deterministic, via the concepts of Karma and rebirth. Of course, none of this can be proven, so your argument can be true too.

> The code they (and AI) write, once written, executes deterministically.

Yes, execution is deterministic. I am, however, talking only about determinism in terms of being able to know the entire path from input to output, not just the output's characteristics (which are always going to be deterministic). It is the path from input to output that is not deterministic, due to the presence of a black box: the model.


I mostly agree with you, but I see what afro88 is saying as well.

If you consider a human programmer as a "black box", in the sense that you feed it a set of inputs (the problem that needs to be solved, vague requirements, etc.) and expect a functioning program as output that solves the problem, then that process is as nondeterministic as an LLM. Ensuring that the process is reliable in both scenarios boils down to creating detailed specifications, removing ambiguity, and iterating on the product until the acceptance tests pass.

Where I think there is a disconnect is that humans are far more capable at producing reliable software given a fuzzy set of inputs. First of all, they have an understanding of human psychology, and can actually reason about semantics in ways that a pattern matching and token generation tool cannot. And in the best case scenario of experienced programmers, they have an intuitive grasp of the problem domain, and know how to resolve ambiguities in meatspace. LLMs at their current stage can at best approximate these capabilities by integrating with other systems and data sources, so their nondeterminism is a much bigger problem. We can hope that the technology will continue to improve, as it clearly has in the past few years, but that progress is not guaranteed.


Agree with most of what you say. The only reason I say humans are different from LLMs when it comes to being a "black box" is that you can probe humans. For instance, I can ask a human to explain how they came to a conclusion and retrace the path taken from the known inputs. This can even be correlated with, say, brain imaging, by mapping thoughts to the neurons firing in that portion of the brain. So you can have a fairly accurate understanding of the path taken. I cannot probe an LLM, however. At least not with the tools we have today.

> Where I think there is a disconnect is that humans are far more capable at producing reliable software given a fuzzy set of inputs.

Yes, true. Another thought that comes to mind is that it might also have to do with us recognizing other humans as not as alien to us as LLMs are. So there is an inherent trust deficit with LLMs that doesn't exist with humans. Inherent trust in human beings, despite their being less capable, is what makes the difference. From everything else we want strict determinism, and trust is built on that. I am more forgiving if a child computes 2 + 1 = 4, and will find it in me to correct the child; I won't consider it a defect. But if a calculator computes 2 + 1 = 4 even once, I would immediately discard it and never trust it again.

> We can hope that the technology will continue to improve, as it clearly has in the past few years, but that progress is not guaranteed.

Agreed.


This is true. What are the implications of that?


Really sucks to see this happen! Been using Tailwind for the past few years now.

All the more reason to go closed source. Except for a few really vital components that have national security implications (OS/kernel, drivers, programming languages), which can be funded and supported by universities, governments, etc., I am of the strong opinion that everything else should go closed source.

Enough with this BS. Stop feeding the slop.


People with that perspective shouldn't have been doing open source in the first place. AI isn't hurting people sharing things, only people who are pretending to share but actually indirectly selling things.


There is no one in this world who will do things purely for altruistic purposes. Even if not for money, it would be for something intangible that gratifies the Self (fame, for example).

I can't find a single example of a software developer who has put out software purely for altruistic purposes, without any return on that investment (direct or indirect).

Building a sustainable business model was a great way to justify open source. Not anymore.


> There is no one in this world who will do things purely for altruistic purposes. Even if not for money, it would be for something intangible that gratifies the Self (fame, for example).

And pretty much none of that is threatened by AI. LLMs learning from code, or articles, found online, are at worst neutral to this, and in many ways beneficial. They're only negative if you're using your contribution as bait to hold your audience, in lieu of a more honest approach of openly selling it to them.

> Building a sustainable business model was a great way to justify open source. Not anymore.

Or poison it. Open Source as a business model was always an anti-competitive hack, similar to "free with ads": drive price down to 0, destroying any other way of making money in this space, including "honest exchange of money for value".


> this looks too close to money laundering, just like buying art

Yep, concur with this conclusion. It is getting really ridiculous now. There is no way most of these companies are worth the valuations they're at.

Or the investors are just plain stupid.


> Roll up your sleeves to not fall behind

This confirms the AI bubble for me, and that it is now entirely FUD-driven. "Not falling behind" should only apply to technologies where you have to put in active effort to learn, because it takes years to hone and master the craft. AI is supposed to remove this "active effort" part, so as to get you up to speed with the latest and bridge the gap between those "who know" and those "who do not". The fact that you need to say "roll up your sleeves to not fall behind" confirms we are not in that situation yet.

In other words, it is the same old learning curve that everyone has to cross, EXCEPT this time it is probabilistic instead of linear/exponential. It is quite literally a slightly-better-than-coin-toss situation as to whether you learn the right way or not.

For me personally, we are truly in that zone of zero active effort and total replacement when AI can hit 100% on ALL METRICS consistently, every single time, even on fresh datasets with challenging questions NOT SEEN/TRAINED ON by the model. Even better if it can come up with novel discoveries, to remove any doubts. The chance of achieving that with current tech is 0%.


Game development is STILL a highly underrated field. Plenty of advancements/optimizations (in both software and hardware) can be directly traced back to game development. Hopefully, with RAM prices shooting up the way they are, we go back to keeping optimization front and center and reduce all the bloat that has accumulated industry-wide.


A number of my tricks are stolen from game devs and applied to boring software. Most notably, resource budgets for each task. You can’t make a whole system fast if you’re spending 20% of your reasonable execution time on one moderately useful aspect of the overall operation.


I think one could even say gaming as a sector single-handedly moved most of the personal computing platform forward since the '80s and '90s. Before that it was probably military and corporate. From the DOS era, overclocking CPUs to push benchmarks, DOOM, and 3D graphics APIs from 3dfx Glide to DirectX, to faster HDDs for faster game load times. And for 10-15 years it was gaming that carried CUDA forward.


Yes please! Stop making me download 100+ GB patches!


The large file sizes are not because of bloat per se...

It's a technique which supposedly helped at one point in time to reduce loading times, Helldivers being the most notable example of removing this "optimization".

However, this is by design, specifically as an optimization. You can't really call that bloat in the parent's context of inefficient resource usage.


This was the reason in Helldivers; other games have different reasons, like uncompressed audio (which, IIRC, was the reason for the CoD-install-size drama a couple of years back). The underlying reason is always the same though: the dev team not caring about asset size (or, more likely, they would like to take care of it but are drowned in higher-priority tasks).


We aren't talking about the initial download though; we are talking about updates. I am like 80% sure you should be able to send what changed without sending the whole game as if it were being downloaded for the first time.


Helldivers' engine does have that capability, where bundle patches only include modified files and markers for deleted files. However, the problem with that, and likely the reason Arrowhead doesn't use it, is the lack of a process on the target device to stitch them together. Instead, patch files just sit next to the original file. So the trade-off for smaller downloads is a continuously increasing size on disk.


Generally "small patches" and "well-compressed assets" are on either end of a trade-off spectrum.

More compression means large change amplification and less delta-friendly changes.

More delta-friendly asset storage means storing assets in smaller units with less compression potential.

In theory, you could have the devs ship unpacked assets, then make the Steam client be responsible for packing after install, unpacking pre-patch, and then repacking game assets post-patch, but this basically gets you the worst of all worlds in terms of actual wall clock time to patch, and it'd be heavily constraining for developers.
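To make the delta-friendly end of that spectrum concrete, here is a toy sketch (my own code, not any real patcher): store a file as fixed-size chunks, hash each chunk, and ship only the chunks whose hashes changed. Solid compression over the whole file would destroy this, since editing one byte would shift and change every chunk after it.

```python
import hashlib

CHUNK = 4  # tiny chunk size for illustration; real tools use KB-MB chunks

def chunks(data: bytes) -> list[bytes]:
    return [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]

def digest(c: bytes) -> str:
    return hashlib.sha256(c).hexdigest()

def make_patch(old: bytes, new: bytes) -> dict[int, bytes]:
    """Ship only the chunks whose hashes changed."""
    old_c, new_c = chunks(old), chunks(new)
    return {
        i: c
        for i, c in enumerate(new_c)
        if i >= len(old_c) or digest(old_c[i]) != digest(c)
    }

def apply_patch(old: bytes, patch: dict[int, bytes], n_chunks: int) -> bytes:
    """Rebuild the new file from the old chunks plus the shipped ones."""
    old_c = chunks(old)
    return b"".join(patch.get(i, old_c[i]) for i in range(n_chunks))

old = b"AAAABBBBCCCC"
new = b"AAAAXXXXCCCC"  # only the middle chunk changed

patch = make_patch(old, new)
print(len(patch))  # -> 1: a single chunk travels over the wire
assert apply_patch(old, patch, 3) == new
```

Store the chunks with per-chunk compression and the scheme survives; compress across chunk boundaries and every patch balloons, which is exactly the trade-off described above.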


From my understanding of the technique, you're wrong despite being 80% sure ;)

Any changes to the code or textures will need the same preprocessing done. The large patch size is basically 1% changes + 99% all of the preprocessed data for this optimization.


How about incorporating postprocessing into the update procedure instead of preprocessing?


Do you have some resource for people outside this field to understand what it's about?


It goes all the way back to tapes, was still important for CDs, and is still thought relevant for HDDs.

Basically, you can get much better read performance if you can read everything sequentially, and you want to avoid random access at all costs. So you "hydrate" the loading patterns for each state, storing the bytes in the order they're loaded by the game. The only point where it makes things slower is once, on download/install.

Of course, the whole exercise is pointless if the game is installed to an HDD only because of its bigger size and would otherwise be on an NVMe SSD... And with 2 TB NVMe drives still affordable, it doesn't make as much sense anymore.
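A minimal illustration of that hydration (hypothetical code, the names `Asset` and `pack_level` are made up): each level's assets get concatenated in load order into their own bundle, duplicating anything shared, so the drive reads one sequential stream instead of seeking.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    data: bytes

def pack_level(assets: list[Asset]) -> bytes:
    """Concatenate assets in the exact order the level loads them."""
    blob = b""
    for a in assets:
        blob += a.data  # shared assets are copied into every bundle
    return blob

rock = Asset("rock.tex", b"R" * 4)
tree = Asset("tree.tex", b"T" * 4)

level1 = pack_level([rock, tree])
level2 = pack_level([tree, rock])  # same assets, duplicated again

print(len(level1) + len(level2))  # -> 16 bytes on disk vs 8 deduplicated
```

The duplication is the whole point: disk is traded for seek-free reads, which is also why the reply below about per-state data duplication is on the mark.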


So this basically leads to duplicating data for each state it's needed in? If that's the case, I wonder why this isn't solvable by compressing the update download data (potentially with knowledge of the data already installed, in case the update really only shuffles it around).


It's also a valid consideration in the context of streaming games -- making sure that all resources for the first scene/chapter are downloaded first allows the player to begin playing while the rest of the resources are still downloading.


True!


Interesting, today I learned!




Another issue with Google Maps is that it doesn't show Plus Codes for some locations where it highlights the entire area. If you place a pin on that location, however, it provides a Plus Code. Pretty stupid, IMHO.

Also, it is really, really hard to search for "Nearby" places. You have to do it through "Directions". Really bad UX.


You can put "... near <location>" at the end of your query to get nearby places. "... near me" also works.


> And so everyone and their mother is building big error types. Well, not Everyone. A small handful of indomitable nerds still holds out against the standard.

The author is a fan of Asterix I see :)


Technology was supposed to get rid of most bureaucracy and move the world towards automation. These FAANG companies have instead successfully integrated bureaucracy with technology and made bureaucracy permanent. Instead of automating away bureaucracy, these companies have automated away customer service.


It is a serious mistake to think that technology can remove bureaucracy. Indeed, technology by its nature makes bureaucracy a lot more rigid. Bureaucracy is about homogenising processes and erasing individual differences, and software reinforces these properties because it allows even less human input or deviation from the process. (That isn't true of all software, just software that is intended to somehow deal with large numbers of people uniformly.)


When I said remove bureaucracy, I meant remove bureaucracy from people's lives. Obviously it will exist behind the curtain. I agree with you that software reinforces bureaucratic properties, and it should; that is what it was supposed to do. But technology failed when it comes to rectifying deviations in bureaucratic processes.

For example, assume you are submitting a form and the address is incorrect / doesn't exactly match what is stored in the database; software should (rightly) flag it and have a human review it and make the necessary correction. Instead we have the worst of both worlds, where the software flags the problem but there is no human in the loop anymore; even the human is automated out. So the problem is never fixed. Instead, the customer/client interacting with the software is indirectly made aware of the internal bureaucratic process but has no recourse.


The lazy response to any new risk or problem is to just layer on new rules and processes. Large organizations always end up with those things defining their workplace culture (risk aversion, checkbox culture) and that worldview filters down to the decisions which impact customers.


They do these things in response to governmental pressure.


"Never be deceived that the rich will permit you to vote away their wealth." - Lucy Parsons


> How can sole maintainers work with multi-billion corporations without being taken advantage of?

Use AGPL, Fair Source or BSL. That's the only way forward. I for one will be using AGPL in everything. If a trillion dollar company cannot pay for services it is a fucking shame. Absolutely disgusting. Microsoft should be ashamed.

