Hacker News | rmah's comments

Our firm uses Python extensively, and a virtual environment for every script or notebook is ... difficult. We have dozens of Python scripts running for team research and in production, from small maintenance tools to rather complex daemons. Add to that the hundreds of Jupyter notebooks used by various people. Some have a handful of dependencies, some dozens. While most of those scripts/notebooks are only used by a handful of people, many are used company-wide.

Further, we have a rather large set of internal libraries that most of our Python programs rely on, and some of those rely on external third-party APIs (often REST). When we find a bug or something changes, more often than not we want to roll out the changed internal lib so that all programs that use it get the fix. Having to get everyone to rebuild and/or redeploy everything is a non-starter, as many of the people involved are not primarily software developers.

We usually install into the system dirs and have a dependency problem maybe once a year. And it's usually trivially resolved (the biggest problem was with some Google libs that had internally inconsistent dependencies at one point).

I can understand encouraging the use of virtual environments, but this movement towards requiring them ignores what, I think, is a very common use case. In short, no one way is suitable for everyone.
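
For what it's worth, a sanity check along these lines can confirm that everyone picked up an in-place upgrade of a shared system-wide install (a minimal sketch; the package names are hypothetical stand-ins for internal libs):

    # Verify the shared environment after an in-place upgrade.
    from importlib.metadata import version, PackageNotFoundError

    REQUIRED = ["requests", "numpy"]  # stand-ins for shared/internal libs

    for pkg in REQUIRED:
        try:
            print(f"{pkg}: {version(pkg)}")
        except PackageNotFoundError:
            print(f"{pkg}: MISSING from the shared environment")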


But in your case, if you had a vanilla, or even just a standard hardened RHEL image, then you could run as many container variations as you want and not be impacted by host changes. The host can actually stay pretty static.

You would have a standard container image.


I think the main limitation is not code validation but assumption verification. When you ask an LLM to write some code based on a few descriptive lines of text, it is, by necessity, making a ton of assumptions. Oddly, none of the LLMs I've seen ask for clarification when multiple assumptions might all be likely. Moreover, from the behavior I've seen, they don't really backtrack to select a new assumption based on further input (I might be wrong here; it's just a feeling).

What you don't specify, it must assume. And therein lies a huge landscape of possibilities. And since AIs can't read your mind (yet), their assumptions will probably not precisely match yours unless the task is very limited in scope.


> Oddly, none of the LLMs I've seen ask for clarification when multiple assumptions might all be likely.

It's not odd, they've just been trained to give helpful answers straight away.

If you tell them not to make assumptions, and instead to first ask you all their questions, together with the assumptions they would otherwise make, so you can confirm before they write the code, they'll do that too. I do that all the time, and I'll get a list of like 12 things to confirm/change.

That's the great thing about LLMs -- if you want them to change their behavior, all you need to do is ask.
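
For example, a preamble along these lines (a rough sketch; the wording is mine, not from any vendor's docs) reliably produces that question-first behavior:

    # Hypothetical "ask before assuming" preamble, prepended to the task.
    CLARIFY_FIRST = (
        "Before writing any code, do not make assumptions. "
        "List, numbered, every open question and every assumption you would "
        "otherwise make, and wait for my confirmation before proceeding."
    )

    task = "Write a script that syncs two directories."  # made-up example task
    print(f"{CLARIFY_FIRST}\n\nTask: {task}")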


Yes, except their cookie preferences, to comply with European law. Oh, and they should be able to change their theme from light/dark, but only that. Oh, and maybe this other thing. Except in situations where it would conflict with current sales promotions. Unless they're referred by a reseller partner. Unless it's during a demo, of course. Etc., etc., etc.

This is the sort of reality that a lot of developers in the business world deal with.


Game theory is no more a description of nature than Euclidean geometry is.


Sure, but go try to build a house while ignoring what Euclidean geometry tells you about it.


I agree that both geometry and game theory are very useful tools.


The statement "Game theory is inevitable. Because game theory is just math, the study of how independent actors react to incentives." implies that the "actors" are humans. But that's not what game theory assumes.

Game theory just provides a mathematical framework to analyze outcomes of decisions when parts of the system have different goals. Game theory does not claim to predict human behavior (humans make mistakes, are driven by emotion and often have goals outside the "game" in question). Thus game theory is NOT inevitable.
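
To make that concrete, here's a toy sketch using the textbook prisoner's dilemma: the math finds the equilibria of a payoff structure, and says nothing about whether flesh-and-blood humans will actually land on them.

    # Pure-strategy Nash equilibria of a 2x2 game (prisoner's dilemma payoffs).
    payoffs = {  # (row action, col action) -> (row payoff, col payoff)
        ("C", "C"): (3, 3), ("C", "D"): (0, 5),
        ("D", "C"): (5, 0), ("D", "D"): (1, 1),
    }
    actions = ["C", "D"]

    def is_nash(r, c):
        ru, cu = payoffs[(r, c)]
        row_ok = all(payoffs[(r2, c)][0] <= ru for r2 in actions)  # row can't gain by deviating
        col_ok = all(payoffs[(r, c2)][1] <= cu for c2 in actions)  # col can't gain by deviating
        return row_ok and col_ok

    print([rc for rc in payoffs if is_nash(*rc)])  # -> [('D', 'D')]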


Yes, game theory is not a predictive model but an explanatory/general one. Additionally, not everything is a game, just as in statistics not everything has a probability curve. They can be applied speculatively to great effect, but they are ultimately abstract models.


You can use it for either predictive or explanatory purposes. In Google's early years (the '00s), it was common to diagram out the incentives of all the market participants; this led to innovations such as the use of the second-price VCG auction [1] for ad sales that now make over a third of a trillion dollars per year.

[1] https://en.wikipedia.org/wiki/Vickrey%E2%80%93Clarke%E2%80%9...
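
For reference, the core of the single-item second-price (Vickrey) rule is tiny; a minimal sketch with made-up bids (the real ad auctions are vastly more elaborate):

    def second_price(bids):
        """Highest bidder wins but pays the runner-up's bid, which is what
        makes truthful bidding a dominant strategy."""
        ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
        return ranked[0][0], ranked[1][1]  # (winner, price paid)

    print(second_price({"ad_a": 2.50, "ad_b": 4.00, "ad_c": 3.10}))  # ('ad_b', 3.1)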


Google has now (mostly?) transitioned to using first-price, and more complicated (opaque) auction-style pricing for many of its advertising products.

https://blog.google/products/admanager/simplifying-programma...


> It’s important to note that our move to a single unified first price auction only impacts display and video inventory sold via Ad Manager. This change will have no impact on auctions for ads on Google Search, AdSense for Search, YouTube, and other Google properties, and advertisers using Google Ads or Display & Video 360 do not need to take any action.


This is a good point, and I'm willing to concede that I may be wrong here: predictive power can be gained from (possibly mis-)applications of abstractions onto a real space, which is one of the real reasons to favor abstract thinking in business in the first place.

A practical formula:

1) Identify coordination failures that lock us into bad equilibria, e.g. it's impossible to defect from the online ads model without losing access to a valuable social graph

2) Look for leverage that rewrites the payoffs for a coalition rather than for one individual: right-to-repair laws, open protocols, interoperable standards, fiduciary duty, reputation systems, etc. (a toy sketch follows below)

3) Accept that heroic non-participation is not enough. You must engineer a new Schelling point[1] that makes a better alternative the obvious move for a self-interested majority

TLDR, think in terms of the algebra of incentives, not in terms of squeaky wheelism and moral exhortation

[1]https://en.wikipedia.org/wiki/Focal_point_(game_theory)
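
To illustrate point 2 with a toy model (made-up numbers): defecting alone never pays, but a coalition-level rewrite of the payoffs (a subsidy, a law, an interoperable standard) flips the individually rational move.

    def best_move(adoption, coalition_bonus):
        # adoption: fraction of people already on the alternative
        # coalition_bonus: extra value added by laws/standards/subsidies
        stay = 5.0                                 # value of the incumbent network
        switch = 8.0 * adoption + coalition_bonus  # alternative's value grows with adoption
        return "switch" if switch > stay else "stay"

    print(best_move(adoption=0.1, coalition_bonus=0.0))  # 'stay': lone defection loses
    print(best_move(adoption=0.1, coalition_bonus=5.0))  # 'switch': rewritten payoffs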


As a recent example, Jon Haidt seems to have used this kind of tactic to pull off a coup with the whole kids/smartphones/social media thing [0]. Everybody knew social media tech was corrosive and soul-rotting, but nobody could move individually to stand up against its “inevitability.”

Individual families felt like, if they took away or postponed their kids’ phones, their kid would be left out and ostracized—which was probably true as long as all the other kids had them. And if a group of families or a school wanted to coordinate on something different, they’d have to 1) be ok with seeming “backwards,” and 2) squabble about how specifically to operationalize the idea.

Haidt framed it as “four simple norms,” which offered specific new Schelling points for families to use as concrete alternatives to “it’s inevitable.” And in shockingly little time, it’s at the point where 26 states have enshrined the ideas into legislation [1].

[0] https://www.cnbc.com/2025/02/19/jonathan-haidt-on-smartphone...

[1] https://apnews.com/article/cellphones-phones-school-ban-stat...


He didn't cause that; he is just riding the wave. The call to ban them has been going on for years; it was just put on pause during the pandemic.


Great.

Now let's do to AI slop what many (including Haidt and co., Australia, etc.) have done to lessen kids' social media usage.

FWIW, I gave up on FB/Twttr crap a while ago... Unfortunately, I'm still stuck with WhatsApp and LI (both big blights) for now.

YMMV


AI slop is self-limiting. The new game-theoretic equilibrium is that nobody trusts anything they read online, at which point it will no longer be profitable to put AI slop out there because nobody will read it.

Unfortunately, it's going to destroy the Internet (and possibly society) in the process.


That's my sense too. I wonder where the new foci are starting to form, as far as where people will look to serve the purposes this slop is infiltrating. What the inevitable alternatives to the New Inevitable will start to look like.

At the risk of dorm-room philosophizing: My instincts are all situated in the past, and I don’t know whether that’s my failure of imagination or whether it’s where everybody else is ending up too.

Do the new information-gathering Schelling points look like the past—trust in specific individual thinkers, words’ age as a signal of their reliability, private-first discussions, web of trust, known-human-edited corpora, apprenticeship, personal practice and experience?

Is there instead no meaningful replacement, and the future looks like people’s “real” lives shrinking back to human scale? Does our Tower of Babel just collapse for a while with no real replacement in sight? Was it all much more illusory than it felt all along, and the slop is just forcing us to see that more clearly?

Did the Cronkite-era-television -> cable transition feel this way to people before us?


> AI slop is self-limiting. The new game-theoretic equilibrium is that nobody trusts anything they read online, at which point it will no longer be profitable to put AI slop out there because nobody will read it.

AI slop, unfortunately, is just starting.

It is true that nobody trusts anything online... esp. the Big Media, given the backlash against it over the last decade or so. But that's exactly where AI slop is coming in. Note the crazier and crazier conspiracy theories that are taking hold all around, and not just in the MAGA-verse. And there are plenty of takers for AI slop - both consumers of it and producers of it.

And there's plenty of profit all around. (see crypto, NFTs, and all manners of grifting)

So no, I don't think "nobody will read it". It's more like "everybody's reading it".

But I do agree on the denouement... it's destroying the internet and society along with it


'Defect' only applies to prisoner's-dilemma-type problems. That is just one very limited class of problem, and I would argue not very relevant to discussing AI inevitability.


Game theory is still inevitable. Its application to humans may be non-obvious.

In particular, the "games" can operate on the level of non-human actors like genes, or memes, or dollars. Several fields generate much more accurate conclusions when you detach yourself from an anthropocentric viewpoint; e.g., evolutionary biology was revolutionized by the idea of genes as selfish actors rather than humans trying to pass along their genes, and in particular it explains such concepts as death, sexual selection, and viruses. Capitalism and bureaucracy both make a lot more sense when you give up the idea that they exist for human betterment and instead take the perspective that they exist simply for the purpose of existing (i.e., the organizations that survive are, well, the organizations that survive; there is nothing morally good or bad about them, but the filter they passed is simply that they did not go bankrupt or get disbanded).

But underneath those, game theory is still fundamental. You can use it to analyze the incentives and selection pressures on the system, whether they operate at sub-human (e.g., viral, genetic, molecular), human, or super-human (memetic, capitalist, organizational, bureaucratic, or civilizational) scales.
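
As a small illustration of a game running on non-human actors, here's a sketch of replicator dynamics for the classic hawk-dove game: it's strategy frequencies, not people, that the selection pressure acts on (V and C are illustrative values, with fighting cost C > resource value V).

    V, C = 2.0, 4.0  # resource value, cost of fighting (C > V)

    def mean_payoffs(p):
        # Expected payoff of each strategy against a population playing
        # hawk with frequency p.
        hawk = p * (V - C) / 2 + (1 - p) * V
        dove = (1 - p) * V / 2
        return hawk, dove

    p = 0.01  # initial hawk frequency
    for _ in range(2000):
        hawk, dove = mean_payoffs(p)
        avg = p * hawk + (1 - p) * dove
        p += 0.01 * p * (hawk - avg)  # replicator update
    print(round(p, 3))  # converges toward the mixed equilibrium V/C = 0.5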


You can't use unrealized capital losses (paper losses on property) or even realized losses to offset property tax; you can only offset realized losses against realized gains for income taxes.
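
A toy illustration of the distinction (made-up numbers, not tax advice):

    realized_gains = 30_000
    realized_losses = 12_000
    taxable_gain = max(realized_gains - realized_losses, 0)  # 18,000 for income tax

    assessed_value = 500_000               # property tax is levied on assessed value...
    property_tax = assessed_value * 0.012  # ...and is unaffected by those losses
    print(taxable_gain, property_tax)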


I believe that seeking external validation, inspiration, and/or reasons is not robust, and is a path to unhappiness. IMO, it's better if the reasons for you to care come from within.


I don't pay my bills from reasons within.


Your professors. Your classmates who got a job. Your family and friends of family. Anyone else you know who respects you.


Lack of skilled labor, probably.


Taiwan surely has some poachable labor.


First, you are correct. However, the reason GDP is used as a proxy metric for economic growth is that it's convenient. Doing so makes a few assumptions, though, the foremost of which is that the structure of the economy will change very little from year to year. If that is so, then a rise in GDP should correspond to a rise in economic prosperity (and by extension wealth). Thus, using GDP change to measure changes in prosperity works (more or less) year by year. But the longer the periods you compare (5 years, 10 years, 20 years), the less meaningful the number becomes.

