
These paradox games are getting out of control

Having a large context window is very different from being able to effectively use a lot of context.

To get great results, it's still very important to manage context well. Even if the model allows a very large context window, you can't just throw in the kitchen sink and expect good results.


Why would removing your content from LLM training data cause people to go and seek it out directly from you?

Would removing your website from google search results cause people to go directly to your website?


This seems like a weird comparison - Google’s explicit purpose is to direct people to your site. An LLM's purpose is not.

The point being made is that just as the search engine was the primary means for users to discover content yesterday, so the LLM agent will become the primary means tomorrow. And that content doesn't have to be in the training data, but if an agent is unable to access some particular content, then it won't be discovered by users. Similar to if a search engine is unable to access it.

If you're looking for a fantastic dev-focused linux distro, I can't say enough good things about Bluefin Linux

https://projectbluefin.io/


Cool idea, but supporting homebrew is a big yikes!

I hope no serious developers on linux ever use homebrew, it's the worst package manager by far.

Most package managers support versioning and keeping old versions of installs around, but not homebrew. That's why I'm boycotting it at this point, got burnt by it too many times.

I'd rather use pacman or apt-get or pkgsrc or nix or any other package manager than homebrew.


I don't use Homebrew because it installs to /home/linuxbrew/.linuxbrew. It makes absolutely no sense to use a whole new user, and then use non-standard directories.

If you change where Homebrew installs, then you are on your own because they don't support changing the install path.


While I use Homebrew on macOS for the errant command line utility or library, I share your concern. I use the Universal Blue Silverblue variant for its integrated Nvidia support with either mise-en-place[0], or the native toolbx[1] utility for isolated environments.

[0]https://mise.jdx.dev [1]https://containertoolbx.org


I use bluefin linux full time and don't use homebrew. I do all development in containers, so I can use whatever I want inside them.

Bluefin contributor here, why are you using homebrew that way? For development use a container.

Pardon my ignorance, but how else or what else would you use Homebrew?

A lot of people don't use containers/don't want to use containers. I guess Bluefin might just not be for them though.

This is my impression - if you explicitly don't want to use toolbox or devcontainers I don't think you're on Bluefin's happy path at all, and the maintainers don't seem concerned enough by that to improve other experiences.

Right, Bluefin is for container development.

Just since you are here, is it any good for game dev with things like Godot?

No, but Bazzite DX is almost done, so we can start working on Bazzite GDX soon, which is going to be our game dev image. Though hopefully, as more things become Flatpak native, someday the idea of specialized images won't be so necessary.

so Bluefin is using homebrew within containers only? why bother using homebrew at all then?

Homebrew is used by millions of devs, generally because the advantages are worth it.

You can't use pacman, apt or pkgsrc on image-based distros. And nix is a big headache.

Of course anything that can easily run in a container is better, but I use brew for the stuff that doesn't and have few problems.


I share your concerns about homebrew. It was one of the reasons I gave up on Silverblue/Bluefin.

Is there a simple summary of why homebrew is so problematic?

I agree with you. DHH is a big ruby guy so my expectation was he’d use brew.

He uses mise these days, at least for project-specific stuff.

I've also been very happy with Silverblue (an alternate flavor of Universal Blue, the same guts as Bluefin). It took a bit of an adjustment period to get used to using an immutable distro, but given that I run this as the sole OS on my daily driver, reliability is paramount. It gives the same feeling of running a highly stable OS like MacOS, but with the power, ergonomics and customizability of Linux - and anything I need that isn't easy to fit into the immutable model is just a simple Distrobox invocation away.

It's "Container-driven development" done right - containerized applications and shells _feel_ native via Distrobox (which gives them access to the host FS, network, hardware, etc by default) but without the risks of native development causing dependency conflicts. And if I screw something up, I can just spin up a new container.


Silverblue is a Fedora project. Universal Blue and its flavors (Bluefin, Bazzite, Aurora) are based on its image. They are basically community-maintained versions of Silverblue, because Fedora is very cautious (and stubborn) about including QoL things.

[1]: https://fedoraproject.org/atomic-desktops/silverblue/


That’s a more accurate way of putting it, thanks!

The part I agree about: Software engineering is about more than writing code, so accelerating coding by 10X doesn't accelerate a software engineer by 10X.

The part I disagree about: I've never worked at a company that has a 3 month cycle from code-written to code-review-complete. That sounds insane and dysfunctional. AI won't fix an organization like that


Perhaps I was not clear here. My point isn't that one PR takes 3 months to merge. My point is that if, let's say, 15 PRs from one dev got merged per quarter in the old days, then a 10x productivity boost means roughly 15 PRs get merged per 7 business days now. My point is simply that the lag inherent in the code review cycle can't be compressed to 7 days.

I think focusing on end-to-end time confuses things more than it helps. A system can have 10X throughput with the latency being unchanged. You don't need to reduce latency or cycle time to have a 10X increase in throughput.
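
To put some toy numbers on that (purely hypothetical figures, just to illustrate the latency/throughput distinction):

    # Toy Little's-law sketch with made-up numbers: per-PR review latency stays
    # fixed, but throughput scales with how many PRs are in flight at once.
    REVIEW_LATENCY_QUARTERS = 1.0  # each PR spends a full quarter in the review pipeline

    def prs_completed_per_quarter(prs_in_flight: int) -> float:
        # Little's law: throughput = work-in-progress / latency
        return prs_in_flight / REVIEW_LATENCY_QUARTERS

    print(prs_completed_per_quarter(15))   # 15.0 PRs per quarter
    print(prs_completed_per_quarter(150))  # 150.0 -- 10x throughput, same latency per PR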

The better argument is that software engineers spend a lot of time doing things that aren't writing code and aren't being accelerated by any AI code assistant.


If you review every change as it goes, vibecoded results are often better than human-only and written much faster

If you’re reviewing every change then what does “vibe coding” even mean?

If I'm not mistaken, vibe coding is supposed to be when you don't review at all, you just let'r rip. Reviewing the AI's code is just... Like if coding was riding a bike, and you got an electric bike. Kind of. It doesn't seem like vibes to me.

This is me seeing co-workers PRs :(

It's not like human code doesn't need review.

The usage of "vibe coding" in my experience refers to those folks who run whatever the AI produced, and if it does what they expect without throwing errors, they ship it. If it throws errors, they plug that back into the chatbot until the code stops throwing errors.

The whole point of vibe coding is that you're working faster than you would on your own. If you're reviewing it carefully and understand how it works, you might as well have written it by hand.


Even if it appears to do what you want, but you don't actually read and understand the code, how do you know it doesn't do something else also? Maybe something you don't want?

Irrelevant in vibe coding. If it walks like a duck and quacks like a duck, you don't go looking for extra heads, eyes, fingers, tongues, or tails. You ship it then throw repl.it under the bus when it blows up.

I call this "half vibe coding" (needs a better term). For instances where you know how you'll solve a problem but don't want to type it all out it's great. I tend to comb through the output. Even the SOTA models will make pretty bad performance mistakes, poor maintenance decisions, etc. But it's super useful for getting over the hump of getting started on something.

I agree 100%.

This is a good blog post. Two thoughts about it:

- Contradictory facts often shouldn't change beliefs because it is extremely rare for a single fact in isolation to undermine a belief. If you believe in climate change and encounter a situation where a group of scientists were proven to have falsified data in a paper on climate change, it really isn't enough information to change your belief in climate change, because the evidence of climate change is much larger than any single paper. It's only really after reviewing a lot of facts on both sides of an issue that you can really know enough to change your belief about something.

- The facts we're exposed to today are often extremely unrepresentative of the larger body of relevant facts. Say what you want about the previous era of corporate controlled news media, at least the journalists in that era tried to present the relevant facts to the viewer. The facts you are exposed to today are usually decided by an algorithm that is trying to optimize for engagement. And the people creating the content ("facts") that you see are usually extremely motivated/biased participants. There is zero effort by the algorithms or the content creators to present a reasonably representative set of facts on both sides of an issue


I remember reading an article on one of the classic rationalist blogs (but they write SO MUCH I can't possibly find it) describing something like "rational epistemic skepticism" – or maybe a better term I can't recall either. (As noted below: "Epistemic learned helplessness")

The basic idea: an average person can easily be intellectually overwhelmed by a clever person (maybe the person is smarter, or more educated, or maybe they just studied up on a subject a lot). They basically know this... and also know that it's not because the clever person is always right. Because there's lots of these people, and not every clever person thinks the same thing, so they obviously can't all be right. But the average person (average with respect to whatever subject) is still rational and isn't going to let their beliefs bounce around. So they develop a defensive stance, a resistance to being convinced. And it's right that they do!

If someone confronts you with the PERFECT ARGUMENT, is it because the argument is true and revelatory? Or does it involve some sleight of hand? The latter is much more likely


I tend to like the ethos/logos/pathos model. Arguments from clever people can sound convincing because ethos gets mixed in. And anyone can temporarily confuse someone by using pathos. This is why it's better to have arguments externalized in a form that can be reviewed on their own, logos only. It's the only style that can stand on its own without that ephemeral effect (aside from facts changing), and it's also the only one that can be adopted and owned by any listener that reviews it and proves it true to themselves.


It's usually dumb people who have so many facts and different arguments that one can't keep up.

And they usually have so many of those because they were convinced to pay disproportionate attention to them and don't see the need to check anything or reject bad sources.


I noticed something similar. People who believe in absolute garbage tend to be the ones that don't have a robust bs filter that would let them quickly reject absolute garbage. And it's surprisingly orthogonal to a person's intelligence. There's a correlation, but even very intelligent people can have a very weak bs filter, and their intelligence post-rationalizes the absolute garbage they were unable to reject.


Robust bs detectors may also leave a person susceptible to rejecting novel or unorthodox ideas. There's a balance somewhere between not being overwhelmed by the sea of crazy and still being open to a good idea when it comes along.

Edit: This thread is amazing. 12 years of pre-uni schooling and no mention of any of this stuff... There's also fair criticism of the IRA in the article. Still, seems a little ironic that the people crying foul benefited from the status quo of an uneducated populace.


I don't think you can have too strong a bs detector. When it rejects something, that's not forever. When the thing you rejected comes up in new contexts or produces new results, you will reevaluate and either reject it again or withdraw your rejection. That's the point where intelligence comes in, so that you won't withdraw your rejection too soon or reject a valid thing too persistently. But from what I've noticed, evaluations made at this stage are rarely the problem. The bulk of the problem is a bs filter that isn't strong enough for the initial rejection to happen. And once someone believes in something, it's very hard for them to lose that belief. Garbage sticks.


Was it this one? “Epistemic learned helplessness”

https://slatestarcodex.com/2019/06/03/repost-epistemic-learn...


Yes, that's the one, thank you!


The problem isn't the PERFECT ARGUMENT, it's the argument that doesn't look like an argument at all.

Take anti-vaxxers. If you try to argue with the science, you've already lost, because anti-vaxxers have been propagandised into believing they're protecting their kids.

How? By being told that vaccinations are promoted by people who are trying to harm their kids and exploit the public for cash.

And who tells them? People like them. Not scientists. Not those smart people who look down on you for being stupid.

No, it's influencers who are just like them, part of the same tribe. Someone you could socialise with. Someone like you.

Someone who only has your best interests at heart.

And that's how it works. That's why the anti-vax and climate denial campaigns run huge bot farms with vast social media holdings which insert, amplify, and reinforce the "These people are evil and not like us and want to make you poor and harm your kids" messaging, combined with "But believe this and you will keep your kids safe".

Far-right messaging doesn't argue rationally at all. It's deliberate and cynically calculated to trigger fear, disgust, outrage, and protectiveness.

Consider how many far-right hot button topics centre on protecting kids from "weird, different, not like us" people - foreigners, intellectuals, scientists, unorthodox creatives and entertainers, people with unusual sexualities, outgroup politicians. And so on.

So when someone tries to argue with it rationally, they get nowhere. The "argument" is over before it starts.

It's not even about rhetoric or cleverness - both of which are overrated. It's about emotional conditioning using emotional triggers, tribal framing, and simple moral narratives, embedded with constant repetition and aggressive reinforcement.


I liked your point about tribalism up until you said one tribe is rational and the other not. The distribution of rational behavior does not change much tribe to tribe, it's the values that change. As soon as you say one tribe is more rational than another you're just feeding into more tribalism by insulting a whole group's intelligence.

I think the real problem is that zero friction global communication and social media has dramatically decreased the incentive to be thoughtful about anything. The winning strategy for anyone in the public eye is just to use narratives that resonate with people's existing worldview, because there is so much information out there and our civilization has become so complex that it's overwhelming to think about anything from first principles. Combine that with the dilution of local power as more and more things have gone online and global, a lot of the incentives for people to be truthful and have integrity are gone or at least dramatically diminished compared to the entirety of human history prior to the internet.


I’ll push back - the term is rational as in logical.

Rational in the sense that it flows from whatever emotional choices resonate? That's more a matter of being faithful to their beliefs. I wouldn't call that rational per se.

And being scared of tribalism is not necessary, because tribalism is currently highly effective at creating political power.

So some degree of tribalism is simply matching the competition.


>I liked your point about tribalism up until you said one tribe is rational and the other not. The distribution of rational behavior does not change much tribe to tribe, it's the values that change. As soon as you say one tribe is more rational than another you're just feeding into more tribalism by insulting a whole group's intelligence.

That was largely the case until this most recent electoral cycle, where the Great Crank Realignment, driven by the COVID response, pushed conspiracy theorists, health and wellness grifters, supplement hawkers, and many others to the right.


Umm, the COVID response itself was just as much a religion as it was science. There were people walking nature trails with nobody around with their masks securely on their faces. There were requirements for people whose job was to sit in a truck's cab by themselves all day to vaccinate or lose said job. There were unnecessarily draconian shutdowns. There was uncontrolled and unaudited spending on saving nonexistent businesses and buying enough vaccines for several more years of COVID (but with expiry dates within a year).

If the harsh response to COVID had been necessary, the places that didn't do it should have died out. They did not. You simply don't hear about the Great Depopulation of Africa.

And yet you will read the above, label me a crank, and downvote this. Meaning that you are just as tribal as the people you look down upon.


> There were people walking nature trails with nobody around with their masks securely on their faces.

Rogan got really worried about masks and health, and at the same time was, for instance, having a guy on who does underwater weight lifting for training NBA players, giving him the highest praise and advertising his turmeric coffee. Were masks during that kind of activity really harmful to people, but underwater weight lifting and extreme in-sauna exercise good?

If California had held their restrictions a month or two longer, until the vaccine, they would have had many fewer deaths.


I really think most of these statements apply to both political sides of messaging in a majority of cases. You can't talk about in-group out-group unless you draw a line somewhere, and in your comment you drew a line between people who represent science and rationality and those that are fearful and reactionary, which you'd believe to be a sensible place to draw that line if you habitually consume basically any media. The actual science seems mostly incidental to any kind of conversation about it.

Some people are crippled by anxiety and fear of the unknown or fear of their neighbors. It's sad, but it's not unique to political alignment.


I think what they were saying is that in-groups are trusted because of familiarity, which can be exploited in order to instill messaging that drives emotional decision making over reasoned contemplation. 'Scientists' were part of the example used, which invoked a contemporary issue (anti-vax). They are saying these messaging systems are a component of organized right-wing campaigns; an attribution which at this point in time is rather uncontroversial.

That they would see themselves as part of the rational group opposed to a campaign of weaponized social levers which turn people against evidence in order to further the goals of a different group which is not actually aligned with those they are manipulating is not insightful or provocative. It stands to reason that they would.

The implication that it means there is some sort of political 'both sides'ism that degrades their point is incredibly weak.


> The implication that it means there is some sort of political 'both sides'ism that degrades their point is incredibly weak.

I didn't intend to imply that, I interpreted their comment in roughly the same way you did and just think it's the same high level kind of messaging being leveraged regardless of which one you align with, and that issue specifically isn't inherently a right or left dividing line.

If you're inclined to be anti-vaxx, the messaging that the right will try to deliver to you will certainly capitalize on whatever they think will compound those feelings. The government is trying to control you, take your job away, your freedoms, and you should be wary of the others who say yes. It's easy to manipulate people if you're chipping away at their sense of reality.

If you're inclined to be pro-vaxx, the messaging was similarly delivered to compound a feeling of paranoia, and people who felt differently were worth considering an enemy, because they didn't care about your kids, or your grandma, or public health in general (is what messaging at the time seemed to indicate).

Regardless of this discussion not being specifically about the pandemic, my actual perception of either group of people (the ones that absorbed as much as they could and fell down their respective doom holes) was that they were rather annoying, and I just avoided the topic at all costs. I wore a mask in situations that seemed to call for it, got some of the vaccinations, kept a reasonable distance, etc. It didn't need to be more than that. I concluded that there were shreds of truth, scientific and otherwise, among the feelings that everyone had, but if I accidentally found myself in a conversation that had any strong opinion, it wasn't going to go well; that person was just living out their personal hellscape of paranoia that they were vulnerable to and became targets because of.

It was a very divisive and tribal moment that I hope we've learned something from.


I'm sorry that I misunderstood you; it seemed to me that you were trying to invoke a 'gotcha' in that the commenter's lack of self-examination about how they may have been in a similar dynamic invalidated their judgment of others.

With this explanation as context, I don't think the commenter was attributing it as a left vs right issue except that the targeting was being done by right-wing groups.

I think a paranoid world view, broad rejection of evidence, and othering of groups based on existential fear are hallmarks of the right-wing and regardless of initial beliefs can only manifest themselves as such. Thus it is natural that such messaging, once internalized, would lead anyone to be clearly viewed as no longer aligned with anything except a right wing view.


I’d like to mildly point out that this style of caricaturing ideologies is one of the most effective at entrenching those same ideologies. If you can recognize that those critiquing you are doing so in bad faith, not only does it make the critique easy to dismiss, it provides evidence for the prior that all critiques are in bad faith and can be safely ignored.


Bit of a problem when we can see the bad faith factory. As the OP article points out, there are troll farms which are "working" both sides of an argument by supplying bad faith arguments. With the intent of provoking conflict, rather than a victory for either side.


It's also mentioned in "the authoritarians" (search for the book and the short-form essay) - roughly half the population is driven by intellectual curiosity about all kinds of things and doesn't always agree on much - they just want freedom to be individuals.

The other half is driven by fear, disgust, paranoia, etc.. That second group is much easier to trigger / convince - just play on their fears about their kids, their friends, their church ("will ban Bibles and churches"), etc.. (I was raised in this kind of environment).

Authoritarians WANT a "strong leader" to tell them what to think, how to act, etc. That's how they show they belong to the tribe: they believe everything that is said, they give the most $$ to their church, etc.


A complicating factor when talking about rationality and propensity towards either left- or right-wing authoritarian impulses is brain structure, according to a recently published study.

"Young adults who scored higher on right-wing authoritarianism had less gray matter volume in the dorsomedial prefrontal cortex, a region involved in social reasoning. Meanwhile, those who endorsed more extreme forms of left-wing authoritarianism showed reduced cortical thickness in the right anterior insula, a brain area tied to empathy and emotion regulation."

Not only is it tribalism, it's also individuals' fundamental anatomy. This seems like a very challenging problem if the people you are hoping to convince are hardwired against your message.

https://www.psypost.org/authoritarian-attitudes-linked-to-al... (layperson's summary)

https://www.ibroneuroscience.org/article/S0306-4522(25)00304... (actual study)


Exactly. The problem is largely due to biology. Someone might try to change this at some point for future "better" humans (eugenics of a sort), but it's basically impossible to change in existing adults.


There's an essay on thinking styles: fuzzy narrative big picture culture versus logical detail culture. https://www.someweekendreading.blog/math-illiterate-rulers/


> Take anti-vaxxers. If you try to argue with the science, you've already lost, because anti-vaxxers have been propagandised into believing they're protecting their kids

What do you think causes vaccine injury?

Do you believe in the zoonotic origin theory of Covid, rather than the Wuhan coronavirus Institute accidentally releasing a coronavirus in Wuhan? Why do you think that is?

Why do you think vaccine manufacturers asked governments for blanket immunity from prosecution?

Why does the United States require children to get so many more vaccines than other developed western countries?

Do you think you are assuming which side is rational?


Vaccine injury is a thing that actually happens. It really does.

Also, unvaccinated people die from diseases that vaccines prevent. That happens too.

The problem with the anti-vaxxers is their assessment of the balance of risks is distorted.

The problem with the "vaccine establishment" is that it's so certain that the balance of risks are in favor of vaccines that it's willing to hide the actual risks in order to get more people to take the vaccines. That's not only morally wrong, it also may do more harm than good in the end. (Which is the same thing if you take a consequentialist view of morals.)


Ah yes. People who think like you and agree with you are rational, not prone to fear, disgust outrage, or protectiveness. But people who disagree with you are obviously irrational and can't be reasoned with. You are "educated" and they are "fear-mongers".


> But people who disagree with you are obviously irrational and can't be reasoned with.

You are saying this with sarcasm, but it is a tautology.

If I am factually correct, by definition, everyone who disagrees with me is irrational and can't be reasoned with.

Anti-vax is a great example of this. We have loads and loads and loads of evidence of the harm that not being vaccinated can do (now including dead children thanks to measles) and very scant evidence to the contrary (there is some for specific vaccines for specific diseases like Polio). However, until it hits an anti-vaxxer personally, they simply will refuse to believe it.

Of course, once an anti-vaxxer personally gets a disease, NOW the anti-vaxxers want the vaccine. Thus, demonstrating simultaneously that they actually don't understand a single damn thing about vaccines and that their "anti-vaxx belief" was irrational as well.


> If I am factually correct, by definition, everyone who disagrees with me is irrational and can't be reasoned with.

No, that doesn’t follow at all. Your arguments could be bad or irrational in themselves (right for the wrong reasons), and other people could hold beliefs that logically follow from plausible, but wrong, premises.


Ignoring the strawman at the end, you're making their point for them.

Anti-vax is actually a horrible example of this because it can never be proven that vaccines don't harm us. Any non-infinite evidence will never reduce the probability to zero. You even allude to this point. If there is a single case of a harmful vaccine, or even a reasonable probability of one, then it isn't irrational to be cautious of vaccines. Just because the evidence is enough for you doesn't make anyone who disagrees irrational. That line of thinking just makes you irrational.

I say this as a fully vaccinated (including COVID) vaccine enjoyer.


> If there is a single case of a harmful vaccine, or even a reasonable probability of one, then it isn't irrational to be cautious of vaccines

The problem is that humans are really unsuited to statistical thinking, especially about risk, and what it means to "be cautious" about something. In this context, "being cautious" about vaccines means "being reckless" about disease, because you're rejecting the mitigation measures. It is not a good bet to roll the dice for your children against measles.

We have to recognize that there have been both incidents of vaccine contamination and of individuals who have had unexpected negative reactions to vaccines. You get advised about this every time you have one!

Perhaps the diagrams should include "one sided scale" as an argument.


> Ignoring the strawman at the end

Oh, no. You don't get to ignore my actual experience with people and Covid vaccines. I watched 3 different anti-vaxxers in my family die begging for a vaccine while doctors struggled to save their dumb asses (yeah, mass spreading event).

> it can never be proven that vaccines don't harm us.

That's your job to prove, Mr. Skeptical. Not mine.

I very much can prove that not getting a vaccine does harm you. I've got a handful of measles deaths to point to right now. We've got step function decreases in reproductive cancers due to HPV vaccination. We've got shingles vaccines showing decreases in dementia and Alzheimers. I can go on and on.

It's up to YOU to show the contrary: that the harm a vaccine does outweighs its benefits.

People don't seem to get that "being skeptical" is simply the first step. After that, you are required to begin the hard work of massing factual evidence as well as cause/effect relationships for your argument.

Otherwise you are simply "obviously irrational and can't be reasoned with".

> I say this as a fully vaccinated (including COVID) vaccine enjoyer.

"I'm not racist, but ..."

Sorry. Statement gives you no credibility or authority.


It’s impossible to argue with the biased framing you’ve setup: any single good outcome due to vaccines is sufficient to declare victory for your argument while opponents face defeat unless they show that all harms outweigh all benefits based on your evaluation methodology.

Anyway, for everyone else, the J&J COVID vaccine is known to cause heart problems in certain men and boys. Here’s an article about the issue from the pre-RFK HHS era:

https://health.mountsinai.org/blog/wynk-heart-inflammation-m...


>J&J COVID vaccine is known to cause heart problems in certain men and boys

And what is the risk if you get COVID and are unvaccinated? I can't say there is no risk to drinking water, but I can say that there is a huge risk of dying of dehydration from not drinking water.


How is your framing any better? Who claimed vaccines are 100% harmless and have zero chance of injury? The claim was that the chances are vanishingly small.


> If there is a single case of a harmful vaccine, or even a reasonable probability of one, then it isn't irrational to be cautious of vaccines. Just because the evidence is enough for you doesnt make anyone who disagrees irrational. That line of thinking just makes you irrational.

There's a difference between "(ir)rational" and "(ab)normal human thinking". What you describe is both irrational and also very normal for humans.

To illustrate what I mean, I'll put the probabilities into terms of dice rolls:

Before vaccines:

Roll a normal, fair, six-sided dice, once. If it's even, you died. (Pre-industrial society, half of us died young of what are now easily preventable illnesses).

With vaccines, at current safety thresholds for fatal reactions:

Roll a normal, fair, six-sided dice, seven times. Even for borderline cases where the vaccines are covering serious illnesses, you'd need to roll 1-2-3-4-5-6-1 in that order to see a fatal adverse reaction, otherwise the vaccine is withdrawn from the market. (~1 per quarter million cases).

But, just like people don't really have a rational intuition for how a "billionaire" is a thousand times richer than a "millionaire", people don't really have rational intuition for probabilities like these. I suspect our intuition on probability is more like "here's 8 bushes, a deer is hiding behind one, which one?", because of how often people act as though being unlucky for long enough means they're due for a win. And I really do mean eight bushes, because of how badly we handle probabilities even in the 5% range.
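
If anyone wants to sanity-check that arithmetic, a quick sketch (the threshold itself is only my illustration, not a measured rate):

    # Probability of one specific 7-roll sequence on a fair six-sided die.
    p = (1 / 6) ** 7
    print(p)             # ~3.57e-06
    print(round(1 / p))  # 279936, i.e. roughly "1 in a quarter million"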


Just to add a little to the discussion, I suspect that the "not like us" messaging is mostly a right-wing thing, while there's more of a "don't contaminate my fluids" argument from the far-left.

Neither is a rational argument, and both still trigger the same disgust and fear, but they tend to have different implications for outgroups.


> "don't contaminate my fluids" argument from the far-left

What does this refer to? I assume it has nothing to do with Flint, Michigan ;-)


Probably from one of the crazies in Dr. Strangelove (the movie)



repetition breeds rationalism. variety of phrasing breeds facts.

it's how the brain works. the more cognitive and perceptive angles agree on the observed, the more likely it is, that the observed is really / actually observed.

polysemous language (ambiguity) makes it easy to manipulate the observed. reinterpretation, mere exposure and thus coopted, portfolio communist media and journalism, optimize, while using AI for everything will make it as efficient as it gets.

keep adding new real angles and they'll start to sweat or throw towels and tantrums and aim for the weak.


To add to your second point, those algorithms are extremely easy to game by states with the resources and desire to craft narratives. Specifically Russia and China.

There has actually been a pretty monumental shift in Russian election meddling tactics in the last 8 years. Previously we had the troll army, in which the primary operating tactic of their bot farms was to pose as Americans (as well as Poles, Czechs, Moldovans, Ukrainians, Brits, etc.) but push Russian propaganda. Those bot farms were fairly easy to spot and ban, and there was a ton of focus on it after the 2016 election, so that strategy was short lived.

Since then, Russia has shifted a lot closer to Chinese style tactics, and now have a "goblin" army (contrasted with their troll army). This group no longer pushes the narratives themselves, but rather uses seemingly mindless engagement interactions like scrolling, upvoting, clicking on comments, replying to comments with LLMs, etc., in order to game what the social media algorithms show people. They merely push the narratives of actual Americans (not easily bannable bots) who happen to push views that are either in line with Russian propaganda, or rhetoric that Russian intelligence views as being harmful to the US. These techniques work spectacularly well for two reasons: the dopamine boost to users who say abominable shit as a way of encouraging them to do more, and as a morale-killer to people who might oppose such abominable shit but see how "popular" it is.

https://www.bruegel.org/first-glance/russian-internet-outage...


> These techniques work spectacularly well for two reasons

Do they work spectacularly well, though? E.g. the article you link shows that Twitter accounts holding anti-Ukrainian views received 49 fewer reposts on average during a 2-hour internet outage in Russia. Even granting that all those reposts were part of an organized campaign (it's hardly surprising that people reposting anti-Ukrainian content are primarily to be found in Russia) and that 49 reposts massively boosted the visibility of this content, its effect is still upper bounded by the effect of propaganda exposure on people's opinions, which is generally low. https://www.persuasion.community/p/propaganda-almost-never-w...


Notice that the two reasons I mentioned don't hinge on changing anyones mind.

1 - They boost dopamine reward systems in people who get "social" validation of their opinions/persona as an influencer. This isn't something specific to propaganda...this is a well-observed phenomenon of social media behavior. This not only gives false validation to the person spreading the misinformation/opinions, but it influences other people who desire that sort of influence by giving them an example of something successful to replicate.

2 - In aggregate, it demoralizes those who disagree with the opinions by demonstrating a false popularity. Imagine, for example, going to the comments of an instagram post of something and you see a blatant neo-nazi holocaust denial comment with 50,000 upvotes. It hasn't changed your mind, but it absolutely will demoralize you from thinking you have any sort of democratic power to overcome it.

No opinions have changed, but more people are willing to do things that are destructive to social discourse, and fewer people are willing to exercise democratic methods to curb it.


Do you have any evidence that a substantial number of people will be influenced in the way you claim? Again, propaganda generally has no or almost no effect.


That is tricky. I think some propaganda has no effect while some propaganda is so impactful that it is the sole cause of some major, major things. I know you said "generally" but I think that doesn't present the full picture.

The Russian state's hack and leak of Podesta's emails caused Pizzagate and QAnon. Russian propagandists also fanned the flames of both. It's not quite clear if this was a propaganda victory (it could be that it was propaganda from other sources commenting on the hacked emails which bears almost all responsibility for Pizzagate and what followed) or simply an offensive cybercapabilities victory, but this is an example of the complex chains of actions which can affect societal opinions and attitudes.

I am skeptical random LLM nonsense from Russian farms is shifting sentiment. But I think it's prudent to remain open to the possibility that the aggregate effect of all propaganda, intelligence, and interference efforts by the Russian state in the past decade could have created the impetus for several significant things which otherwise would likely not have occurred.

Another example: the old Russian KGB propaganda about America inventing AIDS as a bioweapon was extremely effective and damaging: https://en.wikipedia.org/wiki/Operation_Denver

More recent Russian propaganda about America running a bioweapon lab in Ukraine has been quite effective and is still believed by many.


> Again, propaganda generally has no or almost no effect.

This is a wild claim.

I read the earlier article; it claimed that most of the pro-Trump Russian propaganda was consumed by Republicans, so it didn't change any viewpoints.

Ignoring the idea that it might have prevented a change, it's a pretty small sample size compared to, you know, all of human history.


> a "goblin" army

Hah, a "monkey amplifier" army! Look at the garbage coming out of infinite monkeys' keyboards and boost what fits. Sigh


> Specifically Russia and China.

...or USA


In this context, the USA does not need a "goblin army" for pushing domestic propaganda.

Silicon valley is already fully compliant.


What should make us believe any other state propaganda is better, even for its own general population?


The best way to lie is not presenting false facts, it's curating facts to suit your narrative. It's also common to accidentally lie to yourself or others in this way. See a great many news stories.


The act of curating facts itself is required to communicate anything because there are an infinite number of facts. You have to include some and exclude others, and you arrange them in a hierarchy of value that matches your sensibilities. This is necessary in order to perceive the world at all, because there are too many facts and most of them need to be filtered. Everyone does this by necessity. Your entire perceptual system and senses are undergirded by this framework.

There is no such thing as "objective" because it would include all things, which means it could not be perceived by anyone.


The subjective/objective split is useful. What good is raising the bar for objectivity such that it can never be achieved? Better to have objective just mean that nobody in the current audience cares to suggest contradictory evidence.

It's for indicating what's in scope for debate, and what's settled. No need to invoke "Truth". Being too stringent about objectivity means that everything is always in scope for debate, which is a terrible place to be if you want to get anything done.


I often put it this way: you can lie with the truth. I feel like most people don't get this.


Another very good way to lie is to set up the framing such that any interpretation of any fact skews in your desired direction. Including which things are to be considered important/relevant, what kind of argument is considered valid/not. Done well, people might not even pick up that there is lying/misdirection involved. Rig the game.


The idea that people believe in climate change (or evolution) is odd, considering people don't say they believe in general relativity or the atomic theory of chemistry. They just accept those as the best explanations for the evidence we have. But because climate change and evolution run counter to some people's values (often religious but also financially motivated), they get called beliefs.


You generally don't oppose things unless you can grasp them to the point where you understand how they challenge other beliefs you have culturally or intuitively integrated.

Evolution directly challenges the idea that humans are very special creatures in a universe where mighty mystic forces care about them a lot.

Climate change, and the weight of human industry in it, directly challenges the lifestyle expectations of the wealthiest.


To some extent, physics/chemistry/etc. challenge the notion that free will exists, but that challenge is far enough removed and rarely touched upon that people who believe in free will don't feel that modern science is attacking that belief, and the scientists working on it generally see free will or any mechanisms of the brain as far too complex when they are studying things on the order of a few particles or a few molecules.

Some of neurology/psychology gets a bit closer, but science of the brain doesn't have major theories that are taught on the same level nor have much impact on public policy. The closest I can think of is how much public awareness of what constitutes a mental disorder lags behind science, but that area is still constantly contested even among the researchers themselves and thus prevents a unified message being given to the public that they must then respond to (choosing to believe the science or not).


> But because climate change and evolution run counter to some people's values (often religious but also financially motivated), they get called beliefs

Hey, weren't we just talking about propaganda?


Thanks for your thoughts, they perfectly extend mine. I agree that it would be a sign of a very fragile belief system if it gets unwound by a single bit of contradictory evidence. And as to the "facts" that we're getting 24/7 out of every microwave: they're just a sign of the complete decoupling of people's beliefs from empirical reality, in my humble opinion. Supply and demand and all that.


I would contend that empiricism is inadequate to discern what is real and what is true. Much of human experience and what is meaningful to being a person is not measurable nor quantifiable.


> the previous era of corporate controlled news media... The facts you are exposed to today are usually decided by an algorithm

... But that algorithm is still corporate controlled.


> Say what you want about the previous era of corporate controlled news media, at least the journalists in that era tried to present the relevant facts to the viewer.

If you think this reduced bias, you couldn't be more wrong - it only made the bias harder to debunk. Deciding which facts are "relevant" is one easy way to bias reporting, but the much easier, much more effective way is deciding which stories are "relevant". Journalists have their own convictions and causes, motivating which incidents get cast as isolated and random and buried in the news, and which get treated as part of a wider trend, a "conversation that we as a nation must have", etc., and given front-page treatment.

A typical example: "And third, the failure of its findings to attract much notice, at least so far, suggests that scholars, medical institutions and members of the media are applying double standards to such studies." - https://www.economist.com/united-states/2024/10/27/the-data-... (unpaywalled: https://archive.md/Mwjb4)


> If you believe in climate change and encounter a situation where a group of scientists were proven to have falsified data in a paper on climate change, it really isn't enough information to change your belief in climate change, because the evidence of climate change is much larger than any single paper.

Although your wider point is sound, that specific example should undermine your belief quite significantly if you're a rational person.

1. It's a group of scientists and their work was reviewed, so they are probably all dishonest.

2. They did it because they expected it to work.

3. If they expected it to work it's likely that they did it before and got away with it, or saw others getting away with it, or both.

4. If there's a culture of people falsifying data and getting away with it, that means there's very likely to be more than one paper with falsified data. Possibly many such papers. After all, the authors have probably authored papers previously and those are all now in doubt too, even if fraud can't be trivially proven in every case.

5. Scientists often take data found in papers at face value. That's why so many claims are only found to not replicate years or decades after they were published. Scientists also build on each other's data. Therefore, there are likely to not only be undetected fraudulent papers, but also many papers that aren't directly fraudulent but build on them without the problem being detected.

6. Therefore, it's likely the evidence base is not as robust as previously believed.

7. Therefore, your belief in the likelihood of their claims being true should be lowered.

In reality how much you should update your belief will depend on things like how the fraud was discovered, whether there were any penalties, and whether the scientists showed contrition. If the fraud was discovered by people outside of the field, nothing happened to the miscreants and the scientists didn't care that they got caught, the amount you should update your belief should be much larger than if they were swiftly detected by robust systems, punished severely and showed genuine regret afterwards.
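
One way to make "how much you should update" concrete is a toy Bayes calculation; every number below is invented purely for illustration, not an estimate about any real field:

    # Toy Bayesian update -- every probability here is invented for illustration.
    # H = "the field's evidence base is robust"
    prior_robust = 0.90

    p_fraud_given_robust = 0.05      # chance of catching a fraud case if the field is robust
    p_fraud_given_not_robust = 0.30  # chance of catching one if sloppy practices are widespread

    p_fraud = (p_fraud_given_robust * prior_robust
               + p_fraud_given_not_robust * (1 - prior_robust))

    posterior_robust = p_fraud_given_robust * prior_robust / p_fraud
    print(round(posterior_robust, 2))  # 0.6 -- belief drops noticeably, but doesn't collapse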


You're making a chain of assumptions and deductions that are not necessarily true given the initial statement of the scenario. Just because you think those things logically follow doesn't mean that they do.

You also make throw-away assertions like "That's why so many claims are only found to not replicate years or decades after they were published." What is "so many claims"? The majority? 10%? 0.5%?

I totally agree with you that the nuances of the situation are very important to consider, and the things you mention are possibilities, but you are too eager to reject things if you think "that specific example should undermine your belief quite significantly if you're a rational person." You made lots of assumptions in these statements and I think a rational person with humility would not make those assumptions so quickly.


> What is "so many claims?" The majority? 10%? 0.5%?

Wikipedia has a good intro to the topic. Some quick stats: "only 36% of the replications [in psychology] yielded significant findings", "Overall, 50% of the 28 findings failed to replicate despite massive sample sizes", "only 11% of 53 pre-clinical cancer studies had replications that could confirm conclusions from the original studies", "A survey of cancer researchers found that half of them had been unable to reproduce a published result".

The example is hypothetical and each step is probabilistic, so we can't say anything is necessarily true. But which parts of the reasoning do you think are wrong?


Oh so you're talking about replications in a very specific field, one completely different from the example you're using elsewhere of climate change.

Your first step is "It's a group of scientists and their work was reviewed, so they are probably all dishonest."

Even that is an unreasonable step. It is very possible for a single person to deceive their peers.

Deductive reasoning like this works so much better for Sherlock Holmes, in fiction. In reality, deductive reasoning tends to re-enforce your biases and ignore the vast possibility space of alternatives.


I didn't pick the example of climate change, but the field is irrelevant. It was just an example. The argument applies equally well regardless of what the hypothetical scientists are inventing data for.

It is possible for a single person to deceive all their peers if you assume unlimited incompetence and naivety, but that should reduce your faith in what they say just as much!

The argument uses logical induction, not deduction. Induction works fine and is the sort of ordinary reasoning used by people every day. It's normal to trust a group of people less after they were caught lying. If you don't do this, you're the one being irrational, not other people.


> It's a group of scientists and their work was reviewed, so they are probably all dishonest.

Peer review is a very basic check, more or less asking someone else in the field "Does this paper, as presented, make any sense?". It's often overvalued by people outside the field, but it's table stakes to the scientific conversation, not a seal of approval by the field as a whole.

>Scientists often take data found in papers at face value. That's why so many claims are only found to not replicate years or decades after they were published. Scientists also build on each other's data. Therefore, there are likely to not only be undetected fraudulent papers, but also many papers that aren't directly fraudulent but build on them without the problem being detected.

I think it's rare that scientists take things completely at face value. Even without fraud, it's easy for people to make mistakes and it's rare that everyone in a field actually agrees on all the details, so if someone is relying on a paper for something, they will generally examine things quite closely, talk to the original authors, and to whatever extent practical attempt to verify it themselves. The publishing process doesn't tend to reward this behavior, though, unfortunately (And also as a result, an external observer does not generally see the results of this: if someone concludes that a result is BS as a result of this process, they're much more likely to drop it than try to publish a rebuttal, unless it's something that is particularly important)


Sorry, what I meant was that the authors on a paper are supposed to be reviewing each other's contributions. They should all have access to the same data and understand what's going on. In practice, that doesn't always happen of course. But it should. Peer review where a journal just asks someone to read the final result is indeed a much weaker form of check.

There's way too many cases of bogus papers being cited hundreds or thousands of times for me to believe scientists check papers they are building on. It probably depends a lot on the field, though; this stuff always does.


See also: the Chinese robber fallacy.

Even if only 0.1% of Chinese people engaged in theft, and that would be a much lower rate than in any developed country, you'd still get a million Chinese thieves. You could show a new one every day, bombarding people with images and news reports of how untrustworthy Chinese people are. The news reports themselves wouldn't even be misinformation, as all the people shown would actually be guilty of the crimes they were accused of. Nevertheless, people would draw the wrong conclusion.
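
The back-of-envelope arithmetic, using the hypothetical 0.1% rate and a rough population figure:

    # The hypothetical 0.1% rate from above, applied to a ~1.4 billion population.
    population = 1_400_000_000
    theft_rate = 0.001
    print(int(population * theft_rate))  # 1400000 -- over a million true examples to cherry-pick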


Many people are curious about truth. But because of gaslighting, no single source of truth, and too much noise, people have checked out completely. People know something is fishy; they know barbarians are at the gate. But they also know that the gate is 10,000 km away, so they think, "Let me live my life peacefully in the meantime." They have lost hope in the system.


It is indeed a mess. It's full of strawmen, like:

> Price Drops Don’t Lead to Supply. They Kill It.

No one believes price drops cause an increase in supply. They believe an increase in supply causes price drops.

> If “build more” was going to bring prices down and stabilize the system, we wouldn’t be seeing these mixed signals.

People believe that increasing supply will lower prices, not "stabilize the system". The current system is plenty stable, and that's the problem.


Right; clearly, as supply meets demand and prices stabilize or start to drop, new supply will slow down. You can easily have scenarios where lots of supply is being built at once and outruns demand by a lot and causes a short-term drop in prices, but that still won't cause supply to drop.

I think arguments like the one in the article have over-learned the lesson of 2008. Yes the financial crash in 2008 wiped out so many home builders that capacity to create supply was lowered. But that's not the sort of event that is caused by high supply.


This discussion so far ignores cost. That's a problem.

And cost has to do with intensity of effort. In the case of construction, if there is less construction going on, then there is ("all other things being equal") more personnel available: so fewer delays and lower hourly costs. If there is less construction going on, there is also less pressure on materials suppliers, so less expensive lumber, concrete, etc. So it's easier to make a profit. So it might make sense to build lower value housing. Whereas when construction is booming, there is even less incentive to build lower value housing.

Of course, if land availability is artificially constrained, then there is never much reason to build lower value housing. The profit is elsewhere.


This is an uncharitable reading of the article.

Investors & developers are motivated by profit; lower prices reduce (the expectation of future) profit, so less housing will be built.

Right?

Yes, the article is too wordy and lacks focus. Typical for us progressives.

OC prescribes "bottom-up" something something. I haven't read Housing Trap, so can't comment.

My prescription would be for policy makers to work with investors & developers, figure out how to make profits more stable and predictable, figure out how to institute those reforms.

I'd also consider restructuring the housing industry. E.g. IIRC in Germany, developers are often also landowners, so they have longer horizons for considering profitability. Seems like an obvious reform, especially considering the climate crisis. Like design & build wrt total lifetime cost of ownership, so adopting passivhaus and activhaus innovations is a no-brainer. (Just trying to say when the landowner is also the developer, they can capture more profit by incorporating better tech.)


Cline is absolutely fantastic when you combine it with Sonnet 4. Always use plan mode first and always have it write tests first (have it do TDD). It changed me from a skeptic to a believer and now I use it full time.
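
For example, the "write tests first" step just means having it produce something like this before any implementation exists (the function and module names are made up for illustration):

    # Hypothetical first step of the TDD loop: a failing test that pins down behavior
    # before the agent writes any implementation. `slugify` and the `myapp.text`
    # module are invented names, purely for illustration.
    from myapp.text import slugify  # intentionally doesn't exist yet

    def test_slugify_lowercases_and_hyphenates():
        assert slugify("Hello World") == "hello-world"

    def test_slugify_strips_punctuation():
        assert slugify("Hello, World!") == "hello-world"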


How much is it costing you?


I use Roo Code (Cline fork) and spend roughly $15-30/mo by subscribing to Github Copilot Pro for $10/mo for unlimited use of GPT-4.1 via the VS Code LM API, and a handful of premium credits a month (I use Gemini 2.5 Pro for the most part).

Once I max out the premium credits I pay-as-you-go for Gemini 2.5 Pro via OpenRouter, but always try to one shot with GPT 4.1 first for regular tasks, or if I am certain it's asking too much, use 2.5 Pro to create a Plan.md and then switch to 4.1 to implement it which works 90% of the time for me (web dev, nothing too demanding).

With the different configurable modes Roo Code adds to Cline I've set up the model defaults so it's zero effort switching between them, and have been playing around with custom rules so Roo could best guess whether it should one shot with 4.1 or create a plan with 2.5 Pro first but haven't nailed it down yet.


Looking at Cline, wondering what the real selling points for Roo Code are. Any chance you can say what exactly made you go with Roo Code instead of Cline?


Cline has two modes (Plan and Act) which work pretty well, but Roo Code has 5 modes by default (Code, Ask, Architect, Orchestrator, Debug), and it's designed so that users can add custom modes. E.g. I added a Code (simple) mode with instructions about the scale/complexity of tasks it can handle or to decide to pass the task to Code for a better model. I also changed the Architect mode to evaluate whether to redirect the user to Code or Code (simple) after generating a plan.

Roo Code just has a lot more config exposed to the user, which I really appreciate. When I was using Cline I would run into minor irritating quirks that I wished I could change but couldn't, vs. Roo where the odds are pretty good there are some knobs you can turn to modify that part of your workflow.


As much as you theoretically want to spend, since it's pay-per-use.

I spend $200/month by using Sonnet 4. Could be higher if you want to use Opus.


You can use Claude Code as a provider if you want it subscription based

https://docs.cline.bot/provider-config/claude-code


I want an LLM that I control to sit between me and any social media feed. Let it filter the garbage and engagement-bait and boil it down to something that actually adds value to my life
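
A rough sketch of what I mean (everything here is a placeholder: the feed fetcher, the scoring call, and the threshold are hypothetical, not any real product's API):

    # Hypothetical sketch of an LLM sitting between me and a feed. fetch_feed() and
    # llm_score() are stand-ins for a real feed source and a locally run model; the
    # heuristic below only exists to make the sketch runnable.
    from dataclasses import dataclass

    @dataclass
    class Post:
        author: str
        text: str

    def fetch_feed() -> list[Post]:
        # Stand-in for an RSS pull, a REST call, or an accessibility-API scrape.
        return [Post("alice", "Long, thoughtful write-up on soil chemistry"),
                Post("bot4711", "You WON'T BELIEVE what happened next!!!")]

    def llm_score(post: Post) -> float:
        # Stand-in for asking a local model "does this add value to my life?" (0..1).
        return 0.1 if "WON'T BELIEVE" in post.text else 0.9

    def my_feed(threshold: float = 0.5) -> list[Post]:
        return [p for p in fetch_feed() if llm_score(p) >= threshold]

    for post in my_feed():
        print(post.author, "-", post.text)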


If my roof leaks and my drip bucket is full, do I need a shinier bucket or a better roof?


Achievable goal! Bluesky lets you create your own moderation tools using whatever technologies you like. https://deepwiki.com/bluesky-social/bsky-docs/2.5-moderation...


I wrote a comment a bit ago on what this adversarial interoperability [1] could look like with local LLMs and accessibility APIs [2]. Big AT Proto and Bluesky fan, as it cannot be captured ("protocols, not platforms"), but it isn't enough to have this capability only with Bluesky; it must be able to support any social network or graph. It should be a robust content processor under the user's control for any firehose they wish to consume, whether that is a REST API endpoint, an RSS feed, a plain 'ol website that the agent will login as the user as, or a closed app that the agent will use accessibility APIs to operate the app as the user.

[1] https://www.eff.org/deeplinks/2019/10/adversarial-interopera...

[2] https://news.ycombinator.com/item?id=42879342


That's not what the social media site wants. It has its own algorithms and AI, ensuring that you get exactly what they need you to see.

You want an AI to sift through endless piles of crap, just to find the few specks of gold. Why not stop mixing your gold into the dung heap before consuming it?


I want to communicate with actual humans and enjoy meaningful conversation; 5-10 actual humans is enough, really. I don't have a public-facing brand I need to maintain like the author does. That's what social media is for, and thus it's simply not for me.


I smell a business idea here.

