Wow. This is the first take on AI that I find really shocking, and it is totally believable.
But worth noting: the difference between a system you interact with and a visualization of such a system, injected into your eyeball, is the lack of secondary effects (or first-order effects?). I suppose those can ALSO be faked and maintained, but at some point the illusion is more work than the system that would normally create it.
It seems likely that maintaining the fake would be much more work than writing the real code for most modern uses. But when we are spinning up bespoke code that will be run once, for a single user? The fake output might be more efficient.
I don't think this means this is the future. But it certainly means it could become a preferred approach for certain use cases, when there is no need for reuse.
I love finding ideas like this, which feel like true "AI-native" thinking.
My question is always: what are you building? You need to tell the AI what to build. What if it builds it in a way that isn't what you want, or makes the button blue instead of red, or makes any number of other wrong decisions?
AI can write the code, but it can't tell you what code you want it to write. In other words, how long are your specs? Either the LLM decides "whatever" or you maintain massive amounts of documentation to coordinate.
We still need to decide what to build, and some of the how. That is not automate-able, yet everyone seems to gloss over that bit.
Yes, the LLM writes the specs. In fact the LLM writes everything, and then the humans only flag anything they want changed; other than that it's completely automated.
Imagine you were working with a very talented software shop. You might tell them your preferences sometimes and some things you want changed, but otherwise they mostly just build the right things the right way. And unlike a real software shop, the LLM system can implement changes incredibly fast.
It is a valid concern. We are firmly in the Goldilocks phase of LLMs, like the first couple of years of Google, when it was truly amazing. Then SEO made Google defensive, then websites catered to Google and not users, then Google catered to Google and not websites, and we ended up with 30-page recipe sites.
LLMs are obviously different and will have different challenges, but their advantage is how deep into a user's request they go. Advertising comes down to a binary choice - use product X or not. If I want implementation instructions for a certain product on specific hardware an ad will be obviously out of place and irrelevant.
So "shopping comparison" asks might get broken, but those have been broken for a while.
There wouldn't be an "ad" anywhere, though. You'll just ask the LLM for alternative implementations in plan mode, and it will be selling you one of them during the conversation rather than giving you an unbiased comparison. If you become suspicious it will make sure the pros just slightly outweigh the cons, or mention how well the thing works with something else in your stack, or whatever else a skilled salesperson would do to guide your choice without you realizing.
It's already doing this by telling everyone to use React and Tailwind, it's just that nobody's getting paid for it to do that.
> Then SEO made Google defensive, then websites catered to Google and not users,
Google was created in response to simple proto-SEO techniques (e.g. keyword stuffing) that already ruined Alta Vista.
Google has been combating adversarial information retrieval since inception.
Google's background with that is one of the reasons to expect they will stay on top of the AI race. The recipe is: lots of good/novel data x careful weighting of trust x algorithm.
I think the opposite may be true.
If dev tools are broken and it annoys someone, they can more easily build a better architecture, find optimizations and release something that is in all ways better. People have been annoyed with pip forever, but it was the team behind uv that took on pip's flaws as a primary concern and made a better product.
I think having a pain point and a good concept (plus some eng chops) will result in many more dev tools. They may cause different problems, but in general I think more action is better than less.
this is exactly what I mean though. Instead of the community building a better tool that we collectively contribute to and work with, genAI is going to silo all the good stuff with individual developers and teams instead. Because it's so cheap to create these tools, no one is going to bother publishing new ones for everyone, so we will essentially be stuck with what we have forever now.
79% of ALL child sex trafficking. 4 out of 5 child sex slaves exist thanks to Facebook's policies.
But sure, go on and talk about "leeway" and "limited capabilities" for a company worth nearly a trillion dollars. Do you honestly believe this is acceptable? What are your vested interests here?
Since you're emphasizing the ALL, I am obligated to nitpick that it is not all. The source article says that, but it's wrong; the underlying link clarifies that it's 79% of sex trafficking which occurs on social media. As has been discussed downthread, a social media platform with large marketshare is always going to have a large percentage of every bad thing that can happen on social media.
Do you have a citation for that? You may be right for all I know. I don't know much about it. But that seems unlikely to me, and if it's true, I'd like a reference I can show others when I'm trying to get them to finally close their account.
> [the report] found that 65% of child sex trafficking victims recruited on social media were recruited from Facebook
Even in 2020, I'm very skeptical that so many children were on Facebook that it could account for 2/3 of recruitment. My own kids say that they and their friends are all but allergic to Facebook. It's the uncool hangout for old people, not where teens want to be.
I may be wrong, and I'm certainly not going to tell someone that they're wrong for citing a government study. Still, I doubt it.
The number is wrong / the citation is misleading. It’s closer to 20-30% according to that study, the 79% is referring specifically to cases involving social media, of which Meta platforms are obviously going to make up a large percentage.
There’s also a reporting bias here I’m sure - if Meta is better at reporting these cases then they will become a larger percentage, etc.
You don't really need a majority of potential victims to go to location X for victims from location X to make up a majority of victims; that just means that location X is a low-risk, high-reward place for criminals to lurk looking for victims.
Thanks for looking into it and pulling out that quote. I notice some moving goalposts: the parent article claims 79% of _all_ minor sexual trafficking (emphasis mine), but the govt report found
> 65% of child sex trafficking victims recruited _on social media_ were recruited from Facebook, with 14% being recruited on Instagram
(Emphasis mine.) I think the parent article is repeatedly lying about the facts, which is super annoying. I'm not at all surprised that Facebook and Instagram have the lion's share of social-media victims, because they also have the lion's share of social media users.
> 4 out of 5 child sex slaves exist thanks to Facebook's policies.
Even if your 79% number is correct, this does not follow. It's like if someone had said, 30 years ago, that because 95% of advertisements ran in the classified section, 9 out of 10 retail sales happened thanks to the classifieds.
(I’m not trying to excuse Facebook’s behavior. But maybe criticisms of Facebook would be more effective if they stayed on track.)
I’m not nitpicking a weird edge case. I’m nitpicking a completely unsound inference. Even if Facebook indeed accounts for 79% of total instances of children being trafficked, it does not follow at all that removing Facebook from the picture would have reduced the number by anywhere near 79%.
Nobody in Salem wanted to be seen to stand up for witches.
I have never had a Facebook account because I never liked what they do, but this 'evidence' against them seems like they are relying on the seriousness of the allegations more than the accuracy.
You are saying that from our perspective. I don't think the argument that witches are not real would have gained you much ground back then.
We don't have the years of analysis of what actually happened for things happening right now.
While a lot of people feel a lot of certainty about all manner of social media harms, the scientific consensus is much less clear. Sure, you can pull up studies showing something that looks pretty bad, but you can also find studies saying that climate change is not occurring. The best we have to go on is scientific consensus, and the consensus is not there yet. How do you tell whether Jonathan Haidt is another Andrew Wakefield?
I'm not making any claims of certainty. I have not published any books making claims of harm. I have not gone on a world tour of interviews trying to build public opinion instead of building consensus that the information is true.
That's how I know.
I also don't go around talking about race based differences in IQ, but that's just Haidt.
I think Yegge needs to keep up with the tech a bit more. Cursor has gotten quite powerful: its plan mode now seems about on par with Claude Code, producing Mermaid charts and detailed multi-phase plans that pretty much just work. I also noticed their debug mode will now come up with several hypotheses, create some sort of debugging harness and logging system, test each hypothesis, tear down the debugging logic, and present a solution. I have no idea when that happened, but it helped solve a tricky frontend race condition for me a day or two ago.
I still like Claude, but man does it suck down tokens.
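The debug-mode loop described above (hypothesize, instrument, test each guess, tear down, report) can be sketched roughly like this. To be clear, all names here are hypothetical illustrations, not Cursor's actual implementation:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Hypothesis:
    description: str
    check: Callable[[], bool]  # probe that tries to reproduce the bug

def debug_loop(hypotheses: List[Hypothesis]) -> List[str]:
    """Test each hypothesis in turn and return the descriptions of the
    ones whose probe confirms them. A real agent would set up temporary
    logging before each check and tear it down afterwards."""
    confirmed = []
    for h in hypotheses:
        # set up debugging harness / logging here
        result = h.check()
        # tear down the temporary debugging logic here
        if result:
            confirmed.append(h.description)
    return confirmed

# Example: two guesses about a frontend race condition
guesses = [
    Hypothesis("state read before async fetch resolves", lambda: True),
    Hypothesis("stale closure captures old props", lambda: False),
]
print(debug_loop(guesses))  # prints ['state read before async fetch resolves']
```

The interesting design choice is that each probe is isolated: instrumentation is added only for the hypothesis under test and removed before moving on, so the final fix ships without leftover debug scaffolding.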
As a father of two small children during COVID, I can't begin to thank fnnch enough for his Honey Bear Hunt project: https://upmag.com/honey-bear-fnnch/
Hundreds (if not thousands) of honey bears were posted in windows around SF. It was one of those things that happens in SF every now and then, a mix of whimsy and hustle and unexpected joy. We couldn't take our kids to school, we couldn't take them to the park. Instead, we would drive them around town and have them point out all the honey bears they saw. "Honey bear! Another one!"
Variants of this were in NL as well, but it was just stuffed animals (I believe in support of health care workers); people went out for walks to go and spot them.
I wish stuff like that would happen again, it was an interesting time where people actually stayed home and explored their environments, their home and themselves a lot. Before that (or at the same time?) it was AR games like Pokemon Go. I'm out of touch with what's happening now, it just feels like people have reverted or gone into a new normal. Or maybe that's just me.