Noticed this when my CC session got booted this morning. Trying to log back in takes me to the web auth page. Clicking Authorize doesn't do anything until much later, at which point it says you're good to go, but CC reports a 500 error.
My wife draws comics and exhibits at comic con, and her website is basically being DDoSed by AI scrapers to the point of NEEDING Cloudflare to keep the site online.
Then people get to use the models that stole her work and crashed her site to sell derivative works right next to her booth?
That sucks but it’s pretty much always been like this as technology advances. You can either fight the tide or go with it like Warhol did. He pivoted to doing more photographs later in his career because it was easier to make more art.
Art != business. They are separate things and only heartbreak will ensue when you operate on the assumption that artistic value is business value.
My point isn’t about the advancement of technology, it is about theft and destruction disguised as technology.
If what happened to my wife happened to Warhol the equivalent would be someone tearing his paintings off the gallery wall and selling xeroxed versions with sharpie written over them.
However, your ultimate point is that art can't be business, which in turn means artists shouldn't necessarily be compensated, and that defies reality and morality.
Because expertise, love, and care cut across all human endeavor, and noticing those things across domains can be a life affirming kind of shared experience.
Favorited. This will be a timeless comment for me, and it will remind me to keep some perspective and appreciate things I might not otherwise be familiar with, and thereby care about.
Not everything has to be. Sometimes, an artist's style or a particular track just hits a particular vibe one may be after or need in a particular moment.
I'm not a fan of this music either but I could imagine hearing it while I'm studying or coding.
Don't trash something just because it's not your vibe. Not everything has to be Mozart.
I mean, it's not like I trashed it or compared it to Mozart—I even made sure to include "interesting, stimulating, or tonally remarkable" in an attempt to preempt that latter pushback.
But even if I did, why can't I? It's fine to call some music shit. Just like you can call my opinion shit.
Policing dissenting opinions and saying everything is equally worthy of praise are two sides of the same coin sliding in the vending machine that sells us the sad state of affairs we live in today.
You absolutely trashed it in your first sneering, shitty swipe about “culture”. You don’t get to make comments like that and then whine about “policing” like a four-year-old caught in the cookie jar.
Interesting this is on the ramp.com domain? I'm surprised in this tech market they can pay devs to hack on Rollercoaster Tycoon. Maybe there's some crossover I'm missing but seems like a sweet gig honestly.
Why would they be losing money? It’s what we use for tracking expenses and getting comped for travel, meals, software licenses etc - works great in my experience. I can click a few buttons and get a new business expense card spun up in less than a minute, use it to make a purchase, get approval and have the funds transferred. Boom easy.
Do you not think they’re charging enough or something?
This is brilliant SEO work; I doubt they lose money on it. At 40 hours plus some extra for the landing page it might be expensive link bait, but definitely worth it. Kudos!
Even if not for SEO, it's building quite a good reputation for the company, and they have a lot of open positions.
I’m a big fan of Transport Tycoon, used to play it for hours as a kid, and with Open Transport Tycoon it might also have been a good choice, but maybe not B2C?
I don't know how someone can look at what you build and conclude LLMs are still Google search. It boggles the mind how much hatred people have for AI, to the point of self-deception. The evidence is placed right in front of you and on your lap with that link and people still deny it.
>I think the true mind boggle is you don't seem to realize just how much content the AI companies have stolen.
What makes you think I don't realize it? Looks like your comment was generated by an LLM, because that was a hallucination that is not true at all.
AI companies have stolen a lot of content for training. I AGREE with this. So have you. That content lives rent free in your head as your memory. It's the same concept.
Legally speaking, though, AI companies are a bit more in the red because the law, from a practical standpoint, doesn't exactly make illegal anything stored in your brain... but from a technical standpoint, information in your brain, on a hard drive, or on a billboard is still information instantiated/copied in the physical world.
The text you write and output is simply a reconfiguration of that information in your head. Look at what you're typing. The English language. It's not copyrighted, but every single word you're typing was not invented by you, and the grammar rules and conventions were ripped off from existing standards.
I think you are pointing out the exact conflation here. The commenter probably didn't steal a bunch of code, because it is possible to reason from first principles and rules and still end up being able to code as a human.
It did not take me reading the entirety of available public code to be kind of okay at programming, I created my way to being kind of okay at programming. I was given some rules and worked with those, I did not mnemonic my way into logic.
That none of us scraped and consumed the entire internet is hopefully pretty obvious, but we still have capabilities in excess of AI.
What’s being missed here is how fundamentally alien the starting point is.
A human does not begin at zero. A human is born with an enormous amount of structure already in place: a visual system that segments the world into objects, depth, edges, motion, and continuity; a spatial model that understands inside vs outside, near vs far, occlusion, orientation, and scale; a temporal model that assumes persistence through time; and a causal model that treats actions as producing effects. None of this has to be learned explicitly. A baby does not study geometry to understand space, or logic to understand cause and effect. The brain arrives preloaded.
Before you ever read a line of code, you already understand things like hierarchy, containment, repetition, symmetry, sequence, and goal-directed behavior. You know that objects don’t teleport, that actions cost effort, that symbols can stand in for things, and that rules can be applied consistently. These are not achievements. They are defaults.
An LLM starts with none of this.
It does not know what space is. It has no concept of depth, proximity, orientation, or object permanence. It does not know that a button is “on” a screen, that a window contains elements, or that left and right are meaningful distinctions. It does not know what vision is, what an object is, or that the world even has structure. At initialization, it does not even know that logic exists as a category.
And yet, we can watch it learn these things.
We know LLMs acquire spatial reasoning because they can construct GUIs with consistent layout, reason about coordinate systems, generate diagrams that preserve relative positioning, and describe scenes with correct spatial relationships. We know they acquire a functional notion of vision because they can reason about images they generate, anticipate occlusion, preserve perspective, and align visual elements coherently. None of that was built in. It was inferred.
But that inference did not come from code alone.
Code does not contain space. Code does not contain vision. Code does not contain the statistical regularities of the physical world, human perception, or how people describe what they see. Those live in diagrams, illustrations, UI mockups, photos, captions, instructional text, comics, product screenshots, academic papers, and casual descriptions scattered across the entire internet.
Humans don’t need to learn this because evolution already solved it for us. Our visual cortex is not trained from scratch; it is wired. Our spatial intuitions are not inferred; they are assumed. When we read code, we already understand that indentation implies hierarchy, that nesting implies containment, and that execution flows in time. An LLM has to reverse-engineer all of that.
That is why training on “just code” is insufficient. Code presupposes a world. It presupposes agents, actions, memory, time, structure, and intent. To understand code, a system must already understand the kinds of things code is about. Humans get that for free. LLMs don’t.
So the large, messy, heterogeneous corpus is not indulgence. It is compensation. It is how a system with no sensory grounding, no spatial intuition, and no causal priors reconstructs the scaffolding that humans are born with.
Once that scaffolding exists, the story changes.
Once the priors are in place, learning becomes local and efficient. Inside a small context window, an LLM can learn a new mini-language, adopt a novel set of rules, infer an unfamiliar API, or generalize from a few examples it has never seen before. No retraining. No new data ingestion. The learning happens in context.
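To make that concrete, here is a hypothetical sketch (the toy language, its rules, the example values, and the function name are all invented for illustration, and the snippet deliberately doesn't call any particular model API): define a mini-language that exists nowhere in any training set, give a few worked examples inside the prompt, and ask for one more.

    # A hypothetical illustration of in-context learning: a made-up mini-language
    # the model has never seen, defined entirely inside the prompt. No weights change;
    # whatever generalization happens, happens at inference time from these few examples.
    # (The toy language "Zib" and everything about it is invented for this sketch.)
    few_shot_prompt = """
    In the toy language "Zib", a program is a list of moves applied to a number, left to right:
      dup -> multiply the current value by 2
      inc -> add 1
      neg -> flip the sign

    Examples:
      start 3, program [dup, inc]      -> 7
      start 5, program [neg, dup]      -> -10
      start 2, program [inc, inc, neg] -> -4

    Now evaluate: start 4, program [dup, neg, inc] ->
    """

    def expected_answer() -> int:
        # What a correct in-context generalization should produce: ((4 * 2) * -1) + 1 = -7
        return ((4 * 2) * -1) + 1

    # You would paste `few_shot_prompt` into whatever chat model you like; the snippet
    # itself only spells out the prompt and the answer a correct generalization implies.

Nothing about "Zib" is in the weights; if the model answers -7, that generalization happened entirely in context, which is exactly the kind of local, efficient learning described above.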
This mirrors human learning exactly.
When you learn a new framework or pick up a new problem domain, you do not replay your entire lifetime of exposure. You learn from a short spec, a handful of examples, or a brief conversation. That only works because your priors already exist. The learning is cheap because the structure is already there.
The same is true for LLMs. The massive corpus is not what enables in-context learning; it is what made in-context learning possible in the first place.
The difference, then, is not that humans reason while LLMs copy. The difference is that humans start with a world model already installed, while LLMs have to build one from scratch. When you lack the priors, scale is not cheating. It is the price of entry.
But this is beside the point. We know for a fact that outputs from humans and LLMs are novel generalizations and not copies of existing data. It's easily proven by asking either a human or an LLM to write a program that doesn't exist in the universe, and both the human and the LLM can readily do this. So in the end, both the human and the LLM have copied data in their minds and can generalize new data OFF of that copied data. It's just that the LLM has more copied data while the human has less copied data, but both have copied data.
In fact, the priors that a human is born with can even be described as copied data, but encoded in our genes such that we are born with brains that inherit a learning bias optimized for our given reality.
That is what is missing. You are looking at the speed of learning during training. The apt comparison there would be reconstructing a human brain neuron by neuron. If you want to compare how fast a human learns a new programming language with an LLM, the correct comparison is how fast an LLM learns a new programming language AFTER it has been trained, solely within inference in the context window.
Izotope Ozone uses AI to mix and master - for some reason that’s okay, but you can’t actually generate the sounds with AI? Or what if I generate the notes with AI and then use my own synth presets? Is that allowed?
It'd be like a bookstore banning "AI books." Like, yea, you can probably use AI in making your book and they'll sell it, but they're mainly targeting users who just happen to be publishing a book every single week, or day, or hour.
At a certain resolution it's not actually the artist doing the work.
I don't see any confusion. Ozone is a mixing and mastering tool with buttons to find the best settings using ML, from the company Izotope. People who use it use something that the company calls AI. Where does the band come into this?