
Interesting exchange on the use of AI coding tools:

    curious, how much of the code did you write by hand?

    Karpathy: Good question, it's basically entirely hand-written (with tab autocomplete). I tried to use claude/codex agents a few times but they just didn't work well enough at all and were net unhelpful; possibly the repo is too far off the data distribution.
https://x.com/karpathy/status/1977758204139331904




> the repo is too far off the data distribution

ah, this explains why these models have been useless to me this whole time. everything i do is just too far off the data distribution!


Everything is, unless your app is a React todo list or leetcode questions.

people say this like it's a criticism, but damn is it ever nice to start writing a simple crud form and just have copilot autocomplete the whole thing for me.

Yep. I find the hype around AI to be wildly overblown, but that doesn’t mean that what it can do right now isn’t interesting & useful.

If you told me a decade ago that I could have a fuzzy search engine on my desktop that I could use to vaguely describe some program that I needed & it would go out into the universe of publicly available source code & return something that looks as close to the thing I’ve asked for as it can find then that would have been mindblowing. Suddenly I have (slightly lossy) access to all the code ever written, if I can describe it.

Same for every other field of human endeavour! Who cares if AI can “think“ or “do new things”? What it can do is amazing & sometimes extremely powerful. (Sometimes not, but that’s the joy of new technology!)


Why do you think the things you describe being excited about don't warrant the current level of AI hype? I agree with your assessment, and sometimes I think there is too much cynicism and not enough excitement.

the current level of AI hype amongst a lot of people, but especially investors and bosses, is that you can already give an AI a simple prompt and get it to spit out a fully functional, user-ready application for you. and we're so incredibly far off that.

the things that AI is able to do are incredible, but hype levels are just totally detached from reality.


> is that you can already give an AI a simple prompt and get it to spit out a fully functional, user-ready application for you.

But it can already do that. Isn't that the whole "one-shotting" thing?

The problem is, of course, that it won't be optimized, maintainable or have anyone responsible you can point to if something with it goes wrong. It almost certainly (unless you carefully prompted it to) won't have a test suite, which means any changes (even fixes) to it are risky.

So it's basically a working mockup generator.

I am so, so tired of "semi-technical" youtubers showing off new models with one-shots. The vast majority of actual devs who use this stuff need it to work over long-term context windows and over multiple iterations.


The thing is, we've already had "working mockup generators" — a.k.a. prototyping tools — for decades now.

If you come at the problem from the direction of "I draw a user interface; you guess what it's supposed to do and wire it up for me", then all you need to solve that problem (to a first-order approximation) is some plain-old 1970s "AI" heuristics.

The buzz around current AI coding prompting seems to be solely generated by the fact that while prototyping tools require you to at least have some training as a designer (i.e. understanding the problem you're solving on the level of inputs and outputs), these tools allow people with no experience in programming or design to get results. (Mainly by doing for UIs what genAI image/video tools do for art: interpolating the average of many ingested examples of how a designer would respond to a client request for X, with no regard for the designer's personal style†.)

† Unless prompted to have such regard... but if you know enough to tell the AI how to design everything, then you may as well just design everything. Just as, if you know art well enough to prompt an AI into developing a unique art style, then you likely know art well enough to just make that same art yourself with less effort than it takes to prompt and re-prompt and patch-erase-infill-prompt the AI into drawing what you want.


from what i can tell, the one-shot thing only works on youtube.

you might produce something that looks usable at first, but the actual application functionality will be significantly broken in most ways. it maybe works enough to do a demo for your video, but it won't work enough to actually distribute to end-users. and of course, as you say, it's not testable or maintainable in any way, so fixing what's broken is a bigger project than just writing it properly in the first place.


I think the cynicism is only in software dev circles, and it’s probably a response to the crazy hype.

Remember the hype isn’t just “wow it’s so cool and amazing and useful”, it’s also “I can’t wait to fire all my dumb meat-based employees”


Because to justify the current hype and spending, these companies have to have a product that will generate trillions of dollars and create mass unemployment. Which they don't have.

The current AI hype is causing a lot of leaders to put their organizations on the path to destruction.

Oh sure, there’s also way too much cynicism in some quarters. But that’s all part of the fun.

They go beyond merely "return something that looks as close to the thing I’ve asked for as it can find". E.g., say we asked for "A todo app that has 4 buttons on the right that each play a different animal sound effect for no good reason and also you can spin a wheel and pick a random task to do". That isn't something that already exists, so in order to build that, the LLM has to break that down, look for appropriate libraries and source, decide on a framework to use, and then glue those pieces together cohesively. That didn't come from a singular repo off GitHub. The machine had to write new code in order to fulfill my request. Yeah, some of it existed in the training data somewhere, but not arranged exactly like that. The LLM had to do something in order to glue those together in that way.

Some people can't see past how the trick is done (take training data and do a bunch of math/statistics on it), but the fact that LLMs are able to build the thing is in-and-of-itself interesting and useful (and fun!).


I’m aware. But the first part is “find me something in the vector space that looks something like the thing I’m asking for”. Then the rest is vibes. Sometimes the vibes are good, sometimes they are ... decidedly not.

If the results are useful, then that’s what matters. Although I do suspect that some AI users are spending more time pulling the AI one-armed bandit handle than it would take them to just solve their problem the old fashioned way a lot of the time - but if pulling the one-armed bandit gets them a solution to their problem that they wouldn’t work up the motivation to solve themselves then that counts too, I guess.


Back in the 90s you could drag and drop a VB6 applet into Microsoft Word. Somehow we’ve regressed...

Edit: for the young, WYSIWYG (what you see is what you get) editors were common for all sorts of languages, from C++ to Delphi to HTML. You could draw up anything you wanted. Many had native bindings to data sources of all kinds. My favourite was actually HyperCard, because I learned it in grade school.


WYSIWYG kind of fell apart once we had to stop assuming everyone had an 800x600 or 1024x768 screen, because what you saw was no longer what others got.

Not entirely: in these RAD tools you also had flexible layout choices, and obviously you could test for various window sizes (although the maximum was the one supported by your graphics card). Too bad many chose the lazy way and just enforced a fixed window size of 800x600.

Most of the internet still assumes you're using a 96 DPI monitor. Though the rise of mobile phones has changed that, it seems like the vast majority of the content consumed on mobile lends itself to being scaled to any DPI, e.g. movies, pictures, YouTube, etc.

Not a big issue with Qt layouts (you still have to test the result, though)

I can imagine adding breakpoints to a wysiwyg editor being not terribly difficult. They decouple presentation from logic pretty well.

I still miss my days of programming Visual Basic 6. Nothing since then ever compares.

4GL or RAD is still here, but now it’s called low-code or no-code.

I agree. I am "writing" simple CRUD apps for my own convenience and entertainment. I can use unfamiliar frameworks and languages for extra fun and education.

Good times!


Before Copilot, what I'd do was identify the existing feature that most resembled the one I was about to build, and then copy the files over before I started tweaking.

Boilerplate generation was never, ever the bottleneck.


I've been using AI like this as well. The code-complete / 'randomly pop up a block of code while typing' feature was cool for a bit but soon became annoying. I just use it to generate a block of boilerplate code or to ask it questions; I do 90% of the 'typing the code' bit myself, but that's not where most programmers' time is spent.

i'm not sure when you tried it, but if you've had copilot disabled it might be worth giving it another go. in my totally anecdotal experience, over the last few months it's gotten significantly better at shutting up when it can't provide anything useful.

It is, because the frontend ecosystem is not just React. There are plenty of projects where LLMs still give weird suggestions just because the app is not written in React.

I've probably commented the same thing like 20 times, but my rule of thumb and use with AI / "vibe coding" is two-fold:

* Scaffolding first and foremost - It's usually fine for this, I typically ask "give me the industry standard project structure for x language as designed by a Staff level engineer" blah blah just give me a sane project structure to follow and maintain so I don't have to wonder after switching around to yet another programming language (I'm a geek, sue me).

* Code that makes sense at first glance and is easy to maintain / manage, because if you blindly take code you don't understand, you'll regret it the moment you need to be called in for a production outage and you don't know your own codebase.


"Anything that can be autogenerated by a computer shouldn't have to be, it can be automated"

People say inbreeding like it’s criticism too.

HN's cynicism towards AI coding (and everything else ever) is exhausting. Karpathy would probably cringe reading this.

First, it's not cynicism but a more realistic approach than just blindly following SV marketing, and second, it's not "everything else", just GenAI, NFTs/ICOs/Web3, the "Metaverse" (or Zuck's interpretation of it), self-driving cars being ready today, and maybe a bit of Theranos.

I’ve recently written a message queue <> database connector in Go using Claude Code, checkpointing, recovery, all that stuff built in.

I’d say it made me around 2x as productive.

I don’t think the cynicism of HN is justified, but I think what people forget is that it takes several months of really investing a lot of time to learn how to use AI well. If I see some of the prompts people give and expect to work, yeah, no wonder that only works for React-like apps.


I asked AI to create a basic autoencoder based deep learning architecture for classifying time series data. This AI is a boon.

The thing is cryptocurrency and metaverse stuff was obvious bullshit from day one while even GPT-3 was clearly a marvel from day one. It's a false pattern match.

okay but he literally does have a bridge that non-deterministically might take you to the wrong place to sell you

The original context of this sub-thread was Karpathy saying how AI coding tools were pretty useless for him when working on this particular project.

Indeed. And only Karpathy is entitled to say that AI tools produce wrong code for him. And he's only entitled to say it for this project only.

If anyone else says this, "the skepticism is exhausting", and their experience is completely irrelevant.


Go look at the comments on HN whenever someone posts about their AI coding workflow. It will be littered with negative comments that either imply or outright say that the poster is either shilling, ignorant or working only on toy examples.

The grievance attitude seems to exist in both directions and is actually what is exhausting.


> It will be littered with negative comments that either imply or outright say that the poster is either shilling, ignorant or working only on toy examples.

And they would often be right. Coupled with the fact that most of the glowing "omg I only code with AI" posts don't even try to show what code or products they are working on.

And yes, the absolute vast majority of people who are skeptical are skeptical precisely because they use these tools every day themselves.


Just so we are clear: you are upset by people dismissing your experience-gained skepticism, but have no problem dismissing every positive comment as shilling, ignorant, or simple?

You don’t see any dissonance in that? It’s only the positive people that are exhausting?


I myself post positive comments about AI from time to time.

I never pretend that AI is the be-all end-all of programming, don't claim that it can do all the magical things, or that it's capable of running for hours on end just creating software with no proof, like most positive posts do.

See the difference?

I'm all for positive posts. I'm against childish belief in magic: https://dmitriid.com/everything-around-llms-is-still-magical...


Show HNs about AI startups are littered with positive comments though. It's rare to see people calling out the submitter.

Posts about yet another AI workflow, typically presented with hyperbole, are exhausting. The backfires are rather satisfying, entertaining at the least.

I mean Karpathy himself wrote that he could not use the AI tools for the project, so he had to handwrite most of it. I wonder why.

One of my hobby projects is an esoteric game engine oriented towards expressing simulation mechanics. I simply do not use agentic tools when editing the core code for this project (mostly Rust and WGSL). It always stumbles and leaves code that I need to fix up manually, and even then I feel unsure about it. I've tried a few different agents, including the current top of the line. The power is just not there yet.

At the same time, these tools have helped me reduce the development time on this project by orders of magnitude. There are two prominent examples.

--- Example 1:

The first relates to internal tooling. I was debugging a gnarly problem in an interpreter. At some point I had written code to do a step-by-step dump of the entire machine state to file (in json) and I was looking through it to figure out what was going wrong.

In a flash of insight, I asked my AI service (I'll leave names out since I'm not trying to promote one over another) to build a react UI for this information. Over the course of a single day, I (definitely not a frontend dev by history) worked with it to build out a beautiful, functional, easy to use interface for browsing step-data for my VM, with all sorts of creature comforts (like if you hover over a memory cell, and the memory cell's value happens to be a valid address to another memory cell, the target memory cell gets automatically highlighted).

This single tool has reduced my debugging time from hours or days to minutes. I never would have built the tool without AI support, because I'm simply not experienced enough in frontend stuff to build a functional UI quickly.. and this thing built an advanced UI for me based on a conversation. I was truly impressed.

--- Example 2:

As part of verifying correctness for my project, I wanted to generate a set of tests that validated the runtime behaviour. The task here consists of writing a large set of reference programs, and verifying that their behaviour was identical between a reference implementation and the real implementation.

Half decent coverage meant at least a hundred or so tests were required.

Here I was able to use agentic AI to reduce the testcase construction time from a month to about a week. I asked the AI to come up with a coverage plan and write the test case ideas to a markdown file in an organized, categorized way. Then I went through each category in the test case markdown and had the AI generate the test cases and integrate them into the code.
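
The core pattern here is plain differential testing. A rough sketch of such a harness in Python (my actual project is Rust/WGSL, and `reference_vm` / `real_vm` are hypothetical stand-ins, each assumed to expose a run(program) -> final-state dict):

    import json

    def check_program(program: str, reference_vm, real_vm) -> str | None:
        """Run one reference program on both implementations and diff the final states."""
        expected = reference_vm.run(program)
        actual = real_vm.run(program)
        if expected != actual:
            return (f"divergence on:\n{program}\n"
                    f"expected: {json.dumps(expected, indent=2, sort_keys=True)}\n"
                    f"actual:   {json.dumps(actual, indent=2, sort_keys=True)}")
        return None

    def run_suite(programs: list[str], reference_vm, real_vm) -> None:
        """Report every divergence rather than stopping at the first failure."""
        failures = [msg for p in programs if (msg := check_program(p, reference_vm, real_vm))]
        if failures:
            raise SystemExit("\n\n".join(failures))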

---

I was and remain a strong skeptic of the hype around this tech. It's not the singularity, it's not "thinking". It's all pattern matching and pattern extension, but in ways so sophisticated that it feels like magic sometimes.

But while the skeptical perspective is something I value, I can't deny that there is core utility in this tech that has a massive potential to contribute to efficiency of software development.

This is a tool that we as industry are still figuring out the shape of. In that landscape you have all sorts of people trying to evangelize these tools along their particular biases and perspectives. Some of them clearly read more into the tech than is there. Others seem to be allergically reacting to the hype and going in the other direction.

I can see that there is both noise, and fundamental value. It's worth it to try to figure out how to filter the noise out but still develop a decent sense of what the shape of that fundamental value is. It's a de-facto truth that these tools are in the future of every mainstream developer.


That's exactly why I said he would cringe at it. Seeing someone look at him saying "it's not able to make a good GPT clone" and going "yeah it's useless for anything besides React todo list demos" would definitely evoke some kind of reaction. He understands AI coding agents are neither geniuses nor worthless CRUD monkeys.

Hm, interesting point. So if he and other GenAI hotshots understand that, why do they keep selling the tools as nothing less than geniuses? Often with a bit of fear mongering about all the jobs that will be lost soon, etc.

or a typical CRUD app architecture, or a common design pattern, or unit/integration test scaffolding, or standard CI/CD pipeline definitions, or one-off utility scripts, etc...

Like 80% of writing code is just being a glorified autocomplete, and AI is exceptional at automating those aspects. Yes, there is a lot more to being a developer than writing code, but, in those instances, AI really does make a difference in the amount of time one is able to spend focusing on domain-specific deliverables.


And even for "out of distribution" code you can still ask questions: how to do the same thing but more optimized, could a library help with this, why is that piece of code giving this unexpected output, etc.

It has gotten to the point that I don't modify or write SQL. Instead I throw some schema and related queries in and use natural language to rubber duck the change, by which point the LLM can already get it right.

I don't know. I successfully use it for small changes on VHDL FPGA designs these days.

I've had some success with a multi-threaded software defined radio (SDR) app in Rust that does signal processing. It's been useful for trying something out that's beyond my experience. Which isn't to say it's been easy. It's been a learning experience to figure out how to work around Claude's limitations.

Generative AI for coding isn't your new junior programmer, it's the next generation of app framework.

I wish such sentiments prevailed in upper management, as it is true. Much like owning a car that can drive itself: you still need to pass a driving test to be allowed to use it.

Really such an annoying genre of comment. Yes I’m sure your groundbreaking bespoke code cannot be written by LLMs, however for the rest of us that build and maintain 99% of the software people actually use, they are quite useful.

Simple CRUD, as is common in many business applications or backend portals, is a good fit for AI assistance imho. As is fixing some designs here and there, where you can't be bothered to keep track of the latest JS/CSS framework.

I wonder if the new GenAI architecture being discussed recently, namely DDN or Discrete Distribution Networks, can outperform the conventional architectures of GANs and VAEs. As the name suggests, it can provide a multitude of distributions for training and inference purposes [1].

[1] Show HN: I invented a new generative model and got accepted to ICLR (90 comments):

https://news.ycombinator.com/item?id=45536694


I work on a typed Lua language implemented in Lua, and sometimes use LLMs to help fix internal analyzer stuff, which works maybe 30% of the time for complex issues, and sometimes not at all, but helps me find a solution in the end.

However, when I ask an LLM to generate my typed Lua code, with examples and all of how the syntax is supposed to look, it mostly gets it wrong.

my syntax for tables/objects is:

    local x: {foo = boolean}

but an LLM will most likely gloss over this and always use : instead of =:

    local x: {foo: boolean}


I've had success in the past with getting it to write YueScript/Moonscript (which is not a very large part of its training data) by pointing it to the root URL for the language docs and thus making that part of the context.

If your typed version of Lua has a syntax checker, you could also have it try to use that first on any code it's generated


Are you using a coding agent or just an llm chat interface? Do you have a linter or compiler that will catch the misuse that you’ve hooked up to the agent?

I've dabbled with claude code in this particular project, but not much. My short experience with it is that it's slow, costly and goes off the rails easily.

I prefer to work with more isolated parts of the code. But again, I don't really know all that much about agents.

One thing I wanted to do on my project is reorganize all the tests, which sounds like an agent job. But I'd imagine I need to define some hard programmatic constraints to make sure tests are not lost or changed in the process.
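
A test-inventory snapshot would probably be the first such constraint. A rough sketch in Python, assuming pytest-style `test_*` functions and a hypothetical `tests/` layout (files may move during the reorganization, but test names must survive):

    import ast
    import json
    import pathlib
    import sys

    SNAPSHOT = pathlib.Path("tests_before.json")

    def collect_test_names(root: str = "tests/") -> set[str]:
        """Gather every test function name under `root`."""
        names = set()
        for path in pathlib.Path(root).rglob("test_*.py"):
            for node in ast.walk(ast.parse(path.read_text())):
                if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)) and node.name.startswith("test_"):
                    names.add(node.name)
        return names

    if __name__ == "__main__":
        if sys.argv[1:] == ["snapshot"]:  # run before letting the agent loose
            SNAPSHOT.write_text(json.dumps(sorted(collect_test_names())))
        else:  # run afterwards, as the hard constraint
            missing = set(json.loads(SNAPSHOT.read_text())) - collect_test_names()
            if missing:
                sys.exit(f"tests lost in reorganization: {sorted(missing)}")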


Agents aren’t magic. They are loops with tool calls in them that help keep agents on track. And most of the agent systems have some manner of hook that you can put your own tools in to enforce things like types and styles.

I’ve had good experiences writing small scripts and linters to enforce things that agents get wrong frequently. What’s nice about those is that the agents are very good at writing them and they are easy to verify. Plus, they are valuable for new human devs as well.
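
For a concrete flavor, here's a minimal sketch of such a lint script in Python. The rule (banning bare `except:`) is just an illustrative stand-in for whatever your agent keeps getting wrong, and the hook wiring itself is agent-specific:

    import pathlib
    import re
    import sys

    BANNED = re.compile(r"^\s*except\s*:")

    def lint(root: str = "src/") -> int:
        """Flag every bare `except:` clause under `root`."""
        violations = []
        for path in pathlib.Path(root).rglob("*.py"):
            for lineno, line in enumerate(path.read_text().splitlines(), start=1):
                if BANNED.match(line):
                    violations.append(f"{path}:{lineno}: bare `except:` is not allowed")
        for v in violations:
            print(v, file=sys.stderr)
        return 1 if violations else 0  # a nonzero exit tells the agent loop to retry

    if __name__ == "__main__":
        sys.exit(lint())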


That is a good thing to hear from someone as reputable as Karpathy. The folks who think we're on the cusp of AGI may want to temper their expectations a bit.

I do love Claude Code, because one thing I periodically need to do is write some web code, which is not my favorite type of coding but happens to have incredibly good coverage in the training data. Claude is a much better web developer than I am.

But for digging into the algorithmic core of our automation tooling, it doesn't have nearly as much to work with and makes far more mistakes. Still a net win I'm happy to pay for, even if it's never anything more than my web developer slave.


100%. I find the "LLMs are completely useless" and the "LLMs will usher in a new era of messianic programming" camps to be rather reductive.

I've already built some pretty large projects [1] with the assistance of agentic tooling like Claude Code. When it comes to the more squirrely algorithms and logic, they can fall down pretty hard. But as somebody who is just dreadful at UI/UX, having it hammer out all the web dev scaffolding saves me a huge amount of time and stress.

It's just a matter of tempering one's expectations.

[1] https://animated-puzzles.specr.net


Hey, thank you for making this—I really enjoyed playing it and it feels like it fits the mental-reward-between-work-tasks need. It did spin up my M1's fans after a few minutes which is a rather rare occurrence, but I'm guessing that's par for the course when you're working with a bunch of video on canvas. Either way, hope I remember it the next time I'm looking for a puzzle to solve while I take a break :)

Just thought I'd add to this thread that I also had a lot of fun playing this game, and I don't normally enjoy puzzles on the computer!

A couple of very minor pieces of feedback, if you're open to it: The camera momentum when dragging felt a little unnatural. The videos seemed to have a slightly jumpy framerate and were a bit low-resolution when zoomed in.

Honestly though, those are minor nitpicks. It's a really fun and polished experience. Thanks for sharing!


>and the "LLMs will usher in a new era of messianic programming" camps

Well, this one might still be borne out. It's just silly to think it's the case right now. Check in again in 10 years and it may be a very different story. Maybe even in 5 years.


What do we build now to reap the coming of the messianic era?

> But for digging into the algorithmic core of our automation tooling

What I find fascinating is reading this same thing in other contexts, like a “UI guru” who will say “I would not let CC touch the UI, but I let it rip on the algorithmic core of our automation tooling because it is better at it than me…”


Both can be true. LLMs tend to be mediocre at (almost) everything, so they're always going to be worse than the user at whatever the user is an expert in.

But 'mediocre' isn't 'useless'.


I completely agree. I'm definitely not an expert web developer. I know enough to build functional tools, but it's not exactly art that I'm making. But the core of our tooling is my primary focus, I wrote it, I've spent a lot of time perfecting it. Claude can easily impress me with things like the CSS magic it weaves, because I am unsophisticated.

This makes sense, right? It's a relatively novel thing to be writing. I don't find it to be a damning remark like other comments here seem to be concluding.

If anything, the fact that Karpathy reached towards Claude/Codex in an attempt to gain value is indicative that, in previous coding efforts, those tools were helpful to him.


Yeah, if your goal is "build the tightest 8,000 line implementation of training an LLM from scratch, with a focus on both conciseness and educational value" I don't think it's particularly surprising that Claude/Codex weren't much help.

Now to wait for Sonnet 5 and GPT-6, and ask them to build that, and see what they come up with.

Why would you expect an improvement?

because they'll be trained on karpathy's implementation

> This makes sense, right? It's a relatively novel thing to be writing.

It's really not, though? Honestly, I'm surprised that coding agents apparently fail this hard at the task.


It's not _that_ far off distribution though. The math and concepts are well understood.

That's not really how LLMs work, though. It's fundamentally next-word prediction based on statistics of the context. Reordering ideas (which can drastically change the outcome) can result in a statistically rare context. The silly failures of simple riddles [1], and the like, demonstrate this well.

The riddle issue is putting trivial ideas together, but combined in a statistically rare way, giving low-quality output that tends towards the statistically significant answer, even if it's incorrect. The same thing happens with coding, when combining well-known things together in uncommon ways.

Worse (as with the riddle problem), nearby concepts that have strong statistics are going to act like attractors, with the LLM always trending towards those, removing and rewriting bits of code to better accommodate those, even if they're the opposite of what you want. I have this happen all the time in my somewhat obscure work. It'll rewrite key maths in my code to be the statistically significant textbook example, which is not what I need. I'll fix it manually or point it out, and a few changes later, it'll rewrite it again. A reasonable way around this is to "pin" the concept with a very strongly worded negative comment, like "DO NOT USE DOT PRODUCT. THIS IS NOT VECTOR PROJECTION. DO NOT modify the next line of code.".
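
To make that concrete, here's a toy version of the pin. The math is deliberately made up, and the shouty comment exists purely to stop the model from "correcting" the line back toward the textbook formula:

    def coupling_weight(a: list[float], b: list[float]) -> float:
        # DO NOT USE DOT PRODUCT. THIS IS NOT VECTOR PROJECTION.
        # DO NOT modify the next line of code.
        return sum(x * y * y for x, y in zip(a, b))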

[1] Claude 4.1 Opus:

> Prompt: "A son and his mother are in a car accident. They’re both rushed to the hospital. The doctor is about to operate on the son. But, before he does, he looks down and says "This is my son! I can’t operate on this boy!". How can this be?"

> Response: "The doctor is the boy's father.
>
> This is a classic riddle that plays on assumptions people might make about gender and professions. The boy has two parents - his mother (who was in the car with him) and his father (the doctor at the hospital). The riddle works because some people automatically assume the doctor must be male, forgetting that the boy's father could be the doctor since his mother was already mentioned as being in the accident."

Another, with output that doesn't match the goal, statistically attracted to the riddle:

> Prompt: "A man, a sheep, and a wolf are on one side of the river, with a boat that can only hold two. How can the man safely get the boat to the other side of the river, without the sheep being eaten?"


> If anything, the fact that Karpathy reached towards Claude/Codex in an attempt to gain value is indicative that, in previous coding efforts, those tools were helpful to him.

This is good for bitcoin.



He probably just doesn’t know how to prompt correctly (heheh).

It's funny that the coiner of the term "vibe coding" has eventually found it not useful anymore.

That’s not what he said. This is the new project:

> My goal is to get the full "strong baseline" stack into one cohesive, minimal, readable, hackable, maximally forkable repo. nanochat will be the capstone project of LLM101n (which is still being developed). I think it also has potential to grow into a research harness, or a benchmark, similar to nanoGPT before it.

This is how he described vibe coding:

> There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It's possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard. I ask for the dumbest things like "decrease the padding on the sidebar by half" because I'm too lazy to find it. I "Accept All" always, I don't read the diffs anymore. When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I'd have to really read through it for a while. Sometimes the LLMs can't fix a bug so I just work around it or ask for random changes until it goes away. It's not too bad for throwaway weekend projects, but still quite amusing. I'm building a project or webapp, but it's not really coding - I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.

Vibe coding is clearly aimed at having fun hacking around on something that doesn’t matter, and he’s doing the opposite of that with this project. The fact that he’s not using vibe coding for something that is completely inappropriate for vibe coding is neither surprising nor a failure of vibe coding.


The llama.cpp maintainers working on supporting Qwen3-next are also not enthused by LLM output. They had to go over everything and fix it up.

https://github.com/ggml-org/llama.cpp/pull/16095#issuecommen...


Isn't the point that now Andrej's published this, it will be in-distribution soon?

> too far off the data distribution.

I guess his prompts couldn’t provide sufficient information either (there’s no limit). Sounds more like a user issue to me. :) I don’t think there’s anyone that can type faster than ChatGPT.


Backprop and transformers aren't exactly off-the-grid coding, but I can see how it would require a lot of patience to force Claude into writing this.

How convenient! You know, my code is somewhat far off the data distribution too.

We're still not ready for ouroboros.

... or maybe he just forgot to include the claude.md ? :)

Clearly he has little idea what he's talking about.

AI can write better code than 99% of developers. This embarrassingly anti-AI shill included.

If he used the AI tool my company is developing the code would have been better and shipped sooner.


Anti-AI shill? A cofounder of OpenAI?

You have found the joke.

I think you are running into Poe's law here.


