Consumer brand quality is so massively underrated by tech people.
ChatGPT has a phenomenal brand. That's worth 100x more than "product stickiness". They have 700 million weekly users and growing much faster than Google.
I think your points on Google being well positioned are apt for capitalization reasons, but only one company has consumer mindshare on "AI", and it's the one with "AI" in its name.
I’ve got “normie” friends who I’d bet don’t even know that what Google shows at the top of their search results is “AI” output; they assume it’s just some extension of the normal search results we’ve all gotten used to (the knowledge graph).
Every one of them refers to using “ChatGPT” when talking about AI.
How likely is it to stay that way? No idea, but OpenAI has clearly captured a notable amount of mindshare in this new era.
I'm not sure if physical products are analogous to internet services. If all it took to vacuum your house was typing "Hoover" into a browser, and everyone called vacuums "a Hoover," then I would expect Hoover to have 90% of the vacuum market share.
But since buying a vacuum usually involves going to a store, looking at available devices, and paying for them, the value of a brand name is less significant.
Pre-pandemic, at least in my social circles, "Skype" was the term for video calling. "Hey, wanna Skype?" and we'd hop on a Discord call.
Post-pandemic, at work and such, "Zoom" has become synonymous with a work call. Whether it's via Slack, Google Meet, or even Zoom itself, we use the term Zoom.
I don't know what the market share was for Skype (pre-pandemic) or is for Zoom, but these genericized terms clearly exist for software too.
Video description, from the Velcro brand YouTube channel:
Our Velcro Brand Companies legal team decided to clear a few things up about using the VELCRO® trademark correctly – because they’re lawyers and that’s what they do. When you use “velcro” as a noun or a verb (e.g., velcro shoes), you diminish the importance of our brand and our lawyers lose their insert fastening sound. So please, do not say “velcro shoes” (or “velcro wallet” or “velcro gloves”) - we repeat “velcro” is not a noun or a verb. VELCRO® is our brand. #dontsayvelcro
Even I often say I "chatgeepeeteed" the result, the same way I keep saying I googled something when I actually used DuckDuckGo. I could ask another LLM provider, but I have no idea how to communicate that properly to non-technical folks. Heck, I don’t want to communicate it _properly_ to tech peers either. I don’t like these pedantic phrases: ‘well, actually … that wasn’t Google, I used DDG for that.’ Sometimes I can say ‘web search,’ but ‘I googled that’ is just a more natural thing to say.
Same here. I tried saying ‘I asked an LLM’ or ‘I asked AI,’ but that doesn’t sound right to me. So in most conversations I say ‘I asked ChatGPT,’ and in most of these situations it feels like the exact provider doesn't matter, since they are all essentially very similar in nature.
I cheekily refer to it as Al (like, short for Albert) because Google seems to love to shove Al's overviews in my search results.
But when I'm being more serious I'd usually just say "I asked GPT"
I have a colleague who just refers to AI as "Chat" which I think is kinda cute, but people also use the term "chat" to refer to... Like, people, or "yall". Or to their stream chat.
Yep, this. I’ve switched to Claude for a while (because I can’t afford max plans for both) and nobody in the real world has any idea what it is I’m talking about. “Oh it’s like ChatGPT?”
Claude is also difficult for a non-English speaker to pronounce consistently. Sometimes people don't say it because it can get misinterpreted. ChatGPT is easy on the tongue and very difficult to mispronounce.
The CEO is also more puritan than the pope himself, considering the amount of censorship it has. Not sure if they are even interested in marketing to normies, though.
> The CEO is also more puritan than the pope himself considering the amount of censorship it has.
In that case, you should try OpenAI's gpt-oss!
Both models are pretty fast for their size and I wanted to use them to summarize stories and try out translation. But it keeps checking everything against "policy" all the time! I created a jailbreak that works around this, but it still wastes a few hundred tokens talking about policy before it produces useful output.
I read that as OpenAI’s WAU is showing a steeper increase than Google ever did. Not saying it’s factually accurate, just that it’s not a fixed point-in-time comparison :)
My wife asked for information about a product, and ChatGPT fed her a handful of blatant product ads. She told the AI never to do that again, and that was the last time she saw that format of output.
I would wager that she was part of an A/B testing group, so her instruction may not have had any real effect. However, we were both appalled by that output and immediately discussed alternative AI options, should such a change become permanent.
This isn’t the rise of Google, where they had a vastly superior product and could boil us frogs by slowly serving us more and more ads. We are already boiling mad, having become hypersensitive to products wholly tainted by ads.
My observation is different: ChatGPT may be well-known, but it no longer has a really good reputation (I'd claim it is, on average, regarded as dubiously as Google), in particular considering
- the many public statements and actions of Sam Altman (in particular, his involvement in Worldcoin (iris scanning) makes him intolerable as CEO of a company that cares about its reputation)
- the attempts to oust Sam Altman
- most people know that OpenAI, at least in the past, collaborated a lot with Microsoft (not a well-regarded company). But the really bad part is that the A"I" features Microsoft has pushed into basically every product are hated by users. Since people know these features at least originated with ChatGPT products, this has stained OpenAI's reputation considerably. Lesson: choose carefully who you collaborate with.
You massively overestimate what people actually know and read about. If you are in the tech sphere these things might be obvious to you, but I assure you regular people are not keeping track as closely.
I bet at most 10% of people in the West can name the CEO of OpenAI.
Eh. Altman is not Musk in terms of negative coverage or average sentiment on the net. That might change in the future, but my personal guess is that your perception comes from spending too much time in a specific echo chamber. I personally like to use people who don't use LLMs at all as a proper grounding. For them, Altman's name doesn't exist, while Musk barely registers.
> Altman is not Musk in terms of negative coverage or average sentiment on the net.
I can assure you that in Germany (where people are very sensitive about privacy topics), Sam Altman has a very bad reputation among many people, in particular because of his involvement with Worldcoin ("iris scanning" -> surveillance).
Most normal people don't know about these things; they don't even know who Sam Altman is. For example, my family members, who are not American, know about ChatGPT but have no idea who Sam Altman is.
My mom sees it as a nice internet bloke that helps her with writing emails. She once asked why it can't change the background of her image from white to red if it can generate all that amazing art, and was genuinely disappointed that she couldn't get it to understand what she wants. You have a skewed view of public perception of LLMs: people don't think about them, they just use them.
They might be. Google has been getting mildly 'aggressive' in their emails pleading with me to use Gemini, and I have yet to try it (despite being mildly interested). There is a reason first-mover advantage is a real thing: people stick with what they think they know.
I wish more folks would post P(how much I believe my own take) when they post takes.
I don't think the author is fundamentally wrong, but it's delivered with a sense of certainty that's similar in tone to the past 5 years of skepticism that has repeatedly been wrong.
Instead of saying "vibe coded codebases are garbage", the author would be better served by writing about what the perfect harness for a vibe coded codebase looks like so that it can actually scale to production.
There's no way to know for sure. If there was, we wouldn't be having this conversation. But I'm trying to make an educated guess.
I read the news: AI is taking over and we'll soon all be out of jobs. So I have to decide. Do I double down on software engineering, pivot to vibe coding, or try something completely different?
I need a sense of certainty to make this call, so I researched it. This post is the result. I might be wrong, but at least I'm choosing a clear direction instead of constantly switching and never getting good at either.
Vibe coding today doesn't deliver anywhere near the value of a competent software engineer. Rather than extrapolate from past progress, I looked at what it would take today.
You asked how we'll turn a vibe-coded codebase into a production-ready system. I have no idea how we'll do that, and I didn't find anyone with a solid plan for it.
The logical conclusion here is that there's still plenty of runway for skilled software engineers. So I'm betting on becoming a better one with or without AI.
On "vibe coded codebases are garbage":
If someone doesn't know how to build software (or quality doesn't matter) vibe coding is perfect. The code might be garbage but it beats having nothing.
These projects would otherwise be Excel spreadsheets or duct-taped tools. Now they have another option.
The problem is when people suggest vibe coding replaces developer skills, as if producing code was the bottleneck.
That wasn't the take. The take is that, in general, "glorified next-token predictor" style LLM-skepticism takes have been repeatedly proven wrong. See the original Cursor, Devin, etc. announcement threads.
More broadly, it's unfortunate that vibe coding is such an overloaded term.
- Yes, product managers with 0 coding expertise are contributing code in FAANG.
- Yes, experienced engineers are "vibe coding" to great success.
- Yes, folks with 0 years of experience were building simple calculators 2 years ago and are now building games, complex websites, etc., just by prompting. Where will this go in another two years?
One needs to look no further than Kiro, Amazon's own code editor, which is being used extensively internally.
Folks bemoaning vibe coding are simply suffering from lack of imagination.
What was the take then? What has repeatedly been "proven wrong?" The original author's point was that it was a great tool for rapidly spinning up a prototype that quickly fell apart at any kind of scale or complexity. I've seen this pop up over and over, and in my own usage of it as well. They don't seem able to (yet) produce anything purely "vibe" coded at any kind of real complexity or scale. If it's happening, I'm extraordinarily interested in it, because I have a lot of projects I could get off the ground to make some additional money. So what apps are they?
It's not vibe coded, though. "Vibe coding" means taking the AI's code with no review at all. Whereas I carefully reviewed the AI's output for workers-oauth-provider.
I don't love this framing: capitalism operates on avarice. Every for-profit company "gives in" to it to a great extent, so we shouldn't put undue blame on Isovalent for just looking out for themselves and their employees.
Being riddled with ads also doesn't increase my trust level. This really seems like their attempt to capitalize on the crazy level of interest right now.
These interviews are for engineers interviewing at companies that operate at scale, not for the 0-to-1 growth of a product like your Google example.
When you're launching products at scale, you absolutely need to design large systems that are durable and can operate at said scale. And when I'm hiring, I need to filter for folks who are able to rationalize at that scale.
Well as you are the FAANG hiring manager I expect I am going to lose the argument but I wonder if my oranges are more like apples than you suspect.
I see there being two options here:
apples: please take the existing set of distributed components that evolved in tandem at this company and design a new application on top of and using those components
oranges: please start from scratch and explain how you would build a set of components that will work as well as those in 1.
Apples is what we mostly look for: in any org we have a scale and an ecosystem, and we don't want to throw it all out. If it must evolve, it evolves in tandem with the rest of the ecosystem. So if we have some scalable data layer that has already chosen one side of CP/AP, and then we come along and say "actually, throw that out, we need full consistency all the time for this HR app, we are going to build our own different CP data layer globally, hold my beer," then ...
The oranges part is fine; it tells us whether someone has actually understood the principles behind apples. But honestly, it's a fake-out if we think oranges can actually be done, and this sort of stuff gives the impression you can design the application solution by also designing the ecosystem at the same time. That's the bit I want to emphasise: you build the platform, then design the application around the quirks of what you built.
And anyone who has not used the apples components won't understand their quirks and can come unstuck when the well known but non obvious behaviour strikes.
I am being too vague here. I feel there is an interesting set of discussions to come out of this; I will reread the article.