
Consumer brand quality is so massively underrated by tech people.

ChatGPT has a phenomenal brand. That's worth 100x more than "product stickiness". They have 700 million weekly users and growing much faster than Google.

I think your points on Google being well positioned are apt for capitalization reasons, but only one company has consumer mindshare on "AI", and it's the one with "AI" in its name.


I’ve got “normie” friends who I’d bet don’t even know that what Google has at the top of their search results is “AI” results and instead assume it’s just some extension of the normal search results we’ve all gotten used to (knowledge graph)

Every one of them refers to using “ChatGPT” when talking about AI.

How likely is it to stay that way? No idea, but OpenAI has clearly captured a notable amount of mindshare in this new era.


In the UK, everyone refers to a vacuum as a 'hoover'. They are not the dominant vacuum brand there despite the massive name recognition.


Same with "Pampers" in Poland. Everyone says "pampersy" when referring to just generic diapers. Almost nobody buys the literal "Pampers" brand.


I'm not sure if physical products are analogous to internet services. If all it took to vacuum your house was typing "Hoover" into a browser, and everyone called vacuums "a Hoover," then I would expect Hoover to have 90% of the vacuum market share.

But since buying a vacuum usually involves going to a store, looking at available devices, and paying for them, the value of a brand name is less significant.


Pre-pandemic, at least in my social circles, "Skype" was the term for video calling. "Hey, wanna Skype?" and we'd hop on a Discord call.

Post-pandemic, at work and such, "Zoom" has become synonymous with a work call. Whether it's via Slack or Google Meet, or even Zoom itself, we use the term Zoom.

I don't know what the market share is on Skype (Pre-pandemic) or Zoom, but these common terms appear to exist for software.



'Pampers' and 'Xerox' in Russia.


BAND-AID is another one


And "generic trademark" is the Wikipedia article.

https://en.wikipedia.org/wiki/Generic_trademark

Huh, bubble wrap, even.


On the other hand, no one cares about Velcro or Tupperware


https://www.youtube.com/watch?v=rRi8LptvFZY

Video description, from the Velcro brand YouTube channel:

Our Velcro Brand Companies legal team decided to clear a few things up about using the VELCRO® trademark correctly – because they’re lawyers and that’s what they do. When you use “velcro” as a noun or a verb (e.g., velcro shoes), you diminish the importance of our brand and our lawyers lose their insert fastening sound. So please, do not say “velcro shoes” (or “velcro wallet” or “velcro gloves”) - we repeat “velcro” is not a noun or a verb. VELCRO® is our brand. #dontsayvelcro


Tannoy is another.


When people started referring to searching the internet as "googling," Google knew its brand had made it.

It is the same with ChatGPT.


Even I often say I chatgeepeeteed the result, in the same fashion as I continue saying I googled the result when I actually used DuckDuckGo. I could ask another LLM provider, but I have no idea how to communicate that properly to non-technical folks. Heck, I don't want to communicate that _properly_ to tech peers either. I don't like these pedantic phrases: 'well, actually … that wasn't Google, I used DDG for that.' Sometimes I can say 'web search,' but 'I googled that' is just a more natural thing to say.

Same here. I tried saying 'I asked an LLM' or 'I asked AI' but that doesn't sound right to me. So, in most conversations I say 'I asked ChatGPT', and in most of these situations it feels like the exact provider does not matter, since essentially they are all very similar in nature.


I cheekily refer to it as Al (like, short for Albert) because Google seems to love to shove Al's overviews in my search results.

But when I'm being more serious I'd usually just say "I asked GPT"

I have a colleague who just refers to AI as "Chat" which I think is kinda cute, but people also use the term "chat" to refer to... Like, people, or "yall". Or to their stream chat.


I recently used "an unreliable source that gives fast answers to questions" in a comment here. I think I will keep on using it.


Hey, that’s a nice one!


I like to go with 'I asked my bot|chatbot'


>I asked AI doesn't sound right for me.

That's a you thing.


Hey, but it’s not AI!


Yep, this. I’ve switched to Claude for a while (because I can’t afford max plans for both) and nobody in the real world has any idea what it is I’m talking about. “Oh it’s like ChatGPT?”


Claude is also difficult for a non-English speaker to pronounce consistently. Sometimes people don't say it because it can get misinterpreted. ChatGPT is easy on the tongue and very difficult to mispronounce.


I know a lot of people who refer to it as ChatGTP which I assume stands for German treebrained performers


https://chat.mistral.ai/chat/12da2e83-f3f1-4a47-b432-753cac2...

I suspect they chose that name because of the proximity with the word "cloud".


The CEO is also more puritan than the pope himself considering the amount of censorship it has. Not sure if they are even interested in marketing to normies though.


> The CEO is also more puritan than the pope himself considering the amount of censorship it has.

In that case, you should try OpenAI's gpt-oss!

Both models are pretty fast for their size, and I wanted to use them to summarize stories and try out translation. But they keep checking everything against "policy" all the time! I created a jailbreak that works around this, but they still waste a few hundred tokens talking about policy before producing useful output.


Surely someone has abliterated it by now


Ah yes, the Latin-originating French name that has a variant at least in every Latin language is hard to pronounce for non-English users.


When I talk about any AI usage I do, I just say ChatGPT to my friends.


> They have 700 million weekly users and growing much faster than Google.

A years-old company growing faster than a decades-old company!

2.5 billion people use Gmail. I assume people check their mail (and, more importantly, receive mail) much more often than weekly.

ChatGPT has a lot of growing to do to catch up, even if it's faster


I read that as OpenAI’s WAU is showing a steeper increase than Google ever did. Not saying it’s factually accurate, just that it’s not a fixed point-in-time comparison :)


There are 2 billion more humans living now than in 2000 though, and the world is much more technology oriented.


The focus on WAU is a tell though. How much data can OpenAI use for advertising if I interact with it weekly?

When I ask about which toaster is best, is it going to show me ads for a motorcycle because that's what I asked about last week?


My wife asked for information about a product, and ChatGPT fed her a handful of blatant product ads. She told the AI never to do that again, and that was the last time she saw that format of output.

I would wager that she was part of an A/B testing group, so her instruction may not have any real effect. However, we were both appalled by that output and immediately discussed alternative AI options, should such a change become permanent.

This isn’t the rise of Google, where they have a vastly superior product and can boil us frogs by slowly serving us more and more ads. We are already boiling mad from having become hypersensitive to products wholly tainted by ads.

We ain’t gunna take it anymore.


> ChatGPT has a phenomenal brand.

My observation is different: ChatGPT may be well-known, but it no longer has a really good reputation (I'd claim that on average its reputation is as dubious as Google's), in particular considering

- a lot of Sam Altman's public statements and actions (in particular his involvement in Worldcoin (iris scanning), which makes him intolerable as the CEO of a company that is concerned about its reputation)

- the attempts to oust Sam Altman

- most people know that OpenAI, at least in the past, collaborated a lot with Microsoft (not a well-regarded company). But the really bad thing is that the A"I" features Microsoft introduced into basically every product are hated by users. Since people know that these at least originated in ChatGPT products, this stained OpenAI's reputation a lot. Lesson: choose carefully who you collaborate with.


You massively overestimate what people actually know and read about. If you are in the tech sphere these things might be obvious to you, but I assure you regular people are not keeping track as closely.

I bet at most 10% of people in the West can name the CEO of OpenAI.


Eh. Altman is not Musk in terms of negative coverage or average sentiment on the net. That might change in the future, but my personal guess is that your perception may be based on spending too much time in a specific echo chamber. I personally like to use people who don't use LLMs at all for proper grounding. In those cases, Altman's name does not exist, while Musk barely registers.


> Altman is not Musk in terms of negative coverage or average sentiment on the net.

I can assure you that in Germany (where people are very sensitive with respect to privacy topics), Sam Altman (in particular because of his involvement with Worldcoin ("iris scanning" -> surveillance)) has a very bad reputation by many people.


Fair point. I was being too US-centric in my response.


Most normal people don't know about these things; they don't even know who Sam Altman is. My family, for example, aren't American; they know about ChatGPT, but they have no idea who Sam Altman is.


Normal people that use ChatGPT have never heard of Sam Altman, especially outside the US. These points are only in tech and financial circles.


Sure, "ChatGPT" has entered the common consciousness as the name of LLM chatbots as a product.

But does that mean that all of the people who talk about "asking ChatGPT" are actually asking ChatGPT, from OpenAI?

How many of them are actually asking Claude? Or Gemini? Or some other LLM?

That's the trouble when your brand name gets genericized.


While I agree, we saw this play out with Dropbox too.


> ChatGPT has a phenomenal brand.

If by "phenomenal" you mean "the premier slop and spam provider", then yes.


Sadly that's not how the wider public sees it.


That's exactly how the wider public sees it.

It just turns out that the wider public loves peddling slop. (Not so much though when on the receiving end.)


My mom sees it as a nice internet bloke that helps her with writing emails. She once asked why it can't change the background of her image from white to red if it can generate all that amazing art, and was genuinely disappointed that she couldn't get it to understand what she wants. You have a skewed view of public perception of LLMs - they don't think about it, they just use it.


> helps her with writing emails

Yeah, exactly - like I said, generating slop.

(That same hypothetical mom will get annoyed when on the receiving end of a slop-generated email, though.)


Google has to be shitting its pants. No one knows what "Gemini" is; probably some stupid nerd thing. Normies know ChatGPT, and that is what matters.


They might be. Google has been getting mildly 'aggressive' in its emails pleading with me to use Gemini, and I have yet to try it (and that is despite being mildly interested). There is a reason first-mover advantage is a real thing. People stick with what they think they know.


> ChatGPT has a phenomenal brand. That's worth 100x more than "product stickiness". They have 700 million weekly users

I don't think the majority of those 700M people use the product because of the brand. Products are a non-trivial contributor to the brand.

Also, if it were phenomenal, they wouldn't be called ClosedAI ;)


I wish more folks would post P(how much I believe my own take) when they make takes.

I don't think the author is fundamentally wrong, but it's delivered with a sense of certainty that's similar in tone to the past 5 years of skepticism that has repeatedly been wrong.

Instead of saying "vibe coded codebases are garbage", the author would be better served writing about "what does the perfect harness for a vibe coded codebase look like, so that it can actually scale to production?"


There's no way to know for sure. If there was, we wouldn't be having this conversation. But I'm trying to make an educated guess.

I read the news: AI is taking over and we'll soon all be out of jobs. So I have to decide. Do I double down on software engineering, pivot to vibe coding, or try something completely different?

I need a sense of certainty to make this call, so I researched it. This post is the result. I might be wrong, but at least I'm choosing a clear direction instead of constantly switching and never getting good at either.

Vibe coding today doesn't deliver anywhere near the value of a competent software engineer. Rather than extrapolate from past progress, I looked at what it would take today. You asked about how we will turn a vibe-coded codebase into production-ready systems. I have no idea how we'll do that and I didn't find someone with a solid plan for it.

The logical conclusion here is that there's still plenty of runway for skilled software engineers. So I'm betting on becoming a better one with or without AI.

About "vibe coded codebases are garbage". If someone doesn't know how to build software (or quality doesn't matter) vibe coding is perfect. The code might be garbage but it beats having nothing.

These projects would otherwise be Excel spreadsheets or duct-taped tools. Now they have another option.

The problem is when people suggest vibe coding replaces developer skills, as if producing code was the bottleneck.


> 5 years of skepticism that has repeatedly been wrong.

Can you point to any app with scale that has been vibe coded?


That wasn't the take. The take is that, generally, "glorified next token predictor" quality LLM skepticism takes have been repeatedly proven wrong. See the original Cursor, Devin, etc. announcement threads.

More broadly, it's unfortunate that vibe coding is such an overloaded term.

- Yes, product managers with 0 coding expertise are contributing code in FAANG.

- Yes, experienced engineers are "vibe coding" to great success.

- Yes, folks with 0 years of experience were building simple calculators 2 years ago and are now building games, complex websites, etc., just by prompting. Where will this go in another two years?

One need look no further than Kiro, Amazon's own code editor that's being used extensively internally.

Folks bemoaning vibe coding are simply suffering from a lack of imagination.


What was the take then? What has repeatedly been "proven wrong?" The original author's point was that it was a great tool for rapidly spinning up a prototype that quickly fell apart at any kind of scale or complexity. I've seen this pop up over and over, and in my own usage of it as well. They don't seem able to (yet) produce anything purely "vibe" coded at any kind of real complexity or scale. If it's happening, I'm extraordinarily interested in it, because I have a lot of projects I could get off the ground to make some additional money. So what apps are they?


Cloudflare's Oauth library probably handles some scale.

https://github.com/cloudflare/workers-oauth-provider


It's not vibe coded, though. "Vibe coding" means taking the AI's code with no review at all. Whereas I carefully reviewed the AI's output for workers-oauth-provider.


I don't love this framing - capitalism operates on avarice. Every for-profit company "gives in" to it to a great extent, so we shouldn't put undue blame on Isovalent for just looking out for themselves and their employees.


This article presents no source for this claim, and the rest of the article is re-hashing stuff that has been well covered

This is clickbait - publications should report on facts and confirmed sources, not speculate and try to get clicks.


"according to a person familiar with the matter" is a source.


No it isn't. It's a plea from a journalist to trust that they actually have one and have the judgement to vet if the source is any good.

It's no more credible than the reporter, and I've never heard of this one.


Could the OC be sarcasm? Perhaps they forgot the /s?


Being riddled with ads also doesn't increase my trust level. This really seems like their attempt to capitalize on the crazy level of interest right now.


I mean even the first sentence of the article is factually incorrect. He did not "step down".


lol, if you were using Brave browser you would see no ads


Brave Browser is privacy cope: https://digdeeper.neocities.org/articles/browsers#brave

And even if you don't use BAT, it is still an ad-driven business.


I didn't actually see the ads (I use uMatrix), but they have boxes outlining where the ads would normally go.


You are comparing apples and oranges.

These interviews are for engineers interviewing at companies which have scale; not for 0-to-1 growth of a product like your Google example.

When you're launching products at scale, you absolutely need to design large systems that are durable and can operate at said scale. And when I'm hiring, I need to filter for folks who are able to rationalize at that scale.


Well, as you are the FAANG hiring manager, I expect I am going to lose the argument, but I wonder if my oranges are more like apples than you suspect.

I see there being two options here:

apples: please take the existing set of distributed components that evolved in tandem at this company, and design a new application on top of and using those components

oranges: please start from scratch and explain how you would build a set of components that will work as well as those in the apples option

Apples is what we mostly look for - in any org we have a scale and an ecosystem, and we don't want to throw it all out. If it must evolve, it evolves in tandem with the rest of the ecosystem. So if we have some scalable data layer now that has chosen one side of CP/AP, and then we come along and say "actually, throw that out, we need full consistency all the time for this HR app, we are going to build our own different CP data layer globally, hold my beer", then ...

The oranges part is fine - it tells us if someone has actually understood the principles behind apples. But honestly it's a fake-out if we think oranges can actually be done. And this sort of stuff gives the impression you can design the application solution by also designing the ecosystem at the same time. That's the bit I want to emphasise - you build the platform, then design the application around the quirks of what you built.

And anyone who has not used the apples components won't understand their quirks, and can come unstuck when the well-known but non-obvious behaviour strikes.

I am being too vague here. I feel there is an interesting set of discussions to come out of this - I will reread the article.

