literally, how does anyone find this acceptable? It makes me feel dumb that I find this outrageous, like the emperor's new clothes: y'all can see this too right?
My strong suspicion is that, for, say, the CEO of Google, this is less about "this will improve the product", or even "LLMs are the future, so we should use them in the present even though they don't work", and more "this is what the markets, which are collectively kinda dumb, have decided they will reward this year, so we will appease them until such time as they move onto a new toy."
See Our Lord and Saviour the Blockchain; for a year or so practically every company was announcing some sort of blockchain thing. Did any of these come to anything? Of course not, but that was, when it comes to it, hardly the point.
LLMs are a particularly dramatic example, but I suspect the dynamic is, in reality, more or less the same as metaverses, blockchains, the _previous_ AI bubble (remember the year or so when everyone was announcing chatbots, until Microsoft Tay kinda scared everyone off it abruptly?), and so on.
Many in the tech community have high expectations for Google engineering quality and product development. Those expectations are now too high, as were those of the executives who pushed this feature.
> Many in the tech community have high expectations for Google engineering quality and product development
Wait, who has these expectations? I presumed most in the tech industry scoff at anything Google launches, pointing at the Google Graveyard or at all the failures they've been through.
Seriously: aside from search, ads, and Android, what product has Google ever developed that is of high quality and meets those high expectations? Maybe Maps at some point? Gmail, maybe? Google Docs? For each and every Google product, I can name two competitors that meet much higher engineering expectations.
Really the only 100% in-house products are Search and Gmail. The rest was bolted piecemeal onto the Borg: Maps, Android, DoubleClick, Docs, YouTube, etc. were all acquired.
That’s not a bad thing, Google is really a story about a shrewd - even brilliant - acquisition and integration strategy.
However, they have systematically failed to grow new things in-house, be it balloon internet, being an ISP, a games network, and so on.
They are likely better off just buying Anthropic at this point.
Maybe it's fake, or maybe it's been fixed since then. Who knows? This is another problem with these closed source services giving "personalized" results that are totally different from one person to the next and one day to the next. How would you propose we authenticate the screenshot?
It may be a doctored image. But we don't know. I don't think these things are deterministic. At least not just with search query. It could be real, and you might get a real response that makes sense.
I recently received a result that gave me the wrong date for a college class registration, which was pretty fucked up. It was off by three days. Luckily, I double-checked my school's official website.
I think Google's products find success through one (or both) of two factors.
1. Excellent integration with the rest of the Google ecosystem, which ends up with me and my friends using it, and it is good enough that we continue using it. (E.g. Chrome, Photos, Drive, etc.)
2. Products that are genuinely better and leagues ahead of the nearest competition (Gmail, Docs, YouTube).
I suspect Gemini is relying more on 1 than on 2 at this point, as it is not clear whether it is indeed the best. But if it can play well with other Google products, I am happy.
For example, I have a Google One 2TB subscription to pay for all the storage and extra features, as a result of which I get Gemini Advanced for free, and it also works in Docs. I rarely need to open ChatGPT, even if ChatGPT is better than Gemini.
Search is where I am somewhat skeptical, but from Google's perspective this experiment is totally worth it in my opinion, and not doing it would be stupid. Human beings adapt to the error rate of gen AI over time. So unless someone else delivers, say, 5x better accuracy than Gemini, I think it won't matter.
I am well aware of the situation, hence me searching her name.
But that doesn’t explain the nonsense word salad. I would actually be very skeptical to assume that these were taken verbatim from searchers.
You would have needed A LOT of people to have searched that exact phrase for it to be added to PAA.
Google does a good job with People Also Ask, and I have never seen this before. It might be nothing, but I am leaning towards one of their NLP systems leaking.
It's a great service for providing the wrong answers to things I didn't search for, but I admit I haven't yet figured out why I am supposed to desire this.
Yeah, that sentence would literally be caught by a rudimentary grammar checker. This is what happens when you apply YOLO-driven design decisions to the basics of your core product.
Can you share the prompts you used to generate these results? I'm having a hard time replicating just by guessing what they were (full disclosure I work for Google but not on this)
It is frankly astonishing how poor Gemini continues to be at tasks that suggested answers, Assistant, and in part Google Now covered in the past.
Mistral, OpenAI, Anthropic and Meta all have their unique advantages, and I have yet to find one LLM that is superior in every task I use them for, but purely for simple questions that should be in the training data, few are consistently as poor as Google's efforts have been, first with Bard and now Gemini.
It doesn't matter. Most people are using Copilot because it's vetted, "protects your data", and is paid for with Office 365 (or whatever it is called now).
Google should have run at the enterprise when it had the chance fifteen years ago.
The answers aren’t deterministic. Which is part of the problem. While searching for these now may give you the right answers, I have personally encountered this level of bad responses enough times that none of these are surprising.
There’s no need to doctor screenshots to get these responses, at least twice now Google has shown bad answers in their own marketing materials.
It pretty much was deterministic before Google started using ML to "help" interpret your search queries, which was a decade or more ago. Not coincidentally, that was also when Google search results started getting noticeably worse for me.
Do you think anyone at Google even feels guilty or responsible about shipping this garbage or will they still receive a big fat bonus for shipping something? I understand that Google is probably under pressure due to all the AI-hype, but surely there must be someone on the inside that could call out this trash for what it is? Has the organization become so fully captured by psychopaths such that common sense is no longer able to prevail?
The hierarchy is too strong. Subordinate Googlers cannot really challenge anyone in their leadership chain.
That leaves siblings to challenge each other, which doesn't really happen either. (VPs either can't check other VPs, or don't give a f*** about doing so.)
They just take the money at this point; I don't blame the low-rank employees too much. Google isn't what it used to be and is not inhabited by the same spirit of the early days. Most of those employees are long gone, the culture has changed, etc.
Jesus, that kidney stones one is bad :( It's funny when the LLM totally checks out of reality, but not at all funny when the statement seems plausible to ignorant people.
(A lot of people seem to subscribe to an ideology of "dumb people get what they deserve." To me that really means "I have Dunning-Kruger syndrome," but I wonder how much of it gets filtered down into making excuses for AI that sucks so badly it becomes actively dangerous.)
If I already know the answer to something, why would I be looking it up? These answers from AI should assume the person on the other end doesn’t know anything and isn’t going to fact check. If they were going to fact check, they’d just look it up in the first place and not waste their time with AI.
To me, a real sign of intelligence and maturity is being able to say, “I don’t know.” I always lose a lot of respect for people when I catch them making up answers to questions they can’t answer. It means I can’t trust anything they say. If I’m not willing to accept this behavior from a person in my life, why would I accept it from a machine?
If you have a question about the answer, have the AI review it to make sure it's correct. I have gotten it to pull its head out of its ass numerous times. It's like a conversation with a human.
It's easy to hold it wrong if you believe things you see in print.
This is just completely disconnected from my original comment. What if someone reads "drink fruit juice to clear up kidney stones" from an AI and doesn't have a question about the answer because they don't fully understand what kidney stones are? The only response AI advocates seem to have is "not my problem." Or, far too often, the vindictive irresponsibility of "it was his fault for being so stupid that he trusted us."
"Flounder, you can't spend your whole life worrying about your mistakes. You fucked up! You trusted us! Hey, make the best of it... maybe we can help you."
Adding to the litany of bad google ideas from the past 10+ years:
- Killing google reader
- Pointless UI changes
- Multiple chat and videocall apps that cannibalize each other.
- Stadia fiasco
- Shoving AI down our throats in their MAIN PRODUCT
What's the source of this rot? I have a friend at google who says the place is filled with smart people competing with each other. Perhaps this competition fuels a chaotic lack of coherency? Kind of feels like they have no clear vision in the "Google Ecosystem", and are hopping on the AI bandwagon with hopes it'll ride them into the future.
Google's Gemini is not mind-blowing, nor probably the top model, but it is without a doubt in the same ballpark as all its competitors, which I think is a pretty good sign. Just like Meta, Google did not drop the ball on AI, and it looks like they had their ear to the ground better than, say, MSFT, AAPL or AMZN.
In that sense I can see why investors are happy. What matters is if Google can continue to innovate and at a rate faster than competition.
I know that view is popular, but it seems so short-sighted. Most companies can increase profit whenever they want, but driving out innovators and abusing goodwill never works out long-term. Look at IBM, Oracle, Intel, Boeing, US Steel, GE, Commodore, Quiznos, etc. I feel like Google's stockholders are just having the wool pulled over their eyes. An increase in profit often means a cannibalization of value.
I'm mostly wondering why shareholders go along with it, or even pay a premium on shares that make a lot of money now but may be comparatively worthless in 20 years. Do they really believe this is sustainable? Do they expect Google to just start issuing massive dividends? Are they just hoping for a greater fool?
Wall St wants to see them keeping up with the latest tech. That's why Apple put billions into VR and we will hear nothing of it in 12-24 months. Another reason is talent retention/acquisition. AI will be history in 12-24 months when the bubble bursts and we are left with a mountain of cheap GPUs.
I think it’s safe to assume that in the next few years almost all consumer hardware will be able to run a local LLM comfortably. And no one knows if LLMs can go beyond GPT-4 in a way that will genuinely blow people away.
> (...) almost all consumer hardware will be able to run a local LLM comfortably.
The last thing we want. For example, car manufacturers tried to make voice commands work and the results are still unreliable; they also experimented with touch screens, and those are going away because they are a poor and unsafe way to operate a moving vehicle. People want tactile feedback and the ability to operate controls without taking their eyes off the road. Why would anybody want an LLM in their camera, phone, washing machine, or thermostat?
I personally don’t want that. In my opinion, this current iteration of LLM is completely overblown and I am perplexed as to how quickly it is getting adopted into everything.
Of course, LLMs are useful. For small tasks, there is a significant productivity boost to be had, but they are not trustworthy. And that is the main issue.
If we see them as untrustworthy, then perhaps it is necessary to accelerate their exposure into consumer technology as a means to show that an LLM can cause harm - in whatever form that may come.
It’s very easy to overlook LLMs making things up, but they do (including GPT4) - and if that can’t be solved then it’s safe to assume this hype will be short-lived.
All tech that becomes ubiquitous is based on giving humans answers that do not change on the throw of a coin. Your bank account statement shows the same balance for the same end-of-day query no matter when you request it; your GPS gives you directions to the same place every time you put in the same destination address, not to a place that looks like a probable destination you might want. The best uses for AI are those that do not use generative BS, but ML that gives us answers, patterns, and action scripts. Our brains are wired for survival and constantly look for answers, patterns, and scripts that do not change at random. There is a reason movies and stories follow a hero's arc: we want order; life is chaotic enough. Not realising that will be a rude awakening for all investors in generative AI.
People that hide behind procedures and metrics and competition driven by that instead of looking outside the window and seeing if it's rainy or sunny.
MBA heads (which might be unfair here, because this is more like engineer-clockwork thinking) that use "products launched" as a metric even when teams are actually taking down and relaunching the same product over and over.
Google killing "unprofitable" products like Google Reader because it makes sense on paper, except that a) it's a minor line item, b) they are too analytical to measure the impact on goodwill and the brand's "soft power", and c) the existence of the product created demand for RSS producers; it was not simply another reader.
I've heard one link in the chain is that promotions are too tied to Shipping New Things, not improving current products or keeping things from becoming worse.
Accurate. Abject lack of a cohesive vision/strategy, and no leadership to articulate and drive it. It’s a motley collection of fiefdoms competing for perf points.
The relentless drive to make as much money as possible, as quickly as possible. Publicly traded companies tearing at the soft underbelly of society for a few pennies more each quarter.
I turned off AI Overview in Google by switching to Kagi a few months back :P
Seriously though. I'm waiting for the day that Google indexes all websites using embeddings into a massive vector database, so that Gemini can interact directly with all the indexed websites.
On a related note, I actually really like Kagi's quick answer feature for simple questions. The fact that they cite sources that you can then double check is the biggest win to me.
Thanks, I appreciate the detailed instructions. Just did all of my browsers to escape the AI. (Which in most cases looked like the first choice, just reworded some.)
OTOH, I'm thinking about clicking that Make Default button on DuckDuckGo and skipping the entire mess.
Check out Kagi, if you’re willing to pay for search. It’s really good, and I’ve actually found their new AI features useful, and they’re not ramming them down my throat.
When it gives an AI answer it has references to the source material so I can go read it. It also only gives an AI answer if I ask a question and use a question mark. If I don’t I need to click to request an AI response. They also have an AI thing to summarize a page or ask questions about a page. This is also not automatic, the button for it isn’t even visible unless hovering over the link. I find it to be a nice balance. There if I need/want it, but gone when I don’t.
AI Overview has been embarrassingly bad and wrong on many things. Often making clearly incomprehensible assertions, sometimes with numbers that disagree within the course of a single sentence.
I'm really curious: how will this affect sites that monetize off affiliate links and ads? It seems to me that AI summarization and overviews will deter, or at minimum reduce, page clicks and traffic to sites that produce information.
For most queries, I get ~100 links, up to a limit of about 250-300. No blue links, just black and white. People complain about the quality of Google search. The complaint I have is the limited number of results. I have custom programs that process SERPs and I use search not only for "search" but for also for "discovery"; I want a maximum number of results, several hundred at least. In the days of AltaVista I could get thousands.
Having used this commandline search method for decades, there does not seem to be any effect of "search history" on the results I get. All the search engines appear to rely on Javascript to "learn" about www users.
Whereas if I do a search in a popular browser, an absurdly large program that runs other peoples' Javascript indiscriminately, I can easily see how search history is being used to modify the results. It's comically bad. And that is what most people seem to complain about.
There is no single program. It has always been a shell script that runs multiple programs.
At present, a routine search looks like this:
  echo search terms | 1.sh > 1.htm
  links 1.htm
links is 1.4 MiB static binary.
The programs used in 1.sh are as follows. All are static binaries.
68K nc
744K busybox (sed)
44K yy025
44K yy030
44K yy032
40K yy044
44K yy045
40K yy073
168K yy084
yy programs are just dumb filters I write that process input from stdin and output formatted text, fast, with low memory usage.
A localhost-bound, forward proxy is also used. This can be up to 9 MiB static binary if I use the latest OpenSSL. Size can be significantly reduced by using alternative TLS libraries.
1.sh currently supports about 40 different search engines.
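For readers curious what such a pipeline might look like, here is a minimal sketch in the same spirit. The yy* filters above are the author's private tools, so this is purely illustrative: a generic sed-based link extractor stands in for them, and a canned HTML snippet stands in for a live nc fetch through the proxy.

```shell
#!/bin/sh
# Hypothetical stand-in for a 1.sh-style pipeline: read HTML on stdin,
# emit one result URL per line. The real script fetches a SERP with nc
# through a localhost forward proxy and post-processes it with yy*
# filters; here a sed one-liner plays that role.
extract_links() {
  # print the href value of each anchor tag (one anchor per line assumed)
  sed -n 's/.*href="\([^"]*\)".*/\1/p'
}

# Canned input in place of a live fetch, so the sketch runs offline.
printf '%s\n' \
  '<a href="https://example.com/one">first</a>' \
  '<a href="https://example.com/two">second</a>' \
  | extract_links
# prints the two URLs, one per line
```

The appeal of this style is that every stage is a small filter over stdin/stdout, so swapping in a different search engine only means changing the fetch stage, not the extractors.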
And when you do need google, you can always just add `!g` to your search query. There are a bunch of other useful ones [0], my favorite is probably `!w` for Wikipedia.
This is true, but there are fewer ads than on Bing proper and they are less intrusive. Also no cookie popups, constant rewards notifications, or flashing icons.
I’m not an AI maximalist by any means, but for something that summarizes things I already have context for, like “syntax to do X in language Y”, I’d much rather see a summary than an ad-riddled webpage.
These days I go back to hoarding links and downloading manuals. I much prefer to do the manual search work instead of enduring the falsehoods. And learning compounds.
If people had thought more critically about streaming when it was new, they might have foreseen that the ad-free experience was temporary and would inevitably be replaced with ads. That money was never going to be left on the table long term.
But the ads are worse now: the precedent with a DVR was that you could control the video stream and skip ads, but with streaming there is no such precedent, so the technology allows that freedom to be taken away and ads can be made unskippable.
And now we have an opportunity to realize before it's too late that this generation of AI will inevitably lead to as much advertising as we had before, and it will once again become qualitatively worse in that it will be seamlessly and opaquely integrated into AI output without even necessarily any disclosure. I will take a blockable banner ad over that any day.
If only Safari supported OpenSearch, or at least manually defined custom search engines. Apple's greed is getting more and more frustrating as a user.
That said, I personally like Google's AI overviews. What I find much more frustrating is the "reels-like" UI the mobile page devolves into when scrolling past the first 10 or so results. I really wish there was a way to get one without the other.
I was watching a two-year-old tutorial, and when the person in the video searched on Google it looked very minimalistic and simple, kinda like Brave Search does, but now it's just the opposite. Everything looks very busy.
Google is really bad now. I’ve been using Kagi for the past year and got redirected to Google once a few months ago. I was shocked anyone still used it. It was so unpleasant I recoiled when the page loaded and had to look around to see where I was and why everything was so awful.
Is there any way to do a Web search with DDG bangs? I checked their list but didn't see an option. I occasionally use !g as a fallback search but recently started seeing this AI slop taking over the top of the results page, making it even harder to find a decent result.
Considering how much Google search has shaped my life, I am somewhat surprised by the number of people in this thread who apparently have zero tolerance for continued innovation in this area and the rough edges it brings.
This phrasing is pretty biased and something I most often see from product managers who are pushing changes that aren't in most user's interests. Many product changes that Google makes these days are driven by promotions and/or increasing revenue, not to make products better for end users. You can refer to all changes as "innovation", but it's going to sound unreasonable to most.
Also, characterizing people who object to one specific change as having "zero tolerance for continued innovation" is ridiculous and inaccurate.
> something I most often see from product managers
I am not a product manager. A data point, to help adjust your bias.
> Many product changes that Google makes these days are driven by promotions and/or increasing revenue, not to make products better for end users
Can you be specific? In this particular instance, how does AI integration increase revenue? Again, I don't know what "promotions" refers to here. Feel free to be specific, if you do.
> You can refer to all changes as "innovation", but it's going to sound unreasonable to most.
I agree, that does sound fairly unreasonable.
> Also, characterizing people who object to one specific change as having "zero tolerance for continued innovation" is ridiculous and inaccurate.
Again, we are in agreement: that would be ridiculous. And so is thinking of ways to integrate AI into Google's main product as "one specific change".
It's kind of interesting when you look back a year or so to when ChatGPT came out, and the prevailing HN sentiment seemed to be that ChatGPT would be a Google Search killer, even back when it was even more prone to hallucinations and stating wild nonsense as fact.
When ChatGPT first came out, you were getting everyone's first impressions, which LLMs are notoriously good at. And if you ask me, mostly useless once you get past those first impressions.
I'm not convinced a lot of it is for the better... I literally have to scroll down at least a screen and a half for the actual search results on my phone more often than not. And AI result makes that even worse.
I have a low tolerance for tools that don't work well.
I'm not willing to put up with rough edges for "continued innovation" in a mainline product, and I don't see why I should have to.
Release the rough product as a beta for those who like the bleeding edge. Sand those edges down before mainstreaming it. The eagerness of software companies to release junk in the mainstream as a kind of "free" beta testing or market research is a large part of why software quality is not so good these days.
You’re projecting your own life experience into other people instead of empathising with different experiences and trying to understand where they come from.
People don’t “have zero tolerance for continued innovation”¹, they’re tired of companies shoving shit down our throats and hailing deficient technology as the second coming of Christ. Sundar Pichai called this “more profound than fire and electricity”, for crying out loud. If they presented the technology with humility and honesty instead of throwing sand in our eyes and trying to hide the rough edges with a “trust me, bro”, you’d see them being given more slack.
¹ That’s the type of leading statement that prevents one from understanding another’s point of view. It makes a pre-judgement of something as being unambiguously good.
The worst is when actual innovation gets thrown out (by buying companies and enshittifying them), or when something works, but they go and break it (because growth).
As a customer, it feels like companies don’t care about your needs and wants. Like Slack and Dropbox enabling content scanning without consent.
Not sure what the fuss is about. I've actually found the overviews to be quite helpful for a bunch of my searches since it was turned on. My impression so far has been nothing but positive.
Is it a principles thing or are people really hating on having to scroll past a paragraph or two? If it's the former, well good luck with that.
In general, Google is really undermining their whole business model with the AI Overview paragraph, as it will take away from their AdWords link revenue just as much as from organic traffic to other sites.
Once Google realizes this, they'll inevitably have to manipulate the AI paragraph content for commercial reasons. That'll be fun! Then everyone will look back on the current "clean" AI output with fondness.
I agree; imo it needs to be slightly smarter about when it turns itself off, as I've had it pop up for queries where I'd rather click a link, and on mobile it takes up most of the screen.
But for many other queries I do quickly get the answer I was looking for.
> In general, Google is really undermining their whole business model with the AI Overview paragraph, as it will take away from their AdWords link revenue just as much as from organic traffic to other sites.
The future is probably selectively biased ai, as scary as that sounds.
FWIW the Firefox instructions don't seem to apply to Ubuntu (at least under GNOME, can't speak for any other DE). There's no "three dots," just right click on the address bar, then choose "Add Google Web" from the context menu.
Alternatively, you can avoid this nonsense entirely by using DuckDuckGo, or StartPage, or Kagi, or any number of other totally competent search engines.
https://imgur.com/a/g4qOU9a