gmuslera's comments | Hacker News

I thought April Fools' Day was the 1st. Or at least, this looks a lot like the typical announcements of that day (e.g. the one by ElevenLabs this very year).

I don't know what to expect from this. I'm not sure a conversation, as we understand it, would even make sense with them.


That was my first reaction as well, and I actually double-checked the posting date. But apparently it's a real thing:

https://blog.google/technology/ai/dolphingemma/

As to conversations and whether cetacean communication has a syntax, I think that's a super interesting question. I know next to nothing about cetacean communication but a quick PubMed search turned up this reference which looks like it might be a useful jumping off point for the literature in this area:

King SL, Guarino E, Donegan K, McMullen C, Jaakkola K. Evidence that bottlenose dolphins can communicate with vocal signals to solve a cooperative task. R Soc Open Sci. 2021 Mar 17;8(3):202073. doi: 10.1098/rsos.202073. PMID: 33959360; PMCID: PMC8074934.


One word: birds.

Anyway, detecting and avoiding obstacles should be on the menu. Maybe not as complex as at street level, with people and cars doing unexpected things, but with some added complexity to take into account, like weather, inertia, and hazards near landing sites.


I'm not sure you can avoid birds the way you'd avoid a skateboard rolling onto the street. If anything, I expect dynamic avoidance areas or timed "no fly" zones depending on bird mass. Maneuvering to avoid birds seems like a recipe for disaster.

If a drone flies at a speed that gives birds time to notice and react/avoid, does that remove the danger? I wonder if there is wide variation among species.
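
As a back-of-envelope illustration of that speed question (all numbers below are made-up assumptions, not measured values), the distance at which a bird must notice the drone scales linearly with closing speed and reaction time:

    # Back-of-envelope sketch: minimum distance at which a bird must
    # notice an approaching drone, given an assumed reaction time.
    # distance = closing_speed * reaction_time. Illustrative numbers only.

    drone_speeds_ms = [5, 10, 20]       # assumed drone speeds, m/s
    reaction_times_s = [0.5, 1.0, 2.0]  # assumed bird notice-and-react times, s

    for v in drone_speeds_ms:
        for t in reaction_times_s:
            d = v * t  # minimum detection distance, metres
            print(f"{v:2d} m/s drone, {t:.1f} s reaction -> notice at {d:5.1f} m")

Even under these toy numbers, a 20 m/s drone and a slow-reacting bird means noticing it 40 m out, which hints at why variation among species would matter.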

There is something that comes before democracy: the ability to make good (for some definition of good) decisions. In theory, everyone affected by a decision should have a voice or vote on it, but not everyone has independent, faithful, and complete information about it, applies real critical thinking, or tries to take an unbiased view of the topic. Those are more useful criteria than age, sex, skin colour, economic or social status, place of living, or family.

In the age of AI and worldwide information networks, some of this could be provided, but it still won't be enough. It is still easy to bias the source information, and in practice that won't be so different from the manipulation of social sectors.


Doesn’t matter what you want anymore. You are not the client, but the product. They are the ones getting faster horses.

Until I finally get fed up and leave. There is value in my sharing pictures of my kids with distant friends and seeing pictures of their kids - but Facebook has got so bad at that that I finally gave up logging in, and now I'm not a product that exists for them. And in turn, because I'm not there, Facebook is less valuable for my friends, and so they are more likely to leave in the future.

The only question is whether people like me are outliers who can be ignored - there will always be a few people you can't get. However, I could be a sign of the end.


They don't need you as the product - they found better products their customers would rather buy. My dad is one of the better products. I've seen what Facebook turned him into.

> They are the ones getting faster horses.

To a point, until stage 3 enshittification hits, and the business claws back all the value.


What we are labeling as AI today is different from what it was imagined to be in the 90s, or when Asimov wrote most of his stories about robots and other forms of AI.

That said, a variant of Susan Calvin's role could prove useful today.


> What we are labeling as AI today is different from what it was imagined to be in the 90s, or when Asimov wrote most of his stories about robots and other forms of AI.

Multivac in "The Last Question"?


AI is far closer to Asimov's vision of AI than anyone else's. The "Positronic Brain" is very close to what we ended up with.

The Three Laws of Robotics seemed ridiculous until 2021, when it became clear that you could just give AI general firm guidelines and let them work out the details (and ways to evade the rules) from there.


Not sure that I agree with that. People have been imagining human-like AI since before computers were even a thing. The Star Trek computer from TNG is basically an LLM, really.

AI _researchers_ had a different idea of what AI would be like, as they were working on symbolic AI, but in the popular imagination, "AI" was a computer that acted and thought like a human.


> The Star Trek computer from TNG is basically an LLM, really.

The Star Trek computer is not like LLMs: a) it provides reliable answers, b) it is capable of reasoning, c) it is capable of actually interacting with its environment in a rational manner, d) it is infallible unless someone messes with it. Each one of these points is far in the future of LLMs.


Their point is that it seems to function like an LLM even if it's more advanced. The points raised in this comment don't refute that, per the assertion that each of them is in the future of LLMs.

> Their point is that it seems to function like an LLM even if it's more advanced.

So did ELIZA. So did SmarterChild. Chatbots are not exactly a new technology. LLMs are at best a new cog in that same old functionality—but nothing has fundamentally made them more reliable or useful. The last 90% of any chatbot will involve heavy usage of heuristics with both approaches. The main difference is some of the heuristics are (hopefully) moved into training.


Stating that LLMs are not more reliable or useful than ELIZA or SmarterChild is so incredibly off-base I have to wonder if you've ever actually used an LLM. Please run the same query past ELIZA and Gemini 2.5 (https://aistudio.google.com/prompts/new_chat) and report back.

> Please run the same query past ELIZA and Gemini 2.5 (https://aistudio.google.com/prompts/new_chat) and report back.

I don't see much difference—you still have to take any output skeptically. I can't claim to have ever used Gemini, but last I checked it still couldn't cite sources, which would at least assist with validation.

I'm just saying this didn't introduce any fundamentally new capabilities—we've always been able to GIGO-excuse all chatbots. The "soft" applications of LLMs have always been approximated by heuristics (e.g. generation of content of unknown use or quality). Even the summarization tech LLMs offer doesn't seem to substantially improve over its NLP-heuristic-driven predecessors.

But yeah, if you really want to generate content of unknown quality, this is a massive leap. I just don't see this as very interesting.


> I can't claim to have ever used Gemini, but last I checked it still couldn't cite sources, which would at least assist with validation.

Yes, it can cite sources, just like any other major LLM service out there. Gemini, Claude, Deepseek, and ChatGPT are the ones I personally validated this with, but I bet other major LLM services can do so as well.

Just tested this using Gemini with the prompt “Is fluoride good for teeth? Cite sources for any of the claims”, and it listed every claim as a bullet point accompanied by the corresponding source. The sources were links to specific pages addressing the claims from the CDC, Cleveland Clinic, Johns Hopkins, and NIDCR. I clicked on each of the links to verify that they corroborated what the Gemini response was saying, and they did.

In fact, it would more often than not include sources even without me explicitly asking for sources.
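
For anyone who wants to reproduce that test outside the chat UI, here is a minimal sketch using Google's google-generativeai Python SDK; the model name is an assumption (swap in whatever is current), and whether the reply embeds verifiable links depends on the model and any grounding tools enabled:

    # Minimal sketch: ask Gemini for claims with cited sources.
    # Requires: pip install google-generativeai
    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")  # placeholder, use your own key
    model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model name

    response = model.generate_content(
        "Is fluoride good for teeth? Cite sources for any of the claims."
    )
    print(response.text)

The same caveat as above applies: the links still have to be opened and checked by hand; the prompt only asks the model to attach them.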


> Yes, it can cite sources, just like any other major LLM service out there.

Let's see an example:

Ask it if America was ever a democracy and tell us what it uses as sources to evaluate its ability to function. Language really shows its true colors when you commit to floating signifiers.

I asked Gemini "was America ever a democracy?" And it confidently responded "While the ideal of democracy has always been a guiding principle in the United States", which is a blatant lie, and provided no sources. The next prompt, "was America ever a democracy? Please cite sources", gives a mealy-mouthed reply hedging on the definition of democracy... which it refuses to cite. If I ask it "will America ever be democratic", it just vomits up excuses about democracy being a principle and not measurable. With no sources. Etc. This is not a useful tool for things humans already do well. This is a PR megaphone with little utility outside of shitty copy editing.


They don't make up sources, or cite sources that don't actually contain the claim, anymore?

I get that sometimes, but you can click the link and very easily determine whether the source exists or not.

Yet when you ask it to dim the lights, it dims them either way too little or way too much. Poor Geordi.

For what it's worth, I was referring to the episode when he set up a romantic dinner for the scientist lady. Computer couldn't get the lighting right.

> The Star Trek computer from TNG is basically an LLM, really.

Watched all seasons recently for the first time. While some things are "just" vector search with a voice interface, there are also goodies like "Computer, extrapolate from theoretical database!", or "Create dance partner, female!" :D

For anyone curious: https://www.youtube.com/watch?v=6CDhEwhOm44


> The Star Trek computer from TNG is basically an LLM, really.

No. The Star Trek computer is a fictional character, really. It's not a technology any more than Jean-Luc Picard is. It does whatever the writers needed it to do to further the plot.

It reminds me: J. Michael Straczynski (of Babylon 5 fame) was once asked "How fast do Starfuries travel?" and he replied "At the speed of plot."


The citizens of other countries should be concerned too. Even short-term plans could end badly due to some disruptive measure Trump takes because whatever, as he has done basically every week since he took charge.

A single word has a lot of meanings, a lot of different styles, a lot of different usages, some of which may not be so explicitly thought about.

The act itself of taking notes, processing whatever info you are receiving into your own words, already has value. Trying to put them into context, or writing them with a particular future usage in mind, has value. Writing them with the thought of sharing them with someone else has another, different value too. All of that even if they disappear tomorrow.

And if they are not supposed to disappear, they have even more value. You can put things there in order to forget them, or at least not actively try to keep them in your memory. You can relate them to other information, or add more not much later. And you can check them much later and see how your initial ideas, your state of mind, or a particular stage of something differ from what you have now, or how it all developed over time.

For something this generic, it is not good to fix its meaning and usage into some opinionated bucket of what it "should be". And building for, or facilitating, some particular usage should not work against the things that make it great.


The use of notes is quite open-ended (generic, if you want). I'd argue it's not about what they "should be", but that they are almost never the end goal, whatever use you make of them. I agree with you that they are useful in themselves just because you actively interpret what you receive.

I never imagined that Trump's tariffs would end up creating a new drinking game.

What else is the still-increasing amount of coal/oil/gas being extracted used for? Or is it that energy requirements grew so much that that 40% didn't manage to go down in absolute numbers?

In general, the Incerto series by Nassim Taleb (The Black Swan, Antifragile, and Skin in the Game) was worth it. The Selfish Gene, Thinking in Systems: A Primer, I Am a Strange Loop, and Sapiens are some books I read recently that left a lasting impression.

Incerto is great. I'm constantly amazed, though, by how people identify completely opposite black swans to justify completely opposite things when I see it quoted in public.

The Bed of Procrustes is great too, and easy to reread as a summary of the other books once you've already read them a few times individually.

Taleb also wrote a foreword to the recent edition of Cipolla's The Basic Laws of Human Stupidity, which predates Taleb's books but is "Taleb-adjacent".


i just picked up an edition of “The Evolution of Cooperation” that has a foreword by richard dawkins. was cool to see his take, from writing the selfish gene, on axelrod's contributions to the study of cooperation. by any chance, did your edition of his book mention those cooperation studies? dawkins said he updated a later edition in this foreword i just read

It was an old edition of the book. But it had enough of the Prisoner's Dilemma in it to serve as an introduction.
