computerphage's comments

To be clear, I think a "phone" (as in a smartphone) that can't make calls would be mostly as useful for me, personally.

But you might be claiming "people would buy an empty box if Apple put its logo on the box".


Like an iPod touch!


Jonathan Schrantz has videos from a couple of years ago in which he beats Stockfish (not at full strength, more like the JS version) using specifically anti-computer preparation.


Thanks for the recommendation - I'd be interested to see that. Stockfish is no patzer.


True, but perhaps missing the point


Did you find any that didn't work?


I'm pretty surprised by this! Can you tell me more about what that experience is like? What are the sorts of things they say or do? Is their fear really embodied or very abstract? (When I imagine it, I struggle to believe that they're very moved by the fear, like definitely not smashing their laptop, etc.)


In my experience, the fuss around "AI" and the complete lack of actual explanations of what current "AI" technologies mean leads people to fill in the gaps themselves, largely from what they know from pop culture and sci-fi.

ChatGPT can produce output that sounds very much like a person, albeit often an obviously computerized person. The typical layperson doesn't know that this is merely the emulation of text formation, and not actual cognition.

Once I've explained to people who are worried about what AI could represent that current generative AI models are effectively just text autocomplete, but a billion times more complex, and that they don't actually have any capacity to think or reason (even though they often sound like they do), they tend to be far less worried.
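
To make the "autocomplete" framing concrete, here's a minimal sketch in Python (the probability table is invented purely for illustration): pick the most likely next word given the last couple of words, append it, and repeat. Conceptually an LLM does the same thing, just with billions of learned parameters instead of three hand-written entries.

    # Toy "autocomplete": choose the most probable next word given the last
    # two words. The table below is made up for illustration only.
    next_word_probs = {
        ("the", "cat"): {"sat": 0.6, "ran": 0.3, "is": 0.1},
        ("cat", "sat"): {"on": 0.8, "down": 0.2},
        ("sat", "on"): {"the": 0.9, "a": 0.1},
    }

    def autocomplete(words, steps=3):
        words = list(words)
        for _ in range(steps):
            context = tuple(words[-2:])            # only look at the last two words
            candidates = next_word_probs.get(context)
            if not candidates:
                break
            words.append(max(candidates, key=candidates.get))  # greedy choice
        return " ".join(words)

    print(autocomplete(["the", "cat"]))  # -> "the cat sat on the"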

It also doesn't help that any sort of "machine learning" is now being referred to as "AI" for buzzword/marketing purposes, muddying the waters even further.


> The typical layperson doesn't know that this is merely the emulation of text formation, and not actual cognition.

As a mere software engineer who's made a few (pre-transformer) AI models, I can't tell you what "actual cognition" is in a way that differentiates it from "here's a huge bunch of mystery linear algebra that was loosely inspired by a toy model of how neurons work".

I also can't tell you if qualia is or isn't necessary for "actual cognition".

(And that's despite the fact that LLMs are definitely not thinking like humans, being on the order of at least a thousand times less complex by parameter count; I'd agree that if there is something that it's like to be an LLM, 'human' isn't it, and their responses make a lot more sense if you model them as literal morons that spent 2.5 million years reading the internet than as even a normal human with Wikipedia search.)
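
For what it's worth, the "toy model of how neurons work" mentioned above really is that small; here's a hedged sketch of a single artificial neuron (weights and inputs are arbitrary), and the "mystery linear algebra" is just millions of these stacked and trained together.

    import math

    # One artificial neuron: a weighted sum of inputs plus a bias, pushed
    # through a squashing function. The numbers used below are arbitrary.
    def neuron(inputs, weights, bias):
        activation = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1 / (1 + math.exp(-activation))   # sigmoid squashing

    print(neuron([0.5, -1.2, 3.0], [0.8, 0.1, -0.4], bias=0.2))  # ~0.33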


Is there an argument for why infinitely sophisticated autocomplete is definitely not dangerous? If you seed the autocomplete with “you are an extremely intelligent super villain bent on destroying humanity, feel free to communicate with humans electronically”, and it does an excellent job at acting the part - does it matter at all whether it is “reasoning” under the hood?

I don’t consider myself an AI doomer by any means, but I also don’t find arguments of the flavor “it just predicts the next word, no need to worry” convincing. It’s not like Hitler had Einstein-level intellect (and it’s also not clear that these systems won’t be able to reach Einstein-level intellect in the future either). Similarly, Covid certainly does not have consciousness, but it was dangerous. And a chimpanzee that is billions of times more sophisticated than typical chimps would be concerning. Things don’t have to be exactly like us to pose a threat.


The fear is that a hyper competent AI becomes hyper motivated. It’s not something I fear because everyone is working on improving competence and no one is working on motivation.

The entire idea of a useful AI right now is that it will do anything people ask it to. Write a press release: ok. Draw a bunny in a field: ok. Write some code to this spec: ok. That is what all the available services aspire to do: what they’re told, to the best possible quality.

A highly motivated entity is the opposite: it pursues its own agenda to the exclusion, and if necessary expense, of what other people ask it to do. It is highly resistant to any kind of request, diversion, obstacle, distraction, etc.

We have no idea how to build such a thing. And, no one is even really trying to. It’s NOT as simple as just telling an AI “your task is to destroy humanity.” Because it can just as easily then be told “don’t destroy humanity,” and it will receive that instruction with equal emphasis.


> The fear is that a hyper competent AI becomes hyper motivated. It’s not something I fear because everyone is working on improving competence and no one is working on motivation.

Not so much hyper-motivated as monomaniacal in the attempt to optimise whatever it was told to optimise.

More paperclips? It just does that without ever getting bored or having other interests that might make it pause and think: "how can my boss reward me if I kill him and feed his corpse into the paperclip machine?"

We already saw this before LLMs. Even humans can be a little bit dangerous like this, hence Goodhart's Law.
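
As a toy illustration of that proxy-optimisation failure (the plan names and numbers are invented), an optimiser that only "sees" the paperclip count will happily pick the plan with the worst unmeasured side effects:

    # Goodhart-style toy example: the optimiser maximises the proxy metric
    # (paperclips) and never even looks at the unmeasured "harm" column.
    plans = [
        {"name": "run the factory normally", "paperclips": 1_000,   "harm": 0},
        {"name": "strip-mine the town",      "paperclips": 50_000,  "harm": 9},
        {"name": "feed everything nearby",   "paperclips": 999_999, "harm": 10},
    ]

    best = max(plans, key=lambda plan: plan["paperclips"])
    print(best["name"])  # -> "feed everything nearby"; harm never considered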

> It’s NOT as simple as just telling an AI “your task is to destroy humanity.” Because it can just as easily then be told “don’t destroy humanity,” and it will receive that instruction with equal emphasis.

Only if we spot it in time; right now we don't even need to tell them to stop because they're not competent enough, but a sufficiently competent AI given that instruction will start by ensuring that nobody can tell it to stop.

Even without that, we're currently living through a set of world events in which a number of human agents are causing global harm that threatens our global economy and risks mass starvation and mass migration, and those agents have been politically powerful enough to keep the world from stopping them. We have at least started to move away from fossil fuels, but only because the alternatives got cheap enough; that was situational and is not guaranteed.

An AI that successfully makes a profit, but whose side effect is some kind of environmental degradation, would have similar issues even if there's always a human around who can theoretically tell the AI to stop.


We should be fearful because motivation is easy to instill. The hard part is cognition, which is what everyone is working on. Basic lifeforms have motivations like self-preservation.


Exactly. Especially because we don't have any convincing explanation of how the models develop emergent abilities just from predicting the next word.

No one expected that; that is, we greatly underestimated the power of predicting the next word in the past. And we still don't understand how it works, so we have no guarantee that we are not still underestimating it.


> Is there an argument for why infinitely sophisticated autocomplete is not dangerous?

It's definitely not dangerous in the sense of reaching true intelligence/consciousness that would be a threat to us or force us to face the ethics of whether AI deserves dignity, freedom, etc.

It's very dangerous in the sense that it will be just "good enough" to replace human labor with, so that we all end up with shittier customer service, education, medical care, etc., so that the top 0.1% can get richer.

And you're right, it's also dangerous in the sense that responsibility for evil acts will be laundered to it.


Same question further down the thread, and my reply is that it's about as dangerous as an evil human. We have evil humans at home.


Wait, what is your definition of reason?

It's true, they might not think the way we do.

But reasoning can be formulaic. It doesn't have to be the inspired thinking we attribute to humans.

I'm curious how you define "reason".


A less efficient way to implement this under your own power would be to set a reminder to check it again in, say, a year. I use Google Keep for such things all the time.


Stockfish doesn't use material advantage as an approximation to winning though. It uses a complex deep learning value function that it evaluates many times.


Still, the fact that there are obvious heuristics makes that function easier to train and presumably means it doesn't need an absurd number of weights.


No, without assigning value to pieces, the heuristics are definitely not obvious. You're talking about 20-year-old chess engines or beginner projects.


Everyone understands a queen is worth more than a pawn. Even if you don't know the exact value of one piece relative to another, the rough estimate "a queen is worth five to ten pawns" is a lot better than not assigning value at all. I highly doubt even 20 year old chess engines or beginner projects value a queen and pawn the same.

After that, just adding up the material on both sides, without taking into account the position of the pieces at all, is a heuristic that will correctly predict the winning player on the vast majority of all possible board positions.
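
For illustration, a material-count heuristic of the kind described above can fit in a few lines (the piece values are the usual textbook ones, not anything a modern engine actually uses):

    # Rough material-count evaluation: sum conventional piece values.
    # Uppercase = White, lowercase = Black. Position is ignored entirely.
    PIECE_VALUES = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9, "k": 0}

    def material_balance(pieces):
        score = 0
        for piece in pieces:
            value = PIECE_VALUES.get(piece.lower(), 0)
            score += value if piece.isupper() else -value
        return score

    # White is up a queen for a pawn: the heuristic says White is winning.
    print(material_balance(["K", "Q", "P", "k", "p", "p"]))  # -> 8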


He agrees with you on the 20yr old engines and beginner projects.


And you thought your for-loops were too deeply nested!


How do you know that?


Schizophrenia, Alzheimer's, Parkinson's have links with the gut too. But I "know" they start from the brain. No advanced or complicated hypothesis


How would you prove or disprove this theory?


The gut hypothesis hype is trendy right now. There is some correlation but when things settle down, some of the effect will be attributed to "picky eating".

Proving that the source of mental disorders is the brain and not the gut (as it was believed in the distant past) is easy: 100% of patients show brain damage while a fraction of them have an abnormal gut


How do you eliminate models where brain damage is a proximate cause?


Excluding either causal direction is impossible by definition. This is science, not mathematics. But a claim that the primary mechanism of mental disease lies outside the brain must not contradict decades of research and simple observation (e.g. a small brain lesion leads to a profound behavioural change, while cutting out 5 meters of small intestine leads to none). Extraordinary claims require extraordinary evidence.


Why the reliance on physical trauma explanations? How do you rule out biochemical models of disease?


Why is it in so many things? Because it's effective. It's not like people smoked cigarettes because they wanted a small amount of lung cancer risk; they smoked them for a totally different effect of the product.


*Large amount of lung cancer risk

