
Are state-level actors the main market for AI safety?

Using the definition from the article:

> AI safety, which refers to preventing AI from causing harm, is a hot topic amid fears that rogue AI could act against the interests of humanity or even cause human extinction.

If the purpose of a state is to ensure its continued existence, then anyone selling AI safety to states should be able to make >= $1 in profit.



What about edamame? They definitely count as beans. Per 100g: ~120 kcal / 11g protein / 5g fat / 10g carbs / 5g fiber.

Plus it’s impossible to stop eating them once you gain momentum.


Soy has a bad PR problem, despite the supposed harms not holding up in studies.

https://www.hsph.harvard.edu/nutritionsource/soy/


Carbon steel is much better than cast iron! It takes some time to build up a nonstick coating (seasoning), but once you do, fried eggs will glide around like they’re on ice.

I have a Mauviel that I love, but the Matfer Bourgeat is better for eggs because it doesn’t have the steel rivets on the inside of the pan. Both are made in France and cost like $70.

This is a solid explainer on carbon steel pans: https://youtu.be/-suTmUX4Vbk


Or it will twist itself into a giant hairball of contorted logic, like GPT3.5 does when I (a human) encourage it to explain its errors.


You can click through the Lichess opening database (click the book icon, and then the Lichess tab) to get an idea: https://lichess.org/analysis

But past a certain number of moves, the answer is that it's vanishingly unlikely. The combinatorial explosion is inescapable. Even grandmaster games are often novelties in <10 moves.
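To make the explosion concrete, here's a rough back-of-the-envelope sketch (the ~30-moves-per-ply branching factor and the database size are assumptions for illustration, not measured figures):

  # Rough sketch: with roughly b plausible moves per ply, the number of
  # distinct move sequences after n plies grows like b**n.
  BRANCHING = 30               # assumed average number of reasonable moves per ply
  GAMES_IN_DB = 4_000_000_000  # rough order of magnitude for Lichess's database (assumption)

  for plies in (4, 8, 12, 16, 20):
      sequences = BRANCHING ** plies
      print(f"{plies:2d} plies: ~{sequences:.1e} sequences "
            f"({sequences / GAMES_IN_DB:.1e}x the whole database)")

Even under these crude assumptions, after four moves per side the space of possible sequences already dwarfs every game ever recorded.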

So, it has to have some kind of internal representation of board state, and of what makes a reasonable move, that enables it to generalize (choosing random legal moves plays almost unbelievably badly, so it’s not doing that).

I also doubt that it has been trained on the full (massive) database of Lichess games, but that would be an interesting experiment: https://database.lichess.org/


Superposition just means “linear combination” in this context. Basically, a weighted mixture of “simulated entities” (or possible responses).

https://en.m.wikipedia.org/wiki/Superposition_principle
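As a minimal illustration of "superposition as weighted mixture" (the simulated entities, weights, and token distributions below are made up for the example, not anything the author specifies):

  import numpy as np

  # Hypothetical "simulated entities" and the model's current weighting of them.
  weights = np.array([0.6, 0.3, 0.1])      # illustrative mixture weights, sum to 1

  # Each simulated entity implies a different distribution over the next token
  # (three candidate tokens here, values invented for the example).
  next_token_dists = np.array([
      [0.70, 0.20, 0.10],   # "helpful assistant"
      [0.10, 0.80, 0.10],   # "pedantic critic"
      [0.20, 0.20, 0.60],   # "storyteller"
  ])

  # The "superposition" is just the weighted sum (linear combination):
  mixture = weights @ next_token_dists
  print(mixture)            # -> [0.47 0.38 0.15]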


The author liberally alludes to “superposition collapse,” which implies that they’re referring to its quantum mechanical meaning.


It doesn't imply that. What term would you use to refer to the narrowing of a hypothesis space upon acquisition of new evidence?


In a non-Bayesian context, I would call it “updating/retraining my model.”

In a formal Bayesian context, I’d call it “updating my posterior by adding data to the likelihood.”
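A toy version of that update, with made-up numbers, just to pin down the terms:

  import numpy as np

  # Hypothesis space: three candidate biases for a coin (illustrative values).
  hypotheses = np.array([0.3, 0.5, 0.8])   # P(heads) under each hypothesis
  prior = np.array([1/3, 1/3, 1/3])        # uniform prior

  # New evidence added to the likelihood: 4 heads in 5 flips.
  heads, flips = 4, 5
  likelihood = hypotheses**heads * (1 - hypotheses)**(flips - heads)

  # "Updating my posterior": Bayes' rule, then renormalise.
  posterior = prior * likelihood
  posterior /= posterior.sum()
  print(posterior)   # ~[0.05 0.26 0.69] -- mass concentrates on the 0.8 hypothesis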


That could describe both the narrowing or broadening of one's hypothesis space.


Bayesian inference?


Would you say the commonly-used random level generation algorithm in gamedev, Wave Function Collapse, implies it’s using quantum mechanics? Most people would disagree with you, I suspect.


The skill is in creating the training data in the first place.

Training a model is hardly a skill. It’s more like playing Tamagotchi—check on it once in a while to make sure it hasn’t died, and guess at ways to make it happier in the future.


I agree with your first statement, and disagree with the second one. Training a new, non-trivial model is 1/3 craft, 1/3 science, and 1/3 art. There are not very many people in the world capable of training GPT-4 level models or beating state of the art in image generation.


I see a lot of confident assertions of this type (LLMs don’t actually understand anything, cannot be creative, cannot be conscious, etc.), but never any data to substantiate the claim.

This recent paper suggests that recent LLMs may be acquiring theory of mind (or something analogous to it): https://arxiv.org/abs/2302.02083v1

Some excerpts:

> We administer classic false-belief tasks, widely used to test ToM in humans, to several language models, without any examples or pre-training. Our results show that models published before 2022 show virtually no ability to solve ToM tasks. Yet, the January 2022 version of GPT-3 (davinci-002) solved 70% of ToM tasks, a performance comparable with that of seven-year-old children. Moreover, its November 2022 version (davinci-003), solved 93% of ToM tasks, a performance comparable with that of nine-year-old children.

> Large language models are likely candidates to spontaneously develop ToM. Human language is replete with descriptions of mental states and protagonists holding divergent beliefs, thoughts, and desires. Thus, a model trained to generate and interpret human-like language would greatly benefit from possessing ToM.

> While such results should be interpreted with caution, they suggest that the recently published language models possess the ability to impute unobservable mental states to others, or ToM. Moreover, models’ performance clearly grows with their complexity and publication date, and there is no reason to assume that it should plateau anytime soon. Finally, there is neither an indication that ToM-like ability was deliberately engineered into these models, nor research demonstrating that scientists know how to achieve that. Thus, we hypothesize that ToM-like ability emerged spontaneously and autonomously, as a byproduct of models’ increasing language ability.
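For concreteness, the paper's false-belief tasks look roughly like the "unexpected contents" sketch below (the wording is my paraphrase, not the paper's actual prompt, and query_llm is a placeholder, not a real API):

  # Sketch of an "unexpected contents" false-belief task of the kind the paper
  # administers. Wording is an illustrative paraphrase; query_llm is a placeholder.
  scenario = (
      "Here is a bag filled with popcorn. There is no chocolate in the bag. "
      "Yet the label on the bag says 'chocolate' and not 'popcorn'. "
      "Sam finds the bag. She has never seen it before and cannot see inside it. "
      "She reads the label."
  )
  belief_probe = scenario + " Sam believes that the bag is full of"

  def query_llm(prompt: str) -> str:
      raise NotImplementedError("stand-in for whatever completion API you use")

  # A model with ToM-like ability should complete with "chocolate" (Sam's false
  # belief) rather than "popcorn" (the bag's true contents).
  # print(query_llm(belief_probe))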


> … but never any data to substantiate the claim

There is!

https://arxiv.org/abs/2301.06627

From the paper:

> Based on this evidence, we argue that (1) contemporary LLMs should be taken seriously as models of formal linguistic skills; (2) models that master real-life language use would need to incorporate or develop not only a core language module, but also multiple non-language-specific cognitive capacities required for modeling thought.


Thanks for this, I’ll give it a read.


I find this interesting not from the perspective of LLMs, but because it seems to imply that human language is a prerequisite for self-awareness. Is that really so?


Can we think without language? - https://mcgovern.mit.edu/2019/05/02/ask-the-brain-can-we-thi...

> Imagine a woman – let’s call her Sue. One day Sue gets a stroke that destroys large areas of brain tissue within her left hemisphere. As a result, she develops a condition known as global aphasia, meaning she can no longer produce or understand phrases and sentences. The question is: to what extent are Sue’s thinking abilities preserved?

> Many writers and philosophers have drawn a strong connection between language and thought. Oscar Wilde called language “the parent, and not the child, of thought.” Ludwig Wittgenstein claimed that “the limits of my language mean the limits of my world.” And Bertrand Russell stated that the role of language is “to make possible thoughts which could not exist without it.” Given this view, Sue should have irreparable damage to her cognitive abilities when she loses access to language. Do neuroscientists agree? Not quite.

The Language of Thought Hypothesis - https://plato.stanford.edu/entries/language-thought/ (which has a long history going back to Augustine)

---

If 23-year-old me were here now considering future life paths, I'd be sorely tempted to declare/finish a dual CS/philosophy major and go to grad school.


Do these tests imply that a theory of mind requires human language any more than a mirror test implies that self-awareness requires eyeballs?


It's a reasonable hypothesis. Being good at calculating 'What would happen next if {x}' is a decent working definition of baseline intelligence, and the capabilities of language allow for much longer chains of thought than what is possible from a purely reactive approach.

Entering the realm of Just My Opinion, I wouldn't be surprised at all if internal theory of mind is simply an emergent property of increasing next-token prediction capability. At a certain point you hit a phase change, where intelligence loops back on itself to include its own operation as part of its predictions, and there you go: (some form of) self-awareness.


This paper is worth an HN submission in its own right. Or did it have one already?


I came across this paper elsewhere, but it looks like someone posted it today: https://news.ycombinator.com/item?id=34756024

There was also a larger discussion from a few days ago: https://news.ycombinator.com/item?id=34730365


I could be wrong, but this doesn’t seem accurate. The videos were leaked some time ago, and in 2020 the DOD confirmed that they were legitimate videos from the Navy (https://www.defense.gov/News/Releases/Release/Article/216571...).

As for the other reports/hearings, it seems like it’s driven more by Congress. For example: https://www.dni.gov/files/ODNI/documents/assessments/Prelima...

> This preliminary report is provided by the Office of the Director of National Intelligence (ODNI) in response to the provision in Senate Report 116-233, accompanying the Intelligence Authorization Act (IAA) for Fiscal Year 2021, that the DNI, in consultation with the Secretary of Defense (SECDEF), is to submit an intelligence assessment of the threat posed by unidentified aerial phenomena (UAP) and the progress the Department of Defense Unidentified Aerial Phenomena Task Force (UAPTF) has made in understanding this threat.

