Are state-level actors the main market for AI security?
Using the definition from the article:
> AI safety, which refers to preventing AI from causing harm, is a hot topic amid fears that rogue AI could act against the interests of humanity or even cause human extinction.
If the purpose of a state is to ensure its continued existence, then states should be willing to pay for protection against that kind of harm, so somebody ought to be able to make >= $1 in profit selling it.
Carbon steel is much better than cast iron! It takes some time to build up a nonstick coating (seasoning), but once you do fried eggs will glide around like they’re on ice.
I have a Mauviel that I love, but the Matfer Bourgeat is better for eggs because it doesn’t have the steel rivets on the inside of the pan. Both are made in France and cost like $70.
You can click through the Lichess opening database (click the book icon, and then the Lichess tab) to get an idea: https://lichess.org/analysis
But the answer is insanely unlikely, past a certain number of moves. The combinatorial explosion is inescapable. Even grandmaster games are often novelties in <10 moves.
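To put rough numbers on that, here's a back-of-the-envelope sketch using the python-chess library (the 20-ply horizon and the single random line of play are just for illustration):

```python
# Back-of-the-envelope: multiply the number of legal moves available at each
# ply along one random line of play. This counts move *sequences* (it ignores
# transpositions), but it gives a feel for how fast the tree explodes.
# Assumes the python-chess package (`pip install chess`) is installed.
import math
import random

import chess

board = chess.Board()
log10_sequences = 0.0  # running log10 of the product of branching factors

for ply in range(20):  # 20 plies = 10 full moves
    moves = list(board.legal_moves)
    if not moves:  # random play hit mate/stalemate (vanishingly unlikely this early)
        break
    log10_sequences += math.log10(len(moves))
    board.push(random.choice(moves))

print(f"roughly 10^{log10_sequences:.1f} possible 10-move sequences")
# Typically lands around 10^29 -- versus a few billion games in the entire
# Lichess database, so most games leave "known" territory within a few moves.
```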
So, it has to have some kind of internal representation of the board state, and of what makes a reasonable move, that enables it to generalize (choosing random legal moves is almost unbelievably bad, so it’s not doing that).
I also doubt that it has been trained on the full (massive) database of Lichess games, but that would be an interesting experiment: https://database.lichess.org/
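If anyone does want to poke at the dump, the monthly files are zstd-compressed PGN, so a first pass might look roughly like this (a sketch only; the filename is a placeholder, and it assumes the python-chess and zstandard packages are installed):

```python
# Stream games out of a downloaded Lichess monthly dump and count how many
# distinct 10-ply openings show up in a sample -- one crude way to see how
# quickly games become novelties.
import io

import zstandard
import chess.pgn

seen_prefixes = set()

with open("lichess_db_standard_rated_2023-01.pgn.zst", "rb") as raw:  # placeholder filename
    reader = zstandard.ZstdDecompressor().stream_reader(raw)
    pgn = io.TextIOWrapper(reader, encoding="utf-8")
    for _ in range(100_000):  # sample the first 100k games
        game = chess.pgn.read_game(pgn)
        if game is None:
            break
        moves = [m.uci() for m in game.mainline_moves()]
        seen_prefixes.add(tuple(moves[:10]))  # first 10 plies

print(f"{len(seen_prefixes)} distinct 10-ply openings in the sample")
```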
Would you say the name of the commonly used random level-generation algorithm in gamedev, Wave Function Collapse, implies it’s using quantum mechanics? Most people would disagree with you, I suspect.
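For anyone who hasn’t looked under the hood: despite the name, WFC is just constraint propagation plus random choice. A toy sketch (the tiles, adjacency rules, and grid size are made up for illustration; real implementations also handle contradictions with backtracking or restarts):

```python
import random

TILES = ["land", "coast", "sea"]
# which tiles are allowed to sit next to each other (symmetric)
ADJACENT = {
    "land":  {"land", "coast"},
    "coast": {"land", "coast", "sea"},
    "sea":   {"coast", "sea"},
}

W, H = 8, 4
# every cell starts in a "superposition" of all tiles -- i.e. a plain Python set
grid = {(x, y): set(TILES) for x in range(W) for y in range(H)}

def neighbours(x, y):
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if (nx, ny) in grid:
            yield nx, ny

def propagate(start):
    """Remove options that have no compatible option left in an adjacent cell."""
    stack = [start]
    while stack:
        x, y = stack.pop()
        for nx, ny in neighbours(x, y):
            allowed = {t for t in grid[(nx, ny)]
                       if any(t in ADJACENT[s] for s in grid[(x, y)])}
            if allowed != grid[(nx, ny)]:
                grid[(nx, ny)] = allowed
                stack.append((nx, ny))

while any(len(options) > 1 for options in grid.values()):
    # pick an undecided cell with the fewest remaining options ("lowest entropy")
    cell = min((c for c, o in grid.items() if len(o) > 1), key=lambda c: len(grid[c]))
    grid[cell] = {random.choice(sorted(grid[cell]))}  # "collapse" it
    propagate(cell)

for y in range(H):
    print(" ".join(next(iter(grid[(x, y)]))[0] for x in range(W)))
```

No wave functions anywhere, just sets shrinking under adjacency constraints.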
The skill is in creating the training data in the first place.
Training a model is hardly a skill. It’s more like playing Tamagotchi—check on it once in a while to make sure it hasn’t died, and guess at ways to make it happier in the future.
I agree with your first statement, and disagree with the second one. Training a new, non-trivial model is 1/3 craft, 1/3 science, and 1/3 art. There are not very many people in the world capable of training GPT-4 level models or beating state of the art in image generation.
I see a lot of confident assertions of this type (LLMs don’t actually understand anything, cannot be creative, cannot be conscious, etc.), but never any data to substantiate the claim.
This recent paper suggests that recent LLMs may be acquiring theory of mind (or something analogous to it): https://arxiv.org/abs/2302.02083v1
Some excerpts:
> We administer classic false-belief tasks, widely used to test ToM in humans, to several language models, without any examples or pre-training. Our results show that models published before 2022 show virtually no ability to solve ToM tasks. Yet, the January 2022 version of GPT-3 (davinci-002) solved 70% of ToM tasks, a performance comparable with that of seven-year-old children. Moreover, its November 2022 version (davinci-003), solved 93% of ToM tasks, a performance comparable with that of nine-year-old children.
> Large language models are likely candidates to spontaneously develop ToM. Human language is replete with descriptions of mental states and protagonists holding divergent beliefs, thoughts, and desires. Thus, a model trained to generate and interpret human-like language would greatly benefit from possessing ToM.
> While such results should be interpreted with caution, they suggest that the recently published language models possess the ability to impute unobservable mental states to others, or ToM. Moreover, models’ performance clearly grows with their complexity and publication date, and there is no reason to assume that it should plateau anytime soon. Finally, there is neither an indication that ToM-like ability was deliberately engineered into these models, nor research demonstrating that scientists know how to achieve that. Thus, we hypothesize that ToM-like ability emerged spontaneously and autonomously, as a byproduct of models’ increasing language ability.
> Based on this evidence, we argue that (1) contemporary LLMs should be taken seriously as models of formal linguistic skills; (2) models that master real-life language use would need to incorporate or develop not only a core language module, but also multiple non-language-specific cognitive capacities required for modeling thought.
I find this interesting not so much from the perspective of LLMs, but because it seems to imply that human language is a prerequisite for self-awareness. Is that really so?
> Imagine a woman – let’s call her Sue. One day Sue gets a stroke that destroys large areas of brain tissue within her left hemisphere. As a result, she develops a condition known as global aphasia, meaning she can no longer produce or understand phrases and sentences. The question is: to what extent are Sue’s thinking abilities preserved?
> Many writers and philosophers have drawn a strong connection between language and thought. Oscar Wilde called language “the parent, and not the child, of thought.” Ludwig Wittgenstein claimed that “the limits of my language mean the limits of my world.” And Bertrand Russell stated that the role of language is “to make possible thoughts which could not exist without it.” Given this view, Sue should have irreparable damage to her cognitive abilities when she loses access to language. Do neuroscientists agree? Not quite.
If 23-year-old me were here now and considering future life paths, I'd be sorely tempted to declare/finish a dual CS/philosophy major and go to grad school.
It's a reasonable hypothesis. Being good at calculating 'What would happen next if {x}' is a decent working definition of baseline intelligence, and the capabilities of language allow for much longer chains of thought than what is possible from a purely reactive approach.
Entering the realm of Just My Opinion, I wouldn't be surprised at all if internal theory of mind is simply an emergent property of increasing next-token prediction capability. At a certain point you hit a phase change, where intelligence loops back on itself to include its own operation as part of its predictions, and there you go: (some form of) self-awareness.
> This preliminary report is provided by the Office of the Director of National Intelligence (ODNI) in response to the provision in Senate Report 116-233, accompanying the Intelligence Authorization Act (IAA) for Fiscal Year 2021, that the DNI, in consultation with the Secretary of Defense (SECDEF), is to submit an intelligence assessment of the threat posed by unidentified aerial phenomena (UAP) and the progress the Department of Defense Unidentified Aerial Phenomena Task Force (UAPTF) has made in understanding this threat.