Hacker News | hiddeninplain's comments

You seem to be relying too heavily on your own "language games": for instance, flip-flopping between "LLM technology" and "AI" to refer to what appears to be the same thing in your argument. I find it all quite incomprehensible.

> If you'd like to hear my opinion, I happen to think that LLM technology is the most important, arguably the only thing, to have happened in philosophy since Wittgenstein;

So, assume cognitive bias and a penchant for hyperbole.

> LLM technology is the most important, arguably the only thing, to have happened in philosophy

Why would "LLM technology" be important to philosophy?

> arguably the only thing, to have happened in philosophy

Did "LLM technology" "happen in philosophy"? What does it mean to "happen in philosophy"?

> indeed, Wittgenstein presents the only viable framework for comprehending AI in all of humanities.

What could this even mean?

Linguistics would appear to be at least one other of the humanities applicable to large language models.

Wittgenstein was famously critical of Turing's claim that a machine can think, to the extent that he claimed it led Turing into misunderstandings even in his mathematics.

Wittgenstein also disliked Cantor, and even the concept of 'sets'.

I am struggling to see how this all adds up to being the "only viable framework for comprehending AI".

> If it so happens that AI can help illuminate these things, like all good tools in philosophy of language do, it also means that we're in luck, and there's hope for better institutions.

This is a wild ride.

So, "AI" exploits weaknesses in institutions, but this is different from "destroying institutions", and it's a good thing because we can improve the institutions by fixing the exploitable areas; which is also a wholly speculative outcome with many counterexamples in real life.

Reads like: "Sure, I broke your window and robbed your store, but you should be thanking me and encouraging me to break more windows and rob more people because I illuminated that glass is susceptible to breaking when a rock is thrown at it. Oh, your shit? I'm keeping it. You're welcome."


My writing can be erratic sometimes, but "flip flopping" is a bit unfair, don't you think? When they say "AI," I assume they mean LLM technology and its applications above all else; the so-called "intelligent agent" discourse is a big one, but it's important to remember why it works in the first place. Well, because the pretraining stage already captures all the necessary information, right? Moreover, mechanistic studies suggest that most of the significant information is preserved in the dense layers, not the attention heads. So there's something very fundamental, albeit conceptually simple, going on that allows for a whole bunch of emergent behaviour, enabling much more complex discourses.
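To make the parameter side of that claim concrete, here's a back-of-the-envelope sketch. The dimensions are illustrative GPT-2-ish values I've picked for the example, not any specific published model, and parameter count is only a rough proxy for where information lives:

```python
# Toy parameter count for one transformer block, illustrating that the
# dense (MLP) layers hold more weights than the attention projections.
# d_model and d_ff are assumed illustrative values, not a real model spec.

d_model = 768        # residual stream width
d_ff = 4 * d_model   # conventional 4x MLP expansion factor

# Attention: four d_model x d_model projections (Q, K, V, output)
attn_params = 4 * d_model * d_model

# MLP: an up-projection and a down-projection
mlp_params = 2 * d_model * d_ff

print(attn_params)  # 2359296
print(mlp_params)   # 4718592 -- exactly twice the attention parameters
```

With the conventional 4x expansion, the MLP ends up with twice the attention parameters per block, which is the arithmetic behind "most of the weights are in the dense layers."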

> Why would "LLM technology" be important to philosophy?

Well, because it has empirically proved that Wittgenstein was more or less right all along, and linguists like Chomsky (I would go as far as saying Kripke, too, but that's a different story) were ultimately wrong! To put it simply: in order to learn language, and by extension, compute arbitrary discourses, you don't ever need to learn definitions of words. All you need is demonstrations of language use. The same goes for syntax, grammar, and a bunch of other things linguists were obsessing about for decades, like modality. (But that's a different story altogether!) Computer science people call this the bitter lesson, but that is only a statement about predictive power, not emergent power. If it were only ever the case for learning existing discourses, that wouldn't be remotely as surprising. Computing arbitrary discourses is a much stronger proposition!
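A toy illustration of that point (the corpus is made up, and a bigram counter is nothing like a real LLM, but the training signal is the same kind of thing): raw statistics of use pick up regularities without any word ever being defined:

```python
# Minimal sketch of "learning from demonstrations of use, not definitions":
# a bigram model trained on raw text alone, with no definitions given.
from collections import Counter, defaultdict

# Assumed toy corpus, purely for illustration.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word follows which, over adjacent pairs.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

# No word was ever defined, yet the usage statistics already encode
# that "sat" is followed by "on", and "on" by "the".
print(counts["sat"].most_common(1))  # [('on', 2)]
print(counts["on"].most_common(1))   # [('the', 2)]
```

The model never sees a definition, only demonstrations, yet it acquires (crude) distributional knowledge of how the words are used.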

> Did "LLM technology" "happen in philosophy"? What does it mean to "happen in philosophy"?

LLMs were a bit of a shock, and a lot of people are not receptive to the idea that the Wittgensteinians won; basically, game over. There will be more flailing, but ultimately they will adapt. You can already see this with Askell and other traditionally-trained philosophy people adopting language games; it's only that they call it alignment. Nor is it a coincidence that she went to Cambridge. It will take a bit of time for "academic philosophy" to recognise this, but eventually it will, because why wouldn't it?

Game over.

> Linguistics would appear at least one other of the applicable humanities to large language models.

Yeah, not really. All the interesting stuff that is happening has very little to do with linguistics. There's prefill from grammar, but it would be a stretch to attribute that to linguistics. In the linguistic literature, word2vec was a big deal for a while, but they've done fuck-all with it since. I'm not trying to be hyperbolic here, either.

> Wittgenstein was famously critical of Turing's claim that a machine can think

I never understood this line of reasoning. So what if Witt. and Turing had disagreements at the time? Witt. never had a chance to see LLMs, or anything remotely like them. This was an unexpected result, you know? We could have guessed that it would be the case, but there was no evidence. We still don't have a solid theory to get from Frege to something like modern LLMs, and we may never have one, but the evidence is there: Wittgenstein was right about what you need for language to work.

> Wittgenstein also disliked Cantor. and even the concept of 'sets'.

I don't see what this has to do with anything.

> So, "AI" exploits weaknesses in institutions, but this is different from "destroying institutions", and its a good thing because we can improve the institutions by fixing the exploitable areas; which is also a wholly speculative outcome with many counterexamples in real life.

I never said AI "exploits" anything. I only ever said that being able to compute arbitrary discourses opens many more doors than a pigeonholing insinuation like that would suggest. What wasn't obvious before is becoming obvious now. (This is why all these people are coming out with "revelations" about how AI is destroying institutions.) And it's not because of material circumstances. It's just that some magic was dispelled, so things became obvious, and this is philosophy at work.

This is real philosophy at hand, not some academic wanking :-)


Again, I find it very difficult to get past your own personal "language games".

> Game over.

Is a perfect example. What "game" is "over"?

Chomsky's philosophical linguistics has long been derided and stripped for parts, and he was friends with Epstein and his cohorts so he can fuck right on off to disgrace and obscurity, but his goal within linguistics, as I understand it, was to identify why humanity has its faculty of language.

Wittgenstein was uninterested in answering the same question, and large language models are about as far from an answer to that question as one can get.

So, again, I am unsure what has been settled to the point of decrying "Game over".

Does this game only have two "teams"? One possible "outcome"?

Who's on what side of the "game"?

What have they said that shows their allegiance to one idea, and what have they said in opposition to the other?

What about large language models either support or contradict, respectively, said ideas?

As a huge fan of the ideas and writings of Wittgenstein, I find it hard to believe that there are contemporary 'philosophers' who disagree with his ideas, namely that words take on meaning through context, but there are certainly trolls and conservatives in every field.


Wild that you claim others misunderstand art via an ill-conceived attribution to "thing-ness", yet make all of your arguments on the grounds of said "thing-ness".

Duchamp's R Mutt is an abstract commentary.

The actual vehicle of this commentary, the upside down urinal, is wholly arbitrary.


>Wild that you claim others misunderstand art via an ill conceived attribution to "thing-ness"

I don't claim others misunderstand art. I'm saying that art as a product that can be sold for income, where people want to own it, is tied to thingness.

>The actual vehicle of this commentary, the upside down urinal, is wholly arbitrary.

Yes. I agree. I'm generally confused by what you're trying to say here. I also know there are many copies of Fountain... which again demonstrates the concept of thingness in art I'm trying to talk about.

You typically can't hang a performance art piece in a gallery all day. You certainly can't sell a print to people at home. The fact that they care about the original instead of holding equal value to the print is exactly what I'm talking about. Digital creations don't have the same thingness, because you'd literally need to do something like get the original RAM that rendered the piece to identify it as "the original."


It seems you have abandoned your thesis in order to retain your belief that concerns about imagegen tech "are worthless".

You define "concerns about AI" narrowly as "is it art?" while obstinately denying the possibility of real concerns about imagegen tech: theft of intellectual property by the wealthy, environmental, economic, expressive, and on and on.

> I don't claim others misunderstand art.

> gp: I'd suggest people learn about ...

Is a passive-aggressive way to say "you misunderstand this due to your ignorance".

> I also know there are a many copies of Fountain... which again, demonstrates the concept of thingness

> gp: Fucking Duchamp’s readymades should make any concern about AI worthless.

If anything this "demonstrates the thingness in consumerism".

My point was you are ex post facto conflating your opinion of the items in the gift shop with the named artist's own expression.


> This technique allowed him to mass-produce images, echoing the consumer culture he sought to critique and celebrate.

Critique, yes. Celebrate, wat?

I tend to categorize Warhol as an artist whose work, if you hate it, you should love: the point is to provoke that hatred in order to lead you to the realization that the arc of factory mass production bends toward lower quality.

I highly doubt Warhol used his chosen soup brand because he felt it was the pinnacle of soup and represented how even mass produced quantities can have excellence in quality.

More likely he was saying this piece is to art as this brand's product is to soup.

