Nondeterministic systems are by definition incompatible with requirements for fixed and universal standards. One can either accept this and wade into the murky waters of the humans, or sit on the sidelines while the technology develops without the influence of those who wish the system had fixed and universal standards.
I'm finding it hard to identify any particulars in this piece, considering the largely self-defeating manner in which the arguments are presented, or should I say, compiled from popular media. Had it not been endorsed by Stanford in some capacity, and sensationalised by means of a punchy headline, we wouldn't be having this conversation in the first place! Now, much has been said about various purported externalities of LLM technology, and continues to be said daily—here in Hacker News comments, if not elsewhere. Between wannabe ethicists and LessWrong types contemplating the meaning of the word "intelligence," we're in no short supply of opinions on AI.
If you'd like to hear my opinion, I happen to think that LLM technology is the most important thing, arguably the only thing, to have happened in philosophy since Wittgenstein; indeed, Wittgenstein presents the only viable framework for comprehending AI in all of the humanities. Partly because it's what an LLM "does"—compute arbitrary discourses—and partly because that is what all good humanities end up doing—examining arbitrary discourses, not unlike the current affairs they cite in the opinion piece at hand, for the arguments they present and, ultimately, the language used to construct those arguments. If we're going to be concerned with AI like that, we should start by making an effort to avoid the kinds of language games that allow frivolously substituting "what AI does" for "what people do with AI."
This may sound simple, obvious even, but it also happens to be much easier said than done.
That is not to say that AI doesn't make a material difference to what people would otherwise do without it, but exactly like language it is a tool—a hammer, if you will, that only gains meaning during use—and AI is no different in that respect. For the longest time, humans had a monopoly on computing arbitrary discourses. This is why lawyers exist, too—so that we may compute certain discourses reliably. What has changed is that now computers get to do it, too; currently, with varying degrees of success. For "AI" to "destroy institutions," or in other words, for it to do someone's bidding to some undesirable end, something in the structure of said institutions must allow that in the first place! If it so happens that AI can help illuminate these things, like all good tools in philosophy of language do, it also means that we're in luck, and there's hope for better institutions.
You seem to be relying too heavily on your own "language games". For instance, flip flopping between using "LLM technology" and "AI" to refer to what appears to be the same thing in your argument. I find it all quite incomprehensible.
> If you'd like to hear my opinion, I happen to think that LLM technology is the most important thing, arguably the only thing, to have happened in philosophy since Wittgenstein;
So, assume cognitive bias and a penchant for hyperbole.
> LLM technology is the most important thing, arguably the only thing, to have happened in philosophy
Why would "LLM technology" be important to philosophy?
> arguably the only thing, to have happened in philosophy
Did "LLM technology" "happen in philosophy"? What does it mean to "happen in philosophy"?
> indeed, Wittgenstein presents the only viable framework for comprehending AI in all of the humanities.
What could this even mean?
Linguistics would appear to be at least one other of the humanities applicable to large language models.
Wittgenstein was famously critical of Turing's claim that a machine can think, to the extent that he claimed it caused Turing to create misunderstandings even in his mathematics.
Wittgenstein also disliked Cantor, and even the concept of 'sets'.
I am struggling to see how this all adds up to being the "only viable framework for comprehending AI".
> If it so happens that AI can help illuminate these things, like all good tools in philosophy of language do, it also means that we're in luck, and there's hope for better institutions.
This is a wild ride.
So, "AI" exploits weaknesses in institutions, but this is different from "destroying institutions", and its a good thing because we can improve the institutions by fixing the exploitable areas; which is also a wholly speculative outcome with many counterexamples in real life.
Reads like: "Sure, I broke your window and robbed your store, but you should be thanking me and encouraging me to break more windows and rob more people because I illuminated that glass is susceptible to breaking when a rock is thrown at it. Oh, your shit? I'm keeping it. You're welcome."
My writing can be erratic sometimes, but "flip flopping" is a bit unfair, don't you think? When they say "AI," I assume they mean LLM technology and its applications above all else; the so-called "intelligent agent" discourse is a big one, but it's important to remember why it works in the first place. Well, because the pretraining stage is already capturing all the necessary information, right? Moreover, mechanistic studies show that most of the significant information is preserved in the dense layers, not the attention heads. So there's something very fundamental, albeit conceptually simple, going on that allows for a whole bunch of emergent behaviour, enabling much more complex discourses.
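If you want to eyeball what I mean, here's a minimal sketch assuming GPT-2 via Hugging Face transformers: compare how much each attention block vs. each MLP ("dense") block writes into the residual stream. The norm of the write is a crude proxy I'm using purely for illustration, not what the interpretability papers actually measure:

    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tok = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    attn_norms, mlp_norms = [], []

    # GPT2Attention returns a tuple; element 0 is its write into the residual stream.
    def on_attn(module, inputs, output):
        attn_norms.append(output[0].norm().item())

    # GPT2MLP (the "dense" part) returns a plain tensor.
    def on_mlp(module, inputs, output):
        mlp_norms.append(output.norm().item())

    for block in model.transformer.h:
        block.attn.register_forward_hook(on_attn)
        block.mlp.register_forward_hook(on_mlp)

    with torch.no_grad():
        model(**tok("The meaning of a word is its use in the language.",
                    return_tensors="pt"))

    for i, (a, m) in enumerate(zip(attn_norms, mlp_norms)):
        print(f"layer {i:2d}   attn write: {a:8.2f}   mlp write: {m:8.2f}")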
> Why would "LLM technology" be important to philosophy?
Well, because it has empirically proved that Wittgenstein was more or less right all along, and linguists like Chomsky (I would go as far as saying Kripke, too, but that's a different story) were ultimately wrong! To put it simply: in order to learn language, and by extension to compute arbitrary discourses, you don't ever need to learn definitions of words. All you need is demonstrations of language use. The same goes for syntax, grammar, and a bunch of other things linguists have obsessed over for decades, like modality. (But that's a different story altogether!) Computer science people call this the bitter lesson, but that is only a statement about predictive power, not emergent power. If it were only the case for learning existing discourses, it wouldn't be remotely as surprising. Computing arbitrary discourses is a much stronger proposition!
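A toy illustration of the point, if you'll indulge me: nothing but co-occurrence counts, no definitions anywhere, and similarity between words still falls out of shared contexts of use alone (the corpus and window size are obviously made up):

    from collections import Counter
    from math import sqrt

    corpus = ("the cat sat on the mat . the dog sat on the rug . "
              "the cat chased the mouse . the dog chased the cat .").split()

    # For each word, count its neighbours within a two-word window.
    ctx = {}
    for i, w in enumerate(corpus):
        ctx.setdefault(w, Counter()).update(
            corpus[max(0, i - 2):i] + corpus[i + 1:i + 3])

    def cosine(a, b):
        dot = sum(ctx[a][k] * ctx[b][k] for k in ctx[a])
        na = sqrt(sum(v * v for v in ctx[a].values()))
        nb = sqrt(sum(v * v for v in ctx[b].values()))
        return dot / (na * nb)

    print(cosine("cat", "dog"))  # ~0.99: near-identical contexts of use
    print(cosine("cat", "on"))   # ~0.55: different contexts of use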
> Did "LLM technology" "happen in philosophy"? What does it mean to "happen in philosophy"?
LLMs were a bit of a shock, and a lot of people are not receptive to the idea that the Wittgensteinians won; basically, game over. There will be more flailing, but ultimately they will adapt. You can already see this with Askell and other traditionally-trained philosophy people adopting language games; it's only that they call it alignment. Nor is it a coincidence that she went to Cambridge. It will take a bit of time for "academic philosophy" to recognise this, but eventually it will, because why wouldn't it?
Game over.
> Linguistics would appear at least one other of the applicable humanities to large language models.
Yeah, not really. All the interesting stuff that is happening has very little to do with linguistics. There's prefill from grammar, but it would be a stretch to attribute that to linguistics. In the linguistic literature, word2vec was big for a while, but they've done fuck-all with it since. I'm not trying to be hyperbolic here, either.
> Wittgenstein was famously critical of Turing's claim that a machine can think
I never understood this line of reasoning. So what if Witt. and Turing had disagreements at the time? Witt. never had a chance to see LLMs, or anything remotely like them. This was an unexpected result, you know? We could have guessed it would be the case, but there was no evidence. We still don't have a solid theory to go from Frege to something like modern LLMs, and we may never have one, but the evidence is there—Wittgenstein was right about what you need for language to work.
> Wittgenstein also disliked Cantor, and even the concept of 'sets'.
I don't see what this has to do with anything.
> So, "AI" exploits weaknesses in institutions, but this is different from "destroying institutions", and its a good thing because we can improve the institutions by fixing the exploitable areas; which is also a wholly speculative outcome with many counterexamples in real life.
I never said AI "exploits" anything. I only ever said that being able to compute arbitrary discourses opens many more doors than a pigeonholing insinuation like that would suggest. What wasn't obvious before is becoming obvious now. (This is why all these people are coming out with "revelations" about how AI is destroying institutions.) And it's not because of material circumstance. It's just that some magic was dispelled, so stuff became obvious, and this is philosophy at work.
This is real philosophy at hand, not some academic wanking :-)
Again, I find it very difficult to get past your own personal "language games".
> Game over.
Is a perfect example. What "game" is "over"?
Chomsky's philosophical linguistics have long been derided and stripped for parts, and he was friends with Epstein and his cohorts, so he can fuck right on off to disgrace and obscurity; but his goals within linguistics, as I understand them, were to identify why humanity has its faculty of language.
Wittgenstein was uninterested in answering the same question, and large language models are about as far from an answer to that question as one can get.
So, again, I am unsure what has been settled to the point of decrying "Game over".
Does this game only have two "teams"? One possible "outcome"?
Who's on what side of the "game"?
What have they said that shows their allegiance to one idea, and what have they said in opposition to the other?
What about large language models either support or contradict, respectively, said ideas?
As a huge fan of the ideas and writings of Wittgenstein I find it hard to believe that there are contemporary 'philosophers' who disagree with his ideas, namely that words take on meaning through context, but there are certainly trolls and conservatives in every field.
Most Zoomers around me who pirate use some application that obfuscates the torrenting part away; they just have to know how to use a search box and hit play.
People who do not pay for ChatGPT often have money and prefer not to pay for a subscription for several reasons, including but not limited to:
1) They don't use ChatGPT often enough to justify it
2) They use alternatives primarily (a subset of #1)
3) They choose to spend their money on other things
How can an advertiser tell the difference? Which is a stronger signal of having money: paying for something, or not paying for something? Furthermore, given all those reasons, why would advertisers prefer those people on ChatGPT? Advertisers are trying to change your behavior, usually to get you to spend money the way they want you to. If you're rarely using the service and don't easily part with money, you're probably less worth pursuing than… well, the person who is the opposite of those things.
Advertisers are salivating over paying users, but paying users really don't want any advertising in their product, because they're paying precisely not to have it. That doesn't mean somebody won't cave and shove advertising in regardless.
You’ve equated selling ads, like a newspaper does, with tracking user behavior, collating it with other information purchased on the market, and targeting people to change their behavior. Disingenuous.
scale changes, time changes, but at its core it’s similar. what i look at is chatgpt’s roadmap, a lifeline.
it doesn’t save my life, but at least i’m seeing more relevant ads now :) not getting detergent ads while searching for perfume is still nice, all things considered.
Also, your newspaper is selling the data points it has. If it had more, it would sell more. See: your local paper isn't selling ads to a car wash six towns over. They do, however, sell ads that align with the political affinities of your local newsroom's area.
"advertising, regardless of scale, is the art of turning data into revenue."
This is disingenuous. Putting up a billboard over a highway to make people aware of a certain brand of beer is not the same as building detailed profiles on people in order to sell to the highest bidder the opportunity to change your behavior right when you're likely to do so. But somehow, this user puts them together with the very convenient "regardless of scale."
Maybe you're OK with an entire industry that makes money trying to get you to do what they want -- buy what they want, think what they want. Maybe you're OK with your past behavior being written on a shadow ledger, sold to the highest bidder, traded on the dark web, and used by governments. It's your right to be okay with that, since it's your life. But you being okay with that doesn't change the fact that this is a fundamentally different type of behavior from what is commonly called "advertising." It's a curious equivocation, this sane-washing, and it does make one wonder why an otherwise intelligent person feels the need to do it.
“Conversation privacy: We keep your conversations with ChatGPT private from advertisers, and we never sell your data to advertisers.”
The same sleight of hand that's been used by surveillance capitalists for years. It's not about "selling your data" because they've narrowly defined "data" to mean "the actual chats you have" and not "information we infer about you from your usage of the service," which they do sell to advertisers in the form of your behavioral futures.
Fuck all this. OpenAI caved to surveillance capitalism in record time.
Sure. If you want to pay the 247% tariff, there's nothing stopping you from doing this. US import duty applies when you cross the border, calculated on the vehicle's origin (China), not the purchase location.
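Back-of-the-envelope, assuming that 247% figure and a hypothetical $30,000 declared value (both numbers illustrative):

    $30,000 × 2.47 = $74,100 in duty
    $30,000 + $74,100 = $104,100 landed cost, before any state fees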
I think the way this would work is you would have your Canadian friend/owner drive it across and then return via another mode of transport. It's entirely possible you could get away with it pretty much indefinitely (especially in an area where folks are used to seeing Canadian plates), but I could also see someone checking a list of "foreign vehicles that entered the US and never left" at some point and one or both of you having some explaining to do (i.e. being ruled inadmissible).
I can't tell if you are talking about keeping the car in the US or Canada, but I can tell you that in the US, you have to register the car. If you don't register it, they don't just issue fines; they tow it and charge daily storage until you register it. And if you don't pay the fines, you never get the car back. The state will auction it off and keep the money, and if the auction price is less than the storage fines, they send you a bill for the rest.
This is only correct if you're not planning on ever registering the vehicle. And good luck with the paperwork to prove that during import. This is a great way to waste a bunch of money and get your shiny new car crushed.