FWIW I don't understand a lot of what either of you mean, but I'm very interested. Quick run-through; excuse the editorial tone, as I don't know how to give feedback on writing without it.
# Post 1
> The problems of epistemology and informational quality control are complicated, but humanity has developed a decent amount of social and procedural technology to do these, some of which has defined the organization of various institutions.
Very fluffy, creating very uncertain parsing for the reader.
Should cut down, then could add specificity:
ex. "Dealing with misinformation is complicated. But we have things like dictionaries and the internet, there's even specialization in fact-checking, like Snopes.com"
(I assume the specifics I added aren't what you meant, just wanted to give an example)
> The mere presence of LLMs doesn't fundamentally change how we should calibrate our beliefs or verify information. However, the mythology/marketing that LLMs are "outperforming humans"
They do outperform humans at many tasks, or are clearly at par.
Where is the quote from?
Is bringing this up relevant to the discussion?
Would us quibbling over that be relevant to this discussion?
> combined with the fact that the most popular ones are black boxes to the overwhelming majority of their users means that a lot of people aren't applying those tools to their outputs.
Are there unpopular ones that aren't black boxes?
What tools? (this may just indicate the benefit of a clearer intro)
> As a technology, they're much more useful if you treat them with what is roughly the appropriate level of skepticism for a human stranger you're talking to on the street
This is a sort of obvious conclusion compared to the complicated language leading into it, and doesn't add to the posts before it. Is there a stronger claim here?
# Post 2
> I wonder what ChatGPT would have to say if I ran this text through with a specialized prompt.
Why do you wonder that?
What does "specialized" mean in this context?
My guess is there's a prompt you have in mind, which would then clarify A) what you're wondering about and B) what you meant by a specialized prompt. But a prompt is a question, so it may be better to just ask the question?
> Your choice of words is interesting, almost like you are optimizing for persuasion,
What language optimizes for persuasion? I'm guessing the fluffy advanced verbiage indicates that?
Does this boil down to "Your word choice creates persuasive writing"?
> but simultaneously, I get a strong vibe of intention of optimizing for truth.
Is there a distinction here? What would "optimizing for truth" vs. "optimizing for persuasion" look like?
Do people usually write untruthful things, to the point that it's worth noting when you think someone is writing with the intention of truth?
As long as we're doing unsolicited advice: this revision seems predicated on the assumption that we are writing for a general audience, which ill suits the context in which the posts were made. This is especially bizarre because you then interject to defend the benchmarking claim I've called "marketing". Having an opinion on that subject at all makes it clear that you at least somewhat understand the shared context, despite being unable to parse the fairly obvious implication: treating models with undue credulity is a direct result of the outsized, ill-defined claims about their capabilities to which I refer. I agree that I could stand to be more concise, but if you find my writing difficult to parse, perhaps that is simply because you are not its target audience.
Let's go ahead and say the LLM stuff is all marketing and that LLMs are clearly worse than any human. It's plainly unrelated to anything else in the post, so we don't need to focus on it.
Like I said, I'm very interested!
Maybe it doesn't mean anything other than what it says on the tin? You think people should treat an LLM like a stranger making claims? Makes sense!
It's just unclear what a lot of it means, and the word choice makes it seem like there's something grander going on, *coughs* as our compatriots in this intricately weaved thread on the international network known as the world wide web have also explicated, and imparted via the written word, as their scrivening also remarks on the lexicographical phenomenae *coughs*.
My only other guess is you are doing some form of performance art to teach us a broader lesson?
There's something very "off" here, and I'm not the only one to note it. Like, my instinct is it's iterated writing using an LLM asked to make it more graduate-school level.
Your post and the one I originally responded to are good evidence against something I said earlier. The mere existence of LLMs does clearly change the landscape of epistemology, because whether or not they're even involved in a conversation, people will constantly invoke them when they think your prose is stilted (which is, by the way, exactly the wrong instinct), or to posture that they occupy some sort of elevated remove from the conversation (which I'd say they prove false by replying at all). I guess dehumanizing people by accusing them of being "robots" is probably as old as the word itself, if not older, but interest in talking robots has recently increased dramatically, and so here we are.
I can't tell you exactly what you find "off" about my prose, because while you have advocated precision, your objection is impossibly vague. I talk funny. Okay. Cool. Thanks.
Anyway, most benchmarks are garbage, and even if we take the validity of these benchmarks for granted, these AI companies don't release their datasets or even their weights, so we have no idea what's out of distribution. To be clear, this means the claims can't be verified even by the standards of ML benchmarks, and thus should be taken as marketing, because companies lying about their tech has both a clearly defined motivation and a constant stream of unrelenting precedent.