
This hypothesis was put forward in 1960s science fiction (maybe? Robert Anton Wilson? and, for parallel purposes, Philip K Dick's percept / concept feedback cycle): that people in power necessarily become functionally psychotic, because the people who self-select to be around them as a self-preserving / self-promoting opportunity (sycophants) cannot help but filter shared observations through their own biases. Having casually looked for phenomena which support / disprove this hypothesis over the intervening years, I find this profoundly unsurprising.

If you choose to believe, as Jaron Lanier does, that LLMs are a mashup (or, as I would characterize it, a funhouse mirror) of the human condition as represented by the Internet, this sort of implicit bias is already represented in most social media. It is further distilled by the cultural practice of hiring third-world residents to tag training sets and provide the "reinforcement learning": people who are effectively, if not actually, in the thrall of their employers and can't help but reflect their own sycophancy.

As someone who is therefore historically familiar with this process in a wider systemic sense I need (hope for?) something in articles like this which diagnoses / mitigates the underlying process.



I'm just going to re-write what you've written with a bit of extra salt:

Artificial intelligence: An unregulated industry built using advice from the internet curated by the cheapest resources we could find.

What can we mitigate your responsibility for this morning?

I've had AI provide answers verbatim from a self-promotion card of the product I was querying, as if it were a review of the product. I don't want to chance a therapy bot quoting a single source that, whilst it may be adjacent to the problem needing to be addressed, could be wildly inappropriate or incorrect given the sensitivities inherent wherever therapy is required.

(There are likely different sets of weightings for therapy-related content, but I'm not going to be an early adopter for my loved ones - barring everything else failing.)


I kind of wonder why psych bots aren't regulated as medical devices, since ML diagnostic products certainly are.


My theory is that the further up the hierarchy you go, the more often beneficial decisions are harmful to those below, which requires emotional distancing that, even further up, becomes full-blown collective psychopathy. The yes men grow close while everyone else floats away.


Every single empire falls into this, right? The king surrounds himself with useless sycophants who can't produce anything but are very good at flattering him; he eventually leads the empire to ruin, revolution happens, and the cycle starts anew.

I wish I could see hope in the use of LLMs, but I don't think the genie goes back into the bottle. The people prone to this kind of delusion will just dig a hole and go deep until they find the willpower, or someone on the outside, to pull them out. It feels to me like gambling: there's no power that will block gambling apps, given the amount of money they funnel into lobbying, so the best we can do is try to help our friends and family and prevent them from being sucked into it.


Certainly not the story of, e.g., the Mongol Empire, which fell apart when the Great Khan died, because he was the big personality holding everything together.

There were competent kings and competent empires.

Indeed, it's tough to decide where the Roman Empire really began its decline. It's not a singular event but a centuries-long decline. Same with the Spanish Empire and the English Empire.

Indeed, the English Empire may have collapsed but that's mostly because Britain just got bored of it. There's no traditional collapse for the breakup of the British Empire

---------

I can think of some dramatic changes as well. The fall of the Tokugawa Shogunate in Japan wasn't due to incompetence, but rather the culture shock of modern American steam warships visiting Japan while it was still a swords-and-samurai culture. This broke Japanese trust in the samurai system and led to a violent revolution that resulted in incredible industrialization. But I don't think the Tokugawa Shogunate was ever considered especially corrupt or incompetent.

---------

Now, that being said: dictators do fall into the dictator trap. A bad king who becomes a narcissist and a dictator will fall into the pattern you describe. But that doesn't really happen all that often; that's why it's so memorable when it DOES happen.


> the English Empire may have collapsed but that's mostly because Britain just got bored of it. There's no traditional collapse for the breakup of the British Empire

I completely agree with the point you're making, but this part is simply incorrect. The British Empire essentially bankrupted itself during WW2, and much of its empire was made up of money-losing territories. This led Britain to start 'liberating' these territories en masse, which essentially signaled the end of the British Empire.


It is ironic and sad that colonies were both oppressed and not profitable.

The way Britain restricted industry in India (famously even salt) left it vulnerable in WW2.

Colonial policies are really up there with the great failures of communism.


> As someone who is therefore historically familiar with this process in a wider systemic sense

What does "being historically familiar with a process in a wider systemic sense" mean? I'm trying to parse this sentence without success.


I'm reading it to say: having working knowledge of interpersonal structures in a way that is contingent on historical context. These would be the social, economic, religious, family, and political patterns of relation that groups of people exist within.

The assumption GP is making is that the incentives, values, and biases impressed upon the people providing RL training data may systematically favor responses along a certain vector (the sum of these influences) in a way that doesn't cancel out, because the sample isn't representative. The economic dimension, for example, is particularly difficult to unbias, because the sample creates the dataset as an integral part of their job. The converse would be collecting RL training data from people outside the context of work.
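A toy sketch of that sampling argument (Python, with entirely hypothetical numbers, just to illustrate the mechanism): if the annotator pool skews toward rewarding agreeable answers, the aggregate preference labels skew too, instead of the individual biases canceling out.

    import random

    random.seed(0)

    def rate(pref, blunt, agreeable):
        # One preference label: reward the agreeable answer with
        # probability `pref`, otherwise reward the blunt one.
        return agreeable if random.random() < pref else blunt

    def label_fraction(prefs, n=10_000):
        # Fraction of labels rewarding the agreeable answer when each
        # label comes from a random annotator drawn from the pool.
        wins = sum(rate(random.choice(prefs), "blunt", "agreeable") == "agreeable"
                   for _ in range(n))
        return wins / n

    # Hypothetical preference rates: a representative pool is split on
    # sycophancy; a pool whose pay depends on pleasing an employer skews.
    representative = [0.4, 0.5, 0.6]
    dependent = [0.6, 0.7, 0.8]

    print(label_fraction(representative))  # ~0.50: individual biases cancel
    print(label_fraction(dependent))       # ~0.70: bias survives aggregation

With the representative pool the labels come out near 50/50; with the skewed pool the agreeable answer wins about 70% of the time, and that skew is exactly what a reward model would learn.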

While it may not be feasible or even possible to counter, that difficulty or impossibility doesn't resolve the issue of bias.


Thank you everyone for the love.

I read Robert Anton Wilson and Philip K Dick many years ago, and I've been observing a recurring feature in human thought / organization ever since. People in this thread have done a pretty good job with the functional psychosis part, but I encourage considering percept / concept as well: the notion that what we see influences our mental model, but that it works the other way too, and our mental model influences what we're capable of seeing. Yes, sort of like confirmation bias, but much more disturbing.

For example, in the CIA's online library there is a coursebook titled _Psychology of Intelligence Analysis_ (1999), and one of the topics discussed is: "Initial exposure to blurred or ambiguous stimuli interferes with accurate perception even after more and better information becomes available." Particularly fascinating to me is that people who are first shown a picture which is too blurry to make out take longer to correctly identify it as it is made clearer. https://www.cia.gov/resources/csi/books-monographs/psycholog...

My father was a psychiatrist. I'm interested in various facets of how people come to regard each other and their surroundings. I'm fascinated with the role language plays in this. I personally believe that computer programming languages and tech stacks provide a uniquely objective framework for evaluating the emergence of "personality" in cultures.

"Diagnosticity is the informational value of an interaction, event, or feedback for someone seeking self-knowledge." https://dictionary.apa.org/diagnosticity

Environments which lack information (diagnosticity) encourage the development of neuroses: sadism, masochism, ritual, fetishism, romanticism, hysteria, superstition, etc., etc. I have observed that, left to stew in their own juices, the spontaneous cultures which emerge around different languages / stacks tend to gravitate towards language-specific constellations of such neuroses; I'm not the only person who has observed this. I tend towards the "radar chart" methodology described in Leary's _Interpersonal Diagnosis of Personality_ (1957), but here's a great talk someone gave at SXSW one year which explores a Lacanian model: https://www.youtube.com/watch?v=mZyvIHYn2zk


People are extremely uncomfortable with uncertainty, especially about themselves. So they create explanations... Programmers also don't like uncertainty so they create programming languages. There's also a bit of "not invented here" syndrome.

Languages like Haskell are really applied type theory etc... In some sense, the academics invent languages for different levels of abstraction to ultimately write papers about how useful they are.

In terms of programming languages, personality-wise, in the end it's all JavaScript. Then there is Java and the JVM, which is on a mission to co-opt multiple personalities.


“our mental model influences what we're capable of seeing.”

This is too common. I'd like to think the Socratic method and mindset help one break out of this rut.


Check out the CIA's free coursebook referenced above. It's got good stuff in it. (Your tax dollars at work.)


How about all the people out there who are at rock bottom, or have major issues, are not leaders, are not at the top of their game, and need some encouragement or understanding?

We may be talking about the same thing, but having sycophants at the top is very different from having a friend on your side when you are depressed and at the bottom. Both might do the same thing, yet in one case it might bring you to functionality and normality, and in the other (possibly, but not necessarily) to psychopathy.


Geoff Lewis has been sampling the product. Will this turn into a cultural thing amongst VCs? (Has it already?)

https://futurism.com/openai-investor-chatgpt-mental-health



