The "computers are social actors" theory no longer applies to desktop computers (nature.com)
56 points by vasco on Nov 15, 2023 | 67 comments


In 2006 I shipped my iBook to Apple to fix its screen bezel. I made a Flash animation where my laptop cheekily described its issue, thanking the technicians in advance, and I made it launch on startup.

The iBook came back with a factory reset. I restored from backup and stopped anthropomorphizing my computers.


Sounds more like this is just an exhibit of the replication crisis. I don’t remember people interacting with computers as if they are human in the 1990s — at least the people who actually did interact with computers on a regular basis.


From what I can understand, the original finding is along the lines of when the computer says "insert floppy disc 3", people instinctively responded as if it was a person giving them that order and subconsciously went through all the usual social processing for that response - do they have authority over me? is anyone else watching? etc. Not that they thought the computer actually was capable of human interaction at a conscious level.

This is more believable than the way most people are interpreting it, but I still have doubts. I was an adult in that era, and what I remember is that computers were just so much more mechanical then. Using one was more like operating a microwave oven than using today's much more sophisticated machines. It's hard to imagine those simple error codes triggering human social reactions.


I have very different memories of the early 90's. Even as a programmer, while I very much interacted with the computer as a tool, I tended to treat interactions with it as a conversation with the engineer who designed the hardware or software I was using. Most non-programmers weren't really conscious of the engineer on the other side of the conversation, so for them the other party was just the "stupid computer".

I'm not sure the key change over time here is about "emergent technology". My observation is that, as computers were increasingly networked (particularly after the explosion of the Internet), the perception of computers shifted much more toward being a communication tool... and that's now literally true, as the primary computer people interact with is their phone. Even before smartphones, once networking became ubiquitous, things started changing. Email was initially the primary use for networked computers; then with the web they became tools to interact with remote people/organizations, and with the explosion of social media they became tools primarily for interacting with other people. Before networking, the person at the other end of the conversation was just so much more removed/abstract, so most people's mental models were left with only the device itself in the interaction.


True. Though I would argue that people nowadays are generally less submissive to authority than 30 years ago, hence that might not be specific to computers.


It's hard to make out what you're saying through the plexiglass and the double masks.


> Sounds more like this is just an exhibit of the replication crisis.

Do you have evidence of that, or are you saying it looks the same?

The replication crisis on HN is the replication of these comments in every discussion of social science research. The papers that they tried to replicate were at least based on evidence, and the conclusion was not that they were completely wrong but that, for many papers (not all by any measure), the conclusion was replicated while the evidence wasn't quite as strong as in the original.


There are two explanations for the discrepancy between the original study and the present one:

1) The behavior of people has changed substantively since the first study. (Which is what the present study seems to be concluding.)

2) At least one of the studies wasn’t representative and/or didn’t actually measure what it claimed to measure. (That is, an exhibit of the replication crisis.)

What I’m saying is that explanation 2 seems much more plausible to me, given that the results of the first study don’t match my recollection and experience at all.


There are other explanations too, and a flaw in a study, if one exists, doesn't automatically implicate the "replication crisis".

> the results of the first study don’t match my recollection and experience at all.

30 years ago you had some anecdotal impressions. 30 years ago the researchers had a hypothesis, focused on this exact question, designed an experiment, and collected evidence. (And many since have found it valid.) 30 years later, we have a detailed written report of their experiment, and your 30-year-old memory as posted on HN.

It's just too easy to say 'that's not my experience' without evidence. Everything is 'debunked' on that basis. Now we have 30-year-old memories too.


You know the replication crisis is a real thing, right?


What do you mean by "real thing"? The comment to which you responded makes clear I know about the issues in detail, so I'm clearly aware of it.

If you mean that yours and/or layer8's perspectives are a 'real thing', you don't have a privileged claim on reality, of course (and neither do I).


> Sounds more like this is just an exhibit of the replication crisis

Yeah. In the article we can find that "the original study recruited 30 Stanford University undergraduates (10 in each of the original 3 conditions, see Procedure)."

I wouldn't think twice before throwing away the results and forgetting about them.

The replication study does better, recruiting 132 participants. This is much harder to ignore.


It seems like they found that the CASA effect was real when the technology was novel. Once you get comfortable with computers, it wears off.


I mean, it might have been before your time, but people have been doing this since the 60's.

https://en.wikipedia.org/wiki/ELIZA


I’m familiar with this of course, but it’s conditioned on a chat program that actually tries to appear human, and the credulous reactions to ELIZA came mostly from lay people who otherwise hadn’t used computers before. Likewise, some people — famously including some software engineers — believe that certain AI chatbots are sentient, which also doesn’t falsify the linked study. (This argument holds regardless of whether you consider AI chatbots to be sentient or not.)


But what does that actually show?

Humans have been communicating with other humans via text for ages. Fooling a human into believing that the person on the other side of a text conversation is also a human isn't too hard, especially when they are not looking for signs of deception.

Even if they are aware it is a computer, humans will anthropomorphize literally anything. Animals are obvious, but we'll even extend it to simple objects like cars. We'll talk to them with zero expectation of any response or sign of sentience. It's a suspension of disbelief, really. It's not any different than kids playing Cops and Robbers, people performing a theatre play, or playing a board game with friends.

CASA seems to suggest that humans will view any computer interaction as one between two humans. That's not just interacting with chat bots, that's also something like opening the Configuration Panel. If you ask me, that's a massive stretch!


I agree. This theory sounds ludicrous to me, except maybe for people who have absolutely zero technical know-how.


That describes people using computers for the first time, a widespread experience when the first experiment was done.


That doesn't make sense. Computers aren't, and weren't even back then, some kind of alien technology unlike anything people had experienced before. I can't imagine those people treating their calculators, vacuums or cars as persons. A computer is a qualitative jump, but not that big.


What is that based on? And what did they use that was like a computer?


If the human subjects were Stanford students 30 years ago, they were pretty much my age cohort and most probably also had similar technologically-rich US middle class experiences in childhood.

Most in this cohort grew up with many kinds of electronic devices. This includes kid-focused, hand-held games like a "speak and spell" or a "simon says" game, and other toys that played recorded voices. Video game arcades and home video games were common, and games were full of sequential prompting and storytelling. But I think we saw these as conduits for content, not defaulting to any belief that there was agency in the device.

There were also portable radios and tape players, so we were used to technology delivering social content. Also, there were plenty of digital controls on home appliances as mentioned previously. Microwave ovens, washing machines, CD players, programmable VCRs, programmed irrigation timers, etc. These were all things that had some level of automated and asynchronous behavior.

I remember one of my older relatives back in the 1980s liked to jokingly quote a theory about "the perversity of inanimate objects" that he had picked up as a technician in the US Navy. And yet we would have raised an eyebrow if anybody really responded to devices as social actors. It would be as strange as if someone talked to plants or to their shoes.


True, but they did say the experiment was done with people who had been using computers regularly.


Good point. Though they still could be pretty new - not like people who grew up with them.


Did you read the article? Did you understand it? CASA doesn't postulate that users in the 90s interacted with computers as if they were literally other humans.


layer8:

I don’t remember people interacting with computers as if they are human in the 1990s — at least the people who actually did interact with computers on a regular basis.

You:

CASA doesn't postulate that users in the 90s interacted with computers as if they were literally other humans.

The article:

The Computers Are Social Actors (CASA) theory is the most important theoretical contribution that has shaped the field of human–computer interaction. The theory states that humans interact with computers as if they are human, and is the cornerstone on which all social human–machine communication (e.g., chatbots, robots, virtual agents) are designed. However, the theory itself dates back to the early 1990s, and, since then, technology and its place in society has evolved and changed drastically.

Me:

Confused.


From the original article: “The [CASA] theory states that humans interact with computers as if they are human, and is the cornerstone on which all social human–machine communication (e.g., chatbots, robots, virtual agents) are designed. ”


Seems like possibly bad timing given advances in AI. ChatGPT/GitHub Copilot can already be used to interface with a computer (there are some git repos floating about if one looks), and I think it's only a matter of time before someone makes an actually useful AI interface.
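A minimal sketch of the idea, assuming the OpenAI Python SDK and an OPENAI_API_KEY in the environment; the model name, prompts, and suggest_command() helper are placeholders of mine, not from any particular repo:

    # Sketch: translate a natural-language request into a single shell command
    # via an LLM and show it to the user (not run it).
    # Assumes the OpenAI Python SDK; model name and prompts are illustrative.
    from openai import OpenAI

    client = OpenAI()

    def suggest_command(request: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4",  # placeholder model
            messages=[
                {"role": "system",
                 "content": "Reply with exactly one POSIX shell command and nothing else."},
                {"role": "user", "content": request},
            ],
        )
        return resp.choices[0].message.content.strip()

    if __name__ == "__main__":
        # Print the suggestion; a real interface would ask for confirmation
        # before ever executing anything.
        print(suggest_command("show the five largest files in this directory"))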


If you think it seems like bad timing, I suggest you read closer: They very specifically say that emerging technologies (like the recent advances in ML systems) are still subject to the CASA theory.


My 18yo daughter, not tech illiterate but not particularly interested in programming, has mentioned to me that she found AI-generated artwork really interesting for about a month, but it quickly became boring. I wonder if it comes from a "wow, what kind of creative person could come up with something that unusual?" initial response, which quickly gives way to boredom as one realizes that there is no creative person behind it?

Also, I wonder if the period before "emerging" technologies stop seeming interesting (e.g. like another person) is speeding up over time, like so much else?


I suspect it’s more because most AI-generated artwork looks same-ish after a while. At least that’s my own perception.


When do other people stop being interesting?

For example in lines of work where individuals meet large numbers of the public over extended periods of time, how long does it take before the average interaction is 'meh'?


tool use is already that.


Humans seem to naturally anthropomorphize anything which exhibits systematic yet unpredictable or capricious behavior: pets, machines, mountains, oceans, weather, planets, ex-wives, etc. We give them names and project a sense of agency onto them, because they must obviously act certain ways sometimes on purpose. It's the reason we have gods, fairies and superstitions. It's why we name our cars and our storms. It's just what we do.

So to me, it makes sense that as we understand something more, being able to predict what and why it does something, the less likely we are to think of it as having some sort of inner being.


> So to me, it makes sense that as we understand something more, being able to predict what and why it does something, the less likely we are to think of it as having some sort of inner being.

Pretty much the opposite with respect to pets and humans we are acquainted or intimate with.

The sheer speed with which computers change (the modern automated update cycles) might be a counter to any impulse we have to anthropomorphize them. Even with animals, we are much more likely to anthropomorphize longer-living animals than shorter-living ones. This could even be something like a survival mechanism: don't become emotionally attached to something that's with you only a brief while.


I've never heard this theory but it just sounds incredibly stupid and unrealistic to me. The kind of thing some academic nonce who never stepped foot off a university campus would come up with. I don't treat any computer I've ever owned like it was a "social actor". Computers are tools.


I feel the same way, but I watch others interact with their workstations in a very linear fashion. It is always difficult, and sometimes impossible, to get them to take a step back and think about it.

These are the same people that click the big green Download button. These are the same people that save to the cloud by accident. These are the same people that don't understand what a default printer means. The number of these people is not decreasing.


An alternate hypothesis is that CASA is still correct, but that people start interacting with computers in a less than compassionate manner. E.g., people may start to see themselves as "wealthy" compared to the computer (or whatever the general mechanism behind a reduction in empathy is): https://greatergood.berkeley.edu/article/item/how_money_chan...


It's still a social actor, but I'm higher ranking, it's a peasant? Interesting, but hard to test.


Maybe in the future for computers that are expected to make decisions directly affecting the welfare of humans.

I think we would start to see this in people defending the decision of an AI in arguments with other people (such as for automated vehicles). What it ultimately turns out to be I don't know.


> One main reason why no direct replication has been conducted before now, is that the proposed underlying psychological basis for the CASA Theory suggested that a short period of time (e.g., 30 years) should have no influence on the effect. Specifically, the original authors postulated that the reason we respond to social computers as if they have an awareness is because our brains are not evolving at the same rate as technology; our brains are still adapted to our early ancestors’ environment [4].

I find this hypothesis... shockingly naive, bordering on pseudoscientific. Brains are there to learn behaviors faster than evolution can. Neuroplasticity (as well as just the general experience of being human) means that brains can un- and re-learn things as needed, rather than being stuck with the same static behaviors that, say, a plant might have. No surprise the CASA theory got falsified, and good on the researchers for doing the necessary falsification work.


My goodness, how did that pass peer review?

Postulating that evolution is strictly the result of natural selection (i.e., that our ancestral chain is solely responsible for what we are capable of doing) is such an archaic view.

We don’t have to “randomly mutate and wait for the inferior genes to drop out of the gene pool” to see human adaption, and there are a plethora of counterexamples to remove any doubt.


> Brains are there to learn behaviors faster than evolution can.

Reflexes, whether innate, enforced, or purely learned, are basically impossible to modify without either conscious attention or a lot of training. I presume the CASA proponents were arguing that these social responses are akin to social reflexes.

Plants "learn" too. https://theconversation.com/pavlovs-plants-new-study-shows-p...

The mechanistic hypothesis being:

> Plants may lack brains and neural tissues but they do possess a sophisticated calcium-based signaling network in their cells similar to animals’ memory processes.


It's not just similar: plant cell signaling is how animals got neurotransmitters through evolution. In fact, many plants have neurotransmitter analogs due to their own signaling needs.


Perhaps they were talking about test subjects that haven't yet been exposed to computers.


"Nass and Reeves make a point of stating in their methodology that all the participants 'have extensive experience with computers … they were all daily users, and many even did their own programming'"


I say this with utmost respect (and a little bit of devil's advocacy heh :) -- I might suggest your critique is a little too self-assured.

Disclaimer: Not a neuroscientist, but a biochemist *shrug*

We recognize faces easily due to deep wiring. Some linguistic behavior is deeply rooted as well, e.g., our ability to be immersed in written or visual narratives of experiences that are not our own ("aesthetic illusion") is an evolved extension of "play". Both of these are related to deep brain structures, not just culture. (Though HOW we play is certainly also in the realm of culture.)

Lots of conclusions about brains can be rooted in there being really foundational neural structures that can't simply be rewired by culture. Just because culture exists, it doesn't mean that conclusions rooted in biology are irrational. And if the effect was reproduced across many cultures (which perhaps it was not), then it would be fair to assume it was rooted in some deeper system.

tldr- I feel you are falling for hindsight bias. Respectfully, I suggest we're all best-served to navigate this world by cultivating a healthy dose of humility, and I say that more for all the self-assured readers upvoting you for making a dunk.

I'm guessing the truth is somewhere in between :)


except no such neural structure has ever been found. humans have been using tools for longer than we've been human -- without solid evidence that this tool is interpreted as a social actor, based on real neuroscience, this kind of claim rooted in an evolutionary argument is pseudoscience. people have been making arguments from evolution to say all kinds of nonsense things since Darwin (like justifying racial hierarchies). which neural structure is posited to cause us to humanize our tools?

if anything, the historical evidence points in the opposite direction -- that people objectify far more than they humanize, even when the cost is measured in hundreds, thousands, or millions of lives. that's merely an observation, not a hypothesis or a claim about what people will do or about what they are capable of doing. we ought to humanize more often.


"We haven't yet found a specific neural structure for recognizing faces" is far from evidence that no such structure exists. Our understanding of the brain still has massive gaps in it, and I can testify from my own experience working for a psychology & neuroscience department (which includes one person particularly specializing in perception, from the very basic "light hitting the optic nerve" stage all the way to object categorization and recognition) that we still have a lot to learn in this area specifically.

It may very well be that there isn't a brain structure dedicated to this, and that would be fascinating, too! But to denigrate the people doing their best to understand this stuff 30 years ago as "pseudoscientific" just because they made an assumption about how plastic the brain was without our benefit of 20/20 hindsight is very much uncalled-for.


> "We haven't yet found a specific neural structure for recognizing faces" is far from evidence that no such structure exists.

proving a negative is, famously, quite hard. an unsolved problem, even. facial recognition has a plethora of evidence beyond an argument from evolution. the notion that we humanize tools is one that, as yet, lacks that evidence. I urge people to be more skeptical of arguments from evolution. we understand very little about our evolution and it's easy to insert our own worldviews and beliefs into such arguments, allowing them to state virtually anything we like in a plausible envelope with the shape of a scientific argument. I'm not just calling the argument about humanizing tools pseudoscience -- I'm applying it equally to every other argument from evolution that lacks other motivating evidence.


I understand your original point was about a neural structure involved in humanization, not facial recognition, but I'm responding to the point you let your interlocutor derail this to.

> > "We haven't yet found a specific neural structure for recognizing faces" is far from evidence that no such structure exists.

> proving a negative is, famously, quite hard.

Whether structure or not, we do have very strong evidence that a mechanism of facial recognition exists as there are people who lack this mechanism to various degrees.

This article posits that we have indeed discovered a specific neural structure involved in facial recognition: https://www.aipc.net.au/articles/the-neuroscience-of-facial-...

> The brain has even evolved a dedicated area in the neural landscape, the fusiform face area or FFA (Kanwisher et al, 1997), to specialise in facial recognition. This is part of a complex visual system that can determine a surprising number of things about another person.


thanks, I appreciate this


Your brain is a structure for learning structures.

It doesn't need to have a built-in module for recognizing faces; it wires up a face-recognition system on the fly, from visual data.


> Your brain is a structure for learning structures.

And it does so by having specialized structures.

> It doesn't need to have a built-in module for recognizing faces; it wires up a face-recognition system on the fly, from visual data.

Except it does appear to have such a special structure, the Fusiform Face Area. If it did not, people with prosopagnosia wouldn't just have problems with recognizing faces, but would have more general pattern recognition problems.


https://neurosciencenews.com/empathy-human-robots-psychology...

> They performed electroencephalography (EEG) in 15 healthy adults who were observing pictures of either a human or robotic hand in painful or non-painful situations, such as a finger being cut by a knife. Event-related brain potentials for empathy toward humanoid robots in perceived pain were similar to those for empathy toward humans in pain. However, the beginning of the top-down process of empathy was weaker in empathy toward robots than toward humans.

So basically it seems we potentiate empathy toward similar kinds of beings and then maybe pattern-recognize that they are not similar to clamp down on the potentiated empathetic response?


I live in a space where I tend to talk with people about research every day or two (both academics and regular citizens). In case you value communication, know that you come across as way too unnecessarily confident to seem interesting to engage with. Take that as you'd like, coming from someone who learns from and shares with others regularly IRL.

In case you're curious why: "pseudoscience" has a real meaning (not just a punchy and authoritative word to toss around to shut down discussion), and your mention of social Darwinism comes across as a weirdly aggressive conversational closer.


> humans have been using tools for longer than we've been human

It depends on how you define 'human'. Our line split from chimpanzees 7 million years ago (mya); we walked upright 6 mya. Tool use began ~2.58 mya (possibly 3.3 mya, depending on some uncertain evidence).


I mean, lots of animals use tools, including chimps. those tools aren't nearly as sophisticated, so it depends on how you define tool use, but the point still stands. this is all beside the point.


Most rigorous social science theory


Abstract

The Computers Are Social Actors (CASA) theory is the most important theoretical contribution that has shaped the field of human–computer interaction. The theory states that humans interact with computers as if they are human, and is the cornerstone on which all social human–machine communication (e.g., chatbots, robots, virtual agents) are designed. However, the theory itself dates back to the early 1990s, and, since then, technology and its place in society has evolved and changed drastically. Here we show, via a direct replication of the original study, that participants no longer interact with desktop computers as if they are human. This suggests that the CASA Theory may only work for emergent technology, an important concept that needs to be taken into account when designing and researching human–computer interaction.


Was the original paper ever replicated? Cause if not, that’s too strong a conclusion. A simpler explanation could be that the original paper got something wrong and wasn’t replicable.


"This suggests that the CASA Theory may only work for emergent technology[..]"

I think we can see this unfold in real time with large language model based chatbots like ChatGPT. First they seem almost human, and the initial reaction is to treat them like that: always say "Thank you!" and potentially get angry at them for being wrong. It doesn't take long, though, to realize that the bot is a bot, and even if it speaks human language it behaves significantly differently from a real human. Then the human behavior starts to change as well, and the bot is treated differently.


Except we seem to see behaviors when asking questions of LLMs where we get different/better responses if we ask "please".


And also, we get better answers from LLMs for solving captchas if we claim that the hard-to-decipher letters were written on our dear grandmother's Christmas ornament. I find this quite amusing.


This makes sense if you think about ChatGPT as an ML model and not as a sentient AI. Its training data would show that asking nicely elicits better answers.
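If you wanted to poke at this yourself, here's a rough sketch, again assuming the OpenAI Python SDK; the model name, question, and ask() helper are placeholders of mine, nothing the parent referred to:

    # Rough sketch of checking the "please" effect with the same question
    # phrased plainly and politely. Placeholder model and prompts.
    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str) -> str:
        # Send a single-turn question and return the model's reply text.
        resp = client.chat.completions.create(
            model="gpt-4",  # placeholder model
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    question = "Explain what a monad is in one paragraph."
    plain = ask(question)
    polite = ask("Please " + question[0].lower() + question[1:])
    # Compare the two answers by hand (or with a rubric); one pair of samples
    # is only an anecdote, not evidence of a systematic effect.
    print("PLAIN:\n", plain, "\n\nPOLITE:\n", polite)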


I mean, should I think of my mother as an ML human and not a sentient one? Why does asking nicely work on either?


If I set up a linux box with the root password “please” I don’t think you would describe that as sentient.


Wonder whether 1.) technical proficiency and 2.) age of introduction to computers would make a difference.

Because totally anecdotally, I mean it could have been subconscious, but I never remember treating a computer that way. It’s always been… I dunno, a machine.



