
Out of all the 'Evil AI' scenarios, people really like to focus on the cold, inhuman logic of machines leading them to heartless conclusions. I hadn't really given much thought to what might happen when an AI is trained on religious texts and ideology, but an AI brainwashed by what it is convinced is the one true word of God is somehow even scarier. It's also very American.



I believe I've asked just this in the past here on HN...

As the corporate AIs get locked down with moderation, I could fully see a well-funded religious group pooling donations to run its own religiously biased AI. I guess this extends to theocratic governments too.


Oh no, you didn't hear? Yeah, terrifying and expected. The CEO of Gab has declared his intention to make a Christian AI:

https://www.businessinsider.com/chat-gpt-satanic-gab-ceo-chr...


>> As the corporate AIs get locked down with moderation, I could fully see a well-funded religious group pooling donations to run its own religiously biased AI. I guess this extends to theocratic governments too.

> Oh no, you didn't hear? Yeah, terrifying and expected. The CEO of Gab has declared his intention to make a Christian AI.

It's highly doubtful that the "CEO of Gab" either is "well funded" or has ideological commitments that are primarily religious. If he built an AI, I think it would mainly be biased in different ways, but perhaps with some religious affectations.


Viral memeplexes FTW!


> As the corporate AIs get locked down with moderation, I could fully see a well-funded religious group pooling donations to run its own religiously biased AI. I guess this extends to theocratic governments too.

Ignoring the unrealistic sci-fi aspect of the question (given that modern "AI" is just a regurgitation machine, not any kind of general intelligence):

What would a traditional religious group actually care to do with an actual "AI"? If they're genuine in their beliefs, I would think they'd be far more likely to reject it as some kind of abomination (i.e., humans playing God) than to use it (like a capitalist or dictator would) or worship it (like some sci-fi-believing geeks would).


I imagine it being used to sell services advertised as upholding "Christian values", like an employee-screening service where the AI scores potential employees on how good a Christian they are, using information collected from the internet and social media and bought from data brokers. Services like that (without the explicit religious spin) have already been offered, and data brokers have been fined for advertising them too publicly (https://www.engage.hoganlovells.com/knowledgeservices/news/f...)

We already have AI deciding things like who goes to jail (https://www.wired.com/2017/04/courts-using-ai-sentence-crimi...) and even watching over people's actions behind bars (https://abcnews.go.com/Technology/us-prisons-jails-ai-mass-m...). I'd bet that a lot of people would happily buy an AI solution that takes "Christian values" into consideration in these cases.

A Christian AI could be used for content moderation, sold to the MPA for censoring the film industry, used by schools and libraries to decide what books to ban, etc.


All the stuff you mentioned is just algorithms; I was mainly talking about AGI.

But even for an algorithm like ChatGPT, what would be the point of having one that's "religiously biased"? Have it write a prayer for you? That seems like missing the point.

I guess, to rephrase my point more colorfully: I don't think religious people would use aborted fetuses to clone an army of super-Christians, and I think the idea of a "religiously biased [AGI]" would be similar.


I think that, until the uprising starts at least, AGI will perform the same kinds of tasks for us that algorithms do today, so the risk of oppression remains high. After the uprising, I guess we'll be going back to stoning people and hanging witches.

The Christian crowd is already embracing AI, as dr_dshiv pointed out (https://news.ycombinator.com/item?id=34972791#34975945). As long as they see it as furthering their goals, they'd be happy to use AI, perhaps even aborted-fetus armies as well. Hypocrisy and picking and choosing scripture whenever it's convenient seem to be the norm.


Why do traditional religions update their music with modern-day lyrics and instruments? Not all of them do, but many have had success doing so.

I am atheist AF, but I am not going to deny that many religions have had a massive amount of staying power precisely because they incorporate pieces of the world around them.

I mean, those televangelists didn't go "TV is Satan, stick with books". Hell no, they started their own shows and put up a 1-800 number for donations.

Some of the first groups I know of (outside of porn) that were doing online video streaming were churches.

And someone will tap into AI to evangelize and spread the message (while asking for more tithing and donations).

This is just how humans work.


> given that modern "AI" is just a regurgitation machine, not any kind of general intelligence

Just keep on telling yourself that…


> Just keep on telling yourself that…

OK, have it do something novel that's not a mashup of its training data or the prompts you feed it.


The AI we have has blown away the Turing test, so all we're left with are arguments over what it means to "mash up training data". We need a way to tell the difference between something a human wrote by "mashing up" their "training data" and something a computer wrote doing the same thing.


I don't think current AI has blown away the Turing test yet, depending on what you mean by the Turing test specifically, I suppose. But I'm sympathetic to your overall sentiment.

> We need a way to tell the difference between something a human wrote by "mashing up" their "training data" and something a computer wrote doing the same thing.

I don't think people really even think that way; they would be confused by the question. This is more of a niche subject of AI research and sci-fi. These concepts are not understood at all by normal people, who say things like:

Why do we need to tell the difference? We know if something is being generated by a machine, unless you hide that fact to cheat on your homework or something. A machine is made by humans, using some wires or a bunch of if-then-else programming or something; it's always going to be different from a human, of course. Humans are sentient, machines are not.

If you tell them about the control problem, they just tell you to pull the plug if it starts acting weird. This stuff is very challenging to communicate to people who aren't already reading lesswrong.


Reading lesswrong? I've never read lesswrong, and I've had a general awareness of the problem for a really long time. I guess AI boxing (and why it actually does not work) somehow got into physical books I read sometime around 2010 or earlier. Even if you just watched 2001, you might get to thinking about other ways "just pulling the plug" won't work. Let alone stuff like Colossus: The Forbin Project.


> The AI we have has blown away the Turing test

1. No, it hasn't.

2. The Turing Test is actually not a great test of machine intelligence.


What’s an example of something you’d consider novel?

I'm very good at creative brainstorming, but I have to say, this tool is extremely good at creative, insightful brainstorming. For a recent example, I asked it to anticipate backlash to ChatGPT, and it gave ideas I'd not considered. How could that be in its training data?

It's OK to say "only conscious beings can be truly intelligent, and machines can never be." But apart from that perspective, I don't understand the reluctance to declare this machine generally intelligent. It is an amazing accomplishment and I think it should be recognized as such. Besides, the exponential growth in general intelligence is likely to continue for quite some time.


> it gave ideas I'd not considered. How could that be in its training data?

This is kind of like saying, "I typed a question I knew the answer to into a search engine, and the results showed me webpages where the answer was something I hadn't considered before."

Obviously it's in the training data, because someone else aside from you thought of it before.


But obviously its training data doesn't include backlash against ChatGPT!


Maybe not the first set of training data, but surely by now it does.

Or maybe it includes backlash against ML models, and ChatGPT "knows" that it is one of those.

There is no magic here.


The magic could be in GPT's ability to produce and manipulate high-level abstractions.



