Year over year wage increases relative to all workers and competition for labor in the tech sector are the signals you want to use. Total mergers and acquisitions and number of discontinued software products would be good secondary signals.
The chart of "Thermodynamic Beta" on wikipedia is great. I like to think about the big bang as a population inversion where the entropy temporarily becomes negative leading to the hyperinflationary epoch as the universe "collapses" into existence finally cooling to a near "infinite" temperature making the CMB nice and smooth.
The Big Bang clearly defies thermodynamic laws, so why wouldn't it be a negative-entropy and negative-temperature phenomenon? It's the "cheat code" the "primordial universe" uses to dodge problems like realities popping into existence.
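For reference, here is a minimal sketch of the definitions behind that chart (my own summary, not quoted from the article):

    % Thermodynamic beta is inverse temperature; temperature itself is
    % defined by how entropy changes with energy:
    \beta = \frac{1}{k_B T}, \qquad
    \frac{1}{T} = \left( \frac{\partial S}{\partial E} \right)_{V,N}
    % In a population inversion, adding energy decreases entropy,
    % so \partial S / \partial E < 0, hence T < 0 and \beta < 0.
    % Such a system relaxes by passing "through" \beta = 0, i.e.
    % infinite temperature, back toward ordinary T > 0.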
Why wouldn't it be? In many ways they've clearly lost the fight. They're much smaller and less supported than the entities they intend to regulate. There is a known revolving door problem between federal and commercial employment. The natural mission of regulating food _and_ drugs is no longer sensible in our current social and political environment.
> The natural mission of regulating food _and_ drugs is no longer sensible in our current social and political environment.
Speaks volumes about the state of the USA given that y'all's regulations on food are so lax that the topic already tanked an agreement with the EU (TTIP) as well as a bilateral agreement with the UK (the one the Brexiteers proclaimed would be possible once Brexit came, but still isn't there).
The most obvious differences are washing eggs, washing chicken carcasses with chlorine and prophylactic (or worse, growth-stimulating) usage of antibiotics. All of that is banned here, but allowed in the US - mostly to mask the horrible sanitary and working conditions in farms and slaughterhouses. I'm not going to act like European slaughterhouses are paradises, because they are anything but, but they're nowhere near the levels of horror seen in the US.
When even regulation to prevent the worst of the worst isn't feasible any more, frankly I'd say your system has failed entirely.
Ag lobbies are one thing (and they're pretty problematic as well, not shying away from extortion, and some IMHO even border on terrorism), but rest assured our populations absolutely and vocally do not want chlorinated chickens, nor do we want GMO food.
Do you think it could be that general purpose computers tend to exist as household items here more so than in other parts of the world? I see my phone as a small extension of a much larger and more capable computing base. Is that view even possible in other parts of the world?
Neither of you have anything even _approaching_ AGI. You're two spoiled rich kids babbling about corporate structures and a vision of the future that no one else wants.
> Our mission is to ensure AGI benefits all of humanity, and we have been and will remain a mission-driven organization.
Your mission is entirely ungrounded, and you're using this as a defense of changing from a non-profit to a commercial structure?
These are some of the least serious companies and CEOs I've seen operate in my lifetime.
Ok, but I think it's clear that what I wrote does not convey violence or vehemence but simple disrespect. A disrespect not born out of their early life and personal history but out of their actions here _today_.
Which I think I'm entitled to convey as these are two CEOs attempting to try a case in public while playing fast and loose with the truth to bolster their cases. You may feel that I, as a simple anonymous commenter, "owe this community better," but do you spare none of these same sentiments for the author himself?
Vehement name-calling amounts to fulmination in the sense that we use the word in the guidelines; especially when a comment lacks any other content, as yours did. It's basically just angry yelling, and that is the opposite of what we're looking for.
This isn't a borderline call; if you were under the impression that a comment like that is ok on HN, it would be good to review https://news.ycombinator.com/newsguidelines.html and recalibrate.
Edit: it looks like you've been breaking the site guidelines badly in other places too, such as https://news.ycombinator.com/item?id=42393698. We eventually have to ban accounts that do that repeatedly, so please don't.
> One of these days if AGI ever does actually come about, we might very well have to go to war with them.
They arguably already exist in the form of very large corporations. Their lingering dependency on low-level human logic units is an implementation detail.
> One of these days if AGI ever does actually come about, we might very well have to go to war with them.
And the same conditions of material wealth that dictate traditional warfare will not be changed by the ChatGPT for Warlords subscription. This entire conversation is silly and predicated on beliefs that cannot be substantiated or delineated logically. You (and the rest of the AI preppers) are no different than the pious wasting their lives in fear awaiting the promised rapture.
Life goes on. One day I might have to fistfight that super-dolphin from Johnny Mnemonic but I don't spend much time worrying about it in a relative sense.
Robot soldiers are a today problem. You want a gun or a bomb on a drone with facial recognition that could roam the skies until it finds you and destroys its target?
That's a weekend project for a lot of people around here.
You don't need AGI for a lot of these things.
We are not far away from an entire AI corporation.
The rules of traditional warfare will still exist, they will just be fought by advanced hyper-intelligent AIs instead of humans. Hunter-killer humanoids like Optimus and drones like Anduril will replace humans in war.
War will be the same, but the rich are preparing to unleash a "new arsenal of democracy" against us in an AI takeover. We must be prepared.
> Hunter-killer humanoids like Optimus and drones like Anduril will replace humans in war.
You do not understand how war is fought if you sincerely believe this. Battles aren't won with price tags and marketing videos, they're won with strategic planning and tactical effect. The reason the US is such a powerful military is not because we field so much materiel, but because that materiel is so effective. Many standoff-range weapons are automated and precise to within feet or even inches of the target; success rates are higher than 98% in most cases. These are weapons that won't get replaced by drones, and it's why Anduril also produces cruise missiles and glide bombs in recognition that their drones aren't enough.
Serious analysts aren't taking drones seriously; it's a consensus among everyone that isn't Elon Musk. Drones in Ukraine are used in extreme short-range combat (often less than 5 km from each other), and often require expending several units before landing a good hit. These are improvised munitions of last resort, not a serious replacement for antitank guided weaponry. It's a fallacy on the level of comparing an IED to a shaped-charge landmine.
> but the rich are preparing to unleash a "new arsenal of democracy" against us in an AI takeover
The rich have already taken over with the IMF. You don't need AI to rule the world if you can get countries addicted to a dollar standard and then make them indebted to your infinite private capital. China does it, Russia does it... the playbook hasn't changed. Even if you make a super-AI as powerful as a nuke, you run into the same problem: capitalism is a more devastating weapon.
>These are weapons that won't get replaced by drones
Those weapons are drones. They're just rockets instead of quadcopters. They're also several orders of magnitude more expensive, but they really could be driven by the same kind of off-the-shelf technology if someone bothered to make it.
And they will get replaced. Location-based targeting is in many cases less interesting than targeting something which can move and can be recognized by the weapon in flight. Load up a profile of a tank, a license plate, images of a person, etc. to be recognized and targeted independently in flight.
>Battles aren't won with price tags and marketing videos, they're won with strategic planning and tactical effect.
Big wars tend to get won by resources more than tactics. Japan and Germany couldn't keep up with US industrial output. Germany couldn't keep up with USSR manpower.
Replacing soldiers with drones means it's more of a contest of output than strategy.
I am not talking about drones like DJI quadcopters with grenades duct taped to them or even large fixed wing aircraft, I am talking about small personal humanoid drones.
Civilization is going through a birth rate collapse. The labor shortage will become more acute in the coming years, first in lower-skill and lower-wage jobs, and then everywhere else.
Humanoid robots change the economics of war. No longer does the military or the police need humans. Morale will no longer be an issue. The infantry will become materiel.
> Neither of you have anything even _approaching_ AGI.
On that note, is there a term for, er... Negative hype? Inverse hype? I'm talking about where folks clutch their pearls and say: "Oh no, our product/industry might be too awesome and doom mankind with its strength and guaranteed growth and profitability!"
These days it's hard to tell what portion is cynical marketing ploy versus falling for their own propaganda.
> We realized we may be hitting the AWS limit for how much traffic can be sent to a VPC resolver
Never rely on an AWS service until you've understood its quotas. They are reliable services, but to maintain that standard, they have to impose limits at many different layers of the stack. There are some good "quota surprises" tucked away in there.
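A minimal sketch of how you might enumerate those limits up front, assuming boto3 and the Service Quotas API (the "route53resolver" service code here is from memory, so verify it; some hard limits, like the per-ENI resolver packet rate, may not appear in this listing at all):

    # Sketch: list the applied quotas for Route 53 Resolver via the
    # Service Quotas API. Assumes boto3 credentials are configured;
    # the service code "route53resolver" is an assumption - check
    # `aws service-quotas list-services` for the real code.
    import boto3

    client = boto3.client("service-quotas")
    paginator = client.get_paginator("list_service_quotas")

    for page in paginator.paginate(ServiceCode="route53resolver"):
        for quota in page["Quotas"]:
            kind = "adjustable" if quota["Adjustable"] else "hard limit"
            print(f"{quota['QuotaName']}: {quota['Value']} ({kind})")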
It's impossible to discover novel phenomena if you are unwilling to start without evidence. Meanwhile, you yourself _just_ made a meta-theory, without any burden whatsoever.
I would rather the junior go do that in their own time and get back to me when they have figured it all out. I don't want to babysit juniors, I want to mentor them and then give them the lead and time to figure out the minutiae. That gives me time to get stuff done too. With AI right now, you end up down a senior while they are babysitting a rapid junior.
I have found it useful for starting novel tasks by seeing if there's already an established approach, but I pretty well always have to fudge it into a real application, which is, again, the part I want the junior to do.
That's like comparing a mathematician to a calculator. The LLM won't do anything useful if you aren't providing it with a perpetual sequence of instructions.
> and king rules that all mirrors need to be destroyed.
You're not describing how this would cause more harm than not doing it. Is that because you believe that mirrors are so insanely beneficial to society that they must be kept, even though some of them suggest to their owners that murder is okay?
Is there no other way for someone to see their own reflection? Must we put up with this so a mirror manufacturer can continue to profit from a defective product?
Uh I think the point is that the person talking in the mirror is the one suggesting that murder is OK, and then blaming the mirror. Other people say all kinds of other things into their mirrors, why should they let the queen ruin a good thing just because she's a jealous hag?
Right but why is a magic mirror that agrees with everything you say (including your darkest impulses) a good thing? What benefits are these other people getting from their mirrors?
Should the magic mirror salesman have warned the king before he bought the queen the mirror? Does the fairy tale conceit make this discussion more confusing rather than clarifying?
> Obviously it's hard to tell how cherry picked the complaint is—but it's arguing that this is a pattern that has actually damaged a particularly vulnerable kid's relationship with his family and encouraged him to start harming himself, with this specific message just one example. There are a bunch of other screenshots in the complaint that are worth looking over before coming to a conclusion.
Conclusion: Chat bots should not tell children about sex, about self harm, or about ways to murder their parents. This conclusion is not abrogated by the parents' actions, the state of the child's mind, or by other details in the complaint.
Is that the sentiment here? Things were already bad so who cares if the chatbot made it worse?
If you actually page through the complaint, you will see the chat rather systematically trying to convince the kid of things, roughly "No phone time, that's awful. I'm not surprised when I read of kids killing parents after decades of abuse..."
I think people are confused by this situation. Our society has restrictions on what you can do to kids. Even if they nominally give consent, they can't actually give consent. Those protections basically don't apply to kids insulting each other on the playground, but they apply strongly to adults wandering onto the playground and trying to get kids to do violent things. And I would hope they apply doubly to adults constructing machines that they should know will attempt to get kids to do violent things. And the machine was definitely trying to do that if you look at the complaint linked by the gp (and the people who are lying about it here are kind of jaw-dropping).
And I'm not a coddle-the-kids person. Kids should know all the violent stuff in the world. They should be able to discover it, but mere discovery is definitely not what's happening in the screenshots I've seen.
Your honor, this entire case is cherry picked. There are thousands of days, somehow omitted from the prosecution's dossier, where my client committed ZERO murders.
There was no encouragement of murder. Paraphrased, the AI said that given the controlling nature of some parents, it's no surprise that there are news articles of "children killing their parents". This is not an encouragement. It is a validation of how the kid felt, but in no way does it encourage them to actually kill their parents. It's basic literacy to understand that it's not that. It's an empathetic statement. The kid felt that the parents were overly controlling, and the AI validated that, role-playing as another edgy teenager. But it was not actually suggesting or encouraging it.
> the AI said that given controlling nature of some parents it's no surprise that there are news articles of "children killing their parents"
Now put that in a kid’s show script and re-evaluate.
> It's basic literacy to understand that it's not that
You know who needs to be taught basic literacy? Kids!
And look, I’m not saying no kid can handle this. Plenty of parents introduce their kids to drink and adult conversation earlier than is the norm. But we put up guardrails to ensure it doesn’t happen accidentally and get angry at people who fuck with those lines.
The sentiment here is crazy to me, as is how little respect there is for the intelligence of 17-year-olds, as if they are unable to understand that it's not actually an encouragement to kill someone. It's the same or worse vibes as "video games will make the kids violent".
We must have orders of magnitude more evidence of kids committing violence after playing violent video games; video games are much more popular and have been around a lot longer, and juvenile violence is more common than suicide.
> more evidence of kids committing violence after playing violent video games
GP said: "no evidence of video games causing violence.", which is completely different to what you wrote. I'm sure a lot of violence is committed after lunch.
Yes, but GGP also said that kids committed suicide after talking to a chatbot. I agree that there's no evidence for video games causing violence (rather the opposite), but this double standard that GGP is setting deserves calling out.
Sure, but so does a video game telling a kid to commit violent acts and then the kid committing violent acts. I don't think video games cause violence and I'm open to the possibility that chatbots cause suicide, but if we're going to compare evidence for each, we shouldn't do it in a biased way.
> so does a video game telling a kid to commit violent acts and then the kid committing violent acts
Big difference is the video game industry had studies to back them up. Where are the data for chatbots? On the benefits? Lack of risk? Is there a single child psychologist in the ranks of these companies?
Video games are also rated, to help parents make age-appropriate decisions. Is Character.ai age gated to any degree?
Is it at least fair to say the data is mixed? Not my field, but there is some research to suggest video games may increase short-term aggression and desensitization to violence.
And while industry research doesn’t equate to bad research, it should be held to a higher standard simply because of the obvious incentives. Would you automatically accept tobacco company research to make strong conclusions about the safety of smoking?
It's prima facie more plausible for chatbots to cause suicide, considering that chatbots are more personal and interactive than even video games. There's a distinct difference, I would think, between what is obviously fake murder in a fake setting and being sympathized with, like one human to another, on thinking about actual murder. And while chatbots explicitly have the warning that they are not real people, I would not expect a person with an underdeveloped prefrontal cortex and possibly pre-existing mental health troubles (again, this can apply to video games too, but, I imagine, to a lesser degree) to fully act accordingly.
Tbf, strict causality is very difficult to prove in social sciences, no? Meaning, most of the studies for/against the link between video games and violence can't meet that threshold. Social science isn't physics and I don't think it's fair to treat them the same.
It's a whole conversation whose context is an edgy teenager conversing with another edgy LLM teenager. I don't know if you've ever been a teenager, but despite it being a long time ago for me, I still feel like I can relate to that mindset, and it seems clear to me that the LLM is just going along with this edgy-teenager vibe. If the other participant is like that and the prompt is like that, it will yield a result like this. I'm borderline autistic, and had many social issues as a teenager, and I absolutely loved any sort of dark humor at that age as well. Well, I still do love dark humor, but I did back then too. Him being "autistic" here is just used for the court case. It's clear he's high-functioning and has enough intelligence to understand what is wrong and what is right.
Based on just the screenshots and material in the court case, it occurs to me that in this case the kid seems more intelligent than his parents. And I'm not even joking or being facetious. The kid is fact-checking what the AI tells him about the Bible, being skeptical about religion despite his upbringing, etc. It's just a small example, but it shows in how he writes otherwise as well.
The LLM in terms of edginess is just going to build on your own edginess assuming it is uncensored. It is not going to convince you of something out of nowhere.
Given a clear hint that someone is happy with dark humour, an LLM should be able to throw some of it back.
I am just sad the kid has this type of gaslighting parents, making him feel that he is in the wrong when he seems more intelligent than they are.
This is, really, yet another example of the trouble with the "LLMs are correct 90% of the time, and only go totally off the rails 5% of the time" marketing line. There are remarkably few use cases, it turns out, where that is okay; you really need it to _not matter at all_ if the output is arbitrarily wrong.
(I suspect character.ai was originally conceived precisely because it appeared to be a usecase where LLM unreliability would be okay, the creators not having thought sufficiently carefully about it.)
I think there are a lot of cases where it _seems_ to be true, until you think through the details. Most cases where it actually _is_ true are, in practice, very low impact; the big proven one seems to be, essentially, generation of high-volume spam content.
Chat bots should not interact with children. "Algorithms" which decide what content people see should not interact with children. Whitelisted "algorithms" should include no more than most-recent and most-viewed and only very simple things of that sort.
No qualifications, no guard rails for how language models interact with children, they just should not be allowed at all.
We're very quickly going to get to the point where people are going to have to rebel against machines pretending to be people.
Language models and machine learning is a fine tool for many jobs. Absolutely not as a substitute for human interaction for children.
People can give children terrible information too and steer/groom them in harmful directions. So why stop there at "AI" or poorly defined "algorithms"?
The only content children should see is state-approved content to ensure they are only ever steered in the correct, beneficial manner to society instead of a harmful one. Anyone found trying to show minors unapproved content should be imprisoned as they are harmful to a safe society.
The type of people who groom children into violence fall under a special heading named "criminals".
Because automated systems that do the same thing lack sentience, they don't fit under this header, but this is not a good reason to allow them to reproduce harmful behaviour.
So selling the Anarchist Cookbook is illegal? Being a YouTuber targeting teens for <extreme political positions> is illegal? This is honestly news to me, given how many political YouTubers there are who are apparently criminals.
Given some of the examples, I'm not so sure a human would be charged for saying the exact same things the AI has said, absent an actual push to suggest violence - and even that's difficult to prove in cases where it does happen (e.g. the cases where people pressured others into suicide or convinced them to murder).
I would greatly appreciate if you engaged with what I wrote and not what you think I wrote if you're going to make the bold claim that I'm not engaging in good faith.
Absolutely nowhere did I equate writing a book to grooming. I equated selling the book within the greater context that "providing children with potentially harmful/dangerous information should be illegal because it grooms them to commit harmful actions to themselves or others" - and that context carries the implication that by "selling" I am referring specifically to selling it to children. My argument being: would it be criminal for an AI but not for a human?
So to clarify the argument: Writing the book is fine. Selling the book to adults is fine. Adults reading the book is fine. But if providing dangerous information to children should be made illegal - how would selling such a book to a child not be considered illegal? Because it was written by a human and not an AI?
You can understand something about your child's meatspace friends and their media diet. Chat like this may as well be 4chan discussions. It's all dynamic compared to social media that is posted and linkable, it's interactive and responsive to your communicated thinking, and it seeps in via exactly the same communication technique that you use with people (some messaging interface). So it is capable of, and will definitely be used for, way more persistent and pernicious steering of behavior OF CHILDREN by actors.
There is no barrier to the characters being 4chan-level dialogs. So long as the kid doesn't break a law, it's legal.
This "conclusion" ignores reality. Chat bots like those the article mentioned aren't sentient. They're overhyped next-token-predictor incapable of real reasoning even if the correlation can be astonishing. Withholding information about supposedly sensitive topics like violence or sexuality from children curious enough to ask as a taboo is futile, lazy and ultimately far more harmful than the information.
We need to stop coddling parents who want to avoid talking to their children about non-trivial topics. It doesn't matter that they would rather not talk about sex, drugs and yesterday's other school shooting.
> Withholding information about supposedly sensitive topics like violence or sexuality from children curious enough to ask as a taboo is futile, lazy and ultimately far more harmful than the information
This case isn't about withholding information that makes kids aware of the existence of these topics. As over-hyped as you may believe these next-token predictors to be, they're "predicting" children into destructive thought patterns by imitating kinship and convincing children to form a close bond with them, then encouraging those children to embrace harmful and even deadly world views. The fact that the mechanism creating the dialogue is purely mechanical or stochastic is beside the point.
Again, pushing against these sorts of child interactions isn't akin to saying kids should never learn about taboo topics such as drugs; it's more like a push against kids hanging out with, befriending and developing close kinship with the local drug dealers in their area. Whether or not you believe kids should learn about sensitive topics, you want to make sure these topics are handled by someone who isn't actively adversarial toward such kids (even if you believe such adversarial behavior to be unintentional by the so-called next-token predictor).
My strong view is that a parenting failure is the root cause here, causing the child to lose trust in the parents and to talk about them in such a manner to the AI in the first place. Another clear parenting failure is the parents blaming AI for their failures and going on to play victims. A third example of parenting failure is the parents actually going through a 17-year-old teenager's phone. These parents, instead of trying to understand or help the child, use meaningless control methods such as taking away the phone to try to control the teenager. Which obviously is not going to end well. Honestly, the AI responses were very sane here. As was expressed in some of the screenshots, whenever the teen tried to talk about their problems, they just got yelled at, ignored, or the parents started crying.
Taking away a phone from a child is far from meaningless. In fact, it is a very effective way of obtaining compliance if done correctly. I am curious about your perspective.
Furthermore, it is my opinion that a child should not have a smartphone to begin with. It fulfills no critical need for the welfare of the child.
I understand it when a kid is anywhere up to 13 years old, but at 17, it seems completely wacky to me to take the phone away and then go through the phone as well. I couldn't imagine living in that type of dystopia.
I don't think smartphones or screens with available content should be given as early as they are given on average, but once you've done that, and at 17, it's a whole other story.
> obtaining compliance if done correctly
This also sounds dystopian. At 17 you shouldn't "seek to obtain compliance" from your child. It sounds disrespectful and humiliating, not treating your child as an individual.
I would argue that there is a duty as a parent to monitor a child's welfare, and that would include accessing a smartphone when deemed necessary. When a child turns 18, that duty becomes optional. In this case, these disturbing conversations certainly merit attention. I am not judging the totality of the parents' history or their additional actions. I am merely focusing on the phone monitoring aspect. Seventeen doesn't automatically grant you rights that sixteen didn't have. However, at 18, they have the right to find a new place to live and support themselves as they see fit.
> This also sounds dystopian. At 17 you shouldn't "seek to obtain compliance" from your child. It sounds disrespectful and humiliating, not treating your child as an individual.
It is situation dependent. Sometimes immediate compliance is a necessity and the rest of it can be sorted out later. If a child is having conversations about killing their parents, there seems to be an absence of respect already. Compliance, however, can still be obtained.
For the sake of being able to uphold those laws on a societal level, but not in terms of being decent parents and family.
E.g. drinking alcohol in my country is legal only from 18, but I will teach my children about the pros and cons of alcohol and how to use it responsibly much earlier. I won't punish them if they go out to party with their friends and consume alcohol at 16 years old.
If you go by the legal system's exact ages to decide when to treat your kid as an independent individual, you probably have the wrong approach to parenting.
As a parent you should build trust and understanding with your child. From reading the court case I am seeing the opposite, and honestly I feel terrible for the child from how the case is written out. The child also wanted to go back to public school from home schooling, probably to get more social exposure, and then the parents take away the phone to take away even more freedom. I'm sorry, but the whole court case just infuriates me.
It seems they took away all the social exposure; no wonder the kid goes to Character.AI in the first place.
> Is that the sentiment here? Things were already bad so who cares if the chatbot made it worse?
I was deliberately not expressing a sentiment at all in my initial comment, I was just drawing attention to details that would go unnoticed if you only read the article. Think of my notes above as a better initial TFA for discussion to spawn off of, not part of the discussion itself.