Because people would just feel bad about it and keep doing it. I don't care about people feeling bad about immoral behavior, I care about them not doing it in the first place.
"..our initial analysis estimates that around 0.15% of users active in a given week have conversations that include explicit indicators of potential suicidal planning or intent and 0.05% of messages contain explicit or implicit indicators of suicidal ideation or intent."
At roughly 700 million weekly active users, that's more like 1 million people discussing suicide with ChatGPT every week.
For reference, 12.8 million Americans are reported as thinking about suicide and 1.5 million are reported as attempting suicide in a year.
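Rough arithmetic behind those comparisons (the 700 million weekly-active figure is from above; the ~330 million US population figure is my own approximation):

weekly_active = 700_000_000
print(weekly_active * 0.0015)      # ~1,050,000 users per week with explicit indicators

us_population = 330_000_000
print(12_800_000 / us_population)  # ~3.9% reported as thinking about suicide per year
print(1_500_000 / us_population)   # ~0.45% reported as attempting suicide per year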
This ratio allows for a relatively simple 2x2 repeating pattern, which makes interpolating the missing values much simpler.
Also, you don't want the red and blue samples to be too far apart; reconstructing the colour signal is difficult enough as it is, and moiré effects will only get worse if you sample even more sparsely.
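For anyone who hasn't seen it, here's a minimal sketch of the 2x2 RGGB Bayer tile and a naive green reconstruction (illustrative only; real demosaicing is much more sophisticated, and edge handling here just wraps around):

import numpy as np

# The Bayer mosaic repeats every 2x2 pixels: two green samples per tile,
# one red and one blue, so green is sampled twice as densely as red/blue.
#   R G
#   G B
def bayer_mask(h, w):
    mask = np.empty((h, w), dtype="<U1")
    mask[0::2, 0::2] = "R"
    mask[0::2, 1::2] = "G"
    mask[1::2, 0::2] = "G"
    mask[1::2, 1::2] = "B"
    return mask

# Naive green reconstruction: at every red/blue site the four orthogonal
# neighbours are all green, so just average them.
def interpolate_green(raw, mask):
    green = np.where(mask == "G", raw, 0.0)
    neighbour_sum = sum(np.roll(green, s, axis=a) for a in (0, 1) for s in (1, -1))
    return np.where(mask == "G", raw, neighbour_sum / 4.0)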
Lol - they really should be locking down their email accounts and enforcing that policy, or manually reviewing outbound messages before they can be sent. Just telling the LLMs the policy seems likely to have a non-zero failure rate.
The GitHub repo includes (among other things) a script (relying on python-pptx) to output decomposed layer images into a pptx file “where you can edit and move these layers flexibly.” (I've never used PowerPoint for this, but maybe it is good enough for this and ubiquitous enough that this is sensible?)
I saw some people at a company called Pruna AI got it down to 8 seconds with Cloudflare/Replicate, but I don't know if it was on consumer hardware or an A100/H100/H200, and I don't know if the inference optimization is open-source yet.
import torch

with torch.inference_mode():
    output = pipeline(**inputs)  # `pipeline` and `inputs` are set up earlier

# output.images is a list of PIL images; save each one separately
for i, image in enumerate(output.images):
    image.save(f"{i}.png")
Unless it's a joke that went over my head or you're talking about some other GitHub readme (there's only one GitHub link in TFA), posting an outright lie like this is not cool.
The word "powerpoint" is not there, however this text is:
“The following scripts will start a Gradio-based web interface where you can decompose an image and export the layers into a pptx file, where you can edit and move these layers flexibly.”
Oh okay I missed it, sorry. But that’s just using a separate python-pptx package to export the generated list of images to a .pptx file, not something inherent to the model.
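(For what it's worth, that kind of export is only a few lines with python-pptx; a minimal sketch, with the layer filenames and sizes made up for illustration:)

from pptx import Presentation
from pptx.util import Inches

prs = Presentation()
slide = prs.slides.add_slide(prs.slide_layouts[6])  # layout 6 is the blank layout

# Each decomposed layer becomes its own picture shape, so it can be moved
# and edited independently once the file is opened in PowerPoint.
for path in ["layer_0.png", "layer_1.png", "layer_2.png"]:
    slide.shapes.add_picture(path, Inches(0), Inches(0), width=Inches(6))

prs.save("layers.pptx")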
It's the replies that are the interesting bit. It's not perfect, but it can maintain multiple conversations with different people in the same context, and do things like changing its current rules in response to conversations with users. Its slightly robotic tone is deliberate: it tries to convey information in the most efficient way possible. I'm not sure if that's an emergent property or if it's in one of its fixed memory blocks. I do know that earlier on people managed to convince it to change its personality, and cpfiffer had to intervene to stop people doing that.
These kinds of LLM bots can be fun to play with in a "try to make it say/do something silly" way, but beyond that I don't really get the point. The writing style is grating and I don't think I've ever seen one say anything genuinely useful.
Yeah, I think it's an uncommon word. It's not a concept that would come up for most American English speakers, unless you're in a community that uses a language with another writing system (I think I first encountered it in a synagogue with Hebrew) or you're learning such a language.
I think I've maybe occasionally seen "translit." in text used to mark that the following is transliterated, but I could see that being easily glossed over.
If AI is at the point where it is exactly as capable as your average junior 3D professional in 10 years, it will probably have automated a ton (double-digit percentage?) of current jobs by then, such that nothing is safe. There's a lot of complexity, the time horizons are fairly long, it's very visually detailed, it's creative and subjective, and there's not a lot of easily accessible high-quality training data.
It's like 2D art with more complexity and less training data. Non-AI 2D art and animation tools haven't been made irrelevant yet, and don't look like they will be soon.
Yeah. I wonder why that is - is it because it highlights a conflict between our actions and values? If left unexamined, it's a non-issue, so having it spelled out feels like a problem being created?
I think sometimes this is what happens when we land in the middle of two relatively simple drives: "be an orthodox group member / avoid being a social outcast" and "avoid the discomfort of cognitive dissonance / admitting hypocrisy".
If there aren't immediate consequences for inaction (especially if there ARE costs and/or social consequences for action), we're very good at convincing ourselves to ignore it (or telling ourselves we will EVENTUALLY deal with it, just not right now).
I would much rather assume the people I'm interacting with are honest and conveying their real feelings, vs playing some (probably) Machiavellian game with N levels of dishonesty and manipulation from what could easily be a malevolent person at the core. At least that tends to be the assumption when you pick up on a lack of authenticity in this way.
When you have a real indication that you're dealing with a master manipulator, it's very understandable to use an abundance of caution. That's probably an instinct in us at some level.
Of course everyone is at least a little aware that they're putting on a bit of a ruse with their public persona, but that needs to be tethered to some level of authenticity or you'll just be sending out Patrick Bateman vibes.
This strikes me as a glass-half-empty interpretation. Why is the stuff from the blog post necessarily Machiavellian and manipulative?
I didn't read any of that into that blog post.
Rather, it was about how to create win-win situations where the people involved genuinely enjoy each other's company. No need for bad intentions here.
> When you have a real indication of dealing with a master manipulator
This statement seems like a paradox. Forgive my "No True Scotsman" example: if the person is such a "master manipulator", what indications do you have? The social normies will miss them, or will think they are the ones making the suggestions/decisions. This is the hallmark of master-craft salespeople.
Wouldn't you think what matters more is what the other person's goal is? If their goal is to enrich and improve both of your lives, does it matter whether they consciously use social techniques or have a natural, automatic ability to do so?
It is also autism vs. psychopathy. Patrick Bateman is nowhere close to someone autistic trying to learn those socially successful behaviours. Patrick Bateman is a terrible human being not because he is inauthentic; he is a terrible human being because of the acts he committed and wanted to commit.