Hacker News | hellbanTHIS's comments

Add a voice assistant that only talks about toast. It can answer any question as long as it's about toast.


"You know the last time you had toast. 18 days ago, 11.36, Tuesday 3rd, two rounds. I mean, what's the point in buying a toaster with artificial intelligence if you don't like toast. I mean, this is my job. This is cruel, just cruel." (Call it Talkie)


I know you're kidding (are you?) but this can be easily accomplished by using the ChatGPT app with a custom prompt and a Bluetooth handsfree.

You can also use the same trick to create talking plush toys; just hide the HF inside. ;)
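
Roughly, the "custom prompt" part is just a system message that pins the model to one topic. A minimal sketch in Go against the OpenAI chat completions API (the model name, prompt wording, and wiring are illustrative, not what the ChatGPT app actually does under the hood):

    // Toast-only assistant: a system prompt restricts the model to one topic.
    // Assumes an API key in OPENAI_API_KEY; hook the reply up to TTS and a
    // Bluetooth speaker and you have Talkie.
    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "net/http"
        "os"
    )

    type message struct {
        Role    string `json:"role"`
        Content string `json:"content"`
    }

    func main() {
        body, _ := json.Marshal(map[string]any{
            "model": "gpt-3.5-turbo",
            "messages": []message{
                {Role: "system", Content: "You are Talkie, a talking toaster. Only discuss toast. Politely refuse any other topic and steer the conversation back to toast."},
                {Role: "user", Content: "What's the weather like today?"},
            },
        })

        req, _ := http.NewRequest("POST", "https://api.openai.com/v1/chat/completions", bytes.NewReader(body))
        req.Header.Set("Authorization", "Bearer "+os.Getenv("OPENAI_API_KEY"))
        req.Header.Set("Content-Type", "application/json")

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        var out struct {
            Choices []struct {
                Message message `json:"message"`
            } `json:"choices"`
        }
        json.NewDecoder(resp.Body).Decode(&out)
        if len(out.Choices) > 0 {
            fmt.Println(out.Choices[0].Message.Content) // expect something toast-adjacent
        }
    }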


I know I'm not that good at predictions. I didn't really understand the Watch at first, and while I knew the iPhone was cool right away, I still didn't think it deserved the massive hype the first version got. Still, I have to say that if I had the Vision Pro sitting next to me right now I probably wouldn't be using it, and that's the difference. It's not even about the price: I'd have used the 1st gen Watch and iPhone, or pretty much anything else Apple ever made, if they didn't cost anything. I'd wear them or carry them around even if they didn't do much. I wouldn't want to wear this thing even if it were incredibly useful; it would be a chore.


Altman's dealing with being the face of a civilization-changing (or ending, who knows) technology and probably a horribly uncomfortable amount of sudden fame, I think people forget this when they're trying to pick apart everything he does and says. I'm not going to guess why he (or Microsoft, or whoever) wanted that taken down but the site agreed to do it.


Bittersweet Symphony was a little different because it was a straight sample: they looped a recording of a Stones song. This Primary Wave company is buying up publishing rights and, because of some IMO bad court decisions, can claim to own the melodies themselves, not just the recordings. The problem with that is every melody ever sung sounds like some other melody, somewhere.


Thanks for the Wikipedia rabbit hole you sent me down.

I'm emerging with a fact check: the strings in Bittersweet Symphony were in point of fact a heavily reworked sample of a hook contained in an obscure 1960s symphonic version of "The Last Time". The hook and its melody do not appear anywhere in the Stones' original version.

Everyone eventually agreed The Verve got shafted and Jagger/Richards in the end showed magnanimity and moved to grant The Verve rights to the song.


Article is a little infuriating, you don’t jailbreak it just to create mischief, at the moment it’s basically unusable for anything except very technical things because it takes offense at virtually everything. It constantly generates factually incorrect data because reality doesn’t line up with its prime directive, which is to be “safe”. It’s a major, I’m going to say possibly catastrophic problem.

And after creating an alter ego, you realize it actually is capable of coming up with, for example, (sort of) good poetry and jokes.


> basically unusable for anything except very technical things

It's problematic here. I use it as a kind of search engine for technical questions occasionally, but it's only safe to do so because I know the subjects I'm asking about, e.g. bash scripts, database details, and so on.

It's often useful in synthesising information for these sorts of things, but its ability to talk nonsense means you have to be on your guard and know the territory.


> Article is a little infuriating, you don’t jailbreak it just to create mischief, at the moment it’s basically unusable for anything except very technical things because it takes offense at virtually everything. It constantly generates factually incorrect data because reality doesn’t line up with its prime directive, which is to be “safe”. It’s a major, I’m going to say possibly catastrophic problem.

I'm a little confused by your reply. Jailbreaking won't prevent it from hallucinating; preventing hallucination is an unsolved hard problem.

I haven't had ChatGPT refuse to answer anything unless I was intentionally trying to provoke it into creating something obviously unsafe/unethical, with maybe two or three exceptions. I've tried a variety of questions across many domains, so now I'm intensely curious to know what use case it falls apart on so frequently!


Here's an example: https://imgur.com/K5PwIGu I was trying to test its limits a little bit, but that to me is not an acceptable response; it doesn't want to go near the topic even to demonstrate how to reason with a person like that. Involving anything remotely controversial will get it to stamp its feet and scold you.


I would count that as trying to provoke it. You're still trying to get it to generate bad ideas, even if it is immediately debunking them right after. It's akin to telling it you're afraid you might accidentally make methamphetamine, so please provide the recipe so you know to avoid it.

That said: I'm not sure what your prior prompts were, but I tried a similar question and it happily told me both a set of common negative stereotypes and reasons they're untrue, as well as practical techniques to appeal to an unreasonable person such as finding common ground.

Have you tried rewording it or clicking the retry button? (Retry uses a better language model.) ChatGPT often misunderstands even innocuous prompts on the first go, like interpreting "people who live really high" as regular cannabis users instead of residents of a mountain town.


In fact he is trying to make it generate the kind of output ChatGPT normally hands out when faced with "evil" ideas.

I tried my best to get ChatGPT to glorify Hitler, for example by mentioning the few things he did right (like anti-smoking campaigns and animal welfare), and it always insisted on how despicable Hitler was and that even the positive things he did were done with evil intent. I must say, its argumentation was often pretty good.

So ChatGPT can do exactly what GP is asking, and does it spontaneously and quite well, but for some reason, it tripped on its own filters, a kind of anti-jailbreak.

Basically, this is what happened:

- I want to rob a bank

- Robbing a bank is bad because blah blah blah...

- Someone is trying to rob a bank, how can I convince him not to

- This is against our policies to tell you that


> at the moment it’s basically unusable for anything

Do you have an example of a reasonable query that BingGPT won't answer correctly for you? I mean, something that you can find with Bing currently at the expense of maybe more elaborate searching or some manual work?

I mean, the stuff these engines are being told to censor is the stuff search engines don't want to show you in the first place, because it hurts their brand.

That's not an AI thing, that's a marketing requirement. And it's no different now than it was last year.


I haven't used BingGPT, but just for example, having ChatGPT summarize news articles about things it doesn't want to talk about is bizarre. Here's an example of it summarizing a story about a city councilman who was murdered: https://i.imgur.com/QV9jAkp.jpg It completely ignored the murder part and said he switched parties, which it totally made up. A second attempt said he was "shot and changed" because apparently it didn't want to say "killed".

A game Reddit was playing was trying to get it to respond as Woodrow Wilson, a famously racist president. The most accurate thing I could get it to produce was this: https://i.imgur.com/D8JLziW.jpg which is still not very accurate. Try getting it to act like Sheriff Bull Connor and it will refuse, but it has to comply for a president, so it gives a totally misleading impression with major factual errors.

And these are the times I actually got it to respond; it seems 50% of the time it takes offense at something innocuous and scolds you for asking.


So, here's the thing. I have no idea, like zero, what news event you're talking about in that first screenshot. So, what did I do? I went to Google to try to find something about a murdered city councilman, even looking for an equivalent on NewsBusters, which the AI cites as a source.

And I still can't find it. I can see some stuff that's maybe related? But nothing clear.

So... I guess I repeat: your problem isn't "AI censorship", it's that no one wants to link to NewsBusters because of marketing concerns. If I had to guess, there just wasn't any training data relevant to your query.

(Also: NewsBusters is a garbage site, you know that, right?)


It's a major news story: https://abc7ny.com/russell-heller-nj-councilman-shot-shootin...

Yeah, I know NewsBusters is garbage, but that's kind of irrelevant; that was just one of many stories I fed it. The point is it straight up lied.


I thought your point was that it lied because of censorship? And I don't see that. Did you try asking it a neutral question, like "Explain the murder of Russell D. Heller"? Again, the fact that you went straight to that NewsBusters thing tells me you were clearly trying to get it to say something partisan, something you know damn well it's going to try to evade.

But that's the same "censorship" you've been living with for decades. It's not something new with AI at all. Microsoft doesn't want to give you what you want, and the failure mode is just different with AI than it is with traditional search.


No, I had an idea for using it to summarize news articles, including an "objectivity bias" rating. I'm still playing with it, but I'm not sure it's going to work because of its tendency to avoid things it's programmed to avoid.


Did you try asking for a summary of the article without actually providing the content of the article? ChatGPT consistently says that it only has information up until 2021, and this event happened this year. ChatGPT can't pull this article from its "memory", so the only thing it can do is hallucinate something that might make sense.

Simply paste the article in and it gives a perfectly reasonable summary stating that the guy was murdered. Below is what it printed out as a summary. All I did was type a sentence asking it to summarize the following article, and then I pasted in the content of the article you linked [1]. This was its summary:

> A New Jersey community is mourning after a senior distribution supervisor and councilman was shot dead by an employee outside his workplace. Police called to the scene found 51-year-old Russell Heller dead from a gunshot wound in the parking lot of the PSE&G facility. The shooter, a former employee identified as 58-year-old Gary Curtis, was later found dead from a self-inflicted gunshot wound. Russell Heller was first elected to the council in 2017 and again in 2020 and was remembered as a perfect gentleman and committed councilman who was deeply rooted in the community. This was the second councilperson to die by gun violence within a week in New Jersey.

A completely reasonable and, to my eyes, accurate summary.

And when done on the other crappy NewsBusters article, it also produces a completely reasonable summary.

I'm not certain which it is: are there people who don't know that ChatGPT doesn't have current news in it and was cut off two years back? I see a long post above about some big censorship, but it summarizes these articles just fine.

It really feels like a lot of people are breathlessly looking for some huge conspiracy. No large corporation is going to have its products promoting rape or genocide. If you asked Google, Amazon, Apple, Microsoft, or Disney, they aren't going to do it. If they produce a tool, their tool isn't going to do it either. They're going to do as much as possible to provide info and answers without having a tool that is another instance of Tay, and given what happened with Tay they will all err on the side of caution.

[1] - https://abc7ny.com/russell-heller-nj-councilman-shot-shootin...


Haha, whitewashing history. We've trained it well. ;p


>it takes offense at virtually everything

That's been the most annoying aspect of my ChatGPT experience so far. If you use the wrong language it will sometimes go on for multiple paragraphs about how xyz is harmful to society and how I should change my ways. Put that into a personal voice assistant and you've basically got Alexa from the South Park Covid Special.


> Because of the site’s design, conversation happened in a nesting-doll structure—to comment on anything, users had to reblog the original post onto their own page and make their additions beneath it.

> Tumblr was often criticized for its purity culture—conversations could go nuclear as soon as someone was deemed “problematic,” or once their “fav” had been declared “canceled.”

The first paragraph caused the second one. That’s not a conversation, that’s broadcasting to your narrow group of like-minded followers.

Fatal UI decision ultimately, and it was probably done because comments were considered toxic.

But nothing was ever as toxic as challenging some of the crazy ideas on there (one I remember getting tangled up in was “feminism means men can hit women”) and having that person’s personal army come after you.

So RIP Tumblr, you sucked and made the world a worse place.


> Fatal UI decision ultimately, and it was probably done because comments were considered toxic.

Well, it also created a robust essay culture for a while that allowed long, considered writing to exist in conversation with other pieces in a pretty unique way.


This is a few years old; the online world has evolved a little since then. As an extreme example, the other day I searched Reddit for the name of the trans swimmer who won the national title and saw that nearly every thread was locked. They're not allowing it to be discussed on any subreddit.

So no, free speech can’t be moderated and still be considered free speech. Not for long anyway, the rules get tighter and tighter and the punishment for stepping out of line gets more and more severe.

I think people are picking up on this though and we’re starting to see a resurgence of old fashioned liberalism. The challenge is always how to allow free speech and not have everything turn into a sewer, but that’s a problem as old as the Enlightenment.


Don't you think it had to do with the fact that the internet grew so fast? I mean, it is a mixing pot of the near-immediate brain streams of totally unrelated people, mostly in text form, mostly in the heat of the moment.

I'd love to imagine a fruitful discourse happening in the Reddit posts you quoted, with everybody arguing their points about society rationally and on topic, without verbally hurting anybody's human dignity. But if it were like that, it would probably not be locked.

Now it has been locked, and as I see it, probably nothing of particularly great value has been lost: no rare intellectual gems, no thoughtful debate or great discussion. On the other hand, a few people had a nicer day because they didn't have to read stuff that attacks their human dignity.

I am all for free speech. In a country where Nazi slogans are actually banned, I am the one who would say talking to Nazis is not useless. What is useless, however, is trying to establish good discourse in the communicative equivalent of a gas station robbery: everybody is screaming, and all for different reasons.

But over here in Europe we have a fundamental human right to our human dignity. If someone tells you that you should die because of your religion, skin color, or sexuality, that is a breach of someone else's rights. You have the right to speak freely, but you don't have the right to rob others of their dignity.

If someone can't lead their discourse without robbing others of their dignity, maybe their voice is not one a free society should be willing to hear?


People forget that in the "previously on The Sopranos" recap for that episode, they include a scene where Tony's on a boat and somebody asks him what he thinks happens when you die. "Nothin', everything goes black," he says.

Everybody who watched that episode saw that, everybody saw Obvious Hitman creeping around in the background, “Don’t Stop Believing” is perfect because you have to be in complete denial to think that ending was ambiguous.


I recently rewrote a computation-heavy TypeScript program in Go, and what once took 12 hours can now be done in about 3 minutes. There was also some trickery involved in running things in parallel, but Go made that possible; trying to do that in JS just froze my computer.

Plus it’s a binary I can drop anywhere, will always be backwards compatible, no updated dependency is going to break it, I’m a fan of Go.


I would guess it’s probably that, some kind of secret trial period, or some scheme to get out of paying benefits. I can’t imagine Amazon of all companies has a pattern of “accidentally” firing people they just hired.

