> But people don't actually care about the environment. They care about looking like they care about the environment, and sending industrial processes somewhere else. There is a difference.
The idea that people setting pollution rules secretly don't care is silly.
I'd expect the security team to realize that what the code is treating as a secret isn't actually secret.
But there's a second insight that seems tough for a security review to catch. You have to realize that even though you can't do anything obviously malicious with the API, there is a billing problem.
Feels like they'll use it for purposes Anthropic didn't approve of, and then turn around and blame them when it turns out asking ChatGPT to determine which ships are hostile was a bad idea.
It's not a defense; it's a constant reminder that one side is just as bad, or worse. As someone who believes in modern values and society, I think it's very important to acknowledge all of the recent events.
If you care about the victims, you should care about the victims at the music festival too. Because they're one and the same: innocent people who were murdered for stupid ideology.
If you imagine it just keeps improving, the end point would be some sort of AGI though. Logically, once you have something better at making software than humans, you can ask it to make a better AI than we were able to make.
The other possibility is, as you say, that progress slows down before it's better than humans. But then how is it replacing them? How does a worse horse replace horses?
I said I don’t think it follows, and you certainly gave no support for the idea that it must follow. Logically speaking, it’s possible for improvements to continue indefinitely in specific domains, and never come close to AGI.
Progress in LLMs will not slow down before they are better at programming than humans. Not “better than humans.” Better at programming. Just like computers are better than humans at a whole bunch of other things.
Computers have gotten steadily better at adding and multiplying and yet there is no AGI or expectation thereof as a result.
Either the AI can do better than humans at programming, or it can't. If I ask it to make an improved AI, or better tools for making an improved AI, and it can't do it, then at best it's matching human output.
All the current AI success is due to computers getting better at adding and multiplying. That's genuinely the core of how they work. The people who believe AGI is imminent believe the opposite of that last claim.
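The "adding and multiplying" point can be made concrete. A neural network layer, the basic unit of current AI systems, is (apart from a simple nonlinearity) nothing but multiplications and additions. This is a minimal sketch for illustration, not any particular model's implementation; the function name and numbers are made up:

```python
def dense_layer(x, weights, bias):
    """One fully connected layer: y_j = relu(sum_i x_i * w_ij + b_j).

    Every operation here is a multiply, an add, or a comparison with
    zero -- the claim in the comment above, in miniature.
    """
    return [
        max(0.0, sum(xi * w for xi, w in zip(x, col)) + b)  # ReLU(dot + bias)
        for col, b in zip(weights, bias)
    ]

# Two inputs, two outputs, toy weights chosen by hand:
out = dense_layer([1.0, 2.0], [[0.5, 0.5], [1.0, -1.0]], [0.0, 0.5])
# -> [1.5, 0.0]
```

Large models stack billions of these multiply-adds; faster hardware for exactly that arithmetic is what made them feasible.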
No one is talking about AGI in this thread except you, though. The post said nothing about it. It's an absolute non sequitur that you brought up yourself.
California can't fix the whole world's problems.