That would reduce the training quality immensely. Besides, any generalist model really needs to remember facts and texts verbatim to stay useful, not just generalize. There's no easy way around that.
It's all pretty obvious to anyone who has tried a similar experiment out of curiosity. Big models remember a lot. That's why all non-local models have regurgitation filters in place, with the entire dataset indexed (e.g. Gemini will even cite the source of the regurgitated text as it throws the RECITATION error). You'll eventually trip those filters if you force the model to repeat some copyrighted text. Interestingly, they don't even try to circumvent those; they simply repeat the request from the interruption point, since the match needs some runway to trigger, and by that time part of the response has already streamed in.
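As a rough illustration, here's a minimal sketch of that resume-from-the-interruption-point trick, assuming the google-generativeai Python SDK and its RECITATION finish reason (the model name and prompt are placeholders, not a recommendation):

    # Sketch: resume after Gemini's recitation filter fires, by re-asking
    # from the interruption point. Model name and prompt are placeholders.
    import google.generativeai as genai

    genai.configure(api_key="YOUR_KEY")
    model = genai.GenerativeModel("gemini-1.5-pro")

    prompt = "Continue this passage word for word: ..."
    collected = ""

    for _ in range(5):  # bounded number of resume attempts
        request = prompt
        if collected:  # re-ask from the interruption point
            request += "\n\nContinue from: " + collected[-200:]
        resp = model.generate_content(request)
        cand = resp.candidates[0]
        if cand.content.parts:  # keep whatever streamed before the cutoff
            collected += cand.content.parts[0].text
        if cand.finish_reason.name != "RECITATION":
            break  # finished normally or hit a different stop reason
        # The matcher needs runway, so each attempt yields another chunk
        # before RECITATION cuts it off; just ask again from here.

    print(collected)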
Fuel is cheaper than dirt in Iran, and until recently it was free, but there are also shortages, and it's incredibly shitty: travelers consistently report all kinds of engine issues after using it. To be fair, fuel quality is spotty in Afghanistan, Pakistan, and many parts of India as well, but seeing this in a petrostate tells you something about the effectiveness of that bureaucracy.
No, it just leads to subtler cheating. Closet cheaters are much worse than obvious ones, and they thrive in exactly these conditions: the game isn't so broken by rage-mode cheaters that the fair players they prey on leave, their advantage is inconspicuous, and that advantage is gatekept behind some entry barrier, so there aren't too many of them.
You can have a fully encrypted and attested click-to-photon DRM chain, but it will just a) turn your computer into an appliance and b) cause even worse cheating.
People already closet cheat to avoid detection. It's nothing new.
You can closet cheat with ESP, and that's very game-breaking even without obvious rage cheating. You can't do ESP if cheats are limited to what's visible on screen, so I'd call that an improvement. Even something like autoaim gets a bit less effective, since it can't snap onto players who are offscreen. The gap between cheating and legit players would shrink, which makes the game less frustrating for legit players and probably less tempting for cheaters.
You're right, but developers don't really care about cheaters cheating. They care about cheaters ruining the game for others, so closet cheaters aren't such a big deal to them, even when they're thriving, as long as they stay closeted.
There's nothing good about this, and you'd be surprised how many people are willing to spend more than $1k/mo just to cheat in video games. Your game will still be ruined, only in a worse way, and every step towards full lock-in brings that closer. As I said, closet cheaters are MUCH worse than obvious ones, and much harder to catch (for context: I used to host very popular servers for several games, so I've seen the player complaints and the retention rates).
Here's my previous comment about what it actually takes to eliminate cheaters. Anticheats are only marginally helpful here; it's all about observability, manual control, and community building. https://news.ycombinator.com/item?id=46139481
Intrusive DRM schemes will just take any semblance of computing freedom away from you, while actually making the problem worse in the end.
Which is a pretty naive view of the cheating landscape that ignores everything I posted above. Enforcing the rules by technical means is largely superficial everywhere except actual esports; the culture around multiplayer gaming (esports or not) needs to change.
Those who have better ping, a bigger screen, a better video card, or a better mouse always have an advantage over those who don't. Adapt. There's no fair game in real life.
Attention engineering is how the charts get topped. Media producers knew this decades before social media and had perfected it by the late '90s. Avoiding extremely popular stuff is just common sense if you want any real authenticity.
I can't think of any. The upside is that people who think it's weird to not reflexively consume mass-market garbage identify themselves voluntarily, which makes it much easier to avoid them.
Yes, 99% of people are uncreative and use creative freedom this way. Besides, many truly creative people won't even touch it: some because of the ethics, others because it's not up to their standards yet. Does that really surprise you?
>AI video isn't "enabling people to be more creative," it is quite literally removing creativity from the process all together.
That's quite a leap of thought and doesn't follow from the first part at all.
Put a different way, would you say Fiverr enables people to be more creative?
Using AI to create an artistic work has more in common with commissioning art than with creating it. Just instead of a person, you're paying the owners of a machine built on theft, because it's cheaper and more compliant. It isn't really your creativity on display, and it certainly isn't that of the model or the hosting company.
The smallest part of any creative work is the prompt. The blood and the soul of it live in overcoming the constraints and imperfections. Needing to learn how to sing or play an instrument isn't an impediment to making music, it's a fundamental aspect of the entire exercise.
>would you say Fiverr enables people to be more creative?
That's not what GP said. They said that using a model removes creativity. That's a ridiculous leap from their premise, especially considering that it's misleading at best.
>The smallest part of any creative work is the prompt.
Like most people who have never actually played with it, you seem to assume that prompting is all you can do, and you repeat the same tiresome, formulaic opinions. Honestly, that's not worth discussing for the thousandth time. Instead, I encourage you to actually study it in depth.
>What a lot of people actually want from an LLM, is for the LLM to have an opinion about the question being asked.
That's exactly what they give you. Some opinions come from the devs, since post-training is a very controlled process that essentially involves injecting carefully measured opinions into the model, giving it an engineered personality. Other opinions are whatever the model randomly collapsed into during post-training (see e.g. R1-Zero).
>they seem to be capable of dealing with nuance and gray areas, providing insight, and using logic to reach a conclusion from ambiguous data.
Logic and nuance are orthogonal to opinions. An opinion is a concrete preference in an ambiguous situation with multiple possible outcomes.
>without any consistency around what that opinion is, because it is simply a manifestation of sampling a probability distribution, not the result of logic.
Not really: all post-trained models are mode-collapsed in practice. Instruct any model to name a random color a hundred times and you'll be surprised to see it consistently choose the same 2-3 colors, despite technically using random sampling. That's opinion. It's also why LLMs suck at creative writing: they lack conceptual and grammatical variety. You always get more or less the same output for the same input, and they always converge on the same stereotypes and patterns.
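If you want to check this yourself, here's a quick sketch of that experiment against the openai Python SDK; the model name is just an example, and any chat-completion endpoint behaves similarly:

    # Sketch of the mode-collapse test: ask for a "random" color 100 times
    # and count the answers. Assumes the openai SDK; model is an example.
    from collections import Counter
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    counts = Counter()

    for _ in range(100):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{
                "role": "user",
                "content": "Name one random color. Reply with the color only.",
            }],
            temperature=1.0,  # sampling is on, yet answers still cluster
        )
        counts[resp.choices[0].message.content.strip().lower()] += 1

    # A post-trained model typically lands on 2-3 colors across 100 runs.
    print(counts.most_common())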
You might be thinking of base models; those actually do follow their training distribution, and they really are random and inconsistent, producing different ambiguous completions each time. Although what counts as a base model isn't always clear with recent training strategies.
And yes, LLMs are capable of using logic, of course.
>And what most people call sycophancy is that, as a result of this statistical construction, the LLM tends to reinforce the opinions, biases, or even factual errors, that it picks up on in the prompt or conversation history.
That's not a result of their statistical nature; it's a complex mixture of training, insufficient nuance, and poorly researched phenomena such as in-context learning. For example, GPT-5.0 has a very different bias purposefully trained in: it tends to contradict and disagree with the user. That doesn't make it right, though; it will happily give you wrong answers.
Of course. I can already imagine an end-to-end hardware DRM pipeline where images can only be modified with software made by "trusted" certified parties. Mandated by law and tied to your real ID, of course. The analog loophole can be dealt with later; first things first. /s
Oh, something did change there. Iran's attitude towards nuclear weapons has shifted considerably, and not for the better. They're one deal with Pakistan or Russia away from getting them.
Iran was well on that path anyway.
The US strike absolutely did turn Iran from a peaceful actor with no interest in nuclear weapons into a regime bent on acquiring nuclear weapons.
Americans have remarkably short attention spans. In 5 years, when Iran is widely acknowledged to have nuclear weapons, you'll know what changed after Fordow.
That's correct. The point is that until now, Iran has intentionally not built a nuke: they've kept themselves close enough to make the threat credible, but they've never completed the project, because so far the tacit agreement has been that if they don't build a nuke, the US doesn't let the Israelis bomb Tehran.