Previously they didn't officially quote how much usage was included in the Pro subscription, but you could estimate it by upgrading from a Plus account that had already hit its weekly limits: after the upgrade that usage showed up as about 8% of the Pro limit, i.e. Pro was roughly 12.5x the Plus quota. Against the limits they quote now, we can assume they cut them roughly in half, but only for Pro users.
inference costs almost nothing compared to training (at their scale they serve huge numbers of requests in parallel), so on inference alone they should be profitable even if you drain the whole weekly quota every week.
but of course they also have to pay for training.
this looks like a short-sighted money grab (do they even need it?) that trades short-term profit for trust and customer base (again), as people will cancel subscriptions that have become unusable.
changing model families when you have instructions tuned for one of them is tricky and takes a long time, so people will stick with what they have for a while, but at API pricing you quickly start looking for alternatives, and OpenAI's gpt-5 family is also fine for coding once you spend some time tuning it.
another pain point is switching your agent software: moving from CC to codex is more painful than just picking a different model in things like OC, which is a plausible argument for why they are doing this.
And even when AMD does move their mainstream desktop processors to a new socket, there's very little reason to expect them to be trying to accommodate multi-GPU setups. SLI and Crossfire are dead, multi-GPU gaming isn't coming back for the foreseeable future, so multi-GPU is more or less a purely workstation/server feature at this point. They're not going to increase the cost of their mainstream platform for the sole purpose of cannibalizing Threadripper sales.
>why do the results need to be decrypted by trustees after the election?
they presumably designed this system to be used for government elections; how can they convince anyone to use it when they don't use it for their own elections?
I gave it a spin with instructions that worked great with gpt-5-codex (5.1 regressed a lot so I do not even compare to it).
Code quality was fine for my very limited tests but I was disappointed with instruction following.
I tried a few tricks, but I wasn't able to convince it to present a plan before starting the implementation.
I have instructions saying it should first do exploration (where it tries to discover what I want), then plan the implementation, and only then code, but it always jumps directly to code.
this is a big issue for me, especially because gemini-cli lacks a plan mode like Claude Code has.
for codex, those instructions make a plan mode redundant.
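roughly, the instructions look something like this (a paraphrased sketch of the idea, not my exact file; the file name and wording are just an example):

    # AGENTS.md (excerpt)
    Workflow for every task:
    1. Exploration: ask clarifying questions until you understand what I want.
       Do not edit any files in this phase.
    2. Plan: present a short implementation plan (files to touch, approach,
       open questions) and wait for my approval.
    3. Implementation: only after the plan is approved, write the code.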
what are some tested and fairly lightweight alternatives to Loki?
the Elastic stack is so heavy that it's out of the question for smaller clusters; Loki's integration with Grafana is nice to have, but a separate capable dashboard would also be fine.
even if it's slightly better, they might still have released the benchmarks and called it an incremental improvement. I think it falls behind on some of them compared to GPT-5.
it's not entirely unmoderated. some of the rules are being enforced fairly strictly - for example, NSFW images on SFW boards get reported and erased within minutes. blatant spammers, shills, and schizos get dealt with too. only residential IPs can post, which reduces the volume of shit quite a bit. a dedicated schizo can shit up a thread, a coordinated raid can shit up a whole board, but given the ephemeral nature of 4chan, it's like pissing in an ocean of piss.
rather, it is politically unmoderated. which is, of course, anathema to the pearl-clutchers.
i think this might be caused by codex.
it's open source, many people use it, and it uses Ratatui. People look at how it's implemented and discover Ratatui that way.
I believe this might currently be the most popular application using this library.
I used codex to write the VHS script, which runs codex to generate a Ratatui app, and then used codex to add this to the website. It's codapodes all the way down.