Hacker News | dchftcs's comments

You can still think hard, but you can offload some parts to an LLM when you're stuck, leaving space for more hard-won inspiration. When you're faced with a high-stakes decision and evaluating all sorts of possibilities, it's really easy to maximize the utilization of your brain, so in those cases you have plenty of chances to think hard.


I think what you say would be fair if Elon's and his fanboys' stance were "we need more data" rather than "we will be able to scale self-driving cars very quickly, very soon".


There's a somewhat niche game publisher that had very gamer-friendly practices. At some point they released older games for free, and for new popular ones they were relatively friendly to mods. Now they only operate a lootbox game. The sad reality of commerce.


> I have to learn their processes/tools/etc from reading the docs.

> So I'm probably coming in at like 20% better than junior.

There are firms that take that to heart, and there is indeed a lot of truth in it. A large amount of skills and knowledge just isn't transferable when switching jobs. But I think it's not hard for a senior to create more than 20% more value. And even if it really is only 20%, the profit generated from the work might still exceed the salary gap anyway, and the one-year growth curve might be steeper for a senior than for a junior.


Anthropic and OpenAI are essentially betting that a somewhat small difference in accuracy translates into a huge advantage, and continuing to be the one that's slightly but consistently better than the others is the only way they can justify the investments in them at all. It's natural to then consider that an agent trained to use a specific tool will be better at using that tool. If Claude continues to be slightly better than other models at coding, and Claude Code continues to be slightly better than OpenCode, the combination can be difficult to beat even at a cheaper price. Right now, even though Kimi K2 and the likes are cheaper with OpenCode and perform decently, I spend more than 10x the amount on Claude Code.


In that case though, why the lock-in? If the combination really does have better performance than competitors’ offerings, then Anthropic should encourage an open ecosystem, confident in winning the comparison.


I imagine they do not see it as a level playing field. If OpenCode can draw on Claude Code credits but cannot draw on Codex ones (we've just had a tweet promising to fix this, more or less), then it can be construed as an advantage on the part of OpenAI. Personally I think it's idiotic, and companies should stop penny-pinching in situations where people are already paying $200; there is no more value to extract at that price point.


Their bosses are likely happier about the lower downtime required to run the software anyway.


At some point velocity will slow down too: figuring out edge cases in production to add or subtract a few lines, or backtracking from a bad change.


I suspect depreciation will be a bit slower for a while, because there is a supply crunch.


Replacing employment income tax with corporate or dividend income tax is not fully efficient: the tax rates are different, they benefit different governments, and they operate over different time horizons (in the case of dividends). While I agree it would be practical to rely on earnings tax when AI broadly replaces labor, we can still answer the obviously rhetorical question of "should AI pay taxes" (rhetorical in the same way "should you drink your own cerebrospinal fluid" is): there may need to be a tax reform.


Throwing in ML jargon and going straight to modelling before understanding the problem reduces your credibility as a data scientist in front of engineers and stakeholders.

As always, one of the most difficult parts is getting good features and data. In this case one difficulty is measuring and defining the reaction time to begin with.

In Counter-Strike you rely on footsteps to guess whether someone is around the corner and start shooting when they come close. For far-away targets, lots of people camp at specific spots and often shoot without directly sighting anyone if they anticipate someone crossing; the hit rate may be low, but it's a low-cost thing to do. Then you have people not hiding well and showing a toe. Or someone pinpointing the position of an enemy based on information from another player. So the question is: what is the starting point from which you measure the reaction?

Now let's say you successfully measured the reaction time and applied a threshold of 80ms. Bot runners will adapt and sandbag their reaction times, or introduce motions that make mouse movements harder to measure, and the value of your model will then be less than the electricity needed to run it.

So take your proposal to solve the reaction-time problem with KL divergence: congratulations, you just solved a trivial statistics problem that creates very little business value.
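For concreteness, here is a minimal sketch of what that KL-divergence comparison might look like. All numbers, distributions, and bin sizes are hypothetical illustrations, not taken from any real anti-cheat system:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(P || Q) between two discrete distributions given as histograms."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

# Hypothetical reaction times (ms), binned at 20 ms.
rng = np.random.default_rng(0)
human_rt = rng.normal(250, 50, 10_000)    # plausible human baseline (assumed)
suspect_rt = rng.normal(90, 10, 10_000)   # suspiciously fast and consistent
bins = np.arange(0, 400, 20)
p, _ = np.histogram(suspect_rt, bins=bins)
q, _ = np.histogram(human_rt, bins=bins)
print(kl_divergence(p, q))  # large value: the distributions differ sharply
```

The statistics here really are trivial; the hard part, as noted above, is defining where a "reaction" starts and keeping the measurement meaningful once bot runners adapt.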


Appreciate the feedback, you're right: armchair speculation is different from actual data science. Without actual data to examine, we're left with the former, and that can still be a fun exercise even if it doesn't solve any business problem. We're here to chitchat and converse after all.


Yeah, apologies if it was too harsh. I was more irked by someone else who kept trying to assert that it's an easy problem, and conflated that with your display of raw curiosity, which is something I don't wish to discourage.


More like congrats, you just made every cheater far less effective by forcing them to play nearer to human limits.

You aren't eliminating cheaters, that's impossible; you are limiting their impact.


If cheaters play indistinguishably from normal people, that seems like mission accomplished.


Cheaters don't have to play like normal people to avoid detection. They just have to make it expensive to police them. For example, the game developer may be afraid of even a 10% false positive ban rate, and as a result won't ban anyone except perhaps a small number of clear-cut cases.
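A quick base-rate sketch (with made-up numbers) shows why even a 10% false positive rate is frightening for a developer: when cheater prevalence is low, most banned players end up being innocent.

```python
# Hypothetical numbers illustrating the base-rate problem in cheat bans.
players = 100_000
cheat_rate = 0.02            # assume 2% of players cheat
recall = 0.90                # detector catches 90% of cheaters
false_positive_rate = 0.10   # 10% of innocent players get flagged

cheaters = players * cheat_rate
innocents = players - cheaters

true_bans = cheaters * recall                  # 1,800 cheaters banned
false_bans = innocents * false_positive_rate   # 9,800 innocents banned

precision = true_bans / (true_bans + false_bans)
print(f"banned cheaters: {true_bans:.0f}, banned innocents: {false_bans:.0f}")
print(f"precision: {precision:.2%}")  # roughly 15%: most bans hit innocents
```

Under these (assumed) numbers, over five innocent players are banned for every cheater caught, which is exactly why developers restrict bans to clear-cut cases.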


Yes, the current situation is that cheaters play in ways distinguishable from humans. But my point was more that if we create a system under which cheating still leaves you equivalent to a good player, then it just feels like playing against good players. Which, to me, feels like it would be mission accomplished.

This is one of the cases where ML methods seem appropriate.


Most cheaters are playing well outside of human limits and doing huge amounts of damage to the legitimate player experience. A 10% safety margin beyond human play sounds reasonable. A world where cheaters can only play 10% better than humans is a far better world than the one we are in at the moment.


Strong disagree. I play a lot of casual CS, and the number of extremely poor / new / young players using rudimentary cheats and performing far below average is huge. Most players don't watchfully spectate the bottom fraggers in the lobby, but if you do, the number of them brazenly using wallhacks is quite high.

These players aren't using an aimbot / triggerbot (or if they are, they don't understand the gunplay and try to shoot while running), and may not even understand wall penetration, so their reaction times wouldn't look abnormal at all. In the data, they would likely still show slower-than-average reaction times.

Even though they are not performing well, their presence still massively alters the gameplay for legitimate players. For one, lurking becomes a pointless endeavor. You're better off rushing wildly than attempting any sort of stealth.


"A world where cheaters can only play 10% better than humans is a far better world than the one we are in at the moment."

My world is pretty fine, as I don't play games on servers without active admins/mods who kick and ban people who obviously cheat.

ML solutions can maybe help here, but I'll believe they can reliably detect cheats, without also banning lucky or skilled players, once I see it.


Human administration is not scalable.


Why not? As long as there are players, some of them will also want to be admins. Maybe you mean commercial administration is not scalable for games with a fixed price? Sure, but give the community the option to manage (rent) servers on their own and they will solve it themselves.


It's not even an option in most titles, and the industry as a whole has moved away from such hosting models, partly to ensure players receive a consistent and fair experience. Community servers were rife with admin abuse.

It's okay if you haven't played an online game in 20 years, mate.


Yep

