CS:GO: From Zero to 0-Day (neodyme.io)
233 points by pizza on June 11, 2023 | hide | past | favorite | 100 comments


    However, sometimes programmers forget to remove the debug symbols from the final binaries of the game. Programmers are humans, and humans make mistakes.
Delivering software with debug symbols - whether commercial or non-profit - isn't a mistake. It is a decision made for better bug reports, traces and convenience.

Debug symbols increase the on-disk size of executables and libraries, but the kernel and loader read the binary headers and map only the sections needed for execution into main memory; they never load the debug symbols at runtime, since those are useless to them. A debugger, in turn, uses the same headers to find and load the debug symbols it needs for backtraces. So the initial read from disk takes slightly longer, but execution is unaffected - startup time is usually dominated by loading other resources, initialization, and checks. You get a lot in exchange for some bytes on disk.

https://stackoverflow.com/questions/24117063/is-a-program-co...

https://stackoverflow.com/questions/39222698/does-compiling-....

Some people still argue that security by obscurity works and hide everything. I doubt that.


This is WILD!

The sentence before the one you quote is

> Sometimes versions of the game are also intentionally shipped with debug symbols to generate better error reports.

You remove that context, quote him saying sometimes it's a mistake and criticise him with the preface he already provided!?


I'm sick of this weird HN trend where readers pick out a single sentence only tangentially related to the rest of the article and then try to prove how smart they are by rebutting it.


This is good form. Discussion is largely about creating narratives and arguments to try to come to a consensus on reality. This is ostensibly useful.

When someone makes a statement and that statement is inaccurate or untrue, it is helpful to correct it so that they can change their post/future posts to ensure they are making the strongest argument/most compelling narrative possible. Sometimes this is misused or misunderstood as being combative, but that is not a constructive way to approach conversation.


These Reddit-like comment sections with public vote counters highly incentivize this type of thing. I don't even know why HN has a voting system? I think sorting comment threads by number of replies would be more useful. Vote counters only make sense when you have a massive amount of comments to sort through, which really only happens on posts about Apple afaik. However, those comment threads tend to be filled with useless arguing anyways.

Would people stop using this site if it stopped offering dopamine hits in the form of a number next to your name?


Reply count can be more an indicator of controversy than of value. The best comment might provide me deep new insight, but all I have to say is "that's great!". Saying something inflammatory results in lots of replies.


In fact, the HN algorithm downranks threads with a high comment-to-point ratio.


I choose to believe there exists an old guard that use the site without points visible, either through frontend modification or mobile apps.


you mean that's not the whole point of posting here?


With context then the quote is

> Sometimes versions of the game are also intentionally shipped with debug symbols to generate better error reports. However, sometimes programmers forget to remove the debug symbols from the final binaries of the game. Programmers are humans, and humans make mistakes.

It looks like the author is saying shipping final binaries with debug symbols is a mistake, and that's what @ho_schi is criticising.


The author is noting that in this particular case, Valve made a mistake in releasing the debug symbols for a particular build, as noted by them releasing them in a 2017 build, and then removing them in a subsequent build.

The author did not say that "shipping with debug symbols is always a mistake", they noted that it can happen unintentionally and provided evidence of that in this case.


I’m sorry. You are right! I failed to quote the sentence before :(

As pointed out by jpmoral, I consider debug symbols generally a good thing. Exceptions apply as usual e.g. embedded computing.

Another approach is using tools like debuginfod to ship debug symbols automatically when needed. Arch Linux and Fedora do this, and it has improved debugging and reporting - especially regarding dependencies.


You don't need to ship your debug symbols to have stack traces. You just need to uniquely identify published binaries and link them to stored debug symbols. See also Breakpad, of which Chrome and Firefox use variants.


You should absolutely never release a commercial binary to public with debug symbols.

There are ways to convert addresses back to symbol names on the crash report server, so the claim that you need to ship debug symbols to get better crash reports is not correct.

Why you shouldn’t release debug symbols:

* It helps patent/copyright trolls litigate you easier.

* Makes it easier to reverse engineer your binaries, which will help malicious actors and competitors.

* You might lose some trade secrets.

If you are a startup owner, please make sure never to release commercial/closed-source binaries with debug symbols. You can thank me later.


> Some people still argue that security by obscurity works and hide everything. I doubt that.

Most things involving security also use and greatly care about obscurity of implementation. The principle is that the security should not "rely solely" on obscurity, not that obscurity has no benefits.

Usually, making software such that one can have a high confidence of it being secure is not economical. One relies on being harder to break than the competition, for which obscurity can be a big deal, and on offloading or mitigating damage when flaws are inevitably exploited anyway.


Security by obscurity very obviously works, in the sense that it raises the bar of effort required by potential attackers. You can't rely on it to be your only security, but there's no reason why it wouldn't help.


It’s the engineering definition of “works”: if a solution merely mitigates the frequency or severity of a problem, but does not completely solve it in all cases, obviously the solution is useless and no better than doing nothing at all. Only the platonically ideal solution has any value at all.


What are you talking about? It's the engineering definition of "works" because engineers live in the real world and incorporate economic factors into their decision making to make something that works better under real-world conditions. If a solution merely mitigates the frequency or severity of a problem, the solution is obviously better than doing nothing at all, since you just admitted it mitigates the severity or frequency of a problem. Would you rather have 100 0-days or one? Would you rather have a high-impact 0-day or a low-impact one?


You're kidding? That's opposite to the spirit of engineering.


Perhaps, but many engineers have personalities that are in conflict with what you’re calling the spirit of engineering. Binary thinking is a too-common thing.


> Binary thinking is a too-common thing.

It's engineering after all, isn't it?


Nobody argues that anymore, but this post is evidence that attackers value debug symbols and that should be a factor in deciding whether or not to strip them.


> Delivering software with debug symbols - wether commercial or non-profit - isn’t a mistake. It is a decision for better bug reports, traces and convenience.

Yes, it is a mistake. None of those things require symbols on the customer’s machine.


> Some people still argue that security by obscurity works and hide everything. I doubt that.

It is certainly true for games. There are only very expensive cheats for eg R6S and Overwatch 2, and the encryption of those games is a headache.


This is such a weird answer because on Windows, the default is to simply split out the debug data from the binary. And then, shock horror the technology, serve the right debug data from a (build) server on demand!

In any case, this is an online video game, if you want to have a billion cheaters, feel free to ship debug data (for no reason) with the binary.


On Windows, debug symbols are distinct files that match a specific version of a binary. When your crash reports get sent off to Valve, they have the appropriate PDB file to match and have full source-level debuggability.


They might reveal trade secrets though.


I love CS:GO, but the Source engine has been a leaky faucet of critical bugs, and the developers haven't taken them seriously enough over the years. The disclosure timeline in this blog post seems to be the norm for them. For this tranche of bugs, it took significant public pressure for them to do anything.

I mean, it's not quite MW2 (2009), which I believe has multiple known RCEs that'll never be patched. But still not good enough for an actively developed live service game. One hopes that Source 2 has better standards for it, but I'll believe it when I see it.


The whole Call of Duty series is riddled with bugs similar to the one in this article. This makes sense of course since their engines are based on the same principles, they all originate from the Quake engine. One example of an exploit found in the Quake engine in 2006 [1], also related to the “fast download” functionality, has been present in Call of Duty games such as MW2, and Black Ops - so at least 4 years after the exploit was found.

It seems like Activision does not allocate any development time to their older games, even if critical vulnerabilities are found. Black Ops 3 (2015) has a number of remote code execution vulnerabilities in its peer-to-peer networking code, making it completely unsafe to play - even though it is readily available on Steam. Of course, these exploits are fixed by the community in custom, modded, clients. Sadly Activision likes to send cease & desist letters to developers of these clients.

Personally I’ve found an exploit in CoD4 (2007) that will leak the CD-key of a person joining your server. This was reported to Activision in 2009 (ish), but also never (officially) fixed. It uses similar CVar leaking concepts as the one discussed in this article. Probably that would also make for an interesting write-up.

[1] https://web.archive.org/web/20080517095348/http://www.securi...


The original Half Life engine (goldSrc) was based on the Quake engine, on Wikipedia it is mentioned that it was heavily modified. Half Life 2 used the Source engine and it doesn't necessarily mean that a lot of Quake engine artifacts are left, no?


It doesn't have to be a lot, though. After all, Valve was hacked like a year (sorry, but it's been literally 20 years) before HL2 was released and yes, some Quake code was identified immediately.


Just make a MITM node.js app to filter out anything not supposed to be there

Edit: a Raspberry Pi that filters out bad packets from old games


> But still not good enough for an actively developed live service game.

Not just actively developed, but the most popular game on Steam by far, with ongoing revenue of around $50 million per month (peaking at >$100mil recently).

https://steamcharts.com

https://www.dexerto.com/csgo/players-reportedly-spent-100-mi...

They especially have no excuse for not pouring resources into security.


The money keeps coming in, why would they expend any effort?


They'll get away with it until they suddenly don't. There have been a multitude of server-to-client and client-to-server RCEs in Source, in the worst case scenario someone could have deployed a worm which infected servers which infected their clients which infected more servers and compromised practically the entire active playerbase. They're just lucky it hasn't happened yet.

What it would actually mean for Valve if they allowed that to happen is hard to say, since they're in such a firmly entrenched position with a near-monopoly on PC game distribution, but every CS:GO player getting cryptolocker'd wouldn't be good for their brand to say the least.


Has anyone tried filing a class action lawsuit?

How about an FTC complaint for unfair and deceptive business practices?

This seems like a great case.


IMO, this is one of the drawbacks of Valve's famous flat structure + compensation being tied to stack ranking.

Given the drawn-out timescales for getting critical bugs fixed, working on security is seen as boring or low value by Valve devs. And because compensation is tied to the stack ranking system, people are dissuaded from fixing these bugs.


This is really fascinating to me because MW2 was, and still is, one of my favorite games.

Can you tell me more about these RCEs, how they work, or some technical analysis on this game?


Here is a GH repo about two (now patched) MW2 exploits: https://github.com/momo5502/cod-exploits.


Reading the timeline at the bottom makes this seem like a very frustrating process, it took multiple nudges and over a year for Valve to even start fixing. I can't imagine putting in this much work and getting such a lackluster response.


Seriously, 15 months is kind of a slap in the face (to everyone who plays CS:GO)


Exploiting these bugs requires you to connect to external servers which is not something you are really supposed to do in csgo.


Community servers are a built-in feature of Counter-Strike: Global Offensive, accessible to regular users via the main menu. Much of the professional scene uses them exclusively (via MM services like FACEIT, and training servers like BrutalCS).

Connecting to these services safely is absolutely something players should have trust in.


Doesn't take much to assume that a Source game is full of RCEs.


I'll gladly take research that helps protect tens of millions of players over generic defeatist hand waving that "source games are broken".


Hosting own servers is a feature of CSGO.

A feature provided by all well-designed client/server software, avoiding vendor lock-in and dependence. Good for LAN parties, bad or no networking, or when the original developer no longer exists or doesn't care.

And then there are these developers who are the only ones with the server code and shut down their servers intentionally - I recommend avoiding software with such planned obsolescence.


Have you even played CS:GO? Some of the modes aren't available on official servers - kreedz, aim maps, etc.


How do you know this? Is there strong cryptographic integrity protection for traffic from the official servers that prevents active attacks injecting malicious messages?

(Also, why aren't you supposed to use other servers?)


The timeline mentioned in the post displays a scary lack of professionalism from Valve. For a full RCE (with full chain required to even be acknowledged!), taking over a YEAR to fix it and then having an almost insultingly low payout for it does not fill me with confidence as a user. What makes this worse is that if it had been exploited, even just getting account takeovers would have been catastrophic, not even speaking of other fun stuff people can do with an RCE. Many csgo players have items of insane monetary value in their steam inventories, so getting account takeovers would've been a direct link to profitability with this exploit. I'm very disappointed hearing this about such a reputable software maker.


I expect that many, if not most networked games have RCEs in them.

Security is not a priority. Priorities are performance and getting the thing out the door under constant crunch time with minimal cost and likely nobody hired specifically to handle this kind of security.

Performance usually means shortcuts everywhere, a mentality to avoid anything unnecessary in normal operation (like checks that values are within an expected range), and memory-unsafe languages.

Add various anti-cheats and you'll probably get an almost free escalation straight to the kernel too.

Also, lots of complexity, and often built-in scripting languages.

Edit: In this specific case, it was a series of logic bugs, in code that would generally not be considered super-sensitive and thus wouldn't receive additional attention in a security review. To write secure software, every developer needs to be security aware, and a game company isn't the environment where you can expect that to happen.


Yes I think that this is actually very common and we just don't hear about it much.

Most of these online multiplayer games are implemented with heaps of unsafe C++, and due to the nature of the game, are constantly parsing inputs from the network and from other clients. This is a recipe for disaster!

For this reason I will only run modern games in some sort of sandbox, ideally in a VM with GPU pass through, or maybe run them on a dedicated system that is only used for games.

Even in a VM, exploiting it would give access to the passed in GPU, and from there I'm not totally sure what is possible, so even this isn't perfect but it's quite a lot better than nothing.

Normally people are launching the game through the steam GUI under their regular uid, so CS:GO for example with this setup, can read and write to all of the data in ${HOME}. That's pretty scary!

Even running steam through flatpak doesn't give you much protection when sharing the X11 socket.

I think the most reasonable setup for the "average user" is to create a separate user account on their system that is dedicated to gaming. You can run graphical sessions for both users at the same time and switch between them via whatever method (i would just switch between multiple ttys). Users on Unix-like systems in theory should not be able to interfere with other users, or even harm the system so long as they can't escalate to root. It's important to realize that there are situations where an unpriv'd user can harm the system, like if there is a bug with sudo, or a kernel exploit! I think it's much better than the default though!


Valve doesn't make games, they make money. CSGO and TF2 are barely maintained.


CS2, essentially a rewrite of CS:GO using the Source 2 engine, is coming "this summer", so there's that.


Like 10 people work on this game, no surprise nothing is happening.


If only both of you guys knew that Valve has been working on CS2 for the past three years, and no, it's not "10 people", it's a lot more.

I agree it took them too long to fix these vulnerabilities but Valve is notorious for its schedules/timeline/insane delays. It's been a meme for years now.


Item stealing via exploits and viruses has largely been stopped by the 2FA required to exchange items. I otherwise agree with the rest of the post.


It's somewhat concerning that Valve seems to disregard the importance of RCEs in server/client code, making them relatively easy to find.

There appears to be a lack of proactivity in addressing these bugs. Given Valve's substantial daily earnings from CS:GO through weapon skin sales, it seems their operational focus is more on maintaining the revenue stream rather than committing to quality engineering practices.


I think it's good security practice to run/install your games under a different user than the one you use for banking. It's a hack, but it's all we can do with the dismal state of desktop OSes.


Unfortunately, all it takes is something like Genshin Impact's anti-cheat system - which runs at the kernel / ring 0 level [1] - to be compromised, and then it's game over anyways.

Not to mention the potential of "what if the anti-cheat is deliberately programmed to be malicious?". A dismal state all around.

[1] https://genshin.hoyoverse.com/en/news/detail/103720


Brings an entirely different element of "fun" to every Tencent game.


With the companies that don't care about security but require their bugware in ring 0? With whole-system memory scans, exfiltration, and executing code right off the network?

I've kept my gaming PC in a physically separate network for a long time. While I've relaxed that for practical reasons, I can't imagine ever logging into anything I can't trivially burn or recover on a gaming machine.


VMs are normally a better solution for isolation, although with games it's probably better to let them run on the hardware while the sensitive stuff gets its own VMs.


The problem with VMs is games that require anti-cheat, which from what I've read from user experiences try to detect and prevent VM usage (not sure about Valve's anti-cheat though).


I've been playing only in a VM with GPU passthrough for years now, and the only online game which caused problems was Darktide, but they pushed a fix soon after release to actually allow VMs. I've heard the same from friends using a similar setup.

Still anecdotal, I know, but it seems to be a rather small issue.


The common ones such as EAC, BattlEye and VAC don't prevent you from running the corresponding games in a VM.


Hmm, I've seen topics like this[1] for EAC, though a comment from a different result[2] mentions EAC can be configured to optionally block VMs, so maybe it varies by game.

> Today, I purchased and installed Dead by Daylight with the intention of playing with some friends, only to start the game and find a message from Easy Anti-Cheat stating that the game "cannot run under a virtual machine."

[1] https://www.reddit.com/r/VFIO/comments/100yqok/bypass_eac_ca...

[2] https://www.reddit.com/r/blackdesertonline/comments/px0flc/v...


You are right. I've heard similar stories about BattlEye in the past, which I assumed was a decision they've since reverted, but it could very well be an option for game devs to enable. Thankfully not many of them seem to do so.


If the VM operates under the same privileges as the host, all this does is prevent automated attacks - a savvy attacker can still access the disk image of the guest. In some cases it might actually be more vulnerable (as disk image access is implicitly a privesc in the context of the VM).


Don't do banking on your gaming PC


I do all my banking on my iphone. No way I would do banking on a desktop.


What does banking mean here? Managing bank accounts online in your browser?


What an amazing Sunday read, a step-by-step explanation, tables, timelines, even a short video demonstrating the exploit.

Looking at how big the impact could be if this was discovered elsewhere, and how meager the payout was and how little Valve seemed to be interested, I guess it's fair to say that security of millions of PC clients is not taken seriously at all. If this is how it's handled for a game that makes big $$, I wouldn't dare imagine how it is for thousands of online AA games and other low budget software that runs everyday in our computers.


I always wondered how many game clients out there have exploitable vulnerabilities. Seems like you see a lot of CVEs in popular software, but never games.


Probably a lot, this isn't the first time Source has been affected by an RCE. For better or worse there's much less of an attack surface in newer multiplayer games though, since they've trended towards more closed ecosystems where the community isn't allowed to host their own servers or create their own content that the client consumes directly.

Nowadays I'd be more concerned about malicious mods for singleplayer titles.


Most likely every game that allows private servers has a bunch of exploitable vulnerabilities. Game developers are under too much pressure to deliver to worry about security. A while ago a popular game was making direct connections to the developer's MySQL server.


If more people returned their games in such cases, or filed lawsuits, as they do with other goods, developers would worry about security.


Wow, which game?


Final Fantasy 14 had raw SQL queries in its UI, so handing in a quest basically made you send an UPDATE statement straight to the DB to update your character with the rewards.


Amazon's New World was a perfect example: you could literally send the server info that you're invulnerable and you would be... unkillable. People flying, telling the server they have x gold, zero validation whatsoever. Online games are a constant balance between what can be handled by the client and what should be handled on the server, but most of them display an absolute lack of forethought from developers and/or management.


Super Meat Boy comes to mind: https://news.ycombinator.com/item?id=3387628


CS:GO and other source games have had loads of serious vulnerabilities. I've found a few myself just by looking at the leaked source code:

https://github.com/perilouswithadollarsign/cstrike15_src

The engine code is very old, and CS:GO itself was not developed by Valve but by Hidden Path Entertainment, initially as a port of CS: Source to consoles, and they didn't really do a great job.


Game studios have no security processes and people who look at this stuff (i.e. game hacking) are more likely to just add exploits to their cheats (e.g. being able to crash other players is obviously a huge selling point for a hack) instead of reporting them.


as a player with thousands of hours in csgo, here's my take:

right from the get go, the authors made it clear that each of the rces requires a user to connect to a malicious server. the only scenario where this is possible is if the user connects to one of the community servers.

while community servers have been the backbone of previous cs titles, it is far from the main place the majority of players connect to to play the game. the only major exception being third-party matchmaking systems such as faceit and esea, where the connection is handled directly by each service.

so in the vast majority of cases the scope of the exploit would be limited to a very minor subset of the playerbase, unless exploiters hack faceit/esea servers.

at the same time, the community servers have had a shady relationship with valve, where some of them allow you to try any skin in the game from server-side. i believe things like that played a part in why valve did not bother much with supporting them with the current game, especially with the ui for connecting to them barely updated since cs 1.6 two decades ago.

moreover, the exploits listed are not by themselves enough to inject a payload into a victim's computer. the game itself runs in user mode and the client-sided commands are usually game specific.

overall, while i support the efforts made, the potential impact has been overblown in interpretation. i can see why valve took so long to fix these.


> for their bug bounty program, Valve requires a full-chain exploit demonstrating RCE impact

That sounds quite bad.


All of that work for $30K after 1 yr of Valve slow rolling it


Bug 1: ShouldPreventServerCommand calls Host_IsSinglePlayerGame instead of Host_IsLocalServer. The former checks whether the server you are connected to is a single-player server, when what we actually care about is whether we are running the server locally.

https://github.com/sr2echa/CSGO-Source-Code/blob/dafb3cafa88...

https://github.com/sr2echa/CSGO-Source-Code/blob/dafb3cafa88...


A good example of why security practices matter in game development.


How exactly does it show that?


By showing a 0-day for a game server.


On the other hand, this is the most successful multiplayer FPS of all time, you rarely see articles like this, and I can't remember ever seeing an article talking about an exploit like this actually being used.

How many high value targets are installing games on their machines with sensitive data, and connecting to random private servers? Hopefully not enough to justify developing an exploit like this.


CS:GO skins can sell for thousands of dollars, using an exploit like this for spear-phishing attacks against whales flaunting their expensive inventories could easily net more than the $7500 bounty which the OP received for responsible disclosure. It's probably already happening, highly targeted attacks aren't likely to make the news, and even if they are publicised it's hard to determine exactly how they were achieved.

https://www.kaspersky.co.uk/blog/cs-go-two-million-usd-inven...


Only because sadly, liability still isn't a thing in many places.

Too many people are happy buying shoes that explode, taking their feet away, if the shoelaces are tied in the wrong order.


What were the consequences of this?


I think they meant to say that security practices should matter in game development, not that they actually do in practice.


As a late-30’s dev who plays competitive daily, this is the most frustrating thing in my day, but also the best way for me to vent out daily frustrations.


As much as I loved the olden days of shareware Doom and extensive mods to games like Counter-Strike, I think when it comes to competitive online games streaming is going to become the norm. Everyone will be able to play, it will be much harder to cheat, and updates and enhancements are basically instant from an end-user perspective.


A lot of hacks already rely on visual/audio cues. An aimbot or audio-based wallhack doesn't need any input you don't want to give players.

Also, competitive games, especially shooters, are extremely sensitive to latency, so they are by their nature quite unfit for streaming. People who build PCs which render at 300 FPS to a 120Hz+ monitor for competitive play won't be happy to deal with even a quite optimistic 10ms additional roundtrip latency.


I understand, but in a sense, those unlevel the playing field. I get that it isn't a perfect argument since many people have slower internet latencies, but if everyone has to do it, you get rid of a ton of variables.


I did not fully read the article yet as I'm away from home - does this also affect Linux clients?





