Steam Client is so incredibly bloated - I wish they released a Lite version without Steam Link, Achievements, Trading cards/Inventory, Browser, Community, Workshop, Hub, ...
With fewer features it could also update less frequently, and the updates themselves would be faster.
Just the list of games I have and the DRM required to play them.
You are in luck, Valve has SteamCMD [1] for users just like yourself! It is a command-line version of the Steam client and can do most of what you need, like downloading, updating, and even verifying game integrity, from the command line! I only wish it were open source; maybe someone can reverse engineer it?
Playnite [1]. Open source, lightning fast, and light on resources. You can even combine different platforms such as Steam + Battle.net, which saves you from running each of their clients, though you lose things like downloads and social features. Heck, it even has support for emulators.
I just realized I only think about security vulnerabilities for code I deploy, not software I use. So what are some examples of RCE vulnerabilities that were actively exploited in consumer software in the past?
Just curious how it would've played out if a blackhat had discovered this instead.
Why do people write their own memory allocators and put them in production software for modern desktop applications? It's not like you can eke out that much more performance, and the Steam client is already fairly buggy. Switching views is slow, and doesn't always work.
The system allocator is tuned for general purpose workloads. If you know something specific about your workload, like size or frequency or lifetime, you can do better, and sometimes "better" means "by a lot".
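As a minimal sketch of one such specialization (not Steam's code, just an illustration of exploiting known lifetimes): an arena/bump allocator hands out memory by bumping an offset and releases a whole batch of allocations at once, instead of paying malloc/free per object.

```cpp
#include <cstddef>
#include <cstdlib>
#include <new>

// Sketch of a bump/arena allocator: useful when a group of allocations all
// share the same lifetime, so they can be freed together with one reset().
class Arena {
public:
    explicit Arena(std::size_t capacity)
        : buffer_(static_cast<char*>(std::malloc(capacity))),
          capacity_(capacity), offset_(0) {
        if (!buffer_) throw std::bad_alloc();
    }
    ~Arena() { std::free(buffer_); }

    // align must be a power of two; returns nullptr when full so the caller
    // can fall back to the system allocator.
    void* allocate(std::size_t size,
                   std::size_t align = alignof(std::max_align_t)) {
        std::size_t aligned = (offset_ + align - 1) & ~(align - 1);
        if (aligned + size > capacity_) return nullptr;
        offset_ = aligned + size;
        return buffer_ + aligned;
    }

    // Frees everything at once -- the payoff of knowing the lifetime up front.
    void reset() { offset_ = 0; }

private:
    char* buffer_;
    std::size_t capacity_;
    std::size_t offset_;
};
```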
For what it's worth, a heap overflow would have been a big deal with the system allocator too.
It's not like the bug was in their allocator itself, which sounds like nothing more than a free list of small fixed-size objects sliced from larger chunks allocated with the system allocator. That's not a complicated thing, and it can have a significant performance impact.
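A minimal sketch of that kind of pool, based only on the description above (fixed-size blocks carved from larger system-allocator chunks and threaded onto a free list); this is illustrative, not Valve's implementation:

```cpp
#include <cstddef>
#include <cstdlib>
#include <new>

// Fixed-size block pool: carve a big chunk into equal slots, link the free
// slots together, and serve allocate/deallocate by popping/pushing the list.
class FixedPool {
public:
    FixedPool(std::size_t block_size, std::size_t blocks_per_chunk)
        : block_size_(round_up(block_size < sizeof(Node) ? sizeof(Node)
                                                         : block_size)),
          blocks_per_chunk_(blocks_per_chunk) {}

    void* allocate() {
        if (!free_list_) grow();        // slice a fresh chunk when empty
        Node* node = free_list_;
        free_list_ = node->next;
        return node;
    }

    void deallocate(void* p) {          // O(1): push back onto the free list
        Node* node = static_cast<Node*>(p);
        node->next = free_list_;
        free_list_ = node;
    }

private:
    struct Node { Node* next; };

    static std::size_t round_up(std::size_t n) {
        constexpr std::size_t a = alignof(std::max_align_t);
        return (n + a - 1) / a * a;     // keep every slot suitably aligned
    }

    void grow() {
        char* chunk =
            static_cast<char*>(std::malloc(block_size_ * blocks_per_chunk_));
        if (!chunk) throw std::bad_alloc();
        for (std::size_t i = 0; i < blocks_per_chunk_; ++i)
            deallocate(chunk + i * block_size_);
        // Chunks are leaked here for brevity; a real pool would track them so
        // they can be returned to the system allocator on destruction.
    }

    std::size_t block_size_;
    std::size_t blocks_per_chunk_;
    Node* free_list_ = nullptr;
};
```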
Valve is a games company. They probably already had a stable of allocators and were used to that kind of programming. Also the Windows malloc was much slower in 2003 IIRC.
Valve uses their own tier0 and tier1 libraries in both the Source Engine and Steam; they implement their own standard-library-like functions there, and I believe that's where the memory allocator lives too. The tier1 source is available [0], whilst I don't think tier0 has ever been made public, though its headers are [1]. If you look hard enough you can find the complete source code for Source 2007, though, which will include tier0.
Well, Steam runs in the background behind every game launched from it, often provides a video overlay for browsing community features while the game is running, supports chat, has an embedded browser, and can even be used as an operating system overlay so you don't have to touch the OS for most things (Big Picture mode, for dedicated use and control with a game controller). So I can imagine they want to make sure whatever allocator and reclamation scheme they're using has very low and predictable latency.
The last thing they want to deal with is people reporting that running a game through Steam costs a non-negligible amount of performance, or causes weird occasional lags/stalls while playing. In that sense it's an extremely high-performance application: it needs to be nigh unnoticeable to the type of people who overclock their systems, push their graphics cards to the limit, and play with an FPS counter always showing in the corner.
In that respect, it may be a textbook case of an application where you want a very specific memory allocation scheme that falls within very strict performance guidelines.
It already has a large number of performance issues. The store is slow to open, and it's slow to switch to my library. Even slower to view my inventory.
Re-using buffers and only falling back to normal allocations for large sizes is fairly normal in networking code. You see the same strategy used in code that reads things like HTTP requests, in just about every language.
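A rough sketch of that pattern in C++ (the 4 KiB "typical" size and the class name are assumptions for the example, not anything from Steam): small messages reuse a fixed buffer, and only unusually large payloads touch the allocator.

```cpp
#include <cstddef>
#include <vector>

// Reuse a fixed buffer for the common case, fall back to the heap for
// oversized payloads. The overflow vector is itself reused across messages.
class MessageBuffer {
public:
    static constexpr std::size_t kInlineSize = 4096;  // assumed typical size

    // Returns storage large enough for `needed` bytes.
    char* reserve(std::size_t needed) {
        if (needed <= kInlineSize) {
            return inline_buf_;          // fast path: no allocation at all
        }
        overflow_.resize(needed);        // rare path: large payload
        return overflow_.data();
    }

private:
    char inline_buf_[kInlineSize];
    std::vector<char> overflow_;
};
```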
The easiest way to explain it is to go backwards and ask if a one-size-fits-all allocator can really be optimal for most of its users. The answer is probably no. At the very least, most software that has a lot of memory churn could benefit from trying to reuse memory before going down to the allocator. When it comes to particular kinds of objects, like strings, it is very clear that you can have enormous performance benefits from relatively small tweaks.
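As a toy illustration of the string point: in C++, clear() keeps a std::string's capacity, so one scratch buffer can serve a whole loop instead of allocating per iteration. The function name and the "prefix: " literal here are made up for the example.

```cpp
#include <string>
#include <vector>

// Reuse one string's capacity across iterations rather than constructing a
// fresh string (and paying for a fresh allocation) each time around the loop.
std::vector<std::size_t> line_lengths(const std::vector<std::string>& parts) {
    std::vector<std::size_t> lengths;
    lengths.reserve(parts.size());

    std::string scratch;                 // allocated at most a few times total
    for (const auto& p : parts) {
        scratch.clear();                 // length -> 0, capacity retained
        scratch += "prefix: ";
        scratch += p;                    // grows only if p is the longest so far
        lengths.push_back(scratch.size());
    }
    return lengths;
}
```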
I remember reading that Valve is all self-organized and perf/bonuses are based on shipping. In that environment, is there any incentive for an IC to own something like software assurance / quality / fuzzing?
Launching the calculator is basically a "hello world" of exploitation. It proves there's an issue in an obvious way and became a bit of a meme in security.