Can someone please ELI5? I've heard a lot about it, but with all the drama I still don't get it.
SKG is an initiative that will force game publishers to keep a game online, provided that people have paid for it, and the publisher is not bankrupt? Is that right? What does it have to do with democracy?
No, they do not want to force publishers to keep a game online. The initiative just wants developers to provide a way for users to keep using a game after it has gone EOL by allowing users to run their own servers or by no longer requiring internet access.
See the FAQ[1]:
> Aren't you asking companies to support games forever? Isn't that unrealistic?
> A: No, we are not asking that at all. We are in favor of publishers ending support for a game whenever they choose. What we are asking for is that they implement an end-of-life plan to modify or patch the game so that it can run on customer systems with no further support from the company being necessary. We agree that it is unrealistic to expect companies to support games indefinitely and do not advocate for that in any way. Additionally, there are already real-world examples of publishers ending support for online-only games in a responsible way, such as:
> 'Gran Turismo Sport' published by Sony
> 'Knockout City' published by Velan Studios
> 'Mega Man X DiVE' published by Capcom
> 'Scrolls / Caller's Bane' published by Mojang AB
> 'Duelyst' published by Bandai Namco Entertainment
I'm not sure what the question "What does it have to do with democracy?" is referring to. Some people feel that losing access to video games they paid for isn't fair, so they are petitioning their governments for consumer protection against that.
A solution to the problem was developed in the late 90s / early 2000s.
Games allowed for personally hosted servers and the ability to connect to them. This is how the original Call of Duty, Counter-Strike, Quake III, Doom 3, Enemy Territory, and more worked. A person did not have to create a user account with the company that produced the title.
Modern games require a user account for their services, and you can only connect to the publisher's servers; self-hosting isn't an option.
Self-hosting was very beneficial in the dial-up days because a local ISP could run the server to reduce connection latency.
Battlefield: Bad Company 2 is a great example of how bad it has become.
Some good responses here already. One angle that's not been mentioned yet is informed consent at the time of purchase.
When "buying" not "renting" there is presently no information for the consumer to make an informed choice about what they are purchasing when it comes to a live service game because no end-of-service date is available at the time of making the purchasing decision.
This is in large part why the end of The Crew was problematic for many people.
Had the service's end of life been advertised at the point of purchase, the consumer could have knowingly "purchased" a time-limited product, or not, but the decision would have been informed.
All this stuff about end-of-life plans, releasing self-hosted servers, patching out online-only stuff and leaving behind an offline-only game, etc, is great, but it's only one of the possible remedies that SKG have been discussing for the last couple of years.
Another perfectly feasible one is not to dress up a time-limited entitlement to participate in a live service as the same thing as an "own forever" product at the point of purchase.
SKG will prevent game publishers from making online games unplayable. This could be as simple as releasing the server code and adding a setting to allow custom servers.
Basically the official servers can die, as long as unofficial servers can be used instead.
Or, more so, semi-online games with components that should be fully playable offline. There is no reason those offline components shouldn't continue to operate when the servers are shut down.
What the SKG movement wants, in short, is that developers/publishers of live service games and online-only games be forced, once a game is no longer supported, to provide the tools, software, and executables the community needs to keep the game going. They are using the banner of consumer protection and an EU citizens' initiative to force EU politicians to debate the issue and come up with a solution.
The drama mostly stems from the fact that the head of the movement is a gamer with no background in either software development or game development, so he has a VERY simplistic view of how a game's server-client architecture works and thinks that developers just have a .exe running on a Raspberry Pi that can be uploaded to GitHub and that's it. When people with knowledge point out that games are built on TONS of middleware with their own licenses, and that a server nowadays is more than a single machine, he just says: well, the movement is not retroactive, so new games will be developed with that in mind and every software vendor will automatically be fine with distributing their code so that everyone can keep playing.
While I support the spirit of the movement, this will ultimately end up as a warning label on the box, because real life has more nuance.
I think someone with his perspective might actually be the perfect head of the movement. Most people who play games are not programmers, and games are becoming a big part of modern culture.
Why should people playing (and paying!) for games really care what bad technical or business decisions the publishers have made when they see part of their culture being killed to save a buck?
A lot of other important problems have been resolved in a similar manner without every participant in the movement being a technical expert.
In a three-way conversation between the movement, politicians, and the game industry, you need to know the technical details to rebut arguments and support your claims.
Also, the technical decisions are not just about saving a buck, but about getting the game shipped. If my game is about growing vegetables and I want to let the player drive to the state farm, but I don't want to spend time (and money) building my own physics engine for driving, I grab a solution off the shelf with its license and go back to the core of my game. The same thing is repeated for many other pieces: authentication, anti-cheat, networking, etc.
I’m a game developer - this sums up my feelings perfectly.
A lot of this middleware isn’t necessarily even game middleware - think of a turn-based game that might use a custom DB instead of Mongo or SQL. You’re effectively banning any non-game-specific middleware from being used, or requiring that every company provide a separate licensing path for game developers.
From your post it's not clear that you understand how AWS charges. CloudWatch metrics would only validate your case if these were pay-per-use services like Lambdas or something. But you use the word "infrastructure" which implies you have allocated resources and simply don't use them. That's a valid charge.
Again maybe you are aware, but it wasn't clear from your post.
I don’t get the analogy, because a novel is supposed to be interesting. Code isn’t supposed to be interesting; it’s supposed to work.
If you’re writing novel algorithms all day, then I get your point. But are you? Or have you ever delegated work? If you find the AI losing its train of thought, all it takes is trying again with better high-level instructions.
When they started adding new hooks just to work around their own broken component/rendering lifecycle, I knew React was doomed to become a bloated mess.
Nobody in their right mind is remembering to use `useDeferredValue` or `useEffectEvent` for their very niche uses.
These are a direct result of React's poor component lifecycle design. Compare to Vue's granular lifecycle hooks, which give you all the control you need without workarounds, and they're named in a way that makes sense. [1]
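For example, with Vue's Composition API you get hooks that each fire at one well-defined point (a trivial sketch):

<script setup>
// Each hook maps to exactly one point in the component's lifecycle.
import { onMounted, onUpdated, onUnmounted } from 'vue'

onMounted(() => console.log('DOM is in place'))
onUpdated(() => console.log('a reactive dep changed and the DOM re-rendered'))
onUnmounted(() => console.log('teardown'))
</script>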
And don't get me started on React's sad excuse for global state management with Contexts. A performance nightmare full of entire tree rerenders on every state change, even if components aren't subscribed to that state. Want subscriptions? You've got to hand-roll everything or use a 3rd-party state library, and those don't support initialization before your components render if your global state depends on other state/data in React-land.
I'm all for people avoiding React if they want, but I do want to respond to some of this, as someone who has made a few React apps for work.
> When they started adding new hooks just to work around their own broken component/rendering lifecycle, I knew React was doomed to become a bloated mess.
Hooks didn't fundamentally change anything. They are ways to escape the render loop, which class components already had.
> Nobody in their right mind is remembering to use `useDeferredValue` or `useEffectEvent` for their very niche uses.
Maybe because you don't necessarily need to. But for what it's worth, I'm on old versions of React from when these weren't things, and I've built entire SPAs without them at work. But reading their docs, they seem fine?
> And don't get me started on React's sad excuse for global state management with Contexts. A performance nightmare full of entire tree rerenders on every state change
I think it's good to give context on what a rerender is. It's not the same as repainting the DOM, or even in the same order of magnitude of CPU cycles. Your entire site could rerender from a text input, but you're unlikely to notice it even with 10x CPU slowdown in Devtools, unless you put something expensive in the render cycle for no reason. Indeed, I've seen people do a fetch request every time a text input changes. Meanwhile, if I do the same slowdown on Apple Music which is made in Svelte, it practically crashes.
But pretty much any other state management library will work the way you've described you want.
My issue with React Context is you can only assign initial state through the `value` prop on the provider if you need that initial state to be derived from other hook state/data, which requires yet another wrapper component to pull those in.
Even if you make a `createProvider` factory to initialize a `useMyContext` hook, it still requires what I mentioned above.
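Concretely, the ceremony looks something like this (useAnotherHook is a stand-in for whatever the initial state depends on):

import { createContext, useContext, useState } from 'react';

const SettingsContext = createContext(null);

// This wrapper component exists only so other hooks can be called
// before seeding the context's `value`.
function SettingsProvider({ children }) {
  const dataFromAnotherHook = useAnotherHook(); // hypothetical dependency
  const [settings, setSettings] = useState({
    optionA: true,
    optionB: dataFromAnotherHook,
  });
  return (
    <SettingsContext.Provider value={{ settings, setSettings }}>
      {children}
    </SettingsContext.Provider>
  );
}

const useSettings = () => useContext(SettingsContext);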
Compare this to Vue's Pinia library where you can simply create a global (setup) hook that allows you to bring in other hooks and dependencies, and return the final, global state. Then when you use it, it points to a global instance instead of creating unique instances for each hook used.
Example (React cannot do this, not without enormous boilerplate and TypeScript spaghetti -- good luck!):
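Something along these lines (a minimal sketch; useAnotherComposable and the store shape are just stand-ins):

// stores/settings.js -- a Pinia "setup store"
import { defineStore } from 'pinia'
import { ref } from 'vue'
import { useAnotherComposable } from './composables' // hypothetical dependency

export const useSettingsStore = defineStore('settings', () => {
  // Other composables can be called right here, just like in a component.
  const dataFromAnotherComposable = useAnotherComposable()

  const settings = ref({
    optionA: true,
    optionB: dataFromAnotherComposable,
  })

  function setSettings(next) {
    settings.value = next
  }

  // Whatever is returned is the store; every caller gets the same instance.
  return { settings, setSettings }
})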
This is remarkably easy, and the best part is: I don't have to wrap my components with another <Context.Provider> component. I can... just use the hook! I sorely wish React offered a better way to wire up global or shared state like this. React doesn't even have a plugin system that would allow someone to port Pinia over to React. It's baffling.
Every other 3rd party state management library has to use React Context to initialize store data based on other React-based state/data. Without Context, you must wait for a full render cycle and assign the state using `useEffect`, causing your components to flash or delay rendering before the store's ready.
You can use TanStack Query or Zustand for this in React. They essentially have a global state, and you can attach reactive "views" to it. They also provide ways to delay rendering until you have the data ready.
It'll handle cancellation if your state changes while the query is being evaluated, you can add deferred rendering, and so on. You can even hook it into Suspense and have "transparent" handling of in-progress queries.
The downside is that mutations also need to be handled by these libraries, so it essentially becomes isomorphic to Solid's signals.
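A minimal sketch with Zustand, assuming a store shape like the one discussed above (all names illustrative):

import { create } from 'zustand';

// One global store, created outside the component tree.
const useSettingsStore = create((set) => ({
  settings: { optionA: true, optionB: null },
  setSettings: (next) => set({ settings: next }),
}));

// Components subscribe to just the slice they select; only components
// whose selected slice changes will rerender.
function OptionALabel() {
  const optionA = useSettingsStore((s) => s.settings.optionA);
  return <span>{String(optionA)}</span>;
}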
I've used React Query and Zustand extensively in my projects, and unfortunately Zustand suffers from the same issue in cases where you aren't dealing with async data. I'm talking about React state + data that's already available, but can't be used to initialize your store before the first render cycle.
Here's how Zustand gets around this, and lo-and-behold: it requires React Context :( [1] (Look at how much boilerplate is required!)
React Query at least gives you an `initialData` option [2] to populate the cache before anything is done, and it works similarly to `useState`'s initializer. The key nuance with `const [state, setState] = useState(myInitialValue)` is the initial value is set on `state` before anything renders, so you don't need to wait while the component flashes `null` or a loading state. Whatever you need it to be is there immediately, helping UIs feel faster. It's a minor detail, but it makes a big difference when you're working with more complex dependencies.
And you'd have to use `queryClient` to mutate the state locally since we aren't dealing with server data here.
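Roughly the pattern I mean (the query key, staleTime choice, and settings shape are my own illustrative assumptions):

import { useQuery, useQueryClient } from '@tanstack/react-query';

function useSettings(dataFromAnotherHook) {
  const queryClient = useQueryClient();

  // initialData seeds the cache synchronously, so the very first render
  // already has a value -- no null/loading flash.
  const { data: settings } = useQuery({
    queryKey: ['settings'],
    queryFn: () => queryClient.getQueryData(['settings']), // never hits a server
    initialData: { optionA: true, optionB: dataFromAnotherHook },
    staleTime: Infinity, // client-only state: never refetch over the seed
  });

  // Local "mutations" go straight through the cache.
  const setSettings = (next) => queryClient.setQueryData(['settings'], next);

  return { settings, setSettings };
}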
But here's what I really want from the React team...
// Hook uses a global instance instead of creating a new one each time it's
// used. No React Context boilerplate or ceremony. No wrapping components
// with more messy JSX. I can set state using React's primitives instead of
// messing with a 3rd-party store:
const useGlobalStore = createGlobalStore(() => {
  const dataFromAnotherHook = useAnotherHook();
  const [settings, setSettings] = useState({
    optionA: true,
    optionB: dataFromAnotherHook,
  });
  return {
    settings,
    setSettings,
  };
});
I think it might already be working like that? React now has concurrent rendering, so it will try to optimistically render the DOM on the first pass. This applies even if you have hooks.
They both will result in essentially the same amount of work. Same for calculations with useMemo(). It was a different situation before React 18, because rendering passes were essentially atomic.
How exactly is Vue better? It just introduces more artificial states, as far as I see.
My major problem with React is the way it interacts with async processes, but that's because async processes are inherently tricky to model. Suspense helps, but I don't like it. I very much feel that the intermediate states should be explicit.
I think it's a matter of taste and preference mostly, but I like Vue's overall design better. It uses JS Proxies to handle reactive state (signals, basically) on a granular level, so entire component functions don't need to be run on every single render — only what's needed. This is reflected in benchmarks comparing UI libraries, especially when looking at table row rendering performance.
Their setup (component) functions are a staging ground for wiring up their primitives without you having to worry about how often each call is being made in your component function. Vue 3's composition pattern was inspired by React with hooks, with the exception that variables aren't computed on every render.
And I agree about Suspense: it's a confusing API because it's yet another way React forces you to nest your app / component structure even further, which creates indirection and makes it harder to tie things together logically so they're easy to reason about. The "oops, I forgot this was wrapped with X or Y" problem persists if a stray wrapper lives outside of the component you're working on.
I prefer using switch statements or internal component logic to assign the desired state to a variable, and then rendering it within the component's wrapper elements -- states like: loading, error, empty, and default -- all in the same component depending on my async status.
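A rough sketch of what I mean (useItemsQuery and the child components are placeholders):

function ItemList() {
  const { data, error, isLoading } = useItemsQuery();

  // Collapse the async status into one of the four states up front.
  const status = isLoading ? 'loading'
    : error ? 'error'
    : data.length === 0 ? 'empty'
    : 'default';

  let body;
  switch (status) {
    case 'loading': body = <Spinner />; break;
    case 'error':   body = <ErrorMessage error={error} />; break;
    case 'empty':   body = <EmptyState />; break;
    default:        body = data.map((item) => <Item key={item.id} item={item} />);
  }

  // Every state renders inside the same wrapper, in one component.
  return <section className="item-list">{body}</section>;
}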
I tried proxy-based approaches before (in Solid) and I _also_ had a lot of problems with async processes. The "transparent" proxies are not really transparent.
I understand that mixing declarative UI with the harsh imperative world is always problematic, but I think I prefer React's approach of "no spooky action at a distance".
As for speed, I didn't find any real difference between frameworks when they are used correctly. React can handle several thousand visible elements just fine, and if you have more, you probably should work on reducing that or providing optimized diffing.
For example, we're using React for 3D reactive scenes with tens of thousands of visible elements. We do that by hooking into low-level diffing (the design was inspired by ThreeJS), and it works acceptably well on React Native, which uses interpreted JS.
I'm with you there -- I use React more than Vue day-to-day since most companies reach for it before anything else, so it's ubiquitous. Most devs simply don't have a choice unless they're lucky enough to be in the driver's seat of a greenfield project.
I find React perfectly acceptable, it's just global state management and a few flaws with its lifecycle that repeatedly haunt me from time to time. (see: https://news.ycombinator.com/item?id=46683809)
Vue's downside is not being able to store template fragments in variables. Every template/component must be a separate component file (or a registered component somewhere in the tree), so the ease of conditionally passing around HTML/JSX in variables is impossible in Vue. You can use raw render functions, but who wants to write those?
JSX being a first-class citizen is where React really shines.
I forgot to mention in my other reply, but if you find yourself needing to render a massive list performantly, check out TanStack Virtual. It's a godsend!
I use Preact, in the old-school way, without any "use-whatever" that React introduced. I like it that way. It's simple, it's very easy, and I get things done quickly without over-thinking it.
It didn't solve frontend; it sold developers one lie (i.e. ui = f(state)) and managers another (that developers are interchangeable gears).
Problems are only truly solved by the folks who dedicate themselves to understanding the problem, that is: the folks working on web standards and the other folks implementing them.
> Problems are only truly solved by the folks who dedicate themselves to understanding the problem, that is: the folks working on web standards and the other folks implementing them.
It kills me to think of how amazing Web Components could be if those folks had started standardising them _now_, instead of in "competition" with the userland component libraries of the time. That effort punted on many of the essential challenges of developing UI in the DOM that those libraries were still evolving solutions for, and it introduced more problems entirely of its own making.
Too bad the problems getting solved aren't the problems that need solving. Maybe this is one of the reasons software development is such a joke of a profession.
At work, I have the same difficulty using AI as you. When working on deep Jiras that require a lot of domain knowledge and bespoke testing tools, but maybe just a few lines of actual code changes across a vast codebase, I have not been able to use it effectively.
For personal projects, on the other hand, it has sped me up... what, 10x? 30x? It's not measurable. My output has been so much more than what would have been possible before that there is no benchmark, because projects at this level would not have been getting completed in the first place.
Back to using at work: I think it's a skill issue. Both on my end and yours. We haven't found a way to encode our domain knowledge into AI and transcend into orchestrators of that AI.
> deep Jiras that require a lot of domain knowledge, bespoke testing tools, but maybe just a few lines of actual code changes
How do new hires onboard? Do you spend days of your own time guiding them in person, do they just figure things out on their own after a few quarters of working on small tickets, or are things documented? Basically, when working on a codebase, AI has the same level of context a new hire would have, so if you want it to get up to speed faster, provide it with ample documentation.
> Do you spend days of your own time guiding them in person, do they just figure things out on their own after a few quarters of working on small tickets
It is this rather than docs. I think you're absolutely right about our lack of documentation handicapping AI agents.
> Hey, I'm not the OG commentator, why do I have to explain myself! :)
The issue is that you're not acknowledging or replying to people's explanations for _why_ they see this as exponential growth. It's almost as if you skimmed through the meat of the comment and then just re-phrased your original idea.
> When Fernando Alonso (best rookie btw) goes from 0-60 in 2.4 seconds in his Aston Martin, is it reasonable to assume he will near the speed of light in 20 seconds?
This comparison doesn't make sense because we know the limits of cars, but we don't yet know the limits of LLMs. It's an open question. Whether an F1 engine can get a car to the speed of light in 20 seconds is not an open question.
It's not on me to somehow disprove claims of exponential growth when no evidence of it has even been provided.
My point with the F1 comparison is that a short period of rapid improvement doesn't imply exponential growth; expecting that is about as weird as expecting an F1 car to reach the speed of light. It's possible, you know: the regulations are changing for next season, so if Leclerc beats the lap record in Australia by 0.1 ms, we can just assume exponential improvements, and surely Ferrari will be lapping the rest of the field by the summer, right?
There is already evidence provided of it! METR's time-horizon metric is going up on an exponential trend. It's literally the most famous AI benchmark, and it was already mentioned in this thread.
I am still using LLMs just to ask questions and never giving them the keyboard so I haven’t quite experienced this yet. It has not made me a 10x dev but at times it has made me a 2x dev, and that’s quite enough for me.
It’s like jacking off, once in a while won’t hurt and may even be beneficial. But if you do it constantly you’re gonna have a problem.
It’s immeasurable. I use AI for powering through personal projects, which would not have gotten done without AI because I also have a job and a life. It allows me to focus on the product and requirements rather than the code. It’s hard to measure because the projects would simply not have gotten done without it.