I've read opinions in the same vein as what you said, except painting this as a good outcome. The gist of the argument is: why spend time looking for the right tool, and effort learning its uses, when you can tell an agent to work out the "problem" for you and spit out a tailored solution?
It's about being oblivious, I suppose. Not too different from claiming there will be no need to write new fiction when an LLM can write the work you want to read on request.
It's a reasonable question. Having shipped some of these naive solutions before, I would probably answer that you'll find out later that it doesn't do entirely what you wished, is very difficult or impossible to maintain, has severe flaws you can't be aware of because you lack the domain expertise, or, worst of all in my opinion, becomes completely unable to adapt to new features you need, whereas the more mature solutions have most likely already spent a considerable amount of time thinking about these things.
I was dabbling in infrastructure consulting for a bit, and prospects would often come to me with stuff like "well, I'll just have AI do it." My response has been "ok, do that, but do keep me in mind if that becomes very difficult a year or two down the road." I haven't yet followed up with any of them to see how they are doing, but some of the ideas I heard were just absolute insanity to me.
Hey, I am actually working on making this compatible with earlier AMD GPUs as well, because I have an old gaming laptop with an RX 5700M, which is GFX10. I'm reading up on the ISA documentation to see where the differences are, and I'll have to adjust some binary encoding to get it to work.
With respect to the other person's point, though: please don't vibe code this, whether you want to contribute or keep the compiler for yourself. This isn't because I'm against using AI assistance when it makes sense; it's because LLMs will really fail in this space. There are things in the specs you won't find until you try them, and LLMs find it really hard to get things right when literal bits matter.
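To illustrate what "literal bits matter" means here, a minimal sketch of instruction-word packing. The field names and widths below are made up for illustration, not taken from any real AMD encoding:

```python
def pack_fields(fields):
    """Pack (value, width) pairs into one instruction word, LSB first."""
    word, shift = 0, 0
    for value, width in fields:
        # A value that overflows its field would silently corrupt the
        # neighboring fields, so check it explicitly.
        assert value < (1 << width), "value overflows its bitfield"
        word |= value << shift
        shift += width
    return word

# Hypothetical layout: opcode in bits [5:0], src in [13:6], dst in [21:14].
encoded = pack_fields([(0b101011, 6), (0x42, 8), (0x7, 8)])
print(hex(encoded))  # 0x1d0ab
```

Get one field width wrong and every field above it shifts into the wrong position, producing a different (possibly valid-looking) instruction, which is exactly the kind of error that's hard to spot without testing on hardware.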
I really like the minimal approach you've taken here. It's refreshing to see this built completely from the ground up, and it's clearly readable and, for me, very educational.
But help me understand something: BarraCuda does its own codegen and therefore has to implement its own optimisation layer? It's incredibly impressive to get "working" binaries, but will it ever become a "viable" alternative to Nvidia's CUDA if it has to re-invent decades of optimisation techniques? Is there a performance comparison between the binaries produced by this compiler and Nvidia's? Is this something you're working on as an interesting technical project, to learn from and prove that it "can be done"? Or are you trying to create something that makes CUDA a realistic option on AMD GPUs?
If it helps: I'm actively considering how to make LLMs more helpful in GPU programming. I recently converted the RDNA 3.5 ISA manual from PDF to Markdown for LLM retrieval (because I have a Strix Halo). Notably, I converted every bitfield diagram to a plain-text table and manually checked their correctness. Anyone can comment there if they find a typo. I guess someone could also do the same for the RDNA 2 and other manuals.
Don't let anyone dissuade you; it's going to be annoying, but it can be done. When diffusion was new and ROCm was still a mess, I was manually patching a lot to get a VII, 1030, then 1200 working well enough.
It's a LOT less bad than it used to be; AMD deserves serious credit. Codex should be able to crush it once you get the env going.
Imagine if Tolkien had written Fellowship in the last decade, and the book landed in your hands today: no decades of cult growth, no adaptations or explosive marketing, just some word of mouth. Would you consider it "important" before reading it? What makes the importance?
In my opinion it's the prose. It's always the prose. Always gotta be on the lookout for good writers, new and old.
> despite everyone saying for 10+ years that they are going to die.
What many people have been saying in my experience is pretty much the opposite: that Mozilla isn't going anywhere because Google wants them (needs them) to be around. That it's their antitrust Trojan horse.
Does it maybe come down to changing licenses, as in a license expires and another is negotiated with different terms (to charge per household instead, as in the example above)?
If the market is spread so thin that, say, fairly original games released today would have been sure hits 15 years ago, where is the failure? Lack of a six-figure investment in marketing campaigns? Is creating success simply already having the capital to make a successful game? Is it being in the influencer "meta" (see, right now, e.g. PEAK)?
I don't think success/failure should be framed in any way other than "did the game break even for the dev/publisher", and that's beyond what any player perceives. Because holding games to anything past that line will send devs into despair, and, as you mentioned, that's just not sane.
I took "I can see a good reason why it failed" to mean "there was an obvious flaw in the craftsmanship of the game": the story wasn't good (if it relied on story), the mechanics weren't good, the graphics were sloppy or ugly, it was buggy or incomplete, or something else.
The claim is: Make a solid game - a solid story, solid mechanics, solid graphics, no bugs, etc., and the game will succeed.
And that's an easy claim to refute -- point out just one game that was at least "solid" on all those fronts which nonetheless failed. He's asking you to show him one, so that he can update his beliefs.
"They didn't spend $500k promoting it" doesn't seem like a "good reason why it failed".
What I'd suggest is taking a look through the games published by a company like Raw Fury, which has a stellar reputation. There are plenty of good games, by that definition, on their books that didn't do well commercially.
I picked a random Raw Fury game, Regions of Ruin. It looks like a Viking side-scroller, fighter, builder game. The art is pretty good, if amateurish. Overall, a pretty good game, though it would probably never catch my eye. I looked up the stats: it peaked at about 3,000 concurrent players and has about 2,000 positive reviews. A game stats site estimated it had about $400,000 in sales. I consider this a success.
I should clarify that by "success" I mean the game had a good amount of attention and enough sales to potentially make a profit. This is what I care about as a potential game developer. Does the market still give decent games a decent shot at being profitable? Regions of Ruin is a decent game and had a decent shot at being profitable.
I looked at Phantom Spark. It's a simple F-Zero style racer through nice-looking 3D stages. It's fairly minimal, with only one type of racing vehicle in some color variations. The main draw of the game is improving your time-trial times. There are some characters that put text on the screen, but their style doesn't really fit the game, and overall the characters don't appear to contribute to a story or anything. I'm guessing there's maybe like a dozen tracks? This game was reviewed by several gaming sites, including IGN, and received decent scores. One website estimated it made $80,000 in revenue.
Everyone will have to judge for themselves whether or not those two games had a shot at success. Judge for yourself the state of the gaming market.
For context, since "success" is slippery, I'd take it as being able to recoup development costs and provide a runway for the next project; otherwise, being a professional game developer could not be sustainable. This is also the worst place to be in as a developer: when each project has to recoup, it's very precarious.
Both are rated Very Positive on Steam so clearly both are good games in the opinion of the gaming population at large.
The thesis that a good game is all you need to find success clearly doesn't hold.
FWIW I think Regions of Ruin was most definitely a commercial success and that estimated revenue figure is probably very low for the review count they have.