
They missed the 3rd(!) bug in those few lines of code? The normalization was done before zeroing out the Z component, so the resulting vector isn’t actually normalized.


Not saying I never conjure the spaghetti, but whoever coded that really was deep into the tail end of a caffeine, coding and sleep deprivation binge ;)


Good chance it was John Carmack, probably one of the top 10 programmers of all time. He basically built the Quake engine himself. There is a pretty decent analysis of it here:

* https://fabiensanglard.net/quakeSource/index.php

I recommend checking it out.

EDIT: i was thinking of the analysis done on Quake 2, which is more about the engine:

* https://fabiensanglard.net/quake2/index.php

while the one above is about the multiplayer system, which is still an amazing bit of dev.

Github:

* https://github.com/id-Software/Quake


Ranking programmers is as dumb as ranking musicians.

Carmack is great but you can't really say he's better than say some guy you've never heard of who wrote a bunch of absolutely brilliant algorithms inside a missile defense system and it's all super secret. Or someone who did amazing work on a compiler Carmack used or brilliant work in an OS kernel that you're never going to know about, or some AI code, or something deep within a stock exchange's platform.

Carmack's perceived greatness is elevated because almost 100% of what he has worked on is directly visible to the average consumer.


Carmack's main superpowers seem to be tons of hard work, not raw intelligence

1) Perseverance and focus: He kept rewriting the Quake renderer, pushing it faster and better, where other people had long ago settled for a lesser variant or given up.

2) Try everything, even the stupid ideas. When he wrote Quake, the conventional wisdom was that the FPU was way too slow for a game. He managed to do z-divisions for blocks of 16 pixels on it, while the integer code spat out the pixels.

3) Attention to detail. The Doom span renderer was sub-pixel precise, which was pretty unique for its time. It looked better in a hard-to-define way.

4) Code quality. Doom and Quake are very readable, very portable and heavily documented. The average one-person codebase tends to be messy and undocumented, as nobody else ever takes a look. On the other hand, there are plenty of FIXMEs in there. It was unabashedly special-cased where he could get away with it.

5) He consumed whatever idea he could get. BSPs were an obscure research trick before he took them and ran with them.

He had huge holes in his 3D knowledge when writing Doom, as he admits himself. He didn't care until Quake forced him to level up. But he could look in the mirror, say he needed to do better, and learn the extra things when he had to.

I don't know if Carmack is that much smarter than the average HNer, but he surely works a lot harder. And that's an even better compliment, in my book.


Not sure where I read it, but someone writing about the history of DOOM basically said John was a machine and the machine ran on Diet Coke and pizza. As long as he had both, he could remain in the zone indefinitely, which is pretty crazy when you think about all the things he accomplished that you listed.


> Not sure where I read it, but someone writing about the history of DOOM basically said John was a machine and the machine ran on Diet Coke and pizza.

I'm pretty sure I've read it in "Masters of Doom".


> Ranking programmers is as dumb as ranking musicians.

If a musician had a noticeable impact on culture and was prolific, I don't see why we can't put them in a bracket of [in top #]. The ranking is really just to illustrate my respect for his work, accomplishments, insightful communications, and continued relevance as well as impact on my life and those around me. You can call that 'dumb' if you like, but I am human just like you and am not above placing some people's accomplishments above those of others.


To me, Carmack equates more to an engineer who designed wild equipment used by sound engineers and musicians. Aside from Torvalds or Wozniak, I think most programmers are unsung, regardless of how much their products impact the world.

Most programmers are quickly forgotten if they were ever known in the first place. In terms of cultural impact, think of the programmers behind our PC or mobile operating systems, major consumer app categories like spreadsheets, word processors, photo or video editing, etc. Of course some nerds of computing history can cite folks behind all these, but it is niche trivia to most of the population.

It also seems like an era coming to a close where a solo programmer can deliver an impactful product. Who gets the credit instead? Usually an executive or entrepreneur gets the credit. Sometimes this person programmed early on, but often they either transitioned to management or started there and other programmers were responsible for delivering the product we eventually know.


> It also seems like an era coming to a close where a solo programmer can deliver an impactful product.

This is true but you have to look at it differently. There are generally two ways of looking at history: the 'great man'[0] theory, or the 'of the times' theory.

If you look at it one way Woz and Carmack and Torvalds and the others were instrumental in shaping their surroundings and without those specific people we would have lost out on basically the entire technological world as we know it.

If you look at it another way, they were inevitable -- the times were such that it was bound to happen (or at least extremely likely) because of a great confluence of events that could never be arranged or predicted, and if Woz had electrocuted himself making the Apple I power supply module then someone else would have done something similar around the same time and we end up at the same spot (but Steve Jobs becomes a moderately successful Bay Area Benz dealer and we all use Blackberry phones with physical keyboards in 2023).

The era of an individual engineer or programmer making a paradigm shifting breakthrough in his or her basement may be over, but that just means the times have shifted into another dynamic. What that is can not be predicted, but if we do survive the oncoming crises upon us and somehow also never end up turning the planet into smoldering radioactive ash over a shipping lane dispute or something, there will be a time when such a person can be expected to emerge and do it again.

[0] excuse the masculine nature of this terminology, but unfortunately that is what it is called, though I haven't formally studied history in a while and it could have changed


Coincidentally (or not), Carmack started Armadillo Aerospace and was working on rocket control and navigation systems. So…


> Ranking programmers is as dumb as ranking musicians.

But also as much fun for people with appropriately specialized interests, so it'll probably keep happening.


I'm not sure if it would've been John Carmack in this particular case. He was more concerned with the engine and rendering side of things, less with the gameplay logic stuff. More likely it would've been either John Romero or John Cash who wrote this code, but it's hard to say for sure. Best thing we can hope for is that John Romero pitches in with his impeccable memory :D


You can read Carmack's .plan archive from 1996 here, if you're so inclined:

https://github.com/ESWAT/john-carmack-plan-archive/blob/mast...

It's a _fascinating_ snapshot into Quake's development.

I have no idea if his .plan is a record of what he, specifically, was doing, or if he was just capturing what the programming team was doing, but at the very least, it makes clear that he was aware of huge amounts of very highly specific game code issues as they were being worked on and was almost certainly deeply involved.


Wow, this link is something else - you can track what problems he tackled and what he worked on DAILY.


Guessing you're on the younger side? I used to read these voraciously as an 11-year-old. Felt like a magic window into the game industry.


Hello fellow I-was-once-11-and-fascinated-by-.plan! I wasn't sure if I was the only one. St Louis in 2000 made it seem like there were only a handful of people interested in programming at all.

Hmm... 11 would put me at 1999, so I must've been more like 12 or 13. I remember that's when I started taking gamedev seriously.


So the only requirement to work at id back in the day was to be called John?


That’s what my buddy said, yeah. Good old John Manyjohns.


For those who haven’t seen Buckaroo Banzai—all the people working at Yoyodyne Propulsion are named John. It’s a plot point.


Or Carmack.


It seems like Amazon has the same idea when promoting people named Andy or Jeff to executive roles.


"Hi Sandy, hope you're doing well."


I loved the video, I love this type of analysis of bugs and glitches. This comment, however, just makes me nervous. It's like my innermost worry.

I spend so much time trying to do a good job which I can be proud of and yet I always know there are bugs and I just hope not too many players notice and get annoyed by them.

And here we are, pointing out poorly written code in practically ancient software and attributing it to one particular programmer. Of course, Carmack has more than shown his worth throughout his career. But I dread the thought of a forum post pointing out mistakes I've made at work like this.

It's part of why it took me so long to dare publish articles about fun stuff I've done at work.


Honestly, why care? Everyone writes bugs. The thing is that Quake works fine; if some speedrunners want to exploit a bug decades later to impress each other with runs, after having dissected the code to bits to find glitches, is that really a bad mark on anyone? I don't remember anyone complaining about lightning gun glitches when Quake was being played.


I agree. Code that works well is a good thing even if not perfect and super optimized.

I’m not an elegant coder. My code isn’t perfect. It does run, however, and is pretty efficient and mostly bug-free. It does its useful thing and I can be proud of that.

I’ve gone through enough code reviews to know there is a huge amount of polish one can give code. It’s a trade-off between perfect and good enough.

I’m hopeful new languages and patterns leveraging libraries will help make our code better, or at least let us write more of it.


People speedrun this game. Understanding every line of it is part of the process, and the reason people care who wrote what line is that it has a lot of culture and history associated with it. If you work on stuff that makes this kind of impact, that makes people care like this, you will feel proud having your bugs picked apart, not ashamed for having put them there in the first place. And if you don't, rest easy knowing any bug you ever make probably won't see a thousandth of the effort being put into understanding and attributing it.


Perhaps the Living Videotext slogan can help you:

-- "We Make Shitty Software... With Bugs!"

http://scripting.com/davenet/1995/09/03/wemakeshittysoftware...

I think it is a good mindset to have. We must all assume that software is an imperfect thing done by imperfect beings. Even celebrate it.

In this particular case, I recommend concentrating not on the fact that there was a bug, but on the fact that code written so many years ago is still relevant for someone today. Focus on that. Try to make something that someone will find a bug in 30 years from now.


I found this looking through Carmack's .plan files. Perhaps you will appreciate it.

Snippet:

I want bug free software. I also want software that runs at infinite speed, takes no bandwidth, is flexible enough to do anything, and was finished yesterday.

Every day I make decisions to let something stand and move on, rather than continuing until it is "perfect". Often, I really WANT to keep working on it, but other things have risen to the top of the priority queue, and demand my attention.

* https://raw.githubusercontent.com/ESWAT/john-carmack-plan-ar...


I feel the same way, all the time. The more time I spend improving and tweaking a solution, the longer the list of remaining improvements I wish I could do.

That's why pieces I publish about my work usually contain a fairly long section about potential further improvements.


If anybody cares enough to pick over your code 30 years later, that probably means you were very successful at what you did; at the very least you had a hand in creating something people care about, which is more than most programmers can say.


> probably one of the top 10 programmers of all time

By what metric?

Popularity? PR? Net worth? Complex software? Competitive programming? Impact?

For every "top" SE there are tens of similarly skilled engineers that you have never heard of


> By what metric?

His record speaks for itself. I know it is tough to imagine what it was like programming 3D engines meant to run on non-accelerated PCs running DOS, without the internet knowledge or collaboration available to us now, but the stuff he was doing was absolutely revolutionary. He basically created PC gaming as we know it today, by creating a market for 3D accelerators and, before that, by doing it in software very well. He also did this stuff in a record amount of time: from Wolfenstein 3D (1992) to Quake (1996) he made three separate engines which each revolutionized the industry, one after the other.

EDIT: The metric I am using is 'impact' on an industry and on society in general -- the impact of Doom and Quake on culture and on the adoption of computers and PC gaming, among many other things, cannot be overstated. You can draw a direct line from Doom to Quake to Nvidia to CUDA and machine learning if you want.


An interesting read on some of the insights Carmack had which led to the revolution that was Quake can be found in the "Ramblings in Realtime" articles[1] by Michael Abrash[2], who helped John develop Quake's 3D engine. Abrash wrote the rather famous "Graphics Programming Black Book", now available for free[3].

[1]: https://valvedev.info/archives/abrash/

[2]: https://en.wikipedia.org/wiki/Michael_Abrash

[3]: https://github.com/jagregory/abrash-black-book


I remember he dropped Doom 3 on us with fully dynamic lighting and shadows in, like, fucking 2004. I remember getting my hands on the engine leak (in 2003?) and absolutely getting my mind blown seeing shadows dancing on the wall (at like 8 FPS on my Athlon XP / GeForce 4 rig) at something resembling real time, in the same map that was used at the Apple developer demo where Steve Jobs announced Mac support.

All of Carmack's engines were YEARS ahead of their time. Everyone was doing baked lighting and employing various gimmicks for dynamic-looking shadows, and this dude (and his team) comes in and destroys everyone with this fully-, actually-dynamic lighting system. I don't think anyone else even came close until Crysis in 2007.

If that isn't a top 10 programmer performance then I don't know what is.

2001 Macworld demo: https://www.youtube.com/watch?v=80guchXqz14


> If that isn't a top 10 programmer performance than I don't know what is.

Nobody's saying he isn't brilliant. But it's not possible to declare him as a "top 10" programmer without actually taking into account all the other programmers.

I think there are at least 10 other programmers that you've never heard of, but that have had as much or more impact on society in general as Carmack did.

Isn't it enough to just say he's brilliant? Ranking programmers is a fool's errand and serves no purpose other than to devalue other brilliant programmers.

(Also, I think measuring how brilliant someone is by how much impact they've had on society doesn't make much sense. Those are two entirely different things.)


I don't think anyone here means top 10 in a strict ranking sense of the phrase. But I think Carmack very easily has a place amongst the pantheon of programming gods, whether you measure by 'brilliance' (whatever that means) or impact. He has done a ridiculous amount of innovation over the years, and set the pace for game engine and real-time rendering development for something like 20 years.


> I think Carmack very easily has a place amongst the pantheon of programming gods, whether you measure by 'brilliance' (whatever that means) or impact.

I guess? I don't know. There are so many programmers that have had a much greater overall impact and such than Carmack has (which in no way takes away from Carmack's accomplishments!) that I find it hard to say either way.

That's part of why I think trying to rank people is a bit strange. I doubt that there is even much consensus on what it takes to be a "programming god" in the first place.

But I do agree that Carmack is great!


Like who? I can think of Brian Kernighan, Dennis Ritchie, Linus Torvalds, Steve Wozniak, Ada Lovelace, Grace Hopper, Richard Stallman, Edsger Dijkstra, Bill Gates, John Carmack. Maybe Alan Turing. With the exception of the real OGs, I think Carmack fits in that list quite nicely.


...and hundreds or thousands of others who signed some NDA, worked on e.g. mission-critical systems (where bugs like the ones discussed here would cost you your job), and never got to show their work, which is OP's point.


I mean, if you are going to be culturally impactful, I think the most important prerequisite is that you are allowed to talk about what you create...


Agreed, but then we should differentiate between "best" and "most famous"


The closest guy to Carmack today seems to be Brian Karis, who developed virtualized geometry[1], a rasterization-based rendering technique where the geometric detail of static objects is adjusted to the screen resolution in real time. There is another talk (can't find it right now) where he mentions Carmack as an inspiration. I wonder what Carmack would be working on today if he was still in the game engine business.

[1] https://advances.realtimerendering.com/s2021/Karis_Nanite_SI...


He was, a little bit. I wonder what he was up to at Meta before switching to AGI.


VR headsets.


Who made DLSS? Using ML to denoise ray tracing is massive.


I think DLSS doesn't do denoising. DLSS uses an ML algorithm to upscale frames using additional information like z buffer and motion vectors. Denoising is used to drastically reduce the number of required rays for ray tracing. I would guess denoising is done first, using a non-ML algorithm, and DLSS is done afterwards.


DLSS has done denoising since DLSS 1.0; see the DLSS announcement https://www.youtube.com/live/4_g_Y0W1Xn8?feature=share (pardon the crappy gaming channel; it looks like Nvidia weren’t running their own YouTube channel at the time).

It has added frame generation since 3.0, I think.


That's odd, since you can use ray tracing without DLSS. I'm very sure it uses denoising even then.


There may be some confusion between upscaling and denoising. You might notice I keep editing my comment because it’s really really hard to find old Nvidia release announcements.

Update: https://youtu.be/6O2B9BZiZjQ looks good; around 3:54 Nvidia refer to "deep learning for image denoising", which to me seems like DLSS.

Ray tracing without DLSS might use a different denoising technique. Is it really slow when you turn it on?


There's some really big innovations happening in e.g. rendering like that; I wouldn't be surprised if the complexity and work involved in just Nanite is 10x that of the whole Quake engine.

Similarly there's the work nvidia is doing, using AI technology to upscale graphics instead of rendering things at 4K.


The point is that the derivative came after the virtuoso. Of course what came after is more complex, costs more, takes more effort, and is more impressive. Dismissing those who thought of an idea, or solved a problem before it was established, is such a dumb thing people do.


Hobbyists have already made their own implementation of Nanite. It's not that complicated.


It has been over a year since the algorithm was explained in quite a bit of detail.

Hobbyists make complicated things quite regularly, although I don't know about 10x quake.

10x a quake for modern hardware, maybe (could be quite simple)

10x software rendered dos quake with loads of asm, probably not.


In that sense, neither is the quake engine.


Hell no. As skilled as Carmack is as a coder, he falls well below the pioneers for early operating systems and programming language compilers and development in terms of industry and societal impact. Even Bricklin's VisiCalc and its descendants resulting in boring old modern spreadsheet software has a better claim to lasting societal impact than Carmack's 3D engines.


> The metric I am using is 'impact' in an industry and in society in general

If you correlate against lines of code written over a lifetime then John Brunner and Vernor Vinge bubble to the top. They're my go-to examples for people who think that literature reviews "don't matter".

Kind of a fun exercise since we get to see all the different values out there in the world.


Excellent summary


I don't think a deep analysis is required here. It is quite obvious that he won't be a top-10 programmer on every single metric one can imagine.

He's very good. Enough people agree. I recommend leaving it at that.


Amount of tickets closed on JIRA lmaoo


LoC, the ultimate metric


Please no. I know it's a joke, but please.


I know, the truth hurts


Impact. He pretty much invented the FPS genre by reading a paper about BSP trees from some guy in the Navy, IIRC. All the Call of Duty kiddies owe him a debt. Who knows how long it would have taken for that whole genre to get going without him?


He popularised BSP for FPSes, which meant Doom ran faster than System Shock despite having more complex level geometry.


Was Doom's geometry more complex? No room over room, no slopes, vertical axis was awkward. Feels like apples and oranges, even before considering the gameplay was also very distinct between them.


All true. System Shock also had 3D models for some objects.

I guess a better way of putting it would’ve been that System Shock would have run a lot faster if it had used BSP.


Since I’m actually a top 5 programmer of all time (self-decided), I can say with confidence that Carmack is actually a top 50 programmer but nowhere near the top 10. He didn’t use ternaries nearly enough.


> Good chance it was John Carmack, probably one of the top 10 programmers of all time.

Where would you rank Linus Torvalds?


In the top 10 as well. In fact not too long ago I was asking someone to rate some piece of code and used 'from Linus / Carmack to (something shitty I don't remember)' as the metric.


It would be fun to make a game that showed you random code snippets and asked you to identify the author.


I honestly have no idea how good Linus is as a programmer; he's proven his worth as a project manager, he seems to know a thing or two (to make an understatement) about operating systems, but I don't know if he's a good programmer.


It's tough to decide if the Linux kernel or git is his bigger contribution to society.


Git isn't even the best DVCS; it won on the back of GitHub more than on its own strengths.


Empathy is a trait that I value highly in a programmer.


Empathy and faux politeness are different things.


I'm not disagreeing. In fact I would say that faux politeness is a strong sign of low empathy.

In this particular case, Linus has publicly admitted that he struggles with empathy.


I've been working with the leaked wipEout source code for a while[1] and I can assure you the quality of the Quake source is absolutely stellar in comparison. While Quake's source may not be up to modern best practices, the overall architecture certainly has a lot of structure and thought put into it.

Modifying Quake to run on the Oculus Rift was a breeze[2], compared to the mountains of garbage I have to wade through with wipEout.

[1] https://twitter.com/phoboslab/status/1653707447586922498

[2] https://phoboslab.org/log/2016/05/quake-for-oculus-rift


Sure, but there's a big difference between leaked code vs. code cleaned up and officially released over 3 years after launch. Who knows what the Quake source looked like on the date they initially shipped. Quake also remained in development for quite a while, with the latest patch released ~March 1997 (original release June 96), and QuakeWorld last released end of 1998.


> the leaked wipEout source code

That's really cool, will you share your efforts anywhere? Love a bit of anti-gravity racing myself. I recommend you listen to the OST whilst coding it for maximum immersion.


The unlicensed source makes this difficult. My current plan is to trim the game to a demo with one race track, compile it to WASM, put it on my website and get Sony's attention (maybe through Nightdive Studios?) for a proper re-release.

In the likely event that this fails, I'll just YOLO it and put it on GitHub. After all, Sony didn't care that I published[1] all of wipEout's assets seven years ago.

[1] https://phoboslab.org/wipeout/


That's class! Would you object if I typed up a quick review and listed it on my Well Made Web gallery? I'd slot it in under Artistic https://wmw.thran.uk/artistic/index.html


> whoever coded that really was deep into the tail end of a caffeine, coding and sleep deprivation binge ;)

That's most indie game devs of the period, but more so at id Software which splintered the team during Quake's development due to internal squabbles, causing John Romero and others to leave id.

You can even tell that from the quality of Quake's levels which start with beautifully crafted and intricate levels, and as you approach the end, progress into "whatever, let's just ship it, this is gonna sell" kind of levels that were mostly just boring repetitive filler to pad the play time.


> You can even tell that from the quality of Quake's levels which start with beautifully crafted and intricate levels, and as you approach the end, progress into "whatever, let's just ship it, this is gonna sell" kind of levels that were mostly just boring repetitive filler to pad the play time.

IIRC (this is well-documented if you want to double-check), Tim Willits made most of the Episode 1 maps, John Romero made most of the Episode 2 maps, American McGee made most of the Episode 3 maps, Sandy Petersen made most of the Episode 4 maps, and John Romero made most of the level 1, military-base-themed maps in each episode.

The Episode 4 maps are often barren, lacking in detail, and missing much of the beautiful interconnection of earlier maps... but this is also true of Petersen's maps from Doom (he did a lot of Episode 3 in the original Doom, IIRC). So I think it's more a case of "this guy might be a strong game designer in a lot of other contexts, but the specific needs of making cutting-edge Doom/Quake-style maps aren't a great fit for him".

I was at Raven Software at the transition from the Doom engine and other 2.5D engines to Quake (and then Quake 2, and then Quake 3, and then Doom 3), and there were a number of existing designers who were fine game designers in earlier, 2d contexts who found their skills severely out of sync with the changing demands of 3d map making, and most of them eventually had to transition to other roles or leave the industry.


FWIW, while I also found Petersen's maps on the weird side geometrically, at the same time I think they were among the most interesting to play (both in Doom and Quake), as he often tried to come up with various gameplay tricks and traps to break the "mold".

His Quake maps especially give me the impression that he was more into exploring what the player could do with the freedom allowed by 3D space than into making good-looking environments (especially given Quake's theme, which didn't really have to conform to any realistic constraints and could have shapes floating in space, physically impossible architecture, or whatever).


I think there's a couple of orthogonal issues here.

One of them is the nature of interactivity in the levels themselves. There's a spectrum between having a game grammar made of distinct, discrete, reusable interactive objects and building unique situations by assembling them in interesting ways, versus having (essentially) unique scripted traps, interactive things, or set pieces that only show up in one place. Older action games that inspired Doom tend to draw from the former tradition; a lot of FPS games that came after Quake tended to go more down the second road.

Quake's trigger system specifically opened the door to a rudimentary kind of visual scripting that made the latter style of design possible in a way that it wasn't in Doom (although it was possible in Hexen via HexenC(?)). I think you could say that that style of design really came into its own with Half-Life, which foregrounded unique interactivity grounded in very specific, themed levels much more clearly. Doom at its best seems like it's much more in the design space of, say, Robotron and old Mario games: fewer unique set pieces, much more focus on discrete interactive toys to be recombined. And given id's background with Commander Keen and their earlier recreation of the first level of Mario 3, this design influence shouldn't be a surprise. Anyway, Quake feels like it sits at the intersection of these two styles of design.

I think it is true that Petersen did try to go more down that second road of design in the Episode 4 maps, in a way that there was less of in other maps, and that is interesting.

But the other thing that sticks out to me more, in terms of level design, is the way the space is shaped. A lot of the very best Doom and Quake levels have different parts that intersect and interact in interesting, playful ways: the order in which you see areas is different from the order in which you hear them, which is different from the order in which you can attack into or interact with them, which is different from the order in which you can move through them, which is different from the order in which different kinds of enemies can move through or attack them. And that changes as you progress through a level, get keys, and activate switches.

There's a tendency for levels to start somewhat linear and movement-constrained but give information about later areas in a more non-linear, tantalizing way; then, as the player progresses, their movement through the level becomes more like a multiply connected graph as switches, keys, and activated lifts turn a lot of one-way paths into two-way ones. And that style of design plays to the strengths of Doom and Quake using BSPs as their fundamental level data structure: BSPs specifically make these kinds of weird and surprising visual and physical intersections between areas manageable, in terms of computational performance, on '90s-era hardware.

Whether or not someone considers the design approaches I just outlined appealing is fundamentally an aesthetic issue, obviously; there's no one right way to enjoy a game. But my general sense is that the Sandy Petersen maps in Doom and Quake tend to explore these approaches much less than the maps made by the other designers.


You assume that the quake maps were built sequentially, but that's not the case. Since this was all new technology, the "best" maps were conceived at the end of development, after the team was familiar with the tech. You want those maps to be at the start of the game, as it's the first thing the player sees.

Also, the distinct style of the last episode is easily explained by the fact that all of its maps were built by Sandy Petersen. E3 was mostly American McGee, while John Romero and Tim Willits worked on E1 and E2.

I'm not a fan of Petersen's maps either, but they are regarded (and liked) as quite unique, compared to the rest of the game.


Yeah, I didn't like Petersen's levels much at the time, but looking back on them later as a designer (I worked on a couple of doomed Unreal projects), I can see he was trying to be as creative as possible within the limits of the engine, especially in getting away from the "find the yellow key" style of design that, frankly, everyone was already bored of.

I had a pirated pre-release copy of Quake that I wish I'd saved. In any case, I do remember there were changes to the maps all over the game, so the levels were definitely not done in order. The biggest difference I remember is that the ending in the final game is totally different. The pre-release had a more Doom-style waves-of-monsters fight on a sort of giant terrace. I didn't particularly like that Doom-style ending, but I also felt the released game's ending was kind of an anticlimactic gimmick.


Yeah, the boss at the end of episode 1 (e1m7, House of Chthon) was way more interesting than end.bsp, and I remember it really blew me away as a kid. Makes sense based on what others were saying about having the best levels up front.


I disagree that the later levels are filler. Episode 4 is my favorite, because it's the episode that best shows off Quake's excellent movement. It's less cramped than the others, and has better "flow", letting you bunny hop all over the maps with minimal waiting. I agree with the common consensus that it's the ugliest of the episodes, but I think this suits the weird otherworldly theme that Sandy Petersen was going for (influenced by the works of H. P. Lovecraft).

The only real complaint I have is the use of the Spawn enemy, which is generally considered the worst designed enemy in the game. But the enemy placement in episode 4 has the advantage that it makes relatively little use of the Ogre, which I consider the second worst enemy, because it has too much HP for something so common, and is too predictable on Nightmare difficulty (to the point that some people say Hard difficulty is actually more difficult than Nightmare).


Yep I’m in this code base a lot, and there’s a lot of this. Some of it by John’s own hand. But you know, if it works, it works. I’ve added plenty of my own bugs in the same spirit.


Adding bugs intentionally? Based


> causing John Romero and others to leave id

AFAIK Romero was fired for not working. The cubicle walls came down because Romero was playing Doom on the clock instead of making the game (his form of protest for not getting to make an RPG, or something). If you look at the Quake code, most of the broken/lazy stuff was done by him. Example: https://www.youtube.com/watch?v=DEkjDkr0Qmc


AFAIK, Romero was a bit full of himself at that point in time, seeing himself as a godly game designer. He then went on to start his own company and build the RPG he wanted, Daikatana, which was late, over budget, and flopped big time.


> You can even tell that from the quality of Quake's levels which start with beautifully crafted and intricate levels, and as you approach the end, progress into "whatever, let's just ship it, this is gonna sell" kind of levels that were mostly just filler to pad the play time.

This rings really true to me. I only had played the shareware version of quake and only recently played the full game, and the second half really felt like padding.

Some gimmicks in particular were really overused later on (I specifically recall that at a certain point I could just tell "There's going to be an ogre hidden in this corner").

I'm not sure if there were also other reasons - maybe they decided to use some of the best designs for the shareware episode, maybe they ran out of space for more enemies, or maybe there's just way more nostalgia for the first levels - but I'm happy to see that some people share a similar opinion.


They probably didn't expect many people to get more than half way through the game.


It was a different time. It's not like today when everybody has a Steam collection full of games played just a couple of hours and then abandoned.


I swear I'll finish more of them when I get some free time.

Though it certainly is nicer on my wallet to normally grab games on sale.


I was looking at my Steam library and wishlist last night, in anticipation of the upcoming summer sale season (other storefronts have already begun their summer sales), and I was somewhat disappointed in myself. Even though I try to only buy games on sale, I think I'd likely save money overall by only buying (and finishing, if I'm enjoying them) games at full price when I'm ready to play them vs. speculatively buying games on sale.


Doom, Doom 2, Quake, Quake 2, Descent, Descent 2 and Duke Nukem 3D are pretty much the only games I've ever completed because Steam didn't exist and that was basically all there was to play at the time. Now it's easy to get stuck or bored and move on to something else.


You can play the episodes in any order, so I expect most people tried all of them.


Quake is/was a multiplayer game with a single player mode attached for initial training. Or, from a mod creators perspective, a 3D engine with a demo mode for what was possible. Qtest1, the “demo” that came out a few weeks before release, was multiplayer only.


That's sort of how things ended up, but I don't think it was the intention. Doom and Doom 2 were (and are) magnificent single player and co-op games, if you like that sort of thing, which I do (Doom was my lodestar when I was working as a gameplay programmer / designer on Soldier of Fortune).

The better single player levels in Quake are actually really intricate and well-designed single player levels too, in exactly the way that Doom levels were great.

Some of the enemy design in Quake is actually pretty good, too, with sharply distinguished silhouettes between enemies, and good, discrete gameplay-property differentiation between them - well, at least for the zombies, the fiends, and the shamblers. Some of the others are fine, too (scrags, grunts).

I think the bigger issue is they bit off way more than they could chew.

Specifically, Doom's single player mechanics thrive on having hordes of enemies interacting with highly interconnected levels in interesting ways, making space management a big part of the single player gameplay. Trying to figure out where you're safe to pick a fight, or how to steer the hordes around to create a space where it's safe to fight, is a big part of the draw. But, because rendering 3D monsters was so expensive compared to 2D sprites, Quake couldn't do hordes when it shipped, and fighting just a few enemies at a time (with their health highly jacked up) was a totally different experience. It could have been made to work, but they would have had to stray a lot further from the Doom gameplay recipe than they did.

And they also would have needed, I think, a lot more monster variety. A single player game with 30 different enemy types that were interesting and differentiated would have been much stronger.

That's my two cents, anyway.


Yeah, that's the first thing that drew my attention: if you normalise and then remove a component, the length of whatever you end up with now depends on the direction of the input vector.

I still find vector programming hard, though, so if I were writing anything like this I'd always visualise it as part of my testing - i.e. temporarily render out the other beams, and possibly the intermediate vector, as necessary (pretty much what they did in this video). That makes testing easier to verify and debugging way more intuitive.


To be fair, bug #1 was that the normalization call had no effect, so the fact that it’s also being done in the wrong sequence doesn’t actually have a further effect on gameplay.


It turns out you can do a lot of work with non-normalized normal vectors as long as you don't trip over the math that actually cares whether they're normalized.



