Worth noting here is the patent issue with Creative.
John Carmack said [1]:
"The patent [2] situation well and truly sucks. We were prepared to use a two-pass algorithm that gave equivalent results at a speed hit, but we negotiated the deal with Creative so that we were able to use the zfail method without having to actually pay any cash. It was tempting to take a stand and say that our products were never going to use any advanced Creative/3dlabs products because of their position on patenting gaming software algorithms, but that would only have hurt the users."
Here is the patent work-around in the Doom 3 source code. Check out the RB_T_Shadow method in draw_common.cpp at line 1151 [3]:
// patent-free work around
if ( !external ) {
    // "preload" the stencil buffer with the number of volumes
    // that get clipped by the near or far clip plane
    qglStencilOp( GL_KEEP, tr.stencilDecr, tr.stencilDecr );
    GL_Cull( CT_FRONT_SIDED );
    RB_DrawShadowElementsWithCounters( tri, numIndexes );
    qglStencilOp( GL_KEEP, tr.stencilIncr, tr.stencilIncr );
    GL_Cull( CT_BACK_SIDED );
    RB_DrawShadowElementsWithCounters( tri, numIndexes );
}
Attention readers: For some reason I cannot comprehend, the pages use JavaScript to load some images. There are no noscript elements notifying you of this, so enable JavaScript to read through this.
Attention readers: For some reason I cannot comprehend, people use the internet in 2012 with JavaScript and cookies disabled and then whine when some things don't work.
Poor[1] JavaScript can hamper accessibility. Some people need to get at content in a way they can access, and sometimes that means turning off bits of JavaScript: to let their assistive-tech browser find the content, to let their hardware find the links for clicking, to keep the user experience close to what they're used to, and so on.
Computers in 2012 are, for many users, less powerful than a few years ago: many (most?) people are on tablets or smartphones, and thus on ARM processors with low clock speeds. Internet connections vary just as much. People may have nice fibre-optic broadband with low contention, or they may be stuck on dial-up, or in a country like Australia with very expensive connections, or on mobile connections with poor bandwidth and high charges.
[1] I'm not saying this website's javascript is poor.
That's true. It seems like javascript disabling is just not on the minds of web developers. You have to be a special kind of masochist to use noscript these days... I used it for years, but ghostery covers most of what I liked about it.
Well, I guess the overall improvement must be worth images not loading on this page and the whining on HN. So, what's the problem again? Oh, you think your setup should be catered to as an edge case? Sorry, non-standard setups are often not worth the effort. If you want it to work, just enable scripts; it's not that hard.
I did know what you meant by noscript, just using the plugin as an example of non-javascript users :-). No cookies? You are a masochist. Another reason I like ghostery -- it blocks the cookies that I don't like.
I figured it out after about 15 seconds, from seeing one of the images load on demand, which is about as long as any self-respecting computer programmer should need.
I load links as background tabs and visit them later when I am on a slow connection. When sites do things like this I end up viewing the site for the first time several minutes after clicking on it only to find that I had been waiting for nothing.
Right, I find myself in a position like yours every now and then. Now I understand: you can't possibly know you have to force the images to load, because you haven't looked at the tab yet.
We only just found out last year why it was problematic: on Windows, OpenGL can only safely draw to a window that was created by the same thread. We created the window on the launch thread, but then did all the rendering on a separate render thread.
I actually thought this was well known and documented as I've known it for years. Guess not :)
AFAIK you can use OpenGL from other threads, but you must make the context current on that thread, which I believe means you're still only rendering from one thread at a time (and possibly this applies only to OpenGL 3+?).
In my own multithreaded OpenGL code, my main thread always became the render thread after starting up the other subsystems. Well, render thread and input gathering thread (as that often needs to be done in the main thread too - at least in SDL).
It may be not so much "well known" as universally observed without thinking about it. I made the same mistake id did, using SDL in fact, spawning a separate thread for SDL calls. I beat my head against it for much longer than necessary and finally ended up in the SDL IRC channel, where someone firmly informed me to stop making calls from multiple threads. When I told them I was making all SDL calls from my rendering thread, including SDL_Init, as advised by the documentation, their response was basically that nobody else had ever had this crazy idea before, every game does rendering and UI interaction from the main thread (never mind that I wasn't writing a game and had no user interaction), and I shouldn't go around trying weird complex stuff until I learned the basics.
They didn't argue that the restriction was obvious or well-known, only that it didn't need to be documented because good programmers never thought of violating it. (I feel more than a little vindicated with Carmack in my corner :-) ) Strangely enough, they also implied that it might be fixed in the future, and now the SDL docs imply that what I did would work [1], though I wouldn't bet on it because they have a FAQ item that seems to say otherwise [2].
It's something for people to keep in mind when they're writing documentation: document the limitations of your software even if you can't imagine why someone would violate them. People coming from a different background, such as a non-game programmer picking up a game library for some simple animation, might approach your software with different assumptions.
Both of the links you posted say that video/event functions should not be called from different threads:
Don't call SDL video/event functions from separate threads
most graphics back ends are not thread-safe, so you should only call SDL video functions from the main thread of your application
The second one specifically says from the main thread, while the first only says "separate threads". I was always under the illusion that it didn't matter which thread, as long as it's the one you call SDL_Init from. Your experience shows otherwise...
I agree - documentation should be clear about limitations and assumptions.
I really like Fabian's source code reviews. Would love to see engines like Unreal open-sourced, to compare how they work internally. It would be a nice perspective. E.g., Unreal was much better at outdoor scenes, and I think the maps were not as heavily preprocessed as in Quake et al. What was their way of tackling the problem?
If there are any resources that describe the technical details, it would be nice if someone could post a link.
Visual Studio has an extension engine, and you can download many extensions for free. Go to Tools > Extension Manager and browse away. There's one called Productivity Power Tools that we just install by default.
Also, Xcode has a nice feature where you can switch between .h and .c files with CMD+Up. This is missing from VS, but you can add a macro and map it to a key; for me it's ALT+O. See here:
He mentioned it just for code browsing, so don't panic. Nobody in their right mind would think Xcode compares to Visual Studio. I mean, it's 2012 and there is still no refactoring support.
A couple of those features Xcode implements using code completion. It looks like Whole Tomato may be slightly better, but your comment is clearly wrong.
I get the point you are making and to a certain degree, I agree with you.
Having said that, they still do get cash from the engine, through existing licensing deals.
I haven't checked the specific license that Doom 3 was released under, but it's common for code to be released under a free license for non-commercial use, with a paid license required for commercial variants.
The engine (idTech4) is released under GPL. Thus, you can use it for commercial use without paying anything. I guess you can also still buy a non-GPLed version.
Derivative work that uses the engine must be GPL'd too. That is bad for commercial products that intend to be closed source; I guess those are the commercial products that buy the engine.
The blog author goes full nerd, like (paraphrasing) "You did this genius thing making a frontend/backend pipeline for rendering. Was this inspired by LCC? What are the advantages over a monolithic renderer?" and Carmack answers, practically, "Oh, it didn't help that much, because an inexplicable quirk of OpenGL meant it only worked well on my developer machine."
>In some part of the code (see dmap page) there are actually more comments than statements.
>Dmap source code is very well commented, just look at the amount of green: There is more comments than code !
You know, in my experience that's not a good thing. I work on similar, heavily-commented code and find it extremely painful. At some point it becomes a burden to see the code behind the comments. (And just so no one misinterprets me: I am not against comments /per se/.)
It's like when you read code written by someone who simply loved whitespace and who appended a useless "banner comment" after each real comment[1]:
// If x is less than 3, do stuff 50 times.
// -----------------------------
if ( x < 3 )
{
    // While i goes from 0 to 50.
    // --------------------------
    for ( int i = 0; i < 50; i++ )
    {
        // Do stuff.
        // ---------------
        doStuff ( ) ;
    }
}
So what would've fit in one screen of text, if written in a sensible fashion, now requires one and a half screens and lots of scrolling.
[1] The comments in the example are actually both crappy and pointless. Sadly, the program I work on is riddled with them. Please don't write out what the programming language constructs do in English.
At least in that case you can attribute it to well-intentioned stupidity. Here's my favourite short example you must instead attribute to malice: [1] a closed-form implementation of fibs(n). Follow along with the comments!
I was taught to write programs in "pseudo code" before actually coding, pseudo code being "English". What your example looks like to me is pseudo code turned into comments, with the real code written underneath.
Not sure how that helps or hinders; it's just an observation.
Though I no longer have the link handy, one of my favourite examples of this was an OAuth2 'library' for PHP. You couldn't make out the code for the endless comments. Javadoc-style comments and annotations, actual comments... there were paragraphs upon paragraphs of comment for individual class properties that you could reasonably infer from their names, if the naming was done well.
It got to the point where the comments were so lengthy and prosaic, you were deterred from reading them just by their very existence.
Unfortunately, bad comments like this lead people into thinking that all or most comments are useless.
It's true that I don't want to see a comment like "loop through this 50 times". But what may not be obvious is the purpose of the loop, the significance of the number 50. Putting in "why" can save hours.
The only exception, where I would want a comment that just says what a line of code does, is for a really complex line, like a complicated regex for example.
I agree with the point about comments being useful for a really long complicated regex, but any really long complicated regex is a coding problem to begin with. Really, instead of comments (or in addition to comments), the regex should be broken up into logical pieces separated by white-space, just like regular code is. Jeff Atwood explains it well in this classic post: http://www.codinghorror.com/blog/2008/06/regular-expressions...
The problem is that comments are often only good the first few times you read them. After heavily hacking on some code for a while, you know the comments. I find the ability to fold comments very valuable.
Yes, I admit that in code reviews, staleness of comments is a thing that tends to get pointed out to me. However, I've since developed a procedure (reviewing the comments belonging to any hunk of changes) to prevent that.
[1] http://newenthusiast.com/carmacks-reverse-still-an-issue-200...
[2] The patent number 6,384,822, "Method For Rendering Shadows Using A Shadow Volume And A Stencil Buffer", can be read here: http://www.google.com/patents/about?id=Om0LAAAAEBAJ
[3] https://github.com/TTimo/doom3.gpl/blob/master/neo/renderer/...
Further reading on the general topic: https://en.wikipedia.org/wiki/Shadow_volume