Stanford CS course on digital photography (stanford.edu)
130 points by United857 on Aug 30, 2011 | 25 comments


Here's the Harvard Extension School digital photography course: http://tv.cse7.org/2010/fall/


That is cool. I am really liking the democratization of information and (to a lesser extent) knowledge that the Internet has brought.

Very neat to see this in the CS department, with the added technical aspects the art school would miss.


It's funny you say that about the added technical stuff. I was looking through it hoping that the Art dept. had done the same thing, with more of the artistic/compositional technique and less of the technical material.


Art departments almost always have photography classes (which, today, would be digital). Being a photographer, I've always wanted to know more about the algorithms used so I could bend them to my will more fully.

The meta-advantage of the CS one is that it makes CS seem practical: see, it's not just fiddling with computers.


As somebody interested in photography, this is really useful.

I'm wondering if anybody else is interested in working through the 'course'. We could do a lecture and an exercise each week, then set up some kind of discussion forum? I know it would probably improve my photography a lot.


You could check out the Reddit photo class - http://www.reddit.com/r/photoclass/ - lots of people there do it together to get a better "learning experience".

There is a version 2.0 going on right now too, currently on lesson 4 posted Aug. 12 - http://www.reddit.com/r/PhotoClassRedux/


I don't think it's going to make you a better photographer necessarily, but maybe you'll come out a better editor.


How so? I took a (very) quick look through the lecture notes and exercises and they seemed to relate mainly to the art of taking photos. The first exercise seems to be about learning to use aperture, focus, and shutter speed to produce different effects. The first lecture is about "natural & linear perspective, pinholes and lenses, aperture, shutter, motion blur, depth of field, ISO" which is pretty much what I would expect of a photography course.


Right. Well, with a digital camera you can probably figure out all of that on your own just by futzing around with the camera for a weekend.

The difficult part comes in understanding light and understanding composition, neither of which the course really seems to talk about.

No one has ever needed to know about 3D colour spaces ( http://graphics.stanford.edu/courses/cs178-11/applets/locus.... ) in order to take a good picture. However, good editing is only about 30% of the effort behind a good photo; you might come out a better EDITOR, but your photos won't necessarily be that much improved.

Anyways, the secret is to look at what other people are doing and to take as many pictures as possible.


I would suggest that this is much more about the science of taking photos than the art of taking photos.


The HDRI shot at the top of the site is very nice! But I don't understand why these shots are still not possible with current cameras. If you take a picture with the longest exposure time, wouldn't it be possible to dim the brightest areas using a formula?


If you overexpose that part of the sensor, it simply records the maximum value. Then when you "dim" it later, you just get a blank grey area. In order to extract detail, you have to maintain some contrast between "bright" and "really bright", so you have to keep the "bright" area below the maximum value of the sensor.
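To make that concrete, here's a tiny NumPy sketch of the clipping effect (purely illustrative: a hypothetical 8-bit sensor and made-up radiance values, not a real sensor model):

    import numpy as np

    # Made-up scene radiances: a "bright" region and a "really bright" region.
    scene = np.array([0.8, 3.0, 5.0])   # arbitrary linear radiance units

    exposure = 100.0                     # long exposure multiplier
    raw = scene * exposure               # what the sensor would ideally record

    # An 8-bit sensor clips everything above its maximum code value.
    clipped = np.clip(raw, 0, 255)
    print(clipped)                       # [ 80. 255. 255.] -- the two bright areas are now identical

    # "Dimming" afterwards cannot separate them again; the contrast is gone.
    dimmed = clipped / exposure
    print(dimmed)                        # [0.8  2.55 2.55] instead of [0.8 3.0 5.0]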


But doesn't a camera sample pixels at a given rate? So let's say a camera can read image data from the sensor at 100 Hz and you store this over the exposure time; then it shouldn't matter how bright the areas are. But I could be completely wrong. Maybe I'm confusing frame rate (video capability) with the sensor's burst rate.


I'm not entirely sure what you mean by sampling pixels, but the parent's post is saying that some areas are very bright while some areas are very dark, as you see in the HDR shot. The sensor doesn't know which of the two light levels you want to adjust for, so it makes a best guess based on your camera settings.

Example: while our eyes are much more advanced than any camera, think of the effect of having a giant spotlight shined directly* into your eye: you can see the light, but everything else is going to look much darker in comparison. If you look at it long enough, eventually all you'll see is the spotlight, and everything else around you will be dark. Much like your eye, this is exactly what happens to your picture if you make the exposure time long enough.. and it doesn't take very long; even 0.25 or 0.5 seconds can ruin a picture taken outdoors.

To answer your original post: a lot of DSLRs nowadays DO have a feature for this, called "bracketing" mode, where the camera takes the same picture at low, mid-range, and high exposure levels (hence the "high dynamic range" name). From there, you combine the three pictures (or more) in Photoshop or other software and get the resulting image that you see. In the underexposed frame, the bright background keeps its detail while the foreground appears nearly black; the middle frame shows both areas, but with not-too-pleasing tones; and the overexposed frame reveals the foreground while the background washes out to white.

*personal experimentation from long ago has shown that this isn't advisable :)

Edit: I should add that bracketing is just a convenience feature that takes the pain out of setting the exposure time and aperture (how wide open the lens is when shooting) by hand; these kinds of HDR shots are perfectly possible with any camera (yes, even the pocket-sized ones) if you have control over those settings. Knowing how to use the camera is key (along with keeping it perfectly still).
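For what it's worth, the "combine in Photoshop or other software" step can also be done in a few lines of code. A rough sketch using OpenCV's Mertens exposure fusion (the filenames are placeholders for your own under-, normally- and over-exposed shots; this is just one of several possible ways to merge a bracketed set):

    import cv2
    import numpy as np

    # Placeholder filenames: your own bracketed shots (same scene, tripod, exposure varied).
    files = ["under.jpg", "normal.jpg", "over.jpg"]
    images = [cv2.imread(f) for f in files]

    # Mertens exposure fusion blends the well-exposed parts of each frame;
    # unlike a full HDR pipeline it needs no exposure times and no tone mapping.
    merge = cv2.createMergeMertens()
    fused = merge.process(images)                  # float image, roughly in [0, 1]

    result = np.clip(fused * 255, 0, 255).astype("uint8")
    cv2.imwrite("fused.jpg", result)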


"I'm not entirely sure what you mean by sampling pixels"

I think I'm confusing some things. I can understand that each pixel of a sensor outputs values from, let's say, 0 to 1 V, where 0 V is dark and 1 V is light. So you need a shutter time, because otherwise the sensor will only output values of 1 V.

I was thinking today's cameras could sample these values at a very high frame rate, so that if you sample them over time and store them, you can expand the range. But it seems like "bracketing" is doing something similar. I did not know that. Thanks for the info!
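For anyone curious, that intuition is essentially what multi-exposure techniques exploit: if you could read the sensor out repeatedly before any single read clips, summing the short reads would preserve the full range. A toy NumPy illustration with made-up numbers (not a model of a real sensor, which normally integrates and reads out once per exposure):

    import numpy as np

    scene = np.array([0.5, 20.0, 80.0])      # per-pixel light levels, arbitrary units
    full_well = 100.0                        # value at which a single read clips

    # One long exposure: the bright pixels saturate and become indistinguishable.
    single = np.clip(scene * 10, 0, full_well)    # [  5. 100. 100.]

    # Ten short exposures, each short enough not to clip, summed afterwards:
    short = np.clip(scene * 1, 0, full_well)      # [ 0.5 20.  80. ] -- no clipping
    summed = short * 10                           # [  5. 200. 800.] -- full range preserved
    print(single, summed)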


I've always found this interesting too, especially when a RAW file grows in size with exposure time... doesn't that indicate it is "sampling" the sensor?


Of particular interest to me was their open platform for computational photography including the Frankencamera and the FCam API:

http://graphics.stanford.edu/projects/camera-2.0/

A free paper published in IEEE Computer Graphics and Applications:

http://graphics.stanford.edu/papers/camera20/


This is really good stuff! I love the first lesson: "Bad Photos". Having taken a photography class and dabbled in photography myself, I think that is one of the best ways to start getting to know the capabilities of your camera.

After working through the assignments you should have a pretty solid portfolio.


I recommend Dick Lyon’s lectures to Google engineers a few years ago about “photographic technology”. http://www.dicklyon.com/phototech/


On a slightly related note, does anyone know of a good online course on pencil sketching? Google didn't help me.


If you find one I would be interested, as I've also searched for this without success.


Try digging around here: http://forums.cgsociety.org/ Go down to General Techniques and look around. It's mostly focused on CG art, but there's quite a bit of traditional work too (mostly anatomy/figure drawing, though there's more besides). Not exactly a class, but they have workshops that start up from time to time that you can follow along with.


In all honesty, I do not think a recreational photography class should be part of the official curriculum.

Seriously, the Harvard course advertises itself with "what the difference between sports mode and portrait mode on the camera's dial is"

I love photography, and I think it is good that you can take classes and have fun (even though you can easily learn much of the course content yourself), but for academic credit at a school of engineering, no.


I think it's good that students get the chance to learn some things outside their immediate subject area. Some experience in the more creative areas such as photography and design can't hurt, even for a CS student. You never know when it will come in handy.


Sure, I have taken many elective classes, including a couple at a military academy. The latter help me understand the evening news a lot better. But my degree was in engineering, and as such, all these other useful classes did not count as an official part of my curriculum. And rightly so, I believe.



