
Let's also not forget that Paul, as an employee of Intel, has an interest in boosting his company's strategic direction (many-core). I think this may explain the ostensibly inflammatory headline.

Nevertheless, you can still see what Paul is trying to get at: CS curriculums are woefully under-preparing their students for a parallel world.




If you believe it's going to be a parallel world in the future, that is. I think that's still very much up for debate.


Since clock speeds have pretty much hit a barrier, the only way to go faster will apparently be to leverage parallelism.

It's worth pointing out the "speed-of-light collides with 20+GHz serial processor" argument. See http://www.hppc-workshop.org/HPPC07/Vishkin_HPPC07.pdf


When I realized that by the time the light leaving my monitor reached my eyes, my CPU had already completed at least three more instructions, my appreciation for clock speeds hit a new high.
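A quick back-of-the-envelope check, assuming roughly 60 cm from eye to monitor and a 3 GHz clock (both figures are my own assumptions, not the commenter's):

    # rough sketch: how many clock cycles pass while light crosses ~0.6 m
    c = 3.0e8            # speed of light, m/s
    distance_m = 0.6     # assumed eye-to-monitor distance
    clock_hz = 3.0e9     # assumed 3 GHz CPU
    travel_s = distance_m / c       # about 2 nanoseconds
    cycles = travel_s * clock_hz    # about 6 cycles
    print(travel_s, cycles)

Roughly 2 ns and ~6 cycles, and a superscalar core can retire more than one instruction per cycle, so "at least three more instructions" is comfortably plausible.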


That hardware will continue to expose more levels of parallelism is almost certain at this point.

That most software will consciously need to exploit that parallelism is not as clear. It's possible that some applications will be able to ignore parallelism, but overall system performance can still be improved by scheduling multiple processes in parallel.


Exactly. Moreover, most programmers will simply use high-level primitives from some library where the concurrency is deeply buried, so they won't need to worry about it at all.

I suspect that people who talk about everybody writing concurrency themselves haven't thought about what future applications will use CPU power for. 3D graphics, AI, image and voice recognition... all of these can be encapsulated in some black box and used through a simple API. In fact, how many programmers are using complex APIs right now? I think it's a tiny fraction. It would be naive to think that the next generation will suddenly be full of highly skilled programmers.

Web apps are a clear example of heavily concurrent applications where concurrency can simply be ignored most of the time by most programmers.
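As a minimal sketch of that kind of API-level hiding (Python's concurrent.futures is my own choice of illustration; the fetch function and URLs are invented), the caller just writes a plain map and never touches threads or locks directly:

    # the concurrency lives inside the executor; the caller just maps a function
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    def fetch(url):
        # ordinary sequential-looking code, no locks or threads visible
        with urlopen(url) as resp:
            return len(resp.read())

    urls = ["http://example.com"] * 4           # placeholder URLs
    with ThreadPoolExecutor(max_workers=4) as pool:
        sizes = list(pool.map(fetch, urls))     # requests run concurrently
    print(sizes)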


Exactly. Moreover, most programmers will simply use high-level primitives from some library where the concurrency is deeply buried, so they won't need to worry about it at all.

And who writes the libraries? I'd suggest that if you're only gluing together libraries, much of a rigorous CS education is wasted anyway. You could do just fine with very little formal training in algorithm design and analysis, and a rather shaky understanding of the fundamentals of algorithms, if that's all the programming you do. You'd probably be better off with a trade school.

If you're going to be doing anything challenging in the programming domain, you'll want a good grounding in concurrency.


Wasn't it enough to completely miss my point? You also had to put that annoying "you" all over the place. Sigh.


What was your point then, if not that programmers won't need to know concurrency because it will be in black boxes they haven't written? (Bear with me, I'm a bit dense at the moment; I haven't had much sleep lately.)

I still think that programmers who are doing more than simply gluing together premade libraries will need to be familiar with concurrency, and that anyone taking a theoretical computer science degree just to graduate and glue together libraries is probably overqualified.


>what was your point then, if not that programmers...

No, not "programmers", but "some programmers" or "a lot of programmers". Of course there will always be people who have to do the hard part of whatever, but they are a minority now and I'm afraid they will still be a minority in the future.

Don't think that everybody is as smart as you or your buddies. No sarcasm, I really believe you get it better than my points ;-) In my experience the concurrency is always written by the same person (guess who), in the best case, that is.

That doesn't mean that I think it shouldn't be taught. Only that I'm skeptical it will solve anything.


I don't know if English is not your first language, but "you" in the context you are complaining about here does not mean YOU; it means "ANYONE".


It's still wrong. The point wasn't about individual perspective, but about what proportion of the programmer population does the hard work.


It's a less archaic way of saying "one": 'when one does X' vs. 'when you do X'.


And as more things are run on servers, and less on clients, do we need to go faster on the client?


It's not necessarily all about speed (or what theorists would refer to as single-task completion time). It's also about increasing throughput.

Google adds a pinch of concurrency to its web browser (each tab running in another thread) and it improves the client-side experience.


I don't think throughput is a barrier at the moment.

The main touted benefit I've heard of Chromium's per-tab processes is isolation, so if one tab crashes it doesn't bring the whole browser down.


Actually, I'd say the main benefit is that closing tabs actually releases memory because those processes die.


New computers are coming with gigabytes of RAM; I have 2. On the other hand, I rather enjoyed watching a QuickTime crash take down only one tab. I kept loading it repeatedly.


I have 4, and that means nothing when the browser keeps bloating up and taking gigs itself, because opening and closing lots of tabs all day long fragments the memory so badly that it can't release RAM back to the OS when you close some tabs.



In Chrome each tab is a separate process, which is mostly for stability (if one crashes it doesn't take down the whole browser).
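A toy sketch of that isolation idea using Python's multiprocessing (the crashing worker is just a stand-in for a misbehaving tab; none of this is Chrome's actual architecture):

    import multiprocessing, os

    def tab(n):
        if n == 2:
            os._exit(1)                  # simulate one tab crashing hard
        print("tab", n, "rendered fine")

    if __name__ == "__main__":
        procs = [multiprocessing.Process(target=tab, args=(i,)) for i in range(4)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        # the parent (the "browser") survives; only tab 2 died
        print([p.exitcode for p in procs])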


Pretty much anyone working with 3D, image processing, AI, or many simulation areas will be hoping for more speed for a few more years.


Extracting parallelism from rendering code is well-understood and has been for a decade or more. Intel is hyping all this as if it's something new, but really it's well-understood stuff being implemented inside one box instead of being spread across many.
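For example, per-row (or per-pixel) shading with no shared state splits trivially across workers; a rough sketch with Python's multiprocessing, where shade_row is an invented stand-in for real rendering work:

    # embarrassingly parallel rendering sketch: rows are shaded independently
    from multiprocessing import Pool

    WIDTH, HEIGHT = 320, 240

    def shade_row(y):
        # stand-in for real per-pixel shading; no shared mutable state
        return [(x * y) % 256 for x in range(WIDTH)]

    if __name__ == "__main__":
        with Pool() as pool:
            rows = pool.map(shade_row, range(HEIGHT))   # rows computed in parallel
        print(len(rows), "rows rendered")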


Maybe we do: faster client-side AJAX would create a much better user experience... but how much this is needed, I don't know. Maybe it doesn't really matter?

At any rate, speed can be increased substantially without faster hardware, but with: (1) efficient implementations of JavaScript (Chromium); (2) faster Flash (AS3 + AIR); (3) web-friendlier Java (JavaFX); or even Silverlight.

But the nice thing about faster hardware is that you get faster apps without rewriting or learning new tech - oops, unless that faster hardware is many-core[⁎]...

Another reason not to need many-core is that desktop apps are already fast enough for the staples (word processing, spreadsheets). Hence the rise of sub-notebooks, the iPhone, and the best-selling games console being by far the least powerful (the Wii).

[⁎] many-core (as opposed to multi-core) technically means heaps of processors - tens or hundreds. It is a qualitatively different situation from one thread per processor.


you need to stop fighting the future


Basically CS curriculums are woefully under-preparing students.


CS curriculums are woefully under-preparing students.



