"Sequential programming is dead. So stop teaching it" (intel.com)
52 points by jaydub on Nov 13, 2008 | 98 comments



Aside from the inflammatory headline, what the article seems to be saying is "teach more about concurrent programming during early software programming courses," which seems reasonable enough. Like pointers, concurrent programming concepts seem to be a tough pill for some CS students.


Let's also not forget that Paul, as an employee of Intel, has an interest in boosting his company's strategic direction (many-core). I think this may explain the inflammatory headline.

Nevertheless, you can still see what Paul is trying to get at: CS curriculums are woefully under-preparing their students for a parallel world.


That's if you believe it's going to be a parallel world in the future. I think that's still very much up for debate.


Since clock speeds have pretty much hit a barrier, the only way to go faster will apparently be to leverage parallelism.

It's worth pointing out the "speed-of-light collides with 20+GHz serial processor" argument. See http://www.hppc-workshop.org/HPPC07/Vishkin_HPPC07.pdf


When I realized that by the time the light leaving my monitor reached my eyes, my CPU had already completed at least three more instructions, my appreciation for clock speeds hit a new high.
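(Back-of-the-envelope, assuming a ~3 GHz clock and a monitor about half a metre from your eyes; both numbers are assumptions, but the orders of magnitude are what matter:)

    -- How many clock cycles elapse while light crosses ~0.5 m?
    main :: IO ()
    main = do
      let c     = 3.0e8   -- speed of light, m/s
          dist  = 0.5     -- assumed monitor-to-eye distance, m
          clock = 3.0e9   -- assumed clock rate, Hz (~3 GHz)
      print ((dist / c) * clock)  -- prints 5.0, i.e. about five cycles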


That hardware will continue to expose more levels of parallelism is almost certain at this point.

That most software will consciously need to exploit that parallelism is not as clear. It's possible that some applications will be able to ignore parallelism, but overall system performance can still be improved by being able to schedule multiple processes in parallel.


Exactly. Moreover, most programmers will simply use high-level primitives from some library where the concurrency is deeply buried, so they won't need to worry about it at all.

I suspect that people who talk about everybody writing concurrent code themselves haven't thought about what future applications will use CPU power for. 3D graphics, AI, image and voice recognition... all of these can be encapsulated in some black box and used through a simple API. In fact, how many programmers are using complex APIs right now? I think it's a tiny fraction. It would be naive to think that the next generation will suddenly be full of highly skilled programmers.

Web apps are a clear example of heavily concurrent applications where the concurrency can simply be ignored most of the time by most programmers.


Exactly. Moreover, most programmers will simply use high-level primitives from some library where the concurrency is deeply buried, so they won't need to worry about it at all.

And who writes the libraries? I'd suggest that if you're only gluing together libraries, much of a rigorous CS training is wasted anyway. You could do just fine with very little formal training in algorithm design and analysis, and a rather shaky understanding of the fundamentals of algorithms, if that's all the programming you do. You're probably better off with a trade school.

If you're going to be doing anything challenging in the programming domain, you'll want a good grounding in concurrency.


Wasn't it enough to completely miss my point? You also had to put that annoying "you" all over the place. Sigh.


What was your point then, if not that programmers won't need to know concurrency because it will be in black boxes they haven't written? (Bear with me, I'm a bit dense at the moment; I haven't had much sleep lately.)

I still think that programmers who are actually doing more than simply gluing together premade libraries will need to be familiar with concurrency, and that anyone taking a theoretical computer science degree just to graduate and glue together libraries is probably overqualified.


>What was your point then, if not that programmers...

No, not "programmers", but "some programmers" or "a lot of programmers". Of course there we'll be always people that has to do the hard part of whatever, but it is a minority now and I'm afraid it will still be a minority in the future.

Don't think that everybody is as smart as you or your buddies. No sarcasm, I really believe you understand this better than my points convey ;-) In my experience the concurrency code is always written by the same person (guess who), in the best case, that is.

That doesn't mean that I think it shouldn't be taught. Only that I'm skeptical it will solve anything.


I don't know if English is your first language or not, but "you" in the context you're complaining about here does not mean YOU, it means ANYONE.


It's still wrong. The point wasn't about individual perspective, but about what proportion of the programmer population does the hard work.


It's a less archaic way of saying "one": 'when one does X' vs. 'when you do X'.


And as more things run on servers and fewer on clients, do we need to go faster on the client?


It's not necessarily all about speed (or what theorists would refer to as single-task completion). It's also about increasing throughput.

Google adds a pinch of concurrency to its web browser (each tab running in its own process) and it improves the client-side experience.


I don't think throughput is a barrier at the moment.

The main touted benefit I've heard of for Chromium's per-tab processes is isolation, so if one tab crashes it doesn't bring the whole browser down.


Actually, I'd say the main benefit is that closing tabs actually releases memory because those processes die.


New computers are coming with gigabytes of RAM; I have 2. On the other hand, I rather enjoyed watching a QuickTime crash take down only one tab. I kept loading it repeatedly.


I have 4; that means nothing when the browser keeps bloating up and taking gigabytes itself, because opening and closing lots of tabs all day long fragments the memory so badly that it can't release RAM back to the OS when you close some tabs.



In Chrome each tab is a separate process, which is mostly for stability (if one crashes it doesn't take down the whole browser).


Pretty much anyone working with 3D, image processing, AI, or in many simulation areas will be hoping for more speed for a few more years.


Extracting parallelism from rendering code is well understood and has been for a decade or more. Intel is hyping all this as if it's something new, but really it's the same well-understood stuff being implemented inside one box instead of spread across many.


Maybe we do: faster client-side AJAX would create a much better user experience... but how much this is needed, I don't know. Maybe it doesn't really matter?

At any rate, speed can be increased substantially without faster hardware, but with: (1) efficient implementations of JavaScript (Chromium); (2) faster Flash (AS3 + AIR); (3) web-friendlier Java (JavaFX); or even Silverlight.

But the nice thing about faster hardware is that you get faster apps without rewriting or learning new tech... oops, unless that faster hardware is many-core[⁎].

Another reason not to need many-core is that desktop apps are already fast enough for the staples (word processing, spreadsheets). Hence the rise of sub-notebooks, the iPhone, and the best-selling games console being by far the least powerful (the Wii).

[⁎] many-core (as opposed to multi-core) technically means heaps of processors: tens or hundreds. It is a qualitatively different situation from one thread per processor.


you need to stop fighting the future


Basically CS curriculums are woefully under-preparing students.


CS curriculums are woe


I'm not sure how you can talk about concurrency without talking about locks and other such topics, which I think is very ambitious for new programmers.

However, if universities start teaching programming with pure functional languages (which is highly unlikely), concurrency becomes a much easier topic for discussion.

Perhaps this is the sort of thinking that will lead to more universities adopting something like PLT Scheme, which, in recent versions, has moved the notion of mutable pairs into a library. Doing this might bring functional languages out of academia and more into the mainstream, which would be fantastic.

I remember fellow students having trouble grasping pointers, even after a 2nd year architecture course, which I thought was completely absurd since they were writing assembly code without tremendous strain.


Often when competent programmers are having trouble grasping pointers, they are actually doing fine with thinking about pointers in the abstract, but having trouble with the notation describing a particular instance of them. (Especially with C programs.)

One of the things that doing good OO and following the Law of Demeter does for you is to reduce the levels of indirection you have to deal with to one or two.


From what I've seen helping people I know handle pointers, they really are having trouble with the concept of pointers. Getting them to draw the box diagrams with pointers correctly is challenging. The notation is an extra challenge, but it's not the main difficulty.

It's not the notation, it is the concept.


The only time I've seen difficulty with pointers in the abstract is when helping people in intro programming classes in C. Once you're past that level, programmers get it, but they get confused by what the code is actually saying.


What really helps in that situation is a good debugger that lets you quickly look at blocks of memory (e.g. what the pointer references). Obviously it should be stable enough to handle bad pointers (giving an error message instead of crashing is nice).

The old CodeWarrior debugger was great for this. Lots of windows you could pop up for every in-memory object you cared about. sigh


You should take a look at ddd. I've never used CodeWarrior, but from your description it sounds a lot like ddd.


ddd's nice, but CW made it a lot easier to stay close to the machine-level. Nothing like a good window'd hexdump to browse :-)


I'm not sure how you can talk about concurrency without talking about locks ... which I think is very ambitious for new programmers.

I agree. I don't think the full treatment should start in CS1, but you can use the Dining Philosophers problem in CS1 to introduce the ideas of locks and starvation.
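For concreteness, here's a minimal sketch of that setup (in Haskell, though any language with threads and locks would do): each fork is a lock, and the naive grab-left-then-right strategy is exactly what lets you demonstrate deadlock and starvation in class.

    import Control.Concurrent
    import Control.Monad

    -- Each fork is an MVar used as a lock.
    philosopher :: Int -> MVar () -> MVar () -> IO ()
    philosopher n left right = forever $ do
      takeMVar left        -- pick up the left fork
      takeMVar right       -- pick up the right fork; if everyone is holding a
                           -- left fork at this point, nobody proceeds: deadlock
      putStrLn ("Philosopher " ++ show n ++ " eats")
      putMVar right ()
      putMVar left ()

    main :: IO ()
    main = do
      forks <- replicateM 5 (newMVar ())
      forM_ [0 .. 4] $ \i ->
        forkIO (philosopher i (forks !! i) (forks !! ((i + 1) `mod` 5)))
      threadDelay 2000000  -- let the table run for a couple of seconds

Fixing it (say, by numbering the forks and always acquiring the lower-numbered one first) and then asking which philosopher eats least is the locks-and-starvation discussion in miniature.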


I'm not sure how you can talk about concurrency without talking about locks

that is because you have slept through the entire revolution of share-nothing concurrency


I don't see how share-nothing concurrency solves the "problem" of learning about locks. While locks are not a problem in a share-nothing architecture, they are in other concurrency setups (you know, like threads?)


I think his point was that you don't need to learn about locks to learn about concurrency, which is a valid point. The problem with the point, at least as I see it, is that first-year programming classes (at this point in time, at least) do not teach languages that have concurrency models that really support it, nor do they eliminate the problem of synchronization by providing immutable data structures.

Just this afternoon I heard Guy Blelloch give a talk about parallel thinking, and he seems to support the idea of teaching parallel programming throughout the curriculum almost instead of sequential programming, making sequential thinking the oddity. I think this makes sense, but it obviously has some flaws...


*do not teach languages that have concurrency models that really support it*

I don't know that I agree with that at all. My first college CS class taught Java, which definitely supports concurrency. It's not Erlang-like concurrency, but it's concurrency nonetheless.

There's a trend toward teaching Python as a first language, and that supports it as well.


Please elaborate.


search "shared nothing concurrency"...read sutter's "the end of the free lunch"...


That's ridiculous. Programming is not all about multi-core performance. Also, programs do not all need to directly implement parallelism to be performant. The architecture can support parallel processing with sequential languages. A great example of that is Rails, when set up with mongrel processes, each of which can run on its own core if necessary.

Hyperbole, hyperbole, hyperbole <-- summary of this article.


This isn't really true; you can't just start spinning up multiple mongrels with no consequence to your application. If this were true, there would be no need for locking the database in your application code for a transaction. Plus you can only spin up so many mongrels before your database performance starts to suffer.


That's irrelevant to the discussion at hand - if anything, it goes to show that parallelism is already integrated into the rails architecture via transactions.


It's not built into Rails transactions; you explicitly have to lock tables/rows (depending on the db/engine you're running) inside transactions to avoid conflicts. We ran into this problem with Poll Everywhere when we had to update a counter cache column. We had something along the lines of

Poll.transaction { increment!(:results_counter) }

This worked fine with one mongrel in our dev and test environments but when we threw that out to our cluster of mongrels, we got all sorts of locking errors when a torrent of people would vote. To resolve the issue we had to add:

Poll.transaction{ lock!; increment!(:results_counter); }

If this isn't a bottleneck or leaky abstraction then I don't know what is. Locks are ugly and I consider them a hack. In our case an RDBMS probably isn't the best data store solution.


you should stop posting until you take classes in operating systems and compilers. really


You're a condescending fuckwit.


who is trying to make you less dumb


Since when does a decent database lock tables for concurrent transactions?


Right, I just hope in a year or two when we get 64 core machines they will sell RAM in the petshop by the kilo so we can feed it cheaply to those 256 hungry mongrels ;-)


That's ridiculous. Programming is not all about multi-core performance.

it will be. you're getting beaten over the head by chip designers telling you that your future cpu is going to consist of a (possibly large) array of processing cores with a high-capacity bus connecting them. they are telling you this is the only way they can give you higher performance. you had better start believing them because these systems are starting to get delivered now.

A great example of that is Rails, when set up with mongrel processes, each of which can run on its own core if necessary

?????? so mongrel comes with its own OS kernel that has better support for multicore than linux and freebsd? wow!! coolzzz!


?????? so mongrel comes with its own OS kernel that has better support for multicore than linux and freebsd? wow!! coolzzz!

Hi. You may not have noticed it, but this site is not Reddit. Please try to keep commentary like this to a minimum, where 0 is the minimum.

Anyway, please also read the posts you are replying to. They are saying that many applications get concurrency "for free", since the database library handles the concurrency for them. Yes, this can be a bottleneck, but it is a fundamental problem with the notion of locking. If you want maximum performance, don't lock. If you want absolute data integrity, you have to lock. That's a problem.

Concurrency should definitely be a part of CS programs, but Intel's thread library isn't the way to do it. CL-STM or Haskell's STM would be much better.
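As a taste of the alternative, here's roughly what the shared-counter problem from the Rails subthread above looks like with Haskell's STM (a minimal sketch: no explicit locks, conflicting transactions simply retry):

    import Control.Concurrent
    import Control.Concurrent.STM
    import Control.Monad

    -- Increment a shared counter from many threads with no explicit locks.
    main :: IO ()
    main = do
      counter <- newTVarIO (0 :: Int)    -- the shared count
      pending <- newTVarIO (100 :: Int)  -- workers still running
      replicateM_ 100 $ forkIO $ atomically $ do
        readTVar counter >>= writeTVar counter . (+ 1)
        readTVar pending >>= writeTVar pending . subtract 1
      atomically (readTVar pending >>= check . (== 0))  -- wait for all workers
      readTVarIO counter >>= print                      -- always 100, no lost updates

The atomicity you otherwise have to buy with an explicit lock comes from the transaction itself.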


> you're getting beaten over the head by chip designers telling you that your future cpu is going to consist of a (possibly large) array of processing cores with a high-capacity bus connecting them. they are telling you this is the only way they can give you higher performance. you had better start believing them because these systems are starting to get delivered now.

If your chip designers are telling you that they're building a large array of processing cores connected with a high-capacity bus, you need to get some new chip designers.

If you've got a bus that can actually support a modest number of cores, your cores are too wimpy and should be built with whatever was used for the bus.

More likely, you actually have a saturated bus that is the system bottleneck, so your cores are spending most of their time waiting for access.

There is no silver bullet. Many problems are bound by bisection bandwidth. The more cores, the worse the problem. You end up devoting proportionally more space and power to communication as you increase the number of processors.


"you're getting beaten over the head by chip designers telling you that your future cpu is going to consist of a (possibly large) array of processing cores with a high-capacity bus connecting them."

Sorry, but I don't buy that. We're also moving to a thin client world where we don't actually need that much power on our thin clients.

Of course the chip makers are saying that; they want to sell more chips. They have to come up with some other number they can increase.


Can you envision a world where your "thin client" is powered by 25 cores clocked at a low speed like 100 MHz? Why not?

Do you really think performance doesn't need to be increased from the current status quo? The number of cores will only go up from here.


No I can't. That would be simply ridiculous. The thirst for power is not infinite. At some point, most people will have enough power to do everything they need (unless they're using, say, Windows, which will always require 10 times more computing power than the previous version).


And 640 kilobytes of RAM should be enough for anybody.

There will always be an increase in power demand. To think otherwise is short-sighted. If you could keep what we have now or have a sentient computer sitting on your desk, which would you choose?


Personally? I'd keep what I have. There is nothing better or more rewarding than squeezing more performance out of fixed, limited hardware. Where is the fun if you can just buy twice as many servers? The day hardware is free is a very, very sad day for programmers.

Of course there will be massive demands in the world of servers, research, gaming etc, but that's not everything.


"The day hardware is free, is a very very sad day for programmers."

It may be a sad day for those of you who program primarily for the challenge, but for those of us who want to get stuff done, it'll be a joyous day. :)


"Sorry, but I don't buy that. We're also moving to a thin client world where we don't actually need that much power on our thin clients."

Yeah, we've heard it before, several times, from the moment that networking was invented and onwards. Why will the push for this stick _this_ time around?


Sorry, but I don't buy that

then you clearly aren't buying new high-end servers for data crunching either, because these are already multicore


As surprising as that may sound, 99% of developers out there are not buying new high-end servers for data crunching, no.


what is in the cpus of those machines will be in the cpus of all machines. jesus, how much more legit can you get than intel telling you this is coming?


Intel may try to sell many-core but who wants to buy it?


you. you aren't going to be given the choice, these are coming to a laptop near you...like on your lap


So you're saying that no one will ever sell lower-cost single-core laptops anymore? Hrmmm, I'm skeptical.


Um... you consider a chip maker telling you that you need to buy new chips "legit"?


No, but I consider a chip maker telling me what kind of chips they're going to be making legit. Especially when all the other chip makers are saying the same thing (All the other chip makers: AMD, Freescale, Intel, Sun, Marvell, and others)


Itanium


"?????? so mongrel comes with its own OS kernel that has better support for multicore than linux and freebsd? wow!! coolzzz!"

In fact, quite the opposite. Handling concurrency by having multiple share-nothing processes relies on the OS to handle the scheduling and core assignment.

edit: "real" (system) processes.


uh, yeah, i know that. i thought the "coolzzz" would convey my sarcasm


Your sarcasm implies that mongrel's scheduling and IPC are inferior to the OS's, which is not the case.


I don't write OS kernels, I write Rails applications. I don't give a rat's ass about the OS kernel. I don't even give a rat's ass about how Mongrel is programmed. The Rails applications themselves don't need to be altered to run in a multi-core environment. Mongrel naturally scales to as many servers or CPUs as you want to run it on, since there is no interaction between different Mongrel instances; all that happens at the database.

Let me make that point even clearer: I don't give a shit how the database has been programmed. Someone there has obviously had to think about parallelism, but I don't need to, because I'm not writing a fricken database.

Got it?


I don't write OS kernels, I write Rails applications. I don't give a rat's ass about the OS kernel

then stop spouting off uninformed comments about how processes are scheduled

The Rails applications themselves don't need to be altered to run in a multi-core environment.

nor does any other program compiled for that architecture. it's the OS that schedules processes, not your userland program. the point is, some programs can be written in a way that makes it easier for the OS to exploit multicore. since ruby is not a functional language, my guess is that it would tend not to help the kernel exploit these resources. but obviously in the worst case, a process can run inside one core and never get the advantages of the rest of the chip architecture. this is about exploiting multicore

Got it?

yes, i get that you know very little about how computers function


nor does any other program compiled for that architecture

Then none of the people writing those programs need to know or care about parallelism.

Therefore, the core message of the article is brain-dead.


NO. why don't you READ before you reply

a program compiled for a multicore CPU will RUN. the question is how OPTIMALLY does it run. a program with no potential for parallelism will not get any parallelism. it will run, but run slow compared to programs designed for parallelism.

programs written to exploit parallelism will be programs that bring new approaches to data and state. functional languages provide this today, which is why lots of people think they will be the way forward for multicore.

honestly i think you are just bordering on being a troll. why don't you do some reading on this topic before writing more uninformed replies


As the instigator of the offending post, I am delighted to see the discussion. I am down at Supercomputing 08 with Tom and others and we will be discussing this topic on a panel this evening. The discussants include NVIDIA and Intel (and Sun and IBM and AMD), so clearly this is an issue of concern to ALL computer manufacturers. If you are in Austin, come to room 10B this evening (Monday, Nov 17). If you are not, we are doing a live webcast on the subject http://is.gd/7Rvz on Thursday, Nov 20.

I would love to carry discussion on this topic further. One idea would be an open forum using some kind of internet voice/text app. Not sure which yet. I know I could drag in some Intel folks and I'm pretty sure I could get a few folks from elsewhere in the industry and academia to join in. Maybe we could even make this a semi-regular and continuing discussion on broader topics; you guys clearly have both the opinions and the savvy. So let me know: post something on my blog if you like the idea http://is.gd/7RyX and we'll figure out how to make it happen.


How can you teach concurrent programming without teaching sequential programming first?


Teach them concurrently?


What a delightful thread. I am indirectly part of the creation of this inflammatory headline. It comes from the panel discussion we are holding next Monday evening at SC08 in Austin. The panel's title is "There Is No More Sequential Programming! Why Are We Still Teaching It?"

It is a complex issue. I taught myself to program at 16 to play blackjack. 41 years later, I am still creating and playing games (not video games). For over thirty years I worked for supercomputer companies. Along the way, I went to grad school and formalized my education.

I believe my experience is not atypical. My students who succeed in CS are the ones who at some point have a passion to solve a problem and are intent on gaining the skills to do it. Some start from flow charts and pseudocode; some start by debugging an empty file.

What does this mean for this discussion? I don't care whether we view sequential as a special case of parallel, or parallel as a special case of sequential. Ideally, I'm going to help my students gain the thinking skills and the experience to solve interesting problems by cutting code with threads, with MPI calls, via CUDA, or just with other code. But there is no question that many of the rarefied programming skills of my supercomputer days are fast becoming everyday programming skills.


In other news, Microsoft announces that UNIX is dead, so it should not be taught anymore.


Many-core overshoots the needed performance for most uses. Hence the rise of sub-notebooks, iPhones, and the best-selling games console being by far the least powerful (the Wii).

+

Concurrency is an unsolved problem. There are locks etc.; there's Smalltalk/Erlang pure message passing, and Web Services/SOA (it's concurrency too). Concurrency is of interest to academia and to niche apps (game engines, simulations, etc.).

=

unsolved problem that is not needed... so far, anyway


The idea of solving problems by breaking down solution steps and running them in parallel is exciting. However, identifying the areas where the solution to a problem can be parallelized is the hardest part. Just as critical is analyzing whether the administrative overhead that goes along with parallelization will negate the benefit. Put more succinctly, computer science curricula should emphasize this kind of analysis well before introducing the techniques for achieving parallelization.

Donald Knuth's skepticism over the benefits of concurrency makes me want to rethink my own assumptions.

I haven't really seen anyone describe the changes that should be made to curricula. Do any educators on this thread have specific changes in mind?


The purpose of programming is not to maximize the performance of Intel CPUs. If I want to calculate 1+1 or sqrt(2), why should I bother with parallel algorithms? Also, a lot of the time there will simply be parallel threads exploiting the cores (like multiple applications running on an OS).

What would be interesting would be a kind of "complex systems" programming, stuff like cellular automata, but maybe it is impossible to make them tractable enough.

Also, aren't the most performant "parallel computations" simply specialized matrix operations? I am not sure that learning specialized and hard-to-understand programming languages for parallel computation is the best way forward.


Parallelism still costs overhead. Concurrency is not free, so you have to weigh the cost/benefit before deciding between sequential and parallel for the task at hand.
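A standard back-of-the-envelope for that weighing is Amdahl's law: with a parallel fraction p of the work and n cores, the speedup is 1 / ((1 - p) + p / n). A tiny sketch with made-up numbers:

    -- Amdahl's law: the serial fraction caps the speedup no matter how many cores.
    amdahl :: Double -> Double -> Double
    amdahl p n = 1 / ((1 - p) + p / n)

    main :: IO ()
    main = mapM_ print
      [ amdahl 0.9 4    -- ~3.1x on 4 cores when 90% of the work parallelizes
      , amdahl 0.9 64   -- only ~8.8x even on 64 cores; the serial 10% dominates
      ]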


I recently read in another thread that IE 6 isn't going anywhere, and it's the same with sequential programming. I don't think it will be dead any time soon.


This ignores the millions of single-core/single-processor embedded systems out there (which surely also represent a growing industry).


In a nutshell: "we moved our product in a direction that is largely useless. Please help us."


Dead??? Not sure about that one. There is more to programming than just multicore server backend stuff. Regardless, you don't "stop teaching it"; you just teach more about multi-core architectures if those do start to become more prevalent.


Rumours of its death have been exaggerated. 99% of programming is sequential.


Well, he did say it was a diktat, which I had to look up: http://www.tfd.com/diktat


"People aren't using the hardware you're building. So stop building it."


Perfect timing. My copy of JCIP (Java Concurrency in Practice) came in the mail yesterday.


so java programmers get deep into java.util.concurrent


no, it's more likely that java programmers pick up haskell.


no, clojure


no, haskell. clojure in the end runs on the jvm. the ghc compiler has parallelism baked into it in a fundamental way.

don't get me wrong, i like clojure, but it won't be better for multicore until the jvm is better for multicore
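For what it's worth, a minimal sketch of what "baked in" looks like at the source level, using the parallel package's par/pseq (illustrative only; sparking work this fine-grained is usually too cheap to pay off):

    import Control.Parallel (par, pseq)

    -- Evaluate the two recursive calls on different cores.
    -- Build with: ghc -O2 -threaded Fib.hs   Run with: ./Fib +RTS -N2
    nfib :: Int -> Integer
    nfib n
      | n < 2     = 1
      | otherwise = x `par` (y `pseq` (x + y))
      where
        x = nfib (n - 1)
        y = nfib (n - 2)

    main :: IO ()
    main = print (nfib 35)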



