Japanese supercomputer blisters 10 quadrillion calculations per second (networkworld.com)
28 points by coondoggie on Nov 4, 2011 | 18 comments


Let me hum a few bars: suppose you were sitting on an enormous pile of money (borrowed from your citizens, but they don't really get irked about that), you had an industry which was almost universally popular (one of the triumphs of marketing technology in your country), its primary beneficiaries were important constituencies, and you could justify almost any amount of expenditure by saying it was for National Greatness And The Advancement Of Science (TM). You might spend an awful lot of money on white elephant projects, regardless of whether that was an efficient method of achieving the stated goals of the program.

For the benefit of previous employers: no, I'm not thinking of our project here; I'm thinking of NASA in the United States.


Processing masses of seismic data and reorganizing Japan's energy infrastructure with or without its existing nuclear fleet seem like the sort of problems that would benefit from ultra-large computational capacity, to name but two possibilities.


I find these announcements interesting, perhaps more so from a technology point of view, but they do seem a bit like putting a V8 engine in a Mini Cooper just to show off. They offer the following possible "uses":

Analyzing the behavior of nanomaterials through simulations and contributing to the early development of such next-generation semiconductor materials, particularly nanowires and carbon nanotubes, that are expected to lead to future fast-response, low-power devices.

Predicting which compounds, from among a massive number of drug candidate molecules, will prevent illnesses by binding with active regions on the proteins that cause illnesses, as a way to reduce drug development times and costs (pharmaceutical applications).

Simulating the actions of atoms and electrons in dye-sensitized solar cells to contribute to the development of solar cells with higher energy-conversion efficiency.

Simulating seismic wave propagation, strong motion, and tsunamis to predict the effects they will have on human-made structures; predicting the extent of earthquake-impact zones for disaster prevention purposes; and contributing to the design of quake-resistant structures.

Conducting high-resolution (400-m) simulations of atmospheric circulation models to provide detailed predictions of weather phenomena that elucidate localized effects, such as cloudbursts.

Which sound a bit squishy. I'd be interested to know what China's super has done since they showed pictures of it. Clay Dillow over at the Popular Science blog has been putting out snippets on various supercomputers under the title "what are you working on now?" and it seems like a whole lot of nothing.

Franklin (#27) - http://www.popsci.com/technology/article/2011-10/day-life-su... Climate change modelling.

iForge (unranked) - http://www.popsci.com/technology/article/2011-11/what-are-yo... Doing fluid dynamics (this is probably the most common thing).

Roadrunner (#10) - http://www.popsci.com/technology/article/2011-10/roadrunner-... Super sekrit weapons stuff. Which seems to be the most common use. (Is there like some sort of open source nuke simulator or something?)

Anyway, there are probably a dozen machines PopSci has looked at and I really am trying to get a sense of how they pay for themselves.


Hm. I don't see how those count as "a whole lot of nothing", but then I'm a physics student, so basic science research doesn't bother me. What sorts of benefits would you want to see from supercomputing for it to be "useful"?

There was a recent Washington Post article on the DoE's supercomputers and the uses they're put to:

http://www.washingtonpost.com/national/national-security/sup...


I don't get it. If materials modeling, drug discovery, seismology and atmospheric modeling sound "squishy" to you, what on Earth wouldn't be squishy?

Most of those machines are shared between a bunch of researchers working on a bunch of different problems. There's some great stuff going on, some semi-great stuff going on, and probably some fairly useless stuff going on, too. I've used many of 'em myself. In terms of science done per dollar spent, big-arse supercomputers are certainly far more efficient than most publicly-funded research.


Fair enough, non-squishy is the return on investment Amazon gets with AWS from its clusters. I don't really expect a monetary return on pure research (although it does happen) and I don't see a lot of papers rolling by in Science, Nature, or Xarchiv where the work required the use of one of the top 500. Nor do I see announcements from Pfizer or Merck saying "This new simulation allowed us to track down this drug in record time which treats condition X and because we're more efficient we can charge less for it and still make a huge profit."

The shuttle CFD work that Ames did came out with a lot of info on their use of the Cray supercomputer to do the modelling; that is an example of something I would expect to see.

However, as others have pointed out, you can get a machine close to a Cray-2 off the shelf today.

Schlumberger has one doing oil field seismic analysis, and I understand the value proposition there. But there are many, many machines on the T500 list where I just wonder, "OK, why do they have that?"


I don't see a lot of papers rolling by in Science, Nature, or Xarchiv where the work required the use of one of the top 500

There are many papers in Science and Nature based on work from supercomputers. And if by Xarchiv you mean arXiv, the proportion is bloody huge. Just try searching for "NERSC" for instance and you'll find that NERSC alone leads to a zillion publications per year.

there are many many machines on the T500 list where I just wonder, "Ok why do they have that?"

And you take the fact that you personally don't know what they're being used for as evidence that they're not doing anything worthwhile?

Most of these machines aren't used for one big project, they're used by hundreds of researchers for hundreds of small projects.


"And you take the fact that you personally don't know what they're being used for as evidence that they're not doing anything worthwhile?"

That was not what I said. Clearly some organization that ponies up a few hundred million to build one of these systems and signs up to feed it power thinks that whatever it is doing is "worthwhile." I would love to understand that, so I posed the question.

I think I mentioned that I understand the NERSC vision, sharing supercomputers around for science. And yes, there are lots of citations about their facilities, and they have six such machines on their homepage [1].

My question is: if you aren't a government or university, what value would you derive from having a supercomputer on the T500 list? (patio11's excellent response notwithstanding.) Off list someone pointed out that Boeing uses their computer for structural analysis of airplanes made of carbon fiber. That seems like a perfectly reasonable use; one plane like the 787 is probably $5-6B of sales over its lifetime.

[1] http://www.nersc.gov/users/computational-systems/


Well, there aren't that many computers on the top500 that aren't owned by a government or university. I'm too lazy to look through the whole 500, but just in the top 100 let me count.

Airbus has one at 29. I think it's clear what they use that for.

Vestas Wind Systems in Denmark has number 53. Presumably they're optimizing their wind turbines.

IBM has a couple, which they largely run like shared research facilities.

And that's pretty much it for commercial research facilities in the top500.


It "blisters" them? Presumably that problems is most noticeable with a fresh boot.


This K machine is a months-old story. Why the sudden jump in news stories on it?


It was just upgraded.


I stopped being impressed by supercomputers like 5 years ago. These days it's all about money. You can buy as many CPUs and as much RAM as you want and connect them together.

I'm much more impressed by the advances in CPU architecture. SSDs have been amazing so far. Networking has been improving too.


What impressed you about the supercomputers of 5 years ago that no longer impresses you about today's supercomputers? The Cray approach (http://www.nccs.gov/wp-content/uploads/2011/03/UserMeeting20...) and the IBM approach (http://en.wikipedia.org/wiki/Blue_Gene#Blue_Gene.2FQ) both seem more interesting than just connecting together as many CPUs/RAM as you can afford.

There are interesting OS challenges for the compute nodes, interesting hardware challenges on interconnect, interesting filesystem challenges, interesting programming challenges to manage the parallelism, competing architectural approaches between CPUs and GPU-accelerated clusters ... it's a pretty interesting space!
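
To make the "managing the parallelism" point concrete, here is a minimal sketch assuming MPI (a common programming model on these machines; the domain size, rank layout, and message sizes are illustrative, not taken from any particular system). Each rank owns a slab of a 1-D domain and exchanges "ghost" cells with its neighbours, which is why the interconnect matters as much as the CPUs:

  /* halo.c: toy MPI halo exchange (illustrative sizes, not from a real code).
     Compile: mpicc halo.c -o halo    Run: mpirun -np 4 ./halo */
  #include <mpi.h>
  #include <stdio.h>

  #define N 1024                      /* local cells per rank (assumed) */

  int main(int argc, char **argv) {
      MPI_Init(&argc, &argv);
      int rank, size;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      double u[N + 2];                /* local data plus one ghost cell per side */
      for (int i = 0; i < N + 2; i++) u[i] = rank;

      int left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
      int right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

      /* swap boundary cells with both neighbours */
      MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left,  0,
                   &u[N + 1], 1, MPI_DOUBLE, right, 0,
                   MPI_COMM_WORLD, MPI_STATUS_IGNORE);
      MPI_Sendrecv(&u[N], 1, MPI_DOUBLE, right, 1,
                   &u[0], 1, MPI_DOUBLE, left,  1,
                   MPI_COMM_WORLD, MPI_STATUS_IGNORE);

      printf("rank %d: left ghost = %g, right ghost = %g\n", rank, u[0], u[N + 1]);
      MPI_Finalize();
      return 0;
  }

Every time step of a real simulation repeats an exchange like this, so the latency and bandwidth of those Sendrecv calls, not just the per-node flops, set the scaling limit.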


Actually that's not exactly true, since Linpack doesn't really lend itself to architectures like, say, Google's, where you have lots of machines sitting on the network. The bandwidth between the nodes, and its latency, becomes the determining factor. So for these guys it is all about the interconnect rather than just how many CPUs you can put in a rack.
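
To put rough numbers on that, here's a back-of-the-envelope model in C (every figure below is an illustrative assumption, not a measurement of any real machine): each step costs some compute time plus one message of latency + bytes/bandwidth, and a commodity network eats a large share of the step long before CPU count is the bottleneck.

  /* netmodel.c: toy compute-vs-communication estimate (all numbers assumed). */
  #include <stdio.h>

  int main(void) {
      double flops_per_step  = 1e9;      /* work per node per step (assumed)  */
      double node_flops_s    = 100e9;    /* per-node compute rate (assumed)   */
      double msg_bytes       = 1e6;      /* message size per step (assumed)   */

      struct { const char *name; double latency_s, bw_Bps; } nets[] = {
          { "commodity Ethernet", 50e-6, 125e6 },   /* ~1 Gb/s  (assumed) */
          { "HPC interconnect",    1e-6, 5e9   },   /* ~40 Gb/s (assumed) */
      };

      double t_compute = flops_per_step / node_flops_s;
      for (int i = 0; i < 2; i++) {
          double t_comm = nets[i].latency_s + msg_bytes / nets[i].bw_Bps;
          printf("%-20s compute %.3f ms + comm %.3f ms = %.3f ms/step\n",
                 nets[i].name, t_compute * 1e3, t_comm * 1e3,
                 (t_compute + t_comm) * 1e3);
      }
      return 0;
  }

With these made-up numbers the commodity network nearly doubles the time per step while the HPC interconnect adds a few percent, which is the whole reason Top500 machines spend so much of their budget on the network.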


After I found out about bitcoin, it's the first thing I think about when they come out with faster hardware.


K doesn't have GPUs, so it sucks for Bitcoin mining. In general, supercomputers contain extra stuff that isn't useful (and thus isn't profitable) for mining.


How long does it take to boot Windows?



