What took them so long? Plans for Fusion were announced when AMD acquired ATi back in 2006, and now they're saying they'll ship in 2011, five years later?! Meanwhile, Intel has already released several integrated CPU+GPU solutions (Pineview and Arrandale/Clarkdale).
There are a couple of ways that a Fusion-style core can be used.
You could take advantage of the high-bandwidth, low-latency connection between CPU and GPU to get higher performance, especially in GPGPU applications, but GPGPU hasn't really taken off yet (there's a rough sketch of the shared-memory idea at the end of this comment). There's also the limitation that the CPU and GPU now share the same heatsink, so they compete for the same thermal budget.
You could use it to offer a low-end system with good integrated graphics, because there's no longer a penalty for the GPU sharing the CPU's RAM. However, this approach pretty much requires a new socket, which probably eliminates any potential cost savings from not having to buy a discrete GPU. It also sacrifices some flexibility: adding a discrete graphics card later makes the on-die GPU worthless unless you have a good asymmetric SLI/Crossfire setup, and even then you're limited to the features the on-die GPU supports.
The approach taken so far by Intel and AMD is to use it to make a better low-power platform. Intel's obviously been way ahead here. Ever since the Pentium M, AMD's had trouble keeping up in the mobile market. ATI's GPUs haven't been that great for the mobile market, either. It was obvious in 2006 that this was going to be the hardest approach for AMD to pull off, but it also seems to be the only one that might pay off.
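To make the shared-memory GPGPU point above concrete, here's a rough sketch of what it could look like with OpenCL, assuming a 1.x runtime that backs CL_MEM_ALLOC_HOST_PTR with memory both the CPU and the on-die GPU can address (the kernel, names and sizes are made up; most error checking is omitted):

    /* Sketch: sharing one buffer between CPU and GPU without an explicit
     * copy, assuming the runtime places CL_MEM_ALLOC_HOST_PTR memory where
     * both devices can reach it (plausible on a shared-memory APU). */
    #include <stdio.h>
    #include <CL/cl.h>

    static const char *src =
        "__kernel void scale(__global float *v, float k) {"
        "    size_t i = get_global_id(0);"
        "    v[i] *= k;"
        "}";

    int main(void) {
        const size_t n = 1 << 20;
        cl_platform_id plat;
        cl_device_id dev;
        cl_int err;

        clGetPlatformIDs(1, &plat, NULL);
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
        cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
        cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, &err);

        /* Ask for host-visible memory instead of a separate device buffer
         * that would have to be filled by a copy over a bus. */
        cl_mem buf = clCreateBuffer(ctx, CL_MEM_ALLOC_HOST_PTR,
                                    n * sizeof(float), NULL, &err);

        /* CPU writes directly into the mapped region. */
        float *p = clEnqueueMapBuffer(q, buf, CL_TRUE, CL_MAP_WRITE,
                                      0, n * sizeof(float), 0, NULL, NULL, &err);
        for (size_t i = 0; i < n; i++) p[i] = (float)i;
        clEnqueueUnmapMemObject(q, buf, p, 0, NULL, NULL);

        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
        clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
        cl_kernel kern = clCreateKernel(prog, "scale", &err);
        float factor = 2.0f;
        clSetKernelArg(kern, 0, sizeof(cl_mem), &buf);
        clSetKernelArg(kern, 1, sizeof(float), &factor);
        clEnqueueNDRangeKernel(q, kern, 1, NULL, &n, NULL, 0, NULL, NULL);
        clFinish(q);

        /* CPU reads the result back through the same mapping. */
        p = clEnqueueMapBuffer(q, buf, CL_TRUE, CL_MAP_READ,
                               0, n * sizeof(float), 0, NULL, NULL, &err);
        printf("v[42] = %f\n", p[42]);
        clEnqueueUnmapMemObject(q, buf, p, 0, NULL, NULL);

        clReleaseMemObject(buf);
        clReleaseKernel(kern);
        clReleaseProgram(prog);
        clReleaseCommandQueue(q);
        clReleaseContext(ctx);
        return 0;
    }

On a discrete card those map/unmap calls generally still mean a transfer over PCIe; on a Fusion-style part sharing the CPU's RAM they could become cheap pointer hand-offs, which is where the bandwidth/latency win would come from.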
5 years to absorb a newly acquired company's expertise and design a new kind of CPU with it doesn't sound that long to me. Intel does have integrated graphics, but it's still very far from a true video card.
I don't think it will be such a big hit. Current IGPs from Intel can decode two full-HD video streams (http://software.intel.com/en-us/articles/quick-reference-gui...). The next generation is supposed to have two such (or better) GPUs side by side on a chip (http://www.fudzilla.com/content/view/17576/1/). These IGPs will be enough for even more people than today's are. How many people need to play games at uber resolutions, anyway?
Laptop graphics are always a minefield of barely adequate hardware. I really hope this, along with Nvidia's ION chipsets, puts an end to these dark days.
Because "adequate" can have vastly different meanings, depending on what you use your GPU for. I use it mostly for eye-candy at the GUI level. I play a bit with Google Earth and that's about all. My Intel GMA can do that just fine.
If, however, I were into heavy gaming, I would find this setup unusable (down to the fact I don't run Windows).
I am not sure how well an Intel GMA would fare if it were used as a number cruncher. I suspect it wouldn't excel, but, compared to x86 vector hardware, even a lowly GPU should hold its own.
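For a rough sense of scale: "x86 vector hardware" here means something like SSE, which works on four single-precision floats per instruction, while even a low-end GPU spreads the same arithmetic over dozens to hundreds of lanes. A toy illustration of the SSE side (array size and constants are arbitrary):

    /* Toy example of the 4-wide SIMD that x86 vector hardware (SSE) offers,
     * for contrast with a GPU running the same arithmetic across many more
     * lanes at once. */
    #include <stdio.h>
    #include <xmmintrin.h>  /* SSE intrinsics */

    int main(void) {
        enum { N = 16 };
        float a[N], b[N], y[N];
        for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f; }

        __m128 k = _mm_set1_ps(3.0f);        /* broadcast 3.0 into 4 lanes */
        for (int i = 0; i < N; i += 4) {
            __m128 va = _mm_loadu_ps(&a[i]); /* load 4 floats */
            __m128 vb = _mm_loadu_ps(&b[i]);
            /* y = k*a + b, four elements per pass */
            _mm_storeu_ps(&y[i], _mm_add_ps(_mm_mul_ps(k, va), vb));
        }
        printf("y[5] = %f\n", y[5]);         /* 3*5 + 2 = 17 */
        return 0;
    }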
That's a problem inherent in the nature of HD codecs. Decoding with a general-purpose CPU is too inefficient. Any platform that has acceptable power consumption for the ultra-portable market is going to have to use dedicated decoding hardware. Once you take HD decoding off the list of things the CPU has to be fast enough for, you have the option to use a small, low-power CPU like the Atom. I don't see any room for improvement by using a different arrangement. Software writers will just have to get used to using the OS-provided decoders so that any accelerators present can be used.
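One way that can look in practice, on Linux at least: hand playback to a framework like GStreamer and let it pick the decoder. If a hardware-accelerated plugin (vdpau, vaapi, etc.) is installed, playbin can pick it up; otherwise it falls back to software. Just a sketch, and the file URI is a placeholder:

    /* Sketch of handing decode to the platform's media framework rather than
     * shipping your own codec: playbin auto-selects whatever decoder the
     * system provides, including hardware-accelerated ones when installed. */
    #include <gst/gst.h>

    int main(int argc, char *argv[]) {
        gst_init(&argc, &argv);

        GstElement *play = gst_element_factory_make("playbin", "play");
        g_object_set(play, "uri", "file:///path/to/clip.mkv", NULL);

        gst_element_set_state(play, GST_STATE_PLAYING);

        /* Block until the stream ends or errors out. */
        GstBus *bus = gst_element_get_bus(play);
        GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
                GST_MESSAGE_ERROR | GST_MESSAGE_EOS);

        if (msg) gst_message_unref(msg);
        gst_object_unref(bus);
        gst_element_set_state(play, GST_STATE_NULL);
        gst_object_unref(play);
        return 0;
    }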
Getting used to the OS-provided codecs can be a problem, because it forces you onto a specific codec you may want to avoid, and prevents you from using the one you actually want.
For example, I can't imagine Bink being hardware accelerated anytime soon.
ION is the chipset with the integrated GPU; the CPU paired with it is usually an Intel Atom. ION does a remarkably good job of decoding HD video if hardware acceleration is enabled within the media app.