
What are the real world applications you’re referring to?


Not the gp, but…

- medical diagnosis

- assistant-systems for everything

- noise reduction in industrial settings (via mechanical design)

- basically anything language related

- interpretation of image data

- essentially anything involving a pair of interacting sequences (thanks, transformers)

Contrary to many cynical takes, deep learning, CNNs, and transformers are massive. It's hard to grasp the scope of these developments when you're zoomed in to your own warped perception of time and progress. One needs to zoom out a bit.

We could probably stop advancing the field and still reshape most of the "real world" we're used to, just with things you can import with two lines in python. But those changes come incrementally, and most of the bright minds around are dedicated to advancing our conceptual realm instead of the meatspace one.
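To make the "two lines in python" point concrete, here's a hedged sketch using the Hugging Face transformers library (the default model it downloads, and the exact output, will vary):

    # Off-the-shelf pretrained model in a couple of lines.
    # Requires `pip install transformers` plus a backend such as PyTorch.
    from transformers import pipeline

    sentiment = pipeline("sentiment-analysis")
    print(sentiment("Deep learning quietly reshaped this workflow."))
    # e.g. [{'label': 'POSITIVE', 'score': 0.99}]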


I agree with this perspective. There are certainly more step changes to come in the theoretical domain (arbitrary tensor-to-tensor mapping of any dimensionality / variable input and output size?), but the application space seems largely known, and now just requires lots and lots of implementation in the real world - which isn't necessarily going to come from wildly successful software platforms, but more from applied AI practitioners building niche tools everywhere.


It can be anything really. Better medication design, discovery and modelling of new compounds, disease discovery, better understanding of our environment, etc.


So why aren't we already aware of what these applications are, given that ML has been well-funded and hyped for about a decade? Basically, what is the inflection point?

Obviously this is subjective, but underwhelmed would be a compliment for how I feel about ChatGPT. I've been hearing about these near-term breakthroughs for nearly a decade, even working in the industry. And yet, the real-world progress is nothing compared to the hype (or funding).

So why now? What's changed?


I'm not the OP, and I'm a radiographer. Big things are happening with image processing in radiology. ML/deep learning, or whatever the marketing people call it, is what's being done behind the scenes in Siemens' 'Deep Resolve' for MR systems.

It's utterly transformed what we do. It adds signal in an initial processing stage in the k-space domain (I'd estimate 25%-30%), then reconstructs the image. Then it doubles the resolution in each direction (so quadrupling the pixel count): double the pixels in x, double in y.

It does this based on a training dataset of paired images, one high resolution, one low resolution.
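For illustration only, here is roughly what training on such paired data looks like in PyTorch - a generic 2x super-resolution sketch, not Siemens' actual Deep Resolve pipeline (which works partly in k-space):

    import torch
    import torch.nn as nn

    class ToySuperRes(nn.Module):
        # Tiny CNN mapping a low-res slice to one with 2x pixels in x and y.
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 4, 3, padding=1),  # 4 channels feed a 2x2 upscale
                nn.PixelShuffle(2),              # rearranges channels into pixels
            )

        def forward(self, x):
            return self.net(x)

    model = ToySuperRes()
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.L1Loss()

    # Stand-ins for the paired training data: low-res inputs, high-res targets.
    low_res = torch.randn(8, 1, 64, 64)
    high_res = torch.randn(8, 1, 128, 128)

    for step in range(10):
        pred = model(low_res)
        loss = loss_fn(pred, high_res)
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()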

Images are now obtained quicker and are higher resolution than ever. Imaging protocols have extra sequences as time constraints are reduced. Patients can stay still for the short scans we are doing. Not every scan benefits the same way, and some sequences don't have the tech yet.

The images are fantastic. It’s a larger change than the move to high field magnets (1.5T to 3T) and the hype about it isn’t anywhere near enough.

I'm lucky to be able to compare imaging with and without the technology applied, and to be able to mess about with it and find the rough edges (they are unexpected and a little counterintuitive), but I'd be trying to avoid systems without it (or an equivalent) as a system user or a patient. The future is very, very bright.


Thank you for responding.

Do you know if 'Deep Resolve' potentially introduces artifacts? I believe you that it's better. I'm trying to figure out if it's either

a) we developed a superior technology that is almost always better, but will very occasionally introduce noise that can be screened by a trained technician (or whatever)

b) we developed a superior technology that is literally better in every way when comparing final images

I'm not trying to discount a) here because if it's progress for the industry then that's still a win.


It’s still earlyish days and it’s fascinating.

As a rule, in MRI you have three things and you pick two: resolution, signal, and time to acquire.
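As a rough numerical illustration of that trade-off, using the textbook approximation that SNR scales with voxel volume and the square root of acquisition time (numbers here are arbitrary, not from any specific scanner):

    # Back-of-the-envelope MRI trade-off: pick two of resolution, signal, time.
    def relative_snr(voxel_volume_mm3: float, scan_time_s: float) -> float:
        # Approximation: SNR ~ voxel volume * sqrt(acquisition time).
        return voxel_volume_mm3 * scan_time_s ** 0.5

    baseline  = relative_snr(voxel_volume_mm3=1.0, scan_time_s=240)  # ~15.5
    sharper   = relative_snr(voxel_volume_mm3=0.5, scan_time_s=240)  # ~7.7, half the SNR
    recovered = relative_snr(voxel_volume_mm3=0.5, scan_time_s=960)  # ~15.5 again, but 4x the time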

The nature of the Deep Resolve training dataset means that matching the training input parameters makes images look good. Counterintuitively, accelerating the scan more sometimes improves images (by better matching the training dataset). The differences are not subtle. This sort of breaks the resolution/signal/time trade-off.

Yes, it can produce artefacts on images that are low signal. It's a grain-type effect in the phase direction.

Every new acceleration technique has its artefacts and issues (fast spin echo, single shot, parallel imaging, simultaneous multi-slice, etc.). The beauty of Deep Resolve is that you can reconstruct the image again without DR applied and compare the result.

One minor proviso, though: DR loves signal, so a scan that is to be run with DR needs more signal than a non-Deep Resolve sequence. This is more than made up for by the resolution doubling after DR is applied.

Other accelerations have their own quirks to be handled (e.g. parallel imaging needs a lot of oversampling, simultaneous multi-slice needs a lot of extra elements turned on).

I'd say it's your option b, but with the caveat that every single MR sequence (DR or not) needs someone to check the image is real and not showing stuff that isn't there. Artefacts handled on a normal day will include machine faults, technician-introduced artefact, patient issues (movement!), sequence issues, vendor-specific problems and some downright weird things that never get explained. These things keep me employed.

It looked like we were entering a Tesla arms race - more field strength being better. The likes of DR are doing the opposite, giving great images at lower field strength. Lower-field-strength magnets have better T1 contrast, are easier to make, are easier to install, use less (or zero) helium, are easier to maintain, are safer, are more readily available, are lighter, are cheaper, etc.


> Images are now obtained quicker and are higher resolution than ever.

> I’m lucky to be able to compare imaging with and without the technology applied

Interesting, can you share some images, or a paper on this?

Btw, I've tried some online image enhancers on a random painted image and it wasn't amazing; it did enhance some parts, but not really by much. I'm sure specialized systems can do much better when the input image comes from the same source as the training set.


I have got a few images that I used for a talk. These were an early iteration of the tech (Deep Resolve Sharp) and not what we use now (Deep Resolve Sharp, but also Deep Resolve Boost, which also adds signal in the k-space domain).

Siemens have a mass of material on it, but I don't believe the marketing really conveys how good it is. They also talk up the speed side of things, while I have gone for a bit of speed but mainly higher resolution. I'm not even sure how to talk about resolution anymore. Is it the voxel size acquired, or what the end result produces? It's smoke and mirrors, but whatever you call it, the end result is great. We are also accelerating scans more than the Siemens examples (they seem to use 2x or 3x a lot; we go minimum 4x, and as much as 8x). We use higher resolution than most of their examples too.

https://marketing.webassets.siemens-healthineers.com/2f18155...

https://www.siemens-healthineers.com/magnetic-resonance-imag...


* better pictures from your phone's cameras

* auto-captions for youtube videos, you can now search through them. when they started they were quite good and have only improved since.

* ChatGPT we haven't seen widely deployed yet, but a lot of people already use products built on GPT's technology, so an even better version will benefit all of them and widen the circle of users.


I think this is a critical question. Most businesses, imo, would not benefit from the added uninterpretability and computational overhead of deep learning. For those industries modeling stochastic or evolving processes, however, the impact is monumental.



