Those familiar with the field will recognize an augmented reality microscope as an incomplete alternative to, or a stepping-stone towards, AI applied to whole-slide images, which are multi-resolution images of more or less the same quality as what you see through a microscope.
For some use-cases, deploying AI to microscopic fields of view is a viable, lower-cost alternative to creating whole-slide images and running AI on them in their entirety (whole-slide scanners are a bit expensive). Specifically, if a pathologist identifies a suspicious region, the augmented scope can provide useful support. However, many types of anatomic pathology assessment require laborious review of several slides. Only AI applied to whole-slide images can pre-identify rare events or "hot spots", saving pathologist time while improving diagnostic confidence.
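To make the "hot spot" pre-identification concrete: a toy sketch of the tile-and-flag idea, where a slide is cut into a grid of patches and each patch gets a model score. Everything here is made up for illustration (a real pipeline would run a trained model over actual patch images, not random numbers):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical per-patch tumor probabilities for a slide cut into a
# 20x30 grid of patches (stand-ins for real model outputs).
scores = rng.beta(1, 20, size=(20, 30))
scores[7, 12] = 0.97   # plant one rare "hot spot"

def flag_hotspots(scores, threshold=0.9):
    """Return (row, col) patch coordinates worth a pathologist's first look."""
    ys, xs = np.where(scores >= threshold)
    return sorted(zip(ys.tolist(), xs.tolist()))

print(flag_hotspots(scores))
```

The point is the workflow, not the model: the exhaustive pass happens offline, and the pathologist starts from the short flagged list instead of scanning every field of view.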
Exhaustive search is definitely a high-value option. But consider dermpath, where the entire lesion is probably in the field of view at 4x, yet the number of possible classes may be high (bullous lesions, anyone?). In that case, "90% Bowen's disease, 3% inflamed seborrheic keratosis, and 7% other" is useful, especially if you knew the machine had been trained on everything in Weedon.
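For the curious, that kind of ranked readout is just a softmax over the classifier's raw scores. A minimal sketch (the class names and logit values are invented to roughly reproduce the 90/3/7 split above, not from any real model):

```python
import math

# Hypothetical raw scores (logits) from a trained dermpath classifier.
logits = {
    "Bowen's disease": 4.1,
    "inflamed seborrheic keratosis": 0.7,
    "other": 1.5,
}

def softmax_readout(logits, top_k=3):
    """Convert raw class scores into a ranked probability readout
    that a pathologist could sanity-check at a glance."""
    exps = {name: math.exp(score) for name, score in logits.items()}
    total = sum(exps.values())
    probs = {name: e / total for name, e in exps.items()}
    return sorted(probs.items(), key=lambda kv: -kv[1])[:top_k]

for name, p in softmax_readout(logits):
    print(f"{p:.0%} {name}")
```

The value is in the long tail: a calibrated "7% other" is a prompt to keep looking, not a verdict.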
I agree with you, and there are already several companies doing this for whole slides. People should also understand that more than one slide is often necessary for a high-quality analysis of a cancer. Off the top of my head, Definiens (http://definiens.com) develops software that does whole-slide analysis, and HalioDX (http://www.haliodx.com/) does the same and uses it as an auxiliary for colon cancer treatment.
Huh... Why? It _seems_ like a problem with an easy (and cheap) solution, given how readily available (and cheap!) suitably precise cartesian bots are these days, combined with high-quality digital microscopes and the state of modern image alignment algorithms.
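For anyone curious about the alignment step: stitching the tiles a stage-scanning rig produces usually comes down to estimating the translation between overlapping fields of view, and FFT phase correlation is the classic trick. A self-contained toy sketch (the synthetic "tiles" and the planted offset are made up; real tiles add illumination drift, stage wobble, and subpixel refinement):

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (dy, dx) translation relating two
    overlapping tiles via the peak of the phase correlation surface."""
    cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts into the signed range [-N/2, N/2).
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx

# Synthetic scene; the second tile overlaps the first at a known offset.
rng = np.random.default_rng(0)
scene = rng.random((256, 256))
tile_a = scene[:128, :128]
tile_b = scene[17:145, 24:152]   # offset (17, 24) from tile_a
print(phase_correlation_shift(tile_a, tile_b))
```

With enough overlap this recovers the offset robustly, which is why the hard part of a DIY whole-slide scanner tends to be optics, focus, and throughput rather than the alignment math.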
A lot of that cost comes from all the supporting hardware and certified software. Plus a lot of the machines have to be designed for high throughput applications.
I work in the field and am about to be sent to a remote location for 2 years. Literally the only pathologist for 100s of miles. This would be more than a little nice to have.
Much of radiology and pathology specimen interpretation is based on reliable and consistent detection. Come to think of it, the application of AI to this area is attractive because it removes human fatigue and missed diagnoses, and neural nets seem well equipped for image recognition. At the very least, this can provide a first level of flagging for specimens.
My only concern is that, as with any such system, human operators would place too much reliance on something that works great most of the time, resulting in missing something that otherwise would have been caught. This happens every once in a while today, such as with EKG machines spitting out a diagnosis based on electrical activity patterns.
This seems very interesting from a technical perspective, but it seems it would not improve upon the accuracy of current tools and would add to cost, if only incrementally. So the real-world benefit would be reducing time spent reviewing slides, or other workflow improvements?
Layperson speaking, but I believe skilled labor is most of the cost of these diagnostics. Even if the scope is $100k, the doctor is $200k/yr so it'd make sense for this to be a cost reducing tool if it increases speed.