
I find this fascinating. I used to do high-content screening of cells with machines like the PerkinElmer Operetta and lots of dyes. I would never have thought machine learning would bring us full circle back to brightfield.

The meat of this article:

> The trend of ML in general over the past 15~ years has been to strip away more and more of the biases you’ve encoded about your dataset as you feed it into a model. Computer vision went from hand-crafted interpretable features (e.g. number of circles, number of black pixels when thresholded, etc), to hand-crafted uninterpretable features (e.g. scale invariant feature transform), to automatically extracted uninterpretable features (e.g. hidden dimensions of a convolutional neural network). In other words, the bitter lesson; pre-imposing structure on your data is useful for a human, but detrimental to a machine.
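To make the quote's first stage concrete, here is a minimal sketch of a "hand-crafted interpretable feature" of exactly the kind it mentions: binarize a grayscale image at a threshold and count the dark pixels. The function name and the threshold value are my own illustrative choices, not from the article.

```python
import numpy as np

def dark_pixel_count(image: np.ndarray, threshold: float = 0.5) -> int:
    """Hand-crafted, interpretable feature: the number of pixels
    darker than `threshold` in a grayscale image scaled to [0, 1].
    A human can read this number directly (e.g. as a rough proxy
    for stained area), unlike a CNN's hidden activations."""
    return int((image < threshold).sum())

# Toy 4x4 "image" with one dark 2x2 quadrant.
img = np.ones((4, 4))
img[:2, :2] = 0.0
print(dark_pixel_count(img))  # -> 4
```

The contrast the quote draws is that a feature like this bakes in human assumptions (which threshold? why pixel count?), whereas a CNN learns its own, uninterpretable ones from the raw image.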

The gist, as I understood it, is that ML could already identify and label the organelles and cell structures in brightfield imaging without the dyes, and that adding the dyes on top just muddled and hindered it.



It makes a lot of sense in retrospect because all the structures are visible on the brightfield image - it's just more annoying for a human being to pick out the features on thousands of slides.

I wonder where the dye budgets and spectral bands will go now that there's no need to mark the boundary of the nucleus. I bet there are a lot of things that aren't visible in the refractive index that you could still dye.



