In new research published today in Nature Medicine, scientists at New York University re-trained an off-the-shelf Google deep learning algorithm to distinguish between two of the most common types of lung cancer with 97 percent accuracy. This type of AI—the same tech that identifies faces, animals, and objects in pictures uploaded to Google’s online services—has proven adept at diagnosing disease before, including diabetic blindness and heart conditions. But NYU’s neural network learned how to do something no pathologist has ever done: identify the genetic mutations teeming inside each tumor from just a picture.

“I thought the real novelty would be not just to show the AI is as good as humans, but to have it provide insights a human expert would not be able to,” says Aristotelis Tsirigos, a pathologist at the NYU School of Medicine and a lead author on the new study.

To do so, Tsirigos’ team started with Google’s Inception v3, an open-source algorithm originally trained to identify 1,000 different classes of objects. To teach the algorithm to distinguish between images of cancerous and healthy tissue, the researchers showed it hundreds of thousands of images taken from The Cancer Genome Atlas, a public library of patient tissue samples.
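Re-training a model like this is a standard transfer-learning exercise: strip the network’s original 1,000-way classification head, attach a new head for the tissue classes of interest, and fine-tune on the new images. The sketch below shows what that step might look like using TensorFlow’s bundled Inception v3 weights; the directory layout (tiles/train, tiles/val), the single sigmoid output for tumor-versus-normal tiles, and the hyperparameters are illustrative assumptions, not the NYU team’s actual pipeline.

```python
# Minimal sketch: fine-tuning a pretrained Inception v3 to separate tumor
# tiles from normal tissue tiles. Paths, head architecture, and training
# settings are illustrative assumptions, not the study's actual pipeline.
import tensorflow as tf

IMG_SIZE = (299, 299)  # Inception v3's native input resolution

# Hypothetical folders of image tiles, one subdirectory per class
# (e.g. tiles/train/tumor, tiles/train/normal).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "tiles/train", image_size=IMG_SIZE, batch_size=32, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "tiles/val", image_size=IMG_SIZE, batch_size=32, label_mode="binary")

# Load Inception v3 with ImageNet weights and drop its 1,000-class head.
base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False  # start by training only the new head

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.inception_v3.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.2)(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # tumor vs. normal
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=5)
```

In the usual transfer-learning recipe, some or all of the frozen base layers are unfrozen after the new head converges and trained further at a much lower learning rate; whether and how the study did this is not described in this excerpt.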
