Every now and again there is a research paper that has nothing to do with MS, but whose findings have implications so important for the field that I feel obliged to discuss them and put them in context for pwMS.
Gulshan et al. Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs. JAMA. Published online November 29, 2016. doi:10.1001/jama.2016.17216
Importance: Deep learning is a family of computational methods that allow an algorithm to program itself by learning from a large set of examples that demonstrate the desired behavior, removing the need to specify rules explicitly. Application of these methods to medical imaging requires further assessment and validation.
Objective: To apply deep learning to create an algorithm for automated detection of diabetic retinopathy and diabetic macular edema in retinal fundus photographs.
Design and Setting: A specific type of neural network optimized for image classification called a deep convolutional neural network was trained using a retrospective development data set of 128 175 retinal images, which were graded 3 to 7 times for diabetic retinopathy, diabetic macular edema, and image gradability by a panel of 54 US licensed ophthalmologists and ophthalmology senior residents between May and December 2015. The resultant algorithm was validated in January and February 2016 using 2 separate data sets, both graded by at least 7 US board-certified ophthalmologists with high intragrader consistency.
Exposure: Deep learning–trained algorithm.
Main Outcomes and Measures: The sensitivity and specificity of the algorithm for detecting referable diabetic retinopathy (RDR), defined as moderate and worse diabetic retinopathy, referable diabetic macular edema, or both, were generated based on the reference standard of the majority decision of the ophthalmologist panel. The algorithm was evaluated at 2 operating points selected from the development set, one selected for high specificity and another for high sensitivity.
Results: The EyePACS-1 data set consisted of 9963 images from 4997 patients (mean age, 54.4 years; 62.2% women; prevalence of RDR, 683/8878 fully gradable images [7.8%]); the Messidor-2 data set had 1748 images from 874 patients (mean age, 57.6 years; 42.6% women; prevalence of RDR, 254/1745 fully gradable images [14.6%]). For detecting RDR, the algorithm had an area under the receiver operating curve of 0.991 (95% CI, 0.988-0.993) for EyePACS-1 and 0.990 (95% CI, 0.986-0.995) for Messidor-2. Using the first operating cut point with high specificity, for EyePACS-1, the sensitivity was 90.3% (95% CI, 87.5%-92.7%) and the specificity was 98.1% (95% CI, 97.8%-98.5%). For Messidor-2, the sensitivity was 87.0% (95% CI, 81.1%-91.0%) and the specificity was 98.5% (95% CI, 97.7%-99.1%). Using a second operating point with high sensitivity in the development set, for EyePACS-1 the sensitivity was 97.5% and specificity was 93.4% and for Messidor-2 the sensitivity was 96.1% and specificity was 93.9%.
Conclusions and Relevance: In this evaluation of retinal fundus photographs from adults with diabetes, an algorithm based on deep machine learning had high sensitivity and specificity for detecting referable diabetic retinopathy. Further research is necessary to determine the feasibility of applying this algorithm in the clinical setting and to determine whether use of the algorithm could lead to improved care and outcomes compared with current ophthalmologic assessment.
I bet it's time for radiologists to think about their future careers too.
Yes, anyone who has a job based on image analysis: radiologists, pathologists, etc. But so is everyone who is involved in diagnosis, which is essentially pattern recognition. What we need are tools to record neurological function to feed to AI bots, to teach them to become neurologists.
Totally agree, Prof G. But this could have implications for advanced stages of human replacement, I believe. Using a black box (a deep neural net) to analyse another black box (the human body) could lead us to a point where decisions will be made (and mostly correct ones), but no one can tell you why a decision was made and why it worked on this patient.

The problem with current technology (and the current state of the art for pattern recognition is deep neural networks of various sorts), as any AI specialist would admit, is that these models are hard to impossible to analyse today; we simply haven't developed techniques good enough to look inside them and explain how they work.

E.g. you get a task to distinguish between a car and a skyscraper in a set of photos. You can use a ConvNet model, and if you get the weights, number of layers etc. right and use a good-quality dataset to train it, you can get astonishing results. But if you ask why such-and-such a decision was made, there is no answer, because, simply speaking, we don't understand why one unit (or neurone, as some call it) on the second layer reacts to Trump Tower while another on a deeper layer reacts only to Audis but not Porsches. We have no theory of what features of the model, training process, or dataset lead to this complex behaviour, and we cannot describe it in any formal (programming, pseudocode, mathematical symbols...) human-understandable language. Not yet, at least.
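To make the "unit" in the comment above concrete: each unit in a convolutional layer just computes a weighted sum over a small image patch. The minimal numpy sketch below shows this with a handcrafted vertical-edge kernel; in a real ConvNet the kernels are learned, not written by hand, which is exactly why no one can say in advance what a deep-layer unit will respond to.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: each output value is one 'unit' activation."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # A unit's activation is just a weighted sum over its receptive field.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel: one of the few filters a human can actually name.
vertical_edge = np.array([[1., 0., -1.],
                          [1., 0., -1.],
                          [1., 0., -1.]])

# A toy 5x5 image: dark left half (0), bright right half (1).
image = np.array([[0., 0., 1., 1., 1.]] * 5)

activations = conv2d(image, vertical_edge)
print(activations)  # strong (negative) response where the dark-to-bright edge sits
```

The arithmetic of each unit is trivially simple; the interpretability problem is that a trained network contains millions of such kernels whose values were set by gradient descent, with no human-readable description attached.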
What happens to thinking outside the box and not following accepted dogma though?
Well, I guess you can train a neural network to analyse a neural network that analyses a neural network that analyses your biological network, or another artificial neural net~ wait, and in time it could possess some kind of inconceivable self-consciousness and... well, those future scenarios are pictured in the Terminator series, or in Asimov's "The Last Question" short story, if you'd like that kind of thinking outside the box, MouseDoc.
In Asimov's Foundation novels, the emergence of the Mule confounded psychohistory.
MD2, I forget the exact quote from "My Life as a Quant" by Emanuel Derman; something like "models are just models". I think that is relevant in this context. The 1.0 version of these tools will not see outside the box, but they will get better with each version. It would be hugely advantageous if we could automate the inside-the-box part of the problem.
I doubt any AI could duplicate the thought processes of Mouse Doctor!
All joking aside, this technology is relatively easily accessible if anyone wants to experiment or work on their brain health. There is a fantastic, completely free course available online; the lectures are recorded inside a self-driving car that drives the lecturers around a parking lot in Mountain View, California.
Where could one get sample data to play with in scikit-learn?
Second that question. 😉
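For anyone wanting to try: scikit-learn ships with small sample datasets out of the box, so no download is needed. A minimal sketch using the bundled handwritten-digits dataset (1,797 8x8 images), which is a toy cousin of the image-classification task in the paper above:

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# load_digits() returns 1,797 8x8 greyscale images flattened to 64 features each.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# A simple linear classifier; swap in any scikit-learn estimator to compare.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)

print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

Other bundled loaders include `load_iris`, `load_breast_cancer`, and `load_wine`; larger sets such as MNIST can be pulled with `fetch_openml`, which does download data on first use.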
Seems like the Russian bots have been in overdrive over the last few days, judging by the web traffic. Hopefully they're on the case of curing MS 😉
I don't understand, who on this thread is a bot?
No-one, but we've been getting a lot of web traffic from Russia over the last few days, which is not typical, and I suspect it isn't driven by interest in MS.
One should not be afraid of a bot that will pass the Turing test, but of the bot that deliberately fails it :p
Vasy lol, hope you are watching Westworld
In the long run this is good for just about any diagnostic profession, from doctors to computer technical support. It allows *initial* triage to be done rapidly and escalated to the most appropriate expert.

Further, I think it will have an impact (at least here in the USA) on how insurance does or doesn't pay for treatment. AI will be seen as more deterministic and trustworthy than human triage.

As to Prof G's comment: yes, working for a tech firm to help them tune their AI (even if only as a consultant) is a great thing, because it will help them rapidly improve the algorithms.
I used to watch Star Trek when I was younger and I really liked Data. Any chance these bots will be as impressionable as Data?