#ClinicSpeak & #ResearchSpeak: standardising MRI reading

How reliable is your MRI report? Could it be improved? #ResearchSpeak #ClinicSpeak #MSBlog

One of the problems MS centres face when incorporating annual, or biannual, MRI monitoring scans into clinical practice is the variable quality of neuroradiology reporting. To treat-2-target of NEDA (no evident disease activity) you have to have your neuroradiologists buy into the management of MS. Most neuroradiologists are diagnosticians and are very good at looking at a series of images and making a diagnosis within 5 minutes. For monitoring, however, you have to look at each scan carefully, comparing it with last year’s scan slice by slice, to see if any of the lesions are new, or bigger or smaller than last year. The study below shows that when neuroradiologists use a template, a predefined proforma for reporting scans, they improve their detection rate of MS-related findings. We need to use a similar tool for monitoring scans. Another option that neuroradiologists find useful is the proprietary software that comes with each scanner, which registers two sets of scans, this year’s and last year’s, and highlights new lesions. This at least helps the neuroradiologist find new lesions. We are also testing a bot for this; we send our images into the cloud and let automated image analysis software do the comparison for us. The latter will be welcomed by neurologists, as reading monitoring scans is very monotonous and repetitive.
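
To give a flavour of what this kind of software is doing under the hood, here is a minimal Python sketch of the register-and-subtract idea using SimpleITK. It is purely an illustration of the principle, not any vendor's actual pipeline; the file names and the intensity threshold are assumptions for the example.

```python
# Minimal sketch: rigidly register last year's scan to this year's, then flag
# voxels with a large signal increase as candidate new lesions.
# File names and the intensity threshold are illustrative assumptions.
import SimpleITK as sitk

fixed = sitk.ReadImage("flair_2017.nii.gz", sitk.sitkFloat32)   # this year's scan
moving = sitk.ReadImage("flair_2016.nii.gz", sitk.sitkFloat32)  # last year's scan

# Initialise a rigid (Euler) transform by aligning the image centres.
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

# Mutual-information-driven rigid registration.
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetMetricSamplingStrategy(reg.RANDOM)
reg.SetMetricSamplingPercentage(0.1)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
reg.SetOptimizerScalesFromPhysicalShift()
reg.SetInitialTransform(initial, inPlace=False)
transform = reg.Execute(fixed, moving)

# Resample last year's scan into this year's space and subtract.
moving_resampled = sitk.Resample(moving, fixed, transform,
                                 sitk.sitkLinear, 0.0, sitk.sitkFloat32)
difference = sitk.Subtract(fixed, moving_resampled)

# Crude "candidate new lesion" mask: voxels that got substantially brighter
# (the threshold of 50 is arbitrary and scanner-dependent).
new_lesion_mask = sitk.BinaryThreshold(difference, lowerThreshold=50.0,
                                       upperThreshold=1e9,
                                       insideValue=1, outsideValue=0)
sitk.WriteImage(new_lesion_mask, "candidate_new_lesions.nii.gz")
```

A real pipeline would add brain extraction, intensity normalisation and a human check of the flagged voxels, but the core trick is exactly this: align the two time points, subtract, and draw the reader's eye to what has changed.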



Dickerson et al. Effect of Template Reporting of Brain MRIs for Multiple Sclerosis on Report Thoroughness and Neurologist-Rated Quality: Results of a Prospective Quality Improvement Project. J Am Coll Radiol. 2016 Dec 5. pii: S1546-1440(16)30947-4.



PURPOSE: To assess the impact of structured reporting templates on the objective and subjective quality of radiology reports for brain MRIs in patients with multiple sclerosis (MS).

METHODS: A HIPAA-compliant prospective quality improvement initiative was undertaken to develop and implement a 12-item structured reporting template for brain MRI examinations in patients with known or suspected MS based on published guidelines. Reports created 1 year before implementing the template served as the baseline. A random sample of 10 template and 10 non-template reports was sent to five neurologists outside the study institution with MS expertise, who reviewed the reports for comprehensiveness and quality. The number of MS-relevant elements in template and non-template reports was compared with unpaired t tests. Proportions were compared with χ2 and Fisher exact tests.

RESULTS: There were 63 reports in the pre-template period and 93 reports in the post-template period. Use of the template increased over time in the post-template period (P = .04). All 12 MS-relevant findings were addressed more often and with less variability in template reports (11.1 ± 0.7 findings versus 5.8 ± 2.2 findings in non-template reports, P < .001). Neurologists were more likely to give the template reports the highest positive rating (56% [107/190] versus 28% [56/199], P < .001) and less likely to give the template reports a lower rating (7% [13/190] versus 15% [29/199], P = .01) compared with the non-template reports.

CONCLUSION: Template reporting of brain MRI examinations increases the rate at which MS-relevant findings are included in the report. Standardized reports are preferred by neurologists with MS expertise.
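
For those who like to check the numbers, the group comparisons in the abstract can be approximately re-derived from the published summary figures with standard statistical tools. The sketch below uses Python's scipy and takes the pre- and post-template report counts (63 and 93) as the group sizes, which is an approximation since not every post-template report actually used the template.

```python
# Re-derive the abstract's comparisons from its published summary figures
# (an illustrative approximation; the authors' exact analysis may differ).
from scipy import stats

# Findings per report: 11.1 +/- 0.7 (template, n~93) vs 5.8 +/- 2.2 (non-template, n~63).
t_stat, p_findings = stats.ttest_ind_from_stats(
    mean1=11.1, std1=0.7, nobs1=93,
    mean2=5.8, std2=2.2, nobs2=63,
    equal_var=False)
print(f"findings per report: p = {p_findings:.2e}")  # consistent with the reported P < .001

# Highest positive rating: 107/190 template vs 56/199 non-template (chi-squared test).
chi2, p_top, _, _ = stats.chi2_contingency([[107, 190 - 107], [56, 199 - 56]])
print(f"highest rating: p = {p_top:.4f}")

# Lower rating: 13/190 template vs 29/199 non-template (Fisher exact test).
_, p_low = stats.fisher_exact([[13, 190 - 13], [29, 199 - 29]])
print(f"lower rating: p = {p_low:.4f}")
```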

4 thoughts on “#ClinicSpeak & #ResearchSpeak: standardising MRI reading”

  1. Ultimately I think only software and automation can solve this, since human reading of MRI is so subjective. One of the worst cases I can think of is if "Frank" reads the scan this year and then leaves the practice, only to be replaced by "Joe" who reads the scan next year. Even with proforma standards, how can we ensure that they *interpret* what they see in the same way? In my view only automation can accomplish that.

  2. I would have thought that anything to help reporting is needed. At present it seems like only a minority in the UK get regular scans, and if there continues to be a move to monitoring scans, or scans when we relapse, then I can't imagine that neuroradiologists could cope with the additional demands (with no extra resources).

  3. Out of 6 neuros, 2 saw lesions; the other 4 didn't. How accurate is the report? I wonder if each practitioner was sure his/her interpretation was the correct one.

  4. It would be more than enough to come up with a tool, a piece of software, that reads magnetic resonance images at scale. It's very annoying to have to carry CDs, or even printed images of every exam you have ever done, each time you go to a neurologist so he can tell you whether or not you have MS or whether the disease has progressed.
