Purpose: Amyloid PET, which has been widely used for noninvasive assessment of cortical amyloid burden, is visually interpreted in the clinical setting. We analyzed whether deep learning-based end-to-end estimation of amyloid burden, offered as a fast and easy-to-use visual interpretation support system, improves inter-reader agreement as well as the confidence of visual reading.
Methods: A total of 121 clinical routine [18F]Florbetaben PET images were collected for the randomized blind-reader study. The amyloid PET images were independently interpreted by three experts blinded to all other information. At the first reading session, the readers qualitatively interpreted the images without quantification. After an interval of more than 2 weeks, the readers reinterpreted the images with the quantification results provided by the deep learning system. The qualitative assessment was based on a 3-point BAPL score (1: no amyloid load, 2: minor amyloid load, and 3: significant amyloid load). The confidence of each reading was evaluated on a 3-point score (0: ambiguous, 1: probable, and 2: definite).
Results: Inter-reader agreement for the visual reading based on the 3-point BAPL score, calculated by the Fleiss kappa coefficient, was 0.46 without and 0.76 with the deep learning system. Across the two reading sessions, the confidence score improved when the deep learning output was provided (1.27 ± 0.078 for the visual reading-only session vs. 1.66 ± 0.63 for the visual reading session with the deep learning system).
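The Fleiss kappa coefficient used above measures chance-corrected agreement among a fixed number of raters assigning categorical scores. As the abstract does not provide the per-image rating table, the following is a minimal pure-Python sketch of the computation for three readers and the 3-point BAPL scale, using a small hypothetical rating table for illustration.

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for a table of shape (subjects, categories),
    where ratings[i][j] counts how many raters assigned category j
    to subject i. All subjects must have the same number of raters."""
    N = len(ratings)            # number of subjects (PET images)
    n = sum(ratings[0])         # raters per subject (e.g., 3 readers)
    k = len(ratings[0])         # number of categories (e.g., BAPL 1-3)
    # overall proportion of assignments falling in each category
    p = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    # per-subject observed agreement among rater pairs
    P = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings]
    P_bar = sum(P) / N          # mean observed agreement
    P_e = sum(pj * pj for pj in p)  # expected agreement by chance
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical table: 4 images rated by 3 readers on BAPL 1-3.
# Row [3, 0, 0] means all three readers scored BAPL 1 for that image.
table = [[3, 0, 0], [0, 3, 0], [2, 1, 0], [0, 0, 3]]
print(round(fleiss_kappa(table), 3))
```

With complete agreement on every image the function returns 1.0; values near 0 indicate agreement no better than chance, so a rise from 0.46 to 0.76 reflects substantially more consistent reading.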
Conclusion: Our results highlight the impact of a deep learning-based one-step amyloid burden estimation system on inter-reader agreement and reading confidence when applied to clinical routine amyloid PET reading.
Keywords: Alzheimer’s disease; Amyloid PET; Deep learning; PET; Visual quantification; [18F]Florbetaben.
Kim JY, Oh D, Sung K, Choi H*, Paeng JC, Cheon GJ, Kang KW, Lee DY, Lee DS.(2020)