Oral Abstracts Session
Philine Reisdorf, MSc
PhD Student
Charité – Universitätsmedizin Berlin
Berlin, Berlin, Germany
Thomas C. R. Hadler, PhD
Postgraduate
Charité – Universitätsmedizin Berlin
Berlin, Berlin, Germany
Helen Schmeiser
Clinical Researcher
Working Group on CMR, Experimental and Clinical Research Center, a cooperation between Charité – Universitätsmedizin Berlin and the Max Delbrück Center for Molecular Medicine in the Helmholtz Association, Berlin, Germany
Philipp Theis
Clinical Researcher
Working Group on CMR, Experimental and Clinical Research Center, a cooperation between Charité – Universitätsmedizin Berlin and the Max Delbrück Center for Molecular Medicine in the Helmholtz Association, Berlin, Germany
Jan Gröschel, MD
MD
Charité
Berlin, Berlin, Germany
Anja Hennemuth, PhD
Prof.
Deutsches Herzzentrum der Charité, Germany
Steffen Lange
Professor of Theoretical Computer Science
Hochschule Darmstadt, Germany
Jeanette Schulz-Menger, MD
Head Working Group Cardiac MRI
Charité/University Medicine Berlin and Helios
Berlin, Berlin, Germany
Figure 2: Label distribution by number of images per class and reader.
Figure 3: Resulting mean recall in percent for the nine CNNs. Scenario i) shows the binary classifier for ‘no artifact’ vs ‘artifact’, scenario ii) the classifier differentiating ‘no artifact’ from ‘infolding’, and scenario iii) the multi-label approach for ‘no artifact’, ‘infolding’ and ‘motion’. The letter appended to each CNN indicates which reader’s labels it was trained and validated on; the columns indicate which labels were used for testing. The two lower rows in the three subfigures show how reader A performed against ground-truth labels from reader B and vice versa for all scenarios. Note that the inter-reader mean recall of reader A and reader B for scenarios i) and iii) is similar to the performance of CNNA and CNNB respectively, i.e., CNNA is compared to reader A and CNNB to reader B. However, in scenario ii), CNNA outperforms reader A and CNNB outperforms reader B when classifying infoldings versus artifact-free images, even though both CNNs were trained on labels from the respective human reader.
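For illustration, a minimal sketch of how a per-class and mean (macro-averaged) recall of this kind can be computed is given below; the label arrays, the class encoding for scenario iii), and the use of scikit-learn are assumptions made for the example and are not taken from the abstract.

# Illustrative sketch (not from the abstract): per-class and mean recall.
import numpy as np
from sklearn.metrics import recall_score

# Hypothetical class encoding for scenario iii): 0 = no artifact, 1 = infolding, 2 = motion.
y_true = np.array([0, 0, 1, 2, 1, 0, 2, 1])  # e.g. reference labels from one reader (illustrative values)
y_pred = np.array([0, 1, 1, 2, 1, 0, 0, 1])  # e.g. predictions of a CNN on the same images

per_class_recall = recall_score(y_true, y_pred, average=None)      # recall for each class
mean_recall = recall_score(y_true, y_pred, average="macro")        # unweighted mean over classes

print("per-class recall [%]:", np.round(100 * per_class_recall, 1))
print("mean recall [%]:", round(100 * mean_recall, 1))

The macro average weights each class equally regardless of how many images it contains, which is why a mean-recall comparison across readers and CNNs remains meaningful even when the label distribution (Figure 2) is imbalanced.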