AnaXNet: Anatomy Aware Multi-label Finding Classification in Chest X-Ray
Authors: Agu, Nkechinyere; Wu, Joy T.; Chao, Hanqing; Lourentzou, Ismini; Sharma, Arjun; Moradi, Mehdi; Yan, Pingkun; Hendler, James A.
Full Citation: Nkechinyere N. Agu, Joy T. Wu, Hanqing Chao, Ismini Lourentzou, Arjun Sharma, Mehdi Moradi, Pingkun Yan, and James Hendler. 2021. AnaXNet: Anatomy Aware Multi-label Finding Classification in Chest X-Ray. In Medical Image Computing and Computer Assisted Intervention – MICCAI 2021: 24th International Conference, Strasbourg, France, September 27 – October 1, 2021, Proceedings, Part V. Springer-Verlag, Berlin, Heidelberg, 804–813. https://doi.org/10.1007/978-3-030-87240-3_77
Abstract: Radiologists usually examine the anatomical regions of a chest X-ray image as well as the overall image before making a decision. However, most existing deep learning models classify from the entire X-ray image alone, failing to utilize important anatomical information. In this paper, we propose a novel multi-label chest X-ray classification model that accurately classifies the image findings and also localizes them to their correct anatomical regions. Specifically, our model consists of two modules: a detection module and an anatomical dependency module. The latter uses graph convolutional networks, which enable our model to learn not only the label dependencies but also the relationships between the anatomical regions in the chest X-ray. We further introduce a method to efficiently create an adjacency matrix for the anatomical regions using the correlation of the labels across the different regions. Detailed experiments and analysis show that our method outperforms current state-of-the-art multi-label chest X-ray image classification methods while also providing accurate location information.
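The abstract's idea of building a region adjacency matrix from label correlations can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the tensor shapes, threshold value, and co-occurrence rule are all assumptions for demonstration.

```python
import numpy as np

# Hypothetical binary label tensor: labels[i, r, f] = 1 when image i has
# finding f in anatomical region r. Sizes are illustrative placeholders.
num_images, num_regions, num_findings = 100, 18, 9
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=(num_images, num_regions, num_findings))

# A region is "active" for an image if it contains at least one finding.
region_active = labels.any(axis=2).astype(float)   # (num_images, num_regions)

# Pairwise co-occurrence counts: how often regions i and j are active together.
cooccurrence = region_active.T @ region_active     # (num_regions, num_regions)

# Conditional-probability-style adjacency, in the spirit of label-correlation
# graphs: P(region j active | region i active), binarized with a threshold
# (0.4 here is an arbitrary choice) and with self-loops kept for the GCN.
occurrence = region_active.sum(axis=0)             # activations per region
adjacency = cooccurrence / np.maximum(occurrence[:, None], 1.0)
adjacency = (adjacency >= 0.4).astype(float)
np.fill_diagonal(adjacency, 1.0)
```

The resulting matrix could then serve as the graph structure over which a graph convolutional network propagates per-region features, letting each region's representation incorporate evidence from correlated regions.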
Publisher: Springer-Verlag, Berlin, Heidelberg