Background: Chest X-ray is the most common and economical radiological examination for screening thoracic diseases. According to domain knowledge of chest radiograph screening, pathological information is usually concentrated in the lung and heart regions. However, region-level annotations are expensive to obtain in practice, so model training relies mainly on weakly supervised image-level classification labels, which poses a major challenge for computer-aided chest X-ray screening. To address this problem, several methods have recently been proposed to identify local regions that contain pathological information, which is crucial for thoracic disease classification. Motivated by this, we propose a new deep learning framework to exploit the discriminative information in the lung and heart regions.
Results: We design a feature extractor with a multi-scale attention module to learn a global attention map from the global image. To effectively exploit disease-specific cues, we locate the lung and heart regions containing pathological information with a well-trained pixel-wise segmentation model and generate a binary mask. By applying an element-wise logical AND operator to the learned global attention map and the binary mask, we obtain a local attention map in which pixels inside the lung and heart regions are 1 and all other pixels are 0. By zeroing the features of non-lung-and-heart regions in the attention map, the disease-specific cues of the lung and heart regions can be exploited effectively. In contrast to existing methods that fuse global and local features, we adopt feature weighting to avoid weakening the visual cues unique to the lung and heart regions, and the pixel-wise segmentation allows our method to overcome the localization bias of local regions. Comprehensive experiments on the official split of the publicly available ChestX-ray14 dataset show that our method outperforms state-of-the-art methods.
Conclusion: We propose a new deep framework for multi-label classification of thoracic diseases in chest X-rays. The proposed network is designed to effectively exploit the pathological regions that contain the main cues for chest X-ray screening, and it has been used to assist radiologists in clinical screening. Chest X-rays account for a considerable proportion of radiological examinations, so exploring further ways to improve performance is worthwhile.
Keywords: chest X-ray; thoracic disease classification; pixel-wise segmentation; lung and heart regions; multi-scale attention
1. Previous work on disease classification from chest X-ray images with convolutional neural networks usually trains models on the global image. Because disease regions are small and many convolutional layers are stacked, some fine-grained disease features are lost. It is therefore crucial to enhance the visual features of pathological regions and suppress the interference of normal regions during model training.
2. To avoid the localization bias of pathological regions, [16, 21] propose deep fusion networks that fuse global features to compensate for the lost discriminative information of local features. However, the fusion must be tuned carefully to prevent the local features from being smoothed out within the global features: the local features encode pathological information, but their discriminative role is weakened during fusion.
This is not a new problem; the approach is a fusion-based multi-branch network. The main novelty is roughly that the local branch is obtained with a segmentation method and its weight relative to the global branch is then increased.
1. To effectively learn the discriminative information of lesion regions and avoid the influence of normal regions, a new deep learning framework for thoracic disease classification in chest radiographs is proposed; it mines the discriminative information of local regions and strengthens their discriminative role in thoracic disease classification.
2. A multi-scale attention module learns discriminative information from the chest radiograph and generates a global attention map, and a feature weighting strategy is applied to the lung and heart regions containing pathological information to effectively exploit their disease-specific cues.
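A minimal PyTorch sketch of what such a multi-scale attention module could look like. The two extra scales use convolutions with kernel sizes 5 and 9, as stated in the implementation notes further below; how the three scale features are combined is an assumption here, and the class and variable names are illustrative rather than taken from the authors' code.

```python
import torch
import torch.nn as nn

class MultiScaleAttention(nn.Module):
    """Sketch of a multi-scale attention module: the input is kept as one
    scale and two further features are produced by convolutions with kernel
    sizes 5 and 9; combining them by summation + sigmoid is an assumption."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv5 = nn.Conv2d(channels, channels, kernel_size=5, padding=2)
        self.conv9 = nn.Conv2d(channels, channels, kernel_size=9, padding=4)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scales = x + self.conv5(x) + self.conv9(x)  # three scale features
        attention = torch.sigmoid(scales)           # global attention map
        return x * attention                        # attended features


# Illustrative usage on a DenseNet-121-sized feature map.
features = torch.rand(2, 1024, 7, 7)
attended = MultiScaleAttention(1024)(features)
print(attended.shape)  # torch.Size([2, 1024, 7, 7])
```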
Disease classification
[2] Kumar, P., Grewal, M., Srivastava, M.M.: Boosted cascaded convnets for multilabel classification of thoracic diseases in chest radiographs. In: International Conference Image Analysis and Recognition, pp. 546-552 (2018). Springer
[3] Guan, Q., Huang, Y.: Multi-label chest x-ray image classification via category-wise residual attention learning. Pattern Recognition Letters 130, 259-266 (2020)
Anomaly detection
[4] Mao, Y., Xue, F.-F., Wang, R., Zhang, J., Zheng, W.-S., Liu, H.: Abnormality detection in chest x-ray images using uncertainty prediction autoencoders. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 529-538 (2020). Springer
[5] Bozorgtabar, B., Mahapatra, D., Vray, G., Thiran, J.-P.: Salad: Self-supervised aggregation learning for anomaly detection on x-rays. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 468-478 (2020). Springer
Segmentation
[6] Xue, C., Deng, Q., Li, X., Dou, Q., Heng, P.-A.: Cascaded robust learning at imperfect labels for chest x-ray segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 579-588 (2020). Springer
[7] Abdulah, H., Huber, B., Lal, S., Abdallah, H., Soltanian-Zadeh, H., Gatti, D.L.: Lung segmentation in chest x-rays with res-cr-net. arXiv preprint arXiv:2011.08655 (2020)
Disease prediction
[8] Khan, A.I., Shah, J.L., Bhat, M.M.: Coronet: A deep neural network for detection and diagnosis of covid-19 from chest x-ray images. Computer Methods and Programs in Biomedicine, 105581 (2020)
[9] Tam, L.K., Wang, X., Turkbey, E., Lu, K., Wen, Y., Xu, D.: Weakly supervised one-stage vision and language disease detection using large scale pneumonia and pneumothorax studies. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 45-55 (2020). Springer
Several large-scale chest X-ray datasets have been released.
[13] Johnson, A.E., Pollard, T.J., Greenbaum, N.R., Lungren, M.P., Deng, C.-y., Peng, Y., Lu, Z., Mark, R.G., Berkowitz, S.J., Horng, S.: Mimic-cxr-jpg, a large publicly available database of labeled chest radiographs. arXiv preprint arXiv:1901.07042 (2019)
[14] Irvin, J., Rajpurkar, P., Ko, M., Yu, Y., Ciurea-Ilcus, S., Chute, C., Marklund, H., Haghgoo, B., Ball, R., Shpanskaya, K., et al.: Chexpert: A large chest radiograph dataset with uncertainty labels and expert comparison. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 590-597 (2019)
[15] Bustos, A., Pertusa, A., Salinas, J.-M., de la Iglesia-Vayá, M.: Padchest: A large chest x-ray image dataset with multi-label annotated reports. Medical image analysis 66, 101797 (2020)
The PadChest dataset is about 1 TB and has not been downloaded yet. Download link: https://b2drop.bsc.es/index.php/s/BIMCV-PadChest-FULL (no VPN required).
Pathological region localization methods for chest X-ray disease classification
Region proposals and saliency maps
[18] Yao, L., Prosky, J., Poblenz, E., Covington, B., Lyman, K.: Weakly supervised medical diagnosis and localization from multiple resolutions. arXiv preprint arXiv:1803.07703 (2018)
[19] Tang, Y., Wang, X., Harrison, A.P., Lu, L., Xiao, J., Summers, R.M.: Attention-guided curriculum learning for weakly supervised classification and localization of thoracic diseases on chest radiographs. In: International Workshop on Machine Learning in Medical Imaging, pp. 249-258 (2018). Springer
Applications of attention mechanisms in medical image analysis
[24] Oktay, O., Schlemper, J., Folgoc, L.L., Lee, M., Heinrich, M., Misawa, K., Mori, K., McDonagh, S., Hammerla, N.Y., Kainz, B., et al.: Attention u-net: Learning where to look for the pancreas. arXiv preprint arXiv:1804.03999 (2018)
[25] Nie, D., Gao, Y., Wang, L., Shen, D.: Asdnet: Attention based semi-supervised deep networks for medical image segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 370-378 (2018). Springer
[26] Li, L., Xu, M., Wang, X., Jiang, L., Liu, H.: Attention based glaucoma detection: A large-scale database and cnn model. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 10571-10580 (2019)
Thoracic disease classification using attention mechanisms
[28] Wang, H., Jia, H., Lu, L., Xia, Y.: Thorax-net: An attention regularized deep neural network for classification of thoracic diseases on chest radiography. IEEE journal of biomedical and health informatics 24(2), 475-485 (2019)
[29] Ma, C., Wang, H., Hoi, S.C.: Multi-label thoracic disease image classification with cross-attention networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 730-738 (2019). Springer
A contrast-induced attention network approach
[30] Liu, J., Zhao, G., Fei, Y., Zhang, M., Wang, Y., Yu, Y.: Align, attend and locate: Chest x-ray diagnosis via contrast induced attention network with limited supervision. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 10632-10641 (2019)
An attention-guided mask inference process that localizes salient regions and learns discriminative features for classification
[16] Guan, Q., Huang, Y., Zhong, Z., Zheng, Z., Zheng, L., Yang, Y.: Thorax disease classification with attention guided convolutional neural network. Pattern Recognition Letters 131, 38-45 (2020)
The spatial attention module of CBAM
[31] Woo, S., Park, J., Lee, J.-Y., So Kweon, I.: Cbam: Convolutional block attention module. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 3-19 (2018)
Due to the relative scarcity of region-level annotations, local region localization and learning have received increasing attention in chest X-ray image analysis.
[32] Viniavskyi, O., Dobko, M., Dobosevych, O.: Weakly-supervised segmentation for disease localization in chest x-ray images. In: International Conference on Artificial Intelligence in Medicine, pp. 249-259 (2020). Springer
[33] Wolleb, J., Sandkühler, R., Cattin, P.C.: Descargan: Disease-specific anomaly detection with weak supervision. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 14-24 (2020). Springer
Previous work on thoracic disease classification uses only image-level classification labels and learns discriminative information from the global image through supervised training, but this approach is easily affected by normal regions.
[11] Rajpurkar, P., Irvin, J., Zhu, K., Yang, B., Mehta, H., Duan, T., Ding, D., Bagul, A., Langlotz, C., Shpanskaya, K., et al.: Chexnet: Radiologist-level pneumonia detection on chest x-rays with deep learning. arXiv preprint arXiv:1711.05225 (2017)
[10] Yao, L., Poblenz, E., Dagunts, D., Covington, B., Bernard, D., Lyman, K.: Learning to diagnose from scratch by exploiting dependencies among labels. arXiv preprint arXiv:1710.10501 (2017)
[2] Kumar, P., Grewal, M., Srivastava, M.M.: Boosted cascaded convnets for multilabel classification of thoracic diseases in chest radiographs. In: International Conference Image Analysis and Recognition, pp. 546-552 (2018). Springer
A squeeze-and-excitation block that enhances sensitivity to the subtle differences between normal and pathological regions
[34] Hu, J., Shen, L., Sun, G.: Squeeze-and-excitation networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7132-7141 (2018)
Many other local localization methods rely on saliency maps.
[19] Tang, Y., Wang, X., Harrison, A.P., Lu, L., Xiao, J., Summers, R.M.: Attention-guided curriculum learning for weakly supervised classification and localization of thoracic diseases on chest radiographs. In: International Workshop on Machine Learning in Medical Imaging, pp. 249-258 (2018). Springer
[18] Yao, L., Prosky, J., Poblenz, E., Covington, B., Lyman, K.: Weakly supervised medical diagnosis and localization from multiple resolutions. arXiv preprint arXiv:1803.07703 (2018)
[17] Hermoza, R., Maicas, G., Nascimento, J.C., Carneiro, G.: Region proposals for saliency map refinement for weakly-supervised disease localisation and classification. arXiv preprint arXiv:2005.10550 (2020)
The Gumbel-Softmax function is used to combine region proposals with the saliency map detector.
[35] Jang, E., Gu, S., Poole, B.: Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144 (2016)
To avoid the loss of discriminative information caused by the localization bias of pathological regions, some methods fuse global image training with local region learning; deep fusion networks that combine global and local features are becoming increasingly popular in computer vision tasks.
[36] Ding, M., Antani, S., Jaeger, S., Xue, Z., Candemir, S., Kohli, M., Thoma, G.: Local-global classifier fusion for screening chest radiographs. In: Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, vol. 10138, p. 101380 (2017). International Society for Optics and Photonics
[37] Cao, B., Araujo, A., Sim, J.: Unifying deep local and global features for image search. In: European Conference on Computer Vision, pp. 726-743 (2020). Springer
Models compared against
[20] Wang, X., Peng, Y., Lu, L., Lu, Z., Bagheri, M., Summers, R.M.: Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2097-2106 (2017)
This work first released the chest X-ray benchmark dataset and proposed a deep convolutional neural network for thoracic disease classification; a pretrained ResNet-50 serves as the backbone for this baseline.
[11] Rajpurkar, P., Irvin, J., Zhu, K., Yang, B., Mehta, H., Duan, T., Ding, D., Bagul, A., Langlotz, C., Shpanskaya, K., et al.: Chexnet: Radiologist-level pneumonia detection on chest x-rays with deep learning. arXiv preprint arXiv:1711.05225 (2017)
The model is trained with DenseNet-121; this work reports that CheXNet's performance is statistically significantly better than that of radiologists.
[12] Yan, C., Yao, J., Li, R., Xu, Z., Huang, J.: Weakly supervised deep learning for thoracic disease classification and localization on chest x-rays. In: Proceedings of the 2018 ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics, pp. 103-110 (2018)
This work builds on the CheXNet model with DenseNet as the backbone and is the first to explore disease-specific region learning; it uses a squeeze-and-excitation block to enhance sensitivity to the subtle differences between normal and pathological regions.
[21] Liu, H., Wang, L., Nan, Y., Jin, F., Wang, Q., Pu, J.: Sdfn: Segmentation-based deep fusion network for thoracic disease classification in chest x-ray images. Computerized Medical Imaging and Graphics 75, 66-73 (2019)
This paper proposes a segmentation-based deep fusion network (SDFN) to exploit the discriminative information of local regions. SDFN detects local regions with pixel-level segmentation and unifies global and local features in a deep fusion framework; it likewise uses pixel-wise segmentation to identify the lung and heart regions. However, the deep fusion approach does not seem to handle the problem of local features being overwhelmed by global features, which is why the present paper uses a feature weighting strategy to focus on local features.
[16] Guan, Q., Huang, Y., Zhong, Z., Zheng, Z., Zheng, L., Yang, Y.: Thorax disease classification with attention guided convolutional neural network. Pattern Recognition Letters 131, 38-45 (2020)
Proposes a three-branch attention-guided convolutional neural network (AGCNN) for thoracic disease classification in chest X-ray images. This work localizes salient regions in the global attention map and then crops the corresponding regions from the chest X-ray image.
[17] Hermoza, R., Maicas, G., Nascimento, J.C., Carneiro, G.: Region proposals for saliency map refinement for weakly-supervised disease localisation and classification. arXiv preprint arXiv:2005.10550 (2020)
Combines region proposals with saliency detection to design a three-stage deep learning framework (SalNet) for weakly supervised disease classification. This work obtains local regions from saliency maps based on region proposals and achieves strong performance on the official split of the ChestX-ray14 dataset.
1. To overcome the localization bias of methods that rely on saliency maps and region proposals (AGCNN, SalNet), our method adopts the same pixel-wise segmentation approach as SDFN to identify the lung and heart regions containing pathological information.
2. To address the problem that local features are always overwhelmed by global features, a feature weighting strategy is used to strengthen the local features (see the sketch after this list).
3. A multi-scale attention module is designed to detect subtle differences and help mine discriminative cues, improving classification performance.
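Below is a minimal PyTorch-style sketch of the masking and feature weighting described in points 1 and 2 above: the binary lung/heart mask from the segmentation model is combined with the global attention map by an element-wise AND, and the result re-weights the features instead of being fused with them. The shapes, names, and the exact weighting step are assumptions, not the authors' implementation.

```python
import torch

# Hypothetical shapes: a batch of global attention maps from the feature
# extractor and binary lung/heart masks from the pretrained segmentation model.
B, C, H, W = 2, 1024, 7, 7
global_attention = torch.rand(B, C, H, W)                  # learned global attention map
lung_heart_mask = (torch.rand(B, 1, H, W) > 0.5).float()   # 1 inside lung/heart, 0 elsewhere

# Element-wise logical AND between the attention map and the binary mask:
# pixels outside the lung and heart regions are zeroed out.
local_attention = global_attention * lung_heart_mask

# Feature weighting rather than feature fusion: the local attention map
# re-weights the features so lung/heart cues are not smoothed away
# (a simple multiplication is assumed here).
weighted_features = global_attention * local_attention
```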
We implement CXR-IRNet with the PyTorch framework and use a pretrained DenseNet-121 as the backbone of the feature extractor. The last convolutional feature map of DenseNet is extracted as the global attention map, and after a sigmoid non-linearity the single output is used to predict classification probabilities. For the multi-scale attention module, in addition to taking the original image as one feature, two convolutions with kernel sizes 5 and 9 generate two further features, and these three scale features are used in the subsequent operations. Chest X-ray images are resized to 256×256 and then center-cropped to 224×224 for training, and each image is normalized with the same mean and standard deviation. We use the Adam optimizer with a learning rate of 0.001 and a weight decay of 0.0001, and the network is trained from scratch for 50 epochs with a batch size of 512. For comparison, we directly use the published results of SDFN and SalNet without re-implementing them; the other methods are implemented under the same experimental settings for a fair comparison. We use AUROC and ROC curves as evaluation metrics, both widely used for evaluating multi-label classification; the ROC curve is built from two criteria, sensitivity (true positive rate) and specificity (true negative rate). For detection visualization, we evaluate with the intersection over union (IoU) of bounding boxes.
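For reference, a hedged sketch of the preprocessing and optimization settings listed above. The normalization mean/standard deviation are not specified in these notes, so the common ImageNet statistics are assumed, and the binary cross-entropy loss is an assumption for the multi-label setting.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Preprocessing as described above: resize to 256x256, center-crop to 224x224,
# then normalize (ImageNet mean/std assumed, since exact values are not given).
preprocess = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Backbone and optimizer as described: pretrained DenseNet-121 feature
# extractor, 14 outputs for the ChestX-ray14 labels, Adam with lr 0.001 and
# weight decay 0.0001. The loss choice is an assumption.
backbone = models.densenet121(pretrained=True)
backbone.classifier = nn.Linear(backbone.classifier.in_features, 14)
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-3, weight_decay=1e-4)
```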
The ChestX-ray14 dataset.
Open-source code: https://github.com/fjssharpsword/CXRDC (few stars; open issues go unanswered).
Comparing AUROC with previous work — DCNN [20], CheXNet [11], SENet [12], SDFN [21], AGCNN [16], and SalNet [17] — the method proposed in this paper achieves state-of-the-art performance.
In this work, we propose a new deep framework for multi-label classification of thoracic diseases in chest X-ray images. The network aims to effectively exploit the pathological regions that contain the main cues for chest X-ray screening. A feature extractor with a multi-scale attention module is proposed to effectively learn pathological information from chest X-ray images. Meanwhile, a pixel-wise segmentation method is used to identify the lung and heart regions containing pathological information, overcoming the localization bias problem, and a feature weighting strategy is then adopted to filter out the non-lung-and-heart regions. With this deep framework, the classification probability layer relies mainly on information from the lung and heart regions. Evaluated on the ChestX-ray14 dataset, we establish a new state-of-the-art baseline. Our proposed network has been applied in clinical screening to assist radiologists.
In the future, we will try to improve our model by applying region detection to learn the visual cues unique to pathological regions.
Original post: https://blog.csdn.net/weixin_46144186/article/details/122155887