Results 1 - 2 of 2
1.
Comput Med Imaging Graph ; 113: 102354, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38341946

ABSTRACT

Lung granuloma is a very common lung disease, and its specific diagnosis is important for determining the exact cause of the disease as well as the patient's prognosis. An effective lung granuloma detection model based on computer-aided diagnosis (CAD) can help pathologists localize granulomas, improving the efficiency of specific diagnosis. However, for CAD-based lung granuloma detection models, the significant size differences among granulomas and the question of how to better exploit their morphological features are both critical challenges. In this paper, we propose CRDet, an automatic method for localizing granulomas in histopathological images that addresses these challenges. We first introduce a multi-scale feature extraction network with self-attention to extract features at different scales simultaneously. The features are then converted into circle representations of granulomas by circle-representation detection heads, aligning the features with the ground truth; this also lets us exploit the circular morphology of granulomas more effectively. Finally, we propose a center point calibration method at the inference stage to further refine the circle representations. For model evaluation, we built a lung granuloma circle-representation dataset named LGCR, comprising 288 images from 50 subjects. Our method achieved 0.316 mAP and 0.571 mAR, outperforming state-of-the-art object detection methods on LGCR.
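A circle representation, as described above, replaces a bounding box with a (center, radius) triple. A standard way to score a predicted circle against a ground-truth circle in such detectors is circle IoU, the area overlap of the two disks. The sketch below is our own illustration of that measure, not the authors' code; the function name `circle_iou` is an assumption.

```python
import math

def circle_iou(c1, c2):
    """IoU of two circles, each given as (cx, cy, r)."""
    (x1, y1, r1), (x2, y2, r2) = c1, c2
    d = math.hypot(x2 - x1, y2 - y1)          # distance between centers
    a1, a2 = math.pi * r1 ** 2, math.pi * r2 ** 2
    if d >= r1 + r2:                          # disjoint circles
        inter = 0.0
    elif d <= abs(r1 - r2):                   # one circle inside the other
        inter = min(a1, a2)
    else:                                     # partial overlap: lens area
        inter = (r1 ** 2 * math.acos((d ** 2 + r1 ** 2 - r2 ** 2) / (2 * d * r1))
                 + r2 ** 2 * math.acos((d ** 2 + r2 ** 2 - r1 ** 2) / (2 * d * r2))
                 - 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2)
                                   * (d - r1 + r2) * (d + r1 + r2)))
    return inter / (a1 + a2 - inter)
```

For example, a circle of radius 1 concentric with a circle of radius 2 overlaps in the smaller disk, so the IoU is 1/4.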


Subject(s)
Granuloma , Lung , Humans , Calibration , Granuloma/diagnostic imaging , Granuloma/pathology , Lung/diagnostic imaging , Lung/pathology
2.
BMC Bioinformatics ; 24(1): 315, 2023 Aug 19.
Article in English | MEDLINE | ID: mdl-37598159

ABSTRACT

BACKGROUND: Ultrasound (US) and infrared thermography (IRT) are two non-invasive, radiation-free, and inexpensive imaging technologies widely employed in medical applications. An ultrasound image primarily expresses morphological information about a lesion, such as its size, shape, contour boundary, and echo, while an infrared thermal image primarily describes its thermodynamic function information. Although distinguishing between benign and malignant thyroid nodules requires both morphological and functional information, present deep learning models are based only on US images, so malignant nodules with insignificant morphological changes but significant functional changes may go undetected. RESULTS: Given that US and IRT images present thyroid nodules through distinct modalities, we propose an Adaptive multi-modal Hybrid (AmmH) classification model that leverages the combination of these two image types to achieve superior classification performance. The AmmH approach constructs a hybrid single-modal encoder module for each modality, which extracts both local and global features by integrating a CNN module and a Transformer module. The extracted features from the two modalities are then weighted adaptively by an adaptive modality-weight generation network and fused by an adaptive cross-modal encoder module. The fused features are finally classified by an MLP. On the collected dataset, our AmmH model achieved F1 and F2 scores of 97.17% and 97.38%, respectively, significantly outperforming the single-modal models. The results of four ablation experiments further demonstrate the superiority of the proposed method.
CONCLUSIONS: The proposed multi-modal model extracts features from images of different modalities, thereby enhancing the comprehensiveness of thyroid nodule descriptions. The adaptive modality-weight generation network enables adaptive attention to different modalities, and the adaptive cross-modal encoder fuses the features using those adaptive weights. Consequently, the model has demonstrated promising classification performance, indicating its potential as a non-invasive, radiation-free, and cost-effective screening tool for distinguishing between benign and malignant thyroid nodules. The source code is available at https://github.com/wuliZN2020/AmmH .
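The adaptive weighting step described above can be sketched in plain NumPy: a small layer maps the concatenated modality features to two scalar scores, a softmax turns them into weights that sum to 1, and the fused feature is the weighted sum. This is our own minimal illustration, not the AmmH code; the weight matrix `W`, the feature dimension, and the function names are assumptions, and the real model uses learned networks plus a cross-modal encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def adaptive_fuse(f_us, f_irt, W):
    """Fuse two modality feature vectors with adaptively generated weights."""
    scores = W @ np.concatenate([f_us, f_irt])   # (2,) modality scores
    w_us, w_irt = softmax(scores)                # adaptive weights, sum to 1
    return w_us * f_us + w_irt * f_irt, (w_us, w_irt)

# toy 8-dim features from the US and IRT encoders
f_us, f_irt = rng.standard_normal(8), rng.standard_normal(8)
W = rng.standard_normal((2, 16))                 # stand-in weight-generation layer
fused, (w_us, w_irt) = adaptive_fuse(f_us, f_irt, W)
```

Because the weights come from the features themselves, a nodule whose thermal signature is more informative than its sonographic appearance would pull weight toward the IRT branch at inference time.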


Subject(s)
Thyroid Nodule , Humans , Thyroid Nodule/diagnostic imaging , Ultrasonography , Electric Power Supplies , Software , Thermodynamics