Results 1 - 20 of 1,732
1.
Med Image Anal ; 99: 103307, 2024 Sep 05.
Article in English | MEDLINE | ID: mdl-39303447

ABSTRACT

Automatic analysis of colonoscopy images has been an active field of research, motivated by the importance of early detection of precancerous polyps. However, detecting polyps during a live examination can be challenging due to factors such as variation in skill and experience among endoscopists, lack of attentiveness, and fatigue, leading to a high polyp miss rate. Therefore, there is a need for an automated system that can flag missed polyps during the examination and improve patient care. Deep learning has emerged as a promising solution to this challenge, as it can assist endoscopists in detecting and classifying overlooked polyps and abnormalities in real time, improving the accuracy of diagnosis and enhancing treatment. In addition to an algorithm's accuracy, transparency and interpretability are crucial for explaining the whys and hows of its predictions. Further, conclusions based on incorrect decisions may be fatal, especially in medicine. Despite these pitfalls, most algorithms are developed on private data, with closed-source or proprietary software, and lack reproducibility. Therefore, to promote the development of efficient and transparent methods, we organized the "Medico automatic polyp segmentation (Medico 2020)" and "MedAI: Transparency in Medical Image Segmentation (MedAI 2021)" competitions. The Medico 2020 challenge received submissions from 17 teams, and the MedAI 2021 challenge gathered submissions from another 17 distinct teams the following year. We present a comprehensive summary, analyze each contribution, highlight the strengths of the best-performing methods, and discuss the possibility of translating such methods into the clinic. Our analysis revealed that participants improved the Dice coefficient from 0.8607 in 2020 to 0.8993 in 2021, despite the addition of diverse and challenging frames (containing irregular, smaller, sessile, or flat polyps) that are frequently missed during routine clinical examination. For the instrument segmentation task, the best team obtained a mean Intersection over Union (IoU) of 0.9364. For the transparency task, a multidisciplinary team, including expert gastroenterologists, assessed each submission and evaluated the teams on open-source practices, failure case analysis, ablation studies, and the usability and understandability of their evaluations, to gain a deeper understanding of the models' credibility for clinical deployment. The best team obtained a final transparency score of 21 out of 25. Through this comprehensive analysis of the challenges, we not only highlight advancements in polyp and surgical instrument segmentation but also encourage subjective evaluation for building more transparent and understandable AI-based colonoscopy systems. Moreover, we discuss the need for multi-center and out-of-distribution testing to address the current limitations of these methods, reduce the cancer burden, and improve patient care.
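As context for the headline numbers (Dice 0.8607 to 0.8993, mean IoU 0.9364), here is a minimal sketch of how these overlap metrics are typically computed for binary segmentation masks. This is illustrative NumPy code, not the challenge's evaluation toolkit; the function names are assumptions.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2*|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Intersection over Union = |A ∩ B| / |A ∪ B| for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)
```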

2.
Health Informatics J ; 30(3): 14604582241288460, 2024.
Article in English | MEDLINE | ID: mdl-39305515

ABSTRACT

Importance: Medical imaging increases the workload involved in writing reports, and given the lack of a standardized format, reports are not easily used as communication tools. Objective: During medical team-patient communication, the descriptions in reports also need to be understood; automatically generated imaging reports with rich and understandable information can improve medical quality. Design, setting, and participants: In this study, the image analysis theory of Panofsky and Shatford was applied from the perspective of image metadata to establish a medical image interpretation template (MIIT) for automated image report generation. Main outcomes and measures: The image information included Digital Imaging and Communications in Medicine (DICOM) metadata, reporting and data systems (RADSs), and image features used in computer-aided diagnosis (CAD). The utility of the template was evaluated with a questionnaire survey to determine whether the image content could be better understood. Results: Across 100 responses, exploratory factor analysis revealed factor loadings greater than 0.5 for all facets, indicating construct validity, and the overall Cronbach's alpha was 0.916, indicating reliability. No significant differences were noted by sex, age, or education. Conclusions and relevance: Overall, the results show that MIIT is helpful for understanding the content of medical images.
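For reference, Cronbach's alpha (reported here as 0.916) is computed from the per-item variances and the variance of the total score. A small illustrative sketch follows; it is not the authors' analysis code, and the function name is an assumption.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, k_items) matrix of questionnaire scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```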


Subject(s)
Metadata , Humans , Female , Decision Making, Shared , Middle Aged , Adult , Surveys and Questionnaires , Reproducibility of Results , Breast/diagnostic imaging
3.
BMC Med Imaging ; 24(1): 253, 2024 Sep 20.
Article in English | MEDLINE | ID: mdl-39304839

ABSTRACT

BACKGROUND: Breast cancer is one of the leading causes of cancer death among women worldwide. According to estimates by the National Breast Cancer Foundation, over 42,000 women are expected to die from this disease in 2024. OBJECTIVE: The prognosis of breast cancer depends on the early detection of breast micronodules and the ability to distinguish benign from malignant lesions. Ultrasonography is a crucial radiological imaging technique for diagnosing the disease because it allows for biopsy guidance and lesion characterization. Because ultrasonographic diagnosis relies on the practitioner's expertise, the user's level of experience and knowledge is vital. Furthermore, computer-aided technologies can contribute significantly by reducing the workload of radiologists and complementing their expertise, especially in hospital settings with large patient volumes. METHOD: This work describes the development of a hybrid CNN system for diagnosing benign and malignant breast cancer lesions. The InceptionV3 and MobileNetV2 models serve as the foundation of the hybrid framework; features extracted from each model are concatenated, resulting in a larger feature set, and various classifiers are then applied for the classification task. RESULTS: The model achieved its best results with the softmax classifier, with an accuracy of over 95%. CONCLUSION: Computer-aided diagnosis greatly assists radiologists and reduces their workload; this research can therefore serve as a foundation for other researchers building clinical solutions.
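A minimal Keras sketch of the described pattern, two pretrained backbones whose pooled features are concatenated and fed to a softmax head. This is a generic sketch under stated assumptions, not the authors' exact configuration: the function name is illustrative, backbone-specific preprocessing is omitted, and the input size is assumed.

```python
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import InceptionV3, MobileNetV2

def build_hybrid(input_shape=(224, 224, 3), n_classes=2):
    # Two ImageNet-pretrained backbones used as frozen feature extractors.
    inp = layers.Input(shape=input_shape)
    incep = InceptionV3(include_top=False, weights="imagenet", input_shape=input_shape)
    mobile = MobileNetV2(include_top=False, weights="imagenet", input_shape=input_shape)
    incep.trainable = mobile.trainable = False
    fa = layers.GlobalAveragePooling2D()(incep(inp))   # 2048-d vector
    fb = layers.GlobalAveragePooling2D()(mobile(inp))  # 1280-d vector
    merged = layers.Concatenate()([fa, fb])            # enlarged, fused feature set
    out = layers.Dense(n_classes, activation="softmax")(merged)  # softmax classifier
    return Model(inp, out)
```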


Subject(s)
Breast Neoplasms , Ultrasonography, Mammary , Humans , Female , Breast Neoplasms/diagnostic imaging , Ultrasonography, Mammary/methods , Neural Networks, Computer , Image Interpretation, Computer-Assisted/methods , Diagnosis, Computer-Assisted/methods
4.
Med Biol Eng Comput ; 2024 Sep 18.
Article in English | MEDLINE | ID: mdl-39292382

ABSTRACT

Atherosclerosis causes heart disease by forming plaques in arterial walls. Intravascular ultrasound (IVUS) imaging provides a high-resolution cross-sectional view of coronary arteries and plaque morphology. Healthcare professionals diagnose and quantify atherosclerosis manually or using virtual histology IVUS (VH-IVUS) software. Since manual or VH-IVUS software-based diagnosis is time-consuming, automated plaque characterization tools are essential for accurate atherosclerosis detection and classification. Recently, deep learning (DL) and computer vision (CV) approaches have emerged as promising tools for automatically classifying plaques in IVUS images. With this motivation, this manuscript proposes an automated atherosclerotic plaque classification method using a hybrid Ant Lion Optimizer with Deep Learning (AAPC-HALODL) technique on IVUS images. The AAPC-HALODL technique uses a faster region-based convolutional neural network (Faster R-CNN) segmentation approach to identify diseased regions in the IVUS images. Next, the ShuffleNet-v2 model generates a useful set of feature vectors from the segmented IVUS images, with its hyperparameters optimally selected using the HALO technique. Finally, an average ensemble classification process comprising a stacked autoencoder (SAE) and a deep extreme learning machine (DELM) is applied. The MICCAI Challenge 2011 dataset was used for the simulation analysis of AAPC-HALODL. A detailed comparative study showed that the AAPC-HALODL approach outperformed other DL models, with a maximum accuracy of 98.33%, precision of 97.87%, sensitivity of 98.33%, and F-score of 98.10%.

5.
Med Image Anal ; 99: 103320, 2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39244796

ABSTRACT

The potential and promise of deep learning systems to provide an independent assessment and relieve the radiologists' burden in screening mammography have been recognized in several studies. However, the low cancer prevalence, the need to process high-resolution images, and the need to combine information from multiple views and scales still pose technical challenges. Multi-view architectures that combine information from the four mammographic views to produce an exam-level classification score are a promising approach to the automated processing of screening mammography. However, training such architectures from exam-level labels, without relying on pixel-level supervision, requires very large datasets and may result in suboptimal accuracy. Emerging architectures such as Vision Transformers (ViT) and graph-based architectures can potentially integrate ipsilateral and contralateral breast views better than traditional convolutional neural networks, thanks to their stronger ability to model long-range dependencies. In this paper, we extensively evaluate novel transformer-based and graph-based architectures against state-of-the-art multi-view convolutional neural networks, trained in a weakly supervised setting on a mid-sized dataset, in terms of both performance and interpretability. Extensive experiments on the CSAW dataset suggest that, while transformer-based architectures outperform the others, different inductive biases lead to complementary strengths and weaknesses, as each architecture is sensitive to different signs and mammographic features. Hence, an ensemble of different architectures should be preferred over a winner-takes-all approach to achieve more accurate and robust results. Overall, the findings highlight the potential of a wide range of multi-view architectures for breast cancer classification, even on datasets of relatively modest size, although the detection of small lesions remains challenging without pixel-wise supervision or ad hoc networks.
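The paper's takeaway, that an ensemble of heterogeneous architectures beats a winner-takes-all choice, amounts in its simplest form to (weighted) averaging of per-model exam-level probabilities. A minimal sketch of such soft voting follows; the function name and shapes are illustrative, not taken from the paper.

```python
import numpy as np

def soft_vote(prob_list, weights=None):
    """Average class probabilities from several models.

    prob_list: list of (n_exams, n_classes) arrays, one per model.
    weights:   optional per-model weights; defaults to a uniform average.
    """
    probs = np.stack(prob_list)  # (n_models, n_exams, n_classes)
    weights = np.ones(len(probs)) / len(probs) if weights is None else np.asarray(weights)
    return np.tensordot(weights, probs, axes=1)  # (n_exams, n_classes)
```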

6.
Sci Rep ; 14(1): 20647, 2024 Sep 04.
Article in English | MEDLINE | ID: mdl-39232180

ABSTRACT

Lung cancer (LC) is a life-threatening disease worldwide, but earlier diagnosis and treatment can save lives. Early detection of malignant cells in the lungs, the organs responsible for oxygenating the body and expelling carbon dioxide, is therefore critical. Although computed tomography (CT) is the best imaging approach available in the healthcare sector, it is challenging for physicians to identify and interpret tumours on CT scans. LC diagnosis on CT scans using artificial intelligence (AI) can help radiologists achieve earlier diagnoses, enhance performance, and decrease false negatives. Deep learning (DL) has also become popular for detecting lymph node involvement on histopathological slides, given its great significance for patient diagnosis and treatment. This study introduces a computer-aided diagnosis for LC utilizing the Waterwheel Plant Algorithm with DL (CADLC-WWPADL) approach. The primary aim of the CADLC-WWPADL approach is to classify and identify the existence of LC on CT scans. The CADLC-WWPADL method uses a lightweight MobileNet model for feature extraction and employs the WWPA for hyperparameter tuning. Furthermore, a symmetrical autoencoder (SAE) model is utilized for classification. An experimental evaluation is performed to demonstrate the detection performance of the CADLC-WWPADL technique. An extensive comparative study showed that the CADLC-WWPADL technique outperforms other models, with a maximum accuracy of 99.05% on a benchmark CT image dataset.


Subject(s)
Algorithms , Deep Learning , Diagnosis, Computer-Assisted , Lung Neoplasms , Tomography, X-Ray Computed , Humans , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/diagnosis , Lung Neoplasms/pathology , Tomography, X-Ray Computed/methods , Diagnosis, Computer-Assisted/methods
7.
Diagnostics (Basel) ; 14(17)2024 Aug 28.
Article in English | MEDLINE | ID: mdl-39272675

ABSTRACT

Brain cancer is a substantial contributor to cancer-related mortality and is difficult to identify in a timely manner. Diagnostic precision depends significantly on the proficiency of radiologists and neurologists. Although computer-aided diagnosis (CAD) algorithms offer potential for early detection, the majority of current research is hindered by modest sample sizes. This meta-analysis aims to comprehensively assess the diagnostic test accuracy (DTA) of CAD models specifically designed for the detection of brain cancer using hyperspectral imaging (HSI). We employed the QUADAS-2 criteria to select seven papers and classified the proposed methodologies according to artificial intelligence method, cancer type, and publication year. To evaluate heterogeneity and diagnostic performance, we used Deeks' funnel plot, the forest plot, and accuracy charts. Our results suggest that there is no notable variation among the investigations. The examined CAD techniques exhibit a notable level of precision in the automated detection of brain cancer. However, the absence of external validation hinders their implementation in real-time clinical settings, highlighting the need for additional studies to validate the CAD models for wider clinical applicability.

8.
Comput Methods Programs Biomed ; 256: 108379, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39217667

ABSTRACT

BACKGROUND AND OBJECTIVE: The incidence of facial fractures is rising globally, yet few studies address the diverse forms of facial fractures present in 3D images. In particular, because the direction of a facial fracture varies and the fracture has no clear outline, it is difficult to determine its exact location in 2D images. Thus, 3D image analysis is required to find the exact fracture area, but it entails heavy computational complexity and expensive pixel-wise labeling for supervised learning. In this study, we tackle the problem of reducing the computational burden and increasing the accuracy of fracture localization by using weakly supervised object localization without pixel-wise labeling in 3D image space. METHODS: We propose a Very Fast, High-Resolution Aggregation 3D Detection CAM (VFHA-CAM) model, which can detect various facial fractures. To better detect tiny fractures, our model uses high-resolution feature maps and employs Ablation CAM to find the exact fracture location without pixel-wise labeling, starting from a rough fracture region detected with 3D box-wise labeling. To this end, we extract important features and use only essential ones to reduce the computational complexity in 3D image space. RESULTS: Experimental findings demonstrate that VFHA-CAM surpasses state-of-the-art 2D detection methods by up to 20% in per-person sensitivity and specificity, achieving scores of 87% and 85%, respectively. In addition, our VFHA-CAM reduces location analysis time to 76 s without performance degradation, compared with more than 20 min for a simple Ablation CAM method. CONCLUSION: This study introduces a novel weakly supervised object localization approach for bone fracture detection in 3D facial images. The proposed method employs a 3D detection model that accurately detects various forms of facial bone fractures. The CAM algorithm adopted for fracture area segmentation within a 3D fracture detection box is key to quickly informing medical staff of the exact location of a facial bone fracture under weakly supervised localization. In addition, we provide 3D visualization so that even non-experts unfamiliar with 3D CT images can identify the fracture status and location.
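Ablation CAM, which the authors accelerate here, weights each activation channel by the score drop caused by zeroing it out. A simplified 2D PyTorch sketch of that idea follows; the paper's fast, high-resolution 3D variant differs, and `model_head` (a module mapping feature maps to class logits) and the function name are assumptions.

```python
import torch

def ablation_cam(model_head, activations, class_idx):
    """activations: (C, H, W) feature maps; model_head: (1, C, H, W) -> logits."""
    with torch.no_grad():
        base = model_head(activations.unsqueeze(0))[0, class_idx]
        weights = torch.zeros(activations.shape[0], device=activations.device)
        for k in range(activations.shape[0]):
            ablated = activations.clone()
            ablated[k] = 0.0  # ablate channel k
            score = model_head(ablated.unsqueeze(0))[0, class_idx]
            weights[k] = (base - score) / (base + 1e-8)  # relative score drop
        cam = torch.relu((weights[:, None, None] * activations).sum(dim=0))
    return cam / (cam.max() + 1e-8)  # normalized saliency map
```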


Subject(s)
Algorithms , Imaging, Three-Dimensional , Humans , Imaging, Three-Dimensional/methods , Skull Fractures/diagnostic imaging , Facial Bones/diagnostic imaging , Facial Bones/injuries , Tomography, X-Ray Computed/methods
9.
Radiol Artif Intell ; 6(5): e230342, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39166973

ABSTRACT

Purpose To develop an artificial intelligence model that uses supervised contrastive learning (SCL) to minimize bias in chest radiograph diagnosis. Materials and Methods In this retrospective study, the proposed method was evaluated on two datasets: the Medical Imaging and Data Resource Center (MIDRC) dataset with 77 887 chest radiographs in 27 796 patients collected as of April 20, 2023, for COVID-19 diagnosis and the National Institutes of Health ChestX-ray14 dataset with 112 120 chest radiographs in 30 805 patients collected between 1992 and 2015. In the ChestX-ray14 dataset, thoracic abnormalities included atelectasis, cardiomegaly, effusion, infiltration, mass, nodule, pneumonia, pneumothorax, consolidation, edema, emphysema, fibrosis, pleural thickening, and hernia. The proposed method used SCL with carefully selected positive and negative samples to generate fair image embeddings, which were fine-tuned for subsequent tasks to reduce bias in chest radiograph diagnosis. The method was evaluated using the marginal area under the receiver operating characteristic curve difference (∆mAUC). Results The proposed model showed a significant decrease in bias across all subgroups compared with the baseline models, as evidenced by a paired t test (P < .001). The ∆mAUCs obtained by the proposed method were 0.01 (95% CI: 0.01, 0.01), 0.21 (95% CI: 0.21, 0.21), and 0.10 (95% CI: 0.10, 0.10) for sex, race, and age subgroups, respectively, on the MIDRC dataset and 0.01 (95% CI: 0.01, 0.01) and 0.05 (95% CI: 0.05, 0.05) for sex and age subgroups, respectively, on the ChestX-ray14 dataset. Conclusion Employing SCL can mitigate bias in chest radiograph diagnosis, addressing concerns of fairness and reliability in deep learning-based diagnostic methods. Keywords: Thorax, Diagnosis, Supervised Learning, Convolutional Neural Network (CNN), Computer-aided Diagnosis (CAD) Supplemental material is available for this article. © RSNA, 2024 See also the commentary by Johnson in this issue.
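The fairness metric here contrasts the AUC achieved within each patient subgroup. A hedged sketch of one common way to quantify such a gap follows; the paper's marginal AUC formulation may differ, and the function name is an assumption.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def subgroup_auc_gap(y_true, y_score, group):
    """Largest AUC difference across subgroups (e.g., sex, race, or age bands)."""
    y_true, y_score, group = map(np.asarray, (y_true, y_score, group))
    aucs = [roc_auc_score(y_true[group == g], y_score[group == g])
            for g in np.unique(group)]
    return max(aucs) - min(aucs)
```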


Subject(s)
COVID-19 , Radiography, Thoracic , Humans , Radiography, Thoracic/methods , Radiography, Thoracic/standards , Retrospective Studies , Female , Male , Middle Aged , Aged , COVID-19/diagnostic imaging , COVID-19/diagnosis , Adult , Artificial Intelligence , SARS-CoV-2 , Radiographic Image Interpretation, Computer-Assisted/methods , Supervised Machine Learning , Adolescent , Young Adult
10.
Radiologie (Heidelb) ; 64(10): 752-757, 2024 Oct.
Article in German | MEDLINE | ID: mdl-39186073

ABSTRACT

BACKGROUND: Artificial intelligence (AI) is increasingly finding its way into routine radiological work. OBJECTIVE: To present current advances and applications of AI along the entire radiological patient journey. METHODS: Systematic literature review of established AI techniques and current research projects, with reference to consensus recommendations. RESULTS: The applications of AI in radiology cover a wide range, starting with AI-supported scheduling and indication assessment and extending to AI-enhanced image acquisition and reconstruction techniques that have the potential to reduce radiation doses in computed tomography (CT) or acquisition times in magnetic resonance imaging (MRI) while maintaining comparable image quality. Diagnostic applications include computer-aided detection and diagnosis, such as fracture recognition or nodule detection. Additionally, methods such as worklist prioritization and structured reporting facilitated by large language models enable a rethinking of the reporting process. The use of AI promises to increase the efficiency of all steps of the radiology workflow and to improve diagnostic accuracy. To achieve this, seamless integration into technical workflows and proven evidence for AI systems are necessary. CONCLUSION: Applications of AI have the potential to profoundly influence the role of radiologists in the future.


Subject(s)
Artificial Intelligence , Radiology , Humans , Radiology/methods , Radiology/trends , Tomography, X-Ray Computed/methods , Magnetic Resonance Imaging/methods
11.
Br J Radiol ; 97(1162): 1653-1660, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39102827

ABSTRACT

OBJECTIVE: To determine whether adding the elastography strain ratio (SR) and a deep learning-based computer-aided diagnosis (CAD) system to breast ultrasound (US) can help reclassify Breast Imaging Reporting and Data System (BI-RADS) 3 and 4a-c categories and avoid unnecessary biopsies. METHODS: This prospective, multicentre study included 1049 masses (691 benign, 358 malignant) assigned BI-RADS 3 or 4a-c between 2020 and 2022. CAD results were dichotomized as possibly malignant vs benign. All patients underwent SR and CAD examinations, and histopathological findings were the standard of reference. The outcome measures were the reduction of unnecessary biopsies (biopsies of benign lesions) and the malignancies missed after reclassification (new BI-RADS 3) with SR and CAD. RESULTS: Following routine conventional breast US assessment, 48.6% (336 of 691 benign masses) underwent unnecessary biopsy. After reclassifying BI-RADS 4a masses (SR cut-off <2.90, CAD dichotomized as possibly benign), 25.62% (177 of 691 benign masses) underwent unnecessary biopsy, corresponding to a 50.14% (177 vs 355) reduction in unnecessary biopsies. After reclassification, only 1.72% (9 of 523 masses) of malignancies were missed in the new BI-RADS 3 group. CONCLUSION: Adding SR and CAD to clinical practice may show optimal performance in reclassifying BI-RADS 4a to category 3; 50.14% of masses would benefit, while keeping the rate of undetected malignancies at an acceptable 1.72%. ADVANCES IN KNOWLEDGE: Leveraging SR in conjunction with CAD holds promise for substantially reducing the biopsy frequency associated with BI-RADS 3 and 4a lesions, thereby conferring substantial advantages on patients within this cohort.
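The headline figures are simple arithmetic over the reclassified masses; a sketch of the bookkeeping follows, with illustrative function names and the counts taken from the abstract.

```python
def biopsy_reduction(benign_biopsied_before: int, benign_biopsied_after: int) -> float:
    """Relative reduction in unnecessary biopsies (biopsies of benign lesions)."""
    return 1 - benign_biopsied_after / benign_biopsied_before

def missed_malignancy_rate(missed: int, downgraded_total: int) -> float:
    """Share of malignancies left in the new BI-RADS 3 group."""
    return missed / downgraded_total

# Figures from the abstract: 177 vs 355 biopsied benign masses, 9 of 523 missed.
print(f"{biopsy_reduction(355, 177):.2%}")      # ~50.14%
print(f"{missed_malignancy_rate(9, 523):.2%}")  # ~1.72%
```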


Subject(s)
Breast Neoplasms , Diagnosis, Computer-Assisted , Elasticity Imaging Techniques , Ultrasonography, Mammary , Humans , Elasticity Imaging Techniques/methods , Female , Prospective Studies , Ultrasonography, Mammary/methods , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/pathology , Middle Aged , Adult , Diagnosis, Computer-Assisted/methods , Aged , Breast/diagnostic imaging , Breast/pathology , Deep Learning , Biopsy , Radiology Information Systems , Young Adult
12.
Article in English | MEDLINE | ID: mdl-39209199

ABSTRACT

BACKGROUND & AIMS: Computer-aided diagnosis (CADx) assists endoscopists in differentiating between neoplastic and non-neoplastic polyps during colonoscopy. This study aimed to evaluate the impact of polyp location (proximal vs distal colon) on the diagnostic performance of CADx for polyps ≤5 mm. METHODS: We searched for studies evaluating the performance of real-time CADx alone (ie, independently of endoscopist judgement) for predicting the histology of colorectal polyps ≤5 mm. The primary endpoints were CADx sensitivity and specificity in the proximal and distal colon. Secondary outcomes were the negative predictive value (NPV), positive predictive value (PPV), and accuracy of CADx alone. The distal colon was limited to the rectum and sigmoid. RESULTS: We included 11 studies, with a total of 7782 polyps ≤5 mm. CADx specificity was significantly lower in the proximal colon than in the distal colon (62% vs 85%; risk ratio [RR], 0.74; 95% confidence interval [CI], 0.72-0.84), whereas sensitivity was similar (89% vs 87%; RR, 1.00; 95% CI, 0.97-1.03). The NPV (64% vs 93%; RR, 0.71; 95% CI, 0.64-0.79) and accuracy (81% vs 86%; RR, 0.95; 95% CI, 0.91-0.99) were significantly lower in the proximal than in the distal colon, whereas the PPV was higher in the proximal colon (87% vs 76%; RR, 1.11; 95% CI, 1.06-1.17). CONCLUSION: The diagnostic performance of CADx for polyps in the proximal colon is inadequate, exhibiting significantly lower specificity than for distal polyps. Although current CADx systems are suitable for use in the distal colon, they should not be employed for proximal polyps until more performant systems are developed specifically for these lesions.
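The comparisons above are reported as risk ratios with 95% CIs. For intuition, a sketch of the standard log-scale RR interval for a single two-by-two summary follows; this is illustrative only and is not the meta-analytic pooling actually used in the study.

```python
import math

def risk_ratio_ci(e1: int, n1: int, e2: int, n2: int, z: float = 1.96):
    """RR comparing event rates e1/n1 vs e2/n2, with a 95% CI on the log scale."""
    rr = (e1 / n1) / (e2 / n2)
    se = math.sqrt(1 / e1 - 1 / n1 + 1 / e2 - 1 / n2)  # SE of log(RR)
    lo, hi = rr * math.exp(-z * se), rr * math.exp(z * se)
    return rr, lo, hi
```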

13.
J Pers Med ; 14(8)2024 Jul 26.
Article in English | MEDLINE | ID: mdl-39201984

ABSTRACT

Early detection of breast cancer is essential for increasing survival rates, as it is one of the primary causes of death for women globally. Mammograms are extensively used by physicians for diagnosis, but selecting appropriate algorithms for image enhancement, segmentation, feature extraction, and classification remains a significant research challenge. This paper presents a computer-aided diagnosis (CAD)-based hybrid model combining convolutional neural networks (CNNs) with a pruned ensembled extreme learning machine (HCPELM) to enhance breast cancer detection, segmentation, feature extraction, and classification. The model employs the rectified linear unit (ReLU) activation function to enhance data analysis after removing artifacts and pectoral muscle, and hybridizing the ELM with the CNN improves feature extraction. The hybrid elements are convolutional and fully connected layers: convolutional layers extract spatial features such as edges and textures, with more complex features emerging in deeper layers, while the fully connected layers combine these features non-linearly to perform the final classification. The ELM performs the classification and recognition tasks, aiming for state-of-the-art performance. The hybrid classifier is also used for transfer learning, freezing certain layers and modifying the architecture to reduce parameters and ease cancer detection. The HCPELM classifier was trained on the MIAS database and evaluated against benchmark methods. It achieved a breast image recognition accuracy of 86%, outperforming benchmark deep learning models. HCPELM thus demonstrates superior performance in early detection and diagnosis, aiding healthcare practitioners in breast cancer diagnosis.
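At the core of HCPELM is an extreme learning machine: a random, untrained hidden layer whose output weights are solved in closed form by least squares. A minimal NumPy sketch of a vanilla ELM follows; the pruning, ensembling, and CNN hybridization of the paper are omitted, and the class name is illustrative.

```python
import numpy as np

class ELM:
    def __init__(self, n_hidden: int = 512, seed: int = 0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X: np.ndarray, y_onehot: np.ndarray) -> "ELM":
        # Random input weights stay fixed; only the output weights are learned.
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.maximum(X @ self.W + self.b, 0)    # ReLU hidden activations
        self.beta = np.linalg.pinv(H) @ y_onehot  # closed-form least squares
        return self

    def predict(self, X: np.ndarray) -> np.ndarray:
        H = np.maximum(X @ self.W + self.b, 0)
        return (H @ self.beta).argmax(axis=1)
```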

14.
Biomed Phys Eng Express ; 10(5)2024 Aug 27.
Article in English | MEDLINE | ID: mdl-39142295

ABSTRACT

With the advancement of computer-aided diagnosis, the automatic segmentation of COVID-19 infection areas holds great promise for assisting in the timely diagnosis and recovery of patients in clinical practice. Currently, methods relying on U-Net face challenges in effectively utilizing fine-grained semantic information from input images and in bridging the semantic gap between the encoder and decoder. To address these issues, we propose FMD-UNet, a dual-decoder U-Net network for COVID-19 infection segmentation that integrates a Fine-Grained Feature Squeezing (FGFS) decoder and a Multi-scale Dilated Semantic Aggregation (MDSA) decoder. The FGFS decoder produces fine feature maps through the compression of fine-grained features and a weighted attention mechanism, guiding the model to capture detailed semantic information. The MDSA decoder consists of three hierarchical MDSA modules designed for different stages of the input information. These modules progressively fuse dilated convolutions at different scales to process the shallow and deep semantic information from the encoder, and they use the extracted features to bridge the semantic gaps at various stages. This design captures extensive contextual information during decoding and prediction while suppressing growth in model parameters. To validate the robustness and generalizability of FMD-UNet, we conducted comprehensive performance evaluations and ablation experiments on three public datasets, achieving leading Dice Similarity Coefficient (DSC) scores of 84.76%, 78.56%, and 61.99% for COVID-19 infection segmentation, respectively. Compared with previous methods, FMD-UNet has fewer parameters and a shorter inference time, further demonstrating its competitiveness.


Subject(s)
Algorithms , COVID-19 , Lung , SARS-CoV-2 , Tomography, X-Ray Computed , Humans , COVID-19/diagnostic imaging , Tomography, X-Ray Computed/methods , Lung/diagnostic imaging , Semantics , Image Processing, Computer-Assisted/methods , Neural Networks, Computer
15.
Vis Comput Ind Biomed Art ; 7(1): 21, 2024 Aug 21.
Article in English | MEDLINE | ID: mdl-39167337

ABSTRACT

Medical image registration is vital for disease diagnosis and treatment, given its ability to merge diverse information from images that may be captured at different times, from different angles, or with different modalities. Although several surveys have reviewed the development of medical image registration, they have not systematically summarized the existing methods. To this end, a comprehensive review of these methods is provided from both traditional and deep-learning-based perspectives, aiming to help readers quickly understand the development of the field. In particular, we review recent advances in retinal image registration, which has not attracted much attention. In addition, current challenges in retinal image registration are discussed, and insights and prospects for future research are provided.

16.
Biomed Eng Online ; 23(1): 84, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39175006

ABSTRACT

This study aims to develop a super-resolution (SR) algorithm tailored specifically to enhancing the image quality and resolution of early cervical cancer (CC) magnetic resonance imaging (MRI) images. The proposed method is subjected to both qualitative and quantitative analyses, thoroughly investigating its performance across various upscaling factors and assessing its impact on medical image segmentation tasks. The SR algorithm employed for reconstructing early CC MRI images integrates complex architectures and deep convolutional kernels, with training conducted on matched pairs of input images through a multi-input model. The findings highlight significant advantages of the proposed SR method on two distinct datasets at different upscaling factors. Specifically, at a 2× upscaling factor, the sagittal test set outperforms state-of-the-art methods in PSNR evaluation, second only to the hybrid attention transformer, while the axial test set outperforms state-of-the-art methods in both PSNR and SSIM evaluation. At a 4× upscaling factor, both the sagittal and axial test sets achieve the best results on the PSNR and SSIM indicators. The method not only effectively enhances image quality but also exhibits superior performance in medical segmentation tasks, thereby providing a more reliable foundation for clinical diagnosis and image analysis.
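PSNR and SSIM, the two indices used in this evaluation, are standard full-reference image quality measures. A brief sketch with scikit-image follows, assuming intensities normalized to [0, 1]; the function name is illustrative.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_sr(reference, reconstructed):
    """Compare a super-resolved slice against its ground-truth reference."""
    psnr = peak_signal_noise_ratio(reference, reconstructed, data_range=1.0)
    ssim = structural_similarity(reference, reconstructed, data_range=1.0)
    return psnr, ssim
```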


Subject(s)
Deep Learning , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Uterine Cervical Neoplasms , Uterine Cervical Neoplasms/diagnostic imaging , Humans , Female , Image Processing, Computer-Assisted/methods , Algorithms
17.
Sci Rep ; 14(1): 20085, 2024 08 29.
Article in English | MEDLINE | ID: mdl-39209880

ABSTRACT

Computer-aided diagnosis has been slow to develop in the field of oral ulcers, largely because of the lack of publicly available datasets. Yet oral ulcers can harbor cancerous lesions, and the associated mortality rate is high, so recognizing them at an early stage in a timely and effective manner is critical. Although a small group of researchers has worked on this problem in recent years, their datasets remain private. To address this challenge, this paper proposes and makes publicly available a multi-task oral ulcer dataset (Autooral) covering the two major tasks of lesion segmentation and classification. To the best of our knowledge, we are the first team to publicly release a multi-task oral ulcer dataset. In addition, we propose a novel modeling framework, HF-UNet, for segmenting oral ulcer lesion regions. Specifically, the proposed high-order focus interaction module (HFblock) acquires global properties and, through high-order attention, focuses on local properties. The proposed lesion localization module (LL-M) employs a novel hybrid Sobel filter, which improves the recognition of ulcer edges. Experimental results on the proposed Autooral dataset show that HF-UNet achieves a DSC of about 0.80 for oral ulcer segmentation while occupying only 2029 MB of inference memory. The proposed method thus guarantees a low running load while maintaining high-performance segmentation capability. The Autooral dataset and code are available from https://github.com/wurenkai/HF-UNet-and-Autooral-dataset.
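The lesion localization module relies on a hybrid Sobel filter to sharpen ulcer edges. A plain Sobel gradient-magnitude sketch follows for orientation; the paper's hybrid variant is not reproduced here, and the function name is an assumption.

```python
import numpy as np
from scipy import ndimage

def sobel_edges(gray: np.ndarray) -> np.ndarray:
    """Edge map from horizontal/vertical Sobel responses, scaled to [0, 1]."""
    gx = ndimage.sobel(gray, axis=1)  # horizontal gradient
    gy = ndimage.sobel(gray, axis=0)  # vertical gradient
    mag = np.hypot(gx, gy)            # gradient magnitude
    return mag / (mag.max() + 1e-8)
```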


Subject(s)
Oral Ulcer , Oral Ulcer/pathology , Humans , Diagnosis, Computer-Assisted/methods , Algorithms , Databases, Factual
18.
J Healthc Inform Res ; 8(3): 506-522, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39131101

ABSTRACT

In practical electrocardiography (ECG) interpretation, the scarcity of well-annotated data is a common challenge. Transfer learning techniques are valuable in such situations, yet the assessment of transferability has received limited attention. To tackle this issue, we introduce MELEP, the Multi-label Expected Log of Empirical Predictions, a measure designed to estimate the effectiveness of knowledge transfer from a pre-trained model to a downstream multi-label ECG diagnosis task. MELEP is generic, working with new target data with different label sets, and computationally efficient, requiring only a single forward pass through the pre-trained model. To the best of our knowledge, MELEP is the first transferability metric specifically designed for multi-label ECG classification problems. Our experiments show that MELEP can predict the performance of pre-trained convolutional and recurrent deep neural networks on small and imbalanced ECG data. Specifically, we observed strong correlation coefficients (with absolute values exceeding 0.6 in most cases) between MELEP and the actual average F1 scores of the fine-tuned models. Our work highlights the potential of MELEP to expedite the selection of suitable pre-trained models for ECG diagnosis tasks, saving the time and effort that would otherwise be spent fine-tuning these models.
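MELEP builds on the LEEP family of transferability scores, which evaluate an "empirical predictor" formed from the pre-trained model's soft outputs. For intuition only, a hedged sketch of a binary LEEP-style estimate averaged over target labels follows; this is a simplification under stated assumptions, and the paper's exact multi-label formulation and any weighting may differ.

```python
import numpy as np

def binary_leep(source_probs: np.ndarray, y) -> float:
    """source_probs: (n, Z) pre-trained soft outputs on target data;
    y: (n,) binary labels of one downstream task."""
    y = np.asarray(y, dtype=int)
    n, Z = source_probs.shape
    joint = np.zeros((2, Z))
    for c in (0, 1):
        joint[c] = source_probs[y == c].sum(axis=0) / n       # empirical P(y=c, z)
    cond = joint / (joint.sum(axis=0, keepdims=True) + 1e-12)  # empirical P(y=c | z)
    pred = source_probs @ cond.T                               # (n, 2) empirical predictor
    return np.log(pred[np.arange(n), y] + 1e-12).mean()        # expected log prediction

def melep_like(source_probs: np.ndarray, Y: np.ndarray) -> float:
    """Average the binary score over the columns of a multi-label matrix Y (n, L)."""
    return float(np.mean([binary_leep(source_probs, Y[:, l]) for l in range(Y.shape[1])]))
```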

19.
J Imaging Inform Med ; 2024 Aug 16.
Article in English | MEDLINE | ID: mdl-39150595

ABSTRACT

Primary diffuse central nervous system large B-cell lymphoma (CNS-pDLBCL) and high-grade glioma (HGG) often present similarly, both clinically and on imaging, making differentiation challenging. This similarity can complicate pathologists' diagnostic efforts, yet accurately distinguishing between the two conditions is crucial for guiding treatment decisions. This study leverages a deep learning model to classify brain tumor pathology images, addressing the common issue of limited medical imaging data. Instead of training a convolutional neural network (CNN) from scratch, we employ a pre-trained network to extract deep features, which are then used by a support vector machine (SVM) for classification. Our evaluation shows that the ResNet50 (TL + SVM) model achieves 97.4% accuracy, based on tenfold cross-validation on the test set. These results highlight the synergy between deep learning and traditional diagnostics, potentially setting a new standard for accuracy and efficiency in the pathological diagnosis of brain tumors.
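The described pipeline, pretrained-CNN features fed to an SVM, can be sketched with torchvision and scikit-learn. This is a generic sketch, not the authors' exact configuration; the variable names and input size are assumptions, and input normalization is left to the caller.

```python
import torch
from torch import nn
import torchvision.models as models
from sklearn.svm import SVC

# Pre-trained backbone with the classification head removed.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = nn.Identity()  # expose the 2048-d pooled features
backbone.eval()

@torch.no_grad()
def extract_features(batch: torch.Tensor):
    """batch: (N, 3, 224, 224) normalized image tensors -> (N, 2048) features."""
    return backbone(batch).numpy()

# Hypothetical usage: X_train holds pathology patches, y_train the
# CNS-pDLBCL vs HGG labels (both names illustrative).
# svm = SVC(kernel="rbf").fit(extract_features(X_train), y_train)
```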

20.
Quant Imaging Med Surg ; 14(8): 5902-5914, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39144019

ABSTRACT

Background: Bone age assessment (BAA) is crucial for the diagnosis of growth disorders and the optimization of treatments. However, random error caused by differing observer experience and the low consistency of repeated assessments harm the quality of such assessments, so automated assessment methods are needed. Methods: Previous research has sought to design localization modules in a strongly or weakly supervised fashion and to aggregate part regions to better recognize subtle differences. In contrast, we sought to efficiently deliver information between multi-granularity regions for fine-grained feature learning and to directly model long-distance relationships for global understanding. The proposed method is named the Multi-Granularity and Multi-Attention Net (2M-Net). Specifically, we first applied the jigsaw method to generate related tasks emphasizing regions at different granularities and then trained the model on these tasks using a hierarchical sharing mechanism. In effect, the training signals from the extra tasks act as an inductive bias, enabling 2M-Net to discover task relatedness without the need for annotations. Next, a self-attention mechanism serves as a plug-and-play module to effectively enhance the feature representation capabilities. Finally, multi-scale features are applied for prediction. Results: A public dataset of 14,236 hand radiographs, provided by the Radiological Society of North America (RSNA), was used to develop and validate 2M-Net. In public benchmark testing, the mean absolute error (MAE) between the model's bone age estimates and those of the reviewers was 3.98 months (3.89 months for males and 4.07 months for females). Conclusions: By using the jigsaw method to construct a multi-task learning strategy and inserting a self-attention module for efficient global modeling, we established 2M-Net, which is comparable to the previous best-performing method.
