Results 1 - 14 of 14
1.
IEEE Trans Med Imaging ; 43(5): 1945-1957, 2024 May.
Article in English | MEDLINE | ID: mdl-38206778

ABSTRACT

Color fundus photography (CFP) and optical coherence tomography (OCT) images are two of the most widely used modalities in the clinical diagnosis and management of retinal diseases. Despite the widespread use of multimodal imaging in clinical practice, few methods for automated diagnosis of eye diseases effectively utilize the correlated and complementary information from multiple modalities. This paper explores how to leverage the information from CFP and OCT images to improve the automated diagnosis of retinal diseases. We propose a novel multimodal learning method, named geometric correspondence-based multimodal learning network (GeCoM-Net), to achieve the fusion of CFP and OCT images. Specifically, inspired by clinical observations, we consider the geometric correspondence between an OCT slice and the CFP region to learn the correlated features of the two modalities for robust fusion. Furthermore, we design a new feature selection strategy to extract discriminative OCT representations by automatically selecting the important feature maps from OCT slices. Unlike existing multimodal learning methods, GeCoM-Net is the first to explicitly formulate the geometric relationships between an OCT slice and the corresponding region of the CFP image for CFP and OCT fusion. Experiments were conducted on a large-scale private dataset and a publicly available dataset to evaluate the effectiveness of GeCoM-Net for diagnosing diabetic macular edema (DME), impaired visual acuity (VA) and glaucoma. The empirical results show that our method outperforms the current state-of-the-art multimodal learning methods, improving the AUROC score by 0.4%, 1.9% and 2.9% for DME, VA and glaucoma detection, respectively.
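To make the geometric-correspondence idea concrete, here is a minimal sketch (not the authors' released implementation): each OCT slice embedding is paired with features pooled from the CFP region the slice is assumed to cross, and a learned score over slices stands in for the feature selection strategy. The ROI boxes, feature dimensions, and fusion head are hypothetical.

```python
import torch
import torch.nn as nn
from torchvision.ops import roi_align

class GeometricFusion(nn.Module):
    """Fuse each OCT slice with the CFP region it geometrically corresponds to."""
    def __init__(self, cfp_channels=256, oct_dim=256, num_classes=2):
        super().__init__()
        self.oct_scorer = nn.Linear(oct_dim, 1)      # feature-selection weights
        self.classifier = nn.Sequential(
            nn.Linear(cfp_channels * 3 * 3 + oct_dim, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, cfp_fmap, oct_feats, boxes):
        # cfp_fmap: (1, C, H, W) CFP feature map from a CNN backbone
        # oct_feats: (S, D) one embedding per OCT slice
        # boxes: (S, 4) float xyxy regions on the feature map, one per slice
        rois = torch.cat([torch.zeros(len(boxes), 1), boxes], dim=1)  # add batch idx
        cfp_regions = roi_align(cfp_fmap, rois, output_size=(3, 3))   # (S, C, 3, 3)
        pairs = torch.cat([cfp_regions.flatten(1), oct_feats], dim=1) # (S, C*9+D)
        w = torch.softmax(self.oct_scorer(oct_feats), dim=0)          # slice importance
        fused = (w * pairs).sum(dim=0)                                # weighted pooling
        return self.classifier(fused)
```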


Subjects
Image Interpretation, Computer-Assisted; Multimodal Imaging; Tomography, Optical Coherence; Humans; Tomography, Optical Coherence/methods; Multimodal Imaging/methods; Image Interpretation, Computer-Assisted/methods; Algorithms; Retinal Diseases/diagnostic imaging; Retina/diagnostic imaging; Machine Learning; Photography/methods; Diagnostic Techniques, Ophthalmological; Databases, Factual
2.
Comput Methods Programs Biomed ; 242: 107826, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37837885

ABSTRACT

BACKGROUND: Skin lesions are a common ailment, and melanoma is a particularly dangerous form. Artificial intelligence shows promise for early detection, but its integration into clinical contexts, particularly with multi-modal data, presents challenges. While multi-modal approaches improve diagnostic efficacy, the influence of modal bias is often disregarded. METHODS: We introduce a multi-modal feature learning technique for dermatological diagnosis termed "Contrast-based Consistent Representation Disentanglement". The approach employs adversarial domain adaptation to disentangle features from distinct modalities, fostering a shared representation. Furthermore, a contrastive learning strategy encourages the model to keep common lesion attributes consistent across modalities. By emphasizing a uniform representation across modalities, the approach avoids reliance on supplementary data. RESULTS: On the 7-point criteria evaluation dataset, the proposed technique achieves an average accuracy of 76.1% for multi-classification tasks, surpassing state-of-the-art methods. The approach tackles modal bias, enabling a consistent representation of common lesion appearances that transcends modality boundaries. This study underscores the potential of multi-modal feature learning in dermatological diagnosis. CONCLUSION: In summary, we posit a multi-modal feature learning strategy for dermatological diagnosis. The approach outperforms other state-of-the-art methods, underscoring its capacity to enhance diagnostic precision for skin lesions.
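A rough sketch of the two ingredients named above, adversarial disentanglement toward a shared representation and a contrastive consistency term across modalities, might look as follows; the gradient-reversal formulation, feature dimension, and temperature are assumptions rather than details from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -grad  # reversed gradients push the encoder to be modality-invariant

def contrastive_consistency(z_clin, z_derm, temperature=0.1):
    """InfoNCE-style loss: matching clinical/dermoscopic pairs are positives."""
    z1 = F.normalize(z_clin, dim=1)
    z2 = F.normalize(z_derm, dim=1)
    logits = z1 @ z2.t() / temperature       # (B, B) similarity matrix
    targets = torch.arange(len(z1))          # the i-th pair is the positive
    return F.cross_entropy(logits, targets)

# modality discriminator trained through the gradient-reversal layer
disc = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 2))

def adversarial_loss(z_clin, z_derm):
    z = torch.cat([GradReverse.apply(z_clin), GradReverse.apply(z_derm)])
    labels = torch.cat([torch.zeros(len(z_clin)), torch.ones(len(z_derm))]).long()
    return F.cross_entropy(disc(z), labels)
```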


Subjects
Melanoma; Skin Diseases; Humans; Artificial Intelligence; Learning; Melanoma/diagnosis; Research Design; Skin Diseases/diagnosis
3.
Diagnostics (Basel) ; 13(8)2023 Apr 12.
Article in English | MEDLINE | ID: mdl-37189498

ABSTRACT

Chest X-rays (CXRs) are essential in the preliminary radiographic assessment of patients affected by COVID-19. Junior residents, as the first point of contact in the diagnostic process, are expected to interpret these CXRs accurately. We aimed to assess the effectiveness of a deep neural network in distinguishing COVID-19 from other types of pneumonia, and to determine its potential contribution to improving the diagnostic precision of less experienced residents. A total of 5051 CXRs were used to develop and assess an artificial intelligence (AI) model performing three-class classification: non-pneumonia, non-COVID-19 pneumonia, and COVID-19 pneumonia. Additionally, an external dataset comprising 500 distinct CXRs was examined by three junior residents with differing levels of training. The CXRs were evaluated both with and without AI assistance. The AI model achieved an area under the ROC curve (AUC) of 0.9518 on the internal test set and 0.8594 on the external test set, improving on the AUC scores of current state-of-the-art algorithms by 1.25% and 4.26%, respectively. When assisted by the AI model, the performance of the junior residents improved in a manner inversely proportional to their level of training; two of the three showed significant improvement with AI assistance. This research highlights the development of an AI model for three-class CXR classification and its potential to augment junior residents' diagnostic accuracy, with validation on external data to demonstrate real-world applicability. In practical use, the AI model effectively supported junior residents in interpreting CXRs and boosted their confidence in diagnosis. However, performance declined on the external test set compared to the internal test set, suggesting a domain shift between the patient dataset and the external dataset and highlighting the need for future research on test-time training domain adaptation.
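The abstract does not specify the architecture, but a three-class CXR classifier of the kind described can be sketched by fine-tuning a standard pretrained backbone; the choice of ResNet-50, the optimizer, and the learning rate below are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical three-class CXR model:
# non-pneumonia / non-COVID-19 pneumonia / COVID-19 pneumonia.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 3)   # replace the ImageNet head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```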

4.
IEEE Trans Cybern ; 53(8): 5323-5335, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36240037

ABSTRACT

Deep neural networks have shown powerful performance in medical image analysis across a variety of diseases. However, a number of studies over the past few years have demonstrated that these deep learning systems can be vulnerable to well-designed adversarial attacks, with minor perturbations added to the input. Since both the public and academia have focused on deep learning in the health information economy, such adversarial attacks become increasingly important and raise security concerns. In this article, adversarial attacks on deep learning systems in medicine are analyzed from two points of view: 1) white box and 2) black box. A fast adversarial sample generation method, Feature Space-Restricted Attention Attack, is proposed to explore more confusing adversarial samples. It is based on a generative adversarial network with a bounded classification space to generate perturbations that achieve attacks. Meanwhile, it employs an attention mechanism to focus the perturbation on the lesion region. This ties the perturbation closely to the classification information, making the attack more efficient and harder to detect. The performance and specificity of the proposed attack method are demonstrated through extensive experiments on three different types of medical images. Finally, it is hoped that this work can help practitioners become aware of current weaknesses in the deployment of deep learning systems in clinical settings. It further investigates domain-specific features of medical deep learning systems to enhance model generalization and resistance to attacks.
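A minimal sketch of the attack's central mechanism (a generator whose bounded perturbation is confined to the lesion by an attention map) is given below; the generator architecture, the epsilon bound, and the source of the attention map are all assumptions.

```python
import torch
import torch.nn as nn

class AttentionAttackGenerator(nn.Module):
    """Generate a perturbation and mask it to a (given) lesion-attention map."""
    def __init__(self, channels=3, eps=8 / 255):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),  # output in [-1, 1]
        )

    def forward(self, x, attention):
        # attention: (B, 1, H, W) map in [0, 1], high on the lesion region
        delta = self.eps * self.net(x) * attention   # bounded, lesion-focused
        return (x + delta).clamp(0, 1)

def attack_loss(victim, x_adv, y_true):
    # maximize the victim model's error: minimize the negative cross-entropy
    return -nn.functional.cross_entropy(victim(x_adv), y_true)
```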


Assuntos
Aprendizado Profundo , Redes Neurais de Computação
5.
Med Image Anal ; 83: 102664, 2023 01.
Article in English | MEDLINE | ID: mdl-36332357

ABSTRACT

Pneumonia can be difficult to diagnose since its symptoms are highly variable, and the radiographic signs are often very similar to those seen in other illnesses such as a cold or influenza. Deep neural networks have shown promising performance in automated pneumonia diagnosis using chest X-ray radiography, allowing mass screening and early intervention to reduce severe cases and the death toll. However, they usually require many well-labelled chest X-ray images for training to achieve high diagnostic accuracy. To reduce the need for training data and annotation resources, we propose a novel method called Contrastive Domain Adaptation with Consistency Match (CDACM). It transfers knowledge from different but relevant datasets to the unlabelled, small-size target dataset and improves the semantic quality of the learnt representations. Specifically, we design a conditional domain adversarial network that exploits the discriminative information conveyed in the predictions to mitigate the domain gap between the source and target datasets. Furthermore, because of the small scale of the target dataset, we construct a feature cloud for each target sample and leverage contrastive learning to extract more discriminative features. Lastly, we propose adaptive feature cloud expansion to push the decision boundary to a low-density area. Unlike most existing transfer learning methods, which aim only to mitigate the domain gap, our method simultaneously considers the domain gap and the data deficiency of the target dataset. The conditional domain adaptation and the feature cloud generation are learned jointly to extract discriminative features in an end-to-end manner, and the adaptive feature cloud expansion improves the model's generalisation ability in the target domain. Extensive experiments on pneumonia and COVID-19 diagnosis tasks demonstrate that our method outperforms several state-of-the-art unsupervised domain adaptation approaches, verifying the effectiveness of CDACM for automated pneumonia diagnosis using chest X-ray imaging.
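Two of the components named above can be sketched as follows, under assumed dimensions, cloud size, and noise scale: a conditional domain discriminator fed feature-prediction outer products, and a contrastive loss over a "feature cloud" of perturbed copies of each scarce target feature. The adaptive expansion step is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

feat_dim, num_classes = 256, 2
domain_disc = nn.Sequential(
    nn.Linear(feat_dim * num_classes, 128), nn.ReLU(), nn.Linear(128, 1))

def conditional_domain_logits(features, predictions):
    # condition the domain discriminator on class predictions (outer product)
    joint = torch.bmm(predictions.unsqueeze(2), features.unsqueeze(1))  # (B, K, D)
    return domain_disc(joint.flatten(1))    # trained adversarially vs. the encoder

def feature_cloud(z, k=8, sigma=0.05):
    # expand each scarce target-domain feature into a cloud of perturbed copies
    cloud = z.unsqueeze(1) + sigma * torch.randn(z.size(0), k, z.size(1))
    return cloud.reshape(-1, z.size(1))

def cloud_contrastive_loss(z, k=8, temperature=0.1):
    cloud = F.normalize(feature_cloud(z, k), dim=1)         # (B*k, D) anchors
    protos = F.normalize(z, dim=1)                          # (B, D) positives
    logits = cloud @ protos.t() / temperature
    targets = torch.arange(z.size(0)).repeat_interleave(k)  # member i*k+j -> sample i
    return F.cross_entropy(logits, targets)
```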


Assuntos
Teste para COVID-19 , COVID-19 , Humanos
6.
Med Image Anal ; 81: 102535, 2022 10.
Article in English | MEDLINE | ID: mdl-35872361

ABSTRACT

Accurate skin lesion diagnosis requires great effort from experts to identify the characteristics from clinical and dermoscopic images. Deep multimodal learning-based methods can reduce intra- and inter-reader variability and improve diagnostic accuracy compared to single-modality methods. This study develops a novel method, named adversarial multimodal fusion with attention mechanism (AMFAM), to perform multimodal skin lesion classification. Specifically, we adopt a discriminator that uses adversarial learning to enforce the feature extractor to learn correlated information explicitly. Moreover, we design an attention-based reconstruction strategy to encourage the feature extractor to concentrate on learning the features of the lesion area, thus enhancing the feature vector from each modality with more discriminative information. Unlike existing multimodal approaches, which focus only on learning complementary features from dermoscopic and clinical images, our method considers both the correlated and the complementary information of the two modalities for multimodal fusion. To verify the effectiveness of our method, we conduct comprehensive experiments on a publicly available multimodal and multi-task skin lesion classification dataset: the 7-point criteria evaluation database. The experimental results demonstrate that our proposed method outperforms the current state-of-the-art methods and improves the average AUC score by more than 2% on the test set.
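A hedged sketch of the two AMFAM ingredients could look like this: a modality discriminator that the encoders learn to fool (yielding correlated features), and an attention-weighted reconstruction loss that focuses the encoders on the lesion area. The discriminator shape and loss weighting are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# The discriminator tries to tell which modality a feature came from; the
# encoders are trained to fool it, so shared features capture correlated info.
modality_disc = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1))

def encoder_adv_loss(z_clinical, z_dermoscopic):
    # non-saturating GAN loss from the encoders' point of view (flipped labels)
    logits = modality_disc(torch.cat([z_clinical, z_dermoscopic]))
    flipped = torch.cat([torch.ones(len(z_clinical), 1),
                         torch.zeros(len(z_dermoscopic), 1)])
    return F.binary_cross_entropy_with_logits(logits, flipped)

def attention_recon_loss(decoder, z, image, attention):
    # reconstruct the input, weighting errors by a lesion-attention map so the
    # encoder is pushed to preserve lesion-area information
    recon = decoder(z)
    return (attention * (recon - image) ** 2).mean()
```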


Assuntos
Diagnóstico por Imagem , Dermatopatias , Pele , Bases de Dados Factuais , Humanos , Aprendizado de Máquina , Pele/patologia , Dermatopatias/classificação , Dermatopatias/diagnóstico
7.
Healthcare (Basel) ; 10(1)2022 Jan 17.
Article in English | MEDLINE | ID: mdl-35052339

ABSTRACT

(1) Background: Chest radiographs are the mainstay of initial radiological investigation in this COVID-19 pandemic. A reliable and readily deployable artificial intelligence (AI) algorithm that detects pneumonia in COVID-19 suspects can be useful for screening or triage in a hospital setting. This study has a few objectives: first, to develop a model that accurately detects pneumonia in COVID-19 suspects; second, to assess its performance in a real-world clinical setting; and third, by integrating the model with the daily clinical workflow, to measure its impact on report turn-around time. (2) Methods: The model was developed from the NIH Chest-14 open-source dataset and fine-tuned using an internal dataset comprising more than 4000 CXRs acquired in our institution. Input from two senior radiologists provided the reference standard. The model was integrated into daily clinical workflow, prioritising abnormal CXRs for expedited reporting. Area under the receiver operating characteristic curve (AUC), F1 score, sensitivity, and specificity were calculated to characterise diagnostic performance. The average time taken by radiologists in reporting the CXRs was compared against the mean baseline time taken prior to implementation of the AI model. (3) Results: 9431 unique CXRs were included in the datasets, of which 1232 were ground truth-labelled positive for pneumonia. On the "live" dataset, the model achieved an AUC of 0.95 (95% confidence interval (CI): 0.92, 0.96) corresponding to a specificity of 97% (95% CI: 0.97, 0.98) and sensitivity of 79% (95% CI: 0.72, 0.84). No statistically significant degradation of diagnostic performance was encountered during clinical deployment, and report turn-around time was reduced by 22%. (4) Conclusion: In real-world clinical deployment, our model expedites reporting of pneumonia in COVID-19 suspects while preserving diagnostic performance without significant model drift.
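For reference, performance figures of the kind reported here can be computed with standard scikit-learn calls, as in this self-contained sketch on synthetic stand-in data (the real labels and model scores would replace the random arrays):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 500)                 # stand-in ground-truth labels
y_score = np.clip(y_true * 0.6 + rng.normal(0.3, 0.25, 500), 0, 1)  # model scores

auc = roc_auc_score(y_true, y_score)
y_pred = (y_score >= 0.5).astype(int)            # operating threshold
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"AUC={auc:.3f}  sensitivity={sensitivity:.2%}  specificity={specificity:.2%}")
```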

8.
IEEE Trans Med Imaging ; 41(3): 559-570, 2022 03.
Article in English | MEDLINE | ID: mdl-34606448

ABSTRACT

The early detection and timely treatment of breast cancer can save lives. Mammography is one of the most efficient approaches to screening early breast cancer. An automatic mammographic image classification method could improve the work efficiency of radiologists. Current deep learning-based methods typically use the traditional softmax loss to optimize the feature extraction part, which aims to learn the features of mammographic images. However, previous studies have shown that the feature extraction part cannot learn discriminative features from complex data using the standard softmax loss. In this paper, we design a new architecture and propose corresponding loss functions. Specifically, we develop a double-classifier network architecture that constrains the distribution of the extracted features by changing the classifiers' decision boundaries. Then, we propose a double-classifier constraint loss function to constrain the decision boundaries so that the feature extraction part can learn discriminative features. Furthermore, by taking advantage of the two-classifier architecture, the network can detect difficult-to-classify samples. We propose a weighted double-classifier constraint method to make the feature extraction part pay more attention to learning the features of difficult-to-classify samples. Our proposed method can easily be applied to an existing convolutional neural network to improve mammographic image classification performance. We conducted extensive experiments on three public benchmark mammographic image datasets. The results showed that our methods outperformed many similar and state-of-the-art methods on the three public medical benchmarks. Our code and weights can be found on GitHub.
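One plausible reading of the double-classifier idea, sketched below with hypothetical names and weights: a shared extractor feeds two heads, both are trained with cross-entropy, their disagreement is penalized as the boundary constraint, and samples on which the heads disagree are up-weighted as difficult. Whether the constraint should pull the boundaries together or apart is a design choice the abstract leaves open; this sketch penalizes disagreement.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DoubleClassifierNet(nn.Module):
    def __init__(self, backbone, feat_dim=512, num_classes=2):
        super().__init__()
        self.backbone = backbone            # any CNN emitting (B, feat_dim) features
        self.head1 = nn.Linear(feat_dim, num_classes)
        self.head2 = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        z = self.backbone(x)
        return self.head1(z), self.head2(z)

def double_constraint_loss(logits1, logits2, y, lam=0.1):
    p1, p2 = F.softmax(logits1, dim=1), F.softmax(logits2, dim=1)
    disagreement = (p1 - p2).abs().sum(dim=1)   # per-sample discrepancy
    # samples the two classifiers disagree on are "difficult": weight them up
    weights = 1.0 + disagreement.detach()
    ce = (F.cross_entropy(logits1, y, reduction="none")
          + F.cross_entropy(logits2, y, reduction="none"))
    return (weights * ce).mean() + lam * disagreement.mean()
```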


Assuntos
Neoplasias da Mama , Mamografia , Neoplasias da Mama/diagnóstico por imagem , Detecção Precoce de Câncer , Feminino , Humanos , Redes Neurais de Computação
9.
IEEE J Biomed Health Inform ; 26(3): 1080-1090, 2022 03.
Article in English | MEDLINE | ID: mdl-34314362

ABSTRACT

Pneumonia is one of the most common treatable causes of death, and early diagnosis allows for early intervention. Automated diagnosis of pneumonia can therefore improve outcomes. However, it is challenging to develop high-performance deep learning models due to the lack of well-annotated data for training. This paper proposes a novel method, called Deep Supervised Domain Adaptation (DSDA), to automatically diagnose pneumonia from chest X-ray images. Specifically, we propose to transfer the knowledge from a publicly available large-scale source dataset (ChestX-ray14) to a well-annotated but small-scale target dataset (the TTSH dataset). DSDA aligns the distributions of the source domain and the target domain according to the underlying semantics of the training samples. It includes two task-specific sub-networks for the source domain and the target domain, respectively. These two sub-networks share the feature extraction layers and are trained in an end-to-end manner. Unlike most existing domain adaptation approaches that perform the same tasks in the source domain and the target domain, we attempt to transfer the knowledge from a multi-label classification task in the source domain to a binary classification task in the target domain. To evaluate the effectiveness of our method, we compare it with several existing peer methods. The experimental results show that our method can achieve promising performance for automated pneumonia diagnosis.
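Structurally, DSDA as described, shared feature layers with a multi-label head for the source task and a binary head for the target task, can be sketched as follows; the semantic alignment term is omitted, and the dimensions and loss weighting are assumptions.

```python
import torch
import torch.nn as nn

class DSDASketch(nn.Module):
    def __init__(self, backbone, feat_dim=512):
        super().__init__()
        self.backbone = backbone                    # shared feature extraction layers
        self.source_head = nn.Linear(feat_dim, 14)  # ChestX-ray14: 14-way multi-label
        self.target_head = nn.Linear(feat_dim, 2)   # target set: pneumonia yes/no

    def forward(self, x, domain):
        z = self.backbone(x)
        return self.source_head(z) if domain == "source" else self.target_head(z)

bce = nn.BCEWithLogitsLoss()   # multi-label source task (float 0/1 label vectors)
ce = nn.CrossEntropyLoss()     # binary target task (long class indices)

def joint_loss(model, xs, ys_multilabel, xt, yt_binary, alpha=1.0):
    return (bce(model(xs, "source"), ys_multilabel)
            + alpha * ce(model(xt, "target"), yt_binary))
```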


Assuntos
Aprendizado Profundo , Pneumonia , Diagnóstico Precoce , Humanos , Pneumonia/diagnóstico por imagem , Tomografia Computadorizada por Raios X/métodos , Raios X
10.
IEEE/ACM Trans Comput Biol Bioinform ; 19(4): 2241-2251, 2022.
Article in English | MEDLINE | ID: mdl-33600319

ABSTRACT

To obtain a well-performing computer-aided detection model for breast cancer, one usually needs an effective and efficient algorithm and a well-labeled dataset to train it. In this paper, we first construct a multi-instance mammography clinic dataset. Each case in the dataset includes a different number of instances captured from different views; it is labeled according to the pathological report, and all the instances of one case share one label. However, instances captured from different views may contribute differently to determining the category of the target case. Motivated by this observation, a feature-sensitive deep convolutional neural network trained in an end-to-end manner is proposed to detect breast cancer. The proposed method first uses a pre-trained model with some custom layers to extract image features. It then adopts a feature fusion module that learns to compute the weight of each feature vector, giving the different instances of each case different influence on the classifier. Lastly, a classifier module classifies the fused features. Experimental results on both our clinic dataset and two public datasets demonstrate the effectiveness of the proposed method.
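The feature fusion module described, which learns a weight per view so that instances influence the case-level decision differently, closely resembles attention-based multiple-instance pooling, sketched here with hypothetical dimensions:

```python
import torch
import torch.nn as nn

class AttentionFusionMIL(nn.Module):
    """Weight each view (instance) of a case before case-level classification."""
    def __init__(self, feat_dim=512, hidden=128, num_classes=2):
        super().__init__()
        self.attend = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        self.classify = nn.Linear(feat_dim, num_classes)

    def forward(self, instance_feats):          # (N_views, feat_dim), one case
        scores = self.attend(instance_feats)    # (N_views, 1)
        weights = torch.softmax(scores, dim=0)  # views compete for influence
        case_feat = (weights * instance_feats).sum(dim=0)  # fused case feature
        return self.classify(case_feat), weights
```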


Assuntos
Neoplasias da Mama , Algoritmos , Neoplasias da Mama/diagnóstico por imagem , Feminino , Humanos , Mamografia/métodos , Redes Neurais de Computação
11.
Med Image Anal ; 73: 102147, 2021 10.
Article in English | MEDLINE | ID: mdl-34246849

ABSTRACT

The early detection of breast cancer greatly increases the chances that the right decision for a successful treatment plan will be made. Deep learning approaches are used in breast cancer screening and have achieved promising results when a large-scale labeled dataset is available for training. However, they may suffer from a dramatic decrease in performance when annotated data are limited. In this paper, we propose a method called deep adversarial domain adaptation (DADA) to improve the performance of breast cancer screening using mammography. Specifically, our aim is to extract the knowledge from a public dataset (source domain) and transfer it to improve detection performance on the target dataset (target domain). Because of the different distributions of the source and target domains, the proposed method adopts an adversarial learning technique to perform domain adaptation between the two domains. In particular, the adversarial procedure is trained by taking advantage of the disagreement of two classifiers. To evaluate the proposed method, the public, well-labeled, image-level dataset Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) is employed as the source domain. Mammography samples from the West China Hospital were collected to construct the target domain dataset and annotated at case level based on the corresponding pathological reports. The experimental results demonstrate the effectiveness of the proposed method compared with several other state-of-the-art automatic breast cancer screening approaches.
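Training on the disagreement of two classifiers is reminiscent of maximum classifier discrepancy adaptation; the sketch below shows that generic alternating scheme (not necessarily the exact DADA procedure), with G a feature extractor, C1/C2 the two classifiers, and opt_c an optimizer over both classifiers' parameters:

```python
import torch.nn.functional as F

def discrepancy(p1, p2):
    return (p1 - p2).abs().mean()   # L1 gap between the two classifiers' outputs

def mcd_steps(G, C1, C2, opt_g, opt_c, xs, ys, xt):
    # step A: train everything on labeled source data
    opt_g.zero_grad(); opt_c.zero_grad()
    zs = G(xs)
    loss = F.cross_entropy(C1(zs), ys) + F.cross_entropy(C2(zs), ys)
    loss.backward(); opt_g.step(); opt_c.step()

    # step B: fix G, move the classifiers apart on target data (find ambiguity)
    opt_c.zero_grad()
    zt = G(xt).detach()
    d = discrepancy(F.softmax(C1(zt), 1), F.softmax(C2(zt), 1))
    (-d).backward(); opt_c.step()

    # step C: fix the classifiers, move G to reduce the disagreement
    opt_g.zero_grad()
    zt = G(xt)
    d = discrepancy(F.softmax(C1(zt), 1), F.softmax(C2(zt), 1))
    d.backward(); opt_g.step()
```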


Assuntos
Neoplasias da Mama , Mamografia , Neoplasias da Mama/diagnóstico por imagem , Bases de Dados Factuais , Detecção Precoce de Câncer , Feminino , Humanos
12.
Front Psychol ; 12: 587405, 2021.
Article in English | MEDLINE | ID: mdl-34017276

ABSTRACT

Although many studies have provided evidence that abstract knowledge can be acquired in artificial grammar learning, it remains unclear how abstract knowledge can be attained in sequence learning. To address this issue, we proposed a dual simple recurrent network (DSRN) model that includes a surface SRN encoding and predicting the surface properties of stimuli and an abstract SRN encoding and predicting the abstract properties of stimuli. The results of Simulations 1 and 2 showed that the DSRN model can account for learning effects in the serial reaction time (SRT) task under different conditions, and that manipulating the contribution weight of each SRN accounted for the contribution of conscious and unconscious processes in inclusion and exclusion tests in previous studies. Human performance in Simulation 3 provided further evidence that people can implicitly learn both chunking and abstract knowledge in sequence learning, and the simulation confirmed that the DSRN model can account for how people implicitly acquire the two types of knowledge. These findings extend the learning ability of the SRN model and help explain how different types of knowledge can be acquired implicitly in sequence learning.
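Loosely, the dual-SRN architecture can be sketched as two Elman networks whose next-element predictions are mixed by a contribution weight. This sketch assumes, for simplicity, that both pathways are coded over the same response alphabet, which abstracts away the model's separate surface and abstract encodings.

```python
import torch
import torch.nn as nn

class ElmanSRN(nn.Module):
    """Simple recurrent network: predict the next element from the sequence so far."""
    def __init__(self, n_in, n_hidden):
        super().__init__()
        self.rnn = nn.RNN(n_in, n_hidden, batch_first=True)  # tanh Elman units
        self.out = nn.Linear(n_hidden, n_in)

    def forward(self, seq):                 # (B, T, n_in) one-hot codes
        h, _ = self.rnn(seq)
        return self.out(h)                  # next-element logits at each step

class DualSRN(nn.Module):
    """Weighted mixture of a surface pathway and an abstract pathway."""
    def __init__(self, n_resp, n_hidden=60, w_abstract=0.5):
        super().__init__()
        self.surface = ElmanSRN(n_resp, n_hidden)
        self.abstract = ElmanSRN(n_resp, n_hidden)
        self.w = w_abstract                 # contribution weight of each SRN

    def forward(self, surf_seq, abst_seq):
        return (1 - self.w) * self.surface(surf_seq) + self.w * self.abstract(abst_seq)
```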

13.
Article in English | MEDLINE | ID: mdl-30040652

ABSTRACT

Automatically classifying breast cancer histopathological images is an important task in computer-assisted pathology analysis. However, extracting informative and non-redundant features for histopathological image classification is challenging due to the appearance variability caused by the heterogeneity of the disease and the tissue preparation and staining processes. In this paper, we propose a new feature extractor, called the deep manifold preserving autoencoder, to learn discriminative features from unlabeled data. We then integrate the proposed feature extractor with a softmax classifier to classify breast cancer histopathology images. Specifically, it learns hierarchical features from unlabeled image patches by minimizing the distance between its input and output while simultaneously preserving the geometric structure of the whole input data set. After the unsupervised training, we connect the encoder layers of the trained deep manifold preserving autoencoder with a softmax classifier to construct a cascade model and fine-tune this deep neural network with labeled training data. The proposed method learns discriminative features from a large amount of unlabeled data by preserving the structure of the input data (the manifold learning view) and minimizing reconstruction error (the deep learning view). Extensive experiments on the public breast cancer dataset (BreaKHis) demonstrate the effectiveness of the proposed method.
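A minimal sketch of such an objective, assuming the manifold term is a neighborhood-graph penalty (one common realization; the paper's exact formulation may differ): reconstruction error plus a term keeping k-nearest-neighbor patches close in the latent space.

```python
import torch
import torch.nn as nn

class ManifoldAE(nn.Module):
    def __init__(self, d_in=28 * 28 * 3, d_hid=256, d_lat=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU(),
                                 nn.Linear(d_hid, d_lat))
        self.dec = nn.Sequential(nn.Linear(d_lat, d_hid), nn.ReLU(),
                                 nn.Linear(d_hid, d_in))

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

def manifold_loss(model, x, W, lam=0.1):
    # W[i, j] = 1 if patches i and j are k-NN neighbours in input space
    # (precomputed on the raw patches); 0 otherwise
    recon, z = model(x)
    rec = ((recon - x) ** 2).mean()
    dists = torch.cdist(z, z) ** 2                  # pairwise latent distances
    manifold = (W * dists).sum() / W.sum().clamp(min=1)
    return rec + lam * manifold
```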


Assuntos
Neoplasias da Mama , Interpretação de Imagem Assistida por Computador/métodos , Neoplasias da Mama/classificação , Neoplasias da Mama/diagnóstico por imagem , Neoplasias da Mama/patologia , Aprendizado Profundo , Feminino , Histocitoquímica , Humanos
14.
Int J Comput Assist Radiol Surg ; 13(2): 179-191, 2018 Feb.
Article in English | MEDLINE | ID: mdl-28861708

ABSTRACT

PURPOSE: Cell nuclei classification in breast cancer histopathology images plays an important role in effective diagnosis, since breast cancer can often be characterized by its expression in cell nuclei. However, due to the small and variable sizes of cell nuclei and heavy noise in histopathology images, traditional machine learning methods cannot achieve desirable recognition accuracy. To address this challenge, this paper presents a novel deep neural network that performs representation learning and cell nuclei recognition in an end-to-end manner. METHODS: The proposed model hierarchically maps raw medical images into a latent space in which robustness is achieved by employing a stacked denoising autoencoder. A supervised classifier is further developed to improve the discrimination of the model by maximizing inter-subject separability in the latent space. The proposed method involves a cascade model that jointly learns a set of nonlinear mappings and a classifier from the given raw medical images. Such a joint learning strategy makes it possible to obtain discriminative features, leading to better recognition performance. RESULTS: Extensive experiments with benign and malignant breast cancer datasets were conducted to verify the effectiveness of the proposed method. The method obtained better performance than other feature extraction methods and a higher recognition rate than seven other classification methods. CONCLUSIONS: We propose an end-to-end DNN model for nuclei and non-nuclei classification of histopathology images. The results demonstrate that the proposed method achieves promising performance in cell nuclei classification and is well suited to this task.
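As a rough illustration, a stacked denoising autoencoder layer pretrains by reconstructing clean inputs from corrupted ones; the encoders are then stacked under a softmax classifier and fine-tuned on labels. The layer sizes and masking-noise level below are assumptions.

```python
import torch
import torch.nn as nn

class DenoisingLayer(nn.Module):
    def __init__(self, d_in, d_out, noise=0.2):
        super().__init__()
        self.noise = noise
        self.enc = nn.Linear(d_in, d_out)
        self.dec = nn.Linear(d_out, d_in)

    def pretrain_loss(self, x):
        corrupted = x * (torch.rand_like(x) > self.noise)  # masking noise
        recon = self.dec(torch.sigmoid(self.enc(corrupted)))
        return ((recon - x) ** 2).mean()      # reconstruct the *clean* input

    def forward(self, x):
        return torch.sigmoid(self.enc(x))

# after layer-wise pretraining, stack the encoders under a softmax classifier
layers = [DenoisingLayer(32 * 32, 512), DenoisingLayer(512, 128)]
classifier = nn.Sequential(*layers, nn.Linear(128, 2))   # nuclei vs. non-nuclei
```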


Assuntos
Neoplasias da Mama/diagnóstico , Núcleo Celular/patologia , Aprendizado de Máquina , Redes Neurais de Computação , Algoritmos , Neoplasias da Mama/patologia , Núcleo Celular/classificação , Diagnóstico por Computador , Feminino , Humanos , Processamento de Imagem Assistida por Computador , Reprodutibilidade dos Testes , Software