Results 1 - 5 of 5
1.
Med Image Anal; 84: 102704, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36473414

ABSTRACT

Despite technological and medical advances, the detection, interpretation, and treatment of cancer based on imaging data continue to pose significant challenges. These include inter-observer variability, class imbalance, dataset shifts, inter- and intra-tumour heterogeneity, malignancy determination, and treatment effect uncertainty. Given the recent advancements in image synthesis, Generative Adversarial Networks (GANs), and adversarial training, we assess the potential of these technologies to address a number of key challenges of cancer imaging. We categorise these challenges into (a) data scarcity and imbalance, (b) data access and privacy, (c) data annotation and segmentation, (d) cancer detection and diagnosis, and (e) tumour profiling, treatment planning and monitoring. Based on our analysis of 164 publications that apply adversarial training techniques in the context of cancer imaging, we highlight multiple underexplored solutions with research potential. We further contribute the Synthesis Study Trustworthiness Test (SynTRUST), a meta-analysis framework for assessing the validation rigour of medical image synthesis studies. SynTRUST is based on 26 concrete measures of thoroughness, reproducibility, usefulness, scalability, and tenability. Based on SynTRUST, we analyse 16 of the most promising cancer imaging challenge solutions and observe generally high validation rigour, but also identify several desirable improvements. With this work, we strive to bridge the gap between the needs of the clinical cancer imaging community and the current and prospective research on data synthesis and adversarial networks in the artificial intelligence community.


Subjects
Image Processing, Computer-Assisted; Neoplasms; Humans; Image Processing, Computer-Assisted/methods; Artificial Intelligence; Reproducibility of Results; Prospective Studies; Neoplasms/diagnostic imaging; Magnetic Resonance Imaging/methods
2.
J Med Imaging (Bellingham); 10(6): 061403, 2023 Nov.
Article in English | MEDLINE | ID: mdl-36814939

ABSTRACT

Purpose: Deep learning has shown great promise as the backbone of clinical decision support systems. Synthetic data generated by generative models can enhance the performance and capabilities of data-hungry deep learning models. However, (1) the availability of (synthetic) datasets is limited and (2) generative models are complex to train, which hinders their adoption in research and clinical applications. To reduce this entry barrier, we explore generative model sharing to allow more researchers to access, generate, and benefit from synthetic data. Approach: We propose medigan, a one-stop shop for pretrained generative models implemented as an open-source, framework-agnostic Python library. After gathering end-user requirements, design decisions based on usability, technical feasibility, and scalability are formulated. Subsequently, we implement medigan based on modular components for generative model (i) execution, (ii) visualization, (iii) search & ranking, and (iv) contribution. We integrate pretrained models with applications across modalities such as mammography, endoscopy, x-ray, and MRI. Results: The scalability and design of the library are demonstrated by its growing number of integrated and readily usable pretrained generative models, which include 21 models utilizing nine different generative adversarial network architectures trained on 11 different datasets. We further analyze three medigan applications, which include (a) enabling community-wide sharing of restricted data, (b) investigating generative model evaluation metrics, and (c) improving clinical downstream tasks. In (b), we extract Fréchet inception distances (FID), demonstrating FID variability based on image normalization and radiology-specific feature extractors. Conclusion: medigan allows researchers and developers to create, increase, and domain-adapt their training data in just a few lines of code. We show medigan's viability as a platform for generative model sharing, capable of enriching and accelerating the development of clinical machine learning models. Our multi-model synthetic data experiments uncover standards for assessing and reporting metrics, such as FID, in image synthesis studies.
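
The library usage described above reduces to a few lines of Python. The snippet below is a minimal sketch following medigan's documented pattern of instantiating a Generators object and calling generate(); the model_id shown is illustrative and should be replaced with an identifier from the library's model catalogue.

```python
# Minimal sketch of sampling synthetic images with medigan (pip install medigan).
# The model_id below is illustrative; pick an identifier from the library's
# model catalogue before running.
from medigan import Generators

generators = Generators()

# Generate a handful of synthetic samples with one of the integrated GANs.
generators.generate(
    model_id="00001_DCGAN_MMG_CALC_ROI",  # illustrative model identifier
    num_samples=8,
)
```

The FID analysis in (b) relies on the standard Fréchet distance between Gaussian fits of real and synthetic feature distributions. As a reference point for how such a metric is computed, here is a plain NumPy/SciPy sketch that is independent of any particular feature extractor or normalization scheme:

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_real, feats_fake):
    """Fréchet distance between two feature sets of shape (n_samples, n_features)."""
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(cov_r @ cov_f)   # matrix square root of the covariance product
    if np.iscomplexobj(covmean):
        covmean = covmean.real       # discard small imaginary components
    return float(np.sum((mu_r - mu_f) ** 2) + np.trace(cov_r + cov_f - 2.0 * covmean))
```

Because the resulting score depends on how the images are normalized and which feature extractor produces feats_real and feats_fake, reporting those choices alongside the FID value is part of what the study argues for.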

3.
Artif Intell Med; 132: 102386, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36207090

ABSTRACT

Computer-aided detection systems based on deep learning have shown great potential in breast cancer detection. However, the lack of domain generalization of artificial neural networks is an important obstacle to their deployment in changing clinical environments. In this study, we explored the domain generalization of deep learning methods for mass detection in digital mammography and analyzed in depth the sources of domain shift in a large-scale multi-center setting. To this end, we compared the performance of eight state-of-the-art detection methods, including Transformer-based models, trained in a single domain and tested in five unseen domains. Moreover, a single-source mass detection training pipeline was designed to improve domain generalization without requiring images from the new domain. The results show that our workflow generalized better than state-of-the-art transfer-learning-based approaches in four out of five domains, while reducing the domain shift caused by the different acquisition protocols and scanner manufacturers. Subsequently, an extensive analysis was performed to identify the covariate shifts with the greatest effects on detection performance, such as those due to differences in patient age, breast density, mass size, and mass malignancy. Ultimately, this comprehensive study provides key insights and best practices for future research on domain generalization in deep-learning-based breast cancer detection.


Subjects
Breast Neoplasms; Deep Learning; Breast Neoplasms/diagnostic imaging; Female; Humans; Machine Learning; Mammography/methods; Neural Networks, Computer
4.
Front Oncol; 12: 1044496, 2022.
Article in English | MEDLINE | ID: mdl-36755853

ABSTRACT

Computer-aided detection systems based on deep learning have shown good performance in breast cancer detection. However, high-density breasts show poorer detection performance, since dense tissues can mask or even simulate masses. Therefore, the sensitivity of mammography for breast cancer detection can be reduced by more than 20% in dense breasts. Additionally, extremely dense breasts have been reported to carry an increased risk of cancer compared to low-density breasts. This study aims to improve mass detection performance in high-density breasts by using synthetic high-density full-field digital mammograms (FFDM) as data augmentation during breast mass detection model training. To this end, a total of five cycle-consistent GAN (CycleGAN) models were trained on three FFDM datasets for low-to-high-density image translation in high-resolution mammograms. The training images were split by BI-RADS breast density categories, with BI-RADS A corresponding to almost entirely fatty breasts and BI-RADS D to extremely dense breasts. Our results showed that the proposed data augmentation technique improved the sensitivity and precision of mass detection in models trained with small datasets and improved the domain generalization of models trained with large databases. In addition, the clinical realism of the synthetic images was evaluated in a reader study involving two expert radiologists and one surgical oncologist.
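
The backbone of the low-to-high-density translation described above is CycleGAN's cycle-consistency constraint: a mammogram translated from the low-density domain to the high-density domain and back should reconstruct the original image. The PyTorch sketch below illustrates only that loss term under simplifying assumptions; the tiny convolutional generators are placeholders, not the architectures trained in the study.

```python
import torch
import torch.nn as nn

# Placeholder generators standing in for the full CycleGAN generators used on
# high-resolution FFDM images: G_low2high maps low-density -> high-density
# mammograms, G_high2low maps the reverse direction.
G_low2high = nn.Sequential(nn.Conv2d(1, 1, kernel_size=3, padding=1), nn.Tanh())
G_high2low = nn.Sequential(nn.Conv2d(1, 1, kernel_size=3, padding=1), nn.Tanh())

l1 = nn.L1Loss()
lambda_cycle = 10.0  # illustrative cycle-consistency weight

# A toy batch of normalized low-density and high-density patches.
real_low = torch.randn(2, 1, 256, 256)
real_high = torch.randn(2, 1, 256, 256)

# Forward cycle: low -> high -> low; backward cycle: high -> low -> high.
rec_low = G_high2low(G_low2high(real_low))
rec_high = G_low2high(G_high2low(real_high))

# Cycle-consistency loss: images translated there and back should match the originals.
cycle_loss = lambda_cycle * (l1(rec_low, real_low) + l1(rec_high, real_high))
# In full CycleGAN training this term is added to the adversarial losses of the
# two domain discriminators before back-propagation.
print(cycle_loss.item())
```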

5.
Phys Med; 83: 25-37, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33684723

ABSTRACT

The vast amount of data produced by today's medical imaging systems has led medical professionals to turn to novel technologies in order to efficiently handle their data and exploit the rich information present in them. In this context, artificial intelligence (AI) is emerging as one of the most prominent solutions, promising to revolutionise everyday clinical practice and medical research. The pillar supporting the development of reliable and robust AI algorithms is the appropriate preparation of the medical images to be used by the AI-driven solutions. Here, we provide a comprehensive guide to the steps necessary to prepare medical images prior to developing or applying AI algorithms. The main steps involved in a typical medical image preparation pipeline include: (i) image acquisition at clinical sites, (ii) image de-identification to remove personal information and protect patient privacy, (iii) data curation to control for image and associated information quality, (iv) image storage, and (v) image annotation. A plethora of open-access tools exists to perform each of the aforementioned tasks, and these are reviewed here. Furthermore, we detail medical image repositories covering different organs and diseases. Such repositories are constantly growing and being enriched with the advent of big data. Lastly, we offer directions for future work in this rapidly evolving field.
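
Among the pipeline steps listed above, de-identification (ii) is the one most directly tied to patient privacy. As a minimal illustration of what such a step can look like for DICOM data, the sketch below uses pydicom to blank a few direct identifiers and strip private tags; a production workflow would follow the full DICOM PS3.15 confidentiality profiles and the dedicated tools reviewed in the paper.

```python
import pydicom

def deidentify_dicom(in_path: str, out_path: str) -> None:
    """Blank a handful of direct identifiers in a DICOM file (illustrative only)."""
    ds = pydicom.dcmread(in_path)

    # Blank common patient-identifying attributes when they are present.
    for tag_name in ("PatientName", "PatientID", "PatientBirthDate",
                     "PatientAddress", "ReferringPhysicianName"):
        if tag_name in ds:
            setattr(ds, tag_name, "")

    # Drop vendor-specific private tags, which may embed identifying information.
    ds.remove_private_tags()

    ds.save_as(out_path)

# Hypothetical usage:
# deidentify_dicom("raw/case_001.dcm", "deid/case_001.dcm")
```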


Subjects
Algorithms; Artificial Intelligence; Big Data; Humans