Results 1 - 20 of 55
1.
Med Image Anal; 95: 103206, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38776844

ABSTRACT

The correct interpretation of breast density is important in the assessment of breast cancer risk. AI has been shown to be capable of accurately predicting breast density; however, because of differences in imaging characteristics across mammography systems, models built using data from one system do not generalize well to others. Although federated learning (FL) has emerged as a way to improve the generalizability of AI without the need to share data, the best way to preserve features from all training data during FL is an active area of research. To explore FL methodology, the breast density classification FL challenge was hosted in partnership with the American College of Radiology, Harvard Medical School's Mass General Brigham, the University of Colorado, NVIDIA, and the National Institutes of Health National Cancer Institute. Challenge participants submitted Docker containers capable of implementing FL across three simulated medical facilities, each holding a unique large mammography dataset. The breast density FL challenge ran from June 15 to September 5, 2022, attracting seven finalists from around the world. The winning FL submission reached a linear kappa score of 0.653 on the challenge test data and 0.413 on an external testing dataset, performing comparably to a model trained on the same data in a central location.
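For readers unfamiliar with the metric, the sketch below computes a linearly weighted Cohen's kappa with scikit-learn. The four-level ordinal encoding of the BI-RADS density categories and the example labels are assumptions for illustration, not challenge data.

```python
# Minimal sketch of the challenge's scoring metric: linearly weighted
# Cohen's kappa over four ordinal density categories (0-3 encoding assumed).
from sklearn.metrics import cohen_kappa_score

y_true = [0, 1, 2, 3, 2, 1, 0, 3]   # reference density classes (toy data)
y_pred = [0, 1, 2, 2, 2, 0, 0, 3]   # model predictions (toy data)

# weights='linear' penalizes errors in proportion to their ordinal distance,
# matching the "linear kappa" reported by the challenge.
kappa = cohen_kappa_score(y_true, y_pred, weights="linear")
print(f"linear kappa: {kappa:.3f}")
```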

2.
Sensors (Basel); 24(7), 2024 Mar 24.
Article in English | MEDLINE | ID: mdl-38610288

ABSTRACT

Generative models are used as an alternative data augmentation technique to alleviate the data scarcity problem faced in the medical imaging field. Diffusion models have attracted special attention owing to their innovative generation approach, the high quality of the generated images, and their relatively simple training process compared with Generative Adversarial Networks. Still, the implementation of such models in the medical domain remains at an early stage. In this work, we explore the use of diffusion models for the generation of high-quality, full-field digital mammograms using state-of-the-art conditional diffusion pipelines. Additionally, we propose using stable diffusion models to inpaint synthetic mass-like lesions on healthy mammograms. We introduce MAM-E, a pipeline of generative models for high-quality mammography synthesis controlled by a text prompt and capable of generating synthetic mass-like lesions in specific regions of the breast. Finally, we provide a quantitative and qualitative assessment of the generated images and easy-to-use graphical user interfaces for mammography synthesis.
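As a rough illustration of the inpainting idea, and not the authors' MAM-E pipeline or weights, a generic Stable Diffusion inpainting checkpoint from the diffusers library can be prompted to fill a masked breast region; the checkpoint name, file paths, and prompt below are placeholders.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Generic inpainting checkpoint as a stand-in; MAM-E fine-tunes its own models.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

mammogram = Image.open("healthy_mammogram.png").convert("RGB")  # placeholder path
mask = Image.open("lesion_region_mask.png").convert("L")        # white = region to fill

result = pipe(
    prompt="a mammogram with a mass-like lesion",  # illustrative prompt
    image=mammogram,
    mask_image=mask,
).images[0]
result.save("synthetic_lesion_mammogram.png")
```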


Subjects
Head; Mammography; Diffusion; Health Status
3.
IEEE Trans Med Imaging; 43(3): 954-965, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37878457

ABSTRACT

Representational transfer from publicly available models is a promising technique for improving medical image classification, especially on long-tailed datasets with rare diseases. However, existing methods often overlook the frequency-dependent behavior of these models, limiting their effectiveness in transferring representations and generalizing to rare diseases. In this paper, we propose FoPro-KD, a novel framework that leverages the power of frequency patterns learned by frozen pre-trained models to enhance their transferability and compression, presenting a few unique insights: 1) We demonstrate that leveraging representations from publicly available pre-trained models can substantially improve performance, specifically for rare classes, even when utilizing representations from a smaller pre-trained model. 2) We observe that pre-trained models exhibit frequency preferences, which we explore using our proposed Fourier Prompt Generator (FPG), allowing us to manipulate specific frequencies in the input image and enhance the discriminative representational transfer. 3) By amplifying or diminishing these frequencies in the input image, we enable Effective Knowledge Distillation (EKD), which facilitates the transfer of knowledge from pre-trained models to smaller models. Through extensive experiments in long-tailed gastrointestinal image recognition and skin lesion classification, where rare diseases are prevalent, our FoPro-KD framework outperforms existing methods, enabling more accessible medical models for rare disease classification. Code is available at https://github.com/xmed-lab/FoPro-KD.
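To make the frequency-manipulation idea concrete, the toy sketch below amplifies or attenuates a radial band of an image's spectrum with torch.fft. In FoPro-KD the perturbation is learned by the FPG rather than hand-set, so the band limits and gains here are purely illustrative.

```python
import torch

def scale_frequency_band(img, r_lo, r_hi, gain):
    """Scale a radial band of the image's frequency spectrum by `gain`.

    A hand-set analogue of frequency manipulation; FoPro-KD's FPG learns
    the perturbation instead of using fixed band limits like these.
    """
    f = torch.fft.fftshift(torch.fft.fft2(img))
    h, w = img.shape[-2:]
    yy, xx = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    r = torch.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    scale = torch.ones_like(r)
    scale[(r >= r_lo) & (r < r_hi)] = gain
    return torch.fft.ifft2(torch.fft.ifftshift(f * scale)).real

x = torch.rand(224, 224)
low_boosted = scale_frequency_band(x, 0, 20, 1.5)    # amplify low frequencies
high_damped = scale_frequency_band(x, 60, 160, 0.5)  # diminish high frequencies
```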


Subjects
Data Compression; Rare Diseases; Humans; Skin
4.
J Med Imaging (Bellingham); 10(Suppl 2): S22401, 2023 Feb.
Article in English | MEDLINE | ID: mdl-37705763

ABSTRACT

The editorial introduces the JMI Special Issue on Advances in Breast Imaging.

5.
J Med Imaging (Bellingham); 10(5): 051807, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37082509

ABSTRACT

Purpose: Population-based screening programs for the early detection of breast cancer have significantly reduced mortality in women, but they are resource intensive in terms of time, cost, and workload, and they still have limitations, mainly due to the use of 2D imaging techniques, which may cause tissue overlap, and to interobserver variability. Artificial intelligence (AI) systems may be a valuable tool to assist radiologists when reading and classifying mammograms based on the malignancy of the detected lesions. However, several factors can influence the outcome of a mammogram and thus also the detection capability of an AI system. The aim of our work is to analyze the robustness of the diagnostic ability of an AI system designed for breast cancer detection. Approach: Mammograms from a population-based screening program were scored with the AI system. Sensitivity and specificity, summarized as the area under the receiver operating characteristic (ROC) curve, were obtained as a function of the mammography unit manufacturer, demographic characteristics, and several factors that may affect image quality (age, breast thickness and density, compression applied, beam quality, and delivered dose). Results: The area under the curve (AUC) of the scoring ROC curve was 0.92 (95% confidence interval: 0.89-0.95). It showed no dependence on any of the parameters considered, as the differences in the AUC across interval values were not statistically significant. Conclusion: The results suggest that the AI system analyzed in our work has a robust diagnostic capability and that its accuracy is independent of the studied parameters.
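A common way to attach a 95% confidence interval to an AUC, as reported above, is case-level bootstrap resampling. The sketch below shows this with scikit-learn on synthetic labels and scores; the abstract does not state which CI method the authors actually used.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Synthetic stand-ins: y = ground-truth malignancy, s = AI suspicion score.
y = rng.integers(0, 2, 500)
s = y * 0.8 + rng.normal(0, 0.5, 500)

auc = roc_auc_score(y, s)
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(y), len(y))   # resample cases with replacement
    if len(np.unique(y[idx])) < 2:          # a resample needs both classes
        continue
    boot.append(roc_auc_score(y[idx], s[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC = {auc:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```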

6.
JAMA Netw Open; 6(2): e230524, 2023 Feb 1.
Article in English | MEDLINE | ID: mdl-36821110

ABSTRACT

Importance: An accurate and robust artificial intelligence (AI) algorithm for detecting cancer in digital breast tomosynthesis (DBT) could significantly improve detection accuracy and reduce health care costs worldwide. Objectives: To make training and evaluation data for the development of AI algorithms for DBT analysis available, to develop well-defined benchmarks, and to create publicly available code for existing methods. Design, Setting, and Participants: This diagnostic study is based on a multi-institutional international grand challenge in which research teams developed algorithms to detect lesions in DBT. A data set of 22 032 reconstructed DBT volumes was made available to research teams. Phase 1, in which teams were provided 700 scans from the training set, 120 from the validation set, and 180 from the test set, took place from December 2020 to January 2021, and phase 2, in which teams were given the full data set, took place from May to July 2021. Main Outcomes and Measures: Overall performance was evaluated by mean sensitivity for biopsied lesions using only DBT volumes with biopsied lesions; ties were broken by including all DBT volumes. Results: A total of 8 teams participated in the challenge. The team with the highest mean sensitivity for biopsied lesions was the NYU B-Team, at 0.957 (95% CI, 0.924-0.984), and the second-place team, ZeDuS, had a mean sensitivity of 0.926 (95% CI, 0.881-0.964). When the results were aggregated, the mean sensitivity for all submitted algorithms was 0.879; for those that participated in phase 2, it was 0.926. Conclusions and Relevance: In this diagnostic study, an international competition produced AI algorithms with high sensitivity for detecting lesions on DBT images. A standardized performance benchmark for the detection task using publicly available clinical imaging data was released, with detailed descriptions and analyses of submitted algorithms accompanied by a public release of their predictions and code for selected methods. These resources will serve as a foundation for future research on computer-assisted diagnosis methods for DBT, significantly lowering the barrier to entry for new researchers.
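The exact scoring protocol is not spelled out in the abstract; one plausible reading of "mean sensitivity" is an FROC-style average of sensitivity at several false-positive-per-volume operating points, sketched below with assumed operating points of 1-4 false positives per volume.

```python
import numpy as np

def froc_mean_sensitivity(scores, is_tp, n_lesions, n_volumes, fppv=(1, 2, 3, 4)):
    """Mean sensitivity over several false-positives-per-volume points.

    `scores`/`is_tp` describe every candidate detection across the test set;
    the (1, 2, 3, 4) FP/volume operating points are an assumption, since the
    abstract does not state which points were used.
    """
    order = np.argsort(-np.asarray(scores))
    hits = np.asarray(is_tp)[order]
    tps, fps = np.cumsum(hits), np.cumsum(~hits)
    sens = []
    for f in fppv:
        ok = fps <= f * n_volumes        # thresholds meeting this FP budget
        sens.append(tps[ok].max() / n_lesions if ok.any() else 0.0)
    return float(np.mean(sens))

# Toy example: 3 candidates over 2 volumes containing 2 true lesions.
print(froc_mean_sensitivity([0.9, 0.8, 0.3], [True, False, True],
                            n_lesions=2, n_volumes=2))
```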


Subjects
Artificial Intelligence; Breast Neoplasms; Humans; Female; Benchmarking; Mammography/methods; Algorithms; Radiographic Image Interpretation, Computer-Assisted/methods; Breast Neoplasms/diagnostic imaging
7.
Sensors (Basel); 23(4), 2023 Feb 8.
Article in English | MEDLINE | ID: mdl-36850495

ABSTRACT

During the last few years, supervised deep convolutional neural networks have become the state of the art for image recognition tasks. Nevertheless, their performance is strongly tied to the amount and quality of the training data. Acquiring and labeling data is a major challenge that limits their expansion to new applications, especially those with limited data. Recognition of Lego bricks is a clear example of a real-world deep learning application that has been limited by the difficulties associated with data gathering and training. In this work, photo-realistic image synthesis and few-shot fine-tuning are proposed to overcome limited data in the context of Lego brick recognition. Using synthetic images and a limited set of 20 real-world images from a controlled environment, the proposed system is evaluated on controlled and uncontrolled real-world testing datasets. Results show the good performance of the synthetically generated data and how limited data from a controlled domain can be used to fine-tune the synthetically trained model in a few-shot setting without perceptibly narrowing its domain. The system reaches an AP50 value of 91.33% for uncontrolled scenarios and 98.7% for controlled ones.
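AP50 counts a detection as a true positive when its intersection over union (IoU) with an unmatched ground-truth box reaches 0.5. A minimal IoU helper is sketched below; the example boxes are made up.

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

# ~0.14, below the 0.5 threshold, so this pair would count as a false positive.
print(box_iou((0, 0, 10, 10), (5, 5, 15, 15)))
```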

8.
Med Phys; 50(5): 3223-3243, 2023 May.
Article in English | MEDLINE | ID: mdl-36794706

ABSTRACT

PURPOSE: BUS-Set is a reproducible benchmark for breast ultrasound (BUS) lesion segmentation, comprising publicly available images, with the aim of improving future comparisons between machine learning models within the field of BUS. METHOD: Four publicly available datasets were compiled, creating an overall set of 1154 BUS images from five different scanner types. Full dataset details are provided, including clinical labels and detailed annotations. Furthermore, nine state-of-the-art deep learning architectures were selected to form the initial benchmark segmentation results, tested using five-fold cross-validation and MANOVA/ANOVA with Tukey statistical significance tests at a threshold of 0.01. Additional evaluation of these architectures was conducted, exploring possible training bias and the effects of lesion size and type. RESULTS: Of the nine state-of-the-art benchmarked architectures, Mask R-CNN obtained the highest overall results, with the following mean metric scores: Dice score of 0.851, intersection over union of 0.786, and pixel accuracy of 0.975. MANOVA/ANOVA and Tukey test results showed Mask R-CNN to be statistically significantly better than all other benchmarked models (p < 0.01). Moreover, Mask R-CNN achieved the highest mean Dice score of 0.839 on an additional 16-image dataset that contained multiple lesions per image. Further analysis of regions of interest was conducted, assessing Hamming distance, depth-to-width ratio (DWR), circularity, and elongation, which showed that Mask R-CNN's segmentations maintained the most morphological features, with correlation coefficients of 0.888, 0.532, and 0.876 for DWR, circularity, and elongation, respectively. Based on the correlation coefficients, statistical tests indicated that Mask R-CNN was only significantly different from Sk-U-Net. CONCLUSIONS: BUS-Set is a fully reproducible benchmark for BUS lesion segmentation obtained through the use of public datasets and GitHub. Of the state-of-the-art convolutional neural network (CNN)-based architectures, Mask R-CNN achieved the highest performance overall; further analysis indicated that a training bias may have occurred due to the lesion size variation in the dataset. All dataset and architecture details are available at GitHub: https://github.com/corcor27/BUS-Set, which allows for a fully reproducible benchmark.
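The three mean metrics reported for Mask R-CNN can be computed from binary masks as in the sketch below (a straightforward reimplementation, not the BUS-Set evaluation code).

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Dice score, IoU, and pixel accuracy for a pair of binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    total = pred.sum() + gt.sum()
    dice = 2 * inter / total if total else 1.0   # empty masks count as perfect
    iou = inter / union if union else 1.0
    acc = (pred == gt).mean()
    return dice, iou, acc
```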


Subjects
Benchmarking; Neural Networks, Computer; Female; Humans; Ultrasonography, Mammary; Machine Learning; Image Processing, Computer-Assisted/methods
10.
Inform Med Unlocked; 30: 100945, 2022.
Article in English | MEDLINE | ID: mdl-35434261

ABSTRACT

Since the COVID-19 pandemic, several research studies have proposed Deep Learning (DL)-based automated COVID-19 detection, reporting high cross-validation accuracy when classifying COVID-19 patients against normal or other common pneumonia cases. Although the reported outcomes are very high in most cases, these results were obtained without an independent test set from a separate data source. When independent test sets are not utilized, DL models are likely to overfit the training data distribution and to learn dataset-specific artifacts rather than the actual disease characteristics and underlying pathology. This study assesses the promise of such DL methods and datasets by examining the compositions of the available public image datasets and designing different experimental setups. A convolutional neural network-based model, called CVR-Net (COVID-19 Recognition Network), is proposed for conducting comprehensive experiments to validate our hypothesis. The presented end-to-end CVR-Net is a multi-scale, multi-encoder ensemble model that aggregates the outputs of two different encoders and their different scales to produce the final prediction probability. Three classification tasks (2-, 3-, and 4-class) are designed, with train-test datasets drawn from single, multiple, and independent sources. The obtained binary classification accuracy is 99.8% for a single train-test data source, falling to 98.4% and 88.7% when multiple and independent train-test data sources are utilized, respectively. Similar outcomes are observed in the multi-class categorization tasks for single, multiple, and independent data sources, highlighting the challenges in developing DL models with the existing public datasets without an independent test set from a separate source. These results indicate the need for better-designed datasets for developing DL tools applicable in actual clinical settings: a dataset should have an independent test set; for a single machine or hospital source, a more balanced set of images for all the prediction classes; and a balanced composition drawn from several hospitals and demographics. Our source code and model are publicly available for the research community for further improvements.
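A minimal analogue of the multi-encoder idea is sketched below: two ImageNet-style backbones whose softmax outputs are averaged. CVR-Net additionally aggregates multiple scales per encoder, which this sketch omits, and the backbone choices here are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

class TwoEncoderEnsemble(nn.Module):
    """Toy two-encoder ensemble: average the class probabilities of two
    different backbones. Not the CVR-Net architecture, which also fuses
    multiple feature scales per encoder."""
    def __init__(self, n_classes):
        super().__init__()
        self.a = models.resnet18(weights=None)
        self.a.fc = nn.Linear(self.a.fc.in_features, n_classes)
        self.b = models.densenet121(weights=None)
        self.b.classifier = nn.Linear(self.b.classifier.in_features, n_classes)

    def forward(self, x):
        pa = torch.softmax(self.a(x), dim=1)
        pb = torch.softmax(self.b(x), dim=1)
        return (pa + pb) / 2  # averaged prediction probabilities

model = TwoEncoderEnsemble(n_classes=2)
probs = model(torch.rand(1, 3, 224, 224))
```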

11.
Comput Biol Med; 140: 105093, 2021 Nov 30.
Article in English | MEDLINE | ID: mdl-34883343

ABSTRACT

Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) has been recognized as an effective tool for Breast Cancer (BC) diagnosis. Automatic BC analysis from DCE-MRI depends on features extracted particularly from lesions; hence, lesions need to be accurately segmented as a prior step. Given the time and experience required to manually segment lesions in 4D DCE-MRI, automating this task is expected to reduce the workload, reduce observer variability, and improve diagnostic accuracy. In this paper, we propose an automated method for breast lesion segmentation from DCE-MRI based on a U-Net framework. The contributions of this work are a modified U-Net architecture and an analysis of the input DCE information. Specifically, we propose an ensemble method combining three U-Net models, each using a different input combination, which outperforms all individual models and other existing approaches. For evaluation, we use a subset of 46 cases from the TCGA-BRCA dataset, a challenging, publicly available dataset not previously reported for this task. Because the provided annotations are incomplete, we complement them with the help of a radiologist to include secondary lesions that were not originally segmented. The proposed ensemble method obtains a mean Dice Similarity Coefficient (DSC) of 0.680 (0.802 for main lesions), which outperforms state-of-the-art methods on the same dataset, demonstrating the effectiveness of our method given the complexity of the dataset.
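One plausible fusion rule for such an ensemble, averaging the three models' probability maps before thresholding, is sketched below; the abstract does not specify the exact combination operator the authors used.

```python
import numpy as np

def ensemble_mask(prob_maps, threshold=0.5):
    """Fuse per-model sigmoid probability maps into one binary mask.

    `prob_maps` would hold the outputs of the three U-Nets trained on
    different DCE input combinations; simple averaging is an assumption.
    """
    return (np.mean(prob_maps, axis=0) >= threshold).astype(np.uint8)

maps = [np.random.rand(256, 256) for _ in range(3)]  # stand-in model outputs
mask = ensemble_mask(maps)
```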

12.
Sensors (Basel); 21(23), 2021 Dec 3.
Article in English | MEDLINE | ID: mdl-34884094

ABSTRACT

Recently, 6D pose estimation methods have shown robust performance on highly cluttered scenes and under different illumination conditions. However, occlusions remain challenging, with recognition rates decreasing to less than 10% for half-visible objects in some datasets. In this paper, we propose using top-down visual attention and color cues to boost the performance of a state-of-the-art method in occluded scenarios. More specifically, color information is employed to detect potential points in the scene, improve feature matching, and compute more precise fitting scores. The proposed method is evaluated on the Linemod occluded (LM-O), TUD light (TUD-L), Tejani (IC-MI), and Doumanoglou (IC-BIN) datasets, as part of the SiSo BOP benchmark, which includes challenging highly occluded cases, illumination-changing scenarios, and multiple instances. The method is analyzed and discussed for different parameters, color spaces, and metrics. The presented results show the validity of the proposed approach and its robustness against illumination changes and multiple-instance scenarios, especially boosting performance on heavily occluded cases. The proposed solution provides an absolute improvement of up to 30% for occlusion levels between 40% and 50%, outperforming other approaches with a best overall recall of 71% for LM-O, 92% for TUD-L, 99.3% for IC-MI, and 97.5% for IC-BIN.
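As a toy illustration of a color cue in a fitting score (the paper's actual scoring is more involved), one could measure how often the hues of matched scene and model points agree; the hue representation, tolerance, and scoring rule below are illustrative assumptions.

```python
import numpy as np

def hue_consistency(scene_hues, model_hues, tol=0.05):
    """Fraction of matched point pairs whose hues agree within `tol`,
    usable as a color-aware term in a pose fitting score."""
    d = np.abs(scene_hues - model_hues)
    d = np.minimum(d, 1.0 - d)          # hue is circular in [0, 1)
    return float((d <= tol).mean())

score = hue_consistency(np.array([0.02, 0.51, 0.98]),
                        np.array([0.04, 0.50, 0.01]))  # wrap-around pair matches
```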


Subjects
Cues; Lighting; Recognition, Psychology
13.
Artif Intell Med; 111: 102001, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33461693

ABSTRACT

BACKGROUND AND OBJECTIVE: In modern ophthalmology, automated Computer-aided Screening Tools (CSTs) are crucial non-intrusive diagnosis methods, in which accurate segmentation of the Optic Disc (OD) and localization of the OD and Fovea centers are substantial integral parts. However, designing such an automated tool remains challenging due to small dataset sizes, inconsistency in the spatial, texture, and shape information of the OD and Fovea, and the presence of different artifacts. METHODS: This article proposes an end-to-end encoder-decoder network, named DRNet, for the segmentation and localization of the OD and Fovea centers. In our DRNet, we propose a skip connection, named the residual skip connection, to compensate for the spatial information lost due to pooling in the encoder. Unlike the earlier skip connection in U-Net, the proposed skip connection does not directly concatenate low-level feature maps from the encoder's early layers with the corresponding decoder features at the same scale. We validate DRNet using different publicly available datasets: IDRiD, RIMONE, DRISHTI-GS, and DRIVE for OD segmentation; IDRiD and HRF for OD center localization; and IDRiD for Fovea center localization. RESULTS: The proposed DRNet achieves mean Intersection over Union (mIoU) values of 0.845, 0.901, 0.933, and 0.920 for OD segmentation on IDRiD, RIMONE, DRISHTI-GS, and DRIVE, respectively. Our OD segmentation result, in terms of mIoU, outperforms the state of the art on the IDRiD and DRIVE datasets, and it outperforms the state of the art in terms of mean sensitivity on the RIMONE and DRISHTI-GS datasets. DRNet localizes the OD center with mean Euclidean Distances (mED) of 20.23 and 13.34 pixels for the IDRiD and HRF datasets, respectively, outperforming the state of the art by 4.62 pixels on IDRiD. DRNet also localizes the Fovea center with an mED of 41.87 pixels on the IDRiD dataset, outperforming the state of the art by 1.59 pixels on the same dataset. CONCLUSION: As the proposed DRNet exhibits excellent performance even with limited training data and without intermediate intervention, it can be employed to design a better CST system to screen retinal images. Our source code, trained models, and ground-truth heatmaps for OD and Fovea center localization will be made publicly available on GitHub upon publication.
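The abstract does not give the exact layer design, but a plausible sketch of a residual-style skip, which refines the encoder features and adds them to the decoder features instead of concatenating them as in U-Net, is shown below.

```python
import torch
import torch.nn as nn

class ResidualSkip(nn.Module):
    """Guess at a residual skip connection from the abstract, not DRNet code:
    encoder features pass through a small refinement block and are *added*
    to the upsampled decoder features, rather than concatenated."""
    def __init__(self, channels):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, enc_feat, dec_feat):
        # Residual refinement of the encoder map, then additive fusion.
        return dec_feat + enc_feat + self.refine(enc_feat)

skip = ResidualSkip(64)
fused = skip(torch.rand(1, 64, 56, 56), torch.rand(1, 64, 56, 56))
```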


Subjects
Diabetes Mellitus; Diabetic Retinopathy; Ophthalmology; Optic Disk; Algorithms; Artifacts; Diabetic Retinopathy/diagnostic imaging; Humans; Mass Screening; Optic Disk/diagnostic imaging
14.
Artif Intell Med; 107: 101880, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32828439

ABSTRACT

In current breast ultrasound computer-aided diagnosis systems, the radiologist preselects a region of interest (ROI) as an input for computerized breast ultrasound image analysis. This task is time-consuming, and there is inconsistency among human experts. Researchers attempting to automate the process of obtaining the ROIs have relied on image processing and conventional machine learning methods. We propose the use of a deep learning method for breast ultrasound ROI detection and lesion localization. We use Faster R-CNN with Inception-ResNet-v2, one of the most accurate object detection frameworks, as our deep learning network. Due to the lack of datasets, we use transfer learning and propose a new 3-channel artificial RGB method to improve the overall performance. We evaluate and compare the performance of our proposed methods on two datasets (namely, Dataset A and Dataset B), both within the individual datasets and on the composite dataset. We report the lesion detection results with two types of analysis: (1) detected point (center of the segmented region or of the detected bounding box) and (2) Intersection over Union (IoU). Our results demonstrate that the proposed methods achieve comparable results on the detected point but a notable improvement on IoU. In addition, our proposed 3-channel artificial RGB method improves the recall on Dataset A. Finally, we outline some future directions for the research.
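The specific channel transforms behind the 3-channel artificial RGB method are not given in the abstract; the sketch below illustrates the general idea with assumed channels (original, CLAHE-enhanced, median-filtered) so an RGB-pretrained detector can consume grayscale ultrasound.

```python
import cv2
import numpy as np

def artificial_rgb(gray):
    """Stack one uint8 grayscale ultrasound image into three channels.

    The channel choices here (original, contrast-enhanced, denoised) are
    illustrative assumptions, not the authors' published transforms.
    """
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(gray)
    blurred = cv2.medianBlur(gray, 5)
    return np.dstack([gray, clahe, blurred])  # H x W x 3, detector-ready
```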


Subjects
Deep Learning; Diagnosis, Computer-Assisted; Female; Humans; Image Processing, Computer-Assisted; Machine Learning; Ultrasonography, Mammary
15.
Comput Biol Med; 120: 103738, 2020 May.
Article in English | MEDLINE | ID: mdl-32421644

ABSTRACT

BACKGROUND AND OBJECTIVE: Automatic segmentation of skin lesions is considered a crucial step in Computer-aided Diagnosis (CAD) systems for melanoma detection. Despite its significance, skin lesion segmentation remains an unsolved challenge due to the variability of lesions in color, texture, and shape and their indistinguishable boundaries. METHODS: In this study, we present a new automatic semantic segmentation network for robust skin lesion segmentation, named the Dermoscopic Skin Network (DSNet). To reduce the number of parameters and make the network lightweight, we use depth-wise separable convolutions in lieu of standard convolutions to project the learned discriminating features onto the pixel space at different stages of the encoder. Additionally, we implement both a U-Net and a Fully Convolutional Network (FCN8s) to compare against the proposed DSNet. RESULTS: We evaluate our proposed model on two publicly available datasets, namely ISIC-2017 and PH2. The obtained mean Intersection over Union (mIoU) is 77.5% and 87.0% for the ISIC-2017 and PH2 datasets, respectively, outperforming the ISIC-2017 challenge winner by 1.0% with respect to mIoU. Our proposed network also outperforms U-Net and FCN8s by 3.6% and 6.8%, respectively, with respect to mIoU on the ISIC-2017 dataset. CONCLUSION: Our network for skin lesion segmentation outperforms the other methods discussed in the article and is able to provide better segmentation masks on two different test datasets, which can lead to better performance in melanoma detection. Our trained model, along with the source code and predicted masks, is made publicly available.
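A depth-wise separable convolution factorizes a standard convolution into a per-channel spatial convolution followed by a 1x1 pointwise convolution, cutting the parameter count roughly by a factor of the kernel area. A minimal PyTorch sketch (not the DSNet code):

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """3x3 depth-wise convolution (groups = in_channels) followed by a
    1x1 pointwise convolution that mixes channels."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# 3x3 standard conv, 64->128: 73,728 weights; separable: 576 + 8,192 = 8,768.
```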


Subjects
Melanoma; Skin Diseases; Skin Neoplasms; Dermoscopy; Humans; Melanoma/diagnostic imaging; Neural Networks, Computer; Skin/diagnostic imaging; Skin Neoplasms/diagnostic imaging
16.
Comput Biol Med; 121: 103774, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32339095

ABSTRACT

In recent years, the use of Convolutional Neural Networks (CNNs) in medical imaging has shown improved performance in terms of mass detection and classification compared with current state-of-the-art methods. This paper proposes a fully automated framework to detect masses in Full-Field Digital Mammograms (FFDM). It is based on the Faster Region-based Convolutional Neural Network (Faster R-CNN) model and is applied to detecting masses in the large-scale OPTIMAM Mammography Image Database (OMI-DB), which consists of ∼80,000 FFDMs, mainly from Hologic and General Electric (GE) scanners. This research is the first to benchmark the performance of deep learning on OMI-DB. The proposed framework obtained a True Positive Rate (TPR) of 0.93 at 0.78 False Positives per Image (FPI) on FFDMs from the Hologic scanner. Transfer learning is then used to adapt the Faster R-CNN model trained on Hologic images to detect masses in smaller databases containing FFDMs from the GE scanner and the public INbreast dataset (Siemens scanner). The detection framework obtained a TPR of 0.91 ± 0.06 at 1.69 FPI for images from the GE scanner and also showed higher performance compared with state-of-the-art methods on the INbreast dataset, obtaining a TPR of 0.99 ± 0.03 at 1.17 FPI for malignant masses and 0.85 ± 0.08 at 1.0 FPI for benign masses, showing the potential to be used as part of an advanced CAD system for breast cancer screening.
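A common way to set up such a detector, sketched here with torchvision's standard Faster R-CNN recipe rather than the paper's exact configuration (the ResNet-50 FPN backbone is a stand-in), is to swap the box-predictor head for a two-class one (background and mass):

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Start from a detector pretrained on natural images (transfer learning),
# then replace the classification head with one for {background, mass}.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)
# The model is then fine-tuned on mammograms with mass bounding-box labels.
```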


Subjects
Breast Neoplasms; Deep Learning; Breast Neoplasms/diagnostic imaging; Diagnosis, Computer-Assisted; Early Detection of Cancer; Female; Humans; Mammography; Neural Networks, Computer
17.
J Med Imaging (Bellingham); 6(3): 031401, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31037248

ABSTRACT

This guest editorial introduces the special section on Advances in Breast Imaging.

18.
Med Image Anal; 54: 76-87, 2019 May.
Article in English | MEDLINE | ID: mdl-30836308

ABSTRACT

Breast magnetic resonance imaging (MRI) and X-ray mammography are two image modalities widely used for the early detection and diagnosis of breast diseases in women. The combination of these modalities, traditionally done using intensity-based registration algorithms, leads to more accurate diagnosis and treatment, thanks to the capability of co-localizing lesions and suspicious areas between the two image modalities. In this work, we present the first attempt to register breast MRI and X-ray mammographic images using intensity gradients as the similarity measure. Specifically, a patient-specific biomechanical model of the breast, extracted from the MRI image, is used to mimic the mammographic acquisition. The intensity gradients of the glandular tissue are directly projected from the 3D MRI volume to the 2D mammographic space, and two different gradient-based metrics are tested to drive the registration: the normalized cross-correlation of the scalar gradient values and the gradient correlation of the vectorial gradients. We compare these two approaches with an intensity-based algorithm, in which the MRI volume is transformed into a synthetic computed tomography (pseudo-CT) image using the partial volume effect obtained from the glandular tissue segmentation performed by means of an Expectation-Maximization algorithm. This allows us to obtain the digitally reconstructed radiographs by direct intensity projection. The best results are obtained using the scalar gradient approach along with a transversely isotropic material model, with a target registration error (TRE), in millimeters, of 5.65 ± 2.76 for CC- and 7.83 ± 3.04 for MLO-mammograms, while the TRE is 7.33 ± 3.62 in the 3D MRI. We also evaluate the effect of the glandularity of the breast as well as the landmark position on the TRE, obtaining moderate correlation values (0.65 and 0.77, respectively), and conclude that these aspects need to be considered to increase accuracy in future approaches.
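The scalar-gradient similarity, the normalized cross-correlation of gradient magnitudes, can be written compactly; the finite-difference gradients below are an assumption, since the abstract does not state how the gradients were computed.

```python
import numpy as np

def gradient_ncc(fixed, moving):
    """Normalized cross-correlation between the gradient magnitudes of two
    2D images, usable as a registration similarity (a sketch of the
    scalar-gradient metric, using simple finite differences)."""
    def grad_mag(img):
        gy, gx = np.gradient(img.astype(float))
        return np.hypot(gx, gy)

    a, b = grad_mag(fixed).ravel(), grad_mag(moving).ravel()
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```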


Subjects
Breast/diagnostic imaging; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Magnetic Resonance Imaging; Mammography; Multimodal Imaging; Algorithms; Anatomic Landmarks; Artifacts; Breast Neoplasms/diagnostic imaging; Contrast Media; Female; Humans; Imaging, Three-Dimensional
19.
Artif Intell Med; 95: 64-81, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30195984

ABSTRACT

In recent years, deep convolutional neural networks (CNNs) have shown record-shattering performance in a variety of computer vision problems, such as visual object recognition, detection, and segmentation. These methods have also been utilized in the medical image analysis domain for lesion segmentation, anatomical segmentation, and classification. We present an extensive literature review of CNN techniques applied to brain magnetic resonance imaging (MRI) analysis, focusing on the architectures, pre-processing, data preparation, and post-processing strategies used in these works. The aim of this study is three-fold. Our primary goal is to report how different CNN architectures have evolved, to discuss state-of-the-art strategies, to condense their results obtained on public datasets, and to examine their pros and cons. Second, this paper is intended to be a detailed reference for research activity on deep CNNs for brain MRI analysis. Finally, we present a perspective on the future of CNNs, in which we hint at some of the research directions for subsequent years.


Subjects
Brain/diagnostic imaging; Magnetic Resonance Imaging/methods; Neural Networks, Computer; Humans; Image Processing, Computer-Assisted/methods
20.
J Med Imaging (Bellingham); 6(1): 011007, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30310824

ABSTRACT

In multistage processing for automated breast ultrasound lesion recognition, each stage depends on the performance of the prior stages. To improve on the current state of the art, we propose end-to-end deep learning approaches using fully convolutional networks (FCNs), namely FCN-AlexNet, FCN-32s, FCN-16s, and FCN-8s, for semantic segmentation of breast lesions. We use pretrained models based on ImageNet and transfer learning to overcome the issue of data deficiency. We evaluate our results on two datasets, which consist of a total of 113 malignant and 356 benign lesions. To assess performance, we conduct fivefold cross-validation using the following split: 70% training data, 10% validation data, and 20% testing data. The results show that our proposed method performed better on benign lesions, with a top mean Dice score of 0.7626 with FCN-16s, compared with malignant lesions, with a top mean Dice score of 0.5484 with FCN-8s. When considering the number of images with a Dice score > 0.5, 89.6% of the benign lesions were successfully segmented and correctly recognized, whereas 60.6% of the malignant lesions were successfully segmented and correctly recognized. We conclude the paper by addressing the future challenges of the work.
