Results 1 - 20 of 37
1.
Sci Data ; 11(1): 512, 2024 May 17.
Article in English | MEDLINE | ID: mdl-38760418

ABSTRACT

Given the high prevalence of lung cancer, an accurate diagnosis is crucial. In the diagnosis process, radiologists play an important role by examining numerous radiology exams to identify different types of nodules. To aid clinicians' analytical efforts, computer-aided diagnosis can streamline the process of identifying pulmonary nodules. For this purpose, medical reports can serve as valuable sources for automatically retrieving image annotations. Our study focused on converting medical reports into nodule annotations, matching textual information with manually annotated data from the Lung Nodule Database (LNDb), a comprehensive repository of lung scans and nodule annotations. As a result of this study, we have released a tabular data file containing information from 292 medical reports in the LNDb, along with files detailing nodule characteristics and corresponding matches to the manually annotated data. The objective is to enable further research in lung cancer by bridging the gap between existing reports and additional manual annotations that may be collected, thereby fostering discussion of the advantages and disadvantages of these two data types.


Subjects
Lung Neoplasms; Humans; Lung Neoplasms/diagnostic imaging; Databases, Factual; Solitary Pulmonary Nodule/diagnostic imaging; Diagnosis, Computer-Assisted
2.
Artif Intell Med ; 149: 102814, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38462277

ABSTRACT

Machine learning models need large amounts of annotated data for training. In the field of medical imaging, labeled data is especially difficult to obtain because the annotations have to be performed by qualified physicians. Natural Language Processing (NLP) tools can be applied to radiology reports to extract labels for medical images automatically. Compared to manual labeling, this approach requires a smaller annotation effort and can therefore facilitate the creation of labeled medical image data sets. In this article, we summarize the literature on this topic spanning from 2013 to 2023, starting with a meta-analysis of the included articles, followed by a qualitative and quantitative systematization of the results. Overall, we found four types of studies on the extraction of labels from radiology reports: those describing systems based on symbolic NLP, statistical NLP, or neural NLP, and those describing systems combining or comparing two or more of these approaches. Despite the large variety of existing approaches, there is still room for further improvement. This work can contribute to the development of new techniques or the improvement of existing ones.


Subjects
Natural Language Processing; Radiology; Machine Learning
3.
Artif Intell Med ; 147: 102737, 2024 01.
Article in English | MEDLINE | ID: mdl-38184361

ABSTRACT

Chest X-ray scans are frequently requested to detect the presence of abnormalities, due to their low cost and non-invasive nature. The interpretation of these images can be automated to prioritize more urgent exams through deep learning models, but the presence of image artifacts, e.g. lettering, often introduces a harmful bias in the classifiers and increases the number of false positive results. Consequently, healthcare would benefit from a system that selects the thoracic region of interest prior to deciding whether an image is possibly pathologic. The current work tackles this binary classification exercise, in which an image is either normal or abnormal, using an attention-driven and spatially unsupervised Spatial Transformer Network (STERN) that takes advantage of a novel domain-specific loss to better frame the region of interest. Unlike the state of the art, in which this type of network is usually employed for image alignment, this work proposes a spatial transformer module that is used specifically for attention, as an alternative to the standard object detection models that typically precede the classifier to crop out the region of interest. In sum, the proposed end-to-end architecture dynamically scales and aligns the input images to maximize the classifier's performance, by selecting the thorax with translation and non-isotropic scaling transformations, and thus eliminating artifacts. Additionally, this paper provides an extensive and objective analysis of the selected regions of interest, by proposing a set of mathematical evaluation metrics. The results indicate that STERN achieves results similar to using YOLO-cropped images, with reduced computational cost and without the need for localization labels. More specifically, the system is able to distinguish abnormal frontal images from the CheXpert dataset with a mean AUC of 85.67%, a 2.55% improvement vs. the 0.98% improvement achieved by the YOLO-based counterpart in comparison to a standard baseline classifier. At the same time, the STERN approach requires less than 2/3 of the training parameters, while increasing the inference time per batch by less than 2 ms. Code is available via GitHub.
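
As an illustration of the transformation family described above (translation plus non-isotropic scaling, with no rotation or shear), the sketch below builds the 2×3 affine matrix a spatial transformer uses and maps grid points under the usual normalized [-1, 1] coordinate convention. The function names and parameterization are illustrative assumptions, not the STERN implementation.

```python
def affine_theta(sx, sy, tx, ty):
    """2x3 affine matrix restricted to non-isotropic scaling and
    translation (no rotation or shear)."""
    return [[sx, 0.0, tx],
            [0.0, sy, ty]]

def map_point(theta, x, y):
    """Map an output (grid) coordinate back to the input coordinate it
    samples from, in the normalized [-1, 1] convention of STN grids."""
    a, b = theta
    return (a[0] * x + a[1] * y + a[2],
            b[0] * x + b[1] * y + b[2])

# Scaling by 0.5 with no translation samples a centered crop (zoom-in):
# the output corner (-1, -1) reads from input location (-0.5, -0.5).
theta = affine_theta(0.5, 0.5, 0.0, 0.0)
print(map_point(theta, -1.0, -1.0))  # -> (-0.5, -0.5)
```

With scale factors below 1 the sampled window shrinks, which is how such a module can "crop out" the thorax without explicit localization labels.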


Subjects
Artifacts; Thorax; X-Rays; Exercise
4.
Comput Methods Programs Biomed ; 236: 107558, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37087944

ABSTRACT

BACKGROUND AND OBJECTIVE: Convolutional neural networks are widely used to detect radiological findings in chest radiographs. Standard architectures are optimized for images of relatively small size (for example, 224 × 224 pixels), which suffices for most application domains. However, in medical imaging, larger inputs are often necessary to analyze disease patterns. A single scan can display multiple types of radiological findings varying greatly in size, and most models do not explicitly account for this. For a given network, whose layers have fixed-size receptive fields, smaller input images result in coarser features, which better characterize larger objects in an image. In contrast, larger inputs result in finer grained features, beneficial for the analysis of smaller objects. By compromising to a single resolution, existing frameworks fail to acknowledge that the ideal input size will not necessarily be the same for classifying every pathology of a scan. The goal of our work is to address this shortcoming by proposing a lightweight framework for multi-scale classification of chest radiographs, where finer and coarser features are combined in a parameter-efficient fashion. METHODS: We experiment on CheXpert, a large chest X-ray database. A lightweight multi-resolution (224 × 224, 448 × 448 and 896 × 896 pixels) network is developed based on a Densenet-121 model where batch normalization layers are replaced with the proposed size-specific batch normalization. Each input size undergoes batch normalization with dedicated scale and shift parameters, while the remaining parameters are shared across sizes. Additional external validation of the proposed approach is performed on the VinDr-CXR data set. RESULTS: The proposed approach (AUC 83.27±0.17, 7.1M parameters) outperforms standard single-scale models (AUC 81.76±0.18, 82.62±0.11 and 82.39±0.13 for input sizes 224 × 224, 448 × 448 and 896 × 896, respectively, 6.9M parameters). 
It also achieves a performance similar to an ensemble of one individual model per scale (AUC 83.27±0.11, 20.9M parameters), while relying on significantly fewer parameters. The model leverages features of different granularities, resulting in a more accurate classification of all findings, regardless of their size, highlighting the advantages of this approach. CONCLUSIONS: Different chest X-ray findings are better classified at different scales. Our study shows that multi-scale features can be obtained with nearly no additional parameters, boosting performance.
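
The core idea of size-specific batch normalization, dedicated scale and shift (gamma/beta) parameters per input resolution while every other parameter is shared, can be sketched in plain Python. This is a toy 1-D version under stated assumptions; the actual model applies it per channel inside a DenseNet-121.

```python
import math

class SizeSpecificBatchNorm:
    """Toy batch normalization with one (gamma, beta) pair per input
    size; the normalization logic itself is shared across sizes."""
    def __init__(self, sizes, eps=1e-5):
        self.gamma = {s: 1.0 for s in sizes}  # per-size scale
        self.beta = {s: 0.0 for s in sizes}   # per-size shift
        self.eps = eps

    def __call__(self, batch, size):
        mean = sum(batch) / len(batch)
        var = sum((v - mean) ** 2 for v in batch) / len(batch)
        g, b = self.gamma[size], self.beta[size]
        return [g * (v - mean) / math.sqrt(var + self.eps) + b
                for v in batch]

bn = SizeSpecificBatchNorm(sizes=[224, 448, 896])
out = bn([1.0, 2.0, 3.0], size=448)  # normalized with the 448-px parameters
```

Only the small gamma/beta dictionaries grow with the number of resolutions, which matches the paper's observation that multi-scale features come at nearly no extra parameter cost.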


Subjects
Deep Learning; Neural Networks, Computer; Radiography; Radiography, Thoracic/methods
5.
J Med Imaging (Bellingham) ; 10(1): 014006, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36825083

ABSTRACT

Purpose: The development of accurate methods for retinal layer and fluid segmentation in optical coherence tomography images can help ophthalmologists in the diagnosis and follow-up of retinal diseases. Recent works based on joint segmentation presented good results for the segmentation of most retinal layers, but the fluid segmentation results are still not satisfactory. We report a hierarchical framework that starts by distinguishing the retinal zone from the background, then separates the fluid-filled regions from the rest, and finally discriminates the individual retinal layers. Approach: Three fully convolutional networks were trained sequentially. The weighting scheme used for computing the loss function during training is derived from the outputs of the networks trained previously. To reinforce the relative position between retinal layers, the mutex Dice loss (included for optimizing the last network) was further modified so that errors between more "distant" layers are penalized more heavily. The method's performance was evaluated using a public dataset. Results: The proposed hierarchical approach outperforms previous works in the segmentation of the inner segment ellipsoid layer and fluid (Dice coefficient = 0.95 and 0.82, respectively). The results achieved for the remaining layers are at a state-of-the-art level. Conclusions: The proposed framework led to significant improvements in fluid segmentation, without compromising the results in the retinal layers. Thus, its output can be used by ophthalmologists as a second opinion or as input for the automatic extraction of relevant quantitative biomarkers.
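
For reference, the Dice coefficient reported above compares two binary masks as 2|A∩B| / (|A| + |B|); a minimal sketch on flat 0/1 masks:

```python
def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks given as flat 0/1
    sequences: 2 * |intersection| / (|A| + |B|)."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 1.0 if total == 0 else 2.0 * inter / total

# Two masks overlapping on 2 of their 3 foreground pixels each:
print(dice([1, 1, 1, 0], [0, 1, 1, 1]))  # -> 0.666... (2*2 / (3+3))
```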

6.
Graefes Arch Clin Exp Ophthalmol ; 260(12): 3825-3836, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35838808

ABSTRACT

PURPOSE: This study aims to investigate retinal and choroidal vascular reactivity to carbogen in central serous chorioretinopathy (CSC) patients. METHODS: An experimental pilot study including 68 eyes from 20 CSC patients and 14 age- and sex-matched controls was performed. The participants inhaled carbogen (5% CO2 + 95% O2) for 2 min through a high-concentration disposable mask. Disc-centered 30° fundus imaging using infra-red (IR) and macular spectral domain optical coherence tomography (SD-OCT) with the enhanced depth imaging (EDI) technique was performed, both at baseline and after the 2-min gas exposure. A parametric model fitting-based approach for automatic retinal blood vessel caliber estimation was used to assess the mean variation in both the arterial and venous vasculature. Choroidal thickness was measured in two different ways: the subfoveal choroidal thickness (SFCT) was calculated using a manual caliper, and the mean central choroidal thickness (MCCT) was assessed using automatic software. RESULTS: No significant differences were detected in baseline hemodynamic parameters between the two groups. A significant positive correlation was found between the participants' age and arterial diameter variation (p < 0.001, r = 0.447), meaning that younger participants presented a more vasoconstrictive response (negative variation) than older ones. No significant differences were detected in the vasoreactive response between CSC patients and controls for both arterial and venous vessels (p = 0.63 and p = 0.85, respectively). Although the vascular reactivity was not related to the activity of CSC, it was related to disease duration, for both the arterial (p = 0.02, r = 0.381) and venous (p = 0.001, r = 0.530) beds. SFCT and MCCT were highly correlated (r = 0.830, p < 0.001). Both SFCT and MCCT significantly increased in CSC patients (p < 0.001 and p < 0.001) but not in controls (p = 0.059 and p = 0.247). A significant negative correlation between CSC patients' age and MCCT variation (r = -0.340, p = 0.049) was detected. In CSC patients, the choroidal thickness variation was not related to the activity state, disease duration, or previous photodynamic treatment. CONCLUSION: Vasoreactivity to carbogen was similar in the retinal vessels but significantly higher in the choroidal vessels of CSC patients when compared to controls, strengthening the hypothesis of a choroidal regulation dysfunction in this pathology.


Subjects
Central Serous Chorioretinopathy; Humans; Central Serous Chorioretinopathy/diagnosis; Fluorescein Angiography/methods; Pilot Projects; Visual Acuity; Choroid/pathology; Tomography, Optical Coherence/methods; Retrospective Studies
7.
Sci Rep ; 12(1): 6596, 2022 04 21.
Article in English | MEDLINE | ID: mdl-35449199

ABSTRACT

The coronavirus disease 2019 (COVID-19) pandemic has impacted healthcare systems across the world. Chest radiography (CXR) can be used as a complementary method for diagnosing/following COVID-19 patients. However, the experience level and workload of technicians and radiologists may affect the decision process. Recent studies suggest that deep learning can be used to assess CXRs, providing an important second opinion for radiologists and technicians in the decision process, and super-human performance in the detection of COVID-19 has been reported in multiple studies. In this study, the clinical applicability of deep learning systems for COVID-19 screening was assessed by testing the performance of deep learning systems for the detection of COVID-19. Specifically, four datasets were used: (1) a collection of multiple public datasets (284,793 CXRs); (2) the BIMCV dataset (16,631 CXRs); (3) COVIDGR (852 CXRs); and (4) a private dataset (6,361 CXRs). All datasets were collected retrospectively and consist of only frontal CXR views. A ResNet-18 was trained on each of the datasets for the detection of COVID-19. It is shown that a high dataset bias was present, leading to high performance in intradataset train-test scenarios (area under the curve > 0.98 on the collection of public datasets). Significantly lower performances were obtained in interdataset train-test scenarios, however (area under the curve 0.55-0.84). A subset of the data was then assessed by radiologists for comparison to the automatic systems. Finetuning with radiologist annotations significantly increased performance across datasets (area under the curve 0.61-0.88) and improved the attention on clinical findings in positive COVID-19 CXRs. Nevertheless, tests on CXRs from different hospital services indicate that the screening performance of CXR and automatic systems is limited (area under the curve < 0.6 on emergency service CXRs). However, COVID-19 manifestations can be accurately detected when present, motivating the use of these tools for evaluating disease progression in mild to severe COVID-19 patients.
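
The area-under-the-curve figures quoted above are ROC AUCs. A minimal, dependency-free way to compute one is the rank (Mann-Whitney) formulation: the probability that a randomly chosen positive scores above a randomly chosen negative, counting ties as one half. This is an illustrative sketch, not the study's evaluation code.

```python
def roc_auc(labels, scores):
    """ROC AUC via pairwise comparison of positive and negative scores."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 0]
scores = [0.9, 0.4, 0.6, 0.2]
print(roc_auc(labels, scores))  # -> 0.75 (3 of 4 positive-negative pairs ranked correctly)
```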


Subjects
COVID-19; Deep Learning; COVID-19/diagnostic imaging; Humans; Radiography; Radiography, Thoracic/methods; Retrospective Studies
8.
Comput Biol Med ; 144: 105333, 2022 05.
Article in English | MEDLINE | ID: mdl-35279425

ABSTRACT

After publishing an in-depth study that analyzed the ability of computerized methods to assist or replace human experts in obtaining carotid intima-media thickness (CIMT) measurements leading to correct therapeutic decisions, the same consortium joined again to present technical outlooks on computerized CIMT measurement systems and provide considerations for the community regarding the development and comparison of these methods, including considerations to encourage the standardization of computerized CIMT measurements and results presentation. A multi-center database of 500 images was collected, upon which three manual segmentations and seven computerized methods were employed to measure the CIMT, including traditional methods based on dynamic programming, deformable models, the first-order absolute moment, and anisotropic Gaussian derivative filters, as well as deep learning-based image processing approaches based on U-Net convolutional neural networks. An inter- and intra-analyst variability analysis was conducted, and segmentation results were analyzed by dividing the database based on carotid morphology, image signal-to-noise ratio, and research center. The computerized methods obtained CIMT absolute bias results that were comparable with studies in the literature and were generally similar to, and often better than, the observed inter- and intra-analyst variability. Several computerized methods showed promising segmentation results, including one deep learning method (CIMT absolute bias = 106 ± 89 µm vs. 160 ± 140 µm intra-analyst variability) and three other traditional image processing methods (CIMT absolute bias = 139 ± 119 µm, 143 ± 118 µm and 139 ± 136 µm). The entire database has been made publicly available for the community to facilitate future studies and to encourage open comparison and technical analysis (https://doi.org/10.17632/m7ndn58sv6.1).
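
The "CIMT absolute bias" figures above are the mean ± standard deviation of the absolute difference between computerized and manual measurements. A minimal sketch of that metric, using hypothetical values in µm:

```python
def absolute_bias(auto_um, manual_um):
    """Mean and population standard deviation of |automatic - manual|
    CIMT measurements, both in micrometres."""
    diffs = [abs(a - m) for a, m in zip(auto_um, manual_um)]
    mean = sum(diffs) / len(diffs)
    sd = (sum((d - mean) ** 2 for d in diffs) / len(diffs)) ** 0.5
    return mean, sd

# Hypothetical per-image measurements (µm), not data from the study:
mean, sd = absolute_bias([700, 810, 655], [650, 800, 700])
```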


Subjects
Carotid Arteries; Carotid Intima-Media Thickness; Carotid Arteries/diagnostic imaging; Carotid Artery, Common/diagnostic imaging; Humans; Ultrasonography/methods; Ultrasonography, Doppler
9.
Ultrasound Med Biol ; 47(8): 2442-2455, 2021 08.
Article in English | MEDLINE | ID: mdl-33941415

ABSTRACT

Common carotid intima-media thickness (CIMT) is a commonly used marker for atherosclerosis and is often computed in carotid ultrasound images. An analysis of different computerized techniques for CIMT measurement and their clinical impacts on the same patient data set is lacking. Here we compared and assessed five computerized CIMT algorithms against three expert analysts' manual measurements on a data set of 1088 patients from two centers. Inter- and intra-observer variability was assessed, and the computerized CIMT values were compared with those manually obtained. The CIMT measurements were used to assess the correlation with clinical parameters, cardiovascular event prediction through a generalized linear model and the Kaplan-Meier hazard ratio. CIMT measurements obtained with a skilled analyst's segmentation and the computerized segmentation were comparable in statistical analyses, suggesting they can be used interchangeably for CIMT quantification and clinical outcome investigation. To facilitate future studies, the entire data set used is made publicly available for the community at http://dx.doi.org/10.17632/fpv535fss7.1.


Subjects
Algorithms; Carotid Arteries/diagnostic imaging; Carotid Intima-Media Thickness; Aged; Computer Systems; Female; Humans; Male; Middle Aged; Ultrasonography
10.
Med Image Anal ; 70: 102027, 2021 05.
Article in English | MEDLINE | ID: mdl-33740739

ABSTRACT

Lung cancer is the deadliest type of cancer worldwide and late detection is the major factor behind the low survival rate of patients. Low-dose computed tomography has been suggested as a potential screening tool, but manual screening is costly and time-consuming. This has fuelled the development of automatic methods for the detection, segmentation and characterisation of pulmonary nodules. In spite of promising results, the application of automatic methods to clinical routine is not straightforward, and only a limited number of studies have addressed the problem in a holistic way. With the goal of advancing the state of the art, the Lung Nodule Database (LNDb) Challenge on automatic lung cancer patient management was organized. The LNDb Challenge addressed lung nodule detection, segmentation and characterization, as well as prediction of patient follow-up according to the 2017 Fleischner Society pulmonary nodule guidelines. A total of 294 CT scans were collected retrospectively at the Centro Hospitalar e Universitário de São João in Porto, Portugal, and each CT was annotated by at least one radiologist. Annotations comprised nodule centroids, segmentations and subjective characterization. 58 CTs and the corresponding annotations were withheld as a separate test set. A total of 947 users registered for the challenge, and 11 successful submissions for at least one of the sub-challenges were received. For patient follow-up prediction, a maximum quadratic weighted Cohen's kappa of 0.580 was obtained. In terms of nodule detection, a sensitivity below 0.4 (and 0.7) at 1 false positive per scan was obtained for nodules identified by at least one (and two) radiologist(s). For nodule segmentation, a maximum Jaccard score of 0.567 was obtained, surpassing the interobserver variability. In terms of nodule texture characterization, a maximum quadratic weighted Cohen's kappa of 0.733 was obtained, with part-solid nodules being particularly challenging to classify correctly. Detailed analysis of the proposed methods and the differences in performance allows the identification of the major remaining challenges and future directions: data collection, augmentation/generation and evaluation of under-represented classes, the incorporation of scan-level information for better decision-making, and the development of tools and challenges with clinically oriented goals. The LNDb Challenge and associated data remain publicly available so that future methods can be tested and benchmarked, promoting the development of new algorithms in lung cancer medical image analysis and patient follow-up recommendation.
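
The follow-up and texture results above are scored with the quadratic-weighted Cohen's kappa, which penalizes disagreements by the squared distance between ordinal classes. A small self-contained implementation for illustration:

```python
def quadratic_weighted_kappa(a, b, n_classes):
    """Quadratic-weighted Cohen's kappa between two raters' labels,
    given as integers in [0, n_classes)."""
    n = len(a)
    # Observed joint distribution and expected distribution (outer
    # product of the two raters' marginals).
    obs = [[0.0] * n_classes for _ in range(n_classes)]
    for i, j in zip(a, b):
        obs[i][j] += 1.0 / n
    pa = [a.count(k) / n for k in range(n_classes)]
    pb = [b.count(k) / n for k in range(n_classes)]
    w = lambda i, j: (i - j) ** 2 / (n_classes - 1) ** 2  # quadratic weights
    do = sum(w(i, j) * obs[i][j]
             for i in range(n_classes) for j in range(n_classes))
    de = sum(w(i, j) * pa[i] * pb[j]
             for i in range(n_classes) for j in range(n_classes))
    return 1.0 - do / de

# Perfect agreement yields kappa = 1:
print(quadratic_weighted_kappa([0, 1, 2], [0, 1, 2], 3))  # -> 1.0
```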


Subjects
Lung Neoplasms; Solitary Pulmonary Nodule; Algorithms; Databases, Factual; Humans; Lung Neoplasms/diagnostic imaging; Retrospective Studies; Tomography, X-Ray Computed
11.
Am J Clin Pathol ; 155(4): 527-536, 2021 03 15.
Article in English | MEDLINE | ID: mdl-33118594

ABSTRACT

OBJECTIVES: This study evaluated the usefulness of artificial intelligence (AI) algorithms as tools for improving the accuracy of histologic classification of breast tissue. METHODS: Overall, 100 microscopic photographs (test A) and 152 regions of interest in whole-slide images (test B) of breast tissue were classified into 4 classes: normal, benign, carcinoma in situ (CIS), and invasive carcinoma. The accuracy of 4 pathologists and 3 pathology residents was evaluated without and with the assistance of algorithms. RESULTS: In test A, algorithm A had an accuracy of 0.87, with the lowest accuracy in the benign class (0.72). The observers had an average accuracy of 0.80, and most clinically relevant discordances occurred in distinguishing benign from CIS (7.1% of classifications). With the assistance of algorithm A, the observers significantly increased their average accuracy to 0.88. In test B, algorithm B had an accuracy of 0.49, with the lowest accuracy in the CIS class (0.06). The observers had an average accuracy of 0.86, and most clinically relevant discordances occurred in distinguishing benign from CIS (6.3% of classifications). With the assistance of algorithm B, the observers maintained their average accuracy. CONCLUSIONS: AI tools can increase the classification accuracy of pathologists in the setting of breast lesions.


Subjects
Artificial Intelligence; Breast Neoplasms/classification; Breast Neoplasms/pathology; Diagnosis, Computer-Assisted/methods; Female; Humans; Image Interpretation, Computer-Assisted/methods
12.
Comput Biol Med ; 126: 103995, 2020 11.
Article in English | MEDLINE | ID: mdl-33007620

ABSTRACT

Diabetic retinopathy (DR) is a diabetes complication which, in extreme situations, may lead to blindness. Since the first stages are often asymptomatic, regular eye examinations are required for an early diagnosis. As microaneurysms (MAs) are one of the first signs of DR, several automated methods have been proposed for their detection in order to reduce the ophthalmologists' workload. Although local convergence filters (LCFs) have already been applied for feature extraction, their potential as MA enhancement operators had not yet been explored. In this work, we propose a sliding band filter for MA enhancement aimed at obtaining a set of initial MA candidates. Then, a combination of the filter responses with color, contrast and shape information is used by an ensemble of classifiers for final candidate classification. Finally, for each eye fundus image, a score is computed from the confidence values assigned to the MAs detected in the image. The performance of the proposed methodology was evaluated on four datasets. At the lesion level, sensitivities of 64% and 81% were achieved for an average of 8 false positives per image (FPIs) in e-ophtha MA and SCREEN-DR, respectively. In the latter dataset, an AUC of 0.83 was also obtained for DR detection.


Subjects
Diabetes Mellitus; Diabetic Retinopathy; Microaneurysm; Algorithms; Diabetic Retinopathy/diagnostic imaging; Early Diagnosis; Fundus Oculi; Humans; Microaneurysm/diagnostic imaging
13.
Med Image Anal ; 63: 101715, 2020 07.
Article in English | MEDLINE | ID: mdl-32434128

ABSTRACT

Diabetic retinopathy (DR) grading is crucial in determining the adequate treatment and follow-up of patients, but the screening process can be tiresome and prone to errors. Deep learning approaches have shown promising performance as computer-aided diagnosis (CAD) systems, but their black-box behaviour hinders clinical application. We propose DR|GRADUATE, a novel deep learning-based DR grading CAD system that supports its decision by providing a medically interpretable explanation and an estimation of how uncertain that prediction is, allowing the ophthalmologist to measure how much that decision should be trusted. We designed DR|GRADUATE taking into account the ordinal nature of the DR grading problem. A novel Gaussian-sampling approach built upon a Multiple Instance Learning framework allows DR|GRADUATE to infer an image grade associated with an explanation map and a prediction uncertainty, while being trained only with image-wise labels. DR|GRADUATE was trained on the Kaggle DR detection training set and evaluated across multiple datasets. In DR grading, a quadratic-weighted Cohen's kappa (κ) between 0.71 and 0.84 was achieved on five different datasets. We show that high κ values occur for images with low prediction uncertainty, thus indicating that this uncertainty is a valid measure of the predictions' quality. Further, bad-quality images are generally associated with higher uncertainties, showing that images not suitable for diagnosis indeed lead to less trustworthy predictions. Additionally, tests on unfamiliar medical image data types suggest that DR|GRADUATE allows outlier detection. The attention maps generally highlight regions of interest for diagnosis. These results show the great potential of DR|GRADUATE as a second-opinion system in DR severity grading.


Subjects
Deep Learning; Diabetes Mellitus; Diabetic Retinopathy; Diabetic Retinopathy/diagnostic imaging; Diagnosis, Computer-Assisted; Fundus Oculi; Humans; Uncertainty
14.
IEEE J Biomed Health Inform ; 24(10): 2894-2901, 2020 10.
Article in English | MEDLINE | ID: mdl-32092022

ABSTRACT

Early diagnosis of lung cancer via computed tomography can significantly reduce the morbidity and mortality rates associated with the pathology. However, searching for lung nodules is a highly complex task, which affects the success of screening programs. Whilst computer-aided detection systems can be used as second observers, they may bias radiologists and introduce significant time overheads. With this in mind, this study assesses the potential of using gaze information for integrating automatic detection systems into clinical practice. For that purpose, 4 radiologists were asked to annotate 20 scans from a public dataset while being monitored by an eye-tracker device, and an automatic lung nodule detection system was developed. Our results show that radiologists follow a similar search routine and tend to have shorter fixation periods in regions where detection errors occur. The overall detection sensitivity of the specialists was 0.67±0.07, whereas the system achieved 0.69. Combining the annotations of one radiologist with the automatic system significantly improves the detection performance, reaching levels similar to those of two annotators. Filtering automatic detection candidates to keep only those in low-fixation regions still significantly improves the detection sensitivity without increasing the number of false positives.


Subjects
Deep Learning; Eye-Tracking Technology; Lung Neoplasms/diagnostic imaging; Radiographic Image Interpretation, Computer-Assisted/methods; Radiologists; Fixation, Ocular/physiology; Humans; Tomography, X-Ray Computed/methods
15.
Sci Rep ; 9(1): 11591, 2019 08 12.
Article in English | MEDLINE | ID: mdl-31406194

ABSTRACT

We propose iW-Net, a deep learning model that allows both automatic and interactive segmentation of lung nodules in computed tomography images. iW-Net is composed of two blocks: the first provides an automatic segmentation, and the second allows the user to correct it by introducing 2 points on the nodule's boundary. For this purpose, a physics-inspired weight map that takes the user input into account is proposed, which is used both as a feature map and in the system's loss function. Our approach is extensively evaluated on the public LIDC-IDRI dataset, where we achieve a state-of-the-art performance of 0.55 intersection over union vs. the 0.59 inter-observer agreement. Also, we show that iW-Net allows the correction of the segmentation of small nodules, which is essential for proper patient referral decisions, as well as improved segmentation of the challenging non-solid nodules; it may thus be an important tool for increasing the early diagnosis of lung cancer.
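
The intersection-over-union (Jaccard) score cited above compares a predicted mask A against a reference mask B as |A∩B| / |A∪B|; a minimal sketch on flat binary masks:

```python
def iou(mask_a, mask_b):
    """Intersection over union (Jaccard index) between two binary
    masks given as flat 0/1 sequences."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    union = sum(a | b for a, b in zip(mask_a, mask_b))
    return 1.0 if union == 0 else inter / union

# Masks agreeing on 2 pixels out of 4 that either marks as foreground:
print(iou([1, 1, 1, 0], [0, 1, 1, 1]))  # -> 0.5
```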


Subjects
Automation; Lung Diseases/diagnostic imaging; Algorithms; Early Detection of Cancer; Humans; Lung Diseases/pathology; Neural Networks, Computer
16.
Med Image Anal ; 56: 122-139, 2019 08.
Article in English | MEDLINE | ID: mdl-31226662

ABSTRACT

Breast cancer is the most common invasive cancer in women, affecting more than 10% of women worldwide. Microscopic analysis of a biopsy remains one of the most important methods to diagnose the type of breast cancer. This requires specialized analysis by pathologists, in a task that i) is highly time- and cost-consuming and ii) often leads to non-consensual results. The relevance and potential of automatic classification algorithms using hematoxylin-eosin stained histopathological images has already been demonstrated, but the reported results are still sub-optimal for clinical use. With the goal of advancing the state of the art in automatic classification, the Grand Challenge on BreAst Cancer Histology images (BACH) was organized in conjunction with the 15th International Conference on Image Analysis and Recognition (ICIAR 2018). BACH aimed at the classification and localization of clinically relevant histopathological classes in microscopy and whole-slide images from a large annotated dataset, specifically compiled and made publicly available for the challenge. Following a positive response from the scientific community, a total of 64 submissions, out of 677 registrations, effectively entered the competition. The submitted algorithms improved the state of the art in automatic classification of breast cancer with microscopy images to an accuracy of 87%. Convolutional neural networks were the most successful methodology in the BACH challenge. Detailed analysis of the collective results allowed the identification of remaining challenges in the field and recommendations for future developments. The BACH dataset remains publicly available to promote further improvements in the field of automatic classification in digital pathology.


Subjects
Breast Neoplasms/pathology; Neural Networks, Computer; Pattern Recognition, Automated; Algorithms; Female; Humans; Microscopy; Staining and Labeling
17.
Med Image Anal ; 52: 24-41, 2019 02.
Article in English | MEDLINE | ID: mdl-30468970

ABSTRACT

Surgical tool detection is attracting increasing attention from the medical image analysis community. The goal generally is not to precisely locate tools in images, but rather to indicate which tools are being used by the surgeon at each instant. The main motivation for annotating tool usage is to design efficient solutions for surgical workflow analysis, with potential applications in report generation, surgical training and even real-time decision support. Most existing tool annotation algorithms focus on laparoscopic surgeries. However, with 19 million interventions per year, the most common surgical procedure in the world is cataract surgery. The CATARACTS challenge was organized in 2017 to evaluate tool annotation algorithms in the specific context of cataract surgery. It relies on more than nine hours of video, from 50 cataract surgeries, in which the presence of 21 surgical tools was manually annotated by two experts. With 14 participating teams, this challenge can be considered a success. As might be expected, the submitted solutions are based on deep learning. This paper thoroughly evaluates these solutions: in particular, the quality of their annotations is compared to that of human interpretations. Next, lessons learnt from the differential analysis of these solutions are discussed. We expect that they will guide the design of efficient surgery monitoring tools in the near future.


Subjects
Cataract Extraction/instrumentation , Deep Learning , Surgical Instruments , Algorithms , Humans , Video Recording
18.
PLoS One ; 13(4): e0194702, 2018.
Article in English | MEDLINE | ID: mdl-29668759

ABSTRACT

BACKGROUND: Changes in retinal vessel caliber are associated with a variety of major diseases, namely diabetes, hypertension and atherosclerosis. The clinical assessment of these changes in fundus images is tiresome and prone to errors, and thus automatic methods are desirable for objective and precise caliber measurement. However, the variability of blood vessel appearance, image quality and resolution makes the development of these tools a non-trivial task. METHODOLOGY: A method for the estimation of vessel caliber in eye fundus images via vessel cross-sectional intensity profile model fitting is herein proposed. First, the vessel centerlines are determined and individual segments are extracted and smoothed by spline approximation. Then, the corresponding cross-sectional intensity profiles are determined, post-processed and ultimately fitted by newly proposed parametric models. These models are based on Difference-of-Gaussians (DoG) curves modified through a multiplying line with varying inclination. With this, the proposed models can describe profile asymmetry, allowing a good adjustment to the most difficult profiles, namely those showing central light reflex. Finally, the parameters of the best-fit model are used to determine the vessel width using ensembles of bagged regression trees with random feature selection. RESULTS AND CONCLUSIONS: The performance of our approach is evaluated on the REVIEW public dataset by comparing the vessel cross-sectional profile fitting of the proposed modified DoG models with 7 and 8 parameters against a Hermite model with 6 parameters. Results on different goodness-of-fit metrics indicate that our models are consistently better at fitting the vessel profiles. Furthermore, our width measurement algorithm achieves a precision close to that of the human observers, outperforming state-of-the-art methods, and retrieving the highest precision when evaluated using cross-validation. This high performance supports the robustness of the algorithm and validates its use in retinal vessel width measurement and possible integration in a system for retinal vasculature assessment.
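The core of the method above is a Difference-of-Gaussians curve multiplied by a line of varying inclination, so that the fitted profile can be asymmetric. A minimal sketch of one plausible parameterisation (the parameter names and exact form are assumptions; the paper's 7- and 8-parameter models may differ in detail):

```python
import math

def modified_dog(x, a1, s1, a2, s2, slope, offset):
    """Sketch of a modified DoG profile: a Difference-of-Gaussians
    multiplied by the line (1 + slope * x), plus a background offset.
    A nonzero slope tilts the profile, modeling asymmetry."""
    dog = (a1 * math.exp(-x**2 / (2.0 * s1**2))
           - a2 * math.exp(-x**2 / (2.0 * s2**2)))
    return (1.0 + slope * x) * dog + offset

# With slope = 0 the model reduces to a plain DoG, symmetric about x = 0:
left = modified_dog(-2.0, 1.0, 3.0, 0.4, 1.0, 0.0, 0.0)
right = modified_dog(2.0, 1.0, 3.0, 0.4, 1.0, 0.0, 0.0)
print(abs(left - right) < 1e-12)  # True
```

The narrow negative Gaussian (a2, s2) is what lets the model dip at the center, approximating the central light reflex mentioned in the abstract.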


Subjects
Fundus Oculi , Image Processing, Computer-Assisted , Models, Theoretical , Retinal Vessels/diagnostic imaging , Algorithms , Databases, Factual , Humans , Image Processing, Computer-Assisted/methods , Reproducibility of Results
19.
IEEE Trans Med Imaging ; 37(3): 781-791, 2018 03.
Article in English | MEDLINE | ID: mdl-28981409

ABSTRACT

In medical image analysis applications, the availability of large amounts of annotated data is becoming increasingly critical. However, annotated medical data is often scarce and costly to obtain. In this paper, we address the problem of synthesizing retinal color images by applying recent techniques based on adversarial learning. In this setting, a generative model is trained to maximize a loss function provided by a second model attempting to classify its output as real or synthetic. In particular, we propose to implement an adversarial autoencoder for the task of retinal vessel network synthesis. We use the generated vessel trees as an intermediate stage for the generation of color retinal images, which is accomplished with a generative adversarial network. Both models require the optimization of almost everywhere differentiable loss functions, which allows us to train them jointly. The resulting model offers an end-to-end retinal image synthesis system capable of generating as many retinal images as the user requires, with their corresponding vessel networks, by sampling from a simple probability distribution that we impose on the associated latent space. We show that the learned latent space contains a well-defined semantic structure, implying that we can perform calculations in the space of retinal images, e.g., smoothly interpolating new data points between two retinal images. Visual and quantitative results demonstrate that the synthesized images are substantially different from those in the training set, while also being anatomically consistent and displaying a reasonable visual quality.
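The "smooth interpolation between two retinal images" claim above rests on a standard operation: interpolating between two latent codes and decoding each intermediate point. A minimal sketch of the latent-space step, with toy vectors standing in for real latent codes (the decoder itself is the trained model and is not reproduced here):

```python
def lerp(z1, z2, t):
    """Pointwise linear interpolation between latent vectors z1 and z2,
    with t = 0 giving z1 and t = 1 giving z2."""
    assert len(z1) == len(z2) and 0.0 <= t <= 1.0
    return [(1.0 - t) * a + t * b for a, b in zip(z1, z2)]

# Toy latent codes; a real system would decode each interpolated vector
# into a retinal image with its corresponding vessel network.
z1, z2 = [0.0, 1.0, -2.0], [4.0, 1.0, 2.0]
print(lerp(z1, z2, 0.5))  # [2.0, 1.0, 0.0]
```

Because the paper imposes a simple prior on the latent space, points along this path remain plausible samples, which is what makes the decoded intermediate images look anatomically consistent.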


Subjects
Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Retinal Vessels/diagnostic imaging , Algorithms , Diagnostic Techniques, Ophthalmological , Humans , Retina/diagnostic imaging
20.
PLoS One ; 12(6): e0177544, 2017.
Article in English | MEDLINE | ID: mdl-28570557

ABSTRACT

Breast cancer is one of the main causes of cancer death worldwide. The diagnosis of biopsy tissue with hematoxylin and eosin stained images is non-trivial, and specialists often disagree on the final diagnosis. Computer-aided diagnosis systems help to reduce the cost and increase the efficiency of this process. Conventional classification approaches rely on feature extraction methods designed for a specific problem based on field knowledge. To overcome the many difficulties of the feature-based approaches, deep learning methods are becoming important alternatives. A method for the classification of hematoxylin and eosin stained breast biopsy images using Convolutional Neural Networks (CNNs) is proposed. Images are classified into four classes (normal tissue, benign lesion, in situ carcinoma and invasive carcinoma) and into two classes (carcinoma and non-carcinoma). The architecture of the network is designed to retrieve information at different scales, including both nuclei and overall tissue organization. This design allows the extension of the proposed system to whole-slide histology images. The features extracted by the CNN are also used for training a Support Vector Machine classifier. Accuracies of 77.8% for the four-class problem and 83.3% for carcinoma/non-carcinoma are achieved. The sensitivity of our method for cancer cases is 95.6%.
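The abstract reports results on both a four-class and a binary task, plus a 95.6% sensitivity for cancer cases. A minimal sketch of how the four-class labels collapse into the binary carcinoma/non-carcinoma task and how sensitivity is computed (labels below are illustrative, not the paper's data):

```python
# The two four-class labels that count as carcinoma in the binary task.
CARCINOMA = {"in situ carcinoma", "invasive carcinoma"}

def to_binary(label):
    """Map a four-class label to the carcinoma / non-carcinoma task."""
    return "carcinoma" if label in CARCINOMA else "non-carcinoma"

def sensitivity(y_true, y_pred):
    """True-positive rate for the carcinoma class."""
    pos = [(t, p) for t, p in zip(y_true, y_pred) if t == "carcinoma"]
    return sum(t == p for t, p in pos) / len(pos)

# Toy example: one of two carcinoma cases is caught.
y_true4 = ["normal", "in situ carcinoma", "invasive carcinoma", "benign"]
y_pred4 = ["normal", "in situ carcinoma", "benign", "benign"]
y_true = [to_binary(l) for l in y_true4]
y_pred = [to_binary(l) for l in y_pred4]
print(sensitivity(y_true, y_pred))  # 0.5
```

High sensitivity on the carcinoma class is the clinically critical figure here, since a missed carcinoma is far more costly than a false alarm.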


Subjects
Breast Neoplasms/pathology , Neural Networks, Computer , Breast Neoplasms/classification , Female , Humans , Image Processing, Computer-Assisted , Support Vector Machine