Results 1 - 20 of 144
1.
Med Image Anal ; 95: 103199, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38759258

ABSTRACT

Accurate diagnosis of pathological subtypes of lung cancer is of significant importance for follow-up treatment and prognosis management. In this paper, we propose a self-generating hybrid feature network (SGHF-Net) for accurately classifying lung cancer subtypes on computed tomography (CT) images. Inspired by studies showing that cross-scale associations exist in the image patterns between a case's CT images and its pathological images, we developed a pathological feature synthetic module (PFSM), which quantitatively maps cross-modality associations through deep neural networks, to derive the "gold standard" information contained in the corresponding pathological images from CT images alone. Additionally, we designed a radiological feature extraction module (RFEM) to directly acquire CT image information and integrated it with the pathological priors under an effective feature fusion framework, enabling the entire classification model to generate more indicative and specific pathology-related features and eventually output more accurate predictions. The strength of the proposed model lies in its ability to self-generate hybrid features that contain multi-modality image information based on a single-modality input. To evaluate the effectiveness, adaptability, and generalization ability of our model, we performed extensive experiments on a large-scale multi-center dataset (829 cases from three hospitals) to compare our model with a series of state-of-the-art (SOTA) classification models. The experimental results demonstrate the superiority of our model for lung cancer subtype classification, with significant improvements in accuracy (ACC), area under the curve (AUC), positive predictive value (PPV), and F1-score.
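
The abstract gives no implementation details for the PFSM or RFEM. Purely as a rough sketch of the general idea (fusing a radiological feature vector with a synthesized, pathology-like feature vector for subtype classification), the PyTorch snippet below uses hypothetical module names, layer sizes, and a simple concatenation fusion; none of this is the paper's actual architecture.

```python
import torch
import torch.nn as nn

class PathFeatureSynthesizer(nn.Module):
    """Hypothetical stand-in for the PFSM: maps CT features to a
    pathology-like feature vector (in the paper this is learned against
    features from paired pathological images)."""
    def __init__(self, in_dim=512, out_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU(),
                                 nn.Linear(out_dim, out_dim))

    def forward(self, ct_feat):
        return self.mlp(ct_feat)

class HybridClassifier(nn.Module):
    """Toy fusion of radiological and synthesized pathological features
    for subtype classification; all sizes are illustrative."""
    def __init__(self, num_classes=3):
        super().__init__()
        # RFEM stand-in: any CT encoder producing a 512-d vector.
        self.ct_encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 512))
        self.pfsm = PathFeatureSynthesizer(512, 256)
        self.head = nn.Linear(512 + 256, num_classes)

    def forward(self, ct):
        r = self.ct_encoder(ct)                  # radiological features
        p = self.pfsm(r)                         # synthesized pathological features
        return self.head(torch.cat([r, p], dim=1))

logits = HybridClassifier()(torch.randn(2, 1, 128, 128))   # shape (2, 3)
```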


Subjects
Lung Neoplasms; Tomography, X-Ray Computed; Humans; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/classification; Tomography, X-Ray Computed/methods; Neural Networks, Computer; Radiographic Image Interpretation, Computer-Assisted/methods; Algorithms
2.
IEEE Trans Image Process ; 33: 3486-3495, 2024.
Article in English | MEDLINE | ID: mdl-38814773

ABSTRACT

Continuous sign language recognition (CSLR) aims to recognize the glosses in a sign language video. Enhancing the generalization ability of the CSLR visual feature extractor is a worthy area of investigation. In this paper, we model glosses as priors that help to learn more generalizable visual features. Specifically, a signer-invariant gloss feature is extracted by a pre-trained gloss BERT model. We then design a gloss prior guidance network (GPGN). It contains a novel parallel densely-connected temporal feature extraction (PDC-TFE) module for multi-resolution visual feature extraction, which captures the complex temporal patterns of the glosses. The pre-trained gloss feature guides the visual feature learning through a cross-modality matching loss. We formulate the cross-modality feature matching as a regularized optimal transport problem, which can be efficiently solved by a variant of the Sinkhorn algorithm. The GPGN parameters are learned by optimizing a weighted sum of the cross-modality matching loss and the CTC loss. Experimental results on German and Chinese sign language benchmarks demonstrate that the proposed GPGN achieves competitive performance, and the ablation study verifies the effectiveness of several critical components of the GPGN. Furthermore, the proposed pre-trained gloss BERT model and cross-modality matching can be seamlessly integrated into other RGB-cue-based CSLR methods as plug-and-play formulations to enhance the generalization ability of the visual feature extractor.
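
The abstract names a Sinkhorn-style solver for the regularized optimal transport matching but does not spell out the variant. For background, the snippet below runs the textbook Sinkhorn iteration for entropy-regularized optimal transport on a toy visual-to-gloss cosine cost matrix; the uniform marginals, the cost choice, and the use of the transport cost as a matching loss are illustrative assumptions.

```python
import numpy as np

def sinkhorn(cost, a, b, eps=0.1, n_iters=200):
    """Textbook Sinkhorn iterations for entropy-regularized optimal transport
    with cost matrix `cost` and marginals a, b. The GPGN paper uses a variant;
    this is only the standard form."""
    K = np.exp(-cost / eps)                  # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)                    # satisfy column marginals
        u = a / (K @ v)                      # satisfy row marginals
    return u[:, None] * K * v[None, :]       # transport plan P

# Toy example: match 4 visual features to 3 gloss features by cosine cost.
rng = np.random.default_rng(0)
vis = rng.normal(size=(4, 8)); gls = rng.normal(size=(3, 8))
vis /= np.linalg.norm(vis, axis=1, keepdims=True)
gls /= np.linalg.norm(gls, axis=1, keepdims=True)
cost = 1.0 - vis @ gls.T                     # cosine distance matrix
P = sinkhorn(cost, np.full(4, 1 / 4), np.full(3, 1 / 3))
matching_loss = float((P * cost).sum())      # transport cost used as a loss
```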

3.
Med Image Anal ; 91: 102996, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37857067

ABSTRACT

This article discusses the opportunities, applications, and future directions of large-scale pretrained models, i.e., foundation models, which promise to significantly improve the analysis of medical images. Medical foundation models have immense potential in solving a wide range of downstream tasks, as they can help to accelerate the development of accurate and robust models, reduce the dependence on large amounts of labeled data, and preserve the privacy and confidentiality of patient data. Specifically, we illustrate the "spectrum" of medical foundation models, ranging from general imaging models and modality-specific models to organ/task-specific models, and highlight their challenges, opportunities, and applications. We also discuss how foundation models can be leveraged in downstream medical tasks to enhance the accuracy and efficiency of medical image analysis, leading to more precise diagnosis and treatment decisions.


Subjects
Diagnostic Imaging; Humans; Diagnostic Imaging/methods; Forecasting
4.
Med Image Anal ; 91: 102999, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37862866

ABSTRACT

Coronary CT angiography (CCTA) is an effective and non-invasive method for coronary artery disease diagnosis. Extracting an accurate coronary artery tree from CCTA images is essential for centerline extraction, plaque detection, and stenosis quantification. In practice, data quality varies: sometimes the arteries and veins have similar intensities and lie close together, which may prevent segmentation algorithms, even deep learning based ones, from delineating the arteries accurately, and it is not always feasible to re-scan the patient for better image quality. In this paper, we propose an artery and vein disentanglement network (AVDNet) for robust and accurate segmentation by incorporating the coronary veins into the segmentation task. This is the first work to segment the coronary arteries and veins at the same time. The AVDNet consists of an image based vessel recognition network (IVRN) and a topology based vessel refinement network (TVRN). IVRN learns to segment the arteries and veins, while TVRN learns to correct the segmentation errors based on topology consistency. We also design a novel inverse distance weighted dice (IDD) loss function to recover more thin vessel branches and preserve the vascular boundaries. Extensive experiments are conducted on a multi-center dataset of 700 patients. Quantitative and qualitative results demonstrate the effectiveness of the proposed method compared with state-of-the-art methods and different variants. Prediction results of the AVDNet on the Automated Segmentation of Coronary Artery Challenge dataset are available at https://github.com/WennyJJ/Coronary-Artery-Vein-Segmentation for follow-up research.
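
The exact form of the inverse distance weighted dice (IDD) loss is defined in the paper. The NumPy/SciPy sketch below shows one plausible reading, in which each voxel's Dice contribution is weighted by the inverse of its distance to the vessel boundary so that thin branches and boundary voxels dominate; the weighting formula and the toy mask are assumptions, not the published loss.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def inverse_distance_weighted_dice(pred, target, eps=1e-6):
    """One plausible inverse-distance-weighted Dice loss: voxels near the
    foreground boundary (thin branches, vessel walls) receive large weights,
    so errors there dominate. The AVDNet paper's exact weighting may differ."""
    dist_in = distance_transform_edt(target)        # distance to boundary, inside
    dist_out = distance_transform_edt(1 - target)   # distance to boundary, outside
    w = 1.0 / (dist_in + dist_out + 1.0)            # inverse-distance weights
    inter = np.sum(w * pred * target)
    denom = np.sum(w * pred) + np.sum(w * target)
    return 1.0 - (2.0 * inter + eps) / (denom + eps)

# Toy 2D example: a thin "vessel" mask and a noisy soft prediction.
target = np.zeros((64, 64)); target[30:34, 10:54] = 1
pred = np.clip(target + 0.1 * np.random.default_rng(0).normal(size=target.shape), 0, 1)
print(inverse_distance_weighted_dice(pred, target))
```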


Subjects
Algorithms; Coronary Vessels; Humans; Coronary Vessels/diagnostic imaging; Tomography, X-Ray Computed/methods; Coronary Angiography/methods; Computed Tomography Angiography/methods; Image Processing, Computer-Assisted/methods
5.
Article in English | MEDLINE | ID: mdl-38082617

ABSTRACT

Tooth segmentation from intraoral scans is a crucial part of digital dentistry, and many deep learning based tooth segmentation algorithms have been developed for this task. In most cases, high accuracy has been achieved; however, most of the available tooth segmentation techniques make an implicit, restrictive assumption of a full jaw model and report accuracy only on full jaw models. Clinically, however, a full jaw scan is not always required or available. Given this practical issue, it is important to understand the robustness of currently available, widely used deep learning based tooth segmentation techniques. For this purpose, we applied the available segmentation techniques to partial intraoral scans and discovered that they under-perform drastically. The analysis and comparison presented in this work help to establish the severity of the problem and to develop robust tooth segmentation techniques that do not rely on the strong assumption of a full jaw model. Clinical relevance: Deep learning based tooth mesh segmentation algorithms have achieved high accuracy, but in the clinical setting the robustness of deep learning based methods is of utmost importance. We discovered that high-performing tooth segmentation methods under-perform when segmenting partial intraoral scans. In this work, we conduct extensive experiments to show the extent of this problem and discuss why adding partial scans to the training data of tooth segmentation models is non-trivial. An in-depth understanding of this problem can help in developing robust tooth segmentation techniques.


Subjects
Deep Learning; Tooth; Algorithms; Tooth/diagnostic imaging; Radionuclide Imaging; Models, Dental
6.
Article in English | MEDLINE | ID: mdl-38083011

ABSTRACT

Accurate liver tumor segmentation is a prerequisite for data-driven tumor analysis. Multiphase computed tomography (CT) with extensive liver tumor characteristics is typically used as the most crucial diagnostic basis. However, the large variations in contrast, texture, and tumor structure between CT phases limit the generalization capabilities of the associated segmentation algorithms. Inadequate feature integration across phases might also lead to a performance decrease. To address these issues, we present a domain-adversarial transformer (DA-Tran) network for segmenting liver tumors from multiphase CT images. A DA module is designed to generate domain-adapted feature maps from the non-contrast-enhanced (NC) phase, arterial (ART) phase, portal venous (PV) phase, and delay phase (DP) images. These domain-adapted feature maps are then combined with 3D transformer blocks to capture patch-structured similarity and global context attention. The experimental findings show that DA-Tran produces cutting-edge tumor segmentation outcomes, making it an ideal candidate for this co-segmentation challenge.
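
The abstract does not describe how the DA module's phase adaptation is trained. One common recipe for domain-adversarial feature learning is a gradient reversal layer feeding a domain (here, CT phase) classifier; the PyTorch sketch below shows only that generic recipe, as an assumption about what such a module could look like, and DA-Tran's actual mechanism may differ.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, negated (scaled) gradient in backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

class DomainAdversarialHead(nn.Module):
    """Classifies which CT phase (NC/ART/PV/DP) a feature map came from; the
    reversed gradient pushes the encoder toward phase-invariant features.
    A hypothetical stand-in, not DA-Tran's DA module."""
    def __init__(self, feat_dim=256, n_phases=4, lam=1.0):
        super().__init__()
        self.lam = lam
        self.clf = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(),
                                 nn.Linear(feat_dim, n_phases))

    def forward(self, feat):
        return self.clf(GradReverse.apply(feat, self.lam))

feat = torch.randn(2, 256, 8, 8, 8, requires_grad=True)    # encoder features
phase_logits = DomainAdversarialHead()(feat)
loss = nn.CrossEntropyLoss()(phase_logits, torch.tensor([0, 2]))
loss.backward()                                             # encoder sees the reversed gradient
```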


Subjects
Liver Neoplasms; Humans; Liver Neoplasms/diagnostic imaging; Algorithms; Arteries; Electric Power Supplies; Generalization, Psychological
7.
Article in English | MEDLINE | ID: mdl-38083482

ABSTRACT

Lung cancer is a malignant tumor with rapid progression and a high fatality rate. According to the histological morphology and cell behaviour of cancerous tissues, lung cancer can be classified into a variety of subtypes. Since different subtypes correspond to distinct therapies, early and accurate diagnosis is critical for subsequent treatment and prognostic management. In clinical practice, pathological examination is regarded as the gold standard for cancer subtype diagnosis, but its invasiveness limits its extensive use, making the non-invasive and fast computed tomography (CT) examination a more commonly used modality in early cancer diagnosis. However, CT-based diagnosis is less accurate due to the relatively low image resolution and the atypical manifestations of cancer subtypes. In this work, we propose a novel automatic classification model to assist in accurately diagnosing lung cancer subtypes on CT images. Inspired by findings of cross-modality associations between CT images and their corresponding pathological images, the proposed model incorporates general histopathological information into CT-based lung cancer subtype diagnosis without invasive tissue sample collection or biopsy, thereby improving diagnostic accuracy. Experimental results on both internal and external evaluation datasets demonstrate that our model outputs more accurate lung cancer subtype predictions than existing CT-based state-of-the-art (SOTA) classification models, achieving significant improvements in both accuracy (ACC) and area under the receiver operating characteristic curve (AUC). Clinical relevance: This work provides a method for automatically classifying lung cancer subtypes on CT images.


Subjects
Lung Neoplasms; Humans; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/pathology; Lung/pathology; Tomography, X-Ray Computed/methods; Thorax; ROC Curve
9.
Nat Commun ; 14(1): 5510, 2023 09 07.
Article in English | MEDLINE | ID: mdl-37679325

ABSTRACT

Overcoming barriers to the use of multi-center data for medical analytics is challenging due to privacy protection and data heterogeneity in the healthcare system. In this study, we propose the Distributed Synthetic Learning (DSL) architecture to learn across multiple medical centers while ensuring the protection of sensitive personal information. DSL enables the building of a homogeneous dataset of entirely synthetic medical images via a form of GAN-based synthetic learning. The proposed DSL architecture has the following key functionalities: multi-modality learning, missing-modality completion learning, and continual learning. We systematically evaluate the performance of DSL on different medical applications using cardiac computed tomography angiography (CTA), brain tumor MRI, and histopathology nuclei datasets. Extensive experiments demonstrate the superior performance of DSL as a high-quality synthetic medical image provider, evaluated with a synthetic quality metric called Dist-FID. We show that DSL can be adapted to heterogeneous data and remarkably outperforms the real misaligned modalities segmentation model by 55% and the temporal datasets segmentation model by 8%.
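
Dist-FID is defined in the paper. As background for readers unfamiliar with Fréchet-style quality metrics, the sketch below computes the standard Fréchet distance between Gaussian fits of two feature sets, which is the quantity underlying FID; the random "features" stand in for Inception activations, and whatever aggregation Dist-FID adds on top of this is not shown.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_real, feats_synth):
    """Standard Fréchet distance between Gaussian fits of two feature sets
    (the quantity behind FID). Dist-FID builds on this idea; its exact
    definition is given in the DSL paper, not here."""
    mu_r, mu_s = feats_real.mean(0), feats_synth.mean(0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_s = np.cov(feats_synth, rowvar=False)
    covmean = sqrtm(cov_r @ cov_s)
    if np.iscomplexobj(covmean):                 # drop numerical imaginary residue
        covmean = covmean.real
    diff = mu_r - mu_s
    return float(diff @ diff + np.trace(cov_r + cov_s - 2.0 * covmean))

# Toy usage with random vectors standing in for Inception features.
rng = np.random.default_rng(0)
print(frechet_distance(rng.normal(size=(200, 16)),
                       rng.normal(loc=0.1, size=(200, 16))))
```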


Subjects
Brain Neoplasms; Learning; Humans; Angiography; Cell Nucleus; Computed Tomography Angiography
10.
Front Cell Dev Biol ; 11: 1242481, 2023.
Article in English | MEDLINE | ID: mdl-37635874

ABSTRACT

Intra-thymic T cell development is coordinated by the regulatory actions of the genome organizer SATB1. In this report, we show that SATB1 is involved in the regulation of transcription and splicing, both of which were deregulated in Satb1 knockout murine thymocytes. More importantly, we characterized a novel SATB1 protein isoform and described its distinct biophysical behavior, implicating potential functional differences compared with the commonly studied isoform. SATB1 utilized its prion-like domains to transition through liquid-like states to aggregated structures. This behavior was dependent on protein concentration as well as on phosphorylation and interaction with nuclear RNA. Notably, the long SATB1 isoform was more prone to aggregate following phase separation. Thus, the tight regulation of SATB1 isoform expression levels, alongside protein post-translational modifications, is imperative for SATB1's mode of action in T cell development. Our data indicate that deregulation of these processes may also be linked to disorders such as cancer.

11.
Med Image Anal ; 89: 102904, 2023 10.
Article in English | MEDLINE | ID: mdl-37506556

ABSTRACT

Generalization to previously unseen images with potential domain shifts is essential for clinically applicable medical image segmentation, and disentangling domain-specific and domain-invariant features is key for Domain Generalization (DG). However, existing DG methods struggle to achieve effective disentanglement. To address this problem, we propose an efficient framework called Contrastive Domain Disentanglement and Style Augmentation (CDDSA) for generalizable medical image segmentation. First, a disentanglement network decomposes the image into a domain-invariant anatomical representation and a domain-specific style code; the former is used for segmentation unaffected by domain shift, and the disentanglement is regularized by a decoder that combines the anatomical representation and style code to reconstruct the original image. Second, to achieve better disentanglement, a contrastive loss is proposed to encourage style codes from the same domain to be compact and those from different domains to be divergent. Finally, to further improve generalizability, we propose a style augmentation strategy to synthesize images with various unseen styles in real time while maintaining anatomical information. Comprehensive experiments on a public multi-site fundus image dataset and an in-house multi-site Nasopharyngeal Carcinoma Magnetic Resonance Image (NPC-MRI) dataset show that the proposed CDDSA achieves remarkable generalizability across different domains and outperforms several state-of-the-art methods in generalizable segmentation. Code is available at https://github.com/HiLab-git/DAG4MIA.
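
The contrastive loss described here pulls same-domain style codes together and pushes cross-domain codes apart. A minimal InfoNCE-style sketch that uses domain labels to define positive pairs is shown below; the temperature, normalization, and averaging choices are assumptions, and the actual CDDSA loss is in the linked repository.

```python
import torch
import torch.nn.functional as F

def domain_contrastive_loss(style_codes, domain_labels, temperature=0.1):
    """Minimal InfoNCE-style loss over style codes: same-domain codes are
    positives, other domains are negatives. CDDSA's exact loss may differ."""
    z = F.normalize(style_codes, dim=1)
    sim = z @ z.t() / temperature                        # scaled cosine similarities
    eye = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    pos = (domain_labels[:, None] == domain_labels[None, :]) & ~eye
    logits = sim.masked_fill(eye, float('-inf'))         # drop self-pairs
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(~pos, 0.0)           # keep positive pairs only
    n_pos = pos.sum(1).clamp(min=1)
    loss = -log_prob.sum(1) / n_pos                      # mean log-prob of positives
    return loss[pos.sum(1) > 0].mean()                   # anchors that have a positive

codes = torch.randn(8, 32)                               # style codes of one batch
domains = torch.tensor([0, 0, 1, 1, 2, 2, 0, 1])         # acquisition-site labels
print(domain_contrastive_loss(codes, domains))
```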


Subjects
Image Processing, Computer-Assisted; Humans; Fundus Oculi
12.
IEEE Trans Pattern Anal Mach Intell ; 45(8): 10409-10426, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37022840

ABSTRACT

Modern medical imaging techniques, such as ultrasound (US) and cardiac magnetic resonance (MR) imaging, have enabled the evaluation of myocardial deformation directly from an image sequence. While many traditional cardiac motion tracking methods have been developed for the automated estimation of the myocardial wall deformation, they are not widely used in clinical diagnosis, due to their lack of accuracy and efficiency. In this paper, we propose a novel deep learning-based fully unsupervised method, SequenceMorph, for in vivo motion tracking in cardiac image sequences. In our method, we introduce the concept of motion decomposition and recomposition. We first estimate the inter-frame (INF) motion field between any two consecutive frames, by a bi-directional generative diffeomorphic registration neural network. Using this result, we then estimate the Lagrangian motion field between the reference frame and any other frame, through a differentiable composition layer. Our framework can be extended to incorporate another registration network, to further reduce the accumulated errors introduced in the INF motion tracking step, and to refine the Lagrangian motion estimation. By utilizing temporal information to perform reasonable estimations of spatio-temporal motion fields, this novel method provides a useful solution for image sequence motion tracking. Our method has been applied to US (echocardiographic) and cardiac MR (untagged and tagged cine) image sequences; the results show that SequenceMorph is significantly superior to conventional motion tracking methods, in terms of the cardiac motion tracking accuracy and inference efficiency.
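
The central step described here, composing inter-frame (INF) displacement fields into a Lagrangian field relative to the reference frame, can be illustrated with a small 2D sketch: each new INF field is warped by the accumulated displacement and added to it. The grid_sample-based warp and the toy field sizes below are illustrative and are not SequenceMorph's actual differentiable composition layer.

```python
import torch
import torch.nn.functional as F

def warp(field, disp):
    """Sample `field` (B,C,H,W) at locations x + disp, where `disp` holds
    pixel displacements (dx, dy) with shape (B,2,H,W)."""
    B, _, H, W = disp.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing='ij')
    gx = (xs[None] + disp[:, 0]) / (W - 1) * 2 - 1       # normalize to [-1, 1]
    gy = (ys[None] + disp[:, 1]) / (H - 1) * 2 - 1
    grid = torch.stack([gx, gy], dim=-1)                 # (B,H,W,2)
    return F.grid_sample(field, grid, align_corners=True)

def compose(disp_0t, disp_t1):
    """Lagrangian displacement 0 -> t+1 from 0 -> t and the INF field t -> t+1:
    u_{0,t+1}(x) = u_{0,t}(x) + u_{t,t+1}(x + u_{0,t}(x))."""
    return disp_0t + warp(disp_t1, disp_0t)

# Accumulate a Lagrangian field over a toy sequence of 5 inter-frame fields.
inf_fields = [0.5 * torch.randn(1, 2, 32, 32) for _ in range(5)]
lagrangian = inf_fields[0]
for d in inf_fields[1:]:
    lagrangian = compose(lagrangian, d)
print(lagrangian.shape)                                  # torch.Size([1, 2, 32, 32])
```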


Subjects
Algorithms; Unsupervised Machine Learning; Heart/diagnostic imaging; Motion; Magnetic Resonance Imaging
13.
IEEE J Biomed Health Inform ; 27(7): 3302-3313, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37067963

ABSTRACT

In recent years, several deep learning models have been proposed to accurately quantify and diagnose cardiac pathologies. These automated tools rely heavily on accurate segmentation of cardiac structures in MRI images. However, segmentation of the right ventricle is challenging due to its highly complex shape and ill-defined borders. Hence, there is a need for new methods to handle this structure's geometrical and textural complexities, notably in the presence of pathologies such as Dilated Right Ventricle, Tricuspid Regurgitation, Arrhythmogenesis, Tetralogy of Fallot, and Inter-atrial Communication. The last MICCAI challenge on right ventricle segmentation was held in 2012 and included only 48 cases from a single clinical center. As part of the 12th Workshop on Statistical Atlases and Computational Models of the Heart (STACOM 2021), the M&Ms-2 challenge was organized to promote the interest of the research community in right ventricle segmentation in multi-disease, multi-view, and multi-center cardiac MRI. Three hundred sixty CMR cases, including short-axis and long-axis 4-chamber views, were collected from three Spanish hospitals using nine different scanners from three different vendors, and included a diverse set of right and left ventricle pathologies. The solutions provided by the participants show that nnU-Net achieved the best results overall. However, multi-view approaches were able to capture additional information, highlighting the need to integrate multiple cardiac diseases, views, scanners, and acquisition protocols to produce reliable automatic cardiac segmentation algorithms.


Subjects
Deep Learning; Heart Ventricles; Humans; Heart Ventricles/diagnostic imaging; Magnetic Resonance Imaging/methods; Algorithms; Heart Atria
14.
Sci Data ; 10(1): 231, 2023 04 21.
Article in English | MEDLINE | ID: mdl-37085533

ABSTRACT

The success of training computer-vision models relies heavily on the support of large-scale, real-world images with annotations. Yet such annotation-ready datasets are difficult to curate in pathology due to privacy protection and the excessive annotation burden. To aid computational pathology, synthetic data generation, curation, and annotation present a cost-effective means to quickly enable the data diversity required to boost model performance at different stages. In this study, we introduce a large-scale synthetic pathological image dataset paired with annotations for nuclei semantic segmentation, termed Synthetic Nuclei and annOtation Wizard (SNOW). SNOW is produced via a standardized workflow that applies an off-the-shelf image generator and nuclei annotator. The dataset contains 20k image tiles and 1,448,522 annotated nuclei in total, released under the CC-BY license. We show that SNOW can be used in both supervised and semi-supervised training scenarios. Extensive results suggest that synthetic-data-trained models are competitive under a variety of model training settings, expanding the scope for using synthetic images to enhance downstream data-driven clinical tasks.


Subjects
Breast Neoplasms; Deep Learning; Privacy; Workflow; Image Processing, Computer-Assisted; Semantics; Humans; Female
16.
Med Image Anal ; 82: 102642, 2022 11.
Article in English | MEDLINE | ID: mdl-36223682

ABSTRACT

Whole abdominal organ segmentation is important in diagnosing abdominal lesions, radiotherapy, and follow-up. However, delineating all abdominal organs from 3D volumes by hand is time-consuming and very expensive for oncologists. Deep learning-based medical image segmentation has shown the potential to reduce manual delineation efforts, but it still requires a large-scale, finely annotated dataset for training, and there is a lack of large-scale datasets covering the whole abdominal region with accurate and detailed annotations for whole abdominal organ segmentation. In this work, we establish a new large-scale Whole abdominal ORgan Dataset (WORD) for algorithm research and clinical application development. This dataset contains 150 abdominal CT volumes (30495 slices). Each volume has 16 organs with fine pixel-level annotations and scribble-based sparse annotations, which may make it the largest dataset with whole abdominal organ annotation. Several state-of-the-art segmentation methods are evaluated on this dataset, and we also invited three experienced oncologists to revise the model predictions to measure the gap between the deep learning methods and oncologists. Afterwards, we investigate inference-efficient learning on WORD, as the high-resolution images require large GPU memory and long inference times at test time. We further evaluate scribble-based annotation-efficient learning on this dataset, as pixel-wise manual annotation is time-consuming and expensive. This work provides a new benchmark for the abdominal multi-organ segmentation task, and these experiments can serve as baselines for future research and clinical application development.


Subjects
Benchmarking; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Algorithms; Abdomen; Image Processing, Computer-Assisted/methods
17.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 4758-4763, 2022 07.
Article in English | MEDLINE | ID: mdl-36086601

ABSTRACT

Multi-modality images are widely used and provide comprehensive information for medical image analysis. However, acquiring all modalities at all institutes is costly and often impossible in clinical settings. To leverage more comprehensive multi-modality information, we propose a privacy-secured, decentralized multi-modality adaptive learning architecture named ModalityBank. Our method learns a set of effective domain-specific modulation parameters plugged into a common domain-agnostic network. We demonstrate that, by switching between different sets of configurations, the generator can output high-quality images for a specific modality. Our method can also complete the missing modalities across all data centers and can thus be used for modality completion. A downstream task trained on the synthesized multi-modality samples achieves higher performance than training on a single real data center, and close-to-real performance compared with training on all real images.
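
"Domain-specific modulation parameters plugged into a common domain-agnostic network" is reminiscent of conditional (FiLM-like) feature modulation. The PyTorch sketch below keeps a bank of per-modality scale/shift parameters applied inside a shared generator trunk; the parameterization, layer placement, and modality count are assumptions rather than ModalityBank's actual design.

```python
import torch
import torch.nn as nn

class ModalityModulation(nn.Module):
    """Bank of per-modality scale/shift (FiLM-like) parameters applied to the
    features of a shared, modality-agnostic trunk. Loosely inspired by the
    'domain-specific modulation parameters' idea; not the real parameterization."""
    def __init__(self, n_modalities, channels):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(n_modalities, channels))
        self.beta = nn.Parameter(torch.zeros(n_modalities, channels))

    def forward(self, feat, modality_idx):
        g = self.gamma[modality_idx][:, :, None, None]   # (B,C,1,1)
        b = self.beta[modality_idx][:, :, None, None]
        return g * feat + b

class SharedGenerator(nn.Module):
    """Domain-agnostic conv trunk whose intermediate features are modulated by
    the selected modality's parameters (hypothetical T1/T2/FLAIR banks)."""
    def __init__(self, n_modalities=3, ch=32):
        super().__init__()
        self.inc = nn.Conv2d(1, ch, 3, padding=1)
        self.mod = ModalityModulation(n_modalities, ch)
        self.out = nn.Conv2d(ch, 1, 3, padding=1)

    def forward(self, x, modality_idx):
        h = torch.relu(self.inc(x))
        h = self.mod(h, modality_idx)
        return self.out(h)

gen = SharedGenerator()
x = torch.randn(2, 1, 64, 64)
t1_like = gen(x, torch.tensor([0, 0]))      # one "configuration"
flair_like = gen(x, torch.tensor([2, 2]))   # same trunk, different modulation bank
```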


Subjects
Magnetic Resonance Imaging; Multimodal Imaging; Magnetic Resonance Imaging/methods; Multimodal Imaging/methods
18.
Front Cardiovasc Med ; 9: 919810, 2022.
Article in English | MEDLINE | ID: mdl-35859582

ABSTRACT

Recent advances in magnetic resonance imaging are enabling the efficient creation of high-dimensional, multiparametric images, containing a wealth of potential information about the structure and function of many organs, including the cardiovascular system. However, the sizes of these rich data sets are so large that they are outstripping our ability to adequately visualize and analyze them, thus limiting their clinical impact. While there are some intrinsic limitations of human perception and of conventional display devices which hamper our ability to effectively use these data, newer computational methods for handling the data may aid our ability to extract and visualize the salient components of these high-dimensional data sets.

19.
Med Image Anal ; 80: 102485, 2022 08.
Article in English | MEDLINE | ID: mdl-35679692

ABSTRACT

Examination of pathological images is the gold standard for diagnosing and screening many kinds of cancer. Multiple datasets, benchmarks, and challenges have been released in recent years, resulting in significant improvements in computer-aided diagnosis (CAD) of related diseases. However, few existing works focus on the digestive system. We released two well-annotated benchmark datasets and organized challenges for digestive-system pathological cell detection and tissue segmentation, in conjunction with the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). This paper first introduces the two released datasets, i.e., signet ring cell detection and colonoscopy tissue segmentation, with descriptions of data collection, annotation, and potential uses. We also report the set-up, evaluation metrics, and top-performing methods and results of the two challenge tasks for cell detection and tissue segmentation. In particular, the challenge received 234 effective submissions from 32 participating teams, and the top-performing teams developed advanced approaches and tools for the CAD of digestive pathology. To the best of our knowledge, these are the first publicly available datasets with corresponding challenges for digestive-system pathological detection and segmentation. The related datasets and results provide new opportunities for research and applications in digestive pathology.


Subjects
Benchmarking; Diagnosis, Computer-Assisted; Colonoscopy; Humans; Image Processing, Computer-Assisted/methods
20.
Med Image Anal ; 80: 102517, 2022 08.
Article in English | MEDLINE | ID: mdl-35732106

ABSTRACT

Although Convolutional Neural Networks (CNNs) have achieved promising performance in many medical image segmentation tasks, they rely on a large set of labeled images for training, which is expensive and time-consuming to acquire. Semi-supervised learning has shown the potential to alleviate this challenge by learning from a large set of unlabeled images and limited labeled samples. In this work, we present a simple yet efficient consistency regularization approach for semi-supervised medical image segmentation, called Uncertainty Rectified Pyramid Consistency (URPC). Inspired by the pyramid feature network, we use a pyramid-prediction network that obtains a set of segmentation predictions at different scales. For semi-supervised learning, URPC learns from unlabeled data by minimizing the discrepancy between each of the pyramid predictions and their average. We further present multi-scale uncertainty rectification to boost the pyramid consistency regularization, where the rectification tempers the consistency loss at outlier pixels whose predictions differ substantially from the average, potentially due to upsampling errors or the lack of enough labeled data. Experiments on two public datasets and an in-house clinical dataset showed that: 1) URPC achieves large performance improvements by utilizing unlabeled data, and 2) compared with five existing semi-supervised methods, URPC achieves better or comparable results with a simpler pipeline. Furthermore, we build a semi-supervised medical image segmentation codebase to boost research on this topic: https://github.com/HiLab-git/SSL4MIS.
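
A minimal sketch of the pyramid consistency idea follows, assuming softmax predictions from several decoder scales upsampled to a common resolution and pulled toward their average, with an exponential uncertainty weight that downweights outlier pixels. The KL-based weighting and the added regularization term are assumptions; the exact URPC formulation is in the paper and the linked codebase.

```python
import torch
import torch.nn.functional as F

def pyramid_consistency_loss(pyramid_probs, eps=1e-6):
    """Sketch of an uncertainty-rectified pyramid consistency loss: each
    scale's prediction is pulled toward the multi-scale average, and pixels
    where a scale disagrees strongly with that average (high KL "uncertainty")
    are downweighted. URPC's exact rectification is defined in its code."""
    probs = torch.stack(pyramid_probs, dim=0)            # (S,B,C,H,W), softmaxed
    mean = probs.mean(dim=0, keepdim=True)
    kl = (probs * (torch.log(probs + eps) - torch.log(mean + eps))).sum(2)
    weight = torch.exp(-kl)                              # downweight outlier pixels
    mse = ((probs - mean) ** 2).sum(2)                   # per-pixel discrepancy
    return (weight * mse).mean() + kl.mean()             # consistency + regularizer

# Toy usage: predictions from 3 decoder scales, upsampled to a common size.
logits = [torch.randn(2, 4, s, s) for s in (16, 32, 64)]
probs = [F.interpolate(F.softmax(l, dim=1), size=(64, 64),
                       mode='bilinear', align_corners=False) for l in logits]
print(pyramid_consistency_loss(probs))
```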


Subjects
Neural Networks, Computer; Supervised Machine Learning; Humans; Image Processing, Computer-Assisted/methods; Uncertainty