Results 1 - 20 of 47
1.
Endoscopy; 2024 May 02.
Article in English | MEDLINE | ID: mdl-38547927

ABSTRACT

BACKGROUND: This study evaluated the effect of an artificial intelligence (AI)-based clinical decision support system on the performance and diagnostic confidence of endoscopists in their assessment of Barrett's esophagus (BE). METHODS: 96 standardized endoscopy videos were assessed by 22 endoscopists with varying degrees of BE experience from 12 centers. Assessment was randomized into two video sets: group A (review first without AI and second with AI) and group B (review first with AI and second without AI). Endoscopists were required to evaluate each video for the presence of Barrett's esophagus-related neoplasia (BERN) and then decide on a spot for a targeted biopsy. After the second assessment, they were allowed to change their clinical decision and confidence level. RESULTS: AI had a stand-alone sensitivity, specificity, and accuracy of 92.2%, 68.9%, and 81.3%, respectively. Without AI, BE experts had an overall sensitivity, specificity, and accuracy of 83.3%, 58.1%, and 71.5%, respectively. BE nonexperts showed a significant improvement in sensitivity and specificity when videos were assessed a second time with AI (sensitivity 69.8% [95%CI 65.2%-74.2%] to 78.0% [95%CI 74.0%-82.0%]; specificity 67.3% [95%CI 62.5%-72.2%] to 72.7% [95%CI 68.2%-77.3%]). In addition, the diagnostic confidence of BE nonexperts improved significantly with AI. CONCLUSION: BE nonexperts benefited significantly from additional AI support. Both BE experts and nonexperts remained significantly below the stand-alone performance of the AI, suggesting that other factors may influence endoscopists' decisions to follow or discard AI advice.
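
As an aside for readers reproducing such evaluations: the reported sensitivity, specificity, and accuracy follow directly from a 2×2 confusion matrix. Below is a minimal Python sketch with hypothetical counts (not the study's data); the normal-approximation confidence intervals are one common choice and not necessarily the method used in the paper.

```python
import math

def binomial_ci(successes: int, total: int, z: float = 1.96) -> tuple[float, float]:
    """Normal-approximation (Wald) 95% confidence interval for a proportion."""
    p = successes / total
    half_width = z * math.sqrt(p * (1 - p) / total)
    return max(0.0, p - half_width), min(1.0, p + half_width)

def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Sensitivity, specificity, and accuracy with 95% CIs from a 2x2 confusion matrix."""
    return {
        "sensitivity": (tp / (tp + fn), binomial_ci(tp, tp + fn)),
        "specificity": (tn / (tn + fp), binomial_ci(tn, tn + fp)),
        "accuracy": ((tp + tn) / (tp + fp + tn + fn),
                     binomial_ci(tp + tn, tp + fp + tn + fn)),
    }

# Hypothetical counts, not taken from the study:
print(diagnostic_metrics(tp=47, fp=14, tn=31, fn=4))
```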

2.
JPRAS Open; 39: 330-343, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38390355

ABSTRACT

Background: The utilization of three-dimensional (3D) surface imaging for facial anthropometry is a significant asset for patients undergoing maxillofacial surgery. Notably, recent advancements in smartphone technology enable 3D surface imaging. In this study, anthropometric assessments of the face were performed using a smartphone and a sophisticated 3D surface imaging system. Methods: 30 healthy volunteers (15 females and 15 males) were included in the study. An iPhone 14 Pro (Apple Inc., USA) running the application 3D Scanner App (Laan Consulting Corp., USA) and the Vectra M5 (Canfield Scientific, USA) were employed to create 3D surface models. For each participant, 19 anthropometric measurements were conducted on the 3D surface models. Subsequently, the anthropometric measurements generated by the two approaches were compared. The statistical techniques employed included the paired t-test, paired Wilcoxon signed-rank test, Bland-Altman analysis, and calculation of the intraclass correlation coefficient (ICC). Results: All measurements showed excellent agreement between the smartphone-based and Vectra M5-based measurements (ICC between 0.85 and 0.97). Statistical analysis revealed no statistically significant differences in the central tendencies for 17 of the 19 linear measurements. Despite this excellent agreement, Bland-Altman analysis revealed that the 95% limits of agreement between the two methods exceeded ±3 mm for the majority of measurements. Conclusion: Digital facial anthropometry using smartphones can serve as a valuable supplementary tool for surgeons, enhancing their communication with patients. However, the data suggest that digital facial anthropometry using smartphones may not yet be suitable for diagnostic purposes that require high accuracy.
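
The agreement statistics named in this abstract are straightforward to reproduce. The sketch below computes the Bland-Altman bias and 95% limits of agreement plus the two paired tests on simulated measurement pairs; all numbers are synthetic stand-ins, not study data.

```python
import numpy as np
from scipy.stats import ttest_rel, wilcoxon

rng = np.random.default_rng(0)
# Hypothetical paired measurements (mm) for one facial distance; not study data.
vectra = rng.normal(62.0, 4.0, size=30)
iphone = vectra + rng.normal(0.4, 1.8, size=30)  # simulated bias + noise

diff = iphone - vectra
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)  # half-width of the 95% limits of agreement
print(f"bias = {bias:.2f} mm, LoA = [{bias - loa:.2f}, {bias + loa:.2f}] mm")
print(ttest_rel(iphone, vectra))   # paired t-test
print(wilcoxon(iphone, vectra))    # paired Wilcoxon signed-rank test
```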

3.
Article in English | MEDLINE | ID: mdl-38306026

ABSTRACT

BACKGROUND: Differentiation of high-flow from low-flow vascular malformations (VMs) is crucial for the therapeutic management of this orphan disease. OBJECTIVE: A convolutional neural network (CNN) was evaluated for the differentiation of peripheral VMs on T2-weighted short tau inversion recovery (STIR) MRI. METHODS: 527 MRIs (386 low-flow and 141 high-flow VMs) were randomly divided into training, validation, and test sets for this single-center study. 1) The CNN's diagnostic performance was compared with that of two expert and four junior radiologists. 2) The influence of the CNN's prediction on the radiologists' performance and diagnostic certainty was evaluated. 3) The junior radiologists' performance after self-training was compared with that of the CNN. RESULTS: Compared with the expert radiologists, the CNN achieved similar accuracy (92% vs. 97%, p = 0.11), sensitivity (80% vs. 93%, p = 0.16), and specificity (97% vs. 100%, p = 0.50). In comparison to the junior radiologists, the CNN had higher specificity and accuracy (97% vs. 80%, p < 0.001; 92% vs. 77%, p < 0.001). CNN assistance had no significant influence on the junior radiologists' diagnostic performance and certainty. After self-training, their specificity and accuracy improved and were comparable to those of the CNN. CONCLUSIONS: The diagnostic performance of the CNN for differentiating high-flow from low-flow VMs was comparable to that of expert radiologists. The CNN did not significantly improve the simulated daily practice of junior radiologists; self-training was more effective.
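
The abstract does not name the statistical test behind its p-values; for reader-versus-model comparisons on the same test cases, McNemar's test is one standard choice for paired classifier decisions. The sketch below is illustrative only, with hypothetical per-case correctness.

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical paired correctness on the same test cases (1 = correct); not study data.
cnn_correct    = np.array([1, 1, 0, 1, 1, 1, 0, 1, 1, 1])
reader_correct = np.array([1, 0, 0, 1, 1, 0, 0, 1, 0, 1])

# 2x2 table of agreement/disagreement between the two raters.
table = np.zeros((2, 2), dtype=int)
for c, r in zip(cnn_correct, reader_correct):
    table[c, r] += 1

print(mcnemar(table, exact=True))  # exact binomial version for small counts
```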

5.
Comput Biol Med; 169: 107929, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38184862

ABSTRACT

In the field of computer- and robot-assisted minimally invasive surgery, enormous progress has been made in recent years based on the recognition of surgical instruments in endoscopic images and videos. In particular, the determination of the position and type of instruments is of great interest. Current work involves both spatial and temporal information, with the idea that predicting the movement of surgical tools over time may improve the quality of the final segmentations. The provision of publicly available datasets has recently encouraged the development of new methods, mainly based on deep learning. In this review, we identify and characterize datasets used for method development and evaluation and quantify their frequency of use in the literature. We further present an overview of the current state of research regarding the segmentation and tracking of minimally invasive surgical instruments in endoscopic images and videos. The paper focuses on methods that work purely visually, without markers of any kind attached to the instruments, considering both single-frame semantic and instance segmentation approaches, as well as those that incorporate temporal information. The publications analyzed were identified through the platforms Google Scholar, Web of Science, and PubMed. The search terms used were "instrument segmentation", "instrument tracking", "surgical tool segmentation", and "surgical tool tracking", resulting in a total of 741 articles published between 01/2015 and 07/2023, of which 123 were included using systematic selection criteria. A discussion of the reviewed literature is provided, highlighting existing shortcomings and emphasizing the available potential for future developments.


Subjects
Robotic Surgical Procedures; Surgery, Computer-Assisted; Endoscopy; Minimally Invasive Surgical Procedures; Robotic Surgical Procedures/methods; Surgery, Computer-Assisted/methods; Surgical Instruments; Image Processing, Computer-Assisted/methods
6.
Plast Reconstr Surg; 152(4): 670e-674e, 2023 10 01.
Article in English | MEDLINE | ID: mdl-36952590

ABSTRACT

SUMMARY: Digital-nerve lesions result in a loss of tactile sensation reflected by an anesthetic area (AA) at the radial or ulnar aspect of the affected digit. Available tools to monitor the recovery of tactile sense have been criticized for their lack of validity. Precise quantification of AA dynamics by three-dimensional (3D) imaging could serve as an accurate surrogate to monitor recovery after digital-nerve repair. For validation, AAs were marked on the digits of healthy volunteers to simulate impaired cutaneous innervation. The 3D models were composed from raw images acquired with a 3D camera to precisely quantify the relative AA for each digit (3D models, n = 80). Operator properties varied with regard to individual experience in 3D imaging and image processing. In addition, the concept was applied in a clinical case study. Results showed that images taken by experienced photographers were rated as better quality (P < 0.001) and needed less processing time (P = 0.020). Quantification of the relative AA was not altered significantly by the experience level of the photographer (P = 0.425) or image assembler (P = 0.749). The proposed concept allows precise and reliable surface quantification of digits and can be performed consistently without relevant distortion from lack of examiner experience. Routine 3D imaging of the AA has great potential to provide visual evidence of the various returning states of sensation and to convert sensory nerve recovery into a metric variable with high responsiveness to temporal progress.
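
The relative AA is, in essence, a ratio of two mesh surface areas. A minimal sketch of that computation follows, using a toy triangle mesh rather than real 3D camera data.

```python
import numpy as np

def mesh_area(vertices: np.ndarray, faces: np.ndarray) -> float:
    """Total surface area of a triangle mesh: half the norm of each face's cross product."""
    tri = vertices[faces]            # (n_faces, 3, 3): the 3 corners of each triangle
    a = tri[:, 1] - tri[:, 0]
    b = tri[:, 2] - tri[:, 0]
    return 0.5 * np.linalg.norm(np.cross(a, b), axis=1).sum()

# Toy example: one unit right triangle as the "anesthetic area" patch,
# two such triangles as the whole digit surface (hypothetical geometry).
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], dtype=float)
aa_faces = np.array([[0, 1, 2]])
digit_faces = np.array([[0, 1, 2], [1, 3, 2]])
relative_aa = mesh_area(verts, aa_faces) / mesh_area(verts, digit_faces)
print(f"relative AA = {relative_aa:.2f}")  # 0.50
```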


Subjects
Sensation; Touch Perception; Humans; Touch; Image Processing, Computer-Assisted; Skin; Imaging, Three-Dimensional/methods
7.
Comput Biol Med; 154: 106585, 2023 03.
Article in English | MEDLINE | ID: mdl-36731360

ABSTRACT

Semantic segmentation is an essential task in medical imaging research. Many powerful deep-learning-based approaches can be employed for this problem, but they depend on the availability of an expansive labeled dataset. In this work, we augment such supervised segmentation models to be suitable for learning from unlabeled data. Our semi-supervised approach, termed Error-Correcting Mean-Teacher, uses an exponential moving average model like the original Mean Teacher but introduces a new paradigm of error correction. The original segmentation network is augmented to handle this secondary correction task. Both tasks build upon the core feature extraction layers of the model. For the correction task, features detected in the input image are fused with features detected in the predicted segmentation and further processed with task-specific decoder layers. The combination of image and segmentation features allows the model to correct mistakes present in the given input pair. The correction task is trained jointly on the labeled data. On unlabeled data, the exponential moving average of the original network corrects the student's prediction. The combination of the student's prediction and the teacher's correction forms the basis for the semi-supervised update. We evaluate our method on the 2017 and 2018 Robotic Scene Segmentation data, the ISIC 2017 and BraTS 2020 challenges, a proprietary Endoscopic Submucosal Dissection dataset, Cityscapes, and Pascal VOC 2012. Additionally, we analyze the impact of the individual components and examine the behavior when the amount of labeled data varies, with experiments performed on two distinct segmentation architectures. Our method improves the mean Intersection over Union over the supervised baseline and competing methods. Code is available at https://github.com/CloneRob/ECMT.
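
The teacher in a Mean-Teacher setup is an exponential moving average (EMA) of the student's weights. A minimal PyTorch sketch of that update, under the usual formulation (not necessarily the authors' exact code, which is linked above), looks like this:

```python
import torch

@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module, alpha: float = 0.99):
    """Exponential-moving-average update of the teacher's weights from the student:
    teacher = alpha * teacher + (1 - alpha) * student, applied in place."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(alpha).add_(s_param, alpha=1 - alpha)

# Usage sketch: after each optimizer step on the student, call
# ema_update(teacher, student, alpha=0.99)
```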


Subjects
Biomedical Research; Robotics; Humans; Semantics; Image Processing, Computer-Assisted
8.
Gastrointest Endosc; 97(5): 911-916, 2023 05.
Article in English | MEDLINE | ID: mdl-36646146

ABSTRACT

BACKGROUND AND AIMS: Celiac disease, with its endoscopic manifestation of villous atrophy (VA), is underdiagnosed worldwide. The application of artificial intelligence (AI) for the macroscopic detection of VA at routine EGD may improve diagnostic performance. METHODS: A dataset of 858 endoscopic images of 182 patients with VA and 846 images from 323 patients with normal duodenal mucosa was collected and used to train a ResNet18 deep learning model to detect VA. An external dataset was used to test the algorithm as well as 6 fellows and 4 board-certified gastroenterologists. Fellows could consult the AI algorithm's result during the test. Based on the distribution of these consultations, test images were stratified into "easy" and "difficult" categories for stratified performance measurement. RESULTS: External validation of the AI algorithm yielded values of 90%, 76%, and 84% for sensitivity, specificity, and accuracy, respectively. Fellows scored corresponding values of 63%, 72%, and 67%, and experts scored 72%, 69%, and 71%, respectively. AI consultation significantly improved all trainee performance statistics. Although fellows and experts showed significantly lower performance for difficult images, the performance of the AI algorithm remained stable. CONCLUSIONS: In this study, an AI algorithm outperformed endoscopy fellows and experts in the detection of VA on endoscopic still images. AI decision support significantly improved the performance of nonexpert endoscopists. The stable performance on difficult images suggests a further positive add-on effect in challenging cases.
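
For orientation, setting up a ResNet18 for such a binary classification task takes only a few lines in PyTorch; the sketch below uses placeholder data and illustrative hyperparameters, not the study's training pipeline.

```python
import torch
import torchvision

# Assumed setup: ImageNet-pretrained ResNet18 with a 2-class head
# (villous atrophy vs. normal mucosa); hyperparameters are illustrative.
model = torchvision.models.resnet18(
    weights=torchvision.models.ResNet18_Weights.IMAGENET1K_V1
)
model.fc = torch.nn.Linear(model.fc.in_features, 2)

criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.randn(4, 3, 224, 224)   # stand-in batch of endoscopic images
labels = torch.tensor([0, 1, 1, 0])    # 1 = villous atrophy (hypothetical labels)
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```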


Subjects
Artificial Intelligence; Deep Learning; Humans; Endoscopy, Gastrointestinal; Algorithms; Atrophy
9.
J Clin Med; 11(17), 2022 Aug 25.
Article in English | MEDLINE | ID: mdl-36078928

ABSTRACT

BACKGROUND: Reliable, time- and cost-effective, and clinician-friendly diagnostic tools are cornerstones of facial palsy (FP) patient management. Different automated FP grading systems have been developed, but they have persistent downsides such as insufficient accuracy and cost-intensive hardware. We aimed to overcome these barriers and programmed an automated grading system for FP patients based on the House-Brackmann scale (HBS). METHODS: Image datasets of 86 patients seen at the Department of Plastic, Hand, and Reconstructive Surgery at the University Hospital Regensburg, Germany, between June 2017 and May 2021, were used to train the neural network and evaluate its accuracy. Nine facial poses per patient were analyzed by the algorithm. RESULTS: The algorithm showed an accuracy of 100%. Oversampling did not alter the outcomes, while the direct form displayed superior accuracy compared to the modular classification form (n = 86; 100% vs. 99%). The Early Fusion technique yielded improved accuracy compared to the Late Fusion and sequential methods (n = 86; 100% vs. 96% vs. 97%). CONCLUSIONS: Our automated FP grading system combines high-level accuracy with cost- and time-effectiveness. Our algorithm may accelerate the grading process in FP patients and facilitate the FP surgeon's workflow.
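
The Early/Late Fusion distinction can be made concrete with a small sketch: early fusion concatenates the nine poses along the channel axis before a single forward pass, while late fusion averages per-pose predictions. The tiny networks below are placeholders, not the authors' architecture.

```python
import torch
import torch.nn as nn

poses = torch.randn(2, 9, 3, 128, 128)   # batch of 2 patients, 9 RGB poses each

# Early fusion: stack all poses along the channel axis, one forward pass.
early_net = nn.Sequential(
    nn.Conv2d(9 * 3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 6),  # 6 HBS grades
)
early_logits = early_net(poses.flatten(1, 2))  # (2, 27, 128, 128) -> (2, 6)

# Late fusion: run each pose separately, then average the per-pose logits.
single_net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 6),
)
late_logits = torch.stack([single_net(poses[:, i]) for i in range(9)]).mean(dim=0)
print(early_logits.shape, late_logits.shape)  # torch.Size([2, 6]) twice
```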

10.
Gut; 71(12): 2388-2390, 2022 12.
Article in English | MEDLINE | ID: mdl-36109151

ABSTRACT

In this study, we aimed to develop an artificial intelligence clinical decision support solution to mitigate operator-dependent complications, such as bleeding and perforation, during complex endoscopic procedures such as endoscopic submucosal dissection and peroral endoscopic myotomy. A DeepLabv3-based model was trained to delineate vessels, tissue structures and instruments on endoscopic still images from such procedures. The mean cross-validated Intersection over Union and Dice score were 63% and 76%, respectively. Applied to standardised video clips from third-space endoscopic procedures, the algorithm showed a mean vessel detection rate of 85% with a false-positive rate of 0.75/min. These performance statistics suggest a potential clinical benefit for procedure safety, procedure time, and training.
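
The reported Intersection over Union and Dice score are simple overlap statistics between predicted and ground-truth masks; a minimal sketch with toy masks follows.

```python
import numpy as np

def iou_and_dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    """Intersection over Union and Dice score for binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    iou = inter / (union + eps)
    dice = 2 * inter / (pred.sum() + target.sum() + eps)
    return iou, dice

# Toy masks, not study data:
pred = np.zeros((64, 64), dtype=int); pred[10:40, 10:40] = 1
gt   = np.zeros((64, 64), dtype=int); gt[20:50, 20:50] = 1
print(iou_and_dice(pred, gt))
```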


Subjects
Deep Learning; Endoscopic Mucosal Resection; Humans; Artificial Intelligence; Endoscopy, Gastrointestinal
11.
Sci Rep; 12(1): 11115, 2022 07 01.
Article in English | MEDLINE | ID: mdl-35778456

ABSTRACT

The endoscopic features associated with eosinophilic esophagitis (EoE) may be missed during routine endoscopy. We aimed to develop and evaluate an Artificial Intelligence (AI) algorithm for detecting and quantifying the endoscopic features of EoE in white light images, supplemented by the EoE Endoscopic Reference Score (EREFS). An AI algorithm (AI-EoE) was constructed and trained to differentiate between EoE and normal esophagus using endoscopic white light images extracted from the database of the University Hospital Augsburg. In addition to binary classification, a second algorithm was trained with specific auxiliary branches for each EREFS feature (AI-EoE-EREFS). The AI algorithms were evaluated on an external data set from the University of North Carolina, Chapel Hill (UNC), and compared with the performance of human endoscopists with varying levels of experience. The overall sensitivity, specificity, and accuracy of AI-EoE were 0.93 for all measures, while the AUC was 0.986. With additional auxiliary branches for the EREFS categories, the AI algorithm (AI-EoE-EREFS) performance improved to 0.96, 0.94, 0.95, and 0.992 for sensitivity, specificity, accuracy, and AUC, respectively. AI-EoE and AI-EoE-EREFS performed significantly better than endoscopy beginners and senior fellows on the same set of images. An AI algorithm can be trained to detect and quantify endoscopic features of EoE with excellent performance scores. The addition of the EREFS criteria improved the performance of the AI algorithm, which performed significantly better than endoscopists with a lower or medium experience level.
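
Auxiliary branches of the AI-EoE-EREFS kind are typically implemented as extra classification heads on a shared encoder. The sketch below illustrates that pattern for the five EREFS features (edema, rings, exudates, furrows, stricture); layer sizes are placeholders, not the published model.

```python
import torch
import torch.nn as nn

class MultiTaskEoENet(nn.Module):
    """Shared encoder with a main EoE head plus one auxiliary head per EREFS
    feature (edema, rings, exudates, furrows, stricture). Illustrative only."""

    def __init__(self, n_erefs_features: int = 5):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.eoe_head = nn.Linear(32, 2)  # EoE vs. normal esophagus
        self.aux_heads = nn.ModuleList(
            [nn.Linear(32, 2) for _ in range(n_erefs_features)]  # present/absent
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.eoe_head(z), [head(z) for head in self.aux_heads]

model = MultiTaskEoENet()
main_logits, aux_logits = model(torch.randn(4, 3, 224, 224))
print(main_logits.shape, len(aux_logits))  # torch.Size([4, 2]) 5
```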


Subjects
Eosinophilic Esophagitis; Artificial Intelligence; Eosinophilic Esophagitis/diagnosis; Esophagoscopy/methods; Humans; Severity of Illness Index
15.
Comput Biol Med; 135: 104578, 2021 08.
Article in English | MEDLINE | ID: mdl-34171639

ABSTRACT

Even though artificial intelligence and machine learning have demonstrated remarkable performance in medical image computing, their accountability and transparency must also be considered in such evaluations. The reliability of machine learning predictions must be explained and interpreted, especially if diagnosis support is addressed. For this task, the black-box nature of deep learning techniques must be opened up to transfer their promising results into clinical practice. Hence, we investigate the use of explainable artificial intelligence techniques to quantitatively highlight discriminative regions during the classification of early-cancerous tissues in patients diagnosed with Barrett's esophagus. Four convolutional neural network models (AlexNet, SqueezeNet, ResNet50, and VGG16) were analyzed using five different interpretation techniques (saliency, guided backpropagation, integrated gradients, input × gradients, and DeepLIFT) to compare their agreement with experts' previous annotations of cancerous tissue. We show that saliency attributes match best with the manual experts' delineations. Moreover, there is a moderate to high correlation between a model's sensitivity and the agreement of human and computational segmentation: the higher the model's sensitivity, the stronger this agreement. We observed a relevant relation between computational learning and experts' insights, demonstrating how human knowledge may influence correct computational learning.
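
Of the five interpretation techniques compared, saliency is the simplest: the gradient of the class score with respect to the input pixels. A minimal PyTorch sketch follows, with a placeholder model and input rather than the study's trained networks.

```python
import torch
import torchvision

# Placeholder model; the study used trained AlexNet/SqueezeNet/ResNet50/VGG16.
model = torchvision.models.resnet50(weights=None).eval()

image = torch.randn(1, 3, 224, 224, requires_grad=True)
score = model(image)[0, 1]   # score of the "cancerous" class (hypothetical index)
score.backward()

# Saliency: per-pixel maximum absolute gradient across color channels.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)  # (224, 224)
print(saliency.shape)
```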


Subjects
Barrett Esophagus; Artificial Intelligence; Barrett Esophagus/diagnostic imaging; Humans; Machine Learning; Neural Networks, Computer; Reproducibility of Results
16.
Article in English | MEDLINE | ID: mdl-34172253

ABSTRACT

The evaluation and assessment of Barrett's esophagus is challenging for both expert and nonexpert endoscopists. However, the early diagnosis of cancer in Barrett's esophagus is crucial for its prognosis and could reduce costs. Pre-clinical and clinical studies on the application of artificial intelligence (AI) in Barrett's esophagus have shown promising results. In this review, we focus on the current challenges and future perspectives of implementing AI systems in the management of patients with Barrett's esophagus.


Subjects
Artificial Intelligence/standards; Barrett Esophagus/diagnosis; Deep Learning/standards; Endoscopy/methods; Humans; Prognosis
17.
Endoscopy; 53(9): 878-883, 2021 09.
Article in English | MEDLINE | ID: mdl-33197942

ABSTRACT

BACKGROUND: The accurate differentiation between T1a and T1b Barrett's-related cancer has both therapeutic and prognostic implications but is challenging even for experienced physicians. We trained an artificial intelligence (AI) system based on deep artificial neural networks (deep learning) to differentiate between T1a and T1b Barrett's cancer on white-light images. METHODS: Endoscopic images from three tertiary care centers in Germany were collected retrospectively. A deep learning system was trained and tested using the principles of cross validation. A total of 230 white-light endoscopic images (108 T1a and 122 T1b) were evaluated using the AI system. For comparison, the images were also classified by experts specialized in the endoscopic diagnosis and treatment of Barrett's cancer. RESULTS: The sensitivity, specificity, F1 score, and accuracy of the AI system in differentiating between T1a and T1b cancer lesions were 0.77, 0.64, 0.74, and 0.71, respectively. There was no statistically significant difference between the performance of the AI system and that of the experts, who showed a sensitivity, specificity, F1 score, and accuracy of 0.63, 0.78, 0.67, and 0.70, respectively. CONCLUSION: This pilot study demonstrates the first multicenter application of an AI-based system for the prediction of submucosal invasion in endoscopic images of Barrett's cancer. The AI system scored on par with international experts in the field, but more work is necessary to improve the system and apply it to video sequences and real-life settings. Nevertheless, the correct prediction of submucosal invasion in Barrett's cancer remains challenging for both experts and AI.
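
The cross-validation principle mentioned in the methods can be sketched as follows; the features and the 5-fold stratified split are illustrative stand-ins, since the abstract does not specify the exact protocol.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Random stand-ins (1 = T1b, 0 = T1a), not the 230 study images.
X = np.random.rand(230, 512)           # e.g., image embeddings
y = np.array([0] * 108 + [1] * 122)    # 108 T1a and 122 T1b, as reported

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, test_idx) in enumerate(skf.split(X, y)):
    # Train on X[train_idx], evaluate on X[test_idx] in each fold.
    print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test images")
```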


Subjects
Adenocarcinoma; Barrett Esophagus; Esophageal Neoplasms; Adenocarcinoma/diagnostic imaging; Artificial Intelligence; Barrett Esophagus/diagnostic imaging; Esophageal Neoplasms/diagnostic imaging; Esophagoscopy; Humans; Pilot Projects; Retrospective Studies
18.
Arch Gynecol Obstet; 303(3): 721-728, 2021 03.
Article in English | MEDLINE | ID: mdl-33184690

ABSTRACT

PURPOSE: In this trial, we used a previously developed prototype software to assess aesthetic results after reconstructive surgery for congenital breast asymmetry using automated anthropometry. To assess the agreement between manual and automated digital measurements, we evaluated the software by comparing the manual and automatic measurements of 46 breasts. METHODS: Twenty-three patients who underwent reconstructive surgery for congenital breast asymmetry at our institution were examined and underwent 3D surface imaging. Per patient, 14 manual and 14 computer-based anthropometric measurements were obtained according to a standardized protocol. Manual and automatic measurements, as well as the previously proposed Symmetry Index (SI), were compared. RESULTS: The Wilcoxon signed-rank test revealed no significant differences in six of the seven measurements between the automatic and manual assessments. The SI showed robust agreement between the automatic and manual methods. CONCLUSION: The present trial validates our method for digital anthropometry. Despite the discrepancy in one measurement, all remaining measurements, including the SI, showed high agreement between the manual and automatic methods. These data bring us one step closer to the long-term goal of establishing robust instruments to evaluate the results of breast surgery. LEVEL OF EVIDENCE: IV.


Subjects
Breast/anatomy & histology; Imaging, Three-Dimensional/methods; Magnetic Resonance Imaging/methods; Adult; Anthropometry/methods; Esthetics; Female; Humans; Mastectomy; Reproducibility of Results; Software
19.
Gut; 2020 Oct 30.
Article in English | MEDLINE | ID: mdl-33127833

ABSTRACT

OBJECTIVE: Artificial intelligence (AI) may reduce the number of underdiagnosed or overlooked upper GI (UGI) neoplastic and preneoplastic conditions, which are missed due to their subtle appearance and low disease prevalence. Only disease-specific AI performances have been reported, generating uncertainty about AI's clinical value. DESIGN: We searched PubMed, Embase and Scopus until July 2020 for studies on the diagnostic performance of AI in the detection and characterisation of UGI lesions. Primary outcomes were pooled diagnostic accuracy, sensitivity and specificity of AI. Secondary outcomes were pooled positive (PPV) and negative (NPV) predictive values. We calculated pooled proportion rates (%), constructed summary receiver operating characteristic curves with respective areas under the curve (AUCs) and performed metaregression and sensitivity analysis. RESULTS: Overall, 19 studies on the detection of oesophageal squamous cell neoplasia (ESCN), Barrett's esophagus-related neoplasia (BERN) or gastric adenocarcinoma (GCA) were included, with 218, 445 and 453 patients and 7976, 2340 and 13 562 images, respectively. AI sensitivity/specificity/PPV/NPV/positive likelihood ratio/negative likelihood ratio for UGI neoplasia detection were 90% (CI 85% to 94%)/89% (CI 85% to 92%)/87% (CI 83% to 91%)/91% (CI 87% to 94%)/8.2 (CI 5.7 to 11.7)/0.111 (CI 0.071 to 0.175), respectively, with an overall AUC of 0.95 (CI 0.93 to 0.97). No difference in AI performance across ESCN, BERN and GCA was found, the AUCs being 0.94 (CI 0.52 to 0.99), 0.96 (CI 0.95 to 0.98) and 0.93 (CI 0.83 to 0.99), respectively. Overall, study quality was low, with a high risk of selection bias. No significant publication bias was found. CONCLUSION: We found a high overall AI accuracy for the diagnosis of any neoplastic lesion of the UGI tract that was independent of the underlying condition. This may be expected to substantially reduce the miss rate of precancerous lesions and early cancer when implemented in clinical practice.
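
Pooling proportions across studies is commonly done on the logit scale with a DerSimonian-Laird random-effects model; the abstract does not state which estimator the authors used, so the sketch below is illustrative, with made-up per-study values.

```python
import numpy as np

def pool_logit_dl(p: np.ndarray, n: np.ndarray):
    """Random-effects (DerSimonian-Laird) pooling of proportions on the logit scale."""
    y = np.log(p / (1 - p))                  # logit-transformed proportions
    v = 1 / (n * p) + 1 / (n * (1 - p))      # approximate within-study variances
    w = 1 / v
    y_fixed = (w * y).sum() / w.sum()
    q = (w * (y - y_fixed) ** 2).sum()       # Cochran's Q
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (q - (len(y) - 1)) / c)  # between-study variance
    w_re = 1 / (v + tau2)
    y_re = (w_re * y).sum() / w_re.sum()
    se = np.sqrt(1 / w_re.sum())
    to_p = lambda x: 1 / (1 + np.exp(-x))    # back-transform to a proportion
    return to_p(y_re), to_p(y_re - 1.96 * se), to_p(y_re + 1.96 * se)

# Hypothetical per-study sensitivities and positive-case counts, not the review's data:
sens = np.array([0.92, 0.88, 0.95, 0.86])
n_pos = np.array([120, 80, 150, 60])
print(pool_logit_dl(sens, n_pos))  # pooled estimate with 95% CI
```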

20.
Comput Biol Med; 126: 104029, 2020 11.
Article in English | MEDLINE | ID: mdl-33059236

ABSTRACT

Barrett's esophagus has seen a swift rise in the number of cases in recent years. Although traditional diagnostic methods play a vital role in early-stage treatment, they are generally time- and resource-consuming. In this context, computer-aided approaches for automatic diagnosis have emerged in the literature, since early detection is intrinsically related to remission probabilities. However, these approaches still suffer from the scarcity of data available for machine learning purposes, which implies reduced recognition rates. This work introduces Generative Adversarial Networks to generate high-quality endoscopic images, thereby allowing Barrett's esophagus and adenocarcinoma to be identified more precisely. Further, Convolutional Neural Networks are used for feature extraction and classification. The proposed approach is validated over two datasets of endoscopic images, with the experiments conducted over full and patch-split images. The application of Deep Convolutional Generative Adversarial Networks for the data augmentation step, and LeNet-5 and AlexNet for the classification step, allowed us to validate the proposed methodology over an extensive set of datasets (based on original and augmented sets), reaching 90% accuracy for the patch-based approach and 85% for the image-based approach. Both results are based on augmented datasets and are statistically different from those obtained on the original datasets of the same kind. Moreover, the impact of data augmentation was evaluated in the context of image description and classification, and the results obtained using synthetic images outperformed those over the original datasets, as well as other recent approaches from the literature. These results suggest promising insights into the importance of proper data for accurate classification in computer-assisted Barrett's esophagus and adenocarcinoma detection.
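
The data augmentation step rests on a DCGAN-style generator that upsamples a latent vector into a synthetic image through transposed convolutions. A compact sketch follows; depth, channel counts, and the 32×32 output size are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

# Compact DCGAN-style generator: latent vector -> synthetic RGB image.
generator = nn.Sequential(
    nn.ConvTranspose2d(100, 256, 4, stride=1, padding=0), nn.BatchNorm2d(256), nn.ReLU(),
    nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
    nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
    nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),  # RGB in [-1, 1]
)

z = torch.randn(8, 100, 1, 1)   # batch of latent vectors
fake_images = generator(z)      # synthetic endoscopy-like images
print(fake_images.shape)        # torch.Size([8, 3, 32, 32])
```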


Subjects
Adenocarcinoma; Barrett Esophagus; Esophageal Neoplasms; Adenocarcinoma/diagnostic imaging; Barrett Esophagus/diagnostic imaging; Endoscopy; Esophageal Neoplasms/diagnostic imaging; Humans; Machine Learning; Neural Networks, Computer