Results 1 - 4 of 4
1.
J Vis; 24(4): 6, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38587421

ABSTRACT

In many different domains, experts can make complex decisions after glancing very briefly at an image. However, the perceptual mechanisms underlying expert performance are still largely unknown. Recently, several machine learning algorithms have been shown to outperform human experts in specific tasks, but these algorithms often behave as black boxes, and their information processing pipelines remain opaque. This lack of transparency and interpretability is highly problematic in applications involving human lives, such as health care. One way to "open the black box" is to compute an artificial attention map from the model, which highlights the pixels of the input image that contributed most to the model's decision. In this work, we directly compare human visual attention to machine visual attention on the same visual task. We designed a medical diagnosis task involving the detection of lesions in small bowel endoscopic images. We collected eye movements from novices and expert gastroenterologists while they classified medical images according to their relevance for Crohn's disease diagnosis. We trained three state-of-the-art deep learning models on our carefully labeled dataset; both humans and machines performed the same task. We extracted artificial attention with six different post hoc methods. We show that the model attention maps are significantly closer to the attention maps of human experts than to those of novices, especially for pathological images. As the model is trained and its performance approaches that of the human experts, the similarity between model and human attention increases. By understanding the similarities between the visual decision-making processes of human experts and deep neural networks, we hope to inform both the training of new doctors and the architecture of new algorithms.


Subject(s)
Algorithms; Neural Networks, Computer; Humans; Cognition; Eye Movements; Machine Learning
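
As a concrete illustration of the attention-extraction step described above, here is a minimal Python/PyTorch sketch of one common post hoc method, Grad-CAM, compared against a human fixation map via Pearson correlation. The abstract does not name the six methods it used, so Grad-CAM, the model, the layer choice, and the placeholder inputs are all assumptions for illustration only.

# Hypothetical sketch (PyTorch): extract a Grad-CAM attention map from a
# CNN classifier and compare it with a human fixation map. Grad-CAM is a
# stand-in here; the study used six unnamed post hoc methods.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None)  # stand-in for a model trained on the endoscopy dataset
model.eval()

activations, gradients = {}, {}

def fwd_hook(module, inputs, output):
    activations["feat"] = output.detach()

def bwd_hook(module, grad_input, grad_output):
    gradients["feat"] = grad_output[0].detach()

model.layer4.register_forward_hook(fwd_hook)       # last convolutional block
model.layer4.register_full_backward_hook(bwd_hook)

image = torch.randn(1, 3, 224, 224)                # placeholder endoscopy frame
logits = model(image)
logits[0, logits.argmax()].backward()              # gradient of the top-scoring class

# Grad-CAM: weight each feature map by its spatially averaged gradient,
# apply ReLU, and upsample to the input resolution.
weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

# Compare with a human attention map (e.g., a Gaussian-blurred fixation
# density map from eye tracking) using Pearson correlation, a standard
# saliency similarity metric.
human_map = torch.rand(1, 1, 224, 224)             # placeholder fixation density map
x, y = cam.flatten(), human_map.flatten()
r = ((x - x.mean()) * (y - y.mean())).sum() / (x.std() * y.std() * (x.numel() - 1))
print(f"model-human attention correlation: {r.item():.3f}")
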
2.
Int Orthop; 46(5): 937-944, 2022 May.
Article in English | MEDLINE | ID: mdl-35171335

ABSTRACT

BACKGROUND: Artificial intelligence (AI)/machine learning (ML) applications have proven effective at improving diagnosis, stratifying risk, and predicting outcomes in many medical specialties, including orthopaedics. CHALLENGES AND DISCUSSION: In hip and knee reconstruction surgery, AI/ML has not yet reached clinical practice. In this review, we present sound AI/ML applications in the field of hip and knee degenerative disease and reconstruction. From the diagnosis of osteoarthritis (OA) and prediction of its progression, through clinical decision-making and identification of hip and knee implants, to prediction of clinical outcomes and complications following a reconstruction procedure of these joints, we report how AI/ML systems could facilitate data-driven personalized care for our patients.


Subject(s)
Artificial Intelligence; Machine Learning; Forecasting; Humans; Knee Joint; Lower Extremity
3.
Data Brief; 42: 108258, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35599827

ABSTRACT

One of the most common treatments for infertile couples is in vitro fertilization (IVF). It consists of controlled ovarian hyperstimulation, followed by ovum pickup, fertilization, and embryo culture for 2-6 days under controlled environmental conditions, leading to intrauterine transfer or freezing of the embryos identified by embryologists as having good implantation potential. To allow continuous monitoring of embryo development, time-lapse imaging incubators (TLI) were first released on the IVF market around 2010. This time-lapse technology provides a dynamic overview of embryonic in vitro development by photographing each embryo at regular intervals throughout its development. TLI appears to be the most promising solution for improving embryo quality assessment methods and, subsequently, the clinical efficiency of IVF. In particular, the unprecedentedly high volume of high-quality images produced by TLI systems has already been leveraged using modern artificial intelligence (AI) methods, such as deep learning (DL). An important limitation to the development of AI-based solutions for IVF is the absence of a public reference dataset on which to train and evaluate DL models. In this work, we describe a fully annotated dataset of 704 TLI videos of developing embryos, with all seven focal planes available, for a total of 2.4 million images. Of note, we propose highly detailed annotations with 16 different development phases, including early cell division phases, but also late cell divisions, phases after morulation, and very early phases, which have not been annotated in previous datasets. This is the first public dataset that will allow the community to evaluate morphokinetic models, and a first step toward deep-learning-powered IVF. We postulate that this dataset will help improve the overall performance of DL approaches on time-lapse videos of embryo development, ultimately benefiting infertile patients through improved clinical success rates.
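
To illustrate how such a dataset might be consumed, here is a minimal Python/PyTorch sketch of a per-frame classifier over the 16 annotated development phases. The directory layout, model, and hyperparameters are hypothetical assumptions, not the authors' pipeline.

# Hypothetical sketch (PyTorch): a per-frame morphokinetic phase classifier.
# Assumes frames have been extracted from the TLI videos into
# frames/<phase_name>/<frame>.png, one folder per annotated phase.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import models, transforms
from torchvision.datasets import ImageFolder

NUM_PHASES = 16  # the dataset annotates 16 development phases

transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # TLI frames are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

dataset = ImageFolder("frames", transform=transform)  # hypothetical layout
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, NUM_PHASES)  # 16-way phase head
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one epoch over the extracted frames
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()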

4.
Endosc Int Open; 9(7): E1136-E1144, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34222640

ABSTRACT

BACKGROUND AND STUDY AIMS: Computer-aided diagnostic tools using deep neural networks are efficient for the detection of lesions in endoscopy but require a huge number of images. The impact of annotation quality has not yet been tested. Here we describe a multi-expert annotated dataset of images extracted from capsule endoscopy in Crohn's disease patients, and the impact of annotation quality on the accuracy of a recurrent attention neural network. METHODS: Capsule images were annotated first by a reader and then reviewed by three experts in inflammatory bowel disease. Concordance between experts was evaluated with Fleiss' kappa, and all discordant images were read again by all the endoscopists to obtain a consensus annotation. A recurrent attention neural network developed for the study was tested before and after the consensus annotation. Available neural networks (ResNet and VGGNet) were also tested under the same conditions. RESULTS: The final dataset included 3498 images: 2124 non-pathological (60.7%), 1360 pathological (38.9%), and 14 inconclusive (0.4%). Agreement among the experts was good for distinguishing pathological from non-pathological images, with a kappa of 0.79 (P < 0.0001). The accuracy of our classifier and of the available neural networks increased after the consensus annotation, reaching a precision of 93.7%, sensitivity of 93%, and specificity of 95%. CONCLUSIONS: The accuracy of the neural network increased with improved annotations, suggesting that the number of images needed to develop these systems could be reduced by using a well-designed dataset.
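
The concordance analysis is easy to reproduce in outline. Below is a minimal Python sketch using statsmodels, with illustrative labels rather than the study's data.

# Sketch of the concordance analysis: Fleiss' kappa over per-image labels
# from three expert readers. The labels below are illustrative only.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# rows = images, columns = raters; 0 = non-pathological, 1 = pathological
labels = np.array([
    [0, 0, 0],
    [1, 1, 1],
    [1, 0, 1],
    [0, 0, 1],
    [1, 1, 1],
])

# aggregate_raters turns rater labels into per-image category counts,
# the table format fleiss_kappa expects.
counts, _ = aggregate_raters(labels)
print(f"Fleiss' kappa: {fleiss_kappa(counts):.3f}")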
