Results 1 - 20 of 31
1.
Endoscopy; 53(9): 932-936, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33137834

ABSTRACT

BACKGROUND: Cleanliness scores in small-bowel capsule endoscopy (SBCE) have poor reproducibility. The aim of this study was to evaluate a neural network-based algorithm for automated assessment of small-bowel cleanliness during capsule endoscopy. METHODS: 600 normal third-generation SBCE still frames were categorized as "adequate" or "inadequate" in terms of cleanliness by three expert readers, according to a 10-point scale, and served as a training database. Then, 156 third-generation SBCE recordings were categorized in a consensual manner as "adequate" or "inadequate" in terms of cleanliness; this testing database was split into two independent 78-video subsets for the tuning and evaluation of the algorithm, respectively. RESULTS: Using a threshold of 79 % "adequate" still frames per video to achieve the best performance, the algorithm yielded a sensitivity of 90.3 %, specificity of 83.3 %, and accuracy of 89.7 %. The reproducibility was perfect. The mean calculation time per video was 3 (standard deviation 1) minutes. CONCLUSION: This neural network-based algorithm allowing automatic assessment of small-bowel cleanliness during capsule endoscopy was highly sensitive and paves the way for automated, standardized SBCE reports.
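As an illustration of the video-level decision rule described in this abstract, here is a minimal Python sketch. Only the 79 % per-video threshold of "adequate" frames comes from the abstract; the per-frame classifier, the 0.5 frame-level cut-off, and all names are assumptions.

```python
import numpy as np

ADEQUATE_VIDEO_THRESHOLD = 0.79  # reported operating point: 79% "adequate" frames per video

def classify_video(frame_probs: np.ndarray, frame_cutoff: float = 0.5) -> str:
    """Aggregate per-frame 'adequate' probabilities into a per-video cleanliness label."""
    adequate_ratio = float(np.mean(frame_probs >= frame_cutoff))
    return "adequate" if adequate_ratio >= ADEQUATE_VIDEO_THRESHOLD else "inadequate"

# Example with simulated per-frame outputs from a hypothetical CNN
rng = np.random.default_rng(0)
probs = rng.uniform(0.4, 1.0, size=50_000)  # one probability per still frame
print(classify_video(probs))
```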


Subject(s)
Capsule Endoscopy; Algorithms; Humans; Intestine, Small/diagnostic imaging; Neural Networks, Computer; Reproducibility of Results
2.
Endoscopy; 53(9): 893-901, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33167043

ABSTRACT

BACKGROUND: Artificial intelligence (AI) research in colonoscopy is progressing rapidly, but widespread clinical implementation is not yet a reality. We aimed to identify the top implementation research priorities. METHODS: An established modified Delphi approach for research priority setting was used. Fifteen international experts, including endoscopists and translational computer scientists/engineers, from nine countries participated in an online survey over 9 months. Questions related to AI implementation in colonoscopy were generated as a long list in the first round, and then scored in two subsequent rounds to identify the top 10 research questions. RESULTS: The top 10 ranked questions were categorized into five themes. Theme 1: clinical trial design/end points (4 questions), related to optimum trial designs for polyp detection and characterization, determining the optimal end points for evaluation of AI, and demonstrating impact on interval cancer rates. Theme 2: technological developments (3 questions), including improving detection of more challenging and advanced lesions, reduction of false-positive rates, and minimizing latency. Theme 3: clinical adoption/integration (1 question), concerning the effective combination of detection and characterization into one workflow. Theme 4: data access/annotation (1 question), concerning more efficient or automated data annotation methods to reduce the burden on human experts. Theme 5: regulatory approval (1 question), related to making regulatory approval processes more efficient. CONCLUSIONS: This is the first reported international research priority setting exercise for AI in colonoscopy. The study findings should be used as a framework to guide future research with key stakeholders to accelerate the clinical implementation of AI in endoscopy.


Subject(s)
Artificial Intelligence; Colonoscopy; Delphi Technique; Humans
3.
J Gastroenterol Hepatol; 36(1): 12-19, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33448511

ABSTRACT

Neural network-based solutions are under development to relieve physicians of the tedious task of reviewing small-bowel capsule endoscopy recordings. Computer-assisted detection is a critical step, aiming to reduce reading times while maintaining accuracy. Weakly supervised solutions have shown promising results; however, video-level evaluations are scarce, and no prospective studies have been conducted yet. Automated characterization (in terms of diagnosis and pertinence) by supervised machine learning solutions is the next step. It relies on large, thoroughly labeled databases, for which preliminary "ground truth" definitions by experts are of tremendous importance. Other developments are under way to assist physicians in localizing anatomical landmarks and findings in the small bowel, in measuring lesions, and in rating bowel cleanliness. It remains an open question whether artificial intelligence will enter the market as proprietary, built-in or plug-in software, or as a universal cloud-based service, and how it will be accepted by physicians and patients.


Subject(s)
Artificial Intelligence/trends; Capsule Endoscopy/methods; Capsule Endoscopy/trends; Intestinal Diseases/diagnosis; Intestinal Diseases/pathology; Intestine, Small/pathology; Forecasting; Humans
4.
Gastrointest Endosc; 89(1): 189-194, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30017868

ABSTRACT

BACKGROUND AND AIMS: GI angiectasia (GIA) is the most common small-bowel (SB) vascular lesion, with an inherent risk of bleeding. SB capsule endoscopy (SB-CE) is the currently accepted diagnostic procedure. The aim of this study was to develop a computer-assisted diagnosis tool for the detection of GIA. METHODS: Deidentified SB-CE still frames featuring annotated typical GIA and normal control still frames were selected from a database. A semantic segmentation approach based on a convolutional neural network (CNN) was used for deep-feature extraction and classification. Two datasets of still frames were created and used for machine learning and for algorithm testing. RESULTS: The GIA detection algorithm yielded a sensitivity of 100%, a specificity of 96%, a positive predictive value of 96%, and a negative predictive value of 100%. Reproducibility was optimal. The reading process for an entire SB-CE video would take 39 minutes. CONCLUSIONS: The developed CNN-based algorithm had high diagnostic performance, allowing detection of GIA in SB-CE still frames. This study paves the way for future automated CNN-based SB-CE reading software.
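For readers unfamiliar with the reported metrics, the sensitivity, specificity, and predictive values quoted above can be derived from binary frame-level predictions as in the following sketch (labels and predictions are fabricated for illustration; this is not the study's evaluation code):

```python
import numpy as np

def diagnostic_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Compute sensitivity, specificity, PPV and NPV from binary labels (1 = angiectasia)."""
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    tn = int(np.sum((y_pred == 0) & (y_true == 0)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Illustrative example on fabricated labels
y_true = np.array([1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 1, 0, 1, 1, 0])
print(diagnostic_metrics(y_true, y_pred))
```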


Subject(s)
Algorithms; Angiodysplasia/diagnosis; Capsule Endoscopy/methods; Intestinal Diseases/diagnosis; Intestine, Small; Neural Networks, Computer; Aged; Aged, 80 and over; Diagnosis, Computer-Assisted; Female; Humans; Male; Middle Aged; Predictive Value of Tests; Sensitivity and Specificity
5.
J Crohns Colitis; 18(1): 75-81, 2024 Jan 27.
Article in English | MEDLINE | ID: mdl-37527554

ABSTRACT

BACKGROUND AND AIM: Pan-enteric capsule endoscopy [PCE] is a highly sensitive but time-consuming tool for detecting pathology. Artificial intelligence [AI] algorithms might offer a possibility to assist in the review and reduce the analysis time of PCE. This study examines the agreement between PCE assessments aided by AI technology and standard evaluations, in patients suspected of Crohn's disease [CD]. METHODS: PCEs from a prospective, blinded, multicentre study including patients suspected of CD were processed by the deep learning solution AXARO® [Augmented Endoscopy, Paris, France]. Based on the image output, two observers classified the patient's PCE as normal or suggestive of CD, ulcerative colitis, or cancer. The primary outcome was per-patient sensitivities and specificities for detecting CD and inflammatory bowel disease [IBD]. Complete reading of PCE served as the reference standard. RESULTS: A total of 131 patients' PCEs were analysed, with a median recording time of 303 min. The AXARO® framework reduced the output to a median of 470 images [2.1%] per patient, and the pooled median review time was 3.2 min per patient. For detecting CD, the two observers had sensitivities of 96% and 92% and specificities of 93% and 90%, respectively. For detecting IBD, both observers had a sensitivity of 97%, with specificities of 91% and 90%, respectively. The negative predictive value was 95% for CD and 97% for IBD. CONCLUSIONS: Using the AXARO® framework reduced the initial review time substantially while maintaining high diagnostic accuracy, suggesting its use as a rapid tool to rule out IBD in PCEs of patients suspected of Crohn's disease.
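The abstract reports that the framework reduced a full recording to roughly 2.1% of its frames for human review, without describing the selection mechanism. A minimal sketch of one plausible implementation, assuming a per-frame abnormality score is available, is shown below; the function name, the scoring, and the top-fraction rule are all assumptions.

```python
import numpy as np

def select_review_frames(scores: np.ndarray, target_fraction: float = 0.021) -> np.ndarray:
    """Keep the top-scoring fraction of frames for human review (indices, in temporal order)."""
    n_keep = max(1, int(round(len(scores) * target_fraction)))
    top = np.argsort(scores)[-n_keep:]   # highest abnormality scores
    return np.sort(top)                  # restore temporal order for the reader

rng = np.random.default_rng(1)
scores = rng.random(22_000)              # one fabricated score per frame of a recording
review_set = select_review_frames(scores)
print(len(review_set), "frames kept out of", len(scores))
```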


Subject(s)
Capsule Endoscopy; Crohn Disease; Inflammatory Bowel Diseases; Humans; Crohn Disease/diagnostic imaging; Crohn Disease/pathology; Prospective Studies; Artificial Intelligence; Inflammatory Bowel Diseases/diagnosis
6.
Sci Data; 11(1): 4, 2024 Jan 02.
Article in English | MEDLINE | ID: mdl-38168517

ABSTRACT

Several Diptera species are known to transmit pathogens of medical and veterinary interest. However, identifying these species using conventional methods can be time-consuming, labor-intensive, or expensive. A computer vision-based system that uses wing interferential patterns (WIPs) to identify these insects could solve this problem. This study introduces a dataset for training and evaluating a recognition system for dipteran insects of medical and veterinary importance using WIPs. The dataset includes pictures of Culicidae, Calliphoridae, Muscidae, Tabanidae, Ceratopogonidae, and Psychodidae, and is complemented by previously published datasets of Glossinidae and some Culicidae members. The new dataset contains 2,399 pictures of 18 genera, with each genus documented by a variable number of species and annotated as a class. The dataset covers species variation, with some genera having up to 300 samples.
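Since each genus is annotated as a class, such a dataset can be consumed directly by standard image-classification tooling. A minimal sketch follows, assuming (this layout is not specified in the abstract) a folder-per-genus directory named wip_dataset/:

```python
import torch
from torchvision import datasets, transforms

# Assumed layout: wip_dataset/<genus_name>/<image>.jpg — one folder per annotated genus.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

dataset = datasets.ImageFolder("wip_dataset", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

print(f"{len(dataset)} images across {len(dataset.classes)} genera: {dataset.classes}")
```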


Subject(s)
Ceratopogonidae; Deep Learning; Diptera; Muscidae; Animals; Insects
7.
Article in English | MEDLINE | ID: mdl-37018555

ABSTRACT

Anomaly detection is important in many real-life applications. Recently, self-supervised learning has greatly helped deep anomaly detection by recognizing several geometric transformations. However, these methods lack finer features, usually depend heavily on the anomaly type, and do not perform well on fine-grained problems. To address these issues, we first introduce in this work three novel and efficient discriminative and generative tasks with complementary strengths: (i) a piece-wise jigsaw puzzle task focuses on structure cues; (ii) a tint rotation recognition task is applied within each piece, taking into account the colorimetry information; and (iii) a partial re-colorization task considers the image texture. In order to make the re-colorization task more object-oriented than background-oriented, we propose to include the contextual color information of the image border via an attention mechanism. We then present a new out-of-distribution detection function and highlight its better stability compared to existing methods. Along with it, we also experiment with different score fusion functions. Finally, we evaluate our method on an extensive protocol composed of various anomaly types, from object anomalies and style anomalies with fine-grained classification to local anomalies with face anti-spoofing datasets. Our model significantly outperforms the state of the art, with up to a 36% relative error improvement on object anomalies and 40% on face anti-spoofing problems.
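The abstract mentions experimenting with different score fusion functions without detailing them. As one illustrative choice (not the paper's method), the sketch below fuses per-task anomaly scores by z-score normalization followed by averaging; the task names and data are placeholders.

```python
import numpy as np

def fuse_scores(task_scores: list[np.ndarray]) -> np.ndarray:
    """Fuse per-task anomaly scores (higher = more anomalous) after z-score normalization.

    Simple averaging fusion; only one of many possible fusion functions.
    """
    normalized = [(s - s.mean()) / (s.std() + 1e-8) for s in task_scores]
    return np.mean(normalized, axis=0)

# Fabricated scores from three pretext tasks (jigsaw, tint rotation, re-colorization)
rng = np.random.default_rng(2)
jigsaw, tint, recolor = (rng.random(100) for _ in range(3))
fused = fuse_scores([jigsaw, tint, recolor])
print(fused[:5])
```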

8.
Front Neurosci; 17: 1220172, 2023.
Article in English | MEDLINE | ID: mdl-37650105

ABSTRACT

Introduction: Datasets containing only a few images are common in the biomedical field. This poses a global challenge for the development of robust deep-learning analysis tools, which require a large number of images. Generative Adversarial Networks (GANs) are an increasingly used solution to expand small datasets, specifically in the biomedical domain. However, the validation of synthetic images by metrics is still controversial, and psychovisual evaluations are time-consuming. Methods: We augment a small brain organoid bright-field database of 40 images using several GAN optimizations. We compare these synthetic images to the original dataset using similarity metrics, and we perform a psychovisual evaluation of the 240 generated images. Eight biological experts labeled the full dataset (280 images) as synthetic or natural using custom-built software. We calculate the error rate per loss optimization as well as the hesitation time. We then compare these results to those provided by the similarity metrics. We test the psychovalidated images in the training step of a segmentation task. Results and discussion: Generated images are considered as natural as the original dataset, with no increase in experts' hesitation time. Experts are particularly misled by perceptual and Wasserstein loss optimizations. According to the metrics, these optimizations render the most qualitative images and those most similar to the original dataset. We do not observe a strong correlation, but we do find links between some metrics and the psychovisual decisions, depending on the kind of generation. Particular blur-metric combinations could perhaps replace the psychovisual evaluation. Segmentation tasks that use the most psychovalidated images are the most accurate.
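A minimal sketch of the kind of analysis described (expert error rate per loss optimization, hesitation time, and their relation to a similarity metric) is shown below; all values, column names, and the choice of Spearman correlation are fabricated for illustration and do not reproduce the study's data.

```python
import pandas as pd
from scipy.stats import spearmanr

# Fabricated expert labelling results: one row per (synthetic image, expert) judgement.
df = pd.DataFrame({
    "loss": ["wasserstein", "wasserstein", "perceptual", "hinge", "hinge", "perceptual"],
    "judged_synthetic": [0, 1, 0, 1, 1, 0],  # 1 = expert correctly labelled the image as synthetic
    "hesitation_s": [2.1, 3.5, 4.0, 1.8, 2.2, 3.9],
    "similarity_metric": [0.91, 0.91, 0.95, 0.72, 0.72, 0.95],  # e.g. an SSIM-like score per loss
})

per_loss = df.groupby("loss").agg(
    error_rate=("judged_synthetic", lambda x: 1 - x.mean()),  # synthetic images mistaken for natural
    mean_hesitation=("hesitation_s", "mean"),
    similarity=("similarity_metric", "mean"),
)
print(per_loss)
print(spearmanr(per_loss["similarity"], per_loss["error_rate"]))
```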

9.
Biomedicines; 11(10), 2023 Sep 30.
Article in English | MEDLINE | ID: mdl-37893062

ABSTRACT

To characterize the growth of brain organoids (BOs), cultures that replicate some early physiological or pathological developments of the human brain, the organoid shape is usually extracted manually from images. Due to their novelty, only small datasets of these images are available, but segmenting the organoid shape automatically with deep learning (DL) tools requires a larger number of images. Light U-Net segmentation architectures, which reduce the training time while increasing the sensitivity under small input datasets, have recently emerged. We further reduce the U-Net architecture and compare the proposed architecture (MU-Net) with U-Net and UNet-Mini on bright-field images of BOs using several data augmentation strategies. In each case, we perform leave-one-out cross-validation on 40 original and 40 synthesized images with an optimized adversarial autoencoder (AAE) or on 40 transformed images. The best results are achieved with U-Net segmentation trained on optimized augmentation. However, our novel method, MU-Net, is more robust: it achieves nearly as accurate segmentation results regardless of the dataset used for training (various AAEs or a transformation augmentation). In this study, we confirm that small datasets of BOs can be segmented with a light U-Net method almost as accurately as with the original method.
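The leave-one-out protocol mentioned above can be sketched as follows; the training routine is abstracted into a placeholder returning a dummy Dice score, since the actual MU-Net training loop is not described in the abstract, and the file names are assumptions.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut

def train_and_score(train_idx: np.ndarray, test_idx: np.ndarray, images, masks) -> float:
    """Placeholder: train a segmentation model (e.g. MU-Net) on train_idx and return
    a Dice score on the single held-out image. The real training loop is not shown."""
    return np.random.default_rng(int(test_idx[0])).uniform(0.8, 0.95)

images = [f"bo_{i:02d}.png" for i in range(40)]        # 40 original bright-field images (hypothetical names)
masks = [f"bo_{i:02d}_mask.png" for i in range(40)]

scores = [
    train_and_score(train_idx, test_idx, images, masks)
    for train_idx, test_idx in LeaveOneOut().split(images)
]
print(f"mean Dice over {len(scores)} folds: {np.mean(scores):.3f}")
```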

10.
Sci Rep; 13(1): 21389, 2023 Dec 04.
Article in English | MEDLINE | ID: mdl-38049590

ABSTRACT

Sandflies (Diptera; Psychodidae) are medical and veterinary vectors that transmit diverse parasitic, viral, and bacterial pathogens. Their identification has always been challenging, particularly at the specific and sub-specific levels, because it relies on examining minute and mostly internal structures. Here, to circumvent such limitations, we have evaluated the accuracy and reliability of Wing Interferential Patterns (WIPs) generated on the surface of sandfly wings, in conjunction with deep learning (DL) procedures, to assign specimens at various taxonomic levels. Our results show that the method can accurately identify sandflies among other dipteran insects at the family, genus, subgenus, and species level with an accuracy higher than 77.0%, regardless of the taxonomic level challenged. This approach does not require inspection of internal organs, does not rely on identification keys, and can be implemented under field or near-field conditions, showing promise for proactive and passive entomological surveys of sandflies in an era of scarcity in medical entomologists.
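The abstract describes a deep learning classifier trained on WIP photomicrographs without specifying the architecture. A generic transfer-learning sketch is given below; the ResNet-50 backbone, the number of classes, and the hyperparameters are assumptions, not the authors' setup (torchvision ≥ 0.13 is assumed for the weights enum).

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 10  # illustrative number of taxa retained for training

# Generic transfer-learning setup; the paper's exact architecture is not specified here.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def training_step(batch_images: torch.Tensor, batch_labels: torch.Tensor) -> float:
    """One optimization step on a batch of WIP photomicrographs."""
    optimizer.zero_grad()
    loss = criterion(model(batch_images), batch_labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch just to show the expected tensor shapes
print(training_step(torch.randn(4, 3, 224, 224), torch.randint(0, NUM_CLASSES, (4,))))
```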


Subject(s)
Deep Learning; Phlebotomus; Psychodidae; Animals; Psychodidae/parasitology; Reproducibility of Results; Phlebotomus/parasitology; Entomology
11.
Sci Rep; 13(1): 13895, 2023 Aug 25.
Article in English | MEDLINE | ID: mdl-37626130

ABSTRACT

We present a new and innovative identification method, based on deep learning of the wing interferential patterns carried by mosquitoes of the Anopheles genus, to classify and assign 20 Anopheles species, including 13 malaria vectors. We provide additional evidence that this approach can identify Anopheles spp. with an accuracy of up to 100% for ten out of the 20 species, although accuracy was moderate (>65%) for three species and weak (50%) for seven. The ability of the approach to discriminate cryptic or sibling species was also assessed on three species belonging to the Gambiae complex. Strikingly, An. gambiae, An. arabiensis, and An. coluzzii, morphologically indistinguishable species belonging to the Gambiae complex, were distinguished with 100%, 100%, and 88% accuracy, respectively. Therefore, this tool would help entomological surveys of malaria vectors and vector control implementation. In the future, we anticipate that our method can be applied to other arthropod vector-borne diseases.


Subject(s)
Anopheles; Arthropods; Deep Learning; Animals; Humans; Mosquito Vectors; Siblings
12.
Sci Rep; 13(1): 17628, 2023 Oct 17.
Article in English | MEDLINE | ID: mdl-37848666

ABSTRACT

Hematophagous insects belonging to the Aedes genus are proven vectors of viral and filarial pathogens of medical interest. Aedes albopictus is an increasingly important vector because of its rapid worldwide expansion. In the context of global climate change and the emergence of zoonotic infectious diseases, identification tools with field applicability are required to strengthen entomological surveys of arthropods of medical interest. Large-scale and proactive entomological surveys of Aedes mosquitoes need skilled technicians and/or costly technical equipment, a task further complicated by the vast number of named species. In this study, we developed an automatic classification system for Aedes species by taking advantage of the species-specific marker displayed by Wing Interferential Patterns. A database holding 494 photomicrographs of 24 Aedes spp. was assembled, and species documented with more than ten pictures underwent a deep learning methodology to train a convolutional neural network and test its accuracy in classifying samples at the genus, subgenus, and species taxonomic levels. We recorded an accuracy of 95% at the genus level and >85% for two (Ochlerotatus and Stegomyia) of the three subgenera tested. Lastly, eight of the ten Aedes species that underwent the training process were accurately classified, with an overall accuracy of >70%. Altogether, these results demonstrate the potential of this methodology for Aedes species identification and will represent a tool for the future implementation of large-scale entomological surveys.


Subject(s)
Aedes; Ochlerotatus; Animals; Mosquito Vectors; Machine Learning; Species Specificity
13.
Diagnostics (Basel); 12(10), 2022 Sep 26.
Article in English | MEDLINE | ID: mdl-36292013

ABSTRACT

Capsule endoscopy (CE) is a valid alternative to conventional gastrointestinal (GI) endoscopy tools. In CE, annotation tools are crucial for developing large, annotated medical image databases for training deep neural networks (DNN). We provide an overview of the various annotation systems described and in use, focusing on the annotation of adenomatous polyp pathology in the GI tract. Some studies report promising results regarding time efficiency by implementing automated labelling features in annotation systems. However, data remain inadequate to give users a general overview, and more specifically to indicate which of the provided features are necessary for polyp annotation.

14.
J Clin Med; 11(10), 2022 May 17.
Article in English | MEDLINE | ID: mdl-35628947

ABSTRACT

Background: Bubbles often mask the mucosa during capsule endoscopy (CE). Clinical scores assessing the cleanliness and the amount of bubbles in the small bowel (SB) are poorly reproducible, unlike machine learning (ML) solutions. We aimed to measure the amount of bubbles with ML algorithms in SB CE recordings, and to compare two polyethylene glycol (PEG)-based preparations, with and without simethicone, in patients with obscure gastrointestinal bleeding (OGIB). Patients & Methods: All consecutive outpatients with OGIB from a tertiary care center received a PEG-based preparation, without or with simethicone, in two different periods. The primary outcome was a difference in the proportions (%) of frames with abundant bubbles (>10%) along the full-length video sequences between the two periods. SB CE recordings were analyzed by a validated computer algorithm based on a grey-level co-occurrence matrix (GLCM) to assess the abundance of bubbles in each frame. Results: In total, 105 third-generation SB CE recordings were analyzed (48 without simethicone and 57 with simethicone-added preparations). A significant association was shown between the use of a simethicone-added preparation and a lower abundance of bubbles along the SB (p = 0.04). A significantly lower proportion of "abundant in bubbles" frames was observed in the fourth quartile (30.5% vs. 20.6%, p = 0.02). There was no significant impact of the use of simethicone in terms of diagnostic yield, SB transit time, or completion rate. Conclusion: An accurate and reproducible computer algorithm demonstrated a significant decrease in the abundance of bubbles along SB CE recordings, with a marked effect in the last quartile, in patients for whom simethicone had been added to the PEG-based preparation, compared with those without simethicone.
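GLCM texture features of the kind used here can be computed with scikit-image. The sketch below is illustrative only: the validated algorithm's exact features and decision rule are not given in the abstract, so the homogeneity feature, the cutoff value, and the per-video aggregation are assumptions (function names graycomatrix/graycoprops follow scikit-image ≥ 0.19).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def bubble_feature(gray_frame: np.ndarray) -> float:
    """GLCM homogeneity of an 8-bit grayscale frame; bubbles tend to lower local homogeneity."""
    glcm = graycomatrix(gray_frame, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    return float(graycoprops(glcm, "homogeneity")[0, 0])

def abundant_frame_ratio(frames: list[np.ndarray], cutoff: float) -> float:
    """Proportion of frames flagged as 'abundant in bubbles' (feature below an assumed cutoff)."""
    flags = [bubble_feature(f) < cutoff for f in frames]
    return float(np.mean(flags))

rng = np.random.default_rng(3)
frames = [rng.integers(0, 256, size=(64, 64), dtype=np.uint8) for _ in range(20)]
print(abundant_frame_ratio(frames, cutoff=0.35))
```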

15.
Sci Rep; 12(1): 20086, 2022 Nov 22.
Article in English | MEDLINE | ID: mdl-36418429

ABSTRACT

A simple method for accurately identifying Glossina spp. in the field is a challenge for sustaining the future elimination of Human African Trypanosomiasis (HAT) as a public health scourge, as well as for the sustainable management of African Animal Trypanosomiasis (AAT). Current methods for Glossina species identification rely heavily on a few well-trained experts. Methodologies based on molecular approaches such as DNA barcoding or mass spectrometry protein profiling (MALDI-TOF) have not been thoroughly investigated for Glossina spp. Moreover, because they are destructive, time-consuming, and costly in terms of infrastructure and materials, they might not be well adapted to the survey of arthropod vectors involved in the transmission of pathogens responsible for Neglected Tropical Diseases such as HAT. This study demonstrates a new type of methodology to classify Glossina species. A database of Wing Interference Patterns (WIPs) representative of the Glossina species involved in the transmission of HAT and AAT was used in conjunction with a deep learning architecture. This database holds 1,766 pictures representing 23 Glossina species. This cost-effective methodology, which requires only mounting wings on slides and a commercially available microscope, demonstrates that WIPs are an excellent medium for automatically recognizing Glossina species with very high accuracy.


Subject(s)
Trypanosomiasis, African; Tsetse Flies; Animals; Humans; Machine Learning; Databases, Factual; Neglected Diseases; Spectrometry, Mass, Matrix-Assisted Laser Desorption-Ionization
16.
Therap Adv Gastroenterol; 15: 17562848221132683, 2022.
Article in English | MEDLINE | ID: mdl-36338789

ABSTRACT

Background: Artificial intelligence (AI) is rapidly infiltrating multiple areas of medicine, with gastrointestinal endoscopy paving the way in both research and clinical applications. Multiple challenges associated with the incorporation of AI in endoscopy are being addressed in recent consensus documents. Objectives: In the current paper, we aimed to map future challenges and areas of research for the incorporation of AI into capsule endoscopy (CE) practice. Design: Modified three-round Delphi consensus online survey. Methods: The study design was based on a modified three-round Delphi consensus online survey distributed to a group of CE and AI experts. Round one aimed to map out key research statements and challenges for the implementation of AI in CE. All queries addressing the same questions were merged into a single issue. The second round aimed to rank all questions generated during round one and to identify the top-ranked statements with the highest total score. Finally, the third round aimed to redistribute and rescore the top-ranked statements. Results: Twenty-one experts (16 gastroenterologists and 5 data scientists) participated in the survey. In the first round, 48 statements divided into seven themes were generated. After scoring all statements and rescoring the top 12, the question of AI use for identification and grading of small-bowel pathologies was scored highest (mean score 9.15), followed by the correlation of AI and human expert reading (9.05) and real-life feasibility (9.0). Conclusion: In summary, our study lays out a roadmap of future challenges and research areas on the way to fully incorporating AI in CE reading.
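The ranking step of such a Delphi round reduces to averaging expert scores per statement and sorting. A minimal sketch, with fabricated scores and statement names paraphrased from the abstract:

```python
import pandas as pd

# Fabricated round scores: one row per (statement, expert), scored on a 1-9 scale.
scores = pd.DataFrame({
    "statement": ["AI grading of SB pathologies", "AI vs human reading correlation",
                  "Real-life feasibility", "AI grading of SB pathologies",
                  "AI vs human reading correlation", "Real-life feasibility"],
    "expert": ["E1", "E1", "E1", "E2", "E2", "E2"],
    "score": [9, 9, 8, 9, 9, 9],
})

ranking = (scores.groupby("statement")["score"]
                 .mean()
                 .sort_values(ascending=False))
print(ranking)  # top-ranked statements go forward to the re-scoring round
```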

17.
J Imaging; 7(2), 2021 Feb 03.
Article in English | MEDLINE | ID: mdl-34460624

ABSTRACT

Bio-inspired Event-Based (EB) cameras are a promising new technology that outperforms standard frame-based cameras in scenes with extreme lighting and fast motion. A number of EB corner detection techniques have already been developed; however, the performance of these EB corner detectors has only been evaluated on a few author-selected criteria rather than on a unified common basis, as proposed here. Moreover, their experimental conditions are mainly limited to less interesting operational regions of the EB camera (in which frame-based cameras can also operate), and some of the criteria, by definition, could not determine whether a detector had any systematic bias. In this paper, we evaluate five of the seven existing EB corner detectors on a public dataset, including extreme illumination conditions that have not been investigated before. Moreover, this evaluation is the first of its kind both in analysing such a large number of detectors and in applying a unified procedure to all of them. Contrary to previous assessments, we employed both the intensity and the trajectory information within the public dataset rather than only one of them. We show that a rigorous comparison among EB detectors can be performed without tedious manual labelling, even under challenging acquisition conditions. This study thus proposes the first standard unified EB corner evaluation procedure, which will enable a better understanding of the underlying mechanisms of EB cameras and can therefore lead to more efficient EB corner detection techniques.

18.
Front Neurosci; 15: 629067, 2021.
Article in English | MEDLINE | ID: mdl-34276279

ABSTRACT

Purpose: Since their first generation in 2013, the use of cerebral organoids has spread exponentially. Today, the amount of generated data is becoming challenging to analyze manually. This review aims to give an overview of current image acquisition methods and subsequently to identify the needs in image analysis tools for cerebral organoids. Methods: To address this question, we went through all recent articles published on the subject and annotated the protocols, acquisition methods, and algorithms used. Results: Over the investigated period, confocal microscopy and bright-field microscopy were the most used acquisition techniques. Cell counting, the most common task, is performed in 20% of the articles, and around 12% of articles calculate morphological parameters such as area. Image analysis on cerebral organoids is performed mostly using the ImageJ software (around 52%) and the Matlab language (4%). Processing remains mostly semi-automatic. We highlight the limitations encountered in image analysis in the cerebral organoid field and suggest possible solutions and implementations to develop. Conclusions: In addition to providing an overview of cerebral organoid cultures and imaging, this work highlights the need to improve the existing image analysis methods for such images and the need for specific analysis tools. These solutions could specifically help to monitor the growth of future standardized cerebral organoids.

19.
J Clin Med; 10(23), 2021 Dec 06.
Article in English | MEDLINE | ID: mdl-34884410

ABSTRACT

Artificial intelligence (AI) has shown promising results in digestive endoscopy, especially in capsule endoscopy (CE). However, some physicians still have reservations and fear the advent of this technology. We aimed to evaluate perceptions and current sentiments toward the use of AI in CE. An online survey questionnaire was sent to an audience of gastroenterologists. In addition, several European national leaders of the International CApsule endoscopy REsearch (I CARE) Group were asked to disseminate the online survey among their national communities of CE readers (CERs). The survey included 32 questions regarding general information, perceptions of AI, and its use in daily life, medicine, endoscopy, and CE. Among the 380 European gastroenterologists who answered this survey, 333 (88%) were CERs. The mean length of experience in CE reading was 9.9 years (range 0.5-22). A majority of CERs agreed that AI would positively impact CE, shorten CE reading time, and help standardize reporting and characterize lesions seen in CE. Nevertheless, a majority of CERs disagreed that AI would completely replace CE reading in the foreseeable future. Most CERs believed in the high potential of AI to become a valuable tool for automated diagnosis and for shortening the reading time. Currently, the perception is that AI will not replace CE reading.

20.
Dig Liver Dis; 53(12): 1627-1631, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34563469

ABSTRACT

BACKGROUND AND AIMS: Current artificial intelligence (AI)-based solutions for capsule endoscopy (CE) interpretation are proprietary. We aimed to evaluate an AI solution trained on a specific CE system (Pillcam®, Medtronic) for the detection of angiectasias on images captured by a different proprietary system (MiroCam®, Intromedic). MATERIAL AND METHODS: An advanced AI solution (Axaro®, Augmented Endoscopy), previously trained on Pillcam® small-bowel images, was evaluated on independent datasets with more than 1200 Pillcam® and MiroCam® still frames (equally distributed, with or without angiectasias). Images were reviewed by experts before and after AI interpretation. RESULTS: Sensitivity for the diagnosis of angiectasia was 97.4% with Pillcam® images and 96.1% with MiroCam® images, with specificities of 98.8% and 97.8%, respectively. Performance regarding the delineation of regions of interest and the characterization of angiectasias was similar in both groups (all above 95%). Processing time was significantly shorter with MiroCam® (20.7 ms) than with Pillcam® images (24.6 ms, p<0.0001), possibly related to technical differences between the systems. CONCLUSION: This proof-of-concept study on still images paves the way for the development of resource-sparing, "universal" CE databases and AI solutions for CE interpretation.
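The per-frame processing times compared above (20.7 ms vs. 24.6 ms) can be measured with a simple wall-clock harness such as the sketch below; the inference function here is a stand-in, not the evaluated AI solution.

```python
import time
import numpy as np

def mean_inference_time_ms(frames, infer) -> float:
    """Average wall-clock processing time per still frame, in milliseconds."""
    times = []
    for frame in frames:
        start = time.perf_counter()
        infer(frame)
        times.append((time.perf_counter() - start) * 1000.0)
    return float(np.mean(times))

# Stand-in for the AI model's per-frame inference call
def dummy_infer(frame: np.ndarray) -> int:
    return int(frame.sum() % 2)

frames = [np.random.default_rng(i).integers(0, 256, (320, 320, 3), dtype=np.uint8) for i in range(100)]
print(f"{mean_inference_time_ms(frames, dummy_infer):.1f} ms per frame")
```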


Subject(s)
Angiodysplasia/diagnosis; Capsule Endoscopy/methods; Deep Learning; Intestine, Small/pathology; Humans; Intestine, Small/diagnostic imaging; Proof of Concept Study