Results 1 - 6 of 6
1.
Graefes Arch Clin Exp Ophthalmol ; 260(5): 1779-1788, 2022 May.
Article in English | MEDLINE | ID: mdl-34999946

ABSTRACT

PURPOSE: Artificial intelligence (AI) has entered the field of medicine, and ophthalmology is no exception. The objective of this study was to report on scientific production and publication trends and to identify the journals, countries, international collaborations, and major MeSH terms involved in AI in ophthalmology research.

METHODS: Scientometric methods were used to evaluate global scientific production and development trends in AI in ophthalmology using PubMed and the Web of Science Core Collection.

RESULTS: A total of 1356 articles were retrieved over the period 1966-2019. The yearly growth of AI in ophthalmology publications has been 18.89% over the last ten years, indicating that AI in ophthalmology is a very attractive topic in science. Analysis of the most productive journals showed that most were specialized in computer and medical systems; no journal was found to specialize in AI in ophthalmology. The USA, China, and the UK were the three most productive countries. The study of international collaboration showed that, besides the USA, researchers tended to collaborate with peers from neighboring countries. Among the twenty most frequent MeSH terms retrieved, only four were related to clinical topics, revealing the retina and glaucoma as the most frequent subjects of interest in AI in ophthalmology. Analysis of the top ten Journal Citation Reports categories of journals and MeSH terms for articles confirmed that AI in ophthalmology research is mainly focused on engineering and computing and is mainly technical research related to computer methods.

CONCLUSIONS: This study provides a broad view of the current status and trends in AI in ophthalmology research and shows that AI in ophthalmology is an attractive research topic focusing on retinal diseases and glaucoma. This study may be useful for researchers in AI in ophthalmology, such as clinicians, but also for scientists, to better understand this research topic, know the main actors in this field (including journals and countries), and gain a general overview of this research theme.


Subjects
Glaucoma, Ophthalmology, Artificial Intelligence, Bibliometrics, China, Humans
2.
Med Image Anal ; 72: 102118, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34126549

ABSTRACT

In recent years, Artificial Intelligence (AI) has proven its relevance for medical decision support. However, the "black-box" nature of successful AI algorithms still holds back their widespread deployment. In this paper, we describe an eXplanatory Artificial Intelligence (XAI) that reaches the same level of performance as black-box AI for the task of classifying Diabetic Retinopathy (DR) severity using Color Fundus Photography (CFP). This algorithm, called ExplAIn, learns to segment and categorize lesions in images; the final image-level classification directly derives from these multivariate lesion segmentations. The novelty of this explanatory framework is that it is trained from end to end, with image supervision only, just like black-box AI algorithms: the concepts of lesions and lesion categories emerge by themselves. For improved lesion localization, foreground/background separation is trained through self-supervision, in such a way that occluding foreground pixels transforms the input image into a healthy-looking image. The advantage of such an architecture is that automatic diagnoses can be explained simply by an image and/or a few sentences. ExplAIn is evaluated at the image level and at the pixel level on various CFP image datasets. We expect this new framework, which jointly offers high classification performance and explainability, to facilitate AI deployment.
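The core idea of the abstract — an image-level prediction derived directly from per-pixel lesion segmentations, so every diagnosis can be traced back to pixels — can be illustrated with a minimal sketch. This is not the authors' code: the lesion-category names and the max-pooling rule are illustrative assumptions.

```python
# Minimal sketch of explanation-by-construction: image-level scores
# derived directly from per-pixel lesion probability maps.

def image_level_scores(lesion_maps):
    """lesion_maps: dict mapping a lesion category to a 2-D grid of
    per-pixel probabilities in [0, 1]. Returns one image-level score
    per category via global max-pooling, so each prediction can be
    traced back to the pixels that produced it."""
    scores = {}
    for category, grid in lesion_maps.items():
        scores[category] = max(max(row) for row in grid)
    return scores

# Toy 2x2 probability maps for two hypothetical lesion categories.
maps = {
    "microaneurysm": [[0.1, 0.8], [0.0, 0.2]],
    "hemorrhage":    [[0.0, 0.1], [0.3, 0.0]],
}
print(image_level_scores(maps))  # {'microaneurysm': 0.8, 'hemorrhage': 0.3}
```

Because the image-level score is a pooling of the segmentation maps rather than a separate opaque head, the maps themselves serve as the explanation.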


Subjects
Diabetes Mellitus, Diabetic Retinopathy, Algorithms, Artificial Intelligence, Diabetic Retinopathy/diagnostic imaging, Humans, Mass Screening, Photography
3.
Med Image Anal ; 52: 24-41, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30468970

ABSTRACT

Surgical tool detection is attracting increasing attention from the medical image analysis community. The goal generally is not to precisely locate tools in images, but rather to indicate which tools are being used by the surgeon at each instant. The main motivation for annotating tool usage is to design efficient solutions for surgical workflow analysis, with potential applications in report generation, surgical training and even real-time decision support. Most existing tool annotation algorithms focus on laparoscopic surgeries. However, with 19 million interventions per year, the most common surgical procedure in the world is cataract surgery. The CATARACTS challenge was organized in 2017 to evaluate tool annotation algorithms in the specific context of cataract surgery. It relies on more than nine hours of video, from 50 cataract surgeries, in which the presence of 21 surgical tools was manually annotated by two experts. With 14 participating teams, this challenge can be considered a success. As might be expected, the submitted solutions are based on deep learning. This paper thoroughly evaluates these solutions: in particular, the quality of their annotations is compared with that of human interpretations. Next, lessons learnt from the differential analysis of these solutions are discussed. We expect that they will guide the design of efficient surgery monitoring tools in the near future.


Subjects
Cataract Extraction/instrumentation, Deep Learning, Surgical Instruments, Algorithms, Humans, Video Recording
4.
Med Image Anal ; 47: 203-218, 2018 Jul.
Article in English | MEDLINE | ID: mdl-29778931

ABSTRACT

This paper investigates the automatic monitoring of tool usage during a surgery, with potential applications in report generation, surgical training and real-time decision support. Two surgeries are considered: cataract surgery, the most common surgical procedure, and cholecystectomy, one of the most common digestive surgeries. Tool usage is monitored in videos recorded either through a microscope (cataract surgery) or an endoscope (cholecystectomy). Following state-of-the-art video analysis solutions, each frame of the video is analyzed by convolutional neural networks (CNNs) whose outputs are fed to recurrent neural networks (RNNs) in order to take temporal relationships between events into account. Novelty lies in the way those CNNs and RNNs are trained. Computational complexity prevents the end-to-end training of "CNN+RNN" systems. Therefore, CNNs are usually trained first, independently from the RNNs. This approach is clearly suboptimal for surgical tool analysis: many tools are very similar to one another, but they can generally be differentiated based on past events. CNNs should be trained to extract the most useful visual features in combination with the temporal context. A novel boosting strategy is proposed to achieve this goal: the CNN and RNN parts of the system are simultaneously enriched by progressively adding weak classifiers (either CNNs or RNNs) trained to improve the overall classification accuracy. Experiments were performed on a dataset of 50 cataract surgery videos, where the usage of 21 surgical tools was manually annotated, and a dataset of 80 cholecystectomy videos, where the usage of 7 tools was manually annotated. Very good classification performance is achieved on both datasets: tool usage could be labeled with an average area under the ROC curve of Az = 0.9961 and Az = 0.9939, respectively, in offline mode (using past, present and future information), and Az = 0.9957 and Az = 0.9936, respectively, in online mode (using past and present information only).
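The "CNN+RNN" pipeline described above — per-frame visual features carried through a recurrent unit so that past frames inform the current tool-presence score — can be sketched in miniature. This is a hypothetical toy, not the paper's system: the scalar "features" stand in for CNN outputs, and the weights and tanh cell are illustrative.

```python
# Toy online-mode "CNN+RNN": each frame's feature (standing in for a
# CNN output) updates a recurrent state h, which carries temporal
# context into the sigmoid tool-presence score for that frame.
import math

def rnn_tool_scores(frame_features, w_in=0.9, w_rec=0.5):
    """Online mode: each step sees past and present frames only."""
    h, scores = 0.0, []
    for x in frame_features:
        h = math.tanh(w_in * x + w_rec * h)        # temporal context in h
        scores.append(1.0 / (1.0 + math.exp(-h)))  # sigmoid presence score
    return scores

# A tool appearing at frame 2: the score rises while it is visible and
# then decays gradually rather than dropping instantly once it leaves.
scores = rnn_tool_scores([0.0, 0.0, 1.0, 1.0, 0.0])
print([round(s, 3) for s in scores])
```

The decay after the tool disappears is exactly the kind of temporal cue the abstract argues a frame-independent CNN cannot exploit when visually similar tools must be told apart by past events.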


Subjects
Algorithms, Cataract Extraction/instrumentation, Cholecystectomy/instrumentation, Image Processing, Computer-Assisted/methods, Neural Networks, Computer, Video Recording, Humans
5.
Annu Int Conf IEEE Eng Med Biol Soc ; 2017: 2002-2005, 2017 Jul.
Article in English | MEDLINE | ID: mdl-29060288

ABSTRACT

The automatic detection of surgical tools in surgery videos is a promising solution for surgical workflow analysis. It paves the way to various applications, including surgical workflow optimization, surgical skill evaluation and real-time warning generation. A solution based on convolutional neural networks (CNNs) is proposed in this paper. Unlike existing solutions, the proposed CNN does not analyze images independently; it analyzes sequences of consecutive images. Features extracted from each image by the CNN are fused inside the network using the optical flow. For improved performance, this multi-image fusion strategy is also applied while training the CNN. The proposed framework was evaluated on a dataset of 30 cataract surgery videos (6 hours of video). Ten tool categories were defined by surgeons. The proposed system was able to detect each of these categories with a high area under the ROC curve (0.953 ≤ Az ≤ 0.987). The proposed detector, based on multi-image fusion, was significantly more sensitive and specific than a similar system analyzing images independently (p = 2.98 × 10⁻⁶ and p = 2.07 × 10⁻³, respectively).
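Why fusing consecutive frames beats scoring each frame independently can be shown with a deliberately simplified stand-in: instead of optical-flow-guided feature fusion inside the network, this sketch just pools per-frame detector scores over a short temporal window. The window size and scores are illustrative assumptions, not the paper's method.

```python
# Simplified stand-in for multi-image fusion: pool each frame's
# detector score with its recent neighbors so isolated spurious
# detections are damped while sustained detections stay high.

def fuse_scores(per_frame_scores, window=3):
    """Average each score with up to `window - 1` preceding frames."""
    fused = []
    for i in range(len(per_frame_scores)):
        lo = max(0, i - window + 1)
        chunk = per_frame_scores[lo:i + 1]
        fused.append(sum(chunk) / len(chunk))
    return fused

# One spurious detection (0.9 at frame 2) versus a sustained one
# (frames 5-7): fusion damps the former and preserves the latter.
noisy = [0.1, 0.1, 0.9, 0.1, 0.1, 0.8, 0.9, 0.8]
print([round(s, 2) for s in fuse_scores(noisy)])
```

In the actual paper the fusion operates on CNN feature maps aligned by optical flow, inside the network and during training, but the benefit it buys — temporal consistency of the tool-presence signal — is the same effect this toy exhibits.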


Subjects
Cataract, Cataract Extraction, Humans, Neural Networks, Computer, ROC Curve
6.
Annu Int Conf IEEE Eng Med Biol Soc ; 2017: 4407-4410, 2017 Jul.
Article in English | MEDLINE | ID: mdl-29060874

ABSTRACT

In recent years, several algorithms were proposed to monitor a surgery through the automatic analysis of endoscope or microscope videos. This paper aims at improving existing solutions for the automated analysis of cataract surgery, the most common ophthalmic surgery, which is performed under a microscope. Through the analysis of a video recording the surgical tray, it is possible to know which tools are put on or taken from the tray, and therefore which ones are likely being used by the surgeon. Combining these observations with observations from the microscope video should enhance the overall performance of the system. Our contribution is twofold: first, datasets of artificial surgery videos are generated in order to train convolutional neural networks (CNNs) and, second, two classification methods are evaluated to detect the presence of tools in videos. We also assess the impact of the manner in which the artificial datasets are built on tool recognition performance. By design, the proposed artificial datasets greatly reduce the need for fully annotated real datasets and should also yield better performance. Experiments show that one of the proposed classification methods was able to detect most of the targeted tools well.


Subjects
Pattern Recognition, Automated, Algorithms, Cataract Extraction, Neural Networks, Computer, Video Recording