Results 1 - 5 of 5
1.
Sci Rep; 13(1): 18408, 2023 Oct 27.
Article in English | MEDLINE | ID: mdl-37891238

ABSTRACT

This paper presents a computationally light and memory-efficient convolutional neural network (CNN)-based fully automated system for detection of glaucoma, a leading cause of irreversible blindness worldwide. Using color fundus photographs, the system detects glaucoma in two steps. In the first step, the optic disc region is localized using the You Only Look Once (YOLO) CNN architecture. In the second step, classification into 'glaucomatous' and 'non-glaucomatous' is performed using the MobileNet architecture. A simplified version of the original YOLO net, specific to this context, is also proposed. Extensive experiments are conducted using seven state-of-the-art CNNs with varying computational intensity, namely MobileNetV2, MobileNetV3, Custom ResNet, InceptionV3, ResNet50, 18-Layer CNN and InceptionResNetV2. A total of 6671 fundus images collected from seven publicly available glaucoma datasets are used for the experiments. The system achieves an accuracy of 97.4% and an F1 score of 97.3%, with sensitivity, specificity, and AUC of 97.5%, 97.2%, and 99.3%, respectively. These findings are comparable with the best methods reported in the literature. With comparable or better performance, the proposed system produces decisions significantly faster and drastically reduces the resource requirement; for example, it requires 12 times less memory than ResNet50 and produces decisions 2 times faster. With its significantly lower memory footprint and faster processing, the proposed system can be embedded directly into resource-limited devices such as portable fundus cameras.
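To make the two-step design concrete, here is a minimal sketch of such a pipeline in Python, assuming a PyTorch/torchvision setup. The optic-disc detector below is a placeholder standing in for the simplified YOLO stage, and the MobileNetV2 classifier is untrained, so this illustrates only the data flow, not the authors' models or weights.

```python
# Sketch of the two-step pipeline: (1) locate the optic disc, (2) classify the crop.
# detect_optic_disc() is a hypothetical stand-in for the simplified YOLO detector.
import torch
import torchvision.transforms as T
from torchvision.models import mobilenet_v2
from PIL import Image

def detect_optic_disc(fundus_img):
    """Placeholder for the YOLO stage: return a (left, top, right, bottom) box."""
    w, h = fundus_img.size
    return (w // 3, h // 3, 2 * w // 3, 2 * h // 3)  # dummy central box

# Step 2: MobileNetV2 with a 2-class head; real use would load trained weights.
classifier = mobilenet_v2(weights=None, num_classes=2).eval()
preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])

def grade_fundus(path):
    img = Image.open(path).convert("RGB")
    crop = img.crop(detect_optic_disc(img))              # step 1: optic disc ROI
    with torch.no_grad():
        logits = classifier(preprocess(crop).unsqueeze(0))
    return ["non-glaucomatous", "glaucomatous"][logits.argmax(dim=1).item()]
```

In practice the box would come from a YOLO model trained on annotated optic-disc locations, and the classifier from fine-tuning MobileNet on the resulting disc crops.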


Subjects
Glaucoma; Optic Disk; Humans; Glaucoma/diagnostic imaging; Optic Disk/diagnostic imaging; Fundus Oculi; Neural Networks, Computer; Diagnostic Techniques, Ophthalmological
2.
Rev Sci Instrum; 94(5), 2023 May 01.
Article in English | MEDLINE | ID: mdl-37219385

ABSTRACT

We report the modification of a gas-phase ultrafast electron diffraction (UED) instrument that enables experiments with both gas and condensed-matter targets, and we demonstrate a time-resolved experiment with sub-picosecond resolution on solid-state samples. The instrument relies on a hybrid DC-RF acceleration structure to deliver femtosecond electron pulses on the target, synchronized with femtosecond laser pulses. The laser pulses and electron pulses are used to excite the sample and to probe the structural dynamics, respectively. The system has been extended to perform transmission UED on thin solid samples, to cool samples to cryogenic temperatures, and to carry out time-resolved measurements. We tested the cooling capability by recording diffraction patterns of temperature-dependent charge density waves in 1T-TaS2. The time-resolved capability is verified experimentally by capturing the dynamics in photoexcited single-crystal gold.

3.
Article in English | MEDLINE | ID: mdl-35534406

ABSTRACT

OBJECTIVE: This study aimed to evaluate a deep learning (DL) system using convolutional neural networks (CNNs) for automatic detection of caries on bitewing radiographs. STUDY DESIGN: In total, 2468 bitewings were labeled by 3 dentists to create the reference standard. Of these images, 1257 had caries and 1211 were sound. The Faster region-based CNN was applied to detect the regions of interest (ROIs) with potential lesions. A total of 13,246 ROIs were generated from all 'sound' images, and 50% of 'caries' images (selected randomly) were used to train the ROI detection module. The remaining 50% of 'caries' images were used to validate the ROI detection module. Caries detection was then performed using Inception-ResNet-v2. A set of 3297 'caries' and 5321 'sound' ROIs cropped from the 2468 images was used to train and validate the caries detection module. Data sets were randomly divided into training (90%) and validation (10%) data sets. Recall, precision, specificity, accuracy, and F1 score were used as metrics to assess performance. RESULTS: The caries detection module achieved recall, precision, specificity, accuracy, and F1 scores of 0.89, 0.86, 0.86, 0.87, and 0.87, respectively. CONCLUSIONS: The proposed DL system demonstrated promising performance for detecting proximal surface caries on bitewings.
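The metrics quoted in RESULTS reduce to confusion-matrix arithmetic; the sketch below shows the definitions used, with hypothetical counts rather than the study's data.

```python
# Recall, precision, specificity, accuracy and F1 from confusion-matrix counts.
def detection_metrics(tp, fp, tn, fn):
    recall = tp / (tp + fn)                      # a.k.a. sensitivity
    precision = tp / (tp + fp)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"recall": recall, "precision": precision, "specificity": specificity,
            "accuracy": accuracy, "f1": f1}

# Hypothetical example counts (not taken from the study)
print(detection_metrics(tp=420, fp=60, tn=500, fn=55))
```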


Subjects
Deep Learning; Dental Caries; Dental Caries/diagnostic imaging; Humans
4.
Sci Rep; 10(1): 7266, 2020 Apr 29.
Article in English | MEDLINE | ID: mdl-32350327

ABSTRACT

Alterations of Young's modulus (YM) and Poisson's ratio (PR) in biological tissues are often early indicators of the onset of pathological conditions. Knowledge of these parameters has been proven to be of great clinical significance for the diagnosis, prognosis and treatment of cancers. Currently, however, there are no non-invasive modalities that can image and quantify these parameters in vivo without assuming incompressibility of the tissue, an assumption that is rarely justified in human tissues. In this paper, we developed a new method to simultaneously reconstruct the YM and PR of a tumor and of its surrounding tissues under the assumptions of axisymmetry and an ellipsoidal inclusion. This new, non-invasive method allows the generation of high-spatial-resolution YM and PR maps from axial and lateral strain data obtained via ultrasound elastography. The method was validated using finite element (FE) simulations and controlled experiments performed on phantoms with known mechanical properties. The clinical feasibility of the developed method was demonstrated in an orthotopic mouse model of breast cancer. Our results demonstrate that the proposed technique can estimate the YM and PR of spherical inclusions with accuracy higher than 99%, and with accuracy higher than 90% for inclusions of different geometries and under various clinically relevant boundary conditions.
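As a rough illustration of how axial and lateral strain maps relate to these parameters (and not the authors' axisymmetric ellipsoidal-inclusion reconstruction), the sketch below applies the textbook uniaxial-stress approximation, where PR is the negative lateral-to-axial strain ratio and YM is the applied stress divided by the axial strain; the strain values and stress are invented.

```python
# First-order YM/PR estimates from elastographic strain maps under an
# idealized uniaxial-stress assumption (illustration only).
import numpy as np

def first_order_ym_pr(axial_strain, lateral_strain, applied_stress_pa):
    axial = np.asarray(axial_strain, dtype=float)
    lateral = np.asarray(lateral_strain, dtype=float)
    pr = -lateral / axial                     # Poisson's ratio map
    ym = applied_stress_pa / np.abs(axial)    # Young's modulus map (Pa)
    return ym, pr

# Hypothetical 2x2 strain maps under ~1 kPa compression
ym_map, pr_map = first_order_ym_pr(
    axial_strain=[[-0.010, -0.012], [-0.011, -0.009]],
    lateral_strain=[[0.0045, 0.0050], [0.0046, 0.0040]],
    applied_stress_pa=1e3,
)
```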


Subjects
Breast Neoplasms/pathology; Animals; Disease Models, Animal; Elasticity Imaging Techniques/methods; Female; Humans; Mice; Poisson Distribution; Reproducibility of Results; Stress, Mechanical
5.
J Glaucoma; 27(11): 957-964, 2018 Nov.
Article in English | MEDLINE | ID: mdl-30095604

ABSTRACT

PURPOSE: To evaluate aqueous humor outflow (AHO) in intact eyes of live human subjects during cataract surgery using fluorescein aqueous angiography. METHODS: Aqueous angiography was performed in 8 live human subjects (56 to 86 years old; 2 men and 6 women). After anesthesia, fluorescein (2%) was introduced into the eye [either alone or after indocyanine green (ICG; 0.4%)] from a sterile, gravity-driven constant-pressure reservoir. Aqueous angiographic images were obtained with a Spectralis HRA+OCT and FLEX module (Heidelberg Engineering). Using the same device, anterior-segment optical coherence tomography (OCT) and infrared images were acquired concurrently with aqueous angiography. RESULTS: Fluorescein aqueous angiography in the live human eye showed segmental AHO patterns. The initial angiographic signal was seen on average by 14.0±3.0 seconds (mean±SE). Using multimodal imaging, the angiographically positive signal colocalized with episcleral veins (infrared imaging) and intrascleral lumens (anterior-segment OCT). Sequential aqueous angiography with ICG followed by fluorescein showed similar segmental angiographic patterns. DISCUSSION: Fluorescein aqueous angiography in live humans was similar to that reported in nonhuman primates and to ICG aqueous angiography in live humans. As the segmental patterns obtained with sequential angiography using ICG followed by fluorescein were similar, these tracers can now be used sequentially, before and after trabecular outflow interventions, to assess their effects on AHO in live human subjects.
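The onset statistic in RESULTS (mean ± standard error of the time to initial angiographic signal) can be reproduced from per-eye onset times as below; the eight values here are hypothetical, not the study's measurements.

```python
# Mean +/- standard error of hypothetical per-eye signal-onset times (seconds).
import math
import statistics

onset_seconds = [9.0, 11.5, 12.0, 13.5, 14.0, 15.5, 17.0, 19.5]  # 8 eyes, example values
mean = statistics.mean(onset_seconds)
se = statistics.stdev(onset_seconds) / math.sqrt(len(onset_seconds))
print(f"initial signal at {mean:.1f} +/- {se:.1f} s (mean +/- SE)")
```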


Subjects
Aqueous Humor/metabolism; Cataract Extraction; Fluorescein Angiography/methods; Aged; Aged, 80 and over; Female; Fluorescein/metabolism; Humans; Male; Middle Aged; Tomography, Optical Coherence/methods