Results 1 - 20 of 24
1.
J Am Chem Soc ; 146(6): 4134-4143, 2024 Feb 14.
Article in English | MEDLINE | ID: mdl-38317439

ABSTRACT

Identifying multiple rival reaction products and transient species formed during ultrafast photochemical reactions and determining their time-evolving relative populations are key steps toward understanding and predicting photochemical outcomes. Yet, most contemporary ultrafast studies struggle with clearly identifying and quantifying competing molecular structures/species among the emerging reaction products. Here, we show that mega-electronvolt ultrafast electron diffraction in combination with ab initio molecular dynamics calculations offer a powerful route to determining time-resolved populations of the various isomeric products formed after UV (266 nm) excitation of the five-membered heterocyclic molecule 2(5H)-thiophenone. This strategy provides experimental validation of the predicted high (∼50%) yield of an episulfide isomer containing a strained three-membered ring within ∼1 ps of photoexcitation and highlights the rapidity of interconversion between the rival highly vibrationally excited photoproducts in their ground electronic state.

2.
Phys Chem Chem Phys ; 24(25): 15416-15427, 2022 Jun 29.
Article in English | MEDLINE | ID: mdl-35707953

ABSTRACT

The structural dynamics of photoexcited gas-phase carbon disulfide (CS2) molecules are investigated using ultrafast electron diffraction. The dynamics were triggered by excitation of the optically bright 1B2(1Σu+) state by an ultraviolet femtosecond laser pulse centred at 200 nm. In accordance with previous studies, rapid vibrational motion facilitates a combination of internal conversion and intersystem crossing to lower-lying electronic states. Photodissociation via these electronic manifolds results in the production of CS fragments in the electronic ground state and dissociated singlet and triplet sulphur atoms. The structural dynamics are extracted from the experiment using a trajectory-fitting filtering approach, revealing the main characteristics of the singlet and triplet dissociation pathways. Finally, the effect of the time resolution on the experimental signal is considered and an outlook on future experiments is provided.

3.
Faraday Discuss ; 228(0): 39-59, 2021 May 27.
Article in English | MEDLINE | ID: mdl-33565561

ABSTRACT

We investigate the fragmentation and isomerization of toluene molecules induced by strong-field ionization with a femtosecond near-infrared laser pulse. Momentum-resolved coincidence time-of-flight ion mass spectrometry is used to determine the relative yield of different ionic products and fragmentation channels as a function of laser intensity. Ultrafast electron diffraction is used to capture the structure of the ions formed on a picosecond time scale by comparing the diffraction signal with theoretical predictions. Through the combination of the two measurements and theory, we are able to determine the main fragmentation channels and to distinguish between ions with identical mass but different structures. In addition, our diffraction measurements show that the independent atom model, which is widely used to analyze electron diffraction patterns, is not a good approximation for diffraction from ions. We show that the diffraction data is in very good agreement with ab initio scattering calculations.

4.
J Digit Imaging ; 31(4): 553-561, 2018 08.
Article in English | MEDLINE | ID: mdl-29209841

ABSTRACT

Retinal fundus images are often corrupted by non-uniform and/or poor illumination that occurs due to imperfections in the image acquisition process. This unwanted variation in brightness limits the pathological information that can be gained from the image. Studies have shown that poor illumination can impede human grading in about 10-15% of retinal images; for automated grading, the effect can be even higher. In this perspective, we propose a novel method for illumination correction in the context of retinal imaging. The method splits the color image into luminosity and chroma (i.e., color) components and performs illumination correction in the luminosity channel based on a novel background estimation technique. Extensive subjective and objective experiments were conducted on publicly available DIARETDB1 and EyePACS images to assess the performance of the proposed method. The subjective experiment confirmed that the proposed method does not create false colors/artifacts and at the same time performs better than the traditional method in 84 out of 89 cases. The objective experiment shows an accuracy improvement of 4% in automated disease grading when illumination correction is performed by the proposed method rather than the traditional one.
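The luminosity-channel correction described above can be sketched with a generic background-division approach; this is a minimal illustration under stated assumptions (a crude block-mean background estimator stands in for the paper's novel technique, and the block size and 0.5 rescaling target are arbitrary choices):

```python
import numpy as np

def estimate_background(lum, block=32):
    """Crude background estimate: mean over coarse blocks, upsampled back.
    A stand-in for the paper's (more sophisticated) background estimator."""
    h, w = lum.shape
    bh, bw = h // block, w // block
    coarse = lum[:bh * block, :bw * block].reshape(bh, block, bw, block).mean(axis=(1, 3))
    # nearest-neighbour upsample back to full resolution
    return np.repeat(np.repeat(coarse, block, axis=0), block, axis=1)[:h, :w]

def correct_illumination(lum, eps=1e-6):
    """Divide the luminosity channel by its estimated background and
    rescale toward a mid-grey level of 0.5."""
    bg = estimate_background(lum)
    out = lum / (bg + eps)
    return np.clip(out * 0.5, 0.0, 1.0)
```

Dividing by the estimated background flattens smooth illumination gradients while preserving local contrast, which is the essential idea behind luminosity-channel correction.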


Subjects
Fundus Oculi, Image Enhancement/methods, Image Processing, Computer-Assisted/methods, Photography/methods, Retinal Diseases/diagnostic imaging, Artifacts, Diagnostic Imaging/methods, Female, Humans, Male, Optical Imaging/methods, Retinal Diseases/pathology, Risk Assessment, Sensitivity and Specificity
5.
J Digit Imaging ; 31(6): 869-878, 2018 12.
Article in English | MEDLINE | ID: mdl-29704086

ABSTRACT

Fundus images obtained in a telemedicine program are acquired at different sites by people with varying levels of experience. This results in a relatively high percentage of images later marked as unreadable by graders. Unreadable images require a recapture, which is time- and cost-intensive. An automated method that determines image quality during acquisition is an effective alternative. Here we describe such an automated method for the assessment of image quality in the context of diabetic retinopathy (DR). The method applies machine learning techniques to assess an image and assign it to an 'accept' or 'reject' category, where 'reject' images require a recapture. A deep convolutional neural network is trained to grade the images automatically. A large representative set of 7000 colour fundus images, obtained from EyePACS and made available by the California Healthcare Foundation, was used for the experiment. Three retinal image analysis experts categorised these images into 'accept' and 'reject' classes based on a precise definition of image quality in the context of DR. The network was trained using 3428 images. The method categorises 'accept' and 'reject' images with an accuracy of 100%, about 2% higher than the traditional machine learning method. In a clinical trial, the proposed method showed 97% agreement with the human grader. The method can easily be incorporated into the fundus image capturing system at the acquisition centre and can guide the photographer on whether a recapture is necessary.


Subjects
Diabetic Retinopathy/diagnostic imaging, Fundus Oculi, Image Processing, Computer-Assisted/methods, Retina/diagnostic imaging, Telemedicine/methods, Algorithms, Humans, Machine Learning, Neural Networks, Computer
6.
J Med Syst ; 42(4): 57, 2018 Feb 17.
Article in English | MEDLINE | ID: mdl-29455260

ABSTRACT

In this paper we systematically evaluate the performance of several state-of-the-art local feature detectors and descriptors in the context of longitudinal registration of retinal images. Longitudinal (temporal) registration facilitates tracking changes in the retina over time. A wide range of local feature detectors and descriptors exists, and many have already been applied to retinal image registration; however, no comparative evaluation has so far analysed their respective performance. In this manuscript we evaluate widely known and commonly used detectors such as Harris, SIFT, SURF, BRISK, and bifurcation and cross-over points. As descriptors, SIFT, SURF, ALOHA, BRIEF, BRISK and PIIFD are used. Longitudinal retinal image datasets containing a total of 244 images are used for the experiment. The evaluation reveals several notable findings, including that SURF and SIFT keypoints detected on the vessels are more robust than the commonly used bifurcation and cross-over points. SIFT keypoints can be detected with a reliability of 59% for images without pathology and 45% for images with pathology; for SURF keypoints these values are 58% and 47%, respectively. The ALOHA descriptor is best suited to describing SURF keypoints, ensuring an overall matching accuracy and distinguishability of 83% and 93% for images without pathology, and 78% and 83% for images with pathology, respectively.


Subjects
Image Interpretation, Computer-Assisted/methods, Pattern Recognition, Automated/methods, Retina/physiopathology, Retinoscopy/methods, Algorithms, Humans, Reproducibility of Results, Time Factors
7.
J Med Syst ; 40(12): 277, 2016 Dec.
Article in English | MEDLINE | ID: mdl-27787783

ABSTRACT

This paper presents a novel two-step approach for longitudinal (over time) registration of retinal images. Longitudinal registration is an important preliminary step in analysing longitudinal changes on the retina, including disease progression. While high overlap and minimal geometric distortion are likely between longitudinal images, identifying features that remain reliable over time is a key challenge for longitudinal registration. Relying on the widely accepted observation that retinal vessels are stable over time, the proposed method aims to accurately match bifurcation and cross-over points between images from different timestamps. Binary robust independent elementary features (BRIEF) are computed around bifurcation points and matched based on Hamming distance. Prior to computing BRIEF descriptors, a preliminary registration is performed based on SURF keypoint matching. Experiments are conducted on image datasets containing 109 longitudinal image pairs in total. The proposed method produces accurate registration (i.e. registration with zero alignment error) in 97% of cases, significantly higher than the other methods compared. The paper also reveals that both the number and the distribution of accurately matched keypoint pairs are important for successful registration of image pairs.
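The BRIEF-plus-Hamming matching step can be illustrated with a toy implementation; the patch size, number of bit pairs, and greedy nearest-neighbour matcher here are illustrative assumptions rather than the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def brief_descriptor(patch, pairs):
    """Binary descriptor in the BRIEF style: each bit compares the
    intensities at one fixed pair of points inside the patch."""
    (r1, c1), (r2, c2) = pairs[:, 0].T, pairs[:, 1].T
    return (patch[r1, c1] < patch[r2, c2]).astype(np.uint8)

def hamming(d1, d2):
    """Number of differing bits between two binary descriptors."""
    return int(np.count_nonzero(d1 != d2))

def match(desc_a, desc_b):
    """Greedy nearest-neighbour matching by Hamming distance:
    for each descriptor in desc_a, index of the closest in desc_b."""
    return [min(range(len(desc_b)), key=lambda j: hamming(d, desc_b[j]))
            for d in desc_a]
```

In practice the same fixed set of point pairs must be used for every keypoint so that descriptors are comparable, which is why `pairs` is passed in rather than drawn per patch.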


Subjects
Image Interpretation, Computer-Assisted/methods, Retina/anatomy & histology, Retina/diagnostic imaging, Retinal Vessels/anatomy & histology, Retinal Vessels/diagnostic imaging, Algorithms, Humans
8.
Stud Health Technol Inform ; 310: 1490-1491, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38269711

ABSTRACT

We report on the prediction performance of artificial intelligence components embedded into a telehealth platform underlying a newly established eye screening service connecting metropolitan-based ophthalmologists to patients in remote indigenous communities in the Northern Territory and Queensland. Two AI-based components embedded into the telehealth platform were evaluated on retinal images collected from 328 unique patients: an image quality alert system and a diabetic retinopathy detection system. Compared to ophthalmologists, the image quality detection algorithm was correct 72% of the time at an individual image level and 85% of the time at a patient level. The retinopathy detection algorithm was 85% accurate at an individual image level and 87% accurate at a patient level. This evaluation provides assurance for future service models using AI to complement and support the decisions of eye health assessment teams.


Subjects
Decision Support Systems, Clinical, Diabetes Mellitus, Diabetic Retinopathy, Retinal Diseases, Humans, Diabetic Retinopathy/diagnostic imaging, Artificial Intelligence, Algorithms
9.
Biomed Opt Express ; 15(4): 2262-2280, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38633090

ABSTRACT

Optical coherence tomography (OCT) is a widely used clinical ophthalmic imaging technique, but the presence of speckle noise can obscure important pathological features and hinder accurate segmentation. This paper presents a novel method for denoising OCT images using a combination of texture loss and generative adversarial networks (GANs). Previous approaches have integrated deep learning techniques, starting with denoising convolutional neural networks (CNNs) that employed pixel-wise losses. While effective in reducing noise, these methods often introduced a blurring effect in the denoised OCT images. To address this, perceptual losses were introduced, improving denoising performance and overall image quality. Building on these advancements, our research focuses on designing an image reconstruction GAN that generates OCT images with textural similarity to the gold standard, the averaged OCT image. We utilize the PatchGAN discriminator approach as a texture loss to enhance the quality of the reconstructed OCT images. We also compare the performance of UNet and ResNet as generators in the conditional GAN (cGAN) setting, and compare PatchGAN with the Wasserstein GAN. Using real clinical foveal-centered OCT retinal scans of children with normal vision, our experiments demonstrate that the combination of PatchGAN and UNet achieves superior performance (PSNR = 32.50) compared to recently proposed methods such as SiameseGAN (PSNR = 31.02). Qualitative experiments involving six masked clinical ophthalmologists also favor the reconstructed OCT images with PatchGAN texture loss. In summary, this paper introduces a novel method for denoising OCT images by incorporating texture loss within a GAN framework. The proposed approach outperforms existing methods and is well received by clinical experts, offering promising advancements in OCT image reconstruction and facilitating accurate clinical interpretation.
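The PSNR values quoted above follow the standard definition; a minimal sketch (the `data_range` argument is an assumption about how the images are scaled):

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference image
    (e.g. the averaged 'gold standard' OCT frame) and a denoised image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)
```

Higher PSNR means smaller pixel-wise error against the reference, so the 32.50 vs 31.02 comparison above favours the PatchGAN/UNet combination by roughly 1.5 dB.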

10.
Stud Health Technol Inform ; 310: 911-915, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38269941

ABSTRACT

Dental caries remains the most common chronic disease in childhood, affecting almost half of all children globally. Dental care and examination of children living in remote and rural areas is an ongoing challenge that has been compounded by COVID. The development of a validated system with the capacity to screen large numbers of children with some degree of automation has the potential to facilitate remote dental screening at low cost. In this study, we aim to develop and validate a deep learning system for the assessment of dental caries using color dental photos. Three state-of-the-art deep learning networks, namely VGG16, ResNet-50 and Inception-v3, were adopted in this context. A total of 1020 child dental photos were used to train and validate the system. With Inception-v3, we achieved an accuracy of 79% in classifying 'caries' versus 'sound', with precision and recall of 95% and 75%, respectively.


Subjects
Deep Learning, Dental Caries, Child, Humans, Color, Dental Caries/diagnostic imaging, Automation
11.
Sci Rep ; 13(1): 18408, 2023 10 27.
Article in English | MEDLINE | ID: mdl-37891238

ABSTRACT

This paper presents a computationally light and memory-efficient convolutional neural network (CNN)-based fully automated system for detection of glaucoma, a leading cause of irreversible blindness worldwide. Using color fundus photographs, the system detects glaucoma in two steps. In the first step, the optic disc region is located using the You Only Look Once (YOLO) CNN architecture. In the second step, classification into 'glaucomatous' and 'non-glaucomatous' is performed using the MobileNet architecture. A simplified version of the original YOLO net, specific to the context, is also proposed. Extensive experiments are conducted using seven state-of-the-art CNNs with varying computational intensity, namely, MobileNetV2, MobileNetV3, Custom ResNet, InceptionV3, ResNet50, 18-Layer CNN and InceptionResNetV2. A total of 6671 fundus images collected from seven publicly available glaucoma datasets are used for the experiment. The system achieves an accuracy and F1 score of 97.4% and 97.3%, with sensitivity, specificity, and AUC of 97.5%, 97.2% and 99.3%, respectively. These findings are comparable with the best reported methods in the literature. With comparable or better performance, the proposed system produces significantly faster decisions and drastically reduces the resource requirement. For example, the proposed system requires 12 times less memory than ResNet50 and produces decisions twice as fast. With its significantly lower memory requirement and faster processing, the proposed system can be directly embedded into resource-limited devices such as portable fundus cameras.


Subjects
Glaucoma, Optic Disk, Humans, Glaucoma/diagnostic imaging, Optic Disk/diagnostic imaging, Fundus Oculi, Neural Networks, Computer, Diagnostic Techniques, Ophthalmological
12.
Eur J Ophthalmol ; : 11206721231199126, 2023 Sep 06.
Article in English | MEDLINE | ID: mdl-37671441

ABSTRACT

INTRODUCTION: Automated assessment of age-related macular degeneration (AMD) using optical coherence tomography (OCT) has gained significant research attention in recent years. Though a number of convolutional neural network (CNN)-based methods have been proposed recently, methods that uncover the decision-making process of CNNs or critically interpret CNNs' decisions in this context are scant. This study aims to bridge this research gap. METHODS: We independently trained several state-of-the-art CNN models, namely VGG16, VGG19, Xception, ResNet50 and InceptionResNetV2, for AMD detection and applied CNN visualization techniques, namely Grad-CAM, Grad-CAM++, Score-CAM and Faster Score-CAM, to highlight the regions of interest utilized by the CNNs in this context. Retinal layer segmentation methods were also developed to explore how the CNN regions of interest relate to the layers of the retinal structure. Extensive experiments involving 2130 SD-OCT scans collected from Duke University were performed. RESULTS: Experimental analysis shows that the Outer Nuclear Layer to Inner Segment Myeloid (ONL-ISM) region heavily influences the AMD detection decision, as evident from the normalized intersection (NI) scores. For AMD cases the average NI scores obtained were respectively 13.13%, 17.2%, 9.7%, 10.95%, and 11.31% for VGG16, VGG19, ResNet50, Xception, and InceptionResNetV2, whereas for normal cases these values were respectively 21.7%, 21.3%, 16.85%, 10.175% and 16%. CONCLUSION: Critical analysis reveals that the ONL-ISM is the most contributing layer in determining AMD, followed by the Nerve Fiber Layer to Inner Plexiform Layer (NFL-IPL).
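The core Grad-CAM computation behind the visualizations above can be sketched in a few lines; this is the standard formulation, shown on synthetic arrays, not the authors' implementation:

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heatmap from the last conv layer's activations (C, H, W)
    and the gradients of the class score w.r.t. those activations.
    Channel weights = global-average-pooled gradients; the map is the
    ReLU of the weighted sum of feature maps, normalised to [0, 1]."""
    weights = gradients.mean(axis=(1, 2))              # (C,)
    cam = np.tensordot(weights, feature_maps, axes=1)  # (H, W)
    cam = np.maximum(cam, 0.0)                         # ReLU
    if cam.max() > 0:
        cam /= cam.max()
    return cam
```

Upsampling the resulting low-resolution map to the input size and overlaying it on the scan gives the region-of-interest heatmaps that are then intersected with the segmented retinal layers.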

13.
Rev Sci Instrum ; 94(5)2023 May 01.
Article in English | MEDLINE | ID: mdl-37219385

ABSTRACT

We report the modification of a gas-phase ultrafast electron diffraction (UED) instrument that enables experiments with both gas and condensed matter targets, with a time-resolved experiment with sub-picosecond resolution demonstrated on solid-state samples. The instrument relies on a hybrid DC-RF acceleration structure to deliver femtosecond electron pulses on the target, synchronized with femtosecond laser pulses. The laser pulses and electron pulses are used to excite the sample and to probe the structural dynamics, respectively. The system has been extended with the capability to perform transmission UED on thin solid samples, allowing samples to be cooled to cryogenic temperatures and time-resolved measurements to be carried out. We tested the cooling capability by recording diffraction patterns of temperature-dependent charge density waves in 1T-TaS2. The time-resolved capability is experimentally verified by capturing the dynamics in photoexcited single-crystal gold.

14.
Vision (Basel) ; 6(3)2022 Jul 20.
Article in English | MEDLINE | ID: mdl-35893762

ABSTRACT

The aim of the study was to assess various retinal vessel parameters of diabetes mellitus (DM) patients and their correlations with systemic factors in type 2 DM. This was a retrospective exploratory study in which 21 pairs of baseline and follow-up images of patients affected by DM were randomly chosen from the Sankara Nethralaya-Diabetic Retinopathy Study (SN DREAMS) I and II datasets. Patients' fundi were photographed, and the diagnosis was made based on the Klein classification. Vessel thickness parameters were generated using a web-based retinal vascular analysis platform called VASP. The thickness changes between the baseline and follow-up images were computed and normalized with the actual thicknesses of the baseline images. The majority of parameters showed 10-20% changes over time. Vessel width in zone C for the second vein was significantly reduced from baseline to follow-up, which showed positive correlations with systolic blood pressure and serum high-density lipoproteins. Fractal dimension for all vessels in zones B and C and fractal dimension for veins in zones A, B and C showed a minimal increase from baseline to follow-up, which had a linear relationship with diastolic pressure, mean arterial pressure and serum triglycerides (p < 0.05). Lacunarity for all vessels and veins in zones A, B and C showed a minimal decrease from baseline to follow-up, which had a negative correlation with pulse pressure and a positive correlation with serum triglycerides (p < 0.05). The vessel widths for the first and second arteries significantly increased from baseline to follow-up and were associated with high-density lipoproteins, glycated haemoglobin A1C, serum low-density lipoproteins and total serum cholesterol. The central reflex intensity ratio for the second artery significantly decreased from baseline to follow-up, and positive correlations were noted with serum triglycerides, serum low-density lipoproteins and total serum cholesterol. The branching coefficients for the artery in zones B and C and the junctional exponent deviation for the artery in zone A decreased from baseline to follow-up and showed positive correlations with serum triglycerides, serum low-density lipoproteins and total serum cholesterol. Identifying early microvascular changes in diabetic patients will allow for earlier intervention, improving visual outcomes and preventing vision loss.
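The normalisation of thickness changes against baseline values described above amounts to a simple relative-change computation; a minimal sketch (expressing the change as a percentage is an assumption about the paper's convention):

```python
import numpy as np

def normalised_change(baseline, followup):
    """Change between follow-up and baseline measurements, normalised by
    the baseline value and expressed as a percentage (positive = increase)."""
    baseline = np.asarray(baseline, dtype=float)
    followup = np.asarray(followup, dtype=float)
    return 100.0 * (followup - baseline) / baseline
```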

15.
Article in English | MEDLINE | ID: mdl-35534406

ABSTRACT

OBJECTIVE: This study aimed to evaluate a deep learning (DL) system using convolutional neural networks (CNNs) for automatic detection of caries on bitewing radiographs. STUDY DESIGN: In total, 2468 bitewings were labeled by 3 dentists to create the reference standard. Of these images, 1257 had caries and 1211 were sound. The Faster region-based CNN was applied to detect the regions of interest (ROIs) with potential lesions. A total of 13,246 ROIs were generated from all 'sound' images, and 50% of 'caries' images (selected randomly) were used to train the ROI detection module. The remaining 50% of 'caries' images were used to validate the ROI detection module. Caries detection was then performed using Inception-ResNet-v2. A set of 3297 'caries' and 5321 'sound' ROIs cropped from the 2468 images was used to train and validate the caries detection module. Data sets were randomly divided into training (90%) and validation (10%) data sets. Recall, precision, specificity, accuracy, and F1 score were used as metrics to assess performance. RESULTS: The caries detection module achieved recall, precision, specificity, accuracy, and F1 scores of 0.89, 0.86, 0.86, 0.87, and 0.87, respectively. CONCLUSIONS: The proposed DL system demonstrated promising performance for detecting proximal surface caries on bitewings.
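The five reported metrics follow their standard definitions over a 2x2 confusion matrix; a minimal sketch:

```python
def detection_metrics(tp, fp, tn, fn):
    """Recall, precision, specificity, accuracy and F1 from the counts of
    true/false positives and negatives, as used to score the
    caries-detection module."""
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return recall, precision, specificity, accuracy, f1
```

Note that F1 is the harmonic mean of precision and recall, so it only reaches a given level when both components are reasonably balanced, as in the 0.89/0.86 pair reported above.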


Subjects
Deep Learning, Dental Caries, Dental Caries/diagnostic imaging, Humans
16.
Struct Dyn ; 9(5): 054303, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36267802

ABSTRACT

Ultrafast electron diffraction (UED) from aligned molecules in the gas phase has successfully retrieved structures of both linear and symmetric top molecules. Alignment of asymmetric tops has been recorded with UED but no structural information was retrieved. We present here the extraction of two-dimensional structural information from simple transformations of experimental diffraction patterns of aligned molecules as a proof-of-principle for the recovery of the full structure. We align 4-fluorobenzotrifluoride with a linearly polarized laser and show that we can distinguish between atomic pairs with equal distances that are parallel and perpendicular to the aligned axis. We additionally show with numerical simulations that by cooling the molecules to a rotational temperature of 1 K, more distances and angles can be resolved through direct transformations.

17.
Dentomaxillofac Radiol ; 51(2): 20210296, 2022 Feb 01.
Article in English | MEDLINE | ID: mdl-34644152

ABSTRACT

OBJECTIVE: This study aimed to evaluate an automated detection system to detect and classify permanent teeth on orthopantomogram (OPG) images using convolutional neural networks (CNNs). METHODS: In total, 591 digital OPGs were collected from patients older than 18 years. Three qualified dentists performed individual tooth labelling on the images to generate the ground truth annotations. A three-step procedure, relying upon CNNs, was proposed for automated detection and classification of teeth. Firstly, U-Net, a type of CNN, performed preliminary segmentation of tooth regions, i.e. detection of regions of interest (ROIs) on the panoramic images. Secondly, Faster R-CNN, an advanced object detection architecture, identified each tooth within the ROI determined by the U-Net. Thirdly, a VGG-16 architecture classified each tooth into 32 categories, and a tooth number was assigned. A total of 17,135 teeth cropped from the 591 radiographs were used to train and validate the tooth detection and tooth numbering modules. 90% of the OPG images were used for training, and the remaining 10% were used for validation. 10-fold cross-validation was performed to measure performance. The intersection over union (IoU), F1 score, precision, and recall (i.e. sensitivity) were used as metrics to evaluate the performance of the resultant CNNs. RESULTS: The ROI detection module had an IoU of 0.70. The tooth detection module achieved a recall of 0.99 and a precision of 0.99. The tooth numbering module had a recall, precision and F1 score of 0.98. CONCLUSION: The resultant automated method achieved high performance for automated tooth detection and numbering from OPG images. Deep learning can be helpful in the automatic filing of dental charts in general dentistry and forensic medicine.
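The IoU metric used to score the ROI-detection module follows the standard definition for axis-aligned boxes; a minimal sketch assuming an (x1, y1, x2, y2) corner convention:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as
    (x1, y1, x2, y2) corner coordinates."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # 0 if boxes are disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

An IoU of 0.70, as reported for the ROI module, means the predicted and ground-truth regions share 70% of their combined area.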


Subjects
Deep Learning, Tooth, Humans, Neural Networks, Computer, Radiography, Radiography, Panoramic, Tooth/diagnostic imaging
18.
Sci Rep ; 11(1): 9704, 2021 05 06.
Article in English | MEDLINE | ID: mdl-33958686

ABSTRACT

Diabetic retinopathy (DR) is a leading cause of blindness and affects millions of people throughout the world. Early detection and timely checkups are key to reducing the risk of blindness. Automated grading of DR is a cost-effective way to ensure early detection and timely checkups. Deep learning, or more specifically convolutional neural network (CNN)-based methods, produce state-of-the-art performance in DR detection. Whilst CNN-based methods have been proposed, no comparisons have been made between the image features they extract and those features' clinical relevance. Here we first adopt a CNN visualization strategy to discover the inherent image features involved in the CNN's decision-making process. Then, we critically analyze those features with respect to commonly known pathologies, namely microaneurysms, hemorrhages and exudates, and other ocular components. We also critically analyze different CNNs by considering what image features they pick up during learning to predict and justify their clinical relevance. The experiments are executed on publicly available fundus datasets (EyePACS and DIARETDB1), achieving an accuracy of 89-95% with AUC, sensitivity and specificity of 95-98%, 74-86%, and 93-97%, respectively, for disease-level grading of DR. Whilst different CNNs produce consistent classification results, the disagreement between models in the image features they pick up could be as high as 70%.


Subjects
Diabetic Retinopathy/diagnostic imaging, Neural Networks, Computer, Algorithms, Datasets as Topic, Deep Learning, Diabetic Retinopathy/physiopathology, Humans, Sensitivity and Specificity
19.
Front Neurol ; 12: 637000, 2021.
Article in English | MEDLINE | ID: mdl-33833728

ABSTRACT

Background: Patient and public involvement (PPI) is an active partnership between the public and researchers in the research process. In dementia research, PPI ensures that the perspectives of the person with "lived experience" of dementia are considered. To date, in many lower- and middle-income countries (LMIC), where dementia research is still developing, PPI is not well-known nor regularly undertaken. Thus, here, we describe PPI activities undertaken in seven research sites across South Asia as exemplars of introducing PPI into dementia research for the first time. Objective: Through a range of PPI exemplar activities, our objectives were to: (1) inform the feasibility of a dementia-related study; and (2) develop capacity and capability for PPI for dementia research in South Asia. Methods: Our approach had two parts. Part 1 involved co-developing new PPI groups at seven clinical research sites in India, Pakistan and Bangladesh to undertake different PPI activities, mapping onto different "rings" of the Wellcome Trust's "Public Engagement Onion" model. The PPI activities included planning for public engagement events, consultation on the study protocol and conduct, the adaptation of a study screening checklist, development and delivery of dementia training for professionals, and a dementia training programme for public contributors. Part 2 involved an online survey with local researchers to gain insight into their experience of applying PPI in dementia research. Results: Overall, capacity and capability to include PPI in dementia research were significantly enhanced across the sites. Researchers reported that engaging in PPI activities had enhanced their understanding of dementia research and increased the meaningfulness of the work. Moreover, each site reported its own PPI activity-related outcomes, including: (1) changes in attitudes and behaviour towards dementia and research involvement; (2) best methods to inform participants about the dementia study; (3) increased opportunities to share knowledge and study outcomes; and (4) adaptations to the study protocol through co-production. Conclusions: Introducing PPI for dementia research in LMIC settings, using a range of activity types, is important for meaningful and impactful dementia research. To our knowledge, this is the first example of PPI for dementia research in South Asia.

20.
Appl AI Lett ; 1(1)2020 Oct.
Article in English | MEDLINE | ID: mdl-36478669

ABSTRACT

To develop a convolutional neural network visualization strategy so that optical coherence tomography (OCT) features contributing to the evolution of age-related macular degeneration (AMD) can be better determined. We have trained a U-Net model to utilize baseline OCT to predict the progression of geographic atrophy (GA), a late stage manifestation of AMD. We have augmented the U-Net architecture by attaching deconvolutional neural networks (deconvnets). Deconvnets produce the reconstructed feature maps and provide an indication regarding the inherent baseline OCT features contributing to GA progression. Experiments were conducted on longitudinal spectral domain (SD)-OCT and fundus autofluorescence images collected from 70 eyes with GA. The intensity of Bruch's membrane-outer choroid (BMChoroid) retinal junction exhibited a relative importance of 24%, in the GA progression. The intensity of the inner retinal pigment epithelium (RPE) and BM junction (InRPEBM) showed a relative importance of 22%. BMChoroid (where the AMD feature/damage of choriocapillaris was included) followed by InRPEBM (where the AMD feature/damage of RPE was included) are the layers which appear to be most relevant in predicting the progression of AMD.
