Results 1 - 20 of 51
1.
PLoS Genet ; 20(5): e1011273, 2024 May.
Article in English | MEDLINE | ID: mdl-38728357

ABSTRACT

Existing imaging genetics studies have been mostly limited in scope by using imaging-derived phenotypes defined by human experts. Here, leveraging new breakthroughs in self-supervised deep representation learning, we propose a new approach, image-based genome-wide association study (iGWAS), for identifying genetic factors associated with phenotypes discovered from medical images using contrastive learning. Using retinal fundus photos, our model extracts a 128-dimensional vector representing features of the retina as phenotypes. After training the model on 40,000 images from the EyePACS dataset, we generated phenotypes from 130,329 images of 65,629 British White participants in the UK Biobank. We conducted GWAS on these phenotypes and identified 14 loci with genome-wide significance (p<5×10-8 and intersection of hits from left and right eyes). We also performed a GWAS on retina color, defined as the average color of the central region of the retinal fundus photos. The GWAS of retina color identified 34 loci, 7 of which overlap with loci from the GWAS of the raw image phenotypes. Our results establish the feasibility of this new framework of genomic study based on self-supervised phenotyping of medical images.
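
To make the contrastive phenotyping step concrete, the following is a minimal sketch, assuming a PyTorch ResNet backbone and an NT-Xent-style objective (neither is specified in the abstract), of how a 128-dimensional self-supervised retinal embedding could be learned from two augmented views of the same fundus photo:

import torch
import torch.nn.functional as F
from torchvision.models import resnet18

class FundusEncoder(torch.nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = torch.nn.Linear(backbone.fc.in_features, dim)
        self.backbone = backbone

    def forward(self, x):
        return F.normalize(self.backbone(x), dim=1)  # unit-norm 128-d phenotype vector

def nt_xent(z1, z2, temperature=0.1):
    # Contrastive loss over two augmented views of the same batch of images.
    z = torch.cat([z1, z2], dim=0)                 # (2N, 128)
    sim = z @ z.t() / temperature                  # pairwise similarities
    n = z1.size(0)
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool), float("-inf"))  # drop self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])    # positive indices
    return F.cross_entropy(sim, targets)

encoder = FundusEncoder()
view1, view2 = torch.randn(8, 3, 224, 224), torch.randn(8, 3, 224, 224)  # toy augmentations
loss = nt_xent(encoder(view1), encoder(view2))

After training, the encoder's 128-dimensional outputs would serve as the phenotypes passed to the GWAS.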


Subjects
Fundus Oculi; Genome-Wide Association Study; Phenotype; Retina; Humans; Genome-Wide Association Study/methods; Retina/diagnostic imaging; Male; Polymorphism, Single Nucleotide; Female; Image Processing, Computer-Assisted/methods
2.
Commun Biol ; 7(1): 414, 2024 Apr 05.
Article in English | MEDLINE | ID: mdl-38580839

ABSTRACT

Understanding the genetic architecture of brain structure is challenging, partly due to difficulties in designing robust, non-biased descriptors of brain morphology. Until recently, brain measures for genome-wide association studies (GWAS) consisted of expert-defined or software-derived image-derived phenotypes (IDPs) that are often based on theoretical preconceptions or computed from limited amounts of data. Here, we present an approach to derive brain imaging phenotypes using unsupervised deep representation learning. We train a 3-D convolutional autoencoder model with reconstruction loss on 6130 UK Biobank (UKBB) participants' T1 or T2-FLAIR (T2) brain MRIs to create a 128-dimensional representation known as Unsupervised Deep learning derived Imaging Phenotypes (UDIPs). GWAS of these UDIPs in held-out UKBB subjects (n = 22,880 discovery and n = 12,359/11,265 replication cohorts for T1/T2) identified 9457 significant SNPs organized into 97 independent genetic loci, of which 60 loci were replicated. Twenty-six loci were not reported in earlier T1 and T2 IDP-based UK Biobank GWAS. We developed a perturbation-based decoder interpretation approach to show that these loci are associated with UDIPs mapped to multiple relevant brain regions. Our results establish that unsupervised deep learning can derive robust, unbiased, heritable, and interpretable brain imaging phenotypes.
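
As an illustration of the phenotyping step, here is a minimal sketch of a 3-D convolutional autoencoder with a 128-dimensional bottleneck trained with a reconstruction loss; the layer widths and the 64-voxel toy input size are assumptions, not the published architecture:

import torch
import torch.nn as nn

class Autoencoder3D(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv3d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(64 * 8 * 8 * 8, latent_dim),                 # 128-d imaging phenotype
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 8 * 8 * 8),
            nn.Unflatten(1, (64, 8, 8, 8)),
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = Autoencoder3D()
mri = torch.randn(2, 1, 64, 64, 64)            # toy stand-in for a T1/T2 volume
recon, udip = model(mri)
loss = nn.functional.mse_loss(recon, mri)      # reconstruction objective

Each subject's 128-dimensional bottleneck vector (udip) is the kind of representation that would then be tested for association in the GWAS.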


Subjects
Genetic Loci; Genome-Wide Association Study; Humans; Genome-Wide Association Study/methods; Phenotype; Brain/diagnostic imaging; Neuroimaging
3.
J Am Med Inform Assoc ; 31(6): 1239-1246, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38497957

ABSTRACT

OBJECTIVE: Passive monitoring of touchscreen interactions generates keystroke dynamic signals that can be used to detect and track neurological conditions such as Parkinson's disease (PD) and psychomotor impairment with minimal burden on the user. However, this typically requires datasets with clinically confirmed labels collected in standardized environments, which is challenging, especially for a large subject pool. This study validates the efficacy of a self-supervised learning method in reducing the reliance on labels and evaluates its generalizability. MATERIALS AND METHODS: We propose a new type of self-supervised loss combining Barlow Twins loss, which attempts to create similar feature representations with reduced feature redundancy for samples coming from the same subject, and a Dissimilarity loss, which promotes uncorrelated features for samples generated by different subjects. An encoder is first pre-trained using this loss on unlabeled data from an uncontrolled setting, then fine-tuned with clinically validated data. Our experiments test the model's generalizability on 2 independent datasets of controls and subjects with PD. RESULTS: Our approach showed better generalization compared to previous methods, including a feature engineering strategy, a deep learning model pre-trained on Parkinsonian signs, and a traditional supervised model. DISCUSSION: The absence of standardized data acquisition protocols and the limited availability of annotated datasets compromise the generalizability of supervised models. In these contexts, self-supervised models offer the advantage of learning more robust patterns from the data, bypassing the need for ground truth labels. CONCLUSION: This approach has the potential to accelerate the clinical validation of touchscreen typing software for neurodegenerative diseases.
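
The combined objective can be sketched as follows; the weighting between terms and the normalization details are assumptions for illustration, not the authors' exact formulation:

import torch
import torch.nn.functional as F

def barlow_twins_loss(z1, z2, lam=5e-3):
    # z1, z2: (N, D) embeddings of two keystroke samples from the *same* subjects
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    c = z1.t() @ z2 / z1.size(0)                                 # D x D cross-correlation matrix
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()               # pull matched features together
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()  # reduce feature redundancy
    return on_diag + lam * off_diag

def dissimilarity_loss(za, zb):
    # za, zb: (N, D) embeddings of samples from *different* subjects;
    # push their representations toward orthogonality (uncorrelated features)
    za, zb = F.normalize(za, dim=1), F.normalize(zb, dim=1)
    return (za * zb).sum(dim=1).pow(2).mean()

z_same_a, z_same_b = torch.randn(32, 64), torch.randn(32, 64)
z_diff_a, z_diff_b = torch.randn(32, 64), torch.randn(32, 64)
loss = barlow_twins_loss(z_same_a, z_same_b) + dissimilarity_loss(z_diff_a, z_diff_b)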


Assuntos
Doença de Parkinson , Aprendizado de Máquina Supervisionado , Humanos , Doença de Parkinson/diagnóstico , Masculino , Feminino , Idoso , Algoritmos , Pessoa de Meia-Idade
4.
iScience ; 27(3): 109004, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38375230

ABSTRACT

Deep learning-based neuroimaging pipelines for acute stroke typically rely on image registration, which not only increases computation but also introduces a point of failure. In this paper, we propose a general-purpose contrastive self-supervised learning method that converts a convolutional deep neural network designed for registered images to work on a different input domain, i.e., with unregistered images. This is accomplished by using a self-supervised strategy that does not rely on labels, where the original model acts as a teacher and a new network as a student. Large vessel occlusion (LVO) detection experiments using computed tomography angiography (CTA) data from 402 patients show the student model achieving competitive LVO detection performance (area under the receiver operating characteristic curve [AUC] = 0.88 vs. AUC = 0.81) compared to the teacher model, even with unregistered images. The student model trained directly on unregistered images using standard supervised learning achieves an AUC = 0.63, highlighting the proposed method's efficacy in adapting models to different pipelines and domains.
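
A minimal sketch of the teacher-student idea follows: a frozen teacher encodes the registered image, a trainable student encodes the unregistered image from the same study, and the student is trained to match the teacher's embedding without labels. The ResNet backbone and the cosine-alignment objective are illustrative assumptions, not the paper's exact setup:

import torch
import torch.nn.functional as F
from torchvision.models import resnet18

teacher = resnet18(weights=None)   # stand-in for the model trained on registered images
teacher.eval()
for p in teacher.parameters():
    p.requires_grad_(False)

student = resnet18(weights=None)   # will operate on unregistered images
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)

registered = torch.randn(4, 3, 224, 224)     # same studies, two preprocessing pipelines
unregistered = torch.randn(4, 3, 224, 224)

with torch.no_grad():
    t = F.normalize(teacher(registered), dim=1)
s = F.normalize(student(unregistered), dim=1)

loss = (1 - (s * t).sum(dim=1)).mean()       # align embeddings; no labels required
loss.backward()
optimizer.step()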

5.
iScience ; 27(2): 108881, 2024 Feb 16.
Article in English | MEDLINE | ID: mdl-38318348

ABSTRACT

Automated tools to detect large vessel occlusion (LVO) in acute ischemic stroke patients using brain computed tomography angiography (CTA) have been shown to reduce the time to treatment, leading to better clinical outcomes. A single CTA contains a large amount of information, and deep learning models have no obvious way of being conditioned on the areas most relevant for LVO detection, i.e., the vascular structure. In this work, we compare and contrast strategies to make convolutional neural networks focus on the vasculature without discarding context information from the brain parenchyma, and we propose an attention-inspired strategy to encourage this. We use brain CTAs from which we obtain 3D vasculature images. Then, we compare ways of combining the vasculature and the CTA images using a general-purpose network trained to detect LVO. The results show that the proposed strategies improve LVO detection and could potentially help with learning other cerebrovascular-related tasks.
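
One simple reading of an attention-inspired fusion is to let the 3D vessel map act as a soft gate over CTA feature maps while a residual path preserves the parenchyma context; channel counts and the gating form below are illustrative assumptions:

import torch
import torch.nn as nn

class VesselGatedBlock(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.cta_conv = nn.Conv3d(1, channels, 3, padding=1)
        self.vessel_conv = nn.Conv3d(1, channels, 3, padding=1)

    def forward(self, cta, vessels):
        feat = torch.relu(self.cta_conv(cta))            # brain-context features
        gate = torch.sigmoid(self.vessel_conv(vessels))  # vasculature attention in [0, 1]
        return feat * gate + feat                        # emphasize vessels, keep context

block = VesselGatedBlock()
cta = torch.randn(1, 1, 32, 32, 32)
vessel_map = torch.rand(1, 1, 32, 32, 32)
fused = block(cta, vessel_map)   # would feed the downstream LVO classifier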

7.
JAMA Neurol ; 80(11): 1182-1190, 2023 Nov 01.
Article in English | MEDLINE | ID: mdl-37721738

ABSTRACT

Importance: The benefit of endovascular stroke therapy (EVT) in large vessel occlusion (LVO) ischemic stroke is highly time dependent. Process improvements to accelerate in-hospital workflows are critical. Objective: To determine whether automated computed tomography (CT) angiogram interpretation coupled with secure group messaging can improve in-hospital EVT workflows. Design, Setting, and Participants: This cluster randomized stepped-wedge clinical trial took place from January 1, 2021, through February 27, 2022, at 4 comprehensive stroke centers (CSCs) in the greater Houston, Texas, area. All 443 participants with LVO stroke who presented through the emergency department were treated with EVT at the 4 CSCs. Exclusion criteria included patients presenting as transfers from an outside hospital (n = 158), in-hospital stroke (n = 39), and patients treated with EVT through randomization in a large core clinical trial (n = 3). Intervention: Artificial intelligence (AI)-enabled automated LVO detection from CT angiogram coupled with secure messaging was activated at the 4 CSCs in a random-stepped fashion. Once activated, clinicians and radiologists received real-time alerts to their mobile phones notifying them of possible LVO within minutes of CT imaging completion. Main Outcomes and Measures: Primary outcome was the effect of AI-enabled LVO detection on door-to-groin (DTG) time and was measured using a mixed-effects linear regression model, which included a random effect for cluster (CSC) and a fixed effect for exposure status (pre-AI vs post-AI). Secondary outcomes included time from hospital arrival to intravenous tissue plasminogen activator (IV tPA) bolus in eligible patients, time from initiation of CT scan to start of EVT, and hospital length of stay. In exploratory analysis, the study team evaluated the impact of AI implementation on 90-day modified Rankin Scale disability outcomes. Results: Among 243 patients who met inclusion criteria, 140 were treated during the unexposed period and 103 during the exposed period. Median age for the complete cohort was 70 (IQR, 58-79) years and 122 were female (50%). Median National Institutes of Health Stroke Scale score at presentation was 17 (IQR, 11-22) and the median DTG preexposure was 100 (IQR, 81-116) minutes. In mixed-effects linear regression, implementation of the AI algorithm was associated with a reduction in DTG time by 11.2 minutes (95% CI, -18.22 to -4.2). Time from CT scan initiation to EVT start fell by 9.8 minutes (95% CI, -16.9 to -2.6). There were no differences in IV tPA treatment times nor hospital length of stay. In multivariable logistic regression adjusted for age, National Institutes of Health Stroke scale score, and the Alberta Stroke Program Early CT Score, there was no difference in likelihood of functional independence (modified Rankin Scale score, 0-2; odds ratio, 1.3; 95% CI, 0.42-4.0). Conclusions and Relevance: Automated LVO detection coupled with secure mobile phone application-based communication improved in-hospital acute ischemic stroke workflows. Software implementation was associated with clinically meaningful reductions in EVT treatment times. Trial Registration: ClinicalTrials.gov Identifier: NCT05838456.
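
For readers unfamiliar with the primary analysis, a toy sketch of a mixed-effects linear model with a random intercept per stroke center and a fixed effect for AI exposure is shown below; the column names and data are hypothetical placeholders, not trial data:

import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "dtg_minutes": [95, 110, 88, 102, 84, 79, 120, 90],          # door-to-groin times
    "post_ai":     [0, 0, 0, 0, 1, 1, 1, 1],                      # exposure status
    "csc":         ["A", "B", "C", "D", "A", "B", "C", "D"],      # cluster (stroke center)
})

model = smf.mixedlm("dtg_minutes ~ post_ai", df, groups=df["csc"])
result = model.fit()
print(result.params["post_ai"])   # estimated change in DTG time after AI activation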


Assuntos
Arteriopatias Oclusivas , Isquemia Encefálica , Procedimentos Endovasculares , AVC Isquêmico , Acidente Vascular Cerebral , Humanos , Feminino , Pessoa de Meia-Idade , Idoso , Masculino , Ativador de Plasminogênio Tecidual/uso terapêutico , Isquemia Encefálica/diagnóstico por imagem , Isquemia Encefálica/cirurgia , Inteligência Artificial , AVC Isquêmico/diagnóstico por imagem , AVC Isquêmico/cirurgia , Procedimentos Endovasculares/métodos , Acidente Vascular Cerebral/diagnóstico por imagem , Acidente Vascular Cerebral/cirurgia , Trombectomia/métodos , Arteriopatias Oclusivas/tratamento farmacológico , Software , Resultado do Tratamento
8.
Sci Rep ; 13(1): 15325, 2023 09 15.
Article in English | MEDLINE | ID: mdl-37714881

ABSTRACT

Vessel segmentation in fundus images permits understanding retinal diseases and computing image-based biomarkers. However, manual vessel segmentation is a time-consuming process. Optical coherence tomography angiography (OCT-A) allows direct, non-invasive estimation of retinal vessels. Unfortunately, compared to fundus cameras, OCT-A cameras are more expensive, less portable, and have a reduced field of view. We present an automated strategy relying on generative adversarial networks to create vascular maps from fundus images without training on manual vessel segmentation maps. The post-processing used for standard en face OCT-A then yields a vessel segmentation map. We compare our approach to state-of-the-art vessel segmentation algorithms trained on manual vessel segmentation maps and on vessel segmentations derived from OCT-A. We evaluate them from an automatic vascular segmentation perspective and as vessel density estimators, i.e., the most common OCT-A imaging biomarker used in studies. Using OCT-A as a training target instead of manual vessel delineations yields improved vascular maps in the optic disc area and is comparable to the best-performing vessel segmentation algorithm in the macular region. This technique could reduce the cost and effort incurred when training vessel segmentation algorithms. To incentivize research in this field, we will make the dataset publicly available to the scientific community.


Subjects
Optic Disk; Tomography, Optical Coherence; Angiography; Fundus Oculi; Retinal Vessels/diagnostic imaging
9.
AMIA Jt Summits Transl Sci Proc ; 2023: 300-309, 2023.
Article in English | MEDLINE | ID: mdl-37350885

ABSTRACT

Learning about diagnostic features and related clinical information from dental radiographs is important for dental research. However, the lack of expert-annotated data and convenient search tools poses challenges. Our primary objective is to design a search tool that uses a user's query for oral health-related research. The proposed framework, Contrastive LAnguage Image REtrieval Search for dental research (Dental CLAIRES), utilizes periapical radiographs and associated clinical details, such as periodontal diagnosis and demographic information, to retrieve the best-matched images based on the text query. We applied a contrastive representation learning method to find images described by the user's text by maximizing the similarity score of positive pairs (true pairs) and minimizing the score of negative pairs (random pairs). Our model achieved a hit@3 ratio of 96% and a Mean Reciprocal Rank (MRR) of 0.82. We also designed a graphical user interface that allows researchers to verify the model's performance interactively.
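
A minimal sketch of a CLIP-style contrastive retrieval objective and of the Mean Reciprocal Rank metric is given below; the encoders are omitted and only the loss and metric structure are illustrated, since the abstract does not specify the implementation:

import torch
import torch.nn.functional as F

def contrastive_retrieval_loss(image_emb, text_emb, temperature=0.07):
    image_emb = F.normalize(image_emb, dim=1)
    text_emb = F.normalize(text_emb, dim=1)
    logits = image_emb @ text_emb.t() / temperature   # (N, N); diagonal = true pairs
    targets = torch.arange(image_emb.size(0))
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

def mean_reciprocal_rank(similarity):
    # rank of the true (diagonal) match for each query row
    order = similarity.argsort(dim=1, descending=True)
    ranks = (order == torch.arange(similarity.size(0)).unsqueeze(1)).float().argmax(dim=1) + 1
    return (1.0 / ranks).mean()

img, txt = torch.randn(16, 256), torch.randn(16, 256)
loss = contrastive_retrieval_loss(img, txt)
mrr = mean_reciprocal_rank(F.normalize(img, dim=1) @ F.normalize(txt, dim=1).t())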

10.
Neuroimage Clin ; 37: 103362, 2023.
Article in English | MEDLINE | ID: mdl-36893661

ABSTRACT

Acute ischemic stroke is a leading cause of death and disability in the world. Treatment decisions, especially around emergent revascularization procedures, rely heavily on the size and location of the infarct core. Currently, accurate assessment of this measure is challenging. While MRI-DWI is considered the gold standard, its availability is limited for most patients suffering from stroke. Another well-studied imaging modality is CT perfusion (CTP), which is much more common than MRI-DWI in acute stroke care but not as precise, and it is still unavailable in many stroke hospitals. A method to determine the infarct core using CT angiography (CTA), a much more available imaging modality albeit with significantly less contrast in the stroke core area than CTP or MRI-DWI, would enable significantly better treatment decisions for stroke patients throughout the world. Existing deep-learning-based approaches for stroke core estimation face a trade-off between voxel-level segmentation and image-level labels, as well as the difficulty of obtaining large enough samples of high-quality DWI images. The former arises because algorithms can either output voxel-level labeling, which is more informative but requires a significant effort by annotators, or image-level labels, which allow much simpler labeling but result in less informative and less interpretable output; the latter is a common issue that forces training either on small training sets using DWI as the target or on larger, but noisier, datasets using CT perfusion (CTP) as the target. In this work, we present a deep learning approach, including a new weighted gradient-based method, to obtain stroke core segmentation with image-level labeling, specifically the size of the acute stroke core volume. Additionally, this strategy allows us to train using labels derived from CTP estimations. We find that the proposed approach outperforms segmentation approaches trained on voxel-level data as well as the CTP estimations themselves.
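
One way to read a "weighted gradient-based" approach trained with image-level volume labels is a Grad-CAM-style map: weight the network's feature maps by the gradients of the predicted core volume and threshold the result. The tiny network and threshold below are toy stand-ins, not the paper's implementation:

import torch
import torch.nn as nn

conv = nn.Conv3d(1, 8, 3, padding=1)
head = nn.Linear(8, 1)

ct = torch.randn(1, 1, 32, 32, 32, requires_grad=True)       # toy CT-based volume
features = torch.relu(conv(ct))                               # (1, 8, 32, 32, 32)
volume_pred = head(features.mean(dim=(2, 3, 4)))              # image-level target: core volume

grads = torch.autograd.grad(volume_pred.sum(), features, retain_graph=True)[0]
weights = grads.mean(dim=(2, 3, 4), keepdim=True)             # channel importance weights
saliency = torch.relu((weights * features).sum(dim=1))        # voxel-level core saliency
core_mask = saliency > saliency.mean()                        # crude segmentation threshold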


Subjects
Brain Ischemia; Ischemic Stroke; Stroke; Humans; Tomography, X-Ray Computed/methods; Stroke/diagnostic imaging; Infarction; Angiography
11.
IEEE Trans Biomed Eng ; 70(1): 182-192, 2023 01.
Article in English | MEDLINE | ID: mdl-35767495

ABSTRACT

Parkinson's disease (PD) is the second most prevalent neurodegenerative disorder in the world. A prompt diagnosis would enable clinical trials for disease-modifying neuroprotective therapies. Recent research efforts have unveiled imaging and blood markers that have the potential to identify PD patients promptly; however, the idiopathic nature of PD makes these tests very hard to scale to the general population. To this end, we need an easily deployable tool to screen for PD signs in the general population. In this work, we propose a new set of features based on keystroke dynamics, i.e., the times required to press and release keyboard keys during typing, and use them to detect PD in an ecologically valid data acquisition setup at the subjects' homes, without requiring any pre-defined task. We compare and contrast existing models presented in the literature and present a new model that combines a new keystroke dynamics signal representation, using hold time and flight time series as a function of key types and their asymmetry, with a convolutional neural network. We show that this model achieves an area under the receiver operating characteristic curve ranging from 0.80 to 0.83 on a dataset of subjects who actively interacted with their computers for at least 5 months, and that it compares favorably against other state-of-the-art approaches previously tested on keystroke dynamics data acquired with mechanical keyboards.
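
The two keystroke-dynamics signals named above can be computed directly from timestamped key events; the (key, press, release) event format below is a hypothetical simplification:

def keystroke_series(events):
    # events: list of (key, press_time, release_time), ordered by press_time
    hold = [release - press for _, press, release in events]                    # hold times
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]  # flight times
    return hold, flight

events = [("h", 0.00, 0.09), ("e", 0.15, 0.22), ("y", 0.31, 0.40)]
hold_times, flight_times = keystroke_series(events)
print([round(t, 3) for t in hold_times])    # [0.09, 0.07, 0.09]
print([round(t, 3) for t in flight_times])  # [0.06, 0.09]

Series like these, grouped by key type, are the kind of input the convolutional model described above would consume.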


Subjects
Neurodegenerative Diseases; Parkinson Disease; Humans; Parkinson Disease/diagnosis; Benchmarking; Computers; Neural Networks, Computer
12.
J Neurointerv Surg ; 15(2): 195-199, 2023 Feb.
Article in English | MEDLINE | ID: mdl-35613840

ABSTRACT

BACKGROUND: In recent years, machine learning (ML) has had notable success in providing automated analyses of neuroimaging studies, and its role is likely to increase in the future. Thus, it is paramount for clinicians to understand these approaches, gain facility with interpreting ML results, and learn how to assess algorithm performance. OBJECTIVE: To provide an overview of ML, present its role in acute stroke imaging, discuss methods to evaluate algorithms, and then provide an assessment of existing approaches. METHODS: In this review, we give an overview of ML techniques commonly used in medical imaging analysis and methods to evaluate performance. We then review the literature for relevant publications. Searches were run in November 2021 in Ovid Medline and PubMed. Inclusion criteria included studies in English reporting use of artificial intelligence (AI), machine learning, or similar techniques in the setting of, and in applications for, acute ischemic stroke or mechanical thrombectomy. Articles that included image-level data with meaningful results and sound ML approaches were included in this discussion. RESULTS: Many publications on acute stroke imaging, including detection of large vessel occlusion, detection and quantification of intracranial hemorrhage, and detection of infarct core, have been published using ML methods. Imaging inputs have included non-contrast head CT, CT angiography, and MRI, with a range of performances. We discuss and review several of the most relevant publications. CONCLUSIONS: ML in acute ischemic stroke imaging has already made tremendous headway. Additional applications and further integration with clinical care are inevitable. Thus, facility with these approaches is critical for the neurointerventional clinician.


Subjects
Ischemic Stroke; Stroke; Humans; Artificial Intelligence; Stroke/diagnostic imaging; Stroke/therapy; Machine Learning; Magnetic Resonance Imaging
13.
J Clin Med ; 11(24)2022 Dec 14.
Article in English | MEDLINE | ID: mdl-36556024

ABSTRACT

Acute cerebral stroke is a leading cause of disability and death, which could be reduced with a prompt diagnosis during patient transportation to the hospital. A portable retina imaging system could enable this by measuring vascular information and blood perfusion in the retina and, due to the homology between retinal and cerebral vessels, inferring whether a cerebral stroke is underway. However, the feasibility of this strategy, and which imaging features and retinal imaging modalities to use, remain unclear. In this work, we show initial evidence of the feasibility of this approach by training machine learning models on feature-engineered and self-supervised retina features extracted from OCT-A and fundus images to classify controls and acute stroke patients. Models based on macular microvasculature density features achieved an area under the receiver operating characteristic curve (AUC) of 0.87-0.88. Self-supervised deep learning models were able to generate features resulting in AUCs ranging from 0.66 to 0.81. While further work is needed for the final proof of a diagnostic system, these results indicate that microvasculature density features from OCT-A images have the potential to be used to diagnose acute cerebral stroke from the retina.

14.
Brain Commun ; 4(4): fcac194, 2022.
Article in English | MEDLINE | ID: mdl-35950091

ABSTRACT

Measuring cognitive function is essential for characterizing brain health and tracking cognitive decline in Alzheimer's disease and other neurodegenerative conditions. Current tools to accurately evaluate cognitive impairment typically rely on a battery of questionnaires administered during clinical visits, which is essential for the acquisition of repeated measurements in longitudinal studies. Previous studies have shown that the remote data collection of passively monitored daily interaction with personal digital devices can measure motor signs in the early stages of synucleinopathies, as well as facilitate longitudinal patient assessment in the real-world scenario with high patient compliance. This was achieved by the automatic discovery of patterns in the time series of keystroke dynamics, i.e., the time required to press and release keys, by machine learning algorithms. In this work, our hypothesis is that the typing patterns generated from user-device interaction may reflect relevant features of the effects of cognitive impairment caused by neurodegeneration. We use machine learning algorithms to estimate cognitive performance through the analysis of keystroke dynamic patterns that were extracted from mechanical and touchscreen keyboard use in a dataset of cognitively normal (n = 39, 51% male) and cognitively impaired subjects (n = 38, 60% male). These algorithms are trained and evaluated using a novel framework that integrates items from multiple neuropsychological and clinical scales into cognitive subdomains to generate a more holistic representation of multifaceted clinical signs. In our results, we see that these models based on typing input achieve moderate correlations with verbal memory, non-verbal memory and executive function subdomains [Spearman's ρ between 0.54 (P < 0.001) and 0.42 (P < 0.001)] and a weak correlation with language/verbal skills [Spearman's ρ 0.30 (P < 0.05)]. In addition, we observe a moderate correlation between our typing-based approach and the Total Montreal Cognitive Assessment score [Spearman's ρ 0.48 (P < 0.001)]. Finally, we show that these machine learning models can perform better by using our subdomain framework that integrates the information from multiple neuropsychological scales as opposed to using the individual items that make up these scales. Our results support our hypothesis that typing patterns are able to reflect the effects of neurodegeneration in mild cognitive impairment and Alzheimer's disease and that this new subdomain framework both helps the development of machine learning models and improves their interpretability.
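
The reported evaluation is a rank correlation between model output and clinical subdomain scores; a toy example of that computation (with made-up values) is:

from scipy.stats import spearmanr

predicted_score = [0.2, 0.5, 0.1, 0.8, 0.7, 0.3, 0.9, 0.4]   # typing-derived estimate
verbal_memory   = [0.3, 0.4, 0.2, 0.9, 0.6, 0.2, 0.8, 0.5]   # subdomain score

rho, p_value = spearmanr(predicted_score, verbal_memory)
print(f"Spearman's rho = {rho:.2f}, p = {p_value:.3f}")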

15.
Neuroimage Clin ; 34: 102998, 2022.
Article in English | MEDLINE | ID: mdl-35378498

ABSTRACT

In stroke care, the extent of irreversible brain injury, termed infarct core, plays a key role in determining eligibility for acute treatments, such as intravenous thrombolysis and endovascular reperfusion therapies. Many of the pivotal randomized clinical trials testing those therapies used MRI Diffusion-Weighted Imaging (DWI) or CT Perfusion (CTP) to define infarct core. Unfortunately, these modalities are not available 24/7 outside of large stroke centers. As such, there is a need for accurate infarct core determination using faster and more widely available imaging modalities, including Non-Contrast CT (NCCT) and CT Angiography (CTA). Prior studies have suggested that CTA provides improved predictions of infarct core relative to NCCT; however, this assertion has never been numerically quantified by automatic medical image computing pipelines using acquisition protocols not confounded by different scanner manufacturers, or other protocol settings such as exposure times, kilovoltage peak, or imprecision due to contrast bolus delays. In addition, single-phase CTA protocols are at present designed to optimize contrast opacification in the arterial phase. This approach works well to maximize the sensitivity to detect vessel occlusions; however, it may not be the ideal timing to enhance the ischemic infarct core signal (ICS). In this work, we propose an image analysis pipeline on CT-based images of 88 acute ischemic stroke (AIS) patients drawn from a single dynamic acquisition protocol acquired at the acute ischemic phase. We use the first scan at the time of the dynamic acquisition as a proxy for NCCT, and the rest of the scans as proxies for CTA scans, with the bolus imaged at different brain enhancement phases. Thus, we use the terms "NCCT" and "CTA" to refer to them. This pipeline enables us to answer the questions "Does the injection of bolus enhance the infarct core signal?" and "What is the ideal bolus timing to enhance the infarct core signal?" without being influenced by the aforementioned factors such as scanner model, acquisition settings, contrast bolus delay, and human reader errors. We use reference MRI DWI images acquired after successful recanalization as our gold standard for infarct core. The ICS is quantified by calculating the difference in intensity distribution between the infarct core region and its symmetrical healthy counterpart on the contralateral hemisphere of the brain using a metric derived from information theory, the Kullback-Leibler (KL) divergence. We compare the ICS provided by NCCT and CTA and retrieve the optimal timing of the CTA bolus to maximize the ICS. In our experiments, we numerically confirm that CTA provides a greater ICS than NCCT. We then find that, on average, the ideal CTA acquisition time to maximize the ICS is not the current target of standard CTA protocols, i.e., the peak of arterial enhancement, but a few seconds afterward (median of 3 s; 95% CI [1.5, 3.0]). While there are other studies comparing the potential for predicting the ischemic infarct core from NCCT and CTA images, to the best of our knowledge, this analysis is the first to perform a quantitative comparison of the ICS among CT-based scans, with and without bolus injection, acquired using the same scanning sequence and with a precise characterization of the bolus uptake, hence reducing potential confounding factors.
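
The ICS metric described above can be sketched as a KL divergence between intensity histograms of the core region and its mirrored contralateral region; the Hounsfield-unit ranges and toy arrays below are assumptions for illustration:

import numpy as np
from scipy.stats import entropy

def infarct_core_signal(core_hu, contralateral_hu, bins=64, hu_range=(0, 80)):
    p, _ = np.histogram(core_hu, bins=bins, range=hu_range, density=True)
    q, _ = np.histogram(contralateral_hu, bins=bins, range=hu_range, density=True)
    p, q = p + 1e-8, q + 1e-8            # avoid empty bins
    return entropy(p, q)                 # KL(core || contralateral); higher = stronger signal

core = np.random.normal(28, 4, 5000)     # hypothetical hypodense core intensities
mirror = np.random.normal(34, 4, 5000)   # hypothetical healthy contralateral intensities
print(infarct_core_signal(core, mirror))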


Subjects
Brain Ischemia; Ischemic Stroke; Stroke; Brain Ischemia/diagnostic imaging; Humans; Infarction; Stroke/diagnosis; Tomography, X-Ray Computed/methods
16.
Sci Rep ; 12(1): 4554, 2022 03 16.
Article in English | MEDLINE | ID: mdl-35296719

ABSTRACT

Providers currently rely on universal screening to identify health-related social needs (HRSNs). Predicting HRSNs using EHR and community-level data could be more efficient and less resource intensive. Using machine learning models, we evaluated the predictive performance of HRSN status from EHR and community-level social determinants of health (SDOH) data for Medicare and Medicaid beneficiaries participating in the Accountable Health Communities Model. We hypothesized that Medicaid insurance coverage would predict HRSN status. All models significantly outperformed the baseline Medicaid hypothesis. AUCs ranged from 0.59 to 0.68. The top performance (AUC = 0.68 CI 0.66-0.70) was achieved by the "any HRSNs" outcome, which is the most useful for screening prioritization. Community-level SDOH features had lower predictive performance than EHR features. Machine learning models can be used to prioritize patients for screening. However, screening only patients identified by our current model(s) would miss many patients. Future studies are warranted to optimize prediction of HRSNs.
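
The prediction setup can be illustrated with a generic classifier over tabular features evaluated by AUC against an "any HRSNs" outcome; the features, model choice, and data below are hypothetical placeholders:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                           # EHR + community-level SDOH features
y = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)     # synthetic "any HRSNs" outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))  # screening-prioritization AUC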


Assuntos
Medicaid , Medicare , Idoso , Humanos , Aprendizado de Máquina , Programas de Rastreamento , Determinantes Sociais da Saúde , Estados Unidos
17.
JMIR Dermatol ; 5(4): e39113, 2022 Dec 12.
Article in English | MEDLINE | ID: mdl-37632881

ABSTRACT

BACKGROUND: Automatic skin lesion recognition has been shown to be effective in increasing access to reliable dermatology evaluation; however, most existing algorithms rely solely on images. Many diagnostic rules, including the 3-point checklist, which encode human knowledge and reflect the diagnostic process of human experts, are not considered by artificial intelligence algorithms. OBJECTIVE: In this paper, we aimed to develop a semisupervised model that can not only integrate the dermoscopic features and scoring rule from the 3-point checklist but also automate the feature-annotation process. METHODS: We first trained the semisupervised model on a small, annotated data set with disease and dermoscopic feature labels and tried to improve the classification accuracy by integrating the 3-point checklist using a ranking loss function. We then used a large data set with only disease labels (no feature annotations) to learn from the trained algorithm and automatically classify skin lesions and features. RESULTS: After adding the 3-point checklist to our model, its performance for melanoma classification improved from a mean of 0.8867 (SD 0.0191) to 0.8943 (SD 0.0115) under 5-fold cross-validation. The trained semisupervised model can automatically detect 3 dermoscopic features from the 3-point checklist, with best performances of 0.80 (area under the curve [AUC] 0.8380), 0.89 (AUC 0.9036), and 0.76 (AUC 0.8444), in some cases outperforming human annotators. CONCLUSIONS: Our proposed semisupervised learning framework can help with the automatic diagnosis of skin disease based on its ability to detect dermoscopic features and automate the label-annotation process. The framework can also help combine semantic knowledge with a computer algorithm to arrive at a more accurate and more interpretable diagnostic result, which can be applied to broader use cases.
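
One way to integrate the checklist through a ranking loss is to require lesions with higher 3-point-checklist scores to receive higher melanoma logits than lesions with lower scores; the pairing scheme and margin below are illustrative assumptions, not the authors' exact loss:

import torch
import torch.nn as nn

ranking_loss = nn.MarginRankingLoss(margin=0.5)

melanoma_logits = torch.tensor([1.2, -0.3, 0.8, 0.1], requires_grad=True)
checklist_score = torch.tensor([3, 0, 2, 1])          # 0-3 checklist points per lesion

# build all lesion pairs with different checklist scores
i, j = torch.combinations(torch.arange(4), r=2).unbind(1)
keep = checklist_score[i] != checklist_score[j]
i, j = i[keep], j[keep]
target = torch.where(checklist_score[i] > checklist_score[j],
                     torch.tensor(1.0), torch.tensor(-1.0))

loss = ranking_loss(melanoma_logits[i], melanoma_logits[j], target)
loss.backward()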

18.
Stroke ; 53(5): 1651-1656, 2022 05.
Article in English | MEDLINE | ID: mdl-34865511

ABSTRACT

BACKGROUND: Prehospital automated large vessel occlusion (LVO) detection in Mobile Stroke Units (MSUs) could accelerate identification and treatment of patients with LVO acute ischemic stroke. Here, we evaluate the performance of a machine learning (ML) model on CT angiograms (CTAs) obtained from 2 MSUs to detect LVO. METHODS: Patients evaluated on MSUs in Houston and Los Angeles with out-of-hospital CTAs were identified. Anterior circulation LVO was defined as an occlusion of the intracranial internal carotid artery, middle cerebral artery (M1 or M2), or anterior cerebral artery vessels and determined by an expert human reader. A ML model to detect LVO was trained and tested on independent data sets consisting of in-hospital CTAs and then tested on MSU CTA images. Model performance was determined using area under the receiver-operator curve statistics. RESULTS: Among 68 patients with out-of-hospital MSU CTAs, 40% had an LVO. The most common occlusion location was the middle cerebral artery M1 segment (59%), followed by the internal carotid artery (30%), and middle cerebral artery M2 (11%). Median time from last known well to CTA imaging was 88.0 (interquartile range, 59.5-196.0) minutes. After training on 870 in-hospital CTAs, the ML model performed well in identifying LVO in a separate in-hospital data set of 441 images with area under receiver-operator curve of 0.84 (95% CI, 0.80-0.87). ML algorithm analysis time was under 1 minute. The performance of the ML model on the MSU CTA images was comparable with area under receiver-operator curve 0.80 (95% CI, 0.71-0.89). There was no significant difference in performance between the Houston and Los Angeles MSU CTA cohorts. CONCLUSIONS: In this study of patients evaluated on MSUs in 2 cities, a ML algorithm was able to accurately and rapidly detect LVO using prehospital CTA acquisitions.


Subjects
Ischemic Stroke; Stroke; Angiography; Computed Tomography Angiography/methods; Humans; Machine Learning; Stroke/diagnostic imaging; Tomography, X-Ray Computed
19.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 3873-3876, 2021 11.
Article in English | MEDLINE | ID: mdl-34892078

ABSTRACT

Fundus retinal imaging is an easy-to-acquire modality typically used for monitoring eye health. Current evidence indicates that the retina, and its vasculature in particular, is associated with other disease processes, making it an ideal candidate for biomarker discovery. The development of these biomarkers has typically relied on predefined measurements, which makes the development process slow. Recently, representation learning algorithms such as general-purpose convolutional neural networks or vasculature embeddings have been proposed as an approach to learn imaging biomarkers directly from the data, hence greatly speeding up their discovery. In this work, we compare and contrast different state-of-the-art retina biomarker discovery methods to identify signs of past stroke in the retinas of a curated patient cohort of 2,472 subjects from the UK Biobank dataset. We investigate two convolutional neural networks previously used in retina biomarker discovery and directly trained on the stroke outcome, and an extension of the vasculature embedding approach, which infers its feature representation from the vasculature and combines the information of retinal images from both eyes. In our experiments, we show that the pipeline based on vasculature embeddings has comparable or better performance than other methods, with a much more compact feature representation and ease of training. Clinical relevance: This study compares and contrasts three retinal biomarker discovery strategies, using a curated dataset of subject evidence, for the analysis of the retina as a proxy in the assessment of clinical outcomes, such as stroke risk.


Subjects
Neural Networks, Computer; Stroke; Biomarkers; Fundus Oculi; Humans; Retina/diagnostic imaging; Stroke/diagnostic imaging
20.
Med Image Anal ; 74: 102253, 2021 12.
Article in English | MEDLINE | ID: mdl-34614474

ABSTRACT

Glaucoma is an ocular disease that can lead to irreversible vision loss. Primary screening for glaucoma involves computing the optic cup (OC) to optic disc (OD) ratio, a widely accepted metric. Recent deep learning frameworks for OD and OC segmentation have shown promising results and ways to attain remarkable performance. In this paper, we present a novel segmentation network, Nested EfficientNet (NENet), that consists of EfficientNetB4 as an encoder along with a nested network of pre-activated residual blocks, an atrous spatial pyramid pooling (ASPP) block, and attention gates (AGs). A combination of cross-entropy and Dice coefficient (DC) loss is utilized to guide the network toward accurate segmentation. Further, a modified patch-based discriminator is designed for use with NENet to improve the local segmentation details. Three publicly available datasets, REFUGE, Drishti-GS, and RIM-ONE-r3, were utilized to evaluate the performance of the proposed network. In our experiments, NENet outperformed state-of-the-art methods for segmentation of OD and OC. Additionally, we show that NENet has excellent generalizability across camera types and image resolutions. The obtained results suggest that the proposed technique has the potential to be an important component of an automated glaucoma screening system.
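
The combined objective named above (cross-entropy plus Dice coefficient loss) can be sketched as follows for a single-class disc or cup mask; the soft-Dice formulation and equal weighting are common choices stated here as assumptions:

import torch
import torch.nn.functional as F

def dice_loss(logits, target, eps=1e-6):
    probs = torch.sigmoid(logits)
    inter = (probs * target).sum(dim=(1, 2, 3))
    union = probs.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return 1 - ((2 * inter + eps) / (union + eps)).mean()

def segmentation_loss(logits, target, dice_weight=1.0):
    ce = F.binary_cross_entropy_with_logits(logits, target)   # pixel-wise cross-entropy
    return ce + dice_weight * dice_loss(logits, target)       # plus overlap term

logits = torch.randn(2, 1, 128, 128)                          # predicted OD (or OC) mask logits
target = (torch.rand(2, 1, 128, 128) > 0.5).float()
print(segmentation_loss(logits, target))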


Subjects
Glaucoma; Optic Disk; Diagnostic Techniques, Ophthalmological; Fundus Oculi; Glaucoma/diagnostic imaging; Humans; Image Processing, Computer-Assisted; Mass Screening; Optic Disk/diagnostic imaging