1.
Nat Commun ; 14(1): 4936, 2023 08 15.
Article in English | MEDLINE | ID: mdl-37582955

ABSTRACT

Our knowledge of non-linear genetic effects on complex traits remains limited, in part due to the modest power to detect such effects. While kernel-based tests offer a versatile approach to testing for non-linear relationships between sets of genetic variants and traits, current approaches cannot be applied to biobank-scale datasets containing hundreds of thousands of individuals. We propose FastKAST, a kernel-based approach that can test for non-linear effects of a set of variants on a quantitative trait. FastKAST provides calibrated hypothesis tests while enabling analysis of biobank-scale datasets with hundreds of thousands of unrelated individuals from a homogeneous population. We apply FastKAST to 53 quantitative traits measured across ≈300,000 unrelated white British individuals in the UK Biobank to detect sets of variants with non-linear effects at genome-wide significance.
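Methods in this family typically make kernel tests tractable at biobank scale by replacing the exact kernel matrix with an explicit randomized feature map. The sketch below shows the standard random Fourier feature approximation of an RBF kernel (Rahimi and Recht); it illustrates the general idea only, not the FastKAST implementation, and the data, `gamma`, and feature count are arbitrary choices:

```python
import numpy as np

def random_fourier_features(X, n_features=100, gamma=1.0, seed=0):
    """Approximate the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)
    with an explicit low-dimensional feature map, so that
    k(x, y) ~ phi(x) . phi(y)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies drawn from the kernel's spectral density N(0, 2*gamma*I).
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# The exact kernel entries and their randomized approximation should agree:
rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3))
Z = random_fourier_features(X, n_features=5000, gamma=0.5, seed=2)
K_exact = np.exp(-0.5 * np.sum((X[:, None] - X[None]) ** 2, axis=-1))
K_approx = Z @ Z.T
print(np.max(np.abs(K_exact - K_approx)))  # small approximation error
```

Because the feature map is explicit, downstream test statistics can be computed in time linear in the number of individuals rather than quadratic, which is what makes hundreds of thousands of samples feasible.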


Subject(s)
Biological Specimen Banks , Multifactorial Inheritance , Humans , Phenotype , Genome , Genome-Wide Association Study , Models, Genetic , Polymorphism, Single Nucleotide
2.
Sci Adv ; 9(9): eabm3449, 2023 03 03.
Article in English | MEDLINE | ID: mdl-36867695

ABSTRACT

Anticipating food crisis outbreaks is crucial to efficiently allocate emergency relief and reduce human suffering. However, existing predictive models rely on risk measures that are often delayed, outdated, or incomplete. Using the text of 11.2 million news articles focused on food-insecure countries and published between 1980 and 2020, we leverage recent advances in deep learning to extract high-frequency precursors to food crises that are both interpretable and validated by traditional risk indicators. We demonstrate that over the period from July 2009 to July 2020 and across 21 food-insecure countries, news indicators substantially improve district-level predictions of food insecurity up to 12 months ahead relative to baseline models that do not include text information. These results could have profound implications for how humanitarian aid is allocated and open previously unexplored avenues for machine learning to improve decision-making in data-scarce environments.


Subject(s)
Disease Outbreaks , Rivers , Humans , Food , Machine Learning , Risk Factors
3.
NPJ Clim Atmos Sci ; 5(1): 76, 2022.
Article in English | MEDLINE | ID: mdl-36254321

ABSTRACT

The use of air quality monitoring networks to inform urban policies is critical, especially where urban populations are exposed to unprecedented levels of air pollution. High costs, however, limit city governments' ability to deploy reference-grade air quality monitors at scale; for instance, only 33 reference-grade monitors are available for the entire territory of Delhi, India, spanning 1500 sq km with 15 million residents. In this paper, we describe a high-precision spatio-temporal prediction model that can be used to derive fine-grained pollution maps. We utilize two years of data from a monitoring network of 28 custom-designed, low-cost portable air quality sensors covering a dense region of Delhi. The model combines message-passing recurrent neural networks with conventional spatio-temporal geostatistical models to achieve high predictive accuracy in the face of high data variability and intermittent data availability from low-cost sensors (due to sensor faults, network, and power issues). Using data from reference-grade monitors for validation, our spatio-temporal pollution model can make predictions within 1-hour time windows at 9.4, 10.5, and 9.6% Mean Absolute Percentage Error (MAPE) over our low-cost monitors, reference-grade monitors, and the combined monitoring network, respectively. These accurate fine-grained pollution maps provide a way forward to build citizen-driven low-cost monitoring systems that detect hazardous urban air quality at fine granularity.
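The MAPE metric quoted above has a one-line standard definition. The sketch below uses hypothetical hourly readings, not data from the Delhi network:

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, in percent."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# Hypothetical hourly PM2.5 readings vs. model predictions:
observed = [110.0, 95.0, 120.0]
predicted = [100.0, 100.0, 110.0]
print(round(mape(observed, predicted), 2))  # about 7.56 (percent)
```

Note that MAPE is undefined when an observed value is zero, so in practice near-zero readings are usually filtered or floored before evaluation.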

4.
Recent Adv Food Nutr Agric ; 13(1): 27-50, 2022 Nov 14.
Article in English | MEDLINE | ID: mdl-36173075

ABSTRACT

The drug-food interaction brings forth changes in the clinical effects of drugs. While favourable interactions bring positive clinical outcomes, unfavourable interactions may lead to toxicity. This article reviews the impact of food intake on drug-food interactions, the clinical effects of drugs, and drug-food effects in the context of diet and precision medicine. Emerging areas in drug-food interactions are the food-genome interface (nutrigenomics) and nutrigenetics. Understanding the molecular basis of food ingredients, including genomic sequencing and pharmacological implications of food molecules, helps to reduce the impact of drug-food interactions. Various strategies are being leveraged to alleviate drug-food interactions; these include patient engagement, digital health, machine-intelligence approaches, and big data. Furthermore, delineating the molecular communications across diet-microbiome-drug-food-drug interactions in a pharmacomicrobiome framework may also play a vital role in personalized nutrition. Determining nutrient-gene interactions aids in making nutrition deeply personalized and helps mitigate unwanted drug-food interactions, chronic diseases, and adverse events from their onset. Translational bioinformatics approaches could play an essential role in the next generation of drug-food interaction research. In this landscape review, we discuss important tools, databases, and approaches along with key challenges and opportunities in drug-food interaction and its immediate impact on precision medicine.


Subject(s)
Big Data , Food-Drug Interactions , Humans , Nutrigenomics , Diet , Artificial Intelligence
5.
Cells ; 11(11)2022 06 02.
Article in English | MEDLINE | ID: mdl-35681523

ABSTRACT

Organ-on-a-chip (OOAC) is an emerging technology based on microfluidic platforms and in vitro cell culture that has a promising future in the healthcare industry. The numerous advantages of OOAC over conventional systems make it highly popular. The chip is an innovative combination of novel technologies, including lab-on-a-chip, microfluidics, biomaterials, and tissue engineering. This paper begins by analyzing the need for the development of OOAC, followed by a brief introduction to the technology. Later sections discuss and review the various types of OOACs and the fabrication materials used. The implementation of artificial intelligence in the system makes it more advanced, thereby helping to provide a more accurate diagnosis as well as convenient data management. We introduce selected OOAC projects, including applications to organ/disease modelling, pharmacology, personalized medicine, and dentistry. Finally, we point out certain challenges that need to be surmounted in order to further develop and upgrade the current systems.


Subject(s)
Artificial Intelligence , Lab-On-A-Chip Devices , Biocompatible Materials , Microfluidics , Tissue Engineering
6.
Front Artif Intell ; 4: 742723, 2021.
Article in English | MEDLINE | ID: mdl-34957391

ABSTRACT

Objective: Opioids are a class of drugs known for their use as pain relievers. They bind to opioid receptors on nerve cells in the brain and the nervous system to mitigate pain. Addiction is one of the chronic and primary adverse events of prolonged opioid usage; opioids may also cause psychological disorders, muscle pain, depression, and anxiety attacks. In this study, we present a collection of predictive models to identify patients at risk of opioid abuse and mortality by using their prescription histories. We also identify particularly threatening drug-drug interactions in the context of opioid usage. Methods and Materials: Using a publicly available dataset from MIMIC-III, two models were trained, Logistic Regression with L2 regularization (baseline) and Extreme Gradient Boosting (enhanced model), to classify the patients of interest into two categories based on their susceptibility to opioid abuse. We also used K-Means clustering, an unsupervised algorithm, to explore drug-drug interactions that might be of concern. Results: The baseline model for classifying patients susceptible to opioid abuse has an F1 score of 76.64% (accuracy 77.16%), while the enhanced model has an F1 score of 94.45% (accuracy 94.35%). These models can be used as a preliminary step towards inferring the causal effect of opioid usage and can help monitor prescription practices to minimize opioid abuse. Discussion and Conclusion: Results suggest that the enhanced model provides a promising approach to preemptive identification of patients at risk of opioid abuse. By discovering and correlating the patterns contributing to opioid overdose or abuse among a variety of patients, machine learning models can be used as an efficient tool to help uncover existing gaps and/or fraudulent practices in prescription writing.
To quote one such incidental finding, our study discovered that insulin may interact with opioids in an unfavourable way, leading to complications in diabetic patients. This indicates that diabetic patients under long-term opioid usage might need increased amounts of insulin for it to remain effective. This observation is consistent with prior research on the topic. To increase the translational value of our work, the predictive models and the associated software code are made available under the MIT License.
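The F1 and accuracy figures reported above follow the standard binary-classification definitions. A minimal sketch on toy labels (not the MIMIC-III data):

```python
import numpy as np

def f1_and_accuracy(y_true, y_pred):
    """Binary F1 score and accuracy from label arrays (positive class = 1)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))  # true positives
    fp = np.sum((y_true == 0) & (y_pred == 1))  # false positives
    fn = np.sum((y_true == 1) & (y_pred == 0))  # false negatives
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = np.mean(y_true == y_pred)
    return f1, accuracy

y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
f1, acc = f1_and_accuracy(y_true, y_pred)
print(f1, acc)  # 0.75 0.75
```

F1 is the harmonic mean of precision and recall, which is why the paper reports it alongside accuracy: on an imbalanced cohort, accuracy alone can look strong even when the minority (at-risk) class is poorly detected.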

7.
J Am Med Inform Assoc ; 28(12): 2641-2653, 2021 11 25.
Article in English | MEDLINE | ID: mdl-34571540

ABSTRACT

OBJECTIVE: Deep significance clustering (DICE) is a self-supervised learning framework. DICE identifies clinically similar and risk-stratified subgroups that neither unsupervised clustering algorithms nor supervised risk prediction algorithms alone are guaranteed to generate. MATERIALS AND METHODS: Enabled by an optimization process that enforces statistical significance between the outcome and subgroup membership, DICE jointly trains three components (representation learning, clustering, and outcome prediction) while providing interpretability to the deep representations. DICE also allows unseen patients to be assigned to trained subgroups for population-level risk stratification. We evaluated DICE using electronic health record datasets derived from 2 urban hospitals. Outcomes and patient cohorts used include discharge disposition to home among heart failure (HF) patients and acute kidney injury among COVID-19 (Cov-AKI) patients, respectively. RESULTS: Compared to baseline approaches including principal component analysis, DICE demonstrated superior performance in the cluster purity metrics: Silhouette score (0.48 for HF, 0.51 for Cov-AKI), Calinski-Harabasz index (212 for HF, 254 for Cov-AKI), and Davies-Bouldin index (0.86 for HF, 0.66 for Cov-AKI), and in the prediction metric: area under the receiver operating characteristic (ROC) curve (0.83 for HF, 0.78 for Cov-AKI). Clinical evaluation of DICE-generated subgroups revealed more meaningful distributions of member characteristics across subgroups, and higher risk ratios between subgroups. Furthermore, DICE-generated subgroup membership alone was moderately predictive of outcomes. DISCUSSION: DICE addresses a gap in current machine learning approaches where predicted risk may not lead directly to actionable clinical steps. CONCLUSION: DICE demonstrated potential for application in heterogeneous populations, where having the same quantitative risk does not equate with having a similar clinical profile.
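Of the cluster purity metrics reported, the Silhouette score is the simplest to state: for each point, compare its mean distance to its own cluster (a) with its lowest mean distance to any other cluster (b). The from-scratch sketch below uses toy 1-D data and is illustrative only, not the DICE evaluation code:

```python
import numpy as np

def silhouette(X, labels):
    """Mean silhouette coefficient: (b - a) / max(a, b) per point, where
    a = mean intra-cluster distance and b = lowest mean distance to any
    other cluster. Scores near 1 indicate compact, well-separated clusters."""
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    # Pairwise distances (1-D shortcut, or Euclidean for multi-dimensional X).
    D = np.abs(X[:, None] - X[None, :]) if X.ndim == 1 else \
        np.sqrt(((X[:, None] - X[None]) ** 2).sum(-1))
    scores = []
    for i, li in enumerate(labels):
        same = (labels == li) & (np.arange(len(labels)) != i)
        a = D[i, same].mean()
        b = min(D[i, labels == lj].mean() for lj in set(labels) if lj != li)
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

# Two well-separated 1-D clusters score close to 1:
print(round(silhouette(np.array([0.0, 1.0, 10.0, 11.0]), [0, 0, 1, 1]), 3))
```

The Calinski-Harabasz and Davies-Bouldin indices reported alongside it are analogous ratio-based measures of between-cluster versus within-cluster dispersion.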


Subject(s)
COVID-19 , Cluster Analysis , Humans , Machine Learning , ROC Curve , SARS-CoV-2
8.
Front Big Data ; 4: 742779, 2021.
Article in English | MEDLINE | ID: mdl-34977563

ABSTRACT

Breast cancer screening using mammography serves as the earliest defense against breast cancer, revealing anomalous tissue years before it can be detected through physical examination. Despite the use of high-resolution radiography, the presence of densely overlapping patterns challenges the consistency of human-driven diagnosis and drives interest in leveraging the state-of-the-art localization ability of deep convolutional neural networks (DCNN). The growing availability of digitized clinical archives enables the training of deep segmentation models, but training on the most widely available form of coarse hand-drawn annotations works against learning the precise boundary of cancerous tissue, producing results that are more aligned with the annotations than with the underlying lesions. The expense of collecting high-quality pixel-level data in medical science makes this even more difficult. To surmount this fundamental challenge, we propose LatentCADx, a deep learning segmentation model capable of precisely annotating cancer lesions underlying hand-drawn annotations, which we achieve using joint classification training and a strict segmentation penalty. We demonstrate the capability of LatentCADx on a publicly available dataset of 2,620 mammogram case files, where LatentCADx obtains a classification ROC of 0.97, AP of 0.87, and segmentation AP of 0.75 (IOU = 0.5), giving comparable or better performance than other models. Qualitative and precision evaluation of LatentCADx annotations on validation samples reveals that LatentCADx increases the specificity of segmentations beyond that of existing models trained on hand-drawn annotations, with pixel-level specificity reaching 0.90. It also obtains sharp boundaries around lesions, unlike other methods, reducing confused pixels in the output by more than 60%.
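The IOU = 0.5 threshold used for segmentation AP refers to intersection-over-union between a predicted mask and the ground-truth mask; a prediction counts as a match only when IoU exceeds the threshold. A minimal sketch with hypothetical 4x4 masks:

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection-over-Union of two boolean segmentation masks."""
    mask_a = np.asarray(mask_a, dtype=bool)
    mask_b = np.asarray(mask_b, dtype=bool)
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 1.0  # two empty masks agree perfectly

pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 1:3] = True    # predicted lesion: 4 pixels
truth = np.zeros((4, 4), dtype=bool)
truth[1:4, 1:4] = True   # ground-truth lesion: 9 pixels
print(iou(pred, truth))  # intersection 4 / union 9, about 0.444
```

At the paper's IOU = 0.5 criterion, this example prediction (IoU about 0.44) would not count as a correct segmentation, which illustrates how the threshold penalizes masks that cover only part of a lesion.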

10.
PLoS Negl Trop Dis ; 14(5): e0008273, 2020 05.
Article in English | MEDLINE | ID: mdl-32392225

ABSTRACT

Increasing urbanization is having a profound effect on infectious disease risk, posing significant challenges for governments to allocate limited resources for their optimal control at a sub-city scale. With recent advances in data collection practices, empirical evidence about the efficacy of highly localized containment and intervention activities, which can lead to optimal deployment of resources, is possible. However, there are several challenges in analyzing data from such real-world observational settings. Using data on 3.9 million instances of seven dengue vector containment activities collected between 2012 and 2017, here we develop and assess two frameworks for understanding how the generation of new dengue cases changes in space and time with respect to application of different types of containment activities. Accounting for the non-random deployment of each containment activity in relation to dengue cases and other types of containment activities, as well as deployment of activities in different epidemiological contexts, results from both frameworks reinforce existing knowledge about the efficacy of containment activities aimed at the adult phase of the mosquito lifecycle. Results show a 10% (95% CI: 1-19%) and a 20% (95% CI: 4-34%) reduction in the probability of a case occurring within 50 meters and 30 days of cases that had Indoor Residual Spraying (IRS) and fogging, respectively, performed in the immediate vicinity, compared to cases of similar epidemiological context that had no containment in their vicinity. Simultaneously, limitations due to the real-world nature of activity deployment are used to guide recommendations for future deployment of resources during outbreaks as well as data collection practices. Conclusions from this study will enable more robust and comprehensive analyses of localized containment activities in resource-scarce urban settings and lead to improved allocation of government resources in an outbreak setting.


Subject(s)
Dengue/epidemiology , Dengue/prevention & control , Mosquito Control/methods , Animals , Cities/epidemiology , Humans , Incidence , Pakistan/epidemiology , Spatio-Temporal Analysis , Urban Population
11.
Brief Bioinform ; 21(4): 1182-1195, 2020 07 15.
Article in English | MEDLINE | ID: mdl-31190075

ABSTRACT

Sepsis is a series of clinical syndromes caused by the immunological response to infection. The clinical evidence for sepsis is typically attributed to bacterial infection or bacterial endotoxins, but infections due to viruses, fungi or parasites can also lead to sepsis. Regardless of the etiology, rapid clinical deterioration, prolonged stay in intensive care units and high risk for mortality correlate with the incidence of sepsis. Despite its prevalence and morbidity, improvement in sepsis outcomes has remained limited. In this comprehensive review, we summarize the current landscape of risk estimation, diagnosis, treatment and prognosis strategies in the setting of sepsis and discuss future challenges. We argue that the advent of modern technologies such as in-depth molecular profiling, biomedical big data and machine intelligence methods will augment the treatment and prevention of sepsis. The volume, variety, veracity and velocity of heterogeneous data generated as part of healthcare delivery, together with recent advances in biotechnology-driven therapeutics and companion diagnostics, may provide a new wave of approaches to identify the most at-risk sepsis patients and reduce the symptom burden in patients within shorter turnaround times. Developing novel therapies by leveraging modern drug discovery strategies, including computational drug repositioning, cell and gene therapy, clustered regularly interspaced short palindromic repeats (CRISPR)-based genetic editing systems, immunotherapy, microbiome restoration, nanomaterial-based therapy and phage therapy, may help to develop treatments to target sepsis. We also provide empirical evidence for potential new sepsis targets, including FER and STARD3NL. Implementing data-driven methods that use real-time collection and analysis of clinical variables to trace, track and treat sepsis-related adverse outcomes will be key.
Understanding the root and route of sepsis and its comorbid conditions, which complicate treatment outcomes and lead to organ dysfunction, may help facilitate identification of the most at-risk patients and prevent further deterioration. To conclude, leveraging the advances in precision medicine, biomedical data science and translational bioinformatics approaches may help to develop better strategies to diagnose and treat sepsis in the next decade.


Subject(s)
Precision Medicine , Sepsis/diagnosis , Sepsis/therapy , Humans , Prognosis , Risk Factors , Sepsis/pathology
12.
Sci Adv ; 2(7): e1501215, 2016 07.
Article in English | MEDLINE | ID: mdl-27419226

ABSTRACT

Thousands of lives are lost every year in developing countries because epidemics are not detected early, owing to the lack of real-time disease surveillance data. We present results from a large-scale deployment of a telephone triage service as a basis for dengue forecasting in Pakistan. Our system uses statistical analysis of dengue-related phone calls to accurately forecast suspected dengue cases 2 to 3 weeks ahead of time at a sub-city level (correlation of up to 0.93). Our system has been operational at scale in Pakistan for the past 3 years and has received more than 300,000 phone calls. The predictions from our system are widely disseminated to public health officials and form a critical part of active government strategies for dengue containment. Our work is the first to demonstrate, with significant empirical evidence, that an accurate, location-specific disease forecasting system can be built using analysis of call volume data from a public health hotline.
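The "correlation of up to 0.93" quoted above is a standard correlation between lagged call volumes and subsequent case counts. The sketch below computes a Pearson correlation on made-up weekly series; the two-week lag and all numbers are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical weekly series: hotline call volume leads cases by two weeks.
calls = [120, 180, 300, 520, 700, 640, 410, 230]
cases = [10, 14, 25, 41, 62, 88, 80, 51]
lag = 2
r = pearson_r(calls[:-lag], cases[lag:])  # calls vs. cases two weeks later
print(round(r, 2))
```

In a forecasting setting one would scan over candidate lags and spatial units, keeping the lag that maximizes held-out correlation, which is how a hotline signal can be turned into a 2-to-3-week lead indicator.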


Subject(s)
Dengue/prevention & control , Triage , Awareness , Community Health Services , Forecasting , Hospitals , Hotlines , Humans , Telephone