Results 1 - 20 of 588
1.
Transl Vis Sci Technol ; 13(5): 11, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38748408

ABSTRACT

Purpose: Computational models can help clinicians plan surgeries by accounting for factors such as mechanical imbalances or by testing different surgical techniques beforehand. Different levels of modeling complexity are found in the literature, and it is still not clear which aspects should be included to obtain accurate results in finite-element (FE) corneal models. This work presents a methodology to narrow down the minimal set of modeling features required to reproduce clinical data for a refractive intervention such as photorefractive keratectomy (PRK). Methods: A pipeline to create FE models of a refractive surgery is presented: it tests the effect of different geometries, boundary conditions, loading, and mesh sizes on the optomechanical simulation output. The mechanical model for the corneal tissue accounts for the collagen fiber distribution in human corneas. Both mechanical and optical outcomes are analyzed for the different models. Finally, the methodology is applied to five patient-specific models to ensure accuracy. Results: To simulate the postsurgical corneal optomechanics, our results suggest that the most precise outcome is obtained with patient-specific models with a 100 µm mesh size, a sliding boundary condition at the limbus, and intraocular pressure enforced as a distributed load. Conclusions: A methodology for laser surgery simulation has been developed that is able to reproduce the optical target of the laser intervention while also analyzing the mechanical outcome. Translational Relevance: The lack of standardization in modeling refractive interventions leads to different simulation strategies, making it difficult to compare results across publications. This work establishes standardization guidelines to be followed when performing optomechanical simulations of refractive interventions.


Subject(s)
Computer Simulation , Cornea , Finite Element Analysis , Photorefractive Keratectomy , Humans , Cornea/surgery , Cornea/physiology , Photorefractive Keratectomy/methods , Computer Simulation/standards , Lasers, Excimer/therapeutic use , Models, Biological
2.
JAMA ; 330(1): 78-80, 2023 07 03.
Article in English | MEDLINE | ID: mdl-37318797

ABSTRACT

This study assesses the diagnostic accuracy of the Generative Pre-trained Transformer 4 (GPT-4) artificial intelligence (AI) model in a series of challenging cases.


Subject(s)
Artificial Intelligence , Diagnosis, Computer-Assisted , Artificial Intelligence/standards , Reproducibility of Results , Computer Simulation/standards , Diagnosis, Computer-Assisted/standards
3.
JAMA ; 329(4): 306-317, 2023 01 24.
Article in English | MEDLINE | ID: mdl-36692561

ABSTRACT

Importance: Stroke is the fifth-highest cause of death in the US and a leading cause of serious long-term disability with particularly high risk in Black individuals. Quality risk prediction algorithms, free of bias, are key for comprehensive prevention strategies. Objective: To compare the performance of stroke-specific algorithms with pooled cohort equations developed for atherosclerotic cardiovascular disease for the prediction of new-onset stroke across different subgroups (race, sex, and age) and to determine the added value of novel machine learning techniques. Design, Setting, and Participants: Retrospective cohort study on combined and harmonized data from Black and White participants of the Framingham Offspring, Atherosclerosis Risk in Communities (ARIC), Multi-Ethnic Study of Atherosclerosis (MESA), and Reasons for Geographic and Racial Differences in Stroke (REGARDS) studies (1983-2019) conducted in the US. The 62 482 participants included at baseline were at least 45 years of age and free of stroke or transient ischemic attack. Exposures: Published stroke-specific algorithms from Framingham and REGARDS (based on self-reported risk factors) as well as pooled cohort equations for atherosclerotic cardiovascular disease plus 2 newly developed machine learning algorithms. Main Outcomes and Measures: Models were designed to estimate the 10-year risk of new-onset stroke (ischemic or hemorrhagic). Discrimination concordance index (C index) and calibration ratios of expected vs observed event rates were assessed at 10 years. Analyses were conducted by race, sex, and age groups. Results: The combined study sample included 62 482 participants (median age, 61 years; 54% women; 29% Black individuals).
Discrimination C indexes were not significantly different for the 2 stroke-specific models (Framingham stroke, 0.72; 95% CI, 0.72-0.73; REGARDS self-report, 0.73; 95% CI, 0.72-0.74) vs the pooled cohort equations (0.72; 95% CI, 0.71-0.73): differences 0.01 or less (P values >.05) in the combined sample. Significant differences in discrimination were observed by race: the C indexes were 0.76 for all 3 models in White women vs 0.69 in Black women (all P values <.001) and between 0.71 and 0.72 in White men and between 0.64 and 0.66 in Black men (all P values ≤.001). When stratified by age, model discrimination was better for younger (<60 years) vs older (≥60 years) adults for both Black and White individuals. The ratios of observed to expected 10-year stroke rates were closest to 1 for the REGARDS self-report model (1.05; 95% CI, 1.00-1.09) and indicated risk overestimation for Framingham stroke (0.86; 95% CI, 0.82-0.89) and pooled cohort equations (0.74; 95% CI, 0.71-0.77). Performance did not significantly improve when novel machine learning algorithms were applied. Conclusions and Relevance: In this analysis of Black and White individuals without stroke or transient ischemic attack among 4 US cohorts, existing stroke-specific risk prediction models and novel machine learning techniques did not significantly improve discriminative accuracy for new-onset stroke compared with the pooled cohort equations, and the REGARDS self-report model had the best calibration. All algorithms exhibited worse discrimination in Black individuals than in White individuals, indicating the need to expand the pool of risk factors and improve modeling techniques to address observed racial disparities and improve model performance.
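The two headline metrics in this abstract, the calibration ratio and the concordance (C) index, are simple to compute for a binary 10-year outcome. The sketch below uses invented toy data (the function names and all numbers are illustrative, not from the study):

```python
# Sketch: calibration ratio and concordance (C) index of the kind reported
# above, computed on an invented toy cohort.

def calibration_ratio(observed_events, predicted_risks):
    """Ratio of observed to expected event counts. A ratio near 1.0 means good
    calibration; below 1.0 the model overestimates risk (as reported above for
    the pooled cohort equations)."""
    return sum(observed_events) / sum(predicted_risks)

def c_index(outcomes, risks):
    """Fraction of comparable pairs (one event, one non-event) in which the
    event case received the higher predicted risk; ties count as 0.5."""
    concordant, comparable = 0.0, 0
    for i in range(len(outcomes)):
        for j in range(len(outcomes)):
            if outcomes[i] == 1 and outcomes[j] == 0:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy cohort: 1 = stroke within 10 years, 0 = no stroke.
outcomes = [1, 0, 0, 1, 0, 0]
risks    = [0.30, 0.10, 0.05, 0.20, 0.25, 0.08]
print(round(calibration_ratio(outcomes, risks), 2))  # 2.04 -> risk underestimated here
print(round(c_index(outcomes, risks), 2))            # 0.88
```

Note that the study assessed these at 10 years with censored follow-up, where the C index is computed over risk-ordered comparable pairs of survival times rather than plain binary outcomes; the pairwise logic, however, is the same.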


Subject(s)
Black People , Healthcare Disparities , Prejudice , Risk Assessment , Stroke , White People , Female , Humans , Male , Middle Aged , Atherosclerosis/epidemiology , Cardiovascular Diseases/epidemiology , Ischemic Attack, Transient/epidemiology , Retrospective Studies , Stroke/diagnosis , Stroke/epidemiology , Stroke/ethnology , Risk Assessment/standards , Reproducibility of Results , Sex Factors , Age Factors , Race Factors/statistics & numerical data , Black People/statistics & numerical data , White People/statistics & numerical data , United States/epidemiology , Machine Learning/standards , Bias , Prejudice/prevention & control , Healthcare Disparities/ethnology , Healthcare Disparities/standards , Healthcare Disparities/statistics & numerical data , Computer Simulation/standards , Computer Simulation/statistics & numerical data
4.
Eur Radiol ; 33(5): 3544-3556, 2023 May.
Article in English | MEDLINE | ID: mdl-36538072

ABSTRACT

OBJECTIVES: To evaluate AI biases and errors in estimating bone age (BA) by comparing AI and radiologists' clinical determinations of BA. METHODS: We established three deep learning models from a Chinese private dataset (CHNm), an American public dataset (USAm), and a joint dataset combining the two (JOIm). The test data CHNt (n = 1246) were labeled by ten senior pediatric radiologists. The effects of data site differences, interpretation bias, and interobserver variability on BA assessment were evaluated. The differences between the AI models' and the radiologists' clinical determinations of BA (normal, advanced, and delayed BA groups, using the Brush data) were evaluated by the chi-square test and kappa values. The heatmaps of CHNm-CHNt were generated using Grad-CAM. RESULTS: We obtained an MAD value of 0.42 years on CHNm-CHNt; this indicated appropriate accuracy for the group as a whole but not accurate estimation of individual BA: with a kappa value of 0.714, AI and human clinical determinations of BA differed significantly. The features of the heatmaps were not fully consistent with human vision on the X-ray films. Variable performance in BA estimation by different AI models and the disagreement between AI and radiologists' clinical determinations of BA may be caused by data biases, including patients' sex and age, institutions, and radiologists. CONCLUSIONS: The deep learning models performed better in internal validation than in external validation when predicting BA on both the internal and joint datasets. However, the biases and errors in the models' clinical determinations of child development should be carefully considered. KEY POINTS: • With a kappa value of 0.714, clinical determinations of bone age by AI did not accord well with clinical determinations by radiologists.
• Several biases, including patients' sex and age, institutions, and radiologists, may cause variable performance by AI bone age models and disagreement between AI and radiologists' clinical determinations of bone age. • AI heatmaps of bone age were not fully consistent with human vision on X-ray films.


Subject(s)
Age Determination by Skeleton , Computer Simulation , Deep Learning , Child , Humans , Bias , Deep Learning/standards , Radiologists/standards , United States , Age Determination by Skeleton/methods , Age Determination by Skeleton/standards , Wrist/diagnostic imaging , Fingers/diagnostic imaging , Male , Female , Child, Preschool , Adolescent , Observer Variation , Diagnostic Errors , Computer Simulation/standards
5.
Tob Control ; 32(5): 589-598, 2023 09.
Article in English | MEDLINE | ID: mdl-35017262

ABSTRACT

BACKGROUND: Policy simulation models (PSMs) have been used extensively to shape health policies before real-world implementation and to evaluate post-implementation impact. This systematic review aimed to examine best practices, identify common pitfalls in tobacco control PSMs and propose a modelling quality assessment framework. METHODS: We searched five databases to identify eligible publications from July 2013 to August 2019. We additionally included papers from Feirman et al for studies before July 2013. Tobacco control PSMs that project tobacco use and tobacco-related outcomes from smoking policies were included. We extracted model input, structure and output data for models used in two or more included papers. Using our proposed quality assessment framework, we scored these models on population representativeness, policy effectiveness evidence, simulated smoking histories, included smoking-related diseases, exposure-outcome lag time, transparency, sensitivity analysis, validation and equity. FINDINGS: We found 146 eligible papers and 25 distinct models. Most models used population data from public or administrative registries, and all performed sensitivity analysis. However, smoking behaviour was commonly collapsed into crude categories of smoking status. Eight models presented only overall changes in mortality rather than explicitly considering smoking-related diseases. Only four models reported impacts on health inequalities, and none offered the source code. Overall, the higher-scoring models achieved higher citation rates. CONCLUSIONS: While fragments of good practice were widespread across the reviewed PSMs, only a few included a 'critical mass' of the good practices specified in our quality assessment framework. This framework could therefore serve as a benchmark and support the sharing of good modelling practices.


Subject(s)
Computer Simulation , Health Policy , Policy Making , Quality Assurance, Health Care , Tobacco Control , Humans , Benchmarking , Computer Simulation/standards , Reproducibility of Results , Smoking/adverse effects , Smoking/epidemiology , Smoking/mortality
6.
Pathol Res Pract ; 231: 153771, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35091177

ABSTRACT

Mass-forming ductal carcinoma in situ (DCIS) detected on core needle biopsy (CNB) often represents a radiology-pathology discordance and is thought to represent missed invasive carcinoma. This brief report applied supervised machine learning (ML) for image segmentation to investigate a series of 44 mass-forming DCIS cases, with the primary focus on stromal computational signatures. The areas under the curve (AUC) for receiver operating characteristic (ROC) curves in relation to upgrade from DCIS to invasive carcinoma were as follows: high myxoid stromal ratio (MSR): 0.923, P < 0.001; low collagenous stromal percentage (CSP): 0.875, P < 0.001; and low proportionated stromal area (PSA): 0.682, P = 0.039. The use of ML in mass-forming DCIS could predict upgrade to invasive carcinoma with high sensitivity and specificity. The findings from this brief report are clinically useful and should be further validated by future studies.
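AUC values like those above can be computed directly from the marker values via the Mann-Whitney interpretation: the probability that a randomly chosen upgraded case scores higher on the marker than a randomly chosen non-upgraded case. The sketch below uses invented marker values, not the study's data:

```python
# Sketch: AUC via the Mann-Whitney pairwise interpretation, on invented data.

def auc(marker_pos, marker_neg):
    """AUC = P(marker value in an upgraded case > value in a non-upgraded
    case); tied pairs count as 0.5."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in marker_pos for n in marker_neg)
    return wins / (len(marker_pos) * len(marker_neg))

# Invented myxoid stromal ratio (MSR) values for upgraded vs non-upgraded cases.
upgraded  = [0.62, 0.55, 0.71, 0.48]
pure_dcis = [0.30, 0.41, 0.25, 0.50, 0.35]
print(round(auc(upgraded, pure_dcis), 3))  # 0.95
```

For markers where *low* values predict upgrade (like CSP above), the same computation is run with the comparison reversed, or equivalently on the negated marker.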


Subject(s)
Biopsy, Large-Core Needle/statistics & numerical data , Carcinoma, Intraductal, Noninfiltrating/diagnosis , Computer Simulation/standards , Models, Genetic , Aged , Analysis of Variance , Area Under Curve , Biopsy, Large-Core Needle/methods , Carcinoma, Intraductal, Noninfiltrating/epidemiology , Computer Simulation/statistics & numerical data , Female , Humans , Male , Middle Aged , ROC Curve , Retrospective Studies
7.
Oxid Med Cell Longev ; 2022: 4378413, 2022.
Article in English | MEDLINE | ID: mdl-35035662

ABSTRACT

BACKGROUND: Vascular calcification (VC) constitutes a subclinical vascular burden and increases cardiovascular mortality. Effective therapeutics for VC remain to be developed. We aimed to use a deep learning-based strategy to screen for plant compounds that could potentially be repurposed for managing VC. METHODS: We integrated drugome, interactome, and diseasome information from the Comparative Toxicogenomic Database (CTD), DrugBank, PubChem, Gene Ontology (GO), and BioGRID to analyze drug-disease associations. Deep representation learning was performed using a high-level description of the local network architecture and features of the entities, followed by learning global embeddings of nodes from a heterogeneous network using a graph neural network architecture; a random forest classifier was then established for prediction. Predicted results were tested in an in vitro VC model for validity based on the probability scores. RESULTS: We collected 6,790 compounds with available Simplified Molecular-Input Line-Entry System (SMILES) data, 11,958 GO terms, 7,238 diseases, and 25,482 proteins; local embedding vectors were obtained using an end-to-end transformer network and a node2vec algorithm, and global embedding vectors were learned from the heterogeneous network via the graph neural network. Our algorithm distinguished well between candidate compounds, assigning higher prediction scores to compound categories with higher potential and lower scores to the others. Probability score-dependent selection revealed that antioxidants such as sulforaphane and daidzein were potentially effective compounds against VC, while catechin had a low probability. All three compounds were validated in vitro. CONCLUSIONS: Our findings exemplify the utility of deep learning in identifying promising VC-treating plant compounds. Our model can serve as a quick and comprehensive computational screening tool to assist the early drug discovery process.


Subject(s)
Computer Simulation/standards , Deep Learning/standards , Machine Learning/standards , Plants/chemistry , Vascular Calcification/therapy , Algorithms , Humans
8.
Br J Cancer ; 126(2): 204-210, 2022 02.
Article in English | MEDLINE | ID: mdl-34750494

ABSTRACT

BACKGROUND: Efficient trial designs are required to prioritise promising drugs within Phase II trials. Adaptive designs are examples of such designs, but their efficiency is reduced if there is a delay in assessing patient responses to treatment. METHODS: Motivated by the WIRE trial in renal cell carcinoma (NCT03741426), we compare three trial approaches to testing multiple treatment arms: (1) single-arm trials in sequence with interim analyses; (2) a parallel multi-arm multi-stage trial and (3) the design used in WIRE, which we call the Multi-Arm Sequential Trial with Efficient Recruitment (MASTER) design. The MASTER design recruits patients to one arm at a time, pausing recruitment to an arm when it has recruited the required number for an interim analysis. We conduct a simulation study to compare how long the three different trial designs take to evaluate a number of new treatment arms. RESULTS: The parallel multi-arm multi-stage and the MASTER design are much more efficient than separate trials. The MASTER design provides extra efficiency when there is endpoint delay, or recruitment is very quick. CONCLUSIONS: We recommend the MASTER design as an efficient way of testing multiple promising cancer treatments in non-comparative Phase II trials.


Subject(s)
Adaptive Clinical Trials as Topic/methods , Clinical Trials, Phase II as Topic/methods , Computer Simulation/standards , Medical Oncology/methods , Neoplasms/drug therapy , Non-Randomized Controlled Trials as Topic/methods , Research Design/standards , Cohort Studies , Humans , Neoplasms/pathology , Sample Size , Treatment Outcome
9.
J Hepatol ; 76(2): 311-318, 2022 02.
Article in English | MEDLINE | ID: mdl-34606915

ABSTRACT

BACKGROUND & AIMS: Several models have recently been developed to predict risk of hepatocellular carcinoma (HCC) in patients with chronic hepatitis B (CHB). Our aims were to develop and validate an artificial intelligence-assisted prediction model of HCC risk. METHODS: Using a gradient-boosting machine (GBM) algorithm, a model was developed using 6,051 patients with CHB who received entecavir or tenofovir therapy from 4 hospitals in Korea. Two external validation cohorts were independently established: Korean (5,817 patients from 14 Korean centers) and Caucasian (1,640 from 11 Western centers) PAGE-B cohorts. The primary outcome was HCC development. RESULTS: In the derivation cohort and the 2 validation cohorts, cirrhosis was present in 26.9%-50.2% of patients at baseline. A model using 10 parameters at baseline was derived and showed good predictive performance (c-index 0.79). This model showed significantly better discrimination than previous models (PAGE-B, modified PAGE-B, REACH-B, and CU-HCC) in both the Korean (c-index 0.79 vs. 0.64-0.74; all p <0.001) and Caucasian validation cohorts (c-index 0.81 vs. 0.57-0.79; all p <0.05 except modified PAGE-B, p = 0.42). A calibration plot showed a satisfactory calibration function. When the patients were grouped into 4 risk groups, the minimal-risk group (11.2% of the Korean cohort and 8.8% of the Caucasian cohort) had a less than 0.5% risk of HCC during 8 years of follow-up. CONCLUSIONS: This GBM-based model provides the best predictive power for HCC risk in Korean and Caucasian patients with CHB treated with entecavir or tenofovir. LAY SUMMARY: Risk scores have been developed to predict the risk of hepatocellular carcinoma (HCC) in patients with chronic hepatitis B. We developed and validated a new risk prediction model using machine learning algorithms in 13,508 antiviral-treated patients with chronic hepatitis B. 
Our new model, based on 10 common baseline characteristics, demonstrated superior performance in risk stratification compared with previous risk scores. This model also identified a group of patients at minimal risk of developing HCC, for whom less intensive HCC surveillance may be appropriate.


Subject(s)
Artificial Intelligence/standards , Carcinoma, Hepatocellular/physiopathology , Hepatitis B, Chronic/complications , Adult , Antiviral Agents/pharmacology , Antiviral Agents/therapeutic use , Artificial Intelligence/statistics & numerical data , Asian People/ethnology , Asian People/statistics & numerical data , Carcinoma, Hepatocellular/etiology , Cohort Studies , Computer Simulation/standards , Computer Simulation/statistics & numerical data , Female , Follow-Up Studies , Guanine/analogs & derivatives , Guanine/pharmacology , Guanine/therapeutic use , Hepatitis B, Chronic/physiopathology , Humans , Liver Neoplasms/complications , Liver Neoplasms/physiopathology , Male , Middle Aged , Republic of Korea/ethnology , Tenofovir/pharmacology , Tenofovir/therapeutic use , White People/ethnology , White People/statistics & numerical data
10.
J Immunother Cancer ; 9(12)2021 12.
Article in English | MEDLINE | ID: mdl-34952852

ABSTRACT

Therapeutic combinations of a VEGFR tyrosine kinase inhibitor plus immune checkpoint blockade now represent a standard in the first-line management of patients with advanced renal cell carcinoma. Tumor molecular profiling has shown notable heterogeneity in the activation states of relevant pathways, and it is not clear that concurrent pursuit of two mechanisms of action is needed in all patients. Here, we applied an in silico drug model to simulate combination therapy by integrating previously reported findings from individual monotherapy studies. Clinical data were collected from prospective clinical trials of axitinib, cabozantinib, pembrolizumab, and nivolumab. Efficacy of two-drug combination regimens (cabozantinib plus nivolumab, and axitinib plus pembrolizumab) was then modeled assuming independent effects of each partner. Reduction in target lesions, objective response rates (ORR), and progression-free survival (PFS) were projected based on the previously reported activity of each agent, randomly pairing efficacy data from two source trials for individual patients and including only the superior effect of each pair in the model. In silico results were then compared with registration phase III studies of these combinations, which reported similar ORR, PFS, and best tumor response. As increasingly complex therapeutic strategies emerge, computational tools like this could help define benchmarks for trial designs and precision medicine efforts. Summary statement: In silico drug modeling provides meaningful insights into the effects of combination immunotherapy for patients with advanced kidney cancer.
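The independent-action pairing described in this abstract has a very compact core: draw one tumor response per monotherapy for each simulated patient and keep only the better (more negative) change. The sketch below illustrates that logic with invented response distributions; the function name, sample values, and the simple RECIST-style -30% response cutoff are all assumptions for illustration, not the trial data:

```python
# Sketch of independent-action combination modeling: pair random draws from
# two monotherapy best-response distributions and keep the superior effect.
import random

random.seed(0)

def simulate_combination(mono_a, mono_b, n_patients=10_000):
    """Pair draws from two monotherapy best-response samples (% change in
    target lesions) and keep the more negative (better) change of each pair."""
    best_changes = []
    for _ in range(n_patients):
        a = random.choice(mono_a)
        b = random.choice(mono_b)
        best_changes.append(min(a, b))  # more negative = more tumor shrinkage
    # RECIST-style partial response: >= 30% shrinkage from baseline.
    orr = sum(c <= -30 for c in best_changes) / n_patients
    return best_changes, orr

# Invented monotherapy best-response samples (% change from baseline).
tki = [-45, -35, -20, -10, 5, 15]   # hypothetical TKI arm
io  = [-80, -40, -5, 0, 10, 20]     # hypothetical checkpoint-inhibitor arm
_, orr = simulate_combination(tki, io)
print(f"projected combination ORR: {orr:.0%}")
```

Under independence, the projected ORR approaches P(response to A) + P(response to B) - P(both), which is why the combination can exceed either monotherapy even with no synergy, the central point of the modeling exercise.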


Subject(s)
Carcinoma, Renal Cell/drug therapy , Computer Simulation/standards , Immunotherapy/methods , Kidney Neoplasms/drug therapy , Carcinoma, Renal Cell/mortality , Carcinoma, Renal Cell/pathology , Humans , Kidney Neoplasms/mortality , Kidney Neoplasms/pathology , Progression-Free Survival
11.
JAMA Netw Open ; 4(10): e2129392, 2021 10 01.
Article in English | MEDLINE | ID: mdl-34677596

ABSTRACT

Importance: The possibility of widespread use of a novel effective therapy for Alzheimer disease (AD) will present important clinical, policy, and financial challenges. Objective: To describe how including different patient, caregiver, and societal treatment-related factors affects estimates of the cost-effectiveness of a hypothetical disease-modifying AD treatment. Design, Setting, and Participants: In this economic evaluation, the Alzheimer Disease Archimedes Condition Event Simulator was used to simulate the prognosis of a hypothetical cohort of patients selected from the Alzheimer Disease Neuroimaging Initiative database who received the diagnosis of mild cognitive impairment (MCI). Scenario analyses that varied costs and quality of life inputs relevant to patients and caregivers were conducted. The analysis was designed and conducted from June 15, 2019, to September 30, 2020. Exposures: A hypothetical drug that would delay progression to dementia in individuals with MCI compared with usual care. Main Outcomes and Measures: Incremental cost-effectiveness ratio (ICER), measured by cost per quality-adjusted life-year (QALY) gained. Results: The model included a simulated cohort of patients who scored between 24 and 30 on the Mini-Mental State Examination and had a global Clinical Dementia Rating scale of 0.5, with a required memory box score of 0.5 or higher, at baseline. Using a health care sector perspective, which included only individual patient health care costs, the ICER for the hypothetical treatment was $192 000 per QALY gained. The result decreased to $183 000 per QALY gained in a traditional societal perspective analysis with the inclusion of patient non-health care costs. The inclusion of estimated caregiver health care costs produced almost no change in the ICER, but the inclusion of QALYs gained by caregivers led to a substantial reduction in the ICER for the hypothetical treatment, to $107 000 per QALY gained in the health sector perspective. 
In the societal perspective scenario, with the broadest inclusion of patient and caregiver factors, the ICER decreased to $74 000 per added QALY. Conclusions and Relevance: The findings of this economic evaluation suggest that policy makers should be aware that efforts to estimate and include the effects of AD treatments outside those on patients themselves can affect the results of the cost-effectiveness analyses that often underpin assessments of the value of new treatments. Further research and debate on including these factors in assessments that will inform discussions on fair pricing for new treatments are needed.


Subject(s)
Alzheimer Disease/drug therapy , Computer Simulation/standards , Cost-Benefit Analysis/methods , Alzheimer Disease/economics , Caregivers/economics , Caregivers/psychology , Cohort Studies , Computer Simulation/statistics & numerical data , Cost-Benefit Analysis/statistics & numerical data , Humans , Quality-Adjusted Life Years , Social Norms
12.
Value Health ; 24(10): 1435-1445, 2021 10.
Article in English | MEDLINE | ID: mdl-34593166

ABSTRACT

OBJECTIVES: To develop and validate a discrete event simulation model that can simulate patients with heart failure managed with usual care or with an early warning system (with or without a diagnostic algorithm) and that accounts for the impact of individual patient characteristics on health outcomes. METHODS: The model was developed using patient-level data from the Trans-European Network - Home-Care Management System study. It was coded in R (version 3.6.2) using RStudio (version 1.3.1093) and validated along the lines of the Assessment of the Validation Status of Health-Economic decision models tool. The model includes 20 patient and disease characteristics and generates 8 different outcomes. Model outcomes were generated for the base-case analysis and used in the model validation. RESULTS: Patients managed with the early warning system, compared with usual care, experienced an average increase of 2.99 outpatient visits and a decrease of 0.02 hospitalizations per year, with a gain of 0.81 life-years (0.45 quality-adjusted life-years) and increased average total costs of €11 249. Adding a diagnostic algorithm to the early warning system resulted in a 0.92 life-year gain (0.57 quality-adjusted life-years) and increased average costs of €9680. These patients experienced a decrease of 0.02 outpatient visits and 0.65 hospitalizations per year, while avoiding being hospitalized 0.93 times. The model showed robustness and validity of the generated outcomes when compared with other models addressing the same problem and with external data. CONCLUSIONS: This study developed and validated a unique patient-level simulation model that can be used to simulate a wide range of outcomes for different patient subgroups and treatment scenarios. It provides useful information for guiding research and for developing new treatment options by showing the hypothetical impact of these interventions on a large number of important heart failure outcomes.


Subject(s)
Computer Simulation/standards , Heart Failure/complications , Patient Simulation , Computer Simulation/trends , Heart Failure/physiopathology , Humans
13.
Value Health ; 24(11): 1570-1577, 2021 11.
Article in English | MEDLINE | ID: mdl-34711356

ABSTRACT

OBJECTIVES: To assist with planning hospital resources, including critical care (CC) beds, for managing patients with COVID-19. METHODS: An individual simulation was implemented in Microsoft Excel using a discretely integrated condition event (DICE) simulation. Expected daily cases presented to the emergency department were modeled in terms of transitions to and from ward and CC and to discharge or death. The duration of stay in each location was selected from trajectory-specific distributions. Daily ward and CC bed occupancy and the number of discharges according to care needs were forecast for the period of interest. Face validity was ascertained by local experts and, for the case study, by comparing forecasts with actual data. RESULTS: To illustrate the use of the model, a case study was developed for Guy's and St Thomas' Trust. They provided inputs for January 2020 to early April 2020, and local observed case numbers were fit to provide estimates of emergency department arrivals. A peak demand of 467 ward and 135 CC beds was forecast, with diminishing numbers through July. The model tended to predict higher occupancy in Level 1 than was eventually observed, but the timing of peaks was quite close, especially for CC, where the model predicted at least 120 beds would be occupied from April 9, 2020, to April 17, 2020, compared with April 7, 2020, to April 19, 2020, in reality. The care needs on discharge varied greatly from day to day. CONCLUSIONS: The DICE simulation of hospital trajectories of patients with COVID-19 provides forecasts of resources needed with only a few local inputs. This should help planners understand their expected resource needs.
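The core of an occupancy forecast of this kind is small: daily arrivals are routed to ward or critical care, each patient stays for a sampled duration, and beds in use are tallied per day. The sketch below is a minimal stand-in for the Excel model described above; the routing probability, length-of-stay ranges, and arrival curve are invented placeholders, and patient transitions between ward and CC are deliberately omitted:

```python
# Minimal sketch of a daily bed-occupancy forecast: arrivals are routed to
# ward or critical care (CC) and occupy a bed for a sampled length of stay.
import random

random.seed(1)

def forecast_occupancy(daily_arrivals, p_cc=0.2,
                       los_ward=(4, 12), los_cc=(7, 16), horizon=60):
    """Return per-day ward and CC bed occupancy over `horizon` days."""
    ward = [0] * horizon
    cc = [0] * horizon
    for day, n in enumerate(daily_arrivals):
        for _ in range(n):
            # Route the patient, then sample a length of stay (days).
            beds, lo_hi = (cc, los_cc) if random.random() < p_cc else (ward, los_ward)
            stay = random.randint(*lo_hi)
            for d in range(day, min(day + stay, horizon)):
                beds[d] += 1  # this patient occupies a bed on day d
    return ward, cc

# Invented epidemic-like arrival curve: ramp up, then down, over 30 days.
arrivals = [min(d, 30 - d) * 3 for d in range(30)]
ward, cc = forecast_occupancy(arrivals)
print("peak ward beds:", max(ward), "on day", ward.index(max(ward)))
print("peak CC beds:", max(cc), "on day", cc.index(max(cc)))
```

Because lengths of stay overlap across admission days, the occupancy peak lags the arrival peak, which is exactly the planning quantity the model above was built to forecast.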


Subject(s)
COVID-19/economics , Computer Simulation/standards , Resource Allocation/methods , Surge Capacity/economics , COVID-19/prevention & control , COVID-19/therapy , Humans , Resource Allocation/standards , Surge Capacity/trends
14.
Biochemistry ; 60(36): 2727-2738, 2021 09 14.
Article in English | MEDLINE | ID: mdl-34455776

ABSTRACT

Zinc homeostasis in mammals is constantly and precisely maintained by sophisticated regulatory proteins. Among them, the Zrt/Irt-like protein (ZIP) regulates the influx of zinc into the cytoplasm. In this work, we have employed all-atom molecular dynamics simulations to investigate the Zn2+ transport mechanism in prokaryotic ZIP obtained from Bordetella bronchiseptica (BbZIP) in a membrane bilayer. Additionally, the structural and dynamical transformations of BbZIP during this process have been analyzed. This study allowed us to develop a hypothesis for the zinc influx mechanism and formation of the metal-binding site. We have created a model for the outward-facing form of BbZIP (experimentally only the inward-facing form has been characterized) that has allowed us, for the first time, to observe the Zn2+ ion entering the channel and binding to the negatively charged M2 site. It is thought that the M2 site is less favored than the M1 site, which then leads to metal ion egress; however, we have not observed the M1 site being occupied in our simulations. Furthermore, removing both Zn2+ ions from this complex resulted in the collapse of the metal-binding site, illustrating the "structural role" of metal ions in maintaining the binding site and holding the proteins together. Finally, due to the long Cd2+-residue bond distances observed in the X-ray structures, we have proposed the existence of an H3O+ ion at the M2 site that plays an important role in protein stability in the absence of the metal ion.


Subject(s)
Bordetella bronchiseptica/metabolism , Carrier Proteins/chemistry , Cation Transport Proteins/metabolism , Computer Simulation/standards , Zinc/metabolism , Carrier Proteins/metabolism , Molecular Dynamics Simulation , Protein Structural Elements
15.
Curr Opin Ophthalmol ; 32(5): 452-458, 2021 Sep 01.
Article in English | MEDLINE | ID: mdl-34231530

ABSTRACT

PURPOSE OF REVIEW: In this article, we introduce the concept of model interpretability, review its applications in deep learning models for clinical ophthalmology, and discuss its role in the integration of artificial intelligence in healthcare. RECENT FINDINGS: The advent of deep learning in medicine has introduced models with remarkable accuracy. However, the inherent complexity of these models undermines its users' ability to understand, debug and ultimately trust them in clinical practice. Novel methods are being increasingly explored to improve models' 'interpretability' and draw clearer associations between their outputs and features in the input dataset. In the field of ophthalmology, interpretability methods have enabled users to make informed adjustments, identify clinically relevant imaging patterns, and predict outcomes in deep learning models. SUMMARY: Interpretability methods support the transparency necessary to implement, operate and modify complex deep learning models. These benefits are becoming increasingly demonstrated in models for clinical ophthalmology. As quality standards for deep learning models used in healthcare continue to evolve, interpretability methods may prove influential in their path to regulatory approval and acceptance in clinical practice.


Subject(s)
Deep Learning , Ophthalmology , Artificial Intelligence , Clinical Competence , Computer Simulation/standards , Deep Learning/standards , Diagnostic Imaging , Humans , Ophthalmology/standards
16.
Pediatrics ; 148(Suppl 1): s3-s10, 2021 07.
Article in English | MEDLINE | ID: mdl-34210841

ABSTRACT

BACKGROUND AND OBJECTIVES: Screening interventions in pediatric primary care often have limited effects on patients' health. Using simulation, we examined what conditions must hold for screening to improve population health outcomes, using screening for depression in adolescence as an example. METHODS: Through simulation, we varied parameters describing the recognition and treatment of depression in primary care. The outcome measure was the effect of universal screening on adolescent population mental health, expressed as a percentage of the maximum possible effect. For each simulation, we randomly selected parameter values from the ranges of possible values identified from studies of care delivery in real-world pediatric settings. RESULTS: We examined the comparative effectiveness of universal screening over assessment as usual in 10 000 simulations. Screening achieved a median of 4.2% of the possible improvement in population mental health (average: 4.8%). Screening had more impact on population health with a higher sensitivity of the screen, a lower false-positive rate, a higher percentage screened, and a higher probability of treatment given the recognition of depression. However, even at the best levels of each of these parameters, screening usually achieved <10% of the possible effect. CONCLUSIONS: The many points at which the mental health care delivery process breaks down limit the population health effects of universal screening in primary care. Screening should be evaluated in the context of a realistic model of health care system functioning. We need to identify health care system structures and processes that strengthen the population effectiveness of screening or consider alternate solutions outside of primary care.
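The multiplicative cascade logic described in this abstract can be sketched as a small Monte Carlo simulation. The parameter ranges and the simple three-factor benefit model below are illustrative assumptions for exposition, not the study's calibrated model:

```python
import random

def screening_effect(sensitivity, screened, treated):
    """Fraction of the maximum possible population effect achieved.

    Each step in the care cascade multiplies down the benefit: only
    adolescents who are screened, correctly identified, and then
    actually treated contribute to the population-level effect.
    """
    return screened * sensitivity * treated

random.seed(0)
results = []
for _ in range(10_000):
    # Illustrative ranges, standing in for values drawn from studies
    # of real-world pediatric care delivery.
    sensitivity = random.uniform(0.5, 0.9)  # screen detects depression
    screened = random.uniform(0.3, 0.9)     # share of population screened
    treated = random.uniform(0.1, 0.5)      # P(treatment | recognized)
    results.append(screening_effect(sensitivity, screened, treated))

results.sort()
median_effect = results[len(results) // 2]
print(f"median fraction of maximum effect: {median_effect:.3f}")
```

Even with optimistic draws, the product of several sub-unity probabilities stays small, which is the mechanism behind the low median effect the study reports.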


Subject(s)
Computer Simulation , Depression/diagnosis , Mass Screening/methods , Mental Health , Population Health , Primary Health Care/methods , Adolescent , Child , Computer Simulation/standards , Depression/therapy , Education , Humans , Mass Screening/standards , Mental Health/standards , Primary Health Care/standards , Treatment Outcome
17.
Biopharm Drug Dispos ; 42(8): 393-398, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34272891

ABSTRACT

P-glycoprotein (P-gp) is an efflux pump implicated in pharmacokinetics and drug-drug interactions. The identification of its substrates is consequently an important issue, notably for drugs under development. For such a purpose, various in silico methods have been developed, but their relevance remains to be fully established. The present study was designed to gain insight into this point by determining the performance of six freely accessible Web-tools (ADMETlab, AdmetSAR2.0, PgpRules, pkCSM, SwissADME and vNN-ADMET) that computationally predict P-gp-mediated transport. Using an external test set of 231 marketed drugs, approved over the 2010-2020 period by the US Food and Drug Administration and fully characterized in vitro for their P-gp substrate status, various performance parameters (including sensitivity, specificity, accuracy, Matthews correlation coefficient and area under the receiver operating characteristic curve) were determined. They rather poorly met the criteria commonly required for acceptable prediction, whether the Web-tools were used alone or in combination. Predictions of being a P-gp substrate or non-substrate by these online in silico methods should therefore be considered with caution.
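The benchmark metrics named in this abstract are all derived from a binary confusion matrix (AUROC is the exception, as it requires ranked scores rather than counts). A minimal sketch, using hypothetical counts rather than the study's results:

```python
import math

def binary_metrics(tp, fp, tn, fn):
    """Standard confusion-matrix metrics used to benchmark binary predictors."""
    sensitivity = tp / (tp + fn)                    # true-positive rate
    specificity = tn / (tn + fp)                    # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    # Matthews correlation coefficient: balanced even for skewed classes.
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return sensitivity, specificity, accuracy, mcc

# Hypothetical counts for a 231-drug test set (not the study's data).
sens, spec, acc, mcc = binary_metrics(tp=60, fp=40, tn=100, fn=31)
print(f"sens={sens:.2f} spec={spec:.2f} acc={acc:.2f} MCC={mcc:.2f}")
```

An MCC near zero despite moderate accuracy is exactly the pattern that makes a predictor fail the "acceptable prediction" criteria the study applies.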


Subject(s)
ATP Binding Cassette Transporter, Subfamily B, Member 1/metabolism , Computer Simulation/standards , Drug Development , Drug Interactions , Pharmacokinetics , Drug Approval , Drug Development/methods , Drug Development/trends , Humans , Predictive Value of Tests , Proof of Concept Study , Reproducibility of Results , United States
18.
PLoS One ; 16(5): e0250959, 2021.
Article in English | MEDLINE | ID: mdl-33970949

ABSTRACT

Compression at a very low bit rate (≤0.5 bpp) causes degradation in video frames with standard decoding algorithms like H.261, H.262, H.264, MPEG-1 and MPEG-4, which themselves produce many artifacts. This paper focuses on an efficient pre- and post-processing technique (PP-AFT) to address and rectify the problems of quantization error, ringing, blocking artifacts, and the flickering effect, which significantly degrade the visual quality of video frames. The PP-AFT method uses an activity function to classify the blocked images or frames into different regions and develops adaptive filters for each classified region. The designed process also introduces an adaptive flicker extraction and removal method and a 2-D filter to remove ringing effects in edge regions. The PP-AFT technique is implemented on various videos, and results are compared with different existing techniques using performance metrics like PSNR-B, MSSIM, and GBIM. Simulation results show significant improvement in the subjective quality of different video frames. The proposed method outperforms state-of-the-art de-blocking methods in terms of PSNR-B, with average gains between 0.7 and 1.9 dB, while reducing average GBIM by 35.83-47.7% and keeping MSSIM values very close to those of the original sequence (approximately 0.978).
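Of the metrics reported here, PSNR-B and GBIM are blockiness-specific variants, but both build on the basic PSNR computation between a reference and a degraded frame, which can be sketched as follows (a generic PSNR implementation, not the paper's PP-AFT code):

```python
import numpy as np

def psnr(original, degraded, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized frames."""
    diff = original.astype(np.float64) - degraded.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

# Synthetic 8-bit frame plus mild uniform noise, standing in for a
# decoded low-bit-rate frame.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
noise = rng.integers(-5, 6, size=frame.shape)
noisy = np.clip(frame.astype(int) + noise, 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(frame, noisy):.1f} dB")
```

PSNR-B extends this by adding a penalty term computed only on pixels that straddle coding-block boundaries, which is why it is the preferred metric for de-blocking comparisons.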


Subject(s)
Algorithms , Computer Simulation/standards , Data Compression/methods , Image Enhancement/methods , Signal-To-Noise Ratio , Artifacts , Humans
19.
Nature ; 594(7861): 106-110, 2021 06.
Article in English | MEDLINE | ID: mdl-33953404

ABSTRACT

Cancer of unknown primary (CUP) origin is an enigmatic group of diagnoses in which the primary anatomical site of tumour origin cannot be determined [1,2]. This poses a considerable challenge, as modern therapeutics are predominantly specific to the primary tumour [3]. Recent research has focused on using genomics and transcriptomics to identify the origin of a tumour [4-9]. However, genomic testing is not always performed and lacks clinical penetration in low-resource settings. Here, to overcome these challenges, we present a deep-learning-based algorithm, Tumour Origin Assessment via Deep Learning (TOAD), that can provide a differential diagnosis for the origin of the primary tumour using routinely acquired histology slides. We used whole-slide images of tumours with known primary origins to train a model that simultaneously identifies the tumour as primary or metastatic and predicts its site of origin. On our held-out test set of tumours with known primary origins, the model achieved a top-1 accuracy of 0.83 and a top-3 accuracy of 0.96, whereas on our external test set it achieved top-1 and top-3 accuracies of 0.80 and 0.93, respectively. We further curated a dataset of 317 cases of CUP for which a differential diagnosis was assigned. Our model predictions resulted in concordance for 61% of cases and a top-3 agreement of 82%. TOAD can be used as an assistive tool to assign a differential diagnosis to complicated cases of metastatic tumours and CUPs and could be used in conjunction with or in lieu of ancillary tests and extensive diagnostic work-ups to reduce the occurrence of CUP.
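Top-1 and top-3 accuracy, as reported here, simply ask whether the true origin appears among the model's k highest-scoring classes. A minimal sketch with a hypothetical 3-class example (not TOAD's actual class set):

```python
import numpy as np

def top_k_accuracy(probs, labels, k):
    """Fraction of cases whose true label is among the k highest scores.

    probs:  (n_cases, n_classes) array of class scores
    labels: length-n_cases list of true class indices
    """
    topk = np.argsort(probs, axis=1)[:, -k:]  # indices of the k largest scores
    hits = [label in row for row, label in zip(topk, labels)]
    return float(np.mean(hits))

probs = np.array([[0.6, 0.3, 0.1],
                  [0.2, 0.5, 0.3],
                  [0.1, 0.2, 0.7]])
labels = [0, 2, 1]
print(top_k_accuracy(probs, labels, k=1))  # only the first case is a top-1 hit
print(top_k_accuracy(probs, labels, k=2))  # all three cases are top-2 hits
```

The gap between top-1 and top-3 figures (0.83 vs. 0.96) reflects how often the correct origin is a close runner-up rather than the single best guess, which is why the authors frame TOAD as a differential-diagnosis aid.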


Subject(s)
Artificial Intelligence , Computer Simulation , Neoplasms, Unknown Primary/pathology , Cohort Studies , Computer Simulation/standards , Female , Humans , Male , Neoplasm Metastasis/pathology , Neoplasms, Unknown Primary/diagnosis , Reproducibility of Results , Sensitivity and Specificity , Workflow
20.
Traffic Inj Prev ; 22(5): 384-389, 2021.
Article in English | MEDLINE | ID: mdl-33881358

ABSTRACT

OBJECTIVE: Road traffic laws explicitly refer to a safe and cautious driving style as a means of ensuring safety. For automated vehicles to adhere to these laws, objective measurements of safe and cautious behavior in normal driving conditions are required. This paper describes the conception, implementation and initial testing of an objective scoring system that assigns safety indexes to observed driving style, and aggregates them to provide an overall safety score for a given driving session. METHODS: The safety score was developed by matching safety indexes with maneuver-based parameter ranges processed from an existing highway traffic data set with a newly developed algorithm. The concept stands on the idea that safety, rather than suddenly changing from a safe to an unsafe condition at a certain parameter value, can be better modeled as a continuum of values that consider the safety margins available for interactions among multiple vehicles and that depend on present traffic conditions. A sensitivity test of the developed safety score was conducted by comparing the results of applying the algorithm to two drivers in a simulator who were instructed to drive normally and risky, respectively. RESULTS: The evaluation of normal driving statistics provided suitable ranges for safety parameters like vehicle distances, time headways, and time to collision based on real traffic data. The sensitivity test provided preliminary evidence that the scoring method can discriminate between safe and risky drivers based on their driving style. In contrast to previous approaches, collision situations are not needed for this assessment. CONCLUSIONS: The developed safety score shows potential for assessing the level of safety of automated vehicle (AV) behavior in traffic, including AV ability to avoid exposure to collision-prone situations. Occasional bad scores may occur even for good drivers or autonomously driving vehicles. However, if the safety index becomes low during a significant part of a driving session, due to frequent or harsh safety margin violations, the corresponding driving style should not be accepted for driving in real traffic.
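The maneuver-based parameters this abstract mentions (time headway, time to collision) and the idea of a safety continuum rather than a binary threshold can be sketched as below. The linear ramp and its boundary values are illustrative assumptions; the paper derives its parameter ranges from real highway traffic data:

```python
def time_headway(gap_m, follower_speed_mps):
    """Seconds until the follower covers the current gap at its own speed."""
    return gap_m / follower_speed_mps if follower_speed_mps > 0 else float("inf")

def time_to_collision(gap_m, follower_speed_mps, leader_speed_mps):
    """Seconds until impact at constant speeds; inf if the gap is not closing."""
    closing = follower_speed_mps - leader_speed_mps
    return gap_m / closing if closing > 0 else float("inf")

def safety_index(value, safe_at, critical_at):
    """Map a margin parameter onto a 0-1 continuum: 1 = safe, 0 = critical."""
    if value >= safe_at:
        return 1.0
    if value <= critical_at:
        return 0.0
    return (value - critical_at) / (safe_at - critical_at)

# One follower/leader interaction at a single time step.
thw = time_headway(gap_m=25.0, follower_speed_mps=25.0)          # 1.0 s
ttc = time_to_collision(25.0, 30.0, 25.0)                        # 5.0 s
score = min(safety_index(thw, safe_at=2.0, critical_at=0.5),
            safety_index(ttc, safe_at=6.0, critical_at=1.5))
print(f"THW={thw:.1f}s TTC={ttc:.1f}s safety index={score:.2f}")
```

Aggregating such per-interaction indexes over a driving session, as the paper proposes, yields a score that degrades gradually with shrinking margins instead of flipping only when a collision-prone event occurs.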


Subject(s)
Accidents, Traffic/prevention & control , Automobile Driving/standards , Computer Simulation/standards , Safety/standards , Algorithms , Automobile Driver Examination , Humans , Risk-Taking