Results 1 - 20 of 41
1.
Proc Natl Acad Sci U S A ; 120(41): e2301842120, 2023 10 10.
Article in English | MEDLINE | ID: mdl-37782786

ABSTRACT

One of the most troubling trends in criminal investigations is the growing use of "black box" technology, in which law enforcement rely on artificial intelligence (AI) models or algorithms that are either too complex for people to understand or that simply conceal how they function. In criminal cases, black box systems have proliferated in forensic areas such as DNA mixture interpretation, facial recognition, and recidivism risk assessments. The champions and critics of AI argue, mistakenly, that we face a catch-22: while black box AI is not understandable by people, it is assumed to produce more accurate forensic evidence. In this Article, we question this assertion, which has so powerfully affected judges, policymakers, and academics. We describe a mature body of computer science research showing how "glass box" AI, designed to be interpretable, can be more accurate than black box alternatives. Indeed, black box AI performs predictably worse in settings like the criminal system. Debunking the black box performance myth has implications for forensic evidence, constitutional criminal procedure rights, and legislative policy. Absent some compelling, or even credible, government interest in keeping AI as a black box, and given the constitutional rights and public safety interests at stake, we argue that a substantial burden rests on the government to justify black box AI in criminal cases. We conclude by calling for judicial rulings and legislation to safeguard a right to interpretable forensic AI.


Subjects
Artificial Intelligence, Criminals, Humans, Forensic Medicine, Law Enforcement, Algorithms
2.
Radiology ; 310(3): e232780, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38501952

ABSTRACT

Background Mirai, a state-of-the-art deep learning-based algorithm for predicting short-term breast cancer risk, outperforms standard clinical risk models. However, Mirai is a black box, risking overreliance on the algorithm and incorrect diagnoses. Purpose To identify whether bilateral dissimilarity underpins Mirai's reasoning process; create a simplified, intelligible model, AsymMirai, using bilateral dissimilarity; and determine if AsymMirai may approximate Mirai's performance in 1-5-year breast cancer risk prediction. Materials and Methods This retrospective study involved mammograms obtained from patients in the EMory BrEast imaging Dataset, known as EMBED, from January 2013 to December 2020. To approximate 1-5-year breast cancer risk predictions from Mirai, another deep learning-based model, AsymMirai, was built with an interpretable module: local bilateral dissimilarity (localized differences between left and right breast tissue). Pearson correlation coefficients were computed between the risk scores of Mirai and those of AsymMirai. Subgroup analysis was performed in patients for whom AsymMirai's year-over-year reasoning was consistent. AsymMirai and Mirai risk scores were compared using the area under the receiver operating characteristic curve (AUC), and 95% CIs were calculated using the DeLong method. Results Screening mammograms (n = 210 067) from 81 824 patients (mean age, 59.4 years ± 11.4 [SD]) were included in the study. Deep learning-extracted bilateral dissimilarity produced similar risk scores to those of Mirai (1-year risk prediction, r = 0.6832; 4-5-year prediction, r = 0.6988) and achieved similar performance as Mirai. For AsymMirai, the 1-year breast cancer risk AUC was 0.79 (95% CI: 0.73, 0.85) (Mirai, 0.84; 95% CI: 0.79, 0.89; P = .002), and the 5-year risk AUC was 0.66 (95% CI: 0.63, 0.69) (Mirai, 0.71; 95% CI: 0.68, 0.74; P < .001). 
In a subgroup of 183 patients for whom AsymMirai repeatedly highlighted the same tissue over time, AsymMirai achieved a 3-year AUC of 0.92 (95% CI: 0.86, 0.97). Conclusion Localized bilateral dissimilarity, an imaging marker for breast cancer risk, approximated the predictive power of Mirai and was a key to Mirai's reasoning. © RSNA, 2024. Supplemental material is available for this article. See also the editorial by Freitas in this issue.


Subjects
Breast Neoplasms, Deep Learning, Humans, Middle Aged, Female, Breast Neoplasms/diagnostic imaging, Breast Neoplasms/epidemiology, Retrospective Studies, Mammography, Breast
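The abstract's key statistic is the Pearson correlation between the two models' risk scores. As a minimal, self-contained sketch of that computation (the scores below are made up for illustration, not EMBED data):

```python
import math

def pearson_r(xs, ys):
    # Pearson correlation coefficient between two equal-length lists of scores.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical 1-year risk scores for the same five screening exams.
mirai_scores = [0.12, 0.35, 0.50, 0.70, 0.90]
asymmirai_scores = [0.10, 0.30, 0.55, 0.65, 0.95]
r = pearson_r(mirai_scores, asymmirai_scores)
```

An r around 0.68, as the study reports for the real models, would indicate substantial but imperfect agreement between the interpretable surrogate and the black box.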
3.
J Infect Dis ; 228(11): 1600-1609, 2023 11 28.
Article in English | MEDLINE | ID: mdl-37606598

ABSTRACT

BACKGROUND: Human immunodeficiency virus (HIV) infection remains incurable due to the persistence of a viral reservoir despite antiretroviral therapy (ART). Cannabis (CB) use is prevalent amongst people with HIV (PWH), but the impact of CB on the latent HIV reservoir has not been investigated. METHODS: Peripheral blood cells from a cohort of PWH who use CB and a matched cohort of PWH who do not use CB on ART were evaluated for expression of maturation/activation markers, HIV-specific T-cell responses, and intact proviral DNA. RESULTS: CB use was associated with increased abundance of naive T cells, reduced effector T cells, and reduced expression of activation markers. CB use was also associated with reduced levels of exhausted and senescent T cells compared to nonusing controls. HIV-specific T-cell responses were unaffected by CB use. CB use was not associated with intact or total HIV DNA frequency in CD4 T cells. CONCLUSIONS: This analysis is consistent with the hypothesis that CB use reduces activation, exhaustion, and senescence in the T cells of PWH, and does not impair HIV-specific CD8 T-cell responses. Longitudinal and interventional studies with evaluation of CB exposure are needed to fully evaluate the impact of CB use on the HIV reservoir.


Subjects
Cannabis, HIV Infections, HIV-1, Humans, Cannabis/genetics, HIV-1/genetics, Virus Latency, CD4-Positive T-Lymphocytes, DNA, Viral Load, Anti-Retroviral Agents/therapeutic use, DNA, Viral/genetics
4.
Reprod Biomed Online ; 45(1): 10-13, 2022 07.
Article in English | MEDLINE | ID: mdl-35523713

ABSTRACT

The last decade has seen an explosion of machine learning applications in healthcare, with mixed and sometimes harmful results despite much promise and associated hype. A significant reason for the reversal in the reported benefit of these applications is the premature implementation of machine learning algorithms in clinical practice. This paper argues the critical need for 'data solidarity' for machine learning for embryo selection. A recent Lancet and Financial Times commission defined data solidarity as 'an approach to the collection, use, and sharing of health data and data for health that safeguards individual human rights while building a culture of data justice and equity, and ensuring that the value of data is harnessed for public good' (Kickbusch et al., 2021).


Subjects
Access to Information, Social Justice, Humans, Machine Learning
5.
Biostatistics ; 20(4): 549-564, 2019 10 01.
Article in English | MEDLINE | ID: mdl-29741607

ABSTRACT

In many clinical settings, a patient outcome takes the form of a scalar time series with a recovery curve shape, characterized by a sharp drop due to a disruptive event (e.g., surgery) and a subsequent monotonic smooth rise towards an asymptotic level not exceeding the pre-event value. We propose a Bayesian model that predicts recovery curves based on information available before the disruptive event. A recovery curve of interest is the quantified sexual function of prostate cancer patients after prostatectomy surgery. We illustrate the utility of our model as a pre-treatment medical decision aid, producing personalized predictions that are both interpretable and accurate. We uncover covariate relationships that agree with and supplement those in the existing medical literature.


Subjects
Decision Support Techniques, Models, Statistical, Outcome Assessment, Health Care/statistics & numerical data, Prostatectomy/statistics & numerical data, Aged, Bayes Theorem, Humans, Male, Middle Aged, Prostatectomy/adverse effects
6.
J Neurosci ; 38(7): 1601-1607, 2018 02 14.
Article in English | MEDLINE | ID: mdl-29374138

ABSTRACT

With ever-increasing advancements in technology, neuroscientists are able to collect data in greater volumes and with finer resolution. The bottleneck in understanding how the brain works is consequently shifting away from the amount and type of data we can collect and toward what we actually do with the data. There has been a growing interest in leveraging this vast volume of data across levels of analysis, measurement techniques, and experimental paradigms to gain more insight into brain function. Such efforts are visible at an international scale, with the emergence of big data neuroscience initiatives, such as the BRAIN initiative (Bargmann et al., 2014), the Human Brain Project, the Human Connectome Project, and the National Institute of Mental Health's Research Domain Criteria initiative. With these large-scale projects, much thought has been given to data-sharing across groups (Poldrack and Gorgolewski, 2014; Sejnowski et al., 2014); however, even with such data-sharing initiatives, funding mechanisms, and infrastructure, there still exists the challenge of how to cohesively integrate all the data. At multiple stages and levels of neuroscience investigation, machine learning holds great promise as an addition to the arsenal of analysis tools for discovering how the brain works.


Subjects
Machine Learning/trends, Neurosciences/trends, Animals, Big Data, Brain/physiology, Connectome, Humans, Information Dissemination, Reproducibility of Results
7.
Chaos ; 26(6): 063110, 2016 06.
Article in English | MEDLINE | ID: mdl-27368775

ABSTRACT

Dynamical systems are frequently used to model biological systems. When these models are fit to data, it is necessary to ascertain the uncertainty in the model fit. Here, we present prediction deviation, a metric of uncertainty that determines the extent to which observed data have constrained the model's predictions. This is accomplished by solving an optimization problem that searches for a pair of models that each provides a good fit for the observed data, yet has maximally different predictions. We develop a method for estimating a priori the impact that additional experiments would have on the prediction deviation, allowing the experimenter to design a set of experiments that would most reduce uncertainty. We use prediction deviation to assess uncertainty in a model of interferon-alpha inhibition of viral infection, and to select a sequence of experiments that reduces this uncertainty. Finally, we prove a theoretical result which shows that prediction deviation provides bounds on the trajectories of the underlying true model. These results show that prediction deviation is a meaningful metric of uncertainty that can be used for optimal experimental design.


Subjects
Learning, Models, Theoretical, Uncertainty, HIV Infections, Humans
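Prediction deviation is concrete enough to sketch in a few lines: among all parameter settings that fit the observed data almost as well as the best fit, find the pair whose predictions at an unobserved point differ the most. The toy below uses a hypothetical one-parameter model y = a*x and a grid search in place of the paper's optimization problem and viral-infection model:

```python
def sse(a, data):
    # Sum of squared errors of the model y = a * x on (x, y) pairs.
    return sum((y - a * x) ** 2 for x, y in data)

data = [(1.0, 1.1), (2.0, 1.9)]            # observed measurements
grid = [i / 100 for i in range(0, 201)]    # candidate slopes a in [0, 2]
best = min(sse(a, data) for a in grid)
tolerance = 0.05                           # "also fits the data well"
good = [a for a in grid if sse(a, data) <= best + tolerance]

x_query = 10.0                             # condition never observed
predictions = [a * x_query for a in good]
deviation = max(predictions) - min(predictions)  # prediction deviation
```

A large deviation means the data have not constrained the model at x_query; the paper's experiment-design step then picks the measurement that would shrink this gap the most.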
8.
ArXiv ; 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-37808086

ABSTRACT

Quantifying variable importance is essential for answering high-stakes questions in fields like genetics, public policy, and medicine. Current methods generally calculate variable importance for a given model trained on a given dataset. However, for a given dataset, there may be many models that explain the target outcome equally well; without accounting for all possible explanations, different researchers may arrive at many conflicting yet equally valid conclusions given the same data. Additionally, even when accounting for all possible explanations for a given dataset, these insights may not generalize because not all good explanations are stable across reasonable data perturbations. We propose a new variable importance framework that quantifies the importance of a variable across the set of all good models and is stable across the data distribution. Our framework is extremely flexible and can be integrated with most existing model classes and global variable importance metrics. We demonstrate through experiments that our framework recovers variable importance rankings for complex simulation setups where other methods fail. Further, we show that our framework accurately estimates the true importance of a variable for the underlying data distribution. We provide theoretical guarantees on the consistency and finite sample error rates for our estimator. Finally, we demonstrate its utility with a real-world case study exploring which genes are important for predicting HIV load in persons with HIV, highlighting an important gene that has not previously been studied in connection with HIV. Code is available at https://github.com/jdonnelly36/Rashomon_Importance_Distribution.

9.
Article in English | MEDLINE | ID: mdl-38867375

ABSTRACT

BACKGROUND/OBJECTIVES: Epileptiform activity (EA), including seizures and periodic patterns, worsens outcomes in patients with acute brain injuries (e.g., aneurysmal subarachnoid hemorrhage [aSAH]). Randomized controlled trials (RCTs) assessing anti-seizure interventions are needed. Due to scant drug efficacy data, ethical reservations about placebo utilization, and the complex physiology of acute brain injury, RCTs are lacking or hindered by design constraints. We used a pharmacological model-guided simulator to design and determine the feasibility of RCTs evaluating EA treatment. METHODS: In a single-center cohort of adults (age >18) with aSAH and EA, we employed a mechanistic pharmacokinetic-pharmacodynamic framework to model treatment response using observational data. We subsequently simulated RCTs for levetiracetam and propofol, each with three treatment arms mirroring clinical practice and an additional placebo arm. Using our framework, we simulated EA trajectories across treatment arms. We predicted discharge modified Rankin Scale as a function of baseline covariates, EA burden, and drug doses using a double machine learning model learned from observational data. Differences in outcomes across arms were used to estimate the required sample size. RESULTS: Sample sizes ranged from 500 for levetiracetam 7 mg/kg versus placebo, to >4000 for levetiracetam 15 versus 7 mg/kg, to achieve 80% power (5% type I error). For propofol 1 mg/kg/h versus placebo, 1200 participants were needed. Simulations comparing propofol at varying doses did not reach 80% power even at samples >1200. CONCLUSIONS: Our simulations using drug efficacy data show that such sample sizes are infeasible, even for potentially unethical placebo-controlled trials. We highlight the strength of simulations with observational data to inform the null hypotheses and propose use of this simulation-based RCT paradigm to assess the feasibility of future trials of anti-seizure treatment in acute brain injury.

10.
IEEE J Biomed Health Inform ; 28(5): 2650-2661, 2024 May.
Article in English | MEDLINE | ID: mdl-38300786

ABSTRACT

Atrial fibrillation (AF) is a common cardiac arrhythmia with serious health consequences if not detected and treated early. Detecting AF using wearable devices with photoplethysmography (PPG) sensors and deep neural networks has demonstrated some success using proprietary algorithms in commercial solutions. However, to improve continuous AF detection in ambulatory settings towards a population-wide screening use case, we face several challenges, one of which is the lack of large-scale labeled training data. To address this challenge, we propose to leverage AF alarms from bedside patient monitors to label concurrent PPG signals, resulting in the largest PPG-AF dataset so far (8.5 M 30-second records from 24,100 patients) and demonstrating a practical approach to build large labeled PPG datasets. Furthermore, we recognize that the AF labels thus obtained contain errors because of false AF alarms generated from imperfect built-in algorithms from bedside monitors. Dealing with label noise with unknown distribution characteristics in this case requires advanced algorithms. We, therefore, introduce and open-source a novel loss design, the cluster membership consistency (CMC) loss, to mitigate label errors. By comparing CMC with state-of-the-art methods selected from a noisy label competition, we demonstrate its superiority in handling label noise in PPG data, resilience to poor-quality signals, and computational efficiency.


Subjects
Algorithms, Atrial Fibrillation, Photoplethysmography, Signal Processing, Computer-Assisted, Humans, Photoplethysmography/methods, Atrial Fibrillation/physiopathology, Atrial Fibrillation/diagnosis, Clinical Alarms, Machine Learning, Wearable Electronic Devices
11.
Article in English | MEDLINE | ID: mdl-38902848

ABSTRACT

Despite the success of antiretroviral therapy, human immunodeficiency virus (HIV) cannot be cured because of a reservoir of latently infected cells that evades therapy. To understand the mechanisms of HIV latency, we employed an integrated single-cell RNA sequencing (scRNA-seq) and single-cell assay for transposase-accessible chromatin with sequencing (scATAC-seq) approach to simultaneously profile the transcriptomic and epigenomic characteristics of ∼ 125,000 latently infected primary CD4+ T cells after reactivation using three different latency reversing agents. Differentially expressed genes and differentially accessible motifs were used to examine transcriptional pathways and transcription factor (TF) activities across the cell population. We identified cellular transcripts and TFs whose expression/activity was correlated with viral reactivation and demonstrated that a machine learning model trained on these data was 75%-79% accurate at predicting viral reactivation. Finally, we validated the role of two candidate HIV-regulating factors, FOXP1 and GATA3, in viral transcription. These data demonstrate the power of integrated multimodal single-cell analysis to uncover novel relationships between host cell factors and HIV latency.


Subjects
CD4-Positive T-Lymphocytes, GATA3 Transcription Factor, HIV-1, Single-Cell Analysis, Virus Activation, Virus Latency, Virus Latency/genetics, Humans, Virus Activation/genetics, Single-Cell Analysis/methods, HIV-1/genetics, HIV-1/physiology, CD4-Positive T-Lymphocytes/virology, CD4-Positive T-Lymphocytes/metabolism, GATA3 Transcription Factor/metabolism, GATA3 Transcription Factor/genetics, Forkhead Transcription Factors/metabolism, Forkhead Transcription Factors/genetics, HIV Infections/virology, HIV Infections/genetics, HIV Infections/metabolism, Repressor Proteins/metabolism, Repressor Proteins/genetics, Transcriptome/genetics, Gene Expression Regulation, Viral
12.
NEJM AI ; 1(6)2024 Jun.
Article in English | MEDLINE | ID: mdl-38872809

ABSTRACT

BACKGROUND: In intensive care units (ICUs), critically ill patients are monitored with electroencephalography (EEG) to prevent serious brain injury. EEG monitoring is constrained by clinician availability, and EEG interpretation can be subjective and prone to interobserver variability. Automated deep-learning systems for EEG could reduce human bias and accelerate the diagnostic process. However, existing uninterpretable (black-box) deep-learning models are untrustworthy, difficult to troubleshoot, and lack accountability in real-world applications, leading to a lack of both trust and adoption by clinicians. METHODS: We developed an interpretable deep-learning system that accurately classifies six patterns of potentially harmful EEG activity - seizure, lateralized periodic discharges (LPDs), generalized periodic discharges (GPDs), lateralized rhythmic delta activity (LRDA), generalized rhythmic delta activity (GRDA), and other patterns - while providing faithful case-based explanations of its predictions. The model was trained on 50,697 total 50-second continuous EEG samples collected from 2711 patients in the ICU between July 2006 and March 2020 at Massachusetts General Hospital. EEG samples were labeled as one of the six EEG patterns by 124 domain experts and trained annotators. To evaluate the model, we asked eight medical professionals with relevant backgrounds to classify 100 EEG samples into the six pattern categories - once with and once without artificial intelligence (AI) assistance - and we assessed the assistive power of this interpretable system by comparing the diagnostic accuracy of the two methods. The model's discriminatory performance was evaluated with area under the receiver-operating characteristic curve (AUROC) and area under the precision-recall curve. The model's interpretability was measured with task-specific neighborhood agreement statistics that interrogated the similarities of samples and features. 
In a separate analysis, the latent space of the neural network was visualized by using dimension reduction techniques to examine whether the ictal-interictal injury continuum hypothesis, which asserts that seizures and seizure-like patterns of brain activity lie along a spectrum, is supported by data. RESULTS: The performance of all users significantly improved when provided with AI assistance. Mean user diagnostic accuracy improved from 47 to 71% (P<0.04). The model achieved AUROCs of 0.87, 0.93, 0.96, 0.92, 0.93, and 0.80 for the classes seizure, LPD, GPD, LRDA, GRDA, and other patterns, respectively. This performance was significantly higher than that of a corresponding uninterpretable black-box model (with P<0.0001). Videos traversing the ictal-interictal injury manifold from dimension reduction (a two-dimensional representation of the original high-dimensional feature space) give insight into the layout of EEG patterns within the network's latent space and illuminate relationships between EEG patterns that were previously hypothesized but had not yet been shown explicitly. These results indicate that the ictal-interictal injury continuum hypothesis is supported by data. CONCLUSIONS: Users showed significant pattern classification accuracy improvement with the assistance of this interpretable deep-learning model. The interpretable design facilitates effective human-AI collaboration; this system may improve diagnosis and patient care in clinical settings. The model may also provide a better understanding of how EEG patterns relate to each other along the ictal-interictal injury continuum. (Funded by the National Science Foundation, National Institutes of Health, and others.).

13.
Adv Neural Inf Process Syst ; 36: 3362-3401, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38577617

ABSTRACT

The Rashomon set is the set of models that perform approximately equally well on a given dataset, and the Rashomon ratio is the fraction of all models in a given hypothesis space that are in the Rashomon set. Rashomon ratios are often large for tabular datasets in criminal justice, healthcare, lending, education, and in other areas, which has practical implications about whether simpler models can attain the same level of accuracy as more complex models. An open question is why Rashomon ratios often tend to be large. In this work, we propose and study a mechanism of the data generation process, coupled with choices usually made by the analyst during the learning process, that determines the size of the Rashomon ratio. Specifically, we demonstrate that noisier datasets lead to larger Rashomon ratios through the way that practitioners train models. Additionally, we introduce a measure called pattern diversity, which captures the average difference in predictions between distinct classification patterns in the Rashomon set, and motivate why it tends to increase with label noise. Our results explain a key aspect of why simpler models often tend to perform as well as black box models on complex, noisier datasets.
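Both quantities are easy to make concrete. The sketch below builds a toy hypothesis space of threshold classifiers on hypothetical 1-D data (none of it from the paper's experiments) and computes the Rashomon set and Rashomon ratio:

```python
# Toy data: points below 0.5 are labeled 0, points at or above are labeled 1.
xs = [0.1, 0.2, 0.4, 0.5, 0.7, 0.9]
ys = [0, 0, 0, 1, 1, 1]

def error(t):
    # Fraction of points misclassified by the rule "predict 1 iff x >= t".
    return sum((x >= t) != bool(y) for x, y in zip(xs, ys)) / len(xs)

thresholds = [i / 10 for i in range(11)]   # the whole hypothesis space
best = min(error(t) for t in thresholds)
epsilon = 1 / 6                            # tolerate one extra mistake
rashomon_set = [t for t in thresholds if error(t) <= best + epsilon]
rashomon_ratio = len(rashomon_set) / len(thresholds)
```

Raising epsilon, or adding label noise to ys, enlarges the Rashomon set; that is the paper's central mechanism in miniature.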

14.
Proc AAAI Conf Artif Intell ; 37(9): 11270-11279, 2023 Jun.
Article in English | MEDLINE | ID: mdl-38650922

ABSTRACT

Regression trees are one of the oldest forms of AI models, and their predictions can be made without a calculator, which makes them broadly useful, particularly for high-stakes applications. Within the large literature on regression trees, there has been little effort towards full provable optimization, mainly due to the computational hardness of the problem. This work proposes a dynamic-programming-with-bounds approach to the construction of provably optimal sparse regression trees. We leverage a novel lower bound based on an optimal solution to the k-means clustering problem on one-dimensional data. We are often able to find optimal sparse trees in seconds, even for challenging datasets that involve large numbers of samples and highly correlated features.

15.
Adv Neural Inf Process Syst ; 36: 41076-41258, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38505104

ABSTRACT

We consider an important problem in scientific discovery, namely identifying sparse governing equations for nonlinear dynamical systems. This involves solving sparse ridge regression problems to provable optimality in order to determine which terms drive the underlying dynamics. We propose a fast algorithm, OKRidge, for sparse ridge regression, using a novel lower bound calculation involving, first, a saddle point formulation, and from there, either solving (i) a linear system or (ii) using an ADMM-based approach, where the proximal operators can be efficiently evaluated by solving another linear system and an isotonic regression problem. We also propose a method to warm-start our solver, which leverages a beam search. Experimentally, our methods attain provable optimality with run times that are orders of magnitude faster than those of the existing MIP formulations solved by the commercial solver Gurobi.
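The objective OKRidge targets can be stated via a brute-force baseline: for each candidate support, solve the ridge subproblem restricted to that support and keep the best. The sketch below does this for hypothetical toy data with support size 1, where the restricted ridge fit reduces to the scalar formula beta = x·y / (x·x + λ); it illustrates the search space, not the paper's algorithm:

```python
def ridge_objective(y, x, beta, lam):
    # Penalized squared error for the single-feature model y ~ beta * x.
    return sum((yi - beta * xi) ** 2 for yi, xi in zip(y, x)) + lam * beta ** 2

def best_single_feature(y, X, lam):
    # Exhaustive search over supports of size 1: fit ridge on each feature
    # alone, return (feature index, coefficient, objective) of the best.
    best = None
    for j, x in enumerate(X):
        beta = sum(a * b for a, b in zip(x, y)) / (sum(a * a for a in x) + lam)
        obj = ridge_objective(y, x, beta, lam)
        if best is None or obj < best[2]:
            best = (j, beta, obj)
    return best

y = [1.0, 2.0, 3.0]
X = [[1.0, 2.0, 3.0],   # feature 0 tracks y closely
     [1.0, 0.0, 1.0]]   # feature 1 is uninformative
j, beta, obj = best_single_feature(y, X, lam=0.1)
```

Enumerating supports explodes combinatorially as the support size grows, which is exactly where OKRidge's saddle-point lower bound and beam-search warm start pay off.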

16.
Adv Neural Inf Process Syst ; 36: 56673-56699, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38623077

ABSTRACT

In real applications, interaction between machine learning models and domain experts is critical; however, the classical machine learning paradigm that usually produces only a single model does not facilitate such interaction. Approximating and exploring the Rashomon set, i.e., the set of all near-optimal models, addresses this practical challenge by providing the user with a searchable space containing a diverse set of models from which domain experts can choose. We present algorithms to efficiently and accurately approximate the Rashomon set of sparse, generalized additive models with ellipsoids for fixed support sets and use these ellipsoids to approximate Rashomon sets for many different support sets. The approximated Rashomon set serves as a cornerstone to solve practical challenges such as (1) studying the variable importance for the model class; (2) finding models under user-specified constraints (monotonicity, direct editing); and (3) investigating sudden changes in the shape functions. Experiments demonstrate the fidelity of the approximated Rashomon set and its effectiveness in solving practical challenges.

17.
Data Brief ; 49: 109396, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37600123

ABSTRACT

Additive manufacturing has provided the ability to manufacture complex structures using a wide variety of materials and geometries. Structures such as triply periodic minimal surface (TPMS) lattices have been incorporated into products across many fields due to their unique combinations of mechanical, geometric, and physical properties. Yet, the near limitless possibility of combining geometry and material into these lattices leaves much to be discovered. This article provides a dataset of experimentally gathered tensile stress-strain curves and measured porosity values for 389 unique gyroid lattice structures manufactured using vat photopolymerization 3D printing. The lattice samples were printed from one of twenty different photopolymer materials available from either Formlabs, LOCTITE AM, or ETEC that range from strong and brittle to elastic and ductile and were printed on commercially available 3D printers, specifically the Formlabs Form2, Prusa SL1, and ETEC Envision One cDLM Mechanical. The stress-strain curves were recorded with an MTS Criterion C43.504 mechanical testing apparatus and following ASTM standards, and the void fraction or "porosity" of each lattice was measured using a calibrated scale. This data serves as a valuable resource for use in the development of novel printing materials and lattice geometries and provides insight into the influence of photopolymer material properties on the printability, geometric accuracy, and mechanical performance of 3D printed lattice structures. The data described in this article was used to train a machine learning model capable of predicting mechanical properties of 3D printed gyroid lattices based on the base mechanical properties of the printing material and porosity of the lattice in the research article [1].

18.
Nat Commun ; 14(1): 4838, 2023 08 10.
Article in English | MEDLINE | ID: mdl-37563117

ABSTRACT

Polymers are ubiquitous to almost every aspect of modern society and their use in medical products is similarly pervasive. Despite this, the diversity in commercial polymers used in medicine is stunningly low. Considerable time and resources have been expended over the years towards the development of new polymeric biomaterials which address unmet needs left by the current generation of medical-grade polymers. Machine learning (ML) presents an unprecedented opportunity in this field to bypass the need for trial-and-error synthesis, thus reducing the time and resources invested into new discoveries critical for advancing medical treatments. Current efforts pioneering applied ML in polymer design have employed combinatorial and high throughput experimental design to address data availability concerns. However, the lack of available and standardized characterization of parameters relevant to medicine, including degradation time and biocompatibility, represents a nearly insurmountable obstacle to ML-aided design of biomaterials. Herein, we identify a gap at the intersection of applied ML and biomedical polymer design, highlight current works at this junction more broadly, and provide an outlook on challenges and future directions.


Subjects
Biocompatible Materials, Polymers
19.
medRxiv ; 2023 Aug 22.
Article in English | MEDLINE | ID: mdl-37662339

ABSTRACT

Objectives: Epileptiform activity (EA) worsens outcomes in patients with acute brain injuries (e.g., aneurysmal subarachnoid hemorrhage [aSAH]). Randomized trials (RCTs) assessing anti-seizure interventions are needed. Due to scant drug efficacy data and ethical reservations with placebo utilization, RCTs are lacking or hindered by design constraints. We used a pharmacological model-guided simulator to design and determine the feasibility of RCTs evaluating EA treatment. Methods: In a single-center cohort of adults (age >18) with aSAH and EA, we employed a mechanistic pharmacokinetic-pharmacodynamic framework to model treatment response using observational data. We subsequently simulated RCTs for levetiracetam and propofol, each with three treatment arms mirroring clinical practice and an additional placebo arm. Using our framework, we simulated EA trajectories across treatment arms. We predicted discharge modified Rankin Scale as a function of baseline covariates, EA burden, and drug doses using a double machine learning model learned from observational data. Differences in outcomes across arms were used to estimate the required sample size. Results: Sample sizes ranged from 500 for levetiracetam 7 mg/kg vs. placebo, to >4000 for levetiracetam 15 vs. 7 mg/kg, to achieve 80% power (5% type I error). For propofol 1 mg/kg/hr vs. placebo, 1200 participants were needed. Simulations comparing propofol at varying doses did not reach 80% power even at samples >1200. Interpretation: Our simulations using drug efficacy data show that such sample sizes are infeasible, even for potentially unethical placebo-controlled trials. We highlight the strength of simulations with observational data to inform the null hypotheses and assess the feasibility of future trials of EA treatment.

20.
Lancet Digit Health ; 5(8): e495-e502, 2023 08.
Article in English | MEDLINE | ID: mdl-37295971

ABSTRACT

BACKGROUND: Epileptiform activity is associated with worse patient outcomes, including increased risk of disability and death. However, the effect of epileptiform activity on neurological outcome is confounded by the feedback between treatment with antiseizure medications and epileptiform activity burden. We aimed to quantify the heterogeneous effects of epileptiform activity with an interpretability-centred approach. METHODS: We did a retrospective, cross-sectional study of patients in the intensive care unit who were admitted to Massachusetts General Hospital (Boston, MA, USA). Participants were aged 18 years or older and had electrographic epileptiform activity identified by a clinical neurophysiologist or epileptologist. The outcome was the dichotomised modified Rankin Scale (mRS) at discharge and the exposure was epileptiform activity burden defined as mean or maximum proportion of time spent with epileptiform activity in 6 h windows in the first 24 h of electroencephalography. We estimated the change in discharge mRS if everyone in the dataset had experienced a specific epileptiform activity burden and were untreated. We combined pharmacological modelling with an interpretable matching method to account for confounding and epileptiform activity-antiseizure medication feedback. The quality of the matched groups was validated by the neurologists. FINDINGS: Between Dec 1, 2011, and Oct 14, 2017, 1514 patients were admitted to Massachusetts General Hospital intensive care unit, 995 (66%) of whom were included in the analysis. Compared with patients with a maximum epileptiform activity of 0 to less than 25%, patients with a maximum epileptiform activity burden of 75% or more when untreated had a mean 22·27% (SD 0·92) increased chance of a poor outcome (severe disability or death). Moderate but long-lasting epileptiform activity (mean epileptiform activity burden 2% to <10%) increased the risk of a poor outcome by mean 13·52% (SD 1·93). 
The effect sizes were heterogeneous depending on preadmission profile-eg, patients with hypoxic-ischaemic encephalopathy or acquired brain injury were more adversely affected compared with patients without these conditions. INTERPRETATION: Our results suggest that interventions should put a higher priority on patients with an average epileptiform activity burden 10% or greater, and treatment should be more conservative when maximum epileptiform activity burden is low. Treatment should also be tailored to individual preadmission profiles because the potential for epileptiform activity to cause harm depends on age, medical history, and reason for admission. FUNDING: National Institutes of Health and National Science Foundation.


Subjects
Critical Illness, Patient Discharge, United States, Humans, Retrospective Studies, Cross-Sectional Studies, Treatment Outcome