1.
Article in English | MEDLINE | ID: mdl-38767890

ABSTRACT

OBJECTIVES: Surface the urgent dilemma that healthcare delivery organizations (HDOs) face navigating the US Food and Drug Administration (FDA) final guidance on the use of clinical decision support (CDS) software. MATERIALS AND METHODS: We use sepsis as a case study to highlight the patient safety and regulatory compliance tradeoffs that 6129 hospitals in the United States must navigate. RESULTS: Sepsis CDS remains in broad, routine use. There is no commercially available sepsis CDS system that is FDA cleared as a medical device. There is no public disclosure of an HDO turning off sepsis CDS due to regulatory compliance concerns. And there is no public disclosure of FDA enforcement action against an HDO for using sepsis CDS that is not cleared as a medical device. DISCUSSION AND CONCLUSION: We present multiple policy interventions that would relieve the current tension to enable HDOs to utilize artificial intelligence to improve patient care while also addressing FDA concerns about product safety, efficacy, and equity.

2.
Chest ; 2024 May 22.
Article in English | MEDLINE | ID: mdl-38788896

ABSTRACT

BACKGROUND: The last national estimates of US intensive care unit (ICU) physician staffing are 25 years old and lack information about interprofessional teams. RESEARCH QUESTION: How are US adult ICUs currently staffed? STUDY DESIGN AND METHODS: We conducted a cross-sectional survey (05/04/2022-02/02/2023) of adult ICU clinicians (targeting nurse/physician leadership) contacted using 2020 American Hospital Association (AHA) database information and, secondarily, through professional organizations. The survey included questions about interprofessional ICU staffing availability and roles at steady-state (pre-COVID-19). We linked survey data to hospital data in the AHA database to create weighted national estimates by extrapolating ICU staffing data to non-respondent hospitals based on hospital characteristics. RESULTS: The cohort consisted of 596 adult ICUs (response rates: AHA contacts, 2.1%; professional organizations, unknown) with geographic diversity and size variability (median [interquartile range]: 20 [12,25] beds); most cared for mixed populations (414 [69.5%]), yet medical (55 [9.2%]), surgical (70 [11.7%]), and specialty (57 [9.6%]) ICUs were well represented. 554 (93.0%) had intensivists available, with intensivists covering all patients in 75.6% of these and onsite 24 hours/day in half (53.3% weekdays; 51.8% weekends). Of all ICUs, 69.8% had physicians-in-training and 77.7% nurse practitioners/physician assistants. For mechanically ventilated patients, nurse:patient ratios were 1:2 in 89.6%. Clinical pharmacists were available in 92.6%, and respiratory therapists in 98.8%. We estimated 85.1% (95% confidence interval: 84.5%, 85.7%) of hospitals nationally had ICUs with intensivists, 51.6% (50.6%, 52.5%) with physicians-in-training, 72.1% (71.3%, 72.9%) with nurse practitioners/physician assistants, 98.5% (98.4%, 98.7%) with respiratory therapists, and 86.9% (86.4%, 87.4%) with clinical pharmacists. 
For mechanically ventilated patients, 86.4% (85.8%, 87.0%) used 1:2 nurses:patients. INTERPRETATION: Intensivist presence in adult US ICUs has greatly increased over 25 years. Intensivists, respiratory therapists, and clinical pharmacists are commonly available, and each nurse usually provides care for two mechanically ventilated patients. However, team composition and workload vary.
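The national estimates above were created by extrapolating respondent ICU staffing data to non-respondent hospitals based on hospital characteristics. A minimal post-stratification sketch of that idea, with entirely made-up strata and rates (the survey's actual weighting model is not shown here):

```python
import numpy as np

# Hypothetical strata of hospital characteristics (e.g., size, teaching status).
# n_hospitals: all hospitals nationally in each stratum (as from the AHA database);
# rate: fraction of respondent hospitals in that stratum reporting the feature.
n_hospitals = np.array([900, 2400, 1800])  # stratum sizes (made up)
rate = np.array([0.95, 0.85, 0.70])        # e.g., intensivist availability (made up)

# Post-stratified national estimate: respondent rates weighted by how many
# hospitals each stratum represents, extrapolating to non-respondents.
national = float(np.sum(n_hospitals * rate) / np.sum(n_hospitals))  # ~0.815
```

The weighting assumes non-respondents within a stratum resemble respondents in that stratum, which is why the paper conditions on hospital characteristics.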

3.
JAMA Netw Open ; 7(5): e248881, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38700865

ABSTRACT

Importance: With the increasing use of surgical robots, the time costs of minimally invasive modalities remain inadequately understood. This study evaluates the operative durations of robotic-assisted vs video-assisted lung lobectomies. Objective: To compare resource utilization, specifically operative time, between video-assisted and robotic-assisted thoracoscopic lung lobectomies. Design, Setting, and Participants: This retrospective cohort study evaluated patients aged 18 to 90 years who underwent minimally invasive (robotic-assisted or video-assisted) lung lobectomy from January 1, 2020, to December 31, 2022, with 90 days' follow-up after surgery. The study included multicenter electronic health record data from 21 hospitals within an integrated health care system in Northern California. Thoracic surgery was regionalized to 4 centers with 14 board-certified general thoracic surgeons. Exposures: Robotic-assisted or video-assisted lung lobectomy. Main Outcomes and Measures: The primary outcome was operative duration (cut to close) in minutes. Secondary outcomes were length of stay, 30-day readmission, and 90-day mortality. Comparisons between video-assisted and robotic-assisted lobectomies were generated using the Wilcoxon rank sum test for continuous variables and the χ2 test for categorical variables. The average treatment effects were estimated with augmented inverse probability treatment weighting (AIPTW). Patient and surgeon covariates were adjusted for and included patient demographics, comorbidities, and case complexity (age, sex, race and ethnicity, neighborhood deprivation index, body mass index, Charlson Comorbidity Index score, nonelective hospitalizations, emergency department visits, a validated laboratory derangement score, a validated institutional comorbidity score, a surgeon-designated complexity indicator, and a procedural code count), and a primary surgeon-specific indicator. 
Results: The study included 1088 patients (median age, 70.1 years [IQR, 63.3-75.8 years]; 704 [64.7%] female), of whom 446 (41.0%) underwent robotic-assisted and 642 (59.0%) underwent video-assisted lobectomy. The median unadjusted operative duration was 172.0 minutes (IQR, 128.0-226.0 minutes). After AIPTW, there was less than a 10% difference in all covariates between groups, and operative duration was a median 20.6 minutes (95% CI, 12.9-28.2 minutes; P < .001) longer for robotic-assisted compared with video-assisted lobectomies. There was no difference in adjusted secondary patient outcomes, specifically for length of stay (0.3 days; 95% CI, -0.3 to 0.8 days; P = .11) or risk of 30-day readmission (adjusted odds ratio, 1.29; 95% CI, 0.84-1.98; P = .13). The unadjusted 90-day mortality rate (1.3% [n = 14]) was too low for the AIPTW modeling process. Conclusions and Relevance: In this cohort study, there was no difference in patient outcomes between modalities, but operative duration was longer in robotic-assisted compared with video-assisted lung lobectomy. Given that this elevated operative duration is additive when applied systematically, increased consideration of appropriate patient selection for robotic-assisted lung lobectomy is needed to improve resource utilization.
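The AIPTW estimator used above combines an outcome model with inverse-propensity weighting, so the estimate stays consistent if either nuisance model is correct (doubly robust). A minimal sketch on synthetic data, with the true nuisance functions plugged in for illustration (the study estimated them from patient and surgeon covariates):

```python
import numpy as np

def aipw_ate(y, t, ps, mu1, mu0):
    """Doubly robust ATE: outcome-model difference plus
    inverse-propensity-weighted residual corrections."""
    return float(np.mean(mu1 - mu0
                         + t * (y - mu1) / ps
                         - (1 - t) * (y - mu0) / (1 - ps)))

rng = np.random.default_rng(0)
n = 20_000
x = rng.normal(size=n)                # confounder
ps = 1.0 / (1.0 + np.exp(-x))         # treatment more likely when x is high
t = rng.binomial(1, ps)
y = 2.0 * t + x + rng.normal(size=n)  # true average treatment effect = 2.0

# True nuisance functions plugged in for illustration; in practice both are
# estimated, and misspecifying one of the two is tolerated.
ate = aipw_ate(y, t, ps, mu1=2.0 + x, mu0=x)  # close to 2.0
```

The weighted-residual terms correct whatever bias the plug-in outcome models leave behind, which is what makes the estimator "augmented" relative to plain inverse probability weighting.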


Subject(s)
Pneumonectomy , Robotic Surgical Procedures , Thoracic Surgery, Video-Assisted , Humans , Female , Male , Middle Aged , Robotic Surgical Procedures/statistics & numerical data , Robotic Surgical Procedures/methods , Robotic Surgical Procedures/economics , Aged , Retrospective Studies , Pneumonectomy/methods , Pneumonectomy/statistics & numerical data , Thoracic Surgery, Video-Assisted/methods , Thoracic Surgery, Video-Assisted/statistics & numerical data , Adult , Operative Time , Operating Rooms/statistics & numerical data , Aged, 80 and over , Length of Stay/statistics & numerical data , Lung Neoplasms/surgery , Adolescent , Treatment Outcome
4.
Crit Care ; 28(1): 113, 2024 04 08.
Article in English | MEDLINE | ID: mdl-38589940

ABSTRACT

BACKGROUND: Perhaps nowhere else in the healthcare system are the challenges of creating useful models with direct, time-critical clinical applications more relevant, and the obstacles to achieving those goals more daunting, than in the intensive care unit environment. Machine learning-based artificial intelligence (AI) techniques to define states and predict future events are commonplace activities of modern life. However, their penetration into acute care medicine has been slow, stuttering and uneven. Major obstacles to the widespread, effective application of AI approaches to the real-time care of the critically ill patient exist and need to be addressed. MAIN BODY: Clinical decision support systems (CDSSs) in acute and critical care environments support clinicians at the bedside; they do not replace them. As discussed in this review, the reasons are many. They include the immaturity of AI-based systems with respect to situational awareness; the fundamental bias in many large databases, which do not reflect the target population of patients being treated, making fairness an important issue to address; and technical barriers to timely access to valid data and to its display in a fashion useful for clinical workflow. The inherent "black-box" nature of many predictive algorithms and CDSSs makes trustworthiness and acceptance by the medical community difficult. Logistically, collating and curating, in real time, the multidimensional data streams from various sources needed to inform the algorithms, and ultimately displaying relevant clinical decision support in a format that adapts to individual patient responses and signatures, represent the efferent limb of these systems and are often ignored during initial validation efforts. Similarly, legal and commercial barriers to access to many existing clinical databases limit studies that would address the fairness and generalizability of predictive models and management tools. CONCLUSIONS: AI-based CDSSs are evolving and are here to stay. It is our obligation to be good shepherds of their use and further development.


Subject(s)
Algorithms , Artificial Intelligence , Humans , Critical Care , Intensive Care Units , Delivery of Health Care
5.
Article in English | MEDLINE | ID: mdl-38687499

ABSTRACT

Critical care uses syndromic definitions to describe patient groups for clinical practice and research. There is growing recognition that a "precision medicine" approach is required and that integrated biologic and physiologic data identify reproducible subpopulations that may respond differently to treatment. This article reviews the current state of the field and considers how to successfully transition to a precision medicine approach. In order to impact clinical care, identified subpopulations must do more than differentiate prognosis. They must differentiate response to treatment, ideally by defining subgroups with distinct functional or pathobiological mechanisms (endotypes). There are now multiple examples of reproducible subpopulations of sepsis, acute respiratory distress syndrome, and acute kidney or brain injury described using clinical, physiological, and/or biological data. Many of these subpopulations have demonstrated the potential to define differential treatment response, largely in retrospective studies, and that the same treatment-responsive subpopulations may cross multiple clinical syndromes (treatable traits). To bring about a change in clinical practice, a precision medicine approach must be evaluated in prospective clinical studies requiring novel adaptive trial designs. Several such studies are underway but there are multiple challenges to be tackled. Such subpopulations must be readily identifiable and be applicable to all critically ill populations around the world. Subdividing clinical syndromes into subpopulations will require large patient numbers. Global collaboration of investigators, clinicians, industry and patients over many years will therefore be required to transition to a precision medicine approach and ultimately realize treatment advances seen in other medical fields. 
This article is open access and distributed under the terms of the Creative Commons Attribution Non-Commercial No Derivatives License 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0/).

6.
JAMA Netw Open ; 7(4): e244867, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38573639

ABSTRACT

This quality improvement study describes the content of electronic health record messages from patients to physicians in a large integrated health care system using natural language processing algorithms.


Subject(s)
Physicians , Humans
7.
J Hosp Med ; 2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38606546

ABSTRACT

BACKGROUND: Hospital-acquired venous thromboembolism (HA VTE) is a preventable complication in hospitalized patients. OBJECTIVE: We aimed to examine the use of pharmacologic prophylaxis (pPPX) and compare two risk assessment methods for HA VTE: a retrospective electronic Padua Score (ePaduaKP) and admitting clinician's choice of risk within the admission orderset (low, moderate, or high). DESIGN, SETTINGS AND PARTICIPANTS: We retrospectively analyzed prophylaxis orders for adult medical admissions (2013-2019) at Kaiser Permanente Northern California, excluding surgical and ICU patients. INTERVENTION: ePaduaKP was calculated for all admissions. For a subset of these admissions, clinician-assigned HA VTE risk was extracted. MAIN OUTCOME AND MEASURES: Descriptive pPPX utilization rates between ePaduaKP and clinician-assigned risk as well as concordance between ePaduaKP and clinician-assigned risk. RESULTS: Among 849,059 encounters, 82.2% were classified as low risk by ePaduaKP, with 42.3% receiving pPPX. In the subset with clinician-assigned risk (608,512 encounters), low and high ePaduaKP encounters were classified as moderate risk in 87.5% and 92.0% of encounters, respectively. Overall, 56.7% of encounters with moderate clinician-assigned risk received pPPX, compared to 7.2% of encounters with low clinician-assigned risk. pPPX use occurred in a large portion of low ePaduaKP risk encounters. Clinicians frequently assigned moderate risk to encounters at admission irrespective of their ePaduaKP risk when retrospectively examined. We hypothesize that the current orderset design may have negatively influenced clinician-assigned risk choice as well as pPPX utilization. Future work should explore optimizing pPPX for high-risk patients only.
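For context, the Padua prediction score referenced above (ePaduaKP is an electronic adaptation) sums weighted risk factors, with a total of 4 or more conventionally treated as high VTE risk. A sketch using the item weights as published by Barbar et al. (2010); the field names are illustrative, and the weights should be verified against the study's implementation before any use:

```python
# Padua VTE risk items and weights (per Barbar et al., 2010; verify before use).
# A total score >= 4 is the conventional high-risk cutoff.
PADUA_WEIGHTS = {
    "active_cancer": 3,
    "previous_vte": 3,
    "reduced_mobility": 3,
    "known_thrombophilia": 3,
    "recent_trauma_or_surgery": 2,
    "age_70_or_older": 1,
    "heart_or_respiratory_failure": 1,
    "acute_mi_or_ischemic_stroke": 1,
    "acute_infection_or_rheumatologic_disorder": 1,
    "obesity_bmi_30_or_more": 1,
    "ongoing_hormonal_treatment": 1,
}

def padua_score(factors):
    """Sum the weights of the risk factors present (a set of keys above)."""
    return sum(PADUA_WEIGHTS[f] for f in factors)

def high_risk(factors):
    """High VTE risk at the conventional cutoff of 4."""
    return padua_score(factors) >= 4
```

For example, active cancer plus age 70 or older already reaches the high-risk cutoff, while a single 1-point item does not.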

8.
JAMA Surg ; 2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38598191

ABSTRACT

Importance: Prior studies demonstrated consistent associations of low skeletal muscle mass assessed on surgical planning scans with postoperative morbidity and mortality. The increasing availability of imaging artificial intelligence enables development of more comprehensive imaging biomarkers to objectively phenotype frailty in surgical patients. Objective: To evaluate the associations of body composition scores derived from multiple skeletal muscle and adipose tissue measurements from automated segmentation of computed tomography (CT) with the Hospital Frailty Risk Score (HFRS) and adverse outcomes after abdominal surgery. Design, Setting, and Participants: This retrospective cohort study used CT imaging and electronic health record data from a random sample of adults who underwent abdominal surgery at 20 medical centers within Kaiser Permanente Northern California from January 1, 2010, to December 31, 2020. Data were analyzed from April 1, 2022, to December 1, 2023. Exposure: Body composition derived from automated analysis of multislice abdominal CT scans. Main Outcomes and Measures: The primary outcome of the study was all-cause 30-day postdischarge readmission or postoperative mortality. The secondary outcome was 30-day postoperative morbidity among patients undergoing abdominal surgery who were sampled for reporting to the National Surgical Quality Improvement Program. Results: The study included 48 444 adults; mean (SD) age at surgery was 61 (17) years, and 51% were female. Using principal component analysis, 3 body composition scores were derived: body size, muscle quantity and quality, and distribution of adiposity. 
Higher muscle quantity and quality scores were inversely correlated (r = -0.42; 95% CI, -0.43 to -0.41) with the HFRS and associated with a reduced risk of 30-day readmission or mortality (quartile 4 vs quartile 1: relative risk, 0.61; 95% CI, 0.56-0.67) and 30-day postoperative morbidity (quartile 4 vs quartile 1: relative risk, 0.59; 95% CI, 0.52-0.67), independent of sex, age, comorbidities, body mass index, procedure characteristics, and the HFRS. In contrast to the muscle score, scores for body size and greater subcutaneous and intermuscular vs visceral adiposity had inconsistent associations with postsurgical outcomes and were attenuated and only associated with 30-day postoperative morbidity after adjustment for the HFRS. Conclusions and Relevance: In this study, higher muscle quantity and quality scores were correlated with frailty and associated with 30-day readmission and postoperative mortality and morbidity, whereas body size and adipose tissue distribution scores were not correlated with patient frailty and had inconsistent associations with surgical outcomes. The findings suggest that assessment of muscle quantity and quality on CT can provide an objective measure of patient frailty that would not otherwise be clinically apparent and that may complement existing risk stratification tools to identify patients at high risk of mortality, morbidity, and readmission.

9.
J Hosp Med ; 2024 Apr 09.
Article in English | MEDLINE | ID: mdl-38594918

ABSTRACT

BACKGROUND: New-onset atrial fibrillation (AF) during sepsis is common, but models designed to stratify stroke risk excluded patients with secondary AF. We assessed the predictive validity of CHA2DS2VASc scores among patients with new-onset AF during sepsis and developed a novel stroke prediction model incorporating presepsis and intrasepsis characteristics. METHODS: We included patients ≥40 years old who survived hospitalizations with sepsis and new-onset AF across 21 Kaiser Permanente Northern California hospitals from January 1, 2011 to September 30, 2017. We calculated the area under the receiver operating characteristic curve (AUC) for CHA2DS2VASc scores to predict stroke or transient ischemic attack (TIA) within 1 year after a hospitalization with new-onset AF during sepsis using Fine-Gray models with death as a competing risk. We similarly derived and validated a novel model using presepsis and intrasepsis characteristics associated with 1-year stroke/TIA risk. RESULTS: Among 82,748 adults hospitalized with sepsis, 3992 with new-onset AF (median age: 80 years, median CHA2DS2VASc score of 4) survived to discharge, among whom 70 (2.1%) experienced the stroke or TIA outcome and 1393 (41.0%) died within 1 year of sepsis. The CHA2DS2VASc score was not predictive of stroke risk after sepsis (AUC: 0.50, 95% confidence interval [CI]: 0.48-0.52). A newly derived model among 2555 (64%) patients in the derivation set and 1437 (36%) in the validation set included 13 variables and produced an AUC of 0.61 (0.49-0.73) in derivation and 0.54 (0.43-0.65) in validation. CONCLUSION: Current models do not accurately stratify risk of stroke following new-onset AF secondary to sepsis. New tools are required to guide anticoagulation decisions following new-onset AF in sepsis.
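The AUC of 0.50 reported above is chance-level discrimination: AUC equals the probability that a randomly chosen event case is scored higher than a randomly chosen non-case (the rank-sum, or Mann-Whitney, formulation). A small sketch of that equivalence; note the study computed AUC within a Fine-Gray competing-risk framework, which this plain version does not attempt:

```python
import numpy as np

def auc_mann_whitney(y_true, scores):
    """AUC as P(random event case scores above random non-case),
    counting ties as one half (the rank-sum formulation)."""
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return float(greater + 0.5 * ties)

# A score carrying no information about the outcome sits at AUC 0.5:
auc_null = auc_mann_whitney(np.array([1, 0, 1, 0]),
                            np.array([1.0, 1.0, 2.0, 2.0]))  # 0.5
```

A perfectly separating score gives AUC 1.0; the 0.48-0.52 interval above is why the authors conclude CHA2DS2VASc does not stratify this population.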

10.
JAMA Psychiatry ; 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38536187

ABSTRACT

Importance: Given that suicide rates have been increasing over the past decade and the demand for mental health care is at an all-time high, targeted prevention efforts are needed to identify individuals seeking to initiate mental health outpatient services who are at high risk for suicide. Suicide prediction models have been developed using outpatient mental health encounters, but their performance among intake appointments has not been directly examined. Objective: To assess the performance of a predictive model of suicide attempts among individuals seeking to initiate an episode of outpatient mental health care. Design, Setting, and Participants: This prognostic study tested the performance of a previously developed machine learning model designed to predict suicide attempts within 90 days of any mental health outpatient visit. All mental health intake appointments scheduled between January 1, 2012, and April 1, 2022, at Kaiser Permanente Northern California, a large integrated health care delivery system serving over 4.5 million patients, were included. Data were extracted and analyzed from August 9, 2022, to July 31, 2023. Main Outcome and Measures: Suicide attempts (including completed suicides) within 90 days of the appointment, determined by diagnostic codes and government databases. All predictors were extracted from electronic health records. Results: The study included 1 623 232 scheduled appointments from 835 616 unique patients. There were 2800 scheduled appointments (0.17%) followed by a suicide attempt within 90 days. The mean (SD) age across appointments was 39.7 (15.8) years, and most appointments were for women (1 103 184 [68.0%]). 
The model had an area under the receiver operating characteristic curve of 0.77 (95% CI, 0.76-0.78), an area under the precision-recall curve of 0.02 (95% CI, 0.02-0.02), an expected calibration error of 0.0012 (95% CI, 0.0011-0.0013), and sensitivities of 37.2% (95% CI, 35.5%-38.9%) and 18.8% (95% CI, 17.3%-20.2%) at specificities of 95% and 99%, respectively. The 10% of appointments at the highest risk level accounted for 48.8% (95% CI, 47.0%-50.6%) of the appointments followed by a suicide attempt. Conclusions and Relevance: In this prognostic study involving mental health intakes, a previously developed machine learning model of suicide attempts showed good overall classification performance. Implementation research is needed to determine appropriate thresholds and interventions for applying the model in an intake setting to target high-risk cases in a manner that is acceptable to patients and clinicians.
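Sensitivities at fixed specificities, as reported above, come from choosing the score threshold that yields the target specificity among non-events and then measuring the hit rate among events. A hypothetical helper with made-up scores (not the study's code or data):

```python
import numpy as np

def sensitivity_at_specificity(y_true, scores, specificity):
    """Choose the score threshold achieving the target specificity
    among non-events, then report the hit rate among events."""
    neg = scores[y_true == 0]
    pos = scores[y_true == 1]
    thr = np.quantile(neg, specificity)  # `specificity` of negatives fall at/below thr
    return float(np.mean(pos > thr))

# Toy example: 101 evenly spread non-event scores, 3 event scores (made up).
scores = np.concatenate([np.linspace(0.0, 1.0, 101),
                         np.array([0.20, 0.96, 0.99])])
y_true = np.concatenate([np.zeros(101, dtype=int), np.ones(3, dtype=int)])
sens = sensitivity_at_specificity(y_true, scores, 0.95)  # 2 of 3 events flagged
```

Reporting sensitivity at 95% and 99% specificity, as the abstract does, reflects the operational reality that intake settings can tolerate only a small false-positive rate.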

12.
J Acquir Immune Defic Syndr ; 95(4): 362-369, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38412047

ABSTRACT

BACKGROUND: Preexposure prophylaxis (PrEP) use remains limited and inequitable, and strategies are needed to improve PrEP provision in primary care. METHODS: We conducted a cluster randomized trial at Kaiser Permanente, San Francisco, to evaluate the effectiveness of a clinical decision support intervention guided by an electronic health record (EHR)-based HIV risk prediction model to improve PrEP provision. Primary care providers (PCPs) were randomized to usual care or intervention, with PCPs who provide care to people with HIV balanced between arms. PCPs in the intervention arm received an EHR-based staff message with prompts to discuss HIV prevention and PrEP before upcoming in-person or video visits with patients whose predicted 3-year HIV risk was above a prespecified threshold. The main study outcome was initiation of PrEP care within 90 days, defined as PrEP discussions, referrals, or prescription fills. RESULTS: One hundred twenty-one PCPs had 5051 appointments with eligible patients (2580 usual care; 2471 intervention). There was a nonsignificant increase in initiation of PrEP care in the intervention arm (6.0% vs 4.5%, HR 1.32, 95% CI: 0.84 to 2.1). There was a significant interaction by HIV provider status, with an intervention HR of 2.59 (95% CI: 1.30 to 5.16) for HIV providers and 0.89 (95% CI: 0.59 to 1.35) for non-HIV providers (P-interaction <0.001). CONCLUSION: An EHR-based intervention guided by an HIV risk prediction model substantially increased initiation of PrEP care among patients of PCPs who also care for people with HIV. Higher-intensity interventions may be needed to improve PrEP provision among PCPs less familiar with PrEP and HIV care.


Subject(s)
Anti-HIV Agents , HIV Infections , Pre-Exposure Prophylaxis , Humans , HIV Infections/drug therapy , HIV Infections/prevention & control , Electronic Health Records , Cognition , Prescriptions , Anti-HIV Agents/therapeutic use
15.
Am J Respir Crit Care Med ; 209(7): 852-860, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38261986

ABSTRACT

Rationale: Shorter time-to-antibiotics improves survival from sepsis, particularly among patients in shock. There may be other subgroups for whom faster antibiotics are particularly beneficial. Objectives: Identify patient characteristics associated with greater benefit from shorter time-to-antibiotics. Methods: Observational cohort study of patients hospitalized with community-onset sepsis at 173 hospitals and treated with antimicrobials within 12 hours. We used three approaches to evaluate heterogeneity of benefit from shorter time-to-antibiotics: 1) conditional average treatment effects of shorter (⩽3 h) versus longer (>3-12 h) time-to-antibiotics on 30-day mortality using multivariable Poisson regression; 2) causal forest to identify characteristics associated with greatest benefit from shorter time-to-antibiotics; and 3) logistic regression with time-to-antibiotics modeled as a spline. Measurements and Main Results: Among 273,255 patients with community-onset sepsis, 131,094 (48.0%) received antibiotics within 3 hours. In Poisson models, shorter time-to-antibiotics was associated with greater absolute mortality reduction among patients with metastatic cancer (5.0% [95% confidence interval (CI): 4.3-5.7] vs. 0.4% [95% CI: 0.2-0.6] for patients without cancer, P < 0.001); patients with shock (7.0% [95% CI: 5.8-8.2%] vs. 2.8% [95% CI: 2.7-3.5%] for patients without shock, P = 0.005); and patients with more acute organ dysfunctions (4.8% [95% CI: 3.9-5.6%] for three or more dysfunctions vs. 0.5% [95% CI: 0.3-0.8] for one dysfunction, P < 0.001). In causal forest, metastatic cancer and shock were associated with greatest benefit from shorter time-to-antibiotics. Spline analysis confirmed differential nonlinear associations of time-to-antibiotics with mortality in patients with metastatic cancer and shock. Conclusions: In patients with community-onset sepsis, the mortality benefit of shorter time-to-antibiotics varied by patient characteristics. 
These findings suggest that shorter time-to-antibiotics for sepsis is particularly important among patients with cancer and/or shock.


Subject(s)
Neoplasms , Sepsis , Shock, Septic , Humans , Anti-Bacterial Agents/therapeutic use , Sepsis/therapy , Cohort Studies , Retrospective Studies , Hospital Mortality
16.
Learn Health Syst ; 8(1): e10361, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38249850

ABSTRACT

Introduction: Learning health systems require a workforce of researchers trained in the methods of identifying and overcoming barriers to effective, evidence-based care. Most existing postdoctoral training programs, such as NIH-funded postdoctoral T32 awards, support basic and epidemiological science with very limited focus on rigorous delivery science methods for improving care. In this report, we present the 10-year experience of developing and implementing a Delivery Science postdoctoral fellowship embedded within an integrated health care delivery system. Methods: In 2012, the Kaiser Permanente Northern California Division of Research designed and implemented a 2-year postdoctoral Delivery Science Fellowship research training program to foster research expertise in identifying and addressing barriers to evidence-based care within health care delivery systems. Results: Since 2014, 20 fellows have completed the program. Ten fellows had PhD-level scientific training, and 10 fellows had clinical doctorates (eg, MD, RN/PhD, PharmD). Fellowship alumni have graduated to faculty research positions at academic institutions (9) and research or clinical organizations (4), and 7 now hold positions in Kaiser Permanente's clinical operations or medical group. Conclusions: This delivery science fellowship program has succeeded in training graduates to address delivery science problems from both research and operational perspectives. In the next 10 years, additional goals of the program will be to expand its reach (eg, by developing joint research training models in collaboration with clinical fellowships) and strengthen mechanisms to support transition from fellowship to the workforce, especially for researchers from underrepresented groups.

17.
Nat Commun ; 15(1): 104, 2024 Jan 02.
Article in English | MEDLINE | ID: mdl-38168074

ABSTRACT

Spin defects in van der Waals materials offer a promising platform for advancing quantum technologies. Here, we propose and demonstrate a powerful technique based on isotope engineering of host materials to significantly enhance the coherence properties of embedded spin defects. Focusing on the recently discovered negatively charged boron vacancy center (VB-) in hexagonal boron nitride (hBN), we grow isotopically purified h10B15N crystals. Compared to VB- in hBN with the natural distribution of isotopes, we observe substantially narrower and less crowded VB- spin transitions as well as extended coherence time T2 and relaxation time T1. For quantum sensing, VB- centers in our h10B15N samples exhibit a factor of 4 (2) enhancement in DC (AC) magnetic field sensitivity. For additional quantum resources, the individual addressability of the VB- hyperfine levels enables the dynamical polarization and coherent control of the three nearest-neighbor 15N nuclear spins. Our results demonstrate the power of isotope engineering for enhancing the properties of quantum spin defects in hBN, and can be readily extended to improving spin qubits in a broad family of van der Waals materials.

18.
BMJ Open ; 14(1): e073622, 2024 01 08.
Article in English | MEDLINE | ID: mdl-38191255

ABSTRACT

OBJECTIVES: In the first year of the COVID-19 pandemic, health systems implemented programmes to manage outpatients with COVID-19. The goal was to expedite patients' referral to acute care and prevent overcrowding of medical centres. We sought to evaluate the impact of such a programme, the COVID-19 Home Care Team (CHCT) programme. DESIGN: Retrospective cohort. SETTING: Kaiser Permanente Northern California. PARTICIPANTS: Adult members before COVID-19 vaccine availability (1 February 2020-31 January 2021) with positive SARS-CoV-2 tests. INTERVENTION: Virtual programme to track and treat patients with 'CHCT programme'. OUTCOMES: The outcomes were (1) COVID-19-related emergency department visit, (2) COVID-19-related hospitalisation and (3) inpatient mortality or 30-day hospice referral. MEASURES: We estimated the average effect comparing patients who were and were not treated by CHCT. We estimated propensity scores using an ensemble super learner (random forest, XGBoost, generalised additive model and multivariate adaptive regression splines) and augmented inverse probability weighting. RESULTS: There were 98 585 patients with COVID-19. The majority were followed by CHCT (n=80 067, 81.2%). Patients followed by CHCT were older (mean age 43.9 vs 41.6 years, p<0.001) and more comorbid with COmorbidity Point Score, V.2, score ≥65 (1.7% vs 1.1%, p<0.001). Unadjusted analyses showed more COVID-19-related emergency department visits (9.5% vs 8.5%, p<0.001) and hospitalisations (3.9% vs 3.2%, p<0.001) in patients followed by CHCT but lower inpatient death or 30-day hospice referral (0.3% vs 0.5%, p<0.001). After weighting, there were higher rates of COVID-19-related emergency department visits (estimated intervention effect -0.8%, 95% CI -1.4% to -0.3%) and hospitalisation (-0.5%, 95% CI -0.9% to -0.1%) but lower inpatient mortality or 30-day hospice referral (-0.5%, 95% CI -0.7% to -0.3%) in patients followed by CHCT. 
CONCLUSIONS: Despite the CHCT programme following older patients with a higher comorbidity burden, there appeared to be a protective effect: patients followed by CHCT were more likely to present to acute care but less likely to die in hospital or be referred to hospice.
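The ensemble "super learner" used above for propensity estimation combines several base learners' cross-validated predictions with weights chosen to minimize prediction error. With just two base learners and squared error, the convex weight even has a closed form; a toy sketch (real super learners handle many learners, cross-validation folds, and loss constraints):

```python
import numpy as np

def stack_two(y, z0, z1):
    """Convex stacking weight w in [0, 1] minimizing
    ||y - (w*z0 + (1-w)*z1)||^2, for two base learners'
    cross-validated predictions z0 and z1 (assumed unequal)."""
    d = z0 - z1
    w = float(np.dot(y - z1, d) / np.dot(d, d))
    return min(1.0, max(0.0, w))
```

If one learner reproduces the target and the other is uninformative, all the weight concentrates on the first, which is the behavior that lets a stacked ensemble do at least as well as its best member.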


Subject(s)
COVID-19 , Delivery of Health Care, Integrated , Hospices , Adult , Humans , Retrospective Studies , COVID-19 Vaccines , Pandemics , COVID-19/therapy , SARS-CoV-2 , Inpatients
20.
Transfusion ; 64(1): 53-67, 2024 01.
Article in English | MEDLINE | ID: mdl-38054619

ABSTRACT

BACKGROUND: The safety of transfusion of SARS-CoV-2 antibodies in high plasma volume blood components to recipients without COVID-19 is not established. We assessed whether transfusion of plasma or platelet products during periods of increasing prevalence of blood donor SARS-CoV-2 infection and vaccination was associated with changes in outcomes in hospitalized patients without COVID-19. METHODS: We conducted a retrospective cohort study of hospitalized adults who received plasma or platelet transfusions at 21 hospitals during pre-COVID-19 (3/1/2018-2/29/2020), COVID-19 pre-vaccine (3/1/2020-2/28/2021), and COVID-19 post-vaccine (3/1/2021-8/31/2022) study periods. We used multivariable logistic regression with generalized estimating equations to adjust for demographics and comorbidities to calculate odds ratios (ORs) and 95% confidence intervals (CIs). RESULTS: Among 21,750 hospitalizations of 18,584 transfusion recipients without COVID-19, there were 697 post-transfusion thrombotic events, and oxygen requirements were increased in 1751 hospitalizations. Intensive care unit length of stay (n = 11,683) was 3 days (interquartile range 1-5), hospital mortality occurred in 3223 (14.8%), and 30-day rehospitalization in 4144 (23.7%). Comparing the pre-COVID, pre-vaccine and post-vaccine study periods, there were no trends in thromboses (OR 0.9 [95% CI 0.8, 1.1]; p = .22) or oxygen requirements (OR 1.0 [95% CI 0.9, 1.1]; p = .41). In parallel, there were no trends across study periods for ICU length of stay (p = .83), adjusted hospital mortality (OR 1.0 [95% CI 0.9-1.0]; p = .36), or 30-day rehospitalization (p = .29). DISCUSSION: Transfusion of plasma and platelet blood components collected during the pre-vaccine and post-vaccine periods of the COVID-19 pandemic was not associated with increased adverse outcomes in transfusion recipients without COVID-19.
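The adjusted odds ratios with 95% CIs above follow the usual logistic-regression form: exponentiate the coefficient and its Wald interval. A minimal helper (illustrative only; the study additionally used generalized estimating equations to handle repeat hospitalizations per recipient):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and Wald 95% CI from a logistic-regression
    coefficient and its standard error."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# A null coefficient gives OR 1.0 with a CI straddling 1,
# matching the "no trend" findings reported above.
or_, lo, hi = odds_ratio_ci(0.0, 0.05)
```

Because exponentiation is monotone, a CI for the coefficient that includes 0 maps to an OR interval that includes 1, i.e., no detected association.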


Subject(s)
Blood Component Transfusion , Blood Donors , COVID-19 , Platelet Transfusion , Adult , Humans , COVID-19/epidemiology , Oxygen , Platelet Transfusion/adverse effects , Retrospective Studies , Vaccination , COVID-19 Vaccines , Blood Component Transfusion/adverse effects , Plasma , Hospitalization