ABSTRACT
Aims: Deep learning methods have recently gained success in detecting left ventricular systolic dysfunction (LVSD) from electrocardiogram (ECG) waveforms. Despite their high level of accuracy, they are difficult to interpret and deploy broadly in the clinical setting. In this study, we set out to determine whether simpler models based on standard ECG measurements could detect LVSD with similar accuracy to that of deep learning models. Methods and results: Using an observational data set of 40 994 matched 12-lead ECGs and transthoracic echocardiograms, we trained a range of models with increasing complexity to detect LVSD based on ECG waveforms and derived measurements. The training data were acquired from the Stanford University Medical Center. External validation data were acquired from the Columbia Medical Center and the UK Biobank. The Stanford data set consisted of 40 994 matched ECGs and echocardiograms, of which 9.72% had LVSD. A random forest model using 555 discrete, automated measurements achieved an area under the receiver operating characteristic curve (AUC) of 0.92 (0.91-0.93), similar to a deep learning waveform model with an AUC of 0.94 (0.93-0.94). A logistic regression model based on five measurements achieved high performance [AUC of 0.86 (0.85-0.87)], close to that of the deep learning model and better than N-terminal prohormone brain natriuretic peptide (NT-proBNP). Finally, in experiments at two independent external sites, we found that the simpler models were more portable. Conclusion: Our study demonstrates the value of simple electrocardiographic models that perform nearly as well as deep learning models while being much easier to implement and interpret.
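The AUC reported throughout these studies is equivalent to the Mann-Whitney rank statistic: the probability that a randomly chosen positive case (here, a patient with LVSD) receives a higher model score than a randomly chosen negative case. A minimal sketch of that computation, with invented scores for illustration only:

```python
import numpy as np

def auc_score(labels, scores):
    """AUC as the rank-based probability that a positive case
    outranks a negative case (ties count one half)."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    # Compare every positive score against every negative score.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical model scores for 4 LVSD-positive and 4 negative patients.
labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.75, 0.3, 0.6, 0.4, 0.2, 0.1]
print(auc_score(labels, scores))
```

A perfect classifier scores 1.0 and a random one about 0.5, which is why an AUC of 0.92 from 555 automated measurements and 0.94 from a waveform model are described as similar.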
ABSTRACT
BACKGROUND: High blood pressure affects approximately 116 million adults in the United States. It is the leading risk factor for death and disability across the world. Unfortunately, over the past decade, hypertension control rates have decreased across the United States. Prediction models and clinical studies have shown that reducing clinician inertia alone is sufficient to reach the target of ≥80% blood pressure control. Digital health tools containing evidence-based algorithms that are able to reduce clinician inertia are a good fit for turning the tide in blood pressure control, but careful consideration should be taken in the design process to integrate digital health interventions into the clinical workflow. METHODS: We describe the development of a provider-facing hypertension management platform. We enumerate key steps of the development process, including needs finding, clinical workflow analysis, treatment algorithm creation, platform design, and electronic health record integration. We interviewed and surveyed 5 Stanford clinicians from primary care and cardiology, along with their clinical care team members (including nurses, advanced practice providers, and medical assistants), to identify needs and break down the steps of clinician workflow analysis. The application design and development stages were aided by a team of approximately 15 specialists in the fields of primary care, hypertension, bioinformatics, and software development. CONCLUSIONS: Digital monitoring holds immense potential for revolutionizing chronic disease management. Our team developed a hypertension management platform at an academic medical center to address some of the top barriers to adoption and to achieving clinical outcomes. The frameworks and processes described in this article may be used for the development of a diverse range of digital health tools in the cardiovascular space.
Subjects
Electronic Health Records, Hypertension, Adult, Humans, United States, Hypertension/therapy, Hypertension/drug therapy, Blood Pressure, Risk Factors, Surveys and Questionnaires
ABSTRACT
BACKGROUND: Preoperative risk assessments used in clinical practice are insufficient in their ability to identify risk for postoperative mortality. Deep-learning analysis of electrocardiography can identify hidden risk markers that can help to prognosticate postoperative mortality. We aimed to develop a prognostic model that accurately predicts postoperative mortality in patients undergoing medical procedures who had received preoperative electrocardiographic diagnostic testing. METHODS: In a derivation cohort of preoperative patients with available electrocardiograms (ECGs) from Cedars-Sinai Medical Center (Los Angeles, CA, USA) between Jan 1, 2015, and Dec 31, 2019, a deep-learning algorithm was developed to leverage waveform signals to discriminate postoperative mortality. We randomly split patients (8:1:1) into subsets for training, internal validation, and final algorithm test analyses. Model performance was assessed using area under the receiver operating characteristic curve (AUC) values in the hold-out test dataset and in two external hospital cohorts and compared with the established Revised Cardiac Risk Index (RCRI) score. The primary outcome was post-procedural mortality across three health-care systems. FINDINGS: 45 969 patients had a complete ECG waveform image available for at least one 12-lead ECG performed within the 30 days before the procedure date (59 975 inpatient procedures and 112 794 ECGs): 36 839 patients in the training dataset, 4549 in the internal validation dataset, and 4581 in the internal test dataset. In the held-out internal test cohort, the algorithm discriminated mortality with an AUC value of 0·83 (95% CI 0·79-0·87), surpassing the discrimination of the RCRI score with an AUC of 0·67 (0·61-0·72). The algorithm similarly discriminated risk for mortality in two independent US health-care systems, with AUCs of 0·79 (0·75-0·83) and 0·75 (0·74-0·76), respectively.
Patients determined to be high risk by the deep-learning model had an unadjusted odds ratio (OR) of 8·83 (5·57-13·20) for postoperative mortality compared with an unadjusted OR of 2·08 (0·77-3·50) for postoperative mortality for RCRI scores of more than 2. The deep-learning algorithm performed similarly for patients undergoing cardiac surgery (AUC 0·85 [0·77-0·92]), non-cardiac surgery (AUC 0·83 [0·79-0·88]), and catheterisation or endoscopy suite procedures (AUC 0·76 [0·72-0·81]). INTERPRETATION: A deep-learning algorithm interpreting preoperative ECGs can improve discrimination of postoperative mortality. The deep-learning algorithm worked equally well for risk stratification of cardiac surgeries, non-cardiac surgeries, and catheterisation laboratory procedures, and was validated in three independent health-care systems. This algorithm can provide additional information to clinicians making the decision to perform medical procedures and stratify the risk of future complications. FUNDING: National Heart, Lung, and Blood Institute.
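The unadjusted odds ratios quoted above come from a 2x2 table of risk group versus outcome. A sketch with invented cell counts (the paper's underlying counts are not given here), using the standard Woolf log-interval for the 95% CI:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio for a 2x2 table:
         a = high-risk, died      b = high-risk, survived
         c = low-risk,  died      d = low-risk,  survived
    Returns (OR, lower, upper) using the Woolf log-interval."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Invented counts for illustration only.
or_, lower, upper = odds_ratio_ci(40, 460, 10, 990)
print(f"OR {or_:.2f} (95% CI {lower:.2f}-{upper:.2f})")
```

Because the interval is symmetric on the log scale, a CI such as 5·57-13·20 around an OR of 8·83 is expected to be skewed on the raw scale.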
Subjects
Deep Learning, Humans, Risk Assessment/methods, Algorithms, Prognosis, Electrocardiography
ABSTRACT
The electrocardiogram (ECG) is the most frequently performed cardiovascular diagnostic test, but it is unclear how much information resting ECGs contain about long-term cardiovascular risk. Here we report that a deep convolutional neural network can accurately predict the long-term risk of cardiovascular mortality and disease based on a resting ECG alone. Using a large dataset of resting 12-lead ECGs collected at Stanford University Medical Center, we developed SEER, the Stanford Estimator of Electrocardiogram Risk. SEER predicts 5-year cardiovascular mortality with an area under the receiver operating characteristic curve (AUC) of 0.83 in a held-out test set at Stanford, and with AUCs of 0.78 and 0.83, respectively, when independently evaluated at Cedars-Sinai Medical Center and Columbia University Irving Medical Center. SEER predicts 5-year atherosclerotic cardiovascular disease (ASCVD) with an AUC of 0.67, similar to the Pooled Cohort Equations for ASCVD Risk, while being only modestly correlated. When used in conjunction with the Pooled Cohort Equations, SEER accurately reclassified 16% of patients from low to moderate risk, uncovering a group with an actual average 9.9% 10-year ASCVD risk who would not have otherwise been indicated for statin therapy. SEER can also predict several other cardiovascular conditions, such as heart failure and atrial fibrillation. Using only lead I of the ECG, it predicts 5-year cardiovascular mortality with an AUC of 0.80. SEER, used alongside the Pooled Cohort Equations and other risk tools, can substantially improve cardiovascular risk stratification and aid in medical decision making.
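The reclassification step described above can be sketched as follows. The low-risk cutoff (10-year ASCVD risk below 5%) follows the standard ACC/AHA categories, but the rule for combining the Pooled Cohort Equations with a second score, and the SEER cutoff used, are invented here for illustration; they are not SEER's actual decision rule.

```python
def reclassified_fraction(pce_risk, second_score, cutoff=0.5):
    """Fraction of patients rated low risk by the Pooled Cohort
    Equations (10-year ASCVD risk < 5%) that a second score moves
    up to a higher risk category. The cutoff is hypothetical."""
    low = [(p, s) for p, s in zip(pce_risk, second_score) if p < 0.05]
    moved = [p for p, s in low if s >= cutoff]
    return len(moved) / len(low)

pce = [0.02, 0.03, 0.04, 0.045, 0.10, 0.01]   # 10-year ASCVD risk
seer = [0.70, 0.20, 0.55, 0.10, 0.90, 0.30]   # hypothetical second score
print(reclassified_fraction(pce, seer))
```

The study's key observation is that the 16% of patients moved this way had an actual 9.9% event rate, i.e. the second score recovered real risk that the baseline equations had missed.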
ABSTRACT
Atrial fibrillation is a common arrhythmia associated with significant morbidity, mortality, and decreased quality of life. Mobile health devices marketed directly to consumers that can detect atrial fibrillation through methods including photoplethysmography, single-lead ECG, and contactless approaches are becoming ubiquitous. Large-scale screening for atrial fibrillation is feasible and has been shown to detect more cases than usual care; however, controversy still exists surrounding screening, even in older, higher-risk populations. Given the widespread use of mobile health devices, consumer-driven screening is happening on a large scale in both low-risk and high-risk populations. Because young people make up a large portion of early adopters of mobile health devices, many more patients with early-onset atrial fibrillation may come to clinical attention, possibly requiring referral to a genetic arrhythmia clinic. Physicians need to be familiar with these technologies and understand their risks and limitations. In the current review, we discuss current mobile health devices used to detect atrial fibrillation, recent and upcoming trials using them for the diagnosis of atrial fibrillation, practical recommendations for patients with atrial fibrillation diagnosed by a mobile health device, and special considerations in young patients.
Subjects
Atrial Fibrillation, Telemedicine, Adolescent, Aged, Atrial Fibrillation/diagnosis, Atrial Fibrillation/therapy, Electrocardiography, Humans, Photoplethysmography, Quality of Life
ABSTRACT
BACKGROUND: Laboratory testing is routinely used to assay blood biomarkers to provide information on physiologic state beyond what clinicians can evaluate from interpreting medical imaging. We hypothesized that deep learning interpretation of echocardiogram videos can provide additional value in understanding disease states and can evaluate common biomarker results. METHODS: We developed EchoNet-Labs, a video-based deep learning algorithm to detect evidence of anemia, elevated B-type natriuretic peptide (BNP), troponin I, and blood urea nitrogen (BUN), as well as values of ten additional lab tests, directly from echocardiograms. We included patients (n = 39,460) aged 18 years or older with one or more apical-4-chamber echocardiogram videos (n = 70,066) from Stanford Healthcare for training and internal testing of EchoNet-Labs' performance in estimating the most proximal biomarker result. Without fine-tuning, the performance of EchoNet-Labs was further evaluated on an additional external test dataset (n = 1,301) from Cedars-Sinai Medical Center. We calculated the area under the receiver operating characteristic curve (AUC) for the internal and external test datasets. FINDINGS: On the held-out test set of Stanford patients not previously seen during model training, EchoNet-Labs achieved an AUC of 0.80 (0.79-0.81) in detecting anemia (low hemoglobin), 0.86 (0.85-0.88) in detecting elevated BNP, 0.75 (0.73-0.78) in detecting elevated troponin I, and 0.74 (0.72-0.76) in detecting elevated BUN. On the external test dataset from Cedars-Sinai, EchoNet-Labs achieved an AUC of 0.80 (0.77-0.82) in detecting anemia, 0.82 (0.79-0.84) in detecting elevated BNP, 0.75 (0.72-0.78) in detecting elevated troponin I, and 0.69 (0.66-0.71) in detecting elevated BUN. We further demonstrate the utility of the model in detecting abnormalities in 10 additional lab tests.
We investigate the features necessary for EchoNet-Labs to make successful detections and identify potential mechanisms for each biomarker using well-known and novel explainability techniques. INTERPRETATION: These results show that deep learning applied to diagnostic imaging can provide additional clinical value and identify phenotypic information beyond current imaging interpretation methods. FUNDING: J.W.H. and B.H. are supported by the NSF Graduate Research Fellowship. D.O. is supported by NIH K99 HL157421-01. J.Y.Z. is supported by NSF CAREER 1942926, NIH R21 MD012867-01, NIH P30AG059307, and by a Chan-Zuckerberg Biohub Fellowship.
Subjects
Biomarkers, Deep Learning, Echocardiography, Image Interpretation, Computer-Assisted/methods, Image Processing, Computer-Assisted/methods, Algorithms, Humans, ROC Curve, Software
ABSTRACT
Antiarrhythmic drugs used in atrial fibrillation (AF) cause QT prolongation and are associated with torsades de pointes, a deadly ventricular arrhythmia. No consensus exists on the optimal method of QT measurement or correction in AF. Therefore, we compared common methods to measure and correct QT in AF to identify the most accurate approach. We identified patients who had electrocardiograms performed at Stanford Hospital (Stanford, California) between January 2014 and October 2016 with conversion from AF to sinus rhythm (SR) within a 24-hour period. QT intervals were determined using different measurement methods and corrected using the Bazett, Framingham, Fridericia, or Hodges formula for heart rate (HR). Comparisons were made between QT in a patient's last instance of AF and QT in SR. Computerized measurements were taken from 715 patients. Manual measurements were taken from a 50-patient subset. The Bazett formula produced the longest corrected QT in AF compared with the other formulas (p <0.005). Measuring QT as an average over multiple beats resulted in a smaller difference between AF and SR than choosing a single beat. Determining QT from a 5-beat average resulted in a QTc that was 19.0 ms higher (interquartile range 0.30 to 43.7) in AF than in SR. After correcting for the residual effect of HR on QTc, there was no significant difference between QTc in AF and in SR. In conclusion, measuring QT over multiple beats produces a more accurate measurement of QT in AF. Differences between QTc in AF and SR exist because of imperfect HR correction formulas and not because of an independent effect of AF.
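The four correction formulas compared in the study are standard. With QT in milliseconds, the RR interval in seconds, and HR in beats/min (RR = 60/HR), they can be sketched as:

```python
def qtc(qt_ms, hr_bpm):
    """QT corrected for heart rate by the four formulas compared
    in the study. QT in ms, HR in beats/min, RR = 60/HR in s."""
    rr = 60.0 / hr_bpm
    return {
        "bazett":     qt_ms / rr ** 0.5,          # QT / RR^(1/2)
        "fridericia": qt_ms / rr ** (1 / 3),      # QT / RR^(1/3)
        "framingham": qt_ms + 154 * (1 - rr),     # linear in RR
        "hodges":     qt_ms + 1.75 * (hr_bpm - 60),
    }

# At the fast heart rates typical of AF (RR < 1 s), Bazett's
# square-root term overcorrects, yielding the longest QTc --
# consistent with the comparison reported above.
for name, value in qtc(360, 100).items():
    print(f"{name:10s} {value:.0f} ms")
```

At HR = 60 beats/min (RR = 1 s), all four formulas leave QT unchanged, which is why they diverge mainly at the elevated rates seen in AF.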
Subjects
Atrial Fibrillation/diagnosis, Atrial Fibrillation/physiopathology, Electrocardiography/methods, Electrocardiography/statistics & numerical data, Adult, Aged, Aged, 80 and over, Female, Humans, Male, Middle Aged, Retrospective Studies
ABSTRACT
BACKGROUND: Abdominoperineal resection (APR) is primarily used for rectal cancer and is associated with a high rate of complications. Though the majority of APRs are performed as open procedures, laparoscopic APRs have become more popular. The differences in short-term complications between open and laparoscopic APR are poorly characterized. METHODS: We conducted a retrospective cohort study using the American College of Surgeons National Surgical Quality Improvement Program database to determine the frequency and timing of onset of 30-d postoperative complications after APR and identify differences between open and laparoscopic APR. RESULTS: A total of 7681 patients undergoing laparoscopic or open APR between 2011 and 2015 were identified. The total complication rate for APR was high (45.4%). APRs were commonly complicated by blood transfusion (20.1%), surgical site infection (19.3%), and readmission (12.3%). Laparoscopic APR was associated with a 14% lower total complication rate compared to open APR (36.0% versus 50.1%, P < 0.001). This was primarily driven by a decreased rate of transfusion (10.7% versus 24.9%, P < 0.001) and surgical site infection (15.5% versus 21.2%, P < 0.001). Laparoscopic APR had shorter length of stay and decreased reoperation rate but similar rates of readmission and death. Cardiopulmonary complications occurred earlier in the postoperative period after APR, whereas infectious complications occurred later. CONCLUSIONS: Short-term complications following APR are common and occur more frequently in patients who undergo open APR. This, along with factors such as risk of positive pathologic margins, surgeon skill set, and patient characteristics, should contribute to the decision-making process when planning rectal cancer surgery.
Subjects
Postoperative Complications/epidemiology, Proctectomy/adverse effects, Aged, Female, Humans, Laparoscopy, Male, Middle Aged, Postoperative Complications/etiology, Retrospective Studies, Time Factors, United States/epidemiology
ABSTRACT
BACKGROUND: Cardiovascular disease is the leading cause of morbidity and mortality in patients with chronic kidney disease (CKD). In fact, death from cardiovascular disease is the number one cause of graft loss in kidney transplant (KTx) patients. Compared to patients on dialysis, CKD patients with KTx have increased quality and length of life. It is not known, however, whether outcomes of coronary artery bypass graft (CABG) surgery differ between CKD patients with KTx and those on dialysis. METHODS: This was a retrospective cohort study comparing CKD patients with KTx or on dialysis undergoing CABG surgery included in the Nationwide Inpatient Sample from 2002 to 2011. Logistic and linear regression models were used to estimate the adjusted associations of KTx with all-cause in-hospital mortality, length of stay, cost of hospitalization, and rate of complications in CABG surgery. RESULTS: CKD patients with KTx had decreased all-cause in-hospital mortality (2.68% vs 5.86%, odds ratio (OR) = 0.56, 95% confidence interval (CI) = 0.32 to 0.99, P = .046), length of stay (β = -2.96, 95% CI = -3.67 to -2.46, P < .001), and total hospital charges (difference = -$38 884, 95% CI = -$48 173 to -$29 596, P < .001). They also had lower rates of a number of perioperative complications. CONCLUSIONS: CKD patients with KTx have better perioperative outcomes in CABG surgery compared to patients on dialysis.
Subjects
Coronary Artery Bypass, Coronary Artery Disease/surgery, Kidney Transplantation, Renal Dialysis, Renal Insufficiency, Chronic/complications, Renal Insufficiency, Chronic/therapy, Adolescent, Adult, Aged, Aged, 80 and over, Coronary Artery Bypass/economics, Coronary Artery Disease/economics, Coronary Artery Disease/mortality, Databases, Factual, Female, Hospital Costs/statistics & numerical data, Hospital Mortality, Humans, Length of Stay/economics, Length of Stay/statistics & numerical data, Linear Models, Logistic Models, Male, Middle Aged, Postoperative Complications/economics, Postoperative Complications/epidemiology, Postoperative Complications/etiology, Renal Insufficiency, Chronic/economics, Retrospective Studies, Treatment Outcome, United States, Young Adult
ABSTRACT
The mechanisms whereby immune therapies affect progression of type 1 diabetes (T1D) are not well understood. Teplizumab, an FcR nonbinding anti-CD3 mAb, has shown efficacy in multiple randomized clinical trials. We previously reported an increase in the frequency of circulating CD8+ central memory (CD8CM) T cells in clinical responders, but the generalizability of this finding and the molecular effects of teplizumab on these T cells have not been evaluated. We analyzed data from two randomized clinical studies of teplizumab in patients with new- and recent-onset T1D. At the conclusion of therapy, clinical responders showed a significant reduction in circulating CD4+ effector memory T cells. Afterward, there was an increase in the frequency and absolute number of CD8CM T cells. In vitro, teplizumab expanded CD8CM T cells by proliferation and conversion of non-CM T cells. NanoString analysis of gene expression in CD8CM T cells from responders and nonresponders versus placebo-treated control subjects identified decreases in the expression of genes associated with immune activation and increases in the expression of genes associated with T-cell differentiation and regulation. We conclude that CD8CM T cells with decreased activation and regulatory gene expression are associated with clinical responses to teplizumab in patients with T1D.
Subjects
Antibodies, Monoclonal, Humanized/therapeutic use, CD8-Positive T-Lymphocytes/immunology, Diabetes Mellitus, Type 1/drug therapy, Diabetes Mellitus, Type 1/immunology, T-Lymphocyte Subsets/immunology, Transcriptome/drug effects, Adolescent, Adult, CD8-Positive T-Lymphocytes/drug effects, Cell Differentiation/genetics, Child, Female, Flow Cytometry, Humans, Lymphocyte Activation/drug effects, Lymphocyte Activation/genetics, Lymphocyte Activation/immunology, Male, T-Lymphocyte Subsets/drug effects, Young Adult
ABSTRACT
BACKGROUND: Therapeutic hypothermia is the standard of care after perinatal asphyxia. Preclinical studies show 50% xenon improves outcome, if started early. METHODS: During a 32-patient study randomized between hypothermia only and hypothermia with xenon, 5 neonates were given xenon during retrieval using a closed-circuit incubator-mounted system. RESULTS: Without xenon availability during retrieval, 50% of eligible infants exceeded the 5-hour treatment window. With the transportable system, 100% were recruited. Xenon delivery lasted 55 to 120 minutes, using 174 mL/h (117.5-193.2) (median [interquartile range]), after circuit priming (1300 mL). CONCLUSIONS: Xenon delivery during ambulance retrieval was feasible, reduced starting delays, and used very little gas.
Subjects
Ambulances, Anesthesia, Closed-Circuit/instrumentation, Asphyxia Neonatorum/therapy, Emergency Medical Services, Hypothermia, Induced, Point-of-Care Systems, Respiration, Artificial/instrumentation, Ventilators, Mechanical, Xenon/administration & dosage, Administration, Inhalation, England, Equipment Design, Feasibility Studies, Humans, Infant, Newborn, Prospective Studies, Time Factors, Treatment Outcome
ABSTRACT
PURPOSE OF REVIEW: Biomarkers of type 1 diabetes (T1D) are important for assessing risk of developing disease, monitoring disease progression, and determining responses to clinical treatments. Here we review recent advances in the development of biomarkers of T1D with a focus on their utility in clinical trials. RECENT FINDINGS: Measurements of autoantibodies and metabolic outcomes have been the foundation of monitoring T1D for the past 20 years. Recent advancements have led to improvements in T-cell-specific assays that have been used in large-scale clinical trials to measure antigen-specific T cell responses. Additionally, new tools are being developed for the measurement of β cell mass and death that will allow for more direct measurement of disease activity. Lastly, recent studies have used both immunologic and nonimmunologic biomarkers to identify responders to treatments in clinical trials. SUMMARY: Use of biomarkers in the study of T1D has largely not changed over the past 20 years; however, recent advancements in the field are establishing new techniques that allow for more precise monitoring of disease progression. These new tools will ultimately lead to an improved understanding of disease and will be utilized in clinical trials.
Subjects
Autoantibodies/metabolism, Autoantigens/metabolism, Diabetes Mellitus, Type 1/metabolism, Disease Progression, Insulin-Secreting Cells/metabolism, Interleukin-18/metabolism, T-Lymphocytes/metabolism, Biomarkers/metabolism, Diabetes Mellitus, Type 1/physiopathology, Flow Cytometry, Humans, Immunoblotting
ABSTRACT
BACKGROUND AND OBJECTIVES: Therapeutic hypothermia has become standard of care in newborns with moderate and severe neonatal encephalopathy; however, additional interventions are needed. In experimental models, breathing xenon gas during cooling offers long-term additive neuroprotection. This is the first xenon feasibility study in cooled infants. Xenon is expensive, requiring a closed-circuit delivery system. METHODS: Cooled newborns with neonatal encephalopathy were eligible for this single-arm, dose-escalation study if clinically stable, under 18 hours of age, and requiring less than 35% oxygen. Xenon duration increased stepwise from 3 to 18 hours in 14 subjects; 1 received 25% xenon and 13 received 50%. Respiratory, cardiovascular, neurologic (ie, amplitude-integrated EEG, seizures), and inflammatory (C-reactive protein) effects were examined. The effects of starting or stopping xenon rapidly or slowly were studied. Three matched control subjects per xenon-treated subject were selected from our cooling database. Follow-up was at 18 months using the mental developmental and physical developmental indexes of the Bayley Scales of Infant Development II. RESULTS: No adverse respiratory or cardiovascular effects, including post-extubation stridor, were seen. Xenon increased sedation and suppressed seizures and background electroencephalographic activity. Seizures sometimes occurred during rapid weaning of xenon but not during slow weaning. C-reactive protein levels were similar between groups. Hourly xenon consumption was 0.52 L. Three died, and 7 of 11 survivors had mental and physical developmental index scores ≥70 at follow-up. CONCLUSIONS: Breathing 50% xenon for up to 18 hours with 72 hours of cooling was feasible, with no adverse effects seen with 18 months' follow-up.
Subjects
Anesthesia, Closed-Circuit/instrumentation, Asphyxia Neonatorum/therapy, Hypothermia, Induced/methods, Hypoxia-Ischemia, Brain/therapy, Xenon/therapeutic use, Asphyxia Neonatorum/diagnosis, Brain Damage, Chronic/diagnosis, Dose-Response Relationship, Drug, Feasibility Studies, Follow-Up Studies, Humans, Hypoxia-Ischemia, Brain/diagnosis, Infant, Infant, Newborn, Neurologic Examination, Tertiary Care Centers
ABSTRACT
OBJECTIVE: Therapeutic hypothermia (HT) is the standard treatment for newborns after perinatal asphyxia. Preclinical studies report that HT is more effective when started early. METHODS: Eighty cooled newborns were analyzed and grouped according to when cooling was started after birth: early (≤180 min) or late (>180 min). For survivors we analyzed whether starting cooling early was associated with a better psychomotor or mental developmental index (PDI or MDI, Bayley Scales of Infant Development II) than late cooling. RESULTS: Forty-three newborns started cooling early and 37 started late. There was no significant difference in the severity markers of perinatal asphyxia between the groups; however, nonsurvivors (n = 15) suffered more severe asphyxia and had significantly lower birth weight centiles (BWC; p = 0.009). Of the 65 infants that survived, 35 were cooled early and 30 were cooled late. There was no difference in time to start cooling between those who survived and those who did not. For survivors, median PDI (IQR) was significantly higher when cooled early [90 (77-99)] compared with late [78 (70-90); p = 0.033]. There was no increase in cardiovascular adverse effects in those cooled early. There was no significant difference in MDI between early and late cooling [93 (77-103) vs. 89 (76-106), p = 0.594]. CONCLUSION: Starting cooling before 3 h of age in surviving asphyxiated newborns is safe and significantly improves motor outcome. Cooling should be initiated as soon as possible after birth in eligible infants.
Subjects
Asphyxia Neonatorum/therapy, Child Development/physiology, Hypothermia, Induced/methods, Infant, Newborn, Motor Activity/physiology, Female, Humans, Hypothermia, Induced/standards, Male, Statistics, Nonparametric, Time Factors
ABSTRACT
OBJECTIVE: To assess whether increased inspired oxygen and/or hypocarbia during the first 6 hours of life are associated with adverse outcome at 18 months in term neonates treated with therapeutic hypothermia. STUDY DESIGN: Blood gas values and ventilatory settings were monitored hourly in 61 newborns for 6 hours after birth. We investigated whether there was an association between increased inspired oxygen and/or hypocarbia and adverse outcome (death or disability by Bayley Scales of Infant Development II examination at 18-20 months). RESULTS: Hypothermia was started from 3 hours 45 minutes (10 minutes-10 hours), and the median lowest PCO2 level within the first 6 hours of life was 30 mm Hg (16.5-96 mm Hg). The median highest fraction of inspired oxygen within the first hour of life was 0.43 (0.21-1.00). The area under the curve of the fraction of inspired oxygen and of PaO2 for hours 1-6 of life was 0.23 (0.21-1.0) and 86 mm Hg (22-197 mm Hg), respectively. We did not find any association between any measure of hypocarbia and adverse outcome (P > .05), but increased inspired oxygen correlated with adverse outcome, even when excluding newborns with initial oxygenation failure (P < .05). CONCLUSION: Increased fraction of inspired oxygen within the first 6 hours of life was significantly associated with adverse outcome in newborns treated with therapeutic hypothermia following hypoxic ischemic encephalopathy.
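The "area under the curve" oxygen-exposure measure can be sketched as a time-normalized trapezoidal integral of the hourly readings, which reduces to a time-weighted mean. The study's exact normalization is not stated, so this is one plausible construction, and the readings below are invented for illustration:

```python
def normalized_auc(times_h, values):
    """Trapezoidal area under hourly readings, divided by the time
    span, giving a time-weighted mean (e.g. of FiO2 over hours 1-6)."""
    area = 0.0
    for t0, t1, v0, v1 in zip(times_h, times_h[1:], values, values[1:]):
        area += 0.5 * (v0 + v1) * (t1 - t0)
    return area / (times_h[-1] - times_h[0])

# Hypothetical hourly FiO2 readings for hours 1-6 of life: an infant
# weaned from 43% oxygen down to room air (21%).
hours = [1, 2, 3, 4, 5, 6]
fio2 = [0.43, 0.30, 0.25, 0.21, 0.21, 0.21]
print(round(normalized_auc(hours, fio2), 3))
```

On this construction, an infant on room air throughout would score 0.21, so the cohort's value of 0.23 would indicate only modest average supplemental oxygen despite high initial fractions.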
Subjects
Asphyxia Neonatorum/therapy, Hypothermia, Induced, Hypoxia-Ischemia, Brain/therapy, Oxygen Inhalation Therapy/adverse effects, Oxygen/administration & dosage, Apgar Score, Blood Gas Analysis, Female, Humans, Hypoxia-Ischemia, Brain/physiopathology, Infant, Newborn, Male, Oxygen Consumption, Retrospective Studies
ABSTRACT
Type 1 diabetes is a common autoimmune disease that affects millions of people worldwide and has an incidence that is increasing at a striking rate, especially in young children. It results from the targeted self-destruction of the insulin-secreting β cells of the pancreas and requires lifelong insulin treatment. The effects of chronic hyperglycemia - the result of insulin deficiency - include secondary end-organ complications. Over the past two decades, our increased understanding of the pathogenesis of this disease has led to the development of new immunomodulatory treatments. None have yet received regulatory approval, but this report highlights recent progress in this area.
Subjects
Diabetes Mellitus, Type 1/immunology, Diabetes Mellitus, Type 1/therapy, Immunomodulation, Animals, Clinical Trials as Topic, Humans, Immunologic Factors/immunology, Immunologic Factors/therapeutic use, Nutrition Therapy
ABSTRACT
The development and optimization of immune therapies in patients has been hampered by the lack of preclinical models in which their effects on human immune cells can be studied. As a result, observations made in preclinical studies have suggested mechanisms of drug action in murine models that have not been confirmed in clinical studies. Here, we used a humanized mouse reconstituted with human hematopoietic stem cells to study the mechanism of action of teplizumab, an Fc receptor nonbinding humanized monoclonal antibody to CD3 being tested in clinical trials for the treatment of patients with type 1 diabetes mellitus. In this model, human gut-tropic CCR6+ T cells exited the circulation and secondary lymphoid organs and migrated to the small intestine. These cells then produced interleukin-10 (IL-10), a regulatory cytokine, in quantities that could be detected in the peripheral circulation. Blocking T cell migration to the small intestine with natalizumab, which prevents cellular adhesion by inhibiting α4 integrin binding, abolished the treatment effects of teplizumab. Moreover, IL-10 expression by CD4+CD25highCCR6+FoxP3 cells returning to the peripheral circulation was increased in patients with type 1 diabetes treated with teplizumab. These findings demonstrate that humanized mice may be used to identify novel immunologic mechanisms that occur in patients treated with immunomodulators.