ABSTRACT
This article is one of ten reviews selected from the Annual Update in Intensive Care and Emergency Medicine 2022. Other selected articles can be found online at https://www.biomedcentral.com/collections/annualupdate2022. Further information about the Annual Update in Intensive Care and Emergency Medicine is available from https://link.springer.com/bookseries/8901.
Subjects
Artificial Intelligence, Emergency Medicine, Critical Care, Humans
ABSTRACT
BACKGROUND: Even brief hypotension is associated with increased morbidity and mortality. We developed a machine learning model to predict the initial hypotension event among intensive care unit (ICU) patients and designed an alert system for bedside implementation. MATERIALS AND METHODS: Minute-by-minute vital signs were extracted from the Medical Information Mart for Intensive Care III (MIMIC-III) dataset. A hypotension event was defined as at least five measurements within a 10-min period with systolic blood pressure ≤ 90 mmHg and mean arterial pressure ≤ 60 mmHg. Using time series data from 30-min overlapping time windows, a random forest (RF) classifier was used to predict the risk of hypotension every minute. Chronologically, the first half of the extracted data was used to train the model, and the second half was used to validate it. The model's performance was measured with the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC). Hypotension alerts were generated from the risk score time series using a stacked RF model, and a lockout time was applied for real-life implementation. RESULTS: We identified 1307 subjects (1580 ICU stays) as the hypotension group and 1619 subjects (2279 ICU stays) as the non-hypotension group. The RF model showed an AUROC of 0.93 and 0.88 at 15 and 60 min before hypotension, respectively, and an AUPRC of 0.77 at 60 min before. Risk score trajectories revealed that 80% and > 60% of hypotension events were predicted at 15 and 60 min before onset, respectively. The stacked model with a 15-min lockout produced on average 0.79 alerts/subject/hour (sensitivity 92.4%). CONCLUSION: Clinically significant hypotension events in the ICU can be predicted at least 1 h before the initial hypotension episode. With a highly sensitive and reliable practical alert system, the vast majority of future hypotension events could be captured, suggesting potential real-life utility.
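The approach described — overlapping 30-min windows of minute-by-minute vitals, scored every minute by a random forest — can be sketched compactly. The following is a minimal illustration under stated assumptions (the summary features, window stepping, and stay-level labels are ours, not the published pipeline):

```python
# Minimal sketch of windowed random forest risk scoring, assuming
# minute-by-minute vitals per ICU stay; features are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

WINDOW_MIN = 30  # 30-min overlapping windows, as in the abstract

def window_features(vitals: np.ndarray) -> np.ndarray:
    """Summarize a (WINDOW_MIN, n_signals) block of vitals."""
    return np.concatenate([
        vitals.mean(axis=0),      # per-signal mean
        vitals.std(axis=0),       # per-signal variability
        vitals[-1] - vitals[0],   # net trend across the window
    ])

def make_dataset(stays, labels):
    """stays: list of (n_minutes, n_signals) arrays; labels: 0/1 per stay."""
    X, y = [], []
    for vitals, label in zip(stays, labels):
        for start in range(len(vitals) - WINDOW_MIN + 1):  # 1-min steps
            X.append(window_features(vitals[start:start + WINDOW_MIN]))
            y.append(label)
    return np.array(X), np.array(y)

# Chronological split: train on the first half, validate on the second.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
# clf.fit(X_train, y_train); risk = clf.predict_proba(X_valid)[:, 1]
```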
Subjects
Hypotension/diagnosis, Physiologic Monitoring/standards, Precision Medicine/methods, Vital Signs/physiology, Aged, Area Under Curve, Female, Humans, Hypotension/physiopathology, Intensive Care Units/organization & administration, Intensive Care Units/statistics & numerical data, Machine Learning/standards, Machine Learning/statistics & numerical data, Male, Middle Aged, Physiologic Monitoring/methods, Physiologic Monitoring/statistics & numerical data, ROC Curve, Risk Assessment/methods, Risk Assessment/standards, Risk Assessment/statistics & numerical data
ABSTRACT
Tachycardia is a strong though non-specific marker of cardiovascular stress that precedes hemodynamic instability. We designed a predictive model of tachycardia using multi-granular intensive care unit (ICU) data by creating a risk score and dynamic trajectory. A subset of clinical and numerical signals was extracted from the Multiparameter Intelligent Monitoring in Intensive Care II database. A tachycardia episode was defined as heart rate ≥ 130/min lasting for ≥ 5 min, with ≥ 10% density. Regularized logistic regression (LR) and random forest (RF) classifiers were trained to create a risk score for upcoming tachycardia. Three different risk score models were compared for the tachycardia and control (non-tachycardia) groups. Risk trajectories were generated from time windows moving away at 1-min increments from the tachycardia episode and were computed over the 3 hours leading up to the episode for the three models. From 2809 subjects, 787 tachycardia episodes and 707 control periods were identified. Patients with tachycardia had more vasopressor support, longer ICU stays, and higher ICU mortality than controls. In model evaluation, RF was slightly superior to LR, with accuracies ranging from 0.847 to 0.782 and areas under the curve from 0.921 to 0.842. Risk trajectory analysis showed that the average risk for the tachycardia group rose to 0.78 before the tachycardia episodes, while control group risk remained < 0.3. Among the three models, the internal control model demonstrated an evolving trajectory approximately 75 min before the tachycardia episode. Clinically relevant tachycardia episodes can be predicted from vital sign time series using machine learning algorithms.
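Given any such fitted classifier, the risk-trajectory analysis amounts to rescoring windows that step back from the episode in 1-min increments. A hedged sketch follows (the summary features and the fitted model are assumptions, not the paper's exact design):

```python
# Sketch of a per-minute risk trajectory over the 3 h before an episode.
import numpy as np

def summarize(window: np.ndarray) -> np.ndarray:
    """Simple per-signal summary of a (minutes, n_signals) window."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

def risk_trajectory(model, vitals: np.ndarray,
                    window_min: int = 30, horizon_min: int = 180) -> np.ndarray:
    """Risk at each of the horizon_min minutes before episode onset.

    vitals: (n_minutes, n_signals) array ending at onset; model: any
    fitted classifier exposing predict_proba (e.g., LR or RF)."""
    scores = []
    for lead in range(horizon_min, 0, -1):           # minutes before onset
        end = vitals.shape[0] - lead
        feats = summarize(vitals[end - window_min:end])
        scores.append(model.predict_proba(feats[None, :])[0, 1])
    return np.array(scores)  # per the abstract, rises roughly 75 min out
```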
Subjects
Cardiovascular Diseases/diagnosis, Critical Care/methods, Lung Diseases/diagnosis, Intraoperative Monitoring/methods, Tachycardia/diagnosis, Adult, Aged, Algorithms, Area Under Curve, Data Collection, Factual Databases, Electronic Health Records, Heart Rate, Hospital Mortality, Humans, Intensive Care Units, Logistic Models, Machine Learning, Middle Aged, ROC Curve, Regression Analysis, Reproducibility of Results, Risk, Tertiary Care Centers, Young Adult
ABSTRACT
BACKGROUND: Lung cancer is occasionally observed in patients with idiopathic pulmonary fibrosis (IPF). We sought to describe the epidemiologic and clinical characteristics of lung cancer in patients with IPF and other interstitial lung disease (ILD) using institutional and statewide data registries. METHODS: We conducted a retrospective analysis of IPF and non-IPF ILD patients from the ILD center registry, compared with lung cancer registries at the University of Pittsburgh as well as with population-level lung cancer data obtained from the Pennsylvania Department of Health between 2000 and 2015. RESULTS: Among 1108 IPF patients, 31 were identified with IPF and lung cancer. The age-adjusted standardized incidence ratio of lung cancer was 3.34 with IPF and 2.3 with non-IPF ILD (between-group hazard ratio = 1.4, p = 0.3). Lung cancer worsened the mortality of IPF (p < 0.001). Lung cancer with IPF carried higher mortality than lung cancer in non-IPF ILD (hazard ratio = 6.2, p = 0.001). Lung cancer in IPF was characterized by a predilection for the lower lobes (63% vs. 26% in non-IPF lung cancer, p < 0.001) and by squamous cell histology (41% vs. 29%, p = 0.07). An increased incidence of lung cancer was observed among single lung transplant (SLT) recipients for IPF (13 of 97, 13.4%), with increased mortality compared with SLT for IPF without lung cancer (p = 0.028) during the observation period. CONCLUSIONS: Lung cancer is approximately 3.34 times more frequently diagnosed in IPF patients than in the general population and is associated with a worse prognosis than IPF without lung cancer, with a predilection for squamous cell carcinoma and the lower lobes. The causal relationship between IPF and lung cancer in non-smoking patients remains to be determined.
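For context, the standardized incidence ratio reported above compares observed case counts against the count expected from population rates; in the usual indirect-standardization form (notation ours):

```latex
\mathrm{SIR} \;=\; \frac{O}{E} \;=\; \frac{O}{\sum_{a} n_a \,\lambda_a}
```

where O is the number of lung cancers observed in the cohort, n_a the person-time in age stratum a, and λ_a the population incidence rate in that stratum; an SIR of 3.34 therefore means roughly 3.34 observed cases for every expected case.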
Subjects
Data Analysis, Factual Databases/trends, Idiopathic Pulmonary Fibrosis/epidemiology, Interstitial Lung Diseases/epidemiology, Lung Neoplasms/epidemiology, Aged, Female, Humans, Idiopathic Pulmonary Fibrosis/diagnosis, Interstitial Lung Diseases/diagnosis, Lung Neoplasms/diagnosis, Male, Middle Aged, Registries, Retrospective Studies
ABSTRACT
OBJECTIVES: Early signs of bleeding are often masked by physiologic compensatory responses, delaying its identification. We sought to describe early physiologic signatures of bleeding during the blood donation process. SETTING: Waveform-level vital sign data, including electrocardiography, photoplethysmography (PPG), continuous noninvasive arterial pressure, and respiratory waveforms, were collected before, during, and after bleeding. SUBJECTS: Fifty-five healthy volunteers visited a blood donation center to donate whole blood. INTERVENTION: After informed consent was obtained, 3 minutes of resting time were given to each subject, followed by 3 minutes of orthostasis and another 3 minutes of rest before the blood donation. After the donation was completed, subjects underwent another 3 minutes of postbleeding rest, followed again by 3 minutes of orthostasis. MEASUREMENTS AND MAIN RESULTS: From 55 subjects, waveform signals as well as numerical vital signs (heart rate [HR], respiratory rate, blood pressure) and clinical characteristics were collected, and data from 51 subjects were analyzable. Any adverse events (AEs; dizziness, lightheadedness, nausea) were documented. Statistical and physiologic features, including HR variability (HRV) metrics and other waveform morphologic parameters, were modeled, and feature trends for all participants across the study protocol were analyzed. No significant changes in HR, blood pressure, or estimated cardiac output were seen during bleeding. Both orthostatic challenges and bleeding significantly decreased time-domain and high-frequency-domain HRV and PPG amplitude, while increasing PPG amplitude variation. During bleeding, time-domain HRV feature trends were most sensitive to the first 100 mL of blood loss, and incremental changes in different HRV parameters (from 300 mL of blood loss), as well as in a PPG morphologic feature (from 400 mL of blood loss), reached statistical significance. The AE group (n = 6) showed decreased sample entropy compared with the non-AE group during the postbleed orthostatic challenge (p = 0.003). No other significant trend differences were observed during bleeding between the AE and non-AE groups. CONCLUSIONS: Various HRV-related features changed during rapid bleeding, with changes seen within the first minute. Subjects with AEs during postbleeding orthostasis showed decreased sample entropy. These findings could be leveraged toward earlier identification of donors at risk for AEs and, more broadly, toward building a data-driven hemorrhage model for the early treatment of critical bleeding.
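The time-domain HRV features central to these findings are simple functions of successive RR intervals. A minimal sketch (not the study's feature pipeline; the metric choices here are the conventional ones):

```python
# Conventional time-domain HRV metrics from RR intervals in milliseconds.
import numpy as np

def time_domain_hrv(rr_ms: np.ndarray) -> dict:
    diffs = np.diff(rr_ms)
    return {
        "SDNN": rr_ms.std(ddof=1),                    # overall variability
        "RMSSD": np.sqrt(np.mean(diffs ** 2)),        # beat-to-beat variability
        "pNN50": np.mean(np.abs(diffs) > 50.0) * 100, # % successive diffs > 50 ms
    }

# Falling SDNN/RMSSD across windows is the direction of change the study
# reports during orthostasis and bleeding.
```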
ABSTRACT
The role of initial hemodialysis (HD) vascular access in subsequent kidney transplant outcomes is unclear. The study population was derived from the United States Renal Data System and included adult patients with end-stage renal disease who started HD between 1/1/2005 and 9/1/2009 and subsequently received at least one kidney transplant. The primary outcome variables were death-censored graft loss and all-cause recipient mortality. Among the study population (n = 17,157), 12,428 (72.4%) patients were initiated on HD with a catheter, 4090 (23.8%) with an arteriovenous fistula (AVF), and 639 (3.7%) with an arteriovenous graft (AVG). The rate of death-censored kidney allograft loss in the AVF and AVG groups was not significantly different from the catheter group (HR, 0.82; p = 0.07 and HR, 0.68; p = 0.13, respectively). All-cause mortality of patients initiated on HD with an AVG (HR, 0.761; p = 0.21) was not significantly different from those with catheters. However, all-cause mortality in the AVF group was lower than in patients initiated on HD with catheters (HR, 0.65; p = 0.001). AVF use at the initiation of HD was associated with a lower rate of all-cause mortality after kidney transplantation compared with catheters. The type of initial vascular access for hemodialysis was not associated with kidney allograft survival.
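Analyses like this are typically fit as Cox proportional hazards models with access type as a categorical covariate (catheter as reference). A hedged sketch using the lifelines package and synthetic null data (the column names and data-generating choices are ours, not the study's):

```python
# Illustrative death-censored Cox model; synthetic data with no true effect.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 300
access = rng.choice(["catheter", "avf", "avg"], size=n, p=[0.72, 0.24, 0.04])
latent = rng.exponential(8.0, size=n)        # latent graft survival (years)
censor = rng.uniform(1.0, 10.0, size=n)      # administrative censoring
time = np.minimum(latent, censor)
event = (latent <= censor).astype(int)       # 1 = death-censored graft loss

df = pd.DataFrame({
    "time": time,
    "event": event,
    "avf": (access == "avf").astype(int),    # reference group: catheter
    "avg": (access == "avg").astype(int),
})
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.hazard_ratios_)                    # HRs for AVF/AVG vs. catheter
```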
Subjects
Surgical Arteriovenous Shunt/adverse effects, Chronic Kidney Failure/surgery, Kidney Transplantation, Renal Dialysis/methods, Vascular Access Devices/adverse effects, Adult, Aged, Female, Graft Survival, Humans, Chronic Kidney Failure/mortality, Chronic Kidney Failure/therapy, Kidney Transplantation/mortality, Male, Middle Aged, Proportional Hazards Models, Renal Dialysis/instrumentation, Retrospective Studies, Treatment Outcome
ABSTRACT
Over the past decades, the field of machine learning (ML) has made great strides in medicine. Despite the number of ML-inspired publications in the clinical arena, the results and implications are not readily accepted at the bedside. Although ML is very powerful in deciphering hidden patterns in complex critical care and emergency medicine data, various factors, including data, feature generation, model design, performance assessment, and limited implementation, can affect the utility of the research. In this short review, a series of current challenges in adopting ML models for clinical research is discussed.
ABSTRACT
BACKGROUND AND OBJECTIVES: Proper airway management during emergencies can prevent serious complications. However, cricothyroidotomy is challenging in patients with obesity, and because the technique is performed infrequently but at critical moments, training opportunities are rare. Simulators for the procedure are also lacking. Therefore, we proposed a realistic and interactive cricothyroidotomy simulator. METHODS: All anatomical structures were modeled based on computed tomography images of a patient with obesity. To mimic the feeling of incision during cricothyroidotomy, the incision site was modeled to distinguish between the skin and fat. To reinforce the educational purpose, capacitive touch sensors were attached to the artery, vein, and thyroid to generate audio feedback. The tensile strength of the silicone-cast skin was measured to verify the similarity of the mechanical properties between human skin and our model. The fabrication and assembly accuracies of the phantom, comparing the Standard Tessellation Language (STL) files with the fabricated model, were evaluated, as were audio feedback through sensing of the anatomical parts and overall utilization. RESULTS: The body, skull, clavicle, artery, vein, and thyroid were fabricated using fused deposition modeling (FDM) with polylactic acid. A skin mold was fabricated using FDM with thermoplastic polyurethane. A fat mold was fabricated using a stereolithography apparatus (SLA) with a clear resin, and the airway and tongue were fabricated using SLA with an elastic resin. The tensile strength of the skin using silicone with and without polyester mesh was 2.63 ± 0.68 and 2.46 ± 0.21 MPa, respectively. The measurement errors for fabricating and assembling parts of the phantom, between the STL files and the fabricated models, were -0.08 ± 0.19 mm and 0.13 ± 0.64 mm, respectively. The measurement error for the internal anatomical surfaces embedded in the fat part was 0.41 ± 0.89 mm. Audio feedback was generated in 100% of the areas tested. Average scores for realism, understanding of clinical skills, and intention to retrain were 7.1, 8.8, and 8.3 points, respectively. CONCLUSIONS: Our simulator can provide a realistic simulation experience for trainees through a realistic feeling of incision and audio feedback, and it can be used in actual clinical education.
Subjects
Three-Dimensional Printing, Stereolithography, Humans, Computer Simulation, Skull, Obesity
ABSTRACT
Importance: Dexmedetomidine is a widely used sedative in the intensive care unit (ICU) and has unique properties that may be associated with a reduced occurrence of new-onset atrial fibrillation (NOAF). Objective: To investigate whether the use of dexmedetomidine is associated with the incidence of NOAF in patients with critical illness. Design, Setting, and Participants: This propensity score-matched cohort study was conducted using the Medical Information Mart for Intensive Care-IV database, which includes records of patients admitted to the ICU at Beth Israel Deaconess Medical Center in Boston from 2008 through 2019. Included patients were those aged 18 years or older and hospitalized in the ICU. Data were analyzed from March through May 2022. Exposure: Patients were divided into 2 groups according to dexmedetomidine exposure: those who received dexmedetomidine within 48 hours after ICU admission (dexmedetomidine group) and those who never received dexmedetomidine (no dexmedetomidine group). Main Outcomes and Measures: The primary outcome was the occurrence of NOAF within 7 days of ICU admission, as defined by the nurse-recorded rhythm status. Secondary outcomes were ICU length of stay, hospital length of stay, and in-hospital mortality. Results: This study included 22,237 patients before matching (mean [SD] age, 65.9 [16.7] years; 12,350 male patients [55.5%]). After 1:3 propensity score matching, the cohort included 8015 patients (mean [SD] age, 61.0 [17.1] years; 5240 male patients [65.4%]), among whom 2106 and 5909 patients were in the dexmedetomidine and no dexmedetomidine groups, respectively. Use of dexmedetomidine was associated with a decreased risk of NOAF (371 patients [17.6%] vs 1323 patients [22.4%]; hazard ratio, 0.80; 95% CI, 0.71-0.90). Although patients in the dexmedetomidine group had longer median (IQR) lengths of stay in the ICU (4.0 [2.7-6.9] days vs 3.5 [2.5-5.9] days; P < .001) and hospital (10.0 [6.6-16.3] days vs 8.8 [5.9-14.0] days; P < .001), dexmedetomidine was associated with a decreased risk of in-hospital mortality (132 deaths [6.3%] vs 758 deaths [12.8%]; hazard ratio, 0.43; 95% CI, 0.36-0.52). Conclusions and Relevance: This study found that dexmedetomidine was associated with a decreased risk of NOAF in patients with critical illness, suggesting that this association warrants evaluation in future clinical trials.
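The 1:3 propensity score matching can be illustrated with a standard nearest-neighbor variant; the study's exact matching algorithm (caliper, replacement policy) is not given in the abstract, so the choices below are assumptions:

```python
# Sketch of 1:3 nearest-neighbor propensity-score matching with replacement.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def match_1_to_3(X: np.ndarray, treated: np.ndarray) -> np.ndarray:
    """X: (n, p) covariates; treated: 0/1 exposure (dexmedetomidine).
    Returns, for each treated row, indices of the 3 closest-PS controls."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    t_idx = np.where(treated == 1)[0]
    c_idx = np.where(treated == 0)[0]
    nn = NearestNeighbors(n_neighbors=3).fit(ps[c_idx].reshape(-1, 1))
    _, matches = nn.kneighbors(ps[t_idx].reshape(-1, 1))
    return c_idx[matches]          # shape (n_treated, 3); with replacement
```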
Subjects
Atrial Fibrillation, Critical Illness, Humans, Male, Aged, Middle Aged, Cohort Studies, Atrial Fibrillation/drug therapy, Atrial Fibrillation/epidemiology, Hypnotics and Sedatives/adverse effects, Intensive Care Units
ABSTRACT
A higher education level might result in reduced disparities in access to renal transplantation. We analyzed two outcomes: (i) being placed on the waiting list or transplanted without listing and (ii) transplantation in patients who were placed on the waiting list. We identified 3224 adult patients with end-stage renal disease (ESRD) in the United States Renal Data System with education information available (mean age at ESRD onset 57.1 ± 16.2 years, 54.3% men, 64.2% white, and 50.4% diabetic). Compared with whites, fewer African Americans graduated from college (10% vs. 16.7%) and a higher percentage never graduated from high school (38.6% vs. 30.8%). African American race was associated with reduced access to transplantation (hazard ratio [HR] 0.70, p < 0.001 for wait-listing/transplantation without listing; HR 0.58, p < 0.001 for transplantation after listing). African American patients were less likely to be wait-listed/transplanted in the three less-educated groups: HR 0.67 (p = 0.005) for those who never completed high school, HR 0.76 (p = 0.02) for high school graduates, and HR 0.65 (p = 0.003) for those with partial college education. However, the difference lost statistical significance in those who completed a college education (HR 0.75, p = 0.1). In conclusion, comparing white and African American candidates, racial disparities in access to kidney transplantation do exist; however, they might be alleviated in highly educated individuals.
Subjects
Black or African American/statistics & numerical data, Health Services Accessibility/statistics & numerical data, Healthcare Disparities, Chronic Kidney Failure/ethnology, Kidney Transplantation/statistics & numerical data, Patient Education as Topic, White Population/statistics & numerical data, Adolescent, Adult, Educational Status, Female, Health Status Disparities, Humans, Chronic Kidney Failure/surgery, Male, Middle Aged, Prognosis, Retrospective Studies, Waiting Lists, Young Adult
ABSTRACT
A significant proportion of clinical physiologic monitoring alarms are false. This often leads to alarm fatigue in clinical personnel, inevitably compromising patient safety. To combat this issue, researchers have attempted to build Machine Learning (ML) models capable of accurately adjudicating Vital Sign (VS) alerts raised at the bedside of hemodynamically monitored patients as real or artifact. Previous studies have utilized supervised ML techniques that require substantial amounts of hand-labeled data. However, manually harvesting such data can be costly, time-consuming, and mundane, and is a key factor limiting the widespread adoption of ML in healthcare (HC). Instead, we explore the use of multiple, individually imperfect heuristics to automatically assign probabilistic labels to unlabeled training data using weak supervision. Our weakly supervised models perform competitively with traditional supervised techniques and require less involvement from domain experts, demonstrating their use as efficient and practical alternatives to supervised learning in HC applications of ML.
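In this spirit, a handful of individually imperfect heuristics can be combined into a probabilistic label without any hand labeling. A minimal sketch follows (the heuristics and thresholds are illustrative assumptions, not the paper's labeling functions):

```python
# Toy weak supervision: heuristics vote "real"/"artifact" or abstain on a
# vital-sign segment; the fraction of non-abstaining "real" votes serves
# as a probabilistic training label.
import numpy as np

ABSTAIN, REAL, ARTIFACT = -1, 1, 0

def lf_flatline(seg):       # a flat trace often means a disconnected sensor
    return ARTIFACT if np.ptp(seg) < 1e-3 else ABSTAIN

def lf_out_of_range(seg):   # physiologically implausible values
    return ARTIFACT if seg.max() > 300 or seg.min() < 0 else ABSTAIN

def lf_sustained(seg):      # sustained abnormal values suggest a real alert
    return REAL if np.mean(seg > 130) > 0.8 else ABSTAIN

def probabilistic_label(seg, lfs=(lf_flatline, lf_out_of_range, lf_sustained)):
    votes = [v for v in (lf(seg) for lf in lfs) if v != ABSTAIN]
    return float(np.mean(votes)) if votes else 0.5   # P(alert is real)
```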
Subjects
Artifacts, Physiologic Monitoring, Supervised Machine Learning, Vital Signs, Humans, Physiologic Monitoring/methods, Physiologic Monitoring/standards, Heuristics, Automation
ABSTRACT
This study performed two different analyses using a large set of population data from the Korean National Health Insurance Service Health Screening Cohort to evaluate the bidirectional association between temporomandibular disorder (TMD) and Parkinson's disease (PD). Two nested case-control population-based studies were conducted on 514,866 participants. In Study I, 4455 participants with TMD were matched with 17,820 control participants at a ratio of 1:4. In Study II, 6076 participants with PD were matched with 24,304 control participants at a ratio of 1:4. Obesity, smoking, alcohol consumption, systolic and diastolic blood pressure, fasting blood glucose level, and total cholesterol were adjusted for. The adjusted odds ratio (OR) for TMD was 1.43 (95% confidence interval (CI) = 1.02-2.00) in PD patients compared with non-PD patients in Study I (p < 0.001). The adjusted OR for PD was 1.56 (95% CI = 1.13-2.15) in TMD patients compared with non-TMD patients in Study II (p = 0.007). This study demonstrated that patients with TMD have a significantly higher risk of developing PD and, conversely, that those with PD have a significantly higher risk of developing TMD.
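For reference, the unadjusted version of odds ratios like those reported above comes straight from the 2×2 case-control table (the study's ORs additionally adjust for the listed covariates):

```latex
\mathrm{OR} \;=\; \frac{a/b}{c/d} \;=\; \frac{ad}{bc},
\qquad
\begin{array}{l}
a,\,b:\ \text{exposed / unexposed cases},\\
c,\,d:\ \text{exposed / unexposed controls}.
\end{array}
```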
ABSTRACT
Heparin-induced thrombocytopenia (HIT) is associated with a high incidence of vein graft occlusion after cardiac surgery. When HIT is suspected during the post-operative period, current guidelines recommend that a direct thrombin inhibitor such as argatroban be started immediately. The aim of this retrospective study was to evaluate the safety and efficacy of argatroban in the early period after cardiac surgery. All patients who received argatroban within 72 h after cardiac surgery from September 2005 to June 2009 at a single center were included. Patient demographics, relevant pre-operative history, intra-operative events, and post-operative data were collected and analyzed. The primary endpoints were bleeding, thrombotic complications during or after argatroban administration, and in-hospital mortality. The study population comprised 31 patients administered argatroban within 72 h after cardiac surgery. Argatroban was started a mean of 1.7 days after surgery (median dose, 0.66 µg/kg/min; median duration, 5.9 days). Twenty patients (64.5%) experienced bleeding, an endpoint driven entirely by the need for blood transfusion. No new thromboembolic complication occurred during or after argatroban infusion. One patient died from aspiration pneumonia. Compared with those without bleeding complications, patients who bled had longer operation times and greater use of intra-aortic balloon pumps. However, argatroban therapy parameters, including start time, median dose, infusion duration, and activated partial thromboplastin times, did not differ between the two groups. In cardiac surgery patients with clinical suspicion of HIT, early postoperative use of argatroban appears well tolerated and associated with a low risk of thrombotic events.
Subjects
Cardiac Surgical Procedures/methods, Pipecolic Acids/therapeutic use, Postoperative Care/methods, Aged, Aged 80 and over, Arginine/analogs & derivatives, Cardiac Surgical Procedures/adverse effects, Drug-Related Side Effects and Adverse Reactions, Female, Hemorrhage, Hospital Mortality, Humans, Male, Middle Aged, Pipecolic Acids/adverse effects, Postoperative Complications, Retrospective Studies, Sulfonamides, Thrombin/antagonists & inhibitors, Thrombosis/etiology, Treatment Outcome
ABSTRACT
OBJECTIVE: Early detection and timely management of bleeding are critical, as failure to recognize physiologically significant bleeding is associated with substantial morbidity and mortality. Many such instances are detected late, even in highly monitored environments, contributing to delays in recognition and intervention. We propose a non-invasive early identification model to detect bleeding events using continuously collected photoplethysmography (PPG) and electrocardiography (ECG) waveforms. APPROACH: Fifty-nine York pigs undergoing fixed-rate, controlled hemorrhage were involved in this study, and a least absolute shrinkage and selection operator (LASSO) regression-based early detection model was developed and tested using PPG- and ECG-derived features. The output of the early detection model was a risk trajectory indicating the future probability of bleeding. MAIN RESULTS: Our proposed models were generally accurate in predicting bleeding, with an area under the curve of 0.89 (95% CI 0.87-0.92), and achieved an average time of 16.1 min to detect 16.8% blood loss when a false alert rate of 1% was tolerated. Models developed on non-invasive data performed with similar discrimination and lead time to hemorrhage compared with models using invasive arterial blood pressure as monitoring data. SIGNIFICANCE: A bleed detection model using only non-invasive monitoring performs as well as those using invasive arterial pressure monitoring.
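The LASSO component can be sketched with a standard L1-regularized classifier; the feature layout and regularization strength below are assumptions, not the study's fitted model:

```python
# Hedged sketch of an L1-regularized (LASSO-type) bleed detector.
from sklearn.linear_model import LogisticRegression

detector = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
# detector.fit(X_train, y_bled)                  # rows: windows of PPG/ECG features
# risk = detector.predict_proba(X_stream)[:, 1]  # risk trajectory over time
# n_kept = (detector.coef_ != 0).sum()           # L1 yields sparse feature selection
```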
Subjects
Hemorrhage/diagnosis, Vital Signs, Animals, Electrocardiography, Photoplethysmography, Computer-Assisted Signal Processing, Swine
ABSTRACT
We hypothesized that differences in the microbiome could be a cause of the substantial differences in the symptoms of and treatment options for adult and pediatric patients with chronic rhinosinusitis (CRS). We first characterized the differences in the nasal microbiomes of pediatric and adult CRS patients. Swabs were obtained from 19 patients with chronic rhinosinusitis (9 children and 10 adults), and the bacterial 16S rRNA gene was pyrosequenced to compare the microbiota of the middle meatus. No significant differences were found in species richness or alpha-diversity indices between the two groups. However, when comparing diversity between groups using unweighted pair group method with arithmetic mean (UPGMA) clustering of microbiome taxonomic profiles, we observed a relatively clear separation between the adult and pediatric groups. At the phylum level, Actinobacteria had a significantly higher relative abundance in the adult group than in the pediatric group; at the genus level, Corynebacterium showed significantly higher relative abundance in the adult group. This is a comparative study of the microbiomes of adult and pediatric CRS patients, and we expect it to be a first step toward understanding the pathogenesis of CRS in adults and children using microbiome analysis.
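The UPGMA comparison corresponds to average-linkage hierarchical clustering over pairwise community distances. A hedged sketch with SciPy follows (the Bray-Curtis distance and the random profiles are assumptions for illustration):

```python
# UPGMA (= average linkage) over per-sample taxonomic profiles.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import dendrogram, linkage

rng = np.random.default_rng(0)
profiles = rng.dirichlet(np.ones(20), size=19)   # 19 samples x 20 taxa
dist = pdist(profiles, metric="braycurtis")      # pairwise community distance
tree = linkage(dist, method="average")           # "average" linkage == UPGMA
# dendrogram(tree)  # adult vs. pediatric samples separating into clusters
```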
Subjects
Microbiota, Rhinitis/microbiology, Sinusitis/microbiology, Adult, Bacteria/metabolism, Biodiversity, Child, Chronic Disease, Female, Humans, Male
ABSTRACT
BACKGROUND/AIMS: Current evidence supports lung ultrasound as a point-of-care alternative diagnostic tool for various respiratory diseases. We sought to determine the utility of lung ultrasound for early detection of pneumonia and for assessment of respiratory failure among patients with coronavirus disease 2019 (COVID-19). METHODS: Six patients with COVID-19 confirmed by reverse transcription-polymerase chain reaction were enrolled. All had undergone chest X-ray and chest computed tomography (CT) on the day of admission and underwent multiple point-of-care lung ultrasound scans over the course of their hospitalization. RESULTS: Lung ultrasound detected early abnormal findings of representative B-lines in a patient with a normal chest X-ray, corresponding to ground-glass opacities on the chest CT scan; the ultrasound findings improved as her clinical condition improved and her viral load decreased. In another minimally symptomatic patient without significant chest X-ray findings, ultrasound showed B-lines, an early sign of pneumonia, before abnormalities were detected on the chest CT scan. In two critically ill patients, ultrasound was performed to evaluate disease severity; in both, clinicians conducted emergency rapid sequence intubation based on the ultrasound findings without awaiting laboratory results or radiology reports. In two children, ultrasound was used to assess the improvement of their pneumonia, thus avoiding further imaging tests such as chest CT. CONCLUSION: Lung ultrasound is feasible and useful as a rapid, sensitive, and affordable point-of-care screening tool to detect pneumonia and assess the severity of respiratory failure in patients hospitalized with COVID-19.
Subjects
Coronavirus Infections/diagnostic imaging, Lung/diagnostic imaging, Viral Pneumonia/diagnostic imaging, Ultrasonography, Adult, Aged 80 and over, Betacoronavirus/isolation & purification, COVID-19, Child, Coronavirus Infections/virology, Female, Humans, Infant, Male, Middle Aged, Pandemics, Viral Pneumonia/virology, Respiratory Distress Syndrome/diagnostic imaging, Respiratory Distress Syndrome/virology, Retrospective Studies, SARS-CoV-2
ABSTRACT
The irreproducibility of preclinical biomedical research has gained recent attention. It has been suggested that requiring authors to complete a checklist at the time of manuscript submission would improve the quality and transparency of scientific reporting and ultimately enhance reproducibility. Whether a checklist enhances quality and transparency in reporting preclinical animal studies, however, has not been empirically studied. Here we searched two highly cited life science journals, one that requires a checklist at submission (Nature) and one that does not (Cell), to identify in vivo animal studies. After screening 943 articles, a total of 80 articles were identified in 2013 (pre-checklist) and 2015 (post-checklist) and included for detailed evaluation of reported methodological and analytical information. We compared the quality of reporting of preclinical animal studies between the two journals, accounting for differences between journals and changes over time in reporting. We find that reporting of randomization, blinding, and sample-size estimation improved significantly in Nature relative to Cell from 2013 to 2015, likely due to the implementation of a checklist. Specifically, the improvement in reporting of these three methodological items was at least three times greater when a mandatory checklist was implemented than when it was not. Reporting of the sex of animals and of the number of independent experiments performed also improved from 2013 to 2015, likely from factors unrelated to the checklist. Our study demonstrates that completing a checklist at manuscript submission is associated with improved reporting of key methodological information in preclinical animal studies.
Subjects
Biomedical Research/standards, Checklist, Data Accuracy, Animals, Biomedical Research/statistics & numerical data, Preclinical Drug Evaluation/standards, Humans, Animal Models, Publications/standards, Publications/statistics & numerical data, Reproducibility of Results, Research Design/standards
ABSTRACT
AIMS: We compared optical coherence tomography (OCT) features of intermediate and severe coronary stenoses in patients with stable angina and acute coronary syndrome (ACS), and tested the clinical impact of an OCT-based strategy for treating intermediate stenoses. METHODS: The study enrolled 135 consecutive patients with either ACS or stable angina and a single de novo coronary stenosis. Patients were divided into two groups: intermediate stenosis, defined as percentage narrowing less than 70% on quantitative coronary angiography or the presence of angiographic vessel haziness, and severe stenosis, with percentage narrowing more than 70%. OCT was performed to assess features of plaque vulnerability and to measure the minimal lumen area. We also appraised the 12-month rate of major adverse cardiac events (MACE) with an OCT-guided strategy of percutaneous coronary intervention (PCI) based on the presence of thrombus and/or a minimal lumen area less than 3.0 mm². RESULTS: Fifty-six patients had intermediate stenoses, whilst 79 had severe stenoses. In the intermediate stenosis group, patients with stable angina had a lower asymmetric index (P = 0.02) and a greater calcific arc (P = 0.0001). In the severe stenosis group, lesions of patients with ACS exhibited a greater lipid arc compared with patients with stable angina (P = 0.03). A higher prevalence of thin-cap fibroatheroma was seen in patients with ACS in both groups. The incidence of MACE was not significantly different between patients with an intermediate stenosis who received PCI vs. optimal medical therapy on the basis of OCT findings (P = 0.26). CONCLUSIONS: Intermediate coronary stenoses showed distinctive OCT-based features according to the initial clinical presentation. The adoption of an OCT-guided PCI strategy, based on the presence of coronary thrombus and significant vessel narrowing, led to encouraging results.