Results 1 - 20 of 55
2.
Pain; 165(5): 1121-1130, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38015622

ABSTRACT

Although inflammation is known to play a role in knee osteoarthritis (KOA), inflammation-specific imaging is not routinely performed. In this article, we evaluate the role of joint inflammation, measured using [11C]-PBR28, a radioligand for the inflammatory marker 18-kDa translocator protein (TSPO), in KOA. Twenty-one KOA patients and 11 healthy controls (HC) underwent positron emission tomography/magnetic resonance imaging (PET/MRI) knee imaging with the TSPO ligand [11C]-PBR28. Standardized uptake values were extracted from regions of interest (ROIs) semiautomatically segmented from MRI data, and compared across groups (HC, KOA) and subgroups (unilateral/bilateral KOA symptoms), across knees (most vs least painful), and against clinical variables (eg, pain and Kellgren-Lawrence [KL] grades). Overall, KOA patients demonstrated elevated [11C]-PBR28 binding across all knee ROIs compared with HC (all P's < 0.005). Specifically, PET signal was significantly elevated in both knees in patients with bilateral KOA symptoms (both P's < 0.01), and in the symptomatic knee (P < 0.05), but not the asymptomatic knee (P = 0.95), of patients with unilateral KOA symptoms. Positron emission tomography signal was higher in the most vs least painful knee (P < 0.001), and the difference in pain ratings across knees was proportional to the difference in PET signal (r = 0.74, P < 0.001). Kellgren-Lawrence grades correlated neither with PET signal (left knee r = 0.32, P = 0.19; right knee r = 0.18, P = 0.45) nor with pain (r = 0.39, P = 0.07). The current results support further exploration of [11C]-PBR28 PET signal as a candidate imaging marker for KOA and a link between joint inflammation and osteoarthritis-related pain severity.
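The across-knee analysis described above can be illustrated with a short statistical sketch. The arrays below are hypothetical placeholders (not study data) standing in for per-patient SUVs averaged over knee ROIs and the corresponding pain ratings; the paired test and Pearson correlation mirror the comparisons reported in the abstract.

```python
# Hedged sketch of the across-knee comparison: paired test of [11C]-PBR28 uptake
# (most vs least painful knee) and correlation of across-knee differences in PET
# signal with across-knee differences in pain. All values are hypothetical.
import numpy as np
from scipy import stats

suv_most = np.array([1.9, 2.1, 1.7, 2.4, 2.0, 2.3])   # hypothetical SUVs, most painful knee
suv_least = np.array([1.5, 1.8, 1.6, 1.9, 1.7, 1.8])  # hypothetical SUVs, least painful knee
pain_most = np.array([7, 6, 5, 8, 6, 7])               # hypothetical pain ratings
pain_least = np.array([3, 4, 4, 2, 5, 3])

# Is PET signal higher in the most painful knee?
t_stat, p_paired = stats.ttest_rel(suv_most, suv_least)

# Is the across-knee difference in pain proportional to the difference in PET signal?
r, p_corr = stats.pearsonr(suv_most - suv_least, pain_most - pain_least)
print(f"paired t-test p = {p_paired:.3f}; Pearson r = {r:.2f} (p = {p_corr:.3f})")
```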


Subject(s)
Osteoarthritis, Knee; Humans; Osteoarthritis, Knee/diagnostic imaging; Positron-Emission Tomography/methods; Knee Joint/metabolism; Inflammation/diagnostic imaging; Pain; Receptors, GABA/metabolism
3.
4.
J Am Coll Radiol; 20(11): 1126-1130, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37392983

ABSTRACT

Users of artificial intelligence (AI) can become overreliant on AI, negatively affecting the performance of human-AI teams. For a future in which radiologists use interpretive AI tools routinely in clinical practice, radiology education will need to evolve to provide radiologists with the skills to use AI appropriately and wisely. In this work, we examine how overreliance on AI may develop in radiology trainees and explore how this problem can be mitigated, including through the use of AI-augmented education. Radiology trainees will still need to develop the perceptual skills and mastery of knowledge fundamental to radiology to use AI safely. We propose a framework for radiology trainees to use AI tools with appropriate reliance, drawing on lessons from human-AI interactions research.


Subject(s)
Artificial Intelligence; Radiology; Humans; Radiology/education; Radiologists; Forecasting
5.
Br J Radiol; 96(1149): 20220769, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37162253

ABSTRACT

OBJECTIVES: Current state-of-the-art natural language processing (NLP) techniques use transformer deep-learning architectures, which depend on large training datasets. We hypothesized that traditional NLP techniques may outperform transformers for smaller radiology report datasets. METHODS: We compared the performance of BioBERT, a deep-learning-based transformer model pre-trained on biomedical text, and three traditional machine-learning models (gradient boosted tree, random forest, and logistic regression) on seven classification tasks given free-text radiology reports. Tasks included detection of appendicitis, diverticulitis, bowel obstruction, and enteritis/colitis on abdomen/pelvis CT reports, ischemic infarct on brain CT/MRI reports, and medial and lateral meniscus tears on knee MRI reports (7,204 total annotated reports). The performance of NLP models on held-out test sets was compared after training using the full training set, and 2.5%, 10%, 25%, 50%, and 75% random subsets of the training data. RESULTS: In all tested classification tasks, BioBERT performed poorly at smaller training sample sizes compared to non-deep-learning NLP models. Specifically, BioBERT required training on approximately 1,000 reports to perform similarly or better than non-deep-learning models. At around 1,250 to 1,500 training samples, the testing performance for all models began to plateau, where additional training data yielded minimal performance gain. CONCLUSIONS: With larger sample sizes, transformer NLP models achieved superior performance in radiology report binary classification tasks. However, with smaller sizes (<1000) and more imbalanced training data, traditional NLP techniques performed better. ADVANCES IN KNOWLEDGE: Our benchmarks can help guide clinical NLP researchers in selecting machine-learning models according to their dataset characteristics.
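A minimal sketch of the learning-curve comparison described above, assuming hypothetical report text and labels: TF-IDF features with three traditional classifiers evaluated at increasing fractions of the training data. BioBERT fine-tuning is omitted for brevity, and this is not the study's code.

```python
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical annotated report impressions (placeholders for the 7,204 real reports).
reports = ["no evidence of appendicitis", "acute appendicitis with fat stranding"] * 200
labels = [0, 1] * 200
X_train, X_test, y_train, y_test = train_test_split(reports, labels, test_size=0.2, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosted_tree": GradientBoostingClassifier(random_state=0),
}
for frac in (0.025, 0.10, 0.25, 0.50, 0.75, 1.0):
    n = max(2, int(frac * len(X_train)))
    if len(set(y_train[:n])) < 2:  # skip degenerate subsets containing a single class
        continue
    vec = TfidfVectorizer(ngram_range=(1, 2))
    Xtr, Xte = vec.fit_transform(X_train[:n]), vec.transform(X_test)
    for name, model in models.items():
        model.fit(Xtr, y_train[:n])
        auc = roc_auc_score(y_test, model.predict_proba(Xte)[:, 1])
        print(f"{name} @ {frac:.1%} of training data: AUC = {auc:.2f}")
```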


Subject(s)
Natural Language Processing; Radiology; Humans; Tomography, X-Ray Computed/methods; Machine Learning; Magnetic Resonance Imaging
6.
J Clin Med; 12(4), 2023 Feb 04.
Article in English | MEDLINE | ID: mdl-36835785

ABSTRACT

(1) The use of a high-flow nasal cannula (HFNC) combined with frequent respiratory monitoring in patients with acute hypoxic respiratory failure due to COVID-19 has been shown to reduce intubation and mechanical ventilation. (2) This prospective, single-center, observational study included consecutive adult patients with COVID-19 pneumonia treated with a high-flow nasal cannula. Hemodynamic parameters, respiratory rate, fraction of inspired oxygen (FiO2), oxygen saturation (SpO2), and the ratio of SpO2/FiO2 to respiratory rate (ROX index) were recorded prior to treatment initiation and every 2 h for 24 h. A 6-month follow-up questionnaire was also conducted. (3) Over the study period, 153 of 187 patients were eligible for HFNC. Of these patients, 80% required intubation and 37% of the intubated patients died in hospital. Male sex (OR = 4.65; 95% CI [1.28; 20.6], p = 0.03) and higher BMI (OR = 2.63; 95% CI [1.14; 6.76], p = 0.03) were associated with an increased risk of new limitations at 6 months after hospital discharge. (4) 20% of the patients who received HFNC did not require intubation and were discharged alive from the hospital. Male sex and higher BMI were associated with poor long-term functional outcomes.
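For reference, the ROX index mentioned above is commonly computed as the SpO2/FiO2 ratio divided by the respiratory rate. The helper below is a generic sketch of that formula, not code from the study; the argument names are ours.

```python
def rox_index(spo2_percent: float, fio2_fraction: float, resp_rate: float) -> float:
    """ROX index = (SpO2 [%] / FiO2 [fraction]) / respiratory rate [breaths/min]."""
    return (spo2_percent / fio2_fraction) / resp_rate

# Example: SpO2 95% on FiO2 0.50 at 25 breaths/min.
print(round(rox_index(95, 0.50, 25), 1))  # -> 7.6
```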

7.
Arthritis Care Res (Hoboken); 75(3): 657-666, 2023 Mar.
Article in English | MEDLINE | ID: mdl-35313091

ABSTRACT

OBJECTIVE: COVID-19 patients with rheumatic disease have a higher risk of mechanical ventilation than the general population. The present study was undertaken to assess lung involvement using a validated deep learning algorithm that extracts a quantitative measure of radiographic lung disease severity. METHODS: We performed a comparative cohort study of rheumatic disease patients with COVID-19 and ≥1 chest radiograph within ±2 weeks of COVID-19 diagnosis and matched comparators. We used unadjusted and adjusted (for age, Charlson comorbidity index, and interstitial lung disease) quantile regression to compare the maximum pulmonary x-ray severity (PXS) score at the 10th to 90th percentiles between groups. We evaluated the association of severe PXS score (>9) with mechanical ventilation and death using Cox regression. RESULTS: We identified 70 patients with rheumatic disease and 463 general population comparators. Maximum PXS scores were similar in the rheumatic disease patients and comparators at the 10th to 60th percentiles but significantly higher among rheumatic disease patients at the 70th to 90th percentiles (90th percentile score of 10.2 versus 9.2; adjusted P = 0.03). Rheumatic disease patients were more likely to have a PXS score of >9 (20% versus 11%; P = 0.02), indicating severe pulmonary disease. Rheumatic disease patients with PXS scores >9 versus ≤9 had higher risk of mechanical ventilation (hazard ratio [HR] 24.1 [95% confidence interval (95% CI) 6.7, 86.9]) and death (HR 8.2 [95% CI 0.7, 90.4]). CONCLUSION: Rheumatic disease patients with COVID-19 had more severe radiographic lung involvement than comparators. Higher PXS scores were associated with mechanical ventilation and will be important for future studies leveraging big data to assess COVID-19 outcomes in rheumatic disease patients.
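The two modelling steps named above (quantile regression of PXS scores and Cox regression for mechanical ventilation) can be sketched as follows; the synthetic DataFrame and its column names are assumptions for illustration, not the study cohort.

```python
# Hedged sketch: group comparison of PXS at an upper percentile, then a Cox model
# for mechanical ventilation by severe (>9) vs non-severe PXS. Synthetic data only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({"rheumatic": rng.integers(0, 2, n), "age": rng.normal(62, 12, n)})
df["pxs"] = np.clip(4 + 1.5 * df["rheumatic"] + 0.03 * (df["age"] - 62) + rng.normal(0, 3, n), 0, 12)
df["severe_pxs"] = (df["pxs"] > 9).astype(int)
df["time_days"] = rng.exponential(20 / (1 + 2 * df["severe_pxs"]))
df["mech_vent"] = (rng.random(n) < 0.2 + 0.4 * df["severe_pxs"]).astype(int)

# Quantile regression: difference in PXS between groups at the 90th percentile, adjusted for age.
q90 = smf.quantreg("pxs ~ rheumatic + age", df).fit(q=0.9)
print(q90.params["rheumatic"])

# Cox model: hazard of mechanical ventilation for severe vs non-severe radiographs.
cph = CoxPHFitter()
cph.fit(df[["time_days", "mech_vent", "severe_pxs"]], duration_col="time_days", event_col="mech_vent")
cph.print_summary()
```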


Subject(s)
COVID-19; Deep Learning; Lung Injury; Rheumatic Diseases; Humans; Cohort Studies; SARS-CoV-2; COVID-19 Testing; Rheumatic Diseases/epidemiology
8.
BJR Open; 4(1): 20210062, 2022.
Article in English | MEDLINE | ID: mdl-36105420

ABSTRACT

Objective: To predict short-term outcomes in hospitalized COVID-19 patients using a model incorporating clinical variables with automated convolutional neural network (CNN) chest radiograph analysis. Methods: A retrospective single-center study was performed on patients consecutively admitted with COVID-19 between March 14 and April 21, 2020. Demographic, clinical, and laboratory data were collected, and automated CNN scoring of the admission chest radiograph was performed. The two outcomes of disease progression were intubation or death within 7 days and death within 14 days following admission. Multiple imputation was performed for missing predictor variables and, for each imputed data set, a penalized logistic regression model was constructed to identify predictors and their functional relationship to each outcome. Cross-validated areas under the receiver operating characteristic curve (AUC) were estimated to quantify the discriminative ability of each model. Results: 801 patients (median age 59 years; interquartile range 46-73 years; 469 men) were evaluated. 36 patients had died and 207 had been intubated at 7 days, and 65 had died at 14 days. Cross-validated AUC values for the predictive models were 0.82 (95% CI, 0.79-0.86) for death or intubation within 7 days and 0.82 (0.78-0.87) for death within 14 days. The automated CNN chest radiograph score was an important variable in predicting both outcomes. Conclusion: Automated CNN chest radiograph analysis, in combination with clinical variables, predicts short-term intubation and death in patients hospitalized for COVID-19 infection. Chest radiograph scoring of more severe disease was associated with a greater probability of adverse short-term outcome. Advances in knowledge: Model-based predictions of intubation and death in COVID-19 can be performed with high discriminative performance using admission clinical data and convolutional neural network-based scoring of chest radiograph severity.
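A minimal sketch of the modelling pipeline described above (imputation of missing predictors, a penalized logistic regression, and cross-validated AUC), using synthetic data; the feature set and the single-pass IterativeImputer are simplifications of the study's multiple-imputation approach.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401  (enables IterativeImputer)
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 4))  # hypothetical predictors, e.g. age, CRP, SpO2, CNN radiograph score
y = (X[:, 3] + rng.normal(scale=1.0, size=n) > 0.5).astype(int)  # hypothetical 7-day outcome
X[rng.random(X.shape) < 0.1] = np.nan  # introduce missingness in the predictors

model = make_pipeline(
    IterativeImputer(random_state=0),  # chained-equations imputation (single pass)
    StandardScaler(),
    LogisticRegression(penalty="l2", C=0.5, max_iter=1000),  # penalized logistic regression
)
aucs = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {aucs.mean():.2f} (+/- {aucs.std():.2f})")
```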

9.
Medicine (Baltimore); 101(29): e29587, 2022 Jul 22.
Article in English | MEDLINE | ID: mdl-35866818

ABSTRACT

To tune and test the generalizability of a deep learning-based model for assessment of COVID-19 lung disease severity on chest radiographs (CXRs) from different patient populations. A published convolutional Siamese neural network-based model previously trained on hospitalized patients with COVID-19 was tuned using 250 outpatient CXRs. This model produces a quantitative measure of COVID-19 lung disease severity, the pulmonary x-ray severity (PXS) score. The model was evaluated on CXRs from 4 test sets, including 3 from the United States (patients hospitalized at an academic medical center [N = 154], patients hospitalized at a community hospital [N = 113], and outpatients [N = 108]) and 1 from Brazil (patients at an academic medical center emergency department [N = 303]). Radiologists from both countries independently assigned reference standard CXR severity scores, which were correlated with the PXS scores as a measure of model performance (Pearson R). The Uniform Manifold Approximation and Projection (UMAP) technique was used to visualize the neural network results. Tuning the deep learning model with outpatient data showed high model performance on the 2 United States hospitalized patient datasets (R = 0.88 and R = 0.90, compared to baseline R = 0.86). Model performance was similar, though slightly lower, when tested on the United States outpatient and Brazil emergency department datasets (R = 0.86 and R = 0.85, respectively). UMAP showed that the model learned disease severity information that generalized across test sets. A deep learning model that extracts a COVID-19 severity score from CXRs showed generalizable performance across multiple populations from 2 continents, including outpatients and hospitalized patients.
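The evaluation steps described above can be sketched as follows with placeholder arrays standing in for model outputs: Pearson correlation of PXS scores against radiologist reference scores for a test set, and a UMAP projection of network features (requires the umap-learn package). This is an illustrative sketch, not the study pipeline.

```python
import numpy as np
from scipy.stats import pearsonr
import umap  # from the umap-learn package

rng = np.random.default_rng(0)
pxs_scores = rng.uniform(0, 12, size=100)                     # hypothetical model outputs
reference_scores = pxs_scores + rng.normal(0, 1.5, size=100)  # hypothetical radiologist scores

r, p = pearsonr(pxs_scores, reference_scores)
print(f"Pearson R = {r:.2f} (p = {p:.1e})")

features = rng.normal(size=(100, 128))  # hypothetical penultimate-layer features
embedding = umap.UMAP(n_neighbors=15, random_state=0).fit_transform(features)
print(embedding.shape)  # (100, 2) coordinates for a severity-colored scatter plot
```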


Subject(s)
COVID-19; Deep Learning; COVID-19/diagnostic imaging; Humans; Lung; Radiography, Thoracic/methods; Radiologists
10.
Acad Radiol; 29(12): 1899-1902, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35606258

ABSTRACT

In 2019, the journal Radiology: Artificial Intelligence introduced its Trainee Editorial Board (TEB) to offer formal training in medical journalism to medical students, radiology residents and fellows, and research-career trainees. The TEB aims to build a community of radiologists, radiation oncologists, medical physicists, and researchers in fields related to artificial intelligence (AI) in radiology. The program offers opportunities to learn about the editorial process, improve skills in writing and reviewing, advance the field of AI in radiology, and help translate and disseminate AI research. To meet these goals, TEB members contribute actively to the editorial process from peer review to publication, participate in educational webinars, and create and curate content in a variety of forms. Almost all of this engagement has been conducted online. In this article, we share initial experiences and identify future directions and opportunities.


Subject(s)
Radiology; Students, Medical; Humans; Artificial Intelligence; Radiology/education; Radiologists; Radiography
11.
J Am Coll Radiol; 19(7): 891-900, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35483438

ABSTRACT

PURPOSE: Deploying external artificial intelligence (AI) models locally can be logistically challenging. We aimed to use the ACR AI-LAB software platform for local testing of a chest radiograph (CXR) algorithm for COVID-19 lung disease severity assessment. METHODS: An externally developed deep learning model for COVID-19 radiographic lung disease severity assessment was loaded into the AI-LAB platform at an independent academic medical center, separate from the institution in which the model was trained. The data set consisted of CXR images from 141 patients with reverse transcription-polymerase chain reaction-confirmed COVID-19, which were routed to AI-LAB for model inference. The model calculated a pulmonary x-ray severity (PXS) score for each image. This score was correlated with the modified Radiographic Assessment of Lung Edema score, a radiologist-based assessment of severity averaged across independent interpretations by three radiologists. The associations between the PXS score and patient admission and intubation or death were assessed. RESULTS: The PXS score deployed in AI-LAB correlated with the radiologist-determined modified Radiographic Assessment of Lung Edema score (r = 0.80). The PXS score was significantly higher in patients who were admitted (4.0 versus 1.3, P < .001) or intubated or died within 3 days (5.5 versus 3.3, P = .001). CONCLUSIONS: AI-LAB was successfully used to test an external COVID-19 CXR AI algorithm on local data with relative ease, demonstrating generalizability of the PXS score model. For AI models to scale and be clinically useful, software tools that facilitate the local testing process, like the freely available AI-LAB, will be important for crossing the AI implementation gap in health care systems.
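A hedged sketch of the local validation described above, with placeholder arrays standing in for AI-LAB outputs and clinical labels: correlate PXS with the radiologist consensus score and compare PXS between admitted and non-admitted patients. A Mann-Whitney U test is shown as one reasonable two-group test; the abstract does not specify which test was used.

```python
import numpy as np
from scipy.stats import mannwhitneyu, pearsonr

rng = np.random.default_rng(1)
pxs = rng.uniform(0, 10, size=141)                  # hypothetical model scores
mrale = pxs * 2 + rng.normal(0, 2, size=141)        # hypothetical radiologist consensus scores
admitted = (pxs + rng.normal(0, 2, size=141)) > 4   # hypothetical admission labels

r, _ = pearsonr(pxs, mrale)
u, p = mannwhitneyu(pxs[admitted], pxs[~admitted])
print(f"PXS vs consensus score: r = {r:.2f}; admitted vs not admitted: p = {p:.3g}")
```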


Subject(s)
COVID-19; Deep Learning; Artificial Intelligence; COVID-19/diagnostic imaging; Edema; Humans; Tomography, X-Ray Computed/methods
12.
Acad Radiol; 29(1): 119-128, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34561163

ABSTRACT

The Radiology Research Alliance (RRA) of the Association of University Radiologists (AUR) convenes Task Forces to address current topics in radiology. In this article, the AUR-RRA Task Force on Academic-Industry Partnerships for Artificial Intelligence considered issues of importance to academic radiology departments contemplating industry partnerships in artificial intelligence (AI) development, testing, and evaluation. Our goal was to create a framework encompassing the domains of clinical, technical, regulatory, legal, and financial considerations that impact the arrangement and success of such partnerships.


Subject(s)
Artificial Intelligence; Radiology; Humans; Radiography; Radiologists; Universities
13.
AJR Am J Roentgenol; 219(1): 15-23, 2022 Jul.
Article in English | MEDLINE | ID: mdl-34612681

ABSTRACT

Hundreds of imaging-based artificial intelligence (AI) models have been developed in response to the COVID-19 pandemic. AI systems that incorporate imaging have shown promise in primary detection, severity grading, and prognostication of outcomes in COVID-19, and have enabled integration of imaging with a broad range of additional clinical and epidemiologic data. However, systematic reviews of AI models applied to COVID-19 medical imaging have highlighted problems in the field, including methodologic issues and problems in real-world deployment. Clinical use of such models should be informed by both the promise and potential pitfalls of implementation. How does a practicing radiologist make sense of this complex topic, and what factors should be considered in the implementation of AI tools for imaging of COVID-19? This critical review aims to help the radiologist understand the nuances that impact the clinical deployment of AI for imaging of COVID-19. We review imaging use cases for AI models in COVID-19 (e.g., diagnosis, severity assessment, and prognostication) and explore considerations for AI model development and testing, deployment infrastructure, clinical user interfaces, quality control, and institutional review board and regulatory approvals, with a practical focus on what a radiologist should consider when implementing an AI tool for COVID-19.


Subject(s)
COVID-19; Radiology; Artificial Intelligence; Humans; Pandemics; Radiography
14.
Acad Radiol; 29(4): 479-487, 2022 Apr.
Article in English | MEDLINE | ID: mdl-33583713

ABSTRACT

RATIONALE AND OBJECTIVES: To train and apply natural language processing (NLP) algorithms for automated radiology-arthroscopy correlation of meniscal tears. MATERIALS AND METHODS: In this retrospective single-institution study, we trained supervised machine learning models (logistic regression, support vector machine, and random forest) to detect medial or lateral meniscus tears in free-text MRI reports. We trained and evaluated model performance with cross-validation using 3593 manually annotated knee MRI reports. To assess radiology-arthroscopy correlation, we then randomly partitioned this dataset 80:20 for training and testing, where 108 test set MRIs were followed by knee arthroscopy within 1 year. These free-text arthroscopy reports were also manually annotated. The NLP algorithms trained on the knee MRI training dataset were then evaluated on the MRI and arthroscopy report test datasets. We assessed radiology-arthroscopy agreement using the ensembled NLP-extracted findings versus manually annotated findings. RESULTS: The NLP models showed high cross-validation performance for meniscal tear detection on knee MRI reports (medial meniscus F1 scores 0.93-0.94, lateral meniscus F1 scores 0.86-0.88). When these algorithms were evaluated on arthroscopy reports, despite never having been trained on arthroscopy reports, performance was similar, and higher with model ensembling (medial meniscus F1 score 0.97, lateral meniscus F1 score 0.99). However, ensembling did not improve performance on knee MRI reports. In the radiology-arthroscopy test set, the ensembled NLP models detected mismatches between MRI and arthroscopy reports with 79% sensitivity and 87% specificity. CONCLUSION: Radiology-arthroscopy correlation can be automated for knee meniscal tears using NLP algorithms, which shows promise for education and quality improvement.
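The ensembling and mismatch-flagging steps can be illustrated with a small sketch; the prediction and annotation arrays below are hypothetical, and majority voting is shown as one simple ensembling scheme, not necessarily the one used in the study.

```python
import numpy as np
from sklearn.metrics import f1_score

# Hypothetical per-report predictions (1 = tear) from three trained models.
pred_logreg = np.array([1, 0, 1, 1, 0, 0, 1, 0])
pred_svm    = np.array([1, 0, 1, 0, 0, 0, 1, 1])
pred_rf     = np.array([1, 0, 0, 1, 0, 1, 1, 0])

# Majority-vote ensemble across the three models.
ensemble = ((pred_logreg + pred_svm + pred_rf) >= 2).astype(int)

mri_truth = np.array([1, 0, 1, 1, 0, 0, 1, 0])          # hypothetical annotated MRI findings
arthroscopy_truth = np.array([1, 0, 0, 1, 0, 0, 1, 1])  # hypothetical annotated arthroscopy findings

print("ensemble F1 vs MRI annotations:", f1_score(mri_truth, ensemble))

# Radiology-arthroscopy mismatch: the extracted finding differs between the two reports.
mismatch = ensemble != arthroscopy_truth
print("flagged mismatches at report indices:", np.where(mismatch)[0])
```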


Subject(s)
Radiology; Tibial Meniscus Injuries; Arthroscopy; Humans; Magnetic Resonance Imaging; Natural Language Processing; Retrospective Studies; Sensitivity and Specificity; Support Vector Machine; Tibial Meniscus Injuries/diagnostic imaging
15.
Skeletal Radiol; 51(2): 245-256, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34013447

ABSTRACT

Developments in artificial intelligence have the potential to improve the care of patients with musculoskeletal tumors. We performed a systematic review of the published scientific literature to identify the current state of the art of artificial intelligence applied to musculoskeletal oncology, including both primary and metastatic tumors, and across the radiology, nuclear medicine, pathology, clinical research, and molecular biology literature. Through this search, we identified 252 primary research articles, of which 58 used deep learning and 194 used other machine learning techniques. Articles involving deep learning have mostly involved bone scintigraphy, histopathology, and radiologic imaging. Articles involving other machine learning techniques have mostly involved transcriptomic analyses, radiomics, and clinical outcome prediction models using medical records. These articles predominantly present proof-of-concept work, other than the automated bone scan index for bone metastasis quantification, which has translated to clinical workflows in some regions. We systematically review and discuss this literature, highlight opportunities for multidisciplinary collaboration, and identify potentially clinically useful topics with a relative paucity of research attention. Musculoskeletal oncology is an inherently multidisciplinary field, and future research will need to integrate and synthesize noisy siloed data from across clinical, imaging, and molecular datasets. Building the data infrastructure for collaboration will help to accelerate progress towards making artificial intelligence truly useful in musculoskeletal oncology.


Subject(s)
Musculoskeletal System; Radiology; Artificial Intelligence; Humans; Machine Learning; Medical Oncology
16.
Radiol Artif Intell; 3(6): e200267, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34870212

ABSTRACT

PURPOSE: To evaluate the trustworthiness of saliency maps for abnormality localization in medical imaging. MATERIALS AND METHODS: Using two large publicly available radiology datasets (the Society for Imaging Informatics in Medicine-American College of Radiology Pneumothorax Segmentation dataset and the Radiological Society of North America Pneumonia Detection Challenge dataset), the performance of eight commonly used saliency map techniques was quantified in regard to (a) localization utility (segmentation and detection), (b) sensitivity to model weight randomization, (c) repeatability, and (d) reproducibility. Their performance versus baseline methods and localization network architectures was compared, using area under the precision-recall curve (AUPRC) and structural similarity index measure (SSIM) as metrics. RESULTS: All eight saliency map techniques failed at least one of the criteria and were inferior in performance compared with localization networks. For pneumothorax segmentation, the AUPRC ranged from 0.024 to 0.224, while a U-Net achieved a significantly superior AUPRC of 0.404 (P < .005). For pneumonia detection, the AUPRC ranged from 0.160 to 0.519, while a RetinaNet achieved a significantly superior AUPRC of 0.596 (P < .005). Five and two of the eight saliency methods failed the model randomization test on the segmentation and detection datasets, respectively, suggesting that these methods are not sensitive to changes in model parameters. The repeatability and reproducibility of the majority of the saliency methods were worse than those of the localization networks for both the segmentation and detection datasets. CONCLUSION: The use of saliency maps in the high-risk domain of medical imaging warrants additional scrutiny; we recommend that detection or segmentation models be used if localization is the desired output of the network. Keywords: Technology Assessment, Technical Aspects, Feature Detection, Convolutional Neural Network (CNN)
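Two of the evaluation criteria above can be sketched on synthetic arrays: localization utility as AUPRC of a saliency map against a ground-truth mask, and repeatability as SSIM between two saliency maps. The lesion geometry and noise level below are arbitrary assumptions, not study data.

```python
import numpy as np
from skimage.metrics import structural_similarity
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)
mask = np.zeros((64, 64), dtype=int)
mask[20:40, 25:45] = 1                        # hypothetical pneumothorax segmentation mask
saliency = rng.random((64, 64)) * 0.3
saliency[22:38, 27:43] += 0.7                 # saliency roughly covering the lesion

# Localization utility: AUPRC of the pixelwise saliency values against the mask.
auprc = average_precision_score(mask.ravel(), saliency.ravel())

# Repeatability: SSIM between the saliency map and a second, slightly perturbed run.
saliency_repeat = saliency + rng.normal(0, 0.05, size=saliency.shape)
ssim = structural_similarity(saliency, saliency_repeat,
                             data_range=saliency_repeat.max() - saliency_repeat.min())
print(f"AUPRC = {auprc:.2f}, SSIM (repeatability) = {ssim:.2f}")
```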

18.
Eur J Radiol; 142: 109865, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34298389

ABSTRACT

PURPOSE: MRI is a powerful tool for optic nerve assessment, but image quality can be degraded by artifacts related to ocular motion. The purpose of this investigation was to evaluate the effect of undergoing MRI with eyes open versus closed on the degree of motion degradation affecting the optic nerves. METHOD: Patients undergoing 3 Tesla orbital MRI were randomized to undergo the coronal STIR sequence with eyes open and focused on a standardized fixation point, blinking as needed, or with eyes closed. The sequence was then performed again with the other instruction set. Two neuroradiologists rated the intraorbital optic nerves for motion artifact on a 5-point scale (higher numbers reflecting greater motion artifact) in 2 locations of each nerve. Differences were evaluated by the clustered Wilcoxon signed rank test. RESULTS: Seventy-seven orbits were included. Interrater reliability was high (weighted kappa = 0.78). The anterior intraorbital optic nerves were rated with less motion artifact when eyes were open and focused during acquisition than when closed (p = 0.006), but this was not the case for the posterior intraorbital optic nerve (p = 0.69). For example, at the anterior intraorbital optic nerve, motion artifact of mean grade better than 2 was seen in 60% of eyes-open vs. 32% of eyes-closed acquisitions, while mean grade 4 or worse was seen in 4% of eyes-open vs. 12% of eyes-closed acquisitions. CONCLUSION: Undergoing orbital MRI with eyes open and focused rather than closed reduces motion artifact at the anterior intraorbital segment of the optic nerve.
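The two analysis ingredients named above can be sketched with made-up ratings: a weighted kappa for interrater reliability (quadratic weights shown as one common choice) and a paired Wilcoxon signed-rank test for eyes-open versus eyes-closed grades. The study used a clustered Wilcoxon test to account for two locations per nerve; plain scipy.stats.wilcoxon, shown here, ignores that clustering.

```python
import numpy as np
from scipy.stats import wilcoxon
from sklearn.metrics import cohen_kappa_score

# Hypothetical 5-point motion-artifact grades from two raters on the same nerves.
rater1 = np.array([1, 2, 2, 3, 4, 1, 2, 3, 5, 2])
rater2 = np.array([1, 2, 3, 3, 4, 2, 2, 3, 4, 2])
kappa = cohen_kappa_score(rater1, rater2, weights="quadratic")

# Hypothetical paired grades for the same nerve segments, eyes open vs eyes closed.
grades_open   = np.array([1, 2, 1, 2, 3, 1, 2, 2, 1, 3])
grades_closed = np.array([2, 2, 2, 3, 3, 2, 2, 3, 2, 4])
stat, p = wilcoxon(grades_open, grades_closed)
print(f"weighted kappa = {kappa:.2f}; Wilcoxon signed-rank p = {p:.3f}")
```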


Subject(s)
Magnetic Resonance Imaging; Optic Nerve; Artifacts; Humans; Motion; Optic Nerve/diagnostic imaging; Reproducibility of Results
19.
Am J Emerg Med; 49: 52-57, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34062318

ABSTRACT

PURPOSE: During the COVID-19 pandemic, emergency department (ED) volumes have fluctuated. We hypothesized that natural language processing (NLP) models could quantify changes in the detection of acute abdominal pathology (acute appendicitis (AA), acute diverticulitis (AD), or bowel obstruction (BO)) on CT reports. METHODS: This retrospective study included 22,182 radiology reports from CT abdomen/pelvis studies performed at an urban ED between January 1, 2018, and August 14, 2020. Using a subset of 2448 manually annotated reports, we trained random forest NLP models to classify the presence of AA, AD, and BO in report impressions. Performance was assessed using 5-fold cross-validation. The NLP classifiers were then applied to all reports. RESULTS: The NLP classifiers for AA, AD, and BO demonstrated cross-validation classification accuracies between 0.97 and 0.99 and F1-scores between 0.86 and 0.91. When applied to all CT reports, the estimated numbers of AA, AD, and BO cases decreased 43-57% in April 2020 (the first regional peak of COVID-19 cases) compared with 2018-2019. However, the number of abdominal pathologies detected rebounded in May-July 2020, with increases above historical averages for AD. The proportions of CT studies with these pathologies did not significantly increase during the pandemic period. CONCLUSION: Dramatic decreases in the numbers of acute abdominal pathologies detected by ED CT studies were observed early in the COVID-19 pandemic, though these numbers rapidly rebounded. The proportions of CT cases with these pathologies did not increase, which suggests patients deferred care during the first pandemic peak. NLP can help automatically track findings in ED radiology reporting.
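A sketch under assumptions (toy reports, hypothetical column names) of the two steps described above: a random forest classifier over report impressions, followed by monthly counts of positive predictions to track detection over time.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Train a text classifier on hypothetical annotated impressions (1 = appendicitis present).
train_text = ["acute appendicitis", "no acute process", "findings of appendicitis", "normal appendix"] * 50
train_label = [1, 0, 1, 0] * 50
clf = make_pipeline(TfidfVectorizer(), RandomForestClassifier(n_estimators=200, random_state=0))
clf.fit(train_text, train_label)

# Apply the classifier to all reports and count positive cases per month.
reports = pd.DataFrame({
    "study_date": pd.to_datetime(["2020-03-15", "2020-04-02", "2020-04-20", "2020-05-11"]),
    "impression": ["no acute process", "acute appendicitis", "normal appendix", "findings of appendicitis"],
})
reports["appendicitis"] = clf.predict(reports["impression"])
monthly_counts = reports.set_index("study_date")["appendicitis"].resample("MS").sum()
print(monthly_counts)
```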


Subject(s)
Appendicitis/diagnostic imaging; Diverticulitis/diagnostic imaging; Emergency Service, Hospital; Intestinal Obstruction/diagnostic imaging; Tomography, X-Ray Computed/statistics & numerical data; Abdomen/diagnostic imaging; COVID-19/epidemiology; Humans; Massachusetts/epidemiology; Natural Language Processing; Retrospective Studies; SARS-CoV-2; Utilization Review
20.
Front Neurol; 12: 642912, 2021.
Article in English | MEDLINE | ID: mdl-33897598

ABSTRACT

Objectives: Patients with comorbidities are at increased risk for poor outcomes in COVID-19, yet data on patients with prior neurological disease remain limited. Our objective was to determine the odds of critical illness and the duration of mechanical ventilation in patients with prior cerebrovascular disease and COVID-19. Methods: An observational study of 1,128 consecutive adult patients admitted to an academic center in Boston, Massachusetts, and diagnosed with laboratory-confirmed COVID-19. We tested the association between prior cerebrovascular disease and critical illness, defined as mechanical ventilation (MV) or death by day 28, using logistic regression with inverse probability weighting of the propensity score. Among intubated patients, we estimated the cumulative incidence of successful extubation without death over 45 days using competing risk analysis. Results: Of the 1,128 adults with COVID-19, 350 (36%) were critically ill by day 28. The median age of patients was 59 years (SD: 18 years) and 640 (57%) were men. As of June 2, 2020, 127 (11%) patients had died. A total of 177 patients (16%) had prior cerebrovascular disease. Prior cerebrovascular disease was significantly associated with critical illness (OR = 1.54, 95% CI = 1.14-2.07), a lower rate of successful extubation (cause-specific HR = 0.57, 95% CI = 0.33-0.98), and an increased duration of intubation (restricted mean time difference = 4.02 days, 95% CI = 0.34-10.92) compared to patients without cerebrovascular disease. Interpretation: Prior cerebrovascular disease adversely affects COVID-19 outcomes in hospitalized patients. Further study is required to determine whether this subpopulation requires closer monitoring for disease progression during COVID-19.
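A minimal sketch of inverse-probability-of-treatment weighting as described above, on synthetic data with hypothetical variable names: estimate a propensity score for prior cerebrovascular disease, form stabilized weights, and fit a weighted logistic regression for critical illness. The competing-risk extubation analysis is omitted, and freq_weights is used here simply as a convenient way to pass the weights; this is not the study's code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({"age": rng.normal(60, 15, n), "male": rng.integers(0, 2, n)})
df["cvd"] = (0.03 * (df["age"] - 60) + rng.normal(0, 1, n) > 1).astype(int)       # prior cerebrovascular disease
df["critical"] = (0.5 * df["cvd"] + 0.02 * (df["age"] - 60) + rng.normal(0, 1, n) > 0.8).astype(int)

# Propensity score: P(prior cerebrovascular disease | covariates).
ps_model = LogisticRegression(max_iter=1000).fit(df[["age", "male"]], df["cvd"])
ps = ps_model.predict_proba(df[["age", "male"]])[:, 1]

# Stabilized inverse probability weights.
p_cvd = df["cvd"].mean()
weights = np.where(df["cvd"] == 1, p_cvd / ps, (1 - p_cvd) / (1 - ps))

# Weighted logistic regression of critical illness on prior cerebrovascular disease.
X = sm.add_constant(df["cvd"])
fit = sm.GLM(df["critical"], X, family=sm.families.Binomial(), freq_weights=weights).fit()
print(np.exp(fit.params["cvd"]))  # weighted odds ratio for critical illness
```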
