Results 1 - 20 of 46
1.
Genome Biol ; 25(1): 202, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39090688

ABSTRACT

BACKGROUND: A number of deep learning models have been developed to predict epigenetic features such as chromatin accessibility from DNA sequence. Model evaluations commonly report performance genome-wide; however, cis-regulatory elements (CREs), which play critical roles in gene regulation, make up only a small fraction of the genome. Furthermore, cell type-specific CREs contain a large proportion of complex disease heritability. RESULTS: We evaluate genomic deep learning models in chromatin accessibility regions with varying degrees of cell type specificity. We assess two modeling directions in the field: general purpose models trained across thousands of outputs (cell types and epigenetic marks) and models tailored to specific tissues and tasks. We find that the accuracy of genomic deep learning models, including two state-of-the-art general purpose models (Enformer and Sei), varies across the genome and is reduced in cell type-specific accessible regions. Using accessibility models trained on cell types from specific tissues, we find that increasing model capacity to learn cell type-specific regulatory syntax, through single-task learning or high-capacity multi-task models, can improve performance in cell type-specific accessible regions. We also observe that improving reference sequence predictions does not consistently improve variant effect predictions, indicating that novel strategies are needed to improve performance on variants. CONCLUSIONS: Our results provide a new perspective on the performance of genomic deep learning models, showing that performance varies across the genome and is particularly reduced in cell type-specific accessible regions. We also identify strategies to maximize performance in cell type-specific accessible regions.
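
As a rough illustration of the stratified evaluation the abstract describes, the sketch below scores predictions separately within bins of cell-type specificity rather than reporting a single genome-wide number; the array names, the specificity measure, and the bin boundaries are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.stats import pearsonr

# Illustrative inputs: one value per accessible region.
# predicted/observed: model output and measured accessibility signal.
# n_cell_types_shared: in how many cell types each region is accessible
#   (1 = highly cell type-specific, large = broadly shared).
rng = np.random.default_rng(0)
predicted = rng.normal(size=5000)
observed = predicted * 0.6 + rng.normal(size=5000)
n_cell_types_shared = rng.integers(1, 50, size=5000)

# Stratify regions by cell-type specificity and score each stratum separately.
bins = {"cell type-specific (1-3)": (1, 3),
        "intermediate (4-20)": (4, 20),
        "shared (21+)": (21, int(n_cell_types_shared.max()))}

for label, (lo, hi) in bins.items():
    mask = (n_cell_types_shared >= lo) & (n_cell_types_shared <= hi)
    r, _ = pearsonr(predicted[mask], observed[mask])
    print(f"{label:28s} n={mask.sum():5d}  Pearson r={r:.3f}")
```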


Subject(s)
Chromatin , Deep Learning , Genomics , Humans , Chromatin/genetics , Genomics/methods , Regulatory Sequences, Nucleic Acid , Organ Specificity/genetics , Epigenesis, Genetic , Models, Genetic
2.
Article in English | MEDLINE | ID: mdl-39115834

ABSTRACT

Importance: Cannabis is the most commonly used illicit substance worldwide. Whether cannabis use is associated with head and neck cancer (HNC) is unclear. Objective: To assess the clinical association between cannabis use and HNC. Design, Setting, and Participants: This large multicenter cohort study used clinical records from a database that included 20 years of data (through April 2024) from 64 health care organizations. The database was searched for medical records of US adults with and without cannabis-related disorder who had recorded outpatient hospital clinic visits and no prior history of HNC. Propensity score matching was performed for demographic characteristics, alcohol-related disorders, and tobacco use. Subsequently, relative risks (RRs) were calculated to explore risk of HNC, including HNC subsites. This analysis was repeated among those younger than 60 years and 60 years or older. Exposure: Cannabis-related disorder. Main Outcomes and Measures: Diagnosis of HNC and any HNC subsite. Results: The cannabis-related disorder cohort included 116 076 individuals (51 646 women [44.5%]) with a mean (SD) age of 46.4 (16.8) years. The non-cannabis-related disorder cohort included 3 985 286 individuals (2 173 684 women [54.5%]) with a mean (SD) age of 60.8 (20.6) years. The rate of new HNC diagnosis in all sites was higher in the cannabis-related disorder cohort. After matching (n = 115 865 per group), patients with cannabis-related disorder had a higher risk of any HNC (RR, 3.49; 95% CI, 2.78-4.39) than those without cannabis-related disorder. A site-specific analysis yielded that those with cannabis-related disorder had a higher risk of oral (RR, 2.51; 95% CI, 1.81-3.47), oropharyngeal (RR, 4.90; 95% CI, 2.99-8.02), and laryngeal (RR, 8.39; 95% CI, 4.72-14.90) cancer. Results were consistent when stratifying by older and younger age groups. Conclusions and Relevance: This cohort study highlights an association between cannabis-related disorder and the development of HNC in adult patients. Given the limitations of the database, future research should examine the mechanism of this association and analyze dose response with strong controls to further support evidence of cannabis use as a risk factor for HNCs.
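
For readers unfamiliar with the statistic, here is a minimal sketch of computing a relative risk with a 95% CI from matched-cohort counts; the counts are placeholders rather than the study's data, and the log-scale Wald interval shown is one standard approach, not necessarily the software the authors used.

```python
import math

def relative_risk(a, n_exposed, c, n_unexposed, z=1.96):
    """Relative risk of an outcome in exposed vs unexposed cohorts,
    with a Wald-type confidence interval on the log scale.
    a, c: outcome counts; n_exposed, n_unexposed: cohort sizes."""
    risk_exp = a / n_exposed
    risk_unexp = c / n_unexposed
    rr = risk_exp / risk_unexp
    se_log_rr = math.sqrt(1/a - 1/n_exposed + 1/c - 1/n_unexposed)
    lo = math.exp(math.log(rr) - z * se_log_rr)
    hi = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lo, hi

# Placeholder counts for two propensity-matched cohorts of equal size.
print(relative_risk(a=350, n_exposed=115865, c=100, n_unexposed=115865))
```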

3.
Eur Radiol ; 2024 Jul 24.
Article in English | MEDLINE | ID: mdl-39046499

ABSTRACT

OBJECTIVES: To perform a multi-reader comparison of multiparametric dual-energy computed tomography (DECT) images reconstructed with deep-learning image reconstruction (DLIR) and standard-of-care adaptive statistical iterative reconstruction-V (ASIR-V). METHODS: This retrospective study included 100 patients undergoing portal venous phase abdominal CT on a rapid kVp switching DECT scanner. Six reconstructed DECT sets (ASIR-V and DLIR, each at three strengths) were generated. Each DECT set included 65 keV monoenergetic, iodine, and virtual unenhanced (VUE) images. Using a Likert scale, three radiologists performed qualitative assessments for image noise, contrast, small structure visibility, sharpness, artifact, and image preference. Quantitative assessment was performed by measuring attenuation, image noise, and contrast-to-noise ratios (CNR). For the qualitative analysis, Gwet's AC2 estimates were used to assess agreement. RESULTS: DECT images reconstructed with DLIR yielded better qualitative scores than ASIR-V images except for artifacts, where both groups were comparable. DLIR-H (high-strength DLIR) images were rated higher than other reconstructions on all parameters (p-value < 0.05). On quantitative analysis, there was no significant difference in the attenuation values between the ASIR-V and DLIR groups. DLIR images had higher CNR values for the liver and portal vein, and lower image noise, compared to ASIR-V images (p-value < 0.05). The subgroup analysis of patients with large body habitus (weight ≥ 90 kg) showed similar results to the study population. Inter-reader agreement was good to very good overall. CONCLUSION: Multiparametric post-processed DECT datasets reconstructed with DLIR were preferred over ASIR-V images, with DLIR-H yielding the highest image quality scores. CLINICAL RELEVANCE STATEMENT: Deep-learning image reconstruction in dual-energy CT demonstrated significant benefits in qualitative and quantitative image metrics compared to adaptive statistical iterative reconstruction-V. KEY POINTS: Dual-energy CT (DECT) images reconstructed using deep-learning image reconstruction (DLIR) showed superior qualitative scores compared to adaptive statistical iterative reconstruction-V (ASIR-V) reconstructed images, except for artifacts, where both reconstructions were rated comparable. While there was no significant difference in attenuation values between the ASIR-V and DLIR groups, DLIR images showed higher contrast-to-noise ratios (CNR) for the liver and portal vein, and lower image noise (p-value < 0.05). Subgroup analysis of patients with large body habitus (weight ≥ 90 kg) yielded similar findings to the overall study population.
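
One common way to compute a contrast-to-noise ratio of the sort reported here is sketched below; the HU values and the use of the background standard deviation as the noise term are illustrative assumptions, since CNR definitions vary between studies.

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio between a structure ROI and adjacent background,
    using the background standard deviation as the noise estimate."""
    roi = np.asarray(roi, dtype=float)
    background = np.asarray(background, dtype=float)
    return abs(roi.mean() - background.mean()) / background.std(ddof=1)

# Placeholder attenuation samples (HU) from a liver ROI and adjacent background.
rng = np.random.default_rng(1)
liver_hu = rng.normal(110, 12, size=400)
background_hu = rng.normal(45, 10, size=400)
print(f"CNR = {cnr(liver_hu, background_hu):.1f}")
```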

4.
bioRxiv ; 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-39026761

ABSTRACT

Background: A number of deep learning models have been developed to predict epigenetic features such as chromatin accessibility from DNA sequence. Model evaluations commonly report performance genome-wide; however, cis regulatory elements (CREs), which play critical roles in gene regulation, make up only a small fraction of the genome. Furthermore, cell type specific CREs contain a large proportion of complex disease heritability. Results: We evaluate genomic deep learning models in chromatin accessibility regions with varying degrees of cell type specificity. We assess two modeling directions in the field: general purpose models trained across thousands of outputs (cell types and epigenetic marks), and models tailored to specific tissues and tasks. We find that the accuracy of genomic deep learning models, including two state-of-the-art general purpose models - Enformer and Sei - varies across the genome and is reduced in cell type specific accessible regions. Using accessibility models trained on cell types from specific tissues, we find that increasing model capacity to learn cell type specific regulatory syntax - through single-task learning or high capacity multi-task models - can improve performance in cell type specific accessible regions. We also observe that improving reference sequence predictions does not consistently improve variant effect predictions, indicating that novel strategies are needed to improve performance on variants. Conclusions: Our results provide a new perspective on the performance of genomic deep learning models, showing that performance varies across the genome and is particularly reduced in cell type specific accessible regions. We also identify strategies to maximize performance in cell type specific accessible regions.

5.
J Neural Eng ; 21(4)2024 Jul 16.
Article in English | MEDLINE | ID: mdl-38959877

ABSTRACT

Objective. The amygdala is traditionally known for its involvement in emotional processing, but its involvement in motor control remains relatively unexplored, with sparse investigations into the neural mechanisms governing amygdaloid motor movement and inhibition. This study aimed to characterize the amygdaloid beta-band (13-30 Hz) power between 'Go' and 'No-go' trials of an arm-reaching task. Approach. Ten participants with drug-resistant epilepsy implanted with stereoelectroencephalographic (SEEG) electrodes in the amygdala were enrolled in this study. SEEG data was recorded throughout discrete phases of a direct reach Go/No-go task, during which participants reached a touchscreen monitor or withheld movement based on a colored cue. Multitaper power analysis along with Wilcoxon signed-rank and Yates-corrected Z tests were used to assess significant modulations of beta power between the Response and fixation (baseline) phases in the 'Go' and 'No-go' conditions. Main results. In the 'Go' condition, nine out of the ten participants showed a significant decrease in relative beta-band power during the Response phase (p ⩽ 0.0499). In the 'No-go' condition, eight out of the ten participants presented a statistically significant increase in relative beta-band power during the Response phase (p ⩽ 0.0494). Four out of the eight participants with electrodes in the contralateral hemisphere and seven out of the eight participants with electrodes in the ipsilateral hemisphere presented significant modulation in beta-band power in both the 'Go' and 'No-go' conditions. At the group level, no significant differences were found between the contralateral and ipsilateral sides or between genders. Significance. This study reports beta-band power modulation in the human amygdala during voluntary movement in the setting of motor execution and inhibition. This finding supplements prior research in various brain regions associating beta-band power with motor control. The distinct beta-power modulation observed between these response conditions suggests involvement of amygdaloid oscillations in differentiating between motor inhibition and execution.
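
A simplified sketch of the phase comparison described above, assuming per-trial SEEG segments have already been extracted; it substitutes Welch's method (SciPy) for the multitaper estimator used in the study, and the simulated signals and sampling rate are placeholders.

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import wilcoxon

fs = 1000  # Hz, assumed sampling rate

def beta_power(segment, fs=fs, band=(13, 30)):
    """Average spectral power in the beta band for one trial segment."""
    freqs, psd = welch(segment, fs=fs, nperseg=min(len(segment), 512))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

# Simulated per-trial segments standing in for fixation and Response epochs.
rng = np.random.default_rng(2)
n_trials = 40
fixation = rng.normal(size=(n_trials, 1000))
response = rng.normal(size=(n_trials, 1000)) * 0.8  # pretend beta desynchronization

fix_power = np.array([beta_power(t) for t in fixation])
resp_power = np.array([beta_power(t) for t in response])

# Paired comparison of beta power between phases, as in a 'Go' condition.
stat, p = wilcoxon(resp_power, fix_power)
print(f"median relative change = {np.median(resp_power / fix_power):.2f}, p = {p:.4f}")
```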


Subject(s)
Amygdala , Arm , Beta Rhythm , Psychomotor Performance , Humans , Amygdala/physiology , Male , Female , Adult , Beta Rhythm/physiology , Psychomotor Performance/physiology , Arm/physiology , Young Adult , Movement/physiology , Middle Aged , Drug Resistant Epilepsy/physiopathology , Electroencephalography/methods
6.
bioRxiv ; 2024 Jun 22.
Article in English | MEDLINE | ID: mdl-38948875

ABSTRACT

Kidney disease is highly heritable; however, the causal genetic variants, the cell types in which these variants function, and the molecular mechanisms underlying kidney disease remain largely unknown. To identify genetic loci affecting kidney function, we performed a GWAS using multiple kidney function biomarkers and identified 462 loci. To begin to investigate how these loci affect kidney function, we generated single-cell chromatin accessibility (scATAC-seq) maps of the human kidney and identified candidate cis-regulatory elements (cCREs) for kidney podocytes, tubule epithelial cells, and kidney endothelial, stromal, and immune cells. Kidney tubule epithelial cCREs explained 58% of kidney function SNP-heritability and kidney podocyte cCREs explained an additional 6.5% of SNP-heritability. In contrast, little kidney function heritability was explained by kidney endothelial, stromal, or immune cell-specific cCREs. Through functionally informed fine-mapping, we identified putative causal kidney function variants and their corresponding cCREs. Using kidney scATAC-seq data, we created a deep learning model (which we named ChromKid) to predict kidney cell type-specific chromatin accessibility from sequence. ChromKid and allele-specific kidney scATAC-seq revealed that many fine-mapped kidney function variants locally change chromatin accessibility in tubule epithelial cells. Enhancer assays confirmed that fine-mapped kidney function variants alter tubule epithelial regulatory element function. To map the genes that these regulatory elements control, we used CRISPR interference (CRISPRi) to target these regulatory elements in tubule epithelial cells and assessed changes in gene expression. CRISPRi of enhancers harboring kidney function variants regulated NDRG1 and RBPMS expression. Thus, inherited differences in tubule epithelial NDRG1 and RBPMS expression may predispose to kidney disease in humans. We conclude that genetic variants affecting tubule epithelial regulatory element function account for most SNP-heritability of human kidney function. This work provides an experimental approach to identify the variants, regulatory elements, and genes involved in polygenic disease.
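
As a loose illustration of the general model class referred to as ChromKid (sequence in, per-cell-type accessibility out), here is a deliberately tiny convolutional network in PyTorch; the architecture, layer sizes, and the five hypothetical cell types are assumptions for illustration only and do not describe the actual model.

```python
import torch
import torch.nn as nn

def one_hot(seq):
    """One-hot encode a DNA sequence (A, C, G, T -> 4 channels)."""
    idx = {"A": 0, "C": 1, "G": 2, "T": 3}
    x = torch.zeros(4, len(seq))
    for i, base in enumerate(seq):
        if base in idx:
            x[idx[base], i] = 1.0
    return x

class TinyAccessibilityCNN(nn.Module):
    """A minimal sequence-to-accessibility model for 5 hypothetical cell types."""
    def __init__(self, n_cell_types=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(4, 64, kernel_size=15, padding=7), nn.ReLU(),
            nn.MaxPool1d(8),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, n_cell_types),
        )

    def forward(self, x):          # x: (batch, 4, sequence_length)
        return self.net(x)         # (batch, n_cell_types) accessibility scores

model = TinyAccessibilityCNN()
seq = "ACGT" * 250                 # a 1 kb placeholder sequence
scores = model(one_hot(seq).unsqueeze(0))
print(scores.shape)                # torch.Size([1, 5])
```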

7.
J Neural Eng ; 21(4)2024 Jul 15.
Article in English | MEDLINE | ID: mdl-38914073

ABSTRACT

Objective. Can we classify movement execution and inhibition from hippocampal oscillations during arm-reaching tasks? Traditionally associated with memory encoding, spatial navigation, and motor sequence consolidation, the hippocampus has come under scrutiny for its potential role in movement processing. Stereotactic electroencephalography (SEEG) has provided a unique opportunity to study the neurophysiology of the human hippocampus during motor tasks. In this study, we assess the accuracy of discriminant functions, in combination with principal component analysis (PCA), in classifying between 'Go' and 'No-go' trials in a Go/No-go arm-reaching task. Approach. Our approach centers on capturing the modulation of beta-band (13-30 Hz) power from multiple SEEG contacts in the hippocampus and minimizing the dimensional complexity of channels and frequency bins. This study utilizes SEEG data from the human hippocampus of 10 participants diagnosed with epilepsy. Spectral power was computed during a 'center-out' Go/No-go arm-reaching task, where participants reached or withheld their hand based on a colored cue. PCA was used to reduce data dimension and isolate the highest-variance components within the beta band. The Silhouette score was employed to measure the quality of clustering between 'Go' and 'No-go' trials. The accuracy of five different discriminant functions was evaluated using cross-validation. Main results. The Diagonal-Quadratic model performed best of the five classification models, exhibiting the lowest error rate in all participants (median: 9.91%, average: 14.67%). PCA showed that the first two principal components collectively accounted for, on average, 54.83% of the total variance across all participants, ranging from 36.92% to 81.25% among participants. Significance. This study shows that PCA paired with a Diagonal-Quadratic model can be an effective method for classifying between Go/No-go trials from beta-band power in the hippocampus during arm-reaching responses. This emphasizes the significance of hippocampal beta-power modulation in motor control, unveiling its potential implications for brain-computer interface applications.
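
Below is a sketch of a PCA-plus-diagonal-quadratic pipeline on simulated beta-power features; scikit-learn's GaussianNB is used because it fits class-specific diagonal covariances, which is a common software equivalent of a 'Diagonal-Quadratic' discriminant, and all data and dimensions are made up.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score
from sklearn.metrics import silhouette_score

# Simulated beta-power features: trials x (channels * frequency bins).
rng = np.random.default_rng(3)
X_go = rng.normal(loc=-0.3, scale=1.0, size=(60, 40))
X_nogo = rng.normal(loc=0.3, scale=1.0, size=(60, 40))
X = np.vstack([X_go, X_nogo])
y = np.array([0] * 60 + [1] * 60)   # 0 = Go, 1 = No-go

# Reduce dimensionality, then assess cluster separation in PC space.
pca = PCA(n_components=2).fit(X)
X_pc = pca.transform(X)
print("variance explained by first two PCs:", pca.explained_variance_ratio_.sum())
print("silhouette of Go vs No-go in PC space:", silhouette_score(X_pc, y))

# Cross-validated accuracy of PCA followed by a diagonal-covariance quadratic classifier.
clf = make_pipeline(PCA(n_components=2), GaussianNB())
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```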


Subject(s)
Arm , Beta Rhythm , Hippocampus , Humans , Hippocampus/physiology , Female , Beta Rhythm/physiology , Male , Adult , Arm/physiology , Psychomotor Performance/physiology , Movement/physiology , Electroencephalography/methods , Electroencephalography/classification , Principal Component Analysis , Young Adult , Reproducibility of Results , Middle Aged
8.
Oper Neurosurg (Hagerstown) ; 27(3): 265-278, 2024 Sep 01.
Article in English | MEDLINE | ID: mdl-38869495

ABSTRACT

BACKGROUND AND OBJECTIVES: Suprasellar tumors, particularly pituitary adenomas (PAs), commonly present with visual decline, and the endoscopic endonasal transsphenoidal approach (EETA) is the primary management for optic apparatus decompression. Patients presenting with complete preoperative monocular blindness comprise a high-risk subgroup, given concern for complete blindness. This retrospective cohort study evaluates outcomes after EETA for patients with PA presenting with monocular blindness. METHODS: Retrospective analysis of all EETA cases at our institution from June 2012 to August 2023 was performed. Inclusion criteria included adults with confirmed PA and complete monocular blindness, defined as no light perception, and a relative afferent pupillary defect secondary to tumor mass effect. RESULTS: Our cohort includes 15 patients (9 males, 6 females), comprising 2.4% of the overall PA cohort screened. The mean tumor diameter was 3.8 cm, with 6 being giant PAs (>4 cm). The mean duration of preoperative monocular blindness was 568 days. Additional symptoms included contralateral visual field defects (n = 11) and headaches (n = 10). Two patients presented with subacute PA apoplexy. Gross total resection was achieved in 46% of patients, reflecting tumor size and invasiveness. Postoperatively, 2 patients experienced improvement in their effectively blind eye and 2 had improved visual fields of the contralateral eye. Those with improvements were operated on within 10 days of presentation, and no patients experienced worsened vision. CONCLUSION: This is the first series of EETA outcomes in patients with higher-risk PA with monocular blindness on presentation. In these extensive lesions, vision remained stable for most without further decline, and improvement from monocular blindness was observed in a small subset of patients with no light perception and relative afferent pupillary defect. Timing from vision loss to surgical intervention seemed to be associated with improvement. From a surgical perspective, caution is warranted to protect remaining vision, and we conclude that EETA is safe in the management of these patients.


Subject(s)
Adenoma , Blindness , Pituitary Neoplasms , Humans , Male , Pituitary Neoplasms/surgery , Pituitary Neoplasms/complications , Pituitary Neoplasms/diagnostic imaging , Female , Blindness/etiology , Blindness/surgery , Middle Aged , Adenoma/surgery , Adenoma/complications , Retrospective Studies , Adult , Aged , Neuroendoscopy/methods , Treatment Outcome , Natural Orifice Endoscopic Surgery/methods
9.
bioRxiv ; 2024 Jun 08.
Article in English | MEDLINE | ID: mdl-38895200

ABSTRACT

Regular, systematic, and independent assessment of computational tools used to predict the pathogenicity of missense variants is necessary to evaluate their clinical and research utility and suggest directions for future improvement. Here, as part of the sixth edition of the Critical Assessment of Genome Interpretation (CAGI) challenge, we assess missense variant effect predictors (or variant impact predictors) on an evaluation dataset of rare missense variants from disease-relevant databases. Our assessment evaluates predictors submitted to the CAGI6 Annotate-All-Missense challenge, predictors commonly used by the clinical genetics community, and recently developed deep learning methods for variant effect prediction. To explore a variety of settings that are relevant for different clinical and research applications, we assess performance within different subsets of the evaluation data and within high-specificity and high-sensitivity regimes. We find strong performance of many predictors across multiple settings. Meta-predictors tend to outperform their constituent individual predictors; however, several individual predictors have performance similar to that of commonly used meta-predictors. The relative performance of predictors differs in high-specificity and high-sensitivity regimes, suggesting that different methods may be best suited to different use cases. We also characterize two potential sources of bias. Predictors that incorporate allele frequency as a predictive feature tend to have reduced performance when distinguishing pathogenic variants from very rare benign variants, and predictors supervised on pathogenicity labels from curated variant databases often learn label imbalances within genes. Overall, we find notable advances over the oldest and most cited missense variant effect predictors and continued improvements among the most recently developed tools, and the CAGI Annotate-All-Missense challenge (also termed the Missense Marathon) will continue to assess state-of-the-art methods as the field progresses. Together, our results help illuminate the current clinical and research utility of missense variant effect predictors and identify potential areas for future development.
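
As a small illustration of evaluating predictors in high-specificity and high-sensitivity regimes, the sketch below reads off sensitivity at 95% specificity (and vice versa) from an ROC curve; the scores are simulated and the exact metrics used in the CAGI assessment may differ.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Simulated predictor scores for benign (0) and pathogenic (1) variants.
rng = np.random.default_rng(4)
y_true = np.concatenate([np.zeros(800), np.ones(200)])
scores = np.concatenate([rng.normal(0.0, 1.0, 800), rng.normal(1.2, 1.0, 200)])

fpr, tpr, thresholds = roc_curve(y_true, scores)
print("overall AUC:", roc_auc_score(y_true, scores))

# High-specificity regime: sensitivity achieved while keeping FPR <= 5%.
high_spec_sensitivity = tpr[fpr <= 0.05].max()
# High-sensitivity regime: specificity achieved while keeping TPR >= 95%.
high_sens_specificity = 1 - fpr[tpr >= 0.95].min()
print(f"sensitivity at 95% specificity: {high_spec_sensitivity:.2f}")
print(f"specificity at 95% sensitivity: {high_sens_specificity:.2f}")
```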

10.
J Comput Assist Tomogr ; 48(4): 614-627, 2024.
Article in English | MEDLINE | ID: mdl-38626756

ABSTRACT

Neuroendocrine neoplasms (NENs) are rare neoplasms originating from neuroendocrine cells, with increasing incidence due to enhanced detection methods. These tumors display considerable heterogeneity, necessitating diverse management strategies based on factors like organ of origin and tumor size. This article provides a comprehensive overview of therapeutic approaches for NENs, emphasizing the role of imaging in treatment decisions. It categorizes tumors based on their locations: gastric, duodenal, pancreatic, small bowel, colonic, rectal, appendiceal, gallbladder, prostate, lung, gynecological, and others. The article also elucidates the challenges in managing metastatic disease and controversies surrounding MEN1-neuroendocrine tumor management. It underscores the significance of individualized treatment plans, emphasizing the need for a multidisciplinary approach to ensure optimal patient outcomes.


Subject(s)
Neuroendocrine Tumors , Humans , Neuroendocrine Tumors/diagnostic imaging , Neuroendocrine Tumors/therapy , Neuroendocrine Tumors/pathology
11.
Neurosci Res ; 2024 Apr 04.
Article in English | MEDLINE | ID: mdl-38582242

ABSTRACT

The Stroop Task is a well-known neuropsychological task developed to investigate conflict processing in the human brain. Our group has utilized direct intracranial neural recordings in various brain regions during performance of a modified color-word Stroop Task to gain a mechanistic understanding of non-emotional human conflict processing. The purpose of this review article is to: 1) synthesize our own studies into a model of human conflict processing, 2) review the current literature on the Stroop Task and other conflict tasks to put our research in context, and 3) describe how these studies define a network in conflict processing. The figures presented are reprinted from our prior publications and key publications referenced in the manuscript. We summarize all studies to date that employ invasive intracranial recordings in humans during performance of conflict-inducing tasks. For our own studies, we analyzed local field potentials (LFPs) from patients with implanted stereotactic electroencephalography (SEEG) electrodes, and we observed intracortical oscillation patterns as well as intercortical temporal relationships in the hippocampus, amygdala, and orbitofrontal cortex (OFC) during the cue-processing phase of a modified Stroop Task. Our findings suggest that non-emotional human conflict processing involves modulation across multiple frequency bands within and between brain structures.

12.
AJR Am J Roentgenol ; 222(5): e2330720, 2024 May.
Article in English | MEDLINE | ID: mdl-38353447

ABSTRACT

BACKGROUND. The 2022 Society of Radiologists in Ultrasound (SRU) consensus conference recommendations for small gallbladder polyps support management that is less aggressive than earlier approaches and may help standardize evaluation of polyps by radiologists. OBJECTIVE. The purpose of the present study was to assess the interreader agreement of radiologists in applying SRU recommendations for management of incidental gallbladder polyps on ultrasound. METHODS. This retrospective study included 105 patients (75 women and 30 men; median age, 51 years) with a gallbladder polyp on ultrasound (without features highly suspicious for invasive or malignant tumor) who underwent cholecystectomy between January 1, 2003, and January 1, 2021. Ten abdominal radiologists independently reviewed ultrasound examinations and, using the SRU recommendations, assessed one polyp per patient to assign risk category (extremely low risk, low risk, or indeterminate risk) and make a possible recommendation for surgical consultation. Five radiologists were considered less experienced (< 5 years of experience), and five were considered more experienced (≥ 5 years of experience). Interreader agreement was evaluated. Polyps were classified pathologically as nonneoplastic or neoplastic. RESULTS. For risk category assignments, interreader agreement was substantial among all readers (k = 0.710), less-experienced readers (k = 0.705), and more-experienced readers (k = 0.692). For surgical consultation recommendations, interreader agreement was substantial among all readers (k = 0.795) and more-experienced readers (k = 0.740) and was almost perfect among less-experienced readers (k = 0.811). Of 10 readers, a median of 5.0 (IQR, 2.0-8.0), 4.0 (IQR, 2.0-7.0), and 0.0 (IQR, 0.0-0.0) readers classified polyps as extremely low risk, low risk, and indeterminate risk, respectively. Across readers, the percentage of polyps classified as extremely low risk ranged from 32% to 72%; as low risk, from 24% to 65%; and as indeterminate risk, from 0% to 8%. Of 10 readers, a median of 0.0 (IQR, 0.0-1.0) readers recommended surgical consultation; the percentage of polyps receiving a recommendation for surgical consultation ranged from 4% to 22%. Of a total of 105 polyps, 102 were nonneoplastic and three were neoplastic (all benign). Based on readers' most common assessments for nonneoplastic polyps, the risk category was extremely low risk for 53 polyps, low risk for 48 polyps, and indeterminate risk for one polyp; surgical consultation was recommended for 16 polyps. CONCLUSION. Ten abdominal radiologists showed substantial agreement for polyp risk categorizations and surgical consultation recommendations, although areas of reader variability were identified. CLINICAL IMPACT. The findings support the overall reproducibility of the SRU recommendations, while indicating opportunity for improvement.
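
For context, interreader agreement statistics of this kind are chance-corrected; the sketch below shows only the simplest two-reader case with Cohen's kappa, whereas the study summarized agreement across ten readers, and the category assignments are hypothetical.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical risk-category assignments from two readers for ten polyps
# (0 = extremely low risk, 1 = low risk, 2 = indeterminate risk).
reader_a = [0, 0, 1, 1, 1, 0, 2, 1, 0, 1]
reader_b = [0, 1, 1, 1, 0, 0, 2, 1, 0, 1]

# Unweighted Cohen's kappa; multi-reader studies typically extend this with
# Fleiss' kappa or similar chance-corrected statistics.
print(cohen_kappa_score(reader_a, reader_b))
```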


Subject(s)
Incidental Findings , Polyps , Ultrasonography , Humans , Female , Male , Middle Aged , Polyps/diagnostic imaging , Polyps/surgery , Retrospective Studies , Ultrasonography/methods , Adult , Gallbladder Diseases/diagnostic imaging , Gallbladder Diseases/surgery , Aged , Observer Variation , Radiologists , Societies, Medical , Consensus , Practice Guidelines as Topic
13.
AJR Am J Roentgenol ; 222(4): e2329806, 2024 04.
Article in English | MEDLINE | ID: mdl-38230904

ABSTRACT

BACKGROUND. Examination protocoling is a noninterpretive task that increases radiologists' workload and can cause workflow inefficiencies. OBJECTIVE. The purpose of this study was to evaluate effects of an automated CT protocoling system on examination process times and protocol error rates. METHODS. This retrospective study included 317,597 CT examinations (mean age, 61.8 ± 18.1 [SD] years; male, 161,125; female, 156,447; unspecified sex, 25) from July 2020 to June 2022. A rules-based automated protocoling system was implemented institution-wide; the system evaluated all CT orders in the EHR and assigned a protocol or directed the order for manual radiologist protocoling. The study period comprised pilot (July 2020 to December 2020), implementation (January 2021 to December 2021), and postimplementation (January 2022 to June 2022) phases. Proportions of automatically protocoled examinations were summarized. Process times were recorded. Protocol error rates were assessed by counts of quality improvement (QI) reports and examination recalls and comparison with retrospectively assigned protocols in 450 randomly selected examinations. RESULTS. Frequency of automatic protocoling was 19,366/70,780 (27.4%), 68,875/163,068 (42.2%), and 54,045/83,749 (64.5%) in pilot, implementation, and postimplementation phases, respectively (p < .001). Mean (± SD) times from order entry to protocol assignment for automatically and manually protocoled examinations for emergency department examinations were 0.2 ± 18.2 and 2.1 ± 69.7 hours, respectively; mean inpatient examination times were 0.5 ± 50.0 and 3.5 ± 105.5 hours; and mean outpatient examination times were 361.7 ± 1165.5 and 1289.9 ± 2050.9 hours (all p < .001). Mean (± SD) times from order entry to examination completion for automatically and manually protocoled examinations for emergency department examinations were 2.6 ± 38.6 and 4.2 ± 73.0 hours, respectively (p < .001); for inpatient examinations were 6.3 ± 74.6 and 8.7 ± 109.3 hours (p = .001); and for outpatient examinations were 1367.2 ± 1795.8 and 1471.8 ± 2118.3 hours (p < .001). In the three phases, there were three, 19, and 25 QI reports and zero, one, and three recalls, respectively, for automatically protocoled examinations, versus nine, 19, and five QI reports and one, seven, and zero recalls for manually protocoled examinations. Retrospectively assigned protocols were concordant with 212/214 (99.1%) of automatically protocoled versus 233/236 (98.7%) of manually protocoled examinations. CONCLUSION. The automated protocoling system substantially reduced radiologists' protocoling workload and decreased times from order entry to protocol assignment and examination completion; protocol errors and recalls were infrequent. CLINICAL IMPACT. The system represents a solution for reducing radiologists' time spent performing noninterpretive tasks and improving care efficiency.
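
A toy sketch of what a rules-based protocoling step can look like: exam code plus indication keywords map to a protocol, and unmatched orders fall back to manual radiologist protocoling. The rule table, exam codes, and protocol names are invented for illustration and are not the institution's actual rules.

```python
# Toy rules table: (exam code, indication keywords) -> protocol.
RULES = [
    ("CT ABD/PELVIS", ("appendicitis", "abdominal pain"), "abdomen-pelvis with IV contrast"),
    ("CT ABD/PELVIS", ("renal colic", "kidney stone"), "non-contrast stone protocol"),
    ("CT HEAD", ("trauma", "fall"), "non-contrast head"),
]

def assign_protocol(exam_code, indication):
    """Return an automatic protocol, or None to route the order to a radiologist."""
    indication = indication.lower()
    for code, keywords, protocol in RULES:
        if exam_code == code and any(k in indication for k in keywords):
            return protocol
    return None  # fall back to manual radiologist protocoling

print(assign_protocol("CT ABD/PELVIS", "RLQ abdominal pain, r/o appendicitis"))
print(assign_protocol("CT CHEST", "pulmonary nodule follow-up"))  # -> None (manual)
```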


Subject(s)
Tomography, X-Ray Computed , Humans , Female , Male , Retrospective Studies , Middle Aged , Tomography, X-Ray Computed/methods , Quality Improvement , Clinical Protocols , Workflow , Workload , Aged , Adult
14.
Turk Neurosurg ; 34(1): 128-134, 2024.
Article in English | MEDLINE | ID: mdl-38282591

ABSTRACT

AIM: To investigate the relationship between planned drill approach angle and angular deviation of the stereotactically placed intracranial electrode tips. MATERIAL AND METHODS: Stereotactic electrode implantation was performed in 13 patients with drug-resistant epilepsy. A total of 136 electrodes were included in our analysis. Stereotactic targets were planned on pre-operative magnetic resonance imaging (MRI) scans and implantation was carried out using a Cosman-Roberts-Wells (CRW) stereotactic frame with the Ad-Tech drill guide and electrodes. Post-implant electrode angles in the axial, coronal, and sagittal planes were determined from post-operative computed tomography (CT) scans and compared with planned angles using Bland-Altman plots and linear regression. RESULTS: Qualitative assessment of correlation plots between planned and actual angles demonstrated a linear relationship for axial, coronal, and sagittal planes, with no overt angular deflection for any magnitude of the planned angle. CONCLUSION: The accuracy of CRW frame-based electrode placement using the Ad-Tech drill guide and electrodes is not significantly affected by the magnitude of the planning angle. Based on our results, oblique electrode insertion is a safe and accurate procedure.
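
A minimal sketch of the Bland-Altman summary statistics (bias and 95% limits of agreement) for planned versus actual angles; the angle values are placeholders, not study data.

```python
import numpy as np

def bland_altman(planned, actual):
    """Bias and 95% limits of agreement between planned and actual angles."""
    planned = np.asarray(planned, dtype=float)
    actual = np.asarray(actual, dtype=float)
    diff = actual - planned
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, bias - loa, bias + loa

# Placeholder axial-plane angles (degrees) for a handful of electrodes.
planned = [12.0, 25.5, 40.0, 55.0, 68.5, 80.0]
actual = [12.8, 24.9, 41.2, 54.1, 69.3, 79.2]
print(bland_altman(planned, actual))
```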


Subject(s)
Drug Resistant Epilepsy , Stereotaxic Techniques , Humans , Drug Resistant Epilepsy/diagnostic imaging , Drug Resistant Epilepsy/surgery , Imaging, Three-Dimensional , Electrodes, Implanted , Magnetic Resonance Imaging
15.
Nat Genet ; 55(12): 2056-2059, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38036790

ABSTRACT

Genomic deep learning models can predict genome-wide epigenetic features and gene expression levels directly from DNA sequence. While current models perform well at predicting gene expression levels across genes in different cell types from the reference genome, their ability to explain expression variation between individuals due to cis-regulatory genetic variants remains largely unexplored. Here, we evaluate four state-of-the-art models on paired personal genome and transcriptome data and find limited performance when explaining variation in expression across individuals. In addition, models often fail to predict the correct direction of effect of cis-regulatory genetic variation on expression.
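
A sketch of the two evaluations described: cross-individual correlation for one gene, and a direction-of-effect check for one cis-regulatory variant; all values are simulated placeholders, not results from the four models assessed.

```python
import numpy as np
from scipy.stats import pearsonr

# For one gene: predicted and measured expression across individuals, using
# each individual's personal sequence as model input (values simulated here).
rng = np.random.default_rng(5)
measured = rng.normal(size=200)
predicted = 0.1 * measured + rng.normal(size=200)  # weak cross-individual signal

r, _ = pearsonr(predicted, measured)
print(f"cross-individual correlation for this gene: r = {r:.2f}")

# Direction-of-effect check for one variant: does the model's predicted change
# in expression (alt minus ref allele) share the sign of the effect estimated
# from population data (e.g., an eQTL beta)?
predicted_effect = -0.4   # placeholder model prediction
observed_effect = 0.3     # placeholder eQTL effect size
print("direction correct:", np.sign(predicted_effect) == np.sign(observed_effect))
```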


Subject(s)
Deep Learning , Transcriptome , Humans , Transcriptome/genetics , Genetic Variation/genetics , Genome , Genomics
17.
PLoS One ; 18(9): e0292240, 2023.
Article in English | MEDLINE | ID: mdl-37773956

ABSTRACT

OBJECTIVE: To provide quantitative evidence for systematically prioritising individuals for full formal cardiovascular disease (CVD) risk assessment using primary care records with a novel tool (eHEART) with age- and sex-specific risk thresholds. METHODS AND ANALYSIS: eHEART was derived using landmark Cox models for incident CVD with repeated measures of conventional CVD risk predictors in 1,642,498 individuals from the Clinical Practice Research Datalink. Using 119,137 individuals from UK Biobank, we modelled the implications of initiating guideline-recommended statin therapy using eHEART with age- and sex-specific prioritisation thresholds corresponding to 5% false negative rates to prioritise adults aged 40-69 years in a population in England for invitation to a formal CVD risk assessment. RESULTS: Formal CVD risk assessment on all adults would identify 76% and 49% of future CVD events amongst men and women respectively, and 93 (95% CI: 90, 95) men and 279 (95% CI: 259, 297) women would need to be screened (NNS) to prevent one CVD event. In contrast, if eHEART was first used to prioritise individuals for formal CVD risk assessment, we would identify 73% and 47% of future events amongst men and women respectively, with an NNS of 75 (95% CI: 72, 77) men and 162 (95% CI: 150, 172) women. Replacing the age- and sex-specific prioritisation thresholds with a 10% threshold identifies around 10% fewer events. CONCLUSIONS: The use of prioritisation tools with age- and sex-specific thresholds could lead to more efficient CVD assessment programmes with only small reductions in effectiveness at preventing new CVD events.
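
One way to realize a prioritisation threshold tied to a 5% false-negative rate is sketched below: within an age/sex stratum, set the threshold at the 5th percentile of scores among future cases, so 95% of events are captured. The scores are simulated and this is not the eHEART derivation itself.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated prioritisation scores in one age/sex stratum.
scores_cases = rng.normal(1.0, 1.0, size=500)      # individuals with a future CVD event
scores_noncases = rng.normal(0.0, 1.0, size=9500)  # individuals without an event

# Threshold giving a 5% false-negative rate: 95% of future cases exceed it.
threshold = np.quantile(scores_cases, 0.05)

invited = np.concatenate([scores_cases, scores_noncases]) >= threshold
captured = (scores_cases >= threshold).mean()
print(f"threshold = {threshold:.2f}")
print(f"fraction of future events captured: {captured:.2%}")
print(f"fraction of the stratum invited for full assessment: {invited.mean():.2%}")
```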


Subject(s)
Cardiovascular Diseases , Adult , Male , Humans , Female , Cardiovascular Diseases/epidemiology , Cardiovascular Diseases/prevention & control , England/epidemiology , Risk Assessment , Primary Health Care , Risk Factors
18.
Radiol Clin North Am ; 61(6): 945-961, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37758362

ABSTRACT

Compared to conventional single-energy CT (SECT), dual-energy CT (DECT) provides additional information to better characterize imaged tissues. Approaches to DECT acquisition vary by vendor and include source-based and detector-based systems, each with its own advantages and disadvantages. Despite the different approaches to DECT acquisition, the most utilized DECT images include routine SECT equivalent, virtual monoenergetic, material density (eg, iodine map), and virtual non-contrast images. These images are generated either through reconstructions in the projection or image domains. Designing and implementing an optimal DECT workflow into routine clinical practice depends on radiologist and technologist input with special considerations including appropriate patient and protocol selection and workflow automation. In addition to better tissue characterization, DECT provides numerous advantages over SECT such as the characterization of incidental findings and dose reduction in radiation and iodinated contrast.


Subject(s)
Iodine , Radiography, Dual-Energy Scanned Projection , Humans , Tomography, X-Ray Computed/methods , Contrast Media , Radiation Dosage , Radiography, Dual-Energy Scanned Projection/methods
19.
Trials ; 24(1): 512, 2023 Aug 10.
Article in English | MEDLINE | ID: mdl-37563721

ABSTRACT

BACKGROUND: Vasovagal reactions (VVRs) are the most common acute complications of blood donation. Responsible for substantial morbidity, they also reduce the likelihood of repeated donations and are disruptive and costly for blood services. Although blood establishments worldwide have adopted different strategies to prevent VVRs (including water loading and applied muscle tension [AMT]), robust evidence is limited. The Strategies to Improve Donor Experiences (STRIDES) trial aims to reliably assess the impact of four different interventions to prevent VVRs among blood donors. METHODS: STRIDES is a cluster-randomised cross-over/stepped-wedge factorial trial of four interventions to reduce VVRs involving about 1.4 million whole blood donors enrolled from all 73 blood donation sites (mobile teams and donor centres) of National Health Service Blood and Transplant (NHSBT) in England. Each site ("cluster") has been randomly allocated to receive one or more interventions during a 36-month period, using principles of cross-over, stepped-wedge and factorial trial design to assign the sequence of interventions. Each of the four interventions is compared to NHSBT's current practices: (i) 500-ml isotonic drink before donation (vs current 500-ml plain water); (ii) 3-min rest on donation chair after donation (vs current 2 min); (iii) new modified AMT (vs current practice of AMT); and (iv) psychosocial intervention using preparatory materials (vs current practice of nothing). The primary outcome is the number of in-session VVRs with loss of consciousness (i.e. episodes involving loss of consciousness of any duration, with or without additional complications). Secondary outcomes include all in-session VVRs (i.e. with and without loss of consciousness), all delayed VVRs (i.e. those occurring after leaving the venue) and any in-session non-VVR adverse events or reactions. DISCUSSION: The STRIDES trial should yield novel information about interventions, singly and in combination, for the prevention of VVRs, with the aim of generating policy-shaping evidence to help inform blood services to improve donor health, donor experience, and service efficiency. TRIAL REGISTRATION: ISRCTN: 10412338. Registration date: October 24, 2019.
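
As a heavily simplified illustration of staggering intervention start times across sites (the stepped-wedge idea), see the sketch below; the real STRIDES allocation also uses cross-over and factorial principles that are not captured here, and the random schedule shown is purely hypothetical.

```python
import random

random.seed(7)

SITES = [f"site_{i:02d}" for i in range(1, 74)]          # 73 donation sites
INTERVENTIONS = ["isotonic drink", "3-min rest", "modified AMT", "psychosocial prep"]
N_MONTHS = 36

# For each site and intervention, randomly pick the month at which the site
# switches from current practice to the intervention (a stepped-wedge rollout).
schedule = {
    site: {iv: random.randint(1, N_MONTHS) for iv in INTERVENTIONS}
    for site in SITES
}

def delivers(site, intervention, month):
    """True if the site delivers the intervention (rather than current practice)."""
    return month >= schedule[site][intervention]

print(schedule["site_01"])
print(delivers("site_01", "modified AMT", month=24))
```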


Subject(s)
Blood Donors , Syncope, Vasovagal , Humans , State Medicine , Syncope, Vasovagal/diagnosis , Syncope, Vasovagal/etiology , Syncope, Vasovagal/prevention & control , Water , Blood Donation
20.
J Am Heart Assoc ; 12(15): e029296, 2023 08.
Article in English | MEDLINE | ID: mdl-37489768

ABSTRACT

Background The aim of this study was to provide quantitative evidence of the use of polygenic risk scores for systematically identifying individuals for invitation for full formal cardiovascular disease (CVD) risk assessment. Methods and Results A total of 108 685 participants aged 40 to 69 years, with measured biomarkers, linked primary care records, and genetic data in UK Biobank were used for model derivation and population health modeling. Prioritization tools using age, polygenic risk scores for coronary artery disease and stroke, and conventional risk factors for CVD available within longitudinal primary care records were derived using sex-specific Cox models. We modeled the implications of initiating guideline-recommended statin therapy after prioritizing individuals for invitation to a formal CVD risk assessment. If primary care records were used to prioritize individuals for formal risk assessment using age- and sex-specific thresholds corresponding to 5% false-negative rates, then the numbers of men and women needed to be screened to prevent 1 CVD event would be 149 and 280, respectively. In contrast, adding polygenic risk scores to both prioritization and formal assessments, and selecting thresholds to capture the same number of events, resulted in a number needed to screen of 116 for men and 180 for women. Conclusions Using both polygenic risk scores and primary care records to prioritize individuals at highest risk of a CVD event for a formal CVD risk assessment can prioritize those who need interventions the most more efficiently than using primary care records alone. This could lead to better allocation of resources by reducing the number of risk assessments in primary care while still preventing the same number of CVD events.
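
For readers unfamiliar with polygenic risk scores, here is a minimal sketch of computing one as a weighted sum of allele dosages and using it to prioritise a top fraction for formal assessment; the dosages, weights, and the 20% cut-off are illustrative assumptions, not the study's models or thresholds.

```python
import numpy as np

# A polygenic risk score is a weighted sum of risk-allele dosages:
# score_i = sum_j (dosage_ij * effect_j), with dosages in {0, 1, 2} (or imputed).
rng = np.random.default_rng(8)
n_people, n_variants = 1000, 500
dosages = rng.integers(0, 3, size=(n_people, n_variants)).astype(float)
effects = rng.normal(0, 0.02, size=n_variants)   # placeholder per-variant weights

prs = dosages @ effects

# Standardise and flag, say, the top 20% for prioritised invitation to a
# formal CVD risk assessment alongside conventional risk factors.
prs_z = (prs - prs.mean()) / prs.std()
prioritised = prs_z >= np.quantile(prs_z, 0.80)
print(f"{prioritised.sum()} of {n_people} individuals prioritised")
```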


Subject(s)
Cardiovascular Diseases , Coronary Artery Disease , Stroke , Male , Humans , Female , Cardiovascular Diseases/diagnosis , Cardiovascular Diseases/epidemiology , Cardiovascular Diseases/genetics , Risk Factors , Coronary Artery Disease/complications , Risk Assessment/methods , Stroke/epidemiology , Stroke/genetics , Stroke/prevention & control