Results 1 - 20 of 2,352

1.
RNA; 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39095083

ABSTRACT

The nonsense-mediated RNA decay (NMD) pathway is a crucial mechanism of mRNA quality control. Current annotations of NMD substrate RNAs are rarely data-driven and instead rely on generally established rules. We present a dataset covering four cell lines, with combinations of SMG5, SMG6 and SMG7 knockdowns or SMG7 knockout. Based on this dataset, we implemented a workflow that combines Nanopore and Illumina sequencing to assemble a transcriptome enriched for NMD target transcripts. Moreover, we use coding sequence information from Ensembl, Gencode consensus RiboSeq ORFs and OpenProt to enhance the CDS annotation of novel transcript isoforms. In summary, the transcriptome assembly yielded 302,889 transcripts, of which 24% are absent from Ensembl annotations, 48,213 contain a premature stop codon and 6,433 are significantly upregulated in three or more comparisons of NMD-active versus NMD-deficient cell lines. We present an in-depth view of these results through the NMDtxDB database, available at https://shiny.dieterichlab.org/app/NMDtxDB, which supports the study of NMD-sensitive transcripts. We have open-sourced the web application and analysis workflow at https://github.com/dieterich-lab/NMDtxDB and https://github.com/dieterich-lab/nmd-wf.
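The selection logic behind the headline numbers above is a simple two-condition filter. As a minimal Python sketch, assuming a hypothetical flat export with columns has_ptc and n_significant_comparisons (not the actual NMDtxDB schema):

    import pandas as pd

    # Hypothetical flat export of the assembled transcriptome; the column
    # names are assumptions for illustration, not the NMDtxDB schema.
    tx = pd.read_csv("nmdtxdb_transcripts.csv")

    # Putative NMD targets: a premature termination codon plus significant
    # upregulation in >= 3 NMD-active vs. NMD-deficient comparisons.
    candidates = tx[tx["has_ptc"] & (tx["n_significant_comparisons"] >= 3)]
    print(len(candidates), "putative NMD-sensitive transcripts")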

2.
Brief Bioinform; 25(4), 2024 May 23.
Article in English | MEDLINE | ID: mdl-38997128

ABSTRACT

This manuscript describes the development of a resource module that is part of a learning platform named "NIGMS Sandbox for Cloud-based Learning" (https://github.com/NIGMS/NIGMS-Sandbox). The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox at the beginning of this Supplement. This module delivers learning materials on RNA sequencing (RNAseq) data analysis in an interactive format that uses appropriate cloud resources for data access and analysis. Biomedical research is increasingly data-driven and dependent upon data management and analysis methods that facilitate rigorous, robust, and reproducible research. Cloud-based computing resources provide opportunities to broaden the application of bioinformatics and data science in research. Two obstacles for researchers, particularly those at small institutions, are (i) access to bioinformatics analysis environments tailored to their research and (ii) training in how to use cloud-based computing resources. We developed five reusable tutorials for bulk RNAseq data analysis to address these obstacles. Using Jupyter notebooks run on the Google Cloud Platform, the tutorials guide the user through a workflow featuring an RNAseq dataset from a study of prophage-altered drug resistance in Mycobacterium chelonae. The first tutorial uses a subset of the data so users can learn the analysis steps rapidly, and the second uses the entire dataset. Next, a tutorial demonstrates how to analyze the read-count data to generate lists of differentially expressed genes using R/DESeq2. Additional tutorials generate read counts using the Snakemake workflow manager and Nextflow with Google Batch. All tutorials are open source and can be used as templates for other analyses.
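The tutorials generate read counts with the Snakemake workflow manager; a minimal rule of the kind such a workflow might contain looks like this (paths and the featureCounts invocation are illustrative, not the tutorials' actual rules):

    # Snakemake rule sketch: count reads per gene with featureCounts.
    rule count_reads:
        input:
            bam="aligned/{sample}.bam",
            annotation="ref/genes.gtf",
        output:
            "counts/{sample}.txt",
        shell:
            "featureCounts -a {input.annotation} -o {output} {input.bam}"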


Subject(s)
Cloud Computing; Computational Biology; Sequence Analysis, RNA; Software; Computational Biology/methods; Sequence Analysis, RNA/methods; Gene Expression Regulation, Bacterial
3.
Brief Bioinform; 25(3), 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38647153

ABSTRACT

Computational drug repositioning, which involves identifying new indications for existing drugs, is an increasingly attractive research area due to its advantages in reducing both overall cost and development time. As a result, a growing number of computational drug repositioning methods have emerged. Heterogeneous network-based drug repositioning methods have been shown to outperform other approaches, yet there is a dearth of systematic evaluations of these methods covering performance, scalability and usability, and no standardized process for evaluating new ones. Moreover, previous studies have compared only a handful of methods, with conflicting results. In this context, we conducted a systematic benchmarking study of 28 heterogeneous network-based drug repositioning methods on 11 existing datasets, developing a comprehensive framework to evaluate their performance, scalability and usability. Methods that rely on matrix completion or factorization, such as HGIMC, ITRPCA and BNNR, exhibit the best overall performance: HINGRL, MLMC, ITRPCA and HGIMC lead on predictive performance, while NMFDR, GROBMC and SCPMF display superior scalability. For usability, HGIMC, DRHGCN and BNNR are the top performers. Building on these findings, we developed an online tool called HN-DREP (http://hn-drep.lyhbio.com/) that lets researchers view all the detailed evaluation results and select an appropriate method. HN-DREP also provides an external drug repositioning prediction service for a specific disease or drug by integrating predictions from all methods. Furthermore, we have released a Snakemake workflow named HN-DRES (https://github.com/lyhbio/HN-DRES) to facilitate benchmarking and support the extension of new methods into the field.
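To make the matrix completion/factorization idea concrete, the toy Python sketch below scores unobserved drug-disease pairs from a low-rank reconstruction. It stands in for the general principle behind methods like BNNR or HGIMC, not for any of their actual algorithms, which add nuclear-norm regularization, similarity networks, and more:

    import numpy as np

    # Toy drug x disease association matrix (1 = known indication).
    rng = np.random.default_rng(0)
    A = (rng.random((50, 40)) < 0.05).astype(float)

    # Rank-k truncated SVD as a minimal stand-in for matrix completion.
    k = 5
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    scores = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

    # Rank unobserved pairs by reconstructed score: repositioning candidates.
    unobserved = np.argwhere(A == 0)
    order = np.argsort(-scores[A == 0])
    print(unobserved[order][:10])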


Subject(s)
Benchmarking; Drug Repositioning; Drug Repositioning/methods; Humans; Computational Biology/methods; Software; Algorithms
4.
Mol Cell Proteomics; 23(7): 100790, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38777088

ABSTRACT

Protein identification and quantification is an important tool for biomarker discovery. With the increased sensitivity and speed of modern mass spectrometers, sample preparation remains a bottleneck for studying large cohorts. To address this issue, we prepared and evaluated a simple and efficient workflow on the Opentrons OT-2 robot that combines sample digestion, cleanup, and loading on Evotips in a fully automated manner, allowing the processing of up to 192 samples in 6 h. Analysis of 192 automated HeLa cell sample preparations consistently identified ~8000 protein groups and ~130,000 peptide precursors with an 11.5 min active liquid chromatography gradient on the Evosep One and narrow-window data-independent acquisition (nDIA) on the Orbitrap Astral mass spectrometer, providing a throughput of 100 samples per day. Our results demonstrate a highly sensitive workflow that is both reproducible and stable at low sample inputs. The workflow is optimized for a minimal sample starting amount to reduce reagent costs, which is critical when analyzing large biological cohorts. Building on the digestion workflow, we incorporated an automated phosphopeptide enrichment step using magnetic titanium-immobilized metal ion affinity chromatography beads, enabling fully automated proteome and phosphoproteome sample preparation in a single run with high sensitivity. Using the integrated digestion and Evotip loading workflow, we evaluated the effects of cancer immunotherapy on the plasma proteome of patients with metastatic melanoma.
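For readers unfamiliar with the Opentrons Python Protocol API that drives the OT-2, the skeleton below shows the general shape of such an automation script. The labware, volume, and single transfer step are placeholders, not the authors' published protocol:

    from opentrons import protocol_api

    metadata = {"protocolName": "Digestion sketch", "apiLevel": "2.13"}

    def run(protocol: protocol_api.ProtocolContext):
        tips = protocol.load_labware("opentrons_96_tiprack_300ul", "1")
        plate = protocol.load_labware("nest_96_wellplate_2ml_deep", "2")
        reservoir = protocol.load_labware("nest_12_reservoir_15ml", "3")
        p300 = protocol.load_instrument("p300_multi_gen2", "left",
                                        tip_racks=[tips])

        # Add digestion buffer (placeholder volume) to each sample column.
        for column in plate.rows()[0]:
            p300.transfer(50, reservoir["A1"], column, new_tip="always")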


Subject(s)
Proteomics; Workflow; Humans; Proteomics/methods; HeLa Cells; Chromatography, Liquid; Automation; Proteome/metabolism; High-Throughput Screening Assays/methods; Reproducibility of Results; Melanoma/metabolism; Phosphopeptides/metabolism
5.
Mol Biol Evol; 41(1), 2024 Jan 03.
Article in English | MEDLINE | ID: mdl-38069903

ABSTRACT

The increasing availability of genomic resequencing data sets and high-quality reference genomes across the tree of life present exciting opportunities for comparative population genomic studies. However, substantial challenges prevent the simple reuse of data across different studies and species, arising from variability in variant calling pipelines, data quality, and the need for computationally intensive reanalysis. Here, we present snpArcher, a flexible and highly efficient workflow designed for the analysis of genomic resequencing data in nonmodel organisms. snpArcher provides a standardized variant calling pipeline and includes modules for variant quality control, data visualization, variant filtering, and other downstream analyses. Implemented in Snakemake, snpArcher is user-friendly, reproducible, and designed to be compatible with high-performance computing clusters and cloud environments. To demonstrate the flexibility of this pipeline, we applied snpArcher to 26 public resequencing data sets from nonmammalian vertebrates. These variant data sets are hosted publicly to enable future comparative population genomic analyses. With its extensibility and the availability of public data sets, snpArcher will contribute to a broader understanding of genetic variation across species by facilitating the rapid use and reuse of large genomic data sets.
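snpArcher's internals are not reproduced here, but the Snakemake rule below sketches the kind of per-sample variant-calling step a standardized pipeline of this sort wraps (the bcftools commands are real; the file layout is hypothetical):

    # Snakemake rule sketch: per-sample small-variant calling with bcftools.
    rule call_variants:
        input:
            ref="ref/genome.fa",
            bam="aligned/{sample}.bam",
        output:
            "calls/{sample}.vcf.gz",
        shell:
            "bcftools mpileup -f {input.ref} {input.bam} | "
            "bcftools call -mv -Oz -o {output}"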


Subject(s)
Metagenomics; Software; Animals; Workflow; Genomics; Sequence Analysis, DNA; High-Throughput Nucleotide Sequencing
6.
Brief Bioinform; 24(5), 2023 Sep 20.
Article in English | MEDLINE | ID: mdl-37738400

ABSTRACT

Implementing a specific cloud resource to analyze extensive genomic data on severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) poses a challenge when resources are limited. To overcome this, we repurposed a cloud platform initially designed for cancer genomics research (https://cgc.sbgenomics.com) and built the Cloud Workflow for Viral and Variant Identification (COWID) for SARS-CoV-2 research. COWID is a workflow based on the Common Workflow Language that realizes the full potential of sequencing technology for reliable SARS-CoV-2 identification and leverages cloud computing to achieve efficient parallelization. COWID outperformed other contemporary methods by offering scalable identification and reliable variant findings with no false-positive results. COWID typically processed each sample of raw sequencing data within 5 min at a cost of only US$0.01. The COWID source code is publicly available (https://github.com/hendrick0403/COWID) and can be run on any computer with Internet access. COWID is designed to be user-friendly and can be used without prior programming knowledge, making it a time-efficient tool for use during a pandemic.


Subject(s)
COVID-19; Humans; COVID-19/diagnosis; Cloud Computing; SARS-CoV-2/genetics; Workflow; Genomics
7.
Hum Genomics; 18(1): 72, 2024 Jun 27.
Article in English | MEDLINE | ID: mdl-38937848

ABSTRACT

BACKGROUND: Wastewater surveillance (WWS) acts as a vigilant sentinel system for communities, analysing sewage to protect public health by detecting outbreaks and monitoring trends in pathogens and contaminants. To achieve a thorough comprehension of present and upcoming practices and to identify challenges and opportunities for standardisation and improvement in WWS methodologies, two EU surveys were conducted targeting over 750 WWS laboratories across Europe and other regions. The first survey explored a diverse range of activities currently undertaken or planned by laboratories. The second survey specifically targeted methods and quality controls utilised for SARS-CoV-2 surveillance. RESULTS: The findings of the two surveys provide a comprehensive insight into the procedures and methodologies applied in WWS. In Europe, WWS primarily focuses on SARS-CoV-2 with 99% of the survey participants dedicated to this virus. However, the responses highlighted a lack of standardisation in the methodologies employed for monitoring SARS-CoV-2. The surveillance of other pathogens, including antimicrobial resistance, is currently fragmented and conducted by only a limited number of laboratories. Notably, these activities are anticipated to expand in the future. Survey replies emphasise the collective recognition of the need to enhance the accuracy of results in WWS practices, reflecting a shared commitment to advancing precision and effectiveness in WWS methodologies. CONCLUSIONS: These surveys identified a lack of standardised common procedures in WWS practices and the need for quality standards and reference materials to enhance the accuracy and reliability of WWS methods in the future. In addition, it is important to broaden surveillance efforts beyond SARS-CoV-2 to include other emerging pathogens and antimicrobial resistance to ensure a comprehensive approach to protecting public health.


Subject(s)
COVID-19; SARS-CoV-2; Wastewater; Humans; Wastewater/virology; Wastewater/microbiology; SARS-CoV-2/drug effects; COVID-19/epidemiology; COVID-19/prevention & control; COVID-19/virology; Europe/epidemiology; Surveys and Questionnaires; Sewage/virology; Sewage/microbiology; Drug Resistance, Microbial
8.
BMC Bioinformatics; 25(1): 8, 2024 Jan 03.
Article in English | MEDLINE | ID: mdl-38172657

ABSTRACT

BACKGROUND: The increasing volume and complexity of genomic data pose significant challenges for effective data management and reuse. Public genomic data often undergo similar preprocessing across projects, leading to redundant or inconsistent datasets and inefficient use of computing resources; this is especially pertinent for bioinformaticians engaged in multiple projects. Tools have been created to address the challenges of managing and accessing curated genomic datasets; however, such tools are most beneficial for users who work with specific types of data or favor a particular programming language. There is currently no R-specific solution for efficient data management and versatile data reuse. RESULTS: Here we present ReUseData, an R software tool that overcomes some of the limitations of existing solutions and provides a versatile and reproducible approach to effective data management within R. ReUseData facilitates the transformation of ad hoc data-preprocessing scripts into Common Workflow Language (CWL)-based data recipes, allowing the reproducible generation of curated data files in their generic formats. The data recipes are standardized and self-contained, making them easily portable and reproducible across computing platforms. ReUseData also streamlines the reuse of curated data files and their integration into downstream analysis tools and workflows built on different frameworks. CONCLUSIONS: ReUseData provides a reliable and reproducible approach to genomic data management within the R environment, enhancing the accessibility and reusability of genomic data. The package is available at Bioconductor (https://bioconductor.org/packages/ReUseData/) with additional information on the project website (https://rcwl.org/dataRecipes/).


Subject(s)
Data Management; Genomics; Software; Programming Languages; Workflow
9.
J Proteome Res; 23(1): 418-429, 2024 Jan 05.
Article in English | MEDLINE | ID: mdl-38038272

ABSTRACT

The inherent diversity of approaches in proteomics research has led to a wide range of software solutions for data analysis. These software solutions encompass multiple tools, each employing different algorithms for various tasks such as peptide-spectrum matching, protein inference, quantification, statistical analysis, and visualization. To enable an unbiased comparison of commonly used bottom-up label-free proteomics workflows, we introduce WOMBAT-P, a versatile platform designed for automated benchmarking and comparison. WOMBAT-P simplifies the processing of public data by utilizing the sample and data relationship format for proteomics (SDRF-Proteomics) as input. This feature streamlines the analysis of annotated local or public ProteomeXchange data sets, promoting efficient comparisons among diverse outputs. Through an evaluation using experimental ground truth data and a realistic biological data set, we uncover significant disparities and a limited overlap in the quantified proteins. WOMBAT-P not only enables rapid execution and seamless comparison of workflows but also provides valuable insights into the capabilities of different software solutions. These benchmarking metrics are a valuable resource for researchers in selecting the most suitable workflow for their specific data sets. The modular architecture of WOMBAT-P promotes extensibility and customization. The software is available at https://github.com/wombat-p/WOMBAT-Pipelines.
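SDRF-Proteomics, the input format WOMBAT-P consumes, is a plain tab-separated sample/data relationship table. A minimal peek at its standard columns with pandas (the accession in the filename is a placeholder):

    import pandas as pd

    # SDRF-Proteomics files are tab-separated; these column names follow
    # the published SDRF conventions.
    sdrf = pd.read_csv("PXD000000.sdrf.tsv", sep="\t")
    print(sdrf[["source name", "characteristics[organism]",
                "comment[data file]"]].head())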


Subject(s)
Benchmarking; Proteomics; Workflow; Software; Proteins; Data Analysis
10.
J Proteome Res; 23(7): 2332-2342, 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38787630

ABSTRACT

Here, we present FLiPPR, or FragPipe LiP (limited proteolysis) Processor, a tool that facilitates the analysis of data from limited proteolysis mass spectrometry (LiP-MS) experiments following primary search and quantification in FragPipe. LiP-MS has emerged as a method that can provide proteome-wide information on protein structure and has been applied to a range of biological and biophysical questions. Although LiP-MS can be carried out with standard laboratory reagents and mass spectrometers, analyzing the data can be slow and poses unique challenges compared to typical quantitative proteomics workflows. To address this, we leverage FragPipe and then process its output in FLiPPR. FLiPPR formalizes a specific data imputation heuristic that carefully uses missing data in LiP-MS experiments to report on the most significant structural changes. Moreover, FLiPPR introduces a data merging scheme and a protein-centric multiple hypothesis correction scheme, enabling processed LiP-MS data sets to be more robust and less redundant. These improvements strengthen statistical trends when previously published data are reanalyzed with the FragPipe/FLiPPR workflow. We hope that FLiPPR will lower the barrier for more users to adopt LiP-MS, standardize statistical procedures for LiP-MS data analysis, and systematize output to facilitate eventual larger-scale integration of LiP-MS data.
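As an illustration of what a protein-centric multiple-hypothesis correction can look like (FLiPPR's exact scheme may differ; this is the common Bonferroni-within-protein, FDR-across-proteins pattern, sketched with statsmodels):

    import pandas as pd
    from statsmodels.stats.multitest import multipletests

    # Toy peptide-level results; the columns are illustrative, not FLiPPR's
    # actual output schema.
    pep = pd.DataFrame({
        "protein": ["P1", "P1", "P2", "P2", "P3"],
        "pvalue": [0.001, 0.20, 0.04, 0.03, 0.5],
    })

    # Collapse each protein to its best peptide p-value (Bonferroni within
    # the protein), then control the FDR across proteins.
    per_prot = pep.groupby("protein")["pvalue"].apply(
        lambda p: min(1.0, p.min() * len(p)))
    reject, qvals, _, _ = multipletests(per_prot.values, method="fdr_bh")
    print(pd.DataFrame({"protein": per_prot.index,
                        "q": qvals, "significant": reject}))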


Subject(s)
Mass Spectrometry; Proteolysis; Proteomics; Proteomics/methods; Mass Spectrometry/methods; Software; Proteome/analysis; Workflow; Humans
11.
Stroke; 55(5): 1329-1338, 2024 May.
Article in English | MEDLINE | ID: mdl-38488367

ABSTRACT

BACKGROUND: The relative value of computed tomography (CT) and magnetic resonance imaging (MRI) in acute ischemic stroke (AIS) is debated. In May 2018, our center transitioned from CT to MRI as first-line imaging for AIS. This retrospective study aims to assess the effects of this paradigm change on diagnosis and disability outcomes. METHODS: We compared all consecutive patients with a confirmed diagnosis of AIS admitted to our center during the MRI period (May 2018-August 2022) and an identical number of patients from the preceding CT period (December 2012-April 2018). Univariable and multivariable analyses were performed to evaluate outcomes, including the number and delay of imaging exams, the rate of missed strokes, stroke mimics treated with thrombolysis, undetermined stroke mechanisms, length of hospitalization, and 3-month disability. RESULTS: The median age of the 2972 included patients was 76 years (interquartile range, 65-84), and 46% were female. In the MRI period, 80% underwent MRI as first acute imaging. The proportion of patients requiring a second acute imaging modality for diagnostic ± revascularization reasons increased from 2.1% to 5% (P_unadj < 0.05), but it decreased in the subacute phase from 79.0% to 60.1% (P_adj < 0.05). In thrombolysis candidates, there was a 2-minute increase in door-to-imaging delay (P_adj < 0.05). The rate of initially missed AIS diagnoses was similar (3.8% versus 4.4%, P_adj = 0.32), and thrombolysis in stroke mimics decreased by half (8.6% versus 4.3%; P_adj < 0.05). Rates of unidentified stroke mechanism at hospital discharge were similar (22.8% versus 28.1%; P_adj = 0.99). The length of hospitalization decreased from 9 (interquartile range, 6-14) to 7 (interquartile range, 4-12) days (P_adj = 0.62). Disability at 3 months was similar (common adjusted odds ratio for favorable Rankin shift, 0.98 [95% CI, 0.71-1.36]; P_adj = 0.91), as were mortality and symptomatic intracranial hemorrhage. CONCLUSIONS: A paradigm shift from CT to MRI as first-line imaging for AIS seems feasible in a comprehensive stroke center, with a minimally increased delay to imaging in thrombolysis candidates. MRI was associated with reduced thrombolysis rates for stroke mimics and reduced subacute neuroimaging needs.

12.
BMC Genomics; 25(1): 647, 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38943066

ABSTRACT

BACKGROUND: At a global scale, the SARS-CoV-2 virus did not remain in its initial genotype for a long period of time, with the first global reports of variants of concern (VOCs) in late 2020. Subsequently, genome sequencing has become an indispensable tool for characterizing the ongoing pandemic, particularly for typing SARS-CoV-2 samples obtained from patients or environmental surveillance. For such SARS-CoV-2 typing, various in vitro and in silico workflows exist, yet to date, no systematic cross-platform validation has been reported. RESULTS: In this work, we present the first comprehensive cross-platform evaluation and validation of in silico SARS-CoV-2 typing workflows. The evaluation relies on a dataset of 54 patient-derived samples sequenced with several different in vitro approaches on all relevant state-of-the-art sequencing platforms. Moreover, we present UnCoVar, a robust, production-grade reproducible SARS-CoV-2 typing workflow that outperforms all other tested approaches in terms of precision and recall. CONCLUSIONS: In many ways, the SARS-CoV-2 pandemic has accelerated the development of techniques and analytical approaches. We believe that this can serve as a blueprint for dealing with future pandemics. Accordingly, UnCoVar is easily generalizable towards other viral pathogens and future pandemics. The fully automated workflow assembles virus genomes from patient samples, identifies existing lineages, and provides high-resolution insights into individual mutations. UnCoVar includes extensive quality control and automatically generates interactive visual reports. UnCoVar is implemented as a Snakemake workflow. The open-source code is available under a BSD 2-clause license at github.com/IKIM-Essen/uncovar.
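UnCoVar's rule set is not reproduced here; as a flavor of the lineage-typing step such a workflow automates, a generic Snakemake rule wrapping the pangolin command-line tool might look like this (the paths are hypothetical):

    # Snakemake rule sketch: SARS-CoV-2 lineage assignment with pangolin.
    rule assign_lineage:
        input:
            "assembly/{sample}.fasta",
        output:
            "lineage/{sample}/lineage_report.csv",
        shell:
            "pangolin {input} --outdir lineage/{wildcards.sample}"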


Subject(s)
COVID-19; Genome, Viral; SARS-CoV-2; Workflow; SARS-CoV-2/genetics; Humans; COVID-19/virology; COVID-19/epidemiology; Software; Reproducibility of Results
13.
BMC Genomics; 25(1): 282, 2024 Mar 16.
Article in English | MEDLINE | ID: mdl-38493105

ABSTRACT

BACKGROUND: Blood transcriptomic analysis is widely used to provide a detailed picture of a physiological state, with potential applications in diagnostics and in monitoring the immune response to vaccines. However, multi-species transcriptomic analysis is still a challenge from a technological point of view, and a standardized workflow is urgently needed to allow interspecies comparisons. RESULTS: Here, we propose a single, complete total RNA-Seq workflow to generate reliable transcriptomic data from blood samples from humans and from animals typically used in preclinical models. Blood samples from up to six individuals and four species (rabbit, non-human primate, mouse and human) were extracted and sequenced in triplicate. The workflow was evaluated using different wet-lab and dry-lab criteria, including RNA quality and quantity, library molarity, the number of raw sequencing reads, Phred-score quality, GC content, the performance of ribosomal-RNA and globin depletion, the presence of residual DNA, strandness, the percentage of coding genes, the number of genes expressed, and the presence of a saturation plateau in rarefaction curves. We identified key criteria, and the thresholds they must meet, for validating the transcriptomic workflow. We also generated an automated analysis of the transcriptomic data that streamlines validation of the generated dataset. CONCLUSIONS: Our study developed an end-to-end workflow that should improve standardization and inter-species comparison in blood transcriptomics studies. In the context of vaccine and drug development, RNA sequencing data from preclinical models can be directly compared with clinical data and used to identify potential biomarkers of value for monitoring safety and efficacy.
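The validation criteria above lend themselves to a simple programmatic gate. The Python sketch below shows the pattern only; the threshold values are placeholders, not the validated thresholds from the study:

    # Illustrative QC gate for one blood RNA-seq sample. The metrics mirror
    # criteria named in the abstract; the numbers are placeholders.
    THRESHOLDS = {
        "mean_phred": 30,         # minimum average base quality
        "pct_rrna": 5.0,          # maximum residual rRNA (%)
        "pct_globin": 5.0,        # maximum residual globin (%)
        "genes_detected": 12000,  # minimum number of expressed genes
    }

    def passes_qc(m: dict) -> bool:
        return (m["mean_phred"] >= THRESHOLDS["mean_phred"]
                and m["pct_rrna"] <= THRESHOLDS["pct_rrna"]
                and m["pct_globin"] <= THRESHOLDS["pct_globin"]
                and m["genes_detected"] >= THRESHOLDS["genes_detected"])

    print(passes_qc({"mean_phred": 34, "pct_rrna": 2.1,
                     "pct_globin": 0.8, "genes_detected": 14500}))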


Subject(s)
Gene Expression Profiling; Vaccines; Humans; Animals; Mice; Rabbits; Workflow; Transcriptome; RNA; High-Throughput Nucleotide Sequencing
14.
Mol Biol Evol; 40(12), 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-38091963

ABSTRACT

The burgeoning amount of single-cell data has been accompanied by revolutionary changes to the computational methods used to map, quantify, and analyze the outputs of these cutting-edge technologies. Many researchers are still unable to reap the benefits of these advancements due to a lack of bioinformatics expertise. To address this issue, we present Ursa, an automated single-cell multiomics R package containing six automated single-cell omics and spatial transcriptomics workflows. Ursa allows scientists to carry out post-quantification single- or multiomics analyses in genomics, transcriptomics, epigenetics, proteomics, and immunomics at the single-cell level. It serves as a one-stop analytic solution by providing users with quality-control assessments, multidimensional analyses such as dimension reduction and clustering, and extended analyses such as pseudotime trajectory and gene-set enrichment analyses. Ursa aims to bridge the gap between those with bioinformatics expertise and those without by providing an easy-to-use bioinformatics package, in the hope of accelerating scientists' research. Ursa is freely available at https://github.com/singlecellomics/ursa.


Subject(s)
Multiomics; Software; Genomics/methods; Computational Biology/methods; Single-Cell Analysis
15.
Cancer; 130(1): 68-76, 2024 Jan 01.
Article in English | MEDLINE | ID: mdl-37851511

ABSTRACT

BACKGROUND: Provider and institutional practices have been shown to have a large impact on cancer clinical trial enrollment. Understanding provider perspectives on screening for trial eligibility is necessary to improve enrollment. METHODS: A questionnaire about incentives, barriers, process tools, and infrastructure related to opening trials and referring patients to onsite and offsite trials was administered to diverse stakeholders, including professional societies, advocacy organizations, and industry networks. Descriptive statistics were used to summarize findings. RESULTS: Overall, 693 responses were received, primarily from physicians (42.7%) and nurses (35.6%) employed at hospital health systems (43.7%) and academic centers (36.5%). Approximately half (49.2%) screened all patients for onsite clinical trials with screening typically done by manual chart review (81.9%). The greatest incentive reported for offering trials was providing the best treatment options for patients (67.7%). Contracting and paperwork (48.5%) were the greatest barriers to opening more onsite trials. Offsite referrals were rare. CONCLUSIONS: Screening for trial eligibility is a largely manual and ad hoc process, with screening and referral to offsite trials occurring infrequently. Administrative and infrastructure barriers commonly prevent sites from opening more onsite trials. These findings suggest that automated trial screening tools built into workflows that screen in a site-agnostic manner could result in more frequent trial eligibility screening, especially for offsite trials. With recent momentum, in part in response to the COVID-19 pandemic, to improve clinical trial efficiencies and broaden access and participant diversity, implementing tools to improve screening and referral processes is timely and essential. PLAIN LANGUAGE SUMMARY: There are many factors that contribute to low adult enrollment in cancer clinical trials, but previous research has indicated that provider and institutional barriers are the largest contributors to low cancer clinical trial enrollment. In this survey, we sought to gain insight into cancer clinical trial enrollment practices from the perspective of health care providers such as physicians and nurses. We found that only approximately half of respondents indicated their institution systematically screens their patients for clinical trials and this process is manual and time consuming. Furthermore, we found that providers infrequently search for and refer patients to clinical trials at other sites. Creating better screening methods could improve enrollment in clinical trials.


Subject(s)
Motivation; Neoplasms; Adult; Humans; Early Detection of Cancer; Neoplasms/diagnosis; Neoplasms/therapy; Pandemics; Referral and Consultation; Surveys and Questionnaires; Clinical Trials as Topic
16.
J Synchrotron Radiat; 2024 Sep 01.
Article in English | MEDLINE | ID: mdl-39007823

ABSTRACT

StreamSAXS is a Python-based small- and wide-angle X-ray scattering (SAXS/WAXS) data analysis workflow platform with a graphical user interface (GUI). It aims to provide an interactive and user-friendly tool for the analysis of both batch data files and real-time data streams. Users can easily create customizable workflows through the GUI to meet their specific needs. One characteristic of StreamSAXS is its plug-in framework, which enables developers to extend the built-in workflow tasks. Another is its support for both previously acquired and real-time data sources, allowing StreamSAXS to function as an offline analysis platform or to be integrated into large-scale acquisition systems for end-to-end data management. This paper presents the core design of StreamSAXS and provides user cases demonstrating its use for SAXS/WAXS data analysis in offline and online scenarios.
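The plug-in interface itself is not documented in this abstract, so the class below is purely hypothetical; it only illustrates the kind of self-contained processing task a plug-in framework like StreamSAXS's could register, here an azimuthal average reducing a 2D detector frame to a 1D profile:

    import numpy as np

    class AzimuthalAverageTask:
        """Hypothetical workflow task: reduce a 2D frame to a 1D profile."""

        name = "azimuthal_average"

        def process(self, frame: np.ndarray) -> np.ndarray:
            ny, nx = frame.shape
            y, x = np.indices(frame.shape)
            r = np.hypot(x - nx / 2, y - ny / 2).astype(int)
            # Mean intensity per radial bin.
            sums = np.bincount(r.ravel(), weights=frame.ravel())
            counts = np.bincount(r.ravel())
            return sums / np.maximum(counts, 1)

    profile = AzimuthalAverageTask().process(np.ones((64, 64)))
    print(profile[:5])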

17.
Brief Bioinform; 23(5), 2022 Sep 20.
Article in English | MEDLINE | ID: mdl-36058206

ABSTRACT

Updated, expert-quality knowledge bases are fundamental to biomedical research. A knowledge base established with human participation and subject to multiple inspections is needed to support clinical decision making, especially in the growing field of precision oncology. The number of original publications in this field has risen dramatically with advances in technology and the evolution of in-depth research. Consequently, how to gather and mine these articles accurately and efficiently now requires close consideration. In this study, we present OncoPubMiner (https://oncopubminer.chosenmedinfo.com), a free and powerful system that combines text mining, data structure customisation, publication search with online reading, and project-centred, team-based data collection to form a one-stop 'keyword in, knowledge out' oncology publication mining platform. The platform was constructed by integrating all open-access abstracts from PubMed and full-text articles from PubMed Central, and it is updated daily. OncoPubMiner makes obtaining precision oncology knowledge from scientific articles straightforward; it will assist researchers in efficiently developing structured knowledge base systems and bring us closer to achieving precision oncology goals.
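OncoPubMiner's own query interface is not described beyond the website above; to show the kind of 'keyword in' retrieval it builds on, here is a minimal call to NCBI's public E-utilities (a real, documented endpoint, unrelated to OncoPubMiner's implementation):

    import requests

    # Search PubMed for a keyword and print the first few article IDs.
    r = requests.get(
        "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
        params={"db": "pubmed", "term": "precision oncology",
                "retmode": "json", "retmax": 5},
        timeout=30,
    )
    print(r.json()["esearchresult"]["idlist"])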


Subject(s)
Neoplasms; Data Mining; Humans; Medical Oncology; Precision Medicine; PubMed; Publications
18.
J Transl Med; 22(1): 185, 2024 Feb 20.
Article in English | MEDLINE | ID: mdl-38378565

ABSTRACT

Clinical data mining with predictive models offers significant advantages for re-evaluating and leveraging large amounts of complex clinical real-world data and experimental comparison data for tasks such as risk stratification, diagnosis, classification, and survival prediction. However, its translational application is still limited. One challenge is that clinical requirements and data mining efforts are often not synchronized; another is that predictions mined elsewhere are difficult to apply directly in local medical institutions. Hence, it is necessary to incisively review the translational application of clinical data mining, providing an analytical workflow for developing and validating prediction models that ensures the scientific validity of analytic workflows in response to clinical questions. This review systematically revisits the purpose, process, and principles of clinical data mining and discusses the key causes of the detachment from practice and the misuse of model verification in developing predictive models for research. Based on this, we propose a niche-targeting framework of four principles, Clinical Contextual, Subgroup-Oriented, Confounder- and False Positive-Controlled (CSCF), to guide clinical data mining prior to model development in clinical settings. We hope this review can help guide future research toward personalized predictive models that discover subgroups with varied remedial benefits or risks, so that precision medicine can deliver its full potential.


Subject(s)
Data Mining; Precision Medicine
19.
J Cardiovasc Electrophysiol; 35(8): 1601-1613, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38887842

ABSTRACT

INTRODUCTION: Four-dimensional (4D) intracardiac echocardiography (ICE) is a novel cardiac imaging modality that has been applied to various workflows, including catheter ablation, tricuspid valve repair, and left atrial appendage occlusion (LAAO). The use of this type of advanced ICE imaging may ultimately allow for the replacement of transesophageal echocardiography (TEE) for LAAO, providing comparable imaging quality while eliminating the need for general anesthesia. METHODS: Based on our initial clinical experience with 4D ICE in LAAO, we have developed an optimized workflow for the use of the NUVISION™ 4D ICE Catheter in conjunction with the GE E95 and S70N Ultrasound Systems in LAAO. In this manuscript, we provide a step-by-step guide to using 4D ICE in conjunction with compatible imaging consoles. We have also evaluated the performance of 4D ICE with the NUVISION Ultrasound Catheter versus TEE in one LAAO case and present those results here. RESULTS: In our comparison of 4D ICE using our optimized workflow with TEE in an LAAO case, ICE LAA measurements were similar to those from TEE. The best image resolution was seen via ICE in 2-dimensional and multislice modes (triplane and biplane). The FlexiSlice multiplanar reconstruction tool, which creates an en-face image derived from a 4D volume set, also provided valuable information but yielded slightly lower image quality, as expected for these volume-derived images. For this case, comparable images were obtained with TEE and ICE but with less need to reposition the ICE catheter. CONCLUSION: The use of optimized 4D ICE catheter workflow recommendations allows for efficient LAAO procedures, with higher resolution imaging, comparable to TEE.


Subject(s)
Atrial Appendage; Four-Dimensional Echocardiography; Echocardiography, Transesophageal; Workflow; Atrial Appendage/diagnostic imaging; Atrial Appendage/surgery; Humans; Atrial Fibrillation/surgery; Atrial Fibrillation/diagnostic imaging; Atrial Fibrillation/physiopathology; Predictive Value of Tests; Cardiac Catheterization/instrumentation; Ultrasonography, Interventional; Male
20.
J Cardiovasc Electrophysiol; 35(2): 341-345, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38164063

ABSTRACT

INTRODUCTION: The increasing use of insertable cardiac monitors (ICMs) for long-term continuous arrhythmia monitoring creates a high volume of transmissions and a significant workload for clinics. The ability to remotely reprogram device alert settings without in-office patient visits was recently introduced, but its impact on clinic workflow compared with the previous ICM iteration is unknown. METHODS: The aim of this real-world study was to evaluate the impact of remote reprogramming capabilities on ICM alert burden and on clinic workflow. Deidentified data were obtained from US patients: 19,525 patients who received a LINQ II ICM were propensity score-matched, based on age and reason for monitoring, with 19,525 patients implanted with the LINQ TruRhythm (TR) ICM. RESULTS: After reprogramming, ICM alerts were reduced by 20.5% (p < .001). Compared with patients monitored with LINQ TR, patients with LINQ II had their device reprogrammed sooner after implant and more frequently during follow-up. Adoption of remote programming was projected to yield an annual clinic time savings of 211 h per 100 ICM patients managed. CONCLUSION: These data suggest that utilization of ICM alert reprogramming has increased with remote capabilities, which may reduce clinic and patient burden for ICM follow-up and free clinician time for other valuable patient care activities.


Subject(s)
Arrhythmias, Cardiac; Electrocardiography, Ambulatory; Humans; Arrhythmias, Cardiac/diagnosis; Arrhythmias, Cardiac/therapy; Cardiac Conduction System Disease