Results 1 - 20 of 129,132
1.
BMC Bioinformatics ; 25(1): 177, 2024 May 04.
Article in English | MEDLINE | ID: mdl-38704528

ABSTRACT

BACKGROUND: Hepatitis B virus (HBV) integrates into human chromosomes and can lead to genomic instability and hepatocarcinogenesis. Current tools for HBV integration site detection lack accuracy and stability. RESULTS: This study proposes a deep learning-based method, named ViroISDC, for detecting integration sites. ViroISDC generates corresponding grammar rules and encodes the characteristics of the language data to predict integration sites accurately. Compared with Lumpy, Pindel, Seeksv, and SurVirus, ViroISDC exhibits better overall performance and is less sensitive to sequencing depth and integration sequence length, displaying good reliability, stability, and generality. Further downstream analysis of integration sites detected by ViroISDC reveals the integration patterns and features of HBV. It is observed that HBV integration exhibits specific chromosomal preferences and occurs preferentially in cancerous tissue. Moreover, HBV integration frequency was higher in males than in females, and high-frequency integration sites were more likely to be present on hepatocarcinogenesis- and anti-cancer-related genes, validating the reliability of ViroISDC. CONCLUSIONS: The ViroISDC pipeline exhibits superior precision, stability, and reliability across various datasets when compared to similar software. It is invaluable in exploring HBV infection in the human body, holding significant implications for the diagnosis, treatment, and prognosis assessment of hepatocellular carcinoma (HCC).


Subject(s)
Hepatitis B virus, Virus Integration, Hepatitis B virus/genetics, Humans, Virus Integration/genetics, Software, Deep Learning, Male, Female, Hepatitis B/genetics, Hepatitis B/virology, Liver Neoplasms/genetics, Liver Neoplasms/virology, Computational Biology/methods
2.
Int J Esthet Dent ; 19(2): 140-150, 2024 May 10.
Article in English | MEDLINE | ID: mdl-38726856

ABSTRACT

The present technical article describes a protocol to digitally reproduce the emergence profile of an interim implant prosthesis (IP) and to transfer its macrogeometry into the definitive restoration. The purpose of this protocol was to minimize alterations in the gingival architecture developed during the interim restorative phase of a single implant that could potentially jeopardize its esthetic outcome. The process included obtaining an intraoral scan with the interim IP in situ, a duplicate of this intraoral scan that was used to capture the exact position of the implant, and an extraoral scan of the prosthesis. These data could then be imported into IOS software to create a model where the patients' soft tissue was incorporated with precision, allowing for the fabrication of a definitive crown with an optimal soft tissue adaptation. As there are few articles in the scientific literature that have reported a consistent method to replicate the emergence profile of an interim IP, the present technical article aims to highlight the potential of utilizing the emergence profile of an interim IP created by IOS software.


Subject(s)
Software, Humans, Dental Esthetics, Computer-Aided Design, Crowns, Implant-Supported Dental Prosthesis/methods, Temporary Dental Restoration/methods, Dental Prosthesis Design/methods, Single-Tooth Dental Implants
3.
Health Informatics J ; 30(2): 14604582241249927, 2024.
Article in English | MEDLINE | ID: mdl-38717450

ABSTRACT

A public health registry and intervention was created in response to the Flint water crisis to identify and refer exposed individuals to public health services to ameliorate the deleterious impact of lead exposure. Traditional technology architecture domains, funded scope of work, as well as community input were considered when defining the requirements of the selected solutions. A hybrid software solution was created using Research Electronic Data Capture (REDCap) to deploy an open participant survey and bypass requirements to create user accounts, and Epic to manage deduplication and participant communication and tracking. To bridge the two software systems, REDCap to Epic unidirectional ADT and Documentation Flowsheet interfaces were built to automate creation of subject records in Epic identical to those created in REDCap and to copy key protocol-driving variables from REDCap to Epic. The interfaces were critical to deliver a successful hybrid solution in which the desired features of each software could be leveraged to satisfy specific protocol requirements and community input. Data from the start of survey administration (December 2018) through 31 December 2020 are reported to demonstrate the usefulness of the interfaces.


Subject(s)
Public Health, Registries, Software, Humans, Registries/statistics & numerical data, Public Health/methods, Electronic Health Records, User-Computer Interface, Surveys and Questionnaires
4.
Curr Protoc ; 4(5): e1047, 2024 May.
Article in English | MEDLINE | ID: mdl-38720559

ABSTRACT

Recent advancements in protein structure determination and especially in protein structure prediction techniques have led to the availability of vast amounts of macromolecular structures. However, the accessibility and integration of these structures into scientific workflows are hindered by the lack of standardization among publicly available data resources. To address this issue, we introduced the 3D-Beacons Network, a unified platform that aims to establish a standardized framework for accessing and displaying protein structure data. In this article, we highlight the importance of standardized approaches for accessing protein structure data and showcase the capabilities of 3D-Beacons. We describe four protocols for finding and accessing macromolecular structures from various specialist data resources via 3D-Beacons. First, we describe three scenarios for programmatically accessing and retrieving data using the 3D-Beacons API. Next, we show how to perform sequence-based searches to find structures from model providers. Then, we demonstrate how to search for structures and fetch them directly into a workflow using JalView. Finally, we outline the process of facilitating access to data from providers interested in contributing their structures to the 3D-Beacons Network. © 2024 The Authors. Current Protocols published by Wiley Periodicals LLC. Basic Protocol 1: Programmatic access to the 3D-Beacons API Basic Protocol 2: Sequence-based search using the 3D-Beacons API Basic Protocol 3: Accessing macromolecules from 3D-Beacons with JalView Basic Protocol 4: Enhancing data accessibility through 3D-Beacons.
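As a rough illustration of the programmatic access described in Basic Protocol 1, the sketch below queries a 3D-Beacons-style summary endpoint for a UniProt accession using Python's requests library. The exact URL path and JSON field names are assumptions for illustration, not confirmed API details; consult the 3D-Beacons documentation for the actual contract.

```python
# Minimal sketch: fetch structure summaries for a UniProt accession from a
# 3D-Beacons-style REST endpoint. The base URL, path, and field names below
# are assumptions made for illustration only.
import requests

BASE_URL = "https://www.ebi.ac.uk/pdbe/pdbe-kb/3dbeacons/api"  # assumed base URL

def fetch_structure_summary(uniprot_accession: str) -> list:
    url = f"{BASE_URL}/uniprot/summary/{uniprot_accession}.json"
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    payload = response.json()
    # Each entry is expected to describe one structure from one provider.
    return payload.get("structures", [])

if __name__ == "__main__":
    for entry in fetch_structure_summary("P0DTD1"):
        summary = entry.get("summary", {})
        print(summary.get("provider"), summary.get("model_url"))
```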


Subject(s)
Protein Conformation, Proteins, Proteins/chemistry, Protein Databases, Software
5.
Curr Protoc ; 4(5): e1036, 2024 May.
Article in English | MEDLINE | ID: mdl-38713133

ABSTRACT

Identifying impacted pathways is important because it provides insights into the biology underlying conditions beyond the detection of differentially expressed genes. Because of the importance of such analysis, more than 100 pathway analysis methods have been developed thus far. Despite the availability of many methods, it is challenging for biomedical researchers to learn and properly perform pathway analysis. First, the sheer number of methods makes it challenging to learn and choose the correct method for a given experiment. Second, computational methods require users to be savvy with coding syntax, and comfortable with command-line environments, areas that are unfamiliar to most life scientists. Third, as learning tools and computational methods are typically implemented only for a few species (i.e., human and some model organisms), it is difficult to perform pathway analysis on other species that are not included in many of the current pathway analysis tools. Finally, existing pathway tools do not allow researchers to combine, compare, and contrast the results of different methods and experiments for both hypothesis testing and analysis purposes. To address these challenges, we developed an open-source R package for Consensus Pathway Analysis (RCPA) that allows researchers to conveniently: (1) download and process data from NCBI GEO; (2) perform differential analysis using established techniques developed for both microarray and sequencing data; (3) perform both gene set enrichment, as well as topology-based pathway analysis using different methods that seek to answer different research hypotheses; (4) combine methods and datasets to find consensus results; and (5) visualize analysis results and explore significantly impacted pathways across multiple analyses. This protocol provides many example code snippets with detailed explanations and supports the analysis of more than 1000 species, two pathway databases, three differential analysis techniques, eight pathway analysis tools, six meta-analysis methods, and two consensus analysis techniques. The package is freely available on the CRAN repository. © 2024 The Authors. Current Protocols published by Wiley Periodicals LLC. Basic Protocol 1: Processing Affymetrix microarrays Basic Protocol 2: Processing Agilent microarrays Support Protocol: Processing RNA sequencing (RNA-Seq) data Basic Protocol 3: Differential analysis of microarray data (Affymetrix and Agilent) Basic Protocol 4: Differential analysis of RNA-Seq data Basic Protocol 5: Gene set enrichment analysis Basic Protocol 6: Topology-based (TB) pathway analysis Basic Protocol 7: Data integration and visualization.
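RCPA itself is an R package, but the consensus idea it describes, combining the p-values that different pathway-analysis methods assign to the same pathway, can be pictured in a few lines of Python. The use of Fisher's method and the pathway names below are illustrative assumptions; RCPA offers several meta-analysis and consensus techniques.

```python
# Illustrative sketch (not RCPA itself): combine per-pathway p-values produced
# by several pathway-analysis methods into one consensus p-value using
# Fisher's method. Pathway names and p-values are made-up example data.
from scipy.stats import combine_pvalues

# p-values for the same pathways reported by three hypothetical methods
results = {
    "Cell cycle":    [0.004, 0.012, 0.030],
    "Apoptosis":     [0.20, 0.45, 0.08],
    "p53 signaling": [0.001, 0.003, 0.002],
}

for pathway, pvals in results.items():
    stat, consensus_p = combine_pvalues(pvals, method="fisher")
    print(f"{pathway:15s} consensus p = {consensus_p:.3g}")
```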


Subject(s)
Computational Biology, Software, Humans, Computational Biology/methods, Gene Expression Profiling/methods
6.
Curr Protoc ; 4(5): e1046, 2024 May.
Article in English | MEDLINE | ID: mdl-38717471

ABSTRACT

Whole-genome sequencing is widely used to investigate population genomic variation in organisms of interest. Assorted tools have been independently developed to call variants from short-read sequencing data aligned to a reference genome, including single nucleotide polymorphisms (SNPs) and structural variations (SVs). We developed SNP-SVant, an integrated, flexible, and computationally efficient bioinformatic workflow that predicts high-confidence SNPs and SVs in organisms without benchmarked variants, which are traditionally used for distinguishing sequencing errors from real variants. In the absence of these benchmarked datasets, we leverage multiple rounds of statistical recalibration to increase the precision of variant prediction. The SNP-SVant workflow is flexible, with user options to trade off accuracy for sensitivity. The workflow predicts SNPs and small insertions and deletions using the Genome Analysis ToolKit (GATK) and predicts SVs using the Genome Rearrangement IDentification Software Suite (GRIDSS), and it culminates in variant annotation using custom scripts. A key utility of SNP-SVant is its scalability. Variant calling is a computationally expensive procedure, and thus, SNP-SVant uses a workflow management system with intermediary checkpoint steps to ensure efficient use of resources by minimizing redundant computations and omitting steps where dependent files are available. SNP-SVant also provides metrics to assess the quality of called variants and converts between VCF and aligned FASTA format outputs to ensure compatibility with downstream tools to calculate selection statistics, which are commonplace in population genomics studies. By accounting for both small and large structural variants, users of this workflow can obtain a wide-ranging view of genomic alterations in an organism of interest. Overall, this workflow advances our capabilities in assessing the functional consequences of different types of genomic alterations, ultimately improving our ability to associate genotypes with phenotypes. © 2024 The Authors. Current Protocols published by Wiley Periodicals LLC. Basic Protocol: Predicting single nucleotide polymorphisms and structural variations Support Protocol 1: Downloading publicly available sequencing data Support Protocol 2: Visualizing variant loci using Integrated Genome Viewer Support Protocol 3: Converting between VCF and aligned FASTA formats.
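The VCF-to-aligned-FASTA conversion mentioned in Support Protocol 3 can be pictured with a minimal Python sketch that substitutes single-nucleotide variants from a simplified VCF into a reference sequence. The real workflow's scripts handle indels, ploidy, filtering, and multi-sample VCFs, all of which are omitted here; the sequences and records below are invented.

```python
# Minimal sketch of a VCF -> consensus FASTA substitution for SNPs only.
# Not the SNP-SVant implementation; real conversions must also handle indels,
# ploidy, filters, and multi-sample VCFs.

def apply_snps(reference: str, vcf_lines: list[str]) -> str:
    """Substitute ALT alleles from simple single-nucleotide VCF records."""
    seq = list(reference)
    for line in vcf_lines:
        if line.startswith("#"):
            continue  # skip header lines
        chrom, pos, _id, ref, alt = line.split("\t")[:5]
        pos = int(pos) - 1  # VCF positions are 1-based
        if len(ref) == 1 and len(alt) == 1 and seq[pos] == ref:
            seq[pos] = alt
    return "".join(seq)

reference = "ACGTACGTACGT"
vcf = ["#CHROM\tPOS\tID\tREF\tALT",
       "chr1\t3\t.\tG\tA",
       "chr1\t10\t.\tC\tT"]
print(">sample_consensus")
print(apply_snps(reference, vcf))
```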


Subject(s)
Single Nucleotide Polymorphism, Software, Workflow, Single Nucleotide Polymorphism/genetics, Computational Biology/methods, Genomics/methods, Molecular Sequence Annotation/methods, Whole Genome Sequencing/methods
7.
Protein Sci ; 33(6): e4985, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38717278

ABSTRACT

Inteins are proteins that excise themselves out of host proteins and ligate the flanking polypeptides in an auto-catalytic process called protein splicing. In nature, inteins are either contiguous or split. In the case of split inteins, the two fragments must first form a complex for the splicing to occur. Contiguous inteins have previously been artificially split into two fragments because split inteins allow for applications distinct from those of contiguous ones. Even naturally split inteins have been split at unnatural split sites to obtain fragments with reduced affinity for one another, which are useful for creating conditional inteins or for studying protein-protein interactions. So far, split sites in inteins have been identified heuristically. We developed Int&in, a web server freely available for academic research (https://intein.biologie.uni-freiburg.de) that runs a machine learning model using logistic regression to predict active and inactive split sites in inteins with high accuracy. The model was trained on a dataset of 126 split sites generated using the gp41-1, Npu DnaE and CL inteins and validated using 97 split sites extracted from the literature. Despite the limited data size, the model, which uses various protein structural features as well as sequence conservation information, achieves an accuracy of 0.79 and 0.78 for the training and testing sets, respectively. We envision Int&in will facilitate the engineering of novel split inteins for applications in synthetic and cell biology.
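A schematic of the kind of model described, logistic regression over structural and conservation features that labels candidate split sites as active or inactive, is sketched below with scikit-learn. The feature set and training data are invented placeholders, not the Int&in features or dataset.

```python
# Schematic only: a logistic-regression split-site classifier in the spirit of
# Int&in. Features (e.g., relative solvent accessibility, in-loop flag,
# conservation score) and labels here are invented placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# columns: [relative solvent accessibility, in-loop flag, conservation score]
X = rng.random((126, 3))
y = (0.8 * X[:, 0] + 0.5 * X[:, 1] - 0.6 * X[:, 2]
     + rng.normal(0, 0.2, 126) > 0.4).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```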


Subject(s)
Inteins, Internet, Machine Learning, Protein Splicing, Software, Catalytic Domain
8.
PLoS Comput Biol ; 20(5): e1012024, 2024 May.
Article in English | MEDLINE | ID: mdl-38717988

ABSTRACT

The activation levels of biologically significant gene sets are emerging tumor molecular markers and play an irreplaceable role in tumor research; however, web-based tools for prognostic analyses that use them as tumor molecular markers remain scarce. We developed a web-based tool, PESSA, for survival analysis using gene set activation levels. All data analyses were implemented in R. Activation levels of Molecular Signatures Database (MSigDB) gene sets were assessed using the single-sample gene set enrichment analysis (ssGSEA) method based on data from the Gene Expression Omnibus (GEO), The Cancer Genome Atlas (TCGA), the European Genome-phenome Archive (EGA) and supplementary tables of articles. PESSA was used to perform median and optimal cut-off dichotomous grouping of ssGSEA scores for each dataset, relying on the survival and survminer packages for survival analysis and visualization. PESSA is an open-access web tool for visualizing the results of tumor prognostic analyses using gene set activation levels. A total of 238 datasets from GEO, TCGA, EGA, and supplementary tables of articles, covering 51 cancer types and 13 survival outcome types, were included, and 13,434 tumor-related gene sets were obtained from MSigDB for pre-grouping. Users can obtain the results, including Kaplan-Meier analyses based on the median and optimal cut-off values with accompanying visualization plots, and Cox regression analyses of dichotomous and continuous variables, by selecting the gene set markers of interest. PESSA (https://smuonco.shinyapps.io/PESSA/ or http://robinl-lab.com/PESSA) is a large-scale web-based tumor survival analysis tool covering a large amount of data that creatively uses predefined gene set activation levels as molecular markers of tumors.
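The core analysis PESSA wraps, dichotomizing samples by the median ssGSEA score of a gene set and comparing survival between the two groups, can be sketched in Python with the lifelines package. PESSA itself is an R/Shiny tool built on survival and survminer; the scores, follow-up times, and events below are invented.

```python
# Minimal sketch of median-cutoff survival analysis on a gene-set activation
# score. Not the PESSA implementation; all data here are synthetic.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
score = rng.normal(size=100)           # ssGSEA-like activation scores
time = rng.exponential(24, size=100)   # follow-up time in months
event = rng.integers(0, 2, size=100)   # 1 = event observed, 0 = censored

high = score >= np.median(score)       # median-cutoff dichotomization

km_high, km_low = KaplanMeierFitter(), KaplanMeierFitter()
km_high.fit(time[high], event[high], label="high activation")
km_low.fit(time[~high], event[~high], label="low activation")

result = logrank_test(time[high], time[~high],
                      event_observed_A=event[high],
                      event_observed_B=event[~high])
print("log-rank p-value:", result.p_value)
```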


Subject(s)
Tumor Biomarkers, Computational Biology, Genetic Databases, Internet, Neoplasms, Software, Humans, Neoplasms/genetics, Neoplasms/mortality, Survival Analysis, Tumor Biomarkers/genetics, Tumor Biomarkers/metabolism, Computational Biology/methods, Prognosis, Gene Expression Profiling/methods, Neoplastic Gene Expression Regulation/genetics
9.
PLoS One ; 19(5): e0298192, 2024.
Article in English | MEDLINE | ID: mdl-38717996

ABSTRACT

Area cartograms are map-based data visualizations in which the area of each map region is proportional to the data value it represents. Long utilized in print media, area cartograms have also become increasingly popular online, often accompanying news articles and blog posts. Despite their popularity, there is a dearth of cartogram generation tools accessible to non-technical users unfamiliar with Geographic Information Systems software. Few tools support the generation of contiguous cartograms (i.e., area cartograms that faithfully represent the spatial adjacency of neighboring regions). We thus reviewed existing contiguous cartogram software and compared two web-based cartogram tools: fBlog and go-cart.io. We experimentally evaluated their usability through a user study comprising cartogram generation and analysis tasks. The System Usability Scale was adopted to quantify how participants perceived the usability of both tools. We also collected written feedback from participants to determine the main challenges faced while using the software. Participants generally rated go-cart.io as being more usable than fBlog. Compared to fBlog, go-cart.io offers a greater variety of built-in maps and allows importing data values by file upload. Still, our results suggest that even go-cart.io suffers from poor usability because the graphical user interface is complex and data can only be imported as a comma-separated-values file. We also propose changes to go-cart.io and make general recommendations for web-based cartogram tools to address these concerns.
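The System Usability Scale mentioned above is scored with a fixed formula: odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is multiplied by 2.5 to yield a 0-100 score. A small sketch of that calculation follows; the example responses are made up, not the study's data.

```python
# Standard SUS scoring: 10 items answered on a 1-5 scale.
# Odd items (1-indexed) contribute (response - 1); even items contribute
# (5 - response); the sum is scaled by 2.5 to give a score from 0 to 100.
def sus_score(responses: list[int]) -> float:
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # example participant -> 85.0
```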


Subject(s)
Internet, Software, Humans, Female, Male, Adult, Geographic Information Systems, User-Computer Interface, Young Adult
10.
Sci Rep ; 14(1): 10129, 2024 05 02.
Article in English | MEDLINE | ID: mdl-38698074

ABSTRACT

Artificial Intelligence (AI) systems are becoming widespread in all aspects of society, bringing benefits to the whole economy. There is a growing understanding of the potential benefits and risks of this type of technology. While the benefits are more efficient decision processes and industrial productivity, the risks may include a potential progressive disengagement of human beings in crucial aspects of decision-making. In this respect, a new perspective is emerging that aims at reconsidering the centrality of human beings while reaping the benefits of AI systems to augment rather than replace professional skills: Human-Centred AI (HCAI) is a novel framework that posits that high levels of human control do not contradict high levels of computer automation. In this paper, we investigate the two antipodes, automation vs augmentation, in the context of website usability evaluation. Specifically, we have analyzed whether the level of automation provided by a tool for semi-automatic usability evaluation can support evaluators in identifying usability problems. Three different visualizations, each one corresponding to a different level of automation, ranging from a full-automation approach to an augmentation approach, were compared in an experimental study. We found that a fully automated approach could help evaluators detect a significant number of medium and high-severity usability problems, which are the most critical in a software system; however, it also emerged that it was possible to detect more low-severity usability problems using one of the augmented approaches proposed in this paper.


Subject(s)
Artificial Intelligence, Automation, Humans, Internet, User-Computer Interface, Software
11.
Sci Rep ; 14(1): 10189, 2024 05 03.
Article in English | MEDLINE | ID: mdl-38702352

ABSTRACT

The study aimed to determine the accuracy of diagnosing periodontal conditions using the developed web-based PocketPerio application and to evaluate users' perspectives on the use of PocketPerio. First, 22 third-year dental students (DS3) diagnosed ten cases without PocketPerio (control) and with PocketPerio (test) during a mock examination. Then, 105 DS3, 13 fourth-year dental students (DS4), and 32 senior second-year International Standing Program students (ISP2) used PocketPerio chairside. Statistical analysis was performed using the non-parametric two-tailed Wilcoxon matched-pairs signed-rank test. The null hypothesis that PocketPerio did not increase the accuracy of periodontal diagnoses was rejected at α < 0.01. Periodontal diagnoses made using PocketPerio correlated with those made by periodontics faculty ("gold standard") in all cases. During the mock examination, PocketPerio significantly increased the accuracy of periodontal diagnoses compared to the control (52.73% vs. 13.18%). Chairside, PocketPerio significantly increased the accuracy of primary (100% vs. 40.0%) and secondary (100% vs. 14.25%) periodontal diagnoses compared to the respective controls. Students, regardless of their training year, felt more confident diagnosing periodontal conditions with PocketPerio than with their current tools, provided positive feedback on its features, and suggested avenues for its further development.
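The paired comparison described (diagnostic accuracy with vs. without the application, Wilcoxon matched-pairs signed-rank test) corresponds to a standard call in SciPy. The accuracy values below are invented for illustration, not the study's data.

```python
# Paired two-tailed Wilcoxon signed-rank test, as used to compare diagnostic
# accuracy with vs. without the application. Scores below are invented.
from scipy.stats import wilcoxon

accuracy_without = [10, 20, 0, 30, 10, 20, 10, 0, 20, 10]    # % correct, control
accuracy_with    = [60, 50, 40, 70, 50, 60, 40, 50, 60, 50]  # % correct, with tool

stat, p = wilcoxon(accuracy_without, accuracy_with, alternative="two-sided")
print(f"W = {stat}, p = {p:.4f}")
```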


Subject(s)
Periodontal Diseases, Dental Students, Humans, Periodontal Diseases/diagnosis, Periodontics/education, Dental Education/methods, Female, Male, Software
12.
Sci Rep ; 14(1): 10288, 2024 05 04.
Article in English | MEDLINE | ID: mdl-38704392

ABSTRACT

Ultrasonography (US)-guided fine-needle aspiration cytology (FNAC) is the primary modality for evaluating thyroid nodules. However, in cases of atypia of undetermined significance (AUS) or follicular lesion of undetermined significance (FLUS), supplemental tests are necessary for a definitive diagnosis. Accordingly, we aimed to develop a non-invasive quantification software using the heterogeneity scores of thyroid nodules. This cross-sectional study retrospectively enrolled 188 patients who were categorized into four groups according to their diagnostic classification in the Bethesda system and surgical pathology [II-benign (B) (n = 24); III-B (n = 52); III-malignant (M) (n = 54); V/VI-M (n = 58)]. Heterogeneity scores were derived using an image pixel-based heterogeneity index, utilized as a coefficient of variation (CV) value, and analyzed across all US images. Differences in heterogeneity scores were compared using one-way analysis of variance with Tukey's test. Diagnostic accuracy was determined by calculating the area under the receiver operating characteristic (AUROC) curve. The results of this study indicated significant differences in mean heterogeneity scores between benign and malignant thyroid nodules, except in the comparison between III-M and V/VI-M nodules. Among malignant nodules, the Bethesda classification was not observed to be associated with mean heterogeneity scores. Moreover, there was a positive correlation between heterogeneity scores and the combined diagnostic category, which was based on the Bethesda system and surgical cytology grades (R = 0.639, p < 0.001). AUROC for heterogeneity scores showed the highest diagnostic performance (0.818; cut-off: 30.22% CV value) for differentiating the benign group (normal/II-B/III-B) from the malignant group (III-M/V&VI-M), with a diagnostic accuracy of 72.5% (161/122). Quantitative heterogeneity measurement of US images is a valuable non-invasive diagnostic tool for predicting the likelihood of malignancy in thyroid nodules, including AUS or FLUS.
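The heterogeneity score described, a coefficient of variation of pixel intensities within a nodule region used as the decision variable of an ROC analysis, can be sketched as follows. The pixel data and group sizes are synthetic, and this is not the study's software.

```python
# Sketch of a pixel-based heterogeneity score (coefficient of variation, in %)
# and its use as a decision variable in an ROC analysis. Synthetic data only.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def cv_percent(pixels: np.ndarray) -> float:
    """Coefficient of variation of pixel intensities, as a percentage."""
    return 100.0 * pixels.std() / pixels.mean()

rng = np.random.default_rng(2)
# 40 hypothetical nodules: malignant ones drawn with a wider intensity spread
labels = np.array([0] * 20 + [1] * 20)
scores = np.array([cv_percent(rng.normal(100, 15 if m else 8, size=2500))
                   for m in labels])

auc = roc_auc_score(labels, scores)
fpr, tpr, thresholds = roc_curve(labels, scores)
best = np.argmax(tpr - fpr)  # Youden index as one way to pick a cut-off
print(f"AUROC = {auc:.3f}, suggested CV cut-off = {thresholds[best]:.1f}%")
```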


Subject(s)
Software, Thyroid Nodule, Ultrasonography, Humans, Thyroid Nodule/diagnostic imaging, Thyroid Nodule/pathology, Female, Male, Middle Aged, Ultrasonography/methods, Differential Diagnosis, Adult, Cross-Sectional Studies, Retrospective Studies, Aged, Fine-Needle Biopsy/methods, ROC Curve, Thyroid Neoplasms/diagnostic imaging, Thyroid Neoplasms/pathology, Thyroid Neoplasms/diagnosis
13.
Proc Natl Acad Sci U S A ; 121(23): e2403750121, 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38805269

ABSTRACT

Haplotype-resolved genome assemblies were produced for Chasselas and Ugni Blanc, two heterozygous Vitis vinifera cultivars, by combining high-fidelity long-read sequencing and high-throughput chromosome conformation capture (Hi-C). The telomere-to-telomere full coverage of the chromosomes allowed us to assemble separately the two haplo-genomes of both cultivars and revealed structural variations between the two haplotypes of a given cultivar. The deletions/insertions, inversions, translocations, and duplications provide insight into the evolutionary history and parental relationship among grape varieties. Integration of de novo single long-read sequencing of full-length transcript isoforms (Iso-Seq) yielded a highly improved genome annotation. Given its higher contiguity and the robustness of the Iso-Seq-based annotation, the Chasselas assembly meets the standard to become the annotated reference genome for V. vinifera. Building on these resources, we developed VitExpress, an open interactive transcriptomic platform that provides a genome browser and integrated web tools for expression profiling, and a set of statistical tools (StatTools) for the identification of highly correlated genes. Implementation of the correlation finder tool for MybA1, a major regulator of the anthocyanin pathway, identified candidate genes associated with anthocyanin metabolism, whose expression patterns were experimentally validated as discriminating between black and white grapes. These resources and innovative tools for mining genome-related data are anticipated to foster advances in several areas of grapevine research.
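The correlation-finder idea, ranking genes by how strongly their expression profiles correlate with a regulator such as MybA1 across samples, reduces to a simple computation. The sketch below uses pandas on an invented expression matrix; it is not the VitExpress/StatTools code, and the gene and sample names are placeholders.

```python
# Sketch of a correlation finder: rank genes by Pearson correlation of their
# expression profile with a target gene across samples. Data are invented.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
samples = [f"berry_{i}" for i in range(12)]
expr = pd.DataFrame(rng.normal(size=(200, 12)),
                    index=[f"gene_{i}" for i in range(200)], columns=samples)
expr.loc["MybA1"] = rng.normal(size=12)
expr.loc["UFGT_like"] = expr.loc["MybA1"] * 0.9 + rng.normal(0, 0.3, 12)  # correlated gene

correlations = expr.T.corr()["MybA1"].drop("MybA1").sort_values(ascending=False)
print(correlations.head(5))  # top candidate co-expressed genes
```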


Subject(s)
Plant Genome, Haplotypes, Transcriptome, Vitis, Vitis/genetics, Haplotypes/genetics, Transcriptome/genetics, Molecular Sequence Annotation/methods, Gene Expression Profiling/methods, Software
14.
PLoS One ; 19(5): e0300279, 2024.
Article in English | MEDLINE | ID: mdl-38805433

ABSTRACT

Software engineers post their opinions about various topics on social media, and these opinions can be collectively mined using sentiment analysis. Analyzing these opinions is useful because it provides insight into developers' feedback about various tools and topics. General-purpose sentiment analysis tools do not work well in the software domain because most of them are trained on movie and review datasets. Therefore, efforts are underway to develop domain-specific sentiment analysis tools for the Software Engineering (SE) domain. However, existing domain-specific tools for SE struggle to identify negative and neutral sentiments and cannot be used on all SE datasets. This work presents a hybrid technique, named Fuzzy Ensemble, based on deep learning and fine-tuned BERT models (Bert-Base, Bert-Large, Bert-LSTM, Bert-GRU, and Bert-CNN), adapted as a domain-specific sentiment analysis tool for Community Question Answering datasets. Five variants of BERT fine-tuned on the SE dataset are developed, and an ensemble of these fine-tuned models is formed using fuzzy logic. The trained model is evaluated on four publicly available benchmark datasets, i.e., Stack Overflow, JavaLib, Jira, and Code Review, using various evaluation metrics. The Fuzzy Ensemble model is also compared with state-of-the-art sentiment analysis tools for the software engineering domain, i.e., SentiStrength-SE, Senti4SD, SentiCR, and the Generative Pre-Training Transformer (GPT); the GPT model is fine-tuned by the authors for domain-specific sentiment analysis. The Fuzzy Ensemble model addresses the limitations of existing tools and improves the accuracy of predicting neutral sentiments, even on diverse datasets. It outperforms state-of-the-art tools, achieving a maximum F1-score of 0.883.
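A greatly simplified sketch of the ensemble step follows: each model's class-membership degrees (e.g., softmax outputs of the fine-tuned BERT variants) are aggregated and the final label is obtained by taking the maximum. The aggregation operator, class names, and numbers below are assumptions for illustration; the paper's actual fuzzy-logic combination rules are not reproduced here.

```python
# Simplified sketch of ensembling several classifiers' class-membership degrees
# and defuzzifying with argmax. Not the Fuzzy Ensemble combination rules.
import numpy as np

CLASSES = ["negative", "neutral", "positive"]

# Per-model membership degrees for one sentence (invented numbers).
model_outputs = np.array([
    [0.10, 0.70, 0.20],   # e.g., Bert-Base
    [0.05, 0.55, 0.40],   # e.g., Bert-LSTM
    [0.15, 0.60, 0.25],   # e.g., Bert-CNN
])

aggregated = model_outputs.mean(axis=0)           # simple aggregation operator
prediction = CLASSES[int(np.argmax(aggregated))]  # defuzzification by maximum
print(prediction, aggregated.round(3))
```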


Subject(s)
Fuzzy Logic, Software, Humans, Social Media, Deep Learning
15.
PLoS One ; 19(5): e0303426, 2024.
Article in English | MEDLINE | ID: mdl-38805437

ABSTRACT

This paper aims to extend the applications of the proposed fractional improved Adomian Decomposition method (fIADM) to the fractional-order new coupled Korteweg-de Vries (cKdV) system. This technique is well recognized for its effectiveness in addressing nonlinearities and iteratively handling fractional derivatives. The approximate solutions of the fractional-order new cKdV system are obtained by employing the improved ADM in fractional form. These solutions play a crucial role in designing and optimizing systems in engineering applications where accurate modeling of wave phenomena is essential, including fluid dynamics, plasma physics, nonlinear optics, and other mathematical physics domains. The fractional-order new cKdV system, integrating fractional calculus, enhances accuracy in modeling wave interactions compared to the classical cKdV system. Comparison with exact solutions demonstrates the high accuracy and ease of application of the proposed method. The proposed technique proves effective in solving fractional coupled systems encountered in various fields, including engineering and physics. Numerical results obtained using Mathematica software further verify and demonstrate its efficacy.
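For readers unfamiliar with the method, the core of any Adomian-type decomposition is to expand the solution and the nonlinear term as series, handling the nonlinearity through Adomian polynomials; a standard statement of that expansion, together with the Caputo fractional derivative commonly used in such work (not the paper's specific fIADM formulation), is:

```latex
% Adomian decomposition: solution and nonlinearity expanded in series,
% with the nonlinear term N(u) handled through Adomian polynomials A_n.
u = \sum_{n=0}^{\infty} u_n, \qquad
N(u) = \sum_{n=0}^{\infty} A_n, \qquad
A_n = \frac{1}{n!}\,\frac{d^n}{d\lambda^n}
      \Bigl[ N\Bigl(\textstyle\sum_{k=0}^{\infty}\lambda^{k} u_k\Bigr) \Bigr]_{\lambda=0}.

% Caputo fractional derivative of order \alpha (n-1 < \alpha < n), commonly
% used when casting KdV-type systems in fractional form:
{}^{C}D_t^{\alpha} f(t) =
  \frac{1}{\Gamma(n-\alpha)} \int_0^{t} (t-\tau)^{\,n-\alpha-1} f^{(n)}(\tau)\, d\tau .
```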


Subject(s)
Algorithms, Theoretical Models, Nonlinear Dynamics, Software, Computer Simulation
16.
Database (Oxford) ; 2024, 2024 May 28.
Article in English | MEDLINE | ID: mdl-38805754

ABSTRACT

In the field of complex autoimmune diseases such as systemic lupus erythematosus (SLE), systems immunology approaches have proven invaluable in translational research settings. Large-scale datasets of transcriptome profiling have been collected and made available to the research community in public repositories, but remain poorly accessible and usable by mainstream researchers. Enabling tools and technologies facilitating investigators' interaction with large-scale datasets such as user-friendly web applications could promote data reuse and foster knowledge discovery. Microarray blood transcriptomic data from the LUPUCE cohort (publicly available on Gene Expression Omnibus, GSE49454), which comprised 157 samples from 62 adult SLE patients, were analyzed with the third-generation (BloodGen3) module repertoire framework, which comprises modules and module aggregates. These well-characterized samples corresponded to different levels of disease activity, different types of flares (including biopsy-proven lupus nephritis), different auto-antibody profiles and different levels of interferon signatures. A web application was deployed to present the aggregate-level, module-level and gene-level analysis results from LUPUCE dataset. Users can explore the similarities and heterogeneity of SLE samples, navigate through different levels of analysis, test hypotheses and generate custom fingerprint grids and heatmaps, which may be used in reports or manuscripts. This resource is available via this link: https://immunology-research.shinyapps.io/LUPUCE/. This web application can be employed as a stand-alone resource to explore changes in blood transcript profiles in SLE, and their relation to clinical and immunological parameters, to generate new research hypotheses.


Subject(s)
Systemic Lupus Erythematosus, Transcriptome, Systemic Lupus Erythematosus/genetics, Systemic Lupus Erythematosus/blood, Humans, Internet, Genetic Databases, Gene Expression Profiling/methods, Software
17.
Nat Commun ; 15(1): 4536, 2024 May 28.
Article in English | MEDLINE | ID: mdl-38806453

ABSTRACT

Protein-ligand docking is an established tool in drug discovery and development to narrow down potential therapeutics for experimental testing. However, a high-quality protein structure is required and often the protein is treated as fully or partially rigid. Here we develop an AI system that can predict the fully flexible all-atom structure of protein-ligand complexes directly from sequence information. We find that classical docking methods are still superior, but depend upon having crystal structures of the target protein. In addition to predicting flexible all-atom structures, predicted confidence metrics (plDDT) can be used to select accurate predictions as well as to distinguish between strong and weak binders. The advances presented here suggest that the goal of AI-based drug discovery is one step closer, but there is still a way to go to grasp the complexity of protein-ligand interactions fully. Umol is available at: https://github.com/patrickbryant1/Umol .


Subject(s)
Molecular Docking Simulation, Proteins, Ligands, Proteins/chemistry, Proteins/metabolism, Protein Binding, Drug Discovery/methods, Protein Conformation, Software, Binding Sites
18.
Curr Protoc ; 4(5): e1054, 2024 May.
Article in English | MEDLINE | ID: mdl-38808970

ABSTRACT

RNA sequencing (RNA-seq) has emerged as a powerful tool for assessing genome-wide gene expression, revolutionizing various fields of biology. However, analyzing large RNA-seq datasets can be challenging, especially for students or researchers lacking bioinformatics experience. To address these challenges, we present a comprehensive guide to provide step-by-step workflows for analyzing RNA-seq data, from raw reads to functional enrichment analysis, starting with considerations for experimental design. This is designed to aid students and researchers working with any organism, irrespective of whether an assembled genome is available. Within this guide, we employ various recognized bioinformatics tools to navigate the landscape of RNA-seq analysis and discuss the advantages and disadvantages of different tools for the same task. Our protocol focuses on clarity, reproducibility, and practicality to enable users to navigate the complexities of RNA-seq data analysis easily and gain valuable biological insights from the datasets. Additionally, all scripts and a sample dataset are available in a GitHub repository to facilitate the implementation of the analysis pipeline. © 2024 The Authors. Current Protocols published by Wiley Periodicals LLC. Basic Protocol 1: Analysis of data from a model plant with an available reference genome Basic Protocol 2: Gene ontology enrichment analysis Basic Protocol 3: De novo assembly of data from non-model plants.
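The gene ontology enrichment step (Basic Protocol 2) is, at its core, an over-representation test. The sketch below shows the hypergeometric calculation for a single GO term in Python; the counts are invented, and real analyses test many terms and correct the resulting p-values for multiple testing.

```python
# Over-representation test for one GO term via the hypergeometric distribution.
# Counts are invented; real workflows test many terms and adjust p-values
# (e.g., Benjamini-Hochberg) across all of them.
from scipy.stats import hypergeom

background_genes = 20000   # annotated genes in the background
term_genes = 150           # background genes annotated with this GO term
de_genes = 400             # differentially expressed genes submitted
de_in_term = 12            # DE genes that carry the term

# P(X >= de_in_term) under sampling without replacement
p_value = hypergeom.sf(de_in_term - 1, background_genes, term_genes, de_genes)
print(f"enrichment p = {p_value:.3g}")
```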


Subject(s)
RNA-Seq, RNA-Seq/methods, Computational Biology/methods, RNA Sequence Analysis/methods, Software
19.
Curr Protoc ; 4(5): e1062, 2024 May.
Article in English | MEDLINE | ID: mdl-38775005

ABSTRACT

The architecture and morphology of the intestinal tissue from mice or other small animals are difficult to preserve for histological and molecular analysis due to the fragile nature of this tissue. The intestinal mucosa consists of villi and crypts lined with epithelial cells. In between the epithelial folds extends the lamina propria, a loose connective tissue that contains blood and lymph vessels, fibroblasts, and immune cells. Underneath the mucosa are two layers of contractile smooth muscle and nerves. The tissue experiences significant changes during fixation, which can impair the reliability of histologic analysis. Poor-quality histologic sections are not suitable for quantitative image-based tissue analysis. This article offers a new fixative composed of neutral buffered formalin (NBF) and acetic acid, called FA. This fixative significantly improved the histology of mouse intestinal tissue compared to traditional NBF and enabled precise, reproducible histologic molecular analyses using QuPath software. Algorithmic training of QuPath allows for automated segmentation of intestinal compartments, which can be further interrogated for cellular composition and disease-related changes. © 2024 The Authors. Current Protocols published by Wiley Periodicals LLC. Basic Protocol: Improved preservation of mouse intestinal tissue using a formalin/acetic acid fixative Support Protocol: Quantitative tissue analysis using QuPath.


Subject(s)
Acetic Acid, Fixatives, Formaldehyde, Tissue Fixation, Animals, Mice, Tissue Fixation/methods, Intestinal Mucosa/cytology, Intestines/cytology, Intestines/pathology, Software
20.
Brief Bioinform ; 25(3), 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38770716

ABSTRACT

Temporal RNA-sequencing (RNA-seq) studies of bulk samples provide an opportunity for improved understanding of gene regulation during dynamic phenomena such as development, tumor progression or response to an incremental dose of a pharmacotherapeutic. Moreover, single-cell RNA-seq (scRNA-seq) data implicitly exhibit temporal characteristics because gene expression values recapitulate dynamic processes such as cellular transitions. Unfortunately, temporal RNA-seq data continue to be analyzed by methods that ignore this ordinal structure and yield results that are often difficult to interpret. Here, we present Error Modelled Gene Expression Analysis (EMOGEA), a framework for analyzing RNA-seq data that incorporates measurement uncertainty, while introducing a special formulation for those acquired to monitor dynamic phenomena. This method is specifically suited for RNA-seq studies in which low-count transcripts with small-fold changes lead to significant biological effects. Such transcripts include genes involved in signaling and non-coding RNAs that inherently exhibit low levels of expression. Using simulation studies, we show that this framework down-weights samples that exhibit extreme responses such as batch effects allowing them to be modeled with the rest of the samples and maintain the degrees of freedom originally envisioned for a study. Using temporal experimental data, we demonstrate the framework by extracting a cascade of gene expression waves from a well-designed RNA-seq study of zebrafish embryogenesis and an scRNA-seq study of mouse pre-implantation and provide unique biological insights into the regulation of genes in each wave. For non-ordinal measurements, we show that EMOGEA has a much higher rate of true positive calls and a vanishingly small rate of false negative discoveries compared to common approaches. Finally, we provide two packages in Python and R that are self-contained and easy to use, including test data.


Subject(s)
RNA-Seq, Zebrafish, Animals, Zebrafish/genetics, RNA-Seq/methods, Gene Expression Profiling/methods, Single-Cell Analysis/methods, Mice, RNA Sequence Analysis/methods, Software