Results 1 - 20 of 33
1.
Mol Cell Proteomics ; 23(6): 100777, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38670310

ABSTRACT

Transmembrane (TM) proteins constitute over 30% of the mammalian proteome and play essential roles in mediating cell-cell communication, synaptic transmission, and plasticity in the central nervous system. Many of these proteins, especially the G protein-coupled receptors (GPCRs), are validated or candidate drug targets for therapeutic development for mental diseases, yet their expression profiles are underrepresented in most global proteomic studies. Herein, we establish a brain TM protein-enriched spectral library based on 136 data-dependent acquisition runs acquired from various brain regions of both naïve mice and mental disease models. This spectral library comprises 3043 TM proteins including 171 GPCRs, 231 ion channels, and 598 transporters. Leveraging this library, we analyzed the data-independent acquisition data from different brain regions of two mouse models exhibiting depression- or anxiety-like behaviors. By integrating multiple informatics workflows and library sources, our study significantly expanded the mental stress-perturbed TM proteome landscape, from which a new GPCR regulator of depression was verified by in vivo pharmacological testing. In summary, we provide a high-quality mouse brain TM protein spectral library to largely increase the TM proteome coverage in specific brain regions, which would catalyze the discovery of new potential drug targets for the treatment of mental disorders.


Subjects
Brain , Disease Models, Animal , Mental Disorders , Mice, Inbred C57BL , Proteome , Proteomics , Animals , Proteome/metabolism , Brain/metabolism , Proteomics/methods , Mice , Mental Disorders/metabolism , Membrane Proteins/metabolism , Male , G Protein-Coupled Receptors/metabolism
2.
Mol Biol Evol ; 41(4)2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38507648

ABSTRACT

Population genomic analyses such as inference of population structure and identification of signatures of selection usually involve the application of a plethora of tools. The installation of tools and their dependencies, data transformation, or a series of data preprocessing steps in a particular order sometimes makes the analyses challenging. While the usage of container-based technologies has significantly resolved the problems associated with the installation of tools and their dependencies, population genomic analyses requiring multistep pipelines or complex data transformation can be greatly facilitated by the application of workflow management systems such as Nextflow and Snakemake. Here, we present scalepopgen, a collection of fully automated workflows that can carry out widely used population genomic analyses on biallelic single nucleotide polymorphism data stored in either variant calling format files or plink-generated binary files. scalepopgen is developed in Nextflow and can be run locally or on high-performance computing systems using either Conda, Singularity, or Docker. The automated workflow includes procedures such as (i) filtering of individuals and genotypes; (ii) principal component analysis and admixture analysis with identification of optimal K-values; (iii) running TreeMix analysis with or without bootstrapping and migration edges, followed by identification of an optimal number of migration edges; and (iv) implementing single-population and pairwise population comparison-based procedures to identify genomic signatures of selection. The pipeline uses various open-source tools; additionally, several Python and R scripts are provided to collect and visualize the results. The tool is freely available at https://github.com/Popgen48/scalepopgen.
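The genotype-filtering step (i) can be pictured with a tiny pure-Python sketch; the 0/1/2 genotype coding, the thresholds, and the SNP names below are illustrative only and are not scalepopgen's actual defaults:

```python
# Illustrative sketch (not scalepopgen code): per-site filtering on a
# matrix of biallelic genotypes coded 0/1/2, with None for missing calls.

def site_passes(genotypes, max_missing=0.1, min_maf=0.05):
    """Keep a site if its missing-call rate and minor allele frequency
    meet the (hypothetical) thresholds."""
    n = len(genotypes)
    called = [g for g in genotypes if g is not None]
    missing_rate = 1 - len(called) / n
    if missing_rate > max_missing or not called:
        return False
    # Alternate-allele frequency among called genotypes.
    alt_freq = sum(called) / (2 * len(called))
    maf = min(alt_freq, 1 - alt_freq)
    return maf >= min_maf

sites = {
    "snp1": [0, 1, 2, 1, 0, 1, 2, 0, 1, 1],        # common variant: kept
    "snp2": [0, 0, 0, 0, 0, 0, 0, 0, 0, 1],        # MAF exactly 0.05: kept
    "snp3": [None, None, 0, 1, 2, 1, 0, 1, 2, 0],  # 20% missing: dropped
}
kept = [name for name, g in sites.items() if site_passes(g)]
```

The real workflows apply this kind of filter (among others) to VCF or plink files before the downstream analyses.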


Subjects
Metagenomics , Software , Humans , Workflow , Genomics/methods , Computational Biology/methods
3.
BMC Bioinformatics ; 25(1): 200, 2024 May 27.
Article in English | MEDLINE | ID: mdl-38802733

ABSTRACT

BACKGROUND: The initial version of SEDA assists life science researchers without programming skills in the preparation of DNA and protein sequence FASTA files for multiple bioinformatics applications. However, it lacks a command-line interface for more advanced users and does not allow the creation of automated analysis pipelines. RESULTS: The present paper describes the updates of the new SEDA release, including the addition of a complete command-line interface, new functionalities like gene annotation, a framework for automated pipelines, and improved integration in Linux environments. CONCLUSION: SEDA is an open-source Java application that can be installed using the different distributions available (https://www.sing-group.org/seda/download.html) as well as through a Docker image (https://hub.docker.com/r/pegi3s/seda). It is released under a GPL-3.0 license, and its source code is publicly accessible on GitHub (https://github.com/sing-group/seda). The software version at the time of submission is archived at Zenodo (version v1.6.0, http://doi.org/10.5281/zenodo.10201605).
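As a flavor of the kind of FASTA preparation SEDA automates, here is a minimal pure-Python sketch of sequence filtering and header renaming; the parsing helper, the length threshold, and the header scheme are invented for illustration and are not SEDA's actual options:

```python
# Toy sketch of one FASTA-preparation step of the sort SEDA automates
# (length filtering plus header renaming). Not SEDA code.

def parse_fasta(text):
    """Return a list of (header, sequence) pairs from FASTA text."""
    records, header, seq = [], None, []
    for line in text.strip().splitlines():
        if line.startswith(">"):
            if header is not None:
                records.append((header, "".join(seq)))
            header, seq = line[1:], []
        else:
            seq.append(line.strip())
    if header is not None:
        records.append((header, "".join(seq)))
    return records

def prepare(text, min_len=10, prefix="seq"):
    """Drop short sequences and emit renamed, uniform headers."""
    kept = [(h, s) for h, s in parse_fasta(text) if len(s) >= min_len]
    return "\n".join(f">{prefix}_{i}\n{s}" for i, (h, s) in enumerate(kept, 1))

fasta = """>geneA partial
ATGGCGTACGTTAGC
>geneB fragment
ATG
"""
result = prepare(fasta)
```

With the new release, operations like this can be chained from the command line into automated pipelines rather than run one by one in the GUI.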


Subjects
Computational Biology , Software , Computational Biology/methods , Data Analysis
4.
BMC Bioinformatics ; 25(1): 11, 2024 Jan 04.
Article in English | MEDLINE | ID: mdl-38177985

ABSTRACT

BACKGROUND: Machine learning (ML) has a rich history in structural bioinformatics, and modern approaches, such as deep learning, are revolutionizing our knowledge of the subtle relationships between biomolecular sequence, structure, function, dynamics and evolution. As with any advance that rests upon statistical learning approaches, the recent progress in biomolecular sciences is enabled by the availability of vast volumes of sufficiently variable data. To be useful, such data must be well-structured, machine-readable, intelligible and manipulable. These and related requirements pose challenges that become especially acute at the computational scales typical in ML. Furthermore, in structural bioinformatics such data generally relate to protein three-dimensional (3D) structures, which are inherently more complex than sequence-based data. A significant and recurring challenge concerns the creation of large, high-quality, openly accessible datasets that can be used for specific training and benchmarking tasks in ML pipelines for predictive modeling projects, along with reproducible splits for training and testing. RESULTS: Here, we report 'Prop3D', a platform that allows for the creation, sharing and extensible reuse of libraries of protein domains, featurized with biophysical and evolutionary properties that can range from detailed, atomically resolved physicochemical quantities (e.g., electrostatics) to coarser, residue-level features (e.g., phylogenetic conservation). As a community resource, we also supply a 'Prop3D-20sf' protein dataset, obtained by applying our approach to CATH. We have developed and deployed the Prop3D framework, both in the cloud and on local HPC resources, to systematically and reproducibly create comprehensive datasets via the Highly Scalable Data Service (HSDS). Our datasets are freely accessible via a public HSDS instance, or they can be used with accompanying Python wrappers for popular ML frameworks.
CONCLUSION: Prop3D and its associated Prop3D-20sf dataset can be of broad utility in at least three ways. Firstly, the Prop3D workflow code can be customized and deployed on various cloud-based compute platforms, with scalability achieved largely by saving the results to distributed HDF5 files via HSDS. Secondly, the linked Prop3D-20sf dataset provides a hand-crafted, already-featurized dataset of protein domains for 20 highly populated CATH families; importantly, provision of this pre-computed resource can aid the more efficient development (and reproducible deployment) of ML pipelines. Thirdly, Prop3D-20sf's construction explicitly takes into account (in creating datasets and data-splits) the enigma of 'data leakage', stemming from the evolutionary relationships between proteins.
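The 'data leakage' concern can be made concrete with a small sketch of a group-aware train/test split, in which related domains (here grouped by an invented cluster label) never straddle the split; this mirrors the idea behind Prop3D-20sf's reproducible splits but is not its actual code:

```python
# Sketch of a leakage-aware split: domains are assigned to train/test by
# *group* (e.g. an evolutionary cluster), so related proteins never end
# up on both sides. Group labels and the test fraction are invented.
import random

def grouped_split(domain_to_group, test_fraction=0.25, seed=0):
    groups = sorted(set(domain_to_group.values()))
    rng = random.Random(seed)  # fixed seed -> reproducible split
    rng.shuffle(groups)
    n_test = max(1, int(len(groups) * test_fraction))
    test_groups = set(groups[:n_test])
    train = [d for d, g in domain_to_group.items() if g not in test_groups]
    test = [d for d, g in domain_to_group.items() if g in test_groups]
    return train, test

domains = {"d1": "clusterA", "d2": "clusterA", "d3": "clusterB",
           "d4": "clusterC", "d5": "clusterB"}
train, test = grouped_split(domains)
```

A naive per-domain random split would let near-identical homologs leak between train and test, inflating benchmark scores; splitting by group avoids that.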


Subjects
Computational Biology , Proteins , Humans , Phylogeny , Computational Biology/methods , Workflow , Machine Learning
5.
J Microsc ; 295(2): 93-101, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38532662

ABSTRACT

As microscopy diversifies and becomes ever more complex, the quantification of microscopy images has emerged as a major roadblock for many researchers. All researchers must face certain challenges in turning microscopy images into answers, independent of their scientific question and the images they have generated. Challenges may arise at many stages throughout the analysis process, including handling of the image files, image pre-processing, object finding or measurement, and statistical analysis. While the exact solution required for each obstacle will be problem-specific, by keeping analysis in mind, optimizing data quality, understanding tools and tradeoffs, breaking workflows and data sets into chunks, talking to experts, and thoroughly documenting what has been done, analysts at any experience level can learn to overcome these challenges and create better and easier image analyses.

6.
Crit Rev Food Sci Nutr ; : 1-22, 2024 Jan 11.
Article in English | MEDLINE | ID: mdl-38206576

ABSTRACT

Over the past decade, a remarkable surge in the development of functional nano-delivery systems loaded with bioactive compounds for healthcare has been witnessed. Notably, the demanding requirements of high solubility, prolonged circulation, high tissue penetration capability, and strong targeting ability of nanocarriers have posed interdisciplinary research challenges to the community. While extensive experimental studies have been conducted to understand the construction of nano-delivery systems and their metabolic behavior in vivo, less is known about the molecular mechanisms and kinetic pathways underlying their metabolism in vivo, and effective means for high-throughput screening are lacking. Molecular dynamics (MD) simulation techniques provide a reliable tool for investigating the design of nano-delivery carriers encapsulating these functional ingredients, elucidating the synthesis, translocation, and delivery of nanocarriers. This review introduces the basic MD principles and discusses how to apply MD simulation to design nanocarriers, evaluate their ability to adhere to or cross the gastrointestinal mucosa, and regulate plasma proteins in vivo. Moreover, we present the critical role of MD simulation in developing delivery systems for precise nutrition and prospects for the future. This review aims to provide insights into the implications of MD simulation techniques for designing and optimizing nano-delivery systems in the healthcare food industry.

7.
J Comput Aided Mol Des ; 38(1): 24, 2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39014286

ABSTRACT

Molecular dynamics (MD) simulation is a powerful tool for characterizing ligand-protein conformational dynamics and offers significant advantages over docking and other rigid structure-based computational methods. However, setting up, running, and analyzing MD simulations continues to be a multi-step process, making it cumbersome to assess a library of ligands in a protein binding pocket using MD. We present an automated workflow that streamlines setting up, running, and analyzing Desmond MD simulations for protein-ligand complexes using machine learning (ML) models. The workflow takes a library of pre-docked ligands and a prepared protein structure as input, sets up and runs MD with each protein-ligand complex, and generates simulation fingerprints for each ligand. Simulation fingerprints (SimFP) capture protein-ligand compatibility, including the stability of different ligand-pocket interactions and other useful metrics that enable easy rank-ordering of the ligand library for pocket optimization. SimFPs from a ligand library are used to build and deploy ML models that predict binding assay outcomes and automatically infer important interactions. Unlike relative free-energy methods that are constrained to assessing ligands with high chemical similarity, ML models based on SimFPs can accommodate diverse ligand sets. We present two case studies on how SimFP helps delineate structure-activity relationship (SAR) trends and explain potency differences across matched-molecular pairs of (1) cyclic peptides targeting PD-L1 and (2) small molecule inhibitors targeting CDK9.
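A toy sketch of the SimFP idea — summarising a trajectory as per-interaction persistence fractions and rank-ordering ligands — might look as follows; the interaction names, frame data, and aggregate score are all invented for illustration and do not reflect the paper's actual fingerprint definition:

```python
# Hypothetical SimFP-style sketch: each MD trajectory is reduced to a
# vector giving the fraction of frames in which each tracked
# ligand-pocket interaction is present.

def simulation_fingerprint(frames, interactions):
    """Per-interaction persistence fractions over an MD trajectory."""
    return [sum(i in f for f in frames) / len(frames) for i in interactions]

interactions = ["hbond:Asp86", "pi-stack:Phe103", "salt-bridge:Lys12"]
trajectories = {
    "lig1": [{"hbond:Asp86", "pi-stack:Phe103"}] * 8 + [set()] * 2,
    "lig2": [{"hbond:Asp86"}] * 5 + [set()] * 5,
}
fingerprints = {name: simulation_fingerprint(frames, interactions)
                for name, frames in trajectories.items()}
# Rank ligands by mean interaction persistence (one crude choice of score).
ranked = sorted(fingerprints, key=lambda n: -sum(fingerprints[n]))
```

Vectors like these are the kind of fixed-length feature on which simple ML models can be trained to predict assay outcomes across chemically diverse ligands.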


Subjects
Machine Learning , Molecular Dynamics Simulation , Protein Binding , Proteins , Ligands , Proteins/chemistry , Proteins/metabolism , Binding Sites , Molecular Docking Simulation , Protein Conformation , Workflow , Humans , Drug Design , Software
8.
J Biomed Inform ; 154: 104647, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38692465

ABSTRACT

OBJECTIVE: To use software, datasets, and data formats in the domain of Infectious Disease Epidemiology as a test collection to evaluate a novel M1 use case, which we introduce in this paper. M1 is a machine that upon receipt of a new digital object of research exhaustively finds all valid compositions of it with existing objects. METHOD: We implemented a data-format-matching-only M1 using exhaustive search, which we refer to as M1DFM. We then ran M1DFM on the test collection and used error analysis to identify needed semantic constraints. RESULTS: Precision of M1DFM search was 61.7%. Error analysis identified needed semantic constraints and needed changes in handling of data services. Most semantic constraints were simple, but one data format was sufficiently complex to be practically impossible to represent semantic constraints over, from which we conclude limitatively that software developers will have to meet the machines halfway by engineering software whose inputs are sufficiently simple that their semantic constraints can be represented, akin to the simple APIs of services. We summarize these insights as M1-FAIR guiding principles for composability and suggest a roadmap for progressively capable devices in the service of reuse and accelerated scientific discovery. CONCLUSION: Algorithmic search of digital repositories for valid workflow compositions has potential to accelerate scientific discovery but requires a scalable solution to the problem of knowledge acquisition about semantic constraints on software inputs. Additionally, practical limitations on the logical complexity of semantic constraints must be respected, which has implications for the design of software.
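The data-format-matching search at the heart of M1DFM can be sketched as an exhaustive scan over output/input format pairs; the repository objects and format names below are hypothetical, and the real machine additionally needs the semantic constraints discussed above:

```python
# Minimal sketch of data-format-matching-only composition (the M1DFM
# idea): a new digital object composes with an existing one whenever
# some output format of the new object equals some input format of the
# other. Objects and formats are invented for illustration.
from itertools import product

def compositions(new_obj, repository):
    found = []
    for other in repository:
        for out_fmt, in_fmt in product(new_obj["outputs"], other["inputs"]):
            if out_fmt == in_fmt:
                found.append((new_obj["name"], other["name"], out_fmt))
    return found

repo = [
    {"name": "aligner", "inputs": ["fasta"], "outputs": ["sam"]},
    {"name": "caller", "inputs": ["bam"], "outputs": ["vcf"]},
]
new = {"name": "converter", "inputs": ["sam"], "outputs": ["bam"]}
matches = compositions(new, repo)
```

Matching on format alone is exactly what produces the false positives behind the reported 61.7% precision: two tools can share a file format yet be semantically incompatible.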


Subjects
Software , Humans , Semantics , Machine Learning , Algorithms , Databases, Factual
9.
Microsc Microanal ; 2024 Jun 21.
Article in English | MEDLINE | ID: mdl-38905154

ABSTRACT

There has been increasing interest in atom probe tomography (APT) to characterize hydrated and biological materials. A major benefit of APT compared to microscopy techniques more commonly used in biology is its combination of outstanding three-dimensional (3D) spatial resolution and mass sensitivity. APT has already been successfully used to characterize biominerals, revealing key structural information at the atomic scale; however, there are many challenges inherent to the analysis of soft hydrated materials. New preparation protocols, often involving specimen preparation and transfer at cryogenic temperature, enable APT analysis of hydrated materials and have the potential to enable 3D atomic-scale characterization of biological materials in the near-native hydrated state. In this study, samples of pure water at the tips of tungsten needle specimens were prepared at room temperature by graphene encapsulation. A comparative study was conducted in which specimens were transferred at either room temperature or cryogenic temperature and analyzed by APT while varying the flight path and pulsing mode. The differences between the analysis workflows are presented along with recommendations for future studies, and the compatibility between graphene coating and cryogenic workflows is demonstrated.

10.
Sensors (Basel) ; 24(7)2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38610587

ABSTRACT

This paper describes a novel architecture that aims to create a template for the implementation of an IT platform supporting the deployment and integration of the different digital twin subsystems that compose a complex urban intelligence system. In more detail, the proposed Smart City IT architecture has the following main purposes: (i) facilitating the deployment of the subsystems in a cloud environment; (ii) effectively storing, integrating, managing, and sharing the huge amount of heterogeneous data acquired and produced by each subsystem, using a data lake; (iii) supporting data exchange and sharing; (iv) managing and executing workflows, to automatically coordinate and run processes; and (v) providing and visualizing the required information. A prototype of the proposed IT solution was implemented leveraging open-source frameworks and technologies, to test its functionalities and performance. The results of tests performed in real-world settings confirmed that the proposed architecture can efficiently and easily support the deployment and integration of heterogeneous subsystems, allowing them to share and integrate their data and to select, extract, and visualize the information required by a user. The architecture also promotes integration with other external systems and supports the definition and execution of workflows that orchestrate the various subsystems involved in complex analyses and processes.

11.
Int Nurs Rev ; 2024 Jul 08.
Article in English | MEDLINE | ID: mdl-38973347

ABSTRACT

AIM: This research examines the effects of artificial intelligence (AI)-based decision support systems (DSS) on the operational processes of nurses in critical care units (CCU) located in Amman, Jordan. BACKGROUND: The deployment of AI technology within the healthcare sector presents substantial opportunities for transforming patient care, with a particular emphasis on the field of nursing. METHOD: This paper examines how AI-based DSS affect CCU nursing workflows in Amman, Jordan, using a cross-sectional analysis. A study group of 112 registered nurses was enlisted throughout a research period spanning one month. Data were gathered using surveys that specifically examined several facets of nursing workflows, the employment of AI, encountered problems, and the sufficiency of training. RESULT: The findings indicate a varied demographic composition among the participants, with notable instances of AI technology adoption being reported. Nurses have the perception that there are favorable effects on time management, patient monitoring, and clinical decision-making. However, they continue to face persistent hurdles, including insufficient training, concerns regarding data privacy, and technical difficulties. DISCUSSION: The study highlights the significance of thorough training programs and supportive mechanisms to improve nurses' involvement with AI technologies and maximize their use in critical care environments. Although there are differing degrees of contentment with existing AI systems, there is a general agreement on the necessity of ongoing enhancement and fine-tuning to optimize their efficacy in enhancing patient care results. CONCLUSION AND IMPLICATIONS FOR NURSING AND/OR HEALTH POLICY: This research provides essential knowledge about the intricacies of incorporating AI into nursing practice, highlighting the significance of tackling obstacles to guarantee the ethical and efficient use of AI technology in healthcare.

13.
Chemosphere ; 360: 142436, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38797214

ABSTRACT

This study sought to develop a non-targeted workflow using high-resolution mass spectrometry (HRMS) to investigate previously unknown PFAS in consumer food packaging samples. Samples composed of various materials for different food types were subjected to methanolic extraction, controlled migration with food simulants, and the total oxidizable precursor (TOP) assay. The developed HRMS workflow utilized several signatures unique to PFAS compounds: negative mass defect, diagnostic breakdown structures, and retention time prediction. Potential PFAS features were identified in all packaging studied, regardless of food and material types. Five tentatively identified compounds were confirmed with analytical standards: 6:2 fluorotelomer phosphate diester (6:2 diPAP) and one of its intermediate breakdown products, 2H-perfluoro-2-octenoic acid (6:2 FTUCA), perfluoropentadecanoic acid (PFPeDA), perfluorohexadecanoic acid (PFHxDA) and perfluorooctadecanoic acid (PFOcDA). Longer perfluorocarboxylic acids, including C17 and C19 to C24, were also found within a foil sample. Concentrations of 6:2 FTUCA ranged from 0.78 to 127 ng g⁻¹ in methanolic extracts and up to 6 ng g⁻¹ in food simulant after a 240 h migration test. These results demonstrate the prevalence of both emerging and legacy PFAS in food packaging samples and highlight the usefulness of non-targeted tools to identify PFAS not included in targeted methods.
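The negative mass defect signature lends itself to a back-of-envelope sketch: fluorine-rich ions sit just below an integer mass, while CH-rich ions sit well above it. The m/z values below are approximate monoisotopic [M-H]- masses used only for illustration:

```python
# Back-of-envelope illustration of the negative mass defect filter used
# to prioritise PFAS candidates in non-targeted HRMS screening.

def mass_defect(mz):
    """Signed distance of an m/z value from the nearest integer mass."""
    return mz - round(mz)

peaks = {
    "PFOA [M-H]-": 412.9664,        # perfluorinated: negative mass defect
    "oleic acid [M-H]-": 281.2486,  # CH-rich lipid: large positive defect
}
pfas_candidates = [name for name, mz in peaks.items() if mass_defect(mz) < 0]
```

In a real workflow this crude cut would be one filter among several, combined with the diagnostic fragments and retention time prediction mentioned above.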


Subjects
Fluorocarbons , Food Packaging , Fluorocarbons/analysis , Food Contamination/analysis , Mass Spectrometry
14.
Stud Health Technol Inform ; 316: 1401-1405, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39176642

ABSTRACT

Established cardiovascular risk scores are typically based on items from structured clinical data such as age, sex, or smoking status. Cardiovascular risk is also assessed from physiological measurements such as electrocardiography (ECG). Although ECGs are standard diagnostic tools in clinical care, they are scarcely integrated into clinical information systems. To overcome this roadblock, we propose the integration of an automatic workflow for ECG processing that uses the DICOMweb interface to transfer ECGs in a standardised way. We implemented the workflow using non-commercial software and tested it with about 150,000 resting ECGs acquired in a maximum-care hospital. We employed Orthanc as the DICOM server and AcuWave as the signal-processing application, and implemented a fully automated workflow that reads the ECG data and computes heart rate-related parameters. The workflow was evaluated on off-the-shelf hardware and achieved an average run time of approximately 40 ms for processing a single ECG.
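As an illustration of the heart rate-related parameters such a workflow might compute, here is a small sketch deriving mean heart rate and RMSSD from R-peak times; the peak times are fabricated example data, and AcuWave's actual algorithms are not shown:

```python
# Illustrative sketch (not the paper's implementation): two standard
# heart rate-related parameters computed from detected R-peak times.
import math

def heart_rate_params(r_peak_times_s):
    """Mean heart rate (bpm) and RMSSD (ms) from R-peak times in seconds."""
    rr = [b - a for a, b in zip(r_peak_times_s, r_peak_times_s[1:])]
    mean_hr_bpm = 60 / (sum(rr) / len(rr))
    # RMSSD: root mean square of successive R-R interval differences.
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    rmssd_ms = 1000 * math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return mean_hr_bpm, rmssd_ms

peaks = [0.0, 0.8, 1.62, 2.40, 3.22]  # fabricated R-peak times, seconds
hr, rmssd = heart_rate_params(peaks)
```

Computations this light-weight are consistent with the reported ~40 ms per-ECG run time once the signal has been fetched via DICOMweb.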


Subjects
Electrocardiography , Software , Humans , Signal Processing, Computer-Assisted , Workflow , Systems Integration , Electronic Health Records
15.
Mol Oncol ; 18(3): 606-619, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38158740

ABSTRACT

Molecular subtyping is essential to infer tumor aggressiveness and predict prognosis. In practice, tumor profiling requires in-depth knowledge of bioinformatics tools involved in the processing and analysis of the generated data. Additionally, data incompatibility (e.g., microarray versus RNA sequencing data) and technical and uncharacterized biological variance between training and test data can pose challenges in classifying individual samples. In this article, we provide a roadmap for implementing bioinformatics frameworks for molecular profiling of human cancers in a clinical diagnostic setting. We describe a framework for integrating several methods for quality control, normalization, batch correction, classification and reporting, and develop a use case of the framework in breast cancer.


Subjects
Breast Neoplasms , Gene Expression Profiling , Humans , Female , Gene Expression Profiling/methods , Breast Neoplasms/diagnosis , Breast Neoplasms/genetics , RNA , Computational Biology/methods , Gene Expression Regulation, Neoplastic
16.
Gigascience ; 13, 2024 01 02.
Article in English | MEDLINE | ID: mdl-38896539

ABSTRACT

BACKGROUND: Scientific workflow systems are increasingly popular for expressing and executing complex data analysis pipelines over large datasets, as they offer reproducibility, dependability, and scalability of analyses by automatic parallelization on large compute clusters. However, implementing workflows is difficult due to the involvement of many black-box tools and the deep infrastructure stack necessary for their execution. Simultaneously, user-supporting tools are rare, and the number of available examples is much lower than in classical programming languages. RESULTS: To address these challenges, we investigate the efficiency of large language models (LLMs), specifically ChatGPT, to support users when dealing with scientific workflows. We performed 3 user studies in 2 scientific domains to evaluate ChatGPT for comprehending, adapting, and extending workflows. Our results indicate that LLMs efficiently interpret workflows but achieve lower performance for exchanging components or purposeful workflow extensions. We characterize their limitations in these challenging scenarios and suggest future research directions. CONCLUSIONS: Our results show a high accuracy for comprehending and explaining scientific workflows while achieving a reduced performance for modifying and extending workflow descriptions. These findings clearly illustrate the need for further research in this area.


Subjects
Workflow , Programming Languages , Software , Computational Biology/methods , Humans
17.
J Am Med Inform Assoc ; 31(3): 631-639, 2024 02 16.
Article in English | MEDLINE | ID: mdl-38164994

ABSTRACT

INTRODUCTION: This study aimed to identify barriers and facilitators to the implementation of family cancer history (FCH) collection tools in clinical practices and community settings by assessing clinicians' perceptions of implementing a chatbot interface to collect FCH information and provide personalized results to patients and providers. OBJECTIVES: By identifying design and implementation features that facilitate tool adoption and integration into clinical workflows, this study can inform future FCH tool development and adoption in healthcare settings. MATERIALS AND METHODS: Quantitative data were collected using a survey to evaluate the implementation outcomes of acceptability, adoption, appropriateness, feasibility, and sustainability of the chatbot tool for collecting FCH. Semistructured interviews were conducted to gather qualitative data on respondents' experiences using the tool and recommendations for enhancements. RESULTS: We completed data collection with 19 providers (n = 9, 47%), clinical staff (n = 5, 26%), administrators (n = 4, 21%), and other staff (n = 1, 5%) affiliated with the NCI Community Oncology Research Program. FCH was systematically collected using a wide range of tools at sites, with information being inserted into the patient's medical record. Participants found the chatbot tool to be highly acceptable, with the tool aligning with existing workflows, and were open to adopting it into their practice. DISCUSSION AND CONCLUSIONS: We further the evidence base about the appropriateness of scripted chatbots to support FCH collection. Although the tool had strong support, the varying clinical workflows across clinic sites necessitate that future FCH tool development accommodate customizable implementation strategies. Implementation support is necessary to overcome technical and logistical barriers to enhance the uptake of FCH tools in clinical practices and community settings.


Subjects
Medical Oncology , Neoplasms , Humans , Administrative Personnel , Data Collection , Delivery of Health Care , Medical History Taking
18.
Plant Methods ; 20(1): 103, 2024 Jul 13.
Article in English | MEDLINE | ID: mdl-39003455

ABSTRACT

BACKGROUND: Genotyping of individuals plays a pivotal role in various biological analyses, with technology choice influenced by multiple factors including genomic constraints, number of targeted loci and individuals, cost considerations, and the ease of sample preparation and data processing. Target enrichment capture of specific polymorphic regions has emerged as a flexible and cost-effective genomic reduction method for genotyping, especially adapted to the case of very large genomes. However, this approach necessitates complex bioinformatics treatment to extract genotyping data from raw reads. Existing workflows predominantly cater to phylogenetic inference, leaving a gap in user-friendly tools for genotyping analysis based on capture methods. In response to these challenges, we have developed GeCKO (Genotyping Complexity Knocked-Out). To assess the effectiveness of combining target enrichment capture with GeCKO, we conducted a case study on durum wheat domestication history, involving sequencing, processing, and analyzing variants in four relevant durum wheat groups. RESULTS: GeCKO encompasses four distinct workflows, each designed for specific steps of genomic data processing: (i) read demultiplexing and trimming for data cleaning, (ii) read mapping to align sequences to a reference genome, (iii) variant calling to identify genetic variants, and (iv) variant filtering. Each workflow in GeCKO can be easily configured and is executable across diverse computational environments. The workflows generate comprehensive HTML reports including key summary statistics and illustrative graphs, ensuring traceable, reproducible results and facilitating straightforward quality assessment. A specific innovation within GeCKO is its 'targeted remapping' feature, specifically designed for efficient treatment of targeted enrichment capture data. 
This process consists of extracting reads mapped to the targeted regions, constructing a smaller sub-reference genome, and remapping the reads to this sub-reference, thereby enhancing the efficiency of subsequent steps. CONCLUSIONS: The case study results showed the expected intra-group diversity and inter-group differentiation levels, confirming the method's effectiveness for genotyping and analyzing genetic diversity in species with complex genomes. GeCKO streamlined the data processing, significantly improving computational performance and efficiency. The targeted remapping enabled straightforward SNP calling in durum wheat, a task otherwise complicated by the species' large genome size. This illustrates its potential applications in various biological research contexts.
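The targeted-remapping idea — keep only reads overlapping a targeted region before building the sub-reference — can be sketched with simple interval logic; the coordinates and sequence names are invented, and the real GeCKO workflow of course operates on BAM/FASTA files rather than tuples:

```python
# Conceptual sketch of GeCKO-style 'targeted remapping' selection: keep
# only reads whose mapped coordinates overlap a targeted region. The
# retained regions would then be concatenated into a smaller
# sub-reference for remapping. All names and positions are invented.

def overlaps(read, region):
    """Half-open interval overlap on the same chromosome."""
    r_chrom, r_start, r_end = read
    t_chrom, t_start, t_end = region
    return r_chrom == t_chrom and r_start < t_end and t_start < r_end

targets = [("chr1A", 100, 200), ("chr2B", 50, 120)]
reads = [
    ("chr1A", 150, 250),  # overlaps the first target: kept
    ("chr1A", 300, 400),  # off-target: discarded
    ("chr2B", 40, 60),    # overlaps the second target: kept
]
on_target = [r for r in reads if any(overlaps(r, t) for t in targets)]
```

Shrinking the reference to the targeted regions is what makes the subsequent mapping and variant-calling steps tractable on a genome as large as durum wheat's.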

19.
Curr Protoc ; 4(6): e1065, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38857087

ABSTRACT

The European Bioinformatics Institute (EMBL-EBI)'s Job Dispatcher framework provides access to a wide range of core databases and analysis tools that are of key importance in bioinformatics. As well as providing web interfaces to these resources, web services are available using REST and SOAP protocols that enable programmatic access and allow their integration into other applications and analytical workflows and pipelines. This article describes the various options available to researchers and bioinformaticians who would like to use our resources via the web interface employing RESTful web services clients provided in Perl, Python, and Java or who would like to use Docker containers to integrate the resources into analysis pipelines and workflows. © 2024 The Authors. Current Protocols published by Wiley Periodicals LLC. Basic Protocol 1: Retrieving data from EMBL-EBI using Dbfetch via the web interface Alternate Protocol 1: Retrieving data from EMBL-EBI using WSDbfetch via the REST interface Alternate Protocol 2: Retrieving data from EMBL-EBI using Dbfetch via RESTful web services with Python client Support Protocol 1: Installing Python REST web services clients Basic Protocol 2: Sequence similarity search using FASTA search via the web interface Alternate Protocol 3: Sequence similarity search using FASTA via RESTful web services with Perl client Support Protocol 2: Installing Perl REST web services clients Basic Protocol 3: Sequence similarity search using NCBI BLAST+ RESTful web services with Python client Basic Protocol 4: Sequence similarity search using HMMER3 phmmer REST web services with Perl client and Docker Support Protocol 3: Installing Docker and running the EMBL-EBI client container Basic Protocol 5: Protein functional analysis using InterProScan 5 RESTful web services with the Python client and Docker Alternate Protocol 4: Protein functional analysis using InterProScan 5 RESTful web services with the Java client Support Protocol 4: Installing Java 
web services clients Basic Protocol 6: Multiple sequence alignment using Clustal Omega via web interface Alternate Protocol 5: Multiple sequence alignment using Clustal Omega with Perl client and Docker Support Protocol 5: Exploring the RESTful API with OpenAPI User Interface.
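Along the lines of Alternate Protocol 2, a Dbfetch request can be issued with only the Python standard library; the endpoint and the db/id/format/style parameters follow the public Dbfetch interface, though readers should check the current EMBL-EBI documentation for supported values:

```python
# Sketch of programmatic access to EMBL-EBI Dbfetch via its REST
# interface, using only the standard library (no dedicated client).
from urllib.parse import urlencode
from urllib.request import urlopen

DBFETCH = "https://www.ebi.ac.uk/Tools/dbfetch/dbfetch"

def dbfetch_url(db, ids, fmt="fasta", style="raw"):
    """Build a Dbfetch REST URL for the given database and identifier(s)."""
    query = urlencode({"db": db, "id": ids, "format": fmt, "style": style})
    return f"{DBFETCH}?{query}"

url = dbfetch_url("uniprotkb", "P12345")
# record = urlopen(url).read().decode()  # network call; uncomment to fetch
```

The same URL pattern is what the provided Perl, Python, and Java clients wrap with retries, batching, and output handling.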


Subjects
Internet , Software , Computational Biology/methods , User-Computer Interface
20.
Article in English | MEDLINE | ID: mdl-38719713

ABSTRACT

Use of artificial intelligence (AI) is expanding exponentially as it pertains to workflow operations. Otolaryngology-Head and Neck Surgery (OHNS), as with all medical fields, is just now beginning to realize the exciting upsides of AI as it relates to patient care, but otolaryngologists should also be critical when considering the use of AI solutions. This paper highlights how AI can optimize clinical workflows in the outpatient, inpatient, and surgical settings while also discussing some of the possible drawbacks of this burgeoning technology.
