Results 1 - 20 of 232
1.
Gigascience ; 13, 2024 Jan 02.
Article in English | MEDLINE | ID: mdl-38896539

ABSTRACT

BACKGROUND: Scientific workflow systems are increasingly popular for expressing and executing complex data analysis pipelines over large datasets, as they offer reproducibility, dependability, and scalability of analyses through automatic parallelization on large compute clusters. However, implementing workflows is difficult due to the involvement of many black-box tools and the deep infrastructure stack necessary for their execution. Simultaneously, user-supporting tools are rare, and the number of available examples is much lower than in classical programming languages. RESULTS: To address these challenges, we investigate how effectively large language models (LLMs), specifically ChatGPT, can support users when dealing with scientific workflows. We performed 3 user studies in 2 scientific domains to evaluate ChatGPT for comprehending, adapting, and extending workflows. Our results indicate that LLMs efficiently interpret workflows but achieve lower performance when exchanging components or making purposeful workflow extensions. We characterize their limitations in these challenging scenarios and suggest future research directions. CONCLUSIONS: Our results show high accuracy for comprehending and explaining scientific workflows but reduced performance for modifying and extending workflow descriptions. These findings clearly illustrate the need for further research in this area.


Subjects
Workflow; Programming Languages; Software; Computational Biology/methods; Humans
2.
Microsc Microanal ; 2024 Jun 21.
Article in English | MEDLINE | ID: mdl-38905154

ABSTRACT

There has been increasing interest in atom probe tomography (APT) for characterizing hydrated and biological materials. A major benefit of APT over microscopy techniques more commonly used in biology is its combination of outstanding three-dimensional (3D) spatial resolution and mass sensitivity. APT has already been used successfully to characterize biominerals, revealing key structural information at the atomic scale; however, there are many challenges inherent to the analysis of soft hydrated materials. New preparation protocols, often involving specimen preparation and transfer at cryogenic temperature, enable APT analysis of hydrated materials and have the potential to enable 3D atomic-scale characterization of biological materials in a near-native hydrated state. In this study, samples of pure water at the tips of tungsten needle specimens were prepared at room temperature by graphene encapsulation. A comparative study was conducted in which specimens were transferred at either room temperature or cryogenic temperature and analyzed by APT while varying the flight path and pulsing mode. The differences between the analysis workflows are presented along with recommendations for future studies, and the compatibility between graphene coating and cryogenic workflows is demonstrated.

3.
J Biomed Semantics ; 15(1): 9, 2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38845042

ABSTRACT

BACKGROUND: In healthcare, increasing collaboration can be observed among different caregivers, especially considering the shift to homecare. To provide optimal patient care, efficient coordination of data and workflows between these different stakeholders is required. To achieve this, data should be exposed in a machine-interpretable, reusable manner. In addition, there is a need for smart, dynamic, personalized and performant services provided on top of this data. Flexible workflows should be defined that realize their desired functionality, adhere to use case specific quality constraints and improve coordination across stakeholders. User interfaces should allow configuring all of this in an easy, user-friendly way. METHODS: A distributed, generic, cascading reasoning reference architecture can solve the presented challenges. It can be instantiated with existing tools built upon Semantic Web technologies that provide data-driven semantic services and construct cross-organizational workflows. These tools include RMLStreamer to generate Linked Data, DIVIDE to adaptively manage contextually relevant local queries, Streaming MASSIF to deploy reusable services, AMADEUS to compose semantic workflows, and RMLEditor and Matey to configure rules to generate Linked Data. RESULTS: A use case demonstrator is built on a scenario that focuses on personalized smart monitoring and cross-organizational treatment planning. The performance and usability of the demonstrator's implementation are evaluated. The former shows that the monitoring pipeline efficiently processes a stream of 14 observations per second: RMLStreamer maps JSON observations to RDF in 13.5 ms, a C-SPARQL query to generate fever alarms is executed on a window of 5 s in 26.4 ms, and Streaming MASSIF generates a smart notification for fever alarms based on severity and urgency in 1539.5 ms. 
DIVIDE derives the C-SPARQL queries in 7249.5 ms, while AMADEUS constructs a colon cancer treatment plan and performs conflict detection with it in 190.8 ms and 1335.7 ms, respectively. CONCLUSIONS: Existing tools built upon Semantic Web technologies can be leveraged to optimize continuous care provisioning. The evaluation of the building blocks on a realistic homecare monitoring use case demonstrates their applicability, usability and good performance. Further extending the available user interfaces for some tools is required to increase their adoption.
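The mapping step described above (JSON observations to RDF) can be sketched in a few lines. This is an illustrative hand-rolled version of the idea, not RMLStreamer's actual RML-driven implementation; all URIs and field names below are hypothetical examples.

```python
# Illustrative sketch: turning one JSON observation into RDF-style triples,
# in the spirit of what RMLStreamer does via RML mappings.
# The vocabulary URIs and JSON fields are invented for this example.
import json

def observation_to_triples(obs_json: str) -> list:
    """Map one JSON observation to (subject, predicate, object) triples."""
    obs = json.loads(obs_json)
    subject = f"http://example.org/observation/{obs['id']}"
    return [
        (subject, "http://example.org/patientId", str(obs["patient"])),
        (subject, "http://example.org/bodyTemperature", str(obs["temperature"])),
        (subject, "http://example.org/timestamp", obs["time"]),
    ]

triples = observation_to_triples(
    '{"id": 1, "patient": "p42", "temperature": 38.7, "time": "2024-01-01T10:00:00Z"}'
)
for s, p, o in triples:
    print(s, p, o)
```

In the real pipeline, a streaming engine would apply such a mapping continuously (at the reported rate of 14 observations per second) and feed the resulting triples to the C-SPARQL windowed query.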


Subjects
Home Care Services; Workflow; Semantics; Humans
4.
Curr Protoc ; 4(6): e1065, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38857087

ABSTRACT

The European Bioinformatics Institute (EMBL-EBI) Job Dispatcher framework provides access to a wide range of core databases and analysis tools of key importance in bioinformatics. As well as providing web interfaces to these resources, web services are available using REST and SOAP protocols that enable programmatic access and allow their integration into other applications and analytical workflows and pipelines. This article describes the options available to researchers and bioinformaticians who would like to use our resources via the web interface, employing RESTful web services clients provided in Perl, Python, and Java, or who would like to use Docker containers to integrate the resources into analysis pipelines and workflows. © 2024 The Authors. Current Protocols published by Wiley Periodicals LLC.
Basic Protocol 1: Retrieving data from EMBL-EBI using Dbfetch via the web interface
Alternate Protocol 1: Retrieving data from EMBL-EBI using WSDbfetch via the REST interface
Alternate Protocol 2: Retrieving data from EMBL-EBI using Dbfetch via RESTful web services with Python client
Support Protocol 1: Installing Python REST web services clients
Basic Protocol 2: Sequence similarity search using FASTA search via the web interface
Alternate Protocol 3: Sequence similarity search using FASTA via RESTful web services with Perl client
Support Protocol 2: Installing Perl REST web services clients
Basic Protocol 3: Sequence similarity search using NCBI BLAST+ RESTful web services with Python client
Basic Protocol 4: Sequence similarity search using HMMER3 phmmer REST web services with Perl client and Docker
Support Protocol 3: Installing Docker and running the EMBL-EBI client container
Basic Protocol 5: Protein functional analysis using InterProScan 5 RESTful web services with the Python client and Docker
Alternate Protocol 4: Protein functional analysis using InterProScan 5 RESTful web services with the Java client
Support Protocol 4: Installing Java web services clients
Basic Protocol 6: Multiple sequence alignment using Clustal Omega via web interface
Alternate Protocol 5: Multiple sequence alignment using Clustal Omega with Perl client and Docker
Support Protocol 5: Exploring the RESTful API with OpenAPI User Interface
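A minimal sketch of the programmatic Dbfetch access described above (Basic Protocol 1 and Alternate Protocol 2). The URL pattern below reflects the public Dbfetch service, but parameter names should be verified against the official EMBL-EBI documentation before use; the entry identifier is just an example.

```python
# Hedged sketch: building a request URL for the EMBL-EBI Dbfetch REST
# interface. Only standard-library code; the actual HTTP fetch (commented
# out) requires network access.
from urllib.parse import urlencode

DBFETCH = "https://www.ebi.ac.uk/Tools/dbfetch/dbfetch"

def dbfetch_url(db: str, ids: str, fmt: str = "fasta", style: str = "raw") -> str:
    """Build a Dbfetch request URL for one or more entry identifiers."""
    return f"{DBFETCH}?{urlencode({'db': db, 'id': ids, 'format': fmt, 'style': style})}"

url = dbfetch_url("uniprotkb", "P12345")
print(url)
# To actually retrieve the entry:
#   from urllib.request import urlopen
#   data = urlopen(url).read().decode()
```

The Perl, Python, and Java clients mentioned in the protocols wrap this kind of request together with polling and result handling.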


Subjects
Internet; Software; Computational Biology/methods; User-Computer Interface
5.
Chemosphere ; 360: 142436, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38797214

ABSTRACT

This study sought to develop a non-targeted workflow using high-resolution mass spectrometry (HRMS) to investigate previously unknown PFAS in consumer food packaging samples. Samples composed of various materials for different food types were subjected to methanolic extraction, controlled migration with food simulants and total oxidizable precursor (TOP) assay. The developed HRMS workflow utilized many signatures unique to PFAS compounds: negative mass defect, diagnostic breakdown structures, as well as retention time prediction. Potential PFAS features were identified in all packaging studied, regardless of food and material types. Five tentatively identified compounds were confirmed with analytical standards: 6:2 fluorotelomer phosphate diester (6:2 diPAP) and one of its intermediate breakdown products 2H-perfluoro-2-octenoic acid (6:2 FTUCA), perfluoropentadecanoic acid (PFPeDA), perfluorohexadecanoic acid (PFHxDA) and perfluorooctadecanoic acid (PFOcDA). Longer perfluorocarboxylic acids including C17 and C19 to C24 were also found present within a foil sample. Concentrations of 6:2 FTUCA ranged from 0.78 to 127 ng g-1 in methanolic extracts and up to 6 ng g-1 in food simulant after 240 h migration test. These results demonstrate the prevalence of both emerging and legacy PFAS in food packaging samples and highlight the usefulness of non-targeted tools to identify PFAS not included in targeted methods.
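The negative mass defect mentioned above is one of the simplest PFAS signatures to compute: fluorine-rich molecules have an exact monoisotopic mass slightly below their nominal (integer) mass, unlike most hydrogen-rich organics. A small worked example, using standard monoisotopic atomic masses and PFOA (C8HF15O2) as an illustrative compound not necessarily discussed in the study:

```python
# Mass defect = exact monoisotopic mass - nominal (integer) mass.
# Standard monoisotopic masses (Da) for the elements involved.
MONO = {"C": 12.0, "H": 1.007825, "F": 18.998403, "O": 15.994915}
NOMINAL = {"C": 12, "H": 1, "F": 19, "O": 16}

def mass_defect(formula: dict) -> float:
    """Exact monoisotopic mass minus nominal mass for an elemental formula."""
    exact = sum(MONO[el] * n for el, n in formula.items())
    nominal = sum(NOMINAL[el] * n for el, n in formula.items())
    return exact - nominal

pfoa = {"C": 8, "H": 1, "F": 15, "O": 2}   # PFOA, C8HF15O2
print(f"PFOA mass defect: {mass_defect(pfoa):+.4f} Da")  # negative, as expected for PFAS
```

Filtering HRMS features by negative mass defect in this way narrows thousands of detected features down to PFAS candidates before applying diagnostic fragments and retention time prediction.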


Subjects
Fluorocarbons; Food Packaging; Fluorocarbons/analysis; Food Contamination/analysis; Mass Spectrometry
6.
Article in English | MEDLINE | ID: mdl-38719713

ABSTRACT

Use of artificial intelligence (AI) is expanding exponentially in workflow operations. Otolaryngology-Head and Neck Surgery (OHNS), as with all medical fields, is only beginning to realize the exciting upsides of AI for patient care, but otolaryngologists should also be critical when considering AI solutions. This paper highlights how AI can optimize clinical workflows in the outpatient, inpatient, and surgical settings, while also discussing some of the possible drawbacks of this burgeoning technology.

7.
BMC Bioinformatics ; 25(1): 200, 2024 May 27.
Article in English | MEDLINE | ID: mdl-38802733

ABSTRACT

BACKGROUND: The initial version of SEDA assists life science researchers without programming skills in preparing DNA and protein sequence FASTA files for multiple bioinformatics applications. However, it lacks a command-line interface for more advanced users and does not allow the creation of automated analysis pipelines. RESULTS: This paper discusses the updates in the new SEDA release, including the addition of a complete command-line interface, new functionalities such as gene annotation, a framework for automated pipelines, and improved integration in Linux environments. CONCLUSION: SEDA is an open-source Java application and can be installed using the different distributions available ( https://www.sing-group.org/seda/download.html ) as well as through a Docker image ( https://hub.docker.com/r/pegi3s/seda ). It is released under a GPL-3.0 license, and its source code is publicly accessible on GitHub ( https://github.com/sing-group/seda ). The software version at the time of submission is archived at Zenodo (version v1.6.0, http://doi.org/10.5281/zenodo.10201605 ).


Subjects
Computational Biology; Software; Computational Biology/methods; Data Analysis
8.
J Biomed Inform ; 154: 104647, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38692465

ABSTRACT

OBJECTIVE: To use software, datasets, and data formats in the domain of Infectious Disease Epidemiology as a test collection to evaluate a novel M1 use case, which we introduce in this paper. M1 is a machine that, upon receipt of a new digital object of research, exhaustively finds all valid compositions of it with existing objects. METHOD: We implemented a data-format-matching-only M1 using exhaustive search, which we refer to as M1DFM. We then ran M1DFM on the test collection and used error analysis to identify needed semantic constraints. RESULTS: Precision of M1DFM search was 61.7%. Error analysis identified needed semantic constraints and needed changes in the handling of data services. Most semantic constraints were simple, but one data format was sufficiently complex that representing semantic constraints over it was practically impossible, from which we conclude, as a limitation, that software developers will have to meet the machines halfway by engineering software whose inputs are simple enough that their semantic constraints can be represented, akin to the simple APIs of services. We summarize these insights as M1-FAIR guiding principles for composability and suggest a roadmap for progressively capable devices in the service of reuse and accelerated scientific discovery. CONCLUSION: Algorithmic search of digital repositories for valid workflow compositions has the potential to accelerate scientific discovery but requires a scalable solution to the problem of knowledge acquisition about semantic constraints on software inputs. Additionally, practical limitations on the logical complexity of semantic constraints must be respected, which has implications for the design of software.
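The data-format-matching-only search at the heart of M1DFM can be illustrated with a toy exhaustive search: propose every ordered pairing where some output format of one object matches some input format of another, with no semantic checks. The object names and formats below are invented for illustration, not drawn from the paper's test collection.

```python
# Toy sketch of data-format-matching-only composition search (M1DFM-like).
from itertools import product

objects = [
    {"name": "aligner", "inputs": ["fasta"], "outputs": ["sam"]},
    {"name": "sorter",  "inputs": ["sam"],   "outputs": ["bam"]},
    {"name": "caller",  "inputs": ["bam"],   "outputs": ["vcf"]},
    {"name": "plotter", "inputs": ["csv"],   "outputs": ["png"]},
]

def valid_compositions(objs):
    """Exhaustively find ordered pairs (a, b) where some output format of a
    matches some input format of b -- format matching only, no semantics."""
    return [(a["name"], b["name"])
            for a, b in product(objs, objs)
            if a is not b and set(a["outputs"]) & set(b["inputs"])]

pairs = valid_compositions(objects)
print(pairs)
```

Because format matching alone admits compositions that are semantically invalid, precision suffers, which is exactly why the paper's error analysis focuses on the semantic constraints missing from this naive matcher.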


Subjects
Software; Humans; Semantics; Machine Learning; Algorithms; Databases, Factual
9.
Mol Cell Proteomics ; 23(6): 100777, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38670310

ABSTRACT

Transmembrane (TM) proteins constitute over 30% of the mammalian proteome and play essential roles in mediating cell-cell communication, synaptic transmission, and plasticity in the central nervous system. Many of these proteins, especially the G protein-coupled receptors (GPCRs), are validated or candidate drug targets for therapeutic development for mental diseases, yet their expression profiles are underrepresented in most global proteomic studies. Herein, we establish a brain TM protein-enriched spectral library based on 136 data-dependent acquisition runs acquired from various brain regions of both naïve mice and mental disease models. This spectral library comprises 3043 TM proteins including 171 GPCRs, 231 ion channels, and 598 transporters. Leveraging this library, we analyzed the data-independent acquisition data from different brain regions of two mouse models exhibiting depression- or anxiety-like behaviors. By integrating multiple informatics workflows and library sources, our study significantly expanded the mental stress-perturbed TM proteome landscape, from which a new GPCR regulator of depression was verified by in vivo pharmacological testing. In summary, we provide a high-quality mouse brain TM protein spectral library to largely increase the TM proteome coverage in specific brain regions, which would catalyze the discovery of new potential drug targets for the treatment of mental disorders.


Subjects
Brain; Disease Models, Animal; Mental Disorders; Mice, Inbred C57BL; Proteome; Proteomics; Animals; Proteome/metabolism; Brain/metabolism; Proteomics/methods; Mice; Mental Disorders/metabolism; Membrane Proteins/metabolism; Male; Receptors, G-Protein-Coupled/metabolism
10.
Int J Neonatal Screen ; 10(2)2024 Mar 25.
Article in English | MEDLINE | ID: mdl-38651392

ABSTRACT

The Connecticut Newborn Screening (NBS) Network, in partnership with the Connecticut Department of Public Health, strategically utilized the Epic electronic health record (EHR) system to establish registries for tracking long-term follow-up (LTFU) of NBS patients. After launching the LTFU registry in 2019, the Network obtained funding from the Health Resources and Services Administration to address the slow adoption by specialty care teams. An LTFU model was implemented in the three highest-volume specialty care teams at Connecticut Children's, involving an early childhood cohort diagnosed with an NBS-identified disorder since the formation of the Network in March 2019. This cohort grew from 87 to 115 over the two-year project. Methods included optimizing registries, capturing external data from Health Information Exchanges, incorporating evidence-based guidelines, and conducting qualitative and quantitative evaluations. The early childhood cohort demonstrated significant and sustainable improvements in the percentage of visits up-to-date (%UTD) compared to the non-intervention legacy cohort of patients diagnosed with an NBS disorder before the formation of the Network. Positive trends in the early childhood cohort, including %UTD for visits and condition-specific performance metrics, were observed. The qualitative evaluation highlighted the achievability of practice behavior changes for specialty care teams through responsive support from the nurse analyst. The Network's model serves as a use case for applying and achieving the adoption of population health tools within an EHR system to track care delivery and quickly fill identified care gaps, with the aim of improving long-term health for NBS patients.

11.
Sensors (Basel) ; 24(7)2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38610587

ABSTRACT

This paper describes a novel architecture that aims to create a template for the implementation of an IT platform supporting the deployment and integration of the different digital twin subsystems that compose a complex urban intelligence system. In more detail, the proposed Smart City IT architecture has the following main purposes: (i) facilitating the deployment of the subsystems in a cloud environment; (ii) effectively storing, integrating, managing, and sharing the huge amount of heterogeneous data acquired and produced by each subsystem, using a data lake; (iii) supporting data exchange and sharing; (iv) managing and executing workflows, to automatically coordinate and run processes; and (v) providing and visualizing the required information. A prototype of the proposed IT solution was implemented leveraging open-source frameworks and technologies, to test its functionalities and performance. The results of tests performed in real-world settings confirmed that the proposed architecture could efficiently and easily support the deployment and integration of heterogeneous subsystems, allowing them to share and integrate their data; to select, extract, and visualize the information required by a user; to integrate with other external systems; and to define and execute workflows that orchestrate the various subsystems involved in complex analyses and processes.

12.
J Microsc ; 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38532662

ABSTRACT

As microscopy diversifies and becomes ever more complex, quantification of microscopy images has emerged as a major roadblock for many researchers. All researchers must face certain challenges in turning microscopy images into answers, independent of their scientific question and the images they have generated. Challenges may arise at many stages of the analysis process, including handling of image files, image pre-processing, object finding or measurement, and statistical analysis. While the exact solution required for each obstacle will be problem-specific, by keeping analysis in mind, optimizing data quality, understanding tools and tradeoffs, breaking workflows and data sets into chunks, talking to experts, and thoroughly documenting what has been done, analysts at any experience level can learn to overcome these challenges and create better and easier image analyses.

13.
ArXiv ; 2024 Mar 06.
Article in English | MEDLINE | ID: mdl-38495561

ABSTRACT

As microscopy diversifies and becomes ever more complex, the problem of quantification of microscopy images has emerged as a major roadblock for many researchers. All researchers must face certain challenges in turning microscopy images into answers, independent of their scientific question and the images they have generated. Challenges may arise at many stages throughout the analysis process, including handling of the image files, image pre-processing, object finding or measurement, and statistical analysis. While the exact solution required for each obstacle will be problem-specific, by understanding tools and tradeoffs, optimizing data quality, breaking workflows and data sets into chunks, talking to experts, and thoroughly documenting what has been done, analysts at any experience level can learn to overcome these challenges and create better and easier image analyses.

14.
Mol Biol Evol ; 41(4)2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38507648

ABSTRACT

Population genomic analyses such as inference of population structure and identification of signatures of selection usually involve the application of a plethora of tools. The installation of tools and their dependencies, data transformation, or series of data preprocessing steps in a particular order sometimes make the analyses challenging. While the usage of container-based technologies has significantly resolved the problems associated with the installation of tools and their dependencies, population genomic analyses requiring multistep pipelines or complex data transformation can be greatly facilitated by the application of workflow management systems such as Nextflow and Snakemake. Here, we present scalepopgen, a collection of fully automated workflows that can carry out widely used population genomic analyses on biallelic single nucleotide polymorphism data stored in either variant calling format files or plink-generated binary files. scalepopgen is developed in Nextflow and can be run locally or on high-performance computing systems using either Conda, Singularity, or Docker. The automated workflow includes procedures such as (i) filtering of individuals and genotypes; (ii) principal component analysis and admixture analysis with identification of optimal K-values; (iii) running TreeMix analysis with or without bootstrapping and migration edges, followed by identification of an optimal number of migration edges; and (iv) implementing single-population and pair-wise population comparison-based procedures to identify genomic signatures of selection. The pipeline uses various open-source tools; additionally, several Python and R scripts are also provided to collect and visualize the results. The tool is freely available at https://github.com/Popgen48/scalepopgen.
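Step (i) above, filtering of genotypes, commonly means dropping biallelic SNPs whose minor allele frequency (MAF) falls below a threshold. The sketch below is a hedged, standard-library illustration of that idea, not scalepopgen's actual plink-based implementation; genotypes are coded as 0/1/2 alternate-allele counts and the SNP IDs are invented.

```python
# Minor-allele-frequency filtering on 0/1/2-coded biallelic genotypes.
def maf(genotypes: list) -> float:
    """Minor allele frequency from alt-allele counts (0, 1, or 2 per individual)."""
    alt_freq = sum(genotypes) / (2 * len(genotypes))
    return min(alt_freq, 1 - alt_freq)

def filter_snps(matrix: dict, threshold: float = 0.05) -> dict:
    """Keep only SNPs with MAF at or above the threshold."""
    return {snp: g for snp, g in matrix.items() if maf(g) >= threshold}

snps = {
    "rs1": [0, 1, 2, 1, 0],   # alt freq 4/10 -> MAF 0.4, kept
    "rs2": [0, 0, 0, 0, 1],   # alt freq 1/10 -> MAF 0.1, kept
    "rs3": [0, 0, 0, 0, 0],   # monomorphic  -> MAF 0.0, dropped
}
kept = filter_snps(snps)
print(sorted(kept))
```

In the real pipeline this filtering is delegated to dedicated tools operating on VCF or plink binary files, with the workflow manager wiring the filtered output into the downstream PCA, admixture, and TreeMix steps.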


Subjects
Metagenomics; Software; Humans; Workflow; Genomics/methods; Computational Biology/methods
15.
Phys Imaging Radiat Oncol ; 29: 100535, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38298885

ABSTRACT

Background and purpose: Many 4D particle therapy research concepts have recently been translated into the clinic; however, substantial differences remain that depend on the indication and institute-related aspects. This work aims to summarise current state-of-the-art 4D particle therapy technology and outline a roadmap for future research and developments. Material and methods: This review focused on the clinical implementation of 4D approaches for imaging, treatment planning, delivery and evaluation, based on the 2021 and 2022 4D Treatment Workshops for Particle Therapy as well as a review of the most recent surveys, guidelines and scientific papers dedicated to this topic. Results: Available technological capabilities for motion surveillance and compensation determined the course of each 4D particle treatment. 4D motion management, delivery techniques and strategies, including imaging, were diverse and depended on many factors. These included aspects of motion amplitude, tumour location, and accelerator technology, driving the necessity of centre-specific dosimetric validation. Novel methodologies for X-ray based image processing and MRI for real-time tumour tracking and motion management were shown to have large potential for online and offline adaptation schemes compensating for potential anatomical changes over the treatment course. The latest research developments were dominated by particle imaging, artificial intelligence methods and FLASH, adding another level of complexity but also opportunities in the context of 4D treatments. Conclusion: This review showed that the rapid technological advances in radiation oncology, together with the available intrafractional motion management and adaptive strategies, have paved the way towards clinical implementation.

17.
Crit Rev Food Sci Nutr ; : 1-22, 2024 Jan 11.
Article in English | MEDLINE | ID: mdl-38206576

ABSTRACT

Over the past decade, a remarkable surge in the development of functional nano-delivery systems loaded with bioactive compounds for healthcare has been witnessed. Notably, the demanding requirements of high solubility, prolonged circulation, high tissue penetration capability, and strong targeting ability of nanocarriers have posed interdisciplinary research challenges to the community. While extensive experimental studies have been conducted to understand the construction of nano-delivery systems and their metabolic behavior in vivo, less is known about the molecular mechanisms and kinetic pathways of their metabolism, and effective means for high-throughput screening are lacking. Molecular dynamics (MD) simulation techniques provide a reliable tool for investigating the design of nano-delivery carriers encapsulating these functional ingredients, elucidating the synthesis, translocation, and delivery of nanocarriers. This review introduces the basic MD principles, discusses how to apply MD simulation to design nanocarriers, and evaluates the ability of nanocarriers to adhere to or cross the gastrointestinal mucosa and to regulate plasma proteins in vivo. Moreover, we present the critical role of MD simulation in developing delivery systems for precise nutrition and prospects for the future. This review aims to provide insights into the implications of MD simulation techniques for designing and optimizing nano-delivery systems in the healthcare food industry.

18.
BMC Bioinformatics ; 25(1): 11, 2024 Jan 04.
Article in English | MEDLINE | ID: mdl-38177985

ABSTRACT

BACKGROUND: Machine learning (ML) has a rich history in structural bioinformatics, and modern approaches, such as deep learning, are revolutionizing our knowledge of the subtle relationships between biomolecular sequence, structure, function, dynamics and evolution. As with any advance that rests upon statistical learning approaches, the recent progress in biomolecular sciences is enabled by the availability of vast volumes of sufficiently-variable data. To be useful, such data must be well-structured, machine-readable, intelligible and manipulable. These and related requirements pose challenges that become especially acute at the computational scales typical in ML. Furthermore, in structural bioinformatics such data generally relate to protein three-dimensional (3D) structures, which are inherently more complex than sequence-based data. A significant and recurring challenge concerns the creation of large, high-quality, openly-accessible datasets that can be used for specific training and benchmarking tasks in ML pipelines for predictive modeling projects, along with reproducible splits for training and testing. RESULTS: Here, we report 'Prop3D', a platform that allows for the creation, sharing and extensible reuse of libraries of protein domains, featurized with biophysical and evolutionary properties that can range from detailed, atomically-resolved physicochemical quantities (e.g., electrostatics) to coarser, residue-level features (e.g., phylogenetic conservation). As a community resource, we also supply a 'Prop3D-20sf' protein dataset, obtained by applying our approach to CATH . We have developed and deployed the Prop3D framework, both in the cloud and on local HPC resources, to systematically and reproducibly create comprehensive datasets via the Highly Scalable Data Service ( HSDS ). Our datasets are freely accessible via a public HSDS instance, or they can be used with accompanying Python wrappers for popular ML frameworks. 
CONCLUSION: Prop3D and its associated Prop3D-20sf dataset can be of broad utility in at least three ways. Firstly, the Prop3D workflow code can be customized and deployed on various cloud-based compute platforms, with scalability achieved largely by saving the results to distributed HDF5 files via HSDS . Secondly, the linked Prop3D-20sf dataset provides a hand-crafted, already-featurized dataset of protein domains for 20 highly-populated CATH families; importantly, provision of this pre-computed resource can aid the more efficient development (and reproducible deployment) of ML pipelines. Thirdly, Prop3D-20sf's construction explicitly takes into account (in creating datasets and data-splits) the enigma of 'data leakage', stemming from the evolutionary relationships between proteins.
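The 'data leakage' point above has a simple operational form: train/test splits must assign whole protein families (here, CATH families) to one side or the other, so that evolutionarily related domains never straddle the split. The sketch below illustrates that idea with invented domain and family identifiers; it is not Prop3D's actual splitting code.

```python
# Leakage-aware train/test split: whole families go to one side only.
import random

def family_split(domain_to_family: dict, test_frac: float = 0.25, seed: int = 0):
    """Split domains into train/test with no family shared across sides."""
    families = sorted(set(domain_to_family.values()))
    rng = random.Random(seed)
    rng.shuffle(families)
    n_test = max(1, int(len(families) * test_frac))
    test_fams = set(families[:n_test])
    train = [d for d, f in domain_to_family.items() if f not in test_fams]
    test = [d for d, f in domain_to_family.items() if f in test_fams]
    return train, test

# Illustrative domains tagged with (hypothetical) CATH family IDs.
domains = {"d1": "1.10.10", "d2": "1.10.10", "d3": "2.40.50", "d4": "3.30.70"}
train, test = family_split(domains)

def fams(ds):
    return {domains[d] for d in ds}

assert not (fams(train) & fams(test))  # no family appears on both sides
print(train, test)
```

A naive random split over individual domains would routinely place near-duplicate homologs in both train and test, inflating benchmark scores; splitting at the family level is the standard guard against this.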


Subjects
Computational Biology; Proteins; Humans; Phylogeny; Computational Biology/methods; Workflow; Machine Learning
19.
J Am Med Inform Assoc ; 31(3): 631-639, 2024 Feb 16.
Article in English | MEDLINE | ID: mdl-38164994

ABSTRACT

INTRODUCTION: This study aimed to identify barriers and facilitators to the implementation of family cancer history (FCH) collection tools in clinical practices and community settings by assessing clinicians' perceptions of implementing a chatbot interface to collect FCH information and provide personalized results to patients and providers. OBJECTIVES: By identifying design and implementation features that facilitate tool adoption and integration into clinical workflows, this study can inform future FCH tool development and adoption in healthcare settings. MATERIALS AND METHODS: Quantitative data were collected using a survey to evaluate the implementation outcomes of acceptability, adoption, appropriateness, feasibility, and sustainability of the chatbot tool for collecting FCH. Semistructured interviews were conducted to gather qualitative data on respondents' experiences using the tool and recommendations for enhancements. RESULTS: We completed data collection with 19 participants: providers (n = 9, 47%), clinical staff (n = 5, 26%), administrators (n = 4, 21%), and other staff (n = 1, 5%) affiliated with the NCI Community Oncology Research Program. FCH was systematically collected using a wide range of tools at sites, with information being inserted into the patient's medical record. Participants found the chatbot tool to be highly acceptable, with the tool aligning with existing workflows, and were open to adopting the tool into their practice. DISCUSSION AND CONCLUSIONS: We further the evidence base about the appropriateness of scripted chatbots to support FCH collection. Although the tool had strong support, the varying clinical workflows across clinic sites necessitate that future FCH tool development accommodate customizable implementation strategies. Implementation support is necessary to overcome technical and logistical barriers to enhance the uptake of FCH tools in clinical practices and community settings.


Subjects
Medical Oncology; Neoplasms; Humans; Administrative Personnel; Data Collection; Delivery of Health Care; Medical History Taking
20.
IEEE Internet Things J ; 11(3): 3779-3791, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38283301

ABSTRACT

Current Internet of Things (IoT) devices provide a diverse range of functionalities, ranging from measurement and dissemination of sensory observations to computation services for real-time data stream processing. In extreme situations such as emergencies, a significant benefit of IoT devices is that they can help gain a more complete situational understanding of the environment. However, this requires the ability to utilize IoT resources while taking into account the location, battery life, and other constraints of the underlying edge and IoT devices. A dynamic approach is proposed for the orchestration and management of distributed workflow applications using services available in cloud data centers, deployed on servers, or on IoT devices at the network edge. Our proposed approach is specifically designed for knowledge-driven business process workflows that are adaptive, interactive, evolvable and emergent. A comprehensive empirical evaluation shows that the proposed approach is effective and resilient to situational changes.
