Results 1 - 20 of 8,001
1.
J Biomed Opt ; 29(Suppl 2): S22702, 2025 Dec.
Article in English | MEDLINE | ID: mdl-38434231

ABSTRACT

Significance: Advancements in label-free microscopy could provide real-time, non-invasive imaging with unique sources of contrast and automated standardized analysis to characterize heterogeneous and dynamic biological processes. These tools would overcome challenges with widely used methods that are destructive (e.g., histology, flow cytometry) or lack cellular resolution (e.g., plate-based assays, whole animal bioluminescence imaging). Aim: This perspective aims to (1) justify the need for label-free microscopy to track heterogeneous cellular functions over time and space within unperturbed systems and (2) recommend improvements regarding instrumentation, image analysis, and image interpretation to address these needs. Approach: Three key research areas (cancer research, autoimmune disease, and tissue and cell engineering) are considered to support the need for label-free microscopy to characterize heterogeneity and dynamics within biological systems. Based on the strengths (e.g., multiple sources of molecular contrast, non-invasive monitoring) and weaknesses (e.g., imaging depth, image interpretation) of several label-free microscopy modalities, improvements for future imaging systems are recommended. Conclusion: Improvements in instrumentation including strategies that increase resolution and imaging speed, standardization and centralization of image analysis tools, and robust data validation and interpretation will expand the applications of label-free microscopy to study heterogeneous and dynamic biological systems.


Subject(s)
Histological Techniques; Microscopy; Animals; Flow Cytometry; Image Processing, Computer-Assisted
2.
BMC Nutr ; 10(1): 120, 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39252140

ABSTRACT

BACKGROUND: High sodium intake has been linked to negative health effects, including cardiovascular and renal diseases. Traditional dietary sodium assessment methods are time-consuming and subject to error. Using technology may increase the efficiency and accuracy of dietary assessment. The objective of this study was to develop and validate a food frequency questionnaire (FFQ) screener using software to assess sodium intake among the Palestinian population. METHODOLOGY: The study was conducted in four phases. In Phase 1, Palestinian foods were categorized and subcategorized according to their mode of consumption, sodium content, and food groups. Sodium content values were calculated from the Palestinian food composition database. Content validity was assessed in Phase 2, while in Phase 3, a pilot study was conducted to determine test-retest reliability. In Phase 4, the criterion validity of the screener was assessed by comparing sodium intake estimates from the FFQ screener with results from a 24-hour urinary sodium test and a 3-day diet recall. Correlations between the sodium intake values from the three methods were analyzed using Pearson correlation tests, and agreement was assessed using the Bland-Altman method. RESULTS: The developed FFQ sodium screener included 41 food items categorized into nine groups, with photo-based portion size estimation and frequency of consumption. Test-retest reliability showed a Pearson correlation coefficient of 0.7 (p < 0.01). For criterion validity, the correlation coefficient between dietary sodium intake from the FFQ screener software and the 24-hour urine sodium test was 0.6 (p < 0.001), and the correlation between the FFQ screener software and the 3-day diet recall was 0.3 (p < 0.001). Sodium intake was significantly correlated with preference for low-sodium food and previous salt reduction (p < 0.05).
CONCLUSIONS: Using the FFQ screener software was a valid and reliable method for assessing dietary sodium intake. Using the photo-based method to estimate portion size improved precision and accuracy in diet assessment.
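The validation statistics above (Pearson correlation for criterion validity, Bland-Altman analysis for agreement) can be sketched in a few lines; the paired intake values below are illustrative assumptions, not data from the study.

```python
import numpy as np

# Hypothetical paired sodium-intake estimates (mg/day) from two methods:
# an FFQ screener and a 24-hour urinary sodium test (illustrative only).
ffq = np.array([2300.0, 3100.0, 2800.0, 4100.0, 1900.0, 3600.0, 2500.0, 3300.0])
urine = np.array([2500.0, 2900.0, 3000.0, 3900.0, 2100.0, 3400.0, 2700.0, 3100.0])

# Criterion validity: Pearson correlation between the two methods.
r = np.corrcoef(ffq, urine)[0, 1]

# Bland-Altman agreement: mean difference (bias) and 95% limits of agreement.
diff = ffq - urine
bias = diff.mean()
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)
```

A high correlation with a bias near zero and narrow limits of agreement is the pattern a validation like this looks for.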

3.
Front Syst Neurosci ; 18: 1302429, 2024.
Article in English | MEDLINE | ID: mdl-39229305

ABSTRACT

Background: Imagination represents a pivotal capability of human intelligence. To develop human-like artificial intelligence, uncovering the computational architecture pertinent to imaginative capabilities through reverse engineering the brain's computational functions is essential. The existing Structure-Constrained Interface Decomposition (SCID) method leverages the anatomical structure of the brain to extract computational architecture. However, its efficacy is limited to narrow brain regions, making it unsuitable for realizing the function of imagination, which involves diverse brain areas such as the neocortex, basal ganglia, thalamus, and hippocampus. Objective: In this study, we proposed the Function-Oriented SCID method, an advancement over the existing SCID method comprising four steps designed for reverse engineering broader brain areas. This method was applied to the brain's imaginative capabilities to design a hypothetical computational architecture. The implementation began with defining the human imaginative ability that we aspire to simulate. Subsequently, six critical requirements necessary for actualizing the defined imagination were identified. Constraints were established considering the unique representational capacity and the singularity of modes of the neocortex, a distributed memory structure responsible for executing imaginative functions. In line with these constraints, we developed five distinct functions to fulfill the requirements. We allocated specific components for each function, followed by an architectural proposal aligning each component with a corresponding brain organ.
Results: In the proposed architecture, the distributed memory component, associated with the neocortex, realizes the representation and execution function; the imaginary zone maker component, associated with the claustrum, accomplishes the dynamic-zone partitioning function; the routing conductor component, linked with the complex of thalamus and basal ganglia, performs the manipulation function; the mode memory component, related to the specific agranular neocortical area, executes the mode maintenance function; and the recorder component, affiliated with the hippocampal formation, handles the history management function. Thus, we have provided a fundamental cognitive architecture of the brain that comprehensively covers the brain's imaginative capacities.
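The component-to-organ correspondences proposed in the Results can be restated as a small lookup table; this sketch only reorganizes the mapping given above into data and is not an executable model of the architecture.

```python
# Component -> (associated brain organ, function realized), restated from the
# proposed architecture; a plain data sketch, not an executable model.
architecture = {
    "distributed memory":   ("neocortex", "representation and execution"),
    "imaginary zone maker": ("claustrum", "dynamic-zone partitioning"),
    "routing conductor":    ("thalamus and basal ganglia complex", "manipulation"),
    "mode memory":          ("specific agranular neocortical area", "mode maintenance"),
    "recorder":             ("hippocampal formation", "history management"),
}

def organ_for(component):
    """Look up the brain organ associated with a named component."""
    return architecture[component][0]
```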

4.
JMIR Med Inform ; 12: e58080, 2024 Sep 05.
Article in English | MEDLINE | ID: mdl-39235850

ABSTRACT

In light of rapid technological advancements, the health care sector is undergoing significant transformation with the continuous emergence of novel digital solutions. Consequently, regulatory frameworks must continuously adapt to fulfill their main goal of protecting patients. In 2017, the new Medical Device Regulation (EU) 2017/745 (MDR) came into force, bringing more complex requirements for development, launch, and postmarket surveillance. However, the updated regulation considerably impacts manufacturers, especially small- and medium-sized enterprises, and consequently the accessibility of medical devices in the European Union market, as many manufacturers decide to discontinue their products, postpone the launch of new innovative solutions, or leave the European Union market in favor of other regions such as the United States. This could lead to reduced health care quality and slower industry innovation efforts. Effective policy calibration and collaborative efforts are essential to mitigate these effects and promote ongoing advancements in health care technologies in the European Union market. This paper is a narrative review exploring the factors introduced by the new regulation that hinder the development, launch, and marketing of software as a medical device; it focuses exclusively on factors that create obstacles. Related regulations, directives, and proposals are discussed for comparison and further analysis.

5.
J Mol Biol ; 436(17): 168656, 2024 Sep 01.
Article in English | MEDLINE | ID: mdl-39237202

ABSTRACT

Crosslinking mass spectrometry (MS) has emerged as an important technique for elucidating the in-solution structures of protein complexes and the topology of protein-protein interaction networks. However, the expanding user community has lacked an integrated visualisation tool to help make use of crosslinking data when investigating biological mechanisms. We addressed this need by developing xiVIEW, a web-based application designed to streamline crosslinking MS data analysis, which we present here. xiVIEW provides a user-friendly interface for accessing coordinated views of mass spectrometric data, network visualisation, annotations extracted from trusted repositories like UniProtKB, and available 3D structures. In accordance with recent recommendations from the crosslinking MS community, xiVIEW (i) provides a standards-compliant parser to improve data integration and (ii) offers accessible visualisation tools. By promoting the adoption of standard file formats and providing a comprehensive visualisation platform, xiVIEW empowers experimentalists and modellers alike to pursue their respective research interests. We anticipate that xiVIEW will advance crosslinking MS-inspired research and facilitate broader, more effective investigations into complex biological systems.


Subject(s)
Cross-Linking Reagents; Mass Spectrometry; Mass Spectrometry/methods; Cross-Linking Reagents/chemistry; Software; Proteins/chemistry; Protein Interaction Mapping/methods; Databases, Protein; User-Computer Interface; Protein Interaction Maps
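A common downstream use of crosslinking MS data mapped onto 3D structures, of the kind xiVIEW's coordinated views support, is checking whether identified crosslinks are compatible with inter-residue distances. This is a generic sketch, not xiVIEW's API; the coordinates are made up, and the 30 Å cutoff is a typical assumption for BS3/DSS-type reagents.

```python
import numpy as np

# Hypothetical C-alpha coordinates (angstroms), keyed by (chain, residue);
# illustrative values, not from a real structure.
ca_coords = {
    ("A", 10): np.array([0.0, 0.0, 0.0]),
    ("A", 55): np.array([10.0, 5.0, 3.0]),
    ("B", 20): np.array([40.0, 0.0, 0.0]),
}

# Identified crosslinks as residue pairs, e.g. from a crosslinking MS search.
crosslinks = [(("A", 10), ("A", 55)), (("A", 10), ("B", 20))]

def satisfied(link, max_dist=30.0):
    """A crosslink is structurally satisfied when the C-alpha distance of its
    residue pair is within the crosslinker's maximum span (cutoff assumed)."""
    a, b = link
    return float(np.linalg.norm(ca_coords[a] - ca_coords[b])) <= max_dist

results = {link: satisfied(link) for link in crosslinks}
```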
6.
Radiother Oncol ; : 110499, 2024 Sep 04.
Article in English | MEDLINE | ID: mdl-39242029

ABSTRACT

BACKGROUND: Stereotactic arrhythmia radioablation (STAR) is a therapeutic option for ventricular tachycardia (VT) where catheter-based ablation is not feasible or has previously failed. Target definition and its transfer from electro-anatomic maps (EAM) to radiotherapy treatment planning systems (TPS) is challenging and operator-dependent. Software solutions have been developed to register EAM with cardiac CT and semi-automatically transfer 2D target surface data into 3D CT volume coordinates. Results of a cross-validation study of two conceptually different open-source software solutions using data from the RAVENTA trial (NCT03867747) are reported. METHODS: Clinical target volumes (CTVs) were created from target regions delineated on EAM data of 10 patients, using two conceptually different approaches by separate investigators blinded to each other's results. Targets were transferred using 3D-3D and 2D-3D registration, respectively. The resulting CTVs were compared in a core lab using two complementary analysis software packages for structure similarity and geometric characteristics. RESULTS: Volumes and surface areas of the CTVs created by both methods were comparable: 14.88 ± 11.72 ml versus 15.15 ± 11.35 ml and 44.29 ± 33.63 cm² versus 46.43 ± 35.13 cm². The Dice coefficient was 0.84 ± 0.04; median surface distance and Hausdorff distance were 0.53 ± 0.37 mm and 6.91 ± 2.26 mm, respectively. The 3D center-of-mass difference was 3.62 ± 0.99 mm. Geometrical volume similarity was 0.94 ± 0.05 %. CONCLUSION: The STAR targets transferred from EAM to TPS using both software solutions resulted in nearly identical 3D structures. Both solutions can be used for quality assurance (QA) and EAM-to-TPS transfer of STAR targets. Semi-automated methods could potentially help to avoid mistargeting in STAR and offer standardized workflows for methodically harmonized treatments.
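The Dice coefficient used above to compare CTVs is straightforward to compute on binary volume masks; this minimal sketch uses toy arrays rather than real planning data.

```python
import numpy as np

# Two hypothetical binary target volumes on the same voxel grid
# (True = inside the CTV); small illustrative arrays only.
vol_a = np.zeros((10, 10, 10), dtype=bool)
vol_b = np.zeros((10, 10, 10), dtype=bool)
vol_a[2:7, 2:7, 2:7] = True   # a 5x5x5 cube
vol_b[3:8, 2:7, 2:7] = True   # the same cube shifted one voxel

def dice(a, b):
    """Dice coefficient: 2|A intersect B| / (|A| + |B|); 1.0 means identical."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

d = dice(vol_a, vol_b)
```

A one-voxel shift of a 5-voxel cube leaves a Dice coefficient of 0.8, which gives a feel for how sensitive the metric is to small misregistrations.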

7.
Anal Bioanal Chem ; 2024 Sep 10.
Article in English | MEDLINE | ID: mdl-39251428

ABSTRACT

Pharmaceuticals released into aquatic and soil environments can be absorbed by plants and soil organisms, potentially leading to the formation of unknown metabolites that may negatively affect these organisms or contaminate the food chain. The aim of this study was to identify pharmaceutical metabolites through a triplet approach for metabolite structure prediction (software-based predictions, literature review, and known common metabolic pathways), followed by generating in silico mass spectral libraries and applying various mass spectrometry modes for untargeted LC-qTOF analysis. To this end, Eisenia fetida and Lactuca sativa were exposed to a pharmaceutical mixture (atenolol, enrofloxacin, erythromycin, ketoprofen, sulfamethoxazole, tetracycline) under hydroponic and soil conditions at environmentally relevant concentrations. Samples collected at different time points were extracted using QuEChERS and analyzed with LC-qTOF in data-dependent (DDA) and data-independent (DIA) acquisition modes, applying both positive and negative electrospray ionization. The triplet approach yielded a total of 3762 pharmaceutical metabolites, and an in silico mass spectral library was created from these predicted metabolites. This approach resulted in the identification of 26 statistically significant metabolites (p < 0.05), with DDA+ and DDA- outperforming DIA modes by successfully detecting 56/67 sample type:metabolite combinations. Lettuce roots had the highest metabolite count (26), followed by leaves (6) and earthworms (2). Despite the lower metabolite count, earthworms showed the highest peak intensities, closely followed by roots, with leaves displaying the lowest intensities. Common metabolic reactions observed included hydroxylation, decarboxylation, acetylation, and glucosidation, with ketoprofen-related metabolites being the most prevalent, totaling 12 distinct metabolites.
In conclusion, we developed a high-throughput workflow combining open-source software with LC-HRMS for identifying unknown metabolites across various sample types.
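The core matching step in such a workflow, comparing predicted metabolite masses against observed m/z values within a ppm tolerance, can be sketched as follows; the masses, adduct choice, and the 10 ppm tolerance are illustrative assumptions, not values from the study.

```python
PROTON = 1.007276  # proton mass (Da), for [M+H]+ adducts in positive ESI

# Hypothetical predicted monoisotopic neutral masses and observed m/z values.
predicted = {
    "metabolite_X": 254.0594,
    "metabolite_Y": 310.1412,
}
observed_mz = [255.0663, 180.0522, 311.1491]

def ppm_error(observed, theoretical):
    return (observed - theoretical) / theoretical * 1e6

def match(observed_mz, predicted, tol_ppm=10.0):
    """Return (name, m/z) pairs whose [M+H]+ m/z matches within tol_ppm."""
    hits = []
    for name, mass in predicted.items():
        mh = mass + PROTON
        for mz in observed_mz:
            if abs(ppm_error(mz, mh)) <= tol_ppm:
                hits.append((name, mz))
    return hits

hits = match(observed_mz, predicted)
```

Real pipelines would also consider other adducts, isotope patterns, and retention time, but the ppm window is the basic gate.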

8.
Trials ; 25(1): 604, 2024 Sep 10.
Article in English | MEDLINE | ID: mdl-39252100

ABSTRACT

BACKGROUND: The field of digital mental health has followed an exponential growth trajectory in recent years. While the evidence base has increased significantly, its adoption within health and care services has been slowed by several challenges, including a lack of knowledge from researchers regarding how to navigate the pathway for mandatory regulatory approval. This paper details the steps that a team must take to achieve the required approvals to carry out a research study using a novel digital mental health intervention. We used a randomised controlled trial of a digital mental health intervention called STOP (Successful Treatment of Paranoia) as a worked example. METHODS: The methods section explains the two main objectives that are required to achieve regulatory approval (MHRA Notification of No Objection) and the detailed steps involved in each, as carried out for the STOP trial. First, the existing safety of digital mental health interventions must be demonstrated. This can draw on literature reviews and any feasibility/pilot safety data, and requires a risk management plan. Second, a detailed plan to further evaluate the safety of the digital mental health intervention is needed. As part of this, we describe the STOP study's development of a framework for categorising adverse events and, based on this framework, a tool to collect adverse event data. RESULTS: We present literature review results, safety-related feasibility study findings and the full risk management plan for STOP, which addressed 26 possible hazards and included the 6-point scales developed to quantify the probability and severity of typical risks involved when a psychiatric population receives a digital intervention without the direct support of a therapist. We also present an Adverse Event Category Framework for Digital Therapeutic Devices, and the Adverse Events Checklist constructed from it, which assesses 15 different categories of adverse events and was used in the STOP trial.
CONCLUSIONS: The example shared in this paper serves as a guide for academics and professionals working in the field of digital mental health. It provides insights into the safety assessment requirements of regulatory bodies when a clinical investigation of a digital mental health intervention is proposed. Methods, scales and tools that could easily be adapted for use in other similar research are presented, with the expectation that these will assist other researchers in the field seeking regulatory approval for digital mental health products.


Subject(s)
Mental Health; Humans; Patient Safety; Research Design; Risk Assessment; Treatment Outcome; Risk Factors; Telemedicine
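A probability x severity risk matrix of the kind described above (6-point scales for each dimension) can be sketched as follows; the hazard names and scores are illustrative, not entries from the STOP risk management plan.

```python
# Each hazard gets probability and severity scores on 6-point scales
# (1 = lowest, 6 = highest); hazards and scores here are illustrative.
hazards = {
    "distress triggered by intervention content": (3, 4),
    "worsening of symptoms going unnoticed": (2, 5),
    "data breach of sensitive responses": (1, 6),
}

def risk_score(probability, severity):
    """Rank hazards by the product of the two 6-point ratings."""
    assert 1 <= probability <= 6 and 1 <= severity <= 6
    return probability * severity

ranked = sorted(hazards, key=lambda h: risk_score(*hazards[h]), reverse=True)
```

Scoring probability and severity separately, then ranking by their product, is a standard risk-matrix device for prioritizing mitigations.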
9.
J Proteome Res ; 2024 Sep 10.
Article in English | MEDLINE | ID: mdl-39254081

ABSTRACT

The FragPipe computational proteomics platform is gaining widespread popularity among the proteomics research community because of its fast processing speed and user-friendly graphical interface. Although FragPipe produces well-formatted output tables that are ready for analysis, there is still a need for an easy-to-use downstream statistical analysis and visualization tool. FragPipe-Analyst addresses this need by providing an R Shiny web server to assist FragPipe users in conducting downstream analyses of the resulting quantitative proteomics data. It supports major quantification workflows, including label-free quantification, tandem mass tags, and data-independent acquisition. FragPipe-Analyst offers a range of useful functionalities, such as various missing-value imputation options, data quality control, unsupervised clustering, differential expression (DE) analysis using Limma, and gene ontology and pathway enrichment analysis using Enrichr. To support advanced analysis and customized visualizations, we also developed FragPipeAnalystR, an R package encompassing all FragPipe-Analyst functionalities and extended to support site-specific analysis of post-translational modifications (PTMs). FragPipe-Analyst and FragPipeAnalystR are both open-source and freely available.
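A minimal stand-in for the differential expression step can be sketched on simulated data. FragPipe-Analyst uses Limma's moderated statistics; this sketch substitutes a plain two-sample t-test with Benjamini-Hochberg correction, so it illustrates the shape of the analysis rather than the tool's actual method.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated log2 protein intensities: 50 proteins x 3 replicates per group,
# with a 4-fold (log2FC = 2) increase spiked into the first 5 proteins.
n_prot = 50
ctrl = rng.normal(20.0, 0.3, size=(n_prot, 3))
treat = rng.normal(20.0, 0.3, size=(n_prot, 3))
treat[:5] += 2.0

log2fc = treat.mean(axis=1) - ctrl.mean(axis=1)
pvals = stats.ttest_ind(treat, ctrl, axis=1).pvalue

# Benjamini-Hochberg step-up adjustment of the raw p-values.
order = np.argsort(pvals)
scaled = pvals[order] * n_prot / (np.arange(n_prot) + 1)
adj = np.minimum.accumulate(scaled[::-1])[::-1]
qvals = np.empty(n_prot)
qvals[order] = np.clip(adj, 0.0, 1.0)

# Candidate DE proteins: FDR < 5% and at least ~1.4-fold change.
significant = np.where((qvals < 0.05) & (np.abs(log2fc) > 0.5))[0]
```

Limma's empirical-Bayes variance moderation generally behaves better than the plain t-test at these small replicate counts, which is why tools like FragPipe-Analyst use it.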

10.
Diagn Interv Radiol ; 2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39221690

ABSTRACT

PURPOSE: Unstructured, free-text dictation (FT), the current standard in breast magnetic resonance imaging (MRI) reporting, is considered time-consuming and prone to error. The purpose of this study is to assess the usability and performance of a novel, software-based guided reporting (GR) strategy in breast MRI. METHODS: Eighty examinations previously evaluated for a clinical indication (e.g., mass and focus/non-mass enhancement) with FT were reevaluated by three specialized radiologists using GR. Each radiologist had a different number of cases (R1, n = 24; R2, n = 20; R3, n = 36). Usability was assessed by subjective feedback, and quality was assessed by comparing the completeness of automatically generated GR reports with that of their FT counterparts. Errors in GR were categorized and analyzed for debugging with a final software version. Combined reading and reporting times and learning curves were analyzed. RESULTS: Usability was rated high by all readers. No non-sense, omission/commission, or translational errors were detected with the GR method. Spelling and grammar errors were observed in 3/80 patient reports (3.8%) with GR (exclusively in the discussion section) and in 36/80 patient reports (45%) with FT. Between FT and GR, 41 patient reports revealed no content differences, 33 revealed minor differences, and 6 revealed major differences that resulted in changes in treatment. The errors in all patient reports with major content differences were categorized as content omission errors caused by improper software operation (n = 2) or by missing content in software v. 0.8 displayable with v. 1.7 (n = 4). The mean combined reading and reporting time was 576 s (standard deviation: 327 s; min: 155 s; max: 1,517 s). The mean times for each reader were 485, 557, and 754 s, and the respective learning curves evaluated by regression models revealed statistically significant slopes (P = 0.002; P = 0.0002; P < 0.0001). 
Overall times were shorter compared with external references that used FT. The mean combined reading and reporting time of MRI examinations using FT was 1,043 s and decreased by 44.8% with GR. CONCLUSION: GR allows for complete reporting with minimized error rates and reduced combined reading and reporting times. The streamlining of the process for the readers in this study, evidenced by lower reading times, indicates that GR can be learned quickly. Reducing reporting errors leads to fewer therapeutic faults and lawsuits against radiologists. It is known that delays in radiology reporting hinder early treatment and lead to poorer patient outcomes. CLINICAL SIGNIFICANCE: While the number of scans and images per examination is continuously rising, staff shortages create a bottleneck in radiology departments. The IT-based GR method can be a major boon, improving radiologist efficiency, report quality, and the quality of simultaneously generated data.
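The learning-curve analysis described above, regressing combined reading and reporting time on case order and testing the slope, can be sketched as follows; the times are illustrative values, not study data.

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical combined reading-and-reporting times (seconds) for 20
# consecutive cases read by one radiologist; illustrative values only.
case_index = np.arange(1, 21)
noise = np.array([30, -20, 15, -10, 25, -30, 10, -5, 20, -15,
                  5, -25, 30, -10, 15, -20, 10, -5, 25, -15], dtype=float)
times = 900.0 - 25.0 * case_index + noise

# A significantly negative slope over case order indicates a learning effect:
# times shrink as the reader gains experience with the software.
fit = linregress(case_index, times)
```

`fit.slope` and `fit.pvalue` correspond to the regression slopes and P values reported per reader in the study.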

11.
Article in English | MEDLINE | ID: mdl-39221961

ABSTRACT

Mass spectrometry imaging (MSI) provides information about the spatial localization of molecules in complex samples with high sensitivity and molecular selectivity. Although point-wise data acquisition, in which mass spectra are acquired at predefined points in a grid pattern, is common in MSI, several MSI techniques use line-wise data acquisition. In line-wise mode, the imaged surface is continuously sampled along consecutive parallel lines and MSI data are acquired as a collection of line scans across the sample. Furthermore, aside from the standard imaging mode in which full mass spectra are acquired, other acquisition modes have been developed to enhance molecular specificity, enable separation of isobaric and isomeric species, and improve sensitivity to facilitate the imaging of low abundance species. These methods, including MS/MS-MSI in both MS2 and MS3 modes, multiple-reaction monitoring (MRM)-MSI, and ion mobility spectrometry (IMS)-MSI have all demonstrated their capabilities, but their broader implementation is limited by the existing MSI analysis software. Here, we present MSIGen, an open-source Python package for the visualization of MSI experiments performed in line-wise acquisition mode containing MS1, MS2, MRM, and IMS data, which is available at https://github.com/LabLaskin/MSIGen. The package supports multiple vendor-specific and open-source data formats and contains tools for targeted extraction of ion images, normalization, and exportation as images, arrays, or publication-style images. MSIGen offers multiple interfaces, allowing for accessibility and easy integration with other workflows. Considering its support for a wide variety of MSI imaging modes and vendor formats, MSIGen is a valuable tool for the visualization and analysis of MSI data.
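The basic operation MSIGen automates, targeted extraction of an ion image from line-wise MSI data, can be sketched generically; this is not MSIGen's API, and the spectra and m/z tolerance below are toy values.

```python
import numpy as np

# Line-wise MSI data: one list per line scan, one centroided spectrum
# (m/z array, intensity array) per pixel. Toy values for illustration.
line_scans = [
    [(np.array([104.10, 184.07]), np.array([5.0, 100.0])),
     (np.array([104.10, 184.07]), np.array([7.0, 250.0]))],
    [(np.array([184.07]), np.array([400.0])),
     (np.array([104.10]), np.array([9.0]))],
]

def ion_image(line_scans, target_mz, tol=0.01):
    """Sum intensity within +/- tol of target_mz at every pixel; returns a
    2D array with one row per line scan."""
    rows = []
    for line in line_scans:
        row = [float(inten[np.abs(mz - target_mz) <= tol].sum())
               for mz, inten in line]
        rows.append(row)
    return np.array(rows)

img = ion_image(line_scans, target_mz=184.07)
tic = np.array([[float(inten.sum()) for _, inten in line] for line in line_scans])
norm = np.divide(img, tic, out=np.zeros_like(img), where=tic > 0)  # TIC-normalized
```

Targeted extraction plus normalization (here against the total ion current) is the core of turning line scans into an image; real tools add alignment of unequal line lengths, other normalizations, and export options.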

12.
Heliyon ; 10(16): e35937, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39247305

ABSTRACT

The growing demand for easily accessible healthcare in recent years has fuelled the digitization of healthcare services. Hospital Management System (HMS) software stands out as a comprehensive solution among the software systems and tools that hospitals and clinics are developing in line with this trend. In this paper, we propose an approach for investigating software of this kind, which must effectively manage many facets of hospital operations. We characterise HMS software as a special type of batch-arrival retrial queueing system (QS) that can handle both ordinary and priority patient demands and permits patient resistance (balking) and departure (reneging) in specific circumstances. The proposed model is additionally deployed within the framework of a Bernoulli working vacation. The supplementary variable technique (SVT) is utilised to obtain the necessary results, and ANFIS, a soft computing tool, is used to validate the analytical results. Finally, this study seeks to enhance the cost-effectiveness of software creation by employing four distinct optimization methods, aiming to achieve optimal efficiency in resource utilization.
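A greatly simplified simulation conveys the queueing ingredients named above (balking and reneging). The paper's analytical model, a batch-arrival retrial queue with priorities and Bernoulli working vacations, is far richer than this single-server FIFO sketch, and all rates and rules below are illustrative assumptions.

```python
import random

random.seed(7)

# A heavily simplified single-server FIFO queue with balking and reneging.
ARRIVAL_RATE = 1.0      # mean arrivals per unit time (Poisson)
SERVICE_RATE = 1.2      # mean services per unit time (exponential)
BALK_PROB = 0.3         # chance an arrival refuses to join a busy system
PATIENCE = 3.0          # wait beyond which a joined request abandons (reneges)
N = 10_000              # arrivals to simulate

free_at = 0.0           # time at which the single server next becomes idle
t = 0.0
served = balked = reneged = 0
for _ in range(N):
    t += random.expovariate(ARRIVAL_RATE)
    wait = max(0.0, free_at - t)
    if wait > 0 and random.random() < BALK_PROB:
        balked += 1     # balk: never joins the queue
    elif wait > PATIENCE:
        reneged += 1    # renege: joins, but leaves before reaching the server
    else:
        free_at = max(free_at, t) + random.expovariate(SERVICE_RATE)
        served += 1
```

Every arrival is served, balks, or reneges, so the three counters always sum to the number of arrivals; the analytical SVT treatment derives such quantities in closed form instead of by simulation.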

13.
Sci Rep ; 14(1): 20722, 2024 09 05.
Article in English | MEDLINE | ID: mdl-39237737

ABSTRACT

We here introduce Ensemble Optimizer (EnOpt), a machine-learning tool to improve the accuracy and interpretability of ensemble virtual screening (VS). Ensemble VS is an established method for predicting protein/small-molecule (ligand) binding. Unlike traditional VS, which focuses on a single protein conformation, ensemble VS better accounts for protein flexibility by predicting binding to multiple protein conformations. Each compound is thus associated with a spectrum of scores (one score per protein conformation) rather than a single score. To effectively rank and prioritize the molecules for further evaluation (including experimental testing), researchers must select which protein conformations to consider and how best to map each compound's spectrum of scores to a single value, decisions that are system-specific. EnOpt uses machine learning to address these challenges. We perform benchmark VS to show that for many systems, EnOpt ranking distinguishes active compounds from inactive or decoy molecules more effectively than traditional ensemble VS methods. To encourage broad adoption, we release EnOpt free of charge under the terms of the MIT license.


Subject(s)
Machine Learning; Molecular Docking Simulation; Proteins; Molecular Docking Simulation/methods; Proteins/chemistry; Proteins/metabolism; Protein Binding; Ligands; Protein Conformation; Software
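The aggregation problem EnOpt addresses can be illustrated with two fixed mappings, best score and ensemble mean, that traditional ensemble VS typically uses; EnOpt instead learns the mapping, which this sketch does not reproduce. Scores are illustrative (more negative = more favorable).

```python
import numpy as np

# Docking scores for 3 compounds across 3 protein conformations
# (illustrative values; more negative = more favorable).
scores = np.array([
    [-9.1, -7.2, -8.5],   # compound 0
    [-6.0, -6.1, -5.9],   # compound 1
    [-8.8, -4.0, -5.0],   # compound 2
])

best = scores.min(axis=1)        # best single conformation per compound
mean = scores.mean(axis=1)       # average over the whole ensemble
rank_by_best = np.argsort(best)  # most favorable compounds first
```

Note how the two mappings disagree on compound 2 (a strong hit against one conformation but weak elsewhere); choosing or learning the right mapping is exactly the system-specific decision the abstract describes.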
14.
BioData Min ; 17(1): 31, 2024 Sep 05.
Article in English | MEDLINE | ID: mdl-39238044

ABSTRACT

Genome-wide association studies (GWAS) have revolutionized our understanding of the genetic architecture of complex traits and diseases. GWAS summary statistics have become essential tools for various genetic analyses, including meta-analysis, fine-mapping, and risk prediction. However, the increasing number of GWAS summary statistics and the diversity of software tools available for their analysis can make it challenging for researchers to select the most appropriate tools for their specific needs. This systematic review aims to provide a comprehensive overview of the currently available software tools and databases for GWAS summary statistics analysis. We conducted a comprehensive literature search to identify relevant software tools and databases. We categorized the tools and databases by their functionality, including data management, quality control, single-trait analysis, and multiple-trait analysis. We also compared the tools and databases based on their features, limitations, and user-friendliness. Our review identified a total of 305 functioning software tools and databases dedicated to GWAS summary statistics, each with unique strengths and limitations. We provide descriptions of the key features of each tool and database, including their input/output formats, data types, and computational requirements. We also discuss the overall usability and applicability of each tool for different research scenarios. This comprehensive review will serve as a valuable resource for researchers who are interested in using GWAS summary statistics to investigate the genetic basis of complex traits and diseases. By providing a detailed overview of the available tools and databases, we aim to facilitate informed tool selection and maximize the effectiveness of GWAS summary statistics analysis.
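A typical quality-control step that the reviewed tools automate, filtering summary-statistics rows with invalid p-values, non-SNP allele codes, or missing effect sizes, can be sketched with toy records as follows.

```python
# Toy GWAS summary-statistics rows; real files carry many more columns.
rows = [
    {"snp": "rs1", "a1": "A", "a2": "G", "beta": 0.12, "p": 3e-8},
    {"snp": "rs2", "a1": "T", "a2": "C", "beta": None, "p": 0.20},   # missing effect
    {"snp": "rs3", "a1": "I", "a2": "D", "beta": 0.05, "p": 0.01},   # indel codes
    {"snp": "rs4", "a1": "C", "a2": "T", "beta": -0.30, "p": 1.5},   # invalid p
]

def passes_qc(row):
    """Keep rows with ACGT alleles, a present effect size, and p in (0, 1]."""
    ok_alleles = {"A", "C", "G", "T"}
    return (row["beta"] is not None
            and row["a1"] in ok_alleles and row["a2"] in ok_alleles
            and 0.0 < row["p"] <= 1.0)

clean = [r for r in rows if passes_qc(r)]
```

Dedicated tools add far more (allele-frequency checks, genome-build harmonization, strand flips), but this is the flavor of the "quality control" category in the review.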

15.
Bioinformatics ; 2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39240327

ABSTRACT

SUMMARY: We introduce a unified Python package for the prediction of protein biophysical properties, streamlining previous tools developed by the Bio2Byte research group. This suite facilitates comprehensive assessments of protein characteristics, incorporating predictors for backbone and sidechain dynamics, local secondary structure propensities, early folding, long disorder, beta-sheet aggregation and FUS-like phase separation. Our package significantly eases the integration and execution of these tools, enhancing accessibility for both computational and experimental researchers. AVAILABILITY AND IMPLEMENTATION: The suite is available on the Python Package Index (PyPI): https://pypi.org/project/b2bTools/ and Bioconda: https://bioconda.github.io/recipes/b2btools/README.html for Linux and macOS systems, with Docker images hosted on Biocontainers: https://quay.io/repository/biocontainers/b2btools?tab=tags&tag=latest and Docker Hub: https://hub.docker.com/u/bio2byte. Online deployments are available on Galaxy Europe: https://usegalaxy.eu/root?tool_id=b2btools_single_sequence and our online server: https://bio2byte.be/b2btools/. SUPPLEMENTARY INFORMATION: Supplementary information is available at Bioinformatics online.

16.
bioRxiv ; 2024 Aug 19.
Article in English | MEDLINE | ID: mdl-39229008

ABSTRACT

The rapid expansion of multi-omics data has transformed biological research, offering unprecedented opportunities to explore complex genomic relationships across diverse organisms. However, the vast volume and heterogeneity of these datasets present significant challenges for analysis. Here we introduce SocialGene, a comprehensive software suite designed to collect, analyze, and organize multi-omics data into structured knowledge graphs, able to handle anything from small projects to repository-scale analyses. Originally developed to enhance genome mining for natural product drug discovery, SocialGene has been effective across various applications, including functional genomics, evolutionary studies, and systems biology. SocialGene's concerted Python and Nextflow libraries streamline data ingestion, manipulation, aggregation, and analysis, culminating in a custom Neo4j database. The software not only facilitates the exploration of genomic synteny but also provides a foundational knowledge graph supporting the integration of additional diverse datasets and the development of advanced search engines and analyses. This manuscript introduces some of SocialGene's capabilities through brief case studies, including targeted genome mining for drug discovery, accelerated searches for similar and distantly related biosynthetic gene clusters in biobank-available organisms, integration of chemical and analytical data, and more. SocialGene is free, open-source, MIT-licensed, designed for adaptability and extension, and available from github.com/socialgene.
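The knowledge-graph idea can be illustrated with a toy typed-edge list: genomic entities become nodes and questions become traversals. The node names and edge types below are illustrative, not SocialGene's Neo4j schema.

```python
# Toy knowledge graph: (source, edge_type, target) triples.
edges = [
    ("assembly_1", "ENCODES", "bgc_A"),
    ("assembly_2", "ENCODES", "bgc_B"),
    ("bgc_A", "SIMILAR_TO", "bgc_B"),
]

def neighbors(node, edge_type=None):
    """Nodes connected to `node`, optionally restricted to one edge type."""
    out = set()
    for src, typ, dst in edges:
        if edge_type and typ != edge_type:
            continue
        if src == node:
            out.add(dst)
        elif dst == node:
            out.add(src)
    return out

# "Which assemblies encode a cluster similar to bgc_A?" becomes a
# two-hop traversal: SIMILAR_TO, then ENCODES.
similar = neighbors("bgc_A", "SIMILAR_TO")
hits = {a for b in similar for a in neighbors(b, "ENCODES")}
```

In a graph database such as Neo4j, the same question is a short declarative query over typed relationships rather than an explicit loop.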

17.
Eur Heart J Imaging Methods Pract ; 2(1): qyae061, 2024 Jan.
Article in English | MEDLINE | ID: mdl-39224103

ABSTRACT

Aims: Speckle tracking echocardiography increasingly supports left atrial (LA) strain (LAS) analysis for the diagnosis and prognosis of various clinical conditions. Prior limitations, such as the absence of dedicated software, have been overcome by validated ventricular-based software. Newly automated real-time and offline LA-specific software has now become available on echocardiographs and dedicated workstations. This study aimed to compare LA strain measures obtained from the new fully automated software vs. traditional semi-automated ventricular-based methods in different groups of patients. Methods and results: Two operators acquired LA images in a mixed population of healthy individuals and patients with pressure overload (hypertension and aortic stenosis) or pressure-volume overload (mitral regurgitation and heart failure). Subjects with prosthetic valves, heart transplant, or atrial fibrillation were excluded. Strain analysis was performed twice offline, with the old semi-automated ventricular-based software and with the new LA-dedicated software. LAS was then measured online on the scanning echocardiograph. Overall, 100 patients were analysed (41 healthy subjects, 28 pressure overload, 31 volume overload). LAS proved to be highly reproducible with both software packages. The dedicated method exhibited slightly superior inter- and intra-operator reproducibility. The online software showed nearly perfect reproducibility against the offline software (intraclass correlation coefficient = 0.99 [0.99; 1.00]) while saving an average of ∼30 s per examination. Conclusion: The recently developed fully automated software for dedicated LAS analysis demonstrates excellent inter- and intra-operator reproducibility, making it a reliable and efficient strain calculation method for routine clinical practice. Online LAS calculation offers the additional advantage of time efficiency.
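The intraclass correlation coefficient used above to compare online and offline measurements can be computed from a two-way ANOVA decomposition. This sketch implements a consistency-type single-measure ICC(3,1) on illustrative paired values, which is one common ICC variant; the study does not state which form it used.

```python
import numpy as np

# Paired strain-style measurements: rows = subjects, columns = the two
# methods being compared (illustrative values only).
x = np.array([
    [30.1, 30.3],
    [25.2, 25.0],
    [41.0, 40.7],
    [18.5, 18.9],
    [33.3, 33.1],
])
n, k = x.shape
grand = x.mean()

# Two-way ANOVA sums of squares.
ss_subj = k * ((x.mean(axis=1) - grand) ** 2).sum()
ss_rater = n * ((x.mean(axis=0) - grand) ** 2).sum()
ss_err = ((x - grand) ** 2).sum() - ss_subj - ss_rater

msb = ss_subj / (n - 1)                      # between-subjects mean square
mse = ss_err / ((n - 1) * (k - 1))           # residual mean square
icc31 = (msb - mse) / (msb + (k - 1) * mse)  # ICC(3,1), consistency
```

Large between-subject variance with tiny between-method disagreement drives the ICC toward 1, which is the pattern behind the 0.99 reported above.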

18.
Stud Health Technol Inform ; 317: 251-259, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39234729

ABSTRACT

INTRODUCTION: Drawing tasks are an elementary component of psychological assessment in the evaluation of mental health. With the rise of digitalization, not only in psychology but in healthcare in general, digital drawing tools (dDTs) have also been developed for this purpose. This scoping review aims at summarizing the state of the art of dDTs available to assess mental health conditions in people above preschool age. METHODS: PubMed, PsycInfo, PsycArticles, CINAHL, and Psychology and Behavioral Sciences Collection were searched for dDTs from 2000 onwards. The focus was on dDTs that evaluate not only the final drawing but also process data. RESULTS: After applying the search and selection strategy, a total of 37 articles, comprising unique dDTs, remained for data extraction. Around 75% of these articles were published after 2014, and most of them target adults (86.5%). In addition, dDTs were mainly used in two areas: tremor detection and assessment of cognitive states, utilizing, for example, the Spiral Drawing Test and the Clock Drawing Test. CONCLUSION: Early detection of mental disorders is an increasingly important field in healthcare. Through the integration of digital and art-based solutions, this area could expand into an interdisciplinary science. This review shows that the first steps in this direction have already been taken and that the possibilities for further research, e.g., on the optimized application of dDTs, are still open.
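The distinguishing feature of the dDTs reviewed above is access to process data, not just the finished drawing. As a hedged sketch of what a process-level feature might look like (the sampling format and units are hypothetical; real dDTs typically also log pressure, tilt, and pen-up events), instantaneous pen speed can be derived from timestamped coordinates:

```python
import math

def stroke_speeds(samples):
    """Instantaneous pen speed between consecutive samples.
    samples: list of (t, x, y) tuples from a digital drawing tool,
    with t in ms and x/y in mm (hypothetical format)."""
    speeds = []
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dt = t1 - t0
        if dt > 0:  # skip duplicate timestamps
            speeds.append(math.hypot(x1 - x0, y1 - y0) / dt)
    return speeds

# A steady horizontal stroke sampled every 10 ms:
steady = [(0, 0, 0), (10, 1, 0), (20, 2, 0), (30, 3, 0)]
print(stroke_speeds(steady))  # → [0.1, 0.1, 0.1] (constant 0.1 mm/ms)
```

Variability in such speed profiles is the kind of signal that spiral-drawing tremor detection and clock-drawing cognitive assessment build on.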


Subject(s)
Mental Disorders , Humans , Mental Disorders/diagnosis , Mental Health , Adult , Art
19.
Front Neurosci ; 18: 1385847, 2024.
Article in English | MEDLINE | ID: mdl-39221005

ABSTRACT

Diffusion-weighted imaging (DWI) is the primary method to investigate macro- and microstructure of neural white matter in vivo. DWI can be used to identify and characterize individual-specific white matter bundles, enabling precise analyses of hypothesis-driven connections in the brain and bridging the relationships between brain structure, function, and behavior. However, cortical endpoints of bundles may span larger areas than what a researcher is interested in, challenging presumptions that bundles are specifically tied to certain brain functions. Functional MRI (fMRI) can be integrated to further refine bundles such that they are restricted to functionally-defined cortical regions. Analyzing properties of these Functional Sub-Bundles (FSuB) increases precision and interpretability of results when studying neural connections supporting specific tasks. Several parameters of DWI and fMRI analyses, ranging from data acquisition to processing, can impact the efficacy of integrating functional and diffusion MRI. Here, we discuss the applications of the FSuB approach, suggest best practices for acquiring and processing neuroimaging data towards this end, and introduce the FSuB-Extractor, a flexible open-source software for creating FSuBs. We demonstrate our processing code and the FSuB-Extractor on an openly-available dataset, the Natural Scenes Dataset.
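FSuB-Extractor's actual interface is not reproduced here; conceptually, restricting a bundle to a functionally defined region amounts to keeping only the streamlines whose endpoints fall inside the fMRI-derived ROI. A schematic sketch under that assumption (voxel coordinates and the endpoint-only criterion are illustrative simplifications):

```python
def restrict_to_roi(streamlines, roi_voxels):
    """Keep streamlines with at least one endpoint inside the ROI.
    streamlines: lists of (i, j, k) voxel coordinates along each track.
    roi_voxels: set of (i, j, k) voxels in the functionally defined region."""
    kept = []
    for track in streamlines:
        if track[0] in roi_voxels or track[-1] in roi_voxels:
            kept.append(track)
    return kept

# Toy bundle: two tracks terminate in the fMRI-defined ROI, one does not.
roi = {(10, 20, 30), (10, 21, 30)}
bundle = [
    [(0, 0, 0), (5, 10, 15), (10, 20, 30)],   # ends in ROI -> kept
    [(1, 1, 1), (6, 11, 16), (10, 21, 30)],   # ends in ROI -> kept
    [(2, 2, 2), (7, 12, 17), (12, 25, 35)],   # ends outside -> dropped
]
print(len(restrict_to_roi(bundle, roi)))  # → 2
```

Real pipelines work in continuous scanner or template space with coregistration between the diffusion and functional images; the voxel-set intersection above only conveys the filtering idea.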

20.
Environ Monit Assess ; 196(10): 874, 2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39222246

ABSTRACT

The present study deals with the assessment of different physicochemical parameters (pH, electrical conductivity (EC), turbidity, total dissolved solids (TDS), and dissolved oxygen) in different surface waters (pond, river, and canal water) across four seasons, viz. March, June, September, and December 2023. The research endeavors to assess the impact of a cationic polyelectrolyte, specifically poly(diallyl dimethyl ammonium chloride) (PDADMAC), utilized as a coagulation aid in conjunction with lime for water treatment. Employing a conventional jar test apparatus, turbidity removal from diverse water samples is examined. Furthermore, the samples undergo characterization utilizing X-ray diffraction (XRD) and scanning electron microscopy (SEM) techniques. The study also conducts correlation analyses on parameters such as EC, pH, TDS, turbidity of raw water, polyelectrolyte dosage, and percentage of turbidity removal across different water sources. Utilizing the Statistical Package for Social Science (SPSS) software, these analyses aim to establish robust relationships among initial turbidity, temperature, percentage of turbidity removal, dosage of coagulant aid, EC, and TDS in pond water, river water, and canal water. A strong positive correlation was found between the percentage of turbidity removal and the initial turbidity of all surface waters. However, a negative correlation was observed between the polyelectrolyte dosage and the raw water's turbidity. By elucidating these correlations, the study contributes to a deeper understanding of the effectiveness of PDADMAC and lime in water treatment processes across diverse environmental conditions.
This research enhances our comprehension of surface water treatment methodologies and provides valuable insights for optimizing water treatment strategies to address the challenges posed by varying water sources and seasonal fluctuations.
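The correlation analyses described above (run in SPSS) are standard Pearson product-moment correlations and can be reproduced in any environment. A minimal sketch relating initial turbidity to percentage removal (the jar-test values below are hypothetical, not the study's data):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical jar-test results: higher initial turbidity (NTU) tends to
# yield a higher percentage of turbidity removal, as the study reports.
initial_ntu = [12.0, 35.0, 60.0, 88.0, 120.0]
removal_pct = [70.1, 78.4, 85.0, 90.2, 94.5]
print(round(pearson_r(initial_ntu, removal_pct), 3))
```

A coefficient near +1 here corresponds to the "strong positive correlation" between initial turbidity and removal efficiency described in the abstract; the reported negative correlation with polyelectrolyte dosage would appear as r < 0 in the same calculation.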


Subject(s)
Calcium Compounds , Oxides , Quaternary Ammonium Compounds , Rivers , Seasons , Water Purification , Oxides/chemistry , Calcium Compounds/chemistry , Quaternary Ammonium Compounds/chemistry , Quaternary Ammonium Compounds/analysis , Rivers/chemistry , Water Purification/methods , Polyethylenes/chemistry , Water Pollutants, Chemical/analysis , Ponds/chemistry , Environmental Monitoring/methods