Results 1 - 20 of 19,307
1.
J Biomed Opt ; 30(Suppl 1): S13703, 2025 Jan.
Article in English | MEDLINE | ID: mdl-39034959

ABSTRACT

Significance: Standardization of fluorescence molecular imaging (FMI) is critical for ensuring quality control in guiding surgical procedures. To accurately evaluate system performance, two metrics, the signal-to-noise ratio (SNR) and contrast, are widely employed. However, there is currently no consensus on how these metrics can be computed. Aim: We aim to examine the impact of SNR and contrast definitions on the performance assessment of FMI systems. Approach: We quantified the SNR and contrast of six near-infrared FMI systems by imaging a multi-parametric phantom. Based on approaches commonly used in the literature, we quantified seven SNRs and four contrast values considering different background regions and/or formulas. Then, we calculated benchmarking (BM) scores and respective rank values for each system. Results: We show that the performance assessment of an FMI system changes depending on the background locations and the applied quantification method. For a single system, the different metrics can vary up to ∼35 dB (SNR), ∼8.65 a.u. (contrast), and ∼0.67 a.u. (BM score). Conclusions: The definition of precise guidelines for FMI performance assessment is imperative to ensure successful clinical translation of the technology. Such guidelines can also enable quality control for the already clinically approved indocyanine green-based fluorescence image-guided surgery.
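
To make the ambiguity concrete, the sketch below (not taken from the paper) computes two SNR variants and two contrast variants commonly seen in the optical-imaging literature on synthetic ROI values, and shows how the choice of background region alone shifts both metrics; the study's seven SNR and four contrast formulas are not reproduced here.

```python
import numpy as np

def snr_db(signal_roi, background_roi, subtract_background=True):
    """SNR in dB: 20*log10((mean_S - mean_B) / std_B) when subtract_background=True,
    otherwise 20*log10(mean_S / std_B). Both variants appear in the FMI literature."""
    mean_s = signal_roi.mean()
    mean_b = background_roi.mean()
    std_b = background_roi.std(ddof=1)
    numerator = mean_s - mean_b if subtract_background else mean_s
    return 20.0 * np.log10(numerator / std_b)

def contrast(signal_roi, background_roi, weber=False):
    """Contrast as a simple ratio mean_S / mean_B, or Weber contrast (mean_S - mean_B) / mean_B."""
    mean_s, mean_b = signal_roi.mean(), background_roi.mean()
    return (mean_s - mean_b) / mean_b if weber else mean_s / mean_b

# Synthetic ROIs: one fluorescent target and two candidate background regions.
rng = np.random.default_rng(0)
target = rng.normal(1000, 30, 500)
bg_near = rng.normal(120, 15, 500)   # background region close to the target
bg_far = rng.normal(60, 5, 500)      # background region far from the target

for label, bg in [("near background", bg_near), ("far background", bg_far)]:
    print(f"{label}: SNR = {snr_db(target, bg):.1f} dB, contrast = {contrast(target, bg):.2f}")
```

Running this shows both metrics change with the background region, which is exactly the dependence on background location that the study quantifies.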


Subject(s)
Benchmarking, Molecular Imaging, Optical Imaging, Imaging Phantoms, Signal-to-Noise Ratio, Molecular Imaging/methods, Molecular Imaging/standards, Optical Imaging/methods, Optical Imaging/standards, Computer-Assisted Image Processing/methods
2.
PLoS One ; 19(7): e0305856, 2024.
Article in English | MEDLINE | ID: mdl-38968250

ABSTRACT

Continual learning and few-shot learning are important frontiers in progress toward broader Machine Learning (ML) capabilities. Recently, there has been intense interest in combining both. One of the first examples to do so was the Continual few-shot Learning (CFSL) framework of Antoniou et al. (2020). In this study, we extend CFSL in two ways that capture a broader range of challenges, important for intelligent agent behaviour in real-world conditions. First, we increased the number of classes by an order of magnitude, making the results more comparable to standard continual learning experiments. Second, we introduced an 'instance test', which requires recognition of specific instances of classes, a capability of animal cognition that is usually neglected in ML. For an initial exploration of ML model performance under these conditions, we selected representative baseline models from the original CFSL work and added a model variant with replay. As expected, learning more classes is more difficult than in the original CFSL experiments, and interestingly, the way in which image instances and classes are presented affects classification performance. Surprisingly, accuracy in the baseline instance test is comparable to other classification tasks, but poor given significant occlusion and noise. The use of replay for consolidation substantially improves performance for both types of tasks, but particularly for the instance test.


Subject(s)
Benchmarking, Machine Learning, Animals, Algorithms
3.
Front Public Health ; 12: 1363957, 2024.
Article in English | MEDLINE | ID: mdl-38952740

ABSTRACT

Background and aims: Laboratory performance as a relative concept needs repetitive benchmarking for continuous improvement of laboratory procedures and medical processes. Benchmarking as such establishes reference levels as a basis for improvement efforts for healthcare institutions along the diagnosis cycle, with the patient at its center. But while this concept seems to be generally acknowledged in laboratory medicine, a lack of practical implementation hinders progress at a global level. The aim of this study was to examine the utility of a specific combination of indicators and a survey-based data collection approach, and to establish a global benchmarking dataset of laboratory performance for decision makers in healthcare institutions. Methods: The survey consisted of 44 items relating to laboratory operations in general and three subscales identified in previous studies. A global sample of laboratories was approached by trained professionals. Results were analyzed with standard descriptive statistics and exploratory factor analysis. Dimensional reduction of specific items was performed using confirmatory factor analysis, resulting in individual laboratory scores for the three subscales of "Operational performance," "Integrated clinical care performance," and "Financial sustainability" for the high-level concept of laboratory performance. Results and conclusions: In total, 920 laboratories from 55 countries across the globe participated in the survey, of which 401 were government hospital laboratories, 296 private hospital laboratories, and 223 commercial laboratories. Relevant results include the need for digitalization and automation along the diagnosis cycle. Formal quality management systems (ISO 9001, ISO 15189, etc.) need to be adopted more broadly to increase patient safety. Monitoring of key performance indicators (KPIs) relating to healthcare performance was generally low (in the range of 10-30% of laboratories overall), and as a particularly salient result, only 19% of laboratories monitored KPIs relating to speeding up diagnosis and treatment. Altogether, this benchmark elucidates current practice and has the potential to guide improvement efforts and standardization in quality and safety for patients and employees alike, as well as the sustainability of healthcare systems around the globe.


Subject(s)
Benchmarking, Humans, Surveys and Questionnaires, Clinical Laboratories/standards, Global Health
4.
Genome Biol ; 25(1): 172, 2024 07 01.
Article in English | MEDLINE | ID: mdl-38951922

ABSTRACT

BACKGROUND: Computational variant effect predictors offer a scalable and increasingly reliable means of interpreting human genetic variation, but concerns of circularity and bias have limited previous methods for evaluating and comparing predictors. Population-level cohorts of genotyped and phenotyped participants that have not been used in predictor training can facilitate an unbiased benchmarking of available methods. Using a curated set of human gene-trait associations with a reported rare-variant burden association, we evaluate the correlations of 24 computational variant effect predictors with associated human traits in the UK Biobank and All of Us cohorts. RESULTS: AlphaMissense outperformed all other predictors in inferring human traits based on rare missense variants in UK Biobank and All of Us participants. The overall rankings of computational variant effect predictors in these two cohorts showed a significant positive correlation. CONCLUSION: We describe a method to assess computational variant effect predictors that sidesteps the limitations of previous evaluations. This approach is generalizable to future predictors and could continue to inform predictor choice for personal and clinical genetics.
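
The reported cross-cohort agreement amounts to a rank correlation between per-predictor performance scores in the two cohorts; a minimal sketch with scipy, using hypothetical predictor names and scores (only AlphaMissense is named in the abstract):

```python
from scipy.stats import spearmanr

# Hypothetical per-cohort performance scores for a few predictors
# (the study evaluates 24; only AlphaMissense is named in the abstract).
ukb = {"AlphaMissense": 0.41, "PredictorA": 0.33, "PredictorB": 0.29, "PredictorC": 0.21}
aou = {"AlphaMissense": 0.38, "PredictorA": 0.30, "PredictorB": 0.31, "PredictorC": 0.18}

predictors = sorted(ukb)
rho, pval = spearmanr([ukb[p] for p in predictors], [aou[p] for p in predictors])
print(f"cross-cohort Spearman rho = {rho:.2f} (p = {pval:.3f})")
```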


Subject(s)
Benchmarking, Genetic Variation, Humans, Phenotype, Computational Biology/methods, Genotype
5.
Genome Biol ; 25(1): 169, 2024 07 01.
Article in English | MEDLINE | ID: mdl-38956606

ABSTRACT

BACKGROUND: Computational cell type deconvolution enables the estimation of cell type abundance from bulk tissues and is important for understanding the tissue microenvironment, especially in tumor tissues. With the rapid development of deconvolution methods, many benchmarking studies have been published aiming for a comprehensive evaluation of these methods. Benchmarking studies rely on cell-type-resolved single-cell RNA-seq data to create simulated pseudobulk datasets by adding individual cell types in controlled proportions. RESULTS: In our work, we show that the standard application of this approach, which uses randomly selected single cells, regardless of the intrinsic differences between them, generates synthetic bulk expression values that lack appropriate biological variance. We demonstrate why and how the current bulk simulation pipeline with random cells is unrealistic and propose a heterogeneous simulation strategy as a solution. The heterogeneously simulated bulk samples match up with the variance observed in real bulk datasets and therefore provide concrete benefits for benchmarking in several ways. We demonstrate that conceptual classes of deconvolution methods differ dramatically in their robustness to heterogeneity, with reference-free methods performing particularly poorly. For regression-based methods, the heterogeneous simulation provides an explicit framework to disentangle the contributions of reference construction and regression methods to performance. Finally, we perform an extensive benchmark of diverse methods across eight different datasets and find BayesPrism and a hybrid MuSiC/CIBERSORTx approach to be the top performers. CONCLUSIONS: Our heterogeneous bulk simulation method and the entire benchmarking framework are implemented in a user-friendly package https://github.com/humengying0907/deconvBenchmarking and https://doi.org/10.5281/zenodo.8206516, enabling further developments in deconvolution methods.
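
For context, a minimal sketch of the standard random-cell pseudobulk construction that the study critiques, assuming a counts matrix and per-cell labels; the proposed heterogeneous strategy itself is not reproduced here.

```python
import numpy as np

def random_pseudobulk(counts, cell_types, proportions, n_cells=500, rng=None):
    """Standard random-cell pseudobulk construction: draw n_cells cells according to
    the requested cell-type proportions and sum their counts into one bulk profile.
    counts: (n_cells_total, n_genes) matrix; cell_types: per-cell labels."""
    rng = rng or np.random.default_rng()
    chosen = []
    for cell_type, fraction in proportions.items():
        pool = np.flatnonzero(cell_types == cell_type)
        k = int(round(fraction * n_cells))
        chosen.append(rng.choice(pool, size=k, replace=True))
    return counts[np.concatenate(chosen)].sum(axis=0)

# Toy data: 300 cells x 50 genes, two cell types mixed at 70% / 30%.
rng = np.random.default_rng(1)
counts = rng.poisson(2.0, size=(300, 50))
labels = np.array(["T"] * 150 + ["B"] * 150)
bulk = random_pseudobulk(counts, labels, {"T": 0.7, "B": 0.3}, rng=rng)
print(bulk[:10])
```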


Subject(s)
Benchmarking, Single-Cell Analysis, Single-Cell Analysis/methods, Humans, Computer Simulation, RNA-Seq/methods, Computational Biology/methods
6.
Br Dent J ; 237(2): 142, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39060603
7.
Genes (Basel) ; 15(7)2024 Jul 16.
Article in English | MEDLINE | ID: mdl-39062704

ABSTRACT

The identification of structural variants (SVs) in genomic data represents an ongoing challenge because of difficulties in reliable SV calling leading to reduced sensitivity and specificity. We prepared high-quality DNA from 9 parent-child trios, who had previously undergone short-read whole-genome sequencing (Illumina platform) as part of the Genomics England 100,000 Genomes Project. We reanalysed the genomes using both Bionano optical genome mapping (OGM; 8 probands and one trio) and Nanopore long-read sequencing (Oxford Nanopore Technologies [ONT] platform; all samples). To establish a "truth" dataset, we asked whether rare proband SV calls (n = 234) made by the Bionano Access (version 1.6.1)/Solve software (version 3.6.1_11162020) could be verified by individual visualisation using the Integrative Genomics Viewer with either or both of the Illumina and ONT raw sequence. Of these, 222 calls were verified, indicating that Bionano OGM calls have high precision (positive predictive value 95%). We then asked what proportion of the 222 true Bionano SVs had been identified by SV callers in the other two datasets. In the Illumina dataset, sensitivity varied according to variant type, being high for deletions (115/134; 86%) but poor for insertions (13/58; 22%). In the ONT dataset, sensitivity was generally poor using the original Sniffles variant caller (48% overall) but improved substantially with use of Sniffles2 (36/40; 90% and 17/23; 74% for deletions and insertions, respectively). In summary, we show that the precision of OGM is very high. In addition, when applying the Sniffles2 caller, the sensitivity of SV calling using ONT long-read sequence data outperforms Illumina sequencing for most SV types.
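
The headline precision and sensitivity values follow directly from the counts quoted in the abstract; a quick recomputation:

```python
# The headline metrics in the abstract recomputed from the reported counts.
verified, called = 222, 234                       # Bionano OGM calls confirmed by visual inspection
print(f"OGM precision (PPV): {verified / called:.1%}")          # ~95%

sensitivity_by_caller = {
    "Illumina deletions": (115, 134),
    "Illumina insertions": (13, 58),
    "ONT Sniffles2 deletions": (36, 40),
    "ONT Sniffles2 insertions": (17, 23),
}
for label, (found, truth) in sensitivity_by_caller.items():
    print(f"{label}: sensitivity {found}/{truth} = {found / truth:.0%}")
```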


Subject(s)
Benchmarking, Nanopore Sequencing, Whole Genome Sequencing, Humans, Whole Genome Sequencing/methods, Whole Genome Sequencing/standards, Nanopore Sequencing/methods, Benchmarking/methods, Genomic Structural Variation/genetics, Chromosome Mapping/methods, Human Genome/genetics, Genomics/methods, Software, High-Throughput Nucleotide Sequencing/methods, High-Throughput Nucleotide Sequencing/standards, Female, Nanopores, Male, DNA Sequence Analysis/methods, DNA Sequence Analysis/standards
8.
BMC Public Health ; 24(1): 1790, 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38970046

ABSTRACT

BACKGROUND: Aboriginal and Torres Strait Islander communities in remote Australia have initiated bold policies for health-enabling stores. Benchmarking, a data-driven and facilitated 'audit and feedback' process with action planning, provides a potential strategy to strengthen and scale health-enabling best-practice adoption by remote community store directors/owners. We aim to co-design a benchmarking model with five partner organisations and test its effectiveness with Aboriginal and Torres Strait Islander community stores in remote Australia. METHODS: The study design is a pragmatic randomised controlled trial with consenting eligible stores (located in the very remote Northern Territory (NT) of Australia, serving as the primary grocery store for an Aboriginal community, and serviced by a Nutrition Practitioner with a study partner organisation). The Benchmarking model is informed by research evidence, purpose-built best-practice audit and feedback tools, and co-designed with partner organisation and community representatives. The intervention comprises two full benchmarking cycles (one per year, 2022/23 and 2023/24) of assessment, feedback, action planning and action implementation. Assessment of stores includes (i) adoption status of 21 evidence- and industry-informed health-enabling policies for remote stores, (ii) implementation of health-enabling best-practice using a purpose-built Store Scout App, (iii) price of a standardised healthy diet using the Aboriginal and Torres Strait Islander Healthy Diets ASAP protocol, and (iv) healthiness of food purchasing using sales data indicators. Partner organisations provide feedback reports and co-design action plans with stores. Control stores receive assessments and continue with usual retail practice. All stores provide weekly electronic sales data to assess the primary outcome: change in the ratio of free sugars (g) to energy (MJ) from all food and drinks purchased, baseline (July-December 2021) vs July-December 2023. DISCUSSION: We hypothesise that the benchmarking intervention can improve the adoption of health-enabling store policy and practice and reduce sales of unhealthy foods and drinks in remote community stores of Australia. This innovative research with remote Aboriginal and Torres Strait Islander communities can inform effective implementation strategies for healthy food retail more broadly. TRIAL REGISTRATION: ACTRN12622000596707, Protocol version 1.
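
A hedged sketch of how the primary-outcome indicator, free sugars (g) per energy (MJ) purchased, could be derived from a weekly sales export; the field names and figures are hypothetical, not the trial's actual data schema.

```python
from dataclasses import dataclass

@dataclass
class SaleLine:
    """One product line from a weekly sales export (field names are hypothetical)."""
    units_sold: int
    free_sugars_g_per_unit: float
    energy_mj_per_unit: float

def free_sugars_per_mj(sales):
    """Primary-outcome style indicator: total free sugars (g) per total energy (MJ) purchased."""
    total_sugars = sum(s.units_sold * s.free_sugars_g_per_unit for s in sales)
    total_energy = sum(s.units_sold * s.energy_mj_per_unit for s in sales)
    return total_sugars / total_energy

week = [SaleLine(120, 27.0, 0.63), SaleLine(80, 0.0, 1.10), SaleLine(40, 9.0, 0.80)]
print(f"{free_sugars_per_mj(week):.1f} g free sugars per MJ purchased this week")
```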


Subject(s)
Benchmarking, Healthy Diet, Food Supply, Humans, Australia, Australian Aboriginal and Torres Strait Islander Peoples, Commerce, Food Supply/standards, Rural Population, Randomized Controlled Trials as Topic
9.
Rev Lat Am Enfermagem ; 32: e4221, 2024.
Article in English, Spanish, Portuguese | MEDLINE | ID: mdl-38985044

ABSTRACT

OBJECTIVE: To map the content and features of mobile applications on the management of Diabetes Mellitus and their usability on the main operating systems. METHOD: Benchmarking research. The mapping of apps, content, and resources on the Play Store and App Store platforms was based on an adaptation of the Joanna Briggs Institute's scoping review framework. For the usability analysis, the apps were tested for two weeks and scored with the System Usability Scale instrument: scores between 50-67 points were considered borderline, 68-84 indicated products with acceptable usability, and 85 or above indicated excellent user acceptance; descriptive statistics were used for the analysis. RESULTS: The most prevalent contents were capillary blood glucose management, diet, oral drug therapy, and insulin therapy. As for resources, diaries and graphs were the most common. With regard to usability, two apps were considered to have excellent usability; 34 were products with acceptable usability; 29 had resources that may have some flaws but still met acceptable usability standards; and 6 had flaws and did not meet usability conditions. CONCLUSION: The content and resources of mobile applications address the fundamental points for managing Diabetes Mellitus with user-friendly resources and usability acceptable to users, and they have the potential to assist in the management of Diabetes Mellitus in patients' daily lives.
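
The usability bands quoted above correspond to the standard 0-100 System Usability Scale score; a sketch of the usual scoring rule for the 10-item questionnaire and the bands used in the study:

```python
def sus_score(responses):
    """Standard System Usability Scale score from ten Likert responses (1-5).
    Odd-numbered items contribute (response - 1); even-numbered items contribute
    (5 - response); the summed contributions are scaled by 2.5 to give 0-100."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses on a 1-5 scale")
    contributions = [(r - 1) if i % 2 == 0 else (5 - r) for i, r in enumerate(responses)]
    return 2.5 * sum(contributions)

def sus_band(score):
    """Bands used in the study: 50-67 borderline, 68-84 acceptable, >=85 excellent."""
    if score >= 85: return "excellent"
    if score >= 68: return "acceptable"
    if score >= 50: return "borderline"
    return "not acceptable"

example = [4, 2, 5, 1, 4, 2, 5, 2, 4, 1]
score = sus_score(example)
print(score, sus_band(score))   # 85.0 excellent
```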


Subject(s)
Benchmarking, Diabetes Mellitus, Mobile Applications, Humans, Mobile Applications/standards, Diabetes Mellitus/therapy
10.
NPJ Syst Biol Appl ; 10(1): 73, 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-38997321

ABSTRACT

Immunoglobulins (Ig), which exist either as B-cell receptors (BCR) on the surface of B cells or as antibodies when secreted, play a key role in the recognition and response to antigenic threats. The capability to jointly characterize the BCR and antibody repertoire is crucial for understanding human adaptive immunity. From peripheral blood, bulk BCR sequencing (bulkBCR-seq) currently provides the highest sampling depth, single-cell BCR sequencing (scBCR-seq) allows for paired chain characterization, and antibody peptide sequencing by tandem mass spectrometry (Ab-seq) provides information on the composition of secreted antibodies in the serum. Yet, it has not been benchmarked to what extent the datasets generated by these three technologies overlap and complement each other. To address this question, we isolated peripheral blood B cells from healthy human donors and sequenced BCRs at bulk and single-cell levels, in addition to utilizing publicly available sequencing data. Integrated analysis was performed on these datasets, resolved by replicates and across individuals. Simultaneously, serum antibodies were isolated, digested with multiple proteases, and analyzed with Ab-seq. Systems immunology analysis showed high concordance in repertoire features between bulk and scBCR-seq within individuals, especially when replicates were utilized. In addition, Ab-seq identified clonotype-specific peptides using both bulk and scBCR-seq library references, demonstrating the feasibility of combining scBCR-seq and Ab-seq for reconstructing paired-chain Ig sequences from the serum antibody repertoire. Collectively, our work serves as a proof-of-principle for combining bulk sequencing, single-cell sequencing, and mass spectrometry as complementary methods towards capturing humoral immunity in its entirety.


Subject(s)
B-Lymphocytes, Benchmarking, Proteomics, B-Cell Antigen Receptors, Single-Cell Analysis, Humans, B-Cell Antigen Receptors/genetics, B-Cell Antigen Receptors/immunology, Proteomics/methods, B-Lymphocytes/immunology, Single-Cell Analysis/methods, Antibodies/immunology, Antibodies/genetics, Genomics/methods, Tandem Mass Spectrometry/methods
11.
BMC Genomics ; 25(1): 679, 2024 Jul 08.
Article in English | MEDLINE | ID: mdl-38978005

ABSTRACT

BACKGROUND: Oxford Nanopore provides high-throughput sequencing platforms able to reconstruct complete bacterial genomes with 99.95% accuracy. However, even small levels of error can obscure the phylogenetic relationships between closely related isolates. Polishing tools have been developed to correct these errors, but it is uncertain if they obtain the accuracy needed for the high-resolution source tracking of foodborne illness outbreaks. RESULTS: We tested 132 combinations of assembly and short- and long-read polishing tools to assess their accuracy for reconstructing the genome sequences of 15 highly similar Salmonella enterica serovar Newport isolates from a 2020 onion outbreak. While long-read polishing alone improved accuracy, near-perfect accuracy (99.9999% accuracy or ~5 nucleotide errors across the 4.8 Mbp genome, excluding low-confidence regions) was only obtained by pipelines that combined both long- and short-read polishing tools. Notably, medaka was a more accurate and efficient long-read polisher than Racon. Among short-read polishers, NextPolish showed the highest accuracy, but Pilon, Polypolish, and POLCA performed similarly. Among the 5 best-performing pipelines, polishing with medaka followed by NextPolish was the most common combination. Importantly, the order of polishing tools mattered, i.e., using less accurate tools after more accurate ones introduced errors. Indels in homopolymers and repetitive regions, where the short reads could not be uniquely mapped, remained the most challenging errors to correct. CONCLUSIONS: Short reads are still needed to correct errors in nanopore-sequenced assemblies to obtain the accuracy required for source tracking investigations. Our granular assessment of the performance of the polishing pipelines allowed us to suggest best practices for tool users and areas for improvement for tool developers.
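
The accuracy figures quoted above convert directly between error counts, percent identity, and the Phred-style consensus quality value (QV) often reported for assemblies; a worked check:

```python
import math

genome_bp = 4_800_000
errors = 5
accuracy = 1 - errors / genome_bp
qv = -10 * math.log10(errors / genome_bp)     # Phred-style consensus quality value
print(f"polished: accuracy = {accuracy:.6%}, QV = {qv:.0f}")

# For comparison, the ~99.95% accuracy cited for unpolished nanopore assemblies:
print(f"unpolished: ~{genome_bp * 0.0005:,.0f} erroneous bases in a {genome_bp:,} bp genome")
```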


Subject(s)
Benchmarking, Disease Outbreaks, Bacterial Genome, Nanopores, Nanopore Sequencing/methods, High-Throughput Nucleotide Sequencing/methods, Salmonella enterica/genetics, Salmonella enterica/isolation & purification, Humans, Phylogeny
12.
Article in English | MEDLINE | ID: mdl-39049508

ABSTRACT

Gene set scoring (GSS) has been routinely conducted for gene expression analysis of bulk or single-cell RNA sequencing (RNA-seq) data, which helps to decipher single-cell heterogeneity and cell type-specific variability by incorporating prior knowledge from functional gene sets. Single-cell assay for transposase-accessible chromatin using sequencing (scATAC-seq) is a powerful technique for interrogating single-cell chromatin-based gene regulation, and genes or gene sets with dynamic regulatory potentials can be regarded as cell type-specific markers as in single-cell RNA-seq (scRNA-seq). However, there are few GSS tools specifically designed for scATAC-seq, and the applicability and performance of RNA-seq GSS tools on scATAC-seq data remain to be investigated. Here, we systematically benchmarked ten GSS tools, including four bulk RNA-seq tools, five scRNA-seq tools, and one scATAC-seq method. First, using matched scATAC-seq and scRNA-seq datasets, we found that the performance of GSS tools on scATAC-seq data was comparable to that on scRNA-seq, suggesting their applicability to scATAC-seq. Then, the performance of different GSS tools was extensively evaluated using up to ten scATAC-seq datasets. Moreover, we evaluated the impact of gene activity conversion, dropout imputation, and gene set collections on the results of GSS. Results show that dropout imputation can significantly improve the performance of almost all GSS tools, while the impact of gene activity conversion methods or gene set collections on GSS performance depends more on the specific GSS tools or datasets. Finally, we provided practical guidelines for choosing appropriate preprocessing methods and GSS tools in different application scenarios.
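
For orientation, a deliberately naive gene set scorer (mean per-cell z-score over the set), not one of the ten benchmarked tools, applied to a synthetic gene-activity matrix of the kind derived from scATAC-seq:

```python
import numpy as np

def mean_zscore_score(activity, genes, gene_set):
    """Naive gene set score: z-score each gene's activity across cells, then
    average the z-scores of the set's genes for every cell.
    activity: (n_cells, n_genes) gene-activity matrix, e.g. derived from scATAC-seq."""
    z = (activity - activity.mean(axis=0)) / (activity.std(axis=0) + 1e-9)
    columns = [i for i, g in enumerate(genes) if g in gene_set]
    return z[:, columns].mean(axis=1)

rng = np.random.default_rng(2)
genes = [f"gene{i}" for i in range(100)]
activity = rng.gamma(2.0, 1.0, size=(50, 100))        # 50 cells x 100 genes
marker_set = {"gene1", "gene5", "gene42"}
print(mean_zscore_score(activity, genes, marker_set)[:5])
```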


Subject(s)
Algorithms, Benchmarking, Chromatin Immunoprecipitation Sequencing, Single-Cell Analysis, Single-Cell Analysis/methods, Single-Cell Analysis/standards, Humans, Chromatin Immunoprecipitation Sequencing/methods, RNA-Seq/methods, RNA-Seq/standards, RNA Sequence Analysis/methods, RNA Sequence Analysis/standards, Gene Expression Profiling/methods, Gene Expression Profiling/standards, Chromatin/genetics, Chromatin/metabolism
13.
Injury ; 55(8): 111698, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38959675

ABSTRACT

INTRODUCTION: Case volumes of trauma centers and surgeons influence clinical outcomes following orthopaedic trauma surgery. This study quantifies surgical volume benchmarks for Orthopaedic Trauma Association (OTA)-accredited fellowship training in the United States. METHODS: This was a retrospective cross-sectional study of orthopaedic trauma fellows graduating between 2018-2019 and 2022-2023. Case volume percentiles were calculated across categories and variability was defined as the fold-difference between the 90th and 10th percentiles. Temporal trends were assessed with linear regression. RESULTS: 446 orthopaedic trauma fellows were included in this study. Mean reported case volume increased from 898 ± 245 in 2018-2019 to 974 ± 329 in 2022-2023 (P = 0.066). Mean case volume was 924 over the study period and mostly consisted of other (418 cases, 45%), subtrochanteric/intertrochanteric femoral neck (84 cases, 9%), open fracture debridement (72 cases, 8%), pelvic ring disruption/fracture (55 cases, 6%), acetabular fracture (41 cases, 4%), tibial shaft fracture (39 cases, 4%), and femoral shaft fracture (38 cases, 4%) cases. Overall variability in total reported case volume was 2.0. Variability was greatest in distal radius fracture (14.8), amputation (9.5), fasciotomy (8.0), and proximal humerus repair (5.0). CONCLUSION: Graduates from OTA-accredited fellowship training perform 924 cases on average, which exceeds the current minimum requirement of 600 cases. Case volume benchmarks can help trainees and faculty align training goals with fellowship program strengths. More research is needed to determine evidence-based case minimum requirements for core competency training in orthopaedic trauma surgery.
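
The study's variability measure is the fold-difference between the 90th and 10th percentile case volumes; a minimal sketch on a toy cohort (not the study data):

```python
import numpy as np

def fold_variability(case_counts):
    """Fold-difference between the 90th and 10th percentile of reported case volumes,
    the spread measure used in the study."""
    p10, p90 = np.percentile(case_counts, [10, 90])
    return p90 / p10

rng = np.random.default_rng(3)
total_cases = rng.normal(924, 250, 446).clip(min=300)   # toy cohort, not the study data
print(f"overall variability ~ {fold_variability(total_cases):.1f}-fold")
```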


Subject(s)
Benchmarking, Clinical Competence, Fellowships and Scholarships, Orthopedics, Humans, Retrospective Studies, Cross-Sectional Studies, Orthopedics/education, Orthopedics/standards, United States, Clinical Competence/standards, Graduate Medical Education/standards, Male, Female, Orthopedic Procedures/education, Orthopedic Procedures/standards, Trauma Centers/standards, Traumatology/education, Traumatology/standards, Accreditation, Adult, Internship and Residency
14.
Nat Commun ; 15(1): 6167, 2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39039053

ABSTRACT

Translating RNA-seq into clinical diagnostics requires ensuring the reliability and cross-laboratory consistency of detecting clinically relevant subtle differential expressions, such as those between different disease subtypes or stages. As part of the Quartet project, we present an RNA-seq benchmarking study across 45 laboratories using the Quartet and MAQC reference samples spiked with ERCC controls. Based on multiple types of 'ground truth', we systematically assess the real-world RNA-seq performance and investigate the influencing factors involved in 26 experimental processes and 140 bioinformatics pipelines. Here we show greater inter-laboratory variations in detecting subtle differential expressions among the Quartet samples. Experimental factors including mRNA enrichment and strandedness, and each bioinformatics step, emerge as primary sources of variations in gene expression. We underscore the profound influence of experimental execution, and provide best-practice recommendations for experimental designs, strategies for filtering low-expression genes, and the optimal gene annotation and analysis pipelines. In summary, this study lays the foundation for the development and quality control of RNA-seq for clinical diagnostic purposes.


Subject(s)
Benchmarking, Computational Biology, Quality Control, RNA-Seq, Reference Standards, Benchmarking/methods, Humans, RNA-Seq/methods, RNA-Seq/standards, Computational Biology/methods, Reproducibility of Results, RNA Sequence Analysis/methods, RNA Sequence Analysis/standards, Gene Expression Profiling/methods, Gene Expression Profiling/standards, Messenger RNA/genetics, Messenger RNA/metabolism
15.
JMIR Ment Health ; 11: e57306, 2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39042893

ABSTRACT

BACKGROUND: Comprehensive session summaries enable effective continuity in mental health counseling, facilitating informed therapy planning. However, manual summarization presents a significant challenge, diverting experts' attention from the core counseling process. Leveraging advances in automatic summarization to streamline the summarization process addresses this issue because this enables mental health professionals to access concise summaries of lengthy therapy sessions, thereby increasing their efficiency. However, existing approaches often overlook the nuanced intricacies inherent in counseling interactions. OBJECTIVE: This study evaluates the effectiveness of state-of-the-art large language models (LLMs) in selectively summarizing various components of therapy sessions through aspect-based summarization, aiming to benchmark their performance. METHODS: We first created Mental Health Counseling-Component-Guided Dialogue Summaries, a benchmarking data set that consists of 191 counseling sessions with summaries focused on 3 distinct counseling components (also known as counseling aspects). Next, we assessed the capabilities of 11 state-of-the-art LLMs in addressing the task of counseling-component-guided summarization. The generated summaries were evaluated quantitatively using standard summarization metrics and verified qualitatively by mental health professionals. RESULTS: Our findings demonstrated the superior performance of task-specific LLMs such as MentalLlama, Mistral, and MentalBART evaluated using standard quantitative metrics such as Recall-Oriented Understudy for Gisting Evaluation (ROUGE)-1, ROUGE-2, ROUGE-L, and Bidirectional Encoder Representations from Transformers Score across all aspects of the counseling components. Furthermore, expert evaluation revealed that Mistral superseded both MentalLlama and MentalBART across 6 parameters: affective attitude, burden, ethicality, coherence, opportunity costs, and perceived effectiveness. However, these models exhibit a common weakness in terms of room for improvement in the opportunity costs and perceived effectiveness metrics. CONCLUSIONS: While LLMs fine-tuned specifically on mental health domain data display better performance based on automatic evaluation scores, expert assessments indicate that these models are not yet reliable for clinical application. Further refinement and validation are necessary before their implementation in practice.
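
The ROUGE metrics cited above can be computed with the widely used rouge-score package; a minimal sketch with placeholder texts (not study data):

```python
from rouge_score import rouge_scorer

reference = "The client reports improved sleep and reduced anxiety after practising grounding."
generated = "Client practised grounding techniques and reports better sleep and less anxiety."

# ROUGE-1/2/L with stemming, as commonly reported for summarization benchmarks.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
for name, score in scorer.score(reference, generated).items():
    print(f"{name}: precision={score.precision:.2f} recall={score.recall:.2f} f1={score.fmeasure:.2f}")
```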


Subject(s)
Benchmarking, Counseling, Humans, Counseling/methods, Adult, Mental Disorders/therapy, Female
16.
Genome Biol ; 25(1): 192, 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-39030569

ABSTRACT

BACKGROUND: CRISPR-Cas9 dropout screens are formidable tools for investigating biology with unprecedented precision and scale. However, biases in data lead to potential confounding effects on interpretation and compromise overall quality. The activity of Cas9 is influenced by structural features of the target site, including copy number amplifications (CN bias). More worryingly, proximal targeted loci tend to generate similar gene-independent responses to CRISPR-Cas9 targeting (proximity bias), possibly due to Cas9-induced whole chromosome-arm truncations or other genomic structural features and different chromatin accessibility levels. RESULTS: We benchmarked eight computational methods, rigorously evaluating their ability to reduce both CN and proximity bias in the two largest publicly available cell-line-based CRISPR-Cas9 screens to date. We also evaluated the capability of each method to preserve data quality and heterogeneity by assessing the extent to which the processed data allows accurate detection of true positive essential genes, established oncogenetic addictions, and known/novel biomarkers of cancer dependency. Our analysis sheds light on the ability of each method to correct biases under different scenarios. AC-Chronos outperforms other methods in correcting both CN and proximity biases when jointly processing multiple screens of models with available CN information, whereas CRISPRcleanR is the top performing method for individual screens or when CN information is not available. In addition, Chronos and AC-Chronos yield a final dataset better able to recapitulate known sets of essential and non-essential genes. CONCLUSIONS: Overall, our investigation provides guidance for the selection of the most appropriate bias-correction method, based on its strengths, weaknesses and experimental settings.


Subject(s)
Benchmarking, CRISPR-Cas Systems, Humans, Computational Biology/methods, Bias (Epidemiology)
17.
PLoS One ; 19(7): e0307894, 2024.
Article in English | MEDLINE | ID: mdl-39058731

ABSTRACT

The quest for sustainable energy solutions has intensified interest in marine renewables, particularly wave energy. This study addresses the crucial need for an objective assessment of Wave Energy Converter (WEC) technologies, which are instrumental in harnessing ocean waves for electricity generation. To benchmark WEC technologies, we employed an integrated approach combining the MEthod based on the Removal Effects of Criteria (MEREC) and the Spherical Fuzzy Combined Compromise Solution (SF-CoCoSo). MEREC provided a systematic way to determine the importance of various benchmarking criteria, while SF-CoCoSo facilitated the synthesis of complex decision-making data into a coherent evaluation score for each technology. The results of the study offer a definitive ranking of WEC technologies, with findings emphasizing the importance of grid connectivity and adaptability to various wave conditions as pivotal to the technologies' success. While the study makes significant strides in the evaluation of WECs, it also recognizes limitations, including the potential for evolving market dynamics to influence criteria weightings and the assumption that the MCDM methods capture all decision-making complexities. Future work should expand the evaluative criteria and explore additional MCDM methods to validate and refine the benchmarking process further.
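
A sketch of the MEREC weighting step as it is commonly described (normalize the decision matrix, score each alternative with a logarithmic measure, then weight each criterion by the total effect of removing it); the decision matrix and criteria below are hypothetical, and the spherical-fuzzy CoCoSo ranking step is not reproduced.

```python
import numpy as np

def merec_weights(X, benefit):
    """MEREC-style criteria weights.
    X: (alternatives x criteria) decision matrix; benefit[j] is True for benefit-type criteria."""
    # Normalization: smaller normalized values mean better performance.
    n = np.where(benefit, X.min(axis=0) / X, X / X.max(axis=0))
    m = X.shape[1]
    S = np.log(1 + np.abs(np.log(n)).sum(axis=1) / m)            # overall performance per alternative
    E = np.empty(m)
    for j in range(m):
        mask = np.arange(m) != j
        S_removed = np.log(1 + np.abs(np.log(n[:, mask])).sum(axis=1) / m)
        E[j] = np.abs(S_removed - S).sum()                        # removal effect of criterion j
    return E / E.sum()

# Toy decision matrix: 4 WEC concepts x 3 criteria (capacity factor, cost, survivability).
X = np.array([[0.35, 4.2, 0.80],
              [0.28, 3.1, 0.90],
              [0.40, 5.0, 0.70],
              [0.30, 3.8, 0.85]])
print(merec_weights(X, benefit=np.array([True, False, True])))
```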


Subject(s)
Benchmarking, Decision Making, Fuzzy Logic, Renewable Energy, Electricity
18.
J Clin Virol ; 173: 105695, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38823290

ABSTRACT

Metagenomics is gradually being implemented for diagnosing infectious diseases. However, in-depth protocol comparisons for viral detection have been limited to individual sets of experimental workflows and laboratories. In this study, we present a benchmark of metagenomics protocols used in clinical diagnostic laboratories initiated by the European Society for Clinical Virology (ESCV) Network on NGS (ENNGS). A mock viral reference panel was designed to mimic low-biomass clinical specimens. The panel was used to assess the performance of twelve metagenomic wet lab protocols currently in use in the diagnostic laboratories of participating ENNGS member institutions. Both Illumina and Nanopore, shotgun and targeted capture probe protocols were included. The performance metrics sensitivity, specificity, and quantitative potential were assessed using a central bioinformatics pipeline. Overall, viral pathogens with loads down to 10^4 copies/ml (corresponding to CT values of 31 in our PCR assays) were detected by all the evaluated metagenomic wet lab protocols. In contrast, less abundant mixed viruses with CT values of 35 and higher were detected only by a minority of the protocols. Considering the reference panel as the gold standard, optimal thresholds to define a positive result were determined per protocol, based on the horizontal genome coverage. Implementing these thresholds, sensitivity and specificity of the protocols ranged from 67 to 100% and 87 to 100%, respectively. A variety of metagenomic protocols are currently in use in clinical diagnostic laboratories. Detection of low-abundance viral pathogens and mixed infections remains a challenge, implying the need for standardization of metagenomic analysis for use in clinical settings.
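
The per-protocol sensitivity and specificity follow from thresholding horizontal genome coverage against the known panel composition; a minimal confusion-matrix sketch with hypothetical coverage values and cutoff:

```python
def sensitivity_specificity(calls, truth):
    """calls/truth: dicts mapping virus -> detected (bool) / present in the panel (bool)."""
    tp = sum(calls[v] and truth[v] for v in truth)
    fn = sum((not calls[v]) and truth[v] for v in truth)
    tn = sum((not calls[v]) and (not truth[v]) for v in truth)
    fp = sum(calls[v] and (not truth[v]) for v in truth)
    return tp / (tp + fn), tn / (tn + fp)

coverage = {"HAdV": 0.42, "HSV1": 0.08, "EBV": 0.00, "CMV": 0.31}   # horizontal genome coverage
truth = {"HAdV": True, "HSV1": True, "EBV": False, "CMV": True}      # known panel composition
threshold = 0.10                                                     # per-protocol positivity cutoff
calls = {virus: cov >= threshold for virus, cov in coverage.items()}
sens, spec = sensitivity_specificity(calls, truth)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")
```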


Subject(s)
Benchmarking, Metagenomics, Sensitivity and Specificity, Viruses, Metagenomics/methods, Metagenomics/standards, Humans, Viruses/genetics, Viruses/classification, Viruses/isolation & purification, High-Throughput Nucleotide Sequencing/methods, High-Throughput Nucleotide Sequencing/standards, Virus Diseases/diagnosis, Virus Diseases/virology, Computational Biology/methods
19.
J Robot Surg ; 18(1): 271, 2024 Jun 27.
Article in English | MEDLINE | ID: mdl-38937307

ABSTRACT

We investigated the use of robotic objective performance metrics (OPM) to predict number of cases to proficiency and independence among abdominal transplant fellows performing robot-assisted donor nephrectomy (RDN). 101 RDNs were performed by 5 transplant fellows from September 2020 to October 2023. OPM included fellow percent active control time (%ACT) and handoff counts (HC). Proficiency was defined as ACT ≥ 80% and HC ≤ 2, and independence as ACT ≥ 99% and HC ≤ 1. Case number was significantly associated with increasing fellow %ACT, with proficiency estimated at 14 cases and independence at 32 cases (R2 = 0.56, p < 0.001). Similarly, case number was significantly associated with decreasing HC, with proficiency at 18 cases and independence at 33 cases (R2 = 0.29, p < 0.001). Case number was not associated with total active console time (p = 0.91). Patient demographics, operative characteristics, and outcomes were not associated with OPM, except for donor estimated blood loss (EBL), which positively correlated with HC. Abdominal transplant fellows demonstrated proficiency at 14-18 cases and independence at 32-33 cases. Total active console time remained unchanged, suggesting that increasing fellow autonomy does not impede operative efficiency. These findings may serve as a benchmark for training abdominal transplant surgery fellows independently and safely in RDN.
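
The cases-to-proficiency and cases-to-independence estimates amount to fitting %ACT against case number and solving for the 80% and 99% thresholds; a minimal sketch on hypothetical data (the study's regression details beyond R² and p-values are not reproduced):

```python
import numpy as np

# Hypothetical fellow %ACT by consecutive case number (illustrative, not the study data).
rng = np.random.default_rng(4)
case_no = np.arange(1, 41)
pct_act = 60 + 0.9 * case_no + rng.normal(0, 4, case_no.size)

slope, intercept = np.polyfit(case_no, pct_act, 1)      # simple linear fit: %ACT ~ case number

def cases_to_reach(threshold_pct):
    """Case number at which the fitted line crosses the given %ACT threshold."""
    return (threshold_pct - intercept) / slope

print(f"cases to proficiency (>= 80% ACT): {cases_to_reach(80):.0f}")
print(f"cases to independence (>= 99% ACT): {cases_to_reach(99):.0f}")
```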


Subject(s)
Clinical Competence, Living Donors, Nephrectomy, Robotic Surgical Procedures, Nephrectomy/methods, Nephrectomy/education, Humans, Robotic Surgical Procedures/education, Robotic Surgical Procedures/methods, Female, Male, Kidney Transplantation/methods, Kidney Transplantation/education, Middle Aged, Adult, Benchmarking, Fellowships and Scholarships
20.
Sci Rep ; 14(1): 14255, 2024 06 20.
Article in English | MEDLINE | ID: mdl-38902397

ABSTRACT

The coronavirus disease 2019 (COVID-19) pandemic, caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has led to a global health crisis with millions of confirmed cases and related deaths. The main protease (Mpro) of SARS-CoV-2 is crucial for viral replication and presents an attractive target for drug development. Despite the approval of some drugs, the search for effective treatments continues. In this study, we systematically evaluated 342 holo-crystal structures of Mpro to identify optimal conformations for structure-based virtual screening (SBVS). Our analysis revealed limited structural flexibility among the structures. Three docking programs, AutoDock Vina, rDock, and Glide, were employed to assess the efficiency of virtual screening, revealing diverse performances across selected Mpro structures. We found that the structures 5RHE, 7DDC, and 7DPU (PDB IDs) consistently displayed the lowest EF, AUC, and BEDROC scores. Furthermore, these structures demonstrated the worst pose prediction results in all docking programs. Two structural differences contribute to variations in docking performance: the absence of the S1 subsite in 7DDC and 7DPU, and the presence of a subpocket in the S2 subsite of 7DDC, 7DPU, and 5RHE. These findings underscore the importance of selecting appropriate Mpro conformations for SBVS, providing valuable insights for advancing drug discovery efforts.
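
Of the screening metrics named above, the enrichment factor has a simple closed form: the hit rate in the top-ranked fraction of the library divided by the hit rate in the whole library; a sketch on a toy ranking:

```python
def enrichment_factor(ranked_is_active, top_fraction=0.01):
    """EF at the given fraction of a score-ranked library:
    (hit rate in the top fraction) / (hit rate in the whole library)."""
    n_total = len(ranked_is_active)
    n_top = max(1, int(round(top_fraction * n_total)))
    actives_top = sum(ranked_is_active[:n_top])
    actives_total = sum(ranked_is_active)
    return (actives_top / n_top) / (actives_total / n_total)

# Toy 1,000-compound screen with 20 known actives, 5 of them ranked in the top 1%.
ranking = [True] * 5 + [False] * 5 + [True] * 15 + [False] * 975
print(f"EF1% = {enrichment_factor(ranking, 0.01):.1f}")   # 25.0
```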


Subject(s)
Coronavirus 3C Proteases, Molecular Docking Simulation, SARS-CoV-2, SARS-CoV-2/enzymology, Coronavirus 3C Proteases/chemistry, Coronavirus 3C Proteases/metabolism, Humans, Protein Conformation, X-Ray Crystallography, Antiviral Agents/chemistry, Antiviral Agents/pharmacology, Benchmarking, COVID-19/virology, Protein Binding