Results 1 - 20 of 1,449
1.
J Med Internet Res ; 26: e51397, 2024 Jul 04.
Article in English | MEDLINE | ID: mdl-38963923

ABSTRACT

BACKGROUND: Machine learning (ML) models can yield faster and more accurate medical diagnoses; however, developing ML models is limited by a lack of high-quality labeled training data. Crowdsourced labeling is a potential solution but can be constrained by concerns about label quality. OBJECTIVE: This study aims to examine whether a gamified crowdsourcing platform with continuous performance assessment, user feedback, and performance-based incentives could produce expert-quality labels on medical imaging data. METHODS: In this diagnostic comparison study, 2384 lung ultrasound clips were retrospectively collected from 203 emergency department patients. A total of 6 lung ultrasound experts classified 393 of these clips as having no B-lines, one or more discrete B-lines, or confluent B-lines to create 2 sets of reference standard data sets (195 training clips and 198 test clips). Sets were respectively used to (1) train users on a gamified crowdsourcing platform and (2) compare the concordance of the resulting crowd labels to the concordance of individual experts to reference standards. Crowd opinions were sourced from DiagnosUs (Centaur Labs) iOS app users over 8 days, filtered based on past performance, aggregated using majority rule, and analyzed for label concordance compared with a hold-out test set of expert-labeled clips. The primary outcome was comparing the labeling concordance of collated crowd opinions to trained experts in classifying B-lines on lung ultrasound clips. RESULTS: Our clinical data set included patients with a mean age of 60.0 (SD 19.0) years; 105 (51.7%) patients were female and 114 (56.1%) patients were White. Over the 195 training clips, the expert-consensus label distribution was 114 (58%) no B-lines, 56 (29%) discrete B-lines, and 25 (13%) confluent B-lines. Over the 198 test clips, expert-consensus label distribution was 138 (70%) no B-lines, 36 (18%) discrete B-lines, and 24 (12%) confluent B-lines. 
In total, 99,238 opinions were collected from 426 unique users. On a test set of 198 clips, the mean labeling concordance of individual experts relative to the reference standard was 85.0% (SE 2.0), compared with 87.9% crowdsourced label concordance (P=.15). When individual experts' opinions were compared with reference standard labels created by majority vote excluding their own opinion, crowd concordance was higher than the mean concordance of individual experts to reference standards (87.4% vs 80.8%, SE 1.6 for expert concordance; P<.001). Clips with discrete B-lines had the most disagreement from both the crowd consensus and individual experts with the expert consensus. Using randomly sampled subsets of crowd opinions, 7 quality-filtered opinions were sufficient to achieve near the maximum crowd concordance. CONCLUSIONS: Crowdsourced labels for B-line classification on lung ultrasound clips via a gamified approach achieved expert-level accuracy. This suggests a strategic role for gamified crowdsourcing in efficiently generating labeled image data sets for training ML systems.
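The aggregation step described above (quality-filtering opinions, then majority rule per clip, then concordance against expert labels) can be sketched as follows; the clip IDs, label names, and toy opinions are illustrative, not from the study's data:

```python
from collections import Counter

def majority_label(opinions):
    """Return the most frequent label among crowd opinions for one clip
    (ties broken by the first-encountered label)."""
    return Counter(opinions).most_common(1)[0][0]

def concordance(crowd_labels, reference_labels):
    """Fraction of clips where the aggregated crowd label matches the
    expert reference label."""
    matches = sum(c == r for c, r in zip(crowd_labels, reference_labels))
    return matches / len(reference_labels)

# Toy example: per-clip quality-filtered opinions
opinions_by_clip = {
    "clip_01": ["no_b_lines", "no_b_lines", "discrete_b_lines"],
    "clip_02": ["confluent_b_lines", "confluent_b_lines", "discrete_b_lines"],
}
crowd = {clip: majority_label(ops) for clip, ops in opinions_by_clip.items()}
reference = {"clip_01": "no_b_lines", "clip_02": "discrete_b_lines"}
score = concordance(
    [crowd[c] for c in sorted(crowd)],
    [reference[c] for c in sorted(reference)],
)
```

In the study, the same comparison is run over 198 test clips and repeated with opinions subsampled to find how few filtered opinions suffice.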


Subject(s)
Crowdsourcing , Lung , Ultrasonography , Crowdsourcing/methods , Humans , Ultrasonography/methods , Ultrasonography/standards , Lung/diagnostic imaging , Prospective Studies , Female , Male , Machine Learning , Adult , Middle Aged , Retrospective Studies
2.
J Med Internet Res ; 26: e54263, 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38968598

ABSTRACT

BACKGROUND: The medical knowledge graph provides explainable decision support, helping clinicians with prompt diagnosis and treatment suggestions. However, in real-world clinical practice, patients visit different hospitals seeking various medical services, resulting in fragmented patient data across hospitals. Combined with data security concerns, this fragmentation limits the application of knowledge graphs because single-hospital data cannot provide complete evidence for generating precise decision support and comprehensive explanations. New methods are therefore needed for integrating knowledge graph systems into multicenter, information-sensitive medical environments, so that fragmented patient records can support decision-making while data privacy and security are maintained. OBJECTIVE: This study aims to propose an electronic health record (EHR)-oriented knowledge graph system for collaborative reasoning with multicenter fragmented patient medical data, all while preserving data privacy. METHODS: The study introduced an EHR knowledge graph framework and a novel collaborative reasoning process for utilizing multicenter fragmented information. The system was deployed in each hospital and used a unified semantic structure and the Observational Medical Outcomes Partnership (OMOP) vocabulary to standardize the local EHR data set. The system transformed local EHR data into semantic formats and performed semantic reasoning to generate intermediate reasoning findings. The generated intermediate findings used hypernym concepts to isolate original medical data. The intermediate findings and hash-encrypted patient identities were synchronized through a blockchain network. The multicenter intermediate findings were then combined for final reasoning and clinical decision support without gathering the original EHR data.
RESULTS: The system was evaluated through an application study using multicenter fragmented EHR data to alert non-nephrology clinicians about overlooked patients with chronic kidney disease (CKD). The study covered 1185 patients in non-nephrology departments from 3 hospitals; each patient had visited at least two of the hospitals. Of these, 124 patients were identified as meeting CKD diagnosis criteria through collaborative reasoning using multicenter EHR data, whereas the data from individual hospitals alone could not support the identification of CKD in these patients. The assessment by clinicians indicated that 78/91 (86%) patients were CKD positive. CONCLUSIONS: The proposed system effectively utilized multicenter fragmented EHR data for a clinical application. The application study showed the clinical benefits of the system, with prompt and comprehensive decision support.
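The hash-encrypted identity linkage described above can be illustrated with a minimal sketch; the salting scheme and identifier format here are assumptions for illustration, not the paper's exact protocol:

```python
import hashlib

def pseudonymize(patient_id: str, shared_salt: str) -> str:
    """Derive a deterministic pseudonym from a patient identifier so that
    hospitals can match the same patient across sites without exchanging
    the raw identifier (the salt is agreed out of band)."""
    return hashlib.sha256((shared_salt + ":" + patient_id).encode()).hexdigest()

# Two hospitals compute the same pseudonym for the same patient...
h1 = pseudonymize("national-id-12345", "consortium-salt")
h2 = pseudonymize("national-id-12345", "consortium-salt")
# ...and different pseudonyms for different patients.
h3 = pseudonymize("national-id-67890", "consortium-salt")
```

Because only the digest is synchronized (via the blockchain network in the paper's design), intermediate findings can be joined per patient while raw identifiers stay local.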


Subject(s)
Decision Support Systems, Clinical , Electronic Health Records , Humans
3.
Am J Emerg Med ; 83: 40-46, 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38954885

ABSTRACT

BACKGROUND: Academic productivity is bolstered by collaboration, which is in turn related to connectivity between individuals. Gender disparities have been identified in academia in terms of both academic promotion and output. Using gender propensity and network analysis, we aimed to describe patterns of collaboration on publications in emergency medicine (EM), focusing on two Midwest academic departments. METHODS: We identified faculty at two EM departments, their academic rank, and their publications from 2020 to 2022, and gathered information on their co-authors. Using network analysis, gender propensity, and standard statistical analyses, we assessed the collaboration network for differences between men and women. RESULTS: Social network analysis of collaboration in academic emergency medicine showed no difference in the ways that men and women publish together. However, individuals with higher academic rank, regardless of gender, had more importance to the network. Men had a propensity to collaborate with men, and women with women. The rates of gender propensity for men (59.6%) and women (44%) fell between the gender ratios of emergency medicine (65%/35%) and the general population (50%/50%), suggesting a tendency toward homophily among men. CONCLUSION: Our study used network analysis and gender propensity to identify patterns of collaboration. We found that further work applying network analysis to academic productivity may be of value, with a particular focus on the role of academic rank. Our methodology may aid department leaders by using information from local analyses to identify opportunities to support faculty members in broadening and diversifying their networks.

4.
Eur Radiol Exp ; 8(1): 79, 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38965128

ABSTRACT

Sample size, namely the number of subjects that should be included in a study to reach the desired endpoint and statistical power, is a fundamental concept of scientific research. Indeed, sample size must be planned a priori and tailored to the main endpoint of the study, to avoid including too many subjects, thus possibly exposing them to additional risks while also wasting time and resources, or too few subjects, failing to reach the desired purpose. We offer a simple, go-to review of methods for sample size calculation for studies concerning data reliability (repeatability/reproducibility) and diagnostic performance. For studies concerning data reliability, we considered Cohen's κ or the intraclass correlation coefficient (ICC) for hypothesis testing, estimation of Cohen's κ or ICC, and Bland-Altman analyses. With regard to diagnostic performance, we considered accuracy or sensitivity/specificity versus reference standards, the comparison of diagnostic performances, and the comparison of areas under the receiver operating characteristic curve. Finally, we considered the special cases of dropouts or retrospective case exclusions, multiple endpoints, lack of prior data estimates, and the selection of unusual thresholds for α and β errors. For the most frequent cases, we provide examples of software freely available on the Internet.
Relevance statement: Sample size calculation is a fundamental factor influencing the quality of studies on repeatability/reproducibility and diagnostic performance in radiology.
Key points:
• Sample size is a concept related to precision and statistical power.
• It has ethical implications, especially when patients are exposed to risks.
• Sample size should always be calculated before starting a study.
• This review offers simple, go-to methods for sample size calculations.
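As one concrete instance of the kind of calculation this review covers, the normal-approximation sample size for estimating a proportion (e.g. sensitivity or specificity) to a desired confidence-interval half-width can be sketched as follows; this is the generic textbook formula, not code from the paper:

```python
from math import ceil
from statistics import NormalDist

def n_for_proportion(expected_p, half_width, alpha=0.05):
    """Subjects needed to estimate a proportion (sensitivity, specificity,
    accuracy) with a (1 - alpha) confidence interval of +/- half_width,
    using the normal approximation: n = z^2 * p * (1 - p) / d^2."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return ceil(z ** 2 * expected_p * (1 - expected_p) / half_width ** 2)

# Expected sensitivity 0.90, estimated to within +/- 0.05 at 95% confidence
n = n_for_proportion(0.90, 0.05)
```

Note that `expected_p` must come from prior data or a pilot study; the review also treats the harder cases (κ, ICC, AUC comparisons) for which dedicated software is recommended.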


Subject(s)
Research Design , Sample Size , Humans , Reproducibility of Results
5.
PeerJ Comput Sci ; 10: e2092, 2024.
Article in English | MEDLINE | ID: mdl-38983225

ABSTRACT

More sophisticated data access is possible with artificial intelligence (AI) techniques such as question answering (QA), but regulations and privacy concerns have limited their use. Federated learning (FL) addresses these problems and makes QA a viable option in such settings. This research examines the utilization of hierarchical FL systems, along with an ideal method for developing client-specific adapters. The User Modified Hierarchical Federated Learning Model (UMHFLM) selects local models for users' tasks. The article suggests employing a recurrent neural network (RNN) to automatically learn and categorize natural language questions into the appropriate templates. Local and global models are developed together, with the global model influencing the local models, which are, in turn, combined for personalization. The method is applied in natural language processing pipelines for phrase matching employing template exact match, segmentation, and answer type detection. SQuAD 2.0, a QA resource, together with complicated SPARQL test questions and their accompanying SPARQL queries over the DBpedia dataset, was used to train and assess the model, which identifies 38 distinct templates. Considering the top two most likely templates, the RNN model achieves template classification accuracies of 92.8% and 61.8% on the SQuAD 2.0 and QALD-7 datasets, respectively. A study on data scarcity among participants found that FL Match significantly outperformed BERT: a MAP margin of 2.60% exists between BERT and FL Match at a 100% data ratio, and an MRR margin of 7.23% at a 20% data ratio.

6.
Front Netw Physiol ; 4: 1211413, 2024.
Article in English | MEDLINE | ID: mdl-38948084

ABSTRACT

Algorithms for the detection of COVID-19 illness from wearable sensor devices tend to implicitly treat the disease as causing a stereotyped (and therefore recognizable) deviation from healthy physiology. In contrast, a substantial diversity of bodily responses to SARS-CoV-2 infection has been reported in the clinical milieu. This raises the question of how to characterize the diversity of illness manifestations, and whether such characterization could reveal meaningful relationships across different illness manifestations. Here, we present a framework motivated by information theory to generate quantified maps of illness presentation, which we term "manifestations," as resolved by continuous physiological data from a wearable device (Oura Ring). We test this framework on five physiological data streams (heart rate, heart rate variability, respiratory rate, metabolic activity, and sleep temperature) assessed at the time of reported illness onset in a previously reported COVID-19-positive cohort (N = 73). We find that the number of distinct manifestations in this cohort is small compared to the space of all possible manifestations. In addition, manifestation frequency correlates with the approximate number of symptoms reported by a given individual over a several-day period prior to their imputed onset of illness. These findings suggest that information-theoretic approaches can be used to sort COVID-19 illness manifestations into types with real-world value. This proof of concept supports the use of information-theoretic approaches to map illness manifestations from continuous physiological data. Such approaches could inform algorithm design and real-time treatment decisions if developed on large, diverse samples.
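One simple way to picture a "manifestation" over the five streams above is to discretize each stream's deviation from baseline at illness onset and count the distinct codes observed versus the codes possible; this is an illustrative discretization, not the paper's exact information-theoretic construction, and the z-scores below are invented:

```python
from collections import Counter

STREAMS = ["heart_rate", "hrv", "respiratory_rate", "metabolic", "sleep_temp"]

def manifestation(z_scores, threshold=1.0):
    """Discretize per-stream deviations at illness onset into a code:
    -1 (below baseline), 0 (within threshold), +1 (above baseline)."""
    return tuple(
        0 if abs(z) < threshold else (1 if z > 0 else -1) for z in z_scores
    )

# Toy cohort: per-person z-scores for the five streams at onset
cohort = [
    [2.1, -0.3, 1.5, 0.2, 0.4],   # elevated heart rate and respiratory rate
    [1.8, -0.1, 1.2, 0.0, 0.3],   # same manifestation
    [-1.6, 0.2, 0.1, 0.4, 2.2],   # a different manifestation
]
observed = Counter(manifestation(z) for z in cohort)
n_possible = 3 ** len(STREAMS)   # 243 possible codes for 5 ternary streams
```

The paper's finding is the analogous observation at cohort scale: the observed codes cluster into far fewer types than the combinatorial space allows.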

7.
Synth Biol (Oxf) ; 9(1): ysae010, 2024.
Article in English | MEDLINE | ID: mdl-38973982

ABSTRACT

Data science is playing an increasingly important role in the design and analysis of engineered biology. This has been fueled by the development of high-throughput methods like massively parallel reporter assays, data-rich microscopy techniques, computational protein structure prediction and design, and the development of whole-cell models able to generate huge volumes of data. Although the ability to apply data-centric analyses in these contexts is appealing and increasingly simple to do, it comes with potential risks. For example, how might biases in the underlying data affect the validity of a result and what might the environmental impact of large-scale data analyses be? Here, we present a community-developed framework for assessing data hazards to help address these concerns and demonstrate its application to two synthetic biology case studies. We show the diversity of considerations that arise in common types of bioengineering projects and provide some guidelines and mitigating steps. Understanding potential issues and dangers when working with data and proactively addressing them will be essential for ensuring the appropriate use of emerging data-intensive AI methods and help increase the trustworthiness of their applications in synthetic biology.

8.
Article in English | MEDLINE | ID: mdl-38985412

ABSTRACT

PURPOSE: Decision support systems and context-aware assistance in the operating room have emerged as key clinical applications supporting surgeons in their daily work and are generally based on single modalities. The model- and knowledge-based integration of multimodal data as a basis for decision support systems that can dynamically adapt to the surgical workflow has not yet been established. Therefore, we propose a knowledge-enhanced method for fusing multimodal data for anticipation tasks. METHODS: We developed a holistic, multimodal, graph-based approach combining imaging and non-imaging information in a knowledge graph representing the intraoperative scene of a surgery. Node and edge features of the knowledge graph are extracted from suitable data sources in the operating room using machine learning. A spatiotemporal graph neural network architecture subsequently allows for interpretation of relational and temporal patterns within the knowledge graph. We apply our approach to the downstream task of instrument anticipation while presenting a suitable modeling and evaluation strategy for this task. RESULTS: Our approach achieves an F1 score of 66.86% for instrument anticipation, supporting a seamless surgical workflow and adding value for surgical decision support systems. A resting recall of 63.33% indicates that the anticipations are not premature. CONCLUSION: This work shows how multimodal data can be combined with the topological properties of an operating room in a graph-based approach. Our multimodal graph architecture serves as a basis for context-sensitive decision support systems in laparoscopic surgery that consider a comprehensive intraoperative scene.

9.
JAMIA Open ; 7(3): ooae059, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39006216

ABSTRACT

Objectives: Missed appointments can lead to treatment delays and adverse outcomes. Telemedicine may improve appointment completion because it addresses barriers to in-person visits, such as childcare and transportation. This study compared appointment completion between telemedicine and in-person appointments in a large cohort of patients at an urban academic health sciences center. Materials and Methods: We conducted a retrospective cohort study of electronic health record data to determine whether telemedicine appointments have higher odds of completion than in-person appointments between January 1, 2021, and April 30, 2023. The data were obtained from the University of South Florida (USF), a large academic health sciences center serving Tampa, FL, and surrounding communities. We implemented 1:1 propensity score matching based on age, gender, race, visit type, and Charlson Comorbidity Index (CCI). Results: The matched cohort included 87,376 appointments, with diverse patient demographics. The percentage of completed telemedicine appointments exceeded that of completed in-person appointments by 9.2 percentage points (73.4% vs 64.2%, P < .001). The adjusted odds ratio for telemedicine versus in-person care in relation to appointment completion was 1.64 (95% CI, 1.59-1.69; P < .001), indicating that telemedicine appointments are associated with 64% higher odds of completion than in-person appointments when controlling for other factors. Discussion: This cohort study indicated that telemedicine appointments are more likely to be completed than in-person appointments, regardless of demographics, comorbidity, payment type, or distance. Conclusion: Telemedicine appointments are more likely to be completed than in-person healthcare appointments.
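The odds ratio underlying comparisons like the one above can be computed from a 2x2 table. The counts below are illustrative only, scaled from the reported completion percentages to 1,000 appointments per arm; this crude (unadjusted) OR is not the paper's adjusted OR of 1.64:

```python
def odds_ratio(a_events, a_nonevents, b_events, b_nonevents):
    """Crude odds ratio of the event (appointment completion) in group A
    (telemedicine) versus group B (in-person)."""
    return (a_events / a_nonevents) / (b_events / b_nonevents)

# Illustrative counts per 1,000 appointments in each arm:
# telemedicine: 734 completed, 266 missed; in-person: 642 completed, 358 missed
or_tele_vs_inperson = odds_ratio(734, 266, 642, 358)
```

Propensity matching, as used in the study, is what licenses comparing the two arms at all; the adjusted OR additionally controls for residual covariates in a regression model.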

10.
Surg Endosc ; 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38958719

ABSTRACT

BACKGROUND: Laparoscopic pancreatoduodenectomy (LPD) is one of the most challenging operations and has a long learning curve. Artificial intelligence (AI)-automated surgical phase recognition in intraoperative videos has many potential applications in surgical education and could help shorten the learning curve, but no study has made this breakthrough in LPD. Herein, we aimed to build AI models to recognize the surgical phases in LPD and explore the models' performance characteristics. METHODS: Among 69 LPD videos from a single surgical team, we used 42 in the building group to establish the models and the remaining 27 videos in the analysis group to assess the models' performance characteristics. We annotated 13 surgical phases of LPD, including 4 key phases and 9 necessary phases. Two minimally invasive pancreatic surgeons annotated all the videos. We built two AI models, for key phase and necessary phase recognition, based on convolutional neural networks. The overall performance of the AI models was determined mainly by mean average precision (mAP). RESULTS: Overall mAPs of the AI models in the test set of the building group were 89.7% and 84.7% for key phases and necessary phases, respectively. In the 27-video analysis group, overall mAPs were 86.8% and 71.2%, with maximum mAPs of 98.1% and 93.9%. We found commonalities between the errors of model recognition and the differences in surgeon annotation, and the AI models performed poorly in cases with anatomic variation or lesion involvement of adjacent organs. CONCLUSIONS: AI-automated surgical phase recognition can be achieved in LPD, with outstanding performance in selected cases. This breakthrough may be the first step toward AI- and video-based surgical education in more complex surgeries.
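The mAP metric used above averages per-class average precision (AP). A minimal sketch of AP over a relevance-ordered prediction list, and of mAP as the unweighted mean over classes, follows; the toy rankings are illustrative, not the study's outputs:

```python
def average_precision(ranked_relevance):
    """Average precision for one class: mean of the precision values at
    each rank where a correct prediction (1) occurs."""
    hits, precision_sum = 0, 0.0
    for rank, relevant in enumerate(ranked_relevance, start=1):
        if relevant:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / hits if hits else 0.0

def mean_average_precision(per_class_rankings):
    """mAP: unweighted mean of per-class average precision."""
    aps = [average_precision(r) for r in per_class_rankings]
    return sum(aps) / len(aps)

# Two toy classes (e.g. two surgical phases); 1 = correct prediction at that rank
map_score = mean_average_precision([[1, 0, 1], [0, 1, 1]])
```

Object-detection variants of mAP additionally threshold on overlap before counting a prediction as relevant, but the averaging structure is the same.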

11.
Transl Clin Pharmacol ; 32(2): 73-82, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38974344

ABSTRACT

Large language models (LLMs) have emerged as a powerful tool for biomedical researchers, demonstrating remarkable capabilities in understanding and generating human-like text. ChatGPT with its Code Interpreter functionality, an LLM connected with the ability to write and execute code, streamlines data analysis workflows by enabling natural language interactions. Using materials from a previously published tutorial, similar analyses can be performed through conversational interactions with the chatbot, covering data loading and exploration, model development and comparison, permutation feature importance, partial dependence plots, and additional analyses and recommendations. The findings highlight the significant potential of LLMs in assisting researchers with data analysis tasks, allowing them to focus on higher-level aspects of their work. However, there are limitations and potential concerns associated with the use of LLMs, such as the importance of critical thinking, privacy, security, and equitable access to these tools. As LLMs continue to improve and integrate with available tools, data science may experience a transformation similar to the shift from manual to automatic transmission in driving. The advancements in LLMs call for considering the future directions of data science and its education, ensuring that the benefits of these powerful tools are utilized with proper human supervision and responsibility.

13.
Cytotherapy ; 2024 Apr 04.
Article in English | MEDLINE | ID: mdl-38842968

ABSTRACT

Although several cell-based therapies have received FDA approval, and others are showing promising results, scalable, quality-driven, and reproducible manufacturing of therapeutic cells at lower cost remains challenging. Challenges include starting material and patient variability, limited understanding of how manufacturing process parameters affect quality, complex supply chain logistics, and a lack of predictive, well-understood product quality attributes. These issues can manifest as increased production costs, longer production times, greater batch-to-batch variability, and lower overall yield of viable, high-quality cells. The lack of data-driven insights and decision-making in cell manufacturing and delivery is an underlying commonality behind all these problems. Data collection and analytics from discovery, preclinical and clinical research, process development, and product manufacturing have not been sufficiently utilized to develop a "systems" understanding and identify actionable controls. Experience from other industries shows that data science and analytics can drive technological innovation and manufacturing optimization, leading to improved consistency, reduced risk, and lower cost. The cell therapy manufacturing industry will benefit from implementing data science tools, such as data-driven modeling, data management and mining, AI, and machine learning. The integration of data-driven predictive capabilities into cell therapy manufacturing, such as predicting product quality and clinical outcomes based on manufacturing data, or ensuring robustness and reliability using data-driven supply-chain modeling, could enable more precise and efficient production processes and lead to better patient access and outcomes. In this review, we introduce some of the relevant computational and data science tools and how they are being, or can be, implemented in the cell therapy manufacturing workflow.
We also identify areas where innovative approaches are required to address challenges and opportunities specific to the cell therapy industry. We conclude that interfacing data science throughout a cell therapy product lifecycle, developing data-driven manufacturing workflow, designing better data collection tools and algorithms, using data analytics and AI-based methods to better understand critical quality attributes and critical-process parameters, and training the appropriate workforce will be critical for overcoming current industry and regulatory barriers and accelerating clinical translation.

14.
Transl Anim Sci ; 8: txae092, 2024.
Article in English | MEDLINE | ID: mdl-38939728

ABSTRACT

Advancements in technology have ushered in a new era of sensor-based measurement and management of livestock production systems. These sensor-based technologies can automatically monitor feeding, growth, and enteric emissions for individual animals across confined and extensive production systems. One challenge with sensor-based technologies is the large amount of data generated, which can make it difficult to access, process, visualize, and monitor information in real time to ensure equipment is working properly and animals are using it correctly. A solution to this problem is the development of application programming interfaces (APIs) to automate downloading, visualizing, and summarizing datasets generated by precision livestock technology (PLT). For this methods paper, we developed three APIs and accompanying processes for rapid data acquisition, visualization, systems tracking, and summary statistics for three technologies (SmartScale, SmartFeed, and GreenFeed) manufactured by C-Lock Inc. (Rapid City, SD). R Markdown documents and example datasets are provided to facilitate greater adoption of these techniques and to further advance PLT. The methodology presented successfully downloaded data from the cloud and generated a series of visualizations to conduct systems checks, assess animal usage rates, and calculate summary statistics. These tools will be essential for further adoption of precision technology. There is great potential to further leverage APIs to incorporate a wide range of datasets, such as weather data, animal locations, and sensor data, to facilitate decision-making on time scales relevant to researchers and livestock managers.
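The download-and-summarize pattern described above can be sketched without network access; the column names and CSV layout below are hypothetical stand-ins for a vendor export, not C-Lock's actual schema, and the paper's own tooling is R Markdown rather than Python:

```python
import csv
import io
from statistics import mean

def summarize_intake(csv_text):
    """Parse a (hypothetical) feed-intake export and return the mean
    intake per animal -- the kind of per-animal summary used for systems
    checks and usage monitoring."""
    rows = csv.DictReader(io.StringIO(csv_text))
    by_animal = {}
    for row in rows:
        by_animal.setdefault(row["animal_id"], []).append(float(row["intake_kg"]))
    return {animal: mean(values) for animal, values in by_animal.items()}

# A small in-memory stand-in for a downloaded export
sample_export = """animal_id,intake_kg
A001,2.0
A001,3.0
A002,1.5
"""
summary = summarize_intake(sample_export)
```

In practice the CSV text would come from an authenticated HTTP request to the vendor's API, and the summaries would feed scheduled visualizations and alerts.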

15.
Article in English | MEDLINE | ID: mdl-38940994

ABSTRACT

In this paper, we analyse the different advances in artificial intelligence (AI) approaches in multiple sclerosis (MS). AI applications in MS range across investigation of disease pathogenesis, diagnosis, treatment, and prognosis. Machine learning (ML) models, a subset of AI, analyse various data sources, including magnetic resonance imaging (MRI), genetic, and clinical data, to distinguish MS from other conditions, predict disease progression, and personalize treatment strategies. Additionally, AI models have been extensively applied to lesion segmentation, biomarker identification, outcome prediction, and disease monitoring and management. Despite the great promise of AI solutions, model interpretability and transparency remain critical for gaining clinician and patient trust in these methods. The future of AI in MS holds potential for open data initiatives that could feed ML models and increase generalizability, for federated learning solutions that address data-sharing issues during model training, and for generative AI approaches that address challenges in model interpretability and transparency. In conclusion, AI presents an opportunity to advance our understanding and management of MS. AI promises to aid clinicians in MS diagnosis and prognosis, improving patient outcomes and quality of life; however, ensuring the interpretability and transparency of AI-generated results will be key to facilitating the integration of AI into clinical practice.

16.
Phytopathology ; 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38831567

ABSTRACT

Net blotch disease, caused by Drechslera teres, is a major fungal disease that affects barley (Hordeum vulgare) plants and can result in significant crop losses. In this study, we developed a deep-learning model to quantify net blotch disease symptoms on seedling leaves at different days post-infection using Cascade R-CNN (Region-Based Convolutional Neural Network) and U-Net (a convolutional neural network) architectures. We used a dataset of barley leaf images with annotations of net blotch disease to train and evaluate the model. The Cascade R-CNN model achieved an accuracy of 95% in net blotch disease detection and a Jaccard index score of 0.99, indicating high accuracy in disease quantification and localization. The combination of the Cascade R-CNN and U-Net architectures improved the detection of small and irregularly shaped lesions in images at 4 days post-infection, leading to better disease quantification. To validate the model, we compared the results obtained by automated measurement with a classical method (necrosis diameter measurement) and with pathogen detection by real-time PCR. The proposed deep learning model could be used in automated systems for disease quantification and to screen the efficacy of potential biocontrol agents that protect against the disease.
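The Jaccard index reported above measures overlap between predicted and reference lesion areas; a minimal sketch over binary masks represented as sets of pixel coordinates (the masks below are toy examples):

```python
def jaccard_index(predicted, reference):
    """Jaccard index (intersection over union) between two binary masks
    given as sets of (row, col) pixel coordinates."""
    if not predicted and not reference:
        return 1.0
    union = predicted | reference
    return len(predicted & reference) / len(union)

pred = {(0, 0), (0, 1), (1, 1)}
ref = {(0, 1), (1, 1), (1, 2)}
iou = jaccard_index(pred, ref)   # 2 shared pixels out of 4 in the union
```

A score of 0.99, as reported, means predicted lesion pixels almost exactly coincide with the annotated ones.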

17.
Int Rev Sport Exerc Psychol ; 17(1): 564-586, 2024.
Article in English | MEDLINE | ID: mdl-38835409

ABSTRACT

Athletes are exposed to various psychological and physiological stressors, such as losing matches and high training loads. Understanding and improving the resilience of athletes is therefore crucial to prevent performance decrements and psychological or physical problems. In this review, resilience is conceptualized as a dynamic process of bouncing back to normal functioning following stressors. This process has been of wide interest in psychology, but also in the physiology and sports science literature (e.g. load and recovery). To improve our understanding of the process of resilience, we argue for a collaborative synthesis of knowledge from the domains of psychology, physiology, sports science, and data science. Accordingly, we propose a multidisciplinary, dynamic, and personalized research agenda on resilience. We explain how new technologies and data science applications are important future trends (1) to detect warning signals for resilience losses in (combinations of) psychological and physiological changes, and (2) to provide athletes and their coaches with personalized feedback about athletes' resilience.

18.
BMJ Open Sport Exerc Med ; 10(2): e001890, 2024.
Article in English | MEDLINE | ID: mdl-38835540

ABSTRACT

Objective: This paper presents an exploratory case study focusing on the applicability and value of process mining in a professional sports healthcare setting. We explore whether process mining can be retrospectively applied to readily available data at a professional sports club (Football Club Barcelona) and whether it can be used to obtain insights related to care flows. Design: Our study used discovery process mining to detect patterns and trends in athletes' Post-Pre-Participation Medical Evaluation injury route, encompassing five phases for analysis and interpretation. Results: We examined preprocessed data in event log format to determine the injury status of athletes in respective baseline groups (healthy or pathological). Our analysis found a link between thigh muscle injuries and later ankle joint problems. The process model found three loops with recurring injuries, the most common of which were thigh muscle injuries. There were no differences in injury rates or the median number of days to return to play between the healthy and pathological groups. Conclusions: This study explored the applicability and value of process mining in a professional sports healthcare setting. We established that process mining can be retrospectively applied to readily available data at a professional sports club and that this approach can be used to obtain insights related to sports healthcare flows.
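Discovery process mining, as applied above, starts from an event log of per-athlete traces; counting directly-follows relations is the basic building block of the discovered process model, including the loops of recurring injury the study reports. A minimal sketch (the activity names and traces are illustrative, not the club's data):

```python
from collections import Counter

def directly_follows(event_log):
    """Count how often activity a is directly followed by activity b
    across all traces -- the core relation behind discovered process models."""
    relations = Counter()
    for trace in event_log:
        for a, b in zip(trace, trace[1:]):
            relations[(a, b)] += 1
    return relations

# Toy care-flow traces for two athletes
log = [
    ["evaluation", "thigh_injury", "rehab", "return_to_play"],
    ["evaluation", "thigh_injury", "rehab", "thigh_injury", "rehab",
     "return_to_play"],  # a loop: recurring injury before return to play
]
relations = directly_follows(log)
```

A pair that is directly followed by its own predecessor (here rehab → thigh_injury → rehab) shows up as a cycle in the discovered model, which is how recurring-injury loops are detected.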

19.
PeerJ Comput Sci ; 10: e2059, 2024.
Article in English | MEDLINE | ID: mdl-38855223

ABSTRACT

Diagnosing gastrointestinal (GI) disorders, which affect parts of the digestive system such as the stomach and intestines, can be difficult even for experienced gastroenterologists due to the variety of ways these conditions present. Early diagnosis is critical for successful treatment, but the review process is time-consuming and labor-intensive. Computer-aided diagnostic (CAD) methods provide a solution by automating diagnosis, saving time, reducing workload, and lowering the likelihood of missing critical signs. In recent years, machine learning and deep learning approaches have been used to develop many CAD systems to address this issue. However, existing systems need to be improved for better safety and reliability on larger datasets before they can be used in medical diagnostics. In our study, we developed an effective CAD system for classifying eight types of GI images by combining transfer learning with an attention mechanism. Our experimental results show that ConvNeXt is an effective pre-trained network for feature extraction, and ConvNeXt+Attention (our proposed method) is a robust CAD system that outperforms other cutting-edge approaches. Our proposed method had an area under the receiver operating characteristic curve of 0.9997 and an area under the precision-recall curve of 0.9973, indicating excellent performance. The conclusion regarding the effectiveness of the system was also supported by the values of other evaluation metrics.

20.
PeerJ Comput Sci ; 10: e2044, 2024.
Article in English | MEDLINE | ID: mdl-38855258

ABSTRACT

Patent lifespan is commonly used as a quantitative measure in patent assessments. Patent holders maintain exclusive rights by paying significant maintenance fees, suggesting a strong correlation between a patent's lifespan and its business potential or economic value. Therefore, accurately forecasting the duration of a patent is of great significance. This study introduces a highly effective method that combines LightGBM, a sophisticated machine learning algorithm, with a customized loss function derived from Focal Loss. The purpose of this approach is to accurately predict the probability of a patent remaining valid until its maximum expiration date. This research differs from previous studies that have examined the various stages and phases of patents; instead, it assesses the commercial viability of individual patents by considering their lifespan. The evaluation uses a dataset of 200,000 patents. The experimental results show a significant improvement in model performance from combining Focal Loss with LightGBM: incorporating Focal Loss enhances LightGBM's ability to prioritize difficult instances during training. This targeted approach improves the model's ability to distinguish between samples, its accuracy in making predictions, and its ability to generalize to new data.
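The customized loss above derives from Focal Loss; a minimal sketch of the binary focal loss for a single sample follows. The gradient/hessian plumbing needed to plug a custom objective into LightGBM is omitted, and the alpha/gamma defaults are the values commonly used in the literature, not necessarily the paper's:

```python
from math import log

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss for one sample.
    p: predicted probability of the positive class; y: true label (0 or 1).
    The (1 - p_t)**gamma factor down-weights easy, well-classified samples
    so training focuses on difficult instances."""
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * log(p_t)

# An easy positive (p = 0.9) contributes far less than a hard one (p = 0.6)
easy = focal_loss(0.9, 1)
hard = focal_loss(0.6, 1)
```

With gamma = 0 and alpha = 1 the expression reduces to ordinary cross-entropy, which makes the down-weighting role of gamma easy to verify.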
