Results 1 - 20 of 53
1.
Nat Methods ; 21(2): 195-212, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38347141

ABSTRACT

Increasing evidence shows that flaws in machine learning (ML) algorithm validation are an underestimated global problem. In biomedical image analysis, chosen performance metrics often do not reflect the domain interest, and thus fail to adequately measure scientific progress and hinder translation of ML techniques into practice. To overcome this, we created Metrics Reloaded, a comprehensive framework guiding researchers in the problem-aware selection of metrics. Developed by a large international consortium in a multistage Delphi process, it is based on the novel concept of a problem fingerprint: a structured representation of the given problem that captures all aspects that are relevant for metric selection, from the domain interest to the properties of the target structure(s), dataset and algorithm output. On the basis of the problem fingerprint, users are guided through the process of choosing and applying appropriate validation metrics while being made aware of potential pitfalls. Metrics Reloaded targets image analysis problems that can be interpreted as classification tasks at image, object or pixel level, namely image-level classification, object detection, semantic segmentation and instance segmentation tasks. To improve the user experience, we implemented the framework in the Metrics Reloaded online tool. Following the convergence of ML methodology across application domains, Metrics Reloaded fosters the convergence of validation methodology. Its applicability is demonstrated for various biomedical use cases.


Subject(s)
Algorithms; Image Processing, Computer-Assisted; Machine Learning; Semantics
2.
Gastrointest Endosc ; 2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38639679

ABSTRACT

BACKGROUND AND AIMS: The American Society for Gastrointestinal Endoscopy (ASGE) AI Task Force, along with experts in endoscopy, the technology space, regulatory authorities, and other medical subspecialties, initiated a consensus process that analyzed the current literature, highlighted potential areas, and outlined the research needed in artificial intelligence (AI) to allow a clearer understanding of AI as it currently pertains to endoscopy. METHODS: A modified Delphi process was used to develop these consensus statements. RESULTS: Statement 1: Current advances in AI allow for the development of AI-based algorithms that can be applied to endoscopy to augment endoscopist performance in detection and characterization of endoscopic lesions. Statement 2: Computer vision-based algorithms provide opportunities to redefine quality metrics in endoscopy using AI, which can be standardized and can reduce subjectivity in reporting quality metrics. Natural language processing-based algorithms can effortlessly help with the data abstraction needed for reporting current quality metrics in GI endoscopy. Statement 3: AI technologies can support smart endoscopy suites, which may help optimize workflows in the endoscopy suite, including automated documentation. Statement 4: Using AI and machine learning helps in predictive modeling, diagnosis, and prognostication. High-quality data with multidimensionality are needed for risk prediction, prognostication of specific clinical conditions, and their outcomes when using machine learning methods. Statement 5: Big data and cloud-based tools can help advance clinical research in gastroenterology. Multimodal data are key to understanding the maximal extent of the disease state and unlocking treatment options. Statement 6: Understanding how to evaluate AI algorithms in the gastroenterology literature and clinical trials is important for gastroenterologists, trainees, and researchers, and hence education efforts by GI societies are needed. Statement 7: Several challenges exist regarding integrating AI solutions into the clinical practice of endoscopy, including understanding the role of human-AI interaction. Transparency, interpretability, and explainability of AI algorithms play a key role in their clinical adoption in GI endoscopy. Developing appropriate AI governance, data procurement, and the tools needed for the AI lifecycle is critical for the successful implementation of AI into clinical practice. Statement 8: For payment of AI in endoscopy, a thorough evaluation of the potential value proposition of AI systems may help guide purchasing decisions. Reliable cost-effectiveness studies to guide reimbursement are needed. Statement 9: Relevant clinical outcomes and performance metrics for AI in gastroenterology are currently not well defined. To improve the quality and interpretability of research in the field, steps need to be taken to define these evidence standards. Statement 10: A balanced view of AI technologies and active collaboration between the medical technology industry, computer scientists, gastroenterologists, and researchers are critical for the meaningful advancement of AI in gastroenterology. CONCLUSIONS: The consensus process led by the ASGE AI Task Force and experts from various disciplines has shed light on the potential of AI in endoscopy and gastroenterology. AI-based algorithms have shown promise in augmenting endoscopist performance, redefining quality metrics, optimizing workflows, and aiding in predictive modeling and diagnosis. However, challenges remain in evaluating AI algorithms, ensuring transparency and interpretability, addressing governance and data procurement, determining payment models, defining relevant clinical outcomes, and fostering collaboration between stakeholders. Addressing these challenges while maintaining a balanced perspective is crucial for the meaningful advancement of AI in gastroenterology.

3.
Bipolar Disord ; 2024 Apr 19.
Article in English | MEDLINE | ID: mdl-38639725

ABSTRACT

INTRODUCTION: Alterations in motor activity are well-established symptoms of bipolar disorder, and time series of motor activity can be considered complex dynamical systems. In such systems, early warning signals (EWS) occur in a critical transition period preceding a sudden shift (tipping point) in the system. EWS are statistical observations occurring due to a system's declining ability to maintain homeostasis when approaching a tipping point. The aim was to identify critical transition periods preceding bipolar mood state changes. METHODS: Participants with a validated bipolar diagnosis were included in a one-year follow-up study, with repeated assessments of the participants' mood. Motor activity was recorded continuously by a wrist-worn actigraph. Participants assessed to have relapsed during follow-up were analyzed. Recognized EWS features were extracted from the motor activity data and analyzed by an unsupervised change point detection algorithm, capable of processing multi-dimensional data and developed to identify when the statistical properties of a time series change. RESULTS: Of 49 participants, six individuals experienced four depressive and four hypomanic/manic relapses, recording actigraphy for 23.8 ± 0.2 h/day over 39.8 ± 4.6 days. The algorithm detected change points in the time series and identified critical transition periods spanning 13.5 ± 7.2 days overall: 11.4 ± 1.8 days for depressions and 15.6 ± 10.2 days for hypomania/mania. CONCLUSION: The change point detection algorithm seems capable of recognizing impending mood episodes in continuously flowing data streams. Hence, we present an innovative method for forecasting approaching relapses to improve the clinical management of bipolar disorder.
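
The abstract does not name the specific unsupervised change point detector, so the following is only a hedged sketch of the general idea using the open-source ruptures library on synthetic stand-ins for multi-dimensional EWS features; it is not the authors' pipeline or data.

```python
# Illustrative sketch only: the paper's change point detector is not named in
# the abstract, so this uses the `ruptures` library on synthetic stand-ins for
# multi-dimensional EWS features (e.g., rolling variance and autocorrelation
# of actigraphy counts), with a shift in statistical properties after day 25.
import numpy as np
import ruptures as rpt

rng = np.random.default_rng(0)
stable = rng.normal(loc=[0.2, 0.5], scale=0.05, size=(25, 2))      # pre-transition days
transition = rng.normal(loc=[0.6, 0.9], scale=0.15, size=(15, 2))  # critical transition
features = np.vstack([stable, transition])

# PELT handles multi-dimensional signals and needs no preset number of breaks.
algo = rpt.Pelt(model="rbf").fit(features)
change_points = algo.predict(pen=5)  # indices where statistical properties change
print("Detected change points (day indices):", change_points)
```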

4.
BMC Health Serv Res ; 23(1): 1047, 2023 Sep 30.
Article in English | MEDLINE | ID: mdl-37777722

ABSTRACT

BACKGROUND: e-Health has played a crucial role during the COVID-19 pandemic in primary health care. e-Health is the cost-effective and secure use of Information and Communication Technologies (ICTs) to support health and health-related fields. Various stakeholders worldwide use ICTs, including individuals, non-profit organizations, health practitioners, and governments. As a result of the COVID-19 pandemic, ICT has improved the quality of healthcare, the exchange of information, and the training of healthcare professionals and patients, and has facilitated the relationship between patients and healthcare providers. This study systematically reviews the literature on ICT-based automatic and remote monitoring methods, as well as different ICT techniques used in the care of COVID-19-infected patients. OBJECTIVE: The purpose of this systematic literature review is to identify the e-Health methods, associated ICTs, method implementation strategies, information collection techniques, advantages, and disadvantages of remote and automatic patient monitoring and care in the COVID-19 pandemic. METHODS: The search included primary studies published between January 2020 and June 2022 in scientific and electronic databases, such as EBSCOhost, Scopus, ACM, Nature, SpringerLink, IEEE Xplore, MEDLINE, Google Scholar, JMIR, Web of Science, Science Direct, and PubMed. In this review, the findings from the included publications are presented and elaborated according to the identified research questions. The review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework. Additionally, we improved the review process using the Rayyan tool and the Scale for the Assessment of Narrative Review Articles (SANRA). Among the eligibility criteria were methodological rigor, conceptual clarity, and useful implementation of ICTs in e-Health for remote and automatic monitoring of COVID-19 patients. RESULTS: Our initial search identified 664 potential studies; 102 were assessed for eligibility in the pre-final stage, and 65 articles were included in the final review after applying the inclusion and exclusion criteria. The review identified the following e-Health methods: telemedicine, mobile health (mHealth), and telehealth. The associated ICTs are Wearable Body Sensors, Artificial Intelligence (AI) algorithms, the Internet of Things or Internet of Medical Things (IoT or IoMT), Biometric Monitoring Technologies (BioMeTs), and Bluetooth-enabled (BLE) home health monitoring devices. Spatial or positional data and personal health and wellness data, including vital signs, symptoms, biomedical images and signals, and lifestyle data, are examples of information managed by ICTs. Different AI and IoT methods have opened new possibilities for automatic and remote patient monitoring, with associated advantages and weaknesses. Our findings were represented in a structured manner using a semantic knowledge graph (e.g., an ontology model). CONCLUSIONS: This review discusses the various e-Health methods, related remote monitoring technologies, different approaches, information categories, the adoption of ICT tools for automatic remote patient monitoring (RPM), and the advantages and limitations of remote monitoring technologies (RMTs) in the COVID-19 case. The use of e-Health during the COVID-19 pandemic illustrates the constraints and possibilities of using ICTs. ICTs are not merely an external tool to achieve definite remote and automatic health monitoring goals; instead, they are embedded in contexts. Therefore, the importance of the mutual design process between ICT and society during the global health crisis has been observed from a social informatics perspective. A global health crisis can be seen as an information crisis (e.g., insufficient, unreliable, or inaccessible information); however, this review shows the influence of ICTs on COVID-19 patients' health monitoring and related information collection techniques.


Subject(s)
COVID-19; Humans; COVID-19/epidemiology; Pandemics; Artificial Intelligence; Delivery of Health Care; Physiological Monitoring
5.
BMC Med Inform Decis Mak ; 23(1): 278, 2023 12 01.
Article in English | MEDLINE | ID: mdl-38041041

ABSTRACT

BACKGROUND: Automated coaches (eCoach) can help people lead a healthy lifestyle (e.g., reduction of sedentary bouts) with continuous health status monitoring and personalized recommendation generation with artificial intelligence (AI). Semantic ontology can play a crucial role in knowledge representation, data integration, and information retrieval. METHODS: This study proposes a semantic ontology model to annotate the AI predictions, forecasting outcomes, and personal preferences to conceptualize a personalized recommendation generation model with a hybrid approach. This study considers a mixed activity projection method that takes individual activity insights from the univariate time-series prediction and ensemble multi-class classification approaches. We have introduced a way to improve the prediction result with a residual error minimization (REM) technique and make it meaningful in recommendation presentation with a naïve-based interval prediction approach. We have integrated the activity prediction results into an ontology for semantic interpretation. The SPARQL Protocol and RDF Query Language (SPARQL) was used to generate personalized recommendations in an understandable format. Moreover, we have evaluated the performance of the time-series prediction and classification models against standard metrics on both imbalanced and balanced versions of the public PMData and private MOX2-5 activity datasets. We have used the Adaptive Synthetic (ADASYN) sampling approach to generate synthetic data from the minority classes to avoid bias. The activity datasets were collected from healthy adults (n = 16 for the public dataset; n = 15 for the private dataset). Standard ensemble algorithms have been used to investigate the possibility of classifying daily physical activity levels into the following activity classes: sedentary (0), low active (1), active (2), highly active (3), and rigorous active (4). The daily step count, low physical activity (LPA), medium physical activity (MPA), and vigorous physical activity (VPA) serve as input for the classification models. Subsequently, we re-verify the classifiers on the private MOX2-5 dataset. The performance of the ontology has been assessed with reasoning and SPARQL query execution time. Additionally, we have verified our ontology for effective recommendation generation. RESULTS: We have tested several standard AI algorithms and selected the best-performing model with an optimized configuration for our use case by empirical testing. We have found that the autoregression model with the REM method outperforms the autoregression model without it for both datasets. The Gradient Boost (GB) classifier outperforms the other classifiers, with mean accuracy scores of 98.00% and 99.00% for the imbalanced PMData and MOX2-5 datasets, respectively, and 98.30% and 99.80% for the balanced PMData and MOX2-5 datasets, respectively. The HermiT reasoner performs better than other ontology reasoners under the defined settings. Our proposed algorithm shows a direction for combining AI prediction and forecasting results in an ontology to generate personalized activity recommendations in eCoaching. CONCLUSION: The proposed method, combining step-prediction, activity-level classification techniques, and personal preference information with semantic rules, is an asset for generating personalized recommendations.
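
As a rough illustration of how annotated activity predictions can be queried for recommendations with SPARQL, here is a minimal rdflib sketch; the namespace, classes, and properties (ex:ActivityObservation, ex:predictedSteps, ex:activityClass) are hypothetical placeholders, not the authors' ontology.

```python
# Minimal sketch, not the authors' ontology: a toy activity graph in rdflib,
# queried with SPARQL to produce a recommendation. All IRIs are hypothetical.
from rdflib import Graph, Literal, Namespace, RDF, XSD

EX = Namespace("http://example.org/ecoach#")
g = Graph()
g.bind("ex", EX)

obs = EX["obs_p01_day1"]                      # one annotated prediction
g.add((obs, RDF.type, EX.ActivityObservation))
g.add((obs, EX.participant, Literal("P01")))
g.add((obs, EX.predictedSteps, Literal(4200, datatype=XSD.integer)))
g.add((obs, EX.activityClass, Literal("sedentary")))

query = """
PREFIX ex: <http://example.org/ecoach#>
SELECT ?who ?steps WHERE {
    ?o a ex:ActivityObservation ;
       ex:participant ?who ;
       ex:predictedSteps ?steps ;
       ex:activityClass "sedentary" .
}
"""
for who, steps in g.query(query):
    print(f"Recommend a walk to {who}: only {steps} predicted steps.")
```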


Subject(s)
Artificial Intelligence; Heuristics; Humans; Semantics; Algorithms; Information Storage and Retrieval
6.
Sensors (Basel) ; 23(4)2023 Feb 13.
Article in English | MEDLINE | ID: mdl-36850686

ABSTRACT

The interest in video anomaly detection systems that can detect different types of anomalies, such as violent behaviours in surveillance videos, has gained traction in recent years. The current approaches employ deep learning to perform anomaly detection in videos, but this approach has multiple problems. For example, deep learning in general has issues with noise, concept drift, explainability, and training data volumes. Additionally, anomaly detection in itself is a complex task and faces challenges such as unknownness, heterogeneity, and class imbalance. Anomaly detection using deep learning is therefore mainly constrained to generative models such as generative adversarial networks and autoencoders due to their unsupervised nature; however, even they suffer from general deep learning issues and are hard to properly train. In this paper, we explore the capabilities of the Hierarchical Temporal Memory (HTM) algorithm to perform anomaly detection in videos, as it has favorable properties such as noise tolerance and online learning which combats concept drift. We introduce a novel version of HTM, named GridHTM, which is a grid-based HTM architecture specifically for anomaly detection in complex videos such as surveillance footage. We have tested GridHTM using the VIRAT video surveillance dataset, and the subsequent evaluation results and online learning capabilities prove the great potential of using our system for real-time unsupervised anomaly detection in complex videos.
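
The grid decomposition idea can be illustrated independently of HTM internals: each frame is split into cells and every cell keeps its own online anomaly scorer. In the sketch below a trivial running-mean scorer stands in for an HTM region; it is not the GridHTM implementation.

```python
# Sketch of the grid idea only: a placeholder running-mean scorer per cell
# stands in for an HTM region, since HTM internals are beyond this sketch.
import numpy as np

class CellScorer:
    """Placeholder per-cell detector: deviation from a running mean intensity."""
    def __init__(self, alpha=0.05):
        self.mean, self.alpha = None, alpha

    def score(self, cell):
        value = cell.mean()
        if self.mean is None:
            self.mean = value
        anomaly = abs(value - self.mean)
        self.mean = (1 - self.alpha) * self.mean + self.alpha * value  # online update
        return anomaly

def grid_anomaly_scores(frame, scorers, grid=(4, 4)):
    """frame: HxW grayscale array; returns per-cell anomaly scores."""
    h, w = frame.shape
    gh, gw = grid
    scores = np.zeros(grid)
    for i in range(gh):
        for j in range(gw):
            cell = frame[i * h // gh:(i + 1) * h // gh, j * w // gw:(j + 1) * w // gw]
            scores[i, j] = scorers[i][j].score(cell)
    return scores

scorers = [[CellScorer() for _ in range(4)] for _ in range(4)]
for frame in np.random.default_rng(0).random((10, 64, 64)):  # stand-in frame stream
    grid_scores = grid_anomaly_scores(frame, scorers)
print(grid_scores.round(3))
```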

7.
Sensors (Basel) ; 22(7)2022 Apr 06.
Article in English | MEDLINE | ID: mdl-35408416

ABSTRACT

Many data-related problems involve handling multiple data streams of different types at the same time. These problems are both complex and challenging, and researchers often end up using only one modality or combining the modalities via a late-fusion-based approach. To tackle this challenge, we develop and investigate the usefulness of a novel deep learning method called tower networks. This method is able to learn from multiple input data sources at once. We apply the tower network to the problem of short-term temperature forecasting. First, we compare our method to a number of meteorological baselines and simple statistical approaches. Further, we compare the tower network with two commonly used core network architectures, namely the convolutional neural network (CNN) and convolutional long short-term memory (convLSTM). The methods are compared on weather forecasting performance, and the deep learning methods are also compared in terms of memory usage and training time. The tower network performs well in comparison both with the meteorological baselines and with the other core architectures. Compared with the state-of-the-art operational Norwegian weather forecasting service, yr.no, the tower network has an overall 11% smaller root mean squared forecasting error. For the core architectures, the tower network shows competitive performance and proves to be more robust than the CNN and convLSTM models.
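
To make the multi-input idea concrete, the sketch below shows a generic two-tower model in PyTorch: one tower encodes a gridded field with convolutions, the other encodes a station time series with an LSTM, and the fused representation feeds a regression head. Shapes and layer sizes are made-up assumptions, not the authors' tower network architecture.

```python
# Generic two-tower sketch, not the authors' exact architecture.
import torch
import torch.nn as nn

class TwoTowerForecaster(nn.Module):
    def __init__(self):
        super().__init__()
        # Tower 1: gridded field input, shape (batch, 1, 16, 16).
        self.field_tower = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),            # -> (batch, 128)
        )
        # Tower 2: time-series input, shape (batch, 24, 3) = 24 hours, 3 variables.
        self.series_tower = nn.LSTM(input_size=3, hidden_size=32, batch_first=True)
        # Fusion head: concatenated tower outputs -> temperature estimate.
        self.head = nn.Sequential(nn.Linear(128 + 32, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, field, series):
        f = self.field_tower(field)
        _, (h, _) = self.series_tower(series)
        return self.head(torch.cat([f, h[-1]], dim=1))

model = TwoTowerForecaster()
print(model(torch.randn(2, 1, 16, 16), torch.randn(2, 24, 3)).shape)  # torch.Size([2, 1])
```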


Subject(s)
Neural Networks, Computer; Weather; Forecasting; Information Storage and Retrieval; Temperature
8.
Sensors (Basel) ; 22(10)2022 May 10.
Article in English | MEDLINE | ID: mdl-35632034

ABSTRACT

The increasing popularity of social networks and users' tendency towards sharing their feelings, expressions, and opinions in text, visual, and audio content have opened new opportunities and challenges in sentiment analysis. While sentiment analysis of text streams has been widely explored in the literature, sentiment analysis from images and videos is relatively new. This article focuses on visual sentiment analysis in a societally important domain, namely disaster analysis in social media. To this aim, we propose a deep visual sentiment analyzer for disaster-related images, covering different aspects of visual sentiment analysis starting from data collection, annotation, model selection, implementation, and evaluations. For data annotation and analyzing people's sentiments towards natural disasters and associated images in social media, a crowd-sourcing study has been conducted with a large number of participants worldwide. The crowd-sourcing study resulted in a large-scale benchmark dataset with four different sets of annotations, each aiming at a separate task. The presented analysis and the associated dataset, which is made public, will provide a baseline/benchmark for future research in the domain. We believe the proposed system can contribute toward more livable communities by helping different stakeholders, such as news broadcasters, humanitarian organizations, as well as the general public.


Subject(s)
Disasters; Social Media; Data Collection; Humans; Sentiment Analysis; Social Networking
9.
J Med Syst ; 44(10): 187, 2020 Sep 15.
Article in English | MEDLINE | ID: mdl-32929615

ABSTRACT

In this work, we propose the use of a genetic-algorithm-based attack against machine learning classifiers with the aim of 'stealing' users' biometric actigraphy profiles from health-related sensor data. The target classification model uses daily actigraphy patterns for user identification. The biometric profiles are modeled as what we call impersonator examples, which are generated based solely on the confidence scores obtained by repeatedly querying the target classifier. We conducted experiments in a black-box setting on a public dataset that contains actigraphy profiles from 55 individuals. The data consist of daily motion patterns recorded with an actigraphy device. These patterns can be used as biometric profiles to identify each individual. Our attack was able to generate examples capable of impersonating a target user with a success rate of 94.5%. Furthermore, we found that the impersonator examples have high transferability to other classifiers trained with the same training set. We also show that the generated biometric profiles closely resemble the ground truth profiles, which can lead to sensitive data exposure, such as revealing the time of day an individual wakes up and goes to bed.
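
The core loop of such a black-box attack can be sketched as follows: a population of candidate profiles is evolved using only the classifier's confidence score for the target user as fitness. The classifier and data below are synthetic stand-ins, not the paper's actigraphy model or dataset, and the genetic operators are deliberately simplistic.

```python
# Toy black-box impersonation sketch: evolve inputs that maximise the target
# classifier's confidence for one user, using only predict_proba queries.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n_users, n_features = 5, 24                 # e.g., 24 hourly activity counts

# Stand-in target model trained on synthetic daily activity patterns.
X = np.vstack([rng.normal(loc=u, scale=0.5, size=(40, n_features)) for u in range(n_users)])
y = np.repeat(np.arange(n_users), 40)
target_model = RandomForestClassifier(random_state=0).fit(X, y)

def fitness(pop, target_user):
    # Black-box access only: the confidence score for the target user.
    return target_model.predict_proba(pop)[:, target_user]

def evolve(target_user, pop_size=50, generations=30, mutation=0.3):
    pop = rng.normal(size=(pop_size, n_features))
    for _ in range(generations):
        scores = fitness(pop, target_user)
        parents = pop[np.argsort(scores)[-pop_size // 2:]]                   # selection
        children = parents + rng.normal(scale=mutation, size=parents.shape)  # mutation
        pop = np.vstack([parents, children])
    best = pop[np.argmax(fitness(pop, target_user))]
    return best, fitness(best[None, :], target_user)[0]

impersonator, confidence = evolve(target_user=3)
print(f"Impersonator example accepted as user 3 with confidence {confidence:.2f}")
```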


Subject(s)
Actigraphy; Theft; Algorithms; Biometry; Humans; Machine Learning
10.
J Appl Clin Med Phys ; 20(8): 141-154, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31251460

ABSTRACT

Wireless capsule endoscopy (WCE) is an effective technology that can be used to diagnose various lesions and abnormalities in the gastrointestinal (GI) tract. Due to the long time required to pass through the GI tract, the resulting WCE data stream contains a large number of frames, making it a tedious job for clinical experts to visually check each and every frame of a patient's complete video footage. In this paper, an automated technique for bleeding detection based on color and texture features is proposed. The approach uses color information, which is an essential feature for the initial detection of frames with bleeding. Additionally, it uses texture, which plays an important role in extracting more information from the lesions captured in the frames and allows the system to distinguish finely between borderline cases. The detection algorithm utilizes machine-learning-based classification methods; it can efficiently distinguish between bleeding and non-bleeding frames and perform pixel-level segmentation of bleeding areas in WCE frames. The experimental studies demonstrate the detection accuracy of the proposed bleeding detection method, which is at least as good as that of state-of-the-art approaches. In this research, we have conducted a broad comparison of a number of different state-of-the-art features and classification methods, which allows building an efficient and flexible WCE video processing system.
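
A hedged sketch of the general recipe, not the authors' exact pipeline: a per-frame colour histogram combined with a local-binary-pattern texture histogram, fed to a standard classifier. Feature parameters are illustrative, and the random frames at the end only demonstrate the call pattern.

```python
# Colour + texture feature sketch for bleeding/non-bleeding frame classification.
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def frame_features(frame_rgb):
    """frame_rgb: HxWx3 float array in [0, 1]."""
    # Colour cue: per-channel intensity histograms (bleeding is red-dominant).
    color_hist = np.concatenate(
        [np.histogram(frame_rgb[..., c], bins=16, range=(0, 1), density=True)[0]
         for c in range(3)]
    )
    # Texture cue: uniform LBP histogram on the grayscale frame.
    gray = (rgb2gray(frame_rgb) * 255).astype(np.uint8)
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([color_hist, lbp_hist])

def train_bleeding_detector(frames, labels):
    """frames: list of HxWx3 arrays; labels: 1 = bleeding, 0 = non-bleeding."""
    X = np.array([frame_features(f) for f in frames])
    return SVC(kernel="rbf", probability=True).fit(X, labels)

rng = np.random.default_rng(0)
frames = [rng.random((64, 64, 3)) for _ in range(6)]   # stand-ins for WCE frames
detector = train_bleeding_detector(frames, labels=[1, 0, 1, 0, 1, 0])
print(detector.predict([frame_features(frames[0])]))
```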


Subject(s)
Algorithms; Capsule Endoscopy/methods; Color; Gastrointestinal Hemorrhage/diagnosis; Gastrointestinal Tract/pathology; Pattern Recognition, Automated/methods; Video Recording/methods; Gastrointestinal Hemorrhage/diagnostic imaging; Gastrointestinal Tract/diagnostic imaging; Humans; Machine Learning; Wireless Technology
11.
Sci Rep ; 14(1): 4634, 2024 02 26.
Article in English | MEDLINE | ID: mdl-38409365

ABSTRACT

The widespread use of devices like mobile phones and wearables allows for automatic monitoring of human daily activities, generating vast datasets that offer insights into long-term human behavior. A structured and controlled data collection process is essential to unlock the full potential of this information. While wearable sensors for physical activity monitoring have gained significant traction in healthcare, sports science, and fitness applications, securing diverse and comprehensive datasets for research and algorithm development poses a notable challenge. In this proof-of-concept study, we underscore the significance of semantic representation in enhancing data interoperability and facilitating advanced analytics for physical activity sensor observations. Our approach focuses on enhancing the usability of physical activity datasets by employing a medical-grade (CE certified) sensor to generate synthetic datasets. Additionally, we provide insights into ethical considerations related to synthetic datasets. The study conducts a comparative analysis between real and synthetic activity datasets, assessing their effectiveness in mitigating model bias and promoting fairness in predictive analysis. We have created an ontology for semantically representing observations from physical activity sensors and conducted predictive analysis on data collected using MOX2-5 activity sensors. Until now, there has been a lack of publicly available physical activity datasets collected with the MOX2-5 medical-grade (CE-certified) activity monitoring device. The MOX2-5 captures and transmits high-resolution data, including activity intensity, weight-bearing, sedentary, standing, low, moderate, and vigorous physical activity, as well as steps per minute. Our dataset consists of physical activity data collected from 16 adults (Male: 12; Female: 4) over a period of 30-45 days (approximately 1.5 months), yielding a relatively small volume of 539 records. To address this limitation, we employ various synthetic data generation methods, such as Gaussian Copula (GC), Conditional Tabular Generative Adversarial Network (CTGAN), and Tabular Generative Adversarial Network (TABGAN), to augment the dataset with synthetic data. For both the authentic and synthetic datasets, we have developed a Multilayer Perceptron (MLP) classification model for accurately classifying daily physical activity levels. The findings underscore the effectiveness of semantic ontology in semantic search, knowledge representation, data integration, reasoning, and capturing meaningful relationships between data. The analysis supports the hypothesis that the efficiency of predictive models improves as the volume of additional synthetic training data increases. Ontology and Generative AI hold the potential to expedite advancements in behavioral monitoring research. The data presented, encompassing both real MOX2-5 and its synthetic counterpart, serves as a valuable resource for developing robust methods in activity type classification. Furthermore, it opens avenues for exploration into research directions related to synthetic data, including model efficiency, detection of generated data, and considerations regarding data privacy.
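
To make the Gaussian Copula idea concrete, here is a minimal, self-contained sampler that fits empirical marginals and a normal-score correlation and then draws synthetic rows; it illustrates the GC principle only and does not reproduce the authors' tooling or the CTGAN/TABGAN alternatives.

```python
# Minimal Gaussian-copula sketch for augmenting a small tabular activity dataset.
import numpy as np
from scipy import stats

def gaussian_copula_sample(real, n_samples, seed=0):
    """real: (n_rows, n_cols) numeric table; returns synthetic rows that keep
    the empirical marginals and a similar correlation structure."""
    rng = np.random.default_rng(seed)
    n, d = real.shape
    # 1. Map each column to normal scores via its empirical CDF (rank transform).
    ranks = np.argsort(np.argsort(real, axis=0), axis=0) + 1
    z = stats.norm.ppf(ranks / (n + 1))
    # 2. Estimate the correlation of the normal scores and sample from it.
    corr = np.corrcoef(z, rowvar=False)
    z_new = rng.multivariate_normal(np.zeros(d), corr, size=n_samples)
    # 3. Map back through each column's empirical quantiles.
    u_new = stats.norm.cdf(z_new)
    return np.column_stack([np.quantile(real[:, j], u_new[:, j]) for j in range(d)])

# Columns stand in for steps, LPA, MPA, VPA minutes; values are synthetic.
real = np.abs(np.random.default_rng(1).normal([6000, 40, 20, 5], [1500, 10, 5, 2], (50, 4)))
print(gaussian_copula_sample(real, n_samples=5).round(1))
```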


Subject(s)
Exercise; Semantics; Adult; Male; Humans; Female; Neural Networks, Computer; Algorithms; Human Activities
12.
Sci Data ; 11(1): 245, 2024 Feb 27.
Article in English | MEDLINE | ID: mdl-38413601

ABSTRACT

Clouds are important factors when projecting future climate. Unfortunately, future cloud fractional cover (the portion of the sky covered by clouds) is associated with significant uncertainty, making climate projections difficult. In this paper, we present the European Cloud Cover dataset, which can be used to learn statistical relations between cloud cover and other environmental variables, to potentially improve future climate projections. The dataset was created using a novel technique called Area Weighting Regridding Scheme to map satellite observations to cloud fractional cover on the same grid as the other variables in the dataset. Baseline experiments using autoregressive models document that it is possible to use the dataset to predict cloud fractional cover.
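
As a hedged illustration of the kind of autoregressive baseline the abstract mentions, the sketch below fits a simple AR model to a synthetic cloud-fractional-cover series with statsmodels; it does not use the European Cloud Cover dataset or the authors' configuration.

```python
# Autoregressive baseline sketch on a synthetic cloud fractional cover series.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(0)
n = 500
cover = np.clip(0.6 + 0.2 * np.sin(np.arange(n) / 20) + rng.normal(0, 0.05, n), 0, 1)

train, test = cover[:450], cover[450:]
model = AutoReg(train, lags=24).fit()
forecast = model.predict(start=len(train), end=len(train) + len(test) - 1)

rmse = np.sqrt(np.mean((forecast - test) ** 2))
print(f"AR(24) baseline RMSE on the held-out series: {rmse:.3f}")
```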

13.
Trauma Violence Abuse ; 25(1): 260-274, 2024 01.
Article in English | MEDLINE | ID: mdl-36727734

ABSTRACT

Livestreaming of child sexual abuse (LSCSA) is an established form of online child sexual exploitation and abuse (OCSEA). However, only a limited body of research has examined this issue. The COVID-19 pandemic has accelerated internet use and user knowledge of livestreaming services, emphasizing the importance of understanding this crime. In this scoping review, existing literature was brought together through an iterative search of eight databases containing peer-reviewed journal articles, as well as grey literature. Records were eligible for inclusion if the primary focus was on livestream technology and OCSEA, the child being defined as eighteen years or younger. Fourteen of the 2,218 records were selected. The data were charted and divided into four categories: victims, offenders, legislation, and technology. Limited research, differences in terminology, study design, and population inclusion criteria present a challenge to drawing general conclusions on the current state of LSCSA. The records show that victims are predominantly female. The average livestream offender was found to be older than the average online child sexual abuse offender. Therefore, it is unclear whether the findings are representative of the global population of livestream offenders. Furthermore, there appears to be a gap in what the records show on the platforms and payment services used and on current digital trends. The lack of a legal definition and privacy considerations pose a challenge to investigation, detection, and prosecution. The available data allow some insights into a potentially much larger issue.


Subject(s)
Child Abuse, Sexual; Child Abuse; Criminals; Child; Humans; Female; Male; Pandemics; Sexual Behavior
14.
PLoS One ; 19(5): e0304069, 2024.
Article in English | MEDLINE | ID: mdl-38820304

ABSTRACT

Deep learning has achieved immense success in computer vision and has the potential to help physicians analyze visual content for disease and other abnormalities. However, the current state of deep learning is very much a black box, making medical professionals skeptical about integrating these methods into clinical practice. Several methods have been proposed to shed some light on these black boxes, but there is no consensus on the opinions of the medical doctors who will consume these explanations. This paper presents a study asking medical professionals about their opinion of current state-of-the-art explainable artificial intelligence methods when applied to a gastrointestinal disease detection use case. We compare two different categories of explanation methods, intrinsic and extrinsic, and gauge the professionals' opinion of the current value of these explanations. The results indicate that intrinsic explanations are preferred and that physicians see value in the explanations. Based on the feedback collected in our study, future explanations of medical deep neural networks can be tailored to the needs and expectations of doctors. Hopefully, this will contribute to solving the issue of black-box medical systems and lead to successful implementation of this powerful technology in the clinic.


Subject(s)
Deep Learning; Physicians; Humans; Physicians/psychology; Artificial Intelligence; Neural Networks, Computer; Colonic Polyps/diagnosis; Colonoscopy/methods
15.
Child Maltreat ; : 10775595241263017, 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38889731

ABSTRACT

This proof-of-concept study focused on interviewers' behaviors and perceptions when interacting with a dynamic AI child avatar alleging abuse. Professionals (N = 68) took part in a virtual reality (VR) study in which they questioned an avatar presented as a child victim of sexual or physical abuse. Of interest was how interviewers questioned the avatar, how productive the child avatar was in response, and how interviewers perceived the VR interaction. Findings suggested alignment between interviewers' virtual questioning approaches and interviewers' typical questioning behavior in real-world investigative interviews, with a diverse range of questions used to elicit disclosures from the child avatar. The avatar responded to most question types as children typically do, though more nuanced programming of the avatar's productivity in response to complex question types is needed. Participants rated the avatar positively and felt comfortable with the VR experience. Results underscored the potential of AI-based interview training as a scalable, standardized alternative to traditional methods.

16.
Sci Data ; 11(1): 553, 2024 May 30.
Article in English | MEDLINE | ID: mdl-38816403

ABSTRACT

Data analysis for athletic performance optimization and injury prevention is of tremendous interest to sports teams and the scientific community. However, sports data are often sparse and hard to obtain due to legal restrictions, unwillingness to share, and lack of personnel resources to be assigned to the tedious process of data curation. These constraints make it difficult to develop automated systems for analysis, which require large datasets for learning. We therefore present SoccerMon, the largest soccer athlete dataset available today containing both subjective and objective metrics, collected from two different elite women's soccer teams over two years. Our dataset contains 33,849 subjective reports and 10,075 objective reports, the latter including over six billion GPS position measurements. SoccerMon can not only play a valuable role in developing better analysis and prediction systems for soccer, but also inspire similar data collection activities in other domains which can benefit from subjective athlete reports, GPS position information, and/or time-series data in general.


Subject(s)
Athletic Performance; Soccer; Humans; Female; Geographic Information Systems; Athletes
17.
Sci Rep ; 14(1): 2032, 2024 01 23.
Article in English | MEDLINE | ID: mdl-38263232

ABSTRACT

Polyps are well-known cancer precursors identified by colonoscopy. However, variability in their size, appearance, and location makes the detection of polyps challenging. Moreover, colonoscopy surveillance and removal of polyps are highly operator-dependent procedures and occur in a highly complex organ topology. There is a high missed detection rate and incomplete removal of colonic polyps. To assist in clinical procedures and reduce missed detection rates, automated methods for detecting and segmenting polyps using machine learning have been developed in recent years. However, the major drawback of most of these methods is their limited ability to generalise to out-of-sample unseen datasets from different centres, populations, modalities, and acquisition systems. To test this hypothesis rigorously, we, together with expert gastroenterologists, curated a multi-centre and multi-population dataset acquired from six different colonoscopy systems and challenged computational expert teams to develop robust automated detection and segmentation methods in a crowd-sourced endoscopic computer vision challenge. This work puts forward rigorous generalisability tests and assesses the usability of the devised deep learning methods in dynamic and actual clinical colonoscopy procedures. We analyse the results of the four top-performing teams for the detection task and the five top-performing teams for the segmentation task. Our analyses demonstrate that the top-ranking teams concentrated mainly on accuracy over the real-time performance required for clinical applicability. We further dissect the devised methods and provide an experiment-based hypothesis that reveals the need for improved generalisability to tackle the diversity present in multi-centre datasets and routine clinical procedures.


Subject(s)
Crowdsourcing; Deep Learning; Polyps; Humans; Colonoscopy; Computers
18.
Child Abuse Negl ; 143: 106324, 2023 09.
Article in English | MEDLINE | ID: mdl-37390589

ABSTRACT

BACKGROUND: Child investigative interviewing is a complex skill requiring specialised training. A critical training element is practice. Simulations with digital avatars are cost-effective options for delivering training. This study of real-world data provides novel insights by evaluating a large number of trainees' engagement with LiveSimulation (LiveSim), an online child avatar in which a trainee selects a question from an option tree and the avatar responds with the level of detail appropriate for the question type. While LiveSim has been shown to facilitate learning of open-ended questions, its utility from a user-engagement perspective remains to be examined. OBJECTIVE: We evaluated trainees' engagement with LiveSim, focusing on patterns of interaction (e.g., amount), appropriateness of the prompt structure, and the programme's technical compatibility. PARTICIPANTS AND SETTING: Professionals (N = 606, mainly child protection workers and police) who were offered the avatar as part of an intensive course on how to interview a child, conducted between 2009 and 2018. METHODS: For the descriptive analysis, Visual Basic for Applications coding in Excel was used to evaluate engagement and internal attributes of LiveSim. A compatibility study of the programme was run, testing different hardware with a focus on access and function. RESULTS: The trainees demonstrated good engagement with the programme across a variety of measures, including the number and timing of activity completions. Overall, given the known utility of avatars, our results provide strong support for the notion that a technically simple avatar like LiveSim can awaken user engagement. This is important knowledge for the further development of learning simulations using next-generation technology.


Subject(s)
Child Abuse; Humans; Child; Child Abuse/prevention & control; Learning
19.
Sci Rep ; 13(1): 10182, 2023 06 22.
Article in English | MEDLINE | ID: mdl-37349483

ABSTRACT

Electronic coaching (eCoach) facilitates goal-focused development for individuals to optimize certain human behavior. However, the automatic generation of personalized recommendations in eCoaching remains a challenging task. This research paper introduces a novel approach that combines deep learning and semantic ontologies to generate hybrid and personalized recommendations, considering "Physical Activity" as a case study. To achieve this, we employ three methods: time-series forecasting, time-series physical activity level classification, and statistical metrics for data processing. Additionally, we utilize a naïve-based probabilistic interval prediction technique, with the residual standard deviation used to make point predictions meaningful in the recommendation presentation. The processed results are integrated into activity datasets using an ontology called OntoeCoach, which facilitates semantic representation and reasoning. To generate personalized recommendations in an understandable format, we implement the SPARQL Protocol and RDF Query Language (SPARQL). We evaluate the performance of standard time-series forecasting algorithms [such as the 1D Convolutional Neural Network Model (CNN1D), autoregression, Long Short-Term Memory (LSTM), and Gated Recurrent Units (GRU)] and classifiers [including the Multilayer Perceptron (MLP), Rocket, MiniRocket, and MiniRocketVoting] using state-of-the-art metrics. We conduct evaluations on both public datasets (e.g., PMData) and private datasets (e.g., MOX2-5 activity). Our CNN1D model achieves the highest prediction accuracy of 97%, while the MLP model outperforms the other classifiers with an accuracy of 74%. Furthermore, we evaluate the performance of our proposed OntoeCoach ontology model by assessing reasoning and query execution time metrics. The results demonstrate that our approach effectively plans and generates recommendations on both datasets. The rule set of OntoeCoach can also be generalized to enhance interpretability.
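
A generic 1D-CNN forecaster of the sort named above can be sketched in PyTorch as follows; the window length and layer sizes are illustrative assumptions and do not reflect the authors' CNN1D configuration.

```python
# Generic 1D-CNN sketch: map a window of past daily step counts to the next day.
import torch
import torch.nn as nn

class Cnn1dForecaster(nn.Module):
    def __init__(self, window=14):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * window, 1),   # next-day step count
        )

    def forward(self, x):                # x: (batch, 1, window)
        return self.net(x)

model = Cnn1dForecaster(window=14)
past_steps = torch.rand(8, 1, 14) * 10000    # 8 samples of 14-day windows
print(model(past_steps).shape)               # torch.Size([8, 1])
```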


Subject(s)
Deep Learning; Humans; Neural Networks, Computer; Algorithms; Forecasting
20.
Diagnostics (Basel) ; 13(22)2023 Nov 09.
Article in English | MEDLINE | ID: mdl-37998548

ABSTRACT

An important part of diagnostics is to gain insight into the properties that characterize a disease. Machine learning has been used for this purpose, for instance, to identify biomarkers in genomics. However, when patient data are presented as images, identifying properties that characterize a disease becomes far more challenging. A common strategy involves extracting features from the images and analyzing their occurrence in healthy versus pathological images. A limitation of this approach is that the ability to gain new insights into the disease from the data is constrained by the information in the extracted features. Typically, these features are manually extracted by humans, which further limits the potential for new insights. To overcome these limitations, in this paper, we propose a novel framework that provides insights into diseases without relying on handcrafted features or human intervention. Our framework is based on deep learning (DL), explainable artificial intelligence (XAI), and clustering. DL is employed to learn deep patterns, enabling efficient differentiation between healthy and pathological images. XAI visualizes these patterns, and a novel "explanation-weighted" clustering technique is introduced to gain an overview of these patterns across multiple patients. We applied the method to images from the gastrointestinal tract. In addition to real healthy images and real images of polyps, some of the images had synthetic shapes added to represent types of pathology other than polyps. The results show that our proposed method was capable of organizing the images based on the reasons they were diagnosed as pathological, achieving high cluster quality and a Rand index close to or equal to one.
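
The abstract does not detail the "explanation-weighted" clustering technique, so the sketch below is only a loose, hypothetical illustration of the general idea of weighting per-region image features by explanation scores before clustering; the inputs features and saliency are assumed given, and the weighting scheme is an assumption, not the authors' method.

```python
# Hypothetical illustration only: weight features by XAI saliency, then cluster.
import numpy as np
from sklearn.cluster import KMeans

def explanation_weighted_clusters(features, saliency, n_clusters=3, seed=0):
    """features, saliency: arrays of shape (n_images, n_regions); saliency >= 0."""
    weights = saliency / (saliency.sum(axis=1, keepdims=True) + 1e-9)
    weighted = features * weights   # emphasise regions the XAI method flagged
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(weighted)

rng = np.random.default_rng(0)
labels = explanation_weighted_clusters(rng.random((30, 64)), rng.random((30, 64)))
print(np.bincount(labels))   # images grouped by the regions that drove the diagnosis
```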
