Results 1 - 20 of 37

1.
J Med Internet Res ; 26: e53396, 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38967964

ABSTRACT

BACKGROUND: In the realm of in vitro fertilization (IVF), artificial intelligence (AI) models serve as invaluable tools for clinicians, offering predictive insights into ovarian stimulation outcomes. Predicting and understanding a patient's response to ovarian stimulation can help in personalizing doses of drugs, preventing adverse outcomes (eg, hyperstimulation), and improving the likelihood of successful fertilization and pregnancy. Given the pivotal role of accurate predictions in IVF procedures, it becomes important to investigate the landscape of AI models that are being used to predict the outcomes of ovarian stimulation. OBJECTIVE: The objective of this review is to comprehensively examine the literature to explore the characteristics of AI models used for predicting ovarian stimulation outcomes in the context of IVF. METHODS: A total of 6 electronic databases were searched for peer-reviewed literature published before August 2023, using the concepts of IVF and AI, along with their related terms. Records were independently screened by 2 reviewers against the eligibility criteria. The extracted data were then consolidated and presented through narrative synthesis. RESULTS: Upon reviewing 1348 articles, 30 met the predetermined inclusion criteria. The literature primarily focused on the number of oocytes retrieved as the main predicted outcome. Microscopy images stood out as the primary ground truth reference. The reviewed studies also highlighted that the most frequently adopted stimulation protocol was the gonadotropin-releasing hormone (GnRH) antagonist. In terms of using trigger medication, human chorionic gonadotropin (hCG) was the most commonly selected option. Among the machine learning techniques, the favored choice was the support vector machine. As for the validation of AI algorithms, the hold-out cross-validation method was the most prevalent. The area under the curve was highlighted as the primary evaluation metric. 
The literature exhibited a wide variation in the number of features used for AI algorithm development, ranging from 2 to 28,054 features. Data were mostly sourced from patient demographics, followed by laboratory data, specifically hormonal levels. Notably, the vast majority of studies were restricted to a single infertility clinic and exclusively relied on nonpublic data sets. CONCLUSIONS: These insights highlight an urgent need to diversify data sources and explore varied AI techniques for improved prediction accuracy and generalizability of AI models for the prediction of ovarian stimulation outcomes. Future research should prioritize multiclinic collaborations and consider leveraging public data sets, aiming for more precise AI-driven predictions that ultimately boost patient care and IVF success rates.
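The area under the curve highlighted above as the primary evaluation metric can be computed directly from ranked predictions via its Mann-Whitney rank formulation; a minimal, self-contained sketch (the labels and scores below are made-up toy values, not data from the review):

```python
# Minimal sketch: area under the ROC curve (AUC) from predicted scores,
# using the rank-statistic (Mann-Whitney U) formulation.
# Toy data only -- illustrative, not taken from any reviewed study.

def auc_score(labels, scores):
    """AUC = probability that a random positive outranks a random negative."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5  # ties count as half a win
    return wins / (len(pos) * len(neg))

# Toy example: 1 = good ovarian response, 0 = poor response (hypothetical)
labels = [1, 1, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.2]
print(auc_score(labels, scores))
```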


Subject(s)
Artificial Intelligence , Fertilization in Vitro , Ovulation Induction , Humans , Ovulation Induction/methods , Fertilization in Vitro/methods , Female , Pregnancy
2.
J Med Internet Res ; 26: e52622, 2024 Jan 31.
Article in English | MEDLINE | ID: mdl-38294846

ABSTRACT

BACKGROUND: Students usually encounter stress throughout their academic path. Ongoing stressors may lead to chronic stress, adversely affecting their physical and mental well-being. Thus, early detection and monitoring of stress among students are crucial. Wearable artificial intelligence (AI) has emerged as a valuable tool for this purpose. It offers an objective, noninvasive, nonobtrusive, automated approach to continuously monitor biomarkers in real time, thereby addressing the limitations of traditional approaches such as self-reported questionnaires. OBJECTIVE: This systematic review and meta-analysis aim to assess the performance of wearable AI in detecting and predicting stress among students. METHODS: Search sources in this review included 7 electronic databases (MEDLINE, Embase, PsycINFO, ACM Digital Library, Scopus, IEEE Xplore, and Google Scholar). We also checked the reference lists of the included studies and checked studies that cited the included studies. The search was conducted on June 12, 2023. This review included research articles centered on the creation or application of AI algorithms for the detection or prediction of stress among students using data from wearable devices. In total, 2 independent reviewers performed study selection, data extraction, and risk-of-bias assessment. The Quality Assessment of Diagnostic Accuracy Studies-Revised tool was adapted and used to examine the risk of bias in the included studies. Evidence synthesis was conducted using narrative and statistical techniques. RESULTS: This review included 5.8% (19/327) of the studies retrieved from the search sources. A meta-analysis of 37 accuracy estimates derived from 32% (6/19) of the studies revealed a pooled mean accuracy of 0.856 (95% CI 0.70-0.93). 
Subgroup analyses demonstrated that the accuracy of wearable AI was moderated by the number of stress classes (P=.02), type of wearable device (P=.049), location of the wearable device (P=.02), data set size (P=.009), and ground truth (P=.001). The average estimates of sensitivity, specificity, and F1-score were 0.755 (SD 0.181), 0.744 (SD 0.147), and 0.759 (SD 0.139), respectively. CONCLUSIONS: Wearable AI shows promise in detecting student stress but currently has suboptimal performance. The results of the subgroup analyses should be carefully interpreted given that many of these findings may be due to other confounding factors rather than the underlying grouping characteristics. Thus, wearable AI should be used alongside other assessments (eg, clinical questionnaires) until further evidence is available. Future research should explore the ability of wearable AI to differentiate types of stress, distinguish stress from other mental health issues, predict future occurrences of stress, consider factors such as the placement of the wearable device and the methods used to assess the ground truth, and report detailed results to facilitate the conduct of meta-analyses. TRIAL REGISTRATION: PROSPERO CRD42023435051; http://tinyurl.com/3fzb5rnp.
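A pooled mean accuracy like the one reported above is typically obtained by inverse-variance weighting of the per-study estimates on the logit scale; a minimal fixed-effect sketch with toy accuracies and sample sizes (not the review's actual data):

```python
# Minimal sketch of inverse-variance pooling of accuracy estimates on the
# logit scale, a common way proportions are combined in a meta-analysis.
# Toy values only; real meta-analyses would also model between-study variance.
import math

def pool_proportions(props, ns):
    """Fixed-effect pooled proportion via logit transform + inverse-variance weights."""
    logits, weights = [], []
    for p, n in zip(props, ns):
        logit = math.log(p / (1 - p))
        var = 1.0 / (n * p) + 1.0 / (n * (1 - p))  # variance of a logit-proportion
        logits.append(logit)
        weights.append(1.0 / var)
    pooled_logit = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    return 1.0 / (1.0 + math.exp(-pooled_logit))  # back-transform to a proportion

accs = [0.90, 0.82, 0.78, 0.88]  # per-study accuracy (toy values)
ns = [120, 45, 60, 200]          # per-study sample size (toy values)
print(round(pool_proportions(accs, ns), 3))
```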


Subject(s)
Algorithms , Artificial Intelligence , Humans , Databases, Factual , Digital Libraries , Mental Health
3.
J Med Internet Res ; 25: e40259, 2023 03 14.
Article in English | MEDLINE | ID: mdl-36917147

ABSTRACT

BACKGROUND: In 2021 alone, diabetes mellitus, a metabolic disorder primarily characterized by abnormally high blood glucose (BG) levels, affected 537 million people globally, and over 6 million deaths were reported. The use of noninvasive technologies, such as wearable devices (WDs), to regulate and monitor BG in people with diabetes is a relatively new concept, still in its infancy. Noninvasive WDs coupled with machine learning (ML) techniques have the potential to extract meaningful information from the gathered data and provide clinically meaningful advanced analytics for the purpose of forecasting or prediction. OBJECTIVE: The purpose of this study is to provide a systematic review, complete with a quality assessment, of the effectiveness of using artificial intelligence (AI) in WDs for forecasting or predicting BG levels in people with diabetes. METHODS: We searched 7 of the most popular bibliographic databases. Two reviewers performed study selection and data extraction independently before cross-checking the extracted data. A narrative approach was used to synthesize the data. Quality assessment was performed using an adapted version of the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool. RESULTS: From the initial 3872 studies, the features from 12 studies were reported after filtering according to our predefined inclusion criteria. The risk of bias for the reference standard was classified as low in nearly all studies (n=11, 92%), as all ground truths were easily replicable. Since the data input to the AI technology was highly standardized and there was no effect of flow or time frame on the final output, both factors were also categorized as low risk (n=11, 92%). Classical ML approaches were deployed by half of the studies, the most popular being ensemble tree methods (random forest). The most common evaluation metric was the Clarke error grid (n=7, 58%), followed by root mean square error (n=5, 42%). 
The wide usage of photoplethysmogram and near-infrared sensors was observed on wrist-worn devices. CONCLUSIONS: This review has provided the most extensive work to date summarizing WDs that use ML for diabetic-related BG level forecasting or prediction. Although current studies are few, this study suggests that the general quality of the studies was considered high, as revealed by the QUADAS-2 assessment tool. Further validation is needed for commercially available devices, but we envisage that WDs in general have the potential to remove the need for invasive devices completely for glucose monitoring in the not-too-distant future. TRIAL REGISTRATION: PROSPERO CRD42022303175; https://tinyurl.com/3n9jaayc.
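Root mean square error, one of the two metrics noted above, is straightforward to compute from paired reference and forecast values; a small sketch with made-up glucose readings:

```python
# Minimal sketch of root mean square error (RMSE) for BG forecasting.
# The mg/dL readings below are made-up illustrative values.
import math

def rmse(actual, predicted):
    """RMSE between paired reference values and model forecasts."""
    if len(actual) != len(predicted) or not actual:
        raise ValueError("inputs must be non-empty and the same length")
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

actual = [110, 150, 95, 180]     # reference glucose, mg/dL (toy)
predicted = [118, 140, 99, 170]  # model forecast, mg/dL (toy)
print(round(rmse(actual, predicted), 2))
```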


Subject(s)
Diabetes Mellitus, Type 1 , Wearable Electronic Devices , Humans , Artificial Intelligence , Blood Glucose/metabolism , Blood Glucose Self-Monitoring/methods , Forecasting
4.
J Med Internet Res ; 25: e46233, 2023 Feb 07.
Article in English | MEDLINE | ID: mdl-36749946

ABSTRACT

[This corrects the article DOI: 10.2196/42672.].

5.
J Med Internet Res ; 25: e42672, 2023 01 19.
Article in English | MEDLINE | ID: mdl-36656625

ABSTRACT

BACKGROUND: Anxiety and depression are the most common mental disorders worldwide. Owing to the lack of psychiatrists around the world, the incorporation of artificial intelligence (AI) into wearable devices (wearable AI) has been exploited to provide mental health services. OBJECTIVE: This review aimed to explore the features of wearable AI used for anxiety and depression to identify application areas and open research issues. METHODS: We searched 8 electronic databases (MEDLINE, PsycINFO, Embase, CINAHL, IEEE Xplore, ACM Digital Library, Scopus, and Google Scholar) and included studies that met the inclusion criteria. Then, we checked the studies that cited the included studies and screened studies that were cited by the included studies. The study selection and data extraction were carried out by 2 reviewers independently. The extracted data were aggregated and summarized using narrative synthesis. RESULTS: Of the 1203 studies identified, 69 (5.74%) were included in this review. Approximately two-thirds of the studies used wearable AI for depression, whereas the remaining studies used it for anxiety. The most frequent application of wearable AI was in diagnosing anxiety and depression; however, none of the studies used it for treatment purposes. Most studies targeted individuals aged between 18 and 65 years. The most common wearable device used in the studies was Actiwatch AW4 (Cambridge Neurotechnology Ltd). Wrist-worn devices were the most common type of wearable device in the studies. The most commonly used category of data for model development was physical activity data, followed by sleep data and heart rate data. The most frequently used data set from open sources was Depresjon. The most commonly used algorithm was random forest, followed by support vector machine. CONCLUSIONS: Wearable AI can offer great promise in providing mental health services related to anxiety and depression. 
Wearable AI can be used by individuals for the prescreening assessment of anxiety and depression. Further reviews are needed to statistically synthesize the studies' results related to the performance and effectiveness of wearable AI. Given its potential, technology companies should invest more in wearable AI for the treatment of anxiety and depression.


Subject(s)
Artificial Intelligence , Depression , Humans , Adolescent , Young Adult , Adult , Middle Aged , Aged , Depression/diagnosis , Depression/therapy , Anxiety/diagnosis , Anxiety/therapy , Anxiety Disorders , Algorithms
6.
J Med Internet Res ; 25: e48754, 2023 11 08.
Article in English | MEDLINE | ID: mdl-37938883

ABSTRACT

BACKGROUND: Anxiety disorders rank among the most prevalent mental disorders worldwide. Anxiety symptoms are typically evaluated using self-assessment surveys or interview-based assessment methods conducted by clinicians, which can be subjective, time-consuming, and challenging to repeat. Therefore, there is an increasing demand for using technologies capable of providing objective and early detection of anxiety. Wearable artificial intelligence (AI), the combination of AI technology and wearable devices, has been widely used to detect and predict anxiety disorders automatically, objectively, and more efficiently. OBJECTIVE: This systematic review and meta-analysis aims to assess the performance of wearable AI in detecting and predicting anxiety. METHODS: Relevant studies were retrieved by searching 8 electronic databases and backward and forward reference list checking. In total, 2 reviewers independently carried out study selection, data extraction, and risk-of-bias assessment. The included studies were assessed for risk of bias using a modified version of the Quality Assessment of Diagnostic Accuracy Studies-Revised. Evidence was synthesized using a narrative (ie, text and tables) and statistical (ie, meta-analysis) approach as appropriate. RESULTS: Of the 918 records identified, 21 (2.3%) were included in this review. A meta-analysis of results from 81% (17/21) of the studies revealed a pooled mean accuracy of 0.82 (95% CI 0.71-0.89). Meta-analyses of results from 48% (10/21) of the studies showed a pooled mean sensitivity of 0.79 (95% CI 0.57-0.91) and a pooled mean specificity of 0.92 (95% CI 0.68-0.98). Subgroup analyses demonstrated that the performance of wearable AI was not moderated by algorithms, aims of AI, wearable devices used, status of wearable devices, data types, data sources, reference standards, and validation methods. CONCLUSIONS: Although wearable AI has the potential to detect anxiety, it is not yet advanced enough for clinical use. 
Until further evidence shows an ideal performance of wearable AI, it should be used along with other clinical assessments. Wearable device companies need to develop devices that can promptly detect anxiety and identify specific time points during the day when anxiety levels are high. Further research is needed to differentiate types of anxiety, compare the performance of different wearable devices, and investigate the impact of the combination of wearable device data and neuroimaging data on the performance of wearable AI. TRIAL REGISTRATION: PROSPERO CRD42023387560; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=387560.


Subject(s)
Anxiety , Artificial Intelligence , Humans , Anxiety/diagnosis , Anxiety Disorders , Algorithms , Databases, Factual
7.
J Med Internet Res ; 25: e43607, 2023 04 12.
Article in English | MEDLINE | ID: mdl-37043277

ABSTRACT

BACKGROUND: Learning disabilities are among the major cognitive impairments caused by aging. Among the interventions used to improve learning among older adults are serious games, which are participative electronic games designed for purposes other than entertainment. Although some systematic reviews have examined the effectiveness of serious games on learning, they are undermined by some limitations, such as focusing on older adults without cognitive impairments, focusing on particular types of serious games, and not considering the comparator type in the analysis. OBJECTIVE: This review aimed to evaluate the effectiveness of serious games on verbal and nonverbal learning among older adults with cognitive impairment. METHODS: Eight electronic databases were searched to retrieve studies relevant to this systematic review and meta-analysis. Furthermore, we went through the studies that cited the included studies and screened the reference lists of the included studies and relevant reviews. Two reviewers independently checked the eligibility of the identified studies, extracted data from the included studies, and appraised their risk of bias and the quality of the evidence. The results of the included studies were summarized using a narrative synthesis or meta-analysis, as appropriate. RESULTS: Of the 559 citations retrieved, 11 (2%) randomized controlled trials (RCTs) ultimately met all eligibility criteria for this review. A meta-analysis of 45% (5/11) of the RCTs revealed that serious games are effective in improving verbal learning among older adults with cognitive impairment in comparison with no or sham interventions (P=.04), and serious games do not have a different effect on verbal learning between patients with mild cognitive impairment and those with Alzheimer disease (P=.89). A meta-analysis of 18% (2/11) of the RCTs revealed that serious games are as effective as conventional exercises in promoting verbal learning (P=.98). 
We also found that serious games outperformed no or sham interventions (4/11, 36%; P=.03) and conventional cognitive training (2/11, 18%; P<.001) in enhancing nonverbal learning. CONCLUSIONS: Serious games have the potential to enhance verbal and nonverbal learning among older adults with cognitive impairment. However, our findings remain inconclusive because of the low quality of evidence, the small sample size in most of the meta-analyzed studies (6/8, 75%), and the paucity of studies included in the meta-analyses. Thus, until further convincing proof of their effectiveness is offered, serious games should be used to supplement current interventions for verbal and nonverbal learning rather than replace them entirely. Further studies are needed to compare serious games with conventional cognitive training and conventional exercises, as well as different types of serious games, different platforms, different intervention periods, and different follow-up periods. TRIAL REGISTRATION: PROSPERO CRD42022348849; https://tinyurl.com/y6yewwfa.


Subject(s)
Alzheimer Disease , Cognitive Dysfunction , Exergaming , Memory, Episodic , Aged , Humans , Cognitive Dysfunction/therapy , Exercise , Learning
8.
J Med Internet Res ; 24(8): e36010, 2022 08 09.
Article in English | MEDLINE | ID: mdl-35943772

ABSTRACT

BACKGROUND: Prevalence of diabetes has steadily increased over the last few decades, with 1.5 million deaths reported in 2012 alone. Traditionally, monitoring patients with diabetes has remained a largely invasive approach. Wearable devices (WDs) make use of sensors historically reserved for hospital settings. WDs coupled with artificial intelligence (AI) algorithms show promise to help extract meaningful information from the gathered data and provide advanced and clinically meaningful analytics. OBJECTIVE: This review aimed to provide an overview of AI-driven WD features for diabetes and their use in monitoring diabetes-related parameters. METHODS: We searched 7 of the most popular bibliographic databases using 3 groups of search terms related to diabetes, WDs, and AI. A 2-stage process was followed for study selection: reading abstracts and titles followed by full-text screening. Two reviewers independently performed study selection and data extraction, and disagreements were resolved by consensus. A narrative approach was used to synthesize the data. RESULTS: From an initial 3872 studies, we report the features from 37 studies after filtering according to our predefined inclusion criteria. Most of the studies targeted type 1 diabetes, type 2 diabetes, or both (21/37, 57%). Many studies (15/37, 41%) reported blood glucose as their main measurement. More than half of the studies (21/37, 57%) had the aim of estimation and prediction of glucose or glucose level monitoring. Over half of the reviewed studies looked at wrist-worn devices. Only 41% of the study devices were commercially available. We observed the use of multiple sensors, with photoplethysmography sensors being most prevalent in 32% (12/37) of studies. Studies reported and compared more than 1 machine learning (ML) model with high levels of accuracy. Support vector machine was the most reported (13/37, 35%), followed by random forest (12/37, 32%). 
CONCLUSIONS: This review is the most extensive work, to date, summarizing WDs that use ML for people with diabetes, and provides research direction to those wanting to further contribute to this emerging field. Given the advancements in WD technologies replacing the need for invasive hospital setting devices, we see great advancement potential in this domain. Further work is needed to validate the ML approaches on clinical data from WDs and provide meaningful analytics that could serve as data gathering, monitoring, prediction, classification, and recommendation devices in the context of diabetes.


Subject(s)
Diabetes Mellitus, Type 1 , Diabetes Mellitus, Type 2 , Wearable Electronic Devices , Artificial Intelligence , Blood Glucose , Diabetes Mellitus, Type 1/therapy , Humans
9.
J Med Internet Res ; 23(11): e29749, 2021 11 19.
Article in English | MEDLINE | ID: mdl-34806996

ABSTRACT

BACKGROUND: Bipolar disorder (BD) is the 10th most common cause of frailty in young individuals and contributes to morbidity and mortality worldwide. Patients with BD have a life expectancy 9 to 17 years lower than that of the general population. BD is a highly prevalent mental disorder, but it can be misdiagnosed as depressive disorder, which leads to difficulties in treating affected patients; approximately 60% of patients with BD are treated for depression. However, machine learning provides advanced skills and techniques for better diagnosis of BD. OBJECTIVE: This review aims to explore the machine learning algorithms used for the detection and diagnosis of bipolar disorder and its subtypes. METHODS: The study protocol adopted the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) guidelines. We explored 3 databases, namely Google Scholar, ScienceDirect, and PubMed. To enhance the search, we performed backward screening of all the references of the included studies. Based on the predefined selection criteria, 2 levels of screening were performed: title and abstract review, and full review of the articles that met the inclusion criteria. Data extraction was performed independently by all investigators. To synthesize the extracted data, a narrative synthesis approach was followed. RESULTS: We retrieved 573 potential articles from the 3 databases. After preprocessing and screening, only 33 articles that met our inclusion criteria were identified. The most commonly used data belonged to the clinical category (19, 58%). We identified different machine learning models used in the selected studies, including classification models (18, 55%), regression models (5, 16%), model-based clustering methods (2, 6%), natural language processing (1, 3%), clustering algorithms (1, 3%), and deep learning-based models (3, 9%). 
Magnetic resonance imaging data were most commonly used for distinguishing patients with BD from other groups (11, 34%), whereas microarray expression and genomic data sets were the least commonly used. Reported accuracies ranged from 64% to 98%. CONCLUSIONS: This scoping review provides an overview of recent studies based on machine learning models used to diagnose patients with BD, regardless of their demographics or whether they were compared with patients with other psychiatric diagnoses. Further research can be conducted to provide clinical decision support in the health industry.


Subject(s)
Bipolar Disorder , Algorithms , Bipolar Disorder/diagnosis , Data Management , Humans , Machine Learning , Natural Language Processing
10.
J Med Internet Res ; 23(9): e29136, 2021 09 14.
Article in English | MEDLINE | ID: mdl-34406962

ABSTRACT

BACKGROUND: Technologies have been extensively implemented to provide health care services for all types of clinical conditions during the COVID-19 pandemic. While several reviews have been conducted regarding technologies used during the COVID-19 pandemic, they were limited by focusing either on a specific technology (or features) or proposed rather than implemented technologies. OBJECTIVE: This review aims to provide an overview of technologies, as reported in the literature, implemented during the first wave of the COVID-19 pandemic. METHODS: We conducted a scoping review using PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) Extension for Scoping Reviews. Studies were retrieved by searching 8 electronic databases, checking the reference lists of included studies and relevant reviews (backward reference list checking), and checking studies that cited included studies (forward reference list checking). The search terms were chosen based on the target intervention (ie, technologies) and the target disease (ie, COVID-19). We included English publications that focused on technologies or digital tools implemented during the COVID-19 pandemic to provide health-related services regardless of target health condition, user, or setting. Two reviewers independently assessed the eligibility of studies and extracted data from eligible papers. We used a narrative approach to synthesize extracted data. RESULTS: Of 7374 retrieved papers, 126 were deemed eligible. Telemedicine was the most common type of technology (107/126, 84.9%) implemented in the first wave of the COVID-19 pandemic, and the most common mode of telemedicine was synchronous (100/108, 92.6%). The most common purpose of the technologies was providing consultation (75/126, 59.5%), followed by following up with patients (45/126, 35.7%), and monitoring their health status (22/126, 17.4%). 
Zoom (22/126, 17.5%) and WhatsApp (12/126, 9.5%) were the most commonly used videoconferencing and social media platforms, respectively. Both health care professionals and health consumers were the most common target users (103/126, 81.7%). The health condition most frequently targeted was COVID-19 (38/126, 30.2%), followed by any physical health condition (21/126, 16.7%) and mental health conditions (13/126, 10.3%). Technologies were web-based in 84.1% of the studies (106/126). Technologies could be used through 11 modes, and the most common were mobile apps (86/126, 68.3%), desktop apps (73/126, 57.9%), telephone calls (49/126, 38.9%), and websites (45/126, 35.7%). CONCLUSIONS: Technologies played a crucial role in mitigating the challenges faced during the COVID-19 pandemic. We did not find papers describing the implementation of other technologies (eg, contact-tracing apps, drones, blockchain) during the first wave, nor were the technologies in this review used for other purposes (eg, drug and vaccine discovery, social distancing, and immunity passports). Future research on these technologies and purposes is recommended, and further reviews are required to investigate technologies implemented in subsequent waves of the pandemic.


Subject(s)
COVID-19 , Telemedicine , Humans , Pandemics , SARS-CoV-2 , Technology
11.
Bioinformatics ; 35(24): 5359-5360, 2019 12 15.
Article in English | MEDLINE | ID: mdl-31350543

ABSTRACT

SUMMARY: As large-scale metabolic phenotyping studies become increasingly common, the need for systematic methods for pre-processing and quality control (QC) of analytical data prior to statistical analysis has become increasingly important, both within a study and to allow meaningful inter-study comparisons. The nPYc-Toolbox provides software for the import, pre-processing, QC, and visualization of metabolic phenotyping datasets, either interactively or in automated pipelines. AVAILABILITY AND IMPLEMENTATION: The nPYc-Toolbox is implemented in Python and is freely available from the Python Package Index at https://pypi.org/project/nPYc/; source is available at https://github.com/phenomecentre/nPYc-Toolbox. Full documentation can be found at http://npyc-toolbox.readthedocs.io/ and exemplar datasets and tutorials at https://github.com/phenomecentre/nPYc-toolbox-tutorials.


Subject(s)
Metabolomics , Software , Documentation , Quality Control
12.
Sci Rep ; 14(1): 6173, 2024 03 14.
Article in English | MEDLINE | ID: mdl-38486010

ABSTRACT

A kidney stone is a solid formation that can lead to kidney failure, severe pain, and reduced quality of life from urinary system blockages. While medical experts can interpret kidney-ureter-bladder (KUB) X-ray images, some images pose challenges for human detection and require significant analysis time. Consequently, developing a detection system becomes crucial for accurately classifying KUB X-ray images. This article applies a transfer learning (TL) model with a pre-trained VGG16 network, empowered with explainable artificial intelligence (XAI), to establish a system that takes KUB X-ray images and accurately categorizes them as kidney stone or normal cases. The model achieves a testing accuracy of 97.41% in identifying kidney stones in KUB X-rays in the dataset used. Although the VGG16 model delivers highly accurate predictions, it lacks transparency and explainability in its decision-making process. To address this concern, this study incorporates Layer-Wise Relevance Propagation (LRP), an XAI technique, which increases the model's transparency and facilitates human comprehension of its predictions. Consequently, XAI can play an important role in assisting doctors with the accurate identification of kidney stones, thereby facilitating the execution of effective treatment strategies.
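The core idea of LRP is to redistribute a network's output relevance backward to its inputs, layer by layer. A minimal sketch of the epsilon rule for a single dense layer, on toy weights and activations (a real VGG16 explanation would chain this through every layer, with convolution-specific rules):

```python
# Minimal sketch of the Layer-Wise Relevance Propagation (LRP) epsilon rule
# for one dense layer: R_j = sum_k a_j * w[j][k] / z_k * R_k, where
# z_k = sum_j a_j * w[j][k]. Toy values only, not a VGG16 implementation.

EPS = 1e-9

def lrp_dense(a, w, relevance):
    """Redistribute output relevance to inputs through one dense layer."""
    n_in, n_out = len(w), len(w[0])
    z = [sum(a[j] * w[j][k] for j in range(n_in)) for k in range(n_out)]
    r_in = []
    for j in range(n_in):
        r = 0.0
        for k in range(n_out):
            den = z[k] + EPS * (1.0 if z[k] >= 0 else -1.0)  # epsilon stabilizer
            r += a[j] * w[j][k] * relevance[k] / den
        r_in.append(r)
    return r_in

a = [1.0, 2.0, 0.5]                        # input activations (toy)
w = [[0.3, -0.1], [0.2, 0.4], [0.1, 0.1]]  # weights, shape (3 inputs, 2 outputs)
R_out = [1.0, 0.5]                         # relevance assigned to the outputs
R_in = lrp_dense(a, w, R_out)
print(R_in, sum(R_in))  # total relevance is (approximately) conserved
```

The conservation property (input relevances summing to the output relevance, up to the epsilon term) is what makes the resulting heatmaps interpretable as a decomposition of the prediction.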


Subject(s)
Artificial Intelligence , Kidney Calculi , Humans , X-Rays , Quality of Life , Kidney Calculi/diagnostic imaging , Fluoroscopy
13.
JMIR Form Res ; 8: e49411, 2024 Mar 05.
Article in English | MEDLINE | ID: mdl-38441952

ABSTRACT

BACKGROUND: Research gaps refer to unanswered questions in the existing body of knowledge, either due to a lack of studies or inconclusive results. Research gaps are essential starting points and motivation in scientific research. Traditional methods for identifying research gaps, such as literature reviews and expert opinions, can be time consuming, labor intensive, and prone to bias. They may also fall short when dealing with rapidly evolving or time-sensitive subjects. Thus, innovative scalable approaches are needed to identify research gaps, systematically assess the literature, and prioritize areas for further study in the topic of interest. OBJECTIVE: In this paper, we propose a machine learning-based approach for identifying research gaps through the analysis of scientific literature. We used the COVID-19 pandemic as a case study. METHODS: We conducted an analysis to identify research gaps in COVID-19 literature using the COVID-19 Open Research (CORD-19) data set, which comprises 1,121,433 papers related to the COVID-19 pandemic. Our approach is based on the BERTopic topic modeling technique, which leverages transformers and class-based term frequency-inverse document frequency to create dense clusters allowing for easily interpretable topics. Our BERTopic-based approach involves 3 stages: embedding documents, clustering documents (dimension reduction and clustering), and representing topics (generating candidates and maximizing candidate relevance). RESULTS: After applying the study selection criteria, we included 33,206 abstracts in the analysis of this study. The final list of research gaps identified 21 different areas, which were grouped into 6 principal topics. These topics were "virus of COVID-19," "risk factors of COVID-19," "prevention of COVID-19," "treatment of COVID-19," "health care delivery during COVID-19," and "impact of COVID-19." The most prominent topic, observed in over half of the analyzed studies, was "impact of COVID-19." 
CONCLUSIONS: The proposed machine learning-based approach has the potential to identify research gaps in the scientific literature. This study is not intended to replace individual literature research within a selected topic. Instead, it can serve as a guide to formulate precise literature search queries in specific areas associated with research questions that previous publications have earmarked for future exploration. Future research should leverage an up-to-date list of studies retrieved from the most common databases in the target area. When feasible, full texts or, at minimum, discussion sections should be analyzed rather than limiting the analysis to abstracts. Furthermore, future studies could evaluate more efficient modeling algorithms, especially those combining topic modeling with statistical uncertainty quantification, such as conformal prediction.
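The class-based TF-IDF step that BERTopic uses to represent topics can be sketched in plain Python. This is a minimal illustration of class-based term weighting, not the authors' pipeline: the tokenization, the exact normalization, and the toy documents below are assumptions for demonstration only.

```python
import math
from collections import Counter

def ctfidf(class_docs):
    """Class-based TF-IDF: score terms per class (cluster) rather than per document.

    class_docs: dict mapping class label -> list of token lists.
    Returns dict: class label -> {term: score}.
    """
    # Concatenate every document in a class into one "class document".
    class_tf = {c: Counter(tok for doc in docs for tok in doc)
                for c, docs in class_docs.items()}
    # f_t: total frequency of each term across all classes.
    total_tf = Counter()
    for tf in class_tf.values():
        total_tf.update(tf)
    # A: average number of tokens per class.
    avg_tokens = sum(total_tf.values()) / len(class_tf)
    scores = {}
    for c, tf in class_tf.items():
        n_c = sum(tf.values())  # tokens in this class
        scores[c] = {t: (f / n_c) * math.log(1 + avg_tokens / total_tf[t])
                     for t, f in tf.items()}
    return scores

def top_terms(scores, c, k=3):
    """Highest-scoring terms for class c, i.e., its topic representation."""
    return [t for t, _ in sorted(scores[c].items(), key=lambda kv: -kv[1])[:k]]
```

Terms that are frequent inside one cluster but rare elsewhere get the highest scores, which is what makes the resulting topics easy to interpret.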

14.
J Magn Reson Imaging ; 38(1): 89-101, 2013 Jul.
Article in English | MEDLINE | ID: mdl-23238914

ABSTRACT

PURPOSE: To assess the efficacy of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI)-based textural analysis in predicting response to chemotherapy in a cohort of breast cancer patients. MATERIALS AND METHODS: In all, 100 patients were scanned on a 3.0T HDx scanner immediately prior to neoadjuvant chemotherapy treatment. A software application using texture features based on co-occurrence matrices was developed. Texture analysis was performed on precontrast and 1-5 minutes postcontrast data. Patients were categorized according to their chemotherapeutic response: partial responders, corresponding to a decrease in tumor diameter of over 50% (40), and nonresponders, corresponding to a decrease of less than 50% (4). Data were also split based on factors that influence response: triple receptor negative phenotype (TNBC) (22) vs. non-TNBC (49); node negative (45) vs. node positive (46); and biopsy grade 1 or 2 (38) vs. biopsy grade 3 (55). RESULTS: Parameters f2 (contrast), f4 (variance), f10 (difference variance), f6 (sum average), f7 (sum variance), f8 (sum entropy), f15 (cluster shade), and f16 (cluster prominence) showed significant differences between partial responders and nonresponders to chemotherapy. Differences were mainly seen at 1-3 minutes postcontrast administration. No significant differences were found before contrast administration. Node-positive, high-grade, and TNBC tumors are associated with poorer prognosis and appear to be more heterogeneous in appearance according to texture analysis. CONCLUSION: This work highlights that textural differences between groups (based on response, nodal status, and triple negative groupings) are apparent and appear to be most evident 1-3 minutes postcontrast administration. The fact that significant differences for certain texture parameters and groupings are consistently observed is encouraging.
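The co-occurrence-matrix texture features the study computes, such as Haralick f2 (contrast), can be sketched as follows. This is a minimal pure-Python illustration for a single pixel offset, not the authors' software; real analyses typically quantize gray levels and average over several offsets and directions.

```python
def glcm(image, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy).

    image: 2D list of integer gray levels in [0, levels).
    Returns a levels x levels matrix of co-occurrence probabilities.
    """
    counts = [[0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    total = 0
    for y in range(rows):
        for x in range(cols):
            ny, nx = y + dy, x + dx
            if 0 <= ny < rows and 0 <= nx < cols:
                counts[image[y][x]][image[ny][nx]] += 1
                total += 1
    return [[c / total for c in row] for row in counts]

def contrast(p):
    """Haralick f2: sum over (i, j) of p(i, j) * (i - j)**2."""
    return sum(p[i][j] * (i - j) ** 2
               for i in range(len(p)) for j in range(len(p)))
```

A flat region yields zero contrast, while a checkerboard of alternating gray levels yields the maximum for adjacent-pixel offsets, which is why contrast tracks local heterogeneity.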


Subject(s)
Antineoplastic Agents/therapeutic use , Breast Neoplasms/drug therapy , Breast Neoplasms/pathology , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/statistics & numerical data , Pattern Recognition, Automated/statistics & numerical data , Adult , Aged , Breast Neoplasms/epidemiology , Cohort Studies , Contrast Media , Female , Humans , Image Enhancement/methods , Magnetic Resonance Imaging/methods , Male , Middle Aged , Pattern Recognition, Automated/methods , Prevalence , Prognosis , Reproducibility of Results , Risk Factors , Sensitivity and Specificity , Treatment Outcome , United Kingdom
15.
Stud Health Technol Inform ; 305: 452-455, 2023 Jun 29.
Article in English | MEDLINE | ID: mdl-37387063

ABSTRACT

Depression is a prevalent mental condition that is challenging to diagnose using conventional techniques. Using machine learning and deep learning models with motor activity data, wearable AI technology has shown promise in reliably and effectively identifying or predicting depression. In this work, we aim to examine the performance of simple linear and non-linear models in the prediction of depression levels. We compared eight linear and non-linear models (Ridge, ElasticNet, Lasso, Random Forest, Gradient Boosting, Decision Trees, Support Vector Machines, and Multilayer Perceptron) for the task of predicting depression scores over time using physiological features, motor activity data, and MADRS scores. For the experimental evaluation, we used the Depresjon dataset, which contains the motor activity data of depressed and non-depressed participants. According to our findings, simple linear and non-linear models may effectively estimate depression scores for depressed people without the need for complex models. This opens the door to the development of more effective and impartial techniques for identifying depression and treating or preventing it using commonly used, widely accessible wearable technology.
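As a toy illustration of the kind of simple linear model the study evaluates, a one-feature ridge regression can be fitted in closed form and compared against a mean-predictor baseline via RMSE. This is a sketch on synthetic data, not the authors' eight-model experiment; the feature, the regularization strength, and the data are assumptions.

```python
import math

def fit_ridge_1d(xs, ys, alpha=1.0):
    """Closed-form ridge for one feature: minimizes sum (y - w*x - b)**2 + alpha*w**2."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    # Center the data so the intercept drops out of the penalized fit.
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs) + alpha
    w = num / den          # shrunk slope
    b = my - w * mx        # intercept recovered from the centering
    return w, b

def rmse(pred, true):
    """Root mean square error between predictions and reference values."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true))
```

Comparing the fitted model's RMSE against the mean-predictor baseline is the simplest sanity check that a linear model is extracting signal at all, mirroring the study's point that simple models can suffice.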


Subject(s)
Artificial Intelligence , Depression , Humans , Depression/diagnosis , India , Neural Networks, Computer , Machine Learning
16.
Stud Health Technol Inform ; 305: 283-286, 2023 Jun 29.
Article in English | MEDLINE | ID: mdl-37387018

ABSTRACT

In 2019 alone, Diabetes Mellitus affected 463 million individuals worldwide. Blood glucose levels (BGL) are often monitored via invasive techniques as part of routine protocols. Recently, AI-based approaches have shown the ability to predict BGL using data acquired by non-invasive Wearable Devices (WDs), thereby improving diabetes monitoring and treatment. It is crucial to study the relationships between non-invasive WD features and markers of glycemic health. Therefore, this study aimed to investigate the accuracy of linear and non-linear models in estimating BGL. A dataset containing digital metrics as well as diabetic status collected using traditional means was used. The data consisted of WD recordings from 13 participants, who were divided into two groups: young and adult. Our experimental design included data collection, feature engineering, model selection and development, and evaluation metric reporting. The study showed that linear and non-linear models both achieve high accuracy in estimating BGL from WD data (RMSE range: 0.181 to 0.271; MAE range: 0.093 to 0.142). We provide further evidence of the feasibility of using commercially available WDs for BGL estimation among people with diabetes when machine learning approaches are used.
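The two error metrics the study reports, RMSE and MAE, can be computed as follows. The toy predicted/reference values in the usage check are illustrative only, not drawn from the study's data.

```python
import math

def rmse(pred, true):
    """Root mean square error: penalizes large deviations quadratically."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true))

def mae(pred, true):
    """Mean absolute error: average magnitude of the prediction error."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(true)
```

Because RMSE squares the residuals, it is always at least as large as MAE on the same data, which is consistent with the ranges reported above (RMSE 0.181-0.271 vs. MAE 0.093-0.142).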


Subject(s)
Blood Glucose , Routinely Collected Health Data , Adult , Humans , Benchmarking , Data Collection , Machine Learning
17.
Stud Health Technol Inform ; 305: 291-294, 2023 Jun 29.
Article in English | MEDLINE | ID: mdl-37387020

ABSTRACT

Intermittent fasting has been practiced for centuries across many cultures globally. Many studies have recently reported on intermittent fasting for its lifestyle benefits; the major shift in eating habits and patterns is associated with several changes in hormones and circadian rhythms. Whether there are accompanying changes in stress levels is not widely reported, especially in school children. The objective of this study is to examine the impact of intermittent fasting during Ramadan on stress levels in school children as measured using wearable artificial intelligence (AI). Twenty-nine school children (aged 13-17 years; 12 male, 17 female) were given Fitbit devices, and their stress, activity, and sleep patterns were analyzed for 2 weeks before, 4 weeks during, and 2 weeks after Ramadan fasting. This study revealed no statistically significant difference in stress scores during fasting, despite changes in stress levels being observed for 12 of the participants. Our study may imply that intermittent fasting during Ramadan poses no direct risk in terms of stress, and that any observed changes may instead be linked to dietary habits. Furthermore, as stress score calculations are based on heart rate variability, this study implies that fasting does not interfere with the cardiac autonomic nervous system.
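The abstract does not name the statistical test used; one common choice for comparing the same children's stress scores before and during fasting is a paired t statistic, sketched here. The data in the usage check are made up for illustration.

```python
import math

def paired_t(before, during):
    """Paired t statistic for two measurements on the same subjects.

    Computes the mean per-subject difference divided by its standard error.
    A |t| below the critical value for n-1 degrees of freedom indicates no
    statistically significant change.
    """
    diffs = [b - d for b, d in zip(before, during)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((x - mean) ** 2 for x in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)
```

In practice the statistic would be compared against a t distribution with n-1 degrees of freedom (e.g., via `scipy.stats.ttest_rel`) to obtain the p value.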


Subject(s)
Artificial Intelligence , Intermittent Fasting , Humans , Child , Fasting , Autonomic Nervous System , Fitness Trackers
18.
NPJ Digit Med ; 6(1): 84, 2023 May 05.
Article in English | MEDLINE | ID: mdl-37147384

ABSTRACT

Given the limitations of traditional approaches, wearable artificial intelligence (AI) is one of the technologies that have been exploited to detect or predict depression. The current review aimed to examine the performance of wearable AI in detecting and predicting depression. Eight electronic databases were searched for this systematic review. Study selection, data extraction, and risk of bias assessment were carried out by two reviewers independently. The extracted results were synthesized narratively and statistically. Of the 1314 citations retrieved from the databases, 54 studies were included in this review. The pooled means of the highest accuracy, sensitivity, specificity, and root mean square error (RMSE) were 0.89, 0.87, 0.93, and 4.55, respectively. The pooled means of the lowest accuracy, sensitivity, specificity, and RMSE were 0.70, 0.61, 0.73, and 3.76, respectively. Subgroup analyses revealed statistically significant differences in the highest accuracy, lowest accuracy, highest sensitivity, highest specificity, and lowest specificity between algorithms, and statistically significant differences in the lowest sensitivity and lowest specificity between wearable devices. Wearable AI is a promising tool for depression detection and prediction, although it is in its infancy and not ready for use in clinical practice. Until further research improves its performance, wearable AI should be used in conjunction with other methods for diagnosing and predicting depression. Further studies are needed to examine the performance of wearable AI based on a combination of wearable device data and neuroimaging data and to distinguish patients with depression from those with other diseases.
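Pooled means like those reported above can, in the simplest case, be computed by averaging per-study estimates. The abstract does not state the pooling method, so the optional sample-size weighting shown here is an assumption for illustration.

```python
def pooled_mean(values, weights=None):
    """Mean of per-study estimates, optionally weighted (e.g., by sample size)."""
    if weights is None:
        weights = [1] * len(values)  # unweighted: every study counts equally
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)
```

Weighting by sample size gives larger studies more influence on the pooled estimate, which is the usual rationale for weighted pooling in meta-analysis.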

19.
Article in English | MEDLINE | ID: mdl-36743720

ABSTRACT

Background: The rates of mental health disorders such as anxiety and depression are at an all-time high, especially since the onset of COVID-19, and the need for readily available digital health care solutions has never been greater. Wearable devices have increasingly incorporated sensors that were previously reserved for hospital settings. The availability of wearable device features that address anxiety and depression is still in its infancy, but consumers will soon have the potential to self-monitor moods and behaviors using everyday commercially available devices. Objective: This study aims to explore the features of wearable devices that can be used for monitoring anxiety and depression. Methods: Six bibliographic databases, including MEDLINE, EMBASE, PsycINFO, IEEE Xplore, ACM Digital Library, and Google Scholar, were searched for this review. Two independent reviewers performed study selection and data extraction, while two other reviewers cross-checked the extracted data. A narrative approach was used to synthesize the data. Results: Of 2408 initial results, 58 studies met our inclusion criteria. Wrist-worn devices were identified in the bulk of the studies (n = 42, 71%). For the identification of anxiety and depression, we reported 26 methods for assessing mood, with the State-Trait Anxiety Inventory being the joint most common along with the Diagnostic and Statistical Manual of Mental Disorders (n = 8, 14%). Finally, 26 (46%) of the studies highlighted the smartphone as a host device for wearables. Conclusion: The emergence of affordable, consumer-grade biosensors offers the potential for new approaches to support mental health therapies for illnesses such as anxiety and depression. We believe that purposefully designed wearable devices that combine the expertise of technologists and clinical experts can play a key role in self-care monitoring and diagnosis.

20.
NPJ Digit Med ; 6(1): 122, 2023 Jul 08.
Article in English | MEDLINE | ID: mdl-37422507

ABSTRACT

Attention, the process of noticing the surrounding environment and processing information, is one of the cognitive functions that deteriorate gradually as people grow older. Games used for purposes other than entertainment, such as improving attention, are often referred to as serious games. This study examined the effectiveness of serious games for attention among elderly individuals with cognitive impairment. A systematic review and meta-analyses of randomized controlled trials were carried out. Of the 559 records retrieved, 10 trials ultimately met all eligibility criteria. A meta-analysis of very low-quality evidence from three trials indicated that serious games outperform no or passive interventions in enhancing attention in cognitively impaired older adults (P < 0.001). Additionally, findings from two other studies demonstrated that serious games are more effective than traditional cognitive training in boosting attention among cognitively impaired older adults. One study also concluded that serious games are better than traditional exercises in enhancing attention. Serious games can enhance attention in cognitively impaired older adults. However, given the low quality of the evidence, the limited number of participants in most studies, the absence of some comparative studies, and the dearth of studies included in the meta-analyses, the results remain inconclusive. Thus, until the aforementioned limitations are rectified in future research, serious games should serve as a supplement, rather than a replacement, to current interventions.
