Results 1 - 20 of 39
1.
J Med Internet Res ; 26: e53396, 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38967964

ABSTRACT

BACKGROUND: In the realm of in vitro fertilization (IVF), artificial intelligence (AI) models serve as invaluable tools for clinicians, offering predictive insights into ovarian stimulation outcomes. Predicting and understanding a patient's response to ovarian stimulation can help in personalizing doses of drugs, preventing adverse outcomes (eg, hyperstimulation), and improving the likelihood of successful fertilization and pregnancy. Given the pivotal role of accurate predictions in IVF procedures, it becomes important to investigate the landscape of AI models that are being used to predict the outcomes of ovarian stimulation. OBJECTIVE: The objective of this review is to comprehensively examine the literature to explore the characteristics of AI models used for predicting ovarian stimulation outcomes in the context of IVF. METHODS: A total of 6 electronic databases were searched for peer-reviewed literature published before August 2023, using the concepts of IVF and AI, along with their related terms. Records were independently screened by 2 reviewers against the eligibility criteria. The extracted data were then consolidated and presented through narrative synthesis. RESULTS: Upon reviewing 1348 articles, 30 met the predetermined inclusion criteria. The literature primarily focused on the number of oocytes retrieved as the main predicted outcome. Microscopy images stood out as the primary ground truth reference. The reviewed studies also highlighted that the most frequently adopted stimulation protocol was the gonadotropin-releasing hormone (GnRH) antagonist. In terms of using trigger medication, human chorionic gonadotropin (hCG) was the most commonly selected option. Among the machine learning techniques, the favored choice was the support vector machine. As for the validation of AI algorithms, the hold-out cross-validation method was the most prevalent. The area under the curve was highlighted as the primary evaluation metric. 
The literature exhibited a wide variation in the number of features used for AI algorithm development, ranging from 2 to 28,054 features. Data were mostly sourced from patient demographics, followed by laboratory data, specifically hormonal levels. Notably, the vast majority of studies were restricted to a single infertility clinic and exclusively relied on nonpublic data sets. CONCLUSIONS: These insights highlight an urgent need to diversify data sources and explore varied AI techniques for improved prediction accuracy and generalizability of AI models for the prediction of ovarian stimulation outcomes. Future research should prioritize multiclinic collaborations and consider leveraging public data sets, aiming for more precise AI-driven predictions that ultimately boost patient care and IVF success rates.
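Since the abstract above singles out the area under the curve (AUC) as the primary evaluation metric, a minimal sketch of the metric may help: the AUC equals the probability that a randomly chosen positive case is scored above a randomly chosen negative one (the Mann-Whitney formulation). This is an illustrative pure-Python implementation with hypothetical labels and scores, not code from any reviewed study.

```python
# Minimal illustration of the area under the ROC curve (AUC).
# AUC = probability that a random positive case receives a higher
# score than a random negative case (ties count as half).
def roc_auc(labels, scores):
    """labels: 1 = positive (e.g. good ovarian response), 0 = negative."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative case")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5  # ties contribute half a win
    return wins / (len(pos) * len(neg))

# Hypothetical model scores for 6 patients (1 = favorable outcome).
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2]
print(roc_auc(labels, scores))  # → 0.8888888888888888 (8/9)
```

An AUC of 0.5 corresponds to random scoring and 1.0 to perfect separation, which is why it is a convenient threshold-free summary for the binary prediction tasks described above.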


Subjects
Artificial Intelligence, Fertilization in Vitro, Ovulation Induction, Humans, Ovulation Induction/methods, Fertilization in Vitro/methods, Female, Pregnancy
2.
J Med Internet Res ; 26: e52622, 2024 Jan 31.
Article in English | MEDLINE | ID: mdl-38294846

ABSTRACT

BACKGROUND: Students usually encounter stress throughout their academic path. Ongoing stressors may lead to chronic stress, adversely affecting their physical and mental well-being. Thus, early detection and monitoring of stress among students are crucial. Wearable artificial intelligence (AI) has emerged as a valuable tool for this purpose. It offers an objective, noninvasive, nonobtrusive, automated approach to continuously monitor biomarkers in real time, thereby addressing the limitations of traditional approaches such as self-reported questionnaires. OBJECTIVE: This systematic review and meta-analysis aim to assess the performance of wearable AI in detecting and predicting stress among students. METHODS: Search sources in this review included 7 electronic databases (MEDLINE, Embase, PsycINFO, ACM Digital Library, Scopus, IEEE Xplore, and Google Scholar). We also checked the reference lists of the included studies and checked studies that cited the included studies. The search was conducted on June 12, 2023. This review included research articles centered on the creation or application of AI algorithms for the detection or prediction of stress among students using data from wearable devices. In total, 2 independent reviewers performed study selection, data extraction, and risk-of-bias assessment. The Quality Assessment of Diagnostic Accuracy Studies-Revised tool was adapted and used to examine the risk of bias in the included studies. Evidence synthesis was conducted using narrative and statistical techniques. RESULTS: This review included 5.8% (19/327) of the studies retrieved from the search sources. A meta-analysis of 37 accuracy estimates derived from 32% (6/19) of the studies revealed a pooled mean accuracy of 0.856 (95% CI 0.70-0.93). 
Subgroup analyses demonstrated that the accuracy of wearable AI was moderated by the number of stress classes (P=.02), type of wearable device (P=.049), location of the wearable device (P=.02), data set size (P=.009), and ground truth (P=.001). The average estimates of sensitivity, specificity, and F1-score were 0.755 (SD 0.181), 0.744 (SD 0.147), and 0.759 (SD 0.139), respectively. CONCLUSIONS: Wearable AI shows promise in detecting student stress but currently has suboptimal performance. The results of the subgroup analyses should be carefully interpreted given that many of these findings may be due to other confounding factors rather than the underlying grouping characteristics. Thus, wearable AI should be used alongside other assessments (eg, clinical questionnaires) until further evidence is available. Future research should explore the ability of wearable AI to differentiate types of stress, distinguish stress from other mental health issues, predict future occurrences of stress, consider factors such as the placement of the wearable device and the methods used to assess the ground truth, and report detailed results to facilitate the conduct of meta-analyses. TRIAL REGISTRATION: PROSPERO CRD42023435051; http://tinyurl.com/3fzb5rnp.
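The pooled mean accuracy reported above comes from a meta-analysis of per-study estimates. As a rough illustration of how such pooling works, here is a simplified fixed-effect inverse-variance combination on the logit scale (the review itself would have used a random-effects model, which additionally estimates between-study variance); the accuracies and sample sizes below are hypothetical.

```python
import math

def pool_accuracies(accs, ns):
    """Fixed-effect inverse-variance pooling of accuracy estimates on the
    logit scale. A simplified sketch: a random-effects model would add a
    between-study variance term to each weight."""
    num = den = 0.0
    for a, n in zip(accs, ns):
        logit = math.log(a / (1 - a))
        var = 1.0 / (a * (1 - a) * n)  # delta-method variance of the logit
        w = 1.0 / var                  # inverse-variance weight
        num += w * logit
        den += w
    pooled_logit = num / den
    return 1.0 / (1.0 + math.exp(-pooled_logit))  # back-transform

# Hypothetical accuracy estimates and sample sizes from 4 studies.
print(round(pool_accuracies([0.90, 0.82, 0.78, 0.88], [50, 120, 80, 60]), 3))
```

Pooling on the logit scale keeps the combined estimate inside (0, 1) and gives larger, more precise studies proportionally more influence.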


Subjects
Algorithms, Artificial Intelligence, Humans, Databases, Factual, Digital Libraries, Mental Health
3.
J Med Internet Res ; 25: e40259, 2023 03 14.
Article in English | MEDLINE | ID: mdl-36917147

ABSTRACT

BACKGROUND: In 2021 alone, diabetes mellitus, a metabolic disorder primarily characterized by abnormally high blood glucose (BG) levels, affected 537 million people globally, and over 6 million deaths were reported. The use of noninvasive technologies, such as wearable devices (WDs), to regulate and monitor BG in people with diabetes is a relatively new concept and yet in its infancy. Noninvasive WDs coupled with machine learning (ML) techniques have the potential to extract meaningful information from the gathered data and provide clinically meaningful advanced analytics for the purpose of forecasting or prediction. OBJECTIVE: The purpose of this study is to provide a systematic review, complete with a quality assessment, examining the effectiveness of using artificial intelligence (AI) in WDs for forecasting or predicting BG levels in people with diabetes. METHODS: We searched 7 of the most popular bibliographic databases. Two reviewers performed study selection and data extraction independently before cross-checking the extracted data. A narrative approach was used to synthesize the data. Quality assessment was performed using an adapted version of the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool. RESULTS: From the initial 3872 studies, the features from 12 studies were reported after filtering according to our predefined inclusion criteria. The risk of bias associated with the reference standard was classified as low in nearly all studies (n=11, 92%), as all ground truths were easily replicable. Since the data input to the AI technology was highly standardized and there was no effect of flow or time frame on the final output, both factors were categorized in a low-risk group (n=11, 92%). It was observed that classical ML approaches were deployed by half of the studies, the most popular being ensemble tree methods (random forest). The most common evaluation metric used was the Clarke error grid (n=7, 58%), followed by root mean square error (n=5, 42%). 
The wide usage of photoplethysmogram and near-infrared sensors was observed on wrist-worn devices. CONCLUSIONS: This review has provided the most extensive work to date summarizing WDs that use ML for diabetic-related BG level forecasting or prediction. Although current studies are few, this study suggests that the general quality of the studies was considered high, as revealed by the QUADAS-2 assessment tool. Further validation is needed for commercially available devices, but we envisage that WDs in general have the potential to remove the need for invasive devices completely for glucose monitoring in the not-too-distant future. TRIAL REGISTRATION: PROSPERO CRD42022303175; https://tinyurl.com/3n9jaayc.
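The two evaluation metrics named above, the Clarke error grid and root mean square error, can be sketched briefly. The Zone A test below uses a common simplified formulation of the grid's "clinically accurate" region (prediction within 20% of the reference, or both values in the hypoglycemic range below 70 mg/dL); the full grid has 5 zones with more elaborate boundaries. All values are hypothetical.

```python
import math

def rmse(reference, predicted):
    """Root mean square error between reference and predicted BG (mg/dL)."""
    return math.sqrt(sum((r - p) ** 2 for r, p in zip(reference, predicted))
                     / len(reference))

def in_clarke_zone_a(ref, pred):
    """Simplified Zone A of the Clarke error grid (clinically accurate):
    both values below 70 mg/dL, or prediction within 20% of reference."""
    return (ref < 70 and pred < 70) or abs(pred - ref) <= 0.2 * ref

ref = [90, 150, 200, 60]    # hypothetical reference BG readings
pred = [100, 140, 230, 65]  # hypothetical model predictions
print(round(rmse(ref, pred), 2))                             # → 16.77
print([in_clarke_zone_a(r, p) for r, p in zip(ref, pred)])   # → [True, True, True, True]
```

RMSE summarizes numeric error, whereas the error grid weights errors by clinical consequence, which is why the reviewed studies often report both.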


Subjects
Diabetes Mellitus, Type 1, Wearable Electronic Devices, Humans, Artificial Intelligence, Blood Glucose/metabolism, Blood Glucose Self-Monitoring/methods, Forecasting
4.
J Med Internet Res ; 25: e42672, 2023 01 19.
Article in English | MEDLINE | ID: mdl-36656625

ABSTRACT

BACKGROUND: Anxiety and depression are the most common mental disorders worldwide. Owing to the lack of psychiatrists around the world, the incorporation of artificial intelligence (AI) into wearable devices (wearable AI) has been exploited to provide mental health services. OBJECTIVE: This review aimed to explore the features of wearable AI used for anxiety and depression to identify application areas and open research issues. METHODS: We searched 8 electronic databases (MEDLINE, PsycINFO, Embase, CINAHL, IEEE Xplore, ACM Digital Library, Scopus, and Google Scholar) and included studies that met the inclusion criteria. Then, we checked the studies that cited the included studies and screened studies that were cited by the included studies. The study selection and data extraction were carried out by 2 reviewers independently. The extracted data were aggregated and summarized using narrative synthesis. RESULTS: Of the 1203 studies identified, 69 (5.74%) were included in this review. Approximately two-thirds of the studies used wearable AI for depression, whereas the remaining studies used it for anxiety. The most frequent application of wearable AI was in diagnosing anxiety and depression; however, none of the studies used it for treatment purposes. Most studies targeted individuals aged between 18 and 65 years. The most common wearable device used in the studies was the Actiwatch AW4 (Cambridge Neurotechnology Ltd). Wrist-worn devices were the most common type of wearable device in the studies. The most commonly used category of data for model development was physical activity data, followed by sleep data and heart rate data. The most frequently used data set from open sources was Depresjon. The most commonly used algorithm was random forest, followed by support vector machine. CONCLUSIONS: Wearable AI can offer great promise in providing mental health services related to anxiety and depression. 
Wearable AI can be used by individuals for the prescreening assessment of anxiety and depression. Further reviews are needed to statistically synthesize the studies' results related to the performance and effectiveness of wearable AI. Given its potential, technology companies should invest more in wearable AI for the treatment of anxiety and depression.
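Physical activity data were the most common model input above. As a hedged illustration, the sketch below derives the kind of per-window summary features (mean activity, variability, fraction of inactive epochs) often extracted from wrist-worn actigraphy such as the Depresjon recordings before a classifier like random forest is trained. The feature set and the activity window are hypothetical, not those of any particular study.

```python
import statistics

def actigraphy_features(counts):
    """Summary features commonly derived from wrist-worn activity counts:
    mean activity level, variability, and fraction of inactive epochs.
    A hypothetical minimal feature set, not any study's exact pipeline."""
    return {
        "mean_activity": statistics.mean(counts),
        "sd_activity": statistics.stdev(counts),
        "zero_fraction": counts.count(0) / len(counts),
    }

# Hypothetical minute-level activity counts for a short recording window.
window = [0, 0, 12, 45, 30, 0, 8, 60, 22, 0]
print(actigraphy_features(window))
```

Feature vectors like this, computed per window per participant, are what the reviewed models consume; the raw count stream itself is rarely fed to the classifier directly.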


Subjects
Artificial Intelligence, Depression, Humans, Adolescent, Young Adult, Adult, Middle Aged, Aged, Depression/diagnosis, Depression/therapy, Anxiety/diagnosis, Anxiety/therapy, Anxiety Disorders, Algorithms
6.
J Med Internet Res ; 25: e48754, 2023 11 08.
Article in English | MEDLINE | ID: mdl-37938883

ABSTRACT

BACKGROUND: Anxiety disorders rank among the most prevalent mental disorders worldwide. Anxiety symptoms are typically evaluated using self-assessment surveys or interview-based assessment methods conducted by clinicians, which can be subjective, time-consuming, and challenging to repeat. Therefore, there is an increasing demand for using technologies capable of providing objective and early detection of anxiety. Wearable artificial intelligence (AI), the combination of AI technology and wearable devices, has been widely used to detect and predict anxiety disorders automatically, objectively, and more efficiently. OBJECTIVE: This systematic review and meta-analysis aims to assess the performance of wearable AI in detecting and predicting anxiety. METHODS: Relevant studies were retrieved by searching 8 electronic databases and backward and forward reference list checking. In total, 2 reviewers independently carried out study selection, data extraction, and risk-of-bias assessment. The included studies were assessed for risk of bias using a modified version of the Quality Assessment of Diagnostic Accuracy Studies-Revised. Evidence was synthesized using a narrative (ie, text and tables) and statistical (ie, meta-analysis) approach as appropriate. RESULTS: Of the 918 records identified, 21 (2.3%) were included in this review. A meta-analysis of results from 81% (17/21) of the studies revealed a pooled mean accuracy of 0.82 (95% CI 0.71-0.89). Meta-analyses of results from 48% (10/21) of the studies showed a pooled mean sensitivity of 0.79 (95% CI 0.57-0.91) and a pooled mean specificity of 0.92 (95% CI 0.68-0.98). Subgroup analyses demonstrated that the performance of wearable AI was not moderated by algorithms, aims of AI, wearable devices used, status of wearable devices, data types, data sources, reference standards, and validation methods. CONCLUSIONS: Although wearable AI has the potential to detect anxiety, it is not yet advanced enough for clinical use. 
Until further evidence shows an ideal performance of wearable AI, it should be used along with other clinical assessments. Wearable device companies need to develop devices that can promptly detect anxiety and identify specific time points during the day when anxiety levels are high. Further research is needed to differentiate types of anxiety, compare the performance of different wearable devices, and investigate the impact of the combination of wearable device data and neuroimaging data on the performance of wearable AI. TRIAL REGISTRATION: PROSPERO CRD42023387560; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=387560.


Subjects
Anxiety, Artificial Intelligence, Humans, Anxiety/diagnosis, Anxiety Disorders, Algorithms, Databases, Factual
7.
J Med Internet Res ; 25: e43607, 2023 04 12.
Article in English | MEDLINE | ID: mdl-37043277

ABSTRACT

BACKGROUND: Learning disabilities are among the major cognitive impairments caused by aging. Among the interventions used to improve learning among older adults are serious games, which are participative electronic games designed for purposes other than entertainment. Although some systematic reviews have examined the effectiveness of serious games on learning, they are undermined by some limitations, such as focusing on older adults without cognitive impairments, focusing on particular types of serious games, and not considering the comparator type in the analysis. OBJECTIVE: This review aimed to evaluate the effectiveness of serious games on verbal and nonverbal learning among older adults with cognitive impairment. METHODS: Eight electronic databases were searched to retrieve studies relevant to this systematic review and meta-analysis. Furthermore, we went through the studies that cited the included studies and screened the reference lists of the included studies and relevant reviews. Two reviewers independently checked the eligibility of the identified studies, extracted data from the included studies, and appraised their risk of bias and the quality of the evidence. The results of the included studies were summarized using a narrative synthesis or meta-analysis, as appropriate. RESULTS: Of the 559 citations retrieved, 11 (2%) randomized controlled trials (RCTs) ultimately met all eligibility criteria for this review. A meta-analysis of 45% (5/11) of the RCTs revealed that serious games are effective in improving verbal learning among older adults with cognitive impairment in comparison with no or sham interventions (P=.04), and serious games do not have a different effect on verbal learning between patients with mild cognitive impairment and those with Alzheimer disease (P=.89). A meta-analysis of 18% (2/11) of the RCTs revealed that serious games are as effective as conventional exercises in promoting verbal learning (P=.98). 
We also found that serious games outperformed no or sham interventions (4/11, 36%; P=.03) and conventional cognitive training (2/11, 18%; P<.001) in enhancing nonverbal learning. CONCLUSIONS: Serious games have the potential to enhance verbal and nonverbal learning among older adults with cognitive impairment. However, our findings remain inconclusive because of the low quality of evidence, the small sample size in most of the meta-analyzed studies (6/8, 75%), and the paucity of studies included in the meta-analyses. Thus, until further convincing proof of their effectiveness is offered, serious games should be used to supplement current interventions for verbal and nonverbal learning rather than replace them entirely. Further studies are needed to compare serious games with conventional cognitive training and conventional exercises, as well as different types of serious games, different platforms, different intervention periods, and different follow-up periods. TRIAL REGISTRATION: PROSPERO CRD42022348849; https://tinyurl.com/y6yewwfa.


Subjects
Alzheimer Disease, Cognitive Dysfunction, Exergaming, Memory, Episodic, Aged, Humans, Cognitive Dysfunction/therapy, Exercise, Learning
8.
J Med Internet Res ; 24(8): e36010, 2022 08 09.
Article in English | MEDLINE | ID: mdl-35943772

ABSTRACT

BACKGROUND: The prevalence of diabetes has steadily increased over the last few decades, with 1.5 million deaths reported in 2012 alone. Traditionally, monitoring patients with diabetes has relied on largely invasive approaches. Wearable devices (WDs) make use of sensors historically reserved for hospital settings. WDs coupled with artificial intelligence (AI) algorithms show promise in helping to extract meaningful information from the gathered data and provide advanced and clinically meaningful analytics. OBJECTIVE: This review aimed to provide an overview of AI-driven WD features for diabetes and their use in monitoring diabetes-related parameters. METHODS: We searched 7 of the most popular bibliographic databases using 3 groups of search terms related to diabetes, WDs, and AI. A 2-stage process was followed for study selection: reading abstracts and titles followed by full-text screening. Two reviewers independently performed study selection and data extraction, and disagreements were resolved by consensus. A narrative approach was used to synthesize the data. RESULTS: From an initial 3872 studies, we report the features from 37 studies post filtering according to our predefined inclusion criteria. Most of the studies targeted type 1 diabetes, type 2 diabetes, or both (21/37, 57%). Many studies (15/37, 41%) reported blood glucose as their main measurement. More than half of the studies (21/37, 57%) had the aim of estimation and prediction of glucose or glucose level monitoring. Over half of the reviewed studies looked at wrist-worn devices. Only 41% of the study devices were commercially available. We observed the use of multiple sensors, with photoplethysmography sensors being the most prevalent, used in 32% (12/37) of the studies. Studies reported and compared >1 machine learning (ML) model with high levels of accuracy. Support vector machine was the most reported (13/37, 35%), followed by random forest (12/37, 32%). 
CONCLUSIONS: This review is the most extensive work, to date, summarizing WDs that use ML for people with diabetes, and provides research direction to those wanting to further contribute to this emerging field. Given the advancements in WD technologies replacing the need for invasive hospital setting devices, we see great advancement potential in this domain. Further work is needed to validate the ML approaches on clinical data from WDs and provide meaningful analytics that could serve as data gathering, monitoring, prediction, classification, and recommendation devices in the context of diabetes.


Subjects
Diabetes Mellitus, Type 1, Diabetes Mellitus, Type 2, Wearable Electronic Devices, Artificial Intelligence, Blood Glucose, Diabetes Mellitus, Type 1/therapy, Humans
9.
J Med Internet Res ; 23(11): e29749, 2021 11 19.
Article in English | MEDLINE | ID: mdl-34806996

ABSTRACT

BACKGROUND: Bipolar disorder (BD) is the 10th most common cause of frailty in young individuals and is associated with considerable morbidity and mortality worldwide. Patients with BD have a life expectancy 9 to 17 years lower than that of the general population. BD is a predominant mental disorder, but it can be misdiagnosed as depressive disorder, which leads to difficulties in treating affected patients; approximately 60% of patients with BD are treated for depression. Machine learning, however, provides advanced techniques for the better diagnosis of BD. OBJECTIVE: This review aims to explore the machine learning algorithms used for the detection and diagnosis of bipolar disorder and its subtypes. METHODS: The study protocol adopted the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) guidelines. We explored 3 databases, namely Google Scholar, ScienceDirect, and PubMed. To enhance the search, we performed backward screening of all the references of the included studies. Based on the predefined selection criteria, 2 levels of screening were performed: title and abstract review, and full review of the articles that met the inclusion criteria. Data extraction was performed independently by all investigators. To synthesize the extracted data, a narrative synthesis approach was followed. RESULTS: We retrieved 573 potential articles from the 3 databases. After preprocessing and screening, only 33 articles that met our inclusion criteria were identified. The most commonly used data belonged to the clinical category (19, 58%). We identified different machine learning models used in the selected studies, including classification models (18, 55%), regression models (5, 16%), model-based clustering methods (2, 6%), natural language processing (1, 3%), clustering algorithms (1, 3%), and deep learning-based models (3, 9%). 
Magnetic resonance imaging data were most commonly used for classifying patients with BD compared with other groups (11, 34%), whereas microarray expression data sets and genomic data were the least commonly used. The highest reported accuracy was 98%, whereas the lowest was 64%. CONCLUSIONS: This scoping review provides an overview of recent studies based on machine learning models used to diagnose patients with BD regardless of their demographics or whether they were compared with patients with other psychiatric diagnoses. Further research can be conducted to provide clinical decision support in the health industry.


Subjects
Bipolar Disorder, Algorithms, Bipolar Disorder/diagnosis, Data Management, Humans, Machine Learning, Natural Language Processing
10.
J Med Internet Res ; 23(9): e29136, 2021 09 14.
Article in English | MEDLINE | ID: mdl-34406962

ABSTRACT

BACKGROUND: Technologies have been extensively implemented to provide health care services for all types of clinical conditions during the COVID-19 pandemic. While several reviews have been conducted regarding technologies used during the COVID-19 pandemic, they were limited by focusing either on a specific technology (or features) or proposed rather than implemented technologies. OBJECTIVE: This review aims to provide an overview of technologies, as reported in the literature, implemented during the first wave of the COVID-19 pandemic. METHODS: We conducted a scoping review using PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) Extension for Scoping Reviews. Studies were retrieved by searching 8 electronic databases, checking the reference lists of included studies and relevant reviews (backward reference list checking), and checking studies that cited included studies (forward reference list checking). The search terms were chosen based on the target intervention (ie, technologies) and the target disease (ie, COVID-19). We included English publications that focused on technologies or digital tools implemented during the COVID-19 pandemic to provide health-related services regardless of target health condition, user, or setting. Two reviewers independently assessed the eligibility of studies and extracted data from eligible papers. We used a narrative approach to synthesize extracted data. RESULTS: Of 7374 retrieved papers, 126 were deemed eligible. Telemedicine was the most common type of technology (107/126, 84.9%) implemented in the first wave of the COVID-19 pandemic, and the most common mode of telemedicine was synchronous (100/108, 92.6%). The most common purpose of the technologies was providing consultation (75/126, 59.5%), followed by following up with patients (45/126, 35.7%), and monitoring their health status (22/126, 17.4%). 
Zoom (22/126, 17.5%) and WhatsApp (12/126, 9.5%) were the most commonly used videoconferencing and social media platforms, respectively. Both health care professionals and health consumers were the most common target users (103/126, 81.7%). The health condition most frequently targeted was COVID-19 (38/126, 30.2%), followed by any physical health conditions (21/126, 16.7%), and mental health conditions (13/126, 10.3%). Technologies were web-based in 84.1% of the studies (106/126). Technologies could be used through 11 modes, and the most common were mobile apps (86/126, 68.3%), desktop apps (73/126, 57.9%), telephone calls (49/126, 38.9%), and websites (45/126, 35.7%). CONCLUSIONS: Technologies played a crucial role in mitigating the challenges faced during the COVID-19 pandemic. We did not find papers describing the implementation of other technologies (eg, contact-tracing apps, drones, blockchain) during the first wave. Furthermore, technologies in this review were not used for other purposes (eg, drug and vaccine discovery, social distancing, and immunity passports). Future research on these technologies and purposes is recommended, and further reviews are required to investigate technologies implemented in subsequent waves of the pandemic.


Subjects
COVID-19, Telemedicine, Humans, Pandemics, SARS-CoV-2, Technology
11.
Bioinformatics ; 35(24): 5359-5360, 2019 12 15.
Article in English | MEDLINE | ID: mdl-31350543

ABSTRACT

SUMMARY: As large-scale metabolic phenotyping studies become increasingly common, the need for systematic methods for pre-processing and quality control (QC) of analytical data prior to statistical analysis has become increasingly important, both within a study and to allow meaningful inter-study comparisons. The nPYc-Toolbox provides software for the import, pre-processing, QC, and visualization of metabolic phenotyping datasets, either interactively or in automated pipelines. AVAILABILITY AND IMPLEMENTATION: The nPYc-Toolbox is implemented in Python and is freely available from the Python Package Index at https://pypi.org/project/nPYc/; source code is available at https://github.com/phenomecentre/nPYc-Toolbox. Full documentation can be found at http://npyc-toolbox.readthedocs.io/ and exemplar datasets and tutorials at https://github.com/phenomecentre/nPYc-toolbox-tutorials.
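As a flavor of the kind of QC step such toolboxes automate, the sketch below computes a feature's relative standard deviation (RSD) across pooled QC samples, a common pre-statistics filter in metabolic phenotyping in which features with high QC RSD are excluded. This is a generic illustration, not the nPYc-Toolbox API; the intensities and the 30% threshold are assumptions.

```python
import statistics

def rsd_percent(intensities):
    """Relative standard deviation (%) of one feature's intensity measured
    across repeated pooled QC samples; low RSD indicates analytical
    stability of that feature."""
    return 100.0 * statistics.stdev(intensities) / statistics.mean(intensities)

def passes_qc(intensities, threshold=30.0):
    """Retain a feature only if its QC RSD is below the chosen threshold
    (30% is a commonly cited cutoff, used here as an assumption)."""
    return rsd_percent(intensities) <= threshold

qc_feature = [1050, 980, 1010, 995, 1020]  # hypothetical QC intensities
print(round(rsd_percent(qc_feature), 2), passes_qc(qc_feature))  # → 2.63 True
```

Applying such a filter per feature before statistical analysis is one concrete way "pre-processing and QC prior to statistical analysis" is realized in practice.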


Subjects
Metabolomics, Software, Documentation, Quality Control
12.
Sci Rep ; 14(1): 18422, 2024 08 08.
Article in English | MEDLINE | ID: mdl-39117650

ABSTRACT

This study explores integrating blockchain technology into the Internet of Medical Things (IoMT) to address security and privacy challenges. Blockchain's transparency, confidentiality, and decentralization offer significant potential benefits in the healthcare domain. The research examines various blockchain components, layers, and protocols, highlighting their role in IoMT, and explores IoMT applications, security challenges, and methods for integrating blockchain to enhance security. Blockchain integration can be vital in securing and managing IoMT data while preserving patient privacy, and it opens up new possibilities in healthcare, medical research, and data management. The results provide a practical approach to handling the large volumes of data produced by IoMT devices, making effective use of data resource fragmentation and encryption techniques. Well-defined standards and norms are essential, especially in the healthcare sector, where upholding safety and protecting the confidentiality of information are critical; the results illustrate the importance of following standards such as HIPAA and show how blockchain technology can help ensure these criteria are met. Furthermore, the study explores the potential benefits of blockchain technology for enhancing inter-system communication in the healthcare industry while maintaining patient privacy protection. The results highlight the effectiveness of blockchain's consistency and cryptographic techniques, applied as an immutable distributed ledger, in combining identity management and healthcare data protection, safeguarding patient privacy and data integrity. In short, the paper provides important insights into how blockchain technology may transform the healthcare industry by effectively addressing significant challenges and generating legal, safe, and interoperable solutions. The intended audience comprises researchers, doctors, and graduate students.
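The tamper evidence attributed to blockchain above comes from hash chaining: each block stores the hash of its predecessor, so altering any earlier record breaks every later link. A minimal sketch, not the system proposed in the paper; the device names and record contents are hypothetical.

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 digest of a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, record):
    """Append a record (e.g. a reference to an encrypted IoMT reading)
    linked to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "prev_hash": prev, "record": record})
    return chain

def verify(chain):
    """Check every stored prev_hash against the recomputed predecessor hash."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, {"device": "pump-01", "reading_ref": "enc:abc"})
append_block(chain, {"device": "ecg-07", "reading_ref": "enc:def"})
print(verify(chain))                       # chain intact: True
chain[0]["record"]["device"] = "tampered"  # alter an earlier record
print(verify(chain))                       # later link now broken: False
```

Real deployments add consensus, signatures, and distribution across nodes; the hash link alone is what makes the ledger practically "unchangeable."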


Subjects
Blockchain, Computer Security, Confidentiality, Internet of Things, Humans, Internet
13.
Sci Rep ; 14(1): 18643, 2024 08 12.
Article in English | MEDLINE | ID: mdl-39128933

ABSTRACT

Emerging Industry 5.0 designs promote artificial intelligence services and data-driven applications across multiple sites with varying ownership, which need special data protection and privacy considerations to prevent the disclosure of private information to outsiders. Federated learning therefore offers a method for improving machine-learning models without accessing the training data held at any single manufacturing facility. In this research, we provide a self-adaptive framework for federated machine learning of healthcare intelligent systems. Our method takes into account the participating parties at various levels of healthcare ecosystem abstraction. Each hospital trains its local model internally in a self-adaptive style and transmits it to the centralized server for universal model optimization and communication cycle reduction. To represent a multi-task optimization issue, we split the dataset into as many subsets as devices. Each device selects the most advantageous subset for every local iteration of the model. On a training dataset, our initial study demonstrates the algorithm's ability to converge for various hospital and device counts. By merging a federated machine-learning approach with advanced deep machine-learning models, we can simply and accurately predict multidisciplinary cancer diseases in the human body. Furthermore, in smart healthcare Industry 5.0, the results of federated machine learning approaches are used to validate multidisciplinary cancer disease prediction. The proposed adaptive federated machine learning methodology achieved 90.0% accuracy, while the conventional federated learning approach achieved 87.30%, both of which were higher than previous state-of-the-art methodologies for cancer disease prediction in smart healthcare Industry 5.0.
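The server-side aggregation step described above, combining locally trained hospital models without seeing their raw data, is commonly implemented as federated averaging (FedAvg). A minimal sketch under the assumption that each model is a plain parameter vector; the parameter values and per-hospital sample counts are hypothetical.

```python
def fed_avg(client_weights, client_sizes):
    """Federated averaging: the server combines locally trained parameter
    vectors as a mean weighted by each client's local sample count, so the
    raw training data never leaves the hospital."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * size for w, size in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Hypothetical 2-parameter models from 3 hospitals after one local round.
local = [[0.2, 1.0], [0.4, 0.8], [0.3, 0.6]]
sizes = [100, 300, 600]
print(fed_avg(local, sizes))  # → approximately [0.32, 0.7]
```

The updated global vector is then broadcast back for the next communication round; the self-adaptive framework described above varies how each site trains locally, but the aggregation idea is the same.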


Subjects
Machine Learning , Neoplasms , Humans , Health Care Sector , Algorithms , Artificial Intelligence , Delivery of Health Care
14.
Sci Rep ; 14(1): 6173, 2024 03 14.
Article in English | MEDLINE | ID: mdl-38486010

ABSTRACT

A kidney stone is a solid formation that can lead to kidney failure, severe pain, and reduced quality of life due to urinary system blockages. While medical experts can interpret kidney-ureter-bladder (KUB) X-ray images, certain images pose challenges for human detection and require significant analysis time. Consequently, developing a detection system becomes crucial for accurately classifying KUB X-ray images. This article applies a transfer learning (TL) model with a pre-trained VGG16, empowered with explainable artificial intelligence (XAI), to establish a system that takes KUB X-ray images and accurately categorizes them as kidney stone or normal cases. The findings demonstrate that the model achieves a testing accuracy of 97.41% in identifying kidney stones or normal KUB X-rays in the dataset used. The VGG16 model delivers highly accurate predictions but lacks fairness and explainability in its decision-making process. To address this concern, this study incorporates Layer-wise Relevance Propagation (LRP), an XAI technique, to enhance the transparency and effectiveness of the model. LRP increases the model's fairness and transparency, facilitating human comprehension of its predictions. Consequently, XAI can play an important role in assisting doctors with the accurate identification of kidney stones, thereby facilitating the execution of effective treatment strategies.
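The LRP technique mentioned above redistributes a prediction's relevance backward through the network, layer by layer. A minimal sketch of the epsilon rule for a single dense layer follows; the toy layer sizes, weights, and relevance values are illustrative assumptions, not taken from the study's VGG16 model.

```python
def lrp_epsilon(activations, weights, relevance, eps=1e-9):
    # Epsilon rule of Layer-wise Relevance Propagation for one dense
    # layer: output relevance R_k is redistributed to each input j in
    # proportion to that input's contribution a_j * w[j][k] to the
    # pre-activation z_k. Total relevance is (approximately) conserved.
    n_in, n_out = len(activations), len(relevance)
    z = [sum(activations[j] * weights[j][k] for j in range(n_in)) + eps
         for k in range(n_out)]
    return [sum(activations[j] * weights[j][k] / z[k] * relevance[k]
                for k in range(n_out))
            for j in range(n_in)]

# Toy layer: 2 inputs, 2 outputs; output relevance is set equal to the
# pre-activations, as at the start of a backward LRP pass.
a = [1.0, 2.0]
W = [[0.5, 1.0], [0.25, 0.5]]
R_out = [1.0, 2.0]
R_in = lrp_epsilon(a, W, R_out)
```

Applied pixel-wise over a whole network, the same rule yields the relevance heatmaps that make a classifier's kidney-stone predictions inspectable.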


Subjects
Artificial Intelligence , Kidney Calculi , Humans , X-Rays , Quality of Life , Kidney Calculi/diagnostic imaging , Fluoroscopy
15.
JMIR Form Res ; 8: e49411, 2024 Mar 05.
Article in English | MEDLINE | ID: mdl-38441952

ABSTRACT

BACKGROUND: Research gaps refer to unanswered questions in the existing body of knowledge, arising either from a lack of studies or from inconclusive results. Research gaps are essential starting points and motivation in scientific research. Traditional methods for identifying research gaps, such as literature reviews and expert opinions, can be time-consuming, labor-intensive, and prone to bias. They may also fall short when dealing with rapidly evolving or time-sensitive subjects. Thus, innovative, scalable approaches are needed to identify research gaps, systematically assess the literature, and prioritize areas for further study in the topic of interest. OBJECTIVE: In this paper, we propose a machine learning-based approach for identifying research gaps through the analysis of scientific literature, using the COVID-19 pandemic as a case study. METHODS: We conducted an analysis to identify research gaps in the COVID-19 literature using the COVID-19 Open Research (CORD-19) data set, which comprises 1,121,433 papers related to the COVID-19 pandemic. Our approach is based on the BERTopic topic modeling technique, which leverages transformers and class-based term frequency-inverse document frequency (c-TF-IDF) to create dense clusters that allow for easily interpretable topics. Our BERTopic-based approach involves 3 stages: embedding documents, clustering documents (dimension reduction and clustering), and representing topics (generating candidates and maximizing candidate relevance). RESULTS: After applying the study selection criteria, we included 33,206 abstracts in the analysis. The final list of research gaps identified 21 different areas, which were grouped into 6 principal topics: "virus of COVID-19," "risk factors of COVID-19," "prevention of COVID-19," "treatment of COVID-19," "health care delivery during COVID-19," and "impact of COVID-19." The most prominent topic, observed in over half of the analyzed studies, was "impact of COVID-19."
CONCLUSIONS: The proposed machine learning-based approach has the potential to identify research gaps in scientific literature. This study is not intended to replace individual literature research within a selected topic; rather, it can serve as a guide to formulate precise literature search queries in specific areas associated with research questions that previous publications have earmarked for future exploration. Future research should leverage an up-to-date list of studies retrieved from the most common databases in the target area. When feasible, full texts or, at minimum, discussion sections should be analyzed rather than abstracts alone. Furthermore, future studies could evaluate more efficient modeling algorithms, especially those combining topic modeling with statistical uncertainty quantification, such as conformal prediction.
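The class-based TF-IDF step at the heart of BERTopic's topic representation can be sketched as follows. This is a simplified illustration of the idea, with toy clusters of short abstracts; it is not the BERTopic library's implementation.

```python
import math
from collections import Counter

def c_tf_idf(clusters):
    # Class-based TF-IDF: treat all documents in one cluster as a single
    # big document, then weight each term by its within-cluster frequency
    # times how rare it is across all clusters combined. High-scoring
    # terms become that cluster's topic label candidates.
    class_counts = [Counter(" ".join(docs).split()) for docs in clusters]
    avg_words = sum(sum(c.values()) for c in class_counts) / len(class_counts)
    overall = Counter()
    for c in class_counts:
        overall.update(c)
    scores = []
    for c in class_counts:
        total = sum(c.values())
        scores.append({w: (n / total) * math.log(1 + avg_words / overall[w])
                       for w, n in c.items()})
    return scores

# Two toy clusters of abstract snippets (illustrative data).
clusters = [["covid vaccine trial", "vaccine efficacy study"],
            ["hospital staffing shortage", "hospital bed capacity"]]
scores = c_tf_idf(clusters)
top = max(scores[0], key=scores[0].get)  # most topic-specific term
```

Terms shared across clusters get down-weighted, so each cluster's top-scoring words form a readable topic description.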

16.
J Magn Reson Imaging ; 38(1): 89-101, 2013 Jul.
Article in English | MEDLINE | ID: mdl-23238914

ABSTRACT

PURPOSE: To assess the efficacy of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI)-based textural analysis in predicting response to chemotherapy in a cohort of breast cancer patients. MATERIALS AND METHODS: In all, 100 patients were scanned on a 3.0T HDx scanner immediately prior to neoadjuvant chemotherapy treatment. A software application computing texture features based on co-occurrence matrices was developed. Texture analysis was performed on precontrast and 1-5 minutes postcontrast data. Patients were categorized according to their chemotherapeutic response: partial responders, corresponding to a decrease in tumor diameter of over 50% (40), and nonresponders, corresponding to a decrease of less than 50% (4). Data were also split based on factors that influence response: triple receptor negative phenotype (TNBC) (22) vs. non-TNBC (49); node negative (45) vs. node positive (46); and biopsy grade 1 or 2 (38) vs. biopsy grade 3 (55). RESULTS: Parameters f2 (contrast), f4 (variance), f10 (difference in variance), f6 (sum average), f7 (sum variance), f8 (sum entropy), f15 (cluster shade), and f16 (cluster prominence) showed significant differences between partial responders and nonresponders to chemotherapy. Differences were mainly seen at 1-3 minutes after contrast administration; no significant differences were found before contrast administration. Node-positive, high-grade, and TNBC tumors are associated with poorer prognosis and appear more heterogeneous according to texture analysis. CONCLUSION: This work highlights that textural differences between groups (based on response, nodal status, and triple negative groupings) are apparent and appear most evident 1-3 minutes after contrast administration. The fact that significant differences for certain texture parameters and groupings are consistently observed is encouraging.
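The co-occurrence-matrix features named above (f2, f4, and so on) derive from a gray-level co-occurrence matrix (GLCM). A minimal sketch of building a GLCM for one pixel offset and computing the Haralick contrast feature (f2) follows; the tiny image patches and the single horizontal offset are illustrative assumptions.

```python
from collections import Counter

def glcm(image, dx=1, dy=0):
    # Gray-level co-occurrence matrix for one offset (dx, dy): count how
    # often gray level i co-occurs with level j at that displacement,
    # then normalize the counts into probabilities.
    counts = Counter()
    rows, cols = len(image), len(image[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                counts[(image[r][c], image[r2][c2])] += 1
    total = sum(counts.values())
    return {ij: n / total for ij, n in counts.items()}

def contrast(p):
    # Haralick f2 (contrast): large when neighboring pixels differ a lot,
    # i.e. the texture is locally heterogeneous.
    return sum(((i - j) ** 2) * v for (i, j), v in p.items())

flat = [[1, 1], [1, 1]]      # homogeneous patch -> zero contrast
checker = [[0, 3], [3, 0]]   # strongly alternating patch -> high contrast
```

In the study's setting, higher values of such heterogeneity features in the tumor region track the more heterogeneous-appearing node-positive, high-grade, and TNBC groups.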


Subjects
Antineoplastic Agents/therapeutic use , Breast Neoplasms/drug therapy , Breast Neoplasms/pathology , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/statistics & numerical data , Pattern Recognition, Automated/statistics & numerical data , Adult , Aged , Breast Neoplasms/epidemiology , Cohort Studies , Contrast Media , Female , Humans , Image Enhancement/methods , Magnetic Resonance Imaging/methods , Male , Middle Aged , Pattern Recognition, Automated/methods , Prevalence , Prognosis , Reproducibility of Results , Risk Factors , Sensitivity and Specificity , Treatment Outcome , United Kingdom
17.
Stud Health Technol Inform ; 305: 452-455, 2023 Jun 29.
Article in English | MEDLINE | ID: mdl-37387063

ABSTRACT

Depression is a prevalent mental condition that is challenging to diagnose using conventional techniques. Using machine learning and deep learning models with motor activity data, wearable AI technology has shown promise in reliably and effectively identifying or predicting depression. In this work, we aim to examine the performance of simple linear and non-linear models in the prediction of depression levels. We compared eight linear and non-linear models (Ridge, ElasticNet, Lasso, Random Forest, Gradient Boosting, Decision Trees, Support Vector Machines, and Multilayer Perceptron) on the task of predicting depression scores over time using physiological features, motor activity data, and MADRS scores. For the experimental evaluation, we used the Depresjon dataset, which contains the motor activity data of depressed and non-depressed participants. According to our findings, simple linear and non-linear models may effectively estimate depression scores for depressed people without the need for complex models. This opens the door to the development of more effective and impartial techniques for identifying, treating, and preventing depression using commonly used, widely accessible wearable technology.
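Of the linear models compared above, ridge regression is the simplest to write down. A minimal sketch of its closed form for a single feature follows; the no-intercept model and the synthetic data are assumptions made for illustration, not the study's setup.

```python
def ridge_fit_1d(xs, ys, alpha):
    # Closed-form ridge regression for one feature with no intercept:
    # minimizing sum((y - w*x)^2) + alpha * w^2 gives
    #   w = sum(x*y) / (sum(x*x) + alpha).
    # alpha = 0 recovers ordinary least squares; larger alpha shrinks
    # the coefficient toward zero, trading bias for variance.
    return (sum(x * y for x, y in zip(xs, ys))
            / (sum(x * x for x in xs) + alpha))

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]                       # exactly y = 2x
w_ols = ridge_fit_1d(xs, ys, alpha=0.0)    # unregularized fit
w_ridge = ridge_fit_1d(xs, ys, alpha=1.0)  # shrunk toward zero
```

The same shrinkage idea, applied per feature, is what distinguishes Ridge, Lasso, and ElasticNet among the study's linear baselines.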


Subjects
Artificial Intelligence , Depression , Humans , Depression/diagnosis , India , Neural Networks, Computer , Machine Learning
18.
Stud Health Technol Inform ; 305: 283-286, 2023 Jun 29.
Article in English | MEDLINE | ID: mdl-37387018

ABSTRACT

In 2019 alone, diabetes mellitus affected 463 million individuals worldwide. Blood glucose levels (BGL) are often monitored via invasive techniques as part of routine protocols. Recently, AI-based approaches have shown the ability to predict BGL using data acquired by non-invasive wearable devices (WDs), thereby improving diabetes monitoring and treatment. It is crucial to study the relationships between non-invasive WD features and markers of glycemic health. Therefore, this study aimed to investigate the accuracy of linear and non-linear models in estimating BGL. We used a dataset containing digital metrics as well as diabetic status collected using traditional means. The data consisted of 13 participants' WD recordings; the participants were divided into two groups, young and adult. Our experimental design included data collection, feature engineering, ML model selection/development, and evaluation-metric reporting. The study showed that linear and non-linear models both achieve high accuracy in estimating BGL from WD data (RMSE range: 0.181 to 0.271; MAE range: 0.093 to 0.142). We provide further evidence of the feasibility of using commercially available WDs for BGL estimation among diabetics when using machine learning approaches.
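The RMSE and MAE figures quoted above can be computed as follows; the glucose readings in the example are illustrative values, not from the study's dataset.

```python
import math

def rmse(y_true, y_pred):
    # Root-mean-square error: penalizes large deviations more heavily.
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
                     / len(y_true))

def mae(y_true, y_pred):
    # Mean absolute error: average size of the prediction error.
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

bgl_true = [5.1, 6.0, 7.2, 5.8]  # illustrative glucose readings
bgl_pred = [5.0, 6.3, 7.0, 5.9]  # a model's predictions for them
```

RMSE is always at least as large as MAE on the same data, which is why the study reports the two ranges separately.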


Subjects
Blood Glucose , Routinely Collected Health Data , Adult , Humans , Benchmarking , Data Collection , Machine Learning
19.
Stud Health Technol Inform ; 305: 291-294, 2023 Jun 29.
Article in English | MEDLINE | ID: mdl-37387020

ABSTRACT

Intermittent fasting has been practiced for centuries across many cultures globally. Recently, many studies have reported on intermittent fasting for its lifestyle benefits; the major shift in eating habits and patterns is associated with several changes in hormones and circadian rhythms. Whether there are accompanying changes in stress levels is not widely reported, especially in school children. The objective of this study is to examine the impact of intermittent fasting during Ramadan on stress levels in school children as measured using wearable artificial intelligence (AI). Twenty-nine school children (aged 13-17 years; 12 male, 17 female) were given Fitbit devices, and their stress, activity, and sleep patterns were analyzed for 2 weeks before, 4 weeks during, and 2 weeks after Ramadan fasting. This study revealed no statistically significant difference in stress scores during fasting, despite changes in stress levels being observed for 12 of the participants. Our study may imply that intermittent fasting during Ramadan poses no direct risks in terms of stress; any observed changes may rather be linked to dietary habits. Furthermore, as stress score calculations are based on heart rate variability, this study implies fasting does not interfere with the cardiac autonomic nervous system.


Subjects
Artificial Intelligence , Intermittent Fasting , Humans , Child , Fasting , Autonomic Nervous System , Fitness Trackers
20.
NPJ Digit Med ; 6(1): 84, 2023 May 05.
Article in English | MEDLINE | ID: mdl-37147384

ABSTRACT

Given the limitations of traditional approaches, wearable artificial intelligence (AI) is one of the technologies that have been exploited to detect or predict depression. The current review aimed to examine the performance of wearable AI in detecting and predicting depression. The search sources in this systematic review were 8 electronic databases. Study selection, data extraction, and risk-of-bias assessment were carried out by two reviewers independently. The extracted results were synthesized narratively and statistically. Of the 1314 citations retrieved from the databases, 54 studies were included in this review. The pooled means of the highest accuracy, sensitivity, specificity, and root mean square error (RMSE) were 0.89, 0.87, 0.93, and 4.55, respectively. The pooled means of the lowest accuracy, sensitivity, specificity, and RMSE were 0.70, 0.61, 0.73, and 3.76, respectively. Subgroup analyses revealed statistically significant differences in the highest accuracy, lowest accuracy, highest sensitivity, highest specificity, and lowest specificity between algorithms, and statistically significant differences in the lowest sensitivity and lowest specificity between wearable devices. Wearable AI is a promising tool for depression detection and prediction, although it is in its infancy and not ready for use in clinical practice. Until further research improves its performance, wearable AI should be used in conjunction with other methods for diagnosing and predicting depression. Further studies are needed to examine the performance of wearable AI based on a combination of wearable device data and neuroimaging data, and to distinguish patients with depression from those with other diseases.
