1.
J Med Syst ; 48(1): 54, 2024 May 23.
Article in English | MEDLINE | ID: mdl-38780839

ABSTRACT

Artificial Intelligence (AI), particularly AI-generated imagery, has the potential to impact medical and patient education. This research explores the use of AI-generated imagery, from text to images, in medical education, focusing on congenital heart diseases (CHD). Utilizing ChatGPT's DALL·E 3, the research aims to assess the accuracy and educational value of AI-created images for 20 common CHDs. In this study, we used DALL·E 3 to generate a comprehensive set of 110 images, comprising ten images depicting the normal human heart and five images for each of the 20 common CHDs. The generated images were evaluated by a diverse group of 33 healthcare professionals. This cohort included cardiology experts, pediatricians, non-pediatric faculty members, trainees (medical students, interns, pediatric residents), and pediatric nurses. Using a structured framework, these professionals assessed each image for anatomical accuracy, the usefulness of in-picture text, its appeal to medical professionals, and its potential applicability in medical presentations. Each item was rated on a three-point Likert scale, yielding a total of 3,630 individual image assessments. Most AI-generated cardiac images were rated poorly: 80.8% were rated as anatomically incorrect or fabricated, 85.2% were rated as having incorrect text labels, and 78.1% were rated as not usable for medical education. Nurses and medical interns had a more positive perception of the AI-generated cardiac images than faculty members, pediatricians, and cardiology experts. Complex congenital anomalies were significantly more prone to anatomical fabrication than simple cardiac anomalies, and significant challenges were identified in image generation. Based on our findings, we recommend a vigilant approach to the use of AI-generated imagery in medical education at present, underscoring the imperative for thorough validation and the importance of collaboration across disciplines. While we advise against its immediate integration until further validation is conducted, the study advocates for future AI models to be fine-tuned with accurate medical data, enhancing their reliability and educational utility.


Subjects
Artificial Intelligence; Heart Defects, Congenital; Humans; Heart Defects, Congenital/diagnostic imaging; Heart Defects, Congenital/diagnosis
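
The evaluation described above reduces to tallying three-point Likert ratings from many raters across many images. A minimal Python sketch of that aggregation follows; the file name, column names, and criterion labels are illustrative assumptions, not the study's actual data dictionary.

```python
# Hypothetical sketch: aggregating three-point Likert ratings (1 = poor, 3 = good)
# from multiple raters over a set of generated images, as in the evaluation
# framework described above. File and column names are illustrative only.
import pandas as pd

# Each row: one rater's assessment of one image on the four criteria.
ratings = pd.read_csv("cardiac_image_ratings.csv")  # hypothetical file

criteria = ["anatomical_accuracy", "text_usefulness",
            "professional_appeal", "presentation_usability"]

# Share of assessments giving the lowest rating (1) for each criterion.
poor_share = (ratings[criteria] == 1).mean() * 100
print(poor_share.round(1))

# Mean rating per image, useful for comparing simple vs. complex anomalies.
per_image = ratings.groupby(["diagnosis", "image_id"])[criteria].mean()
print(per_image.groupby("diagnosis").mean())
```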
2.
Heliyon ; 10(7): e28962, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38623218

ABSTRACT

Artificial intelligence (AI) chatbots, such as ChatGPT, have rapidly spread across all domains of human life. They have the potential to transform the future of healthcare. However, their effective implementation hinges on healthcare workers' (HCWs) adoption and perceptions. This study aimed to evaluate HCWs' perceived usability of ChatGPT three months after its launch in Saudi Arabia using the System Usability Scale (SUS). A total of 194 HCWs participated in the survey. Forty-seven percent were satisfied with their usage, and 57% expressed moderate to high trust in its ability to generate medical decisions. Fifty-eight percent expected ChatGPT to improve patient outcomes, even though 84% were optimistic about its potential to improve the future of healthcare practice. Respondents also raised concerns, such as the recommendation of harmful medical decisions and medicolegal implications. The overall mean SUS score was 64.52, equivalent to the 50th percentile rank, indicating high-marginal acceptability of the system. The strongest positive predictors of high SUS scores were participants' belief in the benefits of AI chatbots for medical research, self-rated familiarity with ChatGPT, and self-rated computer proficiency. Participants' learnability and ease-of-use scores correlated positively but weakly. Medical students and interns had significantly higher learnability scores than other groups, while ease-of-use scores correlated very strongly with participants' perception of ChatGPT's positive impact on the future of healthcare practice. Our findings highlight HCWs' marginal acceptance of ChatGPT at the current stage and their optimism about its potential to support them in future practice, especially in the research domain, alongside more modest expectations of its ability to improve patient outcomes, particularly with regard to medical decisions. At the same time, they underscore the need for ongoing efforts to build trust and to address the ethical and legal concerns of AI in healthcare. The study contributes to the growing body of literature on AI chatbots in healthcare, particularly regarding future improvement strategies, and provides insights for policymakers and healthcare providers on the potential benefits and challenges of implementing these tools in their practice.
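
The System Usability Scale used in this study is conventionally scored from ten items rated 1-5: odd-numbered items contribute (score - 1), even-numbered items contribute (5 - score), and the sum is multiplied by 2.5 to yield a 0-100 score. A short sketch of that standard calculation; the example responses below are invented, not survey data from the study.

```python
# Standard System Usability Scale (SUS) scoring: ten items on a 1-5 scale.
def sus_score(responses):
    """responses: list of 10 integers (1-5), SUS item 1 first."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd-numbered (positively worded) items contribute (r - 1);
        # even-numbered (negatively worded) items contribute (5 - r).
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # scale the 0-40 sum to 0-100

# Example: an invented, fairly positive respondent.
print(sus_score([4, 2, 4, 2, 4, 3, 4, 2, 4, 3]))  # 70.0
```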

3.
Cureus ; 15(5): e38373, 2023 May.
Article in English | MEDLINE | ID: mdl-37265897

ABSTRACT

During the early phase of the COVID-19 pandemic, reverse transcriptase-polymerase chain reaction (RT-PCR) testing faced limitations, prompting the exploration of machine learning (ML) alternatives for diagnosis and prognosis. A comprehensive appraisal of such decision support systems and their use in COVID-19 management can help the medical community make informed decisions during patient risk assessment, especially in low-resource settings. The objective of this study was therefore to systematically review studies that used ML to predict COVID-19 diagnosis or disease severity. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, we conducted a literature search of MEDLINE (OVID), Scopus, EMBASE, and IEEE Xplore from January 1 to June 30, 2020. The outcomes were COVID-19 diagnosis or prognostic measures such as death, need for mechanical ventilation, admission, and acute respiratory distress syndrome. We included peer-reviewed observational studies, clinical trials, research letters, case series, and reports. We extracted data on each study's country, setting, sample size, data source, dataset, diagnostic or prognostic outcomes, prediction measures, type of ML model, and measures of diagnostic accuracy. Bias was assessed using the Prediction model Risk Of Bias ASsessment Tool (PROBAST). This study was registered in the International Prospective Register of Systematic Reviews (PROSPERO) under number CRD42020197109. Sixty-six records were included in the final data extraction. Forty-three (64%) studies used secondary data, and the largest share of studies came from Chinese authors (30%). Most of the literature (79%) relied on chest imaging for prediction, while the remainder used various laboratory indicators, including hematological, biochemical, and immunological markers. Thirteen studies predicted COVID-19 severity, while the rest predicted diagnosis. Seventy percent of the articles used deep learning models, and 30% used traditional ML algorithms. Most studies reported high sensitivity, specificity, and accuracy for the ML models (exceeding 90%). The overall concern about risk of bias was "unclear" in 56% of the studies, mainly due to concerns about selection bias. ML may help identify COVID-19 patients in the early phase of the pandemic, particularly in the context of chest imaging. Although these studies report high accuracy for the ML models, the novelty of the models and the biases in dataset selection make their use as a replacement for clinicians' cognitive decision-making questionable. Continued research is needed to enhance the robustness and reliability of ML systems in COVID-19 diagnosis and prognosis.
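
The review extracted measures of diagnostic accuracy such as sensitivity, specificity, and accuracy. As a point of reference, here is a toy sketch of how those measures are derived from a binary classifier's confusion matrix; the labels and predictions are invented and do not come from any reviewed study.

```python
# Toy illustration: sensitivity, specificity, and accuracy from a binary
# classifier's confusion matrix (1 = COVID-19 positive in this example).
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # invented ground-truth labels
y_pred = [1, 0, 1, 0, 0, 0, 1, 1, 1, 1]   # invented model predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)               # true positive rate
specificity = tn / (tn + fp)               # true negative rate
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} accuracy={accuracy:.2f}")
```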

4.
Cureus ; 14(9): e29791, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36340555

ABSTRACT

Background Pneumonia is a common respiratory infection that affects all ages, with higher rates anticipated as age increases. It is a disease that impacts patient health and the finances of healthcare institutions. Machine learning methods, which can recognize patterns in patient data, have therefore been used to guide clinical judgment. This study aims to develop a model predicting the risk of readmission within 30 days of discharge after management of community-acquired pneumonia (CAP). Methodology Univariate and multivariate logistic regression were used to identify the statistically significant factors associated with readmission of patients with CAP. Multiple machine learning models were used to predict readmission of CAP patients within 30 days in a retrospective observational study of patient data. The dataset was obtained from the Hospital Information System of a tertiary healthcare organization in Saudi Arabia and included all patients diagnosed with CAP from 2016 until the end of 2018. Results The collected data included 8,690 admission records related to CAP for 5,776 patients (2,965 males, 2,811 females). The analysis showed that patient age, heart rate, respiratory rate, medication count, and the number of comorbidities were significantly associated with the odds of readmission; all other variables showed no significant effect. We ran four algorithms on our data. The decision tree achieved an accuracy of 83%, while support vector machine (SVM), random forest (RF), and logistic regression each reached 90%. However, because the dataset was unbalanced, precision and recall for the readmission class were zero for all models except the decision tree (16% and 18%, respectively). Applying the Synthetic Minority Oversampling Technique (SMOTE) to balance the training dataset did not change the results significantly; the highest precision achieved was 16% with the SVM model. RF achieved the highest recall at 45%, but this gave the model no overall advantage because its accuracy dropped to 65%. Conclusions Pneumonia is an infectious disease with major health and economic consequences. We found that fewer than 10% of patients were readmitted for CAP after discharge and identified several significant predictors. However, our study did not have enough data to develop a proper machine learning model for predicting readmission risk.
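
A hedged sketch of the modelling workflow this abstract describes: a train/test split, SMOTE oversampling applied to the training set only, and one of the classifiers mentioned. The file name, feature columns, and label column are assumptions for illustration, not the study's actual variables.

```python
# Sketch of an imbalanced readmission-prediction pipeline with SMOTE.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from imblearn.over_sampling import SMOTE

df = pd.read_csv("cap_admissions.csv")          # hypothetical extract
X = df[["age", "heart_rate", "respiratory_rate",
        "medication_count", "comorbidity_count"]]
y = df["readmitted_30d"]                        # 1 = readmitted within 30 days

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

# Oversample only the training split so the test set stays representative.
X_res, y_res = SMOTE(random_state=42).fit_resample(X_train, y_train)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_res, y_res)
print(classification_report(y_test, model.predict(X_test)))
```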

5.
Cureus ; 14(8): e27630, 2022 Aug.
Article in English | MEDLINE | ID: mdl-36127978

ABSTRACT

Introduction Emergency readmissions have been a long-standing, multifaceted, unsolved problem. Developing a predictive model calibrated with hospital-specific Electronic Health Record (EHR) data could yield higher prediction accuracy and insight into patients at high risk of readmission, allowing the necessary interventions to be introduced proactively. This study aims to identify, through logistic regression, the features that are significant predictors of seven-day readmission and to develop several machine learning models to test the predictive value of those attributes using EHR data in a Saudi Arabian emergency department (ED) context. Methods Univariate and multivariate logistic regression were used to identify the most statistically significant features for classifying readmitted and non-readmitted patients. Seven machine learning models were trained and tested, and the best-performing models were compared on five performance metrics. To construct and internally validate the prediction model, the processed dataset was split into a training set (70%) and a test (validation) set (30%). Results XGBoost achieved the highest accuracy (64%) in predicting early seven-day readmissions; CatBoost was the second-best predictive model at 61%. XGBoost achieved the highest specificity at 70%, and all models had a sensitivity of 57% except XGBoost and CatBoost, at 32% and 38%, respectively. Patient age, length of stay (LOS) in minutes, visit time (AM), marital status (married), number of medications, and number of abnormal lab results were significant predictors of early seven-day readmission, while the number of vital-sign instabilities at discharge was not a statistically significant predictor. Conclusion Although XGBoost and CatBoost showed good accuracy, none of the models achieved good discriminative ability in terms of sensitivity and specificity; thus, none can be used clinically to predict early seven-day readmission. More predictive variables, specifically predictors close to the day of discharge, need to be fed into the model to optimize its performance.
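
A minimal sketch of the workflow described here, under assumed column names: a 70/30 split, an XGBoost classifier, and the accuracy, sensitivity, and specificity metrics the abstract reports. Nothing below reproduces the study's actual dataset or tuned models.

```python
# Sketch of a seven-day ED readmission classifier with XGBoost.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score
from xgboost import XGBClassifier

df = pd.read_csv("ed_visits.csv")               # hypothetical EHR extract
features = ["age", "los_minutes", "visit_am", "married",
            "medication_count", "abnormal_lab_count"]
X, y = df[features], df["readmitted_7d"]        # 1 = readmitted within 7 days

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

clf = XGBClassifier(n_estimators=300, max_depth=4)
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
print("accuracy:", round(accuracy_score(y_test, pred), 2))
print("sensitivity:", round(tp / (tp + fn), 2))
print("specificity:", round(tn / (tn + fp), 2))
```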
