Results 1 - 20 of 37
1.
J Med Internet Res ; 26: e56764, 2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38662419

ABSTRACT

As the health care industry increasingly embraces large language models (LLMs), understanding the consequences of this integration becomes crucial for maximizing benefits while mitigating potential pitfalls. This paper explores the evolving relationship among clinician trust in LLMs, the transition of data sources from predominantly human-generated to artificial intelligence (AI)-generated content, and the subsequent impact on the performance of LLMs and clinician competence. One of the primary concerns identified in this paper is the LLMs' self-referential learning loops, in which AI-generated content feeds back into the learning algorithms, threatening the diversity of the data pool, potentially entrenching biases, and reducing the efficacy of LLMs. While theoretical at this stage, this feedback loop poses a significant challenge as the integration of LLMs into health care deepens, emphasizing the need for proactive dialogue and strategic measures to ensure the safe and effective use of LLM technology. Another key takeaway from our investigation is the role of user expertise and the necessity of a discerning approach to trusting and validating LLM outputs. The paper highlights how expert users, particularly clinicians, can leverage LLMs to enhance productivity by off-loading routine tasks while maintaining critical oversight to identify and correct potential inaccuracies in AI-generated content. This balance of trust and skepticism is vital for ensuring that LLMs augment rather than undermine the quality of patient care. We also discuss the risks associated with the deskilling of health care professionals: frequent reliance on LLMs for critical tasks could result in a decline in health care providers' diagnostic and reasoning skills, particularly affecting the training and development of future professionals. The legal and ethical considerations surrounding the deployment of LLMs in health care are also examined. We discuss the medicolegal challenges, including liability in cases of erroneous diagnoses or treatment advice generated by LLMs. The paper references recent legislative efforts, such as the Algorithmic Accountability Act of 2023, as crucial steps toward establishing a framework for the ethical and responsible use of AI-based technologies in health care. In conclusion, this paper advocates for a strategic approach to integrating LLMs into health care. By emphasizing the importance of maintaining clinician expertise, fostering critical engagement with LLM outputs, and navigating the legal and ethical landscape, we can ensure that LLMs serve as valuable tools in enhancing patient care and supporting health care professionals. This approach addresses the immediate challenges posed by integrating LLMs and sets a foundation for their sustainable and responsible use in the future.
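The self-referential learning loop flagged above can be illustrated with a toy simulation (not from the paper; all values are illustrative): a model repeatedly estimates the distribution of its training pool and then replaces that pool with its own synthetic output, and the diversity of the pool tends to drift downward across generations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Start from a diverse "human-generated" data pool.
data = rng.normal(loc=0.0, scale=1.0, size=500)

for generation in range(10):
    # "Train" a toy model: estimate the current pool's distribution.
    mu, sigma = data.mean(), data.std()
    print(f"generation {generation}: diversity (std) = {sigma:.3f}")
    # Next generation's pool is dominated by AI-generated content:
    # a finite sample drawn from the model's own estimate.
    data = rng.normal(loc=mu, scale=sigma, size=500)
```

Because each generation resamples from an estimate rather than from the original source, estimation error compounds and the pool's variance tends to shrink over repeated generations, which is the mechanism the abstract identifies as a risk to data diversity.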


Subject(s)
Artificial Intelligence , Health Personnel , Trust , Humans , Health Personnel/psychology , Language , Learning
2.
ACS Biomater Sci Eng ; 10(5): 2636-2658, 2024 May 13.
Article in English | MEDLINE | ID: mdl-38606473

ABSTRACT

Nanosized mesoporous silica has emerged as a promising and flexible platform for delivering siRNA for cancer treatment. This ordered mesoporous nanosized silica offers well-defined and tunable porosity and structure, high payload capacity, and multiple functionalization options for targeted delivery and increased biocompatibility compared with other polymeric nanocarriers. Moreover, it also overcomes the limitations associated with traditional drug administration. A chemically modified porous silica matrix efficiently entraps siRNA molecules and prevents their enzymatic degradation and premature release. This Review discusses the synthesis of silica using the sol-gel approach and the advantages of different silica mesostructures. The factors affecting silica synthesis at the nanometer scale, including shape, porosity, and nanoparticle surface modification, are also highlighted as routes to the desired nanostructured silica carriers. Additional emphasis is given to chemically modified silica for delivering siRNA, where the nanoparticle surface is modified with different chemical moieties, such as amine modification with (3-aminopropyl)triethoxysilane, polyethylenimine, chitosan, poly(ethylene glycol), and cyclodextrin polymers, to attain high therapeutic loading, improved dispersibility, and biocompatibility. Upon systemic administration, ordered mesoporous nanosized silica encounters blood cells, immune cells, and organs, mainly those of the reticuloendothelial system (RES). The biocompatibility and biodistribution of silica-based nanocarriers are therefore discussed to inform design principles for smart and efficacious nanostructured silica-siRNA carriers, along with their clinical trial status. This Review further outlines the future scope and challenges of developing silica nanomaterials as siRNA delivery vehicles seeking FDA approval.


Subject(s)
Neoplasms , RNA, Small Interfering , Silicon Dioxide , Silicon Dioxide/chemistry , RNA, Small Interfering/therapeutic use , RNA, Small Interfering/administration & dosage , RNA, Small Interfering/chemistry , Humans , Neoplasms/therapy , Neoplasms/drug therapy , Neoplasms/genetics , Porosity , Nanoparticles/chemistry , Nanoparticles/therapeutic use , Animals , Drug Carriers/chemistry
3.
Appl Ergon ; 118: 104280, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38560964

ABSTRACT

The coronavirus pandemic shocked the already overwhelmed global healthcare system, challenging its preparedness to deal with mass fatalities. Our research examines the safety issues faced by healthcare workers when handling deceased bodies, highlighting the need for better strategies in the event of mass fatalities. Healthcare providers involved in handling deceased bodies during the COVID-19 pandemic in the U.S. were eligible to participate in our study. Using a web-based survey, we analyzed responses from 206 participants across 43 U.S. states. We used the Systems Engineering Initiative for Patient Safety (SEIPS) framework to derive themes from participants' open-ended responses. The study showed how routine tasks become extraordinarily challenging during a pandemic due to increased workload, emotional stress, and resource constraints. Tasks such as lifting and transferring bodies underscored the physical and emotional toll on workers. The mental strain induced by mass fatalities and the complexities of communicating with families and peers were also prominent, adding to the overall burden on healthcare workers. Participants emphasized the importance of specialized training, policy refinements, and improvements in policy implementation. In conclusion, our study contributes to understanding the complexities of handling deceased bodies during a pandemic. It underscores the need for emergency response planning and systemic changes in healthcare policies and practices to ensure the safety and well-being of healthcare workers engaged in these critical tasks.


Subject(s)
COVID-19 , Health Personnel , Humans , COVID-19/epidemiology , COVID-19/psychology , Cross-Sectional Studies , Health Personnel/psychology , Male , Female , Adult , United States/epidemiology , Middle Aged , Moving and Lifting Patients , SARS-CoV-2 , Surveys and Questionnaires , Workload/psychology , Pandemics
5.
Front Digit Health ; 6: 1334266, 2024.
Article in English | MEDLINE | ID: mdl-38482048

ABSTRACT

[This corrects the article DOI: 10.3389/fdgth.2022.966174.].

6.
PLoS One ; 19(3): e0296151, 2024.
Article in English | MEDLINE | ID: mdl-38457373

ABSTRACT

As ChatGPT emerges as a potential ally in healthcare decision-making, it is imperative to investigate how users leverage and perceive it. The repurposing of technology is innovative but brings risks, especially since AI's effectiveness depends on the data it is fed. In healthcare, ChatGPT might provide sound advice based on current medical knowledge, which could turn into misinformation if its data sources later include erroneous information. Our study assesses user perceptions of ChatGPT, particularly among those who used ChatGPT for healthcare-related queries. By examining factors such as the competence, reliability, transparency, trustworthiness, security, and persuasiveness of ChatGPT, the research aimed to understand how users rely on ChatGPT for health-related decision-making. A web-based survey was distributed to U.S. adults using ChatGPT at least once a month. Bayesian linear regression was used to understand how much ChatGPT aids informed decision-making. This analysis was conducted on subsets of respondents: both those who used ChatGPT for healthcare decisions and those who did not. Qualitative data from open-ended questions were analyzed using content analysis, with thematic coding to extract users' opinions. Six hundred and seven individuals responded to the survey. Respondents were distributed across 306 US cities, of whom 20 were from rural areas. Of all the respondents, 44 used ChatGPT for health-related queries and decision-making. In the healthcare context, the most effective model highlights 'Competent + Trustworthy + ChatGPT for healthcare queries', underscoring the critical importance of perceived competence and trustworthiness specifically in the realm of healthcare applications of ChatGPT. The non-healthcare context, by contrast, reveals a broader spectrum of influential factors in its best model, which includes 'Trustworthy + Secure + Benefits outweigh risks + Satisfaction + Willing to take decisions + Intent to use + Persuasive'. In conclusion, our findings suggest a clear demarcation in user expectations and requirements from AI systems based on the context of their use. We advocate for a balanced approach in which technological advancement and user readiness are harmonized.
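As a rough illustration of the Bayesian linear regression described here (a sketch only: the column names, Likert coding, and data below are hypothetical, not the study's instrument or dataset), scikit-learn's BayesianRidge can regress an informed-decision-making score on perception ratings:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(0)

# Hypothetical 1-5 Likert ratings for 200 simulated respondents.
df = pd.DataFrame({
    "competent": rng.integers(1, 6, 200),
    "trustworthy": rng.integers(1, 6, 200),
    "secure": rng.integers(1, 6, 200),
    "informed_decision": rng.integers(1, 6, 200),
})

X = df[["competent", "trustworthy", "secure"]]
y = df["informed_decision"]

model = BayesianRidge().fit(X, y)
pred, std = model.predict(X, return_std=True)  # posterior predictive mean and std
print(dict(zip(X.columns, model.coef_.round(3))))
```

Refitting the same model separately on the healthcare and non-healthcare subsets and comparing which predictors remain influential would mirror the subset comparison the abstract reports.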


Subject(s)
Decision Making , Technology , Adult , Humans , Bayes Theorem , Cross-Sectional Studies , Reproducibility of Results
7.
Sci Rep ; 13(1): 18236, 2023 10 25.
Article in English | MEDLINE | ID: mdl-37880295

ABSTRACT

Studies have shown a heightened prevalence of depression and suicidal ideation among patients with Gastrointestinal Cancer (GIC). GIC patients are at a 1.5- to threefold increased risk of suicide and depression compared to other cancer patients. This study investigates the interplay of internet use, family burden, and emotional support on mental health (depression) and suicidal ideation among patients with GIC. The study involved 202 respondents, of whom 78 were undergoing GIC treatment at the time of the study. Using structural equation modeling, our findings indicate a substantial negative correlation between mental health and suicidal ideation. Overall, suicidal ideation (median score) was noticeably lower in patients who had completed their treatment, although some individuals reported exceptionally high suicidal ideation even after completing treatment. Notably, the influence of emotional support on mental health was significantly stronger among participants who had completed their treatment than among those still undergoing GIC treatment. Age was found to significantly moderate the mental health-suicidal ideation link. Internet usage for health-related information was also inversely correlated with mental health (directly) and suicidal ideation (indirectly). Family burden emerged as a significant negative influence on mental health, while emotional support positively impacted mental health. The findings of this study contribute to a deeper understanding of suicide risk factors in GIC patients, potentially shaping more effective preventive strategies.
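The moderation finding (age altering the mental health-suicidal ideation link) is conventionally tested with an interaction term. The study used structural equation modeling; the regression below is a simpler stand-in on simulated data, with variable names of my own choosing:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 202  # matches the reported sample size; the data are simulated

df = pd.DataFrame({
    "mental_health": rng.normal(size=n),
    "age": rng.integers(25, 80, n).astype(float),
})
df["suicidal_ideation"] = -0.5 * df["mental_health"] + rng.normal(size=n)

# A significant mental_health:age coefficient indicates that age
# moderates the mental health -> suicidal ideation relationship.
fit = smf.ols("suicidal_ideation ~ mental_health * age", data=df).fit()
print(fit.summary().tables[1])
```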


Subject(s)
Gastrointestinal Neoplasms , Suicide , Humans , Suicidal Ideation , Depression , Suicide/psychology , Mental Health , Gastrointestinal Neoplasms/complications , Risk Factors
8.
Nutrients ; 15(17)2023 Aug 24.
Article in English | MEDLINE | ID: mdl-37686731

ABSTRACT

According to the National Family Health Survey of 2021, about 57% of women aged 15-49 in India currently suffer from anemia, marking a significant increase from the 53% recorded in 2016. Similarly, a study conducted in southern India reported a 32.60% prevalence of preeclampsia. Several community-based initiatives have been launched in India to address these public health challenges. However, these interventions have yet to achieve the desired results. Could the challenges faced by traditional healthcare interventions be overcome through a technological leap? This study assesses pregnant mothers' perceptions regarding mobile health interventions for managing anemia and preeclampsia. Additionally, the study captures their health awareness and knowledge. We surveyed 131 pregnant mothers in three underserved villages in Jharkhand, India. Statistical analysis was conducted using the SEMinR package in R (Version 2023.06.0), applying nonparametric partial least squares structural equation modeling. We found that every household had at least one smartphone, with the respondents being the primary users. The main uses of smartphones were calling, messaging, and social media. A total of 61% of respondents showed interest in a nutrition and pregnancy app, while 23.66% were uncertain. Regarding nutritional knowledge during pregnancy, 68.7% reported having some knowledge, but only 11.45% claimed comprehensive knowledge. There was a considerable knowledge gap regarding the critical nutrients needed during pregnancy and the foods recommended for a healthy pregnancy diet. Awareness of pregnancy-related conditions such as anemia and preeclampsia was low, with most respondents unsure of these conditions' primary causes, impacts, and symptoms. This study serves as a critical step toward leveraging technology to enhance public health outcomes in low-resource settings. With the accessibility of mobile devices and an apparent willingness to use mHealth apps, compounded by the pressing need for improved maternal health, the impetus for action is indisputable. It is incumbent upon us to seize this opportunity, ensuring that the potential of technology is fully realized and not squandered, thus circumventing the risk of a burgeoning digital divide.


Subject(s)
Anemia , Pre-Eclampsia , Telemedicine , Pregnancy , Humans , Female , Pregnant Women , Pre-Eclampsia/epidemiology , Pre-Eclampsia/prevention & control , Cross-Sectional Studies , Anemia/epidemiology , Anemia/prevention & control
9.
PLoS One ; 18(9): e0291064, 2023.
Article in English | MEDLINE | ID: mdl-37656716

ABSTRACT

This study investigates the complex interrelationships between peer support, mental distress, self-care abilities, health perceptions, and daily life activities among cancer patients and survivors while considering the evolving nature of these experiences over time. A cross-sectional survey design is employed, utilizing de-identified data from the National Cancer Institute's 2022 nationally representative dataset, which comprises responses from 1234 participants, including 134 newly diagnosed patients undergoing cancer treatment. Partial least squares structural equation modeling is used for data analysis. The results reveal that peer support significantly reduces mental distress and positively influences the perception of self-care abilities and health perceptions among cancer patients and survivors. Additionally, the study finds that mental distress negatively affects daily life activities and self-care abilities. This means that when cancer patients and survivors experience high levels of mental distress, they may struggle with everyday tasks and find it challenging to care for themselves effectively. The research also shows that, as time since diagnosis passes, mental distress tends to decrease and health perceptions improve, highlighting the resilience of cancer patients and survivors. Furthermore, the study uncovers significant moderating effects of age, education, and income on the relationships between daily life activity difficulties, perception of self-care ability, and perception of health. In conclusion, this research provides a comprehensive understanding of the intricate associations between the variables of interest among cancer patients and survivors. The findings underscore the importance of peer support and targeted interventions for promoting well-being, resilience, and quality of life in this population, offering valuable insights for healthcare providers, researchers, and policymakers. The identified moderating effects further emphasize the need to consider individual differences when designing and implementing support systems and interventions tailored to the unique needs of cancer patients and survivors.


Subject(s)
Neoplasms , Self Care , Humans , Cross-Sectional Studies , Quality of Life , Neoplasms/therapy , Survivors , Perception
10.
Healthcare (Basel) ; 11(16)2023 Aug 16.
Article in English | MEDLINE | ID: mdl-37628506

ABSTRACT

Artificial intelligence (AI) offers the potential to revolutionize healthcare, from improving diagnoses to enhancing patient safety. However, many healthcare practitioners are hesitant to adopt AI technologies fully. To understand why, this research explored clinicians' views on AI, especially their level of trust, their concerns about potential risks, and how they believe AI might affect their day-to-day workload. We surveyed 265 healthcare professionals from various specialties in the U.S. The survey aimed to understand their perceptions and any concerns they might have about AI in their clinical practice. We further examined how these perceptions might align with three hypothetical approaches to integrating AI into healthcare: no integration, sequential (step-by-step) integration, and parallel (side-by-side with current practices) integration. The results reveal that clinicians who view AI as a workload reducer are more inclined to trust it and are more likely to use it in clinical decision making. However, those perceiving higher risks with AI are less inclined to adopt it in decision making. While the role of clinical experience was found to be statistically insignificant in influencing trust in AI and AI-driven decision making, further research might explore other potential moderating variables, such as technical aptitude, previous exposure to AI, or the specific medical specialty of the clinician. By evaluating three hypothetical scenarios of AI integration in healthcare, our study elucidates the potential pitfalls of sequential AI integration and the comparative advantages of parallel integration. In conclusion, this study underscores the necessity of strategic AI integration into healthcare. AI should be perceived as a supportive tool rather than an intrusive entity, augmenting clinicians' skills and facilitating their workflow rather than disrupting it. As we move towards an increasingly digitized future in healthcare, comprehending the interplay among AI technology, clinician perception, trust, and decision making is fundamental.

11.
J Med Internet Res ; 25: e47184, 2023 06 14.
Article in English | MEDLINE | ID: mdl-37314848

ABSTRACT

BACKGROUND: ChatGPT (Chat Generative Pre-trained Transformer) has gained popularity for its ability to generate human-like responses. It is essential to note that overreliance or blind trust in ChatGPT, especially in high-stakes decision-making contexts, can have severe consequences. Similarly, lacking trust in the technology can lead to underuse, resulting in missed opportunities. OBJECTIVE: This study investigated the impact of users' trust in ChatGPT on their intent to use and actual use of the technology. Four hypotheses were tested: (1) users' intent to use ChatGPT increases with their trust in the technology; (2) the actual use of ChatGPT increases with users' intent to use the technology; (3) the actual use of ChatGPT increases with users' trust in the technology; and (4) users' intent to use ChatGPT partially mediates the effect of trust in the technology on its actual use. METHODS: This study distributed a web-based survey between February 2023 and March 2023 to adults in the United States who actively used ChatGPT (version 3.5) at least once a month. The survey responses were used to develop 2 latent constructs: Trust and Intent to Use, with Actual Use being the outcome variable. The study used partial least squares structural equation modeling to evaluate and test the structural model and hypotheses. RESULTS: In the study, 607 respondents completed the survey. The primary uses of ChatGPT were information gathering (n=219, 36.1%), entertainment (n=203, 33.4%), and problem-solving (n=135, 22.2%), with smaller numbers using it for health-related queries (n=44, 7.2%) and other activities (n=6, 1%). Our model explained 50.5% and 9.8% of the variance in Intent to Use and Actual Use, respectively, with path coefficients of 0.711 and 0.221 for Trust on Intent to Use and Actual Use, respectively. The bootstrapped results rejected all 4 null hypotheses, with Trust having a significant direct effect on both Intent to Use (ß=0.711, 95% CI 0.656-0.764) and Actual Use (ß=0.302, 95% CI 0.229-0.374). The indirect effect of Trust on Actual Use, partially mediated by Intent to Use, was also significant (ß=0.113, 95% CI 0.001-0.227). CONCLUSIONS: Our results suggest that trust is critical to users' adoption of ChatGPT. It remains crucial to highlight that ChatGPT was not initially designed for health care applications. Therefore, an overreliance on it for health-related advice could potentially lead to misinformation and subsequent health risks. Efforts must be focused on improving ChatGPT's ability to distinguish between queries that it can safely handle and those that should be redirected to human experts (health care professionals). Although risks are associated with excessive trust in artificial intelligence-driven chatbots such as ChatGPT, the potential risks can be reduced by advocating for shared accountability and fostering collaboration between developers, subject matter experts, and human factors researchers.
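The bootstrapped indirect effect (Trust -> Intent to Use -> Actual Use) can be estimated with percentile confidence intervals as below. This is a minimal sketch on simulated data, not the study's PLS-SEM code; the path strengths only loosely echo the reported coefficients:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 607  # matches the reported sample size; the data are simulated

trust = rng.normal(size=n)
intent = 0.7 * trust + rng.normal(scale=0.7, size=n)
use = 0.3 * trust + 0.2 * intent + rng.normal(scale=0.9, size=n)

def indirect_effect(t, i, u):
    a = np.polyfit(t, i, 1)[0]  # path a: Trust -> Intent to Use
    X = np.column_stack([np.ones_like(t), t, i])
    b = np.linalg.lstsq(X, u, rcond=None)[0][2]  # path b: Intent -> Use, controlling for Trust
    return a * b

boots = [
    indirect_effect(trust[idx], intent[idx], use[idx])
    for idx in (rng.integers(0, n, n) for _ in range(2000))
]
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"bootstrapped indirect effect, 95% CI: [{lo:.3f}, {hi:.3f}]")
```

A confidence interval excluding zero is what supports the partial-mediation hypothesis.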


Subject(s)
Artificial Intelligence , Trust , Adult , Humans , Health Personnel , Intention , Surveys and Questionnaires
12.
JMIR Hum Factors ; 10: e47564, 2023 May 17.
Article in English | MEDLINE | ID: mdl-37195756

ABSTRACT

BACKGROUND: With the rapid advancement of artificial intelligence (AI) technologies, AI-powered chatbots, such as Chat Generative Pretrained Transformer (ChatGPT), have emerged as potential tools for various applications, including health care. However, ChatGPT is not specifically designed for health care purposes, and its use for self-diagnosis raises concerns regarding the potential risks and benefits of its adoption. Users are increasingly inclined to use ChatGPT for self-diagnosis, necessitating a deeper understanding of the factors driving this trend. OBJECTIVE: This study aims to investigate the factors influencing users' perception of decision-making processes and intentions to use ChatGPT for self-diagnosis and to explore the implications of these findings for the safe and effective integration of AI chatbots in health care. METHODS: A cross-sectional survey design was used, and data were collected from 607 participants. The relationships between performance expectancy, risk-reward appraisal, decision-making, and intention to use ChatGPT for self-diagnosis were analyzed using partial least squares structural equation modeling (PLS-SEM). RESULTS: Most respondents were willing to use ChatGPT for self-diagnosis (n=476, 78.4%). The model demonstrated satisfactory explanatory power, accounting for 52.4% of the variance in decision-making and 38.1% in the intent to use ChatGPT for self-diagnosis. The results supported all 3 hypotheses: higher performance expectancy of ChatGPT (ß=.547, 95% CI 0.474-0.620) and positive risk-reward appraisals (ß=.245, 95% CI 0.161-0.325) were positively associated with improved perception of decision-making outcomes among users, and an enhanced perception of decision-making processes involving ChatGPT positively impacted users' intentions to use the technology for self-diagnosis (ß=.565, 95% CI 0.498-0.628). CONCLUSIONS: Our research investigated factors influencing users' intentions to use ChatGPT for self-diagnosis and health-related purposes. Even though the technology is not specifically designed for health care, people are inclined to use ChatGPT in health care contexts. Instead of solely focusing on discouraging its use for health care purposes, we advocate for improving the technology and adapting it for suitable health care applications. Our study highlights the importance of collaboration among AI developers, health care providers, and policy makers in ensuring AI chatbots' safe and responsible use in health care. By understanding users' expectations and decision-making processes, we can develop AI chatbots, such as ChatGPT, that are tailored to human needs, providing reliable and verified health information sources. This approach not only enhances health care accessibility but also improves health literacy and awareness. As the field of AI chatbots in health care continues to evolve, future research should explore the long-term effects of using AI chatbots for self-diagnosis and investigate their potential integration with other digital health interventions to optimize patient care and outcomes. In doing so, we can ensure that AI chatbots, including ChatGPT, are designed and implemented to safeguard users' well-being and support positive health outcomes in health care settings.

13.
Interact J Med Res ; 12: e45382, 2023 Apr 07.
Article in English | MEDLINE | ID: mdl-37027201

ABSTRACT

BACKGROUND: Cancer is perceived as a life-threatening, fear-inducing, and stigmatized disease. Most patients with cancer and cancer survivors commonly experience social isolation, negative self-perception, and psychological distress. The heavy toll that cancer takes on patients continues even after treatment. It is common for many patients with cancer to feel uncertain about their future. Some undergo anxiety, loneliness, and fear of getting cancer again. OBJECTIVE: This study examined the impact of social isolation, self-perception, and physician-patient communication on the mental health of patients with cancer and cancer survivors. The study also explored the impact of social isolation and physician-patient communication on self-perception. METHODS: This retrospective study used restricted data from the 2021 Health Information National Trends Survey (HINTS), which collected data from January 11, 2021, to August 20, 2021. We used the partial least squares structural equation modeling (PLS-SEM) method for data analysis. We checked for quadratic effects among all the paths connecting social isolation, poor physician-patient communication, mental health (measured using the 4-item Patient Health Questionnaire [PHQ-4]), and negative self-perception. The model was controlled for confounding factors such as respondents' annual income, education level, and age. Bias-corrected and accelerated (BCA) bootstrap methods were used to estimate nonparametric CIs. Statistical significance was tested at the 95% CI level (2-tailed). We also conducted a multigroup analysis in which we created 2 groups. Group A consisted of newly diagnosed patients with cancer who were undergoing cancer treatment during the survey or had received cancer treatment within the last 12 months (receipt of cancer treatment during the COVID-19 pandemic). Group B consisted of respondents who had received cancer treatment between 5 and 10 years previously (receipt of cancer treatment before the COVID-19 pandemic). RESULTS: The analysis indicated that social isolation had a quadratic effect on mental health, with higher levels of social isolation associated with worse mental health outcomes up to a certain point. Self-perception positively affected mental health, with higher self-perception associated with better mental health outcomes. In addition, physician-patient communication had a significant indirect effect on mental health via self-perception. CONCLUSIONS: The findings of this study provide important insights into the factors that affect the mental health of patients with cancer. Our results suggest that social isolation, negative self-perception, and communication with care providers are significantly related to mental health in patients with cancer.
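The quadratic-effect check reads, in regression terms, as adding a squared predictor. The sketch below is a stand-in on simulated data (the paper used PLS-SEM with the PHQ-4 outcome; the variable names and coefficients here are mine):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 400  # illustrative sample; the data are simulated

df = pd.DataFrame({"isolation": rng.uniform(0, 10, n)})
# Outcome built with a curvilinear component, so the squared term matters.
df["phq4"] = 0.9 * df["isolation"] - 0.05 * df["isolation"] ** 2 + rng.normal(size=n)

# A significant I(isolation ** 2) coefficient indicates a quadratic effect
# of social isolation on mental health (PHQ-4).
fit = smf.ols("phq4 ~ isolation + I(isolation ** 2)", data=df).fit()
print(fit.params)
```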

14.
Healthcare (Basel) ; 11(2)2023 Jan 16.
Article in English | MEDLINE | ID: mdl-36673640

ABSTRACT

BACKGROUND: College students are one of the most susceptible age groups to mental health problems. With the growing popularity of mobile health (mHealth), there is an increasing need to investigate its implications for mental health solutions. This review evaluates mHealth interventions for addressing mental health problems among college students. METHODS: An online database search was conducted. Articles were required to focus on the impact of an mHealth intervention on student mental health. Fifteen of the 487 articles initially retrieved by the search query were included in the review. RESULTS: The review identified three primary aspects of mental health: depression, anxiety, and stress. Research that found statistically significant improvements following mHealth intervention involved study durations between four and eight weeks, daily app use, and guided lessons using cognitive behavioral therapy, acceptance and commitment therapy, and meditation. The review's findings show that future work must address the digital divide and gender and sex differences and include larger sample sizes. CONCLUSIONS: There is potential to improve depressive symptoms and other similar mental health problems among college students via mobile app interventions. However, actions must be taken to reduce barriers to communication and better reach the younger generations.

15.
Front Digit Health ; 4: 920662, 2022.
Article in English | MEDLINE | ID: mdl-36339516

ABSTRACT

Background: Given the opportunities created by artificial intelligence (AI) based decision support systems in healthcare, the vital question is whether clinicians are willing to use this technology as an integral part of the clinical workflow. Purpose: This study leverages validated questions to formulate an online survey and consequently explore cognitive human factors influencing clinicians' intention to use an AI-based Blood Utilization Calculator (BUC), an AI system embedded in the electronic health record that delivers data-driven personalized recommendations for the number of packed red blood cells to transfuse for a given patient. Method: A purposeful sampling strategy was used to exclusively include BUC users who are clinicians in a university hospital in Wisconsin. We recruited 119 BUC users who completed the entire survey. We leveraged structural equation modeling to capture the direct and indirect effects of AI Perception and Expectancy on clinicians' intention to use the technology when mediated by Perceived Risk. Results: The findings indicate a significant negative direct effect of AI Perception on BUC Risk (ß = -0.23, p < 0.001). Similarly, Expectancy had a significant negative effect on Risk (ß = -0.49, p < 0.001). We also noted a significant negative impact of Risk on the Intent to Use BUC (ß = -0.34, p < 0.001). Regarding the indirect effect of Expectancy on the Intent to Use BUC, the findings show a significant positive impact mediated by Risk (ß = 0.17, p = 0.004). The study also noted a significant positive indirect effect of AI Perception on the Intent to Use BUC when mediated by Risk (ß = 0.08, p = 0.027). Overall, this study demonstrated the influences of expectancy, perceived risk, and perception of AI on clinicians' intent to use BUC (an AI system). AI developers need to emphasize the benefits of AI technology, ensure ease of use (effort expectancy), clarify the system's potential (performance expectancy), and minimize risk perceptions by improving the overall design. Conclusion: Identifying the factors that determine clinicians' intent to use AI-based decision support systems can help improve technology adoption and use in the healthcare domain. Enhanced and safe adoption of AI can uplift the overall care process and help standardize clinical decisions and procedures. Improved AI adoption in healthcare will help clinicians share their everyday clinical workload and make critical decisions.
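A covariance-based SEM package such as semopy can express the mediation structure described here (perception and expectancy acting on intention through perceived risk). This is a sketch, not the study's code: the data are simulated, the variable names are mine, and the seeded path strengths merely echo the signs of the reported coefficients.

```python
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(3)
n = 119  # matches the reported number of BUC users; the data are simulated

ai_perception = rng.normal(size=n)
expectancy = rng.normal(size=n)
risk = -0.23 * ai_perception - 0.49 * expectancy + rng.normal(scale=0.8, size=n)
intent = -0.34 * risk + rng.normal(scale=0.8, size=n)

df = pd.DataFrame({
    "AIPerception": ai_perception, "Expectancy": expectancy,
    "Risk": risk, "Intent": intent,
})

# Lavaan-style model description: antecedents -> Risk -> Intent.
desc = """
Risk ~ AIPerception + Expectancy
Intent ~ Risk
"""
model = semopy.Model(desc)
model.fit(df)
print(model.inspect())  # path estimates, standard errors, p-values
```

Multiplying the fitted antecedent-to-Risk path by the Risk-to-Intent path gives the indirect effects the abstract reports.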

16.
JMIR Hum Factors ; 9(4): e38411, 2022 Oct 31.
Article in English | MEDLINE | ID: mdl-36315238

ABSTRACT

BACKGROUND: According to the US Food and Drug Administration Center for Biologics Evaluation and Research, health care systems have been experiencing blood transfusion overuse. To minimize the overuse of blood product transfusions, a proprietary artificial intelligence (AI)-based blood utilization calculator (BUC) was developed and integrated into a US hospital's electronic health record. Despite the promising performance of the BUC, this technology remains underused in the clinical setting. OBJECTIVE: This study aims to explore how clinicians perceived this AI-based decision support system and, consequently, understand the factors hindering BUC use. METHODS: We interviewed 10 clinicians (BUC users) until the data saturation point was reached. The interviews were conducted over a web-based platform and were recorded. The audiovisual recordings were then anonymously transcribed verbatim. We used an inductive-deductive thematic analysis to analyze the transcripts, which involved applying predetermined themes to the data (deductive) and consecutively identifying new themes as they emerged in the data (inductive). RESULTS: We identified the following two themes: (1) workload and usability and (2) clinical decision-making. Clinicians acknowledged the ease of use and usefulness of the BUC for the general inpatient population. The clinicians also found the BUC to be useful in making decisions related to blood transfusion. However, some clinicians found the technology to be confusing due to inconsistent automation across different blood work processes. CONCLUSIONS: This study highlights that analytical efficacy alone does not ensure technology use or acceptance. The overall system design, user perception, and users' knowledge of the technology (its limitations, functionality, purpose, and scope) are equally important and necessary. Therefore, the effective integration of AI-based decision support systems, such as the BUC, mandates multidisciplinary engagement, ensuring adequate initial and recurrent training of AI users while maintaining high analytical efficacy and validity. As a final takeaway, the design of AI systems that are made to perform specific tasks must be self-explanatory so that users can easily understand how and when to use the technology. Using any technology on a population for whom it was not initially designed will hinder user perception and the technology's use.

18.
JMIR Mhealth Uhealth ; 10(9): e38368, 2022 09 21.
Article in English | MEDLINE | ID: mdl-36129749

ABSTRACT

BACKGROUND: Despite several initiatives taken by government bodies, disparities in maternal health have been noticeable across India's socioeconomic gradient due to poor health awareness. OBJECTIVE: The aim of this study was to implement an easy-to-use mobile health (mHealth) app, Mobile for Mothers (MfM), as a supporting tool to improve (1) maternal health awareness and (2) maternal health-related behavioral changes among tribal and rural communities in India. METHODS: Pregnant women, aged 18 to 45 years, were selected from two rural villages of Jharkhand, India: (1) the intervention group received government-mandated maternal care through an mHealth app, and (2) the control group received the same government-mandated care via traditional means (ie, verbally). A total of 800 accredited social health activists (ASHAs) were involved, of whom 400 were allocated to the intervention group. ASHAs in the intervention group used the MfM app to engage with pregnant women during each home visit. The mHealth intervention commenced soon after the baseline survey was completed in February 2014. The end-line data were collected between November 2015 and January 2016. We calculated descriptive statistics related to demographics and the percentage changes for each variable between baseline and end line per group. The baseline preintervention groups were compared to the end-line postintervention groups using Pearson chi-square analyses. Mantel-Haenszel tests for conditional independence were conducted to determine whether the pre- to postintervention differences in the intervention group were significantly different from those in the control group. RESULTS: Awareness regarding the five cleans (5Cs) in the intervention group increased (P<.001) from 143 (baseline) to 555 (end line) out of 740 participants. In the intervention group, awareness of tetanus vaccine injections and of the fact that pregnant women should receive two shots of tetanus vaccine significantly increased (P<.001) from 73 out of 740 participants (baseline) to 372 out of 555 participants (end line). In the intervention group, awareness that problems like painful or burning urination and itchy genitals during pregnancy are indicative of a reproductive tract infection increased (P<.001) from 15 (baseline) to 608 (end line) out of 740 participants. Similarly, knowledge about HIV testing increased (P<.001) from 39 (baseline) to 572 (end line) out of 740 participants. We also noted that the number of pregnant women in the intervention group who consumed the prescribed dosage of iron tablets increased (P<.001) from 193 out of 288 participants (baseline) to 612 out of 663 participants (end line). CONCLUSIONS: mHealth interventions can augment awareness of, and persistence in, recommended maternal health behaviors among tribal communities in Jharkhand, India. In addition, mHealth could act as an educational tool to help tribal societies break away from their traditional beliefs about maternal health and take up modern health care recommendations. TRIAL REGISTRATION: OSF Registries 9U8D5; https://doi.org/10.17605/OSF.IO/9U8D5.
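The pre/post comparisons reported above are chi-square tests on 2x2 count tables. The sketch below reproduces the five-cleans comparison using only the counts given in the abstract:

```python
from scipy.stats import chi2_contingency

# Intervention group, awareness of the "five cleans" (5Cs):
# counts from the abstract, as [aware, not aware] out of 740.
baseline = [143, 740 - 143]
endline = [555, 740 - 555]

chi2, p, dof, expected = chi2_contingency([baseline, endline])
print(f"chi2({dof}) = {chi2:.1f}, p = {p:.3g}")
```

For the Mantel-Haenszel test for conditional independence the paper describes, statsmodels' StratifiedTable (in statsmodels.stats.contingency_tables) pools the 2x2 group-by-outcome tables across strata.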


Subject(s)
Maternal Health , Telemedicine , Female , Humans , Iron , Mothers , Pregnancy , Pregnant Women , Tetanus Toxoid
19.
JMIR Hum Factors ; 9(2): e35421, 2022 Jun 21.
Article in English | MEDLINE | ID: mdl-35727615

ABSTRACT

The health care management and medical practitioner literature lacks a descriptive conceptual framework for understanding the dynamic and complex interactions between clinicians and artificial intelligence (AI) systems. As most of the existing literature has investigated AI's performance and effectiveness from a statistical (analytical) standpoint, there is a lack of studies ensuring AI's ecological validity. In this study, we derived a framework that focuses explicitly on the interaction between AI and clinicians. The proposed framework builds upon well-established human factors models such as the technology acceptance model and expectancy theory. The framework can be used to perform quantitative and qualitative (mixed methods) analyses to capture how clinician-AI interactions may vary based on human factors such as expectancy, workload, trust, cognitive variables related to absorptive capacity and bounded rationality, and concerns for patient safety. If leveraged, the proposed framework can help identify factors influencing clinicians' intention to use AI and, consequently, improve AI acceptance and address the lack of AI accountability while safeguarding patients, clinicians, and the AI technology. Overall, this paper discusses the concepts, propositions, and assumptions of the multidisciplinary decision-making literature, constituting a sociocognitive approach that extends the theories of distributed cognition and thus accounts for the ecological validity of AI.

20.
Healthcare (Basel) ; 10(5)2022 May 21.
Article in English | MEDLINE | ID: mdl-35628089

ABSTRACT

Pediatric patients, particularly in neonatal and pediatric intensive care units (NICUs and PICUs), are typically at an increased risk of fatal decompensation. In this setting, any delay in treatment or minor error in medication dosage can further compromise patient health. In such an environment, clinicians are expected to quickly and effectively comprehend large volumes of medical information to diagnose and develop a treatment plan for each patient. The integration of Artificial Intelligence (AI) into the clinical workflow can be a potential solution to safeguard pediatric patients and augment the quality of care. However, before making AI an integral part of pediatric care, it is essential to evaluate the technology from a human factors perspective, ensuring its readiness (technology readiness level) and ecological validity. Addressing AI accountability is also critical to safeguarding clinicians and improving AI acceptance in the clinical workflow. This article summarizes the application of AI in the NICU/PICU, identifies existing flaws in AI from the clinicians' standpoint, and proposes related recommendations that, if addressed, can improve AI's readiness for a real clinical environment.
