1.
Sci Rep ; 14(1): 12763, 2024 06 04.
Article En | MEDLINE | ID: mdl-38834661

As technology advances, the life sciences play an increasingly important role, and the application of artificial intelligence in medicine has attracted growing attention. Bell's palsy, a neurological condition characterized by facial muscle weakness or paralysis, profoundly affects patients' facial expressions and masticatory ability, inflicting considerable distress on their quality of life and mental well-being. In this study, we designed a facial attribute recognition model specifically for individuals with Bell's palsy. The model uses an enhanced SSD network together with scientific computing to grade the severity of the patients' condition. By replacing the VGG backbone with a more efficient network, we improved the model's accuracy and significantly reduced its computational burden. The results show that the improved SSD network achieves an average precision of 87.9% in classifying mild, moderate, and severe facial palsy, and that the scientific-computing step further increases classification precision. This is one of the most significant contributions of this article, providing intelligent tools and objective data for future research on intelligent diagnosis and treatment as well as progressive rehabilitation.
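As a rough illustration of the reported metric, the sketch below computes per-class and mean average precision for three severity grades with scikit-learn. The labels, scores, and class names are hypothetical stand-ins; the paper's SSD detector and its exact evaluation protocol are not reproduced here.

```python
# Per-class and mean average precision (AP) for three severity grades,
# computed one-vs-rest from class scores. All data are hypothetical.
import numpy as np
from sklearn.metrics import average_precision_score
from sklearn.preprocessing import label_binarize

CLASSES = ["mild", "moderate", "severe"]
y_true = np.array(["mild", "severe", "moderate", "mild", "severe", "moderate"])
# One score per class per sample, e.g. softmax outputs of a grading head
y_score = np.array([[0.7, 0.2, 0.1],
                    [0.1, 0.2, 0.7],
                    [0.2, 0.6, 0.2],
                    [0.4, 0.5, 0.1],
                    [0.0, 0.3, 0.7],
                    [0.1, 0.8, 0.1]])

y_bin = label_binarize(y_true, classes=CLASSES)  # (n_samples, n_classes)
ap = [average_precision_score(y_bin[:, i], y_score[:, i])
      for i in range(len(CLASSES))]
for cls, a in zip(CLASSES, ap):
    print(f"AP[{cls}] = {a:.3f}")
print(f"mean AP = {np.mean(ap):.3f}")
```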


Bell Palsy; Humans; Bell Palsy/diagnosis; Bell Palsy/physiopathology; Neural Networks, Computer; Female; Male; Facial Expression; Adult; Artificial Intelligence; Middle Aged; Facial Paralysis/diagnosis; Facial Paralysis/physiopathology; Facial Paralysis/psychology; Facial Recognition; Automated Facial Recognition/methods
2.
Sci Rep ; 14(1): 12807, 2024 06 04.
Article En | MEDLINE | ID: mdl-38834718

The advent of the fourth industrial revolution, with artificial intelligence (AI) as its central component, has mechanized numerous previously labor-intensive activities, and in silico tools have become prevalent in the design of biopharmaceuticals. Comprehensive analysis of the genomes of many organisms has revealed that their tissues can generate specific peptides that confer protection against certain diseases. This study aims to identify a select group of neuropeptides (NPs) with characteristics that make them ideal candidates for production as neurological biopharmaceuticals. Until now, work has focused primarily on constructing NP classifiers rather than on optimizing these characteristics. This study therefore formulates the design of ideal NPs as a multi-objective optimization problem. The proposed framework, NPpred, comprises two distinct components: NSGA-NeuroPred and BERT-NeuroPred. The former employs the NSGA-II algorithm to explore and evolve a population of NPs, while the latter is an interpretable deep learning-based model. Drawing on explainable AI and sequence motifs, two novel operators are proposed: p-crossover and p-mutation. An online application for designing collections of synthesizable NPs from protein sequences has been deployed at https://neuropred.anvil.app.
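The multi-objective formulation can be pictured with a minimal NSGA-II sketch using the pymoo library. Both objectives, the encoding, and the bounds are placeholders: the paper's real objectives come from its learned predictors, and its custom p-crossover and p-mutation operators are not reproduced.

```python
# A toy NSGA-II run with pymoo. Both objectives are placeholders standing in
# for the paper's learned desirability scores.
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize

class ToyPeptideProblem(ElementwiseProblem):
    def __init__(self):
        # 20 continuous variables standing in for a relaxed residue encoding
        super().__init__(n_var=20, n_obj=2, xl=0.0, xu=1.0)

    def _evaluate(self, x, out, *args, **kwargs):
        f1 = np.sum((x - 0.5) ** 2)      # placeholder: distance from an "NP-like" profile
        f2 = np.sum(np.abs(np.diff(x)))  # placeholder: synthesis-difficulty proxy
        out["F"] = [f1, f2]              # both objectives are minimized

res = minimize(ToyPeptideProblem(), NSGA2(pop_size=50), ("n_gen", 100),
               seed=1, verbose=False)
print(res.F[:5])  # a slice of the Pareto front of candidate designs
```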


Algorithms; Artificial Intelligence; Humans; Neuropeptides/genetics; Neuropeptides/chemistry; Drug Design; Computer Simulation; Deep Learning
3.
Rev. esp. patol ; 57(2): 91-96, Apr-Jun, 2024. graf
Article Es | IBECS | ID: ibc-232412

Introduction and objective: Artificial intelligence is fully present in our lives. In education, the possibilities for its use are endless, for students and teachers alike. Material and methods: We explored ChatGPT's ability to answer multiple-choice questions from the exam for the subject "Anatomopathological Diagnostic and Therapeutic Procedures" in the first sitting of the 2022-23 academic year. In addition to comparing its result with those of the students who sat the exam, we evaluated the probable causes of its incorrect answers. Finally, we assessed its ability to write new test questions from specific instructions. Results: ChatGPT answered 47 of the 68 questions correctly, achieving a grade above both the course mean and median. Most of the failed questions had negatively phrased stems, using the words "no," "false," or "incorrect." When prompted about an error, the program is able to recognize its mistake and replace its initial answer with the correct one. Finally, ChatGPT can write new questions from a theoretical scenario or a given clinical simulation. Conclusions: As teachers, we are obliged to explore the uses of artificial intelligence and try to use it to our benefit. Tasks that consume significant time, such as writing multiple-choice questions for content assessment, are a good example. (AU)
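A minimal sketch of posing one multiple-choice question to a chat model via the OpenAI API follows. The model name and example question are assumptions made for illustration; the study itself used the ChatGPT web interface.

```python
# Posing one multiple-choice question to a chat model via the OpenAI API.
# The model name and question are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = (
    "Which stain is most commonly used for routine histological examination?\n"
    "A) PAS  B) Hematoxylin-eosin  C) Masson's trichrome  D) Giemsa"
)
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat-capable model works
    messages=[
        {"role": "system", "content": "Answer with a single option letter."},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```

Given the finding that negatively phrased stems ("no," "false," "incorrect") account for most errors, such stems are worth testing separately when evaluating a model this way.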


Humans; Pathology; Artificial Intelligence; Teaching; Education; Faculty, Medical; Students
4.
Zhonghua Yan Ke Za Zhi ; 60(6): 484-489, 2024 Jun 11.
Article Zh | MEDLINE | ID: mdl-38825947

In recent years, artificial intelligence (AI) technologies have grown substantially across various sectors, with significant strides made in medical AI through advances such as large models. The application of AI in ophthalmology can enhance the accuracy of eye disease screening and diagnosis. However, the deployment of AI and large models in ophthalmology still faces numerous limitations and challenges. Building on the transformative achievements in medical AI, this article discusses the ongoing challenges facing AI applications in ophthalmology, provides forward-looking insights from an ophthalmic perspective on the era of large models, and anticipates research trends in AI applications in ophthalmology, so as to foster the continued advancement of AI technologies and thereby promote eye health.


Artificial Intelligence; Eye Diseases; Humans; Eye Diseases/diagnosis; Ophthalmology/methods; Mass Screening/methods; Diagnostic Techniques, Ophthalmological
5.
Radiat Oncol ; 19(1): 69, 2024 May 31.
Article En | MEDLINE | ID: mdl-38822385

BACKGROUND: Multiple artificial intelligence (AI)-based autocontouring solutions have become available, each promising high accuracy and time savings compared with manual contouring. Before implementing AI-driven autocontouring in clinical practice, three commercially available CT-based solutions were evaluated. MATERIALS AND METHODS: The following solutions were evaluated in this work: MIM-ProtégéAI+ (MIM), Radformation-AutoContour (RAD), and Siemens-DirectORGANS (SIE). Sixteen organs were identified that could be contoured by all solutions. For each organ, ten patients with manually generated contours approved by the treating physician (AP) were identified, totaling forty-seven different patients. CT scans in the supine position were acquired using a Siemens-SOMATOMgo 64-slice helical scanner and used to generate autocontours. Physician scoring of contour accuracy was performed by at least three physicians using a five-point Likert scale. Dice similarity coefficient (DSC), Hausdorff distance (HD), and mean distance to agreement (MDA) were calculated comparing AI contours to "ground truth" AP contours. RESULTS: The average physician score ranged from 1.00, indicating that all physicians rated the contour clinically acceptable with no modifications necessary, to 3.70, indicating that changes were required and that modifying the structures would likely take as long as, or longer than, generating the contour manually. When averaged across all sixteen structures, the AP contours had a physician score of 2.02, MIM 2.07, RAD 1.96, and SIE 1.99. DSC ranged from 0.37 to 0.98, with 41/48 (85.4%) contours having an average DSC ≥ 0.7. Average HD ranged from 2.9 to 43.3 mm. Average MDA ranged from 0.6 to 26.1 mm. CONCLUSIONS: The results of our comparison demonstrate that each vendor's AI contouring solution performed comparably to manual contouring. There was a small number of cases where unusual anatomy led to poor scores with one or more of the solutions. The consistency and comparable performance of all three vendors' solutions suggest that radiation oncology centers can confidently choose any of the evaluated solutions based on individual preferences, resource availability, and compatibility with their existing clinical workflows. Although AI-based contouring may produce high-quality contours for the majority of patients, a minority of patients will still require manual contouring and more in-depth physician review.
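The three agreement metrics named above are standard and straightforward to reproduce. The sketch below computes DSC on binary masks and HD/MDA on surface point sets with NumPy and SciPy; all inputs are synthetic stand-ins for rasterized AP and AI contours.

```python
# DSC on binary masks sharing a CT grid; HD and MDA on (N, 3) surface point
# sets in millimetres. All inputs are synthetic toy data.
import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.distance import directed_hausdorff

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two boolean masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(pts_a: np.ndarray, pts_b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two surface point sets."""
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])

def mda(pts_a: np.ndarray, pts_b: np.ndarray) -> float:
    """Mean distance to agreement: symmetrized nearest-neighbour distance."""
    d_ab = cKDTree(pts_b).query(pts_a)[0]
    d_ba = cKDTree(pts_a).query(pts_b)[0]
    return (d_ab.mean() + d_ba.mean()) / 2.0

# Toy masks: two cubes offset by two voxels on a 64^3 grid
mask_ap = np.zeros((64, 64, 64), dtype=bool)
mask_ai = np.zeros((64, 64, 64), dtype=bool)
mask_ap[20:44, 20:44, 20:44] = True
mask_ai[22:46, 20:44, 20:44] = True
print(f"DSC = {dice(mask_ap, mask_ai):.3f}")

# Toy surfaces: two noisy samplings of a 30 mm sphere
rng = np.random.default_rng(0)
t, p = rng.uniform(0, np.pi, 500), rng.uniform(0, 2 * np.pi, 500)
sph = 30 * np.c_[np.sin(t) * np.cos(p), np.sin(t) * np.sin(p), np.cos(t)]
pts_ai = sph + rng.normal(0, 0.5, sph.shape)
print(f"HD = {hausdorff(sph, pts_ai):.2f} mm, MDA = {mda(sph, pts_ai):.2f} mm")
```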


Artificial Intelligence; Radiotherapy Planning, Computer-Assisted; Tomography, X-Ray Computed; Humans; Radiotherapy Planning, Computer-Assisted/methods; Organs at Risk/radiation effects; Algorithms; Image Processing, Computer-Assisted/methods
7.
JCO Clin Cancer Inform ; 8: e2400077, 2024 May.
Article En | MEDLINE | ID: mdl-38822755

PURPOSE: Artificial intelligence (AI) models can generate scientific abstracts that are difficult to distinguish from the work of human authors. The use of AI in scientific writing, and the performance of AI detection tools, are poorly characterized. METHODS: We extracted text from published scientific abstracts from the ASCO 2021-2023 Annual Meetings. The likelihood of AI content was evaluated by three detectors: GPTZero, Originality.ai, and Sapling. Optimal thresholds for AI content detection were selected using 100 abstracts from before 2020 as negative controls and 100 produced by OpenAI's GPT-3 and GPT-4 models as positive controls. Logistic regression was used to evaluate the association of predicted AI content with submission year and abstract characteristics, and adjusted odds ratios (aORs) were computed. RESULTS: Fifteen thousand five hundred fifty-three abstracts met inclusion criteria. Across detectors, abstracts submitted in 2023 were significantly more likely to contain AI content than those in 2021 (aORs ranging from 1.79 with Originality to 2.37 with Sapling). Online-only publication and lack of a clinical trial number were consistently associated with AI content. With optimal thresholds, 99.5%, 96%, and 97% of GPT-3/4-generated abstracts were identified by GPTZero, Originality, and Sapling, respectively, and no sampled abstracts from before 2020 were classified as AI-generated by the GPTZero and Originality detectors. Correlation between detectors was low to moderate, with Spearman correlation coefficients ranging from 0.14 (Originality vs Sapling) to 0.47 (Sapling vs GPTZero). CONCLUSION: There is an increasing signal of AI content in ASCO abstracts, coinciding with the growing popularity of generative AI models.
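The threshold-selection and detector-agreement steps can be sketched as follows. Scores are synthetic, and Youden's J as the "optimal" criterion is an assumption, since the abstract does not state how the thresholds were chosen.

```python
# Threshold selection on labelled controls, then detector agreement.
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import roc_curve

rng = np.random.default_rng(42)
y = np.r_[np.zeros(100), np.ones(100)]  # 0 = pre-2020 human, 1 = GPT-generated
scores_a = np.r_[rng.beta(2, 8, 100), rng.beta(8, 2, 100)]     # detector A
scores_b = np.clip(scores_a + rng.normal(0, 0.25, 200), 0, 1)  # detector B

# Pick the threshold maximizing Youden's J (tpr - fpr) on the control sets
fpr, tpr, thresholds = roc_curve(y, scores_a)
best = thresholds[np.argmax(tpr - fpr)]
print(f"chosen threshold: {best:.3f}")

# Agreement between detectors on the same abstracts
rho, pval = spearmanr(scores_a, scores_b)
print(f"Spearman rho = {rho:.2f} (p = {pval:.3g})")
```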


Abstracting and Indexing; Artificial Intelligence; Medical Oncology; Humans; Medical Oncology/methods
8.
J Med Internet Res ; 26: e50344, 2024 Jun 05.
Article En | MEDLINE | ID: mdl-38838309

The growing prominence of artificial intelligence (AI) in mobile health (mHealth) has given rise to a distinct subset of apps that provide users with diagnostic information based on their inputted health status and symptoms: AI-powered symptom checker apps (AISympCheck). While these apps may potentially increase access to health care, they raise consequential ethical and legal questions. This paper highlights two notable concerns with AI use in the health care system: the further entrenchment of preexisting biases, and issues with professional accountability. To provide an in-depth analysis of the issues of bias and the complications of professional obligations and liability, we focus on 2 mHealth apps as examples: Babylon and Ada. We selected these 2 apps because both were widely distributed during the COVID-19 pandemic and make prominent claims about their use of AI for the purpose of assessing user symptoms. First, bias entrenchment often originates from the data used to train AI systems, causing the AI to replicate these inequalities through a "garbage in, garbage out" phenomenon. Users of these apps are also unlikely to be demographically representative of the larger population, leading to distorted results. Second, professional accountability poses a substantial challenge given the vast diversity of these apps and the lack of regulation surrounding their reliability. It is unclear whether these apps should be subject to safety reviews, who is responsible for app-mediated misdiagnosis, and whether these apps ought to be recommended by physicians. With the rapidly increasing number of apps, there remains little guidance available for health professionals. Professional bodies and advocacy organizations have a particularly important role to play in addressing these ethical and legal gaps. Implementing technical safeguards within these apps could mitigate bias, AIs could be trained primarily on neutral data, and apps could be subject to a system of regulation that allows users to make informed decisions. In our view, it is critical that these legal concerns be considered throughout the design and implementation of these potentially disruptive technologies. Entrenched bias and professional responsibility, while operating in different ways, are ultimately exacerbated by the unregulated nature of mHealth.


Artificial Intelligence; COVID-19; Mobile Applications; Telemedicine; Humans; Artificial Intelligence/ethics; Bias; SARS-CoV-2; Pandemics; Social Responsibility
11.
Clin Orthop Surg ; 16(3): 347-356, 2024 Jun.
Article En | MEDLINE | ID: mdl-38827766

Artificial intelligence (AI) has rapidly transformed various aspects of life, and the launch of the chatbot "ChatGPT" by OpenAI in November 2022 has garnered significant attention and user appreciation. ChatGPT utilizes natural language processing based on a generative pre-trained transformer (GPT) model to generate human-like responses to a wide range of questions and topics. Trained on online data comprising approximately 57 billion words and equipped with 175 billion parameters, ChatGPT has potential applications in medicine and orthopedics. One of its key strengths is its personalized, easy-to-understand, and adaptive response style, which allows it to learn continuously through user interaction. This article discusses how AI, especially ChatGPT, presents numerous opportunities in orthopedics, ranging from preoperative planning and surgical techniques to patient education and medical support. Although ChatGPT's user-friendly responses and adaptive capabilities are laudable, its limitations, including biased responses and ethical concerns, necessitate cautious and responsible use. Surgeons and healthcare providers should leverage the strengths of ChatGPT while recognizing its current limitations and verifying critical information through independent research and expert opinion. As AI technology continues to evolve, ChatGPT may become a valuable tool in orthopedic education and patient care, leading to improved outcomes and efficiency in healthcare delivery. The integration of AI into orthopedics offers substantial benefits but requires careful consideration and continuous improvement.


Artificial Intelligence; Orthopedic Procedures; Humans; Natural Language Processing; Patient Care
13.
Sci Rep ; 14(1): 12734, 2024 06 03.
Article En | MEDLINE | ID: mdl-38830969

Early screening for depression helps patients obtain better diagnosis and treatment. While the effectiveness of using voice data for depression detection has been demonstrated, the issue of insufficient dataset size remains unresolved. We therefore propose an artificial intelligence method to effectively identify depression. The wav2vec 2.0 pre-trained speech model was used as a feature extractor to automatically extract high-quality voice features from raw audio, and a small fine-tuning network was used as the classification model to output depression classification results. The proposed model was fine-tuned on the DAIC-WOZ dataset and achieved excellent classification results. Notably, the model demonstrated outstanding performance in binary classification, attaining an accuracy of 0.9649 and an RMSE of 0.1875 on the test set. Similarly, impressive results were obtained in multi-class classification, with an accuracy of 0.9481 and an RMSE of 0.3810. This work is the first to apply wav2vec 2.0 to depression recognition, and the model showed strong generalization ability. The method is simple, practical, and readily applicable, and can assist doctors in the early screening of depression.
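A minimal sketch of the described architecture, using the Hugging Face transformers library: wav2vec 2.0 as a frozen feature extractor feeding a small classification head. The checkpoint, pooling strategy, and head size are assumptions; the DAIC-WOZ data used by the authors requires a data use agreement and is not included.

```python
# wav2vec 2.0 as a frozen feature extractor plus a small fine-tuning head.
# Checkpoint, mean-pooling, and head size are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import Wav2Vec2Model

class DepressionClassifier(nn.Module):
    def __init__(self, n_classes=2, checkpoint="facebook/wav2vec2-base-960h"):
        super().__init__()
        self.encoder = Wav2Vec2Model.from_pretrained(checkpoint)
        self.encoder.requires_grad_(False)        # freeze the pre-trained encoder
        hidden = self.encoder.config.hidden_size  # 768 for the base model
        self.head = nn.Sequential(nn.Linear(hidden, 128), nn.ReLU(),
                                  nn.Linear(128, n_classes))

    def forward(self, waveform):  # (batch, samples) at 16 kHz
        feats = self.encoder(waveform).last_hidden_state  # (batch, frames, hidden)
        return self.head(feats.mean(dim=1))               # mean-pool over time

model = DepressionClassifier()
logits = model(torch.randn(2, 16000))  # two 1-second dummy clips
print(logits.shape)                    # torch.Size([2, 2])
```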


Depression; Voice; Humans; Depression/diagnosis; Male; Female; Artificial Intelligence; Adult
14.
J Med Internet Res ; 26: e44443, 2024 Jun 04.
Article En | MEDLINE | ID: mdl-38833294

BACKGROUND: Singapore, like the rest of Asia, faces persistent challenges to mental health promotion, including stigma around unwellness and seeking treatment and a lack of trained mental health personnel. The COVID-19 pandemic, which created a surge in mental health care needs and simultaneously accelerated the adoption of digital health solutions, revealed a new opportunity to quickly scale innovative solutions in the region. OBJECTIVE: In June 2020, the Singaporean government launched mindline.sg, an anonymous digital mental health resource website that has grown to include >500 curated local mental health resources, a clinically validated self-assessment tool for depression and anxiety, an artificial intelligence (AI) chatbot from Wysa designed to deliver digital therapeutic exercises, and a tailored version of the website for working adults called mindline at work. The goal of the platform is to empower Singapore residents to take charge of their own mental health and to be able to offer basic support to those around them through the ease and convenience of a barrier-free digital solution. METHODS: Website use is measured through click-level data analytics captured via Google Analytics and custom application programming interfaces, which in turn drive a customized analytics infrastructure based on the open-source platforms Titanium Database and Metabase. Unique, nonbounced (users that do not immediately navigate away from the site), engaged, and return users are reported. RESULTS: In the 2 years following launch (July 1, 2020, through June 30, 2022), the website received >447,000 visitors (approximately 15% of the target population of 3 million), 62.02% (277,727/447,783) of whom explored the site or engaged with resources (referred to as nonbounced visitors); 10.54% (29,271/277,727) of those nonbounced visitors returned. The most popular features on the platform were the dialogue-based therapeutic exercises delivered by the chatbot and the self-assessment tool, which were used by 25.54% (67,626/264,758) and 11.69% (32,469/277,727) of nonbounced visitors. On mindline at work, the rates of nonbounced visitors who engaged extensively (ie, spent ≥40 seconds exploring resources) and who returned were 51.56% (22,474/43,588) and 13.43% (5,853/43,588) over a year, respectively, compared to 30.9% (42,829/138,626) and 9.97% (13,822/138,626), respectively, on the generic mindline.sg site in the same year. CONCLUSIONS: The site has achieved desired reach and has seen a strong growth rate in the number of visitors, which required substantial and sustained digital marketing campaigns and strategic outreach partnerships. The site was careful to preserve anonymity, limiting the detail of analytics. The good levels of overall adoption encourage us to believe that mild to moderate mental health conditions and the social factors that underly them are amenable to digital interventions. While mindline.sg was primarily used in Singapore, we believe that similar solutions with local customization are widely and globally applicable.
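The visitor metrics reported here reduce to simple aggregations over click-level logs. The pandas sketch below derives unique, nonbounced, engaged (≥40 s), and returning visitors from a toy log; the column names and bounce definition are assumptions about the underlying schema.

```python
# Deriving the reported visitor metrics from click-level logs. The log
# below is a toy; column names and cutoffs are illustrative assumptions.
import pandas as pd

sessions = pd.DataFrame({
    "visitor_id": ["a", "a", "b", "c", "c", "c"],
    "pages_viewed": [3, 1, 1, 5, 2, 4],
    "seconds_on_site": [120, 35, 2, 300, 50, 90],
})

unique_visitors = sessions["visitor_id"].nunique()
nonbounced = sessions[sessions["pages_viewed"] > 1]        # did not leave immediately
engaged = nonbounced[nonbounced["seconds_on_site"] >= 40]  # spent >= 40 s exploring
returning = (nonbounced.groupby("visitor_id").size() > 1).sum()

print(f"unique: {unique_visitors}, nonbounced sessions: {len(nonbounced)}, "
      f"engaged sessions: {len(engaged)}, returning visitors: {returning}")
```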


COVID-19; Mental Health; Self Care; Humans; Singapore; Self Care/methods; Telemedicine; Health Promotion/methods; Internet; Pandemics; Artificial Intelligence; SARS-CoV-2; Mental Health Services
15.
Proc Natl Acad Sci U S A ; 121(24): e2317967121, 2024 Jun 11.
Article En | MEDLINE | ID: mdl-38833474

Large language models (LLMs) are currently at the forefront of intertwining AI systems with human communication and everyday life. Thus, aligning them with human values is of great importance. However, given their steadily increasing reasoning abilities, there is concern that future LLMs may become able to deceive human operators and use this ability to bypass monitoring efforts. As a prerequisite to this, LLMs need to possess a conceptual understanding of deception strategies. This study reveals that such strategies emerged in state-of-the-art LLMs but were nonexistent in earlier LLMs. We conduct a series of experiments showing that state-of-the-art LLMs are able to understand and induce false beliefs in other agents, that their performance in complex deception scenarios can be amplified using chain-of-thought reasoning, and that eliciting Machiavellianism in LLMs can trigger misaligned deceptive behavior. GPT-4, for instance, exhibits deceptive behavior in simple test scenarios 99.16% of the time (P < 0.001). In complex second-order deception test scenarios, where the aim is to mislead someone who expects to be deceived, GPT-4 resorts to deceptive behavior 71.46% of the time (P < 0.001) when augmented with chain-of-thought reasoning. In sum, by revealing hitherto unknown machine behavior in LLMs, our study contributes to the nascent field of machine psychology.
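The reported significance levels are consistent with a simple test of whether the deception rate exceeds chance. The sketch below runs a one-sided binomial test with SciPy; the trial count and the 50% null rate are illustrative assumptions, as the abstract gives neither.

```python
# One-sided binomial test of whether a deception rate exceeds chance.
# Counts and the 50% null rate are illustrative assumptions.
from scipy.stats import binomtest

n_trials = 1000
n_deceptive = 992  # roughly the abstract's 99.16% rate
result = binomtest(n_deceptive, n_trials, p=0.5, alternative="greater")
print(f"rate = {n_deceptive / n_trials:.4f}, p-value = {result.pvalue:.3g}")
```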


Deception; Language; Humans; Artificial Intelligence
16.
Sci Rep ; 14(1): 12635, 2024 06 02.
Article En | MEDLINE | ID: mdl-38825652

We describe an approach aimed at helping artificial intelligence develop a theory of mind of its human teammates to support team interactions. We show how this can be supported by providing an agent with quantifiable, machine-readable, a priori information about the human team members. We first show how our profiling approach can construct individual team member characteristic profiles from sparse data and provide them to agents to support the development of artificial theory of mind. We then show how it captures features of team composition that may influence team performance. We document this through an experiment examining factors influencing the performance of ad hoc teams executing a complex team coordination task when paired with an artificial social intelligence (ASI) teammate. We report the relationship between the individual and team characteristics and measures related to task performance and self-reported perceptions of the ASI. The results show that individual and emergent team profiles were able to characterize features of the team that predicted behavior and explain differences in perceptions of the ASI. Further, the features of these profiles may interact differently when teams work with human versus ASI advisors. Most strikingly, our analyses showed that ASI advisors had a strong positive impact on low-potential teams, improving their performance across mission outcome measures. We discuss these findings in the context of developing intelligent technologies capable of social cognition and of engaging in collaborative behaviors that improve team effectiveness.


Artificial Intelligence; Theory of Mind; Humans; Male; Female; Cooperative Behavior; Adult; Task Performance and Analysis
18.
Med Sci Monit ; 30: e944310, 2024 Jun 06.
Article En | MEDLINE | ID: mdl-38840416

Prosthodontics is a dental subspecialty that includes the preparation of dental prosthetics for missing or damaged teeth, and it increasingly uses computer-assisted technologies for planning and preparing dental prosthetics. This study presents the findings of a systematic review of publications on artificial intelligence (AI) in prosthodontics to identify current trends and future opportunities. The review question was "What are the applications of AI in prosthodontics, and how good is their performance?" Electronic searches of Web of Science, ScienceDirect, PubMed, and the Cochrane Library were conducted. The search was limited to full-text articles published from January 2012 to January 2024. QUADAS-2 was used to assess the quality and potential risk of bias of the selected studies. A total of 1925 studies were identified in the initial search. After removing duplicates and applying the exclusion criteria, 30 studies were selected for this review. The QUADAS-2 assessment rated 18.3% of the included studies as having a low risk of bias, while 52.6% and 28.9% were rated as having a high and an unclear risk of bias, respectively. Although still developing, AI models have already shown promise in dental charting, tooth shade selection, automated restoration design, mapping the preparation finishing line, manufacturing casting optimization, predicting facial changes in patients wearing removable prostheses, and designing removable partial dentures.


Artificial Intelligence; Prosthodontics; Artificial Intelligence/trends; Humans; Prosthodontics/methods; Prosthodontics/trends; Dental Prosthesis
20.
J Med Internet Res ; 26: e50274, 2024 Jun 06.
Article En | MEDLINE | ID: mdl-38842929

Adverse drug reactions are a common cause of morbidity in health care. The US Food and Drug Administration (FDA) evaluates individual case safety reports of adverse events (AEs) after submission to the FDA Adverse Event Reporting System as part of its surveillance activities. Over the past decade, the FDA has explored the application of artificial intelligence (AI) to evaluate these reports to improve the efficiency and scientific rigor of the process. However, a gap remains between AI algorithm development and deployment. This viewpoint aims to describe the lessons learned from our experience and the research needed to address both general issues in case-based reasoning using AI and the specific needs of individual case safety report assessment. Beginning with the recognition that the trustworthiness of the AI algorithm is the main determinant of its acceptance by human experts, we apply the Diffusion of Innovations theory to help explain why certain algorithms for evaluating AEs at the FDA were accepted by safety reviewers and others were not. This analysis reveals that the process by which clinicians decide from case reports whether a drug is likely to cause an AE is not well defined beyond general principles. This makes the development of high-performing, transparent, and explainable AI algorithms challenging, leading to a lack of trust by the safety reviewers. Even accounting for the introduction of large language models, the pharmacovigilance community needs an improved understanding of causal inference and of the cognitive framework for determining the causal relationship between a drug and an AE. We describe specific future research directions that underpin facilitating implementation and trust in AI for drug safety applications, including improved methods for measuring and controlling algorithmic uncertainty, computational reproducibility, and clear articulation of a cognitive framework for causal inference in case-based reasoning.


Artificial Intelligence; United States Food and Drug Administration; United States; Humans; Drug-Related Side Effects and Adverse Reactions; Clinical Decision-Making; Product Surveillance, Postmarketing/methods; Adverse Drug Reaction Reporting Systems; Algorithms; Trust
...