Results 1 - 20 of 96
1.
Artif Intell Med ; 154: 102924, 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38964194

ABSTRACT

BACKGROUND: Radiology reports are typically written in a free-text format, making clinical information difficult to extract and use. Recently, the adoption of structured reporting (SR) has been recommended by various medical societies thanks to the advantages it offers, e.g., standardization, completeness, and information retrieval. We propose a pipeline to extract information from Italian free-text radiology reports that fits the items of the reference SR registry proposed by a national society of interventional and medical radiology, focusing on CT staging of patients with lymphoma. METHODS: Our work aims to leverage the potential of Natural Language Processing and Transformer-based models to deal with automatic SR registry filling. With the availability of 174 Italian radiology reports, we investigate a rule-free generative Question Answering approach based on the Italian-specific version of T5: IT5. To address information content discrepancies, we focus on the six most frequently filled items in the annotations made on the reports: three categorical (multichoice), one free-text (free-text), and two continuous numerical (factual). In the preprocessing phase, we also encode information that is not supposed to be entered. Two strategies (batch-truncation and ex-post combination) are implemented to comply with the IT5 context length limitations. Performance is evaluated in terms of strict accuracy, F1, and format accuracy, and compared with the widely used GPT-3.5 Large Language Model. Unlike multichoice and factual answers, free-text answers do not have a one-to-one correspondence with their reference annotations. For this reason, we collect human expert feedback on the similarity between medical annotations and generated free-text answers, using a 5-point Likert scale questionnaire (evaluating the criteria of correctness and completeness). RESULTS: The combination of fine-tuning and batch splitting allows IT5 ex-post combination to achieve notable results in extracting different types of structured data, performing on par with GPT-3.5. Human-based assessment scores of free-text answers show a high correlation with the AI performance metric F1 (Spearman's correlation coefficients > 0.5, p-values < 0.001) for both IT5 ex-post combination and GPT-3.5. The latter is better at generating plausible human-like statements, although it systematically provides answers even when none should be given. CONCLUSIONS: In our experimental setting, a fine-tuned Transformer-based model with a modest number of parameters (i.e., IT5, 220 M) performs well as a clinical information extraction system for the automatic SR registry filling task. It can extract information from more than one place in the report, elaborating it in a manner that complies with the response specifications provided by the SR registry (for multichoice and factual items) or that closely approximates the work of a human expert (free-text items), with the ability to discern whether an answer should be given for a user query.
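The batch-splitting and ex-post combination strategy described in this abstract can be illustrated with a minimal sketch; this is not the authors' pipeline. It assumes the Hugging Face transformers library and the public gsarti/it5-base checkpoint, and the prompt format, chunking rule, and "null answer" label are illustrative assumptions.

```python
# Minimal sketch of generative QA over a long report with batch splitting and
# ex-post combination. Assumes Hugging Face transformers and the public
# "gsarti/it5-base" checkpoint; a task-specific fine-tuned model would be used
# in practice. Prompt wording and the null-answer label are illustrative.
from transformers import AutoTokenizer, T5ForConditionalGeneration

MODEL_NAME = "gsarti/it5-base"  # assumption: public IT5 base checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME)

def split_into_batches(report: str, question: str, max_tokens: int = 512):
    """Split the report so that question + chunk fits the model's context limit."""
    budget = max_tokens - len(tokenizer(question)["input_ids"]) - 8
    chunk, chunks, used = [], [], 0
    for word in report.split():
        n = len(tokenizer(word, add_special_tokens=False)["input_ids"])
        if used + n > budget and chunk:
            chunks.append(" ".join(chunk))
            chunk, used = [], 0
        chunk.append(word)
        used += n
    if chunk:
        chunks.append(" ".join(chunk))
    return chunks

def answer(question: str, context: str) -> str:
    prompt = f"domanda: {question} contesto: {context}"  # illustrative prompt format
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512)
    out = model.generate(**inputs, max_new_tokens=32)
    return tokenizer.decode(out[0], skip_special_tokens=True)

def expost_combine(question: str, report: str, null_answer: str = "non presente") -> str:
    """Query each batch separately, then keep the first informative answer (illustrative rule)."""
    answers = [answer(question, chunk) for chunk in split_into_batches(report, question)]
    informative = [a for a in answers if a.strip().lower() != null_answer]
    return informative[0] if informative else null_answer
```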

2.
ACS Nano ; 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-38984372

ABSTRACT

Multiscale design of catalyst layers (CLs) is important for advancing hydrogen electrochemical conversion devices toward commercial deployment, but it has been greatly hampered by the complex interplay among multiscale CL components, high synthesis costs, and a vast design space. We lack rational design and optimization techniques that can accurately reflect the nanostructure-performance relationship and cost-effectively search the design space. Here, we fill this gap with a deep generative artificial intelligence (AI) framework, GLIDER, that integrates recent generative AI, data-driven surrogate techniques, and collective intelligence to efficiently search for optimal CL nanostructures driven by their electrochemical performance. GLIDER achieves realistic multiscale CL digital generation by leveraging the dimensionality-reduction ability of a vector-quantized variational autoencoder. The powerful generative capability of GLIDER allows efficient search of the optimal design parameters for the Pt-carbon-ionomer nanostructures of CLs. We also demonstrate that GLIDER is transferable to the generation of other fuel cell electrode microstructures, e.g., fibrous gas diffusion layers and solid oxide fuel cell anodes. GLIDER has potential as a digital tool for the design and optimization of a broad range of electrochemical energy devices.
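GLIDER's implementation is not public in this listing; the sketch below only illustrates the vector-quantization step at the heart of the vector-quantized variational autoencoder mentioned in the abstract. Array shapes and codebook size are arbitrary assumptions.

```python
# Minimal NumPy sketch of the vector-quantization step used by a VQ-VAE:
# each encoder output vector is replaced by its nearest codebook entry.
# Shapes and codebook size are arbitrary; this is not GLIDER's implementation.
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(128, 64))   # K=128 codes, D=64 dimensions (assumed)
z_e = rng.normal(size=(1024, 64))       # encoder outputs for one microstructure

# Squared Euclidean distance from every latent vector to every code.
d2 = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
indices = d2.argmin(axis=1)             # discrete latent representation
z_q = codebook[indices]                 # quantized latents passed to the decoder

print(indices.shape, z_q.shape)         # (1024,) (1024, 64)
```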

3.
Heliyon ; 10(12): e32364, 2024 Jun 30.
Article in English | MEDLINE | ID: mdl-38975200

ABSTRACT

Introduction: The emergence and application of generative artificial intelligence/large language models (hereafter GenAI LLMs) have the potential for significant impact on the healthcare industry. However, there is currently a lack of systematic research on GenAI LLMs in healthcare based on reliable data. This article aims to conduct an exploratory study of the application of GenAI LLMs (i.e., ChatGPT) in healthcare from the perspective of digital media (i.e., online news), including the application scenarios, potential opportunities, and challenges. Methods: This research used thematic qualitative text analysis in five steps: firstly, developing main topical categories based on relevant articles; secondly, encoding the search keywords using these categories; thirdly, conducting searches for news articles via Google; fourthly, encoding the sub-categories using the elaborate category system; and finally, conducting category-based analysis and presenting the results. Natural language processing techniques, including the TermRaider and AntConc tools, were applied in these steps to assist the qualitative text analysis. Additionally, this study built a framework for analyzing the above three topics from the perspective of five different stakeholders, including healthcare demanders and providers. Results: This study summarizes 26 applications (e.g., providing medical advice, providing diagnosis and triage recommendations, providing mental health support), 21 opportunities (e.g., making healthcare more accessible, reducing healthcare costs, improving patient care), and 17 challenges (e.g., generating inaccurate/misleading/wrong answers, raising privacy concerns, lacking transparency), and analyzes the reasons for the formation of these key items and the links between the three research topics. Conclusions: The application of GenAI LLMs in healthcare is primarily focused on transforming the way healthcare demanders access medical services (i.e., making it more intelligent, refined, and humane) and optimizing the processes through which healthcare providers offer medical services (i.e., simplifying them, ensuring timeliness, and reducing errors). As the application becomes more widespread and deepens, GenAI LLMs are expected to have a revolutionary impact on traditional healthcare service models, but they also inevitably raise ethical and security concerns. Furthermore, the application of GenAI LLMs in healthcare is still at an initial stage, which can be accelerated by starting from a specific healthcare field (e.g., mental health) or a specific mechanism (e.g., the mechanism for allocating GenAI LLMs' economic benefits in healthcare) with empirical or clinical research.
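TermRaider and AntConc are interactive corpus tools; the sketch below only illustrates, with plain Python, the kind of keyword-frequency support they provide for qualitative coding. The example articles and category keywords are invented for demonstration.

```python
# Minimal sketch of keyword-frequency support for qualitative coding, of the
# kind provided by corpus tools such as AntConc. Articles and category
# keywords are purely illustrative placeholders.
import re
from collections import Counter

articles = [
    "ChatGPT can provide medical advice and triage recommendations to patients.",
    "Privacy concerns and inaccurate answers remain key challenges for LLMs in healthcare.",
]
category_keywords = {
    "applications": {"advice", "triage", "diagnosis", "support"},
    "challenges": {"privacy", "inaccurate", "misleading", "transparency"},
}

tokens = Counter(w for text in articles for w in re.findall(r"[a-z]+", text.lower()))
for category, keywords in category_keywords.items():
    hits = {w: tokens[w] for w in keywords if tokens[w]}
    print(category, hits)
```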

4.
Diagnosis (Berl) ; 2024 Jul 11.
Article in English | MEDLINE | ID: mdl-38987215

ABSTRACT

OBJECTIVES: This short communication explores the potential, limitations, and future directions of generative artificial intelligence (GAI) in enhancing diagnostics. METHODS: This commentary reviews current applications and advancements in GAI, particularly focusing on its integration into medical diagnostics. It examines the role of GAI in supporting medical interviews, assisting in differential diagnosis, and aiding clinical reasoning through the lens of dual-process theory. The discussion is supported by recent examples and theoretical frameworks to illustrate the practical and potential uses of GAI in medicine. RESULTS: GAI shows significant promise in enhancing diagnostic processes by supporting the translation of patient descriptions into visual formats, providing differential diagnoses, and facilitating complex clinical reasoning. However, limitations such as the potential for generating medical misinformation, known as hallucinations, exist. Furthermore, the commentary highlights the integration of GAI with both intuitive and analytical decision-making processes in clinical diagnostics, demonstrating potential improvements in both the speed and accuracy of diagnoses. CONCLUSIONS: While GAI presents transformative potential for medical diagnostics, it also introduces risks that must be carefully managed. Future advancements should focus on refining GAI technologies to better align with human diagnostic reasoning, ensuring GAI enhances rather than replaces the medical professionals' expertise.

5.
Nurse Educ Pract ; 79: 104062, 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-38996582

ABSTRACT

AIM: This qualitative study aims to explore the perspectives of nursing students regarding the application and integration of generative Artificial Intelligence (AI) tools in their studies. BACKGROUND: With the increasing prevalence of generative AI tools in academic settings, there is growing interest in their use among students for learning and assessments. DESIGN: Employing a qualitative descriptive design, this study used semi-structured interviews with nursing students to capture the nuanced insights of the participants. METHODS: Semi-structured interviews were digitally recorded and then transcribed verbatim. The research team reviewed all the data independently and then convened to discuss and reach a consensus on the identified themes. RESULTS: This study was conducted within the discipline of nursing at a regional Australian university. Thirteen nursing students, from both the first and second years of the programme, were interviewed. Six distinct themes emerged from the data analysis: the educational impact of AI tools, an equitable learning environment, ethical considerations of AI use, technology integration, safe and practical utility, and generational differences. CONCLUSIONS: This initial exploration sheds light on the diverse perspectives of nursing students concerning the incorporation of generative AI tools in their education. It underscores both the potential positive contributions and the challenges associated with integrating generative AI into nursing education and practice.

6.
JMIR Med Educ ; 10: e52818, 2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39042876

ABSTRACT

BACKGROUND: The rapid evolution of ChatGPT has generated substantial interest and led to extensive discussions in both public and academic domains, particularly in the context of medical education. OBJECTIVE: This study aimed to evaluate ChatGPT's performance in a pulmonology examination through a comparative analysis with that of third-year medical students. METHODS: In this cross-sectional study, we conducted a comparative analysis with 2 distinct groups. The first group comprised 244 third-year medical students who had previously taken our institution's 2020 pulmonology examination, which was conducted in French. The second group involved ChatGPT-3.5 in 2 separate sets of conversations: without contextualization (V1) and with contextualization (V2). In both V1 and V2, ChatGPT received the same set of questions administered to the students. RESULTS: V1 demonstrated exceptional proficiency in radiology, microbiology, and thoracic surgery, surpassing the majority of medical students in these domains. However, it faced challenges in pathology, pharmacology, and clinical pneumology. In contrast, V2 consistently delivered more accurate responses across question categories, regardless of the specialization. ChatGPT exhibited suboptimal performance in multiple-choice questions compared to medical students, whereas V2 excelled in responding to structured open-ended questions. Both ChatGPT conversations, particularly V2, outperformed students on questions of low and intermediate difficulty, while students showed greater proficiency on highly challenging questions. V1 fell short of passing the examination, whereas V2 passed, outperforming 139 (62.1%) of the medical students. CONCLUSIONS: While ChatGPT has access to a comprehensive web-based data set, its performance closely mirrors that of an average medical student. Outcomes are influenced by question format, item complexity, and contextual nuances. The model faces challenges in medical contexts requiring information synthesis, advanced analytical aptitude, and clinical judgment, as well as in non-English-language assessments and when confronted with data outside mainstream internet sources.


Subject(s)
Educational Measurement, Pulmonary Medicine, Students, Medical, Humans, Cross-Sectional Studies, Pulmonary Medicine/education, Students, Medical/statistics & numerical data, Educational Measurement/methods, Education, Medical, Undergraduate/methods, Male, Aptitude, Female, Clinical Competence
7.
J Surg Res ; 301: 504-511, 2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39042979

ABSTRACT

INTRODUCTION: Large language models like Chat Generative Pre-Trained Transformer (ChatGPT) are increasingly used in academic writing. Faculty may consider use of artificial intelligence (AI)-generated responses a form of cheating. We sought to determine whether general surgery residency faculty could distinguish AI- from human-written responses to a text prompt, hypothesizing that faculty would not be able to do so reliably. METHODS: Ten essays were generated using the text prompt, "Tell us in 1-2 paragraphs why you are considering the University of Rochester for General Surgery residency" (current trainees: n = 5, ChatGPT: n = 5). Ten blinded faculty reviewers rated the essays on a ten-point Likert scale for desire to interview, relevance to the general surgery residency, and overall impression, and judged whether each essay was AI- or human-generated; scores and identification error rates were compared between the groups. RESULTS: There were no differences between groups in the percentage of total points (ChatGPT 66.0 ± 13.5%, human 70.0 ± 23.0%, P = 0.508) or in identification error rates (ChatGPT 40.0 ± 35.0%, human 20.0 ± 30.0%, P = 0.175). Except for one, all essays were identified incorrectly by at least two reviewers. Essays identified as human-generated received higher overall impression scores (area under the curve: 0.82 ± 0.04, P < 0.01). CONCLUSIONS: Whether use of AI tools for academic purposes should constitute academic dishonesty is controversial. We demonstrate that human- and AI-generated essays are similar in quality, but there is bias against presumed AI-generated essays. Because faculty cannot reliably differentiate human from AI-generated essays, this bias may be misdirected. AI tools are becoming ubiquitous and their use is not easily detected. Faculty must expect these tools to play increasing roles in medical education.
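The reported area under the curve relates impression scores to whether an essay was judged human-written. A hedged sketch of that kind of calculation with scikit-learn is shown below; all ratings and labels are invented, not the study's data.

```python
# Hedged sketch: area under the ROC curve relating reviewers' overall-impression
# scores to whether the essay was *judged* human-written, mirroring the bias
# analysis described above. All numbers are made up for illustration.
import numpy as np
from sklearn.metrics import roc_auc_score

# 1 = reviewer judged the essay human-written, 0 = judged AI-generated (illustrative)
judged_human = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 0])
# Overall impression on a ten-point Likert scale (illustrative)
impression = np.array([8, 7, 4, 9, 5, 3, 8, 4, 6, 5])

auc = roc_auc_score(judged_human, impression)
print(f"AUC = {auc:.2f}")  # values near 1 mean higher scores for presumed-human essays
```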

8.
Front Artif Intell ; 7: 1296034, 2024.
Article in English | MEDLINE | ID: mdl-39035790

ABSTRACT

Music has always been thought of as a "human" endeavor: when praising a piece of music, we emphasize the composer's creativity and the emotions the music invokes. Because music also relies heavily on patterns and repetition, in the form of recurring melodic themes and chord progressions, artificial intelligence has increasingly been able to replicate music in a human-like fashion. This research investigated the capabilities of Jukebox, an open-source, commercially available neural network, to accurately replicate two genres of music often found in rhythm games: artcore and orchestral. A Google Colab notebook provided the computational resources necessary to sample and extend a total of 16 piano arrangements across both genres. A survey containing selected samples was distributed to a local youth orchestra to gauge people's perceptions of the musicality of AI- and human-generated music. Even though respondents preferred the human-generated music, Jukebox's relatively high rating showed that it was somewhat capable of mimicking the styles of both genres. Despite the limitations of Jukebox working only with raw audio and of the relatively small sample size, the results show promise for the future of AI as a collaborative tool in music production.

10.
Sensors (Basel) ; 24(11)2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38894404

ABSTRACT

The interpretability of gait analysis studies in people with rare diseases, such as those with primary hereditary cerebellar ataxia (pwCA), is frequently limited by small sample sizes and unbalanced datasets. The purpose of this study was to assess the effectiveness of data balancing and generative artificial intelligence (AI) algorithms in generating synthetic data that reflect the actual gait abnormalities of pwCA. Gait data of 30 pwCA (age: 51.6 ± 12.2 years; 13 females, 17 males) and 100 healthy subjects (age: 57.1 ± 10.4 years; 60 females, 40 males) were collected at the lumbar level with an inertial measurement unit. Subsampling, oversampling, synthetic minority oversampling, generative adversarial networks, and conditional tabular generative adversarial networks (ctGAN) were applied to generate datasets to be input to a random forest classifier. Consistency and explainability metrics were also calculated to assess the coherence of the generated datasets with the known gait abnormalities of pwCA. ctGAN significantly improved the classification performance compared with the original dataset and traditional data augmentation methods. ctGAN is an effective method for balancing tabular datasets from populations with rare diseases, owing to its ability to improve diagnostic models with consistent explainability.


Subject(s)
Algorithms, Artificial Intelligence, Cerebellar Ataxia, Gait, Rare Diseases, Humans, Female, Male, Middle Aged, Gait/physiology, Cerebellar Ataxia/genetics, Cerebellar Ataxia/physiopathology, Cerebellar Ataxia/diagnosis, Adult, Gait Analysis/methods, Aged
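A hedged sketch of the balancing pipeline described in this record's abstract is given below. It assumes the open-source ctgan, imbalanced-learn, and scikit-learn packages; the gait features are random placeholders, not the authors' inertial-sensor data, and the pipeline is simplified for illustration.

```python
# Hedged sketch of tabular balancing with SMOTE and a conditional tabular GAN,
# followed by a random forest classifier. Features are random placeholders.
# Note: in a real evaluation, synthetic rows should be generated inside each
# training fold to avoid leakage; this sketch skips that for brevity.
import numpy as np
import pandas as pd
from ctgan import CTGAN
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_ataxia, n_healthy, n_feat = 30, 100, 8              # class imbalance as in the study
X = rng.normal(size=(n_ataxia + n_healthy, n_feat))
y = np.array([1] * n_ataxia + [0] * n_healthy)        # 1 = pwCA, 0 = healthy

# Option A: classical oversampling of the minority class.
X_sm, y_sm = SMOTE(random_state=0).fit_resample(X, y)

# Option B: fit a conditional tabular GAN on the minority class and sample synthetic rows.
minority = pd.DataFrame(X[y == 1], columns=[f"f{i}" for i in range(n_feat)])
gan = CTGAN(epochs=50)                                 # small epoch count for the sketch
gan.fit(minority)
synthetic = gan.sample(n_healthy - n_ataxia)           # top up to a balanced dataset
X_gan = np.vstack([X, synthetic.to_numpy()])
y_gan = np.concatenate([y, np.ones(len(synthetic), dtype=int)])

for name, (Xb, yb) in {"SMOTE": (X_sm, y_sm), "ctGAN": (X_gan, y_gan)}.items():
    acc = cross_val_score(RandomForestClassifier(random_state=0), Xb, yb, cv=5).mean()
    print(name, round(acc, 3))
```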
11.
BMC Nurs ; 23(1): 437, 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38926867

ABSTRACT

BACKGROUND: Despite the importance of collaboration and communication in global health, existing educational approaches often rely on traditional one-way instruction from instructor to student. Therefore, this study aimed to evaluate the effectiveness of a newly developed undergraduate curriculum on global health in enhancing nursing students' global health competency and their communication, problem-solving, and self-directed learning skills. METHODS: A 15-week course, "Global Health and Nursing", was designed for undergraduate nursing students using a collaborative project-based learning method. Participants were undergraduate nursing students enrolled in the course. This multi-method study included quantitative and qualitative components: a one-group pretest-posttest design quantitatively assessed the impact of the curriculum, and student experiences with the learning process were qualitatively explored through a focus group interview. A total of 28 students participated in the study, 5 of whom took part in the focus group interview. RESULTS: The collaborative project-based learning method significantly improved global health competency (t = -10.646, df = 22, p < 0.001), with a large effect size. It also improved communication skills (t = -2.649, df = 22, p = 0.015), problem-solving skills (t = -3.453, df = 22, p = 0.002), and self-directed learning skills (t = -2.375, df = 22, p = 0.027). Three themes were found through the focus group interview: (a) promoting global health competency; (b) fostering life skills through collaborative projects; and (c) recommendations for future classes. The focus group interview indicated that, overall, participants were satisfied with the collaborative project-based method for global health education. CONCLUSIONS: This study confirms that project-based learning significantly boosts students' competencies and skills and supports its broader adoption in nursing education. Nursing instructors should consider adopting this teaching approach for global health education at the undergraduate level. Future studies may employ a longitudinal design to assess the prolonged effects of the collaborative project-based learning approach, particularly the long-term retention of skills and the broader applicability of this model across different educational settings.
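The statistics reported above are paired (pretest-posttest) t-tests. A minimal SciPy sketch of that analysis, with an accompanying effect size, is shown below; the scores are invented, and the sample size of 23 is inferred from df = 22.

```python
# Hedged sketch of a one-group pretest-posttest analysis: paired t-test plus a
# paired-samples effect size, using SciPy. Scores are invented; df = 22 in the
# abstract implies 23 paired observations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pre = rng.normal(3.2, 0.5, size=23)          # pretest competency scores (illustrative)
post = pre + rng.normal(0.6, 0.3, size=23)   # posttest scores after the course (illustrative)

t, p = stats.ttest_rel(pre, post)            # negative t when posttest > pretest, as reported
diff = post - pre
cohens_d = diff.mean() / diff.std(ddof=1)    # paired-samples effect size

print(f"t({len(pre) - 1}) = {t:.3f}, p = {p:.4f}, d = {cohens_d:.2f}")
```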

12.
Cancer Control ; 31: 10732748241264704, 2024.
Article in English | MEDLINE | ID: mdl-38897721

ABSTRACT

Therapeutic resistance is a major challenge facing the design of effective cancer treatments. Adaptive cancer therapy is in principle the most viable approach to managing cancer's adaptive dynamics through drug combinations with dose timing and modulation. However, numerous open issues face the clinical success of adaptive therapy. Chief among these is the feasibility of real-time prediction of treatment response, which represents a bedrock requirement of adaptive therapy. Generative artificial intelligence has the potential to learn prediction models of treatment response from clinical, molecular, and radiomics data about patients and their treatments. This article explores that potential through a proposed integration model of Generative Pre-Trained Transformers (GPTs) in a closed loop with adaptive treatments to predict the trajectories of disease progression. The conceptual model and the challenges facing its realization are discussed in the broader context of artificial intelligence integration in oncology.


Subject(s)
Artificial Intelligence, Neoplasms, Humans, Neoplasms/drug therapy, Neoplasms/therapy
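The article above is conceptual and provides no implementation; the sketch below is a purely schematic illustration of a closed loop between a response-prediction model and adaptive dose modulation. Every function name, dynamic, and threshold here is hypothetical.

```python
# Purely schematic sketch of a closed loop between a predictive model and
# adaptive dosing, to illustrate the "closed loop" idea. All functions,
# dynamics, and thresholds are hypothetical placeholders, not clinical rules.
from dataclasses import dataclass

@dataclass
class PatientState:
    tumor_burden: float   # e.g., derived from imaging/radiomics (placeholder)
    dose: float           # current dose level (placeholder)

def predict_response(state: PatientState) -> float:
    """Hypothetical stand-in for a learned model predicting the next tumor burden."""
    growth = 0.05 * state.tumor_burden
    kill = 0.08 * state.dose * state.tumor_burden
    return max(state.tumor_burden + growth - kill, 0.0)

def adjust_dose(predicted_burden: float, current_dose: float) -> float:
    """Hypothetical adaptive rule: modulate dose to keep burden near a target."""
    target = 1.0
    return min(max(current_dose * (predicted_burden / target), 0.1), 2.0)

state = PatientState(tumor_burden=1.5, dose=1.0)
for cycle in range(6):  # six illustrative treatment cycles
    predicted = predict_response(state)
    state = PatientState(tumor_burden=predicted,
                         dose=adjust_dose(predicted, state.dose))
    print(f"cycle {cycle + 1}: burden={state.tumor_burden:.2f}, dose={state.dose:.2f}")
```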
13.
Front Digit Health ; 6: 1410947, 2024.
Article in English | MEDLINE | ID: mdl-38933900

ABSTRACT

Prompt engineering, the process of arranging the input or prompts given to a large language model to guide it toward producing desired outputs, is an emerging field of research that shapes how these models understand tasks, process information, and generate responses across a wide range of natural language processing (NLP) applications. Digital mental health, in turn, is becoming increasingly important for several reasons, including early detection and intervention and the need to mitigate the limited availability of highly skilled medical staff for clinical diagnosis. This short review outlines the latest advances in prompt engineering in the field of NLP for digital mental health. To our knowledge, it is the first attempt to discuss the prompt engineering types, methods, and tasks used in digital mental health applications. We discuss three types of digital mental health tasks: classification, generation, and question answering. To conclude, we discuss the challenges, limitations, ethical considerations, and future directions of prompt engineering for digital mental health. We believe this short review offers a useful point of departure for future research in prompt engineering for digital mental health.
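To make the three task types concrete, the sketch below shows illustrative prompt templates for classification, generation, and question answering in a mental-health setting. The wording is invented for demonstration and is not drawn from the reviewed studies.

```python
# Illustrative prompt templates for the three digital-mental-health task types
# named in the review (classification, generation, question answering). The
# wording is invented for demonstration, not taken from the reviewed work.
def classification_prompt(post: str) -> str:
    return (
        "Classify the following social-media post as 'depression risk' or "
        f"'no depression risk'. Answer with one label only.\n\nPost: {post}"
    )

def generation_prompt(concern: str) -> str:
    return (
        "You are a supportive, non-clinical assistant. Write a brief, empathetic "
        f"reply (2-3 sentences) to someone who says: \"{concern}\""
    )

def qa_prompt(question: str, context: str) -> str:
    return (
        "Answer the question using only the context below. If the answer is not "
        f"in the context, say so.\n\nContext: {context}\n\nQuestion: {question}"
    )

print(classification_prompt("I haven't slept properly in weeks and nothing feels worth doing."))
```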

14.
Heliyon ; 10(11): e31965, 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38841455

ABSTRACT

Generative Artificial Intelligence foundation models (for example, Generative Pre-trained Transformer (GPT) models) can generate the next token given a sequence of tokens. How can this 'generative AI' be compared with the 'real' intelligence of the human brain, when, for example, a human generates a whole memory in response to an incomplete retrieval cue and then generates further prospective thoughts? Here these two types of generative intelligence, artificial in machines and real in the human brain, are compared, and it is shown that when whole memories are generated by hippocampal recall in response to an incomplete retrieval cue, what the human brain computes, and how it computes it, are very different from what generative AI does. Key differences are the use of local associative learning rules in the hippocampal memory system and of non-local backpropagation-of-error learning in AI. Indeed, it is argued that the whole operation of the human brain is performed computationally very differently from what is implemented in generative AI. Moreover, it is emphasized that the primate (including human) hippocampal system includes computations about spatial view and about where objects and people are in scenes, whereas in rodents the emphasis is on place cells and path integration by movements between places. This comparison with generative memory and processing in the human brain has interesting implications for the further development of generative AI and for neuroscience research.
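The contrast between local associative learning and backpropagation-trained next-token prediction can be illustrated with a toy Hopfield-style network that completes a stored pattern from a partial cue using a purely local Hebbian rule. This is a didactic simplification, not a model of the hippocampus or of the article's argument.

```python
# Toy illustration of completing a whole "memory" from a partial cue with a
# Hopfield-style network trained by a local Hebbian rule, in contrast with
# next-token prediction trained by non-local backpropagation. A didactic
# simplification, not a model of the hippocampus.
import numpy as np

rng = np.random.default_rng(3)
patterns = rng.choice([-1, 1], size=(3, 100))   # three stored "memories" over 100 units

# Local, associative (Hebbian) learning: each weight depends only on the
# activity of the two units it connects.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)

# Partial retrieval cue: keep 40% of one memory, blank out the rest.
cue = patterns[0].astype(float).copy()
cue[40:] = 0.0

state = cue
for _ in range(10):                              # simple synchronous updates
    state = np.sign(W @ state)
    state[state == 0] = 1

overlap = (state == patterns[0]).mean()
print(f"fraction of the stored memory recovered from the partial cue: {overlap:.2f}")
```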

15.
Brain ; 2024 Jun 24.
Article in English | MEDLINE | ID: mdl-38912855

ABSTRACT

Neurodegenerative dementia syndromes, such as Primary Progressive Aphasias (PPA), have traditionally been diagnosed based in part on verbal and nonverbal cognitive profiles. Debate continues about whether PPA is best divided into three variants and also regarding the most distinctive linguistic features for classifying PPA variants. In this cross-sectional study, we first harnessed the capabilities of artificial intelligence (AI) and Natural Language Processing (NLP) to perform unsupervised classification of short, connected speech samples from 78 PPA patients. We then used NLP to identify the linguistic features that best dissociate the three PPA variants. Large Language Models (LLMs) discerned three distinct PPA clusters, with 88.5% agreement with independent clinical diagnoses. Patterns of cortical atrophy of the three data-driven clusters corresponded to the localizations in the clinical diagnostic criteria. In the subsequent supervised classification, seventeen distinctive features emerged, including the observation that separating verbs into high- and low-frequency types significantly improves classification accuracy. Using these linguistic features derived from the analysis of short, connected speech samples, we developed a classifier that achieved 97.9% accuracy in classifying the four groups (three PPA variants and healthy controls). The data-driven section of this study showcases the ability of LLMs to find natural partitioning in the speech of patients with PPA consistent with conventional variants. In addition, the work identifies a robust set of language features indicative of each PPA variant, emphasizing the significance of dividing verbs into high- and low-frequency categories. Beyond improving diagnostic accuracy, these findings enhance our understanding of the neurobiology of language processing.
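The two analysis stages described above, unsupervised clustering followed by supervised classification, can be sketched with scikit-learn. The feature vectors below are random placeholders standing in for LLM-derived or linguistic features; this is not the authors' pipeline.

```python
# Hedged sketch of the two analysis stages: (1) unsupervised clustering of
# speech-derived feature vectors and agreement with independent clinical
# diagnoses, (2) supervised classification of the variants. Features are
# random placeholders, not the study's data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import adjusted_rand_score
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_per_group, n_features = 26, 20
centers = rng.normal(scale=3.0, size=(3, n_features))   # three simulated variant profiles
X = np.vstack([rng.normal(loc=c, size=(n_per_group, n_features)) for c in centers])
clinical_diagnosis = np.repeat([0, 1, 2], n_per_group)

# Stage 1: unsupervised clustering, then agreement with the independent diagnoses.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("agreement with clinical labels (ARI):",
      round(adjusted_rand_score(clinical_diagnosis, clusters), 2))

# Stage 2: supervised classification of the variants from the same features.
acc = cross_val_score(RandomForestClassifier(random_state=0), X, clinical_diagnosis, cv=5).mean()
print("cross-validated accuracy:", round(acc, 3))
```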

16.
Int J Nurs Stud Adv ; 6: 100181, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38746816

ABSTRACT

Background: The release of ChatGPT for general use in 2023 by OpenAI has significantly expanded the possible applications of generative artificial intelligence in the healthcare sector, particularly in terms of information retrieval by patients, medical and nursing students, and healthcare personnel. Objective: To compare the performance of ChatGPT-3.5 and ChatGPT-4.0 with that of clinical nurses in answering questions about tracheostomy care, and to determine whether using different prompts to pre-define the scope of ChatGPT affects the accuracy of its responses. Design: Cross-sectional study. Setting: The ChatGPT data were collected from ChatGPT-3.5 and ChatGPT-4.0 using access provided by the University of Hong Kong. The data from clinical nurses working in mainland China were collected using the Qualtrics survey program. Participants: No participants were needed for collecting the ChatGPT responses. A total of 272 clinical nurses, 98.5% of them working in tertiary care hospitals in mainland China, were recruited using a snowball sampling approach. Method: We used 43 tracheostomy care-related questions in a multiple-choice format to evaluate the performance of ChatGPT-3.5, ChatGPT-4.0, and clinical nurses. ChatGPT-3.5 and ChatGPT-4.0 were each queried three times with the same questions under different prompts: no prompt, a patient-friendly prompt, and an act-as-nurse prompt. All responses were independently graded by two qualified otorhinolaryngology nurses on a 3-point accuracy scale (correct, partially correct, and incorrect). The Chi-squared test and Fisher exact test with post-hoc Bonferroni adjustment were used to assess the differences in performance between the three groups, as well as the differences in accuracy between prompts. Results: ChatGPT-4.0 showed significantly higher accuracy, with 64.3% of responses rated as 'correct', compared with 60.5% for ChatGPT-3.5 and 36.7% for clinical nurses (χ2 = 74.192, p < .001). Except for the 'care for the tracheostomy stoma and surrounding skin' domain (χ2 = 6.227, p = .156), scores from ChatGPT-3.5 and -4.0 were significantly better than the nurses' in the domains of airway humidification, cuff management, tracheostomy tube care, suction techniques, and management of complications. Overall, ChatGPT-4.0 performed consistently well, achieving over 50% accuracy in every domain. Alterations to the prompt had no impact on the performance of ChatGPT-3.5 or -4.0. Conclusion: ChatGPT may serve as a complementary medical information tool for patients and physicians to improve knowledge in tracheostomy care. Tweetable abstract: ChatGPT-4.0 can answer tracheostomy care questions better than most clinical nurses. There is no reason nurses should not be using it.
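The group comparison above relies on a chi-squared test over accuracy categories. A minimal SciPy sketch of that test is shown below; the counts are invented for illustration and are not the study's data.

```python
# Hedged sketch of a three-group comparison of response-accuracy categories
# using a chi-squared test of independence (SciPy). The counts are invented
# for illustration only.
import numpy as np
from scipy.stats import chi2_contingency

#                  correct  partially  incorrect
counts = np.array([
    [28, 9, 6],    # ChatGPT-4.0 (illustrative counts over 43 questions)
    [26, 9, 8],    # ChatGPT-3.5 (illustrative)
    [16, 12, 15],  # clinical nurses, majority answer per question (illustrative)
])

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.4f}")
```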

17.
JAMIA Open ; 7(2): ooae043, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38818116

ABSTRACT

Objectives: The generation of structured documents for clinical trials is a promising application of large language models (LLMs). We share opportunities, insights, and challenges from a competitive challenge that used LLMs for automating clinical trial documentation. Materials and Methods: As part of a challenge initiated by Pfizer (organizer), several teams (participants) each created a pilot for generating summaries of safety tables for clinical study reports (CSRs). Our evaluation framework used automated metrics and expert reviews to assess the quality of the AI-generated documents. Results: The comparative analysis revealed differences in performance across solutions, particularly in factual accuracy and lean writing. Most participants employed prompt engineering with generative pre-trained transformer (GPT) models. Discussion: We discuss areas for improvement, including better ingestion of tables, addition of context, and fine-tuning. Conclusion: The challenge results demonstrate the potential of LLMs for automating table summarization in CSRs while also revealing the importance of human involvement and of continued research to optimize this technology.
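A minimal sketch of the prompt-engineering approach most participants reportedly used is shown below: render a safety table as text and ask a GPT model for a factual summary. It assumes the openai Python SDK (version 1.x); the model name, table, and instructions are placeholders, not the challenge's materials.

```python
# Hedged sketch: prompt a GPT model to summarize a safety table rendered as
# text. Assumes the openai Python SDK (>= 1.0); the model name, table, and
# wording are placeholders, not the challenge's materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

safety_table = """\
| Adverse event | Drug (n=150) | Placebo (n=148) |
|---------------|--------------|-----------------|
| Headache      | 23 (15.3%)   | 11 (7.4%)       |
| Nausea        | 12 (8.0%)    | 10 (6.8%)       |
"""

prompt = (
    "You are drafting a clinical study report. Summarize the adverse-event table "
    "below in 2-3 factual sentences. Report only numbers that appear in the table; "
    "do not speculate.\n\n" + safety_table
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
    temperature=0,        # favor factual, lean writing
)
print(response.choices[0].message.content)
```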

18.
JMIR Ment Health ; 11: e54781, 2024 May 23.
Article in English | MEDLINE | ID: mdl-38787297

ABSTRACT

Unlabelled: This paper explores a significant shift in the field of mental health in general and psychotherapy in particular following generative artificial intelligence's new capabilities in processing and generating humanlike language. Following Freud, this lingo-technological development is conceptualized as the "fourth narcissistic blow" that science inflicts on humanity. We argue that this narcissistic blow has a potentially dramatic influence on perceptions of human society, interrelationships, and the self. We should, accordingly, expect dramatic changes in perceptions of the therapeutic act following the emergence of what we term the artificial third in the field of psychotherapy. The introduction of an artificial third marks a critical juncture, prompting us to ask the following important core questions that address two basic elements of critical thinking, namely, transparency and autonomy: (1) What is this new artificial presence in therapy relationships? (2) How does it reshape our perception of ourselves and our interpersonal dynamics? and (3) What remains of the irreplaceable human elements at the core of therapy? Given the ethical implications that arise from these questions, this paper proposes that the artificial third can be a valuable asset when applied with insight and ethical consideration, enhancing but not replacing the human touch in therapy.


Subject(s)
Artificial Intelligence, Psychotherapy, Artificial Intelligence/ethics, Humans, Psychotherapy/methods, Psychotherapy/ethics
19.
Sci Rep ; 14(1): 12304, 2024 05 29.
Article in English | MEDLINE | ID: mdl-38811714

ABSTRACT

Recent advances in artificial intelligence (AI) enable the generation of realistic facial images that can be used in police lineups. The use of AI image generation offers pragmatic advantages: it allows practitioners to generate filler images directly from the description of the culprit using text-to-image generation, avoids violating the identity rights of natural persons who are not suspects, and eliminates the constraint of being bound to a database with a limited set of photographs. However, the risk exists that using AI-generated filler images provokes more biased selection of the suspect if eyewitnesses are able to distinguish AI-generated filler images from the photograph of the suspect's face. Using a model-based analysis, we compared biased suspect selection directly between lineups with AI-generated filler images and lineups with database-derived filler photographs. The results show that the lineups with AI-generated filler images were perfectly fair and, in fact, led to less biased suspect selection than the lineups with database-derived filler photographs used in previous experiments. These results are encouraging with regard to the potential of AI image generation for constructing fair lineups and should inspire more systematic research on the feasibility of adopting AI technology in forensic settings.


Subject(s)
Artificial Intelligence, Face, Humans, Image Processing, Computer-Assisted/methods, Photography/methods, Police, Databases, Factual, Forensic Sciences/methods, Female, Crime