Results 1 - 20 of 6,960
1.
Hu Li Za Zhi ; 71(5): 7-13, 2024 Oct.
Article in Chinese | MEDLINE | ID: mdl-39350704

ABSTRACT

Artificial intelligence (AI) is driving global change, and the implementation of generative AI in higher education is inevitable. AI language models such as the chat generative pre-trained transformer (ChatGPT) hold the potential to revolutionize the delivery of nursing education in the future. Nurse educators play a crucial role in preparing nursing students for a future technology-integrated healthcare system. While the technology has limitations and potential biases, the emergence of ChatGPT presents both opportunities and challenges. It is critical for faculty to be familiar with the capabilities and limitations of this model to foster effective, ethical, and responsible utilization of AI technology while preparing students in advance for the dynamic and rapidly advancing landscape of nursing and healthcare. Therefore, this article presents a strengths, weaknesses, opportunities, and threats (SWOT) analysis of integrating ChatGPT into nursing education, offering a guide for implementation and a well-rounded assessment to help nurse educators make informed decisions.


Subject(s)
Artificial Intelligence, Nursing Education, Humans
2.
Hu Li Za Zhi ; 71(5): 21-28, 2024 Oct.
Article in Chinese | MEDLINE | ID: mdl-39350706

ABSTRACT

The current uses, potential risks, and practical recommendations for using chat generative pre-trained transformers (ChatGPT) in systematic reviews (SRs) and meta-analyses (MAs) are reviewed in this article. The findings of prior research suggest that, for tasks such as literature screening and information extraction, ChatGPT can match or exceed the performance of human experts. However, for complex tasks such as risk of bias assessment, its performance remains significantly limited, underscoring the critical role of human expertise. The use of ChatGPT as an adjunct tool in SRs and MAs requires careful planning and the implementation of strict quality control and validation mechanisms to mitigate potential errors such as those arising from artificial intelligence (AI) 'hallucinations'. This paper also provides specific recommendations for optimizing human-AI collaboration in SRs and MAs. Assessing the specific context of each task and implementing the most appropriate strategies are critical when using ChatGPT in support of research goals. Furthermore, transparency regarding the use of ChatGPT in research reports is essential to maintaining research integrity. Close attention to ethical norms, including issues of privacy, bias, and fairness, is also imperative. Finally, from a human-centered perspective, this paper emphasizes the importance of researchers cultivating skills in continuous self-iteration, prompt engineering, critical thinking, cross-disciplinary collaboration, and ethical awareness, with the goals of continuously optimizing human-AI collaboration models within reasonable and compliant norms, enhancing the complex-task performance of AI tools such as ChatGPT, and, ultimately, achieving greater efficiency through technological innovation while upholding scientific rigor.

3.
Data Brief ; 57: 110948, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39351136

ABSTRACT

The study of beach morphology holds significant importance in coastal management, offering insights into coastal and environmental processes. It involves analyzing physical characteristics and beach features such as profile shape, slope, sediment composition, and grain size, as well as changes in elevation due to both erosion and accretion over time. Furthermore, studying changes in beach morphology is essential in predicting and monitoring coastal inundation events, especially in the context of rising sea levels and subsidence in some areas. However, having access to high-frequency oblique imagery and beach elevation datasets to document and confirm coastal forcing events and understand their impact on beach morphology is a notable challenge. This paper describes a one-year dataset comprising bi-monthly topographic surveys and imagery collected daily at 30-minute increments at the beach adjacent to Horace Caldwell Pier in Port Aransas, Texas. The data collection started in February 2023 and ended in January 2024. The dataset includes 18 topographic surveys, 6879 beach images, and ocean/wave videos that can be combined with colocated National Oceanic and Atmospheric Administration metocean measurements. The one-year temporal span of the dataset allows for the observation and analysis of seasonal variations, contributing to a deeper understanding of coastal dynamics in the study area. Furthermore, a study that combines survey measurements with camera imagery is rare and provides valuable information on conditions before, after, and between surveys and periods of inundation. The imagery enables monitoring of inundation events, while the topographic surveys facilitate the analysis of their impact on beach morphology, including beach erosion and accretion. Various products, including beach profiles, contours, slope maps, triangular irregular networks, and digital elevation models, were derived from the topographic dataset, allowing in-depth analysis of beach morphology. Additionally, the dataset contains a time series of four wet/dry shoreline delineations per day and their corresponding elevation extracted by combining the imagery with the digital elevation models. Thus, this paper provides a high-frequency morphological dataset and a machine learning-ready dataset suitable for predicting coastal inundation.
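As an aside on the kind of analysis this dataset enables, the sketch below shows how erosion and accretion could be quantified by differencing two co-registered digital elevation models; the grids and values are invented for illustration and are not drawn from the dataset itself.

```python
# A minimal sketch of erosion/accretion quantification from two DEMs.
# The synthetic grids below are assumptions, not data from the paper.
import numpy as np

def elevation_change(dem_before: np.ndarray, dem_after: np.ndarray):
    """Return a change map plus net, eroded, and accreted elevation change.

    Positive values indicate accretion; negative values indicate erosion.
    Both DEMs are assumed to be co-registered grids in meters.
    """
    diff = dem_after - dem_before
    net = np.nansum(diff)                 # net elevation change over all cells
    eroded = np.nansum(diff[diff < 0])    # total lowering
    accreted = np.nansum(diff[diff > 0])  # total raising
    return diff, net, eroded, accreted

# Example with synthetic 3x3 grids (meters above datum)
before = np.array([[2.0, 1.8, 1.5], [1.9, 1.7, 1.4], [1.8, 1.6, 1.3]])
after  = np.array([[2.1, 1.7, 1.5], [1.8, 1.8, 1.3], [1.9, 1.5, 1.2]])
diff, net, ero, acc = elevation_change(before, after)
print(f"net change: {net:+.2f} m, eroded: {ero:.2f}, accreted: {acc:.2f}")
```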

4.
Cureus ; 16(8): e68307, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39350844

ABSTRACT

Introduction The study assesses the readability of AI-generated brochures for common emergency medical conditions like heart attack, anaphylaxis, and syncope. Thus, the study aims to compare the AI-generated responses for patient information guides of common emergency medical conditions using ChatGPT and Google Gemini. Methodology Brochures for each condition were created by both AI tools. Readability was assessed using the Flesch-Kincaid Calculator, evaluating word count, sentence count, and ease of understanding. Reliability was measured using the Modified DISCERN Score. The similarity between AI outputs was determined using Quillbot. Statistical analysis was performed with R (v4.3.2). Results ChatGPT and Gemini produced brochures with no statistically significant differences in word count (p = 0.2119), sentence count (p = 0.1276), readability (p = 0.3796), or reliability (p = 0.7407). However, ChatGPT provided more detailed content, with 32.4% more words (582.80 vs. 440.20) and 51.6% more sentences (67.00 vs. 44.20). In addition, Gemini's brochures were slightly easier to read, with a higher ease score (50.62 vs. 41.88). Reliability varied by topic, with ChatGPT scoring higher for Heart Attack (4 vs. 3) and Choking (3 vs. 2), while Google Gemini scored higher for Anaphylaxis (4 vs. 3) and Drowning (4 vs. 3), highlighting the need for topic-specific evaluation. Conclusions AI-generated brochures from ChatGPT and Gemini are comparable for patient information on emergency medical conditions, with no statistically significant differences in readability or reliability between the two tools' responses.
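For readers unfamiliar with the readability metric used above, here is a minimal sketch of the Flesch Reading Ease formula (206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)); the vowel-group syllable counter is a rough heuristic, not the exact calculator the authors used.

```python
# A minimal sketch of the Flesch Reading Ease score (higher = easier).
import re

def count_syllables(word: str) -> int:
    # Approximate syllables as groups of consecutive vowels, min 1 per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

sample = "Call for help right away. Give the person aspirin if they are awake."
print(f"Flesch Reading Ease: {flesch_reading_ease(sample):.1f}")
```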

5.
Cureus ; 16(8): e68313, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39350876

ABSTRACT

Recent advances in generative artificial intelligence (AI) have enabled remarkable capabilities in generating images, audio, and videos from textual descriptions. Tools like Midjourney and DALL-E 3 can produce striking visualizations from simple prompts, while services like Kaiber.ai and RunwayML Gen-2 can generate short video clips. These technologies offer intriguing possibilities for clinical and educational applications in otolaryngology. Visualizing symptoms like vertigo or tinnitus could bolster patient-provider understanding, especially for those with communication challenges. One can envision patients selecting images to complement chief complaints, with AI-generated differential diagnoses. However, inaccuracies and biases necessitate caution. Images must serve to enrich, not replace, clinical judgment. While not a substitute for healthcare professionals, text-to-image and text-to-video generation could become valuable complementary diagnostic tools. Harnessed judiciously, generative AI offers new ways to enhance clinical dialogues. However, education on proper, equitable usage is paramount as these rapidly evolving technologies make their way into medicine.

6.
Cureus ; 16(8): e68298, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39350878

ABSTRACT

GPT-4 Vision (GPT-4V) represents a significant advancement in multimodal artificial intelligence, enabling text generation from images without specialized training. This marks the transformation of ChatGPT as a large language model (LLM) into GPT-4's promised large multimodal model (LMM). As these AI models continue to advance, they may enhance radiology workflow and aid with decision support. This technical note explores potential GPT-4V applications in radiology and evaluates performance for sample tasks. GPT-4V capabilities were tested using images from the web, personal and institutional teaching files, and hand-drawn sketches. Prompts evaluated scientific figure analysis, radiologic image reporting, image comparison, handwriting interpretation, sketch-to-code, and artistic expression. In this limited demonstration of GPT-4V's capabilities, it showed promise in classifying images, counting entities, comparing images, and deciphering handwriting and sketches. However, it exhibited limitations in detecting some fractures, discerning changes in lesion size, accurately interpreting complex diagrams, and consistently characterizing radiologic findings. Artistic expression responses were coherent. While GPT-4V may eventually assist with tasks related to radiology, current reliability gaps highlight the need for continued training and improvement before consideration for any medical use by the general public and, ultimately, clinical integration. Future iterations could enable a virtual assistant to discuss findings, improve reports, extract data from images, and provide decision support based on guidelines, white papers, and appropriateness criteria. Human expertise remains essential for safe practice, and partnerships between physicians, researchers, and technology leaders are necessary to safeguard against risks like bias and privacy concerns.

7.
Digit Health ; 10: 20552076241284936, 2024.
Article in English | MEDLINE | ID: mdl-39351313

ABSTRACT

Objective: The enabling and derailing factors for using artificial intelligence (AI)-based applications to improve patient care in the United Arab Emirates (UAE) from the physicians' perspective are investigated. Factors to accelerate the adoption of AI-based applications in the UAE are identified to aid implementation. Methods: A qualitative, inductive research methodology was employed, utilizing semi-structured interviews with 12 physicians practicing in the UAE. The collected data were analyzed using NVivo software, and grounded theory was used for thematic analysis. Results: This study identified factors specific to the deployment of AI to transform patient care in the UAE. First, physicians must control the applications and be fully trained and engaged in the testing phase. Second, healthcare systems need to be connected, and the AI outcomes need to be easily interpretable by physicians. Third, the reimbursement for AI-based applications should be settled by insurance or the government. Fourth, patients should be aware of and accept the technology before physicians use it to avoid negative consequences for the doctor-patient relationship. Conclusions: This research was conducted with practicing physicians in the UAE to determine their understanding of enabling and derailing factors for improving patient care through AI-based applications. The importance of involving physicians as the accountable agents for AI tools is highlighted. Public awareness regarding AI in healthcare should be improved to drive public acceptance.

8.
Front Artif Intell ; 7: 1393903, 2024.
Article in English | MEDLINE | ID: mdl-39351510

ABSTRACT

Introduction: Recent advances in generative Artificial Intelligence (AI) and Natural Language Processing (NLP) have led to the development of Large Language Models (LLMs) and AI-powered chatbots like ChatGPT, which have numerous practical applications. Notably, these models assist programmers with coding queries, debugging, solution suggestions, and providing guidance on software development tasks. Despite known issues with the accuracy of ChatGPT's responses, its comprehensive and articulate language continues to attract frequent use. This indicates potential for ChatGPT to support educators and serve as a virtual tutor for students. Methods: To explore this potential, we conducted a comprehensive analysis comparing the emotional content in responses from ChatGPT and human answers to 2000 questions sourced from Stack Overflow (SO). The emotional aspects of the answers were examined to understand how the emotional tone of AI responses compares to that of human responses. Results: Our analysis revealed that ChatGPT's answers are generally more positive compared to human responses. In contrast, human answers often exhibit emotions such as anger and disgust. Significant differences were observed in emotional expressions between ChatGPT and human responses, particularly in the emotions of anger, disgust, and joy. Human responses displayed a broader emotional spectrum compared to ChatGPT, suggesting greater emotional variability among humans. Discussion: The findings highlight a distinct emotional divergence between ChatGPT and human responses, with ChatGPT exhibiting a more uniformly positive tone and humans displaying a wider range of emotions. This variance underscores the need for further research into the role of emotional content in AI and human interactions, particularly in educational contexts where emotional nuances can impact learning and communication.
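To illustrate the general approach, here is a toy lexicon-based emotion profile of the sort such analyses build on; the mini-lexicon and sample answers are invented, and the study's actual emotion-detection method is not specified in this abstract.

```python
# A toy sketch of lexicon-based emotion profiling for comparing answer tones.
# The mini-lexicon and example answers are illustrative assumptions only.
from collections import Counter

EMOTION_LEXICON = {
    "joy": {"great", "happy", "glad", "excellent", "love"},
    "anger": {"annoying", "stupid", "hate", "terrible", "awful"},
    "disgust": {"gross", "ugly", "nasty", "horrible"},
}

def emotion_profile(text: str) -> Counter:
    tokens = [tok.strip(".,!?") for tok in text.lower().split()]
    counts = Counter()
    for emotion, words in EMOTION_LEXICON.items():
        counts[emotion] = sum(tok in words for tok in tokens)
    return counts

ai_answer = "Great question! You can solve this with a simple loop."
human_answer = "This duplicate question is annoying. Searching first would help."
print("AI:   ", emotion_profile(ai_answer))
print("Human:", emotion_profile(human_answer))
```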

9.
JMIR Form Res ; 8: e51383, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39353189

ABSTRACT

BACKGROUND: Generative artificial intelligence (AI) and large language models, such as OpenAI's ChatGPT, have shown promising potential in supporting medical education and clinical decision-making, given their vast knowledge base and natural language processing capabilities. As a general purpose AI system, ChatGPT can complete a wide range of tasks, including differential diagnosis without additional training. However, the specific application of ChatGPT in learning and applying a series of specialized, context-specific tasks mimicking the workflow of a human assessor, such as administering a standardized assessment questionnaire, followed by inputting assessment results in a standardized form, and interpreting assessment results strictly following credible, published scoring criteria, has not been thoroughly studied. OBJECTIVE: This exploratory study aims to evaluate and optimize ChatGPT's capabilities in administering and interpreting the Sour Seven Questionnaire, an informant-based delirium assessment tool. Specifically, the objectives were to train ChatGPT-3.5 and ChatGPT-4 to understand and correctly apply the Sour Seven Questionnaire to clinical vignettes using prompt engineering, assess the performance of these AI models in identifying and scoring delirium symptoms against scores from human experts, and refine and enhance the models' interpretation and reporting accuracy through iterative prompt optimization. METHODS: We used prompt engineering to train ChatGPT-3.5 and ChatGPT-4 models on the Sour Seven Questionnaire, a tool for assessing delirium through caregiver input. Prompt engineering is a methodology used to enhance the AI's processing of inputs by meticulously structuring the prompts to improve accuracy and consistency in outputs. In this study, prompt engineering involved creating specific, structured commands that guided the AI models in understanding and applying the assessment tool's criteria accurately to clinical vignettes. This approach also included designing prompts to explicitly instruct the AI on how to format its responses, ensuring they were consistent with clinical documentation standards. RESULTS: Both ChatGPT models demonstrated promising proficiency in applying the Sour Seven Questionnaire to the vignettes, despite initial inconsistencies and errors. Performance notably improved through iterative prompt engineering, enhancing the models' capacity to detect delirium symptoms and assign scores. Prompt optimizations included adjusting the scoring methodology to accept only definitive "Yes" or "No" responses, revising the evaluation prompt to mandate responses in a tabular format, and guiding the models to adhere to the 2 recommended actions specified in the Sour Seven Questionnaire. CONCLUSIONS: Our findings provide preliminary evidence supporting the potential utility of AI models such as ChatGPT in administering standardized clinical assessment tools. The results highlight the significance of context-specific training and prompt engineering in harnessing the full potential of these AI models for health care applications. Despite the encouraging results, broader generalizability and further validation in real-world settings warrant additional research.
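As a concrete illustration of the prompt-engineering setup described above, the sketch below assembles a structured chat prompt for scoring a vignette; the wording, the paraphrased questionnaire instructions, and the vignette are hypothetical, not the study's actual prompts.

```python
# A minimal sketch of a structured prompt for questionnaire administration.
# SYSTEM_PROMPT wording and the vignette are hypothetical illustrations.
SYSTEM_PROMPT = """You are assisting with a delirium screen using the
Sour Seven Questionnaire. For each symptom, answer only "Yes" or "No"
based strictly on the vignette, then report results in a table with
columns: Item | Present (Yes/No) | Points. Do not infer beyond the text."""

def build_messages(vignette: str) -> list[dict]:
    """Assemble a chat-completion message list for one clinical vignette."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user",
         "content": f"Vignette:\n{vignette}\n\nScore the questionnaire."},
    ]

vignette = ("Overnight the patient became drowsy, was unable to follow "
            "simple commands, and did not recognize her daughter.")
for msg in build_messages(vignette):
    print(f"[{msg['role']}] {msg['content'][:70]}...")
```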


Subject(s)
Delirium, Humans, Delirium/diagnosis, Surveys and Questionnaires, Artificial Intelligence
10.
Front Artif Intell ; 7: 1372161, 2024.
Article in English | MEDLINE | ID: mdl-39355146

ABSTRACT

Artificial Intelligence (AI) has revolutionized the biomedical sector in advanced diagnosis, treatment, and personalized medicine. While these AI-driven innovations promise vast benefits for patients and service providers, they also raise complex intellectual property (IP) challenges due to the inherent nature of AI technology. In this review, we discuss the multifaceted impact of AI on IP within the biomedical sector, exploring implications in areas like drug research and discovery, personalized medicine, and medical diagnostics. We dissect critical issues surrounding AI inventorship, patent and copyright protection for AI-generated works, data ownership, and licensing. To provide context, we analyze the current IP legislative landscape in the United States, EU, China, and India, highlighting convergences, divergences, and precedent-setting cases relevant to the biomedical sector. Recognizing the need for harmonization, we review current developments and discuss a way forward. We advocate for a collaborative approach, convening policymakers, clinicians, researchers, industry players, legal professionals, and patient advocates to navigate this dynamic landscape. Such collaboration can create a stable IP regime and unlock the full potential of AI for enhanced healthcare delivery and improved patient outcomes.

11.
Comput Biol Med ; 182: 109183, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39357134

ABSTRACT

Explainable artificial intelligence (XAI) aims to offer machine learning (ML) methods that enable people to comprehend, properly trust, and create more explainable models. In medical imaging, XAI has been adopted to interpret deep learning black-box models to demonstrate the trustworthiness of machine decisions and predictions. In this work, we propose a deep learning and explainable AI-based framework for segmenting and classifying brain tumors. The proposed framework consists of two parts. The first part, an encoder-decoder-based DeepLabv3+ architecture, is implemented with Bayesian Optimization (BO)-based hyperparameter initialization. Multi-scale features are extracted through the Atrous Spatial Pyramid Pooling (ASPP) technique and passed to the output layer for tumor segmentation. In the second part of the proposed framework, two customized models are proposed, named Inverted Residual Bottleneck 96 layers (IRB-96) and Inverted Residual Bottleneck Self-Attention (IRB-Self). Both models are trained on the selected brain tumor datasets, and features are extracted from the global average pooling and self-attention layers. Features are fused using a serial approach, and classification is performed. BO-based hyperparameter optimization of the neural network classifiers is performed to optimize the classification results. An XAI method named LIME is implemented to check the interpretability of the proposed models. The experimental process of the proposed framework was performed on the Figshare dataset, yielding an average segmentation accuracy of 92.68% and a classification accuracy of 95.42%. Compared with state-of-the-art techniques, the proposed framework shows improved accuracy.
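For context, a minimal sketch of how LIME is typically applied to an image classifier follows; the dummy classifier stands in for the trained IRB-96/IRB-Self models, which are not publicly specified here, and the example assumes the lime package is installed.

```python
# A minimal sketch of LIME applied to an image classifier (pip install lime).
# The dummy classifier and random image are stand-ins, not the paper's models.
import numpy as np
from lime import lime_image

def dummy_classifier(images: np.ndarray) -> np.ndarray:
    """Stand-in model: returns [P(no tumor), P(tumor)] per image."""
    brightness = images.mean(axis=(1, 2, 3)) / 255.0
    return np.stack([1.0 - brightness, brightness], axis=1)

image = np.random.randint(0, 255, (128, 128, 3), dtype=np.uint8)
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, dummy_classifier, top_labels=2, num_samples=200
)
# Superpixels most responsible for the top predicted class:
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5
)
print("highlighted superpixels:", int(mask.sum()))
```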

12.
Acta Psychol (Amst) ; 250: 104501, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39357416

ABSTRACT

The integration of artificial intelligence (AI) technology in e-commerce has recently attracted scholarly attention; however, studies on AI in e-commerce remain relatively few. The current study aims to evaluate how AI chatbots persuade users to consider chatbot recommendations in a web-based buying situation. Employing elaboration likelihood theory, the current study presents an analytical framework for identifying the factors and internal mechanisms behind consumers' readiness to adopt AI chatbot recommendations. The authors evaluated the model using questionnaire responses from 411 Chinese AI chatbot consumers. The findings indicate that chatbot recommendation reliability and accuracy are positively related to AI technology trust and negatively related to perceived self-threat. In addition, AI technology trust is positively related to the intention to adopt chatbot decisions, whereas perceived self-threat is negatively related to that intention. Perceived dialogue strengthens the positive relationship between AI technology trust and the intention to adopt chatbot decisions and weakens the negative relationship between perceived self-threat and that intention.

13.
Br J Clin Pharmacol ; 2024 Oct 02.
Article in English | MEDLINE | ID: mdl-39359001

ABSTRACT

Drug-drug interactions (DDIs) present a significant health burden, compounded by clinician time constraints and poor patient health literacy. We assessed the ability of ChatGPT (a generative artificial intelligence-based large language model) to predict DDIs in a real-world setting. Demographics, diagnoses, and prescribed medicines for 120 hospitalized patients were input through three standardized prompts to ChatGPT version 3.5 and compared against pharmacist DDI evaluation to estimate diagnostic accuracy. The area under the receiver operating characteristic curve and inter-rater reliability (Cohen's and Fleiss' kappa coefficients) were calculated. ChatGPT's responses differed based on prompt wording style, with higher sensitivity for prompts mentioning 'drug interaction'. Confusion matrices displayed low true positive and high true negative rates, and there was minimal agreement between ChatGPT and pharmacists (Cohen's kappa values 0.077-0.143). Low sensitivity values suggest a lack of success in identifying DDIs by ChatGPT, and further development is required before it can reliably assess potential DDIs in real-world scenarios.
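As a brief illustration of the agreement statistic reported above, the sketch below computes Cohen's kappa with scikit-learn on made-up DDI labels (1 = interaction flagged, 0 = none).

```python
# A minimal sketch of inter-rater agreement via Cohen's kappa.
# The label vectors are invented for illustration, not study data.
from sklearn.metrics import cohen_kappa_score

pharmacist = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
chatgpt    = [0, 0, 1, 0, 0, 1, 1, 0, 0, 0]

kappa = cohen_kappa_score(pharmacist, chatgpt)
print(f"Cohen's kappa: {kappa:.3f}")  # values near 0 indicate minimal agreement
```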

14.
Small Methods ; : e2401108, 2024 Oct 02.
Article in English | MEDLINE | ID: mdl-39359026

ABSTRACT

Transmission electron microscopy (TEM) plays a crucial role in heterogeneous catalysis for assessing the size distribution of supported metal nanoparticles. Typically, nanoparticle size is quantified by measuring the diameter under the assumption of spherical geometry, a simplification that limits the precision needed for advancing synthesis-structure-performance relationships. Currently, there is a lack of techniques that can reliably extract more meaningful information from atomically resolved TEM images, like nuclearity or geometry. Here, cycle-consistent generative adversarial networks (CycleGANs) are explored to bridge experimental and simulated images, directly linking experimental observations with information from their underlying atomic structure. Using the versatile Pt/CeO2 (Pt particles centered ≈2 nm) catalyst synthesized by impregnation, large datasets of experimental scanning transmission electron micrographs and physical image simulations are created to train a CycleGAN. A subsequent size-estimation network is developed to determine the nuclearity of imaged nanoparticles, providing plausible estimates for ≈70% of experimentally observed particles. This automatic approach enables precise size determination of supported nanoparticle-based catalysts overcoming crystal orientation limitations of conventional techniques, promising high accuracy with sufficient training data. Tools like this are envisioned to be of great use in designing and characterizing catalytic materials with improved atomic precision.
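For orientation, the following is a minimal PyTorch sketch of the cycle-consistency objective at the heart of a CycleGAN, framed as mapping experimental micrographs (domain X) to simulated ones (domain Y); the single-convolution generators are stubs, not the architecture used in the paper.

```python
# A minimal sketch of the CycleGAN cycle-consistency loss.
# G and F are stub generators; real CycleGANs use deep encoder-decoders
# plus adversarial losses, omitted here for brevity.
import torch
import torch.nn as nn

G = nn.Conv2d(1, 1, 3, padding=1)  # stub generator X -> Y
F = nn.Conv2d(1, 1, 3, padding=1)  # stub generator Y -> X
l1 = nn.L1Loss()

real_x = torch.randn(4, 1, 64, 64)  # batch of "experimental" micrographs
real_y = torch.randn(4, 1, 64, 64)  # batch of "simulated" micrographs

# Translating to the other domain and back should recover the input:
cycle_loss = l1(F(G(real_x)), real_x) + l1(G(F(real_y)), real_y)
print(f"cycle-consistency loss: {cycle_loss.item():.4f}")
```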

15.
Global Spine J ; : 21925682241290752, 2024 Oct 02.
Article in English | MEDLINE | ID: mdl-39359113

ABSTRACT

STUDY DESIGN: Narrative review. OBJECTIVES: Artificial intelligence (AI) is being increasingly applied to the domain of spine surgery. We present a review of AI in spine surgery, including its use across all stages of the perioperative process and applications for research. We also provide commentary regarding future ethical considerations of AI use and how it may affect surgeon-industry relations. METHODS: We conducted a comprehensive literature review of peer-reviewed articles that examined applications of AI during the pre-, intra-, or postoperative spine surgery process. We also discussed the relationship among AI, spine industry partners, and surgeons. RESULTS: Preoperatively, AI has been mainly applied to image analysis, patient diagnosis and stratification, and decision-making. Intraoperatively, AI has been used to aid image guidance and navigation. Postoperatively, AI has been used for outcomes prediction and analysis. AI can enable curation and analysis of huge datasets that can enhance research efforts. Large amounts of data are being accrued by industry sources for use by their AI platforms, though the inner workings of these datasets or algorithms are not well known. CONCLUSIONS: AI has found numerous uses in the pre-, intra-, and postoperative spine surgery process, and the applications of AI continue to grow. The clinical applications and benefits of AI will continue to be more fully realized, but so will certain ethical considerations. Making industry-sponsored databases open source, or at least somehow available to the public, will help alleviate potential biases and obscurities between surgeons and industry and will benefit patient care.

16.
Cureus ; 16(10): e70640, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39359332

ABSTRACT

This editorial explores the recent advancements in generative artificial intelligence with the newly released OpenAI o1-Preview, comparing its capabilities to the traditional ChatGPT (GPT-4) model, particularly in the context of healthcare. While ChatGPT has found many applications in general medical advice and patient interactions, OpenAI o1-Preview introduces new features with advanced reasoning skills using a chain-of-thought process that could enable users to tackle more complex medical queries such as genetic disease discovery, multi-system or complex disease care, and medical research support. The article explores some of the new model's potential and other aspects that may affect its usage, like slower response times due to its extensive reasoning approach, yet highlights its potential for reducing hallucinations and offering more accurate outputs for complex medical problems. Ethical challenges, data diversity, access equity, and transparency are also discussed, identifying key areas for future research, including optimizing the use of both models in tandem for healthcare applications. The editorial concludes by advocating for collaborative exploration of all large language models (LLMs), including the novel OpenAI o1-Preview, to fully utilize their transformative potential in medicine and healthcare delivery. This model, with its advanced reasoning capabilities, presents an opportunity to empower healthcare professionals, policymakers, and computer scientists to work together in transforming patient care, accelerating medical research, and enhancing healthcare outcomes. By optimizing the use of several LLM models in tandem, healthcare systems may enhance efficiency and precision, as well as mitigate previous LLM challenges, such as ethical concerns, access disparities, and technical limitations, ushering in a new era of artificial intelligence (AI)-driven healthcare.

17.
PNAS Nexus ; 3(10): pgae418, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39359393

ABSTRACT

ChatGPT-4 and 600 human raters evaluated 226 public figures' personalities using the Ten-Item Personality Inventory. The correlation between ChatGPT-4 and aggregate human ratings ranged from r = 0.76 to 0.87, outperforming the models specifically trained to make such predictions. Notably, the model was not provided with any training data or feedback on its performance. We discuss the potential explanations and practical implications of ChatGPT-4's ability to mimic human responses accurately.
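As a pointer to the analysis behind these figures, the sketch below computes a Pearson correlation between model-derived and aggregate human trait ratings; the numbers are invented for illustration.

```python
# A minimal sketch of correlating model ratings with aggregate human ratings
# on one Big Five trait; all values below are illustrative assumptions.
import numpy as np

gpt_extraversion   = np.array([5.5, 2.0, 6.0, 4.5, 3.0, 6.5])  # per public figure
human_extraversion = np.array([5.0, 2.5, 6.5, 4.0, 3.5, 6.0])  # aggregate ratings

r = np.corrcoef(gpt_extraversion, human_extraversion)[0, 1]
print(f"Pearson r = {r:.2f}")
```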

18.
Front Artif Intell ; 7: 1408817, 2024.
Article in English | MEDLINE | ID: mdl-39359648

ABSTRACT

Large language models have been shown to excel in many different tasks across disciplines and research sites. They provide novel opportunities to enhance educational research and instruction in areas such as assessment. However, these methods have also been shown to have fundamental limitations, including hallucinated knowledge, limited explainability of model decisions, and high resource expenditure. As such, more conventional machine learning algorithms may be more convenient for specific research problems because they allow researchers more control over their research. Yet the circumstances in which either conventional machine learning or large language models are the preferable choice are not well understood. This study asks to what extent conventional machine learning algorithms or a recently advanced large language model perform better in assessing students' concept use in a physics problem-solving task. We found that conventional machine learning algorithms in combination outperformed the large language model. Model decisions were then analyzed via closer examination of the models' classifications. We conclude that in specific contexts, conventional machine learning can supplement large language models, especially when labeled data is available.
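To make the "conventional machine learning in combination" idea concrete, here is a minimal soft-voting ensemble over bag-of-words features for scoring short student responses; the tiny labeled corpus is a placeholder, and the study's actual algorithms and features are not specified in this abstract.

```python
# A minimal sketch of combining conventional ML classifiers for response
# scoring; the four labeled examples are placeholders for a real corpus.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import VotingClassifier

texts = [
    "the net force is zero so velocity stays constant",
    "applying newton's second law gives the acceleration",
    "the ball just moves because it was thrown",
    "it keeps going since nothing makes it stop",
]
labels = [1, 1, 0, 0]  # 1 = correct concept use, 0 = not

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression()), ("nb", MultinomialNB())],
    voting="soft",  # average predicted probabilities across classifiers
)
model = make_pipeline(TfidfVectorizer(), ensemble)
model.fit(texts, labels)
print(model.predict(["newton's laws explain the constant velocity"]))
```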

19.
Front Robot AI ; 11: 1431826, 2024.
Article in English | MEDLINE | ID: mdl-39360225

ABSTRACT

The rapidly increasing capabilities of autonomous mobile robots promise to make them ubiquitous in the coming decade. These robots will continue to enhance efficiency and safety in novel applications such as disaster management, environmental monitoring, bridge inspection, and agricultural inspection. To operate autonomously without constant human intervention, even in remote or hazardous areas, robots must sense, process, and interpret environmental data using only onboard sensing and computation. This capability is made possible by advancements in perception algorithms, allowing these robots to rely primarily on their perception capabilities for navigation tasks. However, tiny robot autonomy is hindered mainly by limits on sensors, memory, and computing imposed by size, area, weight, and power constraints; the bottleneck lies in achieving real-time perception under these resource constraints. To enable autonomy in robots less than 100 mm in body length, we draw inspiration from tiny organisms such as insects and hummingbirds, known for their sophisticated perception, navigation, and survival abilities despite their minimal sensor and neural systems. This work aims to provide insights into designing a compact and efficient minimal perception framework for tiny autonomous robots, from higher cognitive to lower sensor levels.

20.
Cognition ; 254: 105958, 2024 Oct 02.
Article in English | MEDLINE | ID: mdl-39362054

ABSTRACT

How do ordinary people evaluate robots that make morally significant decisions? Previous work has found both equal and different evaluations, and different ones in either direction. In 13 studies (N = 7670), we asked people to evaluate humans and robots that make decisions in norm conflicts (variants of the classic trolley dilemma). We examined several conditions that may influence whether moral evaluations of human and robot agents are the same or different: the type of moral judgment (norms vs. blame); the structure of the dilemma (side effect vs. means-end); salience of particular information (victim, outcome); culture (Japan vs. US); and encouraged empathy. Norms for humans and robots are broadly similar, but blame judgments show a robust asymmetry under one condition: humans are blamed less than robots specifically for inaction decisions: here, refraining from sacrificing one person for the good of many. This asymmetry may emerge because people appreciate that the human faces an impossible decision and deserves mitigated blame for inaction; when evaluating a robot, such appreciation appears to be lacking. However, our evidence for this explanation is mixed. We discuss alternative explanations and offer methodological guidance for future work into people's moral judgment of robots and humans.
