Results 1 - 16 of 16
1.
Sci Eng Ethics ; 30(3): 26, 2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38856788

ABSTRACT

The rapid development of computer vision technologies and applications has brought forth a range of social and ethical challenges. Due to the unique characteristics of visual technology in terms of data modalities and application scenarios, computer vision poses specific ethical issues. However, the majority of existing literature either addresses artificial intelligence as a whole or pays particular attention to natural language processing, leaving a gap in specialized research on ethical issues and systematic solutions in the field of computer vision. This paper utilizes bibliometrics and text-mining techniques to quantitatively analyze papers from prominent academic conferences in computer vision over the past decade. It first reveals the developing trends and specific distribution of attention regarding trustworthy aspects in the computer vision field, as well as the inherent connections between ethical dimensions and different stages of visual model development. A life-cycle framework regarding trustworthy computer vision is then presented by making the relevant trustworthy issues, the operation pipeline of AI models, and viable technical solutions interconnected, providing researchers and policymakers with references and guidance for achieving trustworthy CV. Finally, it discusses particular motivations for conducting trustworthy practices and underscores the consistency and ambivalence among various trustworthy principles and technical attributes.


Subjects
Artificial Intelligence, Humans, Artificial Intelligence/ethics, Artificial Intelligence/trends, Trust, Natural Language Processing, Data Mining/ethics, Bibliometrics
2.
J Headache Pain ; 25(1): 151, 2024 Sep 13.
Article in English | MEDLINE | ID: mdl-39272003

ABSTRACT

Artificial intelligence (AI) is revolutionizing the field of biomedical research and treatment, leveraging machine learning (ML) and advanced algorithms to analyze extensive health and medical data more efficiently. In headache disorders, particularly migraine, AI has shown promising potential in various applications, such as understanding disease mechanisms and predicting patient responses to therapies. Implementing next-generation AI in headache research and treatment could transform the field by providing precision treatments and augmenting clinical practice, thereby improving patient and public health outcomes and reducing clinician workload. AI-powered tools, such as large language models, could facilitate automated clinical notes and faster identification of effective drug combinations in headache patients, reducing cognitive burdens and physician burnout. AI diagnostic models also could enhance diagnostic accuracy for non-headache specialists, making headache management more accessible in general medical practice. Furthermore, virtual health assistants, digital applications, and wearable devices are pivotal in migraine management, enabling symptom tracking, trigger identification, and preventive measures. AI tools also could offer stress management and pain relief solutions to headache patients through digital applications. However, considerations such as technology literacy, compatibility, privacy, and regulatory standards must be adequately addressed. Overall, AI-driven advancements in headache management hold significant potential for enhancing patient care, clinical practice and research, which should encourage the headache community to adopt AI innovations.


Subjects
Artificial Intelligence, Humans, Artificial Intelligence/trends, Headache/diagnosis, Headache/therapy, Biomedical Research/methods, Biomedical Research/standards
3.
Entropy (Basel) ; 25(10)2023 Oct 09.
Article in English | MEDLINE | ID: mdl-37895550

ABSTRACT

Recent advancements in artificial intelligence (AI) technology have raised concerns about ethical, moral, and legal safeguards. There is a pressing need to improve metrics for assessing the security and privacy of AI systems and to manage AI technology in a more ethical manner. To address these challenges, an AI Trust Framework and Maturity Model is proposed to enhance trust in the design and management of AI systems. Trust in AI involves an agreed-upon understanding between humans and machines about system performance. The framework utilizes an "entropy lens" to root the study in information theory and to enhance transparency and trust in "black box" AI systems, which lack ethical guardrails. High entropy in AI systems can decrease human trust, particularly in uncertain and competitive environments. The research draws inspiration from entropy studies to improve trust and performance in autonomous human-machine teams and systems, including interconnected elements in hierarchical systems. Applying this lens to improve trust in AI also highlights new opportunities to optimize team performance. Two use cases are described to validate the framework's ability to measure trust in the design and management of AI systems.
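As a rough illustration of the "entropy lens" described in this abstract (our sketch, not code from the paper): the Shannon entropy of a model's output distribution is one simple, standard signal of uncertainty, where high entropy corresponds to the low-confidence regime the framework associates with reduced human trust.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H(p) = -sum(p_i * log2(p_i)) of a discrete distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A confident prediction carries low entropy; a maximally uncertain one is
# at the log2(n) ceiling. Over four classes the ceiling is log2(4) = 2 bits.
confident = [0.97, 0.01, 0.01, 0.01]
uncertain = [0.25, 0.25, 0.25, 0.25]

print(shannon_entropy(confident))   # low: well under 1 bit
print(shannon_entropy(uncertain))   # maximal: 2.0 bits
```

A system could flag predictions whose entropy exceeds a chosen threshold for human review, which is one concrete way an entropy measure can be wired into a trust-management pipeline.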

4.
Sensors (Basel) ; 21(20)2021 Oct 13.
Article in English | MEDLINE | ID: mdl-34696025

ABSTRACT

Retail shoplifting is one of the most prevalent forms of theft and accounted for over one billion GBP in losses for UK retailers in 2018. An automated approach to detecting behaviours associated with shoplifting in surveillance footage could help reduce these losses. Until recently, most state-of-the-art vision-based approaches to this problem have relied heavily on black-box deep learning models. While these models have been shown to achieve very high accuracy, the lack of understanding of how their decisions are made raises concerns about potential bias in the models. This limits the ability of retailers to implement these solutions, as several high-profile legal cases have recently ruled that evidence taken from these black-box methods is inadmissible in court. There is an urgent need for models that achieve high accuracy while providing the necessary transparency. One way to alleviate this problem is to use social signal processing to add a layer of understanding in the development of transparent models for this task. To this end, we present a social signal processing model for shoplifting prediction, trained and validated on a novel dataset of manually annotated shoplifting videos. The resulting model provides a high degree of understanding and achieves accuracy comparable with current state-of-the-art black-box methods.


Subjects
Theft
5.
Front Artif Intell ; 7: 1377011, 2024.
Article in English | MEDLINE | ID: mdl-38601110

ABSTRACT

As Artificial Intelligence (AI) becomes more prevalent, protecting personal privacy is a critical ethical issue that must be addressed. This article explores the need for ethical AI systems that safeguard individual privacy while complying with ethical standards. By taking a multidisciplinary approach, the research examines innovative algorithmic techniques such as differential privacy, homomorphic encryption, federated learning, international regulatory frameworks, and ethical guidelines. The study concludes that these algorithms effectively enhance privacy protection while balancing the utility of AI with the need to protect personal data. The article emphasises the importance of a comprehensive approach that combines technological innovation with ethical and regulatory strategies to harness the power of AI in a way that respects and protects individual privacy.
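Of the algorithmic techniques this abstract names, differential privacy is the most readily sketched. Below is a minimal illustration (our sketch, not code from the article) of the classic Laplace mechanism, which adds calibrated noise to a query answer so that any single individual's presence in the data has only a bounded, tunable effect on the released output.

```python
import random

def laplace_mechanism(true_answer, sensitivity, epsilon):
    """Return a noisy answer satisfying epsilon-differential privacy.

    sensitivity: the most any one individual can change the true answer.
    epsilon: the privacy budget (smaller = stronger privacy, more noise).
    """
    scale = sensitivity / epsilon
    # The difference of two Exponential(rate=1/scale) variates is
    # Laplace(0, scale) noise, so no special sampler is needed.
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_answer + noise

# Example: release a patient count. A counting query has sensitivity 1,
# since adding or removing one person changes the count by at most 1.
true_count = 100
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=1.0)
```

The noise is unbiased, so repeated or aggregate releases remain statistically useful while individual contributions stay masked, which is the utility-versus-privacy balance the article discusses.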

6.
IEEE J Transl Eng Health Med ; 12: 256-257, 2024.
Article in English | MEDLINE | ID: mdl-38196818

ABSTRACT

The rapid advancement of Artificial Intelligence (AI) is transforming healthcare and daily life, offering great opportunities but also posing ethical and societal challenges. To ensure AI benefits all individuals, including those with intellectual disabilities, the focus should be on adaptive technology that can adjust to the unique needs of the user. Biomedical engineers have an interdisciplinary background that equips them to lead multidisciplinary teams in the development of human-centered AI solutions. These solutions can personalize learning, enhance communication, and improve accessibility for individuals with intellectual disabilities. Furthermore, AI can aid in healthcare research, diagnostics, and therapy. The ethical use of AI in healthcare and the collaboration of AI with human expertise must be emphasized. Public funding for inclusive research is encouraged, promoting equity and economic growth while empowering those with intellectual disabilities in society.


Subjects
Intellectual Disability, Humans, Intellectual Disability/diagnosis, Artificial Intelligence, Biomedical Engineering, Communication, Technology
7.
Front Psychiatry ; 15: 1346059, 2024.
Article in English | MEDLINE | ID: mdl-38525252

ABSTRACT

The advent and growing popularity of generative artificial intelligence (GenAI) holds the potential to revolutionise AI applications in forensic psychiatry and criminal justice, which traditionally relied on discriminative AI algorithms. Generative AI models mark a significant shift from the previously prevailing paradigm through their ability to generate seemingly new realistic data and analyse and integrate a vast amount of unstructured content from different data formats. This potential extends beyond reshaping conventional practices, like risk assessment, diagnostic support, and treatment and rehabilitation plans, to creating new opportunities in previously underexplored areas, such as training and education. This paper examines the transformative impact of generative artificial intelligence on AI applications in forensic psychiatry and criminal justice. First, it introduces generative AI and its prevalent models. Following this, it reviews the current applications of discriminative AI in forensic psychiatry. Subsequently, it presents a thorough exploration of the potential of generative AI to transform established practices and introduce novel applications through multimodal generative models, data generation and data augmentation. Finally, it provides a comprehensive overview of ethical and legal issues associated with deploying generative AI models, focusing on their impact on individuals as well as their broader societal implications. In conclusion, this paper aims to contribute to the ongoing discourse concerning the dynamic challenges of generative AI applications in forensic contexts, highlighting potential opportunities, risks, and challenges. It advocates for interdisciplinary collaboration and emphasises the necessity for thorough, responsible evaluations of generative AI models before widespread adoption into domains where decisions with substantial life-altering consequences are routinely made.

8.
Front Robot AI ; 10: 1270460, 2023.
Article in English | MEDLINE | ID: mdl-38077452

ABSTRACT

Can we conceive machines that can formulate autonomous intentions and make conscious decisions? If so, how would this ability affect their ethical behavior? Some case studies help us understand how advances in understanding artificial consciousness can contribute to creating ethical AI systems.

9.
Cureus ; 15(11): e49082, 2023 Nov.
Article in English | MEDLINE | ID: mdl-38125253

ABSTRACT

This comprehensive exploration unveils the transformative potential of Artificial Intelligence (AI) within medicine and surgery. Through a meticulous journey, we examine AI's current applications in healthcare, including medical diagnostics, surgical procedures, and advanced therapeutics. Delving into the theoretical foundations of AI, encompassing machine learning, deep learning, and Natural Language Processing (NLP), we illuminate the critical underpinnings supporting AI's integration into healthcare. Highlighting the symbiotic relationship between humans and machines, we emphasize how AI augments clinical capabilities without supplanting the irreplaceable human touch in healthcare delivery. We also briefly note the critical findings and takeaways readers can expect to encounter in the article. A thoughtful analysis of the economic, societal, and ethical implications of AI's integration into healthcare underscores our commitment to addressing critical issues, such as data privacy, algorithmic transparency, and equitable access to AI-driven healthcare services. As we contemplate the future landscape, we project an exciting vista where more sophisticated AI algorithms and real-time surgical visualizations redefine the boundaries of medical achievement. While acknowledging the limitations of the present research, we shed light on AI's pivotal role in enhancing patient engagement, education, and data security within the burgeoning realm of AI-driven healthcare.

10.
Inf Syst Front ; 24(5): 1465-1481, 2022.
Article in English | MEDLINE | ID: mdl-34177358

ABSTRACT

Recommender systems, one realm of AI, have attracted significant research attention due to concerns about their devastating effects on society's most vulnerable and marginalised communities. Both the popular press and academic literature provide compelling evidence that AI-based recommendations help to perpetuate and exacerbate racial and gender biases. Yet there is limited knowledge about the extent to which individuals might question AI-based recommendations when these are perceived as biased. To address this gap, we investigate the effects of espoused national cultural values on AI questionability by examining how individuals might question AI-based recommendations due to perceived racial or gender bias. Data collected from 387 survey respondents in the United States indicate that individuals with espoused national cultural values associated with collectivism, masculinity and uncertainty avoidance are more likely to question biased AI-based recommendations. This study advances understanding of how cultural values affect AI questionability due to perceived bias, and it contributes to current academic discourse about the need to hold AI accountable.

11.
AI Ethics ; 2(4): 595-597, 2022.
Article in English | MEDLINE | ID: mdl-35098248

ABSTRACT

Technology giants today preside over vast troves of user data that are heavily mined for profit. The concentration of such valuable data in private hands to serve mainly commercial interests must be questioned. In this article, we argue that if data is the new oil, Big Tech companies possess extensive, encompassing and granular data that is tantamount to premium oil. In contrast, governments, universities and think tanks undertake data collection efforts that are comparatively modest in scale, scope, duration and resolution and must contend with 'data dregs'. Viewed against the backdrop of the COVID-19 pandemic, this sharp data asymmetry is unfortunate because the data Big Tech monopolizes is invaluable for boosting epidemiological control, formulating government policies, enhancing social services, improving urban planning and refining public education. We explain why this state of extreme data inequity undermines societal benefit and subverts our quest for ethical AI. We also propose how it should be addressed through data sharing and Open Data initiatives.

12.
Exp Biol Med (Maywood) ; 247(22): 1969-1971, 2022 11.
Article in English | MEDLINE | ID: mdl-36426683

ABSTRACT

This editorial article aims to highlight advances in artificial intelligence (AI) technologies in five areas: Collaborative AI, Multimodal AI, Human-Centered AI, Equitable AI, and Ethical and Value-based AI in order to cope with future complex socioeconomic and public health issues.


Subjects
Artificial Intelligence, COVID-19, Humans, Delivery of Health Care, Forecasting
13.
AI Ethics ; 2(1): 157-165, 2022.
Article in English | MEDLINE | ID: mdl-34790953

ABSTRACT

In the past few decades, technology has completely transformed the world around us. Indeed, experts believe that the next big digital transformation in how we live, communicate, work, trade and learn will be driven by Artificial Intelligence (AI) [83]. This paper presents a high-level industrial and academic overview of AI in Education (AIEd). It presents the focus of the latest AIEd research: reducing teachers' workload, contextualized learning for students, revolutionizing assessments, and developments in intelligent tutoring systems. It also discusses the ethical dimension of AIEd and the potential impact of the Covid-19 pandemic on the future of AIEd research and practice. The intended readership of this article is policymakers and institutional leaders who are looking for an introductory state of play in AIEd.

14.
Front Robot AI ; 8: 640647, 2021.
Article in English | MEDLINE | ID: mdl-34124173

ABSTRACT

With the progress of Artificial Intelligence, intelligent agents are increasingly being deployed in tasks to which ethical guidelines and moral values apply. As artificial agents have no legal standing, humans should be held accountable if actions do not comply, implying that humans need to exercise control. This is often labeled Meaningful Human Control (MHC). In this paper, achieving MHC is addressed as a design problem, defining the collaboration between humans and agents. We propose three possible team designs (Team Design Patterns), varying in the level of autonomy on the agent's part. The team designs include explanations given by the agent to clarify its reasoning and decision-making. The designs were implemented in a simulation of a medical triage task, to be executed by a domain expert and an artificial agent. The triage task simulates making decisions under time pressure, with too few resources available to comply with all medical guidelines all the time, hence involving moral choices. Domain experts (i.e., health care professionals) participated in the present study. One goal was to assess the ecological relevance of the simulation; a second, to explore the control the human has over the agent to warrant morally compliant behavior in each proposed team design; a third, to evaluate the role of agent explanations in the human's understanding of the agent's reasoning. Results showed that the experts overall found the task a believable simulation of what might occur in reality. Domain experts experienced control over the team's moral compliance when consequences were quickly noticeable. When the consequences instead emerged much later, the experts experienced less control and felt less responsible. Possibly due to the experienced time pressure implemented in the task, or to overtrust in the agent, the experts did not use explanations much during the task; when asked afterwards, however, they considered them useful.
It is concluded that a team design should emphasize and support the human in developing a sense of responsibility for the agent's behavior and for the team's decisions. The design should include explanations that fit the assigned team roles as well as the human's cognitive state.

15.
Front Big Data ; 3: 25, 2020.
Article in English | MEDLINE | ID: mdl-33693398

ABSTRACT

Data shapes the development of Artificial Intelligence (AI) as we currently know it, and for many years centralized networking infrastructures have dominated both the sourcing and subsequent use of such data. Research suggests that centralized approaches result in poor representation, and as AI is now integrated more into daily life, there is a need for efforts to improve on this. The AI research community has begun to explore managing data infrastructures more democratically, finding that decentralized networking allows for more transparency, which can alleviate core ethical concerns such as selection bias. With this in mind, herein we present a mini-survey framed around data representation and data infrastructures in AI. We outline four key considerations (auditing; benchmarking; confidence and trust; explainability and interpretability) as they pertain to data-driven AI, and propose that reflection on them, along with improved interdisciplinary discussion, may help mitigate data-based AI ethical concerns and ultimately improve individual wellbeing when interacting with AI.
