Results 1 - 2 of 2

1.
Sci Eng Ethics; 26(4): 2295-2311, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32356091

ABSTRACT

Brain reading technologies are rapidly being developed in a number of neuroscience fields. These technologies can record, process, and decode neural signals. This has been described as 'mind reading technology' in some instances, especially in popular media. Should the public at large be concerned about this kind of technology? Can it really read minds? Concerns about mind-reading might include the thought that, with one's mind open to view, the possibilities for free deliberation and for self-conception are eroded, since one is no longer at liberty to mull things over privately. Themes including privacy, cognitive liberty, and self-conception and expression appear to be areas of vital ethical concern. Overall, this article explores whether brain reading technologies are really mind reading technologies. If they are, ethical ways of dealing with them must be developed. If they are not, researchers and technology developers need to find ways to describe them more accurately, in order to dispel unwarranted concerns and to address appropriately those that are warranted.
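
As a purely illustrative aside, the pipeline the abstract summarizes as "record, process, and decode" can be sketched end to end. The sketch below is not from the article: the data is synthetic noise standing in for recorded EEG, the injected 10 Hz rhythm, the alpha-band choice, and all variable names are assumptions, and a real decoder would operate on genuine recordings with validated labels.

```python
# Minimal sketch of a neural-signal decoding pipeline: record -> process -> decode.
# Synthetic data only; an assumed illustration, not the article's method.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs = 250                            # sampling rate in Hz (typical for EEG)
n_trials, n_samples = 200, fs * 2   # 200 two-second trials

# "Record": synthetic single-channel trials; half get an injected 10 Hz
# rhythm so the decoder has something to find.
t = np.arange(n_samples) / fs
labels = rng.integers(0, 2, n_trials)
trials = rng.normal(size=(n_trials, n_samples))
trials[labels == 1] += 0.5 * np.sin(2 * np.pi * 10 * t)

# "Process": band-pass filter to the alpha band (8-12 Hz), then reduce
# each trial to a single log band-power feature.
b, a = butter(4, [8, 12], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, trials, axis=1)
band_power = np.log(np.mean(filtered ** 2, axis=1, keepdims=True))

# "Decode": a linear classifier maps the feature to the label.
scores = cross_val_score(LogisticRegression(), band_power, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f}")
```

Even this toy version makes the ethical point concrete: the decoder never "reads a mind", it maps a narrow statistical feature of a signal onto a predefined label.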


Subject(s)
Brain, Neurosciences, Speech Recognition Software, Speech, Humans, Morals, Privacy, Speech Recognition Software/ethics
2.
AJOB Neurosci; 11(2): 105-112, 2020.
Article in English | MEDLINE | ID: mdl-32228383

ABSTRACT

This article examines the ethical and policy implications of using voice computing and artificial intelligence to screen for mental health conditions in low-income and minority populations. Mental illness is unequally distributed among these groups, a disparity further exacerbated by heightened barriers to psychiatric care. Advancements in voice computing and artificial intelligence promise increased screening and more sensitive diagnostic assessments. Machine learning algorithms can identify vocal features that help screen for depression. However, in order to screen for mental health pathology, computer algorithms must first account for the fundamental differences in vocal characteristics between low-income minority populations and other groups. While researchers have envisioned this technology as a beneficent tool, it could be repurposed to scale up discrimination or exploitation. Studies on the use of big data and predictive analytics demonstrate that low-income minority populations already face significant discrimination. This article urges researchers developing AI tools for vulnerable populations to consider the full ethical, legal, and social impact of their work. Without a national, coherent framework of legal regulations and ethical guidelines to protect vulnerable populations, it will be difficult to limit AI applications to solely beneficial uses. Without such protections, vulnerable populations will rightfully be wary of participating in such studies, which in turn will undermine the robustness of the resulting tools. Thus, for research involving AI tools like voice computing, it is in the research community's interest to demand more guidance and regulatory oversight from the federal government.
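
To make concrete what "vocal features that help screen for depression" might look like in code, here is a minimal sketch under loud assumptions: the audio clips are synthetic noise, the labels are random placeholders rather than clinical diagnoses, and the feature set (mean MFCCs plus an energy statistic) is one common choice among many, not the article's method.

```python
# Hedged sketch of a vocal-feature screening pipeline: extract acoustic
# features from speech clips, then fit a classifier. Placeholder data only.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
sr = 16000  # sampling rate in Hz

def extract_features(y, sr):
    """Summarize a speech clip as mean MFCCs plus a mean RMS energy value."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    rms = librosa.feature.rms(y=y)
    return np.concatenate([mfcc.mean(axis=1), [rms.mean()]])

# Placeholder corpus: 100 one-second synthetic "clips" with random labels
# standing in for real recordings and validated diagnoses.
clips = [rng.normal(size=sr).astype(np.float32) for _ in range(100)]
labels = rng.integers(0, 2, 100)
X = np.stack([extract_features(y, sr) for y in clips])

# With random labels, cross-validated accuracy should hover near chance;
# clearing that baseline is the first sanity check a real pipeline needs.
scores = cross_val_score(RandomForestClassifier(random_state=0), X, labels, cv=5)
print(f"cross-validated screening accuracy: {scores.mean():.2f}")
```

A real system would additionally need to verify, as the abstract argues, that its features behave comparably across demographic groups before any screening claim is credible.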


Subject(s)
Artificial Intelligence/ethics, Bioethics, Mental Disorders/diagnosis, Mentally Ill Persons, Minority Groups, Poverty, Speech Recognition Software/ethics, Humans