Recent advances in neurotechnology and Artificial Intelligence (AI), and their accelerating convergence into neuroAI systems, have radically advanced the translation of neural activity into speech and action. Yet these systems also expose deeply private dimensions of the mind to algorithmic interpretation. This raises unprecedented ethical and societal questions as non-invasive neuroAI systems expand beyond clinical use into industrial sectors, blurring medical and commercial boundaries and driving profound transformations in how human minds are analysed, monetized and governed. Two questions follow: (1) how should agency, legitimacy and responsibility be distributed in neuroAI-mediated actions? and (2) how can issues such as algorithmic opacity and data bias be identified, and ethics and adaptive governance frameworks be embedded, to strengthen public trust and accountability and promote epistemic justice in neuroAI?
NeuroTranslate adopts a comparative, interdisciplinary, mixed-methods design built around embedded-ethics methodology, positioning ethicists directly within neuroAI laboratories. This enables first-hand examination of how moral, epistemic, and practical challenges unfold within distinct experimental settings.
In doing so, the project both supports TransforM’s research focus areas and informs the Council of Europe’s emerging neuroAI and neuroprivacy guidelines.
Focus area:
Secondary focus areas:
Principal Investigators: