Much has been written recently about the ineffectiveness of the current plethora of AI ethics regulations in the public and private sectors. I approach this concern from a novel angle by critically reflecting, from within the ethics of technology, on current AI ethics discourse, which remains deeply Cartesian, especially when it comes to policy-making. I begin with an analysis of current AI ethics vocabulary and point to its value-laden and Cartesian nature. As a first step towards moving away from Cartesianism, I then briefly take the reader on a journey through pertinent aspects of trans-human discourse, as illustrated by Clark’s proposal that human minds are ‘extended’. I then draw on Verbeek and Kudina’s work in post-phenomenological mediation theory to enrich Clark’s suggestions by acknowledging a more active role for technology in co-shaping humans and their socio-cultural worlds. On this basis, via a novel notion of ‘extended moral agency’, I define a notion of ‘moral affordance’ to inform a new non-Cartesian tradition for AI ethics discourse and policy-making. Finally, I briefly comment on the implications of my argument for the future of AI ethics regulation.
