In the wake of Alex Garland’s much-celebrated ‘Ex Machina’, critics have turned their attention to the delicate intersection of law, ethics and technology.
Robots and bionic technologies that enhance or become part of humans raise many thorny legal and ethical questions. If a brain-computer interface is used to communicate on behalf of someone in a vegetative state, are those messages legally binding? If a robotic prosthesis is implicated in a murder, who is at fault? If a human with a prosthesis remains subject to the ordinary laws of human governance, should machines be entitled to the same legal rights and punishments, or does the very fact that their autonomy rests at the discretion of human designers render the question of independent rights moot altogether?
Drawing up any kind of regulatory framework is a tricky business, and not simply because policy-makers are faced with a haze of perpetual obfuscation – the kind of questions that tend only to inspire further questions rather than conclusive answers – but because they find themselves entrusted with the punishing responsibility of preserving ethical norms without hampering the potential for technological innovation. Philosophy students should consider the advance of artificial intelligence in light of ‘existential risk’: the Centre for the Study of Existential Risk, a research centre at the University of Cambridge, was founded for the explicit purpose of studying threats with the capacity to destroy mankind.