Should the State permit the use of artificial intelligence?
Under what conditions should the State, as the entity charged with protecting its citizens, permit the introduction of autonomous machines?
Artificial intelligence has for quite some time dominated discussions of digitisation. It has also been creating quite a stir in legal circles. Beyond the associated liability and data protection problems, the question is how we as a society want to position ourselves in relation to the phenomenon of artificial intelligence. How can coexistence between man and machine succeed as the latter adopts increasingly human characteristics? Answering this requires not only legal expertise, but also (machine) ethics.
This begins with questions about the definition of concepts such as “artificial intelligence” and “machine learning”, which rest on seemingly unavoidable anthropomorphisms. Whether such comparisons to human beings are justified, or whether they inadmissibly blur the differences, is not irrelevant if the law is to subject these new man-made actors to regulation.
A newer discussion concerns whether algorithms can, or in fact do, engage in unlawful discrimination, for example on the basis of race or sex. Or do machines and software really always decide neutrally, as has often been claimed? There is certainly reason to question this proposition with respect to some automated decision-making processes. For example, when U.S. psychologists assess the probability that offenders will reoffend, they rely on prognosis software that appears to put young Black offenders at a distinct disadvantage compared to other prisoners. The question is therefore: do human prejudices simply persist, or even intensify, when AI is used?
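The mechanism behind such outcomes can be made concrete. The following is a minimal sketch in Python, not modelled on any actual prognosis system, using entirely synthetic data: it assumes historical labels that were biased against one group and a hypothetical proxy feature (standing in, say, for a postcode) that correlates with group membership. It shows that a model which never sees the protected attribute can still reproduce the bias.

```python
# Toy illustration only: biased training labels plus a correlated proxy
# feature let a model discriminate even though the protected attribute
# itself is withheld from training.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Protected attribute (0/1); never shown to the model.
group = rng.integers(0, 2, n)

# True underlying risk is identical across both groups by construction.
risk = rng.normal(0.0, 1.0, n)

# Historical labels are biased: at the same risk level, group 1 was
# flagged more often (e.g., through over-policing).
label = (risk + 0.8 * group + rng.normal(0.0, 1.0, n)) > 1.0

# Hypothetical proxy feature correlated with group membership.
proxy = group + rng.normal(0.0, 0.5, n)

# Train only on risk and the proxy -- the group column is excluded.
X = np.column_stack([risk, proxy])
model = LogisticRegression().fit(X, label)
pred = model.predict(X)

# Compare how often genuinely low-risk people are flagged per group.
low_risk = risk < 0
for g in (0, 1):
    mask = low_risk & (group == g)
    print(f"group {g}: share of low-risk individuals flagged = {pred[mask].mean():.2%}")
```

Running this sketch shows a markedly higher flag rate for low-risk members of group 1: because the proxy feature reconstructs group membership, merely blinding the model to the protected attribute does not guarantee neutral decisions.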