Abstract: | This is the second article in a two-part series on the social, ethical and public policy implications of the new artificial intelligence (AI). The first article briefly presented a neo-Durkheimian understanding of the social fears projected onto AI, before arguing that the common and enduring myth of an AI takeover arising from the autonomous decision-making capability of AI systems, most recently resurrected by Professor Kevin Warwick, is misplaced. That article went on to argue that there are nevertheless some genuine and practical issues in the accountability of AI systems that must be addressed. This second article, drawing further on the neo-Durkheimian theory, sets out a more detailed understanding of what it is for a system to be autonomous enough in its decision-making to blur the boundary between tool and agent. This blurring of categories matters because, as the first article argued, it is often the basis of social fears. |