Artificial intelligence (AI) is an umbrella term covering a host of computational approaches, from machine learning to natural language processing and computer vision. These approaches integrate and analyse data and information through complex interplays of logic, probability, mathematics, perception, reasoning, learning, and action. As a result of its wide-ranging functions and general-purpose application, AI has potentially transformative impacts across sectors such as finance, national security, healthcare, criminal justice, transportation, smart cities, and labour markets. It has become a pervasive aspect of daily life, with speech recognition, customer service, image classification, and recommendation engines used across a wide range of commercial applications and products, from Apple's Siri and Amazon's Alexa to autonomous vehicles.
More recently, however, the rise of generative AI has raised critical questions about the implications of its transformative impacts for issues such as copyright and intellectual property, data privacy and consumer protection, the future of work, and product safety, among others. These, in turn, raise the question of legal personhood for AI, i.e., whether AI should be treated as a separate legal entity for the purposes of regulation.
Determining the civil liability of AI has become an important aspect of AI standard setting and innovation, particularly because of concerns around potential biases, misrepresentation, 'hallucinations' (the generation of false information), and the complex ethical conundrums that AI systems might have to navigate. An illustrative example is MIT's Moral Machine project, an online software tool designed to collect public feedback on the moral decisions that autonomous driverless cars should make. These included dilemmas such as whom the car should prioritise in the event of an impending accident: its passengers or pedestrians. Or whether the outcome should be determined by ranking people based on their age (such as children over adults), their number (saving more people over one person), or other relevant criteria.
Artificial agency and regulatory approaches
Traditional approaches to liability become complicated when applied to AI on account of its unpredictability and its causal agency without legal agency. This stems from the opacity or inexplicability of AI algorithms, that is, the inability to trace the causal chains behind AI outputs to determine where liability lies in case of adverse consequences. The issue becomes particularly contentious when framed around whether choices made by AI systems and models are agent-dependent, which raises the question of who the agent is when determining liability for a generative AI output: the developer, the deployer, or the AI system itself. These issues are further compounded in the case of multi-model and multimodal AI systems, which combine foundational AI models such as Large Language Models (LLMs) with broader AI capabilities, and which can draw insights from multiple types of data, including text, images, sound, and video.