In a significant development in the ongoing race to regulate artificial intelligence (AI), a key committee of lawmakers in the European Parliament has approved a groundbreaking AI regulation, bringing it one step closer to becoming law.
Known as the European AI Act, this legislation represents the first-ever comprehensive legal framework for AI systems in the Western world.
While China has already formulated draft rules for managing generative AI products like ChatGPT, the European AI Act takes a unique risk-based approach to governing AI technologies.
The AI Act classifies AI applications into four levels of risk: unacceptable risk, high risk, limited risk, and minimal or no risk.
Applications falling under the unacceptable risk category are prohibited outright and cannot be deployed within the European Union. They include AI systems employing subliminal or deceptive techniques, exploiting vulnerabilities of individuals or groups, biometric categorisation based on sensitive attributes, social scoring systems, risk assessment for predicting criminal or administrative offences, untargeted scraping for facial recognition databases, and emotion inference in law enforcement, border management, the workplace, and education.
One of the primary concerns addressed by the legislation is the regulation of "foundation models" like ChatGPT, which have raised apprehensions among regulators due to their advanced capabilities and their potential to displace skilled workers. The new rules impose obligations on developers of foundation models, such as large language models and generative AI systems. These developers will be required to conduct safety checks, implement data governance measures, and employ risk mitigation strategies before making their models publicly available. Additionally, they must ensure that the data used to train their systems does not infringe copyright law.
Experts believe ChatGPT-like AI models would be required to assess and mitigate risks to fundamental rights, health and safety, the environment, democracy, and the rule of law. They would also be subject to data governance requirements, such as examining the suitability of data sources and possible biases.
While the parliamentary committee has approved the AI Act, it is essential to note that the legislation still has a long way to go before becoming law. Nevertheless, this move by European lawmakers underscores the urgency of establishing regulations for the rapidly evolving AI landscape.
The introduction of the AI Act is prompted by the swift development of AI technology by privately held companies. Notable AI systems like Microsoft-backed OpenAI's ChatGPT and Google's Bard have showcased remarkable capabilities, generating human-like responses through large language models trained on extensive datasets. AI technology has permeated numerous applications and systems, shaping the content users encounter on platforms like TikTok and Instagram.
The objective of the European Union's proposals is to establish guidelines for AI companies and organisations utilising AI technologies. However, these regulations have raised concerns within the tech industry. The Computer and Communications Industry Association (CCIA) expressed apprehension that the scope of the AI Act has been excessively broadened, potentially encompassing even harmless forms of AI.