
How will the EU AI Act affect the ML landscape?


Over the past several months, we have seen a record increase in public interest in artificial intelligence, driven mainly by easy-to-use, free-of-charge AI-supported tools such as Midjourney and ChatGPT. Easy access, combined with the abundant creativity of these systems' users, has revealed many beneficial and interesting features of the underlying machine learning models, such as prompt templates for OpenAI's ChatGPT that elicit certain desired responses. It has also confirmed that these tools carry risks: like any tool, they can be used for good as well as for evil.

The risk of abuse, the growing popularity of such systems, and the accelerating pace of scientific development in this area have made governments fear the consequences of further progress. In an effort to protect citizens from the dangerous effects of AI development, the EU has prepared the AI Act, which seeks to establish a legal framework that will shape the further development of artificial intelligence and reduce the risks it brings. In a similar vein, experts signed an open letter demanding a six-month pause in AI research so that rules and principles could be set.

The EU is not the only political entity introducing such regulations: the U.S. has published its "Blueprint for an AI Bill of Rights", and the U.K. has its "National AI Strategy". Each is trying in its own way to mitigate the risks that AI brings and to enforce the development of ethical AI. A race to regulate AI is underway, and a single standard of regulation recognized by a large proportion of countries around the world is likely to emerge.

General Assumptions

The European Union has focused its regulations on minimizing risks to users of AI-based systems by imposing certain obligations on the manufacturers of such systems and the companies that use them. Its priority is to ensure that AI technologies are safe, transparent, non-discriminatory, traceable, and environmentally friendly. It followed the principle of proportionality by defining several levels of risk for AI systems, each with corresponding obligations. The regulations are intended to be technology-agnostic and future-proof.

Affected Businesses

The proposal identifies four levels of AI system risk: unacceptable, high, generative AI, and limited.

Unacceptable Risk

This category includes AI systems for biometric identification in public spaces, such as facial recognition software (with an exception for law enforcement, which is subject to separate restrictions), social scoring systems, and systems that manipulate human behavior, particularly that of vulnerable groups such as children or people with disabilities. AI systems of this type will be banned in the EU.

High Risk

AI systems used in products covered by Directive 2001/95/EC of the European Parliament and of the Council of 3 December 2001 on general product safety, as well as systems falling under one of the following areas:

  • human biometrics
  • critical infrastructure
  • education
  • employment
  • essential public and health services
  • law enforcement
  • migration and border control
  • legal interpretation and the administration of justice

Entities producing AI systems for these areas will be required to register in an EU database. These systems will be subject to human evaluation and supervision throughout their life cycle. Moreover, users of such systems must be able to obtain an explanation of a decision made by the system, which will allow them to supervise its operation and detect anomalies or incorrect behavior. It follows that an integral part of future high-risk AI systems will be a component that implements explainable AI, as sketched below. Providers will also be required to inform users about the capabilities and limitations of the system.
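
As an illustration, here is a minimal sketch of such an explainability component in Python. It assumes the open-source shap library and uses a scikit-learn model on a public dataset as a stand-in for a production system; the point is that each individual decision can be attributed to its input features.

    import shap
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    # Stand-in model and data; in a real high-risk system this would be the
    # production model whose individual decisions must be explainable.
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    # Attribute a single decision to its input features so a human supervisor
    # can see which factors drove the prediction and spot anomalies.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X.iloc[[0]])

    # For a classifier, there is one set of feature attributions per class.
    print(shap_values)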

Generative AI

Generative systems will have to meet transparency requirements, in particular:

  • disclosing that the content was generated by AI (a minimal sketch of such a disclosure follows this list)
  • securing the model against the generation of illegal content
  • publishing a summary of the data used for training
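
How such a disclosure could look in practice remains open; below is a hypothetical Python sketch in which every generated artifact carries a machine-readable provenance label. The GeneratedContent record and label_output helper are names invented here for illustration, not part of any regulation or library.

    from dataclasses import dataclass
    from datetime import datetime, timezone

    # Hypothetical provenance record attached to every generated artifact.
    @dataclass
    class GeneratedContent:
        text: str
        model_id: str
        generated_at: str
        ai_generated: bool = True

    def label_output(raw_text: str, model_id: str) -> GeneratedContent:
        """Wrap raw model output with a machine-readable AI disclosure."""
        return GeneratedContent(
            text=raw_text,
            model_id=model_id,
            generated_at=datetime.now(timezone.utc).isoformat(),
        )

    labeled = label_output("Example model output.", model_id="demo-model-v1")
    # A human-readable disclosure rendered alongside the content itself.
    print(f"[AI-generated by {labeled.model_id}] {labeled.text}")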

Limited Risk

Other AI systems fall into the limited-risk category. These systems should meet minimum transparency requirements that allow users to make informed decisions; in particular, users should be aware when they are interacting with artificial intelligence, as in the sketch below.
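
For a chatbot, this can be as simple as announcing the system's nature at the start of every session. A hypothetical sketch (the greet_user helper and bot name are invented for illustration):

    # Hypothetical session opener for a limited-risk chatbot: the first
    # message discloses that the user is talking to an AI, not a human.
    def greet_user(bot_name: str = "support-bot") -> str:
        return (
            f"Hi! You are chatting with {bot_name}, an AI assistant. "
            "Let me know how I can help."
        )

    print(greet_user())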


Photo by Christian Lue on Unsplash

Opportunities

Globally, we are at the stage of creating the law that will regulate AI. We now have the opportunity to craft regulation that protects society and individual citizens from the dangerous effects of AI systems without significantly restricting the development and application of artificial intelligence in business.

Certainly, the requirements of explainability and transparency will force the scientific community to place greater emphasis on methods for explaining and understanding AI systems. This, in turn, will translate into iterative improvements in AI models in the long run.

The requirement to secure generative models against the creation of illegal content will be a strong incentive for research in this area. I expect new methods to emerge that will make AI systems safer for users.

Risks

While delving into the EU AI Act and its impact on the obligations of artificial intelligence system manufacturers, I wondered how the requirements of European officials would translate into technical requirements for those systems. It turns out I was not the only one: similar considerations were published by Balint Gyevnar et al. (2023) in an article titled “Get Your Act Together: A Comparative View on Transparency in the AI Act and Technology”. The paper concludes that the current regulations are ambiguous, that officials and engineers understand key terms differently, that the requirements described in the regulations have no direct translation into system requirements, and that there is a risk of infringing the proprietary rights of AI system developers.

Civil servants and engineers speak different languages: both groups use the same words, but in different senses. Examples include the terms interpretability, explainability, transparency, and model output. These terms need to be precisely defined to allow a clear understanding of the requirements. Transparency, as understood by officials, often means transparency oriented toward regulatory compliance, while for engineers it means understanding the operation of the system itself, for example through explainable AI methods.

Are AI systems that recommend content considered by officials to be systems that manipulate people's behavior? If so, this will be a big blow to the e-commerce industry and to streaming platforms, which use these solutions to promote their products based on customer behavior. And what about AI-controlled advertising campaigns?

The regulations are quite general in places and leave room for interpretation, which carries risks for companies developing AI-based solutions.

Moreover, the transparency requirement carries a risk of violating proprietary rights: requiring a description of the motivations and design decisions made during the construction of a system directly involves disclosing information that would allow competitors to copy the solution. This is the biggest threat posed by this regulation. A description of the architecture and of the datasets used to train a machine learning model is of immense value to a business for which data is a competitive advantage.

What’s next?

Discussions with member states in the EU Council on the final form of the regulation will begin soon. I hope that the voices of experts will be taken into account when the law is amended. Otherwise, we will have to wait for interpretations detailing the guidelines, which would translate the requirements written in the regulation into technical system requirements and relieve companies of the risk stemming from imprecision in the law.

Summary

The regulation being developed by the EU is an attempt to bring the "wild west" of AI development under control before serious damage is done. The guidelines are intended to protect users of AI systems, especially in areas where decisions have significant consequences for citizens' lives. In my opinion, the ban on the use of AI in areas that may violate the security and privacy of citizens deserves praise. For example, the ban on social scoring and on real-time biometrics in public spaces will limit the possibility of top-down shaping of citizens' behavior, as is the case in the People's Republic of China.

However, there are also concerns about the lack of clarity in the terms used and the lack of technical requirements for AI systems.

One thing is certain: regulations in the area of artificial intelligence await us. As engineers, we will have to adapt and create systems that comply with the law. Based on the current version of the regulations, I do not foresee any paralysis caused by their introduction. They require some fine-tuning, especially regarding the description of design decisions and data sets, but they do not pose an existential threat to the further development of the industry.

References

  1. The EU Artificial Intelligence Act: https://artificialintelligenceact.eu/
  2. European Parliament: https://www.europarl.europa.eu/
  3. The White House, "Blueprint for an AI Bill of Rights": https://www.whitehouse.gov/
  4. UK Government, "National AI Strategy": https://www.gov.uk/
  5. Future of Life Institute open letter: https://futureoflife.org/
  6. Balint Gyevnar et al. (2023), "Get Your Act Together: A Comparative View on Transparency in the AI Act and Technology"