The EU's "Artificial Intelligence Act" is about to take effect, and overseas companies are bracing for its impact

On August 1st, the European Union's Artificial Intelligence Act (hereinafter referred to as the "AI Act") will officially come into effect within the EU and will be implemented in phases over the next three years. This is the world's first comprehensive legislation to regulate artificial intelligence, and its impact should not be underestimated.

The AI Act bears directly on the vital interests of AI companies: it imposes constraints on overseas companies doing business in the EU, with repercussions extending beyond the region. The act has already drawn wide attention from the legal community and industry, with many awaiting further detailed guidance and supporting measures from the EU on some of its provisions.

Taking data compliance as an example, Liu Lianbo, the Product Director for the Asia-Pacific region at Moloco, a global machine learning and growth marketing solutions company, told Yicai, "The issue of privacy policy is not targeted at a single company; it is relevant to all partners or players involved. The challenges faced by everyone in this industry are always the same, and I don't think it will have a fundamental impact on the trend of going overseas, because the industry will share this challenge." He believes that the first priority is compliance, followed by the correct collection of data without misuse, which will ultimately prompt us to adjust our algorithm models to respond. "We will carefully track and study the specific challenges the act poses to our overseas customers, and then we will work with our customers to research and implement solutions."


"The most important thing for domestic companies going overseas to the EU is to focus on high-risk AI systems. If they do not comply with the regulations, companies will face high fines and legal risks," Wang Jie, founding partner and director of Kenting (Guangzhou) Law Firm and founder of the W&W International Legal Team, told Yicai reporters. For domestic technology companies doing business in the EU, the main industries affected include healthcare, human resources, public safety, and transportation.

Some industry insiders told reporters that it is not easy for companies to comply with some of the provisions of the AI Act, and achieving compliance is expected to lead to increased corporate costs.

Fines can reach 35 million euros.

The unexpected popularity of ChatGPT at the end of 2022 brought a sense of urgency to AI legislation. After intensive preparation by the EU, the AI Act was passed by the European Parliament in March this year and approved by the EU Council in May, and it officially came into effect on August 1st this year.

The global influence of the AI Act should not be underestimated, partly because its jurisdiction extends to some companies established outside the EU. The act applies in phases after entry into force: after six months, the general provisions and the ban on unacceptable-risk AI systems apply; after 12 months, the provisions on high-risk AI systems and general-purpose AI models apply; after 24 months, most of the remaining provisions apply, with exceptions such as the classification rules for certain high-risk AI systems; after 36 months, those remaining parts, including the classification rules for those high-risk AI systems, apply as well.

The act stipulates that it applies to providers who place AI systems or general-purpose AI models on the EU market, or who put AI systems into service in the EU, regardless of whether those providers are established or located in the EU or in a third country. It also applies to deployers, importers, and distributors of AI systems established or located in the EU; to product manufacturers who, under their own name or trademark, place an AI system on the market or put it into service together with their product; and to affected persons located in the EU.

This means that overseas AI-related companies may fall within the act's scope as long as they place AI systems on the EU market, market them under their own trademark, or affect persons located in the EU. Companies that violate the regulations may face administrative fines of up to 35 million euros or 7% of their total worldwide annual turnover, whichever is higher.

Regarding the value and subsequent impact of the act, there has been much discussion within the legal and academic communities.
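As a rough illustration of the fine cap described above, the "whichever is higher" rule amounts to a simple maximum; the helper below is a hypothetical sketch, not anything defined in the act itself:

```python
def max_admin_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound on administrative fines for the most serious violations:
    the higher of EUR 35 million or 7% of worldwide annual turnover."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# For a firm with EUR 1 billion in turnover, 7% exceeds the flat cap:
print(max_admin_fine(1_000_000_000))   # 70000000.0
# For a smaller firm, the EUR 35 million floor dominates:
print(max_admin_fine(100_000_000))     # 35000000.0
```

The turnover-based cap is what makes the penalty regime bite for large multinationals rather than only for small providers.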

The European Union's General Data Protection Regulation (GDPR) has already demonstrated the "Brussels Effect": the EU's ability to unilaterally extend its legal system beyond its borders through the power of market regulation, compelling regulated entities to comply with EU law even outside the EU. Opinions differ on whether the AI Act can replicate this effect.

"The AI Act is a groundbreaking piece of legislation that goes beyond rights, obligations, and responsibilities to make 'risk' the central issue. No previous legislation has proposed such a comprehensive regulatory approach to a new technology," Chen Jidong, an associate professor at the School of Law, Tongji University, told reporters. The EU backed even seemingly simple provisions of the act with extensive questionnaires and reports, investing significant legislative effort and thorough preliminary research. In striking a balance between prioritizing AI development and prioritizing regulation, the EU has not moved too early with its oversight. The act is not a one-size-fits-all perfect solution and will be refined as AI develops. It has already been studied and referenced domestically. The United States, by contrast, has made the pragmatist's choice, adopting a dynamic regulatory approach.

In the view of Zhao Xinhua, a partner at King & Wood Mallesons, the AI Act is very comprehensive, and the EU is ahead in the global governance of such issues. Together with the GDPR in 2018 and the more recent Digital Services Act and Digital Markets Act, the EU has been laying the groundwork for the digital economy era for many years. On the positive side, the act serves as an example for AI regulation: "just as the global regulatory framework was largely modeled on the GDPR after its introduction, the AI Act will also provide valuable references and insights for global AI governance."

Wu Shenkuo, an associate professor at the School of Law, Beijing Normal University, and deputy director of the China Internet Association Research Center, believes the act will have a very significant impact on multinational companies, affecting their business development models, compliance process design, and market operation strategies; its advantage lies in its clear rules and detailed institutional design.

There are also opinions that the AI Act may have come too early and be too strict; attitudes toward the development of the AI industry vary across regions globally.

Wu Shenkuo also believes that in the short term the AI Act could hinder local AI deployment. This is a process of interaction and gamesmanship between regulators and industry, one that requires mutual understanding to deepen. In the medium to long term, if a balance is struck between certainty and regulatory intensity, there may be room for a rebound.

AI startup Wave Intelligence plans to expand into Europe; its founder, Jiang Yuchen, studied for a Ph.D. in artificial intelligence in Switzerland. She believes that strict EU regulation is not a new issue, as Europe has always been strict about data privacy. "This time, corresponding privacy clauses have simply been introduced for AI, but strict privacy clauses themselves are nothing new; they are an inherent challenge that has always existed in the European market."

As for the impact of the AI Act on the market, Jiang Yuchen believes there will be some, but the impact on private deployment will be smaller, while the impact on large model companies may be greater. "Large model companies that provide closed-source APIs face greater risk in particular, but for companies that provide private deployment or delivered software, this market is actually larger." When privacy regulation is strong, products and companies with better privacy protection stand to benefit.

The division of high-risk systems has attracted attention

The impact of the "AI Act" on businesses is refined within its "risk-based" regulatory framework. Specifically, the act categorizes AI systems into four risk levels: prohibited (unacceptable risk), high-risk, limited-risk, and minimal-risk, each with corresponding compliance requirements. AI systems posing unacceptable risks are banned from the market entirely; high-risk AI systems may be placed on the EU market, put into service, or used only if they meet certain mandatory requirements; and limited-risk AI systems are subject to lighter obligations.
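The four-tier structure can be summarized in a small sketch; the tier descriptions and the example classifications below are illustrative assumptions, not quotations from the act:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk levels of the act's 'risk-based' model."""
    PROHIBITED = "unacceptable risk: banned from the EU market"
    HIGH = "high risk: mandatory requirements before market entry"
    LIMITED = "limited risk: lighter obligations such as transparency"
    MINIMAL = "minimal risk: largely unregulated"

# Hypothetical examples of where common use cases might fall:
EXAMPLE_TIERS = {
    "social scoring system": RiskTier.PROHIBITED,
    "recruitment screening tool": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

print(EXAMPLE_TIERS["customer service chatbot"].name)  # LIMITED
```

In practice the tier a given system lands in is the pivotal compliance question, which is why the boundaries of the high-risk category draw the most scrutiny below.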

Some legal professionals believe that AI companies entering the EU market should pay the most attention to the delineation and requirements of high-risk AI systems. Julia Apostle, a partner at the Paris office of the U.S. law firm Orrick, told reporters that, according to the "AI Act," developers of high-risk AI systems have the heaviest compliance burden, including the need to demonstrate compliance with legal requirements. Any company developing or introducing AI systems in Europe should review whether their AI systems are high-risk. From the perspective of AI practical applications, companies developing AI systems that may affect individual rights, safety, and autonomy are more significantly impacted by the Act.

"High-risk AI systems involve critical areas such as education and vocational training, employment, and important private and public services (such as healthcare, banking), which are subject to strict regulatory and compliance requirements. AI systems used in industries like healthcare, human resources, public safety, and transportation mostly fall into the high-risk category," said Wang Jie.

According to initial estimates by the EU when promoting the "AI Act," high-risk AI systems account for approximately 5% to 15% of AI systems. Wang Jie believes that applications that do not directly affect personal safety, health, or fundamental rights, such as chatbots and personalized content recommendations, have relatively less impact and are generally not considered high-risk systems. Julia Apostle thinks that most AI systems will not be too affected by the Act, while high-sensitivity areas, such as healthcare and professional scenarios, will receive further legislative attention.

However, Chen Jidong believes that high-risk AI systems cover a wide range, and that the EU now places most AI systems in the high-risk category. Autonomous vehicles and facial recognition, for example, may involve high risks.

It is worth noting that the "AI Act" clearly defines multiple entities related to AI systems, including AI system providers, deployers, authorized representatives, importers, and product manufacturers. According to an article recently published by a team of lawyers from the Shanghai Duan & Duan Law Firm, AI system providers have the most numerous obligations, involving about 29 articles, which cover various stages such as system design, development, testing, deployment, and monitoring, significantly more than the regulations involved for deployers, authorized representatives, and importers.

Wang Jie stated that AI system providers are responsible for developing AI systems or general AI models and introducing them to the market, which means more resources need to be invested to ensure compliance. AI system providers bear the most obligations and also bear higher compliance costs.

Furthermore, the "AI Act" also pays considerable attention to general-purpose AI models. The act states that models with at least a billion parameters, trained on vast amounts of data using large-scale self-supervision, should be considered to display significant generality. When the cumulative computation used to train a general-purpose AI model exceeds 10^25 floating-point operations (FLOPs), the model is presumed to have high-impact capabilities and may be identified as posing "systemic risk," requiring it to fulfill additional obligations.

The compliance cost for large models may exceed 17%

For artificial intelligence-related enterprises, meeting the requirements of such a comprehensive and complex act may lead to increased compliance costs.

The Center for Data Innovation, an overseas think tank, estimated in 2021, when the EU first promoted the AI Act, that the act would cost the EU 31 billion euros over the following five years; that a European small or medium-sized enterprise deploying a high-risk AI system would bear compliance costs of up to 400,000 euros; and that the act would impose an extra 17% overhead on all artificial intelligence spending.

CEPS (the Centre for European Policy Studies) clarified in the same year that the estimate of more than 30 billion euros in losses was exaggerated, and that the additional 17% overhead applies only to companies that meet none of the regulatory requirements. In CEPS's research, the cost of establishing a new quality management system (QMS) may be between 193,000 and 330,000 euros, with annual maintenance estimated at 71,400 euros. Once the QMS is established, costs decrease, and a QMS is required only of providers of high-risk AI systems.

The above discussion on compliance costs took place two or three years ago, when large models were not yet mainstream, and the situation may have changed now.

Julia Apostle believes the proposed 17% figure is based on a very low estimate of the baseline cost of AI systems and does not reflect the cost of training a foundation model. "Compliance costs still cannot be fully estimated because not all the details are known. Costs will be greatest for providers of high-risk AI systems and general-purpose AI models. The law depends on the adoption of technical standards, which companies must implement through certified processes, and the content of these standards has not been officially determined; it will certainly involve adopting policies, procedures, and specific product requirements." Julia Apostle said that because the act's regulation of AI is not limited to the EU, it seems unlikely that companies will leave the EU because of it.

Wang Jie believes that for domestic large model manufacturers, it may be difficult to achieve compliance quickly and at low cost in the short term, especially for large models with complex data. The challenge for large model manufacturers may be to adapt to the EU's high standards for data management, model transparency, and interpretability.

Wu Shenkuo believes that the classification and grading of risks, and especially the disclosure requirements around AI research and development, pose a relatively large compliance challenge, one that conflicts sharply with the black-box nature of artificial intelligence algorithms. Zhao Xinhua believes the AI Act involves some higher compliance standards and requirements, and that companies need to consider whether they can accept the compliance costs and the potential liability for non-compliance before entering the EU market.

"It is not accurate to say that the EU pays no attention to the economic impact of its restrictions," Zhao Xinhua said. Judging from the scope and content of the AI Act, the EU is also trying to limit the regulation's impact on innovation, for example through relatively loose management of lower-risk AI systems. Among the measures to support innovation, a special AI regulatory sandbox has been set up for companies to test innovative products, services, business models, and delivery mechanisms without immediately facing regulatory consequences for those activities. This flexible regulatory system for technological innovation gives small businesses more room to develop. The act has considerable flexibility, its risk controls are gradual and progressive, and it is expected to have a smaller negative impact on the economy.

In terms of specific regulations, the industry and the legal community are still watching which requirements may increase compliance costs.

The "AI Act" sets very detailed requirements for high-risk AI systems, covering risk management systems, data and data governance, technical documentation, record keeping, transparency and the provision of information to deployers, human oversight, and accuracy, robustness, and cybersecurity. Specific requirements include, but are not limited to: establishing, implementing, documenting, and maintaining a risk management system for the high-risk AI system; preparing technical documentation and providing necessary information to national authorities and notified bodies; and preparing and processing the training, validation, and testing datasets, including annotation, cleaning, and updating, while taking measures to detect, prevent, and mitigate potential biases.

The "AI Act" also requires providers of general-purpose AI models to prepare and regularly update technical documentation, keep information and documentation up to date for AI system providers who intend to integrate the model, establish a policy to comply with EU copyright law, and draft and publicly disclose a sufficiently detailed summary of the content used to train the model. General-purpose AI models identified as posing "systemic risks" must fulfill additional obligations.

A research and development head at a major domestic internet company told reporters that, among the requirements for high-risk AI systems, the ones most likely to increase costs concern data processing and the prevention of certain biases, as they involve specialized data handling and fine-tuning of models to avoid illegal outputs. Major domestic model developers have certainly done work in this area, but standards vary by region, and meeting a new standard means redoing that work.

Zhao Xinhua stated that high-risk AI system providers are the companies that provide AI systems and place them on the EU market. The act specifically lists providers' obligations, including preparing the corresponding technical documentation to demonstrate compliance with the act's requirements, issuing a declaration of conformity in accordance with EU regulations, and affixing the CE conformity marking. High-risk AI systems must complete a conformity assessment procedure before being placed on the market.

"These regulations are very specific, setting different obligations for providers, deployers, importers, and distributors," said Zhao Xinhua. If large model companies provide services in the EU market, in addition to needing lawyers to assess and advise on compliance and legal aspects, they also involve third-party conformity assessment procedures, and some companies need to find third-party certification bodies to add the CE mark. In addition, the "AI Act" also requires a summary of the main technical data of AI system algorithms, which may involve the company's internal business and technical teams, as well as external third-party entities.

Beyond cost, whether AI companies are willing to meet the requirements of the "AI Act" to enter the EU market is also a question.

The "AI Act" requires transparency about the data used in the pre-training and training of general-purpose AI models: model providers should draft and publicly disclose a sufficiently detailed summary of the content used for training. While taking appropriate account of the need to protect trade secrets and confidential business information, the summary should be comprehensive.

"Today, the public technical reports of large models mostly gloss over the pre-training data, yet that part should in fact be the most substantial," a developer at a leading domestic large-model company told reporters. The effectiveness of a large model is 80% down to its training data. The details go undisclosed because training data is, to some extent, an enterprise's core secret. At the same time, the training data of many large models comes from various online channels: if it is not disclosed, no one may trace the data's sources, but once disclosed, it may expose copyright issues.

The technical personnel quoted above told reporters that if detailed technical documentation must be submitted externally, large model companies may be unwilling to provide it; this is not a matter of cost, but of companies wishing to keep their technology confidential.

The "AI Act" will also be implemented in stages, and many issues currently remain ambiguous or awaiting clarification. Chen Jidong told reporters that high-risk systems are still difficult to define, and that enterprises facing the many obligations imposed on high-risk AI systems tend to avoid classifying themselves as high-risk. The EU or related parties will also need to provide supporting back-end infrastructure for the law to operate smoothly.

Wu Shenkuo believes that, as for potential shortcomings, whether the act matches the logic of industrial research, development, and application closely enough still requires further observation.

"The ambiguity and points of contention mainly concern definitions and specific compliance requirements, such as the precise boundaries of high-risk AI systems and the distinction between foundation models and application models. There is also uncertainty about how to effectively assess and monitor the transparency and fairness of AI systems," Wang Jie believes.

"I think some large domestic enterprises have already prepared for compliance, but most have not yet made systematic preparations and are taking a wait-and-see attitude. Whether to prepare for compliance depends more on strategic awareness. Enterprises must fundamentally recognize that humanity has entered a high-risk era of artificial intelligence, and that managing its risks has become a consensus," Chen Jidong stated.
