
China’s Legal Practices Concerning Challenges of Artificial General Intelligence



1. Background

The world now stands at a historical intersection of a new round of technological revolution and industrial transformation. Following industrialization and informatization, intelligence has become the new developmental trend of the era (Sun and Li 2022). Driven by national policies promoting the digital economy and by the demand for high-quality economic development, AI technology and industry have advanced rapidly. As technological innovation grows more active and industrial integration deepens, technologies such as intelligent automation, recommendation, search, and decision-making have become deeply embedded in enterprise operations and social services, bringing significant economic and social benefits. In short, artificial general intelligence (AGI) is playing an increasingly crucial role in optimizing industrial structures, invigorating economic activity, and supporting economic development (Guo and Hu 2022).
Generally speaking, artificial intelligence refers to algorithms or machines that learn, decide, and act autonomously on the basis of a given amount of input information. The development of AI rests on improvements in computer processing power, advances in algorithms, and the exponential growth of data (Cao and Fang 2019). Since John McCarthy first proposed the concept of artificial intelligence in 1956, the field’s progress has not always been smooth: it has experienced three periods of prosperity driven by machine learning, neural networks, and internet technologies, as well as two periods of stagnation caused by insufficient computing power and immature reasoning models (Jiang and Xue 2022). With the deepening deployment of AI and the recent popularity of technologies like GPT-4, a new wave of artificial intelligence-generated content (AIGC) has emerged, hinting at the capabilities of AGI. However, generative artificial intelligence (GAI) has also raised concerns because of its inherent technical flaws and issues such as algorithmic black boxes, decision biases, privacy breaches, and data misuse, leading to a crisis of trust. Although AGI has not yet matured into a finished product, it possesses a higher level of intelligence than GAI: it is expected to bring greater convenience to human life, but it is also likely to trigger a more severe trust crisis.
In this context, the key to addressing the challenges of AGI development lies in providing a governance framework that balances ethics, technology, and law (Zhao 2023). This framework should respect the laws of technological development while aligning with the requirements of legal governance and the logic of scientific and technological ethics. However, both theoretical research and practical experience indicate that current AGI governance lacks specificity, systematicity, comprehensiveness, and a long-term perspective. Systematic, scientific legal methods are therefore urgently needed to ensure and promote a positive cycle between technological breakthroughs and high-level competition. This approach should integrate technological, industrial, institutional, and cultural innovation and advance the innovative development of AGI.

This article takes issues arising from representative GAI products and services as examples, on which basis it discusses the supporting elements of data, algorithms, and computing power in the training of GAI models and then extends the argument to AGI. It also examines how to safeguard the innovative development of AGI in light of China’s current responses to these challenges. On this basis, the article proposes legal solutions to promote the innovative development of AGI, with the aim of enriching theoretical research in this field.

2. Challenges of Generative Artificial Intelligence Technology

Science and technology are the primary productive forces, and scientific and technological progress is an indispensable driver of industrial development. With advances in technologies such as GAI, people have discovered that AI can accomplish tasks previously thought impossible. At the same time, they have realized that the safety challenges posed by AI’s development and its deep integration into daily life are becoming increasingly complex.

Artificial intelligence is commonly divided into specialized artificial intelligence and general artificial intelligence. Specialized artificial intelligence, also known as “narrow AI” or “weak AI”, refers to AI programmed to perform a single task. It extracts information from specific data sets and cannot operate outside its designed task scenarios; it is characterized by strong functionality but poor generality. General artificial intelligence (AGI), also known as “strong AI”, “full AI”, or “deep AI”, possesses general human-like intelligence that enables it to learn, reason, solve problems, and adapt to new environments as a human would. AGI can address a wide range of problems without requiring specially encoded knowledge for particular application areas.

With the emergence of large models like GPT-4 that demonstrate powerful natural language-processing capabilities, the possibility of achieving AGI through a “big data model + multi-scenario” approach has increased. Although no technology has yet fully reached the level of AGI, some scholars believe that certain generative AI models have come close (Bubeck et al. 2023).

Currently, the security issues raised by systems like GPT-3.5, which are characterized by autonomous intelligence, data dependency, the “algorithmic black box”, and a lack of interpretability, have attracted widespread attention. If products that truly meet AGI standards emerge, they could pose even greater security challenges, with potentially more severe consequences and broader impacts on national security, social ethics, and the safety of individual life and property. It is therefore essential to examine the specific risks posed by generative AI in order to ensure that the innovative development of GAI benefits human society without causing harm.

2.1. Ethical Risks in Science and Technology

Scientific research and technological innovation must adhere to the norms of scientific and technological ethics, which are crucial for the healthy development of scientific activity. Generative AI can now produce content in text, image, audio, and video formats, and its fields of application are extremely broad. However, the lack of established usage norms for the technology poses ethical risks and breeds distrust in AI applications. The problem is especially serious during the transition from weak AI to strong AI, where AI’s increasing autonomy presents unprecedented challenges to traditional ethical frameworks and to the fundamental nature of human thought.

GAI services excel in areas such as news reporting and academic writing, making the technology an easy tool for fabricating rumors and forging papers. The academic journal Nature has published multiple analytical articles on ChatGPT, discussing the potential disruptions that large language models (LLMs) like ChatGPT could bring to academia, the infringement risks associated with generated content, and the need to build usage regulations (Stokel-Walker and Van Noorden 2023). Foreseeably, the absence of clear ethical standards could lead to frequent academic fraud, misinformation, and rumor spreading, eroding trust in AI technology. This distrust could even extend to situations in which AI technology is not used (Chen and Lin 2023).
Moreover, the responses GAI provides through data and algorithms are uncertain. With the continuous iteration of GAI, some technologies are considered to be approaching the level of AGI and human-like intelligence. As GAI develops further, it raises profound questions about whether the technology will independently adopt ethical principles similar to those of humans. To address this, some scholars have proposed incorporating human factors engineering into the research and development of AGI, with the aim of guiding advanced GAI and other AI systems toward being safe, trustworthy, and controllable (Salmon et al. 2023).

2.2. Challenges in Responsibility Allocation

Safety incidents involving AGI can affect not only the security of devices and data but can also lead to serious production accidents that endanger human life. In recent years, incidents involving autonomous driving technologies from companies such as Google, Tesla, and Uber have intensified the ethical debate over whether humans or AI should bear responsibility. If responsibility allocation is not clearly defined in advance, obtaining remedies and defending rights after an infringement may become more difficult, fostering public distrust of AI. Moreover, AI products could develop in ways that deviate from social ethics and legal norms, ultimately threatening economic and social order.

In legal terms, the legal and ethical standards for AI remain underdeveloped, and infringement incidents are frequent. In the U.S., three artists, Sarah Andersen, Kelly McKernan, and Karla Ortiz, filed a lawsuit against AI companies and platforms such as Stability AI and Midjourney, claiming that the data used in their training processes infringed the copyrights of millions of artists1. When such incidents occur, identifying the liable party and correctly allocating responsibility becomes a major challenge. The concept of the “responsibility gap”, introduced by Andreas Matthias in 2004, refers to the inability of algorithm designers and operators to foresee the outcomes of an algorithm’s autonomous learning process. Because humans lack sufficient control over the machine’s actions, its builders and operators cannot fairly be held liable under traditional fault-based attribution (Matthias 2004).
In terms of application, GAI technology is universally accessible: its usage and cost thresholds are low, so a wide range of people can easily access and use it. This accessibility increases the risk of infringement. For example, AI makes it simple to create and disseminate false information, and some users deliberately fabricate and spread rumors to boost web traffic, increasing the frequency of misinformation (Chen and Lin 2023).

2.3. Intellectual Property Challenges

With the widespread application of GAI, concerns have arisen over the legality of the training-data sources for large AI models and over whether the content they generate can be considered a work.

While it is widely accepted that GAI, as a computer program, can be protected as intellectual property, significant controversy remains over the intellectual property issues raised by training on massive data. Without clear boundaries or definitions for intellectual property in data, a “tragedy of the commons” can easily result; conversely, overemphasizing the protection of data as intellectual property can hinder technological development, producing an “anti-commons tragedy” (Peng 2022). Scholars are actively discussing how to balance the protection of intellectual property in data with the advancement of technological innovation.

Furthermore, there is debate over whether AI-generated content can be recognized as a work. GAI produces content based on extensive data training and continuously refines its output according to user feedback. It is therefore difficult to determine whether the content is entirely autonomously generated by AI, which leads to disputes. Some scholars argue that GAI merely mimics the human creative process and that its output is not a product of human intellect. In practice, however, a few jurisdictions do recognize computer-generated content as a work. For instance, Section 9(3) of the UK’s Copyright, Designs and Patents Act 1988 (CDPA) provides that computer-generated works can receive copyright protection.

Finally, although there is no consensus on the ownership of AI-generated content, most scholars agree that AI itself cannot be the rights holder of a work. In the U.S. “Monkey Selfie” copyright dispute, a U.S. District Judge ruled that copyright law does not extend its protection to animals and that a work must be created by a human to be copyrightable2. The case indicates that the U.S. does not recognize copyright for non-human entities, meaning that AI, as a non-human entity, likewise does not enjoy copyright protection. Similarly, Article 2 of China’s Copyright Law stipulates that works created by Chinese citizens, legal entities, or unincorporated organizations are protected by copyright. Evidently, AI is not a subject of copyright under Chinese law either.

2.4. Data-Related Risks

Data elements hold immense potential value. If that value is fully realized through the pattern of potential value, value creation, and value realization, it can significantly drive social and economic development (B. Chen 2023). As users grow more conscious of protecting their data privacy and as the risks of data breaches increase, striking a balance between data protection and data-driven AI research is crucial to earning public trust in AI technology.

In GAI technology, the first type of risk is inherent in the training data. The training outcomes of GAI models depend directly on the input data, yet, owing to limitations in data-collection conditions, different groups are unevenly represented. For example, current training corpora are predominantly in English and Chinese, making it difficult for languages with fewer speakers to be integrated into the AI world, which imposes certain limitations.

The second type of risk arises from the processes of data collection and usage. With the advancement of internet technology, personal information has proliferated and become easier to collect. The growing scale of data is both the key to delivering GAI services and a primary source of trust crises. The training-data volume for GPT-4 reportedly reached 13 trillion tokens (Patel and Wong 2023). Although mainstream GAI service providers have not disclosed their data sources, these data are known to come mainly from public web-scraping datasets and large human-language datasets. Accessing and processing such data in a secure, compliant, and privacy-protective manner is a challenge that demands higher standards for technical security safeguards.

2.5. Algorithm Manipulation Challenges

In the AI era, the uncontrollability introduced by the statistical nature of algorithms, the autonomous learning ability of AI, and the inexplicability of deep-learning black-box models have become new sources of user distrust. In terms of technical logic, algorithms play a core role in the hardware infrastructure and applications of GAI, shaping user habits and values (L. Zhang 2021). Most algorithmic challenges stem from one uncontrollable technical defect: the black-box nature of AI models’ decision-making processes.

First, algorithms lack stability. GAI faces various attacks that target its data and systems, such as virus attacks, adversarial attacks, and backdoor attacks. For instance, feeding malicious comments into a model can effectively skew a recommendation algorithm, producing inaccurate recommendations.
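
To make the mechanism concrete, the toy Python sketch below (hypothetical data and a deliberately naive average-based recommender, not any production system) shows how a flood of fake ratings can flip a recommendation, the kind of data-poisoning or “shilling” attack alluded to above.

```python
# Toy data-poisoning ("shilling") attack on a naive recommender.
# Hypothetical data; the recommender simply ranks items by mean rating.
from statistics import mean

ratings = {
    "item_a": [4, 5, 4, 5],  # genuinely well-liked item
    "item_b": [2, 3, 2],     # genuinely mediocre item
}

def top_recommendation(all_ratings):
    # Recommend the item with the highest mean rating.
    return max(all_ratings, key=lambda item: mean(all_ratings[item]))

print(top_recommendation(ratings))  # -> item_a

# An attacker floods item_b with fake five-star ratings.
ratings["item_b"].extend([5] * 20)

print(top_recommendation(ratings))  # -> item_b: the output has been manipulated
```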

Second, the explainability of algorithms needs improvement. Machine-learning algorithms, particularly those based on deep learning, are essentially end-to-end black boxes. On the one hand, the internal processes and operating mechanisms of large models containing vast numbers of parameters remain unclear; on the other hand, it is unclear which specific data in the database influence an AI algorithm’s decisions.

Lastly, algorithmic bias and discrimination remain unresolved. These problems arise from both internal and external factors. Internally, if algorithm developers set discriminatory factors or misconfigure certain parameters during development, the algorithm will inherently exhibit biased tendencies. Externally, since GAI optimizes its content based on feedback, any biases or discrimination present in the feedback data will affect the final generated content.

3. China’s Legal Practice for AGI

Artificial intelligence is a double-edged sword, bringing society both convenience and risk. The trust crisis caused by the application of AI technology hinders further innovation and development. Countries around the world have accumulated governance experience in aligning the rule of law with the development of AI technology, and there has been substantial research on governance principles, implementation rules, and accountability mechanisms.

The European Union has adopted a strict regulatory approach, aiming to address AI-related issues through specialized institutions and legislation. In April 2021, the European Commission proposed the “Artificial Intelligence Act”, which focuses on threats posed by AI to personal safety and fundamental rights. On 28 September 2022, the European Commission released the proposal “AI Liability Directive”, further clarifying the allocation of responsibility within AI systems, especially regarding damage compensation and risk management. In March 2024, the EU passed the world’s first comprehensive AI regulatory law, the “Artificial Intelligence Act”, which emphasizes strict regulation of high-risk AI systems, marking a new phase in AI governance.

In contrast, the United States has adopted a lenient regulatory approach and has yet to establish systematic AI governance legislation. In October 2022, for example, the White House issued “The Blueprint for an AI Bill of Rights: Making Automated Systems Work for The American People”. While this document guides federal measures on AI development, it is not legally binding and has limited governance effect. As the pace of AI development accelerates, however, the U.S. government is gradually increasing its intervention. In October 2023, President Biden signed an executive order on AI regulatory measures to ensure that the U.S. takes a leading role in managing AI risks, and in June 2024 the Federal Trade Commission launched an investigation into certain companies’ AI investments and partnerships.

In summary, different countries make reasonable choices based on their actual circumstances and adopt various regulatory attitudes and methods for AI supervision. With the ongoing research and development of AI and the expanding scope of its risks and impacts, both the U.S.’s lenient regulatory model and the EU’s stringent regulatory model have their merits. China is forming a governance approach with Chinese characteristics based on other countries’ experiences. This involves enacting laws and regulations to prevent the potential risks of AI while providing clear and reasonable guidance for enterprises’ development.

3.1. Compliance with the Law: The Normative Values for AGI Development

Given the risks associated with AI development, the Chinese government has intervened promptly, enacting legislation to promote the regulated development of AI within a rule-of-law framework and thereby preemptively addressing the challenges posed by AGI. As early as 31 October 2018, at the ninth collective study session of the Political Bureau of the 19th CPC Central Committee, it was explicitly stated that “we must strengthen the assessment and prevention of potential risks in AI development to ensure AI is safe, reliable, and controllable”. More recently, an “Artificial Intelligence Law Draft” was included in the State Council’s 2023 legislative work plan. All of this demonstrates China’s proactive attitude toward, and emphasis on, supporting and regulating the development of AI technology and industry.

The Cyberspace Administration of China, together with six other departments, issued the “Interim Measures for the Management of Generative AI Services” (hereinafter the “Interim Measures”), which took effect on 15 August 2023. The Interim Measures focus on ex ante regulation, enhancing GAI security governance through preventive supervision. The “Artificial Intelligence Law (Scholars’ Draft)”, released in March 2024, refines and reconstructs the regulatory targets, bodies, tools, and content of AI risk regulation, outlining the basic framework of an AI regulation system (Hu and Liu 2024).

Additionally, some regions in China have actively pursued local legislation, taking the lead in exploring AI regulation. According to statistics, more than 60 local regulatory documents have been issued across China, addressing various aspects of AI industry development. China has thus established a preliminary regulatory framework covering AI technologies, core elements, and various application scenarios, laying a solid legislative foundation.

Therefore, while vigorously developing AI, China attaches great importance to safety challenges and has set clear safety governance objectives, adopting a comprehensive approach that combines regulations, standards, and technical support to implement agile governance. Given the current state of AI development in China, the ultimate goal of regulation is to sustain the new wave of scientific and technological advancement, including AGI. Beyond safety principles, therefore, China also focuses on accelerating AGI development, fostering a market environment of competitive innovation that promotes sharing and innovation among AI enterprises.

3.2. Trustworthiness: The Fundamental Value of AGI Development

Trustworthiness is the primary principle, or “imperative clause”, to be followed at the current stage of AGI innovation, and it is the focus of AI governance policy-making (J. Chen 2023). Although a unified definition of trustworthy AI has yet to emerge, its core principles include stability, interpretability, privacy protection, and fairness. Stability refers to the ability of AI to make correct decisions despite environmental noise and malicious attacks. Interpretability means that AI decisions must be understandable by humans. Privacy protection refers to an AI system’s ability to safeguard personal or group privacy from breaches. Fairness implies that an AI system should accommodate individual differences and treat different groups equitably.
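
Of these four principles, fairness is perhaps the most readily operationalized. The toy Python sketch below (hypothetical data and an assumed demographic-parity criterion, one of several possible fairness metrics) illustrates how treating groups equitably can be audited by comparing positive-outcome rates across groups.

```python
# Toy fairness check: compare positive-outcome rates across two groups
# (an assumed demographic-parity criterion; real audits use richer metrics).
def positive_rate(decisions, groups, group):
    selected = [d for d, g in zip(decisions, groups) if g == group]
    return sum(selected) / len(selected)

decisions = [1, 0, 1, 1, 0, 1, 0, 0]  # 1 = favorable model decision
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]

rate_a = positive_rate(decisions, groups, "a")
rate_b = positive_rate(decisions, groups, "b")
print(f"group a: {rate_a:.2f}, group b: {rate_b:.2f}, gap: {abs(rate_a - rate_b):.2f}")
```
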
In China, He Jifeng, an academician of the Chinese Academy of Engineering, first proposed the concept of “trustworthy AI” at the Xiangshan Science Conference in November 2017 (Y. Zhang 2021). On 25 September 2021, the National New Generation Artificial Intelligence Governance Expert Committee released the “Ethical Norms for the New Generation of Artificial Intelligence”, which, in Article 3, sets out ensuring controllability and trustworthiness as one of the basic ethical norms for all AI activities, stating that “artificial intelligence must always remain under human control”. In 2023, the China Academy of Information and Communications Technology (CAICT), in collaboration with Tsinghua University and Ant Group, released the “White Paper on Trustworthy AI Technology and Application Progress (2023)”, which reviews the state of trustworthy AI technology and applications and emphasizes the relationship between managing safety risks in AI applications and user trust in AI.
With the emergence of GAI services and the accompanying trust crisis, companies have had to confront user trust, technical risk, and competition when leveraging AGI to empower the digital economy. In June 2020, Ant Group unveiled its Trusted AI technology architecture at the Global Artificial Intelligence Conference (Zhou 2022). In July 2021, the Jing Dong Exploration Research Institute and the China Academy of Information and Communications Technology jointly released China’s first “Trusted AI White Paper” at the World Artificial Intelligence Conference (CNR 2021). Both highlight privacy protection, stability, interpretability, and fairness as the four fundamental principles of trustworthy AI.
Although different documents express “trustworthiness” in various ways, the core requirement can be summarized as “legitimacy” in AI development: AI’s goals must align with human goals, and its actions and decisions should conform to human values and ethics (Yang 2024). AI should also adhere to external principles, such as human oversight and technological neutrality, as well as internal principles such as transparency, safety, and accountability, in order to uphold the rule of law, protect citizens’ fundamental rights, and safeguard human well-being.

3.3. Human Centric: The Value Orientation of AGI Development

The safety baseline for the innovation and development of AGI rests on three elements: people, technology, and trust. Technology is the foundation of the AGI industry’s robust growth and stability; trust is the pillar of its continuous and healthy development; and people are the core objects of protection for trustworthy AI laws and policies. A human-centric approach is the fundamental principle of AGI innovation and development. In fact, trust depends not only on the underlying logic of AI development and application but also on how well the law supervises AI technology. The key question is whether AI’s trustworthiness can be achieved from a legal regulatory perspective.

In recent years, China has undertaken various legal explorations and practices to promote a human-centric approach to AGI. In terms of legislation, Shanghai issued the “Shanghai Regulations on Promoting the Development of the AI Industry” in September 2022, which emphasize the trustworthiness of AI algorithms, ethics, governance, and supervision and provide detailed rules on technical standards, data security, and personal information protection in the field of intelligent connected vehicles. In addition, the Shanghai municipal government is to establish a “Shanghai AI Strategic Advisory Expert Committee” to advise on major strategies and decisions in AI development, as well as an “AI Ethics Expert Committee” to formulate ethical guidelines and promote discussion and standard-setting on major ethical issues in AI, both domestically and internationally.

As social understanding deepens, the participants involved in ensuring the innovative development of AGI will become more diverse, fostering coordinated interaction among the various entities and elements of the industry chain. On the legal foundations of the “Cybersecurity Law”, the “Data Security Law”, and the “Personal Information Protection Law”, AI governance systems with local characteristics are gradually taking shape. Industry associations, alliances, and research institutions also play an active role in formulating and publishing standards, and their achievements in areas such as safety and reliability provide a reference for the human-centric development of AGI. Specific incidents, such as the “first case of facial recognition” (Bing Guo vs. Hangzhou Safari Park Co., Ltd., a service contract dispute) and the privacy debates sparked by “Miaoya Camera” (The Paper 2023), have drawn widespread public attention, deepening public understanding of and demand for AI and significantly raising participation. This demand-driven dynamic pushes the development of AGI in a human-centric direction.

China actively guides human-centered AI development, not only within its domestic judicial practices but also by promoting international cooperation. At the World Artificial Intelligence Conference held in Shanghai in July 2024, the Shanghai Declaration on Global AI Governance was issued, emphasizing the importance of developing human-centered AI. Currently, AI development faces unprecedented challenges, particularly in the areas of safety and ethics. Only through global cooperation and collective efforts can AI’s potential be fully realized to bring greater benefits to humanity, and China plays a key role in this process.

4. Legal Strategies for the Innovative Development of AGI

Given the characteristics of autonomy, data dependence, and the “algorithm black box”, AI faces significant security and ethical challenges in technological development, application derivation, data security, and privacy protection. These challenges could have severe consequences for national security, social ethics, and personal life and property. Ensuring that the innovative development of AGI benefits human society is therefore a significant challenge, and research into AI governance is urgently needed.

4.1. Establishing Norms for Technology Ethics Supervision and Management

Currently, technology ethics governance suffers from inadequate mechanisms, imperfect systems, and uneven development, and is insufficient to meet the needs of AI industry innovation and competitiveness. It is therefore necessary to accelerate the establishment of technology ethics norms across multiple domains and to enhance supervision. On the one hand, risks in the many application fields of GAI services should be precisely identified and tracked to improve governance responsiveness and regulatory efficiency. On the other hand, clear ethical rules for GAI should be defined to promote ex ante regulation and comprehensive oversight and to guide companies toward compliance.

4.1.1. Establish a Mechanism for Identifying and Tracking Technology Ethics Risks

Based on the concept of risk classification, the purpose of technology ethics risk identification is to provide a preliminary factual basis for differentiated response mechanisms (Shi and Liu 2022). For example, the EU’s “Artificial Intelligence Act” categorizes AI activities, systems, and models into three main categories and six subcategories according to their risk levels, imposing different levels of prohibition and control and focusing particularly on managing the safety risks of high-risk AI systems and AGI models (Pi 2024). This approach enhances regulatory clarity and guides GAI service providers toward compliance. It also helps to identify deficiencies or risks in data sources, operational paths, and output content in advance, avoiding the delays of purely after-the-fact regulation.
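
As a rough illustration of how risk classification can drive differentiated obligations, the Python sketch below uses the commonly cited four-tier reading of the EU Act (prohibited, high, limited, minimal); the use cases, tier assignments, and obligation lists are simplified assumptions for illustration, not the Act’s actual legal tests.

```python
# Illustrative risk-tier lookup loosely modeled on the commonly cited
# four-tier reading of the EU AI Act. Use cases, tier assignments, and
# obligations are simplified assumptions, not the Act's legal tests.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited practice"
    HIGH = "high-risk system"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"

USE_CASE_TIERS = {
    "social_scoring": RiskTier.PROHIBITED,
    "medical_diagnosis": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

OBLIGATIONS = {
    RiskTier.PROHIBITED: ["ban on deployment"],
    RiskTier.HIGH: ["conformity assessment", "risk management", "human oversight"],
    RiskTier.LIMITED: ["transparency disclosure to users"],
    RiskTier.MINIMAL: [],
}

def obligations_for(use_case):
    # Unknown use cases default to minimal risk here purely for illustration.
    tier = USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
    return OBLIGATIONS[tier]

print(obligations_for("medical_diagnosis"))
```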

However, if the scope and intensity of pre-review are set improperly, they may inhibit the R&D and training efficiency of GAI, objectively slowing its development. A reasonable review scope should therefore be built on the integration of security and development, striking a balance between security and innovation.

4.1.2. Accelerate the Establishment of Technology Ethics Review and Supervision Systems

On 4 April 2023, the Ministry of Science and Technology of China sought public comments on the “Measures for the Ethical Review of Science and Technology (Trial)”. Article 4 proposes that technology ethics reviews adhere to principles of science, independence, fairness, and transparency, providing guidance for an open review system and procedures. Compared with self-filing and self-assessment, external regulatory methods such as technology ethics review carry greater binding force and can push technology developers to improve compliance. Clear guidelines also offer a way to address the ethical challenges of AI, effectively enhancing AI’s trustworthiness and upholding social fairness and justice.

4.1.3. Strengthen Regular Safety Supervision of Technological Ethics

Given that AGI models are deployed in critical fields such as industrial manufacturing, commercial services, warehousing and logistics, household use, and medical services, where they interact deeply with users, the safe application of the technology must be ensured through regular supervision. Under regular supervision, precision is crucial to addressing risks, and regulation is the baseline requirement of risk governance. It is critical to focus on the application areas and scenarios of AI technology, establish reasonable accountability systems and technical ethics standards, and proactively avert ethical risks.

4.2. Improve Rules for Identifying and Bearing Liability for Infringement

The realization of legal liability relies on the existence and determination of a responsible entity and on the principle of attribution. Clarifying the entities that bear liability and the principles of attribution for AI infringements helps both to avert risks in advance and to encourage the development of the AI industry. Although the increasing autonomy of AI challenges traditional causality theories and complicates the attribution of responsibility, human values and intentions shape every stage of AI algorithm design, development, and deployment, which justifies treating subjective fault in algorithm design and deployment as a basis for liability.

4.2.1. Adopt a Preventive Liability Approach

Preventive liability manages potential risks through preemptive measures before they materialize into actual harm. Given the uncontrollability and unpredictability of new AI technologies, overly stringent preventive liability could stifle innovation by imposing high compliance costs on enterprises. Institutional design should therefore balance the interests of data handlers, algorithm developers, service providers, and users. Instead of forcing a binary choice between “prohibition” and “complete tolerance of harm”, it should allow intermediate or partial exclusion, avoiding litigation that becomes a zero-sum game (Hu 2015).

4.2.2. Clarify the Liability-Bearing Entities

AGI large models are multi-purpose and multi-functional, serving both consumer interactions (C-end) and industry services (B-end). Requiring providers to mitigate all risks associated with every application would demand prior review and prevention of various unforeseeable risks (Ding 2024). Given the multi-field, multi-entity applications of generated content, a “safe harbor” principle should govern liability allocation: technology providers do not automatically bear all responsibilities and obligations but are liable only if they fail to meet specified duties and to take necessary risk-prevention measures.

Providers of AGI models should undoubtedly assume specific obligations related to certain risks, but these obligations should be limited to areas such as national security and public safety and to reasonably foreseeable risks. For instance, providers should bear pre-screening responsibilities for content related to terrorism and obscenity. For non-public infringement risks, however, where service providers bear no subjective fault, the safe harbor principle should apply, exempting them from tort liability.

4.2.3. Standardize Compensation Liability Methods

At a macro level, compensation for loss addresses the victim’s financial interests and provides only monetary restitution (C. Li 2010). At a micro level, AI-induced damage to ethical order, life, or emotions cannot be fully remedied with money alone. To ensure victims receive adequate compensation, laws and regulations must be well coordinated to explore reasonable compensation calculations and optimal methods of comprehensive interest relief. In conducting phased and dynamic assessments of interests, courts should fully consider the feasibility and economics of AI and prioritize compensation amounts that protect victims’ long-term interests and overall social benefit.

4.3. Balance Fair Competition and Innovation

The text-generation models behind GAI are characterized by large scale, self-supervised training, and strong generalization. Building, training, and maintaining them requires substantial human resources, computing power, and data, and once trained, such large-scale AI models easily outperform smaller, domain-specific models. In other words, the GAI industry requires significant upfront investment and long development cycles, yet once a model is released, its efficiency, lower costs, and broad applicability give its developer a significant competitive advantage. To address the potential monopoly risks of GAI, intellectual property protection must be balanced with antitrust measures, and the protection of competition with the encouragement of innovation.

4.3.1. Balance the Scope of Intellectual Property Law and Antitrust Law

Intellectual property (IP) rights are exclusive private rights whose inherent, legally sanctioned monopoly is used to build market power. If such rights are abused, however, they may exclude or restrict competition. Antitrust law, by contrast, is public law that limits monopoly power: it respects private rights such as IP but prevents their abuse. Where the exercise of IP rights excludes or restricts competition, it also becomes subject to antitrust regulation. For AGI, both IP protection and competition regulation are needed, and a balance must be sought between them to protect competition and stimulate innovation.

On the one hand, innovation should be encouraged by creating a suitable developmental and competitive market environment for the AGI industry; a fair, open, and orderly market ensures AGI’s healthy development and social benefits. On the other hand, monopoly risks in upstream and downstream industries deserve attention and timely regulation. For instance, AGI applications may encourage vertical integration by large tech companies, producing monopolistic and anti-competitive effects. Regulatory authorities therefore need to monitor monopolies in the chip, cloud computing, and downstream application markets and implement targeted regulation when necessary.

4.3.2. Adhere to Prudent Regulation Principles to Encourage Innovation

AGI encompasses a variety of advanced technologies, and its innovation and application inevitably involve risks, such as safety incidents and data breaches caused by technical flaws and other factors. It is therefore necessary to establish a fault-tolerant mechanism and to treat innovation failures and errors with tolerance and prudence. Opportunities for correction and improvement should be provided when deviations or errors occur in the application of AGI, so as to counter the innovation-suppressing effects of institutional barriers and outdated legal norms.

4.4. Enhance Data Security of Artificial Intelligence

Data are the raw materials that form the foundation of artificial intelligence. Given that AGI relies on large models, it demands substantial data volumes and necessitates a focus on strengthening data-security protections and avoiding data-security risks (Fan and Zhang 2022). From the era of the mobile internet to the era of artificial intelligence, the use of data has consistently expanded in both breadth and depth. This evolution underscores the need to ensure data security through robust market competition, comprehensive legal mechanisms, and advanced technical security measures to effectively safeguard user privacy and data security.

4.4.1. Establish a Data Classification and Grading Protection System

In the governance of AGI, data-security protection should be the central focus. Given data’s characteristics, such as shareability, reusability, multiple ownership, high dynamism, and weighted usage attributes, a dynamic approach should be employed to balance data development and security. To this end, China’s “Data Security Law” establishes a data classification and grading protection system at the national level, matching protection and management measures to the importance of the data. Subsequently, the “Twenty Data Measures” classified data into public data, enterprise data, and personal information data, constructing on this basis a system of rights and obligations for data utilization.

To manage the large-scale parameters of GAI models systematically, a data classification and grading protection system should first be established that integrates the application fields of AI services with the inherent properties of AI algorithms, promptly clarifying the data-security standards to be met. Meanwhile, data-security protection mechanisms should be matched to different types and risk levels of data, ensuring that AI’s technological goals are maintained and advanced while data-security threats are addressed.
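
To illustrate the idea, the Python sketch below pairs the public/enterprise/personal categories mentioned above with an assumed three-level sensitivity grading and maps each combination to protection measures; the categories, levels, and measures are simplified assumptions for illustration, not the legal requirements themselves.

```python
# Illustrative classification-and-grading lookup: category plus an assumed
# 1-3 sensitivity grade determine the protection measures applied.
from dataclasses import dataclass

@dataclass
class DataAsset:
    name: str
    category: str     # "public" | "enterprise" | "personal"
    sensitivity: int  # 1 (low) to 3 (high); an assumed grading scale

def protection_measures(asset):
    measures = ["access logging"]  # baseline for all data
    if asset.category == "personal":
        measures += ["consent check", "de-identification before model training"]
    if asset.sensitivity >= 2:
        measures.append("encryption at rest")
    if asset.sensitivity == 3:
        measures.append("security review before cross-border transfer")
    return measures

print(protection_measures(DataAsset("user_chat_logs", "personal", 3)))
```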

4.4.2. Promoting Preemptive Data-Security Regulation

Preemptive data-security regulation implements measures to identify and mitigate risks during data processing and usage, ensuring data security up front. This proactive approach steers the AI training process toward greater standardization, prevents the damage that delayed, after-the-fact regulation allows, and better safeguards users’ rights to information and choice, thereby enhancing algorithmic trustworthiness.

4.4.3. Review Existing Privacy Protection and Compliance Mechanisms

Current practices in mobile internet personal information protection interpret the necessity principle very strictly to prevent improper data collection and aggregation. For instance, the “Methods for Identifying Illegal Collection and Use of Personal Information by Apps”, jointly formulated by the Cyberspace Administration of China, the Ministry of Industry and Information Technology, the Ministry of Public Security, and the State Administration for Market Regulation, stipulates that personal information cannot be collected solely for improving service quality or developing new products. While this strict compliance approach protects user privacy, it also restricts the data available for training AI systems. Conversely, relaxing these restrictions might introduce privacy and security risks. Therefore, how privacy protection rules can be applied to future AI application scenarios warrants careful consideration and discussion.

4.5. Strengthen AI Algorithm Regulation

The rapid development of GAI services not only highlights the explosive growth in data demand for large models but also reflects a marked increase in algorithmic complexity. Algorithm transparency and explainability face unprecedented challenges. To address this algorithm crisis, a comprehensive governance system must be built around AI algorithms, implementing specialized, systematic governance.

4.5.1. Establish a Legal Framework and Regulatory Mechanisms

A robust legal framework and regulatory mechanism are needed to prevent algorithmic discrimination. Laws such as the “Cybersecurity Law” and the “Data Security Law” can delineate reasonable boundaries for algorithm governance activities and impose corresponding care duties on service providers. Additionally, specific legislation for areas like facial recognition and algorithmic recommendations can provide precise regulation of algorithmic activities within AI applications, which facilitates the indirect identification and mitigation of algorithm risks.

4.5.2. Balance Algorithm Transparency, Explainability, and Innovation

Considering the technological properties and application patterns of AI algorithms, GAI services require a measured approach to algorithm governance. Demanding ever-greater transparency of GAI algorithms is not feasible, because making algorithms transparent grows technically harder as parameter counts and hidden layers increase. Moreover, excessive transparency requirements could undermine the innovation incentives of developers and users.

4.5.3. Promote Multi-Principal Co-Governance of Algorithms

Establishing an autonomous industry oversight committee at the national level can provide guidance and supervision for the AI technology sector. Taking advantage of professional expertise, this committee can implement a classification and grading regulatory principle. By categorizing and governing algorithms based on their application scenarios, the committee can assist regulatory authorities with algorithm registration, auditing, and accountability. This multi-principal co-governance approach aims to ensure the reasonable application of GAI technology in content generation and to continuously refine the development, use, and regulation of AGI algorithms.

4.6. Conducting International Cooperation on AI Governance

In this highly interconnected era, technological advancements and applications have transcended national boundaries. Therefore, the governance of the AI industry cannot be achieved by any single country alone but requires the joint efforts of the global community. The convening of the World Artificial Intelligence Conference and the High-Level Conference on Global AI Governance highlights the importance of global cooperation. Through international collaboration and exchange, technological innovation and ethical values can complement each other, leading to the development of a more comprehensive and inclusive governance framework. This framework can address global challenges in AI development and promote AI towards a safer, more inclusive, and sustainable direction.

4.6.1. Promote International Cooperation

Countries should actively promote international cooperation and participate in the formulation of regulations and ethical guidelines in the field of AI, so as to create a comprehensive and effective governance system for the trustworthy development of AI (X. Li 2017). Additionally, through actively engaging in international dialogues, an international governance framework and standards with broad consensus can be established to address key issues, such as cross-border data flows, privacy protection, transparency, and accountability (Cao 2023). Joint standards within cooperation and exchange ensure that the application of AI technologies aligns with the global consensus, thereby reducing potential risks and disputes.

4.6.2. Establish Multilateral and Bilateral Cooperation Mechanisms

Encouraging the participation of international organizations, academia, industry, and civil society, and promoting the international exchange of talent and collaborative research, are crucial: in exchanging research results, research teams can learn from one another and jointly overcome technical challenges. This not only accelerates the development of AI technologies but also helps create a more open and collaborative innovation environment for the global AI industry.

5. Conclusions

Strategic emerging industries are the new pillars of future development, and the legal landscape of the digital era should anticipate the future form of global AGI governance. The era of AGI is not far off: GAI technologies have advanced rapidly in a short period, and their wide range of applications highlights the revolutionary significance of AGI, making the AI industry a new focal point of global competition. However, the innovative development of the AGI industry also faces challenges relating to technological ethics, intellectual property, accountability mechanisms, data security, and algorithmic manipulation, which undermine the trustworthiness of AI.

Therefore, it is necessary to further develop a legal regulatory framework for the AI industry and improve the governance ecosystem for technological ethics. By introducing relevant codes of conduct and ethical guidelines, we can promote the healthy and sustainable development of the AI industry within a legal framework. Addressing the aforementioned issues requires strategic research and the pursuit of feasible technical solutions. By establishing technological ethics standards, improving the system for regulating liability, protecting competition while encouraging innovation, enhancing AI data-security measures, and standardizing algorithmic regulation in the AI field, the obstacles on the path to the innovative development of AGI can eventually be removed.


