1. Background
This article takes issues arising from representative GAI products and services as examples, discusses the supporting elements of data, algorithms, and computing power in the training of GAI models, and then extends the argument to AGI. It also examines the current state of China's response to these challenges in order to show how the innovative development of AGI can be safeguarded. On this basis, the article proposes legal solutions to promote the innovative development of AGI, with the aim of enriching theoretical research in this field.
2. Challenges of Generative Artificial Intelligence Technology
Science and technology are the primary productive forces, and scientific and technological progress is an indispensable driver of industrial development. With advances in technologies such as GAI, people have discovered that AI can accomplish tasks previously thought unimaginable. At the same time, it has become clear that the safety challenges posed by AI's development and its deep integration into daily life are growing increasingly complex.
Artificial intelligence is commonly divided into specialized artificial intelligence and general artificial intelligence. Specialized artificial intelligence, also known as “narrow AI” or “weak AI”, refers to AI programmed to perform a single task. It extracts information from specific data sets and cannot operate outside its designed task scenarios. Specialized AI is thus characterized by strong task performance but limited transferability. General artificial intelligence (AGI), also known as “strong AI”, “full AI”, or “deep AI”, possesses general human-like intelligence, enabling it to learn, reason, solve problems, and adapt to new environments as a human does. AGI can address a wide range of problems without requiring knowledge specially encoded for particular application areas.
Currently, the security issues raised by GPT-3.5, characterized by autonomous intelligence, data dependency, the “algorithmic black box”, and “lack of interpretability”, have attracted widespread attention. If technology products that truly meet AGI standards emerge, they could bring even more significant security challenges, with more severe consequences and broader impacts on national security, social ethics, and individual life and property. It is therefore essential to explore the specific risks posed by generative AI in order to ensure that the innovative development of GAI benefits human society without causing harm.
2.1. Ethical Risks in Science and Technology
Scientific research and technological innovation must adhere to the norms of scientific and technological ethics, which are crucial for the healthy development of scientific activity. Generative AI can currently produce content in text, image, audio, and video formats, and its fields of application are extremely broad. However, the lack of established usage norms for this technology poses ethical risks, leading to distrust in the application of AI. The issue is especially serious during the transition from weak AI to strong AI, where AI's increasing autonomy presents unprecedented challenges to traditional ethical frameworks and the fundamental nature of human thought.
2.2. Challenges in Responsibility Allocation
Safety incidents involving AGI can compromise not only the security of devices and data but can also lead to serious production accidents that endanger human life. In recent years, incidents caused by the autonomous driving technologies of companies such as Google, Tesla, and Uber have intensified the ethical debate over whether humans or AI should bear responsibility. If the allocation of responsibility is not clearly defined beforehand, obtaining remedies and defending rights after an infringement may become more difficult, fostering public distrust of AI. Moreover, AI products could come to develop in ways that deviate from social ethics and legal norms, ultimately threatening economic and social order.
2.3. Intellectual Property Challenges
With the widespread application of GAI, concerns have arisen regarding the legality of the training data sources for large AI models and whether the content they generate can be recognized as a work.
Furthermore, there is debate over whether content generated by AI can be recognized as a work. GAI produces content based on extensive data training and continuously refines its output according to user feedback, so it is difficult to determine whether the content is generated entirely autonomously by the AI, which leads to disputes. Some scholars argue that GAI merely mimics the human creative process and that its output is not a product of human intellect. In practice, however, a few countries do recognize computer-generated content as a work. For instance, section 9(3) of the UK's Copyright, Designs and Patents Act (CDPA) provides that a computer-generated work can attract copyright protection, attributing authorship to the person who undertook the arrangements necessary for its creation.
2.4. Data-Related Risks
In GAI technology, the first type of risk is the inherent security risk of the training data. The training outcomes of GAI models depend directly on the input data. However, owing to limitations in data-collection conditions, the proportions of data from different groups are unbalanced. For example, current training corpora are predominantly in English and Chinese, making it difficult for languages with fewer speakers to be integrated into the AI world and thus limiting these models' coverage.
2.5. Algorithm Manipulation Challenges
First, algorithms lack robustness. GAI faces various attack methods targeting its data and systems, such as virus attacks, adversarial attacks, and backdoor attacks. For instance, feeding malicious comments into a model can effectively skew its recommendation algorithm, resulting in inaccurate recommendation outputs.
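To make this attack pathway concrete, the following minimal Python sketch (our illustration, with invented item names and scores rather than data from any real incident) shows how a naive recommender that ranks items by mean feedback score can be flipped by a handful of fabricated ratings:

```python
# Minimal sketch of feedback poisoning against a naive recommender.
# All items and scores are fabricated for illustration; real recommenders
# and real attacks are considerably more complex.
from collections import defaultdict

def rank_items(ratings):
    """Rank items by mean rating (a deliberately naive recommender)."""
    sums, counts = defaultdict(float), defaultdict(int)
    for item, score in ratings:
        sums[item] += score
        counts[item] += 1
    return sorted(sums, key=lambda i: sums[i] / counts[i], reverse=True)

# Honest feedback: item_b is genuinely preferred.
honest = [("item_a", 3.0), ("item_a", 3.5), ("item_b", 4.5), ("item_b", 4.0)]
print(rank_items(honest))    # ['item_b', 'item_a']

# The attacker injects five fabricated five-star ratings for item_a.
poisoned = honest + [("item_a", 5.0)] * 5
print(rank_items(poisoned))  # ['item_a', 'item_b'] -- the ranking flips
```

Defending against such manipulation (rate limiting, anomaly detection, robust aggregation) remains an open engineering problem, which is precisely why algorithmic robustness is a regulatory concern.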
Second, the explainability of algorithms needs improvement. Machine-learning algorithms, particularly those based on deep learning, are essentially end-to-end black boxes. On the one hand, the internal processes and operational mechanisms of large models containing vast numbers of parameters remain unclear. On the other hand, it is also unclear which specific data in the training database influence the algorithm's decision-making process.
Lastly, algorithmic bias and discrimination remain unresolved. These issues arise from multiple internal and external factors. Internally, if algorithm developers set discriminatory factors or misconfigure certain parameters during development, the algorithm will exhibit inherently biased tendencies. Externally, since GAI optimizes its content based on feedback, any bias or discrimination present in the feedback data will affect the final generated content, as the following sketch illustrates.
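The external pathway can be illustrated with a small, hypothetical simulation (ours, assuming a mild 55/45 skew in user feedback): a model that reweights its outputs toward whatever the feedback rewards will amplify even a slight initial bias over successive rounds.

```python
# Hypothetical sketch of feedback-loop bias amplification.
# Two candidate outputs start equally weighted; user feedback is assumed
# to be mildly skewed toward output_a, and each round the model shifts
# weight toward whatever was rewarded.
weights = {"output_a": 0.5, "output_b": 0.5}
feedback_bias = {"output_a": 0.55, "output_b": 0.45}  # assumed mild skew

for _ in range(10):
    rewards = {k: weights[k] * feedback_bias[k] for k in weights}
    total = sum(rewards.values())
    weights = {k: v / total for k, v in rewards.items()}  # renormalize

print(weights)  # after 10 rounds roughly 0.88 vs 0.12: the skew compounds
```

A ten-percentage-point skew in feedback thus compounds into an overwhelming skew in output, which is why bias in feedback data cannot be dismissed as marginal.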
3. China’s Legal Practice for AGI
Artificial intelligence is a double-edged sword, bringing both convenience and risks to society. The trust crisis caused by the application of AI technology hinders further innovation and development. Countries around the world have now accumulated governance experience in aligning the rule of law with the development of AI technology, and there has been substantial research on governance principles, implementation rules, and accountability mechanisms.
The European Union has adopted a strict regulatory approach, aiming to address AI-related issues through specialized institutions and legislation. In April 2021, the European Commission proposed the “Artificial Intelligence Act”, which focuses on the threats AI poses to personal safety and fundamental rights. On 28 September 2022, the European Commission released its proposal for an “AI Liability Directive”, further clarifying the allocation of responsibility within AI systems, especially regarding damage compensation and risk management. In March 2024, the EU passed the “Artificial Intelligence Act”, the world's first comprehensive AI regulatory law, which emphasizes strict regulation of high-risk AI systems and marks a new phase in AI governance.
In contrast, the United States has adopted a lenient regulatory approach and has yet to establish systematic governance legislation for AI. In October 2022, for example, the White House issued “The Blueprint for an AI Bill of Rights: Making Automated Systems Work for The American People”. While this document provides guidance for federal measures on AI development, it is not legally binding and has limited governance effect. However, as the pace of AI development accelerates, the U.S. government is gradually increasing its intervention. In October 2023, for instance, President Biden signed an executive order establishing regulatory measures for AI to ensure the U.S. takes a leading role in managing AI risks. In June 2024, the Federal Trade Commission launched an investigation into certain companies' AI investments and partnerships.
In summary, different countries make reasonable choices based on their actual circumstances and adopt various regulatory attitudes and methods for AI supervision. With the ongoing research and development of AI and the expanding scope of its risks and impacts, both the U.S.’s lenient regulatory model and the EU’s stringent regulatory model have their merits. China is forming a governance approach with Chinese characteristics based on other countries’ experiences. This involves enacting laws and regulations to prevent the potential risks of AI while providing clear and reasonable guidance for enterprises’ development.
3.1. Compliance with the Law: The Normative Values for AGI Development
Given the risks associated with the development of AI, the Chinese government has intervened promptly by enacting laws to promote the regulated development of AI within a framework of the rule of law, thereby preemptively addressing the challenges posed by AGI. The “Artificial Intelligence Law Draft” is included in the State Council's 2023 legislative work plan. Earlier, on 31 October 2018, during the ninth collective study session of the 19th Central Political Bureau, it was explicitly stated that “we must strengthen the assessment and prevention of potential risks in AI development to ensure AI is safe, reliable, and controllable”. All of this demonstrates China's proactive attitude toward, and emphasis on, supporting and regulating the development of AI technology and industry.
Additionally, some regions in China have actively engaged in local legislation, taking the lead in exploring the regulation of AI. According to statistics, more than 60 local regulatory documents have been issued across China, addressing various aspects of AI industry development. It can be said that China has established a preliminary framework for AI technologies, core elements, and various application scenarios, laying a solid legislative foundation.
Therefore, while vigorously developing AI, China places great emphasis on its safety challenges and sets clear safety governance objectives. A comprehensive governance approach is adopted, incorporating regulations, standards, and technical support, to implement an agile governance model. Given the current state of AI development in China, the ultimate goal of regulation is to sustain the development of the new wave of scientific and technological advancement, including AGI. Beyond safety principles, therefore, China also focuses on accelerating the development of AGI, fostering a market environment that encourages competitive innovation and promotes sharing and innovation among AI enterprises.
3.2. Trustworthiness: The Fundamental Value of AGI Development
3.3. Human Centric: The Value Orientation of AGI Development
The safety baseline for the innovation and development of AGI encompasses three main elements: people, technology, and trust. Technology serves as the foundation for the robust growth and stability of the AGI industry. Trust is the pillar that sustains the sector's continuous and healthy development. And people are the core objects of protection of trustworthy AI laws and policies. A human-centric approach is the fundamental principle of AGI innovation and development. In fact, the issue of trust depends not only on the underlying logic of AI development and application but also on how well the law supervises AI technology. The key question is whether AI's trustworthiness can be achieved from a legal regulatory perspective.
In recent years, China has undertaken various legal explorations and practices to promote a human-centric approach to AGI. In terms of legislation, Shanghai issued the “Shanghai Regulations on Promoting the Development of the AI Industry” in September 2022, which emphasizes the trustworthiness of AI algorithms, ethics, governance, and supervision, and provides detailed provisions on technical standards, data security, and personal information protection in the field of intelligent connected vehicles. In addition, the Shanghai municipal government will establish the “Shanghai AI Strategic Advisory Expert Committee” to provide consultation on major strategies and decisions in AI development, as well as the “AI Ethics Expert Committee” to formulate ethical guidelines and promote discussion and standard-setting on major ethical issues in AI, both domestically and internationally.
China actively guides human-centered AI development, not only in its domestic legal practice but also by promoting international cooperation. At the World Artificial Intelligence Conference held in Shanghai in July 2024, the Shanghai Declaration on Global AI Governance was issued, emphasizing the importance of developing human-centered AI. AI development currently faces unprecedented challenges, particularly regarding safety and ethics. Only through global cooperation and collective effort can AI's potential be fully realized for the greater benefit of humanity, and China plays a key role in this process.
4. Legal Strategies for the Innovative Development of AGI
Given its autonomy, data dependence, and the “algorithm black box”, AI faces significant security and ethical challenges in technological development, derivative applications, data security, and privacy protection. These challenges could have severe consequences for national security, social ethics, and personal life and property. Ensuring that the innovative development of AGI benefits human society is therefore a significant challenge that must be addressed, and research into AI governance is urgently needed.
4.1. Establishing Norms for Technology Ethics Supervision and Management
Technology ethics governance currently faces problems such as inadequate mechanisms, imperfect systems, and uneven development, leaving it unable to meet the needs of AI industry innovation and competitiveness. It is therefore necessary to accelerate the establishment of technology ethics norms across multiple domains and to enhance supervision. On the one hand, risks in the many application fields of GAI services should be precisely identified and tracked to improve governance responsiveness and regulatory efficiency. On the other hand, clear ethical rules for GAI should be defined to promote ex ante regulation and comprehensive oversight and to guide corporate compliance.
4.1.1. Establish a Mechanism for Identifying and Tracking Technology Ethics Risks
However, if the scope and degree of pre-review are set improperly, pre-review may inhibit the R&D and training efficiency of GAI, objectively slowing its development. A reasonable review scope should therefore be defined by integrating security and development considerations, striking a balance between safety and innovation.
4.1.2. Accelerate the Establishment of Technology Ethics Review and Supervision Systems
On 4 April 2023, the Ministry of Science and Technology of China issued an announcement seeking public comments on the “Measures for the Ethical Review of Science and Technology (Trial)”. Article 4 proposes that technology ethics reviews adhere to the principles of science, independence, fairness, and transparency, providing guidance for an open review system and procedures. Compared with self-filing and self-assessment systems, external regulatory methods such as technology ethics reviews carry greater binding force and can push technology developers to improve compliance in the use of technology. Clear guidelines also offer a way to address the ethical challenges of AI, effectively enhancing its trustworthiness and upholding social fairness and justice.
4.1.3. Strengthen Regular Safety Supervision of Technological Ethics
Given that AGI models are involved in critical fields such as industrial manufacturing, commercial services, warehousing and logistics, household use, and medical services, where they interact deeply with users, the safe application of the technology must be ensured through regular supervision. Within such supervision, precision is a crucial means of addressing risks, and regulation is the baseline requirement of risk governance. It is critical to focus on the application areas and scenarios of AI technology, establish reasonable accountability systems and technical ethics standards, and proactively avert ethical risks.
4.2. Improve Rules for Identifying and Bearing Liability for Infringement
The realization of legal liability relies on the existence and identification of a responsible entity and is based on the principle of attribution. Clarifying the liability-bearing entities and the attribution principles for AI infringements helps both to avoid risks preemptively and to encourage the development of the AI industry. Although the increasing autonomy of AI challenges traditional causality theories and complicates the attribution of responsibility, human values and intentions influence every stage of AI algorithm design, development, and deployment. This justifies treating subjective fault in algorithm design and deployment as a basis for liability.
4.2.1. Adopt a Preventive Liability Approach
4.2.2. Clarify the Liability-Bearing Entities
Providers of AGI models should undoubtedly assume specific obligations related to certain risks, but these obligations should be limited to areas such as national security and public safety and to the scope of reasonably foreseeable risks. For instance, providers should bear pre-screening responsibilities for content related to terrorism and obscenity. For non-public infringement risks where service providers bear no subjective fault, however, the safe harbor principle should apply, exempting them from tort liability.
4.2.3. Standardize Compensation Liability Methods
4.3. Balance Fair Competition and Innovation
The text-generation models behind GAI are characterized by their large scale, self-supervised training, and strong generalization capabilities. Building, training, and maintaining these models therefore requires substantial human resources, computing power, and data. Once trained, large-scale AI models can easily outperform smaller models in specific fields. In other words, the GAI industry requires significant upfront investment and long development cycles, yet once a model is released, its high efficiency, lower costs, and broad applicability give the developing company a significant competitive advantage in the market. To address the potential monopoly risks of GAI, intellectual property protection must be balanced against antitrust measures, and the protection of competition against the encouragement of innovation.
4.3.1. Balance the Scope of Intellectual Property Law and Antitrust Law
Intellectual property (IP) rights are private rights with exclusivity, and the statutory monopoly they confer is a legitimate means of enhancing market power. If this right is abused, however, it may exclude or restrict competition. Antitrust law, by contrast, is public law that limits monopoly power; it respects private rights such as IP rights but prevents their abuse. If the exercise of IP rights excludes or restricts competition, it also becomes subject to antitrust regulation. For AGI, IP rights must be protected and competition regulated at the same time, seeking a balance between IP protection and antitrust enforcement that safeguards competition and stimulates innovation.
On the one hand, innovation should be encouraged to create a suitable developmental and competitive market environment for the AGI industry. Establishing a fair, open, and orderly market environment ensures the healthy development and social benefits of AGI. On the other hand, attention should be paid to the monopoly risks in its upstream and downstream industries, and timely regulations should be enforced. For instance, the application of AGI may promote vertical integration strategies by large tech companies, leading to monopolistic and anti-competitive effects. In this context, regulatory authorities need to monitor monopolies in the chip, cloud computing, and downstream application markets, as well as to implement targeted regulations when necessary.
4.3.2. Adhere to Prudent Regulation Principles to Encourage Innovation
AGI encompasses a variety of advanced technologies, and its innovation and application inevitably face risks, such as safety issues and data breaches caused by technical flaws and other factors. To address these situations, a fault-tolerant mechanism should be established, approaching innovation failures and errors with tolerance and caution. Opportunities for correction and improvement should be provided when deviations or errors occur in the application of AGI, so as to better counter the risk of innovation suppression caused by institutional barriers and outdated legal norms.
4.4. Enhance Data Security of Artificial Intelligence
4.4.1. Establish a Data Classification and Grading Protection System
In the governance of AGI, data-security protection should be the central focus. Given data's characteristics, such as shareability, reusability, multiple ownership, high dynamism, and weighted usage attributes, a dynamic approach should be employed to balance data development and security. To this end, China's “Data Security Law” establishes a data classification and grading protection system at the national level, matching protection and management measures to the importance of the data. Subsequently, the “Twenty Data Measures” classified data into public data, enterprise data, and personal information data, and used this classification as the basis for a system of rights and obligations governing data utilization.
To manage the large-scale parameters of GAI models systematically, it is essential to first establish a data classification and grading protection system that integrates the application fields of AI services and the inherent properties of AI algorithms. This system should promptly clarify the data-security standards that need to be met. Meanwhile, a corresponding data-security protection mechanism should be implemented to match different types and risk levels of data, ensuring that AI technology goals are maintained and enhanced while addressing data-security threats.
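As a purely illustrative sketch (the grades and protection measures below are invented placeholders loosely echoing the public/enterprise/personal split of the “Twenty Data Measures”, not actual regulatory requirements), such a system can be modeled as a lookup from a dataset's category and sensitivity grade to the measures it must satisfy before use in training:

```python
# Hypothetical sketch of a data classification and grading lookup.
# Categories echo the "Twenty Data Measures" split; the grades and the
# protection measures are invented placeholders for illustration only.
from dataclasses import dataclass

PROTECTION_MATRIX = {
    # (category, grade) -> required protection measures
    ("public", "general"):       ["integrity checks"],
    ("public", "important"):     ["integrity checks", "access logging"],
    ("enterprise", "general"):   ["access control", "access logging"],
    ("enterprise", "important"): ["access control", "encryption at rest", "audit trail"],
    ("personal", "general"):     ["consent record", "encryption at rest"],
    ("personal", "sensitive"):   ["consent record", "encryption at rest",
                                  "de-identification before model training"],
}

@dataclass
class Dataset:
    name: str
    category: str  # "public" | "enterprise" | "personal"
    grade: str     # sensitivity grade within the category

def required_measures(ds: Dataset) -> list[str]:
    """Measures a dataset must satisfy before it is used for AI training."""
    try:
        return PROTECTION_MATRIX[(ds.category, ds.grade)]
    except KeyError:
        raise ValueError(f"unclassified data may not be used: {ds.name}")

print(required_measures(Dataset("user_chat_logs", "personal", "sensitive")))
```

The point of the sketch is the design choice it encodes: unclassified data is rejected by default, so grading becomes a precondition of use rather than an afterthought.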
4.4.2. Promoting Preemptive Data-Security Regulation
Preemptive data-security regulation implements measures to identify and mitigate risks during data processing and usage, ensuring data security upfront. This proactive approach guides the AI training process toward greater standardization, avoids the damage that arises when regulation intervenes only after an incident, and better safeguards users' rights to information and choice, thereby enhancing algorithmic trustworthiness.
4.4.3. Review Existing Privacy Protection and Compliance Mechanisms
Current practices in mobile internet personal information protection interpret the necessity principle very strictly to prevent improper data collection and aggregation. For instance, the “Methods for Identifying Illegal Collection and Use of Personal Information by Apps”, jointly formulated by the Cyberspace Administration of China, the Ministry of Industry and Information Technology, the Ministry of Public Security, and the State Administration for Market Regulation, stipulates that personal information cannot be collected solely for improving service quality or developing new products. While this strict compliance approach protects user privacy, it also restricts the data available for training AI systems. Conversely, relaxing these restrictions might introduce privacy and security risks. Therefore, how privacy protection rules can be applied to future AI application scenarios warrants careful consideration and discussion.
4.5. Strengthen AI Algorithm Regulation
The rapid development of GAI services not only highlights the explosive growth in the data demands of large models but also reflects a significant increase in algorithmic complexity. Algorithm transparency and explainability face unprecedented challenges. To address this algorithm crisis, a comprehensive governance system must be built around AI algorithms, implementing specialized and systematic governance.
4.5.1. Establish a Legal Framework and Regulatory Mechanisms
A robust legal framework and regulatory mechanism are needed to prevent algorithmic discrimination. Laws such as the “Cybersecurity Law” and the “Data Security Law” can delineate reasonable boundaries for algorithm governance and impose corresponding duties of care on service providers. In addition, specific legislation for areas such as facial recognition and algorithmic recommendation can regulate algorithmic activities within AI applications precisely, facilitating the indirect identification and mitigation of algorithmic risks.
4.5.2. Balance Algorithm Transparency, Explainability, and Innovation
Considering the technological properties and application patterns of AI algorithms, GAI services require a carefully considered approach to algorithm governance. Ever-higher transparency cannot be demanded of GAI algorithms, because making algorithms transparent grows technically harder as parameter counts and hidden layers increase. Moreover, excessive transparency requirements could undermine the innovation incentives of developers and users.
4.5.3. Promote Multi-Principal Co-Governance of Algorithms
Establishing a self-regulatory industry oversight committee at the national level can provide guidance and supervision for the AI technology sector. Drawing on professional expertise, such a committee can implement a classification and grading regulatory principle: by categorizing and governing algorithms according to their application scenarios, it can assist regulatory authorities with algorithm registration, auditing, and accountability. This multi-principal co-governance approach aims to ensure the reasonable application of GAI technology in content generation and to continuously refine the development, use, and regulation of AGI algorithms.
4.6. Conducting International Cooperation on AI Governance
In this highly interconnected era, technological advancements and applications have transcended national boundaries. Therefore, the governance of the AI industry cannot be achieved by any single country alone but requires the joint efforts of the global community. The convening of the World Artificial Intelligence Conference and the High-Level Conference on Global AI Governance highlights the importance of global cooperation. Through international collaboration and exchange, technological innovation and ethical values can complement each other, leading to the development of a more comprehensive and inclusive governance framework. This framework can address global challenges in AI development and promote AI towards a safer, more inclusive, and sustainable direction.
4.6.1. Promote International Cooperation
4.6.2. Establish Multilateral and Bilateral Cooperation Mechanisms
Encouraging the participation of international organizations, academia, industry, and civil society, and promoting the international exchange of talent and collaborative research, are crucial: in exchanging research results, research teams can learn from one another and jointly overcome technical challenges. This not only accelerates the development of AI technologies but also helps create a more open and collaborative innovation environment for the progress of the global AI industry.
5. Conclusions
Strategic emerging industries are the new pillars of future development, and the legal landscape of the digital era should anticipate the future form of global AGI governance. With GAI technologies advancing rapidly within a short period, the era of AGI is not far off. Their wide range of applications highlights the revolutionary significance of AGI, making the AI industry a new focal point of global competition. However, the innovative development of the AGI industry also faces challenges related to technological ethics, intellectual property, accountability mechanisms, data security, and algorithmic manipulation, which undermine the trustworthiness of AI.
Therefore, it is necessary to further develop a legal regulatory framework for the AI industry and improve the governance ecosystem for technological ethics. By introducing relevant codes of conduct and ethical guidelines, we can promote the healthy and sustainable development of the AI industry within a legal framework. Addressing the aforementioned issues requires strategic research and the pursuit of feasible technical solutions. By establishing technological ethics standards, improving the system for regulating liability, protecting competition while encouraging innovation, enhancing AI data-security measures, and standardizing algorithmic regulation in the AI field, the obstacles on the path to the innovative development of AGI can eventually be removed.