
How Generative AI is Straining Government Regulation Around the World


by Benji Christie of Formic AI Ltd. (www.formic.ai) and Len Zapalowski of Strategic Exits Partners Ltd. (www.exits.partners).

Artificial intelligence (AI) has the potential to bring significant benefits to both commerce and consumers, in applications ranging from healthcare to video games. However, if left unregulated, AI also poses significant risks, including but not limited to replicating bias and discrimination; loss of privacy; the widespread, immediate dissemination of misinformation and disinformation; disruption to democratic systems of government; and even harm to life.

As the research and development of AI systems continues to advance at an unprecedented pace, governments around the world are grappling with how to respond. Across the globe, government bodies are attempting to come to terms with the new reality of AI being plugged into everything. Take the USA, for example, where the country’s top AI companies are, ironically, pleading with regulators to slow the pace of AI development long enough to identify and assess its unintended consequences. Different governments are taking different approaches, ranging from “protecting the people”, to “waiting and hoping for the best”, to “propelling the growth”.

The revolutionary leaps in AI come from the growth of an innovation called Foundational Models. Also known as Large Language Models (LLMs) or Generative AI models (Gen AI), Foundational Models have garnered immense attention for their remarkable ability to understand context (such as why jokes are funny) and to generate novel text, images, and video. Their sudden uptake and impact across the globe is shining a spotlight on Generative AI.

The responses of some nations and their leaders to Generative AI illustrate how they are regulating (or not regulating) the exponential growth of this sector. The graphic below depicts where various nations fall on a continuum from consumer protection to uncontrolled corporate development.

The European Union (EU) has consistently taken steps to protect user data, as seen in its 2018 law, the EU General Data Protection Regulation (GDPR). It has maintained this trend during the Generative AI wave by introducing regulations to promote ethical and responsible development. On June 14th, 2023, the European Parliament voted to adopt its negotiating position on the Artificial Intelligence Act (AI Act), a broad set of rules to safely control various forms of AI development. These regulations would specifically target and categorize high-risk AI systems as well as foundational models. For General (or Foundational) Models in particular, the legislation proposes extra guardrails around model transparency, such as visibility into training data and legal accountability; restrictions on using copyrighted material to generate responses; and safety requirements such as limiting model use for illegal purposes (e.g., misinformation campaigns).

China’s government has moved quickly to regulate both the development and the use of Generative AI. In April 2023, the Cyberspace Administration of China (CAC) announced a security-review requirement for large-scale Gen AI services like ChatGPT, as well as a second round of AI regulation specifically targeting Foundational Models: the Measures for the Management of Generative AI Services. These regulations are built around data restrictions and protections whose goal is to improve user trust in AI and reduce the misuse of AI tools. The product and service requirements center on reducing bias in the models (e.g., removing discrimination on factors like race and sex), avoiding copyright and IP infringement, and safeguarding user data.

The rules put forth by China’s officials center on balancing economic potential with geopolitical guardrails to ensure that generative AI providers are aligned with the nation’s socialist values. Nevertheless, the stipulation that authorities like the CAC must have full visibility into an AI system’s data, algorithms, and infrastructure will likely deter any non-open-source players from entering the market any time soon.

In late 2022, Canada proposed the Artificial Intelligence and Data Act (AIDA), which aims to regulate the development of AI in the country under a policy of full transparency. The proposed legislation focuses on information disclosure and risk management for what it defines as “high-risk AI systems”. These regulations require organizations to demonstrate transparency in their AI-powered decision-making processes, to ensure that AI systems are not blindly created and used without knowledge of how they arrive at an answer. This implies that organizations must have a strong understanding of how a Gen AI algorithm or system actually generates an answer before building a large model.

Early in the summer of 2023, Canadian policymakers held discussions with AI leaders across Canadian organizations. These discussions helped the country evaluate policies that balance consumer protection with economic growth. Overall, Canada is attempting to balance its investments in AI R&D with safety regulations around AI usage.

The UK government takes an approach to AI regulation focused on guidance rather than restrictions. The Information Commissioner’s Office (ICO) has published sets of guidelines aimed at supporting explainable AI, data protection, and model fairness. The guidelines promote informed engagement with key stakeholders in industry, academia, and society, in an effort to establish the UK as an AI superpower.
Additionally, in Spring 2023, the UK government published a white paper outlining its legislative strategy on AI regulation. The paper states that existing regulators in the UK (like the ICO) will need to take responsibility for the development and oversight of AI in their respective sectors. The paper was a precursor to a planned global summit (announced in June), through which the UK is looking to establish itself at the center of the conversation on global and international policy development.

The Biden-Harris administration has expressed interest in regulating the AI industry more broadly. However, there are few, if any, regulatory requirements targeted at modern AI technologies. The regulatory landscape across the USA remains largely fragmented, with various states creating their own approaches to AI regulation.

In early May 2023, some of the American tech leaders in AI, including Google and OpenAI, were invited to participate in an evaluation of their models. The evaluation will be developed by Scale AI, one of the world’s leading data platforms for AI, and conducted by thousands of community partners and AI experts, with the goal of giving the public information on the potential impact and issues presented by existing Generative AI systems.

Although governing bodies in the US are known to switch positions quickly, it appears the US is assigning a higher priority to developing technologies quickly in order to remain the epicenter of AI. Regulatory control appears to be taking a reactive, rather than proactive, approach.

Ever since ChatGPT turned the global news spotlight onto Generative AI and Foundational Models in late 2022, there has been a major push from AI leaders across the world to quickly create and adopt rulesets surrounding AI development.

After the release of GPT-4, the latest Generative AI model from OpenAI/Microsoft, a group of AI researchers signed an open letter (published in March 2023) calling for a pause in the development of powerful generalized AI models. The letter received a mixed reaction from the public. It offered reasonable justifications for pausing AI research and development; however, it did not provide any concrete mechanism for enacting a pause. The letter simply calls on AI labs around the world to pause development of advanced AI systems like large foundational models.

In Spring 2023, one of the pioneers of AI – known as one of the “Godfathers of AI” – Geoffrey Hinton, quit his position at Google in order to speak freely about potentially dangerous trends in AI. His unexpected and highly public departure from one of the world’s leading AI companies helped alert the public that AI technology is developing far more rapidly than projected and could outstrip human capabilities in just a few years’ time, and that, in his view, not enough is being done to identify and mitigate the potential dangers.

In mid-May 2023, Sam Altman, the CEO of OpenAI (the creators of ChatGPT, in which Microsoft has invested heavily), spoke before the US Congress and appealed for regulations to help control the new technologies. He recommended a three-point plan aimed at controlling who can create powerful models, ensuring accurate responses, and requiring external audits. These recommendations would consolidate which companies are allowed to create powerful models while still not addressing the ethical concerns raised by bodies such as the EU. Such a proposition is framed as benefiting the country by consolidating power; however, experts like Yann LeCun of Meta and Andrew Ng of Stanford argue that heavy regulation, as opposed to greater transparency, would hinder AI research and innovation in general.

It is crucial for governments and tech companies to collaborate and proactively regulate the development and deployment of AI to ensure that all risks are addressed. As AI technology continues to blaze ahead, it is important that we have the proper rules in place to ensure that the growth is sustainable.

Currently, the EU is taking the most “safety-first” approach, protecting its constituents as much as possible at the cost of innovation speed. At the other extreme, the US is focused on growth at all costs, despite its top AI companies pleading for more control and regulation. With new releases in this space coming out every day and regulation not keeping pace, it’s hard not to imagine that speed will win. Will our future be the dystopian Brave New World?


