AI’s path to maturity lacks footing without the stamp of regulation. While the recent developments of the White House executive order and the EU AI Act are a good start, there’s a lot of progress still to be made in AI research and rule-making. If we want to unlock the full range of AI use cases, we’re going to need precise regulation, tailored to the unique needs of each sector.
Generative AI sparks an opportunity to transform every industry, a prospect that has prompted a flurry of AI innovation across verticals that will undoubtedly send ripple effects throughout the economy. McKinsey estimates that across 63 different use cases, generative AI could contribute between $2.6 trillion and $4.4 trillion to the global economy annually. But sector- and use-case-specific regulatory frameworks are imperative if we want to harness this potential and ensure responsible, safe applications of generative AI.
Why One-Size-Fits-All Guidelines Won’t Work
General guidelines will not suffice in the regulation of generative AI. The potential applications of the technology are far too differentiated. For example, both aviation and manufacturing, each with distinct characteristics, will utilize AI in meaningful ways. But it doesn’t make sense to apply one blanket law across them both.
Consider the existing laws governing aviation safety. These regulations are incredibly specific, accounting for nuances in aircraft design, operational protocols, even noise. Trying to transplant these rules directly onto manufacturing companies, which operate under entirely different conditions, would be not merely unreasonable but counterproductive and a hindrance to progress.
Even the definition of “safety” varies significantly from one industry to another. Safety in automotive use cases (e.g., self-driving cars) is measured by the risk of physical injury, whereas safety in financial services (e.g., trading algorithms) boils down to the risk of monetary loss. Specific parameters and guidelines are essential for each use case to ensure the safe and responsible deployment of generative AI.
Disparate use cases require different parameters. AI in healthcare will require specific health data rules, whereas the entertainment industry will need stringent copyright standards. The recent writers’ strike in Hollywood vividly illustrates the nuanced concerns and impact of AI across industries. As AI increasingly infiltrates the entertainment industry, it becomes clear that dedicated rules and standards are necessary to address copyright issues and protect the creative rights of individuals.
How Do We Optimize AI Across Industries?
A collaborative and sector-specific regulatory approach is crucial to the future of AI. The White House should remain a significant player but cannot be the sole architect of all AI-related regulations. Instead, collaboration with relevant agencies such as the FDA, the FCC, HHS, and others is imperative. Each agency should play its role in delineating how AI can be ethically and effectively utilized within its respective field.
Importantly, collaboration needs to extend beyond public-sector regulators. Lawmakers should work in lockstep with companies that understand what can be done with AI in their respective industries and how rules will impact the ability to innovate. From the Space Race to the Human Genome Project, the US government has a long history of working with private companies to learn where, why, and how regulations should be applied. Let’s continue that history. As with other world-defining advancements in technology, partnership here will enhance the effectiveness of regulations and ensure that they keep pace with the rapid advancements in generative AI technology.
While the excitement surrounding the emergence of AI is justified, it is a careful and thoughtful regulatory process that will safeguard its long-term benefits. Blanket regulations, driven by enthusiasm and market analysis alone, risk suffocating the potential of generative AI. That potential remains great: AI could rewrite entrenched and antiquated systems and unlock new modes of societal betterment. By embarking on a comprehensive and meticulous regulatory journey now, we can pave the way for a future where everyone reaps the benefits of responsible and innovative AI applications.
David Haber is Co-Founder and CEO of Lakera.