AI Ethics
February 20, 2024

Building the Right Regulations for AI: Possibility or Pipe Dream?

With great power comes great regulation. 

The rapid growth of AI has sparked concern throughout the world. There have been knee-jerk reactions, such as Italy banning ChatGPT a few months after its launch (access was later restored). 

Even the co-founder of OpenAI, the company behind ChatGPT, is sounding alarm bells. 

“I think if this technology goes wrong, it can go quite wrong,” Sam Altman told a Senate subcommittee. “We want to work with the government to prevent that from happening.” Mr. Altman warned that the risks of advanced AI systems were serious enough to warrant government intervention and called for regulation to address AI’s potential harms.

Beyond these reactions, voices clamoring for regulations and policies are growing louder, in the hope that laws governing the development and deployment of AI can head off the worst outcomes. The incredible potential of this technology is often overshadowed by the negative outcomes AI could create:

  • Amplified bias
  • Unemployment
  • Negative environmental impact
  • Misuse and weakened security

(You can read about these subjects in our article: AI ethics - Why is it important and what are the pain points?)

All of these challenges are exacerbated by the complexity of AI, as we still don’t understand the full extent of risks that AI systems can pose to businesses, consumers, and the general public.

Are AI regulations a tough nut to crack?

There have been proposals, both hard law and soft law, to regulate AI. The issue, however, lies in the pace at which AI is developing: rigid legal requirements could stifle the progress of a flourishing technology. Moreover, the wide range of applications makes it hard for regulatory agencies to create standardized laws and policies. Some of the challenges include:

  • Agreeing to disagree?
    Everybody’s talking, and yet no one can agree on a concrete definition of AI (or even Artificial General Intelligence) on a global scale: is it task-oriented, or something that mimics humans? This makes it hard for regulators to establish clear, adaptive, and resilient frameworks. Terms like “explainability” and “fairness” can mean different things depending on the stakeholder perspective or the context. For example, in healthcare, explainability might focus on how an AI system diagnoses diseases, requiring transparency for medical professionals and patients. In finance, it could relate to credit scoring algorithms, where the emphasis is on understanding the decision-making process behind loan approvals.
  • Where do we start?
    Since AI is making inroads into nearly every sphere of human life, it’s hard for regulators to come up with a “one size fits all” regulation. Does misinformation take precedence over transparency? Should we worry about election interference, or about the emergence of a Terminator?
  • Can regulators keep up?
    AI is moving quickly, leaving regulations to chug far behind. This was evident in the US Senate hearing with Sam Altman, where many involved were confused about the basic workings of AI. With swift developments taking place in the field, it will become harder for regulators to predict and adapt to what comes next. AI will most likely require flexible regulations that can mitigate risk and encourage innovation simultaneously.
  • Where are the experts? 
    Building effective regulations requires the expertise of lawmakers, AI ethicists, economists, computer scientists, and those working in various social science disciplines. Not only is there a shortage of reliable experts, but there is also the challenge of getting differing viewpoints to pull in the same direction.

What are leaders and experts saying?

Conversations around regulating AI have been brewing since 2014. Elon Musk sounded alarm bells in 2014, 2015, and 2017, calling for proactive and precautionary government intervention. Although he admitted that regulation was both “irksome” and “not fun”, he has repeatedly advocated for it. 

At the time, the response to Musk’s remarks was mixed: they were seen as either an overreaction or a clever marketing ploy for his own products.

These days, however, there is greater consensus around the need for AI regulation. 

"To realize the promise of AI and avoid the risk, we need to govern this technology,"
US President Joe Biden said. "In the wrong hands, AI can make it easier for hackers to exploit vulnerabilities in the software that makes our society run."


“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” - Rishi Sunak, UK Prime Minister - Source


“I believe Europe, together with partners, should lead the way on a new global framework for AI, built on three pillars: guardrails, governance, and guiding innovation.” - Ursula von der Leyen, President of the European Commission, at the World Economic Forum - Source

Jensen Huang, President of NVIDIA; Sundar Pichai, CEO of Google; and Mark Zuckerberg, CEO of Meta, outside the Kennedy Caucus Room at the Russell Senate Office Building - Source

Elon Musk, CEO of Tesla - Source

In a speech at the Center for Strategic & International Studies, Senate Majority Leader Chuck Schumer asked whether Congress could “maximize AI’s benefits while protecting the American people—and all of humanity—from its novel risks”, and answered his own question: “I think the answer to these questions is an emphatic yes.” - Source


A delegation of top tech leaders, including Sam Altman, CEO of OpenAI; Sundar Pichai, CEO of Google; Elon Musk, CEO of Tesla; and Mark Zuckerberg, CEO of Meta, also convened in Washington for the AI Insight Forum, where they met with US senators to talk about the rise of AI and the regulation it requires. 


“If anything, I feel, yes, it’s moving fast, but moving fast in the right direction,” said Satya Nadella, CEO of Microsoft. “Humans are in the loop versus being out of the loop. It’s a design choice, which, at least, we have made.” - Source

"There's the existential risk, which is: 'What if AI starts improving itself and we lose control?'" - Satya Nadella - Source


This meeting is just one in a series of discussions among Silicon Valley leaders, policymakers, researchers, labor leaders, civil liberty groups, and government. Involving so many different stakeholders comes with its own share of criticism. 


“We can’t really have the companies recommending the regulators. What you don’t want is regulatory capture, where the government just plays into the hands of the companies,” said Gary Marcus, Professor at New York University.

However, some see it differently.

“AI by itself, I don’t see as a threat,” Bill Gates said in an exclusive interview with CNBC Africa. “There are a lot of long-term effects where AI could make us all so productive. It will change how we use our time, but we need to roll it out and we need to get the benefits, which in every domain, particularly health and education, I am very excited about.” - Source

And some have already started self-regulating. Nick Clegg, Meta's president of global affairs, announced that Meta will require advertisers throughout the world to disclose whether they have used AI or related digital editing techniques “to create or alter the content of political ads”.

“This applies if the ad contains a photorealistic image or video, or realistic-sounding audio, that was digitally created or altered to depict a real person as saying or doing something they did not say or do,” Clegg wrote. “It also applies if an ad depicts a realistic-looking person that does not exist or a realistic-looking event that did not happen, alters footage of a real event, or depicts a realistic event that allegedly occurred, but that is not a true image, video, or audio recording of the event.” - Source

The latest regulations - The EU AI Act and Biden’s Executive Order

AI regulations may still be in a nascent stage, but that doesn’t mean they don’t exist. Governments across the world are putting plans in place or taking concrete steps to safeguard their citizens while driving innovation.

In November 2023, 18 countries signed the first international agreement on keeping AI safe from rogue actors, including the US, UK, Australia, Canada, Chile, Czechia, Estonia, France, Germany, Israel, Italy, Japan, New Zealand, Nigeria, Norway, Poland, South Korea, and Singapore. The agreement consists of guidelines intended to ensure that AI is safe by design.

Canada also introduced the Digital Charter Implementation Act, legislation focused on trust and privacy, in 2022, while China proposed the Interim Measures for the Management of Generative AI Services in 2023.

However, two of the biggest moves have come from the US and the EU.

Biden’s Executive Order

President Joe Biden and Vice President Kamala Harris - Source


The United States recently issued a sweeping executive order to regulate both the development and use of AI, emphasizing the protection of American citizens' privacy, the advancement of equity and civil rights, and the promotion of innovation and competition.

The executive order has been touted as the first AI regulation to cover such a vast array of challenges; however, experts believe that this is just the beginning and that more exhaustive regulations will be required. Furthermore, the powers of an executive order are limited, and it could be overturned, so legislation is critical to cementing these regulations.

Key Takeaways:

  • New Standards for AI Safety and Security: The Executive Order requires developers of powerful AI systems to share safety test results and critical information with the U.S. government, particularly for models posing serious risks to national security or public health.
  • Advancing Equity and Civil Rights: Clear guidance will be provided to prevent AI algorithms from exacerbating discrimination in various sectors.
  • Consumer, Patient, and Student Protection: Actions will be directed to ensure AI's responsible use in healthcare, education, and consumer protection.
  • Support for Workers and Innovation: The order includes principles to mitigate AI's impact on workers, promoting AI research and competition in the U.S.
  • Privacy Protection Initiatives: The order calls for the acceleration of privacy-preserving techniques in AI and the evaluation of agencies' collection and use of commercially available information.
  • Development of Safety Standards and Tools: The National Institute of Standards and Technology will set rigorous standards for AI safety testing, while the Department of Homeland Security will apply these standards to critical infrastructure sectors.
  • Protection Against AI-Enabled Biological Risks: New standards will be developed for biological synthesis screening to manage risks potentially exacerbated by AI.
  • AI-Enabled Fraud and Deception Prevention: Standards and best practices will be established for detecting AI-generated content and authenticating official content.
  • Advanced Cybersecurity Program: The order includes the development of AI tools to identify and fix vulnerabilities in critical software.
  • National Security Memorandum on AI: A memorandum will be developed to ensure the safe, ethical, and effective use of AI in military and intelligence missions.

Bradley Tusk, CEO of Tusk Ventures, a venture capital firm, welcomed the move but said tech companies would likely shy away from sharing proprietary data with the government over fears it could be provided to rivals.

"Without a real enforcement mechanism, which the executive order does not seem to have, the concept is great but adherence may be very limited," Tusk said - Source

The EU AI Act

The European Union's Artificial Intelligence (AI) Act represents a pioneering legislative effort to regulate the development and application of AI technologies within the EU. This legislation, the first of its kind, aims to establish a comprehensive framework for ensuring AI systems are safe, transparent, and accountable. The AI Act categorizes AI systems based on their risk levels, imposing stricter requirements on high-risk AI systems, including those used in critical infrastructure, employment, and essential private and public services. Penalties can reach €30 million or 6 percent of an organization’s global annual revenue, whichever is higher.

Source
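
To put that penalty clause in perspective, here is a minimal sketch, in Python, of how the fine ceiling scales with revenue, assuming the draft text's "whichever is higher" rule; the function name and the example revenue figures are illustrative, not taken from the Act itself.

```python
# Illustrative sketch only: the maximum fine under the draft EU AI Act,
# assuming the draft's "whichever is higher" rule of EUR 30 million
# or 6% of global annual revenue. Names and figures are hypothetical,
# not taken from the Act's final text.

def max_penalty_eur(global_revenue_eur: float) -> float:
    """Greater of EUR 30 million or 6% of global annual revenue."""
    return max(30_000_000.0, 0.06 * global_revenue_eur)

# A company with EUR 200M in revenue hits the flat EUR 30M ceiling...
print(f"{max_penalty_eur(200_000_000):,.0f}")    # 30,000,000
# ...while one with EUR 2B in revenue faces up to EUR 120M (6%).
print(f"{max_penalty_eur(2_000_000_000):,.0f}")  # 120,000,000
```

In other words, for large firms it is the 6 percent figure, not the flat €30 million, that bites.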

Key Takeaways:

  • Legislative Process: The AI Act is undergoing extensive discussions in trilogue talks, yet faces challenges in finalizing AI development standards.
  • Foundational Models Debate: Key EU countries are concerned about stringent regulations on foundational AI models, fearing it could stifle innovation.
  • Biometric Identification Rules: There's debate over the proposed prohibition of real-time biometric identification in public spaces, with calls for more law enforcement exceptions.
  • GDPR Compatibility: Enforcing the AI Act alongside GDPR raises issues due to overlaps and potential conflicts in data processing and consent rules.
  • Delay in Finalization: Delays in finalizing the Act risk postponing its implementation, potentially until after the 2024 European Parliament elections.
  • Global Context: The EU's AI regulation efforts are part of a broader global movement towards AI governance, with significant international attention.

It’s clear from the developments across the globe that AI regulations will only grow more comprehensive from here on out.

The good news? There is a bit of time to assess what the regulations are and how to respond to them.

What kind of regulations would you like to see in the space of Artificial Intelligence?

Let us know!

Want to cut the clutter and get information directly in your mailbox?