Introduction

Europe has consistently been ahead of the United States in regulating emerging technology and the internet. One reason for this maturity in digital governance may be a difference in historical experience. European countries generally place a higher value on privacy because their long histories have so often included governmental overreach and upheaval. Spain, Germany, Italy, France, and others have lived through, even in just the last century, the horrific consequences of ceding too much control to a small set of leaders. Combine that level of control with deep access to citizens’ personal information, and the conditions exist for even more widespread targeting of dissenters.

The European Union led the U.S. and the rest of the world with the General Data Protection Regulation (GDPR), which took effect in 2018. It was a groundbreaking piece of legislation intended to protect citizens’ data privacy and impose significant penalties for the mishandling of personal information. This digital governance affected technology companies all over the world whose users include EU residents. GDPR also became the tip of the spear for the digital rights of citizens.

Agree or disagree with GDPR’s provisions, at least the EU is trying to balance the scales between Big Tech and users. The U.S. has no federal-level data privacy regulation at this point. States such as California (with the CCPA) are leading the way on data privacy protections, but this really is a federal issue: forcing organizations to comply separately with regulations in all 50 states would be ungainly and expensive.

Now the EU is poised to lead the world again with another pioneering piece of legislation: the Artificial Intelligence (AI) Act. The AI Act is a proposed law regulating artificial intelligence, the first of its kind to govern this emerging technology.

The act is premised, in part, on ensuring that AI providers consider the impact of their applications on society at large as well as on the individual. It also pushes AI developers to recognize that applications causing negligible harm to individuals could, in theory, lead to significant harm at the societal level.

Before going further into what the act specifically oversees, let’s consider why it is necessary to regulate AI at all.

Why Regulate AI at All?

The importance of regulating a widespread technology should be clear by this point. We have now seen the dangers social technologies cause when they are allowed to proliferate wildly and without regulation. Users in the U.S. have experienced data breaches they had no control over, been subjected to misleading content intended to sway elections, and seen inflammatory posts promoted simply because they are addictive.

The U.S. Congress has called the CEOs of the big social technology companies to answer questions about the negative aspects of their systems. It is admirable that the U.S. government is trying to find the balance between freedom and control rather than racing to over-regulate. At the same time, it makes sense to set some boundaries to protect society, and Congress will likely do so at some point.

AI has the potential to inflict the same kinds of ills, as well as other, more novel damage. AIs could automate processes in ways that cause widespread damage in a matter of minutes. AI bias could create an unfair playing field for any group of people. Autonomous machines or weaponry could physically hurt human beings without a shred of concern.

AI applications already influence us in many ways, including shaping what information we see online by predicting which content will engage each user. AI is also used to capture and analyze facial recognition data, which advertisers can use to personalize content or, more alarmingly, law enforcement can use to identify and arrest individuals. AI is even used to offer health advice and diagnose diseases such as cancer.

In short, AI affects many parts of our lives and will very likely affect more and more aspects over time. If history is any indication, the time to regulate artificial intelligence is BEFORE it proliferates. It is a foregone conclusion that AI will spread rapidly and widely. Even if only a small percentage of AI applications damage individuals and society, that could still mean millions of people affected. We need to learn from past technology expansions and be proactive this time.

For example, we did not act proactively to regulate the use of mobile devices while driving, and thousands of people now die each year from distracted driving. We could have saved many lives by now, but we have not chosen to do so in the U.S. Our laws in this area are weak and barely enforced.

The proposed EU AI Act assigns applications of AI to three risk categories.

The first category concerns applications and systems that create an unacceptable risk. An example is the Black Mirror-esque, government-run social scoring system currently shaping the day-to-day lives and behavior of Chinese citizens. Applications in this category are banned outright under the act.

The second category concerns high-risk applications. An example is the CV-scanning tools currently used to rank job applicants for potential employment. Applications in this category are subject to specific legal requirements.

The final category is the broad set of applications that are neither explicitly banned nor listed as high-risk under the act. These are largely left unregulated by the legislation.
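To make the structure of this three-tier scheme concrete, here is a minimal, purely illustrative sketch of how an organization might model the categories when triaging its own AI portfolio. Nothing here is defined by the act itself: the category names, the example use-case labels, and the lists of banned and high-risk uses are all our assumptions for illustration.

```python
from enum import Enum

class RiskCategory(Enum):
    """The three risk tiers of the proposed EU AI Act (labels are ours)."""
    UNACCEPTABLE = "banned outright under the act"
    HIGH = "permitted, but subject to specific legal requirements"
    MINIMAL = "largely left unregulated"

# Hypothetical use-case labels for illustration only; a real compliance
# review would map applications against the act's actual annexes.
BANNED_USES = {"government_social_scoring"}
HIGH_RISK_USES = {"cv_screening", "credit_scoring"}

def classify(use_case: str) -> RiskCategory:
    """Assign an AI use case to one of the three risk tiers."""
    if use_case in BANNED_USES:
        return RiskCategory.UNACCEPTABLE
    if use_case in HIGH_RISK_USES:
        return RiskCategory.HIGH
    return RiskCategory.MINIMAL

print(classify("cv_screening"))    # RiskCategory.HIGH
print(classify("spam_filtering"))  # RiskCategory.MINIMAL
```

The design point the sketch captures is that the act regulates by use case rather than by underlying technique: the same model could land in different tiers depending on what it is deployed to do.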

The penalties for violations under the proposed act are steep: as drafted, fines can reach €30 million or 6% of a company’s global annual turnover, whichever is higher.

The proposed act is no panacea for a field that remains wide open and far from fully realized. However, it is at least an initial stake in the ground. The act sets a tone that these technologies and their uses will be watched by regulators, even as the technology continues to evolve, and it represents a proactive start to regulation before AI becomes harder to control.

The Flip Side to Regulation?

Regulation for regulation’s sake is never a good or productive idea. Weak or ineffectual regulation can be worse, in some ways, than no regulation at all, because it establishes a false sense of security that lulls people into believing an issue has been resolved (recall our comments on distracted driving above). Such regulation can also unnecessarily slow or strangle potential advances and innovation, along with the commerce, opportunities, jobs, and new industries they might foster.

Striking the right balance between meaningful, effective, necessary regulation and letting progress and innovation move at a reasonable pace is, admittedly, very difficult. Difficult, but not impossible.

There will almost certainly be missteps and overreaches in the evolving attempts to rationally regulate this powerful technology with the aim of protecting individuals and society from AI’s worst possible impacts. Those are simply the costs and side effects of change and progress, especially when mixed with politics.

Our View

There is an argument to be made that we flat-out failed, societally and politically, by choosing not to regulate the internet and social media before their worst effects spread like wildfire and embedded themselves deeply into our lives. One might argue that we really “couldn’t see” what was coming with these new and innovative technologies. That is untrue, however, and a weak justification for our past inaction. To anyone paying attention at the time, the warning bells were ringing, but they went unheeded for numerous reasons, the promise of lofty profits chief among them.

To fail again in the same way with AI, a potentially far more impactful technology, would be negligence of the highest order and, societally speaking, somewhat “insane”, if a viable definition of insanity is doing the same thing over and over while expecting different results.

Reasonable regulations for AI are a good and necessary step, and the earlier the better. Whether the proposed EU AI Act is good or bad or sufficient isn’t really the issue; it’s likely not enough. But it is an initial stake in the ground that begins to protect all of us, especially if this kind of early oversight eventually becomes a worldwide standard, from the potential worst effects of AI and its applications. If past experience with the internet and social media is any guide, AI’s effects would be highly consequential even if they were “only” at the same level. In all likelihood, they could be far more damaging.
