
AI Regulations: Where We’re At and Where We’re Going

As nations around the globe grapple with the challenges posed by AI, efforts to establish comprehensive regulatory frameworks are gaining momentum. These frameworks aim to strike a delicate balance between fostering innovation and safeguarding against potential harm.

by Shera Brady

March 19, 2024
Contents

  • The AI Risks
  • What’s Already in Place?
  • What’s In Store for AI Regulations

The need for robust regulations to ensure the responsible development and deployment of AI systems has never been more critical. From concerns about data privacy and algorithmic bias to fears of autonomous decision-making gone awry, the complexities surrounding AI regulations demand careful attention from policymakers, industry leaders, and society at large. 

The AI Risks

The risks associated with unregulated AI loom large, ranging from privacy breaches to discriminatory decision-making. Ethical concerns surrounding AI deployment, including bias, fairness, and transparency, are especially acute in areas like hiring and employment, reinforcing worries that AI could exacerbate existing inequalities or create new ones.

There are also pressing questions about privacy and the use of AI. In a world of billion-dollar lawsuits, licensing disputes, and Google striking a $60M deal with Reddit to train its models on data from 1.22 billion contributing users, it’s no wonder people are calling for more robust protections, fast.

In fact, most AI users are not fully briefed on how their data is being collected and used. Many organizations are lobbying for informed consent and for users’ ability to opt out of data collection or delete data they don’t want saved.

The potential safety and societal impacts of unbridled AI use underscore the need for clear guidelines, thoughtful regulation, and effective oversight.

What’s Already in Place?

Major jurisdictions such as the European Union (EU), the United States (US), and China have been developing regulatory frameworks to address the complexities of AI deployment. However, there are notable differences in their approach and focus.

GDPR and AI

In the European Union, the General Data Protection Regulation (GDPR) does not yet explicitly mention AI. Still, many of its provisions are relevant to AI, and some are challenged by the new ways of processing personal data that AI enables. Here are a few ways the GDPR can be applied to AI:

  • Purpose limitation: Refers to the idea of data compatibility, meaning personal data may only be reused for purposes compatible with the purpose it was originally collected for. In practice, your data cannot simply be repurposed to train an AI system without a compatible basis.

  • Data minimization: Requires collecting and processing only the personal data necessary for the stated purpose, which also means reducing the ease with which the data can be connected to individuals.

  • Privacy by design: Requires businesses to anonymize datasets and design models to prevent re-identification. Where that is not possible, data subjects must be told how to exercise their rights under the GDPR. (A minimal sketch of what minimization and pseudonymization can look like in practice follows this list.)
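
To make these obligations concrete, here is a minimal, illustrative Python sketch of how a team might apply data minimization and pseudonymization before records reach an AI training pipeline. The field names, the ALLOWED_FIELDS allow-list, and the salt handling are hypothetical choices for this example, not requirements spelled out in the GDPR.

import hashlib

# Illustrative only: keep just the fields needed for the declared purpose (data minimization).
ALLOWED_FIELDS = {"age_band", "country", "interaction_text"}

def pseudonymize_user_id(user_id: str, salt: str) -> str:
    # Replace a direct identifier with a salted hash so records are harder to link back to a person.
    return hashlib.sha256((salt + user_id).encode("utf-8")).hexdigest()

def prepare_training_record(raw: dict, salt: str) -> dict:
    # Drop direct identifiers and any field outside the declared purpose.
    record = {k: v for k, v in raw.items() if k in ALLOWED_FIELDS}
    record["subject_ref"] = pseudonymize_user_id(raw["user_id"], salt)
    return record

raw_event = {
    "user_id": "u-1029",
    "email": "person@example.com",   # direct identifier: never reaches the training set
    "age_band": "25-34",
    "country": "DE",
    "interaction_text": "Asked about export options",
}

print(prepare_training_record(raw_event, salt="use-a-managed-secret-here"))

Keep in mind that pseudonymized data still counts as personal data under the GDPR; genuine anonymization means individuals can no longer be identified by any reasonably likely means, which is a higher bar than hashing an ID.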

The EU’s recently approved AI Act, which operates alongside the GDPR, sets stringent standards for AI governance and imposes heavy penalties for violations. Most violations of the act will cost companies up to $16 million or 3% of annual global turnover, but fines can go as high as $38 million or 7% of annual global turnover for violations involving AI systems the act prohibits.

United States and AI Laws

The White House recently released the Blueprint for an AI Bill of Rights, which includes five principles to help the public incorporate protections into policy and practice. These principles are:

  1. Safe and Effective Systems: Automated systems should be developed with consultation from diverse communities, stakeholders, and domain experts to identify concerns, risks, and potential impacts of the system.

  2. Algorithmic Discrimination Protections: Designers, developers, and deployers of automated systems should take proactive and continuous measures to protect individuals and communities from algorithmic discrimination and to use and design systems equitably.

  3. Data Privacy: Designers, developers, and deployers of automated systems should seek your permission and respect your decisions regarding collection, use, access, transfer, and deletion of your data in appropriate ways and to the greatest extent possible; where not possible, alternative privacy by design safeguards should be used.

  4. Notice and Explanation: Designers, developers, and deployers of automated systems should provide generally accessible plain-language documentation including clear descriptions of the overall system functioning and the role automation plays, notice that such systems are in use, the individual or organization responsible for the system, and explanations of outcomes that are clear, timely, and accessible.

  5. Human Alternatives, Consideration, and Fallback: You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter. (A simple opt-out gate for a data pipeline is sketched just after this list.)
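
As a purely illustrative example of how the data privacy and opt-out principles could surface inside a data pipeline, the following Python sketch shows a simple consent gate. The ConsentRecord fields, the "model_training" purpose string, and the deletion flag are hypothetical names for this sketch, not terms drawn from the Blueprint itself.

from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    # Hypothetical per-user consent state: which purposes the user agreed to,
    # and whether they have since asked for their data to be deleted.
    user_id: str
    allowed_purposes: set = field(default_factory=set)
    deletion_requested: bool = False

def may_use_for_training(consent: ConsentRecord) -> bool:
    # Include a user's data only if they granted this specific purpose
    # and have not exercised their right to deletion.
    return "model_training" in consent.allowed_purposes and not consent.deletion_requested

consents = [
    ConsentRecord("u-1", {"analytics", "model_training"}),
    ConsentRecord("u-2", {"analytics"}),                                 # never opted in to training
    ConsentRecord("u-3", {"model_training"}, deletion_requested=True),   # opted in, then asked for deletion
]

print([c.user_id for c in consents if may_use_for_training(c)])  # prints ['u-1']

In a real system, a gate like this would sit in front of every training-data export, and later opt-outs or deletion requests would also need to propagate to data that has already been collected.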

On the state level, the California Consumer Privacy Act (CCPA) stands out as the most comprehensive US state privacy law, giving California residents significant control over their personal data and imposing obligations on businesses that collect, use, or sell it.

What’s In Store for AI Regulations

Emerging trends in AI regulation are likely to encompass greater emphasis on ethical considerations, transparency, and accountability, reflecting a growing recognition of the need to address societal concerns surrounding AI deployment. Potential areas for regulatory improvement may include: 

  • Refining existing compliance frameworks to include AI 

  • Enhancing enforcement mechanisms

  • Fostering greater collaboration between regulatory bodies and industry stakeholders

International cooperation and collaboration are also expected to play a pivotal role in addressing the global nature of AI challenges, facilitating knowledge sharing, harmonizing standards, and promoting interoperability across jurisdictions. There are a number of organizations already working on developing AI policies, including:

  • The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, whose mission is to ensure every stakeholder involved in the design and development of autonomous and intelligent systems is educated, trained, and empowered to prioritize ethical considerations so that these technologies are advanced for the benefit of humanity.

  • The United Nations’ Multistakeholder Advisory Body on Artificial Intelligence​ is building global capacity for the development and use of AI in a manner that is trustworthy, human rights-based, safe and sustainable, and promotes peace.

  • The Cloud Security Alliance’s AI Safety Initiative is working to reduce the risk of AI technologies being misused by taking the steps necessary to educate and instill best practices when managing the full lifecycle of AI capabilities, ensuring—most importantly—that they are designed, developed, and deployed to be safe and secure.

Overall, the future of AI regulation promises to be marked by a continued commitment to balancing innovation with responsible governance to harness the full potential of AI for the benefit of society.

The complexities surrounding AI regulations, particularly in the realm of privacy, underscore the need for continuous adaptation and collaboration among stakeholders. As the AI privacy paradox looms large, legal and ethical committees face the daunting task of keeping pace with the rapid advancements in AI technology to effectively address emerging challenges. It's imperative that policymakers, technology developers, and the public remain actively engaged in the ongoing discourse to shape the trajectory of AI-driven data processing and safeguard privacy rights.
