
Essential AI Security Practices Your Organization Should Know

Powerful tools like AI bring a pressing need for heightened cybersecurity measures, prompting states, countries, and other regulatory bodies to propose laws that provide security guidance and ensure responsible safeguards.

by Tony Gagliardi

January 25, 2024

Artificial intelligence (AI) has permeated every corner of our digital landscape, revolutionizing the way businesses operate and pushing efficiency and innovation to new levels. However, these powerful tools bring a pressing need for heightened cybersecurity measures, prompting states, countries, and other regulatory bodies to propose laws that provide security guidance and ensure responsible safeguards for AI.

As a company that is investing in generative AI solutions, we're sharing insights from our own research and development alongside perspectives from security leaders and practitioners. In this piece, we dive into the impact of AI on businesses and how to navigate the intricate realm of security challenges it presents.

Understanding AI Security Risks

Businesses face a dynamic and evolving AI risk landscape, largely because AI is still a new and complex technology, which makes it difficult both to predict potential vulnerabilities and to trace the consequences of improper usage.

The lack of traceability also makes it hard to maintain accountability—a problem exacerbated by how regularly users can unwittingly share sensitive information. AI systems also present greater supply chain risk, particularly concerning data sources and ethical considerations in data collection and use.

Security Best Practices for Businesses Developing or Using AI Solutions

Keeping the following best practices in mind while implementing AI can strengthen your security posture and boost confidence in secure, ethical AI development.

AI and the U.S. Government

In October 2023, the White House released an executive order on the safe, secure, and trustworthy development and use of artificial intelligence. Here are some of the ways the government plans to address AI:

  • Requiring businesses to develop strong security systems internally and share safety information with the government

  • Protecting against deepfakes by providing standards for detecting them

  • Using AI to find and fix infrastructure vulnerabilities

  • Addressing algorithmic discrimination, including preventing landlords from using AI to discriminate against tenants

  • Using AI for benefits like creating life-saving drugs and advancing education

  • Accelerating AI research and producing various reports (like AI’s impact on the labor market) through shared data resources

  • Helping smaller companies compete with large ones in AI development

  • Allowing highly skilled immigrants with AI expertise to stay in the country and work on AI

  • Accelerating the hiring of AI professionals in government and helping agencies acquire AI technology

  • Expanding international collaboration on AI and helping develop international standards

The U.S. Equal Employment Opportunity Commission (EEOC) has also issued guidance on how Title VII of the Civil Rights Act of 1964 applies to the use of artificial intelligence in employment selection procedures. Title VII generally prohibits employment discrimination based on race, color, religion, sex (including pregnancy, sexual orientation, and gender identity), or national origin.

Basically, if you plan to use AI to make employment decisions, the tool you use must be unbiased. If the AI algorithm's decisions are influenced by race, color, religion, sex, or national origin, or any combination of these characteristics, you will be in violation of Title VII.

Title VII also holds employers accountable when AI tools developed by third parties exhibit the same biases, making it your responsibility to thoroughly vet any AI program you plan to use.
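
One concrete way to vet a tool for disparate impact is the four-fifths rule from the EEOC's Uniform Guidelines, which its AI guidance references: if any group's selection rate is less than 80% of the highest group's rate, the tool warrants a closer look. Here is a minimal sketch of that check; the group labels and counts are hypothetical:

```python
# Minimal sketch of a four-fifths (80%) rule check for an AI screening tool.
# Group labels and counts below are illustrative, not real hiring data.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Map each group to its selection rate: selected / total applicants."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def four_fifths_check(outcomes: dict[str, tuple[int, int]]) -> dict[str, bool]:
    """Flag groups whose selection rate is below 80% of the highest group's rate."""
    rates = selection_rates(outcomes)
    benchmark = max(rates.values())
    return {group: rate / benchmark < 0.8 for group, rate in rates.items()}

# (selected, total applicants) per group -- hypothetical numbers
outcomes = {"group_a": (48, 100), "group_b": (30, 100)}
print(four_fifths_check(outcomes))  # {'group_a': False, 'group_b': True}
```

A flagged group isn't automatically a Title VII violation, but running this kind of audit before (and while) relying on an AI screening tool is exactly the vetting the guidance calls for.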

Compliance Standards

The novelty of AI means many compliance standards don't explicitly mention its usage and implementation, but there are still several considerations to weigh when incorporating AI into your compliance program.

For example:

  • GDPR provisions apply if the personal data of EU residents is being processed. Its transparency requirements would mandate disclosure and consent if AI is used to process this type of data, and privacy by design would require businesses to anonymize the data set and design the model to prevent re-identification. Where that isn't possible, data subjects must be given instructions on how they can exercise their rights under GDPR.

  • Under the CCPA, data subjects must be informed of the use of AI, especially when California residents' personal data is involved.

  • HIPAA requires protected health information (PHI) to be de-identified before it is shared with a non-HIPAA-compliant AI tool or third party.

Because these compliance standards don’t have much text surrounding AI, many companies are implementing risk management frameworks, like the NIST AI RMF, to offer a roadmap in navigating these new tools.

Data Protection and Privacy

One of the main concerns surrounding AI is knowing how to protect sensitive and private data. Your organization should implement robust data encryption and scrubbing techniques as well as anonymize sensitive user data before using AI, where possible. As with any security program, minimizing data and regularly auditing access and usage can be a lifesaver when it comes to mitigating risk.
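
As an illustration, here is a minimal sketch of regex-based scrubbing that masks a few common identifiers before a prompt leaves your environment. The patterns are deliberately simple and far from exhaustive; production systems typically layer dedicated PII-detection tooling on top of this idea:

```python
import re

# Illustrative patterns only -- real deployments need broader coverage
# (names, addresses, account numbers) and purpose-built PII detection.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace matched identifiers with placeholder tokens before sharing text."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Follow up with jane.doe@example.com, SSN 123-45-6789, at 555-867-5309."
print(scrub(prompt))
# Follow up with [EMAIL], SSN [SSN], at [PHONE].
```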

Vendor risk management also plays a key role when using AI solutions. While most companies are familiar with the basic mechanics of a vendor security review, understanding the business use case becomes even more critical with AI. The types of data shared with AI solutions, along with how the business intends to use them, can drastically change the associated risk.

Model Security

Understanding the security posture of the various AI models out there will help inform your decision on which is right for your organization. Businesses should conduct thorough model validation and testing to identify and rectify potential vulnerabilities. 
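
In practice, even a lightweight regression harness helps: run a fixed "golden" set of prompts against each new model version and fail the release if expected content goes missing or forbidden content appears. A minimal sketch, where query_model is a placeholder for your actual model call and the test case is hypothetical:

```python
# Minimal sketch of golden-set regression testing for a model endpoint.

def query_model(prompt: str) -> str:
    """Placeholder for your real model or API call."""
    return "Refunds are accepted within 30 days of purchase."

# Each case: a prompt, a substring that must appear, substrings that must not.
GOLDEN_CASES = [
    {"prompt": "What is our refund window?", "must_contain": "30 days",
     "must_not_contain": ["internal", "password"]},
]

def run_golden_tests() -> list[str]:
    """Return failure descriptions; an empty list means all cases passed."""
    failures = []
    for case in GOLDEN_CASES:
        output = query_model(case["prompt"])
        if case["must_contain"] not in output:
            failures.append(f"missing expected text for {case['prompt']!r}")
        failures += [f"leaked {bad!r} for {case['prompt']!r}"
                     for bad in case["must_not_contain"] if bad in output]
    return failures

print(run_golden_tests())  # [] when every case passes
```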

Implementing anomaly detection mechanisms adds an extra layer of security, allowing swift identification of unexpected deviations in model behavior. Proactive error and exception management is also essential for maintaining the stability of AI applications, minimizing disruptions and potential security loopholes.
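
Anomaly detection here doesn't have to start sophisticated. A rolling z-score over a simple behavioral metric, such as average response length or refusal rate, can surface sudden drift; the window size and threshold below are assumptions to tune:

```python
from collections import deque
from statistics import mean, stdev

class DriftMonitor:
    """Flag observations that deviate sharply from a rolling baseline."""

    def __init__(self, window: int = 100, threshold: float = 3.0):
        self.history: deque[float] = deque(maxlen=window)
        self.threshold = threshold  # z-score cutoff; tune for your workload

    def observe(self, value: float) -> bool:
        """Record a metric (e.g., response length); return True if anomalous."""
        anomalous = False
        if len(self.history) >= 30:  # build a baseline before judging
            mu, sigma = mean(self.history), stdev(self.history)
            anomalous = sigma > 0 and abs(value - mu) / sigma > self.threshold
        self.history.append(value)
        return anomalous
```

In practice a True return would feed an alert pipeline rather than block traffic outright, since legitimate shifts (new use cases, longer documents) also move these metrics.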

When outsourcing models, employing model and/or code signing mechanisms can help ensure the integrity and authenticity of external contributions to the AI ecosystem.
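
For example, before loading a model artifact fetched from an external source, you can verify its digest against a value published out-of-band. The sketch below uses a plain SHA-256 check as a stand-in for full cryptographic signing:

```python
import hashlib
from pathlib import Path

def verify_model(path: Path, expected_sha256: str) -> None:
    """Refuse to load a model file whose digest doesn't match the published value."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):  # hash in chunks
            h.update(chunk)
    if h.hexdigest() != expected_sha256:
        raise RuntimeError(f"model integrity check failed for {path}")

# Usage -- the expected digest would come from the vendor's signed manifest:
# verify_model(Path("model.bin"), expected_sha256="<published sha256 hex digest>")
```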

Access Control and Data Protection Methods

Proper access control and data protection methods are absolutely crucial in any security program, but especially when it comes to AI. Here are a few examples of access control principles to implement and the attacks they prevent:

  • Privileged access control to backend systems and model repository to prevent model theft

  • Prompt and/or input validation to prevent injection attacks (see the sketch after this list)

  • User intervention on sensitive operations to prevent model poisoning

  • Limits on queued actions and requests to prevent denial of service attacks

  • Data segregation to prevent data spillover
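
To make the input validation bullet concrete, here is a minimal sketch of a pre-screening layer that rejects oversized prompts and those matching well-known injection phrasings. The denylist is illustrative only; pattern matching is one layer of defense, not a complete one:

```python
import re

# Illustrative injection markers -- attackers rephrase constantly, so treat
# this as one screening layer, not a complete defense.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.IGNORECASE),
    re.compile(r"reveal (your )?(system prompt|hidden instructions)", re.IGNORECASE),
    re.compile(r"you are now (in )?developer mode", re.IGNORECASE),
]

MAX_PROMPT_CHARS = 4_000  # assumed limit; also blunts resource-exhaustion attempts

def screen_prompt(prompt: str) -> str:
    """Raise ValueError for prompts that are oversized or match injection markers."""
    if len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError("prompt exceeds length limit")
    for pattern in INJECTION_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("prompt matches a known injection pattern")
    return prompt
```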

Threat modeling is also a helpful technique for evaluating how an AI solution or tool could be attacked from a security perspective. The results of a threat modeling exercise can then help identify which specific security controls or safeguards need to be implemented.
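
As a starting point, a STRIDE pass over an AI feature might pair each threat category with an AI-specific attack and a candidate control. The entries below are illustrative examples, not a finished model:

```python
# Illustrative STRIDE mapping for an AI feature -- a seed for a threat
# modeling session, not a complete model.
STRIDE_AI = {
    "Spoofing": ("attacker impersonates a trusted data source",
                 "authenticate and sign data pipelines"),
    "Tampering": ("poisoned training data or a swapped model file",
                  "integrity checks and human review of sensitive updates"),
    "Repudiation": ("no record of who prompted what",
                    "log prompts, outputs, and model versions"),
    "Information disclosure": ("model leaks sensitive training data",
                               "data scrubbing and output filtering"),
    "Denial of service": ("flood of expensive queries",
                          "rate limits and quotas on queued requests"),
    "Elevation of privilege": ("prompt injection drives privileged tool calls",
                               "least-privilege access and user intervention"),
}

for threat, (example, control) in STRIDE_AI.items():
    print(f"{threat}: {example} -> {control}")
```

Notice how the candidate controls line up with the access control principles listed above; threat modeling is how you decide which of them your particular AI feature actually needs.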

Continuous Monitoring and Incident Response

Organizations should develop periodic AI risk and threat assessment procedures to identify and mitigate risk in the models and tools they use. Ensuring the integrity of AI processes also requires regular input and output validation, minimizing the risk of inaccurate or manipulated data.

Regular vulnerability scanning and continuous monitoring of resource utilization are crucial for optimizing efficiency and detecting unusual patterns that may indicate security threats. Integrating your AI models and processes into your existing risk management and security operations can help your business stay proactive in mitigating risk.
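
For instance, a simple sliding-window monitor can track per-user request volume and flag spikes worth investigating; the window and budget below are assumptions to adapt:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 120  # assumed per-user budget; tune to your workload

_request_log: dict[str, deque[float]] = defaultdict(deque)

def record_request(user_id: str) -> bool:
    """Track per-user request timestamps; return True if the user exceeds budget."""
    now = time.monotonic()
    log = _request_log[user_id]
    log.append(now)
    while log and now - log[0] > WINDOW_SECONDS:  # drop stale entries
        log.popleft()
    return len(log) > MAX_REQUESTS_PER_WINDOW
```

A True return would feed an alert or a throttle in practice, and the same pattern extends to token counts or compute cost per user.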

Organizations should also consider conducting an incident response tabletop exercise that incorporates AI into the scenario, such as an AI tool's outputs inadvertently disclosing sensitive data or attackers using AI tools to scan company systems. This type of exercise can surface areas of improvement in the overall incident response process to account for AI threats.

Employee Training and Awareness

Of course, none of these practices mean anything without thoroughly trained employees implementing them. Communicate with your team about the importance of maintaining a healthy security posture, and make sure they know the risks that come with AI usage.

We recommend educating all employees on these topics:

  • Company policies and expectations around AI usage

  • Responsible and ethical AI usage

  • Intellectual property management

  • Secure AI implementation practices for developers

  • Business continuity and incident response

As businesses embrace the transformative power of AI, prioritizing safety and implementing these security practices is not only beneficial but necessary. By educating employees and fostering a culture of transparency and accountability, your organization can harness the potential of AI while mitigating risks and safeguarding sensitive data.

Tony Gagliardi
Tony Gagliardi's expertise focuses on building sound cybersecurity risk management programs that meet security compliance requirements. Tony is a Certified Information Systems Security Professional (CISSP) specializing in GRC, SOC 2, ISO 27001, GDPR, CCPA/CPRA, HIPAA, various NIST frameworks, and enterprise risk management.