Survey of security leaders shows AI adoption is accelerating without proper security measures
AUSTIN, Texas, March 6, 2024 /PRNewswire/ -- HiddenLayer, the leading security provider for artificial intelligence (AI) models and assets, today released its inaugural AI Threat Landscape Report highlighting the pervasive use of AI and the risks involved in its deployment. Nearly all surveyed companies, 98%, consider at least some of their AI models crucial to their business success, and 77% identified breaches of their AI in the past year. Yet only 14% of IT leaders said their respective companies are planning and testing for adversarial attacks on AI models.
The report surveyed 150 IT security and data science leaders to shed light on the biggest vulnerabilities impacting AI today, their implications for commercial and federal organizations, and cutting-edge advancements in security controls for AI in all its forms.
The survey uncovered AI's widespread utilization by today's businesses as companies have, on average, a staggering 1,689 AI models in production. In response, security for AI has become a priority, with 94% of IT leaders allocating budgets to secure their AI in 2024. Yet only 61% are highly confident in their allocation, and 92% are still developing a comprehensive plan for this emerging threat. These findings reveal the need for support in implementing security for AI.
"AI is the most vulnerable technology ever to be deployed in production systems," said Chris "Tito" Sestito, Co-Founder and CEO of HiddenLayer. "The rapid emergence of AI has resulted in an unprecedented technological revolution, of which every organization in the world is affected. Our first-ever AI Threat Landscape Report reveals the breadth of risks to the world's most important technology. HiddenLayer is proud to be on the front lines of research and guidance around these threats to help organizations navigate the security for AI landscape."
Risks Involved with AI Use
Adversaries can leverage a variety of methods to utilize AI to their advantage. The most common risks of AI usage include:
- Manipulation to give biased, inaccurate, or harmful information.
- Creation of harmful content, such as malware, phishing, and propaganda.
- Development of deep fake images, audio, and video.
- Exploitation by malicious actors to gain access to dangerous or illegal information.
Common Types of Attacks on AI
There are three major types of attacks on AI:
- Adversarial Machine Learning Attacks target AI algorithms, aiming to alter the AI's behavior, evade AI-based detection, or steal the underlying technology.
- Generative AI System Attacks target an AI system's filters and restrictions in order to generate content deemed harmful or illegal.
- Supply Chain Attacks target ML artifacts and platforms with the intention of executing arbitrary code and delivering traditional malware (a minimal sketch of this class of risk follows this list).
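To make the supply chain risk concrete, below is a minimal, hypothetical sketch (not HiddenLayer's Model Scanner) of the kind of static check a scanner can run on a Python pickle file, the serialization format underlying many ML model artifacts. It flags pickle opcodes that allow an artifact to import and call arbitrary code at load time; everything here uses only the Python standard library and is illustrative rather than production-ready.

```python
import io
import pickle
import pickletools

# Opcodes that let a pickle import and invoke arbitrary Python callables;
# their presence in a downloaded model artifact is a classic red flag.
SUSPICIOUS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ"}

def scan_pickle(data: bytes) -> list:
    """Return risky opcode/argument pairs found in a pickle byte stream."""
    return [
        f"{op.name} {arg!r}"
        for op, arg, _pos in pickletools.genops(io.BytesIO(data))
        if op.name in SUSPICIOUS
    ]

class Exploit:
    # A malicious artifact can run code the moment it is unpickled.
    def __reduce__(self):
        import os
        return (os.system, ("echo pwned",))

print(scan_pickle(pickle.dumps({"weights": [0.1, 0.2]})))  # [] -- clean
print(scan_pickle(pickle.dumps(Exploit())))  # flags STACK_GLOBAL and REDUCE
```

Loading the second artifact with pickle.load would execute the embedded command immediately, which is why scanning artifacts before deployment matters.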
Challenges to Securing AI
While industries are reaping the benefits of increased efficiency and innovation thanks to AI, many organizations do not have proper security measures in place to ensure safe use. Some of the biggest challenges reported by organizations in securing their AI include:
- Shadow AI: 61% of IT leaders acknowledge shadow AI, AI solutions that are not officially known to or under the control of the IT department, as a problem within their organizations.
- Third-Party AIs: 89% express concern about security vulnerabilities associated with integrating third-party AIs, and 75% believe third-party AI integrations pose a greater risk than existing threats.
Best Practices for Securing AI
HiddenLayer has outlined recommendations for organizations to begin securing their AI, including:
- Discovery and Asset Management: Begin by identifying where AI is already used in your organization. What applications has your organization already purchased that use AI or have AI-enabled features?
- Risk Assessment and Threat Modeling: Perform threat modeling to identify the vulnerabilities and attack vectors that malicious actors could exploit, rounding out your understanding of your organization's AI risk exposure.
- Data Security and Privacy: Go beyond the typical implementation of encryption, access controls, and secure data storage practices to protect your AI model data. Evaluate and implement security solutions that are purpose-built to provide runtime protection for AI models.
- Model Robustness and Validation: Regularly assess the robustness of AI models against adversarial attacks. This involves pen-testing the model's responses to various attacks, such as intentionally manipulated inputs (a toy example follows this list).
- Secure Development Practices: Incorporate security into your AI development lifecycle. Train your data scientists, data engineers, and developers on the various attack vectors associated with AI.
- Continuous Monitoring and Incident Response: Implement continuous monitoring mechanisms to detect anomalies and potential security incidents affecting your AI in real time (a simple sketch also follows this list), and develop a robust AI incident response plan to quickly and effectively address security breaches or anomalies.
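As a toy illustration of the robustness testing recommended above, the sketch below applies the fast gradient sign method (FGSM), a standard one-step adversarial attack, to a hypothetical logistic-regression scorer. The weights, inputs, and epsilon are all made up for illustration; real assessments would target production models with purpose-built tooling such as the open-source Adversarial Robustness Toolbox.

```python
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=8), 0.1           # hypothetical "trained" weights

def predict(x):
    """P(label = 1 | x) for a toy logistic-regression model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm(x, y, eps):
    """One-step FGSM: perturb each feature along the sign of the loss gradient."""
    # For sigmoid output with cross-entropy loss, dLoss/dx = (p - y) * w.
    grad = (predict(x) - y) * w
    return x + eps * np.sign(grad)

x, y = rng.normal(size=8), 1.0
x_adv = fgsm(x, y, eps=0.25)
print(f"clean: {predict(x):.3f}  adversarial: {predict(x_adv):.3f}")
# A large score drop under a small perturbation signals a brittle model.
```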
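And for the monitoring recommendation, here is an equally simplified, hypothetical sketch of one signal such monitoring can track: z-scoring each prediction's confidence against a rolling baseline to flag probe-like outliers. Production detection-and-response tooling correlates many more signals than this.

```python
import numpy as np

class ConfidenceMonitor:
    """Flag predictions whose confidence sits far outside the recent baseline."""

    def __init__(self, window=500, threshold=4.0, min_samples=30):
        self.scores = []
        self.window, self.threshold, self.min_samples = window, threshold, min_samples

    def observe(self, confidence):
        """Record one prediction's confidence; return True if it looks anomalous."""
        self.scores = (self.scores + [confidence])[-self.window:]
        if len(self.scores) < self.min_samples:
            return False                  # still building a baseline
        mu = np.mean(self.scores)
        sigma = np.std(self.scores) + 1e-9
        return abs(confidence - mu) / sigma > self.threshold

monitor = ConfidenceMonitor()
for c in np.random.default_rng(1).normal(0.9, 0.02, 200):
    monitor.observe(float(c))
print(monitor.observe(0.05))  # True: a probe-like outlier stands out
```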
HiddenLayer's products and services accelerate the process of securing AI. Its AISec Platform provides a comprehensive AI security solution that ensures the integrity and safety of models throughout an organization's MLOps pipeline. Its Machine Learning Detection & Response (MLDR) product enables organizations to automate and scale the protection of AI models and ensure their security in real time, and its Model Scanner enables companies to evaluate the security and integrity of their ML artifacts before deploying them.
For more information, view the full report here.
About HiddenLayer
HiddenLayer is the leading provider of security for AI. Its security platform helps enterprises safeguard the machine learning models behind their most important products. HiddenLayer is the only company to offer turnkey security for AI that does not add unnecessary complexity to models and does not require access to raw data and algorithms. Founded by a team with deep roots in security and ML, HiddenLayer aims to protect enterprises' AI from inference, bypass, and extraction attacks, as well as model theft. The company is backed by a group of strategic investors, including M12, Microsoft's Venture Fund, Booz Allen Ventures, IBM Ventures, and Capital One Ventures.
Contact
David Sack
SutherlandGold for HiddenLayer
[email protected]
SOURCE HiddenLayer