Trustworthy AI: Building Security Into AI Systems

This IDC Perspective provides an overview of the security challenges in protecting AI systems, the threat landscape, and the security baselines needed to build trust into AI. "As enterprises embark on major AI-powered digital transformation, security risks to AI systems are set to become greater. With traditional approaches to security insufficient in providing complete protection, embedding appropriate security controls into AI needs to become an integral component of an organization's security risk management program," said Ralf Helkenberg, research manager, European Privacy and Data Security.

Executive Snapshot
Situation Overview
The Emerging Security Risk to AI Systems
AI in Cybersecurity
AI Life Cycle and Model Security
Types of AI Model Threats
AI Model Reconnaissance
Poisoning Attack
Evasion Attack
Prompt Injection Attack
Supply Chain Attack
Privacy Attacks
Model Replication
Model Exfiltration
Advice for the Technology Buyer
Defense Against AI Security Threats
1. Identify: Assess AI Security Risk and Posture
AI Asset Mapping
Use-Case–Based Risk Assessment
2. Protect: Implement Safeguarding Measures
Security Awareness
Model Safeguards
Security by Design
3. Detect: Enable Timely Discovery of AI Security Events
Security Monitoring
4. Respond: Prepare for AI Security Incidents
Attack Response Plans
Learn More
Related Research
Synopsis
