IDC PeerScape: Practices for Securing AI Models and Applications

This IDC PeerScape describes best practices for securing AI models and applications.

"Cybersecurity vendors are securing their AI applications and models by protecting APIs, monitoring model inputs and outputs, and proactively looking for weaknesses," said Michelle Abraham, research director, Security and Trust at IDC. "They have well-thought-out protections in place using existing security technologies as well as new technologies designed for GenAI."

Please Note: Extended description available upon request.


IDC PeerScape Figure
Executive Summary
Peer Insights
Practice 1: Protect APIs and Connections to AI Infrastructure as Well as the AI Infrastructure Itself
Challenge
Examples
Broadcom
CrowdStrike
IBM
Trend Micro
Guidance
Practice 2: Use Verified and Tested Foundation Models
Challenge
Examples
Broadcom
Cisco
CrowdStrike
IBM
Trend Micro
Guidance
Practice 3: Monitor Model Inputs and Outputs to Detect and Respond to Attacks Against AI
Challenge
Examples
IBM
Cisco
Splunk
Guidance
