IDC PeerScape: Best Practices for Data Security in GenAI Deployments
This IDC PeerScape discusses best practices for protecting data in generative AI (GenAI) deployments. In January 2024, we interviewed multiple cybersecurity software vendors to learn how they protect AI within their own organizations. We expected these security experts to have considered and implemented the strongest available security measures, and the conversations confirmed that expectation. This document details the best practices and advice that emerged from those interviews, which have been edited mainly for length.

"Data security has never been easy, but generative AI has amplified the challenges organizations face when it comes to keeping data secure and usable," said Jennifer Glenn, research director, Security and Trust at IDC. "Organizations need to carefully consider and curate the data they plan to use in their GenAI initiatives and take steps to keep that data secure throughout its life cycle."
IDC PeerScape Figure
Executive Summary
Peer Insights
Practice 1: Curate the Right Data from the Right Sources for Better Security and Privacy, Reduced Legal Risk, and Improved Quality
    Challenge
    Examples
        CrowdStrike
        Splunk
        Trend Micro
        Zscaler
    Guidance
Practice 2: Protect the Model and Source Data from Compromise
    Challenge
    Examples
        Broadcom
        CrowdStrike
        IBM
        Zscaler
    Guidance
Practice 3: Link Input with Output for Demonstrable Security, Quality, and Confidence