Red Hat security ratings for AI models


Red Hat's security classifications and guidance for vulnerabilities specific to AI models are outlined below.
Red Hat Product Security rates the severity of security issues found in Red Hat-provided AI models using a four-point scale (Low, Moderate, Important, and Critical).

The four-point scale indicates how seriously Red Hat regards an issue, helping you judge its severity and prioritize the most important updates. The scale assesses the potential risk based on a technical analysis of the specific flaw and its type, not the current threat level; a rating will not change if an exploit is later released for a flaw or if one is available before a fix is released.

Note: Because AI is a fast-moving field, new standards will develop and applications of AI will continue to expand. This article will be updated as the field changes.

Issues pertaining to weaknesses in AI systems that can be exploited to negatively impact the confidentiality, integrity, or availability of the affected component are considered security vulnerabilities by Red Hat Product Security. For these issues, see Severity ratings. The four ratings, as they apply to AI models, are described below; an illustrative sketch of how the scale might be applied follows the list.

a. Critical: This rating is given to flaws that could be easily exploited and allow attackers to read, modify, or delete other users’ data or perform a privileged action on behalf of another user, without requiring user interaction.
b. Important: This rating is given to flaws that could be exploited and allow attackers to read, modify, or delete other users’ data or perform a privileged action on behalf of another user, but require user interaction to succeed.
c. Moderate: This rating is given to flaws that could be exploited to cause denial-of-service-like conditions on the model via an inference endpoint, or to steal other users’ data from the model without authorization.
d. Low: This rating is given to flaws that could be exploited to cause data poisoning in the model via adversarial fine-tuning, which could lead to incorrect or malicious data being generated during inference.
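As an illustration only, the following minimal Python sketch shows how the four-point scale above could be encoded in triage tooling. The Severity enum, the ModelFlaw attributes, and the rate_flaw helper are hypothetical assumptions made for this example; they are not part of Red Hat's actual rating process.

from dataclasses import dataclass
from enum import Enum


class Severity(Enum):
    """The four-point scale, from lowest to highest severity."""
    LOW = 1
    MODERATE = 2
    IMPORTANT = 3
    CRITICAL = 4


@dataclass
class ModelFlaw:
    """Hypothetical flaw characteristics used to triage against the scale."""
    easily_exploited: bool           # exploitable with little attacker effort
    compromises_user_data: bool      # read/modify/delete other users' data or act on their behalf
    requires_user_interaction: bool  # the victim must take some action for the exploit to succeed
    denial_of_service: bool          # DoS-like conditions reachable via an inference endpoint
    data_poisoning: bool             # adversarial fine-tuning that poisons generated output


def rate_flaw(flaw: ModelFlaw) -> Severity:
    """Map flaw characteristics to a rating, mirroring the definitions above."""
    if flaw.compromises_user_data and flaw.easily_exploited and not flaw.requires_user_interaction:
        return Severity.CRITICAL
    if flaw.compromises_user_data and flaw.requires_user_interaction:
        return Severity.IMPORTANT
    if flaw.denial_of_service or flaw.compromises_user_data:
        return Severity.MODERATE
    # Data poisoning via adversarial fine-tuning, or anything milder, rates Low.
    return Severity.LOW


# Example: a data-exfiltration flaw that requires the victim to submit a crafted prompt.
example = ModelFlaw(
    easily_exploited=True,
    compromises_user_data=True,
    requires_user_interaction=True,
    denial_of_service=False,
    data_poisoning=False,
)
print(rate_flaw(example))  # Severity.IMPORTANT

In practice, published ratings come from Red Hat Product Security's own analysis of each flaw rather than a mechanical rule; the sketch only restates the criteria in a compact form.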

Flaws related to unexpected model behavior that do not impact confidentiality, integrity, or availability, and that fall outside the defined intent and scope of the model design (as documented in the model card), are considered AI safety issues. Safety issues are associated with producing harmful content. We do not classify AI safety issues as security vulnerabilities.
