Predictable security severities

Mark J. Cox, published 2007-05-18, last updated 2016-06-21

Red Hat has shipped products with randomization, stack protection, and other security mechanisms turned on by default since 2003. Vista recently shipped with similar protections, and today I read an article about how the Microsoft Security Response Team is not treating Vista any differently when rating the severity of security issues. The Red Hat Security Response Team uses a similar guide for classification, and I thought it would be worth clarifying how we handle this very situation.

We rate the impact of individual vulnerabilities on the same four-point scale, designed to be an at-a-glance guide to how worried Red Hat is about each security issue. The scale takes into account the potential risk of a flaw based on a technical analysis of the exact flaw and its type, but not the current threat level. Therefore the rating given to an issue will not change if an exploit or worm is later released for the flaw, or if one is available before the release of a fix.

For the purpose of evaluating severities, our protection technologies fall into roughly three categories:

  1. Security innovations that completely block a particular type of security flaw. An example of this is Fortify Source. Given a particular vulnerability, we can evaluate whether the flaw would be caught and blocked at either compile time or run time. Because this is deterministic, we will adjust the security severity of an issue where we can prove it would not be exploitable (a small illustrative sketch follows this list).
  2. Security innovations that should block a particular security flaw from being remotely exploitable. Examples of this are support for NX, randomisation, and stack canaries. Although these technologies can reduce the likelihood of exploiting certain types of vulnerabilities, their protection is not deterministic, so we don't take them into account and don't downgrade the security severity.
  3. Security innovations that try to contain an exploit for a vulnerability. I'm thinking here of SELinux. An attacker who can exploit a flaw in any of the remotely accessible daemons protected by a default SELinux policy will find themselves tightly constrained by that policy. We do not take SELinux into account when setting the security severity.
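As an illustration of the first category, here is a minimal C sketch (my own, not code from any advisory mentioned here) of the kind of fixed-size buffer overflow that Fortify Source is designed to catch. Built against glibc with optimization and -D_FORTIFY_SOURCE=2, the strcpy() call is redirected to a checked variant because the size of the destination buffer is known at compile time, so an oversized input makes the process abort at run time instead of handing the attacker a corrupted stack. The file name and build command are just assumptions for the sketch.

    /* fortify_demo.c - an illustrative sketch, not code from this post.
     * Build:  gcc -O2 -D_FORTIFY_SOURCE=2 -o fortify_demo fortify_demo.c
     * Run with an argument longer than 16 bytes.  Because the size of
     * 'buf' is visible to the compiler, glibc substitutes a checked
     * strcpy (__strcpy_chk); the overflow is detected at run time and
     * the process aborts with a "*** buffer overflow detected ***"
     * style message rather than being exploitable.
     */
    #include <stdio.h>
    #include <string.h>

    static void copy_name(const char *input)
    {
        char buf[16];           /* fixed-size destination, size known at compile time */
        strcpy(buf, input);     /* fortified: an overflow aborts rather than corrupts */
        printf("name: %s\n", buf);
    }

    int main(int argc, char **argv)
    {
        copy_name(argc > 1 ? argv[1] : "short");
        return 0;
    }

This is the deterministic behaviour the first category relies on: either the compiler can prove the copy is safe, or the run-time check aborts before the overflow can be used.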

I've not been keeping a list of vulnerabilities that are deterministically blocked, but I recall a couple of examples where we did alter the severity:

  • In October 2005 a buffer overflow flaw was found in the authentication code of wget. On Fedora Core 4 this flaw was not exploitable as it was caught by Fortify Source.
  • In August 2004, a double-free flaw in the MIT Kerberos KDC was announced. For users of Red Hat Enterprise Linux 4, we were able to downgrade the severity of this flaw from critical, as glibc hardening prevented the double free from being exploited. The flaw was instead rated important, as it could still lead to a remote denial of service (a KDC crash). A short illustrative sketch of this behaviour follows.
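To show why the glibc hardening turned that double free into a crash rather than code execution, here is a minimal C sketch (again my own, not the Kerberos code; the file name and build command are assumptions). When the same heap pointer is passed to free() twice, glibc's malloc consistency checks detect the corruption and abort the process, which is why the issue remained a denial of service.

    /* doublefree_demo.c - an illustrative sketch, not the Kerberos code.
     * Build:  gcc -O2 -o doublefree_demo doublefree_demo.c
     * glibc's malloc consistency checks notice that the same chunk is
     * freed twice and abort the process (typically printing a
     * "double free or corruption" style error), so the bug becomes a
     * crash rather than attacker-controlled heap corruption.
     */
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *ticket = malloc(64);
        if (ticket == NULL)
            return 1;

        strcpy(ticket, "example data");
        free(ticket);   /* first free: normal */
        free(ticket);   /* second free: detected by glibc, process aborts */
        return 0;
    }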
