Happy 15th Birthday Red Hat Product Security

Mark J. Cox published on 2016-10-17T13:30:00+00:00, last updated 2016-10-17T14:54:37+00:00

This summer marked 15 years since we founded a dedicated Product Security team for Red Hat. While we often publish information in this blog about security technologies and vulnerabilities, we rarely offer a look inside the team itself. So I’d like, if I may, to take you on a little journey through those 15 years and call out some of the events that mean the most to me, particularly what’s changed and what’s stayed the same. In the coming weeks some other past and present members of the team will share their anecdotes and opinions too. If you have a memory of working with our team we’d love to hear about it; you can add a comment here or tweet me.

Our story starts, however, before I joined the company. Red Hat was producing Red Hat Linux in the 1990s and shipping security updates for it. Here’s an early security update notice from 1998, and the first formal Red Hat Security Advisory (RHSA), RHSA-1999:013. Red Hat would collaborate on security issues with other Linux distributors on a private communication list called “vendor-sec”; an engineer would then build and test updates before they were signed and made available.

In Summer 2000, Red Hat acquired C2Net, a security product company I was working at. C2Net was known for the most widely used secure web server of the time, Stronghold. Red Hat was a small company, and with my security background (being also a founder of the Apache Software Foundation and OpenSSL) it was common for questions on anything security-related to end up at my desk. Although our engineers were responsive in dealing with security patches for Red Hat Linux, we had no published processes for handling issues or dealing with researchers and reporters, and we knew we needed something more scalable for the future, when we would have more than one product. So with that in mind I formed the Red Hat Security Response Team (SRT) in September 2001.

The mission for the team was a simple one: to be “responsible for ensuring that security issues found in Red Hat products and services are addressed”. The charter went into a little more detail:

  • Be a contact point for our customers who have found security issues in our products or services, and publish our procedures for dealing with this contact;
  • Track alerts and security issues within the community which may affect users of Red Hat products and services;
  • Investigate and address security issues in our supported products and services;
  • Ensure timely security fixes for our products;
  • Ensure that customers can easily find, obtain, and understand security advisories and updates;
  • Help customers keep their systems up to date, and minimize the risk of security issues;
  • Work with other vendors of Linux and open source software (including our competitors) to reduce the risk of security issues through information sharing and peer review.

That mission and the detailed charter were published on our web site along with many of our policies and procedures. Over the years this has changed very little, and our mission today maps closely to that original one. From day one we wanted to be responsive to anyone who mailed the security team, so we set a high internal SLA goal: a human response to incoming security email within one business day. We miss that high standard from time to time, but on average we achieve it more than 95% of the time.

Fundamentally, all software has bugs, and some bugs have a security consequence. If you’re a product vendor you need a security response team to track and fix those security flaws. Because Red Hat products are composed of open source software, this presents some unique challenges: security issues have to be handled across a supply chain comprising thousands of different projects, each with its own development team, policies, and methodologies. From those early days Red Hat worked out how to do this and do it well. We leveraged the “Getting Things Done” (GTD) methodology to create internal workflows and processes that held up in a stressful environment: when every day can bring something new and the work consists mostly of interruptions, you need a system you can trust so that tasks can be postponed and reprioritised without getting lost.

"Red Hat has had the best track record in dealing with third-party vulnerabilities. This may be due to the extent of their involvement with third-party vendors and the open-source community, as they often contribute their own patches and work closely with third-party vendors." -- Symantec Internet Security Threat Report 2007

By 2002 we had started using Common Vulnerabilities and Exposures (CVE) names to identify vulnerabilities, not just when publishing our advisories but also for coordinating issues in advance between vendors, an unexpected use that was a pleasant surprise to the creators at MITRE. As a CVE editorial board member I would be personally asked to vote on every vulnerability (vulnerabilities would start out as candidates, with a CAN- prefix, before migrating to full CVE- names). As you can imagine, that process didn’t last long: the popularity of CVE names across the industry meant the number of vulnerabilities being handled started to increase rapidly. Now it is uncommon to hear about any vulnerability that doesn’t have a CVE name associated with it. Scaling the CVE process became a big issue in the last few years and hit a crisis point; in 2016, however, the DWF project prompted changes which should help address these concerns long term, replacing a single bottleneck with a distributed process.

In the early 2000s, worms that attacked Linux were fairly common, affecting services that were usually enabled and Internet-facing by default such as “sendmail” and “samba”. None of the worms were “0 day”, however; they exploited vulnerabilities for which updates had been released weeks or months earlier. September 2002 saw the “Slapper” worm, which affected OpenSSL via the Apache web server; “Millen” followed in November 2002, exploiting IMAP. By 2005, Red Hat Enterprise Linux shipped with address space randomization, NX, and various heap and other protection mechanisms which, together with more secure application defaults (and SELinux enabled by default), helped to disrupt worms. By 2014 logos and branded flaws had taken over our attention, and exploits were aimed at making money through botnets and ransomware, or designed for leaking secrets.
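For the curious, the presence of two of those mitigations can still be checked on a modern Linux system through the standard /proc interfaces. Here is a minimal, hypothetical sketch (not a Red Hat tool) showing the idea:

    #!/usr/bin/env python3
    """Illustrative sketch (hypothetical, not a Red Hat tool): check two of
    the exploit mitigations mentioned above via Linux /proc interfaces."""

    def aslr_enabled():
        # /proc/sys/kernel/randomize_va_space: 0 = off, 1 = partial,
        # 2 = full randomization (the usual default on modern kernels)
        with open("/proc/sys/kernel/randomize_va_space") as f:
            return int(f.read().strip()) > 0

    def nx_supported():
        # The "nx" CPU flag in /proc/cpuinfo indicates no-execute page support
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return "nx" in line.split()
        return False

    if __name__ == "__main__":
        print("ASLR enabled:", aslr_enabled())
        print("NX supported:", nx_supported())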

Just as worms and exploits with clever names were common then, vulnerabilities with clever names, domain names, logos, and other branding are common now. This trend really started in 2014 with the OpenSSL flaw “Heartbleed”. Heartbleed was a serious issue that helped highlight the lack of attention given to some key infrastructure projects. But not all the named security issues that followed were important. As we’ve shown in the past, just because a vulnerability gets a fancy name doesn’t mean it’s a significant issue (and the reverse is also true). These issues highlighted the real importance of having a dedicated product security team: a group to weed through the hype and figure out the real impact on the products you supply to your customers. It really has to be a trusted partnership with the customer, though, as you have to prove that you’re actually doing the work of analysing vulnerabilities with security experts, not simply relying on a press story or a third-party vulnerability score. Our Risk Report for 2015 looked at the branded issues and which ones mattered (and why), and at the Red Hat Summit for the last two years we’ve played a “Game of Flaws” card game, matching logos to vulnerabilities and talking about how to assess risk and figure out the importance of issues.

Just like Red Hat itself, SRT was known for its openness and transparency. By 2005 we were publishing security data on every vulnerability we addressed, along with metrics on when the issue became public, how long it was embargoed, its CWE type, its CVSS score, and more. We provided XML feeds of vulnerabilities and scripts that could run risk reports, along with detailed blogs on our performance and metrics. In 2006 we started publishing Open Vulnerability Assessment Language (OVAL) definitions for Red Hat Enterprise Linux products, allowing industry-standard tools to test systems for outstanding errata. These OVAL definitions are consumed today by tools such as OpenSCAP. Our policy of backporting security fixes caused problems for third-party scanning tools in the early days, but by using our data, such as our OVAL definitions, they can now give accurate results to our mutual customers. As new security standards emerged, like the Common Vulnerability Reporting Framework (CVRF) in 2011, we’d get involved in the definitions and embrace them, in this case helping define the fields and providing initial example content to promote the standard. While we originally provided this data as downloadable files on our site, we now have an open API allowing easier access to all our vulnerability data.
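As a rough illustration of what that open access looks like, here is a minimal Python sketch of querying the public Security Data API. The endpoint path, query parameters, and response field names below are assumptions for illustration; consult the current API documentation before relying on them:

    #!/usr/bin/env python3
    # Minimal sketch of pulling vulnerability data from Red Hat's open API.
    # Endpoint path, parameters, and field names are assumptions; check
    # the current Security Data API documentation before relying on them.
    import json
    import urllib.request

    BASE = "https://access.redhat.com/labs/securitydataapi"

    def list_cves(severity="critical", after="2016-01-01"):
        # List CVEs of a given severity published after a date (assumed params)
        url = "{0}/cve.json?severity={1}&after={2}".format(BASE, severity, after)
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    if __name__ == "__main__":
        for cve in list_cves():
            # Field names such as "CVE" and "public_date" are assumptions too
            print(cve.get("CVE"), cve.get("public_date"))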

"Red Hat's transparency on its security performance is something that all distributions should strive for -- especially those who would tout their security response" -- Linux Weekly News (July 2008)

Back in 2005 this transparency on metrics was especially important, as our competitors (vendors of non-open-source operating systems) were publishing industry reports comparing vulnerability “days of risk” and doing demonstrations with bags of candies showing how many more vulnerabilities we were fixing than they were. Looking back, it’s hard to believe anyone took them seriously. Our open data helped counter these reports and establish that they were not comparing things like for like: for example, treating all issues as having the same severity, or not counting issues found by the vendors themselves at all. We even issued a joint statement with other Linux distributions, something unprecedented. We still publish frequent “risk reports” which give an honest assessment of how well we handled security issues in our products, as well as helping customers figure out which issues mattered.

Our team grew substantially over the years, both in the number of associates and in diversity, with staff spread across time zones, offices, and ten different countries. Our work was not just reactive security work either; it involved proactive work such as auditing and bug finding too. Red Hat associates on our team also work in upstream communities, helping projects assess and deal with security issues, and present at technical conferences. We also helped secure the internal supply chain, for example providing facilities and processes for package signing using hardware cryptographic modules. A few years ago this led to the rebranding as “Red Hat Product Security”, to better reflect the multi-faceted nature of the team. Red Hat associates continue to find flaws in Red Hat products as well as in products and services from other companies, which we report responsibly. In 2016, for example, 12% of the issues addressed in our products were discovered by Red Hat associates, and we continue to work with our peers on embargoed security issues.

In our first year we released 152 advisories to address 147 vulnerabilities. In the last year we released 642 advisories to address 1415 vulnerabilities across more than 100 different products, and 2016 saw us release our 5000th advisory.

In a subscription-based business you need to continually show value to customers, and we talk about it in terms of providing the highest quality of security service. We are already well known for getting timely updates out for critical issues: for Red Hat Enterprise Linux in 2016, for example, 96% of Critical security issues had an update available the same or next calendar day after the issue was made public. But our differentiator is not just providing timely security updates; it’s a much bigger involvement. Take the 2014 bash issue branded “Shellshock” as an example. Our team’s response was to ensure we provided timely fixes, but also to provide proactive notifications to customers, through our technical account managers and portal notifications, as well as knowledge base and solution articles to help customers quickly understand the issue and their exposure. Our engineers created the final patch used by vendors to address the issue, we provided testing tools, and our technical blog describing the flaw was the definitive source of information, referenced as authoritative by news articles and even US-CERT.

My favourite quote comes from Alan Kay in 1971: “The best way to predict the future is to invent it”. I’m reminded every day of the awesome team of world-class security experts we’ve built up at Red Hat and I enthusiastically look forward to helping them define and invent the future of open source product security.


About The Author


Mark J. Cox

Mark J Cox lives in Scotland and from 2000 to 2018 was the Senior Director of Product Security at Red Hat. Mark has developed software and worked on the security teams of popular open source projects including Apache and OpenSSL. Mark is a founding member of the Apache Software Foundation and the OpenSSL project.