Red Hat Customer Portal

Latest Posts

  • CVE-2016-3710: QEMU: out-of-bounds memory access issue

    Authored by: Prasad Pandit

    Quick Emulator (aka QEMU) is an open source systems emulator. It emulates various processors and their accompanying hardware peripherals such as disks, serial ports, and NICs. A serious out-of-bounds read/write vulnerability in its Video Graphics Array (VGA) emulator was discovered and reported by Wei Xiao and Qinghao Tang of the Marvel Team at 360.cn Inc. This vulnerability is formally known as Dark Portal. In this post we'll see how Dark Portal works and how to mitigate it.

    VGA is a hardware component primarily responsible for drawing content on a display device. This content can be text or images at various resolutions. The VGA controller comes with its own processor (GPU) and its own RAM, whose size varies from device to device. The VGA emulator in QEMU comes with a default memory of 16 MB. The system's CPU maps this memory, or parts of it, to supply graphical data to the GPU.

    The VGA standard has evolved, and many extensions to it have been devised to support higher resolutions or new hardware. The VESA BIOS Extensions (VBE) is a software interface implemented in the VGA BIOS, and the Bochs VBE extension is a set of registers designed to support Super VGA (SVGA) hardware. The QEMU VGA emulator implements both VBE and the Bochs VBE extension. It provides two ways to access its video memory:

    • Linear frame buffer: the entire video RAM is accessed by the CPU like a byte-addressed array in C.
    • Bank switching: a chunk (or bank) of 64 KB of video memory is mapped into the host's memory. The host CPU can slide this 64 KB window to access other parts of the video memory.

    VBE has numerous registers that hold parameters such as the memory bank's size and offset. These registers can be manipulated using the VGA I/O port read/write functions. The VBE_DISPI_INDEX_BANK register holds the offset address of the currently used bank (or window) of the VGA memory. In order to update a display pixel, the GPU must calculate its location on the screen and its offset within the current memory bank.

    QEMU VGA emulator in action:

    In QEMU's VGA emulator, a user can set the bank offset via the VBE_DISPI_INDEX_BANK register, which is handled by the vbe_ioport_write_data() routine:

            static void vbe_ioport_write_data(void *opaque, uint32_t addr, uint32_t val)
            {
                VGACommonState *s = opaque;
                ...
                switch (s->vbe_index) {
                case VBE_DISPI_INDEX_BANK:
                    ...
                    s->bank_offset = (val << 16);  /* guest-controlled value */
                    break;
                ...
                }
            }
    

    The VGA read/write functions vga_mem_readb() and vga_mem_writeb() compute the pixel location using the supplied address and the bank_offset value:

            uint32_t vga_mem_readb/writeb(VGACommonState *s, hwaddr addr, ...)
            {
                ...
                switch (memory_map_mode) {
                case 1:
                    ...
                    addr += s->bank_offset;
                    break;
                ...
                }
                ...
                /* standard VGA latched access */
                s->latch = ((uint32_t *)s->vram_ptr)[addr];
            }
    

    The out-of-bounds read/write access occurs because the byte-addressed (uint8_t *) video memory is accessed as a double word (uint32_t *) type. This type promotion pushes the computed pixel address beyond the 16 MB of VGA memory.
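
    A rough Python sketch of the arithmetic (illustrative only; the names VRAM_SIZE, vram, and addr are stand-ins, not QEMU code) shows how an index that is valid for byte-wise access lands far outside the buffer once it is treated as an index into 32-bit words:

        VRAM_SIZE = 16 * 1024 * 1024                 # default QEMU VGA memory, 16 MB
        vram = bytearray(VRAM_SIZE)                  # stand-in for s->vram_ptr

        addr = VRAM_SIZE - 1                         # in bounds for byte-wise access
        vram[addr]                                   # fine: the last byte of the buffer

        # Viewing the same buffer as an array of 32-bit words scales every index
        # by 4, so the same addr now refers to byte offset addr * 4 (about 48 MB
        # past the end of the buffer):
        words = memoryview(vram).cast("I")
        try:
            words[addr]
        except IndexError:
            print("index", addr, "is out of range for", len(words), "words")
        # Python checks the bound and raises an error; the C cast in QEMU performs
        # no such check, so the access reads or writes outside the 16 MB buffer.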

    Impact:

    This issue affects all QEMU/KVM and Xen guests in which the VGA emulator is enabled. Depending on where the out-of-bounds access lands in host memory, it could lead to information disclosure or crash the QEMU process, resulting in a denial of service. It could potentially be leveraged to execute arbitrary code with the privileges of the QEMU process on the host.

    Mitigation:

    The sVirt and seccomp functionality used to restrict the privileges and resource access of the host's QEMU process might mitigate the impact of successful exploitation of this issue. A possible policy-based workaround is to avoid granting untrusted users administrator privileges within guests.

    Conclusion:

    VGA VBE registers can be manipulated by a privileged user inside a guest, leading to an out-of-bounds memory access in the QEMU process on the host. This essentially makes it an attack by a guest on the virtualisation host.

    Posted: 2016-05-11T13:30:00+00:00
  • Red Hat Product Security Risk Report: 2015

    Authored by: Red Hat Product...

    This report takes a look at the state of security risk for Red Hat products for calendar year 2015. We look at key metrics, specific vulnerabilities, and the most common ways users of Red Hat products were affected by security issues.

    Our methodology is to look at how many vulnerabilities we addressed and their severity, then look at which issues were of meaningful risk, and which were exploited. All of the data used to create this report is available from public data maintained by Red Hat Product Security.

    Red Hat Product Security assigns a Common Vulnerabilities and Exposures (CVE) name to every security issue we fix. If we fix a bug that later turns out to have had a security implication we’ll go back and assign a CVE name to that issue retrospectively. Every CVE fixed has an entry in our public CVE database in the Red Hat Customer Portal as well as a public bug that has more technical detail of the issue. Therefore, for the purposes of this report we will equate vulnerabilities to CVEs.

    Note: Vulnerability counts can be used for comparing Red Hat issues within particular products or dates because we apply a consistent methodology on how we allocate names and how we score their severity. You should not use vulnerability count data (such as the number of CVEs addressed) to compare with any other product from another company, because the methodology used to assign and report on vulnerabilities varies. Even products from different vendors that are affected by the same CVE can have variance in the severity of the CVE given the unique way the product is built or integrated.

    Vulnerabilities

    Across all Red Hat products, and for all issue severities, we fixed more than 1300 vulnerabilities by releasing more than 600 security advisories in 2015. At first that may seem like a lot of vulnerabilities, but for a given user only a subset of those issues will be applicable for the products and versions of the products in use. Even then, within a product such as Red Hat Enterprise Linux, not every package is installed in a default or even likely installation.

    Red Hat rates vulnerabilities using a 4 point scale designed to be an at-a-glance guide to the amount of concern Red Hat has for each security issue. This scale is designed to align as closely as possible with similar scales from other open source groups and enterprise vendors, such as Microsoft. The severity levels are designed to help users determine which advisories matter the most. Providing a prioritised risk assessment helps customers understand and better schedule upgrades to their systems, and make a more informed decision about the risk each issue poses to their unique environment.

    Since 2009, we have also published Common Vulnerability Scoring System (CVSS) scores for every vulnerability addressed, to aid customers who use CVSS scoring for their internal processes. However, CVSS scores have some limitations and we do not use CVSS as a way to prioritise vulnerabilities.

    The 4 point scale rates vulnerabilities as Low, Moderate, Important, or Critical.

    Vulnerabilities rated Critical in severity can pose the most risk to an organisation. By definition, a Critical vulnerability is one that could potentially be exploited remotely and automatically by a worm. However we, like other vendors, also stretch the definition to include flaws that affect web browsers or plug-ins, where a user only needs to visit a malicious (or compromised) website in order to be exploited. These flaws actually account for the majority of the Critical issues fixed, as we will show in this report. If you’re using a Red Hat product that does not have a desktop, for example, you’ll be affected by far fewer Critical issues.

    The table below gives some examples for advisory and vulnerability counts for a subset of products and product families. A given Red Hat advisory may fix multiple vulnerabilities across multiple versions of a product. Therefore, a count of vulnerabilities can be used as an estimate of the amount of effort in understanding the issues and fixes. A count of advisories can be used as an estimate of the amount of effort to understand and deploy updates.

    One product broken out in the table is Red Hat Enterprise Linux 6. During Red Hat Enterprise Linux 6 installation, the user gets a choice of installing either the default selection of packages or making a custom selection. If the user installs a “default” “server” installation and does not add any additional packages or layered products, then in 2015 there were just 6 Critical and 19 Important security advisories applicable to that system (and 29 advisories that also addressed Moderate or Low issues).

    Where there are more advisories shown than vulnerabilities (such as for OpenStack), this is because the same vulnerability may affect multiple currently supported versions of the product, and each version gets its own security advisory.

    In 2015, across all Red Hat products, there were 112 Critical Red Hat security advisories released, addressing 373 Critical vulnerabilities. 82% of the Critical issues had updates available to address them the same or next day after the issue was public. 99% of Critical vulnerabilities were addressed within a week of the issue being public.

    Looking at just the subset of issues affecting base Red Hat Enterprise Linux releases, there were 46 Critical Red Hat security advisories released addressing 61 Critical vulnerabilities. 96% of the Critical issues had updates available to address them the same or next day after the issue was public.

    For Red Hat Enterprise Linux, server installations will generally be affected by far fewer Critical vulnerabilities, because most Critical vulnerabilities occur in browsers or browser components. A great way to reduce risk when using our modular products is to make sure you install the right variant, and to review the package set to remove packages you don’t need.

    Vulnerability trending

    The number of vulnerabilities addressed by Red Hat year on year is increasing as a function of new products and versions of products being continually added. However, for any given version of a product we find that the number of vulnerabilities being fixed actually decreases over time. This is influenced by Red Hat backporting security fixes.

    We use the term backporting to describe the action of taking a fix for a security flaw out of the most recent version of an upstream software package and applying that fix to an older version of the package we distribute. Backporting is common among vendors like Red Hat and is essential to ensuring we can deploy automated updates to customers with minimal risk.

    The trends can be investigated using our public data, and from time to time we do Risk Reports that delve into a given product and version. For example see our Red Hat Enterprise Linux 6.5 to 6.6 Risk Report.

    What issues really mattered in 2015

    In 2014, the OpenSSL Heartbleed vulnerability started a trend of branding vulnerabilities, changing the way security vulnerabilities affecting open source software were reported and perceived. Vulnerabilities are found and fixed all of the time, and just because a vulnerability gets a catchy name, fancy logo, or media attention doesn’t mean it’s of real risk to users.

    So let’s take a chronological tour through 2015 to see which issues got branded or media attention, but more importantly which issues actually mattered for Red Hat customers.

    “Ghost” (January 2015) CVE-2015-0235

    A bug was found affecting certain function calls in the glibc library. A remote attacker that was able to make an application call to an affected function could execute arbitrary code. While a proof of concept exploit is available, as is a Metasploit module targeting Exim, not many applications were found to be vulnerable in a way that would have allowed remote exploitation.

    Red Hat Enterprise Linux versions were affected. This was given Critical impact, and updates were available the same day the issue was public. This issue was given enhanced coverage in the Red Hat Customer Portal, with a banner on all pages and a customer outreach email campaign.

    “Freak” (March 2015) CVE-2015-0204

    A flaw was found in the OpenSSL cryptography library where clients accepted EXPORT-grade (insecure) keys even when the client had not initially asked for them. This could have been exploited using a man-in-the-middle attack to downgrade the connection to a weak key, factor the key, and then decrypt communication between the client and the server. Like the branded OpenSSL issues from 2014 such as Poodle and CCS Injection, this issue is hard to exploit as it requires a man-in-the-middle attack. We’re therefore not aware of active exploitation of this issue.

    Red Hat Enterprise Linux versions were affected. This was given Moderate impact, and updates were available within a few weeks of the issue being public.

    ABRT (April 2015) CVE-2015-3315

    ABRT (Automatic Bug Reporting Tool) is a tool to help users detect defects in applications and create a bug report. ABRT was vulnerable to multiple race condition and symbolic link flaws. A local attacker could have used these flaws to potentially escalate their privileges on an affected system to root.

    This issue affected Red Hat Enterprise Linux 7. This was given Important impact, and updates were made available. Other products and versions of Red Hat Enterprise Linux were either not affected, or not vulnerable to privilege escalation. A working public exploit is available for this issue.

    JBoss Operations Network open APIs (April 2015) CVE-2015-0297

    Red Hat JBoss Operations Network is a middleware management solution that provides a single point of control to deploy, manage, and monitor JBoss Enterprise Middleware, applications, and services. The JBoss Operations Network server did not correctly restrict access to certain remote APIs which could have allowed a remote, unauthenticated attacker to execute arbitrary Java methods. We’re not aware of active exploitation of this issue.

    This issue affected versions of JBoss Operations Network. It was given Critical impact, and updates were made available within a week of the issue being public.

    “Venom” (May 2015) CVE-2015-3456

    Venom was a branded flaw which affected QEMU. A privileged user of a guest virtual machine could use this flaw to crash the guest or, potentially, execute arbitrary code on the host with the privileges of the host’s QEMU process corresponding to the guest.

    A number of Red Hat products were affected and updates were released the same day as the issue was public. Red Hat products by default would block arbitrary code execution as SELinux sVirt protection confines each QEMU process. We therefore are not aware of any exploitation of this issue.

    This issue was given enhanced coverage in the Red Hat Customer Portal, with a banner on all pages and a customer outreach email campaign.

    “Logjam” (May 2015) CVE-2015-4000

    TLS connections using the Diffie-Hellman key exchange protocol were found to be vulnerable to an attack in which a man-in-the-middle attacker could downgrade vulnerable TLS connections to weak cryptography which could then be broken to decrypt the connection.

    This issue affected various cryptographic libraries across several Red Hat products. It was rated Moderate impact and updates were made available.

    Like Poodle and Freak, this issue is hard to exploit as it requires a man-in-the-middle attack. We’re not aware of active exploitation of this issue.

    libuser privilege escalation (July 2015) CVE-2015-3246

    The libuser library implements an interface for manipulating and administering user and group accounts. Flaws in libuser could allow authenticated local users with shell access to escalate privileges to root.

    Red Hat Enterprise Linux 6 and 7 were affected. This issue was rated Important impact, and updates were made available the same day as the issue was made public. Red Hat Enterprise Linux 5 was affected and a mitigation was published. A public exploit exists for this issue.

    BIND DoS (July 2015) CVE-2015-5477

    A flaw in the Berkeley Internet Name Domain (BIND) allowed a remote attacker to cause named (functioning as an authoritative DNS server or a DNS resolver) to exit, causing a denial of service against BIND.

    This issue affected the versions of BIND shipped with all versions of Red Hat Enterprise Linux. This issue was rated Important impact, and updates were available the same day as the issue was made public. A public exploit and a Metasploit module exist for this issue.

    Several other similar flaws in BIND leading to denial of service were found and addressed through the year, such as CVE-2015-8704, CVE-2015-8000, and CVE-2015-5722. Public exploits exist for some of these issues.

    Firefox local file stealing via PDF reader (August 2015) CVE-2015-4495

    A flaw in Mozilla Firefox could allow an attacker to access local files with the permissions of the user running Firefox. Public exploits exist for this issue, including one that is part of Metasploit and specifically targets Linux systems.

    This issue affected Firefox as shipped with versions of Red Hat Enterprise Linux. It was rated Important impact, and updates were available the day after the issue was public.

    Firefox add-on permission warning (August 2015) CVE-2015-4498

    Mozilla Firefox normally warns a user when trying to install an add-on if initiated by a web page. A flaw allowed this dialog to be bypassed. We’re not aware that this issue has been exploited.

    This issue affected Firefox shipped with Red Hat Enterprise Linux versions. It was rated Important impact, and updates were available the same day as the issue was public.

    Java Deserialization (November 2015) CVE-2015-7501

    An issue was found in Java Object Serialization affecting the JMXInvokerServlet interface. This could lead to arbitrary code execution when Java objects from untrusted sources are deserialized while the Apache commons-collections library, which contains certain risky classes, is on the classpath.

    This issue impacted many products in the JBoss Middleware suite and updates were made available in November and the following months. Direct exploitation of this vulnerability requires some means of getting an application to accept an object containing one of the risky classes.

    Grub2 password bypass (December 2015) CVE-2015-8370

    A flaw was found in the way grub2 handled backspace characters entered at username and password prompts. An attacker with access to the system console could use this flaw to bypass grub2 password protection.

    This issue only affected Red Hat Enterprise Linux 7. It was rated Moderate severity, and updates were made available within a week. Steps on how to exploit this issue are public.

    Various flaws in software in supplementary channels (various dates)

    Red Hat provides some packages which are not open source software in supplementary channels for users of Red Hat Enterprise Linux. This channel contains software such as Adobe Flash Player, IBM Java, Oracle Java, and Chromium browser.

    A large number of Critical flaws affected these packages. For example, for Adobe Flash Player in 2015 we issued 15 Critical advisories to address nearly 300 Critical vulnerabilities. Linux exploits exist for some of these Critical vulnerabilities, five of which have Metasploit modules. As these projects release security updates, we ship appropriate updated packages to customers.

    The issues examined in this section were included because they were meaningful. This includes the issues that are of high severity and likely to be exploited (or already have a public working exploit), as well as issues that were highly visible or branded (with a name or logo or enhanced media attention), regardless of their severity. See the Venn diagram below for our opinion on the intersection.

    Lower risk issues with increased customer attention

    Another way we gauge the level of customer concern around an issue is to measure web traffic, specifically how many page views each of the vulnerability (CVE) pages gets in the Red Hat Customer Portal.

    The graph above gives an indication of customer interest in given vulnerabilities. Many of the top issues were highlighted earlier in this report. Of the rest, the top viewed issues were ones predominantly affecting Red Hat Enterprise Linux:

    • A flaw in Samba, CVE-2015-0240, where a remote attacker could potentially execute arbitrary code as root. Samba servers are likely to be internal and not exposed to the internet, limiting the attack surface. No exploits that lead to code execution are known to exist, and some analyses have shown that creation of such a working exploit is unlikely.
    • Various flaws in OpenSSL. After high profile issues such as Heartbleed and Poodle in previous years, OpenSSL issues tend to get increased customer interest independent of the actual severity or risk.
    • Two flaws in OpenSSH: CVE-2015-5600, which did not affect Red Hat products in a default configuration and was rated Low impact, and CVE-2015-5352, which affected some versions of Red Hat Enterprise Linux but at Moderate impact.
    • Two flaws in the Red Hat Enterprise Linux kernel, both rated Important impact: CVE-2015-1805, which could allow a local attacker to escalate their privileges to root, and CVE-2015-5364, where a remote attacker able to send UDP packets to a listening server could cause it to crash. We are not aware of public exploits for either issue.
    • A Moderate rated flaw in the Apache web server, CVE-2015-3183, which could lead to request smuggling attacks through a proxy. We are not aware of a public exploit for this issue.

    The open source supply chain

    Red Hat products are based on open source software. Some Red Hat products contain several thousand individual packages, each of which is based on separate, third-party, software from upstream. While Red Hat engineers play a part in many upstream components, handling and managing vulnerabilities across thousands of third-party components is non-trivial.

    Red Hat has a dedicated Product Security team that monitors issues affecting Red Hat products and works closely with upstream projects. In 2015, more than 2000 vulnerabilities that potentially affected parts of our products were investigated, leading to fixes for 1363 vulnerabilities.

    Every one of those 2000+ vulnerabilities is tracked in the Red Hat Bugzilla tool and is publicly accessible. Each vulnerability has a master bug including the CVE name as an alias and a “whiteboard” field which contains a comma separated list of metadata. The metadata we publish includes the dates we found out about the issue, the severity, and the source. We also summarise this in a file containing all of the information gathered for every CVE, as well as a readable entry in the CVE database in the Red Hat Customer Portal.

    For example, for CVE-2015-0297 mentioned above:

    Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2015-0297

    Whiteboard: impact=critical,public=20150414,reported=20150220,
    source=customer,cvss2=7.5/AV:N/AC:L/Au:N/C:P/I:P/A:P,
    cwe=CWE-306,jon-3/Security=affected

    This example shows us the issue was reported to Red Hat Product Security by a customer on February 20, 2015, the issue became known to the public on April 14, 2015, and it affected the JBoss Operations Network 3 product. An automated comment in the Bugzilla shows an errata was released to address this on April 21, 2015 as RHSA-2015:0862.
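
    As a small illustration (not an official Red Hat tool), the comma-separated whiteboard metadata shown above can be split into key/value pairs with a few lines of Python:

        # Split the Bugzilla whiteboard metadata into key/value pairs.
        whiteboard = ("impact=critical,public=20150414,reported=20150220,"
                      "source=customer,cvss2=7.5/AV:N/AC:L/Au:N/C:P/I:P/A:P,"
                      "cwe=CWE-306,jon-3/Security=affected")

        metadata = dict(field.split("=", 1) for field in whiteboard.split(","))

        print(metadata["impact"])    # critical
        print(metadata["public"])    # 20150414 -- the date the issue became public
        print(metadata["source"])    # customer -- how the issue was reported to us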

    Issues that are not yet public still get an entry in Bugzilla, but they are initially private to Red Hat. Once an issue becomes public, the associated Bugzilla is updated and made public.

    We make use of this data to create metrics and spot trends. One interesting metric is to look at how vulnerabilities are reported to us. We can do this by looking at the whiteboard “source” data to see how we found out about all the issues we fixed in 2015. This is shown on the chart below.

    Key:

    • Internet: for issues not disclosed in advance, we monitor a number of mailing lists and security web pages of upstream projects.
    • Relationship: issues reported to us by upstream projects, generally in advance of public disclosure.
    • Red Hat: issues found by Red Hat employees.
    • Individual: issues reported to Red Hat Product Security directly by a customer or researcher.
    • Peer vendors: issues reported to us by other open source distributions, through relationships or a shared private forum.
    • CVE: if we haven’t found out about an issue any other way, we can catch it from the list of public assigned CVE names from Mitre.
    • CERT: issues reported to us from a national Computer Emergency Response Team like CERT/CC or CPNI.

    We can make some observations from this data. First, Red Hat employees find a lot of the vulnerabilities we fix. We don’t take a passive role and wait for others to find flaws for us to fix; we actively look for issues ourselves, and these are found by engineering, quality assurance, and our security teams. 12% of the issues we fixed in the year were found by Red Hat employees. The issues we find are shared back upstream and, if they are risky, disclosed under embargo to other peer vendors (generally via the ‘distros’ shared private forum). In addition to those 167 issues, Red Hat also finds and reports flaws in software that isn’t part of a current shipped product or that affects other vendors’ software.

    Next, relationships matter. When you are fixing vulnerabilities in third-party software, having a relationship with the upstream community makes a big difference. Red Hat Product Security are often asked how to get notified of issues in open source software in advance, but there is no single place you can go to get notifications. If an upstream is willing to give information about flaws in advance, then you should also be willing to give value back to that notification, making it a two-way street. At Red Hat we do this by sanity checking draft advisories, checking patches, and feeding back the results from our quality testing when there is enough time. A good example of this is the OpenSSL CCS Injection flaw in 2014. Our relationship with OpenSSL gave us advance notice of the issue. We found a mistake in the advisory as well as a mistake in the patch, which otherwise would have caused OpenSSL to have to do a secondary fix after release. Only two of the dozens of companies pre-notified about those OpenSSL vulnerabilities noticed issues and fed back information to upstream.

    Finally, it’s non-trivial to replicate this yourself. If you are an organization that uses open source software that you manage yourself, then you need to ensure you are able to find out about vulnerabilities that affect those components so you can analyse and remediate. Vendors without a sizable dedicated security team have to watch what other vendors do, or rely on other vulnerability feeds such as the list of assigned CVE names from Mitre. Red Hat chooses to invest in a dedicated team handling vulnerability notifications to ensure we find out about issues that affect our products and build upstream relationships.

    Embargo and release timings

    Vulnerabilities known to Red Hat in advance of being public are known as being “under embargo”, mirroring the way journalists use the term for stories under a press embargo which are not to be made public until an agreed date and time.

    The component parts that make up Red Hat products are open source, and this means we’re in most cases not the only vendor shipping each particular part. Unlike companies shipping proprietary software, Red Hat therefore is not in sole control of the date each flaw is made public. This is actually a good thing and leads to much shorter response times between flaws being first reported to being made public. It also keeps us honest; Red Hat can’t play games to artificially reduce our “days of risk” statistics by using tactics such as holding off public disclosure of meaningful flaws for a long period, or until some regularly scheduled patch day.

    Shorter embargo periods also make flaws much less valuable to attackers; they know a flaw in open source is likely to get fixed quickly, shortening their window of opportunity to exploit it.

    For the issues found by Red Hat, we choose to only embargo the issues that really matter and even then we use embargoes sparingly. Bringing in additional security experts, who would not normally be aware due to the embargo, rather than just the original researcher and the upstream project, increases the chances of the issue being properly understood and patched the first time around. For the majority of lower severity issues, attackers have little to no interest in them. By definition, these are issues that lead to minimal consequences even if they are exploitable, so the cost of embargoes is not justified. If we do choose to embargo an issue due to the severity, we share the details with the relevant upstream developers as well as other peer vendors, working together to address the issues. We talk about this more in our blog post "The hidden costs of embargos".

    For 2015, we knew about 438 (32%) of the vulnerabilities we addressed in advance of them being public. Across all products and vulnerabilities of all severities known to us in advance, the median embargo was 13 days.

    There are many positives to releasing fixes quickly for issues that matter, but the drawback of not having a regular patch day is that you need to respond to more issues as they happen. We do help suggest embargo dates that avoid weekends and major holidays, so let’s look at how well that works in practice.

    The chart above shows a heat-map for 2015 with the days and times we push most issues for Critical and Important advisories for all Red Hat products. The more advisories pushed for a given date and hour, the darker that section of the heat-map.

    The most popular times we pushed advisories can be seen as Tuesdays 11 a.m. to 2 p.m. EST and Thursdays 9 a.m. to 3 p.m. EST. Fridays are pretty light for pushes. There were no Saturday pushes. The only Sunday pushes were ones arranged to arrive first thing Monday morning (these are usually pushed during Monday in India or Europe time zones).

    Conclusion

    This report looked at the security risk to users of Red Hat products in 2015 by giving metrics around vulnerabilities, highlighting those that were the most severe, looking at the threat from those that were exploited, and showing which were branded or gained media attention.

    There are other types of security risks, such as malware or ransomware, that we haven’t covered in this report. They rely on an attacker having access to a system through an intrusion or by exploiting a vulnerability.

    For the last year of vulnerabilities affecting Red Hat products the issues that matter and the issues that got branded do have an overlap, but they certainly don’t closely match. Just because an issue gets given a name, a logo, or press attention does not mean it’s of increased risk. We’ve also shown there were some vulnerabilities of increased risk that did not get branded or media attention at all.

    At Red Hat, our dedicated Product Security team analyses threats and vulnerabilities against all of our products every day, and provides relevant advice and updates through the Red Hat Customer Portal. Customers can call on this expertise to ensure that they respond quickly to address the issues that matter, while avoiding being caught up in a media whirlwind for those that don’t.

    Appendix: Common security abbreviations and terms

    Acronyms are used extensively in security standards, so here are some of the more common terms and abbreviations you’ll see used by Red Hat relating to vulnerability handling and errata. You can find more in this blog post.

    CVE:

    The Common Vulnerabilities and Exposures (CVE) project is a list of standardized names for vulnerabilities and security exposures.

    Since November 2001 Red Hat has used CVE names in security advisories to describe all vulnerabilities affecting Red Hat products. Red Hat has CVE Editorial Board membership and is a Candidate Naming Authority. We have a public CVE compatibility page and provide a CVE database in the Red Hat Customer Portal.

    CVRF:

    The goal of the Common Vulnerability Reporting Framework (CVRF) is to provide a way to share information about security updates in an XML machine-readable format.

    Since 2012, Red Hat has provided CVRF representations of Red Hat Security Advisories, and details can be found on this page.

    OVAL:

    The Open Vulnerability and Assessment Language (OVAL) project promotes open and publicly available security content, and seeks to standardize the transfer of this information across the entire spectrum of security tools and services.

    Since 2006, Red Hat has been providing machine-readable XML versions of our Red Hat Enterprise Linux security advisories as OVAL definitions. Our OVAL definitions are designed for use by automated test tools to determine the patch state of a machine.

    Red Hat provides OVAL patch definitions for security updates to Red Hat Enterprise Linux 4, 5, 6, and 7. The first OVAL-compatible version was Red Hat Enterprise Linux 3, for which OVAL patch definitions continue to be available for download. For more information read this page.

    RHSA:

    Since 1999, all Red Hat security updates have been accompanied by a security advisory (RHSA). The advisories are publicly available via the Red Hat Customer Portal as well as through other notification methods such as email. These are sometimes also referred to as security errata. The other advisory types are Red Hat Bugfix Advisory (RHBA) and Red Hat Enhancement Advisory (RHEA).

    CVSS:

    Common Vulnerability Scoring System (CVSS) base scores give a detailed severity rating by scoring the constant aspects of a vulnerability: Access Vector, Access Complexity, Authentication, Confidentiality, Integrity, and Availability.

    Since 2009, Red Hat has provided CVSS version 2 base metrics for all vulnerabilities affecting Red Hat products. These scores are found on the CVE pages (linked to from the References section of each Red Hat Security Advisory) and also on our Security Measurements page.

    CVSS scores are not used by Red Hat to determine the priority with which flaws are fixed. They are used as a guideline to identify key metrics of a flaw, but the priority with which flaws are fixed is determined by the overall impact of the flaw using the aforementioned 4 point scale.

    CWE:

    Common Weakness Enumeration (CWE) is a dictionary or formal list of common software weaknesses. It is a common language or taxonomy for describing vulnerabilities and weaknesses; a standard measurement for software assurance tools and services’ capabilities; and a base for software vulnerability and weakness identification, mitigation, and prevention.
    The Red Hat Customer Portal is officially CWE Compatible.

    CPE:

    Common Platform Enumeration (CPE) is a structured naming scheme for information technology systems, software, and packages. For reference, we provide a dictionary mapping the CPE names we use to Red Hat product descriptions. Some of these CPE names are for new products that are not in the official CPE dictionary, and should therefore be treated as temporary CPE names.

    Posted: 2016-04-20T13:30:00+00:00
  • Security risks with higher level languages in middleware products

    Authored by: Pavel Polischouk

    Java-based high-level application-specific languages provide significant flexibility when using middleware products such as BRMS. This flexibility comes at a price, as there are significant security concerns in their use. In this article we look at the use of the Drools language and MVEL in JBoss BRMS to demonstrate some of these concerns. Other middleware products might be exposed to similar risks.

    Java is an extremely feature-rich portable language that is used to build a great range of software products, from desktop GUI applications to smartphone apps to dedicated UIs for hardware such as printers to a breadth of server-side products, mostly middleware. As such, Java is a general-purpose language with a rather steep learning curve and strict syntax, well suited for complex software projects but not very friendly for writing simple scripts and one-liner pieces of code.

    Several efforts emerged over time to introduce Java-based or Java-like scripting functionality, probably the most famous being JavaScript: a language that is not based on Java but appears similar to it in several aspects, and is well suited for scripting.

    Another example of a simplified language based on Java is MVEL. MVEL is an expression language, mostly used for making basic logic available in application-specific languages and configuration files, such as XML. It is not intended for serious object-oriented programming, but mainly for simple expressions such as "user.getManager().getName() != null". However simple, MVEL is still very powerful and allows the use of any Java APIs available to the developer; its strength is its simplified syntax, which doesn’t restrict the ability to call any code the developer may need access to.

    Yet another example of a Java-based application-specific language is the Drools rules language. It is used in JBoss BRMS, a middleware product that implements Business Rules, and its open source counterpart Drools. There is a similar idea behind the Drools language: to hide all the clutter of Java from the rules developer and provide a framework that makes it easy to concentrate on the task at hand (namely: writing business rules) without compromising the ability to call any custom Java code that might be needed to properly implement the organisation's business logic.

    Developers of Java middleware products, starting with application servers and continuing to more complex application frameworks, traditionally invest a great deal of effort into making their products secure. This includes: separating the product’s functionality into several areas with different levels of risk; applying user roles to each of these areas; authorization and authentication of users; audit of the available functionality to evaluate the risk of using this functionality for unintended tasks that could potentially lead to compromises, etc. In this context it is important to understand that the same rich feature set and versatility that makes Java so attractive as a developer platform also becomes its Achilles heel when it comes to security: every so often one of these features finds its way into some method of unintended use or another.

    In this article I will look at one such case, where a very flexible feature was added to one of the middleware products and later discovered to have unsafe consequences, and at the methods used to patch it.

    JBoss BRMS, mentioned above, had a role-based security model from the very beginning. Certain roles would allow deployment of new rules and certain development processes would normally be established to allow proper code review prior to deployment. These combined together would ensure that only safe code is ever deployed on the server.

    This changed in BRMS (and BPMS) 6. A new WYSIWYG tool was introduced that allowed rules to be constructed graphically in a browser session and tested right away, so any person with rule authoring permissions (the role known as "analyst" rather than "admin") could do this. Drools rules allow arbitrary MVEL expressions, which in turn allow calls to any Java classes deployed on the application server without restriction, including the system ones. As an example, an analyst would be able to write System.exit() in a rule, and testing this rule would shut down the server. Basically, the graphical rule editor allowed authenticated arbitrary code execution for non-admin users.

    A similar problem existed in JBoss Fuse Service Works 6. The Drools engine that ships with it does not come with a graphical tool to author rules, so rules must be deployed on the server as before, but it does come with the RTGov component, which exposes some MVEL interfaces. Sending an RTGov request with an MVEL expression in it would again allow authenticated arbitrary code execution for any user with RTGov permissions.

    This behaviour was caught early on in the development cycle for BxMS/FSW version 6, and a fix was implemented. The fix involves running the application server with Java Security Manager (JSM) turned on, and enabling specific security policies for user-provided code. After the fix was applied, only a limited number of Java instructions were allowed to be used inside user-provided expressions, which were safe for use in legitimate Drools rules and RTGov interfaces, and the specific RCE (Remote Code Execution) vulnerability was considered solved. Essentially a similar security approach was taken as for running Java applets in a sandbox within a browser, where an applet can only use a safe subset of the Java library.

    Some side effects were detected when the products went into testing with the fix applied and performance regression testing was executed. It was discovered that certain tests ran significantly slower with JSM enabled than on an unsecured machine. This slowdown was significant only in those tests that measured the raw performance of rules rather than “real-world” scenarios, since any kind of database access would slow down the overall performance far more than enabling the JSM. However, certain guidelines were developed in order to help customers achieve the best possible balance of speed and security.

    When deploying BRMS/BPMS on a high-performance production server, it is possible to disable JSM, provided that no "analyst"-role users are allowed to use these systems for rule development. It is recommended to use such servers for running rules and applications developed separately, achieving maximum performance while eliminating the vulnerability by disallowing rule development altogether, which removes the whole attack vector.

    When BRMS is deployed on development servers used by rule developers and analysts, it is suggested to run these servers with JSM enabled. Since these are not production servers, they do not require mission critical performance in processing real-time customer data, as they are only used for application and rule development. As such, the overhead of the JSM is not noticeable on a non mission-critical server and it is a fair trade-off for a tighter security model.

    When a server is deployed in a "BRMS-as-a-service" configuration, or in other words when rule development is exposed to customers over the Web (even through a VPN-protected extranet), enabling the complete JSM protection is the recommended approach, accepting the JSM overhead. Without it, any customer with minimal "rule writing and testing" privileges could completely take over the server (and any other co-hosted customers' data as well), a very undesirable situation.

    Similar solutions are recommended for FSW. Since only RTGov exposes the weakness, it is recommended to run RTGov as a separate server with JSM enabled. For high performance production servers, it is recommended not to install or enable the RTGov component, which eliminates the risk of exposure of user-provided code-based attack vectors, making it possible to run them without JSM at full speed.

    This kind of concern is not specific to JBoss products but is a generic problem potentially affecting any middleware system. Any time rich functionality is made available to users, some of it may be used for malicious purposes. Red Hat takes the security of its customers very seriously, and every effort is made not only to provide customers with the richest functionality, but also to make sure this functionality is safe to use and that proper safe-usage guidelines are available.

    Posted: 2016-03-23T13:30:00+00:00
  • Go home SSLv2, you’re DROWNing

    Authored by: Mark J. Cox

    The SSLv2 protocol had its 21st birthday last month, but it’s no cause to celebrate with an alcoholic beverage, since the protocol was already deprecated when it turned 18.

    Announced today is an attack called DROWN that takes advantage of systems still using SSLv2.

    Many cryptographic libraries already disable SSLv2 by default, and updates from the OpenSSL project and Red Hat today catch up.

    What is DROWN?

    CVE-2016-0800, also known as DROWN, stands for Decrypting RSA using Obsolete and Weakened eNcryption and is a Man-in-the-Middle (MITM) attack against servers running TLS for secure communications.

    This means that if an attacker can intercept and modify network traffic between a client and the host, the attacker could impersonate the server on what is expected to be a secure connection. The attacker could then potentially eavesdrop or modify important information as it is transferred between the server and client.

    Other Man-in-the-Middle attacks have included POODLE and FREAK. The famous OpenSSL Heartbleed issue from April 2014 did not need a Man-in-the-Middle and was therefore a much more severe risk.

    How does it work?

    The DROWN issue is technically complicated, and the ability to attack using it depends on a number of factors described in more detail in the researchers’ whitepaper. In short, the issue uses a protocol issue in SSLv2 as an oracle in order to help break the encryption on other TLS services if a shared RSA key is in use. The issue is actually quite tricky to exploit by itself, but made easier on servers that are not up to date with some previous year-old OpenSSL security updates. They call this “Special DROWN”, as it could allow a real-time Man-in-the-Middle attack.

    Red Hat has a vulnerability article in the Customer Portal which explains the technical attack and the dependencies in more detail.

    How is Red Hat affected?

    OpenSSL is affected by this issue. In Red Hat Enterprise Linux, the cryptographic libraries GnuTLS and NSS are not affected by this issue as they intentionally do not enable SSLv2.

    Customers who are running services that have the SSLv2 protocol enabled could be affected by this issue.

    Red Hat has rated this issue as having Important security severity. A successful attack would need to be able to leverage a number of conditions and require an attacker to be a Man-in-the-Middle.

    Red Hat advises that SSLv2 is a protocol that should no longer be considered safe and should not be used in a modern environment. Red Hat updates for OpenSSL can be found here: https://access.redhat.com/security/cve/cve-2016-0800. The updates cause the SSLv2 protocol to be disabled by default.

    Our OpenSSL updates also include several other lower priority security fixes which are each described in the Errata. Your organization should review those issues as well when assessing risk.

    If you are a Red Hat Insights customer, a test has been added to identify servers affected by this issue.

    What do you need to do?

    If you are unsure of any details surrounding this issue in your environment, you should apply the update and restart services as appropriate. For detailed technical information please see the Red Hat vulnerability article.

    Security protocols don’t turn 21 every day, so let’s turn off SSLv2, raise a glass, and DROWN our sorrows. Cheers!

    Posted: 2016-03-01T13:00:00+00:00
  • Primes, parameters and moduli

    Authored by: rhn-support-kseifrie

    First a brief history of Diffie-Hellman for those not familiar with it

    The short version of Diffie-Hellman is that two parties (Alice and Bob) want to share a secret so they can encrypt their communications and talk securely without an eavesdropper (Eve) listening in. So Alice and Bob first agree on a public prime modulus and a base, or generator (both of which Eve can see). Alice and Bob then each choose a large random number (referred to as their private key) and apply some modular arithmetic using the shared prime modulus and base. If everything goes as planned, Alice sends her answer to Bob, and Bob sends his answer to Alice. They each take the number sent by the other party and, using modular arithmetic and their respective private keys, derive a number that will be the same for both of them, known as the shared secret. Even if Eve listens in she cannot easily derive the shared secret.
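
    As a toy illustration of the exchange described above, the following Python sketch uses deliberately tiny, insecure numbers (p = 23, g = 5); real deployments use prime moduli of 2048 bits or more:

        p, g = 23, 5                  # public prime modulus and base, visible to Eve

        a = 6                         # Alice's private key (randomly chosen in practice)
        b = 15                        # Bob's private key   (randomly chosen in practice)

        A = pow(g, a, p)              # Alice sends A = g^a mod p to Bob
        B = pow(g, b, p)              # Bob sends   B = g^b mod p to Alice

        alice_secret = pow(B, a, p)   # Alice computes B^a mod p
        bob_secret = pow(A, b, p)     # Bob computes   A^b mod p

        print(alice_secret, bob_secret)   # 2 2 -- the shared secret Eve cannot easily derive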

    However, if Eve has sufficiently advanced cryptographic experts and sufficient computing power, it is conceivable that she can derive the private keys for exchanges in which a sufficiently small modulus is used. Most keys today are 1024 bits or larger, meaning that the modulus is at least several hundred digits long.

    Essentially, if Alice and Bob agree on a poorly chosen modulus then Eve will have a much easier time deriving the secret keys and listening in on their conversation. Poor choices of modulus include numbers that are not actually prime, and moduli that aren’t sufficiently large (e.g. a 1024 bit modulus provides vastly less protection than a 2048 bit modulus).

    Why you need a good prime for your modulus

    A prime number is needed for your modulus. For this example we’ll use 23. Is 23 prime? 23 is small enough that you can easily walk through all the candidate factors (2 through 11), divide 23 by each of them, and see if there is a remainder. But much larger candidate numbers, such as ones that are in the mid to high hundreds of digits long, are essentially impossible to check this way unless you have a lot of computational resources and some really efficient factoring methods. Fortunately there is a simple solution: just use an even larger prime number, such as a 2048, 4096 or even 16384 bit prime. But when picking such a number how can you be sure it’s a prime and not easily factored? Ignoring the obvious give-aways (like all numbers ending in 0, 2, 4, 5, 6 and 8), there are several clever mathematical algorithms for testing the primality of numbers and for generating prime numbers.
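
    For a number as small as 23, that walk through candidate divisors can be written directly; the point of the Python sketch below (illustrative only) is that this approach stops being feasible long before the sizes used for real moduli:

        def is_prime_by_trial_division(n):
            """Try every candidate divisor from 2 to n - 1.

            Fine for a number like 23; hopeless for the hundreds-of-digits
            primes used as Diffie-Hellman moduli."""
            for d in range(2, n):
                if n % d == 0:
                    return False
            return n >= 2

        print(is_prime_by_trial_division(23))   # True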

    Miller-Rabin, Shawe-Taylor and FIPS 186-4

    The Miller-Rabin primality test was first proposed in 1976, and Shawe-Taylor strong prime generation was first proposed in 1986. One thing that is important to remember is that back when these algorithms were made public, the amount of computing power available to generate or factor numbers was much smaller than is now available. The Miller-Rabin test is a probabilistic primality test: you cannot conclusively prove a number is prime, but by running the test multiple times with different parameters you can be reasonably certain that the number in question is probably prime, and with enough tests your confidence can approach almost 100%. Shawe-Taylor is also probabilistic: you're not 100% guaranteed to get a good prime, but the chances of something going wrong and getting a non-prime number are very small.

    FIPS 186-4 covers the math and usage of both Miller-Rabin and Shawe-Taylor, and gives specific information on how to use them securely (e.g. how many rounds of Miller-Rabin you’ll need to use). The main difference between Miller-Rabin and Shawe-Taylor is that Shawe-Taylor generates something that is probably a prime, whereas with Miller-Rabin you generate a number that might be prime, and then test it. As such you may immediately generate a good number, or it may take several tries. In testing on a 3 GHz CPU, using a single core, it took me between less than a second and over 10 minutes to generate a 2048 bit prime using the Miller-Rabin method.
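
    For illustration, here is a minimal Python sketch of the Miller-Rabin idea. It is not a substitute for the FIPS 186-4 procedures, which specify the required number of rounds and other parameters:

        import random

        def is_probable_prime(n, rounds=40):
            """Miller-Rabin probabilistic primality test (illustrative sketch)."""
            if n < 2:
                return False
            for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
                if n % p == 0:
                    return n == p
            # Write n - 1 as d * 2**r with d odd.
            d, r = n - 1, 0
            while d % 2 == 0:
                d //= 2
                r += 1
            for _ in range(rounds):
                a = random.randrange(2, n - 1)
                x = pow(a, d, n)              # a**d mod n
                if x in (1, n - 1):
                    continue
                for _ in range(r - 1):
                    x = pow(x, 2, n)
                    if x == n - 1:
                        break
                else:
                    return False              # 'a' witnesses that n is composite
            return True                       # probably prime; each round adds confidence

        print(is_probable_prime(2**127 - 1))  # True (a known Mersenne prime)
        print(is_probable_prime(3599))        # False (59 * 61)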

    Generating primes in advance

    The easiest way to deal with the time and computing resources needed to generate primes is to generate them in advance. Also because they are shared publicly during the exchange you can even distribute them in advance, which is what has happened for example with OpenSSL. Unfortunately many of the primes currently in use were generated a long time ago when the attacks available were not as well understood, and thus are not very large. Additionally, there are relatively few primes in use and it appears that there may be a literal handful of primes in wide use (especially the default ones in OpenSSL and OpenSSH for example). There is now public research to indicate that at least one large organization may have successfully attacked several of these high value prime numbers, and as computational power increases this becomes more and more likely.

    Generating Diffie-Hellman primes in advance is easy, for example with OpenSSL:

    openssl dhparam [bit size] -text > filename
    

    so to generate a 2048 bit prime:

    openssl dhparam 2048 -text
    

    Or for example with OpenSSH you first generate a list of candidate primes and then test them:

    ssh-keygen -G candidates -b 2048
    ssh-keygen -T moduli -f candidates
    

    Please note that OpenSSH uses a list of multiple primes so generation can take some time, especially with larger key sizes.

    Defending - larger primes

    The best defense against someone attacking your primes is to use really large primes. In theory, every single bit you add to the prime increases the workload significantly (assuming no major advances in math or quantum computing that we don’t know about that make the problem much easier). As such, moving from a 1024 bit to a 2048 bit prime is a huge improvement, and moving to something like a 4096 bit prime should be safe for a decade or more (or maybe not, I could be wrong). So why don’t we simply use very large primes? CPU power and battery power are still finite resources, and very large primes take much more computational power to use, so much so that something like a 16384 bit prime becomes impractical, introducing noticeable delays in connections. The best thing we can do here is set a minimum prime size such as 2048 bits now, and hopefully move to 4096 bit primes within the next few years.

    Defending - diversity of primes

    But what happens if you cannot use larger primes? The next best thing is to use custom-generated prime numbers; this means that an attacker will have to attack your specific prime(s), increasing their workload. Please note that even if you can use large primes, prime diversity is still a good idea, but prime diversity increases the amount of work for an attacker at a much slower rate than using larger primes does. The following apps and locations contain primes you may want to replace:

    OpenSSH: /etc/ssh/moduli

    Apache with mod_ssl: “SSLOpenSSLConfCmd DHParameters [filename]”, or append the DH parameters to the SSLCertificateFile

    Some excellent articles have been written on securing a variety of other services and clients.

    Current and future state

    So what should we do?

    I think the best plan for dealing with this in the short term is deploying larger primes (2048 bits minimum, ideally 4096 bits) right now wherever possible. For systems that cannot use larger primes (e.g. some are limited to 768 or 1024 bits, or other similarly small sizes) we should ensure that default primes are not used and custom primes are used instead, ideally for limited periods of time, replacing the primes as often as possible (which is easier since they are small and quick to generate).

    In the medium term we need to ensure as many systems as possible can handle larger prime sizes, and we need to make default primes much larger, or at least provide easy mechanisms (such as firstboot scripts) to replace them.

    Longer term we need to understand the size of primes needed to avoid decryption due to advances in math and quantum computing. We also need to ensure software has manageable entry points for these primes so that they can easily be replaced and rotated as needed.

    Why not huge primes?

    Why not simply use really large primes? Because computation is expensive, battery life matters more than ever, and latency becomes a problem that users will not tolerate. Additionally, finding huge primes (say 16384 bits) is difficult at best for many users and not possible for some (anyone using a system on a chip for example).

    Why not test all the primes?

    Why not test the DH params passed by the remote end, and refuse the connection if the primes used are too small? There is at least one program (wvstreams [wvstreams]) that tests the DH params passed to it, however it does not check for a minimum size, it simply tests the DH params for correctness. The problem with this is twofold: first, there would be a significant performance impact (adding time to each connection), and second, most protocols and programs don’t really support error messages from the remote end related to the DH params, so apart from dropping the connection there is not a lot you can do.
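
    As an illustration of the check described above, a minimal sketch in Python might enforce only a minimum prime size and leave the expensive correctness tests out of the connection path. The names and the threshold below are illustrative assumptions, not taken from any particular program:

    MIN_DH_BITS = 2048

    def dh_params_acceptable(p: int, g: int) -> bool:
        """Cheap sanity check on a peer's DH parameters (illustrative only)."""
        if p.bit_length() < MIN_DH_BITS:
            return False                # prime too small to be considered safe
        if g < 2 or g > p - 2:
            return False                # generator outside the sensible range
        return True

    # A full correctness test (such as a primality check on p) would add the
    # per-connection cost discussed above, which is why few programs do it.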

    Summary

    As bad as things sound there is some good news. Fixing this issue is pretty trivial, and mostly requires some simple operational changes such as using moderately sized DH parameters (e.g. 2048 bits now, 4096 within a few years). The second main fix for this issue is to ensure any software in use that handles DH parameters can handle larger key sizes; if this is not possible then you will need to place a proxy in front that can handle proper key sizes (so all your old Java apps will need proxies). This also has the benefit of decoupling client-server encryption from old software, which will allow you to solve future problems more easily as well.

    References

    Posted: 2016-01-20T12:00:00+00:00
  • The SLOTH attack and IKE/IPsec

    Authored by: Paul Wouters

    Executive Summary: The IKE daemons in RHEL7 (libreswan) and RHEL6 (openswan) are not vulnerable to the SLOTH attack. But the attack is still interesting to look at.

    The SLOTH attack released today is a new transcript collision attack against some security protocols that use weak or broken hashes such as MD5 or SHA1. While it mostly focuses on the issues found in TLS, it also mentions weaknesses in the "Internet Key Exchange" (IKE) protocol used for IPsec VPNs. While the TLS findings are very interesting and have been assigned CVE-2015-7575, the described attacks against IKE/IPsec got close but did not result in any vulnerabilities. In the paper, the authors describe a Chosen Prefix collision attack against IKEv2 using RSA-MD5 and RSA-SHA1 to perform a Man-in-the-Middle (MITM) attack and a Generic collision attack against IKEv1 HMAC-MD5.

    We looked at libreswan and openswan-2.6.32 compiled with NSS, as that is what we ship in RHEL7 and RHEL6. Upstream openswan with its custom crypto code was not evaluated. While no vulnerability was found, some hardening that makes this attack less dangerous was identified and will be added in the next upstream version of libreswan.

    Specifically, the attack was prevented because:

    • The SPIs in IKE are random and part of the hash, so it requires an online attack of 2^77 - not an offline attack as suggested in the paper.
    • MD5 is not enabled by default for IKEv2.
    • Weak Diffie-Hellman groups DH22, DH23 and DH24 are not enabled by default.
    • Libreswan as a server does not re-use nonces for multiple clients.
    • Libreswan destroys nonces when an IKE exchange times out (default 60s).
    • Bogus ID payloads in IKEv1 cause the connection to fail authentication.

    The rest of this article explains the IKEv2 protocol and the SLOTH attack.

    The IKEv2 protocol

    The IKE exchange starts with an IKE_INIT packet exchange to perform the Diffie-Hellman Key Exchange. In this exchange, the initiator and responder exchange their nonces. The result of the DH exchange is a shared secret from which both parties derive a value called SKEYSEED. This is fed into a mutually agreed PRF algorithm (which could be MD5, SHA1 or SHA2) to generate as much pseudo-random key material as needed. The first key(s) are for the IKE exchange itself (called the IKE SA or Parent SA), followed by keys for one or more IPsec SAs (also called Child SAs).
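
    As a rough sketch of that derivation (assuming HMAC-SHA256 as the negotiated PRF, with illustrative helper names), RFC 7296 defines SKEYSEED = prf(Ni | Nr, g^ir) and slices the individual keys out of prf+(SKEYSEED, Ni | Nr | SPIi | SPIr):

    import hashlib
    import hmac

    def prf(key: bytes, data: bytes) -> bytes:
        return hmac.new(key, data, hashlib.sha256).digest()

    def prf_plus(key: bytes, seed: bytes, length: int) -> bytes:
        # prf+(K, S) = T1 | T2 | ... where Tn = prf(K, Tn-1 | S | n), per RFC 7296.
        out, t, counter = b"", b"", 1
        while len(out) < length:
            t = prf(key, t + seed + bytes([counter]))
            out += t
            counter += 1
        return out[:length]

    def derive_ike_keys(ni, nr, dh_shared_secret, spi_i, spi_r, total_len):
        skeyseed = prf(ni + nr, dh_shared_secret)
        return prf_plus(skeyseed, ni + nr + spi_i + spi_r, total_len)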

    But before the SKEYSEED can be used, both ends need to perform an authentication step. This is the second packet exchange, called IKE_AUTH. This step binds the Diffie-Hellman channel to an identity to prevent a MITM attack, usually with digital signatures over the session data to prove ownership of the identity's private key. In IKE, the signature covers the session data itself; in TLS, the signature is only over a hash of the session data, which made TLS more vulnerable to the SLOTH attack.

    The attack is to trick both parties into signing a hash that the attacker can replay to the other party, faking the authentication of both entities.

    They call this a "transcript collision". To facilitate the creation of the same hash, the attacker needs to be able to insert its own data in the session to the first party so that the hash of that data will be identical to the hash of the session to the second party. It can then just pass on the signatures without needing to have private keys for the identities of the parties involved. It then needs to remain in the middle to decrypt and re-encrypt and pass on the data, while keeping a copy of the decrypted data.

    The IKEv2 COOKIE

    The initial IKE_INIT exchange does not have many payloads that can be used to manipulate the outcome of the hashing of the session data. The only candidate is the NOTIFY payload of type COOKIE.

    Performing a Diffie-Hellman exchange is relatively expensive. An attacker could send a lot of IKE_INIT requests forcing the VPN server to use up its resources. These could all come from spoofed source IP addresses, so blacklisting such an attack is impossible. To defend against this, IKEv2 introduced the COOKIE mechanism. When the server gets too busy, instead of performing the Diffie-Hellman exchange, it calculates a cookie based on the client's IP address, the client's nonce and its own server secret. It hashes these and sends it as a COOKIE payload in an IKE_INIT reply to the client. It then deletes all the state for this client. If this IKE_INIT exchange was a spoofed request, nothing more will happen. If the request was a legitimate client, this client will receive the IKE_INIT reply, see the COOKIE payload and re-send the original IKE_INIT request, but this time it will include the COOKIE payload it received from the server. Once the server receives this IKE_INIT request with the COOKIE, it will calculate the cookie data (again) and if it matches, the client has proven that it contacted the server before. To avoid COOKIE replays and thwart attacks attempting to brute-force the server secret used for creating the cookies, the server is expected to regularly change its secret.
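
    The construction RFC 7296 suggests for this is roughly Cookie = <VersionIDofSecret> | Hash(Ni | IPi | SPIi | <secret>), which a busy server can later verify without keeping any per-client state. A minimal Python sketch (illustrative names, SHA-256 as the hash) could look like:

    import hashlib

    def make_cookie(secret_version: bytes, secret: bytes,
                    client_nonce: bytes, client_ip: bytes, spi_i: bytes) -> bytes:
        digest = hashlib.sha256(client_nonce + client_ip + spi_i + secret).digest()
        return secret_version + digest

    def cookie_is_valid(cookie: bytes, secret_version: bytes, secret: bytes,
                        client_nonce: bytes, client_ip: bytes, spi_i: bytes) -> bool:
        # Recomputed from the retried IKE_INIT request; no stored state is needed.
        return cookie == make_cookie(secret_version, secret,
                                     client_nonce, client_ip, spi_i)

    Rotating the server secret regularly, as noted above, is what keeps old cookies from being replayed and the secret from being brute-forced.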

    Abusing the COOKIE

    The SLOTH attacker is the MITM between the VPN client and VPN server. It prepares an IKE_INIT request to the VPN server but waits for the VPN client to connect. Once the VPN client connects, it does some work with the received data that includes the proposals and nonce to calculate a malicious COOKIE payload and sends this COOKIE to the VPN client. The VPN client will re-send the IKE_INIT request with the COOKIE to the MITM. The MITM now sends this data to the real VPN server to perform an IKE_INIT there. It includes the COOKIE payload even though the VPN server did not ask for a COOKIE. Why does the VPN server not reject this connection? Well, the IKEv2 RFC-7296 states:

    When one party receives an IKE_SA_INIT request containing a cookie whose contents do not match the value expected, that party MUST ignore the cookie and process the message as if no cookie had been included.

    The intention here was likely to accommodate a recovering server. If the server is no longer busy, it will stop sending cookies and stop requiring cookies. But a few clients that were just about to reconnect will send back the cookie they received when the server was still busy. The server shouldn't reject these clients now, so the advice was to ignore the cookie in that case. Alternatively, the server could just remember the last used secret for a while and, if it receives a cookie when it is not busy, still do the cookie validation. But that costs some resources too, which an attacker can abuse by sending IKE_INIT requests with bogus cookies. Limiting how long cookies are validated after the server stops being busy would mitigate this.

    COOKIE size

    The paper actually pointed out a common implementation error:

    To implement the attack, we must first find a collision between m1 and m'1. We observe that in IKEv2 the length of the cookie is supposed to be at most 64 octets but we found that many implementations allow cookies of up to 2^16 bytes. We can use this flexibility in computing long collisions.

    The text that limits the COOKIE to 64 bytes is hidden deep down in the RFC, where it talks about a special use case. It is not at all clearly defined:

    When a responder detects a large number of half-open IKE SAs, it
    SHOULD reply to IKE_SA_INIT requests with a response containing the
    COOKIE notification. The data associated with this notification MUST
    be between 1 and 64 octets in length (inclusive), and its generation
    is described later in this section. If the IKE_SA_INIT response
    includes the COOKIE notification, the initiator MUST then retry the
    IKE_SA_INIT request, and include the COOKIE notification containing
    the received data as the first payload, and all other payloads
    unchanged.

    A few implementations (including libreswan/openswan) missed this 64 byte limitation. Instead, those implementations only looked at the COOKIE value as a NOTIFY payload. These payloads have a two byte Payload Length field, so NOTIFY data can legitimately be up to 65535 (2^16 - 1) bytes. Libreswan will fix this in the next release and limit the COOKIE to 64 bytes.
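
    The hardened check is a simple bound test; a short sketch of the difference between what the length field allows and what the RFC allows:

    MAX_NOTIFY_DATA = 65535   # what a two-byte Payload Length field can describe
    MAX_COOKIE_LEN = 64       # what RFC 7296 actually permits for a COOKIE

    def accept_cookie(data: bytes) -> bool:
        # Treating the cookie as just another NOTIFY payload would accept anything
        # up to MAX_NOTIFY_DATA bytes; the hardened check enforces the 1-64 octet rule.
        return 1 <= len(data) <= MAX_COOKIE_LEN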

    Attacking the AUTH hash

    Assuming the above works, the attacker needs to find a collision between m1 and m'1. The only case the authors claim could be feasible is when MD5 is used for the authentication step in IKE_AUTH. An offline attack of 2^16 to 2^39 operations could then be computed, which they say would take about 5 hours. As the paper states, IKEv2 implementations either don't support MD5, or if they do it is not part of the default proposal set. It makes a case that the weak SHA1 is widely supported in IKEv2 but admits using SHA1 will need more computing power (they listed 2^61 to 2^67, or 20 years). Note that libreswan (and openswan in RHEL) requires manual configuration to enable MD5 in IKEv2, but SHA1 is still allowed for compatibility.

    The final step of the attack - Diffie-Hellman

    Assuming the above succeeds, the attacker needs to ensure that g^xy' = g^x'y. To facilitate that, they use a subgroup confinement attack, and illustrate this with an example of picking x' = y' = 0, in which case both shared secrets would have the value 1. In practice this does not work, according to the authors, because most IKEv2 implementations validate the received Diffie-Hellman public value to ensure that it is larger than 1 and smaller than p - 1. They did find that Diffie-Hellman groups 22 to 24 are known to have many small subgroups, and implementations tend not to validate these. This led to an interesting discussion on one of the cypherpunks mailing lists about the mysterious nature of the DH groups in RFC-5114. These groups are not enabled in libreswan (or openswan in RHEL) by default, and require manual configuration precisely because their origin is a mystery.
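
    The public value check the authors refer to is simple to express; an illustrative Python sketch:

    def dh_public_value_ok(y: int, p: int) -> bool:
        # Reject 0, 1 and p - 1 (and anything outside the range) so a MITM
        # cannot confine the shared secret to a trivial subgroup.
        return 1 < y < p - 1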

    The IKEv1 attack

    The paper briefly brainstorms about a variant of this attack using IKEv1. It would be interesting because MD5 is very common with IKEv1, but the article is not really clear on how that attack should work. It mentions filling the ID payload with malicious data to trigger the collision, but such an ID would never pass validation.

    Counter measures

    Work has already started on updating the cryptographic algorithms deemed mandatory to implement for IKE. Note that this does not state which algorithms are valid to use, or which to use by default. This work is happening in the IPsec working group at the IETF and can be found at draft-ietf-ipsecme-rfc4307bis. It is expected to go through a few more rounds of discussion, and one of the topics that will be raised is the weak DH groups specified in RFC-5114.

    Upstream Libreswan has hardened its cookie handling code, preventing the attacker from sending an uninvited cookie to the server without having their connection dropped.

    Posted: 2016-01-15T12:00:00+00:00
  • DevOps On The Desktop: Containers Are Software As A Service

    Authored by: Trevor Jay

    It seems that everyone has a metaphor to explain what containers "are". If you want to emphasize the self-contained nature of containers and the way in which they can package a whole operating system's worth of dependencies, you might say that they are like virtual machines. If you want to emphasize the portability of containers and their role as a distribution mechanism, you might say that they are like a platform. If you want to emphasize the dangerous state of container security nowadays, you might say that they are equivalent to root access. Each of these metaphors emphasizes one aspect of what containers "are", and each of these metaphors is correct.

    It is not an exaggeration to say that Red Hat employees have spent man-years clarifying the foggy notion invoked by the buzzword "the cloud". We might understand cloudiness as having three dimensions: (1) irrelevant location, (2) external responsibility, and (3) the abstraction of resources. The different kinds of cloud offerings distinguish themselves from one another by their emphasis on these qualities. The location of the resources that comprise the cloud is one aspect of the metaphor; the abstraction of those resources is another. This understanding was Red Hat's motivation for both its private-platform offerings and its infrastructure-as-a-service offerings (IaaS/PaaS). Though the hardware is self-hosted and administered, developers are still able to think either in terms of pools of generic computational resources that they assign to virtual machines (in the case of IaaS) or in terms of applications (in the case of PaaS).

    What do containers and the cloud have in common? Software distribution. Software that is distributed via container or via statically-linked binary is essentially software-as-a-service (SaaS). The implications of this are far-reaching.

    Given the three major dimensions of cloudiness, what is software as a service? It is a piece of software hosted and administered externally to you that you access mainly through a network layer (either an API or a web interface). With this definition of software as a service, we can declare that 99% of the container-distributed and statically-linked Go software is SaaS that happens to run on your own silicon powered by your own electricity. Despite being run locally, this software is still accessed through a network layer and this software is still---in practice---administered externally.

    A static binary is a black box. A container is modifiable only if it was constructed as an ersatz VM. Even if the container has been constructed as an ersatz VM, it is only as flexible as (1) the underlying distribution in the container and (2) your familiarity with that distribution. Apart from basic networking, the important parts of administration must be handled by a third party: the originating vendor. For most containers, it is the originating vendor that must take responsibility for issues like Heartbleed that might be present in software's underlying dependencies.

    This trend, which shows no signs of slowing down, is a natural extension to the blurring of the distinction between development and operations. The term for this collaboration is one whose definition is even harder to pin down than "cloud": DevOps. The DevOps movement has seen some traditional administration responsibilities---such as handling dependencies---become shared between operational personnel and developers. We have come to expect operations to consume their own bespoke containers and static binaries in order to ensure consistency and to ensure that needed runtime dependencies are always available. But now, a new trend is emerging---operational groups are now embedding the self-contained artifacts of other operational groups into their own stack. Containers and static blobs, as a result, are now emerging as a general software distribution method.

    The security implications are clear. Self-contained software such as containers and static binaries must be judged as much by their vendor's commitments to security as by their feature set because it is that vendor who will be acting as the system administrator. Like when considering the purchase of a phone, the track record for appropriate, timely, and continuous security updates is as important as any feature matrix.

    Some security experts might deride the lack of local control over security that this trend represents. However, that analysis ignores economies of scale and the fact that---by definition---the average system administrator is worse than the best. Just as the semi-centralized hosting of the cloud has allowed smaller businesses to achieve previously impossible reliability for their size, so too does this trend offer the possibility of a better overall security environment.

    Of course, just as the unique economic, regulatory, and feature needs of enterprise customers pushed those customers to private clouds, so too must there be offerings of more customizable containers.

    Red Hat is committed to providing both "private cloud" flexibility and to helping ISVs leverage the decades of investment that we have made in system administration. We release fresh containers at a regular cadence and at the request of our security team. By curating containers in this way, we provide a balance between the containers becoming dangerously out of date and the fragility that naturally occurs when software used within a stack updates "too often". However, just as important is our commitment to all of our containers being updatable in the ways our customers have come to expect from their servers and VMs: yum update for RPM based content, and zips and patches for content such as our popular JBoss products. This means that if you build a system on a RHEL-based container you can let "us" administer it by simply keeping up with the latest container releases or you can take control yourself using tools you already know.

    Sadly, 2016 will probably not be the year of the Linux desktop, but it may well be the year of DevOps on the desktop. In the end, that may be much more exciting.

    Posted: 2015-12-23T12:00:00+00:00
  • Risk report update: April to October 2015

    Authored by: Mark Cox

    In April 2015 we took a look at a year’s worth of branded vulnerabilities, separating out those that mattered from those that didn’t. Six months have passed so let’s take this opportunity to update the report with the new vulnerabilities that mattered across all Red Hat products.

    ABRT (April 2015) CVE-2015-3315:

    ABRT (Automatic Bug Reporting Tool) is a tool that helps users detect defects in applications and create bug reports. ABRT was vulnerable to multiple race condition and symbolic link flaws. A local attacker could use these flaws to potentially escalate their privileges on an affected system to root.

    This issue affected Red Hat Enterprise Linux 7 and updates were made available. A working public exploit is available for this issue. Other products and versions of Enterprise Linux were either not affected or not vulnerable to privilege escalation.

    JBoss Operations Network open APIs (April 2015) CVE-2015-0297:

    Red Hat JBoss Operations Network is a middleware management solution that provides a single point of control to deploy, manage, and monitor JBoss Enterprise Middleware, applications, and services. The JBoss Operations Network server did not correctly restrict access to certain remote APIs which could allow a remote, unauthenticated attacker to execute arbitrary Java methods. We’re not aware of active exploitation of this issue. Updates were made available.

    “Venom” (May 2015) CVE-2015-3456:

    Venom was a branded flaw which affected QEMU. A privileged user of a guest virtual machine could use this flaw to crash the guest or, potentially, execute arbitrary code on the host with the privileges of the host’s QEMU process corresponding to the guest.

    A number of Red Hat products were affected and updates were released. Red Hat products by default would block arbitrary code execution as SELinux sVirt protection confines each QEMU process.

    “LogJam” (May 2015) CVE-2015-4000:

    TLS connections using the Diffie-Hellman key exchange protocol were found to be vulnerable to an attack in which a man-in-the-middle attacker could downgrade vulnerable TLS connections to weak cryptography which could then be broken to decrypt the connection.

    Like Poodle and Freak, this issue is hard to exploit as it requires a man in the middle attack. We’re not aware of active exploitation of this issue. Various packages providing cryptography were updated.

    BIND DoS (July 2015) CVE-2015-5477:

    A flaw in the Berkeley Internet Name Domain (BIND) allowed a remote attacker to cause named (functioning as an authoritative DNS server or a DNS resolver) to exit, causing a denial of service against BIND.

    This issue affected the versions of BIND shipped with all versions of Red Hat Enterprise Linux. A public exploit exists for this issue. Updates were available the same day as the issue was public.

    libuser privilege escalation (July 2015) CVE-2015-3246:

    The libuser library implements an interface for manipulating and administering user and group accounts. Flaws in libuser could allow authenticated local users with shell access to escalate privileges to root.

    Red Hat Enterprise Linux 6 and 7 were affected and updates were available the same day the issue was public. Red Hat Enterprise Linux 5 was affected and a mitigation was published. A public exploit exists for this issue.

    Firefox lock file stealing via PDF reader (August 2015) CVE-2015-4495:

    A flaw in Mozilla Firefox could allow an attacker to access local files with the permissions of the user running Firefox. Public exploits exist for this issue, including as part of Metasploit, and targeting Linux systems.

    This issue affected Firefox shipped with versions of Red Hat Enterprise Linux, and updates were available the day after the issue was public.

    Firefox add-on permission warning (August 2015) CVE-2015-4498:

    Mozilla Firefox normally warns a user when trying to install an add-on if initiated by a web page. A flaw allowed this dialog to be bypassed.

    This issue affected Firefox shipped with Red Hat Enterprise Linux versions and updates were available the same day as the issue was public.

    Conclusion

    The issues examined in this report were included because they were meaningful. This includes issues of high severity that are likely easy to exploit (or already have a public working exploit), as well as issues that were highly visible or branded (with a name or logo), regardless of their severity.

    Between 1 April 2015 and 31 October 2015, across every Red Hat product, there were 39 Critical Red Hat Security Advisories released, addressing 192 Critical vulnerabilities. Aside from the issues in this report which were rated as having Critical security impact, all other issues with a Critical rating were part of Red Hat Enterprise Linux products and were browser-related: Firefox, Chromium, Adobe Flash, and Java (due to the browser plugin).

    Our dedicated Product Security team continue to analyse threats and vulnerabilities against all our products every day, and provide relevant advice and updates through the Customer Portal. Customers can call on this expertise to ensure that they respond quickly to address the issues that matter. Hear more about vulnerability handling in our upcoming virtual event: Secure Foundations for Today and Tomorrow.

    Posted: 2015-11-04T18:45:05+00:00
  • Red Hat CVE Database Revamp

    Authored by: Vincent Danen

    Since 2009, Red Hat has provided details of vulnerabilities with CVE names as part of our mission to provide as much information around vulnerabilities that affect Red Hat products as possible. These CVE pages distill information from a variety of sources to provide an overview of each flaw, including information like a description of the flaw, CVSSv2 scores, impact, public dates, and any corresponding errata that corrected the flaw in Red Hat products.

    Over time this has grown to include more information, such as CWE identifiers, statements, and links to external resources that note the flaw (such as upstream advisories, etc.). We’re pleased to note that the CVE pages have been improved yet again to provide even more information.

    Beyond just a UI refresh, and deeper integration into the Red Hat Customer Portal, the CVE pages now also display specific “mitigation” information on flaws where such information is provided. This is an area where we highlight certain steps that can be taken to prevent the exploitability of a flaw without requiring a package update. Obviously this is not applicable to all flaws, so it is noted only where it is relevant.

    In addition, the CVE pages now display the “affectedness” of certain products in relation to these flaws. For instance, in the past, you would know that an issue affected a certain product either by seeing that an erratum was available (as noted on the CVE page) or by visiting Bugzilla and trying to sort through comments and other metadata that is not easily consumable. The CVE pages now display this information directly on the page so it is no longer required that a visitor spend time poking around in Bugzilla to see if something they are interested in is affected (but has not yet had an erratum released).

    To further explain how this works, the pages will not show products that would not be affected by the flaw. For instance, a flaw against the mutt email client would not note that JBoss EAP is unaffected because EAP does not ship, and has never shipped, the mutt email client. However, if a flaw affected mutt on Red Hat Enterprise Linux 6, but not Red Hat Enterprise Linux 5 or 7, the CVE page might show an erratum for Red Hat Enterprise Linux 6 and show that mutt on Red Hat Enterprise Linux 5 and 7 is unaffected. Previously, this may have been noted as part of a statement on the page, but that was by no means guaranteed. You would have to look in Bugzilla to see if any comments or metadata noted this; now it is quite plainly noted on the pages directly.

    This section of the page, entitled “Affected Packages State”, is a table that lists the affected platform, package, and a state. This state can be:

    • “Affected”: this package is affected by this flaw on this platform
    • “Not affected”: this package, which ships on this platform, is not affected by this flaw
    • “Fix deferred”: this package is affected by this flaw on this platform, and may be fixed in the future
    • “Under investigation”: it is currently unknown whether or not this flaw affects this package on this platform, and it is under investigation
    • “Will not fix”: this package is affected by this flaw on this platform, but there is currently no intention to fix it (this would primarily be for flaws that are of Low or Moderate impact that pose no significant risk to customers)

    For instance, the page for CVE-2015-5279 would look like this, noting the above affected states:

    By being explicit about the state of packages on the CVE pages, visitors will know exactly what is affected by this CVE, without having to jump through hoops and spend time digging into Bugzilla comments.

    Other improvements that come with the recent changes include enhanced searching capabilities. You can now search for CVEs by keyword, so searching for all vulnerabilities that mention “openssl”, “bind” or “XSS” is now possible. In addition, you can filter by year and impact rating.

    The Red Hat CVE pages are a primary source of vulnerability information for many, a gateway of sorts that collects the most important information that visitors are often interested in, with links to further sources of information that are of interest to the vulnerability researcher.

    Red Hat continues to look for ways to provide extra value to our customers. These enhancements and changes are designed to make your jobs easier, and we believe the CVE pages will become an even greater resource for our customers and visitors. We hope you agree!

    Posted: 2015-10-22T15:49:33+00:00
  • Important security notice regarding signing key and distribution of Red Hat Ceph Storage on Ubuntu and CentOS

    Authored by: Fábio Olivé Leite

    Last week, Red Hat investigated an intrusion on the sites of both the Ceph community project (ceph.com) and Inktank (download.inktank.com), which were hosted on a computer system outside of Red Hat infrastructure.

    download.inktank.com provided releases of the Red Hat Ceph product for Ubuntu and CentOS operating systems. Those product versions were signed with an Inktank signing key (id 5438C7019DCEEEAD). ceph.com provided the upstream packages for the Ceph community versions signed with a Ceph signing key (id 7EBFDD5D17ED316D). While the investigation into the intrusion is ongoing, our initial focus was on the integrity of the software and distribution channel for both sites.

    To date, our investigation has not discovered any compromised code available for download on these sites. We cannot fully rule out the possibility that some compromised code was available for download at some point in the past.

    For download.inktank.com, all builds were verified as matching known good builds from a clean system. However, we can no longer trust the integrity of the Inktank signing key, and therefore have re-signed these versions of the Red Hat Ceph Storage products with the standard Red Hat release key. Customers of Red Hat Ceph Storage products should only use versions signed by the Red Hat release key.

    For ceph.com, the Ceph community has created a new signing key (id E84AC2C0460F3994) for verifying their downloads. See ceph.com for more details.

    Customer data was not stored on the compromised system. The system did have usernames and hashes of the fixed passwords we supplied to customers to authenticate downloads.

    To reiterate, based on our investigation to date, customers of the CentOS and Ubuntu versions of Red Hat Ceph Storage should, as a precautionary measure, download the rebuilt and newly signed product versions. We have identified and notified those customers directly.

    Customers using Red Hat Ceph Storage products for Red Hat Enterprise Linux are not affected by this issue. Other Red Hat products are also not affected.

    Customers who have any questions or need help moving to the new builds should contact Red Hat support or their Technical Account Manager.

    Posted: 2015-09-17T12:00:00+00:00