Latest Posts

  • Before you initiate a “docker pull”

    Authored by: Ben Pritchett

    In addition to the general challenges that are inherent to isolating containers, Docker brings with it an entirely new attack surface in the form of its automated fetching and installation mechanism, “docker pull”. It may be counter-intuitive, but “docker pull” both fetches and unpacks a container image in one step. There is no verification step and, surprisingly, malformed packages can compromise a system even if the container itself is never run. Many of the CVEs issued against Docker have been related to packaging flaws that can lead to install-time compromise, or to issues with the Docker registry.

    One such issue, now resolved, could compromise a system through simple path traversal during the unpack step. By exploiting a tarball’s ability to unpack to paths such as “../../../”, malicious images could overwrite any part of the host file system they desired.

    Thus, one of the most important ways you can protect yourself when using Docker images is to make sure you only use content from a source you trust and to separate the download and unpack/install steps. The easiest way to do this is simply to not use the “docker pull” command. Instead, download your Docker images over a secure channel from a trusted source and then use the “docker load” command. Most image providers also serve images directly over a secure, or at least verifiable, connection. For example, Red Hat provides SSL-accessible container images. Fedora also provides Docker images with each release.

    While Fedora does not provide SSL with all mirrors, it does provide a signed checksum of the Docker image that can be used to verify it before you use “docker load”.
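
    For example, a download-verify-load sequence might look like the following sketch (the URLs and file names here are hypothetical placeholders; use the actual image and checksum locations published by your trusted source):

    # Hypothetical file names -- substitute the real image and CHECKSUM locations
    curl -O https://example.fedoraproject.org/Fedora-Docker-Base.tar.xz
    curl -O https://example.fedoraproject.org/Fedora-Docker-Base-CHECKSUM
    sha256sum -c Fedora-Docker-Base-CHECKSUM    # verify the download first
    docker load -i Fedora-Docker-Base.tar.xz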

    Since “docker pull” automatically unpacks images, and the unpack step itself has been a source of vulnerabilities, even a typo can lead to a system compromise (e.g. a malicious “rel” image downloaded and unpacked when you intended “rhel”). This typo problem can also occur in Dockerfiles. One way to protect yourself is to prevent accidental access to index.docker.io at the firewall level or by adding the following /etc/hosts entry:

    127.0.0.1 index.docker.io

    This will cause such mistakes to time out instead of potentially downloading unwanted images. You can still use “docker pull” for private repositories by explicitly providing the registry:

    docker pull registry.somewhere.com/image

    And you can use a similar syntax in Dockerfiles:

    FROM registry.somewhere.com/image

    Providing a wider ecosystem of trusted images is exactly why Red Hat began its certification program for container applications. Docker is an amazing technology, but it is neither a security nor interoperability panacea. Images still need to come from sources that certify their security, level-of-support, and compatibility.

    Posted: 2014-12-18T14:30:57+00:00
  • Container Security: Isolation Heaven or Dependency Hell

    Authored by: Ben Pritchett

    Docker is the public face of Linux containers and of two of Linux’s unsung heroes: control groups (cgroups) and namespaces. Like virtualization, containers are appealing because they help solve two of the oldest problems to plague developers: “dependency hell” and “environmental hell.”

    Closely related, dependency and environmental hell can best be thought of as the chief causes of “works for me” situations. Dependency hell describes the complexity inherent in a modern application’s tangled graph of external libraries and programs it needs to function. Environmental hell is the name for the operating-system portion of that same problem (i.e. the wrinkles, in particular which bash implementation, on which that quick script you wrote unknowingly relies).

    Namespaces provide the solution in much the same way as virtual memory simplified writing code on a multi-tenant machine: by providing the illusion that an application suite has the computer all to itself. In other words, via isolation. When a process or process group is isolated via these namespace features, we say they are “contained.” In this way, virtualization and containers are conceptually related, but containers isolate in a completely different way, and conflating the two is just the first of a series of misconceptions that must be cleared up in order to understand how to use containers as securely as possible. Virtualization involves fully isolating programs to the point that one can use Linux, for example, while another uses BSD. Containers are not so isolated. Here are a few of the ways that “containers do not contain”:

    1. Containers all share the same kernel. If a contained application is hijacked with a privilege escalation vulnerability, all running containers *and* the host are compromised. Similarly, it isn’t possible for two containers to use different versions of the same kernel module.
    2. Several resources are *not* namespaced. Examples include the normal ulimit system, which is still needed to control resources such as file handles, and the kernel keyring. Many beginning users of containers find it counter-intuitive that socket handles can be exhausted or that Kerberos credentials are shared between containers when they believe they have exclusive system access. A badly behaving process in one container could use up all the file handles on a system and starve the other containers. Diagnosing the shared resource usage is not feasible from within a container.
    3. By default, containers inherit many system-level kernel capabilities. While Docker has many useful options for restricting kernel capabilities (see the sketch after this list), you need a deeper understanding of an application’s needs to run it inside containers than you would if running it in a VM. The containers and the applications within them will be dependent on the capabilities of the kernel on which they reside.
    4. Containers are not “write once, run anywhere”. Since they use the host kernel, applications must be compatible with said kernel. Just because many applications don’t depend on particular kernel features doesn’t mean that no applications do.
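
    For instance, the sketch below drops all kernel capabilities and adds back only the one the application actually needs; the image name is a hypothetical placeholder:

    # Drop every capability, then re-add only what this workload requires
    docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE registry.somewhere.com/image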

    For these and other reasons, Docker images should be designed and used with consideration for the host system on which they are running. By only consuming images from trusted sources, you reduce the risk of deploying containerized applications that exhaust system resources or otherwise create a denial of service attack on shared resources. Docker images should be considered as powerful as RPMs and should only be installed from sources you trust. You wouldn’t expect your system to remain secured if you were to randomly install untrusted RPMs nor should you if you “docker pull” random Docker images.

    In the future we will discuss the topic of untrusted images.

    Posted: 2014-12-17T14:30:37+00:00
  • Analysis of the CVE-2013-6435 Flaw in RPM

    Authored by: Ben Pritchett

    The RPM Package Manager (RPM) is a powerful command-line driven package management system capable of installing, uninstalling, verifying, querying, and updating software packages. RPM was originally written in 1997 by Erik Troan and Marc Ewing. Since then RPM has been successfully used in all versions of Red Hat Linux and currently in Red Hat Enterprise Linux.

    RPM offers considerable advantages over the traditional open-source software installation methodology of building from source via tarballs, especially when it comes to software distribution and management. This has led other Linux distributions to adopt RPM, either as the default package management system or as an alternative to their own defaults.

    Like any large, widely used piece of software, RPM has gained several features over time, and several security flaws have been found in it. Red Hat has found and fixed security issues in RPM on several occasions.

    Florian Weimer of Red Hat Product Security discovered an interesting flaw in RPM, which was assigned CVE-2013-6435. Firstly, let’s take a brief look at the structure of an RPM file. It consists of two main parts: the RPM header and the payload. The payload is a compressed CPIO archive of binary files that are installed by the RPM utility. The RPM header, among other things, contains a cryptographic checksum of all the installed files in the CPIO archive. The header also contains a provision for a cryptographic signature. The signature works by performing a mathematical function on the header and archive section of the file. The mathematical function can be an encryption process, such as PGP (Pretty Good Privacy), or a message digest in the MD5 format.

    If the RPM is signed, one can use the corresponding public key to verify the integrity and even the authenticity of the package. However, RPM only checked the header and not the payload during the installation.

    When an RPM is installed, it writes the contents of the package to its target directory and then verifies its checksum against the value in the header. If the checksum does not match, that means something is wrong with the package (possibly someone has tampered with it) and the file is removed. At this point RPM refuses to install that particular package.

    Though this may seem like the correct way to handle things, it has a bad consequence. Let’s assume RPM installs a file in the /etc/cron.d directory and then verifies its checksum. This offers a small race-window, in which crond can run before the checksum is found to be incorrect and the file is removed. There are several ways to prolong this window as well. So in the end we achieve arbitrary code execution as root, even though the system administrator assumes that the RPM package was never installed.

    The approach Red Hat used to solve the problem is:

    • Require the size in the header to match the size of the file in the payload. This prevents anyone from tampering with the payload, because the header is cryptographically verified. (This fix is already present in the upstream version of RPM.)
    • Set restrictive permissions while a file is being unpacked from an RPM package. This will only allow root to access those files. Also, several programs, including cron, perform a permission sanity check before running those files.

    Another approach to mitigate this issue is the use of the O_TMPFILE flag. Linux kernel 3.11 and above introduced this flag, which can be passed to open(2) to simplify the creation of secure temporary files. Files opened with the O_TMPFILE flag are created, but they are not visible in the file system. As soon as they are closed, they are deleted. There are two uses for these files: race-free temporary files and creation of initially unreachable files. These unreachable files can be written to or changed just like regular files. RPM could use this approach to create a temporary, unreachable file, run a checksum on it, and either delete it or atomically link it into place, without being vulnerable to the attack described above. However, this feature is only available in Linux kernel 3.11 and above, was added to glibc 2.19, and is slowly making its way into GNU/Linux distributions.

    The risk mentioned above is greatly reduced if the following precautions are followed:

    • Always check the signatures of RPM packages before installing them (see the example after this list). Red Hat RPMs are signed with cryptographic keys provided at https://access.redhat.com/security/team/key. When installing RPMs from Red Hat or Fedora repositories, Yum will automatically validate RPM packages via the respective public keys, unless explicitly told not to (via the “nogpgcheck” option and configuration directive).
    • Package downloads via Red Hat software repositories are protected via TLS/SSL so it is extremely difficult to tamper with them in transit. Fedora uses a whole-file hash chain rooted in a hash downloaded over TLS/SSL from a Fedora-run central server.
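
    As a quick illustration, a package signature can also be checked manually before installation; the package file name below is a placeholder:

    # Import the Red Hat release key (this path is typical on Red Hat systems),
    # then verify the digests and GPG signature of a downloaded package
    rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
    rpm -K package.rpm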

    The above issue (CVE-2013-6435) has been fixed along with another issue (CVE-2014-8118), which is a potentially exploitable crash in the CPIO parser.

    Red Hat customers should update to the latest versions of RPM via the following security advisories:
    https://rhn.redhat.com/errata/RHSA-2014-1974.html
    https://rhn.redhat.com/errata/RHSA-2014-1975.html
    https://rhn.redhat.com/errata/RHSA-2014-1976.html

    Posted: 2014-12-10T14:30:50+00:00
  • Disabling SSLv3 on the client and server

    Authored by: Ben Pritchett

    Recently, some Internet search engines announced that they would prefer websites secured with encryption over those that were not.  Of course there are other reasons why securing your website with encryption is beneficial: protecting authentication credentials, mitigating the use of cookies as a means of tracking and allowing access, providing privacy for your users, and authenticating your own server, thus protecting the information you are trying to convey to your users.  And while setting up and using encryption on a webserver can be trivial, doing it properly might take a few additional minutes.

    Red Hat strives to ship sane defaults that allow both security and availability.  Depending on your clients, a more stringent or more lax configuration may be desirable.  Red Hat Support provides both written documentation and a friendly person who can help make sense of it all.  Ultimately, it is the responsibility of the system owner to secure the systems they host.

    Good cryptographic protocols

    Protocols are the basis for all cryptography and provide the instructions for implementing ciphers and using certificates.  In the asymmetric, or public key, encryption world the protocols are all based on the Secure Sockets Layer, or SSL, protocol.  SSL has come a long way since its initial release in 1995.  Development has moved relatively quickly, and the latest version, Transport Layer Security version 1.2 (TLS 1.2), is now the standard that all new software should support.

    Unfortunately some of the software found on the Internet still supports or even requires older versions of the SSL protocol.  These older protocols are showing their age and are starting to fail.  The most recent example is the POODLE vulnerability which showed how weak SSL 3.0 really is.

    In response to the weakened protocol, Red Hat has provided advice on disabling SSL 3.0 in its products and has helped its customers implement the best available cryptography.  This can be seen in products from Apache httpd to Mozilla Firefox.  Because SSL 3.0 is quickly approaching its twentieth birthday, it’s probably best to move on to newer and better options.
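
    For example, with Apache httpd and mod_ssl, a single directive disables the SSLv2 and SSLv3 protocols while leaving TLS enabled (the file location varies by distribution; on Red Hat systems it is typically /etc/httpd/conf.d/ssl.conf):

    # Allow all protocols except SSL 2.0 and SSL 3.0
    SSLProtocol all -SSLv2 -SSLv3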

    Of course the protocol can’t fix everything if you’re using bad ciphers.

    Good cryptographic ciphers

    Cryptographic ciphers are just as important for protecting your information.  Weak ciphers, like RC4, are still used on the Internet today even though better and more efficient ciphers are available.  Unfortunately, the recommendations change frequently; what was suggested just a few months ago may no longer be a good choice today.  As more research goes into the available ciphers, weaknesses are discovered.

    Fortunately, there are resources available to help you stay up to date.  Mozilla provides recommended cipher choices that are updated regularly.  The recommendations are broken down into three categories, so system owners can determine which configuration best meets their needs.

    Of course the cipher can’t fix everything if your certificates are not secure.

    Certificates

    Certificates are what authenticate your server to your users.  If an attacker can spoof your certificate they can intercept all traffic going between your server and users.  It’s important to protect your keys and certificates once they have been generated.  Using a hardware security module (HSM) to store your certificates is a great idea.  Using a reputable certificate authority is equally important.

    Clients

    Most clients that support SSL/TLS encryption automatically try to negotiate the latest version.  We found with the POODLE attack that HTTP clients, such as Firefox, could be downgraded to a weak protocol like SSL 3.0.  Because of this, many server owners went ahead and disabled SSL 3.0 to prevent the downgrade attack from affecting their users.  Mozilla has, in the latest version of Firefox, disabled SSL 3.0 by default (although it can be re-enabled for legacy support).  Now users are protected even where server owners are lax in their security (although they are still at the mercy of the server’s cipher and protocol choices).

    Much of the work has already been done behind the scenes, in the development of the software that serves up websites as well as the software that consumes the data coming from those servers.  The final step is for system owners to implement the technology that is available.  While a healthy understanding of cryptography and public key infrastructure is good, it is not necessary to properly implement good cryptographic solutions.  What is important is protecting your data and that of your users.  Trust is built during every interaction, and your website is usually a large part of that interaction.

    Posted: 2014-12-03T14:30:23+00:00
  • Enterprise Linux 6.5 to 6.6 risk report

    Authored by: Ben Pritchett

    Red Hat Enterprise Linux 6.6 was released on the 14th of October, 2014, eleven months since the release of 6.5 in November 2013. So let’s use this opportunity to take a quick look back over the vulnerabilities and security updates made in that time, specifically for Red Hat Enterprise Linux 6 Server.

    Red Hat Enterprise Linux 6 is in its fourth year since release, and will receive security updates until November 30th 2020.

    Errata count

    The chart below illustrates the total number of security updates issued for Red Hat Enterprise Linux 6 Server if you had installed 6.5, up to and including the 6.6 release, broken down by severity. It’s split into two columns, one for the packages you’d get if you did a default install, and the other if you installed every single package.

    During installation there actually isn’t an option to install every package; you’d have to select them all manually, and it’s not a likely scenario. For a given installation, the number of package updates and vulnerabilities that affected you will depend on exactly what you selected during installation and which packages you have subsequently installed or removed.

    [Chart: Security errata 6.5 to 6.6, Red Hat Enterprise Linux 6 Server]

    For a default install, from release of 6.5 up to and including 6.6, we shipped 47 advisories to address 219 vulnerabilities. 2 advisories were rated critical, 25 were important, and the remaining 20 were moderate and low.

    Or, for all packages, from release of 6.5 up to and including 6.6, we shipped 116 advisories to address 399 vulnerabilities. 13 advisories were rated critical, 53 were important, and the remaining 50 were moderate and low.

    You can cut down the number of security issues you need to deal with by carefully choosing the right Red Hat Enterprise Linux variant and package set when deploying a new system, and ensuring you install the latest available Update release.
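
    On Red Hat Enterprise Linux 6, one practical way to stay on top of these errata is the yum security plugin; a minimal sketch, assuming the yum-plugin-security package is installed:

    # List pending security errata, then apply only security updates
    yum list-security
    yum --security update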

    Critical vulnerabilities

    Vulnerabilities rated critical severity are the ones that can pose the most risk to an organisation. By definition, a critical vulnerability is one that could be exploited remotely and automatically by a worm. However we also stretch that definition to include those flaws that affect web browsers or plug-ins where a user only needs to visit a malicious (or compromised) website in order to be exploited. Most of the critical vulnerabilities we fix fall into that latter category.

    The 13 critical advisories addressed 42 critical vulnerabilities across six different projects:

    • An update to php RHSA-2013:1813 (December 2013).  A memory corruption flaw was found in the way the openssl_x509_parse() function of the PHP openssl extension parsed X.509 certificates. A remote attacker could use this flaw to provide a malicious self-signed certificate or a certificate signed by a trusted authority to a PHP application using the aforementioned function, causing the application to crash or, possibly, allow the attacker to execute arbitrary code with the privileges of the user running the PHP interpreter.
    • Updates to Java/OpenJDK:
      • RHSA-2014:0026 (January 2014).  Multiple improper permission check issues were discovered in the Serviceability, Security, CORBA, JAAS, JAXP, and Networking components in OpenJDK. An untrusted Java application or applet could use these flaws to bypass certain Java sandbox restrictions.
      • RHSA-2014:0406 (April 2014).  An input validation flaw was discovered in the medialib library in the 2D component. A specially crafted image could trigger Java Virtual Machine memory corruption when processed. A remote attacker, or an untrusted Java application or applet, could possibly use this flaw to execute arbitrary code with the privileges of the user running the Java Virtual Machine.
      • RHSA-2014:0889 (July 2014).  It was discovered that the Hotspot component in OpenJDK did not properly verify bytecode from the class files. An untrusted Java application or applet could possibly use these flaws to bypass Java sandbox restrictions.
    • An update to ruby RHSA-2013:1764 (November 2013).  A buffer overflow flaw was found in the way Ruby parsed floating point numbers from their text representation. If an application using Ruby accepted untrusted input strings and converted them to floating point numbers, an attacker able to provide such input could cause the application to crash or, possibly, execute arbitrary code with the privileges of the application.
    • An update to nss and nspr RHSA-2014:0917 (July 2014).  A race condition was found in the way NSS verified certain certificates.  A remote attacker could use this flaw to crash an application using NSS or, possibly, execute arbitrary code with the privileges of the user running that application.
    • An update to bash (Shellshock) RHSA-2014:1293 (September 2014).  A flaw was found in the way Bash evaluated certain specially crafted environment variables. An attacker could use this flaw to override or bypass environment restrictions to execute shell commands. Certain services and applications allow remote unauthenticated attackers to provide environment variables, allowing them to exploit this issue.
    • An update to Firefox:
      • RHSA-2013:1812 (December 2013).   Several flaws were found in the processing of malformed web content. A web page containing malicious content could cause Firefox to terminate unexpectedly or, potentially, execute arbitrary code with the privileges of the user running Firefox.
      • RHSA-2014:0132 (February 2014).  Several flaws were found in the processing of malformed web content. A web page containing malicious content could cause Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running Firefox.
      • RHSA-2014:0310 (March 2014).  Several flaws were found in the processing of malformed web content. A web page containing malicious content could cause Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running Firefox.
      • RHSA-2014:0448 (April 2014).  Several flaws were found in the processing of malformed web content. A web page containing malicious content could cause Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running Firefox.
      • RHSA-2014:0741 (June 2014).  Several flaws were found in the processing of malformed web content. A web page containing malicious content could cause Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running Firefox.
      • RHSA-2014:0919 (July 2014).  Several flaws were found in the processing of malformed web content. A web page containing malicious content could cause Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running Firefox.
      • RHSA-2014:1144 (September 2014). Several flaws were found in the processing of malformed web content. A web page containing malicious content could cause Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running Firefox.
      • RHSA-2014:1635 (October 2014).  Several flaws were found in the processing of malformed web content. A web page containing malicious content could cause Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running Firefox.

        A flaw was found in the Alarm API, which allows applications to schedule actions to be run in the future. A malicious web application could use this flaw to bypass cross-origin restrictions.

    97% of the updates to correct those 42 critical vulnerabilities were available via Red Hat Network either the same day or the next calendar day after the issues were made public.

    Previous update releases

    We generally measure risk in terms of the number of vulnerabilities, but the actual effort in maintaining a Red Hat Enterprise Linux system is more related to the number of advisories we released: a single Firefox advisory may fix ten different issues of critical severity, but takes far less total effort to manage than ten separate advisories each fixing one critical PHP vulnerability.

    To compare these statistics with previous update releases we need to take into account that the time between each update release is different. So looking at a default installation and calculating the number of advisories per month gives the following chart:

    [Chart: Security errata per month, Red Hat Enterprise Linux 6 Server default install]

    This data is interesting to get a feel for the risk of running Enterprise Linux 6 Server, but isn’t really useful for comparisons with other major versions, distributions, or operating systems — for example, a default install of Red Hat Enterprise Linux 6 Server does not include Firefox, but Red Hat Enterprise Linux 5 Server does. You can use our public security measurement data and tools, and run your own custom metrics for any given Red Hat product, package set, timescales, and severity range of interest.

    See also: 6.5, 6.4, 6.3, 6.2, and 6.1 risk reports.

    Posted: 2014-11-12T14:30:28+00:00
  • Can SSL 3.0 be fixed? An analysis of the POODLE attack.

    Authored by: Ben Pritchett

    SSL and TLS are cryptographic protocols that allow users to communicate securely over the Internet. Their development history is no different from that of other Internet standards: security flaws were found in older versions, and improvements were required as technology progressed (for example, elliptic curve cryptography, or ECC), which led to the creation of newer versions of the protocol.

    It is easier to write newer standards, and maybe even implement them in code, than to adapt existing ones while maintaining backward compatibility. The widespread use of SSL/TLS to secure traffic on the Internet makes a uniform update difficult. This is especially true for hardware and embedded devices such as routers and consumer electronics which may receive infrequent updates from their vendors.

    The fact that legacy systems and protocols need to be supported, even though more secure options are available, has led to the inclusion of a version negotiation mechanism in the SSL/TLS protocols. This mechanism allows a client and a server to communicate even if the highest SSL/TLS version they support is not identical. The client indicates the highest version it supports in its ClientHello handshake message, the server picks the highest version supported by both sides, and then the server communicates this version back to the client in its ServerHello handshake message. The SSL/TLS protocols implement protections to prevent a man-in-the-middle (MITM) attacker from tampering with handshake messages to force the use of a protocol version lower than the highest version supported by both the client and the server.

    Most popular browsers implement a different, out-of-band mechanism for falling back to earlier protocol versions. Some SSL/TLS implementations do not correctly handle cases where a connecting client supports a newer TLS protocol version than the server, or where certain TLS extensions are used. Instead of negotiating the highest TLS version supported by the server, the connection attempt may fail. As a workaround, the web browser may attempt to re-connect with certain protocol versions disabled. For example, the browser may initially connect claiming TLS 1.2 as the highest supported version, and subsequently reconnect claiming only TLS 1.1, TLS 1.0, or eventually SSL 3.0 as the highest supported version, until the connection attempt succeeds. This can trivially allow a MITM attacker to cause a protocol downgrade and make the client and server use SSL 3.0. This fallback behavior is not seen in non-HTTPS clients.

    The issue at the core of the POODLE flaw is an attack against the “authenticate-then-encrypt” constructions used by block ciphers in their cipher block chaining (CBC) mode, as used in SSL and TLS. With SSL 3.0, at most 256 connections are required to reliably decrypt one byte of ciphertext. The alternative to CBC-mode block ciphers in SSL 3.0, the RC4 stream cipher, already has known flaws and its use is discouraged.

    Several cryptographic library vendors have issued patches which introduce support for the TLS Fallback Signaling Cipher Suite Value (TLS_FALLBACK_SCSV) in their libraries. This is essentially a fallback mechanism in which clients indicate to the server that they can speak a newer SSL/TLS version than the one they are proposing. If TLS_FALLBACK_SCSV was included in the ClientHello and the highest protocol version supported by the server is higher than the version indicated by the client, the server aborts the connection, because it means that the client is trying to fall back to an older version even though it can speak a newer one.
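
    Where the installed OpenSSL is recent enough to provide the client-side flag, SCSV handling on a server can be probed with s_client; in the sketch below (the host name is a placeholder), a patched server that also speaks a newer TLS version should abort the handshake with an "inappropriate fallback" alert:

    # Pretend to be a client falling back to TLS 1.0 while signaling the SCSV
    openssl s_client -connect example.com:443 -tls1 -fallback_scsv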

    Before applying this fix, there are several things that need to be understood:

    • As discussed before, only web browsers perform an out-of-band protocol fallback. Not all web browsers currently support TLS_FALLBACK_SCSV in their released versions. Even if the patch is applied on the server, the connection may still be unsafe if the browser is able to negotiate SSL 3.0.
    • Clients which do not implement out-of-protocol TLS version downgrades (generally anything which does not speak HTTPS) do not need to be changed. Adding TLS_FALLBACK_SCSV is unnecessary (and even impossible) if there is no downgrade logic in the client application.
    • Thunderbird shares a lot of its code with the Firefox web browser, including the connection setup code for IMAPS and SMTPS. This means that Thunderbird will perform an insecure protocol downgrade, just like Firefox. However, the plaintext recovery attack described in the POODLE paper does not apply to IMAPS or SMTPS, and the web browser in Thunderbird has Javascript disabled, and is usually not used to access sites which require authentication, so the impact on Thunderbird is very limited.
    • The TLS/SSL server needs to be patched to support the SCSV extension – though, as opposed to the client, the server does not have to be rebuilt with source changes applied. Just installing an upgraded TLS library is sufficient. Due to the current lack of browser support, this server-side change does not have any positive security impact as of this writing. It only prepares for a future where a significant share of browsers implement TLS_FALLBACK_SCSV.
    • If both the server and the client are patched and one of them only supports SSL 3.0, SSL 3.0 will be used directly, which results in a connection with reduced security (compared to currently recommended practices). However, the alternative is a total connection failure or, in some situations, an unencrypted connection which does nothing to protect from an MITM attack. SSL 3.0 is still better than an unencrypted connection.
    • As a stop-gap measure against attacks based on SSL 3.0, support for this aging protocol can be disabled on the server and on the client (a quick server check is sketched below). Advice on disabling SSL 3.0 in various Red Hat products and components is available on the Knowledge Base.
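
    A server can be checked for SSL 3.0 support with a probe like the following sketch (the host name is a placeholder; the -ssl3 option is present in OpenSSL builds of this era). A successful handshake means SSL 3.0 is still enabled; a handshake failure means it has been disabled:

    # Attempt an SSL 3.0-only handshake against the server
    openssl s_client -connect example.com:443 -ssl3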

    Information about attacks (or the current lack of them) may help with a decision. Protocol downgrades are not covert attacks, particularly in this case. It is possible to log the SSL/TLS protocol versions negotiated with clients and compare these versions with expected version numbers (as derived from user profiles or the HTTP user agent header). Even after a forced downgrade to SSL 3.0, HTTPS protects against tampering. The plaintext recovery attack described in the POODLE paper (Bodo Möller, Thai Duong, Krzysztof Kotowicz, This POODLE Bites: Exploiting The SSL 3.0 Fallback, September 2014) can be detected by the server, and the sheer number of requests it generates could be noticeable.

    Red Hat has done additional research regarding the downgrade attack in question. We have not found any clients that can be forcibly downgraded by an attacker other than clients that speak HTTPS. Due to this fact, disabling SSL 3.0 on services which are not used by HTTPS clients does not affect the level of security offered. A client that supports a higher protocol version and cannot be downgraded is not at issue as it will always use the higher protocol version.

    SSL 3.0 cannot be repaired at this point because what constitutes the SSL 3.0 protocol is set in stone by its specification. However, starting in 1999, successor protocols to SSL 3.0 were developed called TLS 1.0, 1.1, and 1.2 (which is currently the most recent version). Because of the built-in protocol upgrade mechanisms, these successor protocols will be used whenever possible. In this sense, SSL 3.0 has indeed been fixed – an update to SSL 3.0 should be seen as being TLS 1.0, 1.1, and 1.2. Implementing TLS_FALLBACK_SCSV handling in servers makes sure that attackers cannot circumvent the fixes in later protocol versions.

    Posted: 2014-10-20T14:27:34+00:00
  • POODLE – An SSL 3.0 Vulnerability (CVE-2014-3566)

    Authored by: Ben Pritchett

    Red Hat Product Security has been made aware of a vulnerability in the SSL 3.0 protocol, which has been assigned CVE-2014-3566. All implementations of SSL 3.0 are affected. This vulnerability allows a man-in-the-middle attacker to decrypt ciphertext using a padding oracle side-channel attack.

    To mitigate this vulnerability, it is recommended that you explicitly disable SSL 3.0 in favor of TLS 1.1 or later in all affected packages.

    A brief history

    Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols designed to provide communication security over networks. The SSL protocol was originally developed by Netscape. Version 1.0 was never publicly released; version 2.0 was released in February 1995 but contained a number of security flaws, which ultimately led to the design of SSL 3.0. Over the years, several flaws were found in the design of SSL 3.0 as well. This ultimately led to the development and widespread use of the TLS protocol.

    Most TLS implementations remain backward compatible with SSL 3.0 to accommodate legacy systems and provide a smoother user experience. Many SSL clients implement a protocol downgrade “dance” to work around server-side interoperability issues. Once the connection is downgraded to SSL 3.0, RC4 or a block cipher in CBC mode is used; this is where the problem starts!

    What is POODLE?

    The POODLE vulnerability has two aspects. The first aspect is a weakness in the SSL 3.0 protocol, a padding oracle. An attacker can exploit this vulnerability to recover small amounts of plaintext from an encrypted SSL 3.0 connection, by issuing crafted HTTPS requests created by client-side Javascript code, for example. Multiple HTTPS requests are required for each recovered plaintext byte, and the vulnerability allows attackers to confirm if a particular byte was guessed correctly. This vulnerability is inherent to SSL 3.0 and unavoidable in this protocol version. The fix is to upgrade to newer versions, up to TLS 1.2 if possible.

    Normally, a client and a server automatically negotiate the most recent supported protocol version of SSL/TLS. The second aspect of the POODLE vulnerability concerns this negotiation mechanism. For the protocol negotiation mechanism to work, servers must gracefully deal with a more recent protocol version offered by clients. (The connection would just use the older, server-supported version in such a scenario, not benefiting from future protocol enhancements.) However, when newer TLS versions were deployed, it was discovered that some servers just terminated the connection at the TCP layer or responded with a fatal handshake error, preventing a secure connection from being established. Clearly, this server behavior is a violation of the TLS protocol, but there were concerns that this behavior would make it impossible to deploy upgraded clients and widespread interoperability failures were feared. Consequently, browsers first try a recent TLS version, and if that fails, they attempt again with older protocol versions, until they end up at SSL 3.0, which suffers from the padding-related vulnerability described above. This behavior is sometimes called the compatibility dance. It is not part of TLS implementations such as OpenSSL, NSS, or GNUTLS; it is implemented by application code in client applications such as Firefox and Thunderbird.

    Both aspects of POODLE require a man-in-the-middle attack at the network layer. The first aspect of this flaw, the SSL 3.0 vulnerability, requires that an attacker can observe the network traffic between a client and a server and somehow trigger crafted network traffic from the client. This does not strictly require active manipulation of the network transmission; passive eavesdropping is sufficient. However, the second aspect, the forced protocol downgrade, requires active manipulation of network traffic. As described in the POODLE paper, both aspects require the attacker to be able to observe and manipulate network traffic while it is in transit.


    How are modern browsers affected by the POODLE security flaw?

    Browsers are particularly vulnerable because session cookies are short and an ideal target for plain text recovery, and the way HTTPS works allows an attacker to generate many guesses quickly (either through Javascript or by downloading images). Browsers are also most likely to implement the compatibility fallback.

    By default, Firefox supports SSL 3.0, and performs the compatibility fallback as described above. SSL 3.0 support can be switched off, but the compatibility fallback cannot be configured separately.
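
    In Firefox versions where SSL 3.0 is still enabled, it can be switched off via about:config by raising the minimum accepted protocol version (this preference name is standard in Mozilla products of this era; 0 selects SSL 3.0 and 1 selects TLS 1.0):

    security.tls.version.min = 1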

    Is this issue fixed?

    The first aspect of POODLE, the SSL 3.0 protocol vulnerability, has already been fixed through iterative protocol improvements, leading to the current TLS version, 1.2. It is simply not possible to address this in the context of the SSL 3.0 protocol; a protocol upgrade to one of the successors is needed. Note that TLS versions before 1.1 had similar padding-related protocol issues, which is why we recommend switching to at least TLS 1.1, if possible. These issues, while present, are currently not known to be exploitable in a way such as POODLE, and the first aspect of the POODLE vulnerability is not exploitable with TLS 1.0. (SSL and TLS are still quite similar as protocols; the name change had non-technical reasons.)

    The second aspect, caused by browsers which implement the compatibility fallback in an insecure way, has yet to be addressed. Strictly speaking, this is a security vulnerability in browsers due to the way they misuse the TLS protocol. One way to fix this issue would be to remove the compatibility dance, focusing instead on making servers compatible with clients implementing the most recent TLS version (as explained, the protocol supports a version negotiation mechanism, but some servers refuse to implement it).

    However, there is an industry-wide effort under way to enable browsers to downgrade in a secure fashion, using a new Signaling Cipher Suite Value (SCSV). This will require updates in browsers (such as Firefox) and TLS libraries (such as OpenSSL, NSS and GNUTLS). However, we do not envision changes in TLS client applications which currently do not implement the fallback logic, nor in TLS server applications, as long as they use one of the system TLS libraries. TLS-aware packet filters, firewalls, load balancers, and other infrastructure may need upgrades as well.

    Is there going to be another SSL 3.0 issue in the near future? Is there a long term solution?

    Disabling SSL 3.0 will obviously prevent exposure to future SSL 3.0-specific issues. The new SCSV-based downgrade mechanism should reliably prevent the use of SSL 3.0 if both parties support a newer protocol version. Once these software updates are widely deployed, the need to disable SSL 3.0 to address this and future vulnerabilities will hopefully be greatly reduced.

    SSL 3.0 is typically used in conjunction with the RC4 stream cipher. (The only other secure option in a strict, SSL 3.0-only implementation is Triple DES, which is quite slow even on modern CPUs.) RC4 is already considered very weak, and SSL 3.0 does not even apply some of the recommended countermeasures which prolonged the lifetime of RC4 in other contexts. This is another reason to deploy support for more recent TLS versions.

    I have patched my SSL implementation against BEAST and LUCKY-13, am I still vulnerable?

    This depends on the type of mitigation you have implemented. If you disabled protocol versions earlier than TLS 1.1 (which includes SSL 3.0), then the POODLE issue does not affect your installation. If you forced clients to use RC4, the first aspect of POODLE does not apply, but you and your users are vulnerable to all of the weaknesses in RC4. If you implemented the 1/n-1 record split through a software update, or if you deployed TLS 1.1 support without enforcing it, but made no other configuration changes, you are still vulnerable to the POODLE issue.

    Is it possible to monitor for exploit attempts?

    The protocol downgrade is visible on the server side. Usually, servers can log TLS protocol versions. This information can then be compared with user agents or other information from the profile of a logged-in user, and mismatches could indicate attack attempts.
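
    With Apache httpd and mod_ssl, for example, the negotiated protocol and cipher can be logged per request using the standard mod_ssl log variables; a sketch, to be adapted to your own log format:

    # Record the negotiated protocol and cipher with each request
    CustomLog logs/ssl_request_log "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"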

    Attempts to abuse the SSL 3.0 padding oracle part of POODLE, as described in the paper, are visible to the server as well. They result in a fair number of HTTPS requests which follow a pattern not expected during the normal course of execution of a web application. However, it cannot be ruled out that a more sophisticated adaptive chosen plain text attack avoids confirmation of guesses from the server, and this more advanced attack would not be visible to the server, only to the client and the network up to the point at which the attacker performs their traffic manipulation.

    What happens when I disable SSL 3.0 on my web server?

    Some old browsers may not be able to support a secure connection to your site. Estimates of the number of such browsers in active use vary and depend on the target audience of a web site. SSL protocol version logging (see above) can be used to estimate the impact of disabling SSL 3.0 because it will be used only if no TLS version is available in the client.

    Major browser vendors, including Mozilla and Google, have announced that they will deactivate SSL 3.0 in upcoming versions of their browsers.

    How do I secure my Red Hat-supported software?

    Red Hat has put together several articles regarding the removal of SSL 3.0 from its products.  Customers should review the recommendations and test changes before making them live in production systems.  As always, Red Hat Support is available to answer any questions you may have.

    Posted: 2014-10-15T14:44:40+00:00
  • The Source of Vulnerabilities: How Red Hat finds out about vulnerabilities

    Authored by: Ben Pritchett

    Red Hat Product Security tracks lots of data about every vulnerability affecting every Red Hat product. We make all this data available on our Measurement page and from time to time write various blog posts and reports about interesting metrics or trends.

    One metric we’ve not written about since 2009 is the source of the vulnerabilities we fix. We want to answer the question: how did Red Hat Product Security first hear about each vulnerability?

    Every vulnerability that affects a Red Hat product is given a master tracking bug in Red Hat bugzilla. This bug contains a whiteboard field with a comma-separated list of metadata, including the dates we found out about the issue and its source. You can get a file containing all this information, already gathered, for every CVE. A few months ago we updated our ‘daysofrisk’ command line tool to parse the source information, allowing anyone to quickly create reports like this one.

    So let’s take a look at some example views of recent data: every vulnerability fixed in every Red Hat product in the 12 months up to 30th August 2014 (a total of 1012 vulnerabilities).

    Firstly, a chart giving the breakdown of how we first found out about each issue:

    [Chart: Sources of issues]

    • CERT: Issues reported to us from a national cert like CERT/CC or CPNI, generally in advance of public disclosure
    • Individual: Issues reported to Red Hat Product Security directly by a customer or researcher, generally in advance of public disclosure
    • Red Hat: Issues found by Red Hat employees
    • Relationship: Issues reported to us by upstream projects, generally in advance of public disclosure
    • Peer vendors: Issues reported to us by other OS distributions, through relationships or a shared private forum
    • Internet: For issues not disclosed in advance we monitor a number of mailing lists and security web pages of upstream projects
    • CVE: If we’ve not found out about an issue any other way, we can catch it from the list of public assigned CVE names from Mitre

    Next, a breakdown of whether we knew about the issue in advance. For the purposes of our reports, we count knowing the same day of an issue as not knowing in advance, even though we might have had a few hours’ notice.

    [Chart: Known in advance]

    There are a few interesting observations from this data:

    • Red Hat employees find a lot of vulnerabilities. We don’t just sit back and wait for others to find flaws for us to fix, we actively look for issues ourselves and these are found by engineering, quality assurance, as well as our security teams. 17% of all the issues we fixed in the year were found by Red Hat employees. The issues we find are shared back in advance where possible to upstream and other peer vendors (generally via the ‘distros’ shared private forum).
    • Relationships matter. When you are fixing vulnerabilities in third party software, having a relationship with the upstream makes a big difference. But it’s really important to note here that this should never be a one-way street: if an upstream is willing to give Red Hat information about flaws in advance, then we need to be willing to add value to that notification by sanity checking the draft advisory, checking the patches, and feeding back the results from our quality testing. A recent good example of this is the OpenSSL CCS Injection flaw; our relationship with OpenSSL gave us advance notice of the issue, and we found a mistake in the advisory as well as a mistake in the patch which would otherwise have forced OpenSSL to issue a secondary fix after release. Only two of the dozens of companies prenotified about those OpenSSL issues actually added value back to OpenSSL.
    • Red Hat can influence the way this metric looks; without a dedicated security team, a vendor could just watch what another vendor does and copy them, or rely on public feeds such as the list of assigned CVE names from Mitre. We can make the choice to invest in finding more issues and building upstream relationships.

    Posted: 2014-10-08T13:30:48+00:00
  • Bash specially-crafted environment variables code injection attack

    Authored by: Ben Pritchett

    Update 2014-09-30 19:30 UTC

    Questions have arisen around whether Red Hat products are vulnerable to CVE-2014-6277 and CVE-2014-6278.  We have determined that RHSA-2014:1306, RHSA-2014:1311, and RHSA-2014:1312 successfully mitigate the vulnerability and no additional actions need to be taken.

    Update 2014-09-26 12:00 UTC

    We have written a FAQ to address some of the more common questions seen regarding the recent bash issues.

    Frequently Asked Questions about the Shellshock Bash flaws


    Update 2014-09-26 02:20 UTC

    Red Hat has released patched versions of Bash that fix CVE-2014-7169.  Information regarding these updates can be found in the errata.  All customers are strongly encouraged to apply the update as this flaw is being actively attacked in the wild.
    Fedora has also released a patched version of Bash that fixes CVE-2014-7169.  Additional information can be found on Fedora Magazine.

    Update 2014-09-25 16:00 UTC

    Red Hat is aware that the patch for CVE-2014-6271 is incomplete. An attacker can provide specially-crafted environment variables containing arbitrary commands that will be executed on vulnerable systems under certain conditions. The new issue has been assigned CVE-2014-7169.
    We are working on patches in conjunction with the upstream developers as a critical priority. For details on a workaround, please see the knowledgebase article.
    Red Hat advises customers to upgrade to the version of Bash which contains the fix for CVE-2014-6271 and not wait for the patch which fixes CVE-2014-7169. CVE-2014-7169 is a less severe issue and patches for it are being worked on.
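
    On Red Hat Enterprise Linux, the interim fix can be applied with a standard package update, and the installed version then confirmed:

    # Apply the latest bash erratum and confirm the installed version
    yum update bash
    rpm -q bash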

    Bash, or the Bourne-again shell, is a UNIX-like shell and perhaps one of the most commonly installed utilities on any Linux system. Since its creation in 1989, Bash has evolved from a simple terminal-based command interpreter into a tool with many other uses.

    In Linux, environment variables provide a way to influence the behavior of software on the system. They typically consist of a name which has a value assigned to it. The same is true of the Bash shell. It is common for a lot of programs to run the Bash shell in the background. It is often used to provide a shell to a remote user (via ssh or telnet, for example), to provide a parser for CGI scripts (Apache, etc.), or even to provide limited command execution support (git, etc.).

    Coming back to the topic, the vulnerability arises from the fact that you can create environment variables with specially-crafted values before calling the Bash shell. These variables can contain code, which gets executed as soon as the shell is invoked. The name of these crafted variables does not matter, only their contents. As a result, this vulnerability is exposed in many contexts, for example:

    • ForceCommand is used in sshd configs to provide limited command execution capabilities for remote users. This flaw can be used to bypass that and provide arbitrary command execution. Some Git and Subversion deployments use such restricted shells. Regular use of OpenSSH is not affected because users already have shell access.
    • Apache server using mod_cgi or mod_cgid are affected if CGI scripts are either written in Bash, or spawn subshells. Such subshells are implicitly used by system/popen in C, by os.system/os.popen in Python, system/exec in PHP (when run in CGI mode), and open/system in Perl if a shell is used (which depends on the command string).
    • PHP scripts executed with mod_php are not affected even if they spawn subshells.
    • DHCP clients invoke shell scripts to configure the system, with values taken from a potentially malicious server. This would allow arbitrary commands to be run, typically as root, on the DHCP client machine.
    • Various daemons and SUID/privileged programs may execute shell scripts with environment variable values set / influenced by the user, which would allow for arbitrary commands to be run.
    • Any other application which is hooked onto a shell or runs a shell script using Bash as the interpreter. Shell scripts which do not export variables are not vulnerable to this issue, even if they process untrusted content and store it in (unexported) shell variables and open subshells.

    Like “real” programming languages, Bash has functions, though in a somewhat limited implementation, and it is possible to put these Bash functions into environment variables. This flaw is triggered when extra code is added to the end of these function definitions (inside the environment variable). Something like:

    $ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
     vulnerable
     this is a test

    The patch used to fix this flaw ensures that no code is allowed after the end of a Bash function. So if you run the above example with the patched version of Bash, you should get output similar to:

     $ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
     bash: warning: x: ignoring function definition attempt
     bash: error importing function definition for `x'
     this is a test

    We believe this should not affect backward compatibility. It would, of course, affect any scripts which try to use environment variables created in the way described above, but doing so should be considered bad programming practice.

    Red Hat has issued security advisories that fix this issue for Red Hat Enterprise Linux. Fedora has also shipped packages that fix this issue.

    Additional information regarding specific Red Hat products affected by this issue can be found at https://access.redhat.com/site/solutions/1207723

    Information on CentOS can be found at http://lists.centos.org/pipermail/centos/2014-September/146099.html.

    Posted: 2014-09-24T14:00:08+00:00
  • Enterprise Linux 5.10 to 5.11 risk report

    Authored by: Ben Pritchett

    Red Hat Enterprise Linux 5.11 was released this month (September 2014), eleven months since the release of 5.10 in October 2013. So, as usual, let’s use this opportunity to take a look back over the vulnerabilities and security updates made in that time, specifically for Red Hat Enterprise Linux 5 Server.

    Red Hat Enterprise Linux 5 is in Production 3 phase, being over seven years since general availability in March 2007, and will receive security updates until March 31st 2017.

    Errata count

    The chart below illustrates the total number of security updates issued for Red Hat Enterprise Linux 5 Server if you had installed 5.10, up to and including the 5.11 release, broken down by severity. It’s split into two columns, one for the packages you’d get if you did a default install, and the other if you installed every single package.

    Note that during installation there actually isn’t an option to install every package; you’d have to select them all manually, and it’s not a likely scenario. For a given installation, the number of package updates and vulnerabilities that affected your systems will depend on exactly what you selected during installation and which packages you have subsequently installed or removed.

    [Chart: Security errata 5.10 to 5.11, Red Hat Enterprise Linux 5 Server]

    For a default install, from release of 5.10 up to and including 5.11, we shipped 41 advisories to address 129 vulnerabilities. 8 advisories were rated critical, 11 were important, and the remaining 22 were moderate and low.

    For all packages, from release of 5.10 up to and including 5.11, we shipped 82 advisories to address 298 vulnerabilities. 12 advisories were rated critical, 29 were important, and the remaining 41 were moderate and low.

    You can cut down the number of security issues you need to deal with by carefully choosing the right Red Hat Enterprise Linux variant and package set when deploying a new system, and ensuring you install the latest available Update release.

    Critical vulnerabilities

    Vulnerabilities rated critical severity are the ones that can pose the most risk to an organisation. By definition, a critical vulnerability is one that could be exploited remotely and automatically by a worm. However we also stretch that definition to include those flaws that affect web browsers or plug-ins where a user only needs to visit a malicious (or compromised) website in order to be exploited. Most of the critical vulnerabilities we fix fall into that latter category.

    The 12 critical advisories addressed 33 critical vulnerabilities across just three different projects:

    • An update to NSS/NSPR: RHSA-2014:0916 (July 2014). A race condition was found in the way NSS verified certain certificates which could lead to arbitrary code execution with the privileges of the user running that application.
    • Updates to PHP, PHP53: RHSA-2013:1813, RHSA-2013:1814 (December 2013). A flaw in the parsing of X.509 certificates could allow scripts using the affected function to potentially execute arbitrary code. An update to PHP: RHSA-2014:0311 (March 2014). A flaw in the conversion of strings to numbers could allow scripts using the affected function to potentially execute arbitrary code.
    • Updates to Firefox: RHSA-2013:1268 (September 2013), RHSA-2013:1476 (October 2013), RHSA-2013:1812 (December 2013), RHSA-2014:0132 (February 2014), RHSA-2014:0310 (March 2014), RHSA-2014:0448 (April 2014), RHSA-2014:0741 (June 2014), RHSA-2014:0919 (July 2014), where a malicious web site could potentially run arbitrary code as the user running Firefox.

    Updates to correct 32 of the 33 critical vulnerabilities were available via Red Hat Network either the same day or the next calendar day after the issues were public.

    Overall, for Red Hat Enterprise Linux 5 from release until 5.11, 98% of critical vulnerabilities have had an update to address them available from the Red Hat Network either the same day or the next calendar day after the issue was public.

    Other significant vulnerabilities

    Although not in the definition of critical severity, also of interest are other remote flaws and local privilege escalation flaws:

    • A flaw in glibc, CVE-2014-5119, fixed by RHSA-2014:1110 (August 2014). A local user could use this flaw to escalate their privileges. A public exploit is available which targets the polkit application on 32-bit systems, although polkit is not shipped in Red Hat Enterprise Linux 5. It may be possible to create an exploit for Red Hat Enterprise Linux 5 by targeting a different application.
    • Two flaws in squid, CVE-2014-4115 and CVE-2014-3609, fixed by RHSA-2014:1148 (September 2014). A remote attacker could cause Squid to crash.
    • A flaw in procmail, CVE-2014-3618, fixed by RHSA-2014:1172 (September 2014). A remote attacker could send an email with specially crafted headers that, when processed by formail, could cause procmail to crash or, possibly, execute arbitrary code as the user running formail.
    • A flaw in Apache Struts, CVE-2014-0114, fixed by RHSA-2014:0474 (April 2014). A remote attacker could use this flaw to manipulate the ClassLoader used by an application server running Struts 1, potentially leading to arbitrary code execution under some conditions.
    • A flaw where yum-updatesd did not properly perform RPM signature checks, CVE-2014-0022, fixed by RHSA-2014:1004 (January 2014). Where yum-updatesd was configured to automatically install updates, a remote attacker could use this flaw to install a malicious update on the target system using an unsigned RPM or an RPM signed with an untrusted key.
    • A flaw in the kernel floppy driver, CVE-2014-1737, fixed by RHSA-2014:0740 (June 2014). A local user who has write access to /dev/fdX on a system with a floppy drive could use this flaw to escalate their privileges. A public exploit is available for this issue. Note that access to /dev/fdX is by default restricted to members of the floppy group.
    • A flaw in libXfont, CVE-2013-6462, fixed by RHSA-2014:0018 (January 2014). A local user could potentially use this flaw to escalate their privileges to root.
    • A flaw in xorg-x11-server, CVE-2013-6424, fixed by RHSA-2013:1868 (December 2013). An authorized client could potentially use this flaw to escalate their privileges to root.
    • A flaw in the kernel QETH network device driver, CVE-2013-6381, fixed by RHSA-2014:0285 (March 2014). A local, unprivileged user could potentially use this flaw to escalate their privileges. Note this device is only found on s390x architecture systems.

    Note that Red Hat Enterprise Linux 5 was not affected by the OpenSSL issue, CVE-2014-0160, “Heartbleed”.

    Previous update releases

    We generally measure risk in terms of the number of vulnerabilities, but the actual effort in maintaining a Red Hat Enterprise Linux system is more related to the number of advisories we released: a single Firefox advisory may fix ten different issues of critical severity, but takes far less total effort to manage than ten separate advisories each fixing one critical PHP vulnerability.

    To compare these statistics with previous update releases we need to take into account that the time between each update release is different. So looking at a default installation and calculating the number of advisories per month gives the following chart:

    [Chart: Security errata per month, to 5.10]

    This data is interesting to get a feel for the risk of running Enterprise Linux 5 Server, but isn’t really useful for comparisons with other major versions, distributions, or operating systems — for example, a default install of Red Hat Enterprise Linux 4AS did not include Firefox, but 5 Server does. You can use our public security measurement data and tools, and run your own custom metrics for any given Red Hat product, package set, time scales, and severity range of interest.

    See also:
    5.10, 5.9, 5.8, 5.7, 5.6, 5.5, 5.4, 5.3, 5.2, and 5.1 risk reports.

    Posted: 2014-09-18T13:30:49+00:00