Latest Posts

  • Join us at Red Hat Summit 2017

    Authored by: Christopher Robinson

    As you’ve probably heard, this year’s Red Hat Summit is in Boston May 2-4. Product Security is looking forward to taking over multiple sessions and activities over the course of those 3 days, and we wanted to give you a sneak peek of what we have planned.


    There will be A LOT of Product Security sessions including:

    Tuesday, May 2

    Time Session Title Room
    10:15-11:00AM L102598 - Practical OpenSCAP—Security standard compliance and reporting Room 252B
    10:15-11:00AM S102106 - Red Hat security roadmap Room 156AB
    10:15-11:00AM S104999 - A greybeard's worst nightmare— How Kubernetes and Docker containers are re-defining the Linux OS Room 104AB
    11:30AM-12:15PM B104934 - The age of uncertainty: how can we make security better Room 158
    11:30AM-12:15PM S105019 - Red Hat container technology strategy Room 153A
    12:45-1:15PM Security + Red Hat Insights Expert Exchange at Participation Square, Partner Pavilion
    3:30-4:15PM P102235 - The Furry and the Sound: A mock disaster security vulnerability fable Room 156AB
    3:30-4:15PM S102603 - OpenShift Roadmap: What's New & What's Next! Room 153B
    4:30-5:15PM B103109 - How to survive the Internet of Things: Security Room 155
    4:30-5:15PM S104047 - The roadmap for a security-enhanced Red Hat OpenStack Platform Room 151A

    Wednesday, May 3

    Time Session Title Room
    10:15-11:00AM S105084 - Security-Enhanced Linux for mere mortals Room 104AB
    11:30AM-12:15PM S103850 - Ten layers of container security Room 156C
    3:30-4:15PM LT122001 - Lightning Talks: Infrastructure security Room 101
    3:30-5:30PM L99901 - A practical introduction to container security Room 251
    3:40-4:00PM Container Catalog Quick Talk Expert Exchange at Participation Square, Partner Pavilion
    4:30-5:15PM LT122006 - Red Hat Satellite lightning talks Room 101
    4:30-5:15PM S102068 - DevSecOps the open source way Room 104C
    4:30-5:15PM S104105 - Securing your container supply chain Room 153B
    4:30-5:15PM S104897 - Easily secure your front- and back-end applications with KeyCloak Room 153A

    Thursday, May 4

    Time Session Title Room
    9:50-10:10AM Risk Report Quick Talk Expert Exchange at Participation Square, Partner Pavilion
    10:15-11:00AM S111009 - Partner session: Microsoft Room 105
    10:15-11:00AM S103174 - Automating security compliance for physical, virtual, cloud, and container environments with Red Hat CloudForms, Red Hat Satellite, and Ansible Tower by Red Hat Room 157C
    10:15AM-12:15PM L100049 - Practical SELinux: Writing custom application policy Room 252B
    11:30AM-12:15PM S102840 - The security of cryptography Room 156C
    11:30AM-12:15PM B103948 - DirtyCow: A game changer? Room 155
    3:30-4:15PM B103901 - Perfect Security: A dangerous myth? System security for open source Room 155
    3:30-5:30PM L105190 - Proactive security compliance automation with CloudForms, Satellite, OpenSCAP, Insights, and Ansible Tower Room 254A
    4:30-5:15PM S104110 - An overview and roadmap of Red Hat Development Suite Room 102A


    We are pleased to announce we will be hosting a series of games in Participation Square at Summit that will cover all things security including recent vulnerabilities, product enhancements and more. Come test your knowledge, and win some awesome prizes in the process!

    Our games will be live at the following times, so mark your calendars!

    Tuesday, May 2

    Time Game Room
    11:10-11:50AM Product Security Game Show Expert Exchange at Participation Square, Partner Pavilion
    1:05-1:45PM Product Security Game Show Expert Exchange at Participation Square, Partner Pavilion
    3:00-3:30PM Flawed & Branded Card Game Expert Exchange at Participation Square, Partner Pavilion
    5:00-5:30PM Product Security Game Show Expert Exchange at Participation Square, Partner Pavilion
    6:30-7:00PM Flawed & Branded Card Game Expert Exchange at Participation Square, Partner Pavilion

    Wednesday, May 3

    Time Game Room
    1:00-1:45PM Product Security Game Show Expert Exchange at Participation Square, Partner Pavilion
    5:00-5:45PM Product Security Game Show Expert Exchange at Participation Square, Partner Pavilion
    5:50-6:20PM Flawed & Branded Card Game Expert Exchange at Participation Square, Partner Pavilion

    Thursday, May 4

    Time Game Room
    11:35AM-12:05PM Flawed & Branded Card Game Expert Exchange at Participation Square, Partner Pavilion
    12:10-1:00PM Product Security Game Show Expert Exchange at Participation Square, Partner Pavilion

    We are looking forward to connecting with you in Boston! Hope to see you there.

    Posted: 2017-04-19T13:30:00+00:00
  • Determining your risk

    Authored by: Stephen Herr

    Red Hat continues to be a leader in transparency regarding security problems that are discovered in our software and the steps we take to fix them. We publish data about vulnerabilities on our security metrics page and recently launched an API Service that allows easier (and searchable) access to the same data. This data is important to administrators for understanding what known security problems exist and determining what they should do about it.

    Pitfalls of comparing version numbers

    Comparing version numbers against Common Vulnerabilities and Exposures (CVE) advisories is a common mistake that people, and third-party security scanners, make when trying to determine whether systems are vulnerable. To understand the real risk a flaw poses to a system, you must know what the CVE affects, how the vulnerability was fixed, what you have installed, and where it came from. The data available from our website can get you started, but an understanding of your system is needed to complete the picture.

    Suppose someone installs an Apache httpd RPM from a non-Red Hat source and compares its version with data from one of Red Hat’s CVEs to see if it is affected. Suppose the version of the installed RPM is lower than the one that Red Hat released to fix the CVE. What does that tell you about the vulnerability? Absolutely nothing. The third-party httpd RPM may contain an older build of httpd that is unaffected by the CVE (perhaps the flaw was in code introduced at a later date), an older build that is affected by the CVE, a newer build (there’s no guarantee that the third party used the same versioning scheme as either Apache or Red Hat) that is still affected by the CVE, or a newer build that is not affected by the CVE. Or it may contain an arbitrarily different patch level of httpd, containing some patches that Red Hat’s build does not have but excluding others that Red Hat does include. Simply put, no useful information can be gathered from such a simple comparison.

    What if a server only has Red Hat software installed, is there something we can say then? Perhaps, but it’s not as simple as most people assume.

    First, just because the newest version of a package in a given channel is affected by a CVE - and therefore we release an update to fix it - does not mean that older versions of that package are necessarily affected. Going back and testing for the presence of the vulnerability in all previous versions is not something that is typically done unless there is a good reason. So if you are running an older version and a new package is released that fixes a CVE, that is no guarantee that you are vulnerable to that CVE. In addition, some CVEs require a combination of packages before the system is vulnerable, e.g., kernel-1234 and openssh-5678. Security scanners usually fail to identify such cases and can generate false-positive alerts if only one of the packages is installed or has a lower version.
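    To make the combination case concrete, here is a rough sketch in Python. The package names and versions are the hypothetical ones from the text, and the version check is a naive string compare purely for illustration, not a real scanner:

```python
# Hypothetical sketch: a CVE that applies only when a combination of
# packages is installed. Package names/versions are the invented ones
# from the text, and the version check is a naive string compare,
# purely for illustration.
def affected_by_combo_cve(installed: dict) -> bool:
    """Vulnerable only if BOTH packages are present and below the fix."""
    fixes = {"kernel": "1234", "openssh": "5678"}
    return all(pkg in installed and installed[pkg] < fixed
               for pkg, fixed in fixes.items())

# A scanner that flags on either package alone reports a false positive:
print(affected_by_combo_cve({"kernel": "1200"}))                      # False
print(affected_by_combo_cve({"kernel": "1200", "openssh": "5000"}))   # True
```

    A scanner that alerts whenever it sees the lone "old" kernel would report the first system as vulnerable, even though the second required package is absent.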

    Second, version comparison is completely untenable when you’re comparing packages from different channels or repositories. A simple example would be comparing a kernel from RHEL 6 with RHEL 7. The kernel you have installed on your RHEL 6 machines is kernel-2.6.32-264.el6. What if Red Hat releases kernel-3.10.0-513.el7 to fix a RHEL 7 CVE? A naive version comparison would lead you to believe that the RHEL 6 kernel is vulnerable, but that may or may not be true. The RHEL 6 kernel may have been patched to fix this vulnerability in 2.6.32-263. Or it may never have been affected in the first place.

    In reality no one would compare RHEL 6 packages with RHEL 7 versions. But the same holds true inside a single RHEL version. Red Hat releases software in many different streams: “regular” RHEL and Extended Update Support to name the two most common. You can know for example that kernel-2.6.32-264.el6 is “newer” than kernel-2.6.32-263.el6, but what about kernel-2.6.32-263.4.el6_4 (a hypothetical EUS kernel)? Again a naive version comparison would lead you to believe that the 264.el6 kernel was “newer”, but again it’s impossible to say. The 263.4.el6_4 build may have more patches, fewer, or an arbitrarily different set of patches from the 264.el6 build (technically possible, but highly unlikely in practice).

    And the same applies not just to kernels in the major Red Hat product lines, but to all packages in all different channels. Consider a bundled library that is part of an add-on product. When presented with a CVE in a bundled library, the application developer generally has two options: pull into the application the version of the library that contains the fix (which might have “moved on” and may contain many other changes, including new features or API changes, requiring at least a large revalidation effort), or backport the fix to the old version and branch the version numbers. As soon as you branch the version numbers you have the same problem you do when comparing RHEL with EUS versions. Version comparison is simply not useful once you start backporting fixes to old versions of software in different channels (which Red Hat does constantly for different products). You can only compare versions between software that was intended to end up on the same machine; inside a base channel and child channel combination, for example.

    Third, even if you account for the above there are still other considerations. CPU architecture can matter. If support for a given architecture was at a developer-preview or beta level when an update was made, then it was probably built from a different source RPM than the “regular” architectures and may have an arbitrarily different version number. Or you might have to consider the RPM’s Epoch. Epoch, while never included in the filename and sometimes not displayed by programs under default settings, is nevertheless the highest-priority field (if it exists) in determining which RPM is the “newer” version. Are you sure that the data source and security scanner you are using consider Epoch? What about the RPM Obsoletes tag?
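    As a rough illustration of why Epoch matters, here is a deliberately simplified sketch of RPM-style EVR (Epoch, Version, Release) comparison. The real rpmvercmp algorithm has more rules (tilde handling, exact segment tie-breaking, and so on), so treat this as an approximation only:

```python
import re

def rpm_vercmp(a: str, b: str) -> int:
    """Very simplified rpmvercmp: split into digit/alpha segments and
    compare digits numerically, letters lexically, left to right."""
    seg_a = re.findall(r"\d+|[a-zA-Z]+", a)
    seg_b = re.findall(r"\d+|[a-zA-Z]+", b)
    for x, y in zip(seg_a, seg_b):
        if x.isdigit() and y.isdigit():
            x, y = int(x), int(y)            # numeric segments compare as numbers
        elif x.isdigit() != y.isdigit():
            return 1 if x.isdigit() else -1  # rpm: numbers sort above letters
        if x != y:
            return 1 if x > y else -1
    return (len(seg_a) > len(seg_b)) - (len(seg_a) < len(seg_b))

def evr_cmp(a: tuple, b: tuple) -> int:
    """Compare (epoch, version, release) tuples; Epoch always wins first."""
    for x, y in zip(a, b):
        c = (x > y) - (x < y) if isinstance(x, int) else rpm_vercmp(x, y)
        if c:
            return c
    return 0

# Epoch trumps an apparently "newer" version string:
print(evr_cmp((1, "1.0", "1.el6"), (0, "9.9", "9.el6")))                # 1
# The naive answer for the hypothetical EUS example from the text:
print(evr_cmp((0, "2.6.32", "264.el6"), (0, "2.6.32", "263.4.el6_4")))  # 1
```

    The second comparison shows exactly the trap described above: 264.el6 sorts "newer" than the hypothetical EUS build 263.4.el6_4, even though version order says nothing about which patches each build actually contains.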

    It’s a complicated problem. Correctly detecting vulnerability to a particular CVE is difficult or perhaps impossible given merely RPM version information. Many third-party scanners are unaware of the complexity of the problem, leading to false positives or, worse yet, false negatives.

    Getting the information needed to make a decision

    The three data resources that Red Hat provides to help explain vulnerabilities and determine if you are vulnerable are the Common Vulnerability Reporting Framework (CVRF) data, the Common Vulnerabilities and Exposures (CVE) feed, and the Open Vulnerability and Assessment Language (OVAL) data.

    The CVRF data is a representation of the update content itself and is not designed for scanning purposes. CVRF data does help explain the impact of the erratum and how it affects your system.

    Likewise the CVE feed helps explain what the vulnerability affects and why you should care. It represents the vulnerability itself but not necessarily how to detect it on your system.

    OVAL data is machine-readable data that combines much of what is found in the CVRF and CVE advisories. It provides scanners with the additional information (beyond just version numbers) they need to be able to properly detect vulnerability to a CVE. OpenSCAP is one such OVAL-compliant scanner, and the only scanner that Red Hat currently supports. Currently Red Hat only publishes OVAL data for RHEL, but support for additional products is coming in the future.

    And in conclusion...

    There are several ways to answer the “am I vulnerable” question. For managing individual systems you can use yum-plugin-security or OpenSCAP. To simultaneously manage multiple systems there are additional subscription options available: Red Hat Insights or Satellite. Insights can detect and warn about a subset of “high priority” issues (and also non-CVE related things like performance-related configurations). Satellite is intended to be a full-service management platform for handling the configuration, provisioning, entitlement, and reporting (including which CVEs are applicable) of Red Hat systems and software.

    The security metrics data we provide is very useful for determining “What is the problem”, “How important is it”, and “Should I care”, but to answer the question “Which of my systems are affected” you need something more; something that knows the details of your systems’ channel subscriptions and which version (if any) of the updated packages are applicable to them.

    Posted: 2017-04-12T13:30:00+00:00
  • Changes coming to TLS: Part Two

    Authored by: Huzaifa Sidhpurwala

    In the first part of this two-part blog we covered certain performance improving features of TLS 1.3, namely 1-RTT handshakes and 0-RTT session resumption. In this part we shall discuss some security and privacy improvements.

    Remove obsolete and insecure cryptographic primitives

    Remove RSA Handshakes

    When RSA is used for key establishment there is no forward secrecy. This means that an adversary can record the encrypted conversation between the client and the server and, if it is later able to break the RSA public key (which could take years, or could happen because the attacker got hold of the private key), decrypt all of the recorded conversations. In some cases (such as when SSLv2 is enabled), RSA key establishment is also vulnerable to DROWN. You can still use RSA certificates with TLS 1.3, but all key establishment has to be done with DH (either finite field or elliptic curve). The primary reason RSA key exchange was removed was Bleichenbacher-style attacks; getting forward secrecy is a welcome bonus.

    Remove weak primitives

    TLS 1.3 also removes RC4, SHA-1, and MD5 (vulnerable to SLOTH), which are all considered weak or broken.

    No CBC mode

    Security weaknesses in CBC MAC-then-encrypt mode have long been established and have been the cause of various named flaws such as Lucky 13 and POODLE.

    No ChangeCipherSpec

    This was removed because it is no longer necessary to mark the end of the handshake; the first two exchanged messages do that. As an aside, ChangeCipherSpec caused the famous CCS injection flaw in OpenSSL.

    No compression negotiation

    TLS 1.3 removes the option of negotiating compression, which was vulnerable to CRIME.

    Re-key mechanism

    Session renegotiation is replaced with a simple re-key mechanism.

    Removing PKCS #1 v1.5 and some ECDHE groups

    PKCS #1 v1.5 encryption in the RSA key exchange is removed because it has multiple flaws. The PKCS #1 v1.5 signature algorithm, which isn't broken, is removed mostly as a precaution and to base the protocol on new cryptographic primitives that were designed from the ground up to follow good practice. A lot of weak and non-standard ECDHE groups were removed, along with the custom FFDHE groups, now that there is finally a mechanism for clients to advertise supported key sizes to the server.

    New cryptographic features and primitives

    Anti-downgrade feature

    Implementations which support TLS 1.3 will also continue supporting TLS 1.2 for a long time to ensure backward compatibility with older clients. This, however, can lead to downgrade attacks.

    A man-in-the-middle (MITM) attacker could modify the ClientHello message to trick the TLS server into believing that the client only supports TLS 1.2 or earlier, and then use any flaws discovered in TLS 1.2 to complete the MITM attack (reading or modifying messages between the client and the server). TLS 1.3, however, offers an anti-downgrade feature, which is an enhancement of the previous downgrade-detection mechanism in the TLS 1.2 FINISHED messages.

    When a TLS 1.3 server gets a request from the client to downgrade the following happens:

    1. If negotiating TLS 1.2, TLS 1.3 servers MUST set the last eight bytes of their Random value to the bytes: 44 4F 57 4E 47 52 44 01

    2. If negotiating TLS 1.1, TLS 1.3 servers MUST, and TLS 1.2 servers SHOULD, set the last eight bytes of their Random value to the bytes: 44 4F 57 4E 47 52 44 00

    TLS 1.3 clients receiving a TLS 1.2 or below ServerHello MUST check that the last eight bytes are not equal to either of these values. TLS 1.2 clients SHOULD also check that the last eight bytes are not equal to the second value if the ServerHello indicates TLS 1.1 or below. If a match is found, the client MUST abort the handshake with an “illegal_parameter” alert. This mechanism provides limited protection against downgrade attacks over and above that provided by the Finished exchange. Because the ServerKeyExchange, a message present in TLS 1.2 and below, includes a signature over both random values, it is not possible for an active attacker to modify the random values without detection as long as ephemeral ciphers are used.
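    The client-side check described above can be sketched as follows. The sentinel bytes are the ones quoted in the text (they spell "DOWNGRD" plus a final 01 or 00); the version tuples use the wire encoding, where TLS 1.2 is (3, 3) and TLS 1.3 is (3, 4). This is an illustrative fragment, not a real TLS implementation:

```python
# Downgrade sentinels (last eight bytes of ServerHello.random):
DOWNGRADE_TLS12 = bytes.fromhex("444f574e47524401")  # "DOWNGRD" + 0x01
DOWNGRADE_TLS11 = bytes.fromhex("444f574e47524400")  # "DOWNGRD" + 0x00

def check_downgrade(server_random: bytes, negotiated: tuple) -> None:
    """A TLS 1.3 client's check: abort if a pre-1.3 ServerHello carries
    a downgrade sentinel. (3, 3) is TLS 1.2 on the wire; (3, 4) is 1.3."""
    if negotiated <= (3, 3) and \
            server_random[-8:] in (DOWNGRADE_TLS12, DOWNGRADE_TLS11):
        raise ValueError("illegal_parameter: downgrade detected")

check_downgrade(bytes(32), (3, 3))                    # clean random: no alert
check_downgrade(bytes(24) + DOWNGRADE_TLS12, (3, 4))  # genuine 1.3: no alert
```

    A call like check_downgrade(bytes(24) + DOWNGRADE_TLS12, (3, 3)) raises, modeling the mandatory "illegal_parameter" alert a TLS 1.3 client must send when it sees the sentinel in a downgraded handshake.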

    New improved session resumption features

    Session resumption using tickets and identifiers has been obsoleted by TLS 1.3 and replaced by PSK (pre-shared key) mode. A PSK is established on a previous connection after the handshake is completed, and can then be presented by the client on the next visit. Forward secrecy can be maintained by sensibly limiting the lifetime of PSK identities. Clients and servers may also choose an (EC)DHE cipher suite for PSK handshakes to provide forward secrecy for every connection, not just the whole session.

    New ECC curves

    TLS 1.3 includes two additional ECC curves: Curve25519 and Curve448. These new curves can easily be implemented in constant time on common hardware (as opposed to the other elliptic curves).

    Privacy of certificates during handshake

    TLS 1.3 has provision for what it calls "Encrypted Extensions". The server sends the EncryptedExtensions message immediately after the ServerHello message. This is the first message that is encrypted under keys derived from the "server_handshake_traffic_secret". The rest of the handshake after this is encrypted, including the transmission of certificates (both client and server). This offers protection of extension data from eavesdropping attackers.

    Inclusion of ChaCha20/Poly1305

    TLS 1.3 only allows AEAD cipher suites, which means AES-GCM/AES-CCM and ChaCha20-Poly1305 are the only options available. ChaCha20-Poly1305 is intended to improve performance and power consumption on devices that lack hardware acceleration for AES (note: ChaCha20 is not new in TLS 1.3; it is already supported and deployed in TLS 1.2).


    OpenSSL is currently working on including TLS 1.3 support. It seems likely that OpenSSL 1.1.1 will include this.

    NSS 3.29 contains support for TLS 1.3, which is enabled by default. Note that although NSS has support for draft versions of TLS 1.3, you can't deploy the current NSS and expect it to interoperate with implementations of the real, finished TLS 1.3, as it doesn't use the same version ID that the finished version will.

    GnuTLS is working on TLS 1.3 support.

    Certain fuzzers, like the well-known tlsfuzzer, are going to include support for fuzzing the TLS 1.3 protocol soon.

    Additionally, a regularly updated list of TLS 1.3 implementations is available online.

    Posted: 2017-04-05T13:30:00+00:00
  • Changes coming to TLS: Part One

    Authored by: Huzaifa Sidhpurwala

    Transport Layer Security version 1.3 (TLS 1.3) is the latest version of the SSL/TLS protocol, currently under development by the IETF. It offers several security and performance improvements compared to previous versions. While there are several technical resources that discuss the finer aspects of this new protocol, this two-part article is a quick reference to the new features and major changes in the TLS protocol.

    Faster Handshakes

    In TLS, before data can be encrypted, a secure channel needs to be created. This is achieved via the handshake protocol, in which a cipher suite is negotiated between the client and the server and key share materials are exchanged.

    The handshake is initiated by the TLS client utilizing a ClientHello message sent to the server. This message contains, among other things, a list of cipher suites which the client is capable of supporting. The server replies to this message in its ServerHello message by picking one of the cipher suites from the client list and, along with it, the server also sends its key share and the site's certificate to the client.

    The client receives all of the above, generates its own key share, combines it with the server key share and generates bulk encryption keys for the session. The client then sends the server its key share and sends a FINISHED message which contains a signed hash of all the data which was previously exchanged. The server does the same thing by sending its own FINISHED message.

    [Figure: TLS handshake]

    This ends the handshake, resulting in a cipher suite and a bulk encryption key being negotiated between the client and the server, and takes two full round trips of data exchange.1

    TLS 1.3 aims to make this faster by reducing the handshake to just one round trip (1-RTT). In TLS 1.3 the ClientHello not only contains a list of supported ciphers, but also makes a guess as to which key agreement protocol the server is likely to choose and sends a key share for that particular protocol.

    [Figure: TLS 1.3 handshake]

    As soon as the server selects the key agreement protocol, it sends its own key share and, at the same time, generates the bulk encryption key (since it already has the client's key share). In doing so, the peers can switch to encrypted messages one whole round trip in advance. The ServerHello also contains the FINISHED message. The client receives the server key share, generates the bulk encryption keys, sends its FINISHED message, and is immediately ready to send encrypted data. This results in faster handshakes and better performance.2
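    The difference in message flights described above can be summarized as data. The message names follow the TLS RFCs, but the flight grouping and round-trip arithmetic are simplified (session-ticket messages are omitted):

```python
# Handshake message flights, grouped by sender turn (simplified).
TLS12_FLIGHTS = [
    ["ClientHello"],
    ["ServerHello", "Certificate", "ServerKeyExchange", "ServerHelloDone"],
    ["ClientKeyExchange", "ChangeCipherSpec", "Finished"],
    ["ChangeCipherSpec", "Finished"],
]
TLS13_FLIGHTS = [
    ["ClientHello + key_share"],
    ["ServerHello + key_share", "EncryptedExtensions", "Certificate",
     "CertificateVerify", "Finished"],
    ["Finished"],
]

# A client/server flight pair is roughly one round trip, so the client
# can first send application data after:
print(len(TLS12_FLIGHTS) // 2)  # 2 round trips
print(len(TLS13_FLIGHTS) // 2)  # 1 round trip
```

    The key structural change is visible in the first flight: the TLS 1.3 ClientHello already carries a key share, which is what lets the server answer with everything needed to finish in a single round trip.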

    Faster Session Resumptions

    The TLS protocol has a provision for session resumption. The general idea is to avoid a full handshake by storing the secret information of previous sessions and reusing it when connecting to a host the next time. This drastically reduces latency and CPU usage. However, session resumption is nowadays often frowned upon because it can easily compromise Perfect Forward Secrecy (PFS).

    Session resumption can be achieved in one of the two following ways:

    1. Session ID:
      In a full handshake the server sends a Session ID as part of the ServerHello message. On a subsequent connection the client can use this session ID and pass it to the server when connecting. Because both server and client have saved the last session’s “secret state” under the session ID they can simply resume the TLS session where they left off.

    2. Session Tickets:
      The second mechanism for resuming a TLS session is Session Tickets. This extension transmits the server’s secret state to the client, encrypted with a key only known to the server. That ticket key protects the TLS connection now and in the future and is the weak spot an attacker will target. The client will store its secret information for a TLS session along with the ticket received from the server. By transmitting that ticket back to the server at the beginning of the next TLS connection, both parties can resume their previous session, given that the server can still access the secret key that was used to encrypt it.

    In both of the above methods session resumption is done using 1-RTT (one round trip). TLS 1.3, however, achieves it using 0-RTT. When a TLS 1.3 client connects to a server, they agree on a session resumption key. If pre-shared keys (PSK) are used, the server provides a Session Ticket, which can be encrypted using the PSK if required.

    During Session resumption, the client sends the Session Ticket in the ClientHello and then immediately, without waiting for the RTT to complete, sends the encrypted data. The server figures out the PSK from the session ticket and uses it to decrypt the data and resume the connection. The client may also send a key share, so that the server and the client can switch to a fresh bulk encryption key for the rest of the session.

    [Figure: TLS 1.3 handshake]

    There are two possible problems with the above implementation though:

    1. No PFS when using Session Resumption:
      Since the PSK is not agreed upon using a fresh Diffie-Hellman exchange, it does not provide forward secrecy against a compromise of the session key. This problem can be solved by rotating session keys regularly.

    2. This can easily lead to replay attacks:
      Since the 0-RTT data does not have any state information associated with it, an attacker could easily resend the 0-RTT message, and the server will decrypt the associated data and act on the data it received. One way to avoid this is to make sure that the server does not perform any "important action" (like transferring secret information or making a financial transaction) based on the 0-RTT data alone. For such operations servers can impose 1-RTT session resumptions.
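    The replay problem in point 2 can be illustrated with a toy model. This is NOT real TLS: the XOR "cipher", key derivation, and messages are all invented purely to show that a server holding no per-connection state will happily act twice on the same replayed bytes:

```python
import hashlib

def toy_xor(psk: bytes, data: bytes) -> bytes:
    """Toy XOR 'cipher' keyed off the PSK; encrypt and decrypt are the
    same operation. (Messages longer than 32 bytes would be truncated;
    fine for this demo.)"""
    pad = hashlib.sha256(psk).digest()
    return bytes(c ^ p for c, p in zip(data, pad))

actions = []

def stateless_server(psk: bytes, early_data: bytes) -> None:
    """Decrypts 0-RTT data and acts on it, keeping no replay state."""
    actions.append(toy_xor(psk, early_data))

psk = b"resumption-secret"
wire = toy_xor(psk, b"transfer $100")  # the client's 0-RTT flight
stateless_server(psk, wire)            # legitimate delivery
stateless_server(psk, wire)            # attacker replays the same bytes
print(actions)  # the "important action" happens twice
```

    Nothing in the replayed bytes ties them to a particular connection, which is why the text recommends that servers require a full 1-RTT resumption before performing any important action.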

    DTLS has an additional concern: during a re-handshake it is not known whether the peer is alive at all. Data sent along with the handshake message may have ended up in a black hole, going nowhere.

    Part 2 of the blog discusses the various cryptographic changes found in TLS 1.3 and their implications.

    1. TLS 1.2 has an extension called "False Start", which can reduce the initial handshake to 1-RTT; it bundles the client application data with its FINISHED message. 

    2. The round trips mentioned are TLS protocol round trips, not actual network round trips. In reality you have additional round trips due to TCP establishment. Most implementations today provide a way to use TCP Fast Open to reduce that additional round trip. 

    Posted: 2017-03-29T13:30:00+00:00
  • Customer security awareness: alerting you to vulnerabilities that are of real risk

    Authored by: Christopher Robinson

    Every day we are bombarded with information. Something is always happening somewhere to someone, and unfortunately it's rarely good. Looking at this through the lens of information security, NOT getting the right details at the appropriate time could be the difference between stopping and blocking an attack and being the next sad, tragic headline...

    Red Hat Product Security oversees vulnerability remediation for all of Red Hat's products. Our dual mission of setting guidelines and standards for how our products are composed and delivered is balanced with taking in, assessing, and responding to information about security vulnerabilities that might impact those products. Once a flaw has been identified, part of our role is to understand its real impact and to provide calm, clear direction for getting the issues that matter remediated. One big challenge is distinguishing something that is bad and could cause harm from something that is completely terrible and WILL cause major havoc out “in the wild." For the layperson, separating the facts from the hype can be extremely difficult and time-consuming, making it hard to act appropriately.

    Recent trends in the security field haven't been helping. It seems as if every month there is a new bug that has a cute name, a logo, and a webstore selling stickers and stuffed animals. While awareness of a problem is an excellent goal, oftentimes the flashing blinky text and images obscure how bad (or not) an issue is.

    Thankfully, for over 15 years Red Hat Product Security has been providing calm, accurate, timely advice around these types of issues. We're able to separate the hope from the hype, so to speak. To that end, with the meteoric rise of “branded” flaws not stopping in the foreseeable future, Red Hat Product Security developed a special process to help inform our valued subscribers and partners when these situations arise. We call it our Customer Security Awareness (CSAw) process:

    We've augmented our processes to include enhanced oversight and handling of these very special issues. Some issues pose such grave risk that the need for quick action and good advice merits extra-special handling. Other times, when we recognize that a security bug has the potential to have its own PR agent, we take the right steps so that customers proactively get the appropriate level of information, allowing them to decide how quickly they need to react based on their own risk appetites. We ensure we provide special tools and extra alerts so that when these things really DO matter, the decision makers have the right data to move forward.

    For more details about the process, please check out the Red Hat Product Security Center or reach out to us via secalert@redhat.com or our Twitter Account @RedHatSecurity.

    Posted: 2017-03-22T13:30:00+00:00
  • Red Hat Product Security Risk Report 2016

    Authored by: Red Hat Product...

    At Red Hat, our dedicated Product Security team analyzes threats and vulnerabilities against all our products and provides relevant advice and updates through the Red Hat Customer Portal. Customers can rely on this expertise to help them quickly address the issues that can cause high risks and avoid wasting time or effort on those that don’t.

    Red Hat delivers certified, signed, supported versions of the open source solutions that enable cost-effective innovation for the enterprise. This is the Red Hat value chain.

    This report explores the state of security risk for Red HatⓇ products for calendar year 2016. We look at key metrics, specific vulnerabilities, and the most common ways that security issues affected users of Red Hat products.

    Among our findings:

    • Looking only at issues affecting base Red Hat Enterprise LinuxⓇ releases, we released 38 Critical security advisories addressing 50 Critical vulnerabilities. Of those issues, 100% had fixes the same or next day after the issue was public.

    • During that same timeframe, across the whole Red Hat portfolio, 76% of Critical issues had updates to address them the same or next day after the issue was public with 98% addressed within a week of the issue being public.

    • A catchy name or a flashy headline for a vulnerability doesn't tell much about its risk. The Red Hat Product Security Team helps customers determine a vulnerability’s actual impact. Most 2016 issues that mattered were not branded.


    Across all Red Hat products, and for all issue severities, we fixed more than 1,300 vulnerabilities1 by releasing more than 600 security advisories in 2016. Critical2 vulnerabilities pose the most risk to an organization. Most Critical vulnerabilities occur in browsers or browser components, so Red Hat Enterprise Linux server installations will generally be affected by far fewer Critical vulnerabilities. One way customers can reduce risk when using our modular products is to make sure they install the right variant and review the package set to remove packages they don't need.

    The Red Hat value chain

    Red Hat products are based on open source software. Some Red Hat products contain several thousand individual packages, each of which is based on separate, third-party software from upstream projects.

    Red Hat engineers play a part in many upstream components, but handling and managing vulnerabilities across thousands of third-party components is a significant task. Red Hat has a dedicated Product Security team that monitors issues affecting Red Hat products and works closely with upstream projects.

    For more than 15 years, Red Hat Product Security has been a recognized leader in fixing security flaws across the Linux stack. In 2016, we investigated more than 2,600 vulnerabilities that potentially affected parts of our products, leading to fixes for 1,346 vulnerabilities. That’s a 30% increase over 2015, when the team investigated 2,000 vulnerabilities.

    Vulnerabilities known to Red Hat in advance of being made public are known as “under embargo.” Unlike companies shipping proprietary software, Red Hat is not in sole control of the date each flaw is made public. This is a good thing, as it leads to much shorter times between when a flaw is first reported and when it becomes public. Shorter embargo periods make flaws much less valuable to attackers. They know a flaw in open source is likely to get fixed quickly, shortening their window of opportunity to exploit it.

    For 2016, across all products, we knew about 394 (29%) of the vulnerabilities we addressed before making them public, down slightly from 32% in 2015. We expect this figure to vary from year to year. Across all products and vulnerabilities of all severities known to us in advance, the median embargo was seven days. This is much lower than 2015, when the median embargo was 13 days.

    Figure 2: Red Hat Product Security monitors multiple sources to identify vulnerabilities. The value of your Red Hat subscription at work.

    The full report is available for download.

    1. Red Hat Product Security assigns a Common Vulnerabilities and Exposures (CVE) name to every security issue we fix. In this report, we equate vulnerabilities to CVEs. 

    2. Red Hat rates vulnerabilities on a four-point scale that shows at a glance how much concern Red Hat has about each security issue. The scale rates vulnerabilities as Low, Moderate, Important, or Critical. By definition, a Critical vulnerability is one that could be exploited remotely and automatically by a worm. However we, like other vendors, also stretch the definition to include those flaws that affect web browsers or plug-ins where a user only needs to visit a malicious (or compromised) website to be exploited. 

    Posted: 2017-03-07T14:39:02+00:00
  • How Threat Modeling Helps Discover Security Vulnerabilities

    Authored by: Hooman Broujerdi

    Application threat modeling is an approach to secure software development: it is a preventative measure for dealing with security issues, and it reduces the time and effort required to deal with vulnerabilities that may arise later throughout the application's production life cycle. Unfortunately, security often seems to have no place in the development life cycle, even though CVE bug tracking databases and hacking incident reports prove that it ought to. Some of the factors that contribute to this trend of insecure software development are:

    a) Iron Triangle Constraint: the relationship between time, resources, and budget. From a management standpoint there's an absolute need for the resources (people) to have appropriate skills to be able to implement the software business problem. Unfortunately, resources are not always available and are an expensive factor to consider. Additionally, the time required to produce quality software that solves the business problem is always an intensive challenge, not to mention that constraints in the budget seem to have always been a rigid requirement for any development team.

    b) Security as an Afterthought: taking security for granted has an adverse effect on producing a successful piece of software. Software engineers and managers tend to focus on delivering the actual business requirements and closing the gap between when the business idea is born and when the software has actually hit the market. This creates a mindset that security does not add any business value and it can always be added on rather than built into the software.

    c) Security vs Usability: another reason that seems to be a showstopper in a secure software delivery process is the idea that security makes the software usability more complex and less intuitive (e.g. security configuration is often too complicated to manage). It is absolutely true that the incorporation of security comes with a cost. Psychological Acceptability should be recognized as a factor, but not to the extent of ruling out security as part of a software development life cycle.

    With a and b being the main factors for not adopting security into the Software Development Life Cycle (SDLC), development without bringing security in the early stages turns out to have disastrous consequences. Many vulnerabilities go undetected allowing hackers to penetrate the applications and cause damage and, in the end, harm the reputations of the companies using the software as well as those developing it.

    What is Threat Modeling?

    Threat modeling is a systematic approach for developing resilient software. It identifies the security objective of the software, threats to it, and vulnerabilities in the application being developed. It will also provide insight into an attacker's perspective by looking into some of the entry and exit points that attackers are looking for in order to exploit the software.


    Although threat modeling appears to have proven useful for eliminating security vulnerabilities, it adds a challenge to the overall process due to the gap between security engineers and software developers. Because security engineers are usually not involved in the design and development of the software, it often becomes a time-consuming effort to embark on brainstorming sessions with other engineers to understand the software's specific behavior and define all of its components, especially as the application grows complex.

    Legacy Systems

    While it is important to model threats to a software application in the project life cycle, it is particularly important to threat model legacy software because there's a high chance that the software was originally developed without threat models and security in mind. This is a real challenge as legacy software tends to lack detailed documentation. This, specifically, is the case with open source projects where a lot of people contribute, adding notes and documents, but they may not be organized; consequently making threat modeling a difficult task.

    Threat Modeling Crash Course

    Threat modeling can be drilled down to three steps: characterizing the software, identifying assets and access points, and identifying threats.

    Characterizing the Software

    At the start of the process the system in question needs to be thoroughly understood. This includes reviewing the correlation of every single component as well as defining the usage scenarios and dependencies. This is a critical step to understanding the underlying architecture and implementation details of the system. The information from this process is used to produce a data flow diagram (DFD) which provides the best representation for identifying different security zones where data will be in transit or stored.

    Data Flow Diagram for a typical web application

    Depending on the type and complexity of the system, this phase may also be drilled down into more detailed diagrams that could be used to help understand the system better, and ultimately address a broader range of potential threats.

    Identifying Assets and Access Points

    The next phase of the threat modeling exercise is where assets and access points of the system need to be clearly identified. System assets are the components that need to be protected against misuse by an attacker. Assets could be tangible such as configuration files, sensitive information, and processes or could potentially be an abstract concept like data consistency. Access points, also known as attack surfaces, are the path adversaries use to access the targeted endpoint. This could be an open port or protocol, file system read and write privileges, or authentication mechanism. Once the assets and access points are identified, a data access control matrix can be generated and the access level privilege for each entity can be defined.

    Data Access Control Matrix

    Identifying Threats

    Once the first two phases are complete, specific threats to the system can be identified. Using one of the systematic approaches to the threat identification process can help organize the effort. The primary approaches are: the attack tree based approach, stochastic model based approaches, and categorized threat lists.

    Attack trees have been used widely to identify threats, but categorized lists seem to be more comprehensive and easier to use. Some implementations are Microsoft's STRIDE, OWASP's Top 10 Vulnerabilities, and CWE/SANS' Top 25 Most Dangerous Software Errors.

    Threat identification using OWASP top 10 vulnerabilities

    Although the stochastic based approach is outside the scope of this writing, additional information is available for download.

    The key to generating successful and comprehensive threat lists against external attacks relies heavily on the accuracy of the system architecture model and the corresponding DFD that's been created. These are the means to identify the behavior of the system for each component, and to determine whether a vulnerability exists as a result.

    Risk Ranking

    Calculating the risk of each relevant threat associated with the software is the next step in the process. There are a number of different ways to calculate this risk; however, OWASP has documented a methodology that can be used for threat prioritization. The crux of this method is to determine the severity of the risk associated with each threat and come up with a weighting factor to address each identified threat, depending on the significance of the issue to the business. It is also important to revisit the threat model occasionally to ensure it has not become outdated.

    Mitigation and Control

    Threats selected from previous steps now need to be mitigated. Security engineers should provide a series of countermeasures to ensure that all security aspects of the issues are addressed by developers during the development process. A critical point at this stage is to ensure that the security implementation cost does not exceed the expected risk. The mitigation scope has to be clearly defined to ensure that meaningful security efforts align with the organization's security vision.

    Threat Analysis/Verification

    Threat analysis and verification focuses on the security delivery, after the code development and testing has started. This is a key step towards hardening the software against attacks and threats that were identified earlier. Usually a threat model owner is involved during the process to ensure relevant discussions are had on each remedy implementation and whether the priority of a specific threat can be re-evaluated.

    How Threat Modeling Integrates into SDLC

    The threats identified so far allow the engineering team to make better informed decisions during the software development life cycle. However, with continuous integration and delivery being key to agile development practices, any extra paperwork is a drag on the development flow. This also resurrects the blocking issue - the Iron Triangle Constraint - mentioned earlier, which might delay a release as a result. It is therefore essential to automate the overall threat modeling process into the continuous delivery pipeline, to ensure security is enforced during the early stages of product development.

    Although there is no one-size-fits-all approach to automating threat modeling into the SDLC, and any threat modeling automation technique has to address specific security integration problems, there are various automation implementations available, including Threat Modeling with Architectural Risk Patterns - AppSec USA 2016, Developer-Driven Threat Modeling, and Automated Threat Modeling through the Software Development Life-Cycle, that could help integrate threat modeling into the software delivery process.

    Should you use Threat Modeling anyway?

    Threat modeling not only adds value to a company's products, but also encourages a security posture by providing a holistic security view of the system components. This can be used as a security baseline, giving any development effort a clear vision of what needs to be done to ensure security requirements are met, with the company benefiting in the long term.

    Posted: 2017-03-02T21:00:00+00:00
  • Debugging a kernel in QEMU/libvirt - Part II

    Authored by: Wade Mealing

    This blog has previously shown how to configure a Red Hat Enterprise Linux system for kernel debugging. This post expects that the system has been configured, that the source code matching the installed kernel version is handy, and that the reader is ready to follow along.

    This should not be run on a production system, as interruption of system function is guaranteed.

    The particular problem that will be investigated is CVE-2016-9793. As discussed on the oss-security list, this vulnerability was classified as an integer overflow that needed to be addressed.

    Eric Dumazet describes the patch as (taken from the commit that attempts to fix the flaw):

    $ git show b98b0bc8c431e3ceb4b26b0dfc8db509518fb290
    commit b98b0bc8c431e3ceb4b26b0dfc8db509518fb290
    Author: Eric Dumazet <edumazet@google.com>
    Date:   Fri Dec 2 09:44:53 2016 -0800
        net: avoid signed overflows for SO_{SND|RCV}BUFFORCE
        CAP_NET_ADMIN users should not be allowed to set negative
        sk_sndbuf or sk_rcvbuf values, as it can lead to various memory
        corruptions, crashes, OOM...
        Note that before commit 82981930125a ("net: cleanups in
        sock_setsockopt()"), the bug was even more serious, since SO_SNDBUF
        and SO_RCVBUF were vulnerable.
        This needs to be backported to all known linux kernels.
        Again, many thanks to syzkaller team for discovering this gem.
        Signed-off-by: Eric Dumazet <edumazet@google.com>
        Reported-by: Andrey Konovalov <andreyknvl@google.com>
        Signed-off-by: David S.  Miller <davem@davemloft.net>
    diff --git a/net/core/sock.c b/net/core/sock.c
    index 5e3ca41..00a074d 100644
    --- a/net/core/sock.c
    +++ b/net/core/sock.c
    @@ -715,7 +715,7 @@ int sock_setsockopt(struct socket *sock, int level, int optname,
                    val = min_t(u32, val, sysctl_wmem_max);
                    sk->sk_userlocks |= SOCK_SNDBUF_LOCK;
    -               sk->sk_sndbuf = max_t(u32, val * 2, SOCK_MIN_SNDBUF);
    +               sk->sk_sndbuf = max_t(int, val * 2, SOCK_MIN_SNDBUF);
                    /* Wake up sending tasks if we upped the value.  */
    @@ -751,7 +751,7 @@ set_rcvbuf:
                     * returning the value we actually used in getsockopt
                     * is the most desirable behavior.
    -               sk->sk_rcvbuf = max_t(u32, val * 2, SOCK_MIN_RCVBUF);
    +               sk->sk_rcvbuf = max_t(int, val * 2, SOCK_MIN_RCVBUF);
            case SO_RCVBUFFORCE:

    The purpose of the investigation is to determine if this flaw affects the shipped kernels.

    User interaction with the kernel happens through syscalls and ioctls. In this case, the issue is in the setsockopt syscall. This specific call ends up being handled in a function named sock_setsockopt, as shown in the patch.
    The flaw is not always clearly documented in patches, but in this case the area that the patch modifies is an ideal place to start looking.

    Investigating sock_setsockopt function

    The sock_setsockopt code shown below has the relevant parts highlighted in an attempt to explain key concepts that complicate the investigation of this flaw.

    A capabilities check is the first hurdle when attempting to force-set the send buffer size. Inspecting the sock_setsockopt code, a capable() check ensures the process has the CAP_NET_ADMIN privilege before it can force buffer sizes. Requiring this capability reduces the attack vector, but does not entirely mitigate it: the root user has this capability by default, and it can be granted to binaries that are run by other users. The relevant section of code is:

             if (!capable(CAP_NET_ADMIN)) {
                 ret = -EPERM;

    The reproducer needs the CAP_NET_ADMIN capability to run setsockopt() with the SO_RCVBUFFORCE parameter. To read more about Linux kernel capabilities, check out the setcap(8) and capabilities(7) man pages.

    We can see from the patch and surrounding discussion that it is possible to set the size of sk->sk_sndbuf to a negative value. Following the flow of the code, the value enters the max_t macro before being assigned. The patch explicitly changes the type that the max_t macro casts to.

    Using the GDB debugger and setting a breakpoint will show how various size values affect the final value of sk->sk_sndbuf.

    Integer overflows

    The patch shows that the type used in the max_t macro compare was changed from u32 (unsigned 32 bit integer) to int (signed 32 bit integer). Before we make assumptions or do any kind of investigation, we can make a hypothesis that the problem exists with the outcome of the max_t.

    Here is the definition of max_t:

    #define max_t(type, x, y) ({            \
        type __max1 = (x);            \
        type __max2 = (y);            \
        __max1 > __max2 ? __max1: __max2; })

    My understanding of the max_t macro is that it casts both the second and third parameters to the type specified by the first parameter, returning __max1 if __max1 is greater than __max2. The unintended side effect is that, when casting to an unsigned type, the comparison turns negative values into large unsigned values.

    It may be tempting to program the relevant macro, type definitions, and operations on the variables into a small C program to test. Resist! Armed with your kernel debugging knowledge and a small C program to exercise the code, we can see how the tool-chain decided to create this code.

    For the test case, we'll need values that test how the compiler and architecture deal with these kinds of overflows. Input that could create overflows or negative final values should be used as test cases.

    Building the test case

    To exercise this particular section of code (before the patch) we can build a small reproducer in C. Feel free to choose a language of your own and write test code that sets the socket options the same way.

    #include <stdio.h>
    #include <limits.h>
    #include <linux/types.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <error.h>
    #include <errno.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        int sockfd, sendbuff;
        socklen_t optlen;
        int i;

        /* Boundary values used to test our hypothesis */
        int val[] = {INT_MIN, INT_MIN + 100, INT_MIN + 200, -200, 0, 200,
                     INT_MAX - 200, INT_MAX - 100, INT_MAX};

        sockfd = socket(AF_INET, SOCK_DGRAM, 0);
        if (sockfd == -1) {
            printf("Error: %s\n", strerror(errno));
            return 1;
        }

        for (i = 0; i < (int)(sizeof(val) / sizeof(val[0])); i++) {
            sendbuff = (val[i] / 2.0);
            printf("== Setting the send buffer to %d\n", sendbuff);
            if (setsockopt(sockfd, SOL_SOCKET, SO_SNDBUFFORCE,
                           &sendbuff, sizeof(sendbuff)) == -1)
                printf("SETSOCKOPT ERROR: %s\n", strerror(errno));

            optlen = sizeof(sendbuff);
            if (getsockopt(sockfd, SOL_SOCKET, SO_SNDBUF,
                           &sendbuff, &optlen) == -1)
                printf("GETSOCKOPT ERROR: %s\n", strerror(errno));
            else
                printf("getsockopt returns buffer size: %d\n", sendbuff);
        }
        return 0;
    }

    Compile the reproducer:

    [user@target /tmp/]# gcc setsockopt-integer-overflow-ver-001.c -o setsockopt-reproducer

    And set the capability of CAP_NET_ADMIN on the binary:

    [user@target /tmp/]# setcap CAP_NET_ADMIN+ep setsockopt-reproducer

    If there are exploit creators (or flaw reporters) in the audience, understand that naming your files reproducer.c and reproducer.py ends up getting confusing; please try to give files unique names. This can save time when searching through the 200 reproducer.c files lying around the file system.

    Saving time

    Virtual machines afford programmers the ability to save the system state for immediate restore. This allows the system to return to a "known good state" if it was to panic or become corrupted. Libvirt calls this kind of snapshot a "System Checkpoint" style snapshot.

    The virt-manager GUI tool in Red Hat Enterprise Linux 7 did not support creating system checkpoints. The command line interface can create system-checkpoint snapshots with:

    # virsh snapshot-create-as RHEL-7.2-SERVER snapshot-name-1

    To restore the system to the snapshot, run the command:

    # virsh snapshot-revert RHEL-7.2-SERVER snapshot-name-1

    If the system is running Fedora 20 or newer, and you prefer to use GUI tools, Cole Robinson has written an article which shows how to create system checkpoint style snapshots from within the virt-manager.

    The advantage of snapshots is that you can restore your system back to a working state in case of file system corruption, which can otherwise force you to reinstall from scratch.

    Debugging and inspecting

    GDB contains a "Text User Interface" mode which allows for greater insights into the running code. Start GDB in the "Text User Interface Mode" and connect to the running qemu/kernel using gdb as shown below:

    gdb -tui ~/kernel-debug/var/lib/kernel-3.10.0-327.el7/boot/vmlinux
    <gdb prelude here>
    (gdb) dir ./usr/src/debug/kernel-3.10.0-327.el7/linux-3.10.0-327.el7.x86_64/
    (gdb) set architecture i386:x86-64:intel
    (gdb) set disassembly-flavor intel
    (gdb) target remote localhost:1234

    The extra line beginning with dir points GDB to the location of the source used to create the binary, allowing GDB to show the current line of execution. This directory tree was created when extracting the kernel-debuginfo package using rpm2cpio.

    GDB should appear similar to the below screenshot:

    The TUI mode will show the source code at the top and the command line interactive session at the bottom window. The TUI can be customized further and this is left as an exercise to the reader.

    Inspecting the value

    The plan is to inspect the value at the time it is written to sk->sk_sndbuf, to determine how different parameters affect the final value.

    We will set a breakpoint in GDB to stop at that position and print out the value of sk->sk_sndbuf.

            sk->sk_userlocks |= SOCK_SNDBUF_LOCK;
    >>>>>>>>sk->sk_sndbuf = max_t(u32, val * 2, SOCK_MIN_SNDBUF);  
            /* Wake up sending tasks if we upped the value.  */
        case SO_SNDBUFFORCE:
            if (!capable(CAP_NET_ADMIN)) {
                ret = -EPERM;
            goto set_sndbuf;

    The line which assigns the sk->snd_buf value is line 704 in net/core/sock.c. To set a breakpoint on this line we can issue the "break" command to gdb with the parameters of where it should break.

    Additional commands have been appended that will run every time the breakpoint has been hit. In this demonstration the breakpoint will print the value of sk->sk_sndbuf and resume running.

    If you are not seeing the (gdb) prompt, hit ctrl + c to interrupt the system; pausing the system. While the system is suspended in gdb mode it will not take keyboard input or continue any processing.

    (gdb) break net/core/sock.c:703
    Breakpoint 1 at 0xffffffff81516ede: file net/core/sock.c, line 703.
    (gdb) commands
    Type commands for breakpoint(s) 4, one per line.
    End with a line saying just "end".
    >p sk->sk_sndbuf
    >continue
    >end

    The "commands" directive is similar to a function that will be run each time the most recently set breakpoint is hit. Use the 'continue' directive at the (gdb) prompt to resume processing on the target system.

    The plan was to show a binary compare of val to inspect the comparison, however this value was optimized out. GCC would allow us to inspect the 'val' directly if we were to step through the assembly and inspect the registers at the time of comparison. Doing so, however, is beyond the scope of this document.

    Let's give it a simple test by running the reproducer against the code with predictable, commonly used values. Start another terminal, connect to the target node, and run the command:

    [user@target]# ./setsockopt-reproducer 
    Setting the send buffer to -1073741824
    getsockopt buffer size: -2147483648
    Setting the send buffer to -1073741774
    getsockopt buffer size: -2147483548
    Setting the send buffer to -1073741724
    getsockopt buffer size: -2147483448
    Setting the send buffer to -100
    getsockopt buffer size: -200
    Setting the send buffer to 0
    getsockopt buffer size: 4608
    Setting the send buffer to 100
    getsockopt buffer size: 4608
    Setting the send buffer to 1073741723
    getsockopt buffer size: 2147483446
    Setting the send buffer to 1073741773
    getsockopt buffer size: 2147483546

    At this point the breakpoint should show as executed in the gdb terminal, printing out the value every time execution passes net/core/sock.c line 704.

    Breakpoint 4, sock_setsockopt (sock=sock@entry=0xffff88003c57b680, level=level@entry=1, optname=optname@entry=32, optval=<optimized out>, optval@entry=0x7ffce597f1a0 "",
        optlen=optlen@entry=4) at net/core/sock.c:704
    $9 = 212992

    The above example shows $N = value as the output of the commands we attached to the breakpoint. Each dollar value ($N) shown in the output corresponds to one of the values iterated through in the test-case code:

    int val[] = {INT_MIN , INT_MIN + 100, INT_MIN + 200, -200 , 0 , 200 , INT_MAX - 200, INT_MAX - 100, INT_MAX};

    Listed below is the complete output of the example script:

    Breakpoint 1, sock_setsockopt (sock=sock@entry=0xffff8800366cf680, level=level@entry=1, optname=optname@entry=32, optval=<optimized out>, optval@entry=0x7ffe3ea373c4 "",
        optlen=optlen@entry=4) at net/core/sock.c:704
    $1 = 212992
    Breakpoint 1, sock_setsockopt (sock=sock@entry=0xffff8800366cf680, level=level@entry=1, optname=optname@entry=32, optval=<optimized out>, optval@entry=0x7ffe3ea373c4 "2",
        optlen=optlen@entry=4) at net/core/sock.c:704
    $2 = -2147483648
    Breakpoint 1, sock_setsockopt (sock=sock@entry=0xffff8800366cf680, level=level@entry=1, optname=optname@entry=32, optval=<optimized out>, optval@entry=0x7ffe3ea373c4 "d",
        optlen=optlen@entry=4) at net/core/sock.c:704
    $3 = -2147483548
    Breakpoint 1, sock_setsockopt (sock=sock@entry=0xffff8800366cf680, level=level@entry=1, optname=optname@entry=32, optval=<optimized out>,
        optval@entry=0x7ffe3ea373c4 "\234\377\377\377\003", optlen=optlen@entry=4) at net/core/sock.c:704
    $4 = -2147483448
    Breakpoint 1, sock_setsockopt (sock=sock@entry=0xffff8800366cf680, level=level@entry=1, optname=optname@entry=32, optval=<optimized out>, optval@entry=0x7ffe3ea373c4 "",
        optlen=optlen@entry=4) at net/core/sock.c:704
    $5 = -200
    Breakpoint 1, sock_setsockopt (sock=sock@entry=0xffff8800366cf680, level=level@entry=1, optname=optname@entry=32, optval=<optimized out>, optval@entry=0x7ffe3ea373c4 "d",
        optlen=optlen@entry=4) at net/core/sock.c:704
    $6 = 4608
    Breakpoint 1, sock_setsockopt (sock=sock@entry=0xffff8800366cf680, level=level@entry=1, optname=optname@entry=32, optval=<optimized out>, optval@entry=0x7ffe3ea373c4 "\233\377\377?\003",
        optlen=optlen@entry=4) at net/core/sock.c:704
    $7 = 4608
    Breakpoint 1, sock_setsockopt (sock=sock@entry=0xffff8800366cf680, level=level@entry=1, optname=optname@entry=32, optval=<optimized out>, optval@entry=0x7ffe3ea373c4 "\315\377\377?\003",
        optlen=optlen@entry=4) at net/core/sock.c:704
    $8 = 2147483446


    As we can see, the final value of sk->sk_sndbuf can be below zero if an application manages to set the value incorrectly. Many areas of the kernel use sk->sk_sndbuf; the most obvious is the tcp_sndbuf_expand function, where the value is used and memory is allocated based on this size.

    This flaw is going to be marked as vulnerable in EL7. I leave it as an exercise for the reader to confirm other flaws they may be interested in.


    Listed below are a number of problems that first-time users have run into. Please leave problems in the comments and I may edit this article to aid others in finding solutions faster.

    Problem: Can't connect to gdb?
    Solution: Use netstat to check that the port is open and listening on the host. Add a firewall rule to allow incoming connections to this port.

    Problem: GDB doesn't allow me to type?
    Solution: Hit Ctrl + C to interrupt the current system, enter your commands, then type 'continue' to resume the host's execution.

    Problem: Breakpoint is set, but it never gets hit?
    Solution: It's likely that the booted kernel and source code mismatch; check that the running kernel matches the source code/line number that has been set.

    Problem: The ssh connection drops while running the code!
    Solution: If the target system remains interrupted in gdb for too long, networked connections to the system can be dropped. Try connecting to the host via "virsh console SOMENAME" to get a non-networked console. You may need to set up a serial console on the host if one is not present.

    Additional thanks to:
    - Doran Moppert (GDB assistance!)
    - Prasad Pandit (Editing)
    - Fabio Olive Leite (Editing)

    Posted: 2017-02-24T14:30:00+00:00
  • Do you know where that open source came from?

    Authored by: Joshua Bressers

    Last year, while speaking at RSA, a reporter asked me about container provenance. This wasn’t the easiest question to answer because there is a lot of nuance around containers and what’s inside them. In response, I asked him if he would eat a sandwich he found on the ground. The look of disgust I got was priceless, but it opened up a great conversation.

    Think about it this way: If there was a ham sandwich on the ground that looked mostly OK, would you eat it? You can clearly see it’s a ham sandwich. The dirt can be brushed off. You do prefer wheat bread to white. So what’s stopping you? It was on the ground. Unless you’re incredibly hungry and without any resources, you won’t eat that sandwich. You’ll visit the sandwich shop across the street.

    The other side of this story is just as important though. If you are starving and without money, you’d eat that sandwich without a second thought. I certainly would. Starving to death is far worse than eating a sandwich of questionable origin. This is an example you have to remember in the context of your projects and infrastructure. If you have a team that is starving for time, they aren’t worried about where they get their solutions. For many, making the deadline is far more important than “doing it right.” They will eat the sandwich they find.

    This year at RSA, I’m leading a Peer2Peer session titled, “Managing your open source.” I keep telling everyone that open source won. It’s used everywhere; there’s no way to escape it anymore. But a low-cost, flexible, and more secure software option must have some kind of hidden downside, right? Is the promise of open source too good to be true? Only if you don’t understand the open source supply chain.

    Open source is everywhere, and that means it's easily acquirable. From cloning off of GitHub to copying random open source binaries downloaded from a project page, there's no stopping this sort of behavior. If you try, you will fail. Open source won because it solves real problems, and it snuck in the back door when nobody was looking. It's no secret how useful open source is: by my guesstimates, the world has probably saved trillions in man-hours and actual money thanks to all the projects that can be reused. If you try to stop it now, it's either going to go back underground, making the problem of managing your open source usage worse, or, worse still, you're going to have a revolt. Open source is amazing, but there is a price for all this awesome.

    Fundamentally, this is our challenge: How do we empower our teams to make the right choices when choosing open source software?

    We know they’re going to use it. We can’t control every aspect of its use, but we can influence its direction. Anyone who is sensitive to technical debt will understand that open source isn’t a “copy once and forget” solution. It takes care and attention to ensure that you haven’t just re-added Heartbleed to your infrastructure. Corporate IT teams need to learn how to be the sandwich shop - how do we ensure that everyone is coming to us for advice and help with open source instead of running whatever they find on the ground? There aren’t easy answers to all of these questions, but we can at least start the discussion.

    In my RSA Peer2Peer session we’re going to discuss what this all means in the modern enterprise:
    - How are you managing your open source?
    - Are you doing nothing?
    - Do you have a program where you vet the open source used to ensure a certain level of quality?
    - How do you determine quality?
    - Are you running a scanner that looks for security flaws?
    - What about the containers or Linux distribution you use, where did that come from, who is taking care of it?
    - How are you installing your open source applications on your Linux or even Windows servers?

    There are a lot of questions, too many to ask in a single hour or day, and far too many to effectively answer over the course of a career in IT security. That’s okay though; we want to start a discussion that I expect will never end.

    See you at RSA on Tuesday February 14, 2017 | 3:45 PM - 4:30 PM | Marriott Marquis | Nob Hill C

    Posted: 2017-02-08T14:30:00+00:00
  • Debugging a kernel in QEMU/libvirt

    Authored by: Wade Mealing

    A kernel bug announced on the oss-security list claims to create a situation in which memory corruption can panic the system, by causing an integer used in determining the size of TCP send and receive buffers to be a negative value. Red Hat engineering sometimes backports security fixes and features from the current kernel, diverging the Red Hat Enterprise Linux kernel from upstream and causing some security issues to no longer apply. This blog post shows how to use live kernel debugging to determine if a system is at risk from this integer overflow flaw.

    This walkthrough assumes that the reader has a Red Hat Enterprise Linux 7 guest system and basic knowledge of C programming.

    Setting up the guest target to debug

    The guest to be the target of the debugging session is a libvirt (or KVM/QEMU) style Virtual Machine. The guest virtual serial port should be mapped to the TCP port (TCP/1234) for use by GDB (the GNU Debugger).

    Modifying the guest domain file

    The virsh edit command is intended to be a safe method of manipulating the raw XML that describes the guest, which is what we need to do in this circumstance. We need to configure the guest via the domain configuration file, as there is no tickbox in virt-manager to enable what we need.

    The first change is to set the XML namespace for QEMU, which sounds more complex than it is.

    # virsh edit your-guest-name

    Find the domain directive and add the option xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'.

    <domain type='kvm'
            xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0' >

    Add a new qemu:commandline tag inside domain, which will allow us to pass a parameter to QEMU for this guest when starting. The qemu:arg element must be nested inside it:

    <domain type='kvm'
            xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0' >
      <qemu:commandline>
        <qemu:arg value='-s'/>
      </qemu:commandline>

    Save the file and exit the editor. Some versions of libvirt may complain that the XML has invalid attributes; ignore this and save the file anyway. The libvirtd daemon does not need to be restarted. The guest will need to be destroyed and restarted if it is already running.

    The -s parameter is an abbreviation of -gdb tcp::1234. If you have many guests needing debugging on different ports, or already have a service running on port 1234 on the host, you can set the port in the domain XML file as shown below:

            <qemu:arg value='-gdb'/>
            <qemu:arg value='tcp::1235'/>
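As with the -s example, these arguments belong inside the qemu:commandline element; a complete fragment might look like the following (port 1235 is an arbitrary choice):

```
<qemu:commandline>
  <qemu:arg value='-gdb'/>
  <qemu:arg value='tcp::1235'/>
</qemu:commandline>
```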

    If it is working, the QEMU process on the host will be listening on the port specified as shown below:

    [root@target]# netstat -tapn | grep 1234
    tcp        0      0 0.0.0.0:1234            0.0.0.0:*               LISTEN      11950/qemu-system-x
    Change /etc/default/grub on the guest

    The guest kernel will need to be booted with new parameters to enable the KGDB debugging facilities. Add the parameter kgdboc=ttyS0,115200. In the system shown here, a serial console is also running on ttyS0 with no adverse effects.

    Use the helpful grubby utility to apply these changes across all kernels.

    # grubby --update-kernel=ALL --args="console=ttyS0,115200 kgdboc=ttyS0,115200"
    Downloading debuginfo packages

    The Red Hat Enterprise Linux kernel packages do not include debug symbols, as the symbols are stripped from binary files at build time. GDB needs those debug symbols to assist programmers when debugging. For more information on debuginfo see this segment of the Red Hat Enterprise Linux 7 developer guide.

    RPM packages containing the name 'debuginfo' contain files with symbols. These packages can be downloaded from Red Hat using yum or up2date.

    To download these packages on the guest:

    # debuginfo-install --downloadonly kernel-3.10.0-327.el7

    This should download two RPM files into the current directory on the guest for later extraction and use by GDB.

    Copy these files from the guest to the host so the host GDB can use them. I chose ~/kernel-debug/ as a sane location for these files. Create the directory if it doesn't already exist.

    # mkdir -p ~/kernel-debug
    # scp yourlogin@guest:kernel*.rpm ~/kernel-debug/

    The final step on the guest is to reboot the target. At this point the system should reboot with no change in behavior.
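Once the guest is back up, a quick sanity check on the boot parameters can confirm that kgdboc took effect. This sketch uses a sample command-line string so it reads on its own; on the guest itself you would grep /proc/cmdline instead.

```shell
# Sketch: confirm the kgdboc parameter is present on the kernel
# command line. On the real guest, replace the sample string with
# "$(cat /proc/cmdline)".
cmdline="ro root=/dev/mapper/rhel-root console=ttyS0,115200 kgdboc=ttyS0,115200"
echo "$cmdline" | grep -o 'kgdboc=[^ ]*'
```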

    Preparing the host to debug

    The system which runs the debugger doesn't need to be the host that contains the guest. The debugger system must be capable of making a connection to the guest running on the specified port (1234). In this example these commands will be run on the host which contains the virtual machine.

    Installing GDB

    Install GDB on the host using a package manager.

    # sudo yum -y install gdb
    Extracting files to be used from RPMs

    When Red Hat builds the kernel it strips debugging symbols from the RPMs. This creates smaller downloads and uses less memory when running. The stripped packages are the well-known RPM packages named like kernel-3.10.0-327.el7.x86_64.rpm. The non-stripped debug information is stored in debuginfo rpms, like the ones downloaded earlier in this document by using debuginfo-install. They must match the exact kernel version and architecture being debugged on the guest to be of any use.

    The target does not need to match the host system architecture or release version. The example below can extract files from RPMs on any system.

    # cd ~/kernel-debug
    # rpm2cpio kernel-debuginfo-3.10.0-327.el7.x86_64.rpm | cpio -idmv
    # rpm2cpio kernel-debuginfo-common-3.10.0-327.el7.x86_64.rpm | cpio -idmv

    This extracts the files within the packages into the current working directory, laid out as they would be on the intended file system. No scripts or commands within the RPMs are run. These files are not installed, and the system package management tools will not manage them. This allows them to be used on other architectures, releases, or distributions.

    • The unstripped kernel is the vmlinux file in ~/kernel-debug/usr/lib/debug/lib/modules/3.10.0-327.el7.x86_64/vmlinux
    • The kernel source is in the directory ~/kernel-debug/usr/src/debug/kernel-3.10.0-327.el7/linux-3.10.0-327.el7.x86_64/
    Connecting to the target system from the remote system

    Start GDB with the text user interface, passing as a parameter the path to the unstripped kernel binary (vmlinux) running on the target system.

    # gdb -tui ~/kernel-debug/usr/lib/debug/lib/modules/3.10.0-327.el7.x86_64/vmlinux
    <gdb prelude shows here>

    GDB must be told where to find the target system. Type the following into the GDB session:

    set architecture i386:x86-64:intel
    target remote localhost:1234
    dir ~/kernel-debug/usr/src/debug/kernel-3.10.0-327.el7/linux-3.10.0-327.el7.x86_64/

    Commands entered at the (gdb) prompt can be saved in ~/.gdbinit to reduce repetitive entry.
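For example, a ~/.gdbinit for this walkthrough might contain the three commands above (paths follow the extraction layout earlier in this post; adjust for your kernel version):

```
set architecture i386:x86-64:intel
target remote localhost:1234
dir ~/kernel-debug/usr/src/debug/kernel-3.10.0-327.el7/linux-3.10.0-327.el7.x86_64/
```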

    At this point, if all goes well, the system should be connected to the remote GDB session.
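When the connection succeeds, the guest is paused under the debugger's control. A quick sanity check might look like this hypothetical session (your addresses and thread list will differ):

```
(gdb) info threads        # list guest CPUs/threads to confirm we are attached
(gdb) continue            # resume the guest; Ctrl-C interrupts it again
```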

    The story so far...

    Congratulations, you've made it this far. If you've been following along, you should have set up a GDB session to a system running in libvirt and be able to recreate flaws and begin investigating them.

    Join the dark side next time, when we validate an integer promotion comparison flaw.

    Posted: 2017-01-11T14:30:00+00:00