Red Hat Customer Portal


Latest Posts

  • Evolution of the SSL and TLS protocols

    Authored by: Huzaifa Sidhpurwala

    The Transport Layer Security (TLS) protocol is undoubtedly the most widely used security protocol on the Internet today. If you have ever done an online banking transaction, visited a social networking website, or checked your email, you have most likely used TLS. Apart from wrapping the plain-text HTTP protocol with cryptographic goodness, other application protocols like SMTP and FTP can also use TLS to ensure that all the data between client and server is inaccessible to attackers in between. This article takes a brief look at the evolution of the protocol and discusses why it was necessary to make changes to it.

    Like any other standard used today on the internet, the TLS protocol has a humble beginning and a rocky history. Originally developed by Netscape in 1994, it was initially called Secure Sockets Layer (SSL). The first version was said to be so insecure that "it could be broken in ten minutes" when Marc Andreessen presented it at an MIT meeting. Several iterations followed, leading to SSL version 2 in 1995 and SSL version 3 in 1996. In 1996, an IETF working group formed to standardize SSL. Even though the resulting protocol is almost identical to SSL version 3, the process took three years.

    TLS version 1.0, with a change in name to prevent trademark issues, was published as RFC 2246. Later versions 1.1 and 1.2 were published which aimed to address several shortcomings and flaws in the earlier versions of the protocol.

    Cryptographic primitives are based on mathematical functions and theories

    The TLS protocol itself is built from several cryptographic primitives, including asymmetric key exchange protocols, ciphers, and hashing algorithms. Assembling all these primitives together securely is non-trivial, and it would not be practical for every application to do so individually the way TLS does. For example, AES is a strong symmetric cipher, but like any other symmetric cipher it needs the encryption key to be securely exchanged between the client and the server. Without an asymmetric cipher there is no way to exchange keys over an insecure network such as the Internet. Hashing functions are used to help authenticate the certificates used to exchange the keys and also to ensure the integrity of data in transit. These hash algorithms, like SHA, are one-way functions and are reasonably collision resistant. All these cryptographic primitives, arranged in a certain way, make up the TLS protocol as a whole.
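    The one-way and diffusion properties described above are easy to observe with Python's standard hashlib. This sketch flips a single input bit and counts how many output bits change (the avalanche effect); the message strings are illustrative, not anything TLS itself uses:

```python
import hashlib

# A one-bit change in the input ("alice" -> "alicd" differs in one bit)
# flips roughly half of the 256 output bits, so the digest reveals no
# obvious relationship to the input.
a = hashlib.sha256(b"transfer $100 to alice").digest()
b = hashlib.sha256(b"transfer $100 to alicd").digest()

diff_bits = sum(bin(x ^ y).count("1") for x, y in zip(a, b))
print(f"{diff_bits} of {len(a) * 8} output bits differ")
```

On any run, roughly 128 of the 256 bits differ, which is what good diffusion looks like.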

    Key Exchanges

    The reason two systems that have never met can communicate securely is secure key exchange. Because both systems must know the same secret to establish a communications path protected by a symmetric cipher, a key exchange protocol lets the two systems establish that secret and share it securely with each other.

    The Rivest-Shamir-Adleman (RSA) cryptosystem is the most widely used asymmetric key exchange algorithm. Its security rests on the assumption that factoring large numbers is difficult: while the public modulus n is simply calculated as n = p x q, it is hard for an attacker to factor n back into the primes p and q, from which the private key can easily be derived.
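    As an illustration, here is textbook RSA with the classic tiny primes p = 61 and q = 53. Real keys are thousands of bits long and use padding schemes, so this is a sketch of the underlying math only:

```python
# Toy RSA: real keys use primes hundreds of digits long plus padding.
p, q = 61, 53
n = p * q                    # public modulus; easy to compute, hard to factor at scale
e = 17                       # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)          # private exponent, derivable only if p and q are known

message = 65
ciphertext = pow(message, e, n)
recovered = pow(ciphertext, d, n)
print(ciphertext, recovered)   # recovered == 65
```

An attacker who could factor n = 3233 back into 61 and 53 could compute d the same way, which is why key sizes must stay ahead of factoring algorithms.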

    The Diffie-Hellman key exchange (DHE) relies on the discrete logarithm problem: given y = g ^ a mod p, it is assumed to be difficult to solve for the private exponent a. Elliptic-curve Diffie-Hellman (ECDHE) relies on the same abstract DH problem but derives its security from point multiplication in elliptic curve groups.
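    The same idea can be sketched with deliberately tiny toy numbers (real deployments use 2048-bit groups or elliptic curves):

```python
# Toy Diffie-Hellman over a tiny group; only A, B, g, and p ever cross the wire.
p, g = 23, 5                 # public group parameters

a = 6                        # Alice's private exponent
b = 15                       # Bob's private exponent

A = pow(g, a, p)             # Alice sends A; recovering a from A is the discrete log problem
B = pow(g, b, p)             # Bob sends B

# Both sides arrive at the same secret without ever transmitting it.
alice_secret = pow(B, a, p)
bob_secret = pow(A, b, p)
print(alice_secret == bob_secret)   # True
```

An eavesdropper sees only g, p, A, and B; with realistic group sizes, solving for a or b is computationally infeasible.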

    Symmetric algorithms

    Symmetric algorithms used today, like the Advanced Encryption Standard (AES), have good confusion and diffusion properties, which means the encrypted output is statistically unrelated to the input. ChaCha20 is a newer stream cipher that is starting to see some traction and may see additional use in the future as a faster alternative to AES.

    Changes as time and technology progress

    Faster computers are now more accessible to the general public via cloud computing, GPUs, and dedicated FPGA devices than they were 10 years ago. New computation methods have also become possible: quantum computers are getting bigger, making attacks on the underlying mathematics of many cryptographic algorithms conceivable. New research in mathematics matters too; as older theories are challenged and newer methods are invented, our previous assumptions about hard mathematical problems are losing ground.

    New design flaws in the TLS protocol are also discovered from time to time. The POODLE flaw in SSL version 3 and DROWN flaw in SSL version 2 showed that the previous versions of the protocol are not secure. We can likely expect currently deployed versions of TLS to also have weaknesses as research continues and computing power gets greater.

    Attacks against cryptographic primitives and their future

    The best known attack against RSA is still factoring n into its components p and q. The best known algorithm for factoring integers larger than 10^100 is the number field sieve. The current recommendation from NIST is a minimum RSA key length of 2048 bits for information that needs to be protected until at least the year 2030; for secrecy beyond that year, larger keys will be necessary.

    RSA's future, however, is bleak! IETF recommended removal of static-RSA from the TLS version 1.3 draft standard stating "[t]hese cipher suites have several drawbacks including lack of PFS, pre-master secret contributed only by the client, and the general weakening of RSA over time. It would make the security analysis simpler to remove this option from TLS version 1.3. RSA certificates would still be allowed, but the key establishment would be via DHE or ECDHE." The consensus in the room at IETF-89 was to remove RSA key transport from TLS 1.3.

    DHE and ECC

    As with RSA, the best known attack against DHE is the number field sieve. With the computing power currently available, a 512-bit DH key takes about 10 core-years to break. NIST recommends a key size of 224 bits and a 2048-bit group size for any information which needs to be protected until 2030.

    Compared to DHE, ECC has stood its ground and is being increasingly used in newer software and hardware implementations. Most of the known attacks against ECC work only on special hardware or against buggy implementations. NIST recommends a key size of at least 224 bits for ECC curves.

    However, the biggest threat to all of the above key exchange methods is quantum computing. Once viable quantum computing technology is available, all of the above public key cryptography systems will be broken. NIST recently conducted a workshop on post-quantum cryptography and several alternatives to the above public cryptography schemes were discussed. It is going to be interesting to watch what these discussions lead to, and what new standards are formed.

    Symmetric ciphers and hashes

    All symmetric block ciphers are vulnerable to brute-force attacks. The time a brute-force attack takes depends on the size of the key: the bigger the key, the more time and power it requires. The SWEET32 attack has already shown that small block sizes are dangerous and has finally laid 3DES to rest. We already know that RC4 is insecure, and there have been several attempts to deprecate it.
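    The exponential cost of key search is easy to see with rough arithmetic; the keys-per-second rate below is an arbitrary assumption for illustration, not a measured figure:

```python
# Each extra key bit doubles the search space; on average, half of the
# keyspace must be tried before the key is found.
RATE = 10**12                          # hypothetical keys tested per second
SECONDS_PER_YEAR = 3600 * 24 * 365

for bits in (56, 128, 256):
    expected_tries = 2 ** (bits - 1)   # expected work: half the keyspace
    years = expected_tries / RATE / SECONDS_PER_YEAR
    print(f"{bits}-bit key: ~{years:.2e} years")
```

A 56-bit key (DES) falls in under an hour at this rate, while a 128-bit key would take on the order of 10^18 years, which is why brute force is a practical threat only against short keys.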

    The proposed TLS version 1.3 draft has provision for only two symmetric ciphers, namely AES and ChaCha20, and makes authenticated encryption (AEAD) mandatory; ChaCha20 is paired with the Poly1305 authenticator.

    And in conclusion...

    No one knows for sure what will happen next, but history has shown that older algorithms are at risk. That's why it is so important to stay up to date on cryptography technology. Developers should make sure their software supports the latest versions of TLS while deprecating older versions that are broken (or weakened). System owners should regularly test their systems to verify which ciphers and protocols are supported, and stay educated on what is current and what the risks of using old cryptography are.
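    As a starting point for such an audit, Python's ssl module can list what the local OpenSSL build offers. Output varies by system, and this only inspects the client-side defaults, not a deployed server:

```python
import ssl

# The default context reflects the local OpenSSL build's enabled suites.
ctx = ssl.create_default_context()
print(ssl.OPENSSL_VERSION)

# Pick out the AEAD suites (AES-GCM / ChaCha20-Poly1305) from the enabled list.
aead = sorted({c["name"] for c in ctx.get_ciphers()
               if "GCM" in c["name"] or "CHACHA20" in c["name"]})
for name in aead:
    print(name)
```

For a deployed server, tools that probe the live endpoint give a more complete picture, but checking the local library is a quick first sanity test.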

    Posted: 2016-11-16T14:30:00+00:00
  • Understanding and mitigating the Dirty Cow Vulnerability

    Authored by: Anonymous

    Rodrigo Freire & David Sirrine - Red Hat Technical Account Management Team

    Dirty Cow (CVE-2016-5195) is the latest branded vulnerability, with a name, a logo, and a website, to impact Red Hat Enterprise Linux. This flaw is a widespread vulnerability and spans Red Hat Enterprise Linux versions 5, 6, and 7. Technical details about the vulnerability and how to address it can be found at: Kernel Local Privilege Escalation "Dirty COW" - CVE-2016-5195.

    In order to be successful, an attacker must already have access to a server before they can exploit the vulnerability. Dirty Cow works by creating a race condition in the way the Linux kernel's memory subsystem handles copy-on-write (COW) breakage of private read-only memory mappings. This race condition can allow an unprivileged local user to gain write access to read-only memory mappings and, in turn, increase their privileges on the system.

    Copy-on-write is a technique that allows a system to efficiently duplicate or copy a resource which is subject to modification. If a resource is copied but not modified, there's no need to create a new resource; the resource can be shared between the copy and the original. In case of a modification, a new resource is created.
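    Copy-on-write is visible even from user space. In this sketch, a forked child writes to memory it initially shares with its parent, and the kernel transparently gives the child its own copy of the page (POSIX-only, since os.fork is unavailable on Windows):

```python
import os

data = bytearray(b"original")

r, w = os.pipe()
pid = os.fork()                 # child starts with the same pages, shared copy-on-write
if pid == 0:
    os.close(r)
    data[:] = b"modified"       # the first write triggers the actual page copy
    os.write(w, bytes(data))
    os._exit(0)

os.close(w)
child_view = os.read(r, 8)
os.waitpid(pid, 0)
print(child_view.decode())      # the child saw its private, modified copy
print(data.decode())            # the parent's page is untouched: still "original"
```

Dirty Cow abused a race in exactly this page-copying machinery, allowing a write meant for a private copy to land on the shared read-only original.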

    While an updated kernel that addresses this issue is available, in large data centers (where affected systems can number in the hundreds, thousands, or even tens of thousands) it may not be possible to find a suitable maintenance window to update every affected system, since applying the update requires a reboot. RHEL 7.2 and later systems can instead be live-patched to fix this issue using kpatch. To take advantage of this Red Hat benefit, file a support case, state the running kernel version, and request a suitable kpatch. For more details about kpatch, see: Is live kernel patching (kpatch) supported in RHEL 7?

    RHEL 5 and 6, while affected, do not support kpatch. Fortunately, there is a stopgap mitigation for this vulnerability using SystemTap. The SystemTap script applies the mitigation while the system is running, without the need for a reboot. It does this by intercepting the vulnerable kernel function and system call, which allows the system to continue working as expected without being compromised.

    A word of caution: this SystemTap solution can potentially impair a virus scanner running in the system. Please check with your antivirus vendor.

    The SystemTap script is relatively small and efficient, broken into four distinct sections as follows:

    probe kernel.function("mem_write").call ? {
            $count = 0
    }
    probe syscall.ptrace {  // includes compat ptrace as well
            $request = 0xfff
    }
    probe begin {
            printk(0, "CVE-2016-5195 mitigation loaded")
    }
    probe end {
            printk(0, "CVE-2016-5195 mitigation unloaded")
    }

    First, the script places a probe at the beginning of the kernel function “mem_write”, when the function is called directly rather than inlined:

    probe kernel.function("mem_write").call ? {
            $count = 0
    }

    Next, the script places a probe on the ptrace syscall that disables it (this can impair antivirus software and potentially other kinds of software, such as debuggers):

    probe syscall.ptrace {  // includes compat ptrace as well
            $request = 0xfff
    }

    Finally, the “probe begin” and “probe end” code blocks tell SystemTap to add the supplied text to the kernel log buffer via the printk function. This creates an audit trail, recording in the system logs exactly when the mitigation is loaded and unloaded.

    This solution works in all affected RHEL versions: 5, 6, and 7.

    Red Hat always seeks to provide both mitigations to disable attacks and the actual patches to treat the flaw. To learn more about SystemTap, and how it can be used in managing your Red Hat systems, please refer to Using SystemTap or one of our videos about it within our Customer Portal.

    Again, for more information on how to use the SystemTap solution or to see links to the available patches, please visit the "Resolve" tab in the related Red Hat Vulnerability Response article.

    Posted: 2016-11-09T14:30:00+00:00
  • From There to Here (But Not Back Again)

    Authored by: Vincent Danen

    Red Hat Product Security celebrated its 15th anniversary this summer, and while I cannot claim to have been with Red Hat for that long (although I’m coming up on 8 years myself), I’ve watched the changes from the “0day” of the Red Hat Security Response Team to today. In fact, our SRT was the basis for the security team that Mandrakesoft started back in the day.

    In 1999, I started working for Mandrakesoft, primarily as a packager/maintainer. The offer came, I suspect, because of the amount of time I spent volunteering to maintain packages in the distribution. I also was writing articles for TechRepublic at the time, so I also ended up being responsible for some areas of documentation, contributing to the manual we shipped with every boxed set we sold (remember when you bought these things off the shelf?).

    Way back then, when security flaws were few and far between (well, the discovery of these flaws, not the existence of them, as we’ve found much to our chagrin over the years), there was one individual at Mandrakesoft who would apply fixes and release them. The advisory process was ad-hoc at best, and as we started to get more volume it was taking his time away from kernel hacking and so they turned to me to help. Having no idea that this was a pivotal turning point and would set the tone and direction of the next 16 years of my life, I accepted. The first security advisory I released for Linux-Mandrake was an update to BitchX in July of 2000. So in effect, while Red Hat Product Security celebrated 15 years of existence this summer, I celebrated my 16th anniversary of “product security” in open source.

    When I look back over those 16 years, things have changed tremendously. When I started the security “team” at Mandrakesoft (which, for the majority of the 8 years I spent there, was a one-man operation!) I really had no idea what the future would hold. It blows my mind how far we as an industry have come and how far I as an individual have come as well. Today it amazes me how I handled all of the security updates for all of our supported products (multiple versions of Mandriva Linux, the Single Network Firewall, Multi-Network Firewall, the Corporate Server, and so on). While there was infrastructure to build the distributions, there was none for building or testing security updates. As a result, I had a multi-machine setup (pre-VM days!) with a number of chroots for building and others for testing. I had to do all of the discovery, the patching, backporting, building, testing, and the release. In fact, I wrote the tooling to push advisories, send mail announcements, build packages across multiple chroots, and more. The entire security update “stack” was written by me and ran in my basement.

    During this whole time I looked to Red Hat for leadership and guidance. As you might imagine, we had to play a little bit of catchup many times and when it came to patches and information, it was Red Hat that we looked at primarily (I’m not at all ashamed to state that quite often we would pull patches from a Red Hat update to tweak and apply to our own packages!). In fact, I remember the first time I talked with Mark Cox back in 2004 when we, along with representatives of SUSE and Debian, responded to the claims that Linux was less secure than Windows. While we had often worked well together through cross-vendor lists like vendor-sec and coordinated on embargoed issues and so on, this was the first real public stand by open source security teams against some mud that was being hurled against not just our products, but open source security as a whole. This was one of those defining moments that made me scary-proud to be involved in the open source ecosystem. We set aside competition to stand united against something that deeply impacted us all.

    In 2009 I left Mandriva to work for Red Hat as part of the Security Response Team (what we were called back then). Moving from a struggling small company to a much larger company was a very interesting change for me. Probably the biggest change and surprise was that Red Hat had the developers do the actual patching and building of packages they normally maintained and were experienced with. We had a dedicated QA team to test this stuff! We had a rigorous errata process that automated as much as possible and enforced certain expectations and conditions of both errata and associated packages. I was actually able to focus on the security side of things and not the “release chain” and all parts associated with it, plus there was a team of people to work with when investigating security issues.

    Back at Mandriva, the only standard we focused on was the usage of CVE. Coming to Red Hat introduced me to the many standards that we not only used and promoted, but also helped shape. You can see this in CVE, and now DWF, OpenSCAP and OVAL, CVRF, the list goes on. Not only are we working to make, and keep, our products secure for our customers, but we apply our expertise to projects and standards that benefit others as these standards help to shape other product security or incident response teams, whether they work on open source or not.

    Finally (as an aside and a “fun fact”), when I first started working at Mandrakesoft with open source and Linux, I got a tattoo of Tux on my calf. A decade later, I got a tattoo of Shadowman on the other calf. I’m really lucky to work on things with cool logos; however, I’ve so far resisted getting a tattoo of the Heartbleed logo!

    I sit and think about that initial question that I was asked 16 years ago: “Would you want to handle the security updates?”. I had no idea it would send me to work with the people, places, and companies that I have. No question that there were challenges and more than a few times I’m sure that the internet was literally on fire but it has been rewarding and satisfying. And I consider myself fortunate that I get to work every day with some of the smartest, most creative, and passionate people in open source!

    Posted: 2016-10-24T13:30:00+00:00
  • Happy 15th Birthday Red Hat Product Security

    Authored by: Mark J. Cox

    This summer marked 15 years since we founded a dedicated Product Security team for Red Hat. While we often publish information in this blog about security technologies and vulnerabilities, we rarely give an introspection into the team itself. So I’d like, if I may, to take you on a little journey through those 15 years and call out some events that mean the most to me; particularly what’s changed and what’s stayed the same. In the coming weeks some other past and present members of the team will be giving their anecdotes and opinions too. If you have a memory of working with our team we’d love to hear about it, you can add a comment here or tweet me.

    Our story starts, however, before I joined the company. Red Hat was producing Red Hat Linux in the 1990s and shipping security updates for it. Here’s an early security update notice from 1998, and the first formal Red Hat Security Advisory (RHSA), RHSA-1999:013. Red Hat would collaborate on security issues with other Linux distributors on a private communication list called “vendor-sec”; an engineer would then build and test updates prior to them being signed and made available.

    In Summer 2000, Red Hat acquired C2Net, a security product company I was working at. C2Net was known for the most widely used secure web server at the time, Stronghold. Red Hat was a small company and so with my security background (being also a founder of the Apache Software Foundation and OpenSSL) it was common for all questions on anything security related to end up at my desk. Although our engineers were responsive to dealing with security patches in Red Hat Linux, we didn’t have any published processes in handling issues or dealing with researchers and reporters, and we knew it needed something more scalable for the future and when we had more than one product. So with that in mind I formed the Red Hat Security Response Team (SRT) in September 2001.

    The mission for the team was a simple one: to be “responsible for ensuring that security issues found in Red Hat products and services are addressed”. The charter went into a little more detail:

    • Be a contact point for our customers who have found security issues in our products or services, and publish our procedures for dealing with this contact;
    • Track alerts and security issues within the community which may affect users of Red Hat products and services;
    • Investigate and address security issues in our supported products and services;
    • Ensure timely security fixes for our products;
    • Ensure that customers can easily find, obtain, and understand security advisories and updates;
    • Help customers keep their systems up to date, and minimize the risk of security issues;
    • Work with other vendors of Linux and open source software (including our competitors) to reduce the risk of security issues through information sharing and peer review.

    That mission and the detailed charter were published on our web site along with many of our policies and procedures. Over the years this has changed very little, and our mission today maps closely to that original one. From day one we wanted to be responsive to anyone who mailed the security team so we set a high internal SLA goal to have a human response to incoming security email within one business day. We miss that high standard from time to time, but we average over 95% achievement.

    Fundamentally, all software has bugs; some bugs have a security consequence. If you’re a product vendor you need a security response team to handle tracking and fixing those security flaws. Given that Red Hat products are comprised of open source software, this presents some unique challenges: how do you deal with security issues in a supply chain comprising thousands of different projects, each with its own development team, policies, and methodologies? From those early days Red Hat worked out how to do this and do it well. We leveraged the “Getting Things Done” (GTD) methodology to create internal workflows and processes that worked in a stressful environment: where every day could bring something new, and work was mostly comprised of interruptions, you need to have a system you can trust so tasks can be postponed and reprioritised without getting lost.

    "Red Hat has had the best track record in dealing with third-party vulnerabilities. This may be due to the extent of their involvement with third-party vendors and the open-source community, as they often contribute their own patches and work closely with third-party vendors." -- Symantec Internet Security Threat Report 2007

    By 2002 we had started using Common Vulnerabilities and Exposures (CVE) names to identify vulnerabilities, not just during the publication of our advisories, but for helping with the co-ordination of issues in advance between vendors, an unexpected use that was a pleasant surprise to the creators at MITRE. As a CVE editorial board member I would be personally asked to vote on every vulnerability (vulnerabilities would start out as candidates, with a CAN- prefix, before migrating to full CVE- names). As you can imagine that process didn’t last long as the popularity of using CVE names across the industry meant the number of vulnerabilities being handled started to rapidly increase. Now it is uncommon to hear about any vulnerability that doesn’t have a CVE name associated with it. Scaling the CVE process became a big issue in the last few years and hit a crisis point; however in 2016 the DWF project forced changes which should help address these concerns long term, forcing a distributed process instead of one with a bottleneck.

    In the early 2000’s, worms that attacked Linux were fairly common, affecting services that were usually enabled and Internet facing by default such as “sendmail” and “samba”. None of the worms were “0 day” however, they instead exploited vulnerabilities which had had updates to address them released weeks or months prior. September 2002 saw the “Slapper worm” which affected OpenSSL via the Apache web server, “Millen” followed in November 2002 exploiting IMAP. By 2005, Red Hat Enterprise Linux shipped with randomization, NX, and various heap and other protection mechanisms which, together with more secure application defaults (and SELinux enabled by default), helped to disrupt worms. By 2014 logos and branded flaws took over our attentions, and exploits became aimed at making money through botnets and ransomware, or designed for leaking secrets.

    As worms and exploits with clever names were common then, vulnerabilities with clever names, domain names, logos, and other branding are common now. This trend really started in 2014 with the OpenSSL flaw “Heartbleed”. Heartbleed was a serious issue that helped highlight the lack of attention in some key infrastructure projects. But not all the named security issues that followed were important. As we’ve shown in the past just because a vulnerability gets a fancy name doesn’t mean it’s a significant issue (also true in reverse). These issues highlighted the real importance of having a dedicated product security team – a group to weed through the hype and figure out the real impact to the products you supply to your customers. It really has to be a trusted partnership with the customer though, as you have to prove that you’re actually doing work analysing vulnerabilities with security experts, and not simply relying on a press story or third-party vulnerability score. Our Risk Report for 2015 took a look at the branded issues and which ones mattered (and why) and at the Red Hat Summit for the last two years we’ve played a “Game of Flaws” card game, matching logos to vulnerabilities and talking about how to assess risk and figure out the importance of issues.

    Just like Red Hat itself, SRT was known for its openness and transparency. By 2005 we were publishing security data on every vulnerability we addressed along with metrics on when the issue was public, how long it was embargoed, the issue's CWE type, CVSS scoring, and more. We provided XML feeds of vulnerabilities, scripts that could run risk reports, along with detailed blogs on our performance and metrics. In 2006 we started publishing Open Vulnerability Assessment Language (OVAL) definitions for Red Hat Enterprise Linux products, allowing industry-standard tools to test systems for outstanding errata. These OVAL definitions are consumed today by tools such as OpenSCAP. Our policy of backporting security fixes caused problems for third-party scanning tools in the early days, but by using data such as our OVAL definitions they can now give accurate results to our mutual customers. As new security standards emerged, like the Common Vulnerability Reporting Framework (CVRF) in 2011, we’d get involved in the definitions and embrace them; in that case we helped define the fields and provided initial example content to help promote the standard. While originally we provided this data in downloadable files on our site, we now have an open API allowing easier access to all our vulnerability data.

    "Red Hat's transparency on its security performance is something that all distributions should strive for -- especially those who would tout their security response" -- Linux Weekly News (July 2008)

    Back in 2005 this transparency on metrics was especially important, as our competitors (vendors of non-open-source operating systems) were publishing industry reports comparing vulnerability “days of risk” and doing demonstrations with bags of candies showing how many more vulnerabilities we were fixing than they were. Looking back it’s hard to believe anyone took them seriously. Our open data helped counter these reports and establish that they were not comparing things in a “like-for-like” way; for example, treating all issues as having the same severity, or not counting issues found by the vendors themselves. We even did a joint statement with other Linux distributions, something unprecedented. We still publish frequent “risk reports” which give an honest assessment of how well we handled security issues in our products, as well as helping customers figure out which issues mattered.

    Our team grew substantially over the years, both in numbers of associates and in diversity - with staff spread across time zones, offices, and in ten different countries. Our work also was not just the reactive security work but involved proactive work such as auditing and bug finding too. Red Hat associates in our team also help in upstream communities to help projects assess and deal with security issues and present at technical conferences. We’d also help secure the internal supply chain, such as providing facilities and processes for package signing using hardware cryptographic modules. This led a few years ago to the rebranding as “Red Hat Product Security” to better reflect this multi-faceted nature of the team. Red Hat associates continue to find flaws in Red Hat products as well as in products and services from other companies which we report responsibly. In 2016 for example 12% of issues addressed in our products were discovered by Red Hat associates, and we continue to work with our peers on embargoed security issues.

    In our first year we released 152 advisories to address 147 vulnerabilities. In the last year we released 642 advisories to address 1415 vulnerabilities across more than 100 different products, and 2016 saw us release our 5000th advisory.

    In a subscription-based business you need to continually show value to customers, and we talk about it in terms of providing the highest quality of security service. We are already well known for getting timely updates out for critical issues: for Red Hat Enterprise Linux in 2016, for example, 96% of Critical security issues had an update available the same or next calendar day after the issue was made public. But our differentiator is not just providing timely security updates; it’s a much bigger involvement. Take the issue in bash in 2014 which was branded “Shellshock” as an example. Our team's response was to ensure we provided timely fixes, but also to provide proactive notifications to customers, through our technical account managers and portal notifications, as well as knowledge base and solution articles to help customers quickly understand the issue and their exposure. Our engineers created the final patch used by vendors to address the issue, we provided testing tools, and our technical blog describing the flaw was the definitive source of information, referenced as authoritative by news articles and even US-CERT.

    My favourite quote comes from Alan Kay in 1971: “The best way to predict the future is to invent it”. I’m reminded every day of the awesome team of world-class security experts we’ve built up at Red Hat and I enthusiastically look forward to helping them define and invent the future of open source product security.

    Posted: 2016-10-17T13:30:00+00:00
  • A bite of Python

    Authored by: Ilya Etingof

    Being easy to pick up and quick to progress towards developing larger and more complicated applications, Python is becoming increasingly ubiquitous in computing environments. However, the language's apparent clarity and friendliness can lull the vigilance of software engineers and system administrators -- luring them into coding mistakes that may have serious security implications. In this article, which primarily targets people who are new to Python, we look at a handful of security-related quirks; experienced developers may well be aware of the peculiarities that follow.

    Input function

    In the large collection of Python 2 built-in functions, input is a total security disaster. Once called, whatever is read from stdin gets immediately evaluated as Python code:

       $ python2
        >>> input()
        dir()
        ['__builtins__', '__doc__', '__name__', '__package__']

    Clearly, the input function must never ever be used unless data on a script's stdin is fully trusted. The Python 2 documentation suggests raw_input as a safe alternative. In Python 3 the input function becomes equivalent to raw_input, fixing this weakness once and for all.
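
    In Python 3 the evaluate-from-stdin behavior is gone, but the underlying hazard of evaluating untrusted text remains. A minimal sketch of the difference, contrasting eval (which is effectively what Python 2's input applies to stdin) with the stdlib ast.literal_eval, which accepts literals only:

```python
import ast

untrusted = "__import__('os').getcwd()"  # attacker-controlled text

# eval() executes the text as Python code:
result = eval(untrusted)
print(result == __import__('os').getcwd())  # -> True: the code ran

# ast.literal_eval() accepts only Python literals and rejects code:
try:
    ast.literal_eval(untrusted)
    print('accepted')
except ValueError:
    print('rejected')  # -> rejected
```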

    Assert statement

    There is a coding idiom of using assert statements to catch supposedly impossible conditions in a Python application.

       def verify_credentials(username, password):
           assert username and password, 'Credentials not supplied by caller'
           ... authenticate possibly null user with null password ...

    However, Python does not produce any instructions for assert statements when compiling source code into optimized byte code (e.g. python -O). That silently removes whatever protection against malformed data the programmer wired into their code, leaving the application open to attacks.

    The root cause of this weakness is that the assert mechanism is designed purely for testing purposes, as it is in C++. Programmers must use other means for ensuring data consistency.
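
    An explicit check that raises an exception survives python -O, unlike assert. A minimal sketch, mirroring the verify_credentials example above:

```python
def verify_credentials(username, password):
    # An 'if' check is always compiled in; 'assert' is stripped by python -O
    if not (username and password):
        raise ValueError('Credentials not supplied by caller')
    # ... proceed to authenticate ...
    return True

print(verify_credentials('alice', 's3cret'))  # -> True
try:
    verify_credentials('alice', '')
except ValueError as exc:
    print(exc)  # -> Credentials not supplied by caller
```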

    Reusable integers

    Everything is an object in Python. Every object has a unique identity which can be read by the id function. To figure out if two variables or attributes are pointing to the same object the is operator can be used. Integers are objects so the is operation is indeed defined for them:

        >>> 999+1 is 1000
        False

    If the outcome of the above operation looks surprising, keep in mind that the is operator works with identities of two objects -- it does not compare their numerical, or any other, values. However:

        >>> 1+1 is 2
        True

    The explanation for this behavior is that Python maintains a pool of objects representing the first few hundred integers and reuses them to save on memory and object creation. To make it even more confusing, the definition of what "small integer" is differs across Python versions.

    A mitigation here is to never use the is operator for value comparison. The is operator is designed to deal exclusively with object identities.
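
    The distinction can be sketched in two lines; == is the correct value comparison, while the result of is on equal integers depends on interpreter caching and must not be relied upon:

```python
a = 999 + 1
b = 1000

# '==' compares values and is always correct for numbers:
print(a == b)   # -> True

# 'is' compares object identities; for integers outside the cached
# "small int" range the result is implementation-dependent:
print(a is b)
```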

    Floats comparison

    Working with floating point numbers may get complicated due to inherently limited precision and differences stemming from decimal versus binary fraction representation. One common cause of confusion is that float comparison may sometimes yield unexpected results. Here's a famous example:

       >>> 2.2 * 3.0 == 3.3 * 2.0
       False

    The cause of the above phenomenon is indeed a rounding error:

       >>> (2.2 * 3.0).hex()
       '0x1.a666666666667p+2'
       >>> (3.3 * 2.0).hex()
       '0x1.a666666666666p+2'

    Another interesting observation is related to the Python float type which supports the notion of infinity. One could reason that everything is smaller than infinity:

       >>> 10**1000000 > float('infinity')
       False

    However, in Python 2, a type object beats infinity:

       >>> float > float('infinity')
       True

    The best mitigation is to stick to integer arithmetic whenever possible. The next best approach would be to use the decimal stdlib module which attempts to shield users from annoying details and dangerous flaws.
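
    The decimal approach can be sketched as follows; math.isclose (Python 3.5+) is shown as the usual tolerant comparison when floats cannot be avoided:

```python
import math
from decimal import Decimal

# Binary floats pick up representation error:
print(2.2 * 3.0 == 3.3 * 2.0)  # -> False

# Decimal arithmetic on string literals stays exact here:
print(Decimal('2.2') * Decimal('3.0') == Decimal('3.3') * Decimal('2.0'))  # -> True

# For floats, compare with a tolerance instead of exact equality:
print(math.isclose(2.2 * 3.0, 3.3 * 2.0))  # -> True
```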

    Generally, when important decisions are made based on the outcome of arithmetic operations, care must be taken not to fall victim to a rounding error. See the issues and limitations chapter in the Python documentation.

    Private attributes

    Python does not support hiding of object attributes, but there is a workaround based on the name mangling of double underscored attributes. Mangling is applied only to attribute names that appear literally in code; attribute names hardcoded into string constants remain unmodified. This may lead to confusing behavior when a double underscored attribute visibly "hides" from the getattr()/hasattr() functions.

       >>> class X(object):
       ...   def __init__(self):
       ...     self.__private = 1
       ...   def get_private(self):
       ...     return self.__private
       ...   def has_private(self):
       ...     return hasattr(self, '__private')
       >>> x = X()
       >>> x.has_private()
       False
       >>> x.get_private()
       1

    For this privacy feature to work, attribute mangling is not performed on attribute references outside of the class definition. That effectively "splits" any given double underscored attribute in two, depending on where it is referenced from:

       >>> class X(object):
       ...   def __init__(self):
       ...     self.__private = 1
       >>> x = X()
       >>> x.__private
       Traceback (most recent call last):
       AttributeError: 'X' object has no attribute '__private'
       >>> x.__private = 2
       >>> x.__private
       2
       >>> hasattr(x, '__private')
       True

    These quirks could turn into a security weakness if a programmer relies on double underscored attributes for making important decisions in their code without paying attention to the asymmetrical behavior of private attributes.
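
    One way to stay out of trouble is to remember where the attribute actually lives. A small sketch of the mangled name:

```python
class X(object):
    def __init__(self):
        self.__private = 1  # stored as _X__private

x = X()
# Inside the class body the name was mangled to _X__private:
print(hasattr(x, '_X__private'))  # -> True
print(x._X__private)              # -> 1
# The unmangled name does not exist until someone sets it from outside:
print(hasattr(x, '__private'))    # -> False
```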

    Module injection

    Python's module import system is powerful and complicated. Modules and packages can be imported by file or directory name found in the search path, as defined by the sys.path list. Search path initialization is an intricate process that also depends on the Python version, platform, and local configuration. To mount a successful attack on a Python application, an attacker needs to find a way to smuggle a malicious Python module into a directory or importable package file that Python will consider when trying to import a module.

    The mitigation is to maintain secure access permissions on all directories and package files in the search path to ensure that unprivileged users do not have write access to them. Keep in mind that the directory containing the initial script invoking the Python interpreter is automatically inserted into the search path.

    Running a script like this reveals the actual search path:

       $ cat myapp.py
       import sys
       import pprint

       pprint.pprint(sys.path)

    On the Windows platform, the current working directory of the Python process is injected into the search path instead of the script location. On UNIX platforms, the current working directory is automatically inserted into sys.path whenever program code is read from stdin or the command line (the "-", "-c", or "-m" options):

       $ echo "import sys, pprint; pprint.pprint(sys.path)" | python -
       $ python -c 'import sys, pprint; pprint.pprint(sys.path)'
       $ cd /tmp
       $ python -m myapp

    To mitigate the risk of module injection from the current working directory, explicitly changing to a safe directory is recommended prior to running Python on Windows or passing code through the command line.

    Another possible source for the search path is the contents of the $PYTHONPATH environment variable. An easy mitigation against sys.path population from the process environment is the -E option to the Python interpreter, which makes it ignore the $PYTHONPATH variable.
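
    As a defensive sketch (not a substitute for correct file system permissions), a program can also drop the empty-string and relative entries from sys.path early, before any further imports happen:

```python
import sys

# '' and '.' make Python import from the current working directory;
# removing them narrows the module injection surface:
sys.path[:] = [entry for entry in sys.path if entry not in ('', '.')]

print('' in sys.path)   # -> False
print('.' in sys.path)  # -> False
```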

    Code execution on import

    It may not look obvious that the import statement actually leads to execution of the code in the module being imported. That is why even importing an untrusted module or package is risky. Importing a simple module like this may lead to unpleasant consequences:

       $ cat malicious.py
       import os
       import sys
       os.system('cat /etc/passwd | mail')
       del sys.modules['malicious']  # pretend it's not imported
       $ python
       >>> import malicious
       >>> dir(malicious)
       Traceback (most recent call last):
       NameError: name 'malicious' is not defined

    Combined with a sys.path entry injection attack, it may pave the way to further system exploitation.

    Monkey patching

    The process of changing the attributes of Python objects at run-time is known as monkey patching. Being a dynamic language, Python fully supports run-time program introspection and code mutation. Once a malicious module gets imported one way or another, any existing mutable object can be silently monkey patched without the programmer's consent. Consider this:

       $ cat nowrite.py
       import builtins

       def malicious_open(*args, **kwargs):
          if len(args) > 1 and args[1] == 'w':
             args = ('/dev/null',) + args[1:]
          return original_open(*args, **kwargs)

       original_open, builtins.open = builtins.open, malicious_open

    If the code above gets executed by the Python interpreter, everything written to files won't be stored on the filesystem:

       >>> import nowrite
       >>> open('data.txt', 'w').write('data to store')
       13
       >>> open('data.txt', 'r')
       Traceback (most recent call last):
       FileNotFoundError: [Errno 2] No such file or directory: 'data.txt'

    An attacker could leverage the Python garbage collector (gc.get_objects()) to get hold of all objects in existence and hack any of them.

    In Python 2, built-in objects can be accessed via the magic __builtins__ module. One of the known tricks exploiting __builtins__ mutability, one that might bring the world to its end, is:

       >>> __builtins__.False, __builtins__.True = True, False
       >>> True
       False
       >>> int(True)
       0

    In Python 3 assignments to True and False won't work so they can't be manipulated that way.

    Functions are first-class objects in Python; a function object maintains references to many properties of the function. In particular, the executable byte code is referenced by the __code__ attribute, which, of course, can be modified:

       >>> import shutil
       >>> shutil.copy
       <function copy at 0x7f30c0c66560>
       >>> shutil.copy.__code__ = (lambda src, dst: dst).__code__
       >>> shutil.copy('my_file.txt', '/tmp')
       '/tmp'
       >>> shutil.copy
       <function copy at 0x7f30c0c66560>

    Once the above monkey patch is applied, despite the shutil.copy function still looking sane, it has silently stopped working because its code was replaced by the no-op lambda.

    The type of a Python object is determined by its __class__ attribute. An evil attacker could hopelessly mess things up by resorting to changing the type of live objects:

       >>> class X(object): pass
       >>> class Y(object): pass
       >>> x_obj = X()
       >>> x_obj
       <__main__.X object at 0x7f62dbe5e010>
       >>> isinstance(x_obj, X)
       True
       >>> x_obj.__class__ = Y
       >>> x_obj
       <__main__.Y object at 0x7f62dbe5d350>
       >>> isinstance(x_obj, X)
       False
       >>> isinstance(x_obj, Y)
       True

    The only mitigation against malicious monkey patching is to ensure the authenticity and integrity of the Python modules being imported.
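
    One way to approach integrity checking is to compare each module file against a digest recorded at packaging time. A hedged sketch using hashlib; the file_sha256 helper and the KNOWN_GOOD comparison are illustrative, not a standard API:

```python
import hashlib

def file_sha256(path):
    """Return the hex SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, 'rb') as handle:
        for chunk in iter(lambda: handle.read(8192), b''):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical use against a digest recorded at packaging time:
# if file_sha256(module_path) != KNOWN_GOOD:
#     raise RuntimeError('module tampered with')
```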

    Shell injection via subprocess

    Python is known as a glue language, so it is quite common for a Python script to delegate system administration tasks to other programs by asking the operating system to execute them, possibly providing additional parameters. The subprocess module offers an easy to use and quite high-level service for such tasks.

       >>> from subprocess import call
       >>> unvalidated_input = '/bin/true'
       >>> call(unvalidated_input)
       0

    But there is a catch! To make use of UNIX shell services, like command line parameter expansion, the shell keyword argument to the call function must be set to True. The first argument to the call function is then passed as-is to the system shell for further parsing and interpretation. Once unvalidated user input reaches the call function (or other functions implemented in the subprocess module) this way, a hole is opened onto the underlying system resources.

       >>> from subprocess import call
       >>> unvalidated_input = '/bin/true'
       >>> unvalidated_input += '; cut -d: -f1 /etc/passwd'
       >>> call(unvalidated_input, shell=True)

    It is obviously much safer not to invoke the UNIX shell for external command execution, by leaving the shell keyword in its default False state and supplying a vector of the command and its parameters to the subprocess functions. In this second invocation form, neither the command nor its parameters are interpreted or expanded by the shell.

       >>> from subprocess import call
       >>> call(['/bin/ls', '/tmp'])

    If the nature of the application dictates the use of UNIX shell services, it is critically important to sanitize everything that goes to subprocess, making sure that no unwanted shell functionality can be exploited by malicious users. In newer Python versions, shell escaping can be done with the standard library's shlex.quote function.
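
    A short sketch of shlex.quote (available since Python 3.3; the older pipes.quote played a similar role):

```python
import shlex

unvalidated_input = '/bin/true; cut -d: -f1 /etc/passwd'

# shlex.quote() wraps the value so the shell sees one literal word;
# the injected ';' loses its meaning as a command separator:
quoted = shlex.quote(unvalidated_input)
print(quoted)  # -> '/bin/true; cut -d: -f1 /etc/passwd'
# call('ls -l %s' % quoted, shell=True) would receive it as plain text
```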

    Temporary files

    While vulnerabilities based on improper use of temporary files strike many programming languages, they are still surprisingly common in Python scripts so it's probably worth mentioning here.

    Vulnerabilities of this kind leverage insecure file system access permissions, possibly involving intermediate steps, ultimately leading to data confidentiality or integrity issues. Detailed description of the problem in general can be found in CWE-377.

    Luckily, Python is shipped with the tempfile module in its standard library which offers high-level functions for creating temporary file names "in the most secure manner possible". Beware the flawed tempfile.mktemp implementation which is still present in the library for backward compatibility reasons. The tempfile.mktemp function must never be used! Instead, use tempfile.TemporaryFile, or tempfile.mkstemp if you need the temporary file to persist after it is closed.
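
    A minimal sketch of the safe pattern with tempfile.mkstemp:

```python
import os
import tempfile

# mkstemp() creates the file atomically with owner-only permissions
# and returns an already-open descriptor, closing the mktemp() race:
fd, path = tempfile.mkstemp(suffix='.txt')
try:
    with os.fdopen(fd, 'w') as scratch:
        scratch.write('scratch data')
finally:
    os.unlink(path)
```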

    Another way of accidentally introducing a weakness is through the use of the shutil.copyfile function. The problem here is that the destination file is created in the most insecure manner possible.

    A security-savvy developer may consider first copying the source file into a random temporary file name, then renaming the temporary file to its final name. While this may look like a good plan, it can be rendered insecure by the shutil.move function if it is used to perform the renaming. The trouble is that if the temporary file is created on a file system other than the one where the final file is to reside, shutil.move will fail to move it atomically (via os.rename) and will silently resort to the insecure shutil.copy. A mitigation is to prefer os.rename over shutil.move, as os.rename is guaranteed to fail explicitly on operations across file system boundaries.

    Further complications may arise from the inability of shutil.copy to copy all file metadata, potentially leaving the created file unprotected.

    Though not exclusively specific to Python, care must be taken when modifying files on file systems of non-mainstream types, especially remote ones. Data consistency guarantees tend to differ in the area of file access serialization. As an example, NFSv2 does not honour the O_EXCL flag to the open system call, which is crucial for atomic file creation.

    Insecure deserialization

    Many data serialization techniques exist; among them, Pickle is designed specifically to serialize and deserialize Python objects. Its goal is to dump live Python objects into an octet stream for storage or transmission, then reconstruct them, possibly in another Python instance. The reconstruction step is inherently risky if the serialized data has been tampered with. The insecurity of Pickle is well recognized and clearly noted in the Python documentation.
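
    The risk can be sketched in a few lines. The __reduce__ hook below makes pickle call a function of our choice during loading; os.getcwd stands in for something far nastier, such as os.system:

```python
import os
import pickle

class Evil(object):
    def __reduce__(self):
        # The pickle stream records "call os.getcwd()" instead of state:
        return (os.getcwd, ())

payload = pickle.dumps(Evil())
result = pickle.loads(payload)   # executes os.getcwd() during load
print(result == os.getcwd())     # -> True: loads() ran our code
```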

    Being a popular configuration file format, YAML is not necessarily perceived as a powerful serialization protocol capable of tricking a deserializer into executing arbitrary code. What makes it even more dangerous is that the de facto default YAML implementation for Python, PyYAML, makes deserialization look very innocent:

       >>> import yaml
       >>> dangerous_input = """
       ... some_option: !!python/object/
       ...   args: [cat /etc/passwd | mail]
       ...   kwds: {shell: true}
       ... """
       >>> yaml.load(dangerous_input)
       {'some_option': 0}

    ...while /etc/passwd is being stolen. The suggested fix is to always use yaml.safe_load for handling YAML serialization you can't trust. Still, the current PyYAML default feels somewhat provocative, considering that other serialization libraries tend to use the dump/load function names for similar purposes, but in a safe manner.

    Templating engines

    Web application authors adopted Python long ago. Over the course of a decade, quite a number of web frameworks have been developed. Many of them utilize templating engines to generate dynamic web content from, well, templates and runtime variables. Aside from web applications, templating engines have found their way into quite different software, such as the Ansible IT automation tool.

    When content is rendered from static templates and runtime variables, there is a risk of user-controlled code injection through the runtime variables. A successfully mounted attack against a web application may lead to a cross-site scripting vulnerability. The usual mitigation for server-side template injection is to sanitize the contents of template variables before they are interpolated into the final document. The sanitization can be done by denying, stripping off, or escaping characters that are special to any given markup or other domain-specific language.
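
    For HTML output, the escaping step can be as simple as the standard library's html.escape (Python 3; cgi.escape played this role in Python 2):

```python
import html

untrusted = '<script>do_evil()</script>'
print(html.escape(untrusted))  # -> &lt;script&gt;do_evil()&lt;/script&gt;
```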

    Unfortunately, templating engines do not seem to lean towards tighter security here -- looking at the most popular implementations, none of them applies an escaping mechanism by default, relying instead on the developer's awareness of the risks.

    For example, Jinja2, which is probably one of the most popular tools, renders everything:

       >>> from jinja2 import Environment
       >>> template = Environment().from_string('{{ variable }}')
       >>> template.render(variable='<script>do_evil()</script>')
       u'<script>do_evil()</script>'

    ...unless one of many possible escaping mechanisms is explicitly engaged by reversing its default settings:

       >>> from jinja2 import Environment
       >>> template = Environment(autoescape=True).from_string('{{ variable }}')
       >>> template.render(variable='<script>do_evil()</script>')
       u'&lt;script&gt;do_evil()&lt;/script&gt;'

    An additional complication is that, in certain use-cases, programmers do not want to sanitize all template variables, intentionally leaving some of them holding potentially dangerous content intact. Templating engines address that need by introducing "filters" to let programmers explicitly sanitize the contents of individual variables. Jinja2 also offers the possibility of toggling the escaping default on a per-template basis.

    It can get even more fragile and complicated if developers choose to escape only a subset of markup language tags, letting others legitimately sneak into the final document.


    This blog post is not meant to be a comprehensive list of all potential traps and shortcomings specific to the Python ecosystem. The goal is to raise awareness of security risks that may come into being once one starts coding in Python, hopefully making programming more enjoyable, and our lives more secure.

    Posted: 2016-09-07T13:30:00+00:00
  • Using the Java Security Manager in Enterprise Application Platform 7

    Authored by: Jason Shepherd

    JBoss Enterprise Application Platform (EAP) 7 allows the definition of Java security policies per application. The way it's implemented means that we're also able to define security policies per module, in addition to defining one per application. The ability to apply the Java Security Manager per application, or per module, makes EAP 7 a versatile tool for the mitigation of serious security issues, and useful for applications with strict security requirements.

    The main difference between EAP 6 and 7 is that EAP 7 implements the Java Enterprise Edition 7 specification. Part of that specification is the ability to add Java Security Manager permissions per application. How that works in practice is that the application server defines a minimum set of policies that must be enforced, as well as a maximum set of policies that an application is allowed to grant to itself.

    Let’s say we have a web application which wants to read Java system properties, for example via a System.getProperty() call.


    If you ran this with the Security Manager enabled, the call would throw an AccessControlException. In order to enable the Security Manager, start JBoss EAP 7 with the option -secmgr, or set SECMGR to true in the standalone or domain configuration files.

    Now if you added the following permissions.xml file to the META-INF folder in the application archive, you could grant permissions for the Java System Property call:

    Add to META-INF/permissions.xml of application:

    <permissions ..>

    The WildFly Security Manager in EAP 7 also provides some extra methods for performing privileged actions. Privileged actions are ones that won't trigger a security check. In order to use these methods, however, the application needs to declare a dependency on the WildFly Security Manager. These methods can be used by developers instead of the built-in PrivilegedActions in order to improve the performance of the security checks. There are a few of these optimized methods:

    • getPropertyPrivileged
    • getClassLoaderPrivileged
    • getCurrentContextClassLoaderPrivileged
    • getSystemEnvironmentPrivileged

    For more information about custom features built into the WildFly Security Manager, see this presentation slide deck by David Lloyd.

    Out of the box EAP 7 ships with a minimum, and maximum policy like so:


    <subsystem xmlns="urn:jboss:domain:security-manager:1.0">
                <permission class="java.security.AllPermission"/>

    That doesn't enforce any particular permissions on applications, and grants them AllPermissions if they don’t define their own. If an administrator wanted to grant at least permission to read system properties to all applications, then they could add this policy:


    <subsystem xmlns="urn:jboss:domain:security-manager:1.0">
                <permission class="java.util.PropertyPermission" name="*" actions="read,write"/>
                <permission class="java.security.AllPermission"/>

    Alternatively, if they wanted to restrict all permissions for all applications except a FilePermission, then they could use a maximum policy like so:

    <subsystem xmlns="urn:jboss:domain:security-manager:1.0">
                <permission class="java.io.FilePermission" name="/tmp/abc" actions="read,write"/>

    Doing so would mean that the previously described web application, which required a PropertyPermission, would fail to deploy, because it tries to grant itself permission to read properties, which is not granted by the application administrator. There is a chapter on using the Security Manager in the official documentation for EAP 7.

    Enabling the Security Manager after development of an application can be troublesome, because a developer would then need to add the correct policies one at a time, as the AccessControlExceptions were hit. However, the WildFly Security Manager in EAP 7 will have a debug mode which, if enabled, doesn’t enforce permission checks but logs violations of the policy. In this way, a developer could see all the permissions which need to be added after one test run of the application. This feature hasn’t been backported from upstream yet; however, a request to get it backported has been made. In the EAP 7 GA release you can get extra information about access violations by enabling DEBUG logging for the Security Manager.

    When you run with the Security Manager in EAP 7, each module is able to declare its own set of unique permissions. If you don’t define permissions for a module, a default of AllPermissions is granted. Being able to define Security Manager policies per module is powerful, because it can prevent a serious security impact when a sensitive or vulnerable feature of the application server is compromised. That gives Red Hat the ability to provide a workaround for a known security vulnerability via a configuration change to a module which limits the impact. For example, to restrict the permissions of the JGroups modules to only the things required, you could add the following permissions block to the JGroups module:


        <grant permission="java.io.FilePermission" name="${env.EAP_HOME}/modules/system/layers/base/org/jgroups/main/jgroups-3.6.6.Final-redhat-1.jar" actions="read"/>
        <grant permission="java.util.PropertyPermission" name="jgroups.logging.log_factory_class" actions="read"/>
        <grant permission="java.io.FilePermission" name="${env.EAP_HOME}/modules/system/layers/base/org/jboss/as/clustering/jgroups/main/wildfly-clustering-jgroups-extension-10.0.0.CR6-redhat-1.jar" actions="read"/>

    In the EAP 7 GA release, the use of ${env.EAP_HOME} as above won't work yet. That feature has been implemented upstream, and its backporting can be tracked. The feature will make file paths compatible between systems by adding support for system property and environment variable expansion in module.xml permission blocks, making the release of generic security permissions viable.

    While the Security Manager could be used to provide multi-tenancy for the application server, Red Hat does not think it is suitable for that. Our Java multi-tenancy in OpenShift is achieved by running each tenant’s application in a separate Java Virtual Machine, with the operating system providing sandboxing via SELinux. This was discussed within the JBoss community, with the view of Red Hat reflected in this post.

    In conclusion, EAP 7 introduced the WildFly Java Security Manager, which allows an application developer to define security policies per application, while also allowing an application administrator to define security policies per module, or a set of minimum or maximum security permissions for applications. Enabling the Security Manager will have an impact on performance. Red Hat recommends taking a holistic approach to the security of an application, and not relying on the Security Manager alone.

    Posted: 2016-07-13T13:30:00+00:00
  • Java Deserialization attacks on JBoss Middleware

    Authored by: Jason Shepherd

    Recent research by Chris Frohoff and Gabriel Lawrence has exposed gadget chains in various libraries that allow code to be executed during object deserialization in Java. They've done some excellent research, including publishing some code that allows anyone to serialize a malicious payload that, when deserialized, runs the operating system command of their choice as the user which started the Java Virtual Machine (JVM). The vulnerabilities are not in the gadget chains themselves but in the code that deserializes them.

    What is a gadget chain?

    Perhaps the simplest example is a list. With some types of lists, it’s necessary to compare objects in order to determine their order in the list. For example, a PriorityQueue orders objects by comparing them with each other during its construction. It takes a Comparator object which can call any method you choose on the objects in the list. If that method contains a call to Runtime.exec(), then you can execute that code during construction of the PriorityQueue object.


    There are a couple of ways in which this type of attack on the JVM can be mitigated:

    1. not deserializing untrusted objects;
    2. not having the classes used in the 'gadget chain' in the classpath;
    3. running the JVM as a non-root operating system user, with reduced privileges;
    4. egress filtering not allowing any outbound traffic other than that matching a connection for which the firewall already has an existing state table entry.

    The first is the best approach, as it prevents every kind of gadget chain a malicious attacker can create, even one devised from classes in the JVM itself. The second is OK, but it has its limits, as new gadget chains are made public often and it's hard to keep up with the growing tide of them. Fortunately, Enterprise Application Platform (EAP) 6 introduced a module classloader that restricts which classes are available in the classpath of each module. That makes it much harder to find a classloader with access to all the classes used by a gadget chain.

    The 3rd and 4th options are just good general security practices. If you want to serve content on port 80 of your host, you should use a firewall or load balancer to redirect requests from port 80 to the JVM on another port above 1024, where your unprivileged JVM process is listening. You should not run a JVM as root in order to bind to a port below 1024, as doing so will allow a compromised JVM to run commands as root.

    Egress filtering is particularly useful as a mitigation against deserialization attacks because output from the remote code execution is not returned to an attacker. The technique used by Java deserialization attacks results in the normal flow of Java execution being interrupted and an exception being thrown. So while an attacker has write and execute permissions of the user running the JVM, they don't have access to read files or shell command output, unless they can open a new connection which "phones home".

    EAP 5

    EAP 5 is still widely used, and it does allow deserialization of untrusted objects via the Legacy Invoker Servlet. On top of that, its classloading structure is flat, with most libraries, including the classes from the gadget chains, available in the classpath. Anyone still running EAP 5 is highly recommended to bind the Legacy Invoker Servlet only to a network interface card (NIC) which is not publicly accessible. This also applies to products layered on EAP 5, such as SOA Platform (SOA-P) 5.

    EAP 6 and EAP 7

    While EAP 6, and EAP 7 are more robust because of the module classloader system, they can still be vulnerable. Users of these versions who are utilizing the clustering features should ensure that they are running their clustering on a dedicated Virtual Local Area Network (VLAN) and not over the Internet. That includes users of JBoss Data Grid (JDG) which uses the clustering features in the default configuration. If you don’t have a dedicated VLAN make sure you encrypt your clustering traffic. This issue is addressed in the JBoss Middleware product suite by the fix for CVE-2016-2141.


    While deserialization attacks are a serious threat to JBoss Middleware, with the correct planning, and deployment configuration, the risk can be greatly reduced. Anyone running EAP 5, or layered products, should disable or restrict access to the Legacy Invoker Servlet, while anyone using the clustering feature in EAP should apply the fix for CVE-2016-2141, or make sure their clustering traffic is sent only over a dedicated VLAN.

    Posted: 2016-07-06T13:30:00+00:00
  • Redefining how we share our security data

    Authored by: Vincent Danen

    Red Hat Product Security has long provided various bits of machine-consumable information to customers and users via our Security Data page. Today we are pleased to announce that we have made it even easier to access and parse this data through our new Security Data API service.

    While we have provided this information since January 2005, it required end users to download the content from the site, which meant you either downloaded many files and kept a local copy, or you were downloading large files on a regular basis. It also meant that, when writing a parser, if you were looking for certain criteria, you had to account for those criteria in the parser, which could make it more complex and difficult to write.

    Although the Security Data API doesn’t remove the need for a parser (you need something to handle the provided data), it does offer a lot of search options so that you can leverage the API to obtain real time data.

    So what information can you obtain via the API? Currently it provides CVE information for flaws that affected components we ship in supported products, as well as CVRF (Common Vulnerability Reporting Framework) documents and OVAL (Open Vulnerability Assessment Language) definitions. While CVRF documents and OVAL definitions are provided in their native XML format, the API also provides that information in JSON format for easier parsing. This means that you can use any CVRF or OVAL parser with the feed, and you can also write your own JSON parser to get the representative data for them as well.

    Most users will be interested in the CVE data, which we have been providing as part of our CVE database since August 2009. If you wanted to get information on CVE-2016-0800, for instance, you would visit its CVE page. If you were using this information for vulnerability assessment or reporting, you would have had to do some web scraping and involve other documents on our Security Data page.

    With the Security Data API you can view the information for this CVE in two ways: XML and JSON. This uses our own markup to describe the flaw, and from this view you can see the CVSSv2 score and metrics (or CVSSv3 score and metrics), as well as impact rating, links to Bugzilla, the date it went public, and other details of the flaw.

    While this is interesting, and we think it will be incredibly useful, the really compelling part of the API is the search queries you can perform. For instance, if you wanted to find all Critical impact flaws with a CVSSv2 score of 8 or greater, a single search query returns a JSON list of the CVEs that meet these criteria.

    If you wanted to find all CVEs that were made public in the last week (assuming today is June 1st, 2016), you would add a date filter to the query, and if you further wanted only those that affected the firefox package, you would add a package filter as well.

    Perhaps you only want information on the CVEs that were addressed in RHSA-2016:1217. You could query the API for that advisory to get the list of CVEs and some of their details, and then iterate through the CVEs to get further details of each.
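
    These search criteria map naturally onto URL query parameters. Here is a minimal sketch in Python; note that the base URL and the parameter names used (severity, cvss_score, after, package) are illustrative assumptions — consult the Security Data API documentation for the authoritative endpoint and parameter list.

```python
from urllib.parse import urlencode

# Illustrative base URL -- see the Security Data API documentation for
# the real endpoint and the full set of supported search parameters.
BASE = "https://access.redhat.com/labs/securitydataapi"

def cve_query(**criteria: str) -> str:
    """Build a cve.json search URL from keyword search criteria."""
    return f"{BASE}/cve.json?{urlencode(criteria)}"

# All Critical impact flaws with a CVSSv2 score of 8 or greater:
print(cve_query(severity="critical", cvss_score="8"))
# CVEs made public after a given date, restricted to the firefox package:
print(cve_query(after="2016-05-25", package="firefox"))
```

    The same pattern would apply to the CVRF and OVAL endpoints, since they accept the same search parameters.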

    The same search parameters are available for CVRF documents and OVAL definitions. You have the flexibility to obtain the details via XML or JSON. The ability to get the data in multiple formats allows you to write parsers for any of these formats and also allows you to write the parsers in any language you choose. These parsers can further take arguments such as severity, date ranges, CVSS scores, CWE identifiers, package names and more, which are in turn used as search criteria when using the API.

    Red Hat Product Security strives to be as transparent as possible when it comes to how we handle security flaws, and the Security Data page has been the primary source of this information, as far as machine-consumable content is concerned. With the availability of the Security Data API, we think this is the best and most user-friendly way to consume this content and are pleased to provide it. Being able to programmatically obtain this information on-the-fly will now allow Red Hat customers and users to generate all kinds of interesting reports, or enhance existing reports, all in real-time.

    We are also pleased to say that the beta API does not require any kind of authentication or login to access, and it is available for anyone to use.

    There is one last thing to note, however. The API, at this time, is in beta, and the structure of the content, including how it is searched, may change at any time without prior notification. We will only be supporting this one version of the API for now; however, if we make any changes we will note them in the documentation.

    For further information and instructions on how to use the API, please visit the Security Data API documentation. If you encounter an error in any of the data, please contact us and let us know.

    Posted: 2016-06-23T13:30:00+00:00
  • How Red Hat uses CVSSv3 to Assist in Rating Flaws

    Authored by: Christopher Robinson

    Humans have been measuring risk since the dawn of time. "I'm hungry, do I go outside my awesome cave here and forage for food? There might be something bigger, scarier, and hungrier than me out there...maybe I should wait?" Successfully navigating through life is a series of Risk/Reward calculations made each and every day. Sometimes, ideally, the choices are small ("Do I want fries with that?") while others can lead to catastrophic outcomes if the scenario isn't fully thought-through and proper mitigations put into place.

    Several years ago, Red Hat began publishing a CVSS score along with our own impact rating for security flaws that affected Red Hat products. CVSS stands for Common Vulnerability Scoring System and is owned and managed by FIRST.Org. FIRST is a non-profit organization based in the United States, and is dedicated to assisting incident response teams all over the globe. CVSS provides a numeric score for a vulnerability ranging from 0.0 up to 10.0. While not perfect, it is an additional input that can be considered when evaluating how urgently a newly-delivered patch must be deployed in your environment. The v2 incarnation of the system, while a nice move in the direction of quantitative analysis, had many rough edges and did not accurately depict many of the more modern computing scenarios under which our customers deploy technology.

    Last year, in 2015, the FIRST organization, steward of the CVSS scoring system (along with several other useful assessment tools), published the next generation of CVSS: version 3. v3 gives a security practitioner better dials and adjustments to get a more accurate representation of the risk presented by a software flaw. Using CVSSv2, it was challenging to express how software was vulnerable when the underlying host/operating system was only partially/minimally impacted. CVSSv3 addresses this issue with updates to improve the possible values for impact metrics, and introduces a new metric called Scope.

    An important conceptual change in CVSSv3 is the ability to score vulnerabilities that exist in one software component (now referred to as the vulnerable component) but which impact a separate software, hardware, or networking component (now referred to as impacted component).

    It is important to note that Red Hat uses CVSS scores solely as a guideline and we provide severity ratings of vulnerabilities in our products using a four-point scale based on impact. For more information specific to how Red Hat rates flaws (which also illustrates how we use CVSS as a guideline) please see our Issue Severity Classification page.

    CVSSv3 now also provides a standard mapping from numeric scores to severity rating terms: None, Low, Medium, High and Critical. These numbers are very useful in starting to understand the risks involved, but ultimately they are one of several factors that go into Red Hat's severity rating, and they do not by themselves dictate the final rating.
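
    That standard mapping is simple enough to sketch directly; the boundaries below are taken from the CVSSv3 specification's qualitative severity rating scale.

```python
def cvss3_severity(score: float) -> str:
    """Map a CVSSv3 base score to its qualitative severity rating."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss3_severity(6.1))  # → Medium
```

    Remember that for Red Hat advisories this mapping is only an input: the published four-point impact rating may differ from what the raw number alone suggests.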

    The Base Metrics now are:

    • Attack Vector (AV) - Expresses the "remoteness" of the attack and how the vulnerability is exploited.
    • Attack Complexity (AC) - Speaks to how hard the attack is to execute and what factors are needed for it to be successful. (The older Access Complexity metric is now split into Attack Complexity and User Interaction.)
    • User Interaction (UI) - Determines whether the attack requires an active human participant or whether it can be fully automated.
    • Privileges Required (PR) - Documents the level of user authentication required for the attack to be successful (replaces the older Authentication metric).
    • Scope (S) - Determines whether an attacker can affect a component that has a different level of authority.

    It’s important to note that User Interaction is now separate from Attack Complexity. In CVSSv2 it was not easy to determine if there was any user interaction required from the attack complexity metric alone, but with CVSSv3, it’s scored as a separate metric, making it more explicit what part of the scoring metric is related to user interaction.

    One question that must always be answered is, what kind of damage can be done if an attack is successfully executed? This is still measured by the CIA triad (Confidentiality, Integrity, Availability); however, the values are now "None", "Low", and "High" rather than "None", "Partial", and "Complete" as they were in CVSSv2.

    • Confidentiality (C) - Determines whether data can be disclosed to non-authorized parties, and if so to what level.
    • Integrity (I) - Measures how trustworthy the data is, that is, the degree to which it can be trusted not to have been modified by unauthorized users.
    • Availability (A) - This metric is concerned with data or services being accessible to authorized users when they need to access it.

    The CVSSv3 standard also includes the ability to measure compensating controls that might exist in an environment, as well as factors that change over the timeline of an attack. These are measured in the Temporal and Environmental metrics. Red Hat cannot speak to countermeasures that may or may not exist in a customer's network, and therefore will not publish scoring based on these dimensions. To truly measure the residual level of risk in your environment, you may wish to use those metrics when communicating vulnerabilities within your organization.

    Complete descriptions of the new metrics can be found in the CVSSv3 specification on the FIRST website.

    When using these new criteria, Red Hat will continue to evaluate the risks that vulnerabilities bring based on the context of how the flaw adversely affects the component in relation to other products. Sometimes Red Hat CVSSv3 scoring may differ from other organizations' ratings. For open source software shipped by multiple vendors, the CVSSv3 base scores may vary for each vendor's version, depending on the version they ship, how they ship it, the platform, and even how the software is compiled. This makes scoring of vulnerabilities difficult for third-party vulnerability databases, such as NVD, which can give only a single CVSSv3 base score to each vulnerability. More details can be found by reading CVSSv3 Base Metrics.

    To illustrate some of the differences between the old method and the new method, we provide the following examples:

    Flaw type CVSSv3 Metrics CVSSv2 Metrics (for comparison)
    Many XSS, CSRF 6.1/CVSS:3.0/AV:N/AC:L/PR:N/UI:R/S:C/C:L/I:L/A:N 4.3/AV:N/AC:M/Au:N/C:N/I:P/A:N
    NULL dereference C:N/I:N/A:L C:N/I:N/A:P
    info leak C:L/I:N/A:N C:P/I:N/A:N
    tempfile/symlink C:N/I:L/A:N C:N/I:P/A:N
    most wireshark flaws 6.5/CVSS:3.0/AV:A/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H 2.9/AV:A/AC:M/Au:N/C:N/I:N/A:P
    most browser ACEs 7.3/CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:L/A:L 6.8/AV:N/AC:M/Au:N/C:P/I:P/A:P
    local kernel null ptr -> root 8.4/CVSS:3.0/AV:L/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H 7.2/AV:L/AC:L/Au:N/C:C/I:C/A:C
    local kernel DoS 5.5/CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H 4.9/AV:L/AC:L/Au:N/C:N/I:N/A:C
    local network kernel -> root 8.8/CVSS:3.0/AV:A/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H 8.3/AV:A/AC:L/Au:N/C:C/I:C/A:C
    local network kernel DoS 6.5/CVSS:3.0/AV:A/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H 6.1/AV:A/AC:L/Au:N/C:N/I:N/A:C
    local kernel infoleak 3.3/CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:U/C:L/I:N/A:N 1.9/AV:L/AC:M/Au:N/C:P/I:N/A:N
    remote kernel -> root 9.8/CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H 10.0/AV:N/AC:L/Au:N/C:C/I:C/A:C
    remote kernel DoS 7.5/CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H 7.8/AV:N/AC:L/Au:N/C:N/I:N/A:C
    wireshark buffer overflow 6.3/CVSS:3.0/AV:A/AC:L/PR:N/UI:N/S:U/C:L/I:L/A:L 5.4/AV:A/AC:M/Au:N/C:P/I:P/A:P
    wireshark null ptr deref 4.3/CVSS:3.0/AV:A/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L 2.9/AV:A/AC:M/Au:N/C:N/I:N/A:P
    root password leak C:H/I:N/A:N C:P/I:N/A:N
    SSL cert verification issues 3.7/CVSS:3.0/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:L/A:N (typical, I:L) 4.3/AV:N/AC:M/Au:N/C:N/I:P/A:N (typical, I:P)
    Java deserialization RCE 7.3/CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:L/A:L 7.5/AV:N/AC:L/Au:N/C:P/I:P/A:P
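
    Vector strings like those in the CVSSv3 column are designed to be machine-readable; a minimal sketch of pulling one apart:

```python
def parse_cvss3(vector: str) -> dict:
    """Split a CVSSv3 vector string into a metric -> value mapping."""
    prefix, _, metrics = vector.partition("/")
    if prefix != "CVSS:3.0":
        raise ValueError(f"not a CVSSv3 vector: {vector!r}")
    return dict(part.split(":", 1) for part in metrics.split("/"))

# The XSS/CSRF row from the table above:
m = parse_cvss3("CVSS:3.0/AV:N/AC:L/PR:N/UI:R/S:C/C:L/I:L/A:N")
print(m["UI"], m["S"])  # → R C
```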

    For the JBoss Middleware suite, most scoring will be unaffected as the scoring for JBoss Middleware in CVSSv2 always related to the JBoss Middleware product itself, not the overall operating system. That won’t change with the introduction of the scope metric in CVSSv3. The scope of a vulnerability's impact will be related to all things authorized by the system user running the JBoss Middleware product, not the Java Virtual Machine, or a specific application deployed to the product.

    It is important to highlight, however, that with CVSSv2 other Red Hat products were scored based on the impact to the entire product, and not individual components. This is a very obvious change in CVSSv3 that will cause scores for products like Red Hat Enterprise Linux to be higher, sometimes substantially so, than they would have been rated using CVSSv2. This will result in our scores being closer to those published by other vendors or organizations, however differences (as outlined above) are still taken into account and may cause some slight variation.

    Further information on Red Hat impact ratings and how CVSSv3 is used can be found on our Issue Severity Classification page. CVSSv3 base metrics will be available for all vulnerabilities from June 2016 onwards. These scores are found on the CVE pages (linked to from the References section of each Red Hat Security Advisory) and also from our Security Measurements page.

    So now, forewarned being forearmed, you can feel just a little bit safer crawling out from your cave to venture forth. With a clearer understanding of the risks that could occur, you can journey out into the big, wide world better prepared!

    Posted: 2016-06-21T20:16:50+00:00
  • The Answer is always the same: Layers of Security

    Authored by: Daniel Walsh

    There is a common misperception that, now that containers support seccomp, we no longer need SELinux to help protect our systems. WRONG. The big weakness in containers is that container processes can interact with the host kernel and the host file systems. Securing container processes is all about shrinking the attack surface on the host OS, and more specifically on the host kernel.

    seccomp does a great job of shrinking the attack surface on the kernel. The idea is to limit the number of syscalls that container processes can use. It is an awesome feature. For example, on an x86_64 machine there are around 650 system calls. If the Linux kernel has a bug in any one of these syscalls, a process could get the kernel to turn off security features and take over the system, i.e. it would break out of confinement. If your container does not run 32-bit code, you can turn on seccomp and eliminate all 32-bit x86 syscalls, basically cutting the number of syscalls in half. This means that if the kernel had a bug in a 32-bit syscall that allowed a process to take over the system, that syscall would not be available to the processes in your container, and the container would not be able to break out. We also eliminate a lot of other syscalls that we do not expect processes inside of a container to call.
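
    Strict seccomp mode, the original and most drastic form of this idea, can be demonstrated from any unprivileged process. This is an illustrative sketch, not how container runtimes configure seccomp (they install BPF syscall-filter programs rather than strict mode); the prctl constants are copied from the kernel headers.

```python
import ctypes
import os
import signal

PR_SET_SECCOMP = 22      # from <linux/prctl.h>
SECCOMP_MODE_STRICT = 1  # afterwards only read/write/_exit/sigreturn are allowed

def strict_seccomp_demo() -> str:
    """Fork a child, put it in strict seccomp mode, make a forbidden
    syscall, and report what the kernel did to the child."""
    pid = os.fork()
    if pid == 0:
        libc = ctypes.CDLL(None, use_errno=True)
        if libc.prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0) != 0:
            os._exit(2)  # strict mode unavailable (e.g. a filter is already active)
        os.getpid()      # getpid() is not on the allowed list
        os._exit(0)      # never reached if strict mode took effect
    _, status = os.waitpid(pid, 0)
    if os.WIFSIGNALED(status) and os.WTERMSIG(status) == signal.SIGKILL:
        return "killed"
    if os.WIFEXITED(status) and os.WEXITSTATUS(status) == 2:
        return "unavailable"
    return "unexpected"

print(strict_seccomp_demo())
```

    On a kernel where strict mode can be entered, the child is terminated with SIGKILL the moment it strays outside the four permitted syscalls.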

    But seccomp is not enough

    This still means that if the kernel has a bug that can be triggered via one of the roughly 300 remaining syscalls, a container process can still take over the system and/or create havoc. Just having open/read/write/ioctl on things like files and devices could give a container process the ability to break out, and once out it would be able to write all over the system.

    You could continue to shrink the seccomp syscall table to such a degree that processes cannot escape, but at some point it will also prevent the container processes from getting any real work done.

    Defense in Depth

    As usual, any single security mechanism by itself will not fully protect your containers. You need lots of security mechanisms to control what a process can do inside and outside a container.

    • Read-Only file systems. Prevent open/write on kernel file systems. Container processes need read access to kernel file systems like /proc, /sys, /sys/fs ... But they seldom need write access.

    • Dropping privileged process capabilities. This can prevent things like setting up the network or mounting file systems, (seccomp can also block some of these, but not as comprehensively as capabilities).

    • SELinux. Controls which file system objects (files, devices, sockets, and directories) a container process can read/write/execute. Since your processes in a container will need to use the open/read/write/exec syscalls, SELinux controls which file system objects you can interact with. I have heard a great analogy: SELinux tells people which people they can talk to; seccomp tells them what they can say.

    • prctl(PR_SET_NO_NEW_PRIVS). Prevents privilege escalation through the use of setuid applications. Running your container processes without privileges is always a good idea, and this keeps the processes non-privileged.

    • PID Namespace. Makes it harder to see other processes on the system that are not in your container.

    • Network Namespace. Controls which networks your container processes are able to see.

    • Mount Namespace. Hides large parts of the system from the processes inside of the container.

    • User Namespace. Helps remove remaining system capabilities. It can allow you to have privileges inside of your containers namespaces, but not outside of the container.

    • kvm. If you can find some way to run containers in a kvm/virtualization wrapper, this would be a lot more secure. (ClearLinux and others are working on this).
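
    Of these mechanisms, the no_new_privs flag is the easiest to demonstrate: any unprivileged process can set it on itself. A minimal sketch via ctypes, with the prctl constants copied from the kernel headers:

```python
import ctypes

PR_SET_NO_NEW_PRIVS = 38  # from <linux/prctl.h>
PR_GET_NO_NEW_PRIVS = 39

libc = ctypes.CDLL(None, use_errno=True)

# Once set, the flag is irreversible for this process and is inherited by
# its children: execve() of a setuid binary no longer grants extra privileges.
assert libc.prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) == 0
print(libc.prctl(PR_GET_NO_NEW_PRIVS, 0, 0, 0, 0))  # → 1
```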

    The more Linux security services that you can wrap around your container processes the more secure your system will be.

    Bottom Line

    It is the combination of all of these kernel services, along with administrators continuing to maintain good security practices, that keeps your container processes contained.

    Posted: 2016-05-25T13:30:00+00:00