Latest Posts

  • The Product Security Blog has moved!

    Authored by: Fábio Olivé Leite

    Red Hat Product Security has joined forces with other security teams inside Red Hat to publish our content in a common venue using the Security channel of the Red Hat Blog. This move provides a wider variety of important Security topics, from experts all over Red Hat, in a more modern and functional interface. We hope everyone will enjoy the new experience!

    Posted: 2019-03-19T19:38:17+00:00
  • Security Technologies: FORTIFY_SOURCE

    Authored by: Huzaifa Sidhpurwala

    FORTIFY_SOURCE provides lightweight compile-time and runtime protection to some memory and string functions (the original patch to GCC was submitted by Red Hat). It adds little or no runtime overhead and can be enabled for all applications and libraries in an operating system. The concept is essentially universal and could be applied to any operating system, but the glibc-specific support is available in GCC from version 4 onwards. FORTIFY_SOURCE normally works by replacing some string and memory functions with their *_chk counterparts (builtins). These functions do the calculations necessary to determine whether an overflow will occur. If an overflow is found, the program is aborted; otherwise control is passed to the corresponding string or memory operation. All of this checking is normally inlined, so the overhead is minimal.

    While there are some technical articles which talk about FORTIFY_SOURCE in detail, this article discusses broad objectives of this technology and how it can help developers prevent some vulnerabilities related to buffer overflows.

    How does FORTIFY_SOURCE actually work

    As mentioned earlier, depending on the code, gcc can detect buffer overflows during compile time and runtime.

    Consider a buffer of fixed size which is declared as:

    char buf[5];

    Here are possible scenarios FORTIFY_SOURCE can effectively work with:

    1. Fixed data being copied into buf
    memcpy (buf, foo, 5);
    strcpy (buf, "abcd");

    In this case, the size of the data being copied into buf is known at compile time (5 bytes in the first case, 4 characters plus a null byte in the second), so FORTIFY_SOURCE can easily determine that no buffer overflow is possible and memcpy/strcpy can be called directly or even inlined.
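    As a minimal sketch of this case (copy_fixed is an illustrative name, not from the article), both the destination size and the copy length below are compile-time constants, so a fortified build can prove the copy safe and add no runtime check:

```c
#include <string.h>

/* The size of the destination (5 bytes) and of the data ("abcd" plus
 * its terminating NUL, 5 bytes) are both known at compile time, so a
 * fortified build can prove no overflow is possible and call strcpy
 * directly, or even inline it. */
size_t copy_fixed(void) {
    char buf[5];
    strcpy(buf, "abcd");   /* exactly fills the buffer, no overflow */
    return strlen(buf);
}
```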

    2. Again, fixed data being copied into buf
    memcpy (buf, foo, 6);
    strcpy (buf, "abcde");

    Again the size of the data is known, but this time it is too large for the buffer. The compiler can detect this buffer overflow at compile time: it issues warnings, and calls the checking alternatives at runtime.

    3. Variable data being copied into buf
    memcpy (buf, foo, n);
    strcpy (buf, bar);

    The compiler knows the number of bytes remaining in the destination object, but not the length of the copy that will actually happen at runtime. FORTIFY_SOURCE replaces memcpy or strcpy with the wrapper functions __memcpy_chk or __strcpy_chk, which check at runtime whether a buffer overflow happened. If an overflow is detected, __chk_fail() is called; its normal action is to abort() the application, usually after writing a message to stderr.
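    The shape of that runtime check can be sketched by hand (checked_memcpy is an illustrative stand-in, not glibc's actual implementation; the real __memcpy_chk aborts via __chk_fail() rather than returning an error):

```c
#include <string.h>

/* Sketch of the check __memcpy_chk performs: compare the requested copy
 * length against the compiler-known destination size before copying.
 * Returns -1 instead of aborting so the behavior is easy to demonstrate. */
int checked_memcpy(void *dst, size_t dstsize, const void *src, size_t n) {
    if (n > dstsize)
        return -1;          /* would overflow the destination: refuse */
    memcpy(dst, src, n);
    return 0;
}
```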

    4. Variable data copied into a buffer whose size is not known:
    memcpy (p, q, n);
    strcpy (p, q);

    Since the size of the destination is unknown, this is not checkable at compile time or at runtime, and FORTIFY_SOURCE cannot protect against any overflows that occur.

    Functions protected by FORTIFY_SOURCE

    memcpy, memset, stpcpy, strcpy, strncpy, strcat, strncat, sprintf, snprintf, vsprintf, vsnprintf, gets(3), and the wide character variants thereof. For some functions, argument consistency is checked; for example, a check is made that open(2) has been supplied with a mode argument when the specified flags include O_CREAT.

    How to enable FORTIFY_SOURCE

    To enable FORTIFY_SOURCE, the _FORTIFY_SOURCE macro needs to be defined at compile time (for example with -D_FORTIFY_SOURCE=2; the checks also require compiling with optimization enabled). There are two levels of checking. Setting the macro to 1 enables some checks (the man page says that "checks that shouldn't change the behavior of conforming programs are performed"), while setting it to 2 adds some more checking.

    The difference between -D_FORTIFY_SOURCE=1 and -D_FORTIFY_SOURCE=2 can be illustrated with the following declaration:

    struct S {
        struct T {
            char buf[5];
            int x;
        } t;
        char buf[20];
    } var;

    With -D_FORTIFY_SOURCE=1, strcpy (&var.t.buf[1], "abcdefg") is not considered an overflow, because the object being measured is the whole of var; with -D_FORTIFY_SOURCE=2, the same call is considered a buffer overflow, because the check is limited to the member array var.t.buf.
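    The difference comes down to how many bytes each level believes remain past &var.t.buf[1]. A small sketch of that arithmetic (the helper names are illustrative, not part of glibc):

```c
#include <stddef.h>

struct S {
    struct T {
        char buf[5];
        int x;
    } t;
    char buf[20];
} var;

/* Level 1 measures to the end of the whole enclosing object (var),
 * level 2 only to the end of the member array (var.t.buf). */
size_t remaining_level1(void) {
    return sizeof(var) - (offsetof(struct S, t.buf) + 1);
}

size_t remaining_level2(void) {
    return sizeof(var.t.buf) - 1;
}
```

    Copying "abcdefg" needs 8 bytes (7 characters plus the NUL); level 2 sees only 4 bytes remaining and flags the overflow, while level 1 still sees plenty of room inside var.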

    Another difference is that with -D_FORTIFY_SOURCE=2, "%n" in format strings of the most common *printf family functions is allowed only if the format string is stored in read-only memory (usually string literals; gettext's _("%s string %n") is fine too). Usually, when an attacker attempts to exploit a format string vulnerability, the "%n" is injected by the attacker, so it necessarily resides in writable memory.

    In conclusion, FORTIFY_SOURCE is lightweight and quite effective at catching some categories of overflows but, like any other security technology, it is not perfect and should be enabled in combination with others.

    Posted: 2018-09-26T13:30:00+00:00
  • New Red Hat Product Security OpenPGP key

    Authored by: Red Hat Product...

    Red Hat Product Security has transitioned from using its old 1024-bit DSA OpenPGP key to a new 4096-bit RSA OpenPGP key. This was done to improve the long-term security of our communications with our customers and also to meet current key recommendations from NIST (NIST SP 800-57 Pt. 1 Rev. 4 and NIST SP 800-131A Rev. 1).

    The old key will continue to be valid for some time, but it is preferred that all future correspondence use the new key. Replies and new messages either signed or encrypted by Product Security will use this new key.

    The old key was:

    pub   1024D/0x5E548083650D5882 2001-11-21
          Key fingerprint = 92732337E5AD3417526564AB5E548083650D5882

    And the new key is:

    pub   4096R/0xDCE3823597F5EAC4 2017-10-31
          Key fingerprint = 77E79ABE93673533ED09EBE2DCE3823597F5EAC4

    To fetch the full key from a public key server, you can simply do:

    $ gpg --keyserver keys.fedoraproject.org --recv-key \
          '77E79ABE93673533ED09EBE2DCE3823597F5EAC4'

    If you already know the old key, you can now verify that the new key is
    signed by the old one:

    $ gpg --check-sigs '77E79ABE93673533ED09EBE2DCE3823597F5EAC4'

    If you are satisfied that you've got the right key, and the UIDs match what you expect, you may import and begin using the new key when communicating with us at secalert@redhat.com.

    Posted: 2018-08-22T13:30:00+00:00
  • Security Technologies: Stack Smashing Protection (StackGuard)

    Authored by: Huzaifa Sidhpurwala

    In our previous blog, we saw how arbitrary code execution resulting from stack-buffer overflows can be partly mitigated by marking segments of memory as non-executable, a technology known as Execshield. However, stack-buffer overflow exploits can still effectively overwrite the function return address, which leads to several interesting exploitation techniques like ret2libc, ret2gets, and ret2plt. With all of these methods, the function return address is overwritten and attacker-controlled code is executed when program control transfers to the overwritten address on the stack.

    [Figure: Memory stack showing an exploit overwriting the return address and buffers 1 and 2.]

    In 1998 GCC introduced StackGuard, which was successfully used in conjunction with other security hardening technologies to rebuild the Red Hat Linux 7.3 distribution (GCC 2.96-113). The StackGuard patch was also applied to the source for GCC 3.2-7 used in the Red Hat Linux 8 distribution to rebuild both the compiler and GLIBC.

    StackGuard basically works by inserting a small value known as a canary between the stack variables (buffers) and the function return address. When a stack buffer overflows into the function return address, the canary is overwritten. During function return the canary value is checked, and if the value has changed the program is terminated, reducing a code execution attack to a mere denial of service. The performance cost of inserting and checking the canary is very small for the benefit it brings, and can be reduced further when the compiler detects that a function uses no local buffer variables, so the canary can be safely omitted.

    [Figure: Memory stack with the StackGuard canary.]


    There are currently three types of canaries which are supported by StackGuard:

    Terminator canaries

    Most buffer overflow attacks are based on string operations which end at string terminators. A terminator canary contains NULL (0x00), CR (0x0d), LF (0x0a), and EOF (0xff), four characters that terminate most string operations, rendering the overflow attempt harmless. This stops attacks that use strcpy() and other functions which return upon copying a null character; the undesirable result is that the canary value is known.

    This type of protection can be bypassed by an attacker overwriting the canary with its known value and the return address with a specially crafted value, resulting in code execution. This can happen when non-string functions are used to copy buffers and both the buffer contents and the length of the copy are attacker controlled.

    Random canaries

    A random canary is chosen at random at the time the program execs, so the attacker cannot learn the canary value before the program starts by searching the executable image. The random value is taken from /dev/urandom if available, or created by hashing the time of day if /dev/urandom is not supported. This randomness is sufficient to prevent most prediction attempts; however, if an information leak flaw in the application can be used to read the canary value, this kind of protection can be bypassed.

    Random XOR canaries

    Random XOR canaries are random canaries that are XOR-scrambled using all or part of the control data (e.g., the frame pointer and return address). This way, once either the canary or the control data is clobbered, the check value is wrong and the program terminates immediately.
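    The scrambling itself is just an XOR, which can be sketched as follows (the function names are illustrative, not from any compiler runtime):

```c
#include <stdint.h>

/* Store canary XOR return-address; on function exit, XOR the stored
 * word with the current return address and compare with the canary.
 * Tampering with either the stored word or the return address makes
 * the comparison fail. */
uintptr_t scramble(uintptr_t canary, uintptr_t return_addr) {
    return canary ^ return_addr;
}

int check(uintptr_t stored, uintptr_t canary, uintptr_t return_addr) {
    return (stored ^ return_addr) == canary;
}
```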

    How does StackGuard actually work

    Compilers implement this feature by selecting appropriate functions, storing the stack canary during the function prologue, checking the value in the epilogue, and invoking a failure handler if it was changed. For example, consider the following code:

    void function1(const char *str) {
            char buffer[16];
            strcpy(buffer, str);
    }

    StackGuard automatically converts this code to:

    extern uintptr_t __stack_chk_guard;
    noreturn void __stack_chk_fail(void);

    void function1(const char *str) {
            uintptr_t canary = __stack_chk_guard;
            char buffer[16];
            strcpy(buffer, str);
            if ((canary = canary ^ __stack_chk_guard) != 0)
                    __stack_chk_fail();
    }

    Note how the secret value is stored in a global variable (initialized at program load time) and copied into the stack frame, and how it is safely erased from the stack as part of the check, preventing the value from being leaked when this stack space is re-used. Since stacks grow downwards on many architectures, in this example the canary gets overwritten whenever the input to strcpy is more than 16 characters.

    The detection method works because it is impossible to get the correct value via trial and error. Since one incorrect canary value prevents further alterations, an attacker cannot keep trying until the correct value is found. In the example above, if the canary contained a zero byte, it would be impossible for a single strcpy to overwrite it correctly. This forces the attacker to either not attack, or be detected and be unable to alter the stack any further. This does not mean that the buffer cannot be exploited, but it makes exploitation much more difficult, often requiring multiple bugs to be used together. Also, __stack_chk_guard can be stored in various places; some architectures use TLS (Thread-local Storage) data for it.
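    The detection can be modeled in plain C (a deliberately simplified sketch: a struct stands in for the stack frame so the "overflow" stays within defined behavior, and the constant 0xdeadbeef stands in for the runtime-chosen __stack_chk_guard):

```c
#include <stdint.h>
#include <string.h>

/* Field order mirrors a protected frame: buffer first, canary between
 * it and whatever follows. Copying more than 16 bytes into the frame
 * spills into the canary, which the check then catches. */
struct frame {
    char buf[16];
    uintptr_t canary;
};

/* len is assumed to be at most sizeof(struct frame). */
int canary_intact(const char *data, size_t len) {
    struct frame f;
    f.canary = 0xdeadbeef;          /* stand-in for __stack_chk_guard */
    memcpy(&f, data, len);          /* len > 16 clobbers f.canary */
    return f.canary == 0xdeadbeef;
}
```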

    One heuristic ordering often used, with the stack growing downwards, is first storing the canary, then buffers (that might overflow into each other), and finally all the small variables unaffected by overruns. This is based on the idea that it is generally less dangerous if arrays are modified, compared to variables that hold flags, pointers and function pointers, which much more seriously alter execution. Some compilers randomize the order of stack variables and randomize the stack frame layout, which further complicates determining the right input with the intended malicious effect.

    Limitations of StackGuard:

    Though StackGuard may be effective in preventing stack-buffer overflow attacks it has certain limitations as well:

    • An information disclosure flaw in a different part of the program could disclose the global __stack_chk_guard value. This would allow an attacker to write the correct canary value and overwrite the function return address.

    • Not all buffer overflows are on stack. StackGuard cannot prevent heap-based buffer overflows.

    • While StackGuard effectively prevents most stack buffer overflows, some out-of-bounds write bugs can allow the attacker to write to the stack frame after the canary, without overwriting the canary value itself.

    • If a function has multiple local data structures and pointers to functions, these are allocated on the stack as well, before the canary value. If there is a buffer overflow in any one of these structures, the attacker can use this to overwrite adjacent buffers/pointers which could result in arbitrary code execution. This really depends on the arrangement of data on the stack.

    • On some architectures, multi-threaded programs store the reference canary __stack_chk_guard in Thread Local Storage, which is located a few kb after the end of the thread's stack. In these circumstances, a sufficiently large overflow can overwrite both canary and __stack_chk_guard to the same value, causing the detection to incorrectly fail.

    • Lastly, for network applications which fork() child processes, there are techniques for brute forcing canary values. This only works in some limited cases though.

    The GCC compiler provides various options (such as -fstack-protector and -fstack-protector-strong) to control the StackGuard implementation during compilation. Despite some limitations, StackGuard is quite effective in protecting binaries against runtime stack-buffer overflows.

    Posted: 2018-08-20T13:30:00+00:00
  • Managing risk in the modern world

    Authored by: Christopher Robinson

    Things can be pretty scary out there today. There are a lot of things that could occur that make even the calmest amongst us take pause. Everything we do is a series of risk-based decisions that we hope leads to happy outcomes. “Should I get out of bed today?”, “Should I eat this sushi they are selling in this gas station?”, “Can you hold my beverage?”. The challenges of modern-day existence can be very daunting.

    With this blog, I’m sharing how I’d advise organizations to consider IT-related risks and how Red Hat Product Security aims to help customers make informed decisions with data.

    In the world of information security, we have a few tools to help us make informed decisions. When looking at potential problems we think of them in terms of potential risks that could happen that would impact our business. First, when we need to understand what could possibly happen, we look at the impact of a potential threat. IF I don’t have any ice available on this hot summer day, I COULD have a dissatisfactory chilly-beverage experience. That sounds pretty bad, right? I’d be very disappointed if that risk (the inadequate supply of ice) was unable to slake my summer-y, adventure-fueled thirst (the impact or possible bad outcome).

    The lack of frozen water might not be my only problem in venturing outside. Do I have sunscreen? Do I have my trusty adventure hat? What if it rains? What if it gets windy and I get a non-frosty-beverage-induced chill? It’s a great idea to maybe track these problems (risks) and the potential bad outcomes (impacts) down in something like my handy, dandy notebook (or a risk register).

    Once you’ve written down what you feel is a solid list of things that could happen, you might notice that some of the problems are bigger or smaller than others. Getting sunstroke from lack of appropriate protection is bad, far worse than say not having a coaster for this quickly-warming drink. Some of these problems are measurable (quantifiable) while others are more about how you feel about it (aka “the feels”) (qualifiable). Either approach is valid depending on what the data is and what decisions you’d like to make, but you want to make sure you’re using the same measuring stick so that your analysis is consistent. The whole “comparing apples to oranges” debate of the ages, you know?

    As you’re thinking about how bad these items could be, it is important to think about it in three dimensions:
    - Does this problem allow someone to look at something they shouldn’t? (Confidentiality)
    - Does this problem make it so people that should see the data cannot? (Availability)
    - Does this problem alter the data in a manner it should not be? (Integrity)

    So you’ve got a list. You have some scale to measure how bad the outcomes potentially could be. Another element to look at is the likelihood or probability of those bad possible outcomes actually happening. While you may be naturally afraid of sharks, the realistic chance one will swim up to your back porch, take off its snorkel mask, and start nibbling on you is very, very unlikely (....I hope). Flipping to a new page in your handy dandy notebook you can plot each risk on an X/Y scale as shown here:

    [Figure: Diagram showing the difference between impact and probability of different threats.]

    Here in this chart, we’ve plotted the likelihood and impact of not having a coaster, getting sunstroke, and the aforementioned toothy visitor.

    To throw more wacky wildlife analogies into this discussion, ideally this gets you thinking about the threat - a hairy, scary bear coming and eating your sandwich, what the vulnerability is - there is a hole in your bear-proof fence, and what the impact would be if that threat exploits that vulnerability - you has no more tasty sammich since the shark and bear teamed up on you to gobble it up nom nom nom, plus you’re also out of ice and your beverage is getting dangerously close to room-temperature! (The horror!)

    Risk = Threat x Vulnerability

    So this is where you break out science and math to help you decide what to do. There is a concept of a “single loss expectancy” (SLE), in layman’s terms this is “what is the cost if this bad thing happens once?” So back to our busy and hot back porch example, we have lost one sandwich (tragic, I know). That doesn’t sound too bad. What if this terrible series of events unfolds every day, then you look at your risk in terms of “annualized rate of occurrence” (ARO). Every day is a pretty intense probability (like 100%, which sounds like a lot). With those two points of data, we can then state what our “annual loss expectancy” (ALE) is.

    ALE = SLE x ARO

    So for our example, that’s one sandwich (perhaps also an additional investment in anti-shark spray and bear-netting) multiplied by 365 days. So that’s quite a large pile of deli meats, cheeses, and bread, don't you agree?
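    As a toy calculation (the $5 sandwich price is an invented stand-in; the article never prices the sandwich):

```c
/* Annual Loss Expectancy: the cost of one incident (SLE) multiplied
 * by how many times per year it is expected to happen (ARO). */
double annual_loss_expectancy(double sle, double aro) {
    return sle * aro;
}
```

    With a hypothetical $5 sandwich lost once a day, annual_loss_expectancy(5.0, 365.0) comes to $1825 a year, which is the number to weigh against the $3.50 of shark spray and bear netting.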

    You invest in some countermeasures: the anti-shark spray, some bear netting, and those little tents you put over outdoor food to keep the bugs off of them (and a bag of ice from the local A&P down the street). So that costs you, like $3.50 a year. To get fancier, let’s say this travesty of sandwich abuse (and lukewarm drinks) goes on for several years. So you have dozens of dollars wrapped up in trying to stop these animal hooligans. Sounds like we might want to evaluate if we’re spending the right amount for the value we’re receiving.


    You generally should not spend more money to protect something than what it is worth to you (now to be fair, things like human safety and not breaking the law will typically override this calculus). It may be cheaper to just stop doing a risky activity than it would be to continue doing it and paying to protect it.

    When you’re thinking about these risks in your handy dandy notebook or on your porch, there are several techniques that can help you deal with it. There are four ways any risk can be addressed:
    - You can try and correct it and make it go away (but this can be expensive) - Buy an electric bear & shark-proof fence with teeny-tiny mesh so they can’t slip in.
    - You can try to reduce it to make it smaller and less scary - make the hole in the fence smaller, so the bear has to work harder to get in and maybe he’s too tired to eat the whole sandwich, so you still have a bear-slobbery half left.
    - You can transfer the risk and make it someone else’s problem - sell the house and hope the buyer doesn’t go out back during the home inspection.
    - Or...you can accept the risk and just live with it - I, for one, welcome our new shark-riding bear overlords.

    You can NEVER eliminate ALL risk from something. Even if you correct/remediate a problem, there could be other factors or exploits that are unknown at that time, or you might not have enough time/effort/money to purge the problem entirely. Working smarter, you can temper how you address a problem: maybe by selectively applying mitigations where the resource or performance costs are too high (or accepting the risk if the cost is too painful), or by doing a risk-based analysis and applying fixes to critical assets first, then working your way down to less important (or inconsequential) systems. As the poet once said, “If you choose not to decide, you still have made a choice.”

    Now all this merry conversation around ice and bears and landsharks is all well and good, but you might say “Self, what does this all do to help ME?”

    From Sandwiches to Security

    Well here is the turn: Red Hat Product Security works to provide you clear factual data as we understand it, to help you understand how big those bears are, how fast those sharks are swimming, and how hot it is outside so that you can be armed to know how much ice you’ll need to hit that perfect 50-70 degrees Fahrenheit of frosty wet goodness and how much mayo you might need for that sub sandwich (hint...none). Red Hat can help you understand what a vulnerability is, and give you options to address it, but ultimately you know your sandwich best, and your preferred sandwich is different than your neighbor’s or that guy’s down the street.

    Once embargoes are lifted, Red Hat publishes our security data and metrics openly for everyone to see. Each vulnerability is clearly labeled using industry-standard tools and measurements like Common Vulnerabilities and Exposures (CVE), Common Weakness Enumeration (CWE), and Common Vulnerability Scoring System (CVSS). We publish that data in both human-readable and machine-readable formats like Open Vulnerability and Assessment Language (OVAL) and Common Vulnerability Reporting Framework (CVRF).

    We work to describe the problem and provide our assessment of the severity of the issue by providing an additional impact rating on top of a score like CVSS. Our engineers look at each flaw from the vantage point of our products based on the components and how those packages are used and configured. We have the experience to know when a flaw is just a little guppy and when it truly is a megalodon that will ruin your day. On top of that, we have our internal Customer Security Awareness program that layers on additional data, context, and tools to help you quickly decide what you need to do within the context of your operating and business environment.

    Hopefully, this helps you understand where and how you should react and be able to articulate YOUR business’ risk to YOUR stakeholders so the next time that bear-shark combo swims by you can act quickly and confidently with your response.

    Posted: 2018-08-14T15:30:00+00:00
  • How SELinux helps mitigate risk while facilitating compliance

    Authored by: Red Hat Product...

    Many of our customers are required to meet a variety of regulatory requirements. Red Hat Enterprise Linux includes security technologies that help meet these requirements. Improving Linux security also benefits our layered products, such as Red Hat OpenShift Container Platform and Red Hat OpenStack® Platform.

    In this blog post, we use PCI-DSS to highlight some of the benefits of SELinux. Though there are many other security standards that affect our customers, we selected PCI-DSS based on a review of customer support cases, feedback, and general inquiries we received. The items we selected from this standard are also accepted industry practices, such as:

    • Limiting user access to data based on job roles.
    • Limiting access to system components.
    • Configuring software behavior, functions, and access.

    What is SELinux?

    SELinux is an advanced access control mechanism originally created by the United States National Security Agency. It was released under an open source license in 2000, and integrated into the Linux kernel in 2003. As part of the Linux kernel, it is built into the core of Red Hat Enterprise Linux. SELinux works by layering additional access controls on top of the traditional discretionary access controls that have been the basis of UNIX and Linux security for decades. SELinux access controls provide both increased granularity as well as a single security policy that is applied across the entire system and enforced by the RHEL kernel. SELinux enforces the security policy on applications bundled with Red Hat Enterprise Linux as well as any custom, third-party, and independent software vendor (ISV) applications. In addition to applications on the host system, SELinux access controls provide separation and controlled sharing between RHEL-hosted virtual machines and containers.

    SELinux’s access controls are driven by a configurable security policy, which is loaded into the kernel at boot. The SELinux security policy functions as a whitelist for user and application behavior. The policy allows administrators and policy developers to isolate applications into specific SELinux domains that are tailored to the application’s permitted behaviors. Access to files, local interprocess communications (IPC) mechanisms, the network, and various other system resources can all be restricted on a per-domain basis. SELinux also allows the administrator to put individual SELinux domains, as well as the entire system, into permissive mode where SELinux-based access denials are logged, but the access is still permitted. This eases policy development and troubleshooting.

    While SELinux is an important part of Red Hat Enterprise Linux security capabilities, there are many other security technologies and widely accepted practices that should also be employed. Data encryption, malware scanning, firewalls, and other network security mechanisms remain an important part of an overall security strategy. SELinux is a way to augment existing security solutions, and is not a replacement for current security measures that may be in place.

    Mapping to compliance requirements

    With the above understanding of how SELinux can help reduce risk and harden a Red Hat Enterprise Linux system, let’s see how it maps to a few PCI-DSS compliance requirements. When reviewing PCI-DSS 3.2 requirements, it is easy to see how RHEL with SELinux can help address requirements that fall under the section Implement Strong Access Control Measures Requirement. Let’s look at some lesser-known requirements in sections two and three instead.

    PCI-DSS requirement 2.2:

    “[d]evelop configuration standards for all system components. Assure that these standards address all known security vulnerabilities and are consistent with industry-accepted system hardening standards.”

    Given that, by default, it denies access to any resource rather than permits access, SELinux immediately meets industry-accepted system hardening standards, and may help mitigate certain classes of security vulnerabilities. It also helps meet the more granular requirements under 2.2 by ensuring a greater level of security restrictions and more fine-grained access control.

    PCI-DSS requirement 3.6.7:

    “Prevention of unauthorized substitution of cryptographic keys”

    At a system-configuration level, SELinux can prevent unauthorized overwriting of files—even when a specific user or role would normally be authorized to write to the directory containing cryptographic keys.

    SELinux can also help customers meet other well-known PCI-DSS 3.2 requirements by:

    • Limiting access to system components and cardholder data to only those individuals whose job requires such access (meets 7.1.1 - 7.1.3).
    • Establishing an access control system(s) for systems components that restricts access based on a user’s need to know, and is set to ‘deny all’ unless specifically allowed (meets 7.2.1 - 7.2.3).

    Restricting malicious actor read, write, and pivoting

    When SELinux is in enforcing mode, the default policy used in Red Hat Enterprise Linux is the targeted policy. In the default targeted policy, some applications run in a confined SELinux domain where SELinux policy restricts those applications to a particular set of behaviors. All other applications run in special unconfined domains; while they are still SELinux security domains, there is little to no restriction to their permitted behavior.

    Almost every service that listens on a network is confined in RHEL, such as httpd and sshd. Also, most processes that run as the root user and perform tasks for users, such as the passwd utility, are confined. When a process is confined, it runs in its own domain. Depending on the SELinux policy configuration for a confined process, an attacker's access to resources, ability to pivot, read, and write, and the possible damage they can do may be limited.

    We have listed below a few of the common processes and daemons that run confined by default in their own domain. If you have a question regarding a process that is not listed here, send your inquiry to Red Hat Customer Service.

    • dhcpd is the Dynamic Host Configuration Protocol (DHCP) daemon, used in Red Hat Enterprise Linux to dynamically deliver and configure Layer 3 TCP/IP details for clients.
    • smbd is a Samba server that provides file and print services between clients across various operating systems.
    • httpd (Apache HTTP Server) provides a web server.
    • Squid is a high-performance proxy caching server for web clients supporting FTP, Gopher, and HTTP data objects. It reduces bandwidth and improves response times by caching and reusing frequently requested web pages.
    • mysqld is a multi-user, multi-threaded SQL database server that consists of the MariaDB server daemon (mysqld) and many client programs and libraries.
    • PostgreSQL is an Object-Relational database management system (DBMS).
    • Postfix is an open-source Mail Transport Agent (MTA), which supports protocols like LDAP, SMTP AUTH (SASL), and TLS.

    For more information on how the Red Hat portfolio can help customers with PCI-DSS compliance, review Red Hat’s 2015 paper on PCI and DSS compliance and our 2016-2017 blog series.


    SELinux can also help mitigate many risks posed from privilege escalation attacks. SELinux policy rules define how processes access files and other processes. If a process is compromised, the attacker can only access resources granted to it through the associated SELinux domain. Exploiting an application does not change what SELinux allows the process to access. For example, if the Apache HTTP Server is compromised, an attacker cannot use that process to read files in user home directories by default, unless a specific SELinux policy rule was added or configured to allow such access.

    Based on our review of data from the 2017 calendar year, we selected three vulnerabilities publicly released during that time which were mitigated by default Red Hat Enterprise Linux SELinux policies.

    CVE-2016-9962 targeted containers, and it became public just 11 days into the new year. On Red Hat systems with SELinux enabled, the dangers of even privileged containers are mitigated. SELinux prevents container processes from accessing host content even if those container processes manage to gain access to the actual file descriptors. With SELinux in enforcing mode and the deny_ptrace boolean enabled (which only affects the policy shipped by Fedora or Red Hat), customers can:
    - remove all ptrace access,
    - confine an unconfined domain, and
    - retain the flexibility to disable the boolean permanently or temporarily for troubleshooting.

    CVE-2017-6074 addressed a flaw in the kernel's implementation of the Datagram Congestion Control Protocol (DCCP). A local, unprivileged user could exploit it to alter kernel memory and escalate their privileges on the system. With SELinux enabled, the default policies alone mitigate this flaw.

    CVE-2017-7494 addressed a flaw in the Samba server. A malicious authenticated Samba client with write access to a Samba share could use this flaw to execute arbitrary code as root on the server. With SELinux enabled, the default policy prevents Samba from loading modules from outside of Samba's module directories and therefore mitigates the flaw.

    Red Hat and security

    At Red Hat we believe that security is a mindset, not a feature. That’s why we work closely with upstream developers and communities to encourage secure coding practices, information sharing, and collaboration. We firmly believe the principles of open source software contribute to transparency and more secure products, benefiting customers and communities alike.

    SELinux is shipped enabled by default in Red Hat Enterprise Linux. In addition to providing added security and mitigating a threat actor’s ability to pivot, SELinux also helps customers meet the requirements of a variety of compliance standards. And although the terms compliant and secure are not directly interchangeable, we understand that both are very important to our customers. We work continuously to support our products and help our customers achieve both business objectives.

    For more information on Red Hat Product Security, visit the Product Security Center on the Red Hat Customer Portal. If you have vulnerability information you would like to share with us, please send an email to secalert@redhat.com.

    Posted: 2018-08-09T13:30:00+00:00
  • Security Technologies: ExecShield

    Authored by: Huzaifa Sidhpurwala

    The world of computer security has changed dramatically in the last few years. Keeping your operating system updated with the latest security patches is no longer sufficient; operating system providers need to be more proactive in combating security problems. A majority of exploitable security flaws are due to memory corruption. ExecShield, a Red Hat-developed technology included since Red Hat Enterprise Linux 3, aims to help protect systems from this type of exploitable security flaw.

    Buffer Overflows

    Buffer overflows are common mistakes found in programs written in the C or C++ programming languages and are generally very easy to exploit. In fact, there are semi-automated exploit-creation kits available on the Internet.

    Memory stack with example code

    When a function is called, the return address is first pushed onto the stack, followed by any variables declared on the stack, including buffers. On Intel and compatible processors, the stack grows in a downward direction over time (the image shows a simplified but inverted view of memory, in which the stack is shown growing upward). This is why the buffer is located at a lower address than the memory location containing the return address, i.e. the address of the program code that invoked the function. When the function is finished, that address is used to resume the program at the point of the function invocation.

    If there is a stack-based buffer overflow flaw (CWE-121) in the application, the attacker can write beyond the bounds of the buffer and overwrite the return address. By overwriting the return address (which holds the address of the memory location of the code to execute when the function is complete), the exploit controls which code is executed when the function finishes. The simplest and most common approach is to make the return address point back into the buffer, which the attacker has filled with program code in the same step that caused the overflow. Such injected program code is often called "shellcode", as an attacker would typically start a shell such as bash, through which further commands can be run.

    Memory stack showing exploit overwriting code in return address and buffer 1 and 2.

    Preventing Security flaws caused by buffer-overflow

    An attacker can often change the return address of the function and point it to an area within the buffer which contains shellcode. The first logical step in countering buffer overflows is to ensure that return addresses only point to trusted program code and not to hostile externally injected program code. This is the approach that ExecShield and NX technology, provided by AMD and Intel, take.

    Memory stack with a segment limit that segments the stack into executable and non-executable areas.

    ExecShield approximates a separation of read and execute permissions by segment limits. The effect of applying segment limits is that the first N megabytes of the virtual memory of a process are executable, while the remaining virtual memory is not. The operating system kernel selects the value of N.

    With such a segment limit in place, the operating system must make sure that all program code is located below this limit (the top side of the above picture) while data, especially the stack, should be located in the higher virtual memory addresses (the bottom side of the picture). When a violation of the execution permission happens, the program triggers a segmentation fault and terminates. This behavior is identical to when a program tries to violate read or write memory permissions or access unmapped memory addresses.

    Intel and AMD NX Technology

    Both AMD and Intel recognized the x86 architecture's inability to separate read and execute permissions. In the AMD64 processor line, AMD extended the architecture in a backward-compatible way by adding a No eXecute (NX) permission to the set of existing memory permissions. After AMD's decision to include this functionality in its AMD64 instruction set, Intel implemented the similar XD (eXecute Disable) bit feature in x86 processors, beginning with the Pentium 4 processors based on later iterations of the Prescott core.

    Since NX is more fine-grained than the segment-limit approach described above, ExecShield uses NX on processors that support it.


    Since a typical buffer-overflow exploit works by overwriting the original return address with the address of the buffer containing the shellcode, the attacker needs to know the exact address of that buffer. This may sound difficult, but in practice it often is not, especially when an address leak flaw is also present.

    Each system running the same version of the operating system has the same binaries and libraries. A person writing an exploit only has to examine their own system to determine the address, which will be the same on all other such systems. Another approach to exploiting buffer overflows also involves overwriting the return address; however, rather than overwriting it with the address of shellcode injected into the buffer, an attacker overwrites it with the address of a subroutine that is already present in the application, quite often the system() function from the glibc library. Since this type of attack does not depend on executing code in a data/stack area, but rather on executing previously present and legitimate code with attacker-supplied data, it defeats the ExecShield approach of making the stack non-executable. Note, however, that this approach also depends on knowing the exact address of the function that is to be called.

    In order to prevent the above, the ExecShield technology uses Address Space Layout Randomization, which gives randomized offsets to several key components like the stack, locations of shared libraries and the program heap. This randomization offers more system security by making it difficult to find the exact address needed for these exploits; the address is now different for every machine as well as being different each time a program starts.

    Though ExecShield provides a decent level of protection against exploitation, it is not enough on its own. Several other technologies such as ASLR should be enabled in the operating system. Further, important binaries need to be compiled with exploit-mitigation technologies like PIE, RELRO, and FORTIFY_SOURCE to provide an optimal level of protection, which is what Red Hat has been doing for years.

    Posted: 2018-07-25T13:30:00+00:00
  • SPECTRE Variant 1 scanning tool

    Authored by: Nick Clifton

    As part of Red Hat's commitment to product security, we have developed an internal tool that can be used to scan binaries for variant 1 SPECTRE vulnerabilities. In keeping with our commitment to the wider user community, we are introducing this tool via this article.

    This tool is not a Red Hat product. As such, it is not supported and does not come with any kind of warranty.

    The tool only works on static binaries and does not simulate an entire running system. This means it will neither follow jumps through a PLT into a shared library, nor will it emulate the loading of extra code via the dlopen() function.

    The tool currently only supports the x86_64 and AArch64 architectures. We do hope to add additional architectures in the future.

    The tool is currently available in source form as it would be unwise to offer a security analysis tool as a pre-compiled binary. There are details on how to obtain the source and build the tool later in this article.


    To use the scanner simply invoke it with the path to a binary to scan and a starting address inside the binary:

    x86_64-scanner vmlinux --start-address=0xffffffff81700001

    Note - these examples are using the x86_64 scanner, but the AArch64 scanner behaves in the same way.

    The start address will presumably be a syscall entry point, and the binary a kernel image. (Uncompressed; the scanner does not yet know how to decompress compressed kernels). Alternatively the binary could be a library and the address an external function entry point into that library. In fact, the scanner will handle any kind of binary, including user programs, libraries, modules, plugins and so on.

    A start address is needed in order to keep things simple and to avoid extraneous output. In theory, the scanner could examine every possible code path through a binary, including ones not normally accessible to an attacker. But this would produce too much output. Instead, a single start address is used in order to restrict the search to a smaller region. Naturally the scanner can be run multiple times with different starting addresses each time, so that all valid points of attack can be scanned.

    The output of the scanner will probably look like this:

    X86 Scanner: No sequences found.

    Or, if something is found, like this:

    X86 Scanner: Possible sequence found, based on a starting address of 0:
    X86 Scanner:        000000: nop
    X86 Scanner: COND:  000001: jne &0xe
    X86 Scanner:        00000e: jne &0x24
    X86 Scanner: LOAD:  000010: mov 0xb0(%rdi),%rcx
    X86 Scanner:        000017: mov 0xb0(%rsp),%rax
    X86 Scanner:        00001f: nop
    X86 Scanner: LOAD:  000020: mov 0x30(%rcx),%rbx

    This indicates that entering the test binary at address 0x0 can lead to a conditional jump at address 0x1 that would trigger speculation. A load at address 0x10 then uses an attacker-provided value (in %rdi), which might influence a second load at 0x20.

    One important point to remember about the scanner’s output is that it is only a starting point for further investigation. Closer examination of the code flagged may reveal that an attacker could not actually use it to exploit the Spectre vulnerability.

    Note - currently the scanner does not check that the starting address is actually a valid instruction address. (It does check that the address lies inside the binary image provided). So if an invalid address is provided unexpected errors can be generated.

    Note - the scanner sources include two test files, one for x86_64 (x86_64_test.S) and one for AArch64 (aarch64_test.S). These can be used to test the functionality of the scanner.

    How It Works

    The scanner is basically a simulator that emulates the execution of the instructions from the start address until the code reaches a return instruction which would return to whatever called the start address. It tracks values in registers and memory (including the stack, data sections, and the heap).

    Whenever a conditional branch is encountered the scanner splits itself and follows both sides of the branch. This is repeated at every encounter, subject to an upper limit set by the --max-num-branches option.

    The scanner assumes that at the start address only the stack and those registers which are used for parameter passing could contain attacker provided values. This helps prevent false positive results involving registers that could not have been compromised by an attacker.

    The scanner keeps a record of the instructions encountered and which of them might trigger speculation and which might be used to load values from restricted memory, so that it can report back when it finds a possible vulnerability.

    The scanner also knows about the speculation denial instructions (lfence, pause, csdb), and it will stop a scan whenever it encounters one of them.

    The scanner has a built-in limit on the total number of instructions that it will simulate on any given path. This is so that it does not get stuck in infinite loops. Currently the limit is set at 4096 instructions.


    The scanner does support a --verbose option which makes it report more about what it is doing. If this option is repeated, it will report even more, possibly too much. The --quiet option, on the other hand, disables most output (unless there is an internal error), although the tool still returns a zero or non-zero exit value depending upon whether any vulnerabilities were found.

    There is also a --max-num-branches option which will restrict the scanner to following no more than the specified number of conditional branches. The default is 32, so this option can be used to increase or decrease the amount of scanning performed by the tool.

    By default the scanner assumes that the file being examined is in the ELF file format. But the --binary option overrides this and causes the input to be treated as a raw binary file. In this format the address of the first byte in the file is considered to be zero, the second byte is at address 1 and so on.

    The x86_64 scanner uses Intel syntax in its disassembly output by default but you can change this with the --syntax=att option.


    The source for the scanner is available for download. In order to build the scanner from the source you will also need a copy of the FSF binutils source.

    Note - it is not sufficient to just install the binutils package or the binutils-devel package, as the scanner uses header files that are internal to the binutils sources. This requirement is an artifact of how the scanner evolved and it will be removed one day.

    Note - you do not need to build a binutils release from these sources, but if you do not, you will need to install the binutils and binutils-devel packages on your system so that the binutils libraries are available for linking the scanner. In theory it should not matter if you have different versions of the binutils sources and binutils packages installed, as the scanner only makes use of very basic functions in the binutils library, ones that do not change between released versions.

    Edit the makefile and select the version of the scanner that you want to build (AArch64 or x86_64). Also edit the CFLAGS variable to point to the binutils sources.

    If you are building the AArch64 version of the tool you will also need a copy of the GDB sources from which you will need to build the AArch64 simulator:

    ./configure --target=aarch64-elf
    make all-sim

    Then edit the makefile and change the AARCH64 variables to point to the built sim. To build the scanner once these edits are complete just run "make".


    Feedback on problems building or running the scanner is very much welcome. Please send it to Nick Clifton. We hope that you find this tool useful.

    Red Hat is also very interested in collaborating with any party that is concerned about this vulnerability. If you would like to pursue this, please contact Jon Masters.

    Posted: 2018-07-18T13:30:00+00:00
  • Insights Security Hardening Rules

    Authored by: Keith Grant

    Many users of Red Hat Insights are familiar with the security rules we create to alert them about security vulnerabilities on their system, especially concerning high-profile issues such as Spectre/Meltdown or Heartbleed. In this post, I'd like to talk about the other category of security related rules, those related to security hardening.

    In all of the products we ship, we make a concerted effort to ship thoughtful, secure default settings to minimize the amount of configuration needed to do the work you want to do. With complex packages such as Apache httpd, however, every installation will require some degree of customization before it's ready for deployment to production, and with more complex configurations, there's a chance that a setting or the interaction between several settings can have security implications which aren't immediately evident. Additionally, sometimes systems are configured in a manner that aids rapid development, but those configurations aren't suitable for production environments.

    With our hardening rules, we detect some of the most common security-related configuration issues and provide context to help you understand the represented risks, as well as recommendations on how to remediate the issues.

    Candidate Rule Sources

    We use several sources to find candidates for new hardening rules, but our primary sources are our own Red Hat Enterprise Linux Security Guides. These guides are founded on Red Hat's own knowledge of its specific environment, past customer issues, and the domain expertise of Red Hat's engineers. These guides cover a broad spectrum of security concerns ranging from physical and operational security to specific recommendations for individual packages or services.

    Additionally, the Product Security Insights team reviews other industry-standard benchmarks, best-practices guides, and news sources for their perspectives on secure configurations. One example is the Center for Internet Security's CIS Benchmark for RHEL specifically and Linux in general. Another valuable asset is SANS' Information Security Resources, which provides news about new research in information security.

    From these sources, we select candidates based on a number of informal criteria, such as:

    • What risk does this configuration represent? Some misconfigurations can expose confidential information, while a less serious misconfiguration might cause loss of audit log data.
    • How common are vulnerable configurations? If an issue seems rare, then it may have a lower priority. Conversely, some issues are almost ubiquitous, which suggests further investigation into whether our user communication or education could be improved.
    • How likely are false reports, positive or negative? Some system configurations, especially around networking, are intrinsically complex. Being able to assess whether a system has a vulnerable firewall in isolation is challenging, as users may have shifted the responsibility for a particular security check (e.g. packet filtering) to other devices. In some cases, heuristics can be used, but this is always weighed against the inconvenience of false reports.

    With these factors in mind, we can prioritize our list of candidates. We can also identify areas where more information would make possible other rules, or would improve the specificity of rule recommendations.

    An Example Rule

    For a concrete example, one hardening rule we offer detects potentially insecure network-related settings in sysctl. Several parameters are tested by this rule, such as:

    icmp_echo_ignore_broadcasts: This setting, which is on by default, prevents the system from responding to ICMP requests sent to broadcast addresses. A user may have disabled it while troubleshooting network issues, but leaving it disabled presents an opportunity for a bad actor to stage a denial-of-service attack against the system's network segment.

    tcp_syncookies: Also on by default, syncookies provide protection against TCP SYN flood attacks. In this case, there aren't many reasons why it would be disabled, but some specialized hardware, legacy software, or software in development may have a minimal network stack which doesn't support syncookies. In this case, it's important to be aware of the issue and have other methods to protect the system from SYN flood attacks.

    ip_forward: This setting, which allows packet forwarding, is disabled by default. However, since it must be enabled for the system to act as a router, it's also the most commonly detected setting. In this case, to prevent false positives, the rule uses supporting data such as the firewall configuration to determine if this system may be acting as a router. If it's not, it's possible the user has a particular purpose for having the system forward packets, or it's possible the system was used as a router at one point, but its configuration wasn't completely reexamined after it was put into use elsewhere. In any case, as above, it's important that the system's user is aware that the feature is enabled, and understands the security implications.

    These are only a few of the parameters this rule examines. In some cases, such as this one, several different but related issues are handled by a single rule, as the locations of the configuration and the logic used to detect problems are similar. In other cases, such as with httpd configuration, the problem domain is much larger and warrants separate rules for separate areas of concern, such as data file permissions, cryptography configuration, or services exposed to public networks.


    This is just a brief overview of the process that goes into choosing candidates for and creating security hardening rules. It is, in practice, a topic as large as the configuration space of systems in general. That there is so much information about how to securely configure your systems is testament to that. What might be insecure in one context is the intended state in another, and in the end, only the user will have sufficient knowledge of their context to know which is the case. Red Hat Insights, however, provides users with Red Hat's breadth and depth of understanding, applied to the actual, live configuration of their systems. In this way, users benefit not only from the automated nature of Insights, but also from the Product Security Insights team's participation in the wider Information Security community.

    While we have an active backlog of security hardening rules, and likely will for some time due to the necessary prioritization of vulnerabilities in our rule creation, we're always interested in hearing about your security concerns. If there are issues you've faced that you'd like Insights to be able to tell you about, please let us know. Additionally, if you've had a problem with one of our rules, we'd like to work with you to address it. We may have substantial knowledge about how Red Hat products work, but you are the most knowledgeable about how you use them, and our objective is to give you all the information we can to help you do so securely.

    Posted: 2018-07-12T13:30:00+00:00
  • Red Hat’s disclosure process

    Authored by: Vincent Danen

    Last week, a vulnerability (CVE-2018-10892) that affected CRI-O, Buildah, Podman, and Docker was made public before some affected upstream projects were notified. We regret that this was not handled in a way that lives up to our own standards around responsible disclosure. It has caused us to look back to see what went wrong so as to prevent this from happening in the future.

    Because of how important our relationships with the community and industry partners are and how seriously we treat non-public information irrespective of where it originates, we are taking this event as an opportunity to look internally at improvements and challenge assumptions we have held.

    We conducted a review and are using this to develop training around the handling of non-public information relating to security vulnerabilities, and ensuring that our relevant associates have a full understanding of the importance of engaging with upstreams as per their, and our, responsible disclosure guidelines. We are also clarifying communication mechanisms so that our associates are aware of the importance of and methods for notifying upstream of a vulnerability prior to public disclosure.

    Red Hat values and recognizes the importance of relationships, be they with upstreams, downstreams, industry partners and peers, customers, or vulnerability reporters. We embrace open source development principles including trust and transparency. As we navigate through a landscape full of software that will inevitably contain security vulnerabilities we strive to manage each flaw with the same degree of care and attention, regardless of its potential impact. Our commitment is to work with other vendors of Linux and open source software to reduce the risk of security issues through responsible information sharing and peer reviews.

    This event has reminded us that it is important to remain vigilant, provide consistent, clear guidance, and handle potentially sensitive information appropriately. And while our track record of responsible disclosure speaks for itself, when an opportunity presents itself to revisit, reflect, and improve our processes, we make the most of it to ensure we have the proper procedures and controls in place.

    Red Hat takes its participation in open source projects and security disclosure very seriously. We have discovered hundreds of vulnerabilities and our dedicated Product Security team has participated in responsible disclosures for more than 15 years. We strive to get it right every time, but this time we didn't quite live up to the standards to which we seek to hold ourselves. Because we believe in open source principles such as accountability, we wanted to share what had happened and how we have responded to it. We are sincerely apologetic for not meeting our own standards in this instance.

    Posted: 2018-07-10T13:00:00+00:00