
Latest Posts

  • SPECTRE Variant 1 scanning tool

    Authored by: Nick Clifton

    As part of Red Hat's commitment to product security, we have developed a tool internally that can be used to scan for variant 1 SPECTRE vulnerabilities. As part of our commitment to the wider user community, we are introducing this tool via this article.

    This tool is not a Red Hat product. As such, it is not supported and does not come with any kind of warranty.

    The tool only works on static binaries and does not simulate an entire running system. This means it will neither follow jumps through a PLT into a shared library, nor will it emulate the loading of extra code via the dlopen() function.

    The tool currently only supports the x86_64 and AArch64 architectures. We do hope to add additional architectures in the future.

    The tool is currently available in source form as it would be unwise to offer a security analysis tool as a pre-compiled binary. There are details on how to obtain the source and build the tool later in this article.


    To use the scanner, simply invoke it with the path to a binary to scan and a starting address inside the binary:

    x86_64-scanner vmlinux --start-address=0xffffffff81700001

    Note - these examples are using the x86_64 scanner, but the AArch64 scanner behaves in the same way.

    The start address will presumably be a syscall entry point, and the binary a kernel image. (Uncompressed; the scanner does not yet know how to decompress compressed kernels). Alternatively the binary could be a library and the address an external function entry point into that library. In fact, the scanner will handle any kind of binary, including user programs, libraries, modules, plugins and so on.

    A start address is needed in order to keep things simple and to avoid extraneous output. In theory, the scanner could examine every possible code path through a binary, including ones not normally accessible to an attacker. But this would produce too much output. Instead, a single start address is used in order to restrict the search to a smaller region. Naturally the scanner can be run multiple times with different starting addresses each time, so that all valid points of attack can be scanned.
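    For example, running the scanner over several entry points can be done with a simple shell loop. This is a sketch: the addresses below are placeholders, and in practice the entry points would come from something like the kernel's System.map or a library's dynamic symbol table.

```shell
# Scan a binary once per entry point. The addresses are illustrative.
SCANNER="${SCANNER:-x86_64-scanner}"
# Fall back to echoing the commands if the scanner is not on PATH,
# so the loop itself can be exercised anywhere.
command -v $SCANNER >/dev/null 2>&1 || SCANNER="echo $SCANNER"

for addr in 0xffffffff81700001 0xffffffff81700040 0xffffffff81700080; do
    $SCANNER vmlinux --start-address="$addr"
done
```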

    The output of the scanner will probably look like this:

    X86 Scanner: No sequences found.

    Or, if something is found, like this:

    X86 Scanner: Possible sequence found, based on a starting address of 0:.
    X86 Scanner:               000000: nop.
    X86 Scanner: COND: 000001: jne &0xe .
    X86 Scanner:               00000e: jne &0x24 .
    X86 Scanner: LOAD:  000010: mov 0xb0(%rdi),%rcx.
    X86 Scanner:               000017: mov 0xb0(%rsp),%rax.
    X86 Scanner:               00001f: nop.
    X86 Scanner: LOAD:  000020: mov 0x30(%rcx),%rbx.

    This indicates that entering the test binary at address 0x0 can lead to a conditional jump at address 0x1 that would trigger speculation. Then a load at address 0x10 uses an attacker-provided value (in %rdi), which might influence a second load at 0x20.

    One important point to remember about the scanner’s output is that it is only a starting point for further investigation. Closer examination of the code flagged may reveal that an attacker could not actually use it to exploit the Spectre vulnerability.

    Note - currently the scanner does not check that the starting address is actually a valid instruction address. (It does check that the address lies inside the binary image provided.) So if an invalid address is provided, unexpected errors can be generated.

    Note - the scanner sources include two test files, one for x86_64 (x86_64_test.S) and one for AArch64 (aarch64_test.S). These can be used to test the functionality of the scanner.

    How It Works

    The scanner is basically a simulator that emulates the execution of the instructions from the start address until the code reaches a return instruction which would return to whatever called the start address. It tracks values in registers and memory (including the stack, data sections, and the heap).

    Whenever a conditional branch is encountered the scanner splits itself and follows both sides of the branch. This is repeated at every encounter, subject to an upper limit set by the --max-num-branches option.

    The scanner assumes that at the start address only the stack and those registers which are used for parameter passing could contain attacker provided values. This helps prevent false positive results involving registers that could not have been compromised by an attacker.

    The scanner keeps a record of the instructions encountered and which of them might trigger speculation and which might be used to load values from restricted memory, so that it can report back when it finds a possible vulnerability.

    The scanner also knows about the speculation denial instructions (lfence, pause, csdb), and it will stop a scan whenever it encounters one of them.

    The scanner has a built-in limit on the total number of instructions that it will simulate on any given path. This is so that it does not get stuck in infinite loops. Currently the limit is set at 4096 instructions.


    The scanner does support a --verbose option which makes it tell you more about what it is doing. If this option is repeated it will tell you even more, possibly too much. The --quiet option, on the other hand, disables most output (unless there is an internal error), although the tool still returns a zero or non-zero exit value depending upon whether any vulnerabilities were found.

    There is also a --max-num-branches option which will restrict the scanner to following no more than the specified number of conditional branches. The default is 32, so this option can be used to increase or decrease the amount of scanning performed by the tool.

    By default the scanner assumes that the file being examined is in the ELF file format. But the --binary option overrides this and causes the input to be treated as a raw binary file. In this format the address of the first byte in the file is considered to be zero, the second byte is at address 1 and so on.

    The x86_64 scanner uses Intel syntax in its disassembly output by default but you can change this with the --syntax=att option.


    The source for the scanner is available for download. In order to build the scanner from the source you will also need a copy of the FSF binutils source.

    Note - it is not sufficient to just install the binutils package or the binutils-devel package, as the scanner uses header files that are internal to the binutils sources. This requirement is an artifact of how the scanner evolved and it will be removed one day.

    Note - you do not need to build a binutils release from these sources. But if you do not, then you will need to install the binutils and binutils-devel packages on your system, so that the binutils libraries are available for linking the scanner. In theory it should not matter if you have different versions of the binutils sources and binutils packages installed, as the scanner only makes use of very basic functions in the binutils library, ones that do not change between released versions.

    Edit the makefile and select the version of the scanner that you want to build (AArch64 or x86_64). Also edit the CFLAGS variable to point to the binutils sources.

    If you are building the AArch64 version of the tool you will also need a copy of the GDB sources from which you will need to build the AArch64 simulator:

    ./configure --target=aarch64-elf
    make all-sim

    Then edit the makefile and change the AARCH64 variables to point to the built sim. To build the scanner once these edits are complete just run "make".


    Feedback on problems building or running the scanner is very much welcome. Please send it to Nick Clifton. We hope that you find this tool useful.

    Red Hat is also very interested in collaborating with any party that is concerned about this vulnerability. If you would like to pursue this, please contact Jon Masters.

    Posted: 2018-07-18T13:30:00+00:00
  • Insights Security Hardening Rules

    Authored by: Keith Grant

    Many users of Red Hat Insights are familiar with the security rules we create to alert them about security vulnerabilities on their system, especially concerning high-profile issues such as Spectre/Meltdown or Heartbleed. In this post, I'd like to talk about the other category of security related rules, those related to security hardening.

    In all of the products we ship, we make a concerted effort to ship thoughtful, secure default settings to minimize the amount of configuration needed to do the work you want to do. With complex packages such as Apache httpd, however, every installation will require some degree of customization before it's ready for deployment to production, and with more complex configurations, there's a chance that a setting or the interaction between several settings can have security implications which aren't immediately evident. Additionally, sometimes systems are configured in a manner that aids rapid development, but those configurations aren't suitable for production environments.

    With our hardening rules, we detect some of the most common security-related configuration issues and provide context to help you understand the represented risks, as well as recommendations on how to remediate the issues.

    Candidate Rule Sources

    We use several sources to find candidates for new hardening rules, but our primary sources are our own Red Hat Enterprise Linux Security Guides. These guides are founded on Red Hat's own knowledge of its specific environment, past customer issues, and the domain expertise of Red Hat's engineers. These guides cover a broad spectrum of security concerns ranging from physical and operational security to specific recommendations for individual packages or services.

    Additionally, the Product Security Insights team reviews other industry-standard benchmarks, best-practices guides, and news sources for their perspectives on secure configurations. One example is the Center for Internet Security's CIS Benchmark for RHEL specifically and Linux in general. Another valuable asset is SANS' Information Security Resources, which provides news about new research in information security.

    From these sources, we select candidates based on a number of informal criteria, such as:

    • What risk does this configuration represent? Some misconfigurations can expose confidential information, while a less serious misconfiguration might cause loss of audit log data.
    • How common are vulnerable configurations? If an issue seems rare, then it may have a lower priority. Conversely, some issues are almost ubiquitous, which suggests further research into where our user communication or education could be improved.
    • How likely are false reports, positive or negative? Some system configurations, especially around networking, are intrinsically complex. Being able to assess whether a system has a vulnerable firewall in isolation is challenging, as users may have shifted the responsibility for a particular security check (e.g. packet filtering) to other devices. In some cases, heuristics can be used, but this is always weighed against the inconvenience of false reports.

    With these factors in mind, we can prioritize our list of candidates. We can also identify areas where more information would make other rules possible, or would improve the specificity of rule recommendations.

    An Example Rule

    For a concrete example, one hardening rule we offer detects potentially insecure network-related settings in sysctl. Several parameters are tested by this rule, such as:

    icmp_echo_ignore_broadcasts: This setting, which is on by default, prevents the system from responding to ICMP requests sent to broadcast addresses. A user may have changed this setting while troubleshooting network issues, but disabling it gives a bad actor an opportunity to stage a denial-of-service attack against the system's network segment.

    tcp_syncookies: Also on by default, syncookies provide protection against TCP SYN flood attacks. In this case, there aren't many reasons why it would be disabled, but some specialized hardware, legacy software, or software in development may have a minimal network stack which doesn't support syncookies. In this case, it's important to be aware of the issue and have other methods to protect the system from SYN flood attacks.

    ip_forward: This setting, which allows packet forwarding, is disabled by default. However, since it must be enabled for the system to act as a router, it's also the most commonly detected setting. In this case, to prevent false positives, the rule uses supporting data such as the firewall configuration to determine if this system may be acting as a router. If it's not, it's possible the user has a particular purpose for having the system forward packets, or it's possible the system was used as a router at one point, but its configuration wasn't completely reexamined after it was put into use elsewhere. In any case, as above, it's important that the system's user is aware that the feature is enabled, and understands the security implications.
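    The current values of these particular parameters can be reviewed on a running system with the sysctl utility (the keys below are the real sysctl names for the three settings discussed; the rule itself inspects more parameters than these):

```shell
# Inspect the current values of the three parameters discussed above.
# Each command prints "key = value"; 1 means the feature is enabled.
sysctl net.ipv4.icmp_echo_ignore_broadcasts
sysctl net.ipv4.tcp_syncookies
sysctl net.ipv4.ip_forward
```

    A persistent change would go in /etc/sysctl.conf (or a drop-in file under /etc/sysctl.d/) and be applied with sysctl -p.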

    These are only a few of the parameters this rule examines. In some cases, such as this, several different but related issues are handled by a single rule, as the locations of the configuration and the logic used to detect problems are similar. In other cases, such as with httpd configuration, the problem domain is much larger, and warrants separate rules for separate areas of concern, such as data file permissions, cryptography configuration, or services exposed to public networks.


    This is just a brief overview of the process that goes into choosing candidates for and creating security hardening rules. It is, in practice, a topic as large as the configuration space of systems in general; the sheer volume of published guidance on secure configuration is testament to that. What might be insecure in one context is the intended state in another, and in the end, only the user has sufficient knowledge of their context to know which is the case. Red Hat Insights, however, provides users with Red Hat's breadth and depth of understanding, applied to the actual, live configuration of their systems. In this way, users benefit not only from the automated nature of Insights, but also from the Product Security Insights team's participation in the wider Information Security community.

    While we have an active backlog of security hardening rules, and likely will for some time due to the necessary prioritization of vulnerabilities in our rule creation, we're always interested in hearing about your security concerns. If there are issues you've faced that you'd like Insights to be able to tell you about, please let us know. Additionally, if you've had a problem with one of our rules, we'd like to work with you to address it. We may have substantial knowledge about how Red Hat products work, but you are the most knowledgeable about how you use them, and our objective is to give you all the information we can to help you do so securely.

    Posted: 2018-07-12T13:30:00+00:00
  • Red Hat’s disclosure process

    Authored by: Vincent Danen

    Last week, a vulnerability (CVE-2018-10892) that affected CRI-O, Buildah, Podman, and Docker was made public before some affected upstream projects were notified. We regret that this was not handled in a way that lives up to our own standards around responsible disclosure. It has caused us to look back to see what went wrong so as to prevent this from happening in the future.

    Because of how important our relationships with the community and industry partners are and how seriously we treat non-public information irrespective of where it originates, we are taking this event as an opportunity to look internally at improvements and challenge assumptions we have held.

    We conducted a review and are using this to develop training around the handling of non-public information relating to security vulnerabilities, and ensuring that our relevant associates have a full understanding of the importance of engaging with upstreams as per their, and our, responsible disclosure guidelines. We are also clarifying communication mechanisms so that our associates are aware of the importance of and methods for notifying upstream of a vulnerability prior to public disclosure.

    Red Hat values and recognizes the importance of relationships, be they with upstreams, downstreams, industry partners and peers, customers, or vulnerability reporters. We embrace open source development principles including trust and transparency. As we navigate through a landscape full of software that will inevitably contain security vulnerabilities we strive to manage each flaw with the same degree of care and attention, regardless of its potential impact. Our commitment is to work with other vendors of Linux and open source software to reduce the risk of security issues through responsible information sharing and peer reviews.

    This event has reminded us that it is important to remain vigilant, provide consistent, clear guidance, and handle potentially sensitive information appropriately. And while our track record of responsible disclosure speaks for itself, when an opportunity presents itself to revisit, reflect, and improve our processes, we make the most of it to ensure we have the proper procedures and controls in place.

    Red Hat takes its participation in open source projects and security disclosure very seriously. We have discovered hundreds of vulnerabilities and our dedicated Product Security team has participated in responsible disclosures for more than 15 years. We strive to get it right every time, but this time we didn't quite live up to the standards to which we seek to hold ourselves. Because we believe in open source principles such as accountability, we wanted to share what had happened and how we have responded to it. We are sincerely apologetic for not meeting our own standards in this instance.

    Posted: 2018-07-10T13:00:00+00:00
  • What is tar and why does OpenShift Container Application Platform use it?

    Authored by: Kurt Seifried

    Tar is a POSIX-standard archiving utility originally meant for making tape archives; one of tar's most enduring uses has been for system backups. Tar can take everything that is stored on a filesystem and store it in a structured file, including special files such as links and devices. This capability has made tar a popular storage format for more than 38 years.

    Red Hat's OpenShift Container Application Platform is a PaaS (Platform as a Service) that integrates many Red Hat software components and technologies (storage, middleware, etc.). Besides being an excellent platform for running applications, OpenShift has many integrated options to help you build and maintain those applications. One such option is the Source-to-Image (S2I) tool:

    Source-to-Image (S2I) is a tool for building reproducible Docker images. It produces ready-to-run images by injecting application source into a Docker image and assembling a new Docker image. The new image incorporates the base image (the builder) and built source and is ready to use with the docker run command. S2I supports incremental builds, which re-use previously downloaded dependencies, previously built artifacts, etc.

    S2I is not limited to stateless builds that start fresh from original configuration and data files each time. Through a feature called "incremental builds", S2I can re-use previously built or imported artifacts. Re-using third-party imports prevents having to wait for a full install each build. Re-using sub-builds means that a front-end change can potentially proceed without needing to wait for a full backend build. S2I accomplishes this by utilizing previous versions of the built image:

    During the build process, S2I must place sources and scripts inside the builder image. To do so, S2I creates a tar file that contains the sources and scripts, then streams that file into the builder image. Before executing the assemble script, S2I untars that file and places its contents into the location specified with the --destination flag or the io.openshift.s2i.destination label from the builder image, with the default location being the /tmp directory.

    What is CVE-2018-1102?

    CVE-2018-1102 is an issue created by the interaction of tar's relative path features and S2I's high level of privilege. By carefully crafting tar content, an attacker can overwrite host filesystem content and can potentially compromise the node hosting the build.

    tar is a storage format used to archive many varieties of filesystem content including: backups, source code, and binary packages. To make these archives portable, the tar format supports relative paths. This allows a tar archive to install files not just further down into the filesystem from where it is unpacked but further up as well. For example, the relative path "../../../file" would install "file" three levels up from the directory where the tar was being unpacked. This feature is not necessarily always needed, but it is faithfully reproduced in the Go tar library that is used by OpenShift.
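    The behaviour is easy to demonstrate with GNU tar (a sketch; the file names are placeholders). GNU tar only preserves such member names when explicitly asked to with -P, but other implementations, such as Go's archive/tar, will store them without complaint:

```shell
# Create an archive whose member name climbs out of the unpack dir.
workdir=$(mktemp -d)
mkdir -p "$workdir/build"
echo "payload" > "$workdir/escape.txt"
cd "$workdir/build"

# -P (--absolute-names) tells GNU tar to keep the "../" in the
# stored member name instead of stripping it.
tar -P -cf evil.tar ../escape.txt
tar -P -tf evil.tar    # lists the stored name: ../escape.txt

# GNU tar strips the "../" (with a warning) when extracting without
# -P; an extractor that honoured the stored name would instead write
# escape.txt one level ABOVE the extraction directory.
```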

    S2I must manage containers, container filesystems, and collaborate with OpenShift orchestration (e.g. for on-demand builds). As a result, it runs with a high level of privilege including the ability to write to the host filesystem. When unpacked by the S2I service, a particularly constructed relative path in a tar archive can escape the intended buildroot and install content on the host filesystem. This level of access can easily be leveraged into full control of the host.

    Previously, S2I did not check for potential escapes. Red Hat has made changes that prevent the use of relative paths to escape the buildroot directory and block other potentially dangerous features.

    How is Red Hat affected?

    OpenShift Container Application Platform 3.x is affected, from 3.0 through 3.9. Additionally, the OpenShift Online service was affected; however, it has been patched to address the issue. Disabling S2I builds completely mitigates this issue. Updates have been made available for OpenShift 3.4 and up. Hotfixes are pending for versions 3.1 through 3.3.

    What do you need to do?

    You should install the security updates for CVE-2018-1102. If you cannot do this, or need time to evaluate the patches, you can also disable S2I builds. For more information on this issue please read the vulnerability article explaining CVE-2018-1102.

    Posted: 2018-04-27T19:04:54+00:00
  • Join us in San Francisco at the 2018 Red Hat Summit

    Authored by: Christopher Robinson

    This year’s Red Hat Summit will be held on May 8-10 in beautiful San Francisco, USA. Product Security will be joining many Red Hat security experts in presenting and assisting subscribers and partners at the show. Here is a sneak peek at the more than 125 sessions that a security-minded attendee can see at Summit this year.


    Cloud Management and Automation

    S1181 - Automating security and compliance for hybrid environments
    S1467 - Live demonstration: Find it. Fix it. Before it breaks.
    S1104 - Distributed API Management in a Hybrid Cloud Environment

    Containers and Openshift

    S1049 - Best practices for securing the container life cycle
    S1260 - Hitachi & Red Hat collaborate: container migration guide
    S1220 - Network Security for Apps on OpenShift
    S1225 - OpenShift for Operations
    S1778 - Security oriented OpenShift within regulated environments
    S1689 - Automating Openshift Secure Container Deployment at Experian

    Infrastructure Modernization & Optimization

    S1727 - Demystifying systemd
    S1741 - Deploying SELinux successfully in production environments
    S1329 - Operations Risk Remediation in Highly Secure Infrastructures
    S1515 - Path to success with your Identity Management deployment
    S1936 - Red Hat Satellite 6 power user tips and tricks
    S1907 - Satellite 6 Securing Linux lifecycle in the Public Sector
    S1931 - Security Enhanced Linux for Mere Mortals
    S1288 - Smarter Infrastructure Management with Satellite & Insights

    Middleware + Modern App Dev Security

    S1896 - Red Hat API Management Overview, Security Models and Roadmap
    S1863 - Red Hat Single Sign-On Present and Future
    S1045 - Securing Apps & Services with Red Hat Single-Sign On
    S2109 - Securing service mesh, micro services and modern applications with JWT
    S1189 - Mobile in a containers world

    Value of the Red Hat Subscription

    S1916 - Exploiting Modern Microarchitectures: Meltdown, Spectre, and other hardware security vulnerabilities in modern processors
    S2702 - The Value of a Red Hat Subscription

    Roadmaps & From the Office of the CTO

    S2502 - Charting new territories with Red Hat
    S9973 - Getting strategic about security
    S1017 - Red Hat Security Roadmap : It's a Lifestyle, Not a Product
    S1000 - Red Hat Security Roadmap for Hybrid Cloud
    S1890 - What's new in security for Red Hat OpenStack Platform?

    Instructor-Led Security Labs

    L1007, L1007R - A practical introduction to container security (3rd ed.)
    L1036 - Defend Yourself Using Built-in RHEL Security Technologies
    L1034, L1034R - Implementing Proactive Security and Compliance Automation
    L1051 - Linux Container Internals: Part 1
    L1052 - Linux Container Internals: Part 2
    L1019 - OpenShift + RHSSO = happy security teams and happy users
    L1106, L1106R - Practical OpenSCAP
    L1055 - Up and Running with Red Hat Identity Management

    Security Mini-Topic Sessions

    M1022 - A problem's not a problem, until it's a problem (Red Hat Insights)
    M1140 - Blockchain: How to identify good use cases
    M1087 - Monitor and automate infrastructure risk in 15 minutes or less

    Security Birds-of-a-Feather Sessions

    B1009 - Connecting the Power of Data Security and Privacy
    B1990 - Grafeas to gate your deployment pipeline
    B1046 - I'm a developer. What do I need to know about security?
    B1048 - Provenance and Deployment Policy
    B1062 - The Red Hat Security BoF - Ask us (most) anything
    B1036 - Virtualization: a study
    B2112 - Shift security left - and right - in the container lifecycle

    Security Panels

    P1757 - DevSecOps with disconnected OpenShift
    P1041 - Making IoT real across industries

    Security Workshops

    W1025 - Satellite and Insights Test-drive

    On top of the sessions, Red Hat Product Security will be there playing fun educational games like our Flawed and Branded card game and the famous GAME SHOW! GAME SHOW!

    No matter what your interest or specialty is, Red Hat Summit definitely has something for you. Come learn more about the security features and practices around our products! We're looking forward to seeing you there!

    Posted: 2018-04-23T14:30:00+00:00
  • Certificate Transparency and HTTPS

    Authored by: Kurt Seifried

    Google has announced that on April 30, 2018, Chrome will:

    “...require that all TLS server certificates issued after 30 April, 2018 be compliant with the Chromium CT Policy. After this date, when Chrome connects to a site serving a publicly-trusted certificate that is not compliant with the Chromium CT Policy, users will begin seeing a full page interstitial indicating their connection is not CT-compliant. Sub-resources served over https connections that are not CT-compliant will fail to load and will show an error in Chrome DevTools.”

    So what exactly does this mean, and why should one care?

    What is a CT policy?

    CT stands for “Certificate Transparency” and, in simple terms, means that all certificates for websites will need to be registered by the issuing Certificate Authority (CA) in at least two public Certificate Logs.

    When a CA issues a certificate, it must now make a public statement in a trusted database (the Certificate Log) that, at a certain date and time, it issued a certificate for some site. The reason is that, for more than a year, many different CAs have issued certificates for sites and names for which they shouldn’t (like “localhost” or “1.2.3.”) or have issued certificates following fraudulent requests (e.g. people who are not BigBank asking for certificates for bigbank.example.com). By placing all requested certificates into these Certificate Logs, other groups, such as security researchers and companies, can monitor what is being issued and raise red flags as needed (e.g. if you see a certificate issued for your domain that you did not request).

    If you do not announce your certificates in these Certificate Logs, the Chrome web browser will generate an error page that the user must click through before going to the page they were trying to load. If a page contains elements (e.g. from advertising networks) that are served from non-CT-compliant domains, those elements will simply not be loaded.

    Why is Google doing this?

    There are probably several reasons, but the main ones are:

    1. As noted, several CAs have been discovered issuing certificates wrongly or fraudulently, putting Internet users at risk. This technical solution will greatly reduce that risk, as wrongly or fraudulently issued certificates can be detected quickly.

    2. More importantly, this prepares for a major change coming to the Chrome web browser in July 2018, in which all HTTP websites will be labeled as “INSECURE”, which should significantly drive up the adoption of HTTPS. This adoption will, of course, result in a flood of new certificates which, combined with the oversight provided by Certificate Logs, should help to catch fraudulently or wrongly-obtained certificates.

    What should a web server operator do?

    The first step is to identify your web properties, both external facing and internal facing. Then it’s simply a matter of determining whether you:

    1. want the certificate for a website to show up in the Certificate Log so that the Chrome web browser does not generate an error (e.g. your public-facing web sites will want this), or

    2. absolutely do not want that particular certificate to show up in the Certificate Logs (e.g. a sensitive internal host), and you’re willing to live with Chrome errors.

    Depending on how your certificates are issued, and who issued them, you may have some time before this becomes an issue (e.g. if you are using a service that issues short-lived certificates, you definitely will be affected by this). Also please note that some certificate issuers, like Amazon’s AWS Certificate Manager, do allow you to opt out of reporting certificates to the Certificate Logs, a useful feature for certificates being used on systems that are “internal” and that you do not want the world to know about.

    It should be noted that in the long term, option 2 (not reporting certificates to the Certificate Logs) will become increasingly problematic, as it is possible that Google may simply have Chrome block such certificates rather than generate an error. So, with that in mind, now is probably a good time to start determining how your security posture will change when all your HTTPS-based hosts are effectively being enumerated publicly. You will also need to determine what to do with any HTTP web sites, as they will start being labeled as “INSECURE” within the next few months, and you may need to deploy HTTPS for them, again resulting in them potentially showing up in the Certificate Logs.

    Posted: 2018-04-17T15:00:01+00:00
  • Harden your JBoss EAP 7.1 Deployments with the Java Security Manager

    Authored by: Jason Shepherd


    The Java Enterprise Edition (EE) 7 specification introduced a new feature which allows application developers to specify a Java Security Manager (JSM) policy for their Java EE applications, when deployed to a compliant Java EE Application Server such as JBoss Enterprise Application Platform (EAP) 7.1. Until now, writing JSM policies has been pretty tedious, and running with JSM was not recommended because it adversely affected performance. Now a new tool has been developed which allows the generation of a JSM policy for deployments running on JBoss EAP 7.1. It is possible that running with JSM enabled will still affect performance, but JEP 232 indicates the performance impact would be 10-15% (it is still recommended to test the impact per application).

    Why Run with the Java Security Manager Enabled?

    Running a JSM will not fully protect the server from malicious features of untrusted code. It does, however, offer another layer of protection which can help reduce the impact of serious security vulnerabilities, such as deserialization attacks. For example, most of the recent attacks against Jackson Databind rely on making a Socket connection to an attacker-controlled JNDI Server to load malicious code. This article provides information on how this issue potentially affects an application written for JBoss EAP 7.1. The Security Manager could block the socket creation, and potentially thwart the attack.

    How to generate a Java Security Manager Policy


    To get started you will need:

    • a Java EE EAR or WAR file to add policies to;
    • a deployment targeting JBoss EAP 7.1 or later; and
    • a comprehensive test plan which exercises every "normal" function of the application.

    If a comprehensive test plan isn't available, a policy could be generated in a production environment, as long as some extra disk space for logging is available and there is confidence the security of the application is not going to be compromised while generating policies.

    Setup 'Log Only' mode for the Security Manager

    JBoss EAP 7.1 added a new feature to its custom Security Manager that is enabled by setting the org.wildfly.security.manager.log-only System Property to true.

    For example, if running in stand-alone mode on Linux, enable the Security Manager and set the system property in the bin/standalone.conf file using:

    JAVA_OPTS="$JAVA_OPTS -Dorg.wildfly.security.manager.log-only=true"

    We'll also need to add some additional logging for the log-only property to work, so go ahead and adjust the logging categories to set org.wildfly.security.access to DEBUG, as per the documentation, e.g.:
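    A sketch of the equivalent JBoss CLI call (standard logging-subsystem syntax, assuming a default standalone configuration; run from bin/jboss-cli.sh connected to the server):

    ```shell
    # Add a DEBUG-level logger for the Security Manager's access checks:
    /subsystem=logging/logger=org.wildfly.security.access:add(level=DEBUG)
    ```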


    Test the application to generate policy violations

    For this example we'll use the batch-processing quickstart. Follow the README to deploy the application and access it running on the application server at http://localhost:8080/batch-processing. Click the 'Generate a new file and start import job' button in the Web UI and notice some policy violations are logged to the $JBOSS_HOME/standalone/log/server.log file, for example:

    DEBUG [org.wildfly.security.access] (Batch Thread - 1) Permission check failed (permission "("java.util.PropertyPermission" "java.io.tmpdir" "read")" in code source 
    "(vfs:/content/batch-processing.war/WEB-INF/classes <no signer certificates>)" of "ModuleClassLoader for Module "deployment.batch-processing.war" from Service Module Loader")

    Generate a policy file for the application

    Checkout the source code for the wildfly-policygen project written by Red Hat Product Security.

    git clone git@github.com:jasinner/wildfly-policygen.git

    Set the location of the server.log file which contains the generated security violations in the build.gradle script, i.e.:

    task runScript (dependsOn: 'classes', type: JavaExec) {
        main = 'com.redhat.prodsec.eap.EntryPoint'
        classpath = sourceSets.main.runtimeClasspath
        args '/home/jshepher/products/eap/7.1.0/standalone/log/server.log'
    }
    Run wildfly-policygen using gradle, i.e.:

    gradle runScript

    A permissions.xml file should be generated in the current directory. Using the example application, the file is called batch-processing.war.permissions.xml. Copy that file to src/main/webapp/META-INF/permissions.xml, build, and redeploy the application, for example:

    cp batch-processing.war.permissions.xml $APP_HOME/src/main/webapp/META-INF/permissions.xml

    Where APP_HOME is an environment variable pointing to the batch-processing application's home directory.
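    For reference, a minimal sketch of what a generated permissions.xml might look like for the violation logged earlier (the actual file produced by wildfly-policygen may list additional permissions):

    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Sketch of a META-INF/permissions.xml granting the single permission
         that failed in the example log entry above. -->
    <permissions xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="7">
        <permission>
            <class-name>java.util.PropertyPermission</class-name>
            <name>java.io.tmpdir</name>
            <actions>read</actions>
        </permission>
    </permissions>
    ```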

    Run with the security manager in enforcing mode

    Recall that we set the org.wildfly.security.manager.log-only system property in order to log permission violations. Remove that system property or set it to false in order to enforce the JSM policy that's been added to the deployment. Once that line has been changed or removed from bin/standalone.conf, restart the application server, build, and redeploy the application.

    JAVA_OPTS="$JAVA_OPTS -Dorg.wildfly.security.manager.log-only=false"

    Also go ahead and remove the extra logging category that was added previously using the CLI, e.g.:
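    A sketch of the corresponding CLI command, mirroring the logger added earlier (again assuming a default standalone configuration):

    ```shell
    # Remove the DEBUG logger for the Security Manager's access checks:
    /subsystem=logging/logger=org.wildfly.security.access:remove
    ```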


    This time there shouldn't be any permission violations logged in the server.log file. To verify the Security Manager is still enabled look for this message in the server.log file:

    INFO  [org.jboss.as] (MSC service thread 1-8) WFLYSRV0235: Security Manager is enabled


    While the Java Security Manager will not prevent all security vulnerabilities possible against an application deployed to JBoss EAP 7.1, it does add another layer of protection, which can mitigate the impact of serious security vulnerabilities such as deserialization attacks against Jackson Databind. If running with the Security Manager enabled, be sure to check its impact on the performance of the application to make sure it's within acceptable limits. Finally, use of the wildfly-policygen tool is not officially supported by Red Hat; however, issues can be raised for the project on GitHub, or you can reach out to Red Hat Product Security for usage help by emailing secalert@redhat.com.

    Posted: 2018-03-14T13:30:00+00:00
  • Securing RPM signing keys

    Authored by: Huzaifa Sidhpurwala

    RPM Package Manager is the common method for deploying software packages to Red Hat Enterprise Linux, Fedora Project, and their derivative Linux operating systems. These packages are generally signed using an OpenPGP key, implementing a cryptographic integrity check and enabling the recipient to verify that no modifications occurred after the package was signed (assuming the recipient has a copy of the sender’s public key). This model assumes that the signer has secured the RPM signing private keys and that they are not accessible to a powerful adversary.

    Before looking into possible solutions to the problem of securing RPM-signing keys, it’s worth mentioning that signing RPMs is not a complete solution. When using yum repositories, transport security (HTTPS) needs to be enabled to ensure that RPM packages and the associated metadata (which contains package checksums, timestamps, dependency information, etc.) are transmitted securely. Lastly, sysadmins should always ensure that packages are installed from trusted repositories.

    How RPM signing works

    The rpm file format is a binary format and broadly consists of 4 sections:
    - the legacy lead is a 96 byte header which contains "magic numbers" (used to identify file type) and other data;
    - an optional signature section;
    - a header which is an index containing information about the RPM package file; and
    - the cpio archive of the actual files to be written to the filesystem.

    During the signing process, GnuPG calculates a signature from the header and the cpio section using the private key. This OpenPGP signature is stored in the optional signature section of the RPM file.

    Diagram depicting the header and payload of the RPM is signed and that the signature is also stored in the RPM.

    During verification, GnuPG uses the public key to verify the integrity of both these sections.

    Diagram showing the process for verifying the header and payload of an RPM.

    The above is normally referred to as RPM v3 signatures. RPM version 4 and above introduce v4 signatures which are calculated on the header alone. The file contents are hashed and those hashes are protected by the header-only signature, so the payload can be validated without actually having a signature on it.

    How to sign rpms

    The RPM binary internally uses OpenPGP, which has been traditionally used for signing emails. The process involves creating a key pair using OpenPGP and then using the private key to sign the RPMs while the public key is published securely to enable users to verify the integrity of the binary RPMs after downloading them.

    Currently there is no way to directly sign RPMs using X.509 certificates. If you really want to use X.509 certificates, you can sign RPMs like any other files: sign the entire RPM file using openssl, nss, or any other X.509 tool, then distribute the signature and the certificate to verify against alongside the RPM file.
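    The OpenPGP signing workflow described above can be sketched with standard GnuPG and rpm tooling (key ID, file names, and paths are placeholders; exact macro names and package names can vary between rpm versions and distributions):

    ```shell
    # 1. Generate an OpenPGP key pair (interactive).
    gpg --gen-key

    # 2. Tell rpm which key to sign with (placeholder key UID).
    echo '%_gpg_name Your Key UID' >> ~/.rpmmacros

    # 3. Sign the package (rpmsign ships in the rpm-sign package on RHEL/Fedora).
    rpmsign --addsign mypackage-1.0-1.x86_64.rpm

    # 4. Consumers import the public key and verify the signature.
    rpm --import RPM-GPG-KEY-mykey
    rpm --checksig mypackage-1.0-1.x86_64.rpm
    ```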

    Using a Hardware Security Module

    A Hardware Security Module (HSM) is a physical computing device which can manage and safeguard digital keys. HSMs also provide cryptographic processing such as key generation, signing, encryption, and decryption. Various HSM devices are available on the market, with varying features and costs. Several are now available as USB sticks; once one is plugged in, GnuPG can recognize it, generate new keys on it, and sign and verify files using the on-device private keys.

    Typically, keys are generated inside the HSM where they are then stored. The HSM is then installed on specially configured systems which are used for signing RPMs. These systems may be disconnected from the Internet to ensure additional security. Even if the HSM is stolen, it may not be possible to extract the private key from the HSM.

    With time HSM devices have become cheaper and more easily available.

    Securely publishing public keys

    While the conventional method of publishing OpenPGP public keys on HTTPS-protected websites works, DNSSEC (or DNS, for that matter) provides an elegant solution for further securing the integrity of published public keys.

    Public Key Association (PKA) is a simple way of storing the lookup information for your public key in a DNS TXT record. The record can hold the fingerprint of the public key as well as its storage location, and with DNSSEC the records can be signed to provide additional security.
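    As an illustration, a PKA record can be inspected with dig (the user and domain are placeholders; the label layout follows GnuPG's PKA convention of <local-part>._pka.<domain>):

    ```shell
    # Look up the PKA TXT record published for user@example.com:
    dig +short TXT user._pka.example.com
    ```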

    All RPMs released by Red Hat are signed, and the public keys are distributed with the products and also published online.

    Posted: 2018-03-07T14:30:00+00:00
  • Let's talk about PCI-DSS

    Authored by: Langley Rock

    For those who aren’t familiar with Payment Card Industry Data Security Standard (PCI-DSS), it is the standard that is intended to protect our credit card data as it flows between systems and is stored in company databases. PCI-DSS requires that all vulnerabilities rated equal to, or higher than, CVSS 4.0 must be addressed by PCI-DSS compliant organizations (notably, those which process and/or store cardholder data). While this was done with the best of intentions, it has had an impact on many organizations' capability to remediate these vulnerabilities in their environment.

    The qualitative severity ratings of vulnerabilities as categorized by Red Hat Product Security do not directly align with the baseline ratings recommended by CVSS. These CVSS scores and ratings are used by PCI-DSS and most scanning tools. As a result, there may be cases where a vulnerability which would be rated as low severity by Red Hat, may exceed the CVSS’ recommended threshold for PCI-DSS.

    Red Hat has published guidelines on vulnerability classification. Red Hat Product Security prioritizes security flaw remediation on Critical and Important vulnerabilities, which can lead to a compromise of confidentiality, integrity, and/or availability. This is not intended to downplay the importance of lower severity vulnerabilities, but rather aims to target those risks which are seen as most important by our customers and industry at large. CVSS base ratings assume a worst-case scenario (the CVSS calculator leaves all Temporal and Environmental factors set to “undefined”), effectively an environment with no security mitigations or compensating controls in place, which might not be an accurate representation of your environment. Specifically, a given flaw may be less significant in your application depending on how the function is used, whether it is exposed to untrusted data, or whether it enforces a privilege boundary. It is Red Hat’s position that base CVSS scores alone cannot reliably capture the importance of flaws in every use case.

    In most cases, security issues will be addressed when updates are available upstream. However, as noted above, there may be cases where a vulnerability rated as low severity by Red Hat exceeds the CVSS threshold for vulnerability mitigation under the PCI-DSS standard and is considered actionable by a security scanner or during an audit by a Qualified Security Assessor (QSA).

    In light of the above, Red Hat does not claim any of its products meet PCI-DSS compliance requirements. We do strive to provide secure software solutions and guidance to help remediate vulnerabilities of notable importance to our customers.

    When there is a discrepancy in the security flaw ratings, we suggest the following:

    • Harden your system: Determine if the component is needed or used. In many cases, scans will pick up on packages which are included in the distribution but do not need to be deployed in the production environment. If customers can remove these packages, or replace with another unaffected package, without impacting their functional system, they reduce the attack surface and reduce the number of components which might be targeted.

    • Validate the application: Determine if the situation is a false positive. (Red Hat often backports fixes which may result in false positives for version-detecting scanning products).

    • Self-evaluate the severity: Update the base CVSS score by calculating the environmental factors that are relevant, document the updated CVSS score for the vulnerability respective to your environment. All CVSS vector strings in our CVE pages link to the CVSS calculator on FIRST's website, with the base score pre-populated so that customers just need to fill in their other metrics.

    • Implement other controls to limit (or eliminate) exposure of the vulnerable interface or system.

    Further technical information to make these determinations can often be found from product support, in the various technical articles and blogs Red Hat makes available, in CVE pages’ Statement or Mitigation sections and in Bugzilla tickets. Customers with support agreements can reach out to product support for additional assistance to evaluate the potential risk for their environment, and confirm if the vulnerability jeopardizes the confidentiality of PCI-DSS data.

    Red Hat recognizes that vulnerability scores and impact ratings may differ, and we are here to help you assess your environment. As a customer, you can open a support case and give us the feedback that matters to you. Our support and product teams value this feedback and will use it to provide better results.

    Posted: 2018-02-28T14:30:00+00:00
  • JDK approach to address deserialization Vulnerability

    Authored by: Hooman Broujerdi

    Java deserialization of untrusted data has been a security buzzword for the past couple of years, with almost every application using the native Java serialization framework being vulnerable to Java deserialization attacks. Since the flaw's discovery, there have been many scattered attempts to come up with a solution that best addresses it. This article focuses on the Java deserialization vulnerability and explains how Oracle provides a mitigation framework in its latest Java Development Kit (JDK) versions.


    Let's begin by reviewing the Java deserialization process. Java Serialization Framework is JDK's built-in utility that allows Java objects to be converted into byte representation of the object and vice versa. The process of converting Java objects into their binary form is called serialization and the process of reading binary data to construct a Java object is called deserialization. In any enterprise environment the ability to save or retrieve the state of the object is a critical factor in building reliable distributed systems. For instance, a JMS message may be serialized to a stream of bytes and sent over the wire to a JMS destination. A RESTful client application may serialize an OAuth token to disk for future verification. Java's Remote Method Invocation (RMI) uses serialization under the hood to pass objects between JVMs. These are just some of the use cases where Java serialization is used.
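    The round trip described above can be sketched in a few lines (a minimal, self-contained example; the class name is arbitrary):

    ```java
    import java.io.*;

    // Minimal sketch of the Java serialization round-trip:
    // an object is written to a byte stream and read back.
    public class RoundTrip {
        static byte[] serialize(Object o) throws IOException {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(o);          // serialization: object -> bytes
            }
            return bos.toByteArray();
        }

        static Object deserialize(byte[] bytes) throws IOException, ClassNotFoundException {
            try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
                return ois.readObject();     // deserialization: bytes -> object
            }
        }

        public static void main(String[] args) throws Exception {
            byte[] bytes = serialize("hello");
            // Serialized streams begin with the magic bytes 0xAC 0xED.
            System.out.printf("%02x%02x%n", bytes[0] & 0xff, bytes[1] & 0xff);
            System.out.println(deserialize(bytes));
        }
    }
    ```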

    Inspecting the Flow

    When the application code triggers the deserialization process, ObjectInputStream will be initialized to construct the object from the stream of bytes. ObjectInputStream ensures the object graph that has been serialized is recovered. During this process, ObjectInputStream matches the stream of bytes against the classes that are available in the JVM's classpath.

    So, what is the problem?

    During the deserialization process, when readObject() takes the byte stream to reconstruct the object, it looks for the magic bytes relevant to the object type written to the serialization stream to determine what object type (e.g. enum, array, String, etc.) to resolve the byte stream to. If the byte stream cannot be resolved to one of these types, it is resolved to an ordinary object (TC_OBJECT), and finally the local class for that ObjectStreamClass is retrieved from the JVM's classpath. If the class is not found, an InvalidClassException is thrown.

    The problem arises when readObject() is presented with a byte stream that has been manipulated to leverage classes that have a high chance of being available in the JVM's classpath, also known as gadget classes, and are vulnerable to Remote Code Execution (RCE). So far a number of classes have been identified as vulnerable to RCE, and research is still ongoing to discover more such classes. How can these classes be used for RCE? Depending on the nature of the class, the attack is mounted by constructing the state of that particular class with a malicious payload, which is serialized and fed in at the point where serialized data is exchanged (i.e. the stream source) in the above workflow. This tricks the JDK into believing this is a trusted byte stream, and it is deserialized by initializing the class with the payload. Depending on the payload, this can have disastrous consequences.

    JVM vulnerable classes

    Of course the challenge for the adversary is being able to access the stream source for this purpose; the details are outside the scope of this article. For further information on the subject, review ysoserial, arguably the best tool for generating payloads.

    How to mitigate against deserialization?

    Loosely speaking, mitigation against a deserialization vulnerability is accomplished by implementing a LookAheadObjectInputStream strategy. The implementation subclasses the existing ObjectInputStream and overrides the resolveClass() method to verify whether the class is allowed to be loaded. This approach is an effective way of hardening against deserialization and usually comes in two flavors: whitelist or blacklist. In the whitelist approach, the implementation only accepts the business classes that are allowed to be deserialized and blocks all others. The blacklist implementation, on the other hand, holds a set of well-known vulnerable classes and blocks them from being deserialized.

    Both whitelist and blacklist approaches have their own pros and cons; however, the whitelist-based implementation proves to be the better way to mitigate a deserialization flaw. It follows the established security practice of checking input against known-good values. Blacklist-based implementation, on the other hand, relies heavily on intelligence gathered about which classes are vulnerable, gradually adding them to the list, and so is easy to miss or bypass.

    protected Class<?> resolveClass(ObjectStreamClass desc)
                    throws IOException, ClassNotFoundException {
          String name = desc.getName();
          // Blacklist flavor: reject classes known to be dangerous.
          if (isBlacklisted(name)) {
                  throw new SecurityException("Deserialization is blocked for security reasons");
          }
          // Whitelist flavor: reject anything that is not an expected business class.
          if (!isWhitelisted(name)) {
                  throw new SecurityException("Deserialization is blocked for security reasons");
          }
          return super.resolveClass(desc);
    }

    JDK's new Deserialization Filtering

    Although ad hoc implementations exist to harden against a deserialization flaw, the official specification on how to deal with this issue is still lacking. To address this issue, Oracle has recently introduced serialization filtering to improve the security of deserialization of data which seems to have incorporated both whitelist and blacklist scenarios. The new deserialization filtering is targeted for JDK 9, however it has been backported to some of the older versions of JDK as well.

    The core mechanism of deserialization filtering is based on the ObjectInputFilter interface, which provides a configuration capability so that incoming data streams can be validated during the deserialization process. The status check on the incoming stream is determined by the Status.ALLOWED, Status.REJECTED, or Status.UNDECIDED values of an enum type within the ObjectInputFilter interface. These values can be returned depending on the deserialization scenario: for instance, if the intention is to blacklist a class, the filter returns Status.REJECTED for that specific class and allows the rest to be deserialized by returning Status.UNDECIDED. On the other hand, if the intention is to whitelist, Status.ALLOWED can be returned for classes that match the expected business classes. In addition, the filter also has access to other information about the incoming stream, such as the number of array elements when deserializing an array (arrayLength), the depth of nested objects (depth), the current number of object references (references), and the current number of bytes consumed (streamBytes). This information provides more fine-grained assertion points on the incoming stream, so the filter can return the relevant status for each specific use case.

    Ways to configure the Filter

    JDK 9 supports three ways of configuring the filter: a custom filter, a process-wide filter (also known as the global filter), and the built-in filters for RMI registry and Distributed Garbage Collection (DGC) usage.

    Case-based Filters

    The configuration scenario for a custom filter occurs when a deserialization requirement differs from any other deserialization process in the application. In this use case a custom filter can be created by implementing the ObjectInputFilter interface and overriding the checkInput(FilterInfo filterInfo) method.

    static class VehicleFilter implements ObjectInputFilter {
            final Class<?> clazz = Vehicle.class;
            final long arrayLength = -1L;
            final long totalObjectRefs = 1L;
            final long depth = 1L;
            final long streamBytes = 95L;

            public Status checkInput(FilterInfo filterInfo) {
                if (filterInfo.arrayLength() < this.arrayLength || filterInfo.arrayLength() > this.arrayLength
                        || filterInfo.references() < this.totalObjectRefs || filterInfo.references() > this.totalObjectRefs
                        || filterInfo.depth() < this.depth || filterInfo.depth() > this.depth || filterInfo.streamBytes() < this.streamBytes
                        || filterInfo.streamBytes() > this.streamBytes) {
                    return Status.REJECTED;
                }
                if (filterInfo.serialClass() == null) {
                    return Status.UNDECIDED;
                }
                if (filterInfo.serialClass() == this.clazz) {
                    return Status.ALLOWED;
                } else {
                    return Status.REJECTED;
                }
            }
    }

    JDK 9 has added two methods to the ObjectInputStream class allowing the above filter to be set/get for the current ObjectInputStream:

    public class ObjectInputStream
        extends InputStream implements ObjectInput, ObjectStreamConstants {

        private ObjectInputFilter serialFilter;

        public final ObjectInputFilter getObjectInputFilter() {
            return serialFilter;
        }

        public final void setObjectInputFilter(ObjectInputFilter filter) {
            this.serialFilter = filter;
        }
    }

    Contrary to JDK 9, the latest JDK 8 (1.8.0_144) currently only allows the filter to be set via ObjectInputFilter.Config.setObjectInputFilter(ois, new VehicleFilter());.

    Process-wide (Global) Filters

    A process-wide filter can be configured by setting jdk.serialFilter as either a system property or a security property. If the system property is defined, it is used to configure the filter; otherwise the filter checks the security property (i.e. jdk1.8.0_144/jre/lib/security/java.security) for its configuration.

    The value of jdk.serialFilter is configured as a sequence of patterns either by checking against the class name or the limits for incoming byte stream properties. Patterns are separated by semicolon and whitespace is also considered to be part of a pattern. Limits are checked before classes regardless of the order in which the pattern sequence is configured. Below are the limit properties which can be used during the configuration:

    - maxdepth=value // the maximum depth of a graph
    - maxrefs=value // the maximum number of the internal references
    - maxbytes=value // the maximum number of bytes in the input stream
    - maxarray=value // the maximum array size allowed

    Other patterns match the class or package name as returned by Class.getName(). Class/package patterns also accept the asterisk (*), double asterisk (**), period (.), and forward slash (/) symbols. Below are a few example patterns:

    // this matches a specific class and rejects the rest
    - "jdk.serialFilter=org.example.Vehicle;!*"
    // this matches all classes in the package and all subpackages and rejects the rest
    - "jdk.serialFilter=org.example.**;!*"
    // this matches all classes in the package and rejects the rest
    - "jdk.serialFilter=org.example.*;!*"
    // this matches any class with the pattern as a prefix
    - "jdk.serialFilter=org.example*"

    Built-in Filters

    JDK 9 has also introduced additional built-in, configurable filters, mainly for the RMI Registry and Distributed Garbage Collection (DGC). The built-in filters for the RMI Registry and DGC white-list classes that are expected to be used in either of these services. Below are classes for both RMIRegistryImpl and DGCImpl:





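    The lists below are reproduced from the OpenJDK implementation of these built-in filters; verify against your exact JDK version, as they may change:

    ```
    RMI Registry (RMIRegistryImpl) whitelist:
        java.lang.String, java.lang.Number, java.rmi.Remote,
        java.lang.reflect.Proxy, sun.rmi.server.UnicastRef,
        java.rmi.server.RMIClientSocketFactory, java.rmi.server.RMIServerSocketFactory,
        java.rmi.activation.ActivationID, java.rmi.server.UID

    DGC (DGCImpl) whitelist:
        java.rmi.server.ObjID, java.rmi.server.UID,
        java.rmi.dgc.VMID, java.rmi.dgc.Lease
    ```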
    In addition to these classes, users can also add their own customized filters using sun.rmi.registry.registryFilter and sun.rmi.transport.dgcFilter system or security properties with the property pattern syntax as described in previous section.

    Wrapping up

    While Java deserialization is not a vulnerability itself, deserialization of untrusted data using the JDK's native serialization framework is. It is important to differentiate between the two, as the latter is introduced by bad application design rather than being a flaw in the JDK. Prior to JEP 290, however, the Java deserialization framework did not have any validation mechanism to verify the legitimacy of objects, and while there are a number of ways to mitigate the JDK's lack of assertions on deserialized objects, there was no concrete specification for dealing with this flaw within the JDK itself. With JEP 290, Oracle introduced a new filtering mechanism that allows developers to configure filters for a number of deserialization scenarios. The new filtering mechanism makes it easier to mitigate deserialization of untrusted data should the need arise.

    Posted: 2018-02-21T14:30:00+00:00
