Latest Posts

  • Kernel Stack Protector and BlueBorne

    Authored by: Red Hat Product...

    Today, a security issue called BlueBorne was disclosed, a vulnerability that has the potential to be used to attack sensitive systems via the Bluetooth protocol. Specifically, BlueBorne is a flaw where a remote (but physically quite close) attacker could get root on a server, without an internet connection or authentication, via installed and active Bluetooth hardware.

    The key phrase is “has the potential.” BlueBorne is still a serious flaw and one that requires patching and remediation, but most Red Hat Enterprise Linux users are at less risk of a direct attack. This is because Bluetooth hardware is not particularly common on servers, and our Server distributions of Red Hat Enterprise Linux don’t enable Bluetooth by default. But what about the desktop and workstation users of Red Hat Enterprise Linux and many other Linux distributions?

    Laptops and desktop machines commonly have Bluetooth hardware, and Workstation variants of Red Hat Enterprise Linux enable Bluetooth by default. It’s possible that a malicious actor could use a remote Bluetooth connector to gain access to personal workstations or terminals in an office building, allowing them to gain root for accessing sensitive data or potentially causing a cascading, system-wide attack. This is unlikely, however, on Linux operating systems, including Red Hat Enterprise Linux, thanks to Stack Protection.

    Stack Protection has been available for some time, having been introduced in some distributions back in 2005. We believe most major vendor distributions build their Linux kernels with Stack Protection enabled. For us, this includes Fedora Core (since version 5) and Red Hat Enterprise Linux (since version 6). With a kernel compiled in this way, the flaw turns from remote code execution into a remote crash (kernel panic). While a physically local attacker being able to crash your machines without touching them is bad, it's certainly not as bad as remote root.
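
    You can check whether your running kernel was built with Stack Protection by inspecting its build configuration. For example (the exact option name varies with kernel version, e.g. CONFIG_CC_STACKPROTECTOR on RHEL 7 era kernels; the output shown is illustrative):

    $ grep STACKPROTECTOR /boot/config-$(uname -r)
    CONFIG_CC_STACKPROTECTOR=y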

    Red Hat, along with other Linux distribution vendors and the upstream Kernel security team, received one week advance notice on BlueBorne in order to prepare patches and updates. We used this time to evaluate the issue, develop the fix and build and test updated packages for supported versions of Red Hat Enterprise Linux. We also used the time to provide clearly understood information about the flaw, and how it impacted our products, which can be found in the Vulnerability Article noted below.

    Because Stack Protection works by adding a single check value (a canary) to the stack before the return address, a buffer overflow could, depending on how things get ordered, overwrite other buffers on the stack before reaching that canary, so it was important for us to check properly. Based on a technical investigation, we concluded that with Stack Protection enabled it would be quite unlikely for an attacker to exploit this flaw to gain code execution. We can't completely rule it out, though: an attacker may be able to bypass the protection by some other mechanism, for example by determining the value of the stack canary, exploiting a race condition, or combining BlueBorne with some other flaw.

    On some architectures, notably ppc64 and s390x for Red Hat Enterprise Linux, Stack Protection is not used. However, the Bluetooth kernel module is not available for our s390x Server variant, and ppc64 is only available in a Server variant, which doesn't install the bluez package, making it not vulnerable by default even if Bluetooth hardware happens to be present.
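
    A quick way to confirm whether the Bluetooth user-space stack is even installed on a given system (package names may differ on non-RPM distributions; output is illustrative):

    $ rpm -q bluez
    package bluez is not installed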

    So if most distributions build kernels with Stack Protection, and Stack Protection was available for many years before the flaw was introduced, where is the risk? The problem is all those kernels that have been built without Stack Protection turned on. Bluetooth-enabled IoT devices running a vulnerable kernel compiled without Stack Protection will be most at risk from this flaw.

    Regardless of whether you have Stack Protection or not, patch your system. BlueBorne remains an important flaw and one that needs to be remedied as soon as possible via the appropriate updates.
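
    Until updates can be applied, and on systems where Bluetooth is not actually needed, disabling it removes this attack surface entirely. A minimal check and mitigation might look like the following (a sketch assuming a systemd-based system such as RHEL 7; unit names and output may vary):

    # Is the Bluetooth stack loaded and running?
    $ lsmod | grep -i bluetooth
    $ systemctl status bluetooth.service

    # If Bluetooth is not needed, stop and disable it
    $ sudo systemctl stop bluetooth.service
    $ sudo systemctl disable bluetooth.service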

    For Red Hat customers our page https://access.redhat.com/security/vulnerabilities/blueborne contains information on the patches we released today along with other details and mitigations. We’d like to thank Armis Labs for reporting this vulnerability.

    Posted: 2017-09-12T11:51:33+00:00
  • Polyinstantiating /tmp and /var/tmp directories

    Authored by: Huzaifa Sidhpurwala

    On Linux systems, the /tmp/ and /var/tmp/ locations are world-writable. They provide a common location for temporary files and are protected through the sticky bit, so that users cannot remove files they don't own from the directory, even though the directory itself is world-writable. Several daemons/applications use the /tmp or /var/tmp directories to temporarily store data, log information, or share information between their sub-components. However, due to the shared nature of these directories, several attacks are possible, such as symlink attacks and race conditions on predictable file names.
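
    The sticky bit mentioned above shows up as the trailing t in the directory permissions (illustrative output):

    $ ls -ld /tmp /var/tmp
    drwxrwxrwt. 12 root root 4096 Sep 12 10:00 /tmp
    drwxrwxrwt.  5 root root 4096 Sep 12 09:58 /var/tmp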

    Polyinstantiation of temporary directories is a proactive security measure which reduces the chances of attacks made possible by the world-writable /tmp and /var/tmp directories.

    Setting Up Polyinstantiated Directories

    Configuring polyinstantiated directories is a three-step process (this example assumes that a Red Hat Enterprise Linux 7 system is used):

    First, create the parent directories which will hold the polyinstantiated child directories. Since in this case we want to set up polyinstantiated /tmp and /var/tmp, we create /tmp-inst and /var/tmp/tmp-inst as the parent directories.

    $ sudo mkdir --mode 000 /tmp-inst
    $ sudo mkdir --mode 000 /var/tmp/tmp-inst
    

    Creating these directories with mode 000 ensures that no users can access them directly. Only polyinstantiated instances mounted on /tmp (or /var/tmp) can be used.
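
    If you want to double-check the result, the directories should show no permission bits at all (illustrative output):

    $ ls -ld /tmp-inst /var/tmp/tmp-inst
    d---------. 2 root root 4096 Sep 12 10:05 /tmp-inst
    d---------. 2 root root 4096 Sep 12 10:05 /var/tmp/tmp-inst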

    Second, configure /etc/security/namespace.conf. This file already contains an example configuration which we can use. In our case we will just uncomment the lines corresponding to /tmp and /var/tmp.

     /tmp     /tmp-inst/            level      root,adm 
     /var/tmp /var/tmp/tmp-inst/    level      root,adm
    

    This configuration specifies that /tmp must be polyinstantiated from a subdirectory of /tmp-inst. The third field specifies the method used for polyinstantiation, which in our case is based on the process MLS level. The last field is a comma-separated list of UIDs or usernames for whom polyinstantiation is not performed[1]. More information about the configuration parameters can be found in /usr/share/doc/pam-1.1.8/txts/README.pam_namespace.

    Also ensure that pam_namespace is enabled in the PAM login configuration file. This should already be enabled by default on Red Hat Enterprise Linux systems.

     session    required    pam_namespace.so
    

    Third, set up the correct SELinux context. This is a two-step process. In the first step, we enable the global SELinux boolean for polyinstantiation using the following command:

    $ sudo setsebool polyinstantiation_enabled=1
    

    You can verify it worked by using:

    $ sudo getsebool polyinstantiation_enabled
    polyinstantiation_enabled --> on
    

    In the second step, we set the SELinux context of the polyinstantiated parent directories using the following commands:

    $ sudo chcon --reference=/tmp /tmp-inst
    $ sudo chcon --reference=/var/tmp/ /var/tmp/tmp-inst
    

    The above commands use the SELinux contexts of the /tmp and /var/tmp directories, respectively, as references and copy them to our polyinstantiated parent directories.

    Once the above is done, you can log out and log back in, and each non-root user gets their own polyinstantiated /tmp and /var/tmp directories.
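
    A simple way to confirm the isolation is to compare what two different users see (a sketch; the user and file names are examples):

    [alice@host ~]$ touch /tmp/alice-scratch
    [alice@host ~]$ ls /tmp
    alice-scratch
    [bob@host ~]$ ls /tmp
    [bob@host ~]$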

    PrivateTmp feature of systemd

    Daemons running on systems which use systemd can now use the PrivateTmp feature. This gives each such daemon a private /tmp directory that is not shared with processes outside its namespace; the trade-off is that sharing files through /tmp with processes outside the namespace becomes impossible. The main difference between a polyinstantiated /tmp and PrivateTmp is that the former creates a per-user /tmp directory, while the latter creates a per-daemon (per-process) /tmp. An example unit file snippet is shown below.
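
    Enabling PrivateTmp is a one-line change in a daemon's systemd unit file (foo.service is a hypothetical example):

    [Service]
    ExecStart=/usr/sbin/foo
    PrivateTmp=true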

    Conclusion

    In conclusion, while polyinstantiation will not prevent every type of attack (such as those caused by flaws in the applications running on the system, or by misconfigurations like a weak root password or wrong directory/file permissions), it is a useful addition to your security toolkit that is straightforward to configure. Polyinstantiation can also be used for other directories such as /home. Some time ago, polyinstantiated /tmp by default was proposed for Fedora, but several issues caused the proposal to be denied.


    1. Default values include the root and the adm user. Since root is a superuser anyway, it does not make any sense to polyinstantiate the /tmp directory for the root user. 

    Posted: 2017-08-31T17:28:50+00:00
  • Post Quantum Cryptography

    Authored by: Huzaifa Sidhpurwala

    Traditional computers are binary digital electronic devices based on transistors. They store information encoded in the form of binary digits, each of which can be either 0 or 1. Quantum computers, in contrast, use quantum bits, or qubits, to store information as 0, 1, or a superposition of both at the same time. Quantum mechanical phenomena such as entanglement and tunnelling allow these quantum computers to handle a large number of states at the same time.

    Quantum computers are probabilistic rather than deterministic. Large-scale quantum computers would theoretically be able to solve certain problems much more quickly than any classical computer using even the best currently known algorithms, and may be able to efficiently solve problems which are not practically feasible on classical computers. Practical quantum computers will have serious implications for existing cryptographic primitives.

    Most cryptographic protocols consist of two main parts: the key negotiation algorithm, which is used to establish a secure channel, and the symmetric (bulk encryption) cipher, which does the actual protection of the channel via encryption/decryption between the client and the server.

    The SSL/TLS protocol uses RSA, Diffie-Hellman (DH), or Elliptic Curve Diffie-Hellman (ECDH) primitives for the key exchange algorithm. These primitives are based on hard mathematical problems which are easy to solve when the private key is known, but computationally intensive without it. For example, RSA is based on the fact that, given the product of two large prime numbers, factorizing that product (the public modulus) is computationally intensive. By comparison, a quantum computer could efficiently solve this problem using Shor's algorithm to find the factors of the modulus. This ability could allow a quantum computer to break many of the cryptographic systems in use today. Similarly, DH and ECDH key exchanges could all be broken very easily using sufficiently large quantum computers.
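
    As a toy illustration of the underlying hard problem (textbook numbers, far too small to be secure):

    N = 3233              (public modulus, part of the public key)
    3233 = 61 x 53        (the secret prime factors)

    For real key sizes (a 2048-bit N), recovering the factors classically is infeasible, while Shor's algorithm would do it in polynomial time on a sufficiently large quantum computer.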

    For symmetric ciphers, the story is slightly different. It has been proven that applying Grover's algorithm to break a symmetric (secret key) algorithm with a key length of n bits by brute force requires time equal to roughly 2^(n/2) invocations of the underlying cryptographic algorithm, compared with roughly 2^n in the classical case. This means the strength of a symmetric key is effectively halved: AES-256 would have the same security against an attack using Grover's algorithm that AES-128 has against classical brute-force search. The situation for symmetric ciphers is therefore stronger than the one for public key cryptosystems.
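
    In concrete numbers, applying the bounds above:

    AES-128, classical brute force: ~2^128 operations
    AES-128, Grover's algorithm:    ~2^64 operations  (equivalent to a 64-bit key)
    AES-256, Grover's algorithm:    ~2^128 operations (equivalent to AES-128 today)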

    Hashes are also affected in the same way symmetric algorithms are: Grover's algorithm requires twice the hash size (as compared to the current safe values) for the same cryptographic security.

    Therefore, we need new algorithms which are resistant to quantum computation. Currently, there are five families of proposals under study:

    Lattice-based cryptography

    A lattice is the symmetry group of discrete translational symmetry in n directions. This approach includes cryptographic systems such as Learning with Errors, Ring-Learning with Errors (Ring-LWE), the Ring Learning with Errors Key Exchange and the Ring Learning with Errors Signature. Some of these schemes (like NTRU encryption) have been studied for many years without any known feasible attack vectors and hold great promise. On the other hand, there is no supporting proof of security for NTRU against quantum computers.

    Lattice-based cryptography is interesting because it allows the use of traditionally short key sizes to provide the same level of security. No short-key system has a proof of hardness (long-key versions do). The possibility exists that a quantum algorithm could solve the lattice problem, and the short-key systems may be the most vulnerable.

    Multivariate cryptography

    Multivariate cryptography includes cryptographic systems such as the Rainbow scheme which is based on the difficulty of solving systems of multivariate equations. Multivariate signature schemes like Rainbow could provide the basis for a quantum secure digital signature, though various attempts to build secure multivariate equation encryption schemes have failed.

    Several practical key-size versions have been proposed and broken. The EU has standardized a few; unfortunately, those have all been broken (classically).

    Hash-based cryptography

    Hash-based digital signatures were invented in the late 1970s by Ralph Merkle and have been studied ever since as an interesting alternative to number-theoretic digital signatures like RSA and DSA. The primary drawback of any hash-based public key is a limit on the number of signatures that can be produced using the corresponding set of private keys. This limited interest in these signatures until the desire for cryptography that resists attack by quantum computers revived it. Schemes that allow an unlimited number of signatures (called 'stateless') have now been proposed.

    Hash-based systems are provably secure as long as the underlying hashes are not invertible. The primary issue with hash-based systems is their quite large signature sizes. They also only provide signatures, not key exchange.

    Code-based cryptography

    Code-based cryptography includes cryptographic systems which rely on error-correcting codes. The original McEliece cryptosystem using random Goppa codes has withstood scrutiny for over 30 years. The Post Quantum Cryptography Study Group sponsored by the European Commission has recommended the McEliece public key encryption system as a candidate for long-term protection against attacks by quantum computers. The downside, however, is that code-based cryptography has large key sizes.

    Supersingular elliptic curve isogeny cryptography

    This cryptographic system relies on the properties of supersingular elliptic curves to create a Diffie-Hellman replacement with forward secrecy. Because it works much like existing Diffie-Hellman implementations, it offers forward secrecy, which is viewed as important both to prevent mass surveillance by governments and to protect against the compromise of long-term keys through failures.

    Implementations

    Since this is a dynamic field, with cycles of algorithms being defined and broken, there are no standardized solutions. NIST's crypto competition provides a good chance to develop new primitives to power software for decades to come. It is worth mentioning, however, that Google Chrome for some time implemented the NewHope algorithm, which is part of the Ring Learning-with-Errors (RLWE) family. That experiment has since concluded.

    In conclusion, which post-quantum cryptographic scheme is ultimately accepted will depend on how fast quantum computers become viable and available, at minimum to state agencies if not the general public.

    Posted: 2017-07-26T13:30:00+00:00
  • What is new in OpenSSH 7.4 (in RHEL 7.4)?

    Authored by: Jakub Jelen

    Red Hat Enterprise Linux 7 (RHEL 7) so far has been providing iterations of OpenSSH 6.6.1p1, first released in 2014. While we've kept it updated against vulnerabilities, many things have changed both in security requirements and features provided by the latest OpenSSH. Therefore, OpenSSH has now been updated to the recently released OpenSSH 7.4p1, which brings many new features and security enhancements. For the complete set of changes and bugfixes, please refer to the upstream release notes.

    New features

    This OpenSSH client and server rebase brings many new and useful features. The most prominent ones are described below.

    Host Key Rotation

    Setting up a new system accessed by SSH, or generating new server host keys which satisfy new security requirements, was a complicated task in the past. New keys had to be distributed to all users and swapped in the server configuration atomically, otherwise users would see scary warnings about a possible MITM attack. This update addresses that problem by presenting all the host keys to connecting clients, allowing for a smooth transition between old and new keys.

    This functionality is controlled by the client configuration option UpdateHostKeys. When the value is set to yes or ask, the client will request all the public keys present on the server (with OpenSSH 6.8 and newer) and store them in the known_hosts file. These keys are transferred using the encrypted channel, and the client verifies the signatures from the server; therefore, this operation does not require explicit user verification, assuming the previously used host key is already trusted.

    $ ssh -o UpdateHostKeys=ask localhost
    Learned new hostkey: RSA SHA256:5oa3j5qave0Tz2ScovK084zqtgsGy0PeZfL8qc7NMtk
    Learned new hostkey: ED25519 SHA256:iZG0mDh0JZaPrQ+weGEEyjfN+qL9EDRxrffhqzoAFdo
    Accept updated hostkeys? (yes/no): yes
    Last login: Wed May  3 16:29:38 2017 from ::1
    

    New SHA256 Fingerprint Format

    As can be seen in the previous paragraph, OpenSSH moved away from MD5-based fingerprints to SHA256 ones. The new hash is longer and is therefore represented in base64 format instead of colon-separated hexadecimal pairs. The fingerprint format can be specified using the FingerprintHash configuration option in ssh_config, or with the -E switch of ssh-keygen. In most places both hashes (SHA256 and MD5) are shown by default for backward compatibility:

    $ ssh example.org
    The authenticity of host 'example.org (192.168.0.7)' can't be established.
    ECDSA key fingerprint is SHA256:kmtPlIwHZ4B68g6/eRbDTgC2GD0QnmrjjA0MjOB3/HY.
    ECDSA key fingerprint is MD5:57:01:87:16:7a:a8:24:60:db:9e:05:3f:a0:78:aa:69.
    Are you sure you want to continue connecting (yes/no)? yes
    

    Other tools require an explicit command line option to provide the old fingerprint format:

    $ ssh-keygen -l -f /etc/ssh/ssh_host_ecdsa_key.pub
    256 SHA256:kmtPlIwHZ4B68g6/eRbDTgC2GD0QnmrjjA0MjOB3/HY no comment (ECDSA)
    $ ssh-keygen -l -E md5 -f /etc/ssh/ssh_host_ecdsa_key.pub
    256 MD5:57:01:87:16:7a:a8:24:60:db:9e:05:3f:a0:78:aa:69 no comment (ECDSA)
    

    Simplified connection using intermediate hosts

    There is a new configuration option ProxyJump, and command line switch -J, which significantly simplifies the configuration and connection to servers in private networks behind a jump box.

    In the past, there was only the ProxyCommand option, which covered many use cases though it was complex to use. It required a configuration as follows:

    Host proxy2
      ProxyCommand ssh -W %h:%p -p 2222 user@proxy
    Host example.com
      ProxyCommand ssh -W %h:%p user2@proxy2
    

    The new ProxyJump configuration syntax is simpler, and makes it very easy to write even longer chains of connections. For example:

    Host example.com
      ProxyJump user@proxy:2222,user2@proxy2
    

    The same as above can also be simply specified ad-hoc on the command-line:

    $ ssh -J user@proxy:2222,user2@proxy2 example.com
    

    UNIX domain socket forwarding

    Previously, OpenSSH allowed only TCP ports to be forwarded in SSH channels. Many applications today use UNIX domain sockets instead, so OpenSSH implemented support for them. You can forward a remote socket to a local one, the other way around, or even a UNIX domain socket to a TCP socket, and it is no more complicated than standard TCP forwarding: just replace the hostname:port values with paths to UNIX domain sockets.

    For example, the remote MariaDB socket can be forwarded to the local machine, allowing a secure connection to this database:

    $ ssh -L /tmp/mysql.sock:/var/lib/mysql/mysql.sock -fNT mysql.example.com
    $ mysql -S /tmp/mysql.sock
    

    New default cipher ChaCha20-Poly1305

    Although the ChaCha20-Poly1305 cipher was available in older OpenSSH versions, it is now prioritized over other ciphers since it is considered mature enough, with reasonable performance. It is a cipher of the Authenticated Encryption with Associated Data (AEAD) form which combines the MAC algorithm (Poly1305) with the cipher, similar to AES-GCM. The cipher is automatically used when connecting to RHEL 7 servers[1], but connections to other servers will still use other supported ciphers.

    Improvement in ssh-agent connection

    So far, identity management in ssh-agent (e.g., adding SSH keys to the agent) has been handled by the ssh-add tool, which must be used prior to connecting to a remote server using ssh.

    It can be handy to add and decrypt the required keys on demand while connecting to a remote server. For that, the AddKeysToAgent option in ssh_config will either add all used keys automatically or prompt to add new keys as they are first used. In combination with the -t switch of ssh-agent, which specifies a key's lifetime, it is a simple and secure alternative to storing your keys in ssh-agent indefinitely.

    Furthermore, the ssh-agent connection socket can now be specified in ssh_config using IdentityAgent, which removes quite a lot of the struggle starting the ssh-agent properly and passing the environment variables to existing processes. For example, the following snippet in ~/.bashrc will start the agent, and added keys will have a default lifetime of 10 hours:

    # Socket path for the shared agent (a per-user directory under /tmp)
    SOCK=/tmp/username/ssh-agent.sock
    SOCK_DIR=$(dirname "$SOCK")
    # Create the private socket directory if it is missing
    [ -d "$SOCK_DIR" ] || mkdir -m700 "$SOCK_DIR"
    # Start the agent on that socket with a 10-hour default key lifetime
    # (-S tests for a socket; a socket is not a regular file)
    [ -S "$SOCK" ] || eval $(ssh-agent -a "$SOCK" -t 10h)
    

    The corresponding configuration in ~/.ssh/config will contain the following lines:

    IdentityAgent /tmp/username/ssh-agent.sock
    AddKeysToAgent yes
    

    Security-related changes

    Over the years, there have been many security-related changes upstream. With this update we try to preserve backward compatibility while removing (potentially) broken algorithms and options that were still available in the OpenSSH version previously shipped in Red Hat Enterprise Linux 7.

    Not using SHA-1 for signatures by default

    The original SSH protocol in RFC 4253 only defines RSA authentication signatures using the SHA-1 hash algorithm. As SHA-1 is no longer considered safe, the protocol has been extended in draft-ietf-curdle-rsa-sha2 to allow SHA-2 algorithms along with RSA signatures. This extension is negotiated automatically when connecting to servers running OpenSSH 7.2 and newer. You can verify from the debug log that the new hash algorithm is in use:

    debug2: KEX algorithms: [...],ext-info-c
    [...]
    debug3: receive packet: type 7
    debug1: SSH2_MSG_EXT_INFO received
    debug1: kex_input_ext_info: server-sig-algs=<rsa-sha2-256,rsa-sha2-512>
    [...]
    debug1: Server accepts key: pkalg rsa-sha2-512 blen 279
    

    SSH-1 server removed

    The obsolete SSH-1 server code was removed from OpenSSH. The complete announcement can be found in this knowledgebase article.

    As with previous RHEL 7 releases, connections from clients to existing SSH-1 servers are still possible, and require the use of the -1 switch or the following configuration:

    Host legacy.example.com
      Protocol 2,1
    

    Seccomp filter for privilege separation sandbox

    The OpenSSH server in RHEL is now built with seccomp filter sandbox, which is enabled by default. In previous versions, the rlimit method was used to limit resources available for a network-facing process. The seccomp filter allows only a very limited list of system calls from this process, reducing the impact of a compromise. That sandbox can be turned off in /etc/ssh/sshd_config by setting:

    UsePrivilegeSeparation yes
    

    Removed support for pre-authentication compression

    Historically, compression in network protocols was recommended in order to increase encryption performance and reduce the amount of data available to an adversary.

    Given statistical attacks and the fact that compression in combination with encryption increases code complexity and attack surface, enabling compression with encryption today is mostly seen as a security liability, underlined by issues such as CVE-2016-10012.

    Pre-authentication compression has been disabled in the OpenSSH server for more than 10 years, and in this release the code is completely removed.

    Deprecation of insecure algorithms

    Following up on the Deprecation of Insecure Algorithms in RHEL 6.9, legacy algorithms that potentially pose a more serious threat to deployments are being disabled. In RHEL 7.4, this affects the RC4 ciphers, as well as MD5, RIPE-MD160, and truncated SHA-1 MACs, on both the client and server side. The Blowfish, CAST-128, and 3DES ciphers were removed from the default set of algorithms accepted by the client but are still supported in the server.

    If these algorithms are still needed for interoperability with legacy servers or clients, they can be enabled on a per-host basis as described in the upstream documentation. The following example shows how to enable the 3des-cbc cipher in a client:

    Host legacy.example.org
      Ciphers +3des-cbc
    

    Another example enables hmac-md5 in a server for the legacy.example.org client:

    Match Host legacy.example.org
      MACs +hmac-md5
    

    Conclusion

    The new OpenSSH in RHEL 7.4 comes with many bug fixes and features that might affect your everyday work and that are worth using. Engineering has worked very hard to maintain backward compatibility with previous versions while improving the security defaults at the same time. If you feel any regressions have been missed, please contact Red Hat Support.


    1. Not when the system is in FIPS140-2 certified mode. 

    Posted: 2017-07-12T00:00:00+00:00
  • Enhancing the security of the OS with cryptography changes in Red Hat Enterprise Linux 7.4

    Authored by: Nikos Mavrogian...

    Today we see more and more attacks on operating systems taking advantage of various technologies, including obsolete cryptographic algorithms and protocols. As such, it is important for an operating system not only to carefully evaluate the new technologies that get introduced, but to also provide a process for phasing out technologies that are no longer relevant. Technologies with no practical use today increase the attack surface of the operating system and more specifically, in the cryptography field, introduce risks such as untrustworthy communication channels, when algorithms and protocols are being used after their useful lifetime.

    That risk is not confined to the users of the obsolete technologies; as DROWN and other cross-protocol attacks have demonstrated, it is sufficient for a server to merely enable a legacy protocol in parallel with the latest one for all of its users to be vulnerable. Furthermore, the recent cryptographic advances against the SHA-1 algorithm used for digital signatures demonstrate the need for algorithm agility in modern infrastructures. SHA-1 was an integral part of the Internet and of private Public Key Infrastructures, and despite that, we must envision a not-so-distant future with systems that no longer rely on SHA-1 for any cryptographic purpose.

    To address the challenges above, with the release of Red Hat Enterprise Linux (RHEL) 7.4 beta we are introducing several cryptographic updates to RHEL, along with a multitude of new features and enhancements. We continue and extend the protocol deprecation effort started in RHEL 6.9, improve access to the kernel-provided PRNG with the getrandom system call, and ensure that HTTP/2-supporting technologies, such as ALPN in TLS and DTLS 1.2, are supported universally in our operating system. Python applications are also made secure by default by enabling certificate verification in TLS sessions. We are also proud to bring the OpenSC smart card drivers into RHEL 7.4, incorporating our in-house developed drivers and merging our work with the OpenSC project community efforts. At the same time, the introduced crypto changes ensure that RHEL 7.4 meets the rigorous security certification requirements for FIPS140-2 cryptographic modules.

    Removal of SSH 1.0, SSL 2.0 protocols and EXPORT cipher suites

    For reasons underlined in our deprecation of insecure algorithms blog post, that is, to protect applications running in RHEL from severe attacks utilizing obsolete cryptographic protocols and algorithms, the SSH 1.0 and SSL 2.0 protocols, as well as the cipher suites marked as ‘export’, will no longer be included in our supported cryptographic components. The SSH 1.0 protocol is removed from the server side of the OpenSSH component, while support for it remains on the client side (already disabled by default) to allow communications with legacy equipment. The SSL 2.0 protocol as well as the TLS ‘export’ cipher suites are removed unconditionally from the GnuTLS, NSS, and OpenSSL crypto components. Furthermore, the Diffie-Hellman key exchange is restricted to values considered acceptable for today’s cryptographic protocols.

    Note also that, through our review of accepted legacy hashes in the operating system, we discovered that the OpenSSL component enables obsolete hashes for digital signatures, such as SHA-0, MD5, and MD4. Since these hashes have no practical use today, and to reduce the risk of relying on legacy algorithms, we have decided to deviate from the upstream OpenSSL settings and disable these hashes by default for all OpenSSL applications. That change is reversible (see the release notes). Note that this issue was discussed with the upstream OpenSSL developers, and although the behavior is known to them, it is kept for backwards compatibility.

    Below is a summary of the deprecated protocols and algorithms.

    | Introduced change | Description | Reversible |
    |---|---|---|
    | SSL 2.0 protocol | The shipped TLS libraries will no longer include support for the SSL 2.0 protocol. | No. |
    | Export TLS ciphersuites | The shipped TLS libraries will no longer include support for Export TLS ciphersuites. | No. |
    | SSH 1.0 protocol | The shipped OpenSSH server will no longer include support for the SSH 1.0 protocol. | No. |
    | TLS Diffie-Hellman key exchange | Only parameters larger than 1024 bits will be accepted by default. | Conditionally (information will be provided in the release notes). |
    | Insecure hashes for digital signatures in OpenSSL | The MD5, MD4, and SHA-0 algorithms will not be enabled by default for TLS sessions or certificate verification in OpenSSL. | Yes. Administrators can revert this setting system-wide (information will be provided in the release notes). |
    | RC4 algorithm | The algorithm will no longer be enabled by default for OpenSSH sessions. | Yes. Administrators can revert this setting system-wide (information will be provided in the release notes). |

    Phasing out of SHA-1

    SHA-1 was published in 1993 and is still in use today in a variety of applications, including but not limited to the web PKI. In particular, its primary use case is digital signatures on data, certificates, and OCSP responses. However, there are several known weaknesses in this hash, and a collision attack was recently demonstrated; combined with the experience of the MD5 hash attacks, this is an indication that a forged-certificate attack may not be far in the future. For that reason, we have ensured that all our cryptographic tools[1] which deal with digital signatures no longer use SHA-1 as the default hash function, but instead switch to SHA2-256, which provides maximum compatibility with older clients.

    In addition, to further strengthen the resistance of RHEL to such cryptographic attacks, we have lifted the OpenSSH server and client limitation to SHA-1 for digital signatures, enabling support for SHA2-256 digital signatures.

    We do not yet plan to disable SHA-1 system-wide in RHEL 7, as a significant amount of infrastructure still depends on it and disabling it would severely disrupt operations. Nonetheless, we recommend that software engineers working on RHEL no longer rely on SHA-1 for cryptographic purposes, and that system administrators verify that they no longer use certificates, OCSP stapled responses, or any other cryptographic structure with SHA-1 based signatures.

    To verify whether a certificate, OCSP response, or CRL utilizes the SHA-1 hash algorithm for its signature, you may use the following commands.

    To check whether a certificate in FILENAME uses SHA-1:

    $ openssl x509 -in FILENAME -text -noout | grep -i 'Signature.*sha1'

    To check whether an OCSP response in FILENAME uses SHA-1:

    $ openssl ocsp -resp_text -respin FILENAME -noverify | grep -i 'Signature.*sha1'

    To check whether a CRL in FILENAME uses SHA-1:

    $ openssl crl -in FILENAME -text -noout | grep -i 'Signature.*sha1'

    HTTP/2 related updates

    Modern internet applications strive for low latency and good responsiveness, and the HTTP/2 protocol is a major driver of that effort. With the introduction of OpenSSL 1.0.2 into RHEL 7.4, we complete support for Application Layer Protocol Negotiation (ALPN) across our base crypto stack. This TLS protocol extension enables applications to reduce TLS handshake round-trips by negotiating the application protocol and its version during the handshake. In practice, applications use it to signal their HTTP/2 support, eliminating the need for additional round-trips after the TLS connection is established. That, combined with other kernel features such as TCP Fast Open, can further fine-tune TLS session establishment.

    DTLS 1.2 for all applications

    Additionally, the introduction of OpenSSL 1.0.2 brings support for the Datagram TLS 1.2 (DTLS) protocol in OpenSSL applications, completing that support throughout the TLS stacks on the operating system. That support ensures that applications using the DTLS protocol can take advantage of the secure authenticated encryption (AEAD) cipher suites, eliminating the reliance on CBC cipher suites, which are known to be problematic for DTLS.

    Crypto-related Python-language changes

    The upstream version of Python 2.7.9 enabled SSL/TLS certificate verification in Python’s standard library modules that provide HTTP client functionality, such as urllib, httplib, or xmlrpclib. Prior to this change, no certificate verification was performed by default, making Python applications vulnerable to certain classes of attacks in SSL and TLS connections. This was a well-known issue for a long time, and several applications worked around it by implementing their own certificate checks. Despite these workarounds, to ensure that all Python applications are secure by default and follow a consistent certificate validation process, Red Hat Enterprise Linux 7.4 incorporates the upstream change and enables certificate verification by default in TLS sessions for all applications.
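
    The new default behavior can be observed from a shell (a hedged illustration: expired.badssl.com is a public test site serving a deliberately expired certificate, and the traceback is abbreviated):

    $ python -c "import urllib2; urllib2.urlopen('https://expired.badssl.com/')"
    ...
    urllib2.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed>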

    Crypto-related kernel changes

    Despite the fact that the Linux kernel was one of the first to provide interfaces to access cryptographically secure pseudo-random data via the /dev/*random interfaces, these interfaces show their age today. /dev/random is a system device which, when read, provides random data to the extent assessed by an internal estimator. That is, the device will block when not enough random events, related to internal kernel entropy sources such as interrupts, the CPU, and others, have been accumulated. Because of that, the /dev/random interface may seem initially tempting to use. However, in practice /dev/random cannot be relied on: it requires a large amount of random events to be accumulated to provide a few bytes of random data, and any use of it during boot time could risk blocking the system indefinitely.

    On the other hand, the device /dev/urandom provides access to the same random generator, but will never block nor apply any restriction on the amount of output it provides. That is expected, given that modern random generators, when sufficiently seeded, can provide an enormous amount of output before being considered broken in an information-theoretic sense. Unfortunately, /dev/urandom suffers from a design flaw: when used early in the boot process, prior to the initialization of the kernel random number generator, it will still output data. How random that data is, is system-specific, but on modern platforms which provide a CPU-based random generator this is less of an issue.

    To eliminate such risks, RHEL 7.4 introduces the getrandom() system call, which combines the best of the existing interfaces. It provides non-blocking access to the kernel CPRNG, yet it blocks prior to the kernel random generator being initialized. Furthermore, it requires no file descriptor to be carried by applications and libraries. The new system call is available through the syscall(2) interface.
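
    As a quick illustration that no wrapper is needed, the call can even be made from a shell one-liner (a sketch assuming x86_64, where the getrandom system call number is 318; the number differs on other architectures and the output varies):

    $ perl -e 'syscall(318, $buf = " " x 16, 16, 0); print unpack("H*", $buf), "\n"'
    8f3ad1c0b6c29e114a02778c5a1b44d7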

    Smart card related changes

    For a long time, RHEL provided the CoolKey smart-card driver for applications to access the PKCS#11 API on smart cards. This driver was limited to the smart cards that we focus on for our operating system, that is, the PIV, CAC, PKCS#15-compliant, and CoolKey-applet cards. However, over the years the community-based OpenSC project, which provides an alternative driver, has managed to support the majority of available smart cards, offers a solid code base supporting the industry-standard PKCS#11 API, and has been included in our Fedora distribution for many years. Most importantly, the project also serves as a healthy example of a community-driven project, with collaborating engineers from diverse backgrounds and technologies.

    With that in mind, we have proceeded to merge our projects, introducing missing drivers and features to OpenSC and releasing our extensive test suite for smart card support. The outcome of this effort is made available in RHEL 7.4, where the OpenSC component will serve in parallel with the CoolKey component. The latter will remain supported for the lifetime of RHEL 7; however, new hardware enablement will only be available in the OpenSC component.

    Future changes

    In the lifetime of Red Hat Enterprise Linux 7 we plan to deprecate and disable by default the SSL 3.0 protocol and the RC4 cipher in TLS and Kerberos system-wide. Note that we do not plan to remove support for the above algorithms, but to disable them by default for applications that do not explicitly request them.

    | Introduced change | Description | Reversible |
    |---|---|---|
    | SSL 3.0 protocol | The shipped TLS libraries will no longer enable support for the SSL 3.0 protocol. This protocol has been superseded by TLS 1.0 for almost two decades, and there is a multitude of attacks against it. | By explicitly enabling the protocol in applications that require it. |
    | RC4 | The shipped TLS libraries will no longer enable support for RC4-based ciphersuites by default. Cryptographic advances have shown that accidental usage of that algorithm puts applications at risk with regard to data secrecy. | By explicitly enabling the ciphersuites in applications that require them. |

    Concluding remarks

    All of the above changes ensure that Red Hat Enterprise Linux 7 remains a leader in the adoption of new technologies with security built into the OS. Red Hat Enterprise Linux not only carefully evaluates and incorporates new technologies; technologies that are no longer relevant and pose a security risk are also regularly phased out. Specifically, HTTP/2-supporting protocols are made available in all of our cryptographic back-ends. We also align with the community on smart card development, not only by bringing our improvements to the OpenSC project but by introducing it as a fully supported component in Red Hat Enterprise Linux 7. Furthermore, with the introduction of Datagram TLS 1.2 in OpenSSL, and the OpenSSH updates supporting cryptographic hashes other than SHA-1, we ensure that all Red Hat Enterprise Linux 7 applications utilize cryptographic algorithms that can resist today’s threats.

    On the other hand, we reduce the attack surface of the operating system by removing support for SSH 1.0, SSL 2.0, and the TLS ‘export’ cipher suites. That move reduces the risk of successful future attacks, at the cost of removing protocols which are today considered either too risky to use or plainly insecure.

    The removal of the previously mentioned protocols was an exceptional move, because we are convinced that these are primarily used to attack misconfigured systems rather than for real-world use cases. Regarding any future changes, for example the disablement of SSL 3.0 or RC4, we would like to assure you that they will be done in a way that allows systems to revert to the previous behavior when required by local policy.

    All of these changes ensure Red Hat Enterprise Linux keeps pace with the reality of the security landscape around us, and improves its strong security foundation.


    1. The above applies to software from the GnuTLS, NSS, and OpenSSL crypto components. 

    Posted: 2017-06-16T00:00:00+00:00
  • Secure XML Processing with JAXP on EAP 7

    Authored by: Jason Shepherd

    The Java Development Kit (JDK) version 8 provides the Java API for XML Processing (JAXP). Developers using JAXP on Red Hat JBoss Enterprise Application Platform (EAP) 7 need to be aware that Red Hat JBoss EAP 7 ships its own implementation, with some differences from JDK 8 that are covered in this article.

    Background

    There have been three issues raised in the month of May 2017 relating to JAXP on Red Hat JBoss EAP 7: CVE-2017-7464, CVE-2017-7465, and CVE-2017-7503. All of the issues are XML External Entity (XXE) vulnerabilities, which have affected Java since 2002. XXE is a type of attack that affects weakly configured XML parsers. A successful attack occurs when XML input contains external entities. Those external entities can do things such as access local network resources, or read local files.
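
    A minimal illustration of such a payload (a generic XXE example, not tied to any specific CVE): when a weakly configured parser expands the external entity, the contents of a local file end up in the parsed document.

    <?xml version="1.0"?>
    <!DOCTYPE foo [ <!ENTITY xxe SYSTEM "file:///etc/passwd"> ]>
    <foo>&xxe;</foo>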

    Red Hat maintains its own fork of Xalan, used for XML processing. This was done to avoid certain limitations of the upstream code in a multi-classloader, modular environment such as Red Hat JBoss EAP. The JDK maintains its own fork as well, and has made some security improvements which have not been contributed back to the upstream Xalan project. For example, a Billion Laughs-style XXE attack is thwarted on OpenJDK 1.8.0_73 by changes that result in an error message such as:

    JAXP00010001: The parser has encountered more than "64000" entity expansions in this document; this is the limit imposed by the JDK.
    

    It's important to remember that you only need to be concerned about vulnerabilities affecting XML processing if you don't trust the XML content you are parsing. One way to add trust to XML content is by adding authentication to network-accessible endpoints which are accepting XML content.

    The current situation

    All of the vulnerabilities relate to JAXP and not to the Web Services specifications JAX-WS and JAX-RS. JAX-WS and JAX-RS are implemented on Red Hat JBoss EAP by Apache CXF and Resteasy. One of the main goals for developers on Red Hat JBoss EAP is to build web services, which is why the distinction is made. If you're doing web service development on Red Hat JBoss EAP 7, these issues will mostly not apply. They affect the use of JAXP directly for XML processing.

    There is one place where web services overlap with JAXP, and user code could potentially be affected by one of these vulnerabilities. Section 4.2.4 of the JAX-RS 2.0 specification mandates that EAP 7 must provide MessageBodyReader and MessageBodyWriter implementations for javax.xml.transform.Source. This is used for processing of JAX-RS requests with application/*+xml media types. A common way of unmarshaling a javax.xml.transform.Source is to use a TransformerFactory as demonstrated here:

    TransformerFactory factory = TransformerFactory.newInstance();
    Transformer transformer = factory.newTransformer();
    StreamResult xmlOutput = new StreamResult(new StringWriter());
    transformer.transform(mySource, xmlOutput);
    String resultXmlStr = xmlOutput.getWriter().toString();
    

    Where mySource is of type javax.xml.transform.Source.

    In this situation, there is one outstanding issue, CVE-2017-7503, on EAP 7.0.5 which is yet to be resolved. While this is one place where Red Hat JBoss EAP encourages the use of JAXP, the JAX-RS implementation (Resteasy) is not directly affected. Resteasy is not directly affected because it doesn't do any processing of a javax.xml.transform.Source as demonstrated above. It only provides a javax.xml.transform.Source for the developer to transform in their web application code.

    CVE-2017-7503 affects processing of XML content from an untrusted source using a javax.xml.transform.TransformerFactory. The only safe way to process untrusted XML content with a TransformerFactory is to use the StAX API. StAX is a safe implementation on EAP 7.0.x because the XML content is not read in its entirety in order to parse it. As a developer using StAX, you decide which XML stream events you want to react to, so XXE control constructs won't be processed automatically by the parser.

    For CVE-2017-7464 and CVE-2017-7465 there are mitigations from the OWASP XXE Prevention Cheat Sheet. The specific mitigations for these issues are as follows:

    CVE-2017-7464

    SAXParserFactory parserFactory = SAXParserFactory.newInstance();
    parserFactory.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, true);
    parserFactory.setFeature("http://apache.org/xml/features/disallow-doctype-decl",
                 true);
    

    CVE-2017-7465

    TransformerFactory factory = TransformerFactory.newInstance();
    factory.setFeature(javax.xml.XMLConstants.FEATURE_SECURE_PROCESSING, true);
    

    How could it be fixed?

    On Red Hat JBoss EAP 7.0.x, CVE-2017-7503 is yet to be resolved. In contrast, CVE-2017-7464 and CVE-2017-7465 already have mitigations in place, so a fix may not be provided in the 7.0.x stream.

    Ideally, the JAXP implementations would be removed entirely from Red Hat JBoss EAP 7, which would then rely solely on those from the JDK itself. Removing Xalan from Red Hat JBoss EAP is still being investigated.

    A compromise might be to maintain a fork of the JAXP libraries and enable the secure options recommended by OWASP by default in a future major release of Red Hat JBoss EAP 7.

    Posted: 2017-06-01T13:30:00+00:00
  • The RHSA notifications you want, right in your Inbox

    Authored by: Fábio Olivé Leite

    Red Hat Product Security takes pride in the quality and timeliness of its Security Advisories and all the accompanying information we publish for every erratum and vulnerability that we track and fix in our products. There are many ways in which customers and the general public can get notified about those advisories and errata, and one of the most commonly used is the rhsa-announce mailing list. This list has been around for nearly 10 years, and we have recently taken steps to increase its usefulness for a wider variety of subscribers.

    E Pluribus Mail Unum

    The rhsa-announce mailing list was created in November of 2007 with the purpose of delivering information about all security advisories published by Red Hat regardless of the product family they affected. Before rhsa-announce there were two mailing lists, enterprise-watch-list and jboss-watch-list, with the latter having been created earlier that year (March 2007) and the former going way back to March 2003. That is 14 years of security advisories archived publicly in a simple text format that makes it easy for search engines to index them.

    Over time our product portfolio expanded and new mailing lists were created, maintaining the usual behavior: rhsa-announce received all advisory email, while the lists specific to a “product family” received only those that affected their particular products. This led to the creation of rhev-watch-list and storage-watch-list (does anyone remember stronghold-watch-list?), which have been a little less popular (hundreds of subscribers) than the original lists (thousands of subscribers).

    As Red Hat’s product portfolio continued to expand, this raised a few questions. Are we just going to continue creating new lists? How do we expect interested people to find out about those lists? Is there a better way forward? Well, it turns out there is: mailing lists managed by MailMan can support topics, which is a way to create “sub-lists” inside a list by matching keywords in certain headers. Subscribers can update the topics they subscribe to at any time without having to manage multiple subscriptions and perhaps multiple filters in their email clients. We are therefore making rhsa-announce the one and only mailing list for all Red Hat Security Advisories, with topic support for selecting specific security impacts of the published errata or the specific product families one is interested in, starting on June 1st.

    Call to Action: if you currently subscribe to any security advisory mailing list at Red Hat other than rhsa-announce, please unsubscribe from them and select the topics you are interested in at rhsa-announce instead.

    Subscribing to Specific Topics in RHSA-Announce

    Subscribing to rhsa-announce is very easy: just visit the mailing list information page and follow the instructions in the “Subscribing to RHSA-announce” section; alternatively, send an email to rhsa-announce-request@redhat.com with a subject of “subscribe”. Either method will send you a confirmation email with instructions to confirm your subscription.

    Once you are subscribed, go to the subscription options page, log in, and scroll down to the list of topics to select those you are interested in. You will need your rhsa-announce subscription password to access that page (the one you entered in the WebUI, or that MailMan generated for you if you subscribed via email). After selecting your topics of interest, click “Submit My Changes” at the bottom of the page to save your options.

    The topics for Critical, Important, Moderate, and Low severity issues match every product; for example, selecting “Critical Severity Issues” delivers email about advisories of Critical impact affecting any Red Hat product. The topics for specific product lines match every product in that group, so “Middleware products issues” delivers all advisory emails, of any severity, concerning our Middleware products.

    Unfortunately, one cannot subscribe to intersections of topics; it is not possible to subscribe only to “Critical advisories for OpenStack” or similar. The topic subscription is a union of all topics selected. Another detail worth mentioning is that MailMan topic filtering is sometimes “too inclusive”, and some advisories may match more than the expected topics, depending on the contents of some email headers. Topics never miss a product advisory email, though; they just may sometimes include messages for a product outside the expected group.

    If one wishes to unsubscribe from rhsa-announce (or any other list hosted by Red Hat), that can also be done from the list options page or by sending an email to rhsa-announce-request@redhat.com with a subject of “unsubscribe”. For other lists the operation is similar: just append “-request@redhat.com” to the list name to obtain the address to send the unsubscribe command to, such as enterprise-watch-list-request@redhat.com.

    Finally, it should be noted that rhsa-announce is not meant for general discussion of advisories; it is a read-only mailing list. Questions regarding advisories can be sent to Red Hat Support or to Red Hat Product Security.

    Other Sources of Errata Notifications and Vulnerability Information

    While we’re on the subject of receiving notifications about Red Hat Security Advisories and our security errata, it should be mentioned that email is not the only way to get that information. The Product Security section of the Red Hat Customer Portal contains a wealth of information about our advisories, individual CVEs, blog posts, and more. Certain vulnerabilities need special attention, sometimes because they are truly severe and sometimes because we feel the need to dispel some myth involving scary logos and catchy names; those get a more detailed article in the Vulnerability Responses section of the Portal. Logged-in users with active Red Hat subscriptions can also proceed to the notifications area of the Customer Portal and select from a range of errata notification options to receive more personalized notifications.

    For those interested in automation, in addition to the usual machine-consumable CVRF and OVAL data we have been providing for years, we also provide a Security Data API that everyone is welcome to make reasonable use of, to easily query for many kinds of security information regarding our products.

    As can be seen, there are many options for consuming our published security advisory information. This continues our tradition of providing objective and timely security advice as transparently and openly as possible, something Red Hat Product Security is very proud of.

    Posted: 2017-05-17T13:30:00+00:00
  • Security Scoring and Grading for Container Images

    Authored by: Vincent Danen

    We have just rolled out an update to the interface of the Red Hat Container Catalog that attempts to answer the question of whether or not a particular container image available in the Container Catalog can be considered secure. In the interest of transparency, and of providing as much information as possible so customers can deploy the right container image for their needs, we are excited about these new capabilities in the Red Hat Container Catalog and want to give a little insight into our rationale.

    Vulnerability scoring and rating is nothing new in the security ecosystem. We rate the severity of vulnerabilities using our four-point security rating scale, where an individual flaw can be rated as Critical, Important, Moderate, or Low. Released security errata (RHSA) are also assigned an impact rating based on the highest impact rating of the flaws being fixed in the erratum. These ratings are based solely on the merit of the flaw itself, without any concept of possible impact over time (for example, a Moderate-rated flaw remains Moderate forever, because the flaw itself is being rated). While exploitability comes into account when the flaw rating is assigned (in terms of whether it is wormable, remotely exploitable, locally exploitable, etc.), we don’t look at the risk of exploitation over time when assessing it.

    Container images, being a static selection of software for a specific purpose, subtly change the way flaws should be rated. The security of a container image is not based on the concept of an isolated system on the host. Users should not expect a container image to remain secure indefinitely. The older a container image, the higher the probability of it containing a security exposure.

    Curtis Yanko from Sonatype, during a talk at Red Hat Summit 2016 entitled "Secure Your Enterprise Software Supply Chain with Containers", was quoted as saying that "code ages like milk, not like wine." When we were looking at how to rate or score container images, we felt that this explained it quite well: containers age like milk, not like wine. Old, stale containers are much more likely to contain security risks, while new, fresh containers are less likely. The Red Hat Container Catalog (or more specifically the “Container Health Index”) rates container images based on known security flaws and the length of time the software in the container image has been exposed to those flaws. Because container trust is temporal, we chose to grade container images with a simple time-based rating system rather than a purely vulnerability-based one.

    The challenge we faced was making sure the grade was relevant and easy to understand at a glance. It had to account for known security exposures and their age, but it could not account for each user’s actual risk, as we do not know what individual users may be doing with any given container (we cannot know whether it is performing some critical action or running a critical service). Each user needs to determine risk based on the Container Health Index, their use case, and any other information available to them.

    So, following the idea that fresh containers are better than old ones, we use a grading system of A through F to describe the freshness of a container image and its associated security exposures. Specifically, the grade reflects the age and the criticality (Critical or Important) of the oldest unapplied flaw that is applicable to the container image.

    As container images age, the scores get worse:

    • Grade A: This image does not have any unapplied Critical or Important security errata
    • Grade B: This image is affected by Critical (no older than 7 days) or Important (no older than 30 days) security errata
    • Grade C: This image is affected by Critical (no older than 30 days) or Important (no older than 90 days) security errata
    • Grade D: This image is affected by Critical (no older than 90 days) or Important (no older than 12 months) security errata
    • Grade E: This image is affected by Critical or Important security errata no older than 12 months
    • Grade F: This image is affected by Critical or Important security errata older than 12 months
    • Unknown: This image is missing metadata required to calculate a grade and cannot be scanned

    For example, a Grade A container image contains no known unapplied errata that fix Critical or Important flaws, although it may be missing errata that fix Moderate or Low flaws. Ideally, you will always be running Grade A containers.

    Container images of Grade B may be missing errata that fix flaws of Critical or Important rating; however, no missing Critical fix is older than 7 days and no missing Important fix is older than 30 days. The number of days is counted from the date the erratum fixing the flaw was first released.
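    To make the thresholds concrete, here is a minimal Python sketch of the time-based rule described above. It only illustrates the published thresholds and is not the actual implementation behind the Container Health Index; the function name and its inputs (the age in days of the oldest unapplied Critical and Important errata) are hypothetical.

        def health_grade(critical_age_days, important_age_days):
            """Return a letter grade from the age (in days) of the oldest
            unapplied Critical and Important errata; None means no unapplied
            erratum of that impact exists."""
            crit = critical_age_days if critical_age_days is not None else 0
            imp = important_age_days if important_age_days is not None else 0
            if critical_age_days is None and important_age_days is None:
                return "A"   # no unapplied Critical or Important errata
            if crit <= 7 and imp <= 30:
                return "B"
            if crit <= 30 and imp <= 90:
                return "C"
            if crit <= 90 and imp <= 365:
                return "D"
            if crit <= 365 and imp <= 365:
                return "E"
            return "F"       # some Critical/Important fix is over a year old

        # An image missing a 10-day-old Critical fix and a 20-day-old
        # Important fix fails the Grade B thresholds but meets Grade C:
        print(health_grade(10, 20))   # -> "C"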

    Currently, the information required to generate this grade is based on Red Hat errata published for Red Hat products available in the RPM packaging format. Containers that include other software layered on top of a Red Hat RPM-based base layer are not included in the grade. In this case, you will need to weigh the possible impact of the ungraded components together with the underlying container image’s grade and the age of the container itself to determine what is acceptable for you.

    While we would love to provide similar ratings for all container images, we currently rate only Red Hat RPM-based container images because of the data available for analysis. We hope to continue building on the breadth of analytics available in the Red Hat Container Catalog.

    Posted: 2017-04-25T16:26:09+00:00
  • Join us at Red Hat Summit 2017

    Authored by: Christopher Robinson

    As you’ve probably heard, this year’s Red Hat Summit is in Boston May 2-4. Product Security is looking forward to taking over multiple sessions and activities over the course of those 3 days, and we wanted to give you a sneak peek of what we have planned.

    Sessions

    There will be A LOT of Product Security sessions, including:

    Tuesday, May 2

    Time | Session | Room
    10:15-11:00AM | L102598 - Practical OpenSCAP—Security standard compliance and reporting | Room 252B
    10:15-11:00AM | S102106 - Red Hat security roadmap | Room 156AB
    10:15-11:00AM | S104999 - A greybeard's worst nightmare—How Kubernetes and Docker containers are re-defining the Linux OS | Room 104AB
    11:30AM-12:15PM | B104934 - The age of uncertainty: how can we make security better | Room 158
    11:30AM-12:15PM | S105019 - Red Hat container technology strategy | Room 153A
    12:45-1:15PM | Security + Red Hat Insights | Expert Exchange at Participation Square, Partner Pavilion
    3:30-4:15PM | P102235 - The Furry and the Sound: A mock disaster security vulnerability fable | Room 156AB
    3:30-4:15PM | S102603 - OpenShift Roadmap: What's New & What's Next! | Room 153B
    4:30-5:15PM | B103109 - How to survive the Internet of Things: Security | Room 155
    4:30-5:15PM | S104047 - The roadmap for a security-enhanced Red Hat OpenStack Platform | Room 151A

    Wednesday, May 3

    Time | Session | Room
    10:15-11:00AM | S105084 - Security-Enhanced Linux for mere mortals | Room 104AB
    11:30AM-12:15PM | S103850 - Ten layers of container security | Room 156C
    3:30-4:15PM | LT122001 - Lightning Talks: Infrastructure security | Room 101
    3:30-5:30PM | L99901 - A practical introduction to container security | Room 251
    3:40-4:00PM | Container Catalog Quick Talk | Expert Exchange at Participation Square, Partner Pavilion
    4:30-5:15PM | LT122006 - Red Hat Satellite lightning talks | Room 101
    4:30-5:15PM | S102068 - DevSecOps the open source way | Room 104C
    4:30-5:15PM | S104105 - Securing your container supply chain | Room 153B
    4:30-5:15PM | S104897 - Easily secure your front- and back-end applications with KeyCloak | Room 153A

    Thursday, May 4

    Time | Session | Room
    9:50-10:10AM | Risk Report Quick Talk | Expert Exchange at Participation Square, Partner Pavilion
    10:15-11:00AM | S111009 - Partner session: Microsoft | Room 105
    10:15-11:00AM | S103174 - Automating security compliance for physical, virtual, cloud, and container environments with Red Hat CloudForms, Red Hat Satellite, and Ansible Tower by Red Hat | Room 157C
    10:15AM-12:15PM | L100049 - Practical SELinux: Writing custom application policy | Room 252B
    11:30AM-12:15PM | S102840 - The security of cryptography | Room 156C
    11:30AM-12:15PM | B103948 - DirtyCow: A game changer? | Room 155
    3:30-4:15PM | B103901 - Perfect Security: A dangerous myth? System security for open source | Room 155
    3:30-5:30PM | L105190 - Proactive security compliance automation with CloudForms, Satellite, OpenSCAP, Insights, and Ansible Tower | Room 254A
    4:30-5:15PM | S104110 - An overview and roadmap of Red Hat Development Suite | Room 102A

    Games!

    We are pleased to announce that we will be hosting a series of games in Participation Square at Summit, covering all things security, including recent vulnerabilities, product enhancements, and more. Come test your knowledge and win some awesome prizes in the process!

    Our games will be live at the following times, so mark your calendars!

    Tuesday, May 2

    Time | Game | Room
    11:10-11:50AM | Product Security Game Show | Expert Exchange at Participation Square, Partner Pavilion
    1:05-1:45PM | Product Security Game Show | Expert Exchange at Participation Square, Partner Pavilion
    3:00-3:30PM | Flawed & Branded Card Game | Expert Exchange at Participation Square, Partner Pavilion
    5:00-5:30PM | Product Security Game Show | Expert Exchange at Participation Square, Partner Pavilion
    6:30-7:00PM | Flawed & Branded Card Game | Expert Exchange at Participation Square, Partner Pavilion

    Wednesday, May 3

    Time | Game | Room
    1:00-1:45PM | Product Security Game Show | Expert Exchange at Participation Square, Partner Pavilion
    5:00-5:45PM | Product Security Game Show | Expert Exchange at Participation Square, Partner Pavilion
    5:50-6:20PM | Flawed & Branded Card Game | Expert Exchange at Participation Square, Partner Pavilion

    Thursday, May 4

    Time | Game | Room
    11:35AM-12:05PM | Flawed & Branded Card Game | Expert Exchange at Participation Square, Partner Pavilion
    12:10-1:00PM | Product Security Game Show | Expert Exchange at Participation Square, Partner Pavilion

    We are looking forward to connecting with you in Boston! Hope to see you there.

    Posted: 2017-04-19T13:30:00+00:00
  • Determining your risk

    Authored by: Stephen Herr

    Red Hat continues to be a leader in transparency regarding security problems discovered in our software and the steps we take to fix them. We publish data about vulnerabilities on our security metrics page and recently launched an API service that allows easier (and searchable) access to the same data. This data is important for administrators to understand which known security problems exist and to decide what to do about them.

    Pitfalls of comparing version numbers

    Comparing version numbers against Common Vulnerabilities and Exposures (CVE) advisories is a common mistake that people, and 3rd party security scanners, make when trying to determine whether systems are vulnerable. To understand the real risk a flaw poses to a system, you must know what the CVE affects, how the vulnerability was fixed, what you have installed, and where it came from; only then can you properly compare things. The data available from our website can get you started, but an understanding of your own systems is needed to complete the picture.

    Suppose someone installs an Apache httpd RPM from a non-Red Hat source and compares its version with data from one of Red Hat’s CVE pages to see if it is affected. Suppose the version of the installed RPM is lower than the one Red Hat released to fix the CVE. What does that tell you about the vulnerability? Absolutely nothing. The 3rd party httpd RPM may contain an older build of httpd that is unaffected by the CVE (perhaps the flaw was in code introduced at a later date), an older build that is affected, a newer build (there’s no guarantee that the 3rd party used the same versioning scheme as either Apache or Red Hat) that is still affected, or a newer build that is not affected. Or it may contain an arbitrarily different patch level of httpd, including some patches that Red Hat’s build does not have and excluding others that Red Hat does include. Simply put, no useful information can be gathered from such a simple comparison.

    What if a server has only Red Hat software installed? Is there something we can say then? Perhaps, but it’s not as simple as most people assume.

    First, just because the newest version of a package in a given channel is affected by a CVE (and we therefore release an update to fix it) does not mean that older versions of that package are necessarily affected. Going back and testing all previous versions for the presence of the vulnerability is not something that is typically done unless there is a good reason. So if you are running an older version and a new package is released that fixes a CVE, that is no guarantee that you are vulnerable to that CVE. In addition, some CVEs require a combination of packages before the system is vulnerable, e.g. both kernel-1234 and openssh-5678. Security scanners usually fail to identify such cases and can generate false-positive alerts if only one of the packages is installed or has a lower version.

    Second, version comparison is completely untenable when you’re comparing packages from different channels or repositories. A simple example would be comparing a kernel from RHEL 6 with one from RHEL 7. Suppose the kernel installed on your RHEL 6 machines is kernel-2.6.32-264.el6, and Red Hat releases kernel-3.10.0-513.el7 to fix a RHEL 7 CVE. A naive version comparison would lead you to believe that the RHEL 6 kernel is vulnerable, but that may or may not be true. The RHEL 6 kernel may have been patched to fix this vulnerability as far back as 2.6.32-263, or it may never have been affected in the first place.

    In reality no one would compare RHEL 6 packages with RHEL 7 versions, but the same problem holds inside a single RHEL version. Red Hat releases software in many different streams: “regular” RHEL and Extended Update Support (EUS), to name the two most common. You can know, for example, that kernel-2.6.32-264.el6 is “newer” than kernel-2.6.32-263.el6, but what about kernel-2.6.32-263.4.el6_4 (a hypothetical EUS kernel)? A naive version comparison reports the 264.el6 kernel as “newer”, but that tells you nothing about which fixes each build contains: the 263.4.el6_4 build may have more patches, fewer patches, or even an arbitrarily different set of patches than the 264.el6 build (technically possible, though highly unlikely in practice).

    And the same applies not just to kernels in the major Red Hat product lines, but to all packages in all channels. Consider a library bundled as part of an add-on product. When presented with a CVE in a bundled library, the application developer generally has two options: pull the version of the library that contains the fix into the application (upstream may have “moved on”, so that version may contain many other changes, including new features or API changes that require at least a large revalidation effort), or backport the fix to the old version and branch the version numbers. As soon as you branch the version numbers, you have the same problem as when comparing RHEL with EUS versions. Version comparison is simply not useful once fixes are backported to old versions of software in different channels (which Red Hat does constantly across its products). You can only compare versions between software that was intended to end up on the same machine, for example within a base channel and child channel combination.

    Third, even if you account for all of the above, there are still other considerations. CPU architecture can matter: if support for a given architecture was at a developer-preview or beta level when an update was made, it was probably built from a different source RPM than the “regular” architectures and may have an arbitrarily different version number. You might also have to consider the RPM’s Epoch. Epoch is never included in the filename and is sometimes hidden by programs and default settings, yet it is the highest-priority field (when present) in determining which RPM is “newer”. Are you sure the data source and security scanner you are using considers Epoch? What about the RPM Obsoletes tag?
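    As a small illustration of both points, the rpm Python bindings that ship with RHEL expose the comparison logic RPM itself uses: rpm.labelCompare() takes two (epoch, version, release) tuples and returns 1, 0, or -1. This sketch only shows how the examples above compare; it is not a vulnerability test.

        import rpm

        # The "regular" RHEL 6 kernel compares as newer than the hypothetical
        # EUS build, yet that says nothing about which fixes each contains.
        print(rpm.labelCompare(("0", "2.6.32", "264.el6"),
                               ("0", "2.6.32", "263.4.el6_4")))   # -> 1

        # Epoch outranks version and release entirely: Epoch 1 with a low
        # version still compares as newer than Epoch 0 with a high version.
        print(rpm.labelCompare(("1", "1.0", "1.el6"),
                               ("0", "9.9", "9.el6")))            # -> 1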

    It’s a complicated problem. Correctly detecting vulnerability to a particular CVE is difficult, perhaps impossible, given RPM version information alone. Many 3rd party scanners are unaware of the complexity of the problem, leading to false positives or, worse yet, to false negatives.

    Getting the information needed to make a decision

    The three data resources that Red Hat provides to help explain vulnerabilities and determine whether you are vulnerable are the Common Vulnerability Reporting Framework (CVRF) data, the Common Vulnerabilities and Exposures (CVE) feed, and the Open Vulnerability and Assessment Language (OVAL) data.

    The CVRF data is a representation of the update content itself and is not designed for scanning purposes; it helps explain the impact of an erratum and how it affects your system.

    Likewise, the CVE feed helps explain what the vulnerability affects and why you should care. It represents the vulnerability itself, but not necessarily how to detect it on your system.

    OVAL data is machine-readable data that combines much of what is found in the CVRF and CVE feeds. It provides scanners with the additional information (beyond just version numbers) they need to properly detect vulnerability to a CVE. OpenSCAP is one such OVAL-compatible scanner, and the only scanner that Red Hat currently supports. At present Red Hat publishes OVAL data only for RHEL; support for additional products is coming in the future.
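    For example, a scan of a RHEL 7 host against our published OVAL definitions could look like the following sketch. The OVAL file name reflects what is published under www.redhat.com/security/data/oval/ at the time of writing; verify the current URL before relying on it.

        import subprocess
        import urllib.request

        # Fetch the per-product OVAL definitions (one definition per RHSA).
        OVAL = "https://www.redhat.com/security/data/oval/com.redhat.rhsa-RHEL7.xml"
        urllib.request.urlretrieve(OVAL, "rhel7-oval.xml")

        # "oscap oval eval" checks each definition against the installed
        # packages (honoring Epoch, architecture, etc.) and writes results.
        subprocess.run(["oscap", "oval", "eval",
                        "--results", "results.xml", "rhel7-oval.xml"],
                       check=True)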

    And in conclusion...

    There are several ways to answer the “am I vulnerable?” question. For managing individual systems, you can use yum-plugin-security or OpenSCAP. To manage multiple systems simultaneously, there are additional subscription options: Red Hat Insights and Red Hat Satellite. Insights can detect and warn about a subset of high-priority issues (and also about non-CVE matters such as performance-related configuration). Satellite is a full-service management platform for handling the configuration, provisioning, entitlement, and reporting (including which CVEs are applicable) of Red Hat systems and software.
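    On a single system, the yum-plugin-security check is simple; wrapped in Python for consistency with the sketches above, it might look like this (the output format varies between releases):

        import subprocess

        # "yum updateinfo list cves" lists the CVEs fixed by updates that are
        # applicable to this machine, based on its actual channels and repos.
        out = subprocess.run(["yum", "updateinfo", "list", "cves"],
                             stdout=subprocess.PIPE, universal_newlines=True)
        print(out.stdout)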

    The security metrics data we provide is very useful for determining “What is the problem?”, “How important is it?”, and “Should I care?”, but to answer the question “Which of my systems are affected?” you need something more: something that knows the details of your systems’ channel subscriptions and which versions (if any) of the updated packages apply to them.

    Posted: 2017-04-12T13:30:00+00:00