False-Fails in the `oscap` Utility's Profiles?
Has anyone noticed that a number of the tests included in the OpenSCAP profiles are kind of sub-optimal?
I'd previously written a set of CM modules for our automated CM system to apply the DISA STIGs for EL6 to our RHEL 6 hosts. As part of wrapping that effort up, I decided to run the oscap utility against a system I had hardened to meet the STIG specifications. I was expecting some false-fails, but I've been surprised by just how many there are. Looking into them, it seems like many of the tests don't gracefully handle "secure out of the box" settings or "subsystem not installed" situations.
Since my test system is pretty much an '@Core' build - with the exception of the SCAP-related RPMs - the ungraceful handling of "subsystem not installed" results in a lot of errors. To me, it just doesn't make sense that I need to create a file normally delivered by an RPM just to insert a configuration option that is irrelevant (by virtue of the fact that the associated service/subsystem/etc. isn't present). Are things like this just the result of sub-optimal test design, or is it done intentionally, on the expectation that more traditional security scanners will also false-trigger if the same sub-optimal detection mode isn't used in the SCAP profiles?
The "sub-optimal" question probably extends to some other false-fails as well (e.g., the "noexec option on removable media" test is designed such that the option must be set on more than just removable media; ditto some of the "secure by default" sysctl settings and the like).
Just wondering if anyone else has experienced something similar and how they've been addressing such false-fails. Basically, I'm trying to determine whether I over-finessed my CM modules or whether there's a better way to run the SCAP tools (preferably one that can be explained to auditors who don't always understand the tools they use to evaluate systems).
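For reference, the approach I've been experimenting with is an XCCDF tailoring file that extends the shipped profile and deselects the rules that can't apply to a minimal build, rather than editing the profile content itself. This is only a sketch - the tailoring, profile, and rule IDs below are made-up placeholders, not real SSG identifiers:

```xml
<!-- my-tailoring.xml: deselect a not-applicable rule without touching
     the vendor-shipped content. All IDs here are illustrative. -->
<xccdf:Tailoring xmlns:xccdf="http://checklists.nist.gov/xccdf/1.2"
                 id="xccdf_com.example_tailoring_stig">
  <xccdf:version time="2015-01-01T00:00:00">1</xccdf:version>
  <xccdf:Profile id="xccdf_com.example_profile_stig_tailored"
                 extends="xccdf_org.ssgproject.content_profile_stig">
    <xccdf:title>STIG profile minus not-applicable rules</xccdf:title>
    <!-- hypothetical rule covering a subsystem that isn't installed -->
    <xccdf:select idref="xccdf_org.ssgproject.content_rule_example"
                  selected="false"/>
  </xccdf:Profile>
</xccdf:Tailoring>
```

Recent oscap versions can then consume it with something like `oscap xccdf eval --tailoring-file my-tailoring.xml --profile xccdf_com.example_profile_stig_tailored <content-file>` (the path to the SCAP content varies by distribution and SSG version). The upside for audits is that the shipped content stays pristine and all the deltas sit in one reviewable file.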
Responses
Hey Tom, which profiles are you referring to? So far, whenever I have tried doing scans I get so much cruft that I thought I was doing it wrong ;-) I had come to the conclusion that the available profiles were merely there as an example, or as a "foundation" to only call a subset during the actual scan.
Since I have never had the requirement to run a scan for auditors, how does it work?
Do they provide the XCCDF to use for the scan, or do you tell them what you are checking for? I find that with PCI audits the requirements, scanning, and reporting all seem somewhat ambiguous.
Good topic!
Hi Tom,
My experience with SCAP tools was very similar. Seemingly simple tests would fail, and those failures added up quickly. I originally started picking apart the SCAP profiles to fix/remove/resolve them, but it doesn't look particularly good in security-audit situations when you are modifying the test to return a positive result.
From memory, many of the exceptions were related to configuration options missing from the .conf files of packages that weren't even installed.
The tragically non-technical solution I have resorted to in the past is explaining any 'exceptions' in detail and bundling them with the report.
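To make assembling that exception bundle less painful, a short script can pull the failing rule IDs out of the oscap results XML. This is a minimal sketch - the embedded XML is a hypothetical stand-in for real results output, and real files will use whichever XCCDF namespace your content was written against:

```python
# Sketch: extract failed rule IDs from an XCCDF TestResult so the
# 'exceptions' can be documented alongside the report. The sample XML
# below is a made-up, minimal stand-in for real oscap output.
import xml.etree.ElementTree as ET

NS = {"xccdf": "http://checklists.nist.gov/xccdf/1.2"}

sample = """<TestResult xmlns="http://checklists.nist.gov/xccdf/1.2">
  <rule-result idref="xccdf_org.ssgproject.content_rule_example_pass">
    <result>pass</result>
  </rule-result>
  <rule-result idref="xccdf_org.ssgproject.content_rule_example_fail">
    <result>fail</result>
  </rule-result>
</TestResult>"""

def failed_rules(xml_text):
    """Return the idref of every rule-result whose result is 'fail'."""
    root = ET.fromstring(xml_text)
    return [rr.get("idref")
            for rr in root.findall("xccdf:rule-result", NS)
            if rr.findtext("xccdf:result", namespaces=NS) == "fail"]

print(failed_rules(sample))  # → ['xccdf_org.ssgproject.content_rule_example_fail']
```

From there it's easy to join the failed IDs against a hand-maintained table of explanations when producing the report for the auditors.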
-edit-
I should mention, another avenue I investigated after being underwhelmed with the SCAP tools was Puppet. Several people have put energy into developing Puppet manifests for security-compliance purposes, e.g.:
https://github.com/arildjensen/cis-puppet
As I use Puppet to carry out the OS configuration, using it to also confirm audit compliance looks like a logical next step.
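For anyone curious what that looks like, here's a rough sketch of the pattern such manifests use - enforce a single CIS-style setting so that Puppet both remediates it and, via its run reports, effectively audits it. This assumes the puppetlabs-stdlib module for file_line, and the class, resource, and setting names are my own illustrations, not taken from the linked repo:

```puppet
# Sketch: keep one kernel setting compliant and reloaded on change.
# Assumes puppetlabs-stdlib is installed (provides file_line).
class compliance::sysctl_ip_forward {
  file_line { 'disable ip_forward':
    path   => '/etc/sysctl.conf',
    line   => 'net.ipv4.ip_forward = 0',
    match  => '^net\.ipv4\.ip_forward',
    notify => Exec['reload sysctl'],
  }

  exec { 'reload sysctl':
    command     => '/sbin/sysctl -p',
    refreshonly => true,
  }
}
```

Run with `puppet agent --noop` (or `puppet apply --noop` on a test box), the same manifest doubles as a pure audit pass: it reports what would change without touching the system.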
