False-Fails in the `oscap` Utility's Profiles?

Has anyone noticed that a number of the tests included in the OpenSCAP profiles are kind of sub-optimal?

I'd previously written a set of configuration-management (CM) modules for our automation framework to apply the DISA STIGs for EL6 to our RHEL 6 hosts. As part of wrapping that effort up, I ran the `oscap` utility against a system I had hardened to meet the STIG specifications. I was expecting some false-fails, but I've been surprised by just how many I'm seeing. Digging into them, it seems that many of the tests don't gracefully handle "secure out of the box" settings or "subsystem not installed" conditions.
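
For context, this is more or less how I'm running the scan. Treat the datastream path and profile ID below as placeholders for whatever your installed content actually provides (check with `oscap info`):

```
# List the profiles the installed content provides:
oscap info /usr/share/xml/scap/ssg/content/ssg-rhel6-ds.xml

# Evaluate the host against the STIG profile (substitute the profile ID
# and datastream path that `oscap info` reported above):
oscap xccdf eval \
    --profile xccdf_org.ssgproject.content_profile_stig-rhel6-server-upstream \
    --results /tmp/oscap-results.xml \
    --report /tmp/oscap-report.html \
    /usr/share/xml/scap/ssg/content/ssg-rhel6-ds.xml
```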

Since my test system is pretty much an '@Core' build - the SCAP-related RPMs being the only additions - the ungraceful handling of "subsystem not installed" results in a lot of spurious errors. To me, it just doesn't make sense that I should have to create a file normally delivered by an RPM just to insert a configuration option that's irrelevant (by virtue of the fact that the associated service/subsystem/etc. isn't present). Are things like this just the result of sub-optimal test design, or is it intentional - done on the expectation that more traditional security scanners will also false-trigger unless the same sub-optimal detection mode is baked into the SCAP profiles?
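
To illustrate what "graceful" would look like: I'd expect every such check to gate on applicability before it ever inspects a config file. A rough shell sketch of that logic follows - the package, file, and setting names here are purely illustrative stand-ins, not the actual OVAL check:

```
# Gate the configuration check on whether the owning package is even
# installed; "net-snmp" and snmpd.conf are illustrative stand-ins.
if rpm -q net-snmp >/dev/null 2>&1; then
    # Subsystem present: the configuration requirement genuinely applies.
    if grep -q '^rocommunity[[:space:]]*public' /etc/snmp/snmpd.conf 2>/dev/null; then
        echo "fail: default SNMP community string in use"
    else
        echo "pass"
    fi
else
    # Subsystem absent: the requirement cannot meaningfully apply.
    echo "notapplicable: net-snmp not installed"
fi
```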

The "sub-optimal" question probably extends to some other false-fails, as well (e.g., the "noexec option on removable media" test is designed to need the option to be set on more than just removable media; some of the "secure by default" sysctl settings and the like)?

Just wondering whether anyone else has run into the same thing and how they've been addressing such false-fails. Basically, I'm trying to determine whether I over-finessed my CM modules or whether there's a better way to run the SCAP tools (preferably one that can be explained to auditors who don't always understand the tools they use to evaluate systems).
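
One approach I've been weighing, rather than further contorting the CM modules, is an XCCDF tailoring file that deselects the rules that can't apply, which also documents the deviation in a form an auditor can read. A minimal sketch is below; the tailoring and profile IDs are ones I made up, the rule ID is a stand-in for whatever your results XML shows, and `--tailoring-file` support depends on the openscap version you have:

```
# Minimal XCCDF 1.2 tailoring file; all IDs below are hypothetical.
cat > /tmp/stig-tailoring.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<xccdf:Tailoring xmlns:xccdf="http://checklists.nist.gov/xccdf/1.2"
                 id="xccdf_com.example_tailoring_stig">
  <xccdf:benchmark href="/usr/share/xml/scap/ssg/content/ssg-rhel6-ds.xml"/>
  <xccdf:version time="2014-06-01T00:00:00">1</xccdf:version>
  <xccdf:Profile id="xccdf_com.example_profile_stig-tailored"
                 extends="xccdf_org.ssgproject.content_profile_stig-rhel6-server-upstream">
    <xccdf:title>EL6 STIG, minus rules for subsystems we don't install</xccdf:title>
    <!-- Deselect a rule whose subsystem isn't present (rule ID is a stand-in): -->
    <xccdf:select idref="xccdf_org.ssgproject.content_rule_snmpd_not_default_password"
                  selected="false"/>
  </xccdf:Profile>
</xccdf:Tailoring>
EOF

# Evaluate against the tailored profile instead of the stock one:
oscap xccdf eval \
    --tailoring-file /tmp/stig-tailoring.xml \
    --profile xccdf_com.example_profile_stig-tailored \
    --results /tmp/oscap-results.xml \
    --report /tmp/oscap-report.html \
    /usr/share/xml/scap/ssg/content/ssg-rhel6-ds.xml
```

That would keep the scan itself reproducible while making the list of deviations explicit, which is the part the auditors actually care about.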
