False-Fails in the `oscap` Utility's Profiles?

Has anyone noticed that a number of the tests included in the OpenSCAP profiles are kind of sub-optimal?

I'd previously written a set of CM modules for our automated CM system to apply the DISA STIGs for EL6 to our RHEL 6 hosts. As part of finishing that effort up, I decided to run the oscap utility against a system I had hardened to meet the STIG specifications. I was expecting some false-fails, but I've been surprised by how many there are. Looking into them, it seems like many of the tests don't gracefully handle "secure out of the box" settings or "subsystem not installed" conditions.

Since my test system is pretty much an '@Core' build - with the exception of the SCAP-related RPMs - the ungraceful handling of "subsystem not installed" results in a lot of errors. To me, it just doesn't make sense that I need to create a file normally delivered by an RPM just to insert a configuration option that's non-relevant (by virtue of the fact that the associated service/subsystem/etc. isn't present). Are things like this just the result of sub-optimal test design, or is it done intentionally, on the expectation that more traditional security scanners will also false-trigger unless the same sub-optimal detection mode is used in the SCAP profiles?
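To illustrate the kind of graceful handling I'd expect, here's a minimal shell sketch of a check that reports "not applicable" instead of failing when the owning subsystem isn't installed. The file path and option below are purely illustrative examples, not taken from any particular STIG rule:

```shell
#!/bin/sh
# check_opt FILE PATTERN
# Echoes "pass" if PATTERN is found in FILE, "fail" if FILE exists but the
# pattern is absent, and "notapplicable" if FILE isn't there at all, i.e.
# the owning subsystem isn't installed and the check shouldn't count as a fail.
check_opt() {
  file=$1
  pattern=$2
  if [ ! -e "$file" ]; then
    echo "notapplicable"
  elif grep -q "$pattern" "$file"; then
    echo "pass"
  else
    echo "fail"
  fi
}

# Illustrative use (path and option are hypothetical examples):
check_opt /etc/vsftpd/vsftpd.conf '^anonymous_enable=NO'
```

On a minimal build where the package (and thus its config file) is absent, this degrades to "notapplicable" rather than demanding that the file be created just to hold an irrelevant setting.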

The "sub-optimal" question probably extends to some other false-fails as well (e.g., the "noexec option on removable media" test effectively requires the option to be set on more than just removable media, and some of the "secure by default" sysctl settings and the like behave similarly).

Just wondering if anyone else has experienced similar issues and how they've been addressing such false-fails. Basically, I'm trying to determine whether I over-finessed my CM modules or whether there's a better way to run the SCAP tools (preferably one that can be explained to auditors who don't always understand the tools they use to evaluate systems).

Responses

Hey Tom, which profiles are you referring to? So far, whenever I have tried doing scans I get so much cruft that I thought I was doing it wrong ;-) I had come to the conclusion that the available profiles were merely there as an example, or as a "foundation" to only call a subset during the actual scan.

Since I have never had the requirement to run a scan for auditors, how does it work?

Do they provide the XCCDF to use for the scan? Or do you tell them what you are checking for? I find that with PCI audits, the requirements, scanning, and reporting all seem somewhat ambiguous.

Good topic!

I've been running the profiles that come in the RHEL6 RPM. To be honest, I was running all the profiles as a sanity-check for my lockdown scripts.

My organization provides platform management services to a number of customer organizations. Sadly, each of those organizations seems to have its own scanning tools and its own things to look for - even though they all claim to be looking for the same things. Worse, each organization's preferred tools are out of date to varying degrees (for example, back when we were still doing Red Hat 5.x systems for them, at least one organization's auditors were trying to fail the systems for not running pam_tally - even though pam_tally2 had long since replaced it).

My broad-spectrum application of oscap was just a way for me to "worst case" things and anticipate what those organizations were likely to find. None of the organizations I do work for currently use SCAP. That said, some could probably be convinced to do so if the SCAP profiles were more accurate than they currently seem to be (e.g., the RHEL6 RPM's audit-rule checkers all seem to be mis-designed, causing nearly a dozen fail-events even though what they're supposedly checking for is EXACTLY how the system is configured).

To date, I'd been so busy trying to get my CM modules written that I hadn't had a chance to dig super-deep into the SCAP process. This past week was the first time I'd actually had a chance to figure out how to run the SCAP tools (and my broad-spectrum approach was a result of not knowing which of the pre-canned profiles was likely to best apply for my customers, so I just did the "run them all" thing). Since I was running into my own issues, I wanted to compare notes with others who might be on a similar path, hear what they've encountered, and see how best to work around issues (possibly with an eye towards filing bugs and suggested fixes).
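For anyone else feeling their way through the same process, the basic flow I ended up with looks roughly like the sketch below. The datastream path and profile id are assumptions based on the RHEL 6 scap-security-guide packaging conventions - run `oscap info` to confirm the ids your content actually ships - and the script skips cleanly if oscap or the content isn't installed:

```shell
#!/bin/sh
# Sketch: list the profiles a SCAP datastream ships, then evaluate one.
# DS path and profile id are assumptions from the RHEL 6 SSG packaging.
DS=/usr/share/xml/scap/ssg/content/ssg-rhel6-ds.xml
SCAN_STATUS=skipped

if command -v oscap >/dev/null 2>&1 && [ -e "$DS" ]; then
  oscap info "$DS"      # prints the profile ids the content provides

  # oscap xccdf eval returns a non-zero status (2) when rules fail, so
  # don't treat a failing scan as a script error.
  oscap xccdf eval \
    --profile xccdf_org.ssgproject.content_profile_stig-rhel6-server-upstream \
    --results /tmp/oscap-results.xml \
    --report  /tmp/oscap-report.html \
    "$DS" || true
  SCAN_STATUS=done
else
  echo "oscap or the SSG content is not installed; skipping"
fi
```

The `--report` HTML output turned out to be the easiest artifact to hand to non-SCAP-literate reviewers.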

Anyway, still trying to find the best way to skin this particular cat.

Hi Tom,

My experience with the SCAP tools was very similar. Seemingly simple tests would fail, and those failures added up. I originally started picking apart the SCAP profiles to fix/remove/resolve them, but it doesn't look particularly good in security-audit situations when you are modifying the test to return a positive result.
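One middle ground that avoids hand-editing the shipped content is an XCCDF tailoring file: you extend the vendor profile and deselect the rules you can document as not applicable, and an auditor can diff your tailoring against the pristine upstream content. A minimal sketch follows - oscap's `--tailoring-file` option is the real mechanism, but the profile and rule ids below are hypothetical placeholders, not real SSG ids:

```shell
#!/bin/sh
# Write a minimal XCCDF 1.2 tailoring file that extends an upstream profile
# and deselects one rule (the ids here are placeholder examples).
cat > /tmp/site-tailoring.xml <<'EOF'
<xccdf:Tailoring xmlns:xccdf="http://checklists.nist.gov/xccdf/1.2"
                 id="xccdf_com.example_tailoring_site">
  <xccdf:version time="2014-01-01T00:00:00">1</xccdf:version>
  <xccdf:Profile id="xccdf_com.example_profile_site-stig"
                 extends="xccdf_org.ssgproject.content_profile_stig-rhel6-server-upstream">
    <xccdf:title>Site STIG, minus documented not-applicable rules</xccdf:title>
    <xccdf:select idref="xccdf_org.ssgproject.content_rule_example_rule"
                  selected="false"/>
  </xccdf:Profile>
</xccdf:Tailoring>
EOF

# Evaluate with the tailored profile (skips cleanly when oscap is absent):
if command -v oscap >/dev/null 2>&1; then
  oscap xccdf eval \
    --tailoring-file /tmp/site-tailoring.xml \
    --profile xccdf_com.example_profile_site-stig \
    /usr/share/xml/scap/ssg/content/ssg-rhel6-ds.xml || true
fi
```

Because the tailoring file is a separate, small artifact, it pairs naturally with the "explain each exception in detail" approach: each deselected rule gets a written justification alongside it.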

From memory, many exceptions were related to configuration options missing in the .conf file for packages that weren't even installed.

The tragically non-technical solution I have resorted to in the past is explaining any 'exceptions' in detail and bundling them with the report.

-edit-

I should mention, another avenue I investigated after being underwhelmed with the SCAP tools was Puppet. Several people have put energy into developing Puppet manifests for security-compliance purposes, e.g.:
https://github.com/arildjensen/cis-puppet

As I use Puppet to carry out the OS configuration, using it to also confirm audit compliance looks like a logical next step.

I've been writing my CM modules to run under SaltStack. I was looking at OpenSCAP as a "neutral" way of verifying that what my SaltStack modules say they are doing is confirmable via a third-party application. The SCAP utilities get mentioned on a not-infrequent basis on these forums, so I figured there'd be people with experience who might be able to correct me.

Overall, I like the idea of SCAP. Unfortunately, the "it doesn't look particularly good in security audit situations when you are modifying the test to return a positive result" problem means it will be a non-starter to try to get our security folks to adopt it over the tools they already use (tools that have their own problems - but when you're suggesting change, the proposed change has to look much closer to bullet-proof than the incumbent solution(s)). So, I'd like to help make it better; I just haven't had time to dig in and see whether the OpenSCAP utilities offer the same flexibility to implement the kinds of more-finessed tests I was able to create under SaltStack.

Automating systems-security has afforded a lot of opportunities to file bugs and proposed fixes to several other internal and external projects, though. Always nice to be able to leave "fingerprints" on stuff. =)
