By Jason Ritzke – Senior Technical Consultant

When architecting secure infrastructure, a natural place to begin is a common standard set, such as a DISA STIG or a CIS benchmark. While industry-standard compliance documents can be a useful starting point for thinking about your infrastructure requirements, they are no substitute for considering and building to your business needs. The rules in a widely applied compliance framework may look dazzlingly complete at first, but they were created for needs that may not match yours, and under constraints that may not be your own.

To ground this argument in practical concerns, let me give an example. Recently, I was going over some logs trying to track down a mysterious “disappearing” directory. Situations like this are typically not mysterious in the least, since directories don’t simply disappear: either they’re deleted, scrubbed from disk, or a filesystem or disk error renders them unreadable or unavailable. Finding no evidence of any of these in the logs, I decided to go deeper, into the system’s audit log configuration. What I saw was both utterly ‘compliant’ and more than a little worrying.

The goal of logging

First, a small aside. Some may be unfamiliar with the distinction between audit logging and the more ubiquitous application logging. For me, it’s easiest to understand logging as sliced into three categories: debug, operational, and audit. Debug logs should be a detailed description of code flow at run time, allowing speedy resolution of software bugs. Operational logs typically contain only serious error conditions. Audit logging sits somewhere between the two, and concerns itself with what I’d call the CLUE factors:

Who, where, when, with what?

Colonel Mustard, in the Library, at 1200 UTC, with the Ethernet cable.

In many Linux distributions, the primary mechanism for providing such logging is the audit daemon, auditd.
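For orientation, here is how you might inspect the audit configuration on a running system. Both commands are standard parts of the audit package; “delete” is simply the key used to tag events by the rules discussed below:

# List the audit rules currently loaded into the kernel
auditctl -l

# Search recorded events tagged with the key "delete", interpreting
# numeric fields (UIDs, timestamps, syscall numbers) into readable form
ausearch -k delete -i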

Read between the guidelines

On this system, the logging rules had been set up in compliance with the CIS guidelines for hardening RHEL 6. The text and examples in these guidelines are surprisingly widespread, mirrored exactly in the DISA STIG and even in the RHEL documentation itself. Historically, they all appear to stem from the NSA’s guide to hardening RHEL 5, and you can find them implemented verbatim in countless configuration management repositories on GitHub and elsewhere. These are guidelines for a completely different OS version, repeated across the internet until they become simply “how things are done”. Not a comforting thought.

The core issue

But is there anything exactly wrong with these guidelines? That is, of course, a matter of opinion. Mine is that, while the guidelines are for the most part a good effort, in places they appear to have been designed as security in a vacuum: that is, without regard for the actual tactics and procedures of the modern adversary, likely a product of the environment at the time they were written. Many of the steps taken to secure the system have simple workarounds, workarounds that many attackers will learn on the first day of their education. The rules appear to have been drafted under the presumption that system services are generally somewhat trusted, and that, I think, is a mistake.

Let’s take, for example, the rules concerning logging of file deletion events. In the CIS guidelines (and therefore many other places), they’re implemented like this:

-a always,exit -F arch=b64 -S unlink -S unlinkat -S rename -S renameat -F auid>=500 -F auid!=4294967295 -k delete

-a always,exit -F arch=b32 -S unlink -S unlinkat -S rename -S renameat -F auid>=500 -F auid!=4294967295 -k delete

This is, of course, why our disappearing directory was such a mystery! These rules are set up to omit the actions of root and the other system accounts: the -F auid>=500 filter drops events from any account whose login UID is below 500 (on RHEL 6, regular user accounts start at 500), and -F auid!=4294967295 drops events from processes whose login UID was never set at all, such as daemons started at boot.
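If your threat model says that root and service accounts cannot simply be trusted, one possible adjustment is to drop the auid filters so that deletions by any account are recorded. This is a sketch, not a drop-in recommendation; measure the event volume on a test system first:

-a always,exit -F arch=b64 -S unlink -S unlinkat -S rename -S renameat -k delete
-a always,exit -F arch=b32 -S unlink -S unlinkat -S rename -S renameat -k delete

These rules will capture every deletion on the system, including routine housekeeping by daemons and package managers, so they usually need to be paired with log shipping and sensible retention.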

Permissions and trust

The very first thing you’re likely to learn in any of the introductory offensive security training you can find on the internet is sniffing out lax permissions. So let’s assume a fairly common style of breach scenario as a thought experiment. An attacker gets in and finds a logging script that an admin threw in for debugging purposes and forgot to remove. This script, let’s call it “testfile”, sits in root’s crontab and regularly runs some test and sends an email. You can imagine that this is a thing that happens in ops land fairly often. Furthermore, since the admin was in a rush (and meant to delete the script before getting distracted), they just made it world-writable. Yes, breaches like this are bad, but the point of concern here is that, to the organization, they happen invisibly. When we convince a system service to perform nefarious deletions for us (instead of performing them ourselves), there is no record. Your audit logs have now failed to capture post-breach attack actions, and you can’t tell the FBI what happened.
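To make the scenario concrete, a hypothetical attacker session might look like the following. The script name comes from the story above; every path here is purely illustrative:

# Recon: find world-writable files owned by root (a first-day technique)
find / -xdev -type f -perm -0002 -user root 2>/dev/null

# Append a payload to the forgotten debug script. Root's cron runs it
# with a login UID of 0 (or none at all), so the auid filters in the
# CIS delete rules exclude the resulting deletions from the audit log.
echo 'rm -rf /srv/app/evidence' >> /usr/local/bin/testfile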

This rule actually does its job well, but the job itself was defined under premises that may not match your own: that you don’t need to audit system service accounts because their actions are known, and that you don’t want to drown in audit events. To the first premise I respond with a quote from security researcher Moxie Marlinspike:

“You are running network services with security vulnerabilities. Again, you are running network services with security vulnerabilities.”
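The second premise, event volume, is a legitimate cost, but a manageable one. If you broaden the delete rules as sketched earlier, one way to keep the noise down (a sketch assuming you know which paths are noisy; the directory here is hypothetical) is to add suppression rules, which the auditctl documentation suggests placing at the top of the list:

# Suppress delete events under a known-noisy scratch directory
# (hypothetical path); "never" rules must precede the "always" rules
-a never,exit -F arch=b64 -S unlink -S unlinkat -S rename -S renameat -F dir=/var/tmp/scratch
-a never,exit -F arch=b32 -S unlink -S unlinkat -S rename -S renameat -F dir=/var/tmp/scratch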

Understanding the aftermath

This post deliberately sets aside the question of preventing breaches; the concern here is strictly post-event visibility. In the event of a breach, the CLUE factors (in some variation) are likely among the first things your bosses, and later your legal team, will ask you about. And if your company is unable to show what occurred, all the #cyber insurance in the world won’t help, because you won’t even know what happened to your systems.

The bottom line is this: best practices are no substitute for knowing and designing around the environment you actually live in. Securing yourself against threat models that do not apply to you is both ineffective and a waste of effort. There is no substitute for experienced personnel who can understand the technologies in your environment and adapt them to its needs.


This post originally appeared on https://www.4loopz.com and was edited and re-posted here.