The title isn’t click-bait, I promise! I sincerely believe what I am going to write next.
The best security advice I can give you is this:
“Pick your security battles”
Ok, so it’s a little more complex than that, but it’s a start. So let me explain.
Security people have a horrible tendency to insist that all security issues be fixed just because they are security issues, and they are willing to stop deployment of critical business systems until they are.
At this point you probably think I am a security heretic. I am not! Let me explain with an example.
Many years ago a product team came to me to ask me to help them get a security bug through the Windows War Room. We were at a point where every bug had to pass the War Room before being accepted into the product, and every fix had to be surgical. I took a look at the bug and said, “nope!”
The program manager said, “but it’s a security bug, it’s a memory corruption bug and you don’t think we should get it fixed?”
I said, “sure, it should be fixed, but it’s a quality issue, not a security bug. It’s not critical and does not put customers at risk. So no, not at this point in the release.”
The issue was a memory corruption bug in a command-line tool parsing a command-line argument. It was a one-byte overrun, and the way the code worked, the argument could only be numeric. On top of that, the overrun was picked up by the Visual C++ /GS stack-based overrun detection code, and address space layout randomization (ASLR) was enforced for the app.
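To make that concrete, here’s a minimal sketch of the shape of that bug. This is my own illustration, not the actual Windows code: a tool that accepts a numeric-only argument and overruns a fixed stack buffer by exactly one byte when it writes the terminating NUL.

#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char *argv[]) {
    char digits[8];                          /* room for 8 digit characters */

    if (argc < 2)
        return 1;

    size_t len = strlen(argv[1]);
    for (size_t i = 0; i < len; i++)         /* the argument may only be numeric */
        if (!isdigit((unsigned char)argv[1][i]))
            return 1;

    if (len > sizeof(digits))                /* off-by-one check: should be >= */
        return 1;

    memcpy(digits, argv[1], len);
    digits[len] = '\0';                      /* when len == 8, this writes one byte
                                                past the end of digits -- exactly the
                                                kind of stack overrun /GS catches */

    printf("%ld\n", strtol(digits, NULL, 10));
    return 0;
}

A one-byte, digits-only overrun in a local tool, with /GS and ASLR behind it, is a very different animal from the bug we’ll get to in a moment.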
The lesson here is:
“Not all bugs are created equal.”
Sure, it’s memory corruption, and it was eventually fixed, but it’s not a serious security issue. Now imagine a scenario where the corruption is in code running as admin/root, listening on an unauthenticated UDP endpoint on the internet! That’s a no-brainer. In fact, I’d ask how it got into the code in the first place.
The ‘analysis technique’ I often apply is:
“What does the attacker control, and who is the attacker?”
Let’s take the first scenario. The attacker controls an argument to a command-line tool, and the attacker can only use numbers (0-9). The attacker is either a user who hits the memory corruption, in which case the ensuing crash affects only that user, or someone on the Internet convincing a user to run the app with the overlong argument. But if an attacker can get a user to run something on the attacker’s behalf, then the attacker can get the user to run, well, any arbitrary code!
You can often determine the level of attacker control by looking at the entry point’s attack surface: local vs. remote, authenticated vs. anonymous, and user vs. admin access.
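If it helps, you can jot those dimensions down as a little checklist. The struct below is just a toy illustration of the triage questions; the field names and the rule at the end are my own invention, not a formal scoring system.

/* Toy illustration only: the attack-surface dimensions written down as data. */
typedef struct {
    int remote;                   /* 1 = reachable over the network, 0 = local only */
    int anonymous;                /* 1 = no authentication required                 */
    int runs_elevated;            /* 1 = code runs as admin/root                    */
    int attacker_controls_data;   /* 1 = attacker chooses the corrupting data       */
} AttackSurface;

/* The more of these that are true, the harder it is to argue the bug can wait. */
static int needs_immediate_fix(const AttackSurface *a) {
    return a->remote && a->anonymous &&
           (a->runs_elevated || a->attacker_controls_data);
}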
And we haven’t even touched on the various mitigations that came into play.
I often show the following example to hammer the point home.
Look at this C code; let’s assume it’s in a Windows command-line tool.
void func() {
    char buf[4];
    buf[4] = 0;    /* out-of-bounds write: the valid indices are 0..3 */
}
This is a memory corruption bug (padding aside) because the code writes to the 5th element of a four-byte array (remember, arrays start at zero in C.) And for the smarty-pants out there, I realize that this code, as-is, would be removed by the optimizer. But humor me.
We can agree it’s a memory corruption issue, but is it a security bug? No! The attacker controls NOTHING. The index into the array is a constant. The value is a constant. The attacker controls zip.
Okay, let’s take the same code but spice it up a little.
void func(int index, int value) {
    char buf[4];
    buf[index] = value;    /* both the index and the value come from the caller */
}
Let’s assume the arguments index and value come from a call to recv() that reads a packet from the Internet. Is this a security bug? Heck yeah. The attacker controls everything about that poor old buffer. Even with /GS and ASLR in place, this would be a serious bug, and it would be fixed at the earliest opportunity.
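To picture that data flow, here’s a minimal sketch using a Berkeley-style recv(); it’s an illustration of the scenario, not code from any real product. Both bytes come straight off the wire, so whoever sent the packet picks the index and the value.

#include <sys/socket.h>

void func(int index, int value);   /* the vulnerable function above */

void handle_packet(int sock) {
    unsigned char packet[2];

    /* Two attacker-supplied bytes read from the network... */
    if (recv(sock, packet, sizeof(packet), 0) == (ssize_t)sizeof(packet)) {
        /* ...become the index and the value written into the 4-byte buffer. */
        func(packet[0], packet[1]);
    }
}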
So there you have it: pick your battles, because not all bugs are created equal, and to determine what needs to be fixed, understand what the attacker controls.
– Michael