Azure Monitor Activity Log Change History (Preview)

There’s a new public preview feature named Change History in Azure Monitor Activity Logs that allows you to get a better feel for what caused events in the first place.

I don’t know about you, but wading through a JSON description of an event to determine what happened can be a little cumbersome at times, and this feature is a great time-saver.

To get to the feature, go to Monitor and then click Activity Log:

This is the list of all the management-plane activities across your subscription, as collected by Azure Monitor.

NOTE: If you don’t see Monitor in the list on the left, type monitor in the Search resources, services, and docs textbox at the top of the portal:

Below is an expanded example of the Activity log from my personal subscription:

Now click on an Update event – in this example, I will click the first event. As you can see, I have been experimenting with Azure Private DNS; I had just finished running this example.

Notice there’s a new option named Change history (preview):

A list of changes to this resource is shown, and if I click any of these entries, the following pops up:

As you can see – right there is the diff – this change was me changing a tag named vmRelated from True to False.

More info is available here.

Simple, but effective!

Azure Logic Apps and SQL Injection Vulnerabilities

This code takes a property named name from an untrusted HTTP POST request and uses it as input to a SQL statement. If I use an HTTP POST tool, such as Postman, I can test the code:

In this example, the name Mary is inserted into the database and the code is working as intended. But if the untrusted input is changed to something like this:

Well, I think you know what happens! Assuming the connection to SQL Server has enough privilege to delete the foo table, this code will go ahead and delete it.
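To make that concrete, here’s a rough C# sketch of what building the query text from untrusted input amounts to; the table and column names are invented purely for illustration, and the Logic App designer does the equivalent for you when untrusted dynamic content is dropped straight into the query text of the action.

// Illustrative sketch only - hypothetical table and column names.
string name = "Mary";   // value taken from the untrusted HTTP POST body
string query = "INSERT INTO dbo.Names (Name) VALUES ('" + name + "')";
// Result: INSERT INTO dbo.Names (Name) VALUES ('Mary')   -- works as intended

// But if the attacker posts:   Mary'); DROP TABLE foo; --
// the very same template produces two statements, and the second one drops the table:
//   INSERT INTO dbo.Names (Name) VALUES ('Mary'); DROP TABLE foo; --')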

Remember, one good definition of a secure system is a system that does what it’s supposed to do and nothing else. In this example, the code can delete a table, and that is well beyond anything written in the spec!

Most common SQLi attacks exfiltrate data through SQL SELECT clauses using the classic or 1=1 -- string and its myriad variants.

The Remedy

The remedy is to use another SQL Server action rather than the Execute a SQL Query action. The SQL Server connector has plenty of other, more constrained actions.

You can also use the Execute Stored Procedure action, as that will use SQL parameters under the hood.

If you must accept untrusted data in order to construct a SQL statement, make sure you use SQL parameters as shown below. Note that the advice to use parameterized queries when building SQL statements is as true today as it was well over a decade ago!

Notice how the SQL query has changed to use @name, which is a parameter declared using the formalParameters option. The key is the parameter name and the value is the SQL data type, which should match the table definition.

Now the value of this parameter can be set to reference the name dynamic content that came from the HTTP POST request, as shown in the actual parameter value below.
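If you ever need to do the same thing in code rather than in a Logic App, the principle is identical. Here’s a minimal C# sketch using ADO.NET; the connection string, table and column names are placeholders I made up, not anything from the Logic App above.

using System.Data;
using Microsoft.Data.SqlClient;   // System.Data.SqlClient also works on older projects

static void InsertName(string connectionString, string untrustedName)
{
    using var conn = new SqlConnection(connectionString);

    // The untrusted value travels as a typed parameter, never as part of the
    // query text, so it can only ever be treated as data.
    using var cmd = new SqlCommand("INSERT INTO dbo.Names (Name) VALUES (@name)", conn);

    // Match the SQL type and length to the table definition, just like the
    // formalParameters entry in the Logic App action.
    cmd.Parameters.Add("@name", SqlDbType.NVarChar, 50).Value = untrustedName;

    conn.Open();
    cmd.ExecuteNonQuery();
}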

So there you have it, use SQL parameters! Be careful out there!

I hope that helps, if you have any questions please leave a comment.

A Common Pattern – Using Insecure JavaScript Libraries

I will be up front and state that I am no JavaScript fan. I REALLY liked it back in the day when it was readable! I much prefer TypeScript as it fosters more robust, readable and maintainable code, especially for large projects. But that’s just my opinion.

With all that said, I understand JS’s importance in today’s modern toolset, and I am fine with that.

But there’s an alarming trend I have noticed with many customers who rely on common JS frameworks such as jQuery, Angular, Bootstrap, React and so on. Seems there’s a new framework every week!

The trend is devs using insecure JS libraries. 100% of the customers I have worked with over the last year or so have had at least one insecure JS library.

My old boss, Steve Lipner, coined a term about 20 years ago to describe these kinds of dependencies: ‘giblets’.

A giblet is code you depend on but don’t control, and JS frameworks are prime giblets. If a JS framework you use has a security vulnerability, then you have a security vuln, too! Here’s a little more info on giblets.

So how do you remedy this issue of using insecure JS libraries?

First, you need to understand which libraries you use and keep abreast of any security vulnerabilities therein. Someone in your org needs to keep track of this; it’s part of their job!

Next, you need to make a hard decision. Does your application host its own copy of the libraries and risk falling behind on security fixes? Or does it pull the latest-and-greatest from the source and run the risk of regressions? I cannot answer that question for you; you need to determine the policy and live with the trade-offs of each scenario.

As a side note, there’s a great add-on for Chrome named Retire.js that will flag potentially vulnerable JS libraries from the browser. It produces output like the image below when you navigate to a web page that references a vulnerable library.

I want to wrap this up with a story.

One customer told me they had to use an old and insecure version of jQuery for compatibility with Internet Explorer on Windows XP! After looking at their analytics, they realized that some customers were still using Windows XP, but it was only three customers out of thousands!

So we added landing-page text that said, “We see you’re still using Windows XP! Windows XP is no longer supported by Microsoft, and we will retire support for Windows XP on September 30th 2019. After this date you might encounter issues using our system. Please upgrade before then.”

I think that’s reasonable!

-Michael

Azure Policy – A Love Story

In a previous post I briefly mentioned the policy-enforcement capability built into Azure. If you have not seen Azure Policy, used it, or are simply not aware of it, you really need to know about it!

Before I start, this post is merely to make you aware that Azure Policy exists; it’s not a complete tutorial by any stretch, and it covers policy from my perspective.

So what is Azure Policy? It’s simply a way to make sure specific conditions exist and, if they don’t, either raise an audit event or block the action. There are other options as well, but audit and deny are, by far, the two most important outcomes.

For example, some conditions might be:

  • Require TLS to blob store
  • Secrets in Key Vault must be in hardware
  • Don’t allow VMs to have a public end point
  • Require Threat Detection on Azure SQL Server
  • Require HTTP/2 for Azure Functions

I could keep going, but I am sure you get the picture. The beauty of Azure Policy is that once it’s set, enforcement is performed automatically.

So let’s look at an example. Many customers want to, or need to, control where their data resides. Data sovereignty is a big deal for some customers. For example, a customer might not want data in specific countries for legal or regulatory reasons.

A customer wants their CosmosDB data available only in specific regions. One of the nice things about CosmosDB is that there’s a simple UI that makes it easy to replicate data geographically. Of course, with that ease and power comes responsibility!

The customer wants to put a requirement in place that limits data replication to only East US, East US 2, South Central US and North Central US. That way ‘something’ comes to the rescue if someone fat-fingers the CosmosDB replication UI, or ‘accidentally’ runs a PowerShell script that configures data replication all over the world.

Enter Azure Policy.

I won’t lie, Azure Policy files are a little obtuse. They are declared in JSON. The following file enforces a rule that only allows CosmosDB data in the four regions mentioned above.

{
    "properties": {
        "displayName": "Limit CosmosDB to specific locations",
        "policyRule": {
            "if": {
                "not": {
                    "field": "Microsoft.DocumentDB/databaseAccounts/Locations[*].locationName",
                    "in": [
                        "East US",
                        "East US 2",
                        "South Central US",
                        "North Central US"
                    ]
                }
            },
            "then": {
                "effect": "deny"
            }
        }
    }
}

Here’s how to read it. It all starts at the policyRule block. Before I start, remember that CosmosDB was once known as DocumentDB, which is why the field name uses the Microsoft.DocumentDB resource provider.

“If the database account’s locations are not in East US, East US 2, South Central US or North Central US, then deny the action.”

In other words, if someone attempts, by any means, to set a data replication site other than one of the four, the request is blocked.

If you change the “deny” to “audit”, the action will go through, but an audit event is raised. This is useful if you want to roll out policy in an existing environment and get a feel for how bad things are without stopping the business from running 🙂

Some customers think in terms of preventative and detective controls. “Deny” is preventative and “Audit” is detective.

This just scratches the surface of Azure Policy. If you want to experiment, take a look at some of the sample policies available on GitHub.

One final thought. The title says “A Love Story” and clearly that’s a little extreme, maybe even clickbait, but Azure Policy is one of the most important security compliance features built into Azure.


Security and Compiler Optimizations

A previous post I wrote about password comparison led to much discussion between some colleagues and me, most notably long-time friend and co-author David LeBlanc, and Reid Borsuk on the security engineering team.

Reid mentioned an issue that I had not considered in a long time: the potential for compiler optimizations to introduce security vulnerabilities.

The first time I learned of this issue was during the development of Windows Server 2003. Here is the link to my initial message to bugtraq in Nov 2002 on the topic.

Fast forward almost 17 years.

The issue with my initial code is that the compiler might generate code that short-circuits the string comparison. Here’s the code:

static bool TimeConstantCompare(string hashName, string s1, string s2)
{
      var h1 = HashAlgorithm.Create(hashName).ComputeHash(Encoding.UTF8.GetBytes(s1));
      var h2 = HashAlgorithm.Create(hashName).ComputeHash(Encoding.UTF8.GetBytes(s2));

      int accum = 0;
      for (int i = 0; i < h1.Length; i++)
          accum |= (h1[i] ^ h2[i]);

      return accum == 0;
}

The code walks over the bytes of each hash: bytes 0..N from hash 1 and bytes 0..N from hash 2. Each pair of bytes is XOR’d; if the two bytes differ, the result is non-zero (a property of XOR), and this result is OR’d into an accumulator. This means, and this is the important part, that once the accumulator becomes non-zero it stays non-zero from that point on.

Now here’s the fun part.

The code returns a bool (true or false) based on whether the accumulator is zero, but the compiler might emit optimized code that no longer performs the rest of the loop if at ANY point the accumulator is non-zero.

For example, let’s say the hashes are 32 bytes in size. In theory, the loop will ALWAYS iterate from 0 to 31. However, say that when i==17 the two bytes are 0xDE and 0xAD. This yields 0xDE XOR 0xAD == 0x73, and if the accumulator is 0x00, then 0x00 OR 0x73 == 0x73. The compiler knows that from this point forward (because of how OR works) the accumulator will always be >= 0x73. The last line of the function returns a value based on the accumulator being == 0, and the compiler knows the accumulator can NEVER be 0x00 from this point on, so it can skip the rest of the loop!

I hope that makes sense!

This isn’t a huge issue because we’re comparing a hash and not the actual password. But it’s still valuable to know compilers can do this.

There are two fixes; here’s the updated method:

[MethodImpl(MethodImplOptions.NoInlining | MethodImplOptions.NoOptimization)]        
static int TimeConstantCompare(string hashName, string s1, string s2)
{
     var h1 = HashAlgorithm.Create(hashName).ComputeHash(GetRawBytes(s1));
     var h2 = HashAlgorithm.Create(hashName).ComputeHash(GetRawBytes(s2));

     int accum = 0;
     for (int i = 0; i < h1.Length; i++)
          accum |= (h1[i] ^ h2[i]);

     return accum;
}

First, the method returns an int rather than a bool, so the compiler has no idea what you intend to do with the int. In theory, the compiler could look at how calling code uses the return value to optimize the method, but inter-procedural optimizations are harder and less common than intra-procedural optimizations. It would be very hard for the compiler to ascertain how to optimize this method based on every call site in the code.
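For example, a caller now tests the int itself; the variable names below are just for illustration.

// storedSecret and suppliedSecret are illustrative strings.
// Zero means every byte matched; anything non-zero means the hashes differ.
bool match = TimeConstantCompare("SHA256", storedSecret, suppliedSecret) == 0;
if (!match)
{
    // reject the request
}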

The real fix is to add the NoOptimization directive before the method implementation. This prevents the optimizer from doing little tricks like this. The obvious downside is a potential performance hit. If this is code that’s called constantly, in a tight loop, then you may see degradation. But like everything in life, measure!

Reid provided me with some more in-depth detail about inter- vs intra-procedural optimizations. You probably don’t need to know this, but if you’re nosey, read on.

This statement is true for traditional compilers, but the .NET JIT is prohibited from inter-procedural optimizations only in the case where both NoInlining and NoOptimization are enforced, not just NoOptimization. That’s because the .NET runtime requires method signatures to be carried with the objects they represent. This means every instance of Program (including the static one) carries the pointer to the compiled method (or the JIT stub pre-compile), so every caller needs to call the function in the same way. The caller would then be restricted to calling the “standard” implementation of the method without this cheat.
The way .NET implements the scenario you’re talking about is to force an inline to occur. After inlining occurs, the NoOptimization attribute gets stripped since the method has disappeared. Then optimization is forced on the new parent method with the code already inlined. Preventing both inlining and optimization prevents the specific optimization you refer to in your blog post.

My updated code for string comparison is on GitHub.

On a side note, you may notice the code to convert a string into a series of bytes is different. That’s a topic for another post 🙂

CosmosDB and SQL Injection

Over the last few months I have started digging deep into CosmosDB. I have to say I really like it, and coming from someone who has used SQL Server since v4.2 on OS/2, that’s saying something!

To be honest, I have only used the SQL API, but eventually I will take a look at the Gremlin graph API.

Being a security guy, my first area of interest is SQL Injection – is it possible to build SQL statements that are subject to SQLi when using CosmosDB?

The answer is YES! (but read on)

In fact, querying CosmosDB using the SQL API is as vulnerable to SQLi as, say, SQL Server or Oracle or MySQL.

IMPORTANT This is not a vulnerability in CosmosDB; it’s an issue with any system that uses strings to build programming constructs. A similar example is cross-site scripting (XSS) in web sites using JavaScript (or Java or C# or PHP or Perl or Ruby or Python!) at the back end. Remember, CosmosDB will give you back whatever you ask for (permissions aside), and this may be more than you bargained for if the query is built incorrectly from untrusted data.

Even if you don’t have CosmosDB set up in your Azure subscription (remember, you can get a free subscription), you can test this out.

  • Go to the CosmosDB query test site at https://www.documentdb.com/sql/demo
  • Run the default query and note you only get one document returned.
  • Now add ‘or 1=1’ at the end of the query (without the quotes) and re-run the query. You get 100 documents returned.

The fact that the ‘or 1=1’ clause changed the query and made it return all documents indicates a potential SQLi issue. If any arguments in the query came from an attacker, we’d be in a world of hurt. For those not aware, the ‘or 1=1’ clause at the end of the query is true for every document in the database, so the query returns all documents.
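To see how this plays out in application code, here’s a hypothetical C# snippet that builds the query text by concatenation; the document shape and id value are invented for illustration.

// Vulnerable pattern (illustrative names): the untrusted value is pasted into the query text.
string untrustedId = "AndersenFamily' or '1'='1";      // attacker-controlled input
string queryText = "SELECT * FROM c WHERE c.id = '" + untrustedId + "'";
// queryText is now:   SELECT * FROM c WHERE c.id = 'AndersenFamily' or '1'='1'
// ...which is true for every document, so the whole collection comes back.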

Remedy If you’re programmatically querying CosmosDB and handle untrusted input that is used to build queries, you must use parameterized queries. This is the same guidance that has been around for years for other SQL APIs.

There’s a nice write-up about CosmosDB and parameterized queries here. The classes you’ll use the most are SqlParameter and SqlParameterCollection.
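Here’s a minimal sketch using those classes; it assumes the older DocumentDB-era .NET SDK, and the database, collection and property names are placeholders I’ve invented.

using System.Linq;
using Microsoft.Azure.Documents;          // SqlQuerySpec, SqlParameter, SqlParameterCollection
using Microsoft.Azure.Documents.Client;   // DocumentClient, UriFactory

static IQueryable<dynamic> FindById(DocumentClient client, string untrustedId)
{
    // Don't do this: "SELECT * FROM c WHERE c.id = '" + untrustedId + "'"
    // Instead, the untrusted value travels as a named parameter, so appending
    // "or 1=1" to it can never change the shape of the query.
    var query = new SqlQuerySpec(
        "SELECT * FROM c WHERE c.id = @id",
        new SqlParameterCollection { new SqlParameter("@id", untrustedId) });

    return client.CreateDocumentQuery(
        UriFactory.CreateDocumentCollectionUri("mydb", "mycoll"),
        query);
}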

TL;DR CosmosDB queries are subject to SQLi if you build them incorrectly from untrusted data. Make sure you use parameterized queries, especially in the face of untrusted input.

The Dangers of String-Comparing Passwords

Or, a gentle introduction to timing attacks

One of my favorite things about software security is making people aware of what they didn’t KNOW they didn’t know. That’s not a typo, by the way. And this topic is one such example. Enjoy!

When I see a system that compares passwords, my first response is “Why are you handling passwords?” No modern system should handle passwords directly unless there’s an ancient legacy reason. Even then I’d consider ways to get away from handling the passwords.

With that said, I have seen more than one system over the years that DID need to compare secret data such as passwords.

And every single one of them got it wrong!

I think we have all seen code like the code below at some point.

static bool DoPasswordsMatch(string pwd1, string pwd2)
{
    if (pwd1.Length != pwd2.Length)
        return false;

    for (int i = 0; i < pwd1.Length; i++)
        if (pwd1[i] != pwd2[i])
            return false;

    return true;
}

The code checks if the passwords are the same length, and if they are not, then clearly the passwords are not the same so the function exits.

If the passwords are the same length, the code then does a character-by-character compare of the strings, and if there is a mismatch at any point the function returns.

If all is well and the passwords match, the function returns true.

This code is broken because of timing attacks. In the example code, if an attacker provides a password that is not the same length as the real password, the function exits quickly. In other words, the quick exit tells the attacker that the guess is not the same length as the real password.

It’s this ‘short-circuiting’ that causes the problem.

Note this issue is not restricted to passwords; your code could compare AppID secrets or any other authentication secret.

Plenty has been written on this topic over the years:

https://blog.ircmaxell.com/2014/11/its-all-about-time.html 

https://wiki.php.net/rfc/timing_attack

https://stackoverflow.com/questions/25844354/timing-attack-in-php

https://security.stackexchange.com/questions/111040/should-i-worry-about-remote-timing-attacks-on-string-comparison

https://security.stackexchange.com/questions/97798/feasibility-of-time-based-database-brute-force-attacks-on-websites

https://stackoverflow.com/questions/37633688/constant-time-equals

So how do you fix this? For systems that MUST perform some form of comparison of secret strings, using code like the following will mitigate the risk substantially. It does so by removing all forms of short-circuit logic.

bool TimeConstantCompare(string hashName, string s1, string s2)
{
    var h1 = HashAlgorithm.Create(hashName).ComputeHash(Encoding.UTF8.GetBytes(s1));
    var h2 = HashAlgorithm.Create(hashName).ComputeHash(Encoding.UTF8.GetBytes(s2));
    int accum = 0;

    for (int i = 0; i < h1.Length; i++)
        accum |= (h1[i] ^ h2[i]);

    return accum == 0;
}

The way this code works is that it first hashes the two strings; for example, hashName might be SHA256, and this yields two series of bytes of equal length. Now the fun starts – the code walks over the two series of bytes XOR-ing each pair of bytes; if the bytes are the same the result is zero, and if they are not the result is non-zero. The result is then OR-ed with an accumulator, which means that if ANY bytes differ, the non-zero result is carried forward.

At the end of the function, if the accumulator is zero, then the bytes are the same, which means the two strings match. All in constant time!

I posted some C# code up on GitHub.

tl;dr: Don’t compare passwords. If you MUST compare passwords, don’t do a naive string compare. Use a time-constant compare instead.