Application Security in .NET Succinctly®
by Stan Drapkin


CHAPTER 11

Web Security

ASP.NET security

Security compromises of web applications are increasingly common these days. Web applications present an easily accessible battleground: various security primitives are combined into more complex security protocols, and those protocols are tested in action through the wide exposure the web enables. This would be great if it wasn’t for one little thing: security properties cannot be tested in a functional way (e.g., “Did we include the security feature? Yes? Checkmark! Deploy to production!”). What typically gets tested in the end is the insecurity, when it is already too late (i.e., the production environment is compromised).

The .NET web development tool of choice is ASP.NET, which is a collective term representing numerous frameworks such as Web Forms, AJAX, Web Pages, MVC, Web API, SignalR, WCF, and SOAP Web Services (ASMX), each of which is going through its own evolution and maturity cycles. There is a great deal of complexity in the ASP.NET portfolio of technologies, and complexity is the eternal nemesis of security.

Microsoft has strived to address this complexity with varying degrees of success by providing “ready-to-use” security components for common web-application-related requirements, such as session management, user authentication (identity management), user authorization, and credential storage. Many of these ASP.NET-provided security components are quite dated because they were designed for ASP.NET 1.x (2002–2003), or ASP.NET 2.0 (2005). The web has moved on, however. There are practically-exploitable vulnerabilities in the modern web that well-intentioned ASP.NET-provided security components were never designed to address, or address insufficiently. The road to the pit of failure is paved with good intentions. Microsoft has made these provided security components as easy to use as possible (some of them are enabled by default), and it is very easy for developers to get instant functional gratification by using them, but it is also what makes these components dangerous.

Microsoft has made commendable effort with ASP.NET 4.0, and then again with ASP.NET 4.5 to improve these widely used security components and incorporate additional protection measures without compromising backwards compatibility (the user base is enormous). We feel, however, that many ASP.NET-provided security components are way past their expiration date, and should be replaced with new approaches which are actually designed for the threats and vulnerabilities of the modern web. On the other hand, we want to be able to reuse as much of the existing tried-and-tested security functionality as possible, and thus would only consider custom implementations as a last resort.

Microsoft is fully aware of the need to modernize ASP.NET and bring it up to speed with the evolution of the modern web. ASP.NET 5 was a major redesign of the ASP.NET framework to free the internal architecture from all things “legacy” that were holding ASP.NET back. This ambitious effort was apparently radical enough to kill the “ASP.NET 5” name and rebrand the project as “ASP.NET Core”. While the ASP.NET 4.x “king” is officially not a king anymore, and the new king is not mature yet (ASP.NET Core 2.0 is still missing crucial cryptographic capabilities), countless ASP.NET 4.x (and 2.x) line-of-business applications remain. These “legacy” applications will be with us for a very long time, and it is important to understand and correctly apply the security concepts behind them.

Session state

The ASP.NET session state component enables server-side storage of any serializable web-session-specific data, and is enabled by default for all ASP.NET applications according to MSDN. The client side keeps track of the associated server-side session state via a unique session state identifier called SessionID. SessionID is supposed to be a Base32-encoded, case-insensitive, CSP-RNG-generated 120-bit value (15 random bytes encoded as 24 five-bit characters; see the “Base32” section for more info).

In December of 2008 (three years after ASP.NET 2.0), a security researcher investigating the claimed 120-bit entropy of SessionID wrote a paper about it. The paper showed that the internal algorithm indeed generates 120 bits of entropy, but the Base32 encoding implementation has a flaw that reduces the encoded entropy from 120 bits to 96 bits in the worst case, with the expected average entropy actually hovering around 108 bits. The actual SessionID key space was therefore about 2^(120−108) = 2^12 = 4096 times smaller than claimed. The reduced key space is still large enough to be out of reach of any practical exploits, but it goes to show that Microsoft has had its share of implementation mistakes. The bug was in right-shifting random integers without realizing that for negative integers, the bits shifted in from the left will be ones and not zeroes.

Neither the flaw nor the security paper was publicly acknowledged by Microsoft (at least we can’t find a public record of it on the web). The latest .NET release in 2008 was .NET 3.5, which ran on the 2.0 runtime. .NET 4.0 was released in April 2010 and introduced a new 4.0 runtime. The .NET GAC now has two System.Web.dll assemblies in it: version 2.0 and version 4.0. Here are the version 2.0 and version 4.0 implementations of the Encode() method:

Code Listing 18

// System.Web.SessionState.SessionId class in System.Web.dll version 2.0
private static string Encode(byte[] buffer) {
    char[] array = new char[24];
    int num = 0;
    for (int i = 0; i < 15; i += 5) {
        int num2 = (int)buffer[i] | (int)buffer[i + 1] << 8 |
                   (int)buffer[i + 2] << 16 | (int)buffer[i + 3] << 24;
        int num3 = num2 & 31;
        array[num++] = SessionId.s_encoding[num3];
        num3 = (num2 >> 5 & 31);
        array[num++] = SessionId.s_encoding[num3];
        num3 = (num2 >> 10 & 31);
        array[num++] = SessionId.s_encoding[num3];
        num3 = (num2 >> 15 & 31);
        array[num++] = SessionId.s_encoding[num3];
        num3 = (num2 >> 20 & 31);
        array[num++] = SessionId.s_encoding[num3];
        num3 = (num2 >> 25 & 31);
        array[num++] = SessionId.s_encoding[num3];
        // Flawed line: assumes ">> 30" shifts in zeroes,
        // but the shift sign-extends when num2 is negative.
        num2 = (num2 >> 30 | (int)buffer[i + 4] << 2);
        num3 = (num2 & 31);
        array[num++] = SessionId.s_encoding[num3];
        num3 = (num2 >> 5 & 31);
        array[num++] = SessionId.s_encoding[num3];
    }
    return new string(array);
}

Code Listing 19

// System.Web.SessionState.SessionId class in System.Web.dll version 4.0
private static string Encode(byte[] buffer) {
    char[] array = new char[24];
    int num = 0;
    for (int i = 0; i < 15; i += 5) {
        int num2 = (int)buffer[i] | (int)buffer[i + 1] << 8 |
                   (int)buffer[i + 2] << 16 | (int)buffer[i + 3] << 24;
        int num3 = num2 & 31;
        array[num++] = SessionId.s_encoding[num3];
        num3 = (num2 >> 5 & 31);
        array[num++] = SessionId.s_encoding[num3];
        num3 = (num2 >> 10 & 31);
        array[num++] = SessionId.s_encoding[num3];
        num3 = (num2 >> 15 & 31);
        array[num++] = SessionId.s_encoding[num3];
        num3 = (num2 >> 20 & 31);
        array[num++] = SessionId.s_encoding[num3];
        num3 = (num2 >> 25 & 31);
        array[num++] = SessionId.s_encoding[num3];
        // Fixed line: "& 3" masks off the sign-extended bits.
        num2 = (num2 >> 30 & 3 | (int)buffer[i + 4] << 2);
        num3 = (num2 & 31);
        array[num++] = SessionId.s_encoding[num3];
        num3 = (num2 >> 5 & 31);
        array[num++] = SessionId.s_encoding[num3];
    }
    return new string(array);
}

The Microsoft fix in 4.0 is the added “& 3” mask. The buggy line of code is supposed to use the remaining two random bits (the “>> 30” part) and OR them with the intermediate result. The developer assumed that the 30 bits incoming from the left would be zeroes, but num2 is an int, and the right shift sign-extends negative values. The “& 3” fix zeroes out these incoming 30 bits regardless of their value. This SessionID fix in .NET 4.0 was also not publicly mentioned by Microsoft, as far as we know. One easy way of telling that your SessionID-using ASP.NET application is still running on the 2.0 runtime is to observe the last characters of SessionIDs (2.0-based SessionIDs often end with “45” or “55”). The proper behavior of the right-shift operator is documented in MSDN.
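The sign-extension pitfall is easy to reproduce. Here is a minimal JavaScript sketch (the `>>` operator in JavaScript, like C#’s `>>` on int, performs an arithmetic, sign-extending right shift on 32-bit values; the function names are ours):

```javascript
// Demonstration of the sign-extension pitfall behind the SessionID bug.

function buggyTopBits(num2) {
  // Mirrors the 2.0 code: assumes the 30 incoming upper bits are zeroes.
  return num2 >> 30;
}

function fixedTopBits(num2) {
  // Mirrors the 4.0 fix: "& 3" keeps only the two intended random bits.
  return (num2 >> 30) & 3;
}

var negative = -1; // a random int that happens to have its sign bit set
console.log(buggyTopBits(negative)); // -1: thirty one-bits leaked in from the left
console.log(fixedTopBits(negative)); //  3: only the two intended bits remain
```

For non-negative inputs both functions agree, which is why the flaw only reduces entropy for the roughly half of random integers that are negative.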

SessionIDs are sent to the client side via a cookie (default behavior) or as part of the URL, with all the nasty URL rewriting handled by ASP.NET. The default cookie-based mode creates a cookie, which is marked HttpOnly (good), but not Secure (bad). There is a setting to enable secure session cookies, but it is not on by default. You might still feel at ease because your ASP.NET application uses TLS for all sensitive forms and pages anyway. Another aspect of SessionIDs is that if the client side fails to provide a matching SessionID value to the server side (either SessionID is not provided at all or the provided SessionID does not match), ASP.NET always generates a brand new server-side session state and sends its SessionID to the client side as a cookie, which will override any pre-existing client-side SessionID cookie.

To make it worse, if the client-provided SessionID is valid but matches to an expired or abandoned server-side session state, the newly generated server-side state will reuse the same client-provided SessionID (again, there is a setting to stop reuse, but it is not on by default). This SessionID reuse effectively enables the server side to switch to a different session state without the client side ever knowing about it, since the client keeps using the same SessionID value. To make this even worse, session state design will accept without question any client-side-provided SessionID string as long as it has a valid SessionID encoding. Effectively, the client gets to decide and fix the exact value of its own SessionID. Many automated vulnerability testing scans get tripped by ASP.NET SessionID cookie handling because it fails session fixation tests. Session fixation attacks use a different meaning of “session,” however. ASP.NET session state is a non-authenticated session, while session fixation attacks focus on authenticated sessions.

Imagine the following scenario:

Your bank runs TLS for all ASP.NET forms and pages, and uses a cookie-based session state. You feel reasonably secure doing online banking. One of the thousands or millions of URLs on your bank’s website (such as a static CSS file) happens to be HTTP instead of HTTPS. You might not even get a “mixed-content” browser warning if that HTTP resource is not loaded up front, but instead loads on click or in a new browser window. The SessionID cookie traveled unsecured along with that HTTP request. Since you were banking over Wi-Fi in your local coffee shop, your bank account just got hijacked without you even knowing.

One easy way to defend against this might be to enforce TLS at the web-server level, no exceptions. However, most sites want to be accessible via HTTP (at least the main page), which then gets redirected to a safe HTTPS site. The only way to properly enforce TLS with session state in this scenario is to have the TLS-secured application on a separate sub-domain (such as secure.myapp.com) and restrict the SessionID cookies to that sub-domain. However, many web applications do not want to inconvenience their users with a separate secure sub-domain.
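The cookie hardening discussed above can be forced in web.config via the standard httpCookies element; a minimal sketch (requireSSL marks cookies Secure, so an accidental HTTP-only deployment fails loudly instead of silently leaking cookies):

```xml
<system.web>
  <!-- Mark all cookies HttpOnly and Secure (requires a TLS deployment). -->
  <httpCookies httpOnlyCookies="true" requireSSL="true" />
</system.web>
```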

The root of the problem is that session state was never designed to be used for authentication, yet it is very commonly used to implement ASP.NET authentication. Developer ignorance, poor documentation, and ease of misuse all contribute to this dangerous abuse.

Tip: ASP.NET session state should never be used for authentication.

What should session state be used for? We are not sure. Someone suggested using it for sticky “non-sensitive” data, like users’ color preferences. This makes no sense to us because the “sensitive” part is not whether you prefer “blue” to “green,” but your implicit expectation of ownership over that decision, and your implicit assumption that nobody should be able to hijack that choice. It is the trust in sanctity of client-server interaction, and not merely the data itself, that is at stake. There is no such thing as “non-sensitive” data in a web session—every bit exchanged between client side and server side must be confidential and authentic.

Microsoft offers the following guidance on session state: “The session-state feature is enabled by default. While the default configuration settings are set to the most secure values, you should disable session state if it is not required for your application.”

We can only build on this by further recommending that you disable session state right away, because that is how you know or will find out that session state is not required for your application. Leaving session state enabled in your applications—even if you are fairly sure nobody is using it today—is asking for trouble. If you leave it enabled, somebody is likely to abuse it.
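Disabling session state is a small web.config change (both elements below are standard ASP.NET configuration):

```xml
<system.web>
  <!-- Turn the session state feature off application-wide. -->
  <sessionState mode="Off" />
  <!-- And stop pages from requesting it. -->
  <pages enableSessionState="false" />
</system.web>
```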

Just to kick the dead horse one more time, you should not feel safe using session state—even when you are 100% HTTPS and HTTP is not even in your vocabulary. A valid User-A can still set their own session state values and then fix User-B’s session to use the same valid User-A session, thus causing User-B to use User-A’s session without even knowing—HTTPS or not. If you try to fix it by making it obvious to User-B that they are actually using User-A’s session, you are using session state for authentication.

You can gather another bit of wisdom from Microsoft’s recommendation: If the default session state is already at its “most secure” configuration, what does that imply about the security of other session state modes, such as cookieless sessions? We leave this as an exercise to the reader.

Availability is technically also a security concern, and session state has issues here as well. Session state handlers are blocking by default because they require both read and write access to the session state storage, which is a mutually exclusive operation that requires locking. There is a way to configure ASP.NET handlers to only require read access, which makes them non-blocking and concurrent (at the expense of writability), but it is not the default. It is not uncommon to see an ASP.NET application with multiple AJAX-enabled independent web sections loading serially, one-by-one, even though they were clearly designed to be loaded concurrently. In some cases, the last-loading request will simply HTTP-time-out while waiting in this artificial session state handler queue.
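If a page genuinely needs session data but never writes it, the standard EnableSessionState page directive can request the non-exclusive read lock instead of the default exclusive one:

```aspx
<%-- Read-only session access: concurrent requests are no longer serialized. --%>
<%@ Page Language="C#" EnableSessionState="ReadOnly" %>
```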

If you have a user-specific, server-side state to maintain, use in-memory cache (single-server, non-persistent), a distributed cache (such as Redis), or simply leverage your main database with a simple optional caching layer. There are also many fast key-value stores (which is what session state is) available if you use cloud hosting. You could also consider storing state on the client side, EtM-encrypted or not, in cookies or using web storage.

We do live in the real world, full of legacy line-of-business ASP.NET applications that need to be supported and maintained. However, we trust that if you have any power over the design of new ASP.NET solutions, you will use it wisely.

CSRF

Cross-site request forgery (CSRF) attacks are number 5 and number 8 on the OWASP “Top 10 Most Critical Web Application Security Risks” list for 2010 and 2013, respectively. Most development frameworks, including .NET, implement CSRF defenses that follow OWASP recommendations (or, perhaps, OWASP CSRF defense recommendations follow what most development frameworks do currently).

OWASP says the following:

  • Preventing CSRF usually requires the inclusion of an unpredictable token in each HTTP request. Such tokens should, at a minimum, be unique per user session.
  • The preferred option is to include the unique token in a hidden field. This causes the value to be sent in the body of the HTTP request, avoiding its inclusion in the URL, which is subject to exposure.

Do you see a fundamental disconnect between these two statements? There is nothing wrong with the first statement—it clearly articulates a valid HTTP-level solution to CSRF prevention. The second statement, however, suddenly talks about tokens in “hidden fields”—an HTML feature. Where did HTML come from all of a sudden?

The OSI model conceptually represents many protocol layers that make the web work. You will find HTTP in that model as the topmost “application layer” protocol. HTML is not in that model because it is a “payload” protocol, just like JPEG, PNG, XML, JSON, CSS, or any other payload that can be transferred over HTTP.

Implementing CSRF defenses in HTML is like plugging a leak by building a dam in the middle of the ocean. CSRF is a variant of the confused deputy problem, which has two ingredients: a trust-based authority, and confusion that leads to misuse of that authority. Authority on the web is typically driven by authentication of a client to the server, which allows the server to infer authority from the client’s identity. ASP.NET supports two main authentication modes: Windows authentication and forms authentication. We have not discussed either of these yet, but it suffices to know that both of these authentication modes establish authentication at the HTTP level, meaning they work regardless of what the HTTP payload might be. In a CSRF attack, the confused deputy (your authenticated browser session) is tricked into performing a rogue HTTP request that the authenticated client (you) did not authorize. Did you notice any mention of HTML in the previous sentence?

Defending against CSRF in HTML is not wrong because it is somehow insecure; it is wrong because it is misguided, a defense in the wrong place. It also, coincidentally, does not help with JSON-over-AJAX payloads or any other non-HTML HTTP requests, which require additional workarounds to inject the CSRF token into some equivalent of HTML hidden fields for each payload protocol. You could add extra fields to XML or JSON, but good luck with binary payloads. CSRF is fundamentally an HTTP-level problem because that is the level at which authority is established on the web, and it should ideally be mitigated at the same HTTP level. Practical solutions are not ideal, however.

Most CSRF defense approaches revolve around unpredictable per-session tokens (the first OWASP statement). These tokens need to be verified for validity somehow, and this validation can be implemented statefully or statelessly. In the stateful approach, the server keeps state on all valid, previously issued, non-expired CSRF tokens (the same model is used by ASP.NET session state). The stateful approach does not scale well because there can be an unlimited number of HTTP interactions during a single session, and the amount of state the server side would have to keep can quickly grow beyond practical limits. The stateless approach keeps on the client side everything the server side needs to validate CSRF tokens, which requires no server-side state and scales well. The downside of the stateless approach is that more CSRF-related information needs to be sent to the server side on each request, but this extra CSRF-token-related bloat is usually reasonable. Let’s explore the stateless approach.

The first thing we need is some kind of client-side storage for stateless CSRF tokens, and HTTP cookies are ideal. Other client-side storage mechanisms like web storage are HTML concepts—not HTTP—and thus are out. To be more precise, the HTTP mechanism we want to leverage for storage is HTTP headers, with cookies being a well-supported protocol for storing custom data in HTTP headers. HTTP cookies have the added benefit (for our purposes) of automatically sending what they store to the server side on each HTTP request.

Next, we need a way to ensure the authenticity of client-side-stored tokens to prevent attackers from making their own valid tokens. Appending a keyed HMAC of the token (computed with a server-side secret key) will ensure token authenticity. However, HMAC will not help with replayed tokens, or with attackers who are legitimate users and use their own authentic tokens to mount CSRF attacks. This uncovers two important problems that need to be resolved—replay prevention and token identity—which we ideally want to resolve at the HTTP level. There is a good, tested protocol we can leverage for replay prevention at the HTTP level: TLS. We could try to avoid tying CSRF tokens to identities by giving tokens a short time to live (for example, by including an HMACed absolute expiration date), which makes it more difficult to exploit legitimately obtained tokens for CSRF attacks. This raises the difficulty bar but does not solve the problem, however, since a determined attacker would simply obtain a fresh token just in time. Tying CSRF tokens to client identity, on the other hand, would definitively resolve the problem of abusing legitimately issued tokens. Windows authentication does not allow attaching additional (dynamic) data to the identity, but forms authentication does.

The stateless CSRF token approach also requires a side-channel mechanism for token submission, which cannot be exploited by an attacker to trick the client (browser) to generate a valid HTTP request. The side-channel role is played by HTML hidden fields in the OWASP 2nd statement, while the primary channel is HTTP-cookie-based (stateless mode). However, we do not want to use HTML form fields unless we really have to. We need some HTTP-level “operation” that a legitimate browser could do, but an attacker could not.

One idea is to rely on the assumption that CSRF attackers cannot inject or modify HTTP headers on the HTTP requests of a confused browser, while legitimate self-originating requests can set custom headers. The problem with this assumption is that some browser plugins do not play by the rules and are able to bypass this restriction.

Another idea is to leverage the same-origin policy (SOP) browser security mechanism, which is getting harder to bypass in modern web browsers. An unpredictable CSRF token could be stored client-side in a tamper-proof, encrypted authentication cookie (which is what forms authentication does), marked HttpOnly (inaccessible for reading). The same plaintext CSRF token value could also be stored in a second cookie not marked HttpOnly, which allows SOP-approved JavaScript to read it. A SOP-approved HTTP request in need of CSRF protection would first read the CSRF token value from the plaintext cookie and either inject that value into a custom HTTP header on the outgoing request, or, if header injection is not available, inject that value into the payload (a concession, since it is not a pure HTTP-only approach anymore). The server side would compare the CSRF token from the decrypted and validated authentication cookie container with the one provided on a side channel (HTTP header or payload-embedded), and only authorize the request upon a successful match.

The benefit of such an approach is that even if the targeted browser has a plugin that does not play by the rules and can inject or modify HTTP headers on outgoing requests, that plugin would not know what to use for the correct token value, since access to that value is additionally protected by the SOP. It is possible that a browser plugin could bypass the SOP, but then the browser is unsafe to begin with. SOP applies to DOM, which includes cookies (via document.cookie) as well as any HTML element data, such as hidden fields (via document.body). SOP also applies to XMLHttpRequest.

This approach can further be strengthened by adding an expiration to the mix. Forms authentication tokens, for example, often have a sliding expiration, which causes a new valid authentication token to be sent to the client when the old one is approaching expiration. If this authentication token re-issue mechanism could be plugged into, we could also issue a new CSRF token at the same time. This would decouple the web session, HTML page, and CSRF token lifecycles and enable shorter CSRF token lifespans.

The cookie-based, just-in-time CSRF token injection has several benefits over the more common approach of HTML-hidden-field token storage. One benefit is that the CSRF lifespan is no longer coupled to the HTML lifespan, which allows for shorter-lived, more secure tokens. Another benefit is that injection can be automated and made transparent to developers, who no longer have to remember to use special commands to generate a separate CSRF token for every form and XMLHttpRequest. Authentication and CSRF cookies automatically get issued, re-issued, rotated, expired, etc. with zero friction for developers—it just works. The payload manipulation is only used when custom HTTP headers cannot be set; for example, when submitting HTML forms.

This is a recipe for success—we just need the right ingredients to make that recipe work. Specifically, we need proper mechanics to access the non-HttpOnly CSRF cookie value and inject it into HTTP header or form submission just-in-time, right before an outgoing HTTP request is ready to go out. The just-in-time requirement is due to an expected asynchronous rotation of authentication and CSRF token cookies. This calls for some JavaScript code. HTTP calls must be triggered from somewhere, and browsers trigger them from either HTML or JavaScript (at least the server-state-altering calls). We need to intercept these HTTP calls right after they are triggered, but before they are made, so we employ some JavaScript to make it happen. We are using jQuery, but it can be easily ported to raw JS or to other DOM-manipulating JS libraries.

Code Listing 20

function getCsrfToken() {
  // Use your favorite cookie-reading logic to return the CSRF cookie value.
  // A minimal implementation (the cookie name "__csrf" is illustrative):
  var match = document.cookie.match(/(?:^|;\s*)__csrf=([^;]*)/);
  return match ? decodeURIComponent(match[1]) : null;
}


/**************************************************************************
Set up a prefilter for $.ajax() calls to append a CSRF header with the CSRF
token value, which is read from the CSRF cookie prior to each $.ajax() call.
The prefilter can further be overloaded via the "beforeSend" function.
The call sequence is: (1) $.ajaxPrefilter; (2) beforeSend; (3) $.ajax() call.
**************************************************************************/
$.ajaxPrefilter(function (options, originalOptions, jqXHR) {
  var csrfToken = getCsrfToken();
  if (csrfToken) {
    jqXHR.setRequestHeader("X-CSRF", csrfToken);
  }
});

/**************************************************************************
There is no way to add custom HTTP headers to non-AJAX requests, such as
<form> POSTs. Instead, we auto-inject a hidden input element with the CSRF
token into every form on submit.
**************************************************************************/
var csrfName = "__csrf";

var submitHandler = function (e, form) {
  form = form || this;
  if (form.method === "post") {
    var csrfToken = getCsrfToken(), $csrfInputElement;
    if (csrfToken) {
      $csrfInputElement = $("#" + csrfName, form);
      if (!$csrfInputElement.length) {
        var csrfHtml = '<input type="hidden" name="' + csrfName +
                       '" id="' + csrfName + '" value="' + csrfToken + '"/>';
        $(form).append(csrfHtml);
      }
      else $csrfInputElement.val(csrfToken);
    }
  }
};

$(document).on('submit', 'form', submitHandler);

/**************************************************************************
ASP.NET form submission is done via the globally registered "__doPostBack"
function. We need to intercept it and add CSRF-handling logic.
**************************************************************************/
var beforePostBack = function (beforeFunc) {
  var old__doPostBack = this["__doPostBack"];
  if (old__doPostBack) {
    this["__doPostBack"] = function (target, argument) {
      beforeFunc(target, argument);
      return old__doPostBack(target, argument);
    };
  }
};

$(function () {
  beforePostBack(function (target, argument) {
    submitHandler(null, $("[name='" + target + "']").closest("form")[0]);
    return true;
  });
});

We are not JavaScript masters, but this code is not too complicated, is generic, and automates all client-side CSRF-handling duties. We have not covered the mechanics of server-side CSRF token handling yet. Since there are many benefits to tying CSRF tokens to authentication, we should first review forms authentication in depth to properly set the stage.

Forms authentication

ASP.NET forms authentication (FA) is Microsoft’s stateless implementation of an HTTP+HTML authentication technique. While this technique is wildly popular and widespread, it is not standardized and is entirely implementation dependent. The client-side credentials in FA (for example, a username and password combination) are typically sent to the server side “in-band” as part of the HTML payload (for example, an HTML form submission). The server side generates an authentication ticket upon successful credential validation, and usually sends that ticket “out-of-band” to the client side, typically within an HTTP cookie header. Subsequent client-side requests provide this authentication ticket out-of-band as well. The server side is then able to verify the claimed identity, as well as the legitimacy of the identity claim, without any additional client-side credential submission.

ASP.NET FA can operate in cookie-based or cookieless mode, with the default being a runtime choice made by ASP.NET based on the claimed HTTP capabilities of the client-side device. Cookieless FA is full of security vulnerabilities, so it is best to proactively turn it off by forcing a cookie-based FA mode at all times. FA requires TLS for security, so you must ensure that all FA-based solutions are 100 percent delivered over TLS. Since you are unlikely to actually do it just because you’ve read the previous sentence, you need proper motivation for yourself and your team to enforce TLS. That motivation is the RequireSSL attribute, which you should set to true, since it is false by default. RequireSSL=true causes FA cookies to be marked Secure, which prevents compliant client-side agents (such as modern browsers) from sending Secure-marked cookies over non-TLS HTTP connections. Thus, if you accidentally forget to deploy TLS, FA will stop working. Deploying TLS properly means “TLS only, HTTP disabled,” not “TLS by default, HTTP works just as well.” However, if you manage to screw it up and HTTP ends up working just as well, RequireSSL=true might prevent an exploit. All FA cookies are marked as HttpOnly, which makes them inaccessible to scripts in compliant (modern) browser agents.
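These recommendations map directly onto the standard &lt;forms&gt; configuration element in web.config (loginUrl is illustrative):

```xml
<system.web>
  <authentication mode="Forms">
    <!-- Force cookie-based mode and refuse to work without TLS. -->
    <forms loginUrl="~/Account/Login"
           cookieless="UseCookies"
           requireSSL="true"
           timeout="30"
           slidingExpiration="true" />
  </authentication>
</system.web>
```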

The default FA cookie timeout T is 30 minutes, and sliding expiration is enabled by default. Every cookie-wrapped FA ticket will expire in T minutes, while any valid client-side-submitted FA ticket with less than half-life time to live (TTL) will trigger the server-side ASP.NET FA module to automatically issue a brand new cookie-wrapped FA ticket. This renewed FA ticket is sent to the client side with the HTTP response, and buys T more minutes of authenticated access. If the ticket renewal is triggered right around the half-life TTL, the effective lifetime extension is only ~T/2 because the old FA ticket would have lived for another ~T/2 minutes, while the newly issued replacement ticket is good for T minutes from issue time. If the ticket renewal is triggered right before expiration, the effective lifetime extension is ~T.

The FA sign-out mechanism is constrained by the stateless nature of the ASP.NET FA implementation. The server-side FA module implements sign-out by sending a void, already-expired cookie to the client side, which causes compliant browsers to “forget” the valid FA ticket they have been holding. A malicious client does not have to honor the “please-forget-your-valid-ticket” server-side request, however. Such a client can continue to use the previously obtained valid FA ticket, which the server will keep renewing forever, no questions asked. The only way for the server to stop accepting and automatically renewing valid FA tickets is to make them invalid, typically via changing the global server-side secret key (machineKey) used to EtM all tickets. This would invalidate all previously-issued FA tickets, meaning all currently authenticated users are kicked out.

FA tickets also do not differentiate between issue date and creation date, since only issue date is recorded within the ticket. The IssueDate tracked by the FA ticket is the time at which the ticket was created, regardless of whether this was the very first ticket creation event, or a subsequent ticket renewal event. In other words, issue date is always equal to expiration date (Expiration) minus T minutes. The fact that a particular FA ticket is being renewed for the umpteenth time is not captured anywhere. This makes it impossible to distinguish legitimate FA tickets that were originally created minutes or hours ago from exploited, kept-alive tickets that were created days, months, or years ago. It would be nice for an improved FA implementation to also capture the creation date, which we define as the moment of the very first, original, non-renewed ticket creation (where “renewal count” = 0), but the ASP.NET FA implementation does not do that.

An explicit renewal count integer would have been helpful as well, since it cannot be definitively inferred from the difference between the creation date and expiration date. Finally, a unique FA session ID would have been helpful to identify each unique FA session. The FA session ID could have been a GUID, since it is protected by the ticket container, or it could have been cryptographically generated (similar to session state SessionID) for unpredictability in unprotected-use scenarios.

Having a unique FA session ID would allow for an optional server-side state. For example, each server in a huge server farm could locally cache some information related to a specific FA session ID (just like the session state mechanism), and go to a higher-latency data store only on cache misses. One useful scenario is going to a higher-latency data store to check whether the provided FA ticket corresponds to a valid identity (such as a non-disabled user account) and minimizing per-request data store trips with local caching. Another useful FA session ID scenario is transparent (no user involvement) re-validation of client-side credentials against a higher-latency data store, but only on cache misses. We do not want to use any user-specific unique ID (such as user ID) for the FA session ID because we should ideally support multiple parallel FA sessions per user account (for example, concurrent desktop and mobile device sessions, or simply multiple browsers signed in in parallel).

The built-in ASP.NET 4.5 FA ticket stores the following data:

Table 8: ASP.NET 4.5 forms authentication ticket internals

  • Serialized version (1 byte): The version of internal ticket serialization, probably intended for future serialization protocol agility. Currently set to 1. Internal.
  • Ticket version (1 byte): Developer-provided arbitrary ticket version, which defaults to 2. The API exposes it as an int, which internally gets converted into a byte. If you try to set it to 256 and round-trip it, you will get 0 due to silent byte conversion. MSDN is silent about it, but MS reference source code comments say the following: “Technically it should be stored as a 32-bit integer instead of just a byte, but we have historically been storing it as just a single byte forever and nobody has complained.” Which does not make it right, of course.
  • Issue Date (8 bytes): Date of ticket issue (original or renewal, as discussed previously).
  • Spacer (1 byte): Used to break compatibility with pre-4.5 ASP.NET tickets. Internal.
  • Expiration Date (8 bytes)
  • IsPersistent (1 byte)
  • Name (variable-length): User-provided .NET string, char-serialized.
  • User Data (variable-length): User-provided .NET string, char-serialized. Defaults to empty string.
  • Cookie Path (variable-length): User-provided .NET string, char-serialized. Defaults to .config value.
  • Spacer (1 byte): Another spacer to break compatibility with pre-4.5 ASP.NET tickets. Internal.

Pre-ASP.NET 4.5 FA ticket implementations had a different serialization format and a different authenticated symmetric encryption approach, which suffered in 2011 from a critical security vulnerability that compromised every version of .NET FA and required an out-of-band patch. This critical vulnerability had been in production for more than seven years. ASP.NET 4.5 uses a new serialization format (not really material from a security perspective) and a new EtM encryption approach, which is finally done properly (similar to the EtM approach we described). That alone is a compelling enough reason to run ASP.NET 4.5.

ASP.NET FA tickets have been historically encoded as Base16, which does not change with ASP.NET 4.5. Microsoft continues to support the cookieless FA mode, which is the likely reason for case-insensitive Base16 ticket encoding. Legacy compatibility is a bad thing to be stuck with. Unfortunately, all ASP.NET cookie-based FA deployments have to pay the price of Base16 bloat. It would have been nice if ASP.NET were intelligent enough to use Base16 ticket encoding for cookieless FA tickets, and Base64 encoding for cookie-based FA tickets. Switching from Base16 to Base64 would reduce encoded FA ticket size by 33 percent and increase the storage density by 50 percent. Given a maximum 4-kilobyte HTTP cookie size, Base16 can store at most 2 kilobytes, while Base64 can store at most 3 kilobytes—that’s one extra kilobyte of storage (for example, a serialized 512-byte .NET string).
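The size arithmetic is easy to verify with the framework's own encoders; a quick standalone sketch:

```csharp
using System;

public static class EncodingBloatDemo
{
    public static void Main()
    {
        byte[] ticket = new byte[512]; // a hypothetical 512-byte serialized ticket

        // Base16 (hex): 2 characters per byte.
        string base16 = BitConverter.ToString(ticket).Replace("-", "");

        // Base64: 4 characters per 3 bytes (rounded up to a 4-character group).
        string base64 = Convert.ToBase64String(ticket);

        Console.WriteLine(base16.Length); // 1024
        Console.WriteLine(base64.Length); // 684 -- exactly one third smaller
    }
}
```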

Membership

The identity-carrying part of the FA ticket is the mandatory Name string property, which should be set to a unique client identity. This unique identity should be set as an outcome of some black-box process of credential verification, which is out of scope for FA. Such credential verification process typically involves three phases:

  • Registration (credential creation and storage)
  • Submission (credential provision for the purposes of authentication)
  • Verification (validation of submitted credentials)

ASP.NET implements these credential management phases in a separate “membership” API, which can have multiple “providers” (implementations). ASP.NET ships with SQL Server and Active Directory membership providers. There is also an abstract ExtendedMembershipProvider class with SimpleMembershipProvider implementation in WebMatrix.WebData.dll, and a separate NuGet-only Microsoft package offering “universal” providers, where the “universal” part seems to refer to the fact that tightly coupled SQL Server persistence is exchanged for tightly coupled Entity Framework-based persistence, which supports a few additional storage backends.

From a security perspective, SimpleMembershipProvider is the only Microsoft-provided membership implementation that makes an attempt at doing the right thing, such as using computationally slow password-based key derivation. From a design and usability perspective, membership is an ancient API that might have been useful in 2005, but is no longer adequate when evaluated against modern, secure engineering design principles, because it suffers from insufficient security, multiple SRP/SoC violations, and tight persistence coupling. While Microsoft has tried to address some issues with SimpleMembershipProvider, it is a temporary fix at best. There is simply no good way to fix membership inadequacies other than to use a more modern credential management API. There is also no consistency of credential management among various ASP.NET technologies, such as Web Forms, MVC, Web Pages, and Web API.

Insider threats

An important security goal to aim for is defense against insider threats—attackers or current and former employees who might be able to access your precious data from within, and who should be assumed to have full insider knowledge and complete server-side read access (including full knowledge of all secret keys, complete DB read access, etc.). This is a tough security goal to reach, but it is necessary to have at least some defense mechanisms in place to thwart insider threats as part of a defense-in-depth approach to security. Membership APIs and—dare we say it—all other ASP.NET security mechanisms were never designed to protect against insider threats.

“There are only two types of companies: those that have been hacked, and those that will be. Even that is merging into one category: those that have been hacked and will be again.”

FBI Director Robert Mueller (March 1st, 2012)

Security experts tend to have an even gloomier perspective, that there are only two types of companies: those that have been hacked, and those that do not know they have been hacked. It is not rational to be concerned about an advanced persistent threat (APT) brute-forcing “weak” 1,000-iteration PBKDF2-SHA1 within SimpleMembershipProvider while ignoring that junior developer, fired last week, who has seen the <machineKey> secret passwords and thus is able to forge any desired FA cookie and identity because you never change <machineKey> passwords. Even if you had a policy to change secret passwords and bothered to actually roll out new passwords across your 100‑server web farm (which, of course, you could painlessly do with a single click, since everything is so perfectly automated at your company that it almost runs itself), you would have to wait a week or two until the next scheduled production push anyway.

This hypothetical scenario is not that far-fetched, and could be a serious security threat. While insiders might not be particularly advanced, sophisticated, or malicious, their persistence and permanence, coupled with rampant (willful) ignorance, incompetence, poor training, and underfunding, more than make up for it (security rarely generates revenue). While APTs are more dangerous when they are well-funded, insiders are typically more dangerous when they are inadequately funded. Having the right mindset is crucial for insider threat mitigation, and the best way to do it is to assume that the attacker is “you,” or, rather, your evil self from a parallel universe who knows everything you know, and has read access to everything you have read access to.

Credential storage

We have already covered PBKDF2 for deriving a master key (MK) from a low-entropy SKM, such as a user password. PBKDF2 forces you to provide a salt (for example, a GUID), so you are unlikely to forget to protect a MK against SKM reuse. You could then store the MK and a salt against a user record at this point, and in fact many systems do just that. We advise against it, however.

One issue is that at this point there is nothing connecting the MK to a user ID other than storage. Ideally, a MK should be correlated to a user ID via cryptography, and not just via storage. A hypothetical insider with read-only access to user IDs and read-write access to the MK and salt could replace one MK/salt record (for the “admin” user) with another MK/salt record (from an insider’s own account with a known password) to gain access, and possibly change it back to avoid discovery.

Another issue is that it is wise to have an explicit one-way transformation between key extraction/derivation and storage, which you skip if you store the PBKDF2-derived result directly. One reason to have this explicit one-way transformation is to cleanly obtain an image and a pre-image of MK. For example, a PBKDF2-derived secret can be the pre-image of MK (PMK), and a one-way image of PMK can be the actual MK saved to storage.

One obvious idea to generate MK might be MK = HASH(PMK), where HASH is a cryptographic hash like SHA-512. This does not address our desire to cryptographically connect MK and a user ID, so do not do that. Another idea might be MK = HASH(PMK + <user ID>) or HASH(<user ID> + PMK), where + is concatenation. Such keyed-hash improvisations are most likely susceptible to length-extension attacks, so definitely do not do that either (and even with a length-extension-resistant hash such as SHA-384, hash-as-MAC improvisation remains bad practice). There is a proper cryptographic tool to authenticate data with a secret key: a MAC, and our preferred MAC implementation, HMAC. We can do MK = HMAC_PMK(<user ID>), i.e. HMAC keyed with PMK over the user ID. This is our preferred approach because it ensures a one-way PMK-to-MK transformation, and cryptographically ties MK to the user ID at the same time.
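A minimal sketch of this derivation, using .NET's built-in HMACSHA512 (the all-zero PMK is purely illustrative; a real PMK comes out of PBKDF2):

```csharp
using System;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

public static class MasterKeyDerivation
{
    // One-way PMK-to-MK transformation that also cryptographically ties
    // MK to the user ID: MK = HMAC-SHA-512 keyed with PMK, over the user ID.
    public static byte[] DeriveMasterKey(byte[] pmk, string userId)
    {
        using (var hmac = new HMACSHA512(pmk))
        {
            return hmac.ComputeHash(Encoding.UTF8.GetBytes(userId));
        }
    }

    public static void Main()
    {
        // In practice, PMK comes out of PBKDF2 over the user's password;
        // a fixed all-zero value is used here purely for illustration.
        byte[] pmk = new byte[64];

        byte[] mk1 = DeriveMasterKey(pmk, "alice");
        byte[] mk2 = DeriveMasterKey(pmk, "bob");

        // Same PMK, different user IDs -> different MKs, so an insider cannot
        // transplant one account's MK record onto another user ID.
        Console.WriteLine(mk1.SequenceEqual(mk2)); // False
    }
}
```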

Tip: As a general rule, never use a hash as a replacement for a MAC by creatively injecting a secret into a hashed message.

We store both MK and salt against the user ID record, but what should we do with PMK? Various ASP.NET membership providers have no separate concept of PMK because they skip the PMK-to-MK step: PMK = MK for them. These membership providers also do MK verification against server-side storage only once: when the FA cookie is generated for the first time during (successful) login. Subsequent FA cookie submissions are validated based on the server-side ability to EtM-decrypt the FA cookie with a fixed server-side secret key (or derived keys). As long as EtM-decryption was successful, the FA cookie is accepted as valid, and all claims made by such FA cookie contents are accepted. Effectively, the ASP.NET security model exchanges a secret, user-provided key for a bunch of identity claims signed with a server-side secret key that outsiders should not know, which prevents outsiders from forging identity claims. Note that the server-side identity storage is not consulted as part of identity claim verification—all it takes to verify identity claims is a successful EtM-decryption attempt. Insiders, however, are assumed to know all secret server-side keys, and are thus able to forge any identity claims because they know the secret key needed to make EtM-decryption succeed.

The obvious upside of the ASP.NET FA security model is that it scales well because it does not consult server-side storage for identity claim verification for every post-login request. The obvious downside of this approach is that insider threats are clearly out of scope for such a security model. ASP.NET FA’s inability to invalidate tickets upon sign-out (which we discussed earlier) is a direct consequence of non-existent post-login, server-side credential validation. Wouldn’t it be nice to have a more robust FA security model that offers some resistance against insider threats and allows for fine-tuning and striking the optimal balance between always checking storage for validation of post-login client requests, and never checking storage (ASP.NET FA)? This is where PMK can help.

We can store PMK within the EtM-encrypted FA ticket container (for example, within the UserData string) when a user logs in with credentials used to derive PMK. Subsequent client-server interactions will be authenticated via (1) EtM-decrypting the FA cookie with the server-side secret key; (2) extracting PMK and <user ID> claims from the FA cookie and checking HMAC_PMK(<user ID>) against the server-side storage record for that specific user ID. Insiders might be able to bypass (1) using knowledge of server-side secret keys, but it would be much harder for them to bypass (2), since that would either require knowledge of a user's PMK, or the ability to permanently or temporarily forge a <user ID> in storage.

Most knee-jerk reactions (known as “Internet advice”) to storing sensitive data in cookies typically look like this:

  1. Use TLS with Secure, HttpOnly cookies.
  2. Encrypt your cookies.
  3. DO NOT DO IT.

The must-have requirement for TLS is sound. If someone can steal a valid (non-expired) FA cookie, it does not matter what is inside of it. The encryption requirement is technically not a must-have, since authenticity of the cookie (i.e. its MAC) is often more important than the privacy of its contents. However, we do want to encrypt our cookies, because EtM encryption not only gives us both privacy and authenticity, but also allows us to leverage that cookie container for data that truly requires privacy, such as PMK. The “do not do it” part is a result of fear, uncertainty, and doubt, however.

PMK is a properly salted, PBKDF2-derived (and sufficiently iterated), fixed-length value. Assuming that both TLS protection and EtM-encryption of the cookie container are somehow bypassed (properly implemented cryptography is rarely broken and is typically bypassed), PMK could only be useful to an insider, since outsiders cannot forge FA cookies. PMK knowledge does not reveal the actual plaintext user password required to log in. If a user’s plaintext password happens to be easily predictable, nothing would help thwart attackers anyway.

Improving forms authentication

We have already described the ASP.NET 4.5 FA deficiencies and its ticket container contents in a prior chapter. Let’s summarize these deficiencies, which we will try to rectify:

  • Space-inefficient Base16 container encoding.
  • Not designed to be resistant to insider attacks:
      • No cryptographic connection to user credentials.
      • Credential validation is only done once (on login).
  • No concept of separate creation date and (re-)issue date.
  • No concept of (re-)issue count.
  • No concept of a unique session ID to identify each unique FA session, which survives ticket re-issues.
  • Not designed to provide CSRF defense.

We will employ the following approaches to try to improve on these deficiencies:

  • Use Base64 container encoding.
  • Provide insider attack resistance by:
      • Adding user-credential-derived PMK to the FA ticket.
      • Adding custom logic hooks to trigger credential validation as often or as rarely as desired.
  • Add creation date to the FA ticket ((re-)issue date is already captured as IssueDate).
  • Add re-issue count (unsigned integer) to the FA ticket.
  • Add a unique session ID to the FA ticket.
  • Provide CSRF defense by:
      • Adding an unpredictable CSRF token to the FA ticket, regenerated anew on every ticket re-issue.
      • Adding a second cookie with a JavaScript-readable CSRF token value, lifetime-synced to the FA cookie.

The additional data we want to add to the FA ticket container has to be stored in a UserData string in an API‑transparent way so that the API-consuming user can continue to store user-provided data in the UserData string as its documentation specifies. Let’s see what our storage requirements might look like:

Table 9: Alternative forms authentication ticket internals

  • Creation Date: 8 bytes. DateTime structure as binary.
  • Issue Count: 4 bytes. UInt32.
  • Session ID: 16 bytes. 128-bit random CSP-generated.
  • CSRF token: 15 bytes. 120-bit random CSP-generated.
  • User-credential-derived PMK: 32 bytes. PBKDF2-HMAC-SHA-512/256 (i.e. leftmost 256 bits only).
  • Total: 75 bytes. As byte array.
  • Base64(Total): 100 bytes. As UserData-stored string.
  • Text-serialized: 200 bytes. Double the string size as a byte array.
  • Base64-encoded cookie: ~267 bytes. As added to FA cookie container.

As you can see, piggybacking on top of the UserData string field leads to a lot of storage overhead as a result of FA ticket APIs not supporting direct byte array storage. Had direct byte array storage been available to us, we would only need a single Base64 encoding for the final cookie container—a 1.33 times bloat factor, instead of 3.56 times.

On the positive side, all of the fields are fixed-length, which means that we can easily prepend and un‑prepend their combined Base64-encoded value to or from the user-provided UserData. Algorithmic agility can be implemented via the existing mandatory Version byte field.

120-bit CSRF token security strength should be more than sufficient since CSRF tokens have a lifetime of a single FA ticket issue, which is 30 minutes, with default ASP.NET sliding expiration settings.

Using a full, 512-bit PMK would have been desirable from the security perspective. However, 64 bytes are a heavy payload to carry on every single FA cookie, which would get further bloated via the 1.33 × 2 × 1.33 ≈ 3.54 multiplier to ~227 cookie bytes. Using a truncated, 256-bit PMK cuts the extra cookie payload in half (to ~113 bytes) while still providing adequate user password entropy extraction and security margin.

Given a 256-bit PMK, the server side will store MK = HMAC-SHA-512/256_PMK(<user ID>)—i.e. the PMK-keyed 512-bit HMAC of a user ID, truncated to 256 bits. It is crucial for the server side to have no record of users' PMKs. Each authenticated client-side request will provide a PMK claim and a user ID claim to the server side. The server side then has full discretion to either trust these claims because the EtM decryption was successful (weak claim authentication), or alternatively, do a bit more work by calculating MK and validating it against storage for the claimed user ID (strong claim authentication). An insider would be able to bypass weak authentication, but not strong authentication.
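A sketch of strong claim authentication under the truncation convention above (HMAC-SHA-512 output truncated to its leftmost 256 bits), with a fixed-time comparison so that MK validation does not leak information via timing:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class StrongClaimAuthentication
{
    // MK = HMAC-SHA-512 keyed with PMK over the user ID, truncated to the
    // leftmost 256 bits, per the scheme described above.
    public static byte[] ComputeMk(byte[] pmk, string userId)
    {
        using (var hmac = new HMACSHA512(pmk))
        {
            byte[] full = hmac.ComputeHash(Encoding.UTF8.GetBytes(userId));
            byte[] mk = new byte[32];
            Buffer.BlockCopy(full, 0, mk, 0, 32);
            return mk;
        }
    }

    // Fixed-time comparison, so MK validation does not leak via timing.
    public static bool FixedTimeEquals(byte[] a, byte[] b)
    {
        if (a.Length != b.Length) return false;
        int diff = 0;
        for (int i = 0; i < a.Length; ++i) diff |= a[i] ^ b[i];
        return diff == 0;
    }

    // Strong claim authentication: recompute MK from the ticket-supplied PMK
    // and user ID claims, and compare it against the MK stored for that user.
    public static bool Validate(byte[] claimedPmk, string claimedUserId, byte[] storedMk)
    {
        return FixedTimeEquals(ComputeMk(claimedPmk, claimedUserId), storedMk);
    }
}
```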

The inclusion of a unique, unpredictable session ID can also help reduce the costs or latencies of PMK-to-MK validation via a caching layer. The PMK-to-MK validation can only be triggered when the session ID is not in the cache; otherwise, the FA ticket is deemed to be strongly authenticated due to a prior PMK-to-MK validation. There are many distributed in-memory cache solutions available for .NET (free and commercial, from Microsoft and other vendors, cloud-based and on-server), with varying consistency features. Even if consistency is not perfect, the worst that can happen is an extra PMK-to-MK validation—there are no security failures when cache consistency fails. Simple scenarios can even use ConcurrentDictionary for session ID cache.
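For the simple single-process scenario just mentioned, a ConcurrentDictionary-based session ID cache might look like this sketch (a real deployment would also evict stale entries):

```csharp
using System;
using System.Collections.Concurrent;

public static class SessionValidationCache
{
    // Session IDs that recently passed strong PMK-to-MK validation. Values
    // are timestamps only -- never cache FA tickets here, since that would
    // de-anonymize the session IDs.
    static readonly ConcurrentDictionary<Guid, DateTime> validated =
        new ConcurrentDictionary<Guid, DateTime>();

    public static bool NeedsStrongValidation(Guid sessionId)
    {
        DateTime ignored;
        return !validated.TryGetValue(sessionId, out ignored);
    }

    public static void MarkValidated(Guid sessionId)
    {
        validated[sessionId] = DateTime.UtcNow;
    }

    public static void Main()
    {
        Guid id = Guid.NewGuid();
        Console.WriteLine(NeedsStrongValidation(id)); // True: cache miss -> hit the data store
        MarkValidated(id);
        Console.WriteLine(NeedsStrongValidation(id)); // False: skip the data store trip
    }
}
```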

Our security model assumes that insiders have read-only access to all server-side secret keys and read-only access to all server-side storage, but have no access to application or web server memory. Even if insiders have read-write access to the in-memory session ID cache, the cached session IDs are anonymous. There is no way to determine that a particular session ID belongs to a particular logged-in user, and there is no way to predict or “fix” the Session ID for new logins. You should avoid using FA tickets as values in ConcurrentDictionary because that would de-anonymize session IDs.

You might consider using a custom FA ticket container that does not suffer from the bloat of ASP.NET FA ticket APIs, since you already know by now exactly what data you want to include, and how to serialize, store, encode, and encrypt it. We recommend not to do it, however, unless ASP.NET FA ticket storage and processing are serious bottlenecks for you that you have measured, and you have already exhausted all other optimization avenues. The key reason to piggyback on top of ASP.NET FA APIs is assurance of the following:

  • Microsoft’s implementation is mature (time-tested) and reasonably sane from a security perspective.
  • Microsoft maintains it for you, and will fix any security issues faster and better than you can.
  • Should any serious issues be uncovered in the future, ASP.NET FA will be broken for everybody, worldwide. The headlines will be about Microsoft and not about your company. You could stand behind “following Microsoft guidelines” instead of defending the merits of and reasons for your custom implementation. Microsoft could even be liable to your company in the extreme case.

The decision to trigger strong claim authentication is entirely flexible, and could depend on the following:

  • Session ID cache miss.
  • FA ticket renewal.
  • FA ticket issue count getting above a certain, non-typical threshold.
  • FA ticket issue count going through N increments (i.e. on every Nth ticket reissue).
  • FA ticket lifetime (expiration date minus issue date) is above a certain threshold.
  • CSRF token validation failure (which by itself should prevent a requested action, but perhaps you also want to check for and log any strong claim authentication failures in this case).
  • IP-based, role/permission-based, action-based, or any other custom logic.

New ASP.NET crypto stack

ASP.NET 4.5 cryptographic code paths have been substantially revamped due to serious security vulnerabilities that have plagued prior ASP.NET versions. These changes are well-covered in Part 1, Part 2, and Part 3 of Levi Broderick’s “Cryptographic Improvements in ASP.NET 4.5” MSDN blog posts. The old MachineKey.Encode and MachineKey.Decode methods have been obsoleted by new MachineKey.Protect and MachineKey.Unprotect methods, which do proper EtM with proper KDF and granular, context-driven master-key key derivation.

Code Listing 21

static byte[] Protect(byte[] userData, params string[] purposes)

static byte[] Unprotect(byte[] protectedData, params string[] purposes)

The secret encryption and verification master keys used by Protect and Unprotect are automatically sourced from <machineKey> configuration settings, just like with Encode and Decode. This implicit master-key configuration makes the Protect and Unprotect APIs easier to use, and more difficult to abuse or misuse. Unfortunately, this inability to use externally provided master keys also substantially limits the usefulness of the Protect and Unprotect APIs. One might be tempted to “hack” around this limitation by using fixed MachineKey keys and providing custom external master keys via the purposes array instead. While we cannot confirm or deny the security or insecurity of such an approach, we do know that the purposes array is designed to be a non-secret distinguisher, not a secret key. Abusing this design is no different than abusing HMAC for message confidentiality: it might work, but unless a professional cryptographer tells you that it does, you should avoid any misuse of cryptographic designs and APIs.

ASP.NET 4.5 allows for a complete replacement of the cryptographic black box used by the Protect and Unprotect methods (and thus virtually all ASP.NET crypto) to convert userData into protectedData and back. This replacement is enabled via any concrete implementation of a new abstract DataProtector class coupled with .config file instructions to use such DataProtector implementation in place of the default ASP.NET 4.5 crypto. You could experiment with this feature by doing a simple DataProtector implementation that returns a plaintext byte array as a ciphertext byte array and vice versa—purely for learning purposes, and just to get a feel for it.
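Such a learning-only, do-nothing implementation might look like the following sketch (the class name is illustrative):

```csharp
using System.Security.Cryptography;

// A do-nothing DataProtector for experimentation only: "ciphertext" equals
// plaintext in both directions. Never deploy anything like this.
public class NullDataProtector : DataProtector
{
    public NullDataProtector(string applicationName,
                             string primaryPurpose,
                             string[] specificPurposes)
        : base(applicationName, primaryPurpose, specificPurposes)
    {
    }

    public override bool IsReprotectRequired(byte[] encryptedData)
    {
        return false; // nothing to rotate in a null "protector"
    }

    protected override byte[] ProviderProtect(byte[] userData)
    {
        return userData;
    }

    protected override byte[] ProviderUnprotect(byte[] encryptedData)
    {
        return encryptedData;
    }
}
```

ASP.NET is pointed at such a class via the machineKey element, roughly <machineKey compatibilityMode="Framework45" dataProtectorType="NullDataProtector, YourAssembly" /> (the assembly name is a placeholder; see Levi Broderick's posts for the exact wiring).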

The DataProtector replacement capability is a great backup plan to have in case major security vulnerabilities are uncovered, either in ASP.NET's implementation or in the cryptographic primitives used by ASP.NET. It still does not help us with custom, externally provided secret master keys, since the Protect and Unprotect APIs are still the only available high-level APIs. Fortunately, we can leverage a proper EtM implementation from the Inferno library, which does not have implicit-key limitations. An Inferno-based implementation of DataProtector also happens to be about 15 percent faster than Microsoft's default Protect/Unprotect crypto, although that is not a good enough reason to prefer it, unless need for speed trumps all other concerns for you.

ASP.NET CSRF API

ASP.NET 4.5 comes with CSRF defense APIs located in the AntiForgery static helper, which provides two method pairs:

  • HtmlString GetHtml()
  • void Validate()

…and…

  • void GetTokens(string oldCookieToken, out string newCookieToken,
    out string formToken)
  • void Validate(string cookieToken, string formToken)

MSDN has basic information on how to use the first pair of methods, but not the second pair. There is also very little information on how these APIs are designed to work, which makes it very difficult to use them properly. Let’s go over the AntiForgery mechanics first. There are two “tokens” at play: a cookie token and a form token.

Table 10: ASP.NET AntiForgery token internals

Cookie token:

  • 1-byte token version, set to 1.
  • 128-bit (16-byte) CSP-random value.
  • 1-byte IsSession flag, set to 1.
  • Total size (pre-encryption): 18 bytes.

Form token:

  • 1-byte token version, set to 1.
  • 128-bit (16-byte) CSP-random value.
  • 1-byte IsSession flag, set to 0.
  • 1-byte flag for whether claim-based (1) or not (0).
  • ClaimUid bytes or Username string (UTF-8), based on the above flag.
  • AdditionalData string (UTF-8), or 0-byte if absent.
  • Total size (pre-encryption): 21 bytes minimum.

The binding of cookie and form tokens to each other is done via their 128-bit CSP-random value, which must be identical for both tokens to infer a valid binding. Both cookie and form tokens are EtM-encrypted and Base64-encoded to become final string values. There is no real need to encrypt the cookie token since there is nothing secret in it, but Microsoft does it anyway, likely in order to authenticate it (the M part of EtM). This unnecessary encryption adds a lot of bloat to cookie tokens, which travel on every HTTP request (it adds up).

ASP.NET CSRF tokens are not integrated with ASP.NET authentication and authorization mechanisms. You might be forgiven for thinking that they are a new, optional ASP.NET feature with no dependencies, because that is what their documentation implies. The reality is that ASP.NET CSRF tokens are fundamentally tied to the context identity controlled by HttpContext.Current.User.Identity. If your application is setting identity on HttpContext, then the ASP.NET CSRF tokens will encode that identity into the form token and validate it later against the current HttpContext. However, a vast number of ASP.NET applications have custom authentication logic that is not driven by HttpContext.Current.User.Identity. The ASP.NET CSRF form token will only capture and validate identity when HttpContext.Current.User.Identity exists and has IsAuthenticated=true.

The following code would set a generic “empty-string” identity, which has IsAuthenticated=false:

Code Listing 22

HttpContext.Current.User = new GenericPrincipal(
    new GenericIdentity(""), null);

If HttpContext.Current.User.Identity is null or has IsAuthenticated=false, the form token will encode an empty string as the identity—it will not throw or otherwise alert you. This is dangerous because it makes CSRF tokens from all users interchangeable and allows existing users to mount CSRF attacks against each other. Thus, ASP.NET CSRF token APIs do not work out of the box, because they require specific ways of identity management. This dependency is not mentioned or covered by Microsoft's documentation, but even if it were documented, it would only emphasize that Microsoft's CSRF token implementation is an afterthought, inferior to alternative approaches that integrate CSRF defense mechanisms with identity and authentication by design.

For example, the 15-byte CSRF token we described in the “Improving forms authentication” section would Base64-encode to 20 bytes. Compare that with Microsoft’s 18-byte CSRF token, which gets EtM-encrypted to 108 bytes:

18 bytes → 32 (AES padding) → 80 (16 bytes of IV; 32 bytes of MAC) → 108 (Base64)

Another issue is that ASP.NET CSRF tokens are identity-bound, not session-bound. Imagine that Twitter is running ASP.NET. You have just signed up for the @Ironman account. However, unbeknownst to you, someone already had the @Ironman account before and deleted it, which allowed you to grab it. That someone has valid CSRF tokens for the @Ironman identity, and can mount CSRF attacks against your account—forever.

Why forever? Yet another issue with ASP.NET CSRF tokens is that they have no expiration (sessions expire, identities do not). Form tokens are intended to live within HTML, and HTML does not expire either. The only way to invalidate already-issued ASP.NET CSRF tokens is to change the ASP.NET machine keys on the server.

You can work around that by making ASP.NET CSRF tokens session-bound. One approach is to use unique identities. Unique identities are not session-bound, but offer some defense against CSRF token replay. While the old and the new Twitter @Ironman handles are the same, you would internally identify them with different GUIDs, which become the real internal account identities. Another approach is to leverage form token’s AdditionalData property. You would want to store some session-specific identifier in this property. You cannot set it directly, however, and can only do it through setting AntiForgeryConfig.AdditionalDataProvider, which requires you to implement the IAntiForgeryAdditionalDataProvider interface.
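A sketch of the AdditionalDataProvider approach (GetSessionToken is a hypothetical helper that would return your application's own unpredictable session identifier, such as the FA-ticket session ID from the "Improving forms authentication" section):

```csharp
using System.Web;
using System.Web.Helpers;

// Sketch: bind AntiForgery form tokens to a session-specific value via the
// AdditionalData mechanism.
public class SessionBoundAdditionalDataProvider : IAntiForgeryAdditionalDataProvider
{
    public string GetAdditionalData(HttpContextBase context)
    {
        return GetSessionToken(context);
    }

    public bool ValidateAdditionalData(HttpContextBase context, string additionalData)
    {
        // A fixed-time comparison would be preferable here.
        return additionalData == GetSessionToken(context);
    }

    static string GetSessionToken(HttpContextBase context)
    {
        // Hypothetical: extract the session ID carried by your FA ticket,
        // not the fixation-prone ASP.NET SessionID.
        throw new System.NotImplementedException();
    }
}
```

Registration happens once at startup, e.g. AntiForgeryConfig.AdditionalDataProvider = new SessionBoundAdditionalDataProvider(); in Application_Start.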

Making CSRF tokens session-bound, and not merely identity-bound (even when that identity is unique), is also important for the insider threat model, which is rarely considered by built-in ASP.NET components. An insider with read access to machine keys can generate forever-valid CSRF tokens for any known identity, such as "Administrator." Making CSRF tokens session-bound would force an insider attacker to guess a valid, short-lived session identifier for a specific identity, which is arguably a much harder task. Do not use ASP.NET SessionIDs as your session identifier, because they are vulnerable to client-side session fixation (better yet, do not use ASP.NET session state at all). Client-side session fixation enables an attacker to generate a specific, known session and trick the victim into using it instead of a random, unguessable session, so make sure you have addressed that threat.

Since Microsoft's CSRF tokens use the ASP.NET crypto stack for EtM encryption, .NET 4.5-produced tokens will be very different from, and incompatible with, tokens produced by earlier .NET Framework versions (see our discussion in the "New ASP.NET crypto stack" section). The .NET 4.5 crypto stack is only used by Microsoft's CSRF token logic when HttpRuntime.TargetFramework returns a "4.5" version object; otherwise, pre-4.5 crypto APIs are used. In ASP.NET applications, that property can be controlled via the <httpRuntime targetFramework> setting in the .config file. Unfortunately, that setting has no effect on non-ASP.NET applications, which will default to HttpRuntime.TargetFramework=4.0, even when compiled for and running on .NET 4.5. One way to force non-ASP.NET applications running on .NET 4.5 into producing 4.5-based CSRF tokens is:

Code Listing 23

// FrameworkName requires: using System.Runtime.Versioning;
AppDomain.CurrentDomain.SetData("ASPNET_TARGETFRAMEWORK",
    new FrameworkName(".NETFramework", new Version(4, 5)));

HttpRuntime.TargetFramework.Dump(); // Version 4.5 now

Client-side PBKDF

One obvious consequence of employing PBKDF and similar password-based KDF schemes to derive keys from low-entropy passwords is their intentionally high computational cost, measured in CPU utilization, memory utilization, or both. This cost can quickly become a limiting factor for server-side throughput (for example, measured in requests per second).

One tempting idea is moving PBKDF computation from the server side to the client side. Let us cover why this idea falls apart on closer inspection. Typical client-side devices tend to have inferior CPU and memory resources compared to server-side infrastructure. Client-side software is often not even able to fully utilize the client-side resources that are available. Many modern browsers are stuck in 32-bit mode, even when they are running in 64-bit environments (that is, they are unable to fully utilize available memory). Even when browsers are running in 64-bit mode, their computational capabilities are typically limited by their single-threaded JavaScript engines (they are unable to fully utilize available CPUs). Mobile client devices are often primarily battery powered, and are even further constrained in their resource capabilities.

The crucial implication of client-side resource inferiority is that the number of client-side PBKDF rounds that can be tolerated with acceptable usability is typically a lot lower compared to server-side execution. Any substantial reduction in the number of PBKDF rounds would weaken PBKDF security and circumvent its intended purpose.

Another problem is that PBKDF is salted (if it is not, then you are not using it right). Salt values are available on the server side (which makes server-side PBKDF straightforward), but not on the client side. Sending salt values to the client side would not only introduce additional complexity, but would weaken security as well. While salt values do not have to be secret, they should not be blissfully disclosed to anyone who asks, either. An adversary could ask for a salt value of an “Administrator” account and combine it with a list of common passwords to generate potential credentials and quickly mount an online attack. Disclosing salt values is clearly not in our best interests.
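For contrast, server-side PBKDF2 is straightforward precisely because the salt never leaves the server. A sketch using the built-in Rfc2898DeriveBytes (which internally uses HMAC-SHA1); the salt size, iteration count, and key length here are illustrative, not recommendations:

```csharp
using System;
using System.Security.Cryptography;

class Pbkdf2Demo
{
    static void Main()
    {
        // Per-user random salt, generated once at account creation and stored
        // server-side next to the password verifier.
        var salt = new byte[16];
        using (var rng = new RNGCryptoServiceProvider())
            rng.GetBytes(salt);

        // Server-side PBKDF2; the round count is tuned to server hardware,
        // not constrained by a weak client device.
        using (var kdf = new Rfc2898DeriveBytes("demo password", salt, 100000))
        {
            byte[] key = kdf.GetBytes(32); // 256-bit derived key
            Console.WriteLine(Convert.ToBase64String(key));
        }
    }
}
```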

Password reset

Password reset protocols try to provide users with an "acceptably secure" means of resetting or changing the what-you-know authenticator (the password) without providing a valid existing authenticator, which undermines the entire purpose of such an authenticator. It is therefore important to realize that most password reset protocols are an exercise in the intentional weakening of the security of the system you are trying to protect, and they are often the weakest link in the chain of security measures.

Password authenticators tend to be the only “proof” of account ownership in many systems, due to practicality (ease of use, ease of management, and ease of delegation and revocation). The only secure password reset protocol for these systems is really “password change/reset requires proof of knowledge of existing password.” If the existing password is lost, there is no way to correctly establish account ownership when the existing password is the only proof of ownership. The various password “alternatives” used in the wild—such as email, SMS, and automated phone calls—merely avoid the establish-account-ownership problem by outsourcing it to another external system, or by splitting it among several external systems (email, SMS, and voice call codes are all required for a password change or reset).

Most side channels used for automated identity confirmation and password resets provide weak or no security. The only mitigating factor that makes the risk of using these insecure side channels marginally plausible is the short time-to-live of identity-confirmation tokens sent through those channels. Another serious problem with typical side channels is that their security is often interdependent: a stolen smartphone is likely to let an adversary receive phone calls and SMS, and read the owner's email through a logged-in email client.

If you have no choice and must weaken your system’s security for the sake of usability (asking users to create new accounts when they are unable to provide valid credentials to existing or claimed accounts is bad for business), then an email side channel will likely be the first thing you will try to leverage, since you are likely already using this channel for user communication. We will now describe how you should do it in the least damaging way.

There are two distinct phases in the email-based password reset process: token request and token validation.

Token request:

  1. Ask for the account-linked email address and a CAPTCHA solution.
  2. If the CAPTCHA is solved and the email matches precisely one account:
     a. Generate a CSP-random token T (120 bits are enough) and its creation UTC timestamp TS.
     b. Use T as a key to calculate the signature SIG = HMAC(T, account ID + TS).
     c. Record SIG and TS in the database (but not T).
     d. Email Base64(T), and forget T as soon as possible.

Token validation:

  1. Obtain T from the user; for example, the user clicks an email-delivered URL with embedded Base64(T).
  2. Obtain the account ID or username from the user (do not disclose it in the email), and a CAPTCHA solution.
  3. If the CAPTCHA is solved and TS (looked up for the claimed account ID) is still valid:
     a. Calculate SIG' = HMAC(T, account ID + TS).
     b. Compare SIG' and SIG (looked up for the claimed account ID). If SIG' = SIG:
        i. Ask the user to choose a new password.
        ii. Reset SIG and TS to null for that account ID.
        iii. Send a "your password has been changed" notification email.
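The token request and validation steps above can be sketched as follows. This is a simplified illustration: the PasswordResetTokens class, the 30-minute TTL, and the "|" separator are assumptions, and the database storage and lookup of SIG/TS are left out:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

static class PasswordResetTokens
{
    static readonly TimeSpan TokenTtl = TimeSpan.FromMinutes(30); // illustrative TTL

    // Token request: returns Base64(T) for emailing; SIG and TS go to the database.
    public static string CreateToken(string accountId, out byte[] sig, out DateTime ts)
    {
        var t = new byte[15]; // 120-bit CSP-random token T
        using (var rng = new RNGCryptoServiceProvider()) rng.GetBytes(t);
        ts = DateTime.UtcNow;
        sig = Sign(t, accountId, ts); // SIG = HMAC(T, account ID + TS)
        return Convert.ToBase64String(t); // T itself is emailed, then forgotten
    }

    // Token validation: recompute SIG' from the user-supplied token and compare.
    public static bool Validate(string tokenB64, string accountId, DateTime storedTs, byte[] storedSig)
    {
        if (DateTime.UtcNow - storedTs > TokenTtl) return false; // TS no longer valid
        byte[] sig2 = Sign(Convert.FromBase64String(tokenB64), accountId, storedTs);
        // Fixed-time comparison, to avoid leaking how many leading bytes matched.
        if (storedSig.Length != sig2.Length) return false;
        int diff = 0;
        for (int i = 0; i < sig2.Length; ++i) diff |= storedSig[i] ^ sig2[i];
        return diff == 0;
    }

    static byte[] Sign(byte[] key, string accountId, DateTime ts)
    {
        using (var hmac = new HMACSHA256(key))
            return hmac.ComputeHash(Encoding.UTF8.GetBytes(accountId + "|" + ts.Ticks));
    }
}
```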

Always keep detailed audit records on password reset requests (such as an IP address or browser fingerprint).

You should not trust identity or credential management and verification to a third-party provider (such as Google, Twitter, Facebook, or Microsoft), even if initial integration seems attractive (easy). Outsourcing such key pillars of your security infrastructure is likely to cause problems later.

Strict Transport Security (STS)

Secure web applications (or any of their parts or fragments) should never be loaded over HTTP; TLS should be mandatory. Unfortunately, there is a big usability problem with the "we speak HTTPS only" approach. The public has been conditioned to ignore URI schemes and avoid explicitly specifying the protocol. It is "facebook.com" and "google.com" rather than "https://www.facebook.com" and "https://www.google.com". Since the scheme is rarely specified, the "default" protocol choice is delegated to the user agent. Unfortunately, most (all?) user agents default to "http" rather than "https" as their default scheme. Not supporting HTTP would improve security, but it would also make the web application inaccessible to a lot of users.

Most web applications choose to respond via HTTP as well to minimize usability issues. The HTTP-based responses have no content, and instead return the HTTP-301 (permanent redirect) to the HTTPS-equivalent URL. Responding to HTTP requests improves usability, but exposes users to the risk of eventually stumbling into a hostile or untrusted networking environment that can maliciously intercept and modify HTTP responses to compromise user sessions.

Strict Transport Security (STS) is a mechanism to minimize the likelihood of a casual user being attacked when (eventually) accessing an HTTP-responding web application over a hostile or untrusted network. STS adds a Strict-Transport-Security header to HTTP and HTTPS responses, which specifies a period of time during which the user agent should access the server over HTTPS only. An STS-compliant user agent is expected to remember the STS instruction and enforce it for the specified duration (max-age). The enforced STS causes user agents to automatically turn any HTTP requests into equivalent HTTPS requests before they are made. This automatic client-side HTTPS conversion will save users who accidentally stumble into a rogue network environment from being served malicious HTTP content. However, STS offers no protection to user agents that have never accessed the web application before, and thus have no prior enforced STS entry recorded.

One complementary STS mechanism is STS preloading, which is a way to submit HTTPS-speaking domains for inclusion in user agents as HTTPS-only. The full list of preloaded STS participants for the Chrome browser is publicly available.

There is practically no downside to using STS, and it is a low-effort and high-impact preventative security mechanism. The IIS configuration (web.config) to add one-year STS enforcement is:

Code Listing 24

<system.webServer>
  ...
  <httpProtocol>
    <customHeaders>
      <add name="Strict-Transport-Security"
           value="max-age=31536000; includeSubDomains; preload" />
    </customHeaders>
  </httpProtocol>
  ...
</system.webServer>

Not all user agents support STS (yet), so an explicit HTTP-301 redirect might still be useful.

X-Frame-Options (XFO)

Clickjacking is a common attack technique in which a user is tricked into unwittingly performing an otherwise undesired or unapproved action the user never intended. Clickjacking victims are usually tricked into clicking UI elements of another web application, presented in a transparent layer over what the user believes they are clicking. This trick is often achieved by exploiting the <frame> or <iframe> HTML tags.

The X-Frame-Options (XFO) header instructs compliant browsers to impose certain defensive restrictions on loading XFO-marked content into <frame> and <iframe> tags. There are three settings for XFO value:

  • DENY
  • SAMEORIGIN
  • ALLOW-FROM uri

DENY prevents XFO-marked resources from being framed. SAMEORIGIN allows XFO-marked content to be framed only from the same origin as the XFO-marked content. ALLOW-FROM allows XFO-marked content to be framed only from an origin specified by the uri.

Sample IIS configuration to set XFO is:

Code Listing 25

<system.webServer>
  ...
  <httpProtocol>
    <customHeaders>
      <add name="X-Frame-Options" value="SAMEORIGIN" />
    </customHeaders>
  </httpProtocol>
  ...
</system.webServer>

The ALLOW-FROM XFO setting is not supported by most browsers, but DENY and SAMEORIGIN are widely supported.

Content-Security-Policy (CSP)

The Content-Security-Policy (CSP) header (not to be confused with the cryptographic service provider from previous sections) is a more robust replacement for XFO with richer defensive functionality. While XFO has decent support in modern browsers, it is deprecated as a standard, obsoleted by the frame-ancestors directive of CSP.

CSP is a very powerful policy-setting mechanism, but requires careful policy configuration, testing, and tuning to be effective. Non-trivial CSP policies also tend to be quite verbose and waste a lot of bytes on the wire, adding response bloat. A simple CSP policy might be:

Code Listing 26

Content-Security-Policy: default-src 'self'

This policy forces all content to be loaded from the document’s own origin only. Sample IIS configuration to set CSP is:

Code Listing 27

<system.webServer>
  ...
  <httpProtocol>
    <customHeaders>
      <add name="Content-Security-Policy" value="default-src 'self';" />
    </customHeaders>
  </httpProtocol>
  ...
</system.webServer>

Subresource Integrity (SRI)

Most non-trivial web applications are assembled from resources (images, scripts, stylesheets, fonts, etc.) that are loaded from multiple origins. Therefore, comprehensive web application security requires authenticity guarantees from all these origins as well, most of which will be outside your domain of control. If authenticity guarantees are absent, not only are you exposed to the risk of these external origins getting hacked and legitimate resources being maliciously replaced, but you can also suffer from DNS poisoning attacks replacing the IP addresses of these origins, which might be an easier attack than hacking the servers directly.

TLS (HTTPS) connections, Strict-Transport-Security (STS), and Public Key Pinning (HPKP) mechanisms all play important roles in comprehensive web security. However, they are focused on authenticating the server, but not the content. Subresource Integrity (SRI) is a relatively new mechanism that allows web applications to pin the content as well (by validating its authenticity).

Code Listing 28

<script
  src="https://ajax.googleapis.com/ajax/libs/jquery/2.2.2/jquery.min.js"
  integrity="sha384-mXQoED/lFIuocc//nss8aJOIrz7X7XruhR6bO+sGceiSyMELoVdZkN7F0oYwcFH+"
  crossorigin="anonymous">
</script>

The integrity attribute on the <script> tag contains a SHA384 hash of the binary payload loaded from the src attribute. SRI-enabled user agents (e.g., browsers) will refuse to load any resources that fail a hash match. One way to calculate the SRI hash of a resource is as follows:

Code Listing 29

using (var http = new HttpClient())
using (var dataStream = await http.GetStreamAsync(
    "https://ajax.googleapis.com/ajax/libs/jquery/2.2.2/jquery.min.js"))
using (var hash = SecurityDriven.Inferno.Hash.HashFactories.SHA384())
{
    byte[] hashBytes = hash.ComputeHash(dataStream);
    string hash_b64 = Convert.ToBase64String(hashBytes);
    $"sha384-{hash_b64}".Dump();
}

// result:
// sha384-mXQoED/lFIuocc//nss8aJOIrz7X7XruhR6bO+sGceiSyMELoVdZkN7F0oYwcFH+

Alternatively, you can use a convenient and untrusted online generator at srihash.org.

JSON Object Signing and Encryption (JOSE) framework

JOSE is a set of related specifications for encrypting and signing content, and for making it portable and transferrable over the web. JOSE has many moving parts, namely:

  • JWS: JSON Web Signature (RFC 7515)
  • JWE: JSON Web Encryption (RFC 7516)
  • JWK: JSON Web Key (RFC 7517)
  • JWA: JSON Web Algorithms (RFC 7518)
  • JWT: JSON Web Token (RFC 7519)

One of the JOSE coauthors (who is a Microsoft standards architect) described it as a “4.5-year journey to create a simple JSON-based security token format and underlying JSON-based cryptographic standards, with the goal to keep things simple.”

While JOSE specifications might indeed be simpler than some other Microsoft-backed standards such as SAML 2.0 (coauthored by the same Microsoft architect), they are far from simple (as the 5+ RFCs suggest), and are more comparable to the TLS RFCs in specification complexity. The parallels between TLS and JOSE can also be seen in the sheer number of possible combinations of the various JOSE components, only some of which might be secure, with the JOSE Cookbook stating that "the full set of permutations is extremely large, and might be daunting to some." That is not how good security specifications that aim to "keep things simple" should start.

Continuing our TLS analogy, we can draw parallels between securing data at the transport layer (which is what TLS does), and similar attempts to secure data at the application layer (which is what JOSE tries to do via JSON).

There are very few high-quality TLS libraries out there, and unless you are a big browser-maker or browser-backer, or have Amazon or CloudFlare scale, writing a TLS library should not be on your agenda. Similarly, despite JOSE’s purported goal of keeping things simple, custom JOSE implementations are likely fraught with security vulnerabilities. JOSE was standardized in 2015, so it is still too new for any particular implementation to withstand the test of time (which is one of the key tests for security specifications).

If you are considering using some parts of JOSE, here are some things to keep in mind:

  • JOSE makes heavy use of JSON serialization and deserialization, which (in .NET) is CPU-expensive, memory-expensive, and GC-expensive.
  • JOSE does not efficiently encode binary data, or any other non-string data, and needlessly wastes a lot of storage, memory, and bytes-over-the-wire.
  • JOSE (JSON) does not support lazy, just-in-time access to individual fields—the entire JSON structure must be de-serialized first.
  • JOSE encryption (JWE) is optional (another shortcoming), and is often not used. JWT (tokens) are usually signed but not encrypted.
  • JWE symmetric encryption modes are limited to AES-CBC (good conservative choice, but not as good as CTR, for example). The MAC tag is at least 32 bytes long (overkill that wastes storage).
  • JOSE follows an "a la carte" approach in the name of algorithmic agility. We know how well that same "a la carte" approach worked for TLS, where it is a constant source of vulnerabilities. The TLS 1.3 working group is finally wising up and tightening the 1.3 spec down to a few AEAD-only schemes, without all the broken "features" of prior versions. The algorithmic agility of JOSE is creating security bugs already.

Cookies

Some people falsely assume that HTTP cookie-based authentication is equivalent to server-side state, where the cookie only contains the session ID, and all session information is keyed on that ID and stored server-side. However, the Cookie (request) and Set-Cookie (response) headers are just specific HTTP headers, and do not mandate any particular authentication mechanism or approach. Cookies are actually described in RFC 6265, "HTTP State Management Mechanism", which states:

The Cookie header contains cookies the user agent received in previous Set-Cookie headers. The origin server is free to ignore the Cookie header or use its contents for an application-defined purpose.

It is that “application-defined purpose” that makes the cookie mechanism so useful, and the magic of cookies that makes them unlike any other HTTP header is the following capability:

Using the Set-Cookie header, a server can send the user agent a short string in an HTTP response that the user agent will return in future HTTP requests that are within the scope of the cookie.

This capability to send some data or state from the server to the client and have the client’s user agent (browser) automatically and transparently send that data or state back to the server on all subsequent requests is a great way of implementing client-side state (stateless from the server’s perspective).

Another excellent cookie capability is the HttpOnly attribute, which instructs the user agent to prohibit access to the cookie via any non-HTTP API (specifically, JavaScript). JavaScript APIs can read every HTTP header except the HttpOnly-marked cookie header. Most other web defenses fall apart under cross-site scripting (XSS) or JavaScript-injection exploits, but not HttpOnly cookies: they might be weakened (since many other attacks become possible under XSS), but they will not allow the cookies (often containing the user's authenticated identity) to be hijacked.

XSS attacks are the bane of the modern web (lots of user-contributed content), and HttpOnly cookies are a great way of offloading state to the client without suffering the consequences. XSS attacks are usually not a matter of “if,” but a matter of “when”. If you are storing identity-bearing tokens client-side and are using some client-side storage other than HttpOnly cookies (for example, HTML5 local/session storage, or cookies without HttpOnly), you are effectively placing your “castle keys” outside the castle walls—are you really that confident?
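For illustration, an identity-bearing session cookie might be set like this (the cookie name and value are made up); note the Secure attribute, which keeps the cookie off plain HTTP, alongside HttpOnly:

```http
Set-Cookie: SessionToken=4grXsecDhVhbYMzSDlqe; Path=/; Secure; HttpOnly
```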

HTTP/2

HTTP/2 (RFC 7540) is the next version of the HTTP/1.1 protocol. HTTP/1.0 was released in 1996, followed by HTTP/1.1 in 1999, and HTTP/2 in 2015—16 years later. HTTP/2 is a major revision with significant differences from HTTP/1.1. Some of these differences are:

Table 11: HTTP/1.1 vs. HTTP/2

  • Connections: HTTP/1.1 uses one connection per request (with browsers pooling 6-8 connections per origin); HTTP/2 uses one connection per origin.
  • Framing: HTTP/1.1 is textual (e.g., "GET /index.html HTTP/1.1"); HTTP/2 is binary (i.e., much more byte-efficient).
  • Concurrency: HTTP/1.1 is ordered and blocking (request-response); HTTP/2 is fully multiplexed (like TCP/IP).
  • Headers: HTTP/1.1 repeats headers with every request; HTTP/2 compresses headers (HPACK method).
  • Encryption: optional in HTTP/1.1 (HTTP vs. HTTPS); mandatory (sort of) in HTTP/2.
  • Data flow: HTTP/1.1 is client-pull only (explicit URI requests); HTTP/2 adds server push (proactive data sending to the client).

The HTTP/2 specification does not make encryption mandatory, but all current HTTP/2-supporting browsers will only establish HTTP/2 over TLS, which makes encryption practically mandatory with HTTP/2. Another important HTTP/2 security advance is that HTTP/2 implementations must use TLS 1.2 or higher; therefore, using HTTP/2 automatically avoids broken or vulnerable old TLS versions. HTTP/2 is making the web faster and safer, so adopting HTTP/2 should be on your agenda. Microsoft supports HTTP/2 in IIS 10.0, which is part of the Windows Server 2016 OS. To see HTTP/2 in action, visit the HTTP vs. HTTPS Test and Akamai demo sites.
