7 reasons to use HTTP/2 when delivering sites to your client

HTTP/1.1 has served the web well for over twenty years, but now its age is starting to show. Loading a webpage has become more resource-intensive than ever: pages have grown larger and more complex, with ever more assets pushed through the network. Loading all these resources efficiently has become harder, because browsers are limited to one outstanding request per TCP connection.

To work around this limit, both browser developers and webmasters came up with a set of performance tricks and mitigation techniques that form the foundation of “high performance websites”. The very existence of techniques such as domain sharding (https://www.keycdn.com/support/domain-sharding/), data inlining, concatenation and spriting points to underlying problems in the protocol itself, and each of these workarounds introduces problems of its own.

Here are some of the reasons why you should consider enabling HTTP/2 on your web server.

  • It’s done. That’s right: the HTTP/2 spec was finalised in May 2015 as RFC 7540 (https://tools.ietf.org/html/rfc7540). It is the latest version of the protocol that powers the web, it has seen explosive adoption over the last couple of years, and it is here to stay.

  • It’s backwards compatible. Existing HTTP/1.1 clients will continue to operate normally, while clients that support the new version negotiate HTTP/2 during the TLS handshake via ALPN (or through the HTTP Upgrade mechanism on cleartext connections), thus taking advantage of the new protocol. A quick way to check what was negotiated is sketched after this list.

  • It’s supported by most popular browsers (http://caniuse.com/#feat=http2).

  • It’s easy to enable. Both Apache and Nginx can turn on HTTP/2 support with just a couple of lines of configuration; see the server snippets after this list.

  • It’s binary, instead of textual. Binary protocols are more efficient to parse, more compact “on the wire” and make the protocol much less error-prone, since there is only one way of doing things and no ambiguity.

  • It’s multiplexed, instead of ordered and blocking. HTTP/1 suffers from “head-of-line blocking” (https://en.wikipedia.org/wiki/Head-of-line_blocking), where one request can block all others behind it until it is fulfilled. To work around this, browsers open four to eight connections per origin so that requests can be processed in parallel. Many websites use multiple origins, which can mean that more than fifty connections are opened for a single page. This wastes network resources by duplicating data in each connection, and it effectively defeats the congestion control mechanisms built into TCP, which hurts both performance and the network. Because of this limitation, it has become the industry norm to make as few HTTP requests as possible. Multiplexing addresses these problems by allowing multiple request and response messages to be in flight at the same time over a single TCP connection per origin. A short demonstration follows this list.

  • Its headers are safely compressed. There have been documented attacks (https://en.wikipedia.org/wiki/CRIME_(security_exploit)) in the wild against TLS-protected HTTP resources, where an attacker who can inject data into the encrypted stream is able to recover portions of the plaintext, such as authentication cookies or headers. HTTP/2 employs a compression scheme, HPACK, that is resistant to this class of attacks while still delivering reasonable compression efficiency. This makes per-request overhead much cheaper, since large headers such as cookies often form a significant part of it, which in turn lowers latency because fewer roundtrips are required. HPACK is demonstrated after this list.
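
On the negotiation point above, here is a minimal Go sketch that offers both protocols via ALPN and prints which one the server picked (example.com is a placeholder for whatever host you want to test):

    package main

    import (
        "crypto/tls"
        "fmt"
    )

    // Offers both protocols via ALPN and reports what the server chose.
    // HTTP/2-capable servers answer "h2"; older ones fall back to "http/1.1".
    func main() {
        conn, err := tls.Dial("tcp", "example.com:443", &tls.Config{
            NextProtos: []string{"h2", "http/1.1"},
        })
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        fmt.Println("negotiated:", conn.ConnectionState().NegotiatedProtocol)
    }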
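
Enabling it really does take only a couple of lines. A minimal sketch for both servers, assuming certificates already exist at the paths shown (Nginx needs 1.9.5 or later built with the http2 module, Apache needs 2.4.17 or later with mod_http2 enabled):

    # nginx: add the http2 parameter to the TLS listener
    server {
        listen 443 ssl http2;
        server_name example.com;
        ssl_certificate     /etc/ssl/certs/example.com.crt;
        ssl_certificate_key /etc/ssl/private/example.com.key;
    }

    # Apache: advertise h2 ahead of HTTP/1.1
    Protocols h2 http/1.1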

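Multiplexing in practice, as a hedged Go sketch with placeholder asset URLs: Go’s standard http.Client negotiates HTTP/2 automatically over TLS, so the concurrent requests below share one TCP connection instead of opening one each.

    package main

    import (
        "fmt"
        "net/http"
        "sync"
    )

    func main() {
        // Placeholder asset URLs on a single origin.
        urls := []string{
            "https://example.com/styles.css",
            "https://example.com/app.js",
            "https://example.com/logo.png",
        }
        var wg sync.WaitGroup
        for _, u := range urls {
            wg.Add(1)
            go func(u string) {
                defer wg.Done()
                resp, err := http.Get(u)
                if err != nil {
                    fmt.Println(u, "error:", err)
                    return
                }
                defer resp.Body.Close()
                // resp.Proto reports the negotiated version, e.g. "HTTP/2.0".
                fmt.Println(u, resp.Proto, resp.Status)
            }(u)
        }
        wg.Wait()
    }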
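
And HPACK (RFC 7541) itself can be observed with Go’s golang.org/x/net/http2/hpack package: encode the same header set twice, and the second pass emits short table references instead of the literal bytes. A minimal sketch, with a made-up cookie value:

    package main

    import (
        "bytes"
        "fmt"

        "golang.org/x/net/http2/hpack"
    )

    func main() {
        var buf bytes.Buffer
        enc := hpack.NewEncoder(&buf)

        headers := []hpack.HeaderField{
            {Name: ":method", Value: "GET"},
            {Name: ":path", Value: "/index.html"},
            {Name: "cookie", Value: "session=0123456789abcdef"},
        }

        // First request: the cookie is sent literally and inserted
        // into the encoder's dynamic table.
        for _, h := range headers {
            enc.WriteField(h)
        }
        fmt.Println("first request: ", buf.Len(), "bytes")

        // Same headers again: mostly one-byte indexed references.
        buf.Reset()
        for _, h := range headers {
            enc.WriteField(h)
        }
        fmt.Println("repeat request:", buf.Len(), "bytes")
    }
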
As of February 2017, 12.3% of the top 10 million websites support HTTP/2, and that number is growing every day. HTTP/2 is a more efficient, more performant, modern protocol for the web. Your apps will run better, your websites will reach users faster, and you can stop hacking optimisation complexity into your apps to work around the limitations of HTTP/1.1.