PHP UK 2017

The Quidco team went to the PHP UK Conference 2017 for two days of great talks, networking and socials (and let's not forget those delicious tiny burgers).

We attended some really good talks about RESTful APIs, Test Driven Development, static analysis tools, architecture, deployments and brilliant opening/closing keynotes about the PHP community and open source contributions.

Day one started with a talk about the PHP community - how it has grown and gotten stronger, and how we can keep growing and improving. A very inspiring talk.

After the opening keynote we attended Real-time Communication To Simplify Your Life, which covered IoT use cases for SIM cards and endpoints, and how to achieve scalable, globally distributed communication within an application.

Probably my favourite talk from day one was Building RESTful APIs with Symfony Components, a very interesting talk about the dos and don'ts of building RESTful APIs and which Symfony components/bundles to use depending on what you're trying to achieve. It was very insightful and also very relevant to the projects we work on.

We then attended Don’t Lose Sleep - Secure Your REST, where we learned how to use JSON Object Signing and Encryption (JOSE) components to secure RESTful APIs, and Static Analysis Saved My Code Tonight, about how static analysis can bring expertise to code reviews, enforce good practices and keep the code ready for migrations to newer PHP versions.

The closing keynote, Using Open Source for Fun and Profit, was a very inspiring and absolutely hilarious talk about how open source projects not only make our lives easier, but can also help us build friendships and find support in the open source communities around them.

It was a very motivational speech that also raised awareness of the high number of developers suffering from mental health issues - it was so great to see the PHP community involved in this!

Day one ended with beers, burgers and great networking with old colleagues and fellow developers from different environments.

Day two started with I Think I Know What You’re Talking About, But I'm Not Sure, which addressed issues we face daily in the tech world: the challenge developers have in communicating with their peers, whether within the same functional team, across teams in a business, or when working with external clients. It is an often unspoken issue that usually leads to misinterpretation of specifications.

We attended JWT - To authentication & beyond! about JSON Web Tokens, a nice overview of how JWT works and how to use it, and then Preparing your Dockerised Application for Production Deployment about how to use Docker for deployments.
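For a rough idea of how a JWT actually works under the hood, here is a minimal sketch of the HS256 scheme in plain PHP - an illustration only, since in real code you would reach for an established library rather than rolling your own:

```php
<?php
// Minimal sketch of how an HS256 JWT is built and verified (illustration only).

function base64UrlEncode(string $data): string
{
    return rtrim(strtr(base64_encode($data), '+/', '-_'), '=');
}

function createJwt(array $claims, string $secret): string
{
    $header  = base64UrlEncode(json_encode(['alg' => 'HS256', 'typ' => 'JWT']));
    $payload = base64UrlEncode(json_encode($claims));

    // The signature covers the header and payload, so neither can be
    // tampered with without invalidating the token.
    $signature = base64UrlEncode(hash_hmac('sha256', "$header.$payload", $secret, true));

    return "$header.$payload.$signature";
}

function verifyJwt(string $jwt, string $secret): bool
{
    list($header, $payload, $signature) = explode('.', $jwt);
    $expected = base64UrlEncode(hash_hmac('sha256', "$header.$payload", $secret, true));

    // Constant-time comparison, to avoid leaking information via timing.
    return hash_equals($expected, $signature);
}

$token = createJwt(['sub' => 42, 'exp' => time() + 3600], 'my-secret-key');
var_dump(verifyJwt($token, 'my-secret-key')); // bool(true)
```

A real verifier would of course also validate the claims themselves (such as the exp expiry) before trusting the token.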

Silo-Based Architectures for High Availability Applications was an interesting approach to handling deployments, and Drupal 8 for Symfony Developers was interesting for former Drupal and Symfony developers (though perhaps not for everyone).

We then went to Docker, Kubernetes, and OpenShift for PHP Developers, where it was good to see another approach to how Kubernetes can be used (since we are using Kubernetes too); OpenShift in particular looks like a very convenient solution.

The day two closing keynote, Towards a framework-less world, reinforced the current trend of moving towards framework-agnostic architectures and the general realisation that we often just need to string together functional components (libraries), rather than having one big mass of code where only 30% is relevant to our projects.

The PHP UK Conference 2017 was a great experience: we learned a lot, met some awesome people, and realised that we are heading in the right direction (as we're already using quite a few of the technologies presented). Looking forward to next year's event.

7 reasons to use HTTP/2 when delivering sites to your client

HTTP/1.1 has served the web well for over twenty years, but now its age is starting to show. Loading a webpage has become more resource-intensive than ever - pages have become larger and more complex, with many assets pumped through the network. Loading all these resources efficiently has become harder, because browsers are limited to one outstanding request per TCP connection.

In order to work around the limit, both browser developers and webmasters came up with a set of performance tricks and mitigation techniques that form the foundation of “high performance websites”. The existence of techniques such as domain sharding (https://www.keycdn.com/support/domain-sharding/), data inlining, concatenation and spriting is an indication of underlying problems in the protocol itself, and these techniques can cause a number of problems of their own.

Here are some of the reasons why you should consider enabling HTTP/2 on your webserver.

  • It’s done. That’s right, the HTTP/2 spec was finalised back in May 2015 in RFC 7540 (https://tools.ietf.org/html/rfc7540). It’s the latest version of the protocol that powers the web, it has seen explosive adoption in the last couple of years, and it is here to stay.

  • It’s backwards compatible. Existing HTTP/1.1 clients will continue to operate normally, while clients supporting the new version can request the connection to be upgraded to version 2, thus taking advantage of the new protocol.

  • It’s supported by most popular browsers. (http://caniuse.com/#feat=http2)

  • It’s easy to enable. Both Apache and Nginx can turn on HTTP/2 support with just a couple of lines of configuration (see the sketch after this list).

  • It’s binary, instead of textual. Binary protocols are more efficient to parse, more compact “on the wire” and make the protocol much less error-prone, since there is only one way of doing things and no ambiguity.

  • It’s multiplexed, instead of ordered and blocking. HTTP/1 has a problem called “head-of-line blocking” (https://en.wikipedia.org/wiki/Head-of-line_blocking), where a request can block all other requests until it is fulfilled. To work around this, browsers open four to eight connections per origin in order to process requests in parallel. Many websites use multiple origins, which can mean that more than fifty connections are opened for a single page. This unnecessarily wastes network resources by duplicating data in each connection, and effectively diminishes the congestion control mechanisms built into TCP, which hurts both performance and the network. Because of this limitation it has become the industry norm to make as few HTTP requests as possible. Multiplexing addresses these problems by allowing multiple request and response messages to be in flight at the same time over a single TCP connection per origin.

  • Its headers are safely compressed. There have been documented attacks (https://en.wikipedia.org/wiki/CRIME_(security_exploit)) in the wild against TLS-protected HTTP resources, where it’s possible for an attacker who can inject data into the encrypted stream to recover portions of the plaintext. This can lead to the extraction of authentication cookies or headers. HTTP/2 employs a compression scheme resistant to this class of attacks while delivering reasonable compression efficiency. This makes per-request overhead much cheaper, since large headers such as cookies often form a significant part of the overhead, which in turn leads to lower latency because fewer roundtrips are required.

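As a rough sketch of the “easy to enable” point above: in nginx (1.9.5 or later, built with the HTTP/2 module) it comes down to one extra keyword on the listen directive of an existing TLS server block - the hostname and certificate paths below are placeholders, and note that browsers only speak HTTP/2 over TLS:

```nginx
# Existing TLS server block - adding "http2" to the listen directive
# is enough to enable HTTP/2 for clients that support it.
server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;
}
```

Apache is similarly brief: load mod_http2 and add "Protocols h2 http/1.1" to the relevant virtual host.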
As of February 2017, 12.3% of the top 10 million websites support HTTP/2, and this number is growing every day. HTTP/2 is a more efficient, more performant, modern protocol for the web. Your apps will run better, your websites will reach users faster, and you can stop hacking optimisation complexities into your app to work around the limitations of HTTP/1.1.

Functional programming in PHP (UK Symfony meetup)

Excellent talk given by Zsolt Sende at the UK Symfony meetup last week.  The first iteration of this talk - for which the slides can be accessed here - was delivered internally at the MSM London office just before Xmas and refined before being presented at the meetup.  Interesting subject content and good delivery in both cases.

https://www.meetup.com/symfony/events/236839687/

(Video to be uploaded at a later date).

"How useful is code coverage" - gamification metrics and code quality

"How useful is code coverage" - gamification metrics and code quality

Anyone who has ever written tests in JUnit, PHPUnit, PyUnit or Karma knows the joy of hitting that 100% code coverage mark, reflected in reports presented by their CI tool or generated directly by Clover or Cobertura. It's a rewarding feeling to be sure... but the terrifying question is just how valuable this 100% code coverage figure is to the quality of our software product.
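To make the question concrete, here is a contrived PHPUnit-style sketch (the class, method and numbers are invented for illustration): the test below executes every line of the method, so coverage reports 100%, yet it never exercises the behaviour that actually matters.

```php
<?php
use PHPUnit\Framework\TestCase;

class Discount
{
    // Intended to cap discounts at 50%, but the cap is wrong.
    public function apply(float $price, float $rate): float
    {
        $rate = min($rate, 0.9); // bug: the cap should be 0.5
        return $price - ($price * $rate);
    }
}

class DiscountTest extends TestCase
{
    public function testApply(): void
    {
        $discount = new Discount();

        // This single assertion executes every line of apply(),
        // so the coverage report happily shows 100%...
        $this->assertSame(50.0, $discount->apply(100.0, 0.5));

        // ...but nothing tests rates above the cap, so the bug
        // in the min() call above sails through unnoticed.
    }
}
```

In other words, coverage tells us which lines ran, not whether we asserted anything meaningful about them.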

Segway Slalom and Mopane Worm Mayhem

For our monthly team building we decided to escape our comfort zones and try something new - a Segway tour at Moses Mabhida Stadium. We were fortunate to have our London CTO join us for the team building, so we decided a whirlwind tour of the Durban beachfront promenade would be a great way to show off our city and try our hand at an adrenaline-fuelled, techie activity.

Consumer-driven contracts

Last week I attended a five-day conference in Austin, Texas, learning all things open source and attending some great talks and tutorials. OSCON is the largest open source conference in the world, attended by thousands of developers, CTOs and IT managers looking to further develop their knowledge and skill sets.

After a 10-hour flight, I thought I would stretch my legs and see the sights of Texas's capital. However, this turned out to be a bad decision: I got caught in a thunderstorm and had a 40-minute walk back to the hotel in the rain.

The third talk of the first day was on transitioning to microservices (http://conferences.oreilly.com/oscon/open-source-us/public/schedule/detail/49952), a hot trend in the tech industry today, and the lecture hall was packed. A very interesting subject that was discussed was 'consumer-driven contracts'.

Having recently completed a project building an SOA microservice platform at Quidco, with that platform now also used by Shoop.fr, we have faced issues where a change made within the platform impacts functionality on the website or the admin, even though we want to release continuously and freely. After listening to the talk, I felt that consumer-driven contracts would be an efficient approach for the platform team to check against before releasing any changes.

Background 

Consumer-driven contracts (CDC) is a pattern for specifying and verifying interactions between different parts/modules of an application. Consumer-driven means that it is the responsibility of the Consumer to specify what interactions it is relying on, as well as their format. Other services must then agree to these contracts and ensure that they are not breaking them. This puts the responsibility on the consumer of the service (in Quidco's case, the team who built and manage the admin and site) to define the coverage and write tests for the platform team to run after they make a change. If the tests pass, the platform team can proceed with pushing the changes out; if the tests fail, they need to fix the failure before proceeding. When there are changes in the admin or front-end (the consumer), the tests need to be updated and maintained.

Without the necessary precautions, there are a lot of ways interactions between services and consumers can be broken by changes made in the different services. The most common one is that the Provider changes its interface in such a way that the Consumer can no longer interact with it.

Examples:

  • Change of the endpoint URL (e.g. GET /stockLevels renamed to GET /stockLevel)
  • Change in the expected parameters (e.g. GET /stockLevels expecting a new mandatory “category” field)
  • Change in the response payload (returns an array, instead of having an array wrapped in an object)

The concept behind CDCs is to split the work of the integration tests right in the middle, i.e. at the communication stage between the two services; a minimal sketch follows the lists below.

  • The consumer defines what it expects from a specific request to a service
  • The provider and the consumer agree on this contract
  • The provider continuously verifies that the contract is fulfilled

This implies a few things:

  • Consumers need a way to define and publish contracts
  • Providers need to know about these contracts and validate them
  • Consumers and providers might have to agree on some form of common state (if the provider is not stateless or depends on other services)
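Here is the promised sketch, in deliberately minimal PHP - the endpoint, fields and base URL are all invented for illustration, and a real setup would use a proper tool such as Pact (mentioned below). The consumer publishes a small JSON contract describing the request it makes and the response shape it relies on; the provider's pipeline replays that request and checks the response against it before any change ships.

```php
<?php
// Hypothetical contract published by the consumer team alongside their code.
$contract = json_decode('{
    "request":  { "method": "GET", "path": "/stockLevels" },
    "response": { "requiredFields": ["sku", "quantity"] }
}', true);

// Replay the contract's request against a running instance of the provider.
$url     = 'http://localhost:8080' . $contract['request']['path'];
$context = stream_context_create(['http' => ['method' => $contract['request']['method']]]);
$items   = json_decode(file_get_contents($url, false, $context), true);

if (!is_array($items)) {
    fwrite(STDERR, "Contract broken: no parseable response from provider\n");
    exit(1);
}

// Verify the response still has the shape the consumer relies on.
foreach ($items as $item) {
    foreach ($contract['response']['requiredFields'] as $field) {
        if (!array_key_exists($field, $item)) {
            fwrite(STDERR, "Contract broken: missing field '$field'\n");
            exit(1); // fail the provider's build before the change is released
        }
    }
}
echo "Contract verified\n";
```

If the platform team renamed /stockLevels or dropped a field, a check like this would fail in their pipeline - exactly the early warning described above.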

Since learning about CDC I have shared it with Quidco's frontend lead, explaining how CDC can reduce the reliance on heavy end-to-end or large-scale integration testing. At the conference I was told about Pact, a great tool for implementing CDC (https://github.com/realestate-com-au/pact).

Sean Harrison

IT PM

Hypermedia in RESTful APIs

In the last few years of my IT experience, I have noticed how every company wants to build microservices, perhaps by splitting up an already existing monolith. The advantages of having smaller, independent services with a single responsibility, communicating with each other through RESTful APIs, are widely known.
However, very few companies - startups and corporations alike - pay attention to an aspect of microservices which, in my opinion, is very important: how to represent data to the external world.
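As a taste of what that representation can look like, here is a sketch of a HAL-style (application/hal+json) response in plain PHP - the resource and link names are invented for illustration. The idea is that the payload carries hypermedia links telling the client what it can do next, instead of the client hard-coding URLs:

```php
<?php
// Sketch of a HAL-style representation of an order resource.
// Alongside the data, "_links" advertises related actions the
// client can discover and follow at runtime.
$order = [
    'id'     => 123,
    'status' => 'pending',
    'total'  => 49.99,
    '_links' => [
        'self'    => ['href' => '/orders/123'],
        'cancel'  => ['href' => '/orders/123/cancellation'],
        'payment' => ['href' => '/orders/123/payment'],
    ],
];

header('Content-Type: application/hal+json');
echo json_encode($order, JSON_PRETTY_PRINT);
```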

Symfony London - Microservices talks

Last week, several members of the Maple Syrup team took part in the Symfony London meetup near Bank, which focused on the very hot topic of microservices.  Overall, we found it was an educational and thoroughly enjoyable experience.  Informative, well-delivered talks as well as collaborative discussions with members of the thriving London tech community are their own reward.

Macbook cannot access certain sites (using eset mac antivirus)

I recently came across some strange behaviour on a couple of OS X MacBooks. Both laptops could access some websites but not all, and both Safari and Chrome were affected.

After a while of digging around I realised it was sites on port 80 that I had problems with (so SSL sites were not affected). The problem was present regardless of which wifi or wired network I was on. The browser would request the page and eventually it would time out, and sometimes the laptop would become unresponsive (especially with Chrome, which refreshes pages in the background).

The laptops were new and running ESET mac internet security. Having looked at the ESET logs I realised that the ESET proxy process was crashing on requests on port 80 and getting restarted, stopping my browser from working, until eventually my laptop would become slower and slower or crash. The error logs showed "ESET Daemon Child process proxy did not handle signal restarting", as per the screenshot below.

To make things more frustrating, switching off web access protection from the ESET antivirus UI didn't fix the problem on its own. As part of the fix, I did switch off web access protection:

eset 1.png

I also had to go to web access protection -> setup and select a port not used by the browser (I set it to port 79; make sure you select a port that is not in use), as per the screenshot below.

I waited a few minutes for the ESET proxy processes to die, and the browsers started working fine; I have not had any issues with browsing or slowness since.