Perception drives end-user experience. Ryan Bateman and I came across a very interesting article by Bryan Gardiner in Wired describing some of the science behind waiting for a page to load. At Dynatrace we are really passionate about this topic, and it got us chatting. We constantly talk to customers about best practices for measuring customer experience and what users perceive, and Google engineer Ilya Grigorik's presentation, Performance on Rails, immediately came up. In it, he outlined how humans perceive delay:
What this shows is that after a one-second delay, users have already started thinking about something else. When we apply this to end-user experience, the result is clear: faster response times are paramount. Mental context switches change how end users convert, and that can impact revenue. It just so happens that with Dynatrace we can do something about speeding up end-user experience.
The problem in measuring this is that there are a lot of blue pills (to borrow a scene from The Matrix) among the metrics used to represent what a user experiences. I've seen organizations use metrics like DOM Interactive (when the browser allows the UI to be interacted with), DOM Complete (when the browser has executed all page logic), and window.onload (when the browser considers all assets loaded). There are arguments for and against each of these metrics. Many of the arguments against boil down to this: in a single-page world, where asynchronous Web 2.0 functionality dynamically changes the page without reloading it, these metrics are no longer relevant.
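These classic metrics all come from the W3C Navigation Timing API. As a rough sketch (the helper below operates on a plain object mirroring a `PerformanceNavigationTiming` entry so it is easy to test; in a browser you would pass `performance.getEntriesByType('navigation')[0]`):

```javascript
// Sketch: reading the classic page-load metrics from a Navigation
// Timing entry. Timestamps are milliseconds since navigation start.
function pageLoadMetrics(nav) {
  return {
    domInteractive: nav.domInteractive, // UI can be interacted with
    domComplete: nav.domComplete,       // all page logic executed
    onLoad: nav.loadEventEnd,           // window.onload finished
  };
}

// Example with a mocked timing entry (values in ms):
const metrics = pageLoadMetrics({
  domInteractive: 850,
  domComplete: 2100,
  loadEventEnd: 2350,
});
console.log(metrics); // { domInteractive: 850, domComplete: 2100, onLoad: 2350 }
```

Note that none of these three numbers tells you what the user actually saw on screen, which is exactly the blue-pill problem.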
There have been some efforts to provide better metrics for measuring what an end user perceives, and metrics like Time to First Paint are at the forefront. Time to First Paint measures when the browser first renders something visible in the window. Metrics like this are a good start because they are driven by what is being presented to the end user rather than by what the browser is doing internally.
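In modern browsers, first-paint data is exposed through the W3C Paint Timing API via `performance.getEntriesByType('paint')`. A small sketch, written against a plain array of entries so it can be exercised outside a browser:

```javascript
// Sketch: extracting Time to First Paint from Paint Timing entries.
// In a browser, the entries come from performance.getEntriesByType('paint').
function firstPaintTime(paintEntries) {
  const entry = paintEntries.find((e) => e.name === 'first-paint');
  return entry ? entry.startTime : null; // ms since navigation start
}

// Mocked entries, shaped as the browser would report them:
const firstPaint = firstPaintTime([
  { name: 'first-paint', startTime: 640 },
  { name: 'first-contentful-paint', startTime: 710 },
]);
console.log(firstPaint); // 640
```

A first-paint of 640 ms on a page whose onload fires at 2,350 ms illustrates how different the two perspectives can be.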
The real-world example above shows a company that successfully focused its efforts on decreasing load time, but only modestly improved First Paint Time, which is what actually shapes users' perceptions.
Platforms like Dynatrace offer organizations a red pill, "showing them how deep the rabbit hole goes". Our platform provides a measure called Perceived Render Time (PRT). PRT is calculated from a number of factors, including the browser's screen resolution (the initial browser window) and the number of images visible in the browser's viewport.
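To make the idea concrete, here is a deliberately simplified sketch of the concept, not Dynatrace's actual PRT algorithm: approximate perceived render time as the latest load-complete time among the images that fall inside the initial viewport.

```javascript
// Illustrative sketch only -- NOT Dynatrace's actual PRT calculation.
// Approximate "perceived render time" as the latest load-complete time
// among images visible in the initial viewport.
function approximatePRT(viewportHeight, images) {
  // Keep only images whose top edge starts above the fold.
  const aboveTheFold = images.filter((img) => img.top < viewportHeight);
  if (aboveTheFold.length === 0) return 0;
  return Math.max(...aboveTheFold.map((img) => img.loadEnd));
}

const prt = approximatePRT(900, [
  { top: 120, loadEnd: 800 },   // hero image, above the fold
  { top: 450, loadEnd: 1100 },  // slowest image the user can actually see
  { top: 2400, loadEnd: 4000 }, // below the fold -- ignored
]);
console.log(prt); // 1100
```

The key design point survives even in this toy version: the 4,000 ms image below the fold inflates page load time but has no bearing on what the user perceives.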
This metric gives organizations a better understanding of what a user perceives as a page loads, even while a lot of page execution is still happening in the background and "below the fold" (an old newspaper publishing term: content "above the fold" is what appears on the top half of a folded front page, and everything else sits below it).
In the example above you can see an event where page load times increased but PRT did not. For an operations team monitoring only load time, this could have been a costly event; an enlightened team managing by comparing PRT to load time, however, would know that the event may not have affected end users' perception at all.
PRT is being adopted by many of our business users as the main metric to manage by. Page load times do not reflect what users perceive; they reflect how long a page takes to load. These are not the same thing. Factors like the position of an image, the size of an image, image caching, and server responsiveness can all impact PRT.
PRT is definitely giving business users better insight into what their customers perceive, but Dynatrace also offers a User Experience Index. Individual page loads are important, but they need to be considered in the context of a complete visit. The User Experience Index includes not only load timing but also server-side contribution, network contribution, client-side errors, and more. It is a more holistic view into user experience than any single metric.
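To show what "holistic" means in practice, here is a purely hypothetical sketch of a composite index. The inputs mirror the factors named above, but the weights and formula are invented for illustration; they are not Dynatrace's actual User Experience Index.

```javascript
// Hypothetical composite experience score -- weights are invented for
// illustration only, not Dynatrace's real User Experience Index formula.
function experienceIndex(visit) {
  let score = 100;
  score -= Math.min(visit.avgLoadTimeMs / 100, 40); // load timing, capped
  score -= visit.serverErrors * 10;                 // server-side failures
  score -= visit.jsErrors * 5;                      // client-side errors
  score -= Math.min(visit.networkMs / 200, 15);     // network contribution
  return Math.max(0, Math.round(score));
}

const index = experienceIndex({
  avgLoadTimeMs: 2000, // average page load across the visit
  serverErrors: 0,
  jsErrors: 1,
  networkMs: 400,
});
console.log(index); // 100 - 20 - 0 - 5 - 2 = 73
```

The point of a composite like this is that a visit with fast pages but repeated errors can still score poorly, which a single timing metric would miss.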
As demonstrated by the Wired article and the Google presentation, interest in this topic is picking up, which our team finds both exciting and gratifying. These pieces affirm both the importance of monitoring end-user experience and the level of complexity it involves.