Perceived speed


Ten years ago my most commonly used operating system was BeOS (with *nix second and Windows third). I’ll try not to wax nostalgic about that particular OS (as much as I’d like to), but I vaguely remember a quote from the Be CEO Jean-Louis Gassée that it was not so much the actual speed of the OS that mattered as the perceived speed of the OS to the user. Of course, research into thresholds for computer responsiveness goes back at least to Robert Miller’s 1968 paper Response Time in Man-Computer Conversational Transactions, but Gassée’s point was more about the way BeOS had been built in such a way that UI responsiveness was maintained regardless of what other processes were running at the time.

Still, whilst responsiveness thresholds have been known for a long time, it took my first dual processor system for me to be able to use Windows without the frustration of random processes temporarily locking up the UI. Anybody who ever used Windows 95 will surely remember the dreaded “floppy drive selected UI lockup”: if the floppy drive was accidentally selected in the drive dropdown, the UI would lock up whilst some other part of the OS did the disk seek. Not a good user experience.

The same problem is still apparent today in devices ranging from expensive LCD and plasma televisions, mobile phones and set top boxes to ATMs… given a selection of ATMs, my choice is always the one that isn’t St George. Whilst this aversion could partly be attributed to some other quirks in the UI, I think the primary reason is that St George ATMs respond to my key presses slowly enough that I’m often unsure whether they’ve registered my input. Not a good user experience. Using one of these ATMs is not only slow but also annoying, in the same way that a poorly dubbed movie is annoying.

Web browsers have usually been reasonably good at feeling responsive even though it often takes a long time for them to fetch the actual UI and content of the site I’m trying to visit. It’s been a long time since I’ve used Mosaic so I can’t be sure, but every browser I can remember using has kept the UI responsive and provided feedback about what it was doing. There have been exceptions in most browsers I’ve used, but in general one of the things I’ve liked in every browser is that if a website is loading too slowly, I can hit stop, refresh, or simply leave the site and go elsewhere.

So, browsers have usually done this stuff pretty well by default. The websites they serve have varied wildly in how they handle things internally though. Hidden PDF downloads, Flash loaders that bear no relation to how long things will actually take, and poorly implemented iframes (the browser says something is loading but nothing is happening) are some of the offenders. JavaScript code has been another offender, not due to the nature of the language itself but simply because it gives the power to create interaction other than page loads, which means that the browser’s good UI responsiveness and feedback can be subverted.

So, to JavaScript then. What are some things that a UI developer working in JavaScript and interacting with the DOM can and should do to maintain UI responsiveness? I’m going to look at this in terms of the three basic thresholds from Miller’s paper.

One tenth of a second is the limit for the user to think the system is responding immediately

This generally means that ajax requests will usually not feel instantaneous unless you are caching, memoising or predictively prefetching the responses.
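
As a rough sketch, memoising ajax responses could look something like the following, where fetchData stands in for whatever ajax wrapper is in use:

var getData = (function() {
    var cache = {};

    return function(url, callback) {
        if (cache.hasOwnProperty(url)) {
            // a cached response means the callback can fire without a
            // round trip, keeping us under the one tenth of a second limit
            callback(cache[url]);
            return;
        }
        fetchData(url, function(response) {
            cache[url] = response;
            callback(response);
        });
    };
}());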

Ajax requests aside, if a JavaScript event handler has to do some processing, provide feedback to the user before doing that processing. So, instead of this:

something.onclick = function() {
    // do some intensive processing
    alert("UI response");
};

It’s better to do this where possible:

something.onclick = function() {
    alert("UI response");
    // do some intensive processing
};

Of course, processing can often be done well in advance of the event handler being fired, which is also good. Additionally, whilst JavaScript is single threaded, functional techniques (and setTimeout) can be used to fork processing in terms of how the user perceives the UI.
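
To illustrate the setTimeout approach, a long-running job can be broken into chunks so that control returns to the browser between chunks; here items and processItem are stand-ins for the real work:

something.onclick = function() {
    alert("UI response");

    var items = []; // stand-in for the real work queue
    var index = 0;
    var processNextChunk = function() {
        // process a batch of up to 100 items, then yield
        var stop = Math.min(index + 100, items.length);
        while (index < stop) {
            processItem(items[index]);
            index += 1;
        }
        if (index < items.length) {
            // give the UI a chance to repaint and respond before continuing
            setTimeout(processNextChunk, 10);
        }
    };
    setTimeout(processNextChunk, 10);
};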

Selecting the most appropriate event to fire on is also important. In many cases it’s better to use onmousedown than onclick, as the former fires something like 80 milliseconds earlier than the latter.
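
For example, immediate visual feedback can hang off mousedown whilst the click handler does the actual work:

something.onmousedown = function() {
    // feedback fires as soon as the button goes down
    this.className = "pressed";
};
something.onclick = function() {
    this.className = "";
    // act on the input here
};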

Another point is to avoid querying the DOM repeatedly to determine the state of the UI. In many cases the current state can be modelled in code: many UI responses are based on simple mathematical formulas (or can be approximated), which means it’s possible to avoid repeatedly, and expensively, querying DOM elements for their current state. In Gang of Four terms, we’re basically talking about a mediator.
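
As a minimal sketch of that idea, a hypothetical accordion can track which panel is open in code rather than reading it back out of the DOM on every interaction:

var accordion = (function() {
    var panels = document.getElementById("accordion").getElementsByTagName("li");
    var openIndex = 0; // the state lives here, not in the DOM

    return {
        open: function(index) {
            if (index === openIndex) {
                return; // no DOM query needed to know nothing changed
            }
            panels[openIndex].className = "";
            panels[index].className = "open";
            openIndex = index;
        }
    };
}());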

1.0 second is about the limit for the user’s flow of thought to stay uninterrupted

If the UI responds to user action somewhere between one tenth of a second and one second, the user’s flow will not be interrupted, but they will not feel that they are acting directly on the system. For many UI controls like tabs and accordions I’ve found it important to have the response right down near one tenth of a second. This also applies to image galleries: whilst a cross fade can be cute, it cuts into the time the user has to wait for the response they want, and a long transition for user initiated switches will diminish the user’s feeling of acting directly on the system. Transitions longer than one second should be avoided for anything user initiated.

Personally, I tend towards user initiated actions responding within one tenth of a second and either completing immediately or transitioning to completion in under one second.

10 seconds is about the limit for keeping the user’s attention focused on the dialogue

For anything above one second, the user needs to know that the UI has registered their input and is doing something. This applies whether the delay is caused by processing or by fetching data in some ajaxy fashion. Personally I find that waiting a full second before showing a loading indicator doesn’t feel quite zippy enough, but the exact delay would need to be tested for a given application. For the purposes here I will use half a second.

One way to approach the crossover into loading-message-required territory would be something along the lines of this (incomplete) code fragment:

(function() {
    var showLoader = function() {
        // some code to show the user that something is happening
    };
    var hideLoader = function() {
        // some code to hide that same loader
    };

    something.onclick = function() {
        var t = setTimeout(showLoader, 500);

        // a call to some myapp.doProcessing function that
        // accepts a callback for when it completes
        var myCallback = function() {
            // just in case we return immediately,
            // stop the loader from appearing
            clearTimeout(t);

            // remove the loader in case it was there
            hideLoader();
        };
        myapp.doProcessing({"data": ""}, myCallback);
    };
}());

In the example above, we will display a loading message if the UI control hasn’t finished after half a second. If the loader is displayed, it is removed as soon as the UI control returns.

It should be noted that single threading means the accuracy of timers in JavaScript is inversely proportional to the amount of processing that is happening. So, if a transition needs to complete in under one second it’s best to shoot for something less than one second. In addition, it’s important to understand how timers work and the difference between setInterval and setTimeout but instead of also covering that here, I’ll refer to a post from John Resig.

In the example above, adding a further timeout at 10 seconds to handle longer response times (and possible errors) is left as an exercise for the reader.
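
As a starting point for that exercise, a second timer could sit alongside the first, assuming the same showLoader and hideLoader helpers as before; notifyUserOfDelay here is a hypothetical function that warns the user or treats the response as a probable error:

something.onclick = function() {
    var loaderTimer = setTimeout(showLoader, 500);
    var slowTimer = setTimeout(notifyUserOfDelay, 10000);

    myapp.doProcessing({"data": ""}, function() {
        // the response arrived, so neither message is needed any longer
        clearTimeout(loaderTimer);
        clearTimeout(slowTimer);
        hideLoader();
    });
};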

More than 10 seconds

Anything that takes more than ten seconds will need to allow the user to interact with some other part of the UI whilst their input is being acted upon. There are a number of ways to achieve a file upload progress bar, but if the upload is going to take more than ten seconds, the UI should let the user get on with other things and notify them later when it’s done. Modelling this on browser download managers would be a good start.
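
A rough sketch of that pattern, with startUpload and notifyComplete standing in for whatever the application provides:

uploadButton.onclick = function() {
    // hand the long-running work off and return control immediately
    startUpload(document.getElementById("file-field"), function() {
        // fires whenever the upload eventually finishes; the user may be
        // elsewhere in the UI by now, so notify rather than interrupt
        notifyComplete("Your file has finished uploading.");
    });
    // the rest of the UI stays usable in the meantime
};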

Knowing how long it should take to generate a response

There’s a large amount of variation in responsiveness for different users depending on their individual connection and configuration. If you’re profiling the performance of your UI, it’s possible to establish a baseline responsiveness for any given action. From there, you can compare that baseline to something like the page load time for a given user and adjust the feedback provided by the UI accordingly. Comparing the page load time would be something along these lines:

(function() {
    var baselineLoad = 1800, // baseline page load in milliseconds, from profiling
        startTime = (new Date()).getTime(),
        diff;

    window.onload = function() {
        // positive means this client is faster than the baseline, negative slower
        diff = baselineLoad - ((new Date()).getTime() - startTime);
    };
}());

The diff will give some indication of the performance of the client relative to tests. In addition to this comparison, an average of response times for specific UI components could be kept throughout the page session. That is also left as an exercise for the reader.
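
As a starting point, a simple per-component running average might look like this:

var responseTimes = (function() {
    var totals = {}, // per-component running totals in milliseconds
        counts = {};

    return {
        record: function(component, milliseconds) {
            totals[component] = (totals[component] || 0) + milliseconds;
            counts[component] = (counts[component] || 0) + 1;
        },
        average: function(component) {
            return counts[component] ?
                totals[component] / counts[component] : null;
        }
    };
}());

Each UI action would then time itself with (new Date()).getTime() before and after responding, and hand the difference to responseTimes.record.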

Hopefully there were a few useful tips in there around UI performance and applying responsiveness thresholds to JavaScript powered UIs.


