Top Tasks Classification

Year in, year out, the number one reason for task failure and slow task times among customers and employees using digital environments has been confusing menus and links: overly complex and poorly designed classification and navigation. Top Tasks classification addresses this by delivering a customer architecture: a classification and navigation system designed for customers, by customers, based on their top tasks.

The following steps are involved in Top Tasks classification:

  1. Take the top tasks (those that have received the first 50% of the vote) and other critical tasks, which might include:
    • Top tasks of a key sub-category or demographic.
    • Tasks that have a very high management priority.
  2. Create a list of ideally no more than 20 tasks, with an absolute maximum of 30.
  3. Get 30-50 customers to sort these tasks into their preferred classifications.
  4. Identify classification patterns that emerge and then create a hypothetical Level 1 classification.
  5. Create between 10 and 15 task instructions based on the top tasks and then ask 30-50 customers where they would click on the hypothetical classification if they were trying to solve these tasks.
  6. The target is to achieve an 80-90% success rate.
  7. Usually, after the first round we have a success rate in the region of 60%. We discuss the results, agree on the changes we need to make to the classification, and test again.
  8. Typically, it takes three rounds of testing to achieve an 80-90% success rate.
  9. The deliverable is a top level / level 1 of your classification. The method can then be further used to design lower levels.
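The "first 50% of the vote" cutoff in step 1 can be sketched in a few lines. The following is a minimal illustration; the task names and vote counts are invented for the example:

```python
# Hypothetical vote tallies from a Top Tasks survey (task -> total votes).
votes = {
    "Research & Innovation": 520,
    "Funding": 480,
    "Working in an EU country": 400,
    "Environmental protection": 300,
    "Climate change": 250,
    "EU law": 150,
    "History of the EU": 80,
}

def top_tasks(votes, share=0.5):
    """Return the highest-voted tasks that together receive
    the first `share` of all votes cast."""
    total = sum(votes.values())
    ranked = sorted(votes.items(), key=lambda kv: kv[1], reverse=True)
    selected, running = [], 0
    for task, count in ranked:
        if running >= share * total:
            break
        selected.append(task)
        running += count
    return selected

print(top_tasks(votes))
```

The selection walks down the ranked list and stops once the cumulative vote reaches the chosen share, so a small set of tasks at the top of the ranking makes the cut.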

European Union case study

The European Union wanted to create a single, unified classification for its digital environment, spanning 28 countries and 24 languages. The first and most crucial step is to create the top level / level one of the classification. To do this, we start by getting a sample of customers to sort the top tasks.

  • Based on the Top Tasks results we selected about 30 tasks for sorting. (See preceding image for a selection of these tasks.)
  • There were almost 80 tasks in the EU survey, but we only selected the top 30 tasks for sorting so that the sort would reflect customers’ top tasks.
  • However, all tasks will find a home in the classification. It’s just that the tasks that didn’t get such a big vote from customers will be found at lower levels of the classification.

Customers drag tasks from the left column into the center, create class groups and then name these groups. Once we get 30-50 people to do this, class grouping trends begin to emerge.

The following chart shows the type of data we now get.

We begin to see patterns in relation to how customers mentally organize things. For example, people very strongly group “Climate change, global warming” and “Environmental protection.” They also very strongly group “Find a job in another EU country” and “Working in an EU country (rights, permits, benefits)”.
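One simple way to surface these grouping patterns is to count how often each pair of tasks lands in the same group across card-sort sessions. The sketch below uses invented session data; real analysis would run over the 30-50 recorded sorts:

```python
from collections import Counter
from itertools import combinations

# Hypothetical card-sort sessions: each participant's groups of tasks.
sessions = [
    [{"Climate change", "Environmental protection"},
     {"Find a job", "Working in an EU country"}],
    [{"Climate change", "Environmental protection", "Working in an EU country"},
     {"Find a job"}],
    [{"Climate change"}, {"Environmental protection"},
     {"Find a job", "Working in an EU country"}],
]

# Count, for every pair of tasks, how many participants grouped them together.
pair_counts = Counter()
for groups in sessions:
    for group in groups:
        for a, b in combinations(sorted(group), 2):
            pair_counts[(a, b)] += 1

n = len(sessions)
for (a, b), c in pair_counts.most_common():
    print(f"{a} + {b}: grouped by {c}/{n} participants")
```

Pairs grouped together by a large share of participants are candidates to sit under the same level 1 class.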

Testing the top tasks classes

We apply a series of rules we have developed over the years to the sorted data and come up with a hypothetical level one classification. The following table shows what was arrived at for the European Union.

The next step is to test this hypothetical classification with task instructions based on the top tasks. The objective is to get an 80-90% first click success rate. In other words, when customers have a top task, we want them to click on the right top-level link / class 80-90%+ of the time. That’s how we judge that the classification is intuitive and useful. We will usually create about 15 task instructions to test. There follows an example of such instructions:

  • The top task is Research and Innovation. It is the number 2 most popular task amongst customers.
  • On the left of the preceding image are two instructions based on this top task. On the right, the green shaded box reflects where we expect people to click.
  • We create 15-30 such instructions.
  • We then go back out to another sample of customers and ask them to select where they would click first for each of the instructions in order to get the answer.
  • The target is an 80%-90% first click success rate. In other words, if people are choosing the expected classification 80%-90% of the time, we have the evidence that this is an easy-to-use classification.
  • However, typically after the first round of testing these task instructions we get a success rate of roughly 60%. We have to figure out what’s wrong, try to fix it, and then go out and do a second round of testing.
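The first-click success rate itself is a simple proportion: the share of participants whose first click landed on an accepted class. A minimal sketch, with click data invented for illustration:

```python
def first_click_success(clicks, correct):
    """Share of participants whose first click landed on an accepted class."""
    hits = sum(1 for c in clicks if c in correct)
    return hits / len(clicks)

# Hypothetical first clicks for one task instruction.
clicks = ["Funding", "Research & Innovation", "Funding", "Funding",
          "About the EU", "Funding", "Funding", "Funding"]

rate = first_click_success(clicks, correct={"Funding"})
print(f"{rate:.0%}")  # prints 75%, below the 80-90% target
```

Because `correct` is a set, a class that is later ruled "also correct" (as happens with twins) can be added without changing the calculation.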

There now follow some results from the classification testing for the European Union:

  • This question only had a 48% success rate in the first round of testing.
  • By round two, however, success had jumped to 92%. What happened?

  • We expected that people would click on Funding for this task. 46% did. However, 49% clicked on Research & Innovation.
  • This is what we call a “twin”. When it comes to customer journeys, there are often two dominant mental models. Here we see that some people think of this as a funding task, whereas others think of it as a research and innovation task.
  • Thus, we need to make the Research & Innovation link correct as well, and design a route to the answer for people who click on Research & Innovation.
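A twin can be flagged automatically: look for task instructions where two classes each attract a large share of first clicks. The threshold and click counts below are invented for illustration, mirroring the 46% / 49% split above:

```python
from collections import Counter

def find_twins(clicks, threshold=0.3):
    """Return the two classes if exactly two each attract at least
    `threshold` of first clicks for one task instruction, else None."""
    n = len(clicks)
    shares = {cls: c / n for cls, c in Counter(clicks).items()}
    dominant = [cls for cls, s in shares.items() if s >= threshold]
    return dominant if len(dominant) == 2 else None

# Hypothetical click data: 46 Funding, 49 Research & Innovation, 5 other.
clicks = (["Funding"] * 46
          + ["Research & Innovation"] * 49
          + ["Other"] * 5)

print(find_twins(clicks))  # prints ['Funding', 'Research & Innovation']
```

When a twin is found, the fix is editorial rather than statistical: accept both classes as correct and design a route to the answer from each.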

The next example shows us a different type of problem:


  • Round 1 had only a 53% success rate, but by Round 2, the success rate had risen to 78%. Why was that?

  • The initial hypothesis had a classification called Jobs. This was for jobs offered by the European Commission itself.
  • However, when people were asked to find a job in another EU country, they quite reasonably selected the classification Jobs, whereas we expected them to select Work, Live, Travel in EU.
  • After discussion, the classification was changed to Jobs at the European Union, and the success rate went from 53% to 78%.
  • 17% were still clicking on Jobs at the European Commission. However, it was argued that this could be made correct, because for many Europeans, getting a job in Brussels with the European Union was in fact getting a job in another European country.

Here’s another example:

  • In Round 1, we only had a 27% success rate. We obviously had some serious problems.
  • By Round 2, things had improved dramatically, with success rising to 83%. What had happened?

  • In the original hypothesis, there was a class called Food, Farming, Regional Development.
  • However, it was found with a number of the tasks that Regional Development was not a good fit with Food & Farming.
  • Therefore, in the next hypothesis, the old class was deleted and two new classes were created:
    • Food & Farming
    • EU Regional & Urban Development
  • We also found a twin effect with Funding & Tenders, so that was made correct.
  • As a result of these changes, the success rate increased by 56 percentage points.

We have found that typically three iterations of testing are required to get an 80-90% success level for your top tasks.


The deliverables of Top Tasks classification include:

  • Data on how customers sort the top tasks into groups and classes.
  • A thoroughly tested and highly intuitive level 1 task-based classification for your website, based on this initial sorting.
  • Solid foundation for the development of level 2 classification.
  • The ability to use the same method to develop level 2 and lower level classifications.


For a larger, complex environment, it costs about €20,000 to design one level of the classification. We also offer a training service in which we transfer the skills to you and support you every step of the way. This costs about €6,000.