Design principles

The least you actually could do

I read an article about UX the other day, in which a person made a comment that is all too common in the field of user experience: "All we need to do is pretend that we are the user and evaluate what we have done from that perspective". I am afraid I do not agree. As product designers and builders we are too biased; we know too much about the inner workings of the product. We are also most likely not part of the intended target group, so we will not use the product in an actual situation to solve an actual problem, which makes it very hard to pretend sufficiently well.

What we can do, however, is look to research results that apply to most of the population and thus to most of our users. These results are called design principles and they can help you improve your product up to a certain point. Below is the set of design principles that I use the most. They are cherry-picked from Nielsen's 10 heuristics, Shneiderman's 8 golden rules, Benyon's 10 principles for interactive design and the Usability Body of Knowledge.

Support the user's mental model...

  • Consistency - Use the same usage patterns for the different but similar features in the product. Be consistent with similar products and follow standards/conventions. Note that both conceptual and actual consistency are important.
  • Familiarity - Learn what products your intended audience is using today and follow conventions in these. Use commonly understood concepts and expressions. If the concept is brand new, find a suitable metaphor to help them transfer old knowledge to this domain. 

Help the user understand what to do...

  • Simplicity - Make the simplest design possible to solve a problem. Eliminate unnecessary steps or elements in the product to help focus the user on the task at hand.
  • Constraints - Provide constraints so that people do not try to do things that are inappropriate. In particular, people should be prevented from making serious errors through constraining allowable actions and seeking confirmation of dangerous operations.
  • Affordance - Design things so it is clear how they can be used. For instance, make buttons look like buttons and links look like links so that people will press them.

Make sure the user feels safe and secure... 

  • Visibility/Feedback - Try to ensure that things are visible so that people can see what functions are available and what the system is currently doing. Constant and consistent feedback will enhance the feeling of control.
  • Recovery - Enable easy recovery from actions, particularly mistakes and errors. This includes always giving the user a way out if a certain action was unintended.

Be nice to the user...

  • Conviviality - Design the product to be polite, friendly and generally pleasant. Nothing ruins a product more than an aggressive message or an abrupt interruption. 

We can use these design principles to clear up existing problems in our product, or simply keep them in mind when we design new things. In the former case, go through your application with one design principle in mind at a time and try to find problems to correct that relate to that specific principle. In the latter case, learn these (very few) design principles by heart, teach them to your team and incorporate them in all of your designs.

But remember: to make sure that the user experience of your product is good enough, you need to validate it with actual users of the product, otherwise it would not be called the user experience.

Measuring usability

This sounds like a very scientific subject, but let me assure you that it is not. We measure usability to make sure that we focus on outcomes instead of output. Success in product development is based on our ability to deliver what the customer really appreciates, which most of the time has no correlation with the number of features produced. The skill needed is being able to minimize the number of product features while still delivering value, and this is where measuring usability can help us greatly. It is not rocket science; the measuring does not need to be advanced or precise. We only wish to avoid the hipster designer disorder, by making sure that we are building the right thing.


UX Apprentice says:
Avoid the ‘hipster designer disorder’. It’s characterized by an intense need to create novel designs just to be different. Typically leads to unintuitive interfaces with astronomical implementation costs and low adoption.


A very good side effect of measuring is that if we present the results in a structured way, we can use them to show management (and other stakeholders) why UX is important. The results can also help us see trends and guide our design work. In this article I will present a low-cost, good-enough method for measuring usability, so that you can easily try it out and see the benefits for yourself.

Gathering needs
The outcome we are normally looking for in user experience design is to fulfil the needs of the users. The natural first step is to gather those needs by interviewing actual users (or potential users, or even competitors' users) about their current situation. I recommend the contextual inquiry protocol, which means interviewing in an authentic usage situation to capture the real needs, not only the perceived ones. Needs can be expressed in many different ways, but many can be found by drilling down into problem areas with follow-up questions such as "Why did you find this complicated?". Possible answers might be "Because I do not remember how this works" or "I never really learned it properly". Both of these indicate a problem with the learnability of the tool.

Usability qualities, like learnability, are the output we want from the needs-gathering sessions. Reshaping needs into qualities makes it easier to group slightly different needs into one quality that is easier to measure, and it also makes it easier to generalize the problem area to the product as a whole. Example qualities can be taken from the ISO definition of usability: effectiveness, efficiency and satisfaction. Other qualities can be ease of use, manageability, relevance, attitude, consistency, reliability, sense of being in control, conviviality and simplicity.

If it is difficult to get the users to express their needs, the System Usability Scale questions can come in handy. Use the questions to draw out the needs. For instance, "Did you find the system cumbersome to use?" would indicate the manageability quality, while questions 4, 7 and 10 could be grouped to form the learnability quality.
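As an aside, the standard SUS scoring scheme is easy to compute once you have the answers. A small sketch in Python (the function name is mine): odd-numbered questions are positively worded and contribute their score minus 1, even-numbered ones are negatively worded and contribute 5 minus their score, and the sum is scaled to a 0-100 range:

```python
def sus_score(answers):
    """Compute a standard System Usability Scale score (0-100).

    `answers` holds the ten responses on a 1-5 scale, in question order.
    Odd-numbered questions contribute (score - 1); even-numbered ones
    contribute (5 - score). The sum is scaled by 2.5 to reach 0-100.
    """
    if len(answers) != 10 or not all(1 <= a <= 5 for a in answers):
        raise ValueError("expected ten answers on a 1-5 scale")
    contributions = [
        a - 1 if i % 2 == 0 else 5 - a  # i is 0-based, so even i = odd question
        for i, a in enumerate(answers)
    ]
    return sum(contributions) * 2.5

print(sus_score([3] * 10))  # all-neutral answers land on the midpoint: 50.0
```

Note that the result is a score, not a percentage; a 50 does not mean "half the users were satisfied", only that the answers averaged out neutral.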

Creating a baseline
When you have a list of qualities that seem to point out needs, create a survey based on them (or use the System Usability Scale questions directly, though you might then miss some of the users' actual needs). Quantify the questions so that you can easily create a numeric baseline. The following three example questions cover effectiveness, efficiency and satisfaction respectively:

  • On a scale from 1 to 7, where 1 is that you strongly disagree and 7 is that you strongly agree,
    do you find that working with the tool produces the intended result?
  • On a scale from 1 to 7, where 1 is that you strongly disagree and 7 is that you strongly agree,
    do you complete your work quickly using the tool?
  • On a scale from 1 to 7, where 1 is that you strongly disagree and 7 is that you strongly agree,
    do you find that working with the tool is a satisfying experience?

This survey could be sent out, but it tends to be answered better if people answer the questions while actually using the tool. So I suggest you go back to the users you interviewed and ask the survey questions in person.

You will end up with both quantitative results (a number on the scale and, in the end, an average score across all the people you have interviewed) and qualitative results, gathered by asking follow-up questions to really understand why a person answered 3 on a certain question.
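As a minimal sketch of the quantitative part (the response data here is made up), computing the baseline is just averaging the answers per quality:

```python
from statistics import mean

# Hypothetical survey results: each respondent's 1-7 answers,
# keyed by the quality the question was designed to probe.
responses = [
    {"effectiveness": 5, "efficiency": 3, "satisfaction": 4},
    {"effectiveness": 6, "efficiency": 2, "satisfaction": 5},
    {"effectiveness": 4, "efficiency": 4, "satisfaction": 4},
]

# The quantitative baseline is simply the average score per quality.
baseline = {
    quality: mean(r[quality] for r in responses)
    for quality in responses[0]
}
for quality, score in baseline.items():
    print(f"{quality}: {score:.2f}")
```

The qualitative follow-ups do not fit in a dictionary, of course; keep them as notes alongside the numbers.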

Visualizing the baseline
To communicate and analyze the results efficiently, I strongly suggest that you visualize them. After calculating the average score for each question, and combining some scores to form a specific quality, you can plot the qualities in a chart, such as a radar chart.

Here, it is easy to see in what areas your product is inadequate. Thus, you have a baseline for your future work.
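Any charting tool can draw the radar chart itself; as a library-agnostic sketch, here is how the polygon vertices could be derived, spreading one axis per quality evenly around a circle and using the normalized score as the radius (the function name and scores are illustrative):

```python
from math import cos, sin, pi, tau

def radar_points(scores, max_score=7):
    """Map quality scores to (x, y) vertices of a radar-chart polygon.

    Axes are spread evenly around the circle, starting at 12 o'clock;
    each score is normalized against `max_score` to give the radius.
    """
    n = len(scores)
    points = []
    for i, score in enumerate(scores.values()):
        angle = pi / 2 - tau * i / n   # clockwise from the top
        radius = score / max_score
        points.append((radius * cos(angle), radius * sin(angle)))
    return points

baseline = {"effectiveness": 5, "efficiency": 3, "satisfaction": 4}
for (quality, score), (x, y) in zip(baseline.items(), radar_points(baseline)):
    print(f"{quality}: score {score} -> vertex ({x:.2f}, {y:.2f})")
```

Plotting the baseline and a later measurement as two polygons in the same chart makes the comparison immediate.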

Setting a target
To use the baseline as a guide when designing, you will need to set a target criterion for each quality. The target is there to have something to aim at, to set a focus for the design work. In reality, reaching the target will not automatically mean success. You will have to evaluate to find out if you are in a good spot or if you need to carry on. 

Setting the target criteria for the qualities is a team effort. Analyze the radar chart with your UX team, including the product owner, and decide upon a target level for each quality. Make sure that these targets are not set too high, as that will make them unreachable (for instance, setting every quality's target to 7 is not plausible). Remember that some qualities may affect each other, like simplicity and maturity/feature completeness. Use your experience and gut feeling as a UX:er. Then plot a target line in the radar chart.

Moving forward
When something new is designed, create a simple prototype for the user to try out. It is important to let the user interact with the new design in a way that is as close as possible to the current product; merely discussing a new design is not enough to give value. Start the session with the user by going through the current interaction (to solve some task). Then go through the new design with the prototype, performing the exact same task. After that, let the user answer the survey again, but for the new design. Add the result to the radar chart and you will easily make out in which areas the new design made an impact.

This chart can be updated every time a new feature is added to the new version, showing the old version, the target criteria and the latest version. There can be added value in plotting the individual qualities over time, to follow up on which changes made a specific quality better or worse.

Of course, what we are seeing here are only trends, especially if the study participants are few, but trends can tell a lot if updated regularly. I try to have short meetings with around 5 to 10 users (asking them to fill out the survey) at least every second month, depending of course on the speed of churning out new designs and features, both to get short feedback loops and to create reasonably accurate trend curves. Showing trend curves to product management can give them good guidance in the product development cycle, as well as proof that user experience design is valuable. Before you know it, they will be showing the graphs to everyone else, to show how great the products are.

End note
There is certainly value in measuring usability, but at the same time it is really important to do the least amount of work needed to get that value. I hope my suggestions and way of working will help you in this venture.

User stories reimagined

Since the concept of user stories was introduced it has been used (and misused) in so many ways. The stories were intended to replace formal requirements (that were, for practical reasons, not followed anyway) and use cases (that were way too cumbersome to write). A story is supposed to be a promise to have a conversation around the subject of the story, to eliminate misunderstandings through speech, not writing. This means that a user story can be very high-level and somewhat abstract, and the discussion around it will make it more tangible. 

Mike Cohn wished to formalize the user story a few years back. Having a common format helps the stakeholders to prioritize compared to having a lot of different ways to write similar feature requests. His suggestion of a format is now de facto standard:

  • As a [role] I want [feature] so that [benefit]

Explaining something with a focus on who wants it helps the person reading the story put herself in that person's shoes. Mike Cohn tells a story[1] about the Beatles' popularity during the sixties. The Beatles were among the first to use pronouns in their songs: She Loves You, I Wanna Hold Your Hand, I Saw Her Standing There, I Am The Walrus, Baby You Can Drive My Car, etc. This helped people identify with the lyrics and thus get more into the music.

But, since it requires a little more thinking (and hopefully research) to find out who wants this feature and why, the story is often shortened to:

  • As a user I want [feature]

This will of course not help the reader identify with the target group of the feature. And not knowing the benefit will not help the story get prioritized. The other day I saw a story saying As a database server, I want to be quicker and, of course, nobody wants to identify with a database server. I hope the author of that story had had enough of these bad stories and was being sarcastic.

As a UX professional, I work more with target groups (often in the form of personas), and with why these target groups need a feature, than with the feature itself. This ensures that we only build the features that make sense to build from a target group's perspective, thus helping us build something that people actually want. A good approach is to use impact mapping (spoken about here, under the name of effect mapping), where you first find out the impact (or benefit) that the product you are building is supposed to make, for example earning more money for the company. Then you find out who can help you with this, the target group (for instance formulated as a persona), and in what way (their usage goal) a feature would benefit them so that the product can reach its intended impact. These impacts, target groups and goals can be added to the story structure like this:

  • As a [persona / target group]
  • I want [feature]
  • So that [benefit for the user (preferably measurable)]
  • In order to [impact / benefit for the company (preferably measurable)]

Now you may ask yourself: since the main user of a user story is a developer, why would she need to know the impacts and benefits? The answer is simple. If a developer understands why a story is being made, it is easier to decide on a solution, and it is much more fun to work knowing that you are doing something good for someone.

Here's a more practical example of the format above:

  • As Kent the Single-Parent
  • I want to always be able to lock the door from the bed
  • So that I feel secure in the hotel room
  • In order for the hotel to be able to target more customer types

Now the conversation can start. This conversation involves the product owners and developers as a minimum; user experience people and domain experts may come in handy to weed out misunderstandings (see the article about roles). The findings are formulated as short scenarios, using notation borrowed from specification by example[2] (a very similar method is called Behavior-Driven Development):

  • Scenario 1: [name / short description]
  • Given [the preconditions]
  • When [action is taken]
  • Then [consequences occur]

If we add a scenario to the example above and add measurable impacts and goals, it could look like this:

  • Feature: Remote locking of door 
  • As Kent the Single-Parent
  • I want to always be able to lock the door from the bed
  • So that I feel secure in the hotel room
  • (7 out of 10 customers should rate the hotel room security above 5 on a scale from 1 to 7)
  • In order for the hotel to be able to target more customer types 
  • (200 more customers per month)
  • Scenario 1: Lock switch function
  • Given there is a bedstand with a switch
  • When I switch on
  • Then the door is locked

The scenarios can easily be used both for BDD-style programming, for instance with SpecFlow, and as a base for quality assurance. The story cards that you put on your wall would consist of the first part, not the scenarios.
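SpecFlow binds such scenarios to .NET step definitions; as a language-neutral illustration of the same idea, here is a sketch in Python where the HotelRoom class is hypothetical and each step of the scenario maps to one line:

```python
class HotelRoom:
    """Hypothetical model backing the 'remote locking of door' story."""

    def __init__(self, has_bedstand_switch):
        self.has_bedstand_switch = has_bedstand_switch
        self.door_locked = False

    def flip_switch(self):
        # Flipping the bedstand switch locks the door, if the switch exists.
        if self.has_bedstand_switch:
            self.door_locked = True

# Scenario 1: Lock switch function
room = HotelRoom(has_bedstand_switch=True)  # Given there is a bedstand with a switch
room.flip_switch()                          # When I switch on
assert room.door_locked                     # Then the door is locked
print("Scenario 1 passed")
```

The point is that the Given/When/Then structure translates directly into an executable test, so the conversation's outcome doubles as quality assurance.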

For the kind of stories that do not have a clear target group, like the As a database server story above, we add another story type called Chore. The format is the following:

  • [Feature/Chore] is needed / required
  • So that [impact / benefit for the company (preferably measurable)]
  • Scenario 1: [name / short description]
  • Given [the preconditions]
  • When [action is taken]
  • Then [consequences occur]

Adding the So that clause to the Chore allows a more technical story to be on equal terms with a user story when it comes to prioritization. Otherwise, a developer could have a much harder time convincing a product owner that a certain action would be a very good idea.

As I said, these stories exist as a promise for a conversation, a conversation that will lead to great communication. And that is the impact that we want.


The principles of LeanUX phrase user stories as hypotheses, to force validation as soon as possible. Shorter and shorter feedback loops are better for knowing that you are always building something valuable. A LeanUX user story, combined with the baseline above, could look like this:

  • We assume that
  • As Kent the Single-Parent
  • I want to always be able to lock the door from the bed
  • So that I feel secure in the hotel room
  • In order for the hotel to be able to target more customer types 
  • We intend to prove this hypothesis by
  • Showing that 7 out of 10 customers rate the hotel room security above 5 on a scale from 1 to 7
  • Reaching more than 200 additional customers per month before the end of the year
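A hypothesis phrased with measurable criteria like these can be checked mechanically once the numbers come in. A small sketch with made-up measurements:

```python
# Hypothetical measurements gathered after releasing the new design.
security_ratings = [6, 7, 5, 6, 4, 7, 6, 6, 3, 7]  # 1-7 scale, ten customers
new_customers_per_month = 220

# "7 out of 10 customers rate the hotel room security above 5"
rating_criterion = sum(r > 5 for r in security_ratings) >= 7
# "more than 200 additional customers per month"
growth_criterion = new_customers_per_month > 200

hypothesis_validated = rating_criterion and growth_criterion
print(hypothesis_validated)  # both criteria hold for this data: True
```

If either criterion fails, the hypothesis is falsified and the story (or its solution) should be revisited, which is exactly the point of phrasing it this way.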

Read more about this way of working in the article about Continuous Discovery!

[1] Advantages of the "as a user I want" user story template by Mike Cohn

[2] The key ideas of Specification-by-example by Gojko Adzic

Continuous Discovery - The grand experiment

In agile product development, the focus usually lies on continuous delivery, i.e. releasing high-quality software fast through build, test and deployment automation. This ensures that we build the product right. Lately, I joined a group dedicated to the discovery phase of the agile development timespan. In that group we coined the term continuous discovery: making sure that we always build the right product (i.e. focusing on business value) by frequently validating hypotheses and measuring outcomes. The combination of the two is of course preferable.

In a recent project, I decided to try this approach with a little more sincerity. The usual caveats applied, so I could not go the whole way, but what I learned (and that is what matters, right?) is something we can all value. So, here is my ideal product development setup, utilizing concepts from Agile Product Discovery, Lean Startup, Agile UX, LeanUX, Jeff Patton, Jeff Gothelf and, hopefully, common sense. The foundation of this approach is the Lean Startup methodology, where the focus lies on validated learning and you consider everything an experiment.

Collaborative chartering workshop
The aim of the first activity, a workshop that usually takes around a day, is to get a shared understanding in the team¹ of why the product is being made and for whom we are building it. This is a hypothesis that we need to verify later. We use an effect (or impact) map as the base for our hypothesis. The effect map we create in this workshop consists of four sentences or paragraphs: why we are building this and what the effect will be, who will help us reach this effect, what their needs are (which we must fulfil for them to help us reach the effect), and how we are going to fulfil their needs. That was a mouthful. Here is an example:

In the example above we form several hypotheses to be validated later. It is important that they are testable. Here is one example:

To make it easier to list why we are building the product, it helps if the client has already prepared a business case (he would naturally need one to get funding for the project), or at least an elevator pitch. A good starting point is the Business Model Canvas. This gives us basic information about who we are building the product for (or rather, who can help us reach the effect we want), and we can then guess what their needs are. This is written down in an Ad-hoc Persona Template, where we list relevant characteristics and needs of the target group. These personas can be sketched out individually by the members of the team (based on their understanding) and then merged together.

The business model and the ad-hoc personas are of course also hypotheses. The former is the business analyst's responsibility to validate, the latter the UX person's. Since people in our line of work come up with technical solutions naturally, a special exercise for finding out what we can build to try to fulfil the users' needs is usually not needed. We save that exercise until we need details about features, in the next workshop.

For a slightly larger project, we usually end up with 3-4 effects that we want to reach, and perhaps twice as many target groups / personas that will help us fulfil them. As the project goes on, we will change these effect maps, add more effects and fill in the holes. We never get anywhere near full coverage in this first workshop. If we manage to build one effect map, the most important one, answering the why, the who, the what and the how, then that is good enough.

The full day is an intensive back-and-forth between team and stakeholders, iterating toward shared understanding while creating a lightweight, continuously evolving, at-a-glance vision, e.g. on a whiteboard, preferably permanently placed within eyesight of the team.

Validating the hypothesis with user research
The outcome of the collaborative chartering workshop is a hypothesis; now it is time to go out and validate it. Depending on how easy it is to get hold of potential users, these activities can take as little as a couple of days.

A few deep interviews with potential users that seem to match the personas are usually enough to get insights on the needs and how well the features would match them. If the product you are building is supposed to cover one part of a larger workflow, and that is usually the case, then it is suitable to create an Experience Map to visualize what happens before and after the user's interaction with the product, to make sure that the outcome is the correct one.

Story mapping workshop
The aim for the second workshop is to end up with a set of features that we would need to build and a draft of a release plan. This would also take around one day.

To find the features that we believe our persona would want to use, to help us reach the effect, based on the user research we have done, we use a slightly modified version of Design Studio. Here the focus is on coming up with features by sketching them, like a brainstorming session; there is no need to iterate, as you would in a normal Design Studio exercise. We are not talking about look and feel, but about what features might exist and what kind of data they would use or show, just using the medium of sketching. This gives a better understanding of every feature mentioned, compared to only writing them on post-its. The features do end up on post-its though, ready for the next part of this workshop.

The story mapping exercise, the second part of this workshop, is based on Jeff Patton's story mapping, but is built up from the user's perspective with a workflow suggestion as a base. This is later used as the foundation of an experience map-based evaluation.

The features are placed under each activity in the user's workflow, and then, with the help of the business analyst's conclusions and the developers' T-shirt estimates, they are sorted into releases, starting with a Minimum Viable Product (MVP). You might of course benefit from validating this MVP right after the workshop, but for us it has been suitable to begin immediately with the next exercise.

Design and story pampering (and delivery phase cadence)
Now that we have an idea of what our product will entail, we dig into the details of the stories using the design studio method to explore design ideas. We make sure that every story is thoroughly discussed based on these design ideas. We call this Design Pampering, and it takes around 3 hours, depending on the size of the sketching timebox and how many iterations you do. After this, the product owner and the UX designer can specify stories and create a design very quickly. The design pampering is then followed by a Story Pampering (~1 hour), where developers discuss and estimate stories with the rest of the team, for easier planning, instead of doing it on their own.

These two workshops will be revisited during the delivery phase when they are needed. We use cards representing these workshops in the next queue on our team board, so when the developers are almost out of stories, it is time for a new design and story pampering round. 

Validating the hypothesis with usage tests (and delivery phase cadence)
The value of the MVP (and subsequent releases) is validated by discussing with users, showing sketches, creating an experience map and/or testing with a simple click-through prototype. We carry out continuous usage tests during the delivery phase by setting up a cadence: for instance, we book time with users every other week to discuss or test something, whatever we might need feedback on. It is always a good idea to meet the actual users continuously, even if only a couple of them at a time. The main thing is to reach the point where it would feel out of the ordinary not to meet users regularly to validate hypotheses.

Continuous delivery and the most important cadence
Aligning our marketing and user experience strategy with the continuous delivery phase, developers use Feature Flags to enable continuous deployment, and wait to release until we have a feature set that makes sense to potential clients and current users. In the first release iteration, that is the minimum viable product; in later ones, it might be Minimum Marketable Features or bigger releases.
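A feature flag need not be more than a lookup consulted before a feature is exposed. A minimal in-memory sketch (the flag names and UI function are made up): code is deployed dark, and flipping the flag is the release.

```python
# A minimal in-memory feature-flag sketch: features are deployed dark
# and only exposed once the flag is flipped for the release.
FLAGS = {
    "remote_door_lock": False,   # deployed, but not yet released
    "keycard_entry": True,       # already part of the released set
}

def is_enabled(feature):
    return FLAGS.get(feature, False)

def remote_lock_ui():
    if not is_enabled("remote_door_lock"):
        return None              # the feature stays invisible to users
    return "Lock door from bed"

print(remote_lock_ui())          # flag is off, so nothing is shown: None

FLAGS["remote_door_lock"] = True # release day: flip the flag
print(remote_lock_ui())
```

Real systems typically keep the flags in configuration or a service rather than in code, so a release needs no redeployment at all.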

To keep the shared vision and to speed up communication, we use Story Bootstrapping and cross-functional Design Pairing when working with stories. The bootstrapping is a small session held when the developers pull the next story from the queue. During this session, the nitty-gritty details of the stories are discussed, sketched out, etc. to give the developers a good start. Whenever needed, a designer or the product owner can pair with a developer to help out with design and strategy.

During this phase, whenever we learn new things, we will update the effect map and the personas, and communicate this to the team (and other people that might benefit from it). The artifacts that we created are dynamic.  

The same can be said about the process. The most important cadence is the one for continuous improvement, where the whole team (incl. business owners/stakeholders) gather for a retrospective aimed at evaluating and improving the process. If we don't have that, we cannot call this process agile.  

End note
I am currently working in a process that has looked somewhat like this. Based on the outcome so far, from the product discovery, delivery and retrospectives, this would be my starting point for the next project. But even if I say that this is my preferred way of working right now, it might look totally different after a few retrospectives. 

I presented this at Scandinavian Developer Conference and Devlin 2013, the slides can be found on SlideShare.

1 - The team players in the workshops mentioned above are at least: a client or other stakeholder who brings the vision; a product owner or business analyst who can say what outcome can give value to the company; a UX advocate who can decide what is usable for the intended users; and someone from (or the full) technical team of developers etc. who can bring clarity to the technical feasibility of creating the aforementioned outcome. Since the workshops are to be the basis for future collaboration, the more the merrier, as long as they are pigs, to quote Scrum terminology.

Processes and practices are nothing without principles

Why inspect and adapt is the most important lesson from Agile

During the last 10 years, quite a large chunk of the developer community in Sweden has evolved with the help of Agile (and, recently, Lean). This means that almost everywhere you go, IT departments are inspired by this fairly new approach to development. Quite often, people have implemented different Agile techniques in their processes, making turnaround times shorter and collaboration better. But in too many cases, people try to do "Scrum by the book", copying the methods and techniques without questioning them, without adapting them to the situation at hand. I have seen many companies saying "Yeah, sure, we're doing Agile, we have our daily scrum every morning". You can probably guess where I am heading just by reading the title of this rant, but before that I have another tale to tell...

The User eXperience community has, through the years since its birth, been wildly influenced both by cognitive psychology research and design (as in creative, graphic design at the ad agencies) and by traditional IT methods. The psychology foundation has given UX practitioners a firm methodological base to stand on, but (and I do say but) the ad agency heritage has made UX practitioners heroes of design, working on their own, producing wireframes and other deliverables like magic. When UX:ers entered the old IT world of the waterfall method, this malpractice was reinforced; it was easy to set up shop in the silo named "The Design Phase". This way of working was also taught in the human-computer interaction courses. I am not passing blame, only telling it as I actually lived it, and taught it this way myself. Everything could be solved by just applying a bunch of techniques in a certain order; that was the message I used to preach.

So combining Agile and UX should be as simple as shoehorning the old UX methods into the Agile iterations, right? We will just do our pixel-perfect wireframes and deliver them to the developers a bit quicker than we used to. But all that shoehorning only gives us blisters. A lot of thinking needs to go into the design, we have learnt that, to build the right product. The complexity of the analysis and design phases is high; there are a lot of things that need to be settled before we deliver what shall be implemented. (This is, by the way, true even if we use an Agile method.)

So we add a long Sprint Zero before the developers start implementation, thus going back to the waterfall-style design phase where we feel comfortable. We continue throwing documents over the invisible but oh-so-tangible wall. Some teams have found that doing design one sprint ahead helps, and it does, but it is still throwing documents, albeit smaller ones.

Going back to the Agile guys from the first paragraph: they too are trying hard to make the methods work in their setting. Their management is telling them to work faster, to be able to deliver on a certain date. But things like on-time and on-budget delivery, or even high quality, are not helpful if you have built the wrong thing. I guess I said that already.

But isn't there another way of doing this, where we can keep the detailed design thinking while still moving ahead quickly? I believe that the actual problem stems from a misconception: the misconception that Agile is methods, processes and practices ... and nothing else. The Agile Manifesto exists, with its "We prefer [the left side statements] over [the right side statements]", but there is actually more. In my experience, people do not know that, so today I have taken on the role of the educator. I present to you the 12 Agile Principles:

  • Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
  • Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.
  • Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
  • Business people and developers must work together daily throughout the project.
  • Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
  • The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
  • Working software is the primary measure of progress.
  • Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
  • Continuous attention to technical excellence and good design enhances agility.
  • Simplicity - the art of maximizing the amount of work not done - is essential.
  • The best architectures, requirements, and designs emerge from self-organizing teams.
  • At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

Scrum, Extreme Programming, Kanban and the rest of the Agile and Lean methods are actually frameworks, frameworks with the purpose of upholding the principles. These principles shall lead the way to better products through things like better collaboration and motivated people. We have to understand why we are implementing a certain method. When we have understood that, we can pick the practices that we see working towards the principles and for the specific product we are working on right now. So, following principles, working smarter instead of faster, will in the end make us work faster while maintaining depth and quality.

But to make sure that we are doing the right thing, we must constantly fight. It will not be enough to take a 2-day Scrum Master course to know which methods and practices are the correct ones; we must constantly inspect and adapt. Our tool for that is 改善 - Kaizen, continuous improvement. If someone tells you that Agile is to have a Daily Scrum each morning, I can tell you they are wrong, but if they say it is to have a retrospective every week (and act on the things said there), then they are on the right track. UX practitioners can use this tool as well, to adapt their methods (we really have a lot of them) to the agile ones. Inspect and adapt.

Through inspecting (reading about or discussing methods and trying them out in an agile setting) and adapting (them to the situation at hand), I have found a few principles that seem to work for me. They might prove a good starting point for you. My goal with these principles is to deliver actual value through collaboration, building on the Agile principles.

  • Stop delivering deliverables, deliver value.

    User research and design that is continuously communicated to the team and the stakeholders gives a lot more value than a wireframe in a document. Use the Build-Measure-Learn loop from The Lean Startup to make sure that you are always on the right track, instead of trying to validate only when the product is "done". You should always look at your product as a hypothesis that can and should be validated as soon as possible. Building a minimum viable product and throwing it away if it proves to be the wrong thing to build gives validated learning, and that is more valuable than anything else. Eric Ries argues that "Success is not delivering a feature; success is learning how to solve the customer's problem".

  • Let go of the hockey rink boards and sketch early

    Short feedback loops. The quicker ideas are visualized, the faster you get feedback. Use pen and paper. Stay away from cumbersome Adobe products. Or as the Agile Manifesto says: "Simplicity - the art of maximizing the amount of work not done - is essential." Be transparent. Put your sketches on a sketchboard and let everyone give their comments. Of course, you as a UX:er are the expert, and your final decision is what goes into production, but all input is good input. Yes, the ice is slippery, but you can do it!

  • Collaborate collaborate collaborate!!! (Paraphrasing Mr Ballmer)

    Create the possibility of collaboration everywhere. This will enable you to dig in deep, to iterate the design as much as is needed, but with a lot higher speed. Dare to step out of your silos. Pair design with developers, bring them with you to usage tests, involve everyone in the product team in a design studio session where everyone can try out their ideas. The possibilities are many and should be taken. Everyone wants to work on something tangible, and what is more tangible than a mockup that you've built together? As a UX:er, you change role from being a designer to being more of a design facilitator. This will build trust and team spirit. It's not only "developers developers developers"; they can be a lot of help in situations you'd never imagined.

  • Get out of the building 

    Meet actual users. Jeffrey Liker says, in The Toyota Way, "In my Toyota interviews, when I asked what distinguishes the Toyota Way from other management approaches, the most common first response was genchi gembutsu [go and see] [...] You cannot be sure you really understand any part of any business problem unless you go and see for yourself firsthand." This should be a no-brainer for a UX practitioner, but we tend to do it at large scale and quite seldom. Have a "user day" every week, even if you do not have anything apparent to test. It's worthwhile just sitting down and talking for a while. Don't make a big thing out of it; make it normal.

  • Continuously improve

    I've given you examples of what works for me. Now, try it out yourselves. Inspect and adapt.

To shun silos

Collaborate, collaborate, collaborate!

The life of a UX designer can often be lonesome; being the only person with that set of skills on the team, or even in the company, is not uncommon. You might sit totally by yourself in your own one-person department, or you might be part of a team but working in silos.

When you are part of the team, you would think this would not happen. But the cross-functional team principle of agile is often thought to mean that every person on the team should be able to do everything to bring the product forward, i.e. produce code, seemingly mitigating risk by being able to cover for each other. This usually means that the UX designer has to code (and hence contribute!) or that all the coders together create some half-assed user experience during the development cycles, based on a delivered wireframe. This is clearly not getting the most out of the development process to actually deliver value. If you are in this situation, it is time to collaboratively produce a user experience with your team mates that will give great benefit to the customer.

Leah Buley (of Adaptive Path) came up with a design method a few years back under the name UX team of one. This method has been the foundation of my work since then and I can strongly recommend that you try it out. Its aim is to make sure that you leave your silo as much as possible, getting maximum feedback from your environment, while at the same time avoiding the design-by-committee problem. There are three phases in the method:

In my opinion, the easiest way to ideate is to work with a template. I use a 6-up template to create (at least) 6 different possible solutions to a problem. The template consists of 6 small boxes on an A4 paper. The size of the boxes forces you to sketch roughly and not consider details. Before sketching, I decide if I am doing a spectrum design (6 different sketches ranging from what fits the beginner to what fits the expert, or any other characteristic on the axis), a graph design (2-dimensional) or a grid design (1 sketch per grid element). Sometimes I use the 1-up template to show the best idea in more detail so that it is easy to understand.

After I'm done ideating (i.e. creating a lot of different versions of a solution), I put the sketches up on a wall to create a sketchboard (shown as a yellow board in the method figure above) so that everyone can see them and give feedback. This sketchboard will also contain any other kind of information or notes about the design problem, such as the user story card, specific requirements and limitations, the persona that will use this solution, etc. The reviews of the sketchboard can take different approaches. Some people just like to discuss the sketches, some use post-its to write comments, and on some occasions we use dot-voting to decide which sketches are good solutions.

The actual realization of the outcome from the sketchboard can be in any form, but I like to keep it fairly lo-fi and create prototypes in Balsamiq Mockups. These can then easily be tested with actual users and since they are still in low fidelity, the user dares to comment on them.

So, this method allows me to only deliver what gives value at the moment, such as sketches on a sketchboard for a feedback session. But, I am still working in my silo. Hence, I recommend using the collaborative Design Studio method for ideation.

The goal of Design Studio is to come up with a solid foundation for further design collaboratively in the team (where the team can consist of almost anybody, including stakeholders and developers). It can be broken down into 4 (or 5) steps.

  1. Illuminate - In the first step, the team reaches a shared vision of the problem and sets the boundaries. One way of getting there is to brainstorm about the current situation. Reaching this understanding can take a long time. My opinion is that it is of utmost importance that everyone understands the situation, so let this part take its due time.
  2. Sketch - Let everyone in the team (including you) sketch using a timebox of about 5 minutes in the second step. It is important that the sketching is quick, since giving people time gets them stuck on unnecessary details. 
  3. Present - In the third step, everyone shows their design in a short presentation, quickly followed by...
  4. Critique - An open discussion about the design, meant to churn out the key issues and inspire the other members for the next sketching iteration. The critique should focus on the few most important parts of the design.
  5. Iterate - Run the last three steps 2-4 times depending on how much time is left after step 1. Iteration is the key to finding reliable solutions. 

The overall rule for Design Studio is to never dwell on details (with the exception of the illumination phase), to get most value out of the least amount of time. After a Design Studio session, I, the UX designer, have plenty of material to work with when returning to my silo.

More information about collaborative design in agile projects (in Swedish).

Lean UX vs. Agile UX

The early adopters

People wonder what the difference is between what has been called Agile UX for a while now and the new kid on the block, Lean UX. My take on the question is that the UX designers who entered the agile community early have needed to define their own set of tools to work tightly together with the agile developers. Agile UX is all about collaboration. There are really no hints in the agile literature on how to incorporate UX. The UX designers in the agile teams have been caught in a frantic struggle to adjust their old tools and deliverables to whatever seems to fit in the agile environment. Meanwhile, the agile people (or the early adopters at least) have moved on from "old" agile methods such as XP and Scrum to Kanban and Lean methods. The new Agile UX methods of collaboration, such as cross-functional pairing and all-out team design studio sessions, work in these new settings as well.

With Eric Ries' book The Lean Startup, which has become a sort of bible in the hands of these early adopters, Ries takes the Lean approach all the way and explains how to focus on business value with the help of validated learning (the build-measure-learn loop), customer archetypes (personas), etc. This is easy for a UX designer to grasp and it brings a clear vision of what business value is. This is the base of Lean UX, which focuses on validation. The practices in Lean UX are often called common sense, because for a UX designer they are; we have been brought up on discussions about business and customer value.

Lean UX works in agile environments, and that is only because Lean and Agile are closely related. If you are receptive to change (a lot of people aren't, especially some developers, which is why Kent Beck's first XP book, aimed at developers, is subtitled "Embrace change"), then it is not a problem slashing away at your own methods, killing your method darlings so to speak, to end up with something that works in agile environments. But, overall, I feel that UX people (me included) have always only talked about killing our method darlings, never, as Jeff Gothelf phrases it, about "getting out of the deliverables business". Lean UX is all about finding out what actually delivers business value, not spending time on deliverables such as huge information architecture diagrams and unused wireframes. This still means that some things need to be done thoroughly, even if the agile environment does not think so: the things that give actual value.

In the article about cupcakes, I try to explain how to extract business value using Lean UX methods.

Value in a cupcake

The MVP in Lean UX

As I wrote in the article Agile UX Roles, it is very important to have a broad set of competences to get benefit from the software produced in a project. The competences have to fulfill the Three Pillars of Innovation - Viability (business), Feasibility (development), and Desirability (design). The business analysts have to find out how profitable the solution is while the developers have to find out if they actually can build it and at the same time, the designers need to know if it is desirable enough. All of this together forms the value of the product. This value can in most projects be found over time, but at that moment it might be too late since the world out there is constantly changing. Naturally, it is really important to find out early if your product is what people want.

To many agile teams, this seems like an easy task. Incorporate the UX-guys into the team, make them follow the same methods, as I've written so many times. Just build it and test it. Instant gratification. But Anders Ramsay says (and so do I) that the UX designers' biggest mistake is to think that methods like Scrum or XP are synonymous with Agile. He argues that those methods were created by and for developers to solve developer problems and to create high-quality efficient software. That is one form of value, one form of quality, but not the whole story that makes the customer want to buy our product. We designers need to look towards what actually creates value in the long run, and with that, desirability.

Enter the Minimum Viable Product (MVP). In product development these days (and now I am talking about the business side), the MVP has emerged as a way to find out if the customers are at all interested in a new product. A minimum viable product has the exact (not the least) number of features needed to make it desirable and sellable, thus giving value. It is created to maximize learning. The all too common approach to using an MVP is what Brandon Schauer calls a dry cake.

First, a cake is created; this is a product that has a minimal but complete usage flow built on a minimal but complete technical framework. It is nice, it is a cake. It shows that the product is feasible. It might say something about viability, for instance showing the cost for the developers to add a feature. It might also say that the interaction design is good. It is easy to add filling (features) since we have both the design and the technical frameworks. Salespeople will probably argue that to make it sellable it needs this and that feature as well, adding items to the feature list on a daily basis. What is missing is the icing. So, somewhere along the line, we find out what the icing is, what makes the product desirable and interesting, but as I mentioned earlier, this has taken way too much time. We need a better approach; we need to use a real MVP.

Let's start by creating a cupcake instead of a dry cake without filling and icing. The cupcake has filling and icing, just less of them than the cake has. But the cupcake is something that the customers want, need and love, so you can start measuring how much, and whether it is viable to produce.

After that you can easily expand your MVP to become a cake with filling and icing. And since you then know that you have a feasible, desirable and viable product, you can easily expand it to become a wedding cake. The only thing you have to remember is that less is more.

(Ha Phan, @hpdailyrant on Twitter, adds "MVP: The smallest nugget that matters. Not a cupcake but maybe tasting samples." And to that I add: "The minimum thing that can validate your hypothesis.")

UX - What is it?

And what are the different hats the UX designer should wear?

There is confusion around the concept of user experience design (UX). Everybody has their own meaning for the expression. The current Wikipedia definition, in my opinion, cuts right to the chase:

User experience design is a subset of the field of experience design that pertains to the creation of the architecture and interaction models that impact user experience of a device or system. As user experience is a subjective feeling, it cannot actually be "designed". Instead, you can design for a user experience, trying to enable certain kind of experiences. The scope of the field is directed at affecting "all aspects of the user’s interaction with the product: how it is perceived, learned, and used."

Based on this and on discussions in the UX community, I think the general consensus is that user experience is the umbrella term that encompasses a wide array of interface-related fields. The three biggest fields are information architecture (IA), interaction design (IxD) and usability (and most organizations see visual design as a big part of UX as well). The figure below separates the areas by focusing on which methods and techniques are most commonly used by a certain role.

As a UX designer, or as part of the UX design team, you need to be information architect, interaction designer and usability engineer, and sometimes also graphic designer or art director.

  • The information architect role is often described as the librarian of UX. It involves a lot of categorizing and organizing the content of the system. Methods like card sorting and search engine optimization are often used.
  • The interaction designer role, quite often also an interface designer, concerns a user's interaction with the system and its behaviour, crafting a workflow that covers all aspects of that interaction.
  • The usability engineer role focuses on methods for user research and usage testing, both with and without users, to provide insights about the system. Another big part (or side) of usability is accessibility, i.e. a focus on people with disabilities and their right of access to the system. This role tends to be less creative than the designer roles, leaning towards measuring and modelling.

All of these roles are obviously overlapping, both in time and in methods. The figure above is not showing a comprehensive list, e.g. information architects and interaction designers should of course do usage tests as well, but they usually focus on other methods.

Some people have other fancy titles, such as usability expert, UX architect or interface rockstar. These architects/experts probably do a lot of high-level, strategic work related to UX. A designer will probably be more hands on in implementing the actual interface solution itself.

Regardless, everybody in the UX area should do user-centred design, i.e. incorporating the user into the development process. To accommodate that, everybody should have a cognitive science background to be able to really understand the user and his/her needs and limitations. Otherwise, we will never know if we are doing the right thing and doing it right.

Usability goals 101

I've discussed using usability goals in several articles on this site as well as actually writing my thesis on the matter back in 2001. Hence, from time to time I've got questions on how to actually create and use usability goals in the best way. This is what this article will try to explain.

When gathering information from stakeholders in a project, you usually end up with a lot of functional requirements (as in "You shall be able to save your drawing in PNG format"), some semi-functional ones (like "The drawing shall be stored in a database"), and some non-functional ones (for example "The system's latency when loading a drawing from the database shall be low"). The latter tend to come from stakeholders with a technical perspective, but non-functional requirements are obviously quality aspects of the system, and so is usability. Thus, the quality 'usability' should have an equal place in the requirement documentation alongside the other non-functional requirements. But what subqualities of usability are there?

The answer to that question is simple and at the same time somewhat complex; any adjective that users may use to describe their interaction with the system is a subquality of usability. If the user wants the system to be snappy, then a corresponding usability quality might be quickness or swiftness. One could argue that this is the same quality we spoke about above from a technical perspective, i.e. latency. But latency is the exact (timed) response of the system, while quickness is the user's perception of this response. For instance, if a call to a not-so-well-designed database takes 30 seconds (high latency), giving the user appropriate feedback during this time will get the user to rate the quickness as fairly high. Not giving feedback, on the other hand, would get a really low quickness rating. (In effect, this means that a good design can make a bad system look better, and a bad design can make a good system look worse.)

If you feel that the users might not be able to express different qualities, then you can use already established subqualities. The most famous set is hidden in the ISO definition of usability (i.e. in ISO 9241-11). It specifies the "extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use". The qualities are effectiveness (task completion), efficiency (task in time) and satisfaction (the user's experience).

In practice, usability goals are often created to make the qualities more general. Contemplate the example below (from Hackos & Redish[1]), where a more specific expression from the user is written in broader terms.

These kinds of goals can be used as project goals (where they are sometimes called effects), as sprint goals or milestone goals, or simply serve as focus for a design.

But if we make them measurable, they can be what makes your usability process legitimate in a world of non-believers. You can for instance measure efficiency by the time it takes to complete a task, by judging the percentage of the task that was completed or the number of times the interface misleads the user.[2] What works best is up to you. Below is another example from Hackos & Redish[1], this time of how a usability goal can be translated into a measurable objective.
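As a sketch of what such measurements might look like in practice, the snippet below summarizes a handful of invented test sessions and checks them against a made-up measurable objective ("at least 75% of participants complete the task in under 2 minutes"). Both the data and the threshold are illustrations, not numbers from the article:

```python
# Summarize task-based usability measurements from a test.
# All numbers below are invented for illustration.
sessions = [
    # (task completed?, time in seconds, misleading-interface errors)
    (True, 95, 1),
    (True, 120, 0),
    (False, 180, 3),
    (True, 88, 0),
]

completion_rate = sum(1 for done, _, _ in sessions if done) / len(sessions)
avg_time = sum(t for _, t, _ in sessions) / len(sessions)
total_errors = sum(e for _, _, e in sessions)

# Hypothetical objective: at least 75% complete the task in under 120 s.
objective_met = (
    sum(1 for done, t, _ in sessions if done and t < 120) / len(sessions)
    >= 0.75
)

print(f"completion: {completion_rate:.0%}, avg time: {avg_time:.0f}s, "
      f"errors: {total_errors}, objective met: {objective_met}")
```

Run against a baseline measured on the current system, numbers like these make it easy to show whether a redesign actually moved the needle.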

But, you usually end up with goals that require a user's subjective opinion. You can probably not tell your superior at work that the users thought that the product was fairly nice to use, you have to be a bit more precise than that. I usually go with a five- or seven-point Likert scale as in the example below:

I would not recommend using a ten-point scale, because that would give the users too many alternatives. And since we in Sweden have a special word, lagom, that means "just enough, not too little, not too much", on some occasions I go for a four-point or six-point scale, so as not to make it too easy for the users to pick the middle one.
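Summarizing such subjective ratings is straightforward; a minimal sketch (with invented responses on a five-point scale, where 1 is "strongly disagree" and 5 is "strongly agree") might look like this:

```python
# Summarize responses on a five-point Likert scale.
# The responses are invented for illustration.
from statistics import mean, median

responses = [4, 5, 3, 4, 2, 5, 4, 4]

summary = {
    "mean": mean(responses),
    "median": median(responses),
    # "top-box" share: the proportion answering 4 or 5
    "agree_share": sum(1 for r in responses if r >= 4) / len(responses),
}
print(summary)
```

The median and the top-box share are often safer to report than the mean, since Likert responses are ordinal rather than truly numeric.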

In an ordinary project I would start out by running a contextual inquiry with the users working in the current version of the system or a competitor's, and afterwards try to create usability goals from the needs and requirements that they express. Then I would create measurable objectives from these usability goals and ask the users about their feelings about the current system (or measure time/errors/etc. during an observation). This will set a baseline for the project. During the design and implementation phases, I use the usability goals for focusing my design work and the sprints, as well as using the measurable objectives when testing out mockups and prototypes. I sometimes use the measurable objectives as definition of 'Done' for agile stories, as well.

When using the Effect management method, the usability goals (and their measurable objectives) are the core for everything created in the project, and the results from the method are steering the project. In the example below, a baseline has been set and the business owner has decided what would be the level to reach for the effect to happen.

Afterwards, when the product is finished, I use the goals and objectives for usability validation. Below is an example of how my colleagues and I summarized the usability validation results, including some statistics, in a document for executives just before product launch:

I am certain that there are other ways to use these goals and objectives. Do drop a line in the comment field below if you know of other ways to use this. And by the way, here are some more examples of measurable objectives presented by

Now that I have shared my knowledge concerning usability goals, please share yours in the comments, especially if this small article has helped you in any way.

    [1] Hackos, J.T. & Redish, J.C. (1998). User and Task Analysis for Interface Design. New York: Wiley & Sons.
    [2] Pär Carlshamre - A usability perspective on requirements engineering

UX in the balanced team

In most agile methods, there is a business side and a developer side. In particular, in Scrum, there is a Product Owner and a Team (the latter including the Scrum Master). Developers find it quite easy to categorize people into these two blocks. If you do system architecture or software testing, you are preferably with the Team. If you do requirements engineering or pilot studies, you should definitely inform the Product Owner. The Team is usually kept quite small, under 10 members, to simplify communication between Team members. The Product Owner is one person, the one with all the business knowledge, to ensure that there is one and only one way for business demands to leak into the Team. Hence, when a UX person shows up on the scene, he or she is automatically placed with the Product Owner. All well in theory, but in practice it is another matter.

In recent articles I've written about creating an Effect Backlog together with the Product Owner as well as working tightly with the team during the sprint through design sessions and continuous usability testing. So my answer is obviously that the UX person(s) should join both the Product Owner and the Team.

Working with the Team, actually in the Team, creates a greater understanding of the user experience and often engages the developers to think more in usability terms. I'd say that is a good thing. One way to achieve the connection the UX person would like to have with the developers is to really incorporate the UX work mentioned in the Sprinting UX article into the sprints, by writing UX stories, putting them on the taskboard and estimating points for them. The UX person should really take an active (and somewhat equal) part in the sprint.

Apart from this, as stated above, it is obvious that the UX person should be involved with the Product Owner as well. A great way of involving the competencies necessary is to create a cross-functional Product Owner Team consisting of people with:

  • Domain knowledge (the original Product Owner)
  • Developing and architectural skills
  • Design skills (especially UX)
  • A vision of the product's future (and perhaps the whole product range)

This skill set will probably not fit in one person's head, thus requiring a Product Owner Team. Jeff Patton argues that the team should at least include people covering the following three areas:

Patton explains that to get benefit from the software, it must be used (usable/desirable), cost effective (feasible) and give value back to the business (valuable). This means you need to incorporate business concerns in design decisions. So let's add one more bullet, one more role to the list of people in the Product Owner Team, who shall have:

  • Business perspective (perhaps in the form of a Business Analyst)

This is often mentioned as part of the information architect role, and it might fit there. In my opinion, though, the business perspective requires greater focus and a better overall view. A role in traditional projects containing this perspective might be that of the product manager.

For large applications and product ranges there might be several Product Owner Teams, led by a Super Product Owner or such a Product Manager as mentioned above. This hierarchy is quite common for the developer teams, where the supergroup is called a Scrum of Scrums.

In this picture three teams are supported by one Scrum Master each, and the Scrum Masters have a Scrum of Scrums team together with the Project Lead. Equivalently, there can be a Scrum of Scrums team with the Product Manager and a Product Owner Team for each developer Team.  

So, let's focus on the UX person's responsibilities in a Product Owner Team. They are:

  • Creating measurable product goals, i.e. effects. When I discussed The Effect Backlog, I used the example of a hotel that wanted to attract more customers. A measurable goal for that could be to get 50% more customers in a month.  
  • User research leading to personas. This work will contribute greatly to the whole product line. It is an investment for the future, and as such, user research should perhaps be done continuously outside the project. The first set of personas, though, should be simplistic, to help you learn what you know now and what you need to fill in later, to get the project going.
  • Creating measurable usability goals, to attend to and measure the user's perception of the system's usefulness. A usability goal from the hotel example was that the customers/users should feel safe when staying at the hotel. A measurable, although perhaps unachievable goal could be that 100% of the customers should say that they feel safe.
  • Creating the actual effect backlog. The UX person should write full epics, with user needs and context in them. This is to make it easier to tie the whole project together after it has been divided into smaller chunks/stories and implemented.

These epics usually consist of a sentence describing the problem in short, in the form As a [stakeholder], I want [feature] so that [benefit]. The UX person's job is to get more substance into the [stakeholder] and the [benefit] parts. The persona constitutes the former part and the usability goal the latter. Apart from this, every epic should have some kind of basic interface sketch connected to it, since communicating with pictures AND words gives the best understanding. This is to avoid writing a detailed design document. Specifying acceptance criteria, i.e. the definition of 'Done' for every epic, should also be the UX person's responsibility. Most of these epics could have acceptance criteria corresponding to the measurable usability goals and/or the effect.

The epics can also be used for creating high-level scenarios (for design sketching or usability testing). The easiest way of doing this would be to combine stories (the [feature]-part for a certain user/stakeholder) into sentences, using conjunctions to connect them. An example:

As a hotel guest, I want to feel safe during my stay, so I use my personal keycard to get into my room and use the extra latch to lock the door as well as the ordinary lock.

For lower-level scenarios, to be used as functional requirements or similar, use stories instead of epics.
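The mechanical part of this combination, joining the [feature] parts for one stakeholder into a single scenario sentence, can be sketched as a small helper. The function name and data layout here are my own invention for illustration; the story content is the hotel example above:

```python
# Combine the [feature] parts of stories for one stakeholder into a
# high-level scenario sentence. A hypothetical helper, not part of
# any established tool.
def to_scenario(stakeholder, benefit, features):
    if len(features) > 1:
        joined = ", ".join(features[:-1]) + " as well as " + features[-1]
    else:
        joined = features[0]
    return f"As a {stakeholder}, I want {benefit}, so I {joined}."

scenario = to_scenario(
    "hotel guest",
    "to feel safe during my stay",
    ["use my personal keycard to get into my room",
     "use the extra latch to lock the door"],
)
print(scenario)
```

In practice the conjunctions usually need a human touch, but even a rough machine-joined draft is a useful starting point for a design or test scenario.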

Other tasks for the UX person involve evaluating spikes with users to investigate the implications of new technologies and, of course, prioritizing the backlog, which is a constant task.

When the system is released and goes into operations phase, the job for the UX person is not finished. To ensure that the general usability of the system isn't degraded over time, there is a need to continuously measure the effect and usefulness, especially in an environment where other people are responsible for adding and removing content (e.g. a website with a news page).

That concludes this article. Like all the methods and techniques I write about, it is all contextual: this might or might not work for you. Please tell me in a comment below how you incorporate UX in an agile project.

User-centred design

Or redesigning Twitter to create a modern way for seniors to keep up with their grandchildren

An acquaintance of mine asked if there was a Twitter client for the iPhone made especially for seniors. He used Twitter to update the world about his family's life. His father, an 82-year-old Swede, had recently gotten an iPhone, but all Twitter clients had too many functions and were in English. I did not know of any, so I set out to create one myself.

As a Usability Designer, I naturally use user-centred methods to reach a good enough design. ISO 13407 provides a good framework for achieving quality in use of a product. The framework is based on iterative design, so you can perform different activities in each iteration to solve that iteration's design problem.

It is common to use a plethora of techniques for each phase in the iterative UCD cycle, such as focus groups, in-depth interviews, expert evaluations, wireframes, card sorting and personas. For this small project, I chose the quick and dirty version.

Context & Requirements
To understand the context of use and to agree on the requirements, I created a scenario for the senior users to review and give feedback on.

The scenario acted as a starting point for understanding what the users wanted to get from the project. The users read it and contemplated it. We then discussed and refined together what such an application would look like. The outcome of this discussion was a design.

Solutions & Validation
A good first design is one that the users dare to comment on. A first design with high fidelity and great face validity often makes users say "Oh, fine, are you finished already, great!". Instead, I usually start out with hand-drawn sketches, also known as low fidelity prototypes or simple mockups. These can be displayed one after another to explain the flow through the application. Using them while following a scenario such as the one above is sometimes called a walkthrough. Such a walkthrough aims at finding bottlenecks in the interaction as well as issues with the interface itself. The user steps through the scenario treating the prototype as a full implementation, using his or her imagination to fake the parts of the user interface that are not yet implemented. This gives a very realistic evaluation of the application, without the need for implementation.

In this sketch, Arne is the senior user and Tomas is his son who has the Twitter feed. The users could easily identify with Arne and could give feedback directly on the sketch. One comment on this sketch was that the possibility to reply to a post was unnecessary; thus, in the name of simplicity, it was removed entirely from the interface. Another comment was "what about pictures of my grandchildren?". A new version of the prototype was made, one with higher fidelity to be able to discuss design issues on a more specific level. This was swiftly created with OmniGraffle.

The second evaluation, done in the same manner as the first one, found some issues that would have been hard to find with only a low fidelity prototype, such as the clarity of the text on the screen. The choice of colours and the bubbles were not popular amongst the senior users with less than perfect eyesight. Another comment was that there was little need for an update button; it is better to restart the application, since this follows the users' mental model.

After this session, the design was ready to be implemented. For those interested in how it was implemented, the backend is a simple PHP parser for the Twitter user's RSS feed. This version requires the tweets to contain a link to a twitpic for the image handling to work. The frontend was made with HTML5 and CSS3 with iPhone WebKit specifics.
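The actual backend was a PHP parser and is not shown in the article. As a rough illustration of the same idea, here is a Python sketch that pulls tweet text, twitpic link and date out of RSS items; the sample feed content is made up:

```python
import re
import xml.etree.ElementTree as ET

# invented sample item, mimicking the old Twitter user RSS feed shape
SAMPLE_RSS = """<rss version="2.0"><channel>
  <item><title>Skiing with the kids! http://twitpic.com/abc123</title>
    <pubDate>Sun, 20 Dec 2009 10:00:00 +0000</pubDate></item>
</channel></rss>"""

def tweets_with_pictures(rss_text):
    """Return (text, picture_url, date) for items linking to a twitpic,
    mirroring the constraint that tweets must contain a twitpic link."""
    out = []
    for item in ET.fromstring(rss_text).iter("item"):
        title = item.findtext("title", "")
        date = item.findtext("pubDate", "")
        match = re.search(r"http://twitpic\.com/\S+", title)
        if match:  # only tweets with a picture are shown to the senior user
            out.append((title.replace(match.group(), "").strip(), match.group(), date))
    return out

print(tweets_with_pictures(SAMPLE_RSS))
```

A frontend would then render each tuple as one entry in the feed, fetching the image via the twitpic link.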

This first actual version aimed at maximizing visibility. Apart from adding the date, a feature that had been missed in the earlier designs, a new feature was added: the possibility to zoom. Zooming with two old fingers on a slippery display is much harder than just turning the device 90 degrees.

One last evaluation was made before releasing the application to its intended target group. This evaluation only corrected some minor issues, and the full application can be seen on the designs page. All this was just a few days' work, and the web application was ready in time for Christmas.

This was a simple way of implementing a UCD method for a single designer/developer. This article ends like a lot of other articles on this site: always contextualize your method and add what works for you today!

Sprinting UX

This article is the last of three about how to implement user experience design in the agile method Scrum, concerning the sprint.

Scrum Primer
Scrum is focused on the planning and follow-up parts of a project. Three roles are specified in Scrum: the Product Owner, the Scrum Master and the Team. The Product Owner is the voice of the customer: this person (or team) has the domain knowledge and the mandate to decide what will be built. The Product Owner is in charge of the Product Backlog, which contains both the plan and the requirements (features to be built, etc.) for the project. The Scrum Master is in charge of making sure that the team follows the Scrum rules and stays agile. This person is an ordinary team member but has great process knowledge and acts as a coach. The Team is a cross-functional group of people tasked with implementing the requirements.

Each iteration in Scrum is called a Sprint, and a sprint usually lasts between 2 and 4 weeks. At the beginning of each sprint there is a Sprint Planning meeting, where the Product Owner presents a chunk of the Product Backlog that he or she would like to see built at the end of the sprint. The team estimates the chunk according to their knowledge and skill, and makes sure it will fit within the sprint. Once the team and the Product Owner have agreed that they have a well-sized and meaningful chunk of features to build, which is called the Sprint Backlog, the team builds it (supported by the Product Owner for requirement details). Each day during the sprint, a very short project status meeting occurs. This is called the Daily Scrum. During this meeting, each team member answers the questions: What did you do yesterday? What will you do today? What might hinder you from doing that today? The purpose is to keep everybody up to speed and solve difficult problems as soon as they arise. At the end of the sprint there is a Sprint Review meeting, where the team shows their built chunk and the Product Owner gives feedback. This feedback might then become part of the Product Backlog and be put into the Sprint Backlog of a later sprint. After the Sprint Review, there is a Sprint Retrospective meeting where the sprint is analysed and the team can adjust/improve/correct their process before the next sprint starts.

The team sprints until the Product Owner is either content or out of money. This requires each sprint chunk to be a potentially shippable product. To make this work, there has to be some kind of rough plan as to what will (or might) be released to the Product Owner in each sprint. Hence, the team usually starts by building a foundation (as well as getting the process on track). This sprint is called sprint zero.

As with all agile methods, you are supposed to add to the method what you think is missing. This is why you often see Scrum combined with Extreme Programming in software development, but Scrum in itself has many other application areas. The following is what a combination of Scrum and user experience design might look like.

UX Sprint Zero
For the user experience practitioner, the sprint zero should answer why the project shall be built (usability goals and effects) and who it will be built for (target groups, the customer, other stakeholders). This is well taken care of using Effect Management in Scrum. Sprint zero should also contain some basic design, to get a good start in the following sprint. This design should be lightweight: this sprint zero is counted in weeks, not months. In a single week of collaborative design, one should be able to sufficiently understand the project objectives and the high-level functional scope, so that the size of the project can be roughly estimated and a sprint release strategy can be formulated. It is the same here as in pair programming: two heads think better than one, and you get automatic quality control of your ideas. You should end up with a rough overall design for the project. Since Scrum runs in iterations, this design will have to be split up into parts that can be designed, built and validated somewhat independently.

If the project is large, break it down and do parts in different agile teams. It is still possible to have only one or two UX practitioners, even if you have two or three teams.

UX Sprint Planning
Since the UX practitioner is tasked with creating usability goals for the project, this also applies to each sprint. This is done together with the Product Owner, who should create a general goal for the sprint. The usability goals will help the team estimate how long it will take to build each feature.

Creating low fidelity prototypes (mockups), which can easily be brought to a tester, stakeholder or user for informal usability tests, gives optimal feedback for the time invested. If such mockups of the design exist, they will greatly aid the team's estimation task, since a visual explanation aids understanding better than the more formal requirement from the backlog. Also, alternative solutions can easily be discussed and dealt with swiftly using prototypes.

UX Sprint Running
For each sprint, it is not necessary (nor is there time) to create more than the aforementioned prototype, since there is time set aside during the sprint for communicating details. But exactly how this takes place can be discussed. Here are two suggestions for how to handle the communication between UX practitioners and developers during the sprint:

  • Plan ahead

    Here, the design and the development are seen as two parallel tracks. The developers run their course as before, and for them to be able to do this, enough of the user interface must be designed before the start of the sprint in which the feature is to be built. This demands that the sprint zero is somewhat deeper than mentioned above, or that the first sprint for the developers mainly consists of non-GUI features. In reality, both are required, since UX design penetrates further down than just the user interface.

    Additionally, as shown in the figure, the UX practitioners are one or two steps ahead all the time in this approach. While the developers implement the design, the features from the last sprint are being usability tested, and at the same time, the research and design for the following sprints are done. The advantage of this approach is the added time for research and design compared to the following suggestion, which is why it might work better for some people.

  • Incorporate design sessions

    And for other people, this approach might be more suitable. Here, the detailed design is done in cross-functional design sessions during the sprint. This leaves time in sprint zero for deeper research (such as a more detailed sprint release strategy) that pays off later, as well as time during the sprint for basic design for later sprints. When a UX practitioner is a team member, this basic design time will be accounted for during the sprint planning.

    The design sessions themselves work as follows: a session is the start of the building of a feature. It requires the whole team, including the Product Owner and UX practitioner (as well as at least one tester), to give everyone in the team an understanding of the design and an appreciation for the process and tradeoffs necessary for a good design. The group designs the GUI together, in as much detail as possible. The rest of the details follow generic platform guidelines, i.e. do not dwell on pixel-specific issues unless they really make an impact on the user experience. The design session is timeboxed (depending on the size of the feature) from 30 minutes to about 4 hours. For a well-defined increment of the product, i.e. a good sprint backlog, it is possible to do one longer design session for all of the GUI design of the features at the start of the sprint.

This last suggestion is advantageous since it gives more time for usability testing and quality assurance during the sprint. This testing can be done whenever a feature (or a set of features) is finished, but the recommendation is to do it in the last week of the sprint. Then you test all the features built so far with selected users. In parallel, you may run an informal pre-validation test (depending on how many testers and user representatives are available), since this gives a lot of good input to both usability QA and the Product Owner before the sprint review meeting.

Apart from all of the above, do not forget to try adding what works for you today.

The Effect Backlog

This article is the second of three about how to implement user experience design in Scrum or any other agile method, concerning the product backlog.

Using the method Effect Management (by InUse) creates a good base for the Product Backlog in Scrum. The method steers the project towards the expected effect (or the user experience) of a product, using a five-step method and four key concepts: effect, usability goal, target group and action.

For example, a hotel chain would like to make more money, i.e. get more customers. This is the expected effect. To achieve this, they would have to convey certain feelings to attract the customers, like being classy but at the same time affordable and secure. This is a usability goal. So, when creating the main entrance for the hotel, you would of course like it to function properly, i.e. open inwards to let customers in as well as open easily outwards in panic situations, while welcoming the customer and ensuring his or her safety. This is the action. This has to be tested with real customers to ensure the effect. These customers are the target group. And of course, this target group also helps with finding the usability goals and the actions necessary to fulfil the goals.


Putting this into the five-step method: First, describe the expected effects. The description should state how to measure the effect; otherwise it is of little use. Second, clarify the users' goals. Translate the users' goals into usability goals and measure them. Third, create possible solutions (i.e. the actions) to meet the usability goals. The easiest way is to create some kind of prototype. Fourth, test in actual use; otherwise the effects will not relate to the actual situation. Especially if you test with a prototype, try to do it with actual users in the place where they would use a real solution. And last, visualize all of this in an effect map; it makes it a lot easier to track changes and understand correlations.
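The hotel example, put through the five steps, might be captured in an effect map shaped roughly like this. All the concrete values below are invented for illustration, and the real method uses a visual map rather than code; the point is only that every effect and usability goal carries its own measure:

```python
# hypothetical effect map for the hotel example above
effect_map = {
    "effect": {
        "description": "More returning customers",
        "measure": "bookings per month, read from the booking system",
    },
    "target_groups": ["business traveller", "family on holiday"],
    "usability_goals": [
        {
            "goal": "Guests feel safe and welcome on arrival",
            "measure": "9 of 10 test guests rate the arrival 4+ out of 5",
            "actions": [
                "entrance opens inwards to let customers in",
                "entrance opens easily outwards in panic situations",
            ],
        },
    ],
}

# the description is of little use without a measure, so check for one
assert "measure" in effect_map["effect"]
assert all("measure" in g for g in effect_map["usability_goals"])
```

Testing the actions with the target group then closes the loop: the measures tell you whether the usability goals, and ultimately the effect, are being reached.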


It is widely accepted amongst Agile practitioners that, while describing what to do in a backlog item, it is a good idea to explain why it has to be done. This is where Effect Management fits in. Combine the effect mapping that we did above with the common format for a product backlog (replacing Description with Action, and adding whatever columns you usually have). It might look something like this (in your spreadsheet application of choice):

As an example, from the aforementioned Product Backlog link, the first item in the Product Backlog is Finish database versioning. Using the Effect Management method, this description (or action) would be explained/motivated by the corresponding usability goal. At the same time, it would be easy to understand which users would benefit from this backlog item and what effect it would have on the software in the long run.
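As a sketch of what such a spreadsheet could hold, here is the Finish database versioning item with the Effect Management columns added. The usability goal, target group and effect values are invented for illustration:

```python
import csv
import io

# columns follow the Effect Management mapping described above;
# the goal/group/effect cells are hypothetical examples
backlog_csv = """\
Action,Usability goal,Target group,Effect,Estimate
Finish database versioning,Admins can restore any prior schema in under 5 minutes,system administrators,Fewer support hours per release,5
"""

rows = list(csv.DictReader(io.StringIO(backlog_csv)))
for row in rows:
    print(f"{row['Action']} -> {row['Usability goal']} ({row['Target group']})")
```

Reading a row now answers both what to build and why, which is exactly the motivation the text gives for combining the two formats.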

An even better approach is to write user stories in a slightly different format, so that every story on the agile board shows for whom the feature is created, why it is good for them and why it is good for the company.


For the whole Agile team, it then becomes a lot simpler to estimate each item, as well as easier to understand how and why a certain item is prioritized the way it is. Such a user story will truly guide the development towards the right product. It's a win-win situation, since UX practitioners also like to connect actions (i.e. production code) to usability goals (and effects), e.g. for easier validation or usability testing.

Agile UX Design

Almost two years ago, I wrote a short article (published here) about combining Agile and Usability practices. I described one way of implementing this combination, a way that today seems a bit too complicated. I have been practising Scrum in combination with Extreme Programming for the last few years, which is the combination that is popular at the moment. This article is the first of three (the other two are linked below) explaining how to implement user experience design in Scrum.

The Agile Manifesto values:

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

The problem areas have traditionally been the same in design as in software development. A focus on processes, documentation, etc. has prolonged (and sometimes ruined) the project. In many kinds of projects, adapting to agile methods (which means implementing the above-mentioned principles) has proved successful. But how do you apply them to user experience design?

In a waterfall-like project, you have room for a lot of analysis, a lot of design and hopefully also a lot of testing. The simple approach is to scale down everything you're used to doing. This can be enough for the product you are designing, but there might be a constant struggle to explain designs, keep up with the developers, and actually be able to test anything before release. To solve this, you'll have to adapt the methods to agile principles and the time-frames of agile projects. If you ordinarily do in-depth interviews and Visio-style interface specifications, you will probably need to change to something less time-consuming, like time-boxed group interviews, cross-functional collaborative design sessions, and swiftly created mockups (on paper or with Balsamiq) actually constituting the specification. Also, instead of using a complicated setup for usability testing with many users, do a small-scale test (while recording with Silverback) using Guerrilla HCI methods.

Apart from lowering the fidelity of your design and adapting your methods, you need to position yourself in the project to achieve a productive agile UX environment.

You need to have a mandate in deciding what is built, as well as in the overall business strategy: the intended effects of the product. Create a map that visualises how a design can support the envisioned effects, supported by user stories and prototypes. This map can then also be used to roughly estimate the size of the project, as well as to help developers estimate tasks. A suggested approach for Scrum is described in the second article.

At the same time, make sure that you synchronise your design work with development, i.e. integrate your usability methods into the development method and be a part of the team. It is possible for the UX practitioner to act as customer to the agile team (or be part of the Product Owner team in Scrum), but communication and design benefit more from a closer integration. You then have better involvement in the final result and the chance to learn from the developers' knowledge. A collaborative design approach can help you be and stay agile; having continuous discussions about the design instead of detailed specifications supports the core principles of agile methods. The third article suggests how this might be achieved in Scrum.

To summarise, start thinking and start acting agile. It will be productive and fun.

Are your users precocious?

Usability testing with children

A common way of separating users with the intent of forming different target groups for usability testing is to use age as a variable, for example in a user cube. The problem is that age is a tricky variable to use. Usually, we aim to define end-user populations as homogeneously as possible, since that makes it easier to do user testing and optimize user interface design. This works well as long as the product is intended for professional use by users between 18 and 50 years old, and it was good enough twenty years ago, when most computer programs had that target group. Today the situation has changed. Target user populations have become heterogeneous, with people of all ages using computers more or less daily and more or less successfully. Two outliers in this distribution are children and the elderly: digital natives are a large user group, and the computer-literate have become twenty years older as well.

During these two periods, adolescence and senescence, the changes on a cognitive level and the differences between individuals are great. The former period we call development and the latter, for lack of a better word, decline. For example, you have probably heard about both precocious kids and late bloomers. A uniform age group is not applicable, hence it is important to have usability professionals taking a deeper look at user group compositions when developing for the youngsters and elder people of today.

I once carried out several projects involving both these outliers and became very intrigued by developmental psychology, especially the parts focusing on children. This article focuses on the theories behind this and tries to give you a heads-up before you conduct usability testing with children.


One theory of the cognitive development of children, which seems to be the most common, is that of Jean Piaget. He divides childhood into four developmental stages: the sensorimotor period (infants), the preoperational period (2-7 years), the concrete operational stage (7-11 years) and the formal operational stage (11 years and up). Although the timing may vary, the sequence of the stages does not; thus the theory is a good base for differentiating target groups of children. The preoperational period is characterised by the development of an inner representation of external objects. The child's thought processes are then linked to the most prominent features of an object or a situation. The concrete operational stage is characterised by logical and purposive thinking, although the operations are always connected to the actual situation. In the formal operational period, children disengage from the concrete situation and become able to perform systematic analysis on an abstract level.

Apart from standard interview techniques, such as commencing the interview with small talk, there are some issues I learned to take into consideration when interviewing children during the aforementioned projects. One such thing is the attention span of the child. A session with young children demands a flexible evaluation setup. The child should be able to explore the product almost on their own instead of following a set of tasks. Children are often motivated by making adults happy, so let them show what they have found in the product and increase their motivation by encouraging them. For example, say "Wow, did you do all that by yourself?" or "Is that how it works! Thank you for telling me!". Furthermore, avoid placing the child in front of the interviewer; place the child in front of the product, with the interviewer acting as support at the side. In one of the projects, four male interviewers in their early twenties sat down in front of an eight-year-old girl and were surprised when she didn't want to cooperate in the evaluation. Apart from steering clear of these kinds of situations, it is a good idea to have younger children evaluate in pairs, where they can encourage each other and share ideas. It is also easier for them to speak about the product with a peer than with an adult.

This shyness towards adults is most visible in the preoperational stage. Children in this stage can also have problems expressing their feelings about the product in words, especially in front of a grown-up. Observe their behaviour: sighs, smiles, or whether they simply disappear under the table (which occurred a lot with some children). Also, try to avoid asking the children if they wish to play a game or perform a task, as this gives them the option to say no. Instead, say "Now, I would like you to..." or "It is time that we...". This is easier to do with children in the concrete operational stage.

Children in the concrete operational stage have a high tolerance for complex interfaces. They employ pattern-based problem solving; "push twice on the left button and three times on the right button to reach the gold" comes naturally as long as it benefits them. They are starting to understand how to critically review the task given to them. They will be able to answer questions regarding the task and try new approaches with joy, but they are very aware that they are being observed. The previously mentioned eight-year-old asked the interviewers, in a later session, why they wanted her and not her sister to evaluate the product. When the session ended, it seemed as if she had only criticised the method and did not care about the product. Later on, her teacher collected some drawings of hers containing references to the product, accompanied by a sun and a couple of green trees. Some children prefer to answer questions orally and others in writing, but remember not to neglect those who want to express themselves with pictures.

In the formal operational stage, children might be able to think aloud while performing a task. However, take into consideration that these pre-teens or teens are not geniuses who can adapt to every complex situation. The possibly poor performance of teens is caused by mainly three factors: inadequate reading ability, poor research strategies and a relatively low level of patience. There are simply a lot of other things happening in their world at that particular moment that we unfortunately have to take into account.

On the other hand, most children tend to be smarter than you would give them credit for. One eleven-year-old girl demonstrated her own Klik & Play-made programs to the interviewer and explained how she could make the product in question smarter. The outcome was, needless to say, much appreciated by both parties. Children understand the concept of usability. Most children in the two operational stages can spot the difference between fun and efficient. They are as motivated by reaching their goals as adults are, and they really do not like it when a product is not working.

Simple user picking

Something we all know by now is that a test that uses programmers when the product is intended for legal secretaries is not a usability test. But the number of legal secretaries in the world is quite large, so which subgroup of legal secretaries are we supposed to test with? Categorizing users into novice and expert subgroups is a quite common solution, as is looking for primary and secondary users[1]. If we could combine these in a nice table, everything would be hunky-dory, but adding dimensions (subcategories) makes the table complex. We need to visualize the solution in more dimensions. The following figure shows the user cube[2] of the three main dimensions along which users' experience differs: age, experience with computers in general, and experience with the task domain.


Jakob Nielsen has also taught us that we only need to test with 5 users[3], but which 5 would that be? If we combine the user cube with this and add something I call boundary users, we get another dimension of categorization that actually helps us find the users who can best help us test a system.

Usercube with Boundary Users

The four boundary users are picked because they are on the outskirts of the main group; hence they represent the extreme categories. Complete the user group with the one in the middle (another distinctive point) and you get a five-user group that represents the two dimensions in the example above very well. For more dimensions, add more boundary users (another 4 for 3D, etc.).
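For a two-dimensional slice of the user cube, picking the four boundary users plus the one in the middle can be sketched as "take the corners and the centre" of a grid of candidates. The grid labels below are illustrative:

```python
def pick_test_users(grid):
    """Given a 2-D grid of candidate users indexed by two dimensions
    (e.g. computer experience x domain experience), return the four
    boundary users (the corners) plus the one in the middle."""
    rows, cols = len(grid), len(grid[0])
    corners = [grid[0][0], grid[0][cols - 1],
               grid[rows - 1][0], grid[rows - 1][cols - 1]]
    centre = grid[rows // 2][cols // 2]
    return corners + [centre]

# hypothetical 3x3 grid: computer experience x domain experience
grid = [["novice/novice", "novice/mid", "novice/expert"],
        ["mid/novice",    "mid/mid",    "mid/expert"],
        ["expert/novice", "expert/mid", "expert/expert"]]
print(pick_test_users(grid))
```

The result is a five-user group covering both extremes of each dimension plus the typical middle, which matches the five-user recommendation cited above.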

    [1] Faulkner, X. (2000). Usability Engineering. Palgrave.
    [2] Nielsen, J. (1993). Usability Engineering. Academic Press.
    [3] Nielsen, J. (2000). Alertbox: Why You Only Need to Test with 5 Users.

Agile and Usability

When working in cross-functional teams, agreeing on a development method is usually the greatest task. Sometimes, though, people think at too detailed a level. My first computer science teacher told me to abstract the problem to find a simple solution, but I've often taught others to divide the problem into simpler parts and then solve each smaller problem. From now on, I promise I will try to do both at the same time (and try to remember what else he said :).

When the problem of combining usability and agile methods presented itself, I tried to divide and conquer (thanks, Mr. Alexander!): I would need to conduct interviews, do prototyping and put forward an interaction design before the developers could start developing. To do this, I would need to gather requirements beforehand from the customer as well. Not a good solution. It's like placing a wall between the two methods.

The wall between the methods

The solution just lay there, staring at me. Think abstract! Usability design is an iterative process, and so are agile methods. So instead of seeing them as two different tasks, the combination is quite straightforward. I removed the wall (with a little help from Eric Idebro & Illugi Ljótsson) and the solution presented itself:

Agile Usability combined

The combination requires that in the first iteration (or 'sprint', as in Scrum) you only do usability tasks. A small specification with rough sketches and flows is produced, based on interviews, etc. This is good enough for building a foundation. If this were Scrum (and from now on we assume it is), this first sprint takes about 3 weeks, and the conceptual design becomes a part of the product backlog, as do the tasks for achieving the usability goals. This makes it easier for the developers to recognize usability as important work.

Then, during 4-week sprints, usability tests are conducted about once a week. Although this is somewhat demanding (the architecture must be good, probably a three-tier architecture), the input you get will drive the design in a simple way. To make this easy and straightforward, the sprint is divided into four smaller iterations. In the beginning of each iteration, something small is built that is testable with users: a part of the GUI and some functionality, perhaps. The results are put in the sprint backlog (you have planned for that at the sprint planning meeting), and change requests are dealt with the following week. This allows steering during a sprint, and you do it by refactoring your usability design. Quite complicated.

The interaction designer's work during the sprint is to produce test cases for the usability tests and, of course, conduct them, while also doing GUI design. For every test phase in every iteration, you'll need real users. It is usually hard to find users, and with this combined method you burn through users even faster, but on the upside there are enough test sessions to last a lifetime.

At the end of the whole sprint, a full usability test on real functionality is done; the results of this test are discussed with the product owner and will perhaps end up in the next sprint.

To work this way, there are some musts:

  • The developers have to be interested in usability
  • Continuous integration has to work really well. You have to deploy each week instead of each month.
  • The product owner needs to know what user-centered design is, apart from just Scrum.
  • The usability specialist can't be a junior; it would probably be too demanding, since you need to refactor the usability design while doing everything else at once.

But you can always try!

I understand that this is only the beginning, but it is a rough plan. I believe that for every project you have to adapt the method to the circumstances.