Designing and Evaluating Mobile Interaction: Challenges and Trends
By Marco de Sá and Luís Carriço

Foundations and Trends® in Human–Computer Interaction
Vol. 4, No. 3 (2010) 175–243
© 2011 M. de Sá and L. Carriço
DOI: 10.1561/1100000025
Contents

1 Introduction
2 Understanding the Real World
  2.1 Context in HCI
  2.2 The Mobile Dynamics
  2.3 Understanding Mobile Contexts
  2.4 Recreating and Focusing Contexts
  2.5 Section Overview
3 Prototyping the Concepts
  3.1 The Use of Prototypes in HCI
  3.2 Mobile Requirements
  3.3 Low-fi Prototypes
  3.4 Mixed and High-Fidelity Prototypes
  3.5 Section Overview
4 Evaluating the Results
  4.1 Usability Evaluation
  4.2 The Challenge of Mobile Evaluation
  4.3 Techniques for Data Collection
  4.4 Technology-aided Mobile Evaluation
  4.5 Section Overview
5 Conclusions
  5.1 The Future of Mobile Design and Evaluation
References
Marco de Sá¹ and Luís Carriço²

¹ Yahoo! Research, 4401 Great America Parkway, Santa Clara, CA 95054, USA, marcosa@yahoo-inc.com
² LaSIGE and University of Lisboa, Edificio C6, Campo Grande, Lisboa, 1749-016, Portugal, lmc@di.fc.ul.pt
Abstract
Mobile technology has been rapidly gaining ground and has become
intrinsic to our daily lives. As its importance within society continues
to grow, features, functionalities and usage opportunities accompany
such growth, turning mobile devices into essential tools. Given the role that mobile interactive technology has assumed, it is vital that ease of use also reaches new levels, attenuating the growing complexity of these now critical devices. Accordingly, mobile usability evaluation needs to re-invent itself to keep pace with this new
phenomenon. This article reviews the current approaches and recent
advances in the design and evaluation of mobile interaction and mobile
user interfaces, addressing the challenges, the most significant results
and the upcoming research directions.
1 Introduction
Our lives and society are rapidly gravitating towards a world in which
mobile devices are taken for granted. They are no longer accessories but natural extensions of our bodies. Their use has become quasi-permanent, following and supporting us in most of our activities and affecting the way we interact, share, and communicate with others. Additionally, they are growing in diversity and complexity, featuring new interaction paradigms, modalities, shapes, and purposes
(e.g., GPS, Portable Media Players, Gaming consoles, Smart Phones,
E-book Readers). Together with the evolution of the available hardware, software has also evolved greatly and users are becoming ever
more demanding of user interfaces (UIs) that provide both functionality and pleasant user experiences.
The fact that mobile devices are used anywhere and anytime has also brought new conditions of use, frequently in austere settings. This pervasiveness of use, combined with the devices' deep entrenchment in our lives and users' growing expectations, brings usability and user experience to the front line. As a consequence, it is paramount that design methods accompany this evolution, in order to aid designers during this demanding creation process.
Taking into account the strong factors that differentiate mobile devices from traditional personal computing (e.g., desktop PCs), such as their ubiquitous use, usually small size, and mixed
interaction modalities, designers are faced with additional challenges.
In particular, two design stages show greater difficulties and are most
emblematic of the new challenges faced by designers: the prototyping
and evaluation of mobile design concepts and mobile interaction.
In the face of these newfound adversities, research has been gradually responding. Several works have emerged with different approaches. Some depart from traditional evaluation techniques. Others venture disruptive proposals and offer new means to capture and assess the mobile usage experience. In general, most rely on prototype design and evaluation and on the acceptance of context as a fundamental cornerstone of the whole process.
This monograph presents an overview of the current state-of-the-art, addressing recent and emerging work that focuses on several issues within the two stages of the design process. In particular, it describes research focusing on
(1) the understanding of context and scenarios to drive interaction design and evaluation. With the need to design and test
mobile interaction within realistic environments, comes the
need to select proper conditions, scenarios, and settings in
which to focus on. Some research has been addressing this
issue, providing new findings on how to frame concerns and
characterize the settings and variables that define and affect
mobile interaction;
(2) the updated prototyping techniques that field work requires,
which are crucial in order to properly support trials at the
very initial steps of conceptualization and design, also offering means to propel user engagement throughout the design
process;
(3) the new evaluation techniques that mobile interaction design
entails, which can be used in new scenarios and environments,
where designers are, many times, required to be as mobile as
their designs and end users, conceiving new strategies and
evaluation approaches for ubiquitous use.
Based on these research advances, future directions and trends will be
mentioned and suggestions on additional research paths will also be
drawn.
2 Understanding the Real World
As with the design of any interactive software or technology, the user,
his/her goals, and the overall conditions in which such interaction will
take place are aspects that define how the UI should be created and how
it should behave. With mobile devices, this belief holds true, probably with even more pertinence.
However, as a consequence of their mobility, multi-purpose functionalities, and ever growing universe of users, some of these factors gain a
significant relevance on the overall scope of the usage and interactive
experience. In particular, it is necessary to understand when, how, and
why such factors affect the relationship between user and system so
that not only UIs can be better designed, but also as a necessary tool
to evaluate the overall usability of an existing system.
This challenge is as complicated as understanding the world itself and how people interact with their devices within it. The multiplicity of scenarios and settings, along with the variety of existing technologies, makes this an overwhelming task during the design process and the
evaluation stages. For these reasons, researchers have been increasingly
trying to identify the factors that are most likely to impact the mobile
usage experience, framing concerns, and highlighting the details that
must be considered during the several design stages that ensue.
The following sections discuss the current work on context perspectives and their role in driving the design and evaluation of mobile UIs. The discussion is divided into three main topics, namely: (1) the need to model and understand context and its impact on the user while interacting with mobile devices; (2) the methods that have been used to gather this information and validate the dimensions and models that have been identified; and (3) the use of such knowledge in different approaches and techniques, both within laboratories and out in the field.
2.1 Context in HCI
Several design techniques and methodologies have been introduced
through the years, aiming at supporting designers and engineers while
creating solutions that are adequate to users within the contexts in
which they will be using them. In particular, those that place significant focus on users and their needs [5, 79] have demonstrated solid
results for interactive software and UIs. It is nowadays almost common
sense to underpin the design of interactive software with user-centered design approaches and their multiple variants.
At their core, these approaches rely deeply on techniques to understand and characterize users, tasks, and working settings. They range from procedural recommendations on how to gather data to modeling and specification methods that can be used in the following design steps. On the procedural side, concerns frequently oscillate between goals of minimal interference with the users and contexts of use (e.g., ethnographical approaches) and maximum efficiency of time and resources (e.g., dramatization and walkthroughs), arguably at the expense of faithfulness and accuracy. The spectrum between these limits is filled with interesting proposals such as, for example, contextual inquiry [5].
On the representation side, abstraction and formality are two of the
trade-offs, or at least two of the evolving dimensions. From conceptual
frameworks to use cases, through personas, scenarios, hierarchical task
analysis, and storyboards, or even cognitive modeling approaches, to
name a few, all have been proposed and used to elicit what is essential
to represent and consider in the design process. Models such as context diagrams [94] are used to represent an entire system as a single
process with major data flows as inputs and outputs. In [5], for example,
several models are proposed, representing information transference and
data flow between different entities, artifact representations and stepwise activities, signaling points of break, procedures and other relevant
details. On a contextual setting perspective, the restrictions imposed
by the physical world where work takes place are included on physical
models. At a less tangible level, cultural models represent aspects that
are hard to include in other maps. They highlight the influences that
protocols, ethical issues, habits, or other factors impose on users.
Many other methodologies and modeling techniques were proposed
by researchers through the years and are now captured and mentioned
in many reference books in the HCI field [35, 107]. It is not our intent
to go through them all. Instead, those that were referred above serve
the purpose of establishing the relevance of understanding context on
the design of interactive software. Beyer and Holtzblatt [5], in particular, lay the groundwork for an eclectic perspective of context, modeled
through multiple multifaceted views, all of them very pertinent in the
design of mobile systems.
2.2 The Mobile Dynamics
Considering mobile devices, one should certainly look to their most distinctive features, namely ubiquity and portability. From these emerges a characteristic that directly influences their conditions of use: the dynamics and complexity of context. On the one hand, the settings in which
users interact with a mobile application are often highly complex and
mutable (see Figure 2.1). Consider, for example, using a mobile application on the street. Even if we are stationary, the variations of noise and
light conditions, the probability of interruptions, and need for divided
attention or the number of uncomfortable circumstances resulting from
the immersion in a non-private social environment are usually much
larger than on conventional indoors settings. On the other hand, as it
has been studied [3], users often conduct one specific task that spans various settings and different contexts. Therefore, not only is the context more complex and dynamically changing around the user, but the user also changes contexts of use more frequently.
Fig. 2.1 Users interacting with mobile devices and prototypes in different settings and
contexts.
For illustration purposes, one can compare this difference between the more traditional and the mobile views of context with the distinction between photographs and motion pictures (and, unfortunately, high-definition ones at that). It is not the case that there is no change or complexity in the conditions of use for desktop applications. Yet, one could mostly decompose them into a manageable number of situations, usually fairly well delimited. The issue is that in mobile usage, situations tend to be more complex, as the number of external influential variables is different, larger, and uncontrolled, and they especially tend to be dynamic. Designers, we believe, must aim for almost continuous change, not sparse snapshots.
Still, usability has to be maintained throughout the entire usage
experience and, depending on the specific requirements of demanding
and austere settings, it even has to be increased. In fact, the users’
appropriation of mobile devices often raises their demands or further influences changes in the contexts of use, tasks, or user behavior [12].
One of the main issues in mobile interaction design and evaluation
is the need to understand the dynamics and details of the context that
surrounds the user within a variety of settings and locations. It is crucial
to collect meaningful and influential data and requirements. Yet, one
should emphasize the “meaningful and influential” adjectives. As context complexity increases, so does the need for effective
techniques and representations that drive the design process and enable
the selection of the adequate conditions that bring significance to the
results of testing and evaluation.
Thus, although this is still a relatively young research field, a strong emphasis is being placed on this understanding of context. New challenges arise in the various stages of the design process, with particular impact on data gathering procedures [37, 59, 61], ranging from
very initial stages of requirements to the evaluation and validation of
the resulting UI. Researchers have also been working to find ways
that facilitate the evaluation by uncovering the aspects that should be
considered in the process. For example, how to cover a wide variety of
usage settings without making the evaluation effort a burden too big to
carry and which details are most likely to affect mobile interaction and
the mobile user experience [89, 115] are commonly addressed topics.
The following sections navigate through some of the research done
so far in these areas, from the processes and representations that allow
a comprehension of context to the application of this knowledge in the
broader evaluation processes.
2.3 Understanding Mobile Contexts
Understanding contexts is per se a complicated task. To begin with, the
definition of the concept itself is not without controversy. Dourish [36]
calls it a “slippery notion” and discusses thoroughly its semantics,
distinguishing the phenomenological or “interactional” view and the
positivist “representational” perspective of the concept. It is not the
intention of this text to pursue or argue this debate. Instead, we consider the various perspectives of context, some more interactional,
others more representational, but mostly focus on its understanding
as a way to guide design and evaluation.
We depart from a broad notion of context as “all that may influence the user experience and usability of a mobile application during its
use.” In this assertion, we still accept that context can be a consequence
of the interaction itself, or, at the very least, that the relevance of the several dimensions of context is determined by the usage
of the mobile application. Nevertheless, it is important to understand
all these dimensions, or at least those that in a given pair of “context
and use” may become relevant. The more we know, the better prepared
we are to understand requirements, design prototypes, and evaluate
creations.
We believe, as we understand that others referred subsequently do,
that to make sense of the world where interaction occurs one must
(1) observe it and experiment on it, in order to plunge into data that
ultimately will bring comprehension; (2) represent it through necessarily partial, possibly biased, informal descriptions or formal models, as
a means to build knowledge from data and compensate for the difficulty to attain absolute clairvoyance; (3) use that knowledge, back in
observation and experimentation, to refine it and hopefully understand
better that world and, for the case at hand, to improve our creations
that intrinsically will also change it.
2.3.1 Grasping the Real World
On the quest to understand the real world, several researchers propose
methods to gather contextual information. In [87], for instance, the
authors describe an approach called bodystorming [8] that can be used
to help designers understand context. The technique consists of taking groups of researchers or users out of the lab and starting discussions and brainstorming sessions in situ. These generate relevant information for the process based on the context in which the participants are immersed. Although the method is applied during early stages
of design, the authors claim that the immediate feedback it generates
during the outdoor design sessions allows a better understanding of
contextual factors and how they impact the usage experience.
To validate the approach, the presented experiments relied on a series of techniques used in situ, at the original location or in locations similar to those envisioned as potential scenarios for the system under development. Through its application in four case studies, the authors concluded that the method was very valuable for understanding context and for showing how these lessons could serve as input to the design process.
In [109], an interesting approach to understanding context is also presented. Here, a technique called contextmapping is discussed as a means
to understand context with the help of end users, creating visions of the
contexts surrounding a specific product even before the initial design
stages. The method relies on users to gather information through different types of media, including photos, diagrams, annotations, and
stories. Together these provide multifaceted visions of context and its
influence on users and their interactions with the real world. Much like probing [50], it offers designers means to access and explore people’s
tacit rituals, motives, goals, and experiences while performing different
activities and how these can be mapped into the design of a product.
A particularly interesting alternative is proposed by Stromberg
et al. [110]: the interactive scenario method. The authors propose the
use of improvisation and acting, both with designers and users, for the
definition of early design concepts. They claim that the method brings
out aspects (ideas, innovations, and problems) that would not have
been recognized so obviously in any other way.
2.3.2 Representing Contexts
Given that context, and how it changes so frequently, is such a powerful
force for mobile interaction design and evaluation [43, 53, 89, 95, 115],
several researchers have been working on ways to describe it, understand it, and appraise its influence on the design and interaction processes.
Savio and Braiterman present a context model for mobile interaction design that integrates a set of dimensions and variables that
describe the specifics of context that are meaningful during mobile
experiences [104]. In fact, the authors go as far as claiming that for
mobile interaction design, “context is everything,” especially considering that away from the homogeneity of the desk-bound PC, mobile
interactions are deeply situated. Accordingly, they propose a set of
overlapping spheres of context in which these interactions take place.
Their model, much like those of other researchers [3], considers three high-level concepts, each nested within the previous: (1) the
outer sphere — culture, including economics, religion, etiquette, law,
social structures, etc.; (2) the middle sphere — environment details
such as light, space, privacy, distractions, other people; and (3) the
inner sphere — activities such as walking, driving, eating, juggling
groceries, etc.
Within the inner layer of context, the authors suggest three additional cognitive spheres. These are once again divided into different
categories and sub-layers. The outer layer focuses on the user’s goals;
the middle layer on the user’s attention, and the inner one looks at
the user’s tasks. Finally, a last set of three spheres (and dimensions) is
related to the hardware and infrastructure. Here, the identified dimensions are focused on the device (e.g., hardware, OS), the available
connection (e.g., speed), and finally the carrier and its services and
prices. As a result, the authors’ vision is that the interface, and whatever it must comply with, is the intersection of all these layers and dimensions.
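As an illustration only, the nested-spheres idea can be sketched as a simple data structure. The original model is described in prose, so the field names and the concrete instance below are our own assumptions, not Savio and Braiterman's notation:

```python
from dataclasses import dataclass

@dataclass
class ContextSpheres:
    # Hypothetical encoding of the nested spheres; field names are ours.
    culture: dict      # outer sphere: economics, religion, etiquette, law...
    environment: dict  # middle sphere: light, space, privacy, distractions...
    activities: list   # inner sphere: walking, driving, eating...

# An invented instance for a street-use scenario:
street_use = ContextSpheres(
    culture={"etiquette": "quiet zone"},
    environment={"light": "bright sunlight", "privacy": "low", "noise": "high"},
    activities=["walking", "juggling groceries"],
)

# The interface must satisfy the intersection of all layers; e.g. a UI rule
# that reads the middle sphere:
high_contrast_needed = street_use.environment["light"] == "bright sunlight"
```

The point of such a structure is merely that design decisions (like the contrast rule above) can be traced back to a specific sphere of the model.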
Rukzio et al. also propose a notation to model mobile interactions
with the real world [101]. They define a set of categories and constraints that divide the physical world in which interaction will take
place. The main categories are the physical constraints, the device’s features, the temporal context, and the social context. The work focuses
mostly on the modeling and representation of UIs rather than user behavior
or activities.
In [112], the authors present research where they also aim at understanding mobile contexts, focusing mostly on urban environments.
Following previous research on the definition of context, the authors
place significant focus on the social and psychological conditions and
resources that affect mobile interaction. With this in mind, they conducted a study where 25 adults were observed while moving within a
city during their daily lives. Each of these participants was followed by
a group of five researchers, and their activities, especially those related
to their urban commuting, were annotated.
As a result from these observations, a set of five characteristics and
activities for mobile contexts was identified: (1) situational acts within planned ones; (2) claiming personal and group spaces; (3) social solutions
to problems in navigation; (4) temporal tensions; and (5) multitasking.
Although somewhat directed towards navigation, as most of the observations were focused on commuting within a city, the authors also identify
a set of design implications for social awareness, mobile games, and UI
design in general that should be considered during the design process of
mobile UIs and mobile context-aware systems. For the latter, the main
suggestions include the use of multimodalities and care with their selection and the management of interruptions.
In our own previous work [26], we introduced a conceptual framework intended to help designers select appropriate characteristics to define contexts and settings for the evaluation of mobile UIs. The goal is not only to re-create realistic usage settings and scenarios that resemble reality, but also to frame concerns and highlight the issues that can significantly affect interaction.
In a similar approach to [104], we emphasize the need to take into
consideration (1) the location and settings in which interaction takes
place; (2) the user’s movement and posture; (3) the devices and how
they are being used; (4) the workload and activities taking place; and
(5) the user and his/her characteristics (e.g., physical, cultural). As a
complement and as a way of materializing this framework, a set of
examples and variables for each of these five dimensions is presented as
part of our research. These can be used as building blocks to
(a) generate complex and realistic contexts to drive design and understand users and how they behave in face of different adversities or
opportunities and (b) help designers while conducting out-of-the-lab
evaluation sessions by pointing out what and how to select locations
that combine a large set of variables and maximize the number of detected issues, without the need to extend these tests to a wide range
of settings and locations. A visual presentation approach is also offered
to help modeling and conveying this information between design teams
(see Figure 2.2).
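The five dimensions lend themselves to a combinatorial reading: picking one value per dimension yields one candidate evaluation scenario. The sketch below illustrates that reading only; the value vocabularies are our own invention, not the variables listed in [26]:

```python
from itertools import product

# Illustrative values for the five dimensions discussed above.
dimensions = {
    "location": ["office", "street", "public transport"],
    "movement": ["sitting", "walking", "running"],
    "device_use": ["one-handed", "two-handed", "hands-free"],
    "workload": ["single task", "interrupted", "multitasking"],
    "user": ["novice", "expert"],
}

def generate_scenarios(dims):
    """Enumerate every combination of one value per dimension."""
    names = list(dims)
    for combo in product(*(dims[n] for n in names)):
        yield dict(zip(names, combo))

scenarios = list(generate_scenarios(dimensions))
print(len(scenarios))  # 3 * 3 * 3 * 3 * 2 = 162 candidate scenarios
```

The combinatorial explosion visible here is exactly why the framework emphasizes selecting a small set of locations that jointly cover many variables, rather than testing every combination.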
Based on three case studies [27], we also made the point that, in
addition to understanding what happens in specific contexts, it is also
paramount to understand what happens between different contexts and
how these changes affect the user’s behavior and how interaction takes
place. We call these situations “transitions” and also consider them
within the context generation framework that we devised. Through
our work, we also argued and validated through these case studies that, in previous work, even when considering context and all its variables, details
Fig. 2.2 Scenarios and transitions using a visual presentation approach.
that were paramount to usability would not be encountered if these
contexts were studied separately.
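The "transitions" idea above can be sketched minimally: if a usage session is an ordered sequence of contexts, the transitions are the consecutive pairs, and an evaluation plan should cover both. The context labels here are invented for illustration:

```python
# A usage session as an ordered sequence of contexts; the transitions are
# the consecutive pairs, to be observed alongside the contexts themselves.
session = ["office desk", "corridor", "street", "bus"]

transitions = list(zip(session, session[1:]))

# An observation plan covering singular contexts as well as transitions:
plan = [("context", c) for c in session] + \
       [("transition", t) for t in transitions]
print(len(plan))  # 4 contexts + 3 transitions = 7 observation points
```

Studying only the four contexts in isolation would miss the three transitions, which is precisely the gap the case studies exposed.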
As pointed out, the goal of context representation is twofold: (1) to identify the defining characteristics of mobile context as a way to grasp and consider it in the design process; and (2) to establish the guidelines or boundaries to recreate or manage it in evaluation processes, considering that the real world and the experiences that take place within it are not confined to singular contexts but to a multiplicity of them and to variations in how we circulate between them. A third important goal, not addressed in this text, takes representation further and formalizes it to the extent that it can be computed and serve as the basis for context-aware applications (see, for example, the work of Dey et al. [34]).
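To make the third goal concrete, a computable context representation lets an application adapt at runtime. The sketch below is only in the spirit of such context-aware systems; the dict keys and adaptation rules are invented for illustration and are not taken from [34]:

```python
def adapt_ui(context):
    """Return UI adaptations for a context description.

    `context` is a plain dict; keys and rules are illustrative assumptions.
    """
    adaptations = []
    if context.get("noise") == "high":
        adaptations.append("disable audio feedback")
    if context.get("light") == "bright":
        adaptations.append("increase contrast")
    if context.get("movement") == "walking":
        adaptations.append("enlarge touch targets")
    return adaptations

print(adapt_ui({"noise": "high", "movement": "walking"}))
# ['disable audio feedback', 'enlarge touch targets']
```

The difference from the representations discussed above is only one of degree: the same dimensions, once formalized, can drive adaptation instead of merely documenting the design space.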
2.4 Recreating and Focusing Contexts
This section presents examples of the use of context descriptions and models to refine the understanding of mobile usage conditions and the impact they have on the usability of mobile applications and the user experience. Several lines of research show how these models and knowledge were used to (1) recreate context, using this knowledge as a basis for re-enacting real-world situations, and (2) drive in situ experiments; researchers have found that taking the process into the wild has its own benefits, but it often needs the guidance and focus that the previous models and guidelines can provide.
In order to support mobile evaluation and consider context without
the effort of taking these procedures to uncontrolled settings, research
has been done on recreating the conditions that would be found in
real-world settings within semi-controlled environments [3, 61, 97, 122].
Kjeldskov and Stage [61], for instance, conducted an important survey on new emerging techniques to evaluate mobile systems in laboratories. The authors point out three main difficulties that are generally felt in real-world field settings. The first pertains to accessing the most significant situations, which is a common problem of ethnographic studies. The second is the application of existing evaluation techniques in mobile settings. The third is caused by the complicated process of data collection.
In field evaluations, it is demanding to cope with the physical surroundings and environment that might affect the setup. This is noticeable both for the designers who gather the data and for the users who accomplish tasks. The authors argue that, in contrast, laboratory settings can reduce these difficulties.
They propose two main dimensions for mobile evaluation in laboratories [61]. The first focuses on the user’s physical movements while
using the mobile system and on the attention needed to navigate
through the software. These two aspects allow designers to arrange
tests according to different configurations simulating various levels of
movement and attention. For instance, if there is constant movement
implied by a specific task, the authors suggest the use of a treadmill
with varying speed to simulate a real situation. As a consequence,
conscious attention is required from the user. The second dimension
aims to evaluate divided attention. To simulate it, the suggestion is to
frequently distract the user in order to simulate the use of a mobile
system in a real-world setting.
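The two dimensions can be read as a small test-configuration grid: each combination of a movement level and an attention level is one laboratory setup. The level names below are our own paraphrase of the prose, not the taxonomy of [61]:

```python
from itertools import product

# Paraphrased levels for the two laboratory-evaluation dimensions.
movement = [
    "sitting at a table",
    "walking on a treadmill (constant speed)",
    "walking on a treadmill (varying speed)",
]
attention = [
    "full attention on the device",
    "frequently distracted",
]

configurations = [{"movement": m, "attention": a}
                  for m, a in product(movement, attention)]
print(len(configurations))  # 3 movement levels x 2 attention levels = 6
```

Such a grid makes explicit which simulated conditions a study covers, and which real-world conditions (e.g., social or privacy pressures) fall outside both axes.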
To evaluate the impact of both dimensions, experiments were conducted to assess their validity, comparing them with real-world evaluations. The two dimensions were compared through an experiment where users moved through a pedestrian street while using the mobile system. Overall, the conclusions state that there are no significant differences between the use of these simulated dimensions and real-setting evaluations. In fact, the situation where most usability problems were identified was while the user was sitting at a table.
Nevertheless, this particular study limits the concept of context, real
settings, and their influence to physical movement and degree of attention, which might introduce limitations to the evaluation process [6].
With these limitations, it can become difficult to assess, for instance,
how users react to different social environments and the impact of
public or privacy issues on the usability of the mobile system.
Pering [92] conducted a study that clearly showed that users’ behavior changes according to the privacy settings of their location and working context, and that these changes must be incorporated into applications and usability studies. This kind of issue is even more notable
when using location-enhanced technology that allows social communication. Consolvo et al. state that it is crucial to understand why, when,
and what users want to share with each other and how this influences
their social behavior as well as the use of technology [18]. Therefore, a
large body of research claims that it is paramount to undertake evaluation studies with realistic settings [37, 86, 98], where more than physical
characteristics of the environment are emulated.
In [111], the authors go slightly further and use role playing and
scenarios to drive the design and evaluation of mobile systems, extending the context to more than the physical settings. During their
research, six design workshops with different topics and themes took
place, spanning a period of three years. From these experiences, they concluded that context plays a significant role during the design process and that introducing small details, such as different prototype fidelities or the participation of real users, is a distinguishing factor that propels the success of the design process. Nevertheless, the
authors also highlight that the gathering of field data is very useful for
the design process.
A similar effort can also be found in [3]. Here, the authors acknowledge the importance of context and real-world variables and obstacles during the design and evaluation of mobile applications. In particular, as with some of the models previously discussed, the authors define three main factors for the context space, namely: the environment, composed of the infrastructure, the physical conditions, and the location; the applications, encompassing functions and input and output channels; and the human, comprising the user, the social environment, and the activities that take place.
Given the importance of these three dimensions, the authors present a set of guidelines and experiments on how to introduce these facets into laboratory evaluation by recreating real-world scenarios and obstacles. In order to vary settings such as the lighting level or the user’s movement (e.g., sitting or walking), their experiments took place within a laboratory where paths were defined, lighting conditions controlled, and several tasks defined. As a result, the authors stress that it is imperative to take context into consideration while designing for mobile devices. However, in this particular case, as with Kjeldskov and Stage’s work [61], only a small number of variables were considered, and although the authors claim context is essential, their recreation of it was specific to the goals of that particular experiment.
In addition to research efforts aimed at understanding the dimensions of context and how they can be recreated or simulated for the design of mobile UIs, or, as in the last case, at helping designers select adequate settings for evaluation experiments, another research trend has been comparing the efficiency of laboratory-based design and evaluation with real-world, in situ approaches. In fact, recent work suggests that, apart from the definition of context and its components, designers must take into account the actual settings in which interaction will most likely take place [19, 37, 86, 102, 108], and not simulations of these, as specific details will otherwise be disregarded and neglected.
In line with their previous work, Kjeldskov et al. took on this challenge and performed a set of experiments aimed at understanding the added value, if any, of evaluating the usability of mobile systems in the field [60]. During the design of a context-aware mobile application, two experiments were conducted. One took place within a laboratory using wireless cameras mounted on PDAs and additional cameras spread throughout the laboratory. In order to recreate a realistic setting, the laboratory was set up to resemble the context in which the software would be used (i.e., a hospital department), including two different rooms connected by a hallway. Data were collected using the aforementioned cameras.
For the field evaluation, the test took place at an actual hospital. For data collection, a set of audio and video capturing equipment was used so that a distance of up to 10 m could be kept between the observer and the user. Results from these tests showed that little added value was gained by taking the evaluation process into the field, considering both the number and criticality of the issues encountered. However, the authors also point out that there were inherent limitations to the field study caused by the lack of control, especially when it came to testing specific features that were not used in the actual field context, as end users were free to test only what they needed and no predefined tasks were set. Clearly, this demonstrates that there is a threshold to be considered and that the benefits and drawbacks of each approach should be carefully weighed.
In response to Kjeldskov's work, a set of follow-up studies has also been published [86, 98], defending the out-of-the-lab counterpart. In the first study [86], the authors argue that it is in fact worth the added effort of taking the evaluation process into the field. To prove their point, the authors also conducted two usability tests with mobile devices, one in a usability laboratory and the second in a field setting. Although the results from these two tests were not that different for most of the categories that were defined (i.e., critical issues, severe issues, cosmetic issues), with a total of 48 issues for the laboratory test and 60 identified issues for the field test, the authors claim that the field evaluation was more successful. This claim is justified not only by the number of issues that were detected but also because only in the field were they able to detect usability problems related to cognitive load and interaction style. This led them to conclude that field tests can reveal problems not otherwise encountered in laboratory evaluations.
Also following Kjeldskov et al.'s work [60], Rogers et al. conducted a similar study [98], likewise pointing out the need to take the design process out of the laboratory and into the real world, despite the added effort that this process might imply. During the design process of a mobile UI for an ecology-oriented tool called LillyPad, the authors conducted two separate in situ studies. From these two experiences, the authors conclude that the results achieved during the process, in terms of detected issues and findings regarding the usage of the application being designed, could not have been achieved without in situ evaluation. In fact, the authors acknowledge that the entire process was costly in terms of time and effort. Still, results show that a cheaper approach, such as asking experts to validate the tool by predicting how it would be used, or lab testing, would not have led to the same results. Additionally, and most importantly, the in situ settings were determinant in revealing how the environment and details such as the time of year and the weather impacted the user experience and how the tool was used.
In [37], the authors also compare laboratory and field tests for mobile evaluation. The two types of tests were held within a usability laboratory and on Singapore's Mass Rapid Transit train. Through the analysis of the results, the authors concluded that designers were able to detect a larger number of usability issues during the field tests and that significant differences existed between the two approaches. In addition to the larger number of issues, almost double (92 for the lab test and 171 for the field test), the problems encountered during the field tests tended to be much more critical than those detected during laboratory-based tests [37]. As the main lesson, the authors point out that although one can be aware of most of the variables that define context, it is impossible to recreate all the factors and assess how they can affect the interaction with the mobile system, and that field tests therefore provide better overall results.
2.5 Section Overview
In summary, two main issues were discussed throughout this section: (1) the understanding of the world where mobile applications are used; (2) the use of that knowledge to manage the evaluation of designed artifacts. The first addresses early data gathering for context appropriation and context representations, balanced between informal descriptions and detailed models. The second oscillates among the use of context knowledge to select evaluation dimensions, recreate contexts in the lab, or even establish the validity limits of either lab or field evaluations.
Regardless of the various approaches and definitions of context, the feeling remains that knowing the variables that constitute the real-world settings where mobile interaction usually takes place is paramount and should be considered throughout the design process of mobile UIs. This holds true, in particular, for the evaluation stages of the design process, where the systems and UIs need to be validated by the end users in the settings where they were designed to be used.
The various research experiences that were described take different approaches in doing so. Some try to recreate context within controlled environments, mimicking obstacles and lighting conditions, or even offering designers guidelines and road maps on how to consider the various dimensions and create realistic scenarios with them. This trend invests in effort and time savings and, given that some of the most relevant dimensions of context can be easily reproduced, achieves significantly positive results for particular cases and contexts.
A parallel line of research approaches these issues differently, defending that, in order to truly capture the real-world context and its impact on the user experience, it is necessary to take the evaluation and design process into the real world, out of the laboratory. In doing so, there is clearly an added effort and a wide set of variables that cannot be controlled and, many times, cannot even be anticipated. Nevertheless, the richness of the settings and usage situations can often surpass simulations within laboratories and, despite requiring additional resources, provide a deeper and more positive impact on the design process.
Naturally, in both cases, taking the evaluation process out of the lab or conducting it under simulated scenarios that emulate real-world conditions, regardless of the design stage in which it might take place, poses interesting challenges. In particular, one must pay attention to the data gathering techniques, which are usually intended for a single context and location, and to the tools used during such experiments (e.g., low- or high-fidelity prototypes and data-capturing equipment).
The following sections address these issues and the research that has been tackling them.
3 Prototyping the Concepts
Prototypes are an essential tool for many design methodologies, including user-centered ones. While designing a certain software program or artifact, prototypes allow designers to discuss, share, and test their ideas and concepts with others, including fellow designers and end users, before the final product is completed [5, 44, 79].
Current literature, although only recently showing some evidence of evolution and suggesting some new approaches with strong potential [53, 119], has increasingly provided guidelines on how to develop or use low-fidelity prototypes for mobile devices.
With the need to emulate real-world settings and provide users with realistic contexts and user experiences, prototypes, even at the earliest stages, need to be conceived in ways that afford such use. Traditional paper-only approaches tend to fall short of these goals and limit the evaluation stages, even in traditional lab-based experiments.
A growing body of research is working to offer suggestions on what to use or do in order to support the evaluation that mobile usability requires, trying to cope with the specific characteristics of mobile devices and their usage, as well as with their growing multimedia and varied input and output capabilities (e.g., multimodal interaction). Available examples found in the literature follow different approaches, pointing in directions that rely on low-fidelity prototypes and paper-based procedures, especially for early design stages, on the use of technology, and on design and prototyping software that addresses the added mobile requirements and supports the creation of, and experiments with, mixed and high-fidelity prototypes.
In summary, new prototyping approaches are required that overcome the problems posed by traditional techniques, which have a strong impact on the evaluation tasks that follow; for the past few years, researchers and practitioners have been increasingly working on this topic. The following sections present a short summary of the use of prototypes in HCI, highlight the challenges that emerge when some of the traditional prototyping techniques are applied to mobile scenarios, and discuss the upcoming research efforts within this area.
3.1 The Use of Prototypes in HCI
As previously introduced, prototypes are essential tools for the design of interactive systems, as they convey to users and designers an image of a forthcoming system and can be used at various stages of the design process, even before any system is being developed.
In particular, low-fidelity prototypes, non-functional UIs sketched on paper cards or post-its, put together with glue and pins and cut out with scissors [5, 49, 117], are used to simulate an actual system while evaluating its features and detecting its flaws at early stages, and can provide a crucial tool for any designer.
In [38], the importance of low-fidelity prototypes to drive design and evaluate ideas at low cost is addressed. The author stresses the need to provide users with objects that reflect ideas, illustrating assumptions and concepts in a physical and tangible manner. Accordingly, the use of prototypes facilitates user interaction with design concepts and challenges users to provide input and generate ideas.
Advantages for designers are also paramount. As detailed in [99], low-fidelity prototypes provide a way for designers to assess user satisfaction at very early stages. They also provide a common reference point for all the design team members, improve the completeness of the product's functional specification, and substantially reduce the total development cost of the product.
This type of easy-to-use and highly configurable prototype is particularly interesting for early design stages, since such prototypes can be quickly built using inexpensive materials [13, 17, 55].
As they are so easy to make and rearrange, the possibility of cutting, editing, and adjusting them, even during the evaluation sessions in which they are used, allows designers, and even participating users, to explore alternatives in a thought-provoking way. This ease of creation and use facilitates user involvement and propels innovation and rapid idea generation, with shorter sketching and evaluation cycles, which makes these prototypes ideal for mobile design.
In [39], a research experience demonstrates how all kinds of material, even "junk", can be used to create low-cost but highly effective prototypes. During the course of the experiments that were conducted, three prototypes were created using "junk" material such as paper plates, cardboard trays, coffee stirrers, toothpicks, etc. In addition to the aforementioned advantages of using such material in constructing prototypes, the outcome of the experience highlighted the cooperation efforts that were necessary and the innovation that resulted from this collaborative process.
These low-fidelity prototypes are typically used with the Wizard of Oz technique [58]. As sketches gain form and UIs are drawn on paper cards, these are used, following specific sequences and navigation arrangements, to simulate the actual application as it should function. In the Wizard of Oz technique, a designer acts as the computer, changing sketches and screens according to the user's actions, while, if possible, another designer annotates and analyzes the user's behavior and any errors that might be detected [58].
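As an illustration of the mechanics involved, the navigation arrangement that the "wizard" follows can be thought of as a lookup table from the current screen and user action to the next paper card. The sketch below models that idea in Python; all screen and action names are hypothetical examples, not taken from [58].

```python
# Illustrative sketch: a Wizard of Oz navigation arrangement modeled as
# a table mapping (screen, user action) -> next paper card. Screen and
# action names are hypothetical.
navigation = {
    ("home", "tap search"): "search",
    ("home", "tap settings"): "settings",
    ("search", "submit query"): "results",
    ("results", "tap back"): "search",
    ("settings", "tap back"): "home",
}

def run_session(start, actions):
    """Replay a user's actions, returning the visited screens and a log
    of unsupported actions for the observing designer to annotate."""
    screen, visited, issues = start, [start], []
    for action in actions:
        nxt = navigation.get((screen, action))
        if nxt is None:
            issues.append((screen, action))  # the wizard has no card for this
        else:
            screen = nxt
            visited.append(screen)
    return visited, issues

visited, issues = run_session("home", ["tap search", "submit query", "tap back"])
print(visited)  # ['home', 'search', 'results', 'search']
print(issues)   # []
```

The log of unsupported actions plays the role of the second designer's annotations: each entry is a point where the prototype's screen flow did not anticipate the user's behavior.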
Virzi et al. [117] conducted a thorough study focusing specifically on the efficiency with which low-fidelity prototypes can be used during evaluation stages and on how they allow designers to detect design errors at several stages of the design process, concluding that they are as effective as high-fidelity prototypes. Furthermore, once again, their low cost and short build time are emphasized and pointed out as some of low-fidelity prototyping's major advantages. In fact, in some cases, their efficiency is almost as high as that of software prototypes. Several studies [55, 117] have demonstrated that paper prototypes can be efficiently used to prevent later design errors and unusable applications.
Nevertheless, even if they are to be viewed as a mere design tool, during evaluation sessions they are often perceived by users as closely resembling the final application, especially when it comes to mobile applications and systems, where users tend to have difficulty separating the UI from the available hardware [23].
In [48], the author draws designers' attention to avoiding misleading users while creating prototypes. Often, designers create very appealing prototypes, which please users but are too expensive or impossible to actually implement, possibly leading to failure or disappointment, a situation the author calls the "cargo-cult syndrome." On a more user-concerned level, this "cargo-cult syndrome," introduced by Holmquist as a warning, can also affect the usability of the resulting applications during evaluation. As the prototypes may not be realistic and, at times, may be far better and more appealing than what the actual product will turn out to be, major usability issues may pass undetected during evaluation. However, in a final version without some of the prototyped features, usability issues can be present in a very obtrusive way.
High-fidelity prototypes, in opposition to low-fidelity ones, are generally composed of functional software components that can be tried out on the targeted platforms [14, 80, 121]. Although in different ways, some of the problems felt with low-fidelity prototypes, especially their refinement, can also be experienced with software prototypes [32]. These details are often decisive when selecting the tools that are used to build the prototypes. To support this type of prototyping, generally used at later stages of design, designers utilize development environments or prototyping tools [15, 47]. Naturally, if these are not properly suited to mobile devices and to the new evaluation requirements, they might not serve their final goal either. High-fidelity prototypes are also particularly adequate for designing and testing the more advanced interaction features increasingly available on mobile devices, such as multimodal interaction (e.g., speech recognition), haptics, multi-touch, and gesture recognition.
Overall, when properly used, prototypes represent a great tool for designers, as they are, together with users, the main component of every stage of evaluation.
3.2 Mobile Requirements
Despite all their advantages, prototypes, either low or high fidelity, are generally used within controlled scenarios, where techniques such as the Wizard of Oz present little challenge or where paper, pencils, and cards can be easily exchanged and edited during the process. For higher-fidelity prototypes, within lab-based tests, equipment for video capture and analysis can be easily installed and used.
Naturally, when it comes to mobile devices, several issues might affect how prototypes are built, used, and tested, especially considering that they often need to be used within highly complex lab settings that try to recreate rich contexts or, in some instances, in real-world settings [27, 77]. For instance, using Letter-sized paper to draw a UI and using post-its as menus might be an acceptable way to prototype a desktop application. However, when it comes to mobile applications, specific attention must be paid to details that compromise their usage in real settings. In fact, here, paper prototyping may pose a considerable problem and mislead users. For instance, using a post-it with a few key buttons to simulate the use of a PDA with a GPS card might seem a great idea for testing a future application. However, using the real device in one's hand throughout the day may be a demanding task, requiring a much different UI [23].
Testing an application with an artificial keyboard might work if the size of the letters is large; however, using a real device keyboard might be quite different, suggesting the need for alternative solutions. In addition, for mobile devices, even external characteristics such as size can affect how efficiently users interact with a certain UI [57]. These details may pass unnoticed if one does not create suitable prototypes and perform tests in real-life scenarios. It is true that low-fidelity prototypes need to be quick to build and easy to use [5, 117], but the trade-off between the effort of building them and the misleading results that they might provide needs to be carefully analyzed [?].
3.3 Low-fi Prototypes
In general, to be as profitable and useful as acclaimed, low-fidelity prototypes for mobile devices require special attention [23, 68]. In many situations, when ill-implemented, they might even produce the opposite effect to that expected [44, 48]. Users, and sometimes designers, tend to see mobile prototypes, even low-fidelity ones, as closely resembling final solutions. If care is not taken, this can affect the result of the evaluation process by misleading users about what the actual experience might end up being and about the use of the prototype itself [28, 68, 77]. Additionally, it is paramount to create them in a way that facilitates their evaluation within realistic settings, either simulated lab-based scenarios or field studies [23, 77].
The same issues apply to higher-fidelity prototypes. Clearly, once again, providing the opportunity to test and interact with the prototypes in realistic settings is paramount. Thus, functioning on actual devices rather than emulators quickly becomes a requirement to make the most out of higher-fidelity prototypes, as well as a means to quickly adjust them and take advantage of the digital medium to capture data out in the field. In summary, running a prototype on a desktop computer and asking a user to interact with it using a mouse is not the best way to test the UI, as it provides an experience that is very different from what the actual interaction with the resulting UI might be.
Accordingly, and aware of this fact, new approaches have been emerging, aiming at introducing new types of low-fidelity and mixed-fidelity prototypes and prototyping tools that overcome these issues.
3.3.1 Adapting for Out-of-the-lab Use
Currently, low-fidelity prototyping for mobile devices is gradually evolving from the classic approach to a technique much more adequate for mobile design. Traditionally, prototypes are built by creating sketches on paper cards and using post-its as buttons or menus. However, as discussed, most of the issues that hinder the process, and the materials and techniques commonly used, prevent designers from conducting evaluations in realistic settings.
These issues and consequent requirements are relevant because, as stated in [119], prototypes, apart from serving their communication and concept design purposes, are most valuable when they allow designers to conduct usability tests on them. Thus, prototypes that cannot be carried and used in a usability test fall short of their potential. In his work, Weiss approaches prototyping for mobile devices from a different perspective [119] when compared with the traditional approach. Here, the offered suggestions lead designers to create prototypes with some degree of realistic features. The author presents a list of materials and advises designers to measure and create sketches that closely follow the actual screen real estate of the device, counting the available character rows and columns.
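The counting advice amounts to simple arithmetic: derive the sketch grid from the device's screen dimensions and its character cell size. The sketch below illustrates the idea with hypothetical example values, not figures from [119].

```python
# Rough sketch of sizing a paper prototype from a device's actual screen
# real estate. The pixel dimensions and character-cell size below are
# hypothetical example values, not taken from any specific device.
screen_w, screen_h = 240, 320        # device screen, in pixels
char_w, char_h = 6, 8                # one character cell, in pixels

cols = screen_w // char_w            # characters per row
rows = screen_h // char_h            # rows of text on screen

print(f"sketch grid: {cols} columns x {rows} rows")  # sketch grid: 40 columns x 40 rows
```

A paper sketch ruled to this grid keeps designers from drawing labels or menus that could never fit on the real screen.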
Lim et al. also make the point that the validation of a prototype through its evaluation is not possible if the chosen prototyping technique is not appropriate [68]. The authors conducted a study that highlighted key characteristics that prototypes at several stages must possess in order to provide effective results. The research concluded that paper prototypes can be valuable tools for evaluating mobile prototypes only if particular care is taken while creating them. In particular, and similar to some of our results [23], prototypes need to provide a closer experience to the end result, even at early stages. For instance, given mobile devices' portability, suitability for intensive usage, physical characteristics, and peculiar interaction modalities, the distinction between the device and the UI, although rarely addressed and implemented, seems crucial to take into account while creating mobile low-fidelity prototypes.
In [44], this topic is discussed and some experiences described. The author explains how paper and physical prototyping are essential to provide users with notions such as screen real estate, input mechanisms, weight, and so on. Moreover, the author suggests that integrating the device prototype with the sketching process inspires inventive and creative products. The research experiences that were conducted included the construction of physical prototypes using materials such as foam, acetates, and other found materials. Sketches were drawn on paper and used in concert with the physical prototypes. Positive results demonstrated that combining both concepts and actually creating physical prototypes enhanced the evaluation and testing processes.
The research described in [55] points in the same direction. The monograph describes the user-centered design process of two mobile applications for Nokia cell phones. In the first design process, for a navigation application, very little user input was given during the initial stages of design. Nevertheless, a three-day workshop with engineers was held and short tests with low-fidelity prototypes took place. A subsequent trial with end users and a pilot version of the software also took place over the course of three weeks. This initial experience alerted the designers that the pilot product did not support the users' needs when used in real-life contexts. This led to a second design iteration in which a new prototype with some adjustments was built and tested with end users, resulting in an improved version of the tool.
The second design process aimed at developing image and multimedia message editing software for the mobile phone. For this product, the design team was able to apply a contextual design approach from the beginning of the project. As soon as requirements were gathered through a series of techniques, paper prototypes were created, which were tested with end users and later generated a higher-fidelity prototype and, ultimately, the final tool. Overall, from these two experiences, the authors learned a set of lessons. In particular, they suggest that the most important aspect of the design process is providing users with a real mobile usage context: for mobile devices, and mobile phones in particular, users need to be able to touch buttons and interact with prototypes that feel like they are actually working, highlighting the importance of using realistic prototypes during the process.
More recently, although some examples available in the literature apply common techniques with no particular emphasis on the specific characteristics of the devices and their interaction, some researchers have been updating their practices and offering advice on how to actually develop prototypes adequate for mobile interaction design. For instance, Jones and Marsden provide some examples of prototypes for mobile devices, suggesting how they should be built and used [53]. In their book, the first example shows a prototype for a UI to access a digital library through mobile handsets. The prototype clearly shows the information that should be presented and how it should be mapped on the device's screen, which is a great use for a prototype and can clearly convey functionality and share the designer's vision. However, a closer look shows that fitting all the information that is shown onto a cellular phone, or perhaps other mobile devices, would be extremely difficult or impossible to achieve. Alternatively, the authors suggest a second approach to the prototype where the same interface is recreated with post-its on a whiteboard.
Again, as it is presented, given a common device's characteristics and available screen space, the mapping of the UI onto a device would be completely different, possibly misleading both users and designers. Moreover, it would not provide much help in trying to detect usability problems and errors. More importantly, realistic evaluation with such a prototype would be difficult to achieve. Finally, a third example shows crucial improvements and provides guidelines on how to adjust the procedures to the mobile requirements. Here, the UI being prototyped targets a cellular phone. Accordingly, the authors show a picture of an actual mobile phone emulating the device. The picture of the phone is used with the sketches within a predefined scenario, which, although not including the necessary characteristics to facilitate evaluation in situ, still provides a much more realistic vision of the actual UI.
In addition to details such as screen real estate and the UI design, with mobile prototypes, characteristics external to the sketches may influence the validity of the prototype, as pointed out in [23]. Details such as weight, interaction modalities, size, or shape may affect the way in which the sketch is perceived. Furthermore, simpler constraints such as the screen resolution or area might imply that common prototyping techniques need to be adapted to such details. Clearly, due to the aforementioned characteristics, special attention must be paid when prototyping for mobile devices, suggesting the need for new approaches to this practice. In this work [23], the authors describe a set of experiences that resulted in guidelines on how to create prototypes for mobile devices. In particular, it is suggested that a separation has to be made between the physical prototype, which should be created using solid materials (e.g., plastic, wood) and resemble the actual device, and the software sketch, which should consider the available resolution, modalities, and UI components.
Research conducted by Lumsden and MacLean [77] also provided interesting results. In their work, they conduct a series of experiments that validate the need to use realistic prototypes, even at the earliest stages of the design process. These tests were done within the scope of the design process of a mobile system intended to be used in a grocery store to enhance the shopping experience. In particular, the authors present a paper prototype that was designed with the exact screen size of the end device, preventing users from being misled about screen dimensions. This paper prototype was the basis for a higher-fidelity one that could be interacted with directly on the device. This pseudo-paper prototype was created by directly using the paper sketches as a navigable PowerPoint presentation.
Although these tests were conducted within a laboratory setting,
different settings were experimented and results demonstrated that the
realistic pseudo-paper prototypes allowed participants to identify more
unique usability problems than the traditional paper only prototypes.
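A present-day equivalent of such a pseudo-paper prototype can be assembled by wrapping scanned sketches in clickable pages viewable full-screen on the device. The sketch below (not the authors' tooling) generates one HTML page per sketch with tappable regions; file names and hotspot coordinates are hypothetical.

```python
# Sketch of a navigable "pseudo-paper" prototype in the spirit of
# Lumsden and MacLean's approach: each scanned sketch becomes a page
# with tappable regions linking to the next screen. File names and
# hotspot coordinates (left,top,right,bottom in pixels) are hypothetical.
screens = {
    "home.png": [("10,400,310,460", "list.png")],
    "list.png": [("10,60,310,120", "item.png"), ("10,400,310,460", "home.png")],
    "item.png": [("10,400,310,460", "list.png")],
}

def page_html(image, hotspots):
    """Render one sketch as an HTML page with clickable regions."""
    areas = "".join(
        f'<area shape="rect" coords="{coords}" href="{target[:-4]}.html">'
        for coords, target in hotspots)
    return (f'<img src="{image}" usemap="#nav" style="width:100%">'
            f'<map name="nav">{areas}</map>')

# One HTML file per sketch, viewable full-screen in the device browser:
for image, hotspots in screens.items():
    with open(image[:-4] + ".html", "w") as f:
        f.write(page_html(image, hotspots))
```

As with the PowerPoint version, the sketches themselves stay hand-drawn; only the navigation between them is made interactive on the actual device.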
Liu and Khooshabeh also conducted a study comparing paper and interactive prototyping techniques for mobile and ubiquitous computing environments [74]. Results from their experiments also indicate that low-fidelity prototypes can be successfully used for this type of system, but care must be taken, especially with the prototypes' fidelity and the way in which they are used.
In [11], the authors clearly demonstrate the importance of adapting prototyping procedures for mobile devices as well. In particular, they highlight the interactivity of the prototypes as a very important factor for the evaluation of mobile prototypes in the wild, also highlighting the need for continuous investment in tools that facilitate this process by allowing designers to quickly create prototypes that can be interacted with in mobile settings.
Other research experiences that focus on how prototypes should
be built and what should be their characteristics for mobile evaluation consider the utilization of various fidelities of prototypes, including
206
Prototyping the Concepts
more or less details depending on what is being evaluated. In [80], the
authors point the differences between prototype fidelities and present
an example of a mixed-fidelity prototype. Accordingly, they suggest five
different dimensions that can be adjusted according to the evaluation
that needs to be conducted. The first is the visual refinement, which can
vary from hand drawn sketches to pixel-accurate mock-ups. The second
dimension refers to the breadth of functionality, which defines what the
user can actually do with the prototype, ranging from no functionality
where no action can be done with the prototype, to highly functional
prototypes where most of the final functionalities are available. Closely
related is the third dimension, depth of functionality, which pertains to
how accurate the simulation and level of functionality of each feature
is. Finally, the richness of interactivity and the richness of the data model refer, respectively, to how close the interaction with the prototype is to the final product and to how representative the actual domain data employed in the prototype is.
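To make the dimensions concrete, they can be seen as independent axes on which a given prototype is rated. The sketch below is a hypothetical illustration only; the ordinal 0–4 scale and all names are ours, not taken from [80]:

```python
from dataclasses import dataclass

@dataclass
class PrototypeFidelity:
    """Hypothetical rating of one prototype on the five dimensions of [80].

    Each axis ranges from 0 (lowest fidelity) to 4 (closest to the
    final product)."""
    visual_refinement: int         # hand-drawn sketch (0) .. pixel-accurate mock-up (4)
    breadth_of_functionality: int  # no actions possible (0) .. most features present (4)
    depth_of_functionality: int    # shallow simulation (0) .. accurate per-feature behavior (4)
    richness_of_interactivity: int # static cards (0) .. final-product interaction (4)
    richness_of_data_model: int    # placeholder data (0) .. representative domain data (4)

    def is_mixed_fidelity(self) -> bool:
        """A mixed-fidelity prototype scores differently across dimensions."""
        scores = {self.visual_refinement, self.breadth_of_functionality,
                  self.depth_of_functionality, self.richness_of_interactivity,
                  self.richness_of_data_model}
        return len(scores) > 1

# Example: hand-drawn screens wired to real navigation and real domain data.
paper_plus = PrototypeFidelity(0, 2, 1, 3, 4)
print(paper_plus.is_mixed_fidelity())  # True
```

Recording where a prototype sits on each axis makes explicit which usability dimensions a given evaluation can, and cannot, speak to.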
Overall, these dimensions allow designers to categorize their prototypes depending on the type of UI or device they want to evaluate or design for. As previously mentioned, given mobile devices' characteristics, dimensions such as the level of visual refinement and the richness of functionality are paramount.
In summary, to maximize the benefits of prototypes for mobile
interaction design, the above-mentioned studies have pointed out that
the devices’ characteristics, if possible, should be taken into account
when creating prototypes. This can be done either by creating physical
mock-ups of the devices that can be handed to end users or by using
actual devices in concert with the low-fidelity prototypes (e.g., placing sketches on top of actual devices). In addition, other features such
as screen real estate and/or available items and interaction modalities should be considered while creating the prototypes and simulated
during their development and experimentation.
3.4
Mixed and High-Fidelity Prototypes
Although low-fidelity prototypes are extremely useful during early design
stages, some errors and details are not detectable with these, even when
some of the guidelines from the literature described in the previous section are followed. Usability issues of a final product, especially in detailed interaction, may pass unnoticed [55]. Moreover, depending on the complexity of the UI being developed, more functional prototypes are often required (e.g., for multimodal interaction or gestures); navigation between cards and the arrangement of the several screens are but a few examples. Furthermore, the need for a mediator to act as the application, by exchanging cards, might compromise the evaluation of a prototype in the adequate setting or environment. Techniques such as Wizard of Oz [58] are hard to apply and might even obstruct the evaluation when testing personal applications, which are often used on mobile devices. Accordingly, there is a point at which designs have to evolve from sketch-based prototypes into higher-fidelity ones.
However, as the effort of creating a high-fidelity software prototype is considerable, such prototypes are often skipped, and design problems are neglected, only to be detected when the software is completed [55]. Research addressing this issue, along with a number of prototyping frameworks that allow designers to sketch, or to use sketches, to create high-fidelity prototypes, is becoming available. To provide some context, this section starts by addressing initial efforts and non-mobile-specific platforms and tools, followed by the evolution of some of these and the appearance of new ones that consider the additional challenges of mobile design.
Landay [64] created SILK, one of the first systems for this purpose.
It supports early-stage sketching, maintaining the same principles as paper prototyping but enhancing it with the interactivity that the electronic medium provides, encompassing annotations and quick modification of the sketches. Furthermore, the software recognizes widgets and UI details, allowing the user to easily move and change them.
With DENIM [71, 72], the authors followed Landay’s footsteps and
extended the sketching tools also providing designers with the ability
to quickly create sketch-based prototypes and interact with them on
the computer, including the possibility of replacing drawn components
with actual programmatic components.
Although none of these tools were directed at mobile devices and mobile design, they were among the predecessors of this research field
and an inspiration for works such as SketchWizard [22] and SUEDE [62]. These support new modalities and interaction modes, namely pen-based input in the former and speech UIs in the latter. Ex-ASketch [45] also allows designers to quickly animate sketches drawn on a whiteboard.
In more recent years, tools have gradually been targeting ubiquitous contexts. Damask [69, 70] is another example of a prototyping framework, extending its scope to multi-device applications. Using
sketches made by the user and merging them with design patterns,
Damask creates usable prototypes for several devices. These can be
later used on a simulator for usability tests. The design process is based
on design patterns that can be used to create multi-device UIs with similar quality for each of the specific devices. Although not specifically
directed towards mobile devices, its multi-device approach supports
prototyping for mobile phones, PDAs, and other mobile technology.
A similar work by Collignon et al. also demonstrates a prototyping
tool that aims at supporting the design of UIs for different devices [16].
Coyette et al. have also been working on multi-device and multi-fidelity prototyping tools [21]. Their tool (SketchiXML) also supports a sketching-based design in which sketches can later be replaced by interactive widgets and items. Prototypes are designed with a specific device in mind, which can be selected from a list of available options, or a custom device can be defined. Using shape recognition techniques, sketches are identified
and a set of widgets is generated automatically, depending on the target
platform.
An evolution of their initial tool [20] includes additional features
that support different fidelities in their prototypes and advanced
options such as gesture recognition. This newer version also exports
prototypes to a platform-independent language that allows them to be
used in different devices.
More recently, prototyping systems directed specifically at mobility and ubiquity have been developed. For instance, Topiary [66] is a tool particularly dedicated to location-enhanced applications, enabling designers to use several items to build interfaces and corresponding storyboards. The tool is divided into two major
parts: a Wizard UI and an End User UI. The first allows users to design
the prototypes whereas the second serves testing purposes.
Ripcord is another system intended to support the prototyping and evaluation of mobile applications [63]. It suggests the use of HTML to create the low-fidelity prototypes. The framework includes a mobile device with an installed web browser on which the prototype is used; the device is then connected to other devices, such as GPS receivers or cameras, which capture usage data. More details on its functionalities are given in the following section.
Li and Landay, in more recent work, introduce ActivityDesigner, another example of a tool for prototyping ubicomp and mobile applications for everyday human activities [67]. Although the tool follows an activity-centered design orientation, allowing designers to model activities at a high level, it still provides designers with the ability to visually create prototypes that can be interacted with on actual
devices. The usage process starts by modeling activities based on data
gathered from observations. Afterwards, these can be represented as
scenes, which have stakeholders and roles. These scenes can also be
part of themes, which allow their organization based on domains, for
instance.
After scenes and themes are defined, functional prototypes can be
created using storyboarding, demonstration, and scripting. These can
be later deployed into a target device either as HTML+JavaScript or,
for high-end devices such as TabletPCs, they can be run on a virtual machine that is also provided. In addition to these features, the tool includes
data gathering mechanisms such as journals and logs that can be used
directly on the target device.
ActivityDesigner has been used in several case studies with positive results: it has supported the development and evaluation of several applications, speeding up the design process and, given that it functions mostly visually or through scripts, empowering designers with fewer technical skills.
Based on some of the findings described in the previous sections (e.g., the need to provide more fidelity and interactivity for low-fi prototypes and the ability to test them in rich settings), we have also
developed a tool for mobile prototyping [30]. Its umbrella goal is to
support the early design stages of applications for mobile devices. It
provides designers with tools to quickly create prototypes and evaluate them, focusing specifically on mobile and handheld devices. The framework allows the construction of low-, mid-, and high-fidelity prototypes, extends their use into evaluation through automatic Wizard of Oz support, and also provides means to analyze the gathered data.
More concretely, for prototyping it aims at (a) supporting a visual,
quick and easy design of realistic mobile prototypes, with flexibility
regarding their fidelity, (b) offering expert users or users without any
programming knowledge the possibility of building or adjusting their
prototypes, (c) allowing and promoting participatory design and prototyping during outdoor evaluation sessions within realistic settings.
These goals are achieved through a couple of general functionalities included in the prototyping tool. First, the tool supports
prototyping with mixed fidelities (thus analyzing different usability
dimensions) — see Figure 3.1. Hand-drawn sketches or interactive
visual preprogrammed components can compose different prototypes
with varied levels of visual refinement, depth of functionality, and richness of interactivity, enabling the comparison of different design alternatives and the evaluation of button sizes, screen arrangements, element placement, interaction types, navigation schemes, audio icons, and interaction modalities, among others.
Fig. 3.1 MobPro — desktop editor.
Second, the tool integrates usability guidelines for mobile devices into mid-fidelity prototypes, enforcing details such as the location of each component, the actual size of the component, or even the amount of information per screen.
In a follow-up project, again building on the findings of previous work and of the available literature, we introduced an extension to the prototyping framework [25]. This extension mostly consists of an in situ prototyping tool that supports a set of techniques that have proven to enhance the design process of mobile UIs. These techniques had previously been applied and validated through the use of the earlier framework in various case studies. Building on the positive results, and on the need to carry on with the design process in situ, while testing the prototypes during user interaction and experimentation, a mobile extension was required. Overall, the goal is to combine the advantages of rapid prototyping and of adjusting sketches in realistic settings and scenarios with the added features that the digital medium can provide.
Figure 3.2 depicts the mobile version of the prototyping tool.
Fig. 3.2 MobPro — mobile editor.
With this addition, prototypes can be created in the existing desktop tool and easily edited with the mobile tool during outdoor evaluation sessions whenever necessary. Additionally, new prototypes can also be created with the mobile tool. This provides a set of new benefits. For instance, by providing the option to augment sketches with behavior, the tool allows users to maintain their sketching procedures (e.g., drawing a UI on a sheet of paper or card) and to turn the result into an interactive
prototype. For example, if the device includes a camera, the user can
easily capture a photo of a hand-drawn sketch, import it to the editor,
and define interactive areas within it or place software elements (e.g.,
buttons) on top of it.
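The mechanics behind such augmented sketches can be illustrated with a minimal, hypothetical sketch (our own illustration, not MobPro's actual implementation): each captured photo becomes a screen, rectangular interactive areas are laid over it, and a tap inside an area navigates to the target screen.

```python
from dataclasses import dataclass, field

@dataclass
class Hotspot:
    """A rectangular interactive area drawn over a sketch photo, in pixels."""
    x: int
    y: int
    w: int
    h: int
    target: str  # name of the screen this area navigates to

    def contains(self, px: int, py: int) -> bool:
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

@dataclass
class Screen:
    image: str                                 # path to the captured sketch photo
    hotspots: list = field(default_factory=list)

def handle_tap(screens: dict, current: str, px: int, py: int) -> str:
    """Return the next screen's name, or stay put if the tap hits no area."""
    for spot in screens[current].hotspots:
        if spot.contains(px, py):
            return spot.target
    return current

# Example: a photographed "menu" sketch with one drawn button leading to "settings".
screens = {
    "menu": Screen("menu_sketch.jpg", [Hotspot(40, 300, 160, 60, "settings")]),
    "settings": Screen("settings_sketch.jpg"),
}
print(handle_tap(screens, "menu", 100, 320))  # settings
```

Even this small amount of behavior turns a static paper sketch into a self-running prototype, removing the need for a human mediator to exchange cards during in situ tests.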
3.5
Section Overview
Globally, it is clear that prototyping is an extremely positive endeavor that allows designers to propel design, include users in the design process, and evaluate concepts and ideas at early stages of design. However, as some research already points out, with mobile devices, and given their features and characteristics, particular care must be taken when generating effective prototypes. Low-fidelity prototypes play an important role in this process, mainly because of their low cost, ease of construction, rapid development, and efficacy in enabling error detection.
The previous sections and the suggestions they include clearly show the evolution of, and the trends in, low-fidelity prototyping procedures for mobile devices. Most researchers agree that creating realistic prototypes, even at this early stage, improves results and contributes to a more realistic experimental context. To this end, research suggests that the best results are achieved when
• prototypes bear a close resemblance to the final software and experience, including dimensions such as aesthetics, features, and functionality. Although they do not need to emulate the final application exactly, they should provide a close enough artifact to offer a realistic usage experience that conveys the design ideas and allows users to understand the interaction concepts and issues;
• prototypes follow the device's characteristics (e.g., size, weight) and features, and do not include functionality, visual elements, fonts, etc., that will not be available in the final result. This allows users to grasp, carry, drop, and touch the prototype, and to be engaged while interacting with it, simulating an experience that is much closer to what will be available with the final design;
• the different interaction modalities are included or simulated
along with the other characteristics of the device and UI.
• mobile low-fidelity prototypes allow designers, and possibly users, to distinguish UI issues from those that are intrinsic to the underlying technology and form factor.
As a consequence of these guidelines and conclusions, significant research has addressed the development of prototyping tools for both early low-fidelity and high-fidelity prototypes. Overall, prototyping tools offer a set of advantages that enhance the prototyping process and overcome some of the issues that traditional, or even updated, paper-based or low-fidelity prototypes have. In particular, these include facilitating development, speeding up iteration, and easing user feedback during early stages; most importantly, a large set of them includes features that promote evaluation within rich, and possibly out-of-the-lab, settings using actual devices, resulting in a testing context much closer to reality. Moreover, supported by the digital medium, they open the opportunity to be complemented by evaluation techniques that take advantage of different modalities and of digitally supported data gathering. These opportunities and extensions are discussed in the following section.
4
Evaluating the Results
As discussed in the previous sections, mobile evaluation is a crucial part of the mobile interaction design process. Additionally, in order to reach the best results, one must pay attention to a set of factors that emphasize realism and the inclusion of different contexts and variables during the evaluation process, or even take it out of the lab.
Unfortunately, a study conducted by Kjeldskov in 2003 demonstrates that this was hardly the case [59]. In fact, this research concluded that the majority of researchers working with mobile interaction either avoided the evaluation stage or stopped at lab testing sessions with traditional techniques. Notwithstanding, as has been shown, a few years later it is possible to find a significant body of literature that attests to the evolution so far and shows that mobile evaluation is becoming an important research field.
The techniques and technology discussed here are particularly adequate for field evaluation scenarios. Nevertheless, some complex laboratory tests that mimic parts of real settings (e.g., hospital simulations) are also contextually demanding and often resort to these techniques to gather data and understand usability. Moreover, this text's emphasis
on field evaluation does not intend to diminish the need for rigorous laboratory tests. On the contrary, lab experiments are fundamental, as they often lay the grounds for effective and feasible field studies. A good example of this complementarity is taken even further by Williamson et al. [120]. Before going into the wild, the authors delimited some of the design decisions by building a simulator that models basic user behavior relevant to the problem at hand.
Following the advances both on the acknowledgment of context’s
importance and on the need to update prototyping techniques and how
they can be used to support mobile evaluation, this section addresses
the techniques that have been emerging in order to complement these
advances and how designers and engineers have been applying them.
We start with a brief presentation of classic usability evaluation techniques. Then, we emphasize the major challenges imposed by the mobile factor. The following section addresses techniques and procedures that can be used at different steps of the design process, with both low- and high-fidelity prototypes. A final section describes how hardware and equipment have been used to support some of the previous techniques, and how software and evaluation frameworks have been developed to complement and support evaluation.
4.1
Usability Evaluation
Usability, according to the ISO 9241-11 (1998) Guidance on Usability document, is "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use." The interpretations of this definition vary and evolve (see, for example, [4, 93] for a discussion on the subject, or the recent standards). Addressing all its facets, particularly the users' satisfaction dimension or even its projection into the broader concept of User Experience as defined by ISO 9241-210 (2010), is a huge and open task and not the objective of this text. Nevertheless, no matter the scope of its interpretation, evaluating a software application's usability is a major part of its development process.
To cope with the demand of evaluating a software system and its UI
during its design process, several studies and techniques (e.g., Heuristic,
Formal, and Empirical methods) have been proposed through the
years [85, 106]. An extensive part of HCI literature addresses this topic
and various guidelines, for each of the aforementioned methods, have
been established, mostly focusing on laboratory evaluation sessions [84, 100]. These guidelines and techniques have also been recurrently used and assessed, resulting in various reports of their weaknesses
and advantages [81, 82, 85]. In [56], the effectiveness of empirical usability testing and individual and team walk-through methods is compared.
The authors discovered that empirical usability evaluation provided up
to four times more relevant information and error detection than the walk-through counterparts.
Nielsen and Phillips conducted a similar study to compare heuristic,
formal, and empirical methods to estimate the relative usability of two
interfaces [85]. The authors defined a specific task and designed two
different UIs that support such task. The first applied method was
the heuristic estimate. The evaluators inspected the prototypes and
created a list of usability problems by noting deviations from accepted
usability principles. Time estimations for accomplishing the prescribed
task were also made. For the formal analysis, 19 students used the
GOMS (Goals, Operators, Methods, and Selection Rules) method [9].
Finally, for the user testing, 20 students tried both prototypes and
were given a two-question subjective satisfaction questionnaire. After
results were compared, the authors concluded that user testing is the best method, while cautioning that laboratory testing is not a perfect predictor of field performance.
A more recent study comparing various usability evaluation techniques has been conducted by Molich et al. [81]. The study reports
the evaluation of a single UI, conducted by various entities using different techniques and resources. Overall, nine different teams participated over the course of about one month. Eight of the teams used a
think-aloud technique, following different approaches; some complemented it with inquiry methods. The remaining team used a questionnaire
and a semi-structured exploration of the product. The results suggest
that both approaches are valid. The quality difference is determined
by the selected tasks and how the methodologies are applied by those
in charge of the tests. Overall, the main conclusion resulting from the
aforementioned studies seems to point to user testing as the most profitable method, providing the most valuable and reliable results [85].
Contrary to walk-through approaches that are generally performed
by designers and expert users, user testing relies on final users’ evaluations and opinions to propel error discovery [56]. Among these, we can
find several empirical techniques, namely,
• think-aloud evaluations;
• user observation;
• scenario-based reviews;
• questionnaire-based evaluation.
Within these techniques, users can either accomplish prescribed
tasks or use the prototype on a self-guided exploration. Prescribed tasks
have shown better results [56] and can usually point users to critical
areas. However, care must be taken not to create biased results that concentrate on specific features and fail to detect serious usability problems. Moreover, taking into account that usability tests depend on the
chosen methodology, on the prescribed tasks and even on the selected
users, the combination of more than one technique is advised. Molich
et al. [81] recommend that a mix of methods should be used in order
to overcome unique flaws that each may contain.
4.2
The Challenge of Mobile Evaluation
The previously mentioned techniques have been successfully used in a variety of situations, for different systems and applications. However, the particular requirements that mobile devices introduce demand further effort and adjustments to these methods [42, 61, 105].
Users' behavior, and their perception, efficiency, and use of mobile applications, are influenced by the number of contexts and the constant change from one to another. These influences need to be understood. As with the comprehension of the real world, the main challenge of mobile evaluation is the ability to obtain data and find issues within real scenarios, especially while changing between contexts, considering the mobility factor. The difference here is that the focus is on the validation and evaluation of the usability or user experience of a mobile
system, product, or UI, which by itself also influences the contexts of
use [36].
The techniques described in Section 4.1 are all valid
and have been used and applied successfully during the design of many
systems. However, most of the existing techniques are focused mainly
on a specific setting and scenario. They rely heavily on the designer’s
ability to observe the user, recognize issues, and understand existing
problems. As they are generally applied to fixed solutions, to be used
within a single context, scenarios are more stable and data gathering
techniques can be easily applied.
With mobile technology this is hardly the case. As discussed before, contexts tend to be dynamic and more complex, and users go through different contexts doing the same or different activities. This influences behavior and efficiency in using mobile applications, and sometimes satisfaction. Consider a user doing a simple task, such as listening to music and building a playlist with an mp3 player while going to work. Leaving home, she goes out into the street, where light conditions change, possibly precluding her control over the music selection. Then, she reaches the stairs and the crowd descending to the subway, requiring some divided attention and possibly slowing down her pace. Finally, as she sits on the single free bench, giving her the opportunity for easier interaction, a train arrives, making so much noise that she cannot properly hear the music selection. This context changeability is generally non-existent in fixed solutions [83].
Given these issues, designers are necessarily forced to follow users
and apply these procedures outdoors, observing and inquiring users
as they go about their activities and tasks, moving through several
scenarios. Although this could provide great results and useful data,
facilitating the analysis of new requirements, it necessarily introduces
yet another set of problems. First, it raises privacy and ethical issues,
depending on the domain for which the solution is being designed [103].
Second, it is extremely difficult to get the agreement of end users to be
followed constantly and monitoring while using their personal devices.
Third and lastly, the effort that such endeavors would imply is far too
great to justify such solution in the majority of cases. In fact, the effort
involved in following users throughout different scenarios is the most
common cause for the few accounts of mobile evaluation [59].
4.3
Techniques for Data Collection
Recent studies have addressed the need to evaluate mobile systems
in realistic scenarios and, most importantly, have investigated emerging techniques that aim to support these procedures [37, 42]. Hagen et al. categorized the currently used techniques into mediated data collection, simulations and enactments, and combinations. In the first, both the users and the technology being used collect usage data in natural settings; in the second, simulations of real settings are used; and in the third, several techniques are combined. These three categories comprise a set of different techniques, which can rely on specific data-collecting hardware or on participants collecting data themselves. In fact, this last approach seems to be gaining momentum [2, 19, 40].
The two following sections describe procedures that can be applied
at different stages of the design and evaluation process as they do not
require working systems or any specific hardware.
4.3.1
Active Data Gathering
Active evaluation and data gathering techniques are those that rely on end users and the evaluation participants to gather data and to participate actively in the evaluation process. This type of evaluation method has been increasingly used; recent techniques have been tried, and successful reports are now available. The main advantage of these approaches is that, given that users can capture information by themselves, designers can minimize the effect of observers on their studies and participants.
One example of such techniques is the Experience Sampling Method
(ESM) [19]. ESM is a technique inherited from the field of psychology,
which relies on questionnaires to gather usage information on natural settings. Participants of evaluation sessions, using mobile devices,
are prompted with questionnaires, providing usability data that can be
later analyzed and statistically evaluated. As shown in [19], where users
were equipped with PDAs that prompted questionnaires at specific
times, the resulting information can be extremely valuable. In general,
the gathered information pertains to feelings, reactions, and positive or negative experiences, all fundamental to assessing the mobile user experience. Furthermore, as designers are not present while the questionnaires are being filled in, the bias associated with observation is reduced.
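The scheduling logic at the heart of such ESM deployments can be sketched as follows. This is a toy illustration of signal-contingent sampling; the function name and the one-prompt-per-waking-hour policy are our own assumptions, not details from [19]:

```python
import random
from datetime import datetime, timedelta

def esm_schedule(day_start, waking_hours=14, seed=None):
    """Draw one random questionnaire prompt per waking hour
    (signal-contingent experience sampling)."""
    rng = random.Random(seed)
    prompts = []
    for hour in range(waking_hours):
        minute = rng.randint(0, 59)  # random minute within each hour
        prompts.append(day_start + timedelta(hours=hour, minutes=minute))
    return prompts

# A participant's day starting at 08:00 yields 14 prompt times, one per hour.
schedule = esm_schedule(datetime(2011, 5, 2, 8, 0), seed=7)
print(len(schedule))  # 14
```

Randomizing the minute within each hour keeps prompts unpredictable to the participant, which is what lets ESM sample experiences in situ rather than at self-selected, and therefore biased, moments.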
Iachello et al. used a similar approach to evaluate a mobile personal
audio application. In addition, they developed an inquiry technique of
their own, called paratype, which is based on the ESM. They claim
that their contribution is the fact that a paratype is used to introduce a simulated interaction with a given technological artifact or
system within a specific setting or social action. A survey (or questionnaire) documents the effects of this combination. In this case, the
used questionnaire's goals are twofold. A first section is presented by an investigator, removing some of the user's responsibility for gathering the information, and contains information about the location, the participants, and the activity taking place. The second portion of the
questionnaire is given to the participant after the initial questionnaire
takes place. As with the ESM, questionnaires are expected to be filled
immediately to increase accuracy of the results; however, here there is
direct intervention from the observer at the initial stage. From their
experiences, the authors claim that their technique offers useful results for mobile and ubiquitous computing, especially for applications that provide information when needed or where interaction is embedded in unpredicted social practice or everyday routine.
In previous work [29], we have also made use of the ESM, especially
during the initial stages of design. Here, to ease the burden on users, we
provided digital equipment (e.g., audio recorders, PDAs) for users to
capture and record their feelings and emotions whenever they encountered any problems or felt that there was some information that they
wanted to provide. This approach allowed us to gather more and richer data, given that it was much easier for the participants to speak than to write down their thoughts or fill in a questionnaire.
Another often-used technique for gathering usability information during mobile evaluation is the diary. Usually called diary studies
[90, 108], and also inherited from psychology, this technique again relies on end users to gather data; they generally annotate particular activities, as these occur, in a paper diary. Diaries are generally simple annotation sheets, sometimes containing spaces for time-stamping, but they can also be highly structured, with pre-defined categories of activities.
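A structured diary entry of the kind just described can be represented as a simple time-stamped record. The category names below are hypothetical, since real studies tailor the categories to their own goals:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical pre-defined activity categories for a structured diary.
CATEGORIES = {"call", "text", "media", "navigation", "other"}

@dataclass
class DiaryEntry:
    timestamp: datetime  # space for time-stamping, as on a paper sheet
    category: str        # one of the pre-defined categories
    note: str            # free-text annotation of the activity

    def __post_init__(self):
        # A structured diary constrains entries to the agreed categories.
        if self.category not in CATEGORIES:
            raise ValueError("unknown category: " + self.category)

entry = DiaryEntry(datetime(2011, 5, 2, 17, 45), "media",
                   "could not hear the playlist over subway noise")
print(entry.category)  # media
```

The trade-off mirrors the paper versions: fixed categories ease later analysis, while the free-text note preserves the richness of unstructured annotation.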
In [90], diary studies were used to capture information under mobile
conditions during two different studies. The first was an investigation
on the adoption and use of mobile technology with 19 novice mobile technology users. The goals were to see how usability affected the use and discovery of handset features, and how this could affect the integration of the telephone into the participants' daily lives. The second was a follow-up study involving a larger number of participants, based on the results of the former study. In their studies, the authors introduced an innovation: in order to register their diary entries, participants were invited, as an option, to use voice mail on the handsets that they were also testing.
Results from this experience were clearly beneficial. In addition to providing a better feel for each participant's relationship with the mobile device, it also allowed the investigators to tailor the subsequent interviews and additional evaluation techniques much more specifically, based on the voice mail messages.
Carter and Mankoff [10] followed a similar approach and tried to understand the effects of using different media as support for diary studies. To do so, three studies were conducted: the first using photos as prompts, the second using phone feedback, and the third comparing photos, audio, and tangible objects. Results from these studies indicate that different types of media suit different studies and different goals. In particular, the authors found that although audio diary studies suffer from recognition problems, they encourage more clandestine capture of events. Tangible objects trigger creativity, while photos are the easiest way to capture and recognize information and events. In general, the main conclusion is that diary studies can provide very good results, especially when tailored and adjusted to the study's goals and supported by different tools and media.
As a consequence, the authors propose a diary study procedure to maximize efficiency and results, comprising five main stages, namely (1) lightweight in situ capture by participants, augmented with (2) lightweight in situ annotation at the time of capture to encourage recall, followed by (3) additional extensive annotation after the initial process, allowing (4) researchers to better structure the data through reviews, and finally (5) a post-study interview.
Brandt et al. address a similar issue and try to lower the burden of diary studies under mobile conditions [7]. Their approach minimizes the effort and time spent annotating and registering diary entries while in the field: a system creates small snippets via text messages, pictures, and voice mail messages, which can later be completed into full diary entries at a convenient time.
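The two-phase capture model behind this approach can be sketched in a few lines of code. The class and field names below are illustrative only, not taken from the authors' actual system:

```python
import time

class SnippetDiary:
    """Two-phase diary: lightweight snippets captured in the field,
    expanded into full entries later at a convenient time."""

    def __init__(self):
        self.snippets = []   # cheap in-the-field captures
        self.entries = []    # completed diary entries

    def capture(self, medium, payload):
        # In the field: record only a lightweight snippet
        # (a text message, a picture reference, or a voice-mail reference).
        self.snippets.append({"time": time.time(), "medium": medium,
                              "payload": payload, "expanded": False})

    def expand(self, index, full_text):
        # Later, at a convenient time: complete the snippet into a full entry.
        snippet = self.snippets[index]
        snippet["expanded"] = True
        self.entries.append({"snippet": snippet, "text": full_text})

diary = SnippetDiary()
diary.capture("sms", "bus stop, menu too deep")
diary.capture("photo", "IMG_0042.jpg")
diary.expand(0, "While waiting for the bus I tried to change a setting; "
                "it took five menu levels and I gave up after a minute.")
print(len(diary.snippets), len(diary.entries))  # 2 snippets, 1 full entry
```

The design point is that the in-field operation stays as cheap as sending a message, while the costly annotation is deferred.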
A slightly more focused method is also becoming increasingly common. In [29, 116] the concept of contextual questionnaires is described. Vaatja and Roto propose a set of guidelines on how to create questionnaires that can be used on handheld devices. The main suggestions include minimizing the total time to complete the questionnaire, including the time to access, read, and submit it, and limiting its length. Additional guidelines focus on the usability and screen layout of the actual questionnaires. In our own approach, we propose contextual questionnaires that contain specific questions aimed at particular characteristics of the locations in which the evaluation takes place [29]. Although this might imply that the designers know beforehand where the evaluation and interaction will take place, we tackle this issue by also suggesting the inclusion of items and questions targeted at generic context dimensions, in particular some of those discussed in previous sections of this monograph. For instance, if designing a software application for a sport activity, the questionnaire can contain questions regarding the impact of sweat or movement when interacting with the device or UI. These focused questions provide additional information on details that may pass unnoticed in a brief experience but that designers know may cause issues later in the usage experience.
4.3.2 Passive Data Gathering
In contrast to the aforementioned techniques, passive data gathering approaches require no intervention from the user/participant [33]. As already discussed, traditional approaches rely strongly on user observation. Within mobile settings, however, it is much more difficult to observe a user interacting with a mobile application while behaving as usual and moving through different contexts, especially without compromising the user's privacy and normal endeavors.
As a consequence, not many accounts of user observation within mobile settings can be found in the literature. Nevertheless, some reports on successful observations for mobile evaluation exist [75]. In Patel et al.'s work [91], 12 participants engaged in an experiment to evaluate a system for image searching and browsing on mobile devices. During these tests, participants were observed and the "think aloud" approach was used. Although these observations yielded very positive results and feedback, very little is said about the settings, contexts, or actual mobility involved in their experiments.
Salovaara et al. also report findings on conducting observation within mobile settings, especially inter-group studies of mobile group applications [103]. To overcome difficulties such as those mentioned above, in addition to those stemming from group interactions (e.g., the number of participants and their locations), the authors suggest combining logging mechanisms with mixed observation procedures: one researcher observes interactions from a remote location and notifies other researchers in the field about interesting events. In the two case studies described in their work, the results proved promising and the approach allowed the gathering of data on both individual use and group interactions, especially the use of media in collocated and collective settings.
As with Salovaara’s work, there are several accounts on the usage
of logging mechanisms to support mobile evaluation [1, 54, 98] especially given that provides a great deal of information (depending on
the logged data, naturally) without any added effort for designers and
participants. Additionally, as long as allowed by participants, it permits designers to capture information within contexts that would be
otherwise inaccessible given to either ethical or privacy issues. The use
of logging will be further addressed in the following section.
Inspired by a growing trend of applications that exist for higher-fidelity prototypes, in our previous work we proposed a non-technological logging mechanism [29]. By using low-fidelity prototypes with rigid structures, and providing participants and end users with pens and pencils of different colors, it is possible to track navigation paths, behaviors towards the UI, and even usability issues regarding UI elements such as button and graphical element dimensions. Users are requested to interact with the mobile device using different colored pens for different tasks (i.e., simulating a stylus, which is particularly useful for Tablet PCs or stylus-based PDAs). Researchers can review the sketches and prototypes and detect the number and locations of taps for each task and activity. The same procedure can also be used across different contexts. For instance, if a user employs a particular color while walking and interacting with the prototype, and a different color in other situations, researchers can compare performance and usability issues across those contexts. This technique allows researchers to gather logs without using technology, without a working system (i.e., using low-fidelity prototypes), and without following the end user while he/she interacts with a prototype.
4.4 Technology-aided Mobile Evaluation
Because of the additional challenges that mobile evaluation introduces, the techniques usually applied to fixed technologies (e.g., user observation, WOz simulations) are demanding when applied in realistic settings and moving contexts, sometimes even inadequate, and frequently avoided [61].
To tackle these challenges and overcome some of the existing issues, recent experiences and research aiming at further improving and supporting mobile evaluation have introduced technological methods to gather usage data remotely through active (e.g., ESM, diary studies) and passive modes (e.g., logging), enabling evaluation in real settings. These can be divided into two main facets, namely the use of hardware and evaluation kits, and the use of software and evaluation platforms.
4.4.1 Hardware and Evaluation Equipment
Recently, some evaluation procedures that take advantage of specific hardware have been experimented with [46, 52, 76, 114], with interesting results. In [52], the authors present an experience where users were filmed and followed while using a mobile device. As expected, the researchers' presence and the public environment affected the user experience. Alternatively, this work relied on users to capture images of other users while they utilized the applications. As most of the photographs provided little information when analyzed individually, some users also provided annotations of the events in which the photos were taken. Still, the gathered material provided insufficient information about usability issues.
The authors then suggest a new approach called the "experience clip." The method is based on the use of a camera phone and a PDA: the PDA contains the application being evaluated, whereas the camera phone is used to capture clips that seem relevant to users. The experience demonstrated that the technique is able to provide rich data about user emotions and feelings. However, users have to be well motivated and clearly instructed on what they are supposed to capture. Furthermore, the use of two devices generally requires the participation of more than one user, and, similar to what was discovered in the first experience that motivated the study, the presence of a second person might itself change and influence the evaluation results.
Ripcord, already discussed in the previous section (Prototyping Tools), also supports evaluation [63]. The system includes a kit for data capture in the wild, which connects the device in use, through a Bluetooth connection, to a mobile server kept in a backpack carried by the user. Prototypes used with the system are rendered on the mobile server, which transmits information directly to the target device. Although no video is captured, the mobile server collects and stores all available data from the target device for remote supervision or posterior analysis.
In [65], the authors describe their experiences with the evaluation of multimodal UIs in various contexts. Although most of the experiments took place within simulated environments (e.g., a car-driving context in the laboratory, a walking context in an office environment), the authors applied a series of interesting methods to gather information using digital equipment. In one of the experiments, undertaken in a walking scenario while the user moved through various open office areas (although followed by a moderator), interaction was captured using two small cameras. One camera, controlled by the observer following the user, captured the user's interaction with the mobile device, whereas the second, placed on the user's shoulder, captured information about the context. Both cameras were connected to a laptop that the user was carrying.
We have also successfully used a similar approach, reported in [29], which did not require users to be followed by moderators or researchers. In order to capture information on both the context and the user's interaction with the device, we used a small camera, also connected to a laptop in the user's backpack, which was placed on the user's head, attached to a hat. With this placement, the camera captured video of the user's interaction with the mobile device, including the device's screen, as well as information regarding interruptions, conversations, obstacles, and the context in general. The ability to capture audio allowed the design team to ask users, whenever necessary, to use the "think aloud" method, capturing both the video and the users' thoughts and opinions.
In the same work [29], we also describe experiences in using the same equipment and setup to conduct remote evaluation sessions. Given that the laptop had network connectivity (e.g., Wi-Fi or 3G), simple video conference software allowed us to monitor whatever was happening during the user experience in real time, without following the user around. This kit can also be used with low-fidelity prototypes, especially considering its low cost and its use of traditional desktop equipment available to most researchers (e.g., laptop, backpack, hat, and webcam). This type of setup is usually called a mobile or living lab [113].
Mobile labs, i.e., the use of equipment capable of recording information and usage data without the designer's or moderator's presence, have also been studied by Oulasvirta et al. In [88], the authors created a high-quality video capturing kit with a set of cameras, using data transmission to monitor usage interaction. Although much more expensive, the kit is composed of lightweight equipment that captures high-quality video over extended periods of time. The kit's cameras are placed on the device, to capture the user's expressions; on the user's head, to capture interaction with the device and its screen; and on a necklace, to capture information about the context. The majority of the remaining equipment was placed on a belt, which increased the flexibility of the equipment as well as the user's freedom of movement. Furthermore, captured data was automatically sent through wireless connections to remote locations.
Similar approaches have also been studied by Thompson et al.
in [114] and suggested by Lyons and Starner [78], Hummel et al. [51],
and Reichl et al. [96].
On a more technical level, Lukander proposes a system that is able to measure gaze point on mobile devices [76]. Although this technique and system alone do not provide sufficient usability details and data to evaluate a mobile application, their use in concert with other techniques can provide interesting results. For the moment, the equipment does not allow for real-world testing. Yet, it brings the potential of a powerful technique, traditionally used for desktop and web usability evaluation, to mobile devices.
4.4.2 Software for Mobile Evaluation
On a different facet, researchers and developers have also been introducing frameworks that support mobile evaluation using, most of the time, the actual devices on which the software is being tested [32, 41, 73]. In these approaches, the frameworks instrument specific prototypes or software in general and provide support for both active and passive data gathering.
A good example of such systems is MyExperience [41], which provides support for remote data gathering. With this system, user activities on mobile phones are logged and stored on the device, and then synchronized depending on connection availability. The logging mechanism detects several events triggered by the system or by the user (e.g., software events such as changing applications or incoming calls, and hardware events such as key presses or microphone use), and active evaluation techniques can be triggered according to contextual settings. This combination provides researchers and designers with both quantitative data, from the logs, and qualitative data, collected from users through the active evaluation techniques (e.g., experience sampling). In a set of case studies with researchers working with mobile technology, the main reported benefits were (1) the system's ability to trigger actions in response to a large range of events and combinations thereof (around 140 events); (2) the ease of setting up the system for experience studies on cell phone applications; (3) the richness of the collected information; and (4) the expandability of the structure used to store information.
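The core pattern here, passive logging combined with event-triggered active probes, can be sketched as follows. This is a simplified illustration of the general idea, not MyExperience's actual API; all names are invented:

```python
# Passive logging plus event-triggered active sampling, in the spirit of
# systems such as MyExperience (structure and names are illustrative).

class EvaluationRuntime:
    def __init__(self):
        self.log = []        # passive data: every observed event
        self.triggers = []   # (predicate, action) pairs
        self.prompts = []    # active probes fired so far

    def on_trigger(self, predicate, action):
        # Register an active technique (e.g., an ESM questionnaire)
        # to fire whenever an event matches the predicate.
        self.triggers.append((predicate, action))

    def record(self, event):
        self.log.append(event)                      # passive gathering
        for predicate, action in self.triggers:
            if predicate(event):
                self.prompts.append(action(event))  # active gathering

rt = EvaluationRuntime()
# Fire a short questionnaire whenever an incoming call ends.
rt.on_trigger(lambda e: e["type"] == "call_ended",
              lambda e: f"ESM probe: how was audio quality at {e['time']}?")

rt.record({"type": "key_press", "time": "10:01"})
rt.record({"type": "call_ended", "time": "10:05"})
print(len(rt.log), len(rt.prompts))  # both events logged, one probe fired
```

Quantitative data accumulates in `log` regardless of triggers, while qualitative probes fire only when contextual conditions match, mirroring the active/passive split described above.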
The Momento [102] system also provides means to conduct mobile
evaluation. The system relies on text messaging and media messaging to distribute data from users and their devices to experimenters.
It gathers usage information and prompts questionnaires as required,
sending them to a server where an experimenter manages the received
data through a desktop GUI. In addition, this remote experimenter
is also able to adjust the experiment on the go, responding to participants’ requests or asking them to manually record or collect specific
data. Momento was used in three different experiments, and the results have been very positive, allowing designers and researchers to conduct in situ studies with participants without following them and, at the same time, to manage the experiment and gather rich evaluation data.
Waterson et al. also developed a remote logging and analysis tool for mobile remote usability [118]. Their system uses clickstream logging and visualization: logging is achieved through a proxy, and results are then viewed through a UI that supports zooming and filtering, displaying the most visited content and the common paths between pages.
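Extracting "most visited content and common paths" from a clickstream log amounts to counting page visits and consecutive page-to-page transitions. A minimal sketch (not Waterson et al.'s implementation):

```python
from collections import Counter

def summarize_clickstream(sessions):
    """sessions: list of page sequences, one per user session.
    Returns the most visited pages and the most common transitions."""
    page_counts = Counter()
    transitions = Counter()
    for pages in sessions:
        page_counts.update(pages)
        transitions.update(zip(pages, pages[1:]))  # consecutive page pairs
    return page_counts.most_common(3), transitions.most_common(3)

sessions = [
    ["home", "search", "results", "item"],
    ["home", "search", "results", "search"],
    ["home", "item"],
]
top_pages, top_paths = summarize_clickstream(sessions)
print(top_pages[0])  # the most visited page with its count
print(top_paths[0])  # the most common page-to-page transition
```

A proxy-based logger would feed `sessions` from intercepted requests; the same counts then drive the zooming and filtering visualization.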
As briefly introduced, our own framework, MobPro, also extends its prototyping features into the evaluation process [24, 31]. Prototypes developed with the framework include a runtime environment that keeps a detailed description of every event, or a selected range of events, that occurred while the prototype was used. The selection criteria (e.g., event type, modality) enable configurable data gathering that can be focused on the evaluation goals without adding effort or requiring user intervention. The framework also supports the integration of questionnaires within the prototypes, together with means to define specific conditions or settings in which these questionnaires can or should be presented to users. This technique, if well used, supports intelligent ESM and pro-active diary studies, as usability questionnaires can be prompted not only by time but also by location or behavior triggers. These questionnaires also share the multimodal approach of the framework, thus being able to gather textual, audio, and video information.
The framework also includes a log-player tool. It resembles a movie player, re-enacting every action that took place while the user was interacting with the prototype. The log-player is equipped with a communication module that allows it to be connected to another device while a user is interacting with a prototype. Here, the logging mechanism forwards every event to the monitoring device, allowing the designer to remotely review, in real time, the user's interaction with the prototype. For instance, if the prototype is running on a smart phone with a 3G connection or within range of a Wi-Fi network, the designer is able to monitor and gather data in real time on the user's interaction with the prototype directly on his/her desktop computer or even on another mobile device.
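At its core, re-enacting a recorded interaction means replaying timestamped events in chronological order through some handler (a UI renderer, or a forwarder to a remote monitoring device). The sketch below illustrates that idea only; it is not MobPro's implementation, and all names are invented:

```python
class LogPlayer:
    """Replays recorded UI events in timestamp order, movie-player style,
    invoking a handler for each event (e.g., to redraw the UI or forward
    the event to a remote monitoring device)."""

    def __init__(self, events):
        # events: list of dicts with 'time' (seconds) and 'action'
        self.events = sorted(events, key=lambda e: e["time"])

    def replay(self, handler, speed=1.0):
        # speed > 1.0 fast-forwards; a real player would also sleep
        # between events, omitted here to keep the sketch testable.
        for event in self.events:
            handler(event["time"] / speed, event["action"])

recorded = [
    {"time": 2.0, "action": "tap:submit"},
    {"time": 0.5, "action": "tap:menu"},
    {"time": 1.2, "action": "scroll:down"},
]
trace = []
LogPlayer(recorded).replay(lambda t, a: trace.append(a))
print(trace)  # actions re-enacted in chronological order
```

For live remote monitoring, the same handler interface would push each event over the network instead of appending it locally.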
4.5 Section Overview
The main conclusion that can be extracted from the various research works and experiences available in the literature is that choices and guidelines for mobile evaluation are gradually being developed and made available by researchers and practitioners. Although taking evaluation experiments into the wild, within real contexts, is still somewhat rare, mainly because of the added difficulties in selecting proper contexts and conducting user observation within mobile settings, new techniques and approaches have been emerging, providing different paths for designers and researchers to follow.
On the one hand, we can find a trend that adapts previous data gathering and evaluation techniques, mostly used in fixed settings, which have slowly been adjusted to mobile needs. In addition, researchers have been experimenting with techniques inherited from other fields, such as psychology, which rely on users to gather information by themselves, mitigating the need for observation and user shadowing. These procedures, although placing an added burden on users, allow designers to reach a larger number of participants and facilitate qualitative, although subjective, data collection within settings that would otherwise probably be unreachable.
On the other hand, the current literature clearly shows that designers are, more often than ever, taking advantage of the same technology that they are trying to evaluate to take the evaluation process out of the laboratory. Even within the use of technology, one can find two separate directions. The first places a strong emphasis on enabling user observation through remote monitoring, using equipment such as sensors, microphones, video cameras, and a wide variety of mobile data-capturing equipment. This trend is often referred to as mobile laboratories. These can collect data that is directly and automatically sent to researchers or experimenters, who review what is going on in real time from a remote location, or the data can be stored for posterior analysis.
Moreover, as an addition or extension to what is happening with prototyping, technology has also been used to create new software that supports evaluation endeavors. Here, different types of frameworks and systems support and enhance both active and passive data gathering techniques by logging events or by prompting questionnaires and surveys in response to specific events.
In summary, mobile evaluation is rapidly evolving, and the number of trends and examples within each current shows that, despite all the difficulties that taking evaluation procedures out of the lab implies, the research community is living up to the challenge and gradually overcoming the problems that motivated the works described in this section.
5 Conclusions
Mobile devices are rapidly taking over the digital and interactive world.
Their role within our lives is growing in diversity and importance.
As our need to use them for a wide variety of activities increases, the need to use them easily, as tools that enhance our performance in those same activities, increases proportionally. Faced with this scenario, brands, developers, engineers, and designers have to continuously improve not only the features and functionalities of the available technology but also the way we interact with it.
As they present a significant shift from the desktop metaphor
and usage paradigm, the traditional methods and techniques used to
develop and design desktop software and UIs have to be reconsidered,
reassessed, and, many times, reinvented.
In this monograph, we discussed the evolution of user-centered methodologies and techniques, propelled by the need to cope with the added challenges that mobile technology, and the new usage paradigm it entails, have been posing to designers and engineers. In particular, we have focused on the two stages that present the biggest challenges but are also the most important for the design and validation of mobile interaction, namely the prototyping and evaluation stages. As a necessary bootstrapping point, we also discussed the need to understand context. Context is arguably the most influential factor in the uniqueness of the mobile user experience, and identifying which variables matter, and how they should be integrated into the design and evaluation process, is paramount.
As the literature gradually blooms and research experiences for mobile devices continue to take place, new techniques, approaches, and tools have been emerging. Some of the literature addressed in this monograph has focused on the definition of context. In particular, we discussed work that deals with the complexity of the world and tries to map it into modular concepts that can be easily verified and integrated into scenarios. These efforts also aim at supporting the arduous task of emulating mobile contexts within controlled settings, while others have placed their efforts into confirming that mobile design and evaluation should take place in the locations and settings in which end users will use the resulting products.
Following this last approach, the second topic of this monograph dealt with the updating of prototyping techniques, which are necessary tools for the consequent evaluation process. Here, the available literature seems to agree that prototypes, either low- or high-fidelity, should be tested within realistic settings and, for that purpose, should be developed in a way that supports the out-of-the-lab evaluation process. Researchers have pointed out that for low-fidelity prototypes different materials can be used, and care must be taken to differentiate the physical prototype from the UI prototype, emulating, in both cases, the actual device and software. An alternative, perhaps more advanced, trend has been to develop prototyping software, which overcomes the inherent limitations of paper-based prototypes by combining easy creation and editing mechanisms with the interactivity and ability to be used and tested on actual devices, and thus within realistic contexts.
Finally, as the culminating point of the previous stages, the last section of this monograph addressed the evaluation of mobile applications and UIs. Naturally, as the glue between the two previous topics, the advances in mobile evaluation are closely tied to research developments regarding both the meaning and importance of context for mobile interaction design and the prototyping techniques that provide the working and testing material used during evaluation experiments.
For this purpose, a large corpus of research is being made available, and both techniques and case studies have now introduced significant advances for mobile evaluation. The new techniques discussed in this monograph focus, on the one hand, on adjustments to previous procedures and equipment so that they support out-of-the-lab data collection without added effort and without corrupting the free mobile settings in which evaluation takes place. On the other hand, in a similar manner to what happens with prototyping, they include new tools and frameworks that support several data collection techniques and allow designers and engineers to glimpse what takes place between the user and the interface, in situ, without their physical presence hindering the process.
In general, these three topics and the works presented and discussed here unveil prosperous times, as new approaches, tools, and scenarios keep appearing and, although introducing new challenges, also present new opportunities for research and advances within the mobile interaction design field.
5.1 The Future of Mobile Design and Evaluation
Mobile technology has advanced greatly. As unexpected and overwhelmingly fast as this evolution may have been, the previous sections of this monograph clearly show a growing trend and investment in research and in practical examples of how to create, through user-centered design approaches, UIs and experiences for mobile devices.
Nevertheless, there is still much ground to cover. As social networks, navigation, and location-enhanced applications continue to grow in number, cooperation and group activities also assume a greater role as target services for mobile devices. Naturally, these require special attention to a wide set of factors, some covered by the works discussed here, but others, such as the ability to evaluate multiple interactions within groups, collaboration in the real world, and the impact of mobile devices on group behavior, are challenges that have not yet been completely tackled.
In addition to these services and social phenomena, an increasingly relevant movement is that of creating personalized applications, which adjust to users and not exclusively to the surrounding location and external context facets. Here, care must be given not only to understanding the interaction context but also to how the software and UIs react to the changes that might take place. Moreover, as design and development tools are made available to everyone, and successes such as the App Store and Android Market show that they are here to stay, attention must also be paid to guidelines and ways to advise end users who create their own tools, for themselves and for others, so that these provide meaningful and useful experiences.
As a complement to already used hardware, and sometimes in concert with context-aware technology, sensors will most likely play a crucial role in mobile interaction design. Certainly, this role will focus on the use of sensors to promote adaptation and awareness, but also as a means to gather further information, possibly physiological, during evaluation studies and in situ experiments.
Finally, a research field of utmost importance that advances hand in hand with mobile interaction and usability is mobile accessibility. As mobile devices reach more and more users, from different cultures, locations, and backgrounds, and with different abilities, the evaluation of mobile software and UIs has to continuously add accessibility concerns to the ongoing research discussed in this monograph.
In conclusion, the future of mobile interaction is bright and, considering the pace at which it evolves, whether through end-user-developed software, new services, or new interactive technology, design, prototyping, and evaluation techniques and procedures will certainly continue striving to keep up with the highly prolific research and development community.
References
[1] M. Allen, J. McGrenere, and B. Purves, “The field evaluation of a mobile
digital image communication application designed for people with aphasia,”
ACM Transactions on Accessible Computing, vol. 1, no. 1, Article 5, May 2008,
ACM.
[2] J. Axup, N. J. Bidwell, and S. Viller, “Representation of self-reported information usage during mobile field studies: Pilots & Orienteers 2,” OzCHI,
Wollongong, Australia, 2004.
[3] Barnard, et al., “Capturing the effects of context on human performance
in mobile computing systems,” Personal and Ubiquitous Computing, vol. 11,
no. 46, pp. 81–96, 2007.
[4] N. Bevan, “What is the difference between the purpose of usability and user
experience evaluation methods?,” UXEM’09 Workshop, INTERACT 2009,
Uppsala, Sweden, 2009.
[5] H. Beyer and K. Holtzblatt, Contextual Design: Customer Centered Approach
to Systems Design. San Francisco, CA, USA: Academic Press, 1998.
[6] S. Bodker and J. Buur, “The design collaboratorium — A place for usability
design,” Transactions on Computer Human-Interaction, vol. 9, no. 2, pp. 152–
169, 2002, ACM.
[7] J. Brandt, N. Weiss, and S. R. Klemmer, “txt 4 l8r: Lowering the burden
for diary studies under mobile conditions,” CHI 2007 Extended Abstracts,
pp. 2303–2308, San Jose, California, USA: ACM, April 28–May 3, 2007.
[8] C. Burns, E. Dishman, B. Verplank, and B. Lassiter, “Actors hair-dos
and videotape: Informance design; using performance techniques in multidisciplinary, observation based design,” in CHI94 Conference Companion
4/94, Boston, MA, 1994, ACM.
[9] S. K. Card, T. P. Moran, and A. Newell, The Psychology of Human-Computer
Interaction. Hillsdale, New Jersey: Erlbaum Associates, 1983.
[10] S. Carter and J. Mankoff, “When participants do the capturing: The role of
media in diary studies,” in Proceedings of CHI 2005, Portland, Oregon, USA:
ACM, April 27 2005.
[11] S. Carter and J. Mankoff, “Prototypes in the wild: Lessons learned from evaluating three ubicomp systems,” IEEE Pervasive Computing, vol. 4, pp. 51–57,
2005, IEEE.
[12] M. Chalmers and A. Galani, “Seamful interweaving: Heterogeneity in the theory and design of interactive systems,” in Proceedings of the 5th Conference
on Designing Interactive Systems: Processes, Practices, Methods, and Techniques, DIS ’04, pp. 243–252, New York, NY, USA: ACM, 2004.
[13] C. D. Chandler, G. Lo, and A. K. Sinha, “Multimodal theater: Extending low
fidelity paper prototyping to multimodal applications,” in CHI2002 Extended
Abstracts, CHI 2002, Minneapolis, Minnesota, USA: ACM, 2002.
[14] C. Cheong, D.-C. Kim, and T.-D. Han, “Usability evaluation of designed image
code interface for mobile computing environment,” in Proceedings of the 12th
International Conference on Human–Computer Interaction: Interaction Platforms and Techniques (HCI’07), (J. A. Jacko, ed.), pp. 241–251, Berlin, Heidelberg: Springer-Verlag, 2007.
[15] P. Cohen, C. Swindells, S. Oviatt, and A. Arthur, “A high-performance dual-wizard infrastructure for designing speech, pen, and multimodal interfaces,” in
Proceedings of the 10th International Conference on Multimodal Interfaces
(ICMI ’08), pp. 137–140, New York, NY, USA: ACM, 2008.
[16] B. Collignon, J. Vanderdonckt, and G. Calvary, “An intelligent editor for
multi-presentation user interfaces,” in Proceedings of SAC08 — Symposium on Applied Computing, Fortaleza, Ceara, Brazil: ACM, March 16–20
2008.
[17] K. H. Connelly, Y. Rogers, K. A. Siek, J. Jones, M. A. Kraus, S. M. Perkins,
L. L. Trevino, and J. L. Welch, “Designing a PDA interface for dialysis patients
to monitor diet in their everyday life,” in Proceedings of the 11th Human–
Computer Interaction International Conference, HCII05, Las Vegas, USA,
2005.
[18] S. Consolvo, I. E. Smith, T. Matthews, A. LaMarca, J. Tabert, and P. Powledge,
“Location disclosure to social relations: Why, when and what people want to
share,” in Proceedings of CHI 2005, pp. 81–90, Portland, Oregon, USA: ACM,
2005.
[19] S. Consolvo and M. Walker, “Using the experience sampling method to
evaluate ubicomp applications,” IEEE Pervasive Computing, vol. 2, no. 2,
pp. 24–31, 2003.
[20] A. Coyette, S. Kieffer, and J. Vanderdonckt, “Multi-fidelity prototyping of
user interfaces,” in Proceedings of INTERACT 2007, pp. 150–164, LNCS 4662,
Part I, IFIP, 2007.
[21] A. Coyette and J. Vanderdonckt, “A sketching tool for designing any user,
any platform, anywhere user interfaces,” in Proceedings of INTERACT 2005,
pp. 550–564, LNCS 3585, IFIP, 2005.
[22] R. C. Davis, T. S. Saponas, M. Shilman, and J. A. Landay, “SketchWizard:
Wizard of Oz prototyping of pen-based user interfaces,” in UIST ’07: Proceedings of the 20th Annual ACM Symposium on User Interface Software and
Technology, pp. 119–128, Newport, Rhode Island, USA: ACM, 2007.
[23] M. de Sá and L. Carriço, “Low-fi prototyping for mobile devices,” in Proceedings of CHI’06 (Extended Abstracts), pp. 694–699, Montreal, Canada: ACM,
2006.
[24] M. de Sá and L. Carriço, “An evaluation framework for mobile user interfaces,” in Proceedings of INTERACT 2009, the 12th IFIP TC13 Conference
in Human–Computer Interaction, Uppsala, Sweden: Springer-Verlag, Lecture
Notes in Computer Science, 2009.
[25] M. de Sá and L. Carriço, “A mobile tool for in-situ prototyping,” in Proceedings of MobileHCI09, the 11th International Conference on Human–Computer
Interaction with Mobile Devices and Services, Bonn, Germany: ACM Press,
2009.
[26] M. de Sá and L. Carriço, “Defining scenarios for mobile design and evaluation,” in Proceedings of CHI’08, SIGCHI Conference on Human Factors
in Computing Systems (Extended Abstracts), pp. 2847–2852, Florence, Italy:
ACM Press, April 5–10, 2008.
[27] M. de Sá and L. Carriço, “Lessons from early stages design of mobile applications,” in Proceedings of MobileHCI’08, pp. 127–136, Amsterdam, Netherlands: ACM Press, September 2008.
[28] M. de Sá, L. Carriço, and P. Antunes, “Ubiquitous psychotherapy,” IEEE
Pervasive Computing, vol. 6, no. 1, pp. 20–27, IEEE Press, 2007.
[29] M. de Sá, L. Carriço, and C. Duarte, Mobile Interaction Design: Techniques
for Early Stage In-Situ Design. Advances in Human-Computer Interaction,
Chapter 10. Vienna, Austria, EU: I-Tech Education and Publishing, October
2008, ISBN 978-953-7619-14-5.
[30] M. de Sá, L. Carriço, L. Duarte, and T. Reis, “A mixed-fidelity prototyping tool for mobile devices,” in Proceedings of AVI’08, International Working
Conference on Advanced Visual Interfaces, pp. 225–232, Napoli, Italy: ACM
Press, 2008.
[31] M. de Sá, L. Carriço, L. Duarte, and T. Reis, “A framework for mobile evaluation,” in Proceedings of CHI’08, SIGCHI Conference on Human Factors
in Computing Systems (Extended Abstracts), pp. 2673–2678, Florence, Italy:
ACM Press, April 5–10, 2008.
[32] M. de Sá, C. Duarte, L. Carriço, and T. Reis, “Designing mobile multimodal
applications,” in Multimodality in Mobile Computing and Mobile Devices:
Methods for Adaptable Usability, (S. Kurkovsky, ed.), pp. 106–136, IGI Global,
November 2009.
[33] R. Demumieux and P. Losquin, “Gather customer’s real usage on mobile
phones,” in Proceedings of the 7th International Conference on Human–Computer Interaction with Mobile Devices and Services (MobileHCI ’05),
pp. 267–270, New York, NY, USA: ACM, 2005.
[34] A. K. Dey, G. D. Abowd, and D. Salber, “A conceptual framework and a
toolkit for supporting the rapid prototyping of context-aware applications,”
Human–Computer Interaction, vol. 16, no. 2, pp. 97–166, December 2001.
[35] A. Dix, J. Finlay, G. Abowd, and R. Beale, Human–Computer Interaction.
Prentice Hall, 2nd ed., 1998.
[36] P. Dourish, “What we talk about when we talk about context,” Personal
Ubiquitous Comput., vol. 8, pp. 19–30, February 2004.
[37] H. B.-L. Duh, G. C. B. Tan, and V. H. H. Chen, “Usability evaluation for
mobile device: A comparison of laboratory and field tests,” in Mobile HCI’06,
Finland: ACM, 2006.
[38] L. Frishberg, “Presumptive design, or cutting the looking-glass cake,” Interactions, vol. 8, no. 1, pp. 18–20, ACM, 2006.
[39] N. Frishberg, “Prototyping with junk,” Interactions, vol. 8, no. 1,
pp. 21–23, ACM, 2006.
[40] J. Froehlich, M. Chen, I. Smith, and F. Potter, “Voting with your feet: An
investigative study of the relationship between place visit behavior and preference,” in UbiComp 2006, Orange County, California, USA, 2006.
[41] J. Froehlich et al., “Myexperience: A system for in situ tracing and capturing
of user feedback on mobile phones,” in MobiSys’07, pp. 57–70, ACM, 2007.
[42] P. Hagen et al., “Emerging research methods for understanding mobile technology use,” in Proceedings of OZCHI 2005, Canberra, Australia, 2005.
[43] J. Halloran et al., “Unfolding understandings: Co-designing UbiComp in situ,
over time,” in Proceedings of DIS 2006, pp. 109–118, University Park, Pennsylvania, USA: ACM, June 26-28 2006.
[44] B. Hanington, “Interface in form: Paper and product prototyping for feedback
and fun,” Interactions, vol. 8, no. 1, pp. 28–30, ACM, 2006.
[45] B. Hartmann, S. Doorley, S. Kim, and P. Vora, “Wizard of Oz sketch animation for experience prototyping,” in Proceedings of Ubicomp, 2006.
[46] R. T. Høegh, J. Kjeldskov, M. B. Skov, and J. Stage, “A field laboratory for
evaluating in situ,” in Handbook of Research on User Interface Design and
Evaluation for Mobile Technology, (J. Lumsden, ed.), IGI Global, 2008.
[47] P. Holleis and A. Schmidt, “MakeIt: Integrate user interaction times in the
design process of mobile applications,” in Proceedings of the 6th International
Conference on Pervasive Computing (Pervasive ’08), (D. J. Patterson, T. Rodden, M. Ott, and J. Indulska, eds.), pp. 56–74, Berlin, Heidelberg: SpringerVerlag, 2009.
[48] L. E. Holmquist, “Generating ideas or cargo cult designs?,” Interactions,
vol. 7, no. 2, pp. 48–54, ACM, 2005.
[49] K. Holtzblatt, J. B. Wendell, and S. Wood, Rapid Contextual Design: A HowTo Guide to Key Techniques for User-Centered Design. San Francisco, CA,
USA: Morgan Kaufmann, 2005.
[50] S. Hulkko et al., “Mobile probes,” in NordiCHI’04, ACM, 2004.
[51] K. A. Hummel, A. Hess, and T. Grill, “Environmental context sensing for
usability evaluation in mobile HCI by means of small wireless sensor networks,”
in Proceedings of MoMM 2008, Linz, Austria: ACM, November 24–26 2008.
[52] M. Isomursu, K. Kuutti, and S. Vainamo, “Experience clip: Method for user
participation and evaluation of mobile concepts,” in Proceedings of Participatory Design Conference 2004, Toronto, Canada: ACM, 2004.
[53] M. Jones and G. Marsden, Mobile Interaction Design. West Sussex, England:
John Wiley & Sons Ltd, 2006.
[54] Y. Jung, P. Persson, and J. Blom, “DeDe: Design and evaluation of a context-enhanced mobile messaging system,” in Proceedings of CHI 2005, pp. 351–360,
Portland, Oregon, USA: ACM, April 27 2005.
[55] E. Kangas and T. Kinnunen, “Applying user-centered design to mobile application development,” Communications of the ACM, vol. 48, pp. 55–59, 2005.
[56] C.-M. Karat, R. Campbell, and T. Fiegel, “Comparison of empirical testing and walkthrough methods in user interface evaluation,” in Proceedings of
CHI’92, pp. 397–404, Monterey, California, USA: ACM Press, 1992.
[57] A. K. Karlson and B. B. Bederson, “One-handed touchscreen input for legacy
applications,” in Proceedings of CHI 2008, pp. 1399–1408, Florence, Italy:
ACM, April 5-10 2008.
[58] J. F. Kelley, “An iterative design methodology for user-friendly natural language office information applications,” Transactions on Office Information
Systems, vol. 2, no. 1, pp. 26–41, ACM, 1984.
[59] J. Kjeldskov and C. Graham, “A review of mobile HCI research methods,”
in Mobile HCI’03, vol. 2795/2003, pp. 317–335, Berlin/Heidelberg: Springer,
2003.
[60] J. Kjeldskov, M. B. Skov, B. S. Als, and R. T. Høegh, “Is it worth the hassle?
Exploring the added value of evaluating the usability of context-aware mobile
systems in the field,” in Proceedings of 6th International Symposium on Mobile
Human-Computer Interaction (MobileHCI’04), pp. 61–73, Glasgow, Scotland,
September 13–16 2004.
[61] J. Kjeldskov and J. Stage, “New techniques for usability evaluation of mobile
systems,” International Journal of Human Computer Studies, Elsevier, 2003.
[62] S. R. Klemmer, A. K. Sinha, J. Chen, J. A. Landay, N. Aboobaker, and
A. Wang, “Suede: A Wizard of Oz prototyping tool for speech user interfaces,”
in UIST ’00: Proceedings of the 13th Annual ACM Symposium on User Interface Software and Technology, pp. 1–10, San Diego, California, United States:
ACM, 2000.
[63] M. Krauß and D. Krannich, “ripcord: Rapid interface prototyping for cordless
devices,” in MobileHCI’06, pp. 187–190, Helsinki, Finland: ACM Press,
2006.
[64] J. A. Landay, “Interactive sketching for the early stages of user interface
design,” PhD thesis, School of Computer Science, Computer Science Division, Carnegie Mellon University, Pittsburgh, PA, 1996.
[65] S. Lemmela, A. Vetek, K. Makela, and D. Trendafilov, “Designing and evaluating multimodal interaction for mobile contexts,” in ICMI’08, pp. 265–272,
Chania, Crete, Greece: ACM, October 20–22, 2008.
[66] Y. Li, J. I. Hong, and J. A. Landay, “Topiary: A tool for prototyping location-enhanced applications,” in Proceedings of UIST’04, Santa Fe, New Mexico,
USA, October 24–27, 2004.
[67] Y. Li and J. A. Landay, “Activity-based prototyping of ubicomp applications
for long-lived, everyday human activities,” in Proceedings of CHI 2008, Florence, Italy: ACM, April 5-10 2008.
[68] Y.-K. Lim, A. Pangam, S. Periyasami, and S. Aneja, “Comparative analysis of
high and low-fidelity prototypes for more valid usability evaluations of mobile
devices,” in Proceedings of NordiCHI’06: Changing Roles, pp. 291–300, Oslo,
Norway: ACM, 2006.
[69] J. Lin, “Damask: A tool for early-stage design and prototyping of cross-device
user interfaces,” in Proceedings of Symposium on User Interface Software and
Technology, UIST 2003, Vancouver, Canada: ACM, 2003.
[70] J. Lin and J. A. Landay, “Damask: A tool for early-stage design and prototyping of multi-device user interfaces,” in Proceedings of The 8th International
Conference on Distributed Multimedia Systems, pp. 573–580, 2002 International Workshop on Visual Computing, ACM Press, 2002.
[71] J. Lin, M. W. Newman, J. I. Hong, and J. A. Landay, “DENIM: Finding a
tighter fit between tools and practice for Web site design,” in CHI ’00: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems,
pp. 510–517, The Hague, The Netherlands: ACM, 2000.
[72] J. Lin, M. Thomsen, and J. A. Landay, “A visual language for sketching large
and complex interactive designs,” in Proceedings of CHI 2002, pp. 307–314,
Minneapolis, Minnesota, USA, April 20–25, 2002.
[73] A. L. Liu and Y. Li, “BrickRoad: A light-weight tool for spontaneous design
of location-enhanced applications,” in Proceedings of CHI 2007, pp. 295–298,
San Jose, California, USA: ACM, April 28–May 3, 2007.
[74] L. Liu and P. Khooshabeh, “Paper or interactive?: A study of prototyping techniques for ubiquitous computing environments,” in CHI 03 Extended
Abstracts, pp. 1030–1031, ACM, 2003.
[75] N. Liu, Y. Liu, L. Liu, and X. Wang, “User experience evaluation of post-traumatic mobile service: A case study,” in Proceedings of Mobility 2009, Nice,
France: ACM, September 2–4, 2009.
[76] K. Lukander, “Measuring gaze point on handheld mobile devices,” in Extended
Abstracts of CHI2004, Vienna, Austria: ACM, 2004.
[77] J. Lumsden and R. MacLean, “A comparison of pseudo-paper and paper prototyping methods for mobile evaluations,” in Proceedings of the International
Workshop on Mobile and Networking Technologies for Social Applications
(MONET’2008), pp. 538–547, Monterrey, Mexico: part of the LNCS On The
Move (OTM) Federated Conferences and Workshops, November 4, 2008.
[78] K. Lyons and T. Starner, “Mobile capture for wearable computer usability
testing,” in Proceedings of the 5th IEEE International Symposium on Wearable Computers, Zurich, Switzerland: IEEE Press, 2001.
[79] D. J. Mayhew, The Usability Engineering Lifecycle. San Francisco, CA, USA:
Morgan Kaufmann, 1999.
[80] M. McCurdy, C. Connors, G. Pyrzak, B. Kanefsky, and A. Vera, “Breaking the
fidelity barrier: An examination of our current characterization of prototypes
and an example of a mixed-fidelity success,” in Proceedings of CHI 2006,
pp. 1233–1242, Montreal, Quebec, Canada: ACM, 2006.
[81] R. Molich, M. R. Ede, K. Kaasgaard, and B. Karyukin, “Comparative usability evaluation,” Behaviour and Information Technology, vol. 23, no. 1,
pp. 65–74, Taylor and Francis, 2004.
[82] R. Molich, A. D. Thomsen, B. Karyukina, L. Schmidt, M. Ede, W. van
Oel, and M. Arcuri, “Comparative evaluation of usability tests,” in CHI’99,
pp. 83–84, Pittsburgh, Pennsylvania, USA: ACM Press, 1999.
[83] Y. Nakhimovsky, D. Eckles, and J. Riegelsberger, “Mobile user experience
research: Challenges, methods & tools,” in CHI 2009 Extended Abstracts,
pp. 4795–4798, Boston Massachusetts, USA: ACM, April 4–9 2009.
[84] J. Nielsen, Usability Engineering. Los Altos, California: Morgan Kaufmann,
1993.
[85] J. Nielsen and V. L. Phillips, “Estimating the relative usability of two interfaces: Heuristic, formal, and empirical methods compared,” in INTERCHI’93,
pp. 214–221, Amsterdam, The Netherlands: Addison-Wesley, 1993.
[86] C. M. Nielsen et al., “It’s worth the hassle! The added value of evaluating the
usability of mobile systems in the field,” in NordiCHI’06, ACM, 2006.
[87] A. Oulasvirta, E. Kurvinen, and T. Kankainen, “Understanding context by
being there: Case studies in bodystorming,” Personal and Ubiquitous Computing, vol. 7, pp. 125–134, 2003, Springer-Verlag.
[88] A. Oulasvirta and T. Nyyssonen, “Flexible hardware configurations for studying mobile usability,” Journal of Usability Studies, 2009.
[89] A. Oulasvirta, S. Tamminen, and K. Höök, “Comparing two approaches to
context: Realism and constructivism,” in Proceedings of the 4th Decennial
Conference on Critical Computing: Between Sense and Sensibility, Aarhus,
Denmark: ACM, August 20–24 2005.
[90] L. Palen and M. Salzman, “Voice-mail diary studies for naturalistic data
capture under mobile conditions,” in Proceedings of CSCW02, New Orleans,
Louisiana, USA: ACM, November 16-20 2002.
[91] D. Patel, G. Marsden, M. Jones, and S. Jones, “An evaluation of techniques for image searching and browsing on mobile devices,” in Proceedings of
SAICSIT09, pp. 60–69, Riverside, Vanderbijlpark, South Africa: ACM, October 12–14 2009.
[92] T. Pering, “Evaluating the privacy of using mobile devices to interact in public
places,” in Permid 2006, Workshop at Pervasive 2006, Dublin, Ireland, 2006.
[93] H. Petrie and N. Bevan, “The evaluation of accessibility, usability and user
experience,” in The Universal Access Handbook, (C. Stepanidis, ed.), CRC
Press, 2009.
[94] J. Preece, Human Computer Interaction. Addison Wesley, 1994.
[95] S. Pyykko and M. Hannuksela, “Does context matter in quality evaluation of
mobile television?,” in Proceedings of MobileHCI 2008, pp. 63–72, Amsterdam,
The Netherlands: ACM, September 2-5 2008.
[96] P. Reichl, P. Frohlich, L. Baillie, R. Schatz, and A. Dantcheva, “The LiLiPUT
prototype: A wearable lab environment for user tests of mobile telecommunication applications,” in CHI 2007 Extended Abstracts, pp. 1833–1838, San
Jose, California, USA: ACM, April 28-May 3 2007.
[97] T. Reis, M. de Sá, and L. Carriço, “Multimodal artefact manipulation: Evaluation in real contexts,” in Proceedings of ICPCA’08, 3rd International Conference on Pervasive Computing and Applications, pp. 570–575, Alexandria,
Egypt: IEEE Press, October 06–08 2008.
[98] Y. Rogers et al., “Why it’s worth the hassle: The value of in-situ studies when
designing ubicomp,” in Ubicomp 2007, Innsbruck, Austria: ACM, September
2007.
[99] D. Rosenberg, “Revisiting tangible speculation: 20 years of UI prototyping,”
Interactions, vol. 8, no. 1, pp. 31–32, ACM, 2006.
[100] J. Rubin, Handbook of Usability Testing: How to Plan, Design, and Conduct
Effective Tests. New York: Wiley, 1994.
[101] E. Rukzio, A. Pleuss, and L. Terrenghi, “The physical user interface profile
(PUIP): Modelling mobile interactions with the real world,” in Proceedings of
TAMODIA’05 — Tasks Models and Diagrams for UI Design 2005, pp. 95–102,
Gdansk, Poland: ACM, 2005.
[102] S. Carter et al., “Momento: Support for situated ubicomp experimentation,”
in CHI’07, pp. 125–134, ACM, 2007.
[103] A. Salovaara and A. O. G. Jacucci, “The panopticon: A method for observing
inter-group interactions,” in CHI2006 Extended Abstracts, Montreal, Canada:
ACM, April 22–27 2006.
[104] N. Savio and J. Braiterman, “Design sketch: The context of mobile interaction,” in Proceedings of MobileHCI 2007, Singapore: ACM, September 2007.
[105] J. Scholtz and S. Consolvo, “Toward a framework for evaluating ubiquitous
computing applications,” IEEE Pervasive Computing, vol. 2, pp. 82–86, 2004.
[106] B. Shneiderman and C. Plaisant, “Multi-dimensional in-depth long-term case
studies,” in Proceedings of BELIV 2006, Venice, Italy: ACM, 2006.
[107] B. Shneiderman, C. Plaisant, M. Cohen, and S. Jacobs, Designing the User
Interface: Strategies for Effective Human–Computer Interaction. Addison
Wesley, 5th ed., 2009.
[108] T. Sohn, K. A. Li, W. G. Griswold, and J. D. Hollan, “A diary study of
mobile information needs,” in CHI ’08: Proceeding of the Twenty-sixth Annual
SIGCHI Conference on Human Factors in Computing Systems, pp. 433–442,
New York, NY, USA: ACM, 2008.
[109] P. J. Stappers, “Creative connections: User, designer, context, and tools,”
Personal and Ubiquitous Computing, vol. 8, pp. 95–100, 2006, Springer-Verlag.
[110] H. Stromberg, V. Pirttila, and V. Ikonen, “Interactive scenarios – building ubiquitous computing concepts in the spirit of participatory design,” Personal and
Ubiquitous Computing, vol. 8, pp. 200–207, 2004, Springer-Verlag.
[111] D. Svanaes and G. Seland, “Putting the users center stage: Role playing and
low-fi prototyping enable end users to design mobile systems,” in CHI’04,
Vienna, Austria: ACM, 2004.
[112] S. Tamminen, A. Oulasvirta, K. Toiskallio, and A. Kankainen, “Understanding
mobile contexts,” Personal and Ubiquitous Computing, vol. 8, pp. 135–143,
2004, Springer-Verlag.
[113] H. ter Hofte, K. L. Jensen, P. Nurmi, and J. Froehlich, “Mobile living labs 09:
Methods and tools for evaluation in the wild,” in Proceedings of MobileHCI
2009, Bonn, Germany: ACM, September 15–18 2009.
[114] K. E. Thompson, E. P. Rozanski, and A. R. Haake, “Here, there, anywhere:
Remote usability testing that works,” in Proceedings of SIGITE’04, Salt Lake
City, Utah, USA: ACM, 2004.
[115] H. Vaataja, “Factors affecting user experience in mobile systems and services,” in Proceedings of MobileHCI 2008, Amsterdam, The Netherlands:
ACM, September 2-5 2008.
[116] H. Vaataja and V. Roto, “Mobile questionnaires for user experience evaluation,” in CHI 2010 Extended Abstracts, pp. 3361–3366, Atlanta, Georgia, USA,
April 10-15 2010.
[117] R. A. Virzi, J. L. Sokolov, and D. Karis, “Usability problem identification
using both low- and high-fidelity prototypes,” in Proceedings of CHI 1996,
Vancouver, Canada: ACM, 1996.
[118] S. Waterson and J. A. Landay, “In the lab and out in the wild: Remote web
usability testing for mobile devices,” in Proceedings of CHI 2002, pp. 796–797,
Minneapolis, Minnesota, USA: ACM, April 20–25 2002.
[119] S. Weiss, Handheld Usability. England: Wiley & Sons, 2002.
[120] J. Williamson, S. Robinson, C. Stewart, M. Murray, R. Smith, M. Jones,
and S. Brewster, “Social gravity: A virtual elastic tether for casual, privacy-preserving pedestrian rendezvous,” in Proceedings of the 28th International
Conference on Human Factors in Computing Systems (Atlanta, Georgia, USA,
April 10-15, 2010), CHI ’10, pp. 1485–1494, New York, NY: ACM, 2010.
[121] A. U.-H. Yasar, “Enhancing experience prototyping by the help of mixed-fidelity prototypes,” in Proceedings of the 4th International Conference on
Mobile Technology, Applications, and Systems and the 1st International Symposium on Computer Human Interaction in Mobile Technology (Mobility ’07),
pp. 468–473, New York, NY, USA: ACM, 2007.
[122] K. Yatani and K. N. Truong, “An evaluation of stylus-based text entry methods on handheld devices in stationary and mobile settings,” in Proceedings of
Mobile HCI’07, pp. 487–494, Singapore: ACM, September 9–12 2007.