AD HOC AND SENSOR NETWORKS
A Survey of Mobile Phone Sensing
Nicholas D. Lane, Emiliano Miluzzo, Hong Lu, Daniel Peebles, Tanzeem Choudhury,
and Andrew T. Campbell, Dartmouth College
IEEE Communications Magazine • September 2010
ABSTRACT
Mobile phones or smartphones are rapidly becoming the central computer and
communication device in people's lives. Application delivery channels such as
the Apple App Store are transforming mobile phones into App Phones, capable of
downloading a myriad of applications in an instant. Importantly, today's
smartphones are programmable and come with a growing set of cheap powerful
embedded sensors, such as an accelerometer, digital compass, gyroscope, GPS,
microphone, and camera, which are enabling the emergence of personal, group,
and community-scale sensing applications. We believe that sensor-equipped
mobile phones will revolutionize many sectors of our economy, including
business, healthcare, social networks, environmental monitoring, and
transportation. In this article we survey existing mobile phone sensing
algorithms, applications, and systems. We discuss the emerging sensing
paradigms, and formulate an architectural framework for discussing a number of
the open issues and challenges emerging in the new area of mobile phone
sensing research.
INTRODUCTION
Today’s smartphone not only serves as the key
computing and communication mobile device of
choice, but it also comes with a rich set of
embedded sensors, such as an accelerometer,
digital compass, gyroscope, GPS, microphone,
and camera. Collectively, these sensors are
enabling new applications across a wide variety
of domains, such as healthcare [1], social net-
works [2], safety, environmental monitoring [3],
and transportation [4, 5], and give rise to a new
area of research called mobile phone sensing.
Until recently, mobile sensing research such
as activity recognition, where people’s activity
(e.g., walking, driving, sitting, talking) is classi-
fied and monitored, required specialized mobile
devices (e.g., the Mobile Sensing Platform
[MSP]) [6] to be fabricated [7]. Mobile sensing
applications had to be manually downloaded,
installed, and hand tuned for each device. User
studies conducted to evaluate new mobile sens-
ing applications and algorithms were small-scale
because of the expense and complexity of doing
experiments at scale. As a result the research,
which was innovative, gained little momentum
outside a small group of dedicated researchers.
Although the potential of using mobile phones
as a platform for sensing research has been dis-
cussed for a number of years now, in both indus-
trial [8] and research communities [9, 10], there
has been little or no advancement in the field
until recently.
All that is changing because of a number of
important technological advances. First, the
availability of cheap embedded sensors initially
included in phones to drive the user experience
(e.g., the accelerometer used to change the dis-
play orientation) is changing the landscape of
possible applications. Now phones can be pro-
grammed to support new disruptive sensing
applications such as sharing the user’s real-time
activity with friends on social networks such as
Facebook, keeping track of a person’s carbon
footprint, or monitoring a user’s well being. Sec-
ond, smartphones are open and programmable.
In addition to sensing, phones come with com-
puting and communication resources that offer a
low barrier of entry for third-party programmers
(e.g., undergraduates with little phone program-
ming experience are developing and shipping
applications). Third, importantly, each phone
vendor now offers an app store allowing develop-
ers to deliver new applications to large popula-
tions of users across the globe, which is
transforming the deployment of new applications,
and allowing the collection and analysis of data
far beyond the scale of what was previously possi-
ble. Fourth, the mobile computing cloud enables
developers to offload mobile services to back-end
servers, providing unprecedented scale and addi-
tional resources for computing on collections of
large-scale sensor data and supporting advanced
features such as persuasive user feedback based
on the analysis of big sensor data.
The combination of these advances opens the
door for new innovative research and will lead to
the development of sensing applications that are
likely to revolutionize a large number of existing
business sectors and ultimately significantly
impact our everyday lives. Many questions
remain to make this vision a reality. For exam-
ple, how much intelligence can we push to the
phone without jeopardizing the phone experi-
ence? What breakthroughs are needed in order
to perform robust and accurate classification of
activities and context out in the wild? How do we
scale a sensing application from an individual to
a target community or even the general popula-
tion? How do we use these new forms of large-
scale application delivery systems (e.g., Apple
App Store, Android Market) to best drive data
collection, analysis, and validation? How can we
exploit the availability of big data shared by
applications but build watertight systems that
protect personal privacy? While this new
research field can leverage results and insights
from wireless sensor networks, pervasive com-
puting, machine learning, and data mining, it
presents new challenges not addressed by these
communities.
In this article we give an overview of the sen-
sors on the phone and their potential uses. We
discuss a number of leading application areas and
sensing paradigms that have emerged in the liter-
ature recently. We propose a simple architectural
framework in order to facilitate the discussion of
the important open challenges on the phone and
in the cloud. The goal of this article is to bring
the novice or practitioner not working in this field
quickly up to date with where things stand.
SENSORS
As mobile phones have matured as a computing
platform and acquired richer functionality, these
advancements often have been paired with the
introduction of new sensors. For example,
accelerometers have become common after being
initially introduced to enhance the user interface
and use of the camera. They are used to automat-
ically determine the orientation in which the user
is holding the phone and use that information to
automatically re-orient the display between a
landscape and portrait view or correctly orient
captured photos during viewing on the phone.
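As a concrete illustration, the tilt logic behind display rotation reduces to
a few comparisons on the gravity vector reported by the accelerometer. The
following Python sketch is illustrative only; the axis conventions and
thresholds are our assumptions, not any vendor's actual implementation:

```python
import math

def display_orientation(ax, ay, az):
    """Choose a display orientation from a single accelerometer reading
    (units: g). When the phone lies roughly flat, gravity dominates the
    z-axis and the reading is ambiguous, so the orientation is kept."""
    if math.sqrt(ax * ax + ay * ay) < 0.4:   # gravity mostly on z: lying flat
        return "unchanged"
    # Gravity projected onto the screen plane tells us which edge points down.
    return "portrait" if abs(ay) >= abs(ax) else "landscape"

print(display_orientation(0.02, -0.98, 0.10))  # upright in the hand -> portrait
print(display_orientation(0.99, 0.05, 0.08))   # turned on its side -> landscape
```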
Figure 1 shows the suite of sensors found in
the Apple iPhone 4. The phone’s sensors include
a gyroscope, compass, accelerometer, proximity
sensor, and ambient light sensor, as well as other
more conventional devices that can be used to
sense such as front and back facing cameras, a
microphone, GPS and WiFi, and Bluetooth
radios. Many of the newer sensors are added to
support the user interface (e.g., the accelerome-
ter) or augment location-based services (e.g., the
digital compass).
Figure 1. An off-the-shelf iPhone 4, representative of the growing class of
sensor-enabled phones. This phone includes eight different sensors:
accelerometer, GPS, ambient light, dual microphones, proximity sensor, dual
cameras, compass, and gyroscope.
The proximity and light sensors allow the
phone to perform simple forms of context recog-
nition associated with the user interface. The
proximity sensor detects, for example, when the
user holds the phone to her face to speak. In
this case the touchscreen and keys are disabled,
preventing them from accidentally being pressed
as well as saving power because the screen is
turned off. Light sensors are used to adjust the
brightness of the screen. The GPS, which allows
the phone to localize itself, enables new loca-
tion-based applications such as local search,
mobile social networks, and navigation. The
compass and gyroscope represent an extension
of location, providing the phone with increased
awareness of its position in relation to the physi-
cal world (e.g., its direction and orientation)
enhancing location-based applications.
Not only are these sensors useful in driving
the user interface and providing location-based
services; they also represent a significant oppor-
tunity to gather data about people and their
environments. For example, accelerometer data
is capable of characterizing the physical move-
ments of the user carrying the phone [2]. Dis-
tinct patterns within the accelerometer data can
be exploited to automatically recognize different
activities (e.g., running, walking, standing). The
camera and microphone are powerful sensors.
These are probably the most ubiquitous sensors
on the planet. By continuously collecting audio
from the phone’s microphone, for example, it is
possible to classify a diverse set of distinctive
sounds associated with a particular context or
activity in a person’s life, such as using an auto-
matic teller machine (ATM), being in a particu-
lar coffee shop, having a conversation, listening
to music, making coffee, and driving [11]. The
camera on the phone can be used for many
things, ranging from traditional tasks such as photo
blogging to more specialized sensing activities
such as tracking the user’s eye movement across
the phone’s display as a means to activate appli-
cations using the camera mounted on the front
of the phone [12]. The combination of
accelerometer data and a stream of location esti-
mates from the GPS can recognize the mode of
transportation of a user, such as using a bike or
car or taking a bus or the subway [3].
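To make the idea concrete, the sketch below combines the two modalities with
simple heuristics. It is a toy illustration, not the inference pipeline of
[3]; the thresholds are assumed values chosen for exposition:

```python
import statistics

def transport_mode(gps_speeds_mps, accel_magnitudes):
    """Rough mode-of-transport heuristic over a window of GPS speed
    estimates (m/s) and accelerometer magnitudes (g). Walking shows
    modest speed but high accelerometer jitter; motorized transport
    shows high speed with comparatively smooth acceleration."""
    speed = statistics.median(gps_speeds_mps)
    jitter = statistics.pvariance(accel_magnitudes)
    if speed < 0.5:
        return "stationary"
    if speed < 2.5:
        return "walking" if jitter > 0.01 else "slow vehicle"
    if speed < 7.0 and jitter > 0.1:
        return "biking"
    return "motorized (car/bus/subway)"

print(transport_mode([1.4, 1.5, 1.3], [1.0, 1.2, 0.8, 1.1]))  # -> walking
```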
More and more sensors are being incorporat-
ed into phones. An interesting question is what
new sensors are we likely to see over the next
few years? Non-phone-based mobile sensing
devices such as the Intel/University of Washing-
ton Mobile Sensing Platform (MSP) [6] have
shown value from using other sensors not found
in phones today (e.g., barometer, temperature,
humidity sensors) for activity recognition; for
example, the accelerometer and barometer make
it easy to identify not only when someone is
walking, but when they are climbing stairs and in
which direction. Other researchers have studied
air quality and pollution [13] using specialized
sensors embedded in prototype mobile phones.
Still others have embedded sensors in standard
mobile phone earphones to read a person’s
blood pressure [14] or used neural signals from
cheap off-the-shelf wireless electroencephalogra-
phy (EEG) headsets to control mobile phones
for hands-free human-mobile phone interaction
[36]. At this stage it is too early to say what new
sensors will be added to the next generation of
smartphones, but as the cost and form factor
come down and leading applications emerge, we
are likely to see more sensors added.
APPLICATIONS AND APP STORES
New classes of applications, which can take
advantage of both the low-level sensor data and
high-level events, context, and activities inferred
from mobile phone sensor data, are being
explored not only in academic and industrial
research laboratories [11, 15–22] but also within
startup companies and large corporations. One
such example is SenseNetworks, a recent U.S.-
based startup company, which uses millions of
GPS estimates sourced from mobile phones
within a city to predict, for instance, which sub-
population or tribe might be interested in a spe-
cific type of nightclub or bar (e.g., a jazz club).
Remarkably, it has only taken a few years for
this type of analysis of large-scale location infor-
mation and mobility patterns to migrate from
the research laboratory into commercial usage.
In what follows we discuss a number of the
emerging leading application domains and argue
that the new application delivery channels (i.e.,
app stores) offered by all the major vendors are
critical for the success of these applications.
TRANSPORTATION
Traffic remains a serious global problem; for
example, congestion alone can severely impact
both the environment and human productivity
(e.g., wasted hours sitting in traffic). Mobile
phone sensing systems such as the MIT VTrack
project [4] or the Mobile Millennium project [5]
(a joint initiative between Nokia, NAVTEQ, and
the University of California at Berkeley) are
using mobile phones to provide fine-grained,
large-scale traffic information that facilitates
services such as accurate travel time estimation
for improved commute planning.
SOCIAL NETWORKING
Millions of people participate regularly within
online social networks. The Dartmouth
CenceMe project [2] is investigating the use of
sensors in the phone to automatically classify
events in people’s lives, called sensing presence,
and selectively share this presence using online
social networks such as Twitter, Facebook, and
MySpace, replacing manual actions people now
perform daily.
ENVIRONMENTAL MONITORING
Conventional ways of measuring and reporting
environmental pollution rely on aggregate statis-
tics that apply to a community or an entire city.
The University of California at Los Angeles
(UCLA) PEIR project [3] uses sensors in phones
to build a system that enables personalized envi-
ronmental impact reports, which track how the
actions of individuals affect both their exposure
and their contribution to problems such as car-
bon emissions.
HEALTH AND WELL BEING
The information used for personal health care
today largely comes from self-report surveys and
infrequent doctor consultations. Sensor-enabled
mobile phones have the potential to collect in
situ continuous sensor data that can dramatically
change the way health and wellness are assessed
as well as how care and treatment are delivered.
The UbiFit Garden [1], a joint project between
Intel and the University of Washington, captures
levels of physical activity and relates this infor-
mation to personal health goals when presenting
feedback to the user. These types of systems
have proven to be effective in empowering peo-
ple to curb poor behavior patterns and improve
health, such as encouraging more exercise.
APP STORES
Getting a critical mass of users is a common
problem faced by people who build systems,
developers and researchers alike. Fortunately,
modern phones have an effective application dis-
tribution channel, first made available by Apple’s
App Store for the iPhone, that is revolutionizing
this new field. Each major smartphone vendor
has an app store (e.g., Apple App Store, Android
Market, Microsoft Mobile Marketplace, Nokia
Ovi). The success of the app stores with the pub-
lic has made it possible not only for startups but
also for small research laboratories and even individual
developers to quickly attract a very large number
of users. For example, an early use of app store
distribution by researchers in academia is the
CenceMe application for iPhone [2], which was
made available on the App Store when it opened
in 2008. It is now feasible to distribute and run
experiments with a large number of participants
from all around the world rather than in labora-
tory controlled conditions using a small user
study. For example, researchers interested in sta-
tistical models that interpret human behavior
from sensor data have long dreamed of ways to
collect such large-scale real-world data. These
app stores represent a game changer for these
types of research. However, many challenges
remain with this new approach to experimenta-
tion via app stores. For example, what is the best
way to collect ground-truth data to assess the
accuracy of algorithms that interpret sensor
data? How do we validate experiments? How do
we select a good study group? How do we deal
with the potentially massive amount of data
made available? How do we protect the privacy
of users? What is the impact on getting approval
for human subject studies from university institu-
tional review boards (IRBs)? How do
researchers scale to run such large-scale studies?
For example, researchers used to supporting
small numbers of users (e.g., 50 users with
mobile phones) now have to construct cloud ser-
vices to potentially deal with 10,000 needy users.
This is fine if you are a startup, but are academic
research laboratories geared to deal with this?
Figure 2. Mobile phone sensing is effective across multiple scales, including
a single individual (e.g., UbiFit Garden [1]), groups such as social networks
or special interest groups (e.g., GarbageWatch [23]), and entire communities
or the population of a city (e.g., Participatory Urbanism [20]).
SENSING SCALE AND PARADIGMS
Future mobile phone sensing systems will oper-
ate at multiple scales, enabling everything from
personal sensing to global sensing, as illustrated
in Fig. 2, where personal, group, and community
sensing represent three distinct scales at which
mobile phone sensing is currently being studied
by the research community. At the same time
researchers are discussing how much the user
(i.e., the person carrying the phone) should be
actively involved during the sensing activity (e.g.,
taking the phone out of the pocket to collect a
sound sample or take a picture); that is, should
the user actively participate, known as participa-
tory sensing [15], or, alternatively, passively par-
ticipate, known as opportunistic sensing [17]?
Each of these sensing paradigms presents impor-
tant trade-offs. In what follows we discuss differ-
ent sensing scales and paradigms.
SENSING SCALE
Personal sensing applications are designed for a
single individual, and are often focused on data
collection and analysis. Typical scenarios include
tracking the user’s exercise routines or automating
diary collection. Typically, personal sensing appli-
cations generate data for the sole consumption of
the user and are not shared with others. An excep-
tion is healthcare applications where limited shar-
ing with medical professionals is common (e.g.,
primary care giver or specialist). Figure 2 shows
the UbiFit Garden [1] as an example of a person-
al wellness application. This personal sensing
application adopts persuasive technology ideas to
encourage the user to reach her personal fitness
goals using the metaphor of a garden that blooms
as she progresses toward them.
Individuals who participate in sensing appli-
cations that share a common goal, concern, or
interest collectively represent a group. These
group sensing applications are likely to be popu-
lar and reflect the growing interest in social net-
works or connected groups (e.g., at work, in the
neighborhood, friends) who may want to share
sensing information freely or with privacy pro-
tection. There is an element of trust in group
sensing applications that simplifies otherwise diffi-
cult problems, such as attesting that the collect-
ed sensor data is correct or reducing the degree
to which aggregated data must protect the indi-
vidual. Common use cases include assessing
neighborhood safety, sensor-driven mobile social
networks, and forms of citizen science. Figure 2
shows GarbageWatch [23] as an example of a
group sensing application where people partici-
pate in a collective effort to improve recycling by
capturing relevant information needed to
improve the recycling program. For example,
students use the phone’s camera to log the con-
tent of recycling bins used across a campus.
Most examples of community sensing only
become useful once they have a large number of
people participating; for example, tracking the
spread of disease across a city, the migration
patterns of birds, congestion patterns across city
roads [5], or a noise map of a city [24]. These
applications represent large-scale data collection,
analysis, and sharing for the good of the commu-
nity. To achieve scale implicitly requires the
cooperation of strangers who will not trust each
other. This increases the need for community
sensing systems with strong privacy protection
and low commitment levels from users. Figure 2
shows carbon monoxide readings captured in
Ghana using mobile sensors attached to taxicabs
as part of the Participatory Urbanism project
[20] as an example of a community sensing appli-
cation. This project, in conjunction with the N-
SMARTs project [13] at the University of
California at Berkeley, is developing prototypes
that allow similar sensor data to be collected
with phone-embedded sensors.
The impact of scaling sensing applications
from personal to population scale is unknown.
Many issues related to information sharing, pri-
vacy, data mining, and closing the loop by pro-
viding useful feedback to an individual, group,
community, and population remain open. Today,
we only have limited experience in building scal-
able sensing systems.
SENSING PARADIGMS
One issue common to the different types of sens-
ing scale is to what extent the user is actively
involved in the sensing system [12]. We discuss
two points in the design space: participatory sens-
ing, where the user actively engages in the data
collection activity (i.e., the user manually deter-
mines how, when, what, and where to sample) and
opportunistic sensing, where the data collection
stage is fully automated with no user involvement.
The benefit of opportunistic sensing is that it
lowers the burden placed on the user, allowing
overall participation by a population of users to
remain high even if the application is not that
personally appealing. This is particularly useful
for community sensing, where per user benefit
may be hard to quantify and only accrue over a
long time. However, these systems are often
technically difficult to build [25], and a major
resource, people, is underutilized. One of the
main challenges of using opportunistic sensing is
the phone context problem; for example, the
application wants to only take a sound sample
for a city-wide noise map when the phone is out
of the pocket or bag. These types of context
issues can be solved by using the phone sensors;
for example, the accelerometer or light sensors
can determine if the phone is out of the pocket.
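A minimal sketch of such an admission check appears below; the sensor values
and thresholds are illustrative assumptions rather than measured constants:

```python
def ok_to_sample_audio(light_lux, accel_variance, in_call):
    """Admission check for an opportunistic noise-map sampler: record
    only when the phone appears to be out of a pocket or bag, is not
    being vigorously handled, and is not in use for a call. Real
    platforms expose these readings through their own sensor APIs."""
    out_of_pocket = light_lux > 10.0        # pockets and bags are dark
    being_handled = accel_variance > 0.5    # heavy motion -> unreliable sample
    return out_of_pocket and not being_handled and not in_call

if ok_to_sample_audio(light_lux=120.0, accel_variance=0.02, in_call=False):
    print("capture a short sound sample for the noise map")
```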
Participatory sensing, which is gaining inter-
est in the mobile phone sensing community,
places a higher burden or cost on the user; for
example, manually selecting data to collect (e.g.,
lowest petrol prices) and then sampling it (e.g.,
taking a picture). An advantage is that complex
operations can be supported by leveraging the
intelligence of the person in the loop who can
solve the context problem in an efficient man-
ner; that is, a person who wants to participate in
collecting a noise or air quality map of their
neighborhood simply takes the phone out of
their bag to solve the context problem. One
drawback of participatory sensing is that the
quality of data is dependent on participant
enthusiasm to reliably collect sensing data and
the compatibility of a person’s mobility patterns
to the intended goals of the application (e.g.,
collect pollution samples around schools). Many
of these challenges are actively being studied.
For example, the PICK project [23] is studying
models for systematically recruiting participants.
Clearly, opportunistic and participatory sensing
represent extreme points in the design space. Each
approach has pros and cons. To date there is lit-
tle experience in building large-scale participato-
ry or opportunistic sensing applications to fully
understand the trade-offs. There is a need to
develop models to best understand the usability
and performance issues of these schemes. In
addition, it is likely that many applications will
emerge that represent a hybrid of both these
sensing paradigms.
MOBILE PHONE SENSING
ARCHITECTURE
Mobile phone sensing is still in its infancy. There
is little or no consensus on the sensing architec-
ture for the phone and the cloud. For example,
new tools and phone software will be needed to
facilitate quick development and deployment of
robust context classifiers for the leading phones
on the market. Common methods for collecting
and sharing data need to be developed. Mobile
phones cannot be overloaded with continuous
sensing commitments that undermine the perfor-
mance of the phone (e.g., by depleting battery
power). It is not clear what architectural compo-
nents should run on the phone and what should
run in the cloud. For example, some researchers
propose that raw sensor data should not be
pushed to the cloud because of privacy issues. In
the following sections we propose a simple archi-
tectural viewpoint for the mobile phone and the
computing cloud as a means to discuss the major
architectural issues that need to be addressed.
We do not argue that this is the best system
architecture. Rather, it presents a starting point
for discussions we hope will eventually lead to a
converging view and move the field forward.
Figure 3 shows a mobile phone sensing archi-
tecture that comprises the following building
blocks.
Figure 3. Mobile phone sensing architecture. (The original diagram shows
individual phones performing sensing, learning partitioned between the phones
and the mobile computing cloud with its big sensor data services, an inform,
share, and persuasion stage facing users, and application distribution.)
SENSE
Individual mobile phones collect raw sensor data
from sensors embedded in the phone.
LEARN
Information is extracted from the sensor data by
applying machine learning and data mining tech-
niques. These operations occur either directly on
the phone, in the mobile cloud, or with some
partitioning between the phone and cloud.
Where these components run could be governed
by various architectural considerations, such as
privacy, providing real-time user feedback,
reducing communication cost between the phone
and cloud, available computing resources, and
sensor fusion requirements. We therefore con-
sider where these components run to be an open
issue that requires research.
Figure 4. Raw audio data captured from mobile phones is transformed into
features, allowing learning algorithms to identify classes of behavior (e.g.,
driving, in conversation, making coffee) occurring in a stream of sensor data,
for example, by SoundSense [11]. The pipeline moves from raw data to extracted
features to classification inferences.
INFORM, SHARE, AND PERSUASION
We bundle a number of important architectural
components together because of commonality or
coupling of the components. For example, a per-
sonal sensing application will only inform the user,
whereas a group or community sensing application
may share an aggregate version of information
with the broader population and obfuscate the
identity of the users. Other considerations are how
to best visualize sensor data for consumption by
individuals, groups, and communities. Privacy is a
very important consideration as well.
While phones will naturally leverage the dis-
tributed resources of the mobile cloud (e.g.,
computation and services offered in the cloud),
the computing, communications, and sensing
resources on the phones are ever increasing. We
believe that as resources of the phone rapidly
expand, one of the main benefits of using the
mobile computing cloud will be the ability to
compute and mine big data from very large num-
bers of users. The availability of large-scale data
benefits mobile phone sensing in a variety of
ways; for example, more accurate interpretation
algorithms that are updated based on sensor
data sourced from an entire user community.
This data enables personalizing of sensing sys-
tems based on the behavior of both the individu-
al user and cliques of people with similar
behavior.
In the remainder of the article we present a
detailed discussion of the three main architec-
tural components introduced in this section:
• Sense
• Learn
• Inform, share, and persuasion
SENSE: THE MOBILE PHONE AS A
SENSOR
As we discussed, the integration of an ever
expanding suite of embedded sensors is one of
the key drivers of mobile phone applications.
However, the programmability of the phones
and the limitations of the operating systems that
run on them, the dynamic environment present-
ed by user mobility, and the need to support
continuous sensing on mobile phones present a
diverse set of challenges the research community
needs to address.
PROGRAMMABILITY
Until very recently only a handful of mobile
phones could be programmed. Popular plat-
forms such as Symbian-based phones presented
researchers with sizable obstacles to building
mobile sensing applications [2]. These platforms
lacked well defined reliable interfaces to access
low-level sensors and were not well suited to
writing common data processing components,
such as signal processing routines, or performing
computationally costly inference due to the
resource constraints of the phone. Early sensor-
enabled phones (i.e., prior to the iPhone in
2007) such as the Symbian-based Nokia N80
included an accelerometer, but there were no
open application programming interfaces (APIs)
to access the sensor signals. This has changed
significantly over the last few years. Note that
phone vendors initially included accelerometers
to help improve the user interface experience.
Most of the smartphones on the market are
open and programmable by third-party develop-
ers, and offer software development kits (SDKs),
APIs, and software tools. It is easy to cross-com-
pile code and leverage existing software such as
established machine learning libraries (e.g.,
Weka).
However, a number of challenges remain in
the development of sensor-based applications.
Most vendors did not anticipate that third par-
ties would use continuous sensing to develop
new applications. As a result, there is mixed API
and operating system (OS) support to access the
low-level sensors, fine-grained sensor control,
and watchdog timers that are required to devel-
op real-time applications. For example, on Nokia
Symbian and Maemo phones the accelerometer
returns samples to an application at unpredictable
rates between 25 and 38 Hz, depending on CPU
load. While this might not be an issue when using
the accelerometer to drive the display, statistical
models that interpret activity or context typically
require sampling rates that are both high and
consistent.
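One common workaround is to resample the jittery stream onto a fixed-rate grid
before feature extraction. The sketch below does this with linear
interpolation; it is a generic technique, not a platform API:

```python
def resample(timestamps, values, rate_hz):
    """Linearly interpolate an irregular sensor stream (e.g., an
    accelerometer delivering 25-38 Hz depending on CPU load) onto a
    fixed-rate grid so downstream classifiers see a consistent rate."""
    step = 1.0 / rate_hz
    grid, out, j = timestamps[0], [], 0
    while grid <= timestamps[-1]:
        while timestamps[j + 1] < grid:          # advance to the bracketing pair
            j += 1
        t0, t1 = timestamps[j], timestamps[j + 1]
        w = (grid - t0) / (t1 - t0)              # interpolation weight
        out.append((1 - w) * values[j] + w * values[j + 1])
        grid += step
    return out

# Irregular samples resampled to a steady 25 Hz grid:
print(resample([0.00, 0.03, 0.09, 0.12], [0.0, 0.3, 0.9, 1.2], rate_hz=25))
```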
Lack of sensor control limits the management
of energy consumption on the phone. For
instance, the GPS uses a varying amount of
power depending on factors such as the number
of satellites available and atmospheric condi-
tions. Currently, phones only offer a black box
interface to the GPS to request location esti-
mates. Finer-grained control is likely to help in
preserving battery power and maintaining accu-
racy; for example, location estimation could be
aborted when accuracy is likely to be low, or if
the estimate takes too long and is no longer use-
ful.
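The kind of finer-grained policy we have in mind could be layered on top of
today's black box interface, as in the following sketch. The request_fix()
callback is hypothetical and stands in for whatever location API the platform
exposes:

```python
import time

def get_location(request_fix, timeout_s=20.0, accuracy_m=50.0):
    """Poll a location interface (request_fix() is a hypothetical
    stand-in returning (lat, lon, accuracy_estimate_m) or None) and
    abort when a fix takes too long or stays too coarse to be useful,
    rather than burning energy indefinitely."""
    deadline = time.monotonic() + timeout_s
    best = None
    while time.monotonic() < deadline:
        fix = request_fix()
        if fix is not None:
            if best is None or fix[2] < best[2]:   # keep most accurate fix so far
                best = fix
            if fix[2] <= accuracy_m:               # good enough: stop, save power
                return fix
        time.sleep(1.0)                            # back off between requests
    return best                                    # possibly None or a coarse fix
```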
As third parties demand better support for
sensing applications, the API and OS support
will improve. However, programmability of the
phone remains a challenge moving forward. As
more individual, group, and community-scale
applications are developed there will be an
increasing demand placed on phones, both indi-
vidually and collectively. It is likely that abstrac-
tions that can cope with persistent spatial queries
and secure the use of resources from neighbor-
ing phones will be needed. Phones may want to
interact with other collocated phones to build
new sensing paradigms based on collaborative
sensing [12].
Different vendors offer different APIs, mak-
ing porting the same sensing application to mul-
tivendor platforms challenging. It is useful for
the research community to think about and pro-
pose sensing abstractions and APIs that could be
standardized and adopted by different mobile
phone vendors.
CONTINUOUS SENSING
Continuous sensing will enable new applications
across a number of sectors but particularly in
personal healthcare. One important OS require-
ment for continuous sensing is that the phone
supports multitasking and background process-
ing. Today, only Android and Nokia Maemo
phones support this capability. The iPhone 4 OS,
while supporting the notion of multitasking, is
inadequate for continuous sensing. Applications
must conform to predefined profiles with strict
constraints on access to resources. None of these
profiles provide the ability to have continuous
access to all the sensors (e.g., continuous
accelerometer sampling is not possible).
While smartphones continue to provide more
computation, memory, storage, sensing, and com-
munication bandwidth, the phone is still a
resource-limited device if complex signal process-
ing and inference are required. Signal processing
and machine learning algorithms can stress the
resources of the phones in different ways: some
require the CPU to process large volumes of sen-
sor data (e.g., interpreting audio data [12]), some
need frequent sampling of energy expensive sen-
sors (e.g., GPS [3]), while others require real-time
inference (e.g., Darwin [12]). Different applica-
tions place different requirements on the execu-
tion of these algorithms. For example, for
applications that are user initiated the latency of
the operation is important. Applications (e.g.,
healthcare) that require continuous sensing will
often require real-time processing and classifica-
tion of the incoming stream of sensor data. We
believe continuous sensing can enable a new class
of real-time applications in the future, but these
applications may be more resource demanding.
Phones in the future should offer support for con-
tinuous sensing without jeopardizing the phone
experience; that is, not disrupt existing applica-
tions (e.g., to make calls, text, and surf the web) or
drain batteries. Experiences from actual deploy-
ments of mobile phone sensing systems show that
phones which run these applications can have
standby times reduced from 20 hours or more to
just six hours [2]. For continuous sensing to be
viable there need to be breakthroughs in low-ener-
gy algorithms that duty cycle the device while
maintaining the necessary application fidelity.
Early deployments of phone sensing systems
tended to trade off accuracy for lower resource
usage by implementing algorithms that require
less computation or a reduced amount of sensor
data. Another strategy to reduce resource usage
is to leverage cloud infrastructure where differ-
ent sensor data processing stages are offloaded
to back-end servers [12, 26] when possible. Typi-
cally, raw data produced by the phone is not sent
over the air due to the energy cost of transmis-
sion, but rather compressed summaries (i.e.,
extracted features from the raw sensor data) are
sent. The drawback to these approaches is that
they are seldom sufficiently energy-efficient to
be applied to continuous sensing scenarios.
Other techniques rely on adopting a variety of
duty cycling techniques that manage the sleep
cycle of sensing components on the phone in
order to trade off the amount of battery con-
sumed against sensing fidelity and latency [27].
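The sketch below combines the two strategies from this discussion: a
duty-cycled sampling loop that keeps the sensors asleep most of the time, and
on-phone feature extraction so that only compact summaries, never raw samples,
are uploaded. The read_accel_window() and upload() hooks are hypothetical
placeholders for platform-specific code:

```python
import statistics
import time

def sensing_loop(read_accel_window, upload, duty_cycle=0.1,
                 period_s=10.0, cycles=6):
    """Duty-cycled continuous sensing sketch: sample for a fraction of
    each period, compute compact features on the phone, and upload only
    those summaries rather than raw samples."""
    active_s = duty_cycle * period_s
    for _ in range(cycles):
        window = read_accel_window(active_s)       # sensors on briefly
        features = {                               # compressed summary
            "mean": statistics.fmean(window),
            "variance": statistics.pvariance(window),
        }
        upload(features)                           # far cheaper than raw data
        time.sleep(period_s - active_s)            # sensors and radio sleep
```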
Continuous sensing raises considerable chal-
lenges in comparison to sensing applications that
require a short time window of data or a single
snapshot (e.g., a single image or short sound clip).
There is an energy tax associated with continuous-
ly sensing and potentially uploading in real time to
the cloud for further processing. Solutions that
limit the cost of continuous sensing and reduce
the communication overhead are necessary. If the
interpretation of the data can withstand delays of
an entire day, it might be acceptable if the phone
can collect and store the sensor data until the end
of the day and upload when the phone is being
charged. However, this delay-tolerant model of
sensor sampling and processing severely limits the
ability of the phone to react and be aware of its
context. Sensing applications that will be success-
ful in the real world will have to be smart enough
to adapt to situations. There is a need to study the
trade-off of continuous sensing with the goal of
minimizing the energy cost while offering suffi-
cient accuracy and real-time responsiveness to
make the application useful.
As continuous sensing becomes more com-
mon, it is likely that additional processing sup-
port will emerge. For example, the Little Rock
project [28] underway at Microsoft Research is
developing hardware support for continuous
sensing where the primary CPU frequently
sleeps, and digital signal processors (DSPs) sup-
port the duty cycle management, sensor sam-
pling, and signal processing.
PHONE CONTEXT
Mobile phones are often used on the go and in
ways that are difficult to anticipate in advance.
This complicates the use of statistical models
that may fail to generalize under unexpected
environments. The background environment or
actions of the user (e.g., the phone could be in
the pocket) will also affect the quality of the sen-
sor data that is captured. Phones may be exposed
to events for too short a period of time if the
user is traveling quickly (e.g., in a car), if the
event is localized (e.g., a sound), or if the sensor
requires more time than is available to gather a
sample (e.g., an air quality sensor). Other forms of
interfering context include a person using their
phone for a call, which interferes with the ability
of the accelerometer to infer the physical actions
of the person. We collectively describe these
issues as the context problem. Many issues remain
open in this area.
Some researchers propose to leverage co-
located mobile phones to deal with some of
these issues; for example, sharing sensors tem-
porarily if they are better able to capture the
data [12]. To counter context challenges,
researchers have proposed super-sampling [13],
where data from nearby phones are collectively
used to lower the aggregate noise in a reading.
Alternatively, an effective approach for some
systems has been sensor sampling routines with
admission control stages that discard low-quality
data before processing, saving resources and
reducing errors (e.g., SoundSense [11]).
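The statistical intuition behind super-sampling is simple: averaging the same
quantity measured by several independent phones shrinks the noise standard
deviation by roughly the square root of the number of phones. A minimal sketch
with made-up carbon monoxide readings:

```python
import statistics

def super_sample(readings):
    """Combine the same measurement taken by N nearby phones; for
    independent zero-mean noise, averaging cuts the noise standard
    deviation by roughly 1/sqrt(N)."""
    return statistics.fmean(readings)

# Five phones near the same street corner report noisy CO readings (ppm);
# their average is a steadier estimate than any single phone provides.
print(super_sample([9.1, 10.4, 9.8, 10.9, 9.6]))
```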
While machine learning techniques are being
used to interpret mobile phone data, the reliabil-
ity of these algorithms suffers under the dynamic
and unexpected conditions presented by every-
day phone use. For example, a speaker identifi-
cation algorithm may be effective in a quiet office
environment but not a noisy cafe. Such problems
can be overcome by collecting sufficient exam-
ples of the different usage scenarios (i.e., train-
ing data). However, acquiring examples is costly
and anticipating the different scenarios the
phone might encounter is almost impossible.
Some solutions to this problem straddle the
boundary of mobile systems and machine learn-
ing and include borrowing model inputs (i.e.,
features) from nearby phones, performing col-
laborative multi-phone inference with models
that evolve based on different scenarios encoun-
tered, or discovering new events that are not
encountered during application design [12].
LEARN: INTERPRETING SENSOR DATA
The raw sensor data acquired by phones,
irrespective of the scale or modality (e.g.,
accelerometer, camera), is worthless without
interpretation (e.g., human behavior recogni-
tion). A variety of data mining and statistical
tools can be used to distill information from the
data collected by mobile phones and calculate
summary statistics to present to the users, such
as the average emissions level of different loca-
tions or the total distance run by a user and their
ranking within a group of friends (e.g., Nike+).
Recently, crowd-sourcing techniques have
been applied to forms of sensor data analysis
that are typically problematic; for example, it is
notoriously difficult to maintain high accuracy
with image processing in the wild. In the
CrowdSearch [21] project, crowd sourcing and
micro-payments are adopted to incentivize peo-
ple to improve automated image search. In [21]
human-in-the-loop stages are added to the pro-
cess of image search with tasks distributed to the
user population.
We discuss the key challenges in interpreting
sensor data, focusing on a primary area of inter-
est: human behavior and context modeling.
HUMAN BEHAVIOR AND CONTEXT MODELING
Many emerging applications are people-centric,
and modeling the behavior and surrounding con-
text of the people carrying the phones is of par-
ticular interest. A natural question is how well
can mobile phones interpret human behavior
(e.g., sitting in conversation) from low-level mul-
timodal sensor data? Or, similarly, how accurate-
ly can they infer the surrounding context (e.g.,
pollution, weather, noise environment)?
Currently, supervised learning techniques are
the algorithms of choice in building mobile
inference systems. In supervised learning, as
illustrated in Fig. 4, examples of high-level
behavioral classes (e.g., cooking, driving) are
hand annotated (i.e., labeled). These examples,
referred to as training data, are then provided to
a learning algorithm, which fits a model to the
classes (i.e., behaviors) based on the sensor data.
Sensor data is usually presented to the learning
algorithm in the form of extracted features,
which are calculations on the raw data that
emphasize characteristics that more clearly dif-
ferentiate classes (e.g., the variance of the
accelerometer magnitude over a small time win-
dow could be useful for separating standing and
walking classes). Supervised learning is feasible
for small-scale sensing applications, but unlikely
to scale to handle the wide range of behaviors
and contexts exhibited by a large community of
users. Other forms of learning algorithms, such
as semi-supervised (where only some of the data
is labeled) and unsupervised (where no labels
are provided by the user) ones, reduce the need
for labeled examples, but can lead to classes that
do not correspond to the activities that are use-
ful to the application or require that the unla-
beled data only come from the already labeled
class categories (e.g., an activity that was never
encountered before can throw off a semi-super-
vised learning algorithm).
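The following sketch walks through the supervised pipeline of Fig. 4 end to
end with a deliberately tiny nearest-mean classifier; the windows, features,
and values are toys chosen for illustration, not those of any published
system:

```python
import statistics

def extract_features(accel_window):
    """Per-window features over accelerometer magnitudes, as in Fig. 4:
    variance separates standing from walking better than raw samples."""
    return [statistics.fmean(accel_window), statistics.pvariance(accel_window)]

def train_nearest_mean(labeled_windows):
    """Toy supervised learner: store the mean feature vector per label."""
    sums = {}
    for window, label in labeled_windows:
        f = extract_features(window)
        s, n = sums.get(label, ([0.0] * len(f), 0))
        sums[label] = ([a + b for a, b in zip(s, f)], n + 1)
    return {lbl: [v / n for v in s] for lbl, (s, n) in sums.items()}

def classify(model, window):
    """Assign the label whose mean feature vector is closest."""
    f = extract_features(window)
    return min(model, key=lambda lbl: sum((a - b) ** 2
                                          for a, b in zip(model[lbl], f)))

training = [([1.0, 1.01, 0.99, 1.0], "standing"),   # hand-labeled examples
            ([0.6, 1.5, 0.7, 1.4], "walking")]
model = train_nearest_mean(training)
print(classify(model, [0.7, 1.4, 0.8, 1.3]))         # -> walking
```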
Researchers show that a variety of everyday
human activities can be inferred most successful-
ly from multimodal sensor streams. For example,
[29] describes a system which is capable of recog-
nizing eight different everyday activities (e.g.,
brushing teeth, riding in an elevator) using the
Mobile Sensing Platform (MSP) [6], an impor-
tant mobile sensing device that is a predecessor
of sensing on the mobile phone. Similar results
are demonstrated using mobile phones that infer
everyday activities [2, 3, 30], albeit less accurately
and with a smaller set of activities than the MSP.
The microphone, accelerometer, and GPS
found on many smartphones on the market have
proven to be effective at inferring more complex
human behavior. Early work on mobility pattern
modeling succeeds with surprisingly simple
approaches that identify significant places in peo-
ple’s lives (e.g., work, home, coffee shop). More
recently researchers [31] have used statistical
techniques to not only infer significant places but
also connect these to activities (e.g., gym, waiting
for the bus) using just GPS traces. The micro-
phone is one of the most ubiquitous sensors and
is capable of inferring what a person is doing
(e.g., in conversation), where they are (e.g., audio
signature of a particular coffee shop) — in
essence, it can capture a great deal both about a
person and their surrounding ambient environ-
ment. In SoundSense [11] a general-purpose
sound classification system for mobile phones is
developed using a combination of supervised and
unsupervised learning. The recognition of a static
set of common sounds (e.g., music) uses super-
vised learning but is augmented with an unsuper-
vised approach that learns the novel frequently
recurring classes of sound encountered by differ-
ent users. Finally, the user is brought into the
loop to confirm and provide a textual description
(i.e., label) of the discovered sounds. As a result,
SoundSense extends the ability of the phone to
recognize new activities.
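The control flow of such a hybrid system can be sketched as follows. This is a
loose reconstruction of the idea, not the actual SoundSense algorithm; the
distance threshold, buffer size, and the label_prompt() user-interaction hook
are all assumptions:

```python
def classify_or_discover(model, features, novelty_threshold,
                         pending, label_prompt):
    """SoundSense-style flow (a sketch): use a supervised model for
    known sounds, but when a sample sits far from every known class,
    buffer it as a candidate new class, and once it recurs often
    enough, ask the user for a label and add the class to the model."""
    distances = {lbl: sum((a - b) ** 2 for a, b in zip(center, features))
                 for lbl, center in model.items()}
    label, dist = min(distances.items(), key=lambda kv: kv[1])
    if dist <= novelty_threshold:
        return label                        # supervised path: known sound
    pending.append(features)                # unsupervised path: novel sound
    if len(pending) >= 20:                  # recurs often enough to matter
        new_label = label_prompt(pending)   # bring the user into the loop
        model[new_label] = [sum(dim) / len(pending) for dim in zip(*pending)]
        pending.clear()
        return new_label
    return "unknown"
```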
SCALING MODELS
Existing statistical models are unable to cope
with everyday occurrences such as a person using
a new type of exercise machine, and struggle
when two activities overlap each other or differ-
ent individuals carry out the same activity differ-
ently (e.g., the sensor data for walking will look
very different for a 10-year-old vs. a 90-year-old
person). A key to scalability is to design tech-
niques for generalization that will be effective for
entire communities containing millions of people.
To address these concerns, current research
directions point toward models that are adaptive
and incorporate people in the process. Automati-
cally increasing the classes recognized by a model
using active learning (where the learning algo-
rithm selectively queries the user for labels) is
investigated in the context of health care [23].
Approaches have been developed in which train-
ing data sourced directly from users is grouped
based on their social network [12]. This work
demonstrates that exploiting the social network of
users improves the classification of locations such
as significant places. Community-guided learning
[30] combines data similarity and crowd-sourced
labels to improve the classification accuracy of the
learning system. In [30] hand annotated labels are
no longer treated as absolute ground truth during
the training process but are treated as soft hints
as to class boundaries in combination with the
observed data similarity. This approach learns
classes (i.e., activities) based on the actual behav-
ior of the community and adjusts transparently to
the changes in how the community performs
these activities — making it more suitable for
large-scale sensing applications. However, if the
models need to be adapted on the fly, this may
force the learning of models to happen on the
phone, potentially causing a significant increase in
computational needs [12].
Many questions remain regarding how learn-
ing will progress as the field grows. There is a
lack of shared technology that could help accel-
erate the work. For example, each research
group develops their own classifiers that are
hand coded and tuned. This is time consuming
and mostly based on small-scale experimentation
and studies. There is a need for a common
machine learning toolkit for mobile phone sens-
ing that allows researchers to build and share
models. Similarly, there is a need for large-scale
public data sets to study more advanced learning
techniques and rigorously evaluate the perfor-
mance of different algorithms. Finally, there is
also a need for a repository for sharing datasets,
code, and tools to support the researchers.
INFORM, SHARE, AND PERSUASION:
CLOSING THE SENSING LOOP
How you use inferred sensor data to inform the
user is application-specific. But a natural question
is, once you infer a class or collect together a set
of large-scale inferences, how do you close the
loop with people and provide useful information
back to users? Clearly, personal sensing applica-
tions would just inform the individual, while social
networking sensing applications may share activi-
ties or inferences with friends. We discuss these
forms of interaction with users as well as the
important area of privacy. Another topic we
touch on is using large-scale sensor data as a per-
suasive technology, in essence using big data to
help users attain goals through targeted feedback.
SHARING
To harness the potential of mobile phone sens-
ing requires effective methods of allowing peo-
ple to connect with and benefit from the data.
The standard approach to sharing is visualization
using a web portal where sensor data and infer-
ences are easily displayed. This offers a familiar
and intuitive interface. For the same reasons, a
number of phone sensing systems connect with
existing web applications to either enrich existing
applications or make the data more widely acces-
sible [12, 23]. Researchers recognize the strength
of leveraging social media outlets such as Face-
book, Twitter, and Flickr as ways to not only dis-
seminate information but build community
awareness (e.g., citizen science [20]). A popular
application domain is fitness, such as Nike+.
Such systems combine individual statistics and
visualizations of sensed data and promote com-
petition between users. The result is the forma-
tion of communities around a sensing
application. Even though, as in the case of
Nike+, the sensor information is rather simple
(i.e., just the time and distance of a run), people
still become very engaged. Other applications
have emerged that are considerably more sophis-
ticated in the type of inference made, but have
had limited uptake. It is still too early to predict
which sensing applications will become the most
compelling for user communities. But social net-
working provides many attractive ways to share
information.
PERSONALIZED SENSING
Mobile phones are not limited to simply collect-
ing sensor data. For example, both the Google
and Microsoft search clients that run on the
iPhone allow users to search using voice recogni-
tion. Eye tracking and gesture recognition are
also emerging as natural interfaces to the phone.
Sensors are used to monitor the daily activi-
ties of a person and profile their preferences and
behavior, making personalized recommendations
for services, products, or points of interest possi-
ble [32]. The behavior of an individual along
with an understanding of how behavior and pref-
erences relate to other segments of the popula-
tion with similar behavioral profiles can radically
change not only online experiences but real
world ones too. Imagine walking into a pharma-
cy and your phone suggesting vitamins and sup-
plements with the effectiveness of a doctor. At a
clothing store your phone could identify which
items are manufactured without sweatshop labor.
The behavior of the person, as captured by sen-
sors embedded in their phone, becomes an inter-
face that can be fed to many services (e.g.,
targeted advertising). Sensor technology person-
alized to a user’s profile empowers her to make
more informed decisions across a spectrum of
services.
PERSUASION
Sensor data gathered from communities (e.g.,
fitness, healthcare) can be used not only to
inform users but to persuade them to make posi-
tive behavioral changes (e.g., nudge users to
exercise more or smoke less). Systems that pro-
vide tailored feedback with the goal of changing
users’ behavior are referred to as persuasive
technology [33]. Mobile sensing applications
open the door to building novel persuasive sys-
tems that are still largely unexplored.
For many application domains, such as
healthcare or environmental awareness, users
commonly have desired objectives (e.g., to lose
weight or lower carbon emissions). Simply pro-
viding a user with her own information is often
not enough to motivate a change of behavior or
habit. Mobile phones are an ideal platform capa-
ble of using low-level individual-scale sensor
data and aggregated community-scale informa-
tion to drive long-term change (e.g., contrasting
the carbon footprint of a user with her friends
can persuade the user to reduce her own foot-
print). The UbiFit Garden [1] project is an early
example of integrating persuasion and sensing
on the phone. UbiFit uses an ambient back-
ground display on the phone to offer the user
continuous updates on her behavior in response
to desired goals. The display uses the metaphor
of a garden with different flowers blooming in
response to physical exercise of the user during
the day. It does not use comparison data but
simply targets the individual user. A natural
extension of UbiFit is to present community
data. Ongoing research is exploring methods of
identifying and using people in a community of
users as influencers for different individuals in
the user population. A variety of techniques are
used in existing persuasive system research, such
as the use of games, competitions among groups
of people, sharing information within a social
network, or goal setting accompanied by feed-
back. Understanding which types of metaphors
and feedback are most effective for various per-
suasion goals is still an open research problem.
Building mobile phone sensing systems that inte-
grate persuasion requires interdisciplinary
research that combines behavioral and social
psychology theories with computer science.
The use of large volumes of sensor data pro-
vided by mobile phones presents an exciting
opportunity and is likely to enable new applica-
tions that have promise in enacting positive
social changes in health and the environment
over the next several years. The combination of
large-scale sensor data with accurate
models of persuasion could revolutionize how
we deal with persistent problems in our lives
such as chronic disease management, depression,
obesity, or even voter participation.
PRIVACY
Respecting the privacy of the user is perhaps the
most fundamental responsibility of a phone sens-
ing system. People are understandably sensitive
about how sensor data is captured and used,
especially if the data reveals a user’s location,
speech, or potentially sensitive images. Although
there are existing approaches that can help with
these problems (e.g., cryptography, privacy-pre-
serving data mining), they are often insufficient
[34]. For instance, how can the user temporarily
pause the collection of sensor data without caus-
ing a suspicious gap in the data stream that
would be noticeable to anyone (e.g., family or
friends) with whom they regularly share data?
In personal sensing applications processing
data locally may provide privacy advantages com-
pared to using remote, more powerful servers.
SoundSense [11] adopts this strategy: all the audio
data is processed on the phone, and raw audio is
never stored. Similarly, the UbiFit Garden [1]
application processes all data locally on the device.
Privacy for group sensing applications is based
on user group membership. For instance,
although social networking applications like
Loopt and CenceMe [2] share sensitive informa-
tion (e.g., location and activity), they do so within
groups in which users have an existing trust rela-
tionship based on friendship or a shared common
interest such as reducing their carbon footprint.
Community sensing applications that can col-
lect and combine data from millions of people
run the risk of unintended leakage of personal
information. The risks from location-based
attacks are fairly well understood given years of
previous research. However, our understanding
of the dangers of other modalities (e.g., activity
inferences, social network data) is less devel-
oped. There are growing examples of reconstruc-
tion type attacks where data that may look safe
and innocuous to an individual user may allow
invasive information to be reverse-engineered.
For example, the UIUC Poolview project shows
that even careful sharing of personal weight data
within a community can expose information on
whether a user’s weight is trending upward or
downward [35]. The PEIR project evaluates dif-
ferent countermeasures to this type of scenario,
such as adding noise to the data or replacing
chunks of the data with synthetic but realistic
samples that have limited impact on the quality
of the aggregate analysis [3].
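As a simplified illustration of these two countermeasures (not PEIR's actual mechanisms), the sketch below perturbs a numeric sensor trace with Gaussian noise and replaces a sensitive interval with synthetic samples matched to the trace's overall statistics; the noise level is an arbitrary assumption.

import numpy as np

rng = np.random.default_rng(seed=0)

def add_noise(trace: np.ndarray, sigma: float = 0.1) -> np.ndarray:
    """Perturb each sample; averages over many users stay close to the truth."""
    return trace + rng.normal(0.0, sigma, size=trace.shape)

def mask_chunk(trace: np.ndarray, start: int, end: int) -> np.ndarray:
    """Swap a sensitive interval for realistic-looking synthetic samples."""
    out = trace.copy()
    out[start:end] = rng.normal(trace.mean(), trace.std(), size=end - start)
    return out

Both functions aim to preserve the coarse shape of the data that aggregate analyses need while blurring exactly the fine-grained detail that reconstruction attacks exploit.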
Privacy and anonymity will remain a significant problem in mobile-phone-based sensing for the foreseeable future. In particular, the second-hand smoke problem of mobile sensing, in which bystanders are sensed without their consent, creates new privacy challenges, such as:
• How can the privacy of third parties be effectively protected when other people wearing sensors are nearby?
• How can mismatched privacy policies be managed when two different people are close enough to each other for their sensors to collect information from the other party? (One conservative resolution is sketched at the end of this section.)
Furthermore, this type of sensing raises even larger societal questions, such as who is responsible when sensor data collected by these mobile devices causes financial harm. As sensing becomes more commonplace, stronger techniques for protecting people's rights will be necessary.
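For the mismatched-policy challenge flagged above, one conservative resolution is to fall back, per sensing modality, to the more restrictive of the two users' settings. The Python sketch below shows this idea; the policy vocabulary is invented for illustration and does not come from any deployed system.

# Settings ordered from least to most restrictive.
RESTRICTIVENESS = ["share_raw", "share_features", "share_nothing"]

def reconcile(policy_a: dict, policy_b: dict) -> dict:
    """Per modality, keep whichever of the two settings is more restrictive."""
    merged = {}
    for modality in set(policy_a) | set(policy_b):
        a = policy_a.get(modality, "share_nothing")
        b = policy_b.get(modality, "share_nothing")
        merged[modality] = max(a, b, key=RESTRICTIVENESS.index)
    return merged

alice = {"audio": "share_features", "location": "share_raw"}
bob = {"audio": "share_nothing", "location": "share_features"}
print(reconcile(alice, bob))
# {'audio': 'share_nothing', 'location': 'share_features'}

A most-restrictive merge protects both parties by default, at the cost of discarding data one party would happily have shared; richer negotiation schemes remain an open problem.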
CONCLUSION
This article discusses the current state of the art
and open challenges in the emerging field of
mobile phone sensing. The primary obstacle to
this new field is not a lack of infrastructure; mil-
lions of people already carry phones with rich
sensing capabilities. Rather, the technical barri-
ers are related to performing privacy-sensitive
and resource-sensitive reasoning with noisy data
and noisy labels, and providing useful and effec-
tive feedback to users. Once these technical bar-
riers are overcome, this nascent field will
advance quickly, acting as a disruptive technolo-
gy across many domains including social net-
working, health, and energy. Mobile phone
sensing systems will ultimately provide both
micro- and macroscopic views of cities, commu-
nities, and individuals, and help improve how
society functions as a whole.
REFERENCES
[1] S. Consolvo et al., "Activity Sensing in the Wild: A Field Trial of UbiFit Garden," Proc. 26th Annual ACM SIGCHI Conf. Human Factors Comp. Sys., 2008, pp. 1797–1806.
[2] E. Miluzzo et al., "Sensing Meets Mobile Social Networks: The Design, Implementation, and Evaluation of the CenceMe Application," Proc. 6th ACM SenSys, 2008, pp. 337–50.
[3] M. Mun et al., "PEIR, the Personal Environmental Impact Report, as a Platform for Participatory Sensing Systems Research," Proc. 7th ACM MobiSys, 2009, pp. 55–68.
[4] A. Thiagarajan et al., "VTrack: Accurate, Energy-Aware Traffic Delay Estimation Using Mobile Phones," Proc. 7th ACM SenSys, Berkeley, CA, Nov. 2009.
[5] UC Berkeley/Nokia/NAVTEQ, "Mobile Millennium"; http://traffic.berkeley.edu/
[6] T. Choudhury et al., "The Mobile Sensing Platform: An Embedded System for Activity Recognition," IEEE Pervasive Comp., vol. 7, no. 2, 2008, pp. 32–41.
[7] T. Starner, Wearable Computing and Contextual Awareness, Ph.D. thesis, MIT Media Lab, Apr. 30, 1999.
[8] Nokia, "Workshop on Large-Scale Sensor Networks and Applications," Kuusamo, Finland, Feb. 3–6, 2005.
[9] A. Schmidt et al., "Advanced Interaction in Context," Proc. 1st Int'l. Symp. Handheld and Ubiquitous Comp., 1999, pp. 89–101.
[10] N. Eagle and A. Pentland, "Reality Mining: Sensing Complex Social Systems," Personal Ubiquitous Comp., vol. 10, no. 4, 2006, pp. 255–68.
[11] H. Lu et al., "SoundSense: Scalable Sound Sensing for People-Centric Applications on Mobile Phones," Proc. 7th ACM MobiSys, 2009, pp. 165–78.
[12] Dartmouth College, "Mobile Sensing Group"; http://sensorlab.cs.dartmouth.edu/
[13] R. Honicky et al., "N-Smarts: Networked Suite of Mobile Atmospheric Real-Time Sensors," Proc. 2nd ACM SIGCOMM NSDR, 2008, pp. 25–30.
[14] M.-Z. Poh et al., "Heartphones: Sensor Earphones and Mobile Application for Non-Obtrusive Health Monitoring," Proc. IEEE Int'l. Symp. Wearable Comp., 2009, pp. 153–54.
[15] J. Burke et al., "Participatory Sensing," Proc. ACM SenSys Wksp. World-Sensor-Web, 2006.
[16] A. Krause et al., "Toward Community Sensing," Proc. 7th ACM/IEEE IPSN, 2008, pp. 481–92.
[17] A. T. Campbell et al., "People-Centric Urban Sensing," Proc. 2nd ACM WICON, 2006, p. 18.
[18] T. Abdelzaher et al., "Mobiscopes for Human Spaces," IEEE Pervasive Comp., vol. 6, no. 2, 2007, pp. 20–29.
[19] M. Azizyan, I. Constandache, and R. Roy Choudhury, "SurroundSense: Mobile Phone Localization via Ambience Fingerprinting," Proc. 15th ACM MobiCom, 2009, pp. 261–72.
[20] Intel/UC Berkeley, "Urban Atmospheres"; http://www.urban-atmospheres.net/
[21] T. Yan, V. Kumar, and D. Ganesan, "CrowdSearch: Exploiting Crowds for Accurate Real-Time Image Search on Mobile Phones," Proc. 8th ACM MobiSys, 2010.
[22] Nokia, "SensorPlanet"; http://www.sensorplanet.org/
[23] CENS/UCLA, "Participatory Sensing / Urban Sensing Projects"; http://research.cens.ucla.edu/
[24] R. Rana et al., "Ear-Phone: An End-to-End Participatory Urban Noise Mapping," Proc. 9th ACM/IEEE IPSN, 2010.
[25] T. Das et al., "PRISM: Platform for Remote Sensing Using Smartphones," Proc. 8th ACM MobiSys, 2010.
[26] E. Cuervo et al., "MAUI: Making Smartphones Last Longer with Code Offload," Proc. 8th ACM MobiSys, 2010.
[27] Y. Wang et al., "A Framework of Energy Efficient Mobile Sensing for Automatic User State Recognition," Proc. 7th ACM MobiSys, 2009, pp. 179–92.
[28] B. Priyantha, D. Lymberopoulos, and J. Liu, "LittleRock: Enabling Energy Efficient Continuous Sensing on Mobile Phones," Microsoft Research tech. rep. MSR-TR-2010-14, 2010.
[29] J. Lester, T. Choudhury, and G. Borriello, "A Practical Approach to Recognizing Physical Activities," Proc. Pervasive Comp., 2006, pp. 1–16.
[30] D. Peebles et al., "Community-Guided Learning: Exploiting Mobile Sensor Users to Model Human Behavior," Proc. 24th National Conf. Artificial Intelligence, 2010.
[31] L. Liao, D. Fox, and H. Kautz, "Extracting Places and Activities from GPS Traces Using Hierarchical Conditional Random Fields," Int'l. J. Robotics Research, vol. 26, no. 1, 2007, pp. 119–34.
[32] J. Liu, "Subjective Sensing: Intentional Awareness for Personalized Services," NSF Wksp. Future Directions Net. Sensing Sys., Nov. 2009.
[33] B. J. Fogg, Persuasive Technology: Using Computers to Change What We Think and Do, Morgan Kaufmann, Dec. 2002.
[34] A. Kapadia, D. Kotz, and N. Triandopoulos, "Opportunistic Sensing: Security Challenges for the New Paradigm," Proc. 1st COMSNETS, Bangalore, India, 2009.
[35] R. K. Ganti et al., "PoolView: Stream Privacy for Grassroots Participatory Sensing," Proc. 6th ACM SenSys, 2008, pp. 281–94.
[36] A. T. Campbell et al., "NeuroPhone: Brain-Mobile Phone Interface Using a Wireless EEG Headset," Proc. 2nd ACM SIGCOMM Wksp. Networking, Sys., and Apps. on Mobile Handhelds, New Delhi, India, Aug. 30, 2010.
BIOGRAPHIES
NICHOLAS D. LANE ([email protected]) is a Ph.D. candidate at Dartmouth College, and a member of the Mobile Sensing Group and the MetroSense project. His research interests revolve around mobile sensing systems that incorporate scalable and robust sensor-based computational models of human behavior and context. He has an M.Eng. in computer science from Cornell University.

EMILIANO MILUZZO ([email protected]) is a Ph.D. candidate in the computer science department at Dartmouth College and a member of the Mobile Sensing Group at Dartmouth. His research focuses on mobile phone sensing, applying machine learning and mobile systems design to new sensing applications and systems at large scale. These applications and systems span the areas of social networks, green applications, global environment monitoring, personal and community healthcare, sensor-augmented gaming, virtual reality, and smart transportation systems. He has an M.Sc. in electrical engineering from the University of Rome La Sapienza.

HONG LU ([email protected]) is a Ph.D. candidate in the computer science department at Dartmouth College, and a member of the Mobile Sensing Group and the MetroSense Project. His research interests include ubiquitous computing, mobile sensing systems, and human behavior modeling. He has an M.S. in computer science from Tianjin University, China.

DANIEL PEEBLES ([email protected]) is a Ph.D. student at Dartmouth College. His research interests are in developing machine learning methods for analyzing and interpreting people's contexts, activities, and social networks from mobile sensor data. He has a B.S. from Dartmouth College.

TANZEEM CHOUDHURY ([email protected]) is an assistant professor in the computer science department at Dartmouth College. She joined Dartmouth in 2008 after four years at Intel Research Seattle. She received her Ph.D. from the Media Laboratory at MIT. She develops systems that can reason about human activities, interactions, and social networks in everyday environments. Her doctoral thesis demonstrated for the first time the feasibility of using wearable sensors to capture and model social networks automatically, on the basis of face-to-face conversations. MIT Technology Review recognized her as one of the world's top 35 innovators under the age of 35 (2008 TR35) for her work in this area. She has also been selected as a TED Fellow and is a recipient of the NSF CAREER Award. More information can be found at http://www.cs.dartmouth.edu/~tanzeem.

ANDREW T. CAMPBELL ([email protected]) is a professor of computer science at Dartmouth College, where he leads the Mobile Sensing Group and the MetroSense Project. His research interests include mobile phone sensing systems. He has a Ph.D. in computer science from Lancaster University, England. He received the U.S. National Science Foundation CAREER Award for his research in programmable mobile networking.