Technical field
[0001] The present invention relates to mobile communication equipments, and more precisely
to the organization of resources into displayed virtual screens associated to workspaces
of such mobile communication equipments.
[0002] In the following description, the word "resource" designates any graphical element
representing at least one service that can run into or be controlled by a mobile communication
equipment with a touchscreen and processing means capable of executing sets of instructions.
For instance, a service can be an application or a contact shortcut.
Background of the invention
[0003] The new generations of mobile communication equipments allow a great number of services
to run on them or to be accessed through them via a mobile network. So, users need a
way to easily access the resources representing these services when they need them. This
requires an organization of the resources into the different virtual screens associated
to the different workspaces of the mobile communication equipments. Currently, this graphical
layout organization must be fully performed by the users and is frequently not optimal.
[0004] Indeed, when a user wants to access a specific resource on his mobile communication
equipment, he has to perform several actions, and notably to recall what identifies
the desired resource (name, icon), to which virtual screen it is attached, where it
is located in this virtual screen, to identify it on the touchscreen, and to access
it for example by touching it.
[0005] Moreover, access to specific resources can be made difficult by an inadequate shortcut
organization done by the user. For instance, this may occur when a frequently needed shortcut
is not displayed on the main virtual screen of the main workspace, or when a shortcut
is placed at a location that is not easy to reach when the user is forced to use the
hand that is holding his mobile communication equipment.
[0006] In addition, graphical elements presenting similarities in their representation or aspect
(name or icon, for instance) or in their function (resource type or service provided, for instance)
can also mislead the user in the recall and identification phases.
[0007] Furthermore, a user might face situational constraints preventing him from efficiently
identifying or accessing a graphical element. For instance, this may occur when the
outside luminosity reduces the screen readability or when the user is interacting
with his mobile communication equipment with the hand that is holding it.
[0008] Therefore it often takes a long time to find the most appropriate organization of
graphical elements to access them easily, and such an organization does not take into
account the current context or constraints of the user.
[0009] To simplify resource organization, it has been proposed to group resources into folders.
This allows reducing the number of top-level resources on the touchscreen, but it
still does not take into account the current context or constraints of the user. For
instance, the touchscreen of the user's mobile communication equipment will have the
same layout whether the user is holding it in his right or left hand, or it will offer
the same rendering whether the user is viewing it in a dark room or in broad daylight,
or else it will display resources with the same look whether the user is likely to
access a specific resource at a time or unlikely to access the same resource at another
time.
[0010] It has also been proposed to directly access resources through vocal commands. But
since voice/word recognition requires certain ambient audio conditions, resource access
based on vocal commands does not work when the user is facing situational constraints
under which voice cannot be used.
Summary of the invention
[0011] So the invention notably aims at facilitating access to key resources (or graphical
elements) by automatically adapting parameters involved in the representation and
access of resources to the user's situational constraints and context.
[0012] To this effect the invention notably provides a method intended for automatically
organizing display of graphical elements, each representing at least one service,
on a touchscreen of a mobile communication equipment of a user comprising at least
one workspace associated to at least one virtual screen to which these graphical elements
can be attached. This method comprises:
- a step (i) during which one determines a current user context from first information,
current situational constraint(s) of the mobile communication equipment from second
information, and, for at least chosen ones of the graphical elements, probabilities
to be accessed by the user in this current context, and
- a step (ii) during which one attaches these chosen graphical elements to areas of
a displayed virtual screen depending on their respective access probabilities, the
current situational constraints and the current user context, to ease access to these
chosen graphical elements by the user.
[0013] The method according to the invention may include additional characteristics considered
separately or combined, and notably:
- in step (ii) one may determine a display size and/or a display aspect for each of
the chosen graphical elements to ease their respective identifications by the user;
- in step (ii) a chosen graphical element associated to a highest access probability
interval may be attached to a first area that is the easiest to touch with the thumb
of a user hand that holds the mobile communication equipment, a chosen graphical element
associated to an access probability interval smaller than the highest one may be attached
to a second area that can be easily touched by the thumb of a user hand that holds
the mobile communication equipment, and a chosen graphical element associated to an
access probability interval still smaller than the highest one may be attached to
a third area that can be relatively easily touched by the thumb of a user hand that
holds the mobile communication equipment;
- in step (i) the user context is a context in which the user is currently immersed
and which may be chosen from a group comprising at least an activity, a location,
a current time, and people in the vicinity of the user;
- in step (i) each first information may be chosen from a group comprising at least
a user habit, a user surrounding environment, and a user social environment;
➢ the user surrounding environment may be chosen from a group comprising at least
a current local weather, a current local temperature, a current light intensity, and
point(s) of interest in the vicinity of the user;
➢ in step (i) the social environment may be chosen from a group comprising at least
positions of user's friends, positions of user's working colleagues, events of interest
for the user, and addresses of user's relatives;
- in step (i) each second information may be chosen from a group comprising at least
detected user finger gestures and environment data;
➢ each environment data may be chosen from a group comprising at least an ability
to touch a graphical element, an ability to click on a graphical element, an ability
to slide a graphical element or a virtual screen, an ability to be heard by the mobile
communication equipment, an ability to listen to the mobile communication equipment,
an ability to recognize a picture, and an ability to recognize a color.
[0014] The invention also provides a computer program product comprising a set of instructions
arranged, when it is executed by processing means, for performing a method such as
the one above introduced to allow organization of the display of graphical elements,
each representing at least one service, on a touchscreen of a mobile communication
equipment comprising at least one workspace associated to at least one virtual screen.
[0015] The invention also provides a device intended for equipping a mobile communication
equipment comprising a touchscreen and at least one workspace associated to at least
one virtual screen to which graphical elements can be attached, each representing
at least one service. This device comprises:
- a first processing means arranged for determining a current user context from first
information,
- a second processing means arranged for determining current situational constraint(s)
of the mobile communication equipment from second information,
- a third processing means arranged for determining, for at least chosen ones of the
graphical elements, probabilities to be accessed by the user in the current context,
and
- a fourth processing means arranged for attaching the chosen graphical elements to
areas of a displayed virtual screen depending on their respective access probabilities,
the current situational constraints and the current user context, to ease access to
the chosen graphical elements by the user.
[0016] The invention also provides a mobile communication equipment comprising a touchscreen,
at least one workspace associated to at least one virtual screen to which graphical
elements can be attached, each representing at least one service, and a device such
as the one above introduced.
Brief description of the figures
[0017] Other features and advantages of the invention will become apparent on examining
the detailed specifications hereafter and the appended drawings, wherein:
- figure 1 schematically and functionally illustrates a mobile communication equipment
comprising a device according to the invention,
- figure 2 schematically illustrates the mobile communication equipment of figure 1
with three attachment areas, adapted to a right-handed user and to a first orientation,
materialized on its touchscreen,
- figure 3 schematically illustrates the mobile communication equipment of figure 1
with a first orientation and with several graphical elements attached to three attachment
areas adapted to a right-handed user and to this first orientation, and
- figure 4 schematically illustrates the mobile communication equipment of figure 1
with a second orientation and with several graphical elements attached to first and
second groups of attachment areas adapted respectively to right and left hands of
a user and to this second orientation.
Detailed description of the preferred embodiment
[0018] The appended drawings may serve not only to complete the invention, but also to contribute
to its understanding, if need be.
[0019] The invention aims, notably, at offering a method, and an associated device D, intended
for automatically organizing display of graphical elements (or resources) GEi, each
representing at least one service, on a touchscreen TS of a mobile communication equipment
EE comprising at least one workspace WS associated to at least one virtual screen
VS.
[0020] In the following description it will be considered, as an example, that the mobile
communication equipment EE is a smartphone. But the invention is not limited to this
type of mobile communication equipment. Indeed, it concerns any type of mobile communication
equipment comprising a touchscreen TS and at least one workspace WS. So, it could
also be a personal digital assistant, an electronic tablet, or a game console, or
else a connected (or smart) television.
[0021] Moreover, in the following description it will be considered, as an example, that
the graphical elements (or resources) GEi are icons. But the invention is not limited
to this type of graphical element (or resource). Indeed, it concerns any type of graphical
element (or resource) representing at least one service that can run into or be controlled
by a mobile communication equipment, and notably applications, telephonic contacts,
GPS destinations, pictures, videos, actions (such as switching off the electronic
equipment, for instance), files or shortcuts to external resources. So, it could
also be a widget, for instance.
[0022] As illustrated in figure 1, a mobile communication equipment EE according to the
invention comprises at least a touchscreen TS, associated with sensors SR, a display
module DM, a resource manager RS, a network module NM, at least one workspace WS,
an environment module EM, a local gesture detection module GDM, a layout and rendering
decision module LRM, and a device D.
[0023] The display module DM is responsible for the display on the touchscreen TS, for the
generation of audio output, and for any other feedback (such as vibrations).
[0024] The sensors SR are responsible for the capture of events concerning the mobile communication
equipment EE, notably events relative to user interactions, for instance with the touchscreen
TS. They notably provide the acceleration, orientation (pitch, roll, azimuth) and position
(latitude, longitude, height) of the mobile communication equipment EE, as well as audio
and video captures.
[0025] The resource manager RS is arranged for providing an abstraction layer for the different
classes of graphical elements (or resources) GEi that represent service(s) running
into or controllable by the mobile communication equipment EE and that may be attached,
directly or indirectly, to virtual screen(s) VS associated to workspace(s) WS of this
mobile communication equipment EE.
[0026] Here, the word "attached" means anchored or bound to a graphical area of the virtual
display space.
[0027] The network module NM is arranged for providing network connectivity to the mobile
communication equipment EE.
[0028] Each workspace WS is responsible for allowing the user to organize the access to
at least key graphical elements GEi, and handles events from the user (sensor) and
from the layout and rendering decision module LRM to control the display of graphical
elements GEi to the user.
[0029] The local gesture detection module GDM is responsible for detecting fragments of
gestures based on information relative to user interaction events detected by the
sensors SR. For instance, it may recognize user gesture fragments and the user hand
that holds the mobile communication equipment EE.
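The recognition of the holding hand mentioned above can be illustrated by a toy heuristic; the function below and its fragment format are illustrative assumptions, not the patent's actual detector:

```python
def detect_holding_hand(gesture_fragment, screen_width):
    """Toy heuristic: infer which hand holds the device from where a
    thumb gesture fragment occurs. A fragment hugging the right edge
    suggests a right-hand grip, and conversely for the left edge.

    gesture_fragment: list of (x, y) touch points reported by the sensors.
    screen_width: touchscreen width in pixels.
    """
    if not gesture_fragment:
        return None  # no fragment detected yet
    # Average horizontal position of the fragment's touch points.
    mean_x = sum(x for x, _ in gesture_fragment) / len(gesture_fragment)
    return "right" if mean_x > screen_width / 2 else "left"
```

A real detector would of course also use acceleration and orientation data from the sensors SR; this sketch only conveys the idea of classifying gesture fragments.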
[0030] The environment module EM is arranged for providing access to a knowledge base about
the surrounding environment and social environment of the user. Such a knowledge base
can be, for instance, a third party database exposed as a service accessible through
the mobile network.
[0031] The layout and rendering decision module LRM is in charge of the graphical layout
and rendering of graphical elements GEi on the touchscreen TS. It takes into account
the status of a user gesture and applies an appropriate feedback (haptic, sound, animation,
for instance) on the operated graphical elements GEi.
[0032] As mentioned above, the invention proposes notably a method intended for automatically
organizing display of graphical elements (or resources) GEi on the touchscreen TS
of the mobile communication equipment EE.
[0033] This method comprises two steps (i) and (ii).
[0034] The method steps may be implemented by first PM1, second PM2, third PM3 and fourth
PM4 processing means (or modules) of the device D.
[0035] For instance and as illustrated in the non-limiting example of figure 1:
- the first processing means (or module) PM1 may be coupled to the environment module
EM, the third processing means (or module) PM3 and the layout and rendering decision
module LRM,
- the second processing means (or module) PM2 may be coupled to the local gesture detection
module GDM, the environment module EM and the layout and rendering decision module
LRM,
- the third processing means (or module) PM3 may be coupled to the workspace(s) WS,
the first processing means (or module) PM1 and the layout and rendering decision module
LRM, and
- the fourth processing means (or module) PM4 may be located into the layout and rendering
decision module LRM, which is notably coupled to the workspace(s) WS.
[0036] So, this device D may be made of software modules (and in this case it constitutes
a computer program product comprising a set of instructions arranged for performing
the method when it is executed by processing means of the mobile communication equipment
EE). But this is not mandatory. Indeed, it may be made of a combination of electronic
circuit(s) (or hardware module(s)) and software modules.
[0037] During step (i) of the method one determines a current user context from first information,
current situational constraint(s) of the mobile communication equipment EE from second
information, and, for at least chosen ones of the graphical elements GEi (stored into
the mobile communication equipment EE), probabilities to be accessed by the user in
this determined current context.
[0038] This step (i) is performed by the first PM1, second PM2 and third PM3 processing
means (or modules) of the device D. More precisely, the first processing means PM1
is arranged for determining the current user context from first information, the second
processing means PM2 is arranged for determining current situational constraint(s)
of the mobile communication equipment EE from second information, and the third processing
means PM3 is arranged for determining, for at least chosen ones of the graphical elements
GEi, probabilities to be accessed by the user in the determined current context.
[0039] This third processing means PM3 may also be arranged for capturing local user resource
access events, and for sharing resource access events between the user's mobile communication
equipment EE and distant communication devices. So, it determines the resources accessed
by the user over time. To determine the probability of access to a resource, the third
processing means PM3 may take into account not only the current user context and the
resources accessed by the user in the past, but also surrounding environment data and social
environment data.
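One simple way to realize such a probability determination is a context-conditioned frequency estimate over past access events; the event log format and the helper below are illustrative assumptions, not the patent's implementation:

```python
from collections import Counter

def access_probabilities(access_log, current_context):
    """Estimate, per resource, the probability of it being accessed in
    the current context, from past access events.

    access_log: list of (resource_id, context) tuples recorded over time.
    current_context: a context label, e.g. "lunch".
    """
    # Keep only past accesses that occurred in the current context.
    in_context = [res for (res, ctx) in access_log if ctx == current_context]
    counts = Counter(in_context)
    total = sum(counts.values())
    if total == 0:
        return {}
    # Normalize the per-resource counts into probabilities.
    return {res: n / total for res, n in counts.items()}
```

Surrounding and social environment data could then be folded in as additional weighting factors on these base frequencies.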
[0040] For instance, the current user context is a context in which the user is currently
immersed, i.e. a situation involving the user within a time period and/or at a location
and/or in a set of activities and/or with a set of people. So, a context may be defined
by an activity (work, sport, shopping, spectacle, show, meeting (registered in a user's
agenda)) and/or a location (office, home, theater, concert hall, cinema) and/or a
current time (dawn time, dusk time, breakfast time, lunch time, dinner time), and/or
people in the vicinity of the user (working colleagues, friends, family), for instance.
[0041] It is important to note that a first information may be a user habit, a user surrounding
environment, or a user social environment, for instance.
[0042] A user surrounding environment may be a current local weather, a current local temperature,
a current light intensity, or point(s) of interest in the vicinity of the user (restaurant,
bars, malls, libraries, theaters, cinemas, concert halls), for instance.
[0043] A social environment may be a position of a friend located in the vicinity of the
user, a position of a working colleague located in the vicinity of the user, an event
of interest for the user, resources accessed by people located in the vicinity of
the user at the same time as the user, or the address of a user's relative, for
instance.
[0044] A situational constraint can be described as characteristics of the environment limiting
the interaction of the user with his mobile communication equipment EE. This typically
involves four senses through which users interact with their mobile communication
equipments EE: touch, speech, hearing, and vision.
[0045] It is also important to note that a second information may be a detected user finger
gesture (fragment) or environment data.
[0046] An environment data is an interaction constraint relative to limitations on the user's
interaction capabilities (touch, speech, hearing, vision). So, it may be an ability
to touch a graphical element GEi, an ability to click on a graphical element GEi,
an ability to slide a graphical element GEi or a virtual screen VS, an ability to
be heard by the mobile communication equipment EE, an ability to listen to the mobile
communication equipment EE, an ability to recognize a picture, or an ability to recognize
a color.
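The interaction constraints enumerated above can be gathered into a simple record passed from step (i) to step (ii); the field names below are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SituationalConstraints:
    """Interaction abilities derived from the second information;
    False marks a currently limited interaction channel."""
    can_touch: bool = True               # ability to touch/click a graphical element
    can_slide: bool = True               # ability to slide an element or a virtual screen
    can_be_heard: bool = True            # ability to be heard by the equipment
    can_listen: bool = True              # ability to listen to the equipment
    can_recognize_picture: bool = True   # visual ability: pictures
    can_recognize_color: bool = True     # visual ability: colors
    holding_hand: Optional[str] = None   # "left", "right", or None if unknown
```

For instance, strong outside luminosity might clear `can_recognize_color`, steering the layout toward high-contrast rendering.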
[0047] During step (ii) of the method one attaches the chosen graphical elements GEi to
areas of a displayed virtual screen VS depending on their respective access probabilities,
the current situational constraints and the current user context, determined in step
(i). So, this step (ii) is intended for easing access to the chosen graphical elements
GEi by the user.
[0048] This step (ii) is performed by the fourth processing means (or module) PM4 of the
device D.
[0049] Preferably, during this step (ii) one may determine a display size and/or a display
aspect (or appearance) for each of the chosen graphical elements GEi to ease their
respective identifications by the user. The parameters of the aspect (or appearance)
may be opacity and/or background image and/or position on the touchscreen TS and/or
contrast and/or touch perception (texture, thickness, temperature, shape).
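As one possible realization of the size adaptation, an element's on-screen size could grow with its access probability; the pixel bounds below are arbitrary illustrative values, not values taken from the patent:

```python
def display_size(probability, min_px=48, max_px=96):
    """Interpolate an icon's side length (in pixels) between a minimum
    touch-target size and an emphasized size, according to the element's
    access probability in [0, 1]."""
    p = min(max(probability, 0.0), 1.0)  # clamp to the valid range
    return round(min_px + p * (max_px - min_px))
```

The other aspect parameters (opacity, contrast, background image) could be driven by the same probability in an analogous way.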
[0050] Moreover, and as illustrated in the non-limiting example of figure 2, several areas
Zk may be defined in the displayed virtual screen VS and dedicated respectively to
several groups of graphical elements GEi associated to different intervals of probabilities
to be accessed by the user.
[0051] The respective locations of these different attachment areas Zk preferably depend
on the orientation (portrait type or landscape type) of the mobile communication equipment
EE. This is notably the case in the example illustrated in figure 2, where the area
locations are well adapted to the case where the mobile communication equipment EE
is held by a right-handed user.
[0052] For instance, a chosen graphical element GE1 associated to an access probability
belonging to a highest access probability interval may be attached to a first area
Z1 (k = 1) that is the easiest to touch with a thumb of a user hand that holds the
mobile communication equipment EE, another chosen graphical element GE2 associated
to an access probability belonging to an access probability interval smaller than
the highest one may be attached to a second area Z2 (k = 2) that can be easily touched
by a thumb of a user hand that holds the mobile communication equipment EE, and still
another chosen graphical element GE3 associated to an access probability belonging
to another access probability interval still smaller than the highest one is attached
to a third area Z3 (k = 3) that can be relatively easily touched by a thumb of a user
hand that holds the mobile communication equipment EE.
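This interval-to-area mapping can be sketched as a simple threshold rule; the interval bounds below are illustrative assumptions, since the patent does not fix numeric values:

```python
def assign_area(probability, thresholds=(0.5, 0.2)):
    """Map an access probability to an attachment area Zk:
    Z1 for the highest probability interval (easiest to reach with the
    holding thumb), Z2 for the next interval (easily reached), and
    Z3 otherwise (relatively easily reached)."""
    high, mid = thresholds
    if probability >= high:
        return "Z1"
    if probability >= mid:
        return "Z2"
    return "Z3"
```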
[0053] A non-limiting example of display of chosen graphical elements GEi in three attachment
areas Zk (k = 1 to 3) is illustrated in figure 3. This example corresponds to a case
where the mobile communication equipment EE has a first orientation (portrait type)
and is held by a right-handed user. In this first example, a single graphical element
GE1 most likely to be accessed by the user has been attached to the first attachment
area Z1 of a displayed virtual screen VS, two different graphical elements GE2 quite
likely to be accessed by the user have been attached to the second attachment area
Z2 of the displayed virtual screen VS, and five different graphical elements GE3 less
likely to be accessed by the user have been attached to the third attachment area
Z3 of the displayed virtual screen VS.
[0054] Another non-limiting example of display of chosen graphical elements GEi in three
attachment areas Zk (k = 1 to 3) is illustrated in figure 4. This second example corresponds
to a case where the mobile communication equipment EE has a second orientation (landscape
type) and is held by the two hands of a right-handed user. In this second example,
the right side of the virtual screen VS is dedicated to the thumb of the user's right
hand and the left side of the virtual screen VS is dedicated to the thumb of the user's
left hand. The right side comprises a first attachment area (Z1₁) comprising a single
graphical element GE1₁ most likely to be accessed by the user, and a third attachment
area (Z3₁) surrounding the first attachment area (Z1₁) and comprising three different
graphical elements GE3₁ less likely to be accessed by the user. The left side comprises
two second attachment areas (Z2₂), each comprising a single graphical element GE2₂
quite likely to be accessed by the user, and two third attachment areas (Z3₂), each
comprising a single graphical element GE3₂ less likely to be accessed by the user.
[0055] An example of scenario of use of the invention could be the display of resource shortcuts
GEi in a situation where a right-handed user has the habit of having lunch in restaurants
around his office, generally using a specific restaurant guide application (represented
by a graphical element GE1) to select his restaurant, and who is located in his office at noon.
This scenario begins when the user picks up his mobile communication equipment EE with
his right hand. Then the local gesture detection module GDM identifies a fragment
of gesture corresponding to the phone being picked-up with a right hand. Then the
first processing means PM1 identifies the user as being in his lunch context, characterized
by the weekdays, the lunch time (noon to 2pm time period), and the office area location.
Then the environment module EM retrieves the list of neighboring restaurants from
the network (through the network module NM). Then the third processing means PM3 determines
the access probabilities associated to each chosen resource shortcut configured by
the user on the touchscreen TS of his mobile communication equipment EE (it is supposed
that the restaurant guide application shortcut obtains the highest access probability).
Then the second processing means PM2 determines that the user can only interact with
his right thumb, pivoting from the bottom right edge of the mobile communication equipment
EE. Then the fourth processing means PM4 of the layout and rendering decision module
LRM selects appropriate positions for the various resource shortcuts on the mobile
touchscreen TS, avoiding spaces hard to reach for the user's thumb, placing the
restaurant guide application shortcut GE1 on the displayed virtual screen VS at the
position that is the most easily touchable by the right-hand thumb, and highlighting
it. For instance, the size of each resource shortcut is chosen to put emphasis
on the probability that the user will access it. Then the workspace WS updates the
display to show the chosen resource shortcuts following the layout and rendering parameters
decided by the layout and rendering decision module LRM in combination with the fourth
processing means PM4.
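The lunch scenario can be condensed into one illustrative chain from access history to layout; the log format, area names and ranking rule are assumptions for illustration, not the patent's implementation:

```python
from collections import Counter

def layout_for_context(access_log, current_context, holding_hand):
    """Illustrative chain of steps (i) and (ii): estimate per-shortcut
    access probabilities in the current context, then assign shortcuts,
    in decreasing probability order, to areas Z1 (easiest for the
    holding thumb) through Z3 (relatively easy to reach)."""
    counts = Counter(res for res, ctx in access_log if ctx == current_context)
    total = sum(counts.values()) or 1
    # Rank shortcuts by decreasing in-context access frequency.
    ranked = sorted(counts, key=counts.get, reverse=True)
    areas = ("Z1", "Z2", "Z3")
    return {
        res: {
            "area": areas[min(rank, 2)],    # overflow shortcuts share Z3
            "probability": counts[res] / total,
            "side": holding_hand,           # side favored by the layout
        }
        for rank, res in enumerate(ranked)
    }
```

In the scenario, the restaurant guide shortcut would dominate the lunch-context log and therefore land in Z1 on the side of the right thumb.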
[0056] The invention allows organizing and rendering resources that a user can access on
his mobile communication equipment, based on the likelihood that the user needs to
access them, and on the way the user is likely to interact with his mobile communication
equipment. It also allows minimizing the effort required by a user to identify and
access resources on his mobile communication equipment, by reducing the time needed
to identify a desired resource, and the distance needed to access this desired resource
with a finger.
[0057] The invention is not limited to the embodiments of method, device and mobile communication
equipment described above, only as examples, but it encompasses all alternative embodiments
which may be considered by one skilled in the art within the scope of the claims hereafter.
1. Method for organizing display of graphical elements (GEi), each representing at least
one service, on a touchscreen (TS) of a mobile communication equipment (EE) of a user
comprising at least one workspace (WS) associated to at least one virtual screen (VS)
to which said graphical elements (GEi) can be attached, said method comprising a step
(i) during which one determines a current user context from first information, current
situational constraint(s) of said mobile communication equipment (EE) from second
information, and, for at least chosen ones of said graphical elements (GEi), probabilities
to be accessed by said user in said current context, and a step (ii) during which
one attaches said chosen graphical elements (GEi) to areas of a displayed virtual
screen (VS) depending on their respective access probabilities, said current situational
constraints and said current user context, to ease access to said chosen graphical
elements (GEi) by said user.
2. Method according to claim 1, wherein during step (ii) one determines a display size
and/or a display aspect for each of said chosen graphical elements (GEi) to ease their
respective identifications by said user.
3. Method according to one of claims 1 and 2, wherein in step (ii) a chosen graphical
element (GEi) associated to a highest access probability interval is attached to a
first area that is the easiest to touch with a thumb of a user hand that holds said
mobile communication equipment (EE), a chosen graphical element (GEi) associated to
an access probability interval smaller than the highest one is attached to a second
area that can be easily touched by a thumb of a user hand that holds said mobile communication
equipment (EE), and a chosen graphical element (GEi) associated to an access probability
interval still smaller than the highest one is attached to a third area that can be
relatively easily touched by a thumb of a user hand that holds said mobile communication
equipment (EE).
4. Method according to one of claims 1 to 3, wherein in step (i) said user context is
a context in which the user is currently immersed and which is chosen from a group
comprising at least an activity, a location, a current time, and people in the vicinity
of said user.
5. Method according to one of claims 1 to 4, wherein in step (i) each first information
is chosen from a group comprising at least a user habit, a user surrounding environment,
and a user social environment.
6. Method according to claim 5, wherein said user surrounding environment is chosen from
a group comprising at least a current local weather, a current local temperature,
a current light intensity, and point(s) of interest in the vicinity of said user.
7. Method according to one of claims 5 and 6, wherein in step (i) said social environment
is chosen from a group comprising at least positions of user's friends, positions
of user's working colleagues, events of interest for said user, and addresses of
user's relatives.
8. Method according to one of claims 1 to 7, wherein in step (i) each second information
is chosen from a group comprising at least detected user finger gestures and environment
data.
9. Method according to claim 8, wherein each environment data is chosen from a group
comprising at least an ability to touch a graphical element (GEi), an ability to click
on a graphical element (GEi), an ability to slide a graphical element (GEi) or a virtual
screen (VS), an ability to be heard by said mobile communication equipment (EE), an
ability to listen to said mobile communication equipment (EE), an ability to recognize
a picture, and an ability to recognize a color.
10. Computer program product comprising a set of instructions arranged, when it is executed
by processing means, for performing the method according to one of the preceding claims
to allow organization of the display of graphical elements (GEi), each representing
at least one service, on a touchscreen (TS) of a mobile communication equipment (EE)
comprising at least one workspace (WS) associated to at least one virtual screen (VS).
11. Device (D) for a mobile communication equipment (EE) comprising a touchscreen (TS)
and at least one workspace (WS) associated to at least one virtual screen (VS) to
which graphical elements (GEi) can be attached, each representing at least one service,
said device (D) comprising i) a first processing means (PM1) arranged for determining
a current user context from first information, ii) a second processing means (PM2)
arranged for determining current situational constraint(s) of said mobile communication
equipment (EE) from second information, iii) a third processing means (PM3) arranged
for determining, for at least chosen ones of said graphical elements (GEi), probabilities
to be accessed by said user in said current context, and iv) a fourth processing means
(PM4) arranged for attaching said chosen graphical elements (GEi) to areas of a displayed
virtual screen (VS) depending on their respective access probabilities, said current
situational constraints and said current user context, to ease access to said chosen
graphical elements (GEi) by said user.