(11)EP 2 689 374 B1

(12)EUROPEAN PATENT SPECIFICATION

(45)Mention of the grant of the patent:
29.04.2020 Bulletin 2020/18

(21)Application number: 12718451.3

(22)Date of filing:  20.03.2012
(51)International Patent Classification (IPC):
G06F 21/32 (2013.01)
G10L 17/00 (2013.01)
G10L 17/24 (2013.01)
(86)International application number:
PCT/US2012/029810
(87)International publication number:
WO 2012/129231 (27.09.2012 Gazette  2012/39)

(54)

DEVICE ACCESS USING VOICE AUTHENTICATION

GERÄTEZUGRIFF MIT SPRACHAUTHENTIFIZIERUNG

ACCÈS À UN DISPOSITIF AU MOYEN D'UNE AUTHENTIFICATION VOCALE


(84)Designated Contracting States:
AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

(30)Priority: 21.03.2011 US 201113053144

(43)Date of publication of application:
29.01.2014 Bulletin 2014/05

(73)Proprietor: Apple Inc.
Cupertino CA 95014 (US)

(72)Inventor:
  • CHEYER, Adam, J.
    Cupertino CA 95014 (US)

(74)Representative: Barnfather, Karl Jon 
Withers & Rogers LLP 4 More London Riverside
London SE1 2AU (GB)


(56)References cited:
EP-A2- 1 229 496
WO-A1-2010/075623
  
  • Anonymous: "Speaker recognition", Wikipedia, the free encyclopedia, 2 November 2010 (2010-11-02), pages 1-3, XP002680043, Retrieved from the Internet: URL:http://web.archive.org/web/20101102233226/http://en.wikipedia.org/wiki/Speaker_recognition [retrieved on 2012-07-16]
  
Note: Within nine months from the publication of the mention of the grant of the European patent, any person may give notice to the European Patent Office of opposition to the European patent granted. Notice of opposition shall be filed in a written reasoned statement. It shall not be deemed to have been filed until the opposition fee has been paid. (Art. 99(1) European Patent Convention).


Description

TECHNICAL FIELD



[0001] The disclosure generally relates to techniques for controlling user access to features of an electronic device.

BACKGROUND



[0002] Many of today's computers and other electronic devices include a feature that allows a user to lock the computer or device from access by others. Some of the devices provide a mechanism for unlocking a locked device through a graphical user interface of the device. For example, the graphical user interface can provide a mechanism that allows a user to input authentication information, such as a password or code.

[0003] Some computers and other electronic devices can provide voice command features. For example, a user of a device can speak a voice command into a microphone coupled to the device. When the voice command is received by the device, the device can recognize and execute the voice command. The document WO 2010/075623, 8th July 2010 (2010-07-08), discloses a method and system for unlocking a mobile device, via a voice command, in order to provide access to the device.

SUMMARY



[0004] The invention is disclosed by the independent claims. Further embodiments are disclosed by the dependent claims. A device can be configured to receive speech input from a user. The speech input can include a command for accessing a restricted feature of the device. The speech input can be compared to a voiceprint (e.g., a text-independent voiceprint) of the user's voice to authenticate the user to the device. Responsive to successful authentication of the user to the device, the user is allowed access to the restricted feature without having to perform additional authentication steps or speak the command again. If the user is not successfully authenticated to the device, additional authentication steps can be requested by the device (e.g., a password can be requested).

[0005] In some implementations, a voiceprint can be generated for an authorized user of a device. For example, one or more samples of the user's voice can be collected as the user speaks voice commands into the device. A voiceprint can be generated based on the one or more voice samples. The voiceprint can be generated locally on the device or by a network voiceprint service (e.g., network server). The voiceprint can be used with a text-independent voice authentication process running on the device or hosted by the network service to authenticate the user to the device.

[0006] Particular embodiments of the subject matter described in this specification can be implemented to realize one or more of the following advantages. A device can include a more user-friendly authentication process for accessing a locked device. A user's voice can be authenticated at the same time that a voice command is processed; no separate authentication step is required. The device can generate a voiceprint while the user speaks voice commands into the device; no separate speaker recognition training step is required. The voice authentication features disclosed below can provide fast and secure voice control access to any or all features of the device.

[0007] In accordance with some embodiments, a method includes receiving speech input at a device. The speech input includes a command associated with a restricted feature of the device. The method also includes comparing the speech input and a voiceprint of an authorized user of the device, and based on results of the comparing, determining that the speech input was spoken by the authorized user. The method further includes providing access to the restricted feature of the device according to the command. The method is performed by one or more processors of the device.

[0008] In accordance with some embodiments, a method includes receiving a speech input at a device. The speech input includes a command associated with a feature of the device. The method also includes generating a text-independent voiceprint based on the speech input; and providing access to the feature of the device according to the command. The method is performed by one or more processors of the device.

[0009] In accordance with some embodiments, a method includes receiving speech input at a device, the speech input including a command associated with a feature of the device, and generating a voice sample based on the speech input. The method also includes transmitting the voice sample to a voiceprint service for generating a voiceprint based on the voice sample, and providing access to the feature of the device according to the command. The method is performed by one or more processors of the device.

[0010] In accordance with some embodiments, an electronic device includes one or more processors, memory, and one or more programs; the one or more programs are stored in the memory and configured to be executed by the one or more processors and the one or more programs include instructions for performing the operations of any of the methods described above. In accordance with some embodiments, a computer readable storage medium has stored therein instructions, which, when executed by an electronic device, cause the device to perform the operations of any of the methods described above. In accordance with some embodiments, an electronic device includes means for performing the operations of any of the methods described above. In accordance with some embodiments, an information processing apparatus, for use in an electronic device includes means for performing the operations of any of the methods described above.

[0011] In accordance with some embodiments, an electronic device includes a speech receiving unit configured to receive speech input. The speech input includes a command associated with a restricted feature of the electronic device. The electronic device also includes a processing unit coupled to the speech receiving unit. The processing unit is configured to: compare the speech input and a voiceprint of an authorized user of the electronic device; based on results of the comparing, determine that the speech input was spoken by the authorized user; and provide access to the restricted feature of the electronic device according to the command.

[0012] In accordance with some embodiments, an electronic device includes a speech receiving unit configured to receive speech input, the speech input including a command associated with a feature of the electronic device. The electronic device also includes a processing unit coupled to the speech receiving unit. The processing unit is configured to: generate a text-independent voiceprint based on the speech input; and provide access to the feature of the electronic device according to the command.

[0013] In accordance with some embodiments, an electronic device includes a speech receiving unit configured to receive speech input. The speech input includes a command associated with a feature of the electronic device. The electronic device also includes a processing unit coupled to the speech receiving unit. The processing unit is configured to: generate a voice sample based on the speech input; transmit the voice sample to a voiceprint service for generating a voiceprint based on the voice sample; and provide access to the feature of the electronic device according to the command.

[0014] Details of one or more implementations are set forth in the accompanying drawings and the description below. Other features, aspects, and potential advantages will be apparent from the description and drawings, and from the claims.

DESCRIPTION OF DRAWINGS



[0015] 

FIG. 1 illustrates an example device configured for processing voice commands.

FIG. 2 is a flow diagram of an example process for generating a voiceprint.

FIG. 3 illustrates an example locked device that can be configured for voice authentication.

FIG. 4 is a flow diagram of an example process for voice authentication.

FIG. 5 is a block diagram of an example network operating environment.

FIG. 6 is a block diagram of an example implementation of the mobile device of FIGS. 1-4.

FIG. 7 illustrates a functional block diagram of an electronic device in accordance with some embodiments.

FIG. 8 illustrates a functional block diagram of an electronic device in accordance with some embodiments.

FIG. 9 illustrates a functional block diagram of an electronic device in accordance with some embodiments.


DETAILED DESCRIPTION


Voice Commands



[0016] FIG. 1 illustrates an example device 100 configured for processing voice commands. For example, device 100 can be a mobile device, such as a cell phone, smart phone, electronic tablet, television system, personal data assistant, a laptop or any other mobile device. Device 100 can be a desktop computer or any other device that can require a user to authenticate to the device. In some implementations, device 100 can receive speech input, determine a command based on the speech input, and execute the command. For example, a user can activate a voice control feature of device 100 by pressing and holding down button 102. When activated, the voice control feature can display a voice control graphical user interface on touch sensitive display 104, for example, as displayed in FIG. 1. A user can cancel the voice control feature by pressing cancel button 106 displayed in menu bar 114.

[0017] In some implementations, when the voice control feature is activated, device 100 can receive speech input from a user through microphone 108. In some implementations, the speech input can be translated into text representing the words spoken in the speech input. For example, speech recognition analysis or modeling (e.g., Hidden Markov modeling (HMM), dynamic time warping (DTW), etc.) can be performed on the speech input to generate text that represents the content of the speech input.
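The dynamic time warping (DTW) technique mentioned above aligns a spoken input against stored templates by minimizing a cumulative frame-to-frame cost. The following is a rough, hypothetical sketch only; a real recognizer would operate on acoustic feature frames (e.g., MFCC vectors) rather than the scalar sequences used here, and the template values are invented for illustration:

```python
# Illustrative dynamic time warping (DTW) distance between two
# feature sequences, as might be used to compare a spoken word
# against stored word templates. All data and names are hypothetical.

def dtw_distance(seq_a, seq_b):
    """Minimum cumulative alignment cost between two 1-D sequences."""
    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    # cost[i][j]: best cost aligning seq_a[:i] with seq_b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(seq_a[i - 1] - seq_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# The template with the smallest DTW distance to the input "wins".
templates = {"call": [1.0, 3.0, 2.0], "play": [5.0, 5.0, 1.0]}
spoken = [1.2, 2.9, 2.1]
best = min(templates, key=lambda w: dtw_distance(spoken, templates[w]))
```

Because DTW tolerates local stretching and compression of the time axis, the same word spoken slightly faster or slower can still align closely with its template.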

[0018] In some implementations, the text generated from the speech input can be analyzed to determine a command to invoke a feature of device 100. For example, if the text includes the word "call," device 100 can determine that the user wants to make a phone call and can invoke a telephony application. If the text includes the word "play," device 100 can determine that the user wants to play media stored on device 100 and can invoke a media player to play content, such as music or a movie, for example.

[0019] In some implementations, the voice control feature of device 100 can provide feedback to the user to indicate the success or failure of device 100 to determine the command. For example, the feedback (e.g., audio, visual, vibration) can indicate to the user what command is about to be executed on the device, whether the device 100 was successful in determining a command based on the speech input, and/or whether the command was successfully executed by device 100. For example, a voice generated by the device can tell the user what command is about to be executed by the device.

[0020] In some implementations, voice control features of device 100 can only be accessed when the device is in an unlocked state (e.g., when the user accessing the device has been authenticated).

Voiceprinting



[0021] FIG. 2 is a flow diagram of an example process 200 for generating a voiceprint. In some implementations, device 100 can be configured to generate a voiceprint for a user based on speech inputs received by device 100. For example, device 100 can collect one or more samples of the user's voice while the user is interacting with voice control features of device 100. In some implementations, device 100 can use the voiceprint in a text-independent voice authentication process to authenticate a user to device 100.

[0022] In some implementations, generating a voiceprint can be performed only when device 100 is in an unlocked state. For example, generating a voiceprint can be performed only when the user providing the speech input has been authenticated to device 100 as the owner or an authorized user of device 100 to prevent generating a voiceprint based on an unauthorized user's or intruder's voice.

[0023] At step 202, a speech input is obtained. In some implementations, device 100 can be configured to receive speech input through microphone 108 coupled to device 100. Microphone 108 can generate audio data from the speech input. In some implementations, device 100 can be configured to collect one or more voice samples from the audio data and transmit the voice samples to a remote voiceprint service.

[0024] At step 204, a voiceprint is generated. For example, the one or more voice samples can be analyzed and/or modeled to generate a voiceprint of an authorized user of device 100 based on unique information about the user's vocal tract and the behavior of the user's speaking patterns. In some implementations, the voiceprint can be generated at device 100. For example, the audio data can be processed by device 100 to generate a voiceprint that can be used to recognize an authorized user's voice during speaker authentication. In some implementations, the voiceprint can be generated at a remote or networked service. For example, device 100 can be configured to collect one or more voice samples from audio data and transmit the voice samples to voiceprint service 508 of FIG. 5. For example, voice samples can be collected over time from multiple speech inputs and the voice samples can be transmitted in batches to voiceprint service 508. The voice sample batches can be transmitted to voiceprint service 508 during periods when device 100 is idle or experiencing low resource usage, for example. Voiceprint service 508 can be configured to generate a voiceprint (e.g., a text-independent voiceprint) based on the samples received from device 100. Voiceprint service 508 can transmit the generated voiceprint to device 100 to be used by device 100 when authenticating a user using speaker recognition analysis.

[0025] In some implementations, device 100, or remote voiceprint service 508, can include a voiceprint module that can learn the "signature" or "print" of a person's voice in a text-independent way. For example, statistical models of the characteristics of the spectral features present in a user's pronunciation of various phonemes can be built to distinguish voice characteristics of different users' voices. For example, Vector Quantization (VQ) codebook-based techniques can be employed to generate a voiceprint. Ergodic-HMM-based methods that analyze the stochastic Markovian transitions between states to build learned models of voice characteristics such as voicing, silence, stop burst, nasal/liquid, frication, etc., can be used to generate a voiceprint, for example. In some implementations, a two-pass speaker recognition approach can be used that first explicitly determines phonemes or phoneme classes from the audio data from a speech input and then performs speaker verification by a weighted combination of matches for each recognized phoneme category.
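As a loose illustration of the VQ codebook-based technique named above, the sketch below learns a small codebook of centroids from a speaker's feature frames via k-means, and scores other frames by their average quantization distortion against it. The frame data, dimensionality, and codebook size are placeholders; a real system would first extract MFCC-style features from recorded audio:

```python
# Sketch of a VQ codebook "voiceprint": k-means clusters over a
# speaker's acoustic feature vectors (e.g., per-frame MFCCs).
# Feature extraction is omitted; random vectors stand in for frames.
import numpy as np

def train_codebook(frames, k=4, iters=20, seed=0):
    """Learn k codewords (cluster centroids) from feature frames."""
    rng = np.random.default_rng(seed)
    codebook = frames[rng.choice(len(frames), size=k, replace=False)]
    for _ in range(iters):
        # Assign each frame to its nearest codeword.
        dists = np.linalg.norm(frames[:, None] - codebook[None], axis=2)
        labels = dists.argmin(axis=1)
        # Move each codeword to the mean of its assigned frames.
        for j in range(k):
            if np.any(labels == j):
                codebook[j] = frames[labels == j].mean(axis=0)
    return codebook

def distortion(frames, codebook):
    """Average distance from frames to their nearest codeword."""
    dists = np.linalg.norm(frames[:, None] - codebook[None], axis=2)
    return float(dists.min(axis=1).mean())

rng = np.random.default_rng(1)
user_frames = rng.normal(loc=0.0, scale=1.0, size=(200, 12))
voiceprint = train_codebook(user_frames)
```

Frames from the enrolled speaker should quantize with low distortion against the codebook, while frames from a different speaker typically quantize with noticeably higher distortion, which is the basis of the accept/reject decision.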

[0026] The text-independent speaker authentication processes described above can provide voice authentication without requiring a specific passphrase or particular word for voice authentication. By contrast, text-dependent speaker verification processes often require specific passphrases or word utterances to perform speaker recognition and, therefore, often require a separate authentication step (e.g., challenge-response step) that requires a user to speak a particular word or phrase. The text-independent authentication process does not require a separate challenge-response authentication step.

[0027] In some implementations, once the voiceprint is generated, the voiceprint can be stored at device 100. For example, if device 100 generates the voiceprint, the voiceprint can be stored in memory or non-volatile storage (e.g., a hard drive) coupled to device 100. If the voiceprint is generated by a network server (e.g., by the voiceprint service 508), device 100 can receive the network generated voiceprint and store the voiceprint in memory or non-volatile storage. The network server can also store voiceprints that it generates.

[0028] At step 206, a command is determined based on the speech input. In some implementations, the speech input can be processed to determine a command contained in the speech input. For example, the speech input can be translated into text using speech-to-text processing and the text can be analyzed to identify a command using speech recognition processing. For example, once the speech input is translated into text, the text of the speech input can be compared to text associated with commands known to device 100 to determine if any of the speech input text corresponds (e.g., matches) to the command text. If a textual correspondence is found, in whole or in part, in the speech input, device 100 can execute the command corresponding to the command text that corresponds to the speech input text.
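The keyword-to-feature matching described in this step can be sketched as follows; the command table, feature names, and function names are hypothetical, not details taken from the specification:

```python
# Hypothetical mapping from command keywords to device features,
# matched against the transcribed speech input as described above.

COMMANDS = {
    "call": "telephony",     # invoke the telephony application
    "play": "media_player",  # invoke the media player
    "email": "mail",         # invoke the mail application
}

def match_command(transcript):
    """Return the feature whose command text appears in the transcript."""
    words = transcript.lower().split()
    for keyword, feature in COMMANDS.items():
        if keyword in words:
            return feature
    return None  # no known command found; device may report an error

feature = match_command("Call my office")
```

A production system would use a full grammar or language model rather than bare keyword lookup, but the correspondence check (speech input text against known command text) follows the same shape.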

[0029] In some implementations, the command can be determined while the voiceprint is generated. For example, once the speech input is received by device 100, the speech input can be processed (e.g., in parallel) to generate a voiceprint and determine a voice command. Thus, a single speech input can be used to generate a voiceprint and to issue a voice command.
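The parallel handling of a single speech input can be sketched with two concurrent workers; both worker functions below are stand-in stubs, since the specification does not define the actual voiceprint and recognition routines:

```python
# The same speech input feeds voiceprint generation and command
# recognition concurrently; both workers are illustrative stubs.
from concurrent.futures import ThreadPoolExecutor

def generate_voiceprint(audio):
    # Stand-in for real statistical modeling of the audio.
    return {"voiceprint_for": len(audio)}

def recognize_command(audio):
    # Stand-in for speech-to-text plus command lookup.
    return "call"

speech_input = b"\x00\x01" * 1024  # placeholder PCM audio buffer

with ThreadPoolExecutor(max_workers=2) as pool:
    vp_future = pool.submit(generate_voiceprint, speech_input)
    cmd_future = pool.submit(recognize_command, speech_input)
    voiceprint, command = vp_future.result(), cmd_future.result()
```

Running the two tasks concurrently means neither the enrollment path nor the command path delays the other, which matches the claim that no separate training step is required.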

[0030] At step 208, the command is executed. For example, once a command is determined based on the speech input, the command can be executed by device 100.

Security Features



[0031] FIG. 3 illustrates an example locked device 100 that can be configured for voice authentication. For example, device 100 can be locked (e.g., in a state requiring authentication of a user) to prevent unauthorized access to features (e.g., the entire device, individual applications, etc.) or information stored on device 100. In some implementations, individual features of device 100 can be locked. For example, individual features of device 100 can require authentication of a user before device 100 allows access to the features. Authentication of a user can be required by the device to ensure that the user accessing the device is the owner or an authorized user of the device.
In some implementations, device 100 can require a user to authenticate that the user is an authorized user of device 100 before granting access to device 100 or individual features of device 100. For example, touch sensitive display 104 can display a user interface that allows a user to enter a passcode to unlock device 100. A user can enter a passcode (e.g., a four digit number, word, sequence of characters) using touch sensitive key pad 302 to cause device 100 to unlock. Other user authentication and device unlocking mechanisms (e.g., voice authentication, face recognition, fingerprint recognition) are also possible.

[0032] In some implementations, when an unauthenticated user (e.g., a user that has not been authenticated yet) attempts to access features of or provide input to device 100, authentication of the user can be performed. For example, when a user attempts to place a telephone call, access an e-mail application, address book or calendar on a password locked device, the user interface of FIG. 3 can be presented to the user to allow the user to enter a password, code, or other user authenticating input. In some implementations, if the user enters a password or code that is known to device 100, the user can be authenticated and the device 100 and/or features of device 100 can be unlocked. If the user enters a password or code that is unknown to the device 100, the user cannot be authenticated and device 100 and/or features of device 100 can remain locked. In some implementations, device 100 can be configured to perform voice authentication of a user, as described with reference to FIG. 4.

Voice Authentication



[0033] FIG. 4 is a flow diagram of an example process 400 for voice authentication. For example, voice authentication of a user can be performed when a speech input is received at a locked device by performing speaker recognition analysis on the speech input. Authentication of a user can be performed using text-independent voice authentication techniques, as described above.

[0034] The voice authentication features described herein can allow for fast and secure access to all of the features of and data stored on device 100. For example, these voice authentication features can enable a user of device 100 to access features and information on device 100 in a secure way and without having to enter a passcode every time the user attempts to access device 100. Without these voice authentication features, user access to a device can be slowed by separate authentication steps; sensitive or private user data stored on a device can be accessed by an unauthorized user or intruder; or the functionality accessible through voice control features of the device may have to be limited to non-private, non-sensitive information and commands.

[0035] At step 402, a speech input is obtained. For example, a user of locked device 100 can press and hold button 102 to activate voice control features of device 100, even when device 100 is locked. In some implementations, device 100 can receive a speech input through microphone 108 when voice control features of device 100 are activated.

[0036] At step 404, the speech input is used to perform user authentication. In some implementations, the speech input can be used to authenticate a user to device 100 using speaker recognition analysis. For example, if device 100 is locked, the voice of the speech input can be analyzed using speaker recognition analysis to determine if the user issuing the speech input is an authorized user of device 100. For example, the voice characteristics of the voice in the speech input can be compared to voice characteristics of a voiceprint of an authorized user stored on device 100 or by a network service. If the voice can be matched to the voiceprint, the user can be authenticated as an authorized user of device 100. If the voice cannot be matched to the voiceprint, the user will not be authenticated as an authorized user of device 100. If a user cannot be authenticated to device 100 based on the speech input, an error message can be presented (e.g., audibly, visually, and/or by vibration) to the user. For example, if the user cannot be authenticated based on the speech input, device 100 can notify the user of the authentication error with sound (e.g., alarm or synthesized voice message) presented through speaker 110 or loud speaker 112 or a vibration provided by a vibrating source. Device 100 can present a visual error by presenting on touch sensitive display 104 a prompt to the user to provide additional authentication information (e.g., password, code, touch pattern, etc.).
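One simple, hypothetical form of the comparison in step 404 scores the input's voice features against the stored voiceprint and falls back to a passcode prompt on mismatch. The cosine-similarity measure, the feature vectors, and the threshold value are all assumptions for illustration, not details given in the specification:

```python
# Sketch of the accept/reject decision: compare the speech input's
# features to the stored voiceprint; on mismatch, request additional
# authentication. All vectors and the threshold are invented values.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

THRESHOLD = 0.9  # assumed; a real system would tune this empirically

def authenticate(input_features, voiceprint_features):
    return cosine_similarity(input_features, voiceprint_features) >= THRESHOLD

stored_voiceprint = [0.9, 0.2, 0.4]    # enrolled user's features
matching_input   = [0.88, 0.21, 0.42]  # same speaker, slight variation
impostor_input   = [0.1, 0.95, 0.1]    # different speaker

if not authenticate(impostor_input, stored_voiceprint):
    prompt = "Please enter your passcode"  # additional authentication step
```

The threshold trades false accepts against false rejects: raising it makes impostor access less likely at the cost of more passcode fallbacks for the legitimate user.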

[0037] At step 406, a command can be determined based on the speech input. As described above, the speech input can be translated to text and the text can be processed to determine a command present in the speech input. In some implementations, a user can be authenticated based on the speech input while the speech input is processed to determine the command in the speech input. That is, the user can submit a single speech input to device 100 and that single speech input can be processed to both authenticate the user and to determine which command the user wants the device to execute.

[0038] At step 408, the command can be executed when the voice is authenticated. In some implementations, if the user's voice in the speech input can be matched to a voiceprint of an authorized user, the user's voice can be authenticated and the device can execute the determined command. In some implementations, device 100 can execute the determined command while device 100 is locked. For example, device 100 can remain locked while device 100 executes the command such that additional voice (or non-voice) input received by device 100 will require authentication of the user providing such input. In some implementations, locked device 100 can be unlocked in response to authenticating a user to locked device 100 using voice authentication processes described above. For example, locked device 100 can be unlocked when a user's voice is authenticated as belonging to an authorized user of device 100 such that subsequent input or commands do not require additional authentication.

[0039] In some implementations, other biometric data (e.g., other than a user's voice) can be used to authenticate a user to a device or confirm the result of a voice authentication to provide more confidence of a successful voice authentication. For example, front facing camera 116 of mobile device 100 can be used to collect images of a user's face that can be used to recognize an authorized user of the device based on facial recognition analysis. As another example, the touch-sensitive display 104, or button 120, can be configured to collect fingerprint data for a user and the fingerprint data can be used to authenticate a user to the device.

[0040] In some implementations, authenticating a user using other types of biometric data can be performed passively. For example, authentication of a user can be performed while the user is interacting with the device in non-authentication-specific ways. For example, the user's fingerprint can be authenticated when the user touches the touch-sensitive display to interact with the music player object 124. Front facing camera 116, for example, can collect images of the user's face as the user interacts with video chat features of device 100. Front facing camera 116 can collect images for face recognition analysis and authentication while the user is operating device 100 in other ways, such as web browsing. The collected images can be used to authenticate the user using facial recognition analysis. In some implementations, a combination of biometric data can be collected and used to authenticate a user when the user attempts to access device 100. For example, a combination of speaker recognition, face recognition, fingerprint matching, or other biometric data can be used to authenticate a user to device 100.
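The combination of biometric signals described above is commonly realized as score-level fusion, in which each modality contributes a weighted match score to one overall confidence value. The weights and threshold below are illustrative assumptions only:

```python
# Illustrative score-level fusion of multiple biometric modalities
# (voice, face, fingerprint); weights and threshold are assumptions.

WEIGHTS = {"voice": 0.5, "face": 0.3, "fingerprint": 0.2}
ACCEPT_AT = 0.8  # assumed overall confidence threshold

def fused_confidence(scores):
    """Weighted sum of per-modality match scores in [0, 1]."""
    return sum(WEIGHTS[m] * scores.get(m, 0.0) for m in WEIGHTS)

strong = fused_confidence({"voice": 0.95, "face": 0.9, "fingerprint": 0.85})
weak = fused_confidence({"voice": 0.95})  # only one modality observed
```

A fusion scheme like this lets a confident face match reinforce a borderline voice match, which is how a secondary biometric can "confirm the result of a voice authentication" as the paragraph describes.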

Example Network Operating Environment



[0041] FIG. 5 is a block diagram of an example network operating environment 500. In FIG. 5, mobile devices 502a and 502b each can represent mobile device 100. Mobile devices 502a and 502b can, for example, communicate over one or more wired and/or wireless networks 510. For example, a wireless network 512, e.g., a cellular network, can communicate with a wide area network (WAN) 514, such as the Internet, by use of a gateway 516. Likewise, an access device 518, such as an 802.11g wireless access device, can provide communication access to the wide area network 514. In some implementations, both voice and data communications can be established over the wireless network 512 and the access device 518. For example, the mobile device 502a can place and receive phone calls (e.g., using VoIP protocols), send and receive e-mail messages (e.g., using POP3 protocol), and retrieve electronic documents and/or streams, such as web pages, photographs, and videos, over the wireless network 512, gateway 516, and wide area network 514 (e.g., using TCP/IP or UDP protocols). Likewise, in some implementations, the mobile device 502b can place and receive phone calls, send and receive e-mail messages, and retrieve electronic documents over the access device 518 and the wide area network 514. In some implementations, the mobile device 502a or 502b can be physically connected to the access device 518 using one or more cables and the access device 518 can be a personal computer. In this configuration, the mobile device 502a or 502b can be referred to as a "tethered" device.

[0042] The mobile devices 502a and 502b can also establish communications by other means. For example, the wireless device 502a can communicate with other wireless devices, e.g., other mobile devices 502a or 502b, cell phones, etc., over the wireless network 512. Likewise, the mobile devices 502a and 502b can establish peer-to-peer communications 520, e.g., a personal area network, by use of one or more communication subsystems, such as Bluetooth™ communication devices. Other communication protocols and topologies can also be implemented.

[0043] The mobile device 502a or 502b can, for example, communicate with one or more services 530, 540, 550, 560, 570 and 580 over the one or more wired and/or wireless networks 510. For example, a navigation service 530 can provide navigation information, e.g., map information, location information, route information, and other information, to the mobile device 502a or 502b. A user of the mobile device 502b can invoke a map functionality and can request and receive a map for a particular location.

[0044] A messaging service 540 can, for example, provide e-mail and/or other messaging services. A media service 550 can, for example, provide access to media files, such as song files, audio books, movie files, video clips, and other media data. In some implementations, separate audio and video services (not shown) can provide access to the respective types of media files. A syncing service 560 can, for example, perform syncing services (e.g., sync files). An activation service 570 can, for example, perform an activation process for activating the mobile device 502a or 502b.

[0045] A voiceprint service 580 can, for example, generate voiceprints that can be used to authenticate users of mobile device 502a or 502b. For example, voiceprint service 580 can receive samples of a user's voice from mobile device 502a or 502b and generate a voiceprint based on the voice samples. Mobile device 502a or 502b can, for example, collect the voice samples as a user is interacting with various voice features (e.g., voice control, telephone, voice recorder, etc.) of mobile device 502a or 502b. Once voiceprint service 580 has generated a voiceprint for a user, voiceprint service 580 can transmit the voiceprint to mobile device 502a or 502b. Once the voiceprint is received at mobile device 502a or 502b, the voiceprint can be used by the mobile device to authenticate a user based on the user's voice. The voiceprints generated by voiceprint service 580 can be text-independent voiceprints, for example.

[0046] Other services can also be provided, including a software update service that automatically determines whether software updates exist for software on the mobile device 502a or 502b and, if so, downloads the updates to the mobile device 502a or 502b, where they can be manually or automatically unpacked and/or installed.

[0047] The mobile device 502a or 502b can also access other data and content over the one or more wired and/or wireless networks 510. For example, content publishers, such as news sites, RSS feeds, web sites, blogs, social networking sites, developer networks, etc., can be accessed by the mobile device 502a or 502b. Such access can be provided by invocation of a web browsing function or application (e.g., a browser) of mobile device 502a or 502b.

Example Mobile Device Architecture

[0048] FIG. 6 is a block diagram 600 of an example implementation of the mobile device 100 of FIGS. 1-4. The mobile device 100 can include a memory interface 602, one or more data processors, image processors and/or central processing units 604, and a peripherals interface 606. The memory interface 602, the one or more processors 604 and/or the peripherals interface 606 can be separate components or can be integrated in one or more integrated circuits. The various components in the mobile device 100 can be coupled by one or more communication buses or signal lines.

[0049] Sensors, devices, and subsystems can be coupled to the peripherals interface 606 to facilitate multiple functionalities. For example, a motion sensor 610, a light sensor 612, and a proximity sensor 614 can be coupled to the peripherals interface 606 to facilitate orientation, lighting, and proximity functions. Other sensors 616 can also be connected to the peripherals interface 606, such as a positioning system (e.g., GPS receiver), a temperature sensor, a biometric sensor, or other sensing device, to facilitate related functionalities.

[0050] A camera subsystem 620 and an optical sensor 622, e.g., a charge-coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS) optical sensor, can be utilized to facilitate camera functions, such as recording photographs and video clips. The camera subsystem 620 and the optical sensor 622 can be used to collect images of a user for use during authentication, e.g., by performing facial recognition analysis.

[0051] Communication functions can be facilitated through one or more wireless communication subsystems 624, which can include radio frequency receivers and transmitters and/or optical (e.g., infrared) receivers and transmitters. The specific design and implementation of the communication subsystem 624 can depend on the communication network(s) over which the mobile device 100 is intended to operate. For example, a mobile device 100 can include communication subsystems 624 designed to operate over a GSM network, a GPRS network, an EDGE network, a Wi-Fi or WiMax network, and a Bluetooth™ network. In particular, the wireless communication subsystems 624 can include hosting protocols such that the device 100 can be configured as a base station for other wireless devices.

[0052] An audio subsystem 626 can be coupled to a speaker 628 and a microphone 630 to facilitate voice-enabled functions, such as speaker recognition, voice replication, digital recording, and telephony functions. The audio subsystem 626 can be configured to facilitate processing voice commands, voiceprinting and voice authentication, as described above with reference to FIGS. 1-4.

[0053] The I/O subsystem 640 can include a touch screen controller 642 and/or other input controller(s) 644. The touch screen controller 642 can be coupled to a touch screen 646. The touch screen 646 and touch screen controller 642 can, for example, detect contact and movement or break thereof using any of a plurality of touch sensitivity technologies, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with the touch screen 646.

[0054] The other input controller(s) 644 can be coupled to other input/control devices 648, such as one or more buttons, rocker switches, thumb-wheel, infrared port, USB port, and/or a pointer device such as a stylus. The one or more buttons (not shown) can include an up/down button for volume control of the speaker 628 and/or the microphone 630.

[0055] In one implementation, a pressing of the button for a first duration can disengage a lock of the touch screen 646; and a pressing of the button for a second duration that is longer than the first duration can turn power to the mobile device 100 on or off. Pressing the button for a third duration can activate a voice control, or voice command, module that enables the user to speak commands into the microphone 630 to cause the device to execute the spoken command. The user can customize a functionality of one or more of the buttons. The touch screen 646 can, for example, also be used to implement virtual or soft buttons and/or a keyboard.
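The duration-based button behavior described above can be expressed as a simple dispatch. The concrete thresholds below are assumptions made for illustration, since the paragraph only states that the second duration is longer than the first:

```python
def handle_button_press(duration_s):
    # Map a press duration (seconds) to the actions described in
    # paragraph [0055]; the cutoff values are assumed.
    if duration_s < 1.0:
        return "disengage_touch_screen_lock"   # first duration
    if duration_s < 3.0:
        return "toggle_power"                  # second, longer duration
    return "activate_voice_control"            # third duration

print(handle_button_press(0.4))   # → disengage_touch_screen_lock
print(handle_button_press(5.0))   # → activate_voice_control
```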

[0056] In some implementations, the mobile device 100 can present recorded audio and/or video files, such as MP3, AAC, and MPEG files. In some implementations, the mobile device 100 can include the functionality of an MP3 player, such as an iPod™. The mobile device 100 can, therefore, include a 36-pin connector that is compatible with the iPod. Other input/output and control devices can also be used.

[0057] The memory interface 602 can be coupled to memory 650. The memory 650 can include high-speed random access memory and/or non-volatile memory, such as one or more magnetic disk storage devices, one or more optical storage devices, and/or flash memory (e.g., NAND, NOR). The memory 650 can store an operating system 652, such as Darwin, RTXC, LINUX, UNIX, OS X, WINDOWS, or an embedded operating system such as VxWorks.

[0058] The operating system 652 can include instructions for handling basic system services and for performing hardware dependent tasks. In some implementations, the operating system 652 can be a kernel (e.g., UNIX kernel). In some implementations, the operating system 652 can include instructions for performing voice authentication. For example, the operating system 652 can implement the security lockout, voiceprint, and voice authentication features described with reference to FIGS. 1-4.

[0059] The memory 650 can also store communication instructions 654 to facilitate communicating with one or more additional devices, one or more computers and/or one or more servers. The memory 650 can include graphical user interface instructions 656 to facilitate graphical user interface processing; sensor processing instructions 658 to facilitate sensor-related processing and functions; phone instructions 660 to facilitate phone-related processes and functions; electronic messaging instructions 662 to facilitate electronic-messaging related processes and functions; web browsing instructions 664 to facilitate web browsing-related processes and functions; media processing instructions 666 to facilitate media processing-related processes and functions; GPS/Navigation instructions 668 to facilitate GPS and navigation-related processes and functions; and/or camera instructions 670 to facilitate camera-related processes and functions.

[0060] The memory 650 can store other software instructions 672 to facilitate other processes and functions, such as the security and/or authentication processes and functions as described with reference to FIGS. 1-4. For example, the software instructions can include instructions for performing voice authentication on a per application or per feature basis and for allowing a user to configure authentication requirements of each application or feature available on device 100.
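A per-feature authentication policy of the kind paragraph [0060] describes could be as simple as a user-editable table. The feature names and the default-deny choice below are illustrative assumptions:

```python
# Hypothetical user-configured policy: which features of device 100
# require voice authentication before they may be invoked.
requires_voice_auth = {
    "phone": False,
    "email": True,
    "photos": True,
}

def may_invoke(feature, speaker_verified):
    # Features not listed default to requiring authentication.
    return speaker_verified or not requires_voice_auth.get(feature, True)

print(may_invoke("phone", speaker_verified=False))  # → True
print(may_invoke("email", speaker_verified=False))  # → False
```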

[0061] The memory 650 can also store other software instructions (not shown), such as web video instructions to facilitate web video-related processes and functions; and/or web shopping instructions to facilitate web shopping-related processes and functions. In some implementations, the media processing instructions 666 are divided into audio processing instructions and video processing instructions to facilitate audio processing related processes and functions and video processing-related processes and functions, respectively. An activation record and International Mobile Equipment Identity (IMEI) 674 or similar hardware identifier can also be stored in memory 650.

[0062] Each of the above identified instructions and applications can correspond to a set of instructions for performing one or more functions described above. These instructions need not be implemented as separate software programs, procedures, or modules. The memory 650 can include additional instructions or fewer instructions. Furthermore, various functions of the mobile device 100 can be implemented in hardware and/or in software, including in one or more signal processing and/or application specific integrated circuits.

[0063] In accordance with some embodiments, Figure 7 shows a functional block diagram of an electronic device 700 configured in accordance with the principles of the invention as described above. The functional blocks of the device may be implemented by hardware, software, or a combination of hardware and software to carry out the principles of the invention. It is understood by persons of skill in the art that the functional blocks described in Figure 7 may be combined or separated into sub-blocks to implement the principles of the invention as described above. Therefore, the description herein may support any possible combination or separation or further definition of the functional blocks described herein.

[0064] As shown in Figure 7, the electronic device 700 includes a speech receiving unit 702 configured to receive speech input. The speech input includes a command associated with a restricted feature of the electronic device 700. The electronic device 700 also includes a processing unit 706 coupled to the speech receiving unit 702. In some embodiments, the processing unit 706 includes a comparing unit 708, a determining unit 710, an access providing unit 712, an access denying unit 714, a speech processing unit 716, and a receiving unit 718.

[0065] The processing unit 706 is configured to: compare the speech input and a voiceprint of an authorized user of the electronic device 700 (e.g., with the comparing unit 708); based on results of the comparing, determine that the speech input was spoken by the authorized user (e.g., with the determining unit 710); and provide access to the restricted feature of the electronic device 700 according to the command (e.g., with the access providing unit 712).
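The compare/determine/provide sequence can be sketched as below. Cosine similarity over feature vectors and the 0.95 threshold are illustrative assumptions, not necessarily the comparison the comparing unit 708 performs:

```python
import math

def similarity(a, b):
    # Cosine similarity between two acoustic feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

THRESHOLD = 0.95  # assumed decision threshold

def handle_speech(features, voiceprint, command):
    # Compare (unit 708), determine (unit 710), then either provide
    # access (unit 712) or deny it (unit 714).
    if similarity(features, voiceprint) >= THRESHOLD:
        return f"executing: {command}"
    return "access denied"

print(handle_speech([1.0, 0.0], [1.0, 0.0], "call home"))  # → executing: call home
print(handle_speech([0.0, 1.0], [1.0, 0.0], "call home"))  # → access denied
```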

[0066] In some embodiments, the processing unit 706 is configured to: based on the results of the comparing, determine that the speech input was not spoken by the authorized user (e.g., with the determining unit 710); and deny access to the restricted feature of the electronic device 700 (e.g., with the access denying unit 714).

[0067] In some embodiments, the processing unit 706 is configured to: process the speech input to determine the command while comparing the speech input and the voiceprint of the authorized user of the electronic device 700 (e.g., with the speech processing unit 716).
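Running command recognition concurrently with speaker verification, as paragraph [0067] describes, might look like the following thread-based sketch; both placeholder functions are assumptions standing in for real recognizers:

```python
from concurrent.futures import ThreadPoolExecutor

def recognize_command(speech):
    # Placeholder speech-to-command step.
    return speech.strip().lower()

def speaker_verified(speech):
    # Placeholder comparison against the stored voiceprint.
    return speech == "Call home"

def process_speech(speech):
    # Recognition and verification proceed in parallel; the command
    # is returned for execution only if the speaker check succeeds.
    with ThreadPoolExecutor(max_workers=2) as pool:
        command = pool.submit(recognize_command, speech)
        verified = pool.submit(speaker_verified, speech)
        return command.result() if verified.result() else None

print(process_speech("Call home"))  # → call home
```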

[0068] In some embodiments, the processing unit 706 is configured to: receive the voiceprint from a voiceprint service through a network interface of the electronic device 700 (e.g., with the receiving unit 718).

[0069] In some embodiments, the electronic device 700 is a mobile device. In some embodiments, the mobile device is a handheld device.

[0070] In some embodiments, the voiceprint is a text-independent voiceprint.

[0071] In accordance with some embodiments, Figure 8 shows a functional block diagram of an electronic device 800 configured in accordance with the principles of the invention as described above. The functional blocks of the device may be implemented by hardware, software, or a combination of hardware and software to carry out the principles of the invention. It is understood by persons of skill in the art that the functional blocks described in Figure 8 may be combined or separated into sub-blocks to implement the principles of the invention as described above. Therefore, the description herein may support any possible combination or separation or further definition of the functional blocks described herein.

[0072] As shown in Figure 8, the electronic device 800 includes a speech receiving unit 802 configured to receive speech input. The speech input includes a command associated with a feature of the electronic device 800. The electronic device 800 also includes a processing unit 806 coupled to the speech receiving unit 802. In some embodiments, the processing unit 806 includes a generating unit 808, an access providing unit 810, a speech processing unit 812, a voice sample generation unit 814, a voice sample storing unit 816, and a voiceprint generating unit 818.

[0073] The processing unit 806 is configured to: generate a text-independent voiceprint based on the speech input (e.g., with the generating unit 808); and provide access to the feature of the electronic device 800 according to the command (e.g., with the access providing unit 810).

[0074] In some embodiments, the processing unit 806 is configured to: process the speech input to determine the command while generating the text-independent voiceprint based on the voice of the speech input (e.g., with the speech processing unit 812).

[0075] In some embodiments, the processing unit 806 is configured to: generate voice samples based on the speech input (e.g., with the voice sample generating unit 814); store the voice samples on the electronic device 800 (e.g., with the voice sample storing unit 816); and generate the text-independent voiceprint based on the voice samples (e.g., with the voiceprint generating unit 818).
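A minimal on-device enrollment sketch for the three units just listed (sample generation, storage, and voiceprint generation); the sample-count threshold and the averaging step are assumptions:

```python
class VoiceprintEnroller:
    MIN_SAMPLES = 3  # assumed number of samples needed

    def __init__(self):
        self.stored = []        # voice sample storing unit 816
        self.voiceprint = None

    def add_sample(self, features):
        # Voice sample generating unit 814 would derive `features`
        # from raw speech; here they arrive precomputed.
        self.stored.append(features)
        if self.voiceprint is None and len(self.stored) >= self.MIN_SAMPLES:
            # Voiceprint generating unit 818: average the stored samples.
            n = len(self.stored)
            self.voiceprint = [sum(col) / n for col in zip(*self.stored)]
        return self.voiceprint

enroller = VoiceprintEnroller()
enroller.add_sample([1.0, 2.0])
enroller.add_sample([3.0, 4.0])
print(enroller.add_sample([5.0, 6.0]))  # → [3.0, 4.0]
```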

[0076] In accordance with some embodiments, Figure 9 shows a functional block diagram of an electronic device 900 configured in accordance with the principles of the invention as described above. The functional blocks of the device may be implemented by hardware, software, or a combination of hardware and software to carry out the principles of the invention. It is understood by persons of skill in the art that the functional blocks described in Figure 9 may be combined or separated into sub-blocks to implement the principles of the invention as described above. Therefore, the description herein may support any possible combination or separation or further definition of the functional blocks described herein.

[0077] As shown in Figure 9, the electronic device 900 includes a speech receiving unit 902 configured to receive speech input. The speech input includes a command associated with a feature of the electronic device 900. The electronic device 900 also includes a processing unit 906 coupled to the speech receiving unit 902. In some embodiments, the processing unit 906 includes a voice sample generating unit 908, a voice sample transmitting unit 910, an access providing unit 912, a voice sample storing unit 914, and a receiving unit 916.

[0078] The processing unit 906 is configured to: generate a voice sample based on the speech input (e.g., with the voice sample generating unit 908); transmit the voice sample to a voiceprint service for generating a voiceprint based on the voice sample (e.g., with the voice sample transmitting unit 910); and provide access to the feature of the electronic device 900 according to the command (e.g., with the access providing unit 912).

[0079] In some embodiments, the processing unit 906 is configured to: generate voice samples based on the speech input (e.g., with the voice sample generating unit 908); store the voice samples on the electronic device 900 (e.g., with the voice sample storing unit 914); and transmit the stored voice samples to a voiceprint service for generating a voiceprint based on the voice samples (e.g., with the voice sample transmitting unit 910).

[0080] In some embodiments, the processing unit 906 is configured to: receive a text-independent voiceprint from a voiceprint service through a network interface of the electronic device 900 (e.g., with the receiving unit 916).

[0081] In some embodiments, the voice sample is generated while a command is determined based on the speech input.


Claims

1. A method comprising:
while a device is in a locked state:

receiving a speech input at the device, the speech input including a command associated with a restricted feature of the device;

determining whether the speech input was spoken by an authorized user of the device, the determining including comparing the speech input to a voiceprint of the authorized user;

processing the speech input to identify the command; and

upon determining that the speech input was spoken by the authorized user, executing the identified command to invoke the restricted feature of the device, wherein the method is performed by one or more processors of the device.


 
2. The method of claim 1, further comprising:
upon determining that the speech input was not spoken by the authorized user, denying access to the restricted feature of the device.
 
3. The method of any of claims 1-2, further comprising:
receiving the voiceprint from a voiceprint service through a network interface of the device.
 
4. The method of any of claims 1-3, wherein the device is a handheld mobile device.
 
5. The method of any of claims 1-4, further comprising:

upon determining that the speech input was spoken by the authorized user, unlocking the device;

receiving a subsequent speech input at the device, the subsequent speech input including a subsequent command associated with the restricted feature of the device; and

providing access to the restricted feature of the device according to the subsequent command without determining whether the speech input was spoken by the authorized user of the device.


 
6. A computer-readable medium including one or more sequences of instructions, which, when executed by one or more processors of an electronic device, cause the electronic device to:
while the electronic device is in a locked state:

receive a speech input, the speech input including a command associated with a restricted feature of the electronic device;

determine whether the speech input was spoken by an authorized user of the electronic device, the determining including comparing the speech input to a voiceprint of the authorized user;

process the speech input to identify the command; and

upon determining that the speech input was spoken by the authorized user, executing the identified command to invoke the restricted feature of the electronic device.


 
7. The computer-readable medium of claim 6, further comprising instructions, which, when executed by the one or more processors of the electronic device, cause the electronic device to:
upon determining that the speech input was not spoken by the authorized user, deny access to the restricted feature of the electronic device.
 
8. The computer-readable medium of any of claims 6-7, further comprising instructions, which, when executed by the one or more processors of the electronic device, cause the electronic device to:
receive the voiceprint from a voiceprint service through a network interface of the electronic device.
 
9. The computer-readable medium of any of claims 6-8, wherein the electronic device is a handheld mobile device.
 
10. The computer-readable medium of any of claims 6-9, further comprising instructions, which, when executed by the one or more processors of the electronic device, cause the electronic device to:

upon determining that the speech input was spoken by the authorized user, unlock the electronic device;

receive a subsequent speech input at the electronic device, the subsequent speech input including a subsequent command associated with the restricted feature of the electronic device; and

provide access to the restricted feature of the electronic device according to the subsequent command without determining whether the speech input was spoken by the authorized user of the electronic device.


 
11. An electronic device, comprising:

at least one processor; and

memory storing: a voiceprint; and one or more programs for execution by the at least one processor, the one or more programs including instructions for:
while the electronic device is in a locked state:

receiving a speech input, the speech input including a command associated with a restricted feature of the electronic device;

determining whether the speech input was spoken by an authorized user of the electronic device, the determining including comparing the speech input to a voiceprint of the authorized user;

processing the speech input to identify the command; and

upon determining that the speech input was spoken by the authorized user, executing the identified command to invoke the restricted feature of the electronic device.


 
12. The electronic device of claim 11, the one or more programs further comprising instructions for:
upon determining that the speech input was not spoken by the authorized user, denying access to the restricted feature of the electronic device.
 
13. The electronic device of any of claims 11-12, the one or more programs further comprising instructions for:
receiving the voiceprint from a voiceprint service through a network interface of the electronic device.
 
14. The electronic device of any of claims 11-13, wherein the electronic device is a handheld mobile device.
 
15. The electronic device of any of claims 11-14, the one or more programs further comprising instructions for:

upon determining that the speech input was spoken by the authorized user, unlocking the electronic device;

receiving a subsequent speech input at the electronic device, the subsequent speech input including a subsequent command associated with the restricted feature of the electronic device; and

providing access to the restricted feature of the electronic device according to the subsequent command without determining whether the speech input was spoken by the authorized user of the electronic device.


 



Revendications

1. Un procédé comprenant :
lorsqu'un dispositif est dans un état verrouillé :

la réception d'une entrée vocale sur le dispositif, l'entrée vocale comprenant une commande associée à une caractéristique réservée du dispositif ;

la détermination si l'entrée vocale a ou non été prononcée par un utilisateur autorisé du dispositif, la détermination comprenant la comparaison de l'entrée vocale à une empreinte vocale de l'utilisateur autorisé ;

le traitement de l'entrée vocale pour identifier la commande ; et

sur détermination que l'entrée vocale a été prononcée par l'utilisateur autorisé, l'exécution de la commande identifiée pour faire appel à la caractéristique réservée du dispositif, le procédé étant mis en Ĺ“uvre par un ou plusieurs processeurs du dispositif.


 
2. Le procédé de la revendication 1, comprenant en outre :
sur détermination que l'entrée vocale n'a pas été prononcée par l'utilisateur autorisé, le refus de l'accès à la caractéristique réservée du dispositif.
 
3. Le procédé de l'une des revendications 1 à 2, comprenant en outre :
la réception de l'empreinte vocale depuis un service d'empreintes vocales par l'intermédiaire d'une interface réseau du dispositif.
 
4. Le procédé de l'une des revendications 1 à 3, dans lequel le dispositif est un dispositif mobile tenant dans la main.
 
5. Le procédé de l'une des revendications 1 à 4, comprenant :

sur détermination que l'entrée vocale a été prononcée par l'utilisateur autorisé, le déverrouillage du dispositif ;

la réception d'une entrée vocale ultérieure sur le dispositif, l'entrée vocale ultérieure comprenant une commande ultérieure associée à la caractéristique réservée du dispositif ; et

l'accès donné à la caractéristique réservée du dispositif en fonction de la commande ultérieure sans déterminer si l'entrée vocale a été ou non prononcée par l'utilisateur autorisé du dispositif.


 
6. A computer-readable medium comprising one or more sequences of instructions which, when executed by one or more processors of an electronic device, cause the electronic device to:
while the electronic device is in a locked state:

receive a voice input, the voice input comprising a command associated with a restricted feature of the electronic device;

determine whether the voice input was spoken by an authorized user of the electronic device, the determination comprising comparing the voice input to a voiceprint of the authorized user;

process the voice input to identify the command; and

upon determining that the voice input was spoken by the authorized user, perform the identified command to invoke the restricted feature of the electronic device.


 
7. The computer-readable medium of claim 6, further comprising instructions which, when executed by the one or more processors of the electronic device, cause the electronic device to:
upon determining that the voice input was not spoken by the authorized user, deny access to the restricted feature of the device.
 
8. The computer-readable medium of any one of claims 6 to 7, further comprising instructions which, when executed by the one or more processors of the electronic device, cause the electronic device to:
receive the voiceprint from a voiceprint service through a network interface of the device.
 
9. The computer-readable medium of any one of claims 6 to 8, wherein the electronic device is a handheld mobile device.
 
10. The computer-readable medium of any one of claims 6 to 9, further comprising instructions which, when executed by the one or more processors of the electronic device, cause the electronic device to:

upon determining that the voice input was spoken by the authorized user, unlock the electronic device;

receive a subsequent voice input at the electronic device, the subsequent voice input comprising a subsequent command associated with the restricted feature of the electronic device; and

provide access to the restricted feature of the electronic device based on the subsequent command without determining whether the voice input was spoken by the authorized user of the electronic device.


 
11. An electronic device, comprising:

at least one processor; and

memory storing: a voiceprint; and one or more programs for execution by the at least one processor, the one or more programs including instructions for:
while the electronic device is in a locked state:

receiving a voice input, the voice input comprising a command associated with a restricted feature of the electronic device;

determining whether the voice input was spoken by an authorized user of the electronic device, the determination comprising comparing the voice input to a voiceprint of the authorized user;

processing the voice input to identify the command; and

upon determining that the voice input was spoken by the authorized user, performing the identified command to invoke the restricted feature of the electronic device.


 
12. The electronic device of claim 11, wherein the one or more programs further include instructions for:
upon determining that the voice input was not spoken by the authorized user, denying access to the restricted feature of the device.
 
13. The electronic device of any one of claims 11 to 12, wherein the one or more programs further include instructions for:
receiving the voiceprint from a voiceprint service through a network interface of the device.
 
14. The electronic device of any one of claims 11 to 13, wherein the electronic device is a handheld mobile device.
 
15. The electronic device of any one of claims 11 to 14, wherein the one or more programs further include instructions for:

upon determining that the voice input was spoken by the authorized user, unlocking the electronic device;

receiving a subsequent voice input at the electronic device, the subsequent voice input comprising a subsequent command associated with the restricted feature of the electronic device; and

providing access to the restricted feature of the electronic device based on the subsequent command without determining whether the voice input was spoken by the authorized user of the electronic device.


 




Drawing

(Drawings not reproduced in this text version.)

REFERENCES CITED IN THE DESCRIPTION



This list of references cited by the applicant is for the reader's convenience only. It does not form part of the European patent document. Even though great care has been taken in compiling the references, errors or omissions cannot be excluded and the EPO disclaims all liability in this regard.

Patent documents cited in the description