(19)
(11)EP 3 629 129 A1

(12)EUROPEAN PATENT APPLICATION

(43)Date of publication:
01.04.2020 Bulletin 2020/14

(21)Application number: 18212822.3

(22)Date of filing:  15.12.2018
(51)International Patent Classification (IPC): 
G06F 3/01(2006.01)
G06F 3/0346(2013.01)
G06F 3/0488(2013.01)
G06F 3/03(2006.01)
(84)Designated Contracting States:
AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
Designated Extension States:
BA ME
Designated Validation States:
KH MA MD TN

(30)Priority: 25.09.2018 US 201816141966

(71)Applicant: XRSpace CO., LTD.
Taoyuan City 330 (TW)

(72)Inventors:
  • Chou, Peter
    100 Taipei City (TW)
  • Chu, Feng-Seng
    235 New Taipei City (TW)
  • Lin, Yen-Hung
    330 Taoyuan City (TW)
  • Ke, Shih-Hao
    251 New Taipei City (TW)
  • Chen, Jui-Chieh
    100 Taipei City (TW)

(74)Representative: Straus, Alexander et al
2K Patent- und Rechtsanwälte Dr. Alexander Straus Keltenring 9
82041 München / Oberhaching (DE)

 
Remarks:
Amended claims in accordance with Rule 137(2) EPC.
 


(54)METHOD AND APPARATUS OF INTERACTIVE DISPLAY BASED ON GESTURE RECOGNITION


(57) A method (9) of interactive display based on gesture recognition includes determining a plurality of gestures corresponding to a plurality of images (901), interpreting a predetermined combination of gestures among the plurality of gestures as a command (904), and displaying a scene in response to the command (905).




Description

Field of the Invention



[0001] The invention relates to a method and apparatus of interactive display, and more particularly, to a method and apparatus of interactive display based on gesture recognition.

Background of the Invention



[0002] Image processing is widely used in a variety of applications, which may involve two-dimensional (2D) images, three-dimensional (3D) images, or combinations of multiple images of different types. For example, 3D images may be directly generated using a depth imager such as a structured light (SL) camera or a time of flight (ToF) camera. Such 3D images are also referred to herein as depth images, and commonly utilized in machine vision applications including those involving gesture recognition.

[0003] In a typical gesture recognition arrangement, raw image data from an image sensor is usually subject to various preprocessing operations. The preprocessed image data is then subject to additional processing used to recognize gestures in the context of particular gesture recognition applications. Such applications may be implemented in video gaming systems, kiosks or other systems providing a gesture-based user interface, for example, electronic consumer devices such as virtual reality devices, laptop computers, tablet computers, desktop computers, mobile phones, interactive projectors and television sets.

[0004] Therefore, the algorithm for gesture recognition becomes crucial to facilitating the interaction between the user and the electronic device.

Summary of the Invention



[0005] With this in mind, the application aims at providing a method and apparatus of interactive display based on gesture recognition for an interactive display system.

[0006] This is achieved by a method and apparatus of interactive display according to claims 1 and 12. The dependent claims pertain to corresponding further developments and improvements.

[0007] As will be seen more clearly from the detailed description following below, an embodiment of the present disclosure discloses a method of interactive display based on gesture recognition. The method comprises determining a plurality of gestures corresponding to a plurality of images, interpreting a predetermined combination of gestures among the plurality of gestures as a command, and displaying a scene in response to the command.

[0008] An embodiment of the present disclosure further discloses an apparatus for an interactive display system. The apparatus includes a processing device and a memory device, wherein the memory device is coupled to the processing device and configured to store the above-mentioned method of interactive display as a process of interactive display, to instruct the processing device to execute the process of interactive display.

[0009] The interactive display system of the present disclosure detects the predetermined combination of gestures performed by the user to instruct the interactive display system to respond to the user, e.g., the display device displays a different scene in a video game after the view angle of the player is changed, or displays a moving object in the video game. Therefore, the user may interact with the interactive display system without physical contact with any user input device.

Brief Description of the Drawings



[0010] 

FIG. 1 is a functional block diagram of an interactive display system according to an embodiment of the present disclosure.

FIG. 2 to FIG. 5 illustrate exemplary 3D images with gestures of a single hand according to various embodiments of the present disclosure.

FIG. 6 to FIG. 8 illustrate an interaction between a hand and a virtual object according to various embodiments of the present disclosure.

FIG. 9 is a flowchart of an interactive display process according to an embodiment of the present disclosure.


Detailed Description



[0011] FIG. 1 is a functional block diagram of an interactive display system 1 according to an embodiment of the present disclosure. The interactive display system 1 includes an image sensor 10, a gesture recognition device 11, a command detector 12, a display device 13, a central processing unit (hereinafter abbreviated CPU) 14, and a memory device 15.

[0012] The image sensor 10 is coupled to the gesture recognition device 11, and configured to generate a plurality of images IMG0-IMGn to the gesture recognition device 11. The gesture recognition device 11 is coupled to the image sensor 10 and the command detector 12, and configured to determine a plurality of gestures GR0-GRn corresponding to the plurality of images IMG0-IMGn for the command detector 12. The command detector 12 is coupled to the gesture recognition device 11 and the CPU 14, and configured to interpret the plurality of gestures GR0-GRn as a command CMD for the CPU 14. The CPU 14 is coupled to the command detector 12, the display device 13 and the memory device 15, and configured to output image data to the display device 13 according to the command CMD. The display device 13 is coupled to the CPU 14, and configured to display a scene.

[0013] In an embodiment, the image sensor 10 may be a depth imager such as a structured light (SL) camera or a time of flight (ToF) camera configured to generate 3-dimensional (3D) images with an object of interest. For example, the image sensor 10 may generate the 3D images IMG0-IMGn with a user's hand.

[0014] In an embodiment, the gesture recognition device 11 may identify a plurality of points of interest corresponding to the object of interest from the 3D images IMG0-IMGn, and determine the gestures GR0-GRn corresponding to the images IMG0-IMGn according to relative positions of the plurality of points of interest. For example, the points of interest may be fingertips, joints and a palm of the user's hand, wherein the points of interest respectively correspond to 3D coordinates within a spatial projection range of the image sensor 10; and the gesture recognition device 11 determines the gestures GR0-GRn according to relative positions of the fingertips, the joints and the palm of the user's hand.
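The relative-position tests described in paragraph [0014] reduce to simple 3D vector arithmetic. The following is an illustrative, non-normative sketch; the function names and coordinate conventions are assumptions, not part of the disclosure:

```python
import math

def vector(a, b):
    """Vector from 3D point a to 3D point b, e.g., P0J1 from P0 to J1."""
    return tuple(bi - ai for ai, bi in zip(a, b))

def distance(a, b):
    """Euclidean distance between two 3D points of interest."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def angle_deg(u, v):
    """Angle in degrees between two 3D vectors (clamped for stability)."""
    dot = sum(ui * vi for ui, vi in zip(u, v))
    nu = math.sqrt(sum(ui * ui for ui in u))
    nv = math.sqrt(sum(vi * vi for vi in v))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (nu * nv)))))
```

With these helpers, a "relative position within a range" condition becomes a threshold check on `distance(...)` or `angle_deg(...)` values computed from the identified points P0-P5 and J1-J5.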

[0015] In an embodiment, the gesture recognition may be performed by machine learning; for example, the gesture recognition device 11 may be a neural network model trained on data sets of 3D images, where the neural network model produces a gesture outcome corresponding to an input image.

[0016] In an embodiment, the command detector 12 may interpret a predetermined combination of gestures as the command CMD. For example, in the interactive display system 1, the predetermined combination of gestures refers to continuous movements of the user's fingertips, joints and palm for instructing the interactive display system 1 to respond to the user, e.g., to change a view angle of a player in a video game, to move an object in the video game, and so on.

[0017] The predetermined combination of gestures may be a sequence of a first gesture, a second gesture and the first gesture. In other words, the user may repeat the first gesture after the second gesture is made to instruct the interactive display system 1 to respond to the predetermined combination of gestures, e.g., the display device displays a different scene in the video game after the view angle of the player is changed, or displays a moving object in the video game. Therefore, the user may interact with the interactive display system 1 without physical contact with any user input device.
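The first-second-first sequence detection of paragraph [0017] can be sketched as a small state holder over the recognized gesture stream. The combination-to-command mapping below uses the examples given later in the description ("release, hold, release" → "grab"; "point, click, point" → "teleport"); the class and method names are hypothetical:

```python
from collections import deque

# Assumed mapping of gesture triplets to commands, per the disclosed examples.
COMBOS = {
    ("release", "hold", "release"): "grab",
    ("point", "click", "point"): "teleport",
}

class CommandDetector:
    """Emits a command when a first-second-first gesture sequence appears."""

    def __init__(self):
        self.recent = deque(maxlen=3)  # last three distinct gestures

    def feed(self, gesture):
        # Collapse consecutive duplicates so a pose held over many
        # frames counts as a single gesture in the sequence.
        if not self.recent or self.recent[-1] != gesture:
            self.recent.append(gesture)
        return COMBOS.get(tuple(self.recent))
```

In use, the gesture recognition device's per-frame outputs GR0-GRn would be fed one by one; the detector returns `None` until a full predetermined combination has been observed.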

[0018] FIG. 2 to FIG. 5 illustrate exemplary 3D images with predetermined combinations of gestures for a single hand according to various embodiments of the present disclosure. The gesture recognition device 11 may identify the fingertips P1-P5 of the thumb, index finger, middle finger, ring finger and little finger, the center of palm P0 of the hand, and the joints J1-J5 of the hand.

[0019] In FIG. 2, a "grab" command is interpreted when a predetermined combination of "release", "hold", and "release" gestures is recognized. For recognizing the "release" gesture, when detecting that distances between the fingertips P1-P5 and the center of palm P0 are within predetermined ranges and an angle between vectors P0J1 and P1J1 is within a range (i.e., the fingertip of thumb points away from the palm), the gesture recognition device 11 may recognize the "release" gesture. For recognizing the "hold" gesture, when detecting that distances between the fingertips P1-P5 and the center of palm P0 are within a range or approximate to zero (i.e., the fingertips P1-P5 move toward the center of palm P0 to make a fist), the gesture recognition device 11 may recognize the "hold" gesture.
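The "hold"/"release" distinction above is essentially a distance test between each fingertip and the palm center. A minimal sketch, assuming a hypothetical threshold in depth-camera units (the disclosure only says "within a range"):

```python
import math

def _dist(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def classify_hand(p0, fingertips, fist_range=3.0):
    """Return "hold" when all fingertips P1-P5 lie near the center of
    palm P0 (a fist), else "release". fist_range is an assumed,
    illustrative threshold, not a value from the disclosure."""
    if all(_dist(tip, p0) <= fist_range for tip in fingertips):
        return "hold"
    return "release"
```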

[0020] In FIG. 3, a "teleport" command is interpreted when a predetermined combination of "point", "click", and "point" gestures is recognized. For recognizing the "point" gesture, when detecting that the vector P2J2 is parallel to the vector J1J2, an angle between vectors P1J1 and P2J2 is within a range, and distances between the fingertips P3-P5 and the center of palm P0 are within a range, the gesture recognition device 11 may recognize the "point" gesture.

[0021] For recognizing the "click" gesture, when detecting that the vector P2J2 is parallel to the vectors J1J2 and P1J1 (or, the angle between the vectors P2J2, J1J2 and P1J1 approximates to zero), distances between the fingertips P1-P2 and the center of palm P0 are greater than a range, and distances between the fingertips P3-P5 and the center of palm P0 are within a range, the gesture recognition device 11 may recognize the "click" gesture. In one embodiment, when detecting that the fingertip of thumb is moving toward the joint of index finger and the palm, and the fingertips of middle, ring and little finger stay close to the palm, the gesture recognition device 11 may recognize the "click" gesture.
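The parallelism conditions for "click" can be checked by measuring the angles between the named vectors. A hedged sketch, where the 15-degree tolerance is an assumption standing in for the unspecified "range":

```python
import math

def _angle_deg(u, v):
    """Angle in degrees between two 3D vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (nu * nv)))))

def is_click(vec_p2j2, vec_j1j2, vec_p1j1, tol_deg=15.0):
    """ "click" requires the index-finger vector P2J2 to be roughly
    parallel to both J1J2 and the thumb vector P1J1, i.e., the angles
    between them approximate to zero. tol_deg is an assumed tolerance."""
    return (_angle_deg(vec_p2j2, vec_j1j2) <= tol_deg and
            _angle_deg(vec_p2j2, vec_p1j1) <= tol_deg)
```

The distance conditions on P1-P2 and P3-P5 relative to P0 would be combined with this check in the same way as the fist test shown earlier.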

[0022] Taking a video gaming system as an example, the CPU 14 or other image analysis modules may map the pointing direction of the index finger (e.g., the vector P2J2) onto a spatial projection range of a scene displayed by the display device 13; the command detector 12 detects the predetermined combination of "point", "click", and "point" gestures to output the "teleport" command, and the CPU 14 then generates a new scene based on the pointing direction of the index finger and the "teleport" command for the display device 13 to display to the user. Therefore, the user may interact with the interactive display system 1 without physical contact with any user input device.

[0023] In FIG. 4, a "key in" command is interpreted when a predetermined combination of "open nip", "close nip", and "open nip" gestures is recognized. For recognizing the "open nip" gesture, when detecting that distances between the fingertips P3-P5 and the center of palm P0 are within a range, a distance between the index fingertip P2 and the center of palm P0 is within a range and there is an angle within a range between the thumb and palm, the gesture recognition device 11 may recognize the "open nip" gesture. For recognizing the "close nip" gesture, when detecting that the fingertip of thumb touches the fingertip of index finger (or a distance between the points P1 and P2 is within a range or approximates to zero), the gesture recognition device 11 may recognize the "close nip" gesture.

[0024] In FIG. 5, a "duplicate" command is interpreted when a predetermined combination of "thumb up", "click", and "thumb up" gestures is recognized. For recognizing the "thumb up" gesture, when detecting that distances between the fingertips P2-P5 and the center of palm P0 are within a range and there is an angle within a range between the thumb and palm, the gesture recognition device 11 may recognize the "thumb up" gesture.

[0025] In summary of the embodiments of FIG. 2 to FIG. 5, a hand gesture can be recognized based on the relative positions of the fingertips and the center of the palm; the gesture recognition device 11 may recognize a gesture according to conditions set by these relative positions. Those skilled in the art may make modifications and alterations accordingly; the invention is not limited thereto.

[0026] In other embodiments, the CPU 14 or other image analysis modules (e.g., virtual object generation device) may project at least one object of interest (e.g., single hand or both hands) in the 3D images IMG0-IMGn as well as a virtual object of interest in a spatial projection range of a scene displayed by the display device 13, and the user may interact with the virtual object of interest by hand gestures. In an embodiment, the virtual object generation device may perform mesh generation or grid generation to generate the virtual object, and the user may input commands by hand gestures to instruct the interactive display system 1 to give response to the input commands, e.g., the display device displays an enlarged, shrinking or rotating virtual object or a pop up window based on the input commands.

[0027] FIG. 6 to FIG. 8 illustrate an interaction between a single hand or both hands and a virtual object according to various embodiments of the present disclosure.

[0028] In FIG. 6, the user may select a virtual object of interest OBJ_1 displayed by the display device 13 by making the "grab" command as mentioned in the embodiment of FIG. 2, and the user may further interact with the virtual object of interest OBJ_1 by hand gestures. The command detector 12 may detect a "rotate leftward" or "rotate rightward" command by tracking a movement of a detected gesture, e.g. "thumb up".

[0029] The "thumb up" gesture remains recognized as long as the relative positions between the fingertips P1-P5 and the center of the palm P0 remain unchanged, since the conditions corresponding to the "thumb up" gesture remain satisfied. In one embodiment, the movement of the "thumb up" gesture may be represented by the movement of the fingertip of thumb P1; the command detector 12 may track the movement of the fingertip of thumb P1 by computing the coordinate displacement of the fingertip of thumb P1 identified in the 3D images IMG0-IMGn, so as to determine whether the "thumb up" gesture rotates leftward or rightward. For example, when the "thumb up" gesture has been detected and the fingertip of thumb P1 moves down-left or down-right in the spatial projection range of the display device 13, the command detector 12 may determine the "rotate leftward" or "rotate rightward" command, respectively. The fingertip of thumb P1 may be a designated point of interest that is associated with the detected "thumb up" gesture.
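The displacement test for the designated point P1 can be sketched as follows. The coordinate frame (x grows rightward, y grows upward in camera space) and the sign conventions are assumptions for illustration:

```python
def rotate_command(p1_start, p1_end):
    """Infer a rotation command from the displacement of the thumb
    fingertip P1 while the "thumb up" pose is held across frames.
    Assumes camera-space axes: x rightward, y upward. A down-left move
    maps to "rotate leftward", a down-right move to "rotate rightward";
    any other displacement yields no command."""
    dx = p1_end[0] - p1_start[0]
    dy = p1_end[1] - p1_start[1]
    if dy < 0 and dx < 0:
        return "rotate leftward"
    if dy < 0 and dx > 0:
        return "rotate rightward"
    return None
```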

[0030] In FIG. 7, the user may use both hands to interact with a virtual object of interest OBJ_2 displayed by the display device 13. The gesture recognition device 11 may recognize the "hold" gesture for left and right hands of the user, and the command detector 12 may detect an "enlarge" or "shrink" command by tracking movements of detected gestures of both hands, e.g. "hold".

[0031] The "hold" gestures of both hands remain detected as long as the relative positions between the fingertips P1-P5 and the center of the palm P0 remain unchanged, since the conditions corresponding to the "hold" gestures remain satisfied. In an embodiment, the movements of the "hold" gestures of both hands may be represented by the movements of the centers of palm P0 of both hands (or any of the points of interest P1-P5 and J1-J5); the command detector 12 may track these movements by computing the coordinate displacements of the centers of palm P0 of both hands identified in the 3D images IMG0-IMGn, so as to determine whether the "hold" gestures of both hands move closer together or farther apart. For example, when the "hold" gestures of both hands have been detected and the centers of palm P0 of both hands move farther apart or closer together in the spatial projection range of the display device 13, the command detector 12 may determine the "enlarge" or "shrink" command, respectively. For example, the size of the virtual object of interest OBJ_2 is proportional to the coordinate displacement of the center of palm P0 of a single hand or the coordinate displacements of the centers of palm P0 of both hands.
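The two-hand scaling logic reduces to comparing the inter-palm distance across frames. A hedged sketch (function name and the returned scale factor are illustrative assumptions):

```python
import math

def _dist(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def scale_command(left_p0_a, right_p0_a, left_p0_b, right_p0_b):
    """While both hands hold the "hold" gesture, compare the distance
    between the two palm centers P0 in an earlier frame (a) and a later
    frame (b): moving apart maps to "enlarge", moving together to
    "shrink". Also returns a scale factor proportional to the change,
    mirroring the proportionality noted in the description."""
    d_a = _dist(left_p0_a, right_p0_a)
    d_b = _dist(left_p0_b, right_p0_b)
    if d_b > d_a:
        return "enlarge", d_b / d_a
    if d_b < d_a:
        return "shrink", d_b / d_a
    return None, 1.0
```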

[0032] In FIG. 8, the user may use either hand to interact with a virtual object of interest OBJ_3 displayed by the display device 13. In this embodiment, the virtual object of interest OBJ_3 is a virtual keyboard, and includes a plurality of sub-objects of interest. For example, the virtual keyboard includes a plurality of virtual keys corresponding to a plurality of characters.

[0033] The user may move either the left or right hand to where one of the plurality of virtual keys is projected in the spatial projection range of the display device 13, and then perform the "key in" command by performing the predetermined combination of "open nip", "close nip", and "open nip" gestures. The CPU 14 may determine the character corresponding to the "key in" command according to a location (or designated point) corresponding to the "key in" command, wherein the designated point may be the fingertip P1 of thumb or the fingertip P2 of index finger identified from the "close nip" gesture of the "key in" command. Then, the CPU 14 may instruct the display device 13 to display a pop up window with the character corresponding to the "key in" command. For example, the user may move the left hand to where a key corresponding to a character "C" is projected in the spatial projection range of the display device 13, and perform the "key in" command. The CPU 14 may determine that the character "C" is inputted by the user according to the detected "key in" command and the corresponding designated point, so as to instruct the display device 13 to display the pop up window with the character "C".
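Resolving the designated point to a character amounts to a nearest-key lookup in the projected keyboard layout. A minimal sketch, where the key-center map and the distance guard are hypothetical (the disclosure does not specify the lookup method):

```python
import math

def key_for_point(designated_point, key_centers, max_distance=2.0):
    """Map the designated point (e.g., the index fingertip P2 at the
    moment of the "close nip" gesture) to the nearest virtual key.
    key_centers is an assumed dict of character -> projected 3D key
    center; max_distance rejects presses far from every key."""
    best_char, best_d = None, float("inf")
    for char, center in key_centers.items():
        d = math.sqrt(sum((a - b) ** 2
                          for a, b in zip(designated_point, center)))
        if d < best_d:
            best_char, best_d = char, d
    return best_char if best_d <= max_distance else None
```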

[0034] Operations of the interactive display system 1 may be summarized into an interactive display process 9, as shown in FIG. 9, and the interactive display process 9 includes the following steps.

[0035] Step 901: Determine a plurality of gestures corresponding to a plurality of images.

[0036] Step 902: Determine whether a predetermined combination of gestures among the plurality of gestures is detected. Go to Step 904 if yes; go to Step 903 if no.

[0037] Step 903: Determine whether a movement of a gesture among the plurality of gestures is detected. Go to Step 904 if yes; return to Step 901 if no.

[0038] Step 904: Interpret the predetermined combination of gestures or the movement of the gesture as a command.

[0039] Step 905: Display a scene in response to the command.

[0040] In the interactive display process 9, Step 901 is performed by the gesture recognition device 11, Steps 902 to 904 are performed by the command detector 12, and Step 905 is performed by the CPU 14 and the display device 13. Detailed operation of the interactive display process 9 may be obtained by referring to descriptions regarding FIG. 1 to FIG. 8. The embodiments of FIG. 1 to FIG. 9 may be utilized in applications related to at least one of augmented reality (AR), virtual reality (VR), mixed reality (MR) and extended reality (XR).
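Steps 901 to 905 can be sketched as a loop with the components of FIG. 1 injected as callables. Everything here is an illustrative assumption about wiring, not a claimed implementation:

```python
def interactive_display_process(images, recognize, detect_combo,
                                detect_movement, render):
    """Sketch of process 9. recognize maps an image to a gesture
    (Step 901, gesture recognition device 11); detect_combo and
    detect_movement map the gesture history to a command or None
    (Steps 902-904, command detector 12); render displays a scene for
    a command (Step 905, CPU 14 and display device 13)."""
    gestures = []
    scenes = []
    for img in images:
        gestures.append(recognize(img))      # Step 901
        cmd = detect_combo(gestures)         # Step 902
        if cmd is None:
            cmd = detect_movement(gestures)  # Step 903
        if cmd is not None:                  # Step 904: interpreted
            scenes.append(render(cmd))       # Step 905
    return scenes
```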

[0041] To sum up, the interactive display system of the present disclosure detects the predetermined combination of gestures or the movement of the gesture performed by the user to instruct the interactive display system to respond to the user, e.g., the display device displays a different scene in the video game after the view angle of the player is changed, or displays a moving object in the video game. Therefore, the user may interact with the interactive display system without physical contact with any user input device.


Claims

1. A method (9) of interactive display for an electronic device, characterized by comprising:

determining a plurality of gestures corresponding to a plurality of images (901);

interpreting a predetermined combination of gestures among the plurality of gestures as a first command (904); and

displaying a first scene in response to the first command (905).


 
2. An apparatus (1) for an interactive display system, characterized by comprising:

a processing device (14); and

a memory device (15) coupled to the processing device, and configured to store a process of interactive display to instruct the processing device (14) to execute the process (9) of interactive display, wherein the process (9) of interactive display comprises:

determining a plurality of gestures corresponding to a plurality of images;

interpreting a predetermined combination of gestures among the plurality of gestures as a first command; and

outputting a first scene in response to the first command.


 
3. The method (9) of claim 1 or the apparatus (1) of claim 2, characterized in that the predetermined combination of gestures is a sequence of a first gesture, a second gesture, and the first gesture.
 
4. The method (9) of claim 1 or the apparatus (1) of claim 2, characterized in that determining the plurality of gestures corresponding to the plurality of images comprises:

identifying a first object of interest in the plurality of images;

identifying a plurality of points of interest of the first object of interest; and

determining one of the plurality of gestures according to the plurality of points of interest;

wherein the first object of interest is a first hand of a user, and the plurality of points of interest comprises fingertips, joints and a palm of the first hand.


 
5. The method (9) of claim 1 or the apparatus (1) of claim 2, characterized by the method or the process (9) of interactive display further comprising:

identifying a virtual object of interest in the plurality of images; and

selecting the virtual object of interest in response to the first command.


 
6. The method (9) of claim 5 or the apparatus (1) of claim 5, characterized by the method or the process (9) of interactive display further comprising:

tracking a movement of one of the plurality of gestures;

interpreting the movement of one of the plurality of gestures as a second command; and

displaying a second scene in response to the second command.


 
7. The method (9) of claim 6 or the apparatus (1) of claim 6, characterized in that tracking the movement of one of the plurality of gestures comprises:

identifying a plurality of points of interest of the first object of interest; and

tracking a movement of a designated point among the plurality of points of interest of the first object of interest, wherein the designated point of interest is associated with the one of the plurality of gestures.


 
8. The method (9) of claim 5 or the apparatus (1) of claim 5, characterized in that determining the plurality of gestures corresponding to the plurality of images comprises:

identifying a first object of interest and a second object of interest in one of the plurality of images;

identifying a plurality of first points of interest of the first object of interest and a plurality of second points of interest of the second object of interest; and

determining gestures of the first and second objects of interest according to the plurality of first and second points of interest;

wherein the first object of interest is a first hand of a user, the second object of interest is a second hand of the user and the plurality of first and second points of interest comprises fingertips, joints and palms of the first and second hands.


 
9. The method (9) of claim 8 or the apparatus (1) of claim 8, characterized in that determining the plurality of gestures corresponding to the plurality of images comprises:

tracking movements of the gestures of the first and second objects of interest;

interpreting the movements of the first and second objects of interest as a second command; and

displaying a second scene in response to the second command.


 
10. The method (9) of claim 9 or the apparatus (1) of claim 9, characterized in that tracking the movements of the gestures of the first and second objects of interest comprises:

tracking movements of a first designated point among the plurality of first points of interest of the first object of interest and a second designated point among the plurality of second points of interest of the second object of interest, wherein the first and second designated points of interest are associated with the gestures of the first and second objects of interest.


 
11. The method (9) of claim 1 or the apparatus (1) of claim 2, characterized by the method or the process (9) of interactive display further comprising:

identifying a virtual object of interest in the plurality of images, wherein the virtual object of interest comprises a plurality of sub-objects of interest corresponding to a plurality of characters; and

determining one of the plurality of characters corresponding to one of the plurality of sub-objects of interest according to a designated point corresponding to the first command;

wherein displaying the first scene in response to the first command comprises:

displaying a pop up window with the one of the plurality of characters corresponding to the first command.


 
12. The method (9) of claim 11 or the apparatus (1) of claim 11, characterized in that determining one of the plurality of characters corresponding to one of the plurality of sub-objects of interest according to the designated point corresponding to the first command comprises:

identifying a first object of interest in the plurality of images;

identifying a plurality of points of interest of the first object of interest; and

determining the designated point corresponding to the first command among the plurality of points of interest;

wherein the first object of interest is a first hand of a user, the plurality of points of interest comprises fingertips of the first hand, and the designated point is a fingertip of thumb or a fingertip of index finger of the first hand.


 
13. The apparatus of claim 2, characterized in that the interactive display system comprises:

an image sensor (10) coupled to the processing device (14), and configured to generate the plurality of images to the processing device; and

a display device (13) coupled to the processing device (14), and configured to display the first scene outputted by the processing device (14).


 
14. The apparatus of claim 13, characterized in that the image sensor is a structured light camera or a time of flight camera, and each of the plurality of images is a 3-dimensional image.
 


Amended claims in accordance with Rule 137(2) EPC.


1. A method (9) of interactive display for an electronic device, characterized by comprising:

determining a plurality of gestures corresponding to a plurality of images (901);

interpreting a predetermined combination of gestures among the plurality of gestures as a first command (904), wherein the predetermined combination of gestures is a sequence of a first gesture, a second gesture, and the first gesture; and

displaying a first scene in response to the first command (905).


 
2. The method (9) of claim 1, characterized in that determining the plurality of gestures corresponding to the plurality of images comprises:

identifying a first object of interest in the plurality of images;

identifying a plurality of points of interest of the first object of interest; and

determining one of the plurality of gestures according to the plurality of points of interest;

wherein the first object of interest is a first hand of a user, and the plurality of points of interest comprises fingertips, joints and a palm of the first hand.


 
3. The method (9) of claim 1, characterized by the method or the process (9) of interactive display further comprising:

identifying a virtual object of interest in the plurality of images; and

selecting the virtual object of interest in response to the first command.


 
4. The method (9) of claim 3, characterized by the method or the process (9) of interactive display further comprising:

tracking a movement of one of the plurality of gestures;

interpreting the movement of one of the plurality of gestures as a second command; and

displaying a second scene in response to the second command.


 
5. The method (9) of claim 4, characterized in that tracking the movement of one of the plurality of gestures comprises:

identifying a plurality of points of interest of the first object of interest; and

tracking a movement of a designated point among the plurality of points of interest of the first object of interest, wherein the designated point of interest is associated with the one of the plurality of gestures.


 
6. The method (9) of claim 3, characterized in that determining the plurality of gestures corresponding to the plurality of images comprises:

identifying a first object of interest and a second object of interest in one of the plurality of images;

identifying a plurality of first points of interest of the first object of interest and a plurality of second points of interest of the second object of interest; and

determining gestures of the first and second objects of interest according to the plurality of first and second points of interest;

wherein the first object of interest is a first hand of a user, the second object of interest is a second hand of the user and the plurality of first and second points of interest comprises fingertips, joints and palms of the first and second hands.


 
7. The method (9) of claim 6, characterized in that determining the plurality of gestures corresponding to the plurality of images comprises:

tracking movements of the gestures of the first and second objects of interest;

interpreting the movements of the first and second objects of interest as a second command; and

displaying a second scene in response to the second command.


 
8. The method (9) of claim 7, characterized in that tracking the movements of the gestures of the first and second objects of interest comprises:
tracking movements of a first designated point among the plurality of first points of interest of the first object of interest and a second designated point among the plurality of second points of interest of the second object of interest, wherein the first and second designated points of interest are associated with the gestures of the first and second objects of interest.
 
9. The method (9) of claim 1, characterized by the method or the process (9) of interactive display further comprising:

identifying a virtual object of interest in the plurality of images, wherein the virtual object of interest comprises a plurality of sub-objects of interest corresponding to a plurality of characters; and

determining one of the plurality of characters corresponding to one of the plurality of sub-objects of interest according to a designated point corresponding to the first command;

wherein displaying the first scene in response to the first command comprises:

displaying a pop up window with the one of the plurality of characters corresponding to the first command.


 
10. The method (9) of claim 9, characterized in that determining one of the plurality of characters corresponding to one of the plurality of sub-objects of interest according to the designated point corresponding to the first command comprises:

identifying a first object of interest in the plurality of images;

identifying a plurality of points of interest of the first object of interest; and

determining the designated point corresponding to the first command among the plurality of points of interest;

wherein the first object of interest is a first hand of a user, the plurality of points of interest comprises fingertips of the first hand, and the designated point is a fingertip of thumb or a fingertip of index finger of the first hand.


 
11. A computation device (16) for an interactive display system (1), characterized by comprising:

a processing device (14); and

a memory device (15) coupled to the processing device (14), and configured to store a process (9) of interactive display to instruct the processing device (14) to execute the process (9) of interactive display, wherein the process (9) of interactive display comprises:

determining a plurality of gestures corresponding to a plurality of images (901);

interpreting a predetermined combination of gestures among the plurality of gestures as a first command (904), wherein the predetermined combination of gestures is a sequence of a first gesture, a second gesture, and the first gesture; and

outputting a first scene in response to the first command (905).


 




Drawing






















Search report








