(19) European Patent Office
(11) EP 3 893 090 A1

(12) EUROPEAN PATENT APPLICATION

(43) Date of publication:
13.10.2021 Bulletin 2021/41

(21) Application number: 20169015.3

(22) Date of filing: 09.04.2020
(51) International Patent Classification (IPC):
G06F 3/01 (2006.01)
(52) Cooperative Patent Classification (CPC):
G06F 3/012; G06F 3/013
(84) Designated Contracting States:
AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
Designated Extension States:
BA ME
Designated Validation States:
KH MA MD TN

(71) Applicant: Irisbond Crowdbonding, S.L.
20018 Guipuzcoa (ES)

(72) Inventor:
  • Martínez, Alberto Caro
    48220 Abadiño Matiena, Bizkaia (ES)

(74) Representative: Fleuchaus, Michael A. et al
Fleuchaus & Gallo Partnerschaft mbB
Patentanwälte
Steinerstraße 15/Haus A
81369 München (DE)

 
Remarks:
Amended claims in accordance with Rule 137(2) EPC.
 


(54) METHOD FOR EYE GAZE TRACKING


(57) The invention refers to a computer-implemented method for locating a point of gaze onto a screen (151). The method comprises the step (210) of initiating the acquiring of an image (300) and the steps (220, 230) of initiating the locating of a first face landmark location (301) and a second face landmark location (302) in the image (300). The method further comprises the step (240) of initiating the selecting of a first region of interest (310) in the image (300), said selection being performed by using the aforementioned landmark locations (301, 302). The method comprises the step (250) of initiating the constructing of a gaze vector, wherein the construction of the gaze vector is performed by means of an artificial neural network using the first region of interest (310) as input. Moreover, the method comprises the step (260) of initiating the locating of the point of gaze onto the screen (151), wherein the locating of the point of gaze is performed by means of the gaze vector.




Description


[0001] The invention refers to the field of eye gaze tracking methods and eye gaze tracking devices for locating points of gaze onto a screen, e.g. onto a display unit of a computing device. According to the present invention, a computing device may for instance be a personal computer, a tablet, a laptop, a smart phone, a video game console, a camera, a head-mounted display (HMD), a smart television, or the like.

[0002] Eye gaze tracking methods are widely used in Human Computer Interaction. Computer programs implementing said methods are able to track the point of gaze of the user, who is thus able to provide inputs to a computing device without using conventional input devices (keyboards, mouse devices, touch pads, or the like) by simply looking at particular locations on the display unit of the computing device. For instance, the user may provide inputs to a graphical user interface (GUI) of a computer without the need to use the hands, thereby e.g. enabling users with motor disabilities to successfully interact with a computer.

[0003] Methods for eye gaze tracking are also used in the assembly or maintenance of complex systems. The operators performing such assembly or maintenance tasks often use HMDs and, in particular, augmented reality HMDs, which augment the operator's natural vision by means of computer graphics techniques. HMDs implementing eye gaze tracking methods can be controlled hands-free by the operators, who thus have both hands available for the task.

[0004] Eye gaze tracking methods are of importance in the automotive industry as well. For instance, these methods may be implemented in driving assistance systems, which are thus able to track the eye gaze of the driver to see whether he/she is paying attention to the road, e.g. whether he/she is looking through the windshield of the car. For instance, driving assistance systems implementing eye gaze tracking methods are able to detect whether the driver is looking at the screen of the vehicle rear view camera and to activate said camera only when needed, i.e. only when the driver looks at the screen.

[0005] Methods for eye gaze tracking may also allow for a hands-free interaction between the driver and the software of a vehicle, so that the former may give instructions to the latter without taking her/his hands off the wheel. The driver may thus instruct the software to perform specific activities, such as regulating the intensity of the vehicle lights, locking/unlocking the doors, controlling the vehicle speed and the like, by simply looking in specific directions.

[0006] Eye gaze tracking methods for locating points of gaze onto a screen are known in the art. Known eye gaze tracking methods may attain a relatively high accuracy only if they rely on dedicated hardware components, such as infrared (IR) cameras, wearable hardware components and eye tracking glasses, or on user-dependent calibration procedures. For instance, methods using IR cameras may reach an accuracy of the order of 0.5°. Such dedicated hardware components, however, are relatively expensive and are absent in mainstream computing devices such as laptops and smart phones. Moreover, wearable hardware components may be uncomfortable to use and hinder the mobility of the user. The calibration procedures, instead, are time consuming and limit the usability of the devices implementing known methods for eye gaze tracking and, in turn, their practical utility.

[0007] In general, known eye gaze tracking methods suffer from limited accuracy under real-world operating conditions, e.g. in absence of dedicated hardware components and/or under conditions characterized by a relatively broad variability of the eye appearance, the illumination, the head pose, the technical specifications of the camera, the image quality, or the like.

[0008] These problems are at least partially solved by the invention of the present application, which relates to a computer-implemented method according to claim 1, to a device according to claim 14, to a computer program product according to claim 15, and to a computer-readable storage medium according to claim 16. Embodiments of the invention are subject matter of the dependent claims.

[0009] The present invention concerns a computer-implemented method for locating a first point of gaze onto a screen, the method comprising at least the steps of:
  • initiating the acquiring of at least a first image;
  • initiating the locating of a first face landmark location of a first face landmark in the first image;
  • initiating the locating of a second face landmark location of a second face landmark in the first image;
  • initiating the selecting of a first region of interest in the first image, wherein the selecting of the first region of interest is performed by using at least the first face landmark location and the second face landmark location;
  • initiating the constructing of a first gaze vector, wherein the constructing of the first gaze vector is performed by means of at least an artificial neural network, the artificial neural network using as input at least the first region of interest; and
  • initiating the locating of the first point of gaze onto the screen, wherein the locating of the first point of gaze is performed by means of at least the first gaze vector.


[0010] The screen may be a concave or a convex surface. In particular, the screen may be a substantially flat surface, e.g. a panel such as a canvas, a glass panel and/or a windscreen of a vehicle. The screen may be a display unit of a computing device. For instance, the screen may be the monitor or the screen of a computing device, e.g. a substantially flat area of the computing device, on which a GUI and/or data, in particular data in the form of images, are displayed. The points on the screen may be expressed in terms of a two-dimensional screen coordinate in a two-dimensional reference frame defined onto the screen. The screen coordinate may in particular be a Cartesian or a polar coordinate. For example, the screen location of a point of a screen is described by its two-dimensional screen coordinate, (a, b), with respect to a screen reference frame centered at a point on the screen, e.g. at the upper left corner of the screen.

[0011] According to the present invention, an image may be a vector image or a two-dimensional grid of pixels, e.g. a rectangular grid of pixels. In particular, the location of a pixel in an image may be univocally determined in terms of its two-dimensional image coordinate in the image, said coordinate representing the location of said pixel in the two-dimensional grid of pixels. The two-dimensional image coordinate may be a Cartesian or a polar coordinate with respect to a two-dimensional reference frame in the plane of the image, e.g. in the plane comprising the grid of pixels. For instance, the two-dimensional image coordinate of a pixel is the coordinate of the pixel in the image plane reference frame of the first image.

[0012] In particular, the first image is a first two-dimensional grid of pixels, e.g. a first rectangular grid of pixels. The entries of the first two-dimensional grid may be arranged in columns and rows, which may be numbered in ascending order in such a way that each column and each row are associated to a column number and a row number, respectively. In particular, the location of each pixel in the first image may be univocally determined in terms of the row number, r, of the row to which the pixel belongs and of the column number, c, of the column to which the pixel belongs. Thus, the two-dimensional image coordinate of said pixel in the first image may be the two-dimensional vector (r, c). For instance, the two-dimensional image coordinate of a pixel of the first image is the coordinate of the pixel in the image plane reference frame of the first image.

[0013] An image, e.g. the first image, may be encoded by at least a bitmap. A bitmap encoding an image or a portion thereof may comprise, e.g. consist of, an array of bits that specifies the color of each pixel of said image or portion thereof. The bitmap may be palette-indexed so that the entries of the array are indexes onto a color table. The entries of the array may store the bits encoding the color of the pixel. In particular, a bitmap may comprise, e.g. consist of, a dot matrix data structure representing the two-dimensional grid of pixels. The bitmap may further comprise information concerning the number of bits per pixel, the number of pixels per row of the two-dimensional grid of pixels and/or the number of pixels per column of said rectangular grid. An image viewer may use the information encoded in the bitmap to render the image or the portion thereof on the screen of a computing device, e.g. of a computing device carrying out the method of the present invention.

[0014] An image, e.g. the first image, may be stored, in particular temporarily stored, in a primary and/or in a secondary memory of a computing device, e.g. of the computing device carrying out the method of the present invention. According to the present invention, an image may be acquired by accessing the memory in which said image is stored. Alternatively, or in conjunction with the above, the acquiring of an image may be performed by capturing said image by means of a recording device, e.g. a photo and/or a video recording device, such as a photo or a video camera. The photo recording device and/or the video recording device may be integrated in a computing device and in particular in a computing device carrying out the method of the present invention. The captured image may then be stored in the primary and/or secondary memory of the computing device and may be accessed to locate the face landmarks and/or to select regions of interest.

[0015] A face landmark is in particular a point in the shape of human faces which, typically, marks a distinctive anatomical region thereof. For instance, a face landmark can be the tip of the nose, the right corner of the mouth or the left corner of the mouth. Similarly, a face landmark may be a point of an eyebrow or of a lip which, together with other landmarks, marks said eyebrow and said lip, respectively.

[0016] A face landmark may, for example, be an eye landmark. An eye landmark is in particular a point of an eye which, together with other eye landmarks, marks the shape of said eye. For example, an eye landmark may be the left or right corner of an eye, a point of the eyelid or the center of the eyeball. An eye landmark may be an iris landmark. In particular, an iris landmark is a point of the iris, which, together with other iris landmarks, marks the shape of said iris. For example, an iris landmark is the center of the iris.

[0017] The location in an image of a face landmark, e.g. the first and/or the second face landmark location, is in particular the location in the image of the representation of said landmark in the image. For example, if in the image a facial landmark is represented by a set of pixels, the face landmark location may be the location in the image of a reference pixel of said set of pixels. This way, the face landmark location is univocally expressed in terms of the location of this reference pixel, e.g. of the two-dimensional image coordinate thereof.

[0018] The locating of the first facial landmark and/or of the second facial landmark in the first image may be performed by means of the algorithm disclosed in the article "One Millisecond Face Alignment with an Ensemble of Regression Trees" by V. Kazemi et al., DOI: 10.1109/cvpr.2014.241, hereinafter referred to as the "first location algorithm". Said algorithm comprises an ensemble of regression trees.

[0019] In particular, the first location algorithm allows for locating a set of n0 face landmarks, said set comprising at least the first and the second landmark. For instance, n0 is comprised between 2 and 194 and, in particular between 30 and 130. Moreover, n0 may be comprised between 50 and 100 and, more particularly, is equal to either 6 or 68.
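
By way of illustration only, the following Python sketch locates face landmarks with the dlib library, whose shape predictor implements the ensemble-of-regression-trees approach of Kazemi et al.; the library choice, the model file name, and the eye-corner indices (iBUG 68-point scheme) are assumptions of this sketch, not part of the application:

    import dlib

    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # n0 = 68

    img = dlib.load_rgb_image("first_image.jpg")
    faces = detector(img, 1)  # upsample the image once so smaller faces are found
    if faces:
        shape = predictor(img, faces[0])
        # Two-dimensional image coordinates of two eye-corner landmarks
        # (iBUG 68-point indices 36 and 39):
        first_face_landmark = (shape.part(36).x, shape.part(36).y)
        second_face_landmark = (shape.part(39).x, shape.part(39).y)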

[0020] The three-dimensional location of a face landmark, e.g. of the eyeball center or the iris center, is the position of said landmark in the three-dimensional space. For instance, the three-dimensional location of a face landmark is described by a three-dimensional vector describing the three-dimensional coordinate of this landmark in the camera reference frame. In particular, said three-dimensional coordinate may be obtained from the two-dimensional image coordinate of the landmark in the image plane reference frame of the first image via the inverse of the camera matrix associated to the camera acquiring the first image.

[0021] The first region of interest (ROI) may comprise at least a first eye or a portion thereof. The first ROI is represented by a set of pixels of the first image. In particular, the first ROI is a two-dimensional grid of pixels of the first image and, more particularly, a rectangular grid of pixels of the first image. The first ROI may be encoded by at least a first bitmap.

[0022] For example, if the first face landmark is the tip of the nose and the second landmark is a point of the left eyebrow, the first ROI may consist of the pixels of the rows comprised between the row of the second landmark and the row of the first landmark. Moreover, if the first and the second face landmark are, respectively, the left and the right corner of the left eye, the first ROI may consist of the pixels with column number comprised in the interval between the integer number C1 = CFL1 - E and the integer number C2 = CFL2 + E, wherein CFL1 and CFL2 are the column numbers of the first and of the second face landmark, respectively. For instance, E is a number comprised in the interval between 5 and 15 and, in particular, in the interval between 8 and 12. More particularly, the integer number E may be equal to 10.
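
A minimal numpy sketch of this column-wise selection, assuming the first image is a (rows, columns) pixel grid; the clipping to the image bounds is an added safeguard, not part of the passage above:

    import numpy as np

    def select_first_roi(image: np.ndarray, c_fl1: int, c_fl2: int, e: int = 10) -> np.ndarray:
        """c_fl1, c_fl2: column numbers of the first and second face landmark."""
        c1 = max(0, min(c_fl1, c_fl2) - e)               # C1 = CFL1 - E
        c2 = min(image.shape[1], max(c_fl1, c_fl2) + e)  # C2 = CFL2 + E
        return image[:, c1:c2]  # all rows, columns C1 to C2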

[0023] The location of a pixel in the first ROI may be univocally determined in terms of two-dimensional coordinates, which represent the location of said pixel in the two-dimensional grid of pixels representing the first ROI. Moreover, the location of a pixel in the first two-dimensional grid may be expressed in terms of Cartesian or polar coordinates with respect to a two-dimensional reference frame in the plane of the first image.

[0024] The selecting of the first ROI may comprise storing information about the first ROI. The information about the first ROI may comprise information about the color of the pixels of the first ROI and the location of said pixels in the first ROI. The information about the first ROI is in particular comprised in the first bitmap. The selecting of the first ROI may comprise storing data comprising the memory address of the bits storing the information about the first ROI. For instance, said bits may be arranged in the bitmap encoding the first image. Said data and/or the information about the first ROI may be stored, e.g. temporarily stored, in the primary and/or secondary memory of a computing device, e.g. of the computing device carrying out the method of the present invention.

[0025] Any structured format may be used to encode the information about the first ROI, insofar as said information may be retrieved and interpreted correctly. For example, the information about the location of the pixels of the first ROI may specify the location of only some of the pixels comprised in the first ROI, insofar as said information is sufficient to correctly obtain the location of each of the pixels of the first ROI. For example, if the first ROI is a rectangular grid of the first image, the information about the location of the vertices of said grid is sufficient to obtain the location of each of the pixels of the first ROI.

[0026] In particular, the first gaze vector estimates the three-dimensional direction in which the eye contained in the first ROI is looking. The first gaze vector may be a three-dimensional unit vector, which may be expressed in Cartesian, spherical or cylindrical coordinates. For example, the first gaze vector may be expressed in spherical coordinates with respect to the three-dimensional location of the eyeball center or of the iris center of the eye contained in the first ROI. In this case, the first gaze vector may be expressed in terms of a polar angle and an azimuthal angle.
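
For instance, a first gaze vector given by a polar angle and an azimuthal angle corresponds to the following three-dimensional unit vector (a minimal sketch; the axis convention is an assumption):

    import numpy as np

    def gaze_vector(theta: float, phi: float) -> np.ndarray:
        """theta: polar angle, phi: azimuthal angle, both in radians."""
        return np.array([np.sin(theta) * np.cos(phi),
                         np.sin(theta) * np.sin(phi),
                         np.cos(theta)])  # unit length by construction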

[0027] In particular, the artificial neural network (ANN) is a computational model that maps an ANN input into an ANN output and that comprises a plurality of interconnected nodes, each node mapping an input into an output. In particular, the nodes of the ANN are interconnected to one another so that, for each node, the input of said each node comprises the output of another node and/or the output of said each node is part of the input of another node. For example, the output of a generic node of the ANN is part of the ANN output and/or of the input of another node. In particular, the node input of a generic node of the ANN comprises one or more data items, each data item being either the output of another node or a data item of the ANN input.

[0028] For instance, each node of the ANN may map its input to its output by means of an activation function which may be node-dependent. Typically, the activation function of a node may depend on one or more weights weighting the data items of the input of said node.

[0029] In particular, the output of a node of the ANN may depend on a threshold. For instance, the output of a node depends on whether the value of the activation function evaluated at the input of said node is greater than, equal to, or lower than the threshold.

[0030] For example, the ANN may be a VGG-16 neural network or a MnistNet neural network. The ANN may be a convolutional neural network such as AlexNet. For example, the ANN may be part of a generative adversarial network.

[0031] The values of the weights of the ANN may be obtained by training the ANN by means of at least a training dataset. During training, the values of the weights are iteratively adjusted so as to minimize the value of a cost function which, typically, depends on the weights of the ANN, the ANN inputs, the ANN outputs, and/or the biases. For example, the training dataset may be the MPIIGaze or the GazeCapture dataset or a synthetic dataset such as the SynthesEyes dataset or the UnityEyes dataset.

[0032] The performance of the ANN may be improved by using data augmentation during training. For example, the training dataset may be expanded by augmenting at least some of its data. Data augmentation may be performed by translating and/or rotating at least some of the images comprised in the training dataset. At least some of the images of the training dataset may be augmented by varying their intensity and/or by adding lines or obstructions thereto.

[0033] The ANN input may include information about a plurality of pixels of an input image or a portion thereof. For example, the ANN input includes information about the position and the color of said pixels. In particular, the information about the position of the pixels may be encoded in the two-dimensional coordinates of the pixels with respect to a two-dimensional reference frame in the plane of the input image.

[0034] When constructing the first gaze vector, the ANN uses as input the first ROI. In this case, in particular, the ANN input comprises information about the position and the color of the pixels of the first ROI and, more particularly, the ANN input may comprise or consist of the first bitmap. The ANN output may comprise information characterising the first gaze vector, e.g. the spherical, cylindrical or Cartesian coordinates thereof. In this case, the ANN constructs the first gaze vector.

[0035] The three-dimensional location of the first point of gaze is in particular the intersection between the screen and a first gaze line. The first gaze line is in particular the line that intersects the three-dimensional location of the eyeball center or of the iris center of the eye contained in the first ROI and that is parallel to the first gaze vector. The locating of the first point of gaze onto the screen may be obtained by modelling the screen in terms of a plane (hereinafter also referred to as: "screen plane") and by constructing the first point of gaze as the intersection between said plane and the first gaze line. For example, the three-dimensional coordinate, P1, of the location of the first point of gaze in a given reference frame (e.g. the camera reference frame) is given by:

P1 = g0 + [ ((PR − g0) · n) / (g1 · n) ] · g1 ,    (1)

wherein g1 is the first gaze vector and g0 is the three-dimensional coordinate of the three-dimensional location of the eyeball center or of the iris center of the eye contained in the first ROI. The unit vector n is perpendicular to the screen plane and PR is the three-dimensional coordinate of the three-dimensional location of a reference point of the screen plane. For instance, this reference point may be the upper left corner of the screen.

[0036] The three-dimensional coordinate, Q1, of the first point of gaze with respect to a three-dimensional reference frame centered at the reference point of the screen plane is given by Q1 = P1 - PR. The screen coordinate with respect to the reference point of the screen may be obtained by appropriately rotating the reference frame centered at the reference point to obtain a further three-dimensional reference frame. In the further three-dimensional reference frame, the three-dimensional coordinate of the first point of gaze is given by (a1, b1, 0), where (a1, b1) is the two-dimensional screen coordinate of the first point of gaze with respect to a screen reference frame centered at the reference point of the screen.
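
A minimal numpy sketch of equation (1) and of the subsequent change of reference frame, assuming all vectors are expressed in the camera reference frame:

    import numpy as np

    def locate_first_point_of_gaze(g0, g1, n, p_r):
        """g0: eyeball/iris center, g1: first gaze vector, n: unit normal of the
        screen plane, p_r: reference point of the screen plane."""
        t = np.dot(p_r - g0, n) / np.dot(g1, n)  # distance parameter along the gaze line
        p1 = g0 + t * g1                         # equation (1)
        q1 = p1 - p_r                            # coordinate relative to the reference point
        return p1, q1

Rotating q1 so that the third axis coincides with n then yields (a1, b1, 0).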

[0037] The screen coordinates are typically expressed in units of length, such as centimeters, and may also be converted into units of pixels, e.g. via:

a1,pix = a1 · (Rx / Lx) ,  b1,pix = b1 · (Ry / Ly) ,    (2)

wherein Rx and Ry denote the screen resolution in pixels along the two axes of the screen reference frame, and Lx and Ly denote the corresponding physical dimensions of the screen.

[0038] The selecting of the first ROI improves the selection of the input for the ANN thereby leading to a more accurate gaze vector construction and reducing the processing load of the ANN. The selecting of the first ROI and the constructing of the first gaze vector by means of the ANN synergistically interact with one another to improve the accuracy of the method under real-world operating conditions and, in particular, in absence of dedicated hardware components. The method of the present invention may achieve an accuracy below 1° under a wide spectrum of operating conditions.

[0039] According to an embodiment of the method of the present invention, the artificial neural network detects in the first ROI at least a first eye landmark location of a first eye landmark and a second eye landmark location of a second eye landmark. In particular, the eye landmarks detected by the ANN at the step of constructing the first gaze vector allow for reconstructing the eye depicted in the first ROI.

[0040] In particular, the first and the second eye landmark are, respectively, the eyeball center and the iris center of the eye contained in the first ROI. The three-dimensional coordinate of the eyeball center in the camera reference frame (hereinafter also referred to as: "the eyeball coordinate") may be constructed from the two-dimensional image coordinate of the first eye landmark location in the first ROI by means of the camera matrix of the camera acquiring the first image. Similarly, the three-dimensional coordinate of the iris center in the camera reference frame (hereinafter also referred to as: "the iris coordinate") may be constructed from the two-dimensional image coordinate of the second eye landmark location in the first ROI by means of the aforementioned camera matrix. In particular, the first gaze vector may be constructed as the difference between the iris coordinate and the eyeball coordinate.

[0041] In this case, the first gaze vector may be constructed by using basic algebraic operations manipulating two landmark locations. This way, the complexity of the ANN may be reduced and the computational load of the method is decreased.
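
A hedged numpy sketch of this construction; the depths s_eyeball and s_iris assigned to the back-projected rays are illustrative assumptions (in practice they may come, e.g., from the head pose estimation):

    import numpy as np

    def gaze_from_eye_landmarks(eyeball_px, iris_px, c, s_eyeball=1.0, s_iris=1.0):
        """eyeball_px, iris_px: two-dimensional image coordinates of the first and
        second eye landmark; c: 3x3 camera matrix of the acquiring camera."""
        c_inv = np.linalg.inv(c)
        eyeball = s_eyeball * c_inv @ np.array([eyeball_px[0], eyeball_px[1], 1.0])
        iris = s_iris * c_inv @ np.array([iris_px[0], iris_px[1], 1.0])
        g1 = iris - eyeball  # difference between the iris and the eyeball coordinate
        return g1 / np.linalg.norm(g1)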

[0042] For example, in this embodiment, the ANN output comprises information about the first and the second eye landmark in the first image and/or in the first ROI. In particular, the ANN output comprises the two-dimensional image coordinates of the first eye landmark and of the second eye landmark in the first image and/or the first ROI.

[0043] The ANN output may comprise at least a heatmap associated to an eye landmark. In particular, a heatmap associated to an eye landmark is an image that represents the per-pixel confidence of the location of said landmark by using the colors of its pixels. In particular, the pixels of the heatmap are in correspondence with the pixels of the first ROI and/or the first image. For example, this correspondence may be implemented by using a mapping function, e.g. an isomorphism, that maps each pixel of the heatmap onto a pixel of the first ROI and/or the first image.

[0044] The color of a pixel of a heatmap encodes information about the probability that the eye landmark associated to the heatmap is located at the pixel of the first ROI that corresponds to said pixel of the heatmap.

[0045] In particular, a pixel of a heatmap associated to an eye landmark encodes the per-pixel confidence, e.g. the likelihood or the probability, that said landmark is located at the pixel of the first ROI that is associated to said pixel of the heatmap. For example, the darker the pixel of the heatmap, the more likely it is that the eye landmark is located at the pixel of the first ROI that corresponds to said pixel of the heatmap.

[0046] For example, the location in the first ROI of an eye landmark may be the location of the pixel of the first ROI that corresponds to a pixel in a first region of the heatmap associated thereto. In particular, the first region is the region of the heatmap in which the eye landmark is most likely to be located, e.g. in which the per-pixel confidence assumes its greatest values. In this case, in particular, the location in the first ROI of said eye landmark is detected by using the pixel in the first region of the heatmap associated to the landmark and the mapping function described above.

[0047] In one embodiment of the invention, the ANN further detects in the first ROI at least eight eye border landmark locations, ranging from first to eighth eye border landmark location. In particular, an eye border landmark is a point of the external border of an eye, e.g. of an eyelid or of an eye corner.

[0048] In a further embodiment of the method of the invention, the ANN detects in the first ROI at least eight iris border landmark locations, ranging from first to eighth iris border landmark location. For instance, an iris border landmark is a point of the limbus of the eye, i.e. of the iris-sclera border.

[0049] For example, the ANN output comprises the first heatmap, the second heatmap and sixteen further heatmaps, ranging from third to eighteenth. In particular, each of the third to tenth heatmaps encodes the per-pixel confidence of the location of one of the first to eighth eye border landmarks, in such a way that different eye border landmarks are associated to different heatmaps. Moreover, each of the eleventh to eighteenth heatmaps may encode the per-pixel confidence of the location of one of the first to eighth iris border landmarks, in such a way that different iris border landmarks are associated to different heatmaps.

[0050] The location in the first ROI of the iris center, of the eyeball center, of the eight eye border landmarks, and of the eight iris border landmarks may be obtained by processing the aforementioned eighteen heatmaps by means of a soft-argmax layer.
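
A minimal numpy sketch of a soft-argmax applied to a single heatmap; the sharpening factor beta is an assumption:

    import numpy as np

    def soft_argmax(heatmap: np.ndarray, beta: float = 10.0):
        """Returns the expected (row, column) landmark location under a softmax
        of the per-pixel confidences."""
        w = np.exp(beta * (heatmap - heatmap.max()))  # numerically stable softmax
        w /= w.sum()
        rows, cols = np.indices(heatmap.shape)
        return float((w * rows).sum()), float((w * cols).sum())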

[0051] For example, the ANN may be trained by using the following cost function:

L = λ · Σj=1,...,Nh Σs ( Mj(s) − M̃j(s) )² ,    (3)

wherein Mj(s) is the value at pixel s of the jth heatmap, as computed by the ANN by using as input the training ANN input IN of the training dataset, and M̃j(s) is the value of the ground-truth jth heatmap associated to the training ANN input IN. The scale factor λ is comprised between 0.1 and 10, in particular between 0.5 and 5. More particularly, the factor λ is equal to 1. For instance, Nh may be comprised between 1 and 20 and, more particularly, may be equal to 2 or 18.
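
A sketch of the cost function (3) in PyTorch, assuming the predicted and ground-truth heatmaps are stacked into tensors of shape (batch, Nh, H, W); the batch-mean reduction is an assumption:

    import torch

    def heatmap_loss(pred: torch.Tensor, target: torch.Tensor, lam: float = 1.0) -> torch.Tensor:
        # lambda-scaled sum of squared per-pixel differences over all Nh heatmaps
        return lam * ((pred - target) ** 2).sum(dim=(1, 2, 3)).mean()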

[0052] According to another embodiment of the present invention, the constructing of the gaze vector is performed by means of a support vector regression (SVR) algorithm, the SVR algorithm using as input at least the first eye landmark location and the second eye landmark location. This, in particular, allows for improving the accuracy of the point of gaze location without having to rely on models of the eyeball.

[0053] The SVR algorithm may allow for constructing the first gaze vector by estimating the eyeball pitch and yaw and/or the polar and azimuthal angles of the spherical coordinates of the first gaze vector. In particular, the SVR algorithm uses as input a feature vector comprising the first eye landmark location and the second eye landmark location. The feature vector may further comprise the eight iris border landmark locations and the eight eye border landmark locations.

[0054] The feature vector may also comprise a two-dimensional gaze vector prior, which may be the vector that connects the eyeball center location in the first image or the first ROI to the iris center location in the first image or the first ROI and which is directed towards the latter location. The presence of the gaze vector prior significantly increases the accuracy of the SVR algorithm, in particular if said algorithm is trained by using a relatively small number of training samples, e.g. about twenty training samples.

[0055] In order to improve the accuracy of the gaze estimation, at least a component of the feature vector, e.g. each of its components, may be normalized to the eye width, i.e. to the distance between the left corner location and the right corner location of the eye contained in the first image or the first ROI. Moreover, the face landmark locations comprised in the feature vector and, if present, the two-dimensional gaze vector prior may be expressed in Cartesian or polar coordinates with respect to a two-dimensional reference frame in the plane of the first image, the origin of said reference frame being the eyeball center location.

[0056] The SVR algorithm may be trained in a user-independent way by using a relatively large set of training feature vectors obtained from images containing different persons. The SVR algorithm may also be trained in a user-dependent way by using a relatively small set of training feature vectors obtained from training images containing a specific user. For instance, the training images and the corresponding training feature vectors may be obtained by asking the user to look at predetermined locations on the screen and, while the user looks at these locations, by acquiring (e.g. capturing) images depicting the user's face.
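
A hedged scikit-learn sketch of such a regression, with one support vector regressor per output angle; the feature dimension (18 landmark locations plus the two-dimensional gaze vector prior) and the random training data are purely illustrative:

    import numpy as np
    from sklearn.svm import SVR

    x = np.random.rand(20, 38)                               # 20 training feature vectors
    y_pitch, y_yaw = np.random.rand(20), np.random.rand(20)  # training angles

    svr_pitch = SVR().fit(x, y_pitch)  # eyeball pitch regressor
    svr_yaw = SVR().fit(x, y_yaw)      # eyeball yaw regressor
    pitch, yaw = svr_pitch.predict(x[:1])[0], svr_yaw.predict(x[:1])[0]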

[0057] In one embodiment of the method according to the present invention, the artificial neural network is an hourglass neural network (HANN).

[0058] The HANN allows for gathering information across different scales of the input image or of the portions thereof. In particular, the HANN comprises one or more hourglass modules mapping input features to output features. In particular, the hourglass modules are stacked in series, so that the output features of an hourglass module constitute the input features of another hourglass module. The nodes of the hourglass modules are arranged in layers and are configured in such a way that convolutional and max pooling layers modify the input features by decreasing the resolution. At each max pooling step, the modules branch off and apply further convolutions at the original pre-pooled resolution. After reaching the minimal resolution, the hourglass module increases the resolution by upsampling and by combining features across the scales.
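
A compact PyTorch sketch of a single hourglass module following this description; the depth, channel count, and convolution sizes are illustrative assumptions and do not reproduce the application's exact architecture:

    import torch
    from torch import nn

    class Hourglass(nn.Module):
        def __init__(self, depth: int, channels: int):
            super().__init__()
            self.skip = nn.Conv2d(channels, channels, 3, padding=1)  # pre-pooled branch
            self.down = nn.Conv2d(channels, channels, 3, padding=1)
            self.pool = nn.MaxPool2d(2)                              # decrease the resolution
            self.inner = (Hourglass(depth - 1, channels) if depth > 1
                          else nn.Conv2d(channels, channels, 3, padding=1))
            self.up = nn.Upsample(scale_factor=2)                    # increase the resolution
            self.out = nn.Conv2d(channels, channels, 3, padding=1)

        def forward(self, x):
            branch = self.skip(x)                    # features at the original resolution
            y = self.inner(self.pool(self.down(x)))  # recurse through the lower scales
            return self.out(self.up(y) + branch)     # combine features across the scales

    # e.g.: Hourglass(4, 64)(torch.rand(1, 64, 64, 64)).shape == (1, 64, 64, 64)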

[0059] Although hourglass artificial neural networks are typically reserved for human pose estimation, their use in gaze vector construction leads to a surprising improvement of the accuracy of the location of the first point of gaze onto the screen.

[0060] According to a further embodiment of the invention, the first face landmark is a third eye landmark and/or the second face landmark is a fourth eye landmark. In this case the first and the second face landmark location are a third and a fourth eye landmark location, respectively.

[0061] The detection of two eye landmarks allows for improving the accuracy of the selection of the first ROI, thereby improving the quality of the input fed to the ANN and, ultimately, the accuracy of the construction of the first gaze vector.

[0062] A further embodiment of the method of the present invention comprises the step of initiating the locating in the first image of a fifth eye landmark location of a fifth eye landmark, of a sixth eye landmark location of a sixth eye landmark, of a seventh eye landmark location of a seventh eye landmark, and of an eighth eye landmark location of an eighth eye landmark. In this embodiment, the selecting of the first ROI is performed by using the third, fourth, fifth, sixth, seventh, and eighth eye landmark locations.

[0063] In this case, for example, the first ROI may consist of the pixels with column number comprised between the integer numbers C3 and C4, and with row number comprised between the integer numbers R3 and R4, wherein C3, C4, R3 and R4 fulfil the following relations:

C3 = min(Se,x) , C4 = max(Se,x) , R3 = min(Se,y) , R4 = max(Se,y) ,    (4)

with Se,x = {xe,3, xe,4, xe,5, xe,6, xe,7, xe,8} and Se,y = {ye,3, ye,4, ye,5, ye,6, ye,7, ye,8}. The coordinates (xe,3, ye,3), (xe,4, ye,4), (xe,5, ye,5), (xe,6, ye,6), (xe,7, ye,7), and (xe,8, ye,8) denote the two-dimensional image coordinates of the six eye landmark locations used to select the first ROI. The expressions min(S) and max(S) denote the minimum and the maximum of the generic set S, respectively.

[0064] Surprisingly, the selecting of the first ROI by means of six eye landmark locations leads to an improvement of the accuracy of the constructing of the first gaze vector by means of the ANN.

[0065] One embodiment of the method of the present invention further comprises the step of:
  • initiating the constructing of a head pose estimation vector, wherein the constructing of the head pose estimation vector is performed by using at least the first face landmark location and the second face landmark location;
wherein the locating of the first point of gaze onto the screen is based on the head pose estimation vector.

[0066] The head pose estimation vector is in particular a vector that comprises at least the information needed to compute the yaw, pitch, and roll of the face contained in the first image with respect to a reference face position.

[0067] The three-dimensional location, g0, of the eyeball center or of the iris center of the eye contained in the first ROI may be equal to the head pose estimation vector, h0. In particular, the head pose estimation vector reads as follows:

h0 = Θ3 · v0 + τ ,    (5)

where Θ3 and τ are, respectively, the 3x3 rotation matrix and the translation vector that transform the three-dimensional coordinates in a three-dimensional world reference frame into the three-dimensional coordinates in the three-dimensional camera reference frame. For example, if the location of the eye center or of the iris center of the first eye in the first image is described by the two-dimensional image coordinates (x0, y0), the three-dimensional vector v0 reads as follows:

v0 = s · C⁻¹ · (x0, y0, 1)ᵀ , with C = ( (fx, 0, cx), (0, fy, cy), (0, 0, 1) ) ,    (6)

in terms of the inverse of the camera matrix C. fx and fy are the focal lengths in the x and y directions, respectively, and (cx, cy) is the optical center. The constant s is an overall scale. The rotation matrix Θ3, the translation vector τ and/or the overall scale s may in particular be computed by solving the Perspective-n-Point (PnP) problem. The PnP problem may be formulated as the problem of estimating the pose of a face contained in the first image, given the three-dimensional coordinate in the world reference frame and the two-dimensional image coordinate of each face landmark of a set of n1 face landmarks of said face.

[0068] The set of n1 face landmarks comprises the first and the second face landmark. For instance, n1 is comprised between 2 and 80 and, in particular between 3 and 75. Moreover, n1 may be comprised between 4 and 70 and, more particularly, is equal to either 6 or 68. For example, the set of n1 face landmarks is a subset of the set of n0 face landmarks and, in particular the set of n0 face landmarks coincides with the set of n1 face landmarks.

[0069] The solution of the PnP problem may be obtained by using the P3P method, the efficient PnP (EPnP) method, or the Direct Least-Squares (DLS) method. These methods, in particular the P3P one, may be complemented by the Random Sample Consensus (RANSAC) algorithm, which is an iterative method to improve the parameter estimation from a set of data that contains outliers.

[0070] In particular, the rotation matrix Θ3, the translation vector τ and/or the overall scale s may be computed by using the direct linear transformation and/or the Levenberg-Marquardt optimization.
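
A hedged OpenCV sketch of this computation; the intrinsics and the random landmark correspondences are placeholders (n1 = 6), and EPnP is only one of the solvers named in paragraph [0069]:

    import cv2
    import numpy as np

    camera_matrix = np.array([[800., 0., 320.],   # fx, 0, cx
                              [0., 800., 240.],   # 0, fy, cy
                              [0., 0., 1.]])
    object_points = np.random.rand(6, 3)        # 3D landmarks, world reference frame
    image_points = np.random.rand(6, 2) * 480.  # their 2D image coordinates

    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix,
                                  None, flags=cv2.SOLVEPNP_EPNP)
    theta3, _ = cv2.Rodrigues(rvec)  # 3x3 rotation matrix Θ3; tvec is the translation τ
    # cv2.solvePnPRansac adds the RANSAC outlier rejection of paragraph [0069].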

[0071] The constructing of the head pose estimation vector allows for improving the accuracy of the location of the first point of gaze under real-world, unconstrained operating conditions, e.g. without forcing the user to assume a given predetermined position with respect to the screen and/or the recording device.

[0072] In another embodiment of the method according to the present invention, the constructing of the head pose estimation vector is performed at least by means of a three-dimensional face model. The three-dimensional face model uses as input at least the first face landmark location and the second face landmark location.

[0073] In particular, the three-dimensional face model may use as input the location in the first image of each face landmark of a set of n2 face landmarks, the set of n2 face landmarks comprising the first and the second face landmark. For instance, n2 is comprised between 2 and 80 and, in particular between 3 and 75. Moreover, n2 may be comprised between 4 and 70 and, more particularly, is equal to either 6 or 68. For instance, the set of n2 face landmarks is a subset of the set of n1 face landmarks and, in particular, the set of n1 face landmarks coincides with the set of n2 face landmarks.

[0074] The three-dimensional model is in particular an algorithm that allows for computing the three-dimensional coordinate of each element of the set of n2 face landmarks by using as input at least the face landmark location, in particular the two-dimensional image coordinate, of the elements of the set of n2 face landmarks. The three-dimensional face landmark model may in particular be the Surrey face model or the Candide face model.

[0075] In particular, the three-dimensional face model allows for estimating the rotation matrix Θ3 and the translation vector τ, that, ultimately, allow for computing the head pose estimation vector e.g. by using equations (5) and (6).

[0076] For example, if the location in the first image of the generic jth element of the set of n2 face landmarks is described by the coordinate (xf,j, yf,j), the three-dimensional model allows for estimating the three-dimensional location, Pf,j, of said generic element in the three-dimensional world reference frame. In this case, in particular, the rotation matrix Θ3, the translation vector τ and/or the overall scale s are computed by solving the system:

s · (xf,j, yf,j, 1)ᵀ = C · ( Θ3 · Pf,j + τ ) , for j = 1, ..., n2 .    (7)

[0077] The use of a three-dimensional model allows for a reliable construction of the head pose estimation vector from a relatively small amount of landmark locations, thereby reducing the processing load of the method of the present invention.

[0078] An embodiment of the method according to the present invention comprises the steps of:
  • initiating the acquiring of at least a second image;
  • initiating the locating of a third face landmark location of the first face landmark in the second image;
  • initiating the estimating of a fourth face landmark location of the first face landmark in the first image, wherein the estimating of the fourth face landmark location is performed by means of an optical flow equation and of the third face landmark location; and
  • initiating the detecting of a fifth face landmark location of the first face landmark in the first image.


[0079] The locating of the first face landmark location in the first image is based on the fourth face landmark location and on the fifth face landmark location.

[0080] The locating of the first face landmark location in the first image may comprise computing the midpoint of the segment connecting the fourth face landmark location and the fifth face landmark location. In particular, the first face landmark location substantially corresponds to the midpoint of the segment connecting the fourth face landmark location and the fifth face landmark location.

[0081] For example, the second image is a vector image or a second two-dimensional grid of pixels, e.g. a rectangular grid of pixels. Moreover, the second image may be encoded by at least a bitmap.

[0082] The entries of the second two-dimensional grid may be arranged in columns and rows, which may be numbered in ascending order in such a way that each column and each row are associated to a column number and a row number, respectively. In particular, the location of a pixel of the second image may be univocally determined in terms of the row number, r, of the row to which the pixel belongs and of the column number, c, of the column to which the pixel belongs. Thus, the two-dimensional image coordinate of said pixel in the second image may be the two-dimensional vector (r, c). For instance, the two-dimensional image coordinate of a pixel of the second image is the coordinate of the pixel in the image plane reference frame of the second image.

[0083] The first image and the second image are, in particular, images comprising the same face captured at different time instants. In particular, the first and the second image may be two frames of a video, which, for instance, is acquired by the recording device.

[0084] In particular, the time instant, t1, at which the first image is captured, and the time instant, t2, at which the second image is captured fulfil the equation (t1 - t2) = Δt. For example, Δt is comprised in the interval between 0.02 and 0.07 s, in particular, comprised in the interval between 0.025 and 0.06 s and, more particularly, in the interval between 0.03 and 0.05 s. For example, Δt may be equal to 0.04 s.

[0085] Moreover, if the recording device is operated to capture images with frequency υ, Δt fulfils the equation Δt = Nt/υ. The integer number Nt may be comprised in the interval between 1 and 5 and, in particular, in the interval between 1 and 3. More particularly, the integer number Nt may be equal to 1, i.e. the first image is immediately consecutive to the second image. The frequency υ may be comprised in the interval between 15 and 40 Hz, in particular in the interval between 20 and 35 Hz and, more particularly, in the interval between 25 and 30 Hz. For example, the frequency υ may be equal to 27 Hz.

[0086] The two-dimensional reference frame in the plane of the second image and the one in the plane of the first image may be defined in such a way that the first and the second image form a voxel, e.g. a three-dimensional grid of pixels comprising a first and a second layer. In this case, the first and second layer of pixels are the first and second rectangular grid, respectively.

[0087] The location of the pixels in the voxel may be expressed in terms of a three-dimensional voxel coordinates, (x,y,t), wherein the coordinate (x, y) is the two-dimensional coordinate of the pixel in the two-dimensional reference frame of the layer, to which the pixel belongs. The temporal coordinate, t, specifies the layer, to which the pixel belongs. In particular, if t = t1 or t = t2, the pixel belongs to the first or the second layer, respectively.

[0088] The location of a face landmark in the first and/or in the second image may be expressed in terms of the location of the corresponding reference pixel in the voxel. In particular, the location of said face landmark may be expressed by the three-dimensional voxel coordinate, (x,y,t), of the reference pixel. In particular, the temporal coordinate of a face landmark location in the first image is t1 and the temporal coordinate of a face landmark location in the second image is t2.

[0089] For example, the method for constructing the point of gaze onto the screen is performed several times during an operational time interval of the device implementing the method of the invention. In particular, the operational time interval is the interval during which the construction of the point of gaze is deemed important e.g. to allow for a hands-free human-machine interaction. In this case, in particular, the second image is acquired at instant t2 and used to locate the third face landmark location to ultimately compute a point of gaze onto the screen by means of the method of the present invention. The information of the third face landmark location may then be used to obtain the first landmark location in the first image by means of the optical flow equation.

[0090] In particular, the voxel coordinate of the fourth face landmark location in the first image, (x4,y4,t1), fulfils the following equation:

(x4, y4, t1) = (x3 + vx,3 · Δt, y3 + vy,3 · Δt, t2 + Δt) ,    (8)

wherein (x3,y3,t2) is the voxel coordinate of the location of the third face landmark in the second image. The velocity vector (vx,3,vy,3) represents the velocity of the third face landmark location in the second image. The velocity vector may be obtained by using the optical flow equation, according to which a pixel at the voxel location (x,y,t) and moving with velocity (vx,vy) fulfils the following constraint:

Ix(x,y,t) · vx + Iy(x,y,t) · vy + It(x,y,t) = 0 .    (9)

I(x,y,t) is the intensity at the voxel position (x,y,t). Ix, Iy and It are, respectively, the derivatives of the intensity with respect to x, y and t. Said derivatives are in particular evaluated at the voxel location (x,y,t). In particular, the optical flow equation at the location of the third face landmark in the second image reads as follows:

Ix(x3,y3,t2) · vx,3 + Iy(x3,y3,t2) · vy,3 + It(x3,y3,t2) = 0 .    (10)

[0091] In particular, the optical flow equation allows for predicting the change of the location of the first face landmark in the scene during the time interval from the instant t2 to the instant t1. Thus, the fourth face landmark location is an estimate of the location of the first face landmark in the first image, based on the location of the first face landmark in the second image.

[0092] Equations (9) and (10) may be used to compute the velocity vector (vx,3,vy,3) by using the Lucas-Kanade method. In particular, the method according to the present invention may comprise the step of initiating the locating of a plurality of pixel locations in the second image. In particular, the distance, e.g. the Euclidean distance, between each of the plurality of pixel locations and the location of the third face landmark in the second image is less than or equal to an upper distance, dU. If the method locates P pixel locations with voxel coordinates (xp,1,yp,1,t2), (xp,2,yp,2,t2), ..., (xp,P,yp,P,t2), the velocity vector (vx,3,vy,3) may be computed by solving the following system:

Ix(xp,i,yp,i,t2) · vx,3 + Iy(xp,i,yp,i,t2) · vy,3 + It(xp,i,yp,i,t2) = 0 , for i = 1, ..., P .    (11)

[0093] The system of equations (11) may be solved by using the least squares approach. Moreover, the derivatives of the intensity may, for instance, fulfil the following finite-difference relations:

Ix(x,y,t2) ≈ I(x+1,y,t2) − I(x,y,t2) , Iy(x,y,t2) ≈ I(x,y+1,t2) − I(x,y,t2) , It(x,y,t2) ≈ I(x,y,t1) − I(x,y,t2) .    (12)

[0094] The upper distance may be expressed in unit of pixels and may be comprised in the interval between 1 and 10 pixels, in particular in the interval between 2 and 8 pixels. For instance, the upper distance may be comprised in the interval between 3 and 6 pixels and, in particular, may be equal to 3 or 4 pixels.

[0095] The invention may comprise the step of initiating the locating of a first pixel location in the second image. In particular, the distance, e.g. the Euclidean distance, between the first pixel location and the third face landmark location in the second image is less than or equal to the upper distance. According to the Lucas-Kanade method, the velocity vector (vx,3,vy,3) may be computed by using equation (11) with P = 1, wherein (xp,1,yp,1,t2) are the voxel coordinates of the first pixel location.
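
A hedged OpenCV sketch of this tracking step; the window size (reflecting an upper distance of a few pixels) and the single pyramid level are assumptions:

    import cv2
    import numpy as np

    def estimate_fourth_location(second_gray, first_gray, x3, y3):
        """second_gray, first_gray: the second and the first image as 8-bit
        grayscale arrays; (x3, y3): the third face landmark location."""
        pts = np.array([[x3, y3]], dtype=np.float32)
        new_pts, status, _ = cv2.calcOpticalFlowPyrLK(
            second_gray, first_gray, pts, None,
            winSize=(7, 7),  # pixels within a few pixels of the landmark, cf. dU
            maxLevel=0)      # single scale, i.e. the plain Lucas-Kanade method
        return tuple(new_pts[0]) if status[0] else None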

[0096] For example, the locating of the third facial landmark location in the second image may be obtained by means of the first location algorithm. Moreover, the detecting of the fifth face landmark location of the first face landmark in the first image may be performed by locating said landmark location in the first image e.g. by means of the first location algorithm.

[0097] In this embodiment, the location of the first face landmark in the first image is improved by consistently including information about its previous location, i.e. about the location of said landmark in the second image. This way, the use of the information gathered during the operational time interval of the device is, at least partially, optimised.

[0098] In another embodiment of the present invention, the first face landmark location is equal to the weighted average between the fourth face landmark location and the fifth face landmark location.

[0099] In particular, the weighted average comprises a first weight, w1, and a second weight, w2, the first and the second weight multiplying the fourth and the fifth face landmark location, respectively. The weighted average may be expressed in terms of the voxel coordinate as follows:

(x1, y1, t1) = (w1 · x4 + w2 · x5, w1 · y4 + w2 · y5, t1) ,    (13)

wherein (x1,y1,t1) and (x5,y5,t1) are respectively the voxel coordinate of the first and the fifth face landmark location in the first image, and (x4,y4,t1) is the voxel coordinate of the fourth face landmark location. In particular, the first face landmark location is the midpoint between the fourth and the fifth face landmark location and thus its coordinate may be obtained by using equation (13) with w1 = w2 = 1/2.

[0100] In particular, in this embodiment, the accuracy of the locating of the first face landmark location in the first image may be improved if the value of the weights w1 and w2 is based on the accuracy of the locating of the third and the fifth face landmark location. For instance, if the locating of the third face landmark location in the second image is more accurate than the locating of the fifth face landmark location in the first image, the first weight may be greater than the second weight. If, instead, the locating of the third face landmark location in the second image is less accurate than the one of the fifth face landmark location in the first image, the first weight may be smaller than the second weight. The accuracy of the locating of a face landmark location in an image may in particular depend on the pitch, yaw and roll of the face depicted in the image.

[0101] According to an embodiment of the present invention, the locating of the first face landmark location in the first image is based on a landmark distance, the landmark distance being the distance between the third face landmark location and the fourth face landmark location.

[0102] In particular, the landmark distance, dL, is the Euclidean distance between the third face landmark location and the fourth face landmark location. The landmark distance may be expressed in units of pixels and/or may read as follows:

dL = √( (x3 − x4)² + (y3 − y4)² ) .    (14)

[0103] In a further embodiment of the present invention, the first weight is a monotonically decreasing function of the landmark distance and the second weight is a monotonically increasing function of the landmark distance.

[0104] According to this embodiment, the contribution of the fourth face landmark location to the weighted sum becomes more marginal as the landmark distance increases. This way, the accuracy of the locating of the first face landmark location in the first image is improved, since the accuracy of the location of the fourth face landmark, which is an estimate of the location of the first face landmark obtained by using the third face landmark location, decreases as the landmark distance increases.

[0105] According to the present invention, a monotonically increasing function of a distance, e.g. of the landmark distance, is in particular a function that does not decrease as said distance increases. A monotonically decreasing function of a distance, e.g. of the landmark distance, instead, is in particular a function that does not increase as said distance increases.

[0106] For example, the first and the second weight read as follows:

w1 = 1 − min(dL,0, dL) / dL,0 ,  w2 = min(dL,0, dL) / dL,0 ,    (15)

wherein min(dL,0, dL) denotes the minimum between dL,0 and dL. dL,0 may be expressed in units of pixels and/or may be comprised in the interval between 1 and 10 pixels. In particular, dL,0 is comprised in the interval between 3 and 7 pixels and, more particularly, dL,0 is equal to 5 pixels.
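
A minimal numpy sketch combining equations (13)-(15) as reconstructed above (the exact weight form is an assumption):

    import numpy as np

    def locate_first_landmark(third, fourth, fifth, d_l0=5.0):
        """third: location in the second image; fourth: optical-flow estimate in
        the first image; fifth: location detected in the first image."""
        third, fourth, fifth = map(np.asarray, (third, fourth, fifth))
        d_l = np.linalg.norm(third - fourth)  # landmark distance, equation (14)
        w2 = min(d_l0, d_l) / d_l0            # monotonically increasing in dL
        w1 = 1.0 - w2                         # monotonically decreasing in dL
        return w1 * fourth + w2 * fifth       # weighted average, equation (13)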

[0107] An embodiment of the method of the present invention comprises the step of:
  • initiating the locating of a second point of gaze onto the screen, wherein the locating of the second point of gaze is performed by means of at least the first gaze vector;


[0108] In this embodiment, the locating of the first point of gaze onto the screen is performed by means of the second point of gaze.

[0109] The second point of gaze is in particular the intersection between the screen and the first gaze line. The locating of the second point of gaze onto the screen may be obtained by modelling the screen in terms of a plane and by constructing the second point of gaze as the intersection between said plane and the first gaze line.

[0110] In this case, in particular, the location of the first point of gaze onto the screen may depend only indirectly on the first gaze vector, that is only via the dependency of the location of the second point of gaze on the first gaze vector.

[0111] In an embodiment of the present invention, the locating of the second point of gaze onto the screen is performed by means of a calibration function. The calibration function depends on at least a location of a first calibration point of gaze and on an estimate of the location of the first calibration point of gaze.

[0112] In particular, the calibration function, ϕ, allows for computing the two-dimensional screen coordinates, (a1, b1), of the first point of gaze onto the screen from the two-dimensional screen coordinates, (a2, b2), of the second point of gaze onto the screen:

(a1, b1) = ϕ(a2, b2) .    (16)

[0113] For example, the method of the present invention may comprise at least the steps of:
  • initiating the acquiring of at least a first calibration image;
  • initiating the locating of a face landmark location of a face landmark in the first calibration image;
  • initiating the locating of a further face landmark location of a further face landmark in the first calibration image;
  • initiating the selecting of a first ROI in the first calibration image, wherein the selecting of the first ROI in the first calibration image is performed by using at least the face landmark location in the first calibration image and the further face landmark location in the first calibration image;
  • initiating the constructing of a first calibration gaze vector, wherein the constructing of the first calibration gaze vector is performed by means of at least the ANN, the ANN using as input at least the first ROI of the first calibration image; and
  • initiating the constructing of the estimate of the location of the first calibration point of gaze onto the screen, wherein said constructing is performed by means of at least the first calibration gaze vector.


[0114] The estimate of the location of the first calibration point is in particular the intersection between the screen and a calibration line. The calibration line is in particular the line that intersects the eyeball center or the iris center of the eye contained in the first ROI of the first calibration image and that is parallel to the first calibration gaze vector. The constructing of the estimate of the location of the first calibration point onto the screen may be obtained by modelling the screen in terms of a plane and by constructing said estimate as the intersection between said plane and the calibration line.

[0115] The method of the present invention may further comprise the step of initiating the prompting of the user to look at the calibration point on the screen. In particular, said prompting may be performed before the acquiring of the first calibration image. Moreover, the method may also comprise the step of initiating the constructing of a first calibration head pose estimation vector, wherein the constructing of said vector may be performed by using at least the face landmark location and the further face landmark location in the first calibration image. In this case, the estimate of the location of the first calibration point of gaze onto the screen may be based on the first calibration head pose estimation vector. The calibration function may also depend on the first calibration gaze vector and/or on the first calibration head pose estimation vector.
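
The calibration procedure of paragraphs [0113] to [0115] may be pictured as the following loop. This is a hedged sketch: the prompting, acquisition, and gaze-estimation routines are assumed to exist under the illustrative names used here.

import numpy as np

def collect_calibration_data(calibration_points, prompt, acquire_image, estimate_gaze_point):
    """Gather (true location, estimated location) pairs for the calibration.

    calibration_points: list of (a_c, b_c) screen coordinates shown to the user
    prompt, acquire_image, estimate_gaze_point: device-specific callables
    """
    targets, estimates = [], []
    for point in calibration_points:
        prompt(point)                          # ask the user to look at the point
        image = acquire_image()                # acquire a calibration image
        estimate = estimate_gaze_point(image)  # ROI -> ANN -> gaze vector -> screen
        targets.append(point)
        estimates.append(estimate)
    return np.asarray(targets), np.asarray(estimates)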

[0116] In particular, the calibration function depends on a location of each element of a set of n3 calibration points of gaze and on an estimate of said location of each element of the set of n3 calibration points of gaze. For instance, n3 is comprised between 2 and 20 and, in particular, between 3 and 15. Moreover, n3 may be comprised between 4 and 10 and, more particularly, may be equal to 5.

[0117] For example, the estimate of the location of each element of the set of n3 calibration points of gaze may be obtained by using the steps leading to the construction of the estimate of the location of the first calibration point of gaze. In particular, the estimate of the location of each element of the set of n3 calibration points of gaze is constructed by means of a corresponding calibration gaze vector and, optionally, of a corresponding calibration head pose estimation vector. For instance, the calibration function may depend on the plurality of calibration gaze vectors and/or on the plurality of calibration head pose estimation vectors.

[0118] The calibration allows for constructing the first point of gaze by taking into account the actual conditions under which the user is looking at the screen, thereby improving the accuracy of the location of the first point of gaze under real setup conditions. The actual conditions may for instance comprise the position of the user with respect to the screen, the fact that the user wears glasses, or the fact that the user is affected by strabismus.

[0119] For example, the calibration function may comprise or consist of a radial basis function. In particular, the radial basis function may be a linear function or a polyharmonic spline with exponent equal to 1. Radial basis functions allow for an improvement of the accuracy of the location of the first point of gaze, even when the calibration is based on a relatively low number of calibration points. The first input data may comprise information to locate the second point of gaze onto the screen. In particular, said information may comprise or consist of the two-dimensional screen coordinates of the second point of gaze. Alternatively, or in conjunction with the above, the first input data may comprise information to characterize the first gaze vector and/or the head pose estimation vector. Information to characterize a three-dimensional vector, e.g. the first gaze vector or the head pose estimation vector, may comprise or consist of the three-dimensional coordinates of said vector.

[0120] According to an embodiment of the present invention, the calibration function depends on at least the distance, e.g. the Euclidean distance, between the location of the first calibration point of gaze and the estimate of the location of the first calibration point of gaze.

[0121] The dependency on the distance between the location of the first calibration point of gaze and the estimate of the location of the first calibration point of gaze allows for an improvement of the accuracy of the location of the first point of gaze, even when the calibration is based on a relatively low number of calibration points, e.g. on fewer than ten, in particular fewer than six, calibration points.

[0122] For example, if the location onto the screen of the generic jth element of the set of n3 calibration points is described by the two-dimensional screen coordinates (ac,j, bc,j), and the estimate of the location of said generic element is described by the two-dimensional screen coordinates (ae,j, be,j), the calibration function may read as follows:

ϕ(a, b) = λ1 d1(a, b) + ... + λn3 dn3(a, b),        (15)

wherein dj(a, b) = √((a − ae,j)² + (b − be,j)²) is the distance between (a, b) and (ae,j, be,j). In particular, the coefficients λ1, ..., λn3 are obtained from the locations of the calibration points of gaze by means of the inverse of the matrix Dn3:

(λ1, ..., λn3)ᵀ = Dn3⁻¹ ((ac,1, bc,1), ..., (ac,n3, bc,n3))ᵀ.

[0123] The n3-dimensional matrix Dn3⁻¹ is the inverse of the matrix Dn3. In particular, the generic entry of the latter matrix, (Dn3)j,k, is given by:

(Dn3)j,k = √((ae,j − ae,k)² + (be,j − be,k)²).

[0124] The calibration function defined in equation (15) leads to a surprisingly accurate calibration.
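
As a numerical illustration of the calibration defined by equation (15), the following sketch fits and evaluates a polyharmonic spline with exponent 1. The exact-interpolation setup and the function names are assumptions of this example, not elements of the original disclosure.

import numpy as np

def fit_calibration(estimates, targets):
    """Solve for the RBF coefficients lambda = D^{-1} @ targets.

    estimates: (n3, 2) estimated screen coordinates (a_e, b_e)
    targets:   (n3, 2) true calibration coordinates (a_c, b_c)
    """
    # D[j, k] = Euclidean distance between the j-th and k-th estimates
    D = np.linalg.norm(estimates[:, None, :] - estimates[None, :, :], axis=-1)
    return np.linalg.solve(D, targets)

def apply_calibration(coeffs, estimates, point):
    """Evaluate phi at a new estimated point of gaze, as in equation (15)."""
    d = np.linalg.norm(estimates - point, axis=-1)  # distances to the centres
    return d @ coeffs                               # calibrated (a_hat, b_hat)

By construction, apply_calibration reproduces each calibration target exactly when evaluated at the corresponding estimate, since the coefficients solve the interpolation system.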

[0125] According to a further embodiment of the present invention, the locating of the first point of gaze onto the screen is performed by means of a Kalman filter.

[0126] In particular, the Kalman filter is an algorithm that uses a series of estimations of the point of gaze on the screen to improve the accuracy of the locating of the first point of gaze. For example, the kth iteration of said filter allows for computing the two-dimensional screen coordinates, (ak, bk), and the velocity, (va,k, vb,k), of a kth estimated point of gaze in terms of the two-dimensional screen coordinates, (ak-1, bk-1), and the velocity, (va,k-1, vb,k-1), of a (k - 1)th estimated point of gaze, and in terms of the two-dimensional screen coordinates, (ãk, b̃k), of a kth intermediate point of gaze:



[0127] The matrix Kk may be expressed as follows:

Kk = Pk|k-1 Hkᵀ (Hk Pk|k-1 Hkᵀ + Rk)⁻¹,

in terms of the matrix Pk|k-1 = Fk Pk-1|k-1 Fkᵀ + Qk.

[0128] In particular, for each value of k, the matrix Pk|k reads as follows:

Pk|k = (1₄ - Kk Hk) Pk|k-1,

wherein 1₄ is the 4x4 identity matrix. The matrices Rk and Qk are the covariance of the observation noise at the kth iteration and the covariance of the process noise at the kth iteration, respectively. The matrices Fk and Hk are the state transition model matrix at the kth iteration and the observation model matrix at the kth iteration, respectively. In particular, at least one of the following relations holds:

Fk = (1 0 δt 0; 0 1 0 δt; 0 0 1 0; 0 0 0 1),        Hk = (1 0 0 0; 0 1 0 0),

wherein the rows of each matrix are separated by semicolons.
[0129] In particular, δt may be written as δt = 1/υ, in terms of the frequency υ described above. The matrices Rk, Qk, Fk and/or Hk may be independent of the iteration. At least one of the following relations may hold:

P0|0 = 0,        V0 = (va,0, vb,0) = (0, 0),

wherein 0 is the null matrix. For example, the first and the second point of gaze may be computed by using the Kalman filter with one iteration, e.g. by using the equations (16) to (21) with the replacement k → 1. In this case, the first and the second point of gaze are the first estimated point of gaze and the first intermediate point of gaze, respectively. In this case, in particular, the 0th estimated point of gaze may be the point of the screen with two-dimensional screen coordinates (a0, b0) and with velocity (va,0, vb,0), as specified in equation (21).

[0130] The first point of gaze may be iteratively computed by using the Kalman filter with M iterations starting from P0|0 and V0. In this case, in particular, the first and the second point of gaze are the Mth estimated point of gaze and the Mth intermediate point of gaze, respectively. For example, M is a given integer number. Alternatively, M may be the smallest number of iterations needed to meet a stopping condition. For example, the stopping condition may comprise the requirement that the distance between the Mth estimated point of gaze and the Mth intermediate point of gaze is lower than a first threshold and/or the requirement that the distance between the Mth estimated point of gaze and the (M - 1)th estimated point of gaze is lower than a second threshold. In particular, the first threshold may be equal to the second threshold.
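
Equations (16) to (21) are not reproduced above. The following is a minimal sketch of one predict/update cycle of a constant-velocity Kalman filter consistent with the four-dimensional state (a, b, v_a, v_b) described in paragraph [0126]; the layout of F and H and the 30 Hz frame rate are assumptions of the sketch.

import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle; x = (a, b, v_a, v_b), z = intermediate point of gaze."""
    x_pred = F @ x                         # predicted state
    P_pred = F @ P @ F.T + Q               # predicted state covariance
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)  # update with the observed position
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

dt = 1.0 / 30.0  # delta_t = 1 / frequency, assuming a 30 Hz frame rate
F = np.array([[1.0, 0.0, dt, 0.0],
              [0.0, 1.0, 0.0, dt],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
H = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])

Iterating kalman_step until the distance between consecutive estimated points of gaze falls below a threshold reproduces the stopping condition of paragraph [0130].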

[0131] According to an embodiment of the method of the present invention, the locating of the first point of gaze onto the screen is performed by means of a third point of gaze and a covariance matrix of the process noise, wherein the covariance matrix of the process noise comprises a plurality of entries, said entries being a monotonically increasing function of the distance between the second point of gaze and the third point of gaze. In particular, each of the entries of said covariance matrix may be a monotonically increasing function of the distance between the second point of gaze and the third point of gaze.

[0132] For instance, the distance between the second point of gaze and the third point of gaze may be expressed in units of pixels. In particular, the distance between the second and the third point of gaze is the Euclidean distance between said points of gaze. In particular, if the second point of gaze is the Mth intermediate point of gaze, the third point of gaze may be the (M - 1)th estimated point of gaze and the distance, dM,M-1, between the second and the third point of gaze reads as follows:

dM,M-1 = √((ãM - aM-1)² + (b̃M - bM-1)²).        (23)

[0133] The covariance matrix of the process noise may be the covariance matrix of the process noise at the kth iteration, Qk, and at least one of the following relations may hold:

Qk = q 1₄,

wherein 1₄ is the 4x4 identity matrix.

[0134] For example, q is a function of the distance, dk,k-1, and may read as follows:

q = q0 if dk,k-1 < d0;  q = q1 if d0 ≤ dk,k-1 < d1;  q = q2 if d1 ≤ dk,k-1 < d2;  q = q3 if dk,k-1 ≥ d2,

wherein dk,k-1 may be obtained by using equation (23) with the replacement M → k. In particular, q0 may be comprised in the interval between 10⁻³⁵ and 10⁻²⁵ and, more particularly, in the interval between 10⁻³² and 10⁻²⁸. Moreover, q1 may be comprised in the interval between 10⁻²⁵ and 10⁻¹⁵ and, in particular, in the interval between 10⁻²² and 10⁻¹⁸. For example, q2 is comprised in the interval between 10⁻¹⁵ and 10⁻⁵ and, more particularly, in the interval between 10⁻¹² and 10⁻⁸. For instance, q3 is comprised in the interval between 10⁻⁵ and 10⁻¹ and, in particular, in the interval between 10⁻³ and 0.5. Moreover, q0, q1, q2, and/or q3 may be equal to 10⁻³⁰, 10⁻²⁰, 10⁻¹⁰, and 10⁻², respectively. Further, d0 may be comprised in the interval between 50 and 220 pixels and, more particularly, in the interval between 100 and 200 pixels. In particular, d1 is comprised in the interval between 200 and 400 pixels and, more particularly, in the interval between 220 and 300 pixels. Further, d2 is comprised in the interval between 300 and 600 pixels and, more particularly, in the interval between 400 and 550 pixels. Moreover, d0, d1 and/or d2 may be equal to 128 pixels, 256 pixels and 512 pixels, respectively.

[0135] In this embodiment, the numerical importance of the covariance matrix of the process noise is reduced when the distance between the intermediate point of gaze of the kth iteration and the estimated point of gaze of the (k - 1)th iteration decreases. This way, the Kalman filter allows for attaining a relatively high accuracy of the locating of the first point of gaze by using a relatively small number of iterations.

[0136] In a further embodiment of the method of the present invention, the locating of the first point of gaze onto the screen is performed by means of the third point of gaze and a covariance matrix of the observation noise. The covariance matrix of the observation noise comprises a plurality of entries, said entries being a monotonically increasing function of the distance between the second point of gaze and the third point of gaze. For example, each of the entries of said covariance matrix may be a monotonically increasing function of the distance between the second point of gaze and the third point of gaze.

[0137] The covariance matrix of the observation noise may be the covariance of the observation noise at the kth iteration, Rk, and at least one of the following relations may hold:

Rk = r 1₂,

wherein 1₂ is the 2x2 identity matrix.

[0138] For example, r is a function of the distance dk,k-1 and may read as follows:

r = r0 if dk,k-1 < d3;  r = r1 if dk,k-1 ≥ d3.

[0139] In particular, r0 may be comprised in the interval between 10⁻³⁵ and 10⁻²⁵ and, more particularly, in the interval between 10⁻³² and 10⁻²⁸. Moreover, r1 may be comprised in the interval between 10⁻⁵ and 10⁻¹ and, in particular, in the interval between 10⁻³ and 0.5. Moreover, r0 and/or r1 may be equal to 10⁻³⁰ and 10⁻², respectively. Further, d3 may be comprised in the interval between 50 and 220 pixels and, more particularly, in the interval between 100 and 200 pixels. In particular, d3 is equal to 128 pixels.
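
Combining the distance-dependent scales q and r of paragraphs [0134] and [0138] with the Kalman step sketched after paragraph [0130], the covariance matrices may be rebuilt at each iteration. The piecewise-constant form and the diagonal structure of Qk and Rk are assumptions of this sketch, chosen to match the monotonicity required by paragraphs [0131] and [0136].

import numpy as np

def q_of_distance(d):
    """Process-noise scale, monotonically increasing with the distance d (pixels)."""
    if d < 128.0:        # d0
        return 1e-30     # q0
    elif d < 256.0:      # d1
        return 1e-20     # q1
    elif d < 512.0:      # d2
        return 1e-10     # q2
    return 1e-2          # q3

def r_of_distance(d):
    """Observation-noise scale, monotonically increasing with the distance d (pixels)."""
    return 1e-30 if d < 128.0 else 1e-2  # r0 below d3, r1 above

def adaptive_covariances(d):
    Q = q_of_distance(d) * np.eye(4)  # covariance of the process noise
    R = r_of_distance(d) * np.eye(2)  # covariance of the observation noise
    return Q, R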

[0140] An embodiment of the method comprises the steps of:
  • initiating the locating of a sixth face landmark location of a sixth face landmark in the first image;
  • initiating the locating of a seventh face landmark location of a seventh face landmark in the first image;
  • initiating the selecting of a second region of interest in the first image, wherein the selecting of the second region of interest is performed by using at least the sixth face landmark location and the seventh face landmark location; and
  • initiating the constructing of a second gaze vector by means of at least the artificial neural network, the artificial neural network using as input at least the second region of interest.


[0141] In particular, according to this embodiment, the locating of the first point of gaze onto the screen is performed by means of the first and of the second gaze vector. For instance, the second ROI comprises at least a second eye or a portion thereof. In particular, the second eye is different from the first eye. The second ROI may be encoded by a second bitmap. In this embodiment, the location of the first point of gaze is obtained by using information about the gaze vectors of both eyes. The accuracy of said location is thereby improved.

[0142] When constructing the second gaze vector, the ANN uses as input the second ROI. In this case, in particular, the ANN input comprises information about the position and the color of the pixels of the second ROI and, more particularly, the ANN input may comprise or consist of the second bitmap.

[0143] The method according to the present invention may comprise the step of initiating the locating of a fourth and a fifth point of gaze, wherein the locating of the fourth and the fifth point of gaze is performed by means of the first and the second gaze vector, respectively. In particular, the screen coordinates of the first or the second point of gaze, (â, b̂), may be obtained by a weighted sum of the screen coordinates of the fourth point of gaze, (â4, b̂4), and of the screen coordinates of the fifth point of gaze, (â5, b̂5), as follows:

(â, b̂) = u1 (â4, b̂4) + u2 (â5, b̂5).

[0144] For instance, the first or the second point of gaze may substantially coincide with the midpoint between the locations of the fourth and of the fifth points of gaze, that is, (â, b̂) may be obtained from the equation above by setting u1 = u2 = 1/2. The weights u1 and u2 may depend on the head pose estimation vector. In particular, the dependence of these weights is such that:
  • u1 is lower than u2 if, due to the head pose of the user, the construction of the fourth point of gaze is less accurate than the construction of the fifth point of gaze; and
  • u1 is greater than u2 if, due to the head pose of the user, the construction of the fourth point of gaze is more accurate than the construction of the fifth point of gaze.


[0145] For example, the construction of the fourth point of gaze is more or less accurate than that of the fifth point of gaze if, respectively, the reconstruction of the first eye is more or less accurate than the reconstruction of the second eye. For instance, the reconstruction of one eye of the user's face may be more accurate than the reconstruction of the other eye if, in the first image, the user's head is rotated in such a way that the latter eye is at least partially obstructed, e.g. by the user's nose.

[0146] The method of the present invention may further comprise the step of receiving the location of the first point of gaze onto the screen and/or the step of initiating the displaying of the location of the first point of gaze onto the screen.

[0147] According to the present invention, the step of initiating an action, such as the acquiring of an image, the locating of a landmark or a point of gaze, the selecting of a region of interest, the constructing of a vector, and/or the estimating of a face landmark, may, in particular, be performed by performing said action. For example, the step of initiating the acquiring of the image, e.g. of the first or the second image, may be performed by acquiring said image. Similarly, the step of initiating the locating of a face landmark location in an image may be performed by locating said face landmark location in said image. For instance, the step of initiating the constructing of a vector, such as the first gaze vector or the head pose estimation vector, may be performed by constructing said vector.

[0148] The step of initiating an action, such as the acquiring of an image, the locating of a landmark or a point of gaze, the selecting of a ROI, the constructing of a vector, and/or the estimating of a face landmark, may, in particular, be performed by instructing a dedicated device to perform said action. For example, the step of initiating the acquiring of the image, e.g. of the first or the second image, may be performed by instructing the recording device to acquire said image. Similarly, the step of initiating the locating of a face landmark location in an image may be performed by instructing a dedicated computing device to locate said face landmark location in said image.

[0149] According to the present invention, the step of initiating a first action may be performed together with the step of initiating one or more other actions. For instance, the step of initiating the locating of the first landmark and the step of initiating the locating of the second landmark may be performed together by initiating the locating of a plurality of face landmarks, said plurality including the first and the second landmark.

[0150] Moreover, the step of initiating the selecting of the first ROI may be performed together with the steps of initiating the locating of the first and the second face landmarks. For instance, the computing device carrying out the present invention may instruct another computing device to locate the first and the second face landmarks. The latter device may be configured in such a way that, once it has performed this locating, it performs the selection of the first ROI. In this case, according to the present invention, the steps of initiating the locating of the first and the second face landmarks initiate also the selecting of the first ROI.

[0151] In one embodiment, all the steps of the method according to the present invention may be performed together. For example, the computing device carrying out the present invention may instruct another computing device to acquire the first image. Once the latter device has acquired said image, it locates the first and the second landmarks, selects the first ROI, constructs the first gaze vector, and then locates the first point of gaze onto the screen. In this case, according to the present invention, the step of initiating the acquiring of the first image initiates also the locating of the first and second face landmark locations, the selecting of the first ROI, the constructing of the first gaze vector, and the locating of the first point of gaze onto the screen.

[0152] Throughout the present specification, the steps of the method of the present invention are disclosed according to a given order. This given order, however, does not necessarily reflect the chronological order according to which the steps of the present invention are performed.

[0153] The present invention refers also to a data processing system comprising at least a processor configured to perform the method according to the present invention.

[0154] Moreover, the present invention refers to a computer program product comprising instructions which, when the computer program product is executed by a computing device, cause the computing device to carry out the method according to the present invention.

[0155] The present invention concerns also a computer readable storage medium comprising instructions which, when executed by a computer device, cause the computer device to carry out the method according to the present invention. The computer readable medium is, in particular, non-transitory.

[0156] Exemplary embodiments of the invention are described in the following with respect to the attached figures. The figures and corresponding detailed description serve merely to provide a better understanding of the invention and do not constitute a limitation whatsoever of the scope of the invention as defined in the claims. In particular:
Fig. 1 is a schematic view of a first embodiment of the data processing system according to the present invention;
Fig. 2 is a flow diagram of the operation of a first embodiment of the method according to the present invention;
Fig. 3a is a schematic representation of the first image acquired by performing the first embodiment of the method according to the present invention;
Fig. 3b is a schematic representation of the face landmarks located by performing the first embodiment of the method according to the present invention;
Fig. 3c is a schematic representation of the first ROI selected by performing the first embodiment of the method according to the present invention;
Fig. 4a, 4b are schematic representations of the heatmaps obtained by performing the first embodiment of the method according to the present invention; and
Fig. 5 is a flow diagram of the operation of a second embodiment of the method according to the present invention.


[0157] Fig. 1 is a schematic view of a first embodiment of the data processing system 100 according to the present invention. Said data processing system 100 may be a computing device or a cluster thereof. The data processing system 100 comprises a processing element 110 and storage means 120, which are in data communication with one another.

[0158] The processing element 110 may consist of or comprise a CPU and/or a GPU. Moreover, the processing element 110 comprises several modules 111-116 configured to perform the steps of the method according to the present invention. The first initiating module 111 is configured to initiate the acquiring of at least the first image. For instance, the first initiating module 111 may be an acquiring module configured to acquire, e.g. capture, the first image.

[0159] The second initiating module 112 is configured to initiate the locating of the first face landmark location of the first face landmark in the first image. The second initiating module 112 may be a first locating module configured to locate in the first image the first face landmark location of the first face landmark. The third initiating module 113 is configured to initiate the locating of the second face landmark location of the second face landmark in the first image. In particular, the third initiating module 113 may be a second locating module configured to locate the second face landmark location of the second face landmark.

[0160] The third initiating module 113 and the second initiating module 112 may be the same initiating module which, e.g., may be configured to initiate the locating in the first image of a plurality of face landmark locations, said plurality including the first face landmark location of the first face landmark and the second face landmark location of the second face landmark.

[0161] The fourth initiating module 114 is configured to initiate the selecting of the first ROI and, in particular, is a selecting module configured to select the first ROI by using at least the first face landmark location and the second face landmark location. The fifth initiating module 115 is configured to initiate the construction of the first gaze vector. For example, the fifth initiating module 115 is a constructing module configured to construct said gaze vector by means of the ANN.

[0162] The sixth initiating module 116, instead, is configured to initiate the locating of the first point of gaze onto the screen. For instance, the sixth initiating module 116 is a third locating module configured to locate the first point of gaze onto the screen by means of at least the first gaze vector.

[0163] The storage means 120 may comprise volatile primary memory 121 and/or non-volatile primary memory 122. The storage means 120 may further comprise a secondary memory 123, which may store the operating system and/or the ANN. Moreover, the secondary memory 123 may store a computer program product comprising instructions which, when the computer program product is executed by the processing element 110, cause the data processing system 100 to carry out the method according to the present invention. The secondary memory 123 may store the first image and/or information about the first ROI.

[0164] The secondary memory 123, the primary memories 121, 122, and the processing element 110 need not be housed within the same physical enclosure and may instead be spatially separated from one another. In this case, in particular, they may exchange data with one another via wired and/or wireless media (not shown).

[0165] The data processing system 100 may further comprise an input/output (I/O) interface 140 which allows said system 100 to communicate with input/output devices (e.g. displays, keyboards, touchscreens, printers, mice, or the like).

[0166] The data processing system 100 may further comprise a network interface controller (NIC) 130 configured to connect the system 100 with a suitable network (not shown). According to the present invention, a suitable network may for instance be an intranet, the internet or a cellular network. For instance, the NIC 130 may allow the data processing system 100 to exchange data with another computing device (not shown) which e.g. performs the locating of the face landmarks, the selecting of the first ROI, the constructing of the first gaze vector, and/or the locating of the first point of gaze.

[0167] In particular, the data processing system 100 comprises a recording device 160 which is configured to capture at least the first image. For instance, the recording device may be a photo camera and/or a video camera. As shown in fig. 1, the recording device 160 may be connected to the processing element 110 via the I/O interface 140. For example, the recording device 160 may be connected wirelessly to the I/O interface via the NIC 130. The data processing system 100 may comprise a display unit 150 which comprises the screen 151 and is connected to the processing element 110 via the I/O interface 140. In particular, said unit 150 may be connected wirelessly to the I/O interface via the NIC 130. The recording device 160 and/or the display unit 150 may be intelligent devices with their own memory for storing associated instructions and data for use with the I/O interface 140 or the peripheral devices.

[0168] Fig. 2 is a flow diagram 200 of the operation of a first embodiment of the method according to the present invention. In particular, the first embodiment of the method according to the present invention may be performed by a first computing device (not shown), which, for example, may be the data processing system 100 described above and schematically depicted in fig. 1. At step 210, the first computing device initiates the acquiring of the first image 300, schematically represented in fig. 3a. In particular, the first image 300 is acquired by using the recording device 160.

[0169] At steps 220 and 230, the first computing device initiates the locating of the first face landmark location 301 of the first face landmark in the first image 300 and the locating of the second face landmark location 302 of the second face landmark in the first image 300. In particular, said steps may be performed together by initiating the locating in the first image of sixty-eight face landmarks, whose locations are represented in fig. 3b by crossed dots. For instance, the locating of said face landmarks is performed by means of the first location algorithm. In particular, the first and the second face landmark are the third eye landmark 301 and the fourth eye landmark 302, respectively. Moreover, the set of sixty-eight face landmarks comprises the fifth 303, the sixth 304, the seventh 305, and the eighth 306 eye landmarks.

[0170] At step 240, the first computing device initiates the selecting of the first ROI 310 schematically represented in fig. 3c. The selecting of the first ROI 310 is performed by using the aforementioned six eye landmarks 301 to 306. In particular, the first ROI 310 consists of the pixels with column number comprised between the integer numbers C3 and C4, and with row number comprised between the integer numbers R3 and R4, wherein C3, C4, R3 and R4 fulfil equations (4).
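
Equations (4) are not reproduced in this part of the description. The following sketch selects the ROI bounds from the six eye landmarks using a simple padded bounding box, which is an assumption of the example rather than the exact form of equations (4).

import numpy as np

def select_roi_bounds(eye_landmarks, margin=5):
    """Return (C3, C4, R3, R4): column and row bounds of a padded bounding box.

    eye_landmarks: (6, 2) array of (column, row) pixel coordinates.
    The margin value is an assumption of this sketch, not a value
    taken from the description.
    """
    cols, rows = eye_landmarks[:, 0], eye_landmarks[:, 1]
    C3 = int(np.floor(cols.min())) - margin
    C4 = int(np.ceil(cols.max())) + margin
    R3 = int(np.floor(rows.min())) - margin
    R4 = int(np.ceil(rows.max())) + margin
    return C3, C4, R3, R4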

[0171] At step 250, the first computing device initiates the constructing of the first gaze vector. The constructing of this gaze vector is performed by the ANN. In particular, the ANN output comprises the eighteen heatmaps 401 to 418, ranging from first to eighteenth. These heatmaps are schematically depicted in figs. 4a and 4b. More specifically, the heatmaps 401 and 402 encode the per-pixel confidence of the location of the iris center and of the eyeball center, respectively. Each of the heatmaps 403 to 410 encodes the per-pixel confidence of the location of one of the first to eighth eye border landmarks, in such a way that different eye border landmarks are associated to different heatmaps. Moreover, each of the heatmaps 411 to 418 encodes the per-pixel confidence of the location of one of the first to eighth iris border landmarks, in such a way that different iris border landmarks are associated to different heatmaps.

[0172] In the heatmaps 401-418 schematically depicted in figs. 4a and 4b, the regions in which the per-pixel confidence assumes its largest value are shaded or highlighted in black. Moreover, the value of the per-pixel confidence is greater in the dark-colored region than in the shaded one. The location of the eye landmark associated to each of the heatmaps 401-418 may coincide with the location of a point in the dark-colored region of said heatmap. In particular, the location of the iris center, of the eyeball center, of the eight eye border landmarks, and of the eight iris border landmarks is obtained by processing the aforementioned eighteen heatmaps 401-418 by means of a soft-argmax layer.
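
The soft-argmax processing of a heatmap may be sketched as follows; the temperature beta is an assumption of the example, not a value from the description.

import numpy as np

def soft_argmax(heatmap, beta=100.0):
    """Differentiable landmark location from a per-pixel confidence heatmap.

    Returns the expected (column, row) coordinates under a softmax
    distribution over the heatmap values.
    """
    h, w = heatmap.shape
    weights = np.exp(beta * (heatmap - heatmap.max()))  # stable softmax
    weights /= weights.sum()
    ys, xs = np.mgrid[0:h, 0:w]
    return (weights * xs).sum(), (weights * ys).sum()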

[0173] The constructing of the gaze vector is performed by means of the SVR algorithm. In particular, the SVR algorithm uses as input the eyeball center location, the iris center location, the eight iris border landmark locations and the eight eye border landmark locations obtained by using the eighteen heatmaps 401-418.
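
A hedged sketch of the SVR stage follows, assuming one independent regressor per component of the gaze vector and an RBF kernel; both choices belong to this example and are not stated in the original description.

import numpy as np
from sklearn.svm import SVR

def fit_gaze_svr(landmark_features, gaze_vectors):
    """Train one SVR per gaze-vector component.

    landmark_features: (N, 36) flattened locations of the eyeball center,
    iris center, eight eye border and eight iris border landmarks
    (18 landmarks x 2 coordinates).
    gaze_vectors: (N, 3) training gaze vectors.
    """
    return [SVR(kernel="rbf").fit(landmark_features, gaze_vectors[:, i]) for i in range(3)]

def predict_gaze(models, landmark_feature):
    """Construct a gaze vector from one 36-dimensional feature vector."""
    x = landmark_feature.reshape(1, -1)
    return np.array([m.predict(x)[0] for m in models])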

[0174] At step 260, the first computing device initiates the locating of the first point of gaze onto the screen. The first point of gaze is in particular the intersection between the screen 151 and the first gaze line. For example, the locating of the first point of gaze onto the screen 151 may be obtained by modelling the screen in terms of a plane and by constructing the first point of gaze as the intersection between said plane and the first gaze line.

[0175] Fig. 5 is a flow diagram 500 of the operation of a second embodiment of the method according to the present invention. In particular, said embodiment may be performed by a second computing device (not shown), which, for example, may be the data processing system 100 described above and schematically depicted in fig. 1. According to this embodiment, the first point of gaze is iteratively computed by using the Kalman filter starting from P0|0 and V0 as defined in equation (21).

[0176] The generic iteration of the Kalman filter is described in terms of a counter, m, which, at step 505, is initialized to the value zero. During the mth iteration, the second computing device initiates the acquiring of an mth intermediate image (step 515), which, in particular, may be acquired by using the recording device 160. In particular, the first image and the mth intermediate image comprise the same subject and are captured at different time instants. For example, the first image and the mth intermediate image are two frames of a video which e.g. is captured by the recording device 160 of the computing device of the present invention.

[0177] At steps 520 and 525 of the mth iteration, the second computing device initiates the locating in the mth intermediate image of a face landmark location of a face landmark and of a further face landmark location of a further face landmark. Said steps may be performed together by initiating the locating of sixty-eight face landmarks by means of the first location algorithm.

[0178] In particular, the distribution of the sixty-eight face landmarks in the mth intermediate image is similar to the distribution of the sixty-eight face landmarks in the first image, which are schematically depicted in fig. 3b. In particular, the set of sixty-eight face landmarks located in the mth intermediate image comprises six eye landmarks for the left eye and six eye landmarks for the right eye.

[0179] At step 530 of the mth iteration, the second computing device initiates the selecting of an mth intermediate ROI. In particular, the mth intermediate ROI comprises an eye, say the left one, and is selected by using the six eye landmarks of the left eye. In particular, the mth intermediate ROI consists of the pixels with column number comprised between the integer numbers C3 and C4, and with row number comprised between the integer numbers R3 and R4. The integer numbers C3, C4, R3 and R4 may be computed by using equations (4) and the coordinates in the mth intermediate image of the six eye landmarks of the left eye. The position of the mth intermediate ROI in the mth intermediate image is similar to the position of the first ROI schematically depicted in fig. 3c.

[0180] At step 535, the second computing device initiates the constructing of an mth intermediate gaze vector by using the ANN. The ANN of the second embodiment is identical to the one used by the first embodiment of the method according to the present invention. In particular, the ANN uses as input the pixels of the mth intermediate ROI and provides eighteen heatmaps as output. Said heatmaps are similar to the ones schematically depicted in figs. 4a and 4b and are used to find the location in the mth intermediate image of the iris center of the left eye, of the eyeball center of the left eye, of eight eye border landmarks of the left eye, and of eight iris border landmarks of the left eye. For example, said landmarks may be obtained by processing the eighteen heatmaps by means of a soft-argmax layer.

[0181] The location of the iris center of the left eye, of the eyeball center of the left eye, of eight eye border landmarks of the left eye, and of eight iris border landmarks of the left eye are then used as input of the SVR algorithm constructing the mth intermediate gaze vector.

[0182] At step 540 of the mth iteration, the second computing device initiates the locating of the mth intermediate point of gaze onto the screen. Said point of gaze is in particular the intersection between the screen and the mth intermediate gaze line. For example, the locating of the mth intermediate point of gaze onto the screen may be obtained by modelling the screen in terms of a plane and by constructing the mth intermediate point of gaze as the intersection between said plane and the mth intermediate gaze line. In particular, the mth intermediate gaze line is the line that intersects the eyeball center of the eye contained in the mth intermediate ROI and that is parallel to the mth intermediate gaze vector.

[0183] At step 545 of the mth iteration, the second computing device initiates the computing of the mth estimated point of gaze onto the screen. The computation of this point of gaze is in particular performed by using the mth intermediate point of gaze, the (m - 1)th estimated point of gaze computed during the (m - 1)th iteration, and equations (16) to (26) with the replacement k → m.

[0184] At step 550 of the mth iteration, the second computing device checks whether the mth estimated point of gaze meets the stopping condition. In particular, the stopping condition comprises the requirement that the Euclidean distance between the mth estimated point of gaze and the mth intermediate point of gaze is lower than the first threshold and/or the requirement that the distance between the mth estimated point of gaze and the (m - 1)th estimated point of gaze is lower than the second threshold.

[0185] If the stopping condition is not met, the second computing device increments the value of the counter by one, see step 510, and performs the (m + 1)th iteration. If the stopping condition is met, at step 555, the second computing device initiates the locating of the first point of gaze, said locating being performed by setting said point of gaze to be equal to the mth estimated point of gaze constructed at the mth iteration.


Claims

1. Computer-implemented method for locating a first point of gaze onto a screen (151), said method comprising at least the steps of:

• initiating the acquiring (210) of at least a first image (300);

• initiating the locating (220) of a first face landmark location (301) of a first face landmark in the first image (300);

• initiating the locating (230) of a second face landmark location (302) of a second face landmark in the first image (300);

• initiating the selecting (240) of a first region of interest (310) in the first image (300), wherein the selecting of the first region of interest (310) is performed by using at least the first face landmark location (301) and the second face landmark location (302);

• initiating the constructing (250) of a first gaze vector, wherein the constructing of the first gaze vector is performed by means of at least an artificial neural network, the artificial neural network using as input at least the first region of interest (310); and

• initiating the locating (260) of the first point of gaze onto the screen (151), wherein the locating of the first point of gaze is performed by means of at least the first gaze vector.


 
2. Method according to claim 1, wherein the artificial neural network detects in the first region of interest (310) at least a first eye landmark location (403) of a first eye landmark and a second eye landmark location (404) of a second eye landmark.
 
3. Method according to any one of the preceding claims, wherein the constructing of the gaze vector is performed by means of a support vector regression algorithm, the support vector regression algorithm using as input at least the first eye landmark location (403) and the second eye landmark location (404).
 
4. Method according to any one of the preceding claims, wherein the artificial neural network is an hourglass neural network.
 
5. Method according to any one of the preceding claims, further comprising the step of:

• initiating the constructing of a head pose estimation vector, wherein the constructing of the head pose estimation vector is performed by using at least the first face landmark location (301) and the second face landmark location (302);

wherein the locating of the first point of gaze onto the screen (151) is based on the head pose estimation vector.
 
6. Method according to claim 5, wherein the constructing of the head pose estimation vector is performed at least by means of a three-dimensional face model, and wherein the three-dimensional face model uses as input at least the first face landmark location (301) and the second face landmark location (302).
 
7. Method according to any one of the preceding claims, further comprising the steps of:

• initiating the acquiring of at least a second image;

• initiating the locating of a third face landmark location of the first face landmark in the second image;

• initiating the estimating of a fourth face landmark location of the first face landmark in the first image (300), wherein the estimating of the fourth face landmark location is performed by means of an optical flow equation and of the third face landmark location; and

• initiating the detecting of a fifth face landmark location of the first face landmark in the first image (300);

and wherein the locating of the first face landmark location (301) in the first image (300) is based on the fourth face landmark location and on the fifth face landmark location.
 
8. Method according to claim 7, wherein the locating of the first face landmark location (301) in the first image (300) is based on a landmark distance, the landmark distance being the distance between the third face landmark location and the fourth face landmark location.
 
9. Method according to claim 7 or 8, wherein the first face landmark location (301) is equal to the weighted average between the fourth face landmark location and the fifth face landmark location.
 
10. Method according to any one of the preceding claims, further comprising the step of:

• initiating the locating of a second point of gaze onto the screen (151), wherein the locating of the second point of gaze is performed by means of at least the first gaze vector;

wherein the locating of the first point of gaze onto the screen (151) is performed by means of the second point of gaze.
 
11. Method according to claim 10, wherein the locating of the second point of gaze onto the screen (151) is performed by means of a calibration function, the calibration function depending on at least a location of a calibration point of gaze and on an estimate of the location of the calibration point of gaze.
 
12. Method according to any one of the preceding claims, wherein the locating of the first point of gaze onto the screen (151) is performed by means of a Kalman filter.
 
13. Method according to claim 12, wherein the locating of the first point of gaze onto the screen (151) is performed by means of a third point of gaze and a covariance matrix of the process noise, wherein the covariance matrix of the process noise comprises a plurality of entries, said entries being a monotonically increasing function of the distance between the second point of gaze and the third point of gaze.
 
14. A data processing system (100) comprising at least a processor (110) configured to perform the method according to any one of the previous claims.
 
15. A computer program product comprising instructions which, when the computer program product is executed by a computing device, cause the computing device to carry out the method according to any one of claims 1 to 13.
 
16. A computer readable storage medium comprising instructions which, when executed by a computer device, cause the computer device to carry out the method according to any one of claims 1 to 13.
 


Amended claims in accordance with Rule 137(2) EPC.


1. Computer-implemented method for locating a first point of gaze onto a screen (151), said method comprising at least the steps of:

• initiating the acquiring (210) of at least a first image (300);

• initiating the locating (220) of a first face landmark location (301) of a first face landmark in the first image (300);

• initiating the locating (230) of a second face landmark location (302) of a second face landmark in the first image (300);

• initiating the selecting (240) of a first region of interest (310) in the first image (300),
wherein the selecting of the first region of interest (310) is performed by using at least the first face landmark location (301) and the second face landmark location (302);

• initiating the constructing (250) of a first gaze vector, wherein the constructing of the first gaze vector is performed by means of at least an artificial neural network, the artificial neural network using as input at least the first region of interest (310);

characterized in that:
the method further comprises the step of:

• initiating the locating (260) of the first point of gaze onto the screen (151), wherein the locating of the first point of gaze is performed by means of at least the first gaze vector and a Kalman filter, the Kalman filter using a series of estimations of the point of gaze on the screen.


 
2. Method according to claim 1, wherein the artificial neural network detects in the first region of interest (310) at least a first eye landmark location (403) of a first eye landmark and a second eye landmark location (404) of a second eye landmark.
 
3. Method according to any one of the preceding claims, wherein the constructing of the gaze vector is performed by means of a support vector regression algorithm, the support vector regression algorithm using as input at least the first eye landmark location (403) and the second eye landmark location (404).
 
4. Method according to any one of the preceding claims, wherein the artificial neural network is an hourglass neural network.
 
5. Method according to any one of the preceding claims, further comprising the step of:

• initiating the constructing of a head pose estimation vector, wherein the constructing of the head pose estimation vector is performed by using at least the first face landmark location (301) and the second face landmark location (302);

wherein the locating of the first point of gaze onto the screen (151) is based on the head pose estimation vector.
 
6. Method according to claim 5, wherein the constructing of the head pose estimation vector is performed at least by means of a three-dimensional face model, and wherein the three-dimensional face model uses as input at least the first face landmark location (301) and the second face landmark location (302).
 
7. Method according to any one of the preceding claims, further comprising the steps of:

• initiating the acquiring of at least a second image;

• initiating the locating of a third face landmark location of the first face landmark in the second image;

• initiating the estimating of a fourth face landmark location of the first face landmark in the first image (300), wherein the estimating of the fourth face landmark location is performed by means of an optical flow equation and of the third face landmark location; and

• initiating the detecting of a fifth face landmark location of the first face landmark in the first image (300);

and wherein the locating of the first face landmark location (301) in the first image (300) is based on the fourth face landmark location and on the fifth face landmark location.
 
8. Method according to claim 7, wherein the locating of the first face landmark location (301) in the first image (300) is based on a landmark distance, the landmark distance being the distance between the third face landmark location and the fourth face landmark location.
 
9. Method according to claim 7 or 8, wherein the first face landmark location (301) is equal to the weighted average between the fourth face landmark location and the fifth face landmark location.
 
10. Method according to any one of the preceding claims, further comprising the step of:

• initiating the locating of a second point of gaze onto the screen (151), wherein the locating of the second point of gaze is performed by means of at least the first gaze vector;

wherein the locating of the first point of gaze onto the screen (151) is performed by means of the second point of gaze.
 
11. Method according to claim 10, wherein the locating of the second point of gaze onto the screen (151) is performed by means of a calibration function, the calibration function depending on at least a location of a calibration point of gaze and on an estimate of the location of the calibration point of gaze.
 
12. Method according to claim 10, wherein the locating of the first point of gaze onto the screen (151) is performed by means of a third point of gaze and a covariance matrix of the process noise, wherein the covariance matrix of the process noise comprises a plurality of entries, said entries being a monotonically increasing function of the distance between the second point of gaze and the third point of gaze.
 
13. A data processing system (100) comprising at least a processor (110) configured to perform the method according to any one of the previous claims.
 
14. A computer program product comprising instructions which, when the computer program product is executed by a computing device, cause the computing device to carry out the method according to any one of claims 1 to 12.
 
15. A computer readable storage medium comprising instructions which, when executed by a computer device, cause the computer device to carry out the method according to any one of claims 1 to 12.
 




Drawing


Search report