[0001] This application claims priority to Patent Application No.
201510003813.0, filed with the Chinese Patent Office on January 4, 2015, and entitled "METHOD AND
APPARATUS FOR DETERMINING IDENTITY IDENTIFIER OF FACE IN FACE IMAGE, AND TERMINAL",
which is incorporated herein by reference in its entirety.
TECHNICAL FIELD
[0002] The present invention relates to the communications field, and more specifically,
to a method and an apparatus for determining an identity identifier of a face in a
face image, and a terminal.
BACKGROUND
[0003] Facial recognition has become a popular computer technology in recent years and is one of
the biometric recognition technologies. The biometric recognition technologies further
include fingerprint recognition, iris recognition, and the like. These recognition
technologies can achieve a high recognition rate. However, when these biometric recognition
technologies are applied, a to-be-recognized person needs to cooperate well. That
is, these recognition technologies have a strict requirement for a person and an environment,
so that application of these technologies, for example, in a public place, a densely
populated place, and a non-compulsory civil field, is greatly limited. Facial recognition
can break through the foregoing limitation, and is more widely applied.
[0004] After decades of development, many methods have been developed for facial recognition,
for example, template matching, learning from examples, and neural networks.
A facial recognition method based on a joint Bayesian probability model is a common
facial recognition method. Using the joint Bayesian probability model to verify whether
two face images are face images of a same person has high accuracy. The following
briefly describes the facial recognition method based on the joint Bayesian probability
model.
[0005] First, a joint Bayesian probability matrix P is generated by means of training. Then
a feature v1 of a to-be-verified face f1 and a feature v2 of a to-be-verified face
f2 are extracted separately. Next, the v1 and the v2 are spliced into a vector [v1,v2],
and finally a distance between the v1 and the v2 is calculated by using the following
formula (1):

s = [v1,v2]·P·[v1,v2]^T    (1)
[0006] When the distance is less than a preset threshold, the f1 and the f2 are of the same
person; when the distance is greater than the preset threshold, the f1 and the f2
are of different persons. The joint Bayesian probability matrix P may be obtained
by means of offline learning. When the joint Bayesian probability matrix P is learned,
the entire matrix P may be directly learned, or the P matrix may be decomposed by
using the following formula (2):

P = [[B, A], [A, B]]    (2)
[0007] A submatrix A and a submatrix B are quickly learned separately. A is a cross-correlation
submatrix in the joint Bayesian probability matrix P, and B is an autocorrelation
submatrix in the joint Bayesian probability matrix P. When this method is used for
facial recognition, a face image database needs to be pre-established. Each vector
in the face image database corresponds to one identity. In a recognition process,
the feature vector v1 of the face f1 needs to be compared with each vector in the
face image database by using the formula (1) or the formula (2), to obtain a distance
s between the feature vector v1 and the vector in the face image database; and a vector
having a smallest distance from the v1 is selected from the face image database as
a matching vector of the v1. When a distance between the matching vector and the v1
is less than the preset threshold, an identity corresponding to the matching vector
in the face image database is determined as an identity of the f1. When the distance
between the matching vector and the v1 is greater than the preset threshold, it indicates
that the identity of the face f1 is not recorded in the face image database, and a
new identity may be allocated to the f1 and a correspondence between the new identity
and the v1 is established in the database.
[0008] It can be seen from the foregoing description that when a facial recognition method
based on a joint Bayesian probability model is used, a to-be-matched face feature
vector needs to be compared with all vectors in a face image database. Generally,
the face image database is of a large scale, and comparison with each vector in the
database by using the formula s = [v1,v2]·P·[v1,v2]^T results in a large calculation burden
and consumes a long time, which is not conducive to fast facial recognition.
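For illustration only, the following is a minimal sketch, in Python/NumPy with hypothetical variable names, of the exhaustive comparison described above. It assumes that P is the trained joint Bayesian probability matrix and shows why the full quadratic form has to be recomputed for every database vector, which is the cost the embodiments below aim to reduce.

```python
import numpy as np

def naive_identify(v1, db_vectors, db_ids, P, threshold):
    """Exhaustive prior-art style lookup: score every database vector v with
    s = [v1, v] * P * [v1, v]^T and keep the closest one (illustrative sketch)."""
    best_id, best_s = None, np.inf
    for v, identity in zip(db_vectors, db_ids):
        x = np.concatenate([v1, v])   # splice [v1, v]
        s = x @ P @ x                 # full quadratic form, recomputed for every vector
        if s < best_s:
            best_s, best_id = s, identity
    return best_id if best_s < threshold else None
```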
SUMMARY
[0009] Embodiments of the present invention provide a method and an apparatus for determining
an identity identifier of a face in a face image, and a terminal, so as to improve
efficiency of facial recognition.
[0010] According to a first aspect, a method for determining an identity identifier of a
face in a face image is provided, including: obtaining an original feature vector
of a face image; selecting k candidate vectors from a face image database according
to the original feature vector, where a vector v* in the face image database includes
components [v·A, v·B·v^T], a feature extraction manner for v is the same as that for the original feature
vector, A represents a cross-correlation submatrix in a joint Bayesian probability
matrix, B represents an autocorrelation submatrix in the joint Bayesian probability
matrix, and k is a positive integer; selecting a matching vector of the original feature
vector from the k candidate vectors, where the matching vector of the original feature
vector is a candidate vector of the k candidate vectors that has a shortest cosine
distance from a vector [v1,1], and v1 represents the original feature vector; and
determining, according to the matching vector of the original feature vector, an identity
identifier that is of the matching vector and that is recorded in the face image database
as an identity identifier of a face in the face image.
[0011] With reference to the first aspect, in an implementation of the first aspect, the
obtaining an original feature vector of a face image includes: tracking locations
of the face in different video images to obtain N consecutive frames of face images;
and obtaining the original feature vector from a t-th frame of face image of the N frames of face images, where 1≤t≤N; and the method further
includes: storing, into a cache database, the matching vector of the original feature
vector and the identity identifier that is of the matching vector and that is recorded
in the face image database; obtaining an original feature vector of a (t+1)-th frame of face image
of the N frames of face images, where the (t+1)-th frame of face image is a next frame of face image
of the t-th frame of face image; selecting a target vector from the cache database according
to the original feature vector of the (t+1)-th frame of face image, where the cache database includes
the matching vector of the original feature vector of the t-th frame of face image; and when a target
distance between the original feature vector of the (t+1)-th frame of face image and the target vector
is less than a preset threshold, determining an identity identifier that is of the target vector
and that is recorded in the cache database as an identity identifier of a face in the (t+1)-th frame of face image.
[0012] With reference to the first aspect or the foregoing implementation, in another implementation
of the first aspect, the selecting a target vector from the cache database according
to the original feature vector of the (t+1)-th frame of face image includes: selecting the target
vector from the cache database according to a formula in which v2 represents the original
feature vector of the (t+1)-th frame of face image, v* represents a vector in the cache database,
and s represents a target distance between v2 and v*.
[0013] With reference to any one of the first aspect, or the foregoing implementations of
the first aspect, in another implementation of the first aspect, the selecting k candidate
vectors from a face image database according to the original feature vector includes:
selecting the k candidate vectors from the face image database according to a formula
s* = ∥[v1,0] - v*∥_2 by using a kd-tree algorithm, where s* represents a Euclidean distance
between [v1,0] and v*.
[0014] With reference to any one of the first aspect, or the foregoing implementations of
the first aspect, in another implementation of the first aspect, the selecting a matching
vector of the original feature vector from the k candidate vectors includes: selecting
the matching vector of the original feature vector from the k candidate vectors according
to a formula in which vi* represents an i-th candidate vector of the k candidate vectors,
s** represents a cosine distance between [v1,1] and vi*, si* represents a Euclidean distance
between [v1,0] and vi*, 1≤i≤k, and c is a constant.
[0015] With reference to any one of the first aspect, or the foregoing implementations of
the first aspect, in another implementation of the first aspect, the method is executed
by a server; and the obtaining an original feature vector of a face image includes:
obtaining, by the server, a vector v' from a terminal, where v' = [v1, v1·B·v1^T],
and extracting, by the server, the original feature vector from the vector v'.
[0016] According to a second aspect, a method for determining an identity identifier of
a face in a face image is provided, including: obtaining an original feature vector
of a face image; selecting a target vector from a face image database according to
a formula in which v2 represents the original feature vector, v* represents a vector
in the face image database, v* includes components [v·A, v·B·v^T], a feature extraction
manner for v is the same as that for the original feature vector, A represents a
cross-correlation submatrix in a joint Bayesian probability matrix, B represents an
autocorrelation submatrix in the joint Bayesian probability matrix, s represents a
target distance between v2 and v*, and the target vector is a vector that is in the
face image database and that has a smallest target distance from v2; and when a target
distance between the original feature vector and the target vector is less than a preset
threshold, determining an identity identifier that is of the target vector and that is
recorded in the face image database as an identity identifier of a face in the face image.
[0017] According to a third aspect, an apparatus for determining an identity identifier
of a face in a face image is provided, including: a first obtaining unit, configured
to obtain an original feature vector of a face image; a first selection unit, configured
to select k candidate vectors from a face image database according to the original
feature vector, where a vector v* in the face image database includes components
[v·A, v·B·v^T], a feature extraction manner for v is the same as that for the original feature
T], a feature extraction manner for v is the same as that for the original feature
vector, A represents a cross-correlation submatrix in a joint Bayesian probability
matrix, B represents an autocorrelation submatrix in the joint Bayesian probability
matrix, and k is a positive integer; a second selection unit, configured to select
a matching vector of the original feature vector from the k candidate vectors, where
the matching vector of the original feature vector is a candidate vector of the k
candidate vectors that has a shortest cosine distance from a vector [v1,1], and v1
represents the original feature vector; and a first determining unit, configured to
determine, according to the matching vector of the original feature vector, an identity
identifier that is of the matching vector and that is recorded in the face image database
as an identity identifier of a face in the face image.
[0018] With reference to the third aspect, in a first implementation of the third aspect,
the first obtaining unit is specifically configured to track locations of the face
in different video images to obtain N consecutive frames of face images, and obtain
the original feature vector from a t-th frame of face image of the N frames of face images, where 1≤t<N; and the apparatus
further includes: an access unit, configured to store, into a cache database, the
matching vector of the original feature vector and the identity identifier that is
of the matching vector and that is recorded in the face image database; a second obtaining
unit, configured to obtain an original feature vector of a (t+1)-th frame of face image
of the N frames of face images, where the (t+1)-th frame of face image is a next frame
of face image of the t-th frame of face image; a third selection unit, configured to
select a target vector from the cache database according to the original feature vector
of the (t+1)-th frame of face image, where the cache database includes the matching vector
of the original feature vector of the t-th frame of face image; and a second determining unit,
configured to: when a target distance between the original feature vector of the (t+1)-th frame
of face image and the target vector is less than a preset threshold, determine an identity
identifier that is of the target vector and that is recorded in the cache database as an
identity identifier of a face in the (t+1)-th frame of face image.
[0019] With reference to the third aspect or the foregoing implementation of the third aspect,
in another implementation of the third aspect, the third selection unit is specifically
configured to select the target vector from the cache database according to a formula
in which v2 represents the original feature vector of the (t+1)-th frame of face image,
v* represents a vector in the cache database, and s represents a target distance between
v2 and v*.
[0020] With reference to any one of the third aspect, or the foregoing implementations of
the third aspect, in another implementation of the third aspect, the first selection
unit is specifically configured to select the k candidate vectors from the face image
database according to a formula s* = ∥[v1,0] - v*∥_2 by using a kd-tree algorithm, where
s* represents a Euclidean distance between [v1,0] and v*.
[0021] With reference to any one of the third aspect, or the foregoing implementations of
the third aspect, in another implementation of the third aspect, the second selection
unit is specifically configured to select the matching vector of the original feature
vector from the k candidate vectors according to a formula in which vi* represents an
i-th candidate vector of the k candidate vectors, s** represents a cosine distance between
[v1,1] and vi*, si* represents a Euclidean distance between [v1,0] and vi*, 1≤i≤k, and
c is a constant.
[0022] With reference to any one of the third aspect, or the foregoing implementations of
the third aspect, in another implementation of the third aspect, the apparatus is
a server; and the first obtaining unit is specifically configured to obtain a vector
v' from a terminal, where v' = [v1, v1·B·v1^T], and extract the original feature vector
from the vector v'.
[0023] According to a fourth aspect, an apparatus for determining an identity identifier
of a face in a face image is provided, including: an obtaining unit, configured to
obtain an original feature vector of a face image; a selection unit, configured to
select a target vector from a face image database according to a formula in which
v2 represents the original feature vector, v* represents a vector in the face image
database, v* includes components [v·A, v·B·v^T], a feature extraction manner for v is
the same as that for the original feature vector, A represents a cross-correlation
submatrix in a joint Bayesian probability matrix, B represents an autocorrelation
submatrix in the joint Bayesian probability matrix, s represents a target distance
between v2 and v*, and the target vector is a vector that is in the face image database and that has
a smallest target distance from v2; and a determining unit, configured to: when a
target distance between the original feature vector and the target vector is less
than a preset threshold, determine an identity identifier that is of the target vector
and that is recorded in the face image database as an identity identifier of a face
in the face image.
[0024] According to a fifth aspect, a terminal is provided, including: a camera, configured
to collect a face image; a processor, configured to obtain an original feature vector
v1 of the face image, and generate a vector v' according to the original feature vector
v1, where v' = [v1, v1·B·v1^T] and B represents an autocorrelation submatrix in a joint
Bayesian probability matrix; and a transmitter, configured to send the vector v' to a
server, where the vector v' is used by the server to recognize an identity identifier
of a face in the face image.
[0025] In the prior art, assuming that an original feature vector of a face image is v1
and a vector in a face image database is v, when the original feature vector v1 is
compared with each vector v in the face image database, both v·A and v·B·v^T need to be
recalculated for each comparison. In the embodiments of the present invention, the original
feature vector of the face image is first obtained, and then a matching vector of the original
feature vector is selected from the face image database. A vector v* in the face image database
includes components [v·A, v·B·v^T]. That is, in the embodiments of the present invention, the face image database stores
a medium-level feature vector formed by means of mutual interaction between a low-level
face feature vector and autocorrelation and cross-correlation submatrices in a joint
Bayesian probability matrix. The medium-level feature vector includes information
about mutual interaction between the face feature vector and the autocorrelation and
cross-correlation submatrices in the joint Bayesian probability matrix, so that efficiency
of facial recognition can be improved.
BRIEF DESCRIPTION OF DRAWINGS
[0026] To describe the technical solutions in the embodiments of the present invention more
clearly, the following briefly describes the accompanying drawings required for describing
the embodiments of the present invention. Apparently, the accompanying drawings in
the following description show merely some embodiments of the present invention, and
a person of ordinary skill in the art may still derive other drawings from these accompanying
drawings without creative efforts.
FIG. 1 is a schematic flowchart of a method for determining an identity identifier
of a face in a face image according to an embodiment of the present invention;
FIG. 2 is a schematic flowchart of a method for determining an identity identifier
of a face in a face image according to an embodiment of the present invention;
FIG. 3 is a schematic block diagram of an apparatus for determining an identity identifier
of a face in a face image according to an embodiment of the present invention;
FIG. 4 is a schematic block diagram of an apparatus for determining an identity identifier
of a face in a face image according to an embodiment of the present invention;
FIG. 5 is a schematic block diagram of an apparatus for determining an identity identifier
of a face in a face image according to an embodiment of the present invention;
FIG. 6 is a schematic block diagram of an apparatus for determining an identity identifier
of a face in a face image according to an embodiment of the present invention; and
FIG. 7 is a schematic block diagram of a terminal according to an embodiment of the
present invention.
DESCRIPTION OF EMBODIMENTS
[0027] The following clearly and completely describes the technical solutions in the embodiments
of the present invention with reference to the accompanying drawings in the embodiments
of the present invention. Apparently, the described embodiments are a part rather
than all of the embodiments of the present invention. All other embodiments obtained
by a person of ordinary skill in the art based on the embodiments of the present invention
without creative efforts shall fall within the protection scope of the present invention.
[0028] FIG. 1 is a schematic flowchart of a method for determining an identity identifier
of a face in a face image according to an embodiment of the present invention. The
method in FIG. 1 includes the following steps.
[0029] 110. Obtain an original feature vector of a face image.
[0030] The original feature vector may be a vector obtained by extracting a feature such
as an HOG (histogram of oriented gradients) feature or an LBP (local binary patterns)
feature of the face image. This embodiment of the present invention imposes no specific
limitation thereon.
[0031] 120. Select k candidate vectors from a face image database according to the original
feature vector, where a vector v* in the face image database includes components
[v·A, v·B·v^T], a feature extraction manner for v is the same as that for the original feature
vector, A represents a cross-correlation submatrix in a joint Bayesian probability
matrix, B represents an autocorrelation submatrix in the joint Bayesian probability
matrix, and k is a positive integer.
[0032] That v* includes components [v·A, v·B·v^T] means that at least some components in
v* are [v·A, v·B·v^T]. For example, assuming that [v·A, v·B·v^T] = [2, 2, 2, 1], v* may be
equal to [2, 2, 2, 1], or v* = [2, 2, 2, 1, 0], provided that v* includes [2, 2, 2, 1].
[0033] In the prior art, a feature vector that is obtained in a same feature extraction
manner as the original feature vector is stored in a face image database. Assuming
that the original feature vector is represented by v1 and the feature vector stored
in the face image database is represented by v, the following formula (3) may be obtained
by expanding the formula (2):

s = v1·B·v1^T + v·B·v^T + 2·(v·A)·v1^T    (3)
[0034] Because the vector v in the face image database is known, it can be seen from the
formula (3) that v·B·v^T and v·A in the formula (3) may be obtained by means of offline
calculation. Therefore, in this embodiment of the present invention, the face image database
no longer stores a feature vector v of a face but stores the vector v*, and uses
[v·A, v·B·v^T] as the components of v*. In addition, v·A and v·B·v^T in [v·A, v·B·v^T]
are operations that need to be performed online when the formula (3) is used for
facial recognition in the prior art. That is, by changing a specific form of a vector
in the face image database, some operations that need to be performed online are performed
offline, improving efficiency of facial recognition.
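As an informal illustration (not the claimed method itself), the offline construction of such medium-level vectors could look like the following sketch, assuming A and B are the learned submatrices and `faces` is a list of low-level face feature vectors; all names are hypothetical.

```python
import numpy as np

def build_medium_level_db(faces, ids, A, B):
    """Precompute v* = [v.A, v.B.v^T] offline for every enrolled face vector v,
    so that these products need not be recomputed at query time."""
    db_vectors, db_ids = [], []
    for v, identity in zip(faces, ids):
        vA = v @ A                # cross-correlation part, a d-dimensional vector
        vBvT = float(v @ B @ v)   # autocorrelation part, a scalar
        db_vectors.append(np.concatenate([vA, [vBvT]]))
        db_ids.append(identity)
    return np.vstack(db_vectors), db_ids
```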
[0035] In addition, the foregoing k candidate vectors may be k candidate vectors that have
a shortest Euclidean distance from a vector [v1,0] and that are in the face image
database. The foregoing k candidate vectors that have a shortest Euclidean distance
from the vector [v1,0] are the first k vectors, in the face image database, sorted in
ascending order according to their Euclidean distances from the vector [v1,0]. The k
candidate vectors having the shortest Euclidean distance from the vector
[v1,0] may be selected from the face image database in multiple manners. The following
gives two specific implementations.
[0036] Optionally, in an implementation, the vector [v1,0] may be compared with each vector
in the face image database to find k vectors having a shortest Euclidean distance
and use the k vectors as candidate vectors.
[0037] Optionally, in another implementation, the k candidate vectors are selected from
the face image database according to a second distance formula s* = ∥[v1,0] - v*∥_2
by using a kd-tree algorithm, where v1 represents the original feature vector, and
s* represents a Euclidean distance between [v1,0] and v*. In this implementation, the k candidate vectors are found quickly by using the kd-tree
algorithm. This improves algorithm efficiency compared with a manner in which vectors
are compared one by one. A kd-tree is a tree that is derived from a binary search
tree and that is used for multi-dimensional search. A difference between the kd-tree
and a binary tree is that each node of the kd-tree represents a point in high-dimensional
space (a quantity of dimensions of the space depends on a quantity of dimensions of
a vector). The kd-tree converts an issue of solving a similarity between vectors into
solving a distance between points in the high-dimensional space. A closer distance
between two points in the high-dimensional space leads to a higher similarity between
two corresponding vectors. During specific implementation, the kd-tree performs hierarchical
division on the high-dimensional space by means of splitting a hyperplane, determines
k nearest neighbor points corresponding to a to-be-searched point (a point representing
the original feature vector), and uses vectors corresponding to the k nearest neighbor
points as the k candidate vectors.
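Purely as a sketch of this selection step (not the patented implementation), the k candidate vectors could be retrieved with an off-the-shelf kd-tree as follows, assuming `db_matrix` holds the medium-level vectors v* built offline and `v1` is the query feature; the appended zero mirrors the query vector [v1,0] described above.

```python
import numpy as np
from scipy.spatial import cKDTree

def select_candidates(v1, db_matrix, k):
    """Return indices of the k database vectors v* closest, in Euclidean
    distance, to the padded query [v1, 0], found with a kd-tree."""
    tree = cKDTree(db_matrix)               # in practice built once and reused per query
    query = np.concatenate([v1, [0.0]])     # [v1, 0]: the extra slot pairs with v.B.v^T
    dists, idx = tree.query(query, k=k)     # s* = ||[v1,0] - v*||_2
    return np.atleast_1d(idx), np.atleast_1d(dists)
```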
[0038] 130. Select a matching vector of the original feature vector from the k candidate
vectors, where the matching vector of the original feature vector is a candidate vector
of the k candidate vectors that has a shortest cosine distance from a vector [v1,1],
and v1 represents the original feature vector.
[0039] The matching vector of the original feature vector may be selected from the k candidate
vectors in multiple manners. For example, the matching vector of the original feature
vector is selected from the k candidate vectors according to a formula in which vi*
represents an i-th candidate vector of the k candidate vectors, s** represents a cosine
distance between [v1,1] and vi*, si* represents a Euclidean distance between [v1,0] and
vi*, 1≤i≤k, and c is a constant.
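The exact formula is not reproduced above; the sketch below only illustrates the general idea of ranking the k candidates by a cosine-style similarity between [v1, 1] and each candidate vector, reusing the Euclidean distances from the kd-tree step and a small constant c. The precise scoring expression and all names are assumptions, not the patented formula.

```python
import numpy as np

def select_match(v1, candidates, cand_dists, c=1e-6):
    """Among the k candidates, pick the vector closest to [v1, 1] in a cosine
    sense; cand_dists are the Euclidean distances s_i* from the kd-tree step
    and c is a small constant (illustrative choice)."""
    query = np.concatenate([v1, [1.0]])     # [v1, 1]
    best_i, best_score = -1, -np.inf
    for i, (vi, si) in enumerate(zip(candidates, cand_dists)):
        score = float(query @ vi) / (np.linalg.norm(query) * (si + c))
        if score > best_score:
            best_score, best_i = score, i
    return best_i, best_score
```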
[0040] It can be verified that the matching vector, obtained by using the foregoing formula,
of the original feature vector is consistent with a matching vector result obtained
based on a joint Bayesian probability model. However, the vector vi* in the face image
database includes components [vi·A, vi·B·vi^T]. That is, vi·A and vi·B·vi^T, which need
to be calculated online when a vector similarity is calculated by using the joint Bayesian
probability model, are obtained by means of offline operations and are stored in the face
image database. During online calculation, only vi* needs to be obtained from the face image
database, which also means that the operation results of vi·A and vi·B·vi^T are obtained,
improving efficiency of facial recognition.
[0041] Further, ∥vi*∥ or 1/∥vi*∥ may be pre-stored in the face image database. In this
way, some operations that need to be performed online are performed offline, and when
the foregoing formula is used to calculate the cosine distance, ∥vi*∥ or 1/∥vi*∥ may be
directly obtained from the database and does not need to be calculated online, further
improving the efficiency of facial recognition.
[0042] 140. Determine, according to the matching vector of the original feature vector,
an identity identifier that is of the matching vector and that is recorded in the
face image database as an identity identifier of a face in the face image.
[0043] It should be understood that an identity identifier of a face is a unique identifier
of a person represented by the face, for example, an identity card number of the person.
[0044] Specifically, the face image database records a correspondence between a vector and
an identity identifier. After the matching vector of the original feature vector is
determined, the identity identifier of the matching vector may be found directly in
the face image database, and then the identity identifier is determined as the identity
identifier of the face in the face image. Alternatively, whether a cosine distance
between the original feature vector and the matching vector is less than a preset
threshold is first determined. If the cosine distance between the original feature
vector and the matching vector is less than the preset threshold, the identity identifier
of the matching vector is determined as the identity identifier of the face. If the
cosine distance between the original feature vector and the matching vector is greater
than the preset threshold, it indicates that the identity identifier of the face is
not found in the face image database.
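As a simple illustration of this decision step (the acceptance threshold and the use of a similarity rather than a distance are assumptions), the matching sketch above could be wrapped as follows:

```python
def identify(v1, candidates, cand_dists, cand_ids, threshold):
    """Return the identity recorded for the best-matching candidate, or None
    when the best score does not reach the (assumed) acceptance threshold."""
    best_i, best_score = select_match(v1, candidates, cand_dists)
    if best_score >= threshold:   # close enough: reuse the recorded identity identifier
        return cand_ids[best_i]
    return None                   # face not recorded; a new identifier may be allocated
```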
[0045] In the prior art, assuming that an original feature vector of a face image is v1
and a vector in a face image database is v, when the original feature vector v1 is
compared with each vector v in the face image database, both v·A and v·B·v^T need to be
repeatedly calculated for each comparison. In this embodiment of the present invention,
the original feature vector of the face image is first obtained, and then a matching vector
of the original feature vector is selected from the face image database. A vector v* in the
face image database includes components [v·A, v·B·v^T]. That is, in this embodiment of the present invention, the face image database stores
a medium-level feature vector formed by means of mutual interaction between a low-level
face feature vector and autocorrelation and cross-correlation submatrices in a joint
Bayesian probability matrix. The medium-level feature vector includes information
about mutual interaction between the face feature vector and the autocorrelation and
cross-correlation submatrices in the joint Bayesian probability matrix, so that efficiency
of facial recognition can be improved.
[0046] The method in this embodiment of the present invention may be applied to facial recognition
in a video. For ease of understanding, a general process of facial recognition in
a video signal is briefly described.
[0047] Step 1: Face detection is performed on an initial frame, for example, a robust real-time
object detection algorithm is used to perform face detection, to obtain an initial
location of a face in a video image and obtain one frame of face image. Then, in a
subsequent video frame, a target tracking algorithm, for example, a tracking-learning-detection
algorithm, is used to track a location of the face to obtain a face tracking sequence.
The tracking sequence includes multiple frames of face images.
[0048] Step 2: An original feature vector of each frame of face image in the tracking sequence
is compared with all feature vectors in the face image database by using the formula
(1) or the formula (2), to select an identity identifier of a face in each frame of
face image. Recognized identity identifiers are counted by using a voting mechanism,
and an identity identifier obtaining most votes is determined as a final identity
identifier of the face. Certainly, if the identity identifier of the face is not determined
in the face image database, a new identity identifier may be allocated to the face,
and the new identity identifier is registered in the face image database.
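As an informal sketch of the voting mechanism (names are hypothetical), the per-frame recognition results could be aggregated as follows:

```python
from collections import Counter

def vote_identity(frame_ids):
    """Aggregate per-frame results: the identifier recognised in the largest
    number of frames wins; frames with no match (None) are ignored."""
    votes = Counter(i for i in frame_ids if i is not None)
    if not votes:
        return None   # no frame matched: a new identity identifier may be registered
    identity, _count = votes.most_common(1)[0]
    return identity
```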
[0049] The following details specific implementations of the present invention when the
foregoing architecture of facial recognition in a video signal is used.
[0050] Optionally, in an embodiment, step 110 may include: tracking locations of the face
in different video images to obtain N consecutive frames of images; and obtaining
the original feature vector from a t-th frame of image of the N frames of images, where 1≤t<N. The method in FIG. 1 may further
include: storing, into a cache database, the matching vector of the original feature
vector and the identity identifier that is of the matching vector and that is recorded
in the face image database; obtaining an original feature vector of a (t+1)-th frame of face image
of the N frames of face images, where the (t+1)-th frame of face image is a next frame of face image
of the t-th frame of face image; selecting a target vector from the cache database according
to the original feature vector of the (t+1)-th frame of face image, where the cache database includes
the matching vector of the original feature vector of the t-th frame of face image; and when a target
distance between the original feature vector of the (t+1)-th frame of face image and the target vector
is less than a preset threshold, determining an identity identifier that is of the target vector
and that is recorded in the cache database as an identity identifier of a face in the (t+1)-th frame of face image.
[0051] The foregoing N frames of face images form a tracking sequence of a face in a video.
The foregoing t-th frame of face image may be the first frame of face image of the N frames of face
images, or may be any face image, other than the last frame of face image, of the
N frames of face images. In an actual application, the N frames of face images may
be arranged according to a sequence of occurrence of the N frames of face images in
the video, and then faces in the face images are recognized in sequence. When facial
recognition is performed for each frame of face image, the cache database is first
accessed. If no face is recognized from the cache database, the face image database
is continuously accessed. That is, in this embodiment of the present invention, a
two-level database structure including the cache database and the face image database
is introduced. In this way, a frequently accessed face image in the face image database
is stored into the cache database. During actual access, the cache database is first
accessed. If no face is recognized based on the cache database, the global face image
database is accessed. This improves efficiency of facial recognition to a given degree.
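A minimal sketch of this two-level lookup, with the cache held as plain lists and the global search abstracted behind an assumed helper `db_search`; the cosine-style cache test and all names are illustrative assumptions.

```python
import numpy as np

def recognize_with_cache(v1, cache_vecs, cache_ids, db_search, threshold):
    """Try the small cache of recently matched medium-level vectors first and
    fall back to the full face image database only on a cache miss."""
    query = np.concatenate([v1, [1.0]])
    for vec, identity in zip(cache_vecs, cache_ids):
        sim = float(query @ vec) / (np.linalg.norm(query) * np.linalg.norm(vec))
        if sim > threshold:
            return identity                  # recognised from the cache
    identity, match_vec = db_search(v1)      # assumed helper: kd-tree + matching over the full database
    if identity is not None:
        cache_vecs.append(match_vec)         # keep the hit for subsequent frames
        cache_ids.append(identity)
    return identity
```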
[0052] Specifically, the N frames of face images are face images obtained by tracking a
same face. Therefore, a probability that a matching result of the t
th frame of face image is the same as that of the (t+1)
th frame of face image is high. When the matching vector of the original feature vector
of the t
th frame of face image is found in the face image database, the matching vector is stored
into the cache database. A probability that the (t+1)
th frame of face image hits the matching vector when the cache database is first accessed
is high. It should be noted that, in an actual application, both the matching vector
of the original feature vector of the t
th frame of face image and a vector that is in the face image database and whose storage
location is adjacent to that of the matching vector can be stored into the cache database.
(Certainly, a premise is that vectors in the face image database are organized by
similarity, that is, vectors of a higher similarity have closer storage locations in
the face image database.)
[0053] It should be noted that the selecting a target vector from the cache database according
to the original feature vector of the (t+1)-th frame of face image may include: selecting
the target vector from the cache database according to a formula in which v2 represents
the original feature vector of the (t+1)-th frame of face image, v* represents a vector
in the cache database, and s represents a target distance between v2 and v*.
[0054] In the prior art, the tracking sequence includes multiple frames of face images.
Because a frame rate of a video signal is high, many face images collected in the
tracking sequence are similar. If no screening is performed and all face images in
the tracking sequence are used for subsequent facial recognition, facial recognition
results of face images with an extremely high similarity are usually the same, and
unnecessary repeated calculation is introduced.
[0055] Optionally, in an embodiment, the tracking locations of the face in different video
images to obtain N consecutive frames of face images may include: obtaining M frames
of face images from the video, where M≥N; and deleting a redundant face image from
the M frames of face images according to a similarity of faces in the M frames of
face images, to obtain the N frames of face images.
[0056] That is, in this embodiment of the present invention, the M frames of face images
are first collected from the video, and then some redundant face images are deleted
based on a similarity of the M frames of face images. For example, if two frames of
images are similar, only one of them needs to be retained. The M frames of face images
form an initial tracking sequence of the faces in the video, and the N frames of face
images form a final tracking sequence used for facial recognition.
[0057] A face similarity may be measured in multiple manners. For example, original feature
vectors of the M frames of face images are first extracted, and then a similarity
(or a distance) between two vectors is calculated by using the formula (1) or the
formula (2), that is, the joint Bayesian probability model. For example, an original
feature vector of a face is collected from a current frame; the original feature vector
and an original feature vector of another face image that has been added to the tracking
sequence are compared by using the formula (1) (the original feature vector may be
compared with original feature vectors of all other face images in the tracking sequence,
or may be compared with original feature vectors of several frames of face images
that are consecutive in terms of time). When a comparison result is greater than a
preset threshold, the current frame is added to the tracking sequence; otherwise,
the current frame is discarded.
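A rough sketch of this screening step, assuming `joint_bayes_distance(a, b)` implements a formula (1)-style comparison in which a larger value means less similar faces; the threshold and names are illustrative assumptions.

```python
def screen_tracking_sequence(frame_features, joint_bayes_distance, threshold):
    """Keep a frame only if it differs enough from the frames already kept,
    so that near-duplicate faces are not recognised repeatedly."""
    kept = []
    for feat in frame_features:
        if all(joint_bayes_distance(feat, prev) > threshold for prev in kept):
            kept.append(feat)   # sufficiently different: keep for recognition
        # otherwise the frame is redundant and is discarded
    return kept
```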
[0058] It should be understood that the method in FIG. 1 may be executed by a cloud server.
The obtaining an original feature vector of a face image in step 110 may be: obtaining,
by the server, the original feature vector directly from the face image; or obtaining,
by a mobile terminal, the original feature vector, and uploading the original feature
vector to the server. The following provides a specific embodiment of collecting,
by a terminal, an original feature vector of a face image, and uploading the original
feature vector to a server: the server obtains a vector v' from the terminal, where
v' = [v1, v1·B·v1^T]; and the server extracts the original feature vector from the vector v'.
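For illustration (the packing convention is an assumption), the terminal-side preparation of v' and the corresponding server-side unpacking could look like this:

```python
import numpy as np

def build_upload_vector(v1, B):
    """Terminal side: pack the raw feature v1 together with the scalar
    v1.B.v1^T so the server does not have to recompute it."""
    return np.concatenate([v1, [float(v1 @ B @ v1)]])

def split_upload_vector(v_prime):
    """Server side: recover the original feature vector v1 and the
    precomputed autocorrelation term from the uploaded vector v'."""
    return v_prime[:-1], float(v_prime[-1])
```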
[0059] FIG. 2 is a schematic flowchart of a method for determining an identity identifier
of a face in a face image according to an embodiment of the present invention. The
method in FIG. 2 includes the following steps:
[0060] 210. Obtain an original feature vector of a face image.
[0061] 220. Select a target vector from a face image database according to a formula
in which v2 represents the original feature vector, v* represents a vector in the face
image database, v* includes components [v·A, v·B·v^T], a feature extraction manner for v
is the same as that for the original feature vector, A represents a cross-correlation
submatrix in a joint Bayesian probability matrix, B represents an autocorrelation submatrix
in the joint Bayesian probability matrix, s represents a target distance between v2 and v*,
and the target vector is a vector that is in the face image database and that has
a smallest target distance from v2.
[0062] 230. When a target distance between the original feature vector and the target vector
is less than a preset threshold, determine an identity identifier that is of the target
vector and that is recorded in the face image database as an identity identifier of
a face in the face image.
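As with the earlier sketches, the following only illustrates the FIG. 2 variant under assumed details: because the medium-level vectors are precomputed, a query can be scored against the whole database with a single matrix product instead of per-vector recomputation. The exact target-distance formula is not reproduced in the text, so a generic linear scoring over [2·v2, 1] is assumed here.

```python
import numpy as np

def score_against_db(v2, db_matrix):
    """Score a query feature v2 against every precomputed row v* = [v.A, v.B.v^T]
    of db_matrix in one matrix product (illustrative, assumed scoring form)."""
    query = np.concatenate([2.0 * v2, [1.0]])   # pairs 2*v2 with v.A and 1 with v.B.v^T
    scores = db_matrix @ query                  # one value per database entry
    return int(np.argmin(scores)), scores       # index of the smallest target distance
```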
[0063] A vector v* in a face image database includes components [v·A, v·B·v^T]. That is,
the face image database stores a medium-level feature vector formed by
means of mutual interaction between a low-level face feature vector and autocorrelation
and cross-correlation submatrices in a joint Bayesian probability matrix. The medium-level
feature vector includes information about mutual interaction between the face feature
vector and the autocorrelation and cross-correlation submatrices in the joint Bayesian
probability matrix, so that efficiency of facial recognition can be improved.
[0064] The method for determining an identity identifier of a face in a face image according
to an embodiment of the present invention is detailed above with reference to FIG.
1 and FIG. 2. With reference to FIG. 3 to FIG. 6, the following details an apparatus
for determining an identity identifier of a face in a face image according to an embodiment
of the present invention.
[0065] FIG. 3 is a schematic block diagram of an apparatus for determining an identity identifier
of a face in a face image according to an embodiment of the present invention. It
should be understood that the apparatus 300 in FIG. 3 can implement the steps described
in FIG. 1. To avoid repetition, details are not described herein again. The apparatus
300 includes:
a first obtaining unit 310, configured to obtain an original feature vector of a face
image;
a first selection unit 320, configured to select k candidate vectors from a face image
database according to the original feature vector, where a vector v* in the face image database includes components [v·A, v·B·v^T], a feature extraction manner for v is the same as that for the original feature
vector, A represents a cross-correlation submatrix in a joint Bayesian probability
matrix, B represents an autocorrelation submatrix in the joint Bayesian probability
matrix, and k is a positive integer;
a second selection unit 330, configured to select a matching vector of the original
feature vector from the k candidate vectors, where the matching vector of the original
feature vector is a candidate vector of the k candidate vectors that has a shortest
cosine distance from a vector [v1,1], and v1 represents the original feature vector;
and
a first determining unit 340, configured to determine, according to the matching vector
of the original feature vector, an identity identifier that is of the matching vector
and that is recorded in the face image database as an identity identifier of a face
in the face image.
[0066] In the prior art, assuming that an original feature vector of a face image is v1
and a vector in a face image database is v, when the original feature vector v1 is
compared with each vector v in the face image database, both v·A and v·B·v^T need to be
repeatedly calculated for each comparison. In this embodiment of the present invention,
the original feature vector of the face image is first obtained, and then a matching vector
of the original feature vector is selected from the face image database. A vector v* in the
face image database includes components [v·A, v·B·v^T]. That is, in this embodiment of the present invention, the face image database stores
a medium-level feature vector formed by means of mutual interaction between a low-level
face feature vector and autocorrelation and cross-correlation submatrices in a joint
Bayesian probability matrix. The medium-level feature vector includes information
about mutual interaction between the face feature vector and the autocorrelation and
cross-correlation submatrices in the joint Bayesian probability matrix, so that efficiency
of facial recognition can be improved.
[0067] Optionally, in an embodiment, the first obtaining unit 310 is specifically configured
to track locations of the face in different video images to obtain N consecutive frames
of face images, and obtain the original feature vector from a t-th frame of face image of the N frames of face images, where 1≤t≤N; and the apparatus
300 further includes: an access unit, configured to store, into a cache database,
the matching vector of the original feature vector and the identity identifier that
is of the matching vector and that is recorded in the face image database; a second
obtaining unit, configured to obtain an original feature vector of a (t+1)-th frame of face image
of the N frames of face images, where the (t+1)-th frame of face image is a next frame of face image
of the t-th frame of face image; a third selection unit, configured to select a target vector
from the cache database according to the original feature vector of the (t+1)-th frame of face image,
where the cache database includes the matching vector of the original feature vector of the
t-th frame of face image; and a second determining unit, configured to: when a target
distance between the original feature vector of the (t+1)-th frame of face image and the target
vector is less than a preset threshold, determine an identity identifier that is of the target
vector and that is recorded in the cache database as an identity identifier of a face in the
(t+1)-th frame of face image.
[0068] Optionally, in an embodiment, the third selection unit is specifically configured
to select the target vector from the cache database according to a formula in which
v2 represents the original feature vector of the (t+1)-th frame of face image, v* represents
a vector in the cache database, and s represents a target distance between v2 and v*.
[0069] Optionally, in an embodiment, the first selection unit 320 is specifically configured
to select the k candidate vectors from the face image database according to a formula
s* = ∥[v1,0] - v*∥_2 by using a kd-tree algorithm, where s* represents a Euclidean distance
between [v1,0] and v*.
[0070] Optionally, in an embodiment, the second selection unit 330 is specifically configured
to select the matching vector of the original feature vector from the k candidate
vectors according to a formula in which vi* represents an i-th candidate vector of the
k candidate vectors, s** represents a cosine distance between [v1,1] and vi*, si* represents
a Euclidean distance between [v1,0] and vi*, 1≤i≤k, and c is a constant.
[0071] Optionally, in an embodiment, the apparatus 300 is a server; and the first obtaining
unit 310 is specifically configured to obtain a vector v' from a terminal, where
v' = [v1, v1·B·v1^T], and extract the original feature vector from the vector v'.
[0072] FIG. 4 is a schematic block diagram of an apparatus for determining an identity identifier
of a face in a face image according to an embodiment of the present invention. It
should be understood that the apparatus 400 in FIG. 4 can implement the steps described
in FIG. 1. To avoid repetition, details are not described herein again. The apparatus
400 includes:
a memory 410, configured to store a program; and
a processor 420, configured to execute the program, where when the program is executed,
the processor 420 is specifically configured to: obtain an original feature vector
of a face image; select k candidate vectors from a face image database according to
the original feature vector, where a vector v* in the face image database includes components [v·A, v·B·v^T], a feature extraction manner for v is the same as that for the original feature
vector, A represents a cross-correlation submatrix in a joint Bayesian probability
matrix, B represents an autocorrelation submatrix in the joint Bayesian probability
matrix, and k is a positive integer; select a matching vector of the original feature
vector from the k candidate vectors, where the matching vector of the original feature
vector is a candidate vector of the k candidate vectors that has a shortest cosine
distance from a vector [v1,1], and v1 represents the original feature vector; and
determine, according to the matching vector of the original feature vector, an identity
identifier that is of the matching vector and that is recorded in the face image database
as an identity identifier of a face in the face image.
[0073] In the prior art, assuming that an original feature vector of a face image is v1
and a vector in a face image database is v, when the original feature vector v1 is
compared with each vector v in the face image database, both v·A and v·B·v^T need to be
repeatedly calculated for each comparison. In this embodiment of the present invention,
the original feature vector of the face image is first obtained, and then a matching vector
of the original feature vector is selected from the face image database. A vector v* in the
face image database includes components [v·A, v·B·v^T]. That is, in this embodiment of the present invention, the face image database stores
a medium-level feature vector formed by means of mutual interaction between a low-level
face feature vector and autocorrelation and cross-correlation submatrices in a joint
Bayesian probability matrix. The medium-level feature vector includes information
about mutual interaction between the face feature vector and the autocorrelation and
cross-correlation submatrices in the joint Bayesian probability matrix, so that efficiency
of facial recognition can be improved.
[0074] Optionally, in an embodiment, the processor 420 is specifically configured to track
locations of the face in different video images to obtain N consecutive frames of
face images, and obtain the original feature vector from a t-th frame of face image of the N frames of face images, where 1≤t≤N; and the processor
420 is further configured to: store, into a cache database, the matching vector of
the original feature vector and the identity identifier that is of the matching vector
and that is recorded in the face image database; obtain an original feature vector
of a (t+1)-th frame of face image of the N frames of face images, where the (t+1)-th frame
of face image is a next frame of face image of the t-th frame of face image; select a target
vector from the cache database according to the original feature vector of the (t+1)-th frame
of face image, where the cache database includes the matching vector of the original feature
vector of the t-th frame of face image; and when a target distance between the original feature
vector of the (t+1)-th frame of face image and the target vector is less than a preset threshold,
determine an identity identifier that is of the target vector and that is recorded in the cache
database as an identity identifier of a face in the (t+1)-th frame of face image.
[0075] Optionally, in an embodiment, the processor 420 is specifically configured to select
the target vector from the cache database according to a formula in which v2 represents
the original feature vector of the (t+1)-th frame of face image, v* represents a vector
in the cache database, and s represents a target distance between v2 and v*.
[0076] Optionally, in an embodiment, the processor 420 is specifically configured to select
the k candidate vectors from the face image database according to a formula
s* = ∥[v1,0] - v*∥_2 by using a kd-tree algorithm, where s* represents a Euclidean distance
between [v1,0] and v*.
[0077] Optionally, in an embodiment, the processor 420 is specifically configured to select
the matching vector of the original feature vector from the k candidate vectors according
to a formula in which vi* represents an i-th candidate vector of the k candidate vectors,
s** represents a cosine distance between [v1,1] and vi*, si* represents a Euclidean distance
between [v1,0] and vi*, 1≤i≤k, and c is a constant.
[0078] Optionally, in an embodiment, the apparatus 400 is a server; and the processor 420
is specifically configured to obtain a vector v' from a terminal by using a receiver,
where v' = [v1, v1·B·v1^T], and extract the original feature vector from the vector v'.
[0079] FIG. 5 is a schematic block diagram of an apparatus for determining an identity identifier
of a face in a face image according to an embodiment of the present invention. It
should be understood that the apparatus 500 in FIG. 5 can implement the steps described
in FIG. 2. To avoid repetition, details are not described herein again. The apparatus
500 includes:
an obtaining unit 510, configured to obtain an original feature vector of a face image;
a selection unit 520, configured to select a target vector from a face image database
according to a formula in which v2 represents the original feature vector, v* represents
a vector in the face image database, s represents a target distance between v2 and v*,
the target vector is a vector that is in the face image database and that has a smallest
target distance from v2, and B represents an autocorrelation submatrix in a joint
Bayesian probability matrix; and
a determining unit 530, configured to: when a target distance between the original
feature vector and the target vector is less than a preset threshold, determine an
identity identifier that is of the target vector and that is recorded in the face
image database as an identity identifier of a face in the face image.
[0080] A vector v* in a face image database includes components [v·A, v·B·v^T]. That is,
the face image database stores a medium-level feature vector formed by
means of mutual interaction between a low-level face feature vector and autocorrelation
and cross-correlation submatrices in a joint Bayesian probability matrix. The medium-level
feature vector includes information about mutual interaction between the face feature
vector and the autocorrelation and cross-correlation submatrices in the joint Bayesian
probability matrix, so that efficiency of facial recognition can be improved.
[0081] FIG. 6 is a schematic block diagram of an apparatus for determining an identity identifier
of a face in a face image according to an embodiment of the present invention. It
should be understood that the apparatus 600 in FIG. 6 can implement the steps described
in FIG. 2. To avoid repetition, details are not described herein again. The apparatus
600 includes:
a memory 610, configured to store a program; and
a processor 620, configured to execute the program, where when the program is executed,
the processor 620 is specifically configured to: obtain an original feature vector
of a face image; select a target vector from a face image database according to a
formula in which v2 represents the original feature vector, v* represents a vector in
the face image database, s represents a target distance between v2 and v*, the target
vector is a vector that is in the face image database and that has a
smallest target distance from v2, and B represents an autocorrelation submatrix in
a joint Bayesian probability matrix; and when a target distance between the original
feature vector and the target vector is less than a preset threshold, determine an
identity identifier that is of the target vector and that is recorded in the face
image database as an identity identifier of a face in the face image.
[0082] A vector v* in a face image database includes components [v·A, v·B·v^T]. That is,
the face image database stores a medium-level feature vector formed by
means of mutual interaction between a low-level face feature vector and autocorrelation
and cross-correlation submatrices in a joint Bayesian probability matrix. The medium-level
feature vector includes information about mutual interaction between the face feature
vector and the autocorrelation and cross-correlation submatrices in the joint Bayesian
probability matrix, so that efficiency of facial recognition can be improved.
[0083] FIG. 7 is a schematic block diagram of a terminal according to an embodiment of the
present invention. The terminal 700 in FIG. 7 includes:
a camera 710, configured to collect a face image;
a processor 720, configured to obtain an original feature vector v1 of the face image,
and generate a vector v' according to the original feature vector v1, where v' = [v1, v1·B·v1^T] and B represents an autocorrelation submatrix in a joint Bayesian probability matrix;
and
a transmitter 730, configured to send the vector v' to a server, where the vector v' is used by the server to recognize an identity identifier of a face in the face image.
[0084] A person of ordinary skill in the art may be aware that, the units and algorithm
steps in the examples described with reference to the embodiments disclosed in this
specification may be implemented by electronic hardware or a combination of computer
software and electronic hardware. Whether the functions are performed by hardware
or software depends on particular applications and design constraint conditions of
the technical solutions. A person skilled in the art may use different methods to
implement the described functions for each particular application, but it should not
be considered that the implementation goes beyond the scope of the present invention.
[0085] It may be clearly understood by a person skilled in the art that, for the purpose
of convenient and brief description, for a detailed working process of the foregoing
system, apparatus, and unit, reference may be made to a corresponding process in the
foregoing method embodiments, and details are not described.
[0086] In the several embodiments provided in this application, it should be understood
that the disclosed system, apparatus, and method may be implemented in other manners.
For example, the described apparatus embodiment is merely exemplary. For example,
the unit division is merely logical function division and may be other division in
actual implementation. For example, multiple units or components may be combined or
integrated into another system, or some features may be ignored or not performed.
In addition, the displayed or discussed mutual couplings, direct couplings, or communication connections may be implemented through some interfaces. The indirect couplings or communication connections between the apparatuses or units may be implemented in electrical, mechanical, or other forms.
[0087] The units described as separate parts may or may not be physically separate, and
parts displayed as units may or may not be physical units, may be located in one position,
or may be distributed on multiple network units. Some or all of the units may be selected
according to actual needs to achieve the objectives of the solutions of the embodiments.
[0088] In addition, functional units in the embodiments of the present invention may be
integrated into one processing unit, or each of the units may exist alone physically,
or two or more units are integrated into one unit.
[0089] When the functions are implemented in the form of a software functional unit and
sold or used as an independent product, the functions may be stored in a computer-readable
storage medium. Based on such an understanding, the technical solutions of the present
invention essentially, or the part contributing to the prior art, or some of the technical
solutions may be implemented in a form of a software product. The software product
is stored in a storage medium, and includes several instructions for instructing a
computer device (which may be a personal computer, a server, a network device, or
the like) to perform all or some of the steps of the methods described in the embodiments
of the present invention. The foregoing storage medium includes: any medium that can
store program code, such as a USB flash drive, a removable hard disk, a read-only
memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory),
a magnetic disk, or an optical disc.
[0090] The foregoing descriptions are merely specific implementations of the present invention,
but are not intended to limit the protection scope of the present invention. Any variation
or replacement readily figured out by a person skilled in the art within the technical
scope disclosed in the present invention shall fall within the protection scope of
the present invention. Therefore, the protection scope of the present invention shall
be subject to the protection scope of the claims.
1. A method for determining an identity identifier of a face in a face image, comprising:
obtaining an original feature vector of a face image;
selecting k candidate vectors from a face image database according to the original
feature vector, wherein a vector v* in the face image database comprises components [v·A, v·B·v^T], a feature extraction manner for v is the same as that for the original feature
vector, A represents a cross-correlation submatrix in a joint Bayesian probability
matrix, B represents an autocorrelation submatrix in the joint Bayesian probability
matrix, and k is a positive integer;
selecting a matching vector of the original feature vector from the k candidate vectors,
wherein the matching vector of the original feature vector is a candidate vector of
the k candidate vectors that has a shortest cosine distance from a vector [v1,1],
and v1 represents the original feature vector; and
determining, according to the matching vector of the original feature vector, an identity
identifier that is of the matching vector and that is recorded in the face image database
as an identity identifier of a face in the face image.
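As a non-limiting sketch of the two selections in claim 1 (Python/NumPy; the helper names, the plain linear scan used for candidate selection, and the definition of cosine distance as 1 minus cosine similarity are assumptions; claim 4 instead uses a kd-tree for the first selection):

    import numpy as np

    def cosine_distance(a, b):
        # one common definition: 1 - cosine similarity
        return 1.0 - float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

    def match_identity(v1, database, identities, k):
        """database: n x (d+1) array of medium-level vectors v* = [v*A, v*B*v^T];
        identities: the n identity identifiers recorded in the face image database."""
        query_e = np.concatenate([v1, [0.0]])   # [v1, 0], used for candidate selection
        query_c = np.concatenate([v1, [1.0]])   # [v1, 1], used for cosine matching
        # first selection: k candidate vectors (here by a plain Euclidean scan)
        cand_idx = np.argsort(np.linalg.norm(database - query_e, axis=1))[:k]
        # second selection: the candidate with the shortest cosine distance from [v1, 1]
        d_cos = [cosine_distance(query_c, database[i]) for i in cand_idx]
        best = cand_idx[int(np.argmin(d_cos))]
        return identities[best]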
2. The method according to claim 1, wherein the obtaining an original feature vector
of a face image comprises:
tracking locations of the face in different video images to obtain N consecutive frames
of face images; and
obtaining the original feature vector from a t-th frame of face image of the N frames of face images, wherein 1≤t<N; and
the method further comprises:
storing, into a cache database, the matching vector of the original feature vector
and the identity identifier that is of the matching vector and that is recorded in
the face image database;
obtaining an original feature vector of a (t+1)-th frame of face image of the N frames of face images, wherein the (t+1)-th frame of face image is a next frame of face image of the t-th frame of face image;
selecting a target vector from the cache database according to the original feature
vector of the (t+1)-th frame of face image, wherein the cache database comprises the matching vector of
the original feature vector of the t-th frame of face image; and
when a target distance between the original feature vector of the (t+1)-th frame of face image and the target vector is less than a preset threshold, determining
an identity identifier that is of the target vector and that is recorded in the cache
database as an identity identifier of a face in the (t+1)-th frame of face image.
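The caching behaviour of claim 2 can be pictured with the following hedged sketch (Python; target_distance() stands in for the formula of claim 3, which is not reproduced here, and the cache layout as a list of (vector, identifier) pairs is an assumption):

    def recognize_from_cache(v_next, cache, threshold, target_distance):
        """cache: (stored_vector, identity_identifier) pairs written for earlier frames;
        target_distance: callable standing in for the claim-3 distance."""
        if not cache:
            return None
        distances = [target_distance(v_next, vec) for vec, _ in cache]
        i = min(range(len(cache)), key=lambda j: distances[j])
        if distances[i] < threshold:
            return cache[i][1]        # reuse the identifier recorded in the cache
        return None                   # otherwise fall back to the face image database

Returning None here simply signals that the (t+1)-th frame should be matched against the full face image database as in claim 1.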
3. The method according to claim 2, wherein the selecting a target vector from the cache
database according to the original feature vector of the (t+1)-th frame of face image comprises:
selecting the target vector from the cache database according to a formula

wherein v2 represents the original feature vector of the (t+1)-th frame of face image,

represents a vector in the cache database, and s represents a target distance between v2 and that vector in the cache database.
4. The method according to any one of claims 1 to 3, wherein the selecting k candidate
vectors from a face image database according to the original feature vector comprises:
selecting the k candidate vectors from the face image database according to a formula
s* = ∥[v1,0] − v*∥₂ by using a kd-tree algorithm, wherein s* represents a Euclidean distance between [v1,0] and v*.
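One possible realization of the kd-tree selection in claim 4, sketched with scipy.spatial.cKDTree (the use of SciPy is an assumption; the claim only requires that the k candidates minimize the Euclidean distance between [v1, 0] and the stored vectors v*):

    import numpy as np
    from scipy.spatial import cKDTree

    def select_candidates(v1, database, k):
        # database: n x (d+1) array of medium-level vectors v*
        tree = cKDTree(database)
        query = np.concatenate([v1, [0.0]])     # [v1, 0]
        _, idx = tree.query(query, k=k)         # indices of the k nearest v*
        return np.atleast_1d(idx)

In practice the tree would be built once when the face image database is loaded and reused for every query.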
6. The method according to any one of claims 1 to 5, wherein the method is executed by
a server; and
the obtaining an original feature vector of a face image comprises:
obtaining, by the server, a vector v' from a terminal, wherein v' = [v1, v1·B·v1^T]; and
extracting, by the server, the original feature vector from the vector v'.
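For claim 6, the server-side step amounts to splitting the received vector: the first d entries are the original feature vector v1 and the final entry is the scalar v1·B·v1^T. A minimal sketch (Python/NumPy; the function name is an assumption):

    import numpy as np

    def split_terminal_vector(v_prime):
        # v_prime = [v1, v1*B*v1^T] as sent by the terminal
        v_prime = np.asarray(v_prime, dtype=float)
        return v_prime[:-1], float(v_prime[-1])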
7. A method for determining an identity identifier of a face in a face image, comprising:
obtaining an original feature vector of a face image;
selecting a target vector from a face image database according to a formula

wherein v2 represents the original feature vector, v* represents a vector in the face image database, v* comprises components [v·A, v·B·v^T], a feature extraction manner for v is the same as that for the original feature vector, A represents a cross-correlation submatrix in a joint Bayesian probability matrix, B represents an autocorrelation submatrix in the joint Bayesian probability matrix, s represents a target distance between v2 and v*, and the target vector is a vector that is in the face image database and that has a smallest target distance from v2; and
when a target distance between the original feature vector and the target vector is
less than a preset threshold, determining an identity identifier that is of the target
vector and that is recorded in the face image database as an identity identifier of
a face in the face image.
8. An apparatus for determining an identity identifier of a face in a face image, comprising:
a first obtaining unit, configured to obtain an original feature vector of a face
image;
a first selection unit, configured to select k candidate vectors from a face image
database according to the original feature vector, wherein a vector v* in the face image database comprises components [v·A, v·B·v^T], a feature extraction manner for v is the same as that for the original feature
vector, A represents a cross-correlation submatrix in a joint Bayesian probability
matrix, B represents an autocorrelation submatrix in the joint Bayesian probability
matrix, and k is a positive integer;
a second selection unit, configured to select a matching vector of the original feature
vector from the k candidate vectors, wherein the matching vector of the original feature
vector is a candidate vector of the k candidate vectors that has a shortest cosine
distance from a vector [v1,1], and v1 represents the original feature vector; and
a first determining unit, configured to determine, according to the matching vector
of the original feature vector, an identity identifier that is of the matching vector
and that is recorded in the face image database as an identity identifier of a face
in the face image.
9. The apparatus according to claim 8, wherein the first obtaining unit is specifically
configured to track locations of the face in different video images to obtain N consecutive
frames of face images, and obtain the original feature vector from a t-th frame of face image of the N frames of face images, wherein 1≤t≤N; and
the apparatus further comprises:
an access unit, configured to store, into a cache database, the matching vector of
the original feature vector and the identity identifier that is of the matching vector
and that is recorded in the face image database;
a second obtaining unit, configured to obtain an original feature vector of a (t+1)-th frame of face image of the N frames of face images, wherein the (t+1)-th frame of face image is a next frame of face image of the t-th frame of face image;
a third selection unit, configured to select a target vector from the cache database
according to the original feature vector of the (t+1)-th frame of face image, wherein the cache database comprises the matching vector of
the original feature vector of the t-th frame of face image; and
a second determining unit, configured to: when a target distance between the original
feature vector of the (t+1)-th frame of face image and the target vector is less than a preset threshold, determine
an identity identifier that is of the target vector and that is recorded in the cache
database as an identity identifier of a face in the (t+1)-th frame of face image.
10. The apparatus according to claim 9, wherein the third selection unit is specifically
configured to select the target vector from the cache database according to a formula

wherein v2 represents the original feature vector of the (t+1)-th frame of face image,

represents a vector in the cache database, and s represents a target distance between v2 and that vector in the cache database.
11. The apparatus according to any one of claims 8 to 10, wherein the first selection
unit is specifically configured to select the k candidate vectors from the face image
database according to a formula s* = ∥[v1,0] − v*∥₂ by using a kd-tree algorithm, wherein s* represents a Euclidean distance between [v1,0]
and v*.
12. The apparatus according to any one of claims 8 to 11, wherein the second selection
unit is specifically configured to select the matching vector of the original feature
vector from the k candidate vectors according to a formula

wherein

represents an i-th candidate vector of the k candidate vectors, s** represents a cosine distance between [v1,1] and the i-th candidate vector,

represents a Euclidean distance between [v1,0] and the i-th candidate vector, 1≤i≤k, and c is a constant.
13. The apparatus according to any one of claims 8 to 12, wherein the apparatus is a server;
and
the first obtaining unit is specifically configured to obtain a vector v' from a terminal, wherein v' = [v1, v1·B·v1^T]; and extract the original feature vector from the vector v'.
14. An apparatus for determining an identity identifier of a face in a face image, comprising:
an obtaining unit, configured to obtain an original feature vector of a face image;
a selection unit, configured to select a target vector from a face image database
according to a formula

wherein v2 represents the original feature vector, v* represents a vector in the face image database, v* comprises components [v·A, v·B·v^T], a feature extraction manner for v is the same as that for the original feature vector, A represents a cross-correlation submatrix in a joint Bayesian probability matrix, B represents an autocorrelation submatrix in the joint Bayesian probability matrix, s represents a target distance between v2 and v*, and the target vector is a vector that is in the face image database and that has a smallest target distance from v2; and
a determining unit, configured to: when a target distance between the original feature
vector and the target vector is less than a preset threshold, determine an identity
identifier that is of the target vector and that is recorded in the face image database
as an identity identifier of a face in the face image.
15. A terminal, comprising:
a camera, configured to collect a face image;
a processor, configured to obtain an original feature vector v1 of the face image,
and generate a vector v' according to the original feature vector v1, wherein v' = [v1, v1·B·v1^T] and B represents an autocorrelation submatrix in a joint Bayesian probability matrix;
and
a transmitter, configured to send the vector v' to a server, wherein the vector v' is used by the server to recognize an identity identifier of a face in the face image.