CROSS-REFERENCE TO RELATED APPLICATIONS
TECHNICAL FIELD
[0002] This disclosure relates to the field of image processing, and particularly, to a method of embedding an image in a video, and a method and an apparatus of acquiring a plane prediction model.
BACKGROUND
[0003] Advertising in a video is one of the effective means of promotion.
[0004] Inserting an advertisement video in a video is to choose a moment in the original video and insert a prepared advertisement video at that moment. While the inserted advertisement video is played, the user cannot watch the original video at all, which degrades the viewing experience.
[0005] Posting an advertisement image in a video is to post the advertisement image in a corner region of each video frame image, so that when the user watches the original video, an advertisement image pops up in a corner of the video playing interface. The user can watch the original video while the advertisement image is displayed, but the popped-up advertisement image may block key content of the original video, and the advertisement image fuses unnaturally with the video.
[0006] Embedding an advertisement image in a video is to embed the advertisement image into a certain position in a video frame image and fuse the advertisement image into the video. In some related art, the video is detected, and if targets such as specific objects or already existing advertisements are found therein, these targets are replaced with the advertisement image. In other related art, an advertisement placement position is labelled in one video frame image, and for the other video frame images, the advertisement placement position is tracked using feature point matching, and the advertisement is placed at the tracked position.
[0007] The inventors have found that, in the related art of embedding an advertisement image in a video, there are many restrictions on finding an embedding position for the advertisement image; as a result, a suitable embedding position often cannot be found in the video. For example, it is difficult to find a specific replaceable target in the video, or it is impossible to track a pre-labelled advertisement placement position in the video, so that the advertisement image can hardly be embedded into the video.
SUMMARY
[0008] In the embodiments of the present disclosure, by automatically finding a plane mask
widely existing in each video frame image and embedding an image to be embedded into
the plane mask, the image is not only made automatically and naturally fused into
a video, but also made more widely fused into the video. In addition, key points in
the plane mask in each video frame image can also be automatically found, and the
image to be embedded is embedded into a position region corresponding to the key points,
thereby improving the fusion effect of the image and the video.
[0009] According to some embodiments of the present disclosure, there is provided a method
of embedding an image in a video, comprising:
inputting a video frame image of a video into a plane prediction model, and acquiring
a predicted plane mask of the video frame image, wherein the plane prediction model
is obtained by training a deep learning model using training images with labels of
a plane detection frame and a plane mask; and
embedding the image to be embedded into the predicted plane mask of the video frame
image.
[0010] In some embodiments, the plane prediction model is obtained by training the deep
learning model using the training images with the labels of the plane detection frame
and the plane mask as well as labelling information of 4 key points in the plane mask;
the acquiring step comprises: after inputting the video frame image of the video into
the plane prediction model, acquiring the predicted plane mask of the video frame
image as well as the 4 key points in the plane mask; and the embedding the image to
be embedded into the predicted plane mask of the video frame image comprises: aligning
4 vertexes of the image to be embedded with the 4 key points in the predicted plane
mask of the video frame image, and embedding the image to be embedded into a position
region corresponding to the 4 key points in the predicted plane mask of the video
frame image.
[0011] In some embodiments, the labelling information of the 4 key points in the plane mask
in each training image is obtained by:
converting the plane mask of the each training image from a pixel coordinate system
to a plane coordinate system;
determining a boundary line of the plane mask in the plane coordinate system;
determining an inscribed rectangle of the plane mask in the plane coordinate system
based on the boundary line of the plane mask; and
converting 4 vertices of the inscribed rectangle of the plane mask from the plane
coordinate system to the pixel coordinate system.
[0012] In some embodiments, the converting the plane mask of the each training image from
a pixel coordinate system to a plane coordinate system comprises:
converting the plane mask of the each training image from the pixel coordinate system
to a world coordinate system; and
converting the plane mask of the each training image from the world coordinate system
to the plane coordinate system.
[0013] In some embodiments, the determining a boundary line of the plane mask in the plane
coordinate system comprises:
performing edge detection on the plane mask in the plane coordinate system;
performing Hough line detection on the plane mask in the plane coordinate system based
on a detected edge of the plane mask;
determining a probability that each detected line is the boundary line of the plane
mask; and
determining one boundary line of the plane mask in the plane coordinate system from
the detected lines based on the probability.
[0014] In some embodiments, the determining a probability that each detected line is the
boundary line of the plane mask comprises:
determining the probability that each detected line is the boundary line of the plane
mask according to difference information of symmetrical regions on both sides of the
each detected line, wherein the greater the difference between the symmetrical regions
on the both sides of a line, the greater the probability that the line is the boundary
line of the plane mask.
[0015] In some embodiments, the determining one boundary line of the plane mask in the plane
coordinate system from the detected lines comprises:
choosing a pair of lines having a perpendicular relation or a parallel relation from
the detected lines;
under the condition that the pair of lines is found, determining a line with a greatest
probability in the pair of lines with a greatest sum of probabilities as the one boundary
line of the plane mask in the plane coordinate system; and
under the condition that the pair of lines is not found, determining a line with a
greatest probability as the one boundary line of the plane mask in the plane coordinate
system.
[0016] In some embodiments, the determining a boundary line of the plane mask in the plane
coordinate system further comprises at least one of:
performing median filtering on the plane mask in the plane coordinate system before
the edge detection; or
merging the detected lines based on a slope of each line after the Hough line detection.
[0017] In some embodiments, the determining an inscribed rectangle of the plane mask in
the plane coordinate system comprises: determining an inscribed rectangle of the plane
mask in the plane coordinate system, wherein the inscribed rectangle is parallel to
the boundary line and comprises a maximum inscribed square.
[0018] In some embodiments, the embedding the image to be embedded into the predicted plane
mask of the video frame image comprises:
determining a transformation matrix from the image to be embedded to the predicted
plane mask of the video frame image according to a mapping relation between the 4
vertexes of the image to be embedded and the 4 key points in the plane mask of the
predicted video frame image; and
transforming each foreground point of the image to be embedded into the position region
corresponding to the 4 key points in the predicted plane mask of the video frame image
based on the transformation matrix.
[0019] In some embodiments, the deep learning model uses a loss function that is determined
based on the 4 key points in the labelling information and the predicted 4 key points
after the alignment operation is performed,
wherein performing the alignment operation on the predicted 4 key points comprises:
determining a transformation ratio based on the 4 key points in the labelling information
and the predicted 4 key points;
performing size transformation on the predicted 4 key points according to the transformation
ratio;
determining first position transformation information based on the 4 key points in
the labelling information;
determining second position transformation information based on the predicted 4 key
points; and
respectively adding the first position transformation information to the predicted
4 key points after the size transformation and subtracting the second position transformation
information, to finish the alignment operation on the predicted 4 key points.
[0020] In some embodiments, the deep learning model comprises region-based convolutional
neural networks.
[0021] In some embodiments, the image to be embedded comprises an enterprise identification
image and a product image.
[0022] According to some embodiments of the present disclosure, there is provided a method
of acquiring a plane prediction model, comprising:
labelling a plane detection frame, a plane mask in each training image and 4 key points
in the plane mask;
training a deep learning model using the training images with labels of the plane
detection frame and the plane mask as well as labelling information of the 4 key points
in the plane mask; and
determining the trained deep learning model as the plane prediction model.
[0023] In some embodiments, the labelling 4 key points in the plane mask in each training
image comprises:
converting the plane mask of the each training image from a pixel coordinate system
to a plane coordinate system;
determining a boundary line of the plane mask in the plane coordinate system;
determining an inscribed rectangle of the plane mask in the plane coordinate system
based on the boundary line of the plane mask; and
converting 4 vertices of the inscribed rectangle of the plane mask from the plane
coordinate system to the pixel coordinate system.
[0024] According to some embodiments of the present disclosure, there is provided an apparatus
of embedding an image in a video, comprising: a memory; and a processor coupled to
the memory, wherein the processor is configured to perform the method of embedding
an image in a video according to any of the embodiments based on instructions stored
in the memory.
[0025] According to some embodiments of the present disclosure, there is provided an apparatus
of acquiring a plane prediction model, comprising: a memory; and a processor coupled
to the memory, wherein the processor is configured to perform the method of acquiring
a plane prediction model according to any of the embodiments based on instructions
stored in the memory.
[0026] According to some embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, implements the method of embedding an image in a video according to any of the embodiments or the method of acquiring a plane prediction model according to any of the embodiments.
BRIEF DESCRIPTION OF THE DRAWINGS
[0027] The accompanying drawings that need to be used in the description of the embodiments
or related art will be briefly described below. The present disclosure can be more
clearly understood from the following detailed description, which proceeds with reference
to the accompanying drawings.
[0028] It is apparent that the drawings in the following description are merely some embodiments of the present disclosure, and for one of ordinary skill in the art, other drawings can be derived from these drawings without any creative effort.
Fig. 1 illustrates a flow diagram of a method of acquiring a plane prediction model
according to some embodiments of the present disclosure.
Fig. 2 illustrates a flow diagram of a method of acquiring a plane prediction model
according to other embodiments of the present disclosure.
Fig. 3 illustrates a schematic diagram of a deep learning model according to some
embodiments of the present disclosure.
Fig. 4 illustrates a flow diagram of labelling 4 key points in a plane mask in a training
image according to some embodiments of the present disclosure.
Fig. 5 illustrates a schematic diagram of three coordinate systems according to some
embodiments of the present disclosure.
Fig. 6 illustrates a flow diagram of a method of embedding an image in a video according
to some embodiments of the present disclosure.
Fig. 7 illustrates a flow diagram of a method of embedding an image in a video according
to other embodiments of the present disclosure.
Fig. 8 illustrates a schematic diagram of an apparatus of embedding an image in a
video according to some embodiments of the present disclosure.
Fig. 9 illustrates a schematic diagram of an apparatus of acquiring a plane prediction
model according to some embodiments of the present disclosure.
DETAILED DESCRIPTION
[0029] The technical solutions in the embodiments of the present disclosure will be clearly
and completely described below in conjunction with the drawings in the embodiments
of the present disclosure.
[0030] Unless specified otherwise, words such as "first", "second", etc. in this disclosure
are used for distinguishing different objects, and are not used for indicating the
meaning of size or time sequence, etc.
[0031] In the embodiments of the present disclosure, by means of a plane prediction model,
a plane mask widely existing in each video frame image is automatically found, and
an image to be embedded is embedded into the plane mask, which not only makes the
image automatically and naturally fused into a video, but also makes the image more
widely fused into the video. In addition, key points in the plane mask in each video
frame image can also be automatically found, and the image to be embedded is embedded
into a position region corresponding to the key points, thereby improving the fusion
effect of the image and the video.
[0032] Fig. 1 illustrates a flow diagram of a method of acquiring a plane prediction model
according to some embodiments of the present disclosure. The plane prediction model
is capable of predicting a plane mask in an image.
[0033] As shown in Fig. 1, the method of this embodiment comprises steps 110 to 130.
[0034] In step 110, a plane detection frame and a plane mask in a training image are labelled.
[0035] The plane detection frame and the plane mask in the training images can be labelled manually, or a ready-made dataset of training images already labelled with the plane detection frame and the plane mask can be acquired, for example, the PlaneRCNN dataset, which not only provides training images with labels of the plane detection frame and the plane mask, but also provides camera parameters related to the training images and a rotation/translation matrix from a camera coordinate system to a world coordinate system.
[0036] In step 120, a deep learning model is trained using the training image with the labels
of the plane detection frame and the plane mask, so that the deep learning model has
learning abilities of the plane detection frame and the plane mask of the image.
[0037] The deep learning model comprises region-based convolutional neural networks (RCNN),
for example, MaskRCNN network, which is a type of RCNN. The deep learning model such
as the MaskRCNN network comprises a branch of plane detection frame regression and
a branch of plane mask regression of the image. The branch of the plane detection
frame regression comprises the plane detection frame regression and can also comprise
semantic category regression. Training images with the label of the plane detection
frame train the branch of the plane detection frame regression, and training images
with the label of the plane mask train the branch of the plane mask regression.
[0038] In the training process, for each training image, a total loss is determined according to a loss between the labelled plane detection frame and the model-predicted plane detection frame and a loss between the labelled plane mask and the model-predicted plane mask; parameters of the deep learning model are updated according to the total loss; and the training process is iteratively executed until a training termination condition is met, for example, a preset number of iterations is reached or the total loss is less than a certain value.
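For illustration, such a training loop might be sketched as follows; a torchvision-style detection model that returns a dictionary of per-branch losses is an assumed interface, not one fixed by this disclosure.

```python
# Schematic training loop for [0038]: the total loss is the sum of the
# detection-frame loss and the mask loss, and parameters are updated until a
# termination condition is met. The model returning a loss dict is an
# assumed, torchvision-style interface.
import torch

def train(model, data_loader, num_iters: int = 10000, loss_eps: float = 0.01):
    optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
    it = 0
    for images, targets in data_loader:          # targets: labelled boxes + masks
        loss_dict = model(images, targets)       # e.g. {"loss_box_reg": ..., "loss_mask": ...}
        total_loss = sum(loss_dict.values())
        optimizer.zero_grad()
        total_loss.backward()
        optimizer.step()
        it += 1
        # termination: preset number of iterations or sufficiently small loss
        if it >= num_iters or total_loss.item() < loss_eps:
            break
    return model
```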
[0039] In step 130, the trained deep learning model is determined as the plane prediction
model. The plane prediction model can predict the plane detection frame of the image
and the plane mask in the plane detection frame.
[0040] Fig. 2 illustrates a flow diagram of a method of acquiring a plane prediction model
according to other embodiments of the present disclosure. The plane prediction model
can predict not only the plane mask in the image but also 4 key points in the plane
mask.
[0041] As shown in Fig. 2, the method of this embodiment comprises steps 210 to 230.
[0042] In step 210, the plane detection frame and the plane mask in the training image as
well as 4 key points in the plane mask are labelled.
[0043] The 4 key points in the plane mask can be labelled, for example, in the middle of
the plane mask. A method of labelling the 4 key points in the plane mask will be specifically
described in a subsequent embodiment of Fig. 4.
[0044] In step 220, the deep learning model is trained using the training images with labels
of the plane detection frame and the plane mask as well as labelling information of
the 4 key points in the plane mask, so that the deep learning model has learning abilities
of the plane detection frame, the plane mask of the image and the 4 key points thereof.
[0045] The deep learning model comprises RCNN, such as MaskRCNN network, which is one type
of RCNN. Fig. 3 illustrates a schematic diagram of a deep learning model. As shown
in Fig. 3, the deep learning model such as the MaskRCNN network comprises a branch
of plane detection frame regression, a branch of plane mask regression, and a branch
of key point regression of the image. The branch of the plane detection frame regression
comprises the plane detection frame regression and can also comprise semantic category
regression. Training images with the label of the plane detection frame train the
branch of the plane detection frame regression, training images with the label of
the plane mask train the branch of the plane mask regression, and training images
with the labelling information of the 4 key points train the branch of the key point
regression. For each proposal region obtained from the original image, the MaskRCNN network extracts aligned features using RoIAlign (region of interest align).
[0046] Since the image embedding position on a plane has a movable property, a result predicted by the deep learning model is considered correct as long as the embedding position is on the plane and the image is parallel to the boundary lines of the plane. Therefore, the loss function adopted by the deep learning model is a loss function computed after the key points are aligned, i.e., a loss function determined based on the 4 key points in the labelling information and the predicted 4 key points after an alignment operation is performed. For example, the MaskRCNN network adopts a Smooth_L1 loss after the key points are aligned.
[0047] Let the key point label labelled on the current plane be $gt \in \mathbb{R}^{N \times 4 \times 2}$, and the key point coordinates predicted by the network be $pre \in \mathbb{R}^{N \times 4 \times 2}$, where $N$ denotes the number of planes, 4 denotes the 4 key points, and 2 denotes the abscissa and ordinate in the plane. Let the coordinates of the aligned key points be $pre''$ and the loss of the network key point branch be $loss_k$; $loss_k$ is calculated as follows (1 to 6).
- (1) Based on the 4 key points in the labelling information and the predicted 4 key points, determine a transformation ratio $r$:
$$r = \frac{\max(gt) - \min(gt)}{\max(pre) - \min(pre)}$$
where max denotes taking a maximum value, and min denotes taking a minimum value.
- (2) According to the transformation ratio, perform size transformation on the predicted 4 key points, and set the predicted key points after the size transformation to be $pre'$:
$$pre' = r \cdot pre$$
- (3) Based on the 4 key points in the labelling information, determine first position transformation information $gt_c$, the center of the labelled key points:
$$gt_c = \frac{1}{4} \sum_{i=1}^{4} gt_i$$
- (4) Based on the predicted 4 key points after the size transformation, determine second position transformation information $pre'_c$, the center of the size-transformed predicted key points:
$$pre'_c = \frac{1}{4} \sum_{i=1}^{4} pre'_i$$
- (5) Respectively add the first position transformation information to the predicted 4 key points after the size transformation and subtract the second position transformation information, to finish the alignment operation on the predicted 4 key points, where the aligned key points are set as $pre''$:
$$pre'' = pre' + gt_c - pre'_c$$
- (6) The loss $loss_k$ of the network key point branch is:
$$loss_k = \mathrm{Smooth\_L1}(pre'', gt)$$
[0048] The alignment operation on the 4 key points is performed, so that a quadrilateral
region formed by the 4 key points is in the middle of the plane mask.
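To make the alignment concrete, a minimal PyTorch sketch of steps (1) to (6) is given below; the per-plane extent normalization used for the ratio r and the function name aligned_keypoint_loss are illustrative assumptions, not details fixed by the disclosure.

```python
# A minimal sketch of the key-point loss computed after alignment, following
# steps (1)-(6) of [0047]. The exact normalization used for the ratio r is
# an assumption.
import torch
import torch.nn.functional as F

def aligned_keypoint_loss(gt: torch.Tensor, pre: torch.Tensor) -> torch.Tensor:
    """gt, pre: (N, 4, 2) labelled / predicted key points for N planes."""
    # (1) scale ratio r per plane, from the coordinate extents (assumption)
    gt_ext = gt.amax(dim=(1, 2)) - gt.amin(dim=(1, 2))             # (N,)
    pre_ext = pre.amax(dim=(1, 2)) - pre.amin(dim=(1, 2))          # (N,)
    r = (gt_ext / pre_ext.clamp(min=1e-6)).view(-1, 1, 1)          # (N,1,1)
    # (2) size transformation
    pre_s = pre * r
    # (3)(4) centers of labelled and size-transformed predicted key points
    gt_c = gt.mean(dim=1, keepdim=True)                            # (N,1,2)
    pre_c = pre_s.mean(dim=1, keepdim=True)                        # (N,1,2)
    # (5) translate the prediction so that its center coincides with gt's
    pre_aligned = pre_s + gt_c - pre_c
    # (6) Smooth L1 between the aligned prediction and the label
    return F.smooth_l1_loss(pre_aligned, gt)
```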
[0049] In the training process, for each training image, a total loss is determined according to a loss between the labelled plane detection frame and the model-predicted plane detection frame, a loss between the labelled plane mask and the model-predicted plane mask, and a loss between the labelled 4 key points and the predicted 4 key points after the alignment operation; parameters of the deep learning model are updated according to the total loss; and the training process is iteratively executed until a training termination condition is met, for example, a preset number of iterations is reached or the total loss is less than a certain value.
[0050] In step 230, the trained deep learning model is determined as the plane prediction
model. The plane prediction model can predict the plane detection frame of the image,
the plane mask in the plane detection frame, and the 4 key points in the plane mask.
[0051] Fig. 4 illustrates a flow diagram of labelling 4 key points in a plane mask in a
training image according to some embodiments of the present disclosure.
[0052] As shown in Fig. 4, the method of this embodiment comprises the following steps.
[0053] In step 410, one training image containing a plane is acquired.
[0054] Many images contain planes, which are, for example, but not limited to, a desktop, a wall surface, various surfaces of a cabinet, a floor, etc. In Fig. 4, a side surface of one cabinet is shown.
[0055] In step 420, a plane mask of the training image in the pixel coordinate system is
acquired.
[0056] As described above, the plane mask of the training image in the pixel coordinate
system can be acquired by means of labelling, or the training image and its plane
mask in the pixel coordinate system can be acquired through the ready-made PlaneRCNN
dataset.
[0057] In step 430, the plane mask of the training image is converted from the pixel coordinate
system to the plane coordinate system, which comprises (1-2):
- (1) According to the camera parameters related to the training image and the rotation/translation matrix from the camera coordinate system to the world coordinate system, convert the plane mask of the training image from the pixel coordinate system to the world coordinate system.
[0058] A coordinate in the pixel coordinate system is a coordinate on an image obtained after a camera shoots a scene; the pixel coordinate system is a two-dimensional coordinate system.
[0059] The coordinates of the foreground points in the plane mask of the training image in the pixel coordinate system are set as $S_{img} \in \mathbb{R}^{N \times 2}$, and the coordinates of the foreground points in the plane mask of the training image in the world coordinate system are set as $S_{world} \in \mathbb{R}^{N \times 3}$, where $N$ denotes the number of the foreground points.
[0060] (2) Convert the plane mask of the training image from the world coordinate system
to the plane coordinate system.
[0061] A coordinate in the plane coordinate system is equivalent to a coordinate on an image obtained after a camera shoots the plane from directly in front of it; in the plane coordinate system, the depth value of every foreground point on the plane is the same. The plane coordinate system is a two-dimensional coordinate system.
[0062] Fig. 5 illustrates schematic diagrams of three coordinate systems. They are a pixel
coordinate system, a world coordinate system and a plane coordinate system in this
order from left to right.
[0063] The coordinates of the foreground points in the plane mask in the plane coordinate system are set as $S_{plane} \in \mathbb{R}^{N \times 2}$.
[0064] Two points $A = (x_1, y_1, z_1) \in S_{world}$ and $B = (x_2, y_2, z_2) \in S_{world}$ in the plane mask are found in the world coordinate system, and then one point $C = (x_3, y_3, z_3)$ on the plane instance is found in the world coordinate system such that $\vec{AC} \perp \vec{AB}$. The plane coordinate system is constructed by taking $A$ as the origin, $\vec{AB}$ as the x-axis and $\vec{AC}$ as the y-axis.
[0065] The coordinates of the point $C$ are calculated as follows.
[0066] It is given that the normal of the plane instance is $n = (a, b, c)$ and its offset is $d$, with $A = (x_1, y_1, z_1)$ and $B = (x_2, y_2, z_2)$. Since $\vec{AC} \perp \vec{AB}$ and $C$ lies on the plane with normal $n$, the following relations are obtained:
$$\begin{cases} a x_3 + b y_3 + c z_3 + d = 0 \\ (x_3 - x_1)(x_2 - x_1) + (y_3 - y_1)(y_2 - y_1) + (z_3 - z_1)(z_2 - z_1) = 0 \end{cases}$$
[0067] If the vector $\vec{AC}$ is parallel to the x-axis, $(x_3, y_3, z_3) = (x_1, y_1, z_1 + 1)$; otherwise, $(x_3, y_3, z_3)$ is obtained by solving the above relations.
[0068] Therefore, the points $A = (x_1, y_1, z_1)$, $B = (x_2, y_2, z_2)$ and $C = (x_3, y_3, z_3)$ in the world coordinate system are obtained.
[0069] Since $A$ is the origin of the plane coordinate system, $\vec{AB}$ is its x-axis, $\vec{AC}$ is its y-axis and $\vec{AC} \perp \vec{AB}$, the coordinates of the points $A$, $B$, $C$ in the plane coordinate system respectively are:
$$A' = (0, 0), \quad B' = (\|\vec{AB}\|, 0), \quad C' = (0, \|\vec{AC}\|)$$
[0070] According to the coordinates of the three points in the world coordinate system and in the plane coordinate system, a transformation matrix $M$ between the world coordinate system and the plane coordinate system is obtained, and according to the transformation matrix $M$, $S_{plane}$ can be calculated by:
$$S_{plane} = M \cdot S_{world}$$
[0071] Therefore, $S_{plane}$, that is, the coordinates of the foreground points of the plane mask in the plane coordinate system, is obtained.
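For illustration, the construction of the plane coordinate system and the projection of the mask points into it might look like the following NumPy sketch; the choice of A and B and the normal estimate are assumptions made to keep the example self-contained.

```python
# Minimal NumPy sketch of [0064]-[0071]: build a 2D plane coordinate system
# from points A, B on the plane and project all mask points into it.
# Choosing A, B and the third point for the normal estimate is an assumption.
import numpy as np

def to_plane_coords(S_world: np.ndarray) -> np.ndarray:
    """S_world: (N, 3) world coordinates of the plane-mask foreground points.
    Returns (N, 2) coordinates in the plane coordinate system."""
    A, B = S_world[0], S_world[1]
    x_axis = (B - A) / np.linalg.norm(B - A)       # AB direction
    # plane normal n estimated from the mask points (assumption)
    n = np.cross(B - A, S_world[2] - A)
    n /= np.linalg.norm(n)
    y_axis = np.cross(n, x_axis)                   # AC direction, AC ⟂ AB
    # project every point onto the two in-plane axes, with A as the origin
    rel = S_world - A
    return np.stack([rel @ x_axis, rel @ y_axis], axis=1)
```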
[0072] Each plane has its own plane coordinate system, and in the plane coordinate system, an inscribed rectangle (such as a maximum inscribed square) of the plane mask can be found more easily, and its 4 vertexes are taken as the 4 key points.
[0073] In step 440, median filtering is performed on the plane mask in the plane coordinate system, denoted as: mask = MedianFilter(mask), where the mask on the right side of "=" is the plane mask before the filtering, and the mask on the left side of "=" is the plane mask after the filtering.
[0074] The median filtering is a nonlinear smoothing technique, in which the gray value of each pixel is set as the median of the gray values of all pixels within a certain neighborhood window of that pixel.
[0075] In step 450, edge detection is performed on the plane mask in the plane coordinate system, denoted as: edges = Edge(mask).
[0076] The edge detection technique can refer to the prior art.
[0077] In step 460, based on the detected edges of the plane mask, Hough line detection is performed on the plane mask in the plane coordinate system, denoted as: lines = HoughLineDetect(edges). The Hough line detection method can refer to the prior art.
[0078] Further, lines in which the number of pixels is greater than a set threshold voteThresh are screened out from the detected lines: keep_lines = {line_j | line_j(pixel) ≥ voteThresh, line_j ∈ lines}, wherein line_j(pixel) denotes the number of pixels contained in the j-th line detected on the plane mask.
[0079] In step 470, the detected lines are merged based on their slopes, denoted as: merge_lines = MergeLine(keep_lines), wherein MergeLine() denotes that lines with similar slopes are merged into one line.
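Steps 440 to 470 map naturally onto standard image-processing primitives; the following is a rough OpenCV sketch in which the kernel size, Canny thresholds, vote threshold and slope tolerance are all illustrative assumptions.

```python
# Rough OpenCV sketch of steps 440-470: median filtering, edge detection and
# Hough line detection on the plane mask, followed by slope-based merging.
# All numeric parameters are illustrative assumptions.
import cv2
import numpy as np

def detect_mask_lines(mask: np.ndarray):
    """mask: (H, W) uint8 binary plane mask in the plane coordinate system."""
    mask = cv2.medianBlur(mask, 5)                       # step 440
    edges = cv2.Canny(mask, 50, 150)                     # step 450
    # step 460: HoughLinesP's `threshold` plays the role of voteThresh
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=50, minLineLength=30, maxLineGap=5)
    if lines is None:
        return []
    # step 470: merge lines with similar slopes (angle difference < 3 degrees)
    merged = []
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180.0
        for group in merged:
            if abs(group["angle"] - angle) < 3.0:
                group["segments"].append((x1, y1, x2, y2))
                break
        else:
            merged.append({"angle": angle, "segments": [(x1, y1, x2, y2)]})
    return merged
```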
[0080] In step 480, a probability that each detected line is a boundary line of the plane mask is determined, and a pair of lines having a perpendicular relation or a parallel relation is chosen from the detected lines, denoted as: choose_lines = ChooseLine(merge_lines).
[0081] The probability that a detected line is the boundary line of the plane mask is determined according to the difference information of the symmetrical regions on both sides of the line, denoted as:
$$P(line_k) = \frac{1}{N} \sum_{i=1}^{N} \mathbb{1}\left(\left|region_1(line_k)_i - region_2(line_k)_i\right| \geq valueThresh\right)$$
where $region_1(line_k)$ and $region_2(line_k)$ respectively denote the symmetric regions with a fixed width on both sides of the line $line_k$, $N$ is the number of pixels in the region, and $valueThresh$ is a set threshold. The greater the difference between the symmetric regions on both sides of the line, the greater the probability that the line is the boundary line of the plane mask.
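One possible reading of this probability, sketched below, scores the fraction of sampled pixel pairs on both sides of the line whose difference exceeds valueThresh; the sampling scheme, strip width and threshold values are assumptions.

```python
# Sketch of step 480's boundary probability: sample symmetric strips on both
# sides of a line and score the fraction of pixel pairs that differ strongly.
# The strip width, sample count and threshold are illustrative assumptions.
import numpy as np

def boundary_probability(mask: np.ndarray, x1, y1, x2, y2,
                         width: int = 3, value_thresh: int = 64) -> float:
    d = np.array([x2 - x1, y2 - y1], dtype=float)
    normal = np.array([-d[1], d[0]]) / (np.linalg.norm(d) + 1e-6)
    diffs = []
    for t in np.linspace(0.0, 1.0, num=50):
        p = np.array([x1, y1]) + t * d            # point on the line
        a = (p + width * normal).astype(int)      # region 1 sample
        b = (p - width * normal).astype(int)      # region 2 sample
        h, w = mask.shape
        if 0 <= a[1] < h and 0 <= a[0] < w and 0 <= b[1] < h and 0 <= b[0] < w:
            diffs.append(abs(int(mask[a[1], a[0]]) - int(mask[b[1], b[0]])))
    if not diffs:
        return 0.0
    # probability = fraction of sampled pairs exceeding the threshold
    return float(np.mean(np.array(diffs) >= value_thresh))
```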
[0082] In step 490, under the condition that such a pair of lines is found, the line with the greatest probability in the pair of lines with the greatest sum of probabilities is determined as the one boundary line of the plane mask in the plane coordinate system; under the condition that the pair of lines is not found, the line with the greatest probability is determined as the one boundary line of the plane mask in the plane coordinate system, thereby obtaining the boundary line of the plane mask in the plane coordinate system, denoted as: BestLine = getBestLine(choose_lines). Based on the boundary line, an inscribed rectangle of the plane mask is determined in the plane coordinate system.
[0083] In the plane coordinate system, an inscribed rectangle of the plane mask that is parallel to the boundary line is determined, the inscribed rectangle being, for example, a maximum inscribed square, denoted as: square = MaxInscribedSquare(mask), wherein square_edge_i // BestLine, and the four vertexes of the maximum inscribed square are $kp_i^{plane}$, $i = 1, ..., 4$.
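Assuming the mask has been rotated so that BestLine is axis-aligned, MaxInscribedSquare can, for example, be realized with a distance transform, as in this sketch.

```python
# Sketch of MaxInscribedSquare for step 490: after rotating the mask so that
# BestLine is axis-aligned, the largest axis-aligned inscribed square can be
# found with a chessboard distance transform. The prior rotation by the line
# angle is an assumed preprocessing step.
import cv2
import numpy as np

def max_inscribed_square(mask: np.ndarray) -> np.ndarray:
    """mask: (H, W) uint8 binary mask (0/255). Returns the 4 vertices (4, 2)."""
    # chessboard (L-infinity) distance to the background equals the half-side
    # of the largest axis-aligned square centered at each foreground pixel
    dist = cv2.distanceTransform(mask, cv2.DIST_C, 3)
    cy, cx = np.unravel_index(np.argmax(dist), dist.shape)
    r = int(dist[cy, cx]) - 1                     # half side length
    return np.array([[cx - r, cy - r], [cx + r, cy - r],
                     [cx + r, cy + r], [cx - r, cy + r]])
```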
[0084] In step 4100, the 4 vertices of the inscribed rectangle of the plane mask are converted from the plane coordinate system to the pixel coordinate system.
[0085] As described above, the coordinates $S_{img}$ of the foreground points in the plane mask of the training image in the pixel coordinate system and the coordinates $S_{plane}$ of the foreground points of the plane mask in the plane coordinate system are known, whereby the transformation matrix $T$ between the pixel coordinate system and the plane coordinate system is obtained, that is, $S_{plane} = T \cdot S_{img}$. Based on the above determined 4 vertexes $kp_i^{plane}$ of the inscribed rectangle in the plane mask in the plane coordinate system, the coordinates of the 4 key points in the plane mask of the training image in the pixel coordinate system are determined, denoted as: $kp_i^{img} = T^{-1} \cdot kp_i^{plane}$, $i = 1, ..., 4$.
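Since both coordinate sets are known for the same foreground points, the transformation matrix T can, for example, be estimated as a homography; the following sketch assumes OpenCV and planar point correspondences.

```python
# Sketch of step 4100: estimate the pixel<->plane homography T from the known
# point correspondences, then map the 4 square vertices back to pixel space.
import cv2
import numpy as np

def keypoints_to_pixels(S_img: np.ndarray, S_plane: np.ndarray,
                        kp_plane: np.ndarray) -> np.ndarray:
    """S_img, S_plane: (N, 2) corresponding points; kp_plane: (4, 2)."""
    T, _ = cv2.findHomography(S_img.astype(np.float32),
                              S_plane.astype(np.float32), cv2.RANSAC)
    T_inv = np.linalg.inv(T)
    # perspectiveTransform expects shape (1, K, 2) float32
    kp = kp_plane.reshape(1, -1, 2).astype(np.float32)
    return cv2.perspectiveTransform(kp, T_inv)[0]
```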
[0086] The 4 key points inscribed in the plane mask in the training image are automatically
found and used as the training data to train the model, so that the model can predict
the 4 key points inscribed in the plane mask in the video frame image, so as to embed
the image into the proper position in the video frame image, thereby further improving
the fusion effect of the image and the video.
[0087] Fig. 6 illustrates a flow diagram of a method of embedding an image in a video according
to some embodiments of the present disclosure.
[0088] As shown in Fig. 6, the method of this embodiment comprises steps 610 and 620.
[0089] In step 610, a video frame image of a video is input into a plane prediction model,
and a predicted plane mask of the video frame image is acquired.
[0090] The plane prediction model is obtained by training a deep learning model using training
images with labels of a plane detection frame and a plane mask, and reference is made
to the foregoing embodiments for details.
[0091] In step 620, the image to be embedded is embedded into the predicted plane mask of
the video frame image.
[0092] For example, the image to be embedded is embedded into a position region in the plane
mask that is parallel to a boundary line of the plane mask.
[0093] Examples of the images to be embedded include, but are not limited to, business identification
images, product images, character images, and advertisement images.
[0094] The plane mask widely existing in video frame images is automatically found, and the image to be embedded is embedded into the plane mask, which not only makes the image automatically and naturally fused into the video, but also makes the image more widely fused into the video.
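Schematically, steps 610 to 620 might be wired together as follows; the model's call signature and output format are assumed interfaces, and the naive paste stands in for the key-point-based embedding detailed with Fig. 7.

```python
# Schematic of steps 610-620: run the plane prediction model on a video frame
# and paste the image into the predicted plane mask. The model returning a
# dict with a "mask" entry is an assumed interface, not one defined by this
# disclosure; the paste below is a naive stand-in for the Fig. 7 embedding.
import numpy as np

def embed_in_frame(model, frame: np.ndarray, ad: np.ndarray) -> np.ndarray:
    """frame: (H, W, 3) video frame; ad: (h, w, 3) image to be embedded."""
    pred = model(frame)                      # assumed: dict with "mask"
    mask = pred["mask"].astype(bool)         # (H, W) predicted plane mask
    ys, xs = np.nonzero(mask)
    top, left = ys.min(), xs.min()
    h, w = ad.shape[:2]
    out = frame.copy()
    # paste only where the pasted box still lies inside the plane mask
    region = mask[top:top + h, left:left + w]
    out[top:top + h, left:left + w][region] = \
        ad[:region.shape[0], :region.shape[1]][region]
    return out
```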
[0095] Fig. 7 illustrates a flow diagram of a method of embedding an image in a video according to other embodiments of the present disclosure.
[0096] As shown in Fig. 7, the method of this embodiment comprises steps 710 and 720.
[0097] In step 710, a video frame image of a video is input into a plane prediction model,
and a predicted plane mask of the video frame image as well as 4 key points in the
plane mask are acquired.
[0098] The plane prediction model is obtained by training a deep learning model using training
images with labels of a plane detection frame and a plane mask as well as labelling
information of 4 key points in the plane mask, and reference is made to the foregoing
embodiments for details.
[0099] In step 720, 4 vertices of the image to be embedded are mapped to the 4 key points
in the predicted plane mask of the video frame image, and the image to be embedded
is embedded into a position region corresponding to the 4 key points in the predicted
plane mask of the video frame image.
[0100] Specifically, according to the mapping relation between the 4 vertices (whose coordinates are $(0, 0)$, $(w, 0)$, $(0, h)$, $(w, h)$) of the image $I_{ad}$ (whose resolution is $w \times h$) to be embedded and the 4 key points $pre''$ in the predicted plane mask of the video frame image $I_{rgb}$, a transformation matrix $M \in \mathbb{R}^{3 \times 3}$ from the image $I_{ad}$ to be embedded to the predicted plane mask of the video frame image $I_{rgb}$ is determined; and based on the transformation matrix, each foreground point of the image to be embedded is transformed into the position region corresponding to the 4 key points in the predicted plane mask of the video frame image. That is, for each pixel $p_{rgb} \in \mathbb{R}^{1 \times 2}$ of the position region formed by the 4 key points on $I_{rgb}$, the corresponding pixel $p_{ad} \in \mathbb{R}^{1 \times 2}$ is found on $I_{ad}$ by $[p_{ad}, 1]^T = M [p_{rgb}, 1]^T$, and finally, the pixel value of $p_{ad}$ is assigned to $p_{rgb}$.
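In OpenCV terms, this backward-mapping embedding corresponds roughly to the following sketch; the ordering of the predicted key points is assumed to match the ordering of the image corners.

```python
# Rough OpenCV sketch of step 720: warp the image to be embedded onto the
# region spanned by the 4 predicted key points. warpPerspective performs the
# backward pixel mapping described in [0100]. Key-point ordering consistent
# with the image corners is an assumption.
import cv2
import numpy as np

def embed_by_keypoints(I_rgb: np.ndarray, I_ad: np.ndarray,
                       keypoints: np.ndarray) -> np.ndarray:
    h, w = I_ad.shape[:2]
    src = np.float32([[0, 0], [w, 0], [0, h], [w, h]])      # image corners
    dst = keypoints.astype(np.float32)                      # (4, 2) pre''
    M = cv2.getPerspectiveTransform(src, dst)
    H, W = I_rgb.shape[:2]
    warped = cv2.warpPerspective(I_ad, M, (W, H))
    # mask of the destination quadrilateral
    mask = cv2.warpPerspective(np.full((h, w), 255, np.uint8), M, (W, H))
    out = I_rgb.copy()
    out[mask > 0] = warped[mask > 0]
    return out
```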
[0101] The plane mask widely existing in each video frame image and the 4 key points in
the plane mask are automatically found, and the image to be embedded is embedded into
the position region corresponding to the 4 key points in the plane mask, which not
only makes the image automatically, naturally, and widely fused into the video, but
also improves the fusion effect of the image and the video.
[0102] Fig. 8 illustrates a schematic diagram of an apparatus of embedding an image in a
video according to some embodiments of the present disclosure.
[0103] As shown in Fig. 8, the apparatus 800 of embedding an image in a video of this embodiment
comprises: a memory 810 and a processor 820 coupled to the memory 810, wherein the
processor 820 is configured to perform the method of embedding an image in a video
in any of the embodiments described above based on instructions stored in the memory
810.
[0104] The memory 810 can include, for example, a system memory, a fixed non-volatile storage
medium, and the like. The system memory has thereon stored, for example, an operating
system, an application, a boot loader, and other programs.
[0105] The apparatus 800 can also include an input/output interface 830, a network interface
840, a storage interface 850, and the like. These interfaces 830, 840, 850, as well
as the memory 810 and the processor 820, can be connected, for example, through a
bus 860. The input/output interface 830 provides a connection interface for an input/output
device such as a display, a mouse, a keyboard, and a touch screen. The network interface
840 provides a connection interface for a variety of networking devices. The storage
interface 850 provides a connection interface for an external storage device such
as a SD card and a USB flash disk.
[0106] Fig. 9 illustrates a schematic diagram of an apparatus of acquiring a plane prediction
model according to some embodiments of the present disclosure.
[0107] As shown in Fig. 9, the apparatus 900 of acquiring a plane prediction model of this embodiment comprises: a memory 910 and a processor 920 coupled to the memory 910,
wherein the processor 920 is configured to perform the method of acquiring a plane
prediction model in any of the embodiments described above based on instructions stored
in the memory 910.
[0108] The memory 910 can include, for example, a system memory, a fixed non-volatile storage
medium, and the like. The system memory has thereon stored, for example, an operating
system, an application program, a boot loader, and other programs.
[0109] The apparatus 900 can also comprise an input/output interface 930, a network interface
940, a storage interface 950, and the like. These interfaces 930, 940, 950, as well
as the memory 910 and the processor 920 can be connected, for example, through a bus
960. The input/output interface 930 provides a connection interface for an input/output
device such as a display, a mouse, a keyboard, and a touch screen. The network interface
940 provides a connection interface for a variety of networking devices. The storage
interface 950 provides a connection interface for an external storage device such
as a SD card and a USB flash disk.
[0110] The apparatus 800 of embedding an image in a video can be different from or the same
as the apparatus 900 of acquiring a plane prediction model. For example, the apparatus
800 of embedding an image in a video and the apparatus 900 of acquiring a plane prediction
model can be disposed on one computer, or disposed on two computers.
[0111] According to some embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, implements the method of embedding an image in a video or the method of acquiring a plane prediction model.
[0112] It should be appreciated by those skilled in the art that, the embodiments of the
present disclosure can be provided as a method, system, or computer program product.
Accordingly, the present disclosure can take a form of an entire hardware embodiment,
an entire software embodiment or an embodiment combining software and hardware aspects.
Furthermore, the present disclosure can take a form of a computer program product
implemented on one or more non-transitory computer-readable storage media (including,
but not limited to, disk memories, CD-ROMs, optical memories, etc.) having computer
program code embodied therein.
[0113] The present disclosure is described with reference to flow diagrams and/or block
diagrams of the method, apparatus (system), and computer program product according
to the embodiments of the present disclosure. It will be understood that each flow
and/or block in the flow diagrams and/or block diagrams, and a combination of flows
and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer
program instructions. These computer program instructions can be provided to a processor
of a general-purpose computer, special-purpose computer, embedded processor, or other
programmable data processing devices to produce a machine, such that the instructions,
which are executed by the processor of the computer or other programmable data processing
devices, create means for implementing the function specified in one or more flows
of the flow diagrams and/or one or more blocks of the block diagrams.
[0114] These computer program instructions can also be stored in a computer-readable memory
that can guide a computer or other programmable data processing devices to work in
a specific manner, such that the instructions stored in the computer-readable memory
produce an article of manufacture including instruction means which implement the
function specified in one or more flows of the flow diagrams and/or one or more blocks
of the block diagrams.
[0115] These computer program instructions can also be loaded onto a computer or other programmable
data processing devices, such that a series of operation steps are performed on the
computer or other programmable devices to produce a computer-implemented process,
and therefore, the instructions executed on the computer or other programmable devices
provide steps for implementing the function specified in one or more flows of the
flow diagrams and/or one or more blocks of the block diagrams.
[0116] The above description is only the preferred embodiments of the present disclosure
and not used for limiting the present disclosure, and any modifications, equivalents,
improvements and the like that are within the spirit and scope of the present disclosure
should be included in the protection scope of the present disclosure.
1. A method of embedding an image in a video, comprising:
inputting a video frame image of a video into a plane prediction model, and acquiring
a predicted plane mask of the video frame image, wherein the plane prediction model
is obtained by training a deep learning model using training images with labels of
a plane detection frame and a plane mask; and
embedding the image to be embedded into the predicted plane mask of the video frame
image.
2. The method according to claim 1, wherein the plane prediction model is obtained by
training the deep learning model using the training images with the labels of the
plane detection frame and the plane mask as well as labelling information of 4 key
points in the plane mask;
the acquiring step comprises: after inputting the video frame image of the video into
the plane prediction model, acquiring the predicted plane mask of the video frame
image as well as the 4 key points in the plane mask; and
the embedding the image to be embedded into the predicted plane mask of the video
frame image comprises: aligning 4 vertexes of the image to be embedded with the 4
key points in the predicted plane mask of the video frame image, and embedding the
image to be embedded into a position region corresponding to the 4 key points in the
predicted plane mask of the video frame image.
3. The method according to claim 2, wherein the labelling information of the 4 key points
in the plane mask in each training image is obtained by:
converting the plane mask of the each training image from a pixel coordinate system
to a plane coordinate system;
determining a boundary line of the plane mask in the plane coordinate system;
determining an inscribed rectangle of the plane mask in the plane coordinate system
based on the boundary line of the plane mask; and
converting 4 vertices of the inscribed rectangle of the plane mask from the plane
coordinate system to the pixel coordinate system.
4. The method according to claim 3, wherein the converting the plane mask of the each
training image from a pixel coordinate system to a plane coordinate system comprises:
converting the plane mask of the each training image from the pixel coordinate system
to a world coordinate system; and
converting the plane mask of the each training image from the world coordinate system
to the plane coordinate system.
5. The method according to claim 3, wherein the determining a boundary line of the plane
mask in the plane coordinate system comprises:
performing edge detection on the plane mask in the plane coordinate system;
performing Hough line detection on the plane mask in the plane coordinate system based
on a detected edge of the plane mask;
determining a probability that each detected line is the boundary line of the plane
mask; and
determining one boundary line of the plane mask in the plane coordinate system from
the detected lines based on the probability.
6. The method according to claim 5, wherein the determining a probability that each detected
line is the boundary line of the plane mask comprises:
determining the probability that each detected line is the boundary line of the plane
mask according to difference information of symmetrical regions on both sides of the
each detected line, wherein the greater the difference between the symmetrical regions
on the both sides of a line, the greater the probability that the line is the boundary
line of the plane mask.
7. The method according to claim 5, wherein the determining one boundary line of the
plane mask in the plane coordinate system from the detected lines comprises:
choosing a pair of lines having a perpendicular relation or a parallel relation from
the detected lines;
under the condition that the pair of lines is found, determining a line with a greatest
probability in the pair of lines with a greatest sum of probabilities as the one boundary
line of the plane mask in the plane coordinate system; and
under the condition that the pair of lines is not found, determining a line with a
greatest probability as the one boundary line of the plane mask in the plane coordinate
system.
8. The method according to claim 5, wherein the determining a boundary line of the plane
mask in the plane coordinate system further comprises at least one of:
performing median filtering on the plane mask in the plane coordinate system before
the edge detection; or
merging the detected lines based on a slope of each line after the Hough line detection.
9. The method according to claim 3, wherein the determining an inscribed rectangle of
the plane mask in the plane coordinate system comprises:
determining an inscribed rectangle of the plane mask in the plane coordinate system,
wherein the inscribed rectangle is parallel to the boundary line and comprises a maximum
inscribed square.
10. The method according to claim 2, wherein the embedding the image to be embedded into
the predicted plane mask of the video frame image comprises:
determining a transformation matrix from the image to be embedded to the predicted
plane mask of the video frame image according to a mapping relation between the 4
vertexes of the image to be embedded and the 4 key points in the plane mask of the
predicted video frame image; and
transforming each foreground point of the image to be embedded into the position region
corresponding to the 4 key points in the predicted plane mask of the video frame image
based on the transformation matrix.
11. The method according to claim 2, wherein
the deep learning model uses a loss function that is determined based on the 4 key
points in the labelling information and the predicted 4 key points after the alignment
operation is performed,
wherein performing the alignment operation on the predicted 4 key points comprises:
determining a transformation ratio based on the 4 key points in the labelling information
and the predicted 4 key points;
performing size transformation on the predicted 4 key points according to the transformation
ratio;
determining first position transformation information based on the 4 key points in
the labelling information;
determining second position transformation information based on the predicted 4 key
points; and
respectively adding the first position transformation information to the predicted
4 key points after the size transformation and subtracting the second position transformation
information, to finish the alignment operation on the predicted 4 key points.
12. The method according to any of claims 1 to 11, wherein
the deep learning model comprises region-based convolutional neural networks; or
the image to be embedded comprises an enterprise identification image and a product
image.
13. A method of acquiring a plane prediction model, comprising:
labelling a plane detection frame, a plane mask in each training image and 4 key points
in the plane mask;
training a deep learning model using the training images with labels of the plane
detection frame and the plane mask as well as labelling information of the 4 key points
in the plane mask; and
determining the trained deep learning model as the plane prediction model.
14. The method according to claim 13, wherein the labelling 4 key points in the plane
mask in each training image comprises:
converting the plane mask of the each training image from a pixel coordinate system
to a plane coordinate system;
determining a boundary line of the plane mask in the plane coordinate system;
determining an inscribed rectangle of the plane mask in the plane coordinate system
based on the boundary line of the plane mask; and
converting 4 vertices of the inscribed rectangle of the plane mask from the plane
coordinate system to the pixel coordinate system.
15. An apparatus of embedding an image in a video, comprising:
a memory; and a processor coupled to the memory, wherein the processor is configured
to perform the method of embedding an image in a video according to any of claims
1 to 12 based on instructions stored in the memory.
16. An apparatus of acquiring a plane prediction model, comprising:
a memory; and a processor coupled to the memory, wherein the processor is configured
to perform the method of acquiring a plane prediction model according to any of claims
13 to 14 based on instructions stored in the memory.
17. A non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, implements the method of embedding an image in a video according to any of claims 1 to 12 or the method of acquiring a plane prediction model according to any of claims 13 to 14.