BACKGROUND
[0001] The present invention relates to picture coding and decoding techniques in which
a picture is split into blocks and prediction is performed.
[0002] In coding and decoding of a picture, a target picture is split into blocks, each
of which being a group of a predetermined number of samples, and processing is performed
in units of blocks. Splitting a picture into appropriate blocks with appropriate settings
of intra prediction and inter prediction enables improvement of coding efficiency.
[0003] Coding/decoding of a moving picture uses inter prediction that performs prediction
from a coded/decoded picture, thereby improving coding efficiency. Patent Literature
1 describes a technique of applying an affine transform at the time of inter prediction.
Moving pictures often involve object deformation such as enlargement/reduction or
rotation, and thus applying the technique of Patent Literature 1 enables efficient
coding.
SUMMARY
[0005] However, the technique of Patent Literature 1 involves picture transform,
leading to a problem of heavy processing load. The present invention has been made
in view of the above problem, and provides a low-load and efficient coding technique.
[0006] In one aspect of the present invention to solve the above problem, there is provided
a technique that includes: a triangle merging candidate list constructor structured
to construct a triangle merging candidate list including spatial merging candidates;
a first triangle merging candidate selector structured to select, from the triangle
merging candidate list, a first triangle merging candidate that is uni-prediction;
and a second triangle merging candidate selector structured to select, from the triangle
merging candidate list, a second triangle merging candidate that is uni-prediction,
in which, in a region where motion compensation is performed by weighted averaging
of the first triangle merging candidate and the second triangle merging candidate,
uni-prediction motion information of either the first triangle merging candidate or
the second triangle merging candidate is saved.
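The following is a minimal illustrative sketch, in Python, of the flow described in paragraph [0006]. The helper names (Candidate, pick_uni_prediction, triangle_merge) and the rule for choosing which candidate's motion information is saved in the weighted region are assumptions for illustration only and are not taken from the text.

    from dataclasses import dataclass
    from typing import List, Optional, Tuple

    MV = Tuple[int, int]

    @dataclass
    class Candidate:
        mv_l0: Optional[MV]   # L0 motion vector, or None when L0 is unused
        mv_l1: Optional[MV]   # L1 motion vector, or None when L1 is unused

    def pick_uni_prediction(cand: Candidate, prefer_l0: bool) -> Candidate:
        # Reduce a (possibly bi-prediction) merging candidate to uni-prediction.
        if prefer_l0 and cand.mv_l0 is not None:
            return Candidate(cand.mv_l0, None)
        if cand.mv_l1 is not None:
            return Candidate(None, cand.mv_l1)
        return Candidate(cand.mv_l0, None)

    def triangle_merge(merge_list: List[Candidate], idx0: int, idx1: int):
        # First and second triangle merging candidates, both reduced to uni-prediction.
        part0 = pick_uni_prediction(merge_list[idx0], prefer_l0=True)
        part1 = pick_uni_prediction(merge_list[idx1], prefer_l0=False)
        # In the region blended by weighted averaging, only the uni-prediction
        # motion information of one of the two candidates is saved (here: part1).
        saved_for_weighted_region = part1
        return part0, part1, saved_for_weighted_region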
[0007] According to the present invention, it is possible to achieve highly efficient and
low load picture coding/decoding process.
BRIEF DESCRIPTION OF THE DRAWINGS
[0008]
Fig. 1 is a block diagram of a picture coding device according to an embodiment of
the present invention.
Fig. 2 is a block diagram of a picture decoding device according to an embodiment
of the present invention.
Fig. 3 is a flowchart illustrating operation of splitting a tree block.
Fig. 4 is a diagram illustrating a state of splitting an input picture into tree blocks.
Fig. 5 is a diagram illustrating z-scan.
Fig. 6A is a diagram illustrating a split shape of a block.
Fig. 6B is a diagram illustrating a split shape of a block.
Fig. 6C is a diagram illustrating a split shape of a block.
Fig. 6D is a diagram illustrating a split shape of a block.
Fig. 6E is a diagram illustrating a split shape of a block.
Fig. 7 is a flowchart illustrating operation of splitting a block into four.
Fig. 8 is a flowchart illustrating operation of splitting a block into two or three.
Fig. 9 is syntax for expressing the shape of block split.
Fig. 10A is a diagram illustrating intra prediction.
Fig. 10B is a diagram illustrating intra prediction.
Fig. 11 is a diagram illustrating reference blocks for inter prediction.
Fig. 12A is syntax for expressing a coding block prediction mode.
Fig. 12B is syntax for expressing the coding block prediction mode.
Fig. 13 is a diagram illustrating a correspondence between syntax elements and modes
related to inter prediction.
Fig. 14 is a diagram illustrating affine motion compensation at two control points.
Fig. 15 is a diagram illustrating affine motion compensation at three control points.
Fig. 16 is a block diagram of a detailed configuration of an inter prediction unit
102 in Fig. 1.
Fig. 17 is a block diagram of a detailed configuration of a normal motion vector predictor
mode derivation unit 301 in Fig. 16.
Fig. 18 is a block diagram of a detailed configuration of a normal merge mode derivation
unit 302 in Fig. 16.
Fig. 19 is a flowchart illustrating a normal motion vector predictor mode derivation
process of the normal motion vector predictor mode derivation unit 301 in Fig. 16.
Fig. 20 is a flowchart illustrating a processing procedure of the normal motion vector
predictor mode derivation process.
Fig. 21 is a flowchart illustrating a processing procedure of a normal merge mode
derivation process.
Fig. 22 is a block diagram of a detailed configuration of an inter prediction unit
203 in Fig. 2.
Fig. 23 is a block diagram of a detailed configuration of a normal motion vector predictor
mode derivation unit 401 in Fig. 22.
Fig. 24 is a block diagram of a detailed configuration of a normal merge mode derivation
unit 402 in Fig. 22.
Fig. 25 is a flowchart illustrating a normal motion vector predictor mode derivation
process of a normal motion vector predictor mode derivation unit 401 in Fig. 22.
Fig. 26 is a diagram illustrating a history-based motion vector predictor candidate
list initialization/update processing procedure.
Fig. 27 is a flowchart of an identical element confirmation processing procedure in
the history-based motion vector predictor candidate list initialization/update processing
procedure.
Fig. 28 is a flowchart of an element shift processing procedure in the history-based
motion vector predictor candidate list initialization/update processing procedure.
Fig. 29 is a flowchart illustrating a history-based motion vector predictor candidate
derivation processing procedure.
Fig. 30 is a flowchart illustrating a history-based merging candidate derivation processing
procedure.
Fig. 31A is a diagram illustrating an example of a history-based motion vector predictor
candidate list update process.
Fig. 31B is a diagram illustrating an example of a history-based motion vector predictor
candidate list update process.
Fig. 31C is a diagram illustrating an example of a history-based motion vector predictor
candidate list update process.
Fig. 32 is a diagram illustrating motion compensation prediction in a case where L0-prediction
is performed and a reference picture (RefL0Pic) of L0 is at a time before a target
picture (CurPic).
Fig. 33 is a diagram illustrating motion compensation prediction in a case where L0-prediction
is performed and a reference picture of L0-prediction is at a time after the target
picture.
Fig. 34 is a diagram illustrating a prediction direction of motion compensation prediction
in bi-prediction in which an L0-prediction reference picture is at a time before the
target picture and an L1-prediction reference picture is at a time after the target
picture.
Fig. 35 is a diagram illustrating a prediction direction of motion compensation prediction
in bi-prediction in which an L0-prediction reference picture and an L1-prediction
reference picture are at a time before the target picture.
Fig. 36 is a diagram illustrating a prediction direction of motion compensation prediction
in bi-prediction in which an L0-prediction reference picture and an L1-prediction
reference picture are at a time after the target picture.
Fig. 37 is a diagram illustrating an example of a hardware configuration of a coding-decoding
device according to an embodiment of the present invention.
Fig. 38A is a diagram illustrating prediction of a triangle merge mode.
Fig. 38B is a diagram illustrating prediction of the triangle merge mode.
Fig. 39 is a flowchart illustrating an average merging candidate derivation processing
procedure.
Fig. 40 is a flowchart illustrating triangle merging candidate derivation.
Fig. 41 is a flowchart illustrating derivation of uni-prediction motion information
of a merge triangle partition 0 according to the present embodiment.
Fig. 42 is a flowchart illustrating derivation of uni-prediction motion information
of a merge triangle partition 1 in the embodiment of the present invention.
Fig. 43A is a diagram illustrating weighting in the triangle merge mode.
Fig. 43B is a diagram illustrating weighting in the triangle merge mode.
Fig. 44A is a diagram illustrating partitions in the triangle merge mode.
Fig. 44B is a diagram illustrating partitions in the triangle merge mode.
Fig. 44C is a diagram illustrating partitions in the triangle merge mode.
Fig. 44D is a diagram illustrating partitions in the triangle merge mode.
Fig. 44E is a diagram illustrating partitions in the triangle merge mode.
Fig. 44F is a diagram illustrating partitions in the triangle merge mode.
Fig. 45A is a diagram illustrating stored information in the triangle merge mode.
Fig. 45B is a diagram illustrating stored information in the triangle merge mode.
Fig. 46A is a diagram illustrating stored information in the triangle merge mode.
Fig. 46B is a diagram illustrating stored information in the triangle merge mode.
Fig. 47A is a diagram illustrating stored information in the triangle merge mode.
Fig. 47B is a diagram illustrating stored information in the triangle merge mode.
Fig. 48A is a diagram illustrating stored information in the triangle merge mode.
Fig. 48B is a diagram illustrating stored information in the triangle merge mode.
DETAILED DESCRIPTION
[0009] Technologies and technical terms used in the present embodiment will be defined.
Tree block
[0010] In the embodiment, a coding/decoding process target picture (processing target picture)
is equally split into units of a predetermined size. This unit is defined as a tree block.
While Fig. 4 sets the size of the tree block to 128 × 128 samples, the size of the
tree block is not limited to this and may be set to any size. The target tree block
(corresponding to a coding target in the coding process and a decoding target in the
decoding process) is switched in raster scan order, that is, in order from left to
right and from top to bottom. The interior of each tree block can be further recursively
split. A coding/decoding block as a result of recursive split of the tree block is
defined as a coding block. A tree block and a coding block are collectively defined
as a block. Execution of appropriate block split enables efficient coding. The size
of the tree block may be a fixed value determined in advance by the coding device
and the decoding device, or it is possible to adopt a configuration in which the size
of the tree block determined by the coding device is transmitted to the decoding device.
Here, the maximum size of the tree block is 128 × 128 samples, and the minimum size
of the tree block is 16 × 16 samples. The maximum size of the coding block is 64 ×
64 samples, and the minimum size of the coding block is 4 × 4 samples.
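As an illustrative sketch only (not part of the embodiment text), the raster-scan visit of 128 × 128 tree blocks described above can be written as follows; the picture size in the example is arbitrary.

    def tree_block_origins(pic_width, pic_height, tb_size=128):
        # Upper-left sample position of each tree block, visited left to right,
        # then top to bottom (raster scan order).
        for y in range(0, pic_height, tb_size):
            for x in range(0, pic_width, tb_size):
                yield (x, y)

    # Example: a 1920 x 1080 picture is covered by 15 x 9 = 135 tree blocks.
    print(len(list(tree_block_origins(1920, 1080))))  # prints 135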
Prediction mode
[0011] Switching between intra prediction (MODE_INTRA), which performs prediction from a
processed picture signal of the target picture, and inter prediction (MODE_INTER),
which performs prediction from a picture signal of a processed picture, is performed
in units of target coding blocks.
[0012] The term "processed" is used, in the coding process, for a picture obtained by decoding
a coded signal, and likewise for a picture signal, a tree block, a block, a coding block,
or the like. In the decoding process, it is used for a decoded picture, picture signal,
tree block, block, coding block, or the like.
[0013] A mode of identifying the intra prediction (MODE_INTRA) and the inter prediction
(MODE_INTER) is defined as a prediction mode (PredMode). The prediction mode (PredMode)
has intra prediction (MODE_INTRA) or inter prediction (MODE_INTER) as a value.
Inter prediction
[0014] In inter prediction in which prediction is performed from a picture signal of a processed
picture, it is possible to use a plurality of processed pictures as reference pictures.
In order to manage a plurality of reference pictures, two types of reference lists
L0 (reference list 0) and L1 (reference list 1) are defined. A reference picture is
specified using a reference index in each of the lists. In a P slice, L0-prediction
(Pred_L0) is usable. In a B slice, L0-prediction (Pred_L0), L1-prediction (Pred_L1),
and bi-prediction (Pred_BI) are usable. L0-prediction (Pred_L0) is inter prediction
that refers to a reference picture managed by L0, while L1-prediction (Pred_L1) is
inter prediction that refers to a reference picture managed by L1. Bi-prediction (Pred_BI)
is inter prediction in which both L0-prediction and L1-prediction are performed and
one reference picture managed in each of L0 and L1 is referred to. Information specifying
L0-prediction, L1-prediction, and bi-prediction is defined as an inter prediction
mode. In the following processing, constants and variables with the suffix LX in an
output are assumed to be processed for each of L0 and L1.
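A small sketch of the reference-list bookkeeping described in paragraph [0014]; only the names Pred_L0, Pred_L1, and Pred_BI come from the text, and the function and its arguments are hypothetical.

    from enum import Enum

    class InterPredMode(Enum):
        PRED_L0 = 0   # refer to one reference picture managed by L0
        PRED_L1 = 1   # refer to one reference picture managed by L1
        PRED_BI = 2   # refer to one picture in L0 and one picture in L1

    def referenced_pictures(mode, ref_list_l0, ref_list_l1, ref_idx_l0, ref_idx_l1):
        # A reference picture is specified by a reference index into each list.
        if mode == InterPredMode.PRED_L0:
            return [ref_list_l0[ref_idx_l0]]
        if mode == InterPredMode.PRED_L1:
            return [ref_list_l1[ref_idx_l1]]
        return [ref_list_l0[ref_idx_l0], ref_list_l1[ref_idx_l1]]  # bi-prediction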
Motion vector predictor mode
[0015] The motion vector predictor mode is a mode of transmitting an index for specifying
a motion vector predictor, a motion vector difference, an inter prediction mode, and
a reference index, and determining inter prediction information of a target block.
The motion vector predictor is derived from a motion vector predictor candidate derived
from a processed block in the neighbor of the target block or a block belonging to
the processed picture and located at the same position as or in the neighborhood (vicinity)
of the target block, and from an index to specify the motion vector predictor.
Merge mode
[0016] The merge mode is a mode that derives inter prediction information of the target
block from inter prediction information of a processed block in the neighbor of the
target block, or a block belonging to a processed picture and located at the same
position as the target block or in the neighborhood (vicinity) of the target block,
without transmitting a motion vector difference or a reference index.
[0017] A processed block in the neighbor of the target block and inter prediction information
of the processed block are defined as spatial merging candidates. Blocks belonging
to the processed picture and located at the same position as the target block or in
the neighborhood (vicinity) of the target block, and inter prediction information
derived from the inter prediction information of those blocks are defined as temporal
merging candidates. Each merging candidate is registered in a merging candidate
list. A merging candidate to be used for prediction of a target block is specified
by a merge index.
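The merge-mode behaviour of paragraphs [0016] and [0017] can be sketched as follows; the construction order, the duplicate check, and the maximum list size of six are assumptions for illustration and are not stated in the text.

    def build_merging_candidate_list(spatial_candidates, temporal_candidates, max_num=6):
        # Spatial and temporal merging candidates are registered in a merging candidate list.
        merge_list = []
        for cand in list(spatial_candidates) + list(temporal_candidates):
            if cand not in merge_list:      # avoid registering identical candidates twice
                merge_list.append(cand)
            if len(merge_list) == max_num:
                break
        return merge_list

    def inter_prediction_info_from_merge(merge_list, merge_idx):
        # No motion vector difference or reference index is transmitted; the inter
        # prediction information of the candidate specified by the merge index is reused.
        return merge_list[merge_idx]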
Neighboring block
[0018] Fig. 11 is a diagram illustrating reference blocks to be referred to for deriving
inter prediction information in the motion vector predictor mode and the merge mode.
A0, A1, A2, B0, B1, B2, and B3 are processed blocks in the neighbor of the target
block. T0 is a block belonging to the processed picture and located at the same position
as, or in the neighborhood (vicinity) of, the target block in the target picture.
[0019] A1 and A2 are blocks located on the left side of the target coding block and in the
neighbor of the target coding block. B1 and B3 are blocks located above the target
coding block and in the neighbor of the target coding block. A0, B0, and B2 are blocks
respectively located at the lower left, the upper right, and the upper left of the
target coding block.
[0020] Details of how neighboring blocks are handled in the motion vector predictor mode
and the merge mode will be described below.
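For reference, the neighbouring positions of Fig. 11 can be expressed as sample coordinates relative to a target block whose upper-left sample is at (x, y) and whose size is w × h. The exact sample positions below are assumptions, since the text only states left, above, lower-left, upper-right, and upper-left.

    def neighbour_positions(x, y, w, h):
        # Hypothetical sample coordinates of the reference positions in Fig. 11.
        return {
            "A0": (x - 1, y + h),        # lower left of the target block
            "A1": (x - 1, y + h - 1),    # left side, lowest row
            "A2": (x - 1, y),            # left side, top row
            "B0": (x + w, y - 1),        # upper right of the target block
            "B1": (x + w - 1, y - 1),    # above, rightmost column
            "B2": (x - 1, y - 1),        # upper left of the target block
            "B3": (x, y - 1),            # above, leftmost column
        }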
Affine motion compensation
[0021] The affine motion compensation first splits a coding block into subblocks of a predetermined
unit and then individually determines a motion vector for each of the split subblocks
to perform motion compensation. The motion vector of each of subblocks is derived
on the basis of one or more control points derived from the inter prediction information
of a processed block in the neighbor of the target block, or a block belonging to
the processed picture and located at the same position as or in the neighborhood (vicinity)
of the target block. While the present embodiment sets the size of the subblock to
4 × 4 samples, the size of the subblock is not limited to this, and a motion vector
may be derived in units of samples.
[0022] Fig. 14 illustrates an example of affine motion compensation in a case where there
are two control points. In this case, each of the two control points has two parameters,
that is, a horizontal component and a vertical component. Accordingly, the affine
transform having two control points is referred to as four-parameter affine transform.
CP1 and CP2 in Fig. 14 are control points.
[0023] Fig. 15 illustrates an example of affine motion compensation in a case where there
are three control points. In this case, each of the three control points has two parameters,
that is, a horizontal component and a vertical component. Accordingly, the affine
transform having three control points is referred to as six-parameter affine transform.
CP1, CP2, and CP3 in Fig. 15 are control points.
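As an illustration of the four-parameter case of paragraph [0022], the widely used two-control-point affine model derives one motion vector per subblock from the motion vectors of CP1 (upper-left corner) and CP2 (upper-right corner). Whether the embodiment uses exactly these equations is not stated above, so the sketch below is illustrative only.

    def affine_subblock_mv(cp1, cp2, block_width, x, y):
        # Four-parameter affine model: cp1 and cp2 are (mvx, mvy) at the upper-left
        # and upper-right corners of a block of width block_width; (x, y) is a
        # position inside the block (for example, a 4x4 subblock centre).
        dx = (cp2[0] - cp1[0]) / block_width
        dy = (cp2[1] - cp1[1]) / block_width
        mvx = cp1[0] + dx * x - dy * y
        mvy = cp1[1] + dy * x + dx * y
        return (mvx, mvy)

    def affine_mv_field(cp1, cp2, width, height, sub=4):
        # One motion vector per sub x sub subblock, sampled at the subblock centre.
        return [[affine_subblock_mv(cp1, cp2, width, sx + sub / 2, sy + sub / 2)
                 for sx in range(0, width, sub)]
                for sy in range(0, height, sub)]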
[0024] The affine motion compensation is usable in any of the motion vector predictor mode
and the merge mode. A mode of applying the affine motion compensation in the motion
vector predictor mode is defined as a subblock motion vector predictor mode. A mode
of applying the affine motion compensation in the merge mode is defined as a subblock
merge mode.
Syntax of coding block
[0025] The syntax for expressing the prediction mode of the coding block will be described
with reference to Figs. 12A, 12B, and 13. pred_mode_flag in Fig. 12A is a flag
indicating whether the mode is inter prediction. pred_mode_flag = 0 indicates
inter prediction, while pred_mode_flag = 1 indicates intra prediction. Information
of intra prediction intra_pred_mode is transmitted in the case of intra prediction,
while merge_flag is transmitted in the case of inter prediction. merge_flag is a flag
indicating whether the mode to use is the merge mode or the motion vector predictor
mode. In the case of the motion vector predictor mode (merge_flag = 0), a flag inter_affine_flag
indicating whether to apply the subblock motion vector predictor mode is transmitted.
In the case of applying the subblock motion vector predictor mode (inter_affine_flag
= 1), cu_affine_type_flag is transmitted. cu_affine_type_flag is a flag for determining
the number of control points in the subblock motion vector predictor mode.
[0026] In contrast, in the case of the merge mode (merge_flag = 1), the merge_subblock_flag
of Fig. 12B is transmitted. merge_subblock_flag is a flag indicating whether to apply
the subblock merge mode. In the case of the subblock merge mode (merge_subblock_flag
= 1), a merge index merge_subblock_idx is transmitted. Conversely, in a case where
the mode is not the subblock merge mode (merge_subblock_flag = 0), a flag merge_triangle_flag
indicating whether to apply the triangle merge mode is transmitted. In the case of
applying the triangle merge mode (merge_triangle_flag = 1), a block splitting direction
merge_triangle_split_dir is transmitted, and merge triangle indexes merge_triangle_idx0
and merge_triangle_idx1 are transmitted for the two split partitions, respectively.
In the case of not applying the triangle merge mode (merge_triangle_flag = 0), a
merge index merge_idx is transmitted.
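The flag-driven decision of Figs. 12A and 12B can be summarized as the decision tree below; the reader object and its read_flag()/read_index() methods are hypothetical stand-ins for bit-string decoding, and only the syntax element names come from the text.

    def parse_prediction_mode(r):
        if r.read_flag("pred_mode_flag") == 1:                 # 1: intra prediction
            return {"mode": "intra", "intra_pred_mode": r.read_index("intra_pred_mode")}
        if r.read_flag("merge_flag") == 0:                     # motion vector predictor mode
            info = {"mode": "mvp", "affine": r.read_flag("inter_affine_flag")}
            if info["affine"]:                                 # subblock motion vector predictor mode
                info["cu_affine_type_flag"] = r.read_flag("cu_affine_type_flag")
            return info
        if r.read_flag("merge_subblock_flag"):                 # subblock merge mode
            return {"mode": "subblock_merge", "idx": r.read_index("merge_subblock_idx")}
        if r.read_flag("merge_triangle_flag"):                 # triangle merge mode
            return {"mode": "triangle_merge",
                    "split_dir": r.read_flag("merge_triangle_split_dir"),
                    "idx0": r.read_index("merge_triangle_idx0"),
                    "idx1": r.read_index("merge_triangle_idx1")}
        return {"mode": "normal_merge", "idx": r.read_index("merge_idx")}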
[0027] Fig. 13 illustrates the value of each of syntax elements and the corresponding prediction
mode. merge_flag = 0 and inter_affine_flag = 0 correspond to the normal motion vector
predictor mode (Inter Pred Mode). merge_flag = 0 and inter_affine_flag = 1 correspond
to a subblock motion vector predictor mode (Inter Affine Mode). merge_flag = 1, merge_subblock_flag
= 0, and merge_triangle_flag = 0 correspond to a normal merge mode (Merge Mode). merge_flag
= 1, merge_subblock_flag = 0, and merge_triangle_flag = 1 correspond to a triangle
merge mode (Triangle Merge Mode). merge_flag = 1 and merge_subblock_flag = 1 correspond
to a subblock merge mode (Affine Merge Mode).
POC
[0028] A Picture Order Count (POC) is a variable associated with the picture to be coded,
and is set to a value that increments by one in accordance with picture output order.
The POC value makes it possible to discriminate whether the pictures are the same,
discriminate inter-picture sequential relationship in the output order, or derive
the distance between the pictures. For example, it is possible to determine that two
pictures having a same POC value are identical pictures. In a case where the POCs
of the two pictures have different values, the picture with the smaller POC value
can be determined to be the picture that is output earlier. The difference between
the POCs of the two pictures indicates the distance between the pictures in the time
axis direction.
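The three uses of the POC described in paragraph [0028] amount to the following comparisons (an illustrative sketch only):

    def same_picture(poc_a, poc_b):
        return poc_a == poc_b        # equal POC values identify identical pictures

    def output_earlier(poc_a, poc_b):
        return poc_a < poc_b         # the picture with the smaller POC is output earlier

    def temporal_distance(poc_a, poc_b):
        return abs(poc_a - poc_b)    # distance between the pictures on the time axis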
First Embodiment
[0029] A picture coding device 100 and a picture decoding device 200 according to a first
embodiment of the present invention will be described.
[0030] Fig. 1 is a block diagram of the picture coding device 100 according to the first
embodiment. The picture coding device 100 according to an embodiment includes a block
split unit 101, an inter prediction unit 102, an intra prediction unit 103, decoded
picture memory 104, a prediction method determiner 105, a residual generation unit
106, an orthogonal transformer/quantizer 107, a bit strings coding unit 108, an inverse
quantizer/inverse orthogonal transformer 109, a decoded picture signal superimposer
110, and coding information storage memory 111.
[0031] The block split unit 101 recursively splits an input picture to construct a coding
block. The block split unit 101 includes: a quad split unit that splits a split target
block in both the horizontal direction and the vertical direction; and a binary-ternary
split unit that splits a split target block in either the horizontal direction or
the vertical direction. The block split unit 101 sets the constructed coding block
as a target coding block, and supplies a picture signal of the target coding block
to the inter prediction unit 102, the intra prediction unit 103, and the residual
generation unit 106. Further, the block split unit 101 supplies information indicating
the determined recursive split structure to the bit strings coding unit 108. Detailed
operation of the block split unit 101 will be described below.
[0032] The inter prediction unit 102 performs inter prediction of the target coding block.
The inter prediction unit 102 derives a plurality of inter prediction information
candidates from the inter prediction information stored in the coding information
storage memory 111 and the decoded picture signal stored in the decoded picture memory
104, selects a suitable inter prediction mode from the plurality of derived candidates,
and supplies the selected inter prediction mode and a predicted picture signal corresponding
to the selected inter prediction mode to the prediction method determiner 105. Detailed
configuration and operation of the inter prediction unit 102 will be described below.
[0033] The intra prediction unit 103 performs intra prediction on the target coding block.
The intra prediction unit 103 refers to the decoded picture signal stored in the decoded
picture memory 104 as a reference sample, and performs intra prediction based on coding
information such as an intra prediction mode stored in the coding information storage
memory 111 and thereby generates a predicted picture signal. In the intra prediction,
the intra prediction unit 103 selects a suitable intra prediction mode from a plurality
of intra prediction modes, and supplies the selected intra prediction mode and the
selected predicted picture signal corresponding to the selected intra prediction mode
to the prediction method determiner 105.
[0034] Figs. 10A and 10B illustrate examples of intra prediction. Fig. 10A illustrates a
correspondence between the prediction direction of intra prediction and the intra
prediction mode number. For example, an intra prediction mode 50 copies reference
samples in the vertical direction and thereby constructs an intra prediction picture.
Intra prediction mode 1 is a DC mode in which all sample values of a target block
are set to an average value of reference samples. Intra prediction mode 0 is a Planar
mode in which a two-dimensional intra prediction picture is created from reference
samples in the vertical and horizontal directions. Fig. 10B is an example of constructing
an intra prediction picture in the case of an intra prediction mode 40. The intra
prediction unit 103 copies, for each of samples of the target block, the value of
the reference sample in the direction indicated by the intra prediction mode. In a
case where the reference sample in the intra prediction mode is not at an integer
position, the intra prediction unit 103 determines a reference sample value by interpolation
from reference sample values at neighboring integer positions.
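Two of the points made in paragraph [0034] can be sketched as follows: the DC mode (mode 1) fills the target block with the average of the reference samples, and an angular mode whose reference position is not at an integer position interpolates between neighbouring integer-position reference samples. The linear interpolation used here is an assumption for illustration; this passage does not specify the interpolation filter.

    def dc_prediction(ref_samples, width, height):
        # All sample values of the target block are set to the average of the reference samples.
        avg = round(sum(ref_samples) / len(ref_samples))
        return [[avg] * width for _ in range(height)]

    def interpolate_reference(ref_row, pos):
        # Reference sample value at fractional position pos (0 <= pos < len(ref_row) - 1).
        i = int(pos)
        frac = pos - i
        return (1 - frac) * ref_row[i] + frac * ref_row[i + 1]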
[0035] The decoded picture memory 104 stores the decoded picture constructed by the decoded
picture signal superimposer 110. The decoded picture memory 104 supplies the stored
decoded picture to the inter prediction unit 102 and the intra prediction unit 103.
[0036] The prediction method determiner 105 evaluates each of the intra prediction and the
inter prediction using the coding information, the code amount of the residual, the
distortion amount between the predicted picture signal and the target picture signal,
or the like, and thereby determines an optimal prediction mode. In the case of intra
prediction, the prediction method determiner 105 supplies intra prediction information
such as an intra prediction mode to the bit strings coding unit 108 as coding information.
In the case of the merge mode of the inter prediction, the prediction method determiner
105 supplies inter prediction information such as a merge index and information (subblock
merge flag) indicating whether the mode is the subblock merge mode to the bit strings
coding unit 108 as coding information. In the case of the motion vector predictor
mode of the inter prediction, the prediction method determiner 105 supplies inter
prediction information such as the inter prediction mode, the motion vector predictor
index, the reference index of L0 or L1, the motion vector difference, or information
indicating whether the mode is a subblock motion vector predictor mode (subblock motion
vector predictor flag) to the bit strings coding unit 108 as coding information. The
prediction method determiner 105 further supplies the determined coding information
to the coding information storage memory 111. The prediction method determiner 105
supplies the predicted picture signal to the residual generation unit 106 and the
decoded picture signal superimposer 110.
[0037] The residual generation unit 106 constructs a residual by subtracting the predicted
picture signal from the target picture signal, and supplies the constructed residual
to the orthogonal transformer/quantizer 107.
[0038] The orthogonal transformer/quantizer 107 performs orthogonal transform and quantization
on the residual according to the quantization parameter and thereby constructs an
orthogonally transformed and quantized residual, and then supplies the constructed
residual to the bit strings coding unit 108 and the inverse quantizer/inverse orthogonal
transformer 109.
[0039] In addition to information in units of sequences, pictures, slices, and coding blocks,
the bit strings coding unit 108 encodes coding information corresponding to the prediction
method determined by the prediction method determiner 105 for each coding block. Specifically, the bit strings coding
unit 108 encodes a prediction mode PredMode for each of coding blocks. In a case where
the prediction mode is inter prediction (MODE_INTER), the bit strings coding unit
108 encodes coding information (inter prediction information) such as a flag to determine
whether the mode is the merge mode, a subblock merge flag, a merge index in the case
of merge mode, an inter prediction mode in the case of non-merge modes, a motion vector
predictor index, information related to motion vector differences, and a subblock
motion vector predictor flag, on the basis of a prescribed syntax (bit string syntax
rules) and thereby constructs a first bit string. In a case where the prediction mode
is intra prediction (MODE_INTRA), coding information (intra prediction information)
such as the intra prediction mode is coded according to a prescribed syntax (bit string
syntax rules) to construct a first bit string. In addition, the bit strings coding
unit 108 performs entropy coding on the orthogonally transformed and quantized residual
on the basis of a prescribed syntax and thereby constructs a second bit string. The
bit strings coding unit 108 multiplexes the first bit string and the second bit string
on the basis of a prescribed syntax, and outputs the bitstream.
[0040] The inverse quantizer/inverse orthogonal transformer 109 performs inverse quantization
and inverse orthogonal transform on the orthogonally transformed/quantized residual
supplied from the orthogonal transformer/quantizer 107 and thereby calculates the
residual, and then supplies the calculated residual to the decoded picture signal
superimposer 110.
[0041] The decoded picture signal superimposer 110 superimposes the predicted picture signal
according to the determination of the prediction method determiner 105 with the residual
that has undergone the inverse quantization/inverse orthogonal transform by the inverse
quantizer/inverse orthogonal transformer 109, thereby constructs a decoded picture,
and stores the constructed decoded picture in the decoded picture memory 104. The
decoded picture signal superimposer 110 may perform filtering processing of reducing
distortion such as block distortion due to coding on the decoded picture, and may
thereafter store the decoded picture in the decoded picture memory 104.
[0042] The coding information storage memory 111 stores coding information such as a prediction
mode (inter prediction or intra prediction) determined by the prediction method determiner
105. In the case of inter prediction, the coding information stored in the coding
information storage memory 111 includes inter prediction information such as the determined
motion vector, reference indexes of the reference lists L0 and L1, and a history-based
motion vector predictor candidate list. In the case of the inter prediction merge
mode, the coding information stored in the coding information storage memory 111 includes,
in addition to the above-described information, a merge index and inter prediction
information including information indicating whether the mode is a subblock merge
mode (a subblock merge flag). In the case of the motion vector predictor mode of the
inter prediction, the coding information stored in the coding information storage
memory 111 includes, in addition to the above information, inter prediction information
such as an inter prediction mode, a motion vector predictor index, a motion vector
difference, and information indicating whether the mode is a subblock motion vector
predictor mode (subblock motion vector predictor flag). In the case of intra prediction,
the coding information stored in the coding information storage memory 111 includes
intra prediction information such as the determined intra prediction mode.
[0043] Fig. 2 is a block diagram illustrating a configuration of a picture decoding device
according to an embodiment of the present invention corresponding to the picture coding
device of Fig. 1. The picture decoding device according to the embodiment includes
a bit strings decoding unit 201, a block split unit 202, an inter prediction unit
203, an intra prediction unit 204, coding information storage memory 205, an inverse
quantizer/inverse orthogonal transformer 206, a decoded picture signal superimposer
207, and decoded picture memory 208.
[0044] The decoding process of the picture decoding device in Fig. 2 corresponds to
the decoding process provided inside the picture coding device in Fig. 1. Accordingly,
the coding information storage memory 205, the inverse quantizer/inverse orthogonal
transformer 206, the decoded picture signal superimposer 207, and the decoded picture
memory 208 in Fig. 2 respectively have functions corresponding to those of the coding
information storage memory 111, the inverse quantizer/inverse orthogonal transformer
109, the decoded picture signal superimposer 110, and the decoded picture memory 104
of the picture coding device in Fig. 1.
[0045] The bitstream supplied to the bit strings decoding unit 201 is separated on the basis
of a prescribed syntax rule. The bit strings decoding unit 201 decodes the separated
first bit string, and thereby obtains information in units of sequences, pictures,
slices, and coding blocks, as well as coding information in units of coding blocks. Specifically,
the bit strings decoding unit 201 decodes a prediction mode PredMode that discriminates
whether the prediction is inter prediction (MODE_INTER) or intra prediction (MODE_INTRA)
in units of coding blocks. In a case where the prediction mode is inter prediction
(MODE_INTER), the bit strings decoding unit 201 decodes, according to a prescribed
syntax, coding information (inter prediction information) such as the flag that discriminates
whether the mode is the merge mode, the merge index and the subblock merge flag in
the case of the merge mode, and the inter prediction mode, the motion vector predictor
index, the motion vector difference, and the subblock motion vector predictor flag
in the case of the motion vector predictor mode, and then supplies the coding information
(inter prediction information) to the coding information storage
memory 205 via the inter prediction unit 203 and the block split unit 202. In a case
where the prediction mode is intra prediction (MODE_INTRA), the bit strings decoding
unit 201 decodes coding information (intra prediction information) such as the intra
prediction mode according to a prescribed syntax, and then supplies the decoded coding
information (intra prediction information) to the coding information storage memory
205 via the inter prediction unit 203 or the intra prediction unit 204, and via the
block split unit 202. The bit strings decoding unit 201 decodes the separated second
bit string and calculates an orthogonally transformed/quantized residual, and then,
supplies the orthogonally transformed/quantized residual to the inverse quantizer/inverse
orthogonal transformer 206.
[0046] When the prediction mode PredMode of the target coding block is the inter prediction
(MODE_INTER) and the motion vector predictor mode, the inter prediction unit 203 uses
the coding information of the already decoded picture signal stored in the coding
information storage memory 205 to derive a plurality of motion vector predictor candidates.
The inter prediction unit 203 then registers the plurality of derived motion vector
predictor candidates to a motion vector predictor candidate list described below.
The inter prediction unit 203 selects a motion vector predictor corresponding to the
motion vector predictor index to be decoded and supplied by the bit strings decoding
unit 201 from among the plurality of motion vector predictor candidates registered
in the motion vector predictor candidate list. The inter prediction unit 203 then
calculates a motion vector on the basis of the motion vector difference decoded by
the bit strings decoding unit 201 and the selected motion vector predictor, and stores
the calculated motion vector in the coding information storage memory 205 together
with other coding information. Here, the coding information of the coding block to
be supplied and stored includes the prediction mode PredMode, flags predFlagL0[xP][yP]
and predFlagL1[xP][yP] indicating whether to use L0-prediction and L1-prediction,
reference indexes refIdxL0[xP][yP] and refIdxL1[xP][yP] of L0 and L1, and motion vectors
mvL0[xP][yP] and mvL1[xP][yP] of L0 and L1, or the like. Here, xP and yP are indexes
indicating the position of the upper left sample of the coding block within the picture.
In a case where the prediction mode PredMode is inter prediction (MODE_INTER) and
the inter prediction mode is L0-prediction (Pred_L0), the flag predFlagL0 indicating
whether to use L0-prediction is set to 1 and the flag predFlagL1 indicating whether
to use L1-prediction is set to 0. In a case where the inter prediction mode is L1-prediction
(Pred_L1), the flag predFlagL0 indicating whether to use L0-prediction is set to 0 and
the flag predFlagL1 indicating whether to use L1-prediction is set to 1. In a case where
the inter prediction mode is bi-prediction (Pred_BI), both the flag predFlagL0 indicating
whether to use L0-prediction and the flag predFlagL1 indicating whether to use L1-prediction
are set to 1. Furthermore, when the prediction mode PredMode of the target coding
block is inter prediction (MODE_INTER) and the mode is the merge mode, a merging candidate
is derived. Using the coding information of the already-decoded coding block stored
in the coding information storage memory 205, a plurality of merging candidates is
derived and registered in a merging candidate list described below. Subsequently,
a merging candidate corresponding to the merge index that is decoded by the bit strings
decoding unit 201 and supplied is selected from among the plurality of merging candidates
registered in the merging candidate list, and then, inter prediction information such
as flags predFlagL0[xP][yP] and predFlagL1[xP][yP] indicating whether to use the L0-prediction
and L1-prediction of the selected merging candidate, reference indexes refIdxL0[xP][yP]
and refIdxL1[xP][yP] of L0 and L1, and motion vectors mvL0[xP][yP] and mvL1[xP][yP]
of L0 and L1 are to be stored in the coding information storage memory 205. Here,
xP and yP are indexes indicating the position of the upper left sample of the coding
block within the picture. Detailed configuration and operation of the inter prediction
unit 203 will be described below.
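The decoder-side reconstruction described in paragraph [0046] reduces to the two small operations sketched below (illustrative only): the motion vector is the sum of the selected motion vector predictor and the decoded motion vector difference, and predFlagL0/predFlagL1 follow the inter prediction mode.

    def reconstruct_mv(mvp, mvd):
        # mvLX = mvpLX + mvdLX, component-wise.
        return (mvp[0] + mvd[0], mvp[1] + mvd[1])

    def prediction_flags(inter_pred_mode):
        # inter_pred_mode is one of "Pred_L0", "Pred_L1", "Pred_BI".
        pred_flag_l0 = 1 if inter_pred_mode in ("Pred_L0", "Pred_BI") else 0
        pred_flag_l1 = 1 if inter_pred_mode in ("Pred_L1", "Pred_BI") else 0
        return pred_flag_l0, pred_flag_l1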
[0047] The intra prediction unit 204 performs intra prediction when the prediction mode
PredMode of the target coding block is intra prediction (MODE_INTRA). The coding information
decoded by the bit strings decoding unit 201 includes an intra prediction mode. The
intra prediction unit 204 generates a predicted picture signal by intra prediction
from the decoded picture signal stored in the decoded picture memory 208 in accordance
with the intra prediction mode included in the coding information decoded by the bit
strings decoding unit 201. The intra prediction unit 204 then supplies the generated
predicted picture signal to the decoded picture signal superimposer 207. The intra
prediction unit 204 corresponds to the intra prediction unit 103 of the picture coding
device 100, and thus performs the processing similar to the processing of the intra
prediction unit 103.
[0048] The inverse quantizer/inverse orthogonal transformer 206 performs inverse quantization
and inverse orthogonal transform on the orthogonally transformed/quantized residual decoded
by the bit strings decoding unit 201, and thereby obtains an inversely quantized and
inversely orthogonally transformed residual.
[0049] The decoded picture signal superimposer 207 superimposes the predicted picture signal
inter-predicted by the inter prediction unit 203 or the predicted picture signal intra-predicted
by the intra prediction unit 204 with the residual that has been inversely quantized and
inversely orthogonally transformed by the inverse quantizer/inverse orthogonal
transformer 206, thereby reconstructing the decoded picture signal. The decoded picture
signal superimposer 207 then stores the reconstructed decoded picture signal
in the decoded picture memory 208. When storing the decoded picture in the decoded
picture memory 208, the decoded picture signal superimposer 207 may perform filtering
processing on the decoded picture to reduce block distortion or the like due to coding,
and may thereafter store the decoded picture in the decoded picture memory 208.
[0050] Next, operation of the block split unit 101 in the picture coding device 100 will
be described. Fig. 3 is a flowchart illustrating operation of splitting a picture
into tree blocks and further splitting each of the tree blocks. First, an input picture
is split into tree blocks of a predetermined size (step S1001). Each of the tree blocks
is scanned in a predetermined order, that is, in a raster scan order (step S1002),
and a target tree block is internally split (step S1003).
[0051] Fig. 7 is a flowchart illustrating detailed operation of the split process in step
S1003. First, it is determined whether to split the target block into four (step S1101).
[0052] In a case where it is determined that the target block is to be split into four,
the target block will be split into four (step S1102). Each of blocks obtained by
splitting the target block is scanned in the Z-scan order, that is, in the order of
upper left, upper right, lower left, and lower right (step S1103). Fig. 5 illustrates
an example of the Z-scan order, and 601 in Fig. 6A illustrates an example in which
the target block is split into four. Numbers 0 to 3 of 601 in Fig. 6A indicate the
order of processing. Subsequently, the split process of Fig. 7 is recursively executed
for each of blocks split in step S1101 (step S1104).
[0053] In a case where it is determined that the target block is not to be split into four,
the target block will be split into two or three, namely, binary-ternary split (step
S1105).
[0054] Fig. 8 is a flowchart illustrating detailed operation of the binary-ternary split
process in step S1105. First, it is determined whether binary-ternary split is to be
performed on the target block, that is, whether either binary split or ternary split
is to be performed (step S1201).
[0055] In a case where it is not determined that binary-ternary split is to be performed
on the target block, that is, in a case where it is determined not to split the target
block, the split is finished (step S1211). That is, further recursive split process
is not to be performed on the block that has been split by the recursive split process.
[0056] In a case where it is determined that binary-ternary split is going to be performed
on the target block, it is further determined whether to split the target block into
two (step S1202).
[0057] In a case where it is determined that the target block is to be split into two, it
is further determined whether to split the target block in upper-lower (vertical)
direction (step S1203), and then based on the result, the target block will be binary
split in upper-lower (vertical) direction (step S1204), or the target block will be
binary split in left-right (horizontal) direction (step S1205). As a result of step
S1204, the target block is binary split in upper-lower direction (vertical direction)
as illustrated in 602 of Fig. 6B. As a result of step S1205, the target block is binary
split in left-right direction (horizontal direction) as illustrated in 604 of Fig. 6D.
[0058] In step S1202, in a case where it is not determined that the target block is to be
split into two, that is, in a case where it is determined that the target block is
to be split into three, it is further determined whether to split the target block
into three as upper, middle, lower portions (vertical direction) (step S1206). Based
on the result, the target block is split into three as either upper, middle and lower
portions (vertical direction) (step S1207), or left, middle, and right portions (horizontal
direction) (step S1208). As a result of step S1207, the target block is split into
three as upper, middle, and lower portions (vertical direction) as illustrated in
603 of Fig. 6C. As a result of step S1208, the target block is split into three as
left, middle, and right portions (horizontal direction) as illustrated in 605 of Fig. 6E.
[0059] After execution of one of steps S1204, S1205, S1207, or S1208, each of blocks obtained
by splitting the target block is scanned in order from left to right and from top
to bottom (step S1209). The numbers 0 to 2 of 602 to 605 in Figs. 6B to 6E indicate
the order of processing. For each of split blocks, the binary-ternary split process
in Fig. 8 is recursively executed (step S1210).
[0060] In the recursive block split described here, whether splitting is permitted may be limited
on the basis of the number of splits, the size of the target block, or the like. The
information that restricts whether splitting is permitted may be realized in a configuration
in which no information is transmitted because of a preliminary agreement between
the coding device and the decoding device, or in a configuration in which the coding
device determines the information that restricts splitting and records the information
in the bit string, thereby transmitting the information to the decoding device.
[0061] When a certain block is split, a block before split is referred to as a parent block,
and each of blocks after split is referred to as a child block.
[0062] Next, operation of the block split unit 202 in the picture decoding device 200 will
be described. The block split unit 202 splits a tree block using a processing procedure
similar to that of the block split unit 101 of the picture coding device 100. Note
that, while the block split unit 101 of the picture coding device 100 determines an
optimal block split shape by applying an optimization method such as estimation of
an optimal shape by picture recognition or rate-distortion optimization, the block
split unit 202 of the picture decoding device 200 determines the block split shape
by decoding the block split information recorded in the bit string.
[0063] Fig. 9 illustrates syntax (bit string syntax rules) related to block split according
to the first embodiment. coding_quadtree() represents the syntax for quad split process
of the block. multi_type_tree() represents the syntax for the process of splitting
the block into two or three. qt_split is a flag indicating whether to split a block
into four. In the case of splitting the block into four, the setting would be qt_split
= 1. In the case of not splitting the block into four, the setting would be qt_split
= 0. In the case of splitting the block into four (qt_split = 1), quad split process
will be performed recursively on each of blocks split into four (coding_quadtree (0),
coding_quadtree (1), coding_quadtree (2), and coding_quadtree (3), in which arguments
0 to 3 correspond to numbers of 601 in Fig. 6A). In a case where the quad split is
not to be performed (qt_split = 0), the subsequent split is determined according to
multi_type_tree(). mtt_split is a flag indicating whether to perform further split.
In the case where further split is to be performed (mtt_split = 1), mtt_split_vertical,
which is a flag indicating whether to split in the vertical or horizontal direction, and
mtt_split_binary, which is a flag that determines whether to split the block into two or
three, are transmitted. mtt_split_vertical = 1 indicates
split in the vertical direction, and mtt_split_vertical = 0 indicates split in the
horizontal direction. mtt_split_binary = 1 indicates that the block is binary split,
and mtt_split_binary = 0 indicates that the block is ternary split. In a case where
the block is to be binary split (mtt_split_binary = 1), the split process is performed
recursively on each of the two split blocks (multi_type_tree (0) and multi_type_tree
(1), in which arguments 0 and 1 correspond to the numbers in 602 of Fig. 6B or 604 of Fig. 6D).
In the case where the block is to be ternary split (mtt_split_binary = 0), the split
process is performed recursively on each of the three split blocks (multi_type_tree
(0), multi_type_tree (1), and multi_type_tree (2), in which arguments 0 to 2 correspond
to the numbers in 603 of Fig. 6C or 605 of Fig. 6E). Recursively calling multi_type_tree until mtt_split
= 0 will achieve hierarchical block split.
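The recursive structure of coding_quadtree() and multi_type_tree() in Fig. 9 can be sketched as follows; the reader object and read_flag() are hypothetical stand-ins for decoding flags from the bit string.

    def coding_quadtree(r, depth=0):
        if r.read_flag("qt_split"):                  # qt_split = 1: split into four
            return [coding_quadtree(r, depth + 1) for _ in range(4)]
        return multi_type_tree(r, depth)             # qt_split = 0: binary-ternary split

    def multi_type_tree(r, depth):
        if not r.read_flag("mtt_split"):             # mtt_split = 0: no further split
            return "coding block"
        vertical = r.read_flag("mtt_split_vertical") # 1: vertical, 0: horizontal
        binary = r.read_flag("mtt_split_binary")     # 1: binary split, 0: ternary split
        n_children = 2 if binary else 3
        return {"vertical": vertical,
                "children": [multi_type_tree(r, depth + 1) for _ in range(n_children)]}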
Inter prediction
[0064] The inter prediction method according to an embodiment is implemented in the inter
prediction unit 102 of the picture coding device in Fig. 1 and the inter prediction
unit 203 of the picture decoding device in Fig. 2.
[0065] An inter prediction method according to an embodiment will be described with reference
to the drawings. The inter prediction method is implemented in both the coding and
decoding processes in units of coding blocks.
Inter prediction unit 102 on the coding side
[0066] Fig. 16 is a diagram illustrating a detailed configuration of the inter prediction
unit 102 of the picture coding device in Fig. 1. The normal motion vector predictor
mode derivation unit 301 derives a plurality of normal motion vector predictor candidates,
selects a motion vector predictor, and calculates a motion vector difference between
the selected motion vector predictor and the detected motion vector. The detected
inter prediction mode, reference index, motion vector, and calculated motion vector
difference will be inter prediction information of the normal motion vector predictor
mode. This inter prediction information is supplied to the inter prediction mode determiner
305. Detailed configuration and processing of the normal motion vector predictor mode
derivation unit 301 will be described below.
[0067] The normal merge mode derivation unit 302 derives a plurality of normal merging candidates,
selects a normal merging candidate, and obtains inter prediction information of the
normal merge mode. This inter prediction information is supplied to the inter prediction
mode determiner 305. Detailed configuration and processing of the normal merge mode
derivation unit 302 will be described below.
[0068] The subblock motion vector predictor mode derivation unit 303 derives a plurality
of subblock motion vector predictor candidates, selects a subblock motion vector predictor,
and calculates a motion vector difference between the selected subblock motion vector
predictor and the detected motion vector. The detected inter prediction mode, reference
index, motion vector, and calculated motion vector difference will be inter prediction
information of the subblock motion vector predictor mode. This inter prediction information
is supplied to the inter prediction mode determiner 305.
[0069] The subblock merge mode derivation unit 304 derives a plurality of subblock merging
candidates, selects a subblock merging candidate, and obtains inter prediction information
of the subblock merge mode. This inter prediction information is supplied to the inter
prediction mode determiner 305.
[0070] The inter prediction mode determiner 305 determines inter prediction information
on the basis of the inter prediction information supplied from the normal motion vector
predictor mode derivation unit 301, the normal merge mode derivation unit 302, the
subblock motion vector predictor mode derivation unit 303, and the subblock merge
mode derivation unit 304. Inter prediction information according to the determination
result is supplied from the inter prediction mode determiner 305 to a motion compensation
prediction unit 306.
[0071] The motion compensation prediction unit 306 performs inter prediction on the reference
picture signal stored in the decoded picture memory 104 on the basis of the determined
inter prediction information. Detailed configuration and processing of the motion
compensation prediction unit 306 will be described below.
Inter prediction unit 203 on decoding side
[0072] Fig. 22 is a diagram illustrating a detailed configuration of the inter prediction
unit 203 of the picture decoding device in Fig. 2.
[0073] The normal motion vector predictor mode derivation unit 401 derives a plurality of
normal motion vector predictor candidates, selects a motion vector predictor, calculates
an added value obtained by adding the selected motion vector predictor and the decoded
motion vector difference, and sets this added value as a motion vector. The decoded
inter prediction mode, reference index, and motion vector serve as the inter prediction information
of the normal motion vector predictor mode. This inter prediction information is supplied
to the motion compensation prediction unit 406 via the switch 408. Detailed configuration
and processing of the normal motion vector predictor mode derivation unit 401 will
be described below.
[0074] The normal merge mode derivation unit 402 derives a plurality of normal merging candidates,
selects a normal merging candidate, and obtains inter prediction information of the
normal merge mode. This inter prediction information is supplied to the motion compensation
prediction unit 406 via the switch 408. Detailed configuration and processing of the
normal merge mode derivation unit 402 will be described below.
[0075] A subblock motion vector predictor mode derivation unit 403 derives a plurality of
subblock motion vector predictor candidates, selects a subblock motion vector predictor,
and calculates an added value obtained by adding the selected subblock motion vector
predictor and the decoded motion vector difference, and sets this added value as a
motion vector. The decoded inter prediction mode, reference index, and motion vector
will be the inter prediction information of the subblock motion vector predictor mode.
This inter prediction information is supplied to the motion compensation prediction
unit 406 via the switch 408.
[0076] A subblock merge mode derivation unit 404 derives a plurality of subblock merging
candidates, selects a subblock merging candidate, and obtains inter prediction information
of the subblock merge mode. This inter prediction information is supplied to the motion
compensation prediction unit 406 via the switch 408.
[0077] The motion compensation prediction unit 406 performs inter prediction on the reference
picture signal stored in the decoded picture memory 208 on the basis of the determined
inter prediction information. Detailed configuration and processing of the motion
compensation prediction unit 406 are similar to the motion compensation prediction
unit 306 on the coding side.
Normal motion vector predictor mode derivation unit (Normal AMVP)
[0078] The normal motion vector predictor mode derivation unit 301 of Fig. 17 includes a
spatial motion vector predictor candidate derivation unit 321, a temporal motion vector
predictor candidate derivation unit 322, a history-based motion vector predictor candidate
derivation unit 323, a motion vector predictor candidate replenisher 325, a normal
motion vector detector 326, a motion vector predictor candidate selector 327, and
a motion vector subtractor 328.
[0079] The normal motion vector predictor mode derivation unit 401 in Fig. 23 includes a
spatial motion vector predictor candidate derivation unit 421, a temporal motion vector
predictor candidate derivation unit 422, a history-based motion vector predictor candidate
derivation unit 423, a motion vector predictor candidate replenisher 425, a motion
vector predictor candidate selector 426, and a motion vector adder 427.
[0080] Processing procedures of the normal motion vector predictor mode derivation unit
301 on the coding side and the normal motion vector predictor mode derivation unit
401 on the decoding side will be described with reference to the flowcharts in Figs.
19 and 25, respectively. Fig. 19 is a flowchart illustrating a normal motion vector
predictor mode derivation processing procedure performed by the normal motion vector
predictor mode derivation unit 301 on the coding side. Fig. 25 is a flowchart illustrating
a normal motion vector predictor mode derivation processing procedure performed by
the normal motion vector predictor mode derivation unit 401 on the decoding side.
Normal motion vector predictor mode derivation unit (Normal AMVP): coding side
[0081] The normal motion vector predictor mode derivation processing procedure on the coding
side will be described with reference to Fig. 19. In the description of the processing
procedure in Fig. 19, the word "normal" illustrated in Fig. 19 will be omitted in
some cases.
[0082] First, the normal motion vector detector 326 detects a normal motion vector for each
of inter prediction modes and reference indexes (step S100 in Fig. 19).
[0083] Subsequently, a motion vector difference of a motion vector used in inter prediction
in the normal motion vector predictor mode is calculated for each of L0 and L1 (steps
S101 to S106 in Fig. 19) in the spatial motion vector predictor candidate derivation
unit 321, the temporal motion vector predictor candidate derivation unit 322, the
history-based motion vector predictor candidate derivation unit 323, the motion vector
predictor candidate replenisher 325, the motion vector predictor candidate selector
327, and the motion vector subtractor 328. Specifically, in a case where the prediction
mode PredMode of the target block is inter prediction (MODE_INTER) and the inter prediction
mode is L0-prediction (Pred_L0), the motion vector predictor candidate list mvpListL0
of L0 is calculated. Subsequently, the motion vector predictor mvpL0 is selected,
and then, a motion vector difference mvdL0 of the motion vector mvL0 of L0 is calculated.
In a case where the inter prediction mode of the target block is L1-prediction (Pred_L1),
a motion vector predictor candidate list mvpListL1 of L1 is calculated. Subsequently,
a motion vector predictor mvpL1 is selected, and then a motion vector difference mvdL1
of a motion vector mvL1 of L1 is calculated. In a case where the inter prediction
mode of the target block is bi-prediction (Pred_BI), L0-prediction and L1-prediction
are both performed. A motion vector predictor candidate list mvpListL0 of L0 is calculated
and a motion vector predictor mvpL0 of L0 is selected, and then a motion vector difference
mvdL0 of the motion vector mvL0 of L0 is calculated. Along with this calculation,
a motion vector predictor candidate list mvpListL1 of L1 is calculated and a motion
vector predictor mvpL1 of L1 is selected, and then, a motion vector difference mvdL1
of a motion vector mvL1 of L1 is calculated.
[0084] The motion vector difference calculation process is performed for each of L0 and
L1, in which the calculation process is a common process in both L0 and L1. Accordingly,
L0 and L1 will be denoted as LX as a common procedure. In the process of calculating
the motion vector difference of L0, X of LX is set to 0, while in the process of calculating
the motion vector difference of L1, X of LX is set to 1. Additionally, in a case where
information on the other list is referred to instead of one LX during the calculation
process of the motion vector difference of the one LX, the other list will be represented
as LY.
[0085] In a case where a motion vector mvLX of LX is used (step S102 in Fig. 19: YES), motion
vector predictor candidates of LX are calculated, thereby constructing a motion vector
predictor candidate list mvpListLX of LX (step S103 in Fig. 19). In the normal motion
vector predictor mode derivation unit 301, the spatial motion vector predictor candidate
derivation unit 321, the temporal motion vector predictor candidate derivation unit
322, the history-based motion vector predictor candidate derivation unit 323, and
the motion vector predictor candidate replenisher 325 derive a plurality of motion
vector predictor candidates and thereby construct the motion vector predictor candidate
list mvpListLX. The detailed processing procedure of step S103 in Fig. 19 will be
described below using the flowchart in Fig. 20.
[0086] Subsequently, the motion vector predictor candidate selector 327 selects a motion
vector predictor mvpLX of LX from the motion vector predictor candidate list mvpListLX
of LX (step S104 in Fig. 19). Here, one element (the i-th element counted from 0)
in the motion vector predictor candidate list mvpListLX is represented as mvpListLX[i].
For each motion vector predictor candidate mvpListLX[i] stored in the motion vector
predictor candidate list mvpListLX, the motion vector difference between the motion
vector mvLX and that candidate is calculated, and the code amount required to code
this motion vector difference is calculated for each of the elements (motion vector
predictor candidates) of the motion vector predictor candidate list mvpListLX. Subsequently,
the motion vector predictor candidate mvpListLX[i] that minimizes the code amount
among the individual elements registered in the motion vector predictor candidate
list mvpListLX is selected as the motion vector predictor mvpLX, and its index i is
obtained. In a case where a plurality of motion vector predictor candidates yields
the minimum generated code amount in the motion vector predictor candidate list mvpListLX,
the motion vector predictor candidate mvpListLX[i] having the smallest index i in
the motion vector predictor candidate list mvpListLX is selected as the optimal motion
vector predictor mvpLX, and its index i is obtained.
[0087] Subsequently, the motion vector subtractor 328 subtracts the selected motion vector
predictor mvpLX of LX from the motion vector mvLX of LX and calculates a motion vector
difference mvdLX of LX as in: mvdLX=mvLX-mvpLX (step S105 in Fig. 19).
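As a non-normative illustration of steps S104 and S105, the following Python sketch
selects the motion vector predictor that minimizes an approximate code amount of the
motion vector difference, breaks ties in favor of the smaller index, and then computes
mvdLX = mvLX - mvpLX. The cost function golomb_bits and the representation of motion
vectors as (x, y) tuples are assumptions introduced only for this illustration.

    import math

    def golomb_bits(v):
        # Approximate exp-Golomb code length of a signed component (illustrative cost model).
        u = 2 * abs(v) - (1 if v > 0 else 0)   # signed-to-unsigned mapping
        return 2 * int(math.floor(math.log2(u + 1))) + 1

    def select_mvp_and_mvd(mvpListLX, mvLX):
        # Step S104: select mvpLX minimizing the motion vector difference code amount
        # (ties broken by the smaller index i), then step S105: mvdLX = mvLX - mvpLX.
        best_i, best_cost = 0, None
        for i, mvp in enumerate(mvpListLX):
            mvd = (mvLX[0] - mvp[0], mvLX[1] - mvp[1])
            cost = golomb_bits(mvd[0]) + golomb_bits(mvd[1])
            if best_cost is None or cost < best_cost:   # strict "<" keeps the smaller index on ties
                best_i, best_cost = i, cost
        mvpLX = mvpListLX[best_i]
        mvdLX = (mvLX[0] - mvpLX[0], mvLX[1] - mvpLX[1])
        return best_i, mvpLX, mvdLX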
Normal motion vector predictor mode derivation unit (normal AMVP): decoding side
[0088] Next, a normal motion vector predictor mode processing procedure on the decoding
side will be described with reference to Fig. 25. On the decoding side, the spatial
motion vector predictor candidate derivation unit 421, the temporal motion vector
predictor candidate derivation unit 422, the history-based motion vector predictor
candidate derivation unit 423, and the motion vector predictor candidate replenisher
425 individually calculate motion vectors used in the inter prediction of the normal
motion vector predictor mode for each of L0 and L1 (steps S201 to S206 in Fig. 25).
Specifically, in a case where the prediction mode PredMode of the target block is
inter prediction (MODE_INTER) and the inter prediction mode of the target block is
L0-prediction (Pred_L0), the motion vector predictor candidate list mvpListL0 of L0
is calculated. Subsequently, the motion vector predictor mvpL0 is selected, and then,
the motion vector mvL0 of L0 is calculated. In a case where the inter prediction mode
of the target block is L1-prediction (Pred_L1), the L1 motion vector predictor candidate
list mvpListL1 is calculated. Subsequently, the motion vector predictor mvpL1 is selected,
and the L1 motion vector mvL1 is calculated. In a case where the inter prediction
mode of the target block is bi-prediction (Pred_BI), L0-prediction and L1-prediction
are both performed. A motion vector predictor candidate list mvpListL0 of L0 is calculated
and a motion vector predictor mvpL0 of L0 is selected, and then the motion vector
mvL0 of L0 is calculated. Along with this calculation, a motion vector predictor candidate
list mvpListL1 of L1 is calculated and a motion vector predictor mvpL1 of L1 is selected,
and then, the motion vector mvL1 of L1 is calculated.
[0089] Similarly to the coding side, the decoding side performs the motion vector calculation
processing for each of L0 and L1, in which the processing is a common process in both
L0 and L1. Accordingly, L0 and L1 will be denoted as LX as a common procedure. LX
represents an inter prediction mode used for inter prediction of a target coding block.
X is 0 in the process of calculating the motion vector of L0, and X is 1 in the process
of calculating the motion vector of L1. Additionally, in a case where information
on the other reference list is referred to instead of the same reference list as the
LX to be calculated during the calculation process of the motion vector of the LX,
the other reference list will be represented as LY.
[0090] In a case where the motion vector mvLX of LX is used (step S202 in Fig. 25: YES),
motion vector predictor candidates of LX are calculated to construct a motion vector
predictor candidate list mvpListLX of LX (step S203 in Fig. 25). In the normal motion
vector predictor mode derivation unit 401, the spatial motion vector predictor candidate
derivation unit 421, the temporal motion vector predictor candidate derivation unit
422, the history-based motion vector predictor candidate derivation unit 423, and
the motion vector predictor candidate replenisher 425 calculate a plurality of motion
vector predictor candidates and thereby construct the motion vector predictor candidate
list mvpListLX. Detailed processing procedure of step S203 in Fig. 25 will be described
below using the flowchart in Fig. 20.
[0091] Subsequently, the motion vector predictor candidate selector 426 extracts a motion
vector predictor candidate mvpListLX[mvpIdxLX] corresponding to the motion vector
predictor index mvpIdxLX decoded and supplied by the bit strings decoding unit 201
from the motion vector predictor candidate list mvpListLX, as the selected motion
vector predictor mvpLX (step S204 in Fig. 25).
[0092] Subsequently, the motion vector adder 427 adds the motion vector difference mvdLX
of LX decoded and supplied by the bit strings decoding unit 201 and the motion vector
predictor mvpLX of LX and calculates a motion vector mvLX of LX as in: mvLX=mvpLX+mvdLX
(step S205 in Fig. 25).
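The corresponding decoding-side operations of steps S204 and S205 can be sketched in
a few lines of Python; here mvpIdxLX and mvdLX are assumed to have already been decoded
by the bit strings decoding unit 201, and motion vectors are again represented as
(x, y) tuples for illustration only.

    def decode_motion_vector(mvpListLX, mvpIdxLX, mvdLX):
        # Step S204: pick the candidate indicated by the decoded motion vector predictor index.
        mvpLX = mvpListLX[mvpIdxLX]
        # Step S205: add the decoded motion vector difference (mvLX = mvpLX + mvdLX).
        mvLX = (mvpLX[0] + mvdLX[0], mvpLX[1] + mvdLX[1])
        return mvLX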
Normal motion vector predictor mode derivation unit (normal AMVP): motion vector prediction
method
[0093] Fig. 20 is a flowchart illustrating a processing procedure of the normal motion vector
predictor mode derivation process having a function common to the normal motion vector
predictor mode derivation unit 301 of the picture coding device and the normal motion
vector predictor mode derivation unit 401 of the picture decoding device according
to the embodiment of the present invention.
[0094] Each of the normal motion vector predictor mode derivation unit 301 and the normal
motion vector predictor mode derivation unit 401 includes a motion vector predictor
candidate list mvpListLX. The motion vector predictor candidate list mvpListLX has
a list structure, and includes a storage region that stores, as elements, a motion
vector predictor index indicating a location in the motion vector predictor candidate
list and a motion vector predictor candidate corresponding to the index. The motion
vector predictor index is numbered starting from 0, and motion vector predictor candidates
are stored in the storage region of the motion vector predictor candidate list
mvpListLX. In the present embodiment, it is assumed that the motion vector predictor
candidate list mvpListLX can register at least two motion vector predictor candidates
(as inter prediction information). Further, a variable numCurrMvpCand indicating the
number of motion vector predictor candidates registered in the motion vector predictor
candidate list mvpListLX is set to 0.
[0095] Each of the spatial motion vector predictor candidate derivation units 321 and 421
derives a motion vector predictor candidate from the neighboring blocks on the left
side. This process derives a motion vector predictor mvLXA with reference to inter
prediction information of the left neighboring block (A0 or A1 in Fig. 11), namely,
a flag indicating whether a motion vector predictor candidate is usable, a motion
vector, a reference index, or the like, and adds the derived mvLXA to the motion vector
predictor candidate list mvpListLX (step S301 in Fig. 20). Note that X is 0 in L0-prediction
and X is 1 in L1-prediction (the same applies hereinafter). Subsequently, the spatial
motion vector predictor candidate derivation units 321 and 421 derive a motion vector
predictor candidate from an upper neighboring block. This process derives a motion
vector predictor mvLXB with reference to inter prediction information of the upper
neighboring block (B0, B1, or B2 in Fig. 11), namely, a flag indicating whether a
motion vector predictor candidate is usable, a motion vector, a reference index, or
the like. When the derived mvLXA and the derived mvLXB are not equal, mvLXB is added
to the motion vector predictor candidate list mvpListLX (step S302 in Fig. 20). The
processes in steps S301 and S302 in Fig. 20 are a common process except that the positions
and numbers of the reference neighboring blocks are different; these processes derive
a flag availableFlagLXN indicating whether a motion vector predictor candidate of
a coding block is usable, a motion vector mvLXN, and a reference index refIdxN (N
indicates A or B; the same applies hereinafter).
[0096] Subsequently, each of the temporal motion vector predictor candidate derivation units
322 and 422 derives a motion vector predictor candidate from a block in a picture
having a temporal difference from the target picture. This process derives a flag
availableFlagLXCol indicating whether a motion vector predictor candidate of a coding
block of a picture having a temporal difference is usable, and a motion vector mvLXCol,
a reference index refIdxCol, and a reference list listCol, and adds mvLXCol to the
motion vector predictor candidate list mvpListLX (step S303 in Fig. 20).
[0097] Note that it is assumed that the processes of the temporal motion vector predictor
candidate derivation units 322 and 422 can be omitted in units of a sequence (SPS),
a picture (PPS), or a slice.
[0098] Subsequently, the history-based motion vector predictor candidate derivation units
323 and 423 add the history-based motion vector predictor candidates registered in
a history-based motion vector predictor candidate list HmvpCandList to the motion
vector predictor candidate list mvpListLX (step S304 in Fig. 20). Details of the
registration processing procedure in step S304 will be described below with reference
to the flowchart in Fig. 29.
[0099] Subsequently, the motion vector predictor candidate replenishers 325 and 425 add
a motion vector predictor candidate having a predetermined value such as (0, 0) until
the motion vector predictor candidate list mvpListLX is filled to its capacity (step
S305 in Fig. 20).
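The overall order of steps S301 to S305 can be summarized by the following Python
sketch. The callables derive_left, derive_above, derive_temporal, and derive_history
are placeholders standing in for the derivation units 321/421, 322/422, and 323/423
described above; their names, and the truncation of the list to two candidates, are
simplifications introduced only for this illustration.

    MAX_MVP_CAND = 2  # the present embodiment assumes at least two candidates can be registered

    def construct_mvpListLX(derive_left, derive_above, derive_temporal, derive_history):
        mvpListLX = []
        mvLXA = derive_left()                      # step S301: left neighboring blocks
        if mvLXA is not None:
            mvpListLX.append(mvLXA)
        mvLXB = derive_above()                     # step S302: upper neighboring blocks
        if mvLXB is not None and mvLXB != mvLXA:   # added only when different from mvLXA
            mvpListLX.append(mvLXB)
        mvLXCol = derive_temporal()                # step S303: temporal (collocated) candidate
        if mvLXCol is not None:
            mvpListLX.append(mvLXCol)
        derive_history(mvpListLX)                  # step S304: history-based candidates (Fig. 29)
        while len(mvpListLX) < MAX_MVP_CAND:       # step S305: replenish with (0, 0) until full
            mvpListLX.append((0, 0))
        return mvpListLX[:MAX_MVP_CAND]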
Normal merge mode derivation unit (normal merge)
[0100] The normal merge mode derivation unit 302 in Fig. 18 includes a spatial merging candidate
derivation unit 341, a temporal merging candidate derivation unit 342, an average
merging candidate derivation unit 344, a history-based merging candidate derivation
unit 345, a merging candidate replenisher 346, and a merging candidate selector 347.
[0101] The normal merge mode derivation unit 402 in Fig. 24 includes a spatial merging candidate
derivation unit 441, a temporal merging candidate derivation unit 442, an average
merging candidate derivation unit 444, a history-based merging candidate derivation
unit 445, a merging candidate replenisher 446, and a merging candidate selector 447.
[0102] Fig. 21 is a flowchart illustrating a procedure of a normal merge mode derivation
process having a function common to the normal merge mode derivation unit 302 of the
picture coding device and the normal merge mode derivation unit 402 of the picture
decoding device according to the embodiment of the present invention.
[0103] Hereinafter, various processes will be described step by step. The following description
is a case where the slice type slice_type is B slice unless otherwise specified. However,
the present invention can also be applied to the case of P slice. Note that, in the
case where the slice type slice_type is P slice, only L0-prediction (Pred_L0) is available
as the inter prediction mode, with no L1-prediction (Pred_L1) or bi-prediction (Pred_BI).
Accordingly, it is possible to omit the process related to L1 in this case.
[0104] The normal merge mode derivation unit 302 and the normal merge mode derivation unit
402 include a merging candidate list mergeCandList. The merging candidate list mergeCandList
has a list structure, and includes a storage region that stores, as elements, a merge
index indicating a location in the merging candidate list and a merging candidate
corresponding to the index. The merge index is numbered starting from 0, and the merging
candidates are stored in the storage region of the merging candidate list mergeCandList.
In the subsequent processing, the merging candidate of the merge index i registered
in the merging candidate list mergeCandList will be represented by mergeCandList[i].
In the present embodiment, it is assumed that the merging candidate list mergeCandList
can register at least six merging candidates (as inter prediction information). Furthermore,
a variable numCurrMergeCand indicating the number of merging candidates registered
in the merging candidate list mergeCandList is set to 0.
[0105] The spatial merging candidate derivation unit 341 and the spatial merging candidate
derivation unit 441 derive a spatial merging candidate of each of blocks (B1, A1,
B0, A0, B2 in Fig. 11) in the neighbor of the target block in order of B1, A1, B0,
A0, and B2, from the coding information stored either in the coding information storage
memory 111 of the picture coding device or in the coding information storage memory
205 of the picture decoding device, and then, registers the derived spatial merging
candidates to the merging candidate list mergeCandList (step S401 in Fig. 21). Here,
N indicating one of B1, A1, B0, A0, B2 or the temporal merging candidate Col will
be defined. Items to be derived include a flag availableFlagN indicating whether the
inter prediction information of the block N is usable as a spatial merging candidate,
a reference index refIdxL0N of L0 and a reference index refIdxL1N of L1 of the spatial
merging candidate N, an L0-prediction flag predFlagL0N indicating whether L0-prediction
is to be performed, an L1-prediction flag predFlagL1N indicating whether L1-prediction
is to be performed, a motion vector mvL0N of L0, and a motion vector mvL1N of L1.
However, since the merging candidate in the present embodiment is derived without
reference to the inter prediction information of the block included in the target
coding block, the spatial merging candidate using the inter prediction information
of the block included in the target coding block will not be derived.
[0106] Subsequently, the temporal merging candidate derivation unit 342 and the temporal
merging candidate derivation unit 442 derive temporal merging candidates from pictures
having a temporal difference, and register the derived temporal merging candidates
in a merging candidate list mergeCandList (step S402 in Fig. 21). Items to be derived
include a flag availableFlagCol indicating whether the temporal merging candidate
is usable, an L0-prediction flag predFlagL0Col indicating whether the L0-prediction
of the temporal merging candidate is to be performed, an L1-prediction flag predFlagL1Col
indicating whether the L1-prediction is to be performed, a motion vector mvL0Col
of L0, and a motion vector mvL1Col of L1.
[0107] Note that it is assumed that the processes of the temporal merging candidate derivation
units 342 and 442 can be omitted in units of a sequence (SPS), a picture (PPS), or
a slice.
[0108] Subsequently, the history-based merging candidate derivation unit 345 and the history-based
merging candidate derivation unit 445 register the history-based motion vector predictor
candidates registered in the history-based motion vector predictor candidate list
HmvpCandList, to the merging candidate list mergeCandList (step S403 in Fig. 21).
[0109] In a case where the number of merging candidates numCurrMergeCand registered in the
merging candidate list mergeCandList is smaller than the maximum number of merging
candidates MaxNumMergeCand, the history-based merging candidate is derived with the
number of merging candidates numCurrMergeCand registered in the merging candidate
list mergeCandList being limited to the maximum number of merging candidates MaxNumMergeCand,
and then registered to the merging candidate list mergeCandList.
[0110] Subsequently, the average merging candidate derivation unit 344 and the average merging
candidate derivation unit 444 derive an average merging candidate from the merging
candidate list mergeCandList, and add the derived average merging candidate to the
merging candidate list mergeCandList (step S404 in Fig. 21).
[0111] In a case where the number of merging candidates numCurrMergeCand registered in the
merging candidate list mergeCandList is smaller than the maximum number of merging
candidates MaxNumMergeCand, the average merging candidate is derived with the number
of merging candidates numCurrMergeCand registered in the merging candidate list mergeCandList
being limited to the maximum number of merging candidates MaxNumMergeCand, and then
registered to the merging candidate list mergeCandList.
[0112] Here, the average merging candidate is a new merging candidate including a motion
vector obtained by averaging the motion vectors of the first merging candidate and
the second merging candidate registered in the merging candidate list mergeCandList
for each of L0-prediction and L1-prediction.
[0113] Subsequently, in the merging candidate replenisher 346 and the merging candidate
replenisher 446, in a case where the number of merging candidates numCurrMergeCand
registered in the merging candidate list mergeCandList is smaller than the maximum
number of merging candidates MaxNumMergeCand, an additional merging candidate is derived
with the number of merging candidates numCurrMergeCand registered in the merging candidate
list mergeCandList being limited to the maximum number of merging candidates MaxNumMergeCand,
and then registered to the merging candidate list mergeCandList (step S405 in Fig.
21). In the P slice, a merging candidate having the motion vector of a value (0, 0)
and the prediction mode of L0-prediction (Pred_L0) is added with the maximum number
of merging candidates MaxNumMergeCand as the upper limit. In the B slice, a merging
candidate having the prediction mode of bi-prediction (Pred_BI) and the motion vector
of a value (0, 0) is added. The reference index at the time of addition of a merging
candidate is different from the reference index that has already been added.
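The construction order of steps S401 to S405 for a B slice can be summarized by the
following Python sketch. The arguments spatial, temporal, and history are lists of
already-derived candidates, average_fn stands in for the average merging candidate
derivation of Fig. 39, and the simple membership test and reference index handling
in the replenishing step are simplifications introduced only for this illustration.

    def construct_mergeCandList(spatial, temporal, history, average_fn, MaxNumMergeCand=6):
        mergeCandList = []
        mergeCandList.extend(spatial)                 # step S401: B1, A1, B0, A0, B2
        mergeCandList.extend(temporal)                # step S402: temporal candidate Col
        for cand in history:                          # step S403: history-based candidates
            if len(mergeCandList) >= MaxNumMergeCand:
                break
            if cand not in mergeCandList:             # simplified pruning against existing entries
                mergeCandList.append(cand)
        for cand in average_fn(mergeCandList):        # step S404: average merging candidates
            if len(mergeCandList) >= MaxNumMergeCand:
                break
            mergeCandList.append(cand)
        refIdx = 0
        while len(mergeCandList) < MaxNumMergeCand:   # step S405: replenish with zero vectors
            mergeCandList.append({"mode": "Pred_BI", "mv": (0, 0), "refIdx": refIdx})
            refIdx += 1                               # use a reference index not yet added
        return mergeCandList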
[0114] Subsequently, the merging candidate selector 347 and the merging candidate selector
447 select a merging candidate from among the merging candidates registered in the
merging candidate list mergeCandList. The merging candidate selector 347 on the coding
side calculates the code amount and the distortion amount, and thereby selects a merging
candidate, and then, supplies a merge index indicating the selected merging candidate
and inter prediction information of the merging candidate to the motion compensation
prediction unit 306 via the inter prediction mode determiner 305. In contrast, the
merging candidate selector 447 on the decoding side selects a merging candidate based
on the decoded merge index, and supplies the selected merging candidate to the motion
compensation prediction unit 406.
Updating history-based motion vector predictor candidate list
[0115] Next, a method of initializing and updating the history-based motion vector predictor
candidate list HmvpCandList provided in the coding information storage memory 111
on the coding side and the coding information storage memory 205 on the decoding side
will be described in detail. Fig. 26 is a flowchart illustrating the history-based
motion vector predictor candidate list initialization/update processing procedure.
[0116] In the present embodiment, the history-based motion vector predictor candidate list
HmvpCandList is updated in the coding information storage memory 111 and the coding
information storage memory 205. Alternatively, a history-based motion vector predictor
candidate list updating unit may be provided in the inter prediction unit 102 and
the inter prediction unit 203 to update the history-based motion vector predictor
candidate list HmvpCandList.
[0117] Initial settings of the history-based motion vector predictor candidate list HmvpCandList
are performed at the head of the slice. On the coding side, the history-based motion
vector predictor candidate list HmvpCandList is updated in a case where the normal
motion vector predictor mode or the normal merge mode is selected by the prediction
method determiner 105. On the decoding side, the history-based motion vector predictor
candidate list HmvpCandList is updated in a case where the prediction information
decoded by the bit strings decoding unit 201 is the normal motion vector predictor
mode or the normal merge mode.
[0118] The inter prediction information used at the time of performing the inter prediction
in the normal motion vector predictor mode or the normal merge mode is to be registered
in the history-based motion vector predictor candidate list HmvpCandList, as an inter
prediction information candidate hMvpCand. The inter prediction information candidate
hMvpCand includes the reference index refIdxL0 of L0 and the reference index refIdxL1
of L1, the L0-prediction flag predFlagL0 indicating whether L0-prediction is to be
performed, the L1-prediction flag predFlagL1 indicating whether L1-prediction is to
be performed, the motion vector mvL0 of L0 and the motion vector mvL1 of L1.
[0119] In a case where there is inter prediction information having the same value as the
inter prediction information candidate hMvpCand among the elements (that is, inter
prediction information) registered in the history-based motion vector predictor candidate
list HmvpCandList provided in the coding information storage memory 111 on the coding
side and the coding information storage memory 205 on the decoding side, the element
will be deleted from the history-based motion vector predictor candidate list HmvpCandList.
In contrast, in a case where there is no inter prediction information having the same
value as the inter prediction information candidate hMvpCand, the head element of
the history-based motion vector predictor candidate list HmvpCandList will be deleted,
and the inter prediction information candidate hMvpCand will be added to the end of
the history-based motion vector predictor candidate list HmvpCandList.
[0120] The number of elements of the history-based motion vector predictor candidate list
HmvpCandList provided in the coding information storage memory 111 on the coding side
and the coding information storage memory 205 on the decoding side is set to six in
the present embodiment.
[0121] First, the history-based motion vector predictor candidate list HmvpCandList is initialized
in units of slices (step S2101 in Fig. 26). All the elements of the history-based
motion vector predictor candidate list HmvpCandList are emptied at the head of the
slice, and the number NumHmvpCand (current number of candidates) of history-based
motion vector predictor candidates registered in the history-based motion vector predictor
candidate list HmvpCandList is set to 0.
[0122] Although initialization of the history-based motion vector predictor candidate list
HmvpCandList is to be performed in units of slices (first coding block of a slice),
the initialization may be performed in units of pictures, tiles, or tree block rows.
[0123] Subsequently, the following process of updating the history-based motion vector predictor
candidate list HmvpCandList is repeatedly performed for each of coding blocks in the
slice (steps S2102 to S2107 in Fig. 26).
[0124] First, initial settings are performed in units of coding blocks. A flag identicalCandExist
indicating whether an identical candidate exists is set to a value of FALSE (false),
and a deletion target index removeIdx indicating the deletion target candidate is set
to 0 (step S2103 in Fig. 26).
[0125] It is determined whether there is an inter prediction information candidate hMvpCand
to be registered (step S2104 in Fig. 26). In a case where the prediction method determiner
105 on the coding side determines the normal motion vector predictor mode or the normal
merge mode, or where the bit strings decoding unit 201 on the decoding side performs
decoding as the normal motion vector predictor mode or the normal merge mode, the
corresponding inter prediction information is set as an inter prediction information
candidate hMvpCand to be registered. In a case where the prediction method determiner
105 on the coding side determines the intra prediction mode, the subblock motion vector
predictor mode or the subblock merge mode, or in a case where the bit strings decoding
unit 201 on the decoding side performs decoding as the intra prediction mode, the
subblock motion vector predictor mode, or the subblock merge mode, update process
of the history-based motion vector predictor candidate list HmvpCandList will not
be performed, and there will be no inter prediction information candidate hMvpCand
to be registered. In a case where there is no inter prediction information candidate
hMvpCand to be registered, steps S2105 to S2106 will be skipped (step S2104 in Fig.
26: NO). In a case where there is an inter prediction information candidate hMvpCand
to be registered, the process of step S2105 and later will be performed (step S2104
in Fig. 26: YES).
[0126] Subsequently, it is determined whether individual elements of the history-based motion
vector predictor candidate list HmvpCandList include an element (inter prediction
information) having the same value as the inter prediction information candidate hMvpCand
to be registered, that is, whether the identical element exists (step S2105 in Fig.
26). Fig. 27 is a flowchart of the identical element confirmation processing procedure.
In a case where the value of the number of history-based motion vector predictor candidates
NumHmvpCand is 0 (step S2121: NO in Fig. 27), the history-based motion vector predictor
candidate list HmvpCandList is empty, and the identical candidate does not exist.
Accordingly, steps S2122 to S2125 in Fig. 27 will be skipped, finishing the identical
element confirmation processing procedure. In a case where the value of the number
NumHmvpCand of the history-based motion vector predictor candidates is greater than
0 (YES in step S2121 in Fig. 27), the process of step S2123 will be repeated from
a history-based motion vector predictor index hMvpIdx of 0 to NumHmvpCand-1 (steps
S2122 to S2125 in Fig. 27). First, comparison is made as to whether the hMvpIdx-th
element HmvpCandList[hMvpIdx] counted from 0 in the history-based motion vector predictor
candidate list is identical to the inter prediction information candidate hMvpCand
(step S2123 in Fig. 27). In a case where they are identical (step S2123 in Fig. 27:
YES), the flag identicalCandExist indicating whether the identical candidate exists
is set to a value of TRUE, and the deletion target index removeIdx indicating the
position of the element to be deleted is set to a current value of the history-based
motion vector predictor index hMvpIdx, and the identical element confirmation processing
will be finished. In a case where they are not identical (step S2123 in Fig. 27: NO),
hMvpIdx is incremented by one. In a case where the history-based motion vector predictor
index hMvpIdx is smaller than or equal to NumHmvpCand-1, the processing of step S2123
and later is performed.
[0127] Returning to the flowchart of Fig. 26, the process of shifting and adding elements
of the history-based motion vector predictor candidate list HmvpCandList is performed
(step S2106 in Fig. 26). Fig. 28 is a flowchart of the element shift/addition processing
procedure of the history-based motion vector predictor candidate list HmvpCandList
in step S2106 in Fig. 26. First, it is determined whether to add a new element after
removing the element stored in the history-based motion vector predictor candidate
list HmvpCandList, or to add a new element without removing the element. Specifically,
a comparison is made as to whether the flag identicalCandExist indicating whether
the identical candidate exists is TRUE, or whether NumHmvpCand is 6 (step S2141 in
Fig. 28). In a case where either the condition that the flag identicalCandExist indicating
whether the identical candidate exists is TRUE or the condition that the current number
of candidates NumHmvpCand is 6 is satisfied (step S2141: YES in Fig. 28), the element stored in
the history-based motion vector predictor candidate list HmvpCandList is removed and
thereafter a new element will be added. An initial value of the index i is set to
removeIdx + 1, and the element shift process of step S2143 is repeated from this initial
value to NumHmvpCand (steps S2142 to S2144 in Fig. 28). By copying the elements of
HmvpCandList[i] to HmvpCandList[i-1], the elements are shifted forward (step S2143
in Fig. 28) and i is incremented by one (steps S2142 to S2144 in Fig. 28). Subsequently,
the inter prediction information candidate hMvpCand is added to the (NumHmvpCand-1)th
HmvpCandList [NumHmvpCand-1] counting from 0 that corresponds to the end of the history-based
motion vector predictor candidate list (step S2145 in Fig. 28), and the element shift/addition
process of the history-based motion vector predictor candidate list HmvpCandList will
be finished. In contrast, in a case where neither the condition that the flag identicalCandExist
indicating whether the identical candidate exists is TRUE nor the condition that NumHmvpCand
is 6 is satisfied (step S2141: NO in Fig. 28), the inter prediction information candidate
hMvpCand will be added to the end of the history-based motion vector predictor candidate
list without removing the element stored in the history-based motion vector predictor
candidate list HmvpCandList (step S2146 in Fig. 28). Here, the end of the history-based
motion vector predictor candidate list is the NumHmvpCand-th HmvpCandList [NumHmvpCand]
counted from 0. Moreover, NumHmvpCand is incremented by one, and the element shift
and addition process of the history-based motion vector predictor candidate list HmvpCandList
are finished.
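The identical element confirmation of Fig. 27 and the element shift/addition of Fig. 28
can be condensed into the following Python sketch, in which list operations replace
the explicit copy loop of steps S2142 to S2144; comparing candidates with == assumes
that each candidate is represented as a directly comparable value, which is an illustrative
simplification.

    HMVP_LIST_SIZE = 6  # number of elements of HmvpCandList in the present embodiment

    def update_hmvp_list(HmvpCandList, hMvpCand):
        # Fig. 27: identical element confirmation.
        identicalCandExist, removeIdx = False, 0
        for hMvpIdx, cand in enumerate(HmvpCandList):
            if cand == hMvpCand:
                identicalCandExist, removeIdx = True, hMvpIdx
                break
        if identicalCandExist or len(HmvpCandList) == HMVP_LIST_SIZE:  # step S2141: YES
            # steps S2142 to S2144: shift the later elements forward
            # (removeIdx remains 0 when the list is simply full, so the head element is removed)
            del HmvpCandList[removeIdx]
            HmvpCandList.append(hMvpCand)    # step S2145: add the new candidate to the end
        else:                                # step S2141: NO
            HmvpCandList.append(hMvpCand)    # step S2146: add without removing any element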
[0128] Fig. 31 is a view illustrating an example of a process of updating the history-based
motion vector predictor candidate list. In a case where a new element is to be added
to the history-based motion vector predictor candidate list HmvpCandList in which
six elements (inter prediction information) have already been registered, the history-based
motion vector predictor candidate list HmvpCandList is compared with new inter prediction
information in order from the head element (Fig. 31A). When the new element has the
same value as the third element HMVP2 from the head of the history-based motion vector
predictor candidate list HmvpCandList, the element HMVP2 is deleted from the history-based
motion vector predictor candidate list HmvpCandList and the following elements HMVP3
to HMVP5 are shifted (copied) one by one forward, and a new element is added to the
end of the history-based motion vector predictor candidate list HmvpCandList (Fig. 31B)
to complete the update of the history-based motion vector predictor candidate list
HmvpCandList (Fig. 31C).
History-based motion vector predictor candidate derivation process
[0129] Next, a method of deriving a history-based motion vector predictor candidate from
the history-based motion vector predictor candidate list HmvpCandList will be described
in detail. This corresponds to a processing procedure of step S304 in Fig. 20 concerning
common processing performed by the history-based motion vector predictor candidate
derivation unit 323 of the normal motion vector predictor mode derivation unit 301
on the coding side and the history-based motion vector predictor candidate derivation
unit 423 of the normal motion vector predictor mode derivation unit 401 on the decoding
side. Fig. 29 is a flowchart illustrating a history-based motion vector predictor
candidate derivation processing procedure.
[0130] In a case where the current number of motion vector predictor candidates numCurrMvpCand
is larger than or equal to the maximum number of elements of the motion vector predictor
candidate list mvpListLX (here, 2), or the number of history-based motion vector predictor
candidates NumHmvpCand is 0 (step S2201: NO in Fig. 29), the process of steps S2202
to S2209 of Fig. 29 will be omitted, and the history-based motion vector predictor
candidate derivation processing procedure will be finished. In a case where the number
numCurrMvpCand of the current motion vector predictor candidates is smaller than 2,
which is the maximum number of elements of the motion vector predictor candidate list
mvpListLX, and in a case where the value of the number NumHmvpCand of the history-based
motion vector predictor candidates is greater than 0 (step S2201: YES in Fig. 29),
the process of steps S2202 to S2209 in Fig. 29 will be performed.
[0131] Subsequently, the process of steps S2203 to S2208 in Fig. 29 is repeated for the
index i from 1 to the smaller of 4 and the number of history-based motion vector predictor
candidates numCheckedHMVPCand (steps S2202 to S2209 in Fig. 29).
In a case where the current number of motion vector predictor candidates numCurrMvpCand
is larger than or equal to 2, which is the maximum number of elements of the motion
vector predictor candidate list mvpListLX (step S2203: NO in Fig. 29), the process
from steps S2204 to S2209 in Fig. 29 will be omitted and the history-based motion
vector predictor candidate derivation processing procedure will be finished. In a
case where the current number of motion vector predictor candidates numCurrMvpCand
is smaller than 2 which is the maximum number of elements in the motion vector predictor
candidate list mvpListLX (step S2203 in Fig. 29: YES), the process in step S2204 and
later in Fig. 29 will be performed.
[0132] Subsequently, the process in steps S2205 to S2207 is performed for cases where Y
is 0 and Y is 1 (L0 and L1) (steps S2204 to S2208 in Fig. 29). In a case where the
current number of motion vector predictor candidates numCurrMvpCand is larger than
or equal to 2, which is the maximum number of elements of the motion vector predictor
candidate list mvpListLX (step S2205: NO in Fig. 29), the process from steps S2206
to S2209 in Fig. 29 will be omitted and the history-based motion vector predictor
candidate derivation processing procedure will be finished. In a case where the current
number of motion vector predictor candidates numCurrMvpCand is smaller than 2 which
is the maximum number of elements in the motion vector predictor candidate list mvpListLX
(step S2205: YES in Fig. 29), the process in step S2206 and later in Fig. 29 will
be performed.
[0133] Next, in a case where the history-based motion vector predictor candidate list HmvpCandList
includes an element having the same reference index as the reference index refIdxLX
of the coding/decoding target motion vector and being different from any element of
the motion vector predictor list mvpListLX (step S2206: YES in Fig. 29), a motion
vector of LY of the history-based motion vector predictor candidate HmvpCandList [NumHmvpCand-i]
is added to the numCurrMvpCand-th element mvpListLX[numCurrMvpCand] counting from
0 in the motion vector predictor candidate list (step S2207 in Fig. 29), and the number
numCurrMvpCand of the current motion vector predictor candidates is incremented by
one. In a case where there is no element in the history-based motion vector predictor
candidate list HmvpCandList that has the same reference index as the reference index
refIdxLX of the coding/decoding target motion vector and is different from any element
of the motion vector predictor list mvpListLX (step S2206: NO in Fig. 29), the additional
process in step S2207 will be skipped.
[0134] The process of steps S2205 to S2207 in Fig. 29 is performed for both L0 and L1 (steps
S2204 to S2208 in Fig. 29). The index i is incremented by one, and when the index
i is smaller than or equal to the smaller of 4 and the number of history-based
motion vector predictor candidates NumHmvpCand, the process of step S2203 and later
will be performed again (steps S2202 to S2209 in Fig. 29).
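One possible reading of steps S2201 to S2209 is sketched below in Python. The dictionary
layout of each history entry (keys "refIdx" and "mv" indexed by the reference list) is
an assumption introduced only for this illustration, and the number of checked candidates
numCheckedHMVPCand is taken here to equal the number of registered history candidates.

    MAX_MVP_CAND = 2  # maximum number of elements of mvpListLX in the present embodiment

    def add_history_mvp_candidates(mvpListLX, HmvpCandList, refIdxLX):
        NumHmvpCand = len(HmvpCandList)
        if len(mvpListLX) >= MAX_MVP_CAND or NumHmvpCand == 0:     # step S2201
            return
        for i in range(1, min(4, NumHmvpCand) + 1):                # steps S2202 to S2209
            if len(mvpListLX) >= MAX_MVP_CAND:                     # step S2203
                return
            cand = HmvpCandList[NumHmvpCand - i]                   # newest entries are checked first
            for Y in (0, 1):                                       # steps S2204 to S2208 (L0 and L1)
                if len(mvpListLX) >= MAX_MVP_CAND:                 # step S2205
                    return
                mvLY = cand["mv"][Y]
                if cand["refIdx"][Y] == refIdxLX and mvLY not in mvpListLX:  # step S2206
                    mvpListLX.append(mvLY)                         # step S2207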
History-based merging candidate derivation process
[0135] The following is a detailed description of a method of deriving a history-based merging
candidate from the history-based motion vector predictor candidate list HmvpCandList,
that is, a procedure of the process of step S403 in Fig. 21, which is a common process
of the history-based merging candidate derivation unit 345 of the normal merge mode
derivation unit 302
on the coding side and the history-based merging candidate derivation unit 445 of
the normal merge mode derivation unit 402 on the decoding side. Fig. 30 is a flowchart
illustrating a history-based merging candidate derivation processing procedure.
[0136] First, an initialization process is performed (step S2301 in Fig. 30). Each of elements
from 0 to (numCurrMergeCand - 1) of isPruned[i] is set to the value of FALSE, and the
variable numOrigMergeCand is set to the number numCurrMergeCand of elements registered
in the current merging candidate list.
[0137] Subsequently, the initial value of the index hMvpIdx is set to 1, and the additional
process from step S2303 to step S2310 in Fig. 30 is repeated from this initial value
to NumHmvpCand (steps S2302 to S2311 in Fig. 30). When the number numCurrMergeCand
of the elements registered in the current merging candidate list is larger than (the
maximum number of merging candidates MaxNumMergeCand - 1), merging candidates have
been added to all the elements in the merging candidate list. Accordingly, the history-based
merging candidate derivation process will be finished (step S2303: NO in Fig. 30).
In a case where the number numCurrMergeCand of the elements registered in the current
merging candidate list is smaller than or equal to (the maximum number of merging
candidates MaxNumMergeCand - 1), the process of step S2304 and later will be performed.
sameMotion is set to a value of FALSE (step S2304 in Fig. 30). Subsequently,
the initial value of the index i is set to 0, and the process of steps S2306 and S2307
in Fig. 30 is performed from this initial value to numOrigMergeCand-1 (S2305 to S2308
in Fig. 30). Comparison is performed as to whether the (NumHmvpCand - hMvpIdx)-th element
HmvpCandList[NumHmvpCand - hMvpIdx] counting from 0 in the history-based motion vector
predictor candidate list has the same value as the i-th element mergeCandList[i] counting
from 0 in the merging candidate list (step S2306 in Fig. 30).
[0138] The merging candidates are determined to have the same value in a case where all the
constituent elements (inter prediction mode, reference index, motion vector) of the
merging candidates have the same values. In a case where the merging candidates have
the same value and isPruned[i] is set to FALSE (step S2306: YES in Fig. 30), both
sameMotion and isPruned[i] will be set to TRUE (step S2307 in Fig. 30). In a case
where the values are not the same (step S2306: NO in Fig. 30), the process in step
S2307 will be skipped. After completion of the repetition processing from step S2305
to step S2308 in Fig. 30, comparison is made as to whether the sameMotion is FALSE
(step S2309 in Fig. 30). In a case where sameMotion is FALSE (step S2309: YES in Fig.
30), that is, in a case where the (NumHmvpCand - hMvpIdx)-th element HmvpCandList[NumHmvpCand
- hMvpIdx] counted from 0 in the history-based motion vector predictor candidate list
does not exist in mergeCandList, that element is added to the numCurrMergeCand-th element
mergeCandList[numCurrMergeCand] of the merging candidate list, and numCurrMergeCand is
incremented by one (step S2310 in Fig. 30). The index hMvpIdx is incremented by one
(step S2302 in Fig. 30), and the process of steps S2302 to S2311 in Fig. 30 is repeated.
[0139] After completion of confirmation of all the elements in the history-based motion
vector predictor candidate list or completion of addition of merging candidates to
all elements in the merging candidate list, the history-based merging candidate derivation
process is completed.
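The pruning and addition logic of Fig. 30 may be sketched in Python as follows; each
candidate is assumed to be a directly comparable value (for example, a tuple of inter
prediction mode, reference index, and motion vector), which is an assumption made only
for this illustration.

    def add_history_merge_candidates(mergeCandList, HmvpCandList, MaxNumMergeCand=6):
        numOrigMergeCand = len(mergeCandList)                 # step S2301: initialization
        isPruned = [False] * numOrigMergeCand
        NumHmvpCand = len(HmvpCandList)
        for hMvpIdx in range(1, NumHmvpCand + 1):             # steps S2302 to S2311
            if len(mergeCandList) > MaxNumMergeCand - 1:      # step S2303: list already full
                break
            hCand = HmvpCandList[NumHmvpCand - hMvpIdx]       # newest history entry first
            sameMotion = False                                # step S2304
            for i in range(numOrigMergeCand):                 # steps S2305 to S2308
                if not isPruned[i] and hCand == mergeCandList[i]:  # step S2306
                    sameMotion = True                         # step S2307
                    isPruned[i] = True
            if not sameMotion:                                # step S2309
                mergeCandList.append(hCand)                   # step S2310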
Average merging candidate derivation process
[0140] The following is a detailed description of a method of deriving an average merging
candidate, that is, a procedure of the process of step S404 in Fig. 21, which is a common process
of the average merging candidate derivation unit 344 of the normal merge mode derivation
unit 302 on the coding side and the average merging candidate derivation unit 444
of the normal merge mode derivation unit 402 on the decoding side. Fig. 39 is a flowchart
illustrating an average merging candidate derivation processing procedure.
[0141] First, an initialization process is performed (step S1301 in Fig. 39). The variable
numOrigMergeCand is set to the number of elements numCurrMergeCand registered in the
current merging candidate list.
[0142] Subsequently, scanning is performed sequentially from the top of the merging candidate
list to determine two pieces of motion information. Index i indicating the first motion
information is set such that index i = 0, and index j indicating the second motion
information is set such that index j = 1 (steps S1302 to S1303 in Fig. 39). When
the number numCurrMergeCand of the elements registered in the current merging candidate
list is larger than (the maximum number of merging candidates MaxNumMergeCand - 1),
merging candidates have been added to all the elements in the merging candidate list.
Accordingly, the average merging candidate derivation process will be finished (step
S1304 in Fig. 39). In a case where the number numCurrMergeCand of the elements registered
in the current merging candidate list is smaller than or equal to (the maximum number
of merging candidates MaxNumMergeCand - 1), the process of step S1305 and later will
be performed.
[0143] Determination is made as to whether both the i-th motion information mergeCandList[i]
of the merging candidate list and j-th motion information mergeCandList[j] of the
merging candidate list are invalid (step S1305 in Fig. 39). In a case where both are
invalid, the process proceeds to the next element without deriving an average merging
candidate of mergeCandList[i] and mergeCandList[j]. In a case where the condition
that both mergeCandList[i] and mergeCandList[j] are invalid is not satisfied, the
following process is repeated with X set to 0 and 1 (steps S1306 to S1314 in Fig.
39).
[0144] Determination is made as to whether the LX prediction of mergeCandList[i] is valid
(step S1307 in Fig. 39). In a case where the LX prediction of mergeCandList[i] is
valid, determination is made as to whether the LX prediction of mergeCandList[j] is
valid (step S1308 in Fig. 39). In a case where the LX prediction of mergeCandList[j]
is valid, that is, in a case where both the LX prediction of mergeCandList[i] and
the LX prediction of mergeCandList[j] are valid, a motion vector of LX prediction
obtained by averaging the motion vector of LX prediction of mergeCandList[i] and the
motion vector of LX prediction of mergeCandList[j] will be derived, and an average
merging candidate of LX prediction having a reference index of LX prediction of mergeCandList[i]
will be derived, so as to be set as LX prediction of averageCand, and the LX prediction
of averageCand will be validated (step S1309 in Fig. 39). In step S1308 of Fig. 39,
in a case where LX prediction of mergeCandList[j] is not valid, that is, in a case
where LX prediction of mergeCandList[i] is valid and LX prediction of mergeCandList[j]
is invalid, an average merging candidate of LX prediction having the motion vector
and the reference index of LX prediction of mergeCandList[i] will be derived, so as
to be set as LX prediction of averageCand, and the LX prediction of averageCand will be validated
(step S1310 in Fig. 39). In a case where the LX prediction of mergeCandList[i] is
not valid in step S1307 of Fig. 39, determination is made as to whether the LX prediction
of mergeCandList[j] is valid (step S1311 of Fig. 39). In a case where LX prediction
of mergeCandList[j] is valid, that is, in a case where LX prediction of mergeCandList[i]
is invalid and LX prediction of mergeCandList[j] is valid, an average merging candidate
of LX prediction having the motion vector and the reference index of LX prediction
of mergeCandList[j] will be derived, so as to be set as LX prediction of averageCand, and the LX
prediction of averageCand will be validated (step S1312 in Fig. 39). In step S1311
of Fig. 39, in a case where LX prediction of mergeCandList [j] is not valid, that
is, in a case where LX prediction of mergeCandList[i] and LX prediction of mergeCandList[j]
are both invalid, LX prediction of averageCand will be invalidated (step S1312 in
Fig. 39).
[0145] The average merging candidate averageCand of L0-prediction, L1-prediction or BI prediction
constructed as described above is added to the numCurrMergeCand-th mergeCandList[numCurrMergeCand]
of the merging candidate list, and numCurrMergeCand is incremented by one (step S1315
in Fig. 39). This completes the average merging candidate derivation process.
[0146] The average merging candidate is obtained by averaging in each of the horizontal
component of the motion vector and the vertical component of the motion vector.
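For a single candidate pair, the per-prediction-list logic of steps S1306 to S1314
can be sketched in Python as follows. Each candidate is assumed to map X (0 or 1) to
None when its LX prediction is invalid, or to a dictionary holding the motion vector
and reference index; this representation, and the use of integer division for the
averaging, are assumptions made only for this illustration.

    def derive_average_candidate(cand_i, cand_j):
        averageCand = {}
        for X in (0, 1):
            pi, pj = cand_i.get(X), cand_j.get(X)
            if pi is not None and pj is not None:       # step S1309: both valid, average the vectors
                mv = ((pi["mv"][0] + pj["mv"][0]) // 2,  # per-component averaging (rounding assumed)
                      (pi["mv"][1] + pj["mv"][1]) // 2)
                averageCand[X] = {"mv": mv, "refIdx": pi["refIdx"]}
            elif pi is not None:                         # step S1310: only mergeCandList[i] is valid
                averageCand[X] = {"mv": pi["mv"], "refIdx": pi["refIdx"]}
            elif pj is not None:                         # step S1312: only mergeCandList[j] is valid
                averageCand[X] = {"mv": pj["mv"], "refIdx": pj["refIdx"]}
            else:                                        # both invalid: LX prediction stays invalid
                averageCand[X] = None
        return averageCand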
Motion compensation prediction process
[0147] The motion compensation prediction unit 306 acquires the position and size of a block
that is currently subjected to prediction processing in coding. Further, the motion
compensation prediction unit 306 acquires inter prediction information from the inter
prediction mode determiner 305. A reference index and a motion vector are derived
from the acquired inter prediction information, and a position in the reference picture
specified by the reference index in the decoded picture memory 104 is obtained by
shifting, by the amount of the motion vector, the position of the picture signal of
the block that is subjected to prediction processing. The picture signal at that shifted
position is acquired, and thereafter a prediction signal is generated.
[0148] In a case where prediction is made from a single reference picture, such as when
the inter prediction mode in the inter prediction is L0-prediction or L1-prediction,
a prediction signal acquired from one reference picture is set as a motion compensation
prediction signal. In a case where prediction is made from two reference pictures,
such as when the inter prediction mode is BI prediction, a weighted average of prediction
signals acquired from the two reference pictures is set as the motion compensation
prediction signal. The acquired motion compensation prediction signal is supplied
to the prediction method determiner 105. Here, the weighted averaging ratio in the
bi-prediction is set to 1:1. Alternatively, the weighted averaging may use another
ratio. For example, the weighting ratio may be set such that the shorter the picture
interval between the prediction target picture and the reference picture, the higher
the weighting ratio. The calculation of the weighting ratio may also be performed
using a correspondence table between the combination of the picture intervals and
the weighting ratios.
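As an illustration of the weighted averaging described above, the following Python
sketch averages two prediction signals sample by sample; the default weights correspond
to the 1:1 ratio, and the rounding offset is an assumption introduced only for this
illustration.

    def bi_prediction_signal(pred0, pred1, w0=1, w1=1):
        # Weighted average of two prediction signals given as sequences of sample values.
        total = w0 + w1
        return [(w0 * p0 + w1 * p1 + total // 2) // total
                for p0, p1 in zip(pred0, pred1)]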
[0149] The motion compensation prediction unit 406 has a function similar to that of the motion
compensation prediction unit 306 on the coding side. The motion compensation prediction unit 406
acquires inter prediction information from the normal motion vector predictor mode
derivation unit 401, the normal merge mode derivation unit 402, the subblock motion
vector predictor mode derivation unit 403, and the subblock merge mode derivation
unit 404, via the switch 408. The motion compensation prediction unit 406 supplies
the obtained motion compensation prediction signal to the decoded picture signal superimposer
207.
Inter prediction mode
[0150] The process of performing prediction from a single reference picture is defined as
uni-prediction. Uni-prediction performs prediction of L0-prediction or L1-prediction
using one of the reference pictures registered in either of the two reference lists L0 and L1.
[0151] Fig. 32 illustrates a case of uni-prediction in which the reference picture (RefL0Pic)
of L0 is at a time before the target picture (CurPic). Fig. 33 illustrates a case
of uni-prediction in which the reference picture of L0-prediction is at a time after
the target picture. Similarly, uni-prediction can be performed by replacing the L0-prediction
reference picture in Figs. 32 and 33 with an L1-prediction reference picture (RefL1Pic).
[0152] The process of performing prediction from two reference pictures is defined as bi-prediction.
Bi-prediction performs prediction, expressed as BI prediction, using both L0-prediction
and L1-prediction. Fig. 34 illustrates a case of bi-prediction in which an L0-prediction
reference picture is at a time before the target picture and an L1-prediction reference
picture is at a time after the target picture. Fig. 35 illustrates a case of bi-prediction
in which the reference picture for L0-prediction and the reference picture for L1-prediction
are at a time before the target picture. Fig. 36 illustrates a case of bi-prediction
in which the reference picture for L0-prediction and the reference picture for L1-prediction
are at a time after the target picture.
[0153] In this manner, it is possible to use prediction without limiting the relationship
between the prediction type of L0/L1 and time, such as limiting L0 to the past direction
and L1 to the future direction. Moreover, bi-prediction may perform each of L0-prediction
and L1-prediction using the same reference picture. The determination of whether to
perform motion compensation prediction in the uni-prediction or the bi-prediction is
made on the basis of information (for example, a flag) indicating whether to use the
L0-prediction and whether to use the L1-prediction, for example.
Reference index
[0154] In the embodiment of the present invention, it is possible to select an optimal reference
picture from a plurality of reference pictures in motion compensation prediction in
order to improve motion compensation prediction accuracy. Therefore, the reference
picture used in the motion compensation prediction is designated by a reference index,
and the reference index is coded in a bitstream together with the motion vector difference.
Motion compensation process based on normal motion vector predictor mode
[0155] As illustrated in the inter prediction unit 102 on the coding side in Fig. 16, in
a case where inter prediction information by the normal motion vector predictor mode
derivation unit 301 has been selected by the inter prediction mode determiner 305,
the motion compensation prediction unit 306 acquires this inter prediction information
from the inter prediction mode determiner 305, and derives an inter prediction mode,
a reference index, and a motion vector of a target block and thereby generates a motion
compensation prediction signal. The constructed motion compensation prediction signal
is supplied to the prediction method determiner 105.
[0156] Similarly, as illustrated in the inter prediction unit 203 on the decoding side in
Fig. 22, in a case where the switch 408 is connected to the normal motion vector predictor
mode derivation unit 401 during the decoding process, the motion compensation prediction
unit 406 acquires inter prediction information by the normal motion vector predictor
mode derivation unit 401, and derives an inter prediction mode, a reference index,
and a motion vector of the current target block and thereby generates a motion compensation
prediction signal. The constructed motion compensation prediction signal is supplied
to the decoded picture signal superimposer 207.
Motion compensation processing based on normal merge mode
[0157] As illustrated in the inter prediction unit 102 on the coding side in Fig. 16, in
a case where inter prediction information by the normal merge mode derivation unit
302 has been selected by the inter prediction mode determiner 305, the motion compensation
prediction unit 306 acquires this inter prediction information from the inter prediction
mode determiner 305, and derives an inter prediction mode, a reference index, and
a motion vector of the current target block, thereby generating a motion compensation
prediction signal. The constructed motion compensation prediction signal is supplied
to the prediction method determiner 105.
[0158] Similarly, as illustrated in the inter prediction unit 203 on the decoding side in
Fig. 22, in a case where the switch 408 is connected to the normal merge mode derivation
unit 402 during the decoding process, the motion compensation prediction unit 406
acquires inter prediction information by the normal merge mode derivation unit 402,
and derives an inter prediction mode, a reference index, and a motion vector of the current
target block, thereby generating a motion compensation prediction signal. The constructed
motion compensation prediction signal is supplied to the decoded picture signal superimposer
207.
Motion compensation process based on subblock motion vector predictor mode
[0159] As illustrated in the inter prediction unit 102 on the coding side in Fig. 16, in
a case where inter prediction information by the subblock motion vector predictor
mode derivation unit 303 has been selected on the inter prediction mode determiner
305, the motion compensation prediction unit 306 acquires this inter prediction information
from the inter prediction mode determiner 305, and derives an inter prediction mode,
a reference index, and a motion vector of the current target block, thereby generating
a motion compensation prediction signal. The constructed motion compensation prediction
signal is supplied to the prediction method determiner 105.
[0160] Similarly, as illustrated in the inter prediction unit 203 on the decoding side in
Fig. 22, in a case where the switch 408 is connected to the subblock motion vector
predictor mode derivation unit 403 during the decoding process, the motion compensation
prediction unit 406 acquires inter prediction information by the subblock motion vector
predictor mode derivation unit 403, and derives an inter prediction mode, a reference
index, and a motion vector of a target block, thereby generating a motion compensation
prediction signal. The constructed motion compensation prediction signal is supplied
to the decoded picture signal superimposer 207.
Motion compensation process based on subblock merge mode
[0161] As illustrated in the inter prediction unit 102 on the coding side in Fig. 16, in
a case where inter prediction information by the subblock merge mode derivation unit
304 has been selected on the inter prediction mode determiner 305, the motion compensation
prediction unit 306 acquires this inter prediction information from the inter prediction
mode determiner 305, and derives an inter prediction mode, a reference index, and
a motion vector of the current target block, thereby generating a motion compensation
prediction signal. The constructed motion compensation prediction signal is supplied
to the prediction method determiner 105.
[0162] Similarly, as illustrated in the inter prediction unit 203 on the decoding side in
Fig. 22, in a case where the switch 408 is connected to the subblock merge mode derivation
unit 404 during the decoding process, the motion compensation prediction unit 406
acquires inter prediction information by the subblock merge mode derivation unit 404,
and derives an inter prediction mode, a reference index, and a motion vector of the current
target block, thereby generating a motion compensation prediction signal. The constructed
motion compensation prediction signal is supplied to the decoded picture signal superimposer
207.
Motion compensation process based on affine transform prediction
[0163] In the normal motion vector predictor mode and the normal merge mode, motion compensation
using an affine model is usable on the basis of the following flags. These flags are
set on the basis of inter prediction conditions determined by the inter prediction
mode determiner 305 in the coding process, and are coded in the bitstream. In the decoding
process, whether to perform motion compensation using the affine model is determined
on the basis of these flags in the bitstream.
[0164] sps_affine_enabled_flag indicates whether motion compensation using an affine model
is usable in inter prediction. When sps_affine_enabled_flag is 0, the process is constrained,
in units of sequences, so that motion compensation by the affine model is not performed.
Moreover, inter_affine_flag and cu_affine_type_flag are not transmitted in the coding
block (CU) syntax of a coding video sequence. When sps_affine_enabled_flag is 1, motion
compensation by an affine model is usable in the coding video sequence.
[0165] sps_affine_type_flag indicates whether motion compensation using a 6-parameter affine
model is usable in inter prediction. When sps_affine_type_flag is 0, the process is
constrained so that motion compensation using a 6-parameter affine model is not performed.
Moreover, cu_affine_type_flag is not transmitted in the CU syntax of the coding video
sequence. When sps_affine_type_flag is 1, motion compensation based on a 6-parameter
affine model is usable in a coding video sequence. In a case where sps_affine_type_flag
is not present, it is inferred to be 0.
[0166] In a case of decoding a P or B slice, when inter_affine_flag is 1 in the current
target CU, motion compensation using an affine model is used in order to generate
a motion compensation prediction signal of the current target CU. When inter_affine_flag
is 0, the affine model is not used for the current target CU. In a case where inter_affine_flag
is not present, it is inferred to be 0.
[0167] In a case of decoding a P or B slice, when cu_affine_type_flag is 1 in the current
CU, motion compensation using a 6-parameter affine model is used in order to generate
a motion compensation prediction signal of the current CU. When cu_affine_type_flag
is 0, motion compensation using a 4-parameter affine model is used to generate
a motion compensation prediction signal of the current CU.
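As a non-limiting illustration, the following C++ sketch shows how a decoder could gate parsing of the CU-level flags on the SPS-level flags described above. The reader type and function names are illustrative only, and the exact condition under which cu_affine_type_flag is parsed is an assumption consistent with the above description.

#include <cstddef>
#include <vector>

struct BitstreamReader {
    // Dummy flag source for illustration; an actual decoder reads bits from the bitstream.
    std::vector<bool> bits;
    std::size_t pos = 0;
    bool readFlag() { return pos < bits.size() ? bits[pos++] : false; }
};

struct AffineCuFlags {
    bool inter_affine_flag = false;   // whether affine motion compensation is used for the CU
    bool cu_affine_type_flag = false; // 0: 4-parameter model, 1: 6-parameter model
};

// Parse the CU-level affine flags while honoring the SPS-level gating described above:
// neither flag is transmitted when sps_affine_enabled_flag is 0, and cu_affine_type_flag
// is assumed to be transmitted only when affine is used for the CU and the 6-parameter
// model is enabled. Flags that are not present keep their default value of 0 (false).
AffineCuFlags parseAffineCuFlags(BitstreamReader& br,
                                 bool sps_affine_enabled_flag,
                                 bool sps_affine_type_flag) {
    AffineCuFlags flags;
    if (sps_affine_enabled_flag) {
        flags.inter_affine_flag = br.readFlag();
        if (flags.inter_affine_flag && sps_affine_type_flag) {
            flags.cu_affine_type_flag = br.readFlag();
        }
    }
    return flags;
}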
[0168] A reference index and a motion vector are derived in units of subblocks in the motion
compensation based on the affine model. Accordingly, a motion compensation prediction
signal is generated in units of subblocks using the derived reference index and motion
vector.
[0169] The 4-parameter affine model is a mode in which the motion vector of each subblock
is derived from four parameters, namely, the horizontal and vertical components of
the motion vectors of two control points, and motion compensation is performed
in units of subblocks.
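For illustration only, the following C++ sketch derives a subblock motion vector under the common formulation in which the two control points are located at the top-left and top-right corners of the coding block; floating-point arithmetic is used instead of the fixed-point arithmetic of an actual codec, and all names are illustrative.

#include <cstdio>

struct MV { double x; double y; };

// Derive the motion vector of the subblock whose center is at (cx, cy), measured from
// the top-left corner of the coding block, from the motion vectors of two control points:
// cpMv0 at the top-left corner and cpMv1 at the top-right corner (4-parameter model).
MV deriveSubblockMv4Param(MV cpMv0, MV cpMv1, double cx, double cy, double blockWidth) {
    const double a = (cpMv1.x - cpMv0.x) / blockWidth;  // scaling component
    const double b = (cpMv1.y - cpMv0.y) / blockWidth;  // rotation component
    MV mv;
    mv.x = a * cx - b * cy + cpMv0.x;
    mv.y = b * cx + a * cy + cpMv0.y;
    return mv;
}

int main() {
    const MV cp0{2.0, 1.0};
    const MV cp1{3.0, 1.5};
    // Center of the 4 x 4 subblock at subblock position (1, 1) in a 16 x 16 coding block.
    const MV mv = deriveSubblockMv4Param(cp0, cp1, 6.0, 6.0, 16.0);
    std::printf("subblock mv = (%f, %f)\n", mv.x, mv.y);
    return 0;
}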
Triangle merge mode
[0170] The triangle merge mode is a type of merge mode, in which the coding/decoding block
is split into diagonal partitions to perform motion compensation prediction. The triangle
merge mode is a type of geometric division merge mode in which the coding/decoding
block is split into blocks having a non-rectangular shape. In the geometric division
merge mode, this corresponds to a mode in which the coding/decoding block is split
into two right triangles by a diagonal line.
[0171] The geometric division merge mode is expressed by a combination of two parameters,
for example, an index (angleIdx) indicating a division angle and an index (distanceIdx)
indicating a distance from the center of the coding block. As an example, 64 patterns
are defined for the geometric division merge mode, and fixed-length encoding is performed.
Of the 64 patterns, two modes correspond to the triangle merge mode: the modes in which
the index indicating the division angle indicates an angle forming a diagonal line of
the coding block (for example, 45 degrees (angleIdx = 4 in a configuration in which
360 degrees are represented by 32 divisions) or 135 degrees (angleIdx = 12 in the same
configuration)) and the index indicating the distance from the center of the coding
block is minimum (distanceIdx = 0, indicating that the division boundary passes through
the center of the coding block). In these two modes, the coding block is split by a
diagonal line.
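For illustration only, the check described above can be written as the following C++ sketch; the function name is illustrative.

// Returns true when the geometric division merge mode parameters correspond to the
// triangle merge mode: the division angle index indicates a diagonal of the coding block
// (angleIdx = 4 or 12 when 360 degrees are represented by 32 divisions) and the division
// boundary passes through the center of the coding block (distanceIdx = 0).
bool isTriangleMergeMode(int angleIdx, int distanceIdx) {
    const bool diagonalAngle = (angleIdx == 4) || (angleIdx == 12);
    return diagonalAngle && (distanceIdx == 0);
}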
[0172] The triangle merge mode will be described with reference to Figs. 38A and 38B. Figs.
38A and 38B illustrate an example of prediction of a 16 × 16 coding/decoding block
in the triangle merge mode. The coding/decoding block of the triangle merge mode is
split into 4 × 4 subblocks, and each subblock is assigned to one of three partitions,
namely, uni-prediction partition 0 (UNI0), uni-prediction partition 1 (UNI1), and
bi-prediction partition 2 (BI). Here, subblocks above the diagonal line are assigned
to partition 0, subblocks below the diagonal line are assigned to partition 1, and
subblocks on the diagonal line are assigned to partition 2. When merge_triangle_split_dir
is 0, the partitions are assigned as illustrated in Fig. 38A, and when merge_triangle_split_dir
is 1, the partitions are assigned as illustrated in Fig. 38B.
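As a non-limiting illustration, the following C++ sketch assigns each 4 × 4 subblock of a square coding block to one of the three partitions. Which side of the diagonal corresponds to partition 0 for each split direction is an assumption made for illustration and follows the general description above rather than the exact layouts of Figs. 38A and 38B.

#include <cstdio>

enum Partition { UNI0 = 0, UNI1 = 1, BI = 2 };

// Assign the 4 x 4 subblock at grid position (sx, sy) of a square coding block that has
// numSub x numSub subblocks (numSub = 4 for a 16 x 16 block) to a triangle partition.
// splitDir corresponds to merge_triangle_split_dir: 0 splits along one diagonal of the
// block and 1 along the other diagonal.
Partition assignSubblock(int sx, int sy, int numSub, int splitDir) {
    // d < 0: one side of the diagonal, d > 0: the other side, d == 0: on the diagonal.
    const int d = (splitDir == 0) ? (sy - sx) : (sx + sy - (numSub - 1));
    if (d < 0) return UNI0;  // partition 0 (uni-prediction)
    if (d > 0) return UNI1;  // partition 1 (uni-prediction)
    return BI;               // partition 2 (bi-prediction), on the diagonal
}

int main() {
    const int numSub = 4;  // 16 x 16 coding block split into 4 x 4 subblocks
    for (int sy = 0; sy < numSub; ++sy) {
        for (int sx = 0; sx < numSub; ++sx)
            std::printf("%d ", assignSubblock(sx, sy, numSub, /*splitDir=*/0));
        std::printf("\n");
    }
    return 0;
}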
[0173] Uni-prediction motion information designated by merge triangle index 0 is used for
motion compensation prediction of partition 0. Uni-prediction motion information designated
by merge triangle index 1 is used for motion compensation prediction of partition
1. Bi-prediction motion information combining uni-prediction motion information designated
by merge triangle index 0 and uni-prediction motion information designated by merge
triangle index 1 is used for motion compensation prediction of partition 2.
[0174] Here, the uni-prediction motion information is a set of a motion vector and a reference
index, while the bi-prediction motion information is formed with two sets of a motion
vector and a reference index. The motion information represents either uni-prediction
motion information or bi-prediction motion information.
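The data structures implied by the above description can be sketched in C++ as follows; the type names and the quarter-sample motion vector precision are illustrative assumptions.

#include <optional>

// One motion vector, here assumed to be expressed in quarter-sample units.
struct MotionVector { int x = 0; int y = 0; };

// Uni-prediction motion information: one motion vector together with the reference index
// identifying the reference picture it points into.
struct UniPredInfo {
    MotionVector mv;
    int refIdx = -1;
};

// Motion information of a block: uni-prediction when exactly one of the two list entries
// is present, bi-prediction when both are present.
struct MotionInfo {
    std::optional<UniPredInfo> listL0;  // motion information of the motion information list L0
    std::optional<UniPredInfo> listL1;  // motion information of the motion information list L1
    bool isBiPrediction() const { return listL0.has_value() && listL1.has_value(); }
};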
[0175] The merging candidate selectors 347 and 447 use the derived merging candidate list
mergeCandList as a triangle merging candidate list triangleMergeCandList.
[0176] The flowchart of Fig. 40 related to triangle merging candidate derivation will be
described.
[0177] First, a merging candidate list mergeCandList is used as a triangle merging candidate
list triangleMergeCandList (step S3501).
[0178] Subsequently, merging candidates having motion information of the motion information
list L0 are prioritized, and uni-prediction motion information of the merge triangle
partition 0 is derived (step S3502).
[0179] Subsequently, merging candidates having motion information of the motion information
list L1 are prioritized, and uni-prediction motion information of the merge triangle
partition 1 is derived (step S3503).
[0180] Note that step S3502 and step S3503 can be performed in arbitrary order and can also be
processed in parallel.
[0181] Fig. 41 is a flowchart illustrating derivation of uni-prediction motion information
of a merge triangle partition 0 according to the present embodiment.
[0182] First, for an M-th candidate in the derived merging candidate list mergeCandList,
determination is made as to whether a candidate M has motion information of the motion
information list L0 (step S3601). In a case where the candidate M has the motion information
of the motion information list L0, the motion information of the motion information
list L0 of the candidate M is set as a triangle merging candidate (step S3602). For
candidates M (M = 0, 1, ..., numMergeCand - 1), steps S3601 and S3602 are performed
in ascending order, and triangle merging candidates are additionally derived.
[0183] Subsequently, for the M-th candidate in the derived merging candidate list mergeCandList,
determination is made as to whether the candidate M has motion information of the
motion information list L1 (step S3603). In a case where the candidate M has the motion
information of the motion information list L1, the motion information of the motion
information list L1 of the candidate M is set as a triangle merging candidate (step
S3604). For candidates M (M = numMergeCand - 1, ..., 1, 0), steps S3603 and S3604
are performed in descending order, and triangle merging candidates are additionally
derived.
[0184] Fig. 42 is a flowchart illustrating derivation of uni-prediction motion information
of a merge triangle partition 1 according to the present embodiment.
[0185] First, for an M-th candidate in the derived merging candidate list mergeCandList,
determination is made as to whether a candidate M has motion information of the motion
information list L1 (step S3701). In a case where the candidate M has the motion information
of the motion information list L1, the motion information of the motion information
list L1 of the candidate M is set as a triangle merging candidate (step S3702). For
candidates M (M = 0, 1, ..., numMergeCand - 1), steps S3701 and S3702 are performed
in ascending order, and triangle merging candidates are additionally derived.
[0186] Subsequently, for the M-th candidate in the derived merging candidate list mergeCandList,
determination is made as to whether the candidate M has motion information of the
motion information list L0 (step S3703). In a case where the candidate M has the motion
information of the motion information list L0, the motion information of the motion
information list L0 of the candidate M is set as a triangle merging candidate (step
S3704). For candidates M (M = numMergeCand - 1, ..., 1, 0), steps S3703 and S3704
are performed in descending order, and triangle merging candidates are additionally
derived.
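As a non-limiting illustration of the two flowcharts above, the following C++ sketch derives the uni-prediction triangle merging candidates from mergeCandList, first scanning the prioritized motion information list in ascending order and then the other list in descending order. The structures and function name are illustrative; partition 0 prioritizes the list L0 and partition 1 prioritizes the list L1.

#include <optional>
#include <vector>

// Minimal stand-ins for the motion information structures used in this sketch.
struct MotionVector { int x = 0; int y = 0; };
struct UniPredInfo  { MotionVector mv; int refIdx = -1; };
struct MergeCandidate {
    std::optional<UniPredInfo> listL0;  // motion information of the list L0, if present
    std::optional<UniPredInfo> listL1;  // motion information of the list L1, if present
};

// Derive uni-prediction triangle merging candidates from mergeCandList, prioritizing one
// motion information list: an ascending scan over the prioritized list (steps S3601-S3602
// or S3701-S3702) followed by a descending scan over the other list (steps S3603-S3604
// or S3703-S3704).
std::vector<UniPredInfo> deriveTriangleUniCandidates(
        const std::vector<MergeCandidate>& mergeCandList, bool prioritizeL0) {
    std::vector<UniPredInfo> candidates;
    const int numMergeCand = static_cast<int>(mergeCandList.size());
    for (int m = 0; m < numMergeCand; ++m) {
        const auto& prioritized = prioritizeL0 ? mergeCandList[m].listL0 : mergeCandList[m].listL1;
        if (prioritized) candidates.push_back(*prioritized);
    }
    for (int m = numMergeCand - 1; m >= 0; --m) {
        const auto& other = prioritizeL0 ? mergeCandList[m].listL1 : mergeCandList[m].listL0;
        if (other) candidates.push_back(*other);
    }
    return candidates;
}

// Usage: partition 0 prioritizes the list L0, partition 1 prioritizes the list L1.
//   auto partition0 = deriveTriangleUniCandidates(mergeCandList, /*prioritizeL0=*/true);
//   auto partition1 = deriveTriangleUniCandidates(mergeCandList, /*prioritizeL0=*/false);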
[0187] The merging candidate selector 347 on the coding side acquires motion information
from the derived triangle merging candidate list triangleMergeCandList and calculates
code amounts and distortion amounts.
[0188] The merging candidate selector 347 compares the calculated code amounts
and distortion amounts, thereby selecting a coding block splitting direction and triangle
merging candidates of the divided partitions. In a case where encoding is performed using
the triangle merge mode, the merging candidate selector 347 supplies the selected information
(the coding block splitting direction merge_triangle_split_dir and the merge triangle
indexes merge_triangle_idx0 and merge_triangle_idx1 indicating the triangle merging candidates
of the divided partitions) and inter prediction information of the triangle merging
candidates to the motion compensation prediction unit 306. The bit strings coding
unit 108 encodes the selected information.
[0189] On the other hand, in the case of the triangle merge mode, the merging candidate
selector 447 on the decoding side selects triangle merging candidates based on decoded
information (the coding block splitting direction merge_triangle_split_dir and the
merge triangle indexes merge_triangle_idx0 and merge_triangle_idx1 indicating the
triangle merging candidates of the divided partitions) and supplies inter prediction
information of the selected triangle merging candidates to the motion compensation
prediction unit 406.
[0190] In the case of the triangle merge mode, the motion compensation prediction units
306 and 406 perform weighted averaging described below. In the case of luminance,
the motion compensation prediction units 306 and 406 calculate nCbR = (nCbW > nCbH)
? (nCbW / nCbH) : (nCbH / nCbW) with respect to the width nCbW and the height nCbH of the
coding block. Then, for a position (x, y) in the coding block, the weight wValue in the
case of Fig. 38A is calculated as
wValue = (nCbW > nCbH) ? Clip3(0, 8, x / nCbR - y + 4) : Clip3(0, 8, y / nCbR - x + 4).
On the other hand, the weight wValue in the case of Fig. 38B is calculated as
wValue = (nCbW > nCbH) ? Clip3(0, 8, nCbH - 1 - x / nCbR - y + 4) : Clip3(0, 8, nCbW - 1 - y / nCbR - x + 4).
Further, the motion compensation prediction units 306 and 406 calculate shift1 =
max(5, 17 - bitDepth).
[0191] In addition, offset1 = 1 << (shift1 - 1) is calculated with respect to the bit depth
bitDepth. Then, a result pbSamples of the weighted averaging is calculated as
pbSamples = Clip3(0, (1 << bitDepth) - 1, (predSamplesLA * wValue + predSamplesLB * (8 - wValue) + offset1) >> shift1).
Here, predSamplesLA is a pixel value motion-compensated using a motion vector mvLA,
and predSamplesLB is a pixel value motion-compensated using a motion vector mvLB.
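For illustration only, the weighted averaging described above can be sketched in C++ for a single luminance sample as follows. The wValue expressions follow the reconstruction given above and the weights are assumed to take integer values from 0 to 8; the function name and parameter names are illustrative.

#include <algorithm>

// Clip3 as used in the text: clamp x to the range [lo, hi].
static int Clip3(int lo, int hi, int x) { return std::min(std::max(x, lo), hi); }

// Weighted averaging of the two uni-prediction signals for the luminance sample at
// position (x, y) of an nCbW x nCbH coding block in the triangle merge mode.
// splitDir = 0 corresponds to Fig. 38A and splitDir = 1 to Fig. 38B. predSampleLA and
// predSampleLB are the sample values motion-compensated with mvLA and mvLB, respectively.
int triangleWeightedSample(int predSampleLA, int predSampleLB,
                           int x, int y, int nCbW, int nCbH,
                           int splitDir, int bitDepth) {
    const int nCbR = (nCbW > nCbH) ? (nCbW / nCbH) : (nCbH / nCbW);
    int wValue;
    if (splitDir == 0) {
        wValue = (nCbW > nCbH) ? Clip3(0, 8, x / nCbR - y + 4)
                               : Clip3(0, 8, y / nCbR - x + 4);
    } else {
        wValue = (nCbW > nCbH) ? Clip3(0, 8, nCbH - 1 - x / nCbR - y + 4)
                               : Clip3(0, 8, nCbW - 1 - y / nCbR - x + 4);
    }
    const int shift1  = std::max(5, 17 - bitDepth);
    const int offset1 = 1 << (shift1 - 1);
    return Clip3(0, (1 << bitDepth) - 1,
                 (predSampleLA * wValue + predSampleLB * (8 - wValue) + offset1) >> shift1);
}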
Storage process in the coding information storage memory
[0192] The inter prediction information obtained in the triangle merge mode is stored in
the coding information storage memory so that the inter prediction information can
be referred to as inter prediction information neighboring a target block when coding
and decoding are performed. A storage process in the coding information storage memory
is performed in 4 × 4 subblock units and the inter prediction information specified
by the partitions is stored.
[0193] The specified inter prediction information is the inter prediction information of a partition
0 (UNI0) of uni-prediction and a partition 1 (UNI1) of uni-prediction. A partition
2 (BI) of bi-prediction is obtained by using the inter prediction information of UNI0
and UNI1.
[0194] For the subblocks on the diagonal line split as the partitions, in a case where the
weighting illustrated in Fig. 43 is performed, a partition 2 as illustrated in Fig. 44 is
conceivable. Fig. 44A illustrates a case where the region where the weighting illustrated in Fig.
43A is performed is set as the partition 2. Similarly, Fig. 44B corresponds to Fig.
43B. Fig. 44C illustrates a case where the subblocks for which the weighting in Fig. 43A
is performed are set as the partition 2. Similarly, Fig. 44D corresponds to Fig. 43B.
Fig. 44E illustrates a case where the subblocks for the entirety of which the weighting
in Fig. 43A is performed are set as the partition 2. Similarly, Fig. 44F corresponds
to Fig. 43B.
[0195] The region of the partition 2 illustrated in Fig. 44 can also be stored as BI as illustrated
in Fig. 45. However, in this embodiment, for subblocks for only a part of which the weighting
is performed as illustrated in Fig. 46, the inter prediction information belonging to
the partition having the larger weighting value is stored as uni-prediction (UNIY). Here,
Y is 0 or 1.
[0196] Further, as illustrated in Fig. 47 and Fig. 48, subblocks that would originally be stored
as BI are stored as uni-prediction using the inter prediction information of a predetermined
partition. Fig. 47A and Fig. 47B illustrate a case where UNI0 is the predetermined partition.
Fig. 48A and Fig. 48B illustrate a case where UNI1 is the predetermined partition.
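As a non-limiting illustration of the storage process described above, the following C++ sketch stores the motion information of one 4 × 4 subblock, storing the partition 2 (BI) region as the uni-prediction of the predetermined partition (UNI1 in this sketch) instead of as bi-prediction. The structures and names are illustrative.

#include <optional>

// Minimal stand-ins for the stored motion information (see the earlier sketches).
struct MotionVector { int x = 0; int y = 0; };
struct UniPredInfo  { MotionVector mv; int refIdx = -1; };
struct StoredMotionInfo {
    std::optional<UniPredInfo> listL0;  // filled when the stored uni-prediction belongs to L0
    std::optional<UniPredInfo> listL1;  // filled when the stored uni-prediction belongs to L1
};

enum Partition { UNI0 = 0, UNI1 = 1, BI = 2 };

// Store the inter prediction information of one 4 x 4 subblock according to its partition.
// uni0 and uni1 are the uni-prediction motion information designated by merge triangle
// index 0 and 1, and uni0IsL0 / uni1IsL0 indicate which motion information list each
// belongs to. Subblocks of partition 2 (BI) are stored as the uni-prediction of the
// predetermined partition, here UNI1.
StoredMotionInfo storeSubblockMotion(Partition partition,
                                     const UniPredInfo& uni0, bool uni0IsL0,
                                     const UniPredInfo& uni1, bool uni1IsL0) {
    StoredMotionInfo stored;
    const UniPredInfo& source   = (partition == UNI0) ? uni0 : uni1;  // BI falls back to UNI1
    const bool         sourceL0 = (partition == UNI0) ? uni0IsL0 : uni1IsL0;
    if (sourceL0) stored.listL0 = source; else stored.listL1 = source;
    return stored;
}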
[0197] As described in this embodiment, by storing a partition 2 (BI) of bi-prediction in
the coding information storage memory as UNIY of uni-prediction, it is possible to
reduce the memory amount required for storage. Further, since motion information of the
motion information list L0 and the motion information list L1 is stored in the coding
information storage memory without being converted into bi-prediction, it is possible to
reduce the processing amount. Further, it is possible to reduce the processing amount in a
case where the inter prediction information specified by the triangle merge mode is referred
to and used in subsequent coding and decoding.
[0198] By selecting UNI1 as the predetermined partition as illustrated in Fig. 48A and Fig.
48B, in a case where the block neighboring to the right of a block in the triangle merge
mode illustrated in Fig. 48B is in the triangle merge mode illustrated in Fig. 48A, continuity
of the stored motion information can be maintained in the region where the two
triangle merge modes continue. Therefore, coding efficiency is improved compared with
selecting UNI0 as the predetermined partition as illustrated in Fig. 47A and Fig.
47B.
[0199] Selecting UNI1 as the predetermined partition as illustrated in Fig. 48A and Fig.
48B can also maintain continuity of processing because the motion information of the
partitions is accumulated after coding/decoding. Therefore, the processing amount is
reduced compared with selecting UNI0 as the predetermined partition as illustrated in
Fig. 47A and Fig. 47B.
[0200] By fixing the uni-prediction to be saved as UNI1 to the L0 prediction, the L0 prediction
can be treated in the same manner as in a P slice, so that the processing amount is reduced
compared with a case where the uni-prediction is saved as the L1 prediction.
[0201] In all the embodiments described above, a plurality of technologies may be combined
with each other.
[0202] In all the embodiments described above, the bitstream output from the picture coding
device has a specific data format so as to be decoded following the coding method
used in the embodiment. Furthermore, the picture decoding device corresponding to
the picture coding device is capable of decoding the bitstream of the specific data
format.
[0203] In a case where a wired or wireless network is used to exchange a bitstream between
the picture coding device and the picture decoding device, the bitstream may be converted
to a data format suitable for the transmission form of the communication channel in
transmission. In this case, there are provided a transmission device that converts
the bitstream output from the picture coding device into coded data in a data format
suitable for the transmission form of the communication channel and transmits the
coded data to the network, and a reception device that receives the coded data from
the network to be restored to the bitstream and supplies the bitstream to the picture
decoding device. The transmission device includes memory that buffers a bitstream
output from the picture coding device, a packet processing unit that packetizes the
bitstream, and a transmitter that transmits packetized coded data via a network. The
reception device includes a receiver that receives packetized coded data via a network,
memory that buffers the received coded data, and a packet processing unit that depacketizes
the coded data to reconstruct a bitstream and supplies the reconstructed bitstream to the
picture decoding device.
[0204] Moreover, a display unit that displays a picture decoded by the picture decoding
device may be added, as a display device, to the configuration. In that case, the
display unit reads out a decoded picture signal constructed by the decoded picture
signal superimposer 207 and stored in the decoded picture memory 208, and displays
the signal on the screen.
[0205] Moreover, an imaging unit may be added to the configuration so as to function as
an imaging device by inputting a captured picture to the picture coding device. In
that case, the imaging unit inputs the captured picture signal to the block split
unit 101.
[0206] Fig. 37 illustrates an example of a hardware configuration of the coding-decoding
device according to the present embodiment. The coding-decoding device includes the
configurations of the picture coding device and the picture decoding device according
to the embodiments of the present invention. A coding-decoding device 9000 includes
a CPU 9001, a codec IC 9002, an I/O interface 9003, memory 9004, an optical disk drive
9005, a network interface 9006, and a video interface 9009, in which individual units
are connected by a bus 9010.
[0207] A picture encoder 9007 and a picture decoder 9008 are typically implemented as a
codec IC 9002. The picture coding process of the picture coding device according to
the embodiments of the present invention is executed by the picture encoder 9007.
The picture decoding process in the picture decoding device according to the embodiment
of the present invention is executed by the picture decoder 9008. The I/O interface
9003 is implemented by a USB interface, for example, and connects to an external keyboard
9104, mouse 9105, or the like. The CPU 9001 controls the coding-decoding device 9000
on the basis of user's operation input via the I/O interface 9003 so as to execute
operation desired by the user. The user's operations on the keyboard 9104, the mouse
9105, or the like include selection of which function of coding or decoding is to
be executed, coding quality setting, input/output destination of a bitstream, input/output
destination of a picture, or the like.
[0208] In a case where the user desires operation of reproducing a picture recorded on a
disk recording medium 9100, the optical disk drive 9005 reads out a bitstream from
the inserted disk recording medium 9100, and transmits the readout bitstream to the
picture decoder 9008 of the codec IC 9002 via the bus 9010. The picture decoder 9008
executes a picture decoding process in the picture decoding device according to the
embodiments of the present invention on the input bitstream, and transmits the decoded
picture to the external monitor 9103 via the video interface 9009. The coding-decoding
device 9000 has a network interface 9006, and can be connected to an external distribution
server 9106 and a mobile terminal 9107 via a network 9101. In a case where the user
desires to reproduce a picture recorded on the distribution server 9106 or the mobile
terminal 9107 instead of the picture recorded on the disk recording medium 9100, the
network interface 9006 obtains a bitstream from the network 9101 instead of reading
out a bitstream from the inserted disk recording medium 9100. In a case where the user
desires to reproduce the picture recorded in the memory 9004, the picture decoding
processing is performed by the picture decoding device according to the embodiments
of the present invention on the bitstream recorded in the memory 9004.
[0209] In a case where the user desires to perform operation of coding a picture captured
by an external camera 9102 and recording the picture in the memory 9004, the video
interface 9009 inputs the picture from the camera 9102, and transmits the picture
to the picture encoder 9007 of the codec IC 9002 via the bus 9010. The picture encoder
9007 executes the picture coding process by the picture coding device according to
the embodiment of the present invention on a picture input via the video interface
9009 and thereby creates a bitstream. Subsequently, the bitstream is transmitted to
the memory 9004 via the bus 9010. In a case where the user desires to record a bitstream
on the disk recording medium 9100 instead of the memory 9004, the optical disk drive
9005 writes the bitstream on the inserted disk recording medium 9100.
[0210] It is also possible to implement a hardware configuration having a picture coding
device and not having a picture decoding device, or a hardware configuration having
a picture decoding device and not having a picture coding device. Such a hardware
configuration is implemented by replacing the codec IC 9002 with the picture encoder
9007 or the picture decoder 9008.
[0211] The above-described process related to coding and decoding may naturally be implemented
as a transmission, storage, and reception device using hardware, and alternatively,
the process may be implemented by firmware stored in read only memory (ROM), flash
memory, or the like, or by software provided for a computer or the like. The firmware
program and the software program may be provided by being recorded on a recording
medium readable by a computer or the like, may be provided from a server through a
wired or wireless network, or may be provided through data broadcasting by terrestrial
or satellite digital broadcasting.
[0212] The present invention has been described with reference to the embodiments above, which
are merely exemplary. It can be readily conceived by those skilled in the art that various
modification examples may be made by making various combinations of the above-described
components or processes, and such modifications are also encompassed in the technical scope
of the present invention.
[0213] The present invention can be used for picture coding and decoding techniques that
split a picture into blocks to perform prediction.
- 100 picture coding device
- 101 block split unit
- 102 inter prediction unit
- 103 intra prediction unit
- 104 decoded picture memory
- 105 prediction method determiner
- 106 residual generation unit
- 107 orthogonal transformer/quantizer
- 108 bit strings coding unit
- 109 inverse quantizer/inverse orthogonal transformer
- 110 decoded picture signal superimposer
- 111 coding information storage memory
- 200 picture decoding device
- 201 bit strings decoding unit
- 202 block split unit
- 203 inter prediction unit
- 204 intra prediction unit
- 205 coding information storage memory
- 206 inverse quantizer/inverse orthogonal transformer
- 207 decoded picture signal superimposer
- 208 decoded picture memory