[0001] The present invention relates to a method for processing video data for display on
a display device having a plurality of luminous elements by applying a dithering function
to at least a part of the video data to refine the grey scale portrayal of video pictures
of the video data. Furthermore, the present invention relates to a corresponding device
for processing video data including dithering means.
Background
[0002] A PDP (Plasma Display Panel) utilizes a matrix array of discharge cells, which can
only be "ON" or "OFF". Unlike a CRT or LCD, in which grey levels are expressed by
analogue control of the light emission, a PDP controls the grey level by modulating
the number of light pulses per frame (sustain pulses). This time-modulation will be
integrated by the eye over a period corresponding to the eye time response. Since
the video amplitude is portrayed by the number of light pulses, occurring at a given
frequency, more amplitude means more light pulses and thus more "ON" time. For this
reason, this kind of modulation is also known as PWM, pulse width modulation.
[0003] This PWM is responsible for one of the PDP image quality problems: the poor grey
scale portrayal quality, especially in the darker regions of the picture. This is
due to the fact that the displayed luminance is linear in the number of pulses, whereas
the eye response and its sensitivity to noise are not. In darker areas the eye is more
sensitive than in brighter areas. This means that even though modern PDPs can display
about 255 discrete video levels, the quantization error is quite noticeable in the
darker areas.
[0004] As mentioned before, a PDP uses PWM (pulse width modulation) to generate the different
shades of grey. In contrast to CRTs, where luminance is approximately quadratic in the applied
cathode voltage, PDP luminance is linear in the number of discharge impulses. Therefore
an approximately quadratic digital gamma function has to be applied to the video before
the PWM.
[0005] Due to this gamma function, for smaller video levels many input levels are mapped
to the same output level. In other words, for darker areas the output number of quantization
bits is smaller than the input number; in particular, values smaller than 16 (when
working with 8 bits for the video input) are all mapped to 0. This corresponds to roughly a
four-bit resolution, which is unacceptable for video.
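The collapse of dark input levels described above can be illustrated with a short sketch (not part of the patent; the exact gamma curve is an assumption, here a purely quadratic mapping with truncation):

```python
def quadratic_gamma(level: int, max_level: int = 255) -> int:
    """Map an 8-bit input video level through a quadratic gamma curve,
    truncating to an integer output level (the truncation is the point:
    it discards the fractional resolution that dithering later rebuilds)."""
    return int(max_level * (level / max_level) ** 2)

# Every input level below 16 collapses to output level 0:
# 255 * (15/255)^2 is about 0.88, which truncates to 0,
# while input 16 is the first level that reaches output 1.
dark_outputs = [quadratic_gamma(v) for v in range(16)]
print(dark_outputs)       # sixteen zeros
print(quadratic_gamma(16))  # first non-zero output
```

This makes the quantization loss in dark areas concrete: sixteen distinct input levels become a single output level.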
[0006] One known solution to improve the quality of the displayed pictures is to artificially
increase the number of displayed video levels by using dithering. Dithering is a known
technique for avoiding the loss of amplitude resolution bits due to truncation. However,
this technique only works if the required resolution is available before the truncation
step. This is the case in most applications, since the video data after the
gamma operation used for pre-correction of the video signal has 16-bit resolution.
In principle, dithering can bring back as many bits as were lost by truncation. However,
the dithering noise frequency decreases, and therefore becomes more noticeable, with
the number of dithered bits.
[0007] The concept of dithering shall be explained by the following example. A quantization
step of 1 shall be reduced by dithering. The dithering technique uses the temporal
integration property of the human eye. The quantization step may be reduced to 0.5
by using 1-bit dithering. Accordingly, for half of the time within the time response of
the human eye the value 1 is displayed, and for half of the time the value 0 is displayed.
As a result the eye sees the value 0.5.
[0008] Optionally, the quantization steps may be reduced to 0.25. Such dithering requires
two bits. For obtaining the value 0.25, the value 1 is shown for a quarter of the time
and the value 0 for three quarters of the time. For obtaining the value 0.5, the value 1
is shown for two quarters of the time and the value 0 for two quarters of the time. Similarly,
the value 0.75 may be generated. In the same manner, quantization steps of 0.125 may
be obtained by using 3-bit dithering. This means that 1 bit of dithering multiplies
the number of available output levels by 2, 2 bits of dithering multiply it by 4, and
3 bits of dithering multiply it by 8. A minimum of 3 bits of dithering may be required
to give the grey scale portrayal a 'CRT' look.
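The fractional levels described in the two paragraphs above can be sketched as follows (an illustrative simulation, not part of the patent; `perceived_level` is a hypothetical helper standing in for the eye's temporal integration):

```python
def perceived_level(frame_sequence):
    """Average the values displayed over successive frames, mimicking
    the temporal integration of the human eye."""
    return sum(frame_sequence) / len(frame_sequence)

# 1-bit dithering: alternate 1 and 0 -> perceived 0.5
half = perceived_level([1, 0])

# 2-bit dithering over a 4-frame cycle: steps of 0.25
quarter        = perceived_level([1, 0, 0, 0])  # 0.25
three_quarters = perceived_level([1, 1, 1, 0])  # 0.75
```

Each extra dithering bit doubles the cycle length and therefore halves the quantization step, which is why the text notes that the dithering noise frequency drops as more bits are dithered.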
[0009] The dithering methods proposed in the literature (like error diffusion) were mainly
developed to improve the quality of still images (fax applications and newspaper photo
portrayal). The results obtained are therefore not optimal if the same dithering algorithms
are applied directly to PDPs, especially when displaying video with motion.
[0010] The dithering most adapted to PDPs until now is the Cell-Based Dithering described
in the European patent application EP-A-1 136 974 and the Multi-Mask dithering described
in the European patent application with the filing number 01 250 199.5, which improve
grey scale portrayal but add high-frequency, low-amplitude dithering noise. Express
reference is made to both documents.
[0011] Cell-based dithering adds a temporal dithering pattern that is defined for every
panel cell and not for every panel pixel, as shown in Fig. 1. A panel pixel is composed
of three cells: a red, a green and a blue cell. This has the advantage of rendering the
dithering noise finer and thus less noticeable to the human viewer.
[0012] Because the dithering pattern is defined cell-wise, it is not possible to use techniques
like error diffusion, in order to avoid colouring of the picture when one cell would
diffuse into the contiguous cell of a different colour. This is not a big disadvantage,
because an undesirable low-frequency moving interference has sometimes been observed
between the diffusion of the truncation error and a moving pattern belonging to the
video signal. Error diffusion works best for static pictures. Instead of using
error diffusion, a static 3-dimensional dithering pattern is proposed.
[0013] This static 3-dimensional dithering is based on a spatial (2 dimensions x and y)
and temporal (third dimension t) integration of the eye. For the following explanations,
the matrix dithering can be represented as a function with three variables: ϕ(x,y,t).
The three parameters x, y and t will represent a kind of phase for the dithering.
Now, depending on the number of bits to be rebuilt, the period of these three phases
can evolve.
[0016] The spatial resolution of the eye is good enough to be able to see a fixed static
pattern A, B, A, B, but if a third dimension, namely time, is added in the form
of an alternating function, then the eye will only be able to see the average value
of each cell.
[0017] The case of a cell located at the position (xo,yo) shall be considered. The value
of this cell will change from frame to frame as follows: ϕ(xo,yo,to) = A,
ϕ(xo,yo,to+1) = B, ϕ(xo,yo,to+2) = A and so on.
[0018] The eye time response of several milliseconds (temporal integration) can then be
represented by the following formula:

Celleye(xo,yo) = (1/T) · Σt=to...to+T ϕ(xo,yo,t)

which, in the present example, leads to

Celleye(xo,yo) = (A+B)/2
[0019] It should be noted that the proposed pattern, when integrated over time, always gives
the same value for all panel cells. If this were not the case, under some circumstances
some cells might acquire an amplitude offset relative to other cells, which would correspond
to an undesirable fixed spurious static pattern.
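The equal-time-average property stated above can be checked with a small sketch (the 2x2x2 pattern below is a hypothetical example, not a pattern from the patent):

```python
# phi[(x, y, t)] for a 2x2 spatial block with a 2-frame temporal period:
# a spatial checkerboard whose polarity inverts every frame.
pattern = {
    (0, 0, 0): 1, (1, 0, 0): 0, (0, 1, 0): 0, (1, 1, 0): 1,
    (0, 0, 1): 0, (1, 0, 1): 1, (0, 1, 1): 1, (1, 1, 1): 0,
}

def temporal_average(x, y, period=2):
    """Eye-style temporal integration of one cell over the pattern period."""
    return sum(pattern[(x, y, t)] for t in range(period)) / period

averages = {(x, y): temporal_average(x, y) for x in (0, 1) for y in (0, 1)}
# every cell integrates to the same value, so no static offset pattern
# can build up between cells
```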
[0020] While displaying moving objects on the plasma screen, the human eye follows the
objects and no longer integrates the same cell of the plasma (PDP) over time. In
that case, the third dimension no longer works perfectly and a dithering pattern
can be seen.
[0021] In order to better understand this problem, the following example of a movement
V = (1;0) shall be looked at, which represents a motion in x-direction of one pixel
per frame. In that case, the eye will look at (xo,yo) at time to and then it will follow
the movement to pixel (xo+1,yo) at time to+1 and so on. In that case, the cell seen
by the eye will be defined as follows:

Celleye(xo,yo) = (1/T) · Σt=to...to+T ϕ(xo+Vx·(t−to), yo+Vy·(t−to), t)

which corresponds to

Celleye(xo,yo) = (1/T) · Σt=to...to+T ϕ(xo+(t−to), yo, t)
In that case, the third dimension aspect of the dithering will not work correctly
and only the spatial dithering will be available. Such an effect will make the dithering
more or less visible depending on the movement. The dithering pattern is no longer
hidden by the spatial and temporal eye integration.
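The failure mode described above can be simulated with a short sketch (the pattern ϕ and the coordinates are hypothetical, chosen only for illustration):

```python
def phi(x, y, t):
    """Hypothetical 1-bit dithering pattern: a spatial checkerboard
    whose polarity inverts every frame."""
    return (x + y + t) % 2

# Static cell: the eye integrates 0, 1, 0, 1 ... -> average 0.5,
# so the temporal dimension of the dithering works.
static_seq = [phi(3, 5, t) for t in range(4)]

# Eye following a movement V = (1;0): the tracked position advances
# with t, so the eye samples phi(3+t, 5, t) = (8 + 2t) % 2, which is
# constant. The temporal dithering collapses; only the spatial
# component remains.
tracked_seq = [phi(3 + t, 5, t) for t in range(4)]
```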
[0022] In view of that, it is the object of the present invention to eliminate the dithering
pattern that appears to a viewer observing a moving object in a picture.
[0023] According to the present invention this object is solved by a method for processing
video data for display on a display device having a plurality of luminous elements
by applying a dithering function to at least part of said video data to refine the
grey scale portrayal of video pictures of said video data, computing at least one
motion vector from said video data and changing the phase, amplitude, spatial resolution
and/or temporal resolution of said dithering function in accordance with said at least
one motion vector when applying the dithering function to said video data.
[0024] Furthermore, according to the present invention there is provided a device for processing
video data for display on a display device having a plurality of luminous elements
including dithering means for applying a dithering function to at least a part of
said video data to refine the grey scale portrayal of video pictures of said video
data, and motion estimation means connected to said dithering means for computing
at least one motion vector from said video data, wherein the phase, amplitude, spatial
resolution and/or temporal resolution of said dithering function is changeable in
accordance with said at least one motion vector.
[0025] Advantageously, the dithering function or pattern has two spatial dimensions and one
temporal dimension. Such a dithering function enables an enhanced reduction of quantization
steps in the case of static pictures compared to error diffusion.
[0026] The dithering function may be based on a plurality of masks. Thus, different dither
patterns may be provided for different entries in a number of least significant bits
of the data word representing the input video level. This makes it possible to suppress
the disturbing patterns occurring on the plasma display panel when using the conventional
dither patterns.
[0027] Furthermore, the application of the dithering function or pattern may be based on
the single luminous elements, called cells, of the display device. That is, separate
dithering numbers may be added to each colour component R, G, B of a pixel. Such cell-based
dithering has the advantage of rendering the dithering noise finer and thus making it
less noticeable to the human viewer.
[0028] The dithering may be performed by a 1-, 2-, 3- and/or 4-bit function. The number
of bits used depends on the processing capability. In general, 3-bit dithering is
enough so that most of the quantization noise is not visible.
[0029] Preferably, the motion vector is computed for each pixel individually. By doing so
the quality of higher resolution dithering can be enhanced compared to a technique
where the motion vector is computed for a plurality of pixels or a complete area.
[0030] Furthermore, the motion vector should be computed for both spatial dimensions x and
y. Thus, any movement of an object observed by the human viewer may be taken into account
in the dithering process.
[0031] As already mentioned, a pre-correction by the quadratic gamma function should be
performed before the dithering process. Thus, also the quantization errors produced
by the gamma function correction are reduced with the help of dithering.
[0032] The temporal component of the dithering function may be introduced by controlling
the dithering in the rhythm of picture frames. Thus, no additional synchronisation
has to be provided.
[0033] The dithering according to the present invention may be based on Cell-based and/or
Multi-Mask dithering, which consists of adding a dithering signal that is defined
for every plasma cell and not for every pixel. In addition, such dithering may further
be optimized for each video level. This makes the dithering noise finer and less noticeable
to the human viewer.
[0034] The adaptation of the dithering pattern to the movement in the picture, in order to
suppress the dithering structure appearing for specific movements, may be obtained by
using a motion estimator to change the phase or other parameters of the dithering
function for each cell. In that case, even if the eye is following the movement, the
quality of the dithering will stay constant and a dithering pattern in case of
motion will be suppressed. Furthermore, this invention can be combined with any kind
of matrix dithering.
Drawings
[0035] Exemplary embodiments of the invention are illustrated in the drawings and are explained
in more detail in the following description. In the drawings:
- Figure 1
- shows the principle of pixel-based dithering and cell-based dithering;
- Figure 2
- illustrates the concept of 3-dimensional matrix dithering; and
- Figure 3
- shows a block diagram of a hardware implementation for the algorithm according to
the present invention.
- Figure 4
- shows another embodiment of the block diagram.
Exemplary embodiments
[0036] In order to suppress the visible pattern of a classical matrix dithering in the case
of moving pictures, the motion in the picture is taken into account by using a motion
estimator.
[0037] This will provide, for each pixel M(xo,yo) of the screen, a vector
V(xo,yo) = (Vx(xo,yo), Vy(xo,yo)) representing its movement. In that case, this vector
can be used to change the phase of the dithering according to the formula:

ζ(xo,yo,t) = ϕ(xo − Vx(xo,yo)·t, yo − Vy(xo,yo)·t, t)
[0038] More generally, the new dithering pattern will depend on five parameters and can
be defined as follows:

ζ(xo, yo, Vx(xo,yo), Vy(xo,yo), t)
[0039] A big advantage of such a motion-compensated dithering is its robustness regarding
the motion vectors. In fact, the role of the motion vectors is to avoid any visible
dithering pattern during a movement that suppresses the temporal integration
of the eye. Even if the motion vectors are not exact, they can still suppress the pattern.
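A minimal sketch of this idea, assuming a compensation rule of the form ζ(x,y,t) = ϕ(x − Vx·t, y − Vy·t, t) (an assumption for illustration; the exact formula of the invention may differ):

```python
def phi(x, y, t):
    """Hypothetical 1-bit pattern: spatial checkerboard inverted every frame."""
    return (x + y + t) % 2

def zeta(x, y, t, vx, vy):
    """Motion-compensated dithering: shift the spatial phase against
    the motion so a tracking eye sees the full temporal sequence."""
    return phi(x - vx * t, y - vy * t, t)

vx, vy = 1, 0  # one pixel per frame in x-direction
# The eye tracks the object starting from (3, 5); at frame t it looks
# at (3 + vx*t, 5 + vy*t). The compensation cancels the displacement:
# zeta(3+t, 5, t) = phi(3, 5, t), i.e. the static temporal sequence.
tracked = [zeta(3 + vx * t, 5 + vy * t, t, vx, vy) for t in range(4)]
print(tracked)  # 0, 1, 0, 1: the alternation is restored
```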
[0040] According to a more optimised solution, for each pixel M(xo,yo) of the screen, a
vector V(xo,yo,to) = (Vx(xo,yo,to), Vy(xo,yo,to)) representing its movement at time to
is provided. In that case, this vector is used to change the phase of the dithering
according to the formula:

ζ(xo,yo,to) = ϕ(xo − fx(xo,yo,to), yo − fy(xo,yo,to), to)

where f(x,y,t) is a recursive function described as follows:

fx(x,y,t+1) = (fx(x,y,t) + Vx(x,y,t)) mod(τ) and fy(x,y,t+1) = (fy(x,y,t) + Vy(x,y,t)) mod(τ).
[0041] In this formula, τ represents the period of the dithering and mod(τ) the modulo-τ
function. For instance, if τ = 4, there is a periodic dithering pattern over 4 frames,
which means that ϕ(xo,yo,to) = ϕ(xo,yo,to+4), and the modulo-4 function means that:
(0)mod(4)=0, (1)mod(4)=1, (2)mod(4)=2, (3)mod(4)=3, (4)mod(4)=0, (5)mod(4)=1,
(6)mod(4)=2, (7)mod(4)=3 and so on.
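The recursive, modulo-τ phase bookkeeping of [0040] and [0041] can be sketched as follows (function names and the exact update rule are assumptions for illustration):

```python
TAU = 4  # dithering period in frames

def next_phase(phase, v):
    """Advance one phase component by the motion component, keeping it
    modulo tau so only log2(tau) bits per value need to be stored."""
    return (phase + v) % TAU

# A cell whose motion component is 1 pixel/frame, tracked over 6 frames:
phase = 0
history = []
for t in range(6):
    phase = next_phase(phase, 1)
    history.append(phase)
print(history)  # cycles 1, 2, 3, 0, 1, 2 - the mod-4 wrap-around
```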
[0042] More generally, the new dithering pattern will depend on five parameters and can
be defined as follows: ζ(xo, yo, Vx(xo,yo,t), Vy(xo,yo,t), t). The only difference now
is that the vectors used are taken from more than one frame. Preferably, 3-bit dithering
is implemented so that up to 8 frames are used for dithering. If the number of frames
used for dithering is increased, the frequency of the dithering might become too low,
and flicker will appear. Typically, 3-bit dithering is rendered with a 4-frame cycle
and a 2D spatial component.
[0043] Figure 3 illustrates a possible implementation of the algorithm. RGB input pictures
indicated by the signals R0, G0 and B0 are forwarded to a gamma function block 10, which
can consist of a look-up table (LUT) or be formed by a mathematical function. The outputs
R1, G1 and B1 of the gamma function block 10 are forwarded to a dithering block 12, which
takes into account the pixel position and the frame parity as temporal component for the
computation of the dithering value. The frame parity is based on the frame number within
one dithering cycle. For instance, within a 3-bit dithering based on a 4-frame cycle,
the frame number changes cyclically from 0 to 3.
[0044] In parallel to that, the input picture R0, G0 and B0 is also forwarded to a motion
estimator 14, which provides, for each pixel, a motion vector (Vx, Vy). This motion
vector is additionally used by the dithering block 12 for computing the dithering pattern.
[0045] The video signals R1, G1, B1 subjected to the dithering in the dithering block 12
are output as signals R2, G2, B2 and are forwarded to a sub-field coding unit 16, which
performs sub-field coding under the control of the plasma control unit 18. The plasma
control unit 18 provides the code for the sub-field coding unit 16 and the dithering
pattern DITH for the dithering block 12.
[0046] As to the sub-field coding, express reference is made to the already mentioned
European patent application EP-A-1 136 974.
[0047] The sub-field signals for each colour output from the sub-field coding unit 16 are
indicated by reference signs SFR, SFG, SFB. For plasma display panel addressing, these
sub-field code words for one line are all collected in order to create a single very
long code word which can be used for the line-wise PDP addressing. This is carried out
in a serial-to-parallel conversion unit 20, which is itself controlled by the plasma
control unit 18.
[0048] Furthermore, the control unit 18 generates all scan and sustain pulses for PDP control.
It receives horizontal and vertical synchronizing signals for reference timing.
[0049] Figure 4 illustrates a modification of the embodiment of figure 3. In this case,
a frame memory is used at the dithering block level. The additional memory requirements
are not severe, since the value to be stored is modulo τ, which is typically around
4 for standard dithering in order to limit the temporal visibility of the dithering
(low frequency). In that case, 2 bits per cell are enough to store values that are
modulo 4. For instance, a WXGA panel will require 853x3x480x2 = 2.34 Mbit.
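The quoted storage figure can be checked directly (treating "Mbit" as 1024² bits, which is what the quoted value implies):

```python
# WXGA panel: 853 x 480 pixels, 3 cells per pixel, 2 bits per stored
# modulo-4 phase value.
cells = 853 * 3 * 480
bits = cells * 2
mbit = bits / (1024 ** 2)  # 2,456,640 bits
print(round(mbit, 2))  # -> 2.34
```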
[0050] Although the present embodiment requires the use of a motion estimator, such a motion
estimator is already mandatory for other tasks such as false contour compensation, sharpness
improvement and phosphor lag reduction. Since the same vectors can be reused, the extra
costs are limited.
[0051] Motion-compensated dithering is applicable to all colour cell-based displays (for
instance colour LCDs) where the number of resolution bits is limited.
[0052] In all cases the present invention brings the advantages of suppressing the visible
pattern of classical matrix dithering in case of moving pictures and of strong robustness
regarding the motion vector field.
1. Method for processing video data (R0, G0, B0) for display on a display device having a plurality of luminous elements by
applying a dithering function to at least part of said video data (R0, G0, B0) to refine the grey scale portrayal of video pictures of said video data,
characterized by
computing at least one motion vector from said video data (R0, G0, B0) and
changing the phase, amplitude, spatial resolution and/or temporal resolution of said
dithering function in accordance with said at least one motion vector when applying
the dithering function to said video data (R0, G0, B0).
2. Method according to claim 1, wherein said dithering function includes two spatial
dimensions and one temporal dimension.
3. Method according to claim 1 or 2, wherein said dithering function includes the application
of a plurality of masks.
4. Method according to claim 1 or 2, wherein said applying of said dithering function
is based on single luminous elements called cells of said display device.
5. Method according to one of the claims 1 to 4, wherein said dithering function is a
1-, 2-, 3- and/or 4- bit dithering function.
6. Method according to one of the claims 1 to 5, wherein said at least one motion vector
is defined for each pixel or cell individually.
7. Method according to one of the claims 1 to 6, wherein said at least one motion vector
has two spatial dimensions.
8. Device for processing video data (R0, G0, B0) for display on a display device having a plurality of luminous elements including
dithering means (12) for applying a dithering function to at least a part of said
video data (R0, G0, B0) to refine the grey scale portrayal of video pictures of said video data (R0, G0, B0),
characterized by
motion estimation means (14) connected to said dithering means (12) for computing
at least one motion vector (Vx, Vy) from said video data (R0, G0, B0), wherein the phase, amplitude, spatial resolution and/or temporal resolution of
said dithering function is changeable in accordance with said at least one motion
vector (Vx, Vy).
9. Device according to claim 8, wherein said dithering function used by said dithering
means (12) includes two spatial dimensions and a temporal dimension.
10. Device according to claim 8 or 9, wherein said dithering function of said dithering
means (12) is based on a plurality of masks.
11. Device according to claim 8 or 9, wherein said dithering function of said dithering
means (12) is based on single luminous elements called cells of said display device.
12. Device according to one of the claims 8 to 11, wherein said dithering means (12) is
able to process a 1-, 2-, 3- and/or 4-bit dithering function.
13. Device according to one of the claims 8 to 12, wherein said at least one motion vector
(Vx, Vy) is definable for each pixel individually by said motion estimation means (14).
14. Device according to one of the claims 8 to 13, wherein said at least one motion vector
(Vx, Vy) includes two spatial dimensions.
15. Device according to one of the claims 8 to 14, further including gamma function means
(10) connected to said dithering means (12), so that the input signals of said dithering
means (12) are pre-corrected by a gamma function.
16. Device according to one of the claims 8 to 15, further including controlling means
(18) connected to said dithering means (12) for controlling said dithering means (12)
temporally in dependence on frames of said video data (R0, G0, B0).