TECHNICAL FIELD
[0001] The present disclosure relates to data processing, and more specifically to the encoding
and compression of image data.
BACKGROUND ART
[0002] The volume of images produced worldwide grows constantly. Photographic and video
data, by its nature, consumes a large share of available digital resources, such as
storage space and network bandwidth. Image compression technology plays a critical
role in reducing storage and bandwidth requirements. There is therefore a need for
high compression ratios combined with a quantifiably negligible loss of information.
[0003] Imaging is the technique of measuring and recording the amount of light
$L$ (radiance) emitted by each point of an object and captured by the camera. These
points are usually laid out in a two-dimensional grid and called pixels. The imaging
device records a measured digital value $d_{ADC}(x, y)$ that represents $L(x, y)$,
where $(x, y)$ are the coordinates of the pixel. The ensemble of values
$d_{ADC}(x, y)$ forms a raw digital image $M$. Typically $M$ consists of several tens
of millions of pixels and each $d_{ADC}(x, y)$ is coded over 8 to 16 bits. As a result,
each raw image requires hundreds of megabits to be represented, transferred and stored.
The large size of these raw images imposes several practical drawbacks: within the
imaging device (photo or video camera), the size limits the image transfer speed between
sensor and processor, and between processor and memory. This limits the maximum frame
rate at which images can be taken, or imposes the use of faster, more expensive and
more power-consuming communication channels. The same argument applies when images are
copied from the camera to external memory, or transmitted over a communication channel.
For example:
- transmitting raw "4K" 60 frames-per-second video in real time requires a bandwidth
of 9 Gbps, and storing one hour of such video requires 32 Tbit;
- transmitting a single 40-MPixel photograph from a cube satellite over a 1 Mbps link
takes about 10 minutes;
- backing up a 64-GByte memory from a photographic camera over a 100 Mbps Internet connection
takes more than one hour.
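As a sanity check, these figures can be reproduced by straightforward arithmetic. The sketch below does so in Python; the frame size and bit depths are illustrative assumptions (the text does not state them), chosen to match the quoted orders of magnitude.

```python
# Back-of-the-envelope check of the bandwidth/storage figures above.
# Assumed: 4K = 3840x2160 pixels at 16 bits/pixel raw (not stated in the
# text); any raw depth in the 12-18 bit range gives the same magnitudes.

pixels_4k = 3840 * 2160
raw_video_bps = pixels_4k * 60 * 16           # ~8.0e9 bit/s, order of "9 Gbps"
one_hour_bits = raw_video_bps * 3600          # ~2.9e13 bit, order of "32 Tbits"

photo_bits = 40e6 * 16                        # 40 MPixel at 16 bit/pixel
photo_seconds = photo_bits / 1e6              # over a 1 Mbps link: ~640 s

backup_bits = 64e9 * 8                        # 64 GByte
backup_seconds = backup_bits / 100e6          # over 100 Mbps: ~5120 s

print(f"raw 4K video: {raw_video_bps / 1e9:.1f} Gbps, "
      f"one hour: {one_hour_bits / 1e12:.0f} Tbit")
print(f"satellite photo: {photo_seconds / 60:.0f} min")
print(f"memory backup: {backup_seconds / 3600:.1f} h")
```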
[0004] To alleviate requirements on communication time and storage space, compression techniques
are required. Two main classes of image compression exist:
- A. Lossless image compression, where the digital image data can be restored exactly and
there is zero loss in quality. This comes at the expense of very low compression ratios,
typically 2:1 or less.
- B. Lossy image compression, where the compressed data cannot be restored to the original
values because some information is lost/excised during the compression, but which
can reach large compression ratios of 10:1 and higher. These compression ratios, however,
come at the expense of loss of image quality, i.e. distortion, and the creation of
image compression artifacts, i.e. image features that do not represent the original
scene but are due to the compression algorithm itself.
[0005] Lossy image compression methods, such as those in the JPEG standard or used in wavelet
compression, create distortion and artifacts. Distortion is the degree by which the
decoded image $Q$, i.e. the result of compressing and decompressing the original image,
differs from the original image $M$, and is usually measured as the root-mean-square
of the differences in the values of each corresponding pixel between $Q$ and $M$. Lossy
compression typically also introduces artifacts, a particularly harmful type
of distortion that introduces image features in $Q$, i.e. correlations between different
pixels, that are not present in $M$. Examples of artifacts are the "block" artifacts
generated by block image compression algorithms like JPEG, but also ringing, contouring
and posterizing. These artifacts are particularly pernicious, as they can be mistaken for
image features. Lossy image compression algorithms, as described for example in prior
art documents [1] and [2] cited below, typically consist of several steps. First,
the image is transformed into a representation where the correlation between adjacent
data point values is reduced. This transformation is reversible, so that no information
is lost. For example, this step could consist of calculating Fourier coefficients,
i.e. changing an image that is naturally encoded in position space into a spatial
frequency space. The second step is referred to as quantization. This step truncates
the value of the calculated coefficients to reduce the amount of data required to
encode the image (or block, consisting of e.g. 16x16 pixels). This second step is
irreversible and will introduce quantization errors when reconstructing the image
from the aforementioned coefficients. The quantization error will cause errors in
the reconstructed image or block. Predicting or simulating what the error would be
for any image or block is technically impossible, due to the extremely large number
of values that such an image or block may take. For example, a 16x16 block with 16 bits
per pixel may take $2^{16 \times 16 \times 16} = 2^{4096} \cong 10^{1233}$ different
values, impossible to test in practice. If the transformation is not restricted
to blocks, but rather is a function of the entire image, the number of possible values
becomes considerably larger.
[0006] More particularly, in standard lossy image compression techniques (e.g. document
[1]), a lossless "image transformer" is first applied to the image, resulting in a transformed
image where each data-point is a function of many input pixels and represents the
value of a point in a space that is not the natural $(x, y)$ position space of the image
but rather a transformed space; for example, a data-point might represent a Fourier
component. A lossy "image quantizer" is applied on the transformed image as a second
step. The amount of information loss cannot be accurately quantified in standard lossy
image compression techniques, for the above-mentioned reason of the extremely large
number of values that such an image or block may take; quantification is additionally
complicated by the fact that the lossy operation is applied in a space that is not the
natural image space.
[0007] Attempts have been made at characterizing the quality of the output of image compression
algorithms. These efforts have typically focused on characterizing the quality of
the reconstructed compressed image with respect to the human visual system, as reviewed
in documents [2, 3, 4, 5] cited below. Several metrics have been devised to characterize
the performance of compression algorithms; however, these metrics have significant
drawbacks. One such drawback is that they relate quality to the human visual system
and human perception, which are highly dependent on the subject viewing the image,
on the viewing conditions (e.g. eye-to-image distance, lighting of the image, environment
lighting conditions, attention, angle, image size), on the image rendering
algorithms (e.g. debayering, gamma correction) and on the characteristics of the output
device, such as display, projector screen, printer, paper etc. A second drawback of
characterizing the quality of image compression algorithms with respect to the human
visual system, or models thereof, is that for such methods to be relevant, no further
image processing must take place after image compression, as such processing would make
some unwanted compression artifacts visible. For example, a compression algorithm might
simplify dark areas of an image, removing detail, judging that such detail would not be
visible to the human eye. If it is later decided to lighten the image, this detail will
have been lost, resulting in visible artifacts. Yet another problem of the above-mentioned
methods is that they are unsuitable for applications not aimed solely at image reproduction
for a human observer, such as images for scientific, engineering, industrial,
astronomical, medical, geographical, satellite, legal and computer vision applications,
amongst others. In these applications data is processed in a different way than by the
human visual system, and a feature invisible to the untrained human eye, and therefore
removed by the above-mentioned image compression methods, could be of high importance
to the specific image processing system. For example, inappropriate image compression
of satellite imagery could result in the addition or removal of geographical features,
such as roads or buildings.
[0008] In this context, one may also mention document
EP 1 215 909 (K. Kagechi et al.), which is directed to providing an image encoding and decoding
device allowing the maximum compression rate whilst guaranteeing a visually uniform level
of picture quality. This device aims to set image compression parameters automatically so
as to reflect the characteristics of human vision, either by extracting the pixel regions
of the image that are perceived as visually deteriorated and using only their degree of
distortion for the evaluation of picture quality, or by classifying the image blocks
according to their properties so as to set up a separate evaluation criterion for each
class. Compression parameters are thus varied for each block of an image by calculating
its characteristic distortion using the original and the encoded-decoded image.
Consequently, while this device allows visual image quality to be improved at an
increased compression rate, it does not provide a well-quantifiable information loss,
given that the latter varies in each block of an image.
[0009] More recently, attempts at a more quantitative approach to the information loss in
digital image compression and reconstruction have been made [6], in particular by
examining the information loss caused by the most common image compression algorithms:
Discrete Wavelet Transform (DWT), Discrete Fourier Transform (DFT) and 2D Principal
Component Analysis (PCA). Several "quantitative" metrics are measured on several images
compressed with several algorithms, for example quantitative measures of "contrast",
"correlation", "dissimilarity", "homogeneity", discrete entropy, mutual information
or peak signal-to-noise ratio (PSNR). However, the effect of different compression
algorithms and parameters on these metrics depends strongly on the input image, so that
it is not possible to specify the performance of an algorithm, or even to choose the
appropriate algorithm or parameters, as applying them to different images gives conflicting
results. A further drawback of the methods described in [6] is that, although they are
quantitative in the sense that they output a value that might be correlated
with image quality, it is unclear how this number can be used: these numbers cannot
be used as uncertainties on the pixel values, the compression methods do not guarantee the
absence of artifacts, and it is even unclear how these numbers can be compared across
different compression methods and input images. Also, these quantitative methods do
not distinguish signal from noise, so that, for example, a high value of entropy could
be given by a large amount of retained signal (useful) or retained noise (useless).
[0010] The impossibility for the above-mentioned methods to achieve compression with quantifiable
information loss arises from the fact that the space in which the algorithm loses
data is very large (e.g. the number of pixels in the image or block times the number of
bits per pixel) and therefore cannot be characterized completely, as mentioned
earlier.
[0011] Image-processing systems that act independently on individual (or small number of)
pixels have been known and used for a long time. These systems typically provide a
"look-up table" of values so that each possible input value is associated with an
"encoded" output value. The main purpose of such processing has been to adapt the
response curves of input and output devices, such as cameras and displays respectively.
The functions used to determine the look-up tables have typically been logarithmic
or gamma law, to better reflect the response curve of the human visual system as well
as the large number of devices designed to satisfy the human eye, starting from legacy
systems like television phosphors and photographic film, all the way to modern cameras,
displays and printers that make use of such encodings. Although not their primary
purpose, these encodings do provide an insignificant level of data compression, such
as for example encoding a 12-bit raw sensor value over 10 bits, thus providing a compression
ratio of 1.2:1, as described, for example, in [7].
[0012] Finally, one may also mention in this context another category of disclosures, like
document
US 2010/189180 (M. Narroschke et al.) which discloses a method for coding a video signal using hybrid coding in order
to realize enhanced quantization as compared to prior art hybrid video coding. To
achieve the latter, the method provides an adaptive control allowing to switch between
the frequency - and the spatial domain for transforming the prediction error signal
which is generated in a previous step by subtracting a motion compensation prediction
signal of the video signal from the input video signal in order to reduce temporal
redundancy in the video signal. The output signal of the quantization step occurring
either in the frequency - or in the spatial domain is then passed to an entropy encoder
which provides the final output signal to be transmitted or stored. Said adaptive
control allows for deciding for each block of the video image whether it is to be
coded in the frequency - or in the spatial domain and generates a side information
on the domain effectively used. This amounts to enhancing quantization as compared
to prior art hybrid video coding by switching, for each image block to be quantized,
between the frequency - and the spatial domain, depending on where the rate distortion
costs are lowest and thus efficiency is highest, this being decided via said adaptive
control. In turn, the rate distortion costs, i.e. the information loss, is varying
at the level of quantization of each image block, such that quantifying the information
loss generated by this method is difficult.
REFERENCES
[0013]
- [1] W.M. Lawton, J.C. Huffman, and W.R. Zettler. Image compression method and apparatus,
1991. US Patent 5,014,134.
- [2] V. Ralph Algazi, Niranjan Avadhanam, and Robert R. Estes, Jr. Quality measurement and
use of pre-processing in image compression. Signal Processing, 70(3):215-229, 1998.
- [3] Ahmet M. Eskicioglu and Paul S. Fisher. Image quality measures and their performance.
IEEE Transactions on Communications, 43(12):2959-2965, 1995.
- [4] Philippe Hanhart, Martin Rerabek, Francesca De Simone, and Touradj Ebrahimi. Subjective
quality evaluation of the upcoming HEVC video compression standard. 2010.
- [5] Weisi Lin and C.-C. Jay Kuo. Perceptual visual quality metrics: A survey. Journal
of Visual Communication and Image Representation, 22(4):297-312, 2011.
- [6] Zhengmao Ye, H. Mohamadian, and Yongmao Ye. Quantitative analysis of information loss
in digital image compression and reconstruction. In Communication Software and Networks
(ICCSN), 2011 IEEE 3rd International Conference on, pages 398-402, May 2011.
- [7] Lars U. Borg. Optimized log encoding of image data, 2012. US Patent 8,279,308.
- [8] T.A. Welch. A technique for high-performance data compression. Computer, 17(6):8-19,
June 1984.
- [9] I. Matsuda, H. Mori, and S. Itoh. Lossless coding of still images using minimum-rate
predictors. In Proceedings of the 2000 International Conference on Image Processing,
volume 1, pages 132-135, 2000.
SUMMARY
[0014] In view of the above, it is therefore an object of the invention to provide a data
compression method that makes it possible to compress data with a high compression ratio
and minimal information loss, and to provide a technique for determining the information
loss, or the uncertainty, associated with each compression ratio.
[0015] A further object of the invention is to provide a decompression method adapted to
decompress the data compressed by the compression method of the invention.
[0016] A further object of the invention is a data processing method comprising the compression
method and the decompression method.
[0017] Here are described the embodiments of an image compression system and method that
have a number of advantages:
- Precise estimation of information loss at each chosen compression ratio.
- A specific, quantifiable, bound for the compression ratio at which information loss
remains negligible.
- High compression ratio (e.g. 8:1) with negligible information loss.
- No compression artifacts.
- Both the pixel value and uncertainty ("error bar") on the pixel value can be recovered
from the compressed image data and used in subsequent processing steps.
- Allows the user to choose the desired compromise between information loss and level
of compression.
- The system can be characterized for all possible photographs, so that the amount of
information lost can be predicted for any possible photograph.
[0018] To achieve the above advantages, the embodiments presented here distinguish the amount
of image information from the amount of image data. Information is the useful knowledge
acquired about the photographed object by means of the camera. Data is the digital
representation of the measurement result, and contains information as well as noise and
redundancy. In general, the amount of data captured will be much larger than the amount
of information, so that it is possible to eliminate a large amount of data whilst
eliminating only very little information.
[0019] We give an overview of an embodiment, before going into its details in the next section.
Referring to figure 4, first, the imaging system and compression method (e.g. sensor
and processing) are characterized, for example by calibration, by simulation or theoretically,
to determine the optimal pre-compression parameters. In contrast to standard lossy
image compression techniques, we apply an appropriately-tuned lossy pre-compression,
or quantizer, directly on the acquired raw image data, as a pre-compression step. The
amount of information lost by this quantizer can be calculated or measured, and appropriate
parameters chosen to guarantee that such information loss remains negligible. Pre-compression
removes most of the noise, which cannot, by its nature, be losslessly compressed,
however, thanks to the properly chosen parameters, this step only removes a negligible
amount of information. The appropriately-quantized image data, that now contains mostly
information and little noise, is fed to a lossless compression algorithm as a second
step. This algorithm will work very effectively, as more redundancy can be found in
this clean data than could be found in the raw image sensor data that contained a
large amount of noise. As now the second step is lossless, it will not reduce the
amount of information present in the image and therefore does not need to be characterized
for information loss.
[0020] We characterize the amount of information lost as the increase in the uncertainty
(i.e. the standard deviation, or "error bar") in the measured value of the radiance
after the application of the compression and decompression algorithm, and define that
this loss is negligible when the uncertainty on pixel values after decompression remains
smaller than the standard deviation between the pixel values of consecutive photographs
of the same subject.
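This criterion can be made explicit with a short derivation (a sketch; the numerical value follows from the definition above for a shot-noise-limited sensor and matches the bound used in the characterization section below):

```latex
% Two consecutive photographs of the same subject give two independent
% estimates L_1, L_2 of the same radiance, each with standard deviation
% \sigma_i. The standard deviation of their difference is therefore
\sigma(L_1 - L_2) = \sqrt{\sigma_i^2 + \sigma_i^2} = \sqrt{2}\,\sigma_i .
% The information loss is negligible when the post-decompression
% uncertainty stays below this shot-to-shot spread:
\sigma_o \le \sqrt{2}\,\sigma_i
\quad\Longleftrightarrow\quad
\Delta = \frac{\sigma_o}{\sigma_i} \le \sqrt{2} \approx 1.41 .
```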
[0021] The invention relates to a data compression method, wherein the data comprises noise
and information, comprising a data acquisition step, a pre-compression parameter selection
step wherein said pre-compression parameters are linked to an information loss, a
compression step, and a storage step, wherein said compression step comprises a lossy
pre-compression for removing some of the noise of the data, carried out using the
pre-compression parameters determined in the calibration step, followed by a lossless
compression for compressing the remaining data.
[0022] Preferably, the compression method comprises a parameter determination step for determining
pre-compression parameters by establishing a plot of the information loss vs. said
pre-compression parameters.
[0023] The parameter determination step comprises a system calibration step outputting the
amount of image information that is lost by the compression algorithm as a function
of pre-compression parameters.
[0024] Preferably, the system calibration step comprises
- setting the amount of light L to be measured, starting from L=0 (800),
- acquiring a number of calibration images (801) through said camera comprising an image
sensor (306), an amplifier (310) and a converter (312),
- calculating an input uncertainty $\sigma_i(L)/L$ on the calibration data comprising said number of calibration images (801),
- setting a set of pre-compression parameters g,
- applying a lossy pre-compression followed by a lossless compression to the calibration
data,
- decompressing the calibration data,
- calculating the output uncertainty on the calibration data,
- repeating the above steps, increasing L and g, until Lmax and gmax have been reached,
- reporting the information loss as a function of the set of pre-compression parameters g.
[0025] The invention also relates to a data decompression method according to independent
claim 6.
[0026] The invention also relates to a data processing method comprising the compression
method followed by the decompression method described above.
[0027] The invention also relates to a computer program, run on a computer or a dedicated
hardware device such as an FPGA, adapted to carry out the data processing method described
above.
DETAILED DESCRIPTION
Imaging system
[0028] We start by describing in more detail how an image is typically acquired. This is
illustrated in figure 2. An imaging device aims at measuring (estimating) the amount
of light emitted by a specific area (302) of an object. We call this amount of light
the radiance $L(x, y)$ illuminating a pixel (308) with coordinates $(x, y)$ on the image
sensor (306). Light from this area is collected by an imaging system (304), such as a
camera objective, and focused on a specific pixel (308) of the image sensor (306).
During the camera's exposure time $T$, a number of photons $\gamma(x, y)$ will impinge
on the pixel (308) and generate a number of photoelectrons $n(x, y)$. These electrons
(a charge) will typically be translated to a voltage $v(x, y)$ by an amplifier (310).
This voltage is digitized by an analogue-to-digital converter (ADC) (312) that outputs
a digital value $d_{ADC}(x, y)$. The raw image data (314, 212) is the ensemble of values
$d_{ADC}(x, y)$ for all $(x, y)$ comprising the image. From $d_{ADC}(x, y)$ it is
possible to estimate $L_{\sigma_i}(x, y)$, i.e. the value of the radiance $L(x, y)$ with
an uncertainty $\sigma_i(x, y)$. The uncertainty is given by noise, such as shot noise
and read noise. If the sensor, amplifier and ADC are linear, the mean value
$\langle d_{ADC}(x, y)\rangle$ can be related to the mean value of the number of
photoelectrons $\langle n\rangle$ by a proportionality constant $\zeta$ and an offset
constant $c$:

$$\langle d_{ADC}(x, y)\rangle = \zeta\,\langle n(x, y)\rangle + c \tag{EQ1}$$
[0029] Similarly, the mean number of photoelectrons $\langle n\rangle$ is proportional to the
number of photons $\gamma(x, y)$ that impinge on the pixel during the integration time
$T$, with a proportionality constant called the quantum efficiency QE:

$$\langle n(x, y)\rangle = QE \cdot \gamma(x, y) \tag{EQ2}$$
[0030] The number of photons $\gamma(x, y)$ is proportional to the radiance $L(x, y)$, i.e.
the amount of luminous power emitted by the object per unit area and solid angle.
$\gamma(x, y)$ is also proportional to other constants, such as the observed area $A$
(a function of the imaging system (304) and pixel size), the observed solid angle
$\Omega$ (a function of the imaging system (304)) and the exposure time $T$, and
inversely proportional to the photon energy $E = hc/\lambda$, where $h$ is Planck's
constant and $\lambda$ is the wavelength of the incident light. This can be summarized
as:

$$\gamma(x, y) = \frac{L(x, y)\, A\, \Omega\, T}{E} \tag{EQ3}$$
[0031] Substituting Equation (EQ3) into Equation (EQ2) and the resulting equation into Equation
(EQ1), one obtains:

$$\langle d_{ADC}(x, y)\rangle = \frac{\zeta\, QE\, A\, \Omega\, T}{E}\, L(x, y) + c \tag{EQ4}$$
[0032] Setting $Z = \zeta\, QE\, A\, \Omega\, T / E$:

$$\langle d_{ADC}(x, y)\rangle = Z\, L(x, y) + c \tag{EQ5}$$
[0033] So that $L(x, y)$ can be directly evaluated from $\langle d_{ADC}(x, y)\rangle$ as:

$$L(x, y) = \frac{\langle d_{ADC}(x, y)\rangle - c}{Z} \tag{EQ6}$$
[0034] And for a single shot (i.e. without averaging) one may evaluate $L_{\sigma_i}(x, y)$ as:

$$L_{\sigma_i}(x, y) = \frac{d_{ADC}(x, y) - c}{Z} \tag{EQ7}$$
[0035] Equation (EQ7) serves as an example for a system that is very linear. If this is
the case, and the sensor, amplifier and converter result in a shot-noise limited measurement,
the number of photoelectrons will be Poisson-distributed, so that the uncertainty
in photoelectron number will be $\Delta n = \sqrt{\langle n\rangle}$, and the relative
uncertainty $\Delta n / \langle n\rangle = 1/\sqrt{\langle n\rangle}$.
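As a worked numerical example of these relations (the photoelectron count is illustrative, not a value from the text):

```latex
% For a pixel collecting on average 10^4 photoelectrons:
\langle n \rangle = 10^4
\quad\Rightarrow\quad
\Delta n = \sqrt{10^4} = 100,
\qquad
\frac{\Delta n}{\langle n \rangle} = \frac{1}{\sqrt{10^4}} = 1\% .
% Brighter pixels are thus known with better relative precision, which
% is why a root-law quantizer is a natural fit in the embodiment below.
```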
[0036] If the sensor, amplifier or ADC presents non-linearity, it is necessary to calibrate
them, for example by creating look-up-tables $LUT_{d \to L}$ and $LUT_{d \to \sigma_i}$
that associate a value of $L$ and of $\sigma_i$ to each possible value of $d_{ADC}$.
Below we detail how this is done in practice.
[0037] In conclusion, the raw data will be the recorded digital value $d_{ADC}(x, y)$; the
information acquired about the object radiance $L(x, y)$ will have value
$L_{\sigma_i}(x, y)$ with uncertainty $\sigma_i(x, y)$. When image data undergoes
compression and decompression, resulting in the possibility to infer an output
$L_{\sigma_o}(x, y)$ with an uncertainty $\sigma_o(x, y)$, it is then possible to
quantify the amount of information lost by looking at the relative increase in
uncertainty $\sigma_o(x, y)/\sigma_i(x, y)$, i.e. the relative increase in error bars.
[0038] An overview of a preferred embodiment of the method of the present invention will
now be described with reference to figure 1.
[0039] This preferred embodiment is a data processing method which is preferably split into
3 phases as depicted in figures 4A, 4B and 4C.
- 1) The first phase, also called "parameter determination" (250) and represented in
figure 4A, serves to determine pre-compression parameters (210) that achieve a high
compression ratio while guaranteeing that the information lost is below a specific
bound (208).
- 2) The second phase is the compression method of the present invention represented
in figure 4B which compresses image data using the parameters (210) determined in
the parameter determination phase (250). The compressed data are transmitted and/or
stored (220).
- 3) The third phase is the decompression method of the present invention represented
in figure 4C which recovers compressed data (222) from the transmission channel or
storage (220), as well as the bound on information loss (224). This information is
used to obtain decompressed data (228), as well as the uncertainty on this data (230).
Both of these can be used for processing (232) and display or other usage (234).
[0040] It is important to note that even if the data processing method of the invention
comprises these three phases, each single phase can be carried out independently of
the others, such that the present invention comprises a parameter determination method,
a compression method and a decompression method, as well as a data processing method
comprising one or more of these three methods.
[0041] We will now describe the first phase of parameter determination.
Parameter determination phase for system calibration
[0042] As shown in figure 4A, in order to carry out a system calibration allowing to determine
the pre-compression parameters and the bound on the information loss, first, the camera
system is characterized (200) to determine the amount of image information that is
lost (202) by the compression algorithm as a function of pre-compression parameters.
This characterization is described in detail below with reference to figure 3.
In a simple exemplary embodiment, which works well for shot-noise limited, linear systems,
the pre-compression parameter will be a single variable g taking values between 1
(no compression and no data loss) and infinity (high compression and all data lost).
This parameter will be used in the pre-compression algorithm described in the compression
phase detailed below.
[0043] A sample output of this step is shown in figure 6, representing a plot of the
uncertainty/information loss with respect to parameter g. With this data (202) in hand,
and according to the preferences of the user (204), appropriate compression parameters
are chosen. In the instance of figure 6, for example, the pre-compression parameter g
can be increased up to a value of 2 with the amount of information lost remaining below
the negligible-loss bound. Figure 5 shows the quantified uncertainty increase
(information loss) bound (208) for values of $L$ between 0 and 500 (here the units are
normalized to the mean number of photoelectrons $\langle n\rangle$).
[0044] In the description below, the pre-compression parameters are reduced to a single
parameter relating to a pre-defined pre-compression function; in general, however, they
may be represented by multiple parameters, or by the look-up-tables $LUT_{d \to L}$ and
$LUT_{d \to \sigma_i}$. The selected parameters and associated uncertainty increase
bounds are saved to be reused in the next phase: compression.
System characterization
[0045] An embodiment of this procedure is shown in figure 3. System characterization evaluates
the amount of information lost by the compression system as a function of the compression
parameter and the input radiance $L$.
[0046] For clarity, a list of the symbols used here is provided at the end of the present
specification.
[0047] First, the amount of light to be measured is set, starting from $L = 0$ (800); then,
a number of images is acquired (801). This can be done using the real image acquisition
system, by simulating such a system, or by theoretical modeling. In this example, we
simulate the data arising from a shot-noise limited linear sensor. Assuming linearity
means that the number of photoelectrons $n$ will be proportional to the radiance $L$ to
be measured (at constant $T$, $\Omega$, QE, $A$, $E$): $\langle n\rangle = LZ/\zeta$.
For simplicity of the simulation, we set $\zeta = 1$ and $c = 0$, so that
$d_{ADC} = n$. For a shot-noise limited system, $n$ will be distributed according to a
Poisson distribution $N(LZ/\zeta) = N(\langle n\rangle)$ that has mean
$\langle n\rangle$, variance $\langle n\rangle$ and standard deviation
$\sqrt{\langle n\rangle}$. With the above simplifications, the relative error
$\sigma_i/\langle L_i\rangle$ will be equivalent to the relative uncertainty in the
measurement of the number of photoelectrons, $1/\sqrt{\langle n\rangle}$. To simulate
sensor data in (801) for a specific value of $\langle n\rangle$, we generate a number of
samples $n_j$ drawn from $N(\langle n\rangle)$. Instead of simulating data, (801) may
acquire data from the imaging system.
[0048] The relative input uncertainty $\sigma_i(L)/L$ is calculated or numerically computed
as the root-mean-square (RMS) difference between each $L_i$, the measured value of $L$,
and the actual (setpoint) value of $L$, divided by $L$ itself (802).
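A minimal Python sketch of steps (800)-(802) follows; it assumes the simplifications above (ζ = 1, c = 0, so d_ADC = n, and Z = 1 so L_i is expressed directly in photoelectron units). Function names are illustrative, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def acquire_calibration_samples(mean_n: float, n_samples: int = 100_000):
    """Step (801): simulate a shot-noise limited linear sensor.
    Each sample n_j is drawn from a Poisson distribution of mean <n>;
    with zeta = 1, c = 0 and Z = 1, each sample is also L_i."""
    return rng.poisson(mean_n, size=n_samples).astype(float)

def relative_input_uncertainty(samples: np.ndarray, true_L: float) -> float:
    """Step (802): RMS difference between each measured L_i and the
    setpoint L, divided by L itself."""
    return np.sqrt(np.mean((samples - true_L) ** 2)) / true_L

mean_n = 400.0
L_i = acquire_calibration_samples(mean_n)
print(relative_input_uncertainty(L_i, mean_n))  # ~1/sqrt(400) = 0.05
```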
[0049] A pre-compression parameter, or a set of pre-compression parameters, is chosen (804).
In our simple example, we use a single parameter $g$. More generally, however, this
could be a look-up-table (LUT) associating an output value $L_o$ with each possible
value of $L_i$.
[0050] Lossy pre-compression is then applied to each input measured value $L_i$ (808). In
this embodiment, we obtain the pre-compressed digital value $d_c$ from the function

$$d_c = \left\lVert n^{1/g} \right\rVert$$

where $n$ is the number of photoelectrons on that sample, i.e. $n = d_{ADC}$ under the
simplifications above or, with respect to the ADC output in general,
$n = (d_{ADC} - c)/\zeta$. The double-line "brackets" symbolize taking the nearest
integer. In this case, g represents the reduction factor in the number of bits required
to encode each possible number of photoelectrons. For example, encoding a number of
photoelectrons between 0 and $2^{16} - 1 = 65535$ requires 16 bits. For $g = 1$, no
pre-compression would arise, and 16 bits would be required. For $g = 2$, the number of
encoding bits is reduced by a factor of 2, so that the result of the pre-compression can
be encoded over 8 bits. More generally, any LUT can be applied.
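A sketch of this quantizer (808) in Python, using the root-law form reconstructed above (d_c = ⟦n^(1/g)⟧, consistent with the inverse d_o = d_c^g given in step (812) below); it continues the previous sketch:

```python
import numpy as np

def pre_compress(n: np.ndarray, g: float) -> np.ndarray:
    """Step (808): lossy pre-compression d_c = round(n**(1/g)).
    For g = 2 this maps 16-bit photoelectron counts onto ~8 bits."""
    return np.rint(n ** (1.0 / g)).astype(np.uint16)

def inverse_pre_compress(d_c: np.ndarray, g: float) -> np.ndarray:
    """Part of step (812): inverse mapping d_o = d_c**g."""
    return d_c.astype(float) ** g

n = np.arange(0, 2**16, dtype=float)   # all 16-bit counts 0..65535
d_c = pre_compress(n, g=2.0)
print(d_c.max())   # 256: essentially an 8-bit range (the single top
                   # code, from rounding sqrt(65535), can be clamped)
```

Numerically, this form also reproduces the Δ value reported later: near ⟨n⟩ the quantization bin width in photoelectron units is about 2√n for g = 2, giving an added RMS error of 2√n/√12 on top of the shot noise √n, hence σ_o/σ_i ≈ √(1 + 4/12) ≈ 1.15.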
[0051] A lossless compression algorithm may then be applied (810), either on a pixel-by-pixel
basis, or on larger blocks, or on the entire pre-compressed image. We call the data
to which this lossless compression is applied F. After the lossy pre-compression is
applied, the amount of data is reduced, but the entropy of the data is also reduced
(noise is removed), so that lossless compression will be very effective. In our example,
we have used several algorithms, ranging from the universal and simple Lempel-Ziv-Welch
(LZW) [8] to image-content-based algorithms such as minimum-rate predictors (MRP)
[9]. The algorithm here can be chosen at will: being lossless, it will not lose
any information and will not affect the uncertainty of the image. In some instances,
if compression speed is of the essence, this step may be a simple "pass-through" that
does not affect the pre-compressed data at all.
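For illustration, the lossless stage (810) and its inverse can be exercised with any general-purpose codec; the sketch below (reusing `pre_compress` and `rng` from the previous sketches) uses zlib's DEFLATE from the Python standard library purely as a readily available stand-in for the LZW [8] or MRP [9] codecs named in the text.

```python
import zlib
import numpy as np

def lossless_compress(pre_compressed: np.ndarray) -> bytes:
    """Step (810): lossless compression of the pre-compressed data F.
    Any lossless codec may be substituted without affecting uncertainty."""
    return zlib.compress(pre_compressed.astype(np.uint16).tobytes(), level=9)

def lossless_decompress(f: bytes) -> np.ndarray:
    """First half of step (812): recover F exactly from the output f."""
    return np.frombuffer(zlib.decompress(f), dtype=np.uint16)

d_c = pre_compress(rng.poisson(400.0, 100_000).astype(float), g=2.0)
f = lossless_compress(d_c)
assert np.array_equal(lossless_decompress(f), d_c)   # exactly lossless
print(f"ratio vs 16-bit raw: {d_c.nbytes / len(f):.1f}:1")
```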
[0052] In the next step (812), the output $f$ of this lossless compression is decompressed
to obtain F; then, the inverse function of the lossy compression is applied (or the
inverse LUT). For example, the inverse of $d_c = \lVert n^{1/g}\rVert$ is
$d_o = d_c^{\,g}$.
[0053] From the de-compressed output $d_o$ it is possible to calculate a value $L_o$ for the
radiance, e.g. $L_o = d_o \zeta / Z$. In (814) we calculate the error that was
introduced as the difference $L_o - L_i$. This error can be estimated for several
samples of the acquired image data, the root-mean-square (RMS) of this error calculated
as $\sigma_o(L, g)$, and compared to the input error (uncertainty) $\sigma_i(L)$ to
obtain a well-defined, properly quantified relative increase in uncertainty
$\Delta(L, g) = \sigma_o(L, g)/\sigma_i(L)$. The compression ratio $r$ can also be
calculated as the size of the original image data divided by the size of the output $f$
of the lossless compression.
[0054] By repeating the above process for a number of image acquisition samples, for a number
of compression factors g (816, 824) and for a number of radiances, integration times
or generated photoelectron counts (818, 822), one obtains a characterization of the
compression system (820), i.e. the amount of information lost $\Delta(L, g)$ as a
function of the pre-compression parameter g and pixel radiance $L$ (or equivalently,
photoelectron number n or pixel value $d_c$ or $d_o$, as all of these are related by
known equations).
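The full characterization loop (816)-(824) is then a sweep over L and g; a sketch building on `uncertainty_increase` above, with an illustrative grid of setpoints:

```python
# Steps (816)-(824): characterize Delta(L, g) over a grid of radiances
# and pre-compression parameters; the grid values are illustrative.
characterization = {
    (mean_n, g): uncertainty_increase(mean_n, g)
    for mean_n in (25.0, 100.0, 400.0, 1600.0)   # radiance setpoints, in <n>
    for g in (1.0, 1.5, 2.0, 3.0)                # pre-compression parameters
}
for (mean_n, g), delta in sorted(characterization.items()):
    print(f"<n> = {mean_n:6.0f}  g = {g:.1f}  Delta = {delta:.2f}")
# With g = 2, Delta stays near 1.15 across the whole radiance range,
# below the negligible-loss bound of sqrt(2) ~ 1.41 derived earlier.
```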
[0055] Sample results of this characterization are shown in figure 6, where g is varied at
fixed $L$, and figure 5, where $L$ is varied such as to produce between zero and 500
photoelectrons $\langle n\rangle$, with the pre-compression held fixed at $g = 2$. The
results show that $\Delta(L, g) = \sigma_o(L, g)/\sigma_i(L) = 1.15$ across the range
for all these settings. This is smaller than what we define as the bound for negligible
information loss, i.e. the RMS of the difference between consecutive measurements
("photographs") of the same pixel of exactly the same subject taken with an ideal
shot-noise-limited sensor, that is $\sqrt{2}\,\sigma_i(L)$, corresponding to
$\Delta = \sqrt{2} \approx 1.41$.
[0056] Testing this pre-compression factor $g = 2$, and therefore $\Delta = 1.15$, on an
image of a 10 EUR note acquired with an ATIK383L scientific camera, the compression
results were as follows:
Original image file: 5,164,276 bytes
Lossless compressed original file (ZIP): 4,242,951 bytes
Lossy pre-compressed image: 2,580,219 bytes
Lossless compressed pre-compressed image (ZIP): 1,015,808 bytes
Lossless compressed pre-compressed image (MRP [9]): 544,336 bytes
[0057] This gives a compression ratio of 9.5:1 while increasing the uncertainty by a factor
of only 1.15. Figure 5 plots the error bars both with the original uncertainty
$\sigma_i$ and with the barely distinguishably larger output uncertainty $\sigma_o$,
from which it is evident that this uncertainty increase is negligible.
[0058] It has to be noted that, when keeping the parameters $A$, $\Omega$, $T$, $E$ constant,
shot noise will manifest itself in the number of photoelectrons $n$ following a Poisson
distribution with mean and variance $\langle n\rangle$ and standard deviation
$\sqrt{\langle n\rangle}$.
[0059] Once the system characterization is complete and the pre-compression parameters
are available, the user can select the required parameters and continue with the
compression phase.
Compression method
[0060] An embodiment of this compression method is illustrated in figure 4B (252). The
pre-compression parameter (210) $g$ and the associated uncertainty increase bound (208)
$\Delta$ are taken from the previous phase (250), "parameter determination". In this
example, $g = 2$ and $\Delta = 1.15$; however they may differ depending on the parameter
estimation results. Raw image data is first acquired (212, 314), either from the image
processing pipeline or from disk/memory. Then, a lossy pre-compression is applied (214)
in the same way as described in step 808 of the section "System characterization", using
pre-compression parameter g (210). The pre-compressed data is then losslessly compressed
in step (216) so as to yield compressed image data (218), as described in step 810 of
the section "System characterization". The compressed data is then stored or transmitted
(220). The bound on information loss (208) may also be included in the stored/transmitted
data for future use, or may be a pre-defined parameter of the system.
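An end-to-end sketch of this compression phase (212)-(220), chaining the helpers introduced in the characterization section; the container format and names are illustrative assumptions, not part of the text:

```python
from dataclasses import dataclass

@dataclass
class CompressedImage:
    """Steps (218)-(220): payload plus the metadata needed to decompress
    and to recover uncertainties (the container format is illustrative)."""
    payload: bytes      # output f of the lossless stage
    g: float            # pre-compression parameter (210)
    delta: float        # bound on information loss (208)
    shape: tuple        # raw image dimensions

def compress_image(raw, g: float = 2.0, delta: float = 1.15) -> CompressedImage:
    d_c = pre_compress(raw.astype(float).ravel(), g)   # step (214)
    f = lossless_compress(d_c)                         # step (216)
    return CompressedImage(f, g, delta, raw.shape)

raw = rng.poisson(400.0, (64, 64))                     # stand-in raw frame (212)
packet = compress_image(raw)
print(f"{raw.size * 2} bytes (as 16-bit raw) -> {len(packet.payload)} bytes")
```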
[0061] Once the compression is achieved and the compressed data is stored or transmitted
(the channel can in fact be regarded as a form of mobile storage), the user can decide
to decompress the data in order to recover it, and therefore pass to the last phase,
the decompression method.
De-compression method and usage
[0062] An embodiment of this method is illustrated in figure 4C (254), "Decompression and
usage". First, the compressed data (222) is obtained from the transmission link or
storage (220). Ideally, this data was compressed by the above compression method.
The bound on information loss (224) is also obtained from the transmission link or
storage, but could also be a pre-defined part of the system. This information is fed
into a de-compression routine (226) that has already been described in step 812 of
the section "System characterization". This step outputs decompressed data (228)
and the uncertainty on such de-compressed data (230).
[0063] In fact, using the example of the compression and decompression functions described
in the section "System characterization", the decompressed image data uncertainty (230)
is calculated as

$$\sigma_o(x, y) = \Delta \cdot \sqrt{d_o(x, y)}$$

where $d_o$ is the decompressed data output for that pixel and $\Delta$ is preferably
1.15.
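A sketch of this decompression phase (222)-(230), continuing the previous sketches; the √d_o form assumes the shot-noise-limited model with ζ = 1 and c = 0 used throughout this example:

```python
def decompress_image(packet: CompressedImage):
    """Steps (226)-(230): recover pixel values and per-pixel error bars."""
    d_c = lossless_decompress(packet.payload)        # undo step (216)
    d_o = inverse_pre_compress(d_c, packet.g)        # undo step (214)
    d_o = d_o.reshape(packet.shape)                  # decompressed data (228)
    sigma_o = packet.delta * np.sqrt(d_o)            # uncertainty (230)
    return d_o, sigma_o

values, errors = decompress_image(packet)
# Each pixel is now an estimate with a quantified error bar, e.g.
# values[0, 0] +/- errors[0, 0], usable in later processing (232).
```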
[0064] This data (228) and, optionally, the associated uncertainty (230), can then be used
to effectively process the image (232), resulting in data to be output to display,
printing, or further analysis or processing (234).
Table of symbols
| Symbol | Description |
|---|---|
| L | Radiance of each point of an object |
| L(x, y) | Radiance of the point of the object corresponding to pixel coordinates (x, y) |
| Li(x, y) | Estimate of radiance L(x, y) as derived from measurement, before compression |
| σi(x, y) | Uncertainty (standard deviation) associated with the measurement estimate of the radiance L(x, y) at point (x, y), before compression |
| Lo(x, y) | Estimate of radiance L(x, y) as derived from measurement, after compression and decompression |
| σo(x, y) | Uncertainty (standard deviation) associated with the measurement estimate of the radiance L(x, y) at point (x, y), after compression and decompression |
| γ(x, y) | Number of photons impinging on the pixel (image sensor element) at coordinates (x, y) |
| QE | Quantum efficiency, i.e. the probability that a photon impinging on a pixel is converted into a photoelectron |
| n(x, y) | Number of photoelectrons generated at the pixel with coordinates (x, y) |
| v(x, y) | Voltage produced by the amplifier from the photoelectrons of the pixel at coordinates (x, y) |
| dADC(x, y) | Digital number resulting from the digitization of the voltage generated by the photoelectrons from the pixel at coordinates (x, y) |
| M | Ensemble of captured digital values dADC(x, y), forming a raw digital image, before compression |
| Q | Ensemble of pixel values after they have undergone compression and decompression |
| R, r | Compression ratio, i.e. size of the storage space required to store the input image divided by the size of the storage space required to store the compressed image |
| T | Exposure time during which the image sensor absorbs photons and converts them to photoelectrons |
| ζ | Factor relating the output dADC(x, y) of the analog-to-digital converter to the number of photoelectrons n(x, y) |
| c | Offset constant relating the output dADC(x, y) of the analog-to-digital converter to the number of photoelectrons n(x, y) |
| A | Area observed by the pixel; a function of the optics of the camera, constant for our purposes |
| Ω | Observed solid angle, also a function of the optics of the camera and constant in our treatment |
| E | Photon energy |
| h | Planck's constant |
| c | Speed of light |
| Z | A constant, Z = ζ·QE·A·Ω·T/E |
| Δn | Uncertainty (standard deviation) in the number n of photoelectrons |
| g | Set of pre-compression parameters, or single pre-compression parameter |
| N(µ) | Poisson distribution of mean µ |
| nj | The j-th simulated sample of the photoelectron number, drawn from N(⟨n⟩) |
| dc | Pre-compressed digital value |
| do | De-compressed digital value |
| f | Compressed file, after lossless compression |
| F | De-compressed file |
| Δ(L, g), Δ | Relative increase in uncertainty, as a function of the light hitting the pixel and the pre-compression parameter, or on its own |
| ⟨x⟩ | Mean of x |
| Lmax | Maximum value of L to be tested when iterating |
| gmax | Maximum value of g to be tested when iterating |
1. Data compression method wherein said data comprises noise and information, comprising
- an image data acquisition step performed by a camera in order to obtain image data
to be compressed,
- a parameter determination step allowing to determine pre-compression parameters
which are linked to an information loss,
- a compression step, and
- a storage step,
said compression step comprising
- a lossy pre-compression for removing some noise of the image data followed by
- a lossless compression for compressing the remaining image data,
characterized in that
- said parameter determination step comprises
∘ a system calibration step using calibration data comprising a number of calibration
images each acquired by or simulated for said camera at known radiance of light and
outputting the amount of image information that is lost by the compression step as
a function of pre-compression parameters, said information loss being represented
by the relative increase in the uncertainty associated with the measurement estimate
of the radiance of light of said image data after said compression step and a decompression
step as compared to the uncertainty of said image data before said compression step,
and
∘ a pre-compression parameter selection step allowing to choose said pre-compression
parameters,
- said lossy pre-compression being carried out using the pre-compression parameters
selected in the pre-compression parameter selection step.
2. Data compression method of claim 1, characterized in that said parameter determination step determines said pre-compression parameters by establishing
a plot of the information loss vs. said pre-compression parameters.
3. Data compression method of one of claims 1 to 2, characterized in that the parameter determination step determines a bound on information loss and pre-compression
parameters guaranteeing that the information loss is below said bound on information
loss.
4. Data compression method of one of claims 1 to 3,
characterized in that the system calibration step comprises
- setting the radiance of light L(x, y) to be measured, starting from L=0 (800),
- acquiring a number of calibration images (801) through said camera comprising an
image sensor (306), an amplifier (310) and a converter (312),
- calculating an input uncertainty σi(x, y) on the calibration data comprising said number of calibration images (801),
- setting a set of pre-compression parameters g,
- applying a lossy pre-compression followed by a lossless compression to the calibration
data,
- decompressing the calibration data,
- calculating an output uncertainty σo(x, y) on the calibration data,
- repeating the above steps by increasing L and g until Lmax and gmax have been reached,
- reporting the information loss as a function of the set of pre-compression parameters g,
where
L(x, y) is the radiance of light at point (x, y),
g is the set of pre-compression parameters,
σi(x, y) is the uncertainty associated with the measurement estimate of the radiance L(x, y) at point (x, y) , before compression,
σo(x, y) is the uncertainty associated with the measurement estimate of the radiance L(x, y) at point (x, y) , after compression and decompression.
5. Data compression method of one of claims 1 to 4, characterized in that the lossy pre-compression of the image data is applied in the natural, non-transformed
space of the image data.
6. Data decompression method comprising
- obtaining compressed image data that have been compressed by the compression method
of one of claims 1 to 5,
- obtaining a bound on information loss associated with said compressed image data
and stored or transmitted by the compression method of one of claims 1 to 5,
- decompressing the compressed image data,
- determining an uncertainty on the decompressed image data according to the following
relation:

$$\sigma_o(x, y) = \Delta(g, L_o(x, y)) \cdot \sigma_i(x, y)$$
where:
Δ(g, L) is the relative increase in uncertainty as a function of the radiance of light L
hitting the pixel and the set of pre-compression parameters g, obtained during the
system calibration step of the compression method of one of claims 1 to 5,
σo(x, y) is the uncertainty associated with the measurement estimate of the radiance L(x, y) at point (x, y) , after compression and decompression,
σi(x, y) is the uncertainty associated with the measurement estimate of the radiance L(x, y) at point (x,y) , before compression, obtained during the system calibration step of the compression
method of one of claims 1 to 5,
Lo(x,y) is the estimate of radiance L (x, y) as derived from measurement, after compression and decompression,
Li(x,y) is the estimate of radiance L (x, y) as derived from measurement, before compression, obtained during the system
calibration step of the compression method of one of claims 1 to 5,
g is the set of pre-compression parameters obtained during the system calibration
step of the compression method of one of claims 1 to 5.
7. Data decompression method of claim 6, characterized in that g is 2 and Δ(g,Lo) is 1.15.
8. Data processing method comprising the compression method of one of claims 1 to 5 followed
by the decompression method of one of claims 6 to 7.
9. Computer program run on a computer or a dedicated hardware device, for example an FPGA,
adapted to carry out the data processing method of claim 8.