(19) European Patent Office
(11)EP 1 722 570 B1

(12)EUROPEAN PATENT SPECIFICATION

(45)Mention of the grant of the patent:
29.04.2020 Bulletin 2020/18

(21)Application number: 05291017.1

(22)Date of filing:  11.05.2005
(51)International Patent Classification (IPC): 
H04N 21/2383(2011.01)
H04N 19/89(2014.01)
H04N 21/438(2011.01)

(54)

METHOD OF TRANSMITTING VIDEO DATA

VERFAHREN ZUR VIDEODATENÜBERTRAGUNG

MÉTHODE DE TRANSMISSION DE DONNÉES VIDÉO


(84)Designated Contracting States:
AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU MC NL PL PT RO SE SI SK TR

(43)Date of publication of application:
15.11.2006 Bulletin 2006/46

(73)Proprietor: Beijing Xiaomi Mobile Software Co., Ltd.
Beijing 100085 (CN)

(72)Inventor:
  • Nguyen, Hang
    92100 Clichy-la-Garenne (FR)

(74)Representative: Louis Pöhlau Lohrentz 
Patentanwälte Postfach 30 55
90014 Nürnberg (DE)


(56)References cited:
WO-A-2005/034414
US-A1- 2002 157 058
  
  • S. WENGER ET AL: "RTP payload format for H.264 video" RFC3984, February 2005 (2005-02), pages 1-83, XP002348964
  • GHARAVI H ET AL: "Cross-layer feedback control for video communications via mobile ad-hoc networks" VEHICULAR TECHNOLOGY CONFERENCE, 2003. VTC 2003-FALL. 2003 IEEE 58TH ORLANDO, FL, USA 6-9 OCT. 2003, PISCATAWAY, NJ, USA,IEEE, US, 6 October 2003 (2003-10-06), pages 2941-2945, XP010702852 ISBN: 0-7803-7954-3
  • QI QU ET AL: "Robust H.264 video coding and transmission over bursty packet-loss wireless networks" VEHICULAR TECHNOLOGY CONFERENCE, 2003. VTC 2003-FALL. 2003 IEEE 58TH ORLANDO, FL, USA 6-9 OCT. 2003, PISCATAWAY, NJ, USA,IEEE, US, 6 October 2003 (2003-10-06), pages 3395-3399, XP010701963 ISBN: 0-7803-7954-3
  • N.D. DAO, W.A.C. FERNANDO: "Channel coding for H.264 video in constant bit rate transmission context over 3G mobile systems" PROCEEDINGS OF THE 2003 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS, vol. 2, 2003, pages II-896-II-899, XP008053471
  • S. RANE, A. AARON, B. GIROD: "Systematic lossy forward error protection for error-resilient digital video broadcasting" PROCEEDINGS OF THE SPIE, vol. 5308, no. 1, 2004, pages 588-595, XP002348965
  • TALLURI R: "ERROR-RESILIENT VIDEO CODING IN THE ISO MPEG-4 STANDARD" IEEE COMMUNICATIONS MAGAZINE, IEEE SERVICE CENTER. PISCATAWAY, N.J, US, vol. 36, no. 6, June 1998 (1998-06), pages 112-119, XP000668915 ISSN: 0163-6804
  
Note: Within nine months from the publication of the mention of the grant of the European patent, any person may give notice to the European Patent Office of opposition to the European patent granted. Notice of opposition shall be filed in a written reasoned statement. It shall not be deemed to have been filed until the opposition fee has been paid. (Art. 99(1) European Patent Convention).


Description


[0001] The invention relates to a method of transmitting video data, a sending video processing device, a receiving video processing device, a network, and a computer program product for executing the method.

[0002] Today, multimedia streaming over wireless networks offers the user only mediocre video quality. Wireless channels cause high bit error rates, and significant residual bit errors can remain in the received compressed video sequences. Errors in the received bitstreams to be decoded are even more severe when ARQ is limited or impossible, e.g., in real-time applications, or when the channel coding is not strong enough for the channel state (ARQ = Automatic Repeat reQuest).

[0003] However, today's source encoders, designed to compress data as much as possible, assume a reliable transmission medium. Hence, source decoders are designed to deal with image or video files that contain no errors. In addition, transmission errors can propagate, which adversely affects the received end-user quality. For example, entropy coding techniques, which are known to be very sensitive to errors, are used wherever compression is performed, such as in text compression (WinZip, zip, tar, gz, ...), image compression (JPEG, ...), audio compression, and video compression (MPEG, H.26x, ...) (JPEG = Joint Photographic Experts Group; MPEG = Moving Picture Experts Group).

[0004] When conventional source encoders based on an entropy compression technique are used, a single bit error can often cause a loss of synchronisation in the sequence. What follows is error propagation - spatial in the case of an image, or spatial and temporal in the case of a video - and the remaining part of the data is lost. The same phenomenon also affects audio streaming transmission.

[0005] MPEG4-AVC, also known as H.264, is a new generation compression algorithm for consumer digital video and a very promising video coding standard (AVC = Advanced Video Coding). The MPEG4-AVC design covers a Video Coding Layer (= VCL), which efficiently represents the video content, and a Network Abstraction Layer (= NAL), which formats the VCL representation of the video and provides header information in a manner appropriate for conveyance by particular transport layers such as IP/RTP or for storage media (IP = Internet Protocol, RTP = Real-Time Transport Protocol).

[0006] The NAL comprises a succession of data packets with an integer number of bytes, so-called NAL units consisting of a one-byte header and payload data.

[0007] The header indicates the type of the NAL unit, the (potential) presence of bit errors or syntax violations in the NAL unit payload, and information regarding the relative importance of the NAL unit for the decoding process. Some systems require delivery of the NAL units as an ordered stream of bytes or bits, in other systems, e.g., IP/RTP systems, the coded data is carried in packets framed by the system transport protocol.
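
The one-byte header layout described above can be illustrated with a short sketch; the field widths follow the H.264 NAL unit syntax, while the function name and example value are ours:

```python
def parse_nal_header(first_byte: int) -> dict:
    """Split the one-byte H.264 NAL unit header into its three fields."""
    return {
        # forbidden_zero_bit: when set, signals a bit error or syntax violation
        "forbidden_zero_bit": (first_byte >> 7) & 0x1,
        # nal_ref_idc: relative importance of the NAL unit for decoding (0..3)
        "nal_ref_idc": (first_byte >> 5) & 0x3,
        # nal_unit_type: kind of payload (e.g. 5 = IDR slice, 6 = SEI)
        "nal_unit_type": first_byte & 0x1F,
    }

# Example: 0x65 = 0b0_11_00101, an IDR slice of highest importance
print(parse_nal_header(0x65))
```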

[0008] The primary coded picture consists of NAL units that represent the samples of the picture. There is also a type of NAL unit, called a redundant coded picture, containing a copy of selected video macroblocks of the primary coded picture. Redundant coded pictures are used when data in the primary coded picture is lost or corrupted. However, this approach of using redundant coded pictures to correct faulty primary coded pictures provides only very weak error correction, and the resulting data size - and hence the bandwidth cost - is significant.

[0009] RFC 3984, titled "RTP Payload Format for H.264 Video", defines an RTP payload format for the H.264 video codec, which allows for packetization of one or more NAL units, produced by an H.264 video encoder, in each RTP payload. It also defines the use of redundant coded pictures and Forward Error Correction (FEC) for error identification and correction. The document "Cross-layer feedback control for video communications via mobile ad-hoc networks" discloses a rate control and packet recovery scheme for H.264 in wireless networks based on network characteristics derived from the underlying ad-hoc routing protocol. A redundant packet transmission scheme is presented for lossy recovery of the missing packets in order to enhance the quality of service, where for P frames both redundant packet transmission and FEC are used, while for I frames only FEC is performed.

[0010] It is the object of the present invention to improve the transmission of video data.

[0011] The object of the present invention is achieved by a method of transmitting video data, as defined in claim 1.

[0012] The object of the present invention is further achieved by a video processing device with a control unit, as defined in claim 4. Moreover, the object of the present invention is achieved by a video processing device with a control unit, as defined in claim 5. And the object of the present invention is achieved by a computer program product for transmission of video data, as defined in claim 7.

[0013] The invention provides a solution that is more efficient than the existing one in terms of error correction power and bandwidth. For a similar error correction power, the gain in bandwidth is a factor of two to four. That means the solution according to the invention needs two to four times less bandwidth, or can serve two to four times more users.

[0014] Instead of including a simple copy of the encoded data in the redundant coded picture NAL, the basic idea of the invention is to include more sophisticated error correction data with higher error correction power and smaller size. Hence, the invention achieves a gain in bandwidth, in the number of users, and/or in the number of radio resources.

[0015] The method according to the invention can be implemented in conformance with or independently of the standards.

[0016] Further advantages are achieved by the embodiments of the invention indicated by the dependent claims.

[0017] The method according to the invention can be applied to any video data with NAL structure. Preferably, the video data conform with the MPEG4-AVC and/or the H.264 standard.

[0018] When a code is transmitted over a channel in the presence of noise, errors will occur. The task of channel coding is to represent the source information in a manner that minimises the error probability in decoding. If the information must be received correctly at the first transmission, redundant check bits are added to ensure error detection and error correction.

[0019] According to a preferred embodiment of the invention, the applied systematic channel encoding is based on a parity check code such as the Hamming code or the LDPC code (LDPC = Low-Density Parity Check). According to another preferred embodiment of the invention, the applied systematic channel encoding is based on a convolutional code.
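
As an illustration of such a systematic parity check code, the following sketch encodes four data bits with a Hamming(7,4) code: the data bits pass through unchanged, and three parity bits are computed over overlapping subsets of them. This is only one of the codes named above, chosen here for brevity; the function name is illustrative:

```python
def hamming74_encode(d: list[int]) -> tuple[list[int], list[int]]:
    """Systematic Hamming(7,4): return the four data bits unchanged
    plus three parity bits computed over overlapping subsets."""
    d0, d1, d2, d3 = d
    p0 = d0 ^ d1 ^ d3   # parity over data bits {0, 1, 3}
    p1 = d0 ^ d2 ^ d3   # parity over data bits {0, 2, 3}
    p2 = d1 ^ d2 ^ d3   # parity over data bits {1, 2, 3}
    return d, [p0, p1, p2]

data, parity = hamming74_encode([1, 0, 1, 1])
print(data, parity)  # -> [1, 0, 1, 1] [0, 1, 0]
```

Because the code is systematic, the information bits and the check bits remain separable, which is exactly what allows them to be routed into different NAL units.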

[0020] The transmission medium for the transmission of the encoded video data from the first entity to the second entity may be any transmission medium suitable for the transmission of bit data, preferably under the IP/RTP protocol. According to a preferred embodiment of the invention, the transmission medium is a wireless network, in particular a mobile telecommunication network, or an IP network, in particular the Internet.

[0021] These as well as further features and advantages of the invention will be better appreciated by reading the following detailed description of presently preferred exemplary embodiments taken in conjunction with accompanying drawings of which:
Fig. 1
is a block diagram of a system according to a first embodiment of the invention.
Fig. 2
is an operational step diagram showing the processes at a first device according to a first embodiment of the invention.
Fig. 3
is a flow chart concerning the structure of an access unit according to a first embodiment of the invention.
Fig. 4
is an operational step diagram showing the processes at a second device according to a first embodiment of the invention.


[0022] Fig. 1 shows a first entity 10 for sending video data, a second entity 20 for receiving video data, and a transmission medium 30 for the transmission of video data. For example, the transmission medium 30 may be a packet-switched network, preferably an IP-based network, i.e., a communication network with IP as its common layer-three protocol, such as the Internet. The first entity 10 and the second entity 20 may be computers with modems to send and receive video data to/from the packet-switched network. The computers 10, 20 may be equipped with software suited to process video data.

[0023] In another embodiment, it is also possible that the transmission medium 30 is a telecommunication system comprising circuit-switched and packet-switched telephony networks, and that the first entity 10 and the second entity 20 are mobile telecommunication terminals, e.g., cellular phones, capable of sending, receiving, and replaying video data. The circuit-switched networks may be, e.g., PSTN, ISDN, GSM, or UMTS networks (PSTN = Public Switched Telephone Network; ISDN = Integrated Services Digital Network; GSM = Global System for Mobile Communication; UMTS = Universal Mobile Telecommunication Services).

[0024] The sending and receiving entities 10, 20 are video processing devices, and usually have capabilities to both send and receive video data. For example, in a specific case as shown in the exemplary embodiment of Fig. 1, the entity 10 may be the sending entity and the entity 20 may be the receiving entity. In another communication event, the roles may be changed and the entity 20 may be the sending entity and the entity 10 may be the receiving entity.

[0025] In the specific embodiment of Fig. 1, the sending entity 10 is a terminal comprising a transmitter 11, a control unit 12, and a memory 13, whereas the receiving entity 20 is a terminal comprising a receiver 21, a control unit 22, and a memory 23. The sending and receiving entities 10, 20 are connected via connections 19, 29 to the transmission medium 30. The connections 19, 29 may be a wire-line connection or a wireless connection.

[0026] The sending entity 10 may be triggered, manually by a user or automatically by a process trigger signal, to start the transfer of video data having a NAL structure from the sending entity 10 via the connection 19, the transmission medium 30, and the connection 29 to the receiving entity 20. The video data may be retrieved from the memory 13 of the sending entity 10, processed in the control unit 12, and transferred to the transmission medium 30 by the transmitter 11. The receiving entity 20 may receive the video data from the transmission medium 30 by means of the receiver 21, process them in the control unit 22, and possibly store them in the memory 23. But it is also possible that the received video data are sent directly to a replay unit of the receiving entity 20 for rendering and displaying on a display.

[0027] In another embodiment of Fig. 1, the video data are stored on a video processing unit 40, e.g. a video server or a video proxy, comprised within or accessible from the transmission medium 30, preferably a packet-switched network such as the Internet. The video processing unit 40 may comprise a control unit 42, a memory unit or storage medium 43, and a transceiver unit 41 for transmitting and receiving messages over the network 30.

[0028] The receiving entity 20 may send via the network 30 a video request to the transceiver unit 41 of the video processing unit 40. The video processing unit 40 may process the video request, retrieve the requested video data from the storage medium 43 or from an independent storage medium 53 of the network 30, process the video data and initiate the transmission of the video data to the receiving entity 20.

[0029] The terminals 10, 20 and the video processing unit 40 comprise an electronic circuit, possibly with a radio part for wireless telecommunication, at least one microprocessor, and application programs executed by the at least one microprocessor. The terminals 10, 20 may further comprise input and output means, for example a keypad, a microphone, a loudspeaker, and a display. The functionalities of the terminals 10, 20 and the video processing unit 40 are performed by the interaction of the hardware and software components. The memory units 13, 23, 43 of the terminals 10, 20 and of the video processing unit 40 may be adapted to receive and store a computer program product, whereby the execution of the computer program product by the terminals 10, 20 and the video processing unit 40 provides them with additional functionalities.

[0030] The video processing unit 40 of the network 30 may be implemented as one or more servers with a peer-to-peer and/or hierarchical architecture. Also, the video processing functionalities provided by the terminals 10, 20 and the video processing device 40, possibly in connection with the storage medium 53, may be realised as separate, independent units or in a decentralised structure where the functionalities are provided by a plurality of interdependent decentralised units.

[0031] Fig. 2 shows the processing of the video data that is executed in the sending entity 10 or the video processing unit 40 before transmission of the video data over the transmission medium 30.

[0032] The video data may be present as information bits 201. These information bits 201 may have been obtained by converting a video signal into a digital bitstream by means of an analog-to-digital conversion (A/D conversion). A/D conversion occurs in two steps: the sampling of data from the video stream, and the quantizing of each captured sample into a digital format.
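
The second of these two steps can be sketched with a minimal uniform quantizer; the parameter names and the 256-level default are illustrative assumptions, not values prescribed by the description:

```python
def quantize(sample: float, levels: int = 256,
             lo: float = -1.0, hi: float = 1.0) -> int:
    """Uniform quantization: map an analog sample in [lo, hi] to one of
    `levels` integer codes (the second step of A/D conversion)."""
    sample = min(max(sample, lo), hi)        # clip to the converter's range
    step = (hi - lo) / levels                # width of one quantization cell
    return min(int((sample - lo) / step), levels - 1)

print(quantize(0.5))   # -> 192
print(quantize(-1.0))  # -> 0
```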

[0033] Once the video data are digitised, they can be submitted to a systematic channel encoding 202 with error correction bits. The systematic channel encoding 202 can be based, e.g., on a Hamming code, a LDPC code, or a convolutional code. The result of the systematic channel encoding 202 is a sequence comprising the encoded information bits 203 and some more bits called error correction bits 204.

[0034] The information bits 203 are put into the "primary coded picture" NAL 205. The error correction bits 204 from the systematic channel coding 202 are put into the "redundant coded picture" NAL 206. The NAL units 205, 206 are included in IP/RTP packets for transmission. Then both "primary coded picture" NAL 205 and "redundant coded picture" NAL 206 are transmitted within the framework of an access unit from the sending entity 10 over the transmission medium 30 to the receiving entity 20.
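
The data flow of steps 202 to 206 can be sketched as follows. A single parity bit per block stands in here for a real Hamming, LDPC, or convolutional code, purely to show how the systematic part and the redundancy are routed to separate NAL payloads; the function and variable names are illustrative:

```python
def build_nal_payloads(info_bits: list[int], block: int = 8):
    """Toy version of steps 202-206: systematically encode the
    information bits blockwise and route the two parts of each
    codeword to separate NAL payloads. One parity bit per block
    stands in for a real Hamming/LDPC/convolutional code."""
    primary, redundant = [], []
    for i in range(0, len(info_bits), block):
        chunk = info_bits[i:i + block]
        primary.extend(chunk)             # "primary coded picture" NAL 205
        redundant.append(sum(chunk) % 2)  # "redundant coded picture" NAL 206
    return primary, redundant

bits = [1, 1, 0, 1, 0, 0, 1, 0,   1, 0, 0, 0, 0, 0, 0, 0]
p, r = build_nal_payloads(bits)
print(p == bits, r)  # -> True [0, 1]
```

The key property shown is that the primary payload is bit-for-bit the information sequence, so a receiver that suffers no errors can decode without touching the redundant NAL at all.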

[0035] Fig. 3 is a flow chart describing the generation of the primary coded picture and the redundant coded picture within the framework of an access unit 300.

[0036] An access unit 300 represents a set of VCL NAL units that together compose a primary coded picture. In addition to the primary coded picture, an access unit 300 may also contain one or more redundant coded pictures or other NAL units not containing slices or slice data partitions of a coded picture. The decoding of an access unit 300 always results in a decoded picture.

[0037] In step 301, an access unit delimiter may be inserted, which can be used for detecting the boundary between access units 300 and may therefore aid in detecting the start of a new primary coded picture. In step 302, a sequence parameter set containing all information related to a sequence of pictures, and in step 303, a picture parameter set containing all information related to all the slices belonging to a single picture, may be put into the access unit 300.

[0038] It might be advantageous for gateways and receivers to receive the characteristics of layers and sub-sequences as well as dependency information of sub-sequences such as picture timing information. Therefore, one or more blocks of Supplemental Enhancement Information (= SEI) may be inserted in step 304.

[0039] In step 305, the primary coded picture is put in the access unit 300, containing the information bits obtained by the systematic channel encoding. The primary coded picture consists of a set of VCL NAL units consisting of slices or slice data partitions that represent the samples of the video picture. The primary coded picture contains all macroblocks of the picture.

[0040] As a following block of the access unit 300, in step 306 a redundant coded picture with the error correction bits from the systematic channel coding may be inserted into the access unit 300. Usually, a redundant coded picture is a coded representation of a picture or a part of a picture. The content of a redundant coded picture shall not be used by the decoding process for a bitstream conforming to H.264. The content of a redundant coded picture may be used by the decoding process for a bitstream that contains errors or losses. According to the invention, error correction bits are inserted into the redundant coded picture.

[0041] If the coded picture is the last picture of a coded video sequence, an end of sequence NAL unit may be present in step 307 to indicate the end of the sequence. Finally, if the coded picture is the last coded picture in the entire NAL unit stream, an end of stream NAL unit 308 may be present to indicate that the stream is ending.
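
The ordering of steps 301 to 308 can be summarised in a short sketch that assembles the NAL units of one access unit 300; the string names are illustrative placeholders, not H.264 syntax elements:

```python
def assemble_access_unit(primary_nals, redundant_nal=None,
                         sei=None, end_of_seq=False, end_of_stream=False):
    """Assemble one access unit in the order of steps 301-308;
    optional elements are appended only when present."""
    au = ["access_unit_delimiter",        # step 301
          "sequence_parameter_set",       # step 302
          "picture_parameter_set"]        # step 303
    if sei:
        au.extend(sei)                    # step 304: SEI blocks
    au.extend(primary_nals)               # step 305: primary coded picture
    if redundant_nal is not None:
        au.append(redundant_nal)          # step 306: error correction bits
    if end_of_seq:
        au.append("end_of_sequence")      # step 307
    if end_of_stream:
        au.append("end_of_stream")        # step 308
    return au

print(assemble_access_unit(["slice_0", "slice_1"],
                           redundant_nal="redundant_fec"))
```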

[0042] Fig. 4 shows the processing of the video data that is executed by the receiving entity 20 after transmission of the video data over the transmission medium 30.

[0043] The "primary coded picture" NAL 401 and the "redundant coded picture" NAL 402 are received by the receiving entity 20 as access units with a structure according to Fig. 3. When compared to the "primary coded picture" NAL 205 and the "redundant coded picture" NAL 206 at the first entity 10, the "primary coded picture" NAL 401 and the "redundant coded picture" NAL 402 at the second entity 20 may differ in one or more information bits. The reason for these differences, i.e., the bit errors due to transmission, may be a poor transmission quality of the transmission medium 30. This is particularly true for wireless transmission channels, such as in mobile applications. Poor-quality transmission channels may cause information bits to flip from one binary state to the other, i.e., from zero to one, or vice versa.

[0044] The error correction bits 403 are extracted from the redundant coded picture NAL 402. After that, when examining the primary coded picture NAL 401 for errors, the error correction bits 403 are used in a detection step 404 to detect if and where any errors are present in the primary coded picture NAL 401. In correction step 405, any such detected errors in the primary coded picture NAL 401 are corrected by means of the error correction bits 403. The result of the examination, detection and correction of the primary coded picture NAL 401 is the corrected primary coded picture NAL 406.
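
Detection step 404 and correction step 405 can be illustrated with the receiver side of a systematic Hamming(7,4) code: the parity bits extracted from the redundant coded picture NAL are compared against parity recomputed from the received data bits, and a non-zero syndrome identifies the flipped bit. This is an illustrative sketch only; the invention equally covers LDPC and convolutional codes:

```python
def hamming74_correct(d: list[int], p: list[int]) -> list[int]:
    """Receiver side of systematic Hamming(7,4): recompute the parity
    checks over the received data bits, XOR with the received parity
    bits (detection, step 404), and flip the single data bit a
    non-zero syndrome points at (correction, step 405)."""
    d = list(d)
    s = (p[0] ^ d[0] ^ d[1] ^ d[3],
         p[1] ^ d[0] ^ d[2] ^ d[3],
         p[2] ^ d[1] ^ d[2] ^ d[3])
    # Each single-bit data error produces a unique syndrome pattern.
    flip = {(1, 1, 0): 0, (1, 0, 1): 1, (0, 1, 1): 2, (1, 1, 1): 3}
    if s in flip:
        d[flip[s]] ^= 1
    return d

# Data [1, 0, 1, 1] was sent with parity [0, 1, 0]; bit d1 flipped in transit.
print(hamming74_correct([1, 1, 1, 1], [0, 1, 0]))  # -> [1, 0, 1, 1]
```

An error confined to the parity bits themselves yields a syndrome outside the map, and the data bits are correctly left untouched.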

[0045] In a decoding process 407, the corrected primary coded picture NAL 406 is submitted to a video decoding, resulting in the corrected information bits 408. The corrected information bits 408 carry the video data which now can be processed at the receiving entity 20 or another device for display, transfer or storage.


Claims

1. A method of transmitting video data from a first entity (10, 40) to a second entity (20), whereby the video data comprise information bits (201),
the method comprises the steps of:

applying a systematic channel encoding (202) on the information bits (201) of the video data and obtaining a sequence comprising encoded information bits (203) and error correction bits (204, 403) of the information bits (201), the systematic channel encoding (202) being based on a Hamming code, a LDPC code, or a convolutional code;

generating a primary coded picture network abstraction layer (205, 401) comprising the encoded information bits (203) of the video data;

inserting the error correction bits (204, 403) into a redundant coded picture network abstraction layer (206, 402) by including, in the redundant coded picture network abstraction layer (206, 402), said error correction bits having higher error correction power and having smaller size than a copy of the encoded data, instead of inserting a copy of the encoded data;

transferring the primary coded picture network abstraction layer (205, 401) and the redundant coded picture network abstraction layer (206, 402) from the first entity (10, 40) to the second entity (20);

receiving the primary coded picture network abstraction layer (205, 401) and the redundant coded picture network abstraction layer (206, 402) at the second entity (20);

using the error correction bits (204, 403) in the redundant coded picture network abstraction layer (206, 402) for detecting (404) and correcting (405) errors in the received primary coded picture network abstraction layer (205, 401); and

performing the video decoding (407) of the corrected primary coded picture network abstraction layer (406) resulting in corrected information bits (408) which carry the video data.


 
2. The method of claim 1,
whereby the primary coded picture network abstraction layer (205, 401) and the redundant coded picture network abstraction layer (206, 402) conform with the MPEG4-AVC and/or the H.264 standards.
 
3. The method of claim 1,
whereby the primary coded picture network abstraction layer (205, 401) and the redundant coded picture network abstraction layer (206, 402) are transferred from the first entity (10, 40) to the second entity (20) over a wireless network (30), in particular a mobile telecommunication network, or over an IP network (30), in particular the Internet.
 
4. A video processing device (10, 40) with a control unit (12, 42), whereby the control unit (12) is adapted for applying a systematic channel encoding (202) on information bits (201) of video data and obtaining a sequence comprising encoded information bits (203) and error correction bits (204, 403) of the information bits (203), the systematic channel encoding (202) being based on a Hamming code, a LDPC code, or a convolutional code; generating a primary coded picture network abstraction layer (205, 401) comprising the encoded information bits (203) of the video data; inserting the error correction bits (204, 403) into a redundant coded picture network abstraction layer (206, 402) by including, in the redundant coded picture network abstraction layer (206, 402), said error correction bits having higher error correction power and having smaller size than a copy of the encoded data, instead of inserting a copy of the encoded data; transferring the primary coded picture network abstraction layer (205, 401) and the redundant coded picture network abstraction layer (206, 402) to another entity (20) for using the error correction bits (204, 403) in the redundant coded picture network abstraction layer (206, 402) for detecting (404) and for correcting (405) errors in the received primary coded picture network abstraction layer (205, 401), and for performing the video decoding (407) of the corrected primary coded picture network abstraction layer (406) at the other entity (20) resulting in corrected information bits (408) which carry the video data.
 
5. A video processing device (20) with a control unit (22),
whereby the control unit is adapted for receiving a primary coded picture network abstraction layer (205, 401) comprising encoded information bits (203) of video data and a redundant coded picture network abstraction layer (206, 402) comprising error correction bits (204, 403) of the information bits (203) from another entity (10, 40), the encoded information bits (203) and error correction bits (204, 403) obtained by applying a systematic channel encoding (202) on the information bits (201) of video data, the systematic channel encoding (202) being based on a Hamming code, a LDPC code, or a convolutional code; using the error correction bits (204, 403), inserted in the redundant coded picture network abstraction layer (206, 402) by including, in the redundant coded picture network abstraction layer (206, 402), said error correction bits having higher error correction power and having smaller size than a copy of the encoded data, instead of inserting a copy of the encoded data, for detecting (404) and correcting (405) errors in the received primary coded picture network abstraction layer (205, 401); and performing the video decoding (407) of the corrected primary coded picture network abstraction layer (406) resulting in corrected information bits (408) which carry the video data.
 
6. A video processing device (10, 20, 40) with a control unit (12, 22, 42) according to claim 4 or claim 5,
whereby the video processing device (10, 20, 40) is a terminal, a video proxy, or a video gateway.
 
7. A computer program product for transmission of video data, whereby the video data comprise information bits (201),
the computer program product comprising instructions which, when executed by a video processing unit (10, 40), cause the video processing unit to perform the steps of:

applying a systematic channel encoding (202) on the information bits (201) of the video data and obtaining a sequence comprising encoded information bits (203) and error correction bits (204, 403) of the information bits (201), the systematic channel encoding (202) being based on a Hamming code, a LDPC code, or a convolutional code;

generating a primary coded picture network abstraction layer (205, 401) comprising the encoded information bits (203) of the video data;

inserting the error correction bits (204, 403) into a redundant coded picture network abstraction layer (206, 402) by including, in the redundant coded picture network abstraction layer (206, 402), said error correction bits having higher error correction power and having smaller size than a copy of the encoded data, instead of inserting a copy of the encoded data;

transferring the primary coded picture network abstraction layer (205, 401) and the redundant coded picture network abstraction layer (206, 402) to another video processing unit (10, 40).


 


Ansprüche

1. Verfahren zum Übertragen von Videodaten von einer ersten Entität (10, 40) zu einer zweiten Entität (20), wobei die Videodaten Informationsbits (201) umfassen, wobei das Verfahren die Schritte umfasst:

Anwenden einer systematischen Kanalcodierung (202) auf die Informationsbits (201) der Videodaten und Erhalten einer Folge, die codierte Informationsbits (203) und Fehlerkorrekturbits (204, 403) der Informationsbits (201) umfasst, wobei die systematische Kanalcodierung (202) auf einem Hamming-Code, einem LDPC-Code oder einem Faltungscode basiert;

Erzeugen einer primären Netzabstraktionsschicht (205, 401) codierter Bilder, die die codierten Informationsbits (203) der Videodaten umfasst;

Einfügen der Fehlerkorrekturbits (204, 403) in eine redundante Netzabstraktionsschicht (206, 402) codierter Bilder, indem in der redundanten Netzabstraktionsschicht (206, 402) codierter Bilder die Fehlerkorrekturbits, die eine höhere Fehlerkorrekturleistung aufweisen und eine kleinere Größe als eine Kopie der codierten Daten aufweisen, aufgenommen werden, anstatt dass eine Kopie der codierten Daten eingefügt wird;

Übertragen der primären Netzabstraktionsschicht (205, 401) codierter Bilder und der redundanten Netzabstraktionsschicht (206, 402) codierter Bilder von der ersten Entität (10, 40) an die zweite Entität (20);

Empfangen der primären Netzabstraktionsschicht (205, 401) codierter Bilder und der redundanten Netzabstraktionsschicht (206, 402) codierter Bilder an der zweiten Entität (20);

Verwenden der Fehlerkorrekturbits (204, 403) in der redundanten Netzabstraktionsschicht (206, 402) codierter Bilder zum Detektieren (404) und Korrigieren (405) von Fehlern in der empfangenen primären Netzabstraktionsschicht (205, 401) codierter Bilder und

Durchführen der Videodecodierung (407) der korrigierten primären Netzabstraktionsschicht (406) codierter Bilder, was zu korrigierten Informationsbits (408) führt, die die Videodaten tragen.


 
2. Verfahren nach Anspruch 1, wobei die primäre Netzabstraktionsschicht (205, 401) codierter Bilder und die redundante Netzabstraktionsschicht (206, 402) codierter Bilder mit der MPEG4-AVC- und/oder der H.264-Norm konform sind.
 
3. Verfahren nach Anspruch 1, wobei die primäre Netzabstraktionsschicht (205, 401) codierter Bilder und die redundante Netzabstraktionsschicht (206, 402) codierter Bilder von der ersten Entität (10, 40) an die zweite Entität (20) über ein drahtloses Netz (30), insbesondere ein Mobilfunknetz, oder ein IP-Netz (30), insbesondere das Internet, übertragen werden.
 
4. Videoverarbeitungsvorrichtung (10, 40) mit einer Steuereinheit (12, 42), wobei die Steuereinheit (12) ausgelegt ist für das Anwenden einer systematischen Kanalcodierung (202) auf Informationsbits (201) von Videodaten und das Erhalten einer Folge, die codierte Informationsbits (203) und Fehlerkorrekturbits (204, 403) der Informationsbits (203) umfasst, wobei die systematische Kanalcodierung (202) auf einem Hamming-Code, einem LDPC-Code oder einem Faltungscode basiert; das Erzeugen einer primären Netzabstraktionsschicht (205, 401) codierter Bilder, die die codierten Informationsbits (203) der Videodaten umfasst; das Einfügen der Fehlerkorrekturbits (204, 403) in eine redundante Netzabstraktionsschicht (206, 402) codierter Bilder, indem in der redundanten Netzabstraktionsschicht (206, 402) codierter Bilder die Fehlerkorrekturbits, die eine höhere Fehlerkorrekturleistung aufweisen und eine kleinere Größe als eine Kopie der codierten Daten aufweisen, aufgenommen werden, anstatt dass eine Kopie der codierten Daten eingefügt wird; das Übertragen der primären Netzabstraktionsschicht (205, 401) codierter Bilder und der redundanten Netzabstraktionsschicht (206, 402) codierter Bilder an eine weitere Entität (20) zum Verwenden der Fehlerkorrekturbits (204, 403) in der redundanten Netzabstraktionsschicht (206, 402) codierter Bilder zum Detektieren (404) und Korrigieren (405) von Fehlern in der empfangenen primären Netzabstraktionsschicht (205, 401) codierter Bilder und zum Durchführen der Decodierung (407) des Videos der korrigierten primären Netzabstraktionsschicht (406) codierter Bilder in der anderen Entität (20), was zu korrigierten Informationsbits (408) führt, die die Videodaten tragen.
 
5. Video processing device (20) with a control unit (22), whereby the control unit is adapted to receive, from a further entity (10, 40), a primary coded picture network abstraction layer (205, 401) comprising coded information bits (203) of video data and a redundant coded picture network abstraction layer (206, 402) comprising error correction bits (204, 403) of the information bits (203), the coded information bits (203) and the error correction bits (204, 403) being obtained by applying a systematic channel coding (202) on the information bits (201) of video data, the systematic channel coding (202) being based on a Hamming code, an LDPC code or a convolutional code; to use the error correction bits (204, 403), inserted into the redundant coded picture network abstraction layer (206, 402) by including, in the redundant coded picture network abstraction layer (206, 402), said error correction bits having a greater error correction power and having a smaller size than a copy of the coded data, instead of inserting a copy of the coded data, to detect (404) and correct (405) errors in the received primary coded picture network abstraction layer (205, 401); and to perform the video decoding (407) of the corrected primary coded picture network abstraction layer (406), resulting in corrected information bits (408) carrying the video data.
 
6. Video processing device (10, 20, 40) with a control unit (12, 22, 42) according to claim 4 or claim 5, whereby the video processing device (10, 20, 40) is a terminal, a video proxy or a video gateway.
 
7. Computer program product for transmitting video data, whereby the video data comprise information bits (201), the computer program product comprising instructions which, when executed by a video processing unit (10, 40), cause the video processing unit to perform the steps of:

applying a systematic channel coding (202) on the information bits (201) of the video data and obtaining a sequence comprising coded information bits (203) and error correction bits (204, 403) of the information bits (201), the systematic channel coding (202) being based on a Hamming code, an LDPC code or a convolutional code;

generating a primary coded picture network abstraction layer (205, 401) comprising the coded information bits (203) of the video data;

inserting the error correction bits (204, 403) into a redundant coded picture network abstraction layer (206, 402) by including, in the redundant coded picture network abstraction layer (206, 402), said error correction bits having a greater error correction power and having a smaller size than a copy of the coded data, instead of inserting a copy of the coded data;

transferring the primary coded picture network abstraction layer (205, 401) and the redundant coded picture network abstraction layer (206, 402) to a further video processing unit (10, 40).
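The sender-side steps of the claims can be sketched with a short, purely illustrative Python fragment (not the patent's implementation). It applies a hypothetical systematic Hamming(7,4) code, a standard example of the systematic channel codes named in the claims: the data bits pass through unchanged (the coded information bits destined for the primary coded picture NAL unit), while the parity bits are collected separately (the error correction bits destined for the redundant coded picture NAL unit). The function names, the 4-bit block size, and the list-of-bits representation are assumptions chosen for the example.

```python
def hamming74_encode(block):
    """Systematically encode 4 data bits; return (data_bits, parity_bits)."""
    d0, d1, d2, d3 = block
    parity = [d0 ^ d1 ^ d3,   # p1: parity over d0, d1, d3
              d0 ^ d2 ^ d3,   # p2: parity over d0, d2, d3
              d1 ^ d2 ^ d3]   # p3: parity over d1, d2, d3
    return list(block), parity

def build_nal_payloads(info_bits):
    """Split the bit stream into 4-bit blocks and route the unchanged data
    bits into a primary payload and the parity bits into a redundant
    payload (hypothetical stand-ins for the primary and redundant coded
    picture NAL units)."""
    primary, redundant = [], []
    for i in range(0, len(info_bits), 4):
        block = info_bits[i:i + 4]
        block += [0] * (4 - len(block))      # zero-pad the last block
        data, parity = hamming74_encode(block)
        primary.extend(data)
        redundant.extend(parity)
    return primary, redundant

primary_nal, redundant_nal = build_nal_payloads([1, 0, 1, 1, 0, 0, 1, 0])
# primary_nal carries the information bits verbatim;
# redundant_nal holds 3 parity bits per 4 data bits.
```

Because the code is systematic, the primary payload carries the information bits verbatim, and the three parity bits per block are smaller than a copy of the four data bits, which mirrors the size argument in the claims.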


 


Claims

1. Method of transmitting video data from a first entity (10, 40) to a second entity (20), whereby the video data comprise information bits (201),
the method comprising the steps of:

applying a systematic channel coding (202) on the information bits (201) of the video data and obtaining a sequence comprising coded information bits (203) and error correction bits (204, 403) of the information bits (201), the systematic channel coding (202) being based on a Hamming code, an LDPC code or a convolutional code;

generating a primary coded picture network abstraction layer (205, 401) comprising the coded information bits (203) of the video data;

inserting the error correction bits (204, 403) into a redundant coded picture network abstraction layer (206, 402) by including, in the redundant coded picture network abstraction layer (206, 402), said error correction bits having a greater error correction power and having a smaller size than a copy of the coded data, instead of inserting a copy of the coded data;

transferring the primary coded picture network abstraction layer (205, 401) and the redundant coded picture network abstraction layer (206, 402) from the first entity (10, 40) to the second entity (20);

receiving the primary coded picture network abstraction layer (205, 401) and the redundant coded picture network abstraction layer (206, 402) in the second entity (20);

using the error correction bits (204, 403) in the redundant coded picture network abstraction layer (206, 402) to detect (404) and correct (405) errors in the received primary coded picture network abstraction layer (205, 401); and

performing the video decoding (407) of the corrected primary coded picture network abstraction layer (406), resulting in corrected information bits (408) carrying the video data.


 
2. Method according to claim 1,
whereby the primary coded picture network abstraction layer (205, 401) and the redundant coded picture network abstraction layer (206, 402) are compliant to the MPEG4-AVC and/or H.264 standards.
 
3. Method according to claim 1,
whereby the primary coded picture network abstraction layer (205, 401) and the redundant coded picture network abstraction layer (206, 402) are transferred from the first entity (10, 40) to the second entity (20) via a wireless network (30), in particular a mobile telecommunication network, or via an IP network (30), in particular the Internet.
 
4. Video processing device (10, 40) with a control unit (12, 42),
whereby the control unit (12) is adapted to apply a systematic channel coding (202) on information bits (201) of video data and to obtain a sequence comprising coded information bits (203) and error correction bits (204, 403) of the information bits (203), the systematic channel coding (202) being based on a Hamming code, an LDPC code or a convolutional code; to generate a primary coded picture network abstraction layer (205, 401) comprising the coded information bits (203) of the video data; to insert the error correction bits (204, 403) into a redundant coded picture network abstraction layer (206, 402) by including, in the redundant coded picture network abstraction layer (206, 402), said error correction bits having a greater error correction power and having a smaller size than a copy of the coded data, instead of inserting a copy of the coded data; to transfer the primary coded picture network abstraction layer (205, 401) and the redundant coded picture network abstraction layer (206, 402) to a further entity (20) for using the error correction bits (204, 403) in the redundant coded picture network abstraction layer (206, 402) to detect (404) and correct (405) errors in the received primary coded picture network abstraction layer (205, 401), and for performing the video decoding (407) of the corrected primary coded picture network abstraction layer (406) in the other entity (20), resulting in corrected information bits (408) carrying the video data.
 
5. Video processing device (20) with a control unit (22),
whereby the control unit is adapted to receive, from a further entity (10, 40), a primary coded picture network abstraction layer (205, 401) comprising coded information bits (203) of video data and a redundant coded picture network abstraction layer (206, 402) comprising error correction bits (204, 403) of the information bits (203), the coded information bits (203) and the error correction bits (204, 403) being obtained by applying a systematic channel coding (202) on the information bits (201) of video data, the systematic channel coding (202) being based on a Hamming code, an LDPC code or a convolutional code; to use the error correction bits (204, 403), inserted into the redundant coded picture network abstraction layer (206, 402) by including, in the redundant coded picture network abstraction layer (206, 402), said error correction bits having a greater error correction power and having a smaller size than a copy of the coded data, instead of inserting a copy of the coded data, to detect (404) and correct (405) errors in the received primary coded picture network abstraction layer (205, 401); and to perform the video decoding (407) of the corrected primary coded picture network abstraction layer (406), resulting in corrected information bits (408) carrying the video data.
 
6. Video processing device (10, 20, 40) with a control unit (12, 22, 42) according to claim 4 or claim 5,
whereby the video processing device (10, 20, 40) is a terminal, a video proxy or a video gateway.
 
7. Computer program product for transmitting video data, whereby the video data comprise information bits (201), the computer program comprising instructions which, when executed by a video processing unit (10, 40), cause the video processing unit to perform the steps of:

applying a systematic channel coding (202) on the information bits (201) of the video data and obtaining a sequence comprising coded information bits (203) and error correction bits (204, 403) of the information bits (201), the systematic channel coding (202) being based on a Hamming code, an LDPC code or a convolutional code; generating a primary coded picture network abstraction layer (205, 401) comprising the coded information bits (203) of the video data; inserting the error correction bits (204, 403) into a redundant coded picture network abstraction layer (206, 402) by including, in the redundant coded picture network abstraction layer (206, 402), said error correction bits having a greater error correction power and having a smaller size than a copy of the coded data, instead of inserting a copy of the coded data;

transferring the primary coded picture network abstraction layer (205, 401) and the redundant coded picture network abstraction layer (206, 402) to a further video processing unit (10, 40).
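The receiver-side steps of claim 1, detection (404) and correction (405) followed by decoding of the corrected information bits, can be sketched as follows. This Python fragment is illustrative only and continues the hypothetical systematic Hamming(7,4) example; it is not the patented implementation. A non-zero syndrome, computed from the received data bits of the primary payload together with the parity bits recovered from the redundant payload, signals an error, and its value locates the single bit to flip.

```python
# Syndrome (s1, s2, s3) -> index of the flipped bit in [d0..d3, p1..p3],
# i.e. each tuple is the parity-check matrix column of that position.
SYNDROME_TO_POS = {
    (1, 1, 0): 0, (1, 0, 1): 1, (0, 1, 1): 2, (1, 1, 1): 3,
    (1, 0, 0): 4, (0, 1, 0): 5, (0, 0, 1): 6,
}

def hamming74_correct(data, parity):
    """Return the 4 corrected information bits of one received block.

    `data` are the 4 bits from the primary payload; `parity` are the
    3 bits from the redundant payload. Corrects any single bit error."""
    c = list(data) + list(parity)
    syndrome = (c[0] ^ c[1] ^ c[3] ^ c[4],
                c[0] ^ c[2] ^ c[3] ^ c[5],
                c[1] ^ c[2] ^ c[3] ^ c[6])
    if syndrome != (0, 0, 0):               # error detected
        c[SYNDROME_TO_POS[syndrome]] ^= 1   # single-bit error corrected
    return c[:4]                            # corrected information bits

# A block [1, 0, 1, 1] was sent with parity [0, 1, 0]; one bit of the
# primary payload is flipped in transit and then recovered:
corrected = hamming74_correct([1, 1, 1, 1], [0, 1, 0])
# corrected == [1, 0, 1, 1]
```

Because every non-zero syndrome is a distinct column of the parity-check matrix, any single flipped bit, whether in the primary data or in the parity itself, is located and undone before the corrected information bits are passed to the video decoder.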


 




Drawing