(11)EP 3 257 243 B1

(12)EUROPEAN PATENT SPECIFICATION

(45)Mention of the grant of the patent:
07.09.2022 Bulletin 2022/36

(21)Application number: 16711913.0

(22)Date of filing:  10.02.2016
(51)International Patent Classification (IPC): 
H04N 17/00(2006.01)
(52)Cooperative Patent Classification (CPC):
H04N 17/004
(86)International application number:
PCT/US2016/017376
(87)International publication number:
WO 2016/130696 (18.08.2016 Gazette  2016/33)

(54)

TECHNIQUES FOR IDENTIFYING ERRORS INTRODUCED DURING ENCODING

VERFAHREN ZUR IDENTIFIZIERUNG VON WÄHREND DER CODIERUNG EINGEFÜGTEN FEHLERN

TECHNIQUES D'IDENTIFICATION D'ERREURS INTRODUITES PENDANT UN CODAGE


(84)Designated Contracting States:
AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

(30)Priority: 13.02.2015 US 201514622771

(43)Date of publication of application:
20.12.2017 Bulletin 2017/51

(73)Proprietor: Netflix, Inc.
Los Gatos, California 95032 (US)

(72)Inventors:
  • AARON, Anne
    Menlo Park, California 94025 (US)
  • MA, Zhonghua
    San Jose, California 95118 (US)

(74)Representative: Kilburn & Strode LLP 
Lacon London
84 Theobalds Road
Holborn
London WC1X 8NL (GB)


(56)References cited:
EP-A1- 1 622 395
EP-A1- 2 408 206
US-A1- 2008 143 837
EP-A1- 1 903 809
US-A1- 2002 181 408
US-A1- 2014 016 038
  
  • Stephen Wolf: "A no reference (NR) and reduced reference (RR) metric for detecting dropped video frames", Fourth International Workshop on Video Processing and Quality Metrics for Consumer Electronics, 16 January 2009 (2009-01-16), XP055077554, Retrieved from the Internet: URL:http://enpub.fulton.asu.edu/resp/vpqm/vpqm09/Proceedings_VPQM09/Papers/vpqm_09_v2.pdf [retrieved on 2013-09-03]
  • BORER S: "A model of jerkiness for temporal impairments in video transmission", PROCEEDINGS OF THE 2010 SECOND INTERNATIONAL WORKSHOP ON QUALITY OF MULTIMEDIA EXPERIENCE (QOMEX 2010), TRONDHEIM, NORWAY, IEEE, PISCATAWAY, NJ, USA, 21 June 2010 (2010-06-21), pages 218-223, XP031712738, ISBN: 978-1-4244-6959-8
  • "Objective perceptual video quality measurement techniques for digital cable television in the presence of a full reference; J.144 (03/04)", ITU-T STANDARD, INTERNATIONAL TELECOMMUNICATION UNION, GENEVA ; CH, no. J.144 (03/04), 15 March 2004 (2004-03-15), pages 1-156, XP017466943, [retrieved on 2009-07-17]
  
Note: Within nine months from the publication of the mention of the grant of the European patent, any person may give notice to the European Patent Office of opposition to the European patent granted. Notice of opposition shall be filed in a written reasoned statement. It shall not be deemed to have been filed until the opposition fee has been paid. (Art. 99(1) European Patent Convention).


Description

BACKGROUND OF THE INVENTION


Field of the Invention



[0001] Embodiments of the present invention relate generally to computer science and, more specifically, to techniques for identifying errors introduced during encoding.

Description of the Related Art



[0002] Efficiently and accurately encoding source video is essential for real-time delivery of video content. After the encoded video content is received, the source video is decoded and viewed or otherwise operated upon. Some encoding processes employ lossless compression algorithms, such as Huffman coding, to enable exact replication of the source. By contrast, to increase compression rates and/or reduce the size of the encoded video content, other encoding processes leverage lossy data compression techniques that eliminate selected information, typically enabling only approximate reconstruction of the source.

[0003] To optimize encoding time, some encoding processes parallelize the encoding work across multiple compute instances. In one approach to parallel encoding, an encoding engine decomposes the source video into individual chunks, distributes per-chunk encoding across multiple compute instances, and then configures a final compute instance to assemble the multiple encoded chunks into an aggregate encode.

[0004] While parallelizing the encoding work can significantly decrease overall encoding time compared to conventional techniques, the complexity inherent in this "divide-and-conquer" approach introduces additional opportunities for errors. For example, if the encoding engine does not assemble the encoded chunks correctly, then synchronization errors may be introduced, thereby degrading the quality of the resulting video. Notably, synchronization errors may be experienced by a viewer as an unacceptable and annoying lag between the video and audio components of a movie. Unfortunately, because the source video is typically unavailable after the encoding engine splits the source video into chunks, conventional verification techniques that compare the source video to the post-encode video have limited or no applicability in the parallel encoding paradigms described above. Consequently, parallel encoding engines typically do not ensure the quality of encoded videos or do not do so in an efficient, systematic fashion.

[0005] US 2002/181408 discloses a method for evaluating an end-user's subjective assessment of streaming media quality including obtaining reference data characterizing the media stream, and obtaining altered data characterizing the media stream after the media stream has traversed a channel that includes a network. An objective measure of the QOS of the media stream is then determined by comparing the reference data and the altered data. "A no reference (NR) and reduced reference (RR) metric for detecting dropped video frames" by Stephen Wolf, Fourth International Workshop on Video Processing and Quality Metrics for Consumer Electronics, 16 January 2009, discloses the NR metric and the RR metric for detecting dropped video frames, with application for in-service video quality monitoring.

[0006] As the foregoing illustrates, what is needed in the art are more effective techniques for identifying errors introduced during encoding processes.

SUMMARY OF THE INVENTION



[0007] The invention is defined by the appended claims. One embodiment of the present invention sets forth a computer-implemented method for identifying errors introduced during encoding.

[0008] One advantage of the disclosed error identification techniques is that they enable the verification of encoded data derived from source data irrespective of the availability of the source data. Further, because the disclosed techniques operate on frame difference data derived from the source data instead of on the source data itself, parallel encoding systems may be effectively debugged based on the errors identified in the aggregate encoded data.

BRIEF DESCRIPTION OF THE DRAWINGS



[0009] So that the manner in which the above recited features of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.

Figure 1 is a conceptual illustration of a system configured to implement one or more aspects of the present invention;

Figure 2 is a block diagram illustrating the encode validator of Figure 1, according to one embodiment of the present invention;

Figure 3 is a flow diagram of method steps for generating a verified, aggregate encode of a video source, according to one embodiment of the present invention; and

Figures 4A-4B set forth a flow diagram of method steps for identifying and classifying errors while encoding a video source, according to one embodiment of the present invention.


DETAILED DESCRIPTION



[0010] In the following description, numerous specific details are set forth to provide a more thorough understanding of the present invention. However, it will be apparent to one skilled in the art that the present invention may be practiced without one or more of these specific details.

System Overview



[0011] Figure 1 is a conceptual illustration of a system 100 configured to implement one or more aspects of the present invention. As shown, the system 100 includes a virtual private cloud (i.e., encapsulated shared resources, software, data, etc.) 102 connected to a variety of devices capable of transmitting input data and/or displaying video. Such devices include, without limitation, a desktop computer 108, a smartphone 104, and a laptop 106. In alternate embodiments, the system 100 may include any number and/or type of input, output, and/or input/output devices in any combination.

[0012] The virtual private cloud (VPC) 102 includes, without limitation, any number and type of compute instances 110. The VPC 102 receives input user information from an input device (e.g., the laptop 106), one or more compute instances 110 operate on the user information, and the VPC 102 transmits processed information to the user. The VPC 102 conveys output information to the user via display capabilities of any number of devices, such as a conventional cathode ray tube, liquid crystal display, light-emitting diode, or the like.

[0013] In alternate embodiments, the VPC 102 may be replaced with any type of cloud computing environment, such as a public or a hybrid cloud. In other embodiments, the system 100 may include any distributed computer system instead of the VPC 102. In yet other embodiments, the system 100 does not include the VPC 102 and, instead, the system 100 includes a single computing unit that implements multiple processing units (e.g., central processing units and/or graphical processing units in any combination).

[0014] As shown for the compute instance 110₀, each compute instance 110 includes a central processing unit (CPU) 112, a graphics processing unit (GPU) 114, and a memory 116. In operation, the CPU 112 is the master processor of the compute instance 110, controlling and coordinating operations of other components included in the compute instance 110. In particular, the CPU 112 issues commands that control the operation of the GPU 114. The GPU 114 incorporates circuitry optimized for graphics and video processing, including, for example, video output circuitry. In various embodiments, the GPU 114 may be integrated with one or more other elements of the compute instance 110. The memory 116 stores content, such as software applications and data, for use by the CPU 112 and the GPU 114 of the compute instance 110.

[0015] In general, the compute instances 110 included in the VPC 102 are configured to implement one or more applications. More specifically, the compute instances 110 included in the VPC 102 are configured to encode a source 105, such as a video file. As shown, compute instance 110₀ is configured as a source inspector 110 and a source chunker 112, compute instances 110₁-110ₙ are configured as a parallel chunk encoder 120, and compute instance 110ₙ₊₁ is configured as a multi-chunk assembler 130 and an encode validator 140.

[0016] The source chunker 112 receives the source 105 and breaks the source into N different source chunks 115, where N corresponds to the number of compute instances 110 included in the parallel chunk encoder 120. Subsequently, the source chunker 112 routes each of the source chunks 115₁-115ₙ to a different one of the compute instances 110₁-110ₙ, and the compute instances 110 each perform encoding operations to create corresponding encode chunks 125₁-125ₙ. The multi-chunk assembler 130 then combines the encode chunks 125₁-125ₙ into an aggregate encode 135.

[0017] Not only do concurrent encoding operations reduce the time required to encode the source 105, but distributing the encoding operations across multiple compute instances 110 decreases the impact of any single compute instance 110 on the encoding process. For example, if the compute instance 110ₙ fails, then the parallel chunk encoder 120 reprocesses only the single encode chunk 125ₙ. However, each of the processes of breaking the source 105, distributing the source chunks 115, encoding the source chunks 115, and reassembling the encode chunks 125 into the aggregate encode 135 is susceptible to errors. For example, an error in reassembly of the encode chunks 125 may cause one or more frames of the source 105 to be dropped in the aggregate encode 135, potentially leading to noticeable synchronization errors when the aggregate encode 135 is decoded.

[0018] For this reason, the compute instance 110ₙ₊₁ is also configured as an encode validator 140. As persons skilled in the art will recognize, the complete source 105 is unavailable to the parallel chunk encoder 120, the multi-chunk assembler 130, and the encode validator 140. Consequently, the encode validator 140 does not implement conventional verification techniques that compute and compare metrics such as peak signal-to-noise ratio (PSNR) for both the source 105 and the aggregate encode 135. Instead, the source inspector 110 and the encode validator 140 work together to indirectly compare the source 105 to the aggregate encode 135. In alternate embodiments, the compute instance 110ₙ₊₁ is configured as the multi-chunk assembler 130, but not as the encode validator 140. In such embodiments, a different one of the compute instances 110 (e.g., the compute instance 110ₙ₊₂) is configured as the encode validator 140.

[0019] As part of processing the source 105, the source inspector 110 calculates the average luma difference between each pair of adjacent frames and stores the frame difference values as source frame difference data 137. A small frame difference value indicates that two adjacent frames are relatively similar, reflecting a static scene or a scene with little motion. By contrast, a large frame difference value indicates a sharp change between two adjacent frames, typically reflecting large motion or scene cuts (i.e., scene changes). The source inspector 110 may generate the source frame difference data 137 in any technically feasible fashion.
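By way of illustration only, the following sketch shows one way such frame difference data might be computed, assuming each frame is available as a two-dimensional array of luma samples; the function name and frame representation are assumptions made for this example and are not part of the disclosed embodiment.

```python
import numpy as np

def frame_difference_data(frames):
    # `frames` is assumed to be an iterable of 2-D arrays holding the
    # luma (Y) plane of each frame, all with identical dimensions.
    diffs = []
    prev = None
    for frame in frames:
        luma = np.asarray(frame, dtype=np.float64)
        if prev is not None:
            # Average absolute luma difference between adjacent frames.
            diffs.append(float(np.mean(np.abs(luma - prev))))
        prev = luma
    return np.asarray(diffs)  # one value per adjacent frame pair
```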

[0020] Correspondingly, the encode validator 140 decodes the aggregate encode 135, and then calculates the average luma difference between each pair of adjacent frames, generating "encode frame difference data." Subsequently, the encode validator 140 leverages the source frame difference data 137 and the encode frame difference data to validate the aggregate encode 135 without accessing the source 105. More specifically, the encode validator 140 performs phase correlation operations between the source frame difference data 137 and the encode frame difference data. These phase correlation operations enable the encode validator 140 to detect unexpected low phase correlation errors in the aggregate encode 135 that are attributable to frame loss and encoding misbehaviors.

[0021] Notably, the encode validator 140 also uses the phase correlation operations to weed out "false" errors. More specifically, the encode validator 140 identifies a variety of scenarios that indicate isolated errors that are accepted as part of the encoding process, and then prunes these errors. These false errors include isolated blocks of frames with artifacts attributable to the lossy data compression techniques implemented in the parallel chunk encoder 120 and/or bad encoder rate control. Advantageously, discriminating between true (i.e., unintentional) errors introduced by the encoding process and false (i.e., anticipated) errors enables the encode validator 140 to optimally guide triage of the aggregate encode 135 and debugging of the source chunker 112, the parallel chunk encoder 120, and the multi-chunk assembler 130.

Identifying "True" Errors



[0022] Figure 2 is a block diagram illustrating the encode validator 140 of Figure 1, according to one embodiment of the present invention. As shown, the encode validator 140 includes, without limitation, a decoder and frame difference generator 210 and an error identification engine 220.

[0023] Upon receiving the aggregate encode 135, the decoder and frame difference generator 210 decodes the aggregate encode 135 and then generates encode frame difference data 237. As previously disclosed herein, the encode frame difference data 237 is the average luma difference between each pair of adjacent frames in the decoded aggregate encode 135. The decoder and frame difference generator 210 may generate the encode frame difference data 237 in any technically feasible fashion. In some embodiments, the same algorithm may be used to generate both the source frame difference data 137 and the encode frame difference data 237.

[0024] As shown, the error identification engine 220 receives both the encode frame difference data 237 and the source frame difference data 137. Notably, the error identification engine 220 receives neither the source 105 nor the aggregate encode 135. Instead of relying on direct comparisons, the error identification engine 220 identifies errors in the aggregate encode 135 indirectly based on comparisons between the encode frame difference data 237 and the source frame difference data 137.

[0025] In general, the error identification engine 220 is designed to indirectly identify errors in the aggregate encode 135, prune any false errors, and then generate a validation result 295 that reflects the number and/or type of the remaining (i.e., true) errors, such as "good encode" or "bad encode." The error identification engine 220 includes, without limitation, a frame range checker 230, a low cross-correlation block detector 240, an extended cross-correlation analyzer 250, a scene cut alignment analyzer 260, a low cross-correlation persistent analyzer 270, an isolated low cross-correlation analyzer 280, and a low-bit rate encode analyzer 290.

[0026] In alternate embodiments, the error identification engine 220 may identify errors in any technically feasible manner that is based on the encode frame difference data 237 and the source frame difference data 137. Further, the error identification engine 220 may combine any number of validation techniques in any order to determine the validation result 295. For example, in some embodiments, the scene cut alignment analyzer 260 and associated functionality is omitted. The validation functionality may also be combined or split into any number of individual components; for example, the low cross-correlation persistent analyzer 270 and the isolated low cross-correlation analyzer 280 may be combined into a single "low cross-correlation analyzer."

[0027] In general, the error identification engine 220 may identify errors by performing any type and number of phase correlation operations. For example, in alternate embodiments, the components of the encode validator 140 may be modified to replace or augment the cross-correlation operations with additional types of phase correlation operations.

[0028] In other embodiments, the source inspector 110 is modified to include additional information in the source frame difference data 137, such as chroma information. Similarly, the decoder and frame difference generator 210 includes chroma information in the encode frame difference data 237. In such embodiments, the error identification engine 220 is extended to include error analysis based on the chroma information. For example, the error identification engine 220 may generate a color histogram based on both the luma and chroma information, and then analyze the color histogram to identify errors attributable to color shifting and chroma coding artifacts.
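As an illustrative sketch of such a chroma-aware extension (the helper names and the resampling assumption are mine, not part of the disclosed embodiment), a joint Y/Cb/Cr histogram could be built per frame and compared between the source and the decoded aggregate encode:

```python
import numpy as np

def color_histogram(luma, cb, cr, bins=32):
    # Joint Y/Cb/Cr histogram of one frame; assumes 8-bit planes that have
    # already been resampled to a common resolution (e.g., from 4:2:0).
    samples = np.stack([luma.ravel(), cb.ravel(), cr.ravel()], axis=1)
    hist, _ = np.histogramdd(samples, bins=(bins,) * 3,
                             range=((0, 256),) * 3)
    return hist / hist.sum()

def histogram_distance(h_source, h_encode):
    # L1 distance between normalized histograms; persistently large values
    # may indicate color shifting or chroma coding artifacts.
    return float(np.abs(h_source - h_encode).sum())
```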

[0029] The error identification engine 220 guides an error detection and pruning process, with different algorithms implemented in different components. Advantageously, the error identification engine 220 is configured to short-circuit the evaluation process upon reaching a conclusion regarding the existence of "true" errors in the aggregate encode 135, avoiding unnecessary use of compute resources.

[0030] First, the error identification engine 220 executes the frame range checker 230. The frame range checker 230 calculates the difference between the frame count of the source frame difference data 137 and the frame count of the encode frame difference data 237. If this frame count difference exceeds a configurable threshold, then the error identification engine 220 concludes that a true error exists, the error identification engine 220 emits the validation result 295 of "bad encode," and the error identification engine 220 successfully terminates. In such a scenario, the error identification engine 220 short-circuits the evaluation process, invoking none of the low cross-correlation block detector 240, the extended cross-correlation analyzer 250, the scene cut alignment analyzer 260, the low cross-correlation persistent analyzer 270, the isolated low cross-correlation analyzer 280, and the low-bit rate encode analyzer 290.

[0031] The frame range checker 230 may implement any configurable threshold. For example, some frame loss often occurs at the end of the aggregate encode 135 (i.e., the black frames at the end). Since such a frame loss does not perceptibly impact the quality of the aggregate encode 135, in some embodiments, the configurable threshold is set to ten frames.

[0032] If the frame count difference does not exceed the configurable threshold, then the error identification engine 220 continues the analysis and invokes the low cross-correlation block detector 240. The low cross-correlation block detector 240 computes the block-by-block cross-correlation between the source frame difference data 137 and the encode frame difference data 237. The number of frames included in each block is consistent across the blocks and may be determined in any technically feasible fashion. In some embodiments, as a trade-off between accommodating local video content variation and detecting scene cuts, the block size is set to 1000 frames.

[0033] As persons skilled in the art will recognize, cross-correlation is a robust and effective tool for detecting phase shift between two signal sources. In general, for a given block (set of frames), if the source frame difference data 137 and the encode frame difference data 237 are relatively similar, then the video content is likely to be relatively similar, and the cross-correlation between the two blocks is high. By contrast, if the source frame difference data 137 and the encode frame difference data 237 differ dramatically, then the video content of the aggregate encode 135 is likely to be out-of-synchronization with the video content of the source 105. In such a scenario, the cross-correlation between the two blocks is relatively low, often reflecting a frame drop during the encoding process.

[0034] After computing the cross-correlation data, the low cross-correlation block detector 240 generates a list of relatively low cross-correlation blocks. For each block, the low cross-correlation block detector 240 evaluates the cross-correlation between the source frame difference data 137 and the encode frame difference data 237. If the cross-correlation of the block is lower than a predetermined threshold, then the low cross-correlation block detector 240 adds the block to the list of low cross-correlation blocks. The threshold may be set to any value based on any heuristic. In some embodiments, the threshold is set to 0.78. In other embodiments, the threshold is set to a higher or lower value.
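A minimal sketch of this block-by-block detection follows, assuming both frame difference signals are one-dimensional arrays truncated to a common length; the zero-lag normalized cross-correlation is computed here as a Pearson correlation coefficient, and the helper name and return format are illustrative assumptions.

```python
import numpy as np

BLOCK_SIZE = 1000          # frames per block (paragraph [0032])
LOW_CORR_THRESHOLD = 0.78  # example threshold (paragraph [0034])

def low_correlation_blocks(source_diffs, encode_diffs):
    # Returns (block_index, correlation) pairs for each block whose
    # zero-lag normalized cross-correlation falls below the threshold.
    n = min(len(source_diffs), len(encode_diffs))
    src = np.asarray(source_diffs[:n], dtype=np.float64)
    enc = np.asarray(encode_diffs[:n], dtype=np.float64)
    low = []
    for index, start in enumerate(range(0, n, BLOCK_SIZE)):
        s = src[start:start + BLOCK_SIZE]
        e = enc[start:start + BLOCK_SIZE]
        if len(s) < 2:
            break
        corr = np.corrcoef(s, e)[0, 1]  # Pearson coefficient at zero lag
        if np.isnan(corr):
            continue  # constant block (e.g., all-black frames); skip
        if corr < LOW_CORR_THRESHOLD:
            low.append((index, float(corr)))
    return low
```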

[0035] Before proceeding to the next evaluation phase, the error identification engine 220 determines whether the list of low cross-correlation blocks is empty (i.e., the low cross-correlation block detector 240 identified no low cross-correlation blocks). If the error identification engine 220 determines that there are no low cross-correlation blocks, then the error identification engine 220 emits the validation result 295 of "good encode," and the error identification engine 220 successfully terminates. In such a scenario, the error identification engine 220 short-circuits the evaluation process, invoking none of the extended cross-correlation analyzer 250, the scene cut alignment analyzer 260, the low cross-correlation persistent analyzer 270, the isolated low cross-correlation analyzer 280, and the low-bit rate encode analyzer 290.

[0036] Otherwise, the list of low cross-correlation blocks is evaluated by the extended cross-correlation analyzer 250 as part of identifying synchronization errors. For each low cross-correlation block, the extended cross-correlation analyzer 250 imposes small phase shifts between the source frame difference data 137 and the encode frame difference data 237 for a set of frames surrounding the low cross-correlation block. The extended cross-correlation analyzer 250 then computes the corresponding shifted cross-correlations and determines whether the match between the block for the source 105 and the aggregate encode 135 is better with the imposed phase shift than without the phase shift.

[0037] If the extended cross-correlation analyzer 250 determines that the shifted cross-correlation is significantly better than the cross-correlation, then the extended cross-correlation analyzer 250 determines that the aggregate encode 135 is out-of-sync at the block. Upon identifying such an out-of-sync block, the extended cross-correlation analyzer 250 considers the aggregate encode 135 to include true errors, the error identification engine 220 emits the validation result 295 of "bad encode," and the error identification engine 220 successfully terminates.

[0038] By contrast, if the extended cross-correlation analyzer 250 determines that the original cross-correlation is significantly better than the shifted cross-correlations, then the extended cross-correlation analyzer 250 determines that the aggregate encode 135 is in-sync at the block, and removes the block from the low cross-correlation list.

[0039] The extended cross-correlation analyzer 250 may implement the shifted cross-correlation comparison in any technically feasible fashion. In one embodiment, the extended cross-correlation analyzer 250 implements the following algorithm:
For a given low cross-correlation block in the source frame difference data 137, the extended cross-correlation analyzer 250 shifts the block against the corresponding encode frame difference data 237 within a preset phase window (e.g., [-5, +5]). The extended cross-correlation analyzer 250 then analyzes the cross-correlations per phase shift as follows:
  1) If a maximum is found by shifting the block in the encode frame difference data 237 away from the original location, and the maximum is significantly larger than all other cross-correlation values, then the current block is identified as "out-of-sync". The analysis is terminated and the validation result 295 of "bad encode" is issued.
  2) Otherwise, if the maximum still corresponds to the original location (i.e., no shift), and all other values produced by shifting remain significantly lower than the maximum, then the extended cross-correlation analyzer 250 considers the block "in sync" and removes the block from the list of low cross-correlation blocks.
Note that a correlation value is "significantly" distinct from the others if, from a statistical point of view, it lies more than two standard deviations away from the mean of the data.
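A sketch of this extended analysis appears below, under the same assumptions as the earlier sketches; the significance test approximates "more than two standard deviations from the mean" over the correlation values observed in the phase window, and all names are illustrative.

```python
import numpy as np

PHASE_WINDOW = range(-5, 6)  # preset phase window, e.g. [-5, +5]

def classify_block(src, enc, start, size):
    # Correlate one source block against phase-shifted encode data and
    # classify the block as out-of-sync, in-sync, or inconclusive.
    size = min(size, len(src) - start)
    base = np.asarray(src[start:start + size], dtype=np.float64)
    if len(base) < 2:
        return "inconclusive"
    corrs = {}
    for shift in PHASE_WINDOW:
        lo = start + shift
        if lo < 0 or lo + size > len(enc):
            continue
        e = np.asarray(enc[lo:lo + size], dtype=np.float64)
        c = np.corrcoef(base, e)[0, 1]
        if not np.isnan(c):
            corrs[shift] = c
    if 0 not in corrs or len(corrs) < 3:
        return "inconclusive"
    values = np.array(list(corrs.values()))
    best = max(corrs, key=corrs.get)
    # "Significant": more than two standard deviations above the mean
    # of all correlation values in the window (an approximation).
    significant = corrs[best] > values.mean() + 2.0 * values.std()
    if best != 0 and significant:
        return "out-of-sync"  # a shifted alignment matches clearly better
    if best == 0 and significant:
        return "in-sync"      # the original alignment is clearly best
    return "inconclusive"
```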

[0040] After the extended cross-correlation analyzer 250 completes extended cross-correlation analysis for each of the low cross-correlation blocks, the error identification engine 220 determines whether there are any low cross-correlation blocks remaining in the low cross-correlation block list. If the error identification engine 220 determines that there are no remaining low cross-correlation blocks, then the error identification engine 220 emits the validation result 295 of "good encode," and the error identification engine 220 successfully terminates. In such a scenario, the error identification engine 220 short-circuits the evaluation process, invoking none of the scene cut alignment analyzer 260, the low cross-correlation persistent analyzer 270, the isolated low cross-correlation analyzer 280, and the low-bit rate encode analyzer 290.

[0041] By contrast, if the list of low cross-correlation blocks still includes any low cross-correlation blocks, then the scene cut alignment analyzer 260 identifies any errors that are rendered essentially imperceptible by scene cuts. In general, scene changes and scene cuts represent critical phase information in video sequences. As persons skilled in the art will recognize, if a scene change/cut in the aggregate encode 135 is well-aligned with the scene change/cut in the source 105, the images immediately before that scene change/cut are typically in-sync. In general, the scene cut alignment analyzer 260 examines the source frame difference data 137 and the encode frame difference data 237 to determine the scene-cut alignment. The scene cut alignment analyzer 260 then removes any low-correlation blocks immediately prior to a scene cut from the list of low-correlation blocks, thereby pruning false errors. In one embodiment, the scene cut alignment analyzer 260 implements the following algorithm, illustrated by the sketch after the list:
  1) Only those "significant" scene changes/cuts in the source 105 and the aggregate encode 135 are indirectly identified for alignment analysis. A scene cut/change is considered "significant" when (1) the corresponding frame difference data is a large value (e.g., >= 15) and (2) the frame difference data for the current frame is significantly larger than the frame difference data for the frame immediately in front of the current frame (e.g., >= 5 in magnitude).
  2) If a significant scene cut/change is indirectly determined to be aligned between the source 105 and the aggregate encode 135, then only the low correlation block immediately in front of the aligned scene cut/change is considered "in-sync" and removed from the list of low cross-correlation blocks.
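The significance test and the pruning step might be sketched as follows; the thresholds are the example values from the text, and the mapping from an aligned cut to the block "immediately in front" of it is one plausible interpretation rather than the definitive one.

```python
import numpy as np

def significant_scene_cuts(frame_diffs, level=15.0, jump=5.0):
    # A frame marks a "significant" scene change/cut when its difference
    # value is large (>= level) and exceeds the previous frame's value by
    # at least `jump` (paragraph [0041]).
    d = np.asarray(frame_diffs, dtype=np.float64)
    return {i for i in range(1, len(d))
            if d[i] >= level and d[i] - d[i - 1] >= jump}

def prune_before_aligned_cuts(low_blocks, src_diffs, enc_diffs,
                              block_size=1000):
    # Drop each low-correlation block that sits immediately in front of a
    # scene cut detected at the same frame index in both signals.
    aligned = (significant_scene_cuts(src_diffs)
               & significant_scene_cuts(enc_diffs))
    return [(b, c) for (b, c) in low_blocks
            if not any((b + 1) * block_size <= cut < (b + 2) * block_size
                       for cut in aligned)]
```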


[0042] If the list of low cross-correlation blocks is now empty, then the error identification engine 220 emits the validation result 295 of "good encode," and the error identification engine 220 successfully terminates, efficiently short-circuiting the evaluation process. Often, any remaining low cross-correlation blocks represent significantly low visual similarity between the source 105 and the aggregate encode 135 attributable to one of:
  1) The absence of sharp scene changes or temporal patterns in the video sequence, which makes the block-by-block cross-correlation less sensitive to any temporal shift and reduces the effectiveness of the scene-cut alignment check.
  2) The presence of noticeable coding artifacts over the aggregate encode 135, which, when extending over a period of time, may significantly reduce the structural correlation between the source 105 and the aggregate encode 135, thus rendering both extended cross-correlation and scene-cut alignment analysis less reliable.


[0043] The low cross-correlation persistent analyzer 270 enables error detection for these two scenarios. The low cross-correlation persistent analyzer 270 implements heuristics based on two empirical observations:
  1) If a low cross-correlation block is out-of-sync (i.e., due to frame loss), then the lack of synchronization often lasts for a certain number of blocks until reaching the next in-sync chunk in sequence.
  2) If a low cross-correlation block suffers severe coding degradation, then such degradation often prevails until the end of the current chunk.


[0044] In operation, the low cross-correlation persistent analyzer 270 scans through the remaining blocks in the list of low cross-correlation blocks and identifies sequences of adjacent low cross-correlation blocks. If any of the sequences of adjacent low cross-correlation blocks includes more than a pre-determined threshold of blocks (e.g., 4), then the low cross-correlation persistent analyzer 270 determines that the aggregate encode 135 is flawed. The error identification engine 220 then returns a validation result 295 of "bad encode" and terminates successfully, without performing any additional error analysis.
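A sketch of this persistence check, assuming the remaining block indices are sorted in ascending order; the run-length threshold is the example value from the text.

```python
def has_persistent_low_run(block_indices, max_run=4):
    # True when more than `max_run` adjacent low cross-correlation blocks
    # occur in an unbroken sequence (paragraph [0044]).
    run = 1
    for prev, cur in zip(block_indices, block_indices[1:]):
        run = run + 1 if cur == prev + 1 else 1
        if run > max_run:
            return True
    return False
```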

[0045] The isolated low cross-correlation analyzer 280 is configured to deterministically identify errors corresponding to a variety of complicated scenarios, including the following scenarios:
  1) A frame was lost in the middle of an encoding chunk or at the end of the video sequence.
  2) A frame was lost during a scene dissolve/fade-in/fade-out.
  3) Noticeable coding artifacts were introduced during camera panning and/or zooming.
  4) Temporal video quality degradation occurred as a consequence of bad encoder rate control.


[0046] In operation, the isolated low cross-correlation analyzer 280 distinguishes between false errors and true errors based on a statistical hypothesis test. In particular, the isolated low cross-correlation analyzer 280 applies the Grubbs test to the list of low cross-correlation blocks and identifies outliers based on the distribution of the cross-correlation data. In one embodiment, the isolated low cross-correlation analyzer 280 considers a low cross-correlation block with a correlation value outside of a 95% confidence zone to be an "outlier". If the isolated low cross-correlation analyzer 280 determines that the total number of outliers is less than a pre-determined maximum (e.g., 3), then the isolated low cross-correlation analyzer 280 considers the aggregate encode 135 to be a normal/good video with marginal coding artifacts. In operation, if the isolated low cross-correlation analyzer 280 determines that the aggregate encode 135 is "normal," then the error identification engine 220 emits the validation result 295 of "good encode," and the error identification engine 220 successfully terminates. In such a scenario, the error identification engine 220 short-circuits the evaluation process and does not invoke the low-bit rate encode analyzer 290.
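The text names the Grubbs test; the sketch below substitutes a simpler approximation that treats the 95% confidence zone as the mean plus or minus two standard deviations of the block correlation values, which is not the exact Grubbs statistic.

```python
import numpy as np

def count_outliers(correlations, z=2.0):
    # Approximate the 95% confidence zone as mean +/- z standard
    # deviations and count the values falling outside it.
    c = np.asarray(correlations, dtype=np.float64)
    if len(c) < 3 or c.std() == 0.0:
        return 0
    return int(np.sum(np.abs(c - c.mean()) > z * c.std()))

# Fewer than, e.g., 3 outliers: treat the aggregate encode as a normal
# video with marginal coding artifacts (paragraph [0046]).
```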

[0047] The low-bit rate encode analyzer 290 performs a bit-rate assessment to determine whether the remaining blocks in the list of low cross-correlation blocks are likely attributable to a low encoding bit-rate. In particular, if the encoding bit-rate is relatively low (e.g., <= 500 kbps, encompassing most low-rate H.263, H.264, and VC-1 encodes), then the low-bit rate encode analyzer 290 considers the aggregate encode 135 to be "normal," with artifacts attributable to the low encoding bit-rate. Otherwise, the low-bit rate encode analyzer 290 considers the aggregate encode 135 to be flawed.

[0048] If the low-bit rate encode analyzer 290 determines that the aggregate encode 135 is "normal," then the error identification engine 220 emits the validation result 295 of "good encode." If the low-bit rate encode analyzer 290 determines that the aggregate encode 135 is flawed, then the error identification engine 220 emits the validation result 295 of "bad encode." Irrespective of the result obtained from the low-bit rate encode analyzer 290, the error identification engine 220 then successfully terminates.
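Tying the stages together, the short-circuiting evaluation order of paragraphs [0030]-[0048] might be orchestrated as sketched below, reusing the illustrative helpers from the earlier sketches; the thresholds are the example values from the text, and the bit-rate parameter is an assumed input.

```python
def validate(source_diffs, encode_diffs, bitrate_kbps):
    # 1) Frame range check (paragraphs [0030]-[0031]).
    if abs(len(source_diffs) - len(encode_diffs)) > 10:
        return "bad encode"
    # 2) Block-by-block cross-correlation (paragraphs [0032]-[0034]).
    low = low_correlation_blocks(source_diffs, encode_diffs)
    if not low:
        return "good encode"
    # 3) Extended cross-correlation with phase shifts ([0036]-[0039]).
    for item in list(low):
        index, _ = item
        verdict = classify_block(source_diffs, encode_diffs,
                                 index * BLOCK_SIZE, BLOCK_SIZE)
        if verdict == "out-of-sync":
            return "bad encode"
        if verdict == "in-sync":
            low.remove(item)
    if not low:
        return "good encode"
    # 4) Scene cut alignment pruning (paragraph [0041]).
    low = prune_before_aligned_cuts(low, source_diffs, encode_diffs)
    if not low:
        return "good encode"
    # 5) Persistence check (paragraphs [0043]-[0044]).
    if has_persistent_low_run([b for b, _ in low]):
        return "bad encode"
    # 6) Isolated-outlier check (paragraphs [0045]-[0046]).
    if count_outliers([c for _, c in low]) < 3:
        return "good encode"
    # 7) Low bit-rate assessment (paragraphs [0047]-[0048]).
    return "good encode" if bitrate_kbps <= 500 else "bad encode"
```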

[0049] Figure 3 is a flow diagram of method steps for generating a verified, aggregate encode of a video source, according to one embodiment of the present invention. Although the method steps are described with reference to the systems of Figures 1-2, persons skilled in the art will understand that any system configured to implement the method steps, in any order, falls within the scope of the present invention.

[0050] As shown, a method 300 begins at step 304, where the source inspector 110 and the source chunker 112 receive the source 105. The source inspector 110 generates the source frame difference data 137 and the source chunker 112 decomposes the source 105 into the source chunks 115. At step 306, the parallel chunk encoder 120 distributes each of the source chunks 115 to a separate compute instance 110 included in the parallel chunk encoder 120. Each of these compute instances 110 then generates a corresponding encode chunk 125. In alternate embodiments, the parallel chunk encoder 120 may distribute the source chunks 115 in any technically feasible fashion that enables concurrent processing of at least two of the source chunks 115. For example, the parallel chunk encoder 120 may split the source chunks 115 between two of the compute instances 110 included in the parallel chunk encoder 120.

[0051] At step 308, the multi-chunk assembler 130 assembles the encode chunks 125 into the aggregate encode 135. The encode validator 140 then decodes the aggregate encode 135 and generates the encode frame difference data 237 (i.e., frame difference data for the decoded aggregate encode).

[0052] At step 310, the error identification engine 220 included in the encode validator 140 performs various cross-correlation operations between the source frame difference data 137 and the encode frame difference data 237. These cross-correlation operations are designed to identify "true" errors attributable to flaws in the encoding process without flagging "false" errors that are expected artifacts of the lossy compression algorithm. Based on the identified errors, the encode validator 140 generates the validation result 295 for the aggregate encode 135. In general, the encode validator 140 and the error identification engine 220 may perform any number and type of cross-correlation operations in addition to other operations, in any combination, in any order, and in any technically feasible fashion. For example, in some embodiments, the encode validator 140 (including the error identification engine 220) performs the method steps outlined below in conjunction with Figures 4A-4B.

[0053] At step 312, the encode validator 140 determines whether the validation result 295 reflects a "good encode." If, at step 312, the encode validator 140 determines that the validation result 295 reflects a "good encode," then the method 300 proceeds to step 314. At step 314, the virtual private cloud (VPC) 102 that includes the encode validator 140 is configured to deliver the aggregate encode 135 to designated users for consumption (e.g., viewing), and the method 300 ends.

[0054] If, at step 312, the encode validator 140 determines that the validation result 295 reflects a "bad encode," then the method 300 proceeds to step 316. At step 316, the encode validator 140 issues an error message indicating that the aggregate encode 135 includes "true" errors, and the method 300 ends. As part of step 316, the encode validator 140 optionally submits the aggregate encode 135 for triage to identify flaws in the encoding process (i.e., in the source chunker 112, the parallel chunk encoder 120, and the multi-chunk assembler 130). Advantageously, because the encode validator 140 weeds out false errors prior to determining the validation result 295, debugging work may be optimally focused on analyzing the root-cause of true flaws as opposed to unproductively tracing expected artifacts attributable to the implemented compression algorithm.

[0055] Figures 4A-4B set forth a flow diagram of method steps for identifying and classifying errors while encoding a video source, according to one embodiment of the present invention. Although the method steps are described with reference to the systems of Figures 1-2, persons skilled in the art will understand that any system configured to implement the method steps, in any order, falls within the scope of the present invention.

[0056] As shown, a method 400 begins at step 406, where the encode validator 140 receives the source frame difference data 137 and the encode frame difference data 237. The source frame difference data 137 is the average luma difference between each pair of adjacent frames in the source 105, and the encode frame difference data 237 is the average luma difference between each pair of adjacent frames in the decoded aggregate encode.

[0057] At step 408, the frame range checker 230 calculates the difference between the frame count of the source frame difference data 137 and the frame count of the encode frame difference data 237. At step 410, the frame range checker 230 determines whether the calculated frame count difference exceeds a configurable threshold. If, at step 410, the frame range checker 230 determines that the calculated frame count difference exceeds the configurable threshold, then the frame range checker 230 concludes that the frame count difference is unacceptable, and the method 400 proceeds to step 412. At step 412, the error identification engine 220 emits the validation result 295 of "frame drop detected, bad encode," and the method 400 ends.

[0058] If, at step 410, the frame range checker 230 determines that the calculated frame count difference does not exceed the configurable threshold, then the frame range checker 230 concludes that the frame count difference is acceptable, and the method 400 proceeds to step 414. At step 414, the low cross-correlation block detector 240 computes the block-by-block cross-correlation between the source frame difference data 137 and the encode frame difference data 237 and generates a list of relatively low cross-correlation blocks. More specifically, for each block, the low cross-correlation block detector 240 evaluates the cross-correlation between the source frame difference data 137 and the encode frame difference data 237. If the cross-correlation of the block is lower than a predetermined threshold, then the low cross-correlation block detector 240 adds the block to the list of low cross-correlation blocks.

[0059] At step 416, the error identification engine 220 compares the number of low cross-correlation blocks (i.e., the number of blocks included in the list of low cross-correlation blocks) to an acceptable number of low cross-correlation blocks. If, at step 416, the error identification engine 220 determines that the number of low cross-correlation blocks does not exceed the acceptable number of low cross-correlation blocks, then the method 400 proceeds to step 418. At step 418, the error identification engine 220 emits the validation result 295 of "good encode, no frame drop" and the method 400 ends.

[0060] If, at step 416, the error identification engine 220 determines that the number of low cross-correlation blocks exceeds the acceptable number of low cross-correlation blocks, then the method 400 proceeds to step 420. At step 420, the extended cross-correlation analyzer 250 evaluates each of the low cross-correlation blocks as part of identifying synchronization errors. For each low cross-correlation block, the extended cross-correlation analyzer 250 imposes small phase shifts between the source frame difference data 137 and the encode frame difference data 237 for a set of frames surrounding the low cross-correlation block.

[0061] For each of the low cross-correlation blocks, if the extended cross-correlation analyzer 250 determines that the shifted cross-correlation is significantly better than the cross-correlation, then the extended cross-correlation analyzer 250 considers the aggregate encode 135 to be out-of-sync at the block. At step 422, the extended cross-correlation analyzer 250 determines whether the aggregate encode 135 is out-of-sync at any of the low cross-correlation blocks. If, at step 422, the extended cross-correlation analyzer 250 determines that the aggregate encode 135 is out-of-sync at any of the low cross-correlation blocks, then the method 400 proceeds to step 424. At step 424, the error identification engine 220 emits the validation result 295 of "frame drop detected, bad encode," and the error identification engine 220 successfully terminates.

[0062] If, at step 422, the extended cross-correlation analyzer 250 determines that none of the shifting operations produces a significantly improved cross-correlation, then the method 400 proceeds to step 426. In alternate embodiments, to increase the efficiency of the validation process, the extended cross-correlation analyzer 250 may prune the list of low cross-correlation blocks based on the shifted cross-correlation analysis. The extended cross-correlation analyzer 250 may implement this pruning in any technically feasible fashion, such as the algorithm detailed in conjunction with Figure 2.

[0063] At step 426, the scene cut alignment analyzer 260 identifies any errors that are rendered essentially imperceptible by scene cuts, and removes the associated blocks from the list of low cross-correlation blocks. More specifically, the scene cut alignment analyzer 260 identifies scene cuts based on the source frame difference data 137 and the encode frame difference data 237. Subsequently, the scene cut alignment analyzer 260 identifies any low-correlation blocks that are temporally located immediately prior to a scene cut, and then removes the identified blocks from the list of low-correlation blocks.

[0064] At step 428, the error identification engine 220 compares the number of "remaining" low cross-correlation blocks (i.e., the number of blocks still included in the list of low cross-correlation blocks) to the acceptable number of low cross-correlation blocks. If, at step 428, the error identification engine 220 determines that the number of remaining low cross-correlation blocks does not exceed the acceptable number of low cross-correlation blocks, then the method 400 proceeds to step 430. At step 430, the error identification engine 220 emits the validation result 295 of "good encode, no frame drop" and the method 400 ends.

[0065] If, at step 428, the error identification engine 220 determines that the number of remaining low cross-correlation blocks exceeds the acceptable number of low cross-correlation blocks, then the method 400 proceeds to step 432. At step 432, the low cross-correlation persistent analyzer 270 scans through the remaining blocks in the list of low cross-correlation blocks, identifying sequences of adjacent low cross-correlation blocks. At step 434, the low cross-correlation persistent analyzer 270 determines whether any of the identified sequences of adjacent low cross-correlation blocks includes more than a pre-determined threshold of blocks.

[0066] If, at step 434, the low cross-correlation persistent analyzer 270 determines that any of the identified sequences of adjacent low cross-correlation blocks includes more than the pre-determined threshold of blocks, then the low cross-correlation persistent analyzer 270 concludes that the aggregate encode 135 is flawed, and the method 400 proceeds to step 435. At step 435, the error identification engine 220 returns a validation result 295 of "found bad encoding chunk" and the method 400 ends.

[0067] If, at step 434, the low cross-correlation persistent analyzer 270 determines that none of the identified sequences of adjacent low cross-correlation blocks includes more than the pre-determined threshold of blocks, then the method 400 proceeds to step 436. At step 436, the isolated low cross-correlation analyzer 280 distinguishes between false errors and true errors based on a statistical hypothesis test. In particular, the isolated low cross-correlation analyzer 280 applies the Grubbs test to the list of low cross-correlation blocks and identifies outliers based on the distribution of the cross-correlation data. At step 438, the isolated low cross-correlation analyzer 280 determines the extent of the low cross-correlation blocks based on the total number of outliers. If, at step 438, the isolated low cross-correlation analyzer 280 determines that the extent of the low cross-correlation blocks is limited, then the method 400 proceeds to step 440. At step 440, the error identification engine 220 emits the validation result 295 of "good encode, with coding artifacts" and the method 400 ends.

[0068] If, at step 438, the isolated low cross-correlation analyzer 280 determines that the extent of the low cross-correlation blocks is not sufficiently limited, then the method 400 proceeds to step 442. At step 442, the low-bit rate encode analyzer 290 compares the encoding bit-rate to a predetermined threshold. If, at step 442, the low-bit rate encode analyzer 290 determines that the encoding bit-rate is lower than the predetermined threshold, then the method 400 proceeds to step 444. At step 444, the error identification engine 220 emits the validation result 295 of "good encode, low encoding bit-rate" and the method 400 ends.

[0069] If, at step 442, the low-bit rate encode analyzer 290 determines that the encoding bit-rate is not lower than the predetermined threshold, then the method 400 proceeds to step 446. At step 446, because the list of low cross-correlation blocks is not empty, the error identification engine 220 emits the validation result 295 of "bad encode" and the method 400 ends.

[0070] In sum, the disclosed techniques may be used to efficiently and correctly identify errors unintentionally introduced during encoding. In operation, prior to encoding a source, a source inspector creates frame difference data for the source and a source chunker decomposes the source into chunks. A parallel chunk encoder then processes the chunks substantially in parallel across multiple compute instances, and a multi-chunk assembler assembles an aggregate encode from the encoded chunks. An encode validator receives both the aggregate encode and the frame difference data for the source. After decoding the aggregate encode, the encode validator generates frame difference data for the aggregate encode (i.e., the frame difference data for the decoded aggregate encode).

[0071] Subsequently, the encode validator performs cross-correlation operations between the frame difference data for the source and the frame difference data for the aggregate encode. The encode validator implements a variety of algorithms designed to identify errors attributable to flaws in the encoding process while suppressing errors that are expected artifacts of the encode, such as poor translations due to low bit-rate encoding. In general, the encode validator may implement any number of error-detection and/or false error-suppression algorithms in any order. As part of this discriminating error detection, the encode validator identifies a list of low cross-correlation blocks and then prunes the list, removing low cross-correlation blocks that do not significantly contribute to perceptible synchronization errors. For instance, blocks immediately preceding scene cuts do not lead to persistent synchronization issues and, consequently, the encode validator considers these "false" errors.

[0072] Advantageously, unlike conventional encode verification techniques, the frame difference based techniques disclosed herein enable verification of aggregate encodes without access to the original source files. Often, the parallel encoder is implemented across a number of compute instances: different compute instances independently encode each chunk, and a final compute instance assembles the chunks and verifies the encoding. Although performing encoding in this divide-and-conquer approach decreases the encoding time, the likelihood that the final aggregate encode includes unexpected errors is increased compared to conventional techniques that do not break apart the source. The techniques outlined herein enable efficient detection of errors (e.g., incorrect assembly of the encoded chunks into the aggregate encode) that are attributable to flaws in the encoding process. Further, because the encode validator implements a variety of sophisticated comparison algorithms to identify errors for further debugging without flagging "expected" errors, the encode validator reduces the work required to triage aggregate encodes and debug the source chunker, the parallel chunk encoder, and the multi-chunk assembler.

[0073] The descriptions of the various embodiments have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed.

[0074] Aspects of the present embodiments may be embodied as a system, method or computer program product. Accordingly, aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit," "module" or "system." Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.

[0075] Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.

[0076] Aspects of the present disclosure are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such processors may be, without limitation, general purpose processors, special-purpose processors, application-specific processors, or field-programmable gate arrays.

[0077] The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

[0078] While the preceding is directed to embodiments of the present disclosure, other and further embodiments of the disclosure may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.


Claims

1. A computer-implemented method for identifying errors introduced during encoding, at a system (100) comprising a parallel encoding engine (120) and a verification generator (140), the method comprising:

decoding aggregate encoded data derived from source data (105) to generate aggregate decoded data (135), wherein the aggregate encoded data is derived from the source data by separately encoding a plurality of chunks of the source data to generate a plurality of encoded chunks of the source data, wherein each chunk included in the plurality of chunks of the source data is encoded by a separate compute instance included in a plurality of compute instances;

generating frame difference data (237) derived from the aggregate decoded data, by determining, for each decoded frame included in a plurality of decoded frames of the aggregate decoded data, a difference between a characteristic of the decoded frame and a characteristic of an adjacent decoded frame that resides adjacent to the decoded frame in the plurality of decoded frames, wherein a plurality of source frames of the source data corresponds to the plurality of decoded frames of the aggregate decoded data;

performing at least one phase correlation operation on frame difference data (137) derived from the source data and the frame difference data derived from the aggregate decoded data to generate phase correlation values; and

detecting a low phase correlation error included in the aggregate encoded data based on the phase correlation values.
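
By way of illustration only, the following minimal Python sketch shows one plausible realization of the steps recited in claim 1. The choice of per-frame mean luma as the compared characteristic, the 1-D FFT-based phase correlation, the helper names, and the threshold value are assumptions of this sketch, not definitions taken from the claims.

import numpy as np

def frame_differences(frames):
    # frames: array of shape (num_frames, height, width) holding luma values;
    # the per-frame mean is an assumed stand-in for the claimed "characteristic"
    means = frames.mean(axis=(1, 2))
    return np.diff(means)

def phase_correlation(a, b):
    # normalized cross-power spectrum of two equal-length 1-D signals;
    # a sharp peak near lag 0 indicates well-aligned data
    fa, fb = np.fft.rfft(a), np.fft.rfft(b)
    r = fa * np.conj(fb)
    r /= np.maximum(np.abs(r), 1e-12)  # guard against divide-by-zero
    return np.fft.irfft(r, n=len(a))

def has_low_phase_correlation(src_diff, dec_diff, threshold=0.5):
    # threshold is an illustrative value, not taken from the specification
    n = min(len(src_diff), len(dec_diff))
    peak = phase_correlation(src_diff[:n], dec_diff[:n]).max()
    return peak < threshold

A peak near 1 indicates that the source and decoded difference signals are well aligned; a flat, low correlation surface is consistent with frames having been dropped, repeated, or reordered during parallel chunk encoding.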


 
2. The computer-implemented method of claim 1, wherein detecting the low phase correlation error comprises determining that a frame count associated with the frame difference data derived from the source data varies from a frame count associated with the frame difference data derived from the aggregate decoded data by more than a predetermined threshold.
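
Claim 2 reduces to a count comparison. A sketch, with an illustrative tolerance that the claim itself leaves unspecified:

def frame_count_mismatch(src_diff, dec_diff, max_delta=2):
    # max_delta is an illustrative tolerance; a dropped or duplicated chunk
    # typically changes the frame count by far more than this
    return abs(len(src_diff) - len(dec_diff)) > max_delta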
 
3. The computer-implemented method of claim 1, wherein performing the at least one phase correlation operation comprises:

partitioning the frame difference data derived from the source data and the frame difference data derived from the decoded aggregate encoded data into a plurality of blocks, wherein each block in the plurality of blocks includes a subset of frame difference data derived from the source data and a corresponding subset of frame difference data derived from the decoded aggregate encoded data; and

for each block, comparing the frame difference data derived from the source data to the frame difference data derived from the decoded aggregate encoded data to determine a phase correlation value for the block.
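
The per-block comparison of claim 3 might be sketched as follows, reusing the phase_correlation helper from the sketch after claim 1; the fixed block size is an assumption of this sketch:

import numpy as np
# assumes phase_correlation() from the sketch following claim 1

def block_peak_correlations(src_diff, dec_diff, block_size=128):
    # partition both difference signals into aligned fixed-size blocks and
    # record the phase-correlation peak for each block
    n = min(len(src_diff), len(dec_diff))
    peaks = []
    for start in range(0, n - block_size + 1, block_size):
        stop = start + block_size
        surface = phase_correlation(src_diff[start:stop], dec_diff[start:stop])
        peaks.append(float(surface.max()))
    return np.array(peaks)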


 
4. The computer-implemented method of claim 3, wherein detecting the low phase correlation error comprises:

identifying a first number of blocks, wherein each block included in the first number of blocks has a phase correlation value less than a first predetermined threshold; and

determining that the first number of blocks is greater than a second predetermined threshold.
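
Claim 4 then thresholds the number of low-correlation blocks. A sketch, with both thresholds chosen purely for illustration:

def too_many_low_blocks(peaks, corr_threshold=0.5, count_threshold=3):
    # peaks: NumPy array of per-block correlation peaks; both thresholds are
    # illustrative values that the claim leaves unspecified
    return int((peaks < corr_threshold).sum()) > count_threshold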


 
5. The computer-implemented method of claim 3, wherein detecting the low phase correlation error comprises:

for a first block, performing a phase shift operation on the frame difference data derived from the source data;

for the first block, comparing the shifted frame difference data derived from the source data to the corresponding frame difference data derived from the decoded aggregate encoded data to determine a shifted phase correlation value; and

determining that the shifted phase correlation value is greater than a phase correlation value for the first block by at least a first predetermined amount.
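
One plausible reading of claim 5 is that a cyclic shift of the source-side block is tried and the two correlation peaks are compared; the shift amount and gain margin below are illustrative:

import numpy as np
# assumes phase_correlation() from the sketch following claim 1

def shift_reveals_misalignment(src_block, dec_block, shift=1, min_gain=0.1):
    # if cyclically shifting the source block raises the correlation peak by
    # a noticeable margin, the decoded data is likely temporally displaced,
    # e.g. by frames dropped or repeated at a chunk boundary
    base = phase_correlation(src_block, dec_block).max()
    shifted = phase_correlation(np.roll(src_block, shift), dec_block).max()
    return shifted - base >= min_gain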


 
6. The computer-implemented method of claim 3, wherein detecting the low phase correlation error comprises:

identifying a set of low correlation blocks, wherein each block included in the set of low correlation blocks has a phase correlation value less than a first predetermined threshold;

determining a distribution based on phase correlation values for the set of low correlation blocks;

computing a confidence zone based on the distribution;

identifying a first number of blocks, wherein each block included in the first number of blocks is included in the set of low correlation blocks and has a phase correlation value that is outside the confidence zone; and

determining that the first number of blocks is greater than a second predetermined threshold.
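
Claim 6 fits a distribution to the low-correlation blocks and tests membership in a confidence zone. A sketch assuming a normal model and an approximate 95% zone, neither of which is mandated by the claim:

import numpy as np

def outliers_outside_confidence_zone(peaks, corr_threshold=0.5,
                                     z=1.96, count_threshold=3):
    # peaks: NumPy array of per-block correlation peaks; all constants here
    # are illustrative choices, not taken from the specification
    low = peaks[peaks < corr_threshold]
    if low.size < 2:
        return False
    mu, sd = low.mean(), low.std(ddof=1)
    lo, hi = mu - z * sd, mu + z * sd
    outside = int(((low < lo) | (low > hi)).sum())
    return outside > count_threshold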


 
7. A computer-readable storage medium including instructions that, when executed by a processing unit of a system (100), the system (100) comprising a parallel encoding engine (120) and a verification generator (140), cause the processing unit to identify errors introduced during encoding by performing the steps of:

decoding aggregate encoded data derived from source data (105) to generate aggregate decoded data (135), wherein the aggregate encoded data is derived from the source data by separately encoding a plurality of chunks of the source data to generate a plurality of encoded chunks of the source data, wherein each chunk included in the plurality of chunks of the source data is encoded by a separate compute instance included in a plurality of compute instances;

generating frame difference data (237) derived from the aggregate decoded data by determining, for each decoded frame included in a plurality of decoded frames of the aggregate decoded data, a difference between a characteristic of the decoded frame and a characteristic of an adjacent decoded frame that resides adjacent to the decoded frame in the plurality of decoded frames, wherein a plurality of source frames of the source data corresponds to the plurality of decoded frames of the aggregate decoded data;

performing at least one phase correlation operation on frame difference data (137) derived from the source data and the frame difference data derived from the aggregate decoded data to generate phase correlation values; and

detecting a low phase correlation error included in the aggregate encoded data based on the phase correlation values.


 
8. The computer-readable storage medium of claim 7, wherein detecting the low phase correlation error comprises determining that a frame count associated with the frame difference data derived from the source data varies from a frame count associated with the frame difference data derived from the aggregate decoded data by more than a predetermined threshold.
 
9. The computer-readable storage medium of claim 7, wherein performing the at least one phase correlation operation comprises:

partitioning the frame difference data derived from the source data and the frame difference data derived from the decoded aggregate encoded data into a plurality of blocks, wherein each block in the plurality of blocks includes a subset of frame difference data derived from the source data and a corresponding subset of frame difference data derived from the decoded aggregate encoded data; and

for each block, comparing the frame difference data derived from the source data to the frame difference data derived from the decoded aggregate encoded data to determine a cross-correlation value for the block.


 
10. The computer-readable storage medium of claim 9, wherein detecting the low phase correlation error comprises:

identifying a first number of sequential blocks, wherein each block included in the first number of sequential blocks has a cross-correlation value less than a first predetermined threshold; and

determining that the first number of sequential blocks exceeds a second predetermined threshold.
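
Claim 10 counts consecutive, rather than total, low-correlation blocks. One way to compute the run length that is then compared against the second threshold:

def longest_low_run(peaks, corr_threshold=0.5):
    # length of the longest run of consecutive low-correlation blocks;
    # corr_threshold is an illustrative value
    run = best = 0
    for peak in peaks:
        run = run + 1 if peak < corr_threshold else 0
        best = max(best, run)
    return best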


 
11. The computer-readable storage medium of claim 9, wherein detecting the low phase correlation error comprises:

identifying a set of low cross-correlation blocks, wherein each block included in the set of low cross-correlation blocks has a cross-correlation value less than a first predetermined threshold;

applying the Grubbs test to the set of low cross-correlation blocks to identify a first number of blocks; and

determining that the first number of blocks is greater than a second predetermined threshold.
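
Claim 11 names the Grubbs test explicitly. A self-contained sketch of the iterative two-sided Grubbs procedure; the significance level is an illustrative choice, and SciPy supplies the Student-t quantile:

import numpy as np
from scipy.stats import t as student_t

def grubbs_outlier_count(values, alpha=0.05):
    # iteratively apply the two-sided Grubbs test, removing the most extreme
    # value while it remains a statistically significant outlier
    data = list(values)
    removed = 0
    while len(data) > 2:
        arr = np.asarray(data, dtype=float)
        mu, sd = arr.mean(), arr.std(ddof=1)
        if sd == 0:
            break
        idx = int(np.argmax(np.abs(arr - mu)))
        g = abs(arr[idx] - mu) / sd
        n = len(arr)
        t_crit = student_t.ppf(alpha / (2 * n), n - 2)
        g_crit = ((n - 1) / np.sqrt(n)) * np.sqrt(
            t_crit ** 2 / (n - 2 + t_crit ** 2))
        if g > g_crit:
            removed += 1
            data.pop(idx)
        else:
            break
    return removed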


 
12. The computer-readable storage medium of claim 9, wherein detecting the low phase correlation error comprises:

for a first block, performing a phase shift operation on the frame difference data derived from the source data;

for the first block, comparing the shifted frame difference data derived from the source data to the corresponding frame difference data derived from the decoded aggregate encoded data to determine a shifted cross-correlation value; and

determining that the shifted cross-correlation value is greater than a cross-correlation value for the first block by at least a first predetermined amount.


 
13. The computer-readable storage medium of claim 9, wherein detecting the low phase correlation error comprises:

identifying a scene cut based on the cross-correlation values; and

identifying a first block, wherein the first block does not immediately precede the scene cut and has a cross-correlation value less than a first predetermined threshold.
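
Claim 13 discounts low-correlation blocks that are explained by a legitimate scene cut. A sketch; how scene cuts are detected from the cross-correlation values is left outside this snippet:

def low_blocks_excluding_scene_cuts(peaks, scene_cut_blocks,
                                    corr_threshold=0.5):
    # scene_cut_blocks: indices of blocks identified as containing a scene
    # cut (the detector itself is an assumed external step); a block whose
    # low correlation is explained by the scene cut it precedes is not
    # treated as suspicious
    return [i for i, peak in enumerate(peaks)
            if peak < corr_threshold and (i + 1) not in scene_cut_blocks]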


 
14. A system (100) configured to identify errors introduced during encoding, the system comprising:

a parallel encoding engine (120) configured to derive aggregate encoded data (135) from source data (105) by separately encoding a plurality of chunks of the source data to generate a plurality of encoded chunks of the source data, wherein each chunk included in the plurality of chunks of the source data is encoded by a separate compute instance included in a plurality of compute instances;

a verification generator (140) configured to:
decode aggregate encoded data derived from source data to generate aggregate decoded data;

generate frame difference data (237) derived from the aggregate decoded data by determining, for each decoded frame included in a plurality of decoded frames of the aggregate decoded data, a difference between a characteristic of the decoded frame and a characteristic of an adjacent decoded frame that resides adjacent to the decoded frame in the plurality of decoded frames, wherein a plurality of source frames of the source data corresponds to the plurality of decoded frames of the aggregate decoded data;

perform at least one phase correlation operation on frame difference data (137) derived from the source data and the frame difference data derived from the aggregate decoded data to generate phase correlation values; and

detect a low phase correlation error included in the aggregate encoded data based on the phase correlation values.


 
15. The system of claim 14, wherein detecting the low phase correlation error comprises determining that a frame count associated with the frame difference data derived from the source data varies from a frame count associated with the frame difference data derived from the aggregate decoded data by more than a predetermined threshold.
 

