(19)
(11)EP 3 151 577 B1

(12)EUROPEAN PATENT SPECIFICATION

(45)Mention of the grant of the patent:
04.12.2019 Bulletin 2019/49

(21)Application number: 15798806.4

(22)Date of filing:  24.02.2015
(51)International Patent Classification (IPC): 
H04N 21/44(2011.01)
G09G 5/00(2006.01)
H04N 21/431(2011.01)
(86)International application number:
PCT/JP2015/055295
(87)International publication number:
WO 2015/182189 (03.12.2015 Gazette  2015/48)

(54)

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM

INFORMATIONSVERARBEITUNGSVORRICHTUNG, INFORMATIONSVERARBEITUNGSVERFAHREN UND PROGRAMM

APPAREIL DE TRAITEMENT D'INFORMATIONS, PROCÉDÉ DE TRAITEMENT D'INFORMATIONS, ET PROGRAMME


(84)Designated Contracting States:
AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

(30)Priority: 28.05.2014 JP 2014109803

(43)Date of publication of application:
05.04.2017 Bulletin 2017/14

(73)Proprietor: Sony Corporation
Tokyo 108-0075 (JP)

(72)Inventors:
  • KASAI, Kouichi
    Tokyo 108-0075 (JP)
  • MINAMINO, Takanori
    Tokyo 108-0075 (JP)

(74)Representative: Witte, Weller & Partner Patentanwälte mbB 
Postfach 10 54 62
70047 Stuttgart (DE)


(56)References cited: 
WO-A1-2008/084179
JP-A- 2000 307 637
JP-A- 2010 028 633
US-A1- 2009 109 988
US-A1- 2013 166 769
US-B1- 6 636 269
WO-A1-2014/038319
JP-A- 2001 237 814
US-A1- 2004 156 624
US-A1- 2011 157 202
US-A1- 2013 342 719
  
      
    Note: Within nine months from the publication of the mention of the grant of the European patent, any person may give notice to the European Patent Office of opposition to the European patent granted. Notice of opposition shall be filed in a written reasoned statement. It shall not be deemed to have been filed until the opposition fee has been paid. (Art. 99(1) European Patent Convention).


    Description

    Technical Field



    [0001] The present disclosure relates to an information processor, an information processing method, and a program. In further detail, the present disclosure relates to an information processor, an information processing method, and a program that enable image data sent from a server to be displayed with a small delay.

    Background Art



    [0002] Recently, a so-called cloud computing system, in which a client such as a portable terminal transmits a data processing request to a server, the server executes the data processing and returns the processing result to the client, and the client obtains the data processing result, is starting to become widely used.

    [0003] An example of the utilization of the cloud system is the execution of games on a game terminal. It is a system in which a game terminal (a client) that a user operates communicates with a server, and image data, such as a display showing the progress of the game, is provided to the client by using the high-function data processing function on the server side.

    [0004] In such a cloud based game system, user operation information is transmitted from the client to the server. The server develops the game according to the operation information sent from the client, generates updated image and audio data, and transmits the updated image and audio data to the client. The client reproduces the updated image and audio sent from the server. By performing such a process, even a client that is not equipped with a high function processor and the like is capable of obtaining the data processing results using the server-side high-function data processing function.

    [0005] However, when the transfer data between the server and the client is data with a large data size such as image data, delays and jitters (fluctuation) occur according to the state of the network. Typically, image and audio data needs to be transmitted as coded data, the client needs to store the received coded data in a buffer, and the coded data needs to be sequentially decoded and output.

    [0006] When a packet delay and a jitter (fluctuation) occur according to the state of the network, delays occur on the client side in the sequential processes of packet reception, decoding, and display and, as a result, there are cases in which a timely image display cannot be performed in the client.

    [0007] Note that as a known technology, for example, Patent Literature 1 (JP 2005-159787A) discloses processing that sequentially decodes and displays data stored in a buffer.

    Citation List


    Patent Literature



    [0008] 
    Patent Literature 1:
    JP 2005-159787A


    [0009] US 2013/166769 A1 discloses a screen frame transmission method including a transmitting end capturing screen frame data and audio data and transmitting the same to a receiving end, and detecting a user's operation and outputting a mode switching signal; and a receiving end buffering the screen frame data transmitted by the transmitting end and, according to the mode switching signal, switching to the operating mode or the video mode, wherein during the video mode, screen frame data for a longer playback period are buffered, and during the operating mode, screen frame data for a shorter playback period are buffered. According to the aforementioned method, during the video mode, the video may be played back smoothly and the video and the audio are synchronized; during the operating mode, a low-latency control experience may be provided. A receiving device and a screen frame transmission system realizing the above method are also disclosed.

    [0010] WO 2014/038319 A1 discloses an image data output control device. If, after completing the output of image data for the first time, image data for the next screen is not input from a host even after the lapse of a first predetermined time, a period control unit starts to output said image data, which was output the first time, for the second time to a liquid crystal display device. If, after completing the output of said image data for the second time, image data for the next screen is not input from the host even after the lapse of a second predetermined time, the period control unit starts to output said image data to the liquid crystal display device for the third time, and sets the first predetermined time to be longer than the second predetermined time. Thus, the displayed image can be updated appropriately.

    Summary of Invention


    Technical Problem



    [0011] In the cloud system described above, when decoding and displaying the image data that the client receives from the server, there is a possibility of a delay occurring, and such a delay is, in the case of a game machine or the like, for example, reflected as degradation in the responsiveness to user operations and, accordingly, significantly reduces the enjoyment of the game.

    [0012] The present disclosure is, for example, made in view of the above problem and an object thereof is to provide an information processor, an information processing method, and a program that achieve data output in which delay is reduced.

    Solution to Problem



    [0013] According to the present disclosure, there are provided an information processor, an information processing method and a program as defined in the claims.

    [0014] Note that the program according to the present disclosure is a program that can be provided in a storage medium or communication medium that is provided in a computer-readable form for an information processing device or a computer system that is capable of executing various types of program code, for example. Providing this sort of program in a computer-readable form makes it possible to implement the processing according to the program in the information processing device or the computer system.

    [0015] The object, features, and advantages of the present disclosure will be made clear later by a more detailed explanation that is based on the embodiments of the present disclosure and the appended drawings. Furthermore, the system in this specification is a configuration that logically aggregates a plurality of devices and is not limited to one in which all of the devices are contained within the same housing.

    Advantageous Effects of Invention



    [0016] According to a configuration of an embodiment of the present disclosure, a device and a method are provided that are capable of display control of image data, which is received through a communication unit, with a small delay.

    [0017] Specifically, an image frame is stored in the memory together with memory input time and transmission frame rate information that serve as metadata. The display control unit selects the image to be output to the display unit on the basis of the time elapsed since the input time. The waiting time, which is the time elapsed since the input time, is compared with the buffering time for each of the queues, and the image frame associated with the newest queue among the queues in which the waiting time exceeds the buffering time is selected as the image to be output to the display unit. Furthermore, when there is a change in the transmission frame rate, the display rate of the display unit is changed in accordance with the change in the transmission frame rate.

    [0018] With the present configuration, a device and a method are provided that are capable of display control of image data, which is received through a communication unit, with a small delay.

    [0019] Note that the effects described in the present description are only exemplifications and the effects are not limited to those described in the present description and, further, there may be additional effects.

    Brief Description of Drawings



    [0020] 

    [FIG. 1] FIG. 1 is a diagram for describing an exemplary configuration of a communication system that executes processing of the present disclosure.

    [FIG. 2] FIG. 2 is a diagram for describing a configuration of a typical client.

    [FIG. 3] FIG. 3 is a diagram for explaining a delay in the image display of the client.

    [FIG. 4] FIG. 4 is a diagram for describing an exemplary configuration of a client of the present disclosure.

    [FIG. 5] FIG. 5 is a diagram for describing an example in which the display delay in the client of the present disclosure is reduced.

    [FIG. 6] FIG. 6 is a diagram illustrating a flowchart for describing a sequence of storing image data in the memory.

    [FIG. 7] FIG. 7 is a diagram for describing a specific example of the sequence of storing image data in the memory.

    [FIG. 8] FIG. 8 is a diagram illustrating a flowchart for describing an image display control sequence in a display update stopped state (uninitialized).

    [FIG. 9] FIG. 9 is a diagram for describing a specific example of the image display control sequence in the display update stopped state (uninitialized).

    [FIG. 10] FIG. 10 is a diagram for describing a specific example of the image display control sequence in the display update stopped state (uninitialized).

    [FIG. 11] FIG. 11 is a diagram illustrating a flowchart for describing an image display control sequence in a display update executing state (initialized).

    [FIG. 12] FIG. 12 is a diagram for describing a specific example of the image display control sequence in the display update executing state (initialized).

    [FIG. 13] FIG. 13 is a diagram for describing a specific example of the image display control sequence in the display update executing state (initialized).

    [FIG. 14] FIG. 14 is a diagram for describing a specific example of the image display control sequence in a case in which transmission frame rate = 60 fps.

    [FIG. 15] FIG. 15 is a diagram for describing a specific example of the image display control sequence in a case in which transmission frame rate = 30 fps.

    [FIG. 16] FIG. 16 is a diagram for describing a specific example of the image display control sequence in a case in which the transmission frame rate is changed from 60 fps to 30 fps.

    [FIG. 17] FIG. 17 is a diagram for describing a specific example of the image display control sequence in a case in which the transmission frame rate is changed from 60 fps to 30 fps.

    [FIG. 18] FIG. 18 is a diagram for describing a specific example of the image display control sequence in a case in which the transmission frame rate is changed from 60 fps to 30 fps.

    [FIG. 19] FIG. 19 is a diagram for describing a specific example of the image display control sequence in a case in which the transmission frame rate is changed from 30 fps to 60 fps.

    [FIG. 20] FIG. 20 is a diagram for describing a specific example of the image display control sequence in a case in which the transmission frame rate is changed from 30 fps to 60 fps.

    [FIG. 21] FIG. 21 is a diagram for describing a specific example of the image display control sequence in a case in which the transmission frame rate is changed from 30 fps to 60 fps.

    [FIG. 22] FIG. 22 is a diagram for describing an exemplary hardware configuration of an information processor serving as the client.


    Description of Embodiments



    [0021] Hereinafter, details of an information processor, an information processing method, and a program of the present disclosure will be described with reference to the drawings. Note that the description will be made according to the following items.
    1. Exemplary Configuration of Communication System
    2. Delay in Reception, Decoding, and Output Process of Image Data
    3. Outline of Configuration and Processing of Information Processor of Present Disclosure
    4. Memory Storing Sequence of Decoded Image Executed by Information Processor (Client) of Present Disclosure
    5. Detailed Sequence of Image Display Control Processing Executed by Information Processor (Client) of Present Disclosure

      5-1. Processing in Display Update Stopped State (Uninitialized)

      5-2. Processing in Display Update Executing State (Initialized)

    6. Processing in Response to Switching of Image Transmission Frame Rate (fps) of Server

      6-1. Basic Display Control Processing Executed by Client When Transmission Frame Rates (fps) are 60 fps and 30 fps

      6-2. Display Control Processing Executed by Client When Transmission Frame Rates (fps) are Changed from 60 fps to 30 fps

      6-3. Display Control Processing Executed by Client When Transmission Frame Rates (fps) are Changed from 30 fps to 60 fps

    7. Exemplary Configuration of Information Processor Serving as Client
    8. Conclusion of Configuration of Present Disclosure

    [1. Exemplary Configuration of Communication System]



    [0022] Referring first to FIG. 1, an exemplary configuration of a communication system that executes the processing of the present disclosure will be described.

    [0023] As illustrated in FIG. 1, a communication system 10 includes a client 20 and a server 30 that are capable of bidirectional communication. The client 20 and the server 30 perform communication through a network such as the Internet, for example.

    [0024] The client 20 is a device for a general user and, specifically, is a television 21, a PC 22, a game machine, or a portable terminal 23 such as a smart phone, for example.

    [0025] The client 20 transmits user operation information on the client 20 to the server 30. The server 30 performs data processing in response to the received operation information from the client 20. For example, when the user is in the midst of playing a game with the client 20, the game is developed in accordance with the user operation and stream data in which the updated image and audio data are coded is generated and is transmitted to the client 20.

    [0026] The client 20 decodes the stream data sent from the server 30 and outputs the image and audio, which are results of the decoding, through a display or a loudspeaker.

    [2. Delay in Reception, Decoding, and Output Process of Image Data]



    [0027] In the communication system 10, such as the one illustrated in FIG. 1, a delay and a jitter (fluctuation) are generated in the communication data between the client 20 and the server 30 in accordance with the network state. Typically, image and audio data needs to be transmitted as coded data, the client 20 needs to store the received coded data in a buffer, and the coded data needs to be sequentially decoded and output.

    [0028] When a packet delay and a jitter (fluctuation) occur, delays occur on the client 20 side in the sequential processes of packet reception, decoding, and display and, as a result, there are cases in which a timely image display cannot be performed in the client 20.

    [0029] A specific example of the occurrence of a delay will be described with reference to FIGS. 2 and 3.

    [0030] FIG. 2 is a diagram illustrating an exemplary configuration of a typical client 20.

    [0031] A communication unit 51 receives stream data including image coded data and audio coded data transmitted by the server 30.

    [0032] The received data is decoded in a decoder 52.

    [0033] Note that processing of the image data will be described below.

    [0034] The decoded image data is stored in a memory 71 of an output controller 53 in units of an image frame.

    [0035] The memory 71 is configured so as to be capable of storing a plurality of image frames as a queue.

    [0036] Herein, the memory 71 is configured so as to be capable of storing two image frames (indicated as F1 and F2 in the drawing).

    [0037] The display control unit 72 sequentially acquires image frames that have been decoded and that have been stored in the memory 71 and outputs the image frames to the display unit 54.

    [0038] The display unit 54 outputs, for example, a 60 Hz vertical synchronizing signal (Vsync) to the display control unit 72, and, at a timing determined on the basis of the synchronizing signal, the display control unit 72 sequentially outputs the image frames to the display unit 54 and executes an image update.

    [0039] A controller 55 performs a general control related to processing of each component.

    [0040] Referring to FIG. 3, a delay time from when the image frame from the decoder 52 is stored in the memory 71 until when the decoded image is displayed on the display unit 54 will be described.

    [0041] FIG. 3 is a diagram illustrating a sequence associated with the transition of time from an output of data from the decoder 52 until display processing of the image frame on the display unit 54.

    [0042] Each of the following data is illustrated in FIG. 3.
    (A) An output of the decoder
    (B) Stored data in the memory
    (C) The image displayed on the display unit with the processing of the display control unit


    [0043] Time base (t) is illustrated in (C) of FIG. 3, and (A), (B), and (C) illustrate the processing performed in accordance with the time base (t).

    [0044] Each of the F1, F2, F3, F4... indicated in (A), (B), and (C) represents a single image frame.

    [0045] F1 is an image frame F1, and F2 is an image frame F2.

    [0046] Illustrated in (A) of FIG. 3 is a sequence in which the decoder 52 decodes the encoded image data input from the communication unit 51 and outputs the decoded data to the memory 71.

    [0047] The intervals between the Frames F1, F2, F3, and F4 are different due to jitters (fluctuation) and the like of the network communication. A certain variation occurs in the reception intervals of the image frames and, in accordance with the reception intervals, a variation occurs in the intervals of the data output from the decoder 52.

    [0048] The memory 71 in (B) of FIG. 3 has a configuration that allows two image frames to be stored. In FIG. 3, (b1) on the lower side of (B) is a preceding input image (a preceding queue) and (b2) on the upper side is a succeeding input image (a succeeding queue).

    [0049] In the example illustrated in the drawing, at time t0, the frame F1 is stored in the memory as a preceding input queue and, after that, frame F2 is stored as a succeeding queue.

    [0050] Note that the length of the frame Fn illustrated in (B) of FIG. 3 corresponds to the time stored in the memory.

    [0051] For example, frame F1 is stored in the memory as a preceding queue from time t0 to time t1.

    [0052] Frame F2 is stored in the memory as a succeeding queue at an intermediate time between time t0 and t1 and is set as the succeeding queue until time t1. After that, as illustrated, the frame F2 is stored in the memory as a preceding queue from time t1 to time t2.

    [0053] In (C) of FIG. 3, a sequence related to the processing of the display control unit 72 and the display image of the display unit 54 is illustrated.

    [0054] The frame F1 stored in the memory is fetched from the memory 71 by the display control unit 72 and is output to the display unit 54 from time t1 to time t2.

    [0055] At time t1, when the frame F1 is fetched from the memory, the frame F2 that had been the succeeding queue is changed to the preceding queue.

    [0056] Note that, strictly speaking, there is a time lag between when the image frame is fetched from the memory 71 and when the image frame is output to the display unit 54; however, the above is omitted from the illustration in FIG. 3.

    [0057] Subsequently, the frame F3 that is output from the decoder is set as the succeeding queue.

    [0058] At time t2, the display image of the display unit 54 is switched to frame F2.

    [0059] At time t2, the frame F3 that had been the succeeding queue of the memory becomes the preceding queue and, subsequently, the frame F4 input from the decoder is stored as the succeeding queue.

    [0060] In the sequence illustrated in FIG. 3 from decoding the image until displaying the image, while the output time of the image frame F1 from the decoder 52 to the memory 71 is t0, the display start time of the display unit 54 is t1, for example.

    [0061] In other words, a delay from time t0 to t1 occurs from when the decoding has been completed until the display is started.

    [0062] In the processing of the present disclosure, a configuration that reduces such a delay is implemented.

    [3. Outline of Configuration and Processing of Information Processor of Present Disclosure]



    [0063] FIG. 4 is a diagram illustrating an exemplary configuration of the client 20 serving as an information processor of the present disclosure. The configuration of the client 20 illustrated in FIG. 4 corresponds to a partial configuration of the client that mainly performs image processing.

    [0064] A communication unit 101 receives stream data including image coded data and audio coded data transmitted by the server 30.

    [0065] The received data is decoded in a decoder 102.

    [0066] The decoded image data decoded in the decoder 102 is stored in a memory 121 of an output controller 103 in units of an image frame.

    [0067] The memory 121 is configured so as to be capable of storing a plurality of image frames as a queue.

    [0068] Herein, similar to the memory 71 described earlier with reference to FIG. 2, the memory 121 is configured so as to be capable of storing two image frames (indicated as F1 and F2 in the drawing).

    [0069] A display control unit 122 sequentially acquires image frames that have been stored in the memory 121 and outputs the image frames to the display unit 104.

    [0070]  The display unit 104 outputs, for example, a 60 Hz vertical synchronizing signal (Vsync) to the display control unit 122, and, at a timing determined on the basis of the synchronizing signal, the display control unit 122 sequentially updates the output image frames to the display unit 104.

    [0071] The display control unit 122 and the display unit 104 are connected to each other with an HDMI (registered trademark) cable.

    [0072] A controller 105 performs a general control related to processing of each component.

    [0073] The above basic configuration is a configuration similar to that of the conventional device described earlier with reference to FIG. 2.

    [0074] In addition to the image frames (Fn), the client 20 illustrated in FIG. 4 stores, in the memory 121, the following metadata associated with the image frames.
    (1) Time of input to the memory
    (2) Transmission frame rate (fps) of the image data from the server 30


    [0075] The above metadata is stored in association with the image frames.

    [0076] The server 30 appropriately changes the frame rate of the image to be transmitted in accordance with the state of the network. The information on the transmission frame rate is notified to the client from the server 30. A controller 105 fetches the information on the transmission frame rate (fps) from the notified information that has been received through the communication unit 101. The decoder 102 associates the transmission frame rate with the decoded image that is to be output and stores the resultant decoded image in the memory 121.

    [0077] Furthermore, the controller 105 inputs time information from a system clock (not shown) and provides the time information to the decoder 102. The decoder 102 associates, with the decoded image, the time at which the image frame is input to the memory 121 as the above-described metadata (input time), and stores the decoded image in the memory 121.

    [0078] As a result, as illustrated in the drawing, in the memory 121, the following pieces of data
    (1) image frame
    (2) input time (the time at which the image frame has been input to the memory)
    (3) transmission frame rate (fps)
    are associated and stored.
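
    As a rough illustration of this association, the following sketch shows one possible in-memory representation of a single queue. It is only a sketch: the class name QueueEntry, the field names, and the helper method are assumptions for illustration and are not part of the disclosed configuration.

```python
from dataclasses import dataclass

@dataclass
class QueueEntry:
    """Hypothetical representation of one queue stored in the memory 121."""
    frame: bytes        # (1) decoded image frame (F1, F2, ...)
    input_time: float   # (2) time at which the frame was input to the memory
    frame_rate: float   # (3) transmission frame rate (fps) notified by the server

    def waiting_time(self, now: float) -> float:
        # Elapsed time since the frame was stored in the memory; this is the
        # value the display control unit later uses to select the output image.
        return now - self.input_time
```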

    [0079] The display control unit uses the above data (1) to (3) to execute a display control of the image that is acquired from the memory 121 and that is output to the display unit 104.

    [0080] Details of the processing of the above will be described later.

    [0081] In the processing of the present disclosure, the time of input of the decoded image to the memory 121 is recorded as metadata associated with the relevant image frame.

    [0082] The display control unit 122, for example, calculates, for each image frame, the elapsed time after being stored in the memory, and on the basis of the calculation result, performs processing of determining the output image.

    [0083] FIG. 5 is a diagram illustrating an example of an image output sequence that the information processor (the client 20) of the present disclosure executes.

    [0084] Similar to FIG. 3 described earlier, FIG. 5 is a diagram illustrating a sequence associated with the transition of time from an output of data from the decoder 102 until display processing of the image frame on the display unit 104.

    [0085] Each of the following data is illustrated in FIG. 5.
    (A) An output of the decoder
    (B) Stored data in the memory
    (C) The image displayed on the display unit with the processing of the display control unit


    [0086] Time base (t) is illustrated in (C) of FIG. 5, and (A), (B), and (C) illustrate the processing performed in accordance with the time base (t).

    [0087] Each of the F1, F2, F3, F4... indicated in (A), (B), and (C) represents a single image frame.

    [0088] F1 is an image frame F1, and F2 is an image frame F2.

    [0089] The memory in (B) of FIG. 5 has a configuration that allows two image frames to be stored. In FIG. 5, (b1) on the lower side of (B) is a preceding input image (a preceding queue) and (b2) on the upper side is a succeeding input image (a succeeding queue).

    [0090] In the example illustrated in the drawing, at time t0, the frame F1 is stored in the memory as a preceding input queue and, after that, frame F2 is stored as a succeeding queue.

    [0091] Note that similar to the description given earlier with reference to FIG. 3, the length of the frame Fn illustrated in (B) of FIG. 5 corresponds to the time stored in the memory.

    [0092] For example, frame F1 is stored in the memory as a preceding queue from time t0 to time t1.

    [0093] Frame F2 is stored in the memory as a succeeding queue at an intermediate time between time t0 and t1 and is set as the succeeding queue until time t1. After that, as illustrated, the frame F2 is stored in the memory as a preceding queue from time t1 to time t2.

    [0094] In (C) of FIG. 5, a sequence related to the processing of the display control unit 122 and the display image of the display unit 104 is illustrated.

    [0095] In the example illustrated in FIG. 5, at time t1, the display control unit 122 selects not frame F1 but frame F2 from among the frames stored in the memory and outputs the frame F2 to the display unit 104.

    [0096] The processing of selecting the above display image is executed as processing using the metadata associated with the image frame that is to be stored in the memory.

    [0097] In the sequence illustrated in FIG. 5 from decoding the image until the image is displayed, the delay time of the image frame F2, in other words, the delay time that is from the output time of the frame F2 from the decoder 102 to the memory 121 until the display start time of the frame F2 on the display unit 104, is short compared with the delay time of the frame F1 in the sequence described with reference to FIG. 3.

    [0098] As described above, in the configuration of the present disclosure, the delay in displaying the image is shortened.

    [0099] The above can be achieved by selecting the display image by using the metadata associated with the image frame that is to be stored in the memory.

    [0100] The above processing will be described in detail below.

    [4. Memory Storing Sequence of Decoded Image Executed by Information Processor (Client) of Present Disclosure]



    [0101] Next, a memory storing sequence of a decoded image executed by the information processor (client) of the present disclosure will be described.

    [0102] FIG. 6 is a flowchart illustrating a data input processing sequence from the decoder 102 to the memory 121.

    [0103] The flow illustrated in FIG. 6 is executed, for example, under control of the controller 105 and of the output controller 103 according to a program stored in a storage unit.

    [0104] Note that the flow illustrated in FIG. 6 is a process that is repeatedly executed upon input of new data from the decoder 102.

    [0105] Processing in each step will be described below.

    (Step S101)



    [0106] First, in step S101, determination is made whether the number of queues stored in the memory has reached the upper limit (Full).

    [0107] For example, the memory 121 described with reference to FIG. 5 is configured so as to be capable of storing two queues, and when two queues are stored, it is determined that the number of queues is at the upper limit (Yes). If one or fewer queues are stored, it is determined that the number of queues is not at the upper limit (No).

    [0108] When determined to be at the upper limit (Yes), the process proceeds to step S102. On the other hand, when determined not to be at the upper limit (No), the process proceeds to step S103.

    (Step S102)



    [0109] When determined that the number of queues stored in the memory 121 is at the upper limit, in step S102, the oldest queue in the memory is deleted. In the case of the memory 121 illustrated in FIG. 5 that stores only two queues, the preceding queue is deleted.

    [0110] Note that the queue includes, other than the image frame, metadata, such as the input time and the transmission frame rate (fps); all of the above pieces of data are deleted.

    (Step S103)



    [0111] After completion of deleting the queue in step S102, or when it is determined that the number of queues stored in the memory is not at the upper limit in step S101, processing of step S103 is executed.

    [0112] In step S103, in order to set an input time serving as metadata associated with a new image frame that has been input, the current time is acquired. The current time information is, for example, acquired from the system clock included in the client 20.

    (Step S104)



    [0113] Subsequently, in step S104, data including the new image frame and metadata associated with the new image frame, the metadata including
    the input time and
    the transmission frame rate (fps),
    is stored in the memory as a single queue.
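
    A minimal sketch of the storing sequence of steps S101 to S104 is shown below. It reuses the hypothetical QueueEntry class sketched earlier; the two-queue capacity and the use of time.monotonic() as the system clock are likewise assumptions made only for illustration.

```python
import time
from collections import deque

MAX_QUEUES = 2            # upper limit of queues held in the memory 121 (assumed)
memory: deque = deque()   # leftmost entry = oldest (preceding) queue

def store_decoded_frame(frame: bytes, frame_rate: float) -> None:
    """Store one decoded frame with its metadata (steps S101 to S104)."""
    # Step S101: has the number of stored queues reached the upper limit?
    if len(memory) >= MAX_QUEUES:
        # Step S102: delete the oldest queue together with its metadata.
        memory.popleft()
    # Step S103: acquire the current time to serve as the input time metadata.
    input_time = time.monotonic()
    # Step S104: store the frame, input time, and frame rate as a single queue.
    memory.append(QueueEntry(frame, input_time, frame_rate))
```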

    [0114] A specific exemplary sequence of processing that stores data in the memory will be described with reference to FIG. 7.

    [0115] Similar to FIG. 5 described earlier, FIG. 7 is a diagram illustrating a sequence associated with the transition of time from an output of data from the decoder 102 until display processing of the image frame on the display unit 104.

    [0116] Each of the following data is illustrated in FIG. 7.
    (A) An output of the decoder
    (B) Stored data in the memory
    (C) The image displayed on the display unit with the processing of the display control unit


    [0117] Time base (t) is illustrated in (C) of FIG. 7, and (A), (B), and (C) illustrate the processing performed in accordance with the time base (t).

    [0118] Each of the F1, F2, F3, F4... indicated in (A), (B), and (C) represents a single image frame.

    [0119] F1 is an image frame F1, and F2 is an image frame F2.

    [0120] Time t1, time t2, time t3... depicted by solid line arrows 201 on the time base (t) illustrated in (C) of FIG. 7 are each an output timing of the vertical synchronizing signal (Vsync) of the display unit 104 and each indicate a timing at which the output image frame can be switched. When the vertical synchronizing signal (Vsync) is 60 Hz, the interval between the solid line arrows is 1/60 (sec).

    [0121] Furthermore, t1m, t2m, t3m... depicted by broken line arrows 202 in (C) of FIG. 7 indicate memory access timings for acquiring images from the memory 121 in order to update the output image of the display unit 104 at the output timings t1, t2, t3... of the vertical synchronizing signals (Vsync) depicted by the solid line arrows 201.

    [0122] In other words, for example, in order to display the image frame F1 on the display unit at time t2, the image frame F1 is acquired by accessing the memory 121 at the memory access timing of time t2m.

    [0123] The above memory access timing corresponds to a display image update processing start timing.

    [0124] The memory illustrated in (B) of FIG. 7 corresponds to the processing of the memory 121 illustrated in FIG. 4. The memory 121 has a configuration that allows two image frames to be stored. In FIG. 7, (b1) on the lower side of (B) is a preceding input image (a preceding queue) and (b2) on the upper side is a succeeding input image (a succeeding queue).

    [0125] The broken line arrow 203 extending in the horizontal direction illustrated in (B) of FIG. 7 indicates a buffering time.

    [0126] The buffering time is the time required to store a single queue in the memory 121. In other words, it is the time from when storage of the queue configuration data in the memory 121 is started at the above-described input time until all of the pieces of queue configuration data have been completely stored in the memory 121, after which the queue can be fetched in a stable manner.

    [0127] The buffering time is stored in advance in a nonvolatile memory as data unique to the device (the client).
    Alternatively, the buffering time may be time information that can be set by the user.
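
    Expressed as a simple predicate, the role of the buffering time is to decide when a stored queue may be fetched. The sketch below is only an illustration: it reuses the QueueEntry class assumed earlier, and the numeric value of the buffering time is a placeholder rather than a value taken from the disclosure.

```python
BUFFERING_TIME = 0.005   # seconds; device-specific value, placeholder only

def is_stably_fetchable(entry: QueueEntry, now: float) -> bool:
    # A queue can be fetched in a stable manner once its waiting time
    # (elapsed time since the input time) exceeds the buffering time.
    return entry.waiting_time(now) > BUFFERING_TIME
```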

    [0128] The processing of the decoder 102 of storing the output image in the memory 121 is executed according to the flow illustrated in FIG. 6.

    [0129] For example, when the frame F1 is input to the memory 121 from the decoder 102, the number of queues stored in the memory is zero and the determination in step S101 is No, so that the processing of steps S103 and S104 is executed.

    [0130] As a result, a queue including the image frame F1, the time of input of the image frame F1, and the transmission frame rate (fps) is stored in the memory 121.

    [0131] The same applies when inputting frame F2 in the memory 121 from the decoder 102.

    [0132] Furthermore, by the time the frame F3 is input to the memory 121 from the decoder 102, the image frame F1 that is the preceding queue of the memory 121 has already been acquired by the display control unit. Accordingly, when the frame F3 is input to the memory 121, the determination in step S101 is No, so that the processing of steps S103 and S104 is executed.

    [0133] However, for example, in the example illustrated in FIG. 7, when the frame F4 is input to the memory 121 from the decoder 102, the image frames F2 and F3 are stored in the memory 121. In other words, since the number of queues is at the upper limit (the number of queues = 2), the determination in step S101 is Yes.

    [0134] In the above case, in step S102, deletion of the oldest queue is executed. At the point when frame F4 is input, the oldest queue stored in the memory 121 is the image frame F2, and the image frame F2 and the metadata thereof are deleted.

    [0135] In addition to the deletion processing, the succeeding queue F3 is changed to the preceding queue and frame F4 is stored as the succeeding queue.

    [0136] As described above, when a new input image is generated and the number of queues in the memory 121 is at the upper limit, the processing of deleting the oldest queue and storing the new input image is executed.

    [5. Detailed Sequence of Image Display Control Processing Executed by Information Processor (Client) of Present Disclosure]



    [0137] A display control sequence of an image frame stored in the memory 121 will be described next.

    [0138] The output controller 103 of the information processor (the client 20) of the present disclosure performs different processing depending on the two following states when acquiring an image stored in the memory 121 and outputting the image to the display unit 104.

    (State 1) = display update stopped state (uninitialized)

    (State 2) = display update executing state (initialized)



    [0139] (State 1) = display update stopped state (uninitialized) is a state in which the image for display update cannot be acquired from the memory 121, and (State 2) = display update executing state (initialized) is a state in which the image for display update can be acquired from the memory 121.

    [0140] Processing in each of the above states will be described in order with reference to the drawings as described below.

    [0141] The processing in the case of (State 1) = display update stopped state (uninitialized) will be described with reference to the flowchart illustrated in FIG. 8 and the specific examples illustrated in FIGS. 9 and 10.

    [0142] The processing in the case of (State 2) = display update executing state (initialized) will be described with reference to the flowchart illustrated in FIG. 11 and the specific examples illustrated in FIG. 12 and after.

    [5-1. Processing in Display Update Stopped State (Uninitialized)]



    [0143] The processing in the case of (State 1) = display update stopped state (uninitialized) will be described first with reference to the flowchart illustrated in FIG. 8.

    [0144] FIG. 8 is a flowchart illustrating a processing sequence of selectively acquiring a queue stored in the memory 121, in other words, a queue that includes an image frame, and an input time and a transmission frame rate (fps) serving as metadata, and displaying the queue on the display unit 104.

    [0145] The flow illustrated in FIG. 8 is executed, for example, under control of the controller 105 and of the output controller 103 according to a program stored in a storage unit.

    [0146] Note that the flow illustrated in FIG. 8 is a process that is repeatedly executed in accordance with the period of the vertical synchronizing signal (Vsync) of the display unit 104.

    [0147] Processing in each step will be described below.

    (Step S201)



    [0148] First, in step S201, the display control unit 122 of the output controller 103 waits until a memory access timing (tnm) before the next Vsync.

    [0149] The memory access timing (tnm) is a memory access timing corresponding to t1m, t2m... described earlier with reference to FIG. 7.

    (Step S202)



    [0150] When the memory access timing (tnm) has come, in step S202, the display control unit 122 determines whether the present state is the display update executing state (initialized).

    [0151] As described above, the output controller 103 of the information processor (the client 20) of the present disclosure performs different processing depending on the two following states when acquiring an image stored in the memory 121 and outputting the image to the display unit 104.

    (State 1) = display update stopped state (uninitialized)

    (State 2) = display update executing state (initialized)



    [0152] (State 1) = display update stopped state (uninitialized) is a state in which the image for display update cannot be acquired from the memory 121, and (State 2) = display update executing state (initialized) is a state in which the image for display update can be acquired from the memory 121.

    [0153] If the present state is the display update executing state (initialized), the process proceeds to step S251. The processing in the above case will be described later with reference to FIG. 11 and after.

    [0154] Meanwhile, when the present state is not the display update executing state (initialized) but is the display update stopped state (uninitialized), the process proceeds to step S203. In this flow, the processing in this case will be described.

    [0155] In other words, it is processing when in a state in which the image for display update cannot be acquired from the memory 121.

    (Step S203)



    [0156] In step S203, an update image candidate (Candidate) is set to "none (NULL)".

    [0157] The update image candidate (Candidate) is a candidate for the image that is to be displayed next on the display unit 104.

    (Step S204)



    [0158] Subsequently, the display control unit 122 determines whether there is any unprocessed queue that is subject to processing in the memory 121.

    [0159] When there is no unprocessed queue, the process proceeds to step S209.

    [0160] When there is an unprocessed queue, the process proceeds to step S205.

    (Step S205)



    [0161] When it is verified that there is an unprocessed queue that is subject to processing in the memory, in step S205, the display control unit 122 sets the image (tmp) subject to processing as the front queue (Queue.front).

    (Step S206)



    [0162] Subsequently, determination is made whether condition
    waiting time > buffering time
    is satisfied by the image (tmp) subject to processing.

    [0163] The waiting time is the elapsed time after the queue has been set in the memory 121.

    [0164] The above can be calculated as the elapsed time from the input time that is metadata of the image frame set in the queue.

    [0165] As described with reference to FIG. 7, the buffering time is the time required to store a single queue in the memory 121. In other words, it is the time from when storage of the queue configuration data in the memory 121 is started at the above-described input time until all of the pieces of queue configuration data have been completely stored in the memory 121, after which the queue can be fetched in a stable manner.

    [0166] The buffering time is stored in advance in a nonvolatile memory as data unique to the device (the client). Alternatively, the buffering time may be time set by the user.

    [0167] In step S206, determination is made whether
    waiting time > buffering time
    is satisfied by the image (tmp) subject to processing.

    [0168] When the determination condition in step S206, that is
    waiting time > buffering time
    is satisfied (Yes), the process proceeds to step S207. When not satisfied (No), the process proceeds to step S209.

    (Step S207)



    [0169] When waiting time > buffering time is satisfied, the process proceeds to step S207, and the image (tmp) subject to processing is set as the update image candidate (Candidate).

    [0170] The above update image candidate (Candidate) is acquired from the memory.

    (Step S208)



    [0171] In step S208, the front queue that has already been acquired as the update image candidate (Candidate) is deleted.

    [0172] If there is a succeeding queue, the succeeding queue is set as the front queue.

    [0173] Subsequently, the process returns to step S204 and repeats the processing from step S204 and after.

    [0174] The queues in the memory, from the front queue to the succeeding queue, are sequentially set as the processing subject (tmp), and the processing of steps S204 to S208 is executed.

    [0175] In other words, during the repetition of the processing of steps S204 to S208, when a processing subject queue that does not satisfy
    waiting time > buffering time
    is detected in the determination processing of step S206, the process proceeds to step S209.

    [0176] Note that at the point of proceeding to step S209, when there is not a single queue in the memory 121 that satisfies
    waiting time > buffering time,
    the update image candidate (Candidate) is not set.

    [0177] On the other hand, when there are one or more queues that satisfy
    waiting time > buffering time,
    the queue that has been input to the memory 121 most recently among them, in other words, the single newest queue, is selected, and the image of that newest queue is set as the update image candidate (Candidate).

    (Step S209)



    [0178] Step S209 is executed when it is determined in step S204 that there is no unprocessed queue in the memory or when, in step S206, a queue that does not satisfy
    waiting time > buffering time
    is detected.

    [0179] In step S209, determination is made whether the update image candidate (Candidate) is "none (NULL)".

    [0180] In the processing of steps S204 to S208, when there are one or more queues that satisfy
    waiting time > buffering time,
    the queue that has been input to the memory 121 most recently among them, in other words, the single newest queue, is selected, and the image of that newest queue is set as the update image candidate (Candidate).

    [0181] Other than in the cases described above, in other words,
    when there is not a single queue in the memory 121, or when there is not a single queue that satisfies
    waiting time > buffering time,
    the update image candidate (Candidate) remains "none (NULL)", the determination in step S209 is (No), and the process is ended.

    [0182] On the other hand, in the processing of steps S204 to S206, when there are one or more queues that satisfy
    waiting time > buffering time,
    the queue that has been input to the memory 121 most recently among them, in other words, the single newest queue, is selected, and the image of that newest queue is set as the update image candidate (Candidate).

    [0183] In the above case, the process proceeds to step S210.

    (Step S210)



    [0184] Subsequently, in step S210, the display control unit 122 outputs the update image candidate (Candidate) to the display unit 104 and executes the image update.

    (Step S211)



    [0185] Subsequently, the controller 105 sets the state to display update executing state (initialized).

    (Step S212)



    [0186] Subsequently, upon output of the update image (Candidate) to the display unit 104, the vertical synchronizing signal (Vsync) is set to zero.

    [0187] The flowchart illustrated in FIG. 8 is a processing sequence in the case of (State 1) = display update stopped state (uninitialized), and when the update of the display image is resumed in step S210, a change to (State 2) = display update executing state (initialized) is executed in step S211.
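
    As a compact summary of steps S203 to S212, the sketch below selects the newest queue whose waiting time exceeds the buffering time and outputs it to the display. It builds on the hypothetical QueueEntry and is_stably_fetchable sketches above; the display object, its update() method, and the state flag are additional assumptions introduced only for illustration.

```python
from collections import deque

display_update_state = "uninitialized"   # (State 1) = display update stopped state

def update_display_from_memory(memory: deque, now: float, display) -> None:
    """Display update in the display update stopped state (steps S203 to S212)."""
    global display_update_state
    candidate = None                          # Step S203: update image candidate = NULL
    while memory:                             # Step S204: unprocessed queue remaining?
        front = memory[0]                     # Step S205: processing subject = front queue
        if not is_stably_fetchable(front, now):
            break                             # Step S206 not satisfied: stop searching
        candidate = front                     # Step S207: newest ready queue so far
        memory.popleft()                      # Step S208: delete the acquired front queue
    if candidate is None:                     # Step S209: no candidate, end the process
        return
    display.update(candidate.frame)           # Step S210: output the candidate image
    display_update_state = "initialized"      # Step S211: display update executing state
    # Step S212: the Vsync count is reset to zero at this point (omitted here).
```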

    [0188] A specific example of the process according to the flow illustrated in FIG. 8 will be described with reference to FIGS. 9 and 10.

    [0189] Similar to FIGS. 5 and 7 described earlier, FIG. 9 is a diagram illustrating a sequence associated with the transition of time from an output of data from the decoder 102 until display processing of the image frame on the display unit 104.

    [0190] Each of the following data is illustrated in FIG. 9.
    (A) An output of the decoder
    (B) Stored data in the memory
    (C) The image displayed on the display unit with the processing of the display control unit


    [0191] Time base (t) is illustrated in (C) of FIG. 9, and (A), (B), and (C) illustrate the processing performed in accordance with the time base (t).

    [0192] Each of the F1, F2, F3, F4... indicated in (A), (B), and (C) represents a single image frame.

    [0193] F1 is an image frame F1, and F2 is an image frame F2.

    [0194] Time t1, time t2, time t3... depicted by solid line arrows on the time base (t) illustrated in (C) of FIG. 9 are each an output timing of the vertical synchronizing signal (Vsync) of the display unit 104 and each indicate a timing at which the output image frame can be switched. When the vertical synchronizing signal (Vsync) is 60 Hz, the interval between the solid line arrows is 1/60 (sec).

    [0195] Furthermore, time t1m, time t2m, time t3m... indicated by broken line arrows represent memory access timings for acquiring images from the memory 121.

    [0196]  The memory 121 illustrated in (B) of FIG. 9 is the memory 121 illustrated in FIG. 4 and has a configuration that allows two image frames to be stored. In FIG. 9, (b1) on the lower side of (B) is a preceding input image (a preceding queue) and (b2) on the upper side is a succeeding input image (a succeeding queue).

    [0197] In FIG. 9, the period from t0 to t2m on the time base (t) corresponds to State 1 = display update stopped state (uninitialized), and after t2m, the state is changed to State 2 = display update executing state (initialized).

    [0198] The above change in state is executed in the state change processing of step S211 in the flowchart illustrated in FIG. 8.

    [0199] The image frame F1 displayed on the display unit 104 during the change in state is the image frame that is selected according to the flow illustrated in FIG. 8.

    [0200] In other words, the image frame F1 is the update image candidate (Candidate) that is applied to the update processing of the display image in step S210 of the flow illustrated in FIG. 8.

    [0201] The determination processing of the update image candidate (Candidate) is executed by the processing of steps S204 to S208 in the flow illustrated in FIG. 8.

    [0202] As described above, the processing of steps S204 to S208 is repeatedly executed while the queues in the memory, from the front queue to the succeeding queue, are sequentially set as the processing subject (tmp).

    [0203] In the processing of steps S204 to S208, when there are one or more queues that satisfy
    waiting time > buffering time,
    the queue that has been input to the memory 121 most recently among them, in other words, the single newest queue, is selected, and the image of that newest queue is set as the update image candidate (Candidate).

    [0204] In the example illustrated in FIG. 9, the processing from step S201 and after in FIG. 8 is executed at time t2m.

    [0205] At time t2m, the image frame F1 is stored as the preceding queue and the image frame F2 is stored as the succeeding queue in the memory 121.

    [0206] Accordingly, the processing of steps S204 to S208 is executed on the two queues, in other words, the image frames F1 and F2.

    [0207] First, the determination processing of step S206 is executed on the image frame F1 that is the preceding queue.

    [0208] The display control unit 122 acquires the input time that is metadata associated with the image frame F1 and determines whether the criterion
    waiting time > buffering time
    is satisfied.

    [0209] Since the above image frame F1 satisfies the criterion described above, the determination in step S206 is Yes and the process proceeds to step S207.

    [0210] In step S207, the image frame F1 is set as the update image candidate (Candidate).

    [0211] Subsequently, the processing subject (tmp) is switched to the succeeding queue and the determination processing of step S206 is executed on the image frame F2 that is the succeeding queue.

    [0212] The display control unit 122 acquires the input time that is metadata associated with the image frame F2 and determines whether the criterion
    waiting time > buffering time
    is satisfied.

    [0213] It is determined that the above image frame F2 does not satisfy the criterion described above; that is, the determination in step S206 is No, and the process proceeds to step S209.

    [0214] As a result, the display control unit 122 outputs the image frame F1 that is set as the update image candidate (Candidate) at this point to the display unit 104, and executes the image update.

    [0215] The above processing is the processing of step S210.

    [0216] Subsequently, in step S211, the state is changed to State 2 = display update executing state (initialized).

    [0217] The example illustrated in FIG. 10 also follows the flow illustrated in FIG. 8, in other words, it is a diagram illustrating a specific example of the processing in State 1 = display update stopped state (uninitialized).

    [0218] In the example illustrated in FIG. 10, the processing from step S201 and after in FIG. 8 is executed at time t2m.

    [0219] At time t2m, the image frame F1 is stored as the preceding queue and the image frame F2 is stored as the succeeding queue in the memory 121.

    [0220] Accordingly, the processing of steps S204 to S208 is executed on the two queues, in other words, the image frames F1 and F2.

    [0221] First, the determination processing of step S206 is executed on the image frame F1 that is the preceding queue.

    [0222] The display control unit 122 acquires the input time that is metadata associated with the image frame F1 and determines whether the criterion
    waiting time > buffering time
    is satisfied.

    [0223] Since the above image frame F1 satisfies the criterion described above, the determination in step S206 is Yes and the process proceeds to step S207.

    [0224] In step S207, the image frame F1 is set as the update image candidate (Candidate).

    [0225] Subsequently, the processing subject (tmp) is switched to the succeeding queue and the determination processing of step S206 is executed on the image frame F2 that is the succeeding queue.

    [0226] The display control unit 122 acquires the input time that is metadata associated with the image frame F2 and determines whether the criterion
    waiting time > buffering time
    is satisfied.

    [0227] It is determined that the above image frame F2 satisfies the criterion described above; that is, the determination in step S206 is Yes, and the process proceeds to step S207.

    [0228] In step S207, the image frame F2 is set as the update image candidate (Candidate).

    [0229] Subsequently, in step S208, the front queue is deleted. In other words, the image frame F1 is deleted.

    [0230] Subsequently, the process proceeds to step S204.

    [0231] In step S204, determination is made whether there is an unprocessed queue in the memory. Since processing of two queues has already been completed, it is determined that there is no unprocessed queue and the process proceeds to step S209.

    [0232] As a result, the display control unit 122 outputs the image frame F2 that is set as the update image candidate (Candidate) at this point to the display unit 104, and executes the image update.

    [0233] The above processing is the processing of step S210.

    [0234] Subsequently, in step S211, the state is changed to State 2 = display update executing state (initialized).

    [0235] As described above, the example illustrated in FIG. 10 is an example of processing in a case in which, at the point of memory access timing t2m, both of the two queues stored in the memory 121, that is, the image frame F1 and the image frame F2, satisfy the criterion
    waiting time > buffering time.

    [0236] As described above, in a case in which a plurality of queues satisfy
    waiting time > buffering time,
    the update processing is performed by setting the image corresponding to the newest input queue as the update image candidate (Candidate).

    [0237] By executing such processing, an image update with less delay is performed. Note that in the image update processing, the display of the image frame F1 on the display unit is not performed; however, the omission of a single frame is hardly noticeable to the viewer and does not cause any discomfort.
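
    Note that, purely as a non-limiting illustration, the selection logic of steps S204 to S211 in the display update stopped state can be pictured by the following Python sketch; the names Frame and select_update_candidate and the concrete time values are hypothetical assumptions and do not appear in the embodiment.

import time
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Frame:
    name: str          # e.g. "F1", "F2"
    input_time: float  # time at which the frame was input to the memory (metadata)

def select_update_candidate(queues: List[Frame],
                            buffering_time: float,
                            now: float) -> Optional[Frame]:
    # Steps S204 to S208: the preceding queue is examined first, then the
    # succeeding queue; every frame whose waiting time exceeds the buffering
    # time replaces the previous candidate, so the newest qualifying frame wins.
    candidate = None
    for frame in queues:
        waiting_time = now - frame.input_time
        if waiting_time > buffering_time:   # criterion: waiting time > buffering time
            candidate = frame               # step S207: set as update image candidate
        else:
            break                           # step S206 = No: keep the current candidate
    return candidate

# Example corresponding to FIG. 10: both F1 and F2 satisfy the criterion,
# so F2 (the newest input queue) is selected and the display of F1 is omitted.
now = time.time()
queues = [Frame("F1", now - 0.050), Frame("F2", now - 0.034)]
print(select_update_candidate(queues, buffering_time=0.033, now=now))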

    [5-2. Processing in Display Update Executing State (Initialized)]



    [0238] In FIGS. 8 to 10, the image display control processing in a case in which (State 1) = display update stopped state (uninitialized) has been described.

    [0239] Subsequently, image display control processing in a case in which (State 2) = display update executing state (initialized) will be described with reference to FIG. 11 and after.

    [0240] Like FIG. 8, FIG. 11 is a flowchart illustrating a processing sequence of selectively acquiring a queue stored in the memory 121, in other words, a queue that includes an image frame, and an input time and a transmission frame rate (fps) serving as metadata, and displaying the queue on the display unit 104.

    [0241] The flow illustrated in FIG. 11 is executed, for example, under control of the controller 105 and of the output controller 103 according to a program stored in a storage unit.

    [0242] Note that, like the flow illustrated in FIG. 8, the flow illustrated in FIG. 11 is a process that is repeatedly executed in accordance with the period of the vertical synchronizing signal (Vsync) of the display unit 104.
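
    Note that, purely as a non-limiting illustration of this repeated execution, the outer control loop can be pictured by the following Python sketch; the names run_display_control and MEMORY_ACCESS_LEAD and the concrete lead time are hypothetical assumptions and do not appear in the embodiment.

import time

VSYNC_PERIOD = 1.0 / 60     # period of the vertical synchronizing signal (assumed 60 Hz)
MEMORY_ACCESS_LEAD = 0.002  # hypothetical lead time of the memory access timing (tnm)

def run_display_control(handle_stopped, handle_executing, get_state, num_cycles=3):
    # Once per Vsync period, wait until the memory access timing (tnm) before the
    # next Vsync (step S201) and then execute either the FIG. 8 flow (display update
    # stopped state) or the FIG. 11 flow (display update executing state) (step S202).
    for _ in range(num_cycles):
        time.sleep(max(0.0, VSYNC_PERIOD - MEMORY_ACCESS_LEAD))
        if get_state() == "display_update_executing":
            handle_executing()   # steps S251 and after (FIG. 11)
        else:
            handle_stopped()     # steps S203 and after (FIG. 8)
        time.sleep(MEMORY_ACCESS_LEAD)

# Minimal usage with stub handlers:
run_display_control(lambda: print("FIG. 8 flow"),
                    lambda: print("FIG. 11 flow"),
                    lambda: "display_update_stopped")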

    [0243] Processing in each step will be described below.

    (Step S201)



    [0244] The processing of steps S201 to S202 is similar to the processing of steps S201 to S202 in the flow of FIG. 8 described earlier.

    [0245] First, in step S201, the display control unit 122 of the output controller 103 waits until a memory access timing (tnm) before the next Vsync.

    [0246] The memory access timing (tnm) is a memory access timing corresponding to t1m, t2m... described earlier with reference to FIG. 7.

    (Step S202)



    [0247] When the memory access timing (tnm) has come, in step S202, the display control unit 122 determines whether the present state is the display update executing state (initialized).

    [0248] As described above, the output controller 103 of the information processor (the client 20) of the present disclosure performs different processing depending on the two following states when acquiring an image stored in the memory 121 and outputting the image to the display unit 104.

    (State 1) = display update stopped state (uninitialized)

    (State 2) = display update executing state (initialized)



    [0249] (State 1) = display update stopped state (uninitialized) is a state in which the image for display update cannot be acquired from the memory 121, and (State 2) = display update executing state (initialized) is a state in which the image for display update can be acquired from the memory 121.

    [0250] When the present state is not the display update executing state (initialized) but is the display update stopped state (uninitialized), the process proceeds to step S203. The processing from step S203 and after is processing that is executed in accordance with the flow illustrated in FIG. 8 described earlier.

    [0251] Meanwhile, if the present state is the display update executing state (initialized), the process proceeds to step S251. The processing in the above case will be described below.
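
    Note that, purely for illustration, the two device states and the content of a queue as used in the following description can be summarized by the sketch below; the names DeviceState and QueueEntry are hypothetical assumptions and do not appear in the embodiment.

from dataclasses import dataclass
from enum import Enum, auto
from typing import List

class DeviceState(Enum):
    DISPLAY_UPDATE_STOPPED = auto()    # State 1 (uninitialized): no image can be acquired
    DISPLAY_UPDATE_EXECUTING = auto()  # State 2 (initialized): an image can be acquired

@dataclass
class QueueEntry:
    frame: str          # decoded image frame, e.g. "F1"
    input_time: float   # time at which the frame was input to the memory (metadata)
    fps: int            # transmission frame rate (fps) notified by the server (metadata)

# The memory 121 holds at most two entries: a preceding queue and a succeeding queue.
memory: List[QueueEntry] = []
state = DeviceState.DISPLAY_UPDATE_STOPPED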

    (Step S251)



    [0252] In step S251, the vertical synchronizing signal (Vsync) that specifies the image update timing of the display unit 104 is counted.

    (Step S252)



    [0253] Subsequently, the display control unit 122 determines whether there is any unprocessed queue that is subject to processing in the memory 121.

    [0254] When there is no unprocessed queue, the process proceeds to step S256.

    [0255] When there is an unprocessed queue, the process proceeds to step S253.

    (Step S253)



    [0256] When it is verified that there is an unprocessed queue that is subject to processing in the memory, in step S253, the display control unit 122 sets the image (tmp) subject to processing as the front queue (Queue.front).

    (Step S254)



    [0257] Subsequently, the display control unit 122 acquires a transmission image frame rate (fps) set as metadata associated with an image frame Fx of a front queue that is an image (tmp) subject to processing. The transmission frame rate information is information that is received by the client 20 as notification information from the server 30.

    [0258] The display control unit 122 retains, in the memory, the frame rate information (fps) associated with the most recently displayed image, and determines whether the transmission frame rate has been changed by comparing the acquired rate with the retained transmission frame rate (fps).

    [0259] Note that, as described above, the server 30 appropriately changes the transmission frame rate in accordance with the congestion state of the network. For example, two rates of 60 fps and 30 fps are used and are switched as appropriate.

    [0260] In step S254, when the change in the server transmission frame rate (fps) is detected, the process proceeds to step S255. On the other hand, when the change in the server transmission frame rate (fps) is not detected, the process proceeds to step S256.

    (Step S255)



    [0261] In step S254, when the change in the server transmission frame rate (fps) is detected, the process proceeds to step S255, the count of the vertical synchronizing signal of the display unit 104 is set to zero, and the process proceeds to step S256.
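
    Note that, purely for illustration, the counting of the vertical synchronizing signal and the detection of a change in the transmission frame rate in steps S251 to S255 can be pictured by the following sketch; the class name FrameRateWatcher and the initial rate are hypothetical assumptions and do not appear in the embodiment.

from typing import Optional

class FrameRateWatcher:
    def __init__(self, initial_fps: int = 60):
        self.current_fps = initial_fps  # retained transmission frame rate (fps)
        self.vsync_count = 0            # count of the vertical synchronizing signal

    def on_vsync(self, front_queue_fps: Optional[int]) -> None:
        self.vsync_count += 1                     # step S251: count Vsync
        if front_queue_fps is None:               # step S252: no unprocessed queue
            return
        if front_queue_fps != self.current_fps:   # step S254: rate change detected
            self.current_fps = front_queue_fps
            self.vsync_count = 0                  # step S255: reset the Vsync count

watcher = FrameRateWatcher()
watcher.on_vsync(front_queue_fps=60)  # no change in the transmission frame rate
watcher.on_vsync(front_queue_fps=30)  # change detected: the count is reset to zero
print(watcher.current_fps, watcher.vsync_count)  # -> 30 0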

    (Step S256)



    [0262] Step S256 is processing executed in any of the following cases.

    [0263] Step S256 is executed
    when, in step S252, it is determined that there is no unprocessed queue in the memory 121,
    when, in step S254, a change in the server transmission frame rate (fps) is detected and the count of the vertical synchronizing signal is set to zero in step S255, or
    when, in step S254, no change in the server transmission frame rate (fps) is detected.

    [0264] In step S256, determination is made whether the update processing start timing of the display image has come. Note that the image update processing start timing is set differently according to the transmission frame rate (fps), for example.

    [0265] Specifically, when the image transmission frame rate from the server 30 is 60 fps, the update is executed every 1/60 (sec), and when the image transmission frame rate from the server 30 is 30 fps, the update is executed every 1/30 (sec).

    [0266] In step S256, when it is determined that the update processing start timing of the display image has come, the process proceeds to step S257.

    [0267] In step S256, when it is determined that the update processing start timing of the display image has not come, the process is ended.
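
    Note that, purely for illustration, the determination of step S256 can be pictured by the following sketch, in which the display update period is derived from the transmission frame rate and the count of the 60 Hz vertical synchronizing signal; the function name is_update_timing is a hypothetical assumption and does not appear in the embodiment.

def is_update_timing(vsync_count: int, transmission_fps: int, panel_hz: int = 60) -> bool:
    # With a 60 Hz display unit, the update timing comes at every Vsync when the
    # transmission frame rate is 60 fps (every 1/60 (sec)) and at every second
    # Vsync when the transmission frame rate is 30 fps (every 1/30 (sec)).
    vsyncs_per_update = max(1, panel_hz // transmission_fps)
    return vsync_count % vsyncs_per_update == 0

print([is_update_timing(n, 60) for n in range(4)])  # [True, True, True, True]
print([is_update_timing(n, 30) for n in range(4)])  # [True, False, True, False]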

    (Step S257)



    [0268] In step S256, when it is determined that the update processing start timing of the display image has come, the process proceeds to step S257 and determination is made whether there is an unprocessed queue in the memory 121.

    [0269] When there is no unprocessed queue, the process proceeds to step S311.

    [0270] When there is an unprocessed queue, the process proceeds to step S321.

    (Step S311)



    [0271] At the timing when, in step S256, it is determined that the update processing start timing of the display image has come, and when, in step S257, it is determined that there is no unprocessed queue in the memory 121, image update on the display unit 104 cannot be performed.

    [0272] In such a case, the change in state is performed in step S311.

    [0273] In other words, the state is set to State 1 = display update stopped state (uninitialized) and the process is ended.

    (Step S321)



    [0274] On the other hand, when, in step S256, it is determined that the update processing start timing of the display image has come, and when, in step S257, it is determined that there is an unprocessed queue in the memory 121, the process proceeds to step S321.

    [0275] In step S321, the display control unit 122 sets the front queue (Queue.front) of the memory 121 as the image (tmp) subject to processing and acquires the front queue from the memory 121.

    (Step S322)



    [0276] In step S322, the front queue that has already been acquired as the image (tmp) subject to processing is deleted.

    [0277] If there is a succeeding queue, the succeeding queue is set as the front queue.

    (Step S323)



    [0278] Subsequently, in step S323, the display control unit 122 outputs, to the display unit 104, the image (tmp) subject to processing that is the front queue that has already been acquired from the memory 121 and executes the image update.

    [0279] Note that, as can be understood from the description of the flows illustrated in FIGS. 8 and 11, on the basis of the acquisition state of the data for display from the memory 121, the display control unit 122 executes the processing of switching the state of the device from the display update executing state (initialized) to the display update stopped state (uninitialized) or from the display update stopped state (uninitialized) to the display update executing state (initialized).

    [0280] When the device is in the display update stopped state (uninitialized), the waiting time, which is the elapsed time from the time of input to the memory, and the buffering time, which is specified as the time required to store data in the memory, are compared, and the image frame in which the waiting time exceeds the buffering time is selected as the output image to the display unit.

    [0281] On the other hand, when the state of the device is in the display update executing state (initialized), regardless of whether or not the waiting time exceeds the buffering time, the image frame that can be acquired from the memory is selected as the output image to the display unit.
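
    Note that, purely for illustration, the processing of steps S257, S311, and S321 to S323 in the display update executing state can be pictured by the following sketch; the names on_update_timing and FakeDisplay are hypothetical assumptions and do not appear in the embodiment.

from collections import deque

def on_update_timing(memory: deque, display) -> str:
    # Step S257: if no unprocessed queue is left, keep the current image and change
    # to the display update stopped state (step S311); otherwise take the front
    # queue, delete it, and output its frame to the display unit (steps S321 to S323).
    if not memory:
        return "display_update_stopped"
    frame = memory.popleft()
    display.show(frame)
    return "display_update_executing"

class FakeDisplay:
    def show(self, frame):  # stand-in for output to the display unit 104
        print("displaying", frame)

memory = deque(["F3", "F4"])  # preceding queue F3 and succeeding queue F4 (as in FIG. 12)
state = on_update_timing(memory, FakeDisplay())  # F3 is displayed; F4 becomes the front queue
print(state, list(memory))                       # -> display_update_executing ['F4']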

    [0282] A specific example of the process according to the flow illustrated in FIG. 11 will be described with reference to FIG. 12 and after.

    [0283] FIGS. 12 and 13 illustrate examples of processing in which the determination processing in step S256 is Yes, that is, in a case in which determination is made that the display image update processing start timing has come in state 2, that is, in the display update executing state (initialized) that has been described with reference to FIG. 11.

    [0284] FIG. 12 illustrates processing in which the determination in step S257 is No, that is, processing in a case in which an unprocessed queue is left in the memory, and is an example of a sequence in which the processing of steps S321 to S323 is executed.

    [0285] Meanwhile, FIG. 13 illustrates processing in which the determination in step S257 is Yes, that is, processing in a case in which no unprocessed queue is left in the memory, and is an example of a sequence in which the processing of step S311 is executed.

    [0286] Referring to FIG. 12, a sequence in a case in which an unprocessed queue is left in the memory and in which the processing of steps S321 to S323 is executed will be described first.

    [0287] Similar to FIGS. 5 and 7 described earlier, FIG. 12 is a diagram illustrating a sequence associated with the transition of time from an output of data from the decoder 102 until display processing of the image frame on the display unit 104.

    [0288] Each of the following data is illustrated in FIG. 12.
    1. (A) An output of the decoder
    2. (B) Stored data in the memory
    3. (C) The image displayed on the display unit with the processing of the display control unit


    [0289] Time base (t) is illustrated in (C) of FIG. 12, and (A), (B), and (C) illustrate the processing performed in accordance with the time base (t).

    [0290] Each of the F1, F2, F3, F4... indicated in (A), (B), and (C) represents a single image frame.

    [0291] F1 is an image frame F1, and F2 is an image frame F2.

    [0292] Time t1, time t2, time t3... depicted by solid line arrows on the time base (t) illustrated in (C) of FIG. 12 are each an output timing of the vertical synchronizing signal (Vsync) of the display unit 104 and each indicate a timing at which the relevant output image frame can be switched. When the vertical synchronizing signal (Vsync) is 60 Hz, the interval between each solid line arrow is 1/60 (sec).

    [0293] Furthermore, time t1m, time t2m, time t3m... indicated by broken line arrows represent memory access timings to acquire images from the memory 121.

    [0294] The memory 121 illustrated in (B) of FIG. 12 is the memory 121 illustrated in FIG. 4 and has a configuration that allows two image frames to be stored. In FIG. 12, (b1) on the lower side of (B) is a preceding input image (a preceding queue) and (b2) on the upper side is a succeeding input image (a succeeding queue).

    [0295] In FIG. 12, the period from t0 to t2m on the time base (t) is State 1 = display update stopped state (uninitialized), and after t2m, the state is changed to State 2 = display update executing state (initialized).

    [0296] The above change in state is executed in the state change processing of step S211 in the flowchart illustrated in FIG. 8.

    [0297] After the change in state, in State 2 = display update executing state (initialized), the processing of step S251 and after of the flow illustrated in FIG. 11 is executed.

    [0298] With reference to FIG. 12, the processing of steps S321 to S323 will be described, which is executed when the determination in step S256 of the flow illustrated in FIG. 11 is Yes, that is, when it is determined that the display image update processing start timing has come, and, further, when the determination in step S257 is No, that is, when an unprocessed queue is left in the memory.

    [0299] Time t3m indicated in FIG. 12 is a timing at which the determination processing in step S256 of the flow illustrated in FIG. 11 is Yes, in other words, is one of the timings at which it is determined that the display image update processing start timing has come.

    [0300] When the determination processing in step S256 of the flow illustrated in FIG. 11 is Yes at the timing of time t3m illustrated in FIG. 12, in other words, when it is determined that the display image update processing start timing has come, the process proceeds to step S257.

    [0301] In step S257, determination is made whether there is any unprocessed queue in the memory 121.

    [0302] In the example illustrated in FIG. 12, at the display image update processing start timing t3m, as unprocessed queues on which no display processing has been performed, two queues, namely, the preceding queue (F3) and the succeeding queue (F4) exist in the memory 121.

    [0303] Accordingly, the determination processing of step S257 is No, in other words, determination is made that there is an unprocessed queue in the memory 121, and the process proceeds to step S321.

    [0304] In step S321, the preceding queue, which includes, as its component, the image frame F3 that is stored in the memory 121, is set as the image (tmp) subject to processing and is acquired from the memory 121 by the display control unit 122.

    [0305] Subsequently, in step S322, the preceding queue, which includes, as its component, the image frame F3 that has already been acquired as the image (tmp) subject to processing, is deleted.

    [0306] With the above processing, F4 that is the succeeding queue is set as the front queue.

    [0307] Subsequently, in step S323, the display control unit 122 outputs, to the display unit 104, the image (tmp) subject to processing = F3 that has already been acquired from the memory 121 and executes the image update.

    [0308] As described above, in State 2 = display update executing state (initialized), when the display image update timing has come (step S256 = Yes) and, further, when there is an unprocessed queue in the memory 121 (step S257 = No), the processing of steps S321 to S323 is executed.

    [0309] In other words, queues are sequentially fetched from the preceding queue stored in the memory 121 to perform display processing.

    [0310] Referring to FIG. 13, an example of processing of step S311 that is executed as processing in a case in which the determination processing in step S257 is Yes, in other words, in a case in which there is no unprocessed queue left in the memory will be described next.

    [0311] In the sequence diagram illustrated in FIG. 13, time t4m indicated in FIG. 13 is the timing in which the determination processing in step S256 of the flow illustrated in FIG. 11 is Yes, in other words, is one of the timings on which determination is made that the display image update processing start timing has come.

    [0312] When the determination processing in step S256 of the flow illustrated in FIG. 11 is Yes at the timing of time t4m illustrated in FIG. 13, in other words, when it is determined that the display image update processing start timing has come, the process proceeds to step S257.

    [0313] In step S257, determination is made whether there is any unprocessed queue in the memory 121.

    [0314] In the example illustrated in FIG. 13, at the display image update processing start timing t4m, in the memory 121, there is no unprocessed queue on which no display processing has been performed.

    [0315] Accordingly, the determination processing of step S257 is Yes, in other words, determination is made that there is no unprocessed queue in the memory 121, and the process proceeds to step S311.

    [0316] Change in state is performed in step S311.

    [0317] In other words, the state is set to State 1 = display update stopped state (uninitialized) and the process is ended.

    [0318] In the above case, the update of the display image on the display unit 104 is not executed. As illustrated in (C) of FIG. 13, the display image of the display unit 104 remains F4, which is the image displayed between t3 and t4, without change from time t4 onward.

    [0319] As described above, in State 2 = display update executing state (initialized), when the display image update timing has come (step S256 = Yes) and, further, when there is no unprocessed queue left in the memory 121 (step S257 = Yes), the processing of step S311 is executed.

    [0320] In other words, the update of the display image is not executed and the state change processing to the display update stopped state (uninitialized) is performed.

    [6. Processing in Response to Switching of Image Transmission Frame Rate (fps) of Server]



    [0321] Processing that is executed by the client when the image transmission frame rate (fps) of the server is switched will be described next.

    [0322] Note that, as described above, the server 30 appropriately changes the transmission frame rate in accordance with the congestion state of the network. For example, two rates of 60 fps and 30 fps are used and are switched as appropriate.

    [0323] The transmission frame rate information of the server 30 is sequentially notified to the client 20. For example, in step S254 in the flow illustrated in FIG. 11, when a change in the server transmission frame rate (fps) is detected, in step S255, the client 20 performs processing of resetting the count of the vertical synchronizing signal (Vsync) of the display unit 104 and changing the display update timing.

    [0324] Hereinafter, a description will be given on an example of processing of the client 20 in which there are two types of frame rates of the image transmitted by the server 30, namely, 60 fps and 30 fps, and the two transmission frame rates are switched.

    [6-1. Basic Display Control Processing Executed by Client When Transmission Frame Rates (fps) are 60 fps and 30 fps]



    [0325] With reference to FIGS. 14 and 15, the processing sequence of the client 20 when the frame rate of the image transmitted by the server 30 is 60 fps and the processing sequence when the frame rate is 30 fps will be described next.

    [0326] FIG. 14 illustrates the processing sequence of the client 20 when the frame rate of the image transmitted by the server 30 is 60 fps.

    [0327] Similar to FIG. 5 and the like described earlier, each of the following data is illustrated in FIG. 14.
    1. (A) An output of the decoder
    2. (B) Stored data in the memory
    3. (C) The image displayed on the display unit with the processing of the display control unit


    [0328] In the example illustrated in FIG. 14, the decoder output illustrated in (A) of FIG. 14 is produced at intervals substantially matching the 60 fps frame rate of the image transmitted by the server 30. In other words, a single image frame Fn is output every 1/60 (sec) and is stored in the memory.

    [0329] When the transmission frame rate information of the server 30 is input from the communication unit 101, the controller 105 illustrated in FIG. 4 notifies the display control unit 122 of the output controller 103 of the information.

    [0330] The display control unit 122 performs update processing of the display image according to the current image transmission frame rate (fps).

    [0331] The example illustrated in FIG. 14 is processing in a case in which the transmission frame rate = 60 fps and, in such a case, the display control unit 122 performs processing of counting the vertical synchronizing signal (Vsync ≈ 60 Hz) of the display unit, and sequentially updating the image in accordance with the timing of Vsync = 0.

    [0332] FIG. 15 illustrates the processing sequence of the client 20 when the frame rate of the image transmitted by the server 30 is 30 fps.

    [0333] Similar to FIG. 14 and the like described earlier, each of the following data is illustrated in FIG. 15.
    1. (A) An output of the decoder
    2. (B) Stored data in the memory
    3. (C) The image displayed on the display unit with the processing of the display control unit


    [0334] In the example illustrated in FIG. 15, the decoder output illustrated in (A) of FIG. 15 is produced at intervals substantially matching the 30 fps frame rate of the image transmitted by the server 30. In other words, a single image frame Fn is output every 1/30 (sec) and is stored in the memory.

    [0335] In such a case, the display control unit 122 performs processing of counting the vertical synchronizing signal (Vsync ≈ 60 Hz) of the display unit, and updating the image at every second timing of Vsync = 0.

    [0336] In other words, the image update is performed every 1/30 (sec) by counting the vertical synchronizing signal of 60 Hz.

    [6-2. Display Control Processing Executed by Client When Transmission Frame Rates (fps) are Changed from 60 fps to 30 fps]



    [0337] The server 30 appropriately changes the transmission frame rate in accordance with, for example, the congestion state of the network. When the network is congested and the bandwidth that can be used is limited, the frame rate is lowered. For example, processing of lowering the frame rate from 60 fps to 30 fps is performed.

    [0338] Note that the change in the transmission rate is successively notified to the client 20 by the server 30.

    [0339] The controller 105 of the client 20 illustrated in FIG. 4 outputs a metadata setting command to the decoder 102 on the basis of the notification information on the transmission rate change. In other words, as metadata of the decoded image frame Fn that the decoder 102 outputs, the transmission frame rate (fps) is set and is stored in the memory 121.

    [0340] On the basis of the metadata set in the queue that has been fetched from the memory 121, the display control unit 122 detects that the transmission frame rate has been changed and, on the basis of the detection information, changes the display update rate of the image of the display unit 104.

    [0341] Note that the display control unit determines whether there has been a change in the frame rate by retaining the metadata of the data acquired immediately before from the memory and comparing the transmission frame rate (fps) information of the newly acquired data with that of the retained preceding metadata.
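
    Note that, purely for illustration, the detection of a change in the transmission frame rate from the metadata of the fetched queue and the resulting change of the display update rate can be pictured by the following sketch; the names QueueEntry and DisplayRateController are hypothetical assumptions and do not appear in the embodiment.

from dataclasses import dataclass
from typing import Optional

@dataclass
class QueueEntry:
    frame: str         # decoded image frame, e.g. "F4"
    input_time: float  # memory input time (metadata)
    fps: int           # transmission frame rate (fps) set as metadata

class DisplayRateController:
    def __init__(self):
        self.previous_fps: Optional[int] = None        # rate retained from the preceding data
        self.display_update_fps: Optional[int] = None  # current display update rate

    def on_queue_fetched(self, entry: QueueEntry) -> None:
        # Compare the fps metadata of the newly fetched queue with the retained
        # preceding rate and change the display update rate when they differ.
        if self.previous_fps is None or entry.fps != self.previous_fps:
            self.display_update_fps = entry.fps
        self.previous_fps = entry.fps

ctrl = DisplayRateController()
ctrl.on_queue_fetched(QueueEntry("F3", 0.00, fps=60))  # display rate follows 60 fps
ctrl.on_queue_fetched(QueueEntry("F4", 0.05, fps=30))  # rate change carried as metadata
print(ctrl.display_update_fps)                         # -> 30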

    [0342] Hereinafter, description of an example of the processing of the client when the image transmission frame rate (fps) from the server 30 is changed from 60 fps to 30 fps will be given.

    [0343] Referring to FIGS. 16 to 18, description of the following specific examples will be given.

    [0344] (Specific example 1) An example of display control processing (FIG. 16) when there is an unprocessed image (undisplayed image), in which the buffering time has passed and that includes the frame rate change information, at the first display image update processing start timing (tnm) after the timing of the notification of the transmission rate reduction received by the client.

    [0345] (Specific example 2) An example of display control processing (FIG. 17) when there is no unprocessed image (undisplayed image), in which the buffering time has passed and that includes the frame rate change information, at the first display image update processing start timing (tnm) after the timing of the notification of the transmission rate reduction received by the client.

    [0346] (Specific example 3) An example of display control processing (FIG. 18) when the timing of the notification of the transmission rate reduction received by the client is within the state period of the display update stopped state (uninitialized) in which no unprocessed queue exists in the memory.

    [0347] Note that similar to FIG. 5 and the like described earlier, the diagrams illustrated in FIGS. 16 to 18 each illustrate each of the following data.
    1. (A) An output of the decoder
    2. (B) Stored data in the memory
    3. (C) The image displayed on the display unit with the processing of the display control unit


    [0348] Referring to FIG. 16, description of the following specific example 1 will be given first.

    [0349] (Specific example 1) An example of display control processing when there is an unprocessed image (undisplayed image), that includes the frame rate change information and in which the buffering time has passed, at the first display image update processing start timing (tnm) after the timing of the notification of the transmission rate reduction received by the client.

    [0350] In FIG. 16, it is assumed that the client 20 receives the notification that the image frame rate (fps) transmitted by the server has been changed from 60 fps to 30 fps at time tc denoted on the time base (t) in (C) of FIG. 16.

    [0351] At the first display image update processing start timing (t4m) after the time tc at which the notification of the frame rate reduction has been received, the buffering time of the image frame F4 which includes the frame rate change information has passed and an image F4 can be acquired from the memory.

    [0352] When a notification of the frame rate reduction (60 fps → 30 fps) is received at the timing tc denoted in FIG. 16, the display control unit 122 of the client 20 acquires the image frame F4, that includes the frame rate change information and in which the buffering time has passed, at the next display image update processing start timing (t4m).

    [0353] On the basis of the metadata of the image frame F4, the reduction of the frame rate (60 fps → 30 fps) is verified, and from this point, the display update timing is changed to 30 fps corresponding to the transmission frame rate.

    [0354] As illustrated in (C) of FIG. 16, from the display image update processing start timing (t4m) of the image frame F4, the image update timing is changed from the previous 60 fps to 30 fps. Specifically, update of the image is performed at every second signal timing of the vertical synchronizing signal (Vsync) of 60 Hz, in other words, at every 1/30 (sec).

    [0355] Next, the following specific example 2 will be described with reference to FIG. 17.

    [0356] (Specific example 2) An example of display control processing when there is no unprocessed image (undisplayed image), in which the buffering time has passed and that includes the frame rate change information, at the first display image update processing start timing (tnm) after the timing of the notification of the transmission rate reduction received by the client.

    [0357] In FIG. 17, it is assumed that the client 20 receives the notification that the image frame rate (fps) transmitted by the server has been changed from 60 fps to 30 fps at time tc denoted on the time base (t) in (C) of FIG. 17.

    [0358] At the first display image update processing start timing (t4m) after the time tc at which the notification of the frame rate reduction has been received, the buffering time of the image frame F4, which includes the frame rate change information, has not passed. Accordingly, when processing that takes the buffering time into consideration is executed, the image F4 cannot be acquired from the memory.

    [0359] However, in the display update executing state (initialized state), processing of acquiring and displaying an image that can be acquired from the memory 121 without taking buffering time into consideration is performed.

    [0360] Accordingly, when a notification of the frame rate reduction (60 fps → 30 fps) is received at the timing tc denoted in FIG. 17, the display control unit 122 of the client 20 acquires the newest input image frame F4 that includes the frame rate change information from the memory 121 at the next display image update processing start timing (t4m) and performs update of the display.

    [0361] However, when the image frame F4 cannot be acquired, the image frame F3 is continuously displayed, and the image frame F4 is acquired at the next display image update processing start timing (t5m) and the display is updated.

    [0362] When, at the display image update processing start timing (t4m), acquisition from the memory 121 of the newest input image frame F4 that includes the frame rate change information succeeds and the update of the display is executed, the reduction of the frame rate (60 fps → 30 fps) is verified by the display control unit 122 on the basis of the metadata of the image frame F4, and from this point, the display update timing is changed to 30 fps corresponding to the transmission frame rate.

    [0363] With the change in the frame rate, the image frame F4 is continuously displayed at the next 60-fps display image update processing start timing (t5m). At the following display image update processing start timing (t6m), the newest input image frame F5 is acquired and the display is updated.

    [0364] Accordingly, in the present example, processing of changing to 30 fps, which corresponds to the transmission frame rate, is performed at the display timing of the image frame F4 that has been successfully acquired from the memory 121 and in which the buffering time has not passed.

    [0365] Referring to FIG. 18, the processing of specific example 3 will be described next.

    [0366] (Specific example 3) An example of display control processing when the timing of the notification of the transmission rate reduction received by the client is within the state period of the display update stopped state (uninitialized) in which no unprocessed queue exists in the memory.

    [0367] In FIG. 18, it is assumed that the client 20 receives the notification that the image frame rate (fps) transmitted by the server has been changed from 60 fps to 30 fps at time tc denoted on the time base (t) in (C) of FIG. 18.

    [0368] The reception time tc of the notification of the reduction in the frame rate is a time zone of the display update stopped state (uninitialized) in which no unprocessed queue exists in the memory.

    [0369] At the display image update processing start timing (t4m) in the period of the display update stopped state (uninitialized), an image cannot be acquired from the memory and, during the period from time t4 to time t5, the display image F3 of the preceding period (t3 to t4) is continuously displayed.

    [0370] At the next display image update processing start timing (t5m) after the reception time tc of the notification of the reduction in the frame rate, the newest input image frame F4, in which the buffering time has passed and that includes the frame rate change information, is acquired and update of the display is performed. From the image frame F4, the display update timing is changed to 30 fps that corresponds to the transmission frame rate.

    [0371] Furthermore, at this point, the change in state, in other words, state change processing that changes to the display update executing state (initialized) is executed.

    [0372] The above state change processing corresponds to the processing of step S211 in the flow illustrated in FIG. 8.

    [6-3. Display Control Processing Executed by Client When Transmission Frame Rates (fps) are Changed from 30 fps to 60 fps]



    [0373] The server 30 appropriately changes the transmission frame rate in accordance with, for example, the congestion state of the network. In a case in which the congestion is cleared and the bandwidth that can be used is increased, the frame rate is increased. For example, processing of increasing the frame rate from 30 fps to 60 fps is performed.

    [0374] Note that the change in the transmission rate is successively notified to the client 20 by the server 30.

    [0375] The controller 105 of the client 20 illustrated in FIG. 4 outputs a metadata setting command to the decoder 102 on the basis of the notification information on the transmission rate change. In other words, as metadata of the decoded image frame Fn that the decoder 102 outputs, the transmission frame rate (fps) is set and is stored in the memory 121.

    [0376] On the basis of the metadata set in the queue that has been fetched from the memory 121, the display control unit 122 detects that the transmission frame rate has been changed and, on the basis of the detection information, changes the display update rate of the image of the display unit 104.

    [0377] Hereinafter, description of an example of the processing on the client side when the image transmission frame rate (fps) from the server 30 is changed from 30 fps to 60 fps will be given.

    [0378] Referring to FIGS. 19 to 21, description of the following specific examples will be given.

    [0379] (Specific example 1) An example of display control processing (FIG. 19) in a case in which the display image update processing start timing (tnm) of the image including the first frame rate change information after the timing of the notification of the increase in the transmission rate received by the client matches the timing of the display image update start timing when at a low frame rate (30 fps).

    [0380] (Specific example 2) An example of display control processing (FIG. 20) in a case in which the display image update processing start timing (tnm) of the image including the first frame rate change information after the timing of the notification of the increase in the transmission rate received by the client does not match the timing of the display image update start timing when at a low frame rate (30 fps).

    [0381] (Specific example 3) An example of display control processing (FIG. 21) in a case in which the timing of the notification of the increase in the transmission rate received by the client is in the state period of the display update stopped state (uninitialized) in which no unprocessed queue exists in the memory.

    [0382] Note that similar to FIG. 5 and the like described earlier, the diagrams illustrated in FIGS. 19 to 21 each illustrate each of the following data.
    1. (A) An output of the decoder
    2. (B) Stored data in the memory
    3. (C) The image displayed on the display unit with the processing of the display control unit


    [0383] Referring to FIG. 19, description of the following specific example 1 will be given first.

    [0384] (Specific example 1) An example of display control processing in a case in which the display image update processing start timing (tnm) of the image including the first frame rate change information after the timing of the notification of the increase in the transmission rate received by the client matches the timing of the display image update start timing when at a low frame rate (30 fps).

    [0385] In FIG. 19, it is assumed that the client 20 receives the notification that the image frame rate (fps) transmitted by the server has been changed from 30 fps to 60 fps at time tc denoted on the time base (t) in (C) of FIG. 19.

    [0386] The display image update processing start timing (t3m) of the image including the first frame rate change information after the time tc at which the notification of the frame rate increase has been received matches the display image update start timing when at a low frame rate (30 fps).

    [0387] In the above case, the display control unit 122 of the client 20 acquires the image frame F2, in which the buffering time has passed at the display image update processing start timing (t3m) and that includes the frame rate change information, and on the basis of the metadata of the image frame F2, verifies that the transmission frame rate has been changed from 30 fps to 60 fps. On the basis of the verification, the display control unit 122 changes, from the image frame F2, the display update timing to 60 fps that corresponds to the transmission frame rate.

    [0388] As illustrated in (C) of FIG. 19, from the display image update processing start timing (t3m) of the image frame F2, the image update timing is changed from the previous 30 fps to 60 fps. Specifically, update of the image is performed at every signal timing of the vertical synchronizing signal (Vsync) of 60 Hz, in other words, at every 1/60 (sec).

    [0389] Referring to FIG. 20, description of the following specific example 2 will be given next.

    [0390] (Specific example 2) An example of display control processing in a case in which the display image update processing start timing (tnm) of the image including the first frame rate change information after the timing of the notification of the increase in the transmission rate received by the client does not match the timing of the display image update start timing when at a low frame rate (30 fps).

    [0391] In FIG. 20, it is assumed that the client 20 receives the notification that the image frame rate (fps) transmitted by the server has been changed from 30 fps to 60 fps at time tc denoted on the time base (t) in (C) of FIG. 20.

    [0392] The first display image update processing start timing (t2m) after the time tc at which the notification of the frame rate increase has been received does not match the display image update start timing when at a low frame rate (30 fps).

    [0393] However, at the display image update processing start timing (t2m), the image F2 in which the buffering time has already passed and that includes the frame rate change information is stored in the memory 121.

    [0394] The display control unit 122 acquires the image frame F2, in which the buffering time has passed at the display image update processing start timing (t2m) and that includes the frame rate change information, and on the basis of the metadata of the image frame F2, verifies that the transmission frame rate has been changed from 30 fps to 60 fps.

    [0395] The display control unit 122 changes, from the image frame F2 that has been acquired at the display image update processing start timing (t2m), the display update timing to 60 fps that corresponds to the transmission frame rate.

    [0396] As illustrated in (C) of FIG. 20, from the display image update processing start timing (t2m) of the image frame F2, the image update timing is changed from the previous 30 fps to 60 fps. Specifically, update of the image is performed at every signal timing of the vertical synchronizing signal (Vsync) of 60 Hz, in other words, at every 1/60 (sec).

    [0397] In the present example, it is possible to promptly verify the change in the transmission rate and change the display rate at an earlier stage.

    [0398] Referring to FIG. 21, the processing of specific example 3 will be described next.

    [0399] (Specific example 3) An example of display control processing in a case in which the timing of the notification of the increase in the transmission rate received by the client is in the state period of the display update stopped state (uninitialized) in which no unprocessed queue exists in the memory.

    [0400] In FIG. 21, it is assumed that the client 20 receives the notification that the image frame rate (fps) transmitted by the server has been changed from 30 fps to 60 fps at time tc denoted on the time base (t) in (C) of FIG. 21.

    [0401] The reception time tc of the notification of the increase in frame rate is a time zone of the display update stopped state (uninitialized) in which no unprocessed queue exists in the memory.

    [0402] At the display image update processing start timing (t4m) in the period of the display update stopped state (uninitialized), an image cannot be acquired from the memory and, during the period from time t4 to time t5, the display image F1 of the preceding period (t3 to t4) is continuously displayed.

    [0403] At the next display image update processing start timing (t5m) after the reception time tc of the notification of the increase in the frame rate, the newest input image frame F2, in which the buffering time has passed and that includes the frame rate change information, is acquired and update of the display is performed. From the image frame F2, the display update timing is changed to 60 fps that corresponds to the transmission frame rate.

    [0404] Furthermore, at this point, the change in state, in other words, state change processing that changes to the display update executing state (initialized) is executed.

    [0405] The above state change processing corresponds to the processing of step S211 in the flow illustrated in FIG. 8.

    [7. Exemplary Configuration of Information Processor Serving as Client]



    [0406] Subsequently, an exemplary configuration of the information processor serving as the client will be described with reference to FIG. 22.

    [0407] FIG. 22 illustrates an exemplary hardware configuration of the information processor serving as the client.

    [0408] The central processing unit (CPU) 401 functions as a data processing unit that executes various processing in accordance with a program stored in a read-only memory (ROM) 402 or a storage unit 408. For example, the processing according to the sequence described in the embodiment above is executed. A program executed by the CPU 401 and data are stored in a random-access memory (RAM) 403. The CPU 401, the ROM 402, and the RAM 403 are interconnected with a bus 404.

    [0409] The CPU 401 is connected to an input/output interface 405 through the bus 404, and an input unit 406 including various switches, a keyboard, a mouse, a microphone, and the like and an output unit 407 including a display, a loudspeaker, and the like are connected to the input/output interface 405. The CPU 401 executes various processing in response to commands input from the input unit 406, and outputs the processing results to, for example, the output unit 407.

    [0410] The storage unit 408 that is connected to the input/output interface 405 is, for example, configured of a hard disk or the like and stores a program that is executed by the CPU 401 and various data. A communication unit 409 functions as a transmission/reception unit of data communication through networks such as the Internet and a local area network and, further, functions as a transmission/reception unit of broadcast waves.

    [0411] A drive 410 connected to the input/output interface 405 drives a removable media 411 such as a magnetic disk, an optical disc, a magneto-optical disk, or a semiconductor memory such as a memory card and executes recording or a read-out of data.

    [0412] Note that while coding and decoding of data can be executed as a process of the CPU 401 serving as a data processing unit, the configuration may be such that a codec serving as dedicated hardware for executing coding processing or decoding processing is provided.

    [8. Conclusion of Configuration of Present Disclosure]



    [0413] Referring to a specific embodiment, an embodiment of the present disclosure has been described above in detail. However, it is apparent that those skilled in the art may modify or replace the embodiment without departing from the scope of the present disclosure as defined by the claims.

    [0414] Furthermore, the processing sequence that is explained in the specification can be implemented by hardware, by software and by a configuration that combines hardware and software. In a case where the processing is implemented by software, it is possible to install in memory within a computer that is incorporated into dedicated hardware a program in which the processing sequence is encoded and to execute the program. It is also possible to install a program in a general-purpose computer that is capable of performing various types of processing and to execute the program. For example, the program can be installed in advance in a storage medium. In addition to being installed in a computer from the storage medium, the program can also be received through a network, such as a local area network (LAN) or the Internet, and can be installed in a storage medium such as a hard disk or the like that is built into the computer.

    [0415] Note that the various types of processing that are described in this specification may not only be performed in a temporal sequence as has been described, but may also be performed in parallel or individually, in accordance with the processing capacity of the device that performs the processing or as needed. Furthermore, the system in this specification is not limited to being a configuration that logically aggregates a plurality of devices, all of which are contained within the same housing.

    [Industrial Applicability]



    [0416] As described above, according to a configuration of an embodiment of the present disclosure, a device and a method are provided that are capable of display control of image data, which is received through a communication unit, with a small delay.

    [0417] Specifically, an image frame, and memory input time and transmission frame rate information that serve as metadata, are stored in the memory storing the image frame. The display control unit selects the image that is output to the display unit on the basis of the elapsed time from the input time information. The waiting time, which is the elapsed time from the input time, and the buffering time are compared with each other in each of the queues, and the image frame corresponding to the newest queue among the queues in which the waiting time exceeds the buffering time is selected as the output image to the display unit. Furthermore, when there is a change in the transmission frame rate, the display rate of the display unit is changed in accordance with the change in the transmission frame rate.

    [0418] With the present configuration, a device and a method are provided that are capable of display control of image data, which is received through a communication unit, with a small delay.

    Reference Signs List



    [0419] 
    10 communication system
    20 client
    21 TV
    22 PC
    23 portable terminal
    30 server
    51 communication unit
    52 decoder
    53 output controller
    54 display unit
    55 controller
    71 memory
    72 display control unit
    101 communication unit
    102 decoder
    103 output controller
    104 display unit
    105 controller
    121 memory
    122 display control unit
    401 CPU
    402 ROM
    403 RAM
    404 bus
    405 input/output interface
    406 input unit
    407 output unit
    408 storage unit
    409 communication unit
    410 drive
    411 removable media



    Claims

    1. A client device comprising:

    a communication unit (101), a decoder (102), a controller (105), a memory (121), a display control unit (122) and a display unit (104); wherein

    the communication unit (101) is configured to receive encoded image data;

    the controller (105) is configured to input time information from a system clock and provide the time information to the decoder (102);

    the decoder (102) is configured to execute processing of decoding the encoded image data received by the communication unit (101), to store in the memory (121) image frames that have been already decoded and to store in addition in the memory (121) the time at which each image frame is input to the memory, as metadata;

    the memory (121) being configured to store two successive image frames in a preceding queue and a succeeding queue, respectively; and

    the display control unit (122) being configured to receive a vertical synchronization signal from the display unit (104) and to acquire an image frame stored in the memory (121) and output the acquired image frame to the display unit (104), at a timing corresponding to the vertical synchronization signal,

    wherein the display control unit (122) is configured to compare, in each queue, the waiting time that is the elapsed time from the time stored in the metadata associated with the image frame set in the queue with a buffering time, which is the time to completely store an image frame in a queue of the memory, to select, among queues in which the waiting time exceeds the buffering time, an image frame corresponding to a newest image frame and to output the selected image frame to the display unit (104).


     
    2. The client device according to claim 1, wherein when the display control unit (122) can not acquire next data for display from the memory, the display control unit (122) is configured to execute a continuous display of an image that is displayed and change a device state to a display update stopped state.
     
    3. The client device according to claim 2, wherein the display control unit (122) is configured, in the display update stopped state, to compare, in each queue, the waiting time with the buffering time, and to select, among queues in which the waiting time exceeds the buffering time, an image frame corresponding to a newest image frame as an output image to the display unit and
    wherein the display control unit (122) is configured to change the device state to a display update executing state.
     
    4. The client device according to any preceding claim, wherein the display control unit (122) is configured to execute processing of switching a device state from a display update executing state to a display update stopped state, or from the display update stopped state to the display update executing state according to an acquisition state of data for display from the memory.
     
    5. The client device according to claim 4, wherein when the device state is the display update stopped state, the display control unit (122) is configured to compare, in each queue, the waiting time with the buffering time, and to select, among queues in which the waiting time exceeds the buffering time, an image frame corresponding to a newest image frame as an output image to the display unit.
     
    6. The client device according to claim 1,
    wherein the decoder (102) is configured to store in the memory (121), in addition to the image frames, transmission frame rate information of the image frames as metadata, and
    wherein the display control unit (122) is configured to determine whether there has been a change in a transmission frame rate by acquiring the transmission frame rate information that is set in the image frame acquired from the memory, and when there has been a change, the display control unit (122) is configured to change a display rate of the display unit according to the change in the transmission frame rate.
     
    7. A method of controlling the client device of claim 1 comprising the steps of:

    receiving, by the communication unit (101), encoded image data; executing, by the decoder (102) processing of decoding the received encoded image data;

    storing, by the decoder (102), image frames that have already been decoded into the memory (121) and, in addition to the image frames, the time at which each image frame is input to the memory as metadata, said memory comprising a preceding queue and a succeeding queue for storing two successive image frames, respectively, and

    receiving, by the display control unit (122), a vertical synchronization signal from the display unit (104),

    acquiring, by the display control unit (122), an image frame stored in the memory and outputting said image frame to a display unit, at a timing corresponding to the received vertical synchronization signal,

    wherein the acquiring of an image frame comprises:

    comparing, by the display control unit (122), in each queue, the waiting time that is the elapsed time from the time stored in the metadata associated with the image frame set in the queue with a buffering time, which is the time to completely store an image frame in a queue of the memory, and

    selecting among queues in which the waiting time exceeds the buffering time, an image frame corresponding to a newest image frame and outputting the selected image frame to the display unit (104).


     
    8. A program comprising program code means for causing the client device of claim 1 to execute the steps of the method of claim 7 when said program is executed.
     


    Ansprüche

    1. Client-Einrichtung, umfassend:

    eine Kommunikationseinheit (101), einen Decoder (102), eine Steuereinrichtung (105), einen Speicher (121), eine Anzeigesteuereinheit (122) und eine Anzeigeeinheit (104); wobei

    die Kommunikationseinheit (101) dafür ausgelegt ist, codierte Bilddaten zu empfangen;

    die Steuereinrichtung (105) dafür ausgelegt ist, Zeitinformation von einer Systemuhr einzugeben und die Zeitinformation zu dem Decoder (102) bereitzustellen;

    der Decoder (102) dafür ausgelegt ist, die Verarbeitung des Decodierens der von der Kommunikationseinheit (101) empfangenen codierten Bilddaten durchzuführen, um Bildrahmen in dem Speicher (121) zu speichern, die bereits decodiert sind, und um zusätzlich im Speicher (121) die Zeit, zu der jeder Bildrahmen in den Speicher eingegeben wird, als Metadaten zu speichern;

    der Speicher (121) dafür ausgelegt ist, zwei aufeinanderfolgende Bildrahmen in einer vorstehenden Warteschlange bzw. einer nachfolgenden Warteschlange zu speichern; und

    die Anzeigesteuereinheit (122) dafür ausgelegt ist, ein vertikales Synchronisationssignal von der Anzeigeeinheit (104) und einen in dem Speicher (121) gespeicherten Bildrahmen zu erfassen und den erfassten Bildrahmen zu einem dem vertikalen Synchronisationssignal entsprechenden Zeitpunkt zu der Anzeigeeinheit (104) auszugeben;

    wobei die Anzeigesteuereinheit (122) dafür ausgelegt ist, in jeder Warteschlange die Wartezeit zu vergleichen, die die verstrichene Zeit ab der in den Metadaten gespeicherten Zeit ist, assoziiert mit dem in der Warteschlange eingestellten Bildrahmen, mit einer Pufferzeit, die die Zeit zum vollständigen Speichern eines Bildrahmens in einer Warteschlange des Speichers ist, um unter Warteschlangen, in denen die Wartezeit die Pufferzeit übersteigt, einen Bildrahmen auszuwählen, der einem neuesten Bildrahmen entspricht, und den ausgewählten Bildrahmen zu der Anzeigeeinheit (104) auszugeben.


     
    2. Client-Einrichtung nach Anspruch 1,
    wobei, wenn die Anzeigesteuereinheit (122) keine nächsten Daten zur Anzeige aus dem Speicher erfassen kann, die Anzeigesteuereinheit (122) dafür ausgelegt ist, eine kontinuierliche Anzeige eines Bilds auszuführen, das angezeigt wird, und einen Einrichtungsstatus zu einem Status gestoppter Anzeigeaktualisierung zu ändern.
     
    3. Client-Einrichtung nach Anspruch 2,
    wobei die Anzeigesteuereinheit (122) dafür ausgelegt ist, in dem Status gestoppter Anzeigeaktualisierung in jeder Warteschlange die Wartezeit mit der Pufferzeit zu vergleichen und unter Warteschlangen, in denen die Wartezeit die Pufferzeit übersteigt, einen Bildrahmen, der einem neuesten Bildrahmen entspricht, als ein Ausgabebild zu der Anzeigeeinheit auszuwählen, und
    wobei die Anzeigesteuereinheit (122) dafür ausgelegt ist, den Anzeigestatus zu einem Status der Anzeigeaktualisierungsausführung zu ändern.
     
    4. Client-Einrichtung nach einem der vorstehenden Ansprüche,
    wobei die Anzeigesteuereinheit (122) dafür ausgelegt ist, die Verarbeitung eines Umschaltens eines Einrichtungsstatus von einem Status der Anzeigeaktualisierungsausführung zu einem Status gestoppter Anzeigeaktualisierung oder von dem Status gestoppter Anzeigeaktualisierung zu dem Status der Anzeigeaktualisierungsausführung entsprechend einem Erfassungsstatus von Daten zur Anzeige aus dem Speicher durchzuführen.
     
    5. Client-Einrichtung nach Anspruch 4,
    wobei, wenn der Einrichtungsstatus der Status gestoppter Anzeigeaktualisierung ist, die Anzeigesteuereinheit (122) dafür ausgelegt ist, in jeder Warteschlange die Wartezeit mit der Pufferzeit zu vergleichen und unter Warteschlangen, in denen die Wartezeit die Pufferzeit übersteigt, einen Bildrahmen, der einem neuesten Bildrahmen entspricht, als ein Ausgabebild zu der Anzeigeeinheit auszuwählen.
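
    Claims 2 to 5 (Ansprüche 2 bis 5) above describe switching the device between a display-update-stopped state and a display-update-execution state according to whether data for display could be acquired from the memory. The sketch below shows that switching in deliberately simplified form; DisplayState and next_state are hypothetical names, and the per-queue waiting-time test is assumed to have been carried out before the call.

        from enum import Enum, auto

        class DisplayState(Enum):
            UPDATING = auto()   # display update execution state
            STOPPED = auto()    # display update stopped state

        def next_state(current: DisplayState, frame_acquired: bool) -> DisplayState:
            """Switch the device state according to whether the next frame for display
            could be acquired from the memory."""
            if frame_acquired:
                # A frame satisfying the waiting-time condition was selected:
                # (re-)enter the display-update-execution state.
                return DisplayState.UPDATING
            # Nothing new to display: keep showing the current image and stop updates.
            return DisplayState.STOPPED

        state = DisplayState.UPDATING
        for acquired in (True, False, False, True):   # acquisition results on successive v-syncs
            state = next_state(state, acquired)
            print(state.name)                         # UPDATING, STOPPED, STOPPED, UPDATING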
     
    6. Client-Einrichtung nach Anspruch 1,
    wobei der Decoder (102) dafür ausgelegt ist, in dem Speicher (121) zusätzlich zu den Bildrahmen Frame-Rate-Information zur Übertragung der Bildrahmen als Metadaten zu speichern, und
    wobei die Anzeigesteuereinheit (122) dafür ausgelegt ist, festzustellen, ob eine Änderung in einer Frame-Rate der Übertragung durch Erfassen der Frame-Rate-Information zur Übertragung, die in dem aus dem Speicher erfassten Bildrahmen eingestellt ist, eingetreten ist; und wobei, falls eine Änderung eingetreten ist, die Anzeigesteuereinheit (122) dafür ausgelegt ist, eine Anzeigerate der Anzeigeeinheit entsprechend der Änderung bei der Frame-Rate der Übertragung zu ändern.
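
    Claim 6 (Anspruch 6) stores the transmission frame rate as per-frame metadata and changes the display rate of the display unit when that frame rate changes. A minimal sketch of such tracking follows; DisplayRateTracker and its method are invented names, and the mechanism that actually reprograms the display is not modelled.

        from typing import Optional

        class DisplayRateTracker:
            """Follows the transmission frame rate stored as per-frame metadata and
            reports when the display rate of the display unit should be changed."""

            def __init__(self) -> None:
                self._current_fps: Optional[float] = None

            def on_frame_metadata(self, transmission_fps: float) -> Optional[float]:
                """Return the new display rate if the transmission frame rate changed,
                otherwise None."""
                if self._current_fps is None or transmission_fps != self._current_fps:
                    self._current_fps = transmission_fps
                    return transmission_fps   # the display unit should switch to this rate
                return None

        tracker = DisplayRateTracker()
        for fps in (60.0, 60.0, 30.0, 30.0, 60.0):    # frame-rate metadata of successive frames
            change = tracker.on_frame_metadata(fps)
            if change is not None:
                print(f"change display rate to {change} fps")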
     
    7. Verfahren zum Steuern der Client-Einrichtung nach Anspruch 1, umfassend die Schritte:

    Empfangen codierter Bilddaten durch die Kommunikationseinheit (101);

    Durchführen von Verarbeitung des Decodierens der empfangenen codierten Bilddaten durch den Decoder (102);

    Speichern von Bildrahmen, die bereits decodiert worden sind, durch den Decoder (102) im Speicher (121) und, zusätzlich zu den Bildrahmen, des Zeitpunkts, zu dem jeder Bildrahmen in den Speicher eingegeben wird, als Metadaten, wobei der Speicher eine vorstehende Warteschlange bzw. eine nachfolgende Warteschlange zum Speichern zweier aufeinanderfolgender Bildrahmen umfasst, und

    Empfangen eines vertikalen Synchronisationssignals von der Anzeigeeinheit (104) durch die Anzeigesteuereinheit (122),

    Erfassen, durch die Anzeigesteuereinheit (122), eines in dem Speicher gespeicherten Bildrahmens und Ausgeben des Bildrahmens zu einem dem empfangenen vertikalen Synchronisationssignal entsprechenden Zeitpunkt zu einer Anzeigeeinheit,

    wobei das Erfassen eines Bildrahmens umfasst:

    Vergleichen in jeder Warteschlange, durch die Anzeigesteuereinheit (122), der Wartezeit, die die verstrichene Zeit ab der in den Metadaten gespeicherten Zeit ist, assoziiert mit dem in der Warteschlange eingestellten Bildrahmen, mit einer Pufferzeit, die die Zeit zum vollständigen Speichern eines Bildrahmens in einer Warteschlange des Speichers ist, und

    Auswählen unter Warteschlangen, in denen die Wartezeit die Pufferzeit übersteigt, eines Bildrahmens, der einem neuesten Bildrahmen entspricht, und Ausgeben des ausgewählten Bildrahmens zu der Anzeigeeinheit (104).


     
    8. Programm, umfassend Programmcodemittel, um die Client-Einrichtung nach Anspruch 1 zu veranlassen, die Schritte des Verfahrens nach Anspruch 7 auszuführen, wenn das Programm ausgeführt wird.
     


    Revendications

    1. Dispositif client, comprenant :

    une unité de communication (101), un décodeur (102), un contrôleur (105), une mémoire (121), une unité de commande d'affichage (122) et une unité d'affichage (104) ; dans lequel

    l'unité de communication (101) est configurée pour recevoir des données d'image codées ;

    le contrôleur (105) est configuré pour entrer des informations de temps provenant d'une horloge système et pour fournir les informations de temps au décodeur (102) ;

    le décodeur (102) est configuré pour exécuter un traitement de décodage des données d'image codées reçues par l'unité de communication (101), pour stocker dans la mémoire (121) des trames d'image qui ont déjà été décodées et pour stocker de plus dans la mémoire (121) le moment où chaque trame d'image est entrée dans la mémoire, sous forme de métadonnées ;

    la mémoire (121) étant configurée pour stocker deux trames d'image successives dans une file d'attente précédente et une file d'attente suivante, respectivement ; et

    l'unité de commande d'affichage (122) étant configurée pour recevoir un signal de synchronisation verticale depuis l'unité d'affichage (104) et pour acquérir une trame d'image stockée dans la mémoire (121) et pour sortir la trame d'image acquise sur l'unité d'affichage (104) avec une temporisation correspondant au signal de synchronisation verticale,

    dans lequel l'unité de commande d'affichage (122) est configurée pour comparer, dans chaque file d'attente, le temps d'attente, qui est le temps écoulé depuis le moment stocké dans les métadonnées associées à la trame d'image placée dans la file d'attente, à un moment de mise en mémoire tampon, qui est le temps pour stocker complètement une trame d'image dans une file d'attente de la mémoire, pour sélectionner, parmi des files d'attente dans lesquelles le temps d'attente dépasse le temps de mise en mémoire tampon, une trame d'image correspondant à une trame d'image la plus récente et pour sortir la trame d'image sélectionnée sur l'unité d'affichage (104).


     
    2. Dispositif client selon la revendication 1,
    dans lequel, lorsque l'unité de commande d'affichage (122) ne peut pas acquérir de données suivantes pour l'affichage à partir de la mémoire, l'unité de commande d'affichage (122) est configurée pour exécuter un affichage continu d'une image qui est affichée et pour modifier un état de dispositif en un état de mise à jour d'affichage arrêtée.
     
    3. Dispositif client selon la revendication 2,
    dans lequel l'unité de commande d'affichage (122) est configurée, à l'état de mise à jour d'affichage arrêtée, pour comparer, dans chaque file d'attente, le temps d'attente au temps de mise en mémoire tampon, et pour sélectionner, parmi des files d'attente dans lesquelles le temps d'attente dépasse le temps de mise en mémoire tampon, une trame d'image correspondant à une trame d'image la plus récente comme une image de sortie sur l'unité d'affichage, et
    dans lequel l'unité de commande d'affichage (122) est configurée pour modifier l'état de dispositif en un état d'exécution de mise à jour d'affichage.
     
    4. Dispositif client selon l'une quelconque des revendications précédentes,
    dans lequel l'unité de commande d'affichage (122) est configurée pour exécuter un traitement consistant à commuter l'état de dispositif d'un état d'exécution de mise à jour d'affichage sur un état de mise à jour d'affichage arrêtée, ou de l'état de mise à jour d'affichage arrêtée sur l'état d'exécution de mise à jour d'affichage, selon un état d'acquisition de données pour l'affichage à partir de la mémoire.
     
    5. Dispositif client selon la revendication 4,
    dans lequel, quand l'état de dispositif est l'état de mise à jour d'affichage arrêtée, l'unité de commande d'affichage (122) est configurée pour comparer, dans chaque file d'attente, le temps d'attente au temps de mise en mémoire tampon, et pour sélectionner, parmi des files d'attente dans lesquelles le temps d'attente dépasse le temps de mise en mémoire tampon, une trame d'image correspondant à une trame d'image la plus récente comme une image de sortie sur l'unité d'affichage.
     
    6. Dispositif client selon la revendication 1,
    dans lequel le décodeur (102) est configuré pour stocker dans la mémoire (121), en plus des trames d'image, des informations de fréquence de trame de transmission des trames d'image sous forme de métadonnées, et
    dans lequel l'unité de commande d'affichage (122) est configurée pour déterminer s'il y a eu une modification d'une fréquence de trame de transmission par l'acquisition des informations de fréquence de trame de transmission qui sont réglées dans la trame d'image acquise à partir de la mémoire, et en cas de modification, l'unité de commande d'affichage (122) est configurée pour modifier une fréquence d'affichage de l'unité d'affichage selon la modification de la fréquence de trame de transmission.
     
    7. Procédé de commande du dispositif client selon la revendication 1, comprenant les étapes consistant à :

    recevoir, par l'unité de communication (101), des données d'image codées ;

    exécuter, par le décodeur (102), un traitement consistant à décoder les données d'image codées reçues ;

    stocker, par le décodeur (102), des trames d'image qui ont déjà été décodées dans la mémoire (121), et en plus des trames d'image, le moment où chaque trame d'image est entrée dans la mémoire sous forme de métadonnées, ladite mémoire comprenant une file d'attente précédente et une file d'attente suivante pour stocker deux trames d'image successives, respectivement, et

    recevoir, par l'unité de commande d'affichage (122), un signal de synchronisation verticale de l'unité d'affichage (104),

    acquérir, par l'unité de commande d'affichage (122), une trame d'image stockée dans la mémoire et sortir ladite trame d'image sur une unité d'affichage, à une temporisation correspondant au signal de synchronisation verticale reçu,

    dans lequel l'acquisition d'une trame d'image comprend :

    la comparaison, par l'unité de commande d'affichage (122), dans chaque file d'attente, du temps d'attente, qui est le temps écoulé depuis le moment stocké dans les métadonnées associées à la trame d'image placée dans la file d'attente, à un moment de mise en mémoire tampon, qui est le temps pour stocker complètement une trame d'image dans une file d'attente de la mémoire, et

    sélectionner, parmi des files d'attente dans lesquelles le temps d'attente dépasse le temps de mise en mémoire tampon, une trame d'image correspondant à une trame d'image la plus récente, et sortir la trame d'image sélectionnée sur l'unité d'affichage (104).


     
    8. Programme comprenant des moyens de code programme pour amener le dispositif client selon la revendication 1 à exécuter les étapes du procédé selon la revendication 7 lorsque ledit programme est exécuté.
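
    To tie the claimed steps together, the following self-contained sketch drives the selection rule of claims 1 and 7 from a simulated sequence of vertical synchronization signals. The 60 Hz display rate, the buffering time of one frame interval, and all names and timing values are assumptions made for illustration only.

        from dataclasses import dataclass
        from typing import Optional

        VSYNC_INTERVAL = 1.0 / 60.0   # assumed 60 Hz display
        BUFFERING_TIME = 1.0 / 60.0   # assumed time to completely store one frame in a queue

        @dataclass
        class StoredFrame:
            frame_id: int
            input_time: float         # metadata written by the decoder

        def on_vsync(preceding: Optional[StoredFrame],
                     succeeding: Optional[StoredFrame],
                     now: float) -> Optional[StoredFrame]:
            """Display-control step executed at each vertical synchronization signal:
            pick the newest frame whose waiting time exceeds the buffering time."""
            ready = [s for s in (preceding, succeeding)
                     if s is not None and (now - s.input_time) > BUFFERING_TIME]
            return max(ready, key=lambda s: s.input_time) if ready else None

        # Simulated timeline: the decoder has stored frames 0 and 1; run three v-syncs.
        preceding = StoredFrame(frame_id=0, input_time=0.000)
        succeeding = StoredFrame(frame_id=1, input_time=0.020)
        for i in range(3):
            now = (i + 1) * VSYNC_INTERVAL
            chosen = on_vsync(preceding, succeeding, now)
            print(f"v-sync at {now:.3f}s ->",
                  f"display frame {chosen.frame_id}" if chosen else "keep current image")

    In this simulated timeline the first v-sync keeps the current image because neither frame has yet waited longer than the buffering time, the second outputs frame 0, and the third outputs the newer frame 1.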
     




    Drawing
    REFERENCES CITED IN THE DESCRIPTION



    This list of references cited by the applicant is for the reader's convenience only. It does not form part of the European patent document. Even though great care has been taken in compiling the references, errors or omissions cannot be excluded and the EPO disclaims all liability in this regard.

    Patent documents cited in the description