(11)EP 2 859 742 B1

(12)EUROPEAN PATENT SPECIFICATION

(45)Mention of the grant of the patent:
29.04.2020 Bulletin 2020/18

(21)Application number: 12750349.8

(22)Date of filing:  06.08.2012
(51)International Patent Classification (IPC): 
H04W 4/12(2009.01)
H04L 29/06(2006.01)
H04W 4/18(2009.01)
(86)International application number:
PCT/EP2012/065374
(87)International publication number:
WO 2014/023330 (13.02.2014 Gazette  2014/07)

(54)

METHOD FOR PROVIDING A MULTIMEDIA MESSAGE SERVICE

VERFAHREN ZUR BEREITSTELLUNG EINES MULTIMEDIA-NACHRICHTENDIENSTES

PROCÉDÉ DE FOURNITURE D'UN SERVICE DE MESSAGERIE MULTIMÉDIA


(84)Designated Contracting States:
AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

(43)Date of publication of application:
15.04.2015 Bulletin 2015/16

(73)Proprietor: Huawei Technologies Co., Ltd.
Longgang District Shenzhen, Guangdong 518129 (CN)

(72)Inventors:
  • BOUAZIZI, Imed
    80992 Munich (DE)
  • CORDARA, Giovanni
    80992 Munich (DE)
  • KONDRAD, Lukasz
    80992 Munich (DE)

(74)Representative: Kreuz, Georg Maria 
Huawei Technologies Duesseldorf GmbH Riesstraße 25
80992 München (DE)


(56)References cited:
WO-A1-2004/045230
WO-A2-02/43414
  
  • NOKIA CORPORATION: "Mobile 3D video: proposed specification text for MVC in MMS", 3GPP DRAFT; S4-120668, 3RD GENERATION PARTNERSHIP PROJECT (3GPP), MOBILE COMPETENCE CENTRE ; 650, ROUTE DES LUCIOLES ; F-06921 SOPHIA-ANTIPOLIS CEDEX ; FRANCE, vol. SA WG4, no. Erlangen, Germany; 20120521 - 20120525, 16 May 2012 (2012-05-16), XP050639347, [retrieved on 2012-05-16]
  • "3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; Mobile stereoscopic 3D video (Release 11)", 3GPP STANDARD; 3GPP TR 26.905, 3RD GENERATION PARTNERSHIP PROJECT (3GPP), MOBILE COMPETENCE CENTRE ; 650, ROUTE DES LUCIOLES ; F-06921 SOPHIA-ANTIPOLIS CEDEX ; FRANCE, vol. SA WG4, no. V2.0.1, 16 June 2012 (2012-06-16), pages 1-53, XP050580690, [retrieved on 2012-06-16]
  
Note: Within nine months from the publication of the mention of the grant of the European patent, any person may give notice to the European Patent Office of opposition to the European patent granted. Notice of opposition shall be filed in a written reasoned statement. It shall not be deemed to have been filed until the opposition fee has been paid. (Art. 99(1) European Patent Convention).


Description

BACKGROUND OF THE INVENTION



[0001] The present invention relates to a method for providing a multimedia message service from a server or relay to a user agent in a multimedia network, in particular a method for providing a multimedia message service (MMS) from an MMS Server/Relay B to an MMS User Agent B according to a 3GPP (Third Generation Partnership Project) Multimedia Message Service specification.

[0002] Depth perception is the visual ability to perceive the world in three dimensions (3D) and the distance of an object. Stereoscopic 3D video refers to a technique for creating the illusion of depth in a scene by presenting two offset images of the scene separately to the left and right eye of the viewer. Stereoscopic 3D video conveys the 3D perception of the scene by capturing the scene via two separate cameras, which results in objects of the scene being projected to different locations in the left and right images.

[0003] By capturing the scene via more than two separate cameras, a multi-view 3D video is created. Depending on the chosen pair of the captured images, different perspectives (views) of the scene can be presented. Multi-view 3D video allows a viewer to interactively control the viewpoint. Multi-view 3D video can be seen as a multiplex of a number of stereoscopic 3D videos representing the same scene from different perspectives.

[0004] The displacement of a pixel (an object) from the right view to the left view is called disparity. The value of the disparity defines the perceived depth. As an example, when the disparity is equal to 0, the related pixel (object) is perceived at the image plane (i.e. the screen); with negative disparity the related pixel (object) appears to the viewer in front of the screen; and with positive disparity the related pixel (object) appears to the viewer behind the screen.

[0005] The frame-compatible packing format consists in sub-sampling the two views which compose a stereoscopic 3D video and packing them together in order to produce a video signal compatible with a 2D frame infrastructure.

[0006] In a typical operation mode, the two stereoscopic frames relating to the same time instant are packed into a spatial arrangement having the same resolution as a 2D compatible view. The spatial packing arrangement typically uses a side-by-side or top-bottom format, where a down-sampling process is applied to each view: in this way, each view has a spatial resolution which is halved with respect to the one supported by the corresponding 2D view.

[0007] In order to avoid the loss of definition introduced by the frame-compatible packing formats, it is possible to transmit both views at full resolution. The most common format is frame packing, in which the left and right views are temporally interleaved. In this way, the two views have double the resolution compared to the corresponding frame-compatible packing format.

[0008] In frame-compatible stereoscopic video, the spatial packing of a stereo pair into a single frame is performed at the encoder side, and the frames so obtained are encoded as a single view. The output frames produced by the decoder contain the constituent frames of a stereo pair. The encoder side indicates the used frame packing format by specifying the frame packing arrangement in Supplemental Enhancement Information (SEI) messages as specified in the H.264/AVC standard, which are conveyed inside the H.264/AVC encoded 3D video bitstream. The SEI messages about the frame packing format are therefore extracted and parsed during the decoding process of the 3D video bitstream. The decoder side decodes the frame conventionally, unpacks the two constituent frames from the output frames of the decoder, performs up-sampling in order to revert the encoder-side down-sampling process and renders the constituent frames on the 3D display.
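
For illustration only (not part of the specification), the following Python sketch shows the side-by-side case of the packing and unpacking steps described above, assuming the two views are given as numpy arrays; the naive column-dropping down-sampling and nearest-neighbour up-sampling are simplifying assumptions.

```python
# Illustrative sketch: side-by-side frame-compatible packing, assuming the
# left and right views are numpy arrays of shape (height, width, channels).
import numpy as np

def pack_side_by_side(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Horizontally down-sample each view by 2 and pack them into one frame
    that keeps the original 2D resolution."""
    left_half = left[:, ::2]      # naive horizontal down-sampling (every 2nd column)
    right_half = right[:, ::2]
    return np.concatenate([left_half, right_half], axis=1)

def unpack_side_by_side(frame: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Split the decoded frame into its constituent views and up-sample them
    back to full width (reverting the encoder-side down-sampling)."""
    half = frame.shape[1] // 2
    left_half, right_half = frame[:, :half], frame[:, half:]
    # simple nearest-neighbour up-sampling; a real renderer would interpolate
    return np.repeat(left_half, 2, axis=1), np.repeat(right_half, 2, axis=1)

# Example: two 120x160 RGB views packed into a single 120x160 frame
left = np.zeros((120, 160, 3), dtype=np.uint8)
right = np.ones((120, 160, 3), dtype=np.uint8)
packed = pack_side_by_side(left, right)
assert packed.shape == (120, 160, 3)
```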

[0009] In temporal interleaving, the video is encoded at double the frame rate of the original video. Each pair of subsequent pictures constitutes a stereo pair (left and right view). The encoder side indicates the used temporal interleaving format using Supplemental Enhancement Information (SEI) messages as specified in the H.264/AVC standard and conveys them inside the H.264/AVC encoded 3D video bitstream. The SEI messages about the frame packing format are therefore extracted and parsed during the decoding process of the 3D video bitstream. The rendering of the time-interleaved stereoscopic video is typically performed at the high frame rate, where active (shutter) glasses are used to block the incorrect view for each eye. This requires accurate synchronization between the glasses and the screen.
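
As a further illustration (again not part of the specification), a temporally interleaved sequence can be split into its constituent views by picture parity; which parity corresponds to the left view is signaled by the SEI message, so the even/odd assignment below is only an assumption.

```python
# Illustrative sketch: de-interleaving a decoded time-interleaved sequence
# (double frame rate) into left and right views. The mapping of even/odd
# pictures to left/right is an assumption; in practice it is given by the
# frame packing arrangement SEI message.
def deinterleave(frames: list) -> tuple[list, list]:
    left = frames[0::2]    # even-indexed pictures -> left view (assumed)
    right = frames[1::2]   # odd-indexed pictures -> right view (assumed)
    return left, right
```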

[0010] Multi-view Video Coding (MVC) was standardized as an extension (annex) to the H.264/AVC standard. In MVC, the views from different cameras are encoded into a single bitstream that is backwards compatible with single-view H.264/AVC. One of the views is encoded as "base view". A single-view H.264/AVC decoder can decode and output the base view of an MVC bitstream at different profiles (e.g. constrained baseline or progressive high profile). MVC introduces inter-view prediction between views. MVC is able to compress stereoscopic video in a backwards compatible manner without compromising the view resolutions. If the server is aware of the User Equipment (UE) capabilities, it can omit sending the view components of the non-base view to a device that does not support 3D or does not have enough bitrate to deliver both views.

[0011] MIME is a standard originally developed for including content in email messages in addition to the plain text body of the email. MIME can be used to bundle separate files together. An extension to MIME known as "MIME Encapsulation of Aggregate Documents" according to RFC 2557 allows a sender to indicate to a client that all of the parts of the message are related to one another and may refer to one another. An example of a MIME encapsulated message 600 as specified in RFC 2557 is illustrated in Fig. 6.

[0012] The <Content-Type> indicates to the receiving client that the separate parts of the message are related and may refer to one another. The <boundary> common to all multipart messages indicates to the client what string will separate each of the parts of the message. Between each of the boundaries are the messages 601, 602, 603 themselves. Each message 601, 602, 603 also contains <Content-Type> describing the type of the message. The first message 601 shows only excerpts from the HTML. The second 602 and third 603 messages omit the actual bodies of the images and just show the information relevant to their aggregation in a multipart/related message.

[0013] An HTML message can refer to an included image by either its specified <Content-ID> (cid) or <Content-Location>. These are both ways of identifying parts of the message uniquely so that other parts of the message can refer to them.
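
As an illustrative sketch (not the exact message of Fig. 6), such a multipart/related message with cid references can be assembled with Python's standard email package; all identifiers, URLs and payloads below are placeholders.

```python
# Illustrative sketch: building a multipart/related MIME message per RFC 2557
# with Python's standard email package. All identifiers and bodies are
# placeholder values.
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.image import MIMEImage

related = MIMEMultipart("related")          # Content-Type: multipart/related
html = MIMEText('<html><body><img src="cid:img1@example.com"></body></html>', "html")
related.attach(html)                        # root part, refers to the image by cid

image = MIMEImage(b"\x89PNG...placeholder...", _subtype="png")
image.add_header("Content-ID", "<img1@example.com>")             # referenced via cid:
image.add_header("Content-Location", "http://example.com/img1.png")
related.attach(image)

print(related.as_string()[:200])  # boundary and part headers, as in Fig. 6
```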

[0014] A client reading the example HTML that knows the HTML is part of a multipart/related message looks first within the message parts for that URL before looking to the network.

[0015] UAProf according to "User Agent Profile version 2.0", Open Mobile Alliance™ is an Open Mobile Alliance (OMA) specification dealing with the representation and end-to-end flow of terminal capabilities and preference information (CPI). UAProf uses the framework defined by the World Wide Web Consortium's (W3C's) Composite Capability/Preferences Profile (CC/PP) to represent capabilities and preference information (CPI) and is defined using the Resource Description Framework (RDF) schema and vocabulary. The specification also provides details of how the information should be transmitted to servers using the Wireless Session Protocol (WSP) and the Hypertext Transfer Protocol (HTTP). UAProf is partitioned into the following categories of descriptors: hardware platform, software platform, browser user agent, network characteristics, and Wireless Application Protocol (WAP) characteristics. The terminal's UAProf description may provide URLs where the CPI can be retrieved on the Web or may explicitly provide the CPI. The first option is referred to as static UAProf and allows a significant reduction of the bandwidth required to transmit CPI. The second option, referred to as dynamic UAProf, allows dynamic capabilities to be sent (e.g. changing network conditions).

[0016] The server is responsible for resolving UAProf information. This feature makes dynamic UAProf rather complex to implement on both terminals and servers. For instance, on the terminal side, static UAProf implementation only requires the terminal to transmit a constant string representing its capabilities or, more often, URL(s) where the server can retrieve them. Such a string can be hardwired in the terminal. In contrast, dynamic UAProf would require the terminal to monitor changes in its capabilities (e.g. new software installed providing new MIME types support), and generate, track and send differences from a reference profile to the server. This explains why most manufacturers at present have decided to support only static UAProf.
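
For illustration, a static UAProf exchange can be as simple as attaching a constant profile URL to a request; the sketch below assumes the x-wap-profile request header of the UAProf HTTP binding, and all URLs are placeholders.

```python
# Minimal sketch of static UAProf over HTTP: the terminal only sends a URL
# pointing to its capability profile; the server resolves the profile and
# adapts the content accordingly. URLs below are placeholders.
import urllib.request

req = urllib.request.Request(
    "http://mms.example.com/retrieve?msg-id=abc123",   # placeholder retrieval URI
    headers={
        # static UAProf: a constant, hardwired profile reference
        "x-wap-profile": "http://device-vendor.example.com/profiles/phone-x.rdf",
    },
)
# resp = urllib.request.urlopen(req)   # uncomment to perform the request
```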

[0017] The use of RDF enables an extensibility mechanism for CC/PP-based schemes that addresses the evolution of new types of devices and applications. The 3GPP PSS base vocabulary is an extension to UAProf and is defined as an RDF schema. A device capability profile in 3GPP is an RDF document that follows the structure of the CC/PP framework and the CC/PP application UAProf. Attributes are used to specify device capabilities and preferences. A set of attribute names, permissible values and semantics constitute a CC/PP vocabulary, which is defined by an RDF schema. For PSS, the UAProf vocabulary is reused and an additional PSS specific vocabulary is defined. The syntax of the attributes is defined in the vocabulary schema, but also, to some extent, the semantics. A PSS device capability profile is an instance of the schema (UAProf and/or the PSS specific schema).

[0018] Synchronized Multimedia Integration Language (SMIL) according to "http://www.w3.org/TR/2005/REC-SMIL2-20050107/" defines an XML-based language aiming at describing interactive multimedia presentations. Using SMIL, the temporal behavior of a multimedia presentation, the association of hyperlinks with media objects, and the layout of the presentation on a screen can be described.

[0019] SMIL is defined as a set of markup modules, which define the semantics and XML syntax for certain areas of SMIL functionality. SMIL defines ten modules: Animation Modules, Content Control Modules, Layout Modules, Linking Modules, Media Object Modules, Meta information Module, Structure Module, Timing and Synchronization Modules, Time Manipulations Module and Transition Effects Modules.

[0020] The 3rd Generation Partnership Project (3GPP) defined a 3GPP SMIL Language Profile in 3GPP TS 26.246: "Transparent end-to-end packet switched streaming service (PSS); 3GPP SMIL Language Profile", also referred to as "3GPP PSS SMIL Language Profile" or just "3GPP SMIL". The 3GPP SMIL Language Profile is based on SMIL 2.0 Basic Profile according to "http://www.w3.org/TR/2005/REC-SMIL2-20050107/ ".

[0021] 3GPP Multimedia Messaging Service (MMS) specifies a set of requirements enabling the provision of non-real time multimedia messaging service, seen primarily from the subscriber's and service providers' points of view. 3GPP MMS allows non real-time transmissions for different types of media including such functionality as: multiple media elements per single message, individual handling of message elements, different delivery methods for each message element, negotiating different terminal and network Multimedia Message (MM) capabilities, notification and acknowledgement of MM related events and personalized MMS configuration.

[0022] Thus 3GPP MMS enables a unified application which integrates the composition, storage, access, and delivery of different kinds of media, e.g. text, voice, image or video in combination with additional mobile requirements. The 3GPP MMS uses 3GPP SMIL Language Profile according to 3GPP TS 26.246: "Transparent end-to-end packet switched streaming service (PSS); 3GPP SMIL Language Profile" for media synchronization and scene description. Fig. 7 shows a schematic diagram of synchronized multimedia integration language (SMIL) 700 for an MMS message.

[0023] The SMIL message is enclosed in <smil></smil> tags and the document contains head 701 and body 703 sections. The head section 701 contains information that applies to the entire message. The meta fields such as "title" and "author" are not mandatory. The layout section <layout></layout> within the head section 701 specifies the master layout for all the slides in the message. In the example depicted in Fig. 7, the slide will be displayed 160 pixels wide and 120 pixels high. The layout, in the example of Fig. 7, is further divided into two smaller regions, "Image" (or "Video") and "Text". The body section 703 describes the actual slides in the message. These slides are denoted with the <par> tag. All the elements within this tag are to be displayed simultaneously. The dur attribute of each slide gives the duration of the slide in the slide show. In the example of Fig. 7, each slide contains three elements: one to be displayed in the image (or video) region, one to be displayed in the text region and an audio element that will be played when the slide is viewed.
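
The head/layout/body/par structure described above can be illustrated with a short, hypothetical SMIL document generated in Python; region names, sizes and durations are placeholder values and do not reproduce Fig. 7 exactly.

```python
# Illustrative sketch of the SMIL structure described above (values are
# placeholders, not the exact document of Fig. 7).
import xml.etree.ElementTree as ET

smil = ET.Element("smil")
head = ET.SubElement(smil, "head")
layout = ET.SubElement(head, "layout")
ET.SubElement(layout, "root-layout", width="160", height="120")
ET.SubElement(layout, "region", id="Image", width="160", height="100", top="0", left="0")
ET.SubElement(layout, "region", id="Text", width="160", height="20", top="100", left="0")

body = ET.SubElement(smil, "body")
slide = ET.SubElement(body, "par", dur="10s")     # elements shown simultaneously
ET.SubElement(slide, "img", src="cid:image1", region="Image")
ET.SubElement(slide, "text", src="cid:text1", region="Text")
ET.SubElement(slide, "audio", src="cid:audio1")

print(ET.tostring(smil, encoding="unicode"))
```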

[0024] The MMS network architecture 800 according to 3GPP TS 23.140: "Multimedia Messaging Service (MMS); Functional description; Stage 2", which consists of all the elements required for providing a complete MMS to a user, is shown in Figure 8.

[0025] If MMS User Agent A 811 and MMS User Agent B 831 belong to the same network, then the System A 801 and System B 803 components presented in Figure 8 can be the same entities. At the heart of the 3GPP MMS architecture 800, the MMS Relay/Server 815 is located. The MMS Relay/Server 815, among others, may provide the following functionalities according to 3GPP TS 23.140: "Multimedia Messaging Service (MMS); Functional description; Stage 2": receiving and sending multimedia messages (MM); MM notification to the MMS User Agent 811; temporary storage 817 of messages; negotiation of terminal capabilities; transport of application data; personalizing MMS based on user profile information; MM deletion based on user profile or filtering information; media type conversion; and media format conversion.

[0026] The MMS User Agent (UA) 811 resides on user equipment (UE) or on an external device connected to a UE. It is an application layer function that provides the users with the ability to view, compose and handle MMs (e.g. submitting, receiving, deleting of MMs).

[0027] The MMS Value Added Service (VAS) Applications 823 offer Value Added Services to MMS users. There could be several MMS VAS Applications 823 included in or connected to MMS architecture 800.

[0028] The MMS User Databases element 819 may comprise one or more entities that contain user-related information such as subscription and configuration (e.g. UAProf).

[0029] An MMS User Agent A 811 depicted in Fig. 8 sends a multimedia message 900 as illustrated in Fig. 9 by submitting the message 900 to its home MMS Server/Relay A 815. Multiple media elements shall be combined into a composite single MM using the MIME multipart format as defined in IETF RFC 2046: "Multipurpose Internet Mail Extensions (MIME) Part Two: Media Types". The media type of a single MM element shall be identified by its appropriate MIME type, whereas the media format shall be indicated by its appropriate MIME subtype. A message must have the address of the recipient and a MIME content type. Several other parameters may be set for a message, including the desired time of expiry for the message and the message priority. Upon reception of a message 900, the recipient home MMS Server/Relay B 835 assigns a message identification to the message 900. MMS Server/Relay B 835 may also store a copy of the message 900, and then routes the message 900 towards the recipient, MMS User Agent B 831. Fig. 9 is an example of a multipart MIME message used to transmit an MMS from MMS User Agent A 811 to MMS Server/Relay A 815. The message 900 comprises video content 901, audio content 903 and text content 905.
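
For illustration of the multipart structure on the receiving side, the sketch below (an assumption-based example, not part of the specification) walks the MIME parts of such an MM and lists the media type and format of each element.

```python
# Illustrative sketch: walking the multipart MIME structure of an MM such as
# the one in Fig. 9 and listing each element's MIME type (media type) and
# subtype (media format). The raw message bytes are a placeholder input.
from email import message_from_bytes

def list_mm_elements(raw_mm: bytes) -> list[tuple[str, str]]:
    msg = message_from_bytes(raw_mm)
    elements = []
    for part in msg.walk():
        if part.is_multipart():
            continue                                     # skip the container itself
        elements.append((part.get_content_maintype(),    # e.g. "video"
                         part.get_content_subtype()))    # e.g. "3gpp"
    return elements
```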

[0030] Fig. 10 shows a message sequence diagram between an MMS Server/Relay B 1001 corresponding to the MMS Server/Relay B 835 depicted in Fig. 8 and an MMS User Agent B 1003 corresponding to the MMS User Agent B 831 depicted in Fig. 8 according to the technical specification 3GPP TS 23.140. Upon reception of a message such as a message 900 depicted in Fig. 9, the recipient MMS Server/Relay B 1001 verifies the recipient's (MMS User Agent B 1003) profile (UAProf) and generates a notification to the recipient MMS User Agent B 1003, M-Notification.ind 1011. It also stores the message at least until one of the following events happens: the associated time of expiry is reached, the message is delivered, the recipient MMS User Agent B 1003 requests the message to be forwarded, the message is rejected. When the recipient MMS User Agent B 1003 receives a notification 1011, it uses the message reference (Uniform Resource Identifier, uri) in the notification 1011 to reject or retrieve the message, either immediately or at a later time, either manually or automatically, as determined by the operator configuration and the user profile. The MMS is retrieved using HTTP GET request message 1012 with the signaled uri.

[0031] Within a request for delivery of a message 1012, the recipient MMS User Agent B 1003 can indicate its capabilities, e.g. a list of supported media types and media formats indicating its UAProf, for the recipient MMS Server/Relay B 1001. When a delivery request 1012 is received, the recipient MMS Server/Relay B 1001 uses the information about the capabilities of the recipient MMS User Agent B 1003 to prepare the message for delivery to the recipient MMS User Agent B 1003. This preparation may involve the deletion or adaptation of unsupported media types and media formats. MMS Server/Relay B 1001 can also perform content adaptation based on the capabilities indicated by UAProf information of a recipient device, MMS User Agent B 1003, stored in the network.

[0032] In OMA-TS-MMS-CONF-V1_3-20110913-A, "MMS Conformance Document", Open Mobile Alliance™, the issues that need to be addressed in order to ensure interoperability of MMS functionalities between terminals produced by different manufacturers are identified. According to the document, MMS User Agent 1003 shall support UAProf according to "User Agent Profile version 2.0", Open Mobile Alliance™ for MMS User Agent capability negotiation. Similarly, MMS Server/Relay 1001 shall support UAProf for MMS User Agent capability negotiation.

[0033] Interoperability is essential for the user experience and the success of the service. When sending messages, users expect the message to reach its destination and to be properly presented. Otherwise, users stop using the service because they do not trust it. Operators demand interoperability since they should not charge for an MM if it cannot be delivered and presented in a reasonably acceptable way to the recipient.

[0034] Currently, MMS Server/Relay B 1001 performs content adaptation based solely on the MMS recipient's UAProf information, without consulting the device/user, according to the message sequence depicted in Fig. 10. In this situation, a device that does not support 3D but is connected to an external 3D display, and as such is able to display 3D video correctly, is not able to receive the 3D video content of an MMS. This is because the external device is not included in the UAProf information of the device, and MMS Server/Relay B 1001 always performs content adaptation and converts MMS 3D content to 2D content. Similarly, the opposite situation, where a 3D capable device wants to limit the used bandwidth, for example, and thus downloads only the 2D version of an MMS, is also not possible.

[0035] As mentioned above, 3D video can be coded in various coding formats. Frame-compatible H.264/AVC and temporally interleaved H.264/AVC use the traditional AVC file format, where information about the stereo arrangement is carried in an SEI message inside the encoded 3D video bitstream. Due to this, MMS Server/Relay B has to perform additional processing on the encoded bitstream, i.e., decode the bitstream to extract the information about the 3D video content of the MMS in order to perform appropriate content adaptation. Multi-view video coding H.264/MVC, on the other hand, uses extensions of the AVC file format and separate signaling in metadata, i.e. outside the encoded 3D video bitstream. Consequently, MMS Server/Relay B does not have to perform additional decoding to extract the information, and compared with the case of H.264/AVC encoded 3D video content, the additional complexity is reduced.

[0036] Delivering 3D video in a frame-compatible or temporally interleaved frame packing format using the current version of the 3GPP MMS specification ensures that a UE will be able to decode the bitstreams correctly, provided that it has the corresponding decoding capability. However, the specification does not ensure that a UE renders the 3D video correctly. For instance, a UE which is not aware of the SEI message indicating that a bitstream represents frame-compatible 3D or temporally interleaved 3D simply renders the video frames as one 2D frame with two side-by-side (top-bottom) views or as consecutive 2D frames representing two separate views of the 3D video. This consequently leads to a degraded user experience. Currently, neither UAProf nor the device capability profile in 3GPP as mentioned above provide information about the 3D rendering capabilities of a device, i.e. about the frame packing formats supported by a device. As a consequence, an MMS Server/Relay B does not have the full information to perform proper content adaptation and to ensure device interoperability.
Patent application WO 2004045230 A1 provides a multimedia messaging service (MMS), wherein a streamable media component of a multimedia message is streamed to a user agent. The streamable media component is represented with a descriptor pointing to a location from which session description data (SDD) necessary for establishing a streaming session for the streamable media component can be obtained. The SDD is generated and delivered after the user agent has been notified of the multimedia message or, even more advantageously, only if the user agent requests the retrieval of the multimedia message. The SDD generation complies with the capabilities and/or preferences of the user agent requesting the retrieval of the multimedia message. The capabilities and/or preferences can be obtained using the retrieve request or from a streaming session setup message to ensure that they correspond to the user agent that will actually present the multimedia message.
Patent application WO 2002043414 A2 provides a multimedia messaging method comprising the steps of: receiving content from a sender addressed to one or more recipients; accessing a database comprising recipient data describing multimedia reception capabilities and/or reception preferences for at least one recipient; forming, in accordance with said recipient data, a notification message containing information that said media content is available to be streamed to the addressed recipient(s); and transmitting the notification message to the addressed recipient(s). A corresponding network entity, communication system and computer program are also described.

SUMMARY OF THE INVENTION



[0037] It is the object of the invention to improve interoperability of a user equipment in a network delivering 3D video, in particular in a network according to the 3GPP MMS specification.

[0038] This object is achieved by the features of the independent claims. Further implementation forms are apparent from the dependent claims, the description and the figures.

[0039] The invention is based on the finding that by using a new method for the delivery of the 3GPP Multimedia Message Service, in particular a new signaling between the MMS Server/Relay B and the MMS User Agent B, interoperability is improved. This new method is based on providing multiple alternative choices or options of the MMS 3D content, e.g. 2D, 3D, both, etc., by the MMS Server/Relay B to the MMS User Agent B before the content adaptation is performed. Due to this new method, the content adaptation is not restricted to the UAProf information but also takes into account the end user preferences. Moreover, by applying new signaling information about the post-decoder requirements of the 3D video content included in an MMS, interoperability is improved. The signaling information is transported in metadata, that is, outside the encoded 3D video bitstream. Due to the additional information, a number of multimedia message (MM) adaptation mechanisms for solving the 3D MMS interoperability problems can be introduced on the MMS Server/Relay B or the recipient MMS User Agent B, thereby improving interoperability of the user equipment with the network. Further, new vocabulary added to UAProf, i.e. an extension of the PSS vocabulary, allows MMS Server/Relay B to correctly perform content adaptation according to the post-decoder capabilities of a 3D capable device.

[0040] By applying such methods for the delivery of a 3GPP Multimedia Message Service as will be presented in the following, the interoperability is significantly improved.

[0041] In order to describe the invention in detail, the following terms, abbreviations and notations will be used:
3D: three Dimensions,
MM: Multimedia Message,
MMS: Multimedia Message Service,
AVC: Advanced Video Coding,
SEI: Supplemental Enhancement Information,
MVC: Multi-view Video Coding,
UE: User Equipment,
MIME: Multipurpose Internet Mail Extensions according to RFC 2046,
HTML: Hypertext Markup Language,
UAProf: User Agent Profile,
OMA: Open Mobile Alliance,
CPI: Capabilities and Preference Information,
W3C: World Wide Web Consortium,
CC/PP: Composite Capability/Preferences Profile,
RDF: Resource Description Framework,
WSP: Wireless Session Protocol,
HTTP: Hypertext Transfer Protocol,
WAP: Wireless Application Protocol,
URL: Uniform Resource Locator,
PSS: Packet-switched Streaming Service,
SMIL: Synchronized Multimedia Integration Language,
UA: User Agent,
VAS: Value Added Service,
URI: Uniform Resource Identifier.


[0042] For the purposes of supporting multimedia messaging, the term multimedia network or network shall be considered to include the mobile operator's network and any functionality which may exist outside the mobile operator's network, e.g. fixed, internet and multimedia technologies etc., and the support provided by that functionality for multimedia messaging.

[0043] According to a first aspect, the invention relates to a method for providing a multimedia message service (MMS) from a server or relay to a user agent in a multimedia network, the method comprising: determining a video content characteristic of a video content by the server or relay; determining display and/or decoding capabilities of the user agent; signaling options of the video content to the user agent; and providing the video content depending on the display and/or decoding capabilities and depending on an option selected via the user agent from the signaled options of the video content; wherein the signaled options (221) of the video content comprise one of the following: 2D, 3D, both thereof; and wherein the signaling options of the video content to the user agent (203) are provided from the server or relay (201) by using an M-NOTIFICATION.IND message (211) according to an Open Mobile Alliance specification.

[0044] By using this new method for the delivery of a multimedia message service, in particular the new signaling between the server or relay and the user agent, interoperability is improved.

[0045] In a first possible implementation form of the method according to the first aspect, the multimedia network is a network according to 3GPP Multimedia Message Service specifications, for example according to 3GPP TS 22.140: "Technical Specification Group Services and System Aspects; Multimedia Messaging Service (MMS); Stage 1 (Release 10)"; 3GPP TS 23.140: "Technical Specification Group Core Network and Terminals; Multimedia Messaging Service (MMS); Functional description; Stage 2 (Release 6)" or later versions and/or releases.

[0046] The new method according to the first implementation form enhances interoperability in networks according to 3GPP MMS standardization.

[0047] In a second possible implementation form of the method according to the first aspect as such or according to the first implementation form of the first aspect, the server or relay is an MMS Server/Relay B according to the 3GPP Multimedia Message Service specification, e.g. according to 3GPP TS 23.140: "Technical Specification Group Core Network and Terminals; Multimedia Messaging Service (MMS); Functional description; Stage 2 (Release 6)" or a later version and/or release; wherein the user agent is an MMS User Agent B according to the 3GPP Multimedia Message Service specification, e.g. 3GPP TS 23.140: "Technical Specification Group Core Network and Terminals; Multimedia Messaging Service (MMS); Functional description; Stage 2 (Release 6)" or a later version and/or release; and wherein the video content is an MMS video content according to the 3GPP Multimedia Message Service specification.

[0048] The new method can be applied to MMS Server/Relay B and MMS User Agent B according to the 3GPP Multimedia Message Service specification. Only a small enhancement of the message protocol is required, which is transparent to legacy terminals.

[0049] In a third possible implementation form of the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, the signaling options of a video content to the user agent comprises signaling all possible options of the video content to the user agent.

[0050] The user can thus choose between available options signaled to the user agent.

[0051] In a fourth possible implementation form of the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, the options of the video content comprise one of the following: 2D, 3D, both thereof and others.

[0052] The server or relay can perform the adaptation of the 3D video file to encode it as 2D content for legacy devices, or to transcode it into a supported 3D format for devices supporting the method according to the fourth implementation form.

[0053] In a fifth possible implementation form of the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, the video content characteristic is determined based on external metadata.

[0054] When the video content characteristic is determined based on external metadata, no decoding is required to obtain the video content characteristic.

[0055] In a sixth possible implementation form of the method according to the fifth implementation form of the first aspect, the external metadata comprises a presentation type message field indicating 3D frame packing format of the 3D video bitstream.

[0056] The 3D frame packing format of the 3D video bitstream can be transported by using available signaling protocols; only insignificant changes are required. Thus, the implementation is computationally efficient.

[0057] In a seventh possible implementation form of the method according to the sixth implementation form of the first aspect, the 3D frame packing format of the 3D video bitstream comprises one of the following formats: side-by-side, top-bottom, time-interleaved.

[0058] By introducing these formats, the server is not required to decode the video content to obtain the video content characteristic.

[0059] In an eighth possible implementation form of the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, the determining display and/or decoding capabilities comprises determining the display and/or decoding capabilities based on profiling information of the user agent, in particular based on UAProf information according to Open Mobile Alliance specification.

[0060] New vocabulary added to UAProf, i.e. an extension of the PSS vocabulary, allows MMS Server/Relay B to correctly perform content adaptation according to the post-decoder capabilities of a 3D capable device.

[0061] In a ninth possible implementation form of the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, the signaling options of a video content to the user agent is provided from the server or relay, in particular by using an M-NOTIFICATION.IND message according to an Open Mobile Alliance specification, e.g. according to the "Multimedia Messaging Service Encapsulation Protocol".

[0062] The enhanced message M-NOTIFICATION.IND can be applied to MMS Server/Relay B and MMS User Agent B according to 3GPP Multimedia Message Service specification. For legacy terminals, the new signaling options are transparent, thereby not influencing their operational mode.

[0063] In a tenth possible implementation form of the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, the method comprises: informing the user about the options of the video content signaled to the user agent.

[0064] The user thus can give feedback about the desired options. This improves quality of experience.

[0065] In an eleventh possible implementation form of the method according to the tenth implementation form of the first aspect, the method comprises: indicating a selected option from the options of the video content from the user agent to the server or relay, in particular by using a header field of a GET REQUEST message according to HTTP standard, e.g. according to IETF RFC 2616: "Hypertext Transfer Protocol -- HTTP/1.1" or a later version.

[0066] The new method is further compliant to HTTP standardization.

[0067] In a twelfth possible implementation form of the method according to the eleventh implementation form of the first aspect, the method comprises: redirecting the user agent to a terminal supporting the selected option, in particular by using a REDIRECT message according to an HTTP standard, e.g. according to IETF RFC 2616: "Hypertext Transfer Protocol -- HTTP/1.1" or a later version.

[0068] The redirecting is only performed for devices according to the twelfth implementation form; redirection is transparent for legacy devices. Thus interoperability is improved for devices according to the twelfth implementation form and operation is not influenced for legacy devices.

[0069] In a thirteenth possible implementation form of the method according to any of the implementation forms of the first aspect or of the first aspect as such, wherein the method further comprises prior to the providing of the video content: displaying by or via the user agent the signaled options to a user, and selecting by the user an option from the signaled options via the user agent.

[0070] In a fourteenth possible implementation form of the method according to any of the implementation forms of the first aspect or of the first aspect as such, wherein the video content is a 3D video.

[0071] According to a second aspect, the invention relates to a User Agent device, in particular an MMS User Agent B device according to the 3GPP Multimedia Message Service specification, comprising a processing circuit configured for determining display and/or decoding capabilities of the user agent device; receiving options of a video content signaled by a server or relay, in particular by an MMS Server/Relay B device according to the 3GPP Multimedia Message Service specification; and receiving (107) the video content provided by the server or relay depending on display and/or decoding capabilities and depending on an option (223) selected via the user agent from the signaled options (221) of the video content; wherein the signaled options (221) of the video content comprise one of the following: 2D, 3D, both thereof; and wherein the signaling options of the video content to the user agent (203) are received from the server or relay (201) by using an M-NOTIFICATION.IND message (211) according to an Open Mobile Alliance specification.

[0072] User Agent Devices applying the new method for delivery of 3GPP Multimedia Message Service, in particular the new signaling between the MMS Server/Relay B and the MMS User Agent B can improve their interoperability in an MMS network. Multiple alternative choices or options of the MMS 3D content, e.g. 2D, 3D, both, etc. are provided to the MMS User Agent B before the content adaptation is performed. The content adaptation is not restricted to the UAProf information but also takes into account the end user preferences. Moreover, by applying new signaling information about the post-decoder requirements of 3D video content included in an MMS, interoperability is improved.

[0073] According to a third aspect, the invention relates to a Server or relay device, in particular MMS Server/Relay B device according to 3GPP Multimedia Message Service specification, comprising a processing circuit configured for signalling options of a video content to a user agent device, in particular to an MMS User Agent B device according to 3GPP Multimedia Message Service specification; and providing the video content depending on display and/or decoding capabilities of the user agent device and depending on the options selected via the user agent device (203) from the signaled options (221) of the video content; wherein the signaled options (221) of the video content comprise one of the following: 2D, 3D, both thereof; and wherein the signaling options of the video content to the user agent (203) are provided from the server or relay (201) by using an M-NOTIFICATION.IND message (211) according to an Open Mobile Alliance specification.

[0074] Server or relay devices can similarly improve their interoperability in an MMS network as User Agent Devices mentioned above.

[0075] The methods described herein may be implemented as software in a Digital Signal Processor (DSP), in a micro-controller or in any other side-processor, or as a hardware circuit within an application-specific integrated circuit (ASIC).

[0076] The invention can be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations thereof.

BRIEF DESCRIPTION OF THE DRAWINGS



[0077] Further embodiments of the invention will be described with respect to the following figures, in which:

Fig. 1 shows a schematic diagram of a method for providing a multimedia message service according to an implementation form;

Fig. 2 shows a schematic diagram of a message protocol 200 between a server or relay device and a user agent device according to an implementation form;

Fig. 3 shows a schematic diagram of a presentation type message field indicating 3D frame packing format of the 3D video bitstream according to an implementation form;

Fig. 4 shows a schematic diagram of MMS signaling information comprising presentation type information according to an implementation form;

Fig. 5 shows a schematic diagram of profiling information of a user agent according to an implementation form;

Fig. 6 shows a schematic diagram of a conventional MIME (Multipurpose Internet Mail Extensions) encapsulated message;

Fig. 7 shows a schematic diagram of a conventional SMIL (synchronized multimedia integration language) message;

Fig. 8 shows a block diagram of a conventional MMS architecture according to technical specification 3GPP TS 23.140;

Fig. 9 shows a block diagram of a conventional multimedia message according to 3GPP specification;

Fig. 10 shows a message sequence diagram between an MMS Server/Relay B and an MMS User Agent B according to technical specification 3GPP TS 23.140.


DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION



[0078] Fig. 1 shows a schematic diagram of a method 100 for providing a multimedia message service according to an implementation form.

[0079] The method 100 provides a multimedia message service from a server or relay to a user agent in a multimedia network. The method 100 comprises determining 101 a video content characteristic by the server or relay. The method 100 comprises determining 103 display and/or decoding capabilities of the user agent. The method 100 comprises signaling 105 options of a video content to the user agent. The method 100 comprises providing 107 the video content depending on the display and/or decoding capabilities and on the options of the video content according to preferences of a user.

[0080] The server or relay may correspond to the MMS Server/Relay B 835 as described with respect to Fig. 8. The user agent may correspond to the MMS User Agent B 831 as described with respect to Fig. 8. The signaling 105 of options of a video content to the user agent may be performed by using an M-Notification.ind message 211 as illustrated with respect to Fig. 2, with the M-Notification.ind message 211 illustrated in Fig. 2 being extended by an additional option field comprising the options. The server or relay may correspond to the MMS Server/Relay B 201 depicted in Fig. 2 and the user agent may correspond to the MMS User Agent B 203 depicted in Fig. 2, where MMS Server/Relay B 201 and MMS User Agent B 203 are configured to provide and receive this additional option field. The user can choose between the signaled available options of the video content depending on the display and/or decoding capabilities of the user agent. According to the user's preferences, the video content will be delivered.

[0081] Fig. 2 shows a schematic diagram of a message protocol 200 between a server or relay device 201 and a user agent device 203 according to an implementation form. The basic message protocol corresponds to the message protocol as described with respect to Fig. 10. That is, the M-Notification.ind message 211 without additional options field corresponds to the M-Notification.ind message 1011 depicted in Fig. 10, the HTTP Get.req message 212 without additional options field corresponds to the HTTP Get.req message 1012 depicted in Fig. 10, the M-retrieve.conf message 215 without the selected URI field corresponds to the M-retrieve.conf message 1015 depicted in Fig. 10 and the M-NotifyResp.ind message 216 without selected URI field and without additional options field corresponds to the M-NotifyResp.ind message 1016 depicted in Fig. 10.

[0082] The new method 200 for delivery of a 3GPP Multimedia Message Service as depicted in Fig. 2 introduces an additional step, in which information that it is possible to choose between different options 221 of the MMS content encoding is provided by an MMS Server/Relay B 201 to an MMS User Agent B 203 in an M-Notification.ind message 211. A new option header field for the M-Notification.ind PDU 211 is defined as described below. The procedure is presented in Figure 2. First, MMS Server/Relay B 201 issues a notification 211 with a URI of the 3D video to the MMS User Agent B 203. Additionally, a new 'options' field 221 in the M-Notification.ind PDU 211 is provided. By that field a supported terminal is informed that it can decide and indicate whether it wants 3D or 2D content, both versions, or a different supported 3D decoding/display format. The 'options' field 221 is ignored by legacy MMS User Agents, and content is fetched by legacy terminals using the provided URI in the standard way.
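
Purely for illustration, the new 'options' field can be pictured as an additional entry carried alongside the notification headers; the real M-Notification.ind PDU is binary-encoded according to the OMA MMS Encapsulation Protocol, so the dictionary representation and field values below are only a sketch.

```python
# Conceptual sketch only: the real M-Notification.ind PDU is binary-encoded
# per the OMA MMS Encapsulation Protocol; here the notification and the new
# 'options' field are modelled as a plain dictionary for illustration.
def build_m_notification_ind(message_uri: str, available_options: list[str]) -> dict:
    return {
        "X-Mms-Message-Type": "m-notification-ind",
        "X-Mms-Content-Location": message_uri,          # URI of the stored MM
        "options": available_options,                   # new field, e.g. ["2D", "3D", "both"]
    }

notification = build_m_notification_ind(
    "http://mmsc.example.com/mm/abc123",                # placeholder URI
    ["2D", "3D", "both"],
)
# A legacy MMS User Agent simply ignores the unknown 'options' entry and
# fetches the content from the content location in the standard way.
```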

[0083] MMS User Agents that support the new header field issue a GET request 212 to MMS Server/Relay B 201 with a new 'options' header field 221 indicating which encoding method the user chooses, i.e. according to the preference of the user. The 'options' field in the GET request 212 issued by MMS User Agent B 203 can be specified as a new request header field according to RFC 2616 (http://www.ietf.org/rfc/rfc2616.txt). In case a supported terminal includes the new 'options' header field in the GET request 212, the MMS Server/Relay B 201 acts accordingly and responds to the MMS User Agent B 203 with a redirect message 213 indicating the new selected_uri. Thereafter, MMS User Agent B issues a Get.req message 214 with the new selected_uri. After the MMS User Agent B 203 is redirected to the selected_uri, it starts to fetch the MMS content in the standard way but by using the selected_uri and the chosen options field 223.
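
The client side of this exchange can be sketched as follows (an illustrative assumption of how the 'options' request header and the redirect to the selected_uri could be handled; header names other than standard HTTP ones are not normative, and the URI is a placeholder).

```python
# Conceptual client-side sketch of Fig. 2: issue the GET request with the
# chosen option and follow the redirect (message 213) to the selected_uri.
import urllib.request

def fetch_mm(content_location: str, chosen_option: str) -> bytes:
    req = urllib.request.Request(
        content_location,
        headers={"options": chosen_option},   # e.g. "3D", "2D" or "both"
    )
    # urllib follows the HTTP redirect to the selected_uri automatically and
    # fetches the adapted content from there (steps 213 and 214 in Fig. 2).
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# mm_bytes = fetch_mm("http://mmsc.example.com/mm/abc123", "3D")
```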

[0084] Fig. 3 shows a schematic diagram of a presentation type message field 300 indicating 3D frame packing format of the 3D video bitstream according to an implementation form.

[0085] In the MMS specification, the frame packing format is indicated in a Supplemental Enhancement Information (SEI) message. Therefore, the MMS Server/Relay must perform additional processing to acquire the information that the received content is in 3D. The additional processing requires a decoding step. In the implementation form depicted in Fig. 3, post-decoder signaling information about the 3D video is provided in a MIME multipart format. The presentation type message field indicating the 3D frame packing format of the 3D video bitstream comprises different possible presentation types, i.e. signaling information of the types "side-by-side" 301, "top-bottom" 302, "time-interleaved" 303, etc.

[0086] Fig. 4 shows a schematic diagram of MMS signaling information 400 comprising presentation type information according to an implementation form.

[0087] The MMS comprises the newly defined 'presentation-type' information as illustrated in Fig. 4. MMS Servers/Relays that recognize the new signaling information are able to identify the 3D content and its encoding form without the need for decoding the bitstream. The MMS signaling information 400 may correspond to the MMS signaling information as depicted in Fig. 9, but the video content 901 depicted in Fig. 9 is enhanced by the additional presentation type information 401, which is here of type "side-by-side". Of course, this additional presentation type information 401 can be of any other type defined in the presentation type message field 300 illustrated in Fig. 3, e.g. top-bottom, time-interleaved, etc.
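
As an illustration, the 'presentation-type' information could be attached to the video element of the MM as a part header; whether it is carried as a separate header (as in the sketch below) or as a Content-Type parameter is an assumption, since only the signaled values themselves are defined here.

```python
# Illustrative sketch: attaching the 'presentation-type' signaling to the
# video element of the MM. Carrying it as a separate part header is an
# assumption; payload and Content-ID are placeholders.
from email.mime.base import MIMEBase

video_part = MIMEBase("video", "3gpp")
video_part.set_payload(b"...encoded 3D video bitstream...")    # placeholder payload
video_part.add_header("Content-ID", "<video1@example.com>")
video_part.add_header("presentation-type", "side-by-side")     # new post-decoder signaling

# A server/relay that recognizes the header can adapt the content without
# decoding the bitstream to parse the frame packing SEI message.
print(video_part["presentation-type"])
```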

[0088] Fig. 5 shows a schematic diagram of profiling information 500 of a user agent according to an implementation form. New vocabulary in UAProf indicating the rendering capabilities of the device is introduced. Fig. 5 shows an exemplary definition of such new vocabulary.

[0089] The new vocabulary provides a new attribute named "3DRenderingSupport". Its legal values are "side-by-side", "top-bottom" and "time-interleaved".

[0090] Based on the post-decoder signaling and the recipient capabilities indicated in UAProf or acquired during capability negotiation between the recipient MMS User Agent B and an MMS Server/Relay B, the MMS Server/Relay B performs the adaptation of the 3D video file to encode it as 2D content for legacy devices, or to transcode it into a supported 3D format for devices providing the new vocabulary in UAProf.
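
The resulting adaptation decision can be sketched as a simple selection rule (illustrative only; the capability list stands for the resolved '3DRenderingSupport' attribute, and the transcoding and 2D-conversion steps stand in for the actual media processing).

```python
# Conceptual sketch of the adaptation decision described above.
def adapt_mm_video(presentation_type: str, rendering_support: list[str]) -> str:
    if presentation_type in rendering_support:
        return "deliver-as-is"                          # recipient can render this 3D format
    if rendering_support:
        return f"transcode-to-{rendering_support[0]}"   # another supported 3D format
    return "convert-to-2D"                              # legacy device: keep one view only

print(adapt_mm_video("side-by-side", ["time-interleaved"]))   # transcode-to-time-interleaved
print(adapt_mm_video("top-bottom", []))                       # convert-to-2D
```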

[0091] The MMS Server/Relay B and the MMS User Agent B may correspond to the devices 835, 831 depicted in Fig. 8 or to the devices 1001 and 1003 depicted in Fig. 10 when being enhanced for providing the new vocabulary in UAProf.

[0092] From the foregoing, it will be apparent to those skilled in the art that a variety of methods, systems, computer programs on recording media, and the like, are provided.

[0093] The present disclosure also supports a computer program product including computer executable code or computer executable instructions that, when executed, causes at least one computer to execute the performing and computing steps described herein.

[0094] The present disclosure also supports a system configured to execute the performing and computing steps described herein.

[0095] Many alternatives, modifications, and variations will be apparent to those skilled in the art in light of the above teachings. Of course, those skilled in the art readily recognize that there are numerous applications of the invention beyond those described herein. It is therefore to be understood that within the scope of the appended claims and their equivalents, the invention may be practiced otherwise than as specifically described herein.

[0096] Embodiments of the invention can be implemented, in particular, in UMTS (Universal Mobile Telecommunication Systems) and LTE (Long Term Evolution) networks.


Claims

1. Method (100, 200) for providing a multimedia message service (MMS) from a server or relay (201) to a user agent (203) in a multimedia network, the method (100, 200) comprising:

determining (101) a video content characteristic of a video content by the server or relay (201);

signaling (105) options (221) of the video content to the user agent (203);

providing (107) the video content depending on display and/or decoding capabilities and depending on an option (223) selected via the user agent from the signaled options (221) of the video content;

the method being characterized in that:

the signaled options (221) of the video content comprise one of the following: 2D, 3D, both thereof; and

the signaling options of the video content to the user agent (203) are provided from the server or relay (201) by using an M-NOTIFICATION.IND message (211) according to an Open Mobile Alliance specification.


 
2. The method (100, 200) of claim 1, wherein the multimedia network is a network (803) according to a 3GPP Multimedia Message Service specification.
 
3. The method (100, 200) of claim 1 or claim 2, wherein the server or relay (201) is an MMS Server/Relay B according to a 3GPP Multimedia Message Service specification; wherein the user agent (203) is an MMS User Agent B according to the 3GPP Multimedia Message Service specification; and wherein the video content is an MMS video content according to the 3GPP Multimedia Message Service specification.
 
4. The method (100, 200) of one of the preceding claims, wherein the signaling options (221) of a video content to the user agent (203) comprises signaling all possible options of the video content to the user agent (203).
 
5. The method (100, 200) of one of the preceding claims, wherein the video content characteristic is determined based on external metadata.
 
6. The method (100, 200) of claim 5, wherein the video content is a 3D video and the external metadata comprises a presentation type message field (300) indicating a 3D frame packing format of the 3D video bitstream.
 
7. The method (100, 200) of claim 6, wherein the 3D frame packing format of the 3D video bitstream comprises one of the following formats: side-by-side (301), top-bottom (302), time-interleaved (303).
 
8. The method (100, 200) of one of the preceding claims, wherein the method further comprises:

determining (103) display and/or decoding capabilities of the user agent (203); and

wherein the determining (103) display and/or decoding capabilities comprises determining the display and/or decoding capabilities based on profiling information of the user agent (203), in particular based on UAProf information according to an Open Mobile Alliance specification.


 
9. The method (100, 200) of one of the preceding claims, comprising: informing the user about the options (221) of the video content signaled to the user agent (203).
 
10. The method (100, 200) of claim 9, comprising: indicating a selected option (223) from the options (221) of the video content from the user agent (203) to the server or relay (201), in particular by using a header field of a GET REQUEST message (212) according to HTTP standardization.
 
11. The method (100, 200) of claim 10, comprising: redirecting the user agent (203) to a terminal supporting the selected option (223), in particular by using a REDIRECT message (213) according to an HTTP standard.
 
12. User Agent device (203), in particular MMS User Agent B device according to 3GPP Multimedia Message Service specification, comprising a processing circuit (203a) configured for
determining (103) display and/or decoding capabilities of the user agent device (203);
receiving options (221) of a video content signaled by a server or relay (201), in particular by an MMS Server/Relay B device according to 3GPP Multimedia Message Service specification;
receiving (107) the video content provided by the server or relay depending on the display and/or decoding capabilities and depending on an option (223) selected via the user agent from the signaled options (221) of the video content;
characterized in that:

the signaled options (221) of the video content comprise one of the following: 2D, 3D, both thereof; and

the signaling options of the video content to the user agent (203) are received from the server or relay (201) by using an M-NOTIFICATION.IND message (211) according to an Open Mobile Alliance specification.


 
13. Server or relay device (201), in particular MMS Server/Relay B device according to 3GPP Multimedia Message Service specification, comprising a processing circuit (201a) configured for
signaling (105) options (221) of the video content to a user agent device (203), in particular to an MMS User Agent B device according to a 3GPP Multimedia Message Service specification;
providing (107) the video content depending on display and/or decoding capabilities of the user agent device (203) and depending on an option selected via the user agent device (203) from the signaled options (221) of the video content;
characterized in that:

the signaled options (221) of the video content comprise one of the following: 2D, 3D, both thereof; and

the signaling options of the video content to the user agent (203) are provided from the server or relay (201) by using an M-NOTIFICATION.IND message (211) according to an Open Mobile Alliance specification.


 


Claims (Ansprüche)

1. Method (100, 200) for providing a multimedia message service (MMS) from a server or relay (201) to a user agent (203) in a multimedia network, the method (100, 200) comprising:

determining (101) a video content characteristic of a video content by the server or relay (201);

signaling (105) options (221) of the video content to the user agent (203);

providing (107) the video content depending on display and/or decoding capabilities and depending on an option (223) selected via the user agent from the signaled options (221);

the method being characterized in that:

the signaled options (221) of the video content comprise one of the following: 2D, 3D, both thereof; and

the signaled options of the video content are provided to the user agent (203) from the server or relay (201) by using an M-NOTIFICATION.IND message (211) according to an Open Mobile Alliance specification.

2. The method (100, 200) of claim 1, wherein the multimedia network is a network (803) according to a 3GPP Multimedia Message Service specification.

3. The method (100, 200) of claim 1 or 2, wherein the server or relay (201) is an MMS Server/Relay B according to a 3GPP Multimedia Message Service specification; wherein the user agent (203) is an MMS User Agent B according to the 3GPP Multimedia Message Service specification; and wherein the video content is an MMS video content according to the 3GPP Multimedia Message Service specification.

4. The method (100, 200) of one of the preceding claims, wherein signaling options (221) of a video content to the user agent (203) comprises signaling all possible options of the video content to the user agent (203).

5. The method (100, 200) of one of the preceding claims, wherein the video content characteristic is determined based on external metadata.

6. The method (100, 200) of claim 5, wherein the video content is a 3D video and the external metadata comprise a presentation type message field (300) indicating a 3D frame packing format of the 3D video bitstream.

7. The method (100, 200) of claim 6, wherein the 3D frame packing format of the 3D video bitstream comprises one of the following formats: side-by-side (301), top-bottom (302), time-interleaved (303).

8. The method (100, 200) of one of the preceding claims, wherein the method further comprises:
determining (103) the display and/or decoding capabilities of the user agent (203); and wherein the determining (103) of the display and/or decoding capabilities is based on profiling information of the user agent (203), in particular on UAProf information according to an Open Mobile Alliance specification.

9. The method (100, 200) of one of the preceding claims, comprising: informing the user about the options (221) of the video content signaled to the user agent (203).

10. The method (100, 200) of claim 9, comprising: indicating a selected option (223) from the options (221) of the video content from the user agent (203) to the server or relay (201), in particular by using a header field of a GET REQUEST message (212) according to HTTP standardization.

11. The method (100, 200) of claim 10, comprising: redirecting the user agent (203) to a terminal supporting the selected option (223), in particular by using a REDIRECT message (213) according to an HTTP standard.

12. User Agent device (203), in particular an MMS User Agent B device according to a 3GPP Multimedia Message Service specification, comprising a processing circuit (203a) configured for
determining (103) the display and/or decoding capabilities of the user agent device (203);
receiving options (221) of a video content signaled by a server or relay (201), in particular by an MMS Server/Relay B device according to a 3GPP Multimedia Message Service specification;
receiving (107) the video content provided by the server or relay depending on the display and/or decoding capabilities and depending on an option (223) selected via the user agent from the signaled options (221) of the video content;
characterized in that:

the signaled options (221) of the video content comprise one of the following: 2D, 3D, both thereof; and

the signaled options of the video content are received at the user agent (203) from the server or relay (201) by using an M-NOTIFICATION.IND message (211) according to an Open Mobile Alliance specification.

13. Server or relay device (201), in particular an MMS Server/Relay B device according to a 3GPP Multimedia Message Service specification, comprising a processing circuit (201a) configured for
signaling (105) options (221) of the video content to a user agent device (203), in particular to an MMS User Agent B device according to a 3GPP Multimedia Message Service specification;
providing (107) the video content depending on the display and/or decoding capabilities of the user agent device (203) and depending on an option selected via the user agent device (203) from the signaled options (221) of the video content;
characterized in that:

the signaled options (221) of the video content comprise one of the following: 2D, 3D, both thereof; and

the signaled options of the video content are provided to the user agent (203) from the server or relay (201) by using an M-NOTIFICATION.IND message (211) according to an Open Mobile Alliance specification.


 


Claims (Revendications)

1. Method (100, 200) for providing a multimedia message service (MMS) from a server or relay (201) to a user agent (203) in a multimedia network, the method (100, 200) comprising:

determining (101) a video content characteristic of a video content by the server or relay (201);

signaling (105) options (221) of the video content to the user agent (203);

providing (107) the video content depending on display and/or decoding capabilities and depending on an option (223) selected via the user agent from the signaled options (221) of the video content;

the method being characterized in that:

the signaled options (221) of the video content comprise one of the following: 2D, 3D, both thereof; and

the signaled options of the video content are provided to the user agent (203) from the server or relay (201) by using an M-NOTIFICATION.IND message (211) according to an Open Mobile Alliance specification.

2. The method (100, 200) of claim 1, wherein the multimedia network is a network (803) according to a 3GPP Multimedia Message Service specification.

3. The method (100, 200) of claim 1 or claim 2, wherein the server or relay (201) is an MMS Server/Relay B according to a 3GPP Multimedia Message Service specification; wherein the user agent (203) is an MMS User Agent B according to the 3GPP Multimedia Message Service specification; and wherein the video content is an MMS video content according to the 3GPP Multimedia Message Service specification.

4. The method (100, 200) of one of the preceding claims, wherein signaling options (221) of a video content to the user agent (203) comprises signaling all possible options of the video content to the user agent (203).

5. The method (100, 200) of one of the preceding claims, wherein the video content characteristic is determined based on external metadata.

6. The method (100, 200) of claim 5, wherein the video content is a 3D video and the external metadata comprise a presentation type message field (300) indicating a 3D frame packing format of the 3D video bitstream.

7. The method (100, 200) of claim 6, wherein the 3D frame packing format of the 3D video bitstream comprises one of the following formats: side-by-side (301), top-bottom (302), time-interleaved (303).

8. The method (100, 200) of one of the preceding claims, wherein the method further comprises:
determining (103) display and/or decoding capabilities of the user agent (203); and wherein the determining (103) of the display and/or decoding capabilities comprises determining the display and/or decoding capabilities based on profiling information of the user agent (203), in particular based on UAProf information according to an Open Mobile Alliance specification.

9. The method (100, 200) of one of the preceding claims, comprising: informing the user about the options (221) of the video content signaled to the user agent (203).

10. The method (100, 200) of claim 9, comprising:
indicating a selected option (223) from the options (221) of the video content from the user agent (203) to the server or relay (201), in particular by using a header field of a GET REQUEST message (212) according to HTTP standardization.

11. The method (100, 200) of claim 10, comprising:
redirecting the user agent (203) to a terminal supporting the selected option (223), in particular by using a REDIRECT message (213) according to an HTTP standard.

12. User Agent device (203), in particular an MMS User Agent B device according to a 3GPP Multimedia Message Service specification, comprising a processing circuit (203a) configured for
determining (103) display and/or decoding capabilities of the user agent device (203);
receiving options (221) of a video content signaled by a server or relay (201), in particular by an MMS Server/Relay B device according to a 3GPP Multimedia Message Service specification;
receiving (107) the video content provided by the server or relay depending on the display and/or decoding capabilities and depending on an option (223) selected via the user agent from the signaled options (221) of the video content;
characterized in that:

the signaled options (221) of the video content comprise one of the following: 2D, 3D, both thereof; and

the signaled options of the video content are received at the user agent (203) from the server or relay (201) by using an M-NOTIFICATION.IND message (211) according to an Open Mobile Alliance specification.

13. Server or relay device (201), in particular an MMS Server/Relay B device according to a 3GPP Multimedia Message Service specification, comprising a processing circuit (201a) configured for
signaling (105) options (221) of the video content to a user agent device (203), in particular to an MMS User Agent B device according to a 3GPP Multimedia Message Service specification;
providing (107) the video content depending on display and/or decoding capabilities of the user agent device (203) and depending on an option selected via the user agent device (203) from the signaled options (221) of the video content;
characterized in that:

the signaled options (221) of the video content comprise one of the following: 2D, 3D, both thereof; and

the signaled options of the video content are provided to the user agent (203) from the server or relay (201) by using an M-NOTIFICATION.IND message (211) according to an Open Mobile Alliance specification.


 




Drawing

Cited references

REFERENCES CITED IN THE DESCRIPTION



This list of references cited by the applicant is for the reader's convenience only. It does not form part of the European patent document. Even though great care has been taken in compiling the references, errors or omissions cannot be excluded and the EPO disclaims all liability in this regard.

Patent documents cited in the description