(11) EP 2 652 638 B1

(12) EUROPEAN PATENT SPECIFICATION

(45) Mention of the grant of the patent:
29.07.2020 Bulletin 2020/31

(21) Application number: 12736763.9

(22) Date of filing: 18.01.2012

(51) International Patent Classification (IPC):
G06F 17/00 (2019.01)

(86) International application number:
PCT/US2012/021710

(87) International publication number:
WO 2012/099954 (26.07.2012 Gazette 2012/30)

(54) SYSTEM AND METHOD FOR RECOGNITION OF ITEMS IN MEDIA DATA AND DELIVERY OF INFORMATION RELATED THERETO

SYSTEM UND VERFAHREN ZUR ERKENNUNG VON OBJEKTEN IN MEDIENDATEN UND ZUR AUSGABE VON DARAUF BEZOGENEN INFORMATIONEN

SYSTÈME ET PROCÉDÉ DE RECONNAISSANCE D'ÉLÉMENTS DANS DONNÉES MULTIMÉDIAS ET DE DISTRIBUTION D'INFORMATIONS LES CONCERNANT


(84) Designated Contracting States:
AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

(30) Priority: 18.01.2011 US 201161433755 P

(43) Date of publication of application:
23.10.2013 Bulletin 2013/43

(73) Proprietor: HSNI, LLC
St. Petersburg, FL 33729 (US)

(72) Inventor:
  • MCDEVITT, John
    Clearwater, FL 33762 (US)

(74) Representative: Manitz Finsterwald Patent- und Rechtsanwaltspartnerschaft mbB
Martin-Greif-Strasse 1
80336 München (DE)


(56) References cited:
WO-A1-2010/120901
US-A1- 2006 240 862
US-B2- 6 968 337
US-A1- 2002 198 789
US-A1- 2010 260 426
  
  • RUHAN HE ET AL: "Garment Image Retrieval on the Web with Ubiquitous Camera-Phone", PROCEEDINGS / 2008 IEEE ASIA-PACIFIC SERVICES COMPUTING CONFERENCE, APSCC 2008 : 9 - 12 DECEMBER 2008, YILAN, TAIWAN, IEEE, PISCATAWAY, NJ, USA, 9 December 2008 (2008-12-09), pages 1584-1589, XP031423571, ISBN: 978-0-7695-3473-2
  • GUANNAN ZHAO ET AL: "Style matching model-based recommend system for online shopping", COMPUTER-AIDED INDUSTRIAL DESIGN&CONCEPTUAL DESIGN, 2009. CAID&CD 2009. IEEE 10TH INTERNATIONAL CONFERENCE ON, IEEE, PISCATAWAY, NJ, USA, 26 November 2009 (2009-11-26), pages 1995-1999, XP031596983, ISBN: 978-1-4244-5266-8
  
Note: Within nine months from the publication of the mention of the grant of the European patent, any person may give notice to the European Patent Office of opposition to the European patent granted. Notice of opposition shall be filed in a written reasoned statement. It shall not be deemed to have been filed until the opposition fee has been paid. (Art. 99(1) European Patent Convention).


Description

BACKGROUND OF THE INVENTION



[0001] With the continued development of portable media players, social networking services, wireless data transmission speeds, etc., individuals continue to be presented with more and more image and video content. However, when an individual receives a digital picture or a video feed or the like, the individual might also wish to have further information about something in the content, such as an item, a person, a logo or even a building or landmark. For example, a video feed might include a scene filmed at the Statue of Liberty, and the viewer may wish to receive historical information about this landmark. Moreover, a video feed might include a famous actress carrying a new designer handbag or a famous athlete using a cell phone, each of which may be of interest to a consumer who wishes to learn more information about the item, share the item with a friend via a social networking website or the like, or even purchase the item. In conventional systems, the viewer/consumer is unable to quickly act on his or her general interest in the particular item to obtain additional information or engage in an e-commerce shopping session related to the item of interest. Document WO 2010/120901 A1 discloses a system and a method for image recognition using a mobile device. Document Ruhan He et al.: "Garment Image Retrieval on the Web with Ubiquitous Camera-Phone", IEEE Asia-Pacific Services Computing Conference, 2008, pages 1584-1589, ISBN: 978-0-7695-3473-2, discusses the ability of a system to provide information about an item acquired by a camera phone. Document Guannan Zhao et al.: "Style Matching Model-Based Recommend System for Online Shopping", Computer-Aided Industrial Design & Conceptual Design, 2009, pages 1995-1999, ISBN: 978-1-4244-5266-8, discloses a recommendation system for online shopping.

SUMMARY OF THE INVENTION



[0002] Accordingly, what is needed is a system that recognizes individual items or sets of items (collectively items) in source content and accesses information relating to the recognized items that can then be requested by or automatically pushed to the end user in order to facilitate additional interaction related to the recognized item. Thus, the system and method disclosed herein relate to the determination of both the location and identity of items in images (both pictures and videos) and the rendering of additional functionality for these identified items when the end user "points to", "clicks", or otherwise selects the identified items.

[0003] Specifically, a system is provided that includes an electronic database that stores a plurality of digital images of items and information related to each of the plurality of items; and a processor that scans source content having a plurality of elements and identifies any items that match the plurality of items stored in the database. In addition, the processor generates position data indicating the position of the identified item and links and/or merges the item with the information related to the identified item(s) and the position data. Moreover, a method is provided that scans source content, identifies items in the source content that match a digital image stored in an electronic database, generates position data indicating the position of the identified item, accesses information related to the identified item, and links and/or merges the item with the position data and the information related to the identified item. The invention is set out in the appended set of claims.

BRIEF DESCRIPTION OF THE DRAWINGS



[0004] 

Figure 1 illustrates a block diagram of a system for recognizing items in media data and delivery of related information in accordance with an exemplary embodiment.

Figure 2 illustrates a flowchart for a method for recognizing items in media data and delivery of related information in accordance with an exemplary embodiment.


DETAILED DESCRIPTION OF THE INVENTION



[0005] The following detailed description outlines possible embodiments of the proposed system and method disclosed herein for exemplary purposes. The system and method are in no way intended to be limited to any specific combinations of hardware and software. As will be described below, the system and method disclosed herein relate to the establishment of both the location and identity of individual items in images. Once the one or more items in the images and/or video are identified and the locations of the items established, additional functionality related to those identified items can occur when those identified locations are "pointed to", "clicked" or otherwise selected (e.g., purchase an item, request information, select another video stream, play a game, share the item, rate, "Like", and the like).

[0006] Figure 1 illustrates a block diagram of a system 100 for recognizing items in media data and delivery of related information in accordance with an exemplary embodiment. In general, system 100 is divided into remote processing system 102 and user location 104. In the exemplary embodiment, the remote processing system 102 can be associated with a secondary processing system (e.g., a digital video recorder, a product supplier, etc.), which can be located at either remote processing system 102 or user location 104, and/or a content provider that is capable of processing data transmitted to and from user location 104. A general illustration of the relationship between a user location, a product supply server, i.e., a secondary processing system, and a content provider is discussed in U.S. Patent No. 7,752,083 to Johnson et al., issued on July 6, 2010, and entitled "METHOD AND SYSTEM FOR IMPROVED INTERACTIVE TELEVISION PROCESSING". Furthermore, user location 104 can be considered any location in which an end user/consumer is capable of viewing an image and/or video feed on a viewing device 145. It is noted that the terms "end user," "user" and "consumer" are used interchangeably herein and can refer to a human or another system, as will be described in more detail below.

[0007] As shown in Figure 1, remote processing system 102 includes content source 110 that provides source images, i.e., source content, that is ultimately transmitted to the user after it is processed by the other components of remote processing system 102, as will be discussed below. In one embodiment, content source 110 can be a content provider, such as that discussed above with reference to U.S. Patent No. 7,752,083. Furthermore, source content can be live or prerecorded, analog or digital, and still (picture) or streaming (video).

[0008] Remote processing system 102 further includes reference content database 115 that contains a plurality of known images (picture or video - collectively images). In particular, reference content database 115 can store images relating to elements that may be displayed in the source content. For example, the stored images can relate to consumer products (e.g., electronics, apparel, jewelry, etc.), marketing or brand items (e.g., logos, marks, etc.), individuals, locations (e.g., buildings, landmarks, etc.), humanly invisible items (fingerprints, watermarks, etc.) or any other elements that are capable of being identified in the source content. The image data in reference content database 115 can be updated on a continuous or periodic basis by a system, a system administrator or the like.

[0009] Remote processing system 102 further includes matching processor 120 that is coupled to both content source 110 and reference content database 115. Matching processor 120 is configured to compare images in reference content database 115 with elements in the source content provided by content source 110. More particularly, matching processor 120 uses conventional scanning and image recognition algorithms for scanning image content to compare the elements in the source content with the images stored in reference content database 115 and identify matches. The scanning and related matching process can occur on a continuous or periodic basis. During the matching process, every potential item in the source content is compared with the images stored in reference content database 115. When the comparison results in a match, matching processor 120 identifies the matched item. If there is no match, matching processor 120 continues to scan the source content as it updates/changes to continually or periodically check whether elements in the source content match images in reference content database 115. It should be appreciated that the areas of the source content that do not have any identified items in them can be identified as such.
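
By way of illustration only, the following is a minimal sketch in Python of the matching loop described above. It is not the claimed implementation: the ReferenceImage record, the feature representation, the Jaccard similarity score and the match threshold are all assumptions introduced for this example; a real matching processor 120 would use conventional scanning and image recognition algorithms.

```python
from dataclasses import dataclass

@dataclass
class ReferenceImage:
    item_id: str    # identifier of the known item (hypothetical field)
    features: set   # precomputed image features (assumed representation)

def find_matches(source_elements, reference_db, threshold=0.8):
    """Compare every element in the source content against every image
    in the reference database and report the items that match."""
    matches = []
    for element_features in source_elements:
        for ref in reference_db:
            # Jaccard similarity as a stand-in for a real matching score.
            union = element_features | ref.features
            score = len(element_features & ref.features) / len(union) if union else 0.0
            if score >= threshold:
                matches.append((ref.item_id, score))
    return matches

# Example: one source element closely resembling a stored handbag image.
db = [ReferenceImage("handbag-123", {"f1", "f2", "f3", "f4"})]
print(find_matches([{"f1", "f2", "f3", "f4"}], db))  # [('handbag-123', 1.0)]
```

An element that matches nothing simply produces no entry, mirroring the behavior above in which scanning continues as the source content updates.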

[0010] It is further contemplated that reference content database 115 can store certain images as predetermined marker items. Specifically, reference content database 115 can store images with preset identifying data (e.g., marker characteristics) that enables matching processor 120 to more quickly and more accurately identify items that correspond to the marker characteristics. Preferably, it is contemplated that items that are frequently displayed in the source content are stored as predetermined marker items in reference content database 115, such that reference content database 115 is organized to contain subsets of items (associated by marker characteristics) that have a higher probability of successfully matching with elements in specific source content. For example, a subset of items that are more likely to be matched during a sporting event (e.g., team logos) can be generated and referenced during the scanning process when the source content is a game involving the specific team having that logo. As a result, the subset of items may be employed to increase the quality of the item matches (increased correct matches and decreased false positive matches), effectively reducing the processing requirements of matching processor 120. In addition, in one embodiment of the matching process, the items stored in reference content database 115 can include data fields that link similar items. For example, data fields can be provided that link items similar in type, time, relationship, or the like (e.g., all images of televisions have a common field, images of things that occur around an event such as Valentine's Day have a common field, or items that traditionally are linked have a common field, such as salt and pepper). Additionally, matching processor 120 can perform an iterative process to match the element in the source content to the item stored in reference content database 115 by making an initial predicted match in the first image or frame and then refining the prediction for each subsequent scan until a conclusive match is made and the item is identified.
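
The marker-item organization described above can be pictured as an index over the reference database. The sketch below shows one way such subsets might be built and consulted first during a scan; the marker characteristics, item identifiers and ordering policy are assumptions for this example, not the patented method.

```python
from collections import defaultdict

# Hypothetical reference records: (item_id, marker characteristics).
REFERENCE_ITEMS = [
    ("team-logo-A", {"sports"}),
    ("team-logo-B", {"sports"}),
    ("valentines-chocolates", {"valentines-day"}),
    ("television-xyz", {"electronics"}),
]

def build_marker_index(items):
    """Organize the reference database into subsets keyed by marker
    characteristic, so that likely items can be checked first."""
    index = defaultdict(list)
    for item_id, markers in items:
        for marker in markers:
            index[marker].append(item_id)
    return index

def candidates_for(context, index, items):
    """Return the high-probability subset first, then the remainder."""
    likely = index.get(context, [])
    return likely + [i for i, _ in items if i not in likely]

index = build_marker_index(REFERENCE_ITEMS)
# Scanning a sporting event: team logos are compared before anything else.
print(candidates_for("sports", index, REFERENCE_ITEMS))
```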

[0011] As further shown, location determination processor 125 is coupled to matching processor 120 and is configured to identify the location of any matched items identified by matching processor 120. In the exemplary embodiment, the location of the matched items can be defined in a Cartesian coordinate plane, or in a position based on another location system (collectively, X, Y coordinates, either as an individual point or a set of points). Location determination processor 125 is configured to generate metadata setting the X, Y coordinates for each matched item's position relative to the source content as a whole. Accordingly, for each matched item's position, location determination processor 125 generates metadata for the specific X, Y coordinates of that item as it is positioned within the image of the source content that includes that item. For each subsequent image (including each video frame), location determination processor 125 continues to track the movement of the item as its position varies in the source content and continues to generate metadata corresponding to the item's position. In the exemplary embodiment, the item's position can be denoted by either the X, Y coordinate set or the center point of the item shape.
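
The position metadata described above might, purely as an illustration, take a form like the following. The point-set representation, the ItemPosition record and the per-frame tracking loop are assumptions made for this sketch; the description requires only that X, Y coordinates (an individual point or a set of points) be generated for each matched item and updated as the item moves.

```python
from dataclasses import dataclass, field

@dataclass
class ItemPosition:
    item_id: str
    frame: int
    points: list = field(default_factory=list)  # (x, y) outline of the item

    @property
    def center(self):
        # The item's position may alternatively be denoted by the center
        # point of the item shape rather than the full coordinate set.
        xs = [x for x, _ in self.points]
        ys = [y for _, y in self.points]
        return (sum(xs) / len(xs), sum(ys) / len(ys))

def track(item_id, per_frame_points):
    """Generate position metadata for each successive frame as the item
    moves within the source content."""
    return [ItemPosition(item_id, frame, pts)
            for frame, pts in enumerate(per_frame_points)]

metadata = track("handbag-123",
                 [[(10, 20), (30, 20), (30, 40), (10, 40)],
                  [(12, 22), (32, 22), (32, 42), (12, 42)]])
for m in metadata:
    print(m.frame, m.center)  # (20.0, 30.0), then (22.0, 32.0)
```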

[0012] It should be understood by those skilled in the art that while matching processor 120 and location determination processor 125 are described as separate processors, in an alternative embodiment, a single processor can perform both the matching and location identifying processes as well as the creation of the metadata of identity and location of the items.

[0013] Remote processing system 102 further includes additional information database 130. Although additional information database 130 is described in the exemplary embodiment as being located at remote processing system 102, additional information database 130 can also be located at user location 104, as will be described in more detail below.

[0014] In either embodiment, additional information database 130 contains additional information about the reference images stored in reference content database 115. Specifically, additional information database 130 is configured to store descriptive and relational information related to the item, including pricing information, sizing information, product descriptions, product reviews and the like, as well as links to other information sources such as Internet websites. Thus, in operation, once the matched item is identified, remote processing system 102 subsequently accesses additional information database 130, which identifies all additional information relating to the specific matched item. It should be appreciated that there may be no additional information in additional information database 130 related to a given item. In a refinement of the exemplary embodiment, the additional information can be a data path to more detailed information about an item. In this case, instead of initially providing all additional information related to an item, the additional information initially provided by additional information database 130 is a path to this information; only when the user is interested in the matched item and wishes to view further information about it does additional information database 130 subsequently access the metadata relating to the detailed information of the matched item.
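
The two variants described above, inline details versus a data path that is only followed on demand, might be modeled as in the following sketch. The record layout, the example URL and the lookup functions are hypothetical, introduced solely for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AdditionalInfo:
    item_id: str
    summary: Optional[dict] = None   # descriptive data delivered up front
    data_path: Optional[str] = None  # path to the more detailed information

ADDITIONAL_INFO_DB = {
    "handbag-123": AdditionalInfo(
        item_id="handbag-123",
        summary={"price": "$249", "sizes": ["S", "M", "L"]},
        data_path="https://example.com/items/handbag-123",  # placeholder
    ),
}

def lookup(item_id):
    """Return what is known about a matched item; there may be nothing."""
    return ADDITIONAL_INFO_DB.get(item_id)

def resolve_details(info):
    # Followed only when the user wishes to view further information; a
    # real system would fetch the record behind the data path here.
    return f"fetch {info.data_path}"

info = lookup("handbag-123")
if info is not None:
    print(info.summary)           # initially delivered information
    print(resolve_details(info))  # deferred, on user interest
```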

[0015] It should further be appreciated by those skilled in the art that while reference content database 115 and additional information database 130 are described as separate databases, in an alternative embodiment, a single database can be provided to store both the image information and the additional information about the referenced item.

[0016] Once the additional information is identified by additional information database 130, merging processor 135 is provided to merge together this metadata, the metadata relating to the location information calculated by location determination processor 125, and the source content provided by content source 110 into a format that can be received/interpreted by viewing device 145 at user location 104. In the exemplary embodiment, in which the source content is being generated live or is prerecorded, the matching occurs such that the content and the item identification and location metadata are synchronously delivered. In an additional embodiment, the content with the related synchronous item identification and location metadata can be stored and played out directly by distribution server 140 to viewing device 145. The rendering of this combined data can be either visible or invisible, in whole or in part. At this point, remote processing system 102 is configured to make the items on the display device "active" by any method known to those skilled in the art, e.g., they are "selectable" or "clickable" by the end user/consumer. Furthermore, distribution server 140 is coupled to merging processor 135 and configured to transmit the new integrated video stream to user location 104 using any conventional data communication method (e.g., over-the-air broadcast, cablecasting, Direct Broadcast Satellite, Telco, wifi, 3G/4G, IP enabled, etc.). It is further contemplated that in an alternative embodiment, the process of rendering the item "active" is performed by viewing device 145.
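
As a rough illustration of the merging step, the sketch below pairs each frame of source content with the identity, position and additional-information metadata that belongs to it. The per-frame JSON records are an assumption for this example; paragraph [0024] notes only that self-describing languages such as HTML or XML may be used.

```python
import json

def merge_stream(source_frames, positions, additional_info):
    """Yield one combined record per frame of source content, carrying
    the item identity, X, Y coordinates and related information so that
    content and metadata are synchronously delivered."""
    for frame_no, frame in enumerate(source_frames):
        items = [{"item_id": p["item_id"],
                  "xy": p["xy"],
                  "info": additional_info.get(p["item_id"], {})}
                 for p in positions if p["frame"] == frame_no]
        yield frame, json.dumps({"frame": frame_no, "items": items})

frames = ["<frame-0-pixels>", "<frame-1-pixels>"]
positions = [{"frame": 0, "item_id": "handbag-123", "xy": (20, 30)},
             {"frame": 1, "item_id": "handbag-123", "xy": (22, 32)}]
info = {"handbag-123": {"price": "$249"}}
for frame, metadata in merge_stream(frames, positions, info):
    print(frame, metadata)
```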

[0017] User location 104 comprises viewing device 145 that is configured to receive image/video and audio content (e.g., an IP data stream) and is capable of displaying an image/video feed, and, more particularly, the new integrated video stream generated by merging processor 135 and transmitted by distribution server 140. It should be understood that viewing device 145 can be any suitable device capable of viewing the new integrated image/video stream, including, but not limited to, a computer, smartphone, PDA, laptop computer, notebook computer, television, viewing device with a set-top box type processor (internal or external to the viewing device), a Blu-ray player, a video game console (internal or external to a television or the like), a Tablet PC, or any other device (individually or as part of a system) that can receive, interpret, and render on a screen image/video content, interpret the related metadata, receive user input related to the merged content and metadata, display additional information in response to user input, and/or send that user input to one or more locally and/or remotely connected secondary systems.

[0018] Furthermore, viewing device 145 (with internal or external processor(s)) is configured to enable a user to select the identified items in some way and perform additional actions. This process can be either a single process in the case of pictures or continuous in the case of video. In the exemplary embodiment, the user's selection of one or more identified items will result in the additional information about the item being displayed to the user on viewing device 145. In addition or in the alternative, the response from the user's selection can be sent to one or more secondary systems on either a continuous or periodic basis. The user can select the identified item using any applicable selection method, such as a mouse pointer, a touch screen, or the like. Thus, when viewing device 145 displays the new integrated video stream that includes one or more "active" items, as discussed above, and the end user selects a particular active item, the user can view and/or access the additional information relating to the matched item. As mentioned above, the end user can also be another system. For example, when the new integrated video stream is being interpreted by viewing device 145, one or more items can be automatically identified and selected by viewing device 145 (e.g., an associated processor). For example, if a user is watching a free version of a movie, this embodiment contemplates that the processor of viewing device 145 automatically identifies and selects one or more items, causing information (e.g., product advertisements) to be displayed to the end user. Alternatively, if the user pays to download and watch the movie, this feature can be turned off.
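
A selection on viewing device 145 ultimately reduces to deciding which active item, if any, lies under the user's pointer or touch. The following hit-testing sketch is illustrative only; the rectangular bounds, the on_select behavior and the auto flag (modeling the device selecting an item itself, as in the free-movie example above) are assumptions.

```python
def hit_test(click_xy, active_items):
    """Return the active item, if any, whose bounding region contains
    the coordinates the user selected."""
    cx, cy = click_xy
    for item in active_items:
        (x0, y0), (x1, y1) = item["bounds"]
        if x0 <= cx <= x1 and y0 <= cy <= y1:
            return item["item_id"]
    return None

def on_select(item_id, additional_info, auto=False):
    """Display the additional information, or forward the selection to a
    secondary system; auto=True models the viewing device selecting the
    item itself rather than a human user."""
    source = "device" if auto else "user"
    return f"{source} selected {item_id}: {additional_info.get(item_id)}"

items = [{"item_id": "handbag-123", "bounds": ((10, 20), (32, 42))}]
info = {"handbag-123": {"price": "$249"}}
selected = hit_test((25, 30), items)
if selected is not None:
    print(on_select(selected, info))
```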

[0019] It is also noted that in an alternative embodiment, the new integrated video stream generated by merging processor 135 only includes metadata relating to the item's identification and position. Specifically, in this embodiment, the additional information in additional information database 130 that is related to the identified item is not initially merged into the integrated video stream. Instead, the integrated video stream is transmitted to the end user without the additional information. Only after the end user selects the identified item is a request sent by viewing device 145 to additional information database 130 at remote processing system 102, which accesses the additional information and transmits it back to viewing device 145. In yet another embodiment, additional information database 130 can be located at user location 104.

[0020] In one refinement of the exemplary embodiment, an electronic shopping request can be transmitted back to distribution server 140 when the user selects the identified item, which, in turn, causes remote processing system 102 to initiate an electronic shopping interaction with the end user that allows the end user to review and, if he or she elects, purchase the selected item. Exemplary electronic shopping systems and methods are disclosed in U.S. Patent Nos. 7,752,083 and 7,756,758 and U.S. Patent Publication No. 2010/0138875.

[0021] In addition, one or more secondary systems 150 can be provided at user location 104 and coupled to viewing device 145. These secondary systems are additional processors that allow for a wide variety of functionality known to those skilled in the art (e.g., digital video recorders, email systems, social network systems, etc.) and that can be interfaced via a connection to viewing device 145.

[0022] It is also noted that while the exemplary embodiment describes the new integrated video stream as a single data stream that includes the source content, the metadata relating to the additional information that is merged in the source content, and the metadata for the X, Y coordinates of the matched items, in an alternative embodiment, two separate data streams containing this information can be transmitted by distribution server 140 to user location 104 and then merged by one or more processors of (or connected to) viewing device 145. For example, the source content can be transmitted as a first data stream using conventional transmission methods (e.g., standard broadcast, DBS, cable delivered video or the like) and the metadata about the matched items (i.e., the additional information and position information) can be transmitted using conventional IP data communication methods (e.g., wifi, 3G/4G, IP enabled, and the like). In this embodiment, merging processor 135 is located at user location 104 and is coupled to viewing device 145 to perform the same merging processing steps described above.
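
The two-stream variant can be pictured as a join performed at the user location: video frames arrive over one channel, item metadata over another, and a processor at or connected to viewing device 145 aligns them. The shared frame index used as the join key below is an assumption; a real system might synchronize on timestamps instead.

```python
def merge_at_client(video_stream, metadata_stream):
    """Join separately delivered video frames and item metadata on a
    shared frame index, as a processor at the viewing device might."""
    by_frame = {m["frame"]: m["items"] for m in metadata_stream}
    for frame_no, frame in enumerate(video_stream):
        yield {"frame": frame, "items": by_frame.get(frame_no, [])}

video = ["<frame-0>", "<frame-1>"]  # e.g., broadcast/cable delivery
metadata = [{"frame": 0,            # e.g., IP data delivery
             "items": [{"item_id": "handbag-123", "xy": (20, 30)}]}]
for combined in merge_at_client(video, metadata):
    print(combined)
```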

[0023] It should further be understood that while the various components are described to be part of remote processing system 102, it is in no way intended that these components all be located at the same physical location. In an alternative embodiment, one or more of the processes can be performed by processors that are internal or external to viewing device 145. For example, in one embodiment, source content that has not been processed by remote processing system 102 can be transmitted directly to viewing device 145. When the user selects or clicks on a particular element in the source content, a location determination processor provided at viewing device 145 can generate metadata setting the X, Y coordinates for the selected item. This metadata can then be transmitted to remote processing system 102 where the selected element is compared to images in reference content database 115 by matching processor 120. If a match is identified, the processing of this information as described above with respect to the other components of remote processing system 102 is performed and a new integrated video stream is pushed back to the user that includes the additional information about the element initially selected by the user. Further, while each of the components described in remote processing system 102 is provided with one or more specific functions, each component is by no means intended to be limited to these functions. For example, different components can provide different processing functions within the context of the invention and/or a single component can perform all of the functions described above with respect to the exemplary embodiment.

[0024] Finally, it should be understood that each of the aforementioned components of remote processing system 102 and user location 104 comprises all requisite hardware and software modules to enable communication between each of the other respective components. These hardware components can include conventional I/O interfaces, such as modems, network cards, and the like. Such hardware components and software applications are known to those skilled in the art and have not been described in detail so as not to unnecessarily obscure the description of the invention herein. Moreover, program instructions for each of the components can be in any suitable form. In particular, some or all of the instructions may be provided in programs written in a self-describing computer language, e.g., Hyper Text Markup Language (HTML), eXtensible Markup Language (XML) or the like. Transmitted program instructions may be used in combination with other previously installed instructions, e.g., for controlling a manner of display of data items described in a received program markup sheet.

[0025] Figure 2 illustrates a flowchart for a method 200 for recognizing items in media data and delivery of related information in accordance with an exemplary embodiment. The following method is described with respect to the components of Figure 1 and their associated functionality as discussed above.

[0026] As shown in Figure 2, initially, at step 205, content source 110 at remote processing system 102 generates a source picture or video that is provided to matching processor 120. At step 210, matching processor 120 uses known scanning methods and/or other image matching techniques to compare elements in the source content to item images stored in reference content database 115. These images can include a wide variety of things. For example, the stored images can relate to consumer products (e.g., electronics, apparel, jewelry, etc.), marketing or brand items (e.g., logos, marks, etc.), individuals, locations (e.g., buildings, landmarks, etc.) or any other elements that are capable of being identified in the source content. If no match is identified, remote processing system 102 does nothing and matching processor 120 continues to scan the source content provided by content source 110. Furthermore, in an additional embodiment, the areas of the content source data that do not contain any identified items can be identified as such.

[0027] Alternatively, if matching processor 120 identifies a match between the element in the source content and the reference item images in reference content database 115, method 200 proceeds to step 215, in which the position of the matched item is calculated by location determination processor 125. Specifically, at step 215, location determination processor 125 generates metadata setting the X, Y coordinates for each matched item's position. Next, at step 220, remote processing system 102 accesses additional information database 130 to identify additional information relating to the identified item. This information can include descriptive or relational information related to the items, including pricing information, sizing information, product descriptions, product reviews and the like, as well as links to other information sources such as Internet websites, or, in the alternative, a data path to this detailed information.

[0028] Once the additional information is identified, the method proceeds to step 225 where merging processor 135 merges together this additional information, the metadata relating to location information calculated by location determination processor 125, and the source content provided by content source 110 into a format that can be received/interpreted by viewing device 145 at user location 104.

[0029] At step 230, the new integrated video stream is then transmitted by distribution server 140 to user location 104. Next, at step 235, when viewing device 145 receives the new integrated video stream, viewing device 145 renders visible or invisible indicators on the matched items, making them "active," i.e., the matched items are rendered "selectable" or "clickable" by the end user/consumer, and the additional information related to the matched item can be displayed on viewing device 145 in response to the user's selection of the active item. As noted above, this step can also be performed by remote processing system 102. Finally, as an example, at step 240, if a particular item is selected by the user/consumer, remote processing system 102 will launch an electronic shopping interaction with the user/consumer that allows the user/consumer to review and, if he or she elects, purchase the selected item. As noted above, exemplary electronic shopping systems and methods are disclosed in U.S. Patent Nos. 7,752,083 and 7,756,758 and U.S. Patent Publication No. 2010/0138875.

[0030] It should be understood that while method 200 comprises certain steps performed by the components at remote processing system 102 and certain steps performed by the components at user location 104, method 200 is in no way intended to be limited in this regard. For example, as described above, certain processes performed by the components at remote processing system 102 in the exemplary embodiment can, in an alternative embodiment, be performed by processors coupled to viewing device 145. For example, in one embodiment, the source content can be initially transmitted to the user/consumer at user location 104 before it is processed. Once the user selects a particular element, a processor coupled to viewing device 145 can generate metadata representing the X, Y coordinates of the selected item in the source content, and this metadata can then be transmitted back to remote processing system 102. The subsequent processing steps discussed above (e.g., the image matching and merging processes) can then be performed on the selected item before the data is pushed back to the user/consumer.

[0031] Furthermore, it is contemplated that method 200 can be performed using digital or analog, live or recorded, and still or streaming content provided by content source 110, where the metadata related to the product identity and X, Y coordinates can be stored and delivered with the live or recorded content or, alternatively, this data can be stored at remote processing system 102 (or a combination of remote processing system 102 and user location 104) and served or created dynamically, as would be understood by one skilled in the art. Additionally, in the embodiment in which the source content is being generated live or is prerecorded, the matching occurs such that the content and the item identification and location metadata are synchronously delivered. In an additional embodiment, the content with the related synchronous item identification and location metadata can be stored and played out directly by distribution server 140 to viewing device 145.

[0032] It is finally noted that while the foregoing system 100 in Figure 1 and method 200 in Figure 2 have primarily been described with respect to image and video data, it is also contemplated that system 100 and method 200 can utilize audio data. For example, reference content database 115 can contain audio items, such as songs or famous individuals' voices, that are capable of being identified in the source content. Matching processor 120 can perform a similar matching process for source content and match audio elements in the source content to the audio items in reference content database 115. Additional information database 130 can also contain additional information about the identified audio items, such as the album of the song, or the movies, shows, sports teams, political party, etc. relating to the famous individual whose voice is identified. The end user can then select a designated area in the source content or otherwise indicate an interest in the audio item to receive the additional information using the system and process described herein.
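
For the audio variant, the matching process can be sketched in the same spirit. The coarse averaging fingerprint below is purely illustrative, a stand-in for the far more robust acoustic fingerprints a real system would use; the item identifier and sample values are invented for the example.

```python
def audio_fingerprint(samples, bucket=4):
    """Reduce an audio element to a coarse fingerprint by averaging
    fixed-size buckets of samples (illustrative only)."""
    return tuple(round(sum(samples[i:i + bucket]) / bucket, 1)
                 for i in range(0, len(samples), bucket))

REFERENCE_AUDIO = {
    # item_id -> precomputed fingerprint of a known song or voice.
    "song-456": audio_fingerprint([0.1, 0.2, 0.1, 0.2, 0.9, 0.8, 0.9, 0.8]),
}

def match_audio(element_samples):
    """Match an audio element in the source content against the audio
    items stored in the reference database."""
    fp = audio_fingerprint(element_samples)
    for item_id, ref_fp in REFERENCE_AUDIO.items():
        if fp == ref_fp:
            return item_id
    return None

print(match_audio([0.1, 0.2, 0.1, 0.2, 0.9, 0.8, 0.9, 0.8]))  # song-456
```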

[0033] While the foregoing has been described in conjunction with exemplary embodiments, it is understood that the term "exemplary" is merely meant as an example. Accordingly, the application is intended to cover alternatives, modifications and equivalents, which may be included within the scope of the system and method for recognizing items in media data and delivery of related information as disclosed herein.

[0034] Additionally, in the preceding detailed description, numerous specific details have been set forth in order to provide a thorough understanding of the present invention. However, it should be apparent to one of ordinary skill in the art that the system and method for recognizing items in media data and delivery of related information may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the system and method disclosed herein.


Claims

1. A system for recognizing individual items in image data contained in video source content and delivering related information, the system comprising:

at least one electronic database storing a plurality of digital images and information related to each of the plurality of digital images;

at least one processor communicatively coupled to the at least one electronic database, the at least one processor configured to:

(1) scan the image data contained in the video source content and identify an individual item in the image data of the video source content that matches one of the plurality of digital images stored in the at least one electronic database by comparing elements in the image data of the source content with the plurality of digital images stored in the at least one electronic database to identify a matched individual item when the individual elements of the source image data match at least one of the plurality of digital images stored in the at least one electronic database,

(2) access the information stored in the at least one electronic database that is related to the digital image that matches the identified individual item,

(3) generate coordinate position data indicating a position of the identified individual item in the video source content, and

(4) generate a new integrated video stream by merging the image data contained in the video source content with the accessed information related to the identified individual item and the coordinate position data of the identified individual item; and

a server configured to transmit the new integrated video stream to a display device that displays the image data with at least one electronic indicator, which is based on the coordinate position data, for the identified individual item such that the individual identified item is active in the image data and configured to be selected by a user to view the accessed information related to the identified individual item.


 
2. The system of claim 1, wherein the image data is a video feed.
 
3. The system of claim 2, wherein the video feed is live.
 
4. The system of claim 2, wherein the video feed is prerecorded.
 
5. The system of claim 1, wherein the image data is a picture.
 
6. The system of claim 1, wherein the image data is analog data.
 
7. The system of claim 1, wherein the image data is digital data.
 
8. The system of claim 1, wherein the at least one processor comprises:

a first processor configured to scan the image data contained in the video source content and identify the individual item in the image data that matches the at least one digital image of the plurality of digital images stored in the at least one electronic database,

a second processor configured to access the information stored in the at least one electronic database that is related to the digital image that matches the identified individual item,

a third processor configured to generate coordinate position data indicating the position of the identified individual item in the video source content, and

a fourth processor configured to generate the new integrated video stream by merging the image data with the accessed information related to the identified individual item and the coordinate position data of the identified individual item.


 
9. The system of claim 1, wherein the at least one electronic database comprises a first electronic database storing the plurality of digital images and a second electronic database storing the information related to each of the plurality of digital images.
 
10. The system of claim 1, wherein the processor is further configured to initiate an electronic shopping interaction in response to a user's selection of the identified individual item that is active.
 
11. The system of claim 1, wherein the processor is further configured to display the accessed information related to the identified individual item in response to a user's selection of the identified individual item that is selectable.
 
12. The system of claim 1, wherein the processor is further configured to update the coordinate position data indicating the position of the identified individual item.
 
13. The system of claim 1, wherein the display device is at least one of a computer, a smartphone, a tablet, a PDA, a television, a viewing device with a set-top box type processor, a Blu-ray player, and a video game console.
 
14. The system of claim 1, wherein the individual identified item that is selectable is configured to be selected by at least one of the display device or a user of the display device.
 
15. The system of claim 1, wherein the processor is further configured to scan the image data contained in the video source content and identify a plurality of individual elements in the image data that match a plurality of respective digital images stored in the at least one electronic database.
 
16. A method for recognizing individual items in image data contained in video source content and delivering related information, the method comprising:

scanning the image data contained in the video source content;

identifying an individual item in the image data of the video source content that matches one of a plurality of digital images stored in at least one electronic database by comparing individual elements in the image data of the source content with the plurality of digital images stored in the at least one electronic database to identify a matched individual item when the individual elements of the source image data match at least one of the plurality of digital images stored in the at least one electronic database;

accessing the information stored in the at least one electronic database that is related to the digital image that matches the identified individual item;

generating coordinate position data indicating the position of the identified individual item in the video source content;

generating a new integrated video stream by merging the image data contained in the video source content with the accessed information related to the identified individual item and the coordinate position data of the identified individual item in the video source content; and

transmitting the new integrated video stream to a display device that displays the image data with at least one electronic indicator, which is based on the coordinate position data, for the identified individual item such that the individual identified item is active in the image data and configured to be selected by a user to view the accessed information related to the identified individual item.


 
17. The method of claim 16, further comprising initiating an electronic shopping interaction in response to a user's selection of the identified individual item that is active.
 
18. The method of claim 16, further comprising displaying the information related to the identified individual item in response to a user's selection of the identified individual item that is active.
 
19. The method of claim 16, further comprising updating the coordinate position data indicating the position of the identified individual item.
 
20. The method of claim 16, further comprising selecting the individual identified item by at least one of the display device or a user of the display device.
 
21. The method of claim 16, further comprising scanning the image data contained in the video source content and identifying a plurality of individual elements in the image data that match a plurality of respective digital images stored in the at least one electronic database.
 


Ansprüche

1. System zum Erkennen einzelner Objekte in Bilddaten, die in einem Videoquelleninhalt enthalten sind, und zum Liefern damit zusammenhängender Informationen, wobei das System umfasst:

mindestens eine elektronische Datenbank, die eine Vielzahl von digitalen Bildern und Informationen, die mit jedem der Vielzahl von digitalen Bildern zusammenhängen, speichert;

mindestens einen Prozessor, der mit der mindestens einen elektronischen Datenbank kommunikationstechnisch gekoppelt ist, wobei der mindestens eine Prozessor konfiguriert ist, um:

(1) die Bilddaten, die in dem Videoquelleninhalt enthalten sind, abzutasten und ein einzelnes Objekt in den Bilddaten des Videoquelleninhalts, das mit einem der Vielzahl von digitalen Bildern übereinstimmt, die in der mindestens einen elektronischen Datenbank gespeichert sind, zu identifizieren, indem Elemente in den Bilddaten des Quelleninhalts mit der Vielzahl von digitalen Bildern, die in mindestens einer elektronischen Datenbank gespeichert sind, verglichen werden, um ein übereinstimmendes einzelnes Objekt zu identifizieren, wenn die einzelnen Elemente der Quellenbilddaten mit mindestens einem der Vielzahl von digitalen Bildern übereinstimmen, die in der mindestens einen elektronischen Datenbank gespeichert sind,

(2) auf die in der mindestens einen elektronischen Datenbank gespeicherten Informationen, die mit dem digitalen Bild zusammenhängen, das mit dem identifizierten einzelnen Objekt übereinstimmt, zuzugreifen,

(3) Koordinatenpositionsdaten zu erzeugen, die eine Position des identifizierten einzelnen Objekts in dem Videoquelleninhalt angeben, und

(4) einen neuen integrierten Videostrom zu erzeugen, indem die in dem Videoquelleninhalt enthaltenen Bilddaten mit den abgerufenen Informationen, die mit dem identifizierten einzelnen Objekt zusammenhängen, und die Koordinatenpositionsdaten des identifizierten einzelnen Objekts zusammengeführt werden; und

einen Server, der so konfiguriert ist, dass er den neuen integrierten Videostrom an eine Anzeigevorrichtung überträgt, die die Bilddaten mit mindestens einer elektronischen Anzeige, die auf den Koordinatenpositionsdaten basiert, für das identifizierte einzelne Objekt so anzeigt, dass das identifizierte einzelne Objekt in den Bilddaten aktiv ist, und so konfiguriert ist, dass es von einem Benutzer ausgewählt werden kann, um die abgerufenen Informationen anzuzeigen, die mit dem identifizierten einzelnen Objekt zusammenhängen.


 
2. System nach Anspruch 1, wobei die Bilddaten eine Videoübertragung sind.
 
3. System nach Anspruch 2, wobei die Videoübertragung live erfolgt.
 
4. System nach Anspruch 2, wobei die Videoübertragung vorab aufgezeichnet ist.
 
5. System nach Anspruch 1, wobei die Bilddaten ein Bild sind.
 
6. System nach Anspruch 1, wobei die Bilddaten analoge Daten sind.
 
7. System nach Anspruch 1, wobei die Bilddaten digitale Daten sind.
 
8. System nach Anspruch 1, wobei der mindestens eine Prozessor umfasst:

einen ersten Prozessor, der so konfiguriert ist, dass er die in dem Videoquelleninhalt enthaltenen Bilddaten abtastet und das einzelne Objekt in den Bilddaten identifiziert, das mit dem mindestens einen digitalen Bild der Vielzahl von digitalen Bildern übereinstimmt, die in der mindestens einen elektronischen Datenbank gespeichert sind,

einen zweiten Prozessor, der so konfiguriert ist, dass er auf die in der mindestens einen elektronischen Datenbank gespeicherten Informationen zugreift, die mit dem digitalen Bild zusammenhängen, das mit dem identifizierten einzelnen Objekt übereinstimmt,

einen dritten Prozessor, der so konfiguriert ist, dass er Koordinatenpositionsdaten erzeugt, die die Position des identifizierten einzelnen Objekts in dem Videoquelleninhalt angeben, und

einen vierten Prozessor, der so konfiguriert ist, dass er den neuen integrierten Videostrom erzeugt, indem er die Bilddaten mit den abgerufenen Informationen, die mit dem identifizierten einzelnen Objekt zusammenhängen, und die Koordinatenpositionsdaten des identifizierten einzelnen Objekts zusammenführt.


 
9. System nach Anspruch 1, wobei die mindestens eine elektronische Datenbank eine erste elektronische Datenbank, in der die Vielzahl von digitalen Bildern gespeichert ist, und eine zweite elektronische Datenbank, in der die Informationen gespeichert sind, die mit jedem der Vielzahl von digitalen Bildern zusammenhängen, umfasst.
 
10. System nach Anspruch 1, wobei der Prozessor ferner so konfiguriert ist, dass er eine elektronische Einkaufsinteraktion als Reaktion auf die Auswahl des identifizierten einzelnen Objekts, das aktiv ist, durch einen Benutzer auslöst.
 
11. System nach Anspruch 1, wobei der Prozessor ferner so konfiguriert ist, dass er die abgerufenen Informationen anzeigt, die mit dem identifizierten mindestens einen Objekt zusammenhängen, als Reaktion auf die Auswahl des identifizierten einzelnen Objekts, das auswählbar ist, durch einen Benutzer.
 
12. System nach Anspruch 1, wobei der Prozessor ferner so konfiguriert ist, dass er die Koordinatenpositionsdaten aktualisiert, die die Position des identifizierten einzelnen Objekts angeben.
 
13. System nach Anspruch 1, wobei die Anzeigevorrichtung ein Computer und/oder ein Smartphone und/oder ein Tablet-Computer und/oder ein PDA und/oder ein Fernseher und/oder eine Betrachtungsvorrichtung mit einem Prozessor vom Set-Top-Box-Typ und/oder ein Blu-ray-Player und/oder eine Videospielkonsole ist.
 
14. System nach Anspruch 1, wobei das einzelne identifizierte Objekt, das auswählbar ist, so konfiguriert ist, dass es von der Anzeigevorrichtung und/oder einem Benutzer der Anzeigevorrichtung ausgewählt werden kann.
 
15. System nach Anspruch 1, wobei der Prozessor ferner so konfiguriert ist, dass er die in dem Videoquelleninhalt enthaltenen Bilddaten abtastet und eine Vielzahl von einzelnen Elementen in den Bilddaten identifiziert, die mit einer Vielzahl von entsprechenden digitalen Bildern übereinstimmen, die in der mindestens einen elektronischen Datenbank gespeichert sind.
 
16. Verfahren zum Erkennen einzelner Objekte in Bilddaten, die in einem Videoquelleninhalt enthalten sind, und zum Liefern damit zusammenhängender Informationen, wobei das Verfahren umfasst:

Abtasten der in dem Videoquelleninhalt enthaltenen Bilddaten;

Identifizieren eines einzelnen Objekts in den Bilddaten des Videoquelleninhalts, das mit einem aus einer Vielzahl von digitalen Bildern übereinstimmt, die in mindestens einer elektronischen Datenbank gespeichert sind, durch Vergleichen einzelner Elemente in den Bilddaten des Quelleninhalts mit der Vielzahl von digitalen Bildern, die in mindestens einer elektronischen Datenbank gespeichert sind, um ein übereinstimmendes einzelnes Objekt zu identifizieren, wenn die einzelnen Elemente der Quellenbilddaten mit mindestens einem aus der Vielzahl von digitalen Bildern übereinstimmen, die in der mindestens einen elektronischen Datenbank gespeichert sind;

Zugreifen auf die in der mindestens einen elektronischen Datenbank gespeicherten Informationen, die mit dem digitalen Bild zusammenhängen, das mit dem identifizierten einzelnen Objekt übereinstimmt;

Erzeugen von Koordinatenpositionsdaten, die die Position des identifizierten einzelnen Objekts in dem Videoquelleninhalt angeben;

Erzeugen eines neuen integrierten Videostroms, indem die in dem Videoquelleninhalt enthaltenen Bilddaten mit den abgerufenen Informationen, die mit dem identifizierten einzelnen Objekt zusammenhängen, und die Koordinatenpositionsdaten des identifizierten einzelnen Objekts in dem Videoquelleninhalt zusammengeführt werden; und

Übertragen des neuen integrierten Videostroms an eine Anzeigevorrichtung, die die Bilddaten mit mindestens einer elektronischen Anzeige, die auf den Koordinatenpositionsdaten basiert, für das identifizierte einzelne Objekt so anzeigt, dass das einzelne identifizierte Objekt in den Bilddaten aktiv ist und so konfiguriert ist, dass es von einem Benutzer ausgewählt werden kann, um die abgerufenen Informationen zu betrachten, die mit dem identifizierten einzelnen Objekt zusammenhängen.


 
17. Verfahren nach Anspruch 16, das ferner das Einleiten einer elektronischen Einkaufsinteraktion als Reaktion auf die Auswahl des identifizierten einzelnen Objekts, das aktiv ist, durch einen Benutzer umfasst.
 
18. Verfahren nach Anspruch 16, das ferner das Anzeigen der Informationen, die mit dem identifizierten einzelnen Objekt zusammenhängen, als Reaktion auf die Auswahl des identifizierten einzelnen Objekts, das aktiv ist, durch einen Benutzer umfasst.
 
19. Verfahren nach Anspruch 16, das ferner das Aktualisieren der Koordinatenpositionsdaten umfasst, die die Position des identifizierten einzelnen Objekts angeben.
 
20. Verfahren nach Anspruch 16, das ferner das Auswählen des einzelnen identifizierten Objekts durch die Anzeigevorrichtung und/ oder einen Benutzer der Anzeigevorrichtung umfasst.
 
21. Verfahren nach Anspruch 16, das ferner das Abtasten der in dem Videoquelleninhalt enthaltenen Bilddaten und das Identifizieren einer Vielzahl von einzelnen Elementen in den Bilddaten, die mit einer Vielzahl von jeweiligen digitalen Bildern übereinstimmen, die in der mindestens einen elektronischen Datenbank gespeichert sind, umfasst.
 


Revendications

1. Système pour reconnaître des éléments individuels dans des données d'image contenues dans un contenu source vidéo et pour délivrer des informations qui y sont liées, le système comprenant :

au moins une base de données électronique stockant une pluralité d'images numériques et des informations liées à chacune de la pluralité d'images numériques ;

au moins un processeur couplé en communication à ladite au moins une base de données électronique, ledit au moins un processeur étant configuré pour :

(1) balayer les données d'image contenues dans le contenu source vidéo et identifier un élément individuel dans les données d'image du contenu source vidéo qui correspond à l'une de la pluralité d'images numériques stockées dans ladite au moins une base de données électronique en comparant des éléments dans les données d'image du contenu source avec la pluralité d'images numériques stockées dans ladite au moins une base de données électronique pour identifier un élément individuel correspondant lorsque les éléments individuels des données d'image source correspondent à au moins une de la pluralité d'images numériques stockées dans ladite au moins une base de données électronique,

(2) accéder aux informations stockées dans ladite au moins une base de données électronique qui sont liées à l'image numérique qui correspond à l'élément individuel identifié,

(3) générer des données de position en coordonnées indiquant une position de l'élément individuel identifié dans le contenu source vidéo, et

(4) générer un nouveau flux vidéo intégré en fusionnant les données d'image contenues dans le contenu source vidéo avec les informations accédées liées à l'élément individuel identifié et les données de position en coordonnées de l'élément individuel identifié ; et

un serveur configuré pour transmettre le nouveau flux vidéo intégré à un dispositif d'affichage qui affiche les données d'image avec au moins un indicateur électronique, qui est basé sur les données de position en coordonnées, pour l'élément individuel identifié de telle sorte que l'élément individuel identifié soit actif dans les données d'image et configuré pour être sélectionné par un utilisateur pour visualiser les informations accédées liées à l'élément individuel identifié.


 
2. Système selon la revendication 1, dans lequel les données d'image sont un flux vidéo.
 
3. Système selon la revendication 2, dans lequel le flux vidéo est en direct.
 
4. Système selon la revendication 2, dans lequel le flux vidéo est préenregistré.
 
5. Système selon la revendication 1, dans lequel les données d'image sont une image.
 
6. Système selon la revendication 1, dans lequel les données d'image sont des données analogiques.
 
7. Système selon la revendication 1, dans lequel les données d'image sont des données numériques.
 
8. Système selon la revendication 1, dans lequel ledit au moins un processeur comprend :

un premier processeur configuré pour balayer les données d'image contenues dans le contenu source vidéo et identifier l'élément individuel dans les données d'image qui correspond à ladite au moins une image numérique de la pluralité d'images numériques stockées dans ladite au moins une base de données électronique,

un deuxième processeur configuré pour accéder aux informations stockées dans ladite au moins une base de données électronique qui sont liées à l'image numérique qui correspond à l'élément individuel identifié,

un troisième processeur configuré pour générer des données de position en coordonnées indiquant la position de l'élément individuel identifié dans le contenu source vidéo et

un quatrième processeur configuré pour générer le nouveau flux vidéo intégré en fusionnant les données d'image avec les informations accédées liées à l'élément individuel identifié et les données de position en coordonnées de l'élément individuel identifié.


 
9. Système selon la revendication 1, dans lequel ladite au moins une base de données électronique comprend une première base de données électronique stockant la pluralité d'images numériques et une deuxième base de données électronique stockant les informations liées à chacune de la pluralité d'images numériques.
 
10. Système selon la revendication 1, dans lequel le processeur est en outre configuré pour lancer une interaction d'achat électronique en réponse à une sélection par l'utilisateur de l'élément individuel identifié qui est actif.
 
11. Système selon la revendication 1, dans lequel le processeur est en outre configuré pour afficher les informations accédées liées audit au moins un élément identifié en réponse à une sélection par l'utilisateur de l'élément individuel identifié qui est sélectionnable.
 
12. Système selon la revendication 1, dans lequel le processeur est en outre configuré pour mettre à jour les données de position en coordonnées indiquant la position de l'élément individuel identifié.
 
13. Système selon la revendication 1, dans lequel le dispositif d'affichage est au moins un dispositif parmi un ordinateur, un smartphone, une tablette, un PDA, une télévision, un dispositif de visualisation avec un processeur de type boîtier décodeur, un lecteur Blu-ray et une console de jeux vidéo.
 
14. Système selon la revendication 1, dans lequel l'élément individuel identifié qui est sélectionnable est configuré pour être sélectionné par le dispositif d'affichage et/ou un utilisateur du dispositif d'affichage.
 
15. Système selon la revendication 1, dans lequel le processeur est en outre configuré pour balayer les données d'image contenues dans le contenu source vidéo et identifier une pluralité d'éléments individuels dans les données d'image qui correspondent à une pluralité d'images numériques respectives stockées dans ladite au moins une base de données électronique.
 
16. Procédé pour reconnaître des éléments individuels dans des données d'image contenues dans un contenu de source vidéo et pour délivrer des informations qui y sont liées, le procédé comprenant les étapes consistant à :

balayer les données d'image contenues dans le contenu vidéo source ;

identifier un élément individuel dans les données d'image du contenu vidéo source qui correspond à l'une d'une pluralité d'images numériques stockées dans au moins une base de données électronique en comparant des éléments individuels dans les données d'image du contenu source avec la pluralité d'images numériques stockées dans au moins une base de données électronique pour identifier un élément individuel correspondant lorsque les éléments individuels des données d'image source correspondent à au moins une de la pluralité d'images numériques stockées dans ladite au moins une base de données électronique ;

accéder aux informations stockées dans ladite au moins une base de données électronique qui sont liées à l'image numérique qui correspond à l'élément individuel identifié ;

générer des données de position en coordonnées indiquant la position de l'élément individuel identifié dans le contenu source vidéo ;

générer un nouveau flux vidéo intégré en fusionnant les données d'image contenues dans le contenu source vidéo avec les informations accédées liées à l'élément individuel identifié et les données de position en coordonnées de l'élément individuel identifié dans le contenu source vidéo ; et

transmettre le nouveau flux vidéo intégré à un dispositif d'affichage qui affiche les données d'image avec au moins un indicateur électronique, qui est basé sur les données de position en coordonnées, pour l'élément individuel identifié de telle sorte que l'élément individuel identifié soit actif dans les données d'image et configuré pour être sélectionné par un utilisateur pour visualiser les informations accédées liées à l'élément individuel identifié.


 
17. Procédé selon la revendication 16, consistant en outre à lancer une interaction d'achat électronique en réponse à une sélection par l'utilisateur de l'élément individuel identifié qui est actif.
 
18. Procédé selon la revendication 16, consistant en outre à afficher les informations liées à l'élément individuel identifié en réponse à la sélection par l'utilisateur de l'élément individuel identifié qui est actif.
 
19. Procédé selon la revendication 16, consistant en outre à mettre à jour les données de position en coordonnées indiquant la position de l'élément individuel identifié.
 
20. Procédé selon la revendication 16, consistant en outre à sélectionner l'élément individuel identifié par le dispositif d'affichage et/ou un utilisateur du dispositif d'affichage.
 
21. Procédé selon la revendication 16, consistant en outre à balayer les données d'image contenues dans le contenu source vidéo et à identifier une pluralité d'éléments individuels dans les données d'image qui correspondent à une pluralité d'images numériques respectives stockées dans ladite au moins une base de données électronique.
 




Drawing

[Figure 1: block diagram of system 100 for recognizing items in media data and delivering related information]

[Figure 2: flowchart of method 200 for recognizing items in media data and delivering related information]

Cited references

REFERENCES CITED IN THE DESCRIPTION



This list of references cited by the applicant is for the reader's convenience only. It does not form part of the European patent document. Even though great care has been taken in compiling the references, errors or omissions cannot be excluded and the EPO disclaims all liability in this regard.

Patent documents cited in the description

• WO 2010/120901 A1
• US 7,752,083, Johnson et al.
• US 7,756,758
• US 2010/0138875 A1

Non-patent literature cited in the description

• RUHAN HE et al. Garment Image Retrieval on the Web with Ubiquitous Camera-Phone. Proceedings of the 2008 IEEE Asia-Pacific Services Computing Conference (APSCC 2008), 09 December 2008, 1584-1589
• GUANNAN ZHAO et al. Style matching model-based recommend system for online shopping. 2009 IEEE 10th International Conference on Computer-Aided Industrial Design & Conceptual Design (CAID&CD 2009), 26 November 2009, 1995-1999