<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE ep-patent-document PUBLIC "-//EPO//EP PATENT DOCUMENT 1.5//EN" "ep-patent-document-v1-5.dtd">
<!-- This XML data has been generated under the supervision of the European Patent Office -->
<ep-patent-document id="EP19218995A1" file="EP19218995NWA1.xml" lang="en" country="EP" doc-number="3838427" kind="A1" date-publ="20210623" status="n" dtd-version="ep-patent-document-v1-5">
<SDOBI lang="en"><B000><eptags><B001EP>ATBECHDEDKESFRGBGRITLILUNLSEMCPTIESILTLVFIROMKCYALTRBGCZEEHUPLSKBAHRIS..MTNORSMESMMAKHTNMD..........</B001EP><B005EP>J</B005EP><B007EP>BDM Ver 1.7.2 (20 November 2019) -  1100000/0</B007EP></eptags></B000><B100><B110>3838427</B110><B120><B121>EUROPEAN PATENT APPLICATION</B121></B120><B130>A1</B130><B140><date>20210623</date></B140><B190>EP</B190></B100><B200><B210>19218995.9</B210><B220><date>20191220</date></B220><B250>en</B250><B251EP>en</B251EP><B260>en</B260></B200><B400><B405><date>20210623</date><bnum>202125</bnum></B405><B430><date>20210623</date><bnum>202125</bnum></B430></B400><B500><B510EP><classification-ipcr sequence="1"><text>B07C   5/342       20060101AFI20201112BHEP        </text></classification-ipcr><classification-ipcr sequence="2"><text>B07C   5/34        20060101ALI20201112BHEP        </text></classification-ipcr></B510EP><B540><B541>de</B541><B542>VERFAHREN ZUM SORTIEREN VON AUF EINEM FÖRDERBAND BEWEGTEN OBJEKTEN</B542><B541>en</B541><B542>A METHOD FOR SORTING OBJECTS TRAVELLING ON A CONVEYOR BELT</B542><B541>fr</B541><B542>PROCÉDÉ DE TRI D'OBJETS SE DÉPLAÇANT SUR UNE BANDE TRANSPORTEUSE</B542></B540><B590><B598>1</B598></B590></B500><B700><B710><B711><snm>IHP Systems A/S</snm><iid>101847829</iid><irf>159800 SH/KW/MQ</irf><adr><str>Titangade 9C</str><city>2200 Copenhagen N</city><ctry>DK</ctry></adr></B711></B710><B720><B721><snm>Mensal, Lars</snm><adr><str>Rønnebærvej 106</str><city>2840 Holte</city><ctry>DK</ctry></adr></B721><B721><snm>Andersen, Jesper Stemann</snm><adr><str>Mathildevej 4, st. 
th.</str><city>2000 Frederiksberg</city><ctry>DK</ctry></adr></B721></B720><B740><B741><snm>Budde Schou A/S</snm><iid>101417042</iid><adr><str>Dronningens Tvaergade 30</str><city>1302 Copenhagen K</city><ctry>DK</ctry></adr></B741></B740></B700><B800><B840><ctry>AL</ctry><ctry>AT</ctry><ctry>BE</ctry><ctry>BG</ctry><ctry>CH</ctry><ctry>CY</ctry><ctry>CZ</ctry><ctry>DE</ctry><ctry>DK</ctry><ctry>EE</ctry><ctry>ES</ctry><ctry>FI</ctry><ctry>FR</ctry><ctry>GB</ctry><ctry>GR</ctry><ctry>HR</ctry><ctry>HU</ctry><ctry>IE</ctry><ctry>IS</ctry><ctry>IT</ctry><ctry>LI</ctry><ctry>LT</ctry><ctry>LU</ctry><ctry>LV</ctry><ctry>MC</ctry><ctry>MK</ctry><ctry>MT</ctry><ctry>NL</ctry><ctry>NO</ctry><ctry>PL</ctry><ctry>PT</ctry><ctry>RO</ctry><ctry>RS</ctry><ctry>SE</ctry><ctry>SI</ctry><ctry>SK</ctry><ctry>SM</ctry><ctry>TR</ctry></B840><B844EP><B845EP><ctry>BA</ctry></B845EP><B845EP><ctry>ME</ctry></B845EP></B844EP><B848EP><B849EP><ctry>KH</ctry></B849EP><B849EP><ctry>MA</ctry></B849EP><B849EP><ctry>MD</ctry></B849EP><B849EP><ctry>TN</ctry></B849EP></B848EP></B800></SDOBI>
<abstract id="abst" lang="en">
<p id="pa01" num="0001">The present invention relates to a method for sorting objects. The method employs at least one imaging sensor and a controller comprising a processor and a memory storage, wherein the controller receives image data captured by the at least one imaging sensor; and at least one sorting robot coupled to the controller, wherein the at least one sorting robot is configured to receive an actuation signal from the controller. The processor executes an object identification module configured to detect objects travelling on a conveyor belt, to recognize at least one target item travelling on the conveyor belt by processing the image data, and to determine an expected time when the at least one target item will be located within a diversion path of the sorting robot; and the controller selectively generates the actuation signal based on whether a sensed object detected in the image data comprises the at least one target item.
<img id="iaf01" file="imgaf001.tif" wi="147" he="102" img-content="drawing" img-format="tif"/></p>
</abstract>
<description id="desc" lang="en"><!-- EPO <DP n="1"> -->
<p id="p0001" num="0001">The present invention relates to a method for sorting objects travelling on a conveyor belt, where image data is captured by at least one imaging sensor for an image comprising at least one object travelling on the conveyor belt, and where the imaging sensor provides color image data.</p>
<heading id="h0001"><u>BACKGROUND ART</u></heading>
<p id="p0002" num="0002">In many recycling centers that receive recyclable materials, sortation of materials may be done by hand or by machines. For example, a stream of materials may be carried by a conveyor belt, and the operator of the recycling center may need to direct a certain fraction of the material into a bin or otherwise off the current conveyor. These conventional sorting systems are large and consequently lack flexibility. Moreover, they lack the ability to be used in recycling facilities that handle various types of items such as plastic bottles, aluminum cans, cardboard cartons, and other recyclable items, or to be readily updated to handle new or different materials. It is also known to use automated solutions employing sensors or cameras to identify materials carried on a conveyor belt, which via a controller may activate a sorting mechanism. However, these newer solutions do not always function perfectly.</p>
<p id="p0003" num="0003">The conventional plastic sorting solutions are based on near-infrared / short-wave-infrared (NIR/SWIR) spectrometry, where e.g. a NIR/SWIR reflection spectrum is collected for each plastic object and the spectrum identifies the material type of the plastic object - which determines the sorting.<br/>
The NIR/SWIR-spectrometric sorting systems are unable to handle dark and black plastics as all dark and black plastics return the same flat spectrum in the NIR/SWIR-range regardless of the material type. Moreover, NIR/SWIR-systems also cannot discriminate properly between white and transparent plastics, which is important for proper recycling. Another drawback of the spectrometric systems is that the system cannot sort waste by application - e.g. they cannot sort food from non-food plastics.<!-- EPO <DP n="2"> --></p>
<p id="p0004" num="0004">Finally, spectrometric systems are also challenged by composite plastic objects, e.g. a bottle with a bottle cap and a foil covering the bottle - the spectrometric system might sort the object based on the foil.</p>
<heading id="h0002"><u>DISCLOSURE OF THE INVENTION</u></heading>
<p id="p0005" num="0005">An object of the present invention is to provide a method for identifying and sorting waste material in a more precise manner.</p>
<p id="p0006" num="0006">A further object is to provide a cost-effective and efficient method of identifying and sorting waste material, in particular waste material comprising plastic.</p>
<p id="p0007" num="0007">Normally, when waste and garbage is collected, an initial sorting into different material categories is performed. The categories may e.g. be glass, metal, plastic, cardboard, paper and biological waste. When the waste reaches the recycling center, each material fraction is normally sorted into even finer fractions. The metal fraction may be sorted into aluminium and iron fractions, and plastic into fractions based on different plastic types such as PE or PP, or into fractions with soft and hard plastic.</p>
<p id="p0008" num="0008">The present invention relates to a method for sorting objects travelling on a conveyor belt,<br/>
the method comprising:
<ul id="ul0001" list-style="none">
<li>receiving image data captured by at least one imaging sensor for an image comprising at least one object travelling on the conveyor belt, said imaging sensor providing color image data with a spatial resolution of at least 0.4 px/mm;</li>
<li>executing a product detection and recognition module on a processor, the product detection and recognition module being configured to detect characteristics of the at least one object travelling on the conveyor belt by processing the image data;</li>
<li>determining an expected time when the at least one object will be located within a sorting area of at least one sorting device; and</li>
<li>selectively generating a robot control signal to operate the at least one sorting device based on whether the at least one object comprises a target object.</li>
</ul><!-- EPO <DP n="3"> --></p>
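The timing step of the method above can be sketched in code. This is a minimal illustrative sketch only, not the patented implementation: the data class, function names, and the assumption that position is measured along the belt in millimetres are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    x_mm: float   # object position along the belt at image-capture time (assumed)
    label: str    # category recognized by the detection module

def expected_arrival_s(detection: Detection, capture_t_s: float,
                       sorter_pos_mm: float, belt_speed_mm_s: float) -> float:
    """Expected time at which the detected object is located within the
    sorting area, given a constant belt speed reported by an encoder."""
    distance_mm = sorter_pos_mm - detection.x_mm
    return capture_t_s + distance_mm / belt_speed_mm_s
```

For example, an object imaged 100 mm along the belt, with the sorting area at 600 mm and a belt speed of 250 mm/s, is expected in the sorting area 2 s after capture.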
<p id="p0009" num="0009">In this context the term "sorting device" includes robots, mechanical actuators, solenoid-based actuators, air jet nozzles, etc.</p>
<p id="p0010" num="0010">The terms "object", "item" and "product" and their plural forms are used interchangeably in this text.</p>
<p id="p0011" num="0011">The imaging sensor is preferably a camera which is able to provide color images in environments with low light intensity, e.g. light intensities around 500 lumen. Preferably, the camera operates at light intensities around 1000 lumen or more, such as 1500 lumen or more.</p>
<p id="p0012" num="0012">In an embodiment the target object is guided to a collection device in the sorting area by means of the sorting device. The sorting robot may control e.g. a pusher device or air jet nozzles which are suitable for guiding the target object to a collection device.</p>
<p id="p0013" num="0013">In an embodiment of the method according to the invention, the characteristics of the at least one object travelling on the conveyor belt is the physical appearance or shape of the object. Thus, the method is capable of identifying objects based on their design features.</p>
<p id="p0014" num="0014">In an embodiment of the method according to the invention, the characteristics of the at least one object travelling on the conveyor belt is the color and/or transparency of the object. Thus, the method is also suitable for detecting objects based on their color or transparency.</p>
<p id="p0015" num="0015">In an embodiment the characteristics of the at least one object travelling on the conveyor belt is selected from vendor names, brand names, product names, trademarks, logos, symbols, slogans or a combination of two or more of the characteristics. The product detection and recognition module may interact with one or more databases comprising information about vendor names, brand names, product names, trademarks, and slogans and retrieve information from these databases to identify objects.</p>
<p id="p0016" num="0016">In respect of the three above-mentioned embodiments, it is clear that their features may be combined in any desirable manner.<!-- EPO <DP n="4"> --></p>
<p id="p0017" num="0017">For the purpose of obtaining a more precise identification the product detection and recognition module may apply two or more characteristics in the product detection and recognition process.</p>
<p id="p0018" num="0018">In an embodiment the imaging sensor has a spatial resolution of at least 2 px/mm (pixels/mm). With such a spatial resolution the imaging sensor is able to provide very detailed images.</p>
<p id="p0019" num="0019">In an embodiment the spatial resolution is at least 4 px/mm. When the spatial resolution is about 4 px/mm or more, the imaging sensor is able to detect very small scale details, such as logos with an extent of about 5 mm or less.</p>
<p id="p0020" num="0020">In an embodiment the method is adapted for detecting and recognizing objects used as packaging or containers for food items, such as bottles and trays. The objects may e.g. be bottles for juice and soft drinks made from plastic, such as transparent plastic. The object may also be a tray used for e.g. meat or biscuits. The trays may e.g. be made from plastic material in any desired colors. The trays may be marked with a "fork and knife" logo indicating the tray is for use with foodstuff.</p>
<p id="p0021" num="0021">In an embodiment the method is adapted for detecting and recognizing black objects. Black objects are difficult to detect due to the low reflection from the material; however, the method according to the invention has proven to be surprisingly efficient in detecting and recognizing black objects. The black object may e.g. be made from plastic, which it is desirable to sort properly. Preferably the black object is a tray for food, such as a plastic tray for meat.</p>
<p id="p0022" num="0022">In one aspect of the method the detection and recognition of objects are based on the detection and recognition module's interaction with one or more databases, such as databases comprising information about e.g. specific products (such as the materials used in the product), vendor names, brand names, product names, trademarks, and slogans.</p>
<p id="p0023" num="0023">The method may also apply a convolutional neural network.<!-- EPO <DP n="5"> --></p>
<p id="p0024" num="0024">Thus, in an embodiment of the method according to the invention, the product detection and recognition involves a convolutional neural network.</p>
<p id="p0025" num="0025">For the convolutional neural network to be used for identification of items/objects learned during training operations, the method proceeds with an inference process in which, during operation, the neural network parameters are loaded into a computer processor (such as the processor mentioned above) running a neural network program that implements the convolutional neural network. During operation, the processor may then receive images from the imaging sensor and pass each image through the convolutional neural network program. The convolutional neural network then outputs a decision indicating, for example, the type of object present in the image with the highest likelihood.</p>
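The inference step described above can be sketched as follows. This is a toy illustration under heavy simplifying assumptions: the network is reduced to a single dense layer, and the category list and parameter shapes are hypothetical, not taken from the application.

```python
import numpy as np

# Illustrative category-to-neuron mapping (assumed; see the training example
# in the description where 0 = conveyor belt, 1 = carton, 2 = bottle).
CATEGORIES = ["conveyor belt", "carton", "transparent plastic bottle"]

def infer(image: np.ndarray, weights: np.ndarray, bias: np.ndarray) -> str:
    """Forward pass of a toy one-layer 'network': flatten the image,
    score each output neuron, and return the most likely category."""
    scores = weights @ image.ravel() + bias
    return CATEGORIES[int(np.argmax(scores))]
```

In a real system the loaded parameters would define many convolutional layers; the decision step (argmax over output neurons) is the same in spirit.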
<p id="p0026" num="0026">In a training operation, the labeled data is used by a training algorithm (which may be performed by a training processor) to optimize the convolutional neural network to identify the object in the captured images with the greatest feasible accuracy. As would be readily appreciated by one of ordinary skill in the art, a number of algorithms may be utilized to perform this optimization, such as Stochastic Gradient Descent, Nesterov's Accelerated Gradient Method, the Adam optimization algorithm, or other well-known methods. In Stochastic Gradient Descent, a random collection of the labeled images is fed through the network. The error of the output neurons is used to construct an error gradient for all the neuron parameters in the network. The parameters are then adjusted using this gradient by subtracting the gradient multiplied by a small constant called the "learning rate". These new parameters may then be used for the next step of Stochastic Gradient Descent, and the process repeated.</p>
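The parameter update described above (subtract the gradient scaled by the learning rate) can be written in one line. The sketch below is generic Stochastic Gradient Descent; the default learning rate is an arbitrary illustrative value, not one from the application.

```python
import numpy as np

def sgd_step(params: np.ndarray, grad: np.ndarray,
             learning_rate: float = 0.01) -> np.ndarray:
    """One Stochastic Gradient Descent update:
    params <- params - learning_rate * gradient."""
    return params - learning_rate * grad
```

Repeating this step over random mini-batches of labeled images, each time recomputing the gradient from the output-neuron error, is the training loop the paragraph outlines.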
<p id="p0027" num="0027">The result of the optimization includes a set of convolutional neural network parameters (which are stored in a memory) that allow the convolutional neural network to determine the presence of an object in an image. During operation, the neural network parameters may be stored on digital media. In an example of implementation, the training process may be performed by creating a collection of images of items, with each image labeled with the category of the items appearing in the image. Each of the categories can be associated with a number, for instance the conveyor belt might be 0, a carton 1, a transparent plastic bottle 2, etc. The convolutional neural network would then comprise a series of output neurons, with<!-- EPO <DP n="6"> --> each neuron associated with one of the categories. Thus, neuron 0 is the neuron representing the presence of conveyor belt, neuron 1 represents the presence of a carton, neuron 2 represents the presence of a transparent plastic bottle, and so forth for other categories.</p>
<p id="p0028" num="0028">The method may be designed to detect and recognize waste objects using very specific, product-specific categories, i.e. to classify each waste object as belonging to a specific vendor, brand, product and/or application (food, cosmetics, other). This may be enabled by e.g. using an application/shape/color hierarchical ordering:
<ul id="ul0002" list-style="bullet" compact="compact">
<li>Food
<ul id="ul0003" list-style="none" compact="compact">
<li>∘ Bottle
<ul id="ul0004" list-style="none" compact="compact">
<li>▪ Transparent</li>
<li>▪ White</li>
<li>▪ Black</li>
<li>▪ Blue</li>
<li>▪ Green</li>
<li>▪ Red</li>
<li>▪ Other</li>
</ul></li>
<li>∘ Tray
<ul id="ul0005" list-style="none" compact="compact">
<li>▪ Transparent</li>
<li>▪ White</li>
<li>▪ Black</li>
<li>▪ Blue</li>
<li>▪ Green</li>
<li>▪ Red</li>
<li>▪ Other</li>
</ul></li>
<li>∘ Other
<ul id="ul0006" list-style="none" compact="compact">
<li>▪ Transparent</li>
<li>▪ White</li>
<li>▪ Black</li>
<li>▪ Blue</li>
<li>▪ Green</li>
<li>▪ Red</li>
<li>▪ Other</li>
</ul></li>
</ul><!-- EPO <DP n="7"> --></li>
<li>Cosmetics
<ul id="ul0007" list-style="none" compact="compact">
<li>∘ Bottle
<ul id="ul0008" list-style="none" compact="compact">
<li>▪ Transparent</li>
<li>▪ White</li>
<li>▪ Black</li>
<li>▪ Blue</li>
<li>▪ Green</li>
<li>▪ Red</li>
<li>▪ Other</li>
</ul></li>
<li>∘ Other
<ul id="ul0009" list-style="none" compact="compact">
<li>▪ Transparent</li>
<li>▪ White</li>
<li>▪ Black</li>
<li>▪ Blue</li>
<li>▪ Green</li>
<li>▪ Red</li>
<li>▪ Other</li>
</ul></li>
</ul></li>
<li>Other
<ul id="ul0010" list-style="none" compact="compact">
<li>▪ Transparent</li>
<li>▪ White</li>
<li>▪ Black</li>
<li>▪ Blue</li>
<li>▪ Green</li>
<li>▪ Red</li>
<li>▪ Other</li>
</ul></li>
</ul></p>
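The hierarchy above can be flattened into one class label per output neuron, which is how a hierarchical ordering typically meets a flat classifier head. The nested structure below mirrors the list above; the "/" joining convention is an illustrative assumption.

```python
# Application / shape / color hierarchy, transcribed from the list above.
APPLICATIONS = ["Food", "Cosmetics", "Other"]
SHAPES = {
    "Food": ["Bottle", "Tray", "Other"],
    "Cosmetics": ["Bottle", "Other"],
    "Other": [None],  # the top-level "Other" has no shape sub-level
}
COLORS = ["Transparent", "White", "Black", "Blue", "Green", "Red", "Other"]

def flatten_categories() -> list[str]:
    """Enumerate every leaf of the hierarchy as a flat class label."""
    labels = []
    for app in APPLICATIONS:
        for shape in SHAPES[app]:
            for color in COLORS:
                parts = [app] + ([shape] if shape else []) + [color]
                labels.append("/".join(parts))
    return labels
```

This yields 42 leaf categories (3×7 for Food, 2×7 for Cosmetics, 7 for Other), each of which could be assigned to one output neuron as in the training example above.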
<p id="p0029" num="0029">For the convolutional neural network to be used for identification of items/materials learned during training operations, the method proceeds with an inference process where the neural network parameters are loaded into a computer processor (such as the processor mentioned above) running a neural network program that implements the convolutional neural network. During operation, the processor may then receive images from the imaging sensor and pass each image through the convolutional neural network program. The neural network then outputs a decision indicating, for example, the type of item/material present in the image with the highest likelihood.<!-- EPO <DP n="8"> --></p>
<p id="p0030" num="0030">In an embodiment of the method, the method further comprises interaction with a product database. The product database may contain information about an identified object, such as which material or materials the object is manufactured from. Such information is very useful in a sorting process.</p>
<p id="p0031" num="0031">In an embodiment the object is a plastic object. The object may be made from plastic material such as e.g. PE, PP, PS, PET, PVC, PVA or ABS. Large amounts of plastic are used today, which generates large amounts of plastic waste, and the present invention provides a method for efficient sorting of plastic material.</p>
<p id="p0032" num="0032">The invention also provides a system for sorting objects, the system comprising:
<ul id="ul0011" list-style="none">
<li>at least one imaging sensor;</li>
<li>a controller comprising a processor and a memory storage, wherein the controller receives image data captured by the at least one imaging sensor; and</li>
<li>at least one sorting robot coupled to the controller, wherein the at least one sorting robot is configured to receive an actuation signal from the controller;</li>
<li>wherein the processor executes an object identification module configured to detect objects travelling on a conveyor belt and recognize at least one target item travelling on the conveyor belt by processing the image data and to determine an expected time when the at least one target item will be located within a diversion path of the sorting robot; and</li>
<li>wherein the controller selectively generates the actuation signal based on whether a sensed object detected in the image data comprises the at least one target item.</li>
</ul></p>
<heading id="h0003"><u>DETAILED DESCRIPTION OF THE INVENTION</u></heading>
<p id="p0033" num="0033">The invention will now be described in further details with reference to drawings in which:
<dl id="dl0001" compact="compact">
<dt>Figure 1:</dt><dd>shows an embodiment with a conveyor and a robot;</dd>
<dt>Figure 2:</dt><dd>shows an embodiment with just a conveyor;</dd>
<dt>Figure 3:</dt><dd>shows an embodiment without a conveyor (or robot);<!-- EPO <DP n="9"> --></dd>
<dt>Figure 4:</dt><dd>shows a detailed view of the invention;</dd>
<dt>Figure 5:</dt><dd>shows a method for logo/symbol detection;</dd>
<dt>Figure 6:</dt><dd>shows the principles of text detection and recognition;</dd>
<dt>Figure 7:</dt><dd>illustrates the principles of neural network object detection;</dd>
<dt>Figure 8:</dt><dd>illustrates the principles of two-stage neural network object detection;</dd>
<dt>Figure 9:</dt><dd>shows an embodiment linking high resolution with a neural network; and</dd>
<dt>Figure 10:</dt><dd>shows examples of symbols, which can be detected by the method.</dd>
</dl></p>
<p id="p0034" num="0034">The figures are only intended to illustrate the principles of the invention and may not be accurate in every detail. Moreover, parts which do not form part of the invention may be omitted. The same reference numbers are used for the same parts.</p>
<p id="p0035" num="0035"><figref idref="f0001">Figure 1</figref> is a diagram showing the principles of the invention. Reference number 1 indicates the conveyor belt. Box 2 illustrates the "scene" on the conveyor belt 1, i.e. the conveyor belt with one or a number of items. The scene 2 reflects light, which is registered by the camera 3 and transformed into an image. The image is processed in a product detection and recognition module 4 to identify the item or items present in the scene 2. The information from the product detection and recognition module 4 is sent to the sorting control 5, which may obtain further information about the identified items from the product database 6.</p>
<p id="p0036" num="0036">The sorting control 5 communicates with a robot controller 7, which controls a robot 8, which is physically able to intervene in scene 2b in a sorting area on the conveyor belt 1 and sort the item or items into specific categories of waste material.</p>
<p id="p0037" num="0037">The speed of the conveyor belt 1 is monitored, and an encoder 9 sends information about the speed of the conveyor belt 1 to a synchronizer 10. The synchronizer sends signals to the camera 3 and determines how many images the camera 3 should take per second. The synchronizer also sends signals to the robot controller 7 with information about when the scene 2b reaches the sorting area. The encoder 9 may also send signals directly to the robot controller 7.</p>
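One way the synchronizer could derive the camera frame rate from the encoder's belt-speed reading is sketched below. The field-of-view length and the overlap factor are illustrative assumptions; the application does not specify this formula.

```python
def frames_per_second(belt_speed_mm_s: float, fov_length_mm: float,
                      overlap: float = 0.5) -> float:
    """Trigger rate that images every point of the belt at least once:
    each frame, the belt advances by the field of view minus the desired
    overlap, so a faster belt requires a higher frame rate."""
    advance_mm = fov_length_mm * (1.0 - overlap)  # belt travel between frames
    return belt_speed_mm_s / advance_mm
```

This matches the behaviour described above: several images per second at high belt speed, only a few images per minute when the belt is slow.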
<p id="p0038" num="0038">Scene 2a and scene 2b are in principle identical, and the reference numbers only indicate that the conveyor belt has moved the scene a distance from the point where scene 2a was registered by the camera 3.<!-- EPO <DP n="10"> --></p>
<p id="p0039" num="0039"><figref idref="f0002">Figure 2</figref> illustrates the principles of the conveyor belt information system. The speed of the conveyor belt is monitored, and the information about the speed is transformed by the encoder 9 and sent as an encoder signal to the synchronizer 10. The synchronizer 10 sends a signal to the camera 3 when an image of the scene 2a needs to be provided. Depending on the actual speed of the conveyor belt, the camera may provide several images of the scene 2a per second. However, if the speed of the conveyor belt is slow, the camera 3 only needs to provide a few images per minute.</p>
<p id="p0040" num="0040">The images from the camera 3 are sent to the product detection and recognition module 4 to be processed and the items in the image identified. The information about the identified items is then sent to the visualization and statistics module 5a for further processing, to display or otherwise provide the information that can be extracted or accumulated from the detection system. The visualization and statistics module 5a is integrated with the sorting control 5.</p>
<p id="p0041" num="0041">The visualization and statistics module 5a communicates with the product database 6 to obtain more detailed information about product properties for an identified item. The information about product properties may e.g. be information about material.</p>
<p id="p0042" num="0042">Based on the information available the sorting control sends commands to the robot controller (not shown in <figref idref="f0002">figure 2</figref>), which will activate the robot to perform desired sorting motions and actuations, when the scene 2a reaches the sorting area (scene 2b).</p>
<p id="p0043" num="0043"><figref idref="f0003">Figure 3</figref> illustrates the principles of the information system. The information system includes the camera 3, the product detection and recognition module 4, the visualization and statistics module 5a and the product database 6.</p>
<p id="p0044" num="0044">The images from the camera 3 are sent to the product detection and recognition module 4 where the items on the images (appearing on the scene 2a) are identified.<!-- EPO <DP n="11"> --></p>
<p id="p0045" num="0045">The camera 3, the lighting and the conveyor speed must be adjusted to provide images which meet the requirements, e.g. images with sufficient lighting and with little motion blur.</p>
<p id="p0046" num="0046">The information about the identified items is then sent to the visualization and statistics module 5a for further processing. The visualization and statistics module 5a is integrated with the sorting control 5.</p>
<p id="p0047" num="0047">The visualization and statistics module 5a communicates with the product database 6. The visualization and statistics module 5a can search the product database 6 and obtain more detailed information about product properties for an identified item. The information about product properties may e.g. be information about material.</p>
<p id="p0048" num="0048">Based on the information available, the sorting control sends commands to the robot controller, which will activate the robot to perform desired sorting motions and actuations. This will result in that the items appearing on the scene 2a on the conveyor belt will be sorted to desired fractions.</p>
<p id="p0049" num="0049"><figref idref="f0004">Figure 4</figref> shows the principles of product detection and recognition. The image distributor 21 receives an image and distributes the image to a neural network object detection module 22, a logo detection module 23, a symbol detection module 24, and a text detection and text+font recognition module 25.</p>
<p id="p0050" num="0050">The information which is deduced from the neural network object detection module 22, the logo detection module 23, and the symbol detection module 24 is sent to the product recognition module 4a for further processing.</p>
<p id="p0051" num="0051">The information from the text detection and text+font recognition module 25 is further processed in the vendor name recognition module 26, the brand name recognition module 27, the product name recognition module 28, the slogan recognition module 29, and the product description recognition module 30, before the information is sent to the product recognition module 4a for further processing.</p>
<p id="p0052" num="0052">The product recognition module 4a is integrated in the product detection and recognition module 4.<!-- EPO <DP n="12"> --></p>
<p id="p0053" num="0053"><figref idref="f0005">Figure 5</figref> illustrates the method for logo and symbol detection shown in <figref idref="f0004">figure 4</figref>.</p>
<p id="p0054" num="0054">In the logo detection module and the symbol detection module the overall detection principles are generally the same. When the modules receive an image from the image distributor, the image is first processed in a feature extraction module 40, extracting local features. The information is sent to a feature description module 41, which describes the local features and sends the information to a matching module 42. The matching module 42 interacts with a feature descriptor database 44, which can provide further information about the features. From the matching module 42, matched local feature descriptors are sent to a clustering module 43, before the information is provided to the product recognition module for further processing.</p>
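The matching module's core operation can be sketched as nearest-neighbour matching of local feature descriptors against a descriptor database. The sketch below uses plain Euclidean distance and an arbitrary threshold; the actual feature extraction, description, and matching methods are not specified by the application.

```python
import numpy as np

def match_descriptors(image_desc: np.ndarray, db_desc: np.ndarray,
                      max_dist: float = 0.5) -> list[tuple[int, int]]:
    """Return (image index, database index) pairs for each image
    descriptor whose nearest database descriptor lies within max_dist.
    image_desc and db_desc are (n, d) arrays of d-dimensional descriptors."""
    matches = []
    for i, d in enumerate(image_desc):
        dists = np.linalg.norm(db_desc - d, axis=1)  # distance to each entry
        j = int(np.argmin(dists))                    # nearest neighbour
        if dists[j] < max_dist:
            matches.append((i, j))
    return matches
```

The matched descriptors would then be grouped by the clustering module to decide whether enough of them agree on one logo or symbol.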
<p id="p0055" num="0055"><figref idref="f0006">Figure 6</figref> illustrates in more detail the principles of text detection and recognition carried out in the text detection and text+font recognition module 25.</p>
<p id="p0056" num="0056">When the text detection and text+font recognition module receives an image from the image distributor, the image is first processed in a convolutional neural network 50, which sends a compressed image representation to a text detection module 25a, which in turn sends text boxes to a text recognition module 25b and a font recognition module 25c. The text recognition module 25b and the font recognition module 25c provide information about text and font to the modules 26 - 30 in <figref idref="f0004">figure 4</figref>. After processing in the modules 26 - 30, text information is provided to the product recognition module.</p>
<p id="p0057" num="0057">During the processing of the image, the convolutional neural network 50, the text detection module 25a, and the text recognition module 25b interact with an images and annotations database 51. The images and annotations database 51 is a training database which supports the convolutional neural network 50. Neural network parameters are learned in the training phase from images and annotations. It is the learned model, extracted from the images and annotations, that is used during operation/processing.</p>
<p id="p0058" num="0058"><figref idref="f0007">Figure 7</figref> illustrates the general principles of neural network object detection. The image is sent to the convolutional neural network 50 for processing, and the<!-- EPO <DP n="13"> --> convolutional neural network 50 sends a compressed image representation to an object detection module 52, which detects the objects.</p>
<p id="p0059" num="0059">During the process, the convolutional neural network 50 and the object detection module 52 interact with the images and annotations database 51. Neural network parameters are learned in the training phase from images and annotations. It is the learned model, extracted from the images and annotations, that is used during operation/processing.</p>
<p id="p0060" num="0060"><figref idref="f0008">Figure 8</figref> illustrates the general principles of two-stage neural network object detection.</p>
<p id="p0061" num="0061">An image is distributed from the image distributor module 21. The image is sent to the convolutional neural network 50 and the object recognition module 53. The convolutional neural network 50 sends a compressed image representation to the object detection module 52, which detects the objects and sends the information to the object recognition module 53, which recognizes the objects.</p>
<p id="p0062" num="0062">The convolutional neural network 50, the object detection module 52, and the object recognition module 53 interact with the images and annotations database 51 during the detection and recognition process. The neural network parameters are learned in the training phase from images and annotations. It is the learned model, extracted from the images and annotations, that is used during operation/processing.</p>
<p id="p0063" num="0063"><figref idref="f0009">Figure 9</figref> illustrates an embodiment where an image with high resolution is linked to a neural network for object detection. The architecture of the network is adapted to the high resolution of the images by neural network layers 50a, 50b and 50c at the beginning of the network. The embodiment corresponds to the embodiment shown in <figref idref="f0007">figure 7</figref>, but is adapted for images with high resolution.</p>
<p id="p0064" num="0064"><figref idref="f0010">Figure 10</figref> illustrates examples of symbols which can be detected by the method according to the invention.</p>
</description>
<claims id="claims01" lang="en"><!-- EPO <DP n="14"> -->
<claim id="c-en-0001" num="0001">
<claim-text>A method for sorting objects travelling on a conveyor belt,<br/>
the method comprising:
<claim-text>receiving image data captured by at least one imaging sensor for an image comprising at least one object travelling on the conveyor belt, said imaging sensor providing color image data with a spatial resolution of at least 0.4 px/mm;</claim-text>
<claim-text>executing a product detection and recognition module on a processor, the product detection and recognition module being configured to detect characteristics of the at least one object travelling on the conveyor belt by processing the image data;</claim-text>
<claim-text>determining an expected time when the at least one object will be located within a sorting area of at least one sorting device; and</claim-text>
<claim-text>selectively generating a device control signal to operate the at least one sorting device based on whether the at least one object comprises a target object.</claim-text></claim-text></claim>
<claim id="c-en-0002" num="0002">
<claim-text>A method according to claim 1, wherein the target object is guided to a collection device in the sorting area by means of the sorting device.</claim-text></claim>
<claim id="c-en-0003" num="0003">
<claim-text>A method according to claim 1 or 2, wherein a characteristic of the at least one object travelling on the conveyor belt is the physical appearance or shape of the object.</claim-text></claim>
<claim id="c-en-0004" num="0004">
<claim-text>A method according to any one of the preceding claims, wherein a characteristic of the at least one object travelling on the conveyor belt is the color or colors and/or transparency of the object.</claim-text></claim>
<claim id="c-en-0005" num="0005">
<claim-text>A method according to any one of the preceding claims, wherein a characteristic of the at least one object travelling on the conveyor belt is selected from vendor names, brand names, product names, trademarks, logos, symbols, slogans or a combination of two or more of the characteristics.<!-- EPO <DP n="15"> --></claim-text></claim>
<claim id="c-en-0006" num="0006">
<claim-text>A method according to any one of the preceding claims, wherein the product detection and recognition module applies two or more characteristics in the product detection and recognition.</claim-text></claim>
<claim id="c-en-0007" num="0007">
<claim-text>A method according to any one of the preceding claims, wherein said spatial resolution is at least 2 px/mm.</claim-text></claim>
<claim id="c-en-0008" num="0008">
<claim-text>A method according to any one of the preceding claims, wherein said spatial resolution is at least 4 px/mm.</claim-text></claim>
<claim id="c-en-0009" num="0009">
<claim-text>A method according to any one of the preceding claims, wherein product detection and recognition involves a convolutional neural network.</claim-text></claim>
<claim id="c-en-0010" num="0010">
<claim-text>A method according to any one of the preceding claims, wherein the method further comprises interaction with a product database.</claim-text></claim>
<claim id="c-en-0011" num="0011">
<claim-text>A method according to any one of the preceding claims, wherein the object is a plastic object.</claim-text></claim>
<claim id="c-en-0012" num="0012">
<claim-text>A method according to any one of the preceding claims, wherein the method is adapted for detecting and recognizing objects used as packaging or containers for food items, such as bottles and trays.</claim-text></claim>
<claim id="c-en-0013" num="0013">
<claim-text>A method according to any one of the preceding claims, wherein the method is adapted for detecting and recognizing black objects.</claim-text></claim>
<claim id="c-en-0014" num="0014">
<claim-text>A method according to claim 13, wherein the black object is a tray for food.</claim-text></claim>
</claims>
<drawings id="draw" lang="en"><!-- EPO <DP n="16"> -->
<figure id="f0001" num="1"><img id="if0001" file="imgf0001.tif" wi="164" he="233" img-content="drawing" img-format="tif"/></figure><!-- EPO <DP n="17"> -->
<figure id="f0002" num="2"><img id="if0002" file="imgf0002.tif" wi="165" he="217" img-content="drawing" img-format="tif"/></figure><!-- EPO <DP n="18"> -->
<figure id="f0003" num="3"><img id="if0003" file="imgf0003.tif" wi="131" he="188" img-content="drawing" img-format="tif"/></figure><!-- EPO <DP n="19"> -->
<figure id="f0004" num="4"><img id="if0004" file="imgf0004.tif" wi="160" he="233" img-content="drawing" img-format="tif"/></figure><!-- EPO <DP n="20"> -->
<figure id="f0005" num="5"><img id="if0005" file="imgf0005.tif" wi="145" he="233" img-content="drawing" img-format="tif"/></figure><!-- EPO <DP n="21"> -->
<figure id="f0006" num="6"><img id="if0006" file="imgf0006.tif" wi="132" he="233" img-content="drawing" img-format="tif"/></figure><!-- EPO <DP n="22"> -->
<figure id="f0007" num="7"><img id="if0007" file="imgf0007.tif" wi="119" he="233" img-content="drawing" img-format="tif"/></figure><!-- EPO <DP n="23"> -->
<figure id="f0008" num="8"><img id="if0008" file="imgf0008.tif" wi="165" he="231" img-content="drawing" img-format="tif"/></figure><!-- EPO <DP n="24"> -->
<figure id="f0009" num="9"><img id="if0009" file="imgf0009.tif" wi="98" he="233" img-content="drawing" img-format="tif"/></figure><!-- EPO <DP n="25"> -->
<figure id="f0010" num="10"><img id="if0010" file="imgf0010.tif" wi="162" he="233" img-content="drawing" img-format="tif"/></figure>
</drawings>
<search-report-data id="srep" lang="en" srep-office="EP" date-produced=""><doc-page id="srep0001" file="srep0001.tif" wi="157" he="233" type="tif"/><doc-page id="srep0002" file="srep0002.tif" wi="157" he="233" type="tif"/><doc-page id="srep0003" file="srep0003.tif" wi="157" he="233" type="tif"/><doc-page id="srep0004" file="srep0004.tif" wi="155" he="233" type="tif"/></search-report-data><search-report-data date-produced="20200616" id="srepxml" lang="en" srep-office="EP" srep-type="ep-sr" status="n"><!--
 The search report data in XML is provided for the users' convenience only. It might differ from the search report of the PDF document, which contains the officially published data. The EPO disclaims any liability for incorrect or incomplete data in the XML for search reports.
 -->

<srep-info><file-reference-id>159800 SH/KW/MQ</file-reference-id><application-reference><document-id><country>EP</country><doc-number>19218995.9</doc-number></document-id></application-reference><applicant-name><name>IHP Systems A/S</name></applicant-name><srep-established srep-established="yes"/><srep-unity-of-invention><p id="pu0001" num="">1. claims: 1, 2<br/>Collection</p><p id="pu0002" num="">2. claim: 3<br/>Appearence / shape</p><p id="pu0003" num="">3. claim: 4<br/>Colour / transparency</p><p id="pu0004" num="">4. claim: 5<br/>Names, trademarks, logos etc.</p><p id="pu0005" num="">5. claim: 6<br/>Two characteristics</p><p id="pu0006" num="">6. claims: 7, 8<br/>2 px/mm, 4 px/mm</p><p id="pu0007" num="">7. claims: 9, 10<br/>Neural network, database</p><p id="pu0008" num="">8. claim: 11<br/>Plastic object</p><p id="pu0009" num="">9. claim: 12<br/>Packaging, container</p><p id="pu0010" num="">10. claims: 13, 14<br/>Black objects</p><srep-search-fees><srep-fee-4><claim-num>1, 2</claim-num></srep-fee-4></srep-search-fees></srep-unity-of-invention><srep-invention-title title-approval="yes"/><srep-abstract abs-approval="yes"/><srep-figure-to-publish figinfo="by-applicant"><figure-to-publish><fig-number>1</fig-number></figure-to-publish></srep-figure-to-publish><srep-info-admin><srep-office><addressbook><text>MN</text></addressbook></srep-office><date-search-report-mailed><date>20201118</date></date-search-report-mailed></srep-info-admin></srep-info><srep-for-pub><srep-fields-searched><minimum-documentation><classifications-ipcr><classification-ipcr><text>B07C</text></classification-ipcr></classifications-ipcr></minimum-documentation></srep-fields-searched><srep-citations><citation id="sr-cit0001"><patcit dnum="US2019247891A1" id="sr-pcit0001" url="http://v3.espacenet.com/textdoc?DB=EPODOC&amp;IDX=US2019247891&amp;CY=ep"><document-id><country>US</country><doc-number>2019247891</doc-number><kind>A1</kind><name>KUMAR NALIN [US] ET 
AL</name><date>20190815</date></document-id></patcit><category>X</category><rel-claims>1,2</rel-claims><rel-passage><passage>* paragraph [0055] - paragraph [0057]; figures *</passage></rel-passage></citation></srep-citations><srep-admin><examiners><primary-examiner><name>Wich, Roland</name></primary-examiner></examiners><srep-office><addressbook><text>Munich</text></addressbook></srep-office><date-search-completed><date>20200616</date></date-search-completed></srep-admin><!--							The annex lists the patent family members relating to the patent documents cited in the above mentioned European search report.							The members are as contained in the European Patent Office EDP file on							The European Patent Office is in no way liable for these particulars which are merely given for the purpose of information.							For more details about this annex : see Official Journal of the European Patent Office, No 12/82						--><srep-patent-family><patent-family><priority-application><document-id><country>US</country><doc-number>2019247891</doc-number><kind>A1</kind><date>20190815</date></document-id></priority-application><family-member><document-id><country>US</country><doc-number>2019247891</doc-number><kind>A1</kind><date>20190815</date></document-id></family-member><family-member><document-id><country>US</country><doc-number>2020368786</doc-number><kind>A1</kind><date>20201126</date></document-id></family-member></patent-family></srep-patent-family></srep-for-pub></search-report-data>
</ep-patent-document>
