(11) EP 3 467 711 A8

(12) CORRECTED EUROPEAN PATENT APPLICATION
Note: Bibliography reflects the latest situation
(15) Correction information: Corrected version no 1 (W1 A1)
(48) Corrigendum issued on: 29.05.2019 Bulletin 2019/22
(43) Date of publication: 10.04.2019 Bulletin 2019/15
(22) Date of filing: 05.09.2018
(51) International Patent Classification (IPC):
(84) Designated Contracting States:
AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

Designated Extension States:
BA ME

Designated Validation States:
KH MA MD TN
(30) Priority: 04.10.2017 US 201715724994
(71) Applicant: StradVision, Inc.
Gyeongsangbuk-do 37673 (KR)
(72) Inventors:
- KIM, Yongjoong
Pohang-si,
Gyeongsangbuk-do 37673 (KR)
- NAM, Woonhyun
Pohang-si,
Gyeongsangbuk-do 37676 (KR)
- BOO, Sukhoon
Anyang-si, Gyeonggi-do 14034 (KR)
- SUNG, Myungchul
Pohang-si,
Gyeongsangbuk-do 37593 (KR)
- YEO, Donghun
Pohang-si,
Gyeongsangbuk-do 37673 (KR)
- RYU, Wooju
Pohang-si,
Gyeongsangbuk-do 37673 (KR)
- JANG, Taewoong
Seoul 06108 (KR)
- JEONG, Kyungjoong
Pohang-si,
Gyeongsangbuk-do 37671 (KR)
- JE, Hongmo
Pohang-si,
Gyeongsangbuk-do 37665 (KR)
- CHO, Hojin
Pohang-si,
Gyeongsangbuk-do 37673 (KR)

(74) Representative: Klunker IP
Patentanwälte PartG mbB
Destouchesstraße 68
80796 München (DE)
(54) LEARNING METHOD AND LEARNING DEVICE FOR IMPROVING IMAGE SEGMENTATION AND TESTING METHOD AND TESTING DEVICE USING THE SAME
(57) A learning method for improving image segmentation, including steps of: (a) acquiring a (1-1)-th to a (1-K)-th feature maps through an encoding layer if a training image is obtained; (b) acquiring a (3-1)-th to a (3-H)-th feature maps by respectively inputting each output of the H encoding filters to a (3-1)-th to a (3-H)-th filters; (c) performing a process of sequentially acquiring a (2-K)-th to a (2-1)-th feature maps either by (i) allowing the respective H decoding filters to respectively use both the (3-1)-th to the (3-H)-th feature maps and feature maps obtained from respective previous decoding filters of the respective H decoding filters, or by (ii) allowing the respective K-H decoding filters that are not associated with the (3-1)-th to the (3-H)-th filters to use feature maps gained from respective previous decoding filters of the respective K-H decoding filters; and (d) adjusting parameters of the CNN.
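The data flow of steps (a) through (c) can be sketched as follows. This is a hypothetical illustration only, assuming K = 4 encoding/decoding filters and H = 2 intermediate (3-h)-th filters attached to the deepest H encoder outputs; the learned convolutional filters of the claimed method are replaced here by fixed pooling/upsampling stand-ins, and step (d), the parameter adjustment by backpropagation, is omitted.

```python
import numpy as np

K, H = 4, 2  # assumed values for illustration

def encode(x):
    # 2x average-pool downsample standing in for a learned encoding filter
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def decode(x):
    # 2x nearest-neighbour upsample standing in for a learned decoding filter
    return x.repeat(2, axis=0).repeat(2, axis=1)

image = np.random.rand(32, 32)  # dummy training image

# (a) encoding pass: the (1-1)-th ... (1-K)-th feature maps
enc = []
x = image
for k in range(K):
    x = encode(x)
    enc.append(x)

# (b) (3-1)-th ... (3-H)-th maps from the deepest H encoder outputs
# (identity stand-ins for the (3-h)-th filters)
inter = [enc[K - H + h] for h in range(H)]

# (c) decoding pass: the (2-K)-th down to the (2-1)-th feature maps
dec = enc[-1]
for k in reversed(range(K)):
    if k >= K - H:
        # decoder fed by both its (3-h)-th map and the previous decoder output
        dec = decode(dec + inter[k - (K - H)])
    else:
        # the remaining K-H decoders use only the previous decoder output
        dec = decode(dec)

print(dec.shape)  # decoding restores the input resolution: (32, 32)
```

With these stand-ins the decoder output returns to the 32x32 input resolution, mirroring how a segmentation network must produce a per-pixel map at the scale of the training image.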