Global Patent Index - EP 3881125 A4

EP 3881125 A4 20220831 - SYSTEMS AND METHODS FOR PERFORMING SELF-IMPROVING VISUAL ODOMETRY

Title (en)

SYSTEMS AND METHODS FOR PERFORMING SELF-IMPROVING VISUAL ODOMETRY

Title (de)

SYSTEME UND VERFAHREN ZUR DURCHFÜHRUNG VON SELBSTVERBESSERNDER VISUELLER ODOMETRIE

Title (fr)

SYSTÈMES ET PROCÉDÉS PERMETTANT DE RÉALISER UNE ODOMÉTRIE VISUELLE À AUTO-AMÉLIORATION

Publication

EP 3881125 A4 20220831 (EN)

Application

EP 19885433 A 20191113

Priority

  • US 201862767887 P 20181115
  • US 201962913378 P 20191010
  • US 2019061272 W 20191113

Abstract (en)

[origin: WO2020102417A1] In an example method of training a neural network for performing visual odometry, the neural network receives a plurality of images of an environment, determines, for each image, a respective set of interest points and a respective descriptor, and determines a correspondence between the plurality of images. Determining the correspondence includes determining one or more point correspondences between the sets of interest points, and determining a set of candidate interest points based on the one or more point correspondences, each candidate interest point indicating a respective feature in the environment in three-dimensional space. The neural network determines, for each candidate interest point, a respective stability metric. The neural network is modified based on the one or more candidate interest points.
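
The abstract outlines a correspondence-and-stability training loop: interest points and descriptors per image, point correspondences across images, candidate 3-D points, a stability metric per candidate, and a network update driven by those candidates. The sketch below illustrates only the matching and stability steps with toy data; the helper names (match_descriptors, stability_metric), the mutual-nearest-neighbour matching rule, and the spread-based stability definition are illustrative assumptions and are not taken from the patent, which does not prescribe these specifics at the abstract level.

    # Illustrative sketch only; names and the stability definition are assumptions.
    import numpy as np

    def match_descriptors(desc_a, desc_b):
        """Mutual-nearest-neighbour matching between two sets of
        L2-normalised descriptors (one row per interest point)."""
        sim = desc_a @ desc_b.T                          # cosine similarity matrix
        nn_ab = sim.argmax(axis=1)                       # best match a -> b
        nn_ba = sim.argmax(axis=0)                       # best match b -> a
        keep = nn_ba[nn_ab] == np.arange(len(desc_a))    # keep mutual matches only
        return np.stack([np.arange(len(desc_a))[keep], nn_ab[keep]], axis=1)

    def stability_metric(track_points_3d):
        """Toy stability score: inverse spread of a candidate 3-D point
        across the frames in which it was observed (tighter = more stable)."""
        spread = track_points_3d.std(axis=0).mean()
        return 1.0 / (1.0 + spread)

    # Toy usage: two descriptor sets from consecutive frames of a sequence.
    rng = np.random.default_rng(0)
    desc_a = rng.normal(size=(50, 64))
    desc_a /= np.linalg.norm(desc_a, axis=1, keepdims=True)
    desc_b = desc_a + 0.05 * rng.normal(size=desc_a.shape)
    desc_b /= np.linalg.norm(desc_b, axis=1, keepdims=True)

    matches = match_descriptors(desc_a, desc_b)          # point correspondences
    print(f"{len(matches)} point correspondences")

    # A candidate interest point: the same 3-D feature observed in four views.
    # Its stability metric would gate whether it is used to modify the network.
    candidate = rng.normal(size=(4, 3)) * 0.01 + np.array([1.0, 2.0, 5.0])
    print(f"stability = {stability_metric(candidate):.3f}")

In this sketch, only candidates with a high stability metric would be retained as supervision when the network is modified; how that modification is performed (loss, optimiser, architecture) is left open by the abstract.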

IPC 8 full level

G06T 7/73 (2017.01); G02B 27/01 (2006.01); G06F 3/01 (2006.01); G06K 9/62 (2022.01); G06V 10/40 (2022.01)

CPC (source: EP US)

G02B 27/0172 (2013.01 - EP US); G06F 3/011 (2013.01 - EP); G06F 3/012 (2013.01 - EP); G06N 3/08 (2013.01 - US); G06T 7/33 (2017.01 - US); G06T 7/74 (2017.01 - EP US); G06V 10/40 (2022.01 - EP US); G02B 2027/0138 (2013.01 - EP); G02B 2027/014 (2013.01 - EP); G02B 2027/0187 (2013.01 - EP); G06T 2207/10016 (2013.01 - EP US); G06T 2207/20081 (2013.01 - EP US); G06T 2207/20084 (2013.01 - EP US); G06T 2207/30244 (2013.01 - EP US)

Citation (search report)

  • [XYI] TITUS CIESLEWSKI ET AL: "SIPS: Unsupervised Succinct Interest Points", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 3 May 2018 (2018-05-03), XP080885434
  • [YA] DETONE DANIEL ET AL: "SuperPoint: Self-Supervised Interest Point Detection and Description", 2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW), IEEE, 18 June 2018 (2018-06-18), pages 337 - 33712, XP033475657, DOI: 10.1109/CVPRW.2018.00060
  • [A] TINGHUI ZHOU ET AL: "Unsupervised Learning of Depth and Ego-Motion from Video", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 25 April 2017 (2017-04-25), XP080765482, DOI: 10.1109/CVPR.2017.700
  • [A] CHAMARA SAROJ WEERASEKERA ET AL: "Learning Deeply Supervised Visual Descriptors for Dense Monocular Reconstruction", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 16 November 2017 (2017-11-16), XP081288982
  • See also references of WO 2020102417A1

Designated contracting state (EPC)

AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DOCDB simple family (publication)

WO 2020102417 A1 20200522; CN 113272713 A 20210817; CN 113272713 B 20240618; EP 3881125 A1 20210922; EP 3881125 A4 20220831; JP 2022508103 A 20220119; JP 7357676 B2 20231006; US 11921291 B2 20240305; US 2022028110 A1 20220127; US 2024231102 A1 20240711

DOCDB simple family (application)

US 2019061272 W 20191113; CN 201980087289 A 20191113; EP 19885433 A 20191113; JP 2021526271 A 20191113; US 201917293772 A 20191113; US 202418417523 A 20240119