Global Patent Index - EP 3891664 A1

EP 3891664 A1 20211013 - METHOD FOR TRAINING AT LEAST ONE ALGORITHM FOR A CONTROL DEVICE OF A MOTOR VEHICLE, COMPUTER PROGRAM PRODUCT, AND MOTOR VEHICLE

Title (en)

METHOD FOR TRAINING AT LEAST ONE ALGORITHM FOR A CONTROL DEVICE OF A MOTOR VEHICLE, COMPUTER PROGRAM PRODUCT, AND MOTOR VEHICLE

Title (de)

VERFAHREN ZUM TRAINIEREN WENIGSTENS EINES ALGORITHMUS FÜR EIN STEUERGERÄT EINES KRAFTFAHRZEUGS, COMPUTERPROGRAMMPRODUKT SOWIE KRAFTFAHRZEUG

Title (fr)

PROCÉDÉ POUR L'ENTRAÎNEMENT D'AU MOINS UN ALGORITHME POUR UN APPAREIL DE COMMANDE D'UN VÉHICULE AUTOMOBILE, PRODUIT DE PROGRAMME INFORMATIQUE AINSI QUE VÉHICULE AUTOMOBILE

Publication

EP 3891664 A1 20211013 (DE)

Application

EP 19800939 A 20191024

Priority

  • DE 102018220865 A 20181203
  • EP 2019078978 W 20191024

Abstract (en)

[origin: WO2020114674A1] Method for training at least one algorithm for a control device of a motor vehicle for implementing an autonomous driving function, wherein the algorithm is trained by means of a self-learning neural network, comprising the following steps:

  • a) providing a computer program product module for the autonomous driving function, wherein the computer program product module contains the algorithm to be trained and the self-learning neural network;
  • b) providing at least one metric and a reward function;
  • c) embedding the computer program product module in a simulation environment for simulating at least one relevant traffic situation, and training the self-learning neural network by simulating critical scenarios and determining the metric (M) until a first measure of quality (G1) has been satisfied;
  • d) embedding the trained computer program product module in the control device of the motor vehicle for simulating relevant traffic situations, and training the self-learning neural network by simulating critical scenarios and determining the metric (M) until a second measure of quality has been satisfied; wherein
  • e) (i) when the metric (M) in step d) is worse than the first measure of quality (G1), the method is continued from step c), or (ii) when the metric (M) in step d) is better than the first measure of quality (G1) and worse than the second measure of quality (G2), the method is continued from step d).
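The staged training procedure of steps c) to e) can be sketched as a simple control loop. This is an illustrative reading only, not the patent's implementation: the functions `train_in_simulation`, `train_in_vehicle`, and `evaluate`, and the convention that a higher metric M is better, are assumptions for the sketch; G1 and G2 stand for the first and second measures of quality.

```python
def staged_training(train_in_simulation, train_in_vehicle, evaluate, g1, g2):
    """Hypothetical sketch of steps c)-e): simulation training until the
    first quality measure G1 holds, then vehicle-embedded training until
    the second quality measure G2 holds, falling back to simulation
    whenever the metric M drops below G1 again."""
    # Step c): train in the simulation environment until M >= G1.
    while evaluate() < g1:
        train_in_simulation()
    # Steps d)/e): train in the vehicle's control device until M >= G2.
    while True:
        m = evaluate()
        if m < g1:
            # e)(i): metric worse than G1 -> continue from step c).
            while evaluate() < g1:
                train_in_simulation()
        elif m < g2:
            # e)(ii): between G1 and G2 -> continue from step d).
            train_in_vehicle()
        else:
            # Second measure of quality satisfied; training complete.
            return m
```

Under this reading, the two thresholds gate the transfer from simulation to the real control device, with a regression below G1 sending training back to the cheaper simulated stage.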

IPC 8 full level

G06N 3/08 (2006.01); G06N 3/00 (2006.01)

CPC (source: EP US)

B60W 40/04 (2013.01 - US); B60W 50/06 (2013.01 - US); B60W 60/001 (2020.02 - US); G05B 13/027 (2013.01 - US); G06N 3/006 (2013.01 - EP); G06N 3/08 (2013.01 - EP)

Citation (search report)

  • [I] XINLEI PAN ET AL: "Virtual to Real Reinforcement Learning for Autonomous Driving", PROCEEDINGS OF THE BRITISH MACHINE VISION CONFERENCE 2017, 26 September 2017 (2017-09-26), XP055610078, ISBN: 978-1-901725-60-5, DOI: 10.5244/C.31.11
  • [I] HAOYANG FAN ET AL: "An Auto-tuning Framework for Autonomous Vehicles", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 15 August 2018 (2018-08-15), XP080907856
  • [I] CUTLER MARK ET AL: "Autonomous drifting using simulation-aided reinforcement learning", 2016 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), IEEE, 16 May 2016 (2016-05-16), pages 5442 - 5448, XP032908826, DOI: 10.1109/ICRA.2016.7487756
  • [I] ALEX KENDALL ET AL: "Learning to Drive in a Day", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 2 July 2018 (2018-07-02), XP081197602
  • [I] FAYJIE ABDUR R ET AL: "Driverless Car: Autonomous Driving Using Deep Reinforcement Learning in Urban Environment", 2018 15TH INTERNATIONAL CONFERENCE ON UBIQUITOUS ROBOTS (UR), IEEE, 26 June 2018 (2018-06-26), pages 896 - 901, XP033391036, DOI: 10.1109/URAI.2018.8441797
  • [A] OKUYAMA TAKAFUMI ET AL: "Autonomous Driving System based on Deep Q Learning", 2018 INTERNATIONAL CONFERENCE ON INTELLIGENT AUTONOMOUS SYSTEMS (ICOIAS), IEEE, 1 March 2018 (2018-03-01), pages 201 - 205, XP033421432, ISBN: 978-1-5386-6329-5, [retrieved on 20181016], DOI: 10.1109/ICOIAS.2018.8494053
  • [A] DAVID ISELE ET AL: "Transferring Autonomous Driving Knowledge on Simulated and Real Intersections", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 30 November 2017 (2017-11-30), XP081298898
  • [A] WOLF PETER ET AL: "Learning how to drive in a real world simulation with deep Q-Networks", 2017 IEEE INTELLIGENT VEHICLES SYMPOSIUM (IV), IEEE, 11 June 2017 (2017-06-11), pages 244 - 250, XP033133715, DOI: 10.1109/IVS.2017.7995727
  • See also references of WO 2020114674A1

Designated contracting state (EPC)

AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

Designated extension state (EPC)

BA ME

DOCDB simple family (publication)

WO 2020114674 A1 20200611; CN 113168570 A 20210723; DE 102018220865 A1 20200618; DE 102018220865 B4 20201105; EP 3891664 A1 20211013; MA 54363 A 20220309; US 2022009510 A1 20220113

DOCDB simple family (application)

EP 2019078978 W 20191024; CN 201980080062 A 20191024; DE 102018220865 A 20181203; EP 19800939 A 20191024; MA 54363 A 20191024; US 201917294337 A 20191024