Global Patent Index - EP 3945729 A1

EP 3945729 A1 20220202 - SYSTEM AND METHOD FOR HEADPHONE EQUALIZATION AND SPACE ADAPTATION FOR BINAURAL REPRODUCTION IN AUGMENTED REALITY

Title (en)

SYSTEM AND METHOD FOR HEADPHONE EQUALIZATION AND SPACE ADAPTATION FOR BINAURAL REPRODUCTION IN AUGMENTED REALITY

Title (de)

SYSTEM UND VERFAHREN ZUR KOPFHÖRERENTZERRUNG UND RAUMANPASSUNG ZUR BINAURALEN WIEDERGABE BEI AUGMENTED REALITY

Title (fr)

SYSTÈME ET PROCÉDÉ D'ÉGALISATION DE CASQUE D'ÉCOUTE ET D'ADAPTATION SPATIALE POUR LA REPRÉSENTATION BINAURALE EN RÉALITÉ AUGMENTÉE

Publication

EP 3945729 A1 20220202 (DE)

Application

EP 20188945 A 20200731

Priority

EP 20188945 A 20200731

Abstract (en)

[origin: WO2022023417A2] A system is provided. The system comprises an analyzer (152) for determining a plurality of binaural room impulse responses and a loudspeaker signal generator (154) for generating at least two loudspeaker signals according to the plurality of binaural room impulse responses and according to the audio source signal of at least one audio source. The analyzer (152) is designed to determine the plurality of the binaural room impulse responses such that each of the plurality of binaural room impulse responses takes into consideration an effect which results from the wearing of headphones by a user.
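
The abstract describes the signal path only at a high level: a plurality of binaural room impulse responses (BRIRs), each of which already accounts for the acoustic effect of the user wearing headphones, is combined with the signal of at least one audio source to produce at least two loudspeaker signals. The Python sketch below illustrates just that convolve-and-sum rendering step, assuming the BRIRs are already available as discrete-time impulse responses; the function name render_binaural and its arguments are illustrative assumptions and are not taken from the patent, which does not specify how the analyzer (152) or the loudspeaker signal generator (154) is implemented.

    import numpy as np

    # Minimal sketch (an assumption, not the patented implementation): each
    # audio source signal is convolved with its left- and right-ear BRIR and
    # the results are summed into the two headphone ("loudspeaker") signals.
    def render_binaural(source_signals, brirs_left, brirs_right):
        # Length of the longest full convolution across all sources.
        n = max(len(s) + max(len(hl), len(hr)) - 1
                for s, hl, hr in zip(source_signals, brirs_left, brirs_right))
        left = np.zeros(n)
        right = np.zeros(n)
        for s, hl, hr in zip(source_signals, brirs_left, brirs_right):
            yl = np.convolve(s, hl)   # source filtered by left-ear BRIR
            yr = np.convolve(s, hr)   # source filtered by right-ear BRIR
            left[:len(yl)] += yl
            right[:len(yr)] += yr
        return left, right            # the "at least two loudspeaker signals"

    # Illustrative usage with placeholder data; in the claimed system the
    # BRIRs would come from the analyzer (152), which determines them so that
    # they include the headphone-wearing effect.
    fs = 48000
    source = np.random.randn(fs)             # 1 s of a dummy source signal
    brir_l = np.random.randn(2048) * 0.01    # placeholder left-ear BRIR
    brir_r = np.random.randn(2048) * 0.01    # placeholder right-ear BRIR
    left, right = render_binaural([source], [brir_l], [brir_r])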

Abstract (de)

Ein System wird bereitgestellt. Das System umfasst einen Analysator (152) zur Bestimmung einer Mehrzahl von binauralen Raumimpulsantworten und einen Lautsprechersignal-Erzeuger (154) zur Erzeugung von wenigstens zwei Lautsprechersignalen abhängig von der Mehrzahl der binauralen Raumimpulsantworten und abhängig von dem Audioquellsignal von wenigstens einer Audioquelle. Der Analysator (152) ist ausgebildet, die Mehrzahl der binauralen Raumimpulsantworten so zu bestimmen, dass jede der Mehrzahl der binauralen Raumimpulsantworten einen Effekt berücksichtigt, der aus dem Tragen eines Kopfhörers durch einen Nutzer resultiert.

IPC 8 full level

H04R 5/04 (2006.01); H04S 7/00 (2006.01); G10L 25/30 (2013.01); G10L 25/51 (2013.01); H04R 1/10 (2006.01); H04R 5/027 (2006.01); H04R 5/033 (2006.01); H04R 25/00 (2006.01)

CPC (source: EP US)

H04S 3/008 (2013.01 - US); H04S 7/306 (2013.01 - EP US); H04R 1/1083 (2013.01 - EP); H04R 5/027 (2013.01 - EP); H04R 5/033 (2013.01 - EP); H04R 5/04 (2013.01 - EP); H04R 25/507 (2013.01 - EP); H04R 2225/41 (2013.01 - EP); H04R 2225/43 (2013.01 - EP); H04R 2460/01 (2013.01 - EP); H04S 7/301 (2013.01 - EP); H04S 7/304 (2013.01 - EP); H04S 2400/01 (2013.01 - US); H04S 2400/11 (2013.01 - EP); H04S 2400/15 (2013.01 - EP); H04S 2420/01 (2013.01 - EP US)

Citation (applicant)

  • US 2015195641 A1 20150709 - DI CENSO DAVIDE [US], et al
  • V. VALIMAKI, A. FRANCK, J. RAMO, H. GAMPER, L. SAVIOJA: "Assisted listening using a headset: Enhancing audio perception in real, augmented, and virtual environments", IEEE SIGNAL PROCESSING MAGAZINE, vol. 32, no. 2, March 2015 (2015-03-01), pages 92 - 99, XP011573083, DOI: 10.1109/MSP.2014.2369191
  • K. BRANDENBURG, E. CANO, F. KLEIN, T. KÖLLMER, H. LUKASHEVICH, A. NEIDHARDT, U. SLOMA, S. WERNER: "Plausible augmentation of auditory scenes using dynamic binaural synthesis for personalized auditory realities", PROC. OF AES INTERNATIONAL CONFERENCE ON AUDIO FOR VIRTUAL AND AUGMENTED REALITY, August 2018 (2018-08-01)
  • S. ARGENTIERI, P. DANÈS, P. SOUÈRES: "A survey on sound source localization in robotics: From binaural to array processing methods", COMPUTER SPEECH LANGUAGE, vol. 34, no. 1, 2015, pages 87 - 112, XP029225205, DOI: 10.1016/j.csl.2015.03.003
  • D. FITZGERALD, A. LIUTKUS, R. BADEAU: "Projection-based demixing of spatial audio", IEEE/ACM TRANS. ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, vol. 24, no. 9, 2016, pages 1560 - 1572
  • E. CANO, D. FITZGERALD, A. LIUTKUS, M. D. PLUMBLEY, F. STÖTER: "Musical source separation: An introduction", IEEE SIGNAL PROCESSING MAGAZINE, vol. 36, no. 1, January 2019 (2019-01-01), pages 31 - 40, XP011694891, DOI: 10.1109/MSP.2018.2874719
  • S. GANNOT, E. VINCENT, S. MARKOVICH-GOLAN, A. OZEROV: "A consolidated perspective on multimicrophone speech enhancement and source separation", IEEE/ACM TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, vol. 25, no. 4, April 2017 (2017-04-01), pages 692 - 730, XP058372577, DOI: 10.1109/TASLP.2016.2647702
  • E. CANO, J. NOWAK, S. GROLLMISCH: "Exploring sound source separation for acoustic condition monitoring in industrial scenarios", PROC. OF 25TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO), August 2017 (2017-08-01), pages 2264 - 2268, XP033236389, DOI: 10.23919/EUSIPCO.2017.8081613
  • T. GERKMANN, M. KRAWCZYK-BECKER, J. LE ROUX: "Phase processing for single-channel speech enhancement: History and recent advances", IEEE SIGNAL PROCESSING MAGAZINE, vol. 32, no. 2, March 2015 (2015-03-01), pages 55 - 66, XP011573073, DOI: 10.1109/MSP.2014.2369251
  • Y. XU, Q. KONG, W. WANG, M. D. PLUMBLEY: "Large-Scale Weakly Supervised Audio Classification Using Gated Convolutional Neural Network", PROCEEDINGS OF THE IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2018, pages 121 - 125
  • D. MATZ, E. CANO, J. ABESSER: "Proc. of the 16th International Society for Music Information Retrieval Conference", October 2015, ISMIR, article "New sonorities for early jazz recordings using sound source separation and automatic mixing tools", pages: 749 - 755
  • S. M. KUO, D. R. MORGAN: "Active noise control: a tutorial review", PROCEEDINGS OF THE IEEE, vol. 87, no. 6, June 1999 (1999-06-01), pages 943 - 973
  • A. MCPHERSON, R. JACK, G. MORO: "Action-sound latency: Are our tools fast enough?", PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON NEW INTERFACES FOR MUSICAL EXPRESSION, July 2016 (2016-07-01)
  • C. ROTTONDI, C. CHAFE, C. ALLOCCHIO, A. SARTI: "An overview on networked music performance technologies", IEEE ACCESS, vol. 4, 2016, pages 8823 - 8843
  • S. LIEBICH, J. FABRY, P. JAX, P. VARY: "Signal processing challenges for active noise cancellation headphones", SPEECH COMMUNICATION; 13TH ITG-SYMPOSIUM, October 2018 (2018-10-01), pages 1 - 5
  • E. CANO, J. LIEBETRAU, D. FITZGERALD, K. BRANDENBURG: "The dimensions of perceptual quality of sound source separation", PROC. OF IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), April 2018 (2018-04-01), pages 601 - 605, XP033401636, DOI: 10.1109/ICASSP.2018.8462325
  • P. M. DELGADO, J. HERRE: "Objective assessment of spatial audio quality using directional loudness maps", PROC. OF IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), May 2019 (2019-05-01), pages 621 - 625, XP033566358, DOI: 10.1109/ICASSP.2019.8683810
  • C. H. TAAL, R. C. HENDRIKS, R. HEUSDENS, J. JENSEN: "An algorithm for intelligibility prediction of time-frequency weighted noisy speech", IEEE TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, vol. 19, no. 7, September 2011 (2011-09-01), pages 2125 - 2136, XP011335558, DOI: 10.1109/TASL.2011.2114881
  • R. SERIZEL, N. TURPAULT, H. EGHBAL-ZADEH, A. PARAG SHAH: "Large-Scale Weakly Labeled Semi-Supervised Sound Event Detection in Domestic Environments", DCASE2018 WORKSHOP, July 2018 (2018-07-01)
  • L. JIAKAI: "Mean teacher convolution system for dcase 2018 task 4", DCASE2018 CHALLENGE, TECH. REP., September 2018 (2018-09-01)
  • G. PARASCANDOLO, H. HUTTUNEN, T. VIRTANEN: "Recurrent neural networks for polyphonic sound event detection in real life recordings", PROC. OF IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), March 2016 (2016-03-01), pages 6440 - 6444
  • E. CAKIR, T. VIRTANEN: "End-to-end polyphonic sound event detection using convolutional recurrent neural networks with learned time-frequency representation input", PROC. OF INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), July 2018 (2018-07-01), pages 1 - 7
  • B. FRENAY, M. VERLEYSEN: "Classification in the presence of label noise: A survey", IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, vol. 25, no. 5, May 2014 (2014-05-01), pages 845 - 869, XP011545535, DOI: 10.1109/TNNLS.2013.2292894
  • E. FONSECA, M. PLAKAL, D. P. W. ELLIS, F. FONT, X. FAVORY, X. SERRA: "Learning sound event classifiers from web audio with noisy labels", PROCEEDINGS OF IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019
  • M. DORFER, G. WIDMER: "Training general-purpose audio tagging networks with noisy labels and iterative self-verification", PROCEEDINGS OF THE DETECTION AND CLASSIFICATION OF ACOUSTIC SCENES AND EVENTS 2018 WORKSHOP (DCASE2018), 2018
  • S. ADAVANNE, A. POLITIS, J. NIKUNEN, T. VIRTANEN: "Sound event localization and detection of overlapping sources using convolutional recurrent neural networks", IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2018, pages 1 - 1
  • Y. JUNG, Y. KIM, Y. CHOI, H. KIM: "Joint learning using denoising variational autoencoders for voice activity detection", PROC. OF INTERSPEECH, September 2018 (2018-09-01), pages 1210 - 1214
  • F. EYBEN, F. WENINGER, S. SQUARTINI, B. SCHULLER: "Real-life voice activity detection with LSTM recurrent neural networks and an application to Hollywood movies", PROC. OF IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, May 2013 (2013-05-01), pages 483 - 487, XP032509188, DOI: 10.1109/ICASSP.2013.6637694
  • R. ZAZO-CANDIL, T. N. SAINATH, G. SIMKO, C. PARADA: "Feature learning with raw-waveform CLDNNs for voice activity detection", PROC. OF INTERSPEECH, 2016
  • M. MCLAREN, Y. LEI, L. FERRER: "Advances in deep neural network approaches to speaker recognition", PROC. OF IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), April 2015 (2015-04-01), pages 4814 - 4818
  • D. SNYDER, D. GARCIA-ROMERO, G. SELL, D. POVEY, S. KHUDANPUR: "X-vectors: Robust DNN embeddings for speaker recognition", PROC. OF IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), April 2018 (2018-04-01), pages 5329 - 5333, XP033403941, DOI: 10.1109/ICASSP.2018.8461375
  • M. MCLAREN, D. CASTÁN, M. K. NANDWANA, L. FERRER, E. YILMAZ: "How to train your speaker embeddings extractor", ODYSSEY, 2018
  • S. O. SADJADI, J. W. PELECANOS, S. GANAPATHY: "The IBM speaker recognition system: Recent advances and error analysis", PROC. OF INTERSPEECH, 2016, pages 3633 - 3637
  • Y. HAN, J. KIM, K. LEE: "Deep convolutional neural networks for predominant instrument recognition in polyphonic music", IEEE/ACM TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, vol. 25, no. 1, January 2017 (2017-01-01), pages 208 - 221
  • V. LOSTANLEN, C.-E. CELLA: "Proceedings of the 17th International Society for Music Information Retrieval Conference", 2016, ISMIR, article "Deep convolutional networks on the pitch spiral for musical instrument recognition", pages: 612 - 618
  • S. GURURANI, C. SUMMERS, A. LERCH: "Proceedings of the 19th International Society for Music Information Retrieval Conference", September 2018, ISMIR, article "Instrument activity detection in polyphonic music using deep neural networks", pages: 321 - 326
  • S. DELIKARIS-MANIAS, D. PAVLIDI, A. MOUCHTARIS, V. PULKKI: "DOA estimation with histogram analysis of spatially constrained active intensity vectors", PROC. OF IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), March 2017 (2017-03-01), pages 526 - 530, XP033258473, DOI: 10.1109/ICASSP.2017.7952211
  • S. CHAKRABARTY, E. A. P. HABETS: "Multi-speaker DOA estimation using deep convolutional networks trained with noise signals", IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, vol. 13, no. 1, March 2019 (2019-03-01), pages 8 - 21
  • X. LI, L. GIRIN, R. HORAUD, S. GANNOT: "Multiple-speaker localization based on direct-path features and likelihood maximization with spatial sparsity regularization", IEEE/ACM TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, vol. 25, no. 10, October 2017 (2017-10-01), pages 1997 - 2012
  • F. GRONDIN, F. MICHAUD: "Lightweight and optimized sound source localization and tracking methods for open and closed microphone array configurations", ROBOTICS AND AUTONOMOUS SYSTEMS, vol. 113, 2019, pages 63 - 80
  • D. YOOK, T. LEE, Y. CHO: "Fast sound source localization using two-level search space clustering", IEEE TRANSACTIONS ON CYBERNETICS, vol. 46, no. 1, January 2016 (2016-01-01), pages 20 - 26, XP011594358, DOI: 10.1109/TCYB.2015.2391252
  • D. PAVLIDI, A. GRIFFIN, M. PUIGT, A. MOUCHTARIS: "Real-time multiple sound source localization and counting using a circular microphone array", IEEE TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, vol. 21, no. 10, October 2013 (2013-10-01), pages 2193 - 2206, XP011521588, DOI: 10.1109/TASL.2013.2272524
  • P. VECCHIOTTI, N. MA, S. SQUARTINI, G. J. BROWN: "End-to-end binaural sound localisation from the raw waveform", PROC. OF IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), May 2019 (2019-05-01), pages 451 - 455
  • Y. LUO, Z. CHEN, N. MESGARANI: "Speaker-independent speech separation with deep attractor network", IEEE/ACM TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, vol. 26, no. 4, April 2018 (2018-04-01), pages 787 - 796
  • Z. WANG, J. LE ROUX, J. R. HERSHEY: "Multi-channel deep clustering: Discriminative spectral and spatial embeddings for speaker-independent speech separation", PROC. OF IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), April 2018 (2018-04-01), pages 1 - 5, XP033400917, DOI: 10.1109/ICASSP.2018.8461639
  • G. NAITHANI, T. BARKER, G. PARASCANDOLO, L. BRAMSLØW, N. H. PONTOPPIDAN, T. VIRTANEN: "Low latency sound source separation using convolutional recurrent neural networks", PROC. OF IEEE WORKSHOP ON APPLICATIONS OF SIGNAL PROCESSING TO AUDIO AND ACOUSTICS (WASPAA), October 2017 (2017-10-01), pages 71 - 75, XP033264904, DOI: 10.1109/WASPAA.2017.8169997
  • M. SUNOHARA, C. HARUTA, N. ONO: "Low-latency real-time blind source separation for hearing aids based on time-domain implementation of online independent vector analysis with truncation of non-causal components", PROC. OF IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), March 2017 (2017-03-01), pages 216 - 220
  • Y. LUO, N. MESGARANI: "TaSNet: Time-domain audio separation network for real-time, single-channel speech separation", PROC. OF IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), April 2018 (2018-04-01), pages 696 - 700
  • J. CHUA, G. WANG, W. B. KLEIJN: "Convolutive blind source separation with low latency", PROC. OF IEEE INTERNATIONAL WORKSHOP ON ACOUSTIC SIGNAL ENHANCEMENT (IWAENC), September 2016 (2016-09-01), pages 1 - 5, XP032983095, DOI: 10.1109/IWAENC.2016.7602895
  • Z. RAFII, A. LIUTKUS, F. STÖTER, S. I. MIMILAKIS, D. FITZGERALD, B. PARDO: "An overview of lead and accompaniment separation in music", IEEE/ACM TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, vol. 26, no. 8, August 2018 (2018-08-01), pages 1307 - 1335
  • F. WENINGER, H. ERDOGAN, S. WATANABE, E. VINCENT, J. LE ROUX, J. R. HERSHEY, B. SCHULLER: "Latent Variable Analysis and Signal Separation", 2015, SPRINGER INTERNATIONAL PUBLISHING, article "Speech enhancement with LSTM recurrent neural networks and its application to noise-robust ASR", pages: 293 - 305
  • J.-L. DURRIEU, B. DAVID, G. RICHARD: "A musically motivated midlevel representation for pitch estimation and musical audio source separation", IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, vol. 5, no. 6, October 2011 (2011-10-01), pages 1180 - 1191, XP011386718, DOI: 10.1109/JSTSP.2011.2158801
  • S. UHLICH, M. PORCU, F. GIRON, M. ENENKL, T. KEMP, N. TAKAHASHI, Y. MITSUFUJI: "Improving music source separation based on deep neural networks through data augmentation and network blending", PROC. OF IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2017
  • P. N. SAMARASINGHE, W. ZHANG, T. D. ABHAYAPALA: "Recent advances in active noise control inside automobile cabins: Toward quieter cars", IEEE SIGNAL PROCESSING MAGAZINE, vol. 33, no. 6, November 2016 (2016-11-01), pages 61 - 73, XP011633441, DOI: 10.1109/MSP.2016.2601942
  • G. S. PAPINI, R. L. PINTO, E. B. MEDEIROS, F. B. COELHO: "Hybrid approach to noise control of industrial exhaust systems", APPLIED ACOUSTICS, vol. 125, 2017, pages 102 - 112, XP085026079, DOI: 10.1016/j.apacoust.2017.03.017
  • J. ZHANG, T. D. ABHAYAPALA, W. ZHANG, P. N. SAMARASINGHE, S. JIANG: "Active noise control over space: A wave domain approach", IEEE/ACM TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, vol. 26, no. 4, April 2018 (2018-04-01), pages 774 - 786
  • X. LU, Y. TSAO, S. MATSUDA, C. HORI: "Speech enhancement based on deep denoising autoencoder", PROC. OF INTERSPEECH, 2013
  • Y. XU, J. DU, L. DAI, C. LEE: "A regression approach to speech enhancement based on deep neural networks", IEEE/ACM TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, vol. 23, no. 1, January 2015 (2015-01-01), pages 7 - 19
  • S. PASCUAL, A. BONAFONTE, J. SERRÀ: "SEGAN: speech enhancement generative adversarial network", PROC. OF INTERSPEECH, August 2017 (2017-08-01), pages 3642 - 3646
  • H. WIERSTORF, D. WARD, R. MASON, E. M. GRAIS, C. HUMMERSONE, M. D. PLUMBLEY: "Perceptual evaluation of source separation for remixing music", PROC. OF AUDIO ENGINEERING SOCIETY CONVENTION, vol. 143, October 2017 (2017-10-01)
  • J. PONS, J. JANER, T. RODE, W. NOGUEIRA: "Remixing music using source separation algorithms to improve the musical experience of cochlear implant users", THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA, vol. 140, no. 6, 2016, pages 4338 - 4349, XP012214619, DOI: 10.1121/1.4971424
  • Q. KONG, Y. XU, W. WANG, M. D. PLUMBLEY: "A joint separation-classification model for sound event detection of weakly labelled data", PROCEEDINGS OF IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), March 2018 (2018-03-01)
  • T. V. NEUMANN, K. KINOSHITA, M. DELCROIX, S. ARAKI, T. NAKATANI, R. HAEB-UMBACH: "All-neural online source separation, counting, and diarization for meeting analysis", PROC. OF IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), May 2019 (2019-05-01), pages 91 - 95
  • S. GHARIB, K. DROSSOS, E. CAKIR, D. SERDYUK, T. VIRTANEN: "Unsupervised adversarial domain adaptation for acoustic scene classification", PROCEEDINGS OF THE DETECTION AND CLASSIFICATION OF ACOUSTIC SCENES AND EVENTS WORKSHOP (DCASE), November 2018 (2018-11-01), pages 138 - 142
  • A. MESAROS, T. HEITTOLA, T. VIRTANEN: "A multi-device dataset for urban acoustic scene classification", PROCEEDINGS OF THE DETECTION AND CLASSIFICATION OF ACOUSTIC SCENES AND EVENTS WORKSHOP, 2018
  • J. ABESSER, M. GÖTZE, S. KÜHNLENZ, R. GRÄFE, C. KÜHN, T. CLAUSS, H. LUKASHEVICH: "A Distributed Sensor Network for Monitoring Noise Level and Noise Sources in Urban Environments", PROCEEDINGS OF THE 6TH IEEE INTERNATIONAL CONFERENCE ON FUTURE INTERNET OF THINGS AND CLOUD (FICLOUD), BARCELONA, SPAIN, 2018, pages 318 - 324, XP033399745, DOI: 10.1109/FiCloud.2018.00053
  • J. ABESSER, S. BALKE, M. MÜLLER: "Improving bass saliency estimation using label propagation and transfer learning", PROCEEDINGS OF THE 19TH INTERNATIONAL SOCIETY FOR MUSIC INFORMATION RETRIEVAL CONFERENCE (ISMIR), 2018, pages 306 - 312
  • J. ABESSER, S. IOANNIS MIMILAKIS, R. GRÄFE, H. LUKASHEVICH: "Acoustic scene classification by combining autoencoder-based dimensionality reduction and convolutional neural networks", PROCEEDINGS OF THE 2ND DCASE WORKSHOP ON DETECTION AND CLASSIFICATION OF ACOUSTIC SCENES AND EVENTS, 2017
  • A. AVNI, J. AHRENS, M. GEIER, S. SPORS, H. WIERSTORF, B. RAFAELY: "Spatial perception of sound fields recorded by spherical microphone arrays with varying spatial resolution", JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA, vol. 133, no. 5, 2013, pages 2711 - 2721, XP012173358, DOI: 10.1121/1.4795780
  • E. CANO, D. FITZGERALD, K. BRANDENBURG: "Evaluation of quality of sound source separation algorithms: Human perception vs quantitative metrics", PROCEEDINGS OF THE 24TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO), 2016, pages 1758 - 1762, XP033011238, DOI: 10.1109/EUSIPCO.2016.7760550
  • S. MARCHAND: "Audio scene transformation using informed source separation", THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA, vol. 140, no. 4, 2016, pages 3091
  • S. GROLLMISCH, J. ABESSER, J. LIEBETRAU, H. LUKASHEVICH: "Sounding industry: Challenges and datasets for industrial sound analysis (ISA)", PROCEEDINGS OF THE 27TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO) (SUBMITTED), 2019
  • J. ABESSER, M. MÜLLER: "Fundamental frequency contour classification: A comparison between hand-crafted and CNN-based features", PROCEEDINGS OF THE 44TH IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING (ICASSP), 2019
  • C.-R. NAGAR, J. ABESSER, S. GROLLMISCH: "Towards CNN-based acoustic modeling of seventh chords for automatic chord recognition", PROCEEDINGS OF THE 16TH SOUND & MUSIC COMPUTING CONFERENCE (SMC) (SUBMITTED), 2019
  • J. S. GÓMEZ, J. ABESSER, E. CANO: "Jazz solo instrument classification with convolutional neural networks, source separation, and transfer learning", PROCEEDINGS OF THE 19TH INTERNATIONAL SOCIETY FOR MUSIC INFORMATION RETRIEVAL CONFERENCE (ISMIR), 2018, pages 577 - 584
  • J. R. HERSHEY, Z. CHEN, J. LE ROUX, S. WATANABE: "Deep clustering: Discriminative embeddings for segmentation and separation", PROCEEDINGS OF THE IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2016, pages 31 - 35, XP032900557, DOI: 10.1109/ICASSP.2016.7471631
  • E. CANO, G. SCHULLER, C. DITTMAR: "Pitch-informed solo and accompaniment separation towards its use in music education applications", EURASIP JOURNAL ON ADVANCES IN SIGNAL PROCESSING, vol. 23, 2014, pages 1 - 19
  • S. I. MIMILAKIS, K. DROSSOS, J. F. SANTOS, G. SCHULLER, T. VIRTANEN, Y. BENGIO: "Monaural Singing Voice Separation with Skip-Filtering Connections and Recurrent Inference of Time-Frequency Mask", PROCEEDINGS OF THE IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING (ICASSP), 2018, pages 721 - 725
  • J. F. GEMMEKE, D. P. W. ELLIS, D. FREEDMAN, A. JANSEN, W. LAWRENCE, R. C. MOORE, M. PLAKAL, M. RITTER: "Audio Set: An ontology and human-labeled dataset for audio events", PROCEEDINGS OF THE IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2017
  • KLEINER, M.: "Acoustics and Audio Technology", 2012, J. ROSS PUBLISHING
  • M. DICKREITER, V. DITTEL, W. HOEG, M. WÖHR: "Handbuch der Tonstudiotechnik", vol. 1, 2008, K.G. SAUR VERLAG
  • F. MÜLLER, M. KARAU: "Transparent hearing", CHI '02 EXTENDED ABSTRACTS ON HUMAN FACTORS IN COMPUTING SYSTEMS (CHI EA '02), April 2002 (2002-04-01), pages 730 - 731
  • L. VIEIRA: "Master Thesis", 2018, AALBORG UNIVERSITY, article "Super hearing: a study on virtual prototyping for hearables and hearing aids"
  • SENNHEISER, AMBEO SMART HEADSET, 1 March 2019 (2019-03-01), Retrieved from the Internet <URL:https://de-de.sennheiser.com/finalstop>
  • OROSOUND, TILDE EARPHONES, 1 March 2019 (2019-03-01), Retrieved from the Internet <URL:https://www.orosound.com/tilde-earphones>
  • BRANDENBURG, K., CANO CERON, E., KLEIN, F., KÖLLMER, T., LUKASHEVICH, H., NEIDHARDT, A., NOWAK, J., SLOMA, U., WERNER, S.: "Personalized auditory reality", JAHRESTAGUNG FÜR AKUSTIK (DAGA), GARCHING BEI MÜNCHEN, DEUTSCHE GESELLSCHAFT FÜR AKUSTIK (DEGA), vol. 44, 2018

Citation (search report)

  • [A] DE 102014210215 A1 20151203 - FRAUNHOFER GES FORSCHUNG [DE], et al
  • [Y] US 2019354343 A1 20191121 - GLASER WILLIAM [US], et al
  • [XYI] RISHABH RANJAN ET AL: "Natural listening over headphones in augmented reality using adaptive filtering techniques", IEEE/ACM TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, IEEE, USA, vol. 23, no. 11, 31 July 2015 (2015-07-31), pages 1988 - 2002, XP058072946, ISSN: 2329-9290, DOI: 10.1109/TASLP.2015.2460459
  • [A] KAROLINA PRAWDA: "Augmented Reality: Hear-through", 31 December 2019 (2019-12-31), pages 1 - 20, XP055764823, Retrieved from the Internet <URL:https://mycourses.aalto.fi/pluginfile.php/666520/course/section/128564/Karolina%20Prawda_1930001_assignsubmission_file_Prawda_ARA_hear_through_revised.pdf> [retrieved on 20210113]
  • [A] CANO ESTEFANIA ET AL: "Selective Hearing: A Machine Listening Perspective", 2019 IEEE 21ST INTERNATIONAL WORKSHOP ON MULTIMEDIA SIGNAL PROCESSING (MMSP), IEEE, 27 September 2019 (2019-09-27), pages 1 - 6, XP033660032, DOI: 10.1109/MMSP.2019.8901720

Designated contracting state (EPC)

AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

Designated extension state (EPC)

BA ME

DOCDB simple family (publication)

EP 3945729 A1 20220202; EP 4189974 A2 20230607; JP 2023536270 A 20230824; US 2023164509 A1 20230525; WO 2022023417 A2 20220203; WO 2022023417 A3 20220324

DOCDB simple family (application)

EP 20188945 A 20200731; EP 2021071151 W 20210728; EP 21751796 A 20210728; JP 2023506248 A 20210728; US 202318158724 A 20230124