(11) EP 3 010 017 A1

(12) EUROPEAN PATENT APPLICATION

(43) Date of publication:
20.04.2016 Bulletin 2016/16

(21) Application number: 14306623.1

(22) Date of filing: 14.10.2014

(51) International Patent Classification (IPC):
G10L 21/028 (2013.01)

(84) Designated Contracting States:
AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
Designated Extension States:
BA ME

(71) Applicant: Thomson Licensing
92130 Issy-les-Moulineaux (FR)

(72) Inventors:
  • Ozerov, Alexey
    35576 Cesson-Sévigné (FR)
  • Duong, Quang Khanh Ngoc
    35576 Cesson-Sévigné (FR)
  • Chevallier, Louis
    35576 Cesson-Sévigné (FR)

(74) Representative: Huchet, Anne
Technicolor
1-5 rue Jeanne d'Arc
92130 Issy-Les-Moulineaux (FR)

  


(54) Method and apparatus for separating speech data from background data in audio communication


(57) A method and an apparatus for separating speech data from background data in an audio communication are suggested. The method comprises: applying a speech model to the audio communication for separating the speech data from the background data of the audio communication; and updating the speech model as a function of the speech data and the background data during the audio communication.




Description

TECHNICAL FIELD



[0001] The present invention generally relates to the suppression of acoustic noise in a communication. In particular, the present invention relates to a method and an apparatus for separating speech data from background data in an audio communication.

BACKGROUND



[0002] This section is intended to introduce the reader to various aspects of art, which may be related to various aspects of the present disclosure that are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present disclosure. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.

[0003] An audio communication, especially a wireless communication, may take place in a noisy environment, for example, on a street with heavy traffic or in a bar. In this case, it is often very difficult for one party in the communication to understand the speech because of the background noise. It is therefore an important topic in audio communication to suppress the undesirable background noise while keeping the target speech, which enhances the speech intelligibility.

[0004] There is a far-end implementation of noise suppression, where the suppression is performed on the communication device of the listening person, and a near-end implementation, where it is performed on the communication device of the speaking person. It can be appreciated that the communication device of either the listening or the speaking person can be a smartphone, a tablet, etc. From a commercial point of view, the far-end implementation is more attractive.

[0005] The prior art comprises a number of known solutions that provide noise suppression for an audio communication.

[0006] One of the known solutions in this respect is called speech enhancement. One exemplary method was discussed in the reference written by Y. Ephraim and D. Malah, "Speech enhancement using a minimum mean square error short-time spectral amplitude estimator," IEEE Trans. Acoust. Speech Signal Process., vol. 32, pp. 1109-1121, 1984 (hereinafter referred to as reference 1). However, such speech enhancement solutions have a disadvantage: they only suppress backgrounds represented by stationary noises, i.e., noise sounds with time-invariant spectral characteristics.

[0007] Another known solution is called online source separation. One exemplary method was discussed in the reference written by L. S. R. Simon and E. Vincent, "A general framework for online audio source separation," in International Conference on Latent Variable Analysis and Signal Separation, Tel-Aviv, Israel, March 2012 (hereinafter referred to as reference 2). Online source separation allows dealing with non-stationary backgrounds and is normally based on advanced spectral models of both sources: the speech and the background. However, the performance of online source separation depends strongly on how well the source models represent the actual sources to be separated.

[0008] Consequently, there remains a need to improve the noise suppression in an audio communication for separating the speech data from the background data of the audio communication so that the speech quality can be improved.

SUMMARY



[0009] This invention disclosure describes an apparatus and a method for separating speech data from background data in an audio communication.

[0010] According to a first aspect, a method for separating speech data from background data in an audio communication is suggested. The method comprises: applying a speech model to the audio communication for separating the speech data from the background data of the audio communication; and updating the speech model as a function of the speech data and the background data during the audio communication.

[0011] In an embodiment, the updated speech model is applied to the audio communication.

[0012] In an embodiment, a speech model which is in association with the caller of the audio communication is applied as a function of the calling frequency and calling duration of the caller.

[0013] In an embodiment, a speech model which is not in association with the caller of the audio communication is applied as a function of the calling frequency and calling duration of the caller.

[0014] In an embodiment, the method further comprises storing the updated speech model after the audio communication for use in the next audio communication with the user.

[0015] In an embodiment, the method further comprises changing the speech model to be in association with the caller of the audio communication after the audio communication as a function of the calling frequency and calling duration of the caller.

[0016] According to a second aspect, an apparatus for separating speech data from background data in an audio communication is suggested. The apparatus comprises: an applying unit for applying a speech model to the audio communication for separating the speech data from the background data of the audio communication; and an updating unit for updating the speech model as a function of the speech data and the background data during the audio communication.

[0017] In an embodiment, the applying unit applies the updated speech model to the audio communication.

[0018] In an embodiment, the applying unit applies a speech model which is in association with the caller of the audio communication as a function of the calling frequency and calling duration of the caller.

[0019] In an embodiment, the applying unit applies a speech model which is not in association with the caller of the audio communication as a function of the calling frequency and calling duration of the caller.

[0020] In an embodiment, the apparatus further comprises a storing unit for storing the updated speech model after the audio communication for use in the next audio communication with the user.

[0021] In an embodiment, the apparatus further comprises a changing unit for changing the speech model to be in association with the caller of the audio communication after the audio communication as a function of the calling frequency and calling duration of the caller.

[0022] According to a third aspect, a computer program product downloadable from a communication network and/or recorded on a medium readable by a computer and/or executable by a processor is suggested. The computer program comprises program code instructions for implementing the steps of the method according to the first aspect of the invention disclosure.

[0023] According to a fourth aspect, a non-transitory computer-readable medium comprising a computer program product recorded thereon and capable of being run by a processor is suggested. The non-transitory computer-readable medium includes program code instructions for implementing the steps of the method according to the first aspect of the invention disclosure.

[0024] It is to be understood that more aspects and advantages of the invention will be found in the following detailed description of the present invention.

BRIEF DESCRIPTION OF THE DRAWINGS



[0025] The accompanying drawings are included to provide further understanding of the embodiments of the invention together with the description which serves to explain the principle of the embodiments. The invention is not limited to the embodiments.

[0026] In the drawings:

Figure 1 is a flow chart showing a method for separating speech data from background data in an audio communication according to an embodiment of the invention;

Figure 2 illustrates an exemplary system in which the disclosure may be implemented;

Figure 3 is a diagram showing an exemplary process for separating speech data from background data in an audio communication; and

Figure 4 is a block diagram of an apparatus for separating speech data from background data in an audio communication according to an embodiment of the invention.


DETAILED DESCRIPTION



[0027] An embodiment of the present invention will now be described in detail in conjunction with the drawings. In the following description, some detailed descriptions of known functions and configurations may be omitted for conciseness.

[0028] Figure 1 is a flow chart showing a method for separating speech data from background data in an audio communication according to an embodiment of the invention.

[0029] As shown in Figure 1, at step S101, a speech model is applied to the audio communication for separating speech data from background data of the audio communication.

[0030] The speech model can use any known audio source separation algorithm to separate the speech data from the background data of the audio communication, such as the one described in the reference written by A. Ozerov, E. Vincent and F. Bimbot, "A general flexible framework for the handling of prior information in audio source separation," IEEE Trans. on Audio, Speech and Lang. Proc., vol. 20, no. 4, pp. 1118-1133, 2012 (hereinafter referred to as reference 3). In this sense, the term "model" here refers to any algorithm/method/approach/processing in this technical field.

[0031] The speech model can also be a spectral source model, which can be understood as a dictionary of characteristic spectral patterns describing the audio source of interest (here the speech, or the speech of a particular speaker). For example, for a nonnegative matrix factorization (NMF) source spectral model, these spectral patterns are combined with non-negative coefficients to describe the corresponding source (here the speech) in the mixture at a particular time frame. For a Gaussian mixture model (GMM) source spectral model, only the single most likely spectral pattern is selected to describe the corresponding source (here the speech) in the mixture at a particular time frame.
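As a purely illustrative example of such an NMF spectral source model (not part of the claimed method), the following Python sketch factorizes a non-negative magnitude spectrogram into a dictionary of characteristic spectral patterns W and non-negative activation coefficients H using standard multiplicative updates; all names and parameter values are assumptions made for this sketch.

```python
import numpy as np

def nmf_spectral_model(V, n_patterns=32, n_iter=100, eps=1e-12):
    """Factorize a magnitude spectrogram V (freq x frames) as V ~ W @ H.

    W holds the characteristic spectral patterns (the dictionary) and
    H holds the non-negative combination coefficients per time frame.
    Standard multiplicative updates for the Euclidean cost are used.
    """
    n_freq, n_frames = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((n_freq, n_patterns)) + eps
    H = rng.random((n_patterns, n_frames)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Usage: V could be np.abs(STFT) of a speech recording; W then serves as the
# speech dictionary of the spectral source model described above.
```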

[0032] The speech model can be applied in association with the caller of the audio communication. For example, the speech model is applied in association with the caller of the audio communication according to the previous audio communications of this caller. In this case, the speech model can be called a "speaker model". The association can be based on the ID of the caller, for example, the phone number of the caller.

[0033] A database can be built to contain N speech models corresponding to the N callers in the calling history of audio communications.

[0034] Upon initiation of the audio communication, a speaker model assigned to a caller can be selected from the database and applied to the audio communication. The N callers can be selected from all callers in the calling history based on their calling frequencies and total calling durations. That is, a caller who calls more frequently and has a longer accumulated calling duration will have priority for being included in the list of N callers allocated a speaker model. The number N can be set depending on the memory capacity of the communication device used for the audio communication and can be, for example, 5, 10, 50, 100, and so on.

[0035] A generic speech model, which is not in association with the caller of the audio communication, can be assigned to a caller who is not in the calling history, according to the calling frequency or the total calling duration of the caller. That is, a new caller can be assigned a generic speech model. A caller who is in the calling history but does not call very often can also be assigned a generic speech model.

[0036] Similar to the speaker model, the generic speech model can use any known audio source separation algorithm to separate the speech data from the background data of the audio communication. For example, it can be a source spectral model, or a dictionary of characteristic spectral patterns for popular models such as NMF or GMM. The difference between the generic speech model and the speaker model is that the generic speech model is learned (or trained) offline from speech samples, such as a dataset of speech samples from many different speakers. As such, while a speaker model tends to describe the speech and the voice of a particular caller, a generic speech model tends to describe human speech in general without focusing on a particular speaker.

[0037] Several generic speech models can be set to correspond to different classes of speakers, for example, in terms of male/female and/or adult/child. In this case, the speaker class is detected to determine the speaker's gender and/or approximate age. According to the result of the detection, a suitable generic speech model can be selected.
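A minimal sketch of how the model selection described above could be implemented is given below; the database layout, the detect_class helper and all other names are hypothetical and only illustrate the selection logic.

```python
def select_speech_model(caller_id, speaker_models, generic_models, detect_class):
    """Return the speaker model stored for this caller if it exists.

    Otherwise fall back to a generic speech model matching the detected
    speaker class (e.g. 'female_adult'), or to a default generic model
    when no class-specific model is available.
    """
    if caller_id in speaker_models:
        return speaker_models[caller_id]
    speaker_class = detect_class()  # e.g. estimated from the first seconds of speech
    return generic_models.get(speaker_class, generic_models["default"])
```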

[0038] At step S102, the speech model is updated as a function of the speech data and the background data during the audio communication.

[0039] Generally, the above adaptation can be based on the detection of "speech only (noise free)" segments and "background only" segments of the audio communication using known spectral source model adaptation algorithms. A more detailed description in this respect will be given below with reference to a specific system.

[0040] The updated speech model will be used for the current audio communication.

[0041] The method can further comprise a step S103 of storing the updated speech model in the database after the audio communication for use in the next audio communication with the user. In the case that the speech model is a speaker model, the updated speech model will be stored in the database if there is enough space in the database. If the speech model is a generic speech model, the method can further comprise storing the updated generic speech model in the database as a speaker model, for example, according to the calling frequency and the total calling duration.

[0042] According to the method of the embodiment, upon initiation of an audio communication, it is first checked whether a corresponding speaker model is already stored in the database of speech models, for example, according to the caller ID of the incoming call. If a speaker model is already in the database, that speaker model will be used as the speech model for this audio communication. The speaker model can be updated during the audio communication, because, for example, the caller's voice may have changed due to an illness.

[0043] If there is no corresponding speaker model stored in the database of speech models, a generic speech model will be used as the speech model for this audio communication. The generic speech model can also be updated during the call to better fit this caller. For a generic speech model, it can be determined at the end of the call whether the generic speech model should be changed into a speaker model in association with the caller of the audio communication. For example, if it is determined, according to the calling frequency and total calling duration of the caller, that the generic speech model should be changed into a speaker model of the caller, this generic speech model will be stored in the database as a speaker model in association with this caller. It can be appreciated that if the database has limited space, one or more speaker models of callers who have become less frequent can be discarded.

[0044] Figure 2 illustrates an exemplary system in which the disclosure can be implemented. The system can be any kind of communication system that involves an audio communication between two or more parties, such as a telephone system or a mobile communication system. In the system of Figure 2, a far-end implementation of online source separation is described. However, it can be appreciated that the embodiment of the invention can also be implemented in other manners, such as a near-end implementation.

[0045] As shown in Figure 2, the database of speech models contains a maximum of N speaker models. The speaker models are in association with respective callers, such as Max's model, Anna's model, Bob's model, John's model and so on.

[0046] As for the speaker models, the total call durations for all previous callers are accumulated according to their IDs. The "total call duration" for each caller means the total time this caller has been calling, i.e., "time_call_1 + time_call_2 + ... + time_call_K". Thus, in some sense the "total call duration" reflects both the call frequency and the call duration of the caller. The call durations are used to identify the most frequent callers, to whom a speaker model is allocated. In an embodiment, the "total call duration" can be computed only within a time window, for example, within the past 12 months. This helps discard the speaker models of callers who were calling a lot in the past but have not called for a while.
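For illustration, the windowed "total call duration" described above could be computed as follows; the call_log structure and all names are assumptions of this sketch, not a prescribed data format.

```python
from datetime import datetime, timedelta

def total_call_duration(call_log, caller_id, window=timedelta(days=365)):
    """Sum the durations (in seconds) of a caller's calls that started
    within the given time window, e.g. the past 12 months."""
    now = datetime.now()
    return sum(duration for cid, start_time, duration in call_log
               if cid == caller_id and now - start_time <= window)

# call_log is assumed to be a list of (caller_id, start_time, duration_seconds) tuples.
```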

[0047] It can be appreciated that other algorithms can also be applied for identifying the most frequent callers. For example, a combination of the calling frequency and/or calling time can be considered for this purpose. No further details will be given.

[0048] As shown in Figure 2, the database also contains a generic speech model which is not in association with a specific caller of the audio communication. The generic speech model can be trained from some speech signals dataset.

[0049] When a new call comes in, a speech model is taken from the database: either a speaker model corresponding to the caller or a generic speech model that is not speaker-dependent.

[0050] As shown in Figure 2, when Bob is calling, the speaker model "Bob's model" is selected from the database and applied to the call, since this speaker model is allocated to Bob according to the calling history.

[0051] In this embodiment, in addition to Bob's model, a background source model can be used, which is also a source spectral model. The background source model can be a dictionary of characteristic spectral patterns (e.g., NMF or GMM). So the structure of the background source model can be exactly the same as that of the speech source model. The main difference is in the model parameter values, e.g., the characteristic spectral patterns of the background model should describe the background, while the characteristic spectral patterns of the speech model should describe the speech.

[0052] Figure 3 is a diagram showing an exemplary process for separating speech data from background data in an audio communication.

[0053] In the process illustrated in Figure 3, the following steps are performed during the call:

1. A detector is launched for detecting the current signal state among the following three states:

  a. Speech only.
  b. Background only.
  c. Speech + background.


[0054] Known detectors in this art can be used for the above purpose, for example, the detector discussed in the reference written by I. Shafran and R. Rose, "Robust speech detection and segmentation for real-time ASR applications," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), vol. 1, pp. 432-435, 2003 (hereinafter referred to as reference 4). Like many other approaches to audio event detection, this approach relies mainly on the following steps. The signal is cut into temporal frames, and some features, e.g., vectors of Mel-frequency cepstral coefficients (MFCC), are computed for each frame. A classifier, e.g., one based on several GMMs, each GMM representing one event (here there are three events: "speech only", "background only" and "speech + background"), is then applied to each feature vector to detect the corresponding audio event at the given time. This classifier, e.g., the one based on GMMs, needs to be pre-trained offline from audio data for which the audio event labels are known (e.g., labeled by a human).
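The following sketch illustrates such a detector with MFCC features and one GMM per state, assuming the librosa and scikit-learn libraries are available; it is a simplified illustration with assumed parameter values, not the exact method of reference 4.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

STATES = ["speech_only", "background_only", "speech_plus_background"]

def train_state_models(labelled_audio, sr=16000, n_mfcc=13, n_components=8):
    """Offline step: train one GMM per state from labelled audio.
    labelled_audio maps each state name to a 1-D signal of that class."""
    models = {}
    for state, signal in labelled_audio.items():
        feats = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc).T  # frames x coeffs
        models[state] = GaussianMixture(n_components=n_components).fit(feats)
    return models

def detect_states(signal, models, sr=16000, n_mfcc=13):
    """Classify each frame of the incoming signal into one of the three states
    by picking the GMM with the highest per-frame log-likelihood."""
    feats = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc).T
    log_lik = np.stack([models[s].score_samples(feats) for s in STATES], axis=1)
    return [STATES[i] for i in np.argmax(log_lik, axis=1)]
```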

2. In the "Speech only" state, the speaker source model is learned online, for example, using the algorithm described in the reference 2. Online learning means that the model (here speaker model) parameters need to be continuously updated along with new signal observations available within the call progress. In other words, the algorithm can use only past sound samples and should not store too much of previous sound samples (this is due to the device memory constraints). According to the approach described in the reference 2, the speaker model (which is an NMF model according to the reference 2) parameters are smoothly updated using statistics extracted from a small fixed number (for example, 10) of most recent frames.

3. In the "Background only" state, the background source model is learned online, for example, using the algorithm described in the reference 2. This online background source model learning is performed exactly as for the speaker model, as described in the previous item.

4. In the "Speech + background" state, the speaker model is adapted online, assuming the background source model is fixed, for example, using the algorithm described in Z. Duan, G. J. Mysore, and P. Smaragdis, "Online PLCA for real-time semi-supervised source separation," in International Conference on Latent Variable Analysis and Source Separation (LVA/ICA). 2012, Springer (hereinafter referred to as reference 5). The approach is similar to the one explained in the above steps 2 and 3. The only difference between them is that this online adaptation is performed from the mixture of the sources ("speech + background"), instead of the clean sources ("speech only or background only"). For the above purpose, the process similar to the online learning (items 2 and 3) is applied. The difference is that, in this case, the speaker source model and the background source model are decoded jointly and the speaker model is continuously updated, while the background model is kept fixed.



[0055] Alternatively, the background source model can be adapted, assuming that the speaker source model is fixed. However, it can be more advantageous to update the speaker source model, since in a usual noisy situation it is more probable to encounter speech-free segments ("Background only" detections) than background-free segments ("Speech only" detections). In other words, the background source model can already be trained well enough (on the speech-free segments), and it can thus be more advantageous to adapt the speaker source model on the "Speech + background" segments.
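The adaptation of item 4 can be sketched in the same spirit: the mixture frames are decoded jointly with both dictionaries, but only the speaker dictionary is updated while the background dictionary stays fixed. The sketch below is a simplified NMF-style illustration with assumed names and parameters, not the PLCA algorithm of reference 5.

```python
import numpy as np

def adapt_speaker_model(W_s, W_b, mixture_frames, n_inner=5, alpha=0.1, eps=1e-12):
    """Jointly decode mixture frames (freq x K) with the speaker dictionary W_s
    and the fixed background dictionary W_b; only W_s is (smoothly) updated."""
    V = np.maximum(mixture_frames, eps)
    W = np.concatenate([W_s, W_b], axis=1)            # stacked dictionaries
    H = np.full((W.shape[1], V.shape[1]), 1.0 / W.shape[1])
    n_s = W_s.shape[1]
    for _ in range(n_inner):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        # multiplicative update restricted to the speaker part of the dictionary
        W[:, :n_s] *= (V @ H[:n_s].T) / (W @ H @ H[:n_s].T + eps)
    return (1 - alpha) * W_s + alpha * W[:, :n_s]     # smoothed speaker dictionary
```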

5. Finally, source separation is continuously applied to estimate the clean speech (see Figure 3). This source separation process is based on the Wiener filter, which is an adaptive filter whose parameters are estimated from the two models (the speaker source model and the background source model) and the noisy speech. References 2 and 5 give more details in this respect.
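A simplified sketch of this Wiener-filtering step, computing a time-frequency gain from the two model estimates, is given below; the exact estimators used in references 2 and 5 differ, and all names here are illustrative assumptions.

```python
import numpy as np

def wiener_separate(mixture_stft, W_s, H_s, W_b, H_b, eps=1e-12):
    """Estimate the clean-speech STFT from the noisy mixture STFT using the
    speech model (W_s, H_s) and the background model (W_b, H_b).

    The Wiener gain is the ratio of the speech power estimate to the total
    estimated power, applied per time-frequency bin."""
    P_speech = (W_s @ H_s) ** 2
    P_background = (W_b @ H_b) ** 2
    gain = P_speech / (P_speech + P_background + eps)  # time-frequency mask in [0, 1]
    return gain * mixture_stft  # clean speech estimate, keeping the mixture phase

# The time-domain speech can then be recovered with an inverse STFT.
```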



[0056] At the end of the call, the following steps are performed:

1. The total call duration for this caller is updated. This can simply be done by adding the current call duration to the stored duration, or by initializing it with the current call duration if this caller calls for the first time.
2. If the speech model of this speaker was already in the database of models, it is updated in the database.
3. Otherwise, if the speech model was not in the database, it is added to the database only if the database contains fewer than N speaker models or if this speaker is among the top N total call durations (in the latter case, the model of the least frequent speaker is removed from the database so that there are never more than N models in it).
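For illustration, the end-of-call bookkeeping above could look as follows; the dictionary-based storage and all names are assumptions of this sketch rather than a prescribed implementation.

```python
def end_of_call_update(caller_id, call_duration, speech_model,
                       total_durations, speaker_models, max_models):
    """Update the caller's total call duration and store or refresh the speaker
    model, evicting the least frequent caller's model if the database is full."""
    total_durations[caller_id] = total_durations.get(caller_id, 0) + call_duration

    if caller_id in speaker_models:
        speaker_models[caller_id] = speech_model      # refresh the existing model
        return

    if len(speaker_models) >= max_models:
        # candidate for eviction: stored model whose caller has the smallest total duration
        least_frequent = min(speaker_models, key=lambda cid: total_durations.get(cid, 0))
        if total_durations[caller_id] <= total_durations.get(least_frequent, 0):
            return                                    # caller is not yet among the top N
        del speaker_models[least_frequent]

    speaker_models[caller_id] = speech_model
```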


[0057] Note that the invention relies on the hypothesis that the same phone number is used by the same person, which is usually the case for mobile phones. For stationary home phones this may be less true, since, e.g., all family members may use such a phone. However, in the case of home phones, background suppression is not so crucial: it is often possible to simply turn off the music or ask other people to speak quietly. In other words, in most cases where background suppression is necessary, this hypothesis holds, and, if it does not (indeed, one can borrow the mobile phone of some other person to speak), the proposed system will not fail either, thanks to continuous re-adaptation of the speaker model to the new conditions.

[0058] An embodiment of the invention provides an apparatus for separating speech data from background data in an audio communication. Figure 4 is a block diagram of the apparatus for separating speech data from background data in an audio communication according to the embodiment of the invention.

[0059] As shown in Figure 4, the apparatus 400 for separating speech data from background data in an audio communication comprises an applying unit 401 for applying a speech model to the audio communication for separating the speech data from the background data of the audio communication; and an updating unit 402 for updating the speech model as a function of the speech data and the background data during the audio communication.

[0060] The apparatus 400 can further comprise a storing unit 403 for storing the updated speech model after the audio communication for using in the next audio communication with the user.

[0061] The apparatus 400 can further comprise a changing unit 404 for changing the speech model to be in association with the caller of the audio communication after the audio communication as a function of the calling frequency and calling duration of the caller.

[0062] An embodiment of the invention provides a computer program product downloadable from a communication network and/or recorded on a medium readable by computer and/or executable by a processor, comprising program code instructions for implementing the steps of the method described above.

[0063] An embodiment of the invention provides a non-transitory computer-readable medium comprising a computer program product recorded thereon and capable of being run by a processor, including program code instructions for implementing the steps of a method described above.

[0064] It is to be understood that the present invention may be implemented in various forms of hardware, software, firmware, special purpose processors, or a combination thereof. Moreover, the software is preferably implemented as an application program tangibly embodied on a program storage device. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture. Preferably, the machine is implemented on a computer platform having hardware such as one or more central processing units (CPU), a random access memory (RAM), and input/output (I/O) interface(s). The computer platform also includes an operating system and microinstruction code. The various processes and functions described herein may either be part of the microinstruction code or part of the application program (or a combination thereof), which is executed via the operating system. In addition, various other peripheral devices may be connected to the computer platform such as an additional data storage device and a printing device.

[0065] It is to be further understood that, because some of the constituent system components and method steps depicted in the accompanying figures are preferably implemented in software, the actual connections between the system components (or the process steps) may differ depending upon the manner in which the present invention is programmed. Given the teachings herein, one of ordinary skill in the related art will be able to contemplate these and similar implementations or configurations of the present invention.


Claims

1. A method for separating speech data from background data in an audio communication, comprising:
applying (S101) a speech model to the audio communication for separating the speech data from the background data of the audio communication; and
updating (S102) the speech model as a function of the speech data and the background data during the audio communication.
 
2. Method according to claim 1, wherein the updated speech model is applied to the audio communication.
 
3. Method according to claim 1 or 2, wherein a speech model which is in association with the caller of the audio communication is applied as a function of the calling frequency and calling duration of the caller.
 
4. Method according to claim 1 or 2, wherein a speech model which is not in association with the caller of the audio communication is applied as a function of the calling frequency and calling duration of the caller.
 
5. Method according to any one of claims 1-4, further comprising:

storing (S103) the updated speech model after the audio communication for using in the next audio communication with the user.


 
6. Method according to claim 4, further comprising:

changing the speech model to be in association with the caller of the audio communication after the audio communication as a function of the calling frequency and calling duration of the caller.


 
7. Apparatus (400) for separating speech data from background data in an audio communication, comprising:

an applying unit (401) for applying a speech model to the audio communication for separating the speech data from the background data of the audio communication; and

an updating unit (402) for updating the speech model as a function of the speech data and the background data during the audio communication.


 
8. Apparatus (400) according to claim 7, wherein the applying unit (401) applies the updated speech model to the audio communication.
 
9. Apparatus (400) according to claim 7 or 8, wherein the applying unit (401) applies a speech model which is in association with the caller of the audio communication as a function of the calling frequency and calling duration of the caller.
 
10. Apparatus (400) according to claim 7 or 8, wherein the applying unit (401) applies a speech model which is not in association with the caller of the audio communication as a function of the calling frequency and calling duration of the caller.
 
11. Apparatus (400) according to any one of claims 7-10, further comprising:

a storing unit (403) for storing the updated speech model after the audio communication for using in the next audio communication with the user.


 
12. Apparatus (400) according to claim 10, further comprising:

a changing unit (404) for changing the speech model to be in association with the caller of the audio communication after the audio communication as a function of the calling frequency and calling duration of the caller.


 
13. Computer program product downloadable from a communication network and/or recorded on a medium readable by computer and/or executable by a processor, comprising program code instructions for implementing the steps of a method according to at least one of claims 1 to 6.
 
14. Non-transitory computer-readable medium comprising a computer program product recorded thereon and capable of being run by a processor, including program code instructions for implementing the steps of a method according to at least one of claims 1 to 6.
 




Drawing
















Search report




Cited references

REFERENCES CITED IN THE DESCRIPTION



This list of references cited by the applicant is for the reader's convenience only. It does not form part of the European patent document. Even though great care has been taken in compiling the references, errors or omissions cannot be excluded and the EPO disclaims all liability in this regard.

Non-patent literature cited in the description

  • Y. Ephraim and D. Malah, "Speech enhancement using a minimum mean square error short-time spectral amplitude estimator," IEEE Trans. Acoust. Speech Signal Process., vol. 32, pp. 1109-1121, 1984.
  • L. S. R. Simon and E. Vincent, "A general framework for online audio source separation," International Conference on Latent Variable Analysis and Signal Separation, Tel-Aviv, Israel, March 2012.
  • A. Ozerov, E. Vincent and F. Bimbot, "A general flexible framework for the handling of prior information in audio source separation," IEEE Trans. on Audio, Speech and Lang. Proc., vol. 20, no. 4, pp. 1118-1133, 2012.
  • I. Shafran and R. Rose, "Robust speech detection and segmentation for real-time ASR applications," Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), vol. 1, pp. 432-435, 2003.
  • Z. Duan, G. J. Mysore and P. Smaragdis, "Online PLCA for real-time semi-supervised source separation," International Conference on Latent Variable Analysis and Source Separation (LVA/ICA), Springer, 2012.