(19) European Patent Office
(11)EP 2 486 470 B1

(12)EUROPEAN PATENT SPECIFICATION

(45)Mention of the grant of the patent:
11.12.2019 Bulletin 2019/50

(21)Application number: 10775846.8

(22)Date of filing:  11.10.2010
(51)International Patent Classification (IPC): 
G06F 3/023(2006.01)
(86)International application number:
PCT/GB2010/001898
(87)International publication number:
WO 2011/042710 (14.04.2011 Gazette  2011/15)

(54)

SYSTEM AND METHOD FOR INPUTTING TEXT INTO ELECTRONIC DEVICES

SYSTEM UND VERFAHREN ZUR EINGABE VON TEXT IN ELEKTRONISCHE GERÄTE

SYSTÈME ET PROCÉDÉ DESTINÉS À ENTRER UN TEXTE DANS DES DISPOSITIFS ÉLECTRONIQUES


(84)Designated Contracting States:
AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

(30)Priority: 09.10.2009 GB 0917753

(43)Date of publication of application:
15.08.2012 Bulletin 2012/33

(73)Proprietor: TouchType Ltd.
London SE1 0AX (GB)

(72)Inventors:
  • MEDLOCK, Benjamin
    London SE3 9LL (GB)
  • REYNOLDS, Jonathan
    London SW8 1JX (GB)

(74)Representative: CMS Cameron McKenna Nabarro Olswang LLP 
Cannon Place 78 Cannon Street
London EC4N 6AF (GB)


(56) References cited:
US-A1- 2004 201 607
  
  • KEITH TRNKA ET AL: "Topic modeling in fringe word prediction for AAC", 2006 INTERNATIONAL CONFERENCE ON INTELLIGENT USER INTERFACES. IUI 06. SYDNEY, AUSTRALIA, JAN. 29 - FEB. 1, 2006; [ANNUAL INTERNATIONAL CONFERENCE ON INTELLIGENT USER INTERFACES], NEW YORK, NY : ACM, US, 29 January 2006 (2006-01-29), pages 276-278, XP058336269, DOI: 10.1145/1111449.1111509 ISBN: 978-1-59593-287-7
  • RADU FLORIAN ET AL: "Dynamic Nonlocal Language Modeling via Hierarchical Topic-Based Adaptation", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 28 April 2001 (2001-04-28), XP080046755, DOI: 10.3115/1034678.1034711
  • RADU FLORIAN ET AL: "Dynamic nonlocal language modeling via hierarchical topic-based adaptation", COMPUTATIONAL LINGUISTICS, ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, N. EIGHT STREET, STROUDSBURG, PA, 18360 07960-1961 USA, 20 June 1999 (1999-06-20), pages 167-174, XP058292410, DOI: 10.3115/1034678.1034711 ISBN: 978-1-55860-609-8
  
Note: Within nine months from the publication of the mention of the grant of the European patent, any person may give notice to the European Patent Office of opposition to the European patent granted. Notice of opposition shall be filed in a written reasoned statement. It shall not be deemed to have been filed until the opposition fee has been paid. (Art. 99(1) European Patent Convention).


Description


[0001] The present invention relates generally to a system and method for inputting text into electronic devices. In particular, the invention relates to a system and method for the adaptive weighting of text predictions.

[0002] There currently exists a wide range of text input techniques for use with electronic devices, for example, QWERTY-style keyboards for text input into a computer or laptop, handwriting recognition in the PDA market, alphabetic character entry using a 9-digit keypad for mobile phone devices, speech recognition text input systems for both standard and mobile computational devices, and touch-screen devices.

[0003] In the case of mobile phone technology there are a number of existing text input technologies. Notable examples include Tegic Communications' 'T9', Motorola's 'iTap', Nuance's 'XT9', 'eZiType' and 'eZiText', Blackberry's 'SureType', KeyPoint Technology's 'AdapTxt' and CooTek's 'TouchPal'. These techniques comprise predominantly character-based text input and utilise some form of text prediction (or disambiguation) technology. In each of the identified models, a dictionary (or plurality of dictionaries) of allowable terms is maintained and, given a particular input sequence, the system chooses a legitimate term (or set of terms) from the dictionary and presents it to the user as a potential completion candidate. The basic dictionary can be augmented with new terms entered by the user, limited by the amount of device memory available.

[0004] In these systems, completions are ordered on the basis of usage frequency statistics and in some cases (e.g. eZiText, AdapTxt, TouchPal) using immediate lexical context.

[0005] US2004/201607 discusses an alphanumeric information entry process that includes the provision and use of a personal context model that correlates various examples of user context against a unique personal language model for the user. The personal language model itself, along with considerable correlation examples, can be developed by statistical analysis of user documents and files, including particular email files (including address books). Such processing can be done locally or remotely. The personal context model is used to predict subsequent alphanumeric entries for a given user.

[0006] Trnka et al, "Topic Modeling in Fringe Word Prediction for AAC", IUI'06, January 29-February 1, 2006, Sydney, Australia, discloses topic modelling in fringe word prediction for Alternative and Augmented Communication (AAC). The paper compares different methods of text prediction and concludes that topic modelling can be implemented in many different ways. Florian and Yarowsky, "Dynamic Nonlocal Language Modeling via Hierarchical Topic-Based Adaptation", disclose a dynamic nonlocal language model via hierarchical topic-based adaptation. The paper discusses cluster generation, hierarchical smoothing and adaptive topic-probability estimation techniques. A language model is created from "balanced and pruned" trees.

[0007] The present invention represents a significant enhancement over systems in which text predictions are ordered solely on the basis of recency or frequency. It allows the ordering of predictions to be influenced by high-level characteristics of the text being generated, e.g. topic, genre or authorship. The scope of the invention is defined by the appended claims.

[0008] The present invention therefore provides for a more accurate ordering, by a system, of text predictions generated by the system, thereby reducing the user labour element of text input (because the user is less likely to have to scroll through a list of predicted terms, or enter additional characters, to find their intended term).

[0009] In accordance with the present invention there is provided a system which employs a machine learning technique, classification, to make real-time category predictions for sections of text entered by a user. The system uses the category predictions to reorder and/or select the text predictions generated by a text prediction engine. The generated text predictions can then be displayed for user selection to input text into an electronic device.

[0010] Reordering the text predictions by category predictions offers the advantage of placing predictions that are more likely to be relevant to the current textual topic/genre etc. at the top of a list for display and user selection, thereby facilitating user text input. The category predictions can be graded to give broad category predictions and finer category predictions within those broad categories. For example, sport as a broad category can be split into any number of sub-categories, and these sub-categories can be further divided. If a sub-category of sport is football, this sub-category could be split into further sub-categories such as football clubs, players, managers etc. The system of the present invention can therefore predict accurately, from the user inputted text, a number of categories that this text relates to. The system can then hone the text predictions generated by a text prediction engine (that generates, preferably, context based predictions) by decreasing the probabilities of predictions which are unlikely to occur given the category predictions for the user inputted text.

[0011] In accordance with the present invention there is provided a system for generating text input in a computing device, the system comprising a text prediction engine comprising at least one predictor and configured to receive text input into the device by a user and to generate text predictions using the at least one predictor, a classifier configured to receive the input text and to generate at least one text category prediction, and a weighting module configured to receive the text predictions and the at least one category prediction and to weight the text predictions by the category predictions to generate new text predictions for presentation to the user.

[0012] Preferably, the at least one predictor is trained from a text source. The system may comprise a plurality of predictors, each predictor being trained by a separate text source. Preferably, the plurality of predictors generate text predictions concurrently.

[0013] In an embodiment of the invention, the system further comprises a Feature Vector Generator which is configured to generate a feature vector representing the text input into the device by a user by extracting features from the input text, calculating the term frequency-inverse document frequency for each feature in the text input, and normalising the resulting vector to unit length. The Feature Vector Generator is preferably further configured to generate at least one feature vector for the text source or each of the separate text sources by extracting a set of features from the text source, calculating the term frequency-inverse document frequency for each feature in the text source, and normalising the resulting vectors to unit length.

[0014] In a preferred embodiment the system further comprises a classifier training module which is configured to train the classifier from the feature vectors which have been generated from the text source(s). Preferably, the text source(s) comprises text data that has been pre-labelled with at least one representative category. The classifier may be a timed aggregate perceptron classifier. The classifier is preferably configured to generate a confidence vector relating to the at least one category.

[0015] In a preferred embodiment, the weighting module is configured to generate a weights vector from the confidence vector. Preferably, the weighting module generates the weights vector by setting the largest positive value in the confidence vector to 1, and dividing all other positive values in the confidence vector by the largest positive value in the confidence vector multiplied by a constant factor, and by setting any negative confidence value to zero. The weighting module may be configured to scale the text predictions generated by the text prediction engine by the weights vector to generate the new text predictions.

[0016] Preferably, the weighting module is configured to insert the new text predictions into a multimap structure, the structure comprising text predictions mapped to probability values, and to return the p most probable new text predictions.

[0017] The at least one predictor may be one of a single language model, a multi-language model or an adaptive prediction system. The text prediction engine may comprise at least two predictors, at least one of which is an adaptive prediction system. In this embodiment, the at least one adaptive prediction system comprises a second text prediction engine comprising at least one predictor and configured to receive the input text and to generate text predictions using the at least one predictor, a second classifier configured to receive the input text and to generate at least one text category prediction, a second weighting module configured to receive the text predictions from the second text prediction engine and the at least one category prediction from the second classifier and to weight the text predictions by the category predictions to generate new text predictions.

[0018] Preferably, the computing device is one of a mobile phone, a PDA or a computer such as a desktop PC, a laptop, a tablet PC, a Mobile Internet Device, an Ultramobile PC, a games console, or an in-car system.

[0019] The present invention also provides a method of generating text predictions from user text input, the method comprising generating text predictions based upon user text input, generating, based upon the user text input, a set of text category predictions, generating a set of category-weighted text predictions, and presenting the set of category-weighted text predictions to the user. Preferably, the method further comprises selecting one of the set of category-weighted text predictions for text input.

[0020] The predictions are generated by at least one predictor and the method preferably comprises training the at least one predictor based upon a text source. In an embodiment comprising a plurality of predictors, each predictor is trained based upon a separate text source. Preferably, predictions are generated concurrently by the plurality of predictors.

[0021] In an embodiment, the method further comprises generating a feature vector representing the text input into the device by a user by extracting features from the text input, calculating the term frequency-inverse document frequency for each feature in the text input, and normalising the resulting vector to unit length. Preferably, the method also comprises generating at least one feature vector for the text source or each of the separate text sources by extracting a set of features from the text source, calculating the term frequency-inverse document frequency for each feature in the text source, and normalising the resulting vectors to unit length.

[0022] Preferably, the text category predictions are generated by a classifier. The classifier may be a timed aggregate perceptron classifier. Preferably, the method comprises training the classifier based upon the feature vector generated from the text source(s). Preferably, the text source(s) comprises text data that has been pre-labelled with at least one representative category.

[0023] In an embodiment, the method comprises inserting the category-weighted text predictions into a multimap structure, the structure comprising category-weighted text predictions mapped to probability values, and returning the p most probable category-weighted text predictions.

[0024] The step of generating the set of text category predictions may comprise generating a confidence vector relating to the categories. The step of generating the set of text category-weighted predictions may comprise generating a weights vector from the confidence vector. The weights vector may be generated by setting the largest positive value in the confidence vector to 1, and dividing all other positive values in the confidence vector by the largest positive value in the confidence vector multiplied by a constant factor, and by setting any negative confidence value to zero. The step of generating the set of category-weighted predictions may comprise scaling the text predictions generated by the text prediction engine by the weights vector.

[0025] In an embodiment, the step of generating text predictions comprises generating text predictions using at least two predictors. Preferably, generating text predictions using at least one of the at least two predictors comprises generating text predictions based upon the user text input, generating a second set of text category predictions and generating a set of new text predictions by weighting the text predictions from the second predictor by the second set of category predictions.

[0026] The present invention will now be described in detail with reference to the accompanying drawings, in which:

Fig. 1 is a schematic of an adaptive prediction architecture according to the invention;

Fig. 2 is a schematic of an example instantiation of the adaptive predictive architecture according to the invention;

Fig. 3 is a schematic of a method for generating category-weighted text predictions according to the invention.



[0027] The present invention provides a modular language model based text prediction system for the adaptive weighting of text prediction components. The system (named an adaptive predictor) utilises a machine learning technique, classification, which is trained on text data that has been pre-labelled with representative categories, and makes real-time category predictions for sections of text entered by a user.

[0028] As stated above, the real-time category predictions for sections of user-entered text are used by the system to reorder text predictions that have been generated by the system from the user inputted text. The system is therefore capable of placing the most probable predictions (based on local context, category predictions and information about the current word, if there is one) at the top of a list of text predictions generated for display and user selection, thereby facilitating user selection and text input.

[0029] The present system can be employed in a broad range of electronic devices. By way of non-limiting example, the present system can be used for mobile phone text entry, for text input into a PDA device, or for computer text entry (for example, where a key stroke and means of scrolling can be used to select the relevant prediction or where the device employs touch screen technology).

[0030] The classifier can be predominantly focussed on a specialised topic/genre/authorship, etc., to facilitate text input for that given topic/genre/authorship, etc. For example, a classifier focussed on the topic of sport can comprise many sub-categories within sport. A system comprising such a classifier can be used by a sports journalist to facilitate text input (e.g. in the form of an email or word-processing document). Similarly, the system of the present invention could be used in companies or organisations where a specialist type of language is used (e.g. for legal, financial, or business documents), the classifier being trained on many text sources in that field.

[0031] The system of the present invention is schematically shown in figure 1. The elements of the system will now be described with reference to this figure.

[0032] The system comprises a plurality of text sources 1, 2, 3, each text source comprising at least one, and preferably a plurality of, documents. Each text source 1, 2, 3 is a body of electronic text for which there exists a category label referring to some aspect of the nature of the text. The category label could refer to a particular language, to a particular topic (e.g. sport, finance etc.), to a particular genre (e.g. legal, informal, etc.), to a particular author, to a particular recipient or set of recipients, to a particular semantic orientation, or to any other attribute of the text that can be identified. The text sources are used to train one or more predictors 6, 7, 8 and a classifier 9.

[0033] The system comprises a text prediction engine 100 which includes at least one predictor 6, 7, 8. A predictor can be any prediction component that generates one or more text predictions. Any prior art predictor can therefore be incorporated into the present system. Preferably, the predictor generates text predictions based on the context of the user inputted text, i.e. the predictor generates text predictions for the nth term, based on n-1 terms of context. Each predictor can be one of a single language model, a multi-language model (where a multi-language model combines predictions sourced from multiple language models to generate a set of predictions), an adaptive prediction model of the type schematically described in figure 1, or any other type of language model. Each predictor 6, 7, 8 is trained by a text source 1, 2, 3, where each text source is used to train a separate predictor. The system can utilise an arbitrary number of text sources. A predictor returns text predictions 11 as a set of terms/phrases mapped to probability values. A thorough description of the use of a predictor (single and multi-language model) to generate text predictions is presented in international patent application PCT/GB2010/000622, claiming priority from UK patent application number 0905457.8, "System and method for inputting text into electronic devices". A further thorough description of the use of a predictor (multi model) to generate text predictions is presented in UK patent application number 1016385.5, "System and method for inputting text into electronic devices".
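By way of illustration only, the prediction-component contract implied by this paragraph can be sketched in C++; the type and member names below are assumptions for the sketch, not identifiers from the specification:

```cpp
#include <map>
#include <string>

// A set of text predictions: terms/phrases mapped to probability values,
// as returned by a predictor (paragraph [0033]).
using Predictions = std::map<std::string, double>;

// Abstract predictor; concrete predictors (single language model,
// multi-language model, adaptive predictor) would implement predict().
struct Predictor {
    virtual ~Predictor() = default;
    // 'context' is the user-inputted text up to the current cursor position.
    virtual Predictions predict(const std::string& context) const = 0;
};
```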

[0034] User inputted text 14 is input into the system. The user inputted text comprises the sequence of text entered by the user from the beginning of the current document up to the current position of the cursor. The raw text 14 is input directly into the prediction engine 100 which utilises information about the current, partially-completed term, as well as preferably the context. The raw text is also input into a Feature Vector Generator 4.

[0035] The system comprises a Feature Vector Generator 4 which is configured to convert the context terms of the user inputted text 14 (excluding the partially-complete current word) into a feature vector ready for classification. The Feature Vector Generator is also used to generate the feature vectors used to train the classifier (from the text sources). Feature vectors are D-dimensional real-valued vectors, x ∈ ℝ^D. Each dimension represents a particular feature used to represent the text. Features are typically individual terms or short phrases (n-grams). Individual term features are extracted from a text sequence by tokenising the sequence into terms (where a term denotes both words and additional orthographic items such as punctuation) and discarding unwanted terms (e.g. terms that have no semantic value such as 'stopwords'). In some cases, features may also be case-normalised, i.e. converted to lower-case. N-gram features are generated by concatenating adjacent terms into atomic entities. For example, given the text sequence "Dear special friends", the individual term features would be: "Dear", "special" and "friends", while the bigram (2-gram) features would be "Dear_special" and "special_friends".
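A minimal sketch of this feature-extraction step, assuming whitespace tokenisation, lower-case normalisation and a caller-supplied stopword list (none of these implementation details are fixed by the specification):

```cpp
#include <algorithm>
#include <cctype>
#include <set>
#include <sstream>
#include <string>
#include <vector>

// Tokenise a text sequence into lower-cased terms, discard stopwords,
// then append bigram features formed by concatenating adjacent terms.
std::vector<std::string> extractFeatures(const std::string& text,
                                         const std::set<std::string>& stopwords) {
    std::vector<std::string> terms;
    std::istringstream in(text);
    std::string tok;
    while (in >> tok) {
        std::transform(tok.begin(), tok.end(), tok.begin(),
                       [](unsigned char c) { return std::tolower(c); });
        if (!stopwords.count(tok)) terms.push_back(tok);
    }
    std::vector<std::string> features = terms;              // individual term features
    for (size_t i = 0; i + 1 < terms.size(); ++i)
        features.push_back(terms[i] + "_" + terms[i + 1]);  // bigram features
    return features;
}

// extractFeatures("Dear special friends", {}) yields
// {"dear", "special", "friends", "dear_special", "special_friends"}.
```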

[0036] The value D of the vector space is governed by the total number of features used in the model, typically upwards of 10,000 for a real-world classification problem. The Feature Vector Generator 4 is configured to convert a discrete section of text (e.g. an individual document, email, etc.) into a vector by weighting each cell according to a value related to the frequency of occurrence of that term in the given text section, normalised by the inverse of its frequency of occurrence across the entire body of text. The formula for carrying out this weighting is known as TF-IDF, and stands for term frequency-inverse document frequency. It is defined as:

tf-idf(t) = tf(t) / df(t)

where tf(t) is the number of times term t occurs in the current document (or email, etc.), and df(t) is the number of documents in which t occurs across the whole collection, i.e. all text sources. Each vector is then normalised to unit length by the Feature Vector Generator 4.
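A sketch of the TF-IDF weighting and unit-length normalisation, assuming document frequencies have been precomputed over all text sources and using the tf(t)/df(t) weighting reconstructed above:

```cpp
#include <cmath>
#include <string>
#include <unordered_map>
#include <vector>

// Feature -> document frequency df(t) across all text sources (assumed
// to be precomputed); the map's size corresponds to the dimensionality D.
using DocFreq = std::unordered_map<std::string, double>;

// Build a TF-IDF vector for one text section and normalise it to unit
// length, as described in paragraph [0036]. Sparse representation:
// only non-zero cells are stored.
std::unordered_map<std::string, double>
tfidfVector(const std::vector<std::string>& features, const DocFreq& df) {
    std::unordered_map<std::string, double> v;
    for (const auto& f : features) v[f] += 1.0;     // raw term frequency tf(t)
    double norm2 = 0.0;
    for (auto& [f, w] : v) {
        auto it = df.find(f);
        w = (it == df.end() || it->second == 0.0) ? 0.0
                                                  : w / it->second;  // tf(t)/df(t)
        norm2 += w * w;
    }
    if (norm2 > 0.0) {
        const double norm = std::sqrt(norm2);
        for (auto& [f, w] : v) w /= norm;           // unit-length normalisation
    }
    return v;
}
```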

[0037] The Feature Vector Generator 4 is configured to split user inputted text into features (typically individual words or short phrases) and to generate a feature vector from the features. The feature vector is passed to a classifier (which uses the feature vector to generate category predictions).

[0038] The system comprises a classifier 9. The classifier 9 is trained by a training module 5 using the text sources 1, 2, 3, passed through the Feature Vector Generator 4. The classifier is therefore trained by a body of electronic text which has been pre-labelled with representative categories and converted into a plurality of feature vectors. A trained classifier 9 takes as input a feature vector that has been generated by a Feature Vector Generator from sections of text received from a user 14, and yields category predictions 10, comprising a set of categories mapped to probability values, as an output. The category predictions 10 are drawn from the space of categories defined by the labels on the text sources 1, 2, 3. The present classifier is based on the batch perceptron principle where, during training, a weights vector is updated in the direction of all misclassified instances simultaneously, although any suitable classifier may be utilised. The classifier is preferably a timed aggregate perceptron (TAP) classifier 9. The TAP classifier 9 is natively a binary (2-class) classification model. To handle multi-class problems a one-versus-all scheme is utilised, in which a classifier is trained for each category against all other categories. For example, given the three categories of Sport, Finance and Politics, three individual TAP classifiers would be trained:
  1) Sport vs. Finance and Politics
  2) Finance vs. Sport and Politics
  3) Politics vs. Sport and Finance


[0039] A classifier training module 5 carries out the training process as already mentioned. The training module 5 yields a weights vector for each classifier, which can be denoted by:
  1) Sport: ŵ_S
  2) Finance: ŵ_F
  3) Politics: ŵ_P


[0040] Given a set of N sample vectors of dimensionality D, paired with target labels (x_i, y_i), the TAP training procedure returns an optimised weights vector ŵ ∈ ℝ^D. The prediction for a new sample x′ ∈ ℝ^D is given by:

f(x′) = sign(ŵ · x′)    (1)

where the sign function converts an arbitrary real number to ±1 based on its sign. The default decision boundary lies along the unbiased hyperplane ŵ · x = 0, though a threshold can easily be introduced to adjust the bias.

[0041] The class-normalised empirical loss at training iteration t falls within the range (0, 1) and is defined by:

L_t = |Q_t⁺| / (2·N⁺) + |Q_t⁻| / (2·N⁻)

where Q_t denotes the set of misclassified samples at the t-th training iteration, N denotes the total number of training samples in a given class, and ± denotes class specificity. The misclassification condition is given by:

y_i (ŵ_t · x_i) < 1

[0042] A margin of ±1 perpendicular to the decision boundary is required for correct classification of training samples.

[0043] At each iteration, an aggregate vector a_t is constructed by summing all misclassified samples and normalising:

a_t = norm( Σ_{(x_i, y_i) ∈ Q_t} y_i·x_i , T )

where norm(v, T) normalises v to magnitude T and Q_t is the set of misclassified samples at iteration t, with the weights vector then updated as ŵ_{t+1} = ŵ_t + a_t.
[0044] The timing variable T is set to 1 at the start of the procedure and gradually diminishes, governed by the following expression:



[0045] r is the timing rapidity hyperparameter and can be manually adjusted to tune the performance of the classifier. Its default value is 1. b is a measure of the balance of the training distribution sizes, calculated by:

b = min(N⁺, N⁻) / (N⁺ + N⁻)

with an upper bound of 0.5 representing perfect balance. Termination occurs when either the timing variable or the empirical loss reaches zero. How well the TAP solution fits the training data is governed by the rapidity of the timing schedule; earlier stopping leads to a more approximate fit.
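A rough sketch of this training loop in C++ follows. The margin condition, aggregate construction and norm(v, T) operation mirror paragraphs [0041]-[0043]; the exact timing-schedule expression of paragraph [0044] is not reproduced in this text, so the linear decay below is an assumed stand-in:

```cpp
#include <cmath>
#include <vector>

using Vec = std::vector<double>;

double dot(const Vec& a, const Vec& b) {
    double s = 0.0;
    for (size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

// norm(v, T): normalise v to magnitude T (paragraph [0043]).
void normTo(Vec& v, double T) {
    const double m = std::sqrt(dot(v, v));
    if (m > 0.0) for (double& x : v) x *= T / m;
}

// Sketch of the timed-aggregate-perceptron training loop ([0041]-[0045]).
// x[i] are unit-length feature vectors with labels y[i] in {-1, +1};
// r is the timing rapidity hyperparameter.
Vec tapTrain(const std::vector<Vec>& x, const std::vector<double>& y,
             double r = 1.0, int maxIter = 1000) {
    const size_t N = x.size(), D = x.front().size();
    Vec w(D, 0.0);
    double T = 1.0;                               // timing variable, starts at 1
    for (int t = 0; t < maxIter && T > 0.0; ++t) {
        Vec agg(D, 0.0);
        size_t misclassified = 0;
        for (size_t i = 0; i < N; ++i) {
            if (y[i] * dot(w, x[i]) < 1.0) {      // margin-based misclassification
                for (size_t d = 0; d < D; ++d) agg[d] += y[i] * x[i][d];
                ++misclassified;
            }
        }
        if (misclassified == 0) break;            // empirical loss has reached zero
        normTo(agg, T);                           // aggregate vector a_t = norm(sum, T)
        for (size_t d = 0; d < D; ++d) w[d] += agg[d];
        T -= r / maxIter;                         // assumed linear timing decay
    }
    return w;
}
```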

[0046] In the present invention, preferably, a modified form of the classification expression (1) is used without the sign function to yield a confidence value for each classifier, resulting in an M-dimensional vector of confidence values, where M is the number of categories. So, for instance, given a new, unseen text section represented by vector x′, the following confidence vector c would be generated:

c = [ ŵ_S · x′ , ŵ_F · x′ , ŵ_P · x′ ]

[0047] To optimise the performance of the TAP classifier 9 on a particular dataset, the timing rapidity hyperparameter r is experimentally tuned.
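A sketch of this confidence-vector computation, reusing the Vec and dot helpers from the training sketch above:

```cpp
#include <vector>

// Vec and dot as defined in the training sketch above. 'classifiers'
// holds one trained TAP weights vector per category, e.g.
// {ŵ_S, ŵ_F, ŵ_P} for Sport, Finance and Politics ([0039], [0046]).
std::vector<double> confidenceVector(const std::vector<Vec>& classifiers,
                                     const Vec& xPrime) {
    std::vector<double> c;
    c.reserve(classifiers.size());
    for (const auto& w : classifiers)
        c.push_back(dot(w, xPrime));  // expression (1) without the sign function
    return c;
}
```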

[0048] The system further comprises a weighting module 12. The weighting module 12 uses the category predictions 10 generated by the classifier 9 to weight text predictions 11 generated by the text prediction engine 100. The weight assigned to predictions 11 from each predictor 6, 7, 8 is governed by the distribution of confidence values assigned by the classifier 9. The weighting module 12 uses the vector of confidence values generated by the classifier 9 to weight predictions 11 from the respective prediction components 6, 7, 8 to generate category-weighted predictions 13.

[0049] A particular weighting module 12 may reside within an adaptive predictor that exists at an upper level in a hierarchical prediction structure (as discussed later), and the probabilities that are output may be involved in an arbitrary number of subsequent comparisons. Consequently, it is important that the weighting module 12 respects the absolute probabilities assigned to a set of predictions, so as not to spuriously skew future comparisons. Thus, the weighting module 12 always leaves predictions 11 from the most likely prediction component unchanged, and down-scales less likely components proportionally.

[0050] Using the confidence vector obtained from the classifier 9, the weighting module 12 constructs a corresponding M-dimensional weights vector which is used to scale the predictions 11 from the M prediction components.

[0051] In the TAP model, the decision boundary for class membership is 0. Thus, if a particular cell in the confidence vector is negative, it is an indication that the classifier has assigned low likelihood to the hypothesis that the text is of that category. In this case, the corresponding cells in the weights vector are set to 0. In practice, predictions from components with zero-valued weights will be effectively filtered out.

[0052] The element of the weights vector that corresponds to the highest positive valued confidence element is assigned a weight of 1, and the remaining weights are scaled relative to the differences between the positive confidence values.

[0053] The algorithm for constructing a weights vector from a confidence vector is as follows. For each positive confidence value c_i and corresponding weights value w_i:

w_i = c_i / c_max

where c_max is the largest positive value in the confidence vector (so the most confident category receives a weight of 1); negative confidence values map to w_i = 0.
For instance, using the above 3-class example, the following confidence vector:

would be converted into the following weights vector:



[0054] If required, a constant v can be introduced to increase the prevalence of the most likely component in relation to the others. Weights are then calculated using:

w_i = c_i / (v · c_max)

[0055] This applies to all positive confidence values except for the highest, which still receives a weight of 1. The value of v is chosen manually. Continuing the above example, using v = 3 would result in the following weights vector:
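Putting paragraphs [0051]-[0055] together, a sketch of the confidence-to-weights conversion (the worked numeric vectors of this example are not reproduced here):

```cpp
#include <algorithm>
#include <vector>

// Convert an M-dimensional confidence vector into a weights vector per
// paragraphs [0051]-[0055]. v > 1 increases the prevalence of the most
// likely component; v = 1 gives plain proportional scaling.
std::vector<double> weightsVector(const std::vector<double>& conf, double v = 1.0) {
    std::vector<double> w(conf.size(), 0.0);
    if (conf.empty()) return w;
    const double cmax = *std::max_element(conf.begin(), conf.end());
    if (cmax <= 0.0) return w;                 // no positively-scored category
    for (size_t i = 0; i < conf.size(); ++i) {
        if (conf[i] <= 0.0) continue;          // negative confidence -> weight 0
        w[i] = (conf[i] == cmax) ? 1.0         // most likely component unchanged
                                 : conf[i] / (cmax * v);
    }
    return w;
}
```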



[0056] The weighting module 12 scales the predictions 11 from each of the constituent prediction components according to the values in the corresponding cells of the weights vector, to generate a set of category-weighted text predictions. The weighting module 12 is configured to insert the category-weighted text predictions into a 'multimap' structure to return the p most probable terms as the final text predictions 13. A multimap is a map or associative array in which more than one value may be associated with and returned from a given key.

[0057] In the present invention, preferably, the multimap is an STL multimap in which an associative key-value pair is held in a binary tree structure, in which duplicate keys are allowed. The multimap can be used to store a sequence of elements as an ordered tree of nodes, each storing one element. An element consists of a key, for ordering the sequence, and a mapped value. In the STL multimap of the present system, a prediction is a string value mapped to a probability value, and the map is ordered on the basis of the probabilities, i.e. the probability values are used as keys in the multimap and the strings as values.

[0058] By way of example, given the category-weighted predictions "a" → 0.2 and "the" → 0.3 from weighting the text predictions generated by the first predictor, and the category-weighted predictions "an" → 0.1 and "these" → 0.2 from weighting text predictions generated by a second predictor, the weighting module inserts these weighted predictions into a multimap ((0.1 → "an"), (0.2 → "a"), (0.2 → "these"), (0.3 → "the")), which is then read in reverse to obtain a set of final predictions.
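A sketch of this merge using std::multimap, with the probability as the key and the term as the value, reproducing the example of paragraph [0058]:

```cpp
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Merge category-weighted predictions from several components and return
// the p most probable terms (paragraphs [0056]-[0058]).
std::vector<std::pair<std::string, double>>
topPredictions(const std::vector<std::pair<std::string, double>>& weighted,
               std::size_t p) {
    std::multimap<double, std::string> ordered;       // duplicate keys allowed
    for (const auto& [term, prob] : weighted) ordered.emplace(prob, term);
    std::vector<std::pair<std::string, double>> result;
    // Read the ordered tree in reverse, i.e. highest probability first.
    for (auto it = ordered.rbegin();
         it != ordered.rend() && result.size() < p; ++it)
        result.emplace_back(it->second, it->first);
    return result;
}

int main() {
    // The example of paragraph [0058]:
    for (const auto& [term, prob] : topPredictions(
             {{"a", 0.2}, {"the", 0.3}, {"an", 0.1}, {"these", 0.2}}, 3))
        std::cout << term << " -> " << prob << '\n';  // the, these, a
}
```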

[0059] The final predictions 13 generated by the weighting module 12 can be outputted to a display of the system for user selection, to input text into an electronic device. The selected prediction is then part of the user inputted text 14 used to generate a new set of predictions 13 for display and user selection.

[0060] In general, but not exclusive terms, the system of the invention can be implemented as shown in Figure 1. Figure 1 is a block diagram of an adaptive prediction architecture according to the invention. A user inputs text 14 into the system. This text input 14 is passed to the text prediction engine 100 and to the Feature Vector Generator 4. The Feature Vector Generator 4 converts the user inputted text 14 into a feature vector and passes this feature vector to the classifier 9.

[0061] The text prediction engine 100 generates, using at least one predictor, at least two text predictions 11 based on the input text 14. In the case of a predictor being a multi-language model, the predictions from each of the language models (within the multi-language model) are combined by inserting the predictions into an STL multimap structure and returning the p most probable values. The resulting set of text predictions 11 is passed to the weighting module 12.

[0062] The TAP classifier 9 uses the feature vector to generate M category predictions 10 (which comprises an M-dimensional confidence vector, where there are M categories represented in the pre-labelled text sources). The category predictions are passed to the weighting module 12.

[0063] The weighting module 12 generates an M-dimensional weights vector from the M-dimensional confidence vector of the category predictions 10 and uses the weights vector to scale the text predictions 11 from the M predictors of the text prediction engine, thereby generating category-weighted text predictions. The category-weighted text predictions are inserted, by the weighting module, into a multimap and the p most probable predictions 13 are returned to the user of the system for selection and text input.

[0064] The predictions can be displayed in a list format, with the most probable term at the top or end of the list. A prediction that is selected by the user for input into the system becomes the next section of user inputted text 14. The system uses this inputted text 14, preferably along with one or more previously inputted text sections, to generate new text predictions 13 for user display and selection.

[0065] As explained above, a predictor 6, 7, 8 can be an adaptive predictive system, such as that described in figure 1. The present system therefore defines a recursive framework that allows an arbitrary number of adaptive predictors to be structured in hierarchy. Such an example is now described with reference to figure 2. Figure 2 schematically shows the adaptive prediction architecture of claim 1, where one of the predictors 26, 27, 28 of the text prediction engine 200 is an adaptive predictor 26. Each of the predictors 46, 47, 48 within this adaptive predictor 26 can be a single language model, a multi-language model or an adaptive prediction model.
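A structural sketch of this recursion, building on the Predictor interface sketched earlier: an adaptive predictor is itself a Predictor that owns sub-predictors and a classifier/weighting hook, so instances can nest to arbitrary depth. The member names and the per-term merge policy are assumptions:

```cpp
#include <algorithm>
#include <functional>
#include <memory>
#include <string>
#include <vector>

// Builds on the Predictor / Predictions sketch given earlier. The
// weightsFor hook stands in for the classifier plus weighting module of
// the embedded level; it is an assumed interface, not the patent's API.
struct AdaptivePredictor : Predictor {
    std::vector<std::unique_ptr<Predictor>> components;  // sub-predictors of any kind
    std::function<std::vector<double>(const std::string&)> weightsFor;

    Predictions predict(const std::string& context) const override {
        const std::vector<double> w = weightsFor(context);
        Predictions merged;
        for (size_t m = 0; m < components.size(); ++m) {
            for (const auto& [term, prob] : components[m]->predict(context)) {
                // Keep the highest weighted probability per term; the patent
                // itself merges via a multimap, which permits duplicates.
                merged[term] = std::max(merged[term], prob * w[m]);
            }
        }
        return merged;  // the caller selects the p most probable entries
    }
};
```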

[0066] Figure 2, which schematically describes an example of a two-level adaptive prediction hierarchy, is for illustrative purposes only. It is one of an infinite number of potential structures within the adaptive prediction framework according to the invention.

[0067] At the first level, three text sources 21, 22, 23 are used, representing three topics: sport, finance and politics. These sources and their respective categories are passed through the Feature Vector Generator 24 to the TAP training module 25 to yield a 3-class TAP classifier 29. The text sources 22, 23 representing the finance and politics categories are used to train single language models 27, 28 while the text source representing sport 21 is used to train an adaptive predictor 26.

[0068] Within the adaptive predictor 26, the sport text source is split into three sub-categories: football 41, golf 42 and racing 43. These are passed through a second-level Feature Vector Generator 44 to the second-level TAP classifier training module 45 to produce a second 3-class TAP classifier 49. Additionally, each sub-category text source 41, 42, 43 is used to train a respective single language model 46, 47, 48.

[0069] The user text input 34 is passed to both first-level 29 and second-level 49 TAP classifiers to generate first-level 30 and second-level 50 category predictions. The first-level category predictions 30 are used by the first-level weighting module 32 to weight the predictions 33 generated by the first-level prediction engine 200, whilst the second-level category predictions 50 are used by the second-level weighting module 52 to weight predictions 51 from the second-level prediction engine 400.

[0070] The second-level category-weighted predictions 53 are treated as a set of text predictions from a first-level prediction component 26 of a first-level prediction engine 200. The second-level category-weighted predictions 53 are therefore scaled (by the first-level weighting module 32) using a first-level weights vector (based on first-level category predictions 30).

[0071] A method according to the present invention is now described with reference to figure 3 which is a flow chart of a method for processing user text input and generating category-weighted text predictions. In the particular method described, the first step comprises receipt of user text input 14 and the generation of text predictions 11 from the user inputted text 14. The method further comprises the formatting of the user inputted text by a Feature Vector Generator 4 which converts the text into a feature vector. The method comprises generating, using a classifier 9, a confidence vector relating to the categories present in pre-labelled source text(s) 1, 2, 3. The method further comprises generating, using a weighting module 12, a weights vector from the confidence vector and scaling (using the weighting module) the text predictions 11 by the weights vector to generate a final set of text predictions 13. If the predictors 6, 7, 8 and classifier 9 have not been trained, the method further comprises training at least one predictor and a classifier from at least one source text.

[0072] The method for producing a set of weighted output text predictions is now described in greater detail with reference to figure 2 and a specific scenario. It is assumed that both TAP classifiers 29, 49 have been trained using the relevant text sources 21, 22, 23, 41, 42, 43 and likewise that the language models 26, 27, 28, 46, 47, 48 have also been trained.

[0073] By way of an example, say a user has entered the following sequence 34:
"Today's match was a classic local derby. The visitors were clearly motivated following the recent takeover of the club by the AEG group. The first "

[0074] This input text 34 is passed to the first-level text prediction engine 200 and the second-level text prediction engine 400 to generate first-level text predictions 31 and second-level text predictions 51. The input text 34 is also passed to both the first-level TAP classifier 29 and the second-level TAP classifier 49, after being preprocessed into the TAP input format described above. Each classifier 29, 49 yields a three-element confidence vector.

[0075] In the present example, the first-level classifier 29 distinguishes between the categories sport, finance and politics, and would yield a first-level confidence vector, such as the following:

where the first element corresponds to sport, the second to finance, and the third to politics. This would be converted into the following first-level weights vector by the first-level weighting module (according to the procedure described above):



[0076] The second-level classifier 49 distinguishes between the sub-categories football, golf and racing, and would yield a second-level confidence vector such as the following:

where the first element corresponds to football, the second to golf, and the third to racing. This would be converted into the following second-level weights vector (using the second-level weighting module):



[0077] Given a target prediction set size of three terms, the finance and politics first-level predictors would yield predictions 31 such as the following (bearing in mind the local context - "The first "):

while the sport constituent (adaptive predictor) would yield three sets of internal predictions 51, one for each sub-category:



[0078] These sub-category predictions 51 are then weighted according to the second-level weights vector w2 above to yield:







[0079] The first-level weights vector w1 is applied to the predictions from the three first-level prediction components to yield the following:





[0080] The weighting module inserts the weighted text predictions into an STL 'multimap' structure to return the three most probable terms (where p=3) as the final weighted text predictions 33:



[0081] The final predictions 13 are outputted to a display of the system for user selection, to input text 14 into an electronic device. The predicted term selected by the user is input into the electronic device and is used by the system to predict a further set of text predictions 13 for display and user selection.

[0082] It will be appreciated that this description is by way of example only; alterations and modifications may be made to the described embodiment without departing from the scope of the invention as defined in the claims.


Claims

1. A system for generating text input in a computing device, comprising:

a text prediction engine comprising at least two predictors (6, 7, 8; 26, 27, 28) and configured to receive text input (14; 34) into the device by a user and to generate a plurality of text predictions (11; 31) using the at least two predictors, wherein each of the plurality of text predictions comprises a word or phrase mapped to a probability value;

a classifier (9; 29) configured to receive the input text and to generate a plurality of text category predictions (10; 30), each text category prediction comprising a category mapped to a probability value;

a weighting module (12; 32) configured to weight the text predictions from each predictor by a corresponding one of the plurality of text category predictions and to generate a plurality of category-weighted text predictions, and to return the p most probable category-weighted text predictions to generate new text predictions (13; 33) for presentation to the user.


 
2. The system according to claim 1, further comprising a Feature Vector Generator (4; 24) which is configured to generate a feature vector representing the text input into the device by a user by extracting features from the input text, calculating the term frequency-inverse document frequency for each feature in the input text, and normalising the resulting vector to unit length.
 
3. The system according to claim 2, wherein the at least two predictors (6, 7, 8; 26, 27, 28) are each trained by a separate text source (1, 2, 3; 21, 22, 23), wherein the Feature Vector Generator is further configured to generate at least one feature vector for each of the separate text sources by extracting a set of features from the text source, calculating the term frequency-inverse document frequency for each feature in the text source, and normalising the resulting vectors to unit length.
 
4. The system according to claim 3, further comprising a classifier training module (5; 25), the module configured to train the classifier from the feature vectors which have been generated from the text sources.
 
5. The system according to any preceding claim, wherein the classifier is further configured to generate a confidence vector relating to the plurality of categories.
 
6. The system according to claim 5, wherein the weighting module is configured to generate a weights vector from the confidence vector by setting the largest positive value in the confidence vector to 1, and dividing all other positive values in the confidence vector by the largest positive value in the confidence vector multiplied by a constant factor, and by setting any negative confidence value to zero, wherein the weighting module is configured to scale the text predictions generated by the text prediction engine by the weights vector to generate the new text predictions.
 
7. The system according to any preceding claim, wherein at least one of the plurality of predictors (26) is an adaptive prediction system, the at least one adaptive prediction system comprising:

a second text prediction engine comprising at least two predictors and configured to receive the text input (14; 34) into the device by a user and to generate a plurality of text predictions (11; 31) using the at least two predictors, wherein each of the plurality of text predictions comprises a word or phrase mapped to a probability value;

a second classifier (49) configured to receive the input text and to generate the plurality of text category predictions (10; 30), each text category prediction comprising a category mapped to a probability value;

a second weighting module (52) configured to weight the text predictions from each predictor of the second prediction engine by a corresponding one of the plurality of category predictions of the second classifier and to generate a plurality of category-weighted text predictions.


 
8. A method of generating text predictions from user text input, comprising:

generating, using at least two predictors, a plurality of text predictions (11; 31) based upon user text input (14; 34), wherein each of the plurality of text predictions comprises a word or phrase mapped to a probability value;

generating, using a classifier (9; 29), at least two text category predictions (10; 30) based upon the user text input and a feature vector, wherein each text category prediction comprises a category mapped to a probability value;

weighting the text predictions from each predictor by a corresponding one of the text category predictions to generate a set of category-weighted text predictions;

returning the p most probable category-weighted text predictions; and

presenting the most probable category-weighted text predictions to the user.


 
9. The method according to claim 8, further comprising generating a feature vector representing the text input into the device by a user by extracting features from the input text, calculating the term frequency-inverse document frequency for each feature in the text input, and normalising the resulting vector to unit length.
 
10. The method according to claim 9, wherein the predictions are generated by the plurality of predictors (6, 7, 8; 26, 27, 28) and the plurality of predictors is trained, each predictor being trained based upon a separate text source (1, 2, 3; 21, 22, 23), the method further comprising generating at least one feature vector for each of the separate text sources, by extracting a set of features from the text source, calculating the term frequency-inverse document frequency for each feature in the text source, and normalising the resulting vectors to unit length.
 
11. The method according to claim 10 wherein the method further comprises training the classifier based upon the feature vectors generated from the text sources.
 
12. The method according to one of claims 8-11, wherein the step of generating at least two text category predictions comprises generating a confidence vector relating to the categories.
 
13. The method according to claim 12, wherein the step of generating the set of text category-weighted predictions comprises:
generating a weights vector from the confidence vector by setting the largest positive value in the confidence vector to 1, and dividing all other positive values in the confidence vector by the largest positive value in the confidence vector multiplied by a constant factor, and by setting any negative confidence value to zero; and scaling the text predictions generated by the text prediction engine by the weights vector.
 


Ansprüche

1. System zum Erzeugen von Texteingabe in eine Rechenvorrichtung, umfassend:

eine Textvorhersagefunktionseinheit, umfassend mindestens zwei Prädiktoren (6, 7, 8; 26, 27, 28) und konfiguriert, Texteingabe (14; 34) in die Vorrichtung durch einen Anwender zu empfangen und eine Vielzahl von Textvorhersagen (11; 31) unter Verwendung der mindestens zwei Prädiktoren zu erzeugen, wobei jede der Vielzahl von Textvorhersagen ein Wort oder eine Phrase umfasst, das oder die einem Wahrscheinlichkeitswert zugeordnet ist;

einen Klassifizierer (9; 29), der konfiguriert ist, den Eingabetext zu empfangen und eine Vielzahl von Textkategorienvorhersagen (10; 30) zu erzeugen, wobei jede Textkategorienvorhersage eine Kategorie umfasst, die einem Wahrscheinlichkeitswert zugeordnet ist;

ein Gewichtungsmodul (12; 32), das konfiguriert ist, die Textvorhersagen von jedem Prädiktor durch eine entsprechende der Vielzahl von Textkategorienvorhersagen zu gewichten und eine Vielzahl von Kategorie-gewichteten Textvorhersagen zu erzeugen und die p wahrscheinlichsten Kategorie-gewichteten Textvorhersagen zurückzuschicken, um neue Textvorhersagen (13; 33) zur Darstellung an den Anwender zu erzeugen.


 
2. System nach Anspruch 1, weiter umfassend einen Merkmalvektorerzeuger (4; 24), der konfiguriert ist, einen Merkmalvektor, der die Texteingabe in die Vorrichtung durch einen Anwender darstellt, durch Extrahieren von Merkmalen aus dem Eingabetext, Berechnen der ausdrucksfrequenzumgekehrten Dokumentfrequenz für jedes Merkmal im Eingabetext und Normalisieren des resultierenden Vektors auf Einheitslänge zu erzeugen.
 
3. System nach Anspruch 2, wobei die mindestens zwei Prädiktoren (6, 7, 8; 26, 27, 28) jeweils durch eine separate Textquelle (1, 2, 3; 21, 22, 23) trainiert werden, wobei der Merkmalvektorerzeuger weiter konfiguriert ist, mindestens einen Merkmalvektor für jede der separaten Textquellen durch Extrahieren eines Satzes von Merkmalen aus der Textquelle, Berechnen der ausdruckfrequenzumgekehrten Dokumentfrequenz für jedes Merkmal in der Textquelle und Normalisieren der resultierenden Vektoren auf Einheitslänge zu erzeugen.
 
4. System nach Anspruch 3, weiter umfassend ein Klassifizierertrainingsmodul (5; 25), wobei das Modul konfiguriert ist, den Klassifizierer von den Merkmalvektoren zu trainieren, die aus den Textquellen erzeugt wurden.
 
5. System nach einem der vorstehenden Ansprüche, wobei der Klassifizierer weiter konfiguriert ist, einen Vertrauensvektor bezüglich der Vielzahl von Kategorien zu erzeugen.
 
6. System nach Anspruch 5, wobei das Gewichtungsmodul konfiguriert ist, einen Gewichtungsvektor aus dem Vertrauensvektor durch Einstellen des größten positiven Werts im Vertrauensvektor auf 1 und Teilen aller anderen positiven Werte im Vertrauensvektor durch den größten positiven Wert im Vertrauensvektor, der mit einem konstanten Faktor multipliziert ist, und durch Einstellen eines beliebigen negativen Vertrauenswerts auf null zu erzeugen, wobei das Gewichtungsmodul konfiguriert ist, die Textvorhersagen, die von der Textvorhersagefunktionseinheit erzeugt sind, durch den Gewichtungsvektor zu skalieren, um die neuen Textvorhersagen zu erzeugen.
 
7. System nach einem der vorstehenden Ansprüche, wobei mindestens einer der Vielzahl von Prädiktoren (26) ein adaptives Vorhersagesystem ist, wobei das mindestens eine adaptive Vorhersagesystem umfasst:

eine zweite Textvorhersagefunktionseinheit, die mindestens zwei Prädiktoren umfasst und konfiguriert ist, die Texteingabe (14; 34) in die Vorrichtung durch einen Anwender zu empfangen und eine Vielzahl von Textvorhersagen (11; 31) unter Verwendung der mindestens zwei Prädiktoren zu erzeugen, wobei jede der Vielzahl von Textvorhersagen ein Wort oder eine Phrase umfasst, das oder die einem Wahrscheinlichkeitswert zugeordnet ist;

einen zweiten Klassifizierer (49), der konfiguriert ist, den Eingabetext zu empfangen und die Vielzahl von Textkategorienvorhersagen (10; 30) zu erzeugen, wobei jede Textkategorienvorhersage eine Kategorie umfasst, die einem Wahrscheinlichkeitswert zugeordnet ist;

ein zweites Gewichtungsmodul (52), das konfiguriert ist, die Textvorhersagen von jedem Prädiktor der zweiten Vorhersagefunktionseinheit durch eine entsprechende der Vielzahl von Kategorienvorhersagen des zweiten Klassifizierers zu gewichten und eine Vielzahl von Kategorie-gewichteten Textvorhersagen zu erzeugen.


 
8. Verfahren zum Erzeugen von Textvorhersagen aus Anwendertexteingabe, umfassend:

Erzeugen, unter Verwendung mindestens zweier Prädiktoren, einer Vielzahl von Vorhersagen (11; 31), basierend auf Anwendertexteingabe (14; 34), wobei jede der Vielzahl von Textvorhersagen ein Wort oder eine Phrase umfasst, die einem Wahrscheinlichkeitswert zugeordnet ist;

Erzeugen, unter Verwendung eines Klassifizierers (9; 29) mindestens zweier Textkategorienvorhersagen (10; 30), basierend auf der Anwendertexteingabe und einem Merkmalvektor, wobei jede Textkategorienvorhersage eine Kategorie umfasst, die einem Wahrscheinlichkeitswert zugeordnet ist;

Gewichten von Textvorhersagen von jedem Prädiktor durch eine entsprechende der Textkategorienvorhersage, um einen Satz von Kategorie-gewichteten Textvorhersagen zu erzeugen;

Zurücksenden der wahrscheinlichsten Kategorie-gewichteten Textvorhersagen; und

Darstellen der wahrscheinlichsten Kategorie-gewichteten Textvorhersagen für den Anwender.


 
9. Verfahren nach einem von Anspruch 8, weiter umfassend Erzeugen eines Merkmalvektors, der die Texteingabe in die Vorrichtung durch einen Anwender darstellt, durch Extrahieren von Merkmalen von der Texteingabe, Berechnen der ausdrucksfrequenzumgekehrten Dokumentfrequenz für jedes Merkmal in der Texteingabe und Normalisieren des resultierenden Vektors auf Einheitslänge.
 
10. Verfahren nach Anspruch 9, wobei die Vorhersagen durch die Vielzahl von Prädiktoren (6, 7, 8; 26, 27, 28) erzeugt sind und die Vielzahl von Prädiktoren trainiert sind, wobei jeder Prädiktor basierend auf einer separaten Textquelle (1, 2, 3; 21, 22, 23) trainiert wird, das Verfahren weiter Erzeugen mindestens eines Merkmalvektors für jede der separaten Textquellen durch Extrahieren eines Satzes von Merkmalen aus der Textquelle, Berechnen der ausdrucksfrequenzumgekehrten Dokumentfrequenz für jedes Merkmal in der Textquelle und Normalisieren der resultierenden Vektoren auf Einheitslänge umfasst.
 
11. Verfahren nach Anspruch 10, wobei das Verfahren weiter Trainieren des Klassifizierers basierend auf den Merkmalvektoren umfasst, die aus den Textquellen erzeugt sind.
 
12. Verfahren nach einem der Ansprüche 8-11, wobei der Schritt zum Erzeugen mindestens zweier Textkategorienvorhersagen Erzeugen eines Vertrauensvektors bezüglich der Kategorien umfasst.
 
13. Verfahren nach Anspruch 12, wobei der Schritt zum Erzeugen des Satzes von Textkategorie-gewichteten Vorhersagen umfasst:
Erzeugen eines Gewichtungsvektors aus dem Vertrauensvektor, durch Einstellen des größten positiven Werts im Vertrauensvektor auf 1 und Teilen aller anderen positiven Werte im Vertrauensvektor durch den größten positiven Wert im Vertrauensvektor, der mit einem konstanten Faktor multipliziert ist, und durch Einstellen eines beliebigen negativen Vertrauenswerts auf null; und Skalieren der Textvorhersagen, die durch die Textvorhersagefunktionseinheit erzeugt ist, durch den Gewichtungsvektor.
 


Revendications

1. Système pour générer une entrée de texte dans un dispositif informatique, comprenant :

un moteur de prédiction de texte comprenant au moins deux prédicteurs (6, 7, 8 ; 26, 27, 28) et configuré pour recevoir une entrée de texte (14 ; 34) dans le dispositif par un utilisateur et pour générer une pluralité de prédiction de texte (11 ; 31) en utilisant les au moins deux prédicteurs, dans lequel chacune de la pluralité de prédictions de texte comprend un mot ou un groupe de mots mappé à une valeur de probabilité ;

un classificateur (9 ; 29) configuré pour recevoir le texte d'entrée et pour générer une pluralité de prédictions de catégorie de texte (10; 30), chaque prédiction de catégorie de texte comprenant une catégorie mappée à une valeur de probabilité ;

un module de pondération (12; 32) configuré pour pondérer les prédictions de texte provenant de chaque prédicteur par l'une correspondante de la pluralité de prédictions de catégorie de texte et pour générer une pluralité de prédictions de texte pondérées par catégorie et pour renvoyer les p prédictions de texte pondérées par catégorie les plus probables pour générer de nouvelles prédictions de texte (13 ; 33) pour la présentation à l'utilisateur.


 
2. The system according to claim 1, further comprising a feature vector generator (4; 24) configured to generate a feature vector representing the text entered into the device by a user by extracting features from the input text, calculating the term frequency-inverse document frequency for each feature in the input text and normalising the resulting vector to unit length.
 
3. The system according to claim 2, wherein the at least two predictors (6, 7, 8; 26, 27, 28) are each trained by a separate text source (1, 2, 3; 21, 22, 23), and wherein the feature vector generator is further configured to generate at least one feature vector for each of the separate text sources by extracting a set of features from the text source, calculating the term frequency-inverse document frequency for each feature in the text source and normalising the resulting vectors to unit length.
 
4. The system according to claim 3, further comprising a classifier training module (5; 25), the module being configured to train the classifier from the feature vectors generated from the text sources.
 
5. The system according to any preceding claim, wherein the classifier is further configured to generate a confidence vector in relation to the plurality of categories.
 
6. The system according to claim 5, wherein the weighting module is configured to generate a weight vector from the confidence vector by setting the largest positive value in the confidence vector to 1 and dividing all other positive values in the confidence vector by the largest positive value in the confidence vector multiplied by a constant factor, and by setting any negative confidence value to zero, and wherein the weighting module is configured to scale the text predictions generated by the text prediction engine using the weight vector to generate the new text predictions.
 
7. The system according to any preceding claim, wherein at least one of the plurality of predictors (26) is an adaptive prediction system, the at least one adaptive prediction system comprising:

a second text prediction engine comprising at least two predictors and configured to receive the text input (14; 34) into the device by a user and to generate a plurality of text predictions (11; 31) using the at least two predictors, wherein each of the plurality of text predictions comprises a word or a group of words mapped to a probability value;

a second classifier (49) configured to receive the input text and to generate a plurality of text category predictions (10; 30), each text category prediction comprising a category mapped to a probability value;

a second weighting module (52) configured to weight the text predictions from each predictor of the second prediction engine by a corresponding one of the plurality of category predictions of the second classifier and to generate a plurality of category-weighted text predictions.


 
8. A method of generating text predictions from user text input, comprising:

generating, using at least two predictors, a plurality of text predictions (11; 31) based on user text input (14; 34), wherein each of the plurality of text predictions comprises a word or a group of words mapped to a probability value;

generating, using a classifier (9; 29), at least two text category predictions (10; 30) based on the user text input and a feature vector, wherein each text category prediction comprises a category mapped to a probability value;

weighting the text predictions from each predictor by a corresponding one of the text category predictions to generate a set of category-weighted text predictions;

returning the p most probable category-weighted text predictions; and

presenting the most probable category-weighted text predictions to the user.


 
9. The method according to claim 8, further comprising generating a feature vector representing the text entered into the device by a user by extracting features from the input text, calculating the term frequency-inverse document frequency for each feature in the input text and normalising the resulting vector to unit length.
 
10. The method according to claim 9, wherein the predictions are generated by the plurality of predictors (6, 7, 8; 26, 27, 28) and the plurality of predictors is trained, each predictor being trained on a separate text source (1, 2, 3; 21, 22, 23), the method further comprising generating at least one feature vector for each of the separate text sources by extracting a set of features from the text source, calculating the term frequency-inverse document frequency for each feature in the text source and normalising the resulting vectors to unit length.
 
11. The method according to claim 10, wherein the method further comprises training the classifier on the basis of the feature vectors generated from the text sources.
 
12. The method according to any one of claims 8 to 11, wherein the step of generating at least two text category predictions comprises generating a confidence vector in relation to the categories.
 
13. The method according to claim 12, wherein the step of generating the set of category-weighted text predictions comprises:
generating a weight vector from the confidence vector by setting the largest positive value in the confidence vector to 1 and dividing all other positive values in the confidence vector by the largest positive value in the confidence vector multiplied by a constant factor, and by setting any negative confidence value to zero; and scaling the text predictions generated by the text prediction engine using the weight vector.
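Claims 12 and 13 (mirrored by system claims 5 and 6) specify how the classifier's confidence vector is converted into a weight vector and applied to the predictors' outputs. The Python sketch below is a minimal illustration of that arithmetic; the value of the constant factor, the function names, and the choice to keep the highest weighted probability when two predictors propose the same term are assumptions made for illustration only.

def weights_from_confidences(confidences, constant_factor=2.0):
    """Weight vector per claim 13: the largest positive confidence maps
    to 1, every other positive value is divided by (largest positive
    value * constant factor), and negative confidences become zero.
    The default constant_factor of 2.0 is an assumed example value;
    the claims leave it unspecified."""
    largest = max(confidences)
    weights = []
    for c in confidences:
        if c <= 0:
            weights.append(0.0)            # negative confidence -> zero weight
        elif c == largest:
            weights.append(1.0)            # largest positive value -> 1
        else:
            weights.append(c / (largest * constant_factor))
    return weights

def weight_and_rank(predictions_per_predictor, confidences, p, constant_factor=2.0):
    """Scale each predictor's predictions by its category weight and
    return the p most probable category-weighted predictions, with one
    probability dictionary per predictor assumed as the data layout."""
    weights = weights_from_confidences(confidences, constant_factor)
    weighted = {}
    for preds, w in zip(predictions_per_predictor, weights):
        for term, prob in preds.items():
            # keep the best weighted probability seen for a given term
            weighted[term] = max(weighted.get(term, 0.0), prob * w)
    return sorted(weighted.items(), key=lambda kv: kv[1], reverse=True)[:p]

With confidences [0.6, 0.2, -0.1] and a constant factor of 2.0, the resulting weight vector is [1.0, 0.167, 0.0], so predictions from the first predictor pass through unscaled while those from the third are suppressed entirely.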
 



