(19)
(11)EP 3 370 191 B1

(12)EUROPEAN PATENT SPECIFICATION

(45)Mention of the grant of the patent:
06.09.2023 Bulletin 2023/36

(21)Application number: 18159686.7

(22)Date of filing:  02.03.2018
(51)International Patent Classification (IPC): 
G06N 3/08(2023.01)
G06N 3/04(2023.01)
G06N 3/063(2023.01)
(52)Cooperative Patent Classification (CPC):
G06N 3/063; G06N 3/08; G06N 3/045

(54)

APPARATUS AND METHOD IMPLEMENTING AN ARTIFICIAL NEURAL NETWORK TRAINING ALGORITHM USING WEIGHT TYING



(84)Designated Contracting States:
AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

(30)Priority: 02.03.2017 EP 17158959

(43)Date of publication of application:
05.09.2018 Bulletin 2018/36

(73)Proprietor: Sony Group Corporation
Tokyo 108-0075 (JP)

(72)Inventors:
  • CARDINAUX, Fabien
    70327 Stuttgart (DE)
  • UHLICH, Stefan
    70327 Stuttgart (DE)
  • KEMP, Thomas
    70327 Stuttgart (DE)
  • ALONSO GARCIA, Javier
    70327 Stuttgart (DE)
  • YOSHIYAMA, Kazuki
    Tokyo 108-0075 (JP)

(74)Representative: MFG Patentanwälte Meyer-Wildhagen Meggle-Freund Gerhard PartG mbB 
Amalienstraße 62
80799 München (DE)


(56) References cited:

  • SONG HAN et al.: "Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding", 15 February 2016 (2016-02-15), XP055393078. Retrieved from the Internet: https://arxiv.org/pdf/1510.00149v5.pdf [retrieved on 2017-07-21]
  • JAMES GARLAND et al.: "Low Complexity Multiply Accumulate Unit for Weight-Sharing Convolutional Neural Networks", 30 June 2016 (2016-06-30), pages 1-4, XP055450473, DOI: 10.1109/LCA.2017.2656880. Retrieved from the Internet: https://arxiv.org/pdf/1609.05132v1.pdf [retrieved on 2018-02-12]
  • YOOJIN CHOI et al.: "Towards the Limit of Network Quantization", arXiv.org, Cornell University Library, 5 December 2016 (2016-12-05), XP080736980
  • YUNCHAO GONG et al.: "Compressing Deep Convolutional Networks using Vector Quantization", 18 December 2014 (2014-12-18), pages 1-10, XP055262159. Retrieved from the Internet: http://arxiv.org/pdf/1412.6115v1.pdf [retrieved on 2016-04-01]
  • SUN FANGXUAN et al.: "Intra-layer nonuniform quantization of convolutional neural network", 2016 8th International Conference on Wireless Communications & Signal Processing (WCSP), IEEE, 13 October 2016 (2016-10-13), pages 1-5, XP033002144, DOI: 10.1109/WCSP.2016.7752720 [retrieved on 2016-11-21]
  • ITAY HUBARA et al.: "Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations", arXiv.org, Cornell University Library, 22 September 2016 (2016-09-22), XP080813052
  
Note: Within nine months from the publication of the mention of the grant of the European patent, any person may give notice to the European Patent Office of opposition to the European patent granted. Notice of opposition shall be filed in a written reasoned statement. It shall not be deemed to have been filed until the opposition fee has been paid. (Art. 99(1) European Patent Convention).


Description

TECHNICAL FIELD



[0001] The present disclosure generally pertains to the field of machine learning, in particular to systems and methods that use artificial neural network learning algorithms to perform machine vision or machine control tasks.

TECHNICAL BACKGROUND



[0002] Machine learning is the subfield of computer science that gives computers the ability to learn without being explicitly programmed. An artificial neural network (ANN) learning algorithm, usually called "neural network", is a computer-implemented learning algorithm in which computations are structured in terms of an interconnected group of artificial neurons. Neural networks are used to model complex relationships between inputs and outputs, to find patterns in data, or to capture the statistical structure of an unknown joint probability distribution between observed variables.

[0003] Systems that are based on deep neural networks (DNN) have achieved breakthrough performances in many machine learning applications ranging from speech recognition and natural language processing to computer vision. While DNNs have already made it into many commercial applications, their usage is mostly limited to web/cloud based applications where computation can be performed on large servers. The main reason is that, by their nature, DNNs are large and computationally expensive. In fact, the success of DNNs has been driven by the increase in computational power (including the use of graphics processing units).

[0004] Using DNNs locally on devices presents a number of challenges, in particular the limited memory availability and the limited computation power of computer systems. In order to leverage the power of DNNs for electronic devices (in contrast to web based applications), implementing DNNs poses the challenges of reducing both their size and their computational complexity.

[0005] Although there exist techniques for training artificial neural networks using prior knowledge, it is generally desirable to provide more efficient techniques for training artificial neural networks. Song Han ET AL: "Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding", 15 February 2016, describes an apparatus comprising circuitry that implements an artificial neural network training algorithm that uses weight tying, wherein the circuitry is configured to compute a weight-tied weight matrix based on an index matrix and based on a value vector and to quantize the values of the value vector after updating the weight tying.

SUMMARY



[0006] According to a first aspect, the disclosure provides an apparatus as defined in appended claim 1.

[0007] According to a further aspect, the disclosure provides an apparatus as defined in appended claim 9.

[0008] According to a further aspect, the disclosure provides a method as defined in appended claim 11.

[0009] Further aspects are set forth in the dependent claims, the following description and the drawings.

BRIEF DESCRIPTION OF THE DRAWINGS



[0010] Embodiments are explained by way of example with respect to the accompanying drawings, in which:

Fig. 1 schematically shows the principle of weight-tying;

Fig. 2 shows a forward pass that is iteratively applied for neural network training;

Fig. 3 shows a backward pass that is iteratively applied for neural network training;

Fig. 4 shows a diagram in which the results of two quantization schemes are depicted;

Fig. 5 shows a computer that implements an artificial neural network;

Fig. 6 shows a machine that uses a trained artificial neural network; and

Fig. 7 shows a software or computer device that is configured to train an artificial neural network or to compress an existing artificial neural network before deploying it.


DETAILED DESCRIPTION OF EMBODIMENTS



[0011] The embodiments described below relate to an apparatus comprising circuitry that implements an artificial neural network training algorithm that uses weight tying. The artificial neural network may for example be a deep convolutional neural network. The training algorithm may for example be based on a stochastic gradient descent training algorithm.

[0012] Use of prior knowledge about the domain of application of a neural network helps to define the optimal layer type, layer size and number of layers, and may also provide information that enables weight-tying. In weight-tying (also called weight-sharing), units in the network share weights, typically units that operate on similar types of data. Weight sharing is applied to increase learning efficiency, as it may reduce the number of free parameters to be learnt. Weight-tying is thus a technique which allows reducing the memory footprint of a neural network and possibly eliminating the need for multipliers.

[0013] The training algorithm may for example be used to train an artificial neural network or to compress existing artificial neural networks before deploying them.

[0014] The apparatus may be any device potentially performing recognition, classification or signal generation using an artificial neural network, such as a TV, a camera, a mobile phone, a game console, etc. The apparatus may also be any device that performs controlling functions based on input, such as a lane keeping assist system in semi-autonomous driving, or the like.

[0015] The circuitry may be configured to update the weight tying using a predefined number of iterations of a clustering algorithm. For example, a K-means clustering algorithm can be used. K-means clustering is a method of vector quantization, originally from signal processing. The K-means clustering can for example be done using the mean, or alternatively using the median. According to other embodiments, alternative clustering algorithms can be used. The predefined number of K-means iterations used to update the weight tying may for example be one. That is, according to one embodiment, only one K-means iteration is performed per update. However, more iterations of K-means may be performed depending on the needs or design goals.
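
By way of illustration only, one such clustering iteration may be sketched as follows (a minimal NumPy sketch, not part of the claimed subject-matter; W, v and I denote the full-precision weight matrix, the value vector and the index matrix introduced in the following paragraphs):

    import numpy as np

    def kmeans_iteration(W, v, I):
        # Update step (mean): each value becomes the mean of all
        # full-precision weights currently assigned to it.
        for k in range(v.size):
            mask = (I == k)
            if mask.any():          # leave empty clusters unchanged
                v[k] = W[mask].mean()
        # Assignment step: reassign every weight to its closest value.
        I = np.abs(W[..., None] - v).argmin(axis=-1)
        return v, I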

[0016] The circuitry is configured to compute a weight-tied weight matrix based on an index matrix and based on a value vector. Still further, the circuitry may be configured to update, in each iteration of K-means, the value vector and the index matrix based on a full-precision weight matrix.

[0017] The circuitry is configured to quantize the values of the value vector. More specifically, the circuitry is configured to quantize the values of the value vector after updating the weight tying, and to quantize them to the nearest power-of-two.

[0018] The value vector v may be fixed to specific values set by the user. As an example, the user may set predefined values for different value vector sizes. According to some embodiments, a value vector used with weight-sharing and quantization may comprise more than three values.

[0019] Still further, according to some embodiments, the circuitry may be configured to update full precision weights based on gradients. These gradients may be based on a cost function and based on a weight-tied weight matrix. In particular, the circuitry may be configured to compute the cost function based on a loss function and based on a forward pass. Still further, the circuitry may be configured to compute the cost function also based on a backward pass function.

[0020] The embodiments also disclose an apparatus comprising circuitry that implements an artificial neural network, the artificial neural network having been trained by a neural network training algorithm that uses weight tying. In particular, the circuitry may implement the artificial neural network in a multiplierless way. Such an apparatus may be any device potentially performing recognition, classification or signal generation using an artificial neural network, such as a TV, a camera, a mobile phone, a game console etc. The apparatus may also be any device that performs controlling functions based on input, such as a lane keeping assist system in semi-autonomous driving, or the like.

[0021] The disclosure comprises an apparatus comprising a programming interface that is configured to receive weight-tying parameters for use in a neural network training or compression algorithm. The programming interface may for example be configured to receive a number of quantization levels used for weight-tying, a parameter that indicates whether the value vector is fixed or updated, constraints on the value vectors, a parameter that indicates whether the assignment matrix is fixed or can be learnt (updated), an initial or fixed value vector, and/or an initial or fixed assignment matrix. The programming interface may be an application programming interface (API) or a user interface (UI).
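
By way of example only, such weight-tying parameters could be collected in a plain configuration object passed to the training tool; the following is a sketch with hypothetical names, none of which are taken from the disclosure:

    from dataclasses import dataclass
    from typing import Optional, Sequence

    @dataclass
    class WeightTyingConfig:
        num_levels: int                    # number of quantization levels K
        value_vector_fixed: bool = False   # True: v is fixed; False: v is learnt
        index_matrix_fixed: bool = False   # True: I is fixed; False: I is learnt
        power_of_two: bool = False         # constrain the values to powers of two
        initial_values: Optional[Sequence[float]] = None  # initial or fixed value vector
        initial_indices: Optional[Sequence[int]] = None   # initial or fixed assignment matrix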

[0022] The methods described herein are, in some embodiments, also implemented as a computer program which, when carried out on a computer and/or processor, causes the computer and/or processor to perform the method. In some examples not covered by the scope of the claims, also a non-transitory computer-readable recording medium is provided that stores therein a computer program product, which, when executed by a processor, such as the processor described above, causes the methods described herein to be performed.

Weight-tying



[0023] Weight parameters usually account for most of the memory used by an artificial neural network. In order to reduce the footprint of the weights, weight-tying limits the number of possible values that the weights can take. Weight-tying as described in the embodiments below uses a special representation of neural network weights: they are represented by an assignment matrix and a value vector for each layer. Both the assignment matrices and the value vectors are jointly optimized (learned) for the specific task of the neural network (e.g. image classification). With some specific constraints on the value vectors it becomes possible to train a network which does not require any multiplication when deployed (i.e. when it is used after training). For example, if the weights of a particular layer can only take two values, each weight can be encoded as a single bit. Similarly, if the weights can take 8 different values, they can be encoded with only three bits. The weight-tying (WT) approach described in the embodiments below is based on the basic idea that many weights of the same layer are "tied" together and can be assigned to a single value.

[0024] Fig. 1 schematically shows the principle of weight-tying. The main idea of weight-tying is to decompose the weight matrix W ∈ ℝ^(O×I) (or tensor for multidimensional input or output) of a particular layer into a value vector v ∈ ℝ^K and an index matrix I ∈ [1, ..., K]^(O×I). The weights w_mn are given by w_mn = v_(i_mn). The number of possible weight values that can be used in this layer is therefore limited to the size of the value vector v. Therefore, only the index matrix I, which has O · I indices of ⌈log2 K⌉ bits each, and the value vector v, which contains K floats, need to be stored.
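
As a concrete illustration of this decomposition (a NumPy sketch under the notation above; the layer sizes are arbitrary and only serve the example):

    import numpy as np

    O, I_dim, K = 64, 128, 8                   # K = 8 values -> 3-bit indices
    v = np.random.randn(K)                     # value vector, K floats
    idx = np.random.randint(0, K, (O, I_dim))  # index matrix

    W = v[idx]                                 # tied weights, w_mn = v[i_mn]

    # storage cost: O*I indices of ceil(log2 K) bits plus K floats,
    # instead of O*I full-precision floats
    tied_bits = O * I_dim * int(np.ceil(np.log2(K))) + 32 * K
    full_bits = 32 * O * I_dim                 # tied_bits is roughly 10x smaller here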

Training



[0025] The embodiments described below provide a training approach to train this special representation of ANN weights by learning both the index matrix and the value vector. During training of weight-tying networks, three variables are internally kept for each layer: the float weights W, the value vector v and the index matrix I. When training is finished, the float weights W can be discarded and only v and I are used for deployment.

[0026] The first step in training a weight-tying network is to initialize v and I. Given some initial float weights Winit (e.g. already learned using traditional ANN techniques, initialized randomly, or obtained using any other initialization technique), a K-means algorithm is applied to Winit to find the initial v and I. Each value of the value vector v corresponds to a centroid as obtained from the K-means algorithm and each index in I refers to a cluster as obtained from the K-means algorithm.
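
A sketch of this initialization, assuming scikit-learn's KMeans and initial weights Winit given as a NumPy array (the helper name is hypothetical):

    import numpy as np
    from sklearn.cluster import KMeans

    def init_tying(W_init, K):
        # Cluster the flattened float weights into K clusters;
        # the centroids become v, the cluster labels become I.
        km = KMeans(n_clusters=K, n_init=10).fit(W_init.reshape(-1, 1))
        v = km.cluster_centers_.ravel()       # initial value vector
        I = km.labels_.reshape(W_init.shape)  # initial index matrix
        return v, I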

[0027] Then as for traditional neural network training, a forward and a backward pass are iteratively applied. These two steps are applied as follows:
Fig. 2 shows a forward pass that is iteratively applied for neural network training.

[0028] At 201, tied weights Wq = v[I] are used to compute error and gradients. A more detailed embodiment of this computation is provided in section "Algorithm" below.

[0029] Fig. 3 shows a backward pass that is iteratively applied for neural network training.

[0030] At 301, the soft values W are updated by gradient descent. At 302, v and I are updated by performing, by default, one iteration of K-means on the updated W. At 302a, the value vector v is updated by computing the mean of all weights in W that belong to one of the K classes. At 302b, the index matrix I is updated by assigning each weight in W to the closest value in v. A more detailed description of these computations is provided in section "Algorithm" below.

[0031] With the above described approach of weight-tying, some constraints can also be applied on the value vector or the index matrix. In the examples described below, the specific cases of (1) a fixed value vector, (2) a fixed index matrix and (3) training a multiplierless network are considered.

(1) Fixed value vector



[0032] According to this example, the value vector v is fixed to specific values set by the user. As an example, the user can set predefined values for different value vector sizes K (the table of example value vectors is not reproduced here).
[0033] In this case, the K-means initialization is replaced by an assignment of each weight to its closest value and step 302a of the backward pass is omitted. Besides the examples given above, it is for example also possible to put a "0" value into v in order to use the weight-tying for pruning the network.
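
A sketch of this nearest-value assignment for a fixed, user-defined value vector (NumPy; the ternary vector below merely illustrates the pruning remark and is not a value taken from the source):

    import numpy as np

    def assign_to_fixed(W, v_fixed):
        # Each weight is assigned the index of its closest fixed value;
        # the value vector itself is never updated (step 302a is omitted).
        v = np.asarray(v_fixed, dtype=float)
        I = np.abs(W[..., None] - v).argmin(axis=-1)
        return v, I

    W = np.random.randn(4, 5)
    v, I = assign_to_fixed(W, [-1.0, 0.0, 1.0])  # the 0 entry prunes weights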

(2) Fixed index matrix



[0034] In this case, the index assignment defined by K-means clustering at the initialization step is kept during the rest of the training, so step 302b of the backward pass is omitted.

(3) Training a multiplierless network



[0035] According to the embodiments described below, the value vector v is learnt such that elements are power-of-two numbers. If weights are restricted to powers of two, multiplications usually required when applying the artificial neural network can be avoided and turned into bit-shifts (fixed-point numbers) or additions (floating-point numbers). This avoids the necessity of performing multiplications so that multiplierless processors can be used for training an artificial neural network, which may be more cost efficient and which may provide higher processing speeds.
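
To make the bit-shift argument concrete, consider the following minimal fixed-point illustration (integer values standing in for fixed-point numbers; for floating-point numbers the corresponding operation is adding k to the exponent):

    def mul_by_pow2(x: int, sign: int, k: int) -> int:
        # multiply x by the power-of-two weight w = sign * 2^k
        # without a multiplier: shift left (k >= 0) or right (k < 0)
        return sign * (x << k) if k >= 0 else sign * (x >> -k)

    assert mul_by_pow2(12, +1, 3) == 12 * 8        # w = +2^3
    assert mul_by_pow2(12, -1, -2) == -(12 // 4)   # w = -2^-2 (truncating)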

[0036] To achieve this, after each K-means update (i.e. after the initial K-means and after step 302a of the backward pass in Fig. 3), the values are rounded to the nearest power-of-two.

[0037] To round to the nearest power of two, the following algorithmic scheme is applied:
Let x be a float number which is to be quantized to the nearest power-of-two x_q = ±2^k.

[0038] According to a first quantization scheme, the quantized value is given by

x_q = s · 2^⌈b − log2(3/2)⌉

where s = sign(x) and b = log2|x|.

[0039] According to this first quantization scheme, the quantization threshold between the two candidate values 2^⌊b⌋ and 2^⌈b⌉ lies at their arithmetic mean (3/2) · 2^⌊b⌋. This first quantization scheme has the advantage that it minimizes the (squared) error.

[0040] According to an embodiment not within the scope of the claims, the quantized value is given by

x_q = s · 2^⌊b + 1/2⌋

with a quantization threshold at √2 · 2^⌊b⌋, i.e., the exponent b is rounded to the nearest integer.
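
The two schemes can be sketched as follows (NumPy; the closed-form exponents implement the thresholds reconstructed above and assume nonzero inputs):

    import numpy as np

    def quantize_pow2_linear(x):
        # first scheme: threshold at (3/2) * 2^floor(b),
        # i.e. the nearest power of two in the linear domain
        s, b = np.sign(x), np.log2(np.abs(x))
        return s * 2.0 ** np.ceil(b - np.log2(1.5))

    def quantize_pow2_log(x):
        # second scheme: round the exponent b to the nearest integer,
        # threshold at sqrt(2) * 2^floor(b)
        s, b = np.sign(x), np.log2(np.abs(x))
        return s * 2.0 ** np.floor(b + 0.5)

    x = np.array([1.4, 1.6, 2.9, 3.1])
    print(quantize_pow2_linear(x))   # [1. 2. 2. 4.]
    print(quantize_pow2_log(x))      # [1. 2. 4. 4.]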

[0041] Fig. 4 shows a diagram in which these two quantization schemes are depicted. On the horizontal axis the float number x which is to be quantized is plotted. On the vertical axis the respective quantized value xq that corresponds to x is plotted for each of the quantization schemes.

[0042] The embodiments disclosed here provide a "quantized" artificial neural network where the quantized values can be learnt by the algorithm. By optimizing the values during training, better results can be obtained than by setting the values arbitrarily. Some additional constraints can be imposed by the user, such as using only power-of-two values, which makes it possible to avoid multiplications and to make the artificial neural network computationally efficient.

[0043] In the embodiments described above, a training approach to train ANN weights by learning both the index matrix and the value vector has been described. This training approach may be used as a learning algorithm for training an artificial neural network. As the training approach described in the embodiments may provide a more efficient artificial neural network, it may also be used to compress an existing artificial neural network. This can be achieved by using the weights of an existing artificial neural network as starting point of the training algorithm.

Algorithm



[0044] In the following an embodiment of stochastic gradient descent (SGD) training with weight-tying is described in more detail. The algorithm is performed with minibatches of training data. Each minibatch is a subset of the training data containing inputs x and targets t. For the weight-tying scheme, the bias vectors are learned as in a traditional (full-precision) network and for simplicity they are ignored in the further discussion of the algorithm.

[0045] With each minibatch {x, t}, the following calculations are performed:
In a first step, the weight-tied weight matrix W_q^(l) is computed for the layers l = 1, ..., L (affine or convolutional layers) based on the index matrix I^(l) and the value vector v^(l) for layer l:

for l = 1 to L do
    [W_q^(l)]_mn = [v^(l)]_([I^(l)]_mn)
[0046] In a second step, the current cost C and the gradients G^(l) are computed:

t̂ = Forward(x, W_q^(1), ..., W_q^(L))
C = Loss(t, t̂)
(G^(1), ..., G^(L)) = Backward(x, t, W_q^(1), ..., W_q^(L))

C is the cost function that is to be minimized.

[0047] Loss(t, t̂) is the loss function that computes the error of the prediction t̂, whereas t is the ground truth.

[0048] Forward(x, W(1), ... , W(L)) is the function that describes the forward pass of the neural network with input data x and weights W(1), ..., W(L). The function returns the output node values.

[0049] Backward(x, t, W(1), ..., W(L)) is the function that describes the backward pass of the neural network with input data x, targets t and weights W(1), ..., W(L). The function returns the gradients of the weights.

[0050] In a third step, the full-precision weights are updated:

for l = 1 to L do
    W^(l) ← W^(l) − η G^(l)

[0051] Here, W^(l) is the full-precision weight matrix for layer l = 1, ..., L (of an affine or convolutional layer) and η is the learning rate of SGD.

[0052] In a fourth step, the weight tying is updated using M iterations of K-means:

for l = 1 to L do
    for m = 1 to M do
        for k = 1 to K^(l) do
            [v^(l)]_k = ( Σ_{(i,j): [I^(l)]_ij = k} [W^(l)]_ij ) / #{I^(l) = k}
        [I^(l)]_ij = argmin_k | [W^(l)]_ij − [v^(l)]_k |  for all i, j

[0053] The value vectors v^(l) are updated by computing the means of all weights in W^(l) that belong to one of the K^(l) classes. Each value [v^(l)]_k of the value vector corresponds to a centroid as obtained from the K-means iteration. The updated value at index k is the average of all weights in the corresponding cluster, i.e., the sum of the weights in the cluster divided by the number of weights in the cluster (the number of weights whose index corresponds to the value k: #{I^(l) = k}).

[0054] The index matrix I is updated by assigning each weight in W to the closest value in v. Each index [I^(l)]_ij refers to the cluster as obtained from the K-means algorithm. [v^(l)]_k is the value k of layer l, and argmin_k | [W^(l)]_ij − [v^(l)]_k | returns the index of the value closest to the weight [W^(l)]_ij.

[0055] Here, K^(l) is the number of values in the value vector v^(l) and M is the number of K-means iterations done after each minibatch update. The number M can be chosen arbitrarily. A default value is M = 1, but more iterations may be performed. Furthermore, it is also possible to perform the K-means update only after a certain number of minibatch updates in order to reduce the computational complexity of the weight-tying training.

[0056] In this fourth step, in order to obtain a multiplierless network, the values of the value vector are in addition rounded to the closest power-of-two number, as described in the embodiments above.

[0057] It should also be noted that in step four the update of I and v can also be done in the reverse order.
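
For concreteness, the four steps can be sketched end-to-end for a single affine layer (a NumPy sketch; the squared-error loss and the single-layer forward/backward functions below are stand-ins for the network-specific Forward, Loss and Backward functions named above):

    import numpy as np

    def forward(x, Wq):                      # single affine layer, no bias
        return x @ Wq.T

    def loss_and_grad(x, t, Wq):
        t_hat = forward(x, Wq)
        C = 0.5 * np.sum((t_hat - t) ** 2)   # squared-error loss
        G = (t_hat - t).T @ x                # gradient dC/dWq
        return C, G

    def minibatch_step(x, t, W, v, idx, eta=0.01, M=1):
        Wq = v[idx]                          # step 1: tied weight matrix
        C, G = loss_and_grad(x, t, Wq)       # step 2: cost and gradients
        W = W - eta * G                      # step 3: full-precision update
        for _ in range(M):                   # step 4: M K-means iterations
            for k in range(v.size):
                mask = (idx == k)
                if mask.any():
                    v[k] = W[mask].mean()    # centroid = cluster mean
            idx = np.abs(W[..., None] - v).argmin(axis=-1)
        return W, v, idx, C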

[0058] Here, SGD is used as the deep neural network optimization strategy. However, instead of SGD, other deep neural network optimization strategies could also be used, such as Adam, RMSprop, or the like. In such a case, "Step 3" of the algorithmic description would be modified according to the needs of the applied deep neural network optimization strategy.

[0059] It should also be noted that one can also consider tying together weights of only a part of a layer (e.g. weights leading to a single output map in a convolutional network) or tying together weights from different layers.

Implementation



[0060] In the following, an embodiment of a computer 130 that implements an artificial neural network is described with reference to Fig. 5. The computer 130 can be used to implement the training algorithms described above, and/or it may be used to implement an artificial neural network that has been generated by a training algorithm as described above.

[0061] The computer has components 131 to 140, which can form circuitry to implement artificial neural networks and training algorithms.

[0062] Embodiments which use software, firmware, programs or the like for performing the methods described herein can be installed on computer 130, which is then configured to be suitable for the concrete embodiment.

[0063] The computer 130 has a CPU 131 (Central Processing Unit), which can execute various types of procedures and methods as described herein, for example, in accordance with programs stored in a read-only memory (ROM) 132, stored in a storage 137 and loaded into a random access memory (RAM) 133, stored on a medium 140, which can be inserted in a respective drive 139, etc.

[0064] The CPU 131, the ROM 132 and the RAM 133 are connected with a bus 141, which in turn is connected to an input/output interface 134. The number of CPUs, memories and storages is only exemplary, and the skilled person will appreciate that the computer 130 can be adapted and configured accordingly for meeting the specific requirements of the respective application.

[0065] CPU 131 may comprise hardware that is specialized for implementation of artificial neural networks. Hardware that is specialized for implementation of artificial neural networks may for example be a parallel computing processor. CPU 131 may also use a GPU-accelerated deep neural network implementation. In particular, CPU 131 may comprise a multiplierless processor. Using a multiplierless processor may make the implementation more efficient in terms of processing speed and costs.

[0066] At the input/output interface 134, several components are connected: an input 135, an output 136, the storage 137, a communication interface 138 and the drive 139, into which a medium 140 (compact disc, digital video disc, compact flash memory, or the like) can be inserted.

[0067] The input 135 can be a pointer device (mouse, graphics tablet, or the like), a keyboard, a microphone, a camera, a touchscreen, etc.

[0068] The output 136 can have a display (liquid crystal display, cathode ray tube display, light emitting diode display, etc.), loudspeakers, etc.

[0069] The storage 137 can have a hard disk, a solid state drive and the like.

[0070] The communication interface 138 can be adapted to communicate, for example, via a local area network (LAN), wireless local area network (WLAN), mobile telecommunications system (GSM, UMTS, LTE, etc.), Bluetooth, infrared, etc.

[0071] It should be noted that the description above only pertains to an example configuration of computer 130. Alternative configurations may be implemented with fewer, additional or other sensors, storage devices, interfaces or the like.

[0072] Fig. 6 shows a machine that uses a trained artificial neural network. The machine comprises a processor 141 that is specialized for implementing the artificial neural network. The processor 141 receives input from a CCD camera 142 and a TOF camera 143. The input is processed in the artificial neural network implemented in the processor 141 to provide output. The output of the artificial neural network controls an actuator 144 to perform a specific action. For example, the artificial neural network may be trained for various functions in the context of autonomous or semi-autonomous driving, such as lane keeping assistance, recognizing objects on the road, pedestrian recognition, solving traffic problems, or mapping the raw pixels from a front-facing camera to the steering commands for a self-driving car. Based on the input from the CCD camera 142 and the TOF camera 143, the artificial neural network may judge whether the driving direction needs a correction in order to keep the lane. According to the output of the artificial neural network, the system may act on the steering of the car by controlling actuator 144.

[0073] Fig. 7 shows a software or computer device that is configured to train an artificial neural network or to compress an existing artificial neural network before deploying it. The device comprises a processor that is configured as an ANN training tool 147 and to implement an artificial neural network training algorithm. The device further comprises a memory 146 that is configured to store ANN data such as weights, value vectors, index matrices and the like. Still further, the device comprises a programming interface 145 by which a user can provide parameters to the ANN training tool 147. The programming interface may for example be configured to receive weight-tying parameters that are used to control the weight-tying aspects of a training or compression algorithm. Weight-tying parameters may for example be a number of quantization levels used for weight-tying, a parameter that indicates whether the value vector is fixed or can be learnt (updated), constraints on the value vectors, a parameter that indicates whether the assignment matrix is fixed or updated, an initial or fixed value vector, and/or an initial or fixed assignment matrix.

[0074] It should be recognized that the embodiments describe methods with an exemplary ordering of method steps. The specific ordering of method steps is however given for illustrative purposes only and should not be construed as binding. For example, steps 302a and 302b in Fig. 3 could be performed in reverse order without changing the result.

[0075] It should also be recognized that the division of the control or circuitry of Fig. 5 into units 131 to 140 is only made for illustration purposes and that the present disclosure is not limited to any specific division of functions in specific units. For instance, at least parts of the circuitry could be implemented by a respective programmed processor, field programmable gate array (FPGA), dedicated circuits, and the like.

[0076] All units and entities described in this specification and claimed in the appended claims can, if not stated otherwise, be implemented as integrated circuit logic, for example on a chip, and functionality provided by such units and entities can, if not stated otherwise, be implemented by software.

[0077] In so far as the embodiments of the disclosure described above are implemented, at least in part, using software-controlled data processing apparatus, it will be appreciated that a computer program providing such software control and a transmission, storage or other medium by which such a computer program is provided are envisaged as aspects of the present disclosure.


Claims

1. An apparatus comprising circuitry that implements an artificial neural network training algorithm that uses weight tying, wherein the circuitry is configured to compute a weight-tied weight matrix based on an index matrix and based on a value vector and to quantize the values of the value vector, after updating the weight tying, to the nearest power-of-two according to the quantization scheme:

x_q = s · 2^⌈b − log2(3/2)⌉

where s = sign(x) and b = log2|x|, and where x is the value which is to be quantized and xq is the quantized value.
 
2. The apparatus of claim 1, wherein the circuitry is configured to update the weight tying using a predefined number of iterations of a clustering algorithm.
 
3. The apparatus of claim 2, wherein the predefined number of iterations of the clustering algorithm used to update the weight tying is at least one.
 
4. The apparatus of claim 2, wherein the circuitry is configured to, in each iteration of the clustering algorithm, update a value vector for a class k according to

[v^(l)]_k = ( Σ_{(i,j): [I^(l)]_ij = k} [W^(l)]_ij ) / #{I^(l) = k}

where W(l) is a full-precision weight matrix for layer l of the neural network and I(l) is the index matrix.
 
5. The apparatus of claim 2, wherein the circuitry is configured to update, in each iteration of the clustering algorithm, an index matrix, with K(l) being the number of values in the value vector v(l), according to

[I^(l)]_ij = argmin_{k ∈ {1, ..., K^(l)}} | [W^(l)]_ij − [v^(l)]_k |
 
6. The apparatus of claim 1, wherein the circuitry is configured to update full precision weights based on gradients.
 
7. The apparatus of claim 6, wherein the circuitry is configured to

compute the gradients based on a cost function and based on the weight-tied weight matrix; or to

compute the cost function based on a loss function and based on a forward pass function; or to compute the gradients based on a backward pass function.


 
8. The apparatus of claim 1, wherein the training algorithm is a stochastic gradient descent training algorithm.
 
9. An apparatus comprising circuitry that implements an artificial neural network using a weight-tied weight matrix and a quantized value vector, wherein the weight-tied weight matrix and the quantized value vector have been computed by an apparatus according to claim 1.
 
10. The apparatus of claim 9, wherein the circuitry implements the artificial neural network in a multiplierless manner by employing bit-shifts for fixed-point numbers or additions for floating-point numbers.
 
11. A method of training an artificial neural network, the method comprising performing an artificial neural network training algorithm that uses weight tying on an apparatus according to claim 1.
 






Drawing

[Figs. 1 to 7]