TECHNOLOGICAL FIELD
[0001] Embodiments of the present disclosure relate to data communication. Some relate to data modulation and data demodulation.
BACKGROUND
[0002] Digital modulation is the process of varying one or more properties of a periodic waveform, the carrier signal, with a digital modulating signal that contains information to be transmitted.
[0003] The digital modulating signal can be a codeword of modulation symbols.
[0004] In quadrature amplitude modulation, the amplitudes of two quadrature carrier signals are varied between multiple values. Each symbol defines an amplitude (level) of the two quadrature carrier signals.
BRIEF SUMMARY
[0005] According to various, but not necessarily all, embodiments there is provided an apparatus comprising:
a first register, for a pointer to a current bit position, updated by a clock cycle;
a second register for a current Hamming Weight;
a third register for a current index value; and
adder logic configured to:
determine an interim value based on a coefficient provided from a memory and the current index value,
cause updating of the second and third registers comprising writing an updated index value based on the interim value to the third register.
[0006] In some but not necessarily all examples, the adder logic is configured to cause updating of the second register and the third register comprising writing an updated index value based on the coefficient to the third register.
[0007] In some but not necessarily all examples, the coefficient is provided from the memory in dependence upon the current index value provided by the third register and the current Hamming Weight provided by the second register.
[0008] In some but not necessarily all examples, the apparatus comprises:
means for accessing a set of multiple coefficients stored in the memory using a current value of the current bit position from the first register;
means for selecting one coefficient of the accessed set of multiple coefficients using a current value of the current Hamming Weight from the second register, wherein the selected coefficient is used to determine the interim value.
[0009] In some but not necessarily all examples, the coefficient is a truncated coefficient and wherein the adder logic is sized for a mantissa of the truncated coefficient, and is configured to operate on most significant bits of a current index value and a mantissa of the truncated coefficient to produce most significant bits of an interim value, which is added to least significant bits of the current index value to obtain the interim value.
[0010] In some but not necessarily all examples, the apparatus comprises means for initialization of the first register to a variable value and means for initialization of the second register to a variable value.
[0011] In some but not necessarily all examples, the apparatus comprises: in parallel, multiple circuit modules each of which comprises a second register for a current Hamming Weight, a third register for a current index value; and adder logic, wherein the parallel circuits share the first register and/or share the memory.
[0012] In some but not necessarily all examples, the apparatus is implemented in hardware comprising application specific circuitry.
[0013] In some but not necessarily all examples, the apparatus comprises means for configuring the apparatus for use in a transmitter or for use in a receiver.
[0014] In some but not necessarily all examples, the apparatus comprises means for using the current Hamming Weight from the second register and the current index value from the third register as state variables, and the current bit position in the codeword, from the first register, as timebase running sequentially through each bit position in the codeword.
[0015] In some but not necessarily all examples, the second register is used to store the Hamming Weight of the remaining symbols in the codeword that are yet to be processed and the third register is used to store the current index value for the remaining symbols in the codeword that are yet to be processed.
[0016] In some but not necessarily all examples, the apparatus comprises a lookup table of coefficient values stored in the memory. In some but not necessarily all examples, the lookup table of coefficient values is based on enumerative coding.
[0017] In some but not necessarily all examples, the lookup table of coefficient values is based on constant composition enumerative coding or variable composition enumerative coding. In some but not necessarily all examples, the lookup table is a table of truncated coefficient values.
[0018] In some but not necessarily all examples, each clock cycle updates the pointer to a current bit position in the first register, wherein the updated current bit position in the first register selects a row/column in a coefficient table and wherein the Hamming Weight in the second register selects a column/row of the coefficient table.
[0019] In some but not necessarily all examples, the apparatus comprises means for transmitting a modulated signal controlled by the codeword.
[0020] In some but not necessarily all examples, an optical high speed transmission system comprises, in a transmitter, the apparatus.
[0021] In some but not necessarily all examples, the apparatus comprises means for:
converting each data block to an index value; and
enumeratively decoding the index value each clock cycle to obtain with each clock cycle a symbol of the modulation codeword and conditionally, in dependence upon the obtained symbol, obtaining a new index value for enumeratively decoding at the next clock cycle to obtain the next symbol of the modulation codeword.
[0022] In some but not necessarily all examples, the interim value is the current index value minus the coefficient provided from the memory.
[0023] In some but not necessarily all examples, the apparatus comprises means for using the interim value to determine a condition, wherein when the condition is satisfied the adder logic is configured to cause updating of the second and third registers comprising writing the updated index value based on the determined condition to the third register and determining a symbol for the codeword at the current bit position.
[0024] In some but not necessarily all examples, the apparatus is configured to:
determine the condition based on a difference between the coefficient and the current index value provided by the third register,
wherein if the difference
i) has a first sense, then a first symbol is found in the codeword at the current bit position, the second register value is decremented by 1 and the third register value is decremented by the coefficient; and
ii) has a second sense, then a second symbol is found in the codeword at the current bit position and there is no change to the second register or the third register.
In some but not necessarily all examples, the apparatus comprises means for receiving a modulated signal controlled by the codeword.
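The condition and register updates described above can be modelled, for a binary alphabet, in a short software sketch. This is a minimal illustration assuming a coefficient table C[n][w] (for example, binomial coefficients under constant composition); the function and variable names are illustrative, not taken from the claims.

```python
# Minimal software model of the per-clock-cycle decoding condition.
# Assumptions: binary alphabet; C[n][w] is a coefficient table such as
# binomial coefficients for constant composition; names are illustrative.
def decode_index(index, n_bits, weight, C):
    """Unrank `index` into a binary codeword of length n_bits and
    Hamming Weight `weight` using the sign of (index - coefficient)."""
    I, w = index, weight                  # third and second register contents
    codeword = []
    for n in range(n_bits, 0, -1):        # first register: timebase
        if w == 0:                        # no ones remaining: emit zeros
            codeword.append(0)
            continue
        interim = I - C[n - 1][w]         # interim value
        if interim >= 0:                  # first sense: first symbol found
            codeword.append(1)
            I = interim                   # third register -= coefficient
            w -= 1                        # second register -= 1
        else:                             # second sense: registers unchanged
            codeword.append(0)
    return codeword
```

With a binomial coefficient table and N=4, W=2, indices 0 through 5 enumerate the six weight-2 codewords in order, e.g. index 0 yields 0011 and index 5 yields 1100.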
[0025] In some but not necessarily all examples, an optical high speed transmission system comprises, in a receiver, the apparatus.
[0026] In some but not necessarily all examples, the interim value is the current index value added to the coefficient provided from the memory.
[0027] In some but not necessarily all examples, the apparatus comprises means for:
enumeratively encoding a symbol of a received codeword each clock cycle to obtain with each clock cycle an updated index value wherein the updated value is conditionally dependent upon the symbol; and
converting a final index value obtained after enumeratively encoding a last symbol of the received codeword to a data block.
[0028] In some but not necessarily all examples, the apparatus is configured to:
 i) decrement the second register value by 1 and increment the third register value by the coefficient when the codeword at the current bit position has a first symbol; and
 ii) not change the second register value and the third register value when the codeword at the current bit position has a second symbol.
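The receiver-side register updates above admit a similarly small software model, the inverse of enumerative decoding. As before, the coefficient table C[n][w] and all names are illustrative assumptions, not the claimed implementation.

```python
# Minimal software model of the per-clock-cycle encoding (ranking) rule;
# the inverse of enumerative decoding. Assumptions: binary alphabet;
# C[n][w] is a coefficient table; names are illustrative.
def encode_codeword(codeword, C):
    """Rank a binary codeword back to its index value."""
    I = 0                          # third register: index accumulates
    w = sum(codeword)              # second register: Hamming Weight remaining
    n = len(codeword)              # symbols remaining to be encoded
    for bit in codeword:           # one symbol per clock cycle
        if bit == 1:               # first symbol:
            I += C[n - 1][w]       #   increment the index by the coefficient
            w -= 1                 #   decrement the Hamming Weight by 1
        n -= 1                     # second symbol: registers unchanged
    return I
```

With a binomial coefficient table this reverses the decoding sketch: encoding 1100 returns index 5 and encoding 0011 returns index 0.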
[0029] According to various, but not necessarily all, embodiments there is provided an equivalent method and/or computer readable instructions.
[0030] According to various, but not necessarily all, embodiments there is provided an apparatus comprising means for:
converting each data block to an index value; and
enumeratively decoding the index value each clock cycle to obtain with each clock cycle a symbol of the modulation codeword and conditionally, in dependence upon the obtained symbol, obtaining a new index value for enumeratively decoding at the next clock cycle to obtain the next symbol of the modulation codeword.
[0031] According to various, but not necessarily all, embodiments there is provided an apparatus comprising means for:
enumeratively encoding a symbol of a received codeword each clock cycle to obtain with each clock cycle an updated index value wherein the updated value is conditionally dependent upon the symbol; and
converting a final index value obtained after enumeratively encoding a last symbol of the received codeword to a data block.
[0032] According to various, but not necessarily all, embodiments, the interim value is the index value or new index value. In a transmitter, the interim value is the current index value minus the coefficient provided from the memory. In a receiver, the interim value is the current index value added to the coefficient provided from the memory.
[0033] According to various, but not necessarily all, embodiments there is provided an apparatus comprising:
a first register, for a pointer to a current bit position, updated by a clock cycle;
a second register;
a third register for a current index value; and
adder logic configured to:
determine an interim value based on a coefficient determined by a value of the first register and a value of the second register,
cause updating of the second and third registers comprising writing an updated index value based on the interim value to the third register.
According to various, but not necessarily all, embodiments there is provided an apparatus comprising means for:
converting data blocks that have a first fixed bit length to respective codewords, for controlling transmission, that have a second fixed bit length greater than the first fixed bit length and that have constrained but variable symbol composition comprising:
converting each data block of fixed first bit length to an index value;
enumeratively decoding the index value to obtain a codeword of the second fixed bit length that has a symbol composition that can vary with the index value.
[0034] In some but not necessarily all examples, the index value is used in combination with a Hamming Weight to define a unique route through a codeword trellis.
[0035] In some but not necessarily all examples, increasing the index value by one maps to a next codeword of the second fixed bit length and Hamming Weight in the range W_{min} to W_{max} in an ordered set of codewords of the second length and Hamming Weight in the range W_{min} to W_{max}.
[0036] In some but not necessarily all examples, each symbol (S_{j}) in the codeword has an associated additive term (C_{n,w}) dependent upon an amount of symbols (n) of the codeword remaining to be decoded and upon a total number (w_{j}) of those symbols (S_{j}) remaining in the codeword to be decoded; wherein the index is the sum of the additive terms for the codeword.
[0037] In some but not necessarily all examples, the additive terms for enumerative decoding are generated by the recursion
C_{n,w} = C_{n-1,w-1} + C_{n-1,w}
where n is a number of symbols of the codeword remaining to be decoded and w is a Hamming Weight of symbols of the codeword remaining to be decoded;
and wherein the additive terms are generated using a constant composition initialization C_{n,1} = n & C_{w-1,w} = 0 or a variable composition initialization C_{n,1} = n + 1 & C_{0,w} = 1.
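A sketch of generating such a coefficient table in software, assuming the additive terms follow a Pascal-type recursion C_{n,w} = C_{n-1,w-1} + C_{n-1,w} consistent with the stated initializations; this is an illustrative model, not the claimed implementation.

```python
# Hedged sketch: build the additive-term (coefficient) lookup table from
# the stated initializations, assuming a Pascal-type recursion.
def coefficient_table(n_max, w_max, variable=False):
    """Build C[n][w] for 0 <= n <= n_max, 0 <= w <= w_max.

    Constant composition: C_{n,1} = n and C_{w-1,w} = 0 (gives binomials).
    Variable composition: C_{n,1} = n + 1 and C_{0,w} = 1.
    """
    C = [[0] * (w_max + 1) for _ in range(n_max + 1)]
    for n in range(n_max + 1):
        if w_max >= 1:
            C[n][1] = n + 1 if variable else n
    if variable:
        for w in range(w_max + 1):
            C[0][w] = 1                      # variable: C_{0,w} = 1
    for n in range(1, n_max + 1):
        for w in range(2, w_max + 1):
            if not variable and n <= w - 1:
                C[n][w] = 0                  # constant: C_{w-1,w} = 0
            else:                            # Pascal-type recursion
                C[n][w] = C[n - 1][w - 1] + C[n - 1][w]
    return C
```

Under the constant composition initialization the table reproduces binomial coefficients, e.g. C[6][3] = 20; under the variable composition initialization each entry counts codewords of weight up to w.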
[0038] In some but not necessarily all examples, the apparatus comprises a memory wherein the additive terms are stored, in the memory, in a lookup table that has a unique value for each different combination of a number of symbols of the codeword remaining to be decoded and a Hamming Weight of symbols of the codeword remaining to be decoded.
[0039] In some but not necessarily all examples, the additive terms are truncated to a fixed number of bits of mantissa and variable number of bits of exponent and wherein the mantissa and a value indicative of the exponent are stored in separate lookup tables.
[0040] In some but not necessarily all examples, a Hamming Weight of the codeword is a programmable parameter.
[0041] In some but not necessarily all examples, the codeword of the second fixed bit length has a symbol composition that can vary with the index value up to a maximum value, wherein the maximum value is a programmable parameter.
[0042] In some but not necessarily all examples, the first fixed bit length is a programmable parameter and/or the second fixed bit length is a programmable parameter. In some but not necessarily all examples, a probability of occurrence of symbols within the codeword is controlled.
[0043] In some but not necessarily all examples, control parameters of enumerative decoding are used to control a probability of occurrence of multiple symbols within the codeword. In some but not necessarily all examples, the control is by programming one or more of: the first length, the second length, the Hamming Weight.
[0044] In some but not necessarily all examples, a probability of occurrence of multiple symbols within the codeword is controlled so that a probability of occurrence of symbols that result in higher amplitudes when modulated on a carrier is less than a probability of occurrence of low amplitude symbols.
[0045] In some but not necessarily all examples, the apparatus comprises means for:
for each one of multiple contiguous portions of a data block, using a binary matcher to convert each portion of the data block to a binary bit stream to create multiple binary bit streams while controlling a probability of occurrence of logic ones within each of the multiple binary bit streams; and
using the multiple binary bit streams to control a probability of occurrence of multiple multilevel symbols within the codeword so that a probability of occurrence of high amplitude symbols is less than a probability of occurrence of low amplitude symbols.
In some but not necessarily all examples, M-1 binary matchers are arranged as a binary tree, where M is the size of a symbol alphabet of the codeword, and wherein a first subset and a second subset of the multiple M-1 binary matchers are configured to generate symbols for the codeword and wherein the second subset of the multiple M-1 binary matchers is configured to select the order of the generated symbols in the codeword.
[0046] In some but not necessarily all examples, the selector is constant composition. In some but not necessarily all examples, all M-1 binary matchers are in parallel. In some but not necessarily all examples, there are M-1 or log2(M) binary matchers.
[0047] In some but not necessarily all examples, the apparatus comprises means for enumeratively decoding the index value each clock cycle to obtain with each clock cycle a symbol of the codeword and conditionally, in dependence upon the obtained symbol, obtaining a new index value for enumeratively decoding at the next clock cycle to obtain the next symbol of the codeword.
[0048] In some but not necessarily all examples, the apparatus comprises means for using a current Hamming Weight and a current index value (I) as state variables, and a current bit position in the codeword (n) as timebase running through each bit position in the codeword.
[0049] In some but not necessarily all examples, the apparatus is implemented in hardware comprising application specific circuitry (ASICs).
[0050] In some but not necessarily all examples, the apparatus comprises:
a first register, for a pointer to a current bit position, updated each clock cycle;
a second register for a current Hamming Weight;
a third register for a current index value;
adder logic configured to:
determine an interim value based on a coefficient provided from a memory and the current index value,
use the interim value to determine a symboldependent condition, wherein when the condition is satisfied the adder logic is configured to
cause updating of the second and third registers comprising writing an updated index value based on the determined condition to the third register and determining a symbol for the codeword at the current bit position.
[0051] In some but not necessarily all examples, the interim value is the current index value minus the coefficient provided from the memory, and the condition is that the interim value is greater than or equal to zero.
[0052] In some but not necessarily all examples, each clock cycle updates the pointer to a current bit position in the first register, wherein the updated current bit position in the first register selects a row in a coefficient table and wherein the Hamming Weight in the second register selects a column of the coefficient table.
[0053] In some but not necessarily all examples, the coefficient table is a shared coefficient table.
[0054] In some but not necessarily all examples, the apparatus comprises means for: truncation of the additive terms to a fixed number of bits of mantissa and a variable number of bits of exponent, and storing the mantissa separately to the exponent, wherein the adder logic is sized for the mantissa, and the adder logic is configured to operate on the most significant bits (MSB) of the current index value and the mantissa of the coefficient to produce an MSB interim value, which is added to the least significant bits (LSB) of the current index value to obtain the interim value.
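The MSB/LSB datapath described above can be illustrated with a small software model. It assumes a coefficient stored as a mantissa and a power-of-two exponent; the exact widths, rounding and register mapping in the claimed hardware may differ.

```python
# Hedged sketch of truncated-coefficient addition: the coefficient is
# stored as (mantissa, exponent) with coefficient = mantissa << exponent,
# so the adder only needs to span the mantissa width.
def truncate(coeff, mantissa_bits):
    """Split a coefficient into (mantissa, exponent), keeping the top bits."""
    exponent = max(coeff.bit_length() - mantissa_bits, 0)
    return coeff >> exponent, exponent

def add_truncated(index, mantissa, exponent):
    """Add (mantissa << exponent) to index using a mantissa-wide adder."""
    msb = index >> exponent                # MSBs of the current index value
    lsb = index & ((1 << exponent) - 1)    # LSBs pass through unchanged
    msb_interim = msb + mantissa           # narrow addition on the MSBs only
    return (msb_interim << exponent) | lsb # recombine with the LSBs
```

For example, truncate(300, 4) gives mantissa 9 and exponent 5 (300 is approximated as 9 * 2^5 = 288); the subtraction used in a transmitter follows the same MSB/LSB split with the mantissa subtracted instead of added.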
[0055] In some but not necessarily all examples, the apparatus comprises means for transmitting a modulated signal controlled by the codeword.
[0056] In some but not necessarily all examples, an optical high speed transmission system comprises, in a transmitter, the apparatus.
[0057] According to various, but not necessarily all, embodiments there is provided a method comprising:
converting data blocks that have a first fixed bit length to respective codewords that have a second fixed bit length greater than the first fixed bit length and that have constrained but variable symbol composition comprising:
converting each data block of fixed first bit length to an index value;
enumeratively decoding the index value to obtain a codeword of the second fixed bit length that has a symbol composition that can vary with the index value.
[0058] According to various, but not necessarily all, embodiments there is provided computer readable instructions that when implemented using hardware enable:
converting data blocks that have a first fixed bit length to respective codewords that have a second fixed bit length greater than the first fixed bit length and that have constrained but variable symbol composition comprising:
converting each data block of fixed first bit length to an index value;
enumeratively decoding the index value to obtain a codeword of the second fixed bit length that has a symbol composition that can vary with the index value.
[0059] According to various, but not necessarily all, embodiments there is provided an apparatus comprising means for:
converting codewords that have a second fixed bit length and a constrained but
variable symbol composition to respective data blocks that have a first fixed bit length less than the second fixed bit length comprising:
enumeratively encoding the codeword of the second fixed bit length to obtain an index value that varies with a variable symbol composition of the codeword; and
converting each index value to a data block of the fixed first bit length.
[0060] In some but not necessarily all examples, the index value is used in combination with a Hamming Weight to define a unique route through a codeword trellis.
[0061] In some but not necessarily all examples, increasing the index value by one maps to a next codeword of the second fixed bit length and Hamming Weight in the range W_{min} to W_{max} in an ordered set of codewords of the second length and Hamming Weight in the range W_{min} to W_{max}.
[0062] In some but not necessarily all examples, each symbol (S_{j}) in the codeword has an associated additive term (C_{n,w}) dependent upon an amount of symbols (n) of the codeword remaining to be decoded and upon a total number (w_{j}) of those symbols (S_{j}) remaining in the codeword to be decoded; wherein the index is the sum of the additive terms for the codeword.
[0063] In some but not necessarily all examples, the additive terms for enumerative decoding are generated by the recursion
C_{n,w} = C_{n-1,w-1} + C_{n-1,w}
where n is a number of symbols of the codeword remaining to be decoded and w is a Hamming Weight of symbols of the codeword remaining to be decoded;
and wherein the additive terms are generated using a constant composition initialization C_{n,1} = n & C_{w-1,w} = 0 or a variable composition initialization C_{n,1} = n + 1 & C_{0,w} = 1.
[0064] In some but not necessarily all examples, the apparatus comprises a memory wherein the additive terms are stored, in the memory, in a lookup table that has a unique value for each different combination of a number of symbols of the codeword remaining to be decoded and a Hamming Weight of symbols of the codeword remaining to be decoded.
[0065] In some but not necessarily all examples, the additive terms are truncated to a fixed number of bits of mantissa and variable number of bits of exponent and wherein the mantissa and a value indicative of the exponent are stored in separate lookup tables.
[0066] In some but not necessarily all examples, a Hamming Weight of the codeword is a programmable parameter.
[0067] In some but not necessarily all examples, the codeword of the second fixed bit length has a symbol composition that can vary with the index value up to a maximum value, wherein the maximum value is a programmable parameter.
[0068] In some but not necessarily all examples, the first fixed bit length is a programmable parameter and/or the second fixed bit length is a programmable parameter. In some but not necessarily all examples, a probability of occurrence of symbols within the codeword is controlled.
[0069] In some but not necessarily all examples, the apparatus comprises means for using a current Hamming Weight and a current index value (I) as state variables, and a current bit position in the codeword (n) as timebase running through each bit position in the codeword.
[0070] In some but not necessarily all examples, the apparatus is implemented in hardware comprising application specific circuitry (ASICs).
[0071] In some but not necessarily all examples, the apparatus comprises:
a first register, for a pointer to a current bit position, updated each clock cycle;
a second register for a current Hamming Weight;
a third register for a current index value; and
adder logic configured to:
determine an interim value based on a coefficient provided from a memory and the current index value,
cause updating of the second and third registers comprising writing an updated index value based on the interim value to the third register.
[0072] In some but not necessarily all examples, each clock cycle updates the pointer to a current bit position in the first register, wherein the updated current bit position in the first register selects a row in a coefficient table and wherein the Hamming Weight in the second register selects a column of the coefficient table.
[0073] In some but not necessarily all examples, the coefficient table is a shared coefficient table.
[0074] In some but not necessarily all examples, the apparatus comprises means for: truncation of the additive terms to a fixed number of bits of mantissa and a variable number of bits of exponent, and storing the mantissa separately to the exponent, wherein the adder logic is sized for the mantissa, and the adder logic is configured to operate on the most significant bits (MSB) of the current index value and the mantissa of the coefficient to produce an MSB interim value, which is added to the least significant bits (LSB) of the current index value to obtain the interim value.
[0075] In some but not necessarily all examples, the apparatus comprises means for receiving a modulated signal controlled by the codeword.
[0076] In some but not necessarily all examples, an optical high speed reception system comprises, in a receiver, the apparatus.
[0077] According to various, but not necessarily all, embodiments there is provided a method comprising:
converting codewords that have a second fixed bit length and a constrained but variable symbol composition to respective data blocks that have a first fixed bit length less than the second fixed bit length comprising:
enumeratively encoding the codeword of the second fixed bit length to obtain an index value that varies with a variable symbol composition of the codeword; and converting each index value to a data block of the fixed first bit length.
[0078] According to various, but not necessarily all, embodiments there is provided computer readable instructions that when implemented using hardware enable: converting codewords that have a second fixed bit length and a constrained but variable symbol composition to respective data blocks that have a first fixed bit length less than the second fixed bit length comprising:
enumeratively encoding the codeword of the second fixed bit length to obtain an index value that varies with a variable symbol composition of the codeword converting each index value to a data block of fixed first bit length.
[0079] According to various, but not necessarily all, embodiments there is provided an apparatus comprising means for:
converting a data block to an index value;
enumeratively decoding the index value to obtain a codeword for controlling transmission; and
controlling the enumerative decoding to control a probability of occurrence of multiple symbols, for modulating transmission, within the codeword.
[0080] According to various, but not necessarily all, embodiments there is provided a method comprising:
converting a data block to an index value;
enumeratively decoding the index value to obtain a codeword for controlling transmission; and
controlling the enumerative decoding to control a probability of occurrence of multiple symbols, for modulating transmission, within the codeword.
[0081] According to various, but not necessarily all, embodiments there is provided computer readable instructions that when implemented using hardware enable:
converting a data block to an index value;
enumeratively decoding the index value to obtain a codeword for controlling transmission; and
controlling the enumerative decoding to control a probability of occurrence of multiple symbols, for modulating transmission, within the codeword.
[0082] According to various, but not necessarily all, embodiments there is provided an apparatus comprising means for:
converting a data block to a codeword, for controlling transmission, that has a symbol composition that can vary; and
controlling a probability of occurrence of multiple symbols, for modulating transmission, within the codeword so that a probability of occurrence of high amplitude symbols is less than a probability of occurrence of low amplitude symbols.
[0083] According to various, but not necessarily all, embodiments there is provided an equivalent method and/or computer readable instructions.
[0084] According to various, but not necessarily all, embodiments there is provided an apparatus comprising means for:
for each one of multiple contiguous portions of a data block, using a binary matcher to convert each portion of the data block to a binary bit stream to create multiple binary bit streams while controlling a probability of occurrence of logic ones within each of the multiple binary bit streams; and
using the multiple binary bit streams to control a probability of occurrence of multiple multilevel symbols within a codeword, for controlling transmission, so that a probability of occurrence of high amplitude modulation symbols is less than a probability of occurrence of low amplitude modulation symbols.
[0085] According to various, but not necessarily all, embodiments there is provided an equivalent method and/or computer readable instructions.
[0086] According to various, but not necessarily all, embodiments there is provided an apparatus comprising means for:
converting each data block to an index value; and
enumeratively decoding the index value each clock cycle to obtain with each clock cycle a symbol of the modulation codeword and conditionally, in dependence upon the obtained symbol, obtaining a new index value for enumeratively decoding at the next clock cycle to obtain the next symbol of the modulation codeword.
[0087] According to various, but not necessarily all, embodiments there is provided an equivalent method and/or computer readable instructions.
[0088] According to various, but not necessarily all, embodiments there is provided an apparatus comprising means for:
enumeratively encoding a symbol of a received codeword each clock cycle to obtain with each clock cycle an updated index value wherein the updated value is conditionally dependent upon the symbol; and
converting a final index value obtained after enumeratively encoding a last symbol of the received codeword to a data block.
[0089] According to various, but not necessarily all, embodiments there is provided an apparatus comprising:
a first register, for a pointer to a current bit position, updated by a clock cycle;
a second register for a current Hamming Weight;
a third register for a current index value; and
adder logic configured to:
determine an interim value based on a coefficient provided from a memory and the current index value,
cause updating of the second and third registers comprising writing an updated index value based on the interim value to the third register.
[0090] According to various, but not necessarily all, embodiments, the interim value is the index value or new index value. In a transmitter, the interim value is the current index value minus the coefficient provided from the memory. In a receiver, the interim value is the current index value added to the coefficient provided from the memory. According to various, but not necessarily all, embodiments there is provided an equivalent method and/or computer readable instructions.
[0091] According to various, but not necessarily all, embodiments there is provided examples as claimed in the appended claims.
BRIEF DESCRIPTION
[0092] Some example embodiments will now be described with reference to the accompanying drawings in which:
FIG. 1 shows an example embodiment of the subject matter described herein;
FIG. 2 shows another example embodiment of the subject matter described herein;
FIG. 3 shows another example embodiment of the subject matter described herein;
FIG. 4A shows another example embodiment of the subject matter described herein;
FIG. 4B shows another example embodiment of the subject matter described herein;
FIG. 5 shows another example embodiment of the subject matter described herein;
FIG. 6 shows another example embodiment of the subject matter described herein;
FIG. 7 shows another example embodiment of the subject matter described herein;
FIG. 8 shows another example embodiment of the subject matter described herein;
FIG. 9 shows another example embodiment of the subject matter described herein;
FIG. 10 shows another example embodiment of the subject matter described herein;
FIG. 11 shows another example embodiment of the subject matter described herein;
FIG. 12 shows another example embodiment of the subject matter described herein;
FIG. 13 shows another example embodiment of the subject matter described herein;
FIG. 14 shows another example embodiment of the subject matter described herein;
FIG. 15 shows another example embodiment of the subject matter described herein;
FIG. 16 shows another example embodiment of the subject matter described herein;
FIG. 17 shows another example embodiment of the subject matter described herein;
FIG. 18 shows another example embodiment of the subject matter described herein;
FIG. 19 shows another example embodiment of the subject matter described herein;
FIG. 20 shows another example embodiment of the subject matter described herein;
FIG. 21 shows another example embodiment of the subject matter described herein.
DEFINITIONS
[0093] Distribution matching: A distribution matcher accepts any sequence of bits b_{k} (of length N_{b}) and produces a block of N symbols S_{k} (a codeword) drawn from an alphabet of size M with a desired arbitrary probability distribution. A binary distribution matcher has M=2.
[0094] Shaping rate: the ratio of the input block size to the output block size, N_{b}/N.
[0095] Fixed length distribution matching: fixed length to fixed length distribution matching, where N_{b} is fixed and N is fixed.
[0096] Probabilistic amplitude shaping: probabilistic selection of symbols from a symbol alphabet, in random order, such that constellation points with higher amplitude are selected less frequently than constellation points with lower amplitude. A frequency of selection of symbols is dependent upon a probability-amplitude distribution. The shape of the probability-amplitude distribution causes low-amplitude symbols to be selected more frequently on average than high-amplitude symbols.
Hamming Weight: the Hamming weight of a sequence is the number of symbols that are different from the zero-symbol of the alphabet used. For a binary alphabet, it is the number of ones. The Hamming weight for a particular symbol in a sequence is the number of times that particular symbol is used in the sequence.
[0097] Constant composition: the Hamming Weight for the codewords is a fixed value. For example, all codewords of length N have W ones. The number of combinations is given by the binomial coefficient C(N, W) = N!/(W!(N−W)!).
[0098] Variable composition: the Hamming Weight for the codewords is any value up to a fixed value (up to the codeword length). For example, all codewords of length N have k ones with k = 0, 1, 2, ..., W, where W ≤ N. The number of combinations is given by the sum of binomial coefficients over k = 0 to W of C(N, k).
Variable composition has more combinations available than constant composition.
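The two codebook sizes above can be checked numerically. The following is a minimal sketch; the function names are illustrative and not part of the disclosure:

```python
from math import comb

def constant_composition_count(N, W):
    # Codewords of length N with exactly W ones: the binomial coefficient C(N, W).
    return comb(N, W)

def variable_composition_count(N, W):
    # Codewords of length N with k ones for any k = 0, 1, ..., W.
    return sum(comb(N, k) for k in range(W + 1))

# Variable composition always offers at least as many codewords.
print(constant_composition_count(8, 3))  # 56
print(variable_composition_count(8, 3))  # 1 + 8 + 28 + 56 = 93
```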
[0099] Enumerative coding: an indexing algorithm that assigns a unique integer index between 0 and N−1 for uniquely referencing a codeword amongst the N codewords, ordered for example in lexicographic or colexicographic order.
[0100] Multilevel distribution matcher: a distribution matcher that operates at symbolic level, where different combinations of bits represent an amplitude (level) of a symbol.
DETAILED DESCRIPTION
[0101] FIG 1 illustrates an example of a data communication system 100 comprising a transmitter apparatus 101 and a receiver apparatus 103.
[0102] In this example, the system 100 is an optical communication system. In some, but not necessarily all, examples, it is a high speed coherent optical communication system. In this context high speed means an information rate of >100 Gb/s or, in some circumstances, >1 Tb/s.
[0103] The transmitter 101 is configured to transmit a modulated carrier signal 35 through an optical channel 40 to a receiver 103. A stream of symbols 21 has been modulated onto a carrier signal by a modulator 34 to form the modulated carrier signal 35.
[0104] The data 10 comprises information bits. The information bits are used to generate a sequence of symbols 21 (a codeword). The sequence of symbols 21 are channel coded 32 and modulated 34 before being transmitted via the communication channel 40. The transmitted modulated signal 35 is controlled by the codeword and transfers the data 10.
[0105] In this example, quadrature amplitude modulation is used to modulate the carrier signal. In quadrature amplitude modulation (QAM), an ordered group of N binary bits (a symbol) is converted to a unique combination of phase shift and amplitude. This unique combination can be represented as a point (X, Y) in an Argand diagram. A constellation diagram is an Argand diagram that represents the location, that is the constellation point (X, Y), of each modulation symbol. The set of M symbols for a given modulation is a modulation alphabet, where M=2^{N}. A particular alphabet M is used for M-QAM modulation and this defines a constellation of M distinct points, each of which represents a symbol. The modulator 34 modulates a stream of symbols of the constellation onto the carrier signal.
[0106] The transmitted information is demodulated 50 and channel decoded 52 to reproduce the sequence of symbols 21. The sequence of symbols 21 are used to recover the data 10.
[0107] In some but not necessarily all examples, the data 10 is processed, for example converted to an index which is enumeratively decoded, to produce the sequence of symbols 21.
[0108] In some but not necessarily all examples, a block of data 10 of size Nb is converted to a larger block of symbols 21 (a codeword) of size N by the matcher 20.
[0109] In some examples, the block of symbols 21 (the codeword) of size N has a variable composition (a variable Hamming Weight W) that can be controlled. Controlling the Hamming Weight W used by the matcher 20 can be used to control the probability distribution of the symbols.
[0110] In some but not necessarily all examples, the symbols can be produced with a controlled probability distribution, through a distribution matcher (DM) 20. For example, in probabilistic amplitude shaping, NQAM symbols 35 of lower amplitude can be produced with a higher probability than symbols of a higher amplitude.
[0111] In the example illustrated, the symbols produced by the distribution matcher (DM) 20 (the amplitudes) are
combined with the signs of parity bits generated by the systematic encoder 32, before being modulated by a modulator 34 and transmitted.
[0112] In some but not necessarily all examples the communication channel 40 is an optical transmission system, for example, an optical high speed transmission system.
[0113] At the receiver end, after decoding 50, 52, a dematcher 60 transforms the sequence of symbols 21 back into the information bit stream, data 10.
[0114] Very high bit rates can be achieved. In at least some examples, the hardware complexity of the matcher 20 is such that it enables implementation using application specific integrated circuit (ASIC) technology. In at least some examples, the hardware complexity of the dematcher 60 is such that it enables implementation using application specific integrated circuit (ASIC) technology.
[0115] The distribution matcher 20 is a device that accepts any sequence of bits b_{k} (of length, say, N_{b}) and produces a block of N symbols S_{k} drawn from an alphabet of size M with the desired probability distribution.
[0116] The ratio of the input block size N_{b} over the output block size N is defined as the Shaping Rate, and determines the performance/amount of bit redundancy of the distribution matcher. The distribution matcher 20 accepts any block of N_{b} bits, and produces a block of N symbols, where N > N_{b}. Not all sequences of symbols are used; there is symbol redundancy.
[0117] The distribution dematcher 60 reverses the process performed by the distribution matcher 20. It receives a block of N symbols S_{k}, drawn from an alphabet of size M with the desired probability distribution by the distribution matcher 20, and reproduces the original sequence of bits b_{k} (of length N_{b}) received by the distribution matcher 20.
[0118] There is a one-to-one correspondence between blocks of bits and blocks of symbols. This enables the dematcher 60 to retrieve a block of bits from a block of symbols.
[0119] In this way, fixed-length sequences of symbols are associated to fixed-length sequences of bits. The block length N_{b} is a fixed variable, and buffering is not needed. Also, synchronization between blocks of bits and blocks of symbols is maintained because both N and N_{b} are fixed. The fixed-length to fixed-length distribution matcher 20 does not need a buffer to manage random fluctuations of the number of information bits carried by a block of symbols 21.
[0120] The shaping rate is the ratio of the input block size to the output block size (N_{b}/N). Although this ratio is fixed for a given configuration, it can be varied between configurations.
[0121] The Hamming weight W of a block of symbols 21 (a codeword) is the number of symbols that are different from the zero-symbol of the alphabet used. For a binary alphabet, it is the number of ones.
[0122] Constant composition matching keeps the Hamming Weight W for the codewords produced fixed at a value W.
[0123] The matcher 20/dematcher 60 use a one-to-one mapping between a codeword of N symbols containing respectively K_{1}, K_{2}, ..., K_{M} of the symbols S_{1}, S_{2}, ..., S_{M}, taken in any order, and a sequence of N_{b} bits.
[0124] In constant composition matching, the values K_{1}, K_{2}, ..., K_{M} are the same for every codeword and thus only one composition exists for all sequences, and the distribution matcher incurs a rate loss H(S) − N_{b}/N.
[0125] For a binary alphabet, all codewords of length N exhibit k=K ones. The total number of combinations is given by the binomial coefficient C(N, K).
[0126] The Hamming Weight W has a fixed value for constant composition matching; it is constrained to a single value. For binary symbols, the constant composition codewords of length N are those codewords which exhibit k ones, with a single fixed k selected from k = 0...N.
[0127] Variable composition matching allows the Hamming Weight W for the codewords produced to vary up to a maximum fixed value W.
[0128] In at least some examples, a probability distribution of symbols 21 in a codeword can be controlled by controlling the Hamming Weight.
[0129] In the example provided above, the composition of the output symbol sequence 21 is variable. The values K_{1}, K_{2}, ..., K_{M} are variable.
[0130] The Hamming Weight W has a maximum value for variable composition and is constrained to a value from a set of values. For binary symbols, the variable composition codewords of length N are those codewords which exhibit k ones, with each k = 0...W. Variable composition has more available codewords of length N than constant composition.
[0131] The advantage of variable composition matching over constant composition matching can be appreciated from
FIG 2. FIG 2 plots the information rate loss (H(p) − R) for the case M=2 (binary alphabet) and various desired distributions [1−p, p] of output binary symbol probability. The constant composition matching curves are plotted as dashed curves for various lengths N (written just beside each curve, to the right), whereas solid curves refer to variable composition matching, whose block lengths are shown in the legend and written beside each curve to the left. It can be seen that variable composition matching achieves a smaller rate loss for any fixed block length, or the same rate loss with much smaller block lengths. For "large" values of p, the advantage can be more than one order of magnitude.
[0132] FIG 3 illustrates a matching algorithm 110 that can be performed by the matcher 20. An index i(x) 23 produced from a block of N_{b} bits b_{k} 10 produces a block of N symbols S_{k} 21. The integer index uniquely identifies a symbol sequence 21 of Hamming weight W. The integer index in combination with the Hamming Weight defines a unique route through a codeword trellis.
[0133] FIG 4A illustrates a corresponding dematching algorithm that can be performed by the dematcher 60.
[0134] A block of N symbols S_{k} 21 reproduces the index i(x) 23, which can be converted back to the block of N_{b} bits b_{k} 10.
[0135] In the example illustrated the matcher 20 operates as an enumerative decoder by converting an index to a block x of symbols 21. The dematcher 60 operates as an enumerative encoder by converting a block x of symbols 21 to an index.
[0136] The block of N_{b} bits b_{k} 10 and the block of N symbols S_{k} 21 are constrained sequences. The constraints are the fixed lengths N_{b}, N and the Hamming Weight W. The Hamming Weight W has a fixed value for constant composition matching. The Hamming Weight W has a maximum value for variable composition matching.
[0137] For simplicity of explanation, a binary alphabet (M=2) is used in the following example. However, it should be appreciated that other symbol alphabets can be used, e.g. M>2.
[0138] Enumerative encoding establishes a one-to-one mapping between bit sequences of length N_{b}, and symbol sequences of length N (N > N_{b}) and Hamming weight W, via integer indices.
[0139] There is a one-to-one mapping between bit sequences of length N_{b} and enumeratively encoded integer indices i(x) 23. For example, bit sequences of length N_{b} 10 can be converted to an enumeratively encoded integer index i(x) 23 using binary-to-decimal conversion.
[0140] There is a one-to-one mapping between enumeratively encoded integer indices i(x) 23 and symbol sequences 21 of length N (N > N_{b}) and Hamming weight W. Enumeratively encoding the symbol sequences 21 of length N (N > N_{b}) and Hamming weight W will produce the enumeratively encoded integer index i(x) 23.
[0141] Each enumeratively encoded integer index i(x) 23 uniquely maps to a particular symbol sequence 21 of length N (N > N_{b}) and Hamming weight W.
[0142] Increasing the index value by one maps to a next symbol sequence (codeword) of length N (N > N_{b}) and Hamming Weight W in a lexicographically ordered set of codewords of length N and Hamming weight W. The lexicographic ordering sorts codewords as an ordered sequence of symbols based primarily on a position of the symbol and secondarily on a value of the symbol. The lexicographic ordering sorts codewords 'alphabetically' (reading left to right), where each symbol in the codeword represents a letter in a word. Lexicographical order can be 'forward', starting at the least significant symbol, e.g. 000, 001, 010, 011, 100, 101, 110, 111, or reverse, starting at the most significant symbol, e.g. 111, 110, 101, 100, 011, 010, 001, 000. Colexicographic ordering is also possible. Colexicographic ordering sorts codewords 'alphabetically' (but reading right to left), where each symbol in the codeword represents a letter in a word. Colexicographical order can be 'forward', starting (at the right) at the least significant symbol, e.g. 000, 100, 010, 110, 001, 101, 011, 111, or reverse, starting at the right at the most significant symbol, e.g. 111, 011, 101, 001, 110, 010, 100, 000.
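The forward lexicographic and forward colexicographic sequences above can be reproduced with a short sketch (illustrative only):

```python
from itertools import product

# All 3-bit codewords, as strings, most significant symbol first.
words = ["".join(bits) for bits in product("01", repeat=3)]

# Forward lexicographic order: compare symbols reading left to right.
lex = sorted(words)

# Forward colexicographic order: compare symbols reading right to left.
colex = sorted(words, key=lambda w: w[::-1])

print(lex)    # ['000', '001', '010', '011', '100', '101', '110', '111']
print(colex)  # ['000', '100', '010', '110', '001', '101', '011', '111']
```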
[0143] The indices i(x) 23 are contiguous. Each index corresponds to a symbol sequence 21 (a unique route through a trellis).
[0144] To encode N_{b} information bits, indices range from 0 to 2^{N_{b}} − 1. The matcher addresses 2^{N_{b}} sequences; the remaining symbol sequences (2^{N} − 2^{N_{b}}) are not used (symbol redundancy).
[0145] Assume S is the set of codewords, for example binary symbol sequences (M=2), of length N and Hamming weight W. Rank the codewords of symbols 21 in lexicographic order (left to right). Let us consider the codewords {x_{n−1}, x_{n−2}, ..., x_{0}} of n ≤ N symbols, containing w ones (w = 1...W). Let C_{n,w} be the number of such codewords, where w = number of ones. We have C_{n,n+1} = 0 (by definition) and C_{n,1} = n (by inspection), and the recursion

C_{n,w} = C_{n−1,w} + C_{n−1,w−1}

where n is a number of symbols of the codeword remaining to be decoded and w is a Hamming Weight of symbols of the codeword remaining to be decoded.
[0146] The route from the node equivalent to C_{n−1,w}, e.g. (n−1, w), to the destination equivalent to C_{n,w}, e.g. (n, w), is via a transition equivalent to an addition of "0" to the symbol sequence. The number of routes to the destination node via "0" is therefore C_{n−1,w}.
[0147] The route from the node equivalent to C_{n−1,w−1}, e.g. (n−1, w−1), to the destination equivalent to C_{n,w}, e.g. (n, w), is via the transition equivalent to an addition of "1" to the symbol sequence. The number of routes to the destination node via "1" is therefore C_{n−1,w−1}.
[0148] As the symbol sequence is a binary sequence there are no other routes to the destination.
[0149] The total number of routes to the destination is therefore C_{n−1,w} + C_{n−1,w−1}, and the following relation holds:

C_{n,w} = C_{n−1,w} + C_{n−1,w−1}
[0150] The route through a codeword trellis needs to accurately identify the position of each "1" in the symbol sequence. Each progression of a route through the trellis via addition of a "1" or "0" to the symbol sequence at position n−1 increases n−1 by 1 to n. Each progression of a route through the trellis via addition of a "1" to the symbol sequence at position n−1 increases w by 1; thus the route jumps from node (n−1, w−1) to node (n, w). Each progression of a route through the trellis via addition of a "0" to the symbol sequence at position n−1 leaves w unchanged; thus the route jumps from node (n−1, w) to node (n, w).
[0151] Suppose that we already have indexing rules to enumerate the two types of sequences of length n−1, with indices from 0 to C_{n−1,w} − 1 for the route to the node (n−1, w), and from 0 to C_{n−1,w−1} − 1 for the route to the node (n−1, w−1).
[0152] If the sequence of length n has the leftmost bit equal to "0", it can be enumerated with the indices from 0 to C_{n−1,w} − 1, as there is no requirement to locate a "1".
[0153] If the sequence of length n has the leftmost bit equal to "1", it requires a range of at least C_{n−1,w−1} indices to locate w−1 "1"s at one of n−1 positions within a sequence of n symbols. It can be enumerated with the indices from C_{n−1,w} to C_{n−1,w} + C_{n−1,w−1} − 1 = C_{n,w} − 1, i.e., we add C_{n−1,w} to the index of the block of length n−1. The index contains the term C_{n−1,w} only if there is a one in position n−1. If we apply this rule recursively, each index is thus computed as a sum of binomial coefficients.
[0154] The binomial coefficients C_{n,w} can be found from Pascal's triangle.
[0155] The lookup table of binomial coefficients for fixed composition (de)matching is
Table: 1
| C_{n,w} | w=1 | w=2 | w=3 |
| n=0     |  0  |     |     |
| n=1     |  1  |  0  |     |
| n=2     |  2  |  1  |  0  |
| n=3     |  3  |  3  |  1  |
| n=4     |  4  |  6  |  4  |
| n=5     |  5  | 10  | 10  |
| n=6     |     | 15  | 20  |
| n=7     |     |     | 35  |
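Such a table can be generated directly from the recursion C_{n,w} = C_{n−1,w} + C_{n−1,w−1}. The sketch below (function name illustrative) builds the full table; the patent's lookup table additionally leaves entries that the algorithm never reads blank:

```python
def binomial_table(N, W):
    """Build C[n][w] for n = 0..N and w = 0..W using the recursion
    C(n, w) = C(n-1, w) + C(n-1, w-1)."""
    C = [[0] * (W + 1) for _ in range(N + 1)]
    for n in range(N + 1):
        C[n][0] = 1                      # exactly one length-n word with zero ones
        for w in range(1, W + 1):
            if n > 0:
                C[n][w] = C[n - 1][w] + C[n - 1][w - 1]
    return C

C = binomial_table(7, 3)
print(C[7][3], C[5][2], C[4][3])  # 35 10 4, matching Table 1
```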
[0156] Each symbol S_{j}, e.g. "1", in the symbol codeword 21 has an associated additive term. The additive term C_{n,w} is dependent upon the total number w of those symbols S_{j}, e.g. "1"s, remaining to be decoded in the codeword, and upon the position n of the current symbol S_{j}, e.g. "1", being decoded in the codeword (this is dependent upon an amount of symbols (n) of the codeword remaining to be decoded). The index is the sum of the additive terms for the codeword.
Table: 2
| x codeword | W | additive terms of index (binomial terms C_{n,w}, n=3...0, w=4...1) | index (W=4) | index (W=3) | index (W=2) | index (W=1) |
| 1111 | 4 | C_{3,4}=0, C_{2,3}=0, C_{1,2}=0, C_{0,1}=0 | 0 |   |   |   |
| 1110 | 3 | C_{3,3}=1, C_{2,2}=1, C_{1,1}=1           |   | 3 |   |   |
| 1101 | 3 | C_{3,3}=1, C_{2,2}=1, C_{0,1}=0           |   | 2 |   |   |
| 1100 | 2 | C_{3,2}=3, C_{2,1}=2                      |   |   | 5 |   |
| 1011 | 3 | C_{3,3}=1, C_{1,2}=0, C_{0,1}=0           |   | 1 |   |   |
| 1010 | 2 | C_{3,2}=3, C_{1,1}=1                      |   |   | 4 |   |
| 1001 | 2 | C_{3,2}=3, C_{0,1}=0                      |   |   | 3 |   |
| 1000 | 1 | C_{3,1}=3                                 |   |   |   | 3 |
| 0111 | 3 | C_{2,3}=0, C_{1,2}=0, C_{0,1}=0           |   | 0 |   |   |
| 0110 | 2 | C_{2,2}=1, C_{1,1}=1                      |   |   | 2 |   |
| 0101 | 2 | C_{2,2}=1, C_{0,1}=0                      |   |   | 1 |   |
| 0100 | 1 | C_{2,1}=2                                 |   |   |   | 2 |
| 0011 | 2 | C_{1,2}=0, C_{0,1}=0                      |   |   | 0 |   |
| 0010 | 1 | C_{1,1}=1                                 |   |   |   | 1 |
| 0001 | 1 | C_{0,1}=0                                 |   |   |   | 0 |
| 0000 | 0 |                                           |   |   |   |   |
[0157] The index value is a unique value for each different combination of a number of symbols of the codeword remaining to be decoded (n) and a Hamming Weight (w) of symbols of the codeword remaining to be decoded.
[0158] Referring to FIG 3, the matcher 20 performs matching algorithm 110.
[0159] At block 112, the algorithm variables I, n, w are declared.
[0160] I = i(x) = the integer index determined by binary-to-decimal conversion of the block of N_{b} binary bits 10
n = N−1, where N is the length of the symbol codeword x
w = W, where W is the fixed Hamming weight of the symbol codeword x (for constant composition matching)
[0161] The algorithm additionally uses x_{n}, which is the value of the binary symbol at position n in the codeword x, and uses C_{n,w}, which is the binomial coefficient from the lookup table, for instance Table 1 for the case of n=7, w=3.
[0162] At block 114, read the value of the binomial coefficient C_{n,w} from lookup Table 1. If the value of C_{n,w} is greater than I then the process moves to block 116. However, if the value of C_{n,w} is less than or equal to I then the process moves to block 118.
[0163] At block 116, the value x_{n} of the binary symbol at position n in the codeword x is set to zero ("0"). The value n identifying a bit position in the codeword x is decremented by 1.
[0164] At block 118, the value x_{n} of the binary symbol at position n in the codeword x is set to one ("1"). The value n identifying a bit position in the codeword x is decremented by 1. The value w, that represents the Hamming Weight of the remaining symbols of codeword x to be found, is decreased by 1. The value I, that represents the index for the remaining symbols of codeword x to be found, is decreased by the binomial coefficient C_{n,w}.
[0165] At block 120, if the last binary symbol of the codeword has been determined the method ends. Otherwise the method iterates, returning to block 114.
[0166] The method 110 comprises iterating:
if a current index I for a current bit at position n is less than a current threshold C_{n,w},
set a bit x_{n} at the current bit position n to zero,
determine a new threshold C_{n−1,w} using a new bit position (n−1) and the current Hamming Weight,
change the current bit position n to the new bit position (n−1),
replace the current threshold with the new threshold C_{n−1,w};
otherwise, if the current index I for the current bit at position n is greater than or equal to the current threshold C_{n,w},
set the bit x_{n} at the current bit position n to one,
create a new index (I := I − C_{n,w}) by subtracting the threshold C_{n,w} from the current index I,
create a new Hamming Weight (w−1) by subtracting one from the current Hamming weight,
determine a new threshold C_{n−1,w−1} using the new bit position (n−1) and the new Hamming Weight (w−1),
change the current bit position n to the new bit position (n−1), and
replace the current index with the new index, the current threshold with the new threshold and the current Hamming Weight with the new Hamming weight.
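The iteration above can be sketched in software; a library binomial function stands in for the stored lookup table, and all names are illustrative rather than part of the disclosure:

```python
from math import comb

def match(index, N, W):
    """Enumerative decoding as in FIG 3: map an integer index to the
    lexicographically ranked length-N codeword of Hamming weight W."""
    I, w = index, W
    bits = []                              # bits from position N-1 down to 0
    for n in range(N - 1, -1, -1):
        c = comb(n, w)                     # threshold C(n, w), the Table 1 lookup
        if I >= c:                         # block 118: emit "1", consume C(n, w)
            bits.append(1)
            I -= c
            w -= 1
        else:                              # block 116: emit "0"
            bits.append(0)
    return bits                            # bits[0] is x_{N-1}

# Worked example from paragraph [0183]: I = 11 (binary 1011), N = 8, W = 3
print("".join(map(str, match(11, 8, 3))))  # 00100101
```

The codeword 1100 of Table 2 is likewise recovered from index 5 with N=4, W=2.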
[0167] Referring to the binomial table (Table: 1), starting from the last row n=N−1 in the column w=W, and decreasing n, we look for the first binomial coefficient C_{n_{1},W} (in column w=W) that is less than or equal to I. This entry identifies the position n_{1} of the first 1 in the sequence x. We subtract C_{n_{1},W} from I, namely

I := I − C_{n_{1},W}

and we repeat the search, looking for the first row n_{2}, in the column w=W−1, such that C_{n_{2},W−1} ≤ I. We recursively apply the procedure until we reach I=0 and we have identified all W of the "1" positions, n_{1}, n_{2}, ..., n_{W}.
[0168] Referring to FIG 4A, the dematcher 60 performs dematching algorithm 130.
[0169] At block 132, the algorithm variables i(x), n, w are declared.
i(x) = the integer index, initially set to zero
n = N−1, where N is the length of the symbol codeword x
w = W, where W is the fixed Hamming weight of the symbol codeword x (for constant composition matching)
[0170] The algorithm additionally uses x_{n}, which is the value of the binary symbol at position n in the codeword x, and uses C_{n,w}, which is the binomial coefficient from lookup Table 1.
[0171] At block 134, read the value x_{n} of the binary symbol at position n in the codeword x. If the value x_{n} of the binary symbol at position n in the codeword x is "1" then the method moves to block 136. If the value x_{n} of the binary symbol at position n in the codeword x is "0" then the method moves to block 138.
[0172] At block 136, the integer index i(x) is increased by the binomial coefficient C_{n,w} from Table 1 and the current remaining Hamming Weight w is decreased by 1. The method moves to block 138.
[0173] At block 138, the value n identifying a bit position in the codeword x is decremented by 1.
[0174] At block 140, if the last binary symbol of the codeword has been processed the method ends outputting the value i(x) 23. Otherwise the method iterates, returning to block 134.
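Blocks 132 to 140 can likewise be sketched in software (illustrative names; a library binomial function again stands in for the Table 1 lookup):

```python
from math import comb

def dematch(bits, W):
    """Enumerative encoding as in FIG 4A: recover the integer index i(x)
    from a codeword, scanning from position N-1 down to 0."""
    N = len(bits)
    i, w = 0, W
    for n in range(N - 1, -1, -1):
        if bits[N - 1 - n] == 1:           # bits[0] is x_{N-1}
            i += comb(n, w)                # block 136: add C(n, w), decrement w
            w -= 1
    return i

# Reverses the worked example of paragraph [0183]: codeword 00100101 -> index 11
print(dematch([0, 0, 1, 0, 0, 1, 0, 1], 3))  # 11
```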
[0175] FIG 4B illustrates another example of the method 60 illustrated in FIG 4A. In FIG 4A, the method 60 is applied in forward bit-order (from last to first codeword bit). In FIG 4B, the method 60 is applied in reverse bit-order (from first to last codeword bit). The method 60 illustrated in FIG 4B can, for example, be a preferred approach when using truncated arithmetic. The lookup table of C_{n,w} values can be untruncated or truncated (as described below). The differences between FIG 4A and FIG 4B are that, in FIG 4B, at block 132 the algorithm variables n, w are declared as value "0", at block 136 the current remaining Hamming Weight w is increased by 1, at block 138 the value n identifying a bit position in the codeword x is incremented by 1, and at block 140, if the first binary symbol of the codeword has been processed, the method ends outputting the value i(x) 23.
[0176] The method 130 comprises iterating:
if a value x_{n} of the binary symbol at position n in the codeword x is "1", increase the current index i(x) by C_{n,w}, and create a new Hamming Weight (w−1) by subtracting one from the current Hamming weight,
then
change the current bit position n to a new bit position (n−1).
[0177] Suppose a constant weight sequence x = {x_{N−1}, x_{N−2}, ..., x_{0}} has W ones in positions n_{1} > n_{2} > ... > n_{W}.
[0178] The lexicographic index i(x) of the sequence x is calculated as

i(x) = C_{n_{1},W} + C_{n_{2},W−1} + ... + C_{n_{W},1}

[0179] To encode the sequence x, the entries C_{n,w} of the binomial lookup table are used.
[0180] We can use this formula to build a lookup table to map integer indices i(x) to symbol codewords x 21 and vice versa.
[0181] Example of process at matcher 20 for creating codeword x 21 for transmission.
[0182] Let the binary bit sequence be: 1011 (I = 11), N=8, W=3.
[0183] C_{n_{1},3} = 10 returns n_{1} = 5. I = 11 − 10 = 1.
C_{n_{2},2} = 1 returns n_{2} = 2. I = 1 − 1 = 0.
C_{n_{3},1} = 0 returns n_{3} = 0.
x = 0 0 1 0 0 1 0 1 (ones at n_{1} = 5, n_{2} = 2, n_{3} = 0)
[0184] The equivalent process at the dematcher 60 after receiving codeword x 21.
Let x = 0 0 1 0 0 1 0 1, W=3, N=8.
i(x) = C_{5,3} + C_{2,2} + C_{0,1} = 10 + 1 + 0 = 11
dec2bin(11) to recover the binary bit sequence 10, namely: 1011
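The one-to-one correspondence of paragraph [0118] can be checked exhaustively for this example. The sketch below (illustrative only, not part of the disclosed apparatus) ranks all length-8, weight-3 codewords lexicographically and confirms that the sum of binomial terms C_{n_{j},W+1−j} reproduces each codeword's rank:

```python
from itertools import combinations
from math import comb

N, W = 8, 3

# All weight-W codewords of length N, in lexicographic order
# (most significant position N-1 written first).
codewords = sorted(
    tuple(1 if n in ones else 0 for n in range(N - 1, -1, -1))
    for ones in combinations(range(N), W)
)

for rank, x in enumerate(codewords):
    ones = [n for n in range(N - 1, -1, -1) if x[N - 1 - n] == 1]  # n1 > n2 > n3
    index = sum(comb(n, W - j) for j, n in enumerate(ones))        # C(n1,3)+C(n2,2)+C(n3,1)
    assert index == rank
print("bijection verified for", len(codewords), "codewords")
```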
[0185] The above-described algorithms 110, 130 are particularly suited for implementation in digital logic with a large degree of parallelism (as typically happens in very high rate optical applications).
[0186] FIG 5 illustrates an example of a matcher 20. The matcher 20 is implemented in hardware configured to perform the method 110. The hardware can comprise application specific integrated circuits (ASICs).
[0187] The matcher uses a current Hamming Weight (w) and a current index value (I) as state variables, and a current bit position (n) in the codeword (x) as a timebase running through each bit position in the codeword, e.g. from N−1 down to 0.
[0188] For a position n in a codeword x, the symbols at positions N−1, N−2, ..., N−n have been determined; the remaining symbols at positions N−n−1, N−n−2, ..., 1, 0 are yet to be determined.
[0189] The value w represents the Hamming Weight of the remaining symbols in the codeword x 21 that are yet to be determined.
[0190] The value I represents the index i(x) for the remaining symbols in the codeword x 21 that are yet to be determined.
[0191] Initially n=N−1, w=W and I=i(x). The value of i(x) is the value 23 determined from the bit sequence 10. These values are written to the respective registers 150, 152, 154 and are used to determine each bit x_{n} of a codeword x.
[0192] A first register 150 is used to store the current value of n. The value n is a pointer to a current bit position in the codeword. This value is decremented by 1 with each clock cycle from an initial value of N−1 to 0.
[0193] A second register 152 is used to store the current value of Hamming Weight (w). This is the Hamming Weight of the remaining symbols in the codeword from bit position n to bit position 0 that are yet to be determined. This value is decremented by 1 whenever a "1" is found in the codeword at the current bit position n determined by the first register 150.
[0194] A third register 154 is used to store the current index value I. This is the index i(x) for the remaining symbols in the codeword from bit position n to bit position 0 that are yet to be determined. This value is decremented by C_{n,w} whenever a "1" is found in the codeword at the current bit position n determined by the first register 150.
[0195] A lookup table of coefficient values C_{n,w} is stored in memory 156. There are values for at least the combinations of n = 0, 1, 2, ..., N−1 and w = 0, 1, ..., W. The memory 156 can be any suitable memory. It can for example be a RAM or ROM memory.
[0196] The lookup table of C_{n,w} values can be based on enumerative coding. For example, the values may be based on constant composition enumerative encoding as illustrated in Table 1. In other examples they may be based on variable composition enumerative coding (as described below). The lookup table of C_{n,w} values can be untruncated or truncated (as described below).
[0197] The value of n from the register 150 addresses a set {C_{n,w}} of multiple coefficients C_{n,w} that have the same n but different w. A particular one of the coefficients C_{n,w} in the set of multiple coefficients {C_{n,w}} is selected by the value of w provided to the multiplexer by the register 152. The selected one of the coefficients C_{n,w} is the coefficient that corresponds to the current value of the current bit position n from the first register 150 and the current value of the current Hamming Weight (w) provided by the second register 152.
[0198] For example, a complete row in the table can be selected by the current value of the current bit position n (provided from the first register 150) and be provided to the multiplexer 160. A column value w, within that row n, is selected by the value of the current Hamming Weight (w) provided by the second register 152 as a selection input to the multiplexer 160.
[0199] The output of the multiplexer 160 to the adder logic 162 is therefore the value C_{n,w}, where n is defined by the first register 150 and w is defined by the second register 152.
[0200] The adder logic 162 is configured to determine a condition based on I − C_{n,w} (e.g. I − C_{n,w} ≥ 0), where C_{n,w} is the coefficient provided from the memory 156 and I is the current index value provided by the third register 154. When the condition is satisfied, the adder logic is configured to identify a symbol for the codeword at the current bit position and cause updating of the second register 152 for w and the third register 154 for I. Updating the third register 154 comprises writing an updated index value I − C_{n,w}, based on the determined condition, to the third register 154.
[0201] The adder logic 162 is configured to determine I − C_{n,w}. If I < C_{n,w} then x_{n} is given the value "0" and there is no change to the registers 152, 154. If I ≥ C_{n,w} then x_{n} is given the value "1", the value I − C_{n,w} is read into the register 154 and the register 152 is updated by reducing its current value by 1.
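One way to picture the datapath of FIG 5 is a cycle-by-cycle software model. The names below follow the reference numerals above, but the model is only an illustrative sketch, not the disclosed circuit:

```python
from math import comb

def matcher_datapath(index, N, W):
    """Cycle-by-cycle model of FIG 5: registers 150 (n), 152 (w), 154 (I),
    lookup memory 156, multiplexer 160 and adder logic 162."""
    memory_156 = [[comb(n, w) for w in range(W + 1)] for n in range(N)]
    reg_n, reg_w, reg_I = N - 1, W, index   # registers 150, 152, 154
    codeword = []
    for _ in range(N):                      # one codeword bit per clock cycle
        row = memory_156[reg_n]             # set {C(n, w)} addressed by n (register 150)
        c = row[reg_w]                      # multiplexer 160 selects by w (register 152)
        interim = reg_I - c                 # adder logic 162 computes I - C(n, w)
        if interim >= 0:                    # condition I - C(n, w) >= 0
            codeword.append(1)
            reg_I = interim                 # write updated index into register 154
            reg_w -= 1                      # decrement Hamming Weight register 152
        else:
            codeword.append(0)              # no change to registers 152, 154
        reg_n -= 1                          # register 150 decrements every cycle
    return codeword

print("".join(map(str, matcher_datapath(11, 8, 3))))  # 00100101
```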
[0202] FIG 6 illustrates an example of a dematcher 60. The dematcher 60 is implemented in hardware configured to perform the method 130. The hardware can comprise application specific integrated circuits (ASICs).
[0203] The dematcher uses a current Hamming Weight (w) and a current index value (I) as state variables, and a current bit position (n) in the codeword (x) as a timebase running through each bit position in the codeword, e.g. from N−1 down to 0 (this corresponds to the example of method 60 illustrated in FIG 4A).
[0204] For a position n in a codeword x, the symbols at positions N1, N2...Nn have been processed to determine the current index, the remaining symbols at positions Nn1, Nn2...1, 0 are yet to be processed.
[0205] The value w represents the Hamming Weight of the remaining symbols in the codeword x 21 that are yet to be processed.
[0206] The value I represents the index i(x) for the remaining symbols in the codeword x 21 that are yet to be processed.
[0207] The algorithm variables I, n, w are declared. In this example the algorithm variables are initially declared as n=N-1, w=W and I=0. These values are written to the respective registers 150, 152, 154. The bits x_{n} of the codeword x are provided as inputs. The output is the ultimate value of I=i(x) in the register 154.
[0208] A first register 150 is used to store the current value of n. The value n is a pointer to a current bit position in the codeword. This value is changed by 1 each clock cycle. In this example, it is decremented by 1 with each clock cycle from an initial value of N-1 to 0.
[0209] A second register 152 is used to store the current value of Hamming Weight (w). This is the Hamming Weight of the remaining symbols in the codeword from bit position n to bit position 0 that are yet to be processed. This value is changed by 1. In this example, it is decremented by 1 whenever a "1" is found in the codeword at the current bit position n determined by the first register 150.
[0210] A third register 154 is used to store the current index value I. This is the index i(x) for the remaining symbols in the codeword that are yet to be processed. In this example, the remaining symbols in the codeword that are yet to be processed are from bit position n to bit position 0. This value is incremented by C_{n,w} whenever a "1" is found in the codeword at the current bit position n determined by the first register 150. While in this example the dematcher 60 performs the example of method 60 illustrated in FIG 4A, it can be simply adapted to perform the method of FIG 4B: the algorithm variables n, w are declared with value "0", the current remaining Hamming Weight w is increased by 1 rather than decreased by 1, and the value n identifying a bit position in the codeword x is incremented by 1 each clock cycle rather than decremented.
[0211] A table of coefficient values C_{n,w} is stored in memory 156. There are values for at least the combinations of n = 0, 1, 2 ... N-1 and w = 0, 1 ... W. The memory 156 can be any suitable memory. It can for example be a RAM or ROM memory.
[0212] The table of C_{n,w} values can be based on enumerative coding. For example they may be based on constant composition enumerative encoding as illustrated in Table 1. In other examples they may be based on variable composition enumerative coding (as described below). The table of C_{n,w} values based on enumerative coding can be untruncated or truncated (as described below).
[0213] The value of n from the register 150 addresses a set {C_{n,w}} of multiple coefficients C_{n,w} that have the same n but different w. A particular one of the coefficients C_{n,w} in the set of multiple coefficients {C_{n,w}} is selected by the value of w provided to the multiplexer 160 by the register 152. The selected one of the coefficients C_{n,w} is the coefficient that corresponds to the current value of the current bit position n from the first register 150 and the current value of the current Hamming Weight (w) provided by the second register 152.
[0214] For example, a complete row in the table can be selected by the current value of the current bit position n (provided from the first register 150) and be provided to the multiplexer 160. A column value w, within that row n, is selected by the value of the current Hamming Weight (w) provided by the second register 152 as a selection input to the multiplexer 160.
[0215] The output of the multiplexer 160 to the adder logic 162 is therefore the value C_{n,w}, where n is defined by the first register 150 and w is defined by the second register 152.
[0216] The adder logic 162 is configured to determine I + C_{n,w}, where C_{n,w} is the coefficient provided from the memory 156 and I is the current index value provided by the third register 154.
[0217] If x_{n} has the value "1", the sum I + C_{n,w} is read into the register 154 and the value of w in the register 152 is changed by 1, for example decremented by 1.
[0218] If x_{n} has the value "0", the sum I + C_{n,w} is not read into the register 154 and the value of w in the register 152 is not changed (e.g. not decremented); the values in the registers 152, 154 are unchanged.
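The dematcher update of paragraphs [0217]-[0218] can be sketched in the same way. This is a minimal sketch assuming constant-composition coefficients (C_{n,w} equal to the binomial coefficient); the function name cc_dematch is illustrative:

```python
from math import comb

def cc_dematch(bits, W):
    # Enumerative dematcher sketch (logic of FIG 6): the input bits are
    # ordered from position n = N-1 down to n = 0, matching the codeword x.
    N = len(bits)
    I, w = 0, W                      # third register (I), second register (w)
    for n in range(N - 1, -1, -1):
        if bits[N - 1 - n] == 1:     # a "1" at the current bit position n
            I += comb(n, w)          # adder writes I + C_{n,w} into register 154
            w -= 1                   # register 152 decremented
    return I
```

Applying this sketch to the codeword 01010010 produced by the matcher sketch above recovers the index 27, illustrating that the matcher and dematcher form an exact inverse pair when they share the same coefficient table.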
[0219] The same circuitry (registers 150, 152, 154; memory table 156; multiplexer 160 and adder logic 162) can be used for the matcher 20 and the dematcher 60. The components can be the same and operate in the same way.
[0220] In both the matcher 20 and the dematcher 60, the adder logic 162 is configured to: determine an interim value based on a coefficient provided from the memory and the current index value, and cause updating of the second and third registers comprising writing an updated index value based on the interim value to the third register. The circuitry can be configured to operate as a matcher 20 or a dematcher 60.
[0221] In the matcher 20 the selected coefficient C_{n,w} is subtracted (I - C_{n,w}) to create the interim value whereas in a dematcher 60 the selected coefficient C_{n,w} is added (I + C_{n,w}) to create the interim value. Also, whereas the matcher 20 receives as input the index value 23 (I=i(x)) and produces the codeword (x) 21, the dematcher 60 receives as input the codeword (x) 21 and produces the index 23.
[0222] In a matcher 20 the current index value in the third register 154 is decremented on output symbol determination whereas in a dematcher 60 the current index value in the third register 154 is incremented on input symbol identification.
[0223] For the matcher 20, the interim value I - C_{n,w} is used to determine a symbol-dependent condition I - C_{n,w} ≥ 0. When the condition is satisfied the adder logic 162 is configured to cause updating of the second and third registers 152, 154, comprising writing an updated index value (I := I - C_{n,w}) based on the determined condition to the third register 154, and to determine a symbol for the codeword at the current bit position. The updated index value is based on the current index value minus the coefficient provided from the memory. The condition is that the current index value minus the coefficient provided from the memory is greater than or equal to zero.
[0224] For the dematcher 60, an identification of a particular symbol at the current bit position is used to determine a symbol-dependent condition. When the condition is satisfied the adder logic 162 is configured to cause updating of the second and third registers 152, 154, comprising writing an updated index value I + C_{n,w} to the third register 154. The updated index value is dependent upon the determined condition because C_{n,w} is dependent upon the particular symbol. For example, in FIG 4A, when the particular symbol is "0" the second and third registers 152, 154 are not updated (the coefficient C_{n,w} is effectively zero valued); however, when the particular symbol is "1" the second register 152 has its value decremented by 1 and the third register 154 is updated using I + C_{n,w}, where C_{n,w} is provided from the memory 156. The updated index value I + C_{n,w} is based on the current index value plus a symbol-dependent coefficient dependent upon one or more coefficients provided from the memory.
[0225] The hardware architecture described above for the matcher 20 and dematcher 60 has a number of advantages.
[0226] For a definite set of parameters (N, W), the number Nb of input bits used to produce the index i(x) and the number N of symbols in the codeword x are fixed. E.g. for N=8, W=3 there are 8!/(5!3!) = 56 codewords maximum, so Nb = ⌊log_{2}(56)⌋ = 5 bits. Consequently, in 8 clock cycles, the matcher circuit 20 receives 5 input bits from data 10 and generates 8 output bits of codeword 21, one per clock cycle: this greatly simplifies the handling of input and output buffers, as the data stream is extremely regular. Given that the matcher circuit 20 generates one bit per clock cycle, there can be many identical matcher circuits 20 in parallel when the datastream 10 has a certain degree of parallelism (e.g. 128 or 256 bits wide).
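The rate calculation of paragraph [0226] can be checked directly; the variable names below are illustrative:

```python
from math import comb, floor, log2

N, W = 8, 3
num_codewords = comb(N, W)       # 8!/(5!3!) = 56 constant-composition codewords
Nb = floor(log2(num_codewords))  # whole index bits carried per codeword of length N
print(num_codewords, Nb)         # 56 5
```

So each 8-symbol codeword carries 5 input bits, for a shaping rate R = Nb/N = 5/8.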
[0227] The many parallel matcher circuits can share the timebase. They may for example share the same register 150.
[0228] The lookup table 156 can be shared, as its output is needed at the same time by all the parallel matcher circuits 20.
[0229] Changing the parameters (N, W) of the matcher is simply a matter of changing the initialization of the register 150 (for n) and the register 152 (for w) in the circuit 20.
[0230] The lookup table 156 for C_{n,w} is unique, and different N, W settings use different subsets of the table. E.g. if we want to run an N=5, W=2 matcher, we simply initialize the register 150 (for n) to 5 and the register 152 (for w) to 2. The encoding cycle will now last 5 clock cycles, but the rest remains unchanged.
Variable Composition
[0231] The above described enumerative coding can be used for indexing more generally constrained sequences. Let us consider, for instance, binary sequences of length N and weight W or W-1. It can be shown that we have only to replace the binomial coefficients C(n, w) with C_{n,w} = C(n, w) + C(n, w-1), where C(n, w) denotes the binomial coefficient. We could evaluate the tables of binomials for W and W-1, and sum them, creating a cumulative Pascal triangle in which each entry is the sum of the two corresponding binomial entries.
[0232] The binomial coefficients C_{n,w} can be found from extending the Cumulative Pascal triangle.
[0233] The table of binomial coefficients C_{n,w} for variable composition (de)matching is:
Table 3
C_{n,w} | w=1 | w=2 | w=3
n=0 | 1 | 1 | 1
n=1 | 2 | 2 | 2
n=2 | 3 | 4 | 4
n=3 | 4 | 7 | 8
n=4 | 5 | 11 | 15
n=5 | 6 | 16 | 26
n=6 |   | 22 | 42
n=7 |   |    | 64
[0234] An alternative approach is to observe that the recursion C_{n,w} = C_{n-1,w} + C_{n-1,w-1} still holds. Thus the table can be filled by recursion, with a different initialization.
[0235] The matcher can be further generalized to produce a sequence with any range of Hamming weights. For instance, if we accept any binary sequence of Hamming weight 0, 1, 2, ..., W we just need to initialize C_{n,1} = n + 1 and C_{0,w} = 1, and fill the table using the recursion C_{n,w} = C_{n-1,w} + C_{n-1,w-1}.
[0236] Example. Let N=8, W=3. The number of sequences we need to index is C(8,0) + C(8,1) + C(8,2) + C(8,3) = 1 + 8 + 28 + 56 = 93. The table of binomial coefficients, filled as described above, is given in Table 3.
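The table filling described in paragraphs [0234]-[0236] can be sketched as follows (the function name is illustrative); the resulting values reproduce Table 3:

```python
def variable_composition_table(N, W):
    # Cumulative Pascal triangle of Table 3: C[n][w] counts the length-n
    # binary sequences of Hamming weight at most w. Filled with the
    # recursion C[n][w] = C[n-1][w] + C[n-1][w-1], initialized with
    # C[n][1] = n + 1 and C[0][w] = 1.
    C = [[0] * (W + 1) for _ in range(N)]
    for w in range(1, W + 1):
        C[0][w] = 1
    for n in range(1, N):
        C[n][1] = n + 1
        for w in range(2, W + 1):
            C[n][w] = C[n - 1][w] + C[n - 1][w - 1]
    return C
```

For N=8, W=3 this yields, e.g., C_{5,3} = 26, C_{6,2} = 22 and C_{7,3} = 64, matching Table 3.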
[0237] Example of process at matcher 20 for creating codeword x 21 for transmission.
[0238] Let the binary bit sequence be: 11011, N=8, W=3, so I = (11011)_2 = 27.
[0239] Following the method described in FIG 3 and in relation to circuitry 20 of FIG 5:
C_{n1,3} = 26 returns n_1 = 5. I = 27 - 26 = 1. w=2
C_{n2,2} = 1 returns n_2 = 0. I = 1 - 1 = 0. w=1
Since there is no C_{n3,1} ≤ I = 0, no further "1" is placed.
x = 0 0 1 0 0 0 0 1 (ones at n_1 = 5, n_2 = 0)
[0240] It should be noted that the codeword has a Hamming Weight of 2 which satisfies the constraint of variable composition, namely a Hamming Weight less than or equal to W, which is three in this example.
[0241] The equivalent process at the dematcher 60 after receiving codeword x 21.
[0242] Let x = 0 0 1 0 0 0 0 1.
[0243] W=3, N=8
Following the method described in FIG 4A and in relation to circuitry 60 of FIG 6: I = C_{5,3} + C_{0,2} = 26 + 1 = 27; dec2bin(27) recovers the binary bit sequence 10, namely: 11011
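The variable-composition worked example above can be reproduced with a short sketch. The closed form used below for the coefficients (a sum of binomials, consistent with the cumulative Pascal triangle of Table 3) and the function names are illustrative:

```python
from math import comb

def vc_coeff(n, w):
    # Variable-composition coefficient of Table 3: the number of length-n
    # binary sequences with Hamming weight at most w.
    return sum(comb(n, k) for k in range(w + 1))

def vc_match(index, N, W):
    # Matcher: emit a "1" whenever I - C_{n,w} >= 0 (FIG 3 / FIG 5 logic).
    I, w, bits = index, W, []
    for n in range(N - 1, -1, -1):
        c = vc_coeff(n, w)
        if w > 0 and I >= c:
            bits.append(1)
            I -= c
            w -= 1
        else:
            bits.append(0)
    return bits

def vc_dematch(bits, W):
    # Dematcher: sum the same coefficients at the positions of the ones.
    N, I, w = len(bits), 0, W
    for n in range(N - 1, -1, -1):
        if bits[N - 1 - n] == 1:
            I += vc_coeff(n, w)
            w -= 1
    return I
```

With N=8, W=3 and index 27 the matcher sketch yields 00100001 (ones at n=5 and n=0, Hamming Weight 2 ≤ W), and the dematcher sketch recovers 27.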
[0244] The hardware matcher circuit 20 is exactly the same as in the previous example (FIG 5); only the memory table 156 needs to have different binomial coefficients C_{n,w}. The hardware dematcher circuit 60 is exactly the same as in the previous example (FIG 6); only the memory table 156 needs to have different binomial coefficients C_{n,w}. The matcher 20 and dematcher 60 pair use the same coefficients C_{n,w}.
[0245] The memory table 156 can be configured to be updatable so that the coefficients C_{n,w} can be updated.
[0246] Matching 110 using variable composition coefficients C_{n,w} achieves a smaller rate loss for any fixed block length, or the same rate loss with much smaller block lengths, with respect to constant composition coefficients C_{n,w}.
Truncation
[0247] The binomial coefficients C_{n,w} are stored in a lookup table. However, if N is large, large tables containing extremely large numbers are required. To save space, a smaller subset of integers is used by truncating the binomial coefficients C_{n,w}. The truncated binomial coefficients C_{n,w} are represented as integers using a fixed number of bits (b) of mantissa and a variable number of bits (a) of exponent; that is, we represent binary numbers of the form m × 2^a, where the mantissa m has b bits.
[0248] Let Ĉ_{n,w} be the truncated binomial coefficients. We can enumerate only Ĉ_{n-1,w} and Ĉ_{n-1,w-1} sequences having a zero or a one in position n-1, respectively. To be able to enumerate Ĉ_{n,w} sequences of length n we must have Ĉ_{n,w} ≤ Ĉ_{n-1,w} + Ĉ_{n-1,w-1}.
[0249] Besides, Ĉ_{n,w} is one of the integer values that we can represent. Therefore, for the truncated coefficients we obtain the recursion Ĉ_{n,w} = t(Ĉ_{n-1,w} + Ĉ_{n-1,w-1}), where t(c) is the function that truncates c to the nearest allowed value.
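A minimal sketch of the truncation function t(c), assuming rounding down to the nearest value with a b-bit mantissa (values below 2^b kept exact, as in the example of paragraph [0251]); the function name is illustrative:

```python
def truncate(c, b):
    # t(c): round c down to the nearest value representable with a b-bit
    # mantissa; integers below 2**b are represented exactly.
    if c < (1 << b):
        return c
    shift = c.bit_length() - b   # number of exponent bits to drop
    return (c >> shift) << shift # clear the low-order bits of the mantissa
```

For b=3 this gives t(35) = 32, t(15) = 14 and t(20) = 20, matching the truncated coefficients used below.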
[0250] The matcher 20 and the dematcher 60 use exact arithmetic, with operands taken from the lookup table of truncated binomials Ĉ_{n,w}. Since the arithmetic is exact, the dematcher 60 can sum the truncated binomials in any order. The matcher 20 starts subtracting the binomials in the last column of the lookup table. Exact arithmetic is considerably simplified by the short mantissa.
[0251] Example. Let N=8, W=3, b=3. The allowed values of the truncation function are any value up to 2^b, 8 in this example, and then truncation starts, allowing (1010)=10, (1100)=12, (1110)=14, (10000)=16, (10100)=20, (11000)=24, (11100)=28, (100000)=32 ... The table of truncated binomial coefficients is the same as in the previous example but for Ĉ_{6,2} = t(Ĉ_{5,2} + Ĉ_{5,1}) = t(10 + 5) = t(15) = t(1111)_2 = (1110)_2 = 14, Ĉ_{6,3} = t(Ĉ_{5,3} + Ĉ_{5,2}) = t(10 + 10) = t(20) = t(10100)_2 = (10100)_2 = 20, and Ĉ_{7,3} = t(Ĉ_{6,3} + Ĉ_{6,2}) = t(20 + 14) = t(34) = t(100010)_2 = (100000)_2 = 32.
[0252] Therefore Table 1 for constant composition
C_{n,w} | w=1 | w=2 | w=3
n=0 | 0 |    |
n=1 | 1 | 0  |
n=2 | 2 | 1  | 0
n=3 | 3 | 3  | 1
n=4 | 4 | 6  | 4
n=5 | 5 | 10 | 10
n=6 |   | 15 | 20
n=7 |   |    | 35
becomes
Table 4
Ĉ_{n,w} | w=1 | w=2 | w=3
n=0 | 0 |    |
n=1 | 1 | 0  |
n=2 | 2 | 1  | 0
n=3 | 3 | 3  | 1
n=4 | 4 | 6  | 4
n=5 | 5 | 10 | 10
n=6 |   | 14 | 20
n=7 |   |    | 32
[0253] The Table 3 for variable composition
C_{n,w} | w=1 | w=2 | w=3
n=0 | 1 | 1  | 1
n=1 | 2 | 2  | 2
n=2 | 3 | 4  | 4
n=3 | 4 | 7  | 8
n=4 | 5 | 11 | 15
n=5 | 6 | 16 | 26
n=6 |   | 22 | 42
n=7 |   |    | 64
becomes
Table 5
Ĉ_{n,w} | w=1 | w=2 | w=3
n=0 | 1 | 1  | 1
n=1 | 2 | 2  | 2
n=2 | 3 | 4  | 4
n=3 | 4 | 7  | 8
n=4 | 5 | 10 | 14
n=5 | 6 | 14 | 24
n=6 |   | 20 | 32
n=7 |   |    | 48
[0254] The maximum index, corresponding to the sequence 11100000, is 48 + 20 + 6 = 74. We can enumerate 75 sequences, instead of 93 (64 + 22 + 6 + 1, from Table 3).
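The truncated variable-composition table (Table 5) can be reproduced by applying the truncated recursion of paragraph [0249]; the function name below is illustrative and the initialization follows paragraph [0235]:

```python
def truncated_vc_table(N, W, b):
    # Table 5: variable-composition coefficients filled with the truncated
    # recursion C[n][w] = t(C[n-1][w] + C[n-1][w-1]).
    def t(c):
        if c < (1 << b):
            return c
        s = c.bit_length() - b
        return (c >> s) << s
    C = [[0] * (W + 1) for _ in range(N)]
    for w in range(1, W + 1):
        C[0][w] = 1
    for n in range(1, N):
        C[n][1] = t(n + 1)
        for w in range(2, W + 1):
            C[n][w] = t(C[n - 1][w] + C[n - 1][w - 1])
    return C
```

For N=8, W=3, b=3 this reproduces Ĉ_{7,3} = 48, Ĉ_{6,2} = 20 and Ĉ_{5,1} = 6, so the maximum index is 48 + 20 + 6 = 74 as stated in paragraph [0254].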
[0255] Pascal triangle with b=2 digits mantissa:
[0256] Cumulative Pascal triangle with 2 digits mantissa:
[0257] Encoding (dematcher)
Let x = 1 1 1 0 0 0 0 0.
[0258] Decoding (matcher)
Let i(x)=74.
[0259] Example of process at matcher 20 for creating codeword x 21 for transmission.
[0260] Let the binary bit sequence be: 1001010, N=8, W=3, so I = (1001010)_2 = 74.
[0261] Following the method described in FIG 3 and in relation to circuitry 20 of FIG 5 but using the coefficients Ĉ_{n,w} of Table 5:
Ĉ_{n1,3} = 48 returns n_1 = 7. I = 74 - 48 = 26. w=2
Ĉ_{n2,2} = 20 returns n_2 = 6. I = 26 - 20 = 6. w=1
Ĉ_{n3,1} = 6 returns n_3 = 5. I = 6 - 6 = 0.
x = 1 1 1 0 0 0 0 0 (ones at n_1 = 7, n_2 = 6, n_3 = 5)
[0262] The equivalent process at the dematcher 60 after receiving codeword x 21.
Let x = 1 1 1 0 0 0 0 0, W=3, N=8.
Following the method described in FIG 4A and in relation to circuitry 60 of FIG 6 using Ĉ_{n,w} of Table 5: I = 48 + 20 + 6 = 74; dec2bin(74) recovers the binary bit sequence 10, namely: 1001010
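The truncated worked example can be checked end to end with a short sketch; the names (t, table, match, dematch) are illustrative, and the table construction assumes the truncated recursion and initialization described above:

```python
B = 3  # mantissa bits used for truncation

def t(c):
    # Truncate c down to the nearest value with a B-bit mantissa.
    if c < (1 << B):
        return c
    s = c.bit_length() - B
    return (c >> s) << s

def table(N, W):
    # Truncated variable-composition coefficients (Table 5).
    C = [[0] * (W + 1) for _ in range(N)]
    for w in range(1, W + 1):
        C[0][w] = 1
    for n in range(1, N):
        C[n][1] = t(n + 1)
        for w in range(2, W + 1):
            C[n][w] = t(C[n - 1][w] + C[n - 1][w - 1])
    return C

def match(I, N, W):
    # Matcher: subtract truncated coefficients while the interim value
    # I - C[n][w] is non-negative.
    C, w, bits = table(N, W), W, []
    for n in range(N - 1, -1, -1):
        if w > 0 and I >= C[n][w]:
            bits.append(1)
            I -= C[n][w]
            w -= 1
        else:
            bits.append(0)
    return bits

def dematch(bits, W):
    # Dematcher: sum the same truncated coefficients at the ones' positions.
    N = len(bits)
    C, I, w = table(N, W), 0, W
    for n in range(N - 1, -1, -1):
        if bits[N - 1 - n] == 1:
            I += C[n][w]
            w -= 1
    return I
```

For index 74 the matcher sketch produces 11100000 and the dematcher sketch recovers 74, matching paragraphs [0259]-[0262].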
[0263] The same hardware implementation as for the previous embodiments (FIGS 3 & 5 and 4 & 6) can be used. Different coefficients can be loaded into the memory table 156.
[0264] FIG 7 illustrates a modified matcher 20 that operates in a manner similar to that illustrated in FIG 5. In this example, the memory table 156 is split into a memory table 156A and a memory table 156B.
[0265] The lookup table 156A, stored in a memory, is for the mantissa of the truncated coefficient. The lookup table 156B, stored in the same or different memory is for the exponent of the truncated coefficient.
[0266] The lookup table 156A and the lookup table 156B together represent the table of truncated coefficients.
[0267] The memory table 156A and the memory table 156B are addressed simultaneously by the value n from the register 150. They provide the truncated coefficients for the provided n, that is, they provide a row from the table of coefficients. The memory table 156A provides the mantissa of each coefficient in row n and the memory table 156B provides the exponent of each coefficient in row n.
[0268] The adder logic 162 is not sized for the full width (size) of a coefficient; it is sized for just the mantissa.
[0269] The mantissa selected by w from register 152 using multiplexer 160A is subtracted from the relevant Most Significant Bits of I (which are, when the algorithm starts, the truncated expression of I), and combined at multiplexer 164 with the remaining Least Significant Bits, which are unchanged. This operation correctly generates x_{n}, and is bit exact for the full I width (in case I is being updated). This reduces the adder size, which in turn enables high speed operations, and decouples the size of the adder logic 162 (depending on the mantissa) from the size of I (which can be huge in case of very long codewords).
[0270] Each clock cycle n is decremented, and the current I value has to be compared with the next line of the C_{n,w} table, where possibly the exponents in the truncated coefficients are changing. This is taken into account by the (left) shift block 166, which is (left) shifting I in order to have the same exponent representation at the ingress to the adder logic 162.
[0271] In particular the Exponent table 156B stores the Delta Exponent going from n to n-1; as this can depend also on whether w is kept or decremented (in turn depending on x_{n}), two Delta Exponent tables are stored, one for constant w and the other for decreasing w. The selection is further done via x_{n}, as indicated in the figure.
[0272] A similar arrangement can be used for a dematcher circuitry 60. The differences between the matcher circuit 20 using a split coefficient table 156A, 156B as illustrated in FIG 7 and a dematcher circuit 60 using a split coefficient table 156A, 156B (not illustrated) are equivalent to the differences between the matcher circuit 20 using a coefficient table 156 as illustrated in FIG 5 and a dematcher circuit 60 using a coefficient table 156 as illustrated in FIG 6. The process at the dematcher is executed in the reverse order with respect to the matcher 20; that is, it starts from the last x_{n} transmitted, and not from the first one.
Multilevel distribution matching
Multinomial Matcher
[0273] FIG 8 illustrates a matcher 20 performing a matching algorithm 110 as previously described with reference to FIG 3. In the example of FIG 3, the alphabet of symbols used for the codeword x 21 is a binary alphabet (M=2). In the example of FIG 8, the alphabet of symbols used for the codeword x 21 is an alphabet M=4.
[0274] In this example, there is a Hamming weight W_{j} in respect of each symbol S_{j}. The Hamming weight W_{j} is the number of occurrences of S_{j} in the codeword x.
[0275] The value w is replaced by w_{j}, which is the number of occurrences of the symbol S_{j} in the remainder of the codeword x. The initialization at block 132 defines a starting value for each w_{j} as W_{j}.
[0276] The value C_{n,w} is replaced by a symbol-dependent coefficient C_{n,wj}, where j is the index representing symbol S_{j}. The block 114 comprises a search through all j to find the symbol whose coefficient satisfies the symbol-dependent condition.
[0277] The block 118 is specific to a particular symbol S_{j}. It identifies the position n as symbol S_{j} and updates the Hamming Weight w_{j} for symbol S_{j}.
[0278] The method can enumeratively decode the index value I_{n} each clock cycle (each iteration) to obtain with each clock cycle (iteration) a symbol S_{n} of the codeword 21 and conditionally, in dependence upon the obtained symbol S_{n}, obtain 118 a new index value I_{n-1} for enumerative decoding at the next clock cycle (next iteration) to obtain the next symbol (n-1) of the codeword 21. The new index value I_{n-1} is conditionally dependent upon the symbol-dependent coefficient, which is conditionally dependent upon the obtained symbol S_{j}.
[0279] In the example of FIG 3, the alphabet of symbols used for the codeword x 21 is a binary alphabet (M=2). If the symbol is logic zero, then the index value is unchanged.
[0280] FIG 9 illustrates a dematcher 60 performing a dematching algorithm 130 as previously described with reference to FIG 4A (or alternatively FIG 4B). In the example of FIG 4A, the alphabet of symbols used for the codeword x 21 is a binary alphabet (M=2). In the example of FIG 9, the alphabet of symbols used for the codeword x 21 is an alphabet M=4.
[0281] In this example, there is a Hamming weight W_{j} in respect of each symbol S_{j}. The Hamming weight W_{j} is the number of occurrences of S_{j} in the codeword x.
[0282] The value w is replaced by w_{j}, which is the number of occurrences of the symbol S_{j} in the remainder of the codeword x. The initialization at block 132 defines a starting value for each w_{j} as W_{j}.
[0283] The block 134 is specific to a particular symbol S_{j}. The block 134 comprises a search through all j.
[0284] The block 134 identifies the symbol S_{j} at position n in the codeword x and, at block 136, updates the Hamming Weight w_{j} for symbol S_{j}. In block 136 the value C_{n,w} is replaced by the symbol-dependent coefficient C_{n,wj}, where j is the index representing symbol S_{j}.
[0285] The EC matcher 20 / dematcher 60 can be directly generalized to any alphabet size M. For a quaternary alphabet (M=4), for instance, the binomial coefficients C_{n,w} of the CCEC should be replaced by the multinomial coefficients C_{n,w1,w2,w3} = n! / (w1! w2! w3! w4!), where w4 = n - w1 - w2 - w3, and the recursion becomes: C_{n,w1,w2,w3} = C_{n-1,w1,w2,w3} + C_{n-1,w1-1,w2,w3} + C_{n-1,w1,w2-1,w3} + C_{n-1,w1,w2,w3-1}.
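The multinomial coefficient and its recursion can be sketched and checked numerically. This assumes the standard multinomial coefficient for a quaternary alphabet, with the fourth symbol count implied; the function name is illustrative:

```python
from math import factorial

def multinom(n, w1, w2, w3):
    # Multinomial coefficient for a quaternary alphabet (M=4); the count
    # of the fourth symbol is implied: w4 = n - w1 - w2 - w3.
    w4 = n - w1 - w2 - w3
    if min(w1, w2, w3, w4) < 0:
        return 0
    return factorial(n) // (factorial(w1) * factorial(w2)
                            * factorial(w3) * factorial(w4))
```

The recursion expresses that the symbol at the top position may be any of the four symbols: each term on the right removes one position and, for three of the terms, one occurrence of the corresponding symbol count.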
[0287] The multinomial algorithm of FIG 8 reduces to the binomial algorithm of FIG 3 when j=1 (the alphabet is M = j+1 = 2). The multinomial algorithm of FIG 9 reduces to the binomial algorithm of FIG 4A when j=1 (the alphabet is M = j+1 = 2).
[0288] The matcher 20 and dematcher 60 pair can be used as a distribution matcher 20 and distribution dematcher 60 pair. A distribution matcher 20 produces symbol codewords 21 with a controlled probability distribution. For example, in probabilistic amplitude shaping, symbols of lower amplitude can be produced with a higher probability than symbols of a higher amplitude.
[0289] The probability of occurrence of the symbols S_{j} is controlled by choosing W_{j}, which determines the maximum number of occurrences of symbol type S_{j} in the codeword 21 produced by the matcher 20. Either fixed or variable composition coefficients can be used in truncated or untruncated form.
[0290] Probabilistic amplitude shaping can be achieved by assigning a lower symbol count W_{j} to symbols S_{j} that have larger amplitude and a higher symbol count W_{j} to symbols S_{j} that have lower amplitude.
[0291] A larger alphabet M of symbols is used for the codeword x 21. This allows symbols S_{j} of different amplitude (different level) to be used to create codewords x 21.
[0292] The probability of occurrence of the symbols S_{j} is controlled by choosing W_{j}, which determines the maximum number of occurrences of symbol type S_{j} in the codeword 21 produced by the matcher 20. Variable composition coefficients can be used in truncated or untruncated form.
[0293] Probabilistic amplitude shaping can be achieved by assigning a lower symbol count W_{j} to symbols S_{j} that have larger amplitude and a higher symbol count W_{j} to symbols S_{j} that have lower amplitude.
Multiple binomial Matchers
[0294] The following examples discuss multilevel distribution matchers 220. A larger alphabet M of symbols is used for the codeword x 21. This allows symbols S_{j} of different amplitude (different level) to be used to create codewords x 21.
[0295] The multilevel distribution matcher 220 can control the probability p_{j} of occurrence of the symbols S_{j} in the codeword. The multilevel distribution matcher 220 is configured to generate symbols with independent statistics.
[0296] The probabilities of symbol occurrence depend on the choice of the control parameters for enumerative coding such as, for example, values of W and values for the shaping rate (R=Nb/N) which determines the number of information bits Nb per codeword length N. The Hamming Weight of the codeword can be a programmable parameter. For example, the codeword can have a symbol composition that can vary with an index value up to a maximum value, wherein the maximum value is a programmable parameter. The length Nb can be a programmable parameter. The length N can be a programmable parameter.
[0297] Probabilistic amplitude shaping can be achieved by assigning lower symbol count W
_{j} for symbols S
_{j} that have larger amplitude and by assigning higher symbol count W
_{j} for symbols
_{sj} that have lower amplitude.
[0298] The multilevel distribution matchers 220 in the following example comprise multiple binary distribution matchers 20.
[0299] For example, the following examples of multilevel distribution matchers 220 are discussed:
a binary tree of M-1 binary distribution matchers (FIGS 10, 11);
M-1 binary distribution matchers comprising multiple parallel generator binary distribution matchers and one or more selector binary distribution matchers (FIGS 12, 14);
M-1 parallel generator binary distribution matchers (FIG 17); and
log_{2}M parallel generator binary distribution matchers (FIG 19).
[0300] In these examples, for each one of multiple contiguous portions of a data block, multilevel distribution matcher 220 uses a binary matcher to convert each portion of the data block to a binary bit stream to create multiple binary bit streams while controlling a probability of occurrence of logic ones within each of the multiple binary bit streams; and uses the multiple binary bit streams to control a probability of occurrence of multiple multilevel symbols within the codeword. In some but not necessarily all examples, a probability of occurrence of high amplitude symbols is controlled to be less than a probability of occurrence of low amplitude symbols.
[0301] As illustrated in the examples of FIGS 10 and 11, a binary tree of binary distribution matchers 20 can be configured as a codeword trellis. Each node in the trellis is represented by a binary distribution matcher. The combination of binary distribution matchers arranged as a binary tree forms a multilevel distribution matcher that can create a codeword that uses an alphabet M.
[0302] A codeword 21 that uses an alphabet of M symbols and is produced by a multilevel distribution matcher 220 can be represented as a bifurcating tree where each node except the end nodes is connected downstream to two child nodes and each node except the root node is connected upstream to one parent node. The level q in the binary tree has M = 2^q nodes in that level and 2^0 + 2^1 + 2^2 + ... + 2^(q-1) = M - 1 nodes in the previous upstream levels. Thus M-1 binary distribution matchers 20 arranged as a binary tree can create a codeword that uses an alphabet M.
[0303] The equivalent dematcher 60 can have the same architecture: M-1 binary distribution matchers 20 arranged as a binary tree. Each node except the start nodes is connected upstream to two child nodes and each node except the root node is connected upstream to one parent node. The level q in the binary tree has M = 2^q nodes in that level and 2^0 + 2^1 + 2^2 + ... + 2^(q-1) = M - 1 nodes in the following levels. Thus M-1 binary distribution matchers 20 arranged as a binary tree can process a codeword that uses an alphabet M.
[0304] FIG 10 schematically illustrates a multilevel distribution matcher 220 configured as binary tree 200 of binary distribution matchers 20A, 20B, 20C. In this example, M=4 and there are three binary distribution matchers in two levels.
[0305] The binary distribution matcher 20C at the root produces a "1" with probability pc. The binary distribution matcher 20C can be a matcher 20 as previously described where M=2. The probability can, for example, be controlled by selecting the W used for constant composition binary enumerative coding, as previously described. The first bit of the symbol produced is therefore "1" with probability pc and "0" with probability (1 - pc).
[0306] One binary distribution matcher 20A at the next level produces a "1" with probability pa. The binary distribution matcher 20A can be a matcher 20 as previously described where M=2. The probability can, for example, be controlled by selecting the W used for constant or variable composition binary enumerative coding in the binary matcher 20A, as previously described. The second bit of the symbol produced is therefore "1" with probability pa and "0" with probability (1 - pa). For this distribution matcher 20A, the first bit of the symbol produced is "1" with probability pc. The probability of symbol 11 (S1) being output is therefore p1 = pc*pa. The probability of symbol 10 (S2) being output is therefore p2 = pc*(1 - pa). One binary distribution matcher 20B at the next level (same as 20A) produces a "1" with probability pb. The binary distribution matcher 20B can be a matcher 20 as previously described where M=2. The probability can, for example, be controlled by selecting the W used for constant or variable composition binary enumerative coding in the distribution matcher 20B, as previously described. The second bit of the symbol produced is therefore "1" with probability pb and "0" with probability (1 - pb). For this distribution matcher 20B, the first bit of the symbol produced is "0" with probability (1 - pc). The probability of symbol 01 (S3) being output is therefore p3 = (1 - pc)*pb. The probability of symbol 00 (S4) being output is therefore p4 = (1 - pc)*(1 - pb).
[0307] The 4 symbol probabilities p1, p2, p3, p4 are arbitrary and independent. The probability of the symbols S_{j} is controlled by choosing W for each of the binary distribution matchers 20A, 20B, 20C.
[0308] pa, pb, pc are the bit-level probabilities generated by binary distribution matchers 20A, 20B, 20C working at 3 different rates Ra, Rb, Rc (determined by three different Hamming Weights W). The binary distribution matcher 20C working with bit probability pc can use Constant Composition. The binary distribution matchers 20A, 20B working with bit probabilities pa and pb can be Constant or Variable Composition.
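The symbol probabilities of the M=4 binary tree of FIG 10 can be computed from the bit probabilities. In the tree, 20C supplies the first bit, 20A supplies the second bit when the first bit is "1", and 20B supplies it when the first bit is "0". The numeric values of pa, pb, pc below are illustrative placeholders; in the described circuits they are set via the Hamming Weights W:

```python
# Illustrative bit-level probabilities (design parameters, not fixed values).
pa, pb, pc = 0.8, 0.6, 0.7

p1 = pc * pa                # symbol 11 (S1): first bit 1, second bit 1
p2 = pc * (1 - pa)          # symbol 10 (S2): first bit 1, second bit 0
p3 = (1 - pc) * pb          # symbol 01 (S3): first bit 0, second bit 1
p4 = (1 - pc) * (1 - pb)    # symbol 00 (S4): first bit 0, second bit 0
```

Whatever values of pa, pb, pc are chosen, the four symbol probabilities always sum to 1, and any target distribution over the four symbols can be reached by a suitable choice of the three bit probabilities.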
[0309] FIG 11 schematically illustrates a multilevel distribution matcher 220 configured as a binary tree 200 of binary distribution matchers 20. In this example, M=8 and there are 7 binary distribution matchers 20 in three levels.
[0310] The binary tree 200 represents a trellis. The route through the trellis determines the output symbol S. The bitwise probability of each bit in the symbol is determined by a different binary distribution matcher 20. Consequently, each route through the trellis occurs with a distinct controllable probability formed by the product of each bitwise probability for each step in the route.
[0311] The 8 symbol probabilities are arbitrary and independent. pa, pb, pc, pd, pe, pf, pg are the bit-level probabilities generated by DMs working at 7 different rates. The binary distribution matchers 20 working with bit probabilities pa, pb, pc (the internal nodes) use Constant Composition binary enumerative coding as previously described. The binary distribution matchers 20 working with bit probabilities pd, pe, pf, pg (the last level of binary distribution matchers 20) use Constant or Variable Composition binary enumerative coding as previously described.
[0312] FIGS 12 and 14 illustrate an example of a multilevel binary distribution matcher 220 to generate sequences with 4 level symbols (M=4). A first set 20A, 20B of the multiple M-1=3 binary matchers 20 is configured to generate symbols 21A, 21B for inclusion in the codeword 21 and a second set 20C of the multiple M-1=3 binary matchers 20 is configured to select the order of the generated symbols 21A, 21B in the codeword 21. The generators can be variable composition or constant composition. The selector(s) are constant composition. The symbol mapper 203 combines an ordered symbol sequence 21AB, selected from the generated symbols 21A, 21B, with symbols 21C from the second set 20C of the multiple M-1=3 binary matchers 20 to generate the M=4 level symbol codeword 21 with the desired statistics.
[0313] In this example, the binary distribution matcher 20C selects whether the distribution matcher 20A or the distribution matcher 20B provides a bit of the symbol.
[0314] In this example, the binary distribution matcher 20C selects whether the distribution matcher 20A or the distribution matcher 20B provides a bit of the symbol for each of L bits. The symbol mapper 203 interleaves a binary symbol from the sequence 21AB, selected from the generated symbols 21A, 21B, with a binary symbol 21C from the output of the distribution matcher 20C to generate the codeword 21 with M=4 level symbols.
[0315] In the example of Fig 12, the binary distribution matcher 20A puts forward pa*L candidates for selection. The binary distribution matcher 20B puts forward pb*L candidates for selection. The binary distribution matcher 20C selects the distribution matcher 20A to provide the bit of the symbol with a probability pc and selects the distribution matcher 20B to provide the bit of the symbol with a probability (1-pc). The symbol probabilities (S1, S2, S3, S4) are engineered by combining the bit probabilities of the binary tree of binary distribution matchers (pa, pb, pc) according to the equations in FIG 10.
[0316] The bit probabilities depend on the choice of the values of W for each binary distribution matcher 20A, 20B, 20C and on the shaping rates of the binary distribution matchers 20A, 20B, 20C. The shaper rate R=Nb/N determines the number of information bits Nb per codeword length N.
[0317] The values of W are the Hamming Weights used by the binary distribution matchers 20A, 20B, 20C and can be controlled. The values of the shaper rates used by the binary distribution matchers 20A, 20B, 20C can be controlled.
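As an illustrative, non-limiting sketch (names assumed; the bound is the standard combinatorial one for constant composition enumerative coding, not stated verbatim in this passage): a length-N codeword of Hamming Weight W has C(N, W) possible values, so the number of information bits Nb per codeword is at most floor(log2(C(N, W))), which in turn bounds the shaper rate R=Nb/N for a given bit probability of ones of roughly W/N:

```python
from math import comb

def max_info_bits(N: int, W: int) -> int:
    """Largest Nb such that 2**Nb distinct length-N, weight-W codewords
    exist, i.e. Nb = floor(log2(C(N, W)))."""
    return comb(N, W).bit_length() - 1

# FIG 18-style numbers (N=512, W=22), used purely as an illustration
N, W = 512, 22
Nb = max_info_bits(N, W)   # information bits per codeword
R = Nb / N                 # shaper rate R = Nb/N
p_one = W / N              # approximate probability of a "1" bit
```

Choosing W therefore fixes both the achievable rate and the bit statistics together, which is why W and the shaper rate are the two controlled parameters.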
[0318] In the binary tree architecture, the binary distribution matcher 20C which acts as a selector is constant composition.
[0319] A codeword 21A of length pa*L is produced by the binary distribution matcher 20A according to the enumerative encoding algorithm using appropriate values for N, Nb and W. The values N, Nb and W control the bitwise probability pa of producing a "1" bit within the codeword.
[0320] A codeword 21B of length pb*L is produced by the binary distribution matcher 20B according to the enumerative encoding algorithm using appropriate values for N, Nb and W. The values N, Nb and W control the bitwise probability pb of producing a "1" bit within the codeword.
[0321] A codeword 21C of length L is produced by the binary distribution matcher 20C according to the enumerative encoding algorithm using appropriate values for N, Nb and W. The values N, Nb and W control the bitwise probability pc of producing a "1" bit within the codeword.
[0322] The sequences of bits at the binary-tree architecture output 21 are generated by selecting and reordering bits from the codewords 21A, 21B generated by the binary distribution matchers 20A, 20B using the bit values of the codeword 21C generated by the binary distribution matcher 20C. The reordered sequence of symbols 21AB is interleaved with the symbols 21C.
This is illustrated in FIG 15. In FIG 15, R1(j) is the j^{th} bit of the binary-tree output codeword 21A which is generated by the binary distribution matcher 20A working at rate Ra with bit probability of ones pa.
[0324] R2(k) is the k^{th} bit of the binary-tree output codeword 21B which is generated by the binary distribution matcher 20B working at rate Rb with bit probability of ones pb. R3(i) is the i^{th} bit of the binary-tree output codeword 21C which is generated by the binary distribution matcher 20C working at rate Rc with bit probability of ones pc. RO(i) is the i^{th} bit of the codeword 21AB which is output.
[0325] The following pseudo code produces the codeword 21AB from the codewords 21A, 21B, 21C:
Let i=j=k=1
Repeat while i ≤ L
{If R3(i) = "1", then
RO(i) = R1(j)
j++
Else
RO(i) = R2(k)
k++
Endif
i++}
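The selection loop above can be sketched as a runnable function (a non-limiting illustration; the name merge_codewords is assumed, and the codewords 21A, 21B, 21C are represented as bit strings):

```python
def merge_codewords(r1: str, r2: str, r3: str) -> str:
    """Build the reordered codeword 21AB (RO) from codewords 21A (R1),
    21B (R2) and the selector codeword 21C (R3).

    For each position i, a "1" in R3 takes the next unused bit of R1,
    while a "0" takes the next unused bit of R2.
    """
    j = k = 0          # next unread positions in R1 and R2
    out = []
    for bit in r3:     # i runs over the length L of the selector codeword
        if bit == "1":
            out.append(r1[j])
            j += 1
        else:
            out.append(r2[k])
            k += 1
    return "".join(out)
```

Because R3 is constant composition with Hamming Weight W, exactly W bits of the output come from R1 and L-W bits from R2, so R1 and R2 only need lengths W and L-W respectively.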
[0326] In the example of FIG 15, the i^{th} codeword 21AB at the memory output is 1100000111, where the underlined bits (shown underlined in FIG 15) are from the codeword 21A. Similarly, the codeword 21 at the symbol mapper output is [1 1 0 1 1 0 1 0 0 0 0 0 0 0 1 1 0 1 0 1], where the underlined bits are from the codeword 21AB and the other bits are from the codeword 21C.
[0327] Referring back to FIG 12, in one but not necessarily all implementations, a memory 202 is used to store the codewords 21A of size pa*L symbols produced by the binary distribution matcher 20A and the codewords 21B of size pb*L symbols produced by the binary distribution matcher 20B.
[0328] The distribution matchers 20A, 20B fill the memory 202 with L codewords 21A of length pa*L and L codewords 21B of length pb*L, and L codewords 21 are produced.
[0329] L parallel distribution matchers 20A can be used to fill the memory 202 with L codewords 21A of length pa*L. L parallel distribution matchers 20B can be used to fill the memory 202 with L codewords 21B of length pb*L.
[0330] The parallel distribution matchers 20A, 20B can share the same coefficient table 156; 156A, 156B.
[0331] Alternatively, W=pc*L parallel distribution matchers 20A can be used to fill the memory 202 with W codewords 21A of length L. L-W=(1-pc)*L parallel distribution matchers 20B can be used to fill the memory 202 with L-W codewords 21B of length L. This generates L^2 output bits (L codewords of length L), of which a pool A of L^2*pc=W*L bits have a probability pa, while the remaining pool B of L^2*(1-pc)=(L-W)*L bits have a probability pb. The pool A 210A has W rows each of length L. The pool B 210B has (L-W) rows each of length L. The combination 210 of pool A and pool B, illustrated in FIG 13, comprises W rows each of length L (pool A 210A) and (L-W) rows each of length L (pool B 210B). Each column of the combined pool 210 comprises W bits from pool A and (L-W) bits from pool B. Now we can (via L "C type" matchers 20C) select whether to transmit, for each of the L bits of the output codeword 21, from the pool A or from the pool B.
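A non-limiting sketch of the combined pool layout of FIG 13 (names assumed; random bits stand in for the enumeratively coded codewords purely to illustrate the statistics, since the real rows are produced by the matchers 20A, 20B):

```python
import random

def build_pool(L: int, W: int, pa: float, pb: float, rng=random.random):
    """Combined pool 210: W rows of pool A (bit probability pa) stacked on
    L-W rows of pool B (bit probability pb), each row of length L."""
    pool_a = [[1 if rng() < pa else 0 for _ in range(L)] for _ in range(W)]
    pool_b = [[1 if rng() < pb else 0 for _ in range(L)] for _ in range(L - W)]
    return pool_a + pool_b   # L x L bits in total

pool = build_pool(L=8, W=3, pa=0.7, pb=0.2)
# each column holds W bits from pool A and L-W bits from pool B
column0 = [row[0] for row in pool]
```

Each of the L columns then supplies one output codeword, with a weight-W selector codeword deciding, bit by bit, whether the next bit is drawn from the pool A rows or the pool B rows of that column.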
[0332] This is illustrated in FIG 15. In FIG 15, R1(j) is the j^{th} bit of the i^{th} column of pool 210A. R2(k) is the k^{th} bit of the i^{th} column of the pool 210B. R3(i) is the i^{th} bit of the binary-tree output codeword 21C which is generated by the binary distribution matcher 20C. RO(i) is the i^{th} bit of the codeword 21AB which is output.
[0333] As illustrated in FIG 12, the output 21C of the binary distribution matcher 20C and the L relevant selected bits 21AB can be used together to produce output symbols 21, which then have the desired occurrence probabilities.
[0334] Each codeword 21 at the symbol mapper 203 output is composed of a sequence of two-bit symbols (the 4 level case of FIG 12). The first (least significant) bit of each 2-bit symbol is taken from the reordered sequence 21AB, while the second bit of each 2-bit symbol is taken from the output 21C of the binary distribution matcher 20C. Interleaving is done by the symbol mapper 203. Thus, the codewords 21 are composed of all bits from 21A, 21B and 21C.
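The interleaving performed by the symbol mapper 203 can be sketched as follows (an illustrative non-limiting example; the function name is assumed, and the selector bits 1011000100 are simply read off the FIG 15 example codeword 21, with the per-pair bit order taken from that listing):

```python
def interleave(seq_ab: str, seq_c: str) -> str:
    """Interleave the reordered sequence 21AB with the selector output 21C
    so that each 2-bit symbol carries one bit from each stream (M=4)."""
    return "".join(c + ab for ab, c in zip(seq_ab, seq_c))

# reproduces the FIG 15 example: 21AB = 1100000111
codeword_21 = interleave("1100000111", "1011000100")
```

Every second bit of the result reproduces the sequence 21AB, while the remaining bits carry the codeword 21C, matching the listed codeword 21.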
[0335] In the example illustrated in FIG 14, the distribution matcher 20 is similar to the distribution matcher 20 illustrated in FIG 12. However, in this case a single binary distribution matcher 20* is used to operate as binary distribution matcher 20A and binary distribution matcher 20B.
[0336] The binary distribution matcher 20* generates W*L bits with bit probability pa and then switches to generate (L-W)*L bits with bit probability pb. As before, L^2 bits are stored in a memory 202 as a combined pool 210.
[0337] The binary distribution matcher 20C (constant composition) generates L codewords with bit probability pc, with Hamming Weight W and length L, to reorder the bits in each column of the combined pool 210. The L reordered columns are the L output codewords 21AB from the memory 202. The sequence of symbols 21AB is interleaved with the symbols 21C by the symbol mapper 203 to produce codewords 21.
[0338] There is also a second embodiment or mode of operation for the circuit of FIG 14, which is preferable because it requires a smaller size of memory 202:
 1) The Binary DM 20* generates W bits with probability pa and then switches to generate (L-W) bits at pb.
 2) L bits are stored in a memory array.
 3) The binary distribution matcher 20C (constant composition) generates codewords 21C with bit probability pc with Hamming Weight W and length L to reorder the elements in the DM 20* codeword.
 4) The reordered codeword 21AB from the memory 202 is routed to the symbol mapper 203.
[0339] FIG 16 illustrates an example of a binary distribution matcher 20* that can be used to operate as binary distribution matcher 20A and binary distribution matcher 20B.
[0340] The binary distribution matcher 20* generates bits with probability pa or bits with probability pb depending on the control bit from the distribution matcher 20C, working with bit probability pc.
[0341] The binary distribution matcher 20* is similar to the matcher 20 illustrated in FIG 5. It differs in that it can operate as a binary distribution matcher 20A at rate Ra (using respective second register 152A for storing the current Hamming Weight w for the rate Ra and third register 154A for storing the current index value for the rate Ra) or as a binary distribution matcher 20B at rate Rb (using respective second register 152B for storing the current Hamming Weight w for the rate Rb and third register 154B for storing the current index value for the rate Rb).
[0342] The time base and the first register 150 are shared, the adder logic 162 is shared, the table of coefficients 156 is shared, and the coefficient selection circuitry 160 is shared.
[0343] The binary distribution matcher 20A is initialized to work with values for Ia, Wa, Na depending on the rate Ra. The binary distribution matcher 20B is initialized to work with values for Ib, Wb, Nb depending on the rate Rb. For this reason the coefficients C_{n,w} are selected as a function of the algorithm iteration and of the rate. The C_{n,w} generator 156 will produce the required coefficient to complete each iteration as a function of the rate, searching from unique positions within the reference table for rate Ra and for rate Rb.
[0344] The distribution matcher 20C, working with bit probability pc selects whether the binary distribution matcher 20* operates as either:
 i) binary distribution matcher 20A at rate Ra (probability pa) using the second register 152A and third register 154A
 ii) binary distribution matcher 20B at rate Rb (probability pb) using the second register 152B and third register 154B.
[0345] A resorting memory 202 is not required.
[0346] Fig 17 illustrates an example of a multilevel distribution matcher 220 comprising multiple, parallel binary distribution matchers 20. This architecture is for the quaternary case M=4, although it can be easily generalized for higher cardinality by using more parallel binary distribution matchers 20.
[0347] Each one of the M-1 parallel binary distribution matchers 20n is responsible for producing statistics for a particular symbol Sn from a different portion of the input bit sequence, data 10. Each parallel binary distribution matcher 20n can receive the same number of bits Nb from the input data sequence of size M*Nb. The statistics are controlled by controlling the parameters of the binary distribution matcher 20n such as the shaper rate Nb/N, the number of output bits N in the block 710n produced by the binary distribution matcher 20n and the initialization of the Hamming Weight W (for constant composition enumerative coding or variable composition enumerative coding).
[0348] A bit demultiplexer 170 demultiplexes the data 10. It partitions the input information block 10 into three contiguous sub-blocks 10A, 10B, 10C, each routed to a different binary distribution matcher 20. The size Nb of each sub-block 10A, 10B, 10C can be the same or chosen according to the shaping rate of the associated binary distribution matcher 20, which depends on the statistics of the symbols.
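The partitioning performed by the bit demultiplexer 170 can be sketched as follows (a non-limiting illustration; the function name and the string representation of the information block are assumptions):

```python
def demultiplex(data: str, sizes: list) -> list:
    """Partition the input information block 10 into contiguous sub-blocks
    (e.g. 10A, 10B, 10C), one per binary distribution matcher; the sizes
    may be equal or chosen per the shaping rate of each matcher."""
    out, start = [], 0
    for n in sizes:
        out.append(data[start:start + n])
        start += n
    return out

sub_blocks = demultiplex("110100", [2, 2, 2])  # equal-size case
```

Unequal sizes simply reflect different shaper rates Nb/N for the three matchers.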
[0349] The first binary distribution matcher 20A generates the statistics for symbol S1, the second binary distribution matcher 20B generates the statistics for symbol S2, the third binary distribution matcher 20C generates the statistics for symbol S3.
[0350] Then the trellis coefficients of the three binary distribution matchers 20 are:
C_{n,w} for the first binary distribution matcher 20A;
C_{n2,w} for the second binary distribution matcher 20B;
C_{n3,w} for the third binary distribution matcher 20C.
[0351] Where n3 is the size of the output block 710C from the third binary distribution matcher 20C, n2 is the size of the output block 710B from the second binary distribution matcher 20B, n is the size of the output block 710A from the first binary distribution matcher 20A.
[0352] The three output blocks 710A, 710B, 710C with the coded bits generated by the binary distribution matchers 20A, 20B, 20C are routed to a bit multiplexer 172.
[0353] The bit multiplexer 172 generates symbols S1, S2, S3 and S4 as a function of the sequences 710A, 710B, 710C generated by the three binary distribution matchers 20A, 20B, 20C, using priority encoding. The logical ones in the sequence generated by first binary distribution matcher 20A indicate the positions of symbol S1 in the output codeword 21. The logical ones in the sequence generated by second binary distribution matcher 20B indicate the positions of symbol S2 in the output codeword 21. The logical ones in the sequence generated by third binary distribution matcher 20C indicate the positions of symbol S3 in the output codeword 21. The remaining unallocated positions in the output codeword 21 indicate positions of the symbol S4 in the output codeword.
[0354] The bitmux 172 combines symbols S1, S2, S3, S4 at their respective positions into a single output block of size n, which is the output codeword 21. As described, this is accomplished taking advantage of the size and distributions of logical ones and zeros in each output block 710A, 710B, 710C.
[0355] For example, referring to FIG 18, let [S4=251 S3=166 S2=73 S1=22] be the desired composition (Hamming Weight W_{n}) of symbols S_{n} for a quaternary alphabet in the output codeword 21, where S1/S2/S3/S4 refers to the exact (constant composition enumerative coding) or to the maximum (variable composition enumerative coding) number of symbols of each type in the final codeword 21 at the output of the bitmux 172.
[0356] The first binary distribution matcher 20A gives the statistics for S1 and is assigned w1 = 22 and n = 512 to generate an output block 710A of 512 bits, containing logical ones only in w1 = 22 positions of the output block 710A of length n = 512 bits. The positions of the 22 logical ones in the sequence of length n indicate the positions of the symbols S1 in the codeword 21 of length n.
[0357] The second binary distribution matcher 20B gives the statistics for S2 and is assigned w2 = 73 and n2 = 490 to generate an output block 710B of 490 bits containing logical ones only in w2 = 73 positions of the output block 710B of length n2=490 bits. The positions of the 73 logical ones in the sequence of length n indicate the positions of the symbols S2 in the codeword 21 of length n.
[0358] The third binary distribution matcher 20C gives the statistics for S3 and is assigned w3 = 166 and n3 = 417 to generate an output block 710C of 417 bits containing logical ones only in w3=166 positions of the output block 710C of length n3=417 bits. The positions of the 166 logical ones in the sequence of length n indicate the positions of the symbols S3 in the codeword 21 of length n.
[0359] The remaining 251 positions in the sequence of length n indicate the positions of the symbols S4 in the codeword 21 of length n.
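The priority encoding of [0353] to [0359] can be sketched as follows (a non-limiting illustration; the function name is assumed, and the output blocks are given as lists of bits). Each successive block marks symbol positions only among the positions left free by the previous blocks, which is why the block lengths shrink from n to n-w1 to n-w1-w2:

```python
def bitmux(block_a, block_b, block_c):
    """Place symbols S1..S4 into a length-n codeword by priority encoding.

    block_a has length n; its ones mark the positions of S1.
    block_b has length n - w1; its ones mark S2 among the free positions.
    block_c has length n - w1 - w2; its ones mark S3 likewise.
    All remaining positions carry S4.
    """
    n = len(block_a)
    out = [None] * n
    free = list(range(n))                    # positions not yet allocated
    for sym, block in (("S1", block_a), ("S2", block_b), ("S3", block_c)):
        taken = []
        for pos, bit in zip(free, block):
            if bit == 1:
                out[pos] = sym
                taken.append(pos)
        free = [p for p in free if p not in taken]
    for p in free:
        out[p] = "S4"
    return out
```

With the FIG 18 numbers (n=512, w1=22, w2=73, w3=166), this yields 22 S1, 73 S2, 166 S3 and 251 S4 positions.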
[0360] Other symbol orderings are possible, of course.
[0361] Fig 19 illustrates an example of a multilevel distribution matcher 220 comprising multiple, parallel binary distribution matchers 20. This architecture is for the quaternary case M=4, although it can be easily generalized for higher cardinality by using more parallel binary distribution matchers 20.
[0362] Each one of the log_{2}M parallel binary distribution matchers 20n is responsible for producing statistics for a particular bit of a symbol Sn from a different portion of the input bit sequence, data 10. Each parallel binary distribution matcher 20n can receive the same number of bits Nb from the input data sequence of size M*Nb. The statistics are controlled by controlling the parameters of the binary distribution matcher 20n such as the shaper rate Nb/N, the number of output bits N in the block 710n produced by the binary distribution matcher 20n and the initialization of the Hamming Weight W (for constant composition enumerative coding or variable composition enumerative coding).
[0363] If the probability distribution of the sequence S is separable, for instance with M=4:
The probability that the first binary distribution matcher 20A produces a logical 1 for the most significant bit (MSB) is pa.
[0364] The probability that the first binary distribution matcher 20A produces a logical 0 for the MSB is 1-pa.
[0365] The probability that the second binary distribution matcher 20B produces a logical 1 for the least significant bit (LSB) is pb.
[0366] The probability that the second binary distribution matcher 20B produces a logical 0 for the LSB is 1-pb.
[0367] The probabilities p1, p2, p3, p4 of the symbols S1=[1,1], S2=[1,0], S3=[0,1], S4=[0,0] are:
p1 = pa*pb; p2 = pa*(1-pb); p3 = (1-pa)*pb; p4 = (1-pa)*(1-pb)
We can obtain a multilevel matcher by combining log_{2}M binary matchers to generate binary labels with the desired probability distribution. The binary labels of the respective bits of a symbol are combined alternately by the symbol mapper (MSB from the first binary distribution matcher 20A, LSB from the second binary distribution matcher 20B), to create a codeword 21 of symbols.
[0368] Fig 20 illustrates an example of a controller 300. Implementation of a controller 300 may be as controller circuitry. The controller 300 may be implemented in hardware alone, have certain aspects in software including firmware alone or can be a combination of hardware and software (including firmware).
[0369] As illustrated in Fig 20 the controller 300 may be implemented using instructions 306 that enable hardware functionality. The instructions 306 are machine-readable instructions that are implemented using hardware.
[0370] In some but not necessarily all examples, the controller 300 may be implemented using executable instructions of a computer program 306 in a general-purpose or special-purpose processor 302 that may be stored on a computer-readable storage medium (disk, memory etc) to be executed by such a processor 302.
[0371] The processor 302 is configured to read from and write to a memory 304. The processor 302 may also comprise an output interface via which data and/or commands are output by the processor 302 and an input interface via which data and/or commands are input to the processor 302.
[0372] The memory 304 stores a computer program 306 comprising computer program instructions (computer program code) that controls the operation of the apparatus 20, 20*, 60, 220 when loaded into the processor 302. The computer program instructions, of the computer program 306, provide the logic and routines that enables the apparatus to perform the methods illustrated in preceding Figs. The processor 302 by reading the memory 304 is able to load and execute the computer program 306.
[0373] The apparatus 20, 20*, 60, 220 therefore comprises:
at least one processor 302; and
at least one memory 304 including computer program code
the at least one memory 304 and the computer program code configured to, with the at least one processor 302, cause the apparatus 20, 20*, 60, 220 at least to perform one or more of:
conversion between data of a fixed length and a modulation codeword of a fixed length;
constrained but variable modulation symbol composition;
enumerative coding (decoding or encoding);
using truncated coefficients for coding;
use of Hamming Weight and a coding index value as stored state variables and coding bit position as a clocked value;
controlling probability of occurrence of multiple modulation symbols, for example using parameters of enumerative decoding (Nb, N, W); and
as a binary matcher to convert each portion of a data block to a binary bit stream.
[0374] As illustrated in Fig 21, the computer program 306 may arrive at the apparatus 20, 20*, 60, 220 via any suitable delivery mechanism 310. The delivery mechanism 310 may be, for example, a machine-readable medium, a computer-readable medium, a non-transitory computer-readable storage medium, a computer program product, a memory device, a record medium such as a Compact Disc Read-Only Memory (CD-ROM) or a Digital Versatile Disc (DVD) or a solid-state memory, an article of manufacture that comprises or tangibly embodies the computer program 306. The delivery mechanism may be a signal configured to reliably transfer the computer program 306. The apparatus 20, 20*, 60, 220 may propagate or transmit the computer program 306 as a computer data signal.
[0375] The machine-readable instructions (e.g. computer program instructions), when implemented using hardware, cause an apparatus to perform at least one or more of:
conversion between data of a fixed length and a modulation codeword of a fixed length;
constrained but variable modulation symbol composition;
enumerative coding (decoding or encoding);
using truncated coefficients for coding;
use of Hamming Weight and a coding index value as stored state variables and coding bit position as a clocked value;
controlling probability of occurrence of multiple modulation symbols, for example using parameters of enumerative decoding (Nb, N, W); and
as a binary matcher to convert each portion of a data block to a binary bit stream.
[0376] The machine-readable instructions (e.g. computer program instructions) may be comprised in a computer program, a non-transitory computer-readable medium, a computer program product, a machine-readable medium. In some but not necessarily all examples, the computer program instructions may be distributed over more than one computer program.
[0377] Although the memory 304 is illustrated as a single component/circuitry it may be implemented as one or more separate components/circuitry some or all of which may be integrated/removable and/or may provide permanent/semi-permanent/dynamic/cached storage.
[0378] Although the processor 302 is illustrated as a single component/circuitry it may be implemented as one or more separate components/circuitry some or all of which may be integrated/removable. The processor 302 may be a single-core or multi-core processor.
[0379] References to 'computer-readable storage medium', 'computer program product', 'tangibly embodied computer program' etc. or a 'controller', 'computer', 'processor' etc. should be understood to encompass not only computers having different architectures such as single-/multi-processor architectures and sequential (Von Neumann)/parallel architectures but also specialized circuits such as field-programmable gate arrays (FPGA), application-specific integrated circuits (ASIC), signal processing devices and other processing circuitry. References to computer program, instructions, code etc. should be understood to encompass software for a programmable processor or firmware such as, for example, the programmable content of a hardware device whether instructions for a processor, or configuration settings for a fixed-function device, gate array or programmable logic device etc.
[0380] As used in this application, the term 'circuitry' may refer to one or more or all of the following:
 (a) hardware-only circuitry implementations (such as implementations in only analog and/or digital circuitry) and
 (b) combinations of hardware circuits and software, such as (as applicable):
 (i) a combination of analog and/or digital hardware circuit(s) with software/firmware and
 (ii) any portions of hardware processor(s) with software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions and
 (c) hardware circuit(s) and/or processor(s), such as a microprocessor(s) or a portion of a microprocessor(s), that requires software (e.g. firmware) for operation, but the software may not be present when it is not needed for operation.
[0381] This definition of circuitry applies to all uses of this term in this application, including in any claims. As a further example, as used in this application, the term circuitry also covers an implementation of merely a hardware circuit or processor and its (or their) accompanying software and/or firmware. The term circuitry also covers, for example and if applicable to the particular claim element, a baseband integrated circuit for a mobile device or a similar integrated circuit in a server, a cellular network device, or other computing or network device.
[0382] In at least some examples, the matcher 20 converts data blocks 10 to respective modulation codewords, for controlling transmission, comprising:
converting each data block of fixed first bit length to an index value;
enumeratively decoding the index value to obtain a codeword for controlling transmission.
[0383] In at least some examples, the data block has a first fixed bit length and the obtained codeword has a second fixed bit length greater than the first fixed bit length.
[0384] In at least some examples, the obtained codeword has a constrained but variable symbol composition that can vary with the index value.
[0385] In at least some examples, the means control enumerative decoding to control a probability of occurrence of multiple symbols, for modulating transmission, within the codeword. For example, a probability of occurrence of high amplitude symbols can be controlled to be less than a probability of occurrence of low amplitude symbols.
[0386] In at least some examples, the dematcher 60 comprises means for converting codewords to respective data blocks comprising: enumeratively encoding the codeword to obtain an index value; and converting each index value to a data block.
[0387] In at least some examples, the data block has a first fixed bit length and the codeword has a second fixed bit length greater than the first fixed bit length.
[0388] In at least some examples, the codeword has a constrained but variable symbol composition and the index value varies with a variable symbol composition of the codeword.
[0389] The blocks illustrated in at least some of the Figs may represent steps in a method and/or sections of code in the computer program 306. The illustration of a particular order to the blocks does not necessarily imply that there is a required or preferred order for the blocks and the order and arrangement of the blocks may be varied. Furthermore, it may be possible for some blocks to be omitted.
[0390] Components illustrated in the FIGs as being interconnected are operationally coupled and any number or combination of intervening elements can exist (including no intervening elements).
[0391] Where a structural feature has been described, it may be replaced by means for performing one or more of the functions of the structural feature whether that function or those functions are explicitly or implicitly described.
[0392] As used here 'module' refers to a unit or apparatus that excludes certain parts/components that would be added by an end manufacturer or a user. The apparatus 20, 20*, 60, 220 can be a module.
[0393] The above described examples find application as enabling components of: automotive systems; telecommunication systems; electronic systems including consumer electronic products; distributed computing systems; media systems for generating or rendering media content including audio, visual and audio visual content and mixed, mediated, virtual and/or augmented reality; personal systems including personal health systems or personal fitness systems; navigation systems; user interfaces also known as human machine interfaces; networks including cellular, non-cellular, and optical networks; ad-hoc networks; the internet; the internet of things; virtualized networks; and related software and services.
[0394] The term 'comprise' is used in this document with an inclusive not an exclusive meaning. That is, any reference to X comprising Y indicates that X may comprise only one Y or may comprise more than one Y. If it is intended to use 'comprise' with an exclusive meaning then it will be made clear in the context by referring to "comprising only one.." or by using "consisting".
[0395] In this description, reference has been made to various examples. The description of features or functions in relation to an example indicates that those features or functions are present in that example. The use of the term 'example' or 'for example' or 'can' or 'may' in the text denotes, whether explicitly stated or not, that such features or functions are present in at least the described example, whether described as an example or not, and that they can be, but are not necessarily, present in some of or all other examples. Thus 'example', 'for example', 'can' or 'may' refers to a particular instance in a class of examples. A property of the instance can be a property of only that instance or a property of the class or a property of a subclass of the class that includes some but not all of the instances in the class. It is therefore implicitly disclosed that a feature described with reference to one example but not with reference to another example, can where possible be used in that other example as part of a working combination but does not necessarily have to be used in that other example.
[0396] Although embodiments have been described in the preceding paragraphs with reference to various examples, it should be appreciated that modifications to the examples given can be made without departing from the scope of the claims.
[0397] Features described in the preceding description may be used in combinations other than the combinations explicitly described above.
[0398] Although functions have been described with reference to certain features, those functions may be performable by other features whether described or not.
[0399] Although features have been described with reference to certain embodiments, those features may also be present in other embodiments whether described or not.
[0400] The term 'a' or 'the' is used in this document with an inclusive not an exclusive meaning. That is, any reference to X comprising a/the Y indicates that X may comprise only one Y or may comprise more than one Y unless the context clearly indicates the contrary. If it is intended to use 'a' or 'the' with an exclusive meaning then it will be made clear in the context. In some circumstances the use of 'at least one' or 'one or more' may be used to emphasise an inclusive meaning but the absence of these terms should not be taken to infer an exclusive meaning.
[0401] The presence of a feature (or combination of features) in a claim is a reference to that feature or (combination of features) itself and also to features that achieve substantially the same technical effect (equivalent features). The equivalent features include, for example, features that are variants and achieve substantially the same result in substantially the same way. The equivalent features include, for example, features that perform substantially the same function, in substantially the same way to achieve substantially the same result.
[0402] In this description, reference has been made to various examples using adjectives or adjectival phrases to describe characteristics of the examples. Such a description of a characteristic in relation to an example indicates that the characteristic is present in some examples exactly as described and is present in other examples substantially as described.
[0403] Whilst endeavoring in the foregoing specification to draw attention to those features believed to be of importance it should be understood that the Applicant may seek protection via the claims in respect of any patentable feature or combination of features hereinbefore referred to and/or shown in the drawings whether or not emphasis has been placed thereon.