Field of the invention
The present invention relates to the use of low power wide area infrastructure for the purpose of localizing mobile items.
Description of related art
Wide area localization is typically based on GPS (Global Positioning System) techniques or on cellular (GSM) infrastructure. Both types of localization methods may be regarded as complex and power intensive, requiring high investments to set up the underlying systems whilst individual receivers for both techniques rapidly drain the available power sources for mobile devices. These existing problems become even more relevant for low-powered mobile devices as envisaged for use in the currently emerging Internet of Things (IoT) infrastructures.
An IoT infrastructure may be characterized as including a low-power, low-bandwidth wide area network such as a Long-range Wide Area Network ("LoRaWAN") made up of mainly stationary access points or gateways, capable of providing communication channels to a multitude of low-power, low-bandwidth mobile or immobile devices. These devices are in most cases meant to be produced at a fraction of the cost of GPS or GSM devices, with very limited power resources or batteries. When looking at applications of IoT, location tracking is considered an important field of use, allowing for example materials and devices to be located on a campus or larger industrial sites, or deliveries to be tracked in a neighborhood or within larger buildings. LoRa networks currently use (license-free) sub-gigahertz radio frequency bands, i.e. in the range of 150 MHz to 1 GHz, like the 169 MHz, 433 MHz, 868 MHz (Europe) and 915 MHz (North America) frequency bands. The typical data rates are in the range of 10 bps to 100 kbps.
As described in recent surveys such as the LoRaWAN GEOLOCATION WHITEPAPER as prepared by the LoRa Alliance™ Strategy Committee, January 2018 (https://lora-alliance.org/sites/default/files/2018-04/geolocation_whitepaper.pdf), or in "A Survey of Indoor Localization Systems and Technologies" by F. Zafari et al., arXiv:1709.01015v3 [cs.NI], 16 Jan 2019, the LoRaWAN™ protocol provides basically two methods for geolocation determination: Received Signal Strength (RSS) based, for coarse location determination, or Time Difference Of Arrival (TDOA), for finer accuracy, with the latter typically achieving an accuracy of 20 m to 200 m depending on the existing infrastructure, e.g. the density of gateways, obstacles etc. In TDOA, several gateways simultaneously receive the same uplink message, and the end-device location is determined using multi-lateration techniques.
To achieve higher accuracy, it has been proposed to use a fingerprint classification of LoRa signals, see for example Wongeun Choi, Yoon-Seop Chang, Yeonuk Jung, and Junkeun Song, "Low-power LoRa signal-based outdoor positioning using fingerprint algorithm", ISPRS International Journal of Geo-Information, 7(11), 2018, and Micha Burger, "LoRa localisation improvement using GPS fingerprint and AI algorithms", Master thesis, École polytechnique fédérale de Lausanne, 2018.
Compared to the known fingerprinting-based methods using time-stamped signals, the received signal strength (RSS) based approach is considered to be one of the simplest approaches, requiring little or no additional circuitry over and above the normal communication tasks of devices and gateways. The RSS is the actual signal power strength received at the receiver, usually measured in decibel-milliwatts (dBm) or milliwatts (mW). The RSS can be used to estimate the distance between a transmitter (Tx) and a receiver (Rx) device; the higher the RSS value, the smaller the distance between Tx and Rx. The absolute distance can be estimated using several different signal propagation models, given that the transmission power or the power at a reference point is known.
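By way of illustration, one such propagation model is the log-distance path loss model; the following sketch (function name and default parameter values are illustrative assumptions, not part of the invention) inverts it to estimate the Tx-Rx distance from an RSS measurement:

```python
import math

def distance_from_rss(rss_dbm, p_ref_dbm=-40.0, d_ref=1.0, n=2.7):
    """Estimate the Tx-Rx distance from RSS via the log-distance path loss model.

    p_ref_dbm: RSS measured at the reference distance d_ref (metres);
    n: path loss exponent (about 2 in free space, 2.7-4 in urban/indoor settings).
    The model assumes RSS falls off by 10*n dB per decade of distance.
    """
    return d_ref * 10 ** ((p_ref_dbm - rss_dbm) / (10 * n))
```

With the default parameters, an RSS of -40 dBm maps to the 1 m reference distance and each drop of 27 dB corresponds to one further decade of distance.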
RSS based localization may require in one variant trilateration or N-point lateration, i.e., the RSS at the device is used to estimate the absolute distance between the user device and at least three reference points, such as the gateway locations; then basic geometry/trigonometry is applied for the user device to obtain its location relative to the reference points. In a similar manner, in a second variant the RSS at the reference points is used to obtain the location of the user device. In the latter case, a central controller or ad-hoc communication between reference points is needed for the total RSS collection and processing. While the RSS based approach is simple and cost efficient, it suffers from poor localization accuracy (especially in non-line-of-sight conditions) due to additional signal attenuation resulting from transmission through walls and other big obstacles, and severe RSS fluctuation due to multipath fading and indoor noise. Such problems require more sophisticated approaches.
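The lateration step described above may be sketched as a linear least-squares problem (a minimal illustration; the function name and the linearisation by subtracting the last anchor's circle equation are implementation choices, not prescribed by the invention):

```python
import numpy as np

def trilaterate(anchors, distances):
    """Least-squares position estimate from N >= 3 anchor (gateway) locations
    and estimated distances to each anchor.

    Each circle equation |x - a_i|^2 = d_i^2 is linearised by subtracting
    the equation of the last anchor, yielding a linear system A x = b.
    """
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    ref, d_ref = anchors[-1], d[-1]
    A = 2 * (anchors[:-1] - ref)
    b = (d_ref**2 - d[:-1]**2
         + np.sum(anchors[:-1]**2, axis=1) - np.sum(ref**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

With exact distances the solution is exact; with noisy RSS-derived distances the least-squares formulation returns the best-fit position.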
The typical applications for low power location determination are IoT applications based on small, battery-powered sensors/devices operating over a limited space, such as asset management requiring infrequent position updates, where the status and location of various assets, goods, people and animals are monitored, or where parcels and orders are located. Small mobile LoRa nodes may be attached to different components in an assembly line to break down the time spent at each step. Other examples include geo-fencing to detect the movement of a normally stationary object: this can be implemented on construction sites, utility yards, airports or campuses to protect against theft.
In view of the known art, it may therefore be seen as a problem of the present invention to provide improved but robust location or tracking methods using the communication and signals available in LoRaWAN networks.
Summary of the invention
RSS based location or tracking methods for low-power wide area networks are provided, substantially as shown in and/or described in connection with at least one of the figures, and as set forth more completely in the claims.
These and other aspects, advantages and novel features of the present invention, as well as details of an illustrated embodiment thereof, will be more fully understood from the following description and drawings.
Brief Description of the Drawings
The invention will be better understood with the aid of the description of embodiments given by way of example and illustrated by the figures, in which:
Fig. 1 shows basic steps in accordance with an example of the invention;
Figs. 2A and 2B illustrate an architecture of a multistage LSTM neural network;
Fig. 3A and 3B illustrate an encoder stage and a decoder stage of a transformer;
Figs. 4A and 4B show examples of the location dependent uncertainty for an x and a y coordinate of a location of a mobile device; and
Fig. 5 shows a determined location and its uncertainty overlaid on a geographical map of a LoRaWAN area/campus.
Fig. 1 shows basic steps in accordance with an example of the present invention. In a first step 10 a model for linking signal strength and locations is either built or provided, which is capable of accepting input values relating to RSS, ESP and/or SNR, or any combination thereof, of electromagnetic signals transmitted between a mobile device to be localized or tracked and one or more access points or gateways of a low-power wide area network (LoRaWAN) with known locations. In a second step 20 a mobile device moves across the reception area of the low-power wide area network while either continuously or intermittently transmitting or receiving signals to or from the one or more gateways. In a third step 30 the signals are evaluated to derive values representative of RSS, SNR or ESP, as required as input to the model. In step 40 the values of step 30 are used as input to the model. And in step 50 the model generates values representative of the location of the mobile device and makes available the generated location values together with a location dependent measure of uncertainty in the location.
The RSS related signals may typically be measured in units such as dBm, or in relative units as in the various measures of the Received Signal Strength Indicator (RSSI), which represents a measurement of how well the receiver "hears" a signal from the transmitter.
The Signal-to-Noise Ratio (SNR) is the ratio between the received power signal and the noise floor power level, which is the sum of the powers of all unwanted interfering signal sources that can corrupt the transmitted signal. LoRa typically operates below the noise floor, giving negative SNR values.
Using RSS and SNR, a combined value such as the Estimated Signal Power (ESP) may be calculated using for example the relation ESP = RSSI + SNR - 10*log10(1 + 10^(SNR/10)), with ESP and RSSI in dBm and SNR in dB, which removes the noise power contribution from the measured RSSI.
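By way of illustration, such a combined value may be computed as follows, assuming the ESP definition commonly used for LoRa radios, ESP = RSSI + SNR - 10*log10(1 + 10^(SNR/10)) (an assumption; other definitions may equally be used):

```python
import math

def esp(rssi_dbm, snr_db):
    """Estimated Signal Power (dBm) from RSSI (dBm) and SNR (dB).

    The RSSI measures signal plus noise; subtracting the noise share
    yields ESP = RSSI + SNR - 10*log10(1 + 10^(SNR/10)).
    For large positive SNR, ESP approaches the RSSI; for the negative
    SNR values typical of LoRa, ESP lies well below the RSSI.
    """
    return rssi_dbm + snr_db - 10 * math.log10(1 + 10 ** (snr_db / 10))
```

For example, at SNR = 0 dB the signal and noise powers are equal, so the ESP lies about 3 dB below the RSSI.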
However, any other available measure representative of signal strength may be used to train the model and to determine the location of a mobile device. It should however be noted that RSS related measurements are typically poorly correlated with the distance between the object and a stationary gateway. For that reason, RSS has so far not been considered for accurate location determination, i.e. with sub-100 m resolution. Using a method as proposed herein can improve the resolution even when using RSS.
The model as referred to in Fig. 1 may be built by evaluating the relevant inputs at known locations, i.e. by collecting a dataset with for example GPS coordinates. The difference between the coordinates as predicted by the model and the GPS coordinates ("the ground truth") is typically minimized to increase the accuracy of the model. In the case of neural networks this evaluation step is also known as "training". It is worth noting that among the many different modelling methods not all are equally efficient in solving the problem of converting the chosen input values into location values. It was found that a neural network of a class which accepts parallel inputs to influence the current output, such as Recurrent Neural Networks (RNNs), is well suited to this problem. The parallel inputs, which in this case may be a subset of the accumulated temporal sequence of measurements as defined by a time-shifted window, are fed into a corresponding parallel architecture of network units or cells.
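By way of illustration, the agreement between model predictions and the GPS ground truth may be quantified as a mean distance error (a minimal sketch; the function name is an assumption, and coordinates are assumed to be in a local metric x/y grid):

```python
import numpy as np

def mean_distance_error(predicted, ground_truth):
    """Mean Euclidean distance between predicted locations and GPS ground
    truth, both given as arrays of (x, y) coordinates in metres."""
    predicted = np.asarray(predicted, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    return float(np.mean(np.linalg.norm(predicted - ground_truth, axis=1)))
```

During training this quantity (or its squared counterpart) is the error being minimized; after training it serves as the accuracy figure reported for each method.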
Further efficiency gain may be made by using the neural network to determine the value of a theoretically continuous parameter, e.g. a value for the x or y coordinates, as output instead of a probability of a class or classes, e.g. grid cells. This avoids having to split a wide area into a grid or other type of distinct location classes as is done in known methods using for example the correlation between a signal fingerprint of the received signal and a cell of a grid in fingerprinting-based approaches.
Preferably the neural network used is a sequence-to-sequence RNN which uses a sequence as input and outputs a sequence. In particular, the model may include a combination of encoder and decoder, where the encoder is a part of the network that takes the input sequence and maps it to an encoded representation of the sequence. The encoded representation, which depending on the type of network used may for example be the final hidden state of the trained network, is then used by the decoder network to generate an output sequence.
Of the known RNNs it is further preferred to use a class known as Long Short-Term Memory (LSTM) models, as described for example in Sepp Hochreiter and Jürgen Schmidhuber, "Long short-term memory", Neural Comput., 9(8), pp. 1735-1780, November 1997, or in U.S. patent 9,620,108 B2. Though LSTMs have since evolved into many different model architectures such as the "peephole" LSTM or the "gated recurrent unit" (GRU), it may be regarded as a common feature of LSTMs to include layers of cells each having a forget gate that controls how much of a previous input is retained for the calculation of the present output. A common LSTM unit may be composed of a cell, an input gate, an output gate and a forget gate. The cell remembers values over arbitrary time intervals and the three gates are used to regulate the flow of information into and out of the cell, with particularly the forget gate being used to control how much of the input to a previous cell in a parallel architecture of cells or layers is preserved.
The LSTM or other neural networks may include a layer of parallel cells allowing for the simultaneous input of two or more position related signals such as the RSS. It is particularly preferred to have a sufficient number of parallel cells to apply a time window to the position related signal of preferably between 3 and 15 consecutive samples. Hence, as a mobile device being moved through the LoRaWAN area generates a time sequence of position related signals used for the purpose of the present invention, the time window determines how many consecutive signals may be used as simultaneous inputs to the neural network. The window size can on the one hand be too large and consequently dilute the context and introduce noise into the prediction. On the other hand, a small window may not be enough for the model to construct an understanding of the patterns in the data. It is found that window sizes between 3 and 15 time steps may work best. Time steps and window size may be regarded as being equivalent to the number of previous samples or data points considered.
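The windowing of the temporal sequence described above may be sketched as follows (a minimal illustration; the function name and default window size are assumptions):

```python
import numpy as np

def make_windows(samples, window=5):
    """Turn a temporal sequence of per-sample feature vectors (e.g. RSS,
    SNR and/or ESP values per gateway) into overlapping windows of
    `window` consecutive samples, one window per prediction step.

    Input shape (T, F) yields output shape (T - window + 1, window, F),
    i.e. each output row is fed to the parallel cells of the network.
    """
    samples = np.asarray(samples, dtype=float)
    n = len(samples) - window + 1
    return np.stack([samples[i:i + window] for i in range(n)])
```

Each successive window is shifted by one time step, so consecutive predictions share most of their context.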
Besides being arranged as a stack of parallel cells, the neural network is preferably multi-layered in the sense of having at least two layers of parallel units, with the second or any further layer receiving the output of a previous layer as input, in a form of vertical stacking.
As shown in FIG. 2A, each LSTM unit or cell 20 is composed of, as main elements, an input gate 21, a neuron with a self-recurrent connection (a connection to itself) 22, a forget gate 23, an output gate 24, and a memory input gate 25 and a memory output gate 26 allowing a transfer of memory states between cells. The self-recurrent connection has a weight of 1.0 and ensures that, barring any outside interference, the state of a memory cell can remain constant from one timestep to another. The gates serve to modulate the interactions between the memory cell itself and its environment. The input gate can allow an incoming signal to alter the state of the memory cell or block it. On the other hand, the output gate can allow the state of the memory cell to have an effect on other neurons or prevent it. Finally, the forget gate can modulate the memory cell's self-recurrent connection, allowing the cell to remember or forget its previous state, as needed.
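The gate arithmetic of a single timestep of a standard LSTM cell may be sketched as follows (a minimal NumPy illustration of the textbook formulation; it shows the role of the gates described above but is not a literal transcription of the cell 20 of Fig. 2A, whose memory input/output gates 25, 26 are a specific variant):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell_step(x, h_prev, c_prev, W, U, b):
    """One timestep of a standard LSTM cell with hidden size n.

    W (4n x input_dim), U (4n x n) and b (4n) hold the stacked parameters
    for the input (i), forget (f), candidate (g) and output (o) gates.
    The forget gate controls how much of the previous memory state c_prev
    is preserved; the input gate controls how much new information enters.
    """
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0:n])          # input gate
    f = sigmoid(z[n:2 * n])      # forget gate
    g = np.tanh(z[2 * n:3 * n])  # candidate memory content
    o = sigmoid(z[3 * n:4 * n])  # output gate
    c = f * c_prev + i * g       # new memory state
    h = o * np.tanh(c)           # new hidden state / cell output
    return h, c
```

With all parameters at zero, each gate opens halfway, so the memory state is simply halved at each step, illustrating the forget gate's role.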
It is further found that a multilayer LSTM model as shown in Fig. 2B, having four layers of individual LSTM cells 20 as shown in Fig. 2A, with an input vector 27 of five inputs (four previous time samples and one current), followed by a dense layer 28 to generate an output 29 of the estimated location, performs well. It should be noted that, as is common in AI representations, the individual LSTM layer shown is representative of any number of layers, i.e. 4 in the present example. Further improvements may be gained by introducing dropout layers between LSTM layers. Another slight improvement may be gained by using bidirectional layers that look at the data in both directions, i.e. from the past to the current time step and vice versa.
In an attention cell there is calculated (during training and in operation), for each position, a measure of attention or weights to indicate the importance of previous or (in a bidirectional network) of all other locations for the calculation of the present output location. Attention cells at a higher layer of the neural network may relate the attention measure to hidden states, i.e. outputs of a previous layer of the network, instead of explicit locations. The attention measure may increase the accuracy of the location prediction by training the model, for each location, with a location dependent measure of which other locations have the highest weight or relevance for the determination of the present location. Using bidirectionality in the neural network, even previously calculated locations may be corrected in the light of locations of the same mobile device as determined later. When attention is calculated intrinsically based on a currently processed input, i.e., a sequence of inputs, it may be referred to as self-attention. The attention may also be multi-headed, i.e. calculated based on the same training set but using a plurality of different initial conditions or initial weights.
The input may also be encoded using the position (or time) it has with respect to the sequence of inputs. This step is known as positional encoding or positional embedding.
A further preferred neural network model which may be used for the localization determination of the present invention is referred to as a "transformer" and known as such from the field of natural language processing, see for example Ashish Vaswani et al., "Attention is all you need", in: I. Guyon et al. (editors), Advances in Neural Information Processing Systems, vol. 30, pages 5998-6008, Curran Associates, Inc., 2017, or J. Alammar, "The Illustrated Transformer", http://jalammar.github.io/illustrated-transformer/, or Jacob Devlin et al., "BERT: pre-training of deep bidirectional transformers for language understanding", CoRR, abs/1810.04805, 2018. The transformer includes an attention measure which is not dependent on recurrent processing, referred to as "self-attention".
In the standard transformer the self-attention may be determined for each position in the input sequence by using three vectors K, Q and V, created by multiplying the input vector by three weight matrices. These vectors represent abstractions of the input called Key, Query and Value, respectively. After a series of calculations, a score is assigned to each element in the sequence. This score represents how important a certain element is in encoding another element, i.e. the attention. However, as an important difference it should be noted that the standard attention in the LSTM models outputs a matrix containing the importance of the input elements with respect to the optimization task at hand, while the self-attention in the transformer also computes the weight of each input occurrence with respect to the others.
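The standard scaled dot-product self-attention described above may be sketched as follows (a minimal single-head NumPy illustration; function and variable names are assumptions):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention for one input sequence.

    X: (seq_len, d_model). Wq, Wk, Wv project X to the Query, Key and
    Value abstractions. The softmaxed Q K^T scores form the attention
    weights: row i says how strongly position i attends to every other
    position when its output representation is computed.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # numerically stable softmax over each row of scores
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = w / w.sum(axis=-1, keepdims=True)
    return weights @ V, weights
```

Each row of the returned weight matrix sums to one, so the output for each position is a convex combination of the Value vectors of all positions.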
The self-attention computation may be extended to a multi-headed self-attention by training the model multiple times with different weight matrices and concatenating the outputs for each position and multiplying it by a trainable weight matrix. The self-attention or the multi-headed self-attention may improve the expressivity of the model by projecting the feature space multiple times and aggregating to encode a better representation of the input. This is especially important due to noisy signal measurements and the scarcity of features in the typical location determining problem.
It may be preferred to precede the transformer with a positional embedding as described above, giving each input a value representative of its position in a sequence or a vector of inputs.
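A common choice for such a positional embedding is the sinusoidal encoding of the original transformer; the following is a minimal sketch (the sinusoidal form is an example, not the only embedding usable with the invention):

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional embedding, d_model assumed even:
    PE[pos, 2i]   = sin(pos / 10000^(2i / d_model))
    PE[pos, 2i+1] = cos(pos / 10000^(2i / d_model))
    Each position in the window gets a unique, smoothly varying vector
    that is added to (or concatenated with) the input features.
    """
    pos = np.arange(seq_len)[:, None]
    i = np.arange(0, d_model, 2)[None, :]
    angles = pos / np.power(10000.0, i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe
```

Because the encoding depends only on the position index, the model can learn to exploit the ordering of the RSS samples inside each time window.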
An example of an encoder and decoder is shown in Figs. 3A and 3B, respectively. The encoder 30 consists of a positional embedding 31 feeding an input to a self-attention layer 32 and a feedforward layer 33. Each of these layers has a residual connection to a normalization layer 34, 35, as described for example in Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton, "Layer normalization", arXiv:1607.06450v1 [stat.ML], 21 Jul 2016. The single layer encoder shown in Fig. 3A may be representative of a multi-layer stack of such encoders where the output of each encoder is passed as the input of the next one (without the positional embedding stage 31). The encoder is used to generate a (feature) representation 36 of the input using the self-attention mechanism as described above.
The second part of the model is the inference block or decoder 37 as shown in Fig. 3B. In deviation from the standard transformer as described for example by A. Vaswani (see above), a decoder 37 is used which includes three convolutional neural networks (CNNs) 38 followed by two feedforward layers 39. CNNs may be regarded as automatic feature extractors, and since the features span a 2D space composed of the time window and the generated feature representation 36 from the encoder, they are considered to fit well with the location problem. In a further change from the known transformer, the loss function used to train the model is a mean square error loss function for regression.
The methods described herein may significantly increase the accuracy of a location determination. Comparing different methods yields the following results:
Table 1:
|METHOD|Mean distance error [m]|
|Attention based LSTMs| |
In Table 1 the first method used is triangulation based on Friis' law, which relates signal strength ESP and distance d to a receiver as d = a * log(ESP) + b, where a and b are fitting parameters for each receiver. This method is not based on a trained model or on AI related methods. The Random Forest method is a basic machine learning method, and the LSTM and transformer-based methods are described above. Thus, the neural network-based methods as described herein can be used to reduce the mean distance error to around 25 meters (or better if larger training data sets can be acquired). This is a significant improvement over known methods based solely on RSS related measurements and compares well with standard GPS data (about 5-10 m).
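The per-receiver fitting parameters a and b of the stated relation may be obtained by ordinary least squares from calibration measurements at known distances; the following is a minimal sketch (the function name is an assumption, and ESP is assumed here to be given as a positive linear-scale power value so that the logarithm is defined):

```python
import numpy as np

def fit_friis(esp_values, distances):
    """Least-squares fit of d = a*log(ESP) + b for one receiver,
    given calibration pairs of measured ESP values (positive,
    linear scale) and known distances d (metres)."""
    x = np.log(np.asarray(esp_values, dtype=float))
    A = np.column_stack([x, np.ones_like(x)])
    (a, b), *_ = np.linalg.lstsq(A, np.asarray(distances, dtype=float),
                                 rcond=None)
    return a, b
```

Once a and b are fitted per receiver, a new ESP measurement is converted directly into a distance estimate for use in the lateration step.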
While the previously described systems and methods are aimed at improving the accuracy of the location determination it is also found to be of value to a user to be able to determine the accuracy or uncertainty of the output generated. The results of trained neural networks are often treated as deterministic functions and generate the output without providing such a measure of uncertainty. It is therefore seen as an important extension of the above methods to add a method for determining an uncertainty measure. Such an accuracy or uncertainty measure may be defined as fulfilling three performance conditions:
- The uncertainty estimation should capture the confidence of the model in its own prediction. If the prediction is far away from the actual (ground-truth) value, then the uncertainty should increase.
- The uncertainty interval represents the spatial area where the model supposes the actual location to exist. Therefore, any ground truth of the training set (e.g. GPS location data) should fall inside the boundaries of the interval.
- The uncertainty interval should give the user a meaningful insight into the location. A very wide interval is useless for any practical application. Hence a preferred accuracy or uncertainty measure may be designed to minimize the interval width while respecting the two previous conditions.
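The second and third conditions above may be checked numerically as a coverage fraction and a mean interval width; the following is a minimal sketch (function and metric names are assumptions):

```python
import numpy as np

def interval_quality(lower, upper, ground_truth):
    """Evaluate uncertainty intervals per coordinate:
    coverage   - fraction of ground-truth values inside [lower, upper]
                 (should be high, per the second condition);
    mean_width - average interval width (to be minimised at a given
                 coverage, per the third condition)."""
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    gt = np.asarray(ground_truth, dtype=float)
    coverage = float(np.mean((gt >= lower) & (gt <= upper)))
    mean_width = float(np.mean(upper - lower))
    return coverage, mean_width
```

Comparing two uncertainty estimators at equal coverage, the one with the smaller mean width is preferred.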
The uncertainty may be generated through the use of a difference between the underlying ground truth used in training, such as the GPS data, and the location as generated by the model. This difference is typically used during the training of the model. However, when the model is applied at a later stage, i.e. when the method is operational, there is no further ground truth available. It is therefore preferred to derive the uncertainty from the model itself, hence not relying on additional external measurements which may either not exist or be cumbersome to acquire and increase the cost of implementation of the proposed methods.
Based on the theoretical basis which links dropouts with model uncertainty, i.e. the interpretation of dropout as an approximate Bayesian inference (also known as Monte Carlo dropout), the training of the model may be regarded as a two-stage process including an initial training of the model based on a first set of (training) data and a second phase which may be regarded as a test phase. The latter test phase uses a second (test) data set, which itself has not been used during the initial training phase, to test the stability of the predictions, i.e. the uncertainty, of the trained model.
The uncertainty of a prediction is determined by using a set of different dropout conditions ("dropout masks"), with each set setting for example different probabilities for dropouts. The dropouts may be applied in the horizontal or in the vertical direction; however, to better correspond to probabilities it is preferred to use dropout masks which generate both vertical and horizontal dropouts. The use of a multitude of dropout conditions may be regarded as Monte Carlo testing of the model.
Whereas in the more common interpretation of dropouts random nodes are removed with a probability p at each training step, in the Bayesian interpretation, at each training step the posterior distribution of the weights is updated and at test time the posterior is used to determine the predictive distribution. In terms of implementation, the dropout uncertainty estimation amounts to performing standard and recurrent dropouts across the layers at test time and running the model multiple times in a Monte Carlo fashion. This yields a distribution composed of the different posterior approximations that represents a confidence interval of the model. There are different ways of extracting the uncertainty from the distribution: it is possible to use the maximum and the minimum as boundaries. However, this approach is prone to large intervals due to outliers. It is preferred to calculate the mean and the standard deviation; depending on the required confidence, the intervals may then be defined as a multiple of the standard deviation of the distribution centered at the mean.
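The Monte Carlo procedure described above may be sketched as follows (a minimal illustration; the `predict` callable stands for the trained model evaluated with dropout active and its signature is an assumption):

```python
import numpy as np

def mc_dropout_interval(predict, x, n_runs=100, k=2.0, seed=None):
    """Monte Carlo dropout uncertainty estimate.

    predict(x, rng) must be a stochastic forward pass of the model with
    dropout masks sampled from rng at test time. The model is run n_runs
    times; the mean of the resulting distribution is the location
    estimate, and mean +/- k standard deviations is the uncertainty
    interval per coordinate (k chosen per the required confidence).
    """
    rng = np.random.default_rng(seed)
    samples = np.stack([predict(x, rng) for _ in range(n_runs)])
    mean = samples.mean(axis=0)
    std = samples.std(axis=0)
    return mean, mean - k * std, mean + k * std
```

Using the mean and a multiple of the standard deviation, rather than the minimum and maximum of the runs, keeps the interval robust against outlier predictions.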
In Figs. 4A and 4B there is shown the determined uncertainty of the position for the x and y coordinate, respectively. The uncertainty is represented by the areas below and above the solid curve indicating the mean value for the respective coordinate/position. A second solid curve represents the position as per GPS (ground truth).
When overlaying the results over a geographical map as shown in Fig. 5, the GPS position and the determined mean location are shown as circles, with the uncertainty represented as an ellipse forming the envelope around the uncertainties as determined for that position by the model.
Embodiments of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, in tangibly-embodied computer software or firmware, in computer hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible non-transitory program carrier for execution by, or to control the operation of, data processing apparatus. Alternatively, or in addition, the program instructions can be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus. The computer storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them.
The term "data processing apparatus" encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit). The apparatus can also include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
A computer program (which may also be referred to or described as a program, software, a software application, a module, a software module, a script, or code) can be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data, e.g., one or more scripts stored in a markup language document, in a single file dedicated to the program in question, or in multiple coordinated files, e.g., files that store one or more modules, sub programs, or portions of code. A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).
Computers suitable for the execution of a computer program can, by way of example, be based on general or special purpose microprocessors or both, or any other kind of central processing unit. Generally, a central processing unit will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a central processing unit for performing or executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device, e.g., a universal serial bus (USB) flash drive, to name just a few.
Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
To provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having a display device for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's client device in response to requests received from the web browser.
Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network ("LAN") and a wide area network ("WAN"), e.g., the Internet.
The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. For example, the actions recited in the claims can be performed in a different order and still achieve desirable results. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing may be advantageous.
Claims
1. A method of determining a location of a mobile device communicating with one or more gateways of a long-range low-power wide-area network (LoRaWAN), comprising the steps of:
- using a model capable of accepting input values relating to a signal strength measure (RSS) of electromagnetic signals transmitted between the mobile device to be localized or tracked and the one or more gateways of the long-range low-power wide-area network (LoRaWAN) having known locations;
- collecting the signals while the mobile device is moving across the reception area of the long-range low-power wide-area network (LoRaWAN) and either continuously or intermittently transmits signals to the one or more gateways;
- evaluating the signals to derive the values representative of the signal strength measure (RSS; RSSI) required as input signals to the model;
- using the input signals as input to the model; and
- using the model to generate values representative of the location of the mobile device and to make available the generated location values together with a location dependent measure of uncertainty in the location.
2. The method of claim 1, wherein the model includes elements (20,32) which determine the influence of prior input values or prior locations in the generation of current values representative of the location of the mobile device.
3. The method of claim 1 or 2, wherein the model is a trained neural network model having a sequential encoder stage (20,30) and a decoder stage (28,37).
4. The method of claim 3, wherein the encoder comprises a recurrent neural network (RNN), a long short-term memory (LSTM) network, or the encoder stage of a transformer.
5. The method of any of the preceding claims, wherein the location values are treated in the model as continuous.
6. The method of any of the preceding claims, wherein a temporal sequence (22) of 3 to 15 successive input signals is used as a simultaneous input to the model and wherein an input layer of the model has sufficient parallel units to receive said sequence.
7. The method of any of claims 2 to 6, wherein the model comprises self-attention elements (32) determining weights representative of the importance of other locations or other input signals representative of locations to the generated values representative of the actual location and wherein the output of the self-attention elements is used as input to the decoder (37).
8. The method of any of the preceding claims, wherein the model comprises a position embedding (31) encoding the input signals in accordance with their positions within a temporal sequence of input signals.
9. The method of any of claims 2 to 8, wherein the location dependent uncertainty is determined by testing a trained version of the neural network model on a further set of data not used during training of the model with a multitude of different dropout conditions.
10. The method of claim 9, wherein dropouts include horizontal dropouts between parallel units of the neural network.
11. The method of claim 9, wherein dropouts include horizontal dropouts between parallel units of the neural network and vertical dropouts between vertically stacked layers of the neural network model.
12. A system comprising data processing equipment configured to perform the operations according to any one of claims 1 to 11.
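As a purely illustrative sketch of the Monte-Carlo-dropout uncertainty estimation of claims 9 to 11, the fragment below replaces the trained neural network of the claims with a trivial linear stand-in and shows only the principle: the model is evaluated many times with dropout left active at inference time, here as a "horizontal" dropout that randomly zeroes individual parallel input units, and the spread of the resulting location predictions serves as the location dependent measure of uncertainty. The function name, the weight layout and the linear model are assumptions made for this sketch, not part of the claimed method.

```python
import random
import statistics

def mc_dropout_localize(rss_seq, weights, n_passes=100, p_drop=0.1, seed=0):
    """Run repeated stochastic forward passes of a (here trivially linear)
    model over a sequence of RSS input values and return the mean predicted
    2-D location together with a per-coordinate uncertainty estimate
    (standard deviation over the passes)."""
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n_passes):
        # Dropout kept active at inference: randomly zero individual
        # input units (horizontal dropout across the parallel units).
        kept = [v if rng.random() > p_drop else 0.0 for v in rss_seq]
        xs.append(sum(w * v for w, v in zip(weights["x"], kept)))
        ys.append(sum(w * v for w, v in zip(weights["y"], kept)))
    mean = (statistics.fmean(xs), statistics.fmean(ys))
    sigma = (statistics.stdev(xs), statistics.stdev(ys))
    return mean, sigma
```

In a full implementation the linear combination would be replaced by the trained encoder-decoder network, and a "vertical" dropout between stacked layers (claim 11) would be applied in the same inference loop; the mean and standard deviation over the passes are then reported as the generated location values and their location dependent uncertainty.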