TECHNICAL FIELD
[0001] This disclosure relates to computer technology, and more particularly to the field
of computer vision image technology.
BACKGROUND
[0002] When a user uses a navigation product, whether driving navigation or walking navigation,
it is necessary to obtain the position of the user in real time to accurately plan
the route of the user.
[0003] GPS (Global Positioning System), a widely used positioning scheme, is susceptible
to satellite conditions, weather conditions, and data link transmission conditions.
For example, GPS is not available in on/under-bridge scenarios, main and auxiliary
road scenarios, and indoor and tall-building dense commercial areas. Therefore, there is
a need for a new positioning method to solve the navigation positioning problem in
scenarios such as on/under the viaduct, the main and auxiliary roads, the indoor
and tall-building dense commercial areas, and the like.
SUMMARY
[0004] The embodiments of the disclosure provide a method and an apparatus for visual positioning
based on mobile edge computing.
[0005] According to a first aspect, an embodiment of the present disclosure provides a method
for visual positioning based on mobile edge computing, including:
receiving, by a mobile edge computing node, an environment image captured by a to-be-positioned
device in an area covered by the mobile edge computing node;
determining, by the mobile edge computing node, a target image matching the environment
image from a plurality of candidate images, and calculating position and pose information
of the to-be-positioned device based on position and pose information of a device
for capturing the target image; and
sending, by the mobile edge computing node, the position and pose information of the
to-be-positioned device to the to-be-positioned device, so that the to-be-positioned
device determines positioning information in an electronic map according to the position
and pose information.
[0006] According to a second aspect, an embodiment of the present disclosure provides an
apparatus for visual positioning based on mobile edge computing, including:
a receiving module, configured to receive an environment image captured by a to-be-positioned
device in an area covered by the mobile edge computing node;
a calculation module, configured to determine a target image matching the environment
image from a plurality of candidate images, and calculate position and pose information
of the to-be-positioned device based on position and pose information of a device
for capturing the target image; and
a sending module, configured to send the position and pose information of the to-be-positioned
device to the to-be-positioned device, so that the to-be-positioned device determines
positioning information in the electronic map according to the position and pose information.
[0007] According to a third aspect, an embodiment of the present disclosure provides an
electronic device including:
at least one processor; and
a memory in communication with the at least one processor; where,
the memory stores instructions executable by the at least one processor, the instructions
being executed by the at least one processor to enable the at least one processor
to perform the method for visual positioning based on mobile edge computing provided
in any of the embodiments.
[0008] In a fourth aspect, an embodiment of the present disclosure further provides a non-transitory
computer-readable storage medium storing computer instructions for causing a computer
to perform the method for visual positioning based on mobile edge computing provided
in any of the embodiments.
[0009] In a fifth aspect, an embodiment of the present disclosure provides a computer program
which, when executed by a computer, causes the computer to perform the method for
visual positioning based on mobile edge computing provided in any of the embodiments.
[0010] According to an embodiment of the present disclosure, a positioning method based
on computer vision is adopted, and the positioning method is particularly suitable
for positioning in a complex scenario.
[0011] It should be understood that the description in this section is not intended to identify
key or critical features of the embodiments of the disclosure, nor is it intended
to limit the scope of the disclosure. Other features of the present disclosure will
become readily apparent from the following description.
BRIEF DESCRIPTION OF THE DRAWINGS
[0012] The drawings are intended to provide a better understanding of the present disclosure
and are not to be construed as limiting the disclosure, where:
FIG. 1a is a flow chart of a first method for visual positioning based on mobile edge
computing in an embodiment of the present disclosure;
FIG. 1b is a schematic diagram of the coverage of an MEC node according to an embodiment
of the present disclosure;
FIG. 2a is a flow chart of a second method for visual positioning based on mobile
edge computing in an embodiment of the present disclosure;
FIG. 2b is a schematic diagram of a calculation flow of position and pose information
according to an embodiment of the present disclosure;
FIG. 3 is a flowchart of a third method for visual positioning based on mobile edge
computing according to an embodiment of the present disclosure;
FIG. 4a is a flowchart of a fourth method for visual positioning based on mobile edge
computing according to an embodiment of the present disclosure;
FIG. 4b is a flowchart of a system for visual positioning based on mobile edge computing
according to an embodiment of the present disclosure;
FIG. 5 is a block diagram of an apparatus for visual positioning based on mobile edge
computing according to an embodiment of the present disclosure;
FIG. 6 is a block diagram of a mobile edge computing node adapted to implement the
method for visual positioning based on mobile edge computing according to embodiments
of the present disclosure.
DETAILED DESCRIPTION OF EMBODIMENTS
[0013] Exemplary embodiments of the present disclosure are described below in connection
with the accompanying drawings, in which various details of the embodiments of the
present disclosure are included to facilitate understanding, and are to be considered
as exemplary only. Accordingly, one of ordinary skill in the art will recognize that
various changes and modifications may be made to the embodiments described herein
without departing from the scope and spirit of the present disclosure. Also, for clarity
and conciseness, descriptions of well-known functions and structures are omitted from
the following description.
[0014] According to an embodiment of the present disclosure, FIG. 1a is a flowchart of a
first method for visual positioning based on mobile edge computing in the embodiment
of the present disclosure. The embodiment of the present disclosure is applicable
to a case where a device needs to be positioned, and is particularly applicable to a case
where GPS is unavailable in scenarios such as on/under the viaduct, the main and auxiliary
roads, and the indoor and tall-building dense commercial areas. The method is performed
by an apparatus for visual positioning based on mobile edge computing, which is implemented
in software and/or hardware and is specifically arranged in a mobile edge computing
node having a certain data computing capability.
[0015] As shown in FIG. 1a, a method for visual positioning based on mobile edge computing
includes S110 to S130.
[0016] S110 includes receiving, by the mobile edge computing node, an environment image captured
by a to-be-positioned device in an area covered by the mobile edge computing node.
[0017] For ease of description and differentiation, a device that needs to be positioned is
referred to as a to-be-positioned device, for example, a mobile terminal such as a
mobile phone or a smart watch, or a fixed terminal such as a desktop computer.
[0018] When the to-be-positioned device is located in a scenario where GPS is not available,
such as on/under the viaduct, on the main and auxiliary roads, or in the indoor and
tall-building dense commercial areas, a camera may be turned on to capture the environment
around the to-be-positioned device, so as to obtain an environment image. In order to improve
the positioning accuracy and highlight the characteristics of the geographical location,
photographs of landmark buildings around the device generally need to be taken.
[0019] After the shooting is completed, the environment image is transmitted to the nearest
mobile edge computing (MEC) node. The MEC node provides an IT service environment and
computing and storage functions within the radio access network (RAN).
The MEC node is logically independent of the rest of the network, which is important
for applications with high security requirements. In addition, MEC nodes generally
have high computational power and are therefore particularly suitable for analyzing
and processing large amounts of data. Meanwhile, since the MEC node is geographically
close to the user or the information source, the delay of the network in response
to the user request is greatly reduced, and the possibility of network congestion
in the transmission network and the core network part is also reduced. Different MEC
nodes have different coverage areas, so that a plurality of MEC nodes process environment
images transmitted in different coverage areas.
[0020] FIG. 1b is a schematic diagram of MEC node coverage according to an embodiment of
the present disclosure. The to-be-positioned device at time T1 is located in the coverage
area of the first MEC node, and subsequently, the to-be-positioned device at time
T2 is located in the coverage area of the second MEC node. The to-be-positioned device
sends the environment image captured at time T1 to the first MEC node, and sends the
environment image captured at time T2 to the second MEC node.
[0021] S120 includes determining, by the mobile edge computing node, a target image matching
the environment image from a plurality of candidate images, and calculating position
and pose information of the to-be-positioned device according to position and pose
information of the device for capturing the target image.
[0022] In the present embodiment, the MEC node pre-stores a plurality of candidate images
and the position and pose information of the device that captured each candidate image.
The plurality of candidate images are images taken at different locations, for example,
images taken from multiple locations for multiple landmark buildings. The position
and pose information includes position information and pose information, that is,
information of six degrees of freedom (including translation along the x-axis, translation
along the y-axis, translation along the z-axis, rotation about the x-axis, rotation
about the y-axis, and rotation about the z-axis) of the device in the earth coordinate
system (including the x-axis, the y-axis, and the z-axis).
[0023] A candidate image matches the environment image when it is consistent with the image
content of the environment image, for example, when the two images include the same kinds
of entities with the same orientations. Optionally, the MEC node also stores the entity
types and orientations identified in advance in each candidate image. The entity types
and orientations in the environment image are identified, a candidate image consistent
with them is selected as the target image, and the position and pose information of the
device for capturing the target image may be regarded as consistent with the position
and pose information of the to-be-positioned device.
[0024] S130 includes sending, by the mobile edge computing node, the position and pose information
of the to-be-positioned device to the to-be-positioned device, so that the to-be-positioned
device determines the positioning information in the electronic map according to the
position and pose information.
[0025] After receiving the position and pose information, the to-be-positioned device determines
the positioning information according to the position and pose information, and displays
the positioning information in an electronic map running in the to-be-positioned device.
[0026] Illustratively, in the two-dimensional electronic map, a position in the electronic
map is determined based on a translation position along the x-axis and the y-axis,
and an orientation in the electronic map is determined based on an angle of rotation
about the z-axis.
[0027] Illustratively, in the three-dimensional electronic map, a position in the electronic
map is determined based on a translation position along the x-axis, the y-axis, and
the z-axis, and an orientation in the electronic map is determined based on an angle
of rotation about the x-axis, the y-axis, and the z-axis.
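By way of illustration only, the following minimal Python sketch shows how the 6-DoF position and pose information described above could be converted into 2D and 3D map placements; the function and field names are hypothetical, and the Euler-angle factorization is an assumption, since the disclosure does not fix a convention.

```python
import numpy as np

def pose_to_map_placement(t, R):
    """Convert a 6-DoF pose (translation t, rotation matrix R, both in
    the earth coordinate system) into 2D and 3D map placements.

    t: (3,) translation along the x-, y-, and z-axes.
    R: (3, 3) rotation matrix.
    """
    # 2D map: position from the x/y translation, orientation from the
    # rotation about the z-axis (yaw), as in paragraph [0026].
    yaw = np.arctan2(R[1, 0], R[0, 0])
    placement_2d = {"x": t[0], "y": t[1], "heading": yaw}

    # 3D map: full translation plus rotations about all three axes, as
    # in paragraph [0027] (assuming R = Rz(yaw) @ Ry(pitch) @ Rx(roll);
    # map engines may use other conventions).
    pitch = np.arcsin(-R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    placement_3d = {"x": t[0], "y": t[1], "z": t[2],
                    "roll": roll, "pitch": pitch, "yaw": yaw}
    return placement_2d, placement_3d
```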
[0028] In this embodiment, the MEC node is used as an execution body, and a plurality of
candidate images and the position and pose information of the device for capturing the
candidate images are pre-deployed in the MEC node to form a localized, close-distance
deployment, so that the time consumed by data in network transmission can be effectively
reduced, the requirements on network backhaul bandwidth and network load can be reduced,
and the demand for real-time, reliable positioning can be satisfied in practical applications.
By determining the target image matching the environment image from a plurality of
candidate images, calculating the position and pose information of the to-be-positioned
device according to the position and pose information of the device for capturing the
target image, and obtaining the positioning information therefrom, the positioning
information can be effectively obtained by running a visual positioning algorithm in
the MEC node using a computer vision-based positioning method, regardless of whether the
user turns on GPS positioning. Even when the to-be-positioned device is located in a
scenario where GPS is not available, such as on/under the viaduct, on the main and
auxiliary roads, or in the indoor and tall-building dense commercial areas,
high-precision positioning can still be performed.
[0029] In the above-described embodiment and the following embodiments, the plurality of
candidate images are taken within the coverage area of the MEC node. Since the environment
image received by the MEC node is captured by a device within the coverage area, the
target image matching it should also be captured within the coverage area of the MEC node.
When determining the target image, only a small number of candidate images need to be
matched against the environment image, thereby effectively accelerating the calculation
of the visual positioning.
[0030] According to an embodiment of the present disclosure, FIG. 2a is a flowchart of a
second method for visual positioning based on mobile edge computing in the embodiment
of the present disclosure. The embodiment of the present disclosure optimizes a calculation
process of position and pose information on the basis of the technical solutions of
the above-mentioned embodiments.
[0031] A second method for visual positioning based on mobile edge computing, as shown in
FIG. 2a, includes S210 to S240.
[0032] S210 includes receiving, by the mobile edge computing node, an environment image captured
by a to-be-positioned device in an area covered by the mobile edge computing node.
[0033] S220 includes determining, by the mobile edge computing node, a target image matching
the environment image from the plurality of candidate images.
[0034] FIG. 2b is a schematic diagram of a calculation flow of position and pose information
according to an embodiment of the present disclosure. As shown in FIG. 2b, the MEC
node performs feature extraction on the environment image to obtain image features of
the environment image. Specifically, a feature extraction model based on SIFT (Scale-Invariant
Feature Transform) or based on a deep neural network, such as NetVLAD, may be selected.
NetVLAD is a convolutional neural network model with a VLAD (Vector of Locally Aggregated
Descriptors) layer.
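For concreteness, a minimal sketch of the SIFT branch of this feature extraction step is given below using OpenCV; the disclosure does not mandate OpenCV, and a NetVLAD model would instead output a single global descriptor per image.

```python
import cv2

def extract_sift_features(image_path):
    """Extract local SIFT features from one image.

    Returns the keypoints and a (num_keypoints, 128) float32
    descriptor array.
    """
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(image, None)
    return keypoints, descriptors
```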
[0035] In this embodiment, as shown in FIG. 2b, a server (such as a cloud server) performs
feature extraction on each candidate image in advance to obtain the image features of
each candidate image, sends the image features of the candidate images to the MEC
node of the corresponding coverage area according to the coverage area of the MEC node,
and stores the image features of the candidate images in the image feature library
of the MEC node.
[0036] The MEC node then searches for the target image matching the image features of the
environment image from the plurality of candidate images using an approximate nearest
neighbor search algorithm. In this embodiment, the target image matching the environment
image is determined by image feature matching, and the feature matching is performed
by an approximate nearest neighbor search algorithm. The approximate nearest neighbor
search algorithm may be graph-based, tree-based, or hash-based: for the image features
of a given environment image, the k most similar image features are found from the image
features of the plurality of candidate images, such as the aforementioned image feature
library, where k is a natural number, such as 1.
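As one possible realization of this search (an assumption, since the disclosure names no library), the sketch below indexes the candidate images' global descriptors with FAISS and retrieves the k most similar candidates.

```python
import numpy as np
import faiss  # example approximate nearest neighbor library

def build_feature_library(candidate_features):
    """Index (num_candidates, d) float32 global descriptors."""
    d = candidate_features.shape[1]
    # IndexFlatL2 is an exact index; a graph-based ANN variant such as
    # faiss.IndexHNSWFlat(d, 32) could be substituted for large libraries.
    index = faiss.IndexFlatL2(d)
    index.add(candidate_features)
    return index

def search_target_images(index, env_feature, k=1):
    """Return the indices of the k candidate images whose features are
    closest to the environment image's feature."""
    env_feature = np.asarray(env_feature, dtype=np.float32).reshape(1, -1)
    distances, indices = index.search(env_feature, k)
    return indices[0]
```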
[0037] According to the present embodiment, image matching is performed from an image feature
dimension, a matched target image can be accurately found, and matching efficiency
can be effectively improved by an approximate nearest neighbor search algorithm.
[0038] S230 includes calculating, by the mobile edge computing node, a matching feature point
pair set between the environment image and the target image.
[0039] The entity types in the environment image and the target image are the same, but
the entity orientations may be slightly different, that is, some feature points in
the image features are not matched. In order to improve the positioning accuracy,
the position and pose information of the device for capturing the target image needs
to be adjusted according to the feature point pair set to obtain the position and pose
information of the to-be-positioned device.
[0040] Optionally, a best-bin-first algorithm or a Random Sample Consensus (RANSAC) algorithm
is used to find a plurality of matching feature point pairs from the image features
of the environment image and the image features of the target image to form the feature
point pair set. In FIG. 2b, the pixels of the original images to which the feature point
pairs map are connected, visually representing the matching relationship of the feature
point pairs.
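One common way to realize this step, sketched below with OpenCV, is to match descriptors with a k-nearest-neighbor matcher, keep matches passing Lowe's ratio test, and let RANSAC reject outlier pairs via two-view geometry; the disclosure only requires that a best-bin-first or RANSAC style procedure be used, so this is an illustrative choice.

```python
import cv2
import numpy as np

def match_feature_point_pairs(desc_env, kp_env, desc_tgt, kp_tgt):
    """Build the matching feature point pair set between the environment
    image and the target image (needs at least 8 tentative pairs)."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn_matches = matcher.knnMatch(desc_env, desc_tgt, k=2)

    # Lowe's ratio test discards ambiguous matches.
    good = [pair[0] for pair in knn_matches
            if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]

    pts_env = np.float32([kp_env[m.queryIdx].pt for m in good])
    pts_tgt = np.float32([kp_tgt[m.trainIdx].pt for m in good])

    # RANSAC keeps only pairs consistent with a single two-view geometry.
    _, inlier_mask = cv2.findFundamentalMat(pts_env, pts_tgt, cv2.FM_RANSAC)
    inliers = inlier_mask.ravel().astype(bool)
    return pts_env[inliers], pts_tgt[inliers]
```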
[0041] S240 includes calculating, by the mobile edge computing node, the position and pose
information of the to-be-positioned device according to the position and pose information
of the device for capturing the target image and the feature point pair set.
[0042] As shown in FIG. 2b, $I_1$ and $I_2$ are the matched target image and the environment
image, respectively. $P_1$ and $P_2$ are the coordinates of a point $P$ in actual space
in the coordinate system of the device for capturing the target image and in the
coordinate system of the to-be-positioned device, respectively. $p_1$ and $p_2$ are the
pixels of $P_1$ and $P_2$ in the corresponding images, and $O_1$ and $O_2$ are the optical
centers of the cameras for capturing the target image and the environment image,
respectively, with the pinhole projection equation (1), in which $s_1$ and $s_2$ are
scale factors:

$$s_1 p_1 = K_1 P_1, \qquad s_2 p_2 = K_2 P_2 \tag{1}$$

here, the rotation matrix from the target image to the environment image is set to
$R$, the translation vector is set to $t$, and there is equation (2):

$$P_2 = R P_1 + t \tag{2}$$
[0043] Since the position and pose information of the device for capturing the target image
is stored in the MEC node in advance, to obtain the position and pose information
of the to-be-positioned device, the motion from the device for capturing the target
image to the to-be-positioned device needs to be estimated; that is, the purpose of
the estimation is to solve for R and t.
[0044] According to equation (2) and the principle of pinhole imaging, the epipolar constraint
of equation (3) is obtained:

$$p_2^{T} K_2^{-T} [t]_{\times} R K_1^{-1} p_1 = 0 \tag{3}$$

where $K_1$ and $K_2$ are the internal parameter matrices of the cameras for capturing
the target image and the environment image, respectively, and $[t]_{\times}$ denotes the
antisymmetric matrix of $t$. R and t can be solved from the matched feature point pairs
based on equation (3). Then, on the basis
of the position and pose information of the device for capturing the target image,
the position and pose information of the to-be-positioned device is obtained by rotating
and translating according to R and t.
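Under the simplifying assumption of a single shared camera matrix K (i.e., K1 = K2), equation (3) can be solved as sketched below with OpenCV: RANSAC estimates the essential matrix E = [t]×R from the matched pixel pairs, and recoverPose decomposes it into R and t, with t recovered only up to scale.

```python
import cv2

def estimate_relative_motion(pts_tgt, pts_env, K):
    """Solve equation (3) for the motion (R, t) from the device that
    captured the target image to the to-be-positioned device.

    pts_tgt, pts_env: (N, 2) matched pixel coordinates (p1 and p2).
    K: 3x3 intrinsic matrix, assumed shared by both cameras.
    """
    E, mask = cv2.findEssentialMat(pts_tgt, pts_env, K,
                                   method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    # Decompose E into R and a unit-scale translation t, keeping the
    # solution that places the points in front of both cameras.
    _, R, t, _ = cv2.recoverPose(E, pts_tgt, pts_env, K, mask=mask)
    return R, t
```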
[0045] A group of position and pose information of the to-be-positioned device can be obtained
based on each pair of feature points, and a plurality of groups of position and pose
information are obtained after the calculation over the feature point pair set is
completed. The final position and pose information is estimated from the multiple groups
of position and pose information by the least squares method.
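The aggregation can be realized in several ways; the sketch below, one illustrative choice, averages the translation estimates and computes a least-squares-style mean of the rotation estimates using scipy.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def fuse_pose_estimates(translations, rotations):
    """Fuse multiple position and pose estimates into a final one.

    translations: list of (3,) position estimates.
    rotations: list of (3, 3) rotation matrix estimates.
    """
    t_final = np.mean(np.asarray(translations), axis=0)
    # Rotation.mean() minimizes a squared distance on the rotation
    # group, analogous to the least squares estimate in the text.
    R_final = Rotation.from_matrix(np.asarray(rotations)).mean().as_matrix()
    return t_final, R_final
```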
[0046] In the present embodiment, the position and pose information of the to-be-positioned
device is calculated according to the position and pose information of the device
for capturing the target image and the feature point pair set, using the pose estimation
algorithm and the pinhole imaging principle; that is, the position and pose information
of the device for capturing the target image is adjusted to obtain the position and
pose information of the to-be-positioned device, thereby improving the positioning
accuracy.
[0047] According to an embodiment of the present disclosure, FIG. 3 is a flowchart of a
third method for visual positioning based on mobile edge computing in the embodiment
of the present disclosure. The embodiment of the present disclosure is optimized on
the basis of the technical solutions of the above-mentioned embodiments.
[0048] Optionally, the operation "the mobile edge computing node determines the target image
matching the environment image from the plurality of candidate images, and calculates
the position and pose information of the to-be-positioned device according to the
position and pose information of the device for capturing the target image" is specified
to "the mobile edge computing node determines the target image matching the environment
image through the visual positioning model, and calculates the position and pose information
of the to-be-positioned device according to the position and pose information of the
device for capturing the target image";
[0049] Optionally, before the operation "the mobile edge computing node receives the environmental
image captured by the to-be-positioned device within the coverage area of the mobile
edge computing node", an update mechanism of the MEC node is provided by adding "the
mobile edge computing node acquires, from a server, updated multiple candidate images,
position and pose information of the device for capturing the updated multiple candidate
images, and an updated visual positioning model."
[0050] A third method for visual positioning based on mobile edge computing, as shown in
FIG. 3, includes S310 to S340.
[0051] S310 includes acquiring, by the mobile edge computing node from the server, updated
multiple candidate images, position and pose information of a device for capturing
the updated multiple candidate images, and an updated visual positioning model.
[0052] In order to be able to continuously iterate the visual positioning effect of the
MEC node, the candidate images, the position and pose information, and the visual
positioning model of the MEC node need to be synchronously updated according to the
update situation in the server. Optionally, if the MEC node also stores the image
features of the candidate image, the image features of the candidate image need to
be synchronously updated.
[0053] Optionally, the MEC node periodically requests, from the server, the updated multiple
candidate images, position and pose information, and visual positioning model within
the coverage of the MEC node. Alternatively, when the server is updated, the updated
multiple candidate images, position and pose information, and visual positioning model
are pushed to the MEC node based on the coverage area of the MEC node.
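A periodic pull from the server might look like the sketch below; the endpoint URL, payload fields, and deploy_update helper are all hypothetical, since the disclosure does not specify the update protocol.

```python
import time

import requests  # hypothetical transport; any RPC mechanism would do

SERVER_URL = "https://example-map-server/api/v1/mec_updates"  # hypothetical

def poll_for_updates(mec_node_id, period_seconds=3600):
    """Periodically request the updated candidate images, capture poses,
    and visual positioning model for this MEC node's coverage area."""
    while True:
        resp = requests.get(SERVER_URL, params={"node_id": mec_node_id})
        resp.raise_for_status()
        update = resp.json()
        # Hypothetical fields mirroring S310.
        deploy_update(update["candidate_images"],
                      update["poses"],
                      update["visual_positioning_model"])
        time.sleep(period_seconds)

def deploy_update(images, poses, model_blob):
    ...  # store locally and hot-swap the model (left abstract here)
```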
[0054] S320 includes receiving, by the mobile edge computing node, an environment image captured
by a to-be-positioned device in an area covered by the mobile edge computing node.
[0055] S330 includes determining, by the mobile edge computing node, a target image matching
the environment image by using the visual positioning model, and calculating the position
and pose information of the to-be-positioned device according to the position and
pose information of the device for capturing the target image.
[0056] In the present embodiment, the calculation algorithm of the position and pose information
is encapsulated in a visual positioning model. Optionally, the visual positioning
model includes an image feature extraction unit, a similar candidate image calculation
unit, an image feature point matching unit, and a position and pose calculation unit.
The image feature extraction unit is configured to perform feature extraction on the
environment image to obtain image features of the environment image. The similar candidate
image calculation unit is configured to search for a target image matching the image
features of the environment image from the plurality of candidate images using an
approximate nearest neighbor search algorithm. The image feature point matching unit
is configured to calculate the matching feature point pair set between the environment
image and the target image. The position and pose calculation unit is configured to
calculate the position and pose information of the to-be-positioned device based on the
position and pose information of the device for capturing the target image and the
feature point pair set.
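The four units could be composed as in the following sketch, which reuses the earlier illustrative functions (extract_sift_features, search_target_images, match_feature_point_pairs, estimate_relative_motion); the class structure, the crude global descriptor, and the stored-pose convention are all assumptions, not features mandated by the disclosure.

```python
import numpy as np

class VisualPositioningModel:
    """Illustrative composition of the four units described above."""

    def __init__(self, index, candidates, K):
        self.index = index            # ANN index over candidate global features
        self.candidates = candidates  # dicts: keypoints, descriptors, pose (t0, R0)
        self.K = K                    # intrinsics, assumed shared by all cameras

    def position(self, env_image_path):
        # Image feature extraction unit (on the environment image).
        kp_env, desc_env = extract_sift_features(env_image_path)
        # Crude stand-in for a global descriptor; the index is assumed
        # to have been built over descriptors of the same kind.
        global_feat = desc_env.mean(axis=0)

        # Similar candidate image calculation unit.
        target_idx = int(search_target_images(self.index, global_feat, k=1)[0])
        target = self.candidates[target_idx]

        # Image feature point matching unit.
        pts_env, pts_tgt = match_feature_point_pairs(
            desc_env, kp_env, target["descriptors"], target["keypoints"])

        # Position and pose calculation unit: estimate the motion from
        # the target image's camera to the to-be-positioned device, then
        # adjust the stored pose (world-to-camera convention assumed;
        # the translation is known only up to scale).
        R_rel, t_rel = estimate_relative_motion(pts_tgt, pts_env, self.K)
        t0, R0 = target["pose"]
        return R_rel @ t0 + t_rel.ravel(), R_rel @ R0
```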
[0057] S340 includes sending, by the mobile edge computing node, the position and pose information
of the to-be-positioned device to the to-be-positioned device, so that the to-be-positioned
device determines the positioning information in the electronic map according to the
position and pose information.
[0058] The present embodiment can continuously iterate the visual positioning effect of
the MEC node by updating the candidate images, the position and pose information,
and the visual positioning model in the MEC node. By encapsulating the calculation
algorithm of the position and pose information in the visual positioning model, the
entire algorithm can be conveniently updated and maintained.
[0059] According to an embodiment of the present disclosure, FIG. 4a is a flowchart of a fourth method
for visual positioning based on mobile edge computing in the embodiment of the present
disclosure. The embodiment of the present disclosure optimizes a receiving process
of an environmental image on the basis of the technical solutions of the above-mentioned
embodiments.
[0060] A fourth method for visual positioning based on mobile edge computing, as shown in
FIG. 4a, includes S410 to S430.
[0061] S410 includes receiving, by the mobile edge computing node through the fifth-generation
mobile communication technology, an environment image captured by a to-be-positioned
device in an area covered by the mobile edge computing node.
[0062] FIG. 4b is a flowchart of a system for visual positioning based on mobile edge computing
according to an embodiment of the present disclosure. FIG. 4b includes a cloud server,
a core network, n MEC nodes, a plurality of 5G base stations connected to each MEC
node, and two handsets used as to-be-positioned devices.
[0063] Each MEC node downloads or updates, from the cloud server in advance, the multiple
candidate images captured by devices in its corresponding coverage area, the position
and pose information of the devices for capturing the candidate images, and a visual
positioning model. Further, the image features of the candidate images may also be
downloaded or updated from the cloud server.
[0064] When a problem occurs with GPS positioning during user navigation, an environment
image can be obtained by turning on the mobile phone camera to shoot a nearby conspicuous
building, and the environment image is uploaded through a 5G (5th-generation mobile
network) connection. After receiving the environment image, the 5G base station uploads
it, by selecting a nearby MEC node, to the MEC node covering the area in which the
handset is located. S420 and S430 are then performed by the MEC node.
[0065] S420 includes determining, by the mobile edge computing node, a target image matching
the environment image from a plurality of candidate images, and calculating the position
and pose information of the to-be-positioned device based on the position and pose
information of the device for capturing the target image.
[0066] S430 includes sending, by the mobile edge computing node, the position and pose information
of the to-be-positioned device to the to-be-positioned device, so that the to-be-positioned
device determines the positioning information in the electronic map according to the
position and pose information.
[0067] In this embodiment, the visual positioning algorithm and the candidate images are
deployed to the MEC node in advance based on the MEC deployment mode. After the to-be-positioned
device captures the environment image, a nearby MEC node is selected through 5G network
access selection; since the 5G network has the advantages of low delay and high concurrency,
the calculation of the visual positioning can be accelerated and visual positioning
with low delay can be provided.
[0068] According to an embodiment of the present disclosure, FIG. 5 is a structural diagram
of an apparatus for visual positioning based on mobile edge computing in the embodiment
of the present disclosure. The embodiment of the present disclosure is applicable
to a case in which a device needs to be positioned, and the apparatus is implemented in software
and/or hardware and is specifically configured in an MEC node having a certain data
operation capability.
[0069] Apparatus 500 for visual positioning based on mobile edge computing, as shown in
FIG. 5, includes a receiving module 501, a calculation module 502, and a transmitting
module 503.
[0070] The receiving module 501 is configured to receive an environment image captured by
a to-be-positioned device in an area covered by the mobile edge computing node.
[0071] The calculation module 502 is configured to determine, from a plurality of candidate
images, a target image matching the environment image, and calculate position and
pose information of the to-be-positioned device according to the position and pose
information of the device for capturing the target image.
[0072] The transmitting module 503 is configured to transmit the position and pose information
of the to-be-positioned device to the to-be-positioned device, so that the to-be-positioned
device determines the positioning information in the electronic map according to the
position and pose information.
[0073] In this embodiment, the MEC node is used as an execution body, and multiple candidate
images and the position and pose information of the device for capturing the candidate
images are pre-deployed in the MEC node to form a localized, close-distance deployment,
so that the time consumed by data in network transmission can be effectively reduced,
the requirements on network backhaul bandwidth and network load can be reduced, and the
demand for real-time, reliable positioning can be satisfied in practical applications.
By determining the target image matching the environment image from multiple candidate
images, calculating the position and pose information of the to-be-positioned device
according to the position and pose information of the device for capturing the target
image, and obtaining the positioning information therefrom, the positioning information
can be effectively obtained by running a visual positioning algorithm in the MEC node
using a computer vision-based positioning method, regardless of whether the user
turns on GPS positioning, even when the to-be-positioned device is in a scenario where
GPS is unavailable such as on/under the viaduct, on the main and auxiliary roads, or
in the indoor and tall-building dense commercial areas.
[0074] Further, the multiple candidate images are captured within the coverage area of the
mobile edge computing node.
[0075] Further, the calculation module 502 includes a determination unit, a feature point
pair set calculation unit, and a position and pose information calculation unit. The
determination unit is configured to determine a target image matching the environment
image from the multiple candidate images. The feature point pair set calculation unit
is configured to calculate the matching feature point pair set between the environment
image and the target image. The position and pose information calculation unit is
configured to calculate the position and pose information of the to-be-positioned
device based on the position and pose information of the device for capturing the
target image and the feature point pair set.
[0076] Further, the determination unit is specifically configured to perform feature extraction
on the environment image to obtain image features of the environment image, and to
search for the target image matching the image features of the environment image from
the multiple candidate images using an approximate nearest neighbor search algorithm.
[0077] Further, the calculation module 502 is configured to determine a target image matching
the environment image through the visual positioning model, and calculate the position
and pose information of the to-be-positioned device according to the position and
pose information of the device for capturing the target image.
[0078] Further, the apparatus further includes an updating module, configured to acquire,
from the server, updated multiple candidate images, position and pose information of
the device for capturing the updated multiple candidate images, and an updated visual
positioning model, before the environment image captured by the to-be-positioned
device in the coverage area of the mobile edge computing node is received.
[0079] Further, the receiving module 501 is specifically configured to receive the environment
image captured by the to-be-positioned device in the coverage area of the mobile edge
computing node through the fifth generation mobile communication technology.
[0080] The above-mentioned apparatus for visual positioning based on the mobile edge computing
can execute the method for visual positioning based on the mobile edge computing provided
in any one of the embodiments of the present disclosure, and has corresponding functional
modules and beneficial effects for executing the method for visual positioning based
on the mobile edge computing.
[0081] According to an embodiment of the present disclosure, the present disclosure further
provides an MEC node and a readable storage medium.
[0082] As shown in FIG. 6, FIG. 6 is a block diagram of an MEC node for implementing the
method for visual positioning based on the mobile edge computing according to the
embodiment of the present disclosure. The MEC node is intended to represent various
forms of digital computers, such as laptop computers, desktop computers, workstations,
personal digital assistants, servers, blade servers, mainframe computers, and other
suitable computers. The MEC node may also represent various forms of mobile devices,
such as personal digital assistants, cellular telephones, smart phones, wearable devices,
and other similar computing devices. The components shown herein, their connections
and relationships, and their functions are by way of example only and are not intended
to limit the implementation of the present disclosure as described and/or claimed
herein.
[0083] As shown in FIG. 6, the MEC node includes one or more processors 601, a memory 602,
and an interface for connecting components, including a high speed interface and a
low speed interface. The various components are interconnected by different buses
and may be mounted on a common motherboard or otherwise as desired. The processor
may process instructions executed within the MEC node, including instructions stored
in or on a memory to display graphical information of the GUI on an external input/output
device, such as a display device coupled to an interface. In other embodiments, multiple
processors and/or multiple buses may be used with multiple memories, if desired. Similarly,
multiple MEC nodes may be connected, with each MEC node providing some of the necessary
operations (e.g., as a server array, a set of blade servers, or a multiprocessor system).
In FIG. 6, one processor 601 is used as an example.
[0084] The memory 602 is a non-transitory computer-readable storage medium provided by the
present disclosure. The memory stores instructions executable by the at least one
processor to cause the at least one processor to perform the method for visual positioning
based on mobile edge computing provided according to the disclosure. The non-transitory
computer-readable storage medium of the present disclosure stores computer instructions
for causing a computer to perform the method for visual positioning based on mobile
edge computing provided according to the disclosure.
[0085] The memory 602, as a non-transitory computer-readable storage medium, may be used
to store non-transitory software programs, non-transitory computer-executable programs,
and modules, such as program instructions/modules corresponding to a method for visual
positioning based on mobile edge computing in an embodiment of the present disclosure
(e.g., including a receiving module 501, a computing module 502, and a sending module
503 as shown in FIG. 5). The processor 601 executes various functional applications
and data processing of the server by running non-transitory software programs, instructions,
and modules stored in the memory 602, i.e., implementing the method for visual positioning
based on mobile edge computing in the method embodiment described above.
[0086] The memory 602 may include a storage program area and a storage data area, where
the storage program area may store an operating system and an application program required
for at least one function, and the storage data area may store data and the like created
according to the use of the MEC node that implements the method for visual positioning
based on mobile edge computing. In addition, the memory 602 may include high speed random access
memory, and may also include non-transitory memory, such as at least one magnetic
disk storage device, flash memory device, or other non-transitory solid state storage
device. In some embodiments, memory 602 may optionally include remotely disposed memory
relative to processor 601, which may be connected via a network to an MEC node performing
a method for visual positioning based on mobile edge computing. Examples of such networks
include, but are not limited to, the Internet, enterprise intranets, local area networks,
mobile communication networks, and combinations thereof.
[0087] The MEC node performing the method for visual positioning based on the mobile edge
computing may further include an input device 603 and an output device 604. The processor
601, the memory 602, the input device 603, and the output device 604 may be connected
via a bus or otherwise, as illustrated in FIG. 6.
[0088] The input device 603 may receive input numeric or character information, and generate
key signal input related to user settings and functional control of the MEC node performing
the method for visual positioning based on mobile edge computing; examples include a
touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more
mouse buttons, a track ball, a joystick, or the like. The output device 604 may include
a display device, an auxiliary lighting device (e.g., an LED), a tactile feedback
device (e.g., a vibration motor), and the like. The display device may include, but
is not limited to, a liquid crystal display (LCD), a light emitting diode (LED) display,
and a plasma display. In some embodiments, the display device may be a touch screen.
[0089] Various embodiments of the systems and technologies described herein may be implemented
in digital electronic circuit systems, integrated circuit systems, dedicated ASICs
(application specific integrated circuits), computer hardware, firmware, software,
and/or combinations thereof. These various embodiments may include: being implemented
in one or more computer programs that can be executed and/or interpreted on a programmable
system that includes at least one programmable processor. The programmable processor
may be a dedicated or general-purpose programmable processor, and may receive data
and instructions from a storage system, at least one input device, and at least one
output device, and transmit the data and instructions to the storage system, the at
least one input device, and the at least one output device.
[0090] These computing programs (also referred to as programs, software, software applications,
or code) include machine instructions of the programmable processor and may be implemented
using high-level procedural and/or object-oriented programming languages, and/or
assembly/machine languages. As used herein, the terms "machine readable
medium" and "computer readable medium" refer to any computer program product, device,
and/or apparatus (for example, magnetic disk, optical disk, memory, programmable logic
apparatus (PLD)) used to provide machine instructions and/or data to the programmable
processor, including machine readable medium that receives machine instructions as
machine readable signals. The term "machine readable signal" refers to any signal
used to provide machine instructions and/or data to the programmable processor.
[0091] In order to provide interaction with a user, the systems and technologies described
herein may be implemented on a computer, the computer includes: a display apparatus
for displaying information to the user (for example, CRT (cathode ray tube) or LCD
(liquid crystal display) monitor); and a keyboard and a pointing apparatus (for example,
mouse or trackball), and the user may use the keyboard and the pointing apparatus
to provide input to the computer. Other types of devices may also be used to provide
interaction with the user; for example, feedback provided to the user may be any form
of sensory feedback (for example, visual feedback, auditory feedback, or tactile feedback);
and any form (including acoustic input, voice input, or tactile input) may be used
to receive input from the user.
[0092] The systems and techniques described herein may be implemented in a computing system
including a backend component (e.g., as a data server), or a computing system including
a middleware component (e.g., an application server), or a computing system including
a front-end component (e.g., a user computer having a graphical user interface or
a web browser through which a user may interact with embodiments of the systems and
techniques described herein), or a computing system including any combination of such
backend component, middleware component, or front-end component. The components of
the system may be interconnected by any form or medium of digital data communication
(e.g., a communication network). Examples of communication networks include local
area networks (LANs), wide area networks (WANs), the Internet, and blockchain networks.
[0093] The computer system may include a client and a server. The client and server are
typically remote from each other and typically interact through a communication network.
The relationship between the client and the server is generated by a computer program
running on the corresponding computer and having a client-server relationship with
each other.
[0094] It should be understood that the various forms of processes shown above may be used
to reorder, add, or delete steps. For example, the steps described in the present
disclosure may be performed in parallel, sequentially, or in different orders. As
long as the desired results of the technical solution disclosed in the present disclosure
can be achieved, no limitation is made herein.
[0095] The above specific embodiments do not constitute limitation on the protection scope
of the present disclosure. Those skilled in the art should understand that various
modifications, combinations, sub-combinations and substitutions may be made according
to design requirements and other factors. Any modification, equivalent replacement
and improvement made within the spirit and principle of the present disclosure shall
be included in the protection scope of the present disclosure.