[0001] The present invention relates to a method and system for a tactical visualization.
In particular, the present invention relates to an on-helmet apparatus or an on-gun
apparatus.
[0002] The prior art defines a so-called head-up display (HUD), which is any transparent display that
presents data without requiring users to look away from their usual viewpoints. The
origin of the name stems from a pilot being able to view information with the head
positioned "up" and looking forward, instead of angled down looking at lower instruments
(source: Wikipedia).
[0003] Although they were initially developed for military aviation, HUDs are now used in
commercial aircraft, automobiles, computer gaming, and other applications.
[0004] It would be advantageous to provide tactical visualization for man, especially supporting
team operations.
[0005] The publication US 20130229716 A1, entitled "Tactical riflescope with smartphone dock", discloses optical systems and
devices that enable a mobile device (e.g., smartphone or other mobile phone, personal
media player, and/or other personal electronic device) to be coupled with an optical
device (e.g., a riflescope, spotting scope, etc.) such that information shown on the
display of the mobile device is viewable to a user looking into the eyepiece of the
optical device. Additionally or alternatively, an image from the optical device can
be communicated to the mobile device. A modular design can utilize an apparatus configured
to encase a mobile device, which can be coupled with the optical device via an optical
and/or electrical interface.
[0006] The aim of the development of the present invention is an improved and cost-effective
method and system for a tactical visualization.
[0007] An object of the present invention is a system for a tactical visualization, the
system comprising: a data bus communicatively coupled to a memory; a controller communicatively
coupled to the system bus; the system further comprising: means for visualizing a tactical
situation; at least one camera; wherein the controller is configured to process data
from the camera in order to detect objects and identify objects of interest and assign
locations to these objects versus a reference point in space as well as determine
probabilities of handling each object of interest; a radio transceiver configured
to transmit the processed data to a command center and to receive data from the command
center; wherein the controller is configured to execute the steps of the method according
to the present invention.
[0008] Preferably, the means for visualizing is an on-helmet display or an on-gun display
or optical signaling means.
[0009] Preferably, the camera is configured to sense visible light as well as infrared light
and is supplemented with microwave radar configured to detect objects.
[0010] Preferably, the controller processes data based on object recognition or movement
detection or detection of sources of heat.
[0011] Preferably, the controller is connected to a user control means configured to allow
a person using the system to control its functions.
[0012] Preferably, the system comprises an orientation sensor, a geolocation system and
an inertial navigation system.
[0013] Preferably, the visualizing means is configured to provide augmented reality capability
wherein a picture from the camera is mixed with information received from the command
center and/or the controller.
[0014] Preferably, the system is configured to place graphical indicators in proximity to
objects and objects of interest.
[0015] Preferably, the graphical indicators also include a location of a reference point,
location of each of the objects of interest with reference to the reference point,
identifier of a team member assigned to a given object of interest and priority of
each object of interest.
[0016] Another object of the present invention is a method for a tactical visualization,
the method comprising the steps of: identifying objects and objects of interest on
an image acquired by a camera; generating a descriptor for each identified object;
transmitting the collected data to a command center; awaiting data from the command
center and receiving the data; processing the received information and superimposing
it on the image from the camera; and providing movement guidance on the composite image
and displaying the composite image on a visualizing means.
[0017] Preferably, the descriptor comprises the type of the object, its location with reference
to the camera, information regarding the location of the system and the orientation of the
system.
[0018] Another object of the present invention is a computer program comprising program
code means for performing all the steps of the computer-implemented method according
to the present invention when said program is run on a computer.
[0019] Another object of the present invention is a computer readable medium storing computer-executable
instructions performing all the steps of the computer-implemented method according
to the present invention when executed on a computer.
[0020] These and other objects of the invention presented herein are accomplished by providing
a method and system for a tactical visualization. Further details and features of
the present invention, its nature and various advantages will become more apparent
from the following detailed description of the preferred embodiments shown in a drawing,
in which:
Fig. 1 presents an on-helmet system according to the present invention;
Figs. 2 A-C present examples of graphical indicators;
Fig. 3 presents an embodiment of an on-helmet system according to the present invention;
Fig. 4 presents a method according to the present invention; and
Fig. 5 shows an exemplary content presented on a display.
NOTATION AND NOMENCLATURE
[0021] Some portions of the detailed description which follows are presented in terms of
data processing procedures, steps or other symbolic representations of operations
on data bits that can be performed on computer memory. When a computer executes
such logical steps, physical manipulations of physical quantities are required.
[0022] Usually these quantities take the form of electrical or magnetic signals capable
of being stored, transferred, combined, compared, and otherwise manipulated in a computer
system. For reasons of common usage, these signals are referred to as bits, packets,
messages, values, elements, symbols, characters, terms, numbers, or the like.
[0023] Additionally, all of these and similar terms are to be associated with the appropriate
physical quantities and are merely convenient labels applied to these quantities.
Terms such as "processing" or "creating" or "transferring" or "executing" or "determining"
or "detecting" or "obtaining" or "selecting" or "calculating" or "generating" or the
like, refer to the action and processes of a computer system that manipulates and
transforms data represented as physical (electronic) quantities within the computer's
registers and memories into other data similarly represented as physical quantities
within the memories or registers or other such information storage.
[0024] According to the present invention, a tactical operation environment is, for example,
a firefighting operation, a police operation, a tactical game such as paintball, or the
like.
[0025] In a tactical operation environment it is crucial that every person sees the targets
(objects of interest in general) that are within reach and is also aware of the probability
of servicing or eliminating such targets (objects of interest).
[0026] In other words, a person needs to be presented with a tactical image of a location
of operation with superimposed information on targets. For example, in the case of police
or military operations, such information may relate to a probability of eliminating a target,
with an identification of the target hit difficulty/probability level.
[0027] For example, suppose there are three targets within the weapon's range: one much
closer than the others, a second further away but at the same elevation as the shooter,
and a third much higher than the shooter. The target easiest to hit is then the closest
one; the second target is more difficult, and the third is the most difficult of the
three.
[0028] The task of the electronic equipment of a gun or helmet is to identify the locations
of potential objects of interest versus the location of the person holding the gun (or
wearing the helmet), and to calculate a distance and an angle of elevation of the barrel.
In case there is more than one object of interest, the system calculates a probability
of servicing (or hitting, eliminating, handling) each of the objects of interest. For
example, the object of interest easiest to service may have the highest priority. The
aforementioned calculations shall be executed taking into account the skills of the
particular person operating the gun. A non-limiting sketch of this geometry and of an
illustrative probability formula is given below.
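The following is a minimal, non-limiting sketch of this calculation, assuming the shooter and each object are given as (x, y, z) coordinates in metres. The probability formula, in which distance and elevation penalties are scaled by a per-person skill factor, is purely an illustrative assumption; the present description does not prescribe a particular model.

```python
import math

def distance_and_elevation(shooter, target):
    """Return the horizontal distance, line-of-sight distance and
    barrel elevation angle (radians) from the shooter to a target,
    both given as (x, y, z) coordinates in metres."""
    dx = target[0] - shooter[0]
    dy = target[1] - shooter[1]
    dz = target[2] - shooter[2]
    horizontal = math.hypot(dx, dy)
    line_of_sight = math.sqrt(dx * dx + dy * dy + dz * dz)
    elevation = math.atan2(dz, horizontal)
    return horizontal, line_of_sight, elevation

def servicing_probability(shooter, target, skill=1.0, max_range=800.0):
    """Illustrative probability of servicing a target: it decays with
    distance and with the magnitude of the elevation angle, and is
    scaled by a per-person skill factor in (0, 1]. This is not a
    ballistic model; the formula is an assumption for illustration."""
    _, los, elev = distance_and_elevation(shooter, target)
    if los > max_range:
        return 0.0
    range_term = 1.0 - los / max_range      # closer is easier
    elevation_term = math.cos(elev)         # level shots are easier
    return max(0.0, min(1.0, skill * range_term * elevation_term))
```

With this illustrative formula, the three-target example of paragraph [0027] orders as described: the closest target scores highest, the level but more distant target second, and the elevated target lowest.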
[0029] As there may be teams of persons working in tactical operations of firefighters,
police or army, it is beneficial to assign objects of interest at the level of teams.
Different team members may be assigned different objects of interest based on the locations
and skills of the team members. For example, if two soldiers are at different distances
from a target, the soldier further from the target may be tasked with engaging it if he
is the better shooter and a hit with a single shot is more probable than for the soldier
closer to the target. The determination of probabilities and abilities may also take into
account the kinds of weapon and the kinds of ammunition suitable for a task. Team members
may be presented with additional information regarding the selection of means for a task,
for example ammunition.
[0030] The system operates based on cameras, rangefinders, radars and the like and includes
analysis of the locations of detected objects of interest as well as of the team members
executing a tactical operation.
[0031] As a result, there may be created a list of objects of interest related to the assets
assigned to servicing these objects of interest.
[0032] Fig. 1 presents an on-helmet system according to the present invention. The system
is installed as helmet electronic equipment.
[0033] The system may be realized using dedicated components or custom-made FPGA or ASIC
circuits. The system comprises a data bus 101 communicatively coupled to a memory
104. Additionally, other components of the system are communicatively coupled to the
system bus 101 so that they may be managed by a controller 106.
[0034] The memory 104 may store computer program or programs executed by the controller
106 in order to execute steps of the method according to the present invention.
[0035] Each member of a tactical team may be equipped with means for visualizing 102 a tactical
situation. These means may be, for example: an on-helmet display (e.g. LCD, OLED), or an
on-gun display (e.g. LCD, OLED), or optical signaling means attached to a gun, signaling
that an object of interest has been assigned as well as the direction in which the gun
shall be pointed. The optical signaling means may comprise four LEDs positioned after
a gunsight, wherein the LEDs form a cross (i.e. a top LED, a bottom LED, a left LED
and a right LED).
[0036] The system comprises at least one camera 109. Preferably the camera senses visible
light as well as infrared light. Optionally the camera may be supplemented with microwave
radars for detecting objects as well as missiles, bullets etc. Further, the controller
106 processes data from the camera(s) and optionally the radar(s) in order to detect objects,
identify objects of interest, assign locations to these objects versus a reference point
in space, and determine probabilities of servicing (e.g. hitting) each object of interest
(optionally also by different team members).
[0037] The radio transceiver 105 is responsible for transmitting the processed data to a command
center that may compile a full tactical view of the location of operation. The system
may also receive information from the command center (such as assigned objects of
interest and priorities of actions for each team member).
[0038] A method for detecting objects and identifying objects of interest is based on object
recognition, movement detection or detection of sources of heat with an infrared camera.
The optical system may also work based on reference pictures for the detection of new
objects in a scene, as sketched below.
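A minimal sketch of the reference-picture approach is given below. The use of greyscale frames and of the OpenCV library is an assumption, as the description names no particular library; the threshold and minimum-area values are illustrative.

```python
import cv2  # OpenCV is an assumed choice; the description names no library

def detect_new_objects(reference_gray, frame_gray, threshold=25, min_area=500):
    """Compare the current greyscale camera frame against a reference
    picture of the scene and return bounding boxes (x, y, w, h) of
    regions that changed, i.e. candidate new objects."""
    diff = cv2.absdiff(reference_gray, frame_gray)
    _, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)   # merge nearby changes
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]    # drop sensor noise
```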
[0039] The controller 106 may be connected to user control means, such as buttons or a
joystick, allowing the person using the system to control its functions, preferably
without moving the rifle away from the arm. Such functions may include setting up the
system, switching it on/off, and switching between different viewing modes.
[0040] In order to support precise information on location and orientation, the system comprises
an orientation sensor 108, a geolocation system 103 (e.g. GLONASS or GPS) and an inertial
navigation system 107. By using these means, the system as well as the command center
are aware of the locations of the team members as well as the direction in which each
team member's system is aimed.
[0041] A version of the system wherein the visualizing means 102 is a display may provide
augmented reality capability wherein a picture from the camera 109 is mixed with information
received from the command center and/or the controller 106. Owing to this, a person
will see graphical indicators placed in proximity to objects and objects of interest.
Such graphical indicators may identify object status (e.g. civilian, enemy, ally
etc.) and/or priorities and/or probabilities of properly servicing an object. The graphical
indicators may distinguish the user's own objects of interest from the objects of interest
of other team members.
[0042] A version of the system equipped with the optical signaling means will signal only
one object of interest.
[0043] The graphical indicators may also include a location of a reference point; a location
of each of the objects of interest (with reference to the reference point); an identifier
of the team member assigned to a given object of interest; and a priority of each object of
interest. One possible data structure for this information is sketched below.
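One possible, purely hypothetical data structure for an indicator received from the command center is sketched below; the description lists the items of information but not their representation.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class GraphicalIndicator:
    """One indicator as received from the command center; the field
    names are hypothetical, as the description only lists the items
    of information."""
    reference_point: Tuple[float, float, float]  # shared origin in space
    object_location: Tuple[float, float, float]  # relative to the reference point
    assigned_member: Optional[str]               # team member identifier, if any
    priority: int                                # handling order, 1 = first
```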
[0044] There may be provided several modes of operation of the system. In the first, "manual"
mode, the commander decides on the allocation of objects of interest to individual
subordinates. In the second, "automatic" mode, a computer analyzing the tactical situation
allocates objects of interest based on the probability of servicing a particular object
of interest by individual team members. In the third, "semiautomatic" mode, the objects
of interest are assigned by the computer and subsequently authorized by the commander.
In case certain objects of interest remain unallocated, the procedure of assignment
of unassigned objects of interest starts again.
[0045] Figs. 2 A-C present examples of graphical indicators superimposed on an image
captured by the camera 109. The exemplary indicator of Fig. 2A identifies an
object of interest that should be handled as a third object after handling object
'+1' and '+2'. The exemplary indicator of Fig. 2B identifies an object of interest
that should be handled as a fifth object by another team member. The '-' sign refers
to another team member. The exemplary indicator of Fig. 2C identifies an object of
interest that should not be handled.
[0046] The graphical indicators may be different for different types of recognized objects,
such as persons, small vehicles, large vehicles, and may have different configurations
(such as color) for objects of interest, team members, civilians etc.
[0047] Fig. 3 presents an embodiment of an on-helmet system according to the present invention.
The helmet 301 comprises a camera 109 and a visualization means 102 as indicated in
Fig. 1. The remaining elements of the electronic system are built into the helmet
301 itself.
[0048] Fig. 4 presents a process of assigning objects of interest to team members. First,
the objects and objects of interest are identified 401 by a suitable system on an
image acquired by a camera; such a system may be, for example, a video object recognition
system. Subsequently, at step 402, the system generates a descriptor for each object.
Besides the type of the object, such a descriptor may include its location with reference
to the camera, information regarding the location of the team member (the personal system)
and the orientation of the helmet or gun (the personal system). Next, at step 403, the
collected data are transmitted to a command center. In a simpler form, only the image
from the camera may be transmitted to the command center. An illustrative descriptor
structure is sketched below.
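An illustrative sketch of the descriptor of step 402 and of a transmission encoding for step 403 is given below; all field names and the use of JSON are assumptions, as the description specifies the content of the descriptor but not its format.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ObjectDescriptor:
    """Descriptor generated at step 402 for each detected object;
    the field names are hypothetical."""
    object_type: str           # e.g. "person", "small_vehicle"
    offset_from_camera: tuple  # (x, y, z) relative to the camera 109
    system_location: tuple     # geolocation of the personal system
    system_orientation: tuple  # heading, pitch, roll of the helmet or gun

def encode_for_transmission(descriptors):
    """Serialize the descriptors for the radio transceiver 105
    (step 403); JSON is an assumption, as no wire format is given."""
    return json.dumps([asdict(d) for d in descriptors]).encode("utf-8")
```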
[0049] Subsequently, at step 404, the system awaits data from the command center, which processes
data received preferably from at least two transmitters carried by team members.
[0050] At the command center, the probability of handling each object of interest by each
of the team members is calculated. In applications where weapons are involved, such a
calculation may be executed for each carried weapon and each carried ammunition type.
This means that, in the case of shooters, several probability values may be output for
different configurations of weapon and ammunition. The best combination of weapon and
ammunition is preferably selected for a given object of interest.
[0051] There may be created a table wherein each object of interest (row) is associated
with the probability results (columns) obtained for different team members. The table
may also comprise an additional column providing the value of the highest probability of
handling the object of interest among the team members.
[0052] The table may be sorted in ascending order according to this additional column. For
the object of interest of the first row, there is selected the team member that has the
highest probability of handling that object of interest. After a team member has been
associated with the object of interest, the row is removed from the table and the table
is re-sorted in the same manner as previously. The process is repeated until all table
rows have been processed; a sketch of this procedure is given below.
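The table-based procedure of paragraphs [0051] and [0052] can be sketched as follows; the dictionary representation is an assumption, and since the description removes only the processed row, team members remain eligible for further objects of interest.

```python
def assign_objects(prob_table):
    """Greedy assignment following paragraphs [0051]-[0052].

    prob_table maps object_id -> {member_id: probability}. In each
    round the remaining rows are ordered ascending by their highest
    probability, so the object that is hardest even for its
    best-suited member is assigned first; members stay available
    for further objects, as the description removes only the row."""
    remaining = dict(prob_table)
    assignments = {}
    while remaining:
        # first row after sorting ascending by the extra column
        obj_id = min(remaining, key=lambda o: max(remaining[o].values()))
        members = remaining.pop(obj_id)   # remove the processed row
        assignments[obj_id] = max(members, key=members.get)
    return assignments

# For example, assign_objects({"T1": {"A": 0.9, "B": 0.4},
#                              "T2": {"A": 0.7, "B": 0.6}})
# processes T2 first (its best probability, 0.7, is lower than 0.9)
# and assigns both targets to member "A".
```

Sorting in ascending order means that the object of interest that is hardest even for its best-suited team member is assigned first.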
[0053] After the objects of interest have been assigned to team members, the command center
transmits to each team member information regarding the objects of interest assigned to
that particular team member. Each team member's electronic system receives, at step
404, this information from the command center.
[0054] The received information is processed and superimposed 405 on the image from the camera
109 mounted on a helmet or on a gun.
[0055] Subsequently, at step 406, movement guidance, and in particular aiming guidance,
is provided as part of the composite image data. For example, in the case of a gun,
suitable graphical indicators may be displayed, such as arrows indicating the direction
in which the gun is to be moved or aimed. After the gun has been aimed at objects of
interest, the controller will superimpose graphical indicators related to the objects
of interest, their priorities and/or probabilities.
[0056] Optionally, after the gun has been aimed at objects of interest, guiding indications
406 may be given with respect to the optical gunsight. The guiding indications are
calculated based on ballistic algorithms and the ballistic parameters of the gun and
ammunition. A simplified example of such a calculation is sketched below.
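A heavily simplified, vacuum flat-fire sketch of such a guiding calculation is given below; real ballistic algorithms, as stated above, use the full ballistic parameters of the gun and ammunition, including drag and wind, which are omitted here.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def holdover_angle(distance_m, muzzle_velocity_ms):
    """Vacuum flat-fire approximation of the elevation correction for
    a given range: the bullet falls 0.5*g*t^2 during its flight, and
    the angle compensating this drop is returned in radians. This is
    an illustrative simplification, not a full ballistic algorithm."""
    time_of_flight = distance_m / muzzle_velocity_ms
    drop = 0.5 * G * time_of_flight ** 2
    return math.atan2(drop, distance_m)
```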
[0057] A version of the system equipped with the optical signaling means will signal only
one object of interest, and after that object has been handled another object of interest
is indicated according to a list of objects of interest. The LEDs indicate the correct
position of the gun for handling the object of interest. For example, in case the barrel
is too low, the upper LED is red and the lower LED is green. In case the barrel is too
high, the upper LED is green and the lower LED is red. In case the barrel is too far
left, the left LED is green and the right LED is red. In case the barrel is too far
right, the left LED is red and the right LED is green. When the gun is properly aimed,
all four LEDs are green. Optionally, there may be an additional diode signaling the type
of an object. Depending on the number of colors or illumination patterns, one diode may
signal more than two different object types. This signaling logic is sketched below.
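The LED logic of this paragraph can be sketched as follows; the sign conventions, the independent handling of the two axes and the tolerance value are assumptions.

```python
GREEN, RED = "green", "red"

def led_states(azimuth_error, elevation_error, tolerance=0.01):
    """Map the aiming error (radians) to the four LED colours of
    paragraph [0057]. Positive elevation_error means the barrel is
    too high; positive azimuth_error means it points too far right.
    The sign convention and tolerance are illustrative assumptions."""
    leds = {"top": GREEN, "bottom": GREEN, "left": GREEN, "right": GREEN}
    if elevation_error < -tolerance:    # barrel too low
        leds["top"], leds["bottom"] = RED, GREEN
    elif elevation_error > tolerance:   # barrel too high
        leds["top"], leds["bottom"] = GREEN, RED
    if azimuth_error < -tolerance:      # barrel too far left
        leds["left"], leds["right"] = GREEN, RED
    elif azimuth_error > tolerance:     # barrel too far right
        leds["left"], leds["right"] = RED, GREEN
    return leds  # all four LEDs are green when properly aimed
```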
[0058] The system may comprise a manipulator in the form of, for example, a joystick or a
touchpad, preferably operated by the thumb of the hand that pulls the trigger. The
manipulator is preferably positioned such that, in order to operate it, the shooter does
not have to move a hand properly set up for a shot. The middle and/or ring and/or little
finger may operate additional controls (e.g. buttons).
[0059] Fig. 5 shows exemplary content presented on a display. There is presented an image
from the camera 109 with superimposed information next to each object of interest.
[0060] It can be easily recognized, by one skilled in the art, that the aforementioned method
for a tactical visualization may be performed and/or controlled by one or more computer
programs. Such computer programs are typically executed by utilizing the computing
resources in a computing device. Applications are stored on a non-transitory medium.
An example of a non-transitory medium is a nonvolatile memory, for example a flash
memory, or a volatile memory, for example RAM. The computer instructions are executed
by a processor. These memories are exemplary recording media for storing computer
programs comprising computer-executable instructions performing all the steps of the
computer-implemented method according to the technical concept presented herein.
[0061] While the invention presented herein has been depicted, described, and defined
with reference to particular preferred embodiments, such references and examples of
implementation in the foregoing specification do not imply any limitation on the invention.
It will, however, be evident that various modifications and changes may be made thereto
without departing from the broader scope of the technical concept. The presented preferred
embodiments are exemplary only, and are not exhaustive of the scope of the technical
concept presented herein.
[0062] Accordingly, the scope of protection is not limited to the preferred embodiments
described in the specification, but is only limited by the claims that follow.
1. A system for a tactical visualization, the system comprising:
• a data bus (101) communicatively coupled to a memory (104);
• a controller (106) communicatively coupled to the system bus (101);
the system being
characterized in that it comprises:
• means for visualizing (102) a tactical situation;
• at least one camera (109);
• wherein the controller (106) is configured to process data from the camera (109)
in order to detect objects and identify objects of interest and assign locations to
these objects versus a reference point in space as well as determine probabilities
of handling each object of interest;
• a radio transceiver (105) configured to transmit the processed data to a command
center and to receive data from the command center;
• wherein the controller (106) is configured to execute the steps of the method according
to claim 10.
2. The system according to claim 1 characterized in that the means for visualizing (102) is an on-helmet display or an on-gun display or optical
signaling means.
3. The system according to claim 1 characterized in that the camera (109) is configured to sense visible light as well as infrared light and
is supplemented with microwave radar configured to detect objects.
4. The system according to claim 1 characterized in that the controller (106) processes data based on object recognition or movement detection
or detection of sources of heat.
5. The system according to claim 1 characterized in that the controller (106) is connected to a user control means configured to allow a person
using the system to control its functions.
6. The system according to claim 1 characterized in that the system comprises an orientation sensor (108), a geolocation system (103) and
an inertial navigation system (107).
7. The system according to claim 1 characterized in that the visualizing means (102) is configured to provide augmented reality capability
wherein a picture from the camera (109) is mixed with information received from the
command center and/or the controller (106).
8. The system according to claim 7 characterized in that the system is configured to place graphical indicators in proximity to objects and
objects of interest.
9. The system according to claim 8 characterized in that the graphical indicators also include a location of a reference point, location of
each of the objects of interest with reference to the reference point, identifier
of a team member assigned to a given object of interest and priority of each object
of interest.
10. A method for a tactical visualization, the method being
characterized in that it comprises the steps of:
• identifying (401) objects and objects of interest on an image acquired by a camera
(109);
• generating (402) a descriptor for each identified object;
• transmitting (403) the collected data to a command center;
• awaiting (404) data from the command center and receiving the data;
• processing (405) the received information and superimposing it on the image from
the camera (109); and
• providing movement guidance (406) on the composite image and displaying the composite
image on a visualizing means (102).
11. The method according to claim 10 characterized in that the descriptor comprises type of the object, its location with reference to the camera
(109), information regarding location of the system and orientation of the system.
12. A computer program comprising program code means for performing all the steps of the
computer-implemented method according to claim 10 when said program is run on a computer.
13. A computer readable medium storing computer-executable instructions performing all
the steps of the computer-implemented method according to claim 10 when executed on
a computer.
Amended claims in accordance with Rule 137(2) EPC.
1. A system for a tactical visualization, the system comprising:
• a data bus (101) communicatively coupled to a memory (104);
• a controller (106) communicatively coupled to the system bus (101);
the system being
characterized in that it comprises:
• means for visualizing (102) a tactical situation;
• at least one camera (109);
• wherein the controller (106) is configured to process data from the camera (109)
in order to detect objects and identify objects of interest and assign locations to
these objects versus a reference point in space as well as determine probabilities
of handling each object of interest, based on skills of a particular person carrying
a gun, and determine a distance and angle of elevation of a barrel of the gun;
• wherein the controller (106) is further configured to distinguish between objects
of interest that should be handled, objects of interest that should not be handled
and objects of interest that are to be handled by another team member;
• a radio transceiver (105) configured to transmit the processed data to a command
center and to receive data from the command center;
• wherein the controller (106) is configured to execute the steps of the method according
to claim 10.
2. The system according to claim 1 characterized in that the means for visualizing (102) is an on-helmet display or an on-gun display or optical
signaling means.
3. The system according to claim 1 characterized in that the camera (109) is configured to sense visible light as well as infrared light and
is supplemented with microwave radar configured to detect objects.
4. The system according to claim 1 characterized in that the controller (106) processes data based on object recognition or movement detection
or detection of sources of heat.
5. The system according to claim 1 characterized in that the controller (106) is connected to a user control means configured to allow a person
using the system to control its functions.
6. The system according to claim 1 characterized in that the system comprises an orientation sensor (108), a geolocation system (103) and
an inertial navigation system (107).
7. The system according to claim 1 characterized in that the visualizing means (102) is configured to provide augmented reality capability
wherein a picture from the camera (109) is mixed with information received from the
command center and/or the controller (106).
8. The system according to claim 7 characterized in that the system is configured to place graphical indicators in proximity to objects and
objects of interest.
9. The system according to claim 8 characterized in that the graphical indicators also include a location of a reference point, location of
each of the objects of interest with reference to the reference point, identifier
of a team member assigned to a given object of interest and priority of each object
of interest.
10. A method for a tactical visualization, the method being
characterized in that it comprises the steps of:
• identifying (401) objects and objects of interest on an image acquired by a camera
(109);
• generating (402) a descriptor for each identified object;
• transmitting (403) the collected data to a command center;
• for each object of interest, calculating a distance and an angle of elevation of a
barrel of a gun as well as calculating a probability of servicing, taking into account
the skills of a particular person carrying the gun;
• for each object of interest, distinguishing between objects of interest that should
be handled, objects of interest that should not be handled and objects of interest
that are to be handled by another team member;
• awaiting (404) data from the command center and receiving the data;
• processing (405) the received information and superimposing it on the image from
the camera (109); and
• providing movement guidance (406) on the composite image and displaying the composite
image on a visualizing means (102).
11. The method according to claim 10 characterized in that the descriptor comprises type of the object, its location with reference to the camera
(109), information regarding location of the system and orientation of the system.
12. A computer program comprising program code means for performing all the steps of the
computer-implemented method according to claim 10 when said program is run on a computer.
13. A computer readable medium storing computer-executable instructions performing all
the steps of the computer-implemented method according to claim 10 when executed on
a computer.