(19)
(11)EP 4 036 856 A1

(12)EUROPEAN PATENT APPLICATION

(43)Date of publication:
03.08.2022 Bulletin 2022/31

(21)Application number: 21154728.6

(22)Date of filing:  02.02.2021
(51)International Patent Classification (IPC): 
G06T 7/246(2017.01)
(52)Cooperative Patent Classification (CPC):
G06T 7/246; G06T 2207/30244; G06T 2207/30232
(84)Designated Contracting States:
AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
Designated Extension States:
BA ME
Designated Validation States:
KH MA MD TN

(71)Applicant: Axis AB
223 69 Lund (SE)

(72)Inventors:
  • Xie, Haiyan
    223 69 Lund (SE)
  • Chen, Jiandan
    223 69 Lund (SE)

(74)Representative: AWA Sweden AB 
Box 5117
200 71 Malmö (SE)

 
Remarks:
Amended claims in accordance with Rule 137(2) EPC.
 


(54)UPDATING OF ANNOTATED POINTS IN A DIGITAL IMAGE


(57) There are provided mechanisms for updating a coordinate of an annotated point in a digital image due to camera movement. A method is performed by an image processing device. The method comprises obtaining a current digital image of a scene. The current digital image has been captured by a camera subsequent to movement of the camera relative to the scene. The current digital image is associated with at least one annotated point. Each at least one annotated point has a respective coordinate in the current digital image. The method comprises identifying an amount of the movement by comparing position indicative information in the current digital image to position indicative information in a previous digital image of the scene. The previous digital image has been captured prior to movement of the camera. The method comprises updating the coordinate of each at least one annotated point in accordance with the identified amount of movement and a camera homography.




Description

TECHNICAL FIELD



[0001] Embodiments presented herein relate to a method, an image processing device, a computer program, and a computer program product for updating a coordinate of an annotated point in a digital image due to camera movement.

BACKGROUND



[0002] In general terms, surveillance cameras are video cameras used for the purpose of observing an area. Surveillance cameras are often connected to a recording device or a network, such as an internet protocol (IP) network. In some examples, images recorded by the surveillance cameras are monitored in real time by a security guard or law enforcement officer. Cameras and other types of recording equipment used to be relatively expensive and required human personnel to monitor camera footage, but automated software has since been developed for analysis of footage in terms of digital images captured by digital video cameras. Such automated software can be configured for digital image analysis as well as organization of the thus captured digital video footage into a searchable database.

[0003] In some surveillance scenarios, there could be a need to capture a scene having a Field of View (FoV) including one or more regions for which privacy should be preserved. The image content of such one or more regions should thus be excluded from any recording in the video surveillance system. In some surveillance scenarios, where a surveillance zone is defined, there could be a need to capture a scene having a FoV that is wider than the surveillance zone. Since only image content from inside the surveillance zone needs to be analyzed, any image content outside the surveillance zone should be excluded from any analysis in the video surveillance system (although the image content outside the surveillance zone does not need to be excluded from any recording in the video surveillance system). With respect to both these scenarios, a region of interest (ROI) can be defined as a portion of an image that is to be subjected to filtering or another type of operation. An ROI can thus be defined such that the one or more regions for which privacy should be preserved are filtered out. An ROI can be defined by a binary mask. The binary mask is of the same size as the image to be processed. In the binary mask, pixels that define the ROI are set to 1 whereas all other pixels are set to 0 (or vice versa, depending on the purpose of the binary mask: to include or to exclude the image content inside the ROI).
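As a sketch of how such a binary mask could be constructed, assuming Python with NumPy and OpenCV (the function name and the example polygon are illustrative only, not part of the embodiments):

import cv2
import numpy as np

def make_roi_mask(height, width, roi_polygon):
    # Binary mask of the same size as the image: pixels inside the ROI
    # polygon are set to 1, all other pixels are set to 0.
    mask = np.zeros((height, width), dtype=np.uint8)
    cv2.fillPoly(mask, [np.asarray(roi_polygon, dtype=np.int32)], 1)
    return mask

mask = make_roi_mask(480, 640, [(100, 50), (300, 60), (310, 200), (90, 190)])
inverted = 1 - mask  # the complementary mask, for the opposite purpose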

[0004] In other surveillance scenarios, a virtual line extending between two end-points, and defining a virtual trip wire, could be added in the scene by the video surveillance system for crossline detection; an alarm, or other type of indication, is to be issued whenever an object (such as a human being, a vehicle, or another type of object) crosses the virtual line in the scene.

[0005] One way to define the corner points of an ROI and/or the end-points of the virtual line is to use annotated points. Each annotated point has a coordinate which commonly is manually or semi-automatically set. However, once set, the coordinates of the annotated points might suffer from camera drifting due to vibration, camera position drift, FoV orientation shift, temperature changes, mechanical aging, and so on. This might cause the corner points of the ROI and/or the end-points of the virtual line to change, thereby affecting the location of the privacy mask, which part of the surveillance zone is actually under surveillance, and/or the location of the virtual line. Further, when the camera is replaced, the annotated points need to be manually relabelled.

[0006] Hence, there is a need for an improved handling of annotated points when a camera is subjected to camera movement (as, for example, caused by any of: camera drifting due to vibration, camera position drift, FoV orientation shift, temperature changes, mechanical aging, camera replacement).

SUMMARY



[0007] An object of embodiments herein is to address the above issues by providing a method, an image processing device, a computer program, and a computer program product for updating a coordinate of an annotated point in a digital image due to camera movement (where, as above, the camera movement is caused by any of: camera drifting due to vibration, camera position drift, FoV orientation shift, temperature changes, mechanical aging, camera replacement).

[0008] According to a first aspect there is presented a method for updating a coordinate of an annotated point in a digital image due to camera movement. The method is performed by an image processing device. The method comprises obtaining a current digital image of a scene. The current digital image has been captured by a camera subsequent to movement of the camera relative to the scene. The current digital image is associated with at least one annotated point. Each at least one annotated point has a respective coordinate in the current digital image. The method comprises identifying an amount of the movement by comparing position indicative information in the current digital image to position indicative information in a previous digital image of the scene. The previous digital image has been captured prior to movement of the camera. The method comprises updating the coordinate of each at least one annotated point in accordance with the identified amount of movement and a camera homography.

[0009] According to a second aspect there is presented an image processing device for updating a coordinate of an annotated point in a digital image due to camera movement. The image processing device comprises processing circuitry. The processing circuitry is configured to cause the image processing device to obtain a current digital image of a scene. The current digital image has been captured by a camera subsequent to movement of the camera relative to the scene. The current digital image is associated with at least one annotated point. Each at least one annotated point has a respective coordinate in the current digital image. The processing circuitry is configured to cause the image processing device to identify an amount of the movement by comparing position indicative information in the current digital image to position indicative information in a previous digital image of the scene. The previous digital image has been captured prior to movement of the camera. The processing circuitry is configured to cause the image processing device to update the coordinate of each at least one annotated point in accordance with the identified amount of movement and a camera homography.

[0010] According to a third aspect there is presented a video surveillance system. The video surveillance system comprises a camera and an image processing device according to the second aspect.

[0011] According to a fourth aspect there is presented a computer program for updating a coordinate of an annotated point in a digital image due to camera movement, the computer program comprising computer program code which, when run on an image processing device, causes the image processing device to perform a method according to the first aspect.

[0012] According to a fifth aspect there is presented a computer program product comprising a computer program according to the fourth aspect and a computer readable storage medium on which the computer program is stored. The computer readable storage medium could be a non-transitory computer readable storage medium.

[0013] Advantageously, these aspects resolve the issues noted above with respect to camera movement.

[0014] Advantageously, these aspects provide accurate updating of the coordinate of the annotated point in the digital image due to camera movement.

[0015] Advantageously, these aspects enable automatic re-location, or recalibration, of annotated points when they shift due to camera movement.

[0016] Other objectives, features and advantages of the enclosed embodiments will be apparent from the following detailed disclosure, from the attached dependent claims as well as from the drawings.

[0017] Generally, all terms used in the claims are to be interpreted according to their ordinary meaning in the technical field, unless explicitly defined otherwise herein. All references to "a/an/the element, apparatus, component, means, module, step, etc." are to be interpreted openly as referring to at least one instance of the element, apparatus, component, means, module, step, etc., unless explicitly stated otherwise. The steps of any method disclosed herein do not have to be performed in the exact order disclosed, unless explicitly stated.

BRIEF DESCRIPTION OF THE DRAWINGS



[0018] The inventive concept is now described, by way of example, with reference to the accompanying drawings, in which:

Fig. 1 is a schematic diagram illustrating a scenario according to embodiments;

Fig. 2 schematically illustrates annotated points and position indicative information in digital images according to an embodiment;

Fig. 3, Fig. 4, and Fig. 5 are flowcharts of methods according to embodiments;

Fig. 6 and Fig. 7 schematically illustrate example scenarios according to embodiments;

Fig. 8 is a schematic diagram showing functional units of an image processing device according to an embodiment; and

Fig. 9 shows one example of a computer program product comprising computer readable storage medium according to an embodiment.


DETAILED DESCRIPTION



[0019] The inventive concept will now be described more fully hereinafter with reference to the accompanying drawings, in which certain embodiments of the inventive concept are shown. This inventive concept may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided by way of example so that this disclosure will be thorough and complete, and will fully convey the scope of the inventive concept to those skilled in the art. Like numbers refer to like elements throughout the description. Any step or feature illustrated by dashed lines should be regarded as optional.

[0020] Fig. 1 is a schematic diagram illustrating a scenario where embodiments presented herein can be applied. The schematic diagram of Fig. 1 represents a surveillance scenario. A camera 130 is configured to capture digital images, within a FoV 140, of a scene 100. In the illustrative example of Fig. 1 the scene 100 comprises a vehicle 110 and a person 120. The camera 130 comprises an image processing device 150. The camera 130 and the image processing device 150 might be part of a video surveillance system. Such a video surveillance system could be applied to any of the above surveillance scenarios.

[0021] As noted above, there is a need for an improved handling of annotated points when a camera 130 is subjected to camera movement (as, for example, caused by any of: camera drifting due to vibration, camera position drift, FoV orientation shift, temperature changes, mechanical aging, camera replacement).

[0022] The embodiments disclosed herein therefore relate to mechanisms for updating a coordinate of an annotated point in a digital image due to camera movement. In order to obtain such mechanisms there is provided an image processing device 150, a method performed by the image processing device 150, and a computer program product comprising code, for example in the form of a computer program, that when run on an image processing device 150, causes the image processing device 150 to perform the method.

[0023] The embodiments disclosed herein enable feature-linked annotated points (of a previous digital image) to be used to automatically calibrate the annotated points (of the current digital image), so as to solve the above issues relating to camera movement caused by any of: camera drifting due to vibration, camera position drift, FoV orientation shift, temperature changes, mechanical aging, camera replacement. The embodiments disclosed herein are based on detecting movement of the camera 130 and updating at least one coordinate of an annotated point by comparing position indicative information of a current digital image to that of a previous digital image of the same scene 100. This will now be illustrated with reference to Fig. 2. Fig. 2 at (a) schematically illustrates annotated points a1, a2 as well as position indicative information in terms of key points k1, k2, k3, k4, k5, k6 in a previous digital image 200a. Fig. 2 at (b) schematically illustrates the same annotated points a1', a2' and key points k1', k2', k3', k4', k5', k6' as in the previous digital image 200a but in a current digital image 200b of the same scene as the previous digital image 200a. In Fig. 2, the image center is also marked at C, together with a radius r extending from the image center C. Possible usage of the parameters C and r will be disclosed below. It is assumed that camera movement has occurred sometime in between the capturing of the previous digital image 200a and the capturing of the current digital image 200b. Therefore, as can be seen by comparing Fig. 2 at (b) to Fig. 2 at (a), the camera movement has caused the coordinates of the annotated points a1, a2 (as well as the coordinates of the key points k1:k6) to be shifted.

[0024] Fig. 3 is a flowchart illustrating embodiments of methods for updating a coordinate of an annotated point a1, a2 in a digital image 200b due to camera movement. The camera movement can be due either to one and the same camera 130 having been moved so that the FoV 140 changes, or to one camera 130 being replaced by another camera 130 (which, potentially, also causes the FoV 140 to change). The methods are performed by the image processing device 150. The methods are advantageously provided as computer programs 920. In particular, the image processing device 150 is configured to perform steps S102, S104, S106:

S102: The image processing device 150 obtains a current digital image 200b of a scene 100. The current digital image 200b has been captured by the camera 130 subsequent to movement of the camera 130 relative to the scene 100. The current digital image 200b is associated with at least one annotated point a1', a2'. Each at least one annotated point a1', a2' has a respective coordinate in the current digital image 200b.

S104: The image processing device 150 identifies the amount of the movement by comparing position indicative information in the current digital image 200b to position indicative information in a previous digital image 200a of the scene 100. The previous digital image 200a has been captured prior to movement of the camera 130 (either by the same camera 130 or by another camera 130).

S106: The image processing device 150 updates the coordinate of each at least one annotated point (from a1, a2 to a1', a2') in accordance with the identified amount of movement and a camera homography (of the camera 130).



[0025] This method enables automatic re-location, or recalibration, of annotated points when they shift due to camera movement.
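By way of illustration only, the following is a minimal sketch of how steps S102, S104, and S106 could be realized, assuming Python with the OpenCV library; the function name, the choice of ORB keypoints, and the RANSAC threshold are illustrative assumptions and not part of the embodiments:

import cv2
import numpy as np

def update_annotated_points(prev_img, curr_img, annotated_pts):
    # S102/S104: extract keypoints and descriptors from both images.
    orb = cv2.ORB_create(nfeatures=500)
    kp_prev, des_prev = orb.detectAndCompute(prev_img, None)
    kp_curr, des_curr = orb.detectAndCompute(curr_img, None)

    # Minimum-distance pairing of feature vectors (cross-checked).
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_prev, des_curr), key=lambda m: m.distance)

    src = np.float32([kp_prev[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_curr[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Homography from the previous view to the current view; RANSAC
    # rejects mismatched keypoint pairs.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # S106: move the annotated points (here given in previous-view
    # coordinates) in accordance with the identified movement.
    pts = np.float32(annotated_pts).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)

The cross-checked brute-force matcher corresponds to the minimum-distance matching of feature vectors discussed below, and the RANSAC step is one way to suppress keypoints that have moved with scene content rather than with the camera.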

[0026] Embodiments relating to further details of updating a coordinate of an annotated point a1', a2' in a digital image 200b due to camera movement as performed by the image processing device 150 will now be disclosed.

[0027] There can be different examples of position indicative information. In some aspects, the position indicative information in the current digital image 200b and the previous digital image 200a is defined by a respective set of keypoints k1:k6, k1':k6'. In particular, in some embodiments, the position indicative information in the current digital image 200b is represented by a first set of keypoints k1':k6' extracted from the current digital image 200b, and the position indicative information in the previous digital image 200a is represented by a second set of keypoints k1:k6 extracted from the previous digital image 200a.

[0028] In some aspects, it is the matching between the keypoints k1':k6' in the current digital image 200b and the keypoints k1:k6 in the previous digital image 200a that yields the amount of camera movement. In particular, in some embodiments, the comparing involves location-wise matching of the first set of keypoints k1':k6' to the second set of keypoints k1:k6. The amount of movement is then identified from how much the coordinates in the current digital image 200b of the first set of keypoints k1':k6' differ from the coordinates in the previous digital image 200a of the second set of keypoints k1:k6.

[0029] How discriminative keypoints can be selected based on ranking their similarity will be disclosed next. In some aspects, the matching is performed by means of minimizing the distance between feature vectors in the current digital image 200b and feature vectors in the previous digital image 200a. In particular, in some embodiments, a respective first feature vector is determined for each keypoint k1':k6' in the first set of keypoints k1':k6', and a respective second feature vector is determined for each keypoint k1:k6 in the second set of keypoints k1:k6. The first set of keypoints k1':k6' is then matched to the second set of keypoints k1:k6 by finding the set of pairs, each pair consisting of one first feature vector and one second feature vector, that yields the minimum distance between the first feature vectors and the second feature vectors among all such sets of pairs.

[0030] Properties of the second set of keypoints k1:k6 will now be disclosed. In some embodiments, the second set of keypoints k1:k6 is a subset of all keypoints k1:k6 extractable from the previous digital image 200a. How selection of the most appropriate keypoints can be performed, in terms of how this subset of all keypoints k1:k6 can be determined, will now be disclosed.

[0031] In some aspects, any keypoints k1:k6 with similar pairs of feature vectors are excluded from being included in the second set of keypoints k1:k6. That is, in some embodiments, those of all keypoints k1:k6 with second feature vectors most similar to a second feature vector of another keypoint k1:k6 extracted from the previous digital image 200a are excluded from being included in the second set of keypoints k1:k6.

[0032] In some aspects, location information is used when determining which of all keypoints k1:k6 to be included in the second set of keypoints k1:k6. In particular, in some embodiments, which of all keypoints k1:k6 to be included in the second set of keypoints k1:k6 is dependent on their location in the previous digital image 200a. In this respect, as will be disclosed next, the location could be relative to the image center and/or the annotated points.

[0033] In some aspects, those of all keypoints k1:k6 located closest to the image center are candidates to be included in the second set of keypoints k1:k6. In particular, in some embodiments, the second set of keypoints k1:k6 is restricted to only include keypoints k1:k6 located within a predefined radius, denoted r in Fig. 2, from a center, denoted C in Fig. 2, of the previous digital image 200a.

[0034] In some aspects, those of all keypoints k1:k6 located closest to the annotated points a1, a2 are candidates to be included in the second set of keypoints k1:k6. In particular, in some embodiments, the previous digital image 200a is associated with at least one annotated point a1, a2, where each at least one annotated point a1, a2 has a respective coordinate in the previous digital image 200a, and the second set of keypoints k1:k6 is restricted to only include keypoints k1:k6 located within a predefined radius from the coordinate of the at least one annotated point a1, a2 in the previous digital image 200a.

[0035] Thereby, the above embodiments, aspects, and examples enable the most appropriate keypoints to be selected according to the coordinate of the at least one annotated point, and the image centre.

[0036] In some aspects, the previous digital image 200a is selected from a set of previous digital images 200a of the scene 100. Examples of how the previous digital image 200a can be selected from this set of previous digital images 200a of the scene 100 will now be disclosed.

[0037] In some embodiments, each of the previous digital images 200a in the set of previous digital images 200a has its own time stamp, and which previous digital image 200a to select is based on comparing the time stamps to a time stamp of the current digital image 200b. The previous digital image 200a can thus be selected based on a time stamp comparison such that the previous digital image 200a has been captured at the same time of day as the current digital image 200b, or closest in time to the current digital image 200b.

[0038] In further examples, which previous digital image 200a to select is based on illumination, image quality, etc. Combinations of such parameters are also envisioned. For example, in order to get the best image quality for optimizing the matching of the keypoints, the previous digital image 200a as well as the current digital image 200b can be extracted within a certain time interval, for example during daytime. Further, similar time stamps could yield the most similar image quality and illumination with respect to the previous digital image 200a and the current digital image 200b.
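As a sketch of such a time-stamp-based selection, assuming Python and that the stored previous digital images are keyed by their capture timestamps (all names are illustrative):

from datetime import datetime

def pick_previous_image(candidates, current_ts):
    # Choose the previous image whose time of day is closest to that of
    # the current image, wrapping around midnight, so that illumination
    # and image quality are as similar as possible.
    def time_of_day_gap(ts):
        a = ts.hour * 3600 + ts.minute * 60 + ts.second
        b = current_ts.hour * 3600 + current_ts.minute * 60 + current_ts.second
        d = abs(a - b)
        return min(d, 24 * 3600 - d)
    return candidates[min(candidates, key=time_of_day_gap)]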

[0039] Aspects of how the coordinate of each at least one annotated point a1, a2 can be updated will now be disclosed. In some embodiments, the coordinate of each at least one annotated point a1', a2' is updated by application of a homography matrix to the coordinate of each at least one annotated point a1', a2' in the current digital image 200b. The homography matrix is dependent on the identified amount of movement and the camera homography.

[0040] Aspects of further possible actions taken by the digital image processing device 150 will now be disclosed. In some examples, an alarm is triggered if the camera movement causes any annotated point a1', a2' to move out from the image. That is, in some embodiments, the digital image processing device 150 is configured to perform (optional) step S108:
S108: A notification is issued in case updating the coordinate of each at least one annotated point a1', a2' causes any of the at least one annotated point a1', a2' to have a coordinate outside the current digital image 200b.

[0041] Thereby, if a surveillance region of interest has moved out from the FoV 140, then an alarm can be triggered.
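A corresponding check could, as a sketch, look as follows (Python; the variable updated_pts and the image size are illustrative assumptions):

def points_outside_image(points, width, height):
    # Return the updated annotated points whose coordinates fall outside
    # the current digital image, i.e. the points that should trigger a
    # notification according to step S108.
    return [(x, y) for (x, y) in points
            if not (0 <= x < width and 0 <= y < height)]

lost = points_outside_image(updated_pts, width=1920, height=1080)
if lost:
    print("notification:", len(lost), "annotated point(s) outside the image")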

[0042] In some aspects, the herein disclosed embodiments can be described in two stages: a feature extraction stage and a calibration stage. The feature extraction stage involves the processing of the previous digital image, whereas the calibration stage is applied after camera movement.

[0043] Aspects of the feature extraction stage will now be disclosed with reference to the flowchart of Fig. 4.

[0044] S201: At least one annotated point is set in the digital image.

[0045] Each such annotated point might represent a corner of an ROI or an end-point of a virtual line.

[0046] S202: Keypoints are searched for in the digital image.

[0047] The keypoints might be searched for using a keypoint detector. Non-limiting examples of keypoint detectors are a Harris corner detector, a scale-invariant feature transform (SIFT) detector, and descriptors based on convolutional neural networks (CNNs). The keypoint detector identifies the coordinates of each keypoint as well as an associated descriptor (or feature vector). In some non-limiting examples the descriptor is provided as a vector of 32 to 128 variables. The descriptors make each of the keypoints distinct and identifiable from the other keypoints. Without loss of generality, the total number of keypoints found is denoted K.
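As a minimal sketch of such detection and description, assuming an OpenCV build in which SIFT is available (the file name is a placeholder):

import cv2

img = cv2.imread("previous.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)
# Each keypoint has a coordinate, keypoints[i].pt, and a descriptor,
# descriptors[i], here a 128-variable vector that makes the keypoint
# identifiable among the K detected keypoints.
print(len(keypoints), descriptors.shape)  # K keypoints, shape (K, 128)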

[0048] S203: The most appropriate keypoints (descriptors) are selected.

[0049] Step S203 might be implemented in terms of a number of sub-steps, or sub-routines.

[0050] M ≤ K discriminative keypoints are selected by ranking their similarity. In one example, the similarity between a pair of two keypoints is computed from the Euclidean distance between the descriptors (or feature vectors) of the two keypoints. If the Euclidean distance is smaller than a pre-set threshold distance, the pair of keypoints is defined as similar and is therefore discarded.

[0051] The M keypoints may be divided randomly into g subsets S_g. Each such subset consists of N keypoints from the original M keypoints, where N ≪ M. In some examples, at least 4 keypoints are selected, that is, N ≥ 4. The N keypoints are selected as having the minimum distances, in terms of coordinates, to the image centre and/or the annotated points, as defined by the minimization criterion in (1):

\[
\min \sum_{i=1}^{N} S_{k_i} \tag{1}
\]

where the weighted distance, S_{k_i}, is defined as:

\[
S_{k_i} = w_k \sum_{j=1}^{J} d(k_i, a_j) + w_c \, d(k_i, C) \tag{2}
\]

[0052] In (2), d(k_i, a_j) is the Euclidean distance, in terms of coordinates, between each keypoint k_i and each annotated point a_j in the digital image, d(k_i, C) is the Euclidean distance, in terms of coordinates, between each keypoint k_i and the image center C, and w_k and w_c are their respective weights. Further, J denotes the total number of annotated points. The weights can be used to either prioritize keypoints closer to the annotated points, or keypoints closer to the image center. The minimization criterion in (1) is used because the closer the distance, in terms of coordinates, between the keypoints and the annotated points in the digital image, the less the probability is for the keypoints to get lost, or even be moved out from the digital image, when the camera 130 is moved.
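A sketch of this ranking, assuming Python with NumPy (the function name and default weights are illustrative):

import numpy as np

def select_keypoints(keypoints, annotated, center, n, w_k=1.0, w_c=1.0):
    # Score each keypoint by the weighted sum (2) of its distances to all
    # J annotated points and to the image centre, and keep the n keypoints
    # with the smallest scores, per the minimization criterion (1).
    annotated = np.asarray(annotated, dtype=float)
    center = np.asarray(center, dtype=float)
    scores = []
    for k in keypoints:
        k = np.asarray(k, dtype=float)
        s = w_k * np.linalg.norm(annotated - k, axis=1).sum()
        s += w_c * np.linalg.norm(k - center)
        scores.append(s)
    order = np.argsort(scores)
    return [keypoints[i] for i in order[:n]]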

[0053] Further, the distances between each two keypoints k_i, k_j in each subset might fulfil:

\[
\sum_{i \neq j} d(k_i, k_j) > D_t \tag{3}
\]

[0054] The summation of distances d(k_i, k_j) between each two keypoints is larger than a threshold D_t, which avoids the possibility of keypoints converging together. This can be implemented by different techniques, for example, the RANdom SAmple Consensus (RANSAC) algorithm.
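The constraint in (3) could, as a sketch, be verified as follows (Python; names are illustrative):

from itertools import combinations
import numpy as np

def well_spread(subset, d_t):
    # Accept a candidate subset only if the summed pairwise distances
    # between its keypoints exceed the threshold D_t, so that the
    # selected keypoints do not converge together.
    total = sum(np.linalg.norm(np.subtract(p, q))
                for p, q in combinations(subset, 2))
    return total > d_t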

[0055] For example, as illustrated in Fig. 2, keypoints k1-k4 have smaller distance, in terms of coordinates, to the annotated points and the image centre than keypoints k5 and k6, and thus keypoints k1-k4 are selected whilst keypoints k5 and k6 are discarded.

[0056] In surveillance applications, the locations of the annotated points and the image centre are usually the most important positions.

[0057] S204: The coordinates and descriptors (feature vectors) of the selected keypoints, as well as the coordinates of the annotated points, are stored.

[0058] Camera settings, e.g. the camera focal length, might also be stored to enable calibration, or normalization, of the camera view, etc.

[0059] Aspects of the calibration stage (as applied after the camera 130 has been moved or replaced) will now be disclosed with reference to the flowchart of Fig. 5.

[0060] S301: The coordinates of the annotated points and the keypoints (with their coordinates and descriptors) of a previous digital image of the same scene are obtained.

[0061] S302: Keypoints in the current image are identified. How to identify the keypoints in the current image has been disclosed above with reference to the description of Fig. 3.

[0062] S303: Matching keypoints are identified using feature vectors determined from the keypoints k1':k6' in the current digital image 200b and the keypoints k1:k6 in the previous digital image 200a.

[0063] S304: The homography matrix is determined from the matching keypoints.

[0064] In this respect, the homography matrix is applied in order to account for camera movement, in terms of camera rotation and/or translation, between the previous digital image and the current digital image. The digital images are assumed to be normalized with respect to each other. Normalization can be achieved using information of the camera focal length used when the previous digital image was captured, and the camera focal length used when the current digital image was captured. In some non-limiting examples, the homography matrix H is computed from the matched annotated points by using Singular Value Decomposition (SVD). Further aspects of how the homography matrix can be determined will be disclosed below.

[0065] S305: The homography matrix is applied to the at least one annotated point in the current digital image.

[0066] The updated coordinates, denoted A', of the at least one annotated point in the current digital image is then determined from the coordinates, denoted A, of the corresponding at least one annotated point in the previous digital image according to:

where H denotes the homography matrix.
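In homogeneous coordinates, this amounts to the following sketch (NumPy; the function name is illustrative):

import numpy as np

def apply_homography(H, points):
    # Map (x, y) points through the homography: append 1 to form
    # homogeneous coordinates, multiply by H, and dehomogenize.
    pts = np.asarray(points, dtype=float)
    hom = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = hom @ H.T
    return mapped[:, :2] / mapped[:, 2:3]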

[0067] Further aspects of how the homography matrix H can be determined will now be disclosed.

[0068] Assume, without loss of generality, that there are four pairs of corresponding annotated points. Denote by a1, a2, a3, a4 the annotated points in the previous digital image and denote by a1', a2', a3', a4' the annotated points in the current digital image. Each annotated point has a respective x-coordinate and a respective y-coordinate. Assume further that the focal length is normalized with respect to the previous digital image and the current digital image. The homography matrix H can then be computed as follows.

[0069] First, determine the following 2-by-9 matrices:

\[
P_i = \begin{pmatrix}
-x_i & -y_i & -1 & 0 & 0 & 0 & x_i' x_i & x_i' y_i & x_i' \\
0 & 0 & 0 & -x_i & -y_i & -1 & y_i' x_i & y_i' y_i & y_i'
\end{pmatrix}
\]

where the indices i = 1, 2, 3, 4 correspond to the four annotated points. Then stack all these 2-by-9 matrices into one matrix P, which fulfils:

\[
P\,h = \mathbf{0}, \qquad P = \begin{pmatrix} P_1 \\ P_2 \\ P_3 \\ P_4 \end{pmatrix}
\]

where h = (h_{11}, h_{12}, ..., h_{33})^T holds the entries of the homography matrix H.
[0070] Expression (4) is thus equivalent to:

\[
\min_{h} \lVert P\,h \rVert^2 \quad \text{subject to} \quad \lVert h \rVert = 1
\]
[0071] Assume further that the matrix P has an SVD decomposition as follows:

\[
P = U \Sigma V^{\top}
\]

where the columns of V correspond to the eigenvectors of P^T P and yield a solution of h. The homography matrix H can be reshaped from h. In order to improve the homography matrix H by minimizing the estimation error, a cost function could be applied. By knowing the homography matrix H, the projected coordinates of the point a(x, y) from the previous digital image to the point a'(x', y') in the current digital image are thus given by:

\[
\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} \sim H \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}
\]

[0072] That is:

\[
x' = \frac{h_{11} x + h_{12} y + h_{13}}{h_{31} x + h_{32} y + h_{33}}, \qquad
y' = \frac{h_{21} x + h_{22} y + h_{23}}{h_{31} x + h_{32} y + h_{33}} \tag{5}
\]
[0073] The coordinates (x_i', y_i') of each annotated point a_i' can thus be found by inserting x = x_i and y = y_i in expression (5), for i = 1, 2, 3, 4.
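A sketch of the computation in [0068]-[0071], assuming NumPy (the function name is illustrative); it stacks the 2-by-9 matrices into P, takes the right singular vector belonging to the smallest singular value as h, and reshapes h into H:

import numpy as np

def homography_from_points(prev_pts, curr_pts):
    # Build the stacked matrix P from four (or more) corresponding points
    # (x_i, y_i) -> (x_i', y_i'), solve P h = 0 in the least-squares sense
    # via SVD, and reshape the 9-vector h into the 3x3 matrix H.
    rows = []
    for (x, y), (xp, yp) in zip(prev_pts, curr_pts):
        rows.append([-x, -y, -1, 0, 0, 0, xp * x, xp * y, xp])
        rows.append([0, 0, 0, -x, -y, -1, yp * x, yp * y, yp])
    P = np.asarray(rows, dtype=float)
    _, _, Vt = np.linalg.svd(P)
    h = Vt[-1]                      # right singular vector, smallest sigma
    return h.reshape(3, 3) / h[-1]  # scale fixed so that h33 = 1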

[0074] Fig. 6 schematically illustrates a first scenario where the herein disclosed embodiments can be applied. At (a) is illustrated a previous digital image 600a of a scene 610 as captured by a camera 130 of a video surveillance system. The scene comprises a building 620. For privacy reasons, the windows 630a:630f (not shown in Fig. 6(a)) of the building are to be excluded from the surveillance. Six different privacy masks 640a:640f, each covering a window 630a:630f, are therefore applied. The lower-left corners of the privacy masks are defined by annotated points a1:a6. At (b) is illustrated a current digital image 600b of the scene as captured by the camera 130 after camera movement. As noted at (b), since the annotated points a1:a6 have the same coordinates as in the previous digital image 600a, the camera movement causes the privacy masks to be moved, thus partly revealing the windows 630a:630f. At (c) is illustrated the current digital image 600b' of the scene as captured by the camera 130 after camera movement upon application of at least some of the herein disclosed embodiments for updating the coordinates of the annotated points a1':a6' in the current digital image due to the camera movement. As can be seen, the updating causes the privacy masks to be moved back to their correct positions, thus again properly covering the windows 630a:630f.

[0075] Fig. 7 schematically illustrates a second scenario where the herein disclosed embodiments can be applied. At (a) is illustrated a previous digital image 700a of a scene as captured by a camera 130 of a video surveillance system. The scene comprises buildings 710 and a parking lot 720. The parking lot defines a ROI 730 and is under surveillance. An ROI is therefore defined based on the corners of the parking lot. The corners of the ROI are defined by annotated points (represented by the single annotated point a1). At (b) is illustrated a current digital image 700b of the scene as captured by the camera 130 after camera movement. As noted at (b), since the annotated points (again represented by the single annotated point a1) have the same coordinates as in the previous digital image 700a, the camera movement causes the ROI 730 to be moved, thus partly excluding some parts of the parking lot 720 from surveillance. At (c) is illustrated the current digital image 700b' of the scene as captured by the camera 130 after camera movement upon application of at least some of the herein disclosed embodiments for updating the coordinates of the annotated points (represented by the single annotated point a1') in the current digital image due to the camera movement. As can be seen, the updating causes the ROI 730 to be moved back to its correct position, thus again properly covering the parking lot 720.

[0076] Fig. 8 schematically illustrates, in terms of a number of functional units, the components of an image processing device 150 according to an embodiment. Processing circuitry 810 is provided using any combination of one or more of a suitable central processing unit (CPU), multiprocessor, microcontroller, digital signal processor (DSP), etc., capable of executing software instructions stored in a computer program product 910 (as in Fig. 9), e.g. in the form of a storage medium 830. The processing circuitry 810 may further be provided as at least one application specific integrated circuit (ASIC), or field programmable gate array (FPGA).

[0077] Particularly, the processing circuitry 810 is configured to cause the image processing device 150 to perform a set of operations, or steps, as disclosed above. For example, the storage medium 830 may store the set of operations, and the processing circuitry 810 may be configured to retrieve the set of operations from the storage medium 830 to cause the image processing device 150 to perform the set of operations. The set of operations may be provided as a set of executable instructions.

[0078] Thus, the processing circuitry 810 is thereby arranged to execute methods as herein disclosed. The storage medium 830 may also comprise persistent storage, which, for example, can be any single one or combination of magnetic memory, optical memory, solid state memory or even remotely mounted memory. The image processing device 150 may further comprise a communications interface 820 at least configured for communications with the camera 130, potentially with other functions, nodes, entities and/or devices, such as functions, nodes, entities and/or devices of a video surveillance system. As such the communications interface 820 may comprise one or more transmitters and receivers, comprising analogue and digital components. The processing circuitry 810 controls the general operation of the image processing device 150, e.g., by sending data and control signals to the communications interface 820 and the storage medium 830, by receiving data and reports from the communications interface 820, and by retrieving data and instructions from the storage medium 830. Other components, as well as the related functionality, of the image processing device 150 are omitted in order not to obscure the concepts presented herein.

[0079] The image processing device 150 may be provided as a standalone device or as a part of at least one further device. For example, the image processing device 150 and the camera 130 might be part of a video surveillance system.

[0080] A first portion of the instructions performed by the image processing device 150 may be executed in a first device, and a second portion of the instructions performed by the image processing device 150 may be executed in a second device; the herein disclosed embodiments are not limited to any particular number of devices on which the instructions performed by the image processing device 150 may be executed. Hence, the methods according to the herein disclosed embodiments are suitable to be performed by an image processing device 150 residing in a cloud computational environment. Therefore, although a single processing circuitry 810 is illustrated in Fig. 8, the processing circuitry 810 may be distributed among a plurality of devices, or nodes. The same applies to the computer program 920 of Fig. 9.

[0081] Fig. 9 shows one example of a computer program product 910 comprising computer readable storage medium 930. On this computer readable storage medium 930, a computer program 920 can be stored, which computer program 920 can cause the processing circuitry 810 and thereto operatively coupled entities and devices, such as the communications interface 820 and the storage medium 830, to execute methods according to embodiments described herein. The computer program 920 and/or computer program product 910 may thus provide means for performing any steps as herein disclosed.

[0082] In the example of Fig. 9, the computer program product 910 is illustrated as an optical disc, such as a CD (compact disc) or a DVD (digital versatile disc) or a Blu-Ray disc. The computer program product 910 could also be embodied as a memory, such as a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), or an electrically erasable programmable read-only memory (EEPROM) and more particularly as a non-volatile storage medium of a device in an external memory such as a USB (Universal Serial Bus) memory or a Flash memory, such as a compact Flash memory. Thus, while the computer program 920 is here schematically shown as a track on the depicted optical disk, the computer program 920 can be stored in any way which is suitable for the computer program product 910.

[0083] The inventive concept has mainly been described above with reference to a few embodiments. However, as is readily appreciated by a person skilled in the art, other embodiments than the ones disclosed above are equally possible within the scope of the inventive concept, as defined by the appended patent claims.


Claims

1. A method for updating a coordinate of an annotated point (a1', a2') in a digital image (200b) due to camera movement, the method being performed by an image processing device (150), the method comprising:

obtaining (S102) a current digital image (200b) of a scene (100), the current digital image (200b) having been captured by a camera (130) subsequent to movement of the camera (130) relative to the scene (100), wherein the current digital image (200b) is associated with at least one annotated point (a1', a2'), and wherein each at least one annotated point (a1', a2') has a respective coordinate in the current digital image (200b);

identifying (S104) an amount of the movement by comparing position indicative information in the current digital image (200b) to position indicative information in a previous digital image (200a) of the scene (100), the previous digital image (200a) having been captured prior to movement of the camera (130); and

updating (S106) the coordinate of each at least one annotated point (a1', a2') in accordance with the identified amount of movement and a camera homography.


 
2. The method according to claim 1, wherein the position indicative information in the current digital image (200b) is represented by a first set of keypoints (k1':k6') extracted from the current digital image (200b), and wherein the position indicative information in the previous digital image (200a) is represented by a second set of keypoints (k1:k6) extracted from the previous digital image (200a).
 
3. The method according to claim 2, wherein the comparing involves location-wise matching of the first set of keypoints (k1':k6') to the second set of keypoints (k1:k6), and wherein the amount of movement is identified from how much coordinates in the current digital image (200b) of the keypoints (k1':k6') of the first set of keypoints (k1':k6') differ from coordinates in the previous digital image (200a) of the keypoints (k1:k6) of the second set of keypoints (k1:k6).
 
4. The method according to claim 2 or 3, wherein a respective first feature vector is determined for each keypoint (k1':k6') in the first set of keypoints (k1':k6') and a respective second feature vector is determined for each keypoint (k1:k6) in the second set of keypoints (k1:k6), and wherein the first set of keypoints (k1':k6') is matched to the second set of keypoints (k1:k6) by finding a set of pairs of one of the first feature vectors and one of the second feature vectors that yields minimum distance between the first feature vectors and the second feature vectors among all sets of pairs of one of the first feature vectors and one of the second feature vectors.
 
5. The method according to any of claims 2 to 4, wherein the second set of keypoints (k1:k6) is a subset of all keypoints (k1:k6) extractable from the previous digital image (200a).
 
6. The method according to claim 5, wherein those of all keypoints (k1:k6) with second feature vectors most similar to a second feature vector of another keypoint (k1:k6) extracted from the previous digital image (200a) are excluded from being included in the second set of keypoints (k1:k6).
 
7. The method according to claim 5 or 6, wherein which of all keypoints (k1:k6) to be included in the second set of keypoints (k1:k6) is dependent on their location in the previous digital image (200a).
 
8. The method according to any of claims 5 to 7, wherein the second set of keypoints (k1:k6) is restricted to only include keypoints (k1:k6) located within a predefined radius (r) from a center (C) of the previous digital image (200a).
 
9. The method according to any of claims 5 to 8, wherein the previous digital image (200a) is associated with at least one annotated point (a1, a2) having a respective coordinate in the previous digital image (200a), and wherein the second set of keypoints (k1:k6) is restricted to only include keypoints (k1:k6) located within a predefined radius from the coordinate of the at least one annotated point (a1, a2) in the previous digital image (200a).
 
10. The method according to any preceding claim, wherein the previous digital image (200a) is selected from a set of previous digital images (200a) of the scene (100), wherein each of the previous digital images (200a) in the set of previous digital images (200a) has its own time stamp, and wherein which previous digital image (200a) to select is based on comparing the time stamps to a time stamp of the current digital image (200b).
 
11. The method according to any preceding claim, wherein the coordinate of each at least one annotated point (a1', a2') is updated by application of a homography matrix to the coordinate of each at least one annotated point (a1', a2') in the current digital image (200b), and wherein the homography matrix is dependent on the identified amount of movement and the camera homography.
 
12. The method according to any preceding claim, wherein the method further comprises:
issuing (S108) a notification in case updating the coordinate of each at least one annotated point (a1', a2') causes any of the at least one annotated point (a1', a2') to have a coordinate outside the current digital image (200b).
 
13. The method according to any preceding claim, wherein there are at least two annotated points (a1', a2'), and wherein the at least two annotated points (a1', a2') define a border of any of: a privacy mask in the current digital image (200b), a virtual trip wire in the scene (100), a surveillance zone in the scene (100).
 
14. An image processing device (150) for updating a coordinate of an annotated point (a1', a2') in a digital image (200b) due to camera movement, the image processing device (150) comprising processing circuitry (810), the processing circuitry being configured to cause the image processing device (150) to:

obtain a current digital image (200b) of a scene (100), the current digital image (200b) having been captured by a camera (130) subsequent to movement of the camera (130) relative to the scene (100), wherein the current digital image (200b) is associated with at least one annotated point (a1', a2'), and wherein each at least one annotated point (a1', a2') has a respective coordinate in the current digital image (200b);

identify an amount of the movement by comparing position indicative information in the current digital image (200b) to position indicative information in a previous digital image (200a) of the scene (100), the previous digital image (200a) having been captured prior to movement of the camera (130); and

update the coordinate of each at least one annotated point (a1', a2') in accordance with the identified amount of movement and a camera homography.


 
15. A computer program (920) for updating a coordinate of an annotated point (a1', a2') in a digital image (200b) due to camera movement, the computer program comprising computer code which, when run on processing circuitry (810) of an image processing device (150), causes the image processing device (150) to:

obtain (S102) a current digital image (200b) of a scene (100), the current digital image (200b) having been captured by a camera (130) subsequent to movement of the camera (130) relative to the scene (100), wherein the current digital image (200b) is associated with at least one annotated point (a1', a2'), and wherein each at least one annotated point (a1', a2') has a respective coordinate in the current digital image (200b);

identify (S104) an amount of the movement by comparing position indicative information in the current digital image (200b) to position indicative information in a previous digital image (200a) of the scene (100), the previous digital image (200a) having been captured prior to movement of the camera (130); and

update (S106) the coordinate of each at least one annotated point (a1', a2') in accordance with the identified amount of movement and a camera homography.


 


Amended claims in accordance with Rule 137(2) EPC.


1. A method for updating a coordinate of an annotated point (a1', a2') in a digital image (200b) due to camera movement, the method being performed by an image processing device (150), the method comprising:

obtaining (S102) a current digital image (200b) of a scene (100), the current digital image (200b) having been captured by a camera (130) subsequent to movement of the camera (130) relative to the scene (100), wherein the current digital image (200b) is associated with at least one annotated point (a1', a2'), and wherein each at least one annotated point (a1', a2') has a respective coordinate in the current digital image (200b);

identifying (S104) an amount of the movement by comparing position indicative information in the current digital image (200b) to position indicative information in a previous digital image (200a) of the scene (100), the previous digital image (200a) having been captured prior to movement of the camera (130), wherein the position indicative information in the current digital image (200b) is represented by a first set of keypoints (k1':k6') extracted from the current digital image (200b), wherein the position indicative information in the previous digital image (200a) is represented by a second set of keypoints (k1:k6) extracted from the previous digital image (200a), wherein the second set of keypoints (k1:k6) is a subset of all keypoints (k1:k6) extractable from the previous digital image (200a), wherein the previous digital image (200a) is associated with at least one annotated point (a1, a2) having a respective coordinate in the previous digital image (200a), and wherein the second set of keypoints (k1:k6) is restricted to only include keypoints (k1:k6) located within a predefined radius from the coordinate of the at least one annotated point (a1, a2) in the previous digital image (200a); and

updating (S106) the coordinate of each at least one annotated point (a1', a2') in accordance with the identified amount of movement and a camera homography.


 
2. The method according to claim 1, wherein the comparing involves location-wise matching of the first set of keypoints (k1':k6') to the second set of keypoints (k1:k6), and wherein the amount of movement is identified from how much coordinates in the current digital image (200b) of the keypoints (k1':k6') of the first set of keypoints (k1':k6') differ from coordinates in the previous digital image (200a) of the keypoints (k1:k6) of the second set of keypoints (k1:k6).
 
3. The method according to claim 1 or 2, wherein a respective first feature vector is determined for each keypoint (k1':k6') in the first set of keypoints (k1':k6') and a respective second feature vector is determined for each keypoint (k1:k6) in the second set of keypoints (k1:k6), and wherein the first set of keypoints (k1':k6') is matched to the second set of keypoints (k1:k6) by finding a set of pairs of one of the first feature vectors and one of the second feature vectors that yields minimum distance between the first feature vectors and the second feature vectors among all sets of pairs of one of the first feature vectors and one of the second feature vectors.
 
4. The method according to any preceding claim, wherein those of all keypoints (k1:k6) with second feature vectors most similar to a second feature vector of another keypoint (k1:k6) extracted from the previous digital image (200a) are excluded from being included in the second set of keypoints (k1:k6).
 
5. The method according to any preceding claim, wherein which of all keypoints (k1:k6) to be included in the second set of keypoints (k1:k6) is dependent on their location in the previous digital image (200a).
 
6. The method according to any preceding claim, wherein the second set of keypoints (k1:k6) is restricted to only include keypoints (k1:k6) located within a predefined radius (r) from a center (C) of the previous digital image (200a).
 
7. The method according to any preceding claim, wherein the previous digital image (200a) is selected from a set of previous digital images (200a) of the scene (100), wherein each of the previous digital images (200a) in the set of previous digital images (200a) has its own time stamp, and wherein which previous digital image (200a) to select is based on comparing the time stamps to a time stamp of the current digital image (200b).
 
8. The method according to any preceding claim, wherein the coordinate of each at least one annotated point (a1', a2') is updated by application of a homography matrix to the coordinate of each at least one annotated point (a1', a2') in the current digital image (200b), and wherein the homography matrix is dependent on the identified amount of movement and the camera homography.
 
9. The method according to any preceding claim, wherein the method further comprises:
issuing (S108) a notification in case updating the coordinate of each at least one annotated point (a1', a2') causes any of the at least one annotated point (a1', a2') to have a coordinate outside the current digital image (200b).
 
10. The method according to any preceding claim, wherein there are at least two annotated points (a1', a2'), and wherein the at least two annotated points (a1', a2') define a border of any of: a privacy mask in the current digital image (200b), a virtual trip wire in the scene (100), a surveillance zone in the scene (100).
 
11. An image processing device (150) for updating a coordinate of an annotated point (a1', a2') in a digital image (200b) due to camera movement, the image processing device (150) comprising processing circuitry (810), the processing circuitry being configured to cause the image processing device (150) to:

obtain a current digital image (200b) of a scene (100), the current digital image (200b) having been captured by a camera (130) subsequent to movement of the camera (130) relative to the scene (100), wherein the current digital image (200b) is associated with at least one annotated point (a1', a2'), and wherein each at least one annotated point (a1', a2') has a respective coordinate in the current digital image (200b);

identify an amount of the movement by comparing position indicative information in the current digital image (200b) to position indicative information in a previous digital image (200a) of the scene (100), the previous digital image (200a) having been captured prior to movement of the camera (130), wherein the position indicative information in the current digital image (200b) is represented by a first set of keypoints (k1':k6') extracted from the current digital image (200b), wherein the position indicative information in the previous digital image (200a) is represented by a second set of keypoints (k1:k6) extracted from the previous digital image (200a), wherein the second set of keypoints (k1:k6) is a subset of all keypoints (k1:k6) extractable from the previous digital image (200a), wherein the previous digital image (200a) is associated with at least one annotated point (a1, a2) having a respective coordinate in the previous digital image (200a), and wherein the second set of keypoints (k1:k6) is restricted to only include keypoints (k1:k6) located within a predefined radius from the coordinate of the at least one annotated point (a1, a2) in the previous digital image (200a); and

update the coordinate of each at least one annotated point (a1', a2') in accordance with the identified amount of movement and a camera homography.


 
12. A computer program (920) for updating a coordinate of an annotated point (a1', a2') in a digital image (200b) due to camera movement, the computer program comprising computer code which, when run on processing circuitry (810) of an image processing device (150), causes the image processing device (150) to:

obtain (S102) a current digital image (200b) of a scene (100), the current digital image (200b) having been captured by a camera (130) subsequent to movement of the camera (130) relative to the scene (100), wherein the current digital image (200b) is associated with at least one annotated point (a1', a2'), and wherein each at least one annotated point (a1', a2') has a respective coordinate in the current digital image (200b);

identify (S104) an amount of the movement by comparing position indicative information in the current digital image (200b) to position indicative information in a previous digital image (200a) of the scene (100), the previous digital image (200a) having been captured prior to movement of the camera (130), wherein the position indicative information in the current digital image (200b) is represented by a first set of keypoints (k1':k6') extracted from the current digital image (200b), wherein the position indicative information in the previous digital image (200a) is represented by a second set of keypoints (k1:k6) extracted from the previous digital image (200a), wherein the second set of keypoints (k1:k6) is a subset of all keypoints (k1:k6) extractable from the previous digital image (200a), wherein the previous digital image (200a) is associated with at least one annotated point (a1, a2) having a respective coordinate in the previous digital image (200a), and wherein the second set of keypoints (k1:k6) is restricted to only include keypoints (k1:k6) located within a predefined radius from the coordinate of the at least one annotated point (a1, a2) in the previous digital image (200a); and

update (S106) the coordinate of each at least one annotated point (a1', a2') in accordance with the identified amount of movement and a camera homography.


 




Drawing

Search report