Field
[0001] This specification relates to generating a movement indication for a scene.
Background
[0002] Motion data can be extracted from video data. Information, such as breathing data
for a human subject, can be extracted from said motion data. However, there remains
a need for alternative arrangements for generating and using motion data.
Summary
[0003] In a first aspect there is described an apparatus comprising: means for receiving
depth data for a first plurality of pixels of a video image of a scene; means for
determining depth data over time for each of a second plurality of pixels of the video
image, wherein the second plurality of pixels comprises at least some of the first
plurality of pixels; and means for processing the determined depth data of successive
instances of the second plurality of pixels to generate movement data, wherein each
instance of the second plurality of pixels comprises depth data of the second plurality
of pixels.
[0004] The movement data may comprise breathing pattern data, and/or pulse data.
[0005] The second plurality of pixels may be non-contiguous.
[0006] The second plurality of pixels may comprise a random or pseudo-random selection of
the pixels of the video image.
[0007] The apparatus may further comprise means for selecting said second plurality of pixels
such that a distribution of said second plurality of pixels complies with a distribution
function.
[0008] The apparatus may further comprise means for sensing said depth data.
[0009] The means for sensing said depth data may comprise a multi-modal camera including
an infrared projector and an infrared sensor.
[0010] The means for processing the depth data may perform frequency domain filtering.
[0011] The frequency domain filtering may identify movements with frequencies in a normal
range of breathing rates.
[0012] The apparatus may further comprise means for determining a mean distance measurement
for the second plurality of pixels between their two successive instances.
[0013] The means for processing the determined depth data may generate said movement data
based, at least in part, on determining whether a mean distance measurement between
successive instances of depth data is greater than a threshold value.
[0014] The second plurality of pixels may be a subset of the first plurality of pixels,
or the first plurality of pixels and the second plurality of pixels may be identical.
[0015] The various means of the apparatus may comprise: at least one processor; and at least
one memory including computer program code, the at least one memory and the computer
program code configured, with the at least one processor, to cause the performance
of the method of the second aspect, below.
[0016] In a second aspect, there is provided a method comprising: receiving depth data for
a first plurality of pixels of a video image of a scene; determining depth data over
time for each of a second plurality of pixels of the video image, wherein the second
plurality of pixels comprises at least some of the first plurality of pixels; and
processing the determined depth data of successive instances of the second plurality
of pixels to generate movement data, wherein each instance of the second plurality
of pixels comprises depth data of the second plurality of pixels.
[0017] The movement data may comprise breathing pattern data, and/or pulse data.
[0018] The second plurality of pixels may be non-contiguous.
[0019] The second plurality of pixels may comprise a random or pseudo-random selection of
the pixels of the video image.
[0020] The method may further comprise selecting said second plurality of pixels such that
a distribution of said second plurality of pixels complies with a distribution function.
[0021] The method may further comprise sensing said depth data.
[0022] Sensing said depth data may comprise using a multi-modal camera including an infrared
projector and an infrared sensor.
[0023] Processing the depth data may comprise frequency domain filtering.
[0024] The frequency domain filtering may identify movements with frequencies in a normal
range of breathing rates.
[0025] The method may further comprise determining a mean distance measurement for the second
plurality of pixels between their two successive instances.
[0026] Processing the determined depth data may generate said movement data based, at least
in part, on determining whether a mean distance measurement between successive instances
of depth data is greater than a threshold value.
[0027] The second plurality of pixels may be a subset of the first plurality of pixels,
or the first plurality of pixels and the second plurality of pixels may be identical.
[0028] In a third aspect, this specification describes any apparatus configured to perform
any method as described with reference to the second aspect.
[0029] In a fourth aspect, this specification describes computer-readable instructions which,
when executed by computing apparatus, cause the computing apparatus to perform any
method as described with reference to the second aspect.
[0030] In a fifth aspect, this specification describes a computer program comprising instructions
for causing an apparatus to perform at least the following: receive video data for
a scene; determine a movement measurement for at least some of a plurality of subframes
of the video data; weight the movement measurements to generate a plurality of weighted
movement measurements, wherein the weighting is dependent on the subframe; and generate
a movement indication for the scene from a combination (such as a sum) of some or
all of the weighted movement measurements.
[0031] In a sixth aspect, this specification describes a computer-readable medium (such
as a non-transitory computer readable medium) comprising program instructions stored
thereon for performing at least the following: receive depth data for a first plurality
of pixels of a video image of a scene; determine depth data over time for each of
a second plurality of pixels of the video image, wherein the second plurality of pixels
comprises at least some of the first plurality of pixels; and process the determined
depth data of successive instances of the second plurality of pixels to generate movement
data, wherein each instance of the second plurality of pixels comprises depth data
of the second plurality of pixels.
[0032] In a seventh aspect, this specification describes an apparatus comprising: at least
one processor; and at least one memory including computer program code which, when
executed by the at least one processor, causes the apparatus to: receive depth data
for a first plurality of pixels of a video image of a scene; determine depth data
over time for each of a second plurality of pixels of the video image, wherein the
second plurality of pixels comprises at least some of the first plurality of pixels;
and process the determined depth data of successive instances of the second plurality
of pixels to generate movement data, wherein each instance of the second plurality
of pixels comprises depth data of the second plurality of pixels.
[0033] In an eighth aspect, this specification describes an apparatus comprising: a first
input for receiving depth data for a first plurality of pixels of a video image of
a scene; a first control module for determining depth data over time for each of a
second plurality of pixels of the video image, wherein the second plurality of pixels
comprises at least some of the first plurality of pixels; and a second control module
for processing the determined depth data of successive instances of the second plurality
of pixels to generate movement data, wherein each instance of the second plurality
of pixels comprises depth data of the second plurality of pixels. The first and second
control modules may be combined.
Brief description of the drawings
[0034] Example embodiments will now be described, by way of non-limiting examples, with reference to
the following schematic drawings, in which:
FIG. 1 is a block diagram of a system in accordance with an example embodiment;
FIG. 2 shows an example image in accordance with an example embodiment;
FIG. 3 is a flow chart showing an algorithm in accordance with an example embodiment;
FIG. 4 shows an example image being processed in accordance with an example embodiment;
FIG. 5 is a block diagram of a system in accordance with an example embodiment;
FIG. 6 is a flow chart showing an algorithm in accordance with an example embodiment;
FIG. 7 is a flow chart showing an algorithm in accordance with an example embodiment;
FIG. 8 shows initialisation data in accordance with an example embodiment;
FIGS. 9A and 9B show results in accordance with an example embodiment;
FIGS. 10A and 10B show results in accordance with an example embodiment;
FIG. 11 is a flow chart showing an algorithm in accordance with an example embodiment;
FIG. 12 shows output data in accordance with an example embodiment;
FIG. 13 shows an example image being processed in accordance with an example embodiment;
FIG. 14 shows an example image being processed in accordance with an example embodiment;
FIG. 15 is a block diagram of a system in accordance with an example embodiment; and
FIGS. 16A and 16B show tangible media, respectively a removable memory unit and a
compact disc (CD) storing computer-readable code which, when run by a computer, performs
operations according to example embodiments.
Detailed description
[0035] Sleep disorders are associated with many problems, including psychiatric and medical
problems. Sleep disorders are sometimes divided into a number of subcategories, such
as intrinsic sleep disorders, extrinsic sleep disorders and circadian rhythm based
sleep disorders.
[0036] Examples of intrinsic sleep disorders include idiopathic hypersomnia, narcolepsy,
periodic limb movement disorder, restless legs syndrome, sleep apnoea and sleep state
misperception.
[0037] Examples of extrinsic sleep disorders include alcohol-dependent sleep disorder, food
allergy insomnia and inadequate sleep routine.
[0038] Examples of circadian rhythm sleep disorders include advanced sleep phase syndrome,
delayed sleep phase syndrome, jetlag and shift worker sleep disorder.
[0039] Better understanding of sleep physiology and pathophysiology may aid in improving
the care received by individuals suffering with such difficulties. Many sleep disorders
are primarily diagnosed based on self-reported complaints. Lack of objective data
can hinder case understanding and care provided. By way of example, data relating
to breathing patterns of a patient may be valuable.
[0040] Non-contact, discrete longitudinal home monitoring may represent the most suitable
option for monitoring sleep quality (including breathing patterns) over a period of
time and may, for example, be preferred to clinic-based monitoring as the sleep patterns
can vary between days depending on food intake, lifestyle and health status.
[0041] FIG. 1 is a block diagram of a system, indicated generally by the reference numeral
10, in accordance with an example embodiment. The system 10 includes a bed 12 and
a depth camera 14. The depth camera may, for example, be mounted on a tripod 15. As
discussed further below, the depth camera 14 may capture both depth data and video
data of a scene including the bed 12 (and any patient on the bed).
[0042] In the use of the system 10, a patient may sleep on the bed 12, with the depth camera
14 being used to record aspects of sleep quality (such as breathing patterns indicated
by patient movement). The bed 12 may, for example, be in a patient's home, thereby
enabling home monitoring, potentially over an extended period of time.
[0043] FIG. 2 shows an example output, indicated generally by the reference numeral 20,
in accordance with an example embodiment. The output 20 is an example of depth data
generated by the depth camera 14 of the system 10. The depth data may be data of a
scene that is fixed. For example, the scene may be captured using a camera having
a fixed position and having unchanging direction and zoom. In this way, changes in
the depth data can be monitored over time (e.g. over several hours of sleep) and may
be compared with other data (e.g. collected over days, weeks or longer and/or collected
from other users).
[0044] The depth camera 14 may produce depth matrices, where each element of the matrix
corresponds to the distance from an object in the image to the depth camera. The depth
matrices can be converted to grey images, as shown in FIG. 2 (e.g. the darker the
pixel, the closer it is to the sensor). If the object is too close or too far away,
the pixels may be set to zero such that they appear black.
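By way of a non-limiting illustration, the conversion from a depth matrix to a grey image might be sketched as follows in Python (the depth units and the near/far limits are assumptions, not values taken from the embodiment):

    import numpy as np

    def depth_to_grey(depth_mm, near=500, far=4000):
        # Map a depth matrix (here assumed to be in millimetres) to an 8-bit grey
        # image in which darker pixels are closer to the sensor. Pixels outside
        # the [near, far] range (or with no reading) are set to zero so that they
        # appear black, as described above.
        depth = np.asarray(depth_mm, dtype=np.float64)
        valid = (depth >= near) & (depth <= far)
        grey = np.zeros(depth.shape, dtype=np.uint8)
        scaled = (depth - near) / (far - near)      # 0.0 at "near" .. 1.0 at "far"
        grey[valid] = (1 + scaled[valid] * 254).astype(np.uint8)
        return grey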
[0045] FIG. 3 is a flow chart showing an algorithm, indicated generally by the reference
numeral 30, in accordance with an example embodiment.
[0046] The algorithm 30 starts at operation 32, where depth data is received (for example
from the camera 14 described above). The depth camera 14 (e.g. a camera incorporating
an RGB-D sensor) may determine depth data for a first plurality of pixels of an image
of a scene. The scene may relate to a human subject, for example captured during sleep.
[0047] Where the term "image" is used herein, it is to be understood that the image may
be a video image, for example a series of frames of image data.
[0048] An example depth camera may capture per-pixel depth information via infra-red (IR)
mediated structured light patterns with stereo sensing or time-of-flight sensing to
generate a depth map. Some depth cameras include three-dimensional sensing capabilities
allowing synchronized image streaming of depth information. Many other sensors are
possible.
[0049] At operation 34, a second plurality of pixels is selected. By way of example, FIG.
4 shows an example output, indicated generally by the reference numeral 40, in accordance
with an example embodiment. The output 40 shows the data of the output 20 and also
shows the second plurality of pixels 42 (two of which are labelled in FIG. 4). The
second plurality of pixels may be randomly (or pseudo-randomly) selected from the
plurality of pixels of the image 20 (although, as discussed further below, other distributions
of the pixels 42 are possible). (Note that the pixels 42 are shown as relatively large
pixels to ensure that they are visible in FIG. 4. In reality, such pixels may be smaller.)
[0050] As shown in FIG. 4, the second plurality of pixels may be non-contiguous, for example
distributed across the extent of the image of the scene. In this way, the second plurality
of pixels can provide a plurality of "pinpricks" of data from the overall image. Clearly,
the use of a plurality of pinprick-pixels can significantly reduce data storage and
processing requirements when compared with considering the entire image. By "non-contiguous"
it is meant that the second plurality of pixels are not located in such a way that
they form a single contiguous area. In practice, the non-contiguous pixels may be
located in a plurality of smaller contiguous groups that are not contiguous with one
another. However, the best coverage of the scene may be obtained by ensuring the such
groups are as small as possible, for example that each group is less than a threshold
number of pixels. In some examples, the threshold number of pixels may be 2, such
that none of the second plurality of pixels is located immediately adjacent another.
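A minimal sketch of such a selection is given below; the frame shape, the number of selected pixels and the minimum separation used to reject immediately adjacent pixels are illustrative assumptions rather than a definitive implementation of the operation 34:

    import numpy as np

    def select_pixels(frame_shape, n_pixels, min_separation=2, seed=None):
        # Pseudo-randomly select a non-contiguous second plurality of pixels.
        # A candidate location is kept only if it is at least `min_separation`
        # pixels away (in the Chebyshev sense) from every pixel already selected,
        # so that no selected pixel is immediately adjacent to another.
        rng = np.random.default_rng(seed)
        rows, cols = frame_shape
        selected = []
        while len(selected) < n_pixels:
            r = int(rng.integers(0, rows))
            c = int(rng.integers(0, cols))
            if all(max(abs(r - r0), abs(c - c0)) >= min_separation
                   for (r0, c0) in selected):
                selected.append((r, c))
        return selected

    # Example: 300 "pinprick" pixels from a 480 x 640 depth frame.
    pixels = select_pixels((480, 640), 300, seed=42)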
[0051] The first plurality of pixels (obtained in the operation 32) may include data from
all pixels in the image 40; however, this is not essential in all examples. Thus,
the first plurality of pixels may be a subset of the full set of pixels of the image.
[0052] The second plurality of pixels comprises some or all of the first plurality of pixels.
(In the event that the second plurality of pixels comprises all of the first plurality
of pixels, the operation 34 may be omitted.) Thus, whilst in some embodiments the
second plurality of pixels is a subset of the first plurality of pixels, in other
embodiments, the first and second plurality of pixels may be identical.
[0053] At operation 36, depth data for successive instances of the second plurality of pixels
are determined to generate data that can be used to provide an indication of a degree
of movement between one instance of depth data and the next.
[0054] Finally, a movement indication, based on the outputs of the operation 36, is provided
in operation 38. By way of example, the operation 38 may determine whether movement
is detected in any of the second plurality of pixels. Alternatively, the movement
indication may be generated from a combination (such as a sum, e.g. a weighted sum)
of some or all of the second plurality of pixels.
[0055] The movement data generated in operation 38 is based on the depth data determined
in operation 36. Each instance of the second plurality of pixels comprises depth data
of a consistent set of pixels of the scene such that the movement data is derived
from data from the same set of pixels collected over time. In one example implementation,
the successive instances of the second plurality of pixels were separated by
5 seconds (although, of course, other separations are possible). Separating successive
instances of the depth data by a period such as 5 seconds even if more data are available
may increase the likelihood of measurable movement occurring between instances and
may reduce the data storage and processing requirements of the system.
[0056] More generally, an instance of one or more pixels may be the values of those pixels
at a particular moment in time, for example in a single frame in a stream of image
data such as a video. Successive instances of the pixel(s) may be adjacent frames
of the stream. Alternatively, the video stream may be sampled at instances of time,
with successive instances of pixels being the values of the pixels at each sample.
Described above is an example in which the sample period is a fixed temporal value
(e.g. a certain number of seconds); however, other approaches to sampling may be used.
For example, the stream may be sampled (and an instance of the pixel(s) defined) in
response to a repeating event that does not have a fixed period - for example, each
time a detected noise level associated with the monitored scene increases above a
threshold amount, in which case the change in pixel data will be representative of
movement between successive noises. Many possible approaches can be used to determine
when an instance of the pixel data is defined.
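Purely by way of illustration, fixed-period sampling of instances might be sketched as follows; the `frames` iterable of timestamped depth frames and the 5-second period are assumptions:

    def sample_instances(frames, period_s=5.0):
        # `frames` is assumed to be an iterable of (timestamp_s, depth_frame)
        # pairs. A frame is emitted as a new instance whenever at least
        # `period_s` seconds have elapsed since the previous instance.
        last_t = None
        for t, depth_frame in frames:
            if last_t is None or (t - last_t) >= period_s:
                last_t = t
                yield t, depth_frame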
[0057] As discussed further below, the movement measurements for each of the second plurality
of pixels of the depth data may be determined on the basis of the mean distance evolution
of the pixels. A distance measurement may be determined for the second plurality of
pixels of each measurement instance, with the distance measurement enabling one measurement
instance to be compared with other measurement instance(s).
[0058] The movement data generated in the operation 38 provides movement data for the second
plurality of pixels. However, as discussed further below, movement in the second plurality
of pixels may be representative of movement in the overall image. Moreover, by using
only a subset of data points in the image (perhaps for a subset of time instances in
the obtained data), the quantity of data stored and/or processed can be substantially
reduced.
[0059] FIG. 5 is a block diagram of a system, indicated generally by the reference numeral
50, in accordance with an example embodiment. The system 50 comprises an imaging device
52 (such as the camera 14), a data storage module 54, a data processing module 56
and a control module 58.
[0060] The data storage module 54 may store measurement data (e.g. the depth data for the
first plurality of pixels, as obtained in the operation 32 of the algorithm 30) under
the control of the control module 58, with the stored data being processed by data
processor 56 to generate an output (e.g. the output of the operation 38 of the algorithm
30). The output may be provided to the control module 58. In an alternative arrangement,
the data storage module 54 may store the depth data of the second plurality of pixels
(e.g. outputs of the operation 34), with the stored data being processed by data
processor 56 to generate the output (e.g. the output of the operation 38). A potential
advantage of storing the second plurality of data is a reduced data storage requirement.
[0061] The camera 14 of the system 10 and/or the imaging device 52 of the system 50 may
be a multimodal camera comprising a colour (RGB) camera and an infrared (IR) projector
and infrared sensor. The sensor may send an array of near infra-red (IR) light into
the field-of-view of the camera 14 or the imaging device 52, with a detector receiving
reflected IR and an image sensor (e.g. a CMOS image sensor) running computational
algorithms to construct a real-time, three-dimensional depth value mesh-based video.
Information obtained and processed in this way may be used, for example, to identify
individuals, their movements, gestures and body properties and/or may be used to measure
size, volume and/or to classify objects. Images from depth cameras can also be used
for obstacle detection by locating a floor and walls.
[0062] In one example implementation, the imaging device 52 was implemented using a Kinect
(RTM) camera provided by the Microsoft Corporation. In one example, the frame rate
of the imaging device 52 was 33 frames per second.
[0063] Depth frames may be captured by the imaging device 52 and stored in the data storage
module 54 (e.g. in the form of binary files). In an example implementation, the data
storage 54 was used to record depth frames for subjects sleeping for periods ranging
between 5 and 8 hours. In one example implementation, the data storage requirement
for one night of sleep was of the order of 200GB when the entirety of each frame was
stored; however, it was possible to reduce this storage requirement by approximately
98% using the approach described herein.
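By way of a purely illustrative calculation (the frame resolution and the number of retained pixels are assumptions rather than figures from the experiment): a 640 × 480 depth frame contains 307,200 pixels, so retaining of the order of 6,000 selected pixels per stored instance keeps roughly 2% of the per-frame data, corresponding to a reduction of approximately 98%; further savings follow if, in addition, instances are stored only every few seconds rather than for every captured frame.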
[0064] The stored data can be processed by the data processor 56. The data processing may
be online (e.g. during data collection), offline (e.g. after data collection) or a
combination of the two. The data processor 56 may provide an output, as discussed
further below.
[0065] FIG. 6 is a flow chart showing an algorithm, indicated generally by the reference
numeral 60, showing an example implementation of the operation 38 of the algorithm
30, in accordance with an example embodiment. As described above, the operation 38
determines movement between successive instances of the captured depth data.
[0066] The algorithm 60 starts at operation 62 where a mean distance value for the second
plurality of pixels of an image is determined. Then, at operation 64, distances between
image instances (based on the distances determined in operation 62) are determined.
As discussed further below, a distance between image instances that is higher than
a threshold value may be indicative of movement between successive images.
[0067] By way of example, consider the depth data of the second plurality of pixels generated
in the operation 36.
[0068] After removing all the zero pixels in the frames, the mean distance of the second
plurality of pixels at instance i is defined by:

M(i) = (1/|D|) · Σ_{p ∈ D} d_p(i)

where:
D = G\Z, with
G = {set of pixels in the second plurality}; and
Z = {set of zero-value pixels in the second plurality},
and d_p(i) denotes the depth (the distance to the camera) of pixel p at instance i.
[0069] In this context, the "distance" of a pixel refers to the difference in its depth
between the two instances. For example, if the pixel has a depth of 150 cm in a first
instance and a depth of 151 cm in a second instance then the distance of the pixel
between the two instances is 151 - 150 = 1 cm.
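A minimal sketch of the mean distance M(i) defined above, assuming the depth values of the second plurality of pixels at instance i are held in a NumPy array (the names are illustrative):

    import numpy as np

    def mean_distance(instance_depths):
        # Mean depth M(i) over the second plurality of pixels at instance i,
        # excluding zero-value (out-of-range) pixels, i.e. averaging over D = G\Z.
        depths = np.asarray(instance_depths, dtype=np.float64)
        non_zero = depths[depths > 0]
        return float(non_zero.mean()) if non_zero.size else 0.0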
[0070] Having computed the mean distance of the second plurality of pixels of successive
frames of a dataset (the operation 62), a determination of distance between image
instances (indicative of movement in the relevant scene) can be made (the operation
64).
[0071] One method for determining movement is as follows. First, a mean is defined based
on the set of all the non-zero-value pixels. Next, if the difference between the mean
distances of two consecutive frames is above a certain threshold θ1, a change in position
is detected. This is given by:

M(i) - M(i - 1) ≥ θ1.
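A sketch of this decision rule, reusing the mean_distance() sketch given above, is shown below; the use of the absolute difference, and the assumption that θ1 is supplied by the calibration discussed later, are illustrative choices rather than part of the embodiment:

    def detect_movement(instances, theta_1):
        # Return the indices i at which |M(i) - M(i-1)| >= theta_1, i.e. the
        # instances at which a change in position is detected. The absolute
        # difference is used on the assumption that movement towards or away
        # from the camera should both be detected.
        means = [mean_distance(inst) for inst in instances]
        return [i for i in range(1, len(means))
                if abs(means[i] - means[i - 1]) >= theta_1]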
[0072] Several values based on standard deviations and maximum differences are tested in
order to find the best threshold values (as discussed further below).
[0073] FIG. 7 is a flow chart showing an algorithm, indicated generally by the reference
numeral 70, in accordance with an example embodiment. The algorithm 70 starts at operation
72 where an initialisation phase is carried out. The initialisation phase is conducted
without a patient being present (such that there should be no movement detected).
As described below, the initialisation phase is used to determine a noise level in
the data. Next, at operation 74, a data collection phase is carried out. Finally,
at operation 76, instances of the data collected in operation 74 having data indicative
of movement are identified (thereby implementing the operation 38 of the algorithm
30 described above). The instances identified in operation 76 are based on a threshold
distance level that is set depending on the noise level determined in the operation
72.
[0074] FIG. 8 shows initialisation data, indicated generally by the reference numeral
80, in accordance with an example embodiment. The initialisation data 80 includes mean
distance data 82, a first image 83, a second image 84 and a representation of the
noise 85. The first image 83 corresponds to the mean distance data of a data point
86 and the second image 84 corresponds to the mean distance data of a data point 87.
[0075] It should be noted that although the data 80 may be based on full images (similar
to the image 20 discussed above), similar initialisation data could have been generated
based on the second plurality of pixels described above.
[0076] The mean distance data 82 shows how the determined mean frame distance changes over
time (a time period of 18 minutes is shown in FIG. 8). As the data 80 was collected
without motion (e.g. without a patient being on the bed 12 of the system 10), the
variations in the mean distance data 82 are representative of noise. The noise representation
85 expresses the noise as a Gaussian distribution. That distribution can be used to
determine a standard deviation for the noise data (as indicated by the standard deviation
shown in the data 82). A maximum difference between two consecutive mean values is
also plotted on the output 80.
[0077] The operation 72 seeks to assess the intrinsic noise in the frames of the output
80. In this way, the intrinsic variation in the data at rest can be determined so
that the detection of motion will not be confused by noise.
[0078] The operation 72 may be implemented using the principles of the algorithm 30 described
above. Thus, video data (i.e. the output 82) may be received. Distances in the selected
second plurality of pixels can then be determined (see operation 36 of the algorithm
30). (Although, as noted above, the noise could be based on the entire image, e.g.
the first plurality of pixels.)
[0079] In the data collection phase 74 of the algorithm 70, the subject goes to sleep and
the relevant dataset is collected. Then, in operation 76, the knowledge of the noise
is exploited to determine the likelihood that a change in distance data is indicative
of actual movement (rather than noise). By way of example, movement may be deemed
to have occurred in the event that determined distance data is more than 55 standard
deviations (as determined in operation 72) away from the previous mean distance.
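A minimal sketch of this calibration, assuming a motion-free sequence of instances recorded during the initialisation phase (the variable names and the choice of multiplier k are assumptions; the value 55 quoted above was found empirically):

    import numpy as np

    def calibrate_threshold(rest_instances, k):
        # Estimate the noise in motion-free initialisation data and derive a
        # detection threshold as k standard deviations of the mean distance data.
        means = np.array([mean_distance(inst) for inst in rest_instances])
        noise_std = float(np.std(means))
        return k * noise_std

    # e.g. theta_1 = calibrate_threshold(initialisation_instances, k=55)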
[0080] An experiment has been conducted on fourteen nights including eleven different subjects
in order to validate the reproducibility of the experiment. The data were stored as
videos which serve as ground truth. The true detection of movements was manually noted by
looking at the videos and was compared to the detection of movements provided by the
algorithm being tested.
[0081] FIGS. 9A and 9B show results, indicated generally by the reference numerals 92 and
94 respectively, in accordance with example embodiments. The results are based on
a mean distance evolution (i.e. how the mean distance determinations change over time)
for the fourteen nights. The percentage of correct detections (true positives), averaged
over all experiments, is shown together with the overestimation of movements (false positives).
The results 92 and 94 are based on frames of image data having the form described
above with reference to FIG. 2 (i.e. without selecting a subset of the pixels for
processing).
[0082] The results 92 show the number of correct movement identifications (true positives)
and the number of false movement identifications (false positives) for different standard
deviation threshold levels. With the movement threshold at two standard deviations
(such that a mean distance change of at least two standard deviations is detected),
a true positives measurement of 45% was detected and a false positives measurement
of 81% was detected. As the threshold level was increased, the number of false positives
reduced (to zero at five standard deviations). However, at five standard deviations,
the true positives level had reduced to 26%. Accordingly, the results 92 show poor
performance.
[0083] The results 94 show the number of correct movement identifications (true positives)
and the number of false movement identifications (false positives) when using maximum
distance differences as the threshold value. The performance was also poor.
[0084] FIGS. 10A and 10B show results, indicated generally by the reference numerals 102
and 104 respectively, in accordance with example embodiments. The results are based
on a mean distance evolution for the fourteen nights. The percentage of correct detections
(true positives), averaged over all experiments, is shown together with the overestimation
of movements (false positives). The results 102 and 104 are based on considering pixel
depth evolution of a plurality of pixels of image data (e.g. the second plurality
of pixels discussed above).
[0085] The results 102 show the number of correct movement identifications (true positives)
and the number of false movement identifications (false positives) for different standard
deviation threshold levels. With the movement threshold at 55 standard deviations,
a true positives measurement of 91% was detected and a false positives measurement
of 4% was detected. Thus, the results for the second plurality of pixels were significantly
better than the results for the full frame arrangement described above with reference
to FIGS. 9A and 9B. The results 104 show a similarly good performance when using maximum
distance differences as the threshold value.
[0086] FIG. 11 is a flow chart showing an algorithm, indicated generally by the reference
numeral 110, in accordance with an example embodiment.
[0087] The algorithm 110 starts at operation 112 where measurement data is obtained by sensing.
The operation 112 may, for example, implement some or all of the algorithms 30 or
70 described above. The operation 112 may, for example, provide data indicative of
movement over time.
[0088] At operation 114, frequency-domain filtering of the measurement data is conducted.
Thus, the operation 114 may implement means for processing determined changes in depth
data by incorporating frequency domain filtering. Such filtering may, for example,
be used to identify movement with frequencies within a particular range.
[0089] One example use of the principles described herein is in the monitoring of breathing
patterns of a patient. In such an embodiment, the operation 114 may perform frequency-domain
filtering in order to identify movements with frequencies in a normal range of breathing
rates. Thus, movement data can be filtered to identify movement that might be indicative
of breathing. For example, a band-pass filter centred on typical breathing frequencies
may be used to implement the operation 114.
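A sketch of such a filter using SciPy is given below; the sampling rate and the assumed breathing band of roughly 0.1 Hz to 0.7 Hz (about 6 to 42 breaths per minute) are illustrative assumptions:

    from scipy.signal import butter, filtfilt

    def breathing_band_filter(movement_signal, fs, low_hz=0.1, high_hz=0.7, order=4):
        # Band-pass filter a movement signal (sampled at fs Hz) so as to retain
        # only frequencies in a typical range of breathing rates.
        nyquist = fs / 2.0
        b, a = butter(order, [low_hz / nyquist, high_hz / nyquist], btype="bandpass")
        return filtfilt(b, a, movement_signal)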
[0090] FIG. 12 shows output data, indicated generally by the reference numeral 120, in accordance
with an example embodiment. The data 120 compares the performance of depth data processing
in accordance with the principles described herein (labelled "Kinect" in FIG. 12),
with the performance of a measurement belt (labelled "Belt" in FIG. 12).
[0091] The output 122 shows the "Kinect" and "Belt" performance in the time domain, wherein
movement amplitude is plotted against time. The output 123 shows the "Kinect" performance
in the frequency domain. The output 124 shows the "Belt" performance in the frequency
domain.
[0092] As can be seen in FIG. 12, the performance of depth data processing in accordance
with an example embodiment described herein ("Kinect") is similar to the performance
of an example measurement belt ("Belt").
[0093] In the arrangement described above with reference to FIG. 4, the selection of pixels
(in operation 34 of the algorithm 30) was random (or pseudorandom). In other embodiments,
the selection of pixels may be subject to a distribution function (with or without
a random or pseudorandom element). A variety of functions are described herein (which
functions may be used alone or in any combination).
[0094] FIG. 13 shows an example image, indicated generally by the reference numeral 130,
being processed in accordance with an example embodiment. A plurality of pixels 132
that collectively form the pixels selected in the operation 34 are shown. The pixels
are spread according to a function that favours locations close to the centre of the
image. The distribution 130 also has a random element. The logic here is that the
camera/imaging device is likely to be directed such that objects of interest are close
to the centre of the field of view.
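One minimal sketch of such a centre-favouring selection, using a Gaussian weighting about the image centre combined with a random element (the spread parameter is an assumption):

    import numpy as np

    def select_centre_weighted(frame_shape, n_pixels, sigma_frac=0.25, seed=None):
        # Pseudo-randomly select pixels with the selection probability weighted
        # towards the centre of the image by a Gaussian distribution function.
        rng = np.random.default_rng(seed)
        rows, cols = frame_shape
        r, c = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
        weight = np.exp(-(((r - rows / 2) / (sigma_frac * rows)) ** 2
                          + ((c - cols / 2) / (sigma_frac * cols)) ** 2))
        prob = (weight / weight.sum()).ravel()
        idx = rng.choice(rows * cols, size=n_pixels, replace=False, p=prob)
        return [(int(i) // cols, int(i) % cols) for i in idx]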
[0095] The weighting of a distribution function may be at least partially dependent on visual
characteristics of the scene. For example, areas above the level of the bed 12 in
the system 10 described above may be weighted higher in a distribution function than
areas below the level of the bed (on the assumption that movement
is more likely to occur above the level of the bed than below the level of the bed).
Alternatively, or in addition, areas on the bed 12 in the system 10 may be weighted
higher in the distribution function. The second plurality of pixels may be selected
on the basis of such distribution functions, favouring pixel locations within areas
that have a higher weighting.
[0096] FIG. 14 shows an example image, indicated generally by the reference numeral 140,
being processed in accordance with an example embodiment. A plurality of pixels 142
that collectively form the pixels selected in the operation 34 are shown. The pixels
are spread randomly (or pseudorandomly) but subject to being positioned on a bed identified
in the image. The logic here is that a sleeping patient can be expected to be located
on the bed.
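Assuming a boolean mask marking the identified bed region is available (how the bed is identified is outside the scope of this sketch), the selection illustrated in FIG. 14 might be approximated as:

    import numpy as np

    def select_within_mask(region_mask, n_pixels, seed=None):
        # Pseudo-randomly select pixels constrained to a region of interest,
        # e.g. a boolean mask covering the bed identified in the image.
        rng = np.random.default_rng(seed)
        candidate_rows, candidate_cols = np.nonzero(region_mask)
        idx = rng.choice(candidate_rows.size, size=n_pixels, replace=False)
        return list(zip(candidate_rows[idx].tolist(), candidate_cols[idx].tolist()))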
[0097] The examples described above generally relate to sleep monitoring in general, and
the detection of data relating to sleep disorders in particular. This is not essential
to all embodiments. For example, the principles discussed herein can be used to determine
movement in a scene for other purposes. For example, a similar approach may be taken
to detect movement in a largely static scene, for example for monitoring an area to
detect the movement of people, animals, vehicles, etc. within it.
[0098] Moreover, the examples described above generally relate to determining breathing
patterns of a patient. This is not essential to all embodiments. Many other movements
may be detected. For example, the principles described herein could be applied to
the detection of a heartbeat of a patient.
[0099] For completeness, FIG. 15 is a schematic diagram of components of one or more of
the example embodiments described previously, which hereafter are referred to generically
as processing systems 300. A processing system 300 may have a processor 302, a memory
304 closely coupled to the processor and comprised of a RAM 314 and ROM 312, and,
optionally, user input 310 and a display 318. The processing system 300 may comprise
one or more network/apparatus interfaces 308 for connection to a network/apparatus,
e.g. a modem which may be wired or wireless. Interface 308 may also operate as a connection
to other apparatus, such as a device/apparatus which is not a network-side apparatus.
Thus, a direct connection between devices/apparatus without network participation is possible.
The user input 310 and display 318 may be connected to a remote processor, such as a ground
control station, in which case the remote connection may be a fast connection, for example
of the LTE or 5G type, between the remote processor and the processor 302.
[0100] The processor 302 is connected to each of the other components in order to control
operation thereof.
[0101] The memory 304 may comprise a non-volatile memory, such as a hard disk drive (HDD)
or a solid state drive (SSD). The ROM 312 of the memory 304 stores, amongst other
things, an operating system 315 and may store software applications 316. The RAM 314
of the memory 304 is used by the processor 302 for the temporary storage of data.
The operating system 315 may contain code which, when executed by the processor, implements
aspects of the algorithms 30, 60, 70 and 110 described above. Note that, in the case
of a small device/apparatus, the memory may be of a correspondingly small size, i.e.
a hard disk drive (HDD) or solid state drive (SSD) is not always used.
[0102] The processor 302 may take any suitable form. For instance, it may be a microcontroller,
a plurality of microcontrollers, a processor, or a plurality of processors.
[0103] The processing system 300 may be a standalone computer, a server, a console, or a
network thereof. The processing system 300 and the required structural parts may all
be inside a device/apparatus, such as an IoT device/apparatus, i.e. embedded to a very small size.
[0104] In some example embodiments, the processing system 300 may also be associated with
external software applications. These may be applications stored on a remote server
device/apparatus and may run partly or exclusively on the remote server device/apparatus.
These applications may be termed cloud-hosted applications. The processing system
300 may be in communication with the remote server device/apparatus in order to utilize
the software application stored there.
[0105] FIGS. 16A and 16B show tangible media, respectively a removable memory unit 365 and
a compact disc (CD) 368, storing computer-readable code which when run by a computer
may perform methods according to example embodiments described above. The removable
memory unit 365 may be a memory stick, e.g. a USB memory stick, having internal memory
366 storing the computer-readable code. The memory 366 may be accessed by a computer
system via a connector 367. The CD 368 may be a CD-ROM or a DVD or similar. Other
forms of tangible storage media may be used. Tangible media can be any device/apparatus
capable of storing data/information which data/information can be exchanged between
devices/apparatus/ network.
[0106] Embodiments of the present invention may be implemented in software, hardware, application
logic or a combination of software, hardware and application logic. The software,
application logic and/or hardware may reside on memory, or any computer media. In
an example embodiment, the application logic, software or an instruction set is maintained
on any one of various conventional computer-readable media. In the context of this
document, a "memory" or "computer-readable medium" may be any non-transitory media
or means that can contain, store, communicate, propagate or transport the instructions
for use by or in connection with an instruction execution system, apparatus, or device,
such as a computer.
[0107] Reference to, where relevant, "computer-readable storage medium", "computer program
product", "tangibly embodied computer program" etc., or a "processor" or "processing
circuitry" etc. should be understood to encompass not only computers having differing
architectures such as single/multi-processor architectures and sequencers/parallel
architectures, but also specialised circuits such as field programmable gate arrays
(FPGA), application-specific integrated circuits (ASIC), signal processing devices/apparatus
and other devices/apparatus. References to computer program, instructions, code, etc.
should be understood to encompass software for a programmable processor or firmware,
such as the programmable content of a hardware device/apparatus, whether instructions
for a processor or configuration settings for a fixed-function device/apparatus, gate
array, programmable logic device/apparatus, etc.
[0108] As used in this application, the term "circuitry" refers to all of the following:
(a) hardware-only circuit implementations (such as implementations in only analogue
and/or digital circuitry) and (b) to combinations of circuits and software (and/or
firmware), such as (as applicable): (i) to a combination of processor(s) or (ii) to
portions of processor(s)/software (including digital signal processor(s)), software,
and memory(ies) that work together to cause an apparatus, such as a server, to perform
various functions and (c) to circuits, such as a microprocessor(s) or a portion of
a microprocessor(s), that require software or firmware for operation, even if the
software or firmware is not physically present.
[0109] If desired, the different functions discussed herein may be performed in a different
order and/or concurrently with each other. Furthermore, if desired, one or more of
the above-described functions may be optional or may be combined. Similarly, it will
also be appreciated that the flow diagrams of Figures 3, 6, 7 and 11 are examples
only and that various operations depicted therein may be omitted, reordered and/or
combined.
[0110] It will be appreciated that the above described example embodiments are purely illustrative
and are not limiting on the scope of the invention. Other variations and modifications
will be apparent to persons skilled in the art upon reading the present specification.
For example, it would be possible to extend the principles described herein to other
applications, such as the control of robots or similar objects.
[0111] Moreover, the disclosure of the present application should be understood to include
any novel features or any novel combination of features either explicitly or implicitly
disclosed herein or any generalization thereof and during the prosecution of the present
application or of any application derived therefrom, new claims may be formulated
to cover any such features and/or combination of such features.
[0112] Although various aspects of the invention are set out in the independent claims,
other aspects of the invention comprise other combinations of features from the described
example embodiments and/or the dependent claims with the features of the independent
claims, and not solely the combinations explicitly set out in the claims.
[0113] It is also noted herein that while the above describes various examples, these descriptions
should not be viewed in a limiting sense. Rather, there are several variations and
modifications which may be made without departing from the scope of the present invention
as defined in the appended claims.