[0001] The present invention relates to a computer-implemented method for training an artificial
intelligence based neural network applicable for classifying digital images to be
considered as a security document or not without authenticating a security feature,
a computer-implemented method for copy prevention of at least one security document,
a banknote detector, and a computer program product.
[0002] Security-relevant documents such as flight tickets or banknotes are often subject
to reproducing actions, for example counterfeiting actions. One measure to approach
forged documents relates to the assessment of the authenticity of questionable documents.
However, this approach is a downstream activity in the sense that the original document
has been reproduced already at the time of performing the authentication process on
the questionable document. Therefore, authentication-related measures are less desirable.
[0003] The reproduction of the original document can be performed by means of scanning devices,
printers and/or copying devices. A reproduction can also be considered a data copying
process such as a data transformation. In this regard, it is favorable to avoid the
reproducing action itself in case a document, when reproduced, could potentially be
considered an original security document. In this case the reproduction is avoided
before even being performed. Such methods exist with regard to specialized security
features included in the security documents. This means, for a given document in question
it is evaluated whether this document comprises a security feature originally contained
within the security document. However, these approaches are unfavorable for several
reasons. Firstly, the evaluation with regard to security features is complex and requires
sophisticated devices. Secondly, such an evaluation process would need to contain
specific information on the security features. If counterfeiters were to lay open the
evaluation process, they would gain this information. Thirdly, counterfeiters may attack
security features and include amended features within the document in question, which
could lead to incorrect results of the evaluation method.
[0004] Various security features exist for the purpose of preventing the actions described
above. They may be for example printed graphical design elements that are recognized
by special detectors in the aforesaid devices. The detectors may then trigger a response,
which interferes with the desired action, such as refusal to process, or printing
a highly degraded image. Such graphical elements may be designed so as to have the
appearance of being a part of the security document artwork. An example of the use
of such elements may be found in
U.S. Patent 5,845,008. In other cases, special signals, which are visually nearly imperceptible, may be
added to the printed designs so that they are recognized by special detectors in the
aforesaid devices, which may then trigger responses as described above. An example
of the use of such elements may be found in
U.S. Patent 6,449,377.
[0005] These security features, however, suffer from inherent vulnerabilities. Graphical
design elements, even when the attempt is made to make them look like part of the
artwork, may often be readily recognized by skilled persons for their intended securing
purpose. The result is that they may be altered just slightly enough so that the special
detectors no longer identify them and thus fail to interrupt the reproducer's desired
action. They may also be misused by applying said elements to other documents not
intended to be protected by the legitimate users so that persons are unable to complete
scanning, copying or printing actions on said documents.
[0006] Special signals such as digital watermarks can also have the undesirable trait of
appearing to distort the printed document. In the case of banknote artwork, this can
be especially undesirable. The distortion can be lessened, albeit at the expense of
signal strength; usually a compromise is sought.
[0007] Artificial Intelligence in combination with machine learning is increasingly being
used for applications like facial recognition and other object identification. In
such applications, there are an infinite number of potential images, which may need
to be robustly recognized. An application, for example, which is trained to recognize
an image of a gray squirrel as such may encounter any one of a huge variation of gray
squirrel sizes, poses, ages, color shades, lighting conditions or any of a number of other
individual characteristics. An application designed to reliably recognize an individual person's
face will have to handle similar variations, which at the very least adds to the computational
complexity and computing resource needs of the application.
[0008] The objective technical problem to be solved can be considered to consist in providing
a method for training an artificial intelligence based neural network applicable for
classifying digital images to be considered as a security document or not without
authenticating a security feature, as well as a method for copy prevention of a security
document without authenticating a security feature making use of the so-trained neural
network, which is improved compared to the prior art.
[0009] According to the present invention, the neural network is not trained to authenticate
digital images, in particular digital images of security documents. Moreover, the
inventive method for copy prevention of a security document is not replacing any authentication
process. In contrast, the inventive method for copy prevention generally may represent
an additional measure, which may be applied so as to evaluate whether a reproduction
of a digital sample image could be perceived by an unbiased human observer as a security
document.
[0010] The problem is solved by the subject matter of the independent claims. Preferred
embodiments are indicated within the dependent claims and the following description,
each of which, individually or in combination, may represent aspects of the invention.
The advantages and preferred embodiments described with regard to the indicated devices
are correspondingly to be transferred to the according methods and vice versa.
[0011] The present invention uses an inventively trained artificial intelligence based neural
network in order to determine whether a digital image of a document may be copied
/ reproduced and, thus, does not make use of the presence of security features representing
code for copy protection. Thus, documents which shall be copy prevented do not need
to comprise a code for copy protection in order to specifically prevent reproducing
a digital image. According to the present invention the design of the document to
be copy protected does not need to be distorted by use of an additional code for copy
protection, which also reduces a risk that counterfeiters identify an area of the
code for protection. In addition, the absence of a code for copy protection on a document
reduces a risk that such a code for copy prevention may be hacked or the code may
be used illegally on other items for illegally stopping their reproduction. The inventive
method for copy protection using an inventively trained artificial intelligence based
neural network is particularly suitable for high throughput sorting and/or copying/reproduction
solutions of security documents, in particular of banknotes. It may be performable
on shorter time scales as it may require less time for determining whether a document
shall be reproduced or not than common authentication methods of security documents,
which require authentication of the specific code for copy protection.
[0012] According to a first aspect, a computer-implemented method for training an artificial
intelligence based neural network is provided. The neural network is applicable for
classifying digital images to be considered as a security document (in the following
also indicated with A for reference purposes) or not. This means, the method may be
configured to train the network for classifying a digital image to be considered as
a security document A or for classifying a digital image such that it is not considered
as a security document.
[0013] The method comprises providing at least one digital image A1 of at least one security
document as a reference.
[0014] The method also comprises providing a set of digital training images (in the following
also indicated with B1 for reference purposes). Based on the digital training images
the neural network may be trained with regard to the classification process. The digital
training images are altered compared to the digital image of the security document.
[0015] The set of digital training images comprises a first subset of positive digital training
images (in the following also indicated with B1-1 for reference purposes) having a
visual impact of an alteration such that an unbiased human observer would consider
a reproduction of the respective digital training image to represent the security
document or multiple security documents.
[0016] The set of digital training images also comprises a second subset of negative digital
training images (in the following also indicated with B1-2) having a visual impact
of the alteration such that the unbiased human observer would not consider a reproduction
of the respective digital training image to represent the security document or multiple
security documents.
[0017] The method further comprises providing ground truth for each digital training image
to the artificial intelligence based neural network. The ground truth represents at
least one acceptance level of one or more unbiased human observers as to whether a
reproduction of the respective digital training image is to be considered representing
or not representing the security document or multiple security documents. In other
words, the ground truth is used to train the neural network with regard to a decision
process of the classification. The ground truth describes how the specific digital
training images are to be interpreted in this training process. The at least one acceptance
level comprises respectively one or more responses of the unbiased human observers
regarding the determination whether a reproduction of a respective training image
is considered to represent or not represent the security document. For example, in
case of one unbiased human observer the acceptance level represents the respective
response of this human observer. In case of two, three, four or more human observers,
the responses of the two, three, four or more human observers are respectively used
to represent the acceptance level.
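The mapping from observer responses to an acceptance level described above can be sketched as follows. This is an illustrative sketch only; the function name and the fraction-based encoding are assumptions, not part of the claimed method:

```python
# Illustrative sketch: encode the responses of one or more unbiased human
# observers as a single acceptance level. The fraction-based encoding is an
# assumption for illustration, not part of the claimed method.

def acceptance_level(responses: list[bool]) -> float:
    """Each response is True if that observer considered a reproduction of
    the training image to represent the security document."""
    if not responses:
        raise ValueError("at least one observer response is required")
    return sum(responses) / len(responses)

# One observer: the level is simply that observer's response (0.0 or 1.0).
assert acceptance_level([True]) == 1.0
# Four observers, three of whom accepted the reproduction:
assert acceptance_level([True, True, True, False]) == 0.75
```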
[0018] The so designed method may advantageously train the artificial intelligence based
neural network with regard to digital training images relative to at least one security
document. Accordingly, the neural network may be trained for each digital training
image as to which acceptance level a reproduction of the specific digital training
image would be considered the security document by an unbiased human observer. Accordingly,
the neural network may be trained to exhibit a decision behavior as to when a reproduction
of a digital image is to be considered the security document.
[0019] Within the context of all aspects of the present invention, the terms "can", "could",
"may" or the like also include the indicative / realis mood of the correlated verb.
For example, the expression "the data file can be suitable to describe the properties
of the image means of the digital code" also includes the indicative / realis mood
"the data file is suitable to describe the properties of the image means of the digital
code".
[0020] Within the context of the present invention, the expressions "image(s) to be considered
as a security document" or "digital image(s) to be considered as a security document"
or "reproduction of an image to be considered as a security document" or "reproduction
of a digital image to be considered as a security document" means that an unbiased
human observer could (at least up to a certain acceptance level) perceive / regard
/ interpret a reproduction of the (digital) image as the security document. In other
words, the unbiased human observer does neither authenticate the security document,
its (digital) image nor its reproduction. Instead, the unbiased human observer upon
viewing/observing or using the digital image considers / has the impression / imagines
at least for a certain acceptance level that the (digital) image or its reproduction
represents the security document or a specimen thereof. Therefore, the feature expression
as indicated above, may also be regarded as to whether the unbiased human observer
would accept the (digital) image or its reproduction as a specimen of the security
document without any underlying authentication process.
[0021] Within the context of the present invention, a digital image refers to a digital
code usually written in a computer language and therefore being computer-readable
code representing a specific image as a data file. The data file can be suitable to
describe the properties of the image by means of the digital code.
[0022] The digital image of the security document can have a resolution within the range
of 50 dpi to 2000 dpi, in particular within the range of 100 dpi to 1000 dpi, further
in particular within the range of 200 dpi to 600 dpi, further in particular in the
range of 300 dpi to 400 dpi.
[0023] Within the context of the present method, a reproduction of a digital image refers
to a hard copying and/or printing process such that the digital image is physically
processed to be permanently visible on printable media, at least for a certain amount
of time such as several years. In addition, reproducing a digital image may also include
a data processing, transforming or saving process with regard to the data underlying
the respective digital image.
[0024] The digital training images may be altered compared to the reference. Within the
context of the present invention, an altered digital training image may be considered
a digital training image which has a different or reduced quality compared to the
reference. In an alternative, the digital training image may from a viewpoint of reproduction,
e.g. printing, have a similar quality, but may be doctored or modified so as to distinguish
from the reference security document.
[0025] According to an example, the reference security document may, e.g., be a specific banknote
including a portrait of the British Queen. The digital training image may then have
similar quality with regard to printing properties such as resolution, shape, dimensions,
colors, etc. However, the portrait of the British Queen may be replaced by a different
person, e.g. the British Prime Minister or any other related person so that an unbiased
human observer would consider this document to represent a security document. Such
altered digital training images may according to the present invention be regarded
as an alteration, such that an unbiased human observer would consider a reproduction
of the respective digital training image to represent the security document or multiple
security documents. The alteration may include a degradation. According to an alternative
embodiment, the portrait of the British Queen may be replaced by a portrait of the
President of the United States of America or any unrelated person so that an unbiased
human observer would directly understand that this document is not to be considered
as a security document. In this case, the digital training image may still be regarded
as altered or degraded, but the unbiased human observer would not consider a reproduction
of the respective digital training image to represent the security document or multiple
security documents.
[0026] The altered digital training images can be degraded. In this case, the digital training
images can be based on training documents B which have been chemically or physically
attacked. This means that training documents may have been chemically or physically
degraded. For example, a chemical reactive substance may have been applied or the
training document may have been scratched. The digital training image may then be
acquired after the underlying training document has been attacked. Accordingly, the
digital training image may show modifications caused by the attack on the underlying
training document.
[0027] In an alternative, a digital image may have been acquired based on a non-altered
training document, but the digital image may have been digitally attacked itself.
For example, the digital image may have been modified by applying a digital filter.
In this case, the digital training image may be the digitally attacked digital image.
[0028] The altered digital training images may differ from the digital image of the security
document with regard to at least one of a pixelation, a resolution, a definition,
a general aspect, a shape, a color, a color distribution, an image processing filter,
and an aspect ratio. For example, the digital training image may have a resolution,
meaning a pixel density per unit area, which is reduced compared to the resolution
of the reference. Accordingly, the visual impact may be different. Still, when being
reproduced the resolution of the digital training image may be sufficient such that
the unbiased human observer would consider the reproduction to represent the security
document. The general aspect refers to features of the digital image of the security
document which are not equally included in the digital training image. For example,
the security document may partially show a specific landscape including several mountains.
The digital training image may show a different landscape having the same number of
mountains or may show in principle the same landscape but may miss some of the mountains
included in the security document. The aspect ratio refers to a general ratio of length
to width of the security document and the digital training image. The definition refers
to a total number of pixels of the respective item with regard to horizontal and vertical
directions. The image processing filter may include a noise reduction filter, a blur
filter, a so-called neural filter using AI to process images, and similar digital
filters.
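One of the alterations listed above, a reduced resolution, can be illustrated with a minimal sketch. The block-averaging approach and the function name are assumptions for illustration only; a real pipeline would use an imaging library:

```python
# Hypothetical sketch of one alteration named above (reduced resolution,
# i.e. lower pixel density): average non-overlapping factor x factor blocks
# of a grayscale image given as a 2D list of pixel intensities.

def downscale(image: list[list[int]], factor: int) -> list[list[int]]:
    """Reduce resolution by block-averaging; trailing rows/columns that do
    not fill a complete block are dropped."""
    h, w = len(image), len(image[0])
    out = []
    for y in range(0, h - h % factor, factor):
        row = []
        for x in range(0, w - w % factor, factor):
            block = [image[y + dy][x + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) // len(block))
        out.append(row)
    return out

# A 4x4 grayscale patch downscaled by 2 keeps the coarse brightness layout,
# so a reproduction may still be recognizable to an unbiased human observer.
patch = [[10, 10, 200, 200],
         [10, 10, 200, 200],
         [50, 50, 90, 90],
         [50, 50, 90, 90]]
assert downscale(patch, 2) == [[10, 200], [50, 90]]
```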
[0029] The altered digital training images may differ from the digital image of the security
document with regard to at least one of a perspective angle, an underlying illumination,
a coloration, a fold, or a crease. The perspective angle may refer to the angle at
which the digital training image appears to have been acquired. For example, the security
document may have a rectangular shape. Due to the perspective angle during the acquisition
of the digital training image (or due to a respective digital attack), the digital
training image may fail to have a rectangular shape. Moreover, certain features within
the digital training image may be distorted given a specific perspective angle. The
illumination may refer to a brightness distribution present in the digital training
image which differs from the brightness distribution the security document shows
when it is sufficiently illuminated in a top view. In a similar manner, the digital
training image may differ from the security document according to a coloration, i.e.
a specific color of at least a portion or, more generally, a color distribution across
the digital training image. A fold or a crease may have been present in a training
document based on which the digital training image is acquired. Accordingly, the
fold or the crease will generally also be perceptible within the digital training
image.
[0030] Furthermore, the security document may comprise one or more graphical design features,
such as a portrait, e.g. of the British Queen, or an architectural image (bridge,
building, etc.) or a graphical image of nature (plants or parts thereof, such as a
leaf (so-called floral / botanical emblems or floral / botanical elements), or fauna
/ animals (so-called wildlife emblems or wildlife elements), etc.). The altered digital
training image may then differ from the digital image of the security document in
that the digital training image may comprise at least one different design feature
substituting the corresponding design feature of the security document, such as a
different portrait of a different related or unrelated person, a different related
or unrelated architectural image or a different related or unrelated image out of
nature. In this regard, the digital training image may be considered being altered.
The neural network may be trained accordingly in this regard, i.e. whether the unbiased
human observer considers the respectively altered digital images as relating to a
security document or not. According to one embodiment, the unbiased human observer
may consider the altered image as relating to the security document in case the graphical
design feature is substituted by a related graphical design feature, e.g. for the
British Pound banknote the portrait of Queen Elisabeth may be substituted by the portrait
of the British prime minister or a portrait of another member of the British royal
family. According to another embodiment, the unbiased human observer may not consider
the altered image as relating to the security document in case the graphical design
feature is substituted by an unrelated graphical design feature, e.g. for the British
Pound banknote the portrait of Queen Elizabeth may be substituted by the portrait
of the president of the United States of America or of another country. According to the
present invention, architectural images may be regarded as being related to each other
in case they belong to the same category, e.g. bridges, buildings etc., or in case
they belong to architectural images regarded to represent the same country (e.g.,
Tower Bridge, Westminster Bridge and/or Westminster Abbey, Big Ben representing Great
Britain; or e.g. Eiffel Tower and Pont Neuf representing France). According to another
embodiment of the present invention, graphical images of nature may be regarded as
being related to each other in case they belong to the same category, e.g. plants
or parts thereof, such as leaf, animal, etc., or in case they belong to graphical
images of nature regarded to represent the same country (e.g. Kangaroo, Platypus and
Koala representing Australia).
[0031] This means that the digital training images are generally not identical to the digital
image of the security document. However, the differences may be small enough that
a reproduction of the digital training image is still to be considered the security
document. In other words, the neural network is advantageously trained to reflect
the finding that also digital images differing from the digital image of the security
document may be considered by an unbiased human observer to represent the security
document when being reproduced, at least at a specific acceptance level. In this regard,
the acceptance level may describe the misbelief of an unbiased human observer. Although
considerable deviations between the digital training image and the security document
may be present, the observer may still consider a reproduction of the digital training
image to represent the security document. If the security document and the digital
training image are directly compared to each other, such deviations may be easily
recognizable. However, the perception and remembrance of humans is limited. For example,
it is well-known that people often accept counterfeit banknotes that differ significantly
in appearance from an authentic banknote. Accordingly, an unbiased human observer
may at least up to a certain extent (the acceptance level) generally consider differing
items to be same if the differences are not too strong. For example, a forged banknote
may be considered by a human observer to represent an original banknote. The inventive
neural network is trained to advantageously include these differences and the specifics
as to the human perception and remembrance.
[0032] The acceptance level may be considered to describe a similarity metric between the
altered digital training images and the reference, i.e. to which degree these (or a
reproduction thereof) are considered by a respective number of human observers to be
distinguishable from each other.
[0033] The unbiased human observer does not need to be an expert in the field but is considered
a person commonly using the security document.
[0034] The ground truth may represent acceptance levels of at least four unbiased human
observers. In this case, the ground truth may comprise at least five different acceptance
levels. Since different humans might differently judge as to whether a reproduction
of the digital training image represents a security document, this uncertainty is
included within the training process by increasing the number of the decisive unbiased
human observers. Therefore, the ground truth advantageously includes an improved distribution
with regard to the acceptance level. For example, if there are four unbiased human
observers, these generally result in five different acceptance levels, wherein the
distribution with regard to the acceptance level as to whether a reproduction is considered
to represent the security document or multiples thereof or whether this is not the
case may be one of: 4/0, 3/1, 2/2, 1/3, 0/4.
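The five acceptance-level distributions obtainable from four observers can be sketched as follows. The encoding of the distribution as an "accept/reject" string is an illustrative assumption:

```python
# Sketch under assumptions: four unbiased human observers give binary
# judgements, yielding five possible accept/reject distributions
# (4/0, 3/1, 2/2, 1/3, 0/4) as described in the paragraph above.
from collections import Counter

def vote_distribution(responses: list[bool]) -> str:
    counts = Counter(responses)
    return f"{counts[True]}/{counts[False]}"

assert vote_distribution([True, True, True, True]) == "4/0"
assert vote_distribution([True, False, True, False]) == "2/2"

# All five levels that four observers can produce:
levels = {vote_distribution([True] * k + [False] * (4 - k)) for k in range(5)}
assert levels == {"0/4", "1/3", "2/2", "3/1", "4/0"}
```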
[0035] The security document may be a banknote. In this case, two digital images comprising
a front side image and a reverse side image of the banknote may be provided as two
references. Then, each positive digital training image may have a visual impact of
the alteration such that the unbiased human observer would consider a reproduction
of the respective digital training image to represent the front side image and/or
the reverse side image of the security document or multiple security documents. Since
the reverse side of a banknote is also printed and, in particular printed differently
than the front side, the positive digital training images can in principle match one
of both sides or a combination of both. In any case, such a digital training image
is a positive digital training image since the unbiased human observer may consider
a reproduction of the specific digital training image to represent the banknote, at
least with regard to a single side thereof or various combinations. Therefore, both
sides of the banknote have to be provided as a reference and the set of training images
is accordingly adapted.
[0036] Within the context of all aspects of the present invention, artificial intelligence
(AI) refers to a software or hardware based technique which is configured to make
decisions, e.g. a computer-implemented algorithm. The AI can also be configured to
automatically exploit provided data with regard to the intended purpose and to automatically
provide the user with the respective result.
[0037] The method may be executed using a deep neural network having one or more neural
layers. Each layer may exhibit independently from other layers a number of neurons.
Each layer may have a branched or non-branched architectural structure. Accordingly,
the neural network may be advanced such that the training mechanism may be performed
with a higher degree of detail.
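A layered network of the kind described, where each layer exhibits its own number of neurons, can be sketched minimally in plain Python. The layer sizes, weight values, and the sigmoid activation are assumptions for illustration only and do not represent the trained network:

```python
# Minimal illustrative sketch of stacked neural layers; weights, sizes and
# the sigmoid activation are assumptions, not the claimed trained network.
import math

def dense_layer(inputs, weights, biases):
    """One fully connected layer with a sigmoid activation per neuron."""
    outputs = []
    for neuron_weights, bias in zip(weights, biases):
        z = sum(w * x for w, x in zip(neuron_weights, inputs)) + bias
        outputs.append(1.0 / (1.0 + math.exp(-z)))
    return outputs

# Two stacked layers: each layer may exhibit its own number of neurons.
hidden = dense_layer([0.5, -1.0],
                     weights=[[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]],
                     biases=[0.0, 0.0, 0.0])
score = dense_layer(hidden, weights=[[1.0, 1.0, 1.0]], biases=[-1.5])[0]
assert len(hidden) == 3   # three neurons in the hidden layer
assert 0.0 < score < 1.0  # sigmoid output is a valid score
```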
[0038] The artificial intelligence based neural network of all inventive aspects may comprise
machine learning capabilities. Accordingly, the neural network may be configured to
learn from the training mechanism and to generalize the determining process based
on the provided digital training images.
[0039] The inventive neural network including machine learning capabilities can be configured
to include multiple inputs to improve the decision-making process. In other words,
the network can be configured to recognize several similar inputs in order to improve
the accuracy of a decision compared to the accuracy of a decision drawn from a single
input.
[0040] The method of the first inventive aspect may be performed for a closed set of multiple
security documents. Then, the artificial intelligence based neural network may be
trained for every security document of the closed set of security documents. Accordingly,
the closed set may comprise or consist of a specific number of security documents
for which the neural network may be trained. That may be advantageous if the neural
network shall be used for security documents of a specific type for which only a certain
number of different elements exist. For example, the closed set of security documents
can be determined by the different banknote denominations of a limited number of currency
systems.
[0041] All features and embodiments disclosed with respect to the first aspect of the present
invention are combinable alone or in (sub-)combination with any one of the second
to fifth aspects of the present invention including each of the preferred embodiments
thereof, provided the resulting combination of features is reasonable to a person
skilled in the art.
[0042] According to a second aspect, a computer-implemented method for copy prevention of
at least one security document A is provided.
[0043] The method may comprise providing a digital sample image C1. The method may also
comprise applying an artificial intelligence based neural network for classifying
the digital sample image into a first category or a second category. The neural network
may be trained according to the herein before described method.
[0044] The digital sample image may be classified into the first category if the neural
network determines that a reproduction of at least a portion of the digital sample
image could be considered by the unbiased human observer to represent the at least
one security document or multiple security documents.
[0045] In an alternative, the digital sample image may be classified into the second category
if the neural network determines that for no portion of the digital sample image a
reproduction could be considered by the unbiased human observer to represent the at
least one security document or multiple security documents.
[0046] Furthermore, the method may comprise preventing the digital sample image from being
reproduced if the neural network classifies the digital sample image into the first
category.
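The classification into the first or second category and the resulting prevention decision can be sketched as follows. The threshold value and the per-portion scores are illustrative assumptions standing in for the output of the trained network:

```python
# Hedged sketch of the decision flow of the second aspect: per-portion scores
# and the threshold are assumptions standing in for the trained network.

FIRST_CATEGORY = "first"    # a reproduction could be taken for the security document
SECOND_CATEGORY = "second"  # no portion could be taken for the security document

def copy_prevention(portion_scores: list[float], threshold: float = 0.5) -> str:
    """Classify a digital sample image from per-portion scores.

    Each score is assumed to be the network's estimate that a reproduction
    of that portion of the digital sample image would be considered by an
    unbiased human observer to represent the security document.
    """
    if any(score >= threshold for score in portion_scores):
        return FIRST_CATEGORY   # reproduction is to be prevented
    return SECOND_CATEGORY      # reproduction may proceed

assert copy_prevention([0.1, 0.8, 0.2]) == "first"   # one portion suffices
assert copy_prevention([0.1, 0.2, 0.3]) == "second"  # no portion qualifies
```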
[0047] Within the context of the present application the digital sample image refers to
a digital image of a sample document C. For example, there could be a sample document
for which it is to be determined whether a reproduction should be prevented or be
allowed. Obviously, if an unbiased human observer could consider any portion of the
sample document to represent the security document, a reproduction is to be avoided.
Then an effective measure is provided as to avoid a reproduction of candidates which
could be potentially used inappropriately or illegally as an (original) security document.
[0048] Therefore, a digital image of the sample document could be acquired to achieve a
digital sample image. This digital sample image may then be provided to the neural
network. As the neural network may be trained according to the details described hereinbefore,
it is aware of digital images of at least one security document as a reference. The
neural network may then be enabled to determine whether a reproduction of the digital
sample image could be considered by an unbiased human observer to represent the security
document or multiples thereof. During this process, the neural network may consider
differences between the digital sample image and the reference. Although these differences
may be present, the network may determine, at least at a certain acceptance level,
that a reproduction of at least a portion of the digital sample image may be considered
by an unbiased human observer to represent the security document. If this condition
is determined to be true, at least to a certain acceptance level, then the digital
sample image may be classified into the first category. This means that the first
category comprises or consists of digital sample images which potentially could be
misused. Accordingly, for the digital sample images classified into the first category,
a reproduction could be prevented in order to prohibit incorrect use, which may be achieved
by the current method. Preventing a reproduction may include preventing a processing
of the digital sample image such that no hard-copy of the digital sample image can
be acquired. Additionally or alternatively, the preventing measure can also include
that the data underlying the digital sample image are prevented from being processed,
transformed or saved.
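The classification and prevention logic of paragraphs [0046] to [0048] can be condensed into a purely hypothetical, non-limiting sketch; the function names, the per-portion scores, and the threshold are assumptions for illustration and not part of the disclosed method:

```python
def classify(portion_scores, threshold=0.5):
    """Classify into the first category if a reproduction of at least one
    portion could be considered to represent the security document, and into
    the second category only if this holds for no portion (cf. [0047])."""
    return "first" if any(score >= threshold for score in portion_scores) else "second"

def prevent_reproduction(portion_scores):
    """Reproduction is prevented for images of the first category (cf. [0046])."""
    return classify(portion_scores) == "first"
```

Here a single portion resembling the security document suffices for the first category, mirroring the "at least a portion" versus "no portion" distinction of the text.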
[0049] Within the context of the present method, the security document can be one of a banknote,
a check, a bill, a ticket, a passport, or a flight ticket. For these document types
an unauthorized reproduction of a respective digital image holds significant risks
for both economic and security-related reasons. These risks are avoided or at least
reduced by means of the method for copy prevention described herein before.
[0050] Preventing the digital sample image from being reproduced if the inventively trained neural
network classifies the digital sample image into the first category may include the
activation of prohibiting means. The prohibiting means can be a software or hardware
implemented structure. The prohibiting means may be configured to forbid a reproducing
means such as a printer or a copying machine to reproduce the digital sample image.
The prohibiting means may also be configured to prevent a data saving or data transforming
process. This can be achieved by a master-/slave-system, wherein the prohibiting means
may have control over the data processing unit included in such devices or in common
data handling systems. Additionally or alternatively, the data representing the digital
sample image may be amended by the prohibiting means such that they are not readable
or are unable to be processed by the reproducing device, i.e. the printer or copying
machine. Amending the data can comprise including a mark, an attribute, or a flag in
the data, wherein said mark, attribute, or flag prevents the data from being reproduced.
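The flag-based variant of the prohibiting means described in paragraph [0050] may be sketched as follows; the dictionary representation of the image data and the attribute name are purely illustrative assumptions:

```python
def amend_with_flag(image_data: dict) -> dict:
    """Amend the data of the digital sample image with a do-not-reproduce
    attribute, leaving the original data untouched."""
    amended = dict(image_data)
    amended["no_reproduce"] = True
    return amended

def reproducing_device_accepts(image_data: dict) -> bool:
    """A reproducing device honouring the flag refuses flagged data."""
    return not image_data.get("no_reproduce", False)
```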
[0051] The digital sample image may be altered compared to the digital image of the security
document. The digital sample image may in particular be degraded compared to the digital
image of the security document. The altered digital sample image may differ from the
digital image of the security document with regard to at least one of a perspective
angle, an underlying illumination, a coloration, a fold, or a crease. The specifics
explained with regard to alteration (synonymous: degradation or modification) described
herein within the context of digital training images may similarly apply to altered
digital sample images. Similarly, as in the case of the altered digital training images,
a digital sample image may not be optimal for various reasons, in particular due to
degradation. Moreover, the image acquisition may not be optimized due to limitations
of the illumination or due to a non-optimized arrangement of the image acquisition
means. In addition, the sample document itself may differ from the security document,
for example due to circulation or exchange of features, in particular graphical design
features. Accordingly, such differences may be present in the digital sample image
itself. However, as the neural network is trained accordingly, it may still be enabled
to determine whether a reproduction of the digital sample image would be considered
by an unbiased human observer as being related to a security document or not. In other
words, since the neural network may have been trained using the digital training images
which are altered compared to the reference, the neural network may make up for the
differences being present between the altered digital sample image and the reference.
The neural network may be configured to assess the differences between the altered
digital sample image and the reference appropriately, such that it is adapted to evaluate
as to how an unbiased human observer would consider a reproduction of at least a portion
of the digital sample image in view of the reference. The security document may comprise
a front side and a reverse side. Then, the digital sample image may be classified
into the first category if the neural network determines that a reproduction of at
least a portion of the digital sample image could be considered by the unbiased human
observer to represent the front side and/or the reverse side of the security document
or multiple security documents. In an alternative, the digital sample image may then
be classified into the second category if the neural network determines that for no
portion of the digital sample image a reproduction could be considered by the unbiased
human observer to represent the front side and/or the reverse side of the security
document or multiple security documents.
[0052] The ground truth provided in the method for training the neural network may represent
a range of acceptance levels comprising or consisting of a first subrange and a second
subrange. Then, the method for copy prevention could be modified insofar that a digital
sample image may be classified into the first category if the neural network determines
an acceptance level according to the first subrange that the reproduction of at least
a portion of the digital sample image could be considered by the unbiased human observer
to represent the at least one security document or multiple security documents. In
an alternative, the digital sample image could be classified into the second category
if the neural network determines an acceptance level according to the second subrange
that for no portion of the digital sample image a reproduction could be considered
by the unbiased human observer to represent the at least one security document or
multiple security documents. In this scenario, the first subrange may be larger than
the second subrange.
[0053] The acceptance level may be considered to describe a similarity metric between the
altered digital sample images and the reference, i.e. to which degree these (or a reproduction
thereof) are considered by a respective number of human observers to be distinguishable
from each other.
[0054] This means that the neural network may be trained such that it may determine multiple
different acceptance levels as to whether a reproduction of the digital sample image
is considered by an unbiased human observer to represent the security document. Advantageously,
the range of acceptance levels is asymmetrically distributed with regard to the classifying
mechanism of the digital sample image. In other words, only if the acceptance level
is very low, the digital sample image may be classified into the second category for
which a reproduction is not necessarily prevented. According to the larger subrange
of acceptance levels, the digital sample image may instead be classified into the
first category so that a reproduction is prevented. In other words, in case the first
subrange is larger than the second subrange, there may be an asymmetry between the
number of digital sample images classified into the first category and into the second
category, at least when assuming a homogeneous distribution of the digital sample
images with regard to the different acceptance levels.
[0055] In a simple scenario, an unbiased human observer could consider a reproduction of
the digital sample image to represent the security document at an acceptance level of
0.5 on a range from 0 to 1. Presuming that the first subrange is larger than the second
subrange, this digital sample image will, due to the 0.5 acceptance level, be classified
into the first category. The threshold between the subranges can in particular be set
very low, e.g. at a split of 0.3 vs. 0.7 for a range of 0 to 1, further 0.2 vs. 0.8,
e.g. 0.1 vs. 0.9, furthermore e.g. 0.01 vs. 0.99 for the same range of acceptance levels.
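The threshold examples of paragraph [0055] can be worked through in a short sketch; it assumes, without limitation, that acceptance levels at or above the threshold fall into the first subrange:

```python
def category(acceptance_level: float, threshold: float) -> str:
    """First subrange (first category) at or above the threshold,
    second subrange (second category) below it."""
    return "first" if acceptance_level >= threshold else "second"

# A sample with acceptance level 0.5 lands in the first category for every
# low threshold named in the text (splits 0.3/0.7, 0.2/0.8, 0.1/0.9, 0.01/0.99).
for threshold in (0.3, 0.2, 0.1, 0.01):
    assert category(0.5, threshold) == "first"
```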
[0056] The asymmetric distribution with regard to classifying the digital sample image into
the first category or the second category depending on the determined acceptance level
may also lead to an asymmetry of the rate between false-positive events compared to
the false-negative events. The number of false-positives may be larger, in particular
much larger, than the number of false-negatives. Here, false-positive refers to a
configuration according to which the neural network determines that a reproduction
of at least a portion of the digital sample image would be considered by the unbiased
human observer to represent the security document although the observer would actually
consider the reproduction to not represent the security document. False-negative may
refer to a configuration according to which the neural network determines that the
unbiased human observer would not consider the reproduction of at least a portion
of the digital sample image to represent the security document although the unbiased
human observer would actually indeed consider the reproduction of at least a portion
of the digital sample image to represent a security document. The reason behind the
asymmetry between false-positives and false-negatives may be given by the threshold
with regard to the acceptance level between the first and the second subrange. If
the threshold acceptance level between those subranges is low, a digital sample image
is rather classified into the first category while it is classified into the second
category only in rare cases. Therefore, the number of false-negatives will be smaller,
in particular much smaller, than the number of false-positives.
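The asymmetry between false-positives and false-negatives described in paragraph [0056] can be illustrated with made-up scores and observer judgements; the data and the threshold of 0.1 are assumptions for illustration only:

```python
def confusion(scores, truths, threshold):
    """Count false-positives and false-negatives; truths[i] is True if the
    unbiased human observer would actually consider a reproduction of image i
    to represent the security document."""
    false_pos = false_neg = 0
    for score, truth in zip(scores, truths):
        predicted_first = score >= threshold
        if predicted_first and not truth:
            false_pos += 1
        elif not predicted_first and truth:
            false_neg += 1
    return false_pos, false_neg

# With a low threshold nearly everything is pushed into the first category,
# so false-negatives become rare relative to false-positives.
scores = [0.05, 0.15, 0.4, 0.6, 0.9]
truths = [False, False, False, True, True]
fp, fn = confusion(scores, truths, 0.1)  # fp == 2, fn == 0
```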
[0057] The specific sizes of the first and second subranges with regard to the acceptance
level may be a result of the training process applied on the neural network.
[0058] In an alternative, the neural network may be trained such that the first and second
subranges would in principle be of the same size with regard to the overall range
of acceptance levels. In this case, the asymmetry may be introduced manually by setting
a respective threshold acceptance level before or during the usage of the neural network
for copy prevention of a security document. The setting or the adaption of the threshold
acceptance level may be for example motivated by the classification of digital sample
images. If the second category comprises or consists of digital sample images for
which a reproduction should be prevented, the threshold acceptance level may be adapted
accordingly.
[0059] In another alternative, the threshold acceptance level may be adapted by the neural
network itself based on its intrinsic machine learning capabilities.
[0060] The method for copy prevention of at least one security document can be based on
a code, wherein the respective code of the method can have a binary size between 100
kB and 50 MB, in particular between 200 kB and 10 MB, further in particular between
500 kB and 1 MB. Since the code has a comparably small size, the code can advantageously
be implemented also in non-high-end data processing devices such as a scanning device,
a printer, a copying device, etc.
[0061] The method for copy prevention of at least one security document can be configured
to be executable within a period of time of less than 60 sec, in particular within a
period of time between 100 msec and 30 sec, further in particular within a period of
time of less than 1 sec. The so configured method can advantageously
be applied also during regular data processing procedures in real-time, such as a
print of a digital image with acceptable expenditure of time. In particular, the method
for copy prevention may be executable at a speed that substantially does not slow
down the reproduction process, e.g. a printing process. In this case, the processing
of the method according to the second aspect of the present invention could take place
within the mechanical latency of the printing device. For example, when using a printer,
e.g. an inkjet printer, according to one embodiment this could mean that only a few
lines of the digital image may be reproduced / printed before the reproduction / printing
operation is stopped due to copy prevention. Still, this embodiment nonetheless achieves
the purpose of the invention. All features and embodiments disclosed with respect
to the second aspect of the present invention are combinable alone or in (sub-)combination
with any one of the first or third to fourth aspects of the present invention including
each of the preferred embodiments thereof, provided the resulting combination of features
is reasonable to a person skilled in the art.
[0062] According to a third aspect of the present invention a banknote detector comprising
or consisting of a communicating means and a data processing unit is provided. The
communicating means may be configured to receive a digital sample image and to provide
the digital sample image to the data processing unit. The data processing unit may
be configured for carrying out the method for copy prevention of at least one security
document. The banknote detector may be configured to prevent a reproduction of the
digital sample image.
[0063] The inventive banknote detector may in particular be implemented in devices for reproduction
of a sample document such as a copying machine or a printer. Accordingly, the banknote
detector may be enabled to advantageously prevent the reproduction process of a sample
document or a digital sample image if the digital sample image is classified into
the first category as explained herein before.
[0064] The inventive banknote detector may be software-implemented. The banknote detector
may be comprised within a device for reproducing the digital sample image. In an alternative,
the banknote detector may be configured as a cloud-based or server-based application.
[0065] According to an alternative, the banknote detector may at least partially be hardware-implemented.
In this case, at least some of the features of the banknote detector may be implemented
by hardware-based data processing components, such as a CPU or a network communication
device coupled with a CPU. Even the neural network may at least partially be implemented
hardware-based, such as using quantum computing devices. In an alternative, the neural
network may be software-implemented, e.g. by processor commands to be executed by
the CPU.
[0066] The inventive banknote detector may be entirely realized as a digital code and written
in a computer language. Accordingly, the banknote detector could easily be embedded
in a firmware of reproducing devices, in particular in copying machines or printers.
Moreover, updates on the banknote detector may then simply be achieved by providing
upgraded firmware versions of such devices. In an alternative, such devices may comprise
only a client portion of the banknote detector, whereas the banknote detector itself
is comprised within a cloud service or a server. The client portion may then be configured
to communicate with the cloud or the server to run the banknote detector within the
cloud or on the server with regard to local digital sample images present at the client.
It may be necessary to transfer the data relating to the digital sample image to the
cloud or the server in this case.
[0067] The inventive banknote detector may be further configured to evaluate the digital
sample image with regard to authenticity if the digital sample image is classified
into the first category. The digital sample image may be evaluated with regard to
at least one security feature included in the at least one security document. This
embodiment is advantageous, as it allows further specification of the digital image classified
into the first category, i.e. of an image considered to represent a security document.
[0068] In contrast thereto, the method for training an artificial intelligence based neural
network of the first aspect of the present invention as well as the method for copy
prevention of at least one security document according to the second aspect of the
present invention do not depend on any security features included within the security
document and, thus, do not authenticate the security document. As set out previously,
the present invention compared to a common authentication method has the advantage
that the classification whether a document shall be further processed, in particular
copied, printed, or otherwise reproduced, or not can be conducted more quickly than conducting
the full authentication steps for a code of copy protection. Furthermore, the document
design is not distorted by the use of a code for copy protection, and the risk
of hacking and illegal application of the code for copy protection is also reduced. In principle,
the evaluation with regard to the digital sample images is independent from such security
features/codes of copy protection. Thus, if a digital sample image is classified into
the first category, reproduction is to be prevented.
[0069] In order to provide a more sophisticated evaluation of the document including authentication,
the banknote detector may further comprise authentication means in order to evaluate
one or more security features including a code for copy protection and may be configured
to determine the authenticity of the digital sample image compared to the reference
in order to identify digital images which are based on counterfeit documents. For
example, as additional authentication measure, the banknote detector could be configured
to evaluate the digital sample image with regard to one or more codes for copy protection,
e.g., the so-called Eurion constellation, also called Omron rings. In this case, the
combination of the inventive copy prevention method in addition to the authentication
method of the codes for copy protection provides a further advantage in case the code
for copy protection, e.g. Eurion constellation, would be illegally hacked and illegally
applied to non-security documents. According to prior art copy prevention methods,
the codes of copy protection illegally applied to the document would be authenticated
and the copy operation would be stopped, regardless of the nature of the document,
i.e. regardless of whether an unbiased human observer would consider the (digital) image
of the document or its reproduction as a security document. In contrast thereto, the
copy prevention method of the second inventive aspect requires the categorization
of the digital image into the first category, which means that at least to a certain
acceptance level the image or its reproduction could be regarded by an unbiased human
observer as a security document, or into the second category, which means that the
image or its reproduction at least to a certain acceptance level could not be regarded
by an unbiased human observer as a security document. In case the digital image would
fall in the second category, the copy prevention may not take place, even if the Eurion
constellation would be authenticated with the additional authentication method. Accordingly,
an illegally applied code of copy protection does not result in an unwarranted copy prevention.
[0070] Another advantage of combining the copy prevention method of the second inventive
aspect with the prior art authentication method is a copy prevention in case the
(digital) image of a counterfeit document does not carry codes of copy protection
and, thus, its reproduction would not be stopped by the prior art authentication
method, although it could be considered by an unbiased human observer as a security
document.
[0071] The method for training an artificial intelligence based neural network according
to the first aspect of the present invention as well as the method for copy prevention
of at least one security document according to the second aspect of the present invention,
thus may be applied sequentially or simultaneously with a method of evaluating the
authenticity of at least one security feature in order to identify digital images
which are based on counterfeit documents so as to prevent a reproduction of those
images. In case of a sequential application, the authentication method is applied
subsequent to the inventive copy protection method based on the application of the
artificial intelligence based neural network.
[0072] The banknote detector can comprise a low-end ARM-type multicore CPU or a similar CPU
commonly used in mobile devices. The device can further comprise a main memory within
the range of 4 MB to 8 GB, further in particular within the range of 16 MB to 2 GB,
further in particular within the range of 64 MB to 512 MB, further in particular within
the range of 128 MB to 256 MB. The method for copy prevention of at least one security
document can be configured to be executable on the indicated CPU types in a local
or remote fashion using a main memory of the indicated size.
[0073] All features and embodiments disclosed with respect to the third aspect of the present
invention are combinable alone or in (sub-)combination with any one of the first,
second and fourth aspects of the present invention including each of the preferred
embodiments thereof, provided the resulting combination of features is reasonable
to a person skilled in the art.
[0074] According to a fourth aspect, a computer program product is provided, comprising
or consisting of instructions which, when the program is executed by a data processing
unit, cause the data processing unit to apply an artificial intelligence based neural
network for classifying the digital sample image into a first category or a second category.
In this case, the neural network may be trained according to the herein before described
method and the classification process may be achieved as also described herein before.
[0075] The computer program product can be stored encrypted and/or error-coded. Several
of the underlying techniques and instructions should be kept secret for security reasons.
Accordingly, if the code is stored encrypted the underlying techniques and instructions
can advantageously be prevented from being made public.
[0076] In an alternative, the computer program product can be open-access. Generally, this
does not hold any risks since the program cannot really be exploited in the same way
as e.g. a digital watermark detector could be. In the case of a digital watermark, when
the code is exploited, a counterfeiter could be able to reconstruct the digital
watermark signal and thus apply it to unauthorized images. However, in the case of the
method for copy prevention according to the second inventive aspect, a reproduction
of at least a portion of a digital sample image is either considered by the unbiased
human observer to represent the security document or not. Hence, exploiting the present
computer program product does not hold similar risks.
[0077] All features and embodiments disclosed with respect to the fourth aspect of the present
invention are combinable alone or in (sub-)combination with any one of the first to
third aspects of the present invention including each of the preferred embodiments
thereof, provided the resulting combination of features is reasonable to a person
skilled in the art.
[0078] Further aspects and characteristics of the invention will ensue from the following
description of preferred embodiments of the invention with reference to the accompanying
drawings, wherein
- FIG. 1 shows a simplified schematic drawing of a method for training an artificial
intelligence based neural network applicable for classifying digital images to be
considered as a security document or not,
- FIG. 2 shows a simplified schematic drawing of a method for copy prevention of at
least one security document,
- FIG. 3 shows a simplified schematic drawing of a banknote detector, and
- FIG. 4 shows a simplified schematic drawing of a computer program product.
[0079] All of the features disclosed hereinafter with respect to the example embodiments
and/or the accompanying figures can alone or in any subcombination be combined with
features of the aspects of the present invention including features of preferred embodiments
thereof, provided the resulting feature combination is reasonable to a person skilled
in the art.
[0080] Figure 1 shows a simplified schematic drawing of a method 100 for training an artificial
intelligence based neural network 150. The method 100 is described herein below with
reference to the device-type neural network 150 for illustrative purposes. However,
this should not be construed as limiting the method 100.
[0081] The neural network 150 may be applicable for classifying digital images to be considered
as a security document 110 or not. The neural network 150 may be a deep neural network
having multiple layers. Within this method 100, the neural network 150 is trained
based on three inputs.
[0082] Firstly, a digital image of a security document 110 is provided as a reference to
the neural network 150. Accordingly, the security document 110 represents the reference
for the digital images to be classified so as to be considered as a security document
110 or not.
[0083] Secondly, a set of digital training images 120 is provided to the neural network
150. The digital training images 120 may generally differ from the digital image of
the security document 110. In particular, the digital training images 120 may be altered
with respect to the digital image of the security document 110. The alteration of
the digital training images 120 may be with regard to at least one aspect as described
herein above.
[0084] In a particular example, the alteration may comprise a degradation resulting from
the digital training images at least partially comprising one of worn ink, small holes,
heat damage up to a certain percentage of the surface, additional graffiti, stains,
marks, tape, staples, or tears.
[0085] The set of training images 120 comprises or consists of a first subset of positive
digital training images 125 and a second subset of negative digital training images
130. In this regard, a positive digital training image 125 may have a visual impact
of an alteration such that an unbiased human observer would consider a reproduction
of the respective digital training image 125 to represent the security document 110
or multiple security documents 110. A negative digital training image 130 may have
a visual impact of an alteration such that an unbiased human observer would not consider
a reproduction of the respective digital training image 130 to represent the security
document 110 or multiple thereof.
[0086] A reproduction may in particular be considered as a hard-copy-type multiplication,
for example by means of a printer or a copying machine, or as a data transforming,
saving, or processing action.
[0087] Naturally it is equally of interest whether an image which is desired to be reproduced
represents a single security document or multiples thereof. Both events need to be
considered.
[0088] The unbiased human observer was explained herein before.
[0089] Thirdly, ground truth 140 is provided to the neural network 150. The ground truth
140 represents at least one acceptance level of one or more unbiased human observers
as to whether a reproduction of a respective digital training image 120 for each positive
digital training image 125 and each negative digital training image 130 is to be considered
by the neural network 150 to represent the security document 110 or multiples thereof.
[0090] The acceptance level may be considered to quantify how an unbiased human observer
would interpret a reproduction of the digital training image 120 in relation to the
security document 110.
[0091] Based on the method 100, the neural network 150 may be trained to exhibit a decision
behavior as to whether the reproduction of the provided digital training images 120 is to be
considered as a security document 110 or not. This decision behavior is based on the
acceptance level of the at least one unbiased human observer.
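The training of the decision behavior in method 100 may be sketched, without limitation, by a toy model: a one-parameter logistic unit stands in for the deep neural network 150, one-dimensional features stand in for the digital training images 120, and the targets stand in for the acceptance levels of the ground truth 140. All numbers and names are illustrative assumptions:

```python
import math

# Toy stand-ins: features near 1.0 for positive training images 125 with a
# high ground-truth acceptance level, features near 0.0 for negative training
# images 130 with a low one.
samples = [(0.9, 0.9), (1.0, 0.9), (1.1, 0.9),
           (-0.1, 0.1), (0.0, 0.1), (0.1, 0.1)]  # (feature, acceptance level)

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

w, b = 0.0, 0.0
for _ in range(5000):                    # plain gradient descent
    dw = db = 0.0
    for x, t in samples:
        err = sigmoid(w * x + b) - t     # cross-entropy gradient w.r.t. the logit
        dw += err * x
        db += err
    w -= 0.5 * dw / len(samples)
    b -= 0.5 * db / len(samples)

# Trained decision behavior: positive-like input yields a high acceptance level.
high, low = sigmoid(w * 1.0 + b), sigmoid(w * 0.0 + b)
```

After training, `high` approaches the 0.9 target of the positive images and `low` the 0.1 target of the negative ones, i.e. the model has learned a decision behavior based on the acceptance levels.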
[0092] Within the method 100, the security document 110 and the set of digital training
images 120 may each be closed sets. For example, for a given series of security documents,
e.g. EURO-banknotes, the set of digital training images 120 may comprise or consist
of different positive and negative digital training images 125, 130, respectively.
In a further example, security documents 110 as a reference and digital training images
120 may be provided for different denominations of different currencies.
[0093] Exemplarily, the set of digital training images 120 may comprise several hundreds
or thousands each of the positive digital training images 125 and the negative digital
training images 130.
[0094] Figure 2 shows a simplified schematic drawing of a method 200 for copy prevention
of at least one security document 110. Again, for illustrative purposes the method
200 is described with reference to the neural network 150. However, this is not to
be understood as limiting the method 200.
[0095] The neural network 150 may in particular be trained according to the method 100 for
training the artificial intelligence based neural network 150. Accordingly, the neural
network may be applicable for classifying digital images to be considered as a security
document 110 or not. Since the neural network 150 is trained according to the method
100 for training, the neural network 150 is aware of the at least one security document
110 as a reference. Of course, the neural network 150 may be trained with multiple
security documents 110.
[0096] Within the method 200 for copy prevention, a digital sample image 210 is provided
to the neural network. The digital sample image 210 generally differs from the security
document 110 provided to the neural network before. In particular, the digital sample
image 210 may be altered compared to the security document 110. The alteration may
show up as described before.
[0097] Then, the neural network 150 classifies this digital sample image 210 into the first
category if the neural network 150 determines that a reproduction of at least a portion
of the digital sample image 210 could be considered by the unbiased human observer
to represent the at least one security document 110 or multiple security documents
110. In an alternative, the neural network 150 classifies this digital sample image
210 into the second category if the neural network 150 determines that for no portion
of the digital sample image 210 a reproduction could be considered by the unbiased
human observer to represent the at least one security document 110 or multiple security
documents 110. In this regard, the neural network considers the differences between
the digital sample image 210 and the security document 110. However, based on the
trained decision behavior the neural network 150 may determine at least up to which
acceptance level an unbiased human observer could consider a reproduction of at least
a portion of the digital sample image 210 to represent the security document 110.
[0098] Moreover, the neural network prevents a reproduction of the digital sample image
210 if it is classified into the first category.
[0099] Figure 3 shows a simplified schematic drawing of a banknote detector 300. The banknote
detector 300 is software-implemented within a copying machine 320 and is configured
for carrying out the method 200 for copy prevention of at least one security document.
[0100] In an exemplary scenario, a sample document 310, which is desired to be reproduced
by a user of the copying machine 320, is provided to the copying machine 320. The copying
machine 320 is configured to acquire a digital sample image 325 based on the sample
document 310. The digital sample image 325 is provided to a communicating means 330,
for example a communication interface of the banknote detector 300. The communicating
means 330 is configured to provide the digital sample image 325 to a data processing
unit 335. The data processing unit 335 comprises an artificial intelligence based
neural network 340. The neural network 340 is configured to carry out the method for
copy prevention 200 described before. The neural network 340 classifies the digital
sample image 325 into a first or a second category. If the digital sample image 325
is classified into the first category, the banknote detector 300 activates prohibiting
means 350. The prohibiting means 350 are configured to interrupt the desired reproduction
process at an interrupt 355. Accordingly, a reproduction of the digital sample image
325 may be avoided. If the digital sample image 325 is classified into the second
category by the neural network 340, the prohibiting means 350 are not activated. Accordingly,
the reproduction process is not avoided and reproductions 310a of the sample document
310 may be produced.
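The data flow of the banknote detector 300 described above can be sketched as follows. This is an illustrative model only; the class, its fields, and the callback style are assumptions, with the classifier standing in for the neural network 340 and the callback for the prohibiting means 350:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class BanknoteDetector:
    # Hypothetical stand-ins for the reference numerals; not the
    # actual implementation of the disclosure.
    classify: Callable[[bytes], str]   # neural network 340: "first"/"second"
    prohibit: Callable[[], None]       # prohibiting means 350

    def handle(self, digital_sample_image: bytes) -> bool:
        """Return True if the reproduction process may proceed."""
        category = self.classify(digital_sample_image)
        if category == "first":
            # Interrupt the desired reproduction process (interrupt 355).
            self.prohibit()
            return False
        # Second category: reproductions of the sample document may be produced.
        return True
```

In use, a classifier returning the first category triggers the prohibiting means and blocks the reproduction, whereas the second category lets the reproduction process complete.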
[0101] Figure 4 shows a simplified schematic drawing of a computer program product 400.
The computer program product 400 may in particular comprise or consist of instructions,
which, when executed by a data processing unit cause the data processing unit to carry
out steps related to classifying digital sample images. Moreover, the computer program
product 400 may also comprise or consist of instructions, which when executed by the
data processing unit, cause the data processing unit to prevent a reproduction of
digital sample images which have been classified into the first category.
[0102] In an alternative, the data processing unit may be caused, based on the contained
instructions, to activate prohibiting means that may be arranged and configured to prevent
a reproduction of digital sample images, which have been classified into the first
category.
[0103] Although the invention has been described hereinabove with reference to specific
embodiments, it is not limited to these embodiments and no doubt further alternatives
will occur to the skilled person that lie within the scope of the invention as claimed.
1. Computer-implemented method (100) for training an artificial intelligence based neural
network (150) applicable for classifying digital images to be considered as a security
document (110, A) or not without authenticating a security feature, the method (100)
comprising or consisting of:
a) providing at least one digital image (A1) of at least one security document (110,
A) as a reference,
b) providing a set of digital training images (120, B1), wherein the digital training
images (120, B1) are altered compared to the digital image (A1) of the security document
(110, A),
the set of digital training images (120, B1) comprising or consisting of a first subset
of positive digital training images (125, B1-1) having a visual impact of an alteration
such that an unbiased human observer would consider a reproduction of the respective
digital training image (125, B1-1) to represent the security document (110, A) or
multiple security documents (110, A),
the set of digital training images (120, B1) comprising or consisting of a second
subset of negative digital training images (130, B1-2) having a visual impact of the
alteration such that the unbiased human observer would not consider a reproduction
of the respective digital training image (130, B1-2) to represent the security document
(110, A) or multiple security documents (110, A), and
c) providing ground truth (140) for each digital training image (120, B1) in step
b) to the artificial intelligence based neural network (150), wherein the ground truth
(140) represents at least one acceptance level of one or more unbiased human observers
as to whether a reproduction of the respective digital training image (120, B1) is
to be considered representing or not representing the security document (110, A) or
multiple security documents (110, A).
2. The method (100) of claim 1, wherein at least one altered digital training image (120,
B1) in step b) is degraded, wherein a degradation of the digital training image is
based on:
a training document (B) which has been chemically or physically attacked, or
a digital image of a training document (B), wherein the digital image has been digitally
attacked.
3. The method (100) according to any of the preceding claims, wherein at least one altered
digital training image (120, B1) in step b) differs from the digital image (A1) of
the security document (110, A) with regard to at least one of a resolution, a definition,
a general aspect, a shape, a color, a color distribution, and an aspect ratio.
4. The method (100) according to any of the preceding claims, wherein at least one altered
digital training image (120, B1) in step b) differs from the digital image (A1) of
the security document (110, A) with regard to at least one of a perspective angle,
an underlying illumination, a coloration, a fold, or a crease.
5. The method (100) according to any of the preceding claims, wherein the security document
(110, A) comprises one or more graphical design features, in particular a portrait
or an architectural image or a graphical image of nature, and wherein at least one
digital training image (120, B1) in step b) differs from the digital image (A1) of
the security document (110, A) in that at least one design feature is substituted
by a different design feature, in particular a different portrait or a different architectural
image or a different image of nature.
6. The method (100) according to any of the preceding claims, wherein the ground truth
(140) represents acceptance levels of at least four unbiased human observers, and
wherein the ground truth (140) comprises or consists of at least five different acceptance
levels.
7. The method (100) according to any of the preceding claims, wherein the security document
(110, A) is a banknote, wherein in step a) two digital images respectively comprising
a front side image (A2) and a reverse side image (A3) of the banknote are provided,
and wherein in step b) each positive digital training image (125, B1-1) has a visual
impact of the alteration such that the unbiased human observer would consider a reproduction
of the respective digital training image (125, B1-1) to represent the front side image
(A2) and/or the reverse side image (A3) of the security document (110, A) or multiple
security documents (110, A).
8. The method (100) according to any of the preceding claims, wherein the method is executed
using a deep neural network (150) having one or more neural layers, in particular
wherein each layer exhibits independently from other layers a number of neurons and/or
wherein each layer has a branched or non-branched architectural structure.
9. The method (100) according to any of the preceding claims, wherein the artificial
intelligence based neural network (150) comprises machine learning capabilities.
10. The method (100) according to any of the preceding claims, wherein the method (100)
is performed for a closed set of multiple security documents (110, A), wherein the
artificial intelligence based neural network (150) is trained for every security document
(110, A) of the closed set of security documents (110, A).
11. A computer-implemented method (200) for copy prevention of at least one security document
(110, A) without authenticating a security feature, comprising or consisting of:
a) providing a digital sample image (210, C1),
b) applying an artificial intelligence based neural network (150) for classifying
the digital sample image (210, C1) into a first category or a second category, wherein
the neural network (150) is trained according to the method (100) of any of the claims
1 to 9,
wherein the digital sample image (210, C1) is classified into the first category if
the neural network (150) determines that a reproduction of at least a portion of the
digital sample image (210, C1) could be considered by the unbiased human observer
to represent the at least one security document (110, A) or multiple security documents
(110, A),
wherein the digital sample image (210, C1) is classified into the second category
if the neural network (150) determines that for no portion of the digital sample image
(210, C1) a reproduction could be considered by the unbiased human observer to represent
the at least one security document (110, A) or multiple security documents (110, A),
and
c) preventing the digital sample image (210, C1) from being reproduced if the neural
network (150) classifies the digital sample image (210, C1) into the first category.
12. The method (200) of claim 11, wherein the digital sample image (210, C1) is altered
compared to the digital image (A1) of the security document (110, A), wherein the
altered digital sample image (210, C1) differs from the digital image (A1) of the
security document (110, A) with regard to at least one of a perspective angle, an
underlying illumination, a coloration, a fold, or a crease.
13. The method (200) according to any of claims 11 and 12, wherein the security document
(110, A) comprises one or more graphical design features, in particular a portrait or
an architectural image or a graphical image of nature, and wherein the digital sample
image (210, C1) differs from the digital image (A1) of the security document (110,
A) in that at least one design feature is substituted by a different design feature,
in particular a different portrait or a different architectural image or a different
graphical image of nature.
14. The method (200) according to any of claims 11 to 13, wherein the security document
(110, A) comprises a front side and a reverse side, and wherein the digital sample
image (210, C1) is classified into the first category in step b) of claim 11 if the
neural network (150) determines that a reproduction of at least a portion of the digital
sample image (210, C1) could be considered by the unbiased human observer to represent
the front side and/or the reverse side of the security document (110, A) or multiple
security documents (110, A), wherein the digital sample image (210, C1) is classified
into the second category in step b) of claim 11 if the neural network (150) determines
that for no portion of the digital sample image (210, C1) a reproduction could be
considered by the unbiased human observer to represent the front side and/or the reverse
side of the security document (110, A) or multiple security documents (110, A).
15. The method (200) according to any of the claims 11 to 14, wherein in step c) of claim
1 the ground truth (140) represents a range of acceptance levels comprising or consisting
of a first subrange and a second subrange,
wherein in step b) of claim 11 a digital sample image (210, C1) is classified into
the first category if the neural network (150) determines an acceptance level according
to the first subrange that the reproduction of at least a portion of the digital sample
image (210, C1) could be considered by the unbiased human observer to represent the
at least one security document (110, A) or multiple security documents (110, A),
wherein in step b) of claim 11 a digital sample image (210, C1) is classified into
the second category if the neural network (150) determines an acceptance level according
to the second subrange that for no portion of the digital sample image (210, C1) a
reproduction could be considered by the unbiased human observer to represent the at
least one security document (110, A) or multiple security documents (110, A), and
wherein the first subrange is larger than the second subrange.
16. A banknote detector (300) comprising or consisting of a communicating means (330)
and a data processing unit (335), wherein the communicating means (330) is configured
to receive a digital sample image (210, 325, C1) and to provide the digital sample
image (210, 325, C1) to the data processing unit (335), wherein the data processing
unit (335) is configured for carrying out the method (200) according to any of the
claims 11 to 15, and wherein the banknote detector (300) is configured to prevent
a reproduction of the digital sample image (210, 325, C1).
17. A banknote detector (300) according to claim 16, wherein the banknote detector (300)
is software-implemented, and
wherein the banknote detector (300) is comprised within a device for reproducing the
digital sample image (210, 325, C1), or wherein the banknote detector (300) is configured
as a cloud-based or server-based application.
18. The banknote detector (300) according to any of the claims 16 and 17, wherein the
banknote detector (300) is further configured to evaluate the digital sample image
(210, 325, C1) with regard to authenticity if the digital sample image (210, 325,
C1) is classified into the first category in step b) of claim 11, wherein the digital
sample image (210, 325, C1) is evaluated with regard to at least one security feature
included in the at least one security document (110, A).
19. A computer program product (400) comprising or consisting of instructions which, when
the program is executed by a data processing unit, cause the data processing unit
to carry out step b) of claim 11.