TECHNICAL FIELD
[0001] The present specification relates to the field of data processing technologies, and
in particular, to article damage detection methods and apparatuses and article damage
detectors.
BACKGROUND
[0002] With the improvement of living standards, many articles are frequently replaced.
Mobile phones are used as an example. Old mobile phones replaced with new devices
are usually put aside by users, which causes a waste of resources. Recycling used articles
enables obsolete articles to be reused and reintroduced into the industrial chain,
so that resources can be better integrated and potential environmental pollution can
be reduced.
[0003] With the emergence of artificial intelligence technologies, online recycling over
the Internet has become a new business model. In online recycling, the damage
degree of a recycled article is usually determined based on pictures of the article,
and is used as an important factor in price estimation. Damage detection accuracy
greatly affects the assessed value and estimated price of the recycled article. Therefore,
improving article damage detection accuracy is very important for the development of
the online recycling industry.
[0004] US 2017/256051 A1 discloses various implementations in which a condition of one or more screens of a device
may be determined. A user may select a return application that facilitates capturing
image(s) of device screen(s). A user may capture images of a screen by using a camera
of the same device or of another device. In some implementations, the user may position
the device proximate a mirror such that the device can capture an image of one or
more of its screens. The captured image(s) may be processed and/or analyzed
to determine whether the screen of the device is damaged. In some implementations, notifications
based on the condition of the device screen(s) may be transmitted. A price for the
device may be determined, in some implementations based on the condition of the screen.
[0005] CN 107 194 323 A discloses a vehicle loss assessment image obtaining method and apparatus, a server
and a terminal device. A client obtains shooting video data and sends the shooting
video data to the server; the server detects video images in the shooting video data
and identifies damaged parts in the video images; the server classifies the video
images based on the detected damaged parts and determines a candidate image classification
set of the damaged parts; and vehicle loss assessment images are selected from
the candidate image classification set according to preset screening conditions. By
utilizing the method and the apparatus, high-quality loss assessment images meeting
loss assessment processing demands can be automatically and quickly generated; the
loss assessment processing demands are met; and the loss assessment image obtaining
efficiency is improved.
SUMMARY
[0007] The present invention is defined by the attached claims. The present specification
provides an article damage detection method, including: obtaining at least two images
that are time sequentially related and show a detected article at different angles;
and inputting the images to a detection model in time order, to determine a damage
detection result, where the detection model includes a first sub-model and a second
sub-model, the first sub-model identifies respective features of each image, a feature
processing result of each image is input to the second sub-model, the second sub-model
performs time series analysis on the feature processing result to determine the damage
detection result, and the first sub-model and the second sub-model are obtained by
performing joint training by using training samples labeled with article damage.
[0008] The present specification further provides an article damage detection apparatus,
including: an image sequence acquisition unit, configured to obtain at least two images
that are time sequentially related and show a detected article at different angles;
and a detection model application unit, configured to input the images to a detection
model in time order, to determine a damage detection result, where the detection model
includes a first sub-model and a second sub-model, the first sub-model identifies
respective features of each image, a feature processing result of each image is input
to the second sub-model, the second sub-model performs time series analysis on the
feature processing result to determine the damage detection result, and the first
sub-model and the second sub-model are obtained by performing joint training by using
training samples labeled with article damage.
[0009] The present specification provides a computer device, including a storage medium
and a processor, where the storage medium stores a computer program that can be run
by the processor, and when the processor runs the computer program, the steps of the
article damage detection method are performed.
[0010] The present specification provides a computer-readable storage medium, where the
computer-readable storage medium stores a computer program, and when the computer
program is run by a processor, the steps of the article damage detection method are
performed.
[0011] The present specification further provides an article damage detector, including:
a photographing module, configured to generate, based on a photographing instruction
from a calculation and control module, at least two images of a detected article that
are time sequentially related; a movement module, configured to drive relative movement
between a camera of the photographing module and the detected article based on a movement
instruction from the calculation and control module; and the calculation and control
module, configured to enable, by using the movement instruction and the photographing
instruction, the photographing module to generate the at least two images that are
time sequentially related and show the detected article at different angles, and determine
a damage detection result based on the images, where the damage detection result is
generated by using the previous article damage detection method or apparatus.
[0012] It can be seen from the previous technical solutions that in the implementations
of the article damage detection methods and apparatuses in the present specification,
the images that are time sequentially related and show the detected article at different
angles are input to the detection model, the first sub-model in the detection model
identifies the respective features of each image, the feature processing result is
input to the second sub-model after feature processing, and the second sub-model performs
time series analysis on the feature processing results of the images to determine
the damage detection result. The images at different angles can more comprehensively
show a real condition of the article, and therefore a more uniform and complete detection
result can be obtained by performing time series analysis on the feature processing
results of the images. Therefore, damage detection accuracy can be greatly improved
in the implementations of the present specification.
[0013] It can be seen that in the implementations of the article damage detector in the
present specification, when enabling, by using the movement instruction, the movement
module to drive the relative movement between the camera and the detected article, the
calculation and control module enables, by using the photographing instruction, the
photographing module to generate the at least two images of the detected article that
are time sequentially related, and obtains, based on the generated images, the damage
detection result generated by using the article damage detection method or apparatus
in the present specification. As such, damage detection accuracy is greatly improved
while it is more convenient to perform article damage detection.
BRIEF DESCRIPTION OF DRAWINGS
[0014]
FIG. 1 is a flowchart illustrating an article damage detection method, according to
an implementation of the present specification;
FIG. 2 is a structural diagram of hardware of a device for running an article damage
detection method, according to an implementation of the present specification, or
of a device in which an article damage detection apparatus is located, according to
an implementation of the present specification;
FIG. 3 is a logical structural diagram of an article damage detection apparatus, according
to an implementation of the present specification;
FIG. 4 is a schematic structural diagram of an article damage detector, according
to an implementation of the present specification; and
FIG. 5 is a schematic structural diagram of a detection model in an application example
of the present specification.
DESCRIPTION OF IMPLEMENTATIONS
[0015] Implementations of the present specification provide new article damage detection
methods. A detection model is built by using a first sub-model and a second sub-model
that are cascaded; the first sub-model uses images of a detected article that are
obtained at different angles and generated in time order as inputs, to obtain feature
processing results of the images, and outputs the feature processing results to the
second sub-model; and the second sub-model performs time series analysis on the feature
processing results of the images to determine a damage detection result. As such,
damage on the detected article can be found more comprehensively by using the images
at different angles, and damage found in the images can be combined into a uniform
detection result through time series analysis, thereby greatly improving damage detection
accuracy.
[0016] The implementations of the article damage detection methods in the present specification
can run on any device with computing and storage capabilities, for example, a mobile
phone, a tablet computer, a personal computer (PC), a laptop, or a server. Alternatively,
the functions in the implementations of the article damage detection methods in the present
specification can be implemented by logical nodes running on two or more devices.
[0017] In the implementations of the present specification, a machine learning model that
uses at least two images that are time sequentially related as inputs, which is referred
to as the detection model, is used to perform article damage detection. The detection
model includes two cascaded sub-models. The first sub-model identifies respective
features of each image to generate a feature processing result of each image, and
the feature processing results of the images are input to the second sub-model in
time order. The second sub-model performs time series analysis on the feature processing
results of the images to determine a damage detection result.
[0018] The first sub-model includes a deep convolutional neural network (DCNN). The second
sub-model includes a long short-term memory (LSTM) network. A more accurate damage
detection result can be determined if the LSTM network also employs an attention mechanism.
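The cascaded data flow described above (per-image feature processing by the first sub-model, followed by time-series combination by the second) can be illustrated with a deliberately simplified, framework-free sketch. The functions below are hypothetical stand-ins for a real DCNN and attention-based LSTM and only show how data moves between the two sub-models:

```python
# Toy illustration of the two cascaded sub-models: the first produces a
# per-image feature processing result (here, per-damage-type scores), and
# the second combines those results across the time-ordered sequence.
# A real implementation would use a deep convolutional network and an
# attention-based LSTM; this sketch only demonstrates the data flow.

DAMAGE_TYPES = ["scratch", "stain", "adhesive"]  # illustrative type names

def toy_first_sub_model(image):
    # Stand-in for DCNN feature extraction + damage discovery:
    # `image` is modeled as a dict of raw per-damage evidence in [0, 1].
    return [image.get(t, 0.0) for t in DAMAGE_TYPES]

def toy_second_sub_model(score_sequence):
    # Stand-in for time-series analysis: keep the strongest evidence seen
    # for each damage type across the whole image sequence, so damage
    # visible in any single image contributes to the final result.
    combined = [0.0] * len(DAMAGE_TYPES)
    for scores in score_sequence:
        combined = [max(c, s) for c, s in zip(combined, scores)]
    return {t: c for t, c in zip(DAMAGE_TYPES, combined)}

def detect(images_in_time_order):
    # Images are fed to the first sub-model in time order; the per-image
    # feature processing results are then passed to the second sub-model.
    return toy_second_sub_model(
        [toy_first_sub_model(img) for img in images_in_time_order])
```

In the real model both stages are trained jointly, so the combination is learned rather than the fixed maximum used here.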
[0019] The detection model in the implementations of the present specification is a model
trained using supervised learning training, and the entire detection model is trained
by using training samples labeled with article damage. In other words, joint training
is performed on the first sub-model and the second sub-model, a training loss of the
entire model is fed back to both the first sub-model and the second sub-model for
parameter update, and parameters of the two sub-models are simultaneously optimized,
to optimize the overall prediction accuracy of the detection model. In addition to
the label indicating article damage, each training sample includes at least two images
of the article that are time sequentially related.
[0020] A form of the damage detection result is determined based on a need in an actual
application scenario. Implementations are not limited. For example, the damage detection
result can be a classification result indicating whether there is damage on the detected
article, can be a degree of a certain type of damage on the detected article, can
be a classification result indicating whether there are two or more types of damage
on the detected article, or can be degrees of two or more types of damage on the detected
article. Types of damage can include scratches, damage, stains, adhesives, etc. Sample
data can be labeled based on a determined form of the damage detection result, and
the damage detection result in this form can be obtained by using the trained detection
model.
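As one concrete (hypothetical) labeling scheme consistent with the result forms listed above, a sample label could be a per-type presence flag or a per-type degree; the damage-type names below are illustrative:

```python
# Two hypothetical label forms for training samples, matching the result
# forms described above: binary presence per damage type, or a degree
# (e.g., in 0.0-1.0) per damage type. Type names are illustrative.
binary_label = {"scratch": 1, "stain": 0, "adhesive": 1}        # presence/absence
degree_label = {"scratch": 0.4, "stain": 0.0, "adhesive": 0.8}  # severity per type

def is_damaged(label):
    # Collapse a per-type label into an overall "any damage" classification,
    # corresponding to the simplest result form (damaged / not damaged).
    return any(v > 0 for v in label.values())
```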
[0021] The feature processing result output by the first sub-model to the second sub-model
includes information used by the second sub-model to generate the damage detection
result. For example, the feature processing result output by the first sub-model to the second
sub-model can be a damage detection result of each single image. Implementations are not
limited.
[0022] For example, if the damage detection result output by the detection model is a classification
result of each of one or more types of damage (namely, the possibility that there
is each of one or more types of damage on the detected article), the feature processing
result can include a respective classification result that is of each type of damage
in the single image of the detected article and is generated after the first sub-model
performs feature extraction and damage discovery on each image, and performs feature
fusion on a feature extraction result and a damage discovery result, or can include
a respective tensor that specifies damage detection information of each type of damage
in the single image of the detected article. The second sub-model performs time series
analysis based on detection information of each type of damage in the at least two
images, to obtain a classification result of each type of damage on the detected article.
[0023] For another example, assume that the first sub-model in the detection model is the
DCNN network. In this case, the first sub-model can use an output of the last convolution
layer or pooling layer in a DCNN network (namely, an output before processing at a
fully connected layer and an output prediction layer is performed) as the feature
processing result, can use an output of the fully connected layer in the DCNN network
as the feature processing result, or can use an output of the output prediction layer
as the feature processing result.
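The choice of tap point described above can be sketched abstractly by treating the first sub-model as a named pipeline of stages and returning the output of whichever stage is designated as the feature processing result. The stage names and the toy stage functions below are purely illustrative, not a real network:

```python
# Abstract sketch: the first sub-model as an ordered pipeline of stages.
# The feature processing result handed to the second sub-model can be the
# output of the last convolution/pooling stage, of the fully connected
# stage, or of the output prediction stage. Stage functions are toy
# placeholders standing in for real network layers.
PIPELINE = [
    ("conv_pool", lambda x: [v * 2 for v in x]),           # conv + pooling stand-in
    ("fully_connected", lambda x: [sum(x)]),               # FC layer stand-in
    ("prediction", lambda x: [1.0 if x[0] > 1 else 0.0]),  # output layer stand-in
]

def feature_processing_result(image_vec, tap="fully_connected"):
    # Run the pipeline and stop at the designated tap point; that
    # intermediate output is what the second sub-model receives.
    out = image_vec
    for name, stage in PIPELINE:
        out = stage(out)
        if name == tap:
            return out
    raise ValueError(f"unknown tap point: {tap}")
```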
[0024] In the implementations of the present specification, a procedure of the article damage
detection method is shown in FIG. 1.
[0025] Step 110: Obtain at least two images that are time sequentially related and show
a detected article at different angles.
[0026] The at least two images that are time sequentially related and show the detected
article at different angles can be photos of the detected moving article that are
consecutively taken, can be recorded videos (the video includes multiple images arranged
in time order) of the detected moving article, can be photos of the detected article
that are consecutively taken by using a mobile camera, can be videos of the detected
article that are recorded by using a mobile camera, can be at least two photos or
videos consecutively taken or recorded by changing a photographing angle, or can be
a combination thereof.
[0027] The at least two images that are time sequentially related and show the detected
article at different angles can be automatically generated, for example, can be generated
by using an article damage detector in the present specification, or can be generated
by manually holding a photographing device (for example, a mobile phone). Implementations
are not limited.
[0028] In the present implementation, a device for running the article damage detection
methods can independently generate the images, can receive the images from another
device, or can read the images from a predetermined storage location. Implementations
are not limited. For example, when the method in the present implementation runs
on a mobile phone, the images can be generated by taking photos or recording a video
by using a camera of the mobile phone. For another example, the method in the present
implementation can run on a server of a certain application (app), and a client of
the app uploads multiple obtained photos or recorded videos to the server.
[0029] Step 120: Input the images to a detection model in time order, to determine a damage
detection result.
[0030] The trained detection model is used, and the obtained images are input to the detection
model in time order, to determine the damage detection result.
[0031] Some damage on the detected article cannot be captured by the photographing device
at a specific angle. The detected article is photographed at different angles, so
that omission of damage on the article in the images can be reduced. More photographing
angles indicate more comprehensive coverage and a higher possibility that the images
can truly show the condition of the article. Damage on the article shown in the images
may be inconsistent (for example, damage A, B, and C is found in image 1, and damage
B and D is found in image 2). After damage discovery is performed by using the
first sub-model in the detection model, time series analysis is performed for the
same damage found in these images that are time sequentially related, to obtain a
complete and uniform view of damage on each part of the detected article, thereby
improving damage detection accuracy.
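In its simplest reading, the combination step in the example above (damage A, B, and C found in image 1; B and D in image 2) amounts to reconciling per-image findings into one unified view. A minimal sketch, keeping in mind that the actual model performs learned time-series analysis rather than a plain set union:

```python
# Minimal sketch of reconciling per-image damage findings into one
# unified result, following the example in the text: image 1 shows
# damage A, B, C and image 2 shows B, D, so the complete view is
# A, B, C, D. The same damage found in several images is counted once.
def unify_findings(per_image_findings):
    unified = set()
    for findings in per_image_findings:
        unified |= set(findings)  # merge, deduplicating repeated damage
    return sorted(unified)
```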
[0032] In addition, a damage detection report can be automatically generated based on the
damage detection result, and the value of the detected article can be estimated. For a form
of the damage detection report, a specific method for generating the damage detection
report, and a specific algorithm used for value estimation, references can be made
to the existing technology. Details are omitted for simplicity.
[0033] It can be seen that in the implementations of the article damage detection method
in the present specification, the detection model is built by using the first sub-model
and the second sub-model that are cascaded, and the images that are time sequentially
related and show the detected article at different angles are input to the detection
model; the first sub-model outputs the feature processing result of each image to
the second sub-model; and the second sub-model performs time series analysis on the
feature processing results of the images to determine the damage detection result.
As such, damage on the detected article can be found more comprehensively by using
the images at different angles, and damage found in the images can be combined into
a complete, uniform, and more accurate detection result.
[0034] Corresponding to the previous procedure implementation, implementations of the present
specification further provide an article damage detection apparatus. The apparatus
can be implemented by software, can be implemented by hardware, or can be implemented
by a combination of hardware and software. Software implementation is used as an example.
As a logical apparatus, the apparatus is formed by reading a corresponding computer
program by a central processing unit (CPU) in a device in which the apparatus is located
and running the computer program in a memory. In terms of hardware, in addition to
the CPU, the memory, and the storage medium shown in FIG. 2, the device in which the
article damage detection apparatus is located usually includes other hardware such
as a chip for sending and receiving radio signals and/or other hardware such as a
card configured to implement a network communications function.
[0035] FIG. 3 illustrates an article damage detection apparatus, according to implementations
of the present specification. The apparatus includes an image sequence acquisition
unit and a detection model application unit. The image sequence acquisition unit is
configured to obtain at least two images that are time sequentially related and show
a detected article at different angles. The detection model application unit is configured
to input the images to a detection model in time order, to determine a damage detection
result. The detection model includes a first sub-model and a second sub-model, the
first sub-model identifies respective features of each image, a feature processing
result of each image is input to the second sub-model, and the second sub-model performs
time series analysis on the feature processing result to determine the damage detection
result. The first sub-model and the second sub-model are obtained by performing joint
training by using training samples labeled with article damage.
[0036] Optionally, the second sub-model is an LSTM network based on an attention mechanism.
[0037] Optionally, the at least two images that are time sequentially related and show the
detected article at different angles include at least one of the following: photos
of the detected moving article that are consecutively taken, recorded videos of the
detected moving article, photos of the detected article that are consecutively taken
by using a mobile camera, and videos of the detected article that are recorded by
using a mobile camera.
[0038] In an example, the damage detection result includes a classification result of each
of one or more types of damage.
[0039] In the previous example, the feature processing result of each image includes a classification
result that is of a type of damage in the single image of the detected article and
is generated after the first sub-model performs feature extraction, damage discovery,
and feature fusion on each image.
[0040] Implementations of the present specification provide a computer device, and the computer
device includes a storage medium and a processor. The storage medium stores a computer
program that can be run by the processor. When the processor runs the stored computer
program, the steps of the article damage detection method in the implementations of
the present specification are performed. For detailed description of the steps of
the article damage detection method, references can be made to the previous content.
Details are omitted for simplicity.
[0041] Implementations of the present specification provide a computer-readable storage
medium. The storage medium stores a computer program. When the computer program is
run by a processor, the steps of the article damage detection method in the implementations
of the present specification are performed. For detailed description of the steps
of the article damage detection method, references can be made to the previous content.
Details are omitted for simplicity.
[0042] Implementations of the present specification provide a new article damage detector.
When instructing a movement module to drive relative movement between a camera and
a detected article, a calculation and control module instructs a photographing module
to perform consecutive photographing or recording on the detected article, to conveniently
and quickly generate multiple images of the detected article that are time sequentially
related and are obtained at different angles, and performs damage detection based
on these images by using the article damage detection method or apparatus in the implementations
of the present specification, to obtain a more accurate detection result.
[0043] A structure of the article damage detector in the present implementation of the present
specification is shown in FIG. 4. The article damage detector includes the calculation
and control module, the movement module, and the photographing module.
[0044] The calculation and control module includes a CPU, a memory, a storage medium, a
communications submodule, etc. The CPU reads a program in the storage medium, and
runs the program in the memory to generate a movement instruction and a photographing
instruction. The communications submodule sends the movement instruction to the movement
module, and sends the photographing instruction to the photographing module.
[0045] The photographing module includes a camera. After receiving the photographing instruction
sent by the calculation and control module, the photographing module performs consecutive
photographing or video recording on the detected article, and generates, based on
the photographing instruction, at least two images of the detected article that are
time sequentially related. The photographing instruction can include one or more photographing-related
parameters, for example, a photographing delay time, a time interval for consecutive
photographing, the quantity of photos that are to be consecutively taken, and duration
for recording a video. The photographing instruction can be set based on a need in
an actual application scenario. Implementations are not limited. In addition, the
calculation and control module can further send a photographing stop instruction,
so that the photographing module stops photographing. The photographing module can
store the generated images in a predetermined storage location, or can send the generated
images to the calculation and control module. Implementations are not limited either.
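A photographing instruction carrying the parameters listed above could be represented as a simple structured message; the field names and units below are hypothetical:

```python
# Hypothetical structure for a photographing instruction, carrying the
# parameters mentioned above (delay time, interval for consecutive
# photographing, photo quantity, video-recording duration). Field names
# and units are illustrative, not part of the specification.
from dataclasses import dataclass

@dataclass
class PhotographingInstruction:
    delay_s: float = 0.0           # photographing delay time
    interval_s: float = 0.5        # time interval for consecutive photographing
    photo_count: int = 0           # quantity of photos to take consecutively
    video_duration_s: float = 0.0  # duration for recording a video

    def is_video(self):
        # Interpret a nonzero duration as a video-recording instruction;
        # otherwise the instruction requests consecutive still photos.
        return self.video_duration_s > 0
```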
[0046] The movement module is configured to drive relative movement between the camera of
the photographing module and the detected article based on the movement instruction
from the calculation and control module. Based on factors such as a size and a weight
of the detected article and needs on portability of the article damage detector in
an actual application scenario, the movement module can drive the relative movement
between the camera and the detected article by moving the detected article, by moving
the camera, or by moving both the detected article and the camera.
[0047] In an example, the movement module includes an article movement submodule, and the
detected article is placed on the article movement submodule. After receiving the
movement instruction from the calculation and control module, the article movement
submodule performs upward or downward movement, displacement, and/or rotation based
on the movement instruction, so that the detected article moves based on the received
instruction. In this example, the camera can be fastened, or can move based on the
movement instruction in a movement track different from that of the detected article.
[0048] In another example, the movement module includes a camera movement submodule, and
the camera is installed on the camera movement submodule. After receiving the movement
instruction from the calculation and control module, the camera movement submodule
performs upward or downward movement, displacement, and/or rotation based on the movement
instruction, so that the camera moves based on the received instruction. In this example,
the detected article can be fastened, or can move based on the movement instruction
in a movement track different from that of the camera.
[0049] The movement instruction sent by the calculation and control module can include several
movement-related parameters. The movement instruction can be set based on a need in
an actual application scenario, specific implementations of the movement module, etc.
Implementations are not limited. For example, the movement instruction can include
a displacement length, an upward or downward movement height, a rotation angle, and
a movement speed. In addition, the calculation and control module can further send
a movement stop instruction, so that the movement module stops the relative movement
between the detected article and the camera.
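Analogously, a movement instruction with the parameters mentioned above could be sketched as a structured message; names and units here are again illustrative:

```python
# Hypothetical structure for a movement instruction with the parameters
# mentioned above (displacement length, upward/downward movement height,
# rotation angle, movement speed). Names and units are illustrative.
from dataclasses import dataclass

@dataclass
class MovementInstruction:
    displacement_mm: float = 0.0
    height_mm: float = 0.0        # positive = upward, negative = downward
    rotation_deg: float = 0.0
    speed_mm_per_s: float = 10.0

    def duration_s(self):
        # Rough travel-time estimate for the linear displacement component,
        # e.g., for scheduling the matching photographing instruction.
        return abs(self.displacement_mm) / self.speed_mm_per_s
```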
[0050] When performing article damage detection, the calculation and control module sends
the movement instruction to the movement module, to drive the relative movement between
the detected article and the camera, and sends the photographing instruction to the
photographing module, so that the photographing module generates the at least two
images that are time sequentially related and show the detected article at different
angles. The calculation and control module obtains, based on the generated images,
the damage detection result obtained by using the article damage detection methods
or apparatuses in the implementations of the present specification.
[0051] In an implementation, the calculation and control module can locally run the article
damage detection methods or apparatuses in the implementations of the present specification.
The calculation and control module inputs the generated images to a detection model
in time order, and an output of the detection model is the damage detection result.
[0052] In another implementation, the article damage detection method or apparatus in the
implementations of the present specification runs on a server. The calculation and
control module of the article damage detector uploads the generated images to the
server in time order, and the server inputs the images to a detection model in time
order, and returns an output of the detection model to the calculation and control
module.
[0053] In some application scenarios, a light source module can be added to the article
damage detector, and a light control submodule can be added to the calculation and
control module. The light control submodule sends a light source instruction to the
light source module by using the communications submodule. The light source module
provides proper light for the photographing module based on the light source instruction,
to improve image generation quality. The calculation and control module can send,
based on a light condition in a current environment, a light source instruction that
includes parameters such as a light angle and light brightness, so that the light
source module controls one or more light sources to satisfy light needs of photographing.
[0054] In the previous application scenario, if the movement module includes the camera
movement submodule, both the light source of the light source module and the camera
of the photographing module can be installed on the camera movement submodule. When
the camera movement submodule performs upward or downward movement, displacement,
and/or rotation based on the movement instruction, both the camera and the light source
are moved at the same time, so that light fully matches photographing to achieve a
better photographing effect.
[0055] The calculation and control module can further generate a detection report based
on the damage detection result, estimate the price of the detected article, and so
on.
[0056] It can be seen that in the implementations of the article damage detector in the
present specification, when enabling, by using the movement instruction, the movement
module to drive the relative movement between the camera and the detected article, the
calculation and control module enables, by using the photographing instruction, the
photographing module to photograph the detected article, to quickly and conveniently
generate the at least two images of the detected article that are time sequentially
related and are obtained at different angles, and obtains, based on the generated
images, the more accurate detection result obtained by using the article damage detection
method or apparatus in the present specification.
[0057] Specific implementations of the present specification are described above. Other
implementations fall within the scope of the appended claims. In some cases, the actions
or steps described in the claims can be performed in an order different from the order
in the implementations and the desired results can still be achieved. In addition,
the process described in the accompanying drawings does not necessarily need a particular
execution order to achieve the desired results. In some implementations, multi-tasking
and parallel processing can be advantageous.
[0058] In an application example of the present specification, a secondhand mobile device
merchant places a damage detector in a crowded public place. A user can independently
use the damage detector to obtain an estimated recycling price of a secondhand mobile
device. The mobile device can be a mobile phone, a tablet computer, a laptop, etc.
[0059] The damage detector includes a trained detection model, and a structure of the detection
model is shown in FIG. 5. The detection model includes a deep convolutional neural
network sub-model (a first sub-model) and an LSTM sub-model (a second sub-model).
[0060] The detection model uses multiple images that are time sequentially related as inputs.
The deep convolutional neural network sub-model first performs feature extraction
on each image in time order; it then identifies the target mobile device from the
extracted features and performs damage discovery on the target mobile device; finally,
it fuses the initially extracted features with the features obtained after damage
discovery, to avoid the feature loss that may occur during target identification and
damage discovery, and generates a damage classification result for the single image
based on the fused features.
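The per-image pipeline of the first sub-model (feature extraction, damage discovery, feature fusion, single-image classification) can be sketched in pure Python. Every function body here is an illustrative stand-in chosen only to make the data flow concrete; a real implementation would use a deep convolutional neural network, and the threshold and scoring rule are assumptions, not taken from the specification.

```python
def extract_features(image):
    # Stand-in for CNN feature extraction: here, just the raw values.
    return list(image)

def discover_damage(features):
    # Stand-in for target identification + damage discovery:
    # keep only strong activations (hypothetical 0.5 threshold).
    return [f if f > 0.5 else 0.0 for f in features]

def fuse(initial, discovered):
    # Fuse the initially extracted features with the post-discovery
    # features, so information lost during discovery is recovered.
    return [a + b for a, b in zip(initial, discovered)]

def classify_single_image(fused):
    # Stand-in classifier: mean activation mapped into [0, 1].
    score = sum(fused) / (2 * len(fused))
    return min(max(score, 0.0), 1.0)

def first_sub_model(image):
    feats = extract_features(image)
    damaged = discover_damage(feats)
    fused = fuse(feats, damaged)
    return classify_single_image(fused)
```

The fusion step is the point the paragraph emphasizes: because `fuse` re-adds the initial features, a value suppressed during damage discovery still contributes to the final classification.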
[0061] The deep convolutional neural network sub-model inputs the damage classification
results of the single images to the LSTM sub-model in time order. The LSTM sub-model
performs time series analysis on the damage classification results of consecutive
single images, combines the same damage appearing in different single images, and
outputs a damage classification result that fully reflects the status of the detected
mobile device. The LSTM sub-model can use an attention mechanism to achieve a better
time series analysis effect.
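A minimal stand-in for this time-series combination step, assuming per-frame results arrive as dicts mapping damage types to probabilities (an illustrative format, not defined by the specification): merging the same damage seen in several frames can be approximated by keeping, for each type, the strongest evidence over the sequence. The actual second sub-model uses an LSTM (optionally with attention) rather than this max-pooling rule.

```python
def combine_over_time(frame_results):
    """Merge per-frame damage classifications into one sequence-level
    result: each damage type keeps its maximum probability across the
    frames, so the same damage seen in several frames is counted once."""
    combined = {}
    for result in frame_results:          # frames in time order
        for damage_type, prob in result.items():
            combined[damage_type] = max(combined.get(damage_type, 0.0), prob)
    return combined

frames = [
    {"scratch": 0.8, "adhesive": 0.1},   # frame 1
    {"scratch": 0.7, "adhesive": 0.6},   # frame 2: same scratch, adhesive now visible
]
```

Unlike this sketch, an LSTM can also exploit ordering, e.g. distinguishing damage that persists across adjacent frames from a one-frame false detection.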
[0062] In this application example, the damage classification result includes scratches,
damage, and adhesives. When the detection model is trained, each training sample is
labeled with a value for each type of damage: 0 (this type of damage is absent) or
1 (this type of damage is present). Several such samples are used to perform joint
training on the deep convolutional neural network sub-model and the LSTM sub-model.
When damage detection is performed by using the trained detection model, the output
is the probability that each type of damage is present on the detected mobile device.
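The labeling scheme described above is multi-label binary classification: one 0/1 entry per damage type, with the trained model emitting a per-type probability. A sketch of the label encoding and a typical training loss for this setup, using the three types from this example; the loss choice is an illustrative assumption, as the specification does not name one.

```python
import math

DAMAGE_TYPES = ["scratches", "damage", "adhesives"]

def encode_label(present_types):
    # One 0/1 entry per damage type: 1 if the sample shows that damage.
    return [1 if t in present_types else 0 for t in DAMAGE_TYPES]

def binary_cross_entropy(predicted, target):
    # Common loss for multi-label training (illustrative, not from the spec):
    # penalizes confident wrong probabilities heavily.
    eps = 1e-12
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for p, t in zip(predicted, target)) / len(target)
```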
[0063] The damage detector includes a calculation and control module, a movement module,
a photographing module, and a light source module. The detection model is stored in
a storage medium of the calculation and control module. A server of the secondhand
mobile device merchant can update a stored program (including the detection model)
online by communicating with the calculation and control module.
[0064] The movement module includes a platform for accommodating a mobile device, and the
platform can rotate based on a movement instruction from the calculation and control
module. A camera of the photographing module and a light source of the light source
module are fastened around the platform.
[0065] After the user launches value evaluation of the secondhand mobile device, and inputs
information such as a model and a configuration of the mobile device, the damage detector
prompts the user to place the mobile device on the platform. After the user places
the mobile device, the calculation and control module determines, based on the light
in the current environment, the light brightness to be used, and sends a light source
instruction to the light source module. The light source module lights the light source
at the light intensity specified in the instruction. The calculation and control module
sends a movement instruction to the movement module, so that the platform rotates
360 degrees. The calculation and control module sends a photographing instruction
to the photographing module, so that the photographing module records a video of an
article on the platform during rotation of the platform. The photographing module
stores the recorded video in a local storage medium.
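The control flow of this paragraph (set the light, rotate the platform, record a video) can be sketched as the sequence of instructions issued by the calculation and control module. The instruction format and the brightness rule are illustrative assumptions; the specification only states that brightness is chosen from the ambient light.

```python
def choose_brightness(ambient_lux):
    # Hypothetical rule: the dimmer the environment, the brighter the
    # light source; a bright environment needs no extra light.
    return max(0, 800 - ambient_lux)

def run_capture(ambient_lux):
    """Issue the instruction sequence described in the specification:
    light source first, then platform rotation, then video recording."""
    instructions = []
    instructions.append(("light_source", choose_brightness(ambient_lux)))
    instructions.append(("movement", "rotate_360"))
    instructions.append(("photographing", "record_video"))
    return instructions
```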
[0066] After the movement module and the photographing module complete their work, the
calculation and control module instructs the light source module to turn off the light,
reads the recorded video from the local storage medium, and inputs the images in the
video to the detection model in time order, to obtain a classification result for each
type of damage on the detected mobile device. The calculation and control module calculates an estimated
price of the detected mobile device based on the damage classification result and
information such as the model and the configuration of the detected mobile device,
and displays the estimated price to the user.
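The final pricing step can be sketched as below. The base-price table, per-type penalty weights, and the linear discount rule are all hypothetical: the specification says only that the estimate combines the damage classification result with information such as the model and configuration, without defining a formula.

```python
BASE_PRICES = {"phone_x_64gb": 200.0}   # hypothetical model/configuration table
PENALTIES = {"scratches": 0.15, "damage": 0.40, "adhesives": 0.05}

def estimate_price(model_key, damage_probs):
    # Discount the base price for each damage type, weighted by the
    # probability the detection model assigned to it (illustrative rule).
    price = BASE_PRICES[model_key]
    for damage_type, prob in damage_probs.items():
        price *= 1.0 - PENALTIES.get(damage_type, 0.0) * prob
    return round(price, 2)
```

Weighting by probability means an uncertain detection lowers the price less than a confident one, which matches using the model's per-type probabilities directly.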
[0067] The previous descriptions are merely preferred examples of implementations of the
present specification, and are not intended to limit the present application.
[0068] In a typical configuration, a computing device includes one or more central processing
units (CPUs), input/output interfaces, network interfaces, and memories.
[0069] The memory can include a non-persistent memory, a random access memory (RAM), and/or
a nonvolatile memory in a computer-readable medium, for example, a read-only memory
(ROM) or a flash memory (flash RAM). The memory is an example of the computer-readable
medium.
[0070] The computer-readable medium includes persistent, non-persistent, removable, and
non-removable media that can store information by using any method or technology. The
information can be a computer-readable instruction, a data structure, a program module,
or other data. Examples of the computer storage medium include but are not limited
to a phase-change random access memory (PRAM), a static random access memory (SRAM),
a dynamic random access memory (DRAM), another type of random access memory, a read-only
memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a flash
memory or another memory technology, a compact disc read-only memory (CD-ROM), a digital
versatile disc (DVD) or other optical storage media, a cassette, a cassette magnetic
disk storage medium, another magnetic storage device, or any other non-transmission
medium. The computer storage medium can be configured to store information that can
be accessed by the computing device. As described in the present specification, the
computer-readable medium does not include computer-readable transitory media such
as modulated data signals and carriers.
[0071] It is worthwhile to further note that the terms "include", "comprise", and any
other variants thereof are intended to cover a non-exclusive inclusion, so that a process,
a method, a product, or a device that includes a list of elements not only includes
those elements but also includes other elements that are not expressly listed, or
further includes elements inherent to such a process, method, product, or device. Without
more constraints, an element preceded by "includes a ..." does not preclude the existence
of additional identical elements in the process, method, product, or device that includes
the element.
[0072] A person skilled in the art should understand that an implementation of the present
specification can be provided as a method, a system, or a computer program product.
Therefore, the implementations of the present specification can take the form of
hardware-only implementations, software-only implementations, or implementations
combining software and hardware. In addition, the implementations of the present specification
can use a form of a computer program product that is implemented on one or more computer-usable
storage media (including but not limited to a disk memory, a CD-ROM, an optical memory,
etc.) that include computer-usable program code.
1. A computer-implemented article damage detection method, comprising:
obtaining at least two images that are time sequentially related and show a detected
article at different angles; and
inputting the images to a detection model in time order, to determine a damage detection
result, wherein the detection model comprises a first sub-model and a second sub-model,
wherein the first sub-model includes a deep convolutional neural network and identifies
respective features of each image, a feature processing result of each image is input
to the second sub-model, wherein the feature processing result is a damage detection
result of each image, the second sub-model includes a long short-term memory and performs
time series analysis on the feature processing result to determine the damage detection
result, wherein the second sub-model performs time series analysis to obtain a classification
result of each type of damage on the detected article, and the first sub-model and
the second sub-model are obtained by performing joint training by using training samples
labeled with article damage.
2. The method according to claim 1, wherein the second sub-model is an LSTM network based
on an attention mechanism.
3. The method according to claim 1, wherein the at least two images that are time sequentially
related and show the detected article at different angles comprise at least one of
the following: photos of the detected moving article that are consecutively taken,
recorded videos of the detected moving article, photos of the detected article that
are consecutively taken by using a mobile camera, and videos of the detected article
that are recorded by using a mobile camera.
4. The method according to claim 1, wherein the damage detection result comprises a classification
result of each of one or more types of damage.
5. The method according to claim 4, wherein the feature processing result of each image
comprises a classification result that is of a type of damage in the single image
of the detected article and is generated after the first sub-model performs feature
extraction, damage discovery, and feature fusion on each image.
6. A computer device, comprising a storage medium and a processor, wherein the storage
medium stores a computer program that can be run by the processor, and when the processor
runs the computer program, the steps of the article damage detection method according
to any one of claims 1 to 5 are performed.
7. A computer-readable storage medium, wherein the computer-readable storage medium stores
a computer program, and when the computer program is run by a processor, the steps
of the article damage detection method according to any one of claims 1 to 5 are performed.
8. An article damage detector, comprising:
a photographing module, configured to generate, based on a photographing instruction
from a calculation and control module, at least two images of a detected article that
are time sequentially related;
a movement module, configured to drive relative movement between a camera of the photographing
module and the detected article based on a movement instruction from the calculation
and control module; and
the calculation and control module, configured to enable, by using the movement instruction
and the photographing instruction, the photographing module to generate the at least
two images that are time sequentially related and show the detected article at different
angles, and determine a damage detection result based on the images, wherein the damage
detection result is generated by using the method according to any one of claims 1
to 5.
9. The article damage detector according to claim 8, wherein the calculation and control
module is configured to determine a damage detection result based on the images, comprising:
the calculation and control module is configured to upload the images to a server,
and receive the damage detection result generated by the server by using the method
according to any one of claims 1 to 5; or
locally run the method according to any one of claims 1 to 5, to determine the damage
detection result.
10. The article damage detector according to claim 8, wherein the movement module comprises
an article movement submodule, configured to accommodate the detected article, and
perform upward or downward movement, displacement, and/or rotation based on the movement
instruction; or
a camera movement submodule, configured to accommodate the camera of the photographing
module, and perform upward or downward movement, displacement, and/or rotation based
on the movement instruction.
11. The article damage detector according to claim 8, wherein the article damage detector
further comprises a light source module, configured to provide light for the photographing
module based on a light source instruction from the calculation and control module;
and
the calculation and control module further comprises a light control submodule, configured
to send the light source instruction to the light source module, so that the light
source module provides light suitable for photographing.
12. The article damage detector according to claim 11, wherein the movement module comprises
a camera movement submodule, configured to accommodate the camera of the photographing
module and a light source of the light source module, and perform upward or downward
movement, displacement, and/or rotation based on the movement instruction.
13. The article damage detector according to claim 8, wherein the detected article comprises
a mobile device.