(19)
(11)EP 3 443 331 B1

(12)EUROPEAN PATENT SPECIFICATION

(45)Mention of the grant of the patent:
22.07.2020 Bulletin 2020/30

(21)Application number: 17715928.2

(22)Date of filing:  06.04.2017
(51)International Patent Classification (IPC): 
G01N 23/04(2018.01)
(86)International application number:
PCT/EP2017/058270
(87)International publication number:
WO 2017/178334 (19.10.2017 Gazette  2017/42)

(54)

MOBILE IMAGING OF AN OBJECT USING PENETRATING RADIATION

MOBILE BILDGEBUNG EINES OBJEKTS MITTELS EINDRINGENDER STRAHLUNG

IMAGERIE MOBILE D'UN OBJET À L'AIDE D'UN RAYONNEMENT PÉNÉTRANT


(84)Designated Contracting States:
AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

(30)Priority: 15.04.2016 GB 201606640

(43)Date of publication of application:
20.02.2019 Bulletin 2019/08

(73)Proprietor: Universiteit Antwerpen
2000 Antwerpen (BE)

(72)Inventors:
  • SIJBERS, Jan
    2570 Duffel (BE)
  • DE BEENHOUWER, Jan, Jozef, Victor
    9500 Geraardsbergen (BE)

(74)Representative: DenK iP 
Leuvensesteenweg 203
3190 Boortmeerbeek (BE)


(56)References cited:
WO-A1-2014/086837
US-A1- 2007 098 142
CN-A- 105 302 141
  
  • GONZALEZ-RUIZ ALEJANDRO ET AL: "An Integrated Framework for Obstacle Mapping With See-Through Capabilities Using Laser and Wireless Channel Measurements", IEEE SENSORS JOURNAL, IEEE SERVICE CENTER, NEW YORK, NY, US, vol. 14, no. 1, 1 January 2014 (2014-01-01), pages 25-38, XP011535708, ISSN: 1530-437X, DOI: 10.1109/JSEN.2013.2278394 [retrieved on 2013-10-30]
  • YASAMIN MOSTOFI: "Cooperative Wireless-Based Obstacle/Object Mapping and See-Through Capabilities in Robotic Networks", IEEE TRANSACTIONS ON MOBILE COMPUTING, IEEE SERVICE CENTER, LOS ALAMITOS, CA, US, vol. 12, no. 5, 1 May 2013 (2013-05-01), pages 817-829, XP011498152, ISSN: 1536-1233, DOI: 10.1109/TMC.2012.32
  
Note: Within nine months from the publication of the mention of the grant of the European patent, any person may give notice to the European Patent Office of opposition to the European patent granted. Notice of opposition shall be filed in a written reasoned statement. It shall not be deemed to have been filed until the opposition fee has been paid. (Art. 99(1) European Patent Convention).


Description

Field of the invention



[0001] The invention relates to the field of mobile radiation imaging. More specifically it relates to a system and method for imaging a target object, and a related computer program product.

Background of the invention



[0002] The United States patent US 8,194,822 describes systems and methods for inspecting an object with a scanned beam of penetrating radiation. Scattered radiation from the beam is detected, in either a backward or forward direction. Characteristic values of the scattered radiation are compared to expected reference values to characterize the object.

[0003] Gonzalez-Ruiz Alejandro et al. in IEEE Sensors Journal 14 (2014) pp 25-38 and Yasamin Mostofi in IEEE Transactions on Mobile Computing 12 (2013) pp 817-829 both describe systems for obstacle mapping using wireless channel measurements based on unmanned vehicles in a network.

[0004] CN105302141 discloses X-ray radiographic inspection of welds in a pressure-bearing wall of a structure, in which a robot carrying an X-ray source autonomously walks on the structure. The robot includes an encoder wheel and wirelessly transmits the encoder information to a second robot carrying a flat-panel detector, such that both robots walk synchronously.

[0005] However, a need exists in the art for flexible, unmanned systems for inspecting objects. An autonomous system for imaging objects could be particularly advantageous in hazardous environments, confined spaces and difficult to reach spaces. It may furthermore be advantageous to provide an imaging solution that can autonomously detect and image objects in a volume of space without substantial intervention of an operator.

Summary of the invention



[0006] It is an object of embodiments of the present invention to provide good and efficient means and methods for imaging target objects.

[0007] The above objective is accomplished by a method and device according to the present invention.

[0008] The present invention provides, in a first aspect, a system for imaging a target object. The system comprises a first unmanned vehicle that comprises a source of penetrating radiation, said source being an X-ray tube or a gamma radiation source. The first unmanned vehicle is adapted for positioning the source such as to direct the penetrating radiation toward the target object. The system further comprises a second unmanned vehicle that comprises an image detector for registering a spatial distribution of this penetrating radiation as an image.

[0009] The first unmanned vehicle and the second unmanned vehicle are autonomous vehicles adapted for independent propelled motion. Each of the first and the second unmanned vehicle comprises a positioning unit for detecting a position of the corresponding vehicle, e.g. of the unmanned vehicle in which the positioning unit is comprised.

[0010] The system further comprises a processing unit for controlling the propelled motion of the first unmanned vehicle and the second unmanned vehicle such as to acquire at least two images using the image detector corresponding to at least two different projection directions of the penetrating radiation through the target object. According to the present invention, the processing unit is adapted for controlling the propelled motion of the first unmanned vehicle to position the source such as to direct the penetrating radiation toward the target object, and for controlling the propelled motion of the second unmanned vehicle to position the image detector such as to register the spatial distribution of intensities of said penetrating radiation after being transmitted through the target object.

[0011] The processing unit is furthermore adapted for generating data representative of an internal structure of the target object from these at least two images.

[0012] It is an advantage of embodiments of the present invention that a target object in a difficult to reach location may be imaged efficiently.

[0013] It is an advantage of embodiments of the present invention that objects to be imaged may be detected and imaged automatically without requiring prior knowledge of their exact position.

[0014] It is an advantage of embodiments of the present invention that objects may be easily and safely imaged in a hazardous environment.

[0015] In a system in accordance with embodiments of the present invention, the first unmanned vehicle and/or the second unmanned vehicle may comprise a propulsion system for providing the propelled motion and a power source for sustaining the propelled motion.

[0016] In a system in accordance with embodiments of the present invention, the first unmanned vehicle and/or the second unmanned vehicle may be a ground vehicle, e.g. a wheeled vehicle, an aerial vehicle, a spacecraft, a watercraft and/or a submersible vehicle.

[0017] In a system in accordance with embodiments of the present invention, the positioning unit may comprise an indoor positioning system and/or an outdoor positioning system.

[0018] In a system in accordance with embodiments of the present invention, the positioning unit may use laser positioning and/or echo positioning to measure a distance with respect to objects in its vicinity.

[0019] In a system in accordance with embodiments of the present invention, the positioning unit may be adapted for determining a relative position of the unmanned vehicle in which it is comprised with respect to the target object and/or the other unmanned vehicle.

[0020] According to the present invention, the source of penetrating radiation comprises an X-ray tube and/or a gamma radiation source.

[0021] In a system in accordance with embodiments of the present invention, the first unmanned vehicle may comprise a pivotable support for allowing a rotation of the source and/or the second unmanned vehicle may comprise a pivotable support for allowing a rotation of the image detector.

[0022] In a system in accordance with embodiments of the present invention, the processing unit may be, fully or in part, integrated in the first unmanned vehicle, in the second unmanned vehicle and/or in a separate base station.

[0023] In a system in accordance with embodiments of the present invention, the processing unit may be adapted for receiving position information from the first unmanned vehicle and/or from the second unmanned vehicle.

[0024] In a system in accordance with embodiments of the present invention, the positioning unit may be adapted for determining the position of the target object when present, in which the first unmanned vehicle and/or the second unmanned vehicle may be adapted for transmitting an indicative position of the target object to the processing unit when the presence of the target object is detected by the positioning unit.

[0025] In a system in accordance with embodiments of the present invention, the processing unit may be adapted for controlling the independent propelled motion of the first unmanned vehicle and the second unmanned vehicle such as to acquire the at least two images, using the image detector, corresponding to the at least two different projection directions of the penetrating radiation through the target object, in which the at least two different projection directions are determined by the processing unit as a uniform angular sampling around the target object.

[0026] In a system in accordance with embodiments of the present invention, the processing unit may be adapted for generating the data representative of the internal structure of the target object by performing a tomographic reconstruction.

[0027] In a system in accordance with embodiments of the present invention, this tomographic reconstruction may comprise a partial reconstruction taking a first plurality of images into account. The processing unit may furthermore be adapted for determining a further projection direction for acquiring a further image of the target object by taking this partial reconstruction into account.

[0028] In a system in accordance with embodiments of the present invention, the tomographic reconstruction may be performed by the processing unit using a discrete algebraic reconstruction method.
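By way of a toy illustration only, not forming part of the claimed subject-matter: discrete algebraic reconstruction methods combine algebraic (ART/Kaczmarz-type) iterations with the prior knowledge that the object consists of a few known grey levels. The sketch below reconstructs a 2×2 binary image from its row and column sums and then applies a simple threshold as the discretization step; a full discrete algebraic method would alternate such algebraic sweeps with segmentation and restrict updates to boundary pixels, which this sketch deliberately omits:

```python
def art_sweep(x, rays, n_sweeps=10):
    """Kaczmarz/ART: for each ray (weights w, measured sum p),
    project the current estimate onto the hyperplane w.x = p."""
    for _ in range(n_sweeps):
        for w, p in rays:
            dot = sum(wi * xi for wi, xi in zip(w, x))
            norm = sum(wi * wi for wi in w)
            corr = (p - dot) / norm
            x = [xi + corr * wi for wi, xi in zip(w, x)]
    return x

# 2x2 image, pixels [x0, x1, x2, x3]; true image is [1, 0, 1, 1].
# Measurements: two row sums and two column sums.
rays = [([1, 1, 0, 0], 1),   # row 0
        ([0, 0, 1, 1], 2),   # row 1
        ([1, 0, 1, 0], 2),   # column 0
        ([0, 1, 0, 1], 1)]   # column 1

continuous = art_sweep([0.0] * 4, rays)
# Discretization step: snap to the known grey levels {0, 1}.
discrete = [1 if v >= 0.5 else 0 for v in continuous]
print(discrete)  # → [1, 0, 1, 1]
```

The continuous ART solution is non-unique here (the four sums underdetermine the four pixels), yet the discretization step recovers the true binary image, which is precisely the benefit exploited by discrete tomography.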

[0029] In a system in accordance with embodiments of the present invention, the tomographic reconstruction may comprise a compensation method for taking unpredictable variations in positioning of the first unmanned vehicle, of the second unmanned vehicle and/or of the target object into account.

[0030] In a second aspect, the present invention provides a method for imaging a target object. This method comprises providing a first unmanned vehicle that comprises a source of penetrating radiation, and providing a second unmanned vehicle that comprises an image detector for registering a spatial distribution of this penetrating radiation as an image. The first unmanned vehicle and the second unmanned vehicle are autonomous vehicles adapted for independent propelled motion. Each of the first and the second unmanned vehicle comprises a positioning unit for detecting a position of the corresponding vehicle.

[0031] The method further comprises controlling, using a processing unit, the propelled motion of the first unmanned vehicle to position the source such as to direct the penetrating radiation toward the target object, and controlling, using the processing unit, the propelled motion of the second unmanned vehicle to position the image detector such as to register the spatial distribution of intensities of said penetrating radiation after being transmitted through the target object.

[0032] The method also comprises acquiring at least two images using the image detector, wherein the propelled motion of the first unmanned vehicle and of the second unmanned vehicle are controlled such that the at least two images correspond to at least two different projection directions of the penetrating radiation through the target object.

[0033] The method further comprises generating data representative of an internal structure of the target object from the at least two images, using the processing unit.

[0034] In a method in accordance with embodiments of the present invention, the controlling of the propelled motion of the first and second unmanned vehicle may take positioning information provided by the positioning unit into account.

[0035] A method in accordance with embodiments of the present invention, may further comprise determining the position of the target object by the positioning unit, and this controlling of the propelled motion of the first unmanned vehicle and of the second unmanned vehicle may take the position of the target object into account, e.g. as determined by the positioning unit.

[0036] In a method in accordance with embodiments of the present invention, the propelled motion of the first unmanned vehicle and the second unmanned vehicle may be controlled such as to acquire the at least two images corresponding to the at least two different projection directions, in which the at least two different projection directions are determined by the processing unit as a uniform angular sampling around the target object.

[0037] In a method in accordance with embodiments of the present invention, the generating of the data representative of the internal structure of the target object may comprise a tomographic reconstruction.

[0038] In a method in accordance with embodiments of the present invention, this tomographic reconstruction may comprise a partial reconstruction taking a first plurality of images into account, in which the method may further comprise determining a further projection direction for acquiring a further image of the target object by taking this partial reconstruction into account.

[0039] In a method in accordance with embodiments of the present invention, the tomographic reconstruction may be performed by the processing unit using a discrete algebraic reconstruction method.

[0040] In a method in accordance with embodiments of the present invention, the tomographic reconstruction may comprise a compensation method for taking unpredictable variations in positioning of the first unmanned vehicle, the second unmanned vehicle and/or the target object into account.

[0041] In a further aspect, the present invention provides a computer program product comprising instructions to cause the system according to the present invention to perform the steps of the method according to the present invention.

[0042] Particular and preferred aspects of the invention are set out in the accompanying independent and dependent claims.

[0043] These and other aspects of the invention will be apparent from and elucidated with reference to the embodiment(s) described hereinafter.

Brief description of the drawings



[0044] 

FIG 1 shows an exemplary system for imaging a target object in accordance with embodiments of the present invention.

FIG 2 shows another exemplary system for imaging a target object in accordance with embodiments of the present invention.

FIG 3 illustrates an exemplary method for imaging a target object in accordance with embodiments of the present invention.



[0045] The drawings are only schematic and are non-limiting. In the drawings, the size of some of the elements may be exaggerated and not drawn on scale for illustrative purposes.

[0046] Any reference signs in the claims shall not be construed as limiting the scope.

[0047] In the different drawings, the same reference signs refer to the same or analogous elements.

Detailed description of illustrative embodiments



[0048] The present invention will be described with respect to particular embodiments and with reference to certain drawings but the invention is not limited thereto but only by the claims. The drawings described are only schematic and are non-limiting. In the drawings, the size of some of the elements may be exaggerated and not drawn on scale for illustrative purposes. The dimensions and the relative dimensions do not correspond to actual reductions to practice of the invention.

[0049] Furthermore, the terms first, second and the like in the description and in the claims, are used for distinguishing between similar elements and not necessarily for describing a sequence, either temporally, spatially, in ranking or in any other manner. It is to be understood that the terms so used are interchangeable under appropriate circumstances and that the embodiments of the invention described herein are capable of operation in other sequences than described or illustrated herein.

[0050] Moreover, the terms top, under and the like in the description and the claims are used for descriptive purposes and not necessarily for describing relative positions. It is to be understood that the terms so used are interchangeable under appropriate circumstances and that the embodiments of the invention described herein are capable of operation in other orientations than described or illustrated herein.

[0051] It is to be noticed that the term "comprising", used in the claims, should not be interpreted as being restricted to the means listed thereafter; it does not exclude other elements or steps. It is thus to be interpreted as specifying the presence of the stated features, integers, steps or components as referred to, but does not preclude the presence or addition of one or more other features, integers, steps or components, or groups thereof. Thus, the scope of the expression "a device comprising means A and B" should not be limited to devices consisting only of components A and B. It means that with respect to the present invention, the only relevant components of the device are A and B.

[0052] Similarly it should be appreciated that in the description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.

[0053] Furthermore, while some embodiments described herein include some but not other features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention, and form different embodiments, as would be understood by those in the art. For example, in the following claims, any of the claimed embodiments can be used in any combination.

[0054] In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In other instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.

[0055] Where the present description refers to an unmanned vehicle, reference is made to an uncrewed vehicle that can operate without a human operator on board. Particularly, in the present disclosure, such unmanned vehicle refers to a vehicle capable of sensing its environment and navigating in response to control instructions from a processing unit in accordance with embodiments of the present invention. These control instructions may be determined by the processing unit without human intervention, hence the reference to 'autonomous', which should be interpreted in the context of the system as a whole and not necessarily of the vehicle as an independent unit as such. The vehicle may be a ground vehicle, a water vehicle, an air vehicle or a space vehicle, but may explicitly exclude mobile machines adapted for moving along a fixed track, e.g. having only a single degree of freedom of translation, which are thus not considered as unmanned vehicles in the context of the present disclosure.

[0056] In a first aspect, the present invention relates to a system for imaging a target object. This system comprises a first unmanned vehicle and a second unmanned vehicle. The first and second unmanned vehicle are each an autonomous vehicle, adapted for independent propelled motion. Furthermore, each of the first and the second unmanned vehicle comprises a positioning unit for detecting a position of the corresponding vehicle, e.g. a navigation unit for controlling the motion of the unmanned vehicle in response to a detected position of the vehicle.

[0057] The first unmanned vehicle comprises a source of penetrating radiation, and this first unmanned vehicle is adapted for positioning the source such as to direct the penetrating radiation toward the target object. The second unmanned vehicle comprises an image detector for registering a spatial distribution of the penetrating radiation as an image. This second unmanned vehicle is adapted for positioning the image detector such as to register the spatial distribution of the penetrating radiation when transmitted through the target object.

[0058] The system also comprises a processing unit, e.g. a processor, for controlling the propelled motion of respectively the first and the second unmanned vehicle such as to acquire at least two images using the image detector. These at least two images correspond to at least two different projection directions of the penetrating radiation through the target object. The processing unit is further adapted for generating data representative of an internal structure of the target object from the at least two images.

[0059] Referring to FIG 1 and FIG 2, exemplary systems 10 in accordance with embodiments of the present invention are shown schematically. Such system 10 is adapted for imaging a target object 15. For example, the target object may be difficult to reach by conventional imaging systems, e.g. a blade of a wind turbine, a submerged object, a conduit or device in a crawl space, a monumental piece of art, an object in a hazardous environment, such as a vacuum, toxic or irradiated environment, and/or an archaeological artefact in an unstable and/or delicate excavated building.

[0060] This system 10 comprises a first unmanned vehicle 11 and a second unmanned vehicle 12. The first unmanned vehicle 11, respectively the second unmanned vehicle 12, is an autonomous vehicle adapted for independent propelled motion. Particularly, the first and/or second unmanned vehicle may be adapted for autonomous movement in at least two dimensions, e.g. allowing substantially unconstrained translation in at least two dimensions, or even for autonomous movement in three dimensions, e.g. allowing substantially unconstrained translation in three dimensions. The first and/or second unmanned vehicle may allow, in addition to an autonomous translation movement, a rotation movement of at least a part thereof, e.g. a rotation of a radiation source of the first unmanned vehicle and/or a rotation of a detector of the second unmanned vehicle. This rotation movement may be a rotation around a single axis of rotation, around a pair of orthogonal axes or around three orthogonal axes. For example, the unmanned vehicle may allow a substantially unconstrained positioning and orienting of a component, e.g. the source, respectively the detector, providing six degrees of freedom, e.g. three degrees of freedom of translation and three degrees of freedom of rotation.

[0061] For example, the vehicle 11,12 may comprise a propulsion system, such as at least one propulsion engine, wheels, rotors, fluid jet outlets or a combination thereof. For example, each vehicle 11,12 may be a ground vehicle, an aerial vehicle, a spacecraft, a watercraft, e.g. a boat, ship or hovercraft, and/or a submersible vehicle. The vehicle 11,12 may be a drone. Each vehicle 11,12 may comprise a power source for sustaining the propelled motion, e.g. a combustion fuel tank, fuel cell, battery and/or solar power module.

[0062] Furthermore, each of the first and the second unmanned vehicle 11,12 comprises a positioning unit 13 for detecting a position of the corresponding vehicle 11,12. The vehicle 11,12 may comprise a navigation unit for controlling the motion of the unmanned vehicle in response to a detected position of the vehicle, e.g. as supplied by the positioning unit 13. The positioning unit 13 may comprise an indoor positioning system for locating the vehicle inside a building, and/or an outdoor positioning system for locating the vehicle, e.g. with respect to a global coordinate system. The positioning unit may use radio waves, magnetic fields, acoustic signals, optical imaging, or other sensory information to determine a position of the vehicle with respect to a reference coordinate system. The positioning unit may comprise laser positioning and/or echo positioning to measure a distance with respect to objects in its vicinity. The reference coordinate system may be provided by anchor nodes with known position, such as global positioning system (GPS) satellites, cellular telephone network stations, WiFi access points or markers. The positioning system may furthermore be adapted for determining a relative position with respect to the target object 15 and/or the other vehicle. The positioning system may perform a trilateration operation to determine the position of the corresponding vehicle, the object and/or the other vehicle.
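By way of illustration only, and not forming part of the claimed subject-matter, the trilateration operation referred to here can be sketched in two dimensions: given three anchor nodes with known positions and measured ranges, subtracting one range equation from the other two yields a linear system for the unknown position. This is a generic textbook construction, not the patented positioning unit itself:

```python
def trilaterate_2d(anchors, dists):
    """Solve for (x, y) from three non-collinear anchors (xi, yi) and
    measured ranges di. Subtracting the first range equation from the
    other two gives two linear equations, solved by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    # 2*(xi-x1)*x + 2*(yi-y1)*y = d1^2 - di^2 + xi^2 - x1^2 + yi^2 - y1^2
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1  # zero if the anchors are collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Vehicle at (3, 4), anchors at three corners of a 10 m working area:
print(trilaterate_2d([(0, 0), (10, 0), (0, 10)], [5.0, 65**0.5, 45**0.5]))
```

With noisy range measurements and more than three anchors, the same linearized system would be solved in a least-squares sense instead.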

[0063] The first unmanned vehicle 11 comprises a source 14 of penetrating radiation, for example a Röntgen tube, e.g. an X-ray tube, or a gamma radiation source. For example, the source 14 may be a compact source of penetrating radiation, such as the XRS-3 lightweight X-ray tube, commercially available from Golden Engineering, Centerville, USA. The first unmanned vehicle 11 is adapted for positioning the source 14 such as to direct the penetrating radiation toward the target object 15. For example, the unmanned vehicle 11 may position itself at a position such as to direct the radiation emissions, e.g. a cone of radiation, toward the object 15 to be imaged. Furthermore, the first unmanned vehicle 11 may comprise a pivotable support, e.g. at least one gimbal, that allows rotation of the source 14 about at least one axis of rotation. For example, two or three gimbals, mounted one onto the other with orthogonal pivot axes, may be provided to improve stability of the source 14 under movement of the vehicle 11, and/or to provide an accurate control of the emission direction of the penetrating radiation.

[0064] The second unmanned vehicle 12 comprises an image detector 16 for registering a spatial distribution of the penetrating radiation as an image. This second unmanned vehicle 12 is adapted for positioning the image detector 16 such as to register the spatial distribution of the penetrating radiation 17 when transmitted through the target object 15.

[0065] For example, the image detector 16 may be a light-weight, small X-ray digital image sensor, e.g. comprising a photodiode array. Such photodiode array may have a detector surface area of less than or equal to 1 m2, e.g. less than or equal to 30 cm x 30 cm, e.g. less than or equal to 20 cm x 20 cm, such as, for example 120 mm by 120 mm, or even less, e.g. less than or equal to 10 cm x 10 cm. This image detector 16 may comprise an array of pixels, e.g. at least 100 by 100 pixels, e.g. at least 500 by 500 pixels, or at least 1000 by 1000 pixels, e.g. 2400 by 2400 pixels, or even a larger number of pixels, e.g. 10000 by 10000 pixels. Each pixel may be adapted for quantifying the amount of radiation incident thereon, e.g. using a digital output value, such as a 4 bit value, an 8 bit value, a 12 bit value, a 16 bit value, a 24 bit value, a 32 bit value, or even a higher analog-to-digital conversion resolution. Particularly, the detector may have a suitably high dynamic range, e.g. in combination with low noise characteristics. For example, the C7942CA-22 may be a suitable, commercially available image detector, available from Hamamatsu Photonics, Shizuoka Pref., Japan.

[0066] Furthermore, the second unmanned vehicle 12 may comprise a pivotable support, e.g. at least one gimbal, that allows rotation of the image detector 16 about at least one axis of rotation. For example, two or three gimbals, mounted one onto the other with orthogonal pivot axes, may be provided to improve stability of the image detector 16 under movement of the vehicle 12, and/or to provide an accurate control of the image plane orientation with respect to the incident penetrating radiation.

[0067] The system also comprises a processing unit 18, e.g. a processor, for controlling the propelled motion of respectively the first and the second unmanned vehicle 11,12 such as to acquire at least two images using the image detector 16. For example, the processor may comprise a computer, an application specific integrated circuit, a field programmable gate array and/or a microprocessor, programmed and/or configured for controlling the motion of the unmanned vehicles. For example, the processing unit may comprise a digital storage memory, input and/or output means, and/or a communication module. The processing unit may be integrated in the first unmanned vehicle 11, in the second unmanned vehicle 12, or in a separate base station. Furthermore, the processing unit may comprise different components, e.g. a first component integrated in the first unmanned vehicle 11 and a second component integrated in the second unmanned vehicle 12, which are adapted for exchanging data, e.g. over a wireless communication link.

[0068] The processing unit may be adapted for receiving position information from the first and/or second unmanned vehicle 11,12, identifying a position of the vehicle with respect to a common position reference. The processing unit may be adapted for sending control instructions to the first and/or second unmanned vehicle 11, 12 to adjust the position of the vehicle to a target location.

[0069] For example, the processing unit 18 may be adapted for receiving, e.g. as user input or via an external control interface, an indicative position of the target object 15, or an indicative position of the target object may be predetermined, e.g. stored as a configuration parameter in a memory accessible to the processing unit. Likewise, a spatial extent of the object may be received, or known as a predetermined parameter. In response to this information, the processing unit may determine a first position for the first unmanned vehicle 11 and a second position for the second unmanned vehicle 12, such that radiation emitted by the source mounted on the first unmanned vehicle impinges on the detector mounted on the second unmanned vehicle, and the target object 15 may be assumed to be positioned in between the first and the second unmanned vehicle, such that the penetrating radiation impinging on the detector 16 varies as a function of position on the imaging detector due to an interaction between the penetrating radiation and the structure of the target object.
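By way of illustration only, and not forming part of the claimed subject-matter, such a placement computation could use the indicative target position and spatial extent as follows. The function name and parameters are assumptions of the sketch; the coverage check uses the cone-beam magnification M = (SOD + ODD) / SOD, where SOD is the source-to-object and ODD the object-to-detector distance, so that the projected object width 2·r·M must fit on the detector:

```python
import math

def plan_poses(target, view_angle, sod, odd, object_radius, detector_width):
    """Place the source at distance `sod` before the target and the
    detector at distance `odd` behind it, along the viewing direction.
    Returns both positions plus a check that the magnified shadow of an
    object of radius `object_radius` fits on the detector."""
    ux, uy = math.cos(view_angle), math.sin(view_angle)
    source = (target[0] - sod * ux, target[1] - sod * uy)
    detector = (target[0] + odd * ux, target[1] + odd * uy)
    magnification = (sod + odd) / sod
    covered = 2 * object_radius * magnification <= detector_width
    return source, detector, covered

src, det, ok = plan_poses(target=(0.0, 0.0), view_angle=0.0,
                          sod=2.0, odd=0.5, object_radius=0.1,
                          detector_width=0.3)
print(src, det, ok)
```

Here the magnification is 1.25, so a 0.2 m wide object projects to 0.25 m and is covered by a 0.3 m detector.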

[0070] Alternatively, the first and/or second unmanned vehicle 11,12 may comprise a position detector for determining the position of the target object when present. For example, a laser range finder, a camera with an image recognition system, an echolocator or a radio detector may be provided on one or each of the vehicles 11,12 to detect the presence of the target object, and to transmit an indicative position of the object to the processing unit when the presence of this object is detected. Thus, the system may be adapted for finding the target object, e.g. as identified by recognition of its appearance in a camera image, the presence of a marker, or another suitable detection mechanism known in the art. The processing unit may then respond to such indicative position by controlling the movement of the first and second unmanned vehicle, e.g. similarly as described hereinabove where such information was predetermined or received as input from a user.

[0071] When the first and second unmanned vehicle have reached the target position, as controlled by the processing unit, an image may be acquired by the detector. For example, the processing unit may control the source to emit radiation and, simultaneously, the detector to acquire an image. This image may be transmitted by the detector to the processing unit.

[0072] As described hereinabove, the processing unit 18 is adapted for controlling the propelled motion of respectively the first and the second unmanned vehicle 11,12 such as to acquire at least two images using the image detector 16, e.g. a further image after acquiring the first image as described hereinabove. These at least two images correspond to at least two different projection directions of the penetrating radiation through the target object.

[0073] The processing unit may thus be adapted for sending control instructions to the first and/or second unmanned vehicle 11, 12, after acquiring the first image, to adjust the position of the vehicles to second target locations. As mentioned hereinabove, the processing unit 18 may have access to an indicative position of the target object 15, as predetermined, received from a user or inferred from sensor data received from the first and/or second unmanned vehicle 11,12. Taking this information and the first locations in which the first, respectively the second, unmanned vehicle were positioned to acquire the first image into account, the processing unit may determine a second position for the first unmanned vehicle 11 and a second position for the second unmanned vehicle 12, such that radiation emitted by the source mounted on the first unmanned vehicle impinges on the detector mounted on the second unmanned vehicle, and the target object 15 may be assumed to be positioned in between the first and the second unmanned vehicle, such that the penetrating radiation impinging on the detector 16 varies as a function of position on the imaging detector due to an interaction between the penetrating radiation and the structure of the target object. When the first and second unmanned vehicle have reached the second positions, as controlled by the processing unit, a second image may be acquired by the detector. For example, the processing unit may control the source to emit radiation and, simultaneously, the detector to acquire the second image, which may be transmitted by the detector to the processing unit. These second positions may be determined by the processing unit such as to differ sufficiently from the first positions, such that the first and second image, and optionally in further iterations any further image, correspond to at least two different projection directions of the penetrating radiation through the target object.

[0074] The processing unit 18 is further adapted for generating data representative of an internal structure of the target object from the at least two images. For example, the processing unit may perform a three-dimensional reconstruction, e.g. a tomographic reconstruction, of the target object 15. For example, the plurality of images may correspond to projection directions around the object that constitute a uniform angular sampling over an angle of at least 180°, e.g. an angle of 180° plus the fan-angle of the spatial configuration. However, embodiments of the present invention are not limited to such conventional computed tomography acquisitions. For example, the processing unit may perform a partial reconstruction when a first plurality of images has been obtained, and determine a next projection angle in order to maximize the expected information content of a next image to be acquired.
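The uniform angular sampling over 180° plus the fan-angle referred to above may, purely as an illustration, be generated as follows; the function and parameter names are assumptions for this sketch.

```python
def projection_angles(n_views, fan_angle_deg=0.0):
    """Return n_views projection angles (in degrees) uniformly
    sampling a short-scan span of 180 degrees plus the fan angle.
    Illustrative sketch; names are assumptions."""
    span = 180.0 + fan_angle_deg
    return [i * span / n_views for i in range(n_views)]
```

For instance, four views with a 20° fan angle sample the 200° span at 0°, 50°, 100° and 150°.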

[0075] The processing unit may perform an algebraic reconstruction method for reconstructing the internal structure of the object from a plurality of images, for example a discrete algebraic reconstruction method. A dynamic angle selection method may be applied to determine a sequence of projection angles for acquiring the at least two images. Furthermore, a variable distance approach may be applied. The determining of the positions of the first and second unmanned vehicle to define a projection for acquiring any of the images may also be subject to additional constraints. For example, physical constraints of the movement of the unmanned vehicles may be taken into account, e.g. constraints representative of obstructions, such as walls or other confinements of the range of motion. Such physical constraints may be received from at least one sensor integrated in the first and/or second unmanned vehicle, e.g. an echo-location sensor, camera system with a suitable image recognition system and/or a laser rangefinder. Alternatively or additionally, the processing unit may have access to a model of the physical space in which the unmanned vehicles operate, e.g. as received from a user input or pre-stored in an integrated memory or on removable data carrier. In embodiments of the present invention, the processing unit may implement a method as disclosed in "Towards in loco X-ray computed tomography," a doctoral thesis by Andrei Dabravolski, publicly available online via http://anet.uantwerpen.be/docman/irua/c3935c/11225.pdf, and from the Universiteit Antwerpen Library, Antwerp, Belgium (see library reference http://hdl.handle.net/10067/1305000151162165141).
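An algebraic reconstruction step of the kind referred to above may be sketched, under simplifying assumptions, as a basic Kaczmarz/ART iteration over the ray equations; this is a generic illustration, not the discrete algebraic reconstruction method or the dynamic angle selection method of the cited thesis.

```python
def art_reconstruct(A, p, n_iters=50, relax=1.0):
    """Minimal algebraic reconstruction (Kaczmarz/ART) sketch.

    A: projection matrix (each row gives the weights of one ray sum)
    p: measured projection values, one per row of A
    Iteratively projects the current estimate onto each ray equation.
    Pure-Python illustration on a tiny system, not the patented method.
    """
    n = len(A[0])
    x = [0.0] * n
    for _ in range(n_iters):
        for a_row, p_i in zip(A, p):
            dot = sum(a * xi for a, xi in zip(a_row, x))
            norm = sum(a * a for a in a_row)
            if norm == 0.0:
                continue
            c = relax * (p_i - dot) / norm   # residual, normalised
            x = [xi + c * a for xi, a in zip(x, a_row)]
    return x
```

On the consistent two-pixel system with ray sums x0 + x1 = 3 and x0 = 1, the iteration converges to x0 = 1, x1 = 2.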

[0076] Particular features of this disclosure that may be incorporated can be found in sections 2.2, 3.2 and 4.2, which may be implemented in isolation or in combination in embodiments of the present invention, embodiments of the present invention not necessarily being limited by the disclosure in these explicitly referenced sections alone.

[0077] The processing unit may also be adapted for taking small variations in positioning of the unmanned vehicles 11,12 into account. For example, while the processing unit may control the movement of the unmanned vehicles 11,12 to determined locations, the final position reached by the unmanned vehicles 11,12 may be inaccurate, due to, for example, measurement errors of positioning sensors. The processing unit may therefore apply an alignment method when generating the data representative of the internal structure of the target object, e.g. a tomographic reconstruction. Such alignment methods are known in the art, e.g. as disclosed by Folkert Bleichrodt et al. in "An alignment method for fan beam tomography," proceedings of the 1st International Conference on Tomography of Materials and Structures (ICTMS) 2013, p. 103-106. Particularly, a method as disclosed in section 2 of this paper may be implemented by the processing unit in accordance with embodiments of the present invention, embodiments of the present invention not necessarily being limited to features disclosed in this section alone. Furthermore, where the prior art disclosure assumes fixed and known positions of the detector and radiation source, while the object may be subject to small and unpredictable motion in between projection image acquisitions, the skilled person will understand that the disclosed approach is not necessarily limited thereto, and may, for example, equally apply to unpredictable variations in position of the source, the detector or the object, or any combination thereof.

[0078] However, the processing unit 18 may, additionally or alternatively, be adapted for generating the data representative of the internal structure of the target object from the at least two images using a model of the internal structure of the target object. For example, the at least two projection images may be compared to a simulated radiograph determined from such model to identify internal features of the target object, e.g. to identify a corresponding feature in each of the plurality of projection images.
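The comparison against a simulated radiograph may be illustrated by a trivial parallel-beam forward projection of a small voxel model, summing attenuation values along one axis; a practical simulation would trace rays through the model, so this is only a sketch under simplifying assumptions.

```python
def forward_project(volume, axis=0):
    """Simulate a parallel-beam radiograph of a tiny 2D attenuation
    model by summing values along the given axis (axis 0: row sums,
    axis 1: column sums). Illustrative only."""
    if axis == 0:
        return [sum(row) for row in volume]
    return [sum(col) for col in zip(*volume)]
```

Such simulated projections can then be compared against the acquired images to identify corresponding internal features of the target object.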

[0079] In a second aspect, the present invention also relates to a method for imaging a target object. This method comprises providing a first unmanned vehicle comprising a source of penetrating radiation, and providing a second unmanned vehicle comprising an image detector for registering a spatial distribution of the penetrating radiation as an image. The first and the second unmanned vehicle are autonomous vehicles adapted for independent propelled motion, and each of the first and the second unmanned vehicle comprises a positioning unit for detecting a position of the corresponding vehicle. The method further comprises controlling, using a processing unit, the propelled motion of the first unmanned vehicle to position the source such as to direct the penetrating radiation toward the target object. The method also comprises controlling, using the processing unit, the propelled motion of the second unmanned vehicle to position the image detector such as to register the spatial distribution of the penetrating radiation when transmitted through the target object.

[0080] The method further comprises acquiring at least two images using the image detector, wherein the propelled motion of the first unmanned vehicle and of the second unmanned vehicle is controlled such that the at least two images correspond to at least two different projection directions of the penetrating radiation through the target object. The method also comprises generating data representative of an internal structure of the target object from the at least two images, using the processing unit.

[0081] A method in accordance with embodiments of the present invention may relate to the operation of a system in accordance with embodiments of the present invention. Details of a method in accordance with embodiments of the second aspect of the present invention may therefore be clear from the description provided hereinabove relating to embodiments of the first aspect of the present invention.

[0082] Referring to FIG 3, an exemplary method 100 for imaging a target object in accordance with embodiments of the present invention is illustrated. The method 100 comprises providing 101 a first unmanned vehicle 11 that comprises a source 14 of penetrating radiation, and providing 102 a second unmanned vehicle 12 that comprises an image detector 16 for registering a spatial distribution of the penetrating radiation as an image. The first and the second unmanned vehicle are autonomous vehicles adapted for independent propelled motion, and each of the first and the second unmanned vehicle comprises a positioning unit 13 for detecting a position of the corresponding vehicle.

[0083] The method 100 further comprises controlling 103, using a processing unit 18, the propelled motion of the first unmanned vehicle 11 to position the source 14 such as to direct the penetrating radiation 17 toward the target object 15, and controlling 104, using this processing unit 18, the propelled motion of the second unmanned vehicle 12 to position the image detector such as to register the spatial distribution of the penetrating radiation when transmitted through the target object 15.

[0084] The steps of controlling 103 the first unmanned vehicle motion and controlling 104 the second unmanned vehicle motion may advantageously take positioning information provided by the positioning unit 13 into account, e.g. positioning information provided by the positioning unit of the first unmanned vehicle and/or provided by the positioning unit of the second unmanned vehicle.

[0085] The method 100 may for example comprise a step of determining 107 the position of the target object 15 by the positioning unit, e.g. by processing information provided by the positioning unit in the processing unit 18. This position of the target object may be determined relative to the first unmanned vehicle and/or the second unmanned vehicle, or may be determined in an absolute coordinate reference frame. However, in a method in accordance with embodiments of the present invention, the position of the target object may alternatively be known a priori, or provided as an input, e.g. by a user.
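Where the target position is determined relative to one of the vehicles, e.g. by a range finder, it may be converted to the absolute coordinate reference frame using the vehicle pose reported by its positioning unit. The following is a standard two-dimensional frame transform, given only as an illustration; the names are assumptions.

```python
import math

def to_absolute(vehicle_xy, heading_rad, rel_xy):
    """Convert a target position measured in a vehicle's local frame
    into the absolute frame, given the vehicle's absolute position
    and heading as reported by its positioning unit.
    Standard 2D rotation plus translation; illustrative sketch."""
    vx, vy = vehicle_xy
    rx, ry = rel_xy
    c, s = math.cos(heading_rad), math.sin(heading_rad)
    return (vx + c * rx - s * ry, vy + s * rx + c * ry)
```

For example, a target detected 2 m straight ahead of a vehicle at (1, 1) heading along the positive y-axis lies at (1, 3) in the absolute frame.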

[0086] Controlling 103,104 the propelled motion of the first unmanned vehicle and of the second unmanned vehicle may take the position of the target object 15 into account, e.g. regardless of whether this position information is determined from sensor data provided by the positioning unit 13, is predetermined or is provided as input to the method by other means.

[0087] The method also comprises a step of acquiring 105 at least two images using the image detector 16, wherein the propelled motion of the first unmanned vehicle 11 and of the second unmanned vehicle 12 is controlled 103,104 such that the at least two images correspond to at least two different projection directions of the penetrating radiation through the target object 15.

[0088] For example, the propelled motion of the first unmanned vehicle and/or of the second unmanned vehicle may be controlled 103,104 such as to acquire the at least two images corresponding to the at least two different projection directions, in which the at least two different projection directions are determined by the processing unit 18 as a uniform angular sampling around the target object 15.

[0089] The method further comprises generating 106 data representative of an internal structure of the target object from the at least two images, using the processing unit 18. For example, generating 106 the data representative of the internal structure of the target object 15 may comprise performing a tomographic reconstruction. Such tomographic reconstruction may comprise a partial reconstruction that takes a first plurality of images of the at least two images into account. Furthermore, the method may comprise determining a further projection direction for acquiring a further image of the target object by taking the partial reconstruction into account.

[0090] The tomographic reconstruction may be performed by using a discrete algebraic reconstruction method.

[0091] The tomographic reconstruction may comprise applying a compensation method for taking unpredictable variations in positioning of the first unmanned vehicle 11, the second unmanned vehicle 12 and/or the target object 15 into account.

[0092] In a third aspect, the present invention also relates to a computer program product for, if executed by a processing unit, e.g. a processing unit in a system in accordance with embodiments of the first aspect of the present invention, performing steps of a method in accordance with embodiments of the second aspect of the present invention. These performed steps comprise at least the steps of controlling 103 a propelled motion of a first unmanned vehicle 11, controlling 104 a propelled motion of a second unmanned vehicle 12 and generating 106 data representative of an internal structure of a target object from at least two acquired images.


Claims

1. A system (10) for imaging a target object (15), said system comprising:

- a first unmanned vehicle (11) comprising a source (14) of penetrating radiation, said first unmanned vehicle (11) being adapted for positioning said source (14) such as to direct said penetrating radiation (17) toward said target object (15),

- a second unmanned vehicle (12) comprising a detector,

wherein the first and the second unmanned vehicle are autonomous vehicles adapted for independent propelled motion,
wherein each of the first and the second unmanned vehicle comprises a positioning unit (13) for detecting a position of the corresponding vehicle,
the system further comprising a processing unit (18) for controlling said propelled motion of the first and the second unmanned vehicle,
characterized in that
the source of penetrating radiation comprises an X-ray tube for producing X-ray radiation and/or a gamma radiation source,
said detector is a detector (16) adapted for registering a spatial distribution of intensities of said penetrating radiation after being transmitted through the target object (15), the spatial distribution being referred to as an image,
and the processing unit (18) being adapted for controlling said propelled motion of the first and the second unmanned vehicle such as to acquire at least two images using said detector (16) corresponding to at least two different projection directions of said penetrating radiation through the target object (15), said processing unit (18) being further adapted for generating data representative of an internal structure of said target object from said at least two images.
 
2. The system according to claim 1, wherein the first unmanned vehicle (11) and/or the second unmanned vehicle (12) comprises a propulsion system for providing said propelled motion and a power source for sustaining said propelled motion.
 
3. The system according to claim 1 or claim 2, wherein the first unmanned vehicle (11) and/or the second unmanned vehicle (12) is a ground vehicle, an aerial vehicle, a spacecraft, a watercraft and/or a submersible vehicle.
 
4. The system according to any of the previous claims, wherein said positioning unit (13) comprises an indoor positioning system and/or an outdoor positioning system, is adapted to use laser positioning and/or echo positioning to measure a distance with respect to objects in its vicinity and/or is adapted for determining a relative position of said unmanned vehicle in which it is comprised with respect to the target object (15) and/or the other unmanned vehicle.
 
5. The system according to any of the previous claims, in which the first unmanned vehicle (11) comprises a pivotable support for allowing a rotation of the source (14) and/or wherein the second unmanned vehicle (12) comprises a pivotable support for allowing a rotation of said detector (16).
 
6. The system according to any of the previous claims, wherein said processing unit (18) is, fully or in part, integrated in the first unmanned vehicle (11), the second unmanned vehicle (12) and/or a separate base station and/or wherein said processing unit (18) is adapted for receiving position information from the first unmanned vehicle (11) and/or from the second unmanned vehicle (12).
 
7. The system according to any of the previous claims, wherein said positioning unit (13) is adapted for determining the position of the target object (15) when present, said first and/or second unmanned vehicle (11,12) being adapted for transmitting an indicative position of the target object (15) to the processing unit (18) when the presence of said target object is detected by said positioning unit (13).
 
8. The system according to any of the previous claims, wherein said at least two different projection directions are determined by the processing unit (18) as a uniform angular sampling around said target object (15).
 
9. The system according to any of the previous claims, in which said processing unit (18) is adapted for generating said data representative of said internal structure of said target object (15) by performing a tomographic reconstruction.
 
10. The system according to claim 9, in which said tomographic reconstruction comprises a partial reconstruction taking a first plurality of images into account, said processing unit (18) being furthermore adapted for determining a further projection direction for acquiring a further image of the target object by taking said partial reconstruction into account and/or wherein said tomographic reconstruction is performed by said processing unit using a discrete algebraic reconstruction method and/or wherein said tomographic reconstruction comprises a compensation method for taking unpredictable variations in positioning of the first unmanned vehicle (11), the second unmanned vehicle (12) and/or the target object (15) into account.
 
11. A method (100) for imaging a target object, said method comprising:

- providing (101) a first unmanned vehicle (11) comprising a source (14) of penetrating radiation, and providing (102) a second unmanned vehicle (12) comprising a detector (16), wherein the first and the second unmanned vehicle are autonomous vehicles adapted for independent propelled motion, and wherein each of the first and the second unmanned vehicle comprises a positioning unit (13) for detecting a position of the corresponding vehicle,

- controlling (103), using a processing unit (18), said propelled motion of the first unmanned vehicle (11) to position said source (14) such as to direct said penetrating radiation (17) toward said target object (15),

- controlling (104), using said processing unit (18), said propelled motion of the second unmanned vehicle (12) to position said detector for registering a spatial distribution of said penetrating radiation when transmitted through said target object (15),

characterized in that

- said penetrating radiation is X-ray radiation or gamma radiation,

- said detector is a detector (16) adapted for registering a spatial distribution of intensities of said penetrating radiation after being transmitted through the target object (15), the spatial distribution being referred to as an image,
and the method comprises
acquiring (105) at least two images using said detector (16), wherein said propelled motion of the first unmanned vehicle (11) and of the second unmanned vehicle (12) is controlled (103,104) such that said at least two images correspond to at least two different projection directions of said penetrating radiation through the target object (15), and

- generating (106) data representative of an internal structure of said target object from said at least two images, using said processing unit (18).


 
12. The method according to claim 11, wherein said controlling (103,104) of the propelled motion of the first and second unmanned vehicle takes positioning information provided by said positioning unit (13) into account, and/or wherein the method further comprises determining the position of said target object (15) by said positioning unit, and wherein said controlling (103,104) of the propelled motion of the first unmanned vehicle and of the second unmanned vehicle takes said position of said target object (15) into account, and/or wherein the at least two different projection directions are determined by the processing unit (18) as a uniform angular sampling around said target object (15).
 
13. The method according to any of the claims 11 to 12, in which said generating (106) of said data representative of said internal structure of said target object (15) comprises a tomographic reconstruction.
 
14. The method according to claim 13, in which said tomographic reconstruction comprises a partial reconstruction taking a first plurality of images into account, said method further comprising determining a further projection direction for acquiring a further image of the target object by taking said partial reconstruction into account, and/or wherein said tomographic reconstruction is performed by said processing unit using a discrete algebraic reconstruction method and/or wherein said tomographic reconstruction comprises a compensation method for taking unpredictable variations in positioning of the first unmanned vehicle (11), the second unmanned vehicle (12) and/or the target object (15) into account.
 
15. A computer program product comprising instructions to cause the system of any of claims 1 to 10 to perform steps of a method in accordance with any of claims 11 to 14.
 


Ansprüche

1. System (10) zur Bildgebung eines Zielobjekts (15), wobei das System umfasst:

- ein erstes unbemanntes Fahrzeug (11), umfassend eine Quelle (14) durchdringender Strahlung, wobei das erste unbemannte Fahrzeug (11) zum Positionieren der Quelle (14) ausgelegt ist, um die durchdringende Strahlung (17) in Richtung des Zielobjekts (15) zu leiten,

- ein zweites unbemanntes Fahrzeug (12), umfassend einen Detektor,

wobei das erste und das zweite unbemannte Fahrzeug autonome Fahrzeuge sind, ausgelegt zur unabhängigen angetriebenen Bewegung,
wobei jedes des ersten und zweiten unbemannten Fahrzeugs eine Positionierungseinheit (13) zum Detektieren einer Position des entsprechenden Fahrzeugs umfasst,
wobei das System weiter eine Verarbeitungseinheit (18) zum Steuern der angetriebenen Bewegung des ersten und zweiten unbemannten Fahrzeugs umfasst,
dadurch gekennzeichnet, dass
die Quelle durchdringender Strahlung eine Röntgenröhre zum Produzieren von Röntgenstrahlung und/oder eine Gammastrahlungsquelle umfasst,
wobei der Detektor ein zum Registrieren einer räumlichen Verteilung von Intensitäten der durchdringenden Strahlung, nachdem sie durch das Zielobjekt (15) hindurchgelassen wurde, ausgelegter Detektor (16) ist, wobei die räumliche Verteilung als ein Bild bezeichnet wird,
und die Verarbeitungseinheit (18) ausgelegt ist zum Steuern der angetriebenen Bewegung des ersten und zweiten unbemannten Fahrzeugs, um mindestens zwei Bilder unter Verwendung des Detektors (16) zu erhalten, die mindestens zwei unterschiedlichen Richtungen der durchdringenden Strahlung durch das Zielobjekt (15) entsprechen, wobei die Verarbeitungseinheit (18) weiter ausgelegt ist zum Erzeugen von Daten aus den mindestens zwei Bildern, die für eine interne Struktur des Zielobjekts repräsentativ sind.
 
2. System nach Anspruch 1, wobei das erste unbemannte Fahrzeug (11) und/oder das zweite unbemannte Fahrzeug (12) ein Antriebssystem zum Bereitstellen der angetriebenen Bewegung und eine Energiequelle zum Aufrechterhalten der angetriebenen Bewegung umfasst.
 
3. System nach Anspruch 1 oder Anspruch 2, wobei das erste unbemannte Fahrzeug (11) und/oder das zweite unbemannte Fahrzeug (12) ein Bodenfahrzeug, ein Luftfahrzeug, ein Raumfahrzeug, ein Wasserfahrzeug und/oder ein Unterwasserfahrzeug ist.
 
4. System nach einem der vorstehenden Ansprüche, wobei die Positionierungseinheit (13) ein Innenraum-Positionierungssystem und/oder ein Outdoor-Positionierungssystem umfasst, ausgelegt ist, um Laserpositionierung und/oder Schallpositionierung zu verwenden, um einen Abstand bezüglich Objekten in seiner Nähe zu messen, und/oder ausgelegt ist zum Bestimmen einer relativen Position des unbemannten Fahrzeugs, in dem es bezüglich des Zielobjekts (15) und/oder des anderen unbemannten Fahrzeugs umfasst ist.
 
5. System nach einem der vorstehenden Ansprüche, in dem das erste unbemannte Fahrzeug (11) einen schwenkbaren Träger zum Ermöglichen einer Rotation der Quelle (14) umfasst, und/oder wobei das zweite unbemannte Fahrzeug (12) einen schwenkbaren Träger zum Ermöglichen einer Rotation des Detektors (16) umfasst.
 
6. System nach einem der vorstehenden Ansprüche, wobei die Verarbeitungseinheit (18), ganz oder zum Teil, in dem ersten unbemannten Fahrzeug (11), dem zweiten unbemannten Fahrzeug (12) und/oder einer separaten Basisstation integriert ist und/oder wobei die Verarbeitungseinheit (18) ausgelegt ist zum Empfangen von Positionsinformationen von dem ersten unbemannten Fahrzeug (11) und/oder von dem zweiten unbemannten Fahrzeug (12).
 
7. System nach einem der vorstehenden Ansprüche, wobei die Positionierungseinheit (13) ausgelegt ist zum Bestimmen der Position des Zielobjekts (15), falls vorhanden, wobei das erste und/oder zweite unbemannte Fahrzeug (11,12) zum Übertragen einer indikativen Position des Zielobjekts (15) an die Verarbeitungseinheit (18) ausgelegt sind, wenn das Vorhandensein des Zielobjekts durch die Positionierungseinheit (13) detektiert wird.
 
8. System nach einem der vorstehenden Ansprüche, wobei die mindestens zwei unterschiedlichen Projektionsrichtungen durch die Verarbeitungseinheit (18) als eine gleichförmige Winkelabtastung um das Zielobjekt (15) herum bestimmt werden.
 
9. System nach einem der vorstehenden Ansprüche, in dem die Verarbeitungseinheit (18) zum Erzeugen der Daten, die für die interne Struktur des Zielobjekts (15) repräsentativ sind, durch Durchführen einer tomographischen Rekonstruktion ausgelegt ist.
 
10. System nach Anspruch 9, in dem die tomographische Rekonstruktion eine teilweise Rekonstruktion umfasst, die eine erste Vielzahl von Bildern einbezieht, wobei die Verarbeitungseinheit (18) außerdem zum Bestimmen einer weiteren Projektionsrichtung zum Erhalten eines weiteren Bilds des Zielobjekts durch Einbeziehen der teilweisen Rekonstruktion ausgelegt ist, und/oder wobei die tomographische Rekonstruktion durch die Verarbeitungseinheit unter Verwendung eines diskreten algebraischen Rekonstruktionsverfahrens durchgeführt wird und/oder wobei die tomographische Rekonstruktion ein Kompensationsverfahren zum Einbeziehen unvorhersehbarer Variationen beim Positionieren des ersten unbemannten Fahrzeugs (11), des zweiten unbemannten Fahrzeugs (12) und/oder des Zielobjekts (15) umfasst.
 
11. Verfahren (100) zur Bildgebung eines Zielobjekts, wobei das Verfahren umfasst:

- Bereitstellen (101) eines ersten unbemannten Fahrzeugs (11), umfassend eine Quelle (14) durchdringender Strahlung, und Bereitstellen (102) eines zweiten unbemannten Fahrzeugs (12), umfassend einen Detektor (16), wobei das erste und das zweite unbemannte Fahrzeug autonome Fahrzeuge sind, ausgelegt zur unabhängigen angetriebenen Bewegung, und wobei jedes des ersten und zweiten unbemannten Fahrzeugs eine Positionierungseinheit (13) zum Detektieren einer Position des entsprechenden Fahrzeugs umfasst,

- Steuern (103), unter Verwendung einer Verarbeitungseinheit (18), der angetriebenen Bewegung des ersten unbemannten Fahrzeugs (11), um die Quelle (14) zu positionieren, um die durchdringende Strahlung (17) in Richtung des Zielobjekts (15) zu leiten,

- Steuern (104), unter Verwendung einer Verarbeitungseinheit (18), der angetriebenen Bewegung des zweiten unbemannten Fahrzeugs (12), um den Detektor zum Registrieren räumlicher Verteilung der durchdringenden Strahlung zu positionieren, wenn sie durch das Zielobjekt (15) hindurchgelassen wurde, dadurch gekennzeichnet, dass

- die durchdringende Strahlung Röntgenstrahlung oder Gammastrahlung ist,

- der Detektor ein zum Registrieren einer räumlichen Verteilung von Intensitäten der durchdringenden Strahlung, nachdem sie durch das Zielobjekt (15) hindurchgelassen wurde, ausgelegter Detektor (16) ist, wobei die räumliche Verteilung als ein Bild bezeichnet wird und das Verfahren umfasst
Erhalten (105) von mindestens zwei Bildern unter Verwendung des Detektors (16), wobei die angetriebene Bewegung des ersten unbemannten Fahrzeugs (11) und des zweiten unbemannten Fahrzeugs (12) so gesteuert (103,104) wird, dass die mindestens zwei Bilder mindestens zwei unterschiedlichen Projektionsrichtungen der durchdringenden Strahlung durch das Zielobjekt (15) entsprechen, und

- Erzeugen (106) von Daten aus den mindestens zwei Bildern unter Verwendung der Verarbeitungseinheit (18), die für eine interne Struktur des Zielobjekts repräsentativ sind.


 
12. Verfahren nach Anspruch 11, wobei das Steuern (103,104) der angetriebenen Bewegung des ersten und zweiten unbemannten Fahrzeugs durch die Positionierungseinheit (13) bereitgestellte Positionierungsinformationen einbezieht und/oder wobei das Verfahren weiter das Bestimmen der Position des Zielobjekts (15) durch die Positionierungseinheit umfasst, und wobei das Steuern (103,104) der angetriebenen Bewegung des ersten unbemannten Fahrzeugs und des zweiten unbemannten Fahrzeugs die Position des Zielobjekts (15) einbezieht und/oder wobei die mindestens zwei unterschiedlichen Projektionsrichtungen durch die Verarbeitungseinheit (18) als eine gleichförmige Winkelabtastung um das Zielobjekt (15) herum bestimmt werden.
 
13. Verfahren nach einem der Ansprüche 11 bis 12, in dem das Erzeugen (106) der Daten, die für die interne Struktur des Zielobjekts (15) repräsentativ sind, eine tomographische Rekonstruktion umfasst.
 
14. Verfahren nach Anspruch 13, in dem die tomographische Rekonstruktion eine teilweise Rekonstruktion umfasst, die eine erste Vielzahl von Bildern einbezieht, wobei das Verfahren weiter das Bestimmen einer weiteren Projektionsrichtung zum Erhalten eines weiteren Bilds des Zielobjekts durch Einbeziehen der teilweisen Rekonstruktion umfasst, und/oder wobei die tomographische Rekonstruktion durch die Verarbeitungseinheit unter Verwendung eines diskreten algebraischen Rekonstruktionsverfahrens durchgeführt wird und/oder wobei die tomographische Rekonstruktion ein Kompensationsverfahren zum Einbeziehen unvorhersehbarer Variationen beim Positionieren des ersten unbemannten Fahrzeugs (11), des zweiten unbemannten Fahrzeugs (12) und/oder des Zielobjekts (15) umfasst.
 
15. Computer program product comprising instructions for causing the system according to any one of claims 1 to 10 to carry out the steps of a method according to any one of claims 11 to 14.
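Claim 14 names a discrete algebraic reconstruction method as one option for the tomographic step. As a minimal, hypothetical sketch (not the patented method), the following iterates a SIRT-style algebraic update, the continuous core on which discrete algebraic techniques such as DART build, on a toy two-view ray geometry; the grid size, ray layout, and relaxation factor are illustrative assumptions only.

```python
import numpy as np

def projection_matrix(n):
    """System matrix for horizontal and vertical ray sums of an n x n grid
    (an idealised stand-in for two drone-acquired projection directions)."""
    idx = np.arange(n * n).reshape(n, n)
    rows = []
    for i in range(n):                 # horizontal rays: one row sum each
        r = np.zeros(n * n)
        r[idx[i, :]] = 1.0
        rows.append(r)
    for j in range(n):                 # vertical rays: one column sum each
        r = np.zeros(n * n)
        r[idx[:, j]] = 1.0
        rows.append(r)
    return np.array(rows)

def sirt(A, b, iterations=200, relax=0.1):
    """Simultaneous iterative reconstruction: x <- x + relax * A^T (b - A x)."""
    x = np.zeros(A.shape[1])
    for _ in range(iterations):
        x += relax * (A.T @ (b - A @ x))
    return x

n = 4
phantom = np.zeros((n, n))
phantom[1:3, 1:3] = 1.0                # simple square inclusion
A = projection_matrix(n)
b = A @ phantom.ravel()                # simulated projection measurements
recon = sirt(A, b).reshape(n, n)       # projections of recon match b
```

With only two projection directions the solution is not unique; the iteration converges to an image whose projections reproduce the measurements, which is why the claims require at least two and favour uniform angular sampling over many directions.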
 


Claims

1. System (10) for imaging a target object (15), the system comprising:

- a first unmanned vehicle (11) comprising a source (14) of penetrating radiation, said first unmanned vehicle (11) being adapted for positioning said source (14) so as to direct said penetrating radiation (17) toward said target object (15),

- a second unmanned vehicle (12) comprising a detector,

wherein the first and the second unmanned vehicle are autonomous vehicles adapted for independent propelled movement,
wherein each of the first and the second unmanned vehicle comprises a positioning unit (13) for detecting a position of the corresponding vehicle,
the system further comprising a processing unit (18) for controlling said propelled movement of the first and the second unmanned vehicle,
characterized in that
the source of penetrating radiation comprises an X-ray tube for producing X-ray radiation and/or a gamma radiation source,
said detector is a detector (16) adapted for registering a spatial distribution of intensities of said penetrating radiation after its transmission through the target object (15), the spatial distribution being referred to as an image,
and the processing unit (18) is adapted for controlling said propelled movement of the first and the second unmanned vehicle so as to acquire, using said detector (16), at least two images corresponding to at least two different projection directions of said penetrating radiation through the target object (15), said processing unit (18) further being adapted for generating data representative of an internal structure of said target object from said at least two images.
 
2. System according to claim 1, in which the first unmanned vehicle (11) and/or the second unmanned vehicle (12) comprises a propulsion system for providing said propelled movement and an energy source for sustaining said propelled movement.
 
3. System according to claim 1 or claim 2, in which the first unmanned vehicle (11) and/or the second unmanned vehicle (12) is a land vehicle, an aerial vehicle, a spacecraft, a watercraft and/or a submersible vehicle.
 
4. System according to any one of the preceding claims, in which said positioning unit (13) comprises an indoor positioning system and/or an outdoor positioning system, is adapted for using laser positioning and/or echo positioning for measuring a distance to nearby objects, and/or is adapted for determining a relative position of the unmanned vehicle in which it is comprised with respect to the target object (15) and/or the other unmanned vehicle.
 
5. System according to any one of the preceding claims, in which the first unmanned vehicle (11) comprises a pivotable mount for enabling a rotation of the source (14) and/or in which the second unmanned vehicle (12) comprises a pivotable mount for enabling a rotation of said detector (16).
 
6. System according to any one of the preceding claims, in which said processing unit (18) is, fully or partially, integrated in the first unmanned vehicle (11), the second unmanned vehicle (12) and/or a separate base station, and/or in which said processing unit (18) is adapted for receiving position information from the first unmanned vehicle (11) and/or the second unmanned vehicle (12).
 
7. System according to any one of the preceding claims, in which said positioning unit (13) is adapted for determining the position of the target object (15) when present, said first and/or second unmanned vehicle (11, 12) being adapted for transmitting a position indicative of the target object (15) to the processing unit (18) when the presence of said target object is detected by said positioning unit (13).
 
8. System according to any one of the preceding claims, in which said at least two different projection directions are determined by the processing unit (18) as a uniform angular sampling around said target object (15).
 
9. System according to any one of the preceding claims, in which said processing unit (18) is adapted for generating said data representative of said internal structure of said target object (15) by performing a tomographic reconstruction.
 
10. System according to claim 9, in which said tomographic reconstruction comprises a partial reconstruction taking a first plurality of images into account, said processing unit (18) further being adapted for determining a further projection direction for acquiring a further image of the target object by taking said partial reconstruction into account, and/or in which said tomographic reconstruction is performed by said processing unit using a discrete algebraic reconstruction method, and/or in which said tomographic reconstruction comprises a compensation method for taking unpredictable variations in the positioning of the first unmanned vehicle (11), the second unmanned vehicle (12) and/or the target object (15) into account.
 
11. Method (100) for imaging a target object, the method comprising:

- providing (101) a first unmanned vehicle (11) comprising a source (14) of penetrating radiation, and providing (102) a second unmanned vehicle (12) comprising a detector (16), wherein the first and the second unmanned vehicle are autonomous vehicles adapted for independent propelled movement, and wherein each of the first and the second unmanned vehicle comprises a positioning unit (13) for detecting a position of the corresponding vehicle,

- controlling (103), using a processing unit (18), said propelled movement of the first unmanned vehicle (11) to position said source (14) so as to direct said penetrating radiation (17) toward said target object (15),

- controlling (104), using said processing unit (18), said propelled movement of the second unmanned vehicle (12) to position said detector for registering a spatial distribution of said penetrating radiation when transmitted through said target object (15), characterized in that

- said penetrating radiation is X-ray radiation or gamma radiation,

- said detector is a detector (16) adapted for registering a spatial distribution of intensities of said penetrating radiation after its transmission through the target object (15), the spatial distribution being referred to as an image,
and the method comprises
acquiring (105) at least two images using said detector (16), wherein said propelled movement of the first unmanned vehicle (11) and the second unmanned vehicle (12) is controlled (103, 104) such that said at least two images correspond to at least two different projection directions of said penetrating radiation through the target object (15), and

- generating (106), using said processing unit (18), data representative of an internal structure of said target object from said at least two images.


 
12. Method according to claim 11, in which said controlling (103, 104) of the propelled movement of the first and the second unmanned vehicle takes positioning information provided by said positioning unit (13) into account, and/or in which the method further comprises determining the position of said target object (15) by said positioning unit, and in which said controlling (103, 104) of the propelled movement of the first unmanned vehicle and the second unmanned vehicle takes said position of said target object (15) into account, and/or in which the at least two different projection directions are determined by the processing unit (18) as a uniform angular sampling around said target object (15).
 
13. Method according to any one of claims 11 to 12, in which said generating (106) of said data representative of said internal structure of said target object (15) comprises a tomographic reconstruction.
 
14. Method according to claim 13, in which said tomographic reconstruction comprises a partial reconstruction taking a first plurality of images into account, the method further comprising determining a further projection direction for acquiring a further image of the target object by taking said partial reconstruction into account, and/or in which said tomographic reconstruction is performed by said processing unit using a discrete algebraic reconstruction method, and/or in which said tomographic reconstruction comprises a compensation method for taking unpredictable variations in the positioning of the first unmanned vehicle (11), the second unmanned vehicle (12) and/or the target object (15) into account.
 
15. Computer program product comprising instructions for causing the system according to any one of claims 1 to 10 to carry out the steps of a method according to any one of claims 11 to 14.
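Claims 8 and 12 determine the projection directions as a uniform angular sampling around the target. A hedged sketch, assuming a planar geometry with the source vehicle and detector vehicle flying on circular paths of fixed radii around the target, so that each source/detector pair lies on a line through the target; the function name, radii, and coordinates are illustrative, not taken from the specification:

```python
import math

def acquisition_poses(target, source_radius, detector_radius, n_projections):
    """Return (source_xy, detector_xy) pairs at a uniform angular step;
    the detector sits diametrically opposite the source so the beam
    passes through the target for every projection direction."""
    poses = []
    for k in range(n_projections):
        theta = 2.0 * math.pi * k / n_projections   # uniform angular sampling
        source = (target[0] + source_radius * math.cos(theta),
                  target[1] + source_radius * math.sin(theta))
        detector = (target[0] - detector_radius * math.cos(theta),
                    target[1] - detector_radius * math.sin(theta))
        poses.append((source, detector))
    return poses

# Example: eight projection directions around a target at the origin.
poses = acquisition_poses(target=(0.0, 0.0), source_radius=5.0,
                          detector_radius=3.0, n_projections=8)
```

In practice the positioning units (13) would feed measured, rather than commanded, poses back to the processing unit (18), which is what the compensation option of claim 14 addresses.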
 




Drawing