(11) EP 2 880 594 B1

(12) EUROPEAN PATENT SPECIFICATION

(45) Mention of the grant of the patent:
04.11.2020 Bulletin 2020/45

(21) Application number: 13822898.6

(22) Date of filing: 26.07.2013

(51) International Patent Classification (IPC):
G06T 7/11 (2017.01)
G06T 7/174 (2017.01)
G06T 7/155 (2017.01)
G06K 9/36 (2006.01)
(86) International application number:
PCT/US2013/052286

(87) International publication number:
WO 2014/018865 (30.01.2014 Gazette 2014/05)

(54) SYSTEMS AND METHODS FOR PERFORMING SEGMENTATION AND VISUALIZATION OF MULTIVARIATE MEDICAL IMAGES

SYSTEME UND VERFAHREN ZUR SEGMENTIERUNG UND VISUALISIERUNG MULTIVARIATER MEDIZINISCHER BILDER

SYSTÈMES ET PROCÉDÉS DE MISE EN ŒUVRE DE SEGMENTATION ET DE VISUALISATION D'IMAGES MÉDICALES À PLUSIEURS VARIABLES


(84) Designated Contracting States:
AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

(30) Priority: 26.07.2012 US 201213558784

(43) Date of publication of application:
10.06.2015 Bulletin 2015/24

(73) Proprietor: General Electric Company
Schenectady, NY 12345 (US)

(72) Inventors:
  • BLASKOVICS, Tamas
    H-6722 Szeged (HU)
  • RUSKO, Laszlo
    H-6722 Szeged (HU)
  • FIDRICH, Marta
    H-6722 Szeged (HU)

(74) Representative: Fennell, Gareth Charles et al
Kilburn & Strode LLP
Lacon London
84 Theobalds Road
London WC1X 8NL (GB)


(56) References cited:
WO-A1-2008/065594
US-A1- 2008 317 317
US-A1- 2003 053 668
US-A1- 2011 158 491
  
  • Wei-Hung Cheng: "MRI-Based Attenuation Correction for PET/MRI and Medical Image Registration Methods on Graphics Processing Units", 1 September 2009 (2009-09-01), pages 1-32, XP055284425, Retrieved from the Internet: URL:http://www.cs.kent.edu/~wcheng/Dissertation/Dissertation%20Proposal.doc [retrieved on 2016-06-29]
  • Darko Zikic: "MR based Attenuation Correction for PET With Application in Small Animal PET Imaging", 29 January 2005 (2005-01-29), pages 1-20, XP055284442, Retrieved from the Internet: URL:http://campar.in.tum.de/twiki/pub/Students/IdpDarko/map_presentation.pdf [retrieved on 2016-06-29]
  • RUSKO L ET AL: "Automatic segmentation of the liver from multi- and single-phase contrast-enhanced CT images", MEDICAL IMAGE ANALYSIS, OXFORD UNIVERSITY PRESS, OXFORD, GB, vol. 13, no. 6, 1 December 2009 (2009-12-01), pages 871-882, XP026718620, ISSN: 1361-8415, DOI: 10.1016/J.MEDIA.2009.07.009 [retrieved on 2009-07-23]
  • JAGER F ET AL: "Nonrigid Registration of Joint Histograms for Intensity Standardization in Magnetic Resonance Imaging", IEEE TRANSACTIONS ON MEDICAL IMAGING, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 28, no. 1, 1 January 2009 (2009-01-01), pages 137-150, XP011233211, ISSN: 0278-0062, DOI: 10.1109/TMI.2008.2004429
  • HONG ET AL.: "Integrated registration and visualization of MR and PET brain images", MEDICAL IMAGING 2004: VISUALIZATION, IMAGE-GUIDED PROCEDURES, AND DISPLAY, PROCEEDINGS OF SPIE, vol. 5367, 2004, SPIE, BELLINGHAM, WA, XP055255648, Retrieved from the Internet: URL:http://proceedings.spiedigitallibrary.org/proceeding.aspx?articleid=840921 [retrieved on 2014-01-24]
  • T.M. DESERNO: "Biomedical Image Processing, Biological and Medical Physics", BIOMEDICAL ENGINEERING, 2011, BERLIN HEIDELBERG, XP008179305, Retrieved from the Internet: URL:http://link.springer.com/chapter/10.1007/978-3-642-15816-2-1#page-1 [retrieved on 2014-01-24]
  • SCULLY: "3D Segmentation In The Clinic: A Grand Challenge II at MICCAI 2008 - MS Lesion Segmentation", 14 July 2008, page 2, XP055255663, Retrieved from the Internet: URL:http://grand-challenge2008.bigr.nl/proceedings/pdfs/msls08/282_Scully.pdf [retrieved on 2014-01-24]
  • PLUIM ET AL.: "Mutual information based registration of medical images: a survey", IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 22, no. 8, August 2003, XP011099100, Retrieved from the Internet: URL:http://www.cs.jhu.edu/~cis/cista/746/papers/mutual info survey.pdf [retrieved on 2014-01-24]
  
Note: Within nine months from the publication of the mention of the grant of the European patent, any person may give notice to the European Patent Office of opposition to the European patent granted. Notice of opposition shall be filed in a written reasoned statement. It shall not be deemed to have been filed until the opposition fee has been paid. (Art. 99(1) European Patent Convention).


Description


[0001] The subject matter described herein relates generally to imaging systems, and more particularly, to systems and methods for performing image segmentation and visualization of multivariate medical images.

[0002] Imaging systems are widely used to generate images of various anatomical features or objects of interest. For example, in an oncology examination, a patient may go through a series of examinations using, for example, a computed tomography (CT) system, a positron emission tomography (PET) system, an ultrasound system, an x-ray system, a magnetic resonance (MR) system, a single photon emission computed tomography (SPECT) system, and/or other imaging systems. The series of examinations is performed to continuously monitor the patient's response to treatment. The images acquired during the examination may be displayed or saved to enable a physician to perform a diagnosis of the patient. Thus, the patient may be scanned with one or more imaging systems selected to provide the most relevant images needed by the physician to perform the medical diagnosis.

[0003] See for example:

Wei-Hung Cheng: "MRI-Based Attenuation Correction for PET/MRI and Medical Image Registration Methods on Graphics Processing Units", 1 September 2009 (2009-09-01), pages 1-32, XP055284425; and

RUSKO L ET AL: "Automatic segmentation of the liver from multi- and single-phase contrast-enhanced CT images", MEDICAL IMAGE ANALYSIS, OXFORD UNIVERSITY PRESS, OXFORD, GB, vol. 13, no. 6, 1 December 2009 (2009-12-01), pages 871-882, XP026718620.



[0004] In operation, the images may be sequentially processed to solve a clinical problem, such as, for example, patient screening, diagnosis, or monitoring. For example, the MR system may be configured to acquire T1-weighted images and T2-weighted images to enable the physician to assess pathologic tissues such as inflammations and tumors. Moreover, CT images may be utilized to enable the physician to visualize the anatomy, e.g. vessels, bones, etc. Additionally, dual-energy CT (DECT) images may be utilized to visualize different materials, such as, for example, iodine, water, or calcium. However, typical segmentation algorithms are configured to be utilized with images acquired from a single imaging system. For example, one algorithm may be utilized to segment CT images, while a different algorithm may be utilized to segment PET images. As a result, the information provided to the physician to determine a diagnosis of the patient is typically presented as separate images acquired by different imaging systems. Thus, the physician is not presented with joint information, which may provide additional information that is relevant to the diagnosis.

SUMMARY OF THE INVENTION



[0005] In one aspect, there is provided a method as defined in claim 1. In another aspect, there is provided a system as defined in claim 3. In a further aspect, there is provided a computer-readable medium as defined in claim 4. The scope of the present invention is defined by the appended claims and only the embodiments of the disclosure described herein that are consistent with the claims fall within their scope.

BRIEF DESCRIPTION OF THE DRAWINGS



[0006] 

Figure 1 is a simplified block diagram of a computed tomography (CT) imaging system that is formed in accordance with various embodiments.

Figure 2 is a flowchart of a method for automatically generating a fusion image in accordance with various embodiments.

Figure 3 is an image that may be generated in accordance with various embodiments.

Figure 4 is an image that may be automatically selected in accordance with various embodiments.

Figure 5 is a histogram that may be generated in accordance with various embodiments.

Figure 6 is another histogram that may be generated in accordance with various embodiments.

Figure 7 is a joint histogram that may be generated using the histograms shown in Figures 5 and 6 in accordance with various embodiments.

Figure 8 is a flowchart of a method for visualizing an image in accordance with various embodiments.

Figure 9 is a color image that may be generated in accordance with various embodiments.

Figure 10 is a flowchart of a method for segmenting a multivariate image that includes a plurality of images in accordance with various embodiments.

Figure 11 is a joint histogram that may be generated in accordance with various embodiments.

Figure 12 is the same histogram as shown in Figure 11, where a different color is assigned to every peak, in accordance with various embodiments.

Figure 13 is a label image that may be generated based on the colored histogram of Figure 12 (i.e. clustering in joint histogram space) in accordance with various embodiments.

Figure 14 is a label image that may be generated based on the label image of Figure 13 incorporating spatial connectivity of voxels (i.e. clustering in image space) that may be displayed in accordance with various embodiments.

Figure 15 is a flowchart of another method for segmenting a multivariate image that includes several images in accordance with various embodiments.

Figure 16 is a pictorial drawing of a computed tomography (CT) imaging system constructed in accordance with various embodiments.

Figure 17 is a schematic block diagram of the CT imaging system of Figure 16.



[0007] The foregoing summary, as well as the following detailed description of various embodiments, will be better understood when read in conjunction with the appended drawings. To the extent that the figures illustrate diagrams of the functional blocks of the various embodiments, the functional blocks are not necessarily indicative of the division between hardware circuitry. Thus, for example, one or more of the functional blocks (e.g., processors or memories) may be implemented in a single piece of hardware (e.g., a general purpose signal processor or a block of random access memory, hard disk, or the like) or multiple pieces of hardware. Similarly, the programs may be stand-alone programs, may be incorporated as subroutines in an operating system, may be functions in an installed software package, and the like. It should be understood that the various embodiments are not limited to the arrangements and instrumentality shown in the drawings.

[0008] As used herein, an element or step recited in the singular and preceded by the word "a" or "an" should be understood as not excluding plural of said elements or steps, unless such exclusion is explicitly stated. Furthermore, references to "one embodiment" are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features. Moreover, unless explicitly stated to the contrary, embodiments "comprising" or "having" an element or a plurality of elements having a particular property may include additional such elements not having that property.

[0009] Although various embodiments are described with respect to a computed tomography (CT) imaging system, it should be noted that various embodiments, including the methods and systems for providing joint-information based segmentation and visualization of multivariate images described herein, may be used with other imaging systems. For example, the method and system may be utilized with a positron emission tomography (PET) system, a single photon emission computed tomography (SPECT) system, a magnetic resonance (MR) imaging system, an ultrasound imaging system, and/or an x-ray system, among others.

[0010] In various embodiments, a method and/or system is provided that may be utilized to generate a fusion image. As used herein, in various embodiments a fusion image is a single image that is generated using information acquired from two or more different imaging modalities or from a single imaging modality operated to perform different scanning protocols. In some embodiments, the fusion image provides improved visual enhancement of differences between the images utilized to form the fusion image, such as abnormal regions, lesions, and/or contours of various organs or tumors. Accordingly, the fusion image facilitates improving a physician's diagnosis and decreasing the time needed by the physician to form the diagnosis.

[0011] In various other embodiments, the methods and/or systems may also be utilized to improve image segmentation. Segmentation is performed using an algorithm that receives two images as an input. The algorithm then utilizes the two images to identify contours of an organ, lesions, etc. Accordingly, in various embodiments, an image acquired from a first imaging modality and an image acquired from a second different imaging modality may be utilized to generate a fusion image. A technical effect of various embodiments is to automatically generate a fusion image which may then be utilized to facilitate improving an accuracy and a robustness of the segmentation process. For example, anatomical images may be fused with functional images to facilitate improving tumor detection and segmentation.

[0012] Figure 1 is a simplified block diagram of an imaging system 10 that is formed in accordance with various embodiments. Although the illustrated embodiment is described with respect to a CT imaging system 10, it should be realized that the methods described herein may be utilized with any imaging system.

[0013] In the illustrated embodiment, the imaging system 10 includes an x-ray source 12 that is configured to emit radiation, e.g., x-rays 14, through a volume containing a subject 16, e.g. a patient being imaged. In the embodiment shown in Figure 1, the imaging system 10 also includes an adjustable collimator 18. In operation, the emitted x-rays 14 pass through an opening of the adjustable collimator 18 which limits the angular range associated with the x-rays 14 passing through the volume in one or more dimensions. More specifically, the collimator 18 shapes the emitted x-rays 14, such as to a generally cone or generally fan shaped beam that passes into and through the imaging volume in which the subject 16 is positioned. The collimator 18 may be adjusted to accommodate different scan modes, such as to provide a narrow fan-shaped x-ray beam in a helical scan mode and a wider cone-shaped x-ray beam in an axial scan mode. The collimator 18 may be formed, in one embodiment, from two cylindrical disks that rotate to adjust the shape or angular range of the x-rays 14 that pass through the imaging volume. Optionally, the collimator 18 may be formed using two or more translating plates or shutters. In various embodiments, the collimator 18 may be formed such that an aperture defined by the collimator 18 corresponds to a shape of a radiation detector 20.

[0014] In operation, the x-rays 14 pass through or around the subject 16 and impinge on the detector 20. The detector 20 includes a plurality of detector elements 22 that may be arranged in a single row or a plurality of rows to form an array of detector elements 22. The detector elements 22 generate electrical signals that represent the intensity of the incident x-rays 14. The electrical signals are acquired and processed to reconstruct images of one or more features or structures within the subject 16. In various embodiments, the imaging system 10 may also include an anti-scatter grid (not shown) to absorb or otherwise prevent x-ray photons that have been deflected or scattered in the imaging volume from impinging on the detector 20. The anti-scatter grid may be a one-dimensional or two-dimensional grid and/or may include multiple sections, some of which are one-dimensional and some of which are two-dimensional.

[0015] The imaging system 10 also includes an x-ray controller 24 that is configured to provide power and timing signals to the x-ray source 12. The imaging system 10 further includes a data acquisition system 26. In operation, the data acquisition system 26 receives data collected by readout electronics of the detector 20. The data acquisition system 26 may receive sampled analog signals from the detector 20 and convert the data to digital signals for subsequent processing by a processor 28. Optionally, the analog-to-digital conversion may be performed by circuitry provided on the detector 20.

[0016] The processor 28 is programmed to perform functions described herein, and as used herein, the term processor is not limited to just integrated circuits referred to in the art as computers, but broadly refers to computers, microcontrollers, microcomputers, programmable logic controllers, application specific integrated circuits, and other programmable circuits, and these terms are used interchangeably herein. The processor 28 may be embodied as any suitable computing device, e.g., a computer, personal digital assistant (PDA), laptop computer, notebook computer, a hard-drive based device, smartphone, or any device that can receive, send, and store data.

[0017] The imaging system 10 also includes a fusion image generating module 50 that is configured to receive an image or a series of images, such as a series of images 52, and implement or perform various methods described herein. In various embodiments, the series of images 52 includes images acquired from two different imaging modalities. For example, the series of images 52 may include a CT image and an MR image, a CT image and a PET image, a PET image and an ultrasound image, etc. It should therefore be realized that the series of images 52 may include images acquired from a combination of any two imaging modalities described herein. Accordingly, the series of images 52 may include images acquired from the CT imaging system 10, a PET system 60, an ultrasound system 62, an x-ray system 64, an MR system 66, a SPECT system 68, and/or other imaging systems, or a combination thereof.

[0018] The fusion image generating module 50 may be implemented as a piece of hardware that is installed in the processor 28. Optionally, the fusion image generating module 50 may be implemented as a set of instructions that are installed on the processor 28. The set of instructions may be stand-alone programs, may be incorporated as subroutines in an operating system installed on the processor 28, may be functions that are installed in a software package on the processor 28, or may be a combination of software and hardware. It should be understood that the various embodiments are not limited to the arrangements and instrumentality shown in the drawings.

[0019] Figure 2 is a flowchart of a method 100 for automatically generating a fusion image in accordance with various embodiments. The method 100 may be implemented as a set of instructions on the fusion image generating module 50 and/or the processor 28, both shown in Figure 1. The method 100 may be provided as a non-transitory machine-readable medium or media having instructions recorded thereon for directing the processor 28 or the fusion image generating module 50 to perform one or more embodiments of the methods described herein. The medium or media may be, for example, any type of CD-ROM, DVD, floppy disk, hard disk, optical disk, flash RAM drive, or other type of computer-readable medium or a combination thereof.

[0020] In operation, the method 100 automatically generates a fusion image, such as a fusion image 54 shown in Figure 1. Referring again to Figure 2, at 102, a series of images, such as the images 52 shown in Figure 1, are input to the fusion image generating module 50. As described above, the series of images 52 may include images acquired from two different imaging modalities. Accordingly, in various embodiments, the series of images 52 may include a dual-energy CT (DECT) image, a CT image, a PET image, a SPECT image, an ultrasound image, a multi-phase CT image, and/or an MR image, or a combination thereof. Therefore, a portion of the series of images 52 may be acquired using, for example, the PET system 60, the ultrasound system 62, the x-ray system 64, the MR system 66, the SPECT system 68, and/or other imaging systems.

[0021] In various other embodiments, the series of images 52 may include images acquired using the same imaging modality operating using different scanning protocols. For example, the series of images 52 may include MR images acquired using different scan protocols. The series of images 52 may include DECT images, wherein each of the images is acquired at a different keV level, etc.

[0022] The series of images 52 may also include images acquired using a contrast agent, wherein each image is acquired at a different contrast phase. For example, in various embodiments, a contrast agent may be injected into a patient. The patient may then be subsequently scanned to generate the series of images 52. In various other embodiments, the patient is not injected with the contrast agent prior to scanning the patient to generate the series of images 52. It should therefore be realized that in various embodiments, administering a contrast agent to the subject is optional.

[0023] The series of images 52 may also be obtained from data collected during a previous scan of the subject, wherein the series of images 52 have been stored in a memory. Optionally, the series of images 52 may be obtained during real-time scanning of the subject. For example, the methods described herein may be performed on images as the images are received from the imaging system 10 during a real-time examination of the subject. Accordingly, in various embodiments, the user may select the series of images desired for subsequent processing. For example, the user may select the series of images 52 for subsequent processing or the user may select any other series of images for processing.

[0024] At 104, a region of interest is selected using the series of images 52. For example, Figure 3 is an image 200 that may form a portion of the series of images 52. Accordingly, at 104 the user may manually select a region of interest 250 on the image 200. The region of interest 250 may represent any region that the user desires to segment or visualize. In the illustrated embodiment, the image 200 is a water image. In various other embodiments, at 104 the entire image may be selected as the region of interest.

[0025] At 106, the fusion image generating module 50 is configured to automatically select at least one informative image from the series of images 52. For example, Figure 4 is an image 202 that may be automatically selected at 106. In the illustrated embodiment, the image 202 is an iodine image also acquired using the CT imaging system 10. The informative image 202 may be automatically selected using information within the images themselves. More specifically, the information within the region of interest 250 may be utilized to calculate the entropy of the image or other values, such as, for example, noise, edges, or homogeneous regions. The remaining images within the series of images 52 may then be analyzed to find values similar to those calculated in the region of interest 250. As a result, the fusion image generating module 50 is configured to identify at least one image, such as, for example, the image 202, that has values similar to those of the region of interest 250 selected on the image 200.
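By way of illustration only, the entropy-based selection at 106 might be sketched as follows, assuming NumPy is available; the function names and the use of Shannon entropy over a 64-bin histogram are illustrative assumptions, not part of the described system.

```python
import numpy as np

def region_entropy(image, roi_mask, bins=64):
    """Shannon entropy of the intensities inside the region of interest."""
    hist, _ = np.histogram(image[roi_mask], bins=bins)
    p = hist / max(hist.sum(), 1)   # normalize; guard against an empty ROI
    p = p[p > 0]                    # ignore empty bins
    return -np.sum(p * np.log2(p))

def select_informative_image(reference, roi_mask, candidates):
    """Return the candidate image whose ROI entropy is closest to that of
    the reference image, mirroring the selection described at 106."""
    target = region_entropy(reference, roi_mask)
    return min(candidates,
               key=lambda img: abs(region_entropy(img, roi_mask) - target))
```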

[0026] Referring again to Figure 2, at 108 a joint histogram is generated using the image 200 and the informative image 202 selected at 106. In operation, to generate the joint histogram, a histogram of the image 200 is generated and a histogram of the image 202 is generated. Figure 5 is an image of a histogram 204 that may be generated using the image 200. Moreover, Figure 6 is an image of a histogram 206 that may be generated using the image 202.

[0027] Referring again to Figure 2, the images 200 and 202 are then utilized to generate a joint histogram 208 as shown in Figure 7. More specifically, the joint histogram 208 represents the histogrammed information generated using the images 200 and 202. Thus, in the illustrated embodiment, the joint histogram 208 includes water information acquired from the image 200 and iodine information acquired from the image 202. It should be realized that although the exemplary embodiment is described as generating a joint histogram from two images, the joint histogram 208 may be generated using more than two images, for example, three images.
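As an illustrative sketch (not part of the described system), the joint histogram at 108 can be computed for two co-registered images with NumPy; the bin edges are returned alongside the counts, since they are needed later to map voxels back to histogram bins.

```python
import numpy as np

def joint_histogram(image_a, image_b, bins=256):
    """Joint histogram of two co-registered images: bin (i, j) counts the
    voxels whose intensity falls in bin i of image_a and bin j of image_b."""
    assert image_a.shape == image_b.shape, "images must be co-registered"
    hist, a_edges, b_edges = np.histogram2d(image_a.ravel(),
                                            image_b.ravel(), bins=bins)
    return hist, a_edges, b_edges
```

For more than two images, np.histogramdd generalizes the same construction to an N-dimensional joint histogram.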

[0028] Referring again to Figure 2, at 110 the joint histogram 208 is utilized to generate a fusion image. As described above, a fusion image is a single image that is generated using information acquired from two or more different imaging modalities or a single modality operated to perform different scanning protocols such as the images 200 and 202 described above.

[0029] Figure 8 is a flowchart of a method 300 for generating an enhanced fusion image, shown at 110 in Figure 2. As described above, to generate a fusion image, at 102 a series of images, such as the images 52 shown in Figure 1, are input to the fusion image generating module 50. The images may be physiological images, such as medical images, and/or feature images, i.e. images that show bone, etc. At 104, a region of interest is selected using the series of images 52. At 106, the fusion image generating module 50 is configured to automatically select at least one informative image from the series of images 52. Additionally, at 108 a joint histogram, such as the joint histogram 208 shown in Figure 7, is generated and then utilized to generate the fusion image.

[0030] Referring to Figure 8, at 302 the fusion image generating module 50 is configured to automatically locate a look-up table (LUT). As used herein, a LUT is a multi-dimensional array or matrix in which each value may be located using two or more indexing variables. In various embodiments, the indexing variables are derived from the information in the joint histogram. In various embodiments, the fusion image generating module 50 may identify a pre-defined LUT, such as the LUT 56 shown in Figure 1. In operation, information related to each pixel in the joint histogram 208 may be input to the LUT 56. For example, the fusion image generating module 50 may be configured to determine a mean value and/or a variance value for various organs, tissues, bones, etc. using the histogram 208. Based on, for example, the mean and variance values, the fusion image generating module 50 may calculate a window level setting. The window level setting may then be input to the LUT 56 which then outputs a color for each voxel in the histogram 208. In various other embodiments, the fusion image generating module 50 may not identify a pre-defined LUT. In this case, the fusion image generating module 50 may be configured to generate a LUT based on the information in the joint histogram 208 and assign a different color for every peak in the joint histogram 208.

[0031] Accordingly, the LUT 56 is configured to assign a value to each pixel in the images 200 and 202. The values output from the LUT 56 may then be utilized to generate a color image, such as the color image 350 shown in Figure 9. More specifically, the outputs from the LUT 56 may be utilized to generate revised pixel values for the images 200 and 202. For example, the LUT 56 may assign a color to intensity pairs such that each identical intensity pair in the images 200 and 202 is assigned the same color. The revised images, after being modified by the LUT 56, may then be fused together to form the image 350, also referred to herein as the fusion image 350. Referring again to Figure 8, at 308, the fusion image generating module 50 is configured to display the fusion image 350. Accordingly, in operation, using different pre-defined and/or application-specific LUTs facilitates enhancing the differences between various organs, tissues, and/or structures in the image. Moreover, in various embodiments, assuming that three images are being fused, the LUT 56 may assign colors to intensity triplets.
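A minimal sketch of this colorization step, assuming the LUT is a pre-computed (bins, bins, 3) RGB array indexed by binned intensity pairs; the min-max scaling of intensities to LUT indices is an illustrative assumption, not a prescribed windowing scheme.

```python
import numpy as np

def fuse_with_lut(image_a, image_b, lut, bins=256):
    """Color every voxel by looking up its intensity pair in a 2D LUT.

    lut is assumed to be a (bins, bins, 3) RGB array, so identical
    intensity pairs always receive identical colors."""
    def to_index(img):
        span = np.ptp(img)
        scaled = (img - img.min()) / (span if span else 1.0)
        return np.clip((scaled * (bins - 1)).astype(int), 0, bins - 1)
    return lut[to_index(image_a), to_index(image_b)]   # fused RGB image
```

A pre-defined LUT 56 would simply be loaded here; alternatively, a LUT may be built on the fly by assigning a distinct color to every peak of the joint histogram 208, as the paragraph above describes.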

[0032] Referring again to Figure 2, at 112 a joint histogram is utilized to perform image segmentation. Figure 10 is a flowchart illustrating a method 400 for segmenting an enhanced fusion image shown at 112 in Figure 2. As described above, to generate a fusion image, at 102, a series of images, such as the images 52 shown in Figure 1, are input to the fusion image generating module 50. Additionally, the images may be physiological images such as medical images and/or feature images, i.e. images that show bone, etc. At 104, a region of interest is selected using the series of images 52. At 106, the fusion image generating module 50 is configured to automatically select at least one informative image from the series of images 52. Additionally, at 108 a joint histogram, such as a joint histogram 450 shown in Figure 11, is generated using the images selected at 106.

[0033] In operation, the method 400 utilizes information in two or more images, e.g. the images 200 and 202, to improve the segmentation process. The method 400 works in both histogram space and image space. For example, a clustering of the voxels is first calculated in the histogram space. The clusters are then morphologically refined in the image space. Subsequently, anatomical structures may be identified and a segmentation performed in the image space for each anatomical structure.

[0034] At 402, clusters are determined using the joint histogram 450. The clusters may be identified using various clustering methods. For example, in various embodiments, K-means clustering may be utilized. K-means clustering is an iterative technique that is used to partition an image into K clusters. The algorithm may include the following: (1) pick K cluster centers, either randomly or based on some heuristic; (2) assign each pixel in the image to the cluster that minimizes the variance between the pixel and the cluster center; (3) re-compute the cluster centers by averaging all of the pixels in the cluster; and (4) iterate (2) and (3) until convergence is attained (e.g., no pixels change clusters). In this case, variance is the squared or absolute difference between a pixel and a cluster center. The difference is typically based on pixel color, intensity, texture, and location, or a weighted combination of these factors. K may be selected manually, randomly, or by a heuristic.
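Steps (1) through (4) might be sketched as follows in NumPy; the random initialization, the fixed iteration cap, and the squared-difference measure are illustrative choices, and empty clusters are not handled in this sketch.

```python
import numpy as np

def kmeans(pixels, k, n_iter=100, seed=0):
    """K-means over pixel feature vectors of shape (n_pixels, n_features),
    e.g. the two intensities of each voxel, following steps (1)-(4)."""
    rng = np.random.default_rng(seed)
    # (1) pick K cluster centers randomly from the data
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(n_iter):
        # (2) assign each pixel to the center with the smallest
        #     squared difference
        d = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # (3) re-compute each center by averaging its member pixels
        new_centers = np.array([pixels[labels == i].mean(axis=0)
                                for i in range(k)])
        # (4) iterate until convergence, i.e. the centers stop moving
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers
```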

[0035] In the exemplary embodiment described herein, a histogram-based clustering method is utilized. In operation, a histogram, such as the histogram 450, is computed from all of the pixels in the image. The peaks and valleys in the histogram 450 are then used to locate the clusters in the image. Color or intensity may be used as the measure. Figure 12 illustrates an exemplary image 452 illustrating the results of the clustering performed at 402.
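A sketch of such a histogram-based clustering, assuming SciPy; the Gaussian smoothing and the 5-bin local-maximum window are illustrative assumptions used to suppress spurious peaks, not parameters prescribed by the description.

```python
import numpy as np
from scipy import ndimage

def histogram_peak_clusters(joint_hist, sigma=2.0, window=5):
    """Cluster the bins of a joint histogram around its peaks: smooth the
    histogram, treat local maxima as cluster centers, and assign every
    occupied bin to its nearest peak."""
    smoothed = ndimage.gaussian_filter(joint_hist, sigma=sigma)
    # a bin is a peak if it equals the local maximum of its neighborhood
    local_max = ndimage.maximum_filter(smoothed, size=window)
    peaks = np.argwhere((smoothed == local_max) & (smoothed > 0))
    occupied = np.argwhere(joint_hist > 0)
    # squared distance from every occupied bin to every peak
    d = ((occupied[:, None, :] - peaks[None, :, :]) ** 2).sum(axis=2)
    return occupied, d.argmin(axis=1), peaks
```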

[0036] Referring again to Figure 10, at 404 a label image is generated based on the clusters identified at 402. For example, Figure 13 illustrates an exemplary label image 454 that may be generated at 404. In various embodiments, the pixels or voxels in the images may be labeled as tissue, fat, water, iodine, bone, etc. by assigning each of the various clusters a different color. Optionally, the clusters in the label image may be assigned different shades or patterns to enable the label image to be generated as a black-and-white or gray-scale image. The label image 454 is then segmented in image space. More specifically, at 406 the segmentation process may include performing a morphological filtering of the label image 454. Morphological filtering may be implemented to eliminate organs, vessels, etc. that are smaller than a predetermined size. Morphological filtering may also be implemented to perform cavity filling, cavity opening, cavity closing, etc.
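The morphological filtering at 406 might be sketched as follows with SciPy's ndimage routines; the per-cluster closing, hole filling, and minimum-size threshold are illustrative choices rather than the prescribed filter set.

```python
import numpy as np
from scipy import ndimage

def refine_label_image(labels, min_size=50):
    """Morphologically refine a label image: close and fill cavities in
    each cluster and drop components smaller than min_size voxels."""
    refined = np.zeros_like(labels)
    for value in np.unique(labels):
        if value == 0:                          # skip the background label
            continue
        mask = labels == value
        mask = ndimage.binary_closing(mask)     # cavity closing
        mask = ndimage.binary_fill_holes(mask)  # cavity filling
        components, n = ndimage.label(mask)
        sizes = ndimage.sum(mask, components, index=np.arange(1, n + 1))
        for i, size in enumerate(sizes, start=1):
            if size >= min_size:                # keep only large structures
                refined[components == i] = value
    return refined
```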

[0037] At 408, 3D connected regions are identified. As used herein, connected regions are regions of voxels that have a similar intensity value. For example, voxels identified as water have a similar intensity value and are therefore determined to be connected regions. At 410, the connected regions are segmented. In various embodiments, the regions may be segmented using, for example, various conventional algorithms.
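A minimal sketch of the connected-region identification at 408, assuming SciPy; the choice of 26-connectivity is an illustrative assumption.

```python
from scipy import ndimage

def connected_regions_3d(label_image, cluster_value):
    """Identify the 3D connected regions of one cluster (e.g. water),
    using 26-connectivity so diagonal neighbours count as connected."""
    mask = label_image == cluster_value
    structure = ndimage.generate_binary_structure(3, 3)
    regions, n_regions = ndimage.label(mask, structure=structure)
    return regions, n_regions
```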

[0038] At 412, the regions segmented at 410 are identified based on the specific characteristics of each region. The various characteristics may include, for example, pixel intensity, pixel position, a position of the organ, a position of an organ with respect to surrounding structures, etc. In various embodiments, the regions may be identified using an artificial intelligence based algorithm. More specifically, the artificial intelligence based algorithm is configured to utilize the various imaging statistics, values, etc. to identify the various regions. In various embodiments, the algorithm may be trained using a large set of known images to generate a training dataset. The training dataset may then be utilized to train the algorithm to identify various characteristics that enable the algorithm to determine the different regions. Accordingly, in operation, the training dataset may include information of the shape of exemplary organs, expected outlines of various organs, expected pixel intensity values, etc. The known values in the training dataset may then be compared to the values in the images to segment each connected region.
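Purely as an illustration of the characteristic-based matching (not of the artificial intelligence based algorithm itself), a rule-based sketch might compare region statistics against reference ranges derived from a training dataset; the reference_stats structure and its fields are hypothetical.

```python
import numpy as np
from scipy import ndimage

def identify_region(image, region_mask, reference_stats):
    """Match one segmented region against per-organ reference statistics.
    reference_stats is assumed (hypothetically) to map an organ name to
    (min_mean_intensity, max_mean_intensity, expected_centroid)."""
    mean_intensity = image[region_mask].mean()
    centroid = np.array(ndimage.center_of_mass(region_mask))
    best, best_error = "unknown", np.inf
    for organ, (lo, hi, expected_centroid) in reference_stats.items():
        if lo <= mean_intensity <= hi:
            error = np.linalg.norm(centroid - np.asarray(expected_centroid))
            if error < best_error:
                best, best_error = organ, error
    return best
```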

[0039] Referring again to Figure 10, at 414 an anatomy specific segmentation is performed. For example, in various embodiments, the fusion image generating module 50 may automatically select a liver segmentation algorithm to segment a liver, a spleen segmentation to segment the spleen, a heart segmentation, a kidney segmentation, etc. Accordingly, at 414, a specific segmentation algorithm is utilized to segment a respective connected region. It should be realized that the segmentation procedure is specifically selected based on the organ, structure, etc. and the presence or absence of a contrast agent. Moreover, it should be realized that while various embodiments describe a segmentation procedure, any image processing procedure may be automatically performed and the segmentation procedure described herein is exemplary only.

[0040] At 416, the segmented image generated at 414 is displayed to the user. For example, Figure 14 illustrates an exemplary image 456 that may be displayed to a user. As shown in the image 456, the various organs, tissues, and structures are each labeled using a different color to enable a user to easily distinguish the various features in the image.

[0041] Figure 15 is a flowchart illustrating another method 500 for segmenting an enhanced fusion image, shown at 112 in Figure 2. In various embodiments, the method 500 may be utilized to perform segmentation when the series of images 52 includes more than three images. At 502, a region of interest is selected as described above. At 504, the series of images 52 is ordered based on information about the images. More specifically, as described above, the information may include entropy or other values, such as, for example, noise, edges, or homogeneous regions. Thus, the series of images may be ordered such that images having a similar entropy are positioned together, images having similar edges are ordered together, etc. At 506, the first two images in the series of images are selected for subsequent processing. At 508, a joint histogram is generated using the images selected at 506. At 510, clusters are determined using the joint histogram as described above. At 512, a label image is generated using the clusters determined at 510.

[0042] At 514, the fusion image generating module 50 is configured to determine whether any of the images in the series of images 52 has not been selected for subsequent processing. For example, assume that the series of images 52 includes four images and that at 506 the first two images in the series of images 52 were selected for subsequent processing. In this case, the series of images 52 includes two images that have not been processed. Accordingly, at 514, if the fusion image generating module 50 determines that not all of the images in the series of images 52 have been processed, the method 500 proceeds to step 516, wherein the next image in the series of images 52 is selected. Additionally, the label image generated during the previous iteration is selected. The method 500 then proceeds to step 508, wherein a joint histogram is generated using the third image and the label image from the previous iteration. Steps 508 through 516 are iterated until each of the images in the series of images 52 has been processed.
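The control flow of steps 506 through 516 might be sketched as follows, reusing the joint_histogram and histogram_peak_clusters sketches given earlier; the make_label_image helper is a hypothetical illustration of step 512, not a function of the described system.

```python
import numpy as np

def make_label_image(image_a, image_b, a_edges, b_edges, occupied, assignments):
    """Hypothetical helper for step 512: carry the histogram-space
    clustering into image space by labeling every voxel with the
    cluster index of its joint-histogram bin."""
    a_idx = np.clip(np.digitize(image_a, a_edges) - 1, 0, len(a_edges) - 2)
    b_idx = np.clip(np.digitize(image_b, b_edges) - 1, 0, len(b_edges) - 2)
    bin_to_cluster = np.full((len(a_edges) - 1, len(b_edges) - 1), -1)
    bin_to_cluster[occupied[:, 0], occupied[:, 1]] = assignments
    return bin_to_cluster[a_idx, b_idx]

def segment_image_series(images):
    """Steps 506-516: pair the label image produced by one pass with the
    next image in the ordered series until every image is processed."""
    current, queue = images[0], list(images[1:])
    partner = queue.pop(0)
    while True:
        hist, a_edges, b_edges = joint_histogram(current, partner)    # 508
        occupied, assignments, _ = histogram_peak_clusters(hist)      # 510
        label = make_label_image(current, partner, a_edges, b_edges,
                                 occupied, assignments)               # 512
        if not queue:                                                 # 514
            return label
        current, partner = label.astype(float), queue.pop(0)          # 516
```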

[0043] At 518, a morphological filtering is performed as described above in method 400. At 520, 3D connected regions are identified. At 522, the connected regions are segmented. At 524, the regions segmented at 522 are identified based on the specific characteristics of each region. At 526, an anatomy specific segmentation is performed. At 528, the segmented image generated at 526 is displayed to the user.

[0044] Described herein are embodiments that are configured to generate a fusion image which improves the visualization of the joint information of all images. The improved visualization enhances hard-to-detect differences and abnormal regions, improves the diagnostic process, and makes it less time consuming. The fusion image also improves a user's ability to detect lesions and to contour the borders of organs or tumors. More specifically, the various embodiments described herein use multiple images, e.g. CT, MR, and other images, which are fused together to improve the accuracy and robustness of the segmentation procedure. The various embodiments also enable anatomical images to be fused with functional images to improve tumor detection and segmentation. Thus, using the various embodiments described herein, the differences between organs, lesions, and tissues in the images may be enhanced in both 2D and 3D visualization, and segmentation may be improved. The various methods described herein may be applied to any imaging modality and any region of the body.

[0045] The various methods and the fusion image generating module 50 may be implemented in an exemplary imaging system. For example, Figure 16 is a pictorial view of an imaging system that is formed in accordance with various embodiments. Figure 17 is a block schematic diagram of a portion of the imaging system shown in Figure 16. Although various embodiments are described in the context of a CT imaging system, it should be understood that other imaging systems capable of performing the functions described herein are contemplated as being used.

[0046] Referring to Figures 16 and 17, the CT imaging system 600 includes a gantry 604, which includes an x-ray source 606 that projects a beam of x-rays 608 toward a detector array 610 on the opposite side of the gantry 604. The detector array 610 is formed by a plurality of detector rows (not shown) including a plurality of the detectors 602 that together sense the projected x-rays that pass through an object, such as a patient 612 that is disposed between the detector array 610 and the x-ray source 606. Each detector 602 produces an electrical signal that represents the intensity of an impinging x-ray beam and hence can be used to estimate the attenuation of the beam as the beam passes through the patient 612. During a scan to acquire x-ray projection data, the gantry 604 and the components mounted therein rotate about a center of rotation 614. Figure 17 shows only a single row of detectors 602 (i.e., a detector row). However, the multi-slice detector array 610 includes a plurality of parallel detector rows of detectors 602 such that projection data corresponding to a plurality of quasi-parallel or parallel slices can be acquired simultaneously during a scan.

[0047] Rotation of components on the gantry 604 and the operation of the x-ray source 606 are controlled by a control mechanism 616 of the CT imaging system 600. The control mechanism 616 includes an x-ray controller 618 that provides power and timing signals to the x-ray source 606 and a gantry motor controller 620 that controls the rotational speed and position of components on the gantry 604. A data acquisition system (DAS) 622 in the control mechanism 616 samples analog data from the detectors 602 and converts the data to digital signals for subsequent processing. An image reconstructor 624 receives sampled and digitized x-ray data from the DAS 622 and performs high-speed image reconstruction. The reconstructed images, i.e. the series of images 52, are applied as an input to a computer 626 that stores the image in a storage device 628. The image reconstructor 624 can be specialized hardware or computer programs executing on the computer 626. In various embodiments, the computer 626 may include the fusion image generating module 50 described above.

[0048] The computer 626 also receives commands and scanning parameters from an operator via an operator workstation 630 that has a keyboard and/or other user input and/or marking devices, such as a mouse, trackball, or light pen. An associated display 632, examples of which include a cathode ray tube (CRT) display, liquid crystal display (LCD), or plasma display, allows the operator to observe the reconstructed image and other data from the computer 626. The display 632 may include a user pointing device, such as a pressure-sensitive input screen. The operator supplied commands and parameters are used by the computer 626 to provide control signals and information to the DAS 622, the x-ray controller 618, and the gantry motor controller 620. In addition, the computer 626 operates a table motor controller 634 that controls a motorized table 636 to position the patient 612 in the gantry 604. For example, the table 636 moves portions of the patient 612 through a gantry opening 638.

[0049] Various embodiments described herein provide a tangible and non-transitory machine-readable medium or media having instructions recorded thereon for a processor or computer to operate an imaging apparatus to perform one or more embodiments of methods described herein. The medium or media may be any type of CD-ROM, DVD, floppy disk, hard disk, optical disk, flash RAM drive, or other type of computer-readable medium or a combination thereof.

[0050] The various embodiments and/or components, for example, the modules, or components and controllers therein, also may be implemented as part of one or more computers or processors. The computer or processor may include a computing device, an input device, a display unit and an interface, for example, for accessing the Internet. The computer or processor may include a microprocessor. The microprocessor may be connected to a communication bus. The computer or processor may also include a memory. The memory may include Random Access Memory (RAM) and Read Only Memory (ROM). The computer or processor further may include a storage device, which may be a hard disk drive or a removable storage drive such as a floppy disk drive, optical disk drive, and the like. The storage device may also be other similar means for loading computer programs or other instructions into the computer or processor.

[0051] As used herein, the term "computer" or "module" may include any processor-based or microprocessor-based system including systems using microcontrollers, reduced instruction set computers (RISC), application specific integrated circuits (ASICs), logic circuits, and any other circuit or processor capable of executing the functions described herein. The above examples are exemplary only, and are thus not intended to limit in any way the definition and/or meaning of the term "computer".

[0052] The computer or processor executes a set of instructions that are stored in one or more storage elements, in order to process input data. The storage elements may also store data or other information as desired or needed. The storage element may be in the form of an information source or a physical memory element within a processing machine.

[0053] The set of instructions may include various commands that instruct the computer or processor as a processing machine to perform specific operations such as the methods and processes of the various embodiments of the subject matter described herein. The set of instructions may be in the form of a software program. The software may be in various forms such as system software or application software. Further, the software may be in the form of a collection of separate programs or modules, a program module within a larger program or a portion of a program module. The software also may include modular programming in the form of object-oriented programming. The processing of input data by the processing machine may be in response to user commands, or in response to results of previous processing, or in response to a request made by another processing machine.

[0054] As used herein, the terms "software" and "firmware" are interchangeable, and include any computer program stored in memory for execution by a computer, including RAM memory, ROM memory, EPROM memory, EEPROM memory, and non-volatile RAM (NVRAM) memory. The above memory types are exemplary only, and are thus not limiting as to the types of memory usable for storage of a computer program.

[0055] It is to be understood that the above description is intended to be illustrative, and not restrictive. For example, the above-described embodiments (and/or aspects thereof) may be used in combination with each other. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the various embodiments of the described subject matter without departing from their scope. While the dimensions and types of materials described herein are intended to define the parameters of the various embodiments of the invention, the embodiments are by no means limiting and are exemplary embodiments. Many other embodiments will be apparent to one of ordinary skill in the art upon reviewing the above description. The scope of the various embodiments of the inventive subject matter should, therefore, be determined with reference to the appended claims. In the appended claims, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein." Moreover, in the following claims, the terms "first," "second," and "third," etc. are used merely as labels, and are not intended to impose numerical requirements on their objects.

[0056] This written description uses examples to disclose the various embodiments of the invention, including the preferred mode, and also to enable one of ordinary skill in the art to practice the various embodiments of the invention, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the various embodiments of the invention is defined by the claims.


Claims

1. A method (500) for segmenting an enhanced fusion image (54), said method comprising:

obtaining (102) a series of images (52) of an object of interest, the series of images (52) comprising more than three images acquired from two or more different imaging modalities or a single modality operated to perform different scanning protocols;

ordering (504) the series of images based on information about the images in the series of images, said information relating to entropy, noise, edges or homogeneous regions of the images;

selecting (506) the first image and the second image in the ordered series of images for processing;

repeating, until all images in the series of images (52) have been processed, the steps of:

generating (508) a joint histogram (208) using the selected images;

determining (510) clusters using the joint histogram (208);

generating (512) a label image (454) using the determined clusters;

determining (514) if any of the images of the series of images has not been selected for processing; and

if it is determined that each of the images in the series of images has not been processed, selecting (516) the next image in the series of images and the generated label image (454); and

when all images in the series of images (52) have been processed:

performing (518) a morphological filtering of the generated label image (454);

extracting (520) connected regions;

segmenting (522) the connected regions;

identifying (524) the segmented regions based on specific characteristics of each region;

performing (526) anatomy specific segmentation; and

displaying the anatomy specific segmented image.


 
2. The method (100) of Claim 1, wherein the first image (52) is acquired using a first imaging modality and the second image (52) is acquired using a second imaging modality.
 
3. An imaging system (10) comprising:

an imaging scanner (60, 62, 64, 66, 68); and

a processor (28) coupled to the imaging scanner, the processor (28) configured to implement the method according to any preceding claim.


 
4. A non-transitory computer-readable medium having instructions recorded thereon configured to cause a processor to perform the method of any of claims 1 to 2.
 


Ansprüche

1. Verfahren (500) zum Segmentieren eines Enhanced-Fusion-Bilds (54), wobei das Verfahren umfasst:

Erhalten (102) einer Reihe von Bildern (52) eines interessierenden Objekts, wobei die Reihe von Bildern (52) mehr als drei Bilder umfasst, erfasst von zwei oder mehr verschiedenen Bildgebungsmodalitäten oder einer einzelnen Modalität, betrieben zum Durchführen verschiedener Abtastprotokolle;

Ordnen (504) der Reihe von Bildern auf Basis von Informationen über die Bilder in der Reihe von Bildern, wobei die Informationen Entropie, Rauschen, Kanten oder homogene Gebiete der Bilder betreffen;

Wählen (506) des ersten Bilds und des zweiten Bilds in der geordneten Reihe von Bildern zur Verarbeitung;

Wiederholen, bis alle Bilder in der Reihe von Bildern (52) verarbeitet worden sind, der folgenden Schritte:

Erzeugen (508) eines gemeinsamen Histogramms (208) unter Verwendung der gewählten Bilder;

Bestimmen (510) von Clustern unter Verwendung des gemeinsamen Histogramms (208);

Erzeugen (512) eines Labelbilds (454) unter Verwendung der bestimmten Cluster;

Bestimmen (514), ob irgendeines der Bilder der Reihe von Bildern nicht für die Verarbeitung gewählt worden ist; und

falls bestimmt wird, dass jedes der Bilder in der Reihe von Bildern nicht verarbeitet worden ist, Wählen (516) des nächsten Bilds in der Reihe von Bildern und des erzeugten Labelbilds (454); und

wenn alle Bilder in der Reihe von Bildern (52) verarbeitet worden sind:

Durchführen (518) einer morphologischen Filterung des erzeugten Labelbilds (454);

Extrahieren (520) von verbundenen Gebieten;

Segmentieren (522) der verbundenen Gebiete;

Identifizieren (524) der segmentierten Gebiete auf Basis von spezifischen Charakteristika jedes Gebiets;

Durchführen (526) einer anatomiespezifischen Segmentierung; und

Anzeigen des anatomiespezifischen segmentierten Bilds.


 
2. Verfahren (100) nach Anspruch 1, wobei das erste Bild (52) unter Verwendung einer ersten Bildmodalität erfasst wird und das zweite Bild (52) unter Verwendung einer zweiten Bildmodalität erfasst wird.
 
3. Bildgebungssystem (10), umfassend:

einen Bildgebungsscanner (60, 62, 64, 66, 68); und

einen Prozessor (28), der an den Bildgebungsscanner gekoppelt ist, wobei der Prozessor (28) ausgelegt ist zum Implementieren des Verfahrens nach einem vorhergehenden Anspruch.


 
4. Nichtvorübergehendes computerlesbares Medium mit darauf aufgezeichneten Anweisungen, ausgelegt, um zu bewirken, dass ein Prozessor das Verfahren nach einem der Ansprüche 1 bis 2 durchführt.
 


Revendications

1. Procédé (500) permettant de segmenter une image de fusion améliorée (54), ledit procédé comprenant :

l'obtention (102) d'une série d'images (52) d'un objet d'intérêt, la série d'images (52) comprenant plus de trois images acquises à partir de deux ou plus de deux modalités d'imagerie différentes ou d'une seule modalité utilisée pour réaliser différents protocoles de balayage ;

la commande (504) de la série d'images sur la base d'informations sur les images de la série d'images, lesdites informations concernant l'entropie, le bruit, les bords ou les régions homogènes des images ;

la sélection (506) de la première image et de la deuxième image de la série d'images commandées pour le traitement ;

la répétition, jusqu'à ce que toutes les images de la série d'images (52) aient été traitées, des étapes de :

génération (508) d'un histogramme conjoint (208) à partir des images sélectionnées ;

détermination (510) des groupes à l'aide de l'histogramme conjoint (208) ;

génération (512) d'une image d'étiquette (454) en utilisant les groupes déterminés ;

détermination (514) si l'une des images de la série d'images n'a pas été sélectionnée pour le traitement ; et

s'il est déterminé que chacune des images de la série d'images n'a pas été traitée, sélection (516) de l'image suivante de la série d'images et de l'image d'étiquette générée (454) ; et

lorsque toutes les images de la série d'images (52) ont été traitées :

réalisation (518) d'un filtrage morphologique de l'image d'étiquette générée (454) ;

extraction (520) de régions connectées ;

segmentation (522) des régions connectées ;

identification (524) des régions segmentées sur la base de caractéristiques spécifiques de chaque région ;

réalisation (526) de segmentation spécifique à l'anatomie ; et

affichage de l'image segmentée spécifique à l'anatomie.


 
2. Procédé (100) selon la revendication 1, la première image (52) étant acquise en utilisant une première modalité d'imagerie et la deuxième image (52) étant acquise en utilisant une deuxième modalité d'imagerie.
 
3. Système d'imagerie (10) comprenant :

un scanner d'imagerie (60, 62, 64, 66, 68) ; et

un processeur (28) couplé au scanner d'imagerie, le processeur (28) étant configuré pour mettre en œuvre le procédé selon l'une quelconque des revendications précédentes.


 
4. Support non transitoire lisible par ordinateur sur lequel sont enregistrées des instructions, configuré pour amener un processeur à réaliser le procédé selon l'une quelconque des revendications 1 à 2.
 




Drawing
REFERENCES CITED IN THE DESCRIPTION



This list of references cited by the applicant is for the reader's convenience only. It does not form part of the European patent document. Even though great care has been taken in compiling the references, errors or omissions cannot be excluded and the EPO disclaims all liability in this regard.

Non-patent literature cited in the description