(11) EP 3 705 047 A1

(12) EUROPEAN PATENT APPLICATION

(43) Date of publication:
09.09.2020 Bulletin 2020/37

(21) Application number: 20161178.7

(22) Date of filing: 05.03.2020

(51) International Patent Classification (IPC):
A61B 6/03 (2006.01)
G06T 5/00 (2006.01)
A61B 6/00 (2006.01)

(84) Designated Contracting States:
AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
Designated Extension States:
BA ME
Designated Validation States:
KH MA MD TN

(30) Priority: 05.03.2019 US 201916292695

(71) Applicant: Siemens Healthcare GmbH
91052 Erlangen (DE)

(72) Inventors:
  • Sahbaee Bagherzadeh, Pooyan
    Mount Pleasant, SC 29466 (US)
  • Sharma, Puneet
    Princeton Junction, New Jersey 08550 (US)

(74) Representative: Patentanwälte Bals & Vogel
Sendlinger Strasse 42A
80331 München (DE)

  


(54) ARTIFICIAL INTELLIGENCE-BASED MATERIAL DECOMPOSITION IN MEDICAL IMAGING


(57) For material decomposition in medical imaging, a machine-learned model is trained to decompose. For example, spectral CT data for a plurality of locations is input, and the machine-learned model outputs the material composition. Using information from surrounding locations in the decomposition for a given location may allow for more accurate material decomposition and/or decomposition into three or more materials.




Description

BACKGROUND



[0001] The present embodiments relate to material decomposition in medical imaging. Spectral computed tomography (CT) decomposes measurements at different energies into different base materials using material decomposition techniques. For example, bone and contrast agent (e.g., iodine) are distinguished in a two-material decomposition. Existing material decomposition algorithms decompose for individual locations (e.g., voxels) based on the measurements for the individual locations. Beam hardening artifacts and statistical noise cause inaccuracies in decomposition. The noise reflects a tradeoff: systems with a high number of energy bins have limited photons per bin, while systems with a low number of energy bins have wide bins.

[0002] Dual energy scans cannot separate more than two materials unless one of the materials has k-edge characteristics (i.e., measurements at more than two energy levels do not provide additional information). Three or more materials may be decomposed using additional bins or energy thresholds. However, the statistical noise level substantially increases for material decomposition of more than two materials, resulting in errors and inaccuracies. With more energy thresholds or energy bins for distinguishing between a larger number of materials, there are fewer photons in each energy bin since the total clinical dose is limited. As the energy bins are narrowed for more materials, there is more beam hardening.
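
For reference, the conventional per-voxel two-material decomposition described above reduces to a small linear inversion. The following is a minimal sketch, assuming hypothetical attenuation coefficients (real values come from calibration); all numbers are placeholders:

    import numpy as np

    # Hypothetical attenuation of each basis material in each energy bin:
    # rows = energy bins (low, high), columns = materials (bone, iodine).
    A = np.array([[0.45, 0.80],
                  [0.25, 0.30]])

    def decompose_voxel(m):
        """Solve m = A @ c for the two material concentrations at one voxel."""
        return np.linalg.solve(A, m)

    m = np.array([0.61, 0.27])        # measured attenuation (low, high)
    c_bone, c_iodine = decompose_voxel(m)

Because the rows of A are similar for nearby energies, small noise in the measurements is amplified in the solved concentrations, which is the statistical-noise problem described above.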

SUMMARY



[0003] By way of introduction, the preferred embodiments described below include methods, systems, instructions, and non-transitory computer readable media for material decomposition in medical imaging. A machine-learned model is trained to decompose. For example, spectral CT data for a plurality of locations is input, and the machine-learned model outputs the material composition. Using information from surrounding locations in the decomposition for a given location may allow for more accurate material decomposition and/or decomposition into three or more materials.

[0004] In a first aspect, a method is provided for material decomposition in a medical imaging system. A patient is scanned with a spectral computed tomography (CT) system. The scanning provides spectral CT data representing the patient. A machine-learned model generates, in response to the spectral CT data, the material decomposition for each of a plurality of locations. An image of the material decomposition for the plurality of the locations is displayed.
Preferred is a method, wherein scanning comprises scanning with the spectral CT system using dual energy or photon counting.
Preferred is alternatively or additionally a method, wherein generating comprises generating with the machine-learned model having been trained using CT-based material decomposition imaging as ground truth, and/or wherein generating comprises generating with the machine-learned model having been trained using CT-based material decomposition imaging of a phantom with known materials as ground truth.
Further, alternatively or additionally a method is preferred, wherein generating comprises generating with the machine-learned model having been trained using synthesized CT model-based material decomposition imaging as ground truth.
Preferred is alternatively or additionally a method, wherein generating comprises generating by the machine-learned model in response to the spectral CT data, scan characteristic, and injection information, and/or wherein generating comprises generating by the machine-learned model having been trained with an image quality rating for training samples.
Further, alternatively or additionally a method is preferred, wherein generating comprises generating with the machine-learned model comprising a convolutional neural network, and/or wherein generating comprises generating with the machine-learned model comprising a recurrent neural network having a long short-term memory.
Preferred is alternatively or additionally a method, wherein generating comprises generating for each of the plurality of the locations based on the spectral CT data for the location and a plurality of neighboring ones of the locations, and/or wherein generating comprises generating by the machine-learned model having been trained with a dictionary embedding.
Further, a method is preferred, further comprising performing spectral CT material decomposition imaging, wherein the material decomposition is a function of output of the spectral CT material decomposition and the machine-learned model.
Preferred is a method, wherein generating comprises generating the material decomposition as a decomposition of three or more materials, in particular wherein generating comprises generating concentrations at each of the locations for the three or more materials, and wherein displaying comprises displaying the image as a first material map for one of the three or more materials and displaying second and third material maps as other images for others of the three or more materials.

[0005] In a second aspect, a system is provided for material decomposition. A computed tomography scanner for scanning a patient is configured to output data representing an interior region of the patient. An image processor is configured to apply an artificial intelligence to decompose two or more materials represented by the data. A display is configured to display an image for at least one of the two or more materials.
Preferred is a system, wherein the computed tomography scanner comprises a spectral computed tomography scanner.
Further, a system is preferred, wherein the data represents a plurality of locations of the interior region, and wherein the artificial intelligence uses the data from neighboring ones of the locations in the decomposition of the two or more materials for each of the locations.
Preferred is a system, wherein the artificial intelligence was trained to decompose the two or more materials as at least three materials.
Further, a system is preferred, wherein the artificial intelligence comprises a machine-learned neural network having been trained from synthetically generated material decomposition using a model of the medical scanner, scanning a phantom of known materials, or both the synthetically generated material decomposition using the model and scanning the phantom.

[0006] In a third aspect, a method is provided for material decomposition in a computed tomography imaging system. A patient is scanned with the computed tomography imaging system, providing data representing the patient. A machine-learned model generates, in response to the data, the material decomposition for each of a plurality of locations based on the data for neighboring ones of the locations. The material decomposition is a decomposition of three or more materials. One or more images of the material decomposition for the plurality of the locations are displayed.

[0007] The present invention is defined by the following claims, and nothing in this section should be taken as a limitation on those claims. Further aspects and advantages of the invention are discussed below in conjunction with the preferred embodiments and may be later claimed independently or in combination.

BRIEF DESCRIPTION OF THE DRAWINGS



[0008] The components and the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention. Moreover, in the figures, like reference numerals designate corresponding parts throughout the different views.

Figure 1 is a flow chart diagram of one embodiment of a method for material decomposition in a medical imaging system;

Figure 2 illustrates training of an artificial intelligence for material decomposition;

Figure 3 shows images of concentration of different materials from material decomposition; and

Figure 4 is one embodiment of a system for material decomposition.


DETAILED DESCRIPTION OF THE DRAWINGS AND PRESENTLY PREFERRED EMBODIMENTS



[0009] Deep learning or other machine learning is used for multi-material decomposition in medical imaging. Being trained on thousands of images and corresponding decomposed images, such as from traditional two- and/or three-material decomposition algorithms, the machine-learned model is not limited to using information only from individual voxels to extract the decomposed information. Machine learning may train on labeled data with spatial information (e.g., planar and z-axis) and/or temporal information (e.g., timing relative to contrast agent injection) for each location. Having access to the information over time and/or space may allow for more accurate classification of the pixel material and/or better estimation of the density of each material. Different material concentrations, and hence material-specific maps, are provided in a way that minimizes the impact of beam hardening artifacts and statistical noise.

[0010] Figure 1 is a flow chart of one embodiment of a method for material decomposition in a medical imaging system. A machine-learned model is used in material decomposition. Since the machine-learned model is trained from images, spatial context is taken into account when decomposing the materials for each given location from the measurements.

[0011] The medical imaging system performs the acts. A medical imager, such as a CT system, performs act 10. An image processor performs act 12. The image processor uses a display screen to perform act 14. In one embodiment, the system of Figure 4 performs the acts. In other embodiments, different devices perform any one or more of the acts. In one example, the CT system performs all the acts. In yet another example, a workstation, computer, portable or handheld device (e.g., tablet or smart phone), server, or combinations thereof performs one or more of the acts.

[0012] The acts are performed in the order shown (e.g., top to bottom or numerical) or other orders. Additional, different, or fewer acts may be provided. For example, the method is performed without act 14. As another example, acts for configuring a medical scanner, such as for selecting the application and/or materials to be decomposed, are provided.

[0013] In act 10, a medical imaging system scans a patient. Any medical imaging system may be used. For example, a spectral CT system scans the patient. The spectral CT system includes an x-ray source or sources that may operate at different energies. Alternatively or additionally, a detector allows for detection at different energies, such as having multiple detectors or thresholds applied to detected x-ray photons. Dual energy or photon counting CT systems may be used. With spectral CT, measurements reconstructed into Hounsfield or other density or attenuation values at different energies may be used to derive material composition.

[0014] The scan provides data representing an interior region of the patient. Spectral CT data is provided by a spectral CT system. Other data may be provided by a spectral CT or other medical imaging system. After reconstruction, the data represents a planar region of the patient. A volume may be represented, such as a stack of slices or planar regions.

[0015] The data is a frame of data representing the patient. The data may be in any format. While the terms "image" and "imaging" are used, the image or imaging data may be in a format prior to actual display of the image. For example, the medical image may be a plurality of scalar values representing different locations in a Cartesian or polar coordinate format different than a display format (i.e., scan or voxel data). As another example, the medical image may be a plurality of red, green, blue (e.g., RGB) values output to a display for generating the image in the display format. The medical image may not yet be a displayed image, may be a currently displayed image, or may be a previously displayed image in the display or other format. The image or imaging is a dataset that may be used for anatomical imaging, such as scan data representing spatial distribution of anatomy of the patient.

[0016] The data is obtained by scanning the patient. The scan data may be provided in or by the medical scanner, loaded from memory, and/or by transfer via a computer network. For example, previously acquired scan data is accessed from a memory or database. As another example, scan data is transmitted over a network after acquisition from scanning a patient.

[0017] Data different than the scan data may be acquired. For example, task-specific information is acquired. The task-specific information may be an identification of the region scanned (e.g., body region), existence of contrast agent in the scanned region, pathology information, scan settings (e.g., x-ray source energies, energy thresholds, photon count range, gantry position, range of movement, and/or reconstruction settings), and/or patient medical information. As another example, injection protocol information is acquired. The time of injection, the time of scan, the volume of contrast agent injected, the change in injection over time, the type of contrast agent, and/or other contrast agent information may be acquired.

[0018] In act 12, an image processor generates a material decomposition for one or more locations in the scanned region of the patient. The image processor determines the material composition for each of a plurality of locations, such as determining for each pixel or voxel location.

[0019] The material decomposition may be a classification, such as identifying whether a given material exists at the location. The classification may be of two, three, or four or more materials. The material decomposition is for material labeling, identifying which of two or more materials is at the location. Alternatively or additionally, the material decomposition provides concentration, density, or relative amount (e.g., volume, attenuation, or density) of the different materials at each location. The concentration of each material for each location is determined.

[0020] Some example two-material decompositions are bone and iodine (e.g., contrast agent), blood and plaque, kidney soft tissue and kidney stone, lung tissue and lung vessel, gout (e.g., uric acid crystals) and blood, brain hemorrhage (e.g., cerebrospinal fluid and hemorrhaging tissue), heart pulmonary blood volume (PBV) (e.g., fat and soft tissue), or bone marrow (e.g., marrow and bone). Some example three-material decompositions are liver virtual non-contrast (VNC) (e.g., fat, soft tissue, and iodine), lung PBV (e.g., air, soft tissue, and iodine), or virtual unenhanced (air, water, and iodine). Three-material decompositions may allow for use of, and distinguishing between, tissue and two or more different types of contrast agents. Any type of materials may be used in decomposition of two or more materials in the patient.

[0021] The image processor generates the material composition with a machine-learned model. In response to input of data (e.g., the spectral CT data), the machine-learned model outputs the material decomposition for each of a plurality of locations. The input is by applying values of an input feature vector, such as values of the scan data, to the machine-learned model. For example, the spectral CT scan data for a patient is input. Features derived from the spectral CT data (e.g., Haar wavelets) may be input.
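
As one illustration of derived input features, a one-level Haar transform of a reconstructed slice may be computed and stacked into the input vector. This is a minimal sketch assuming the PyWavelets package and random placeholder data; the embodiments do not prescribe a particular library:

    import numpy as np
    import pywt  # PyWavelets; the library choice is an assumption

    # Placeholder for a reconstructed 512x512 CT slice in Hounsfield units.
    slice_hu = np.random.randn(512, 512).astype(np.float32)

    # One-level 2D Haar transform: approximation plus three detail subbands.
    cA, (cH, cV, cD) = pywt.dwt2(slice_hu, "haar")

    # Stack the subbands as additional channels of the input feature vector.
    features = np.stack([cA, cH, cV, cD], axis=0)   # (4, 256, 256)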

[0022] Other data than scan data may be input. The other data may include clinical data, biochemical measurements, genetic data, patient history, family history, or other patient data. The other data may be a scan characteristic and/or injection information. The scan characteristic may be any task specific information, such as scan settings and/or the application (e.g., lung material decomposition). The injection information may be any information from the injection protocol (e.g., timing, volume, and/or type of contrast agent) or other contrast agent information where contrast agents are used.

[0023] The scan characteristic and/or injection information is used to select a material- or task-specific machine-learned model to apply and/or is used as input to the machine-learned model. The machine-learned model may have been trained based on a database of past image acquisitions covering a wide range of scan phases (i.e., relative timing of the scan to contrast agent injection) and conditions (e.g., scan settings and/or task). For each available dataset in the training data, the task-specific information and information from the injection protocols, if any, may be combined with the actual acquired images to find a correlation between the values at a given location and time and concentration values for each material at the location. In other embodiments, the scan characteristic and/or injection information is not used to select the model and/or as input to the model.
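
One simple way to realize the task-specific selection is a registry keyed by the scan characteristic. The sketch below is illustrative only; the region/application keys and model file names are hypothetical:

    # Hypothetical registry from scan characteristics to task-specific models.
    MODEL_REGISTRY = {
        ("lung", "pbv"): "models/lung_pbv_decomp.pt",
        ("liver", "vnc"): "models/liver_vnc_decomp.pt",
        ("abdomen", "iodine_map"): "models/abdomen_iodine_decomp.pt",
    }

    def select_model(body_region: str, application: str) -> str:
        """Pick the task-specific machine-learned model, with a generic
        fallback when no dedicated model exists for the task."""
        return MODEL_REGISTRY.get((body_region, application),
                                  "models/generic_decomp.pt")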

[0024] In one embodiment, the data from the scan is segmented. The segmentation isolates data representing a particular region (e.g., organ) in the patient. The segmented data is input. The segmentation may be used to identify the organ or region represented in the scan. This identification may be used as task-specific information for selecting the model and/or as an input to the model. For example, the segmentation results or identification are used to identify or help identify the materials to be decomposed (e.g., heart versus lungs and corresponding materials).

[0025] The machine-learned model generates a material decomposition. By applying the values of the input vector, the output is generated. The machine-learned model is trained to generate the output in response to the input. Machine learning uses training data of labeled or ground truth material decomposition to learn to classify (e.g., material labeling) and/or to regress (e.g., concentrations of different materials). The training data is used as knowledge of past cases to train the model. The training associates the features of the input vector with material decomposition.

[0026] Any machine learning or training may be used. A probabilistic boosting tree, support vector machine, neural network, sparse auto-encoding classifier, Bayesian network, or other now known or later developed machine learning may be used. Any semi-supervised, supervised, or unsupervised learning may be used. Hierarchical or other approaches may be used.

[0027] In one embodiment, the classification and/or regression is by a machine-learned model learned with deep learning. Any deep learning approach or architecture may be used. For example, a convolutional neural network (CNN) is used. A supervised deep learning-based CNN may be trained on images with similar materials and the Hounsfield unit (HU) values from individual voxels. Reconstructed CT images and decomposed material images as labeled voxels are used for training. As another example, the deep learning is based on a recurrent neural network (RNN) architecture. The RNN may include a long short-term memory architecture where the memory or memories are used for input data, features, and/or outputs for other locations and/or from other times relative to the injection of contrast agent. The spatial and/or temporal a priori information is used to predict the decomposition of a location, which may reduce the impact of beam hardening and statistical noise. In contrast enhanced CT images with one or more contrast agents, an RNN with the capability of using the time-dependent information from each location may help predict the concentration of contrast agent in different organs. Other neural network architectures or networks may be used, such as an image-to-image network, generative adversarial network, or U-net where scan data is input and a spatial representation is generated from deep learned features.
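
As a concrete illustration of the CNN option, the following PyTorch sketch maps energy-bin channels to material-concentration channels. The layer sizes and the bin/material counts are assumptions for illustration, not a trained architecture:

    import torch
    import torch.nn as nn

    class DecompositionCNN(nn.Module):
        """Per-pixel spectral measurements in, per-pixel material
        concentrations out; layer sizes are illustrative only."""

        def __init__(self, n_energy_bins=4, n_materials=3):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(n_energy_bins, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv2d(32, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv2d(32, n_materials, kernel_size=1),
            )

        def forward(self, x):
            # x: (batch, energy bins, H, W) -> (batch, materials, H, W)
            return self.net(x)

    model = DecompositionCNN()
    spectral = torch.randn(1, 4, 128, 128)   # placeholder spectral CT data
    concentrations = model(spectral)

The 3x3 convolutions give each output location a spatial receptive field, so the estimate for a voxel is informed by its neighbors rather than by that voxel alone.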

[0028] The neural network may include convolutional, sub-sampling (e.g., max pooling), fully connected layers, and/or other types of layers. By using convolution, the number of possible features to be tested is limited. The fully connected layers operate to fully connect the features as limited by the convolution layer after maximum pooling. Other features may be added to the fully connected layers, such as non-imaging or clinical information. Any combination of layers may be provided.

[0029] In another embodiment, sparse dictionary encoding is used for the machine learning. The machine-learned model is trained to provide a dictionary embedding. The signatures or other sparse encodings represented in the input data are learned for material decomposition, similar to a word embedding (e.g., word2vec) trained from text data.
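
A minimal sketch of the dictionary embedding idea, assuming scikit-learn and random placeholder data: each voxel's spectral measurement is coded as a sparse combination of learned spectral signatures.

    import numpy as np
    from sklearn.decomposition import SparseCoder  # library choice is an assumption

    rng = np.random.default_rng(0)

    # D: learned dictionary of spectral signatures (random placeholders here),
    # one row per atom, one column per energy bin.
    D = rng.standard_normal((16, 4))
    D /= np.linalg.norm(D, axis=1, keepdims=True)

    # X: per-voxel spectral measurements, one row per voxel.
    X = rng.standard_normal((1000, 4))

    # Code each voxel with at most two atoms, analogous to representing the
    # voxel as a mixture of a few material signatures.
    coder = SparseCoder(dictionary=D, transform_algorithm="omp",
                        transform_n_nonzero_coefs=2)
    codes = coder.transform(X)   # (1000, 16) sparse coefficients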

[0030] The machine-learned model, with or without deep learning, is trained to associate the ground truth (output material decomposition) to the extracted values of one or more features. Deep learning learns the features from the input data. Other types of machine learning may associate based on manually programmed or hand-coded input features. The machine-learning uses training data with ground truth to learn to estimate or classify based on the input vector. The resulting machine-learned model is an architecture with learned weights, kernels, connections, and/or other operations.

[0031] Figure 2 shows one embodiment for training with machine learning. The machine-learned model is an artificial intelligence (AI) 27 learned from training data. The training data includes samples from one or more sources, such as synthetically generated images 24, decomposed material maps 23 with material classification and/or concentration, and/or decomposed material maps 25 from one or more phantoms with different materials. Scan and/or injection information 26 may also be provided as an input (i.e., included for each training sample). The decomposed material maps 23 are from spectral CT scans 20 of many different patients where traditional material decomposition processes 22 are applied to the spectral CT data 21, including spatial and/or temporal distribution of measurements. The scan and/or injection information 26 is provided as part of the scan 20. To train, a large database of ground-truth labeled samples is acquired.

[0032] One of the major challenges or limitations in developing supervised deep learning models is the lack of enough annotated images for training due to the inherent difficulties of manually segmenting and labeling contrast enhanced images. In one embodiment, the machine-learned model is trained using spectral CT-based material decomposition imaging as the ground truth. Spectral CT imaging is performed. The reconstructed scan data at the different energies are used as the input samples. Traditional two or three material decomposition software is applied. Previously validated 2-material and 3-material decomposition algorithms provide sufficient supervised labeled images. Any material decomposition-based application, including iodine map, VNC, 3-material, etc., may be a reliable source of supervised training data, providing material labeling and/or concentration estimates as the ground truth.

[0033] In another embodiment, the machine-learned model is trained using CT-based material decomposition imaging of a phantom with known materials as ground truth. The training data contains images from scans of phantoms with inserts of different materials. The known materials provide the labels or ground truth for the images. The scan data used to form the images is used as the input samples. Using the phantom, the ground truth may be the known concentration of each material in each insert or the proportion of different materials in mixed inserts. The phantoms may be anthropomorphic or cylindrical. Vials with different known concentrations may be used.

[0034] In yet another embodiment, the machine-learned model is trained using synthesized CT model-based material decomposition imaging as ground truth. The scan region is modeled using known characteristics of different materials. Any physics or biomechanical model may be used. The model of the scan region may be repeated many different times with different spatial and/or temporal distributions of materials and/or concentrations. The modeling may be constrained by statistical analysis of samples from patients. A scanner simulation, such as DRASIM (Siemens), DukeSim (Duke), or CatSim (GE), optionally with a computational anthropomorphic phantom (e.g., XCAT), is performed for each model of the scan region. The result is scan data simulated from the scan region model without actual scanning of the patient. The ground truth material labels and/or concentrations are known from the scan region model. The simulated scan data and material decomposition ground truth maps form the training data. The training data is simulated, providing synthetic data from modeling rather than from scanning patients or phantoms.
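
A toy version of the synthetic pipeline, assuming a simple linear forward model with Poisson photon noise in place of a full scanner simulator such as DRASIM or CatSim; all numbers are placeholders:

    import numpy as np

    rng = np.random.default_rng(42)
    n_bins, n_materials, H, W = 4, 3, 64, 64

    # Ground-truth concentration maps, known exactly by construction.
    truth = rng.uniform(0.0, 1.0, size=(n_materials, H, W))

    # Hypothetical per-bin attenuation of each material.
    A = rng.uniform(0.1, 1.0, size=(n_bins, n_materials))

    # Ideal spectral measurements, then Poisson noise for limited photons.
    clean = np.einsum("em,mhw->ehw", A, truth)
    photons = rng.poisson(1e4 * np.exp(-clean))
    noisy = -np.log(np.maximum(photons, 1) / 1e4)

    training_sample = (noisy, truth)   # (input, ground truth) pair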

[0035] The training may use quality information. Training samples with better quality in the scan data and/or other input values and/or with better quality in the ground truth are weighted more heavily in the training than poor quality input and/or ground truth. The machine-learned model is trained with an image quality rating for training samples or ground truth. To account for image quality (e.g., image noise in the ground truth material decomposition maps), each ground truth image of material is augmented with the quantification of an "image quality" metric. Any image quality metric may be used, such as signal-to-noise ratio, level of artifact, or measure of beam hardening. The image quality rating may be computed algorithmically based on different image features (e.g., signal-to-noise ratio), annotated by a clinical team, or a combination of both. The training uses the rating in the optimization to minimize differences of the machine-learned model output from the ground truth. Where the rating is not used, the training optimizes the model to minimize differences of the estimated or classified output from the ground truth without rating-based weighting.
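
One way to realize the rating-based weighting is a per-sample weighted loss; the sketch below assumes PyTorch and a quality rating normalized to [0, 1]:

    import torch

    def quality_weighted_mse(pred, target, quality):
        """Mean squared error where each training sample contributes in
        proportion to the image quality rating of its ground truth."""
        per_sample = ((pred - target) ** 2).mean(dim=(1, 2, 3))  # (batch,)
        return (quality * per_sample).sum() / quality.sum()

    pred = torch.randn(8, 3, 64, 64)     # predicted material maps
    target = torch.randn(8, 3, 64, 64)   # ground-truth material maps
    quality = torch.rand(8)              # per-sample quality ratings
    loss = quality_weighted_mse(pred, target, quality)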

[0036] After the model is trained, the model is applied to previously unseen data. For a new patient, the scan data with or without other input information is applied to the machine-learned model. The machine-learned model outputs the material decomposition. The material decomposition is output for one or more locations. The material decomposition is output as a class membership (e.g., what material is at the location or material label) and/or a regression of concentration (e.g., amount of each material at the location).

[0037] Since the machine-learned model is trained on and uses as input a spatial and/or temporal distribution of measurements from the patient, the output for a given location at a given time is informed by information from other times and/or locations. For example, the material decomposition for one location is based on scan data (e.g., spectral CT data) for that location and a plurality of neighboring locations. The window defining the neighboring locations may be of any size. The training data contains not only the scan data for individual voxels as the input but also for neighboring voxels in the reconstructed images and ground truth. Neighboring locations in the same plane and/or neighboring locations from other slices (e.g., planar neighbor voxels as well as z-axis neighbors from the previous and next slices) are used.
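
For illustration, neighborhoods including in-plane neighbors and the previous/next slice can be gathered with a sliding window; the 3x3x3 window size is an assumption, and the data are placeholders:

    import numpy as np
    from numpy.lib.stride_tricks import sliding_window_view

    # Placeholder reconstructed spectral data: (energy bins, Z, Y, X).
    volume = np.random.randn(4, 10, 64, 64).astype(np.float32)

    # Edge-pad the spatial axes, then take a 3x3x3 neighborhood per voxel
    # (previous and next slice plus planar neighbors).
    padded = np.pad(volume, ((0, 0), (1, 1), (1, 1), (1, 1)), mode="edge")
    patches = sliding_window_view(padded, (3, 3, 3), axis=(1, 2, 3))
    # patches: (energy bins, Z, Y, X, 3, 3, 3) -- one window per voxel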

[0038] Alternatively or additionally, the output material decomposition from neighboring locations formed from the scan data is used for decomposing for a given location. In yet other embodiments, values for one or more features calculated for the other locations by or in the machine-learned model are used. The neighboring information (e.g., feature values or material decomposition) is a priori information to predict the decomposition of a location (e.g., a voxel).

[0039] Using neighboring information may reduce the impact of beam hardening and/or statistical noise. In contrast enhanced CT images with one or more contrast agents, time dependent information from each voxel may help to predict the concentration of contrast agents in different organs.

[0040] The machine-learned model is applied for different locations. For each location, the scan data from surrounding locations and/or times relative to the injection protocol are used to classify or estimate. The machine-learned model is applied to different windows of scan data to decompose the materials for different locations. Alternatively, the scan data is input for multiple locations, and the machine-learned model outputs material decomposition for the multiple locations.

[0041] In act 14 of Figure 1, the image processor generates and a display displays an image of the material decomposition for the plurality of the locations. The material decomposition image may be visualized on the CT or medical scanner or on another device, such as an imaging workstation.

[0042] The image is of one material. For example, a distribution of locations labeled for the material is displayed. Images for distribution of other materials may be displayed. Material maps for the different materials are generated as images and displayed. Figure 3 shows an example where the scan data is shown as a reconstructed grayscale image 30. The AI 27 outputs material decomposition used for three images 31, 32, 33, where one image is provided for each of two materials (iodine image 31; gadolinium image 32) and a virtual non-contrast image 33. The images 31, 32, 33 may be for three different materials. Alternatively, different materials modulate different aspects of a same image, such as using different colors for different materials. The one image shows spatial distributions of the different materials, such as spatial distribution of two different contrast agents with or without bone. Any material decomposition imaging may be used.
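
A minimal display sketch mirroring Figure 3, assuming matplotlib and placeholder maps for the iodine, gadolinium, and virtual non-contrast images:

    import numpy as np
    import matplotlib.pyplot as plt

    # Placeholder material maps as output by the machine-learned model.
    maps = {"Iodine": np.random.rand(128, 128),
            "Gadolinium": np.random.rand(128, 128),
            "Virtual non-contrast": np.random.rand(128, 128)}

    fig, axes = plt.subplots(1, 3, figsize=(12, 4))
    for ax, (title, img) in zip(axes, maps.items()):
        ax.imshow(img, cmap="gray")
        ax.set_title(title)
        ax.axis("off")
    plt.show()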

[0043] Images showing material composition at different times relative to an injection initiation may be generated and displayed. A video of material decomposition as a function of time may be provided. The variation over time of material decomposition for a single location or line of locations may be displayed as a graph or graphs.

[0044] Quantities may be calculated from the material decompositions, such as an area or volume. The quantities may be displayed with or separately from the images. The material decomposition image or images may be overlaid on or displayed adjacently to a CT image of the patient tissue without material decomposition.

[0045] The material decomposition and/or images generated from the material decomposition may be transmitted to the display (e.g., a monitor, workstation, printer, handheld, or computer). Alternatively or additionally, the transmission is to a memory, such as a database of patient records, or to a network, such as a computer network.

[0046] In one embodiment, the machine-learned model is used for part of material decomposition, and spectral CT (i.e., traditional or algorithm-based) material decomposition is used for another part of the material decomposition. Both may be used independently. The resulting decompositions are displayed adjacent to each other for comparison. Alternatively, the results are averaged or combined to form the decomposition image or images. Both approaches may be used in a hybrid. For example, the algorithm-based material decomposition is applied. The machine-learned model is trained to refine the material decomposition from the algorithm-based material decomposition results with or without input of other data (e.g., scan data). In any of these or other combinations, the resulting images are a function of output of the spectral CT material decomposition and the machine-learned model.
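
Of the combinations above, averaging is the simplest; the sketch below blends the two outputs with a fixed weight, which is an illustrative choice rather than a prescribed value:

    import numpy as np

    def combine_decompositions(algo_maps, ml_maps, weight=0.5):
        """Blend traditional algorithm-based material maps with the
        machine-learned model's maps; the fixed weight is illustrative."""
        return weight * algo_maps + (1.0 - weight) * ml_maps

    algo_maps = np.random.rand(3, 128, 128)  # placeholder traditional output
    ml_maps = np.random.rand(3, 128, 128)    # placeholder model output
    hybrid = combine_decompositions(algo_maps, ml_maps)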

[0047] Figure 4 shows a system for material decomposition. The system implements the method of Figure 1 or another method to output material composition for an interior region of a patient, such as shown in Figure 3. An AI 27 is used to perform material decomposition from data for a patient.

[0048] The system includes a medical scanner 40, an image processor 42, a memory 44, a graphical user interface (GUI) 47 with a user input 48 and a display 49, and one or more machine-learned models as the AI 27. Additional, different, or fewer components may be provided. For example, a network or network connection is provided, such as for networking with a medical imaging network or data archival system or networking between the scanner 40 and the image processor 42. In another example, the user input 48 is not provided. As another example, a server is provided for implementing the image processor 42 and/or AI 27 remotely from the scanner 40.

[0049] The image processor 42, memory 44, user input 48, display 49, and/or AI 27 are part of the medical scanner 40. Alternatively, the image processor 42, memory 44, user input 48, display 49, and/or AI 27 are part of an archival and/or image processing system, such as associated with a medical records database workstation or server, separate from the CT scanner 40. In other embodiments, the image processor 42, memory 44, user input 48, display 49, and/or AI 27 are a personal computer, such as desktop or laptop, a workstation, a server, a network, or combinations thereof.

[0050] The medical scanner 40 is a medical diagnostic imaging scanner for material decomposition. For example, the medical scanner 40 is a spectral CT scanner operable to transmit and/or detect radiation at different energies. A gantry supports a source or sources of x-rays and supports a detector or detectors on opposite sides of a patient examination space from the source or sources. The gantry moves the source(s) and detector(s) about the patient to perform a CT scan. Various x-ray projections are acquired by the detector from different positions relative to the patient. Computed tomography solves for the two- or three-dimensional distribution of the response from the projections, reconstructing the scan data into a spatial distribution of density or attenuation of the patient.

[0051] The medical scanner 40 is configured by an application and/or settings to output data representing an interior region of the patient. The medical scanner 40 scans the patient. After reconstruction (e.g., computed tomography or Fourier transform), scan data representing different locations in the patient is provided by the medical scanner 40. By scanning at different times or in an on-going manner, scan data representing the patient at different times may be generated.

[0052] The memory 44 may be a graphics processing memory, a video random access memory, a random-access memory, system memory, cache memory, hard drive, optical media, magnetic media, flash drive, buffer, database, combinations thereof, or other now known or later developed memory device for storing data. The memory 44 is part of the medical scanner 40, part of a computer associated with the image processor 42, part of a database, part of another system, a picture archival memory, or a standalone device.

[0053] The memory 44 stores patient data, such as scan data, scan characteristics (e.g., settings or other task-specific information), and/or injection information. Any of the patient data discussed herein may be stored, such as values for features in the machine-learned model, material decompositions, and/or images. Training data may be stored. Model parameters or values for generating the training data may be stored. The memory 44 alternatively or additionally stores weights, connections, filter kernels, and/or other information embodying one or more machine-learned models (e.g., the AI 27). The memory 44 may alternatively or additionally store data during processing, such as storing information discussed herein or links thereto.

[0054] The memory 44 or other memory is alternatively or additionally a non-transitory computer readable storage medium storing data representing instructions executable by the programmed image processor 42 or a processor implementing the AI 27. The instructions for implementing the processes, methods and/or techniques discussed herein are provided on non-transitory computer-readable storage media or memories, such as a cache, buffer, RAM, removable media, hard drive or other computer readable storage media. Non-transitory computer readable storage media include various types of volatile and nonvolatile storage media. The functions, acts or tasks illustrated in the figures or described herein are executed in response to one or more sets of instructions stored in or on computer readable storage media. The functions, acts or tasks are independent of the particular type of instructions set, storage media, processor or processing strategy and may be performed by software, hardware, integrated circuits, firmware, micro code and the like, operating alone, or in combination. Likewise, processing strategies may include multiprocessing, multitasking, parallel processing, and the like.

[0055] In one embodiment, the instructions are stored on a removable media device for reading by local or remote systems. In other embodiments, the instructions are stored in a remote location for transfer through a computer network or over telephone lines. In yet other embodiments, the instructions are stored within a given computer, CPU, GPU, or system.

[0056] The image processor 42 is a general processor, central processing unit, control processor, graphics processor, digital signal processor, three-dimensional rendering processor, application specific integrated circuit, field programmable gate array, artificial intelligence processor, digital circuit, analog circuit, combinations thereof, or other now known or later developed device for applying the AI 27 to scan data and generating a classification or estimation of material composition. The image processor 42 is a single device or multiple devices operating in serial, parallel, or separately. The image processor 42 may be a main processor of a computer, such as a laptop or desktop computer, or may be a processor for handling some tasks in a larger system, such as in the medical scanner 40. The image processor 42 is configured by instructions, design, hardware, and/or software to perform the acts discussed herein.

[0057] The image processor 42 is configured to apply the AI 27 to decompose scan data into two or more materials represented by the scan data. The AI 27 uses the data from neighboring ones of the locations in the decomposition to find the two or more materials for each of the locations. The AI 27 was trained to decompose for two, three, or more materials. The materials may depend on the application or medical pathology of interest. The AI 27 for that pathology or application may be selected and used to decompose for the appropriate or task-specific materials.

[0058] The AI 27 uses learned knowledge to decompose. The AI 27 is a machine-learned model, such as a machine-learned neural network, and was trained from synthetically generated material decomposition using a model of the medical scanner, scanning a phantom of known materials, or both the synthetically generated material decomposition using the model and scanning the phantom. The training data may additionally or alternatively be from an algorithm for material decomposition used in medical imaging.

[0059] The image processor 42 is configured to apply the input feature vector to the AI 27. The image processor 42 may be configured to calculate values for features and input the values to the AI 27 or use the values as part of the AI 27. The AI 27 is implemented by the image processor 42 or other processor with access to the definition or learned parameters defining the AI 27 stored in the memory 44 or other memory.

[0060] The image processor 42, using the AI 27, is configured to output a material decomposition for one or more locations. For example, different combinations of the input data corresponding to different windows of locations are used to determine the material decomposition for each of the sample or scan locations, voxel locations, pixel locations, or locations in a region of interest. The image processor 42 is configured to generate one or more images of material composition.

[0061] The image processor 42 may be configured to generate the graphical user interface (GUI) 47 for input of values or data and/or for material decomposition images. The GUI 47 includes one or both of the user input 48 and the display 49. The GUI 47 provides for user interaction with the image processor 42, medical scanner 40, and/or AI 27. The interaction is for inputting information (e.g., selecting patient files) and/or for reviewing output information (e.g., viewing material decomposition images). The GUI 47 is configured (e.g., by loading an image into a display plane memory) to display the material decomposition images.

[0062] The user input 48 is a keyboard, mouse, trackball, touch pad, buttons, sliders, combinations thereof, or other input device. The user input 48 may be a touch screen of the display 49. User interaction is received by the user input 48, such as a designation of a region of tissue (e.g., a click or click and drag to place a region of interest). Other user interaction may be received, such as for activating the material decomposition.

[0063] The display 49 is a monitor, LCD, projector, plasma display, CRT, printer, or other now known or later developed device for outputting visual information. The display 49 receives images of one or more materials from the material decomposition. The material decomposition images are displayed on the display 49. Graphics, text, quantities, spatial distribution of anatomy or function, or other information from the image processor 42, memory 44, medical scanner 40, or AI 27 may be displayed.

[0064] While the invention has been described above by reference to various embodiments, it should be understood that many changes and modifications can be made without departing from the scope of the invention. It is therefore intended that the foregoing detailed description be regarded as illustrative rather than limiting, and that it be understood that it is the following claims, including all equivalents, that are intended to define the spirit and scope of this invention.


Claims

1. A method for material decomposition in a medical imaging system, the method comprising:

scanning a patient with a spectral computed tomography (CT) system, the scanning providing spectral CT data representing the patient;

generating, by a machine-learned model in response to the spectral CT data, the material decomposition for each of a plurality of locations;

displaying an image of the material decomposition for the plurality of the locations.


 
2. The method according to claim 1, wherein scanning comprises scanning with the spectral CT system using dual energy or photon counting.
 
3. The method according to claim 1 or 2, wherein generating comprises generating with the machine-learned model having been trained using CT-based material decomposition imaging as ground truth,
and/or
wherein generating comprises generating with the machine-learned model having been trained using CT-based material decomposition imaging of a phantom with known materials as ground truth.
 
4. The method according to any of the preceding claims, wherein generating comprises generating with the machine-learned model having been trained using synthesized CT model-based material decomposition imaging as ground truth.
 
5. The method according to any of the preceding claims, wherein generating comprises generating by the machine-learned model in response to the spectral CT data, scan characteristic, and injection information,
and/or
wherein generating comprises generating by the machine-learned model having been trained with an image quality rating for training samples.
 
6. The method according to any of the preceding claims, wherein generating comprises generating with the machine-learned model comprising a convolutional neural network,
and/or
wherein generating comprises generating with the machine-learned model comprising a recurrent neural network having a long short-term memory.
 
7. The method according to any of the preceding claims, wherein generating comprises generating for each of the plurality of the locations based on the spectral CT data for the location and a plurality of neighboring ones of the locations,
and/or
wherein generating comprises generating by the machine-learned model having been trained with a dictionary embedding.
 
8. The method according to any of the preceding claims, further comprising performing spectral CT material decomposition imaging, wherein the material decomposition is a function of output of the spectral CT material decomposition and the machine-learned model.
 
9. The method according to any of the preceding claims, wherein generating comprises generating the material decomposition as a decomposition of three or more materials, in particular
wherein generating comprises generating concentrations at each of the locations for the three or more materials, and wherein displaying comprises displaying the image as a first material map for one of the three or more materials and displaying second and third material maps as other images for others of the three or more materials.
 
10. A system for material decomposition, the system comprising:

a computed tomography scanner for scanning a patient, the computed tomography scanner configured to output data representing an interior region of the patient;

an image processor configured to apply an artificial intelligence to decompose two or more materials represented by the data; and

a display configured to display an image for at least one of the two or more materials.


 
11. The system according to claim 10, wherein the computed tomography scanner comprises a spectral computed tomography scanner.
 
12. The system according to claim 10 or 11, wherein the data represents a plurality of locations of the interior region, and wherein the artificial intelligence uses the data from neighboring ones of the locations in the decomposition of the two or more materials for each of the locations.
 
13. The system according to any of the preceding claims 10 to 12, wherein the artificial intelligence was trained to decompose the two or more materials as at least three materials.
 
14. The system according to any of the preceding claims 10 to 13, wherein the artificial intelligence comprises a machine-learned neural network having been trained from synthetically generated material decomposition using a model of the medical scanner, scanning a phantom of known materials, or both the synthetically generated material decomposition using the model and scanning the phantom.
 
15. A method for material decomposition in a computed tomography imaging system, the method comprising:

scanning a patient with the computed tomography imaging system, the scanning providing data representing the patient;

generating, by a machine-learned model in response to the data, the material decomposition for each of a plurality of locations based on the data for neighboring ones of the locations, the material decomposition comprising a three or more material decomposition;

displaying an image or images of the material decomposition for the plurality of the locations.


 




Drawing










Search report








