BACKGROUND OF THE INVENTION
Field of the Invention
[0001] The present invention relates to a stereoscopic visualization system for an endoscope
and, more particularly, to a stereoscopic visualization system for an endoscope that uses
a shape-from-shading algorithm to generate stereoscopic images.
Description of Related Art
[0002] Minimally invasive surgery has become an indispensable part of modern surgical treatment.
It can be performed with endoscope-assisted surgical instruments that allow smaller incisions
and less tissue trauma, thereby shortening the patient's recovery time and reducing overall
medical expense. However, conventional minimally invasive surgery employs a monoscopic
endoscope, which only displays two-dimensional (2D) images lacking depth information.
It is therefore challenging for a surgeon to accurately move surgical instruments to a
correct location inside a patient's body. Surgeons usually perceive depth in 2D images
through motion parallax, monocular cues and other indirect evidence in order to position
instruments accurately. Providing stereoscopic images that convey depth perception directly,
without relying on such indirect means, remains the best way to resolve this positioning
inaccuracy, but it conventionally requires a dual-camera endoscope. Despite the surgeons'
demand for depth information and stereoscopic images, the dual-camera endoscope has the
drawback of being much more expensive than the monoscopic endoscope and is accordingly
less widely accepted.
SUMMARY OF THE INVENTION
[0003] An objective of the present invention is to provide a stereoscopic visualization
system and a stereoscopic visualization method that are capable of providing stereoscopic
images with a monoscopic endoscope through a shape-from-shading algorithm.
[0004] To achieve the foregoing objective, the stereoscopic visualization system for endoscope
using shape-from-shading algorithm includes a monoscopic endoscope, a three-dimensional
(3D) display, and an image conversion device.
[0005] The monoscopic endoscope may capture two-dimensional (2D) images.
[0006] The image conversion device may be connected between the monoscopic endoscope and
the 3D display and may have an input port for endoscope, a 2D-to-3D conversion unit, and
an image output port.
[0007] The input port for endoscope may be connected to the monoscopic endoscope to receive
the 2D image from the monoscopic endoscope.
[0008] The 2D-to-3D conversion unit may apply a shape-from-shading algorithm adapted to calculate
a direction of a light source for the 2D image, may calculate a depth map based upon
information of light distribution and shading of the 2D image, and may apply a
depth-image-based rendering algorithm to convert the 2D image into a stereoscopic image
using the depth map derived from the information of light distribution and shading of the 2D image.
[0009] The image output port may be connected with the 2D-to-3D image conversion unit and
the 3D display to receive the stereoscopic image and display the stereoscopic image on the
3D display.
[0010] To achieve the foregoing objective, the stereoscopic visualization method for endoscope
using shape-from-shading algorithm includes steps of:
capturing a two-dimensional (2D) image, wherein an image-capturing unit is used to
acquire a 2D image from a monoscopic endoscope with illumination from a light source;
calculating a light direction and a camera position for the 2D image;
generating a depth map of the 2D image using the shape-from-shading algorithm, wherein
the shape-from-shading algorithm combines the light direction and an iterative approach
to solve equations involving a gradient variation of pixel intensity values in the
2D image; and
generating a stereoscopic image by combining the depth map and the 2D image.
[0011] Given the foregoing stereoscopic visualization system and method using the shape-from-shading
algorithm, the 2D image taken by the monoscopic endoscope is processed by the shape-from-shading
algorithm to calculate depth information and generate a depth map, and the 2D image together
with the depth map forms the stereoscopic image that is outputted to the 3D display for
users to view. As there is no need to replace the monoscopic endoscope with a dual-lens
endoscope or to modify the hardware structure of the existing monoscopic endoscope, the
issues of a monoscopic endoscope providing no stereoscopic images and of a costly dual-lens
endoscope, both encountered when stereoscopic images are demanded, can be resolved.
[0012] Other objectives, advantages and novel features of the invention will become more
apparent from the following detailed description when taken in conjunction with the
accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[0013]
FIG. 1 is a functional block diagram of a stereoscopic visualization system for endoscope
using shape-from-shading algorithm in accordance with the present invention.
FIG. 2 is a flow diagram of a stereoscopic visualization method for endoscope using
shape-from-shading algorithm in accordance with the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
[0014] With reference to FIG. 1, a stereoscopic visualization system for endoscope using
shape-from-shading algorithm in accordance with the present invention includes a monoscopic
endoscope 20, a three-dimensional (3D) display 30, and an image conversion device
10.
[0015] The image conversion device 10 is connected between the monoscopic endoscope 20 and
the 3D display 30, and has an input port for endoscope 11, a 2D-to-3D image conversion
unit 12, and an image output port 13. The input port for endoscope 11 is connected
to the monoscopic endoscope 20. The 2D-to-3D image conversion unit 12 is electrically
connected to the input port for endoscope 11, acquires a 2D image from the monoscopic
endoscope 20, generates a depth map of the 2D image using the shape-from-shading algorithm
built into the 2D-to-3D image conversion unit 12, and converts the 2D image and the depth
map into a stereoscopic image. The image output port 13 is electrically connected
to the 2D-to-3D image conversion unit 12 and to the 3D display 30, and outputs the
stereoscopic image to the 3D display 30 such that the 3D display 30 displays the converted
stereoscopic image.
[0016] With reference to FIG. 2, a stereoscopic visualization method for endoscope using
shape-from-shading algorithm in accordance with the present invention is performed
by the 2D-to-3D image conversion unit 12 to convert the 2D images from the monoscopic
endoscope 20 into the stereoscopic images, and includes the following steps.
[0017] Step S1: Calibrating a camera of the monoscopic endoscope. With reference to
"Image processing, analysis and machine vision, 2nd edition, vol. 68, PWS, 1998, pp. 448-457",
a camera calibration method is used to calculate intrinsic parameters of the camera
of the monoscopic endoscope. The camera calibration method estimates the camera pose
by rotating and displacing a calibration template, and solves a nonlinear equation
to obtain the intrinsic parameters and the extrinsic parameters.
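By way of illustration only, a calibration of the kind referred to in step S1 is commonly performed with a planar checkerboard template. The following minimal sketch assumes Python with OpenCV and NumPy, a 9x6 checkerboard and a 2 mm square size; none of these choices is prescribed by the cited calibration method.

```python
# Minimal camera-calibration sketch (assumed Python/OpenCV, planar checkerboard template).
import cv2
import numpy as np

def calibrate_endoscope_camera(images, board_size=(9, 6), square_size_mm=2.0):
    """Estimate intrinsic and extrinsic parameters from views of a rotated/translated checkerboard."""
    # 3D coordinates of the checkerboard corners in the template plane (Z = 0).
    object_points = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    object_points[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square_size_mm

    obj_pts, img_pts = [], []
    image_size = None
    for image in images:
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        image_size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, board_size)
        if found:
            corners = cv2.cornerSubPix(
                gray, corners, (11, 11), (-1, -1),
                (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
            obj_pts.append(object_points)
            img_pts.append(corners)

    # Solve the nonlinear calibration problem for intrinsics (camera matrix, distortion)
    # and per-view extrinsics (rotation and translation of the template).
    rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, image_size, None, None)
    return camera_matrix, dist_coeffs, rvecs, tvecs
```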
[0018] Step S2: Capturing a 2D image. An image-capturing device is used to acquire a 2D
image from the camera of the monoscopic endoscope. The image-capturing device may have a
standard-definition (SD) or high-definition (HD) resolution. The camera of the monoscopic
endoscope may have a 30-degree lens or a wide-angle lens.
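Purely as an example of step S2, a frame could be acquired through a generic video-capture interface. The sketch below assumes Python with OpenCV and that the endoscope camera appears as an ordinary capture device at index 0; the device index and the requested resolution are illustrative assumptions.

```python
# Frame-acquisition sketch (assumed Python/OpenCV; device index and resolution are illustrative).
import cv2

def capture_endoscope_frame(device_index=0, width=1280, height=720):
    """Grab one SD/HD frame from the monoscopic endoscope camera."""
    capture = cv2.VideoCapture(device_index)
    capture.set(cv2.CAP_PROP_FRAME_WIDTH, width)    # request an HD frame width (driver may fall back)
    capture.set(cv2.CAP_PROP_FRAME_HEIGHT, height)  # request an HD frame height
    ok, frame = capture.read()
    capture.release()
    if not ok:
        raise RuntimeError("no frame received from the endoscope camera")
    return frame  # BGR image as a NumPy array
```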
[0019] Step S3: Generating a depth map of the 2D image using the shape-from-shading algorithm,
which combines the calculated light direction with an iterative approach to solve equations
involving a gradient variation of pixel intensity values in the 2D image.
[0020] The shape-from-shading algorithm can be described by the following calculation of
the light distribution of a light source.
[0021] Assume that a camera is located at C(α, β, γ), which can be pre-determined with the
illumination position estimation. Given the coordinates of each pixel x = (x, y) in the
2D image, a surface normal n and a light vector l at a 3D point M corresponding to the
pixel x can be represented as:

where u(x) is the depth at the point x and u_x, u_y are the spatial derivatives.
[0022] Hence, an image irradiance equation can be expressed as follows in terms of the proposed
parametrizations of l and n, without ignoring the distance attenuation term between the
light source and the surface reflection, to solve a conventional Lambertian SFS
(shape-from-shading) model:

where ρ is a surface albedo.
[0023] After the substitution v = ln u is performed, a Hamiltonian, which is known as a
spatial transformation between the position of the camera and the light source, can be
obtained as follows:

where

[0024] The depth map of the image caused by the light distribution can thus be generated
after iterations of calculation of the foregoing equations. Because the light vector and
the camera position vector are almost the same, they can be simplified to be the same vector.
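Because the equation images of paragraphs [0021] to [0023] are not reproduced above, the following LaTeX sketch records one standard form of a perspective Lambertian shape-from-shading model with the point light source co-located with the camera and 1/r² distance attenuation, in the spirit of the model outlined here. The focal length f, the surface parameterization and the exact constants are assumptions for illustration and may differ from the formulation intended in those paragraphs.

```latex
% Illustrative perspective SFS model (assumed form; light source at the camera, 1/r^2 attenuation).
\begin{align*}
% Surface point seen through pixel x = (x_1, x_2), with focal length f and depth u(x):
M(x) &= \frac{f\,u(x)}{\sqrt{|x|^2 + f^2}}\,(x,\,-f), \qquad
Q(x) = \frac{f}{\sqrt{|x|^2 + f^2}},\\[4pt]
% Surface normal (up to scale) and unit light vector at M(x):
n(x) &\propto \Bigl(f\,\nabla u(x) - \frac{f\,u(x)}{|x|^2 + f^2}\,x,\;
      \nabla u(x)\cdot x + \frac{f^2\,u(x)}{|x|^2 + f^2}\Bigr), \qquad
l(x) = \frac{(-x,\,f)}{\sqrt{|x|^2 + f^2}},\\[4pt]
% Image irradiance equation with albedo rho and distance attenuation:
I(x) &= \frac{\rho\,Q(x)}
            {f^2\,u(x)\,\sqrt{f^2|\nabla u(x)|^2 + (\nabla u(x)\cdot x)^2 + u(x)^2 Q(x)^2}},\\[4pt]
% After the substitution v = ln u, a static Hamiltonian is obtained and solved iteratively:
0 &= H(x, v, \nabla v)
   = \frac{I(x)\,f^2}{\rho}\, e^{2v(x)}
     \sqrt{f^2|\nabla v(x)|^2 + (\nabla v(x)\cdot x)^2 + Q(x)^2} - Q(x).
\end{align*}
```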
[0025] Step S4: Creating a disparity map using the depth map. The depth map is a gray-level
image containing information relating to the distance of scene objects in the 2D image
from a viewpoint. During the course of converting the depth map into a 3D stereo image
pair, a disparity map is generated. Disparity values in the disparity map are inversely
proportional to the corresponding pixel intensity values of the depth map and are
proportional to a focal length of the camera of the monoscopic endoscope and an
interorbital width of a viewer.
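For illustration only, the proportionality described in step S4 could be implemented as follows. The sketch assumes Python with NumPy, a depth map whose pixel values encode distance, and hypothetical pixel-unit values for the focal length and the interorbital width.

```python
# Disparity-from-depth sketch (assumed Python/NumPy; parameter values are hypothetical).
import numpy as np

def depth_to_disparity(depth_map, focal_length_px, interorbital_width_px, min_depth=1e-6):
    """Disparity is inversely proportional to depth and proportional to the focal length
    of the endoscope camera and the viewer's interorbital width (both in pixel units here)."""
    depth = np.maximum(depth_map.astype(np.float32), min_depth)   # guard against division by zero
    return focal_length_px * interorbital_width_px / depth        # disparity map in pixels

# Example with hypothetical values: a 700-pixel focal length and a 60-pixel interorbital width.
# disparity = depth_to_disparity(depth_map, focal_length_px=700.0, interorbital_width_px=60.0)
```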
[0026] Step S5: Generating a left image and a right image for stereo vision. The disparity
map acquired during the course of converting the depth map into the 3D stereo image pair
is used to generate a left-eye image and a right-eye image. Each disparity value in the
disparity map represents the distance between two corresponding points in the left-eye
image and the right-eye image of the 3D stereo image pair. The generated left-eye image
and right-eye image can be further processed into various 3D display formats, such as
side-by-side, interlaced and other 3D display formats, for corresponding 3D displays.
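As a rough illustration of step S5, the sketch below renders the left-eye and right-eye images by shifting each pixel horizontally by half of its disparity in opposite directions and filling the resulting holes from the neighbouring pixel. Python/NumPy and the symmetric half-shift convention are assumptions, and many other depth-image-based rendering schemes are possible.

```python
# Depth-image-based rendering sketch (assumed Python/NumPy; symmetric half-disparity shifts).
import numpy as np

def render_stereo_pair(image, disparity):
    """Generate a left-eye and a right-eye view from one 2D image and its disparity map."""
    height, width = disparity.shape
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    columns = np.arange(width)
    for row in range(height):
        shift = (disparity[row] / 2.0).astype(np.int32)
        left_cols = np.clip(columns + shift, 0, width - 1)    # shift pixels right for the left eye
        right_cols = np.clip(columns - shift, 0, width - 1)   # shift pixels left for the right eye
        left[row, left_cols] = image[row, columns]
        right[row, right_cols] = image[row, columns]
    # Very simple hole filling: propagate the previous valid pixel along each row.
    for view in (left, right):
        for row in range(height):
            for col in range(1, width):
                if not view[row, col].any():
                    view[row, col] = view[row, col - 1]
    return left, right

# Example: pack the pair into a side-by-side frame for a 3D display.
# side_by_side = np.concatenate(render_stereo_pair(frame, disparity), axis=1)
```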
[0027] As can be seen from the foregoing description, the depth information can be calculated
from the 2D image by using the shape-from-shading algorithm. After generation of the depth
map, the 2D image can be combined with the depth map to generate a corresponding stereoscopic
image without replacing the conventional monoscopic endoscope with a dual-lens endoscope
or altering the hardware structure of the conventional monoscopic endoscope. Accordingly,
the issues arising from the conventional monoscopic endoscope providing no 3D stereo images
and from the costly dual-lens endoscope can be resolved.
[0028] Even though numerous characteristics and advantages of the present invention have
been set forth in the foregoing description, together with details of the structure
and function of the invention, the disclosure is illustrative only. Changes may be
made in detail, especially in matters of shape, size, and arrangement of parts within
the principles of the invention to the full extent indicated by the broad general
meaning of the terms in which the appended claims are expressed.
1. A stereoscopic visualization system for endoscope using shape-from-shading algorithm,
comprising:
a monoscopic endoscope (20) capturing the two-dimensional (2D) images;
a three-dimensional (3D) display (30); and
an image conversion device (10) connected between the monoscopic endoscope (20) and
the 3D display (30), and having:
an input port for endoscope (11) connected to the monoscopic endoscope (20) to receive
the 2D image from the monoscopic endoscope (20);
a 2D-to-3D conversion unit (12) applying a shape from shading algorithm adapted to
calculate a direction of a light source for the 2D image, and calculating a depth
map based upon information of light distribution and shading of the 2D image, and
applying a depth image based rendering algorithm to convert the 2D image to a stereoscopic
image with the information of light distribution and shading of the 2D image; and
an image output port (13) connected with the 2D-to-3D image conversion unit and the
3D display (30) to receive the stereo images and display the stereo image on the 3D
display (30).
2. A stereoscopic visualization method for endoscope using shape-from-shading algorithm,
comprising steps of:
capturing a two-dimensional (2D) image, wherein an image-capturing unit is used to
acquire a 2D image from a monoscopic endoscope with illumination from a light source;
calculating a light direction and a camera position for the 2D image;
generating a depth map of the 2D image using shape-from-shading algorithm, wherein
the shape-from-shading algorithm combines the light direction and an iterative approach
to solve equations involving a gradient variation of pixel intensity values in the
2D image; and
generating a stereoscopic image by combining the depth map and the 2D image.
3. The stereoscopic visualization method of claim 2, wherein the shape-from-shading algorithm
is based on calculation of light distribution of a light source as follows:
assume that a camera is located at C(α, β, γ), which can be pre-determined with an
illumination position estimation, a set of coordinates of each pixel x = (x, y) in the 2D image, a surface normal n and a light vector l at a 3D point corresponding
to the pixel x of the 2D image are represented as:


where u(x) is a depth at point x and u_x, u_y are spatial derivatives;
an image irradiance equation is expressed as follows in terms of the light vector
l and the surface normal n without ignoring distance attenuation between the light
source and surface reflection to solve a Lambertian SFS (Shape-from-shading) model:

where ρ is a surface albedo;
after the substitution v = ln u is performed, a Hamiltonian, which is known as a spatial transformation between the
position of the camera and the light source, is obtained as follows:

where

the depth map of the 2D image caused by light distribution is generated after iterations
of calculation of the foregoing equations, and the light vector and the camera position
vector are simplified to be the same vector.
4. The stereoscopic visualization method of claim 2, wherein the stereoscopic image is
generated according to the depth image based rendering algorithm to provide different
views of the 2D image with the 2D image and the depth map.
Amended claims in accordance with Rule 137(2) EPC.
1. A stereoscopic visualization system for endoscope using shape-from-shading algorithm,
comprising:
a monoscopic endoscope (20) capturing the two-dimensional (2D) images;
a three-dimensional (3D) display (30); and
an image conversion device (10) connected between the monoscopic endoscope (20) and
the 3D display (30), and having:
an input port for endoscope (11) connected to the monoscopic endoscope (20) to receive
the 2D image from the monoscopic endoscope (20);
a 2D-to-3D conversion unit (12) applying a shape from shading algorithm adapted to
calculate a direction of a light source for the 2D image, and calculating a depth
map based upon information of light distribution and shading of the 2D image, and
applying a depth image based rendering algorithm to convert the 2D image to a stereoscopic
image with the depth map, wherein a disparity map is created by using the depth map,
disparity values in the disparity map are inversely proportional to the corresponding
pixel intensity values of the depth maps and are proportional to a focal length of
the monoscopic endoscope (20) and an interorbital width of the 3D display (30), and
the stereoscopic image is generated by using the disparity map; and
an image output port (13) connected with the 2D-to-3D image conversion unit and the
3D display (30) to receive the stereo images and display the stereo image on the 3D
display (30).
2. A stereoscopic visualization method for endoscope using shape-from-shading algorithm,
comprising steps of:
capturing a two-dimensional (2D) image, wherein an image-capturing unit is used to
acquire a 2D image from a monoscopic endoscope with illumination from a light source;
calculating a light direction and a camera position for the 2D image;
generating a depth map of the 2D image using shape-from-shading algorithm, wherein
the shape-from-shading algorithm combines the light direction and an iterative approach
to solve equations involving a gradient variation of pixel intensity values in the
2D image;
using the depth map to create a disparity map, wherein disparity values in the disparity
map are inversely proportional to the corresponding pixel intensity values of the
depth maps and are proportional to a focal length of the monoscopic endoscope and
an interorbital width of the 3D display;
generating a stereoscopic image by combining the depth map and the 2D image, wherein
the stereoscopic image is generated by using the disparity map.
3. The stereoscopic visualization method of claim 2, wherein the shape-from-shading algorithm
is based on calculation of light distribution of a light source as follows:
assume that a camera is located at C(α, β, γ), which can be pre-determined with an
illumination position estimation, a set of coordinates of each pixel x = (x, y) in the 2D image, a surface normal n and a light vector l at a 3D point corresponding to the pixel x of the 2D image are represented as:


where u(x) is a depth at point x and u_x, u_y are spatial derivatives;
an image irradiance equation is expressed as follows in terms of the light vector
l and the surface normal n without ignoring distance attenuation between the light source and surface reflection
to solve a Lambertian SFS (Shape-from-shading) model:

where ρ is a surface albedo;
after the substitution v = ln u is performed, a Hamiltonian, which is known as a spatial transformation between the
position of the camera and the light source, is obtained as follows:

where

the depth map of the 2D image caused by light distribution is generated after iterations
of calculation of the foregoing equations, and the light vector and the camera position
vector are simplified to be the same vector.
4. The stereoscopic visualization method of claim 2, wherein the stereoscopic image is
generated according to the depth image based rendering algorithm to provide different
views of the 2D image with the 2D image and the depth map.