BACKGROUND OF THE INVENTION
1. Field of the Invention
[0001] The invention relates generally to graphics display systems and more particularly
to rendering images in a graphics display system.
2. Description of the Related Art
[0002] In a typical graphics processing system, a graphics processor processes geometric
and color information so as to render pixel information used to control the illumination
of individual pixels on a graphics display screen. Ordinarily, for each pixel on a
graphics display screen there exists at least one pixel storage element for storing
pixel information used to control the illumination of that pixel.
[0003] For example, referring to the illustrative drawing of Figure 1 there is shown a typical
earlier graphics processing system 20. The system 20 includes a graphics processor
22 which receives geometric and color information on line 24, processes the graphics
information, and provides pixel information to a memory system 26. The memory system
26, in turn, provides the stored pixel information to a video digital-to-analog converter
30. The converter 30 converts the stored pixel information for each pixel into video
signals used by a video display 34 to produce a visual image on a graphics display
screen 38.
[0004] The graphics display screen 38 comprises a two-dimensional grid which includes an
N×M array of pixels, where N×M usually is on the order of 1280 × 1024. The memory
system 26 includes a plurality of pixel storage elements (not shown). Each pixel storage
element in the memory system 26 corresponds to a respective pixel on the graphics
display screen 38. Furthermore, each pixel storage element stores multiple bits of
information such as, for example, color information which determines the color of
illumination of a corresponding pixel on the display screen 38; or depth information
which indicates the depth from a viewpoint. Thus, there is a correspondence between
the multiple bit pixel storage elements of the memory system 26 and pixels of the
NxM array of pixels of the display screen 38.
[0005] Generally, in order to produce an image of a line segment on the graphics display
screen 38, for example, geometric information in the form of the (x,y) coordinates
of the pixels on the display screen 38 that contain the end-points of a line segment
to be drawn are provided to the graphics processor 22 together with the color information
for the two end-points. The geometric and color information is processed so as to
render pixel image information which is stored in pixel storage elements of the memory
system 26 that correspond to the pixels of the display screen 38 to be illuminated
to portray the line segment.
[0006] A problem that frequently has been encountered in displaying a line segment by illuminating
individual pixels of a display screen 38 is the appearance of a staircase effect.
The illustrative drawings of Figure 2 show an example of a line segment having end-points
Pa and Pb which is displayed by illuminating the shaded pixels. The staircase effect is readily
apparent in the shaded pixels of line segment PaPb.
[0007] One approach to avoiding the staircase effect in a line segment has been to gradually
decrease the illumination of pixels used to portray the line segment such that pixels
disposed farther from the actual line segment do not appear as bright as those closer
to it. In this manner, the staircase effect is made less noticeable to the eye. The illustrative
drawing of Figure 3 shows a line segment in which the appearance of the staircase
effect is diminished using such gradual shading techniques.
[0008] While earlier techniques for reducing a staircase effect generally have been acceptable,
there are shortcomings with their use. More specifically, such earlier techniques
often have not been readily susceptible to highly parallel processing in hardware.
In order to rapidly process pixel information for a huge number of pixels, it often
is desirable to simultaneously (in parallel) process pixel information for multiple
pixels. Furthermore, in order to provide smooth animation of images, the pixel information
must be periodically updated usually at a very high rate, typically on the order of
ten times per second. Parallel processing supports such high speed periodic updating.
[0009] One earlier approach to reducing the staircase effect, for example, has been to provide
a set of look-up tables which contain pixel information that can be retrieved for
storage in pixel storage elements. According to this earlier technique, for each pixel,
a computer software program retrieves pixel information from such look-up tables based
upon factors such as the slope of a line segment to be portrayed and the distance
of such a pixel from the line segment. Unfortunately, parallel access to and retrieval
from such look-up tables, in order to simultaneously process pixel information for
multiple pixels, is difficult to implement in hardware.
[0010] Thus, there has been a need for a method for generating an image of a line segment
on a graphics display screen which avoids the appearance of the staircase effect and
which can be readily implemented using highly parallel processing techniques. The
present invention meets this need.
SUMMARY OF THE INVENTION
[0011] The present invention provides a method for generating pixel color information for
use in producing an image of a line segment on a graphics display screen. The method
includes a step of denoting a planar region of the display screen that encompasses
the line segment. Intensity values and color values are assigned for at least three
selected pixels encompassed by the at least one planar region. Final pixel color information
is interpolated for each respective pixel encompassed by the planar region based upon
the assigned intensity values and the assigned color values.
[0012] Thus, a method is provided in which images of line segments can be produced on a
graphics screen using interpolation techniques over a planar region of the screen.
Such interpolation techniques are readily susceptible to highly parallel processing
and to implementation in hardware. Moreover, the method of the present invention advantageously
can be employed to minimize the appearance of a staircase effect in screen images
of a line segment.
[0013] These and other features and advantages of the present invention will become apparent
from the following description of an exemplary embodiment thereof, as illustrated
in the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[0014] The purpose and advantages of the present invention will be apparent to those skilled
in the art from the following detailed description in conjunction with the appended
drawings in which:
Figure 1 is a block diagram of a typical earlier processing system;
Figure 2 is an example of a line segment on the graphics screen of the processing
system of Figure 1 illustrating a staircase effect;
Figure 3 is an example of the line segment of Figure 2 illustrating a diminished staircase
effect;
Figure 4a is a block diagram of a processing system in accordance with a presently
preferred embodiment of the invention;
Figure 4b is a block diagram showing details of the first memory unit of the system
of Figure 4a;
Figure 4c is a conceptual diagram of a pixel color element of the frame buffer (or
double buffer) of the first memory unit of Figure 4b;
Figure 4d is a conceptual diagram of a pixel depth element of the depth buffer of
the first memory unit of Figure 4b;
Figure 5a is an example of a line segment on the graphics screen of the preferred
embodiment of Figure 4a;
Figure 5b illustrates a parallelogram produced by the interface unit of the preferred
embodiment of Figure 4a;
Figure 5c illustrates the parallelogram of Figure 5b in which individual pixels along
opposed vertical edges have different assigned intensity (α) values;
Figure 5d illustrates the parallelogram of Figure 5b and illustrates a tile element;
Figure 6 illustrates a tile element of the display screen of the embodiment of Figure
4a;
Figure 7 shows a geometric figure in which an edge-seeking algorithm is applied to
identify pixels encompassed by the geometric figure; and
Figure 8 is a schematic diagram of an underflow/overflow correction circuit of the
graphics processors of the embodiment of Figure 4a; and
Figure 9 illustrates an alternative parallelogram in which intensity values are assigned
along opposed horizontal edges.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
[0015] The present invention comprises a novel method for producing an image of a line segment
on a graphics display screen. The following description is presented to enable any
person skilled in the art to make and use the invention, and is provided in the context
of a particular application and its requirements. Various modifications to the preferred
embodiment will be readily apparent to those skilled in the art, and the generic principles
defined herein may be applied to other embodiments and applications without departing
from the spirit and scope of the invention. Thus, the present invention is not intended
to be limited to the embodiment shown, but is to be accorded the widest scope consistent
with the principles and features disclosed herein.
[0016] Referring to the illustrative drawings of Figure 4a, there is shown a block diagram
of a processing system 40 of a presently preferred embodiment of the invention. The
processing system 40 includes a main processor 41 (within dashed lines) and a graphics
unit 42 (within dashed lines). The main processor 41 provides geometric and color
information on line 56. The graphics unit 42 processes the geometric and color information
so as to render digital pixel color and pixel depth information. The video digital-to-
analog converter 43 converts the digital pixel color information into analog information
that can be used by a graphics display 47 to portray an image, such as an image of
line segment P₁P₂, on a graphics display screen 48.
[0017] The present invention provides a novel method for producing an image of a line segment
on the graphics display screen 48. In a currently preferred form, the method involves
taking geometric and color information, (XYZRGB), for each end point of the line segment,
and converting this information into geometric and color parameters which represent
vertices of a parallelogram on the screen which is bisected by the line segment. The
geometric and color information of three pixels encompassed by (or located directly
on) the parallelogram then are used to interpolate color and depth information for
pixels encompassed by the parallelogram. This interpolated color and depth information
then can be used to produce the image of a line segment on the screen.
[0018] The main processor 41 includes a central processing unit 44, a floating point processing
unit 46, a cache memory 50 and a main memory 52, all of which are coupled to a 32-bit
bus 54. The main processor 41 runs application programs that produce the geometric
and color information that can be processed by the graphics unit 42.
[0019] The graphics unit 42 includes an interface unit 58, first and second graphics processors
60,62 and first and second memory units 64,66. The interface unit 58 receives the
geometric and color information from the main processor 41 and uses that received
information to produce parameters used by the first and second graphics processors
60,62 to produce the pixel color and depth information which is then stored in the
respective first and second memory units 64,66.
[0020] In operation, an application running on the main processor 41, for example, can provide
geometric and color information for multiple different, and possibly overlapping,
images to be produced on the display screen 48. The graphics unit 42 individually
processes the information for each such different image and stores the resulting information
in its memory units 64, 66. For each such different image, the interface unit 58 produces
a different set of parameters. The first and second graphics processors 60,62 use
the parameters produced by the interface unit 58 to determine which pixels on the
screen 48 are to be illuminated with what colors in order to portray the image.
[0021] More specifically, the first and second graphics processors 60,62, in response to
the parameters produced by the interface unit 58, perform a linear interpolation in
order to determine the pixel color and depth information to be stored in the first
and second memory units 64,66. Furthermore, the graphics processors 60,62 use an edge-seeking
algorithm to identify the geometric "edges" of an image to be portrayed on the screen
48 in order to determine which pixels are to be involved in portraying such an image.
Each pixel color storage element contains twenty-four bits of RGB color information:
eight bits for red, eight bits for green, and eight bits for blue. Moreover, each pixel
depth storage element also includes twenty-four bits of depth information. The first
and second graphics processors 60,62 each process pixel information (color or depth)
for five pixels at a time; that is, 120 bits of information at a time. Thus, the two
graphics processors 60,62 together can simultaneously process the color or depth information
for ten pixels (240 bits) at a time.
[0022] The first and second memory units 64,66 comprise a plurality of dual-port random
access memories. Each respective pixel on the graphics display screen 48 corresponds
to a different respective 24-bit pixel color storage element of one of either the
first or the second memory units 64,66. Also, each respective pixel on the screen corresponds
to a different respective 24-bit pixel depth storage unit. In order to generate a
visual image on the screen 48 based upon stored pixel color information, the stored
pixel color information is read from the memories 64,66 and is provided to the video
digital-to-analog converter 43. The converter 43 produces analog signals used by the
graphics display 47 to generate the image. Thus, for each dual-port RAM, one port
is used for access by one of the graphics processors 60,62, and the other port is
used for access by the video digital-to-analog converter 43.
[0023] In order to permit the appearance of continuous or smooth motion of images portrayed
on the screen 48, the images typically are updated on the order of at least ten times
per second. In the course of each updating of images, the contents of the pixel color
storage elements for every pixel on the screen 48 are initialized. During each initialization,
the contents of each pixel color storage element and each pixel depth storage element
of the first and second memory units 64,66 are set to a background color value. The
geometric and color information provided by the main processor 41 then is used by
the graphics unit 42, as described above, to determine which respective pixels on
the screen 48 are to be illuminated with a color other than the background color,
and to access the corresponding respective pixel color storage elements so as to store
pixel color information that corresponds to such different colors.
[0024] The process of initializing pixel depth storage elements and pixel color storage
elements now will be explained in more detail. Referring to the drawings of Fig. 4b,
there is shown a block diagram illustrating details of the first memory unit 64. It
will be appreciated that the first and second memory units 64,66 are substantially
identical, and that the following discussion applies to the second memory unit 66
as well. The first memory unit 64 includes a depth buffer (Z buffer) 86, a double
buffer 88 and a frame buffer 90, all of which are coupled to a shared 120-bit data
bus 92. First control lines 94 provide row/read/write control signals to the depth,
double and frame buffers 86,88,90. Second control lines 96 provide separate chip-enable
signals to each of those three buffers.
[0025] Each pixel on the display screen 48 corresponds to a respective 24-bit pixel depth
storage element of the depth buffer 86, to a respective 24-bit pixel color storage
element of the double buffer 88 and to another respective 24-bit pixel color storage
element of the frame buffer 90. As explained below, respective pixel depth storage
and pixel color storage elements of the three buffers are logically organized into
respective five-element units which correspond to five out of ten pixels of respective
5x2 tile elements. The other five pixels of a respective tile element correspond to
respective pixel depth storage and pixel color storage elements of the second memory
system 66.
[0026] Referring to the illustrative drawings of Fig. 4c, there is shown a conceptual drawing
of a 24-bit pixel color storage element. Eight bits represent red; eight bits represent
green; and eight bits represent blue. For each pixel color storage element of the
double buffer 88, there is an identical corresponding pixel color storage element
of the frame buffer 90. Referring to the illustrative drawings of Fig. 4d, there is
shown a conceptual drawing of a 24-bit pixel depth storage element. All twenty-four
bits can be used to represent a depth.
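For illustration only, such a 24-bit pixel color storage element can be modeled as follows; the byte ordering (red in the high-order bits) and the function names are assumptions of this sketch, as the specification fixes only the eight-bits-per-component split:

```python
def pack_rgb(r, g, b):
    """Pack 8-bit red, green and blue components into a single 24-bit
    pixel color value (red assumed to occupy the high-order byte)."""
    for v in (r, g, b):
        if not 0 <= v <= 255:
            raise ValueError("component out of 8-bit range")
    return (r << 16) | (g << 8) | b

def unpack_rgb(pixel):
    """Split a 24-bit pixel color value back into its (r, g, b) components."""
    return (pixel >> 16) & 0xFF, (pixel >> 8) & 0xFF, pixel & 0xFF
```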
[0027] In order to produce an image on the display screen, stored pixel color information
is read from the frame buffer 90 and is provided to the video digital-to-analog converter
(DAC) 43. The DAC 43 converts these digital values into analog signal values used
by the graphics display 47 to produce an image on the screen 48.
[0028] In order to create a smoothly animated image, the pixel color information in the
frame buffer 90 should be updated and provided to the DAC 43 at least approximately
ten times per second. The process of updating the contents of the frame buffer 90
involves first updating the contents of the double buffer 88, and then copying the
contents of the double buffer 88 into the frame buffer 90.
[0029] In an alternative embodiment (not shown), for example, instead of copying the contents
of a double buffer into a frame buffer after the contents of such a double buffer
have been updated, outputs from such a double buffer and such a frame buffer can be
multiplexed (or switched) such that the roles of the two buffers are reversed. In
that case, the most recently updated one of the two buffers is coupled to provide
pixel color information directly to a DAC, while the other buffer operates as a double
buffer and is updated with new pixel color information.
[0030] Updating of the double buffer 88 involves simultaneously initializing both the depth
buffer 86 and the double buffer 88. Initialization involves writing a single 24-bit
pixel depth value to all pixel depth storage elements of the depth buffer 86, and
involves writing a single 24-bit pixel color value to all pixel color storage elements
of the double buffer 88. In accordance with the present invention, during initialization,
the same 24-bit value is written to all pixel storage elements of both the depth buffer
86 and the double buffer 88. In particular, that same 24-bit value is a 24-bit value
representing a background color specified by an application program running on the
main processor 41.
[0031] The first graphics processor 60 controls such simultaneous initialization by providing
on the first control lines 94, read/write control signals that instruct the depth
and double buffers 86,88 to write information from the shared 120-bit bus 92. In the
course of providing such write signals, the first graphics processor 60 provides on
the second control lines 96, chip-enable signals that cause both the depth buffer
86 and the double buffer 88 to simultaneously write digital information provided by
the first graphics processor 60 on the shared bus 92. The graphics processor 60 provides
24-bit pixel (background) color values on the 120-bit shared bus 92 for five pixels
at a time until all pixel storage elements of the depth and double buffers 86,88 have
been initialized by loading all of them with the same pixel value.
[0032] In the presently preferred embodiment, the process of updating the frame buffer 90
also involves the application of hidden surface removal techniques. These techniques
can ensure that, where multiple images in a view overlap one another, only the closer
of those images is visible in the view. Portraying the closer image involves ensuring
that pixel color information for the closer of such overlapping images is stored in
the double buffer 88 for any pixels for which such images overlap.
[0033] The implementation of hidden surface removal techniques involves use of the depth
buffer 86. The first graphics processor 60 calculates interpolated pixel depth and
calculates interpolated pixel color information for pixels involved in displaying
images on the screen 48. For each such pixel, the first graphics processor 60 reads
a currently stored depth value from a corresponding pixel depth element of the depth
buffer 86. It compares the currently stored depth value for that pixel to the calculated
(interpolated) depth value for the pixel. If the calculated depth value is closer than
the currently stored depth value, then the first graphics processor writes the newly
calculated depth value into the depth storage element corresponding to the pixel under
consideration; it also writes the newly calculated color value for that pixel into
the color storage element corresponding to the pixel. Otherwise, it leaves the currently
stored depth and color values unchanged for the pixel under consideration.
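The per-pixel compare-and-update step described above can be sketched as follows. This is a sketch only: the buffers are modeled as flat Python lists, and the convention that a numerically larger stored depth value means "closer" is an assumption (it is consistent with the depth buffer being initialized to a minimal background-derived value, as described below).

```python
def hidden_surface_update(depth_buf, color_buf, index, new_depth, new_color):
    """Write the interpolated depth and color for one pixel only if the
    new fragment is closer than the one currently stored; otherwise the
    stored depth and color values are left unchanged."""
    if new_depth > depth_buf[index]:   # "closer" assumed to mean larger
        depth_buf[index] = new_depth
        color_buf[index] = new_color
```

For example, after both buffers are initialized to a background value, only fragments whose scaled depth exceeds the stored value overwrite the pixel.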
[0034] In the course of applying the hidden surface technique, a floating point depth value
(Z) in the range 0≦Z≦1, provided by an application program running on the main processor
41 is converted into a 24-bit binary depth value. This conversion is performed so
that the provided depth value can be readily used to compute calculated (interpolated)
depth values for comparison with 24-bit values currently stored in the depth buffer
86. Furthermore, since each pixel storage element of the depth buffer is initialized
with a 24-bit depth value corresponding to the background color, it is necessary to
scale the converted depth value provided by the application process to compensate
for this initialization.
[0035] In the presently preferred embodiment, this scaling is performed as follows. The
binary background color value is converted to a floating point value. For a binary
background color value less than 2²³, the converted binary depth value is:
depth = (background color value) + ((2²⁴ - 1) - (background color value)) * Z
For a binary background color value greater than 2²³, the converted binary depth value
is:
depth = (background color value) * Z.
It will be appreciated that scaling in this manner ensures that a larger range of
scaled depth values is available for use during the application of hidden surface
removal techniques.
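The two scaling cases above can be transcribed directly as follows; the floating-point arithmetic and the final rounding to an integer depth value are assumptions of this sketch:

```python
def scale_depth(z, background, max_depth=(1 << 24) - 1):
    """Scale a floating-point depth Z in [0, 1] into the 24-bit depth
    range, compensating for the depth buffer having been initialized
    with the 24-bit background color value (two cases, split at 2**23)."""
    if not 0.0 <= z <= 1.0:
        raise ValueError("Z must lie in [0, 1]")
    if background < (1 << 23):
        # depth = background + ((2**24 - 1) - background) * Z
        return round(background + (max_depth - background) * z)
    # depth = background * Z
    return round(background * z)
```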
[0036] It will be understood that the first graphics processor 60 can render pixel depth
and pixel color information for multiple images in a view. In cases where images overlap,
the above-described surface-hiding technique ensures that more distant images (or
more distant portions thereof) are hidden behind closer images.
[0037] The operation of the data processing system 40 to produce an image of a line segment
on the screen 48 now will be explained in more detail in the context of an example
which is described below. The example will focus on the steps involved in illuminating
pixels to produce an image of line segment P₁P₂ shown in the illustrative drawing
of Fig. 5a.
[0038] The graphics processors 60, 62 produce pixel color, depth and intensity values for
storage by the memory units 64,66 by performing linear interpolations using a plane
equation of the form:
Q = -(a/c)x - (b/c)y + (a/c)x1 + (b/c)y1 + Q1
where,
Q1 = Ax1 + By1 + C
Q2 = Ax2 + By2 + C
Q3 = Ax3 + By3 + C
and where Q represents red, green or blue values for color interpolations; Q represents
Z values for depth interpolations; and Q represents α values for intensity interpolations.
[0039] While linear interpolation is used in the presently preferred embodiment, it will
be appreciated that alternate approaches could be employed to compute pixel color
and depth information such as quadratic interpolation.
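As an illustrative sketch (not the hardware implementation), the plane-equation interpolation can be expressed as follows, using the a, b, c constants defined in paragraph [0045]; the Python form and the function names are assumptions:

```python
def plane_coefficients(p3, p4, p5):
    """Compute the plane-equation constants a, b, c from three selected
    points, each given as (x, y, q), following the formulas of the text."""
    (x3, y3, q3), (x4, y4, q4), (x5, y5, q5) = p3, p4, p5
    a = (y4 - y3) * (q5 - q4) - (y5 - y4) * (q4 - q3)
    b = (q4 - q3) * (x5 - x4) - (q5 - q4) * (x4 - x3)
    c = (x4 - x3) * (y5 - y4) - (x5 - x4) * (y4 - y3)
    return a, b, c

def interpolate(x, y, p1, coeffs):
    """Evaluate Q at pixel (x, y) from a reference point p1 = (x1, y1, q1),
    using the gradients dQ/dx = -a/c and dQ/dy = -b/c."""
    a, b, c = coeffs
    x1, y1, q1 = p1
    return q1 - (a / c) * (x - x1) - (b / c) * (y - y1)
```

Here Q may stand for any of the interpolated quantities (R, G, B, Z or α), so one set of coefficients is computed per quantity.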
[0040] The main processor 41 provides to the interface unit 58 geometric and color information
P₁(X₁Y₁R₁G₁B₁Z₁) and P₂(X₂Y₂R₂G₂B₂Z₂) about the end-points of the line segment P₁P₂.
The coordinate pair (X₁Y₁) and (X₂Y₂) provides the location in the pixel array of the
graphics screen 48 of the pixels that contain the end-points P₁ and P₂. Color information
(R₁G₁B₁) and (R₂G₂B₂) respectively provide the colors of the end- points P₁ and P₂.
Finally, depth information Z₁ and Z₂ provides the depth (distance from a viewer) of
the end-points. Depth information is used, for example, in hidden surface removal
in case some images on the screen 48 overlay other images on the screen. In the case
of such overlaying, surfaces having "closer" depth values are portrayed and surfaces
having "farther" depth values are hidden.
[0041] In response to the geometric and color information, the interface unit 58 produces
parameters such as coordinates for a parallelogram, an intensity value (α) scale,
selected starting values and constant values to be used in performing linear
interpolations. Referring to the illustrative drawing of Figure 5b,
there is shown a parallelogram (P₃P₄P₅P₆) which is bisected by the line segment P₁P₂,
and which has opposed parallel edges which encompass pixels containing the end-points
P₁ and P₂ of the line segment. It will be appreciated that the parallelogram denotes
a planar region of the display screen 48.
[0042] The interface unit 58 produces an intensity scaling factor α which, as explained
below, is used to progressively scale the intensity of illumination of pixels used
to portray the line segment such that pixels vertically displaced farther from the
line segment P₁P₂ are illuminated less intensely. In particular, referring to the
following Table and to the illustrative drawings of Figure 5c, the intensity values
on the left edge of the parallelogram vary from α=0.0 at P₃, to α=1.0 at P₁, to α=2.0
at P₅. Similarly, the intensity values vary along the right edge of the parallelogram
from α=0.0 at P₄, to α=1.0 at P₂, to α=2.0 at P₆. As explained below, values of α
in the range from 1.0 to 2.0 are mapped to a range from 1.0 to 0.0 in the course of
interpolation calculations so as to produce α intensity values that progressively
decrease with vertical distance from the line segment.
[0043] The following Table 1 shows assigned α values for points shown in Fig. 5c along the
edge of the parallelogram.
TABLE 1
Points      α
P₃,P₄       0.00
P₇,P₈       0.25
P₉,P₁₀      0.50
P₁₁,P₁₂     0.75
P₁,P₂       1.00
P₁₃,P₁₄     1.25
P₁₅,P₁₆     1.50
P₁₇,P₁₈     1.75
P₅,P₆       2.00
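The α assignment of Table 1 and the fold-over mapping described above can be sketched as follows; the even spacing along the edge and the function names are illustrative assumptions:

```python
def edge_alphas(n_points=9):
    """Assign evenly spaced α values from 0.0 at the top edge of the
    parallelogram to 2.0 at the bottom edge (Table 1 lists nine rows)."""
    return [2.0 * i / (n_points - 1) for i in range(n_points)]

def fold_alpha(alpha):
    """Map an interpolated intensity from (1.0, 2.0] back into [0.0, 1.0)
    so that intensity decreases symmetrically with vertical distance from
    the line segment; values at or below 1.0 pass through unchanged."""
    return 2.0 - alpha if alpha > 1.0 else alpha
```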
[0044] The interface unit 58 also selects three points encompassed by (and on the edges
of) the parallelogram for use in interpolating color, depth and intensity information
(RGBZα) for pixels encompassed by the parallelogram. For example, unit 58 could select
points P₃, P₄ and P₅.
[0045] The (RGBZα) values for the three selected points then are used by the interface unit
58 to calculate:
dQ/dx = -a/c ; dQ/dy = -b/c
where Q can represent R, G, B, Z or α. Thus, the interface unit 58 calculates: dR/dx, dR/dy, dG/dx,
dG/dy, dB/dx, dB/dy, dZ/dx, dZ/dy, dα/dx and dα/dy. In the presently preferred embodiment,
a = (Y4-Y3)(Q5-Q4) - (Y5-Y4)(Q4-Q3)
b = (Q4-Q3)(X5-X4) - (Q5-Q4)(X4-X3)
c = (X4-X3)(Y5-Y4) - (X5-X4)(Y4-Y3)
where the respective (x,y) coordinates of the selected points P₃, P₄ and P₅ are: (X3,Y3),
(X4,Y4) and (X5,Y5).
[0046] After the interface unit 58 has produced the parallelogram coordinates, has assigned
α values, has selected three points encompassed by the parallelogram and has calculated
the constant values listed above, the first and second graphics processors 60,62 use
this information both to determine which pixel image color storage elements are to
be updated with new pixel color information in order to render an image of the P₁P₂
line segment and to actually interpolate updated pixel color and depth information.
[0047] More particularly, the first and second graphics processors 60,62 use an edge-seeking
algorithm to determine which pixels are to be updated. In the presently preferred
embodiment, an edge-seeking algorithm is used in which "tile" elements are employed.
A "tile" element is a set of ten physically contiguous 24-bit pixels arranged on
the screen 48 in a 5x2 pixel array. Figure 6 illustrates a 5x2 tile element comprising
ten pixels (numbered "1" through "10").
[0048] The screen 48 is divided into a multiplicity of such tile elements. Correspondingly,
the memory units 64,66 are organized such that for each tile element, there are ten
logically contiguous pixel storage elements for storing color information. Also, there
are ten logically contiguous pixel storage elements for storing depth information.
[0049] In brief, the edge-seeking algorithm operates as follows. A starting tile element
is selected. In Fig. 7, that tile element is labeled "1". In a presently preferred
form of the invention, the starting tile element is the tile element that contains
the uppermost vertex of the geometric figure in question (in this example, triangle
T₁,T₂,T₃). The algorithm first searches tile elements to the left of the starting
tile element "1" for an edge running through any of them. In this example, it finds
none. Next, the algorithm searches tile elements to the right of the starting tile
element for an edge running through any of them. It determines that the tile element
labeled "2" has an edge running through it. Next, the algorithm moves down to the
coordinates of the tile element labeled "3" which is directly below the starting tile
element "1". From the coordinates of tile element "3", it once again searches to the
left and then to the right. The algorithm finds that tile elements labeled "3" through
"6" are wholly or partially encompassed by edges of the triangle T₁,T₂,T₃. The algorithm
determines that there is no bottom edge of the triangle through tile element "3".
So, it moves down to the coordinates of the tile element labeled "7", and repeats
its left, and then right search, and it identifies tile elements "8" and "9"
through "14" as being wholly or partially encompassed. The algorithm proceeds in this
manner until it identifies the last two tile elements wholly or partially encompassed.
They are labeled "52" and "53" respectively.
[0050] Although the above example of the use of an edge-seeking algorithm is provided for
a triangle T₁T₂T₃, it will be understood that it can just as readily be applied to
the parallelogram of Figures 5a-5d.
[0051] Furthermore, while the presently preferred embodiment employs an edge-seeking algorithm,
it will be appreciated that other more traditional approaches can be used to identify
pixels or tile elements encompassed by the parallelogram.
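In outline, the traversal of paragraph [0049] can be sketched as below. This is a simplification under stated assumptions: tiles are addressed by (column, row); `intersects(col, row)` reports whether a tile is wholly or partially encompassed by the figure; the encompassed tiles of each row are assumed to form a run contiguous with the starting column; and the traversal stops at the first empty row rather than applying the bottom-edge test of the text.

```python
def edge_seek(start, intersects):
    """Sketch of the tile-based edge-seeking traversal: starting from the
    tile containing the uppermost vertex, search left and then right on
    each row for tiles crossed by (or inside) the figure, then step down
    one row and repeat until a row contains no such tile."""
    found = set()
    col, row = start
    while True:
        row_tiles = []
        c = col                      # search left from the current column
        while intersects(c, row):
            row_tiles.append((c, row))
            c -= 1
        c = col + 1                  # then search right of it
        while intersects(c, row):
            row_tiles.append((c, row))
            c += 1
        if not row_tiles:            # no encompassed tile on this row: done
            break
        found.update(row_tiles)
        row += 1
    return found
```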
[0052] The first and second graphics processors 60,62 interpolate color, depth and intensity
values for pixels of tile elements found to be wholly or partially encompassed by
a geometric figure in question. Referring to the illustrative drawing of Figure 5d,
for example, there is shown a tile element comprising a 5x2 array of pixels labelled
"1" through "10" found, through application of the edge-seeking algorithm, to be (partially)
encompassed by the parallelogram P₃P₄P₅P₆. Pixels "1" through "4" and "6" through "9"
are encompassed within the parallelogram. Pixels "5" and "10" are disposed outside
the parallelogram. Since the tile element is (partially) encompassed by the parallelogram,
the planar equation discussed above is used for each pixel in the tile element to
interpolate color (RGB), depth (Z) and intensity (α) values for the respective pixel.
[0053] For each respective pixel in the tile element, final red, green and blue color
values are calculated from respective interpolated red, green and blue color values
and a respective interpolated intensity value as follows:
COLOR_final = COLOR_interpolated * α_interpolated
[0054] It will be appreciated that, since the intensity value (α) decreases with vertical
distance from the line segment, pixels displaced vertically farther from the line segment
tend to gradually fade out, leading to a reduced staircase effect.
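The per-pixel blend of paragraphs [0053] and [0054] can be sketched as follows; the function name is hypothetical, and α is assumed to lie in the range 0.0 ≦ α ≦ 1.0 after any underflow/overflow mapping:

```python
def final_color(rgb_interpolated, alpha_interpolated):
    # COLOR_final = COLOR_interpolated * alpha_interpolated, per channel.
    # Pixels far from the line segment (small alpha) fade toward black,
    # which reduces the staircase effect.
    r, g, b = rgb_interpolated
    a = alpha_interpolated
    return (r * a, g * a, b * a)
```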
[0055] As mentioned above, the α (intensity) values falling in the range 1.0 < α ≦
2.0 must be mapped into the range 0.0 ≦ α < 1.0 before being applied to the above
equation used to compute COLOR_final. In a present embodiment of the invention, an
underflow/overflow (U/O) correction circuit 68, illustrated in the schematic diagram
of Fig. 8, is used to achieve such mapping.
[0056] The U/O circuit 68 includes a computational unit, in the form of a nine-bit adder
70, a plurality of inverting mechanisms, in the form of nine Exclusive-OR gates 72,
and a control unit, in the form of an AND gate 74. The nine-bit adder 70 comprises
nine one-bit adders 70-0 through 70-8 coupled in a carry-chain. The respective one-bit
adders 70-0 through 70-8 of the nine-bit adder 70 have respective outputs coupled
to respective first inputs 76-0 through 76-8 of the respective Exclusive-OR gates
72-0 through 72-8. The output 73 of the AND gate 74 is coupled to respective second
inputs 78-0 through 78-8 of the Exclusive-OR gates 72-0 through 72-8. A first input
80 of the AND gate 74 is coupled to the output of the ninth one-bit adder 70-8, the
highest order one-bit adder in the carry-chain. A second input 82 to the AND gate
74 is coupled to receive a control signal.
[0057] In operation, the U/O circuit 68 can both interpolate next intensity values
αₙ and map such interpolated intensity values from the range 1.0 < α ≦ 2.0 to the
range 0.0 ≦ α < 1.0. In particular, the "A" inputs of the nine-bit adder 70 receive
a nine-bit previously interpolated intensity value αₚ which comprises nine bits (αₚ₀
through αₚ₈). The "B" inputs of the nine-bit adder 70 receive a constant value dα/dx,
for example, which comprises nine bits dαᵢ/dx (dα₀/dx through dα₈/dx). The lowest
order previously interpolated intensity value bit αₚ₀ and the constant bit dα₀/dx
are provided to one-bit adder 70-0. The highest order previously interpolated intensity
value bit αₚ₈ and the constant bit dα₈/dx are provided to one-bit adder 70-8. It should
be appreciated
that the following discussion can be applied to the corresponding computation using
dα/dy as well.
[0058] The U/O circuit 68 interpolates a nine-bit next intensity value αₙ which comprises
nine bits αₙᵢ (αₙ₀ through αₙ₈). As long as the real number value of the next intensity
value αₙ is within the range 0.0 < αₙ ≦ 1.0, the value of the highest order intensity
bit, αₙ₈, is a logical "zero". If the real number value of the next intensity value
αₙ is in the range 1.0 < αₙ ≦ 2.0, then the highest order intensity bit, αₙ₈, is a
logical "one".
[0059] By providing a logical "one" control signal to the second input 82 of the AND
gate 74, the AND gate 74 is caused to provide a logical "one" signal on line 73 only
when the highest order intensity bit, αₙ₈, is a logical "one". The result of a logical
"one" signal on line 73 is to cause the respective Exclusive-OR gates 72-0 through
72-7 to invert the bits provided to them by the respective one-bit adders 70-0 through
70-7. This inverting, advantageously, can be used to map intensity values in the range
1.0 < αₙ ≦ 2.0 into the range 0.0 ≦ αₙ < 1.0, as illustrated in the following Table 2.
TABLE 2
α Assigned (Floating Point) | α Assigned (Hex) | α Mapped (Hex)
0.0  | 0x00  | --
0.25 | 0x40  | --
0.50 | 0x80  | --
1.0  | 0xFF  | --
1.5  | 0x180 | 0x7F
1.75 | 0x1C0 | 0x3F
2.0  | 0x1FF | 0x0
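The mapping summarized in Table 2 can be modeled in software as follows; this is a sketch of the described behavior (nine-bit addition, then inversion of the low eight bits when the control signal is asserted and the high-order bit αₙ₈ is set), not a description of the circuit itself, and the function and parameter names are assumptions:

```python
def uo_next_alpha(alpha_p, d_alpha_dx, map_enable=True):
    # Nine-bit add of the previous intensity value and the per-pixel
    # increment, as performed by the nine-bit adder.
    alpha_n = (alpha_p + d_alpha_dx) & 0x1FF
    # When the control signal is asserted and bit 8 is a logical "one"
    # (i.e. 1.0 < alpha <= 2.0), invert the low eight bits, mapping the
    # value into 0x00..0xFF (0.0 <= alpha < 1.0) as in Table 2.
    if map_enable and (alpha_n & 0x100):
        alpha_n = (~alpha_n) & 0xFF
    return alpha_n
```

For example, 0x180 (1.5) maps to 0x7F and 0x1FF (2.0) maps to 0x0, matching the last column of Table 2.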
[0060] Referring to the illustrative drawing of Figure 5c and to Tables 1 and 2, it
will be appreciated that the operation of the U/O circuit 68 can be used to map next
intensity values αₙ, in hexadecimal form, from the range 1.0 < αₙ ≦ 2.0 onto the range
0.0 ≦ αₙ < 1.0. Furthermore, it will be appreciated that the U/O circuit 68 performs
this mapping such that next interpolated intensity values, αₙ, for pixels decrease
with increasing vertical distance of the pixels from the line segment. Moreover, such
decrease occurs at approximately the same rate for pixels lying above as for pixels
lying below the line segment P₁P₂.
[0061] Referring to the illustrative drawing of Figure 9, there is shown a parallelogram
on which an intensity value (α) scale has been assigned for points along opposed horizontal
axes of the parallelogram. The intensity value scale for the parallelogram in Figure
9 is set forth in the following Table 3:
Table 3
Points | α
Pn, Po, Pp, Pq | 0.0
Pr, Ps, Pt, Pu | 0.5
Pl, Pm | 1.0
[0062] In the case of the parallelogram in Figure 9, the graphics unit 42 performs
interpolations for color, depth and intensity values in the two parallelogram regions
PnPoPlPm and PlPmPpPq, which are bisected by the line segment PlPm. The U/O correction
circuit 68 is not employed to map intensity values since the intensity value (α) already
is properly scaled.
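The intensity scale of Table 3, under which α falls off linearly from 1.0 on the line segment PlPm to 0.0 at the outer edges PnPo and PpPq, can be sketched as a function of vertical distance from the line; the function name and the half-width parameter are illustrative assumptions:

```python
def alpha_at(distance, half_width):
    # Linear fall-off: 1.0 on the line segment, 0.5 midway to an outer
    # edge (the points Pr, Ps, Pt, Pu of Table 3), 0.0 at and beyond
    # the outer edge.
    t = abs(distance) / half_width
    return max(0.0, 1.0 - t)
```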
[0063] While one embodiment of the invention has been described in detail herein, it will
be appreciated that various modifications can be made to the preferred embodiment
without departing from the scope of the invention. Thus, the foregoing description
is not intended to limit the invention, which is defined in the appended claims, in
which:
1. A method for generating pixel color information for use in producing an image of
a line segment on a graphics display screen, comprising the steps of:
denoting at least one planar region of the display screen that encompasses the line
segment;
assigning respective intensity values for at least three selected pixels encompassed
by the at least one planar region;
assigning respective color values for the at least three selected pixels; and
interpolating respective final color values for each respective pixel encompassed
by the at least one planar region based upon the respective assigned intensity values
and the respective assigned color values.
2. The method of Claim 1 wherein said step of denoting at least one planar region
includes assigning coordinates for a parallelogram.
3. The method of Claim 1 wherein said step of denoting at least one planar region
includes assigning coordinates for a parallelogram bisected by the line segment.
4. The method of Claim 1 and further including the step of:
identifying pixels on the display screen encompassed by the at least one planar region.
5. The method of Claim 4 wherein said step of identifying involves use of an edge-seeking
algorithm.
6. The method of Claim 1 and further including the step of:
identifying pixels on the display screen encompassed by the planar region; and
wherein said step of interpolating involves interpolating final color values only
for pixels that are substantially encompassed by the at least one planar region.
7. The method of Claim 1 wherein said step of denoting includes assigning coordinates
representing the at least one planar region.
8. The method of Claim 1 wherein:
said step of denoting includes assigning coordinates representing the at least one
planar region; and
said method further includes the step of identifying pixels on the display screen
encompassed by the at least one planar region based upon the assigned coordinates.
9. The method of Claim 1 wherein said step of interpolating respective final color
values includes the steps of:
interpolating respective intensity values for each respective pixel encompassed by
the at least one planar region based upon the assigned intensity values;
interpolating respective color values for each respective pixel encompassed by the
at least one planar region based upon the respective assigned color values; and
calculating respective final color values for each respective pixel encompassed by
the at least one planar region based upon the respective interpolated intensity values
and the respective interpolated color values.
10. The method of Claim 9 wherein said steps of interpolating respective intensity
values and respective color values involves performing linear interpolations.
11. The method of Claim 9 wherein said step of calculating involves, for each respective
pixel encompassed by the at least one planar region, multiplying the respective interpolated
intensity value by the respective interpolated color value.
12. A method for generating pixel color information for use in producing an image
of a line segment on a graphics display screen, comprising the steps of:
denoting at least one planar region of the display screen that encompasses the line
segment;
assigning respective intensity values for at least three selected pixels encompassed
by the at least one planar region;
assigning respective color values for the at least three selected pixels;
interpolating respective intensity values for each respective pixel encompassed by
the at least one planar region based upon the assigned intensity values;
interpolating respective color values for each respective pixel encompassed by the
at least one planar region based upon the at least one respective color value; and
calculating respective final color values for each respective pixel encompassed by
the at least one planar region based upon the respective interpolated intensity values
and the respective interpolated color values.
13. The method of Claim 12 wherein said step of interpolating respective intensity
values involves reducing respective interpolated intensity values that exceed a prescribed
full intensity value to respective intensity values that do not exceed the prescribed
full intensity value.
14. The method of Claim 12 wherein said step of denoting at least one planar region
includes assigning coordinates for a parallelogram.
15. The method of Claim 12, wherein:
said step of denoting at least one planar region includes assigning coordinates for
first and second contiguous parallelograms.
16. The method of Claim 12, wherein:
said step of denoting at least one planar region includes assigning coordinates for
first and second contiguous parallelograms;
said steps of assigning involve, assigning respective intensity values and assigning
respective color values for each of at least three respective first selected pixels
encompassed by the first parallelogram, and assigning respective intensity values
and assigning respective color values for each of at least three respective second
selected pixels encompassed by the second parallelogram; and
said steps of interpolating involve interpolating respective intensity values and
interpolating respective color values for each respective pixel encompassed by the
first parallelogram and for each respective pixel encompassed by the second parallelogram.
17. The method of Claim 12, wherein:
said step of denoting at least one planar region of the display screen includes assigning
coordinates for first and second contiguous parallelograms;
said steps of assigning involve, assigning respective intensity values and assigning
respective color values for each of at least three respective first selected pixels
encompassed by the first parallelogram, and assigning respective intensity values
and assigning respective color values for each of at least three respective second
selected pixels encompassed by the second parallelogram;
said steps of interpolating involve, interpolating respective intensity values and
interpolating respective color values for each respective pixel encompassed by the
first parallelogram and for each respective pixel encompassed by the second parallelogram;
and
said step of calculating involves, (i) calculating respective final color values for
each respective pixel encompassed by the first parallelogram based upon the interpolated
intensity values and the interpolated color values for respective pixels encompassed
by the first parallelogram and involves, (ii) calculating final color values for each
respective pixel encompassed by the second parallelogram based upon the interpolated
intensity values and the interpolated color values for respective pixels encompassed
by the second parallelogram.
18. A method for generating pixel color information for use in producing an image
of a line segment on a graphics display screen, comprising the steps of:
denoting at least one planar region of the display screen that encompasses the line
segment and that is in the shape of a parallelogram;
assigning respective intensity values for at least three selected pixels encompassed
by the at least one planar region;
assigning respective color values for the at least three selected pixels;
identifying respective pixels substantially encompassed by the at least one planar
region using an edge-seeking algorithm;
linearly interpolating respective intensity values for each respective pixel encompassed
by the at least one planar region based upon the assigned intensity values;
linearly interpolating respective color values for each respective pixel encompassed
by the at least one planar region based upon the respective assigned color values;
and
calculating respective final color values for each respective pixel encompassed by
the at least one planar region based upon the respective interpolated intensity values
and the respective interpolated color values.