Technical Field
[0001] The present invention relates to the lighting of 3D computer generated models, and more
particularly to a method of interacting with an image based lighting surface.
Background
[0002] As computers become more powerful and 3D rendering software more sophisticated, computers
are increasingly used in the creation of photo-real images and animation. This
computer generated imagery is replacing real-world photography and cinematography
for making images of new product designs, such as consumer goods and cars, for
marketing and advertising, and for creating virtual characters and visual effects in
movies.
[0003] The ingredients required to make these images using a computer are 3D geometry representing
the physical forms of the objects being visualised, shaders that describe the appearance
of the geometry (the materials), virtual cameras which specify the views that are
being rendered and, finally, the light sources creating the illumination. All of these
elements are generally contained within a computer file called a 'scene' that can
be opened by a 3D software package. 3D software packages let the user create, assemble
and adjust these ingredients and then compute images using this data.
[0004] In 3D software the scene is represented and interacted with via viewports. These
viewports act as windows onto the scene; either views through the cameras, or, importantly,
orthographic views such as front, side and top. An example of a typical user interface
in existing 3D software is shown in Figure 1. Users can move around these views, for
example rotating the camera and zooming in and out. In the orthographic views users
can only move up, down, left and right on a plane and zoom in and out; they are locked
to the orthographic projection. The user needs to use all of these different viewports
to accurately position objects within the scene. Objects are first selected and then
tools are used to move them. The user swaps to the top view to understand the relative
position of the objects in plan, then swaps to the side view to check the height
at which they are placing objects. Placing objects in 3D space using the camera's view
is virtually impossible, as the user has no ability to judge height and depth. Using
the front, side and top views is essential for placing objects and light sources within
a scene.
[0005] Until recently these viewports were always shown in wireframe or with
simple shading. A wireframe view is far faster to compute, and this simple representation
of the scene enables users to more easily find and interact with the objects when
they are represented in this simple manner. Additionally, traditional CG lights are
represented by icons showing their position, direction, scale and type. These icons
can be moved like any other object within the scene. Because the CG lights had no
physical form, the icons were needed as a way to place and interact with the lights.
[0006] As computing power has increased, it has only recently become commonplace to have
a fully rendered interactive viewport, with the real world physical behaviour of light
transport, reflections and materials, delivering an image in real-time, or near real-time,
with a quality and feel that is very close to the final production quality rendering.
This viewport can be called a 'virtual interactive photograph', where changes to the
scene update the virtual photograph straight away. An example of a user interface
that includes a real-time rendered view, as well as orthographic views, is shown in
Figure 2.
Lighting in computer graphics
[0007] There are currently three basic technologies for lighting 3D computer generated objects.
The first we will call traditional CG light sources. These CG light sources are the
oldest technology, coming from the origins of computer graphics. They include:
directional lights (representing parallel light rays from a faraway source, like
the sun), spot lights, area lights and point lights. These light sources are represented
by icons in the 3D software viewports showing their placement and characteristics.
[0008] The next type of light source is an emitter. This is a piece of geometry assigned
a shader with illumination properties. The material may also have transparency
properties. The most common emitter form is a rectangular plane with a high dynamic
range image mapped onto its surface, perhaps with an alpha channel for transparency.
Emitters are a more realistic way to light a scene because the light source has a
physical form: it can be seen in reflections, just as in the real world. The old fashioned
CG light sources create illumination and fake their appearance in reflections via a specular
component in the shaders of the objects in the scene.
[0009] The final type of lighting is image based lighting, which is a kind of emitter that
totally surrounds the 3D scene. Every object is contained within this 3D object, usually
a large or infinite sphere, with a high dynamic range image mapped to its surface.
This HDR image creates illumination and is seen in reflections. The most common application
is to use a high dynamic range spherical photograph of a real world location and place
this around the scene to create the illusion that the synthetic object is actually
within the real world environment.
[0010] Alternatively, there are methods allowing users to make their own spherical HDRI maps,
placing nodes with properties onto a rectangular HDR canvas to create lighting and
effects, with their shapes pre-distorted such that they appear correctly when the
image is mapped onto the lighting sphere. This creation and adjustment of an HDRI
lighting map can be done in real-time, and therefore provides a method for real-time
lighting creation and real-time augmentation and adjustment of existing HDR
environments using this interactive HDR canvas. An example of this is described in
US 2010/0253685.
[0011] There are two ways to position the light sources within the scene. The first is to
select them and move them using the different orthographic viewports. The user can
view the virtual photograph viewport whilst these lighting position changes are happening
to judge their effect. Alternatively many software packages allow the user to look
from the point of view of the light source and use camera navigation tools to position
the light source. But once again this needs to be done in conjunction with the virtual
photograph view to see the results of the changes being made.
[0012] Now that computer rendering is technically able to produce images that are as realistic
as photographs, 3D artists are looking to photographic lighting techniques
to make their images of products and cars look as good as a professional photo shoot.
3D artists are learning that the placement of reflections of light in their subject
matter is just as important as the illumination. The problem is that it is not easy
to position illumination and reflections where you want them with the current solutions.
[0013] When looking through the light source as a camera, or indeed just looking at the 3D
scene, the only place you can actually tell whether the lighting is being placed correctly
is the virtual photograph viewport; this is where you know whether a light is catching
the reflection you want. The lighting is totally dependent on the viewpoint of the
camera.
[0014] Techniques have been shown before based on a concept of painting with light, essentially
drawing strokes onto a 3D model to place reflections and illumination - these were
mapped back onto a 3D form that surrounded the object. These methods fail to capture
many essential ingredients required by the 3D artist wishing to produce professional
quality lighting, which we believe is why this approach has not been utilised in 3D
software more than five years after the research was published. Photographers do not
light objects using a paint brush and strokes. Photographers place distinctive 'shapes'
in reflections and control the transition of light across these shapes to control
their appearance in the subject. Photographers and 3D artists also iterate their lighting
many times until they are happy that they have achieved the desired effect. It is
therefore a very creative process, and lighting needs to be easily adjustable at all
times.
[0015] Image based lighting is a powerful lighting technology that is also underutilised,
given its history as a method for placing an object into a static HDR photographed
environment. Image based lighting is a very efficient method for creating realistic
lighting and reflections, and can be very interactive. The industry has not fundamentally
changed the way we approach lighting computer generated objects since 3D graphics
was invented. A new method of lighting 3D objects is needed that is easier and faster
than the trial and error methods currently used to place reflections and illumination
around a 3D object. Users want to put a reflection, or place illumination, exactly
where they want it on an object, and quickly. They then want to adjust the placed
items easily to try out different lighting ideas. Real-time rendering of very realistic
images provides instant feedback on lighting changes, which is totally liberating,
and this is not yet being taken full advantage of.
Summary
[0016] According to a first embodiment of the invention, there is provided a computer implemented
method of interacting with a three dimensional image based lighting surface in order
to adjust the lighting properties of the surface, the method comprising:
defining an image plane and a user viewpoint for the lighting surface;
rendering and displaying on a display of a computer a scene containing an object in
situ within the lighting surface taking into account said image plane and said user
viewpoint;
by way of a user interaction with the displayed scene, receiving an identification
of a point on the image plane;
tracing a ray from the user viewpoint through the identified point on the image plane
and either,
- determining a surface intersection point of the ray with said surface or
- determining an object intersection point of the ray with said object and tracing a
further ray either being a reflection of the ray from the object or being normal to
the surface of the object at said object intersection point, and determining a surface
intersection point of the further ray with said surface,
and
adjusting the lighting properties of the surface in the region of the surface intersection
point, either by
adding to the image based lighting surface, in the region of the said surface intersection
point, a light source having a geometry including mapping the geometry of the light
source onto the three dimensional image based lighting surface, or
selecting a light source having a geometry in the region of the said surface intersection
point and subtracting the selected light source or modifying the properties of the
selected light source.
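Purely by way of illustrative example, and not as part of the claimed method, the ray tracing steps set out above may be sketched as follows. The sphere radius, function names and vector conventions here are assumptions made only for the sketch: the lighting surface is taken to be a sphere centred on the scene origin, and the object intersection (point and unit normal) is assumed to have already been found by the renderer's geometry lookup.

```python
import math

R_LIGHT = 10.0  # assumed radius of the image based lighting sphere

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def ray_sphere_t(origin, direction, radius):
    """Nearest positive t at which origin + t*direction meets a sphere
    of the given radius centred on the world origin, or None.
    direction is assumed to be unit length."""
    b = 2.0 * sum(o * d for o, d in zip(origin, direction))
    c = sum(o * o for o in origin) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    for t in ((-b - math.sqrt(disc)) / 2.0, (-b + math.sqrt(disc)) / 2.0):
        if t > 1e-6:
            return t
    return None

def reflect(d, n):
    """Mirror unit direction d about unit normal n."""
    k = 2.0 * sum(a * b for a, b in zip(d, n))
    return [a - k * b for a, b in zip(d, n)]

def surface_intersection(viewpoint, pixel_dir, object_hit=None, mode="direct"):
    """Point on the lighting sphere selected by a click.

    pixel_dir is the ray from the user viewpoint through the identified
    point on the image plane.  object_hit, when given, is a (point,
    unit normal) pair for the object under the cursor; mode
    'reflection' bounces the view ray off the object, while mode
    'normal' follows the surface normal from the object instead."""
    pixel_dir = normalize(pixel_dir)
    if object_hit is None or mode == "direct":
        origin, direction = viewpoint, pixel_dir
    else:
        point, normal = object_hit
        direction = reflect(pixel_dir, normal) if mode == "reflection" else normal
        origin = point
    t = ray_sphere_t(origin, direction, R_LIGHT)
    if t is None:
        return None
    return [o + t * d for o, d in zip(origin, direction)]
```

The lighting properties of the surface would then be adjusted in the region of the returned point, for example by adding, moving or selecting a light source there.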
[0017] Embodiments of the present invention may provide a better approach to creating and
adjusting lighting for 3D objects in computer graphics. Embodiments may provide highly
interactive, intuitive, precise and fast methods that enable a 3D artist to light
an object to a far higher quality than was previously possible, in a much shorter timescale.
[0018] Said lighting surface may be a sphere or a cuboid, and may comprise a photographic
or computer generated image.
[0019] If more than one light source is located at or in the region of said surface intersection
point, the user is provided with a means to select the desired lighting source from
the more than one lighting source located at or in the region of said surface intersection
point.
[0020] Said step of adjusting the lighting properties may comprise a step of mapping the
geometry of a light source onto the lighting surface, and may comprise tracing rays
from the light source to an origin of the lighting surface.
[0021] The method may comprise displaying on a display of the computer a canvas being a
two dimensional mapping of the three dimensional lighting surface.
[0022] Said step of receiving an identification of a point on the image plane may comprise
receiving an input on the rendered scene from a user operated pointing device such
as a mouse, tablet or touch screen.
[0023] The method may comprise, subsequent to said step of adjusting the lighting properties
of the surface, re-rendering and displaying the scene using a modified lighting surface
in substantially real-time.
[0024] According to a second embodiment of the invention there is provided a system configured
to interact with a three dimensional image based lighting surface, the system comprising:
a display configured to display a rendered scene containing an object in situ within
the lighting surface taking into account an image plane and a user viewpoint;
a graphical user interface configured to receive an identification of a point on the
image plane by way of a user interaction with the displayed scene; and
a processor configured to:
render the scene to be displayed on the display;
trace a ray from the user viewpoint through the identified point on the image plane
and either,
- determine a surface intersection point of the ray with said surface or
- determine an object intersection point of the ray with said object and trace a further
ray either being a reflection of the ray from the object or being normal to the surface
of the object at said object intersection point, and determine a surface intersection
point of the further ray with said surface,
and
adjust the lighting properties of the surface at or in the region of the surface intersection
point, either by
adding to the image based lighting surface, in the region of said surface intersection
point, a light source having a geometry including mapping the geometry of the light
source onto the three dimensional image based lighting surface, or
selecting a light source having a geometry in the region of said surface intersection
point and subtracting the selected light source or modifying the properties of the
selected light source.
[0025] The user interaction with the displayed scene may be by way of a user operated pointing
device such as a mouse, tablet or touch screen.
[0026] The processor may be further configured to re-render the scene to be displayed on
the display in substantially real-time, subsequent to adjusting the lighting properties
of the surface.
[0027] According to a third embodiment of the invention, there is provided a computer program
product comprising a computer readable medium having thereon computer program code,
such that, when the computer program code is run, it makes the computer execute a
method of interacting with a three dimensional image based lighting surface according
to any one of the statements corresponding to the first embodiment above.
Brief Description of the Drawings
[0028]
Figure 1 shows a user interface for known 3D software which includes orthogonal views;
Figure 2 shows a user interface for known 3D software which includes orthogonal views
as well as a real-time rendered view;
Figure 3 shows the determination of three locations at the image based lighting surface
using rays;
Figure 4 shows mapping of lighting sources onto a spherical lighting surface;
Figure 5 shows a real-time rendered camera view with which a user is interacting;
Figure 6 shows the image based lighting sphere resulting from the user input shown
in Figure 5;
Figure 7 shows the HDR canvas of the lighting sphere of Figure 6;
Figure 8 shows rendered views and corresponding HDR canvasses for lighting sources
at three different positions;
Figure 9 shows an object in a rendered image with a user interacting with the light
properties within the rendered image;
Figure 10 shows the HDR canvas corresponding to the rendered image of Figure 9;
Figure 11 shows a representation of the spherical light surface for the rendered image
of Figure 9;
Figure 12 shows detailed placement of a lighting source using reflection at a zoomed
in portion of the scene;
Figure 13 shows a real-time rendered camera view with which a user is interacting;
Figure 14 shows the image based lighting sphere resulting from the user input shown
in Figure 13;
[0029] Using these three determined locations, it is possible to interact with the lighting
environment of the 3D object. For example:
- Select the elements in the high dynamic range image data set that contribute to illuminating
or reflecting in the pixel selected.
- Move data elements in the high dynamic range image data to illuminate or reflect in
that pixel.
- Move elements in the high dynamic range data to the new direct position selected.
[0030] The method described above enables real-time lighting design directly onto a real-time
rendered image.
[0031] Lights added to the lighting design are known as lighting nodes
or lighting sources. They can be added to a position on the image based lighting surface
using the methods described above. The lighting sources have a geometry, or shape,
that must be mapped onto the spherical surface of the lighting surface. Figure 4 shows
three lighting sources, one at each of the intersection points. The lighting sources
appear in Figure 4 as straight lines that touch the lighting surface at the intersection
points. In order to incorporate the lighting source into the lighting surface, the
geometry of the source must be mapped onto the spherical shape of the lighting surface.
As can be seen from Figure 4, this is done by tracing rays (adjustment rays) from
points in the lighting source (just two outer points, indicating the edges of the
lighting source, are shown in Figure 4) back to the origin of the ray that traced the
projected lighting adjustment to the surface. The lighting source geometry is then
mapped onto the spherical lighting surface at the points at which the adjustment rays
pass through it.
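Again purely by way of illustration, the mapping of a flat light source onto the spherical lighting surface may be sketched as follows. The equirectangular canvas convention, the sphere radius and the function names are assumptions made for the sketch only; the adjustment rays here are taken, as in paragraph [0020], between the light source points and the origin of the lighting surface, so the projection reduces to normalising each corner's direction.

```python
import math

def direction_to_uv(d):
    """Map a unit direction to (u, v) on the HDR canvas using one
    common equirectangular convention; real packages may differ."""
    u = 0.5 + math.atan2(d[0], d[2]) / (2.0 * math.pi)
    v = 0.5 - math.asin(max(-1.0, min(1.0, d[1]))) / math.pi
    return (u, v)

def map_source_to_canvas(corners, radius=10.0):
    """Project each corner of a flat light source onto a lighting
    sphere of the given radius, centred on the scene origin, along the
    adjustment ray joining the origin to that corner, and return the
    canvas UV coordinate of each projected corner."""
    uvs = []
    for p in corners:
        length = math.sqrt(sum(x * x for x in p))
        d = [x / length for x in p]      # unit adjustment-ray direction
        uvs.append(direction_to_uv(d))   # the radius cancels in the mapping
    return uvs
```

Drawing the source into the canvas between the projected corners reproduces the distorted shape that appears undistorted once the canvas is wrapped back onto the sphere.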
[0032] Some of the possible interactions that the user is able to make will now be described
in greater detail.
Moving lighting sources on the HDR canvas using reflection mode
[0033] As shown in Figure 5, the user clicks on the model in the rendered view at the point
where they would like a selected lighting source to be seen in the reflection. As
shown in Figure 6, the software looks at the geometry under this pixel location and
generates a ray from the camera viewpoint to this point on the 3D object
and bounces a reflection ray until it intersects with the image based lighting
sphere. This provides the exact location where the lighting source needs to be moved
to. The software instantly moves the lighting source to this UV position on the HDR
canvas, shown in Figure 7, and generates new HDR data with the lighting source in
this new position and correctly distorting its shape for spherical mapping. The rendered
view is updated with this new HDR data instantly and re-rendered with the lighting
source seen instantly in the position where the user clicked.
[0034] The user could click and drag the lighting source around by its reflection until
it is in the most desirable position. This is shown in Figure 8, where the rendered
view and the corresponding HDR canvas are shown for a reflection that is moved by
the user at three different positions.
Using lighting sources to adjust existing HDR data on the canvas
[0035] In Figure 9, the user is clicking on the rendered view to move the lighting source
into a new position. This lighting source is brightening the exposure of the HDR background
under the source on the canvas, as shown in Figure 10. Figure 11 shows the corresponding
image based lighting sphere. Lighting sources can be any shape, can vary in brightness,
colour and transparency across their shape, and can blend with the content underneath
and use masks too. A lighting source can be used to make local adjustments to existing
HDR data.
[0036] The user can interact with the lighting much more quickly, easily and efficiently
by directly driving the positioning and selection of these lighting sources from the
3D rendered view, placing the sources using the illumination and reflection modes.
The user puts the changes where they see they want them in order to create
a more pleasing appearance, using a sophisticated, source based, non-destructive editing
and creation environment for HDR illumination and reflection data for computer graphics.
Accurate positioning of lighting sources on the HDR canvas using zoom and reflection
mode
[0037] As shown in Figure 12, the user selects a region of the view to zoom into. The user
then positions a lighting source using reflection mode on a detailed area. The camera
view has not been moved at all; the view is still an area of the original view, but
zoomed in, so reflection placements made here will be in the same place when zoomed
out to see the full view again. When the user zooms out, the full effect of that precision
placement can be seen, including reflections on other parts of the object caused by
the same added lighting source.
Moving lighting sources on the HDR canvas using illumination mode
[0038] Illuminating the same place on the model requires a light source in a very different
position from that required in reflection mode.
[0039] As shown in Figure 13, the user clicks on the model in the rendered view where the
user wants the selected lighting source to illuminate the model. As shown in Figure
14, the software looks at the geometry under this pixel location and generates a ray
normal to this geometry that intersects the image based lighting sphere. This intersection
provides the exact location where a lighting source needs to be added or moved to.
The software instantly adds or moves a lighting source to this UV position on the
HDR canvas, shown in Figure 15, and generates new HDR data with the lighting source
in this new position and correctly distorts its shape for spherical mapping.
The rendered view is updated with this new HDR data instantly and re-rendered with
the lighting source seen instantly illuminating the position where the user clicked.
The user could click and drag the lighting source around to place illumination where
they want it.
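The illumination mode computation described above may be sketched as follows, again as an illustration only. The equirectangular canvas convention, the sphere radius and the names are assumptions for the sketch, and the clicked point and its normal are assumed to come from the renderer's geometry lookup; the ray simply follows the surface normal out to the lighting sphere.

```python
import math

def illumination_mode_uv(hit_point, hit_normal, radius=10.0):
    """Canvas UV for a light source that will illuminate the clicked
    point: follow the surface normal from the hit point out to a
    lighting sphere of the given radius centred on the scene origin
    (the hit point is assumed to lie inside the sphere), then apply
    an equirectangular mapping."""
    n = math.sqrt(sum(x * x for x in hit_normal))
    d = [x / n for x in hit_normal]                # unit normal ray
    b = 2.0 * sum(o * x for o, x in zip(hit_point, d))
    c = sum(o * o for o in hit_point) - radius * radius
    t = (-b + math.sqrt(b * b - 4.0 * c)) / 2.0    # far root: start inside
    p = [o + t * x for o, x in zip(hit_point, d)]  # point on the sphere
    u = 0.5 + math.atan2(p[0], p[2]) / (2.0 * math.pi)
    v = 0.5 - math.asin(max(-1.0, min(1.0, p[1] / radius))) / math.pi
    return (u, v)
```

Adding or moving a source to this UV position, regenerating the HDR data and re-rendering places the illumination where the user clicked.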
Selecting lighting sources on the HDR canvas using reflection mode
[0040] As the user can now place lights directly onto the model with greater accuracy, it
is expected that the user will place many more lights than with traditional
lighting techniques. This is further expected because placing tiny reflection details
boosts the dynamic feel of a render, bringing it much closer to a real photograph.
[0041] Keeping track of the position of each light source and its effect would be very difficult,
and would be very data and processor intensive. Therefore, allowing the user to select
the lighting sources using this method directly from the 3D rendered model is beneficial
in order to allow the user to easily continue editing and adjusting the lighting and
reflection environment.
[0042] As shown in Figure 16, the user clicks on the model in the rendered view on the lighting
source reflection that they wish to select. The software looks at the geometry under
this pixel location and generates a ray, as shown in Figure 17, from the camera viewpoint
to this point on the 3D object and bounces a reflection ray until it intersects
with the image based lighting sphere. The software then does a hit test to check which
light source is under this UV location and selects it. The user can now adjust the
location and properties of the selected lighting source. Figure 18 shows the selected
lighting source on the HDR canvas.
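The hit test used for selection may be sketched as follows, assuming purely for illustration that each lighting source occupies an axis aligned rectangle on the HDR canvas, stored as a centre UV and a size, with later entries drawn on top. Real sources can be any shape, so an actual implementation would test against the source's true geometry or mask; the data layout and names here are assumptions for the sketch.

```python
def pick_all(uv, sources):
    """All sources under the clicked UV point, topmost first.  When
    several sources are layered, the UI can present this list for the
    user to choose from."""
    u, v = uv
    hits = []
    for src in reversed(sources):          # last drawn is on top
        cu, cv = src["uv"]
        w, h = src["size"]
        if abs(u - cu) <= w / 2 and abs(v - cv) <= h / 2:
            hits.append(src)
    return hits

def pick_source(uv, sources):
    """Topmost source under the click, or None if nothing is hit."""
    hits = pick_all(uv, sources)
    return hits[0] if hits else None
```

The UV passed in would be the one obtained by bouncing the reflection ray from the clicked point to the lighting sphere; once a source is selected, its location and properties can be adjusted directly.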
Selecting lighting sources on the HDR canvas using reflection mode when there are
layered lighting sources
[0043] In a similar way to the previous method, the user clicks on the model in the rendered
view on the lighting source's reflection that they wish to select (Figure 19). The
software looks at the geometry under this pixel location, as shown in Figure 20, and
generates a ray from the camera viewpoint to this point on the 3D object and bounces
a reflection ray until it intersects with the image based lighting sphere. The software
does a hit test to check which lights are under this location. Now, as there are two
possible lighting sources located at this position on the lighting sphere, the software
provides a list of light sources for the user to choose from, as shown in Figure 19,
in order to select the lighting source they want to adjust. Figure 21 shows the HDR
canvas for the situation where two lighting sources are layered on the lighting surface.
Placing lighting sources directly onto the image based lighting sphere in the rendered
view
[0044] In this mode, as shown in Figure 22, the user simply clicks in the rendered camera
view and the position where the user clicks is calculated as a position on the image
based lighting sphere seen in the background. The selected lighting source is then
added or moved to this location. Figures 23 and 24 show the image based lighting sphere
and HDR canvas respectively for this situation.
[0045] This allows the user to rim light (backlight) objects easily or use the camera to
look inside the lighting sphere and click on the sphere at any time to place a source
in that position, or select an existing source directly.
[0046] Selecting lighting nodes using reflections means that it is possible to use this
new method without ever needing to show the HDR canvas to the user at all. All decisions
can be made directly on the 3D model, for example regarding position, scaling, brightness,
colour etc. This saves a lot of screen space within software in which it is implemented.
This also means that the system can be used to deliver lighting technology control
through a far simpler interface, even over the Internet, on tablets or mobile devices.
Figure 25 shows the method implemented on a desktop computer through a browser, with
the user interacting with the lighting of the rendered view using a mouse cursor.
Additionally because the rendered view itself is being interacted with, it is not
limited to clicking, and so this can be done not only using a mouse cursor, but also
other user input means, for example using a finger on a touch screen, or a tablet
and stylus such as those widely used by computer graphic designers, for example a
Wacom™ tablet etc. Figure 26 shows the method implemented using a tablet and stylus.
Figure 27 shows the method implemented on a touch screen device, with the user interacting
with the lighting of the rendered view using a finger. Any user input that enables
the user to touch and drag on the rendered view will be useable with the methods described
herein.
[0047] Using the described methods, the user will feel as though they are lighting a final
image by interacting directly with the image itself. The methods are not limited to
adjusting the position of lighting sources. It is expected that the methods of interacting
with the rendered view described herein will also allow direct manipulation of other
lighting source properties, for example scale, shape, brightness and colour; for
example, clicking and dragging, or using multitouch input on a touch screen device,
to scale the light and orient its rotation at the same time.
1. A computer implemented method of interacting with a three dimensional image based
lighting surface in order to adjust the lighting properties of the surface, the method
comprising:
defining an image plane and a user viewpoint for the lighting surface;
rendering and displaying on a display of a computer a scene containing an object in
situ within the lighting surface taking into account said image plane and said user
viewpoint;
by way of a user interaction with the displayed scene, receiving an identification
of a point on the image plane;
tracing a ray from the user viewpoint through the identified point on the image plane
and either,
• determining a surface intersection point of the ray with said surface or
• determining an object intersection point of the ray with said object and tracing
a further ray either being a reflection of the ray from the object or being normal
to the surface of the object at said object intersection point, and determining a
surface intersection point of the further ray with said surface,
and
adjusting the lighting properties of the surface in the region of the surface intersection
point either by
adding to the image based lighting surface, in the region of the said surface intersection
point, a light source having a geometry including mapping the geometry of the light
source onto the three dimensional image based lighting surface, or
selecting a light source having a geometry in the region of the said surface intersection
point and subtracting the selected light source or modifying the properties of the
selected light source.
2. A method as claimed in claim 1, wherein said lighting surface is a sphere or a cuboid.
3. A method as claimed in claim 1 or 2, wherein said lighting surface comprises a photographic
or computer generated image.
4. A method as claimed in any one of the preceding claims, wherein, if more than one light
source is located at or in the region of said surface intersection point, said step
of selecting a light source comprises providing the user with a means to select the
desired lighting source from the more than one lighting source located at or in the
region of said surface intersection point.
5. A method as claimed in any one of the preceding claims, wherein said step of mapping
comprises tracing rays from an added light source to an origin of the lighting surface.
6. A method as claimed in any one of the preceding claims and comprising displaying on
a display of the computer a canvas being a two dimensional mapping of the three dimensional
lighting surface.
7. A method as claimed in any one of the preceding claims, wherein said step of receiving
an identification of a point on the image plane comprises receiving an input on the
rendered scene from a user operated pointing device such as a mouse, tablet or touch
screen.
8. A method as claimed in any one of the preceding claims and comprising, subsequent
to said step of adjusting the lighting properties of the surface, re-rendering and
displaying the scene using the modified lighting surface in substantially real-time.
9. A system configured to interact with a three dimensional image based lighting surface,
the system comprising:
a display configured to display a rendered scene containing an object in situ within
the lighting surface taking into account an image plane and a user viewpoint;
a graphical user interface configured to receive an identification of a point on the
image plane by way of a user interaction with the displayed scene; and
a processor configured to:
render the scene to be displayed on the display;
trace a ray from the user viewpoint through the identified point on the image plane
and either,
• determine a surface intersection point of the ray with said surface or
• determine an object intersection point of the ray with said object and trace a further
ray either being a reflection of the ray from the object or being normal to the surface
of the object at said object intersection point, and determine a surface intersection
point of the further ray with said surface,
and
adjust the lighting properties of the surface in the region of the surface intersection
point either by
adding to the image based lighting surface, in the region of said surface intersection
point, a light source having a geometry including mapping the geometry of the light
source onto the three dimensional image based lighting surface, or
selecting a light source having a geometry in the region of said surface intersection
point and subtracting the selected light source or modifying the properties of the
selected light source.
10. A system as claimed in claim 9, wherein the user interaction with the displayed scene
is by way of a user-operated pointing device such as a mouse, tablet or touch screen.
11. A system as claimed in claim 9 or 10, wherein the processor is further configured
to re-render the scene to be displayed on the display in substantially real-time,
subsequent to adjusting the lighting properties of the surface.
12. A computer program product comprising a computer readable medium having thereon computer
program code such that, when the computer program code is run, it causes the computer
to execute a method of interacting with a three dimensional image based lighting surface
according to any one of claims 1 to 8.