Computer-generated Trompe l’œil

Naoki Sasaki∗
The University of Tokyo
Takeo Igarashi
The University of Tokyo
Figure 1: We print the image on the left and place it at a specific position. From a specific viewpoint, the printed paper looks as if there were 3D objects on it. The image on the right is a photograph taken from that viewpoint.
Abstract
We present a computational method to generate a trompe l’œil on paper: a 2D image depicting a virtual 3D model in such a way that the model appears to be a real 3D entity when a person views the printed image from a specific viewpoint. This method is similar to augmented reality methods, which show a virtual 3D model on top of a real-world image captured by a camera while considering photometric matching. The difficulty specific to our problem setting is that the color of a pixel sent to the printer differs from the color perceived by the viewer in the real world, because the perceived color is influenced by environmental factors such as paper and lighting conditions. We address this problem by adjusting the gray-scale color sent to the printer to account for these influences.
CR Categories: I.3.3 [Computer Graphics]: Three-Dimensional Graphics and Realism—Display Algorithms; I.3.6 [Computer Graphics]: Computing Methodologies—Methodology and Techniques;
Keywords: trompe l’œil, global illumination
1 Introduction
Trompe l’œil is an art technique in which a two-dimensional (2D) image depicting a three-dimensional (3D) object is generated in such a way that the 2D image appears to be a real 3D object when viewed from a specific viewpoint. Figure 1 shows an example of such an image produced by our method. Such artworks have been painted on the walls and ceilings of buildings to add 3D effects.
∗e-mail: naoki.sasaki@ui.is.s.u-tokyo.ac.jp
However, it would be difficult to draw trompe l’œil images manually, because one would need to consider how the shape looks from a specific viewpoint and how the environmental light affects its color. A typical approach is to place a camera at the target viewpoint and draw the image in the environment while constantly checking the camera view. However, this method is laborious and demands expert skills of the artist.
Therefore, we developed a computational method to generate a gray-scale trompe l’œil image using a printer. First, the user captures the lighting environment at the target location and renders a virtual 3D object in a virtual environment using the captured lighting. Next, the system generates image data by transforming the rendered image and adjusting its color so that the virtual object appears to be real when the printed image, placed at the target location, is viewed from the specific viewpoint. This image is then sent to the printer.
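The steps above can be sketched end to end on a toy model. Everything below is an illustrative assumption, not the authors' implementation: the lighting environment is reduced to a per-pixel ambient offset and a paper-reflectance gain, and the "image" is just a few gray values.

```python
# Hypothetical sketch of the capture -> render -> adjust -> print pipeline.
# The stage functions and the toy perceived-color model are assumptions
# made for illustration only.

def capture_environment():
    # Stand-in for photographing the target location: a toy ambient
    # offset (light reflected by the paper even for black ink) and a
    # gain (how strongly printed gray affects the perceived gray).
    return {"ambient": 30.0, "gain": 0.8}

def render_virtual_object(env):
    # Stand-in for rendering the 3D model under the captured lighting:
    # the desired perceived gray levels (0-255) for a few pixels.
    return [40.0, 120.0, 200.0]

def adjust_for_print(desired, env):
    # Invert the toy forward model perceived = ambient + gain * sent,
    # clamping to the printable range.
    out = []
    for d in desired:
        sent = (d - env["ambient"]) / env["gain"]
        out.append(min(255.0, max(0.0, sent)))
    return out

def perceived(sent, env):
    # Toy forward model of print + paper + lighting.
    return [env["ambient"] + env["gain"] * s for s in sent]

env = capture_environment()
desired = render_virtual_object(env)
printed = adjust_for_print(desired, env)
result = perceived(printed, env)  # matches `desired` for reachable grays
```

Within the printable range, the adjusted values reproduce the desired perceived grays exactly under this toy model; the real system replaces the linear model with a measured per-pixel response (Section 3).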
Most of the processing steps are similar to those used in augmented reality, in which the system renders a virtual 3D model on top of a real-world image captured by a camera, so that the 3D model appears to be a real object placed in the real environment. The particular difficulty we face here is that the color of a pixel sent to the printer differs from that perceived by an observer in the real world. This is because the perceived color is influenced by environmental factors such as paper color and lighting conditions. We address this problem by adjusting the color sent to the printer.
2 Related work
Our work is inspired by Nagai Hideyuki’s artwork (Figure 2). He uses pencils and crayons to create trompe l’œil on paper. Beever uses chalk to draw trompe l’œil on pavement [Beever 2010]. Both of them keep a camera at the viewpoint and check how the drawing looks from the camera view while they are drawing.
There are several projector-camera systems for textured screens [Nayar et al. 2003; Ashdown et al. 2006; Grossberg et al. 2004]. However, there are some problems with using a projector to show trompe l’œil. For example, people must take care not to block the light, and objects cannot be placed in front of the trompe l’œil, such as the pen in Figure 1.
Figure 3: Gray-scale adjustment method. (a) Images sent to the printer. (b) Captured images. (c) Photograph with synthetic objects, transformed to print size. (d) Desired trompe l’œil image.
4 Conclusion and future work
We propose a computational method to generate a gray-scale trompe l’œil image using a printer. The particular difficulty is that we must adjust the color of each pixel sent to the printer so that it is perceived as the desired color.
Figure 2: Two trompe l’œil examples drawn on paper. People feel as if there were 3D objects on the paper. ©2013 Nagai Hideyuki. All rights reserved. http://nagaihideyukiart.jimdo.com/
3 Our approach
There are four major steps in our framework. Step 1: we calibrate lens distortion and detect the camera position using Zhang’s approach [Zhang 2000]. Step 2: we compose the 3D synthetic objects into the photo. Step 3: we transform the shape of the paper in the photo into a print-size image. Step 4: we adjust the color of the image by considering the effects of paper color and environmental light.
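Step 3 amounts to a perspective (homography) warp from the paper quadrilateral seen in the photo to a print-size rectangle. The following is a minimal plain-Python sketch of estimating that homography with a direct linear transform; the point coordinates are made-up examples, and in practice a library routine such as OpenCV's getPerspectiveTransform would be used instead.

```python
# Sketch of Step 3: map the four paper corners in the photo to a
# print-size rectangle. Coordinates below are illustrative assumptions.

def solve_linear(a, b):
    # Gaussian elimination with partial pivoting for the 8x8 DLT system.
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def homography(src, dst):
    # Standard 8-equation system for h11..h32, with h33 fixed to 1.
    a, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        a.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        a.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    return solve_linear(a, b) + [1.0]

def apply_h(h, p):
    # Apply the homography with the projective divide.
    x, y = p
    w = h[6] * x + h[7] * y + h[8]
    return ((h[0] * x + h[1] * y + h[2]) / w,
            (h[3] * x + h[4] * y + h[5]) / w)

# Example: paper corners as seen in the photo -> A4-ish print raster.
photo_quad = [(102.0, 64.0), (410.0, 88.0), (395.0, 430.0), (90.0, 402.0)]
print_rect = [(0.0, 0.0), (2100.0, 0.0), (2100.0, 2970.0), (0.0, 2970.0)]
H = homography(photo_quad, print_rect)
```

With H in hand, each print pixel is filled by sampling the photo at the inverse-mapped location, which is what the actual warp does.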
To adjust the gray-scale color, we need to consider environmental factors such as paper color and light. Nayar et al. presented a projector-camera system for a textured screen [Nayar et al. 2003]. They demonstrated the approach for gray-scale images: they displayed a set of 255 images (in the case of an 8-bit-per-channel projector) and recorded the corresponding camera image for each, then created a per-pixel 1D lookup table from the input and measured values. From this table, they modified the displayed image to achieve the desired image. We use the same approach to achieve the desired image. In our case, it was difficult to build this 1D lookup table in the same manner, because we would need to print 255 images and manually place each printout at the same location. Instead, we used linear interpolation of 9 samples: we printed 9 images (Figure 3a) with different gray-scale values and captured these printed
images at the target camera position (Figure 3b). Let Mxy be the pixel value in the captured image at coordinates (x, y), hxy be the monotonic response function, and Ixy be the pixel value sent to the printer. Then, Mxy can be represented as follows:

Mxy = hxy(Ixy)    (1)

We determined the response function hxy using the samples (Figures 3a and 3b) and used it to compute the pixel values sent to the printer (Figure 3d) that are needed to produce the desired captured image (Figure 3c).
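For a single pixel, inverting Equation (1) with 9 samples reduces to piecewise-linear interpolation between the measured values. The sketch below assumes evenly spaced sample levels and a synthetic monotone response; both are illustrative assumptions, not the paper's measured data.

```python
# Sketch of the 9-sample calibration for one pixel: sample the response
# h_xy at 9 printed gray levels, then invert it by linear interpolation
# to find the value I_xy to send to the printer. The sample spacing and
# the synthetic response below are assumptions for illustration.

LEVELS = [0, 32, 64, 96, 128, 160, 192, 224, 255]  # printed gray values

def invert_response(levels, measured, target):
    # measured[i] = h_xy(levels[i]); assumed monotonically increasing.
    if target <= measured[0]:
        return float(levels[0])          # darker than reachable: clamp
    if target >= measured[-1]:
        return float(levels[-1])         # brighter than reachable: clamp
    for i in range(len(levels) - 1):
        lo, hi = measured[i], measured[i + 1]
        if lo <= target <= hi:
            t = (target - lo) / (hi - lo)
            return levels[i] + t * (levels[i + 1] - levels[i])

# Synthetic monotone response for one pixel: paper and ambient light
# lift the blacks, and the print-capture chain compresses the range.
measured = [20 + 0.8 * v for v in LEVELS]

# Pixel value to send so that this pixel is *captured* as gray 100:
sent = invert_response(LEVELS, measured, 100.0)
```

Targets outside the reachable range [measured[0], measured[-1]] are clamped, which mirrors the physical limit that no printed value can appear darker than black ink or brighter than bare paper under the given lighting.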
In future work, we plan to extend our approach from gray-scale to color images. Another direction is generating a trompe l’œil that spans two or more planes, such as the left image in Figure 2. In addition, we would like to develop a computational method that produces a trompe l’œil on arbitrary curved surfaces.
References
ASHDOWN, M., OKABE, T., SATO, I., AND SATO, Y. 2006. Robust content-dependent photometric projector compensation. In Proceedings of IEEE International Workshop on Projector-Camera Systems (PROCAMS) 2006.
BEEVER, J. 2010. Pavement Chalk Artist: The Three-Dimensional Drawings of Julian Beever. Firefly Books.
GROSSBERG, M. D., PERI, H., NAYAR, S. K., AND BELHUMEUR, P. N. 2004. Making one object look like another: Controlling appearance using a projector-camera system. In CVPR (1), 452–459.
NAYAR, S. K., PERI, H., GROSSBERG, M. D., AND BELHUMEUR, P. N. 2003. A projection system with radiometric compensation for screen imperfections. In First IEEE International Workshop on Projector-Camera Systems (PROCAMS 2003).
ZHANG, Z. 2000. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 22, 11 (Nov.), 1330–1334.