Physically based image synthesis methods, a research direction in computer graphics (CG), are capable of simulating optical measuring systems in their entirety and thus constitute an interesting approach for the development, simulation, optimization, and validation of such systems. In addition, other CG methods, so-called procedural modeling techniques, can be used to quickly generate large sets of virtual samples and scenes thereof that exhibit the same variety as physical testing objects and real scenes (e.g., if digitized sample data are not available or difficult to acquire). Appropriate image synthesis (rendering) techniques result in a realistic image formation for the virtual scenes, considering light sources, material, complex lens systems, and sensor properties, and can be used to evaluate and improve complex measuring systems and automated optical inspection (AOI) systems independently of a physical realization. In this paper, we provide an overview of suitable image synthesis methods and their characteristics, we discuss the challenges for the design and specification of a given measuring situation in order to allow for a reliable simulation and validation, and we describe an image generation pipeline suitable for the evaluation and optimization of measuring and AOI systems.

Current physically based image synthesis techniques constitute a major leap compared to previously used, mostly phenomenological approaches. The simulation of light transport is at the core of physically based image synthesis methods and crucial to generate images that are on par with images made by physical image acquisition systems. Light transport simulation nowadays is almost exclusively computed using Monte Carlo (MC) or Markov chain Monte Carlo (MCMC) methods, which can account for complex light–matter interactions and naturally handle spectral emission, absorption, and scattering behavior (measured or derived from models) described by geometric optics. (MC)MC methods can also comprise the simulation of complex lens systems to accurately compute the resulting irradiance onto a virtual sensor.

Essentially, all (MC)MC rendering methods compute an estimate of the light transport by sampling, that is, stochastically generating, paths on which light propagates from light sources to sensors, their main difference being the path sampling strategy. Until sufficient convergence, the variance of this estimation is apparent as noise in the images. Owing to this generality, even simplistic realizations of these methods are versatile and, in principle, capable of achieving the desired, and required, results. However, their application is only practical when an (MC)MC method is used that is well suited for a given scenario; otherwise, the computation time can easily become prohibitively long, even in seemingly simple cases. For example, one would choose different methods for computing complex high-frequency light transport phenomena (recognizable by multiple glossy or specular reflections) than for highly scattering media.

As indicated, many different rendering algorithms and sampling strategies exist in the realm of (MC)MC methods, and they all exhibit different performance and noise characteristics, which are strongly linked to the type of light–matter interactions and the geometry configurations occurring in a scene. As such, it is not straightforward for the non-expert to select the appropriate method. For this purpose, we identify light interactions and phenomena that pose different challenges for image synthesis, and we discuss state-of-the-art algorithms such as path tracing and bi-directional path tracing (BDPT).

Particular attention has to be paid to the simulation of complex lens systems, which can increase the computation time by orders of magnitude when implemented naively. We discuss the use of a state-of-the-art approach to efficient rendering with realistic lenses in the context of measuring and AOI systems.

Illustration of basic ray tracing.

There are two major strategies for determining the color of an image pixel: rasterization and ray tracing.

Ray tracing, also sometimes referred to as ray casting, determines the visibility of surfaces by tracing rays of light from the virtual view point, that is, the viewer's eye or the image sensor, to the objects in the scene. The view point serves as the origin of all view rays, and the image as a window on an arbitrary view plane. For each pixel of the image, a view ray originating at the view point is sent through the pixel into the scene in order to find the nearest intersection with a surface. By recursive application of this ray casting, as illustrated in Fig.
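The innermost operation, finding the nearest intersection of a view ray with the scene geometry, can be sketched for a single sphere (all names and values here are illustrative, not part of any particular renderer):

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the smallest positive ray parameter t at which the ray
    origin + t * direction hits the sphere, or None if it misses.
    The direction is assumed to be normalized, so the quadratic
    coefficient a equals 1."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * e for d, e in zip(direction, oc))
    c = sum(e * e for e in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None  # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-6 else None

# View ray through a pixel: origin at the view point, unit direction.
t = intersect_sphere((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), (0.0, 0.0, 5.0), 1.0)
```

A full ray tracer repeats this test (accelerated by spatial data structures) over all scene primitives and keeps the closest hit.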

Illustration of the surface interaction at point

Rasterization, on the other hand, projects geometric primitives one by one onto the image window. A depth buffer, also called a z-buffer, is utilized to determine the closest and thus visible primitive for each pixel.

Usually, the perspective projection is carried out in three steps. First, the projection transformation, expressed in homogeneous coordinates, is applied. Afterwards, the projective coordinates are dehomogenized by the normalization transformation, mapping the view frustum to the unit cube. The resulting device coordinates can then be mapped to image coordinates by discarding the depth component, as is done for orthographic projection.
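The three steps can be sketched with an OpenGL-style projection matrix (the frustum parameters and image size below are illustrative):

```python
import math

def perspective(fov_y_deg, aspect, near, far):
    """OpenGL-style perspective projection matrix (row-major nested lists)."""
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def project(p, m, width, height):
    # Step 1: projection transformation in homogeneous coordinates.
    clip = [sum(m[r][c] * v for c, v in enumerate((p[0], p[1], p[2], 1.0)))
            for r in range(4)]
    # Step 2: dehomogenize (normalization transformation) to normalized
    # device coordinates inside the unit cube [-1, 1]^3.
    ndc = [c / clip[3] for c in clip[:3]]
    # Step 3: drop the depth component and map to image (pixel) coordinates,
    # flipping y to the usual top-left image convention.
    return ((ndc[0] + 1.0) / 2.0 * width, (1.0 - ndc[1]) / 2.0 * height)

m = perspective(90.0, 1.0, 0.1, 100.0)
u, v = project((0.0, 0.0, -1.0), m, 640, 480)  # point on the view axis
```

A point on the view axis lands in the image center, as expected.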

All primitives can be processed in parallel using a single instruction multiple data (SIMD) approach and minor synchronization via the depth buffer. This allows for a very fast pipelined hardware implementation in the form of modern graphics processing units (GPUs).

Put simply, ray tracing starts with the pixels and then determines ray intersections with the scene geometry, while rasterization starts with the geometry, projecting it onto the image plane. The availability of modern GPUs makes rasterization feasible for interactive real-time applications.

The surface interactions during ray tracing can be described easily, as
illustrated in Fig.

The radiance consists of the light that is emitted in that direction at the
surface point,

All terms combined lead to Eq. (
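For reference, the standard form of the rendering equation combining these terms is

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\,
    (\omega_i \cdot n)\, \mathrm{d}\omega_i
```

with outgoing radiance $L_o$, emitted radiance $L_e$, incident radiance $L_i$, the bidirectional reflectance distribution function $f_r$, the surface normal $n$, and the hemisphere of incident directions $\Omega$; the symbols here follow the common convention and may differ from the notation used elsewhere in this paper.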

Monte Carlo (MC) methods are a broad class of algorithms that numerically evaluate an integral by repeated random sampling. They are well suited to high-dimensional problems and are therefore a good choice for evaluating the surface integral, since multiple reflections and the recursive nature of the rendering equation make light transport a high-dimensional problem. The Monte Carlo evaluation of the rendering equation leads to approximation Eq. (
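As a minimal, generic illustration of the MC estimator (a 1-D toy integral, not rendering code):

```python
import random

def mc_estimate(f, a, b, n, seed=0):
    """Estimate the integral of f over [a, b] with n uniform samples.
    The estimator (b - a) * mean(f(x_i)) is unbiased; its standard
    deviation, visible as noise in rendered images, falls as 1/sqrt(n)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += f(rng.uniform(a, b))
    return (b - a) * total / n

# Integral of x^2 over [0, 1]; exact value: 1/3.
estimate = mc_estimate(lambda x: x * x, 0.0, 1.0, 100_000)
```

In rendering, the integrand is the product of radiance, BRDF, and geometry terms, and the samples are directions or complete transport paths.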

Markov chain Monte Carlo (MCMC) methods draw samples from a probability distribution by constructing a Markov chain whose stationary distribution is the desired one. In the context of rendering, Markov chains make it possible to construct new paths by mutating existing paths, a fact that is exploited by Metropolis light transport (MLT), which is described in Sect.
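The core Metropolis–Hastings step, proposing a perturbed state and accepting it with probability min(1, p(x')/p(x)), can be sketched for a 1-D target density (density and step size are illustrative; in MLT the states are transport paths and the proposals are path mutations):

```python
import math
import random

def metropolis_hastings(log_density, x0, n, step=0.5, seed=0):
    """Sample n states from an (unnormalized) density via symmetric
    Gaussian proposals. Rejected proposals repeat the current state,
    which is what makes the chain converge to the target density."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n):
        proposal = x + rng.gauss(0.0, step)
        # Accept with probability min(1, p(proposal) / p(x)).
        accept = math.exp(min(0.0, log_density(proposal) - log_density(x)))
        if rng.random() < accept:
            x = proposal
        samples.append(x)
    return samples

# Target: standard normal density (up to normalization).
chain = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 50_000)
mean = sum(chain) / len(chain)
```

The chain's sample mean and variance approach those of the target distribution, at the price of correlation between consecutive samples.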

Illustration of distributed ray tracing.

(MC)MC rendering methods are nowadays the predominant way to compute light transport simulations. For all their diversity, they share the concept of stochastically creating paths connecting the sensor to the light sources. In this section, we discuss different (MC)MC rendering algorithms and sampling strategies suitable for the simulation of measuring situations. As mentioned in introductory Sect.

Ray tracing is the basic approach of casting rays starting from the camera and recursively spawning successive rays at reflections. Recursive evaluation of the rendering equation, described in Sect.

Summary. Distributed ray tracing is the straightforward implementation of the recursive Monte Carlo evaluation of the rendering equation and as such easy to implement; however, spawning multiple rays at each intersection leads to an exponential growth in the number of rays with path depth, a problem from which more recent methods do not suffer.

Illustration of path tracing.

Path tracing, introduced by

One problem of ray tracing and path tracing is that a ray or path that does not hit a light source transports no energy and thus contributes nothing to the image pixel. In general, however, it is unlikely that a randomly sampled path hits a light source. To alleviate this problem, next event estimation can optionally be used (NEE,

While path tracing usually converges even without NEE, NEE explicitly generates paths that reach light sources and thus transport much energy, so such paths are found earlier on average. This means that images from early stages of the rendering give a better impression of the scene illumination; NEE is therefore also a good choice when rough preview images are useful.
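The benefit can be illustrated with a toy setting (geometry and values invented for illustration): a diffuse surface point is lit by a spherical-cap light of half-angle theta_max centered on the surface normal, for which the exact direct illumination is albedo * sin(theta_max)^2. Both strategies below are unbiased, but without NEE only the rare rays that happen to hit the small light contribute at all:

```python
import math
import random

def direct_light(n, theta_max, albedo=0.8, nee=False, seed=0):
    """Estimate direct illumination at a diffuse point lit by a
    spherical-cap light (half-angle theta_max around the normal,
    emitted radiance 1). Exact value: albedo * sin(theta_max)**2.
    nee=False: uniform hemisphere sampling, most rays miss a small light.
    nee=True:  sample within the light's solid angle (next event estimation)."""
    rng = random.Random(seed)
    cos_max = math.cos(theta_max)
    total = 0.0
    for _ in range(n):
        if nee:
            # Sample a direction inside the cap; pdf = 1 / cap solid angle.
            cos_t = cos_max + rng.random() * (1.0 - cos_max)
            pdf = 1.0 / (2.0 * math.pi * (1.0 - cos_max))
        else:
            # Uniform over the hemisphere: cos(theta) ~ U[0, 1], pdf = 1/(2*pi).
            cos_t = rng.random()
            pdf = 1.0 / (2.0 * math.pi)
            if cos_t < cos_max:
                continue  # missed the light: zero contribution
        total += (albedo / math.pi) * cos_t / pdf  # diffuse BRDF = albedo/pi
    return total / n

theta = math.radians(5.0)
exact = 0.8 * math.sin(theta) ** 2
with_nee = direct_light(10_000, theta, nee=True)
without = direct_light(10_000, theta, nee=False)
```

With NEE every sample carries energy and the estimate stabilizes almost immediately; without it, only a fraction of roughly 1 - cos(theta_max) of all samples contributes, which appears as noise.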

Path tracing is able to render caustics

A caustic, in optics, is a pattern of bundled light created by objects or materials that focus or divert light by refraction or reflection.

as long as the light source has a finite area (i.e., it is not a point light source). However, the corresponding transport paths are typically sampled with low probability, so path tracing is not well suited for such situations.

Summary. Path tracing is easy to implement and suitable if there are no caustics from small light sources and if direct connections to the light sources are possible. Path tracing is therefore a good candidate for measurement setups where objects are recorded in transmitted light, as here the light source is usually large and always directly visible.

Light tracing, sometimes also referred to as backward ray tracing

Next event estimation (NEE) is optionally possible here as well; in this case, connections to the camera are sampled. However, for light tracing, NEE is more problematic in the presence of specular surfaces and lens models: a specular surface restricts the reflected path to exactly one direction, so it is not possible to sample a different direction that would hit the sensor. This is in particular the case when the virtual camera contains a lens system in front of the sensor.

Summary. Like path tracing, light tracing is easy to implement, but it only works well if the light sources emit directly onto the surfaces visible to the camera. Otherwise, path tracing (or forward ray tracing) is still employed to determine the visible surfaces, as in the next method.

Illustration of light tracing.

Bi-directional path tracing, introduced by

Illustration of bi-directional path tracing (BDPT).

However, increasing the number of possible path combinations also means that
a technique is necessary to keep the variance of the Monte Carlo estimation
low, as the reuse of “light” and “eye” subpaths in the path combinations
results in high variance (apparent as high-frequency noise in the resulting
image) due to correlation between the paths

Summary. As mentioned, BDPT is non-trivial to implement because it needs multiple importance sampling (MIS) to be practical. BDPT works well in scenes where the deterministic connections are not blocked. This should usually be the case in AOI settings, as the measurement setup is usually designed such that the objects are well lit and visible to the camera, that is, the objects are visible to both camera and light source.
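A common choice for the MIS weights is the balance heuristic, which weights each sampling strategy by its share of the total sampling density; a minimal sketch (the strategy densities below are invented for illustration):

```python
def balance_heuristic(pdfs, i):
    """MIS balance heuristic: weight for strategy i, given the densities
    with which every strategy could have generated the same sample.
    The weights over all strategies sum to one, which keeps the combined
    estimator unbiased while damping samples that another strategy
    generates more reliably."""
    return pdfs[i] / sum(pdfs)

# A sample that light sampling generates with density 4.0 but BSDF
# sampling only with density 0.5 is weighted mostly toward the former.
w_light = balance_heuristic([4.0, 0.5], 0)
w_bsdf = balance_heuristic([4.0, 0.5], 1)
```

In BDPT, every way of splitting a path into a light subpath and an eye subpath counts as one such strategy.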

The many-light methods originate from the instant radiosity algorithm
proposed by

Summary. The many-light approach works very well for mostly diffuse scenes
and is easy to implement. On the downside, it has problems with glossy
surfaces and is even impossible to use with specular surfaces; cf.

Photon mapping is the somewhat confusing yet widely used short name for
Global Illumination using Photon Maps introduced by

Afterwards, the scene is rendered with path tracing, but instead of treating
the nodes of the light paths as VPLs as in the many-light methods and
directly connecting to them with NEE, the photon map is used to compute the
illumination. That is, the energy at a surface point is estimated by counting
the photons in the local environment of the intersection point (density
estimation); cf. Fig.
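The density estimation step can be sketched as a fixed-radius photon lookup; a linear scan stands in here for the kd-tree used in practice, and all values are illustrative:

```python
import math

def estimate_irradiance(photons, point, radius):
    """Sum the power of all photons within `radius` of `point` and divide
    by the disc area pi * r^2: the density estimate used in photon mapping.
    Each photon is a (position, power) pair deposited during the photon
    tracing pass."""
    r2 = radius * radius
    power = sum(p_power for p_pos, p_power in photons
                if sum((a - b) ** 2 for a, b in zip(p_pos, point)) <= r2)
    return power / (math.pi * r2)

photons = [((0.0, 0.0, 0.0), 0.1),   # inside the search radius
           ((0.05, 0.0, 0.0), 0.1),  # inside the search radius
           ((2.0, 0.0, 0.0), 0.1)]   # too far away, ignored
e = estimate_irradiance(photons, (0.0, 0.0, 0.0), 0.1)
```

The finite radius is also the source of the bias mentioned below: photons from nearby but differently lit surface regions blur into the estimate.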

Summary. Photon mapping works well with diffuse surfaces and can render
caustics efficiently. For glossy surfaces, it (gracefully) degrades to path
tracing. Photon mapping can be made robust and has the advantage of producing
images with low noise levels, but the density estimation also causes a bias
(systematic error); cf.

More extensions and variants to bi-directional methods exist, such as, e.g.,

Illustration of photon mapping.

The family of Metropolis light transport methods uses the Metropolis–Hastings (MH) algorithm, introduced by

The basic idea is to first find an important path, that is, a path that
transports much energy and thus has high influence on the resulting image,
and then to generate similar paths by mutation of the existing important
path. This is one of the best strategies for difficult situations, such as,
for example, the scene illustrated in Fig.

Illustration of Metropolis light transport (MLT) with one important path that reaches the single, concealed light source of this difficult scene, and two mutations of this path.

Numerous
variants exist, e.g., sampling in primary space

Summary. MLT methods have in common that they are very powerful in exploring
difficult light transport phenomena (e.g., caustics). However, they have to
be initialized repeatedly with independent MC samplers (e.g., BDPT) and thus
rely on those to detect actual occurrences of said light phenomena. In image
rendering, this results in images where individual components are rendered
with little noise but all occurrences of the light phenomena are only found
over time. Also, MLT methods share the property that they are difficult to
implement (an exception is

In interactive image synthesis, a simple perspective projection corresponding to a pinhole camera is often used. Offline rendering systems, too, often resort to simplified models such as the pinhole camera or the thin lens model. While it is possible to simply include even a complex lens in the virtual scene and trace rays through it, this leads to very inefficient rendering: with straightforward ray tracing, 95 % of all ray samples, or more, might not leave a real-world lens system and enter the actual scene

The reason is that straightforward ray or path tracing starts at the sensor, simply sampling locations on the image sensor and directions off the sensor to generate rays; thus, many of the rays hit the housing of the camera or the aperture. This can be avoided by implementing importance sampling (cf. Sect.
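A toy calculation (with an invented aperture geometry, not a real lens description) illustrates the waste: rays sampled with uniformly distributed directions off the sensor mostly miss a small aperture, whereas sampling a point on the aperture directly wastes no rays at all:

```python
import math
import random

def hit_fraction(n, aperture_radius, distance, cone_half_angle, seed=0):
    """Fraction of rays, sampled uniformly in a cone of directions off an
    on-axis sensor point, that pass through a circular aperture of the
    given radius at the given distance; the rest end on the camera housing."""
    rng = random.Random(seed)
    cos_min = math.cos(cone_half_angle)
    hits = 0
    for _ in range(n):
        # Uniform direction in the cone: cos(theta) ~ U[cos_min, 1].
        cos_t = cos_min + rng.random() * (1.0 - cos_min)
        tan_t = math.sqrt(max(0.0, 1.0 - cos_t * cos_t)) / cos_t
        if distance * tan_t <= aperture_radius:  # ray passes the aperture
            hits += 1
    return hits / n

# A small aperture far from the sensor: almost all naive samples are wasted.
wasted = 1.0 - hit_fraction(100_000, aperture_radius=2.0, distance=50.0,
                            cone_half_angle=math.radians(60.0))
```

Importance sampling of the lens (sampling the visible aperture or rear lens element directly, with the matching density in the estimator) removes exactly this waste.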

In

Emission spectrum of a tungsten halogen lamp (Model 3900 by Illumination Technologies, Inc.).

Emission spectrum of a fluorescent ceiling light.

Raw sensor response of the ELiiXA UC4/UC8 line scan camera by e2v.

We demonstrated this approach for shards of glass, as sorting glass is a challenging practical application for AOI. In this paper, we focus on the aspects of image synthesis.

These modifications make it possible to generate shards in negligible computation time by adding surface detail in real time.

Our implementation includes real-time rendering of shard distributions
by hardware rasterization on the GPU using OpenGL 4.2

As glass is an optically semi-transparent material, a method of transparency rendering is necessary. OpenGL itself only provides alpha blending for rendering transparent materials. But rendering transparency via alpha blending requires the objects to be rendered in sorted order. As sorting for every rendered frame is impractical and not even always possible (e.g., for mutually overlapping objects), order-independent transparency rendering (OIT) techniques have been developed.

Images rendered in real time using the color matching functions of
the CIE 1931 standard colorimetric observer.

Our previous implementation as presented in

As our previous publications

Most importantly, we previously used RGB rendering. The synthesis was therefore reduced to only three values describing the red, green, and blue tristimulus values and could not reproduce spectral effects such as dispersion.

Images rendered in real time using the ELiiXA sensor sensitivity
function depicted in Fig.

Recently, we have enhanced our real-time method to support spectral rendering and to replicate more physical aspects of real image acquisition systems; that is, we now support the simulation of real light sources and sensor responses of real image sensors.

Of course, spectral rendering is especially important when color filters that limit the light to a narrow spectral range are combined with a non-uniform illumination spectrum.

We now describe light emission and transport by full spectra. That is, we use the spectral data of real, measured light sources. The light interaction is computed using measured absorption spectra (for other use cases, reflection spectra are equally possible). The resulting spectral power distribution is multiplied by a color matching or sensor sensitivity function, as depicted in Fig.
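This multiplication can be sketched as a discrete sum over wavelength samples (the spectra below are made-up placeholders, not measured data):

```python
def sensor_response(spectral_power, sensitivity, step_nm):
    """Integrate a spectral power distribution against a sensor
    sensitivity (or color matching) function via a Riemann sum over
    wavelength samples spaced step_nm apart."""
    assert len(spectral_power) == len(sensitivity)
    return sum(p * s for p, s in zip(spectral_power, sensitivity)) * step_nm

# Illustrative 10 nm samples, e.g. from 400 to 440 nm (not real measurements).
power = [0.2, 0.5, 0.9, 0.5, 0.2]
sens = [0.1, 0.4, 1.0, 0.4, 0.1]
value = sensor_response(power, sens, 10.0)
```

Evaluating this sum once per color matching function (or once per sensor channel) yields the tristimulus values or the raw channel responses, respectively.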

A synthetic image of procedurally generated virtual glass shards on a diffuse reflecting surface generated by our Monte Carlo rendering framework.

Close-up view of a synthetic shard.

In the case of the CIE color matching functions

Simulation of chromatic adaptation can account for the difference between the white reference of the simulated light emission and that of the computer screen used for viewing.

For our real-time implementation we use a binning approach to spectral rendering. That is, the full spectrum of visible light is quantized into bins of a certain width; thus, the spectra are approximated by a step function. The bins can either have equal widths, which is a sufficient approximation for smooth spectra, or be adaptive, as is necessary for illumination spectra exhibiting narrow bright peaks.
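Equal-width binning can be sketched as follows (sample wavelengths and values are illustrative):

```python
def bin_spectrum(wavelengths, values, n_bins, lo=380.0, hi=780.0):
    """Quantize a sampled spectrum into n_bins equal-width bins by
    averaging all samples falling into each bin, yielding the
    step-function approximation used for real-time spectral rendering.
    Samples outside [lo, hi] are clamped into the border bins."""
    width = (hi - lo) / n_bins
    sums, counts = [0.0] * n_bins, [0] * n_bins
    for w, v in zip(wavelengths, values):
        i = min(max(int((w - lo) / width), 0), n_bins - 1)
        sums[i] += v
        counts[i] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]

# Four bins over the visible range; a flat spectrum stays flat.
bins = bin_spectrum([400.0, 500.0, 600.0, 700.0], [1.0, 1.0, 1.0, 1.0], 4)
```

An adaptive variant would place bin boundaries where the spectrum changes rapidly, e.g. around the emission peaks of a fluorescent lamp.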

A close-up view of the scene also displayed in
Fig.

Double-Gauss lens by

While light sources such as incandescent light bulbs or halogen lamps have
quite smooth spectra, see Fig.

This can be simulated with spectral rendering, and helps to design an
acquisition setup that does not exhibit such problems by choosing a suitable
combination of sensor and illumination; see Fig.

Both image sets have been generated with our real-time shard generation
and rendering implementation, and demonstrate the capabilities of this
approach. While Fig.

While it is possible to extend rasterization to handle more complex light interaction phenomena such as refraction and dispersion (we have in fact realized a proof-of-concept implementation of light refraction in our simulation), rasterization has its limits, and at a certain point a ray tracing approach becomes more feasible, and even more efficient.

Support for refraction can be added to a rasterization approach by ray marching, that is, by determining the intersection point of a ray with a surface not analytically but by iteratively stepping along the ray and checking at each step whether an intersection has occurred. Obviously, it is much more straightforward to add light refraction to a ray tracer, and dispersion constitutes a similar case. The aforementioned limit is certainly exceeded when global illumination or the simulation of real lens systems, including monochromatic and chromatic aberration, is required. In these cases, a more efficient approach to ray casting than ray marching is called for, as demonstrated in the next part of this section.
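A minimal sketch of ray marching against an implicit surface (the sphere and the step size are illustrative):

```python
def ray_march(origin, direction, inside, t_max=10.0, step=0.01):
    """Find an approximate ray-surface intersection by stepping along the
    ray and checking for a sign change of the implicit function `inside`
    (negative inside the object). Coarser than an analytic intersection
    test, but easy to evaluate per fragment in a rasterization shader."""
    t = 0.0
    prev = inside(tuple(o + t * d for o, d in zip(origin, direction)))
    while t < t_max:
        t += step
        cur = inside(tuple(o + t * d for o, d in zip(origin, direction)))
        if (prev < 0.0) != (cur < 0.0):  # crossed the surface
            return t
        prev = cur
    return None  # no intersection within t_max

def sphere(p):
    """Implicit unit sphere centered at (0, 0, 5): negative inside."""
    return p[0] ** 2 + p[1] ** 2 + (p[2] - 5.0) ** 2 - 1.0

# Ray along +z; the analytic intersection is at t = 4.
t = ray_march((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), sphere)
```

The accuracy is limited by the step size, which is exactly why analytic ray casting becomes preferable once many such queries per pixel are needed.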

Same scene as in Fig.

There is a downside to the efficiently generated virtual glass shards that we
described in Sect.

Therefore, we opted for a new method of generating shards of broken glass, dropping the requirement of real-time generation and instead introducing a pool of precomputed shard meshes.

Most existing fracturing methods in the field of computer graphics are based on Voronoi partitions or tessellations, but many implementations generate only undetailed, flat intersection surfaces that are also not well tessellated and thus not suitable for surface perturbation without remeshing. An example of this is the Cell Fracture implementation of the Blender 3-D modeling software by the Blender Foundation.

One notable exception is the method introduced by

Synthetic image using the simple thin lens model as perceived by the CIE 1931 standard colorimetric observer, CIE D65 hemispherical illumination.

Same as in Fig.

Same as in Fig.

Same as Fig.

Double-Gauss lens

Same as Fig.

Same as Fig.

Same as Fig.

Figure

As the ray tracing framework does not use spectral binning but instead
supports full spectral rendering with Monte Carlo spectral sampling,
it can simulate dispersion. Figure

It is interesting to note that the spectra of the illumination and of the interacting surfaces can also affect the rendering time and image quality. Since samples fall equally on every part of the spectrum, uniform sampling of the emission spectrum leads to noisier images for spectra with bright narrow peaks than for a more homogeneous spectral illumination. This is illustrated in Fig.

Importance sampling, which places more samples in regions of higher radiance while still yielding an unbiased estimate, can alleviate this problem without increasing the sample count (and thereby the rendering time).
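A minimal sketch of this idea for spectral sampling (the spectrum values are invented, and wavelength bins are represented by indices): wavelengths are drawn proportionally to the emitted power, and each sample is divided by that sampling density:

```python
import random

def estimate_response(emission, sensitivity, n, importance, seed=0):
    """Estimate sum_i emission[i] * sensitivity[i] by sampling wavelength
    indices either uniformly or proportionally to the emission (importance
    sampling). Both estimators are unbiased; the latter concentrates
    samples on bright, narrow emission peaks."""
    rng = random.Random(seed)
    total_e = sum(emission)
    # Cumulative distribution for emission-proportional sampling.
    cdf, acc = [], 0.0
    for e in emission:
        acc += e
        cdf.append(acc / total_e)
    total = 0.0
    for _ in range(n):
        if importance:
            u = rng.random()
            i = next(k for k, c in enumerate(cdf) if u <= c)
            total += sensitivity[i] * total_e  # e_i * s_i / (e_i / total_e)
        else:
            i = rng.randrange(len(emission))
            total += len(emission) * emission[i] * sensitivity[i]
    return total / n

# A spectrum with one narrow bright peak (illustrative values).
emission = [0.01] * 40 + [50.0] + [0.01] * 40
sensitivity = [0.5] * 81
exact = sum(e * s for e, s in zip(emission, sensitivity))
est = estimate_response(emission, sensitivity, 5_000, importance=True)
```

With a flat sensitivity, emission-proportional sampling is even exact here; uniform sampling, in contrast, only rarely hits the peak and is correspondingly noisy.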

We use our method and implementation presented in

Figure

Recorded image of real glass shards obtained by a physical line scan camera (unknown type and sensor) of an AOI image acquisition system (Fraunhofer IOSB, 2012) in transmitted light of a fluorescent illumination, scaled to approximately fit the size of the simulated shards.

To conclude this section, we present an overview of renderings with
different lenses as well as color matching and sensor sensitivity functions,
respectively. Figures

For comparison, Fig.

As we generate the virtual scenes using physical simulation of gravity and
collisions of rigid bodies, we can easily generate consecutive scenes
separated by small time differences, and thus also render images with motion
blur of the shards sliding along a sloping surface, as can be seen in
Fig.

Tables

Please note that we configured our rendering implementation with fixed settings of 2048 samples per pixel regardless of the actual variance to ensure high quality; an adaptive approach could result in shorter rendering times.

The rendering durations are roughly on the same scale, with the exception of
Figs.

These exceptions amply demonstrate that path tracing is a well-suited rendering method for applications in transmitted light. The background consists of a large light source that is easy to hit; therefore, direct connections to the light source are easily possible, while in the case of a diffuse surface the paths that hit it need at least one more path segment to reach the hemispherical surrounding light.

Thus, recording images of the shards for this measurement problem in
transmitted light is not only an advisable choice for the subsequent image
processing task as the shards appear more clearly in the resulting images
without casting shadows on the background; see Fig.

Computation times for image series 1: variation in lens model,
Figs.

Computation times for image series 2: variation in illumination and
sensor, Figs.

In summary, we described an entire image synthesis pipeline including all relevant components as well as their efficient and accurate implementation with regard to the simulation and validation of optical measuring systems.

In addition to a real-time image pipeline based on hardware rasterization on the GPU, including spectral rendering using a binning approach, we implemented a second pipeline using physically based simulation of light transport including efficient simulation of real-world lens systems. Altogether, this resulted in a realistic simulation of the image formation of an automated optical inspection (AOI) image acquisition system.

Meanwhile, in

The simulated image formation now includes measured spectral light sources, complex light–matter interaction using measured absorption spectra, and realistic lens systems, and it reproduces the sensor response of real sensors. One remaining aspect is the simulation of sensor noise.

The EMVA 1288 Standard for Machine Vision

No data sets were used in this article.

The authors declare that they have no conflict of interest.

Edited by: M. Fischer
Reviewed by: two anonymous referees