In this paper we propose new models of two complementary optical sensors for obtaining 2.5-D measurements of opaque surfaces: a deflectometric and a plenoptic sensor. The deflectometric sensor uses active triangulation and works best on specular surfaces, while the plenoptic sensor uses passive triangulation and works best on textured, diffusely reflecting surfaces. We propose models that describe the measurement uncertainties of the sensors for specularly to diffusely reflecting surfaces, taking into account typical disturbances such as ambient light or vibration. The predicted measurement uncertainties of both sensors can be used to obtain optimized measurement uncertainties for varying surface properties on the basis of a combined sensor system. The models are validated by exemplary real measurements.
Automated quality inspection of product surfaces requires a fast and robust sensor, capable of detecting all relevant defects without damaging the surface. Optical measurement techniques fulfill these requirements but are highly dependent on the surface properties. For example, pattern projection and passive stereoscopic methods require diffuse reflectance, while deflectometric methods depend on specular reflectance of the inspected surface. Many surfaces are partially specular or a mixture of diffusely and specularly reflecting parts and cannot be robustly measured with only one method. By combining several measurement methods into a single sensor system that adapts its algorithms to exploit the advantages of the single methods, we are capable of measuring surfaces with a large variety of surface properties. To demonstrate the principle, we propose uncertainty models for plenoptic and deflectometric sensors, and based on the models we simulate both sensors under similar circumstances on varying partially specular surfaces.
Plenoptic cameras have been used in computational imaging for several decades
now. However,
First, in Sect.
Simulation parameters, with corresponding symbols (S) for deflectometry (D), plenoptic (P) and units (U).
Deflectometry, as well as the plenoptic method, relies on the recognition of spatial light patterns to identify unique positions that can be triangulated. The reliability of this recognition depends on the pattern contrast: when the contrast is very low, noise introduced by the camera dominates the pattern. In the following section we introduce a systematic approach to describe the pattern contrast and its reduction as a function of the spatial frequencies involved. Despite the 2-D nature of image-based measurements, we describe our approach in 1-D, since the direction has no impact on the results.
In deflectometry the camera integrates light emitted at some point
Now, the image radiance is the convolution of the screen radiance and the
point spread functions of the screen (
Instead of using PSFs in the spatial domain, the imaging properties of
optical systems can be described in the Fourier domain, depending on the
spatial frequency
Additionally, we normalize the MTF to
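The relation between PSF and normalized MTF can be illustrated numerically. The following sketch (with an assumed Gaussian PSF; width and sampling are illustrative values, not taken from the paper) computes the MTF as the magnitude of the Fourier transform of the PSF and normalizes it to one at zero frequency:

```python
import numpy as np

def mtf_from_psf(psf, dx):
    """Compute the MTF as |FFT(PSF)|, normalized so that MTF(0) = 1."""
    mtf = np.abs(np.fft.rfft(psf))
    freqs = np.fft.rfftfreq(len(psf), d=dx)
    return mtf / mtf[0], freqs

# Assumed 1-D Gaussian PSF, sampled every 1 um over roughly +-0.26 mm
dx = 1e-3                       # sampling step in mm
x = np.arange(-256, 256) * dx   # position in mm
sigma = 0.05                    # assumed PSF width in mm
psf = np.exp(-x**2 / (2 * sigma**2))

mtf, f = mtf_from_psf(psf, dx)  # f in cycles per mm
```

For a Gaussian PSF the resulting MTF is again Gaussian, so it decays monotonically with frequency, which is a convenient sanity check for the normalization.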
We will introduce the patterns shown on the screen
The radiance distribution
The irradiance on the camera sensor is linked with the radiance from
Eq. (
Now we obtain the irradiance distribution on the camera sensor,
Things change for the passive plenoptic setup, where the surface itself is
considered as an emitter of structured light. We assume that the surface
itself reflects unstructured light from the environment depending on the
surface texture
In the best case for plenoptic measurements, the surface pattern consists of
intensity steps with a modulation of
This pattern is superimposed by the specular reflection of patterns in the
environment. Hence, in contrast to deflectometry, the specular reflection
decreases the measurable pattern contrast for the plenoptic setup. A
detailed definition of the surface MTF will be discussed in
Sect.
Following
The bandwidth of the sinc function depends on the aperture of a pixel on the
sensor and is assumed to be equal to the pixel pitch
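This pixel-aperture contribution can be sketched directly: modeling the pixel as a box filter of width equal to the pixel pitch p yields the MTF |sinc(p·f)| with its first zero at f = 1/p (the pitch value below is an assumed example):

```python
import numpy as np

def pixel_mtf(f, pitch):
    """MTF of a box-shaped pixel aperture: |sinc(pitch * f)|.
    np.sinc(x) = sin(pi*x)/(pi*x), so the first zero lies at f = 1/pitch."""
    return np.abs(np.sinc(pitch * f))

pitch = 5.5e-6                        # assumed pixel pitch in m
f = np.linspace(0.0, 2.0 / pitch, 1000)  # spatial frequency in cycles per m
mtf = pixel_mtf(f, pitch)
```

At half the sampling frequency (f = 1/(2p)) this model predicts a contrast attenuation to 2/π ≈ 0.64, which is the well-known aperture loss at the Nyquist limit.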
In Fig.
Camera MTF of deflectometric camera, plenoptic camera and camera with diffraction-limited lens.
The OTF introduced in the previous section describes a camera in focus; in
this section we will discuss a camera out of focus. Let the camera be in
focus at some object distance
Using the thin lens equation
The size of a point on the image plane
The OTF for an image out of focus depends on the size of this point
Deflectometric camera MTF for defocused image of screen
points with 1.0 m distance and focal plane distances
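A minimal numerical sketch of this defocus behavior, consistent with the 1-D treatment above: the geometric blur size follows from the thin lens equation, and the resulting box blur attenuates the MTF. The lens parameters below are assumed example values, and the sinc form is a 1-D geometric-optics approximation, not the paper's exact OTF:

```python
import numpy as np

def blur_size(f_lens, aperture, g_focus, g_obj):
    """Geometric blur size on the image plane for an object at distance
    g_obj when the lens is focused at g_focus (thin lens equation, m)."""
    b_focus = 1.0 / (1.0 / f_lens - 1.0 / g_focus)  # in-focus image distance
    b_obj = 1.0 / (1.0 / f_lens - 1.0 / g_obj)      # sharp image distance
    return aperture * abs(b_obj - b_focus) / b_obj

def defocus_mtf(f, c):
    """1-D box-blur approximation of the defocus MTF: |sinc(c * f)|."""
    return np.abs(np.sinc(c * np.asarray(f, dtype=float)))

# Assumed setup: 50 mm lens at f/8, focused at 1.0 m, screen at 2.0 m
c = blur_size(0.05, 0.05 / 8.0, 1.0, 2.0)  # blur size in m
```

An object in the focal plane produces zero geometric blur, and the MTF of the blurred image falls to its first zero at the frequency 1/c.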
Figure
In many situations the camera shakes during exposure due to vibrations
caused by heavy machinery, etc.
Of course, the measured surface may also be subject to vibrations, but due to
the complexity of the implications of changing surface normals during
exposure it is not covered here. The influence of translational camera motion
blur on the MTF according to Eq. (
Motion blur MTF caused by camera motion in image space.
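Uniform translational motion during exposure smears each image point over a length s = v·t_exp in image space, which again acts as a box filter on the image. A minimal sketch (speed and exposure time are assumed example values):

```python
import numpy as np

def motion_blur_mtf(f, blur_length):
    """MTF of uniform linear camera motion during exposure: the image is
    convolved with a box of width blur_length, giving |sinc(blur_length*f)|."""
    return np.abs(np.sinc(blur_length * f))

v, t_exp = 2e-4, 0.02   # assumed speed in image space (m/s) and exposure (s)
s = v * t_exp           # blur length in m
```

As with the pixel aperture, the contrast vanishes completely at the frequency 1/s, so longer exposures or faster vibrations push the first MTF zero toward lower pattern frequencies.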
Image of a reflected pattern for five different surfaces ranging from high to low gloss for a
pattern on the screen with spatial frequency
Reflectance surface MTF for different surface gloss parameters.
Lambertian surface MTF for different surface gloss parameters.
With more surface roughness the reflectance decreases from specular to
diffuse; see Fig.
This matches the MTF measurements of the surfaces shown in
Fig.
While deflectometry utilizes the specularity of the surface, passive
triangulation approaches like plenoptic-camera-based methods rely on
Lambertian reflectance. Hence, for the plenoptic camera we consider the
specular component as an additional noise component and therefore model the
Lambertian surface MTF
Figure
In the previous section we described how surface roughness increases the
amount of light scattered by the surface. If more ambient light is present in
the scene, the light scattered in the direction of the camera also increases.
This can be measured as the Michelson contrast, i.e., the ratio of the difference
and the sum of the maximum and minimum radiance
If the ambient radiance reflected into the camera equals half the radiance of the maximum pattern intensity, the pattern contrast decreases to one-half of the original contrast. Calculating the influence of ambient radiance emitted by the surface requires knowing the location of the ambient light sources and the surface BRDF (bidirectional reflectance distribution function). On specular reflecting surfaces, ambient light does not influence the contrast of a reflected pattern, but more diffuse reflection increases the amount of ambient radiance reflected into the camera. The contrast of a surface texture is influenced by the amount of specularly reflected light.
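The claim above can be checked in a few lines: an additive ambient radiance L_a raises both pattern extrema by the same amount, so the Michelson contrast becomes (L_max − L_min)/(L_max + L_min + 2·L_a):

```python
def michelson_contrast(L_max, L_min, L_ambient=0.0):
    """Michelson contrast of a pattern whose extrema are both raised by an
    additive ambient radiance L_ambient."""
    return (L_max - L_min) / (L_max + L_min + 2.0 * L_ambient)
```

For a full-modulation pattern (L_min = 0), an ambient radiance of half the maximum pattern radiance indeed halves the contrast, as stated above.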
Deflectometry
In the following section we will look at the phase-shifting algorithm and the phase noise model.
Let the origin
To get the absolute position on the screen
One popular phase-unwrapping method is the heterodyne method, which uses two
different pattern frequencies. See
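Both steps can be sketched under the standard fringe model I_n = A + B·cos(φ + 2πn/N); the amplitudes and fringe frequencies below are assumed example values, and the heterodyne function returns only the coarse absolute position without the usual fine-phase refinement:

```python
import numpy as np

def n_step_phase(images):
    """Recover the wrapped phase from N phase-shifted fringe intensities
    I_n = A + B*cos(phi + 2*pi*n/N)."""
    shifts = 2.0 * np.pi * np.arange(len(images)) / len(images)
    s = sum(I * np.sin(d) for I, d in zip(images, shifts))
    c = sum(I * np.cos(d) for I, d in zip(images, shifts))
    return np.arctan2(-s, c) % (2.0 * np.pi)

def heterodyne_unwrap(phi1, phi2, f1, f2):
    """Coarse absolute position from two wrapped phases recorded at nearby
    fringe frequencies f1 > f2: the beat phase phi1 - phi2 varies with the
    much lower frequency f1 - f2 and is unambiguous over a longer range."""
    beat = (phi1 - phi2) % (2.0 * np.pi)
    return beat / (2.0 * np.pi * (f1 - f2))
```

With f1 = 10 and f2 = 9 periods across the screen, for example, the beat frequency is a single period, so the beat phase identifies the absolute screen position without ambiguity.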
A model describing the phase noise of Eq. (
Here
The two parameters influenced by the environment and the surface are
Measurement uncertainty
In the following section the uncertainty of the surface height
The lateral uncertainty
The angular uncertainty of the camera
Either the size of one projected pixel from Eqs. (
The slant uncertainty of each surface segment
Combining the above equations, the uncertainty of the surface height
Assume that the specular surface has a convex spherical shape. Then the screen appears smaller, which results in a smaller fringe period. In the simple case of a plane mirror, the effective fringe pattern frequency depends only on the distance from the camera to the screen:
If the surface is curved (at least piecewise) like a sphere with radius
This equation has a pole at
A plenoptic camera is a single-sensor system which records a 4-D light-field
representation of a scene in a single image. This means that a point in object
space corresponds not to a single image point, as it would for a regular
camera, but to multiple image points. In other words, a plenoptic camera does
not capture only a single ray emitted from a certain point in object space but
multiple light rays with different incident angles. Hence, the four dimensions
describe two spatial dimensions and two angular dimensions. Even though
plenoptic sensors for industrial applications
are still expensive
The 4-D light field recorded by a plenoptic camera enables tasks like 3-D measurement or software-based refocusing after an image is captured. Industrial tasks for plenoptic cameras may include 3-D microscopy or the inspection of production parts.
Here we describe the principle of plenoptic depth measurement based on the
concept of a focused plenoptic camera developed by
Image projection of a focused plenoptic camera in the Galilean mode. One object point is projected to multiple micro images on the sensor.
Raw image recorded by a focused plenoptic camera. In contrast to a regular camera, a plenoptic camera has a
micro lens array (MLA) placed in front of the sensor (see Fig.
Figure
The image distance
Based on disparities
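As a rough sketch of this principle (with an assumed Galilean-mode geometry and sign convention for illustration; the exact relations depend on the actual camera), the virtual depth follows from similar triangles as baseline over disparity, and the object distance then follows from the thin lens equation of the main lens:

```python
def virtual_depth(baseline, disparity):
    """Virtual depth v, in units of the MLA-to-sensor distance B: by similar
    triangles v = baseline / disparity, where the baseline is the distance
    between the two micro lens centers."""
    return baseline / disparity

def object_distance(v, B, d_main_mla, f_main):
    """Object distance via the thin lens equation, assuming (for this sketch)
    that the virtual image lies a distance v*B in front of the MLA."""
    b = d_main_mla - v * B                  # image distance behind the main lens
    return 1.0 / (1.0 / f_main - 1.0 / b)
```

For example, a disparity of half a micro lens pitch over a baseline of two pitches yields a virtual depth of v = 4; the returned object distance is consistent with the thin lens equation by construction.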
In contrast to deflectometry, the plenoptic camera is a passive measurement system that relies on high-contrast patterns on the surface to be measured. In addition, the surface must have Lambertian reflectance to obtain correct measurements.
In the following we define a model to predict the measurement accuracy of the
plenoptic camera for a certain measurement setup. This measurement setup is
shown in Fig.
Plenoptic measurement setup. Expected measurement uncertainty can be predicted based on the given geometric setup, the surface pattern and the MTF of the plenoptic camera.
In analogy to deflectometry and to obtain a general definition of the surface
contrast we define the surface structure as a fringe pattern similar to
Eq. (
Of course, the pattern on the surface which is captured by the camera will
never be a perfect fringe pattern but can always be modeled as a mixture
of frequencies. However, this formulation gives us the possibility to model
the camera response dependent on the frequency
In a local region one can consider the imaging process just as a scaling of the fringe pattern on the surface in combination with a frequency-dependent attenuation of the intensity modeled by the MTF of the imaging system. Therefore, by applying the assumption of being in a local region around a certain point, perspective distortion does not have to be considered.
In contrast to a regular camera, which can be defined mathematically by a
pinhole camera model, the imaging scale of a plenoptic camera is not
proportional to the distance between main lens and object
Besides the scaling of the pattern due to perspective projection, the fringe
pattern is compressed if the viewing angle of the plenoptic camera
Based on the defined scaling factors one can define the following relations
between surface
Following the Shannon–Nyquist sampling theorem we can calculate from
Eq. (
Here,
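This sampling limit can be sketched as follows: for an assumed magnification m (image size over object size) and an oblique viewing angle θ that compresses the pattern in the image, the image-side frequency f_surf/(m·cos θ) must stay below the sensor Nyquist limit 1/(2p). The parameter values are illustrative assumptions:

```python
import numpy as np

def max_surface_frequency(magnification, pixel_pitch, view_angle=0.0):
    """Highest surface pattern frequency the sensor can still sample: a
    period T on the surface maps to magnification*T*cos(view_angle) in the
    image, whose frequency must stay below the Nyquist limit 1/(2*pitch)."""
    return magnification * np.cos(view_angle) / (2.0 * pixel_pitch)

# Example: magnification 0.1, 5 um pixels, frontal view (assumed values)
f_max = max_surface_frequency(0.1, 5e-6)   # in cycles per m on the surface
```

Tilting the surface reduces the resolvable surface frequency by the factor cos θ, so oblique views alias earlier than frontal ones.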
As in deflectometry, we can describe the imaging properties of the
plenoptic camera by its MTF. In a plenoptic camera, we have a sequence of two
optical systems: the main lens and the MLA. This results in an MTF of the
complete plenoptic camera that depends on the distance to the surface. This
can be formulated as the sequence of two MTFs with a nonlinear connection
between
For simplification, we approximate the complete MTF
For a Raytrix camera the MLA consists of three different types of micro lenses to increase the depth of field of the camera. Strictly speaking, one would therefore have to define three different MTFs for the respective micro lens types.
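The approximation of the two-stage system as a sequence of transfer stages can be written as a product of MTFs, with a scale factor mapping sensor-plane frequencies to the frequencies seen by the main lens. The component MTFs below are placeholders for illustration only, not the camera's measured curves:

```python
import numpy as np

def plenoptic_mtf(f_img, mtf_main, mtf_mla, scale):
    """Approximate total MTF of main lens followed by MLA as the product of
    both MTFs; `scale` maps sensor-plane frequencies to the intermediate
    image plane of the main lens."""
    return mtf_main(scale * f_img) * mtf_mla(f_img)

# Placeholder component MTFs (illustrative shapes only)
mtf_main = lambda f: np.exp(-np.abs(f))
mtf_mla = lambda f: np.abs(np.sinc(f))
```

Because the scale factor depends on the object distance, the product, and hence the total MTF, varies with distance, as described above.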
For the simulations in Sect.
Depth measurements are obtained based on disparities
For a given fringe pattern
The MLA in a plenoptic camera is in most cases arranged on a hexagonal grid.
Therefore, for each micro lens multiple epipolar lines in all possible
directions are obtained. Figure
Epipolar lines in a hexagonally arranged MLA. Because the micro images are inherently rectified with respect to each other, the epipolar line for a pair of micro images is defined by the vector between the respective principal points. The figure shows one exemplary epipolar line (blue) for the five shortest stereo baseline distances (red).
As defined in the EMVA1288 standard, the signal noise
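A sketch of the temporal noise in the EMVA1288 linear camera model: the variance contains a signal-independent part (dark and quantization noise) and a shot noise part that grows linearly with the mean signal. The default parameter values below are assumed for illustration:

```python
import numpy as np

def temporal_noise_dn(mean_signal_dn, dark_signal_dn, gain_K,
                      dark_noise_e=2.0, quant_noise_dn2=1.0 / 12.0):
    """Temporal noise (standard deviation, in DN) of the EMVA1288 linear
    model: variance = K^2*sigma_d^2 + sigma_q^2 + K*(mu_y - mu_y.dark),
    where the last term is the shot noise contribution."""
    var = (gain_K ** 2) * dark_noise_e ** 2 + quant_noise_dn2 \
          + gain_K * (mean_signal_dn - dark_signal_dn)
    return np.sqrt(var)
```

In the shot-noise-limited regime the standard deviation grows with the square root of the signal, so doubling the exposure improves the relative signal quality only by a factor of √2.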
For multifocus plenoptic cameras
For simplification, we do not consider the focus disparity error here. Moreover, by choosing an appropriate camera setup one can ensure that a pair of focused micro images is always present for a given object point.
Based on the theory of propagation of uncertainties, one is able to calculate
the standard deviation of the measured object distance
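First-order propagation of the disparity uncertainty to the object distance can be sketched with a central finite difference for the sensitivity dz/dd; the distance model passed in below is a placeholder, not the paper's actual relation:

```python
def propagate_sigma(z_of_d, d, sigma_d, eps=1e-6):
    """First-order uncertainty propagation: sigma_z = |dz/dd| * sigma_d,
    with dz/dd approximated by a central finite difference."""
    dz_dd = (z_of_d(d + eps) - z_of_d(d - eps)) / (2.0 * eps)
    return abs(dz_dd) * sigma_d
```

Because depth typically varies inversely with disparity, the same disparity noise translates into a distance uncertainty that grows rapidly with the object distance.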
In the following section, we present simulation results for the measurement
uncertainty of both the deflectometric and the plenoptic sensor. They are
derived for an exemplary setup shown in Table
We assume that the overall shape of the reflecting surface is flat. Thus we
can apply Eq. (
The first result, depicted in Fig.
Deflectometric measurement uncertainty of the phase on the screen, showing a pattern with fringe
frequency translating to
In contrast, higher spatial frequencies (and thereby shorter period lengths)
in Eq. (
Deflectometric measurement uncertainty of the surface normal for several focus distances and surface MTF parameters.
It can be seen that
Measurement uncertainty of plenoptic and deflectometric sensor for several surface MTF parameters.
The pattern is shown both on the surface and on the screen.
Both pattern frequencies are given relative to the camera sensor image frequencies
In Fig.
Similar to Sect.
In Table
Measurement uncertainty of the plenoptic sensor for different
main lens focal lengths
Figure
For the following simulations, which model the effect of motion blur as well
as different surface roughnesses, the focal length was set to
Figure
Measurement uncertainty of plenoptic and deflectometric sensor with motion blur
While increasing surface roughness has a negative effect on the
deflectometric measurement results, due to less specular reflectance, it has
the opposite effect on the plenoptic measurements. For the plenoptic setup
best results are expected at a completely Lambertian reflectance (surface
gloss
Finally, in Fig.
Measurement uncertainty of surface height
For high-gloss surfaces the deflectometric sensor is most accurate with the camera focused on the screen. For low-gloss surfaces it is preferable to focus on the surface and use patterns with low spatial frequencies.
In this paper we present rather complex and complete mathematical models of the measurement process of a deflectometric and a plenoptic sensor; for such complex models it is almost impossible to validate them entirely. Hence, we validate our models on the basis of two distinct setups: one for the deflectometric and one for the plenoptic sensor.
For both measurement systems we use the same configurations as given in
Table
For both sensor systems we obtain measurements based on the case of a
thermometer that consists of surfaces with different reflectance properties.
This case is shown in Fig.
Case of a thermometer used to validate the proposed sensor models. The case consists of different surfaces with different reflectance properties.
Since for the plenoptic camera we cannot influence the contrast of the
surface pattern on the case, we performed a second experiment, in which we
generated a fringe pattern on a screen and recorded this pattern with the plenoptic
camera. Here, we measured the measurement uncertainty for different fringe
frequencies. This setup is shown in Fig.
We used the deflectometric sensor shown in
Fig.
Setup used to measure the measurement uncertainty of the deflectometric sensor for different frequencies of the fringe pattern on the surface.
The measurements of the thermometer surface were taken using 24 different
pattern frequencies in the range
Measurements of
The red boxes mark areas on the surface with different reflection properties:
case and display. Measurements are taken per pixel and then averaged over
these two areas. On the one hand, we calculated a spline interpolation
Measurements of the surface MTF (points) and extrapolated data (lines) for two areas on the surface (display and case) and two illumination conditions (dark and bright surrounding).
On the other hand, we estimated
Measurements of
Hence, using Eq. (
Comparison of the predicted measurement uncertainty (lines) and standard deviation (points)
of
The predicted uncertainties for
Using the setup which is shown in Fig.
Setup used to measure the measurement uncertainty of the plenoptic camera for different frequencies of the fringe pattern on the surface.
Figure
However, the standard deviation
Measurement uncertainty of the plenoptic sensor for different main
lens focal lengths
Figure
Case recorded by the plenoptic camera. Intensity
In contrast to the deflectometric setup, we are not able to validate our
model based on the recordings of this case, since we cannot influence the
pattern on its surface. However, we still measured the empirical standard
deviation for two different positions on the case.
Here, the mean standard deviation is calculated based on a set of 40 images
for all valid points seen in Fig.
For the display we measured a mean standard deviation of 8.8 mm and for
the inscription a mean standard deviation of 12.8 mm. Intuitively one
would expect to obtain a higher uncertainty for the display than for the
inscription on the case. However, the depth estimation already filters out
uncertain estimates, which leads to a sparser depth map on the display. This
sparsity must also be taken into account when rating the results. Moreover,
as the deflectometric measurements show, both the case and its display are
not perfectly Lambertian surfaces. Hence, the obtained accuracy conforms
quite well to the simulations shown in Fig.
In this paper we proposed two models to predict the measurement uncertainty
of a deflectometric and a plenoptic sensor. Based on our introduced models,
we have shown that, for a given measurement setup, there exists an optimum
fringe pattern that results in the lowest measurement uncertainty. In the case
of the deflectometric sensor, the achieved height measurement uncertainty ranges
between
While the deflectometric sensor has a much lower uncertainty for surface changes (three orders of magnitude for partially specular surfaces), it measures surface normals instead of distances, which have to be integrated to obtain the surface height. The plenoptic measurement could help to regularize this integration by providing a relatively rough but robust distance measure.
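Such a combination could, as a minimal sketch under the assumption of independent, unbiased estimates, weight each sensor's height estimate by its inverse predicted variance:

```python
def fuse(z1, sigma1, z2, sigma2):
    """Inverse-variance weighted fusion of two independent height estimates;
    the combined standard deviation is never worse than the better of the two."""
    w1, w2 = 1.0 / sigma1 ** 2, 1.0 / sigma2 ** 2
    z = (w1 * z1 + w2 * z2) / (w1 + w2)
    sigma = (w1 + w2) ** -0.5
    return z, sigma
```

With the predicted per-sensor uncertainties from the proposed models as weights, the fused estimate automatically favors the deflectometric sensor on specular surfaces and the plenoptic sensor on diffuse, textured ones.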
While the simulations show plausible results for the proposed models, we were furthermore able to validate the models by exemplary real measurements.
The raw image data can be made available upon request.
MZ and his supervisor MH developed the theory and evaluation for deflectometry, and NZ and his supervisor FQ did the same for the plenoptic sensor.
The authors declare that they have no conflict of interest.
This article is part of the special issue “Evaluating measurement data and uncertainty”. It is not associated with a conference.
This work was financed by Baden-Württemberg Stiftung gGmbH. This is an updated and revised version of a paper written in German and published in “Technisches Messen 84 (2017) 2” in 2017. It is published in JSSS with the kind permission of the publisher De Gruyter.
Edited by: Klaus-Dieter Sommer
Reviewed by: five anonymous referees