Simcenter Testing Solutions Digital Image Correlation: Camera Considerations

2021-10-17T15:38:12.000-0400
Simcenter Testlab Other Hardware

Summary


Details

Digital Image Correlation (DIC) is a technique that uses images from digital cameras to determine shape, displacements, and deformation fields at the surface of an object under any kind of loading. 

To ensure Digital Image Correlation delivers reliable and accurate results, it is important to understand the measurement principle, how to select the right equipment, and how to prepare the setup (Figure 1). 
 
Figure 1: A Digital Image Correlation data acquisition setup for a material test.  Deformation images (left) are displayed from two cameras (center) from a material test stand (right).

This article provides a general introduction to camera imaging and photography principles, and explains how the different elements of the setup can impact the results.

See the knowledge article: "Digital Image Correlation for Static Testing" for application examples.

Index:
1.    Defining the test requirements
2.    Basics of photography
     2.1.    Cameras
     2.2.    Lenses
     2.3.    The pinhole camera model
     2.4.    Taking a good picture: the exposure triangle
3.    Speckling and subset size
4.    Example calculations
     4.1.    Optimal distance
     4.2.    Resolution
     4.3.    Speckle size
5.    Conclusions


1.    Defining the test requirements

Digital Image Correlation is a comprehensive technique, which combines image acquisition and image processing to measure the quantities of interest. As with any other experimental technique, it is important to clearly define the objectives of the test to make sure the correct hardware is selected and the setup is adequately prepared. 

DIC can be used in a wide range of applications: it allows identifying mechanical properties of materials by analyzing static test images; it can measure vibration on aircraft or vehicles and characterize their modal behavior; it can identify the strain and stress fields of components under load; and it can be used to monitor deformations during impacts and other high-speed events.

All these measurements require the area of interest on the object to be speckled, and cameras (usually a pair of them in a stereo configuration) taking pictures of it during the test. But depending on the quantities of interest, whether the application is static or dynamic, the size of the object and its accessibility, the hardware to be used can change significantly.

When planning a measurement, it is therefore important to define from the very beginning:
  • The quantities of interest (displacements, strains, stresses) and, if possible, their expected order of magnitude
  • The rate at which these quantities need to be acquired (low speed, high speed, or vibration)
  • The size of the portion of the object that needs to be measured (the Region Of Interest or ROI); in the case of a full-field measurement, this is the area that will have to be speckled
  • The Field-Of-View, which includes the Region Of Interest and its motion/deformation envelope. This is the actual image in the case of a single camera, or the overlapping region of the two images in the case of a stereo setup.

With this information, it is possible to select the appropriate hardware or to adapt the requirements to the hardware available. 

2.    Basics of photography

As Digital Image Correlation is an image processing technique, being able to take high quality pictures, and having control over the parameters that allow doing so, is crucial. For this, some basic knowledge of photography, including camera and lens technologies, is highly recommended. Here, only the basic concepts are illustrated, but several other resources to study the subject more in depth can easily be found.

2.1    Cameras

Cameras are the core sensing element in a Digital Image Correlation experimental campaign. There are four main aspects to keep in mind when selecting a camera:
  • Sensor specifications: the sensor is the key element in the camera. It has an impact on the quality of the image and its noise level. Important specifications to check are the resolution, format, pixel size, and quantum efficiency. The sensor type (CMOS or CCD) is also important, although most currently available cameras rely on CMOS sensors.
  • Frame rate: this is the speed at which images can be acquired and transferred to a storage medium, normally expressed as the frame rate at maximum resolution. Most industrial cameras, in particular high-speed ones, offer the possibility to increase the frame rate by reducing the resolution. High-speed cameras typically take images of more than 1 Megapixel at more than 1000 frames per second.
  • Interface: this is how the cameras are connected to a PC. Typical interfaces are USB3 and GigE, with older cameras still relying on the FireWire interface.
  • Lens mount: this will not affect the camera itself, but it is critical when selecting the lens to be used with the camera.
Figure 2: Example of camera setups. Left: 5MPx “low-speed” camera, operating at max 75 fps at full resolution. Right: 2MPx “high-speed” cameras, operating at 4980 fps at full resolution.
 
The cameras are responsible for translating light into a digital image that can be further processed. For DIC applications, monochrome machine-vision cameras are normally used.
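As a rough illustration of how sensor resolution, frame rate, and interface choice interact, the Python sketch below estimates the raw data rate of the 5 MPx camera shown in Figure 2. The 8-bit monochrome pixel depth is an assumption for the sake of the example, not a datasheet value.

```python
# Rough data-rate estimate: resolution x frame rate x bytes per pixel.
width_px, height_px = 2448, 2048   # 5 MPx sensor (see Figure 2)
fps = 75                           # frames per second at full resolution
bytes_per_pixel = 1                # 8-bit monochrome pixels (assumed)

rate_mb_s = width_px * height_px * fps * bytes_per_pixel / 1e6
print(f"Raw data rate: {rate_mb_s:.0f} MB/s")  # ~376 MB/s, a significant load for USB3
```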

2.2    Lenses

Lenses are used to converge light rays on the sensor. The main parameter identifying a lens is its focal length. For a fixed distance between the camera and the object, a lens with a longer focal length will have a higher magnification factor while reducing the field of view (Figure 3). 
 
Figure 3: The focal length of a lens and Field Of View (FOV) have an inverse relationship.

As an example, consider the two images in Figure 4. In both cases, the camera and the distance between the object and the camera are the same.
 
Figure 4: Example of pictures taken with the same camera position but with lenses with different focal lengths.

The only difference between the two images is the focal length of the lens. Short focal lengths are used to fit large objects from a relatively short distance. Lenses with longer focal lengths are used to capture details or objects that are far away. 

Attention: Lenses usually have a minimum focus distance, which is the shortest distance at which an object can be brought into focus. This distance increases with the focal length and is often mentioned in the datasheet of the lens.

Other important criteria for selecting a lens are:
  • Maximum aperture: this is normally indicated as the "f-number", defined as f-number = (focal length of lens) / (aperture diameter). A larger maximum aperture corresponds to a smaller f-number (e.g. f/2) and generally indicates a higher quality lens that introduces less distortion. A short numeric check of this relation follows this list.
  • Lens mount: as mentioned, this will depend on the camera used.
  • Camera sensor format: lenses are designed to work with specific sensor formats. Using a lens that supports a different sensor format can cause vignetting or cropping in the final image.
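As a quick numeric check of the f-number relation above, here is a minimal sketch with illustrative values:

```python
# f-number = (focal length) / (aperture diameter), so the aperture
# diameter follows from the focal length and f-number (illustrative values).
focal_length_mm = 25.0
f_number = 2.0                                    # lens wide open at f/2
aperture_diameter_mm = focal_length_mm / f_number
print(f"Aperture diameter: {aperture_diameter_mm} mm")   # 12.5 mm
```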

2.3    The pinhole camera model

The pinhole camera model is a mathematical model that describes the relationship between the coordinates of a three-dimensional (3D) point in space and its projection onto the two-dimensional (2D) image plane. This model, shown in Figure 5, assumes an ideal lens with no distortion. It is therefore an approximation of reality, but it is a very useful way to link the physical object, the camera and lens, and the final picture.
Figure 5: Linear pinhole model representation

Based on this model, we can express the relation (Equation 1) between the dimensions of the physical object and its projection on the image as:
 
w / Sw = OD / f0

Equation 1

Where:
  • f0 is the focal length of the lens
  • OD is the distance between the object and the sensor
  • w is the dimension of the physical object, in this case its width
  • Sw is the corresponding dimension on the sensor. This is normally expressed as number of pixels multiplied by the size of a pixel on the sensor, which is available in the camera parameters.

This formula allows the following calculations:
  • Calculate the distance between the camera and the object if the lens is selected
  • Calculate the optimal lens to fit the object in the image if the distance from the camera is a constraint
  • Calculate the optimal speckle size for a given setup to achieve the best possible measurement resolution
  • Calculate the best achievable resolution for a given setup and if an optimal speckle pattern has been applied
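As an illustration of these uses, here is a minimal Python sketch of Equation 1. The function names are purely illustrative and all lengths are in millimeters:

```python
# Minimal sketch of the pinhole relation (Equation 1): w / Sw = OD / f0.

def object_distance(f0, w, n_pixels, pixel_size):
    """Camera-object distance OD needed to fit an object of width w."""
    s_w = n_pixels * pixel_size    # projected size on the sensor, Sw
    return f0 * w / s_w            # OD = f0 * w / Sw

def imaged_width(f0, od, n_pixels, pixel_size):
    """Object-side width w covered by the sensor at distance OD."""
    s_w = n_pixels * pixel_size
    return s_w * od / f0           # w = Sw * OD / f0

# Example: 2048 px sensor edge, 3.45 um pixels, 12 mm lens, 200 mm object
print(round(object_distance(12.0, 200.0, 2048, 0.00345)))   # ~340 (mm)
```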

The use of the formula is shown in the next sections. But first, some good-picture practices are covered.

2.4    Taking a good picture: the exposure triangle

The best possible equipment and the formulas above only account for so much in ensuring good results. Everything relies on the assumption that a good picture is taken. But what does that mean in practice? 

Rule #1:

The first rule for a good picture is very simple: fit the Region-Of-Interest in the Field-Of-View as much as possible. This maximizes the use of the camera's pixel resolution and optimizes measurement resolution and accuracy. Obviously, if large deformations or motions are expected, they need to be accounted for, as it is not possible to move the camera during the test. 

As an example, look at the two images in Figure 6. The pictures were taken with the same camera from approximately the same position, but lenses with different focal lengths were used, so the two Fields-Of-View are different while the Region-Of-Interest is the same.
 
Figure 6: Pictures with the same Region-Of-Interest but different Fields-Of-View

The picture on the left was taken with a lens with a short focal length, with the objective of capturing the complete landscape. However, if the Region-Of-Interest is the three small peaks only, they occupy only a limited portion of the Field-Of-View. From a DIC perspective, this means there is a limited subset of pixels for the analysis, which limits the measurement resolution.

In that case, it is recommended either to get closer to the object or to change to a lens with a longer focal length, as was done for the picture on the right. There, the Region-Of-Interest fully fits the Field-Of-View, and it is possible to fully use the available resolution of the camera.

Rule #2:

The second rule is: take a sharp and well-illuminated image. To elaborate on this, the exposure triangle, shown in Figure 7, is generally used.
 
Figure 7: The exposure triangle linking aperture, exposure time, and sensor sensitivity.
 
When operating a camera, the user has three ways of controlling how the image is taken: the aperture, which is a property of the lens; the sensor sensitivity (or ISO), which is a property of the sensor; and the exposure time (or shutter speed), which is the amount of time during which the sensor receives light. All three settings control the amount of light that reaches the sensor, and although several combinations can achieve the same exposure (or light) in a picture, they impact the image in different ways. So let's break them down and analyze them one at a time: 

The ISO, or sensor sensitivity, is a property of the sensor of the camera. The ISO indication was introduced for film, and with digital cameras it now acts as a gain. As such, increasing the ISO makes the sensor more sensitive to light, but since it acts as a gain it also causes the images to become noisier. For that reason, this value should always be kept at the factory default (usually ISO 100) to keep noise to a minimum, and should not be changed. 

The aperture, as already discussed in the previous section, is a property of the lens. Each lens has a diaphragm which can be opened or closed. The higher the aperture, the more light flows through. However, as indicated by the small arrow on the aperture edge in Figure 7, the higher the aperture, the narrower the depth of field. But what does that mean in practice? The depth of field can be explained mathematically based on optics, but here a more practical approach is taken.

The two images in Figure 8 capture the same scene with approximately the same exposure (or amount of light). The picture on the left was taken with a high aperture, at f/4. As can be observed, the flags are perfectly in focus while the background is blurred: this is what a narrow depth of field means. On the contrary, by closing the aperture, the grass in the background also comes nicely into focus. 
 
Figure 8: Effect of aperture on the depth of field
 
Consequently, for relatively flat objects positioned perpendicularly to the camera, a narrow depth of field is generally sufficient. However, for more complex structures with curved surfaces or multiple components, using a smaller aperture might be required. 

At this stage, a question should come naturally: why not always use a high aperture? To keep the same exposure level in the image, the exposure time then needs to be decreased. In the pictures in Figure 8, to keep the same exposure while reducing the aperture, it was necessary to increase the exposure time (the third element in the triangle of Figure 7) from 0.0005 to 0.0125 seconds. In this example, where the scene is stationary, this is not a real issue. However, had the picture been taken on a windy day, the motion of the flags and the grass would have caused motion blur, which must absolutely be avoided during experiments. 

To get a sharp picture during a real experiment, the exposure time needs to be short relative to the motion of the object being photographed. However, if the surface of the object is curved, sufficient depth of field is needed to have everything in focus. As increasing the sensor sensitivity is not recommended (it adds noise), the solution is to add artificial light to brighten the scene. An example of this is shown in Figure 9.
 
Figure 9: High intensity LED lights illuminate the structure during a rotating test.
 
This is the reason why in the majority of (if not all) DIC setups, powerful external artificial lights are required. This becomes even more important for high-speed imaging, vibration testing, or testing of rotating structures, where the rate at which deformations occur imposes extremely short exposure times and, consequently, the use of high-intensity artificial lights.
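To make the exposure-time constraint concrete, the sketch below estimates the longest exposure that keeps motion blur below a fraction of a pixel, using the pinhole scale factor from Equation 1. The object speed, blur budget, and setup values are illustrative assumptions:

```python
# Longest exposure time that keeps motion blur under a given pixel budget.
# One pixel maps to (pixel_size * OD / f0) on the object (Equation 1).
pixel_size_mm = 0.00345
f0_mm, od_mm = 25.0, 800.0         # assumed lens and distance
object_speed_mm_s = 100.0          # assumed in-plane speed of the object
blur_budget_px = 0.1               # keep blur well below one pixel

mm_per_px = pixel_size_mm * od_mm / f0_mm             # ~0.11 mm per pixel
t_max_s = blur_budget_px * mm_per_px / object_speed_mm_s
print(f"Max exposure: {t_max_s * 1e6:.0f} us")        # ~110 microseconds
```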
 
3.    Speckling and subset size

Speckling is one of the most critical parts of a DIC analysis. The pattern is what is used to correlate images to one another. General rules to obtain a good pattern are: 
  • The pattern should be unique, so the features should be randomly distributed
  • The speckle size should be as uniform as possible
  • The pattern needs to provide good contrast (in combination with the lights)
  • Avoid reflective paint/paper, as it will cause overexposure
  • The pattern needs to adhere to the surface throughout the test

In general, when defining the speckle pattern for a specific test, the following rules need to be followed:
  • Each pattern feature should span at least 3-5 pixels to avoid aliasing
  • When defining the DIC analysis, each subset should contain at least three features to ensure uniqueness

Figure 10 shows two different speckle patterns. On the left, the speckle was applied by spray painting. 
 
Figure 10: Examples of speckle patterns: spray-painted (left) vs. numerically generated and printed (right).

For the spray-painted pattern, the contrast, distribution, and randomness of the features are ideal, but their sizes vary too much. On the one hand, some of the speckles are smaller than 3 pixels, causing aliasing. On the other hand, the variation in size requires the subset size to be increased to ensure 3 features are always included, which reduces the achievable resolution. 

The speckle pattern on the right, on the other hand, was created using a numerical speckle generator, making sure the 5 pixels-per-feature rule, as well as the distribution and randomness requirements, are guaranteed. This pattern can then be printed on adhesive paper and attached to the structure. However, in particular when strains are the main objective, painting should always be preferred, as it adheres better to the surface of the object.

A mistake to avoid is using high-resolution cameras without optimizing the speckle size. Such a scenario is shown in Figure 11. In this case, to ensure a good DIC analysis, a subset size of more than 100 pixels should be used.
 
Figure 11: Example of a speckle pattern with 49 pixels per speckle.
 
This ultimately limits the measurement and spatial resolutions compared to a scenario where, for the same camera resolution, a smaller speckle pattern (and consequently a smaller subset size) is used. However, in this case, less noisy results could be obtained thanks to the spatial averaging achieved with bigger subsets.
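The pixel-level rules above can also be turned into a rough subset-sizing helper. The heuristic below (a square subset roughly three features across, so that at least three features fit) is an assumption for illustration, not a Simcenter rule:

```python
# Rough subset sizing from the speckle rules (assumed heuristic): with
# features of speckle_px pixels and at least three features per subset,
# a square subset about 3 * speckle_px wide is a reasonable starting point.
def min_subset_px(speckle_px, features_per_subset=3):
    return features_per_subset * speckle_px

print(min_subset_px(5))    # 15 px subset for well-sized 5 px features
print(min_subset_px(49))   # 147 px: oversized features force large subsets
```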

4.    Example calculations

This section has some example calculations based on the previously presented pinhole model (see Equation 1). 

For example, assume the objective of an experiment is to measure the deformation of a plate during a static loading test. The plate is approximately 200x200 mm. Measuring the smallest possible displacement of the plate is the goal.

4.1 Optimal distance

A standard 5 Megapixel (MPx) camera, with a sensor size of 2448x2048 pixels and a pixel size of 3.45 μm, is used. The shortest sensor dimension (2048 pixels) is used as it represents the worst-case scenario. The object dimension is also slightly increased (to approximately 227 mm) to ensure the object stays in view throughout the test. Using a 12 mm lens, the minimum distance to fit the Region-Of-Interest (ROI) in the Field-Of-View (FOV) is:
 
OD = f0 × w / Sw = (12 mm × 227 mm) / (2048 × 3.45 μm) ≈ 386 mm
 
Going for a 25 mm lens, the same calculation would lead to:
 
OD = f0 × w / Sw = (25 mm × 227 mm) / (2048 × 3.45 μm) ≈ 803 mm
 
For the 12 mm lens, the camera should be about 386 mm from the plate; with the 25 mm lens, about 803 mm. Choosing one lens or the other depends only on the setup configuration in the lab. However, if the distance is fixed, the lens that best fits the object in the Field-Of-View should always be chosen.
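These two distances can be reproduced with a few lines of Python (the 227 mm width is the plate dimension plus margin, as above):

```python
# Optimal camera-object distance OD = f0 * w / Sw for both lenses.
pixel_size_mm, n_px = 0.00345, 2048
s_w_mm = n_px * pixel_size_mm        # sensor short edge, ~7.07 mm
w_mm = 227.0                         # 200 mm plate plus margin
for f0_mm in (12.0, 25.0):
    print(f"{f0_mm:.0f} mm lens: OD = {f0_mm * w_mm / s_w_mm:.0f} mm")
# prints OD = 386 mm and OD = 803 mm
```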

Once the optimal distance is fixed, the best possible measurement resolution can be determined. In this case, the theoretical sub-pixel resolution limit (0.01 pixels) is used. 

4.2 Resolution

Using the 25 mm lens and the optimal distance, the displacement corresponding to 0.01 pixels is:
 
Δw = 0.01 px × 3.45 μm/px × (803 mm / 25 mm) ≈ 1.1 μm

If the same distance between the test object and the camera (OD ≈ 800 mm) is kept, but a 12 mm lens is used, then the minimum resolution would be:
 
Δw = 0.01 px × 3.45 μm/px × (800 mm / 12 mm) ≈ 2.3 μm
 
Using a higher resolution camera would allow having more pixels per unit length of the object. For example, using a 9 MPx camera with a sensor size of 4096x2160 pixels and the same pixel size, in combination with the 25 mm lens, would result in:
 
OD = (25 mm × 227 mm) / (2160 × 3.45 μm) ≈ 762 mm, so Δw = 0.01 px × 3.45 μm/px × (762 mm / 25 mm) ≈ 1.05 μm

The difference between the two setups is marginal. This is because the higher resolution is obtained with a rectangular sensor whose shorter dimension has a number of pixels comparable to that of the 5 MPx camera.
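The three minimum-displacement values can be reproduced as follows (a sketch; the 762 mm distance for the 9 MPx camera assumes the same ~227 mm object width refitted to its 2160-pixel edge):

```python
# Minimum detectable displacement: 0.01 px mapped to the object side.
pixel_size_mm, subpixel = 0.00345, 0.01
setups = [("5 MPx, 25 mm lens", 25.0, 803.0),
          ("5 MPx, 12 mm lens", 12.0, 800.0),
          ("9 MPx, 25 mm lens", 25.0, 762.0)]   # 762 mm: refit assumption
for name, f0_mm, od_mm in setups:
    res_um = subpixel * pixel_size_mm * od_mm / f0_mm * 1000.0
    print(f"{name}: {res_um:.2f} um")           # ~1.11, ~2.30, ~1.05 um
```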

These calculations, and the possibility to measure with a resolution of 0.01 pixels, require the speckle pattern to be optimally applied. 

4.3 Speckle size

Using the 5-pixel rule, it is possible to calculate the required physical dimension of the speckles. For the 5 MPx camera with the 25 mm lens, the optimal resolution can be achieved with a speckle size of:
 
speckle size = 5 px × 3.45 μm/px × (803 mm / 25 mm) ≈ 0.55 mm
 
With speckle sizes bigger or smaller than this, lower resolution cameras, a non-optimal distance between the camera and the object, or a non-perfect fit of the Region-Of-Interest in the Field-Of-View, the actual resolution will be lower than the theoretical one.
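The same pinhole scale factor gives the physical speckle size directly (a sketch under the same assumptions as above):

```python
# Physical speckle size for the 5-pixel rule (5 MPx camera, 25 mm lens).
pixel_size_mm, f0_mm, od_mm = 0.00345, 25.0, 803.0
speckle_mm = 5 * pixel_size_mm * od_mm / f0_mm
print(f"Optimal speckle size: {speckle_mm:.2f} mm")   # ~0.55 mm
```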

Knowing this theoretical measurement resolution is crucial to understand whether the measurement will be successful. For the 5 MPx and 25 mm setup, displacements below roughly 1 micron simply cannot be measured, even under the best possible conditions. If the quantities of interest are smaller, the solution is to either use a higher resolution camera or zoom in on a smaller area of the object. This calculation is extremely important in the case of a vibration test, where the actual displacement typically decreases rapidly with increasing frequency.

5.    Conclusions

The scope of this article is to provide some general guidelines on how to prepare a test setup and choose the right equipment for a Digital Image Correlation measurement. It also gives some basic information on photography and the criteria to follow to take a good picture. More practical DIC measurement information can be found on the International Digital Image Correlation Society (iDICS) web site, which has an excellent reference guide, "A Good Practices Guide for Digital Image Correlation".

Questions? Email william.flynn@siemens.com.

KB Article ID# KB000047773_EN_US