Swiss Simcenter STAR-CCM+ Knife [5]: Pixel comparison: Integrate image processing tools in your CFD workflow.



This article shows you how to integrate image processing tools in your CFD workflow. An example is given in which results from different simulations are compared.


The tools labeled as "Swiss Simcenter STAR-CCM+ Knife" can be thought of as a set of tools with the following properties:

1. They are used only occasionally, from time to time,
2. They can save you a lot of work and get you out of difficult situations,
3. They must be used with care,
4. If used improperly, they can negatively affect your simulation results.

Much in the spirit of the well-known Swiss penknife.

This article shows you how to use image processing tools to compare results between different simulations. The study of the influence of mesh refinement on the lift/drag ratio for a NACA profile is used as a guiding example.

Mesh sensitivity studies are standard in CFD simulation. The goal is either to achieve mesh-independent results or to collect enough data to perform a Richardson extrapolation. During those studies, you run your simulation with meshes of different degrees of refinement. Even if you are only interested in obtaining mesh-independent integral data (lift, drag, and so on), you are always interested in localizing the places in your simulation responsible for the differences, and in their magnitude. So, inevitably, the following question accompanies those studies: how do you compare results between simulations?
Keep in mind that the program stores the physical values at the cell centroids of the corresponding meshes, so a common ground is required for comparing results. Typically, the comparison is done by transferring the results from one mesh to the other. Some linear interpolation is involved when mapping the results from the cell centroids of the source mesh to the cell centroids of the destination mesh. Once both sets of data are available at the same (X, Y, Z) locations, you can create field functions with their differences. You can use such field functions to quantify and display the discrepancies, and, with thresholds, you can easily visualize them on the screen. That process is explained in the related articles listed at the end.
There is another mapping strategy that you use again and again without knowing it: when looking at two images, you use your eyes to compare the differences in colors, forms, and shapes formed by a collection of pixels. In this case, the program has performed a mapping from the (X, Y, Z) coordinates of the mesh to the (Xs, Ys) coordinates of your display. So the common ground is your display, and both results have been transferred, or mapped, to it.
Instead of your eyes, you can use an image-processing program to compare the images pixel by pixel. That comparison can be made in several ways; you first need to define a metric in pixel space. Each pixel is a discretization of your picture showing a combination of red (R), blue (B), and green (G). If you use 8 bits per channel, you can represent the color content of a pixel as a point in a 3D space with coordinates ordered as (R, B, G). Examples:
R = (255,0,0), B = (0,255,0), G = (0,0,255)
Black = 0xR + 0xB + 0xG = (0,0,0)
White = 1xR + 1xB + 1xG = (255,255,255)
Yellow = 1xR + 0xB + 1xG = (255,0,255)
Magenta = 1xR + 1xB + 0xG = (255,255,0)
Cyan = 0xR + 1xB + 1xG = (0,255,255)

The color distance between pixels P1 = (R1, B1, G1) and P2 = (R2, B2, G2) could be defined, for example, as the Euclidean distance:

distE(P1, P2) = sqrt( (R1 - R2)^2 + (B1 - B2)^2 + (G1 - G2)^2 )

or as the maximum channel difference (the Chebyshev distance, referred to as distC below):

distC(P1, P2) = max( |R1 - R2|, |B1 - B2|, |G1 - G2| )

Clearly, the choice of color scale affects the results, so the comparison is made with a grayscale. In that case, each pixel has coordinates (α, α, α), where 0 <= α <= 255, and both distances defined above coincide up to a constant factor. The distance between gray pixels P1 and P2 is then simply:

dist(P1, P2) = |α1 - α2|

Your goal, then, is to compare grayscale (black and white) images of physical quantities, detecting differences by comparing pixels at the same locations.
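As a quick numerical illustration, the two metrics amount to a few lines of Python (dist_e and dist_c are illustrative names for this sketch, not ImageMagick functions):

```python
import math

# Pixels follow the article's (R, B, G) coordinate ordering;
# the math itself is order-independent.

def dist_e(p1, p2):
    """Euclidean color distance between two pixels."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

def dist_c(p1, p2):
    """Maximum channel difference (Chebyshev distance)."""
    return max(abs(a - b) for a, b in zip(p1, p2))

black, white = (0, 0, 0), (255, 255, 255)
print(dist_c(black, white))           # 255
print(round(dist_e(black, white)))    # 255 * sqrt(3) rounds to 442

# For gray pixels (a, a, a) the two distances coincide up to sqrt(3):
g1, g2 = (40, 40, 40), (100, 100, 100)
print(dist_c(g1, g2))                          # 60
print(round(dist_e(g1, g2) / math.sqrt(3)))    # also 60
```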
Now, imagine you have the following problem at hand: you want to calculate the lift/drag ratio for a NACA profile at Mach 0.5, and you are currently studying the influence of mesh refinement on it. For illustration purposes, you pick only two of the tests: a coarse mesh (cell base size 0.1) and a fine mesh (cell base size 0.05). The size settings for cells away from the NACA profile and the prismatic layer settings (high-Reynolds strategy) were kept the same for both.
[Image: the coarse and fine meshes around the NACA profile]
The lift/drag ratio obtained was:
[Image: table of lift, drag, and lift/drag ratio for both meshes]

That is, the lift/drag ratio obtained with the fine mesh has increased by ~16.3% with respect to the coarse one, due to a decrease in lift (~2.1%) combined with a larger decrease in drag (~15.8%), both considering the fine mesh with respect to the coarse one.
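The quoted percentages are consistent with each other, as a quick sanity check shows:

```python
# Lift decreased ~2.1% and drag decreased ~15.8% (fine mesh relative
# to coarse), so the lift/drag ratio changes by the factor
# (1 - 0.021) / (1 - 0.158).
lift_factor = 1 - 0.021
drag_factor = 1 - 0.158
ld_change = lift_factor / drag_factor - 1
print(round(100 * ld_change, 1))  # ~16.3 (% increase in lift/drag)
```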
You are still far from mesh-independent results. Nonetheless, you want to inspect where the differences come from, and you proceed to compare black-and-white versions of the following images:
Velocity magnitude:
[Images: velocity magnitude for the coarse and fine meshes]
And Z-component of vorticity (clipped to [-2000,2000] /s):
[Images: Z-component of vorticity for the coarse and fine meshes]

You need a common ground for the comparison, so both scenes were generated with the same resolution (1396x544) and with common legend limits: the lower limit is the minimum of the two individual lower limits, and the upper limit is the maximum of the two individual upper limits. The legend level progression was changed from the default (square root) to linear, and the color scale was changed to grayscale:
[Image: grayscale legend settings with linear progression and common limits]
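The common limits are simply the minimum of the individual minima and the maximum of the individual maxima. A minimal sketch, with made-up per-scene ranges (in practice you read these from each scalar scene's legend):

```python
# Hypothetical (min, max) legend ranges of the two individual scenes:
coarse_range = (-350.0, 910.0)   # coarse-mesh scene
fine_range = (-410.0, 880.0)     # fine-mesh scene

# Common limits shared by both scenes:
common_min = min(coarse_range[0], fine_range[0])  # minimum of the minima
common_max = max(coarse_range[1], fine_range[1])  # maximum of the maxima
print(common_min, common_max)  # -410.0 910.0
```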

For the comparison, you can use the free package ImageMagick as an image-processing tool. It is designed for batch processing of images; that is, it allows you to combine image-processing operations in a script (shell, DOS, Perl, PHP, and so on) so that the operations can be applied to many images, or used as a subsystem of another tool, such as a web application, a video-processing tool, or a panorama generator.
Use the compare command within the package; it compares two images and puts the difference into a third. The command has additional options that provide metric information.
The options used were:

[Image: the compare command with the AE metric]
This gives the number of pixels that are distinct (109312 out of 759424 = 1396 x 544) and generates an image with the differing pixels labeled in red.
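In ImageMagick this corresponds to an invocation such as `compare -metric AE coarse.png fine.png diff.png` (the exact options from the lost screenshot are not reproduced here). What the AE (absolute error) metric counts can be sketched in pure Python on two small, made-up grayscale arrays:

```python
# Two tiny 3x4 "images" as nested lists of 8-bit gray values;
# these are illustrative numbers, not real scene exports.
coarse = [[10, 10, 200, 200],
          [10, 50, 200, 200],
          [10, 10, 200, 240]]
fine   = [[10, 10, 200, 200],
          [10, 60, 200, 200],
          [10, 10, 200, 200]]

# The AE metric simply counts pixel positions whose values differ:
distinct = sum(1
               for row_c, row_f in zip(coarse, fine)
               for a, b in zip(row_c, row_f)
               if a != b)
print(distinct)  # 2 differing pixels out of 12
```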
The images below show the differences for pressure:

[Image: pixel differences for pressure]

differences for velocity:
[Image: pixel differences for velocity magnitude]

and vorticity:
[Image: pixel differences for Z-vorticity]
The images above show which pixels differ, regardless of how significant the differences are. Use the next two commands to see where the major discrepancies come from.
[Image: the compare command with the PAE metric]
This tells you that the maximum difference between pixels in the pressure images amounts to 12.9412%.

The difference is calculated as follows: ImageMagick scales distC by a factor of 257, so that the distance between black and white is 65535 (= 257 x 255). The maximum distance between pixels in the image is 8481, which gives the 12.9412% when compared with 65535.
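The quoted percentage can be checked with a couple of lines (values taken from the text above):

```python
# ImageMagick reports channel values in 16-bit units, i.e. 8-bit
# values scaled by 257, so the black-to-white distance is 257 * 255.
full_scale = 257 * 255        # 65535
max_distance = 8481           # maximum pixel distance reported

print(round(100 * max_distance / full_scale, 4))  # 12.9412 (%)
```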
The second command is: 


[Image: the compare command with a 10% fuzz factor]

This gives the number of pixels (8) whose difference lies above 10%. The command also generates an image with those pixels labeled in red. Below you see the command output for pixel distances above 10%, 5%, and 1% in pressure:
[Images: pressure pixel differences above the 10%, 5%, and 1% thresholds]
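The thresholding idea (as with ImageMagick's -fuzz option combined with the AE metric) can be sketched in pure Python: count the pixels whose grayscale distance exceeds a given fraction of the full scale. The 8-bit values below are made up for illustration:

```python
# Two made-up rows of 8-bit gray values:
coarse = [10, 10, 200, 200, 128, 255]
fine   = [10, 40, 200, 170, 133, 255]

def count_above(threshold_fraction, full_scale=255):
    """Count pixels whose difference exceeds a fraction of full scale."""
    limit = threshold_fraction * full_scale
    return sum(1 for a, b in zip(coarse, fine) if abs(a - b) > limit)

print(count_above(0.10))  # 2 pixels differ by more than 10% (> 25.5)
print(count_above(0.01))  # 3 pixels differ by more than 1%  (> 2.55)
```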

You can clearly see that mesh refinement has not affected the pressure field very much. Refinement mostly affects the region around the stagnation point.

Now look at the differences in velocity magnitude:
[Images: velocity-magnitude pixel differences above the thresholds]

Here you see a major influence of refinement. Notably, that influence is asymmetric: refinement mostly affects the lower side of the airfoil, followed by its wake.
Finally, for the Z-vorticity:

[Images: Z-vorticity pixel differences above the thresholds]

Vorticity is affected most by refinement. The reason is that vorticity is defined in terms of spatial velocity gradients, and refinement clearly influences the gradient resolution of every physical quantity. The influence of refinement is again asymmetric, affecting the lower side of the airfoil and the wake.
The complete list of results from the above commands is:

[Table: complete results of the compare commands for pressure, velocity, and vorticity]

The results for the lift/drag ratio, where the discrepancy came mostly from the change in drag (15.8%) as opposed to the change in lift (2.1%), can then be explained on the grounds that:
  • Refinement left the pressure field almost unaffected.
  • Refinement affects the velocity field asymmetrically (mostly on the airfoil's lower side).
  • Refinement improves the gradient resolution.
The next step in our image-processing integration crusade is automation. Please stay tuned.

See also:

How to analyze differences in results between two simulations (CCM file, direct)
How to analyze differences in results between two simulations (tables)

and previous Swiss STAR-CCM+ Knife tools: 


KB Article ID# KB000037102_EN_US



Associated Components

Design Manager Electronics Cooling In-Cylinder (STAR-ICE) Job Manager Simcenter STAR-CCM+