How the corrections work

For both the people working on Lensfun and the people working with Lensfun, it is very important to understand how corrections are applied to the images.

Order of the image operations

The image operations are not commutative. Thus, it is important to apply them in the right order. This is:

  1. devignetting
  2. anti-TCA
  3. undistortion
  4. change of projection
  5. perspective correction
  6. scaling

Image corrections

The first three image operations that are applied are the image corrections.

Their order relative to each other is closely connected with the way the lens errors are measured. Vignetting is measured on the pristine image; consequently, it must be corrected before any pixel-warping operations are applied. The same is true for the TCA measurement. Distortion is also measured on the pristine image; however, undistortion is not affected by a previous devignetting or anti-TCA. This results in the order enumerated above: first devignetting, then anti-TCA, then undistortion.

Strictly speaking, this alone does not yet fix the order of devignetting and anti-TCA. But imagine very heavy TCA: it shifts the red and the blue channel relative to the green one, which would affect the vignetting measurement considerably. The other way round is much less invasive: even massive vignetting would not change the TCA measurement much. Therefore, vignetting must be corrected before TCA is corrected; otherwise, devignetting would operate on an image different from the one the vignetting was measured on.

By the way, correcting TCA and distortion directly after one another makes it possible to do both very efficiently: the pixel coordinates are transformed by both image operators, and after that, only one interpolation and one pixel value lookup are necessary.
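
To make this concrete, here is a minimal sketch with simplified placeholder models (the quadratic distortion term and the per-channel scale factor are made up for illustration; they are not Lensfun's calibrated models). Both operators only move coordinates around, so they can be chained, and only the final coordinate needs one interpolated lookup. It is written in the lookup direction explained under "How it is really done" below.

    // Sketch with simplified placeholder models (not Lensfun code).
    struct Coord { double x, y; };   // coordinates relative to the image center

    // Placeholder radial distortion: undistorted -> distorted coordinate.
    Coord distort(Coord c, double k1) {
        double s = 1.0 + k1 * (c.x * c.x + c.y * c.y);
        return { c.x * s, c.y * s };
    }

    // Placeholder TCA: a per-channel radial scale factor (1.0 for green).
    Coord shift_tca(Coord c, double channel_scale) {
        return { c.x * channel_scale, c.y * channel_scale };
    }

    // Chain both coordinate transforms; only the resulting coordinate would
    // then be used for a single interpolated pixel lookup in the raw image.
    Coord source_coordinate(Coord dest, double k1, double channel_scale) {
        return shift_tca(distort(dest, k1), channel_scale);
    }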

Change of projection

Lensfun can also re-map a fisheye to a rectilinear image. Mathematically speaking, a perfect lens follows a well-defined projection (in other places also called “lens type” or “geometry”), like:

\[\begin{aligned} r &= 2f\sin(\theta/2) &\text{equisolid} \\ r &= f\theta &\text{equidistant} \\ r &= 2f\tan(\theta/2) &\text{stereographic} \\ r &= f\sin(\theta) &\text{orthographic} \\ r &= f\tan(\theta) &\text{rectilinear} \end{aligned} \]

Here, \(\theta\) is the angle between the incoming ray of light and the optical axis, \(f\) is the focal length, and \(r\) is the distance of the resulting dot on the sensor from the sensor center.

As you can see, the rectilinear projection is just one of many, and each of them has its advantages and disadvantages. The first four projections are considered fisheye projections.
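
For reference, the same formulas written as code; this is a trivial restatement of the equations above, with theta in radians and f and r sharing the same unit (e.g. millimetres):

    #include <cmath>

    // r as a function of the ray angle theta and the focal length f,
    // for each of the projections listed above.
    double r_equisolid(double f, double theta)     { return 2.0 * f * std::sin(theta / 2.0); }
    double r_equidistant(double f, double theta)   { return f * theta; }
    double r_stereographic(double f, double theta) { return 2.0 * f * std::tan(theta / 2.0); }
    double r_orthographic(double f, double theta)  { return f * std::sin(theta); }
    double r_rectilinear(double f, double theta)   { return f * std::tan(theta); }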

Lensfun can change the projection of the image. But converting e.g. from fisheye to rectilinear is not a correction. A fisheye image is as perfect as a rectilinear image if it follows the respective projection formula. And the image follows the projection formula after a successful distortion correction.

Therefore, the change of projection is performed after the image corrections.
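
As an illustration of such a projection change, the following sketch re-maps a radius measured on an equisolid fisheye image to the radius it would have in a rectilinear image with the same focal length. This is plain math derived from the formulas above, not Lensfun's implementation, and it is only valid for rays with θ below 90°.

    #include <cmath>

    // Re-map an equisolid-fisheye radius to the corresponding rectilinear
    // radius for the same focal length f.  Only valid for theta < 90 degrees,
    // since the rectilinear projection cannot represent larger angles.
    double equisolid_to_rectilinear(double f, double r_fisheye)
    {
        // Invert r = 2 f sin(theta/2) to recover the angle of the ray ...
        double theta = 2.0 * std::asin(r_fisheye / (2.0 * f));
        // ... and re-project that ray with the rectilinear formula r = f tan(theta).
        return f * std::tan(theta);
    }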

Perspective correction

Lensfun can correct the effects of a tilted camera, also known as perspective correction; see Applying perspective correction. In order for this to work properly, the distortion needs to be corrected, and the image needs to be converted to rectilinear projection if necessary. Therefore, perspective correction is performed after the change of projection.
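
The following sketch shows one common textbook formulation of such a correction for a camera tilted by an angle alpha about the horizontal axis. It is not necessarily the exact formulation Lensfun uses, but it illustrates why the coordinates must already be rectilinear: only then can image points be treated as rays through a pinhole.

    #include <cmath>

    struct Point { double x, y; };   // rectilinear coordinates relative to the image center

    // Generic keystone correction: interpret (x, y, f) as a ray direction,
    // rotate it by alpha about the horizontal axis, and re-project it onto the
    // image plane z = f.  x, y and f must be given in the same unit.
    Point correct_tilt(Point p, double f, double alpha)
    {
        double denom = p.y * std::sin(alpha) + f * std::cos(alpha);
        return { f * p.x / denom,
                 f * (p.y * std::cos(alpha) - f * std::sin(alpha)) / denom };
    }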

Scaling

Often it is desirable to scale the resulting image, e.g. to eliminate black areas at the borders caused by one of the previous image operations. Because all other transformations assume the pristine sensor image (in particular, they rely on the correct focal length), scaling comes last in the processing chain.
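
In the reverse mapping described in the next section, scaling boils down to multiplying the centered destination coordinates by a constant factor before all other reverse transforms. The sketch below only shows this triviality; the factor is assumed to have been chosen (e.g. by Lensfun's automatic scaling, if enabled) so that no destination pixel maps outside the valid source area.

    struct Coord { double x, y; };   // coordinates relative to the image center

    // Scaling in the reverse mapping: simply multiply the destination
    // coordinate by a constant factor before the remaining reverse transforms.
    Coord scale(Coord c, double factor) { return { c.x * factor, c.y * factor }; }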

How it is really done

The actual order performed in a program that calls Lensfun will differ from the list above.

Why is this? Understanding this is very important for hacking on Lensfun as well as for using it in your own programs.

When it comes to pixel coordinate transformations, it makes a lot of sense to start with the perfect, rectified image (still empty) and to distort its pixel coordinates into the distorted image. This distorted image is the source image (the RAW file), and there you can do a simple pixel lookup, possibly with interpolation. This way, you find all pixel values for your rectified image efficiently and accurately.
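
In code, this is a plain inverse mapping: loop over the (still empty) corrected image and, for every pixel, compute where in the raw image to read. The sketch below shows only the skeleton; an identity placeholder stands in for the chained transforms, and a nearest-neighbor lookup stands in for real interpolation.

    #include <cmath>
    #include <vector>

    struct Coord { double x, y; };

    // Placeholder for the chained reverse transforms (scaling, perspective
    // correction, change of projection, distortion, TCA); identity here.
    Coord reverse_chain(Coord dest) { return dest; }

    // Nearest-neighbor lookup; a real implementation would interpolate.
    float lookup(const std::vector<float> &raw, int w, int h, Coord c)
    {
        int x = (int)std::lround(c.x), y = (int)std::lround(c.y);
        if (x < 0 || y < 0 || x >= w || y >= h)
            return 0.0f;
        return raw[y * w + x];
    }

    // Iterate over the corrected (destination) image; for every pixel, compute
    // where to read in the raw (source) image and fetch the value there.
    void remap(const std::vector<float> &raw, std::vector<float> &out, int w, int h)
    {
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x)
                out[y * w + x] = lookup(raw, w, h, reverse_chain({ (double)x, (double)y }));
    }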

However, if you perform the pixel lookup this way, things happen the other way round compared to the section before, because you follow the path through the image manipulations in reverse:

  1. scaling
  2. perspective correction
  3. change of projection
  4. undistortion
  5. anti-TCA
  6. devignetting

This reverse order is the reason why the formulas of the distortion models in lfDistortionModel map the undistorted coordinate to the distorted coordinate. At first this seems to be the wrong way round, because we want to undistort after all, but given how undistortion is actually done, it makes sense. The same is true for lfTCAModel.
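
For instance, a simple radial polynomial of the kind listed in lfDistortionModel could be evaluated like this (the concrete formula is a generic illustration, not one of Lensfun's calibrated models): the undistorted radius goes in, the distorted radius comes out, which is exactly the direction the reverse lookup needs.

    // Illustration only: a generic radial polynomial distortion term.
    // Input is the undistorted radius, output the distorted radius -- the
    // direction in which the reverse lookup walks through the image.
    double distorted_radius(double r_undistorted, double k1)
    {
        double r2 = r_undistorted * r_undistorted;
        return r_undistorted * (1.0 + k1 * r2);
    }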

Note that in the sequence scaling → perspective correction → change of projection → undistortion → anti-TCA, the resulting coordinates of the previous step are the input coordinates for the respective next step.

In reality, devignetting is nevertheless performed first, because it can be separated from all other operations; it is done by calling lfModifier::ApplyColorModification. Then, you perform the coordinate-transforming operations from scaling to anti-TCA; Lensfun can do all of them in one function call with lfModifier::ApplySubpixelGeometryDistortion. Finally, you look up the pixel values in the vignetting-corrected RAW data at the transformed coordinates.
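
Put together, a typical calling sequence might look roughly like the sketch below. It assumes a fully initialized lfModifier (created and configured elsewhere with the wanted corrections enabled; here simply called mod) and a 32-bit float, interleaved RGB buffer. Error handling and proper interpolation are left out, and the Lensfun API documentation remains the authoritative reference for the exact signatures and the layout of the coordinate array.

    #include <vector>
    #include <lensfun/lensfun.h>

    void correct_image(lfModifier &mod, std::vector<float> &rgb, int width, int height)
    {
        // Step 1: devignetting, separated from all coordinate transforms.
        mod.ApplyColorModification(rgb.data(), 0.0, 0.0, width, height,
                                   LF_CR_3(RED, GREEN, BLUE),
                                   (int)(width * 3 * sizeof(float)));

        // Step 2: all coordinate transforms (scaling ... anti-TCA) in one call,
        // done row by row; the result holds x/y pairs for R, G and B per pixel.
        std::vector<float> coords(width * 2 * 3);
        std::vector<float> out(rgb.size());
        for (int y = 0; y < height; ++y) {
            mod.ApplySubpixelGeometryDistortion(0.0, (float)y, width, 1, coords.data());

            // Step 3: look up the pixel values in the devignetted raw data at
            // the transformed coordinates (nearest neighbor for brevity).
            for (int x = 0; x < width; ++x)
                for (int c = 0; c < 3; ++c) {
                    int sx = (int)(coords[(x * 3 + c) * 2] + 0.5f);
                    int sy = (int)(coords[(x * 3 + c) * 2 + 1] + 0.5f);
                    if (sx >= 0 && sy >= 0 && sx < width && sy < height)
                        out[(y * width + x) * 3 + c] = rgb[(sy * width + sx) * 3 + c];
                }
        }
        rgb.swap(out);
    }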