Abstract - Exposure fusion is an efficient method for obtaining a well-exposed, detailed image of a scene with high dynamic range. However, the method fails in the presence of camera shake and/or object motion. In this work, we address this issue by replacing pixel-based fusion with a fusion between pixels having similar neighborhoods (patches) across images taken with different exposure settings. To achieve this, we compare patches in the luminance domain. We show through several experiments that this procedure yields results comparable to or better than the state of the art, at a reasonable computing time.
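The core idea above is to match patches between differently exposed images by comparing them in the luminance domain. As a minimal illustration of that comparison step (the exact luminance weights and patch distance used by NLEF are not given here; Rec. 601 luminance and a sum-of-squared-differences distance are assumptions for this sketch):

```python
import numpy as np

def luminance(rgb):
    """Convert an RGB image of shape (H, W, 3) to a luminance image.

    Rec. 601 weights are assumed here for illustration; the paper's
    exact luminance formula may differ.
    """
    return rgb @ np.array([0.299, 0.587, 0.114])

def patch_distance(lum_a, lum_b, ya, xa, yb, xb, half=3):
    """Sum of squared differences between two (2*half+1)^2 luminance
    patches, centered at (ya, xa) in lum_a and (yb, xb) in lum_b."""
    pa = lum_a[ya - half:ya + half + 1, xa - half:xa + half + 1]
    pb = lum_b[yb - half:yb + half + 1, xb - half:xb + half + 1]
    return float(np.sum((pa - pb) ** 2))

# Toy usage: two constant images whose luminances differ by 0.1 everywhere,
# so a 7x7 patch distance is 49 * 0.1^2 = 0.49.
a = luminance(np.full((16, 16, 3), 0.5))
b = luminance(np.full((16, 16, 3), 0.6))
d = patch_distance(a, b, 8, 8, 8, 8, half=3)
```

In a non-local fusion scheme, distances of this kind would weight how strongly a patch from one exposure contributes at a given location, down-weighting patches affected by object motion or misalignment.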
The top row shows a sequence of bracketed exposure images with decreasing exposure times, exhibiting large object displacement, misalignments, and illumination changes. The reference image is framed in red. The bottom image is the fusion produced by the proposed method, NLEF. Processing time: 18 seconds for 6 images of size 578 × 867.
Supplementary Files
Paper: Non Local Exposure Fusion
Source code: NLEF source code
Test scene: Scene to test the code
Further datasets: IUS Sarajevo HDR, EMPA Media technology.