Abstract
Realistic image synthesis, i.e., the creation of photographs of virtual environments by numerical simulation of light, is present in the lives of most people in the form of movies and advertising. Its results are often almost indistinguishable from reality, but the methods are computationally intensive and often require supercomputers. This thesis presents new methods for realistic image synthesis with the potential to reduce rendering times, costs, and environmental footprint.
The rendering equation describes the transport of light as it repeatedly scatters around a virtual environment. The light arriving at a virtual sensor produces a virtual photograph. Typical solutions estimate the pixel colors by randomly sampling numerous paths by which light can reach the sensor and averaging their contributions.
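The Monte Carlo pixel estimate described above can be summarized in a standard formulation (the notation here is a common convention, not taken from the thesis itself):

```latex
I_j \;\approx\; \frac{1}{N} \sum_{i=1}^{N} \frac{f_j(\bar{x}_i)}{p(\bar{x}_i)},
```

where $I_j$ is the value of pixel $j$, $f_j$ its measurement contribution function, $\bar{x}_i$ a randomly sampled light path, and $p(\bar{x}_i)$ the density with which that path was sampled.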
The methods presented in this thesis work in the gradient domain: in addition to sampling the colors, they directly evaluate the finite differences between adjacent pixels and reconstruct the image as an integration problem. This allows the solution to exploit the similarity of the light transport behind nearby pixels. Gradient-domain rendering was recently proposed in the context of Markov Chain Monte Carlo, but the feasibility of gradient-domain rendering in the more common traditional Monte Carlo context was left unanswered. This thesis presents four new gradient-domain Monte Carlo rendering methods.
The first two methods evaluate the image gradients by constructing highly correlated pairs of paths for adjacent pixels. Subtracting their contributions typically produces lower-noise gradients, since correlation reduces the variance of a difference. A screened Poisson equation combines the high frequencies captured by the gradient samples with the low frequencies of the color samples. This results in images that typically have less high-frequency noise.
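The screened Poisson reconstruction mentioned above is commonly written as a least-squares problem over the unknown image; one common formulation (notation and the weight $\alpha$ are assumptions for illustration, not the thesis's exact parameters):

```latex
\min_{I}\; \alpha^2 \,\lVert I - c \rVert^2 \;+\; \lVert \nabla I - g \rVert^2
\quad\Longrightarrow\quad
\alpha^2 I \;-\; \nabla^2 I \;=\; \alpha^2 c \;-\; \nabla \cdot g,
```

where $c$ is the sampled color image, $g$ the sampled gradient field, and $\alpha$ balances fidelity to the colors against fidelity to the gradients. Setting the functional's derivative to zero yields the screened Poisson equation on the right, which is linear and can be solved efficiently, e.g., in the Fourier domain.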
The third method is an extension to animation. It also evaluates gradients in the time dimension by rendering each image in two parts, with random seeds shared between the previous and the next frame. Subtracting the images rendered with the same random seed produces the time component of the gradients. A spatiotemporal reconstruction decreases flickering in the animation.
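The seed-sharing idea can be illustrated with a toy model. The sketch below is not the thesis's implementation: `render_half` is a hypothetical stand-in for a renderer whose noise is fully determined by the random seed, so that subtracting two frames rendered with the same seed cancels the correlated noise and leaves the frame-to-frame change.

```python
import numpy as np

def render_half(frame, seed, shape=(4, 4)):
    """Stand-in for a renderer: returns a noisy half-image of `frame`.
    In this toy model the noise depends only on the seed, mimicking
    a path tracer driven by a fixed random number sequence."""
    rng = np.random.default_rng(seed)
    signal = np.full(shape, float(frame))      # "true" image varies per frame
    noise = rng.standard_normal(shape) * 0.1   # seed-determined noise
    return signal + noise

def temporal_gradient(frame, shared_seed):
    """Time component of the gradient: render frame and frame+1 with the
    SAME seed and subtract, so the correlated noise cancels."""
    return render_half(frame + 1, shared_seed) - render_half(frame, shared_seed)

g_t = temporal_gradient(0, shared_seed=42)
# In this toy model the cancellation is exact, leaving only the
# frame-to-frame change in the signal.
```

In a real renderer the cancellation is only partial, because the two frames see slightly different geometry and shading; the point is that shared seeds make the residual noise much smaller than in an independent subtraction.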
The last method is a deep convolutional neural network that replaces the screened Poisson reconstruction. The network is trained to map the sampled colors and gradients to noise-free reconstructions by minimizing a neural perceptual image distance. This improves the sharpness of the reconstructions; the gradient inputs especially improve the quality of shadows.
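The training objective for such a reconstruction network can be summarized as follows (the notation is illustrative, not the thesis's own):

```latex
\min_{\theta}\; \mathbb{E}\left[\, d\big(f_\theta(c, g),\; I_{\mathrm{ref}}\big) \,\right],
```

where $f_\theta$ is the network mapping sampled colors $c$ and gradients $g$ to an image, $I_{\mathrm{ref}}$ a noise-free reference rendering, and $d$ a learned perceptual image distance rather than a plain pixel-wise error.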
Original language  English 

Qualification  Doctor's degree 
Print ISBNs  9789526089072 
Electronic ISBNs  9789526089089 
Publication status  Published  2020 
MoE publication type  G5 Doctoral dissertation (article) 
Keywords
 realistic image synthesis
 gradient-domain rendering 
 ray tracing
Cite this
Kettunen, M. (2020). Gradient-Domain Methods for Realistic Image Synthesis. Aalto University.