Reflectance modeling by neural texture synthesis

Research output: Contribution to journal › Article › Scientific › peer-review

110 Citations (Scopus)


We extend parametric texture synthesis to capture rich, spatially varying parametric reflectance models from a single image. Our input is a single head-lit flash image of a mostly flat, mostly stationary (textured) surface, and the output is a tile of SVBRDF parameters that reproduce the appearance of the material. No user intervention is required. Our key insight is to make use of a recent, powerful texture descriptor based on deep convolutional neural network statistics for "softly" comparing the model prediction and the exemplars without requiring an explicit point-to-point correspondence between them. This is in contrast to traditional reflectance capture, which requires pointwise constraints between inputs and outputs under varying viewing and lighting conditions. Seen through this lens, our method is an indirect algorithm for fitting photorealistic SVBRDFs. The problem is severely ill-posed and non-convex. To guide the optimizer towards desirable solutions, we introduce a soft Fourier-domain prior that encourages spatial stationarity of the reflectance parameters and their correlations, and a complementary preconditioning technique that enables efficient exploration of such solutions by L-BFGS, a standard non-linear numerical optimizer.
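The "soft" comparison via deep-network statistics can be illustrated with a Gram-matrix texture descriptor of the kind the abstract refers to. Below is a minimal NumPy sketch, not the authors' implementation: it assumes feature maps of shape (C, H, W) have already been extracted by some convolutional network, and the names `gram_matrix` and `texture_loss` are illustrative.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (C, H, W) feature map: average channel
    co-activations, which discard spatial position entirely."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)

def texture_loss(feat_pred, feat_exemplar):
    """Squared Frobenius distance between Gram matrices: a 'soft'
    comparison needing no point-to-point correspondence."""
    g1 = gram_matrix(feat_pred)
    g2 = gram_matrix(feat_exemplar)
    return float(np.sum((g1 - g2) ** 2))

# Because the Gram statistics are position-invariant, circularly
# shifting the exemplar leaves the loss at (numerically) zero,
# while an unrelated texture gives a clearly positive loss.
rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 8, 8))
print(texture_loss(feat, np.roll(feat, 3, axis=2)))  # ≈ 0
```

This position invariance is what lets the prediction be scored against the exemplar without any pixelwise alignment, unlike traditional pointwise reflectance constraints.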

Original language: English
Article number: 65
Pages (from-to): 1-13
Journal: ACM Transactions on Graphics
Issue number: 4
Publication status: Published - 11 Jul 2016
MoE publication type: A1 Journal article-refereed
Event: ACM International Conference and Exhibition on Computer Graphics and Interactive Techniques - Anaheim, United States
Duration: 24 Jul 2016 - 28 Jul 2016
Conference number: 43


  • Appearance capture
  • Convolutional neural networks
  • Material appearance
  • Texture synthesis

