Periodic Activation Functions Induce Stationarity

Lassi Meronen*, Martin Trapp, Arno Solin

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding > Conference article in proceedings > Scientific > peer-reviewed


Abstract

Neural network models are known to reinforce hidden data biases, making them unreliable and difficult to interpret. We seek to build models that 'know what they do not know' by introducing inductive biases in the function space. We show that periodic activation functions in Bayesian neural networks establish a connection between the prior on the network weights and translation-invariant, stationary Gaussian process priors. Furthermore, we show that this link goes beyond sinusoidal (Fourier) activations by also covering triangular wave and periodic ReLU activation functions. In a series of experiments, we show that periodic activation functions obtain comparable performance for in-domain data and capture sensitivity to perturbed inputs in deep neural networks for out-of-domain detection.
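As a minimal illustration of the activation families named in the abstract (sinusoidal, triangular wave, and periodic ReLU), the Python sketch below evaluates common textbook forms of these functions. The exact parameterizations, scalings, and periodic-ReLU construction used in the paper may differ; treat the formulas here as illustrative assumptions only.

import numpy as np

# Common textbook forms of the periodic activations named in the abstract.
# NOTE: these parameterizations are assumptions for illustration; the paper's
# exact definitions and scalings may differ.

def sinusoidal(x):
    # Sinusoidal (Fourier) activation.
    return np.sin(x)

def triangle_wave(x):
    # Triangle wave with period 2*pi and range [-1, 1], matching sin(x).
    return (2.0 / np.pi) * np.arcsin(np.sin(x))

def periodic_relu(x):
    # One plausible periodization of ReLU: clip a triangle wave at zero,
    # giving a nonnegative piecewise-linear function with period 2*pi.
    return np.maximum(triangle_wave(x), 0.0)

if __name__ == "__main__":
    x = np.linspace(-2 * np.pi, 2 * np.pi, 9)
    for name, act in (("sin", sinusoidal),
                      ("triangle", triangle_wave),
                      ("periodic ReLU", periodic_relu)):
        print(f"{name:>13}:", np.round(act(x), 3))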
Original language: English
Title of host publication: Advances in Neural Information Processing Systems 34 (NeurIPS 2021)
Publisher: Curran Associates Inc.
Number of pages: 13
Publication status: Published - 2021
MoE publication type: A4 Conference publication
Event: Conference on Neural Information Processing Systems - Virtual, Online
Duration: 6 Dec 2021 - 14 Dec 2021
Conference number: 35
https://neurips.cc

Publication series

Name: Advances in Neural Information Processing Systems
Publisher: Morgan Kaufmann Publishers
ISSN (Print): 1049-5258

Conference

Conference: Conference on Neural Information Processing Systems
Abbreviated title: NeurIPS
City: Virtual, Online
Period: 06/12/2021 - 14/12/2021
Internet address: https://neurips.cc

