Periodic Activation Functions Induce Stationarity
Abstract
Neural network models are known to reinforce hidden data biases, making them unreliable and difficult to interpret. We seek to build models that 'know what they do not know' by introducing inductive biases in the function space. We show that periodic activation functions in Bayesian neural networks establish a connection between the prior on the network weights and translation-invariant, stationary Gaussian process priors. Furthermore, we show that this link goes beyond sinusoidal (Fourier) activations by also covering triangular wave and periodic ReLU activation functions. In a series of experiments, we show that periodic activation functions obtain comparable performance for in-domain data and capture sensitivity to perturbed inputs in deep neural networks for out-of-domain detection.
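To make the sinusoidal (Fourier) case concrete, the following is a minimal sketch (not the authors' implementation): a one-hidden-layer network with sine activations, Gaussian priors on the weights, and uniform random phases. Its induced prior covariance, estimated by Monte Carlo over prior draws, should match the stationary RBF kernel given by the classical random-Fourier-features identity. The lengthscale, layer width, and number of draws below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal sketch (not the paper's code): a single-hidden-layer network with
# sinusoidal activations and Gaussian weight priors. We estimate the induced
# prior covariance E[f(x) f(x')] by Monte Carlo and compare it to the
# stationary RBF kernel predicted by the random-Fourier-features identity.
rng = np.random.default_rng(0)
lengthscale = 1.0     # illustrative prior lengthscale (assumption)
n_hidden = 2000       # hidden-layer width
n_draws = 2000        # prior draws for the Monte Carlo estimate

def sample_prior_function(x):
    """Evaluate one network drawn from the prior at inputs x of shape (N, 1)."""
    w = rng.normal(0.0, 1.0 / lengthscale, size=(1, n_hidden))  # input weights
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_hidden)            # random phases
    a = rng.normal(0.0, 1.0, size=n_hidden)                     # output weights
    phi = np.sqrt(2.0 / n_hidden) * np.sin(x @ w + b)           # periodic features
    return phi @ a

x = np.linspace(-3.0, 3.0, 61)[:, None]
fs = np.stack([sample_prior_function(x) for _ in range(n_draws)])
k_emp = fs.T @ fs / n_draws  # empirical prior covariance at the grid points

# Stationarity check: the covariance should depend only on x - x', matching
# exp(-(x - x')^2 / (2 * lengthscale^2)); the gap shrinks as the width and
# the number of draws grow.
k_rbf = np.exp(-((x - x.T) ** 2) / (2.0 * lengthscale**2))
print("max |K_emp - K_rbf|:", np.abs(k_emp - k_rbf).max())
```

The same check can be adapted to the triangular wave and periodic ReLU activations the abstract mentions, which the paper shows also fall under the same stationarity link.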
Original language | English |
---|---|
Title of host publication | Advances in Neural Information Processing Systems 34 (NeurIPS 2021) |
Publisher | Curran Associates Inc. |
Number of pages | 13 |
Publication status | Published - 2021 |
MoE publication type | A4 Conference publication |
Event | Conference on Neural Information Processing Systems (conference no. 35), Virtual, Online, 6 Dec 2021 → 14 Dec 2021, https://neurips.cc |
Publication series
Name | Advances in Neural Information Processing Systems |
---|---|
Publisher | Morgan Kaufmann Publishers |
ISSN (Print) | 1049-5258 |
Conference
Conference | Conference on Neural Information Processing Systems |
---|---|
Abbreviated title | NeurIPS |
City | Virtual, Online |
Period | 06/12/2021 → 14/12/2021 |
Internet address | https://neurips.cc |
Projects per year
Solin, Arno (AoF Fellow Salary): Probabilistic principles for latent space exploration in deep learning
01/09/2021 → 31/08/2026
Project: Academy of Finland: Other research funding