We introduce a new framework for learning in severely resource-constrained settings. Our technique combines the representational richness of multiple linear predictors with the sparsity of Boolean relaxations, yielding classifiers that are compact, interpretable, and accurate. We rigorously formalize the learning problem and establish fast convergence of the resulting algorithm via a relaxation to a minimax saddle-point objective. We complement these theoretical foundations with an extensive empirical evaluation.
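To make the idea concrete, the following is a minimal, hypothetical sketch of sparse linear classification via a Boolean relaxation: a {0,1}-valued feature mask `s` is relaxed to the box [0,1]^d and optimized jointly with the weights `w` by projected gradient descent on a logistic loss plus an L1 sparsity penalty. All names, the data, and the penalty weight `lam` are illustrative assumptions; this is not the paper's saddle-point algorithm, only a simplified instance of the relaxation idea.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 20, 5

# Synthetic data: only the first k features carry signal (assumption for the demo).
w_true = np.zeros(d)
w_true[:k] = 1.0
X = rng.standard_normal((n, d))
y = np.where(X @ w_true + 0.1 * rng.standard_normal(n) > 0, 1.0, -1.0)

w = np.zeros(d)
s = np.full(d, 0.5)          # relaxed Boolean mask, entries kept in [0, 1]
lam, lr = 0.05, 0.1          # sparsity penalty and step size (illustrative)

def grads(w, s):
    m = y * (X @ (w * s))                    # classification margins
    g = -y / (1.0 + np.exp(m))               # d(logistic loss)/d(margin)
    gx = (X * g[:, None]).mean(axis=0)       # shared data-gradient term
    return gx * s, gx * w + lam              # grads w.r.t. w and s (+ L1 push on s)

for _ in range(500):
    gw, gs = grads(w, s)
    w -= lr * gw
    s = np.clip(s - lr * gs, 0.0, 1.0)       # project mask back onto the box

acc = np.mean(np.sign(X @ (w * s)) == y)     # training accuracy of masked model
```

After training, the penalty drives the mask entries of uninformative features toward 0 while informative ones stay active, so the learned classifier uses only a compact subset of features.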
Number of pages: 11
Journal: Advances in Neural Information Processing Systems
Publication status: Published - 2018
MoE publication type: A4 Article in a conference publication
Event: Conference on Neural Information Processing Systems (conference number 32), Palais des Congrès de Montréal, Montréal, Canada, 2–8 December 2018