Learning with Vertically-Partitioned Data, Binary Feedback, and Random Parameter Update

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review


Abstract

Machine learning models can be trained on data samples scattered among distributed agents, each of which holds a nonoverlapping subset of the sample features. In this paper, we propose a training algorithm that requires no communication between these agents. A coordinator with access to the ground-truth labels produces binary feedback that guides the optimization process towards the optimal model parameters. We mimic the gradient descent technique using only information observed locally at each agent. We experimented with a logistic regression model on multiple benchmark datasets and achieved promising results in terms of convergence rate and communication load.
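The paper's exact algorithm is not reproduced here; the following is a minimal sketch of the idea as the abstract describes it, assuming a simple accept/reject rule: each agent perturbs its local weight block in a random direction, the coordinator compares the resulting loss against the current one, and it broadcasts a single accept/reject bit. All names (Agent, propose, feedback) and the specific update rule are illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_loss(y, z):
    p = sigmoid(z)
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

class Agent:
    """Holds a disjoint (vertical) block of features and its own weight block."""
    def __init__(self, X_block):
        self.X = X_block                       # (n_samples, n_local_features)
        self.w = np.zeros(X_block.shape[1])    # local model parameters
        self.delta = np.zeros_like(self.w)

    def score(self):
        # Partial score sent only to the coordinator, never to other agents.
        return self.X @ self.w

    def propose(self, step):
        # Random parameter update: perturb local weights in a random direction.
        self.delta = rng.normal(size=self.w.shape)
        return self.X @ (self.w + step * self.delta)

    def feedback(self, accept, step):
        # One-bit feedback from the coordinator: keep or discard the move.
        if accept:
            self.w += step * self.delta

# Toy vertically-partitioned data: two agents, disjoint feature blocks.
n = 200
X = rng.normal(size=(n, 6))
y = (X @ rng.normal(size=6) > 0).astype(float)   # ground truth held by coordinator
agents = [Agent(X[:, :3]), Agent(X[:, 3:])]

step = 0.1
for t in range(500):
    z_cur = sum(a.score() for a in agents)        # coordinator aggregates partial scores
    z_new = sum(a.propose(step) for a in agents)
    accept = log_loss(y, z_new) < log_loss(y, z_cur)   # binary feedback
    for a in agents:
        a.feedback(accept, step)

print("final loss:", log_loss(y, sum(a.score() for a in agents)))
```

In this sketch the only downlink traffic is the coordinator's one-bit accept/reject signal per round, which is consistent with the abstract's emphasis on low communication load; the true update and feedback rules in the paper may differ.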

Details

Original language: English
Title of host publication: INFOCOM 2019 - IEEE Conference on Computer Communications Workshops, INFOCOM WKSHPS 2019
Publication status: Published - 1 Apr 2019
MoE publication type: A4 Article in a conference publication
Event: IEEE Conference on Computer Communications - Paris, France
Duration: 29 Apr 2019 - 2 May 2019

Publication series

Name: IEEE Conference on Computer Communications
ISSN (Print): 0743-166X

Conference

Conference: IEEE Conference on Computer Communications
Abbreviated title: INFOCOM
Country: France
City: Paris
Period: 29/04/2019 - 02/05/2019

Research areas

• distributed features, distributed optimization, gradient descent, logistic regression
