Two-stream part-based deep representation for human attribute recognition

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Details

Original language: English
Title of host publication: Proceedings - 2018 International Conference on Biometrics, ICB 2018
Publisher: Institute of Electrical and Electronics Engineers
Pages: 90-97
Number of pages: 8
ISBN (Electronic): 9781538642856
State: Published - 13 Jul 2018
MoE publication type: A4 Article in a conference publication
Event: International Conference on Biometrics - Gold Coast, Australia
Duration: 20 Feb 2018 - 23 Feb 2018
Conference number: 11

Conference

Conference: International Conference on Biometrics
Abbreviated title: ICB
Country: Australia
City: Gold Coast
Period: 20/02/2018 - 23/02/2018

Research units

  • Linköping University

Abstract

Recognizing human attributes in unconstrained environments is a challenging computer vision problem. State-of-the-art approaches to human attribute recognition are based on convolutional neural networks (CNNs). The de facto practice when training these CNNs on a large labeled image dataset is to take the RGB pixel values of an image as input to the network. In this work, we propose a two-stream part-based deep representation for human attribute classification. Besides the standard RGB stream, we train a deep network on mapped coded images with explicit texture information, which complements the standard RGB deep model. To integrate knowledge of human body parts, we employ deformable part-based models together with our two-stream deep model. Experiments are performed on the challenging Human Attributes (HAT-27) dataset, consisting of 27 different human attributes. Our results clearly show that (a) the two-stream deep network provides a consistent gain in performance over the standard RGB model and (b) the attribute classification results are further improved with our two-stream part-based deep representations, leading to state-of-the-art results.
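To make the two-stream idea concrete, below is a minimal Python/PyTorch sketch, not the authors' implementation, of a late-fusion two-stream attribute classifier. It assumes the texture stream consumes a single-channel mapped coded image (e.g. an LBP-style texture code) of the same crop fed to the RGB stream, and that the two streams are fused by averaging per-attribute logits; the class names (StreamCNN, TwoStreamAttributeNet), the tiny backbone, and the fusion rule are all illustrative assumptions. In the paper the crops would come from deformable part-based model detections; here whole-image tensors stand in.

import torch
import torch.nn as nn

class StreamCNN(nn.Module):
    """Small stand-in backbone; the paper uses deeper CNNs (hypothetical design)."""
    def __init__(self, in_channels: int, num_attributes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),      # global average pooling to a 64-d vector
        )
        self.classifier = nn.Linear(64, num_attributes)

    def forward(self, x):
        f = self.features(x).flatten(1)   # (N, 64)
        return self.classifier(f)         # one logit per attribute

class TwoStreamAttributeNet(nn.Module):
    def __init__(self, num_attributes: int = 27):   # HAT-27 has 27 attributes
        super().__init__()
        self.rgb_stream = StreamCNN(3, num_attributes)      # standard RGB input
        self.texture_stream = StreamCNN(1, num_attributes)  # mapped coded image (1-channel assumed)

    def forward(self, rgb, coded):
        # Late fusion by averaging the per-attribute logits of both streams
        # (one plausible fusion choice; the abstract does not specify the rule).
        return 0.5 * (self.rgb_stream(rgb) + self.texture_stream(coded))

if __name__ == "__main__":
    net = TwoStreamAttributeNet()
    rgb = torch.randn(2, 3, 128, 128)    # RGB crop (e.g. a part box from a DPM)
    coded = torch.randn(2, 1, 128, 128)  # texture-coded version of the same crop
    logits = net(rgb, coded)
    print(logits.shape)                  # torch.Size([2, 27])

Since attributes are not mutually exclusive, such a model would typically be trained with a per-attribute binary loss (e.g. BCEWithLogitsLoss) rather than softmax cross-entropy.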

Research areas

  • Deep learning, Human attribute recognition, Part-based representation

ID: 27193725