Elastic GPU processing for streaming video data

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

Abstract

We study the use of multiple GPU devices to accelerate the processing of streaming data. The amount of traffic on the Internet is rising rapidly, and much of this growth comes from media streams that need to be processed. The global growth rate of processing capacity, however, is lagging behind. We see Fog computing, i.e., processing capacity attached directly to the nodes of the network, as one way to address this problem. In this paper, we present our approach for attaching scalable processing capacity to network nodes using multiple general-purpose accelerators. To show the viability of the approach, we present simulations and measurements of a setup in which multiple GPUs are attached to an NPU to accelerate the processing of video streams. We show that GPU virtualization and communication batching can be used to achieve scalable processing.
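The abstract only names the techniques; the minimal CUDA sketch below is not from the paper and merely illustrates the communication-batching idea it mentions: several video frames are copied to the GPU in a single transfer and processed with one kernel launch, so per-call overhead is amortized over the whole batch. The frame dimensions, batch size, and the toy invertBatch kernel are hypothetical placeholders, not the authors' setup.

// Illustrative sketch only: batch several frames into one host-to-device
// transfer and one kernel launch instead of per-frame round trips.
#include <cuda_runtime.h>
#include <cstdio>

#define WIDTH  1920
#define HEIGHT 1080
#define BATCH  8                          // frames sent to the GPU per transfer (assumed value)
#define FRAME_PIXELS ((size_t)WIDTH * HEIGHT)

// Toy per-pixel operation standing in for real video processing.
__global__ void invertBatch(unsigned char *pixels, size_t total)
{
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < total)
        pixels[i] = 255 - pixels[i];
}

int main(void)
{
    size_t batchBytes = (size_t)BATCH * FRAME_PIXELS;

    unsigned char *h_frames, *d_frames;
    cudaMallocHost((void **)&h_frames, batchBytes);   // pinned memory for fast DMA
    cudaMalloc((void **)&d_frames, batchBytes);

    // Dummy frame data, standing in for frames arriving at the network node.
    for (size_t i = 0; i < batchBytes; ++i)
        h_frames[i] = (unsigned char)(i & 0xFF);

    // One transfer and one launch cover the whole batch, amortizing
    // per-call overhead across BATCH frames.
    cudaMemcpy(d_frames, h_frames, batchBytes, cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks  = (int)((batchBytes + threads - 1) / threads);
    invertBatch<<<blocks, threads>>>(d_frames, batchBytes);

    cudaMemcpy(h_frames, d_frames, batchBytes, cudaMemcpyDeviceToHost);
    cudaDeviceSynchronize();

    printf("processed %d frames in one batched round trip\n", BATCH);

    cudaFree(d_frames);
    cudaFreeHost(h_frames);
    return 0;
}

In a streaming setting, this pattern would typically be combined with CUDA streams and double buffering so that the transfer of the next batch overlaps with processing of the current one.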

Original language: English
Title of host publication: 2015 IEEE Global Conference on Signal and Information Processing, GlobalSIP 2015
Publisher: IEEE
Pages: 1260-1264
Number of pages: 5
ISBN (Print): 9781479975914
DOIs
Publication status: Published - 23 Feb 2016
MoE publication type: A4 Article in a conference publication
Event: IEEE Global Conference on Signal and Information Processing - Orlando, United States
Duration: 13 Dec 2015 - 16 Dec 2015
http://2015.ieeeglobalsip.org/index.html

Conference

Conference: IEEE Global Conference on Signal and Information Processing
Abbreviated title: GlobalSIP
Country/Territory: United States
City: Orlando
Period: 13/12/2015 - 16/12/2015
Internet address: http://2015.ieeeglobalsip.org/index.html
