Elastic GPU processing for streaming video data

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review


We study using multiple GPU devices to accelerate processing of streaming data. The amount of traffic on the Internet is rising rapidly, much of it due to media streams that need to be processed, while the global growth rate of processing capacity lags behind. We see Fog computing, i.e., processing capacity attached directly to the nodes of the network, as one way to address this problem. In this paper, we present our approach of attaching scalable processing capacity to network nodes using multiple general-purpose accelerators. To show the viability of our approach, we present simulations and measurements for a setup where multiple GPUs are attached to an NPU to accelerate processing of video streams. We show that GPU virtualization and communication batching can be used to achieve scalable processing.
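The communication batching mentioned in the abstract can be illustrated with a simple cost model (a hypothetical sketch, not the paper's implementation or measured numbers): each host-to-GPU transfer pays a fixed setup latency plus a per-byte copy cost, so sending frames in batches amortizes the fixed cost. The constants and function below are illustrative assumptions.

```python
# Hypothetical cost model for communication batching: each transfer to the
# GPU pays a fixed setup latency plus a per-kilobyte copy cost, so grouping
# video frames into batches amortizes the fixed overhead. All constants are
# assumed values for illustration, not measurements from the paper.

FIXED_LATENCY_US = 10.0   # assumed per-transfer setup cost (microseconds)
PER_KB_US = 0.5           # assumed per-kilobyte copy cost (microseconds)

def transfer_cost_us(frames, batch_size, frame_kb=64):
    """Total transfer time for `frames` frames sent in batches of `batch_size`."""
    full, rem = divmod(frames, batch_size)
    n_transfers = full + (1 if rem else 0)
    # Fixed cost is paid once per transfer; copy cost is paid per frame.
    return n_transfers * FIXED_LATENCY_US + frames * frame_kb * PER_KB_US

unbatched = transfer_cost_us(1000, batch_size=1)   # one transfer per frame
batched = transfer_cost_us(1000, batch_size=32)    # 32 frames per transfer
print(f"unbatched: {unbatched:.0f} us, batched (32): {batched:.0f} us")
# → unbatched: 42000 us, batched (32): 32320 us
```

Under this model the copy cost is identical in both cases; batching only reduces how often the fixed per-transfer latency is paid, which is the effect that makes throughput scale with added GPUs.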

Original language: English
Title of host publication: 2015 IEEE Global Conference on Signal and Information Processing, GlobalSIP 2015
Number of pages: 5
ISBN (Print): 9781479975914
Publication status: Published - 23 Feb 2016
MoE publication type: A4 Article in a conference publication
Event: IEEE Global Conference on Signal and Information Processing - Orlando, United States
Duration: 13 Dec 2015 – 16 Dec 2015


Conference: IEEE Global Conference on Signal and Information Processing
Abbreviated title: GlobalSIP
Country/Territory: United States


