Real-time memory efficient data redundancy removal algorithm

Vikas K. Garg, Ankur Narang, Souvik Bhattacherjee

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review

4 Citations (Scopus)

Abstract

Data-intensive computing has become a central theme in the research community and industry. There is an ever-growing need to process and analyze massive amounts of data from diverse sources such as telecom call data records, telescope imagery, online transaction records, web pages, stock markets, medical records (monitoring critical health conditions of patients), climate warning systems, and so on. Removing redundancy in the data is an important problem, as it improves resource and compute efficiency for downstream processing of massive (1 billion to 10 billion record) datasets. In application domains such as information retrieval (IR), stock markets, and telecom, there is a strong need for real-time data redundancy removal (referred to as DRR) on enormous amounts of data flowing at rates of 1 GB/s or more. Real-time, scalable data redundancy removal on massive datasets is a challenging problem. We present the design of a novel parallel data redundancy removal algorithm for both in-memory and disk-based execution. We also develop a queueing-theoretic analysis to optimize the throughput of our parallel algorithm on multi-core architectures. For 500 million records, our parallel algorithm performs complete de-duplication in 255 s on a 16-core Intel Xeon 5570 architecture with in-memory execution, giving a throughput of 2 million records/s. For 6 billion records, our parallel algorithm performs complete de-duplication in less than 4.5 hours using 6 cores of an Intel Xeon 5570 with disk-based execution, giving a throughput of around 370,000 records/s. To the best of our knowledge, this is the highest real-time throughput for data redundancy removal on such massive datasets. We also demonstrate the scalability of our algorithm with an increasing number of cores and growing data volumes.
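For context, the reported figures are internally consistent: 500 million records in 255 s is roughly 500,000,000 / 255 ≈ 1.96 million records/s, and 6 billion records in 4.5 hours is roughly 6,000,000,000 / 16,200 ≈ 370,000 records/s. The paper's parallel, queueing-optimized pipeline is not reproduced here; the sketch below is only a minimal, single-threaded Python illustration of Bloom-filter-based duplicate detection, the memory-efficient membership test listed among the keywords. All class, function, and parameter names (BloomFilter, might_contain, num_bits, num_hashes, deduplicate) are illustrative assumptions, not the authors' implementation.

```python
# Minimal single-threaded sketch of Bloom-filter-based de-duplication.
# This is NOT the paper's parallel algorithm; it only illustrates the core
# approximate membership test that keeps memory usage bounded.
# All names and parameters are illustrative assumptions.

import hashlib


class BloomFilter:
    def __init__(self, num_bits=1 << 24, num_hashes=4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8)  # 2 MB for 16M bits

    def _positions(self, record):
        # Derive k bit positions from one digest via double hashing.
        digest = hashlib.sha1(record.encode("utf-8")).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big") | 1
        for i in range(self.num_hashes):
            yield (h1 + i * h2) % self.num_bits

    def add(self, record):
        for pos in self._positions(record):
            self.bits[pos >> 3] |= 1 << (pos & 7)

    def might_contain(self, record):
        return all(self.bits[pos >> 3] & (1 << (pos & 7))
                   for pos in self._positions(record))


def deduplicate(records):
    """Yield records not seen before, up to the Bloom filter's small,
    tunable false-positive rate (which may drop a few unique records)."""
    bf = BloomFilter()
    for rec in records:
        if not bf.might_contain(rec):
            bf.add(rec)
            yield rec


if __name__ == "__main__":
    stream = ["cdr-001", "cdr-002", "cdr-001", "cdr-003", "cdr-002"]
    print(list(deduplicate(stream)))  # -> ['cdr-001', 'cdr-002', 'cdr-003']
```

A Bloom filter trades a small false-positive rate for constant memory, which is what makes approximate de-duplication of billion-record streams feasible; the paper additionally parallelizes the work across cores and supports disk-based execution when the data exceeds main memory.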

Original language: English
Title of host publication: CIKM'10 - Proceedings of the 19th International Conference on Information and Knowledge Management and Co-located Workshops
Pages: 1259-1268
Number of pages: 10
DOIs
Publication status: Published - 2010
MoE publication type: A4 Conference publication
Event: ACM International Conference on Information and Knowledge Management - Toronto, Canada
Duration: 26 Oct 2010 – 30 Oct 2010
Conference number: 19

Conference

Conference: ACM International Conference on Information and Knowledge Management
Abbreviated title: CIKM
Country/Territory: Canada
City: Toronto
Period: 26/10/2010 – 30/10/2010

Keywords

  • Bloom filter
  • Data and knowledge mining
  • Data de-duplication
  • High performance computing
  • Queueing theory
