TY - JOUR
T1 - Asymptotically Optimal Online Caching on Multiple Caches with Relaying and Bypassing
AU - Tan, Haisheng
AU - Jiang, Shaofeng H.C.
AU - Han, Zhenhua
AU - Li, Mingxia
N1 - Funding Information:
Manuscript received June 25, 2020; revised December 16, 2020 and April 2, 2021; accepted April 3, 2021; approved by IEEE/ACM TRANSACTIONS ON NETWORKING Editor B. Ji. Date of publication May 10, 2021; date of current version August 18, 2021. This work was supported in part by the National Key Research and Development Program of China under Grant 2018YFB0803400, in part by the NSFC under Grant 61772489, in part by the Key Research Program of Frontier Sciences (CAS) under Grant QYZDY-SSW-JSC002, in part by the Project of "FANet: PCL Future Greater-Bay Area Network Facilities for Large-Scale Experiments and Applications" under Grant LZC001, and in part by the Fundamental Research Funds for the Central Universities in China. A preliminary version of this work, titled "Camul: Online Caching on Multiple Caches with Relaying and Bypassing," was published in Proc. of the 38th Annual IEEE International Conference on Computer Communications (INFOCOM), Paris, France, April 2019 [1]. (Corresponding author: Shaofeng H.-C. Jiang.) Haisheng Tan and Mingxia Li are with the LINKE Laboratory and the CAS Key Laboratory of Wireless-Optical Communications, University of Science and Technology of China (USTC), Hefei 230027, China (e-mail: hstan@ustc.edu.cn; limingxia1999@gmail.com).
Publisher Copyright:
© 1993-2012 IEEE.
PY - 2021/8
Y1 - 2021/8
N2 - Motivated by practical scenarios in areas such as Mobile Edge Computing (MEC) and Content Delivery Networks (CDNs), we study online file caching on multiple caches, where, upon a cache miss, a file request might be relayed to other caches or bypassed directly to the memory. We can also choose to fetch files from the memory into caches, conducting file replacement if necessary. We take the relaying, bypassing, and fetching costs into consideration altogether. We first show the inherent difficulty of the problem even when the online requests have uniform costs. We propose an $O(\log K)$-competitive randomized online multiple caching algorithm (named Camul) and an $O(K)$-competitive deterministic algorithm (named Camul-Det), where $K$ is the total number of slots in all caches. Both achieve asymptotically optimal competitive ratios. Moreover, our algorithms can be implemented efficiently such that each request is processed in amortized constant time. We conduct extensive simulations on production data traces from Google and a benchmark workload from Yahoo. The results show that our algorithms dramatically outperform state-of-the-art schemes, reducing the total cost by 85% and 43%, respectively, compared with important baselines and their strengthened versions with request relaying. More importantly, Camul achieves this low total cost without sacrificing other performance measures, e.g., the hit ratio, and performs consistently well across various experimental parameter settings.
AB - Motivated by practical scenarios in areas such as Mobile Edge Computing (MEC) and Content Delivery Networks (CDNs), we study online file caching on multiple caches, where, upon a cache miss, a file request might be relayed to other caches or bypassed directly to the memory. We can also choose to fetch files from the memory into caches, conducting file replacement if necessary. We take the relaying, bypassing, and fetching costs into consideration altogether. We first show the inherent difficulty of the problem even when the online requests have uniform costs. We propose an $O(\log K)$-competitive randomized online multiple caching algorithm (named Camul) and an $O(K)$-competitive deterministic algorithm (named Camul-Det), where $K$ is the total number of slots in all caches. Both achieve asymptotically optimal competitive ratios. Moreover, our algorithms can be implemented efficiently such that each request is processed in amortized constant time. We conduct extensive simulations on production data traces from Google and a benchmark workload from Yahoo. The results show that our algorithms dramatically outperform state-of-the-art schemes, reducing the total cost by 85% and 43%, respectively, compared with important baselines and their strengthened versions with request relaying. More importantly, Camul achieves this low total cost without sacrificing other performance measures, e.g., the hit ratio, and performs consistently well across various experimental parameter settings.
KW - edge computing
KW - online algorithm
KW - Online caching
UR - http://www.scopus.com/inward/record.url?scp=85105845726&partnerID=8YFLogxK
U2 - 10.1109/TNET.2021.3077115
DO - 10.1109/TNET.2021.3077115
M3 - Article
AN - SCOPUS:85105845726
VL - 29
SP - 1841
EP - 1852
JO - IEEE/ACM Transactions on Networking
JF - IEEE/ACM Transactions on Networking
SN - 1063-6692
IS - 4
M1 - 9426958
ER -