Parameterized Approximation Schemes for Clustering with General Norm Objectives

Fateme Abbasi, Sandip Banerjee, Jarosław Byrka, Parinya Chalermsook, Ameet Gadekar, Kamyar Khodamoradi, Dániel Marx, Roohani Sharma, Joachim Spoerhase

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review


This paper considers the well-studied algorithmic regime of designing a $(1+\epsilon)$-approximation algorithm for a k-clustering problem that runs in time $f(k,\epsilon) \cdot \mathrm{poly}(n)$ (sometimes called an efficient parameterized approximation scheme, or EPAS for short¹). Notable results of this kind include EPASes in the high-dimensional Euclidean setting for k-center [Bădoiu, Har-Peled, Indyk; STOC’02] as well as for k-median and k-means [Kumar, Sabharwal, Sen; J. ACM 2010]. Our main contribution is a clean and simple EPAS that settles more than ten clustering problems (across multiple well-studied objectives as well as metric spaces) and unifies well-known EPASes. More specifically, our algorithm gives EPASes in the following settings:

• Clustering objectives: k-means, k-center, k-median, priority k-center, $\ell$-centrum, ordered k-median, socially fair k-median (aka robust k-median), or any other objective that can be formulated as minimizing a monotone (not necessarily symmetric!) norm of the distances of the points from the solution (generalizing the symmetric formulation introduced by Chakrabarty and Swamy [STOC’19]).

• Metric spaces: continuous high-dimensional Euclidean spaces, metrics of bounded doubling dimension, bounded-treewidth metrics, and planar metrics.

Prior to our results, EPASes were known only for vanilla clustering objectives (k-means, k-median, and k-center), and each such algorithm is tailored to a specific input metric and clustering objective (e.g., the EPASes for k-means and k-center in $\mathbb{R}^{d}$ are conceptually very different). In contrast, our algorithmic framework applies to a wide range of well-studied objective functions in a uniform way, and is (almost) entirely oblivious to any specific metric structure, yet is able to effectively exploit those unknown structures. In particular, our algorithm is not based on the (metric- and objective-specific) technique of coresets.
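As an illustrative aside (not the paper's algorithm), the monotone-norm view of the objectives listed above can be sketched in a few lines: each objective is a monotone function of the vector of point-to-nearest-center distances. All helper names below are our own, and the 1-D Euclidean setting is chosen only for brevity.

```python
# Illustrative sketch, not the paper's algorithm: several classic clustering
# objectives arise as monotone norms (or monotone functions) applied to the
# vector of point-to-nearest-center distances.

def distance_vector(points, centers):
    """Distance from each point to its nearest center (1-D Euclidean here)."""
    return [min(abs(p - c) for c in centers) for p in points]

def k_center_cost(d):
    return max(d)                      # L_infinity norm: maximum distance

def k_median_cost(d):
    return sum(d)                      # L_1 norm: sum of distances

def k_means_cost(d):
    # Squared L_2; minimizing it is equivalent to minimizing the
    # (monotone) L_2 norm of the distance vector.
    return sum(x * x for x in d)

def ell_centrum_cost(d, ell):
    # Sum of the ell largest distances -- a monotone (top-ell) norm;
    # ell = 1 recovers k-center, ell = len(d) recovers k-median.
    return sum(sorted(d, reverse=True)[:ell])

points = [0.0, 1.0, 2.0, 10.0]
centers = [1.0, 10.0]                  # a candidate solution with k = 2
d = distance_vector(points, centers)   # [1.0, 0.0, 1.0, 0.0]
```

On this toy instance the same distance vector is scored differently by each norm, which is the sense in which the framework treats the objectives uniformly.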
Key to our analysis is a new concept that we call bounded $\epsilon$-scatter dimension: an intrinsic complexity measure of a metric space that is a relaxation of the standard notion of bounded doubling dimension (often used as a source of algorithmic tractability for geometric problems). Our main technical result shows that two conditions are essentially sufficient for our algorithm to yield an EPAS on the input metric M for any clustering objective:

(i) the objective is described by a monotone norm, and
(ii) the $\epsilon$-scatter dimension of M is upper bounded by a function of $\epsilon$.

¹ Quick remarks: (i) an EPAS is not comparable to polynomial-time approximation schemes (PTAS); (ii) before the term EPAS was coined, some researchers called this type of approximation scheme a PTAS or simply an approximation scheme (in clustering, it is often assumed that k is small) [1], [2]; and (iii) both EPAS and PTAS are implied by the existence of an efficient polynomial-time approximation scheme (EPTAS).
Original language: English
Title of host publication: 2023 IEEE 64th Annual Symposium on Foundations of Computer Science (FOCS)
Number of pages: 23
ISBN (Electronic): 9798350318944
ISBN (Print): 979-8-3503-1895-1
Publication status: Published - 2023
MoE publication type: A4 Conference publication
Event: Annual Symposium on Foundations of Computer Science - Santa Cruz, United States
Duration: 6 Nov 2023 – 9 Nov 2023
Conference number: 64


Conference: Annual Symposium on Foundations of Computer Science
Abbreviated title: FOCS
Country/Territory: United States
City: Santa Cruz


  • Measurement
  • Computer science
  • Atmospheric measurements
  • Clustering algorithms
  • Extraterrestrial measurements
  • Approximation algorithms
  • Particle measurements


