Pre-reduction graph products: Hardnesses of properly learning DFAs and approximating EDP on DAGs

Parinya Chalermsook, Bundit Laekhanukit, Danupon Nanongkai

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

4 Citations (Scopus)

Abstract

The study of graph products is a major research topic and typically concerns the term f(G ∗ H), e.g., to show that f(G ∗ H) = f(G)f(H). In this paper, we study graph products in a non-standard form f(R[G ∗ H]), where R is a 'reduction', a transformation of any graph into an instance of an intended optimization problem. We resolve some open problems as applications. The first problem is minimum consistent deterministic finite automaton (DFA). We show a tight n^{1-ε}-approximation hardness, improving the n^{1/14-ε} hardness of [Pitt and Warmuth, STOC 1989 and JACM 1993], where n is the sample size. (In fact, we also give improved hardnesses for the cases of 'acyclic' DFA and NFA.) Due to Board and Pitt [Theoretical Computer Science 1992], this implies the hardness of properly learning DFAs assuming NP ≠ RP (the weakest possible assumption). This affirmatively answers an open problem raised 25 years ago in the paper of Pitt and Warmuth and the survey of Pitt [AII 1989]. Prior to our results, this hardness only followed from the stronger hardness of 'improperly' learning DFAs, which requires stronger assumptions, i.e., either a cryptographic or an average-case complexity assumption [Kearns and Valiant, STOC 1989 and JACM 1994; Daniely et al., STOC 2014]. The second problem is edge-disjoint paths (EDP) on directed acyclic graphs (DAGs). This problem admits an O(√n)-approximation algorithm [Chekuri, Khanna, and Shepherd, Theory of Computing 2006] and a matching integrality gap, but so far only an n^{1/26-ε} hardness factor is known [Chuzhoy et al., STOC 2007]. (Here n denotes the number of vertices.) Our techniques give a tight n^{1/2-ε} hardness for EDP on DAGs, thus resolving its approximability status. As by-products of our techniques: (i) We give a tight hardness of packing vertex-disjoint k-cycles for large k, complementing [Guruswami and Lee, ECCC 2014] and matching [Krivelevich et al., SODA 2005 and ACM Transactions on Algorithms 2007]. (ii) We give an alternative (and perhaps simpler) proof for the hardness of properly learning DNF, CNF and intersection of halfspaces [Alekhnovich et al., FOCS 2004 and J. Comput. Syst. Sci. 2008]. Our new concept reduces the task of proving hardnesses to merely analyzing graph product inequalities, which are often as simple as textbook exercises. This concept was inspired by, and can be viewed as a generalization of, the 'graph product subadditivity' technique we previously introduced in SODA 2013. This more general concept might be useful in proving other hardness results as well.
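A concrete toy illustration of the multiplicativity f(G ∗ H) = f(G)f(H) mentioned in the abstract: for the lexicographic product G∘H, the clique number is multiplicative, ω(G∘H) = ω(G)·ω(H). The brute-force sketch below (illustrative only; the graph and function names are our own, and the paper's reductions involve far larger constructions) verifies this on a small example.

```python
from itertools import combinations

def clique_number(vertices, edges):
    # Brute force: largest k such that some k-subset of vertices is a clique.
    best = 1 if vertices else 0
    for k in range(2, len(vertices) + 1):
        for sub in combinations(vertices, k):
            if all(frozenset((a, b)) in edges for a, b in combinations(sub, 2)):
                best = k
    return best

def lexicographic_product(vg, eg, vh, eh):
    # Vertices are pairs (u, v); (u1,v1) ~ (u2,v2) iff u1 ~ u2 in G,
    # or u1 == u2 and v1 ~ v2 in H.
    vp = [(u, v) for u in vg for v in vh]
    ep = set()
    for (u1, v1), (u2, v2) in combinations(vp, 2):
        if frozenset((u1, u2)) in eg or (u1 == u2 and frozenset((v1, v2)) in eh):
            ep.add(frozenset([(u1, v1), (u2, v2)]))
    return vp, ep

# G = path on 3 vertices (omega = 2), H = triangle K3 (omega = 3)
vg, eg = [0, 1, 2], {frozenset((0, 1)), frozenset((1, 2))}
vh, eh = [0, 1, 2], {frozenset((0, 1)), frozenset((1, 2)), frozenset((0, 2))}
vp, ep = lexicographic_product(vg, eg, vh, eh)
print(clique_number(vg, eg), clique_number(vh, eh), clique_number(vp, ep))
# omega(G o H) equals omega(G) * omega(H) = 6 on this example
```

The paper's point is that, once a reduction R is applied *before* taking the product, proving a hardness factor reduces to verifying inequalities of exactly this multiplicative flavor.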

Original language: English
Title of host publication: Proceedings - Annual IEEE Symposium on Foundations of Computer Science, FOCS
Publisher: IEEE
Pages: 444-453
Number of pages: 10
ISBN (Electronic): 9781479965175
DOIs
Publication status: Published - 7 Dec 2014
MoE publication type: A4 Article in a conference publication
Event: Annual Symposium on Foundations of Computer Science - Philadelphia, United States
Duration: 18 Oct 2014 – 21 Oct 2014
Conference number: 55

Conference

Conference: Annual Symposium on Foundations of Computer Science
Abbreviated title: FOCS
Country/Territory: United States
City: Philadelphia
Period: 18/10/2014 – 21/10/2014

Keywords

  • graph product
  • Hardness of approximation
