• 1. Andersson, A.
KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
Tight bounds for searching a sorted array of strings. 2000. In: SIAM Journal on Computing (Print), ISSN 0097-5397, E-ISSN 1095-7111, Vol. 30, no. 5, p. 1552-1578. Article in journal (Refereed).

Given a k-character query string and an array of n strings arranged in lexicographical order, computing the rank of the query string among the n strings, or deciding whether it occurs in the array, requires the inspection of [GRAPHICS] characters in the worst case.
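The cost model in this abstract counts individual character inspections rather than whole-string comparisons. As a minimal illustration (not the paper's algorithm; all names are ours), a plain binary search can be instrumented to count inspected characters:

```python
def rank_with_char_count(strings, query):
    """Rank `query` in a sorted list, counting character inspections.

    A naive binary search re-reads strings from the first character on
    every comparison; the paper's bounds concern the best possible
    number of such inspections. Illustrative sketch only.
    """
    inspected = 0

    def less(s, t):
        nonlocal inspected
        for a, b in zip(s, t):
            inspected += 1
            if a != b:
                return a < b
        inspected += 1  # model the final length check after a shared prefix
        return len(s) < len(t)

    lo, hi = 0, len(strings)
    while lo < hi:
        mid = (lo + hi) // 2
        if less(strings[mid], query):
            lo = mid + 1
        else:
            hi = mid
    return lo, inspected

arr = sorted(["apple", "apricot", "banana", "cherry", "date"])
rank, cost = rank_with_char_count(arr, "avocado")
```

Here `rank` is 2 ("avocado" falls between "apricot" and "banana"), and `cost` records how many characters the search actually read.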

• 2. Andersson, G.
KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
A new way of using semidefinite programming with applications to linear equations mod p. 2001. In: Journal of Algorithms, ISSN 0196-6774, E-ISSN 1090-2678, Vol. 39, no. 2, p. 162-204. Article in journal (Refereed).

We introduce a new method of constructing approximation algorithms for combinatorial optimization problems using semidefinite programming. It consists of expressing each combinatorial object in the original problem as a constellation of vectors in the semidefinite program. When we apply this technique to systems of linear equations mod p with at most two variables in each equation, we can show that the problem is approximable within (1 - kappa (p))p, where kappa (p)> 0 for all p. Using standard techniques we also show that it is NP-hard to approximate the problem within a constant ratio, independent of p.

• 3. Aumann, Y.
KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
Linear-consistency testing. 2001. In: Journal of Computer and System Sciences (Print), ISSN 0022-0000, E-ISSN 1090-2724, Vol. 62, no. 4, p. 589-607. Article in journal (Refereed).

We extend the notion of linearity testing to the task of checking linear consistency of multiple functions. Informally, functions are linear if their graphs form straight lines on the plane. Two such functions are consistent if the lines have the same slope. We propose a variant of a test of M. Blum et al. (J. Comput. System Sci. 47 (1993), 549-595) to check the linear consistency of three functions f_1, f_2, f_3 mapping a finite Abelian group G to an Abelian group H: pick x, y ∈ G uniformly and independently at random and check whether f_1(x) + f_2(y) = f_3(x + y). We analyze this test for two cases: (1) G and H are arbitrary Abelian groups and (2) G = F_2^n and H = F_2. Questions bearing a close relationship to linear-consistency testing seem to have been implicitly considered in recent work on the construction of PCPs, in particular in the work of J. Hastad (in Proceedings of the Twenty-Ninth Annual ACM Symposium on Theory of Computing, El Paso, Texas, 4-6 May 1997, pp. 1-10). It is abstracted explicitly for the first time here. As an application of our results we give yet another new and tight characterization of NP, namely: for all ε > 0, NP = MIP_{1-ε,1/2}[O(log n), 3, 1]. That is, every language in NP has 3-prover 1-round proof systems in which the verifier tosses O(log n) coins and asks each of the three provers one question. The provers respond with one bit each, such that the verifier accepts instances of the language with probability 1 - ε and rejects non-instances with probability at least 1/2. Such a result is of some interest in the study of probabilistically checkable proofs.
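The three-function test stated in this abstract is easy to make concrete. A sketch over the cyclic group Z_p (our simplification; the paper treats general Abelian groups G and H):

```python
import random

random.seed(0)  # deterministic for illustration

def linear_consistency_test(f1, f2, f3, p, trials=200):
    # Accept iff f1(x) + f2(y) == f3(x + y) (mod p) on every sampled pair.
    for _ in range(trials):
        x, y = random.randrange(p), random.randrange(p)
        if (f1(x) + f2(y)) % p != f3((x + y) % p) % p:
            return False
    return True

p = 17
f1 = lambda x: (5 * x) % p          # slope 5, offset 0
f2 = lambda x: (5 * x + 3) % p      # slope 5, offset 3
f3 = lambda x: (5 * x + 3) % p      # slope 5, offset 0 + 3: consistent
g2 = lambda x: (2 * x) % p          # different slope: inconsistent
```

With `f1, f2, f3` the check always passes; substituting `g2` for `f2` makes almost every sampled pair fail, so the test rejects with overwhelming probability.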

• 4. Austrin, P.
KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
On the usefulness of predicates. 2012. In: 2012 IEEE 27th Annual Conference on Computational Complexity (CCC), IEEE, 2012, p. 53-63. Conference paper (Refereed).

Motivated by the pervasiveness of strong inapproximability results for Max-CSPs, we introduce a relaxed notion of an approximate solution of a Max-CSP. In this relaxed version, loosely speaking, the algorithm is allowed to replace the constraints of an instance by some other (possibly real-valued) constraints, and then only needs to satisfy as many of the new constraints as possible. To be more precise, we introduce the following notion of a predicate P being "useful" for a (real-valued) objective Q: given an almost satisfiable Max-P instance, there is an algorithm that beats a random assignment on the corresponding Max-Q instance applied to the same sets of literals. The standard notion of a nontrivial approximation algorithm for a Max-CSP with predicate P is exactly the same as saying that P is useful for P itself. We say that P is useless if it is not useful for any Q. Under the Unique Games Conjecture, we can give a complete and simple characterization of useless Max-CSPs defined by a predicate: such a Max-CSP is useless if and only if there is a pairwise independent distribution supported on the satisfying assignments of the predicate. It is natural to also consider the case when no negations are allowed in the CSP instance, and we derive a similar complete characterization (under the UGC) there as well. Finally, we also include some results and examples shedding additional light on the approximability of certain Max-CSPs.

• 5.
KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
On the Usefulness of Predicates. 2012. In: Computing Research Repository, Vol. abs/1204.5662. Article in journal (Refereed).

Motivated by the pervasiveness of strong inapproximability results for Max-CSPs, we introduce a relaxed notion of an approximate solution of a Max-CSP. In this relaxed version, loosely speaking, the algorithm is allowed to replace the constraints of an instance by some other (possibly real-valued) constraints, and then only needs to satisfy as many of the new constraints as possible. To be more precise, we introduce the following notion of a predicate P being "useful" for a (real-valued) objective Q: given an almost satisfiable Max-P instance, there is an algorithm that beats a random assignment on the corresponding Max-Q instance applied to the same sets of literals. The standard notion of a nontrivial approximation algorithm for a Max-CSP with predicate P is exactly the same as saying that P is useful for P itself. We say that P is useless if it is not useful for any Q. This turns out to be equivalent to the following pseudo-randomness property: given an almost satisfiable instance of Max-P, it is hard to find an assignment such that the induced distribution on k-bit strings defined by the instance is not essentially uniform. Under the Unique Games Conjecture, we give a complete and simple characterization of useful Max-CSPs defined by a predicate: such a Max-CSP is useless if and only if there is a pairwise independent distribution supported on the satisfying assignments of the predicate. It is natural to also consider the case when no negations are allowed in the CSP instance, and we derive a similar complete characterization (under the UGC) there as well. Finally, we also include some results and examples shedding additional light on the approximability of certain Max-CSPs.

• 6.
KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
Randomly Supported Independence and Resistance. 2009. In: STOC'09: Proceedings of the 2009 ACM Symposium on Theory of Computing, New York: Association for Computing Machinery, 2009, p. 483-492. Conference paper (Refereed).

We prove that for any positive integer k, there is a constant c_k such that a randomly selected set of c_k n^k log n Boolean vectors with high probability supports a balanced k-wise independent distribution. In the case of k <= 2, a more elaborate argument gives the stronger bound c_k n^k. Using a recent result by Austrin and Mossel, this shows that a predicate on t bits, chosen at random among predicates accepting c_2 t^2 input vectors, is, assuming the Unique Games Conjecture, likely to be approximation resistant. These results are close to tight: we show that there are other constants, c'_k, such that a randomly selected set of c'_k n^k points is unlikely to support a balanced k-wise independent distribution and, for some c > 0, a random predicate accepting ct^2/log t input vectors is non-trivially approximable with high probability. In a different application of the result of Austrin and Mossel we prove that, again assuming the Unique Games Conjecture, any predicate on t bits accepting at least (32/33)·2^t inputs is approximation resistant. The results extend from the Boolean domain to larger finite domains.

• 7.
KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS. Univ Toronto, Dept Comp Sci.
KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
Randomly supported independence and resistance. 2011. In: SIAM Journal on Computing (Print), ISSN 0097-5397, E-ISSN 1095-7111, Vol. 40, no. 1, p. 1-27. Article in journal (Refereed).

We prove that for any positive integers q and k there is a constant c_{q,k} such that a uniformly random set of c_{q,k} n^k log n vectors in [q]^n with high probability supports a balanced k-wise independent distribution. In the case of k <= 2, a more elaborate argument gives the stronger bound c_{q,k} n^k. Using a recent result by Austrin and Mossel, this shows that a predicate on t bits, chosen at random among predicates accepting c_{q,2} t^2 input vectors, is, assuming the Unique Games Conjecture, likely to be approximation resistant. These results are close to tight: we show that there are other constants, c'_{q,k}, such that a randomly selected set of cardinality c'_{q,k} n^k points is unlikely to support a balanced k-wise independent distribution and, for some c > 0, a random predicate accepting ct^2/log t input vectors is nontrivially approximable with high probability. In a different application of the result of Austrin and Mossel we prove that, again assuming the Unique Games Conjecture, any predicate on t Boolean inputs accepting at least (32/33)·2^t inputs is approximation resistant. The results extend from balanced distributions to arbitrary product distributions.

• 8.
KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
(2 + ε)-SAT Is NP-hard. 2014. In: Proceedings - Annual IEEE Symposium on Foundations of Computer Science, FOCS, 2014, p. 1-10. Conference paper (Refereed).

We prove the following hardness result for a natural promise variant of the classical CNF-satisfiability problem: given a CNF formula where each clause has width w and the guarantee that there exists an assignment satisfying at least g = ⌈w/2⌉ - 1 literals in each clause, it is NP-hard to find a satisfying assignment to the formula (one that sets at least one literal to true in each clause). On the other hand, when g = ⌈w/2⌉, it is easy to find a satisfying assignment via simple generalizations of the algorithms for 2-SAT. Viewing 2-SAT ∈ P as easiness of SAT when 1-in-2 literals are true in every clause, and NP-hardness of 3-SAT as intractability of SAT when 1-in-3 literals are true, our result shows, for any fixed ε > 0, the hardness of finding a satisfying assignment to instances of '(2 + ε)-SAT' where the density of satisfied literals in each clause is promised to exceed 1/(2 + ε). We also strengthen the results to prove that given a (2k + 1)-uniform hypergraph that can be 2-colored such that each edge has perfect balance (at most k + 1 vertices of either color), it is NP-hard to find a 2-coloring that avoids a monochromatic edge. In other words, a set system with discrepancy 1 is hard to distinguish from a set system with worst possible discrepancy.
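The "easy side" mentioned here (g = ⌈w/2⌉) generalizes the classical polynomial-time algorithms for 2-SAT. A standard implication-graph 2-SAT solver (the textbook Aspvall-Plass-Tarjan approach, not code from the paper) can be sketched as:

```python
def solve_2sat(n, clauses):
    """2-SAT via implication graph + Kosaraju SCC.

    Variables are 1..n; a literal is +v or -v; a clause is a pair of
    literals. Returns a satisfying assignment as {var: bool}, or None.
    """
    N = 2 * n
    adj = [[] for _ in range(N)]
    radj = [[] for _ in range(N)]

    def idx(lit):                     # literal -> node index
        return 2 * (abs(lit) - 1) + (0 if lit > 0 else 1)

    def neg(i):                       # node of the negated literal
        return i ^ 1

    for a, b in clauses:              # (a or b) == (!a -> b) and (!b -> a)
        adj[neg(idx(a))].append(idx(b))
        adj[neg(idx(b))].append(idx(a))
        radj[idx(b)].append(neg(idx(a)))
        radj[idx(a)].append(neg(idx(b)))

    # Pass 1: iterative DFS to compute finishing order.
    seen, order = [False] * N, []
    for s in range(N):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, 0)]
        while stack:
            node, ptr = stack.pop()
            if ptr < len(adj[node]):
                stack.append((node, ptr + 1))
                nxt = adj[node][ptr]
                if not seen[nxt]:
                    seen[nxt] = True
                    stack.append((nxt, 0))
            else:
                order.append(node)

    # Pass 2: SCCs on the reverse graph; ids come out in topological order.
    comp, c = [-1] * N, 0
    for s in reversed(order):
        if comp[s] != -1:
            continue
        comp[s], stack = c, [s]
        while stack:
            node = stack.pop()
            for nxt in radj[node]:
                if comp[nxt] == -1:
                    comp[nxt] = c
                    stack.append(nxt)
        c += 1

    out = {}
    for v in range(1, n + 1):
        if comp[idx(v)] == comp[neg(idx(v))]:
            return None               # v and !v in one SCC: unsatisfiable
        out[v] = comp[idx(v)] > comp[neg(idx(v))]  # later in topo order wins
    return out

model = solve_2sat(3, [(1, 2), (-1, 3), (-2, -3)])
```

For the sample formula (x1 ∨ x2) ∧ (¬x1 ∨ x3) ∧ (¬x2 ∨ ¬x3) the solver returns a satisfying assignment; for a contradictory formula such as (x1) ∧ (¬x1) it returns None.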

• 9.
KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
On the power of many one-bit provers. 2013. In: ITCS 2013 - Proceedings of the 2013 ACM Conference on Innovations in Theoretical Computer Science, New York: Association for Computing Machinery (ACM), 2013, p. 215-220. Conference paper (Refereed).

We study the class of languages, denoted MIP[k, 1-ε, s], which have k-prover games where each prover just sends a single bit, with completeness 1-ε and soundness error s. For the case that k = 1 (i.e., for the case of interactive proofs), Goldreich, Vadhan and Wigderson (Computational Complexity '02) demonstrate that SZK exactly characterizes languages having 1-bit proof systems with "non-trivial" soundness (i.e., 1/2 < s ≤ 1-2ε). We demonstrate that for the case that k ≥ 2, 1-bit k-prover games exhibit a significantly richer structure: • (Folklore) When s ≤ 1/2^k - ε, MIP[k, 1-ε, s] = BPP; • When 1/2^k + ε ≤ s < 2/2^k - ε, MIP[k, 1-ε, s] = SZK; • When s ≥ 2/2^k + ε, AM ⊆ MIP[k, 1-ε, s]; • For s ≤ 0.62k/2^k and sufficiently large k, MIP[k, 1-ε, s] ⊆ EXP; • For s ≥ 2k/2^k, MIP[k, 1-ε, s] = NEXP. As such, 1-bit k-prover games yield a natural "quantitative" approach to relating complexity classes such as BPP, SZK, AM, EXP, and NEXP. We leave open the question of whether a more fine-grained hierarchy (between AM and NEXP) can be established for the case when s ≥ 2/2^k + ε.

• 10. Barak, B.
KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
Making the long code shorter. 2015. In: SIAM Journal on Computing (Print), ISSN 0097-5397, E-ISSN 1095-7111, Vol. 44, no. 5, p. 1287-1324. Article in journal (Refereed).

The long code is a central tool in hardness of approximation, especially in questions related to the Unique Games Conjecture. We construct a new code that is exponentially more efficient but can still be used in many of these applications. Using the new code we obtain exponential improvements over several known results, including the following: (1) For any ε > 0, we show the existence of an n-vertex graph G where every set of o(n) vertices has expansion 1-ε but G's adjacency matrix has more than exp(log^δ n) eigenvalues larger than 1-ε, where δ depends only on ε. This answers an open question of Arora, Barak, and Steurer [Proceedings of the 2010 IEEE 51st Annual Symposium on Foundations of Computer Science, 2010, pp. 563-572], who asked whether one can improve over the noise graph on the Boolean hypercube that has poly(log n) such eigenvalues. (2) A gadget that reduces Unique Games instances with linear constraints modulo K into instances with alphabet k with a blowup of K^polylog(K), improving over the previously known gadget with blowup of 2^Ω(K). (3) An n-variable integrality gap for Unique Games that survives exp(poly(log log n)) rounds of the semidefinite programming version of the Sherali-Adams hierarchy, improving on the previously known bound of poly(log log n). We show a connection between the local testability of linear codes and small-set expansion in certain related Cayley graphs and use this connection to derandomize the noise graph on the Boolean hypercube.

• 11. Barak, Boaz
KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
Making the Long Code Shorter. 2012. In: Foundations of Computer Science (FOCS), 2012 IEEE 53rd Annual Symposium on, IEEE Computer Society, 2012, p. 370-379. Conference paper (Refereed).

The long code is a central tool in hardness of approximation, especially in questions related to the Unique Games Conjecture. We construct a new code that is exponentially more efficient, but can still be used in many of these applications. Using the new code we obtain exponential improvements over several known results, including the following: 1) For any ε > 0, we show the existence of an n-vertex graph G where every set of o(n) vertices has expansion 1-ε, but G's adjacency matrix has more than exp(log^δ n) eigenvalues larger than 1-ε, where δ depends only on ε. This answers an open question of Arora, Barak and Steurer (FOCS 2010), who asked whether one can improve over the noise graph on the Boolean hypercube that has poly(log n) such eigenvalues. 2) A gadget that reduces unique games instances with linear constraints modulo K into instances with alphabet k with a blowup of K^polylog(K), improving over the previously known gadget with blowup of 2^Ω(K). 3) An n-variable integrality gap for Unique Games that survives exp(poly(log log n)) rounds of the SDP + Sherali-Adams hierarchy, improving on the previously known bound of poly(log log n). We show a connection between the local testability of linear codes and small-set expansion in certain related Cayley graphs, and use this connection to derandomize the noise graph on the Boolean hypercube.

• 12. Blais, Eric
KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
On DNF Approximators for Monotone Boolean Functions. 2014. In: Automata, Languages, and Programming: 41st International Colloquium, ICALP 2014, Copenhagen, Denmark, July 8-11, 2014, Proceedings, Part I, Springer Berlin/Heidelberg, 2014, Vol. 8572, p. 235-246. Conference paper (Refereed).

We study the complexity of approximating monotone Boolean functions with disjunctive normal form (DNF) formulas, exploring two main directions. First, we construct DNF approximators for arbitrary monotone functions achieving one-sided error: we show that every monotone f can be ε-approximated by a DNF g of size 2^{n-Ω(√n)} satisfying g(x) ≤ f(x) for all x ∈ {0,1}^n. This is the first non-trivial universal upper bound even for DNF approximators incurring two-sided error. Next, we study the power of negations in DNF approximators for monotone functions. We exhibit monotone functions for which non-monotone DNFs perform better than monotone ones, giving separations with respect to both DNF size and width. Our results, when taken together with a classical theorem of Quine [1], highlight an interesting contrast between approximation and exact computation in the DNF complexity of monotone functions, and they add to a line of work on the surprising role of negations in monotone complexity [2,3,4].

• 13. Boppana, R.
KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
Bounded Independence vs. Moduli. 2016. In: Leibniz International Proceedings in Informatics, LIPIcs, Dagstuhl Publishing, 2016. Conference paper (Refereed).

Let k = k(n) be the largest integer such that there exists a k-wise uniform distribution over {0,1}^n that is supported on the set S_m := {x ∈ {0,1}^n : Σ_i x_i ≡ 0 mod m}, where m is any integer. We show that Ω(n/(m^2 log m)) ≤ k ≤ 2n/m + 2. For k = O(n/m) we also show that any k-wise uniform distribution puts probability mass at most 1/m + 1/100 over S_m. For any fixed odd m there is k ≥ (1-Ω(1))n such that any k-wise uniform distribution lands in S_m with probability exponentially close to |S_m|/2^n; and this result is false for any even m.

• 14. Bukh, Boris
KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
An Improved Bound on the Fraction of Correctable Deletions. 2017. In: IEEE Transactions on Information Theory, ISSN 0018-9448, E-ISSN 1557-9654, Vol. 63, no. 1, p. 93-103. Article in journal (Refereed).

We consider codes over fixed alphabets against worst-case symbol deletions. For any fixed k ≥ 2, we construct a family of codes over an alphabet of size k with positive rate, which allow efficient recovery from a worst-case deletion fraction approaching 1 - 2/(k + √k). In particular, for binary codes, we are able to recover from a fraction of deletions approaching 1/(√2 + 1) = √2 - 1 ≈ 0.414. Previously, even non-constructively, the largest deletion fraction known to be correctable with positive rate was 1 - Θ(1/√k), and around 0.17 for the binary case. Our result pins down the largest fraction of correctable deletions for k-ary codes as 1 - Θ(1/k), since 1 - 1/k is an upper bound even for the simpler model of erasures, where the locations of the missing symbols are known. Closing the gap between √2 - 1 and 1/2 for the limit of worst-case deletions correctable by binary codes remains a tantalizing open question.
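The bounds quoted here are simple closed forms, so a quick numeric check (straightforward arithmetic, not code from the paper) confirms that the binary case equals √2 - 1 and that the constructive fraction stays below the erasure upper bound:

```python
import math

def constructive_fraction(k):
    # Deletion fraction approached by the constructed k-ary codes.
    return 1 - 2 / (k + math.sqrt(k))

def erasure_upper_bound(k):
    # 1 - 1/k: an upper bound even when deletion locations are known.
    return 1 - 1 / k

for k in (2, 3, 4, 8, 16):
    print(k, constructive_fraction(k), erasure_upper_bound(k))
```

Rationalizing 2/(2 + √2) gives exactly 2 - √2, so `constructive_fraction(2)` is √2 - 1 as the abstract states.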

• 15. Cheraghchi, M.
KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
Approximating linear threshold predicates. 2012. In: ACM Transactions on Computation Theory, ISSN 1942-3454, Vol. 4, no. 1, p. 2. Article in journal (Refereed).

We study constraint satisfaction problems on the domain {-1, 1}, where the given constraints are homogeneous linear threshold predicates, that is, predicates of the form sgn(w_1 x_1 + ... + w_n x_n) for some positive integer weights w_1, ..., w_n. Despite their simplicity, current techniques fall short of providing a classification of these predicates in terms of approximability. In fact, it is not easy to guess whether there exists a homogeneous linear threshold predicate that is approximation resistant or not. The focus of this article is to identify and study the approximation curve of a class of threshold predicates that allow for nontrivial approximation. Arguably the simplest such predicate is the majority predicate sgn(x_1 + ... + x_n), for which we obtain an almost complete understanding of the asymptotic approximation curve, assuming the Unique Games Conjecture. Our techniques extend to a more general class of "majority-like" predicates and we obtain parallel results for them. In order to classify these predicates, we introduce the notion of Chow-robustness, which might be of independent interest.
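For the majority predicate, the random-assignment baseline that any nontrivial approximation must beat can be computed exactly by brute force at small n (an illustrative check of ours, not code from the article):

```python
from itertools import product

def majority(xs):
    # The predicate sgn(x_1 + ... + x_n) > 0 on the domain {-1, 1};
    # odd n avoids ties.
    return sum(xs) > 0

def random_assignment_value(n):
    # Fraction of all 2^n assignments in {-1, 1}^n satisfying the predicate.
    sat = sum(1 for xs in product((-1, 1), repeat=n) if majority(xs))
    return sat / 2 ** n
```

By symmetry, for odd n exactly half the assignments satisfy majority, so the baseline is 1/2 and a nontrivial algorithm must satisfy strictly more than half the constraints of an almost-satisfiable instance.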

• 16. Cheraghchi, M
KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
Approximating Linear Threshold Predicates. 2010. Conference paper (Refereed).

We study constraint satisfaction problems on the domain {-1, 1}, where the given constraints are homogeneous linear threshold predicates, that is, predicates of the form sgn(w_1 x_1 + ... + w_n x_n) for some positive integer weights w_1, ..., w_n. Despite their simplicity, current techniques fall short of providing a classification of these predicates in terms of approximability. In fact, it is not easy to guess whether there exists a homogeneous linear threshold predicate that is approximation resistant or not. The focus of this paper is to identify and study the approximation curve of a class of threshold predicates that allow for non-trivial approximation. Arguably the simplest such predicate is the majority predicate sgn(x_1 + ... + x_n), for which we obtain an almost complete understanding of the asymptotic approximation curve, assuming the Unique Games Conjecture. Our techniques extend to a more general class of "majority-like" predicates and we obtain parallel results for them. In order to classify these predicates, we introduce the notion of Chow-robustness that might be of independent interest.

• 17. Dodis, Y.
KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
Randomness extraction and key derivation using the CBC, Cascade and HMAC modes. 2004. In: Advances in Cryptology - CRYPTO 2004, Proceedings / [ed] Franklin, M., 2004, p. 494-510. Conference paper (Refereed).

We study the suitability of common pseudorandomness modes associated with cryptographic hash functions and block ciphers (CBC-MAC, Cascade and HMAC) for the task of "randomness extraction", namely, the derivation of keying material from semi-secret and/or semi-random sources. Important applications for such extractors include the derivation of strong cryptographic keys from non-uniform sources of randomness (for example, to extract a seed for a pseudorandom generator from a weak source of physical or digital noise), and the derivation of pseudorandom keys from a Diffie-Hellman value. Extractors are closely related in their applications to pseudorandom functions and thus it is attractive to (re)use the common pseudorandom modes as randomness extractors. Yet, the crucial difference between pseudorandom generation and randomness extraction is that the former uses random secret keys while the latter uses random but known keys. We show that under a variety of assumptions on the underlying primitives (block ciphers and compression functions), ranging from ideal randomness assumptions to realistic universal-hashing properties, these modes induce good extractors. Hence, these schemes represent a more practical alternative to combinatorial extractors (that are seldom used in practice), and a better-analyzed alternative to the common practice of using SHA-1 or MD5 (as a single un-keyed function) for randomness extraction. In particular, our results serve to validate the method of key extraction and key derivation from Diffie-Hellman values used in the IKE (IPsec's Key Exchange) protocol.
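The usage pattern this abstract argues for, HMAC keyed with a random but publicly known value and applied to semi-secret source material, is the extract step later standardized in HKDF. A minimal sketch using Python's standard library (the parameter choices and placeholder input are ours):

```python
import hashlib
import hmac
import os

def extract(salt, weak_input):
    # HMAC as a randomness extractor: the salt is the random *known*
    # extractor key (unlike PRF usage, where the key is secret), and
    # weak_input is the semi-secret, non-uniform source material.
    return hmac.new(salt, weak_input, hashlib.sha256).digest()

salt = os.urandom(32)                           # public extractor key
weak = b"example-diffie-hellman-shared-value"   # stand-in weak source
key = extract(salt, weak)                       # 32 bytes of keying material
```

The derived key can then seed a pseudorandom generator or key a cipher; the crucial point the paper analyzes is why such modes remain sound even though the HMAC key is public.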

• 18. Dor, D.
KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
On lower bounds for selecting the median. 2001. In: SIAM Journal on Discrete Mathematics, ISSN 0895-4801, E-ISSN 1095-7146, Vol. 14, no. 3, p. 299-311. Article in journal (Refereed).

We present a reformulation of the 2n + o(n) lower bound of Bent and John [Proceedings of the 17th Annual ACM Symposium on Theory of Computing, 1985, pp. 213-216] for the number of comparisons needed for selecting the median of n elements. Our reformulation uses a weight function. Apart from giving a more intuitive proof for the lower bound, the new formulation opens up possibilities for improving it. We use the new formulation to show that any pair-forming median finding algorithm, i.e., a median finding algorithm that starts by comparing ⌊n/2⌋ disjoint pairs of elements, must perform, in the worst case, at least 2.01n + o(n) comparisons. This provides strong evidence that selecting the median requires at least cn + o(n) comparisons for some c > 2.
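"Pair-forming" just means the algorithm's first round compares ⌊n/2⌋ disjoint pairs. A toy comparison-counting instance of that class (ours; the finishing step is a naive counting sort comparator, not an optimal selection algorithm):

```python
from functools import cmp_to_key

def pair_forming_median(items):
    items = list(items)
    comparisons = 0

    # Round 1: compare floor(n/2) disjoint pairs (the defining property).
    for i in range(0, len(items) - 1, 2):
        comparisons += 1
        if items[i] > items[i + 1]:
            items[i], items[i + 1] = items[i + 1], items[i]

    # Naive finish: sort via a comparator that counts every comparison.
    def cmp(a, b):
        nonlocal comparisons
        comparisons += 1
        return (a > b) - (a < b)

    items.sort(key=cmp_to_key(cmp))
    return items[(len(items) - 1) // 2], comparisons

median, count = pair_forming_median([5, 1, 4, 2, 3])
```

The paper's point is a lower bound on `count` for every algorithm in this class, not an upper bound from any particular one: even the best pair-forming strategy needs at least 2.01n + o(n) comparisons in the worst case.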

• 19.
KTH, School of Computer Science and Communication (CSC). Swedish NCSA, Sweden.
KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
Quantum algorithms for computing short discrete logarithms and factoring RSA integers. 2017. In: 8th International Workshop on Post-Quantum Cryptography, PQCrypto 2017, Springer, 2017, Vol. 10346, p. 347-363. Conference paper (Refereed).

We generalize the quantum algorithm for computing short discrete logarithms previously introduced by Ekerå [2] so as to allow for various tradeoffs between the number of times that the algorithm need be executed on the one hand, and the complexity of the algorithm and the requirements it imposes on the quantum computer on the other hand. Furthermore, we describe applications of algorithms for computing short discrete logarithms. In particular, we show how other important problems such as those of factoring RSA integers and of finding the order of groups under side information may be recast as short discrete logarithm problems. This gives rise to an algorithm for factoring RSA integers that is less complex than Shor’s general factoring algorithm in the sense that it imposes smaller requirements on the quantum computer. In both our algorithm and Shor’s algorithm, the main hurdle is to compute a modular exponentiation in superposition. When factoring an n bit integer, the exponent is of length 2n bits in Shor’s algorithm, compared to slightly more than n/2 bits in our algorithm.
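The cost comparison at the end (a 2n-bit exponent for Shor versus roughly n/2 bits here) can be made concrete classically: in square-and-multiply, the number of modular multiplications scales with the exponent's bit length. A rough classical analogy (ours, not the quantum circuit):

```python
def modexp_count(base, exponent, modulus):
    # Left-to-right square-and-multiply, counting modular multiplications.
    result, mults = 1, 0
    for bit in bin(exponent)[2:]:
        result = result * result % modulus      # one squaring per bit
        mults += 1
        if bit == "1":
            result = result * base % modulus    # one extra mult per set bit
            mults += 1
    return result, mults

# Work grows linearly with exponent length, so a 2n-bit exponent costs
# roughly 4x as many multiplications as an (n/2)-bit one for the same modulus.
r, m = modexp_count(7, 560, 561)
```

The same linear dependence on exponent length is what makes the shorter exponent translate into a smaller quantum circuit.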

• 20. Guruswami, V.
KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
Super-polylogarithmic hypergraph coloring hardness via low-degree long codes. 2014. In: Proceedings of the Annual ACM Symposium on Theory of Computing, 2014, p. 614-623. Conference paper (Refereed).

We prove improved inapproximability results for hypergraph coloring using the low-degree polynomial code (a.k.a. the "short code" of Barak et al. [FOCS 2012]) and the techniques proposed by Dinur and Guruswami [FOCS 2013] to incorporate this code into inapproximability results. In particular, we prove quasi-NP-hardness of the following problems on n-vertex hypergraphs: • coloring a 2-colorable 8-uniform hypergraph with 2^{2^{Ω(√(log log n))}} colors; • coloring a 4-colorable 4-uniform hypergraph with 2^{2^{Ω(√(log log n))}} colors; • coloring a 3-colorable 3-uniform hypergraph with (log n)^{Ω(1/√(log log log n))} colors. In each of these cases, the hardness results obtained are (at least) superpolynomially stronger (if not exponentially stronger, as in the third case) than what was previously known for the respective cases. In fact, prior to this result, (log n)^{O(1)} colors was the strongest quantitative bound on the number of colors ruled out by inapproximability results for O(1)-colorable hypergraphs. The fundamental bottleneck in obtaining coloring inapproximability results using the low-degree long code was a multipartite structural restriction in the PCP construction of Dinur-Guruswami. We are able to get around this restriction by simulating the multipartite structure implicitly by querying just one partition (albeit requiring 8 queries), which yields our result for 2-colorable 8-uniform hypergraphs. The result for 4-colorable 4-uniform hypergraphs is obtained via a "query doubling" method exploiting additional properties of the 8-query test. For 3-colorable 3-uniform hypergraphs, we exploit the ternary domain to design a test with an additive (as opposed to multiplicative) noise function, and analyze its efficacy in killing high-weight Fourier coefficients via the pseudorandom properties of an associated quadratic form. The latter step involves extending the key algebraic ingredient of Dinur-Guruswami concerning testing binary Reed-Muller codes to the ternary alphabet.

• 21. Guruswami, V.
KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
On the list-decodability of random linear codes. 2010. In: Proceedings of the Annual ACM Symposium on Theory of Computing, 2010, p. 409-416. Conference paper (Refereed).

We show that the list-decodability of random linear codes is as good as that of general random codes. Specifically, for every fixed finite field F_q, p ∈ (0, 1-1/q) and ε > 0, we prove that with high probability a random linear code C in F_q^n of rate (1 - H_q(p) - ε) can be list decoded from a fraction p of errors with lists of size at most O(1/ε). This also answers a basic open question concerning the existence of highly list-decodable linear codes, showing that a list size of O(1/ε) suffices to have rate within ε of the "list-decoding capacity" 1 - H_q(p). The best previously known list-size bound was q^{O(1/ε)} (except in the q = 2 case, where a list-size bound of O(1/ε) was known). The main technical ingredient in our proof is a strong upper bound on the probability that ℓ random vectors chosen from a Hamming ball centered at the origin have too many (more than Θ(ℓ)) vectors from their linear span also belonging to the ball.
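The quantity being bounded — the number of codewords of a random linear code inside a Hamming ball — can be computed exhaustively at toy sizes (an illustrative brute force of ours; real parameters are far larger, and q = 2 here):

```python
import itertools
import random

def random_generator_matrix(n, k, seed=0):
    # k x n generator matrix of a random binary linear code.
    rng = random.Random(seed)
    return [[rng.randrange(2) for _ in range(n)] for _ in range(k)]

def codewords(G, n, k):
    # Encode all 2^k messages; a set, since rows may be linearly dependent.
    return {
        tuple(sum(m * G[i][j] for i, m in enumerate(msg)) % 2
              for j in range(n))
        for msg in itertools.product((0, 1), repeat=k)
    }

def ball_count(code, center, radius):
    # Codewords within Hamming distance `radius` of `center`.
    return sum(1 for c in code
               if sum(a != b for a, b in zip(c, center)) <= radius)

n, k = 10, 4
code = codewords(random_generator_matrix(n, k), n, k)
zero = (0,) * n
```

List-decodability from a fraction p of errors with list size L says exactly that `ball_count` is at most L for every center at radius pn; the paper proves this holds with high probability at rates approaching capacity.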

• 22. Guruswami, V.
KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
Hardness of approximate hypergraph coloring2002In: SIAM journal on computing (Print), ISSN 0097-5397, E-ISSN 1095-7111, Vol. 31, no 6, p. 1663-1686Article in journal (Refereed)

We introduce the notion of covering complexity of a verifier for probabilistically checkable proofs (PCPs). Such a verifier is given an input, a claimed theorem, and an oracle, representing a purported proof of the theorem. The verifier is also given a random string and decides whether to accept the proof or not, based on the given random string. We define the covering complexity of such a verifier, on a given input, to be the minimum number of proofs needed to satisfy the verifier on every random string; i.e., on every random string, at least one of the given proofs must be accepted by the verifier. The covering complexity of PCP verifiers offers a promising route to getting stronger inapproximability results for some minimization problems and, in particular, (hyper)graph coloring problems. We present a PCP verifier for NP statements that queries only four bits and yet has a covering complexity of one for true statements and a superconstant covering complexity for statements not in the language. Moreover, the acceptance predicate of this verifier is a simple not-all-equal check on the four bits it reads. This enables us to prove that, for any constant c, it is NP-hard to color a 2-colorable 4-uniform hypergraph using just c colors and also yields a superconstant inapproximability result under a stronger hardness assumption.
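The four-bit not-all-equal acceptance check rejects only the two all-equal inputs, so a uniformly random proof passes with probability 14/16 = 7/8. A small sanity-check enumeration (an illustrative sketch, not part of the paper):

```python
from itertools import product

def nae4(bits):
    """Not-all-equal on four bits: rejects only 0000 and 1111."""
    return 0 < sum(bits) < 4

# 14 of the 16 four-bit strings are accepted, i.e. probability 7/8.
accepted = sum(nae4(bits) for bits in product((0, 1), repeat=4))
```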

• 23. Guruswami, V.
KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
Combinatorial bounds for list decoding2002In: IEEE Transactions on Information Theory, ISSN 0018-9448, E-ISSN 1557-9654, Vol. 48, no 5, p. 1021-1034Article in journal (Refereed)

Informally, an error-correcting code has nice list-decodability properties if every Hamming ball of large radius has a small number of codewords in it. Here, we report linear codes with nontrivial list-decodability: i.e., codes of large rate that are nicely list-decodable, and codes of large distance that are not nicely list-decodable. Specifically, on the positive side, we show that there exist codes of rate R and block length n that have at most c codewords in every Hamming ball of radius H^(-1)(1 − R − 1/c)·n. This answers the main open question from the work of Elias. This result also has consequences for the construction of concatenated codes of good rate that are list decodable from a large fraction of errors, improving previous results of Guruswami and Sudan in this vein. Specifically, for every ε > 0, we present a polynomial time constructible asymptotically good family of binary codes of rate Ω(ε^4) that can be list-decoded in polynomial time from up to a fraction (1/2 − ε) of errors, using lists of size O(ε^(-2)). On the negative side, we show that for every δ and c, there exist τ < δ, c_1 > 0, and an infinite family of linear codes {C_i} such that if n_i denotes the block length of C_i, then C_i has minimum distance at least δ·n_i and contains more than c_1·n_i^c codewords in some Hamming ball of radius τ·n_i. While this result is still far from known bounds on the list-decodability of linear codes, it is the first to bound the radius for list-decodability by a polynomial-sized list away from the minimum distance of the code.

• 24. Guruswami, Venkatesan
KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
SUPER-POLYLOGARITHMIC HYPERGRAPH COLORING HARDNESS VIA LOW-DEGREE LONG CODES2017In: SIAM journal on computing (Print), ISSN 0097-5397, E-ISSN 1095-7111, Vol. 46, no 1, p. 132-159Article in journal (Refereed)

We prove improved inapproximability results for hypergraph coloring using the low-degree polynomial code (aka the "short code" of Barak et al. [SIAM J. Comput., 44 (2015), pp. 1287-1324]) and the techniques proposed by Dinur and Guruswami [Israel J. Math., 209 (2015), pp. 611-649] to incorporate this code for inapproximability results. In particular, we prove quasi-NP-hardness of the following problems on n-vertex hypergraphs: coloring a 2-colorable 8-uniform hypergraph with 2^(2^(Ω(√(log log n)))) colors; coloring a 4-colorable 4-uniform hypergraph with 2^(2^(Ω(√(log log n)))) colors; and coloring a 3-colorable 3-uniform hypergraph with (log n)^(Ω(1/(log log log n))) colors. For the first two cases, the hardness results obtained are superpolynomially stronger than what was previously known, and in the last case the improvement is exponential. In fact, prior to this result, (log n)^(O(1)) colors was the strongest quantitative bound on the number of colors ruled out by inapproximability results for O(1)-colorable hypergraphs, and (log log n)^(O(1)) for O(1)-colorable 3-uniform hypergraphs.

• 25. Guruswami, Venkatesan
KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
On the List-Decodability of Random Linear Codes2011In: IEEE Transactions on Information Theory, ISSN 0018-9448, E-ISSN 1557-9654, Vol. 57, no 2, p. 718-725Article in journal (Refereed)

The list-decodability of random linear codes is shown to be as good as that of general random codes. Specifically, for every fixed finite field F_q, p ∈ (0, 1 − 1/q) and ε > 0, it is proved that with high probability a random linear code C in F_q^n of rate (1 − H_q(p) − ε) can be list decoded from a fraction p of errors with lists of size at most O(1/ε). This also answers a basic open question concerning the existence of highly list-decodable linear codes, showing that a list size of O(1/ε) suffices to have rate within ε of the information-theoretically optimal rate of 1 − H_q(p). The best previously known list-size bound was q^(O(1/ε)) (except in the q = 2 case, where a list-size bound of O(1/ε) was known). The main technical ingredient in the proof is a strong upper bound on the probability that, of ℓ random vectors chosen from a Hamming ball centered at the origin, too many (more than Ω(ℓ)) vectors from their linear span also lie in the ball.

• 26. Guruswami, Venkatesan
KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
BEATING THE RANDOM ORDERING IS HARD: EVERY ORDERING CSP IS APPROXIMATION RESISTANT2011In: SIAM journal on computing (Print), ISSN 0097-5397, E-ISSN 1095-7111, Vol. 40, no 3, p. 878-914Article in journal (Refereed)

We prove that, assuming the Unique Games Conjecture (UGC), every problem in the class of ordering constraint satisfaction problems (OCSPs) where each constraint has constant arity is approximation resistant. In other words, we show that if ρ is the expected fraction of constraints satisfied by a random ordering, then obtaining a ρ'-approximation for any ρ' > ρ is UG-hard. For the simplest OCSP, the MAXIMUM ACYCLIC SUBGRAPH (MAS) problem, this implies that obtaining a ρ-approximation for any constant ρ > 1/2 is UG-hard. Specifically, for every constant ε > 0 the following holds: given a directed graph G that has an acyclic subgraph consisting of a fraction (1 − ε) of its edges, it is UG-hard to find one with more than (1/2 + ε) of its edges. Note that it is trivial to find an acyclic subgraph with 1/2 the edges, by taking either the forward or backward edges in an arbitrary ordering of the vertices of G. The MAS problem has been well studied, and beating the random ordering for MAS has been a basic open problem. An OCSP of arity k is specified by a subset Π ⊆ S_k of permutations on {1, 2, ..., k}. An instance of such an OCSP is a set V and a collection of constraints, each of which is an ordered k-tuple of V. The objective is to find a global linear ordering of V while maximizing the number of constraints ordered as in Π. A random ordering of V is expected to satisfy a ρ = |Π|/k! fraction. We show that, for any fixed k, it is hard to obtain a ρ'-approximation for Π-OCSP for any ρ' > ρ. The result is in fact stronger: we show that for every Λ ⊆ Π ⊆ S_k and an arbitrarily small ε, it is hard to distinguish instances where a (1 − ε) fraction of the constraints can be ordered according to Λ from instances where at most a (ρ + ε) fraction can be ordered as in Π. A special case of our result is that the BETWEENNESS problem is hard to approximate beyond a factor 1/3.
The results naturally generalize to OCSPs which assign a payoff to the different permutations. Finally, our results imply (unconditionally) that a simple semidefinite relaxation for MAS does not suffice to obtain a better approximation.
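The trivial factor-1/2 baseline for MAS mentioned in the abstract can be sketched in a few lines. This is an illustrative sketch only (vertices are assumed to be labeled 0..n−1, so the identity ordering serves as the arbitrary ordering):

```python
def acyclic_half(edges):
    """Trivial 1/2-approximation for Maximum Acyclic Subgraph:
    under the identity ordering of the vertices, the forward edges
    (u < v) and the backward edges (u > v) are each acyclic and
    together cover all edges, so the larger set contains at least
    half of them."""
    forward = [(u, v) for (u, v) in edges if u < v]
    backward = [(u, v) for (u, v) in edges if u > v]
    return forward if len(forward) >= len(backward) else backward
```

On a directed 3-cycle, for example, the forward edges already form an acyclic subgraph with 2 of the 3 edges.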

• 27.
KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
A slight sharpening of LMN2001In: Journal of computer and system sciences (Print), ISSN 0022-0000, E-ISSN 1090-2724, Vol. 63, no 3, p. 498-508Article in journal (Refereed)

N. Linial et al. (1993. J. Assoc. Comput. Mach. 40, No. 3, 607-620) proved that a function computed by a small-depth circuit of limited size has most of its Fourier support on small sets. We improve their bounds. When the bottom fanin is bounded we use essentially their argument, but to reduce the general case to this case without a loss in the asymptotic bounds requires a new argument.

• 28.
KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
An Average-case Depth Hierarchy Theorem for Higher Depths2016In: 2016 IEEE 57th Annual IEEE Symposium on Foundations of Computer Science (FOCS), IEEE Computer Society, 2016, Vol. 2016, p. 79-88Conference paper (Refereed)

We extend the recent hierarchy results of Rossman, Servedio, and Tan [1] to address circuits of almost logarithmic depth. Our proof uses the same basic approach as [1], but a number of small differences enable us to obtain a stronger result with a significantly shorter proof.

• 29.
KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
Complexity theory, proofs and approximation2005In: European Congress of Mathematics / [ed] Laptev, A, 2005, p. 733-750Conference paper (Refereed)

We give a short introduction to some questions in complexity theory and proceed to describe some recent developments. In particular, we discuss probabilistically checkable proofs and their applications in establishing inapproximability results. In a traditional proof the proof-checker reads the entire proof and decides deterministically whether the proof is correct. In a probabilistically checkable proof the proof-checker randomly verifies only a very small portion of the proof but still cannot be fooled into accepting a false claim except with small probability.

• 30.
KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
Every 2-CSP Allows Nontrivial Approximation2005In: Proceedings of the 37th Annual ACM Symposium on Theory of Computing, 2005, p. 740-746Conference paper (Refereed)

We use semidefinite programming to prove that any constraint satisfaction problem in two variables over any domain allows an efficient approximation algorithm that does provably better than picking a random assignment. To be more precise, assume that each variable can take values in [d] and that each constraint rejects t out of the d^2 possible input pairs. Then, for some universal constant c, we can, in probabilistic polynomial time, find an assignment whose objective value is, in expectation, within a factor (1 − (t/d^2)(1 − c/(d^2 log d))) of optimal.

• 31.
KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
EVERY 2-CSP ALLOWS NONTRIVIAL APPROXIMATION2008In: Computational Complexity, ISSN 1016-3328, E-ISSN 1420-8954, Vol. 17, no 4, p. 549-566Article in journal (Refereed)

We use semidefinite programming to prove that any constraint satisfaction problem in two variables over any domain allows an efficient approximation algorithm that does better than picking a random assignment. Specifically, we consider the case when each variable can take values in [d] and each constraint rejects t out of the d^2 possible input pairs. Then, for some universal constant c, we can, in probabilistic polynomial time, find an assignment whose objective value is, in expectation, within a factor 1 − t/d^2 + ct/(d^4 log d) of optimal, improving on the trivial bound of 1 − t/d^2.
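The trivial bound comes from a uniformly random assignment: a constraint rejecting t of the d^2 input pairs is satisfied with probability exactly 1 − t/d^2. A quick enumeration check (an illustrative sketch, not the paper's SDP algorithm):

```python
from itertools import product
from fractions import Fraction

def random_assignment_value(d, rejected):
    """Probability that a single 2-variable constraint over domain [d]
    is satisfied by a uniformly random assignment: exactly 1 - t/d^2,
    where t = |rejected| is the number of rejected input pairs."""
    sat = sum(1 for pair in product(range(d), repeat=2) if pair not in rejected)
    return Fraction(sat, d * d)

# An inequality constraint over a 3-element domain rejects the t = 3
# diagonal pairs, so a random assignment satisfies it with probability
# 1 - 3/9 = 2/3.
value = random_assignment_value(3, {(0, 0), (1, 1), (2, 2)})
```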

• 32.
KTH, Superseded Departments, Mathematics.
On bounded occurrence constraint satisfaction2000In: Information Processing Letters, ISSN 0020-0190, E-ISSN 1872-6119, Vol. 74, no 1-2, p. 1-6Article in journal (Refereed)

An approximation algorithm for a constraint satisfaction problem is said to be nontrivial if its performance ratio is strictly superior to the expected performance of the algorithm which simply chooses a random assignment. We prove that any constraint satisfaction problem where each variable appears a bounded number of times admits a nontrivial polynomial time approximation algorithm.

• 33.
KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
On Nontrivial Approximation of CSPs2007Conference paper (Refereed)

Constraint satisfaction problems, more simply called CSPs, are central in computer science, the most famous probably being Satisfiability (SAT), the basic NP-complete problem. In this talk we survey some results about the optimization version of CSPs, where we want to satisfy as many constraints as possible. One very simple approach to a CSP is to give random values to the variables. It turns out that for some CSPs, one example being Max-3Sat, unless P = NP there is no polynomial time algorithm that achieves an approximation ratio superior to what is obtained by this trivial strategy. Some other CSPs, Max-Cut being a prime example, do allow very interesting nontrivial approximation algorithms which give an approximation ratio substantially better than that obtained by a random assignment. These results hint at a general classification problem: determining which CSPs admit a polynomial time approximation algorithm that beats the random assignment by a constant factor. Positive results giving such algorithms tend to be based on semidefinite programming, while the PCP theorem is the central tool for proving negative results. We describe many of the known results in the area and also discuss some of the open problems.

• 34.
KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
On small-depth Frege proofs for Tseitin for grids2017In: 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), IEEE, 2017, p. 97-108Conference paper (Refereed)

We prove a lower bound on the size of a small depth Frege refutation of the Tseitin contradiction on the grid. We conclude that polynomial size such refutations must use formulas of almost logarithmic depth.

• 35.
KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
On the approximation resistance of a random predicate2007In: Approximation, Randomization, and Combinatorial Optimization: Algorithms and Techniques / [ed] Charikar, M; Reingold, O; Jansen, K; Rolim, JDP, BERLIN: SPRINGER-VERLAG BERLIN , 2007, Vol. 4627, p. 149-163Conference paper (Other academic)

A predicate is approximation resistant if no probabilistic polynomial time approximation algorithm can do significantly better than the naive algorithm that picks an assignment uniformly at random. Assuming that the Unique Games Conjecture is true, we prove that most Boolean predicates are approximation resistant.

• 36.
KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
On the Approximation Resistance of a Random Predicate2009In: Computational Complexity, ISSN 1016-3328, E-ISSN 1420-8954, Vol. 18, no 3, p. 413-434Article in journal (Refereed)

A predicate is called approximation resistant if it is NP-hard to approximate the corresponding constraint satisfaction problem significantly better than what is achieved by the naive algorithm that picks an assignment uniformly at random. In this paper we study predicates of Boolean inputs where the width of the predicate is allowed to grow. Samorodnitsky and Trevisan proved that, assuming the Unique Games Conjecture, there is a family of very sparse predicates that are approximation resistant. We prove that, under the same conjecture, any predicate implied by their predicate remains approximation resistant and that, with high probability, this condition applies to a randomly chosen predicate.

• 37.
KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
On the correlation of parity and small-depth circuits2014In: SIAM journal on computing (Print), ISSN 0097-5397, E-ISSN 1095-7111, Vol. 43, no 5, p. 1699-1708Article in journal (Refereed)

We prove that the correlation of a depth-d unbounded fanin circuit of size S with parity of n variables is at most 2^(-Ω(n/(log S)^(d-1))).

• 38.
KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
On the efficient approximability of constraint satisfaction problems2007In: London Mathematical Society Lecture Notes, ISSN 0076-0552, Vol. 346, p. 201-222Article in journal (Refereed)

We discuss some results about the efficient approximability of constraint satisfaction problems. In particular, we focus on the question of whether an efficient algorithm can perform significantly better than the algorithm that picks a solution uniformly at random.

• 39.
KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
On the np-hardness of max-not-22014In: SIAM journal on computing (Print), ISSN 0097-5397, E-ISSN 1095-7111, Vol. 43, no 1, p. 179-193Article in journal (Refereed)

We prove that, for any ε > 0, given a satisfiable instance of Max-NTW (Not-2), it is NP-hard to find an assignment that satisfies a fraction 5/8 + ε of the constraints. This, up to the existence of ε, matches the approximation ratio obtained by the trivial algorithm that just picks an assignment at random, and thus the result is tight. Said equivalently, the result proves that Max-NTW is approximation resistant on satisfiable instances, which completes our understanding of arity-three maximum constraint satisfaction problems with regard to approximation resistance.
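Taking NTW ("Not-Two") to be the predicate on three bits that rejects exactly the inputs with precisely two true literals, as the name suggests, the 5/8 random-assignment baseline is a short enumeration (an illustrative sketch, not from the paper):

```python
from itertools import product

def ntw(a, b, c):
    """The Not-Two predicate: accepts a triple of bits unless
    exactly two of them are true."""
    return (a + b + c) != 2

# 3 of the 8 inputs have exactly two ones, so a uniformly random
# assignment satisfies NTW with probability 5/8.
satisfied = sum(ntw(*bits) for bits in product((0, 1), repeat=3))
```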

• 40.
KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
On the NP-hardness of Max-Not-22012In: Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, Springer-Verlag , 2012, p. 170-181Conference paper (Refereed)

We prove that, for any ε > 0, given a satisfiable instance of Max-NTW (Not-2), it is NP-hard to find an assignment that satisfies a fraction 5/8 + ε of the constraints. This, up to the existence of ε, matches the approximation ratio obtained by the trivial algorithm that just picks an assignment at random, and thus the result is tight. Said equivalently, the result proves that Max-NTW is approximation resistant on satisfiable instances, which completes our understanding of arity-three Max-CSPs with regard to approximation resistance.

• 41.
KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
Satisfying degree-d equations of GF[2]2011In: Proceedings of the 14th international workshop and 15th international conference on Approximation, randomization, and combinatorial optimization:: algorithms and techniques, 2011, p. 242-253Conference paper (Refereed)

We study the problem where we are given a system of polynomial equations defined by multivariate polynomials over GF[2] of fixed constant degree d > 1 and the aim is to satisfy the maximal number of equations. A random assignment approximates this problem within a factor 2^(-d) and we prove that for any ε > 0, it is NP-hard to obtain a ratio 2^(-d) + ε. When considering instances that are perfectly satisfiable we give a probabilistic polynomial time algorithm that, with high probability, satisfies a fraction 2^(1-d) - 2^(1-2d) and we prove that it is NP-hard to do better by an arbitrarily small constant. The hardness results are proved in the form of inapproximability results of Max-CSPs where the predicate in question has the desired form and we give some immediate results on approximation resistance of some predicates.
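The 2^(-d) random-assignment baseline is tight for the single degree-d equation x_1·x_2·...·x_d = 1, which only the all-ones assignment satisfies. A quick enumeration (an illustrative sketch, not from the paper):

```python
from itertools import product

def frac_satisfying_monomial(d):
    """Fraction of the 2^d assignments over GF(2) satisfying the
    degree-d equation x_1 * x_2 * ... * x_d = 1; only the all-ones
    assignment works, giving probability exactly 2^-d."""
    count = sum(1 for bits in product((0, 1), repeat=d) if all(bits))
    return count / 2 ** d
```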

• 42.
KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
Satisfying Degree-d Equations of GF(2)n2011Conference paper (Refereed)

We study the problem where we are given a system of polynomial equations defined by multivariate polynomials over GF[2] of fixed constant degree d > 1 and the aim is to satisfy the maximal number of equations. A random assignment approximates this problem within a factor 2^(-d) and we prove that for any ε > 0, it is NP-hard to obtain a ratio 2^(-d) + ε. When considering instances that are perfectly satisfiable we give a probabilistic polynomial time algorithm that, with high probability, satisfies a fraction 2^(1-d) - 2^(1-2d) and we prove that it is NP-hard to do better by an arbitrarily small constant. The hardness results are proved in the form of inapproximability results of Max-CSPs where the predicate in question has the desired form and we give some immediate results on approximation resistance of some predicates.

• 43.
KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
Some optimal inapproximability results2001In: Journal of the ACM, ISSN 0004-5411, E-ISSN 1557-735X, Vol. 48, no 4, p. 798-859Article in journal (Refereed)

We prove optimal, up to an arbitrary ε > 0, inapproximability results for Max-Ek-Sat for k ≥ 3, for maximizing the number of satisfied linear equations in an over-determined system of linear equations modulo a prime p, and for Set Splitting. As a consequence of these results we get improved lower bounds for the efficient approximability of many optimization problems studied previously, in particular Max-E2-Sat, Max-Cut, Max-di-Cut, and Vertex Cover.

• 44.
KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
Special issue "conference on computational complexity 2009" guest editor's foreword2010In: Computational Complexity, ISSN 1016-3328, E-ISSN 1420-8954, Vol. 19, no 2, p. 151-152Article in journal (Refereed)
• 45.
KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
The security of the IAPM and IACBC modes2007In: Journal of Cryptology, ISSN 0933-2790, E-ISSN 1432-1378, Vol. 20, no 2, p. 153-163Article in journal (Refereed)

We give new and shorter proofs for message integrity and confidentiality of the IAPM mode and of the IACBC mode proposed by Jutla [6].

• 46.
KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
The square lattice shuffle2006In: Random structures & algorithms (Print), ISSN 1042-9832, E-ISSN 1098-2418, Vol. 29, no 4, p. 466-474Article in journal (Refereed)

We show that the operations of permuting the entries within columns and within rows, separately and independently, mix a square matrix in a constant number of rounds.
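One round of the shuffle studied here can be sketched as permuting the entries within each row independently, then within each column independently. This is an illustrative sketch of the operation only, not an analysis of its mixing:

```python
import random

def shuffle_round(m):
    """One round of the square lattice shuffle on an n x n matrix m:
    independently permute the entries within each row, then
    independently permute the entries within each column."""
    n = len(m)
    for row in m:
        random.shuffle(row)
    for j in range(n):
        col = [m[i][j] for i in range(n)]
        random.shuffle(col)
        for i in range(n):
            m[i][j] = col[i]
    return m
```

The round is a permutation of the matrix entries, so the multiset of entries is always preserved.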

• 47.
KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
The Square Lattice Shuffle (vol 29, pg 466, 2006)2016In: Random structures & algorithms (Print), ISSN 1042-9832, E-ISSN 1098-2418, Vol. 48, no 1, p. 213-213Article in journal (Refereed)
• 48.
KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
Improved NP-inapproximability for 2-variable linear equations2015In: Leibniz International Proceedings in Informatics, LIPIcs, Schloss Dagstuhl- Leibniz-Zentrum fur Informatik GmbH , 2015, Vol. 40, p. 341-360Conference paper (Refereed)

An instance of the 2-Lin(2) problem is a system of equations of the form "x_i + x_j = b (mod 2)". Given such a system in which it is possible to satisfy all but an ε fraction of the equations, we show it is NP-hard to satisfy all but a Cε fraction of the equations, for any C < 11/8 = 1.375 (and any 0 < ε ≤ 1/8). The previous best result, standing for over 15 years, had 5/4 in place of 11/8. Our result provides the best known NP-hardness even for the Unique-Games problem, and it also holds for the special case of Max-Cut. The precise factor 11/8 is unlikely to be best possible; we also give a conjecture concerning analysis of Boolean functions which, if true, would yield a larger hardness factor of 3/2. Our proof is by a modified gadget reduction from a pairwise-independent predicate. We also show an inherent limitation to this type of gadget reduction: any such reduction can never establish a hardness factor C greater than 2.54. Previously, no such limitation on gadget reductions was known.

• 49.
KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
Fitting points on the real line and its application to RH mapping2003In: Journal of Algorithms, ISSN 0196-6774, E-ISSN 1090-2678, Vol. 49, no 1, p. 42-62Article in journal (Refereed)

A natural problem is, given an n × n symmetric matrix D, to find an arrangement of n points on the real line such that the resulting distances agree as well as possible with the distances specified by D. We refer to the variant in which the difference in distance is measured in maximum norm as the MATRIX-TO-LINE problem. The MATRIX-TO-LINE problem has previously been shown to be NP-complete [J.B. Saxe, 17th Allerton Conference in Communication, Control, and Computing, 1979, pp. 480-489]. We show that it can be approximated within 2, but, unless P = NP, not within 7/5 − δ for any δ > 0. We also show a tight lower bound under a stronger assumption: the MATRIX-TO-LINE problem cannot be approximated within 2 − δ unless 3-colorable graphs can be colored with ⌈4/δ⌉ colors in polynomial time. Currently, the best polynomial time algorithm colors a 3-colorable graph with Õ(n^(3/14)) colors [A. Blum, D. Karger, Inform. Process. Lett. 61 (1) (1997), 49-53]. We apply our MATRIX-TO-LINE algorithm to a problem in computational biology, namely the Radiation Hybrid (RH) problem, i.e., the algorithmic part of a physical mapping method called RH mapping. This gives us the first algorithm with guaranteed convergence for the general RH problem.
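The MATRIX-TO-LINE objective being approximated can be stated compactly. This sketch defines only the objective (the paper's 2-approximation algorithm is not reproduced here):

```python
def maxnorm_error(D, x):
    """MATRIX-TO-LINE objective: the max-norm disagreement between a
    line embedding x (point i placed at x[i]) and the target symmetric
    distance matrix D, i.e. max over pairs i < j of
    | |x[i] - x[j]| - D[i][j] |."""
    n = len(x)
    return max(abs(abs(x[i] - x[j]) - D[i][j])
               for i in range(n) for j in range(i + 1, n))
```

For example, equally spaced points realize the path metric D[i][j] = |i − j| exactly, giving error 0.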

• 50.
KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
Query efficient PCPs with perfect completeness2005In: Theory of Computing, ISSN 1557-2862, Vol. 1, p. 119-149Article in journal (Refereed)

For every integer k > 0 and an arbitrarily small constant ε > 0, we present a PCP characterization of NP where the verifier uses logarithmic randomness, non-adaptively queries 4k + k^2 bits in the proof, accepts a correct proof with probability 1 (i.e., it has perfect completeness), and accepts any supposed proof of a false statement with probability at most 2^(-k^2) + ε. In particular, the verifier achieves optimal amortized query complexity of 1 + δ for arbitrarily small constant δ > 0. Such a characterization was already proved by Samorodnitsky and Trevisan (STOC 2000), but their verifier loses perfect completeness and their proof makes essential use of this feature. By using an adaptive verifier, we can decrease the number of queried bits to 2k + k^2, equal to the number obtained by Samorodnitsky and Trevisan. Finally, we extend some of the results to PCPs over non-Boolean alphabets.
