Publications (10 of 28)
Austrin, P., Kaski, P. & Kubjas, K. (2019). Tensor network complexity of multilinear maps. In: Leibniz International Proceedings in Informatics, LIPIcs. Paper presented at 10th Innovations in Theoretical Computer Science, ITCS 2019, 10-12 January 2019, San Diego, United States. Schloss Dagstuhl - Leibniz-Zentrum für Informatik GmbH, Dagstuhl Publishing
Tensor network complexity of multilinear maps
2019 (English) In: Leibniz International Proceedings in Informatics, LIPIcs, Schloss Dagstuhl - Leibniz-Zentrum für Informatik GmbH, Dagstuhl Publishing, 2019. Conference paper, Published paper (Refereed)
Abstract [en]

We study tensor networks as a model of arithmetic computation for evaluating multilinear maps. These capture any algorithm based on low border rank tensor decompositions, such as O(n^(ω+ε)) time matrix multiplication, and in addition many other algorithms such as O(n log n) time discrete Fourier transform and O*(2^n) time for computing the permanent of a matrix. However, tensor networks sometimes yield faster algorithms than those that follow from low-rank decompositions. For instance, the fastest known O(n^((ω+ε)t)) time algorithms for counting 3t-cliques can be implemented with tensor networks, even though the underlying tensor has border rank n^(3t) for all t ≥ 2. For counting homomorphisms of a general pattern graph P into a host graph on n vertices we obtain an upper bound of O(n^((ω+ε)·bw(P)/2)), where bw(P) is the branchwidth of P. This essentially matches the bound for counting cliques, and yields small improvements over previous algorithms for many choices of P. While powerful, the model still has limitations, and we are able to show a number of unconditional lower bounds for various multilinear maps, including: (a) an Ω(n^(bw(P))) time lower bound for counting homomorphisms from P to an n-vertex graph, matching the upper bound if ω = 2. In particular, for P a v-clique this yields an Ω(n^⌈2v/3⌉) time lower bound for counting v-cliques, and for P a k-uniform v-hyperclique we obtain an Ω(n^v) time lower bound for k ≥ 3, ruling out tensor networks as an approach to obtaining non-trivial algorithms for hyperclique counting and the Max-3-CSP problem. (b) an Ω(2^(0.918n)) time lower bound for the permanent of an n × n matrix.
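For concreteness, the homomorphism-counting problem the abstract refers to can be stated as a brute-force O(n^|V(P)|) enumeration; the sketch below (a hypothetical helper, nothing like the paper's tensor-network algorithm with exponent (ω+ε)·bw(P)/2) just shows what is being counted:

```python
from itertools import product

def count_homomorphisms(P_edges, P_vertices, G_adj):
    """Count maps f: V(P) -> V(G) that send every edge of P to an
    edge of G. Brute force over all |V(G)|^|V(P)| maps, for
    illustration only."""
    n = len(G_adj)
    count = 0
    for f in product(range(n), repeat=len(P_vertices)):
        if all(G_adj[f[u]][f[v]] for (u, v) in P_edges):
            count += 1
    return count

# Triangle pattern into a triangle host: only injective maps work, 3! = 6.
triangle_edges = [(0, 1), (1, 2), (0, 2)]
K3 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(count_homomorphisms(triangle_edges, [0, 1, 2], K3))  # -> 6
```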

Place, publisher, year, edition, pages
Schloss Dagstuhl - Leibniz-Zentrum für Informatik GmbH, Dagstuhl Publishing, 2019
Keywords
Arithmetic complexity, Lower bound, Multilinear map, Tensor network, Discrete Fourier transforms, Graph theory, Matrix algebra, Tensors, Arithmetic computations, Low-rank decomposition, Lower bounds, Multilinear maps, Network complexity, Non-trivial algorithms, Tensor decomposition, Complex networks
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-262393 (URN)10.4230/LIPIcs.ITCS.2019.7 (DOI)2-s2.0-85069528130 (Scopus ID)978-3-95977-095-8 (ISBN)
Conference
10th Innovations in Theoretical Computer Science, ITCS 2019, 10-12 January 2019, San Diego, United States
Note

QC 20191018

Available from: 2019-10-18. Created: 2019-10-18. Last updated: 2019-10-18. Bibliographically approved
Austrin, P., Kaski, P., Koivisto, M. & Nederlof, J. (2018). Sharper Upper Bounds for Unbalanced Uniquely Decodable Code Pairs. IEEE Transactions on Information Theory, 64(2), 1368-1373
Sharper Upper Bounds for Unbalanced Uniquely Decodable Code Pairs
2018 (English) In: IEEE Transactions on Information Theory, ISSN 0018-9448, E-ISSN 1557-9654, Vol. 64, no 2, p. 1368-1373. Article in journal (Refereed), Published
Abstract [en]

Two sets of 0-1 vectors of fixed length form a uniquely decodable code pair if their Cartesian product is of the same size as their sumset, where the addition is pointwise over the integers. For the size of the sumset of such a pair, van Tilborg has given an upper bound in the general case. Urbanke and Li, and later Ordentlich and Shayevitz, have given better bounds in the unbalanced case, that is, when either of the two sets is sufficiently large. Improvements to the latter bounds are presented.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2018
Keywords
Additive combinatorics, binary adder channel, isoperimetric inequality, uniquely decodeable code pair, zero-error capacity
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-222174 (URN)10.1109/TIT.2017.2688378 (DOI)000422916500042 ()2-s2.0-85040946882 (Scopus ID)
Note

QC 20180207

Available from: 2018-02-07. Created: 2018-02-07. Last updated: 2018-02-07. Bibliographically approved
Austrin, P., Chung, K.-M., Mahmoody, M., Pass, R. & Seth, K. (2017). On the Impossibility of Cryptography with Tamperable Randomness. Algorithmica, 79(4), 1052-1101
On the Impossibility of Cryptography with Tamperable Randomness
2017 (English) In: Algorithmica, ISSN 0178-4617, E-ISSN 1432-0541, Vol. 79, no 4, p. 1052-1101. Article in journal (Refereed), Published
Abstract [en]

We initiate a study of the security of cryptographic primitives in the presence of efficient tampering attacks to the randomness of honest parties. More precisely, we consider p-tampering attackers that may efficiently tamper with each bit of the honest parties' random tape with probability p, but have to do so in an "online" fashion. Our main result is a strong negative result: We show that any secure encryption scheme, bit commitment scheme, or zero-knowledge protocol can be "broken" with advantage Ω(p) by a p-tampering attacker. The core of this result is a new algorithm for biasing the output of bounded-value functions, which may be of independent interest. We also show that this result cannot be extended to primitives such as signature schemes and identification protocols: assuming the existence of one-way functions, such primitives can be made resilient to (1/poly(n))-tampering attacks where n is the security parameter.
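The Ω(p) bias can be illustrated at toy scale. The sketch below is a hypothetical brute-force greedy tamperer, not the paper's efficient attacker: it exactly computes the expectation of a ±1-valued function of n random bits when each bit is, with probability p, replaced by whichever value looks better for the attacker.

```python
def tampered_expectation(f, n, p, prefix=()):
    """Expected value of f over n random bits when an online
    p-tampering attacker replaces each bit, with probability p,
    by the value maximizing the conditional expectation of f.
    Exhaustive recursion over the full bit tree: 2^n leaves."""
    if len(prefix) == n:
        return f(prefix)
    e0 = tampered_expectation(f, n, p, prefix + (0,))
    e1 = tampered_expectation(f, n, p, prefix + (1,))
    return p * max(e0, e1) + (1 - p) * 0.5 * (e0 + e1)

parity = lambda bits: (-1) ** sum(bits)      # untampered expectation: 0
print(tampered_expectation(parity, 4, 0.1))  # -> 0.1 (up to float rounding)
```

For parity, only the last bit's choice matters, so the greedy tamperer shifts the expectation from 0 to exactly p, matching the Ω(p) advantage in the abstract.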

Place, publisher, year, edition, pages
Springer, 2017
Keywords
Tampering, Randomness, Encryption
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-215773 (URN)10.1007/s00453-016-0219-7 (DOI)000411982500004 ()2-s2.0-84990840579 (Scopus ID)
Note

QC 20171019

Available from: 2017-10-19. Created: 2017-10-19. Last updated: 2018-01-13. Bibliographically approved
Austrin, P., Benabbas, S. & Georgiou, K. (2016). Better Balance by Being Biased: A 0.8776-Approximation for Max Bisection. ACM Transactions on Algorithms, 13(1), Article ID 2.
Better Balance by Being Biased: A 0.8776-Approximation for Max Bisection
2016 (English) In: ACM Transactions on Algorithms, ISSN 1549-6325, E-ISSN 1549-6333, Vol. 13, no 1, article id 2. Article in journal (Refereed), Published
Abstract [en]

Recently, Raghavendra and Tan (SODA 2012) gave a 0.85-approximation algorithm for the MAX BISECTION problem. We improve their algorithm to a 0.8776-approximation. As MAX BISECTION is hard to approximate within α_GW + ε ≈ 0.8786 under the Unique Games Conjecture (UGC), our algorithm is nearly optimal. We conjecture that MAX BISECTION is approximable within α_GW - ε, that is, that the bisection constraint (essentially) does not make MAX CUT harder. We also obtain an optimal algorithm (assuming the UGC) for the analogous variant of MAX 2-SAT. Our approximation ratio for this problem exactly matches the optimal approximation ratio for MAX 2-SAT, that is, α_LLZ + ε ≈ 0.9401, showing that the bisection constraint does not make MAX 2-SAT harder. This improves on a 0.93-approximation for this problem from Raghavendra and Tan.
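MAX BISECTION is MAX CUT restricted to perfectly balanced partitions. The sketch below is a hypothetical exact solver for tiny graphs by exhaustive search (nothing like the paper's SDP-based 0.8776-approximation), just to make the problem concrete:

```python
from itertools import combinations

def max_bisection(n, edges):
    """Exact MAX BISECTION on an n-vertex graph (n even): maximize
    the number of crossing edges over all balanced partitions.
    Tries all C(n, n/2) bisections; exponential, illustration only."""
    best = 0
    for side in combinations(range(n), n // 2):
        S = set(side)
        cut = sum(1 for (u, v) in edges if (u in S) != (v in S))
        best = max(best, cut)
    return best

# 4-cycle: the bisection {0, 2} vs {1, 3} cuts all four edges.
print(max_bisection(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # -> 4
```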

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2016
Keywords
Max-bisection, approximation algorithms, semidefinite programming
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-202652 (URN)10.1145/2907052 (DOI)000392927800002 ()2-s2.0-84992195833 (Scopus ID)
Note

QC 20170306

Available from: 2017-03-06. Created: 2017-03-06. Last updated: 2018-05-23. Bibliographically approved
Austrin, P., Kaski, P., Koivisto, M. & Nederlof, J. (2016). Sharper upper bounds for unbalanced Uniquely Decodable Code Pairs. In: 2016 IEEE International Symposium on Information Theory. Paper presented at 2016 IEEE International Symposium on Information Theory, ISIT 2016, Universitat Pompeu Fabra, Barcelona, Spain, 10 July 2016 through 15 July 2016 (pp. 335-339). Institute of Electrical and Electronics Engineers (IEEE), 2016, Article ID 7541316.
Sharper upper bounds for unbalanced Uniquely Decodable Code Pairs
2016 (English) In: 2016 IEEE International Symposium on Information Theory, Institute of Electrical and Electronics Engineers (IEEE), 2016, Vol. 2016, p. 335-339, article id 7541316. Conference paper, Published paper (Refereed)
Abstract [en]

Two sets A, B ⊆ {0,1}^n form a Uniquely Decodable Code Pair (UDCP) if every pair a ∈ A, b ∈ B yields a distinct sum a + b, where the addition is over Z^n. We show that every UDCP A, B, with |A| = 2^((1-ε)n) and |B| = 2^(βn), satisfies β ≤ 0.4228 + √ε. For sufficiently small ε, this bound significantly improves previous bounds by Urbanke and Li [Information Theory Workshop '98] and Ordentlich and Shayevitz [2014, arXiv:1412.8415], which upper bound β by 0.4921 and 0.4798, respectively, as ε approaches 0.
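The UDCP condition is directly checkable on small examples: the pair is uniquely decodable exactly when all |A|·|B| coordinatewise integer sums are distinct, i.e. the sumset has full size. A minimal sketch (hypothetical helper):

```python
def is_udcp(A, B):
    """A, B: collections of 0/1 tuples of equal length n. The pair is
    a UDCP iff every sum a + b (coordinatewise over the integers) is
    distinct, i.e. the sumset has size |A| * |B|."""
    sums = {tuple(x + y for x, y in zip(a, b)) for a in A for b in B}
    return len(sums) == len(A) * len(B)

A = [(0, 0), (1, 1)]
B = [(0, 0), (0, 1)]
print(is_udcp(A, B))  # -> True: sums (0,0), (0,1), (1,1), (1,2) all differ

# Not uniquely decodable: (0,1)+(1,0) and (1,0)+(0,1) both give (1,1).
print(is_udcp([(0, 1), (1, 0)], [(1, 0), (0, 1)]))  # -> False
```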

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2016
Series
IEEE International Symposium on Information Theory, ISSN 2157-8095
Keywords
Decoding, Uniquely decodable codes, Upper Bound, Information theory
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-194967 (URN)10.1109/ISIT.2016.7541316 (DOI)000390098700068 ()2-s2.0-84986000922 (Scopus ID)9781509018062 (ISBN)
Conference
2016 IEEE International Symposium on Information Theory, ISIT 2016, Universitat Pompeu Fabra, Barcelona, Spain, 10 July 2016 through 15 July 2016
Note

QC 20161122

Available from: 2016-11-22. Created: 2016-11-01. Last updated: 2017-01-27. Bibliographically approved
Austrin, P., Kaski, P., Koivisto, M. & Nederlof, J. (2015). Subset sum in the absence of concentration. In: Leibniz International Proceedings in Informatics, LIPIcs. Paper presented at 32nd International Symposium on Theoretical Aspects of Computer Science, STACS 2015, 4 March 2015 through 7 March 2015 (pp. 48-61).
Subset sum in the absence of concentration
2015 (English) In: Leibniz International Proceedings in Informatics, LIPIcs, 2015, p. 48-61. Conference paper, Published paper (Refereed)
Abstract [en]

We study the exact time complexity of the Subset Sum problem. Our focus is on instances that lack additive structure in the sense that the sums one can form from the subsets of the given integers are not strongly concentrated on any particular integer value. We present a randomized algorithm that runs in O(2^(0.3399n) · B^4) time on instances with the property that no value can arise as a sum of more than B different subsets of the n given integers.
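The concentration parameter B in the running time is the maximum, over all values, of how many subsets hit that value. A small counting-DP sketch (hypothetical helper for tiny inputs, not the paper's algorithm) makes the parameter concrete:

```python
from collections import Counter

def concentration(nums):
    """Return B = max over values v of the number of subsets of nums
    summing to v, via a subset-sum counting DP over all 2^n subsets'
    sums (polynomial in n times the sum range)."""
    counts = Counter({0: 1})              # the empty subset sums to 0
    for x in nums:
        for s, c in list(counts.items()):
            counts[s + x] += c            # extend each old subset by x
    return max(counts.values())

# {1, 2, 3}: only the value 3 is hit twice ({3} and {1,2}), so B = 2.
print(concentration([1, 2, 3]))     # -> 2
# A highly concentrated instance: C(4, 2) = 6 subsets sum to 2.
print(concentration([1, 1, 1, 1]))  # -> 6
```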

Keywords
Additive combinatorics, Exponential-time algorithm, Homomorphic hashing, Littlewood-Offord problem, Subset sum, Exponential time algorithm, Set theory
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-167392 (URN)10.4230/LIPIcs.STACS.2015.48 (DOI)2-s2.0-84923924069 (Scopus ID)9783939897781 (ISBN)
Conference
32nd International Symposium on Theoretical Aspects of Computer Science, STACS 2015, 4 March 2015 through 7 March 2015
Funder
Swedish Research Council, 621-2012-4546
Note

QC 20150527

Available from: 2015-05-27. Created: 2015-05-22. Last updated: 2018-01-11. Bibliographically approved
Austrin, P., Håstad, J. & Guruswami, V. (2014). (2 + ε)-Sat Is NP-hard. In: Proceedings - Annual IEEE Symposium on Foundations of Computer Science, FOCS. Paper presented at 55th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2014, 18 October 2014 through 21 October 2014 (pp. 1-10).
(2 + ε)-Sat Is NP-hard
2014 (English) In: Proceedings - Annual IEEE Symposium on Foundations of Computer Science, FOCS, 2014, p. 1-10. Conference paper, Published paper (Refereed)
Abstract [en]

We prove the following hardness result for a natural promise variant of the classical CNF-satisfiability problem: Given a CNF formula where each clause has width w and the guarantee that there exists an assignment satisfying at least g = ⌈w/2⌉ - 1 literals in each clause, it is NP-hard to find a satisfying assignment to the formula (that sets at least one literal to true in each clause). On the other hand, when g = ⌈w/2⌉, it is easy to find a satisfying assignment via simple generalizations of the algorithms for 2-SAT. Viewing 2-SAT ∈ P as easiness of SAT when 1-in-2 literals are true in every clause, and NP-hardness of 3-SAT as intractability of SAT when 1-in-3 literals are true, our result shows, for any fixed ε > 0, the hardness of finding a satisfying assignment to instances of "(2 + ε)-SAT" where the density of satisfied literals in each clause is promised to exceed 1/(2+ε). We also strengthen the results to prove that given a (2k + 1)-uniform hypergraph that can be 2-colored such that each edge has perfect balance (at most k + 1 vertices of either color), it is NP-hard to find a 2-coloring that avoids a monochromatic edge. In other words, a set system with discrepancy 1 is hard to distinguish from a set system with worst possible discrepancy.
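The promise can be checked by brute force on tiny instances: find the largest g such that some assignment satisfies at least g literals in every clause. The sketch below is a hypothetical helper using DIMACS-style integer literals:

```python
from itertools import product

def best_guarantee(clauses, n_vars):
    """Clauses are tuples of nonzero ints (+i means x_i, -i means
    not x_i). Returns the max over all 2^n assignments of the minimum
    number of satisfied literals per clause. Exponential; toy sizes only."""
    best = 0
    for assign in product([False, True], repeat=n_vars):
        sat = lambda lit: assign[abs(lit) - 1] == (lit > 0)
        best = max(best, min(sum(map(sat, c)) for c in clauses))
    return best

# Width w = 3: no assignment satisfies 2 literals in both clauses at once,
# so the best guarantee here is g = ceil(w/2) - 1 = 1, the hard regime.
clauses = [(1, 2, 3), (-1, -2, -3)]
print(best_guarantee(clauses, 3))  # -> 1
```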

Series
Proceedings - Annual IEEE Symposium on Foundations of Computer Science, FOCS, ISSN 0272-5428
Keywords
complexity dichotomy, Constraint satisfaction, discrepancy, polymorphisms, probabilistically checkable proofs, promise problems
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-167553 (URN)10.1109/FOCS.2014.9 (DOI)000366324300001 ()2-s2.0-84919962415 (Scopus ID)9781479965175 (ISBN)
Conference
55th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2014, 18 October 2014 through 21 October 2014
Note

QC 20150608

Available from: 2015-06-08. Created: 2015-05-22. Last updated: 2018-01-11. Bibliographically approved
Austrin, P. & Khot, S. (2014). A Simple Deterministic Reduction for the Gap Minimum Distance of Code Problem. IEEE Transactions on Information Theory, 60(10), 6636-6645
A Simple Deterministic Reduction for the Gap Minimum Distance of Code Problem
2014 (English) In: IEEE Transactions on Information Theory, ISSN 0018-9448, E-ISSN 1557-9654, Vol. 60, no 10, p. 6636-6645. Article in journal (Refereed), Published
Abstract [en]

We present a simple deterministic gap-preserving reduction from SAT to the minimum distance of code problem over F_2. We also show how to extend the reduction to work over any fixed finite field. Previously, a randomized reduction was known due to Dumer, Micciancio, and Sudan, which was recently derandomized by Cheng and Wan. These reductions rely on highly nontrivial coding theoretic constructions, whereas our reduction is elementary. As an additional feature, our reduction gives hardness within a constant factor even for asymptotically good codes, i.e., having constant positive rate and relative distance. Previously, it was not known how to achieve a deterministic reduction for such codes.
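The quantity whose gap version the reduction targets, the minimum distance of a linear code over F_2, is the smallest Hamming weight of a nonzero codeword. A brute-force sketch (hypothetical helper, exponential in the code dimension):

```python
from itertools import product

def min_distance(G):
    """Minimum distance of the binary linear code generated by the
    rows of G (a list of 0/1 rows), by enumerating all 2^k - 1
    nonzero codewords. Illustration only; the problem is NP-hard."""
    k, n = len(G), len(G[0])
    best = n
    for msg in product([0, 1], repeat=k):
        if any(msg):
            word = [sum(m * g for m, g in zip(msg, col)) % 2
                    for col in zip(*G)]        # codeword = msg * G over F_2
            best = min(best, sum(word))        # Hamming weight
    return best

# Generator of a [7,4] Hamming code: minimum distance 3.
G_hamming = [[1, 0, 0, 0, 1, 1, 0],
             [0, 1, 0, 0, 1, 0, 1],
             [0, 0, 1, 0, 0, 1, 1],
             [0, 0, 0, 1, 1, 1, 1]]
print(min_distance(G_hamming))  # -> 3
```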

Keywords
Linear code, computational complexity
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-154750 (URN)10.1109/TIT.2014.2340869 (DOI)000342416900048 ()2-s2.0-84907221430 (Scopus ID)
Note

QC 20141105

Available from: 2014-11-05. Created: 2014-10-27. Last updated: 2018-01-11. Bibliographically approved
Austrin, P., Chung, K.-M., Mahmoody, M., Pass, R. & Seth, K. (2014). On the impossibility of cryptography with tamperable randomness. In: 34th Annual International Cryptology Conference, CRYPTO 2014. Paper presented at 17 August 2014 through 21 August 2014, Santa Barbara, CA (pp. 462-479). (PART 1)
On the impossibility of cryptography with tamperable randomness
2014 (English) In: 34th Annual International Cryptology Conference, CRYPTO 2014, 2014, no PART 1, p. 462-479. Conference paper, Published paper (Refereed)
Abstract [en]

We initiate a study of the security of cryptographic primitives in the presence of efficient tampering attacks to the randomness of honest parties. More precisely, we consider p-tampering attackers that may efficiently tamper with each bit of the honest parties' random tape with probability p, but have to do so in an "online" fashion. Our main result is a strong negative result: We show that any secure encryption scheme, bit commitment scheme, or zero-knowledge protocol can be "broken" with probability p by a p-tampering attacker. The core of this result is a new Fourier analytic technique for biasing the output of bounded-value functions, which may be of independent interest. We also show that this result cannot be extended to primitives such as signature schemes and identification protocols: assuming the existence of one-way functions, such primitives can be made resilient to (1/poly(n))-tampering attacks where n is the security parameter.

Keywords
Encryption, Randomness, Tampering, Random processes, Analytic technique, Cryptographic primitives, Encryption schemes, Identification protocol, Security parameters, Zero-knowledge protocols, Cryptography
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-167864 (URN)10.1007/978-3-662-44371-2_26 (DOI)2-s2.0-84905380508 (Scopus ID)9783662443705 (ISBN)
Conference
17 August 2014 through 21 August 2014, Santa Barbara, CA
Note

QC 20150615

Available from: 2015-06-15. Created: 2015-05-22. Last updated: 2015-06-15. Bibliographically approved
Austrin, P. & Khot, S. (2013). A characterization of approximation resistance for even k-partite CSPs. In: ITCS 2013 - Proceedings of the 2013 ACM Conference on Innovations in Theoretical Computer Science. Paper presented at 2013 4th ACM Conference on Innovations in Theoretical Computer Science, ITCS 2013; Berkeley, CA; United States; 9 January 2013 through 12 January 2013 (pp. 187-196). New York: Association for Computing Machinery (ACM)
A characterization of approximation resistance for even k-partite CSPs
2013 (English) In: ITCS 2013 - Proceedings of the 2013 ACM Conference on Innovations in Theoretical Computer Science, New York: Association for Computing Machinery (ACM), 2013, p. 187-196. Conference paper, Published paper (Refereed)
Abstract [en]

A constraint satisfaction problem (CSP) is said to be approximation resistant if it is hard to approximate better than the trivial algorithm which picks a uniformly random assignment. Assuming the Unique Games Conjecture, we give a characterization of approximation resistance for k-partite CSPs defined by an even predicate.
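The "trivial algorithm" benchmark in the abstract is just the predicate's density: the fraction of inputs it accepts, which equals the expected fraction of constraints a uniformly random assignment satisfies. A minimal sketch (hypothetical helper):

```python
from itertools import product

def random_assignment_ratio(predicate, k):
    """Fraction of the 2^k Boolean inputs accepted by a k-ary predicate.
    This is what a uniformly random assignment achieves in expectation,
    i.e. the baseline that approximation resistance is measured against."""
    accepted = sum(1 for x in product([False, True], repeat=k)
                   if predicate(x))
    return accepted / 2 ** k

# Width-3 OR (the Max 3-Sat predicate): a random assignment gets 7/8.
print(random_assignment_ratio(any, 3))  # -> 0.875
```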

Place, publisher, year, edition, pages
New York: Association for Computing Machinery (ACM), 2013
Keywords
approximation resistance, unique games conjecture
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-136052 (URN)10.1145/2422436.2422459 (DOI)2-s2.0-84873348647 (Scopus ID)978-145031859-4 (ISBN)
Conference
2013 4th ACM Conference on Innovations in Theoretical Computer Science, ITCS 2013; Berkeley, CA; United States; 9 January 2013 through 12 January 2013
Note

QC 20131203

Available from: 2013-12-03. Created: 2013-12-03. Last updated: 2018-01-11. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0001-8217-0158
