Valentine Kabanets (SFU)
Small de Morgan formulas make low-frequency sound: Fourier concentration from shrinkage
Speaker:
Valentine Kabanets (Simon Fraser University, Computing Science)
Abstract:
A standard complexity-theoretic way to measure the complexity of a Boolean function is via the size of the smallest algorithm (circuit or formula) computing (or approximating) the function. Another way is via the sparsity of its Fourier decomposition, e.g., the concentration of its Fourier mass on some subset of frequencies. The celebrated result of Linial, Mansour, and Nisan (1993) was the first to establish a connection between circuit complexity and Fourier complexity: they showed that Boolean functions computable by polynomial-size AC0 circuits (a generalization of CNFs and DNFs to arbitrary constant depth) exhibit sharp concentration of the Fourier spectrum on the set of low frequencies (of polylogarithmic weight).
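(For concreteness, and as standard background not taken from the abstract: every Boolean function f : {-1,1}^n -> {-1,1} has a unique Fourier expansion over the parity functions, and "concentration on low frequencies" refers to the squared-coefficient weight on small sets.)

\[
  f(x) = \sum_{S \subseteq [n]} \hat{f}(S)\, \chi_S(x), \qquad \chi_S(x) = \prod_{i \in S} x_i, \qquad \sum_{S \subseteq [n]} \hat{f}(S)^2 = 1 ,
\]

so "Fourier concentration up to level t with error \delta" means \sum_{|S| > t} \hat{f}(S)^2 \le \delta. In this notation, the Linial-Mansour-Nisan result takes t = polylog(n) for polynomial-size AC0 circuits.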
In this talk, I will present analogous results for the class of Boolean functions computable by de Morgan formulas (using AND, OR, and NOT gates) of sub-quadratic size. The Fourier concentration for such functions is over frequencies related to the shrinkage exponent: the quantity measuring how much formulas shrink in size when hit with random restrictions of the variables. Namely, we show that, for an n-variate Boolean function f computable by a formula of size s, the Fourier mass of f over frequencies higher than s^{1/\Gamma} n^{\epsilon} is at most exp(-n^{\epsilon/6}), where \Gamma is the shrinkage exponent for the corresponding class of formulas: \Gamma = 2 for general formulas, and \Gamma \approx 3.27 for read-once formulas. We prove that this Fourier concentration is essentially optimal.
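(Restating the main bound from the abstract in the notation above, for an n-variate f computed by a de Morgan formula of size s:)

\[
  \sum_{|S| \,>\, s^{1/\Gamma} n^{\epsilon}} \hat{f}(S)^2 \;\le\; \exp\!\big(-n^{\epsilon/6}\big).
\]

For general formulas (\Gamma = 2) of subquadratic size s = n^{2-\delta}, the threshold s^{1/2} n^{\epsilon} = n^{1-\delta/2+\epsilon} is sublinear in n for small enough \epsilon, which is what makes the concentration statement nontrivial in the subquadratic regime.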
As applications, we show that subquadratic-size formulas have negligible correlation with parity, and that they are learnable under the uniform distribution, as well as lossily compressible, in subexponential time. We also prove essentially tight bounds on the average sensitivity of read-once formulas.
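(A standard fact, included here only as background: the average sensitivity of f, i.e., the expected number of coordinates whose flip changes the value of f on a uniformly random input, has the Fourier expression)

\[
  \mathrm{as}(f) \;=\; \sum_{S \subseteq [n]} |S|\, \hat{f}(S)^2 ,
\]

so Fourier concentration up to level t with error \delta immediately gives as(f) \le t + \delta n; this is one way the concentration results above connect to average-sensitivity bounds.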
The talk is aimed at the general CS audience: no prior knowledge of circuit complexity or Fourier analysis is necessary.
(Joint work with Russell Impagliazzo (UCSD))
Date:
Friday, November 29, 2013
Time:
12:30 pm - 1:30 pm
Location:
TASC1 Building, Room No. 9204, Simon Fraser University, Burnaby