Why P could be equal to NP

In the previous post we established a relation between the P vs NP problem and the problem of optimizing a non-negative quartic polynomial. Here I will give some intuition for why it may be possible to decide whether a quartic polynomial has a non-zero global minimum. I’ll give only the short intuition, leaving the details for professionals. I’m not interested in putting all the commas in place.


  1. NP-complete decision problems
  2. Non-negative quartic polynomials
  3. Reznick perturbation lemma
  4. Sum of squares optimization
  5. Blekherman’s beautiful high-school-level exposition of why not all positive quartics are sums of squares
  6. Extremal polynomials
  7. Some combinatorics
  8. Semialgebraic geometry

  1. NP-complete problems:
    Partition problem: given a multiset of integers a_k, is it possible to divide it into complementary multisets having the same sum, i.e. do there exist x_k \in \{\pm 1\} such that \sum_k a_k x_k = 0?
    For those who love 3-SAT: given a set of logical clauses with exactly 3 binary variables (literals) x_k each, decide whether all the clauses can be satisfied (evaluate to true) at once. To encode this, we want a set of polynomials evaluating to 0 when the corresponding clause is true: \neg x_1 \vee x_2 \vee x_3 translates into x_1 (1-x_2) (1-x_3), with x_k \in \{0, 1\}.
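As a quick sanity check of the clause encoding (a minimal sketch of my own, not from the post; the helper names clause_poly and clause_bool are made up), the polynomial vanishes exactly on the satisfying assignments of the clause:

```python
from itertools import product

# Encode the clause (not x1) or x2 or x3 as the polynomial x1*(1-x2)*(1-x3):
# each literal contributes the factor that vanishes when that literal is true.
def clause_poly(x1, x2, x3):
    return x1 * (1 - x2) * (1 - x3)

def clause_bool(x1, x2, x3):
    return bool((not x1) or x2 or x3)

# The polynomial is 0 exactly on the satisfying assignments of the clause.
for x in product((0, 1), repeat=3):
    assert (clause_poly(*x) == 0) == clause_bool(*x)
print("clause polynomial vanishes exactly on satisfying assignments")
```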
  2. Quartics:
    x_k \in \{\pm 1 \} \Rightarrow (x_k^2-1)^2=0,
    x_k \in \{0, 1\} \Rightarrow x_k^2 (x_k-1)^2 = 0,
    Combining these, we have for the partition problem: \alpha \sum_k (x_k^2-1)^2 + \left( \sum_k a_k x_k \right)^2; and for 3-SAT: \alpha \sum_k x_k^2 (x_k-1)^2 + \sum_k c_k, where each c_k is one 3-term clause polynomial. The latter polynomial is not always positive, but for large enough \alpha it is, when the problem is not satisfiable, thanks to the third ingredient.
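A tiny illustration of the partition encoding (my own sketch; the instance a = [1, 2, 3] and the function name q are hypothetical choices):

```python
from itertools import product

# Partition instance: can {1, 2, 3} be split into halves of equal sum? (yes: 1+2 = 3)
a = [1, 2, 3]
alpha = 10  # any alpha > 0 works for this vertex-only check

# Quartic penalty: alpha * sum((x_k^2 - 1)^2) + (sum(a_k x_k))^2
def q(x):
    return (alpha * sum((xk * xk - 1) ** 2 for xk in x)
            + sum(ak * xk for ak, xk in zip(a, x)) ** 2)

# On the +-1 vertices the first term vanishes, so the minimum over vertices
# is 0 exactly when a balanced partition (sum a_k x_k = 0) exists.
best = min(q(x) for x in product((-1, 1), repeat=len(a)))
print(best)  # 0 -> this instance is satisfiable
```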
  3. Reznick perturbation lemma (Lemma 3.1 in the reference), adapted to quartics:
    We are given a positive polynomial with no zeros at infinity (f = \sum_k x_k^2 (x_k-1)^2), which is condition 1 of the lemma. If the NP-complete problem is not satisfiable, V_1 = \emptyset, which is condition 2. Finally, g(\omega) > 0 for \omega \in Z(f), where Z(f) is the zero set of f, the set of x where the polynomial f vanishes. Therefore there exists c > 0 such that f + cg is positive.
    It should be noted here that the global minimum is polynomially bounded away from 0, and so is the constant c.
  4. Both are covered in the previous post. In short, the number of coefficients of a quadratic polynomial (squared, these give quartic sums of squares) is roughly quadratic in the number of variables. It is sufficient to fix the value of a quadratic polynomial at a quadratic number of points to constrain all of its coefficients, and therefore to constrain its values at all points in the hypercube of interest. The number of coefficients of a quartic polynomial is roughly quartic, so there is more freedom in the choice of coefficients. It is possible to select the quartic coefficients in a way that constrains all the quadratics to vanish on the hypercube.
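The coefficient counts can be checked directly: the number of monomials of degree at most d in n variables is C(n + d, d) (a standard fact; the script below is just my own illustration of the quadratic-versus-quartic gap):

```python
from math import comb

# Monomials of degree <= d in n variables: C(n + d, d).
# Quadratic: ~n^2/2 coefficients; quartic: ~n^4/24 -- far more freedom.
for n in (10, 20, 40):
    quad = comb(n + 2, 2)      # coefficients of a quadratic polynomial
    quartic = comb(n + 4, 4)   # coefficients of a quartic polynomial
    print(n, quad, quartic)
```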
  5. The simplest example of a quartic polynomial that is not a sum of squares is the Robinson polynomial: R_{1,2,3} = \sum \limits_{k=1,2,3} x_k^2 (x_k-1)^2 + 2 x_1 x_2 x_3 (x_1+x_2+x_3-2). This is an extremal polynomial (Choi, Lam (1977); Reznick (2007)): it cannot be non-trivially decomposed into other forms. This polynomial has seven zeros on the hypercube vertices and one non-zero value. Therefore we can independently control the value at one vertex of the hypercube, (1,1,1), by multiplying the Robinson polynomial by a positive constant.
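These claims about R_{1,2,3} are easy to verify numerically on the hypercube vertices (my own sketch; the function name R is arbitrary):

```python
from itertools import product

# R_{1,2,3} = sum_k x_k^2 (x_k - 1)^2 + 2 x1 x2 x3 (x1 + x2 + x3 - 2)
def R(x1, x2, x3):
    return (sum(xk ** 2 * (xk - 1) ** 2 for xk in (x1, x2, x3))
            + 2 * x1 * x2 * x3 * (x1 + x2 + x3 - 2))

# Evaluate on all 8 vertices of the {0,1}^3 hypercube.
values = {v: R(*v) for v in product((0, 1), repeat=3)}
zeros = [v for v, r in values.items() if r == 0]
print(len(zeros), values[(1, 1, 1)])  # seven zeros; value 2 at (1,1,1)
```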
  6. Let R_{\bar 1, 2, 3} = \sum \limits_{k=1,2,3} x_k^2 (x_k-1)^2 + 2 (1-x_1) x_2 x_3 ((1-x_1)+x_2+x_3-2), i.e. we replace x_1 \rightarrow 1-x_1; we call this negation (by analogy with binary variables and their translation to polynomials). This is also an extremal positive polynomial, and it controls the value at (0,1,1). In the same way we can make extremal polynomials that independently control all the vertices of the hypercube. Let L_+(R_{1,2,3}) be a linear combination, with positive coefficients, of all the Robinson polynomials controlling the different vertices of the hypercube in 3 variables (i.e. a point inside the cone of positive polynomials spanned by the Robinson polynomials). Now consider the polynomial L_+(R_{1,2,3}) + L_+(R_{1,2,4}) + L_+(R_{1,3,4}) + L_+(R_{2,3,4}) + \alpha_1 \sum \limits_{k=1,2,3,4} x_k^2 (x_k-1)^2 + \alpha_2 x_1 x_2 x_3 x_4. Once the value of this polynomial at x = (1,1,1,1) is positive, there exists \alpha_1 > 0 such that the polynomial is positive! The same is true if we sum up all negations of the quartic term: L_+(R_{1,2,3}) + L_+(R_{1,2,4}) + L_+(R_{1,3,4}) + L_+(R_{2,3,4}) + \beta \sum \limits_{k=1,2,3,4} x_k^2 (x_k-1)^2 + \sum \limits_{n_i=0,1,\ i=1..4} \alpha_{\{n_i\}} \prod_i x_i^{n_i} (1-x_i)^{1-n_i}. Once the values at the 16 hypercube vertices are positive, there is a \beta^* such that for all \beta > \beta^* the polynomial is positive. Note that all 16 vertices are controlled independently.
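The negated polynomial can be checked the same way (my sketch; R_neg1 is a made-up name):

```python
from itertools import product

# R_{1,2,3} with the substitution x1 -> 1 - x1 ("negation" of the first variable).
def R_neg1(x1, x2, x3):
    y1 = 1 - x1
    return (sum(xk ** 2 * (xk - 1) ** 2 for xk in (x1, x2, x3))
            + 2 * y1 * x2 * x3 * (y1 + x2 + x3 - 2))

# Note that x1^2 (x1 - 1)^2 is symmetric under x1 -> 1 - x1, so only the
# cubic term moves: the single non-zero vertex shifts from (1,1,1) to (0,1,1).
values = {v: R_neg1(*v) for v in product((0, 1), repeat=3)}
print(values[(0, 1, 1)], sum(1 for r in values.values() if r == 0))
```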

    Now take all combinations of 4 coordinates/variables/literals out of the n dimensions/variables/literals of the problem. Construct the polynomial from the previous paragraph for each combination and take a linear combination of them with positive coefficients. We get positive polynomials (not extremal) that control a quartic number of points (in the number of variables), which is exactly the number of points required to constrain all the coefficients of a quartic polynomial. It should be noted that we would not be able to find the global minimum of the polynomial, because the quartic terms (those not controlled by the Robinson terms) must evaluate to positive values; but remember that there is a polynomial gap for an unsatisfiable problem. Therefore we can require all vertices to have positive values.
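Orders of magnitude for this counting argument (my rough illustration; the factor 16 counts the vertices of each 4-variable sub-hypercube, ignoring overlaps between subsets):

```python
from math import comb

# C(n, 4) choices of 4 variables, each contributing up to 16 controlled
# vertices: a quartic number of points, matching the ~C(n + 4, 4)
# coefficients of a quartic polynomial in n variables.
for n in (10, 20, 40):
    controlled = comb(n, 4) * 16   # controlled points (with overlaps)
    coeffs = comb(n + 4, 4)        # quartic coefficients to constrain
    print(n, controlled, coeffs)   # both grow like n^4
```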

    The only missing part here is an estimate of the bound on \beta^*. It may be large, but it should be polynomial in the number of bits and the number of dimensions.

  7. We are actually interested in the positive solution inside the hypercube. So we have additional terms that reduce the complexity of the problem, i.e. semialgebraic bounds on the region: x_k > 0, 1 - x_k > 0 for 3-SAT and 1 + x_k > 0, 1 - x_k > 0 for the partition problem. This adds additional freedom in the choice of variables, and keeps the value of \beta quite small.

Finally, we can control the values of a quartic polynomial at a quartic number of points such that the difference between the given unsatisfiable problem and the resulting quartic is a sum of squares, including 1. In the latter case we have a linear program in a polynomial number of variables.

I’m not a mathematician, and I hate putting all the commas in place. I’m more than happy to unpack unclear points, but I will refuse to estimate \beta, so that no one can claim I solved the problem, just in case. And remember: science moves forward not because established scientists change their minds, but because young people come.

Sincerely yours,


Update 12 Jan 2016.

I see where the skepticism comes from. The perturbation can be quite strong. We can require the quartic polynomial to be zero (with zero gradient) at roughly a cubic number of generic points. All quadratics would be zero on this set, since they have only a quadratic number of monomials, so we have an over-determined system of linear equations on the coefficients of the quadratics. On the other hand, we can freely move those cubically many points in space (up to infinity) and still get possibly positive polynomials. This is a problem on one hand; on the other hand, it opens a way for continuation methods. By the way, via homogenization the problem of infinity is replaced by a path along the unit sphere. Therefore there is more good than bad. I imagine a continuation method for solving the optimization problem by starting with well-defined positive polynomials (say, sums of squares) and splitting extra solutions off from the well-defined ones (say, on a hypercube) toward the correct coefficients until we get enough of them (up to a cubic number, where by the count of coefficients they will approach extremal polynomials). We can actually have a polynomial number of separate splits with their (conical/non-negative) sum leading to the desired optimized polynomial. The big question here is the preservation of positivity, which is a global property.

Update 24 Jan 2016.

The following is in words that need translation to math.

If we start with the sum of squares \sum (x_k^2 - 1)^2, which has a clear solution, we can now pick a cubic number of points around the solution (very close) and require the quartic polynomial to be round at these points (i.e. the value and gradient are zero there and the Hessian is positive definite); then this polynomial will be positive and is not a sum of squares. More generally, the quartic polynomial will be positive if more than a quadratic number of points (the number of monomials in a quadratic polynomial of the same dimension) are round and there is a continuous path to the original sum of squares such that this property holds at every point along the path. That would be an alternative (to Hilbert’s method) way to construct positive polynomials.
