Semiprime factorization and modular arithmetic

Edit 6.6.2016. Premature optimization is a source of errors. The following is completely wrong. I would like to smoke the same thing again. Linear algebra does not work here. We need a characterization of the quotient space. What I do not understand is why fast methods work modulo the number itself, but not modulo other primes.


Suppose we have a number N=xy, where we know N and want to find the prime factors x and y.

Let us take a small prime p; then we can write N= (x_p p + a_p)(y_p p + b_p)= (x_p y_p) p^2 + (b_p x_p + a_p y_p) p + a_p b_p.

Step 0

One can observe that a_p b_p= N \mod p. Since the multiplicative group modulo a prime is cyclic, for each a_p \neq 0 there is a unique b_p satisfying the equation. In other words, we need to test p-1 values to find the correct one, if we can. From now on I assume a_p, b_p satisfy a_p b_p=  N \mod p, and therefore p divides N- a_p b_p .
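
A minimal numeric sketch of Step 0 in Matlab (no toolboxes needed); the values N = 77 and p = 5 are arbitrary illustrations and not part of the argument:

% Step 0: list the residue pairs (a_p, b_p) with a_p * b_p = N mod p
N = 77;                             % = 7 * 11, a toy semiprime
p = 5;                              % a small prime
r = mod(N, p);
for a = 1:p-1
    [~, u] = gcd(a, p);             % extended gcd: a*u + p*v = 1
    b = mod(u*r, p);                % b = a^{-1} * (N mod p) mod p
    fprintf('a_p = %d, b_p = %d, a_p*b_p mod p = %d\n', a, b, mod(a*b, p));
end
% the true factors 7 and 11 give the pair a_p = 2, b_p = 1 in this list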

Step 1

One can observe that b_p x_p + a_p y_p = \frac{N- a_p b_p}{p} \mod p, so we can write x_p = -b_p^{-1} a_p y_p + b_p^{-1} \frac{N- a_p b_p}{p} \mod p, since b_p \neq 0 . In other words, \exists t_{p1} \in \mathbb{Z} | x_p = -b_p^{-1} a_p y_p + b_p^{-1} \frac{N- a_p b_p}{p}+t_{p1} p . Substituting this into the original equation we get N - a_p b_p = (-b_p^{-1} a_p y_p + b_p^{-1} \frac{N- a_p b_p}{p}+t_{p1} p ) y_p p^2 + (b_p (-b_p^{-1} a_p y_p + b_p^{-1} \frac{N- a_p b_p}{p}+t_{p1} p )  + a_p y_p) p =  t_{p1} y_p p^3 + (-b_p^{-1} a_p y_p^2 + b_p^{-1} \frac{N- a_p b_p}{p} y_p +b_p t_{p1} ) p^2 + N- a_p b_p . Summarizing,
x_p = -b_p^{-1} a_p y_p + b_p^{-1} \frac{N- a_p b_p}{p}+t_{p1} p
t_{p1} y_p p + (-b_p^{-1} a_p y_p^2 + b_p^{-1} \frac{N- a_p b_p}{p} y_p +b_p t_{p1} ) =0  .

Step 2
t_{p1}= b_p^{-2} (a_p y_p^2 - \frac{N- a_p b_p}{p} y_p) +t_{p2} p, t_{p2} \in \mathbb{Z}.
x_p= -b_p^{-1} a_p y_p + b_p^{-1} \frac{N- a_p b_p}{p}+b_p^{-2} (a_p y_p^2 - \frac{N- a_p b_p}{p} y_p) p + t_{p2} p^2

Step \log_p N
x_p= P^{\log_p N}(y_p)+ t_{p\log_p N} p ^ {\log_p N} where P^{\log_p N}(y_p) is a univariate polynomial in y_p of degree \log_p N

Final Step
In other words, x_p = P^{\log_p N}(y_p) \mod p ^ {\log_p N}. Substituting this into the original equation we get a modular univariate polynomial equation of degree \log_p N+1. We need to factorize it and check the resulting \log_p N+1 values of x_p.

I’ll leave the analysis of algorithm complexity to professionals. Have fun.


Why P could be equal to NP

In the previous post we established a relation between the P vs NP problem and the optimization of non-negative quartic polynomials. Here I will give some intuition for why it may be possible to decide that a quartic polynomial has a non-zero global minimum. I will give only the short intuition, leaving the details for professionals. I am not interested in putting all the commas in place.

Ingredients:

  1. NP-complete decision problems
  2. Non-negative quartic polynomials
  3. Reznick perturbation lemma
  4. Sum of squares optimization
  5. Blekherman's beautiful high-school-level exposition of why not all positive quartics are sums of squares
  6. Extremal polynomials
  7. Some combinatorics
  8. Semialgebraic geometry

  1. NP-complete problems:
    Partition problem: given a multiset of integers a_k, is it possible to divide it into two complementary multisets having the same sum, i.e. do there exist x_k \in \{\pm 1 \} such that \sum_k a_k x_k =0.
    For those who love 3-SAT: given a set of logical clauses having exactly 3 binary variables (literals) x_k each, decide whether all clauses can be satisfied (evaluate to true) at once. To encode it, we want a set of polynomials that evaluate to 0 when the clause is true: \neg x_1 \vee x_2 \vee x_3 would be translated into x_1 (1-x_2) (1-x_3), x_k \in \{ 0, 1 \}.
  2. Quartics:
    x_k \in \{\pm 1 \} \Rightarrow (x_k^2-1)^2=0,
    x_k \in \{ 0, 1 \} \Rightarrow x_k^2( x_k-1 )^2 = 0 ,
    Combining these together, we have for the partition problem: \alpha \sum_k (x_k^2-1)^2 + \left( \sum_k a_k x_k \right)^2; for 3-SAT: \alpha \sum_k x_k^2( x_k-1 )^2 + \sum_k c_k, where each c_k is one 3-term clause polynomial. The last expression is not always positive, but for large enough \alpha it is, when the problem is not satisfiable, thanks to the third ingredient.
  3. Reznick perturbation lemma (3.1 in the reference) adapted to quartics:
    Given a non-negative polynomial with no zeros at infinity (f= \sum_k x_k^2( x_k-1 )^2), condition 1 in the lemma. If the NP-complete problem is not satisfiable, V_1 = \emptyset, condition 2. Finally, g(\omega) >0 for \omega \in Z(f), where Z(f) is the zero set of f, the set of x where the polynomial f vanishes. Therefore, there exists c>0 such that f+cg is positive.
    It should be noted here that the global minimum is polynomially bounded away from 0, and so is the constant c.
  4. Both are covered in the previous post. In short, the number of coefficients in a quadratic polynomial (squared, they give a quartic sum of squares) is roughly quadratic in the dimension. It is sufficient to fix the value of a quadratic polynomial at a quadratic number of points to constrain all its coefficients, and therefore to constrain its values at all points of the hypercube of interest. The number of coefficients in a quartic polynomial is roughly quartic, so there is more freedom in the choice of coefficients. It is possible to select the quartic coefficients in a way that forces all the quadratics to be 0 on the hypercube.
  5. A simple example of a quartic polynomial that is positive but not a sum of squares is the Robinson polynomial: R_{1,2,3}=\sum \limits_{k=1,2,3} x_k^2( x_k-1 )^2 + 2x_1x_2x_3(x_1+x_2+x_3-2) . This is an extremal polynomial (Choi, Lam (1977), Reznick (2007)) – it cannot be non-trivially decomposed into other forms. This polynomial has seven zeros on the hypercube and one non-zero value (see the numeric spot-check after this list). Therefore, we can independently control the value at one vertex of the hypercube, (1,1,1), by multiplying the Robinson polynomial by a positive constant.
  6. Let R_{\bar 1, 2, 3}= \sum \limits_{k=1,2,3} x_k^2( x_k-1 )^2 + 2(1-x_1)x_2x_3( (1-x_1)+x_2+x_3-2), i.e. we replace x_1 \rightarrow 1-x_1; we call this a negation (by analogy with binary variables and their translation to polynomials). This is also an extremal positive polynomial, and it controls the value at (0,1,1). In the same way we can build extremal polynomials that independently control all the vertices of the hypercube. Let L_+(R_{1,2,3}) be a linear combination, with positive coefficients, of all the Robinson polynomials controlling the different vertices of the hypercube in 3 variables (i.e. a point inside the cone of positive polynomials defined by the Robinson polynomials). Now consider the polynomial L_+(R_{1,2,3})+ L_+(R_{1,2,4})+L_+(R_{1,3,4})+L_+(R_{2,3,4})+\alpha_1 \sum \limits_{k=1,2,3,4} x_k^2( x_k-1 )^2 + \alpha_2 x_1x_2x_3x_4 . If the value of this polynomial at x=(1,1,1,1) is positive, then there exists \alpha_1>0 such that this polynomial is positive! The same is true if we sum up all negations of the quartic term:  L_+(R_{1,2,3})+ L_+(R_{1,2,4})+L_+(R_{1,3,4})+L_+(R_{2,3,4})+\beta \sum \limits_{k=1,2,3,4} x_k^2( x_k-1 )^2 + \sum \limits_{n_i=0...1, i=1..4} \alpha_{ \{n_i \} } \prod_i x_i^{n_i}(1-x_i)^{1-n_i} . Once the values at the 16 hypercube vertices are positive, there is a \beta^* such that for all \beta > \beta^* the polynomial is positive. Note that all 16 vertices are controlled independently.

    Now take all the combinations of 4 coordinates/variables/literals out of the n dimensions/variables/literals of the problem. Construct the polynomials from the previous paragraph for all combinations and take a linear combination of them with positive coefficients. We get positive polynomials (not extremal) that control a quartic number of points (in the number of variables), which is exactly the number of points required to constrain all the coefficients of a quartic polynomial. It should be noted that we would not be able to find the exact global minimum of the polynomial, because of the requirement that the quartic terms (those not controlled by the Robinson terms) be evaluated at positive values; but remember that there is a polynomial gap for an unsatisfiable problem. Therefore, we can require all vertices to have positive values.

    The only missing part here is an estimate of the bound on the value of \beta^*. It may be large, but it should be polynomial in the number of bits and the number of dimensions.

  7. We are actually interested in the positive solution inside the hypercube. So we have additional terms to reduce the complexity of the problem, i.e. a semialgebraic bound on the region: x_k>0, 1-x_k>0 for 3-SAT and 1+x_k>0, 1-x_k>0 for the partition problem. This adds extra freedom in the choice of variables, and keeps the value of \beta quite small.
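
A quick numeric spot-check of the Robinson-type quartic from ingredient 5, in Matlab (a sanity check on a grid, not a proof of positivity):

% evaluate R_{1,2,3} on the 8 vertices of {0,1}^3 and on a coarse grid
R = @(x) sum(x.^2 .* (x-1).^2) + 2*x(1)*x(2)*x(3)*(sum(x)-2);
for k = 0:7
    v = dec2bin(k,3) - '0';                     % a vertex of the hypercube
    fprintf('vertex [%d %d %d] -> %g\n', v(1), v(2), v(3), R(v));
end
g = -1:0.1:2;  best = inf;                      % coarse grid search in [-1,2]^3
for a = g, for b = g, for c = g
    best = min(best, R([a b c]));
end, end, end
fprintf('smallest value found on the grid: %g\n', best);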

Finally, we can control the values of a quartic polynomial at a quartic number of points so that the difference between the given unsatisfiable problem and the resulting quartic is a sum of squares, possibly including the constant 1. In the latter case we have a linear program in a polynomial number of variables.

I’m not a mathematician, and I hate putting all the commas in place. I’m more than happy to unpack unclear points, but I will refuse to estimate \beta, so that no one can claim I solved the problem, just in case. And remember, science moves forward not because established scientists change their minds, but because young people come.

Sincerely yours,

C.T.


Update 12 Jan 2016.

I see where the skepticism comes from. The perturbation can be quite strong. We can require the quartic polynomial to be zero (with zero gradient) at roughly a cubic number of generic points. All quadratics would be zero on this set, since they have only a quadratic number of monomials, so we have an over-determined system of linear equations on the coefficients of the quadratics. On the other hand, we can freely move this cubic number of points in space (up to infinity) and still get possibly positive polynomials. This is a problem on one hand; on the other hand, it opens a way for continuation methods. By the way, by homogenization the problem at infinity is replaced by a path along the unit sphere. Therefore, there is more good than bad. I imagine a continuation method for solving the optimization problem by starting with well-defined positive polynomials (say sums of squares) and splitting extra solutions off from the well-defined ones (say on a hypercube) toward the correct coefficients until we get enough of them (up to a cubic number, where by the count of coefficients they will approach extremal polynomials). We can actually have a polynomial number of separate splits whose (conical/non-negative) sum leads to the desired optimized polynomial. The big question here is preservation of positivity, which is a global property.


Update 24 Jan 2016.

The following is in words and needs translation to math.

If we start with the sum of squares \sum (x_k^2 - 1)^2, which has a clear solution set, we can now pick a cubic number of points around the solutions (very close to them) and require the quartic polynomial to be round at these points (i.e. the value and the gradient are zero there and the Hessian is positive definite); then this polynomial will be positive and is not a sum of squares. More generally, the quartic polynomial will be positive if more than a quadratic number of points (the number of monomials in a quadratic polynomial of the same dimension) are round and there is a continuous path to the original sum of squares such that this property holds at every point of the path. That would be an alternative (to Hilbert's method) way to construct positive polynomials.


Generating positive polynomials that are not sums of squares

Motivation: “to prove that P=NP”. More seriously, there is a very powerful technique in optimization called Sum of Squares (SOS) of polynomials. For instance, if f(x) = \sum_i g_i^2(x), x \in \mathbb{R}^n , where f(x) and g_i(x) are polynomials, it is obvious that f(x) is non-negative. Unfortunately, this does not work in the opposite direction: there are examples of positive polynomials that are not sums of squares of other polynomials. Therefore, automatic generation of non-negative polynomials that are not sums of squares would extend the applicability of the SOS technique. If we were able to generate enough such polynomials, the global minimum of any polynomial function f(x) would be accessible. Consequently, P would be equal to NP. This post has the modest goal of understanding how to algorithmically generate a single positive polynomial that is not a sum of squares of other polynomials, in arbitrary dimension. In a sense this is a formalization of the previous post.

Preliminaries

Let x \in \mathbb{R}^n, \alpha \in \mathbb{ N }_{0}^n, \mathbb{X}^d = \{ X = \prod_k X_k^{\alpha_k} | \sum_k \alpha_k \leq d \}, where X_k is an arbitrary unique symbol denoting a variable. For example, \mathbb{X}^0= \{ 1 \}, \mathbb{X}^1= \{ 1, X_1, X_2, ..., X_n \}. Let \mathbb{X}^d(x) be the evaluation of the set \mathbb{X}^d at the point x. For example, \mathbb{X}^0(x)= \{ 1 \} for any point x, \mathbb{X}^1( (0,1) )= \{ 1, 0, 1 \}, \mathbb{X}^2( (0,1) )= \{ 1, 0, 1, 0, 1, 0 \}, where \mathbb{X}^2 = \{ 1, X_1, X_2, ..., X_n, X_1^2, X_2^2, ... X_n^2, X_1 X_2, X_1 X_3, ... X_{n-1} X_n \}. Let a polynomial be a linear combination of elements of \mathbb{X}^d : f(x) = \sum_k c_k \mathbb{X}^d_k , where \mathbb{X}^d_k enumerates the elements of the set.

NP-complete integer partition problem

Let a_i \in \mathbb{N}, \xi_i \in \{\pm1\}. The integer partition problem asks whether \exists \xi : \sum_k a_k \xi_k =0 , i.e. whether it is possible to divide a multiset of integers into two complementary multisets that have equal sums. This problem is NP-complete.

Proposition 1 The integer partition problem is equivalent to the question of whether the global minimum of the polynomial \sum_k (X_k^2-1)^2 + \left( \sum_k a_k X_k \right)^2 is greater than zero or not.

Proof Since the polynomial is a sum of squares of real expressions, it is non-negative and can be zero only when each term is zero. The first sum can only be zero at values x_k \in \{\pm1\}. At such values the second term can be zero only when a partition exists \blacksquare

Corollary To prove P=NP it is sufficient to analyse quartic polynomials.
Proof Obvious \blacksquare
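
A brute-force sketch of Proposition 1 in Matlab for a small instance. The multiset a below is an arbitrary illustration; it admits a partition (3+1 = 1+2+1), so the minimum over the vertices is zero:

% minimum of the quartic over x in {-1,1}^n; it is 0 iff a perfect partition exists
a = [3 1 1 2 1];
n = numel(a);
f = @(x) sum((x.^2 - 1).^2) + (a*x')^2;
best = inf;
for k = 0:2^n-1
    x = 2*(dec2bin(k,n) - '0') - 1;             % a vertex of {-1,1}^n
    best = min(best, f(x));
end
fprintf('minimum over the hypercube vertices: %g\n', best);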

Zeros of quadratics on hyper-cube
Here we would like to establish some properties of quadratic polynomials on the hyper-cube \mathbb{H} = \{0, 1\}^n

Proposition 2 Consider a quadratic polynomial f(x) . If f(x) = 0, \forall x~\in~\tilde{ \mathbb{H} }, \tilde{ \mathbb{H} } = \{ x: x~\in~\mathbb{H}, \sum_k x_k \leq 2 \}, then f(x) =0, \forall x \in \mathbb{H} .

Proof
f(x)= c_1+ \sum_k c_{X_k} X_k + c_{X_k^2} X_k^2 + \sum \limits_{j \neq k} c_{X_j X_k} X_j X_k
Consequently,
x=0 \Rightarrow c_1=0.
\sum_k x_k =1 \Rightarrow x_k=1, x_{j \neq k}=0 \Rightarrow c_{X_k} + c_{X_k^2}=0.
\sum_k x_k =2 \Rightarrow x_k =1, x_j=1, x_{ l \neq k,j }=0 \Rightarrow c_{X_kX_j}=0, taking into account the second case.

Finally, f(x) = \sum_k c_{X_k^2} X_k(X_k-1) , which is clearly 0 for x \in \mathbb{H} , since in each term of the sum either the first or the second factor is 0 \blacksquare
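
A numeric check of Proposition 2 in Matlab for a small n (a sketch, not a proof): compute the quadratics that vanish on \tilde{ \mathbb{H} } and confirm that they vanish on the whole hypercube.

n = 5;
E = [zeros(1,n); eye(n); 2*eye(n)];             % exponents of 1, X_k, X_k^2
P = nchoosek(1:n, 2);
for r = 1:size(P,1)
    e = zeros(1,n);  e(P(r,:)) = 1;  E = [E; e];    % exponents of X_j X_k
end
V = dec2bin(0:2^n-1, n) - '0';                  % all hypercube points
A = ones(2^n, size(E,1));
for j = 1:size(E,1)
    A(:,j) = prod(bsxfun(@power, V, E(j,:)), 2);    % evaluate monomial j
end
few = sum(V,2) <= 2;                            % the points with at most two ones
C = null(A(few,:));                             % quadratics vanishing on them
fprintf('dimension of the null space: %d\n', size(C,2));
fprintf('largest |value| on the whole hypercube: %g\n', max(max(abs(A*C))));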

Quartics that vanish on \tilde{ \mathbb{H} }
f(x)= c_1+ \sum_k c_{X_k} X_k + c_{X_k^2} X_k^2 + c_{X_k^3}X_k^3 +c_{X_k^4}X_k^4 +\sum \limits_{j \neq k} c_{X_j X_k} X_j X_k + c_{X_j^2 X_k} X_j^2 X_k+ c_{X_j^3 X_k} X_j^3 X_k+ c_{X_j^2 X_k^2} X_j^2 X_k^2+ \sum \limits_{j \neq k \neq m} c_{X_j X_k X_m} X_j X_k X_m + c_{X_j^2 X_k X_m} X_j^2 X_k X_m + \sum \limits_{j \neq k \neq m \neq l} c_{X_j X_k X_m X_l} X_j X_k X_m X_l .

Consequently,
x=0 \Rightarrow c_1=0.
\sum_k x_k =1 \Rightarrow x_k=1, x_{j \neq k}=0 \Rightarrow c_{X_k} + c_{X_k^2} + c_{X_k^3} + c_{X_k^4}=0.
\sum_k x_k =2 \Rightarrow x_k =1, x_j=1, x_{ l \neq k,j }=0 \Rightarrow c_{X_j X_k} + c_{X_j^2 X_k}+ c_{X_j^3 X_k}+ c_{X_j^2 X_k^2}=0, taking into account the second case.

Additionally, we want the quartics to preserve their sign around x \in \tilde{ \mathbb{H} }, since we are interested in positive polynomials. A necessary condition is that the gradient equals 0 at the corresponding points.

\frac{ \partial f(x)}{ \partial X_q }= (c_{X_k} + 2 c_{X_k^2} X_k + 3 c_{X_k^3}X_k^2 + 4 c_{X_k^4}X_k^3) \delta_{k,q} +\sum \limits_{j \neq k} (c_{X_j X_k} X_j + c_{X_j^2 X_k} X_j^2+ c_{X_j^3 X_k} X_j^3+ 2 c_{X_j^2 X_k^2} X_j^2 X_k) \delta_{k,q}+ ( 2 c_{X_j^2 X_k} X_j X_k+ 3 c_{X_j^3 X_k} X_j^2 X_k) \delta_{j,q} + \sum \limits_{j \neq k \neq m} c_{X_j X_k X_m} X_j X_m \delta_{k,q} + c_{X_j^2 X_k X_m} (X_j^2 X_m \delta_{k,q}+2 X_j X_k X_m\delta_{j,q}) + \sum \limits_{j \neq k \neq m \neq l} c_{X_j X_k X_m X_l} X_j X_m X_l \delta_{k,q}.

Consequently,
x=0 \Rightarrow c_{X_k}=0.
\sum_k x_k =1 splits into 2 cases: x_q=1 and x_q=0
x_q=1 \Rightarrow 2 c_{X_q^2} + 3 c_{X_q^3} + 4 c_{X_q^4} = 0 together with c_{X_k^2} + c_{X_k^3} + c_{X_k^4}=0, which leads to c_{X_k^3}= -2 c_{X_k^2}, c_{X_k^4}= c_{X_k^2}
x_q=0 \Rightarrow c_{X_j X_q} + c_{X_j^2 X_q} + c_{X_j^3 X_q}=0, \forall j,q .
\sum_k x_k =2 splits again into the same cases.
x_q=1 \Rightarrow 2 c_{X_j^2 X_q^2} + 2 c_{X_q^2 X_j } +3 c_{X_q^3 X_j }= 0 together with c_{X_j X_q} + c_{X_j^2 X_q} + c_{X_j^3 X_q}=0 and c_{X_j X_k} + c_{X_j^2 X_k}+ c_{X_j^3 X_k}+ c_{X_j^2 X_k^2}=0, which leads to c_{X_j^2 X_q^2}= 0, c_{X_q^2 X_j} = -3 c_{X_q X_j}, c_{X_q^3 X_j}= 2 c_{X_q X_j}.
x_q=0 \Rightarrow c_{X_j X_m X_q} + c_{X_j^2 X_m X_q} + c_{X_m^2 X_j X_q}= 0. The last condition, counted over triplets, leads to the Robinson perturbation term c_{X_j^2 X_m X_q}= c_{X_j X_m^2 X_q} = c_{X_j X_m X_q^2}= - \frac{1}{2} c_{X_j X_m X_q}

Combining all conditions together we get
f(x)= \sum_k c_{X_k^2} X_k^2(X_k-1)^2 +\sum \limits_{j \neq k} c_{X_j X_k} ( X_j X_k - X_j^2 X_k- X_j X_k^2+X_j^2 X_k^2)+ \sum \limits_{j \neq k \neq m} c_{X_j X_k X_m}(-2 X_j X_k X_m + X_j^2 X_k X_m +X_j X_k^2 X_m +X_j X_k X_m^2)+ \sum \limits_{j \neq k \neq m \neq l} c_{X_j X_k X_m X_l} X_j X_k X_m X_l .

Now we need to analyze the possible values of the coefficients that guarantee positivity of the quartic polynomial.


Update Jan 6 2016

What is interesting is that
(1) if a quartic is positive and has n+1 zeros forming an n-simplex, then its form is constrained. After a linear transformation this simplex can be transformed into the standard orthogonal simplex with zeros at
(0,0,0,…, 0),
(1,0,0,…, 0),
(0,1,0,…, 0),
(0,0,0,…, 1).
This leads to constraints on the coefficients of positive quartic polynomials. For instance, the zero at 0 leads to c_1=0. Zero gradient (necessary for the quartic to be positive) leads to c_{X_k}= 0. Zeros at the other vertices lead to the constraint c_{X_k^4}+c_{X_k^3}+c_{X_k^2}= 0, whereas zero gradient leads to \frac{\partial}{\partial X_k} \Rightarrow 2 c_{X_k^2} + 3 c_{X_k^3} + 4 c_{X_k^4} = 0, giving the null-space direction (1, -2, 1), i.e. c_{X_k^2}= c_k, c_{X_k^3}= -2c_k, c_{X_k^4}= c_k for any c_k, which leads to the polynomial
\sum_k c_k X_k^2(X_k-1)^2 + f(x), where f(x) has no single-variable terms. Therefore, this simple requirement leads to a representation of a positive quartic polynomial as a sum of squares plus some perturbation.

(2) It is sufficient to spread enough zeros across space, not necessarily at the corners of the hyper-cube, to nullify all the coefficients of the quadratics. There are n_2=\frac{(n+2)(n+1)}{2} coefficients in a quadratic. For the corresponding quartic we need at least \frac{(n+2)(n+1)}{2} generic points where the quartic and its gradient vanish. That gives \frac{(n+2)(n+1)^2}{2} constraints. We need more coefficients than constraints to have some freedom, which is achievable for dimension greater than 5 (see the counting sketch below). That is to say, the perturbation can be very strong.
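
A counting sketch in Matlab supporting the last claim (pure arithmetic, using the formulas stated above):

n  = (3:10)';
n2 = (n+2).*(n+1)/2;                            % number of coefficients of a quadratic
n4 = (n+4).*(n+3).*(n+2).*(n+1)/24;             % number of coefficients of a quartic
con = n2.*(n+1);                                % value + gradient conditions at n2 points
fprintf('n = %2d: quartic coeffs %5d, constraints %5d, slack %5d\n', [n n4 con n4-con]');
% the slack becomes positive only for n > 5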

To be continued.


Hilbert-Robinson-Reznick-Blekherman positive polynomials

Reznick (2007) (see the previous post), with his perturbation lemma, simplified Robinson's simplification of Hilbert's construction of positive polynomials that are not sums of squares. Blekherman shows that the mystery is in the dimension of the monomial basis. For example, there are 2^n points in the n-dimensional hypercube, but since there are only \frac{(n+1)n}{2} quadratic monomials, all these points, when evaluated, cannot have rank more than the number of monomials. As a consequence, it is sufficient to nullify only about a quadratic number of points of a quadratic form (by applying a set of linear constraints on the coefficients, arising from the evaluated monomial values) to nullify all points of the hypercube for these quadratic forms.

There are many more monomials in quartic polynomials (a much larger basis), and therefore much more freedom in choosing coefficients such that the quartic vanishes at a quadratic number of points of the hypercube but is nonzero at the other points of the hypercube! As a result, such a quartic polynomial, even if it is positive, will not be a sum of squares.

Reznick's perturbation lemma opens a way to automatically (algorithmically) generate positive polynomials that are not sums of squares, although in the current version this may not expand the cone of positive polynomials enough to solve all the problems from a given class. Reznick's (2007) perturbation lemma, simplified for our current purpose and in high-school terms, starts from a sum of squares of the terms defining the hypercube, say for \{0,1\}^n it is f(x)= \sum \limits_{k=1}^N x_k^2(x_k-1)^2. Moreover, at each minimum it is round, i.e. the Hessian is strictly positive definite (easy to check by direct calculation; intuitively, around a minimum each term in the sum is a positive definite quadratic multiplied by approximately 1). Now we are going to perturb this quartic by another quartic g(x), splitting the points of the hypercube into two sets. On one set we require that the Taylor expansion of g(x) vanishes up to the second-order term, and on the other set we require that g(x) is strictly positive. Only the points of the hypercube are dangerous, since the perturbation f(x)+ c g(x), c>0, may lead to negative values around the minima for any positive c. In the rest of the space we can choose c small enough to ensure positivity of the resulting polynomial (I skip this part, but one needs a worst-case estimate of the values of g(x) for a given norm of x, and one takes a region bounded by some norm plus an outer region where only the ratios between the leading coefficients matter). On the first set of points g(x) has Hessians with finite eigenvalues. We need to ensure that c times their smallest eigenvalue is smaller in magnitude than the Hessian eigenvalues (all the same) of f(x) at the minima. That keeps the value 0 on the set of points that remain local minima. On the second set of points the value is positive in some neighborhood by construction.

Now the automatic construction: given the monomial basis, we need to make about a quadratic number of points have 0 values and 0 gradients. That is about n^3 conditions for about n^4 monomials. At the rest of the points of the hypercube the polynomial should be strictly greater than zero. For that we need to ensure that the values at the basis points are positive and that all points of the hypercube are represented by linear combinations with positive coefficients. For that we need to pick as the basis points the points with the minimum number of ‘1’ coordinates; the second condition is the inequality condition, leading to a linear program. It has a solution because we have coefficients in front of x_k^2 that can grow unboundedly while being compensated by other terms, and the inequality condition is much weaker than the equality condition. Then one can compute an estimate for the constant c, which depends only on calculations at the basis points. The exact machinery still needs to be worked out, since one also needs to estimate the values of the projections of all points of the hypercube onto the basis points.

This may not be sufficient to produce a wide enough expansion of the SOS cone to solve all instances of, say, the partition problem, but we may need only a small extension to perturb the problem quartic into one representable as a sum of squares.

Sincerely yours,
C.T.

PS
psd that is not SOS:
x_1^2 (x_1 - 1)^2 + x_2^2 (x_2 - 1)^2 + x_3^2(x_3 - 1)^2 + \frac{1}{1000} \left( 3 x_2 (x_2 - 1) ( 28 x_1^2 - 28 x_1 - 19 x_2^2 + 19 x_2 + 28 x_3^2 - 28 x_3) + 140 x_1 x_2 x_3 (x_1 + x_2 + x_3 - 2)\right)

and the (not cleaned) Matlab code, using the Symbolic Toolbox, that produces it


%% MuPAD test for positive polynomials, Hilbert-Reznick
n=3;
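
% pow: exponent vectors (rows) of all monomials of degree <= 4 in the n variables;
% the last column is the slack exponent on the homogenizing coordinate (set to 1 below)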

y= evalin( symengine, sprintf( 'combinat::compositions(4, Length= %d, MinPart=0)', n+1) );
pow= arrayfun( @(k) double(y(k)), 1:numel(y), 'UniformOutput', false );
pow= cat(1, pow{:});
xs= sym( 'x', [n,1]);
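
% xx: rows are the vertices of the {0,1}^n hypercube, with a homogenizing 1 in column n+1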

xx= zeros( 2^n, n );
v=0:2^n-1;
for k=1:n,
xx( bitand(v, 2^(k-1) )~=0 , k )=1;
end
%xx= 2*xx-1;
xx(:, n+1)=1;
%xx( end+1, :)= [1 2 2 1];

y= evalin( symengine, sprintf( 'combinat::compositions(2, Length= %d, MinPart=0)', n+1) );
pow2= arrayfun( @(k) double(y(k)), 1:numel(y), 'UniformOutput', false );
pow2= cat(1, pow2{:});
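
% yy, yy2: values of the quartic / quadratic monomials (rows) at every hypercube vertex (columns)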

yy2= nan( size( pow2,1), size(xx,2) );
yy= nan( size( pow,1), size(xx,2) );
for k=1:size(xx,1),
yy(:, k)= prod( bsxfun( @power, xx(k,:), pow ), 2);
yy2(:, k)= prod( bsxfun( @power, xx(k,:), pow2 ), 2);
end
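
% xsm: the quartic monomials as symbolic expressions; dxsm: their partial derivatives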

for k=size( pow, 1 ):-1:1,
xsm( k )= sym(1);
for k2= n:-1:1,
xsm( k )= xsm( k )*xs(k2)^pow( k, k2 );
end
end

for k= n:-1:1,
dxsm(k,:)= diff( xsm, xs(k) );
end

r= rank(yy2);
x0Idx= 1:7;
x1Idx= 8;

% we also need a condition for the gradient here
dpow= zeros(size(pow));
dpow(:,end)=1;
for k=1: size( pow, 2)-1,
idx= pow(:,k)>0;
dpow( idx, end )= dpow(idx, end).*pow(idx,k);
dpow(idx,k)= pow(idx,k)-1;
end
% %
% equality constraints
lc= zeros(size(pow, 1), 0);
for k=1:7,
lc(:, end+1)= subs( xsm, xs, xx(k, 1:3) )';
lc(:, end+(1:n))= subs( dxsm, xs, xx(k, 1:3) )';
end
% lc(:, end+(1:n))= subs( dxsm, xs, xx(8, 1:3) )';

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% The following line is a place to play with different polynomials
% we have a rather strange way to do it here, by imposing eight points where
% the gradient of the quartic vanishes: at seven hypercube vertices the value
% is also set to zero, and at one extra point the value is positive (it would be
% sufficient to have 0 there too, but the code would be more complicated).
% removing the second line below (lc ..) and switching the next lc lines
% (commenting/uncommenting) leads to the Robinson polynomial
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

x2= [3 -3 3]; % [3 -1 3] also works well
lc(:, end+(1:n))= subs( dxsm, xs, x2 )'; % comment for Robinson
bc= zeros( size( lc, 2), 1 );

% lc(:, end+1)= subs( xsm, xs, xx(8, 1:3) )'; % uncomment for Robinson
lc(:, end+1)= subs( xsm, xs, x2 )'; % comment for Robinson
bc(end+1)=1;

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

cf= mldivide( lc', bc)*54*2 % multiplier 1 for Robinson
disp( [size(lc) rank(lc)])

% %

% the coefficient c in the perturbation lemma was found by hand, and is in no way optimal
p2= 1/1000*sum( round(cf).*xsm.') + sum( xs.^2.*(xs-1).^2) % first factor is 1 for Robinson
dp2= [diff(p2, xs(1)) diff(p2, xs(2)) diff(p2, xs(3))]
subs( p2, xs, xx(8, 1:3))
sol= solve(dp2)
xv= ([sol.x1 sol.x2 sol.x3]);
vf= arrayfun( @(k) double(subs( p2, xs, xv(k, :))), 1:size(xv,1))';
[min(vf) min(real(vf)) min(imag(vf))]


P vs NP – an analogy between analogies

There are a few ingredients that may allow one to prove that P=NP, although most people believe the opposite. Here I sketch an outline.

1. Partition problem: given a multiset a with n entries, tell whether it is possible to divide it into two multisets having the same sum. Formally, tell whether there exists a vector x \in \{-1,1\}^n such that \sum \limits_{k=1}^n a_k x_k=0, or in vector notation x^T a=0. This problem is NP-complete.

2. It is possible to encode this problem into the minimization of a quartic polynomial:
\mathop{\mbox{min} } \limits_{x \in \mathbb{R}^n } \sum \limits_{k=1}^n \left( x_k^2-1\right)^2+ \left( \sum \limits_{k=1}^n a_k x_k \right)^2
The minimum is zero when such an x exists and is greater than zero otherwise. Moreover, the non-zero value is bounded from below inversely in the length of the vector a.

3. The minimum is non-zero if \exists \lambda_k \geq 0, \gamma >0 | \sum \limits_{k=1}^n \left( x_k^2-1\right)^2+ \left( \sum \limits_{k=1}^n a_k x_k \right)^2 = \sum \lambda_k \mbox{PSD}_k(x) +\gamma , where \mbox{PSD}_k(x) are positive semidefinite polynomials (polynomials that are non-negative over the whole range of values of x), and the summation is taken over all PSD polynomials.

4. The set of PSD polynomials is convex. Therefore, in principle the problem is solvable in polynomial time. The difficulty is the generation of the rays of the cone of positive polynomials. For instance, it is easy to generate rays of PSD polynomials consisting of sums of squares of polynomials. On the other hand, there are plenty of examples of PSD polynomials that are not sums of squares (see for example Reznick, 2007). It is not easy to find all of them.

5. There is a finite basis for polynomials of a fixed degree (4 in this case, Wikipedia), consisting of linear combinations of all possible monomials. So there are not too many (polynomial in the number of inputs) basis functions that can be used to show a non-zero minimum. The question is how to generate them.

6. Computer science PSDs: these are PSD polynomials that are instances of computer science problems having no solution, for example, partition problem polynomials from (2). For instance, the polynomial corresponding to a= [1, 1, 1, 1, 1] is strictly positive. Here we need to generate enough PSD polynomials that certainly have no solution. Then we would have a complete basis for PSD polynomials and could solve the problem, which in this case would be an instance of linear programming (details for a similar problem can be found in Parrilo's 2000 thesis).

7. The partition problem has the nice property of a phase transition (Mertens, 2003): there is a specific parameter such that when the problem is below this parameter the probability of obtaining a solution is 1, and when it is above it the probability of a perfect partition is 0 for large n. Therefore, we have a tool to generate many examples of strictly positive polynomials, and hopefully to get a complete basis of positive polynomials with probability tending to 1. (A small Monte Carlo sketch of this transition follows below.)
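
A small Monte Carlo sketch of that transition in Matlab (brute force, so only tiny sizes; the values of n, the bit counts, and the trial count are illustrative and not taken from Mertens, 2003):

n = 10;  trials = 100;
for bits = [2 4 6 8 10 12]
    hits = 0;
    for t = 1:trials
        a = randi([1, 2^bits - 1], 1, n);       % a random instance with 'bits' bits per entry
        found = false;
        for k = 0:2^(n-1)-1                     % fix x_n = +1 by symmetry
            x = [2*(dec2bin(k, n-1) - '0') - 1, 1];
            if a*x' == 0, found = true; break; end
        end
        hits = hits + found;
    end
    fprintf('bits = %2d: empirical P(perfect partition) = %.2f\n', bits, hits/trials);
end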

Therefore, we use (7) to generate the basis in (2), and solve the linear programming problem in (3). There is a good chance that we get a certificate for an unsolvable problem with probability close to 1 in polynomial time.


Lonely Runner Conjecture. General case.

We consider the Lonely Runner Conjecture (LRC).

Conjecture Suppose n>1 runners having distinct constant speeds v_k > 0 start at a common point (origin) and run laps on a circular track with circumference 1. Then, there is a time when no runner is closer than \frac{1}{n+1} from the origin.

W.l.o.g. we can assume v_k > v_m \forall k>m.

One can formulate LRC as follows. Suppose l_k \in \mathbb{N}_0 is the number of whole laps (including 0) that runner k has passed on the track; then \exists t \in \mathbb{R}_+ and \exists l_k | l_k+\frac{ 1 }{n+1} \leq v_k t \leq l_k+ \frac{n}{n+1}.
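
A numeric spot-check of this formulation in Matlab (a scan over a time grid, so only approximate; the speed sets are arbitrary small examples):

dist = @(x) min(mod(x,1), 1 - mod(x,1));        % distance to the nearest integer
for v = {[1 2], [1 2 3], [1 2 3 4], [2 3 5 7]}
    vk = v{1};  n = numel(vk);
    t  = linspace(0, 1, 200001);
    m  = min(dist(vk(:)*t), [], 1);             % distance of the closest runner at each time
    fprintf('speeds [%s]: max_t min_k ||v_k t|| = %.4f,  1/(n+1) = %.4f\n', ...
            strtrim(sprintf('%d ', vk)), max(m), 1/(n+1));
end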

Case n=1, two runners (counting the one standing at the origin), is trivial. At time t=\frac{1}{2 v_1} runner 1 is at distance exactly \frac{1}{2} (l_1= 0).

Case n=2, three runners is a special case.

We start with 4 inequalities
\left\{ \begin{array}{ll}  \frac{l_1+1/3}{v_1} \leq t, &  t \leq \frac{l_1+2/3}{v_1} \\  \frac{l_2+1/3}{v_2} \leq t, & t \leq \frac{l_2+2/3}{v_2}  \end{array} \right.

Since t \in \mathbb{R}_+ we can eliminate it if every left-hand side in the left column is smaller than every right-hand side in the right column of the table, e.g.
\left\{ \begin{array}{l}  \frac{l_1+1/3}{v_1} \leq \frac{l_1+2/3}{v_1} \\  \frac{l_1+1/3}{v_1} \leq \frac{l_2+2/3}{v_2} \\  \frac{l_2+1/3}{v_2} \leq \frac{l_1+2/3}{v_1}\\  \frac{l_2+1/3}{v_2} \leq \frac{l_2+2/3}{v_2}  \end{array} \right.

The first and the last inequalities are trivially correct. From the second and third inequalities we would like to express l_2
\frac{v_2}{v_1} \left( l_1 + \frac{1}{3} \right) -\frac{2}{3} \leq l_2 \leq  \frac{v_2}{v_1} \left( l_1 + \frac{2}{3} \right) -\frac{1}{3}.
In other words,
\frac{v_2}{v_1} \left( l_1 + \frac{1}{3} \right) -\frac{2}{3} \leq l_2 \leq  \frac{v_2}{v_1} \left( l_1 + \frac{1}{3} \right) -\frac{2}{3} + \frac{1}{3}\frac{v_2}{v_1} +\frac{1}{3}.

There are 2 sub-cases.

  1. Sub-case \frac{v_2}{v_1} \geq 2.
    \frac{1}{3}\frac{v_2}{v_1} +\frac{1}{3} \geq 1 and \exists l_2 \in \mathbb{N}_0 satisfying the inequality
  2. Sub-case \frac{v_2}{v_1} < 2 .
    In this case l_1=l_2=0 leads to (remember that \frac{v_2}{v_1} > 1 )
    \frac{1}{3} \frac{ v_2}{v_1} - \frac{2}{3} \leq 0 \leq \frac{2}{3} \frac{v_2}{v_1} - \frac{1}{3}.

Case n=3, four runners, is an illustration of the general case.

We start with 6 inequalities
\left\{ \begin{array}{ll} \frac{l_1+1/4}{v_1} \leq t, &  t \leq \frac{l_1+3/4}{v_1} \\  \frac{l_2+1/4}{v_2} \leq t, &  t \leq \frac{l_2+3/4}{v_2} \\  \frac{l_3+1/4}{v_3} \leq t, &  t \leq \frac{l_3+3/4}{v_3}  \end{array} \right.

Let’s express l_2 in terms of l_1
\frac{v_2}{v_1} \left( l_1 + \frac{1}{4} \right) -\frac{3}{4} \leq l_2 \leq \frac{v_2}{v_1} \left( l_1 + \frac{3}{4} \right) -\frac{1}{4}.
Rewriting it we obtain
\frac{v_2}{v_1} \left( l_1 + \frac{1}{4} \right) -\frac{3}{4} \leq l_2 \leq  \frac{v_2}{v_1} \left( l_1 + \frac{1}{4} \right) -\frac{3}{4} + \frac{1}{2}\frac{v_2}{v_1} +\frac{1}{2}.
\frac{1}{2}\frac{v_2}{v_1} +\frac{1}{2} >1, since \frac{v_2}{v_1} > 1. In other words, \forall l_1 \exists l_2 \in \mathbb{N}_0 satisfying the inequality.

Now let us express l_3 in terms of l_1 and l_2
\left\{ \begin{array}{l}  \frac{v_3}{v_1} \left( l_1 + \frac{1}{4} \right) -\frac{3}{4} \leq l_3 \leq  \frac{v_3}{v_1} \left( l_1 + \frac{3}{4} \right) -\frac{1}{4} \\  \frac{v_3}{v_2} \left( l_2 + \frac{1}{4} \right) -\frac{3}{4} \leq l_3 \leq  \frac{v_3}{v_2} \left( l_2 + \frac{3}{4} \right) -\frac{1}{4}. \end{array}\right.

We can express l_2 with inequalities obtained earlier
\left\{ \begin{array}{l}  \frac{v_3}{v_1} \left( l_1 + \frac{1}{4} \right) -\frac{3}{4} \leq l_3 \leq  \frac{v_3}{v_1} \left( l_1 + \frac{3}{4} \right) -\frac{1}{4} \\ \frac{v_3}{v_2} \left( \frac{v_2}{v_1} \left( l_1 + \frac{1}{4} \right) -\frac{3}{4} + \frac{1}{4} \right) -\frac{3}{4} \leq l_3 \leq  \frac{v_3}{v_2} \left( \frac{v_2}{v_1} \left( l_1 + \frac{3}{4} \right) -\frac{1}{4} + \frac{3}{4} \right) -\frac{1}{4}.  \end{array}\right.
Collecting terms we obtain
\left\{ \begin{array}{l} \frac{v_3}{v_1} \left( l_1 + \frac{1}{4} \right) -\frac{3}{4} \leq l_3 \leq  \frac{v_3}{v_1} \left( l_1 + \frac{3}{4} \right) -\frac{1}{4} \\  \frac{v_3}{v_1} \left( l_1 + \frac{1}{4} \right) -\frac{3}{4} -\frac{1}{2} \frac{v_3}{v_2} \leq l_3 \leq  \frac{v_3}{v_1} \left( l_1 + \frac{3}{4} \right)-\frac{1}{4}+ \frac{1}{2} \frac{v_3}{v_2}.  \end{array}\right.

We can see that the first inequality is always stronger than the second one (meaning that if l_3 satisfies the first inequality it will satisfy the second). But the first inequalities are the same as the previous inequalities for l_2 after relabelling. Therefore, \forall l_1 \exists l_2, l_3 \in \mathbb{N}_0 satisfying the initial inequalities, and LRC holds.

General case

We start with 2n inequalities

\left\{ \begin{array}{lll}  \frac{l_k+\frac{1}{n+1}}{v_k} \leq t, &  t \leq \frac{l_k+\frac{n}{n+1}}{v_k} &  k=1..n  \end{array} \right.

Now we express l_{k+m_1}, m_1 \geq 1, in terms of l_k, and l_{k+m_2}, m_2 > m_1, in terms of l_{k+m_1} and l_{k}.
\frac{v_{k+m_1}}{v_k} \left( l_k + \frac{1}{n+1} \right) -\frac{n}{n+1} \leq l_{k+m_1} \leq  \frac{v_{k+m_1}}{v_k} \left( l_k + \frac{n}{n+1} \right) -\frac{1}{n+1}.
\frac{v_{k+m_2}}{v_{k+m_1}} \left( l_{k+m_1} + \frac{1}{n+1} \right) -\frac{n}{n+1} \leq l_{k+m_2} \leq  \frac{v_{k+m_2}}{v_{k+m_1}} \left( l_{k+m_1} + \frac{n}{n+1} \right) -\frac{1}{n+1}.
\frac{v_{k+m_2}}{v_k} \left( l_k + \frac{1}{n+1} \right) -\frac{n}{n+1} \leq l_{k+m_2} \leq  \frac{v_{k+m_2}}{v_k} \left( l_k + \frac{n}{n+1} \right) -\frac{1}{n+1}.

Now we can substitute the first inequalities into the second ones.
\frac{v_{k+m_2}}{v_{k+m_1}} \left( \frac{v_{k+m_1}}{v_k} \left( l_k + \frac{1}{n+1} \right) -\frac{n}{n+1} + \frac{1}{n+1} \right) -\frac{n}{n+1}  \leq l_{k+m_2} \leq  \frac{v_{k+m_2}}{v_{k+m_1}} \left(\frac{v_{k+m_1}}{v_k} \left( l_k + \frac{n}{n+1} \right) -\frac{1}{n+1} + \frac{n}{n+1} \right) -\frac{1}{n+1}.
Rearranging terms we obtain
\frac{v_{k+m_2}}{v_k} \left( l_k + \frac{1}{n+1} \right) -\frac{n}{n+1}  - \frac{n-1}{n+1} \frac{v_{k+m_2}}{v_{k+m_1}}  \leq l_{k+m_2} \leq  \frac{v_{k+m_2}}{v_k} \left( l_k + \frac{n}{n+1} \right) -\frac{1}{n+1}  +\frac{n-1}{n+1} \frac{v_{k+m_2}}{v_{k+m_1}}.
Compare it to
\frac{v_{k+m_2}}{v_k} \left( l_k + \frac{1}{n+1} \right) -\frac{n}{n+1}  \leq l_{k+m_2} \leq  \frac{v_{k+m_2}}{v_k} \left( l_k + \frac{n}{n+1} \right) -\frac{1}{n+1}.
The second inequality is always stronger.

Therefore, we are left with the following set of inequalities (when they are satisfied LRC holds; when LRC holds they are satisfied!)
\frac{v_{k}}{v_1} \left( l_1 + \frac{1}{n+1} \right) -\frac{n}{n+1}  \leq l_{k} \leq \frac{v_{k}}{v_1} \left( l_1 + \frac{n}{n+1} \right) -\frac{1}{n+1},\ \forall k=2..n.

Let us rewrite it again
\frac{v_{k}}{v_1} \left( l_1 + \frac{1}{n+1} \right) -\frac{n}{n+1} \leq l_{k} \leq \frac{v_{k}}{v_1} \left( l_1 + \frac{1}{n+1} \right) -\frac{n}{n+1} + \frac{n-1}{n+1} \left( \frac{v_k}{v_1} +1 \right)

\frac{n-1}{n+1} \left( \frac{v_k}{v_1} +1 \right) > 2 \frac{n-1}{n+1} = 1+ \frac{n-3}{n+1} \geq 1 if n \geq 3 .


It is not clear from this why \frac{1}{n+1} is the tight distance.


11.2.2016
It is clear why it is not clear. The above only tells us that with a speed ratio greater than one we can always find a lap count for the faster runner. The tightness comes from the ratios less than one, most probably from the interactions, or from the ratios coming from the same pairs.


On the lonely runner conjecture III

We consider the Lonely Runner Conjecture (LRC).

Conjecture Suppose n>1 runners having distinct constant integer speeds v_k start at a common point (origin) and run laps on a circular track with circumference 1. Then, there is a time when no runner is closer than \frac{1}{n+1} from the origin.

The cases of n=2,3 runners are discussed here. Here we look at the case n=4.

Let \| x \| denote the distance of x to the nearest integer.

Case v_k \mod 5 \neq 0,\ \forall k. At time t=\frac{1}{5} all runners are at least \frac{1}{5} from the origin. Done.

Case v_k= 5 r_k, k=1..3, v_4 \mod 5 \neq 0. There is a time t_1 such that the runners with speeds r_k, k=1..3, are at least \frac{1}{4} from the origin – see case n=3. At time t_2= \frac{t_1}{5} runners v_1, v_2, v_3 are at the same positions as runners r_1, r_2, r_3 at time t_1 (and they stay there at times t_2+\frac{m}{5}, since their speeds are multiples of 5). If at time t_2, \| v_4 t_2\| \geq \frac{1}{5}, we are done. Otherwise, since v_4 and 5 are co-prime, \exists m : \| v_4 \frac{m}{5} \|= \frac{2}{5} \Rightarrow \| \frac{2}{5} - v_4 \left( t_2+\frac{m}{5} \right) \| \leq \frac{1}{5} , so \| v_4 \left( t_2+\frac{m}{5} \right) \| \geq \frac{1}{5}. Done.

Case v_k= 5 s_k, k=1,2, v_k \mod 5 \neq 0, k=3,4.

Definition Sector S_k= \left( \frac{k}{5}, \frac{k+1}{5} \right), k=0..4 .
There is a time t_1 such that the runners with speeds s_k, k=1,2, are at least \frac{1}{3} from the origin – see case n=2. Since v_k, k=3,4, and 5 are co-prime, those runners visit every sector once over the times \frac{t_1+m}{5}, m=0..4, while runners 1 and 2 stay at the same positions. There are 5 such times; during at most 2 of them runner 3 is in the sectors S_0, S_4, and during at most 2 of them runner 4 visits the same sectors. Therefore, there is m:\ \| v_k \left( \frac{t_1+m}{5} \right) \| \geq \frac{1}{5} for all k. Done.

The two previous cases follow from this post.
Now the difficult case.

Case v_1= 5 s_1,\ v_k \mod 5 \neq 0, k=2,3,4.

Rearrange the speeds so that v_2+v_3 = 5 s_2 . If there is more than one way to do this, choose the one with the maximum sum. We have 2 sub-cases: s_1 \geq s_2 and s_1< s_2.

  1. s_1 \geq s_2
    \exists m:\ \| v_2 \frac{m}{5} \|=\| v_3 \frac{m}{5} \| = \frac{2}{5}. Runner 1 is faster than either runner 2 or runner 3. Therefore, they meet at a distance greater than \frac{1}{5}.

    Now runner 4.

    1. \| v_4 \frac{m}{5} \| = \frac{2}{5} \Rightarrow this runner is slower than either runner 2 or runner 3 (call that runner r), which is at the same position as runner 4. Therefore, it will meet runner 1 later than runner r does, and runner r meets runner 1 at a distance larger than \frac{1}{5}. Done.
    2. \| v_4 \frac{m}{5} \| = \frac{1}{5} \Rightarrow this runner would have to move fast to reach runner 1 at distance \frac{1}{5}. So it will cover at least \frac{3}{5} + l of a lap. Then, looking in the opposite direction in time, in the same time it will cover the same distance and will be at least \frac{2}{5} from the origin when runner 1 reaches distance \frac{1}{5}. If it moves even faster, so as to meet runner 1 at \frac{1}{5}, it will miss runner 1 from the opposite side: namely, it has to cover at least a whole lap, and therefore it meets runner 1 on the opposite side before runner 1 reaches \frac{1}{5} . Done.
  2. s_1< s_2 This is the most interesting case.

    At times t=\frac{m}{5s_2}, m=1..5s_2-1, runners 2 and 3 are at the same distance from the origin, on opposite sides of it.

    Runner 1 exhibits a repeating motion over the times t. It makes 5 \gcd ( s_1,s_2) cycles, where \gcd stands for the greatest common divisor. The number of different positions it attains, evenly distributed along the track, is n_1=\frac{s_2}{\gcd ( s_1,s_2)} \geq 2 . If n_1=2, half of the time runner 1 is away from the origin. With larger n_1 the fraction is bigger, with the limit \frac{3}{5}. Let t_1 \in t be a time when runner 1 is distant. At the same time runners 2 and 3 are at the same distance, on opposite sides of the origin. Now at times t_1+\frac{m}{5}, m=0..4, runner 1 stays at the same position, runners 2 and 3 remain at opposite positions relative to the origin, and runner 4 visits every sector once. Therefore, there are in total 2 such moments when runner 4 is in S_0, S_4, and there are in total (possibly different) 2 moments when runner 2 is in S_0, S_4; on the other hand, when runner 2 is in sector 0, runner 3 is in sector 4 and vice versa. Therefore, there is m:\ \| v_k \left( t_1 + \frac{m}{5} \right)\| \geq \frac{1}{5} for all k. Done.

\blacksquare

PS. If you find an error or a better exposition, leave a comment. If you do not want to be on the list of those talking to a crackpot trisector, leave a note in the comment; the comment will not appear publicly, but its content will be incorporated into the text.

PPS. There is a much shorter proof for n=4 in Barajas and Serra (2008). Algebraically it is short, but I think it lacks intuition. I have a feeling this is a quite general approach, and that the cases n=5 and n=6 bring new ingredients (non-prime n+1, and competition between 2 pairs of paired runners). Maybe we need to wait for the next new ingredient until n=3 \times 5 -1 = 14 .


On the lonely runner conjecture II

We consider the Lonely Runner Conjecture (LRC).

Conjecture Suppose n>1 runners having distinct constant integer speeds v_k start at a common point (origin) and run laps on a circular track with circumference 1. Then, there is a time when no runner is closer than \frac{1}{n+1} from the origin.

We first give trivial proofs for n=2,3 runners.

Let \| x \| denote the distance of x to the nearest integer.

Let \kappa_n = \inf \max \limits_{0 \leq t \leq 1} \min \limits_{ 1 \leq i \leq n} \| v_i t \|, where the infimum is taken over all n-tuples v_1, ... v_n of distinct positive integers.

In terms of \kappa_n the LRC states that \kappa_n= \frac{1}{n+1}

Without loss of generality (wlog) we can assume k < m \Leftrightarrow v_k < v_m .

Case n=2. Wlog we can assume v_1, v_2 are relatively prime. At the times t_m= \frac{m}{v_1+v_2}, m=1..v_1+v_2-1, the two runners are at the same distance from the origin, on different sides of the lap, at distances that are multiples of \frac{1}{v_1+v_2}. Since the speeds are relatively prime, both runners visit all the points \frac{m}{v_1+v_2}, m=1..v_1+v_2-1. The largest distance is \frac{ \left[ \frac{v_1+v_2}{2} \right] }{v_1+v_2}, where \left[ \circ \right] means the integer part.

For example, v_1=1, v_2= 2, at times \frac{1}{3}, \frac{2}{3} runners are at positions (\frac{1}{3}, \frac{2}{3}) and (\frac{2}{3},\frac{1}{3}) and the maximum distance from origin is \frac{\left[ \frac{1+2}{2}\right]}{3}= \frac{\left[1.5 \right]}{3}= \frac{1}{3}.

Another example, v_1= 2, v_2= 3, the maximum distance is \frac{2}{5} > \frac{1}{3}.

If v_1 \mod 2 = v_2 \mod 2 =1, the maximum distance is \frac{1}{2} >  \frac{1}{3}, attained at time \frac{1}{2}.

In general the maximum distance is r_{1,2}= \frac{1}{2} \left(1- \frac{ (v_1+v_2) \mod 2 }{ v_1+v_2} \right), with the minimum attained for runners at speeds v_1= 1, v_2= 2. Therefore, \kappa_2= \frac{1}{3}.
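
A quick numeric confirmation of the n = 2 formula in Matlab (the speed pairs are arbitrary coprime examples):

% at times m/(v1+v2) both runners are equidistant from the origin;
% the largest such distance should match r_{1,2}
dist = @(x) min(mod(x,1), 1 - mod(x,1));
pairs = [1 2; 2 3; 1 4; 3 5; 3 7];
for r = 1:size(pairs,1)
    v1 = pairs(r,1);  v2 = pairs(r,2);  s = v1 + v2;
    t  = (1:s-1)/s;
    d  = min(dist(v1*t), dist(v2*t));
    r12 = (1 - mod(s,2)/s)/2;
    fprintf('v = (%d,%d): max distance %.4f, formula r_{1,2} = %.4f\n', v1, v2, max(d), r12);
end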

Case n=3.

  • First we assume the simple case: v_3 \mod ( v_1+v_2 )= 0, i.e. v_3= q(v_1+v_2), q \in \mathbb{N}_+.

    At times t= \frac{m}{v_1+v_2} runner 3 is at the origin. There is a moment such that some time before it runner 3 was at the same distance as runner 1, and some time after it it will be at the same distance as runner 2. (The situation is reversed at times 1-t.) We are interested in what happens around the time when runners 1 and 2 are most distant from the origin.

    For runners 1 and 3: together they need to cover r_{1,2} at the combined speed v_1+v_3, so the time needed is \frac{r_{1,2} }{v_1+v_3}. Runner 3 covers r_{1, 3}= \frac{v_3 r_{1,2} }{v_1+v_3}= r_{1,2} \frac{1}{1+v_1 / v_3 }. Therefore, the closest point is reached when \frac{v_1}{v_3}= \frac{v_1}{q (v_1+v_2) } is maximal.

    r_{1, 3}=  \frac{q ( v_1+v_2 ) }{v_1+ q ( v_1+v_2 ) }  \frac{1}{2} \left(1- \frac{ (v_1+v_2) \mod 2 }{ v_1+v_2} \right)

    If (v_1+v_2) \mod 2 = 0, r_{2, 3} < r_{1, 3} around r_{1,2}= \frac{1}{2}

    r_{2, 3}=  \frac{1}{2} \frac{q ( v_1+v_2 ) }{v_2+ q ( v_1+v_2 ) }

    Lemma 1. \frac{q ( v_1+v_2 ) }{v_2+ q ( v_1+v_2 ) } > \frac{1}{2}.
    Proof. \frac{q ( v_1+v_2 ) }{v_2+ q ( v_1+v_2 ) } - \frac{1}{2} = \frac{1}{2} \frac{ 2 q ( v_1+v_2 ) -  v_2 -  q ( v_1+v_2 )}{v_2+ q ( v_1+v_2 ) }  = \frac{1}{2} \frac{  q v_1  + ( q - 1) v_2 }{v_2+ q ( v_1+v_2 ) } > 0 \blacksquare

    Now, consider the case when (v_1+v_2) \mod 2 = 1.

    r_{1, 3}=  \frac{1}{2} \frac{q ( v_1+v_2 ) }{v_1+ q ( v_1+v_2 ) }   \frac{ v_1+v_2 - 1 }{ v_1+v_2}  =  \frac{1}{2} \frac{q ( v_1+v_2 - 1 ) }{v_1+ q ( v_1+v_2 ) }

    Lemma 2. \frac{q ( v_1+v_2 - 1 ) }{v_1+ q ( v_1+v_2 ) } \geq \frac{1}{2}
    Proof. \frac{ ( v_1+v_2 - 1 ) }{v_1/q + v_1+v_2 } - \frac{1}{2} = \frac{2 v_1+ 2 v_2 - 2 - v_1/q - v_1-v_2 }{2 \left[ v_1/ q+ ( v_1+v_2 ) \right] } = \frac{ v_1 \left( 1- \frac{1}{q} \right)+ v_2 - 2 }{2 \left[ v_1/ q+ ( v_1+v_2 ) \right] } . Since v_2 \geq 2 the numerator is non-negative; it vanishes only when v_2=2, v_1= 1, q= 1 \blacksquare

    Corollary 3. r_{1, 3} \geq \frac{1}{4}.
    Corollary 4. r_{1, 3} = \frac{1}{4} only for v_1=1, v_2=2, v_3=3.

  • Case: v_3 \mod ( v_1+v_2 ) \neq 0.

    Lemma 5. Fix a time t=  \frac{m}{v_1+v_2}, m \in \{1, ..., v_1+v_2-1 \} . Let r_3 be the distance of runner 3 from the origin at the moment t and let r_{1,2} > r_3 be the distance of runners 1 and 2 from the origin at the same time. The maximal distance at which runners 1 and 3 or runners 2 and 3 are equidistant around time t is greater than \frac{r_3+r_{1, 2}}{2}.
    Proof. Runner 3 is running either toward or away from the origin, i.e. its distance to the origin either decreases or increases with time. Let v be the speed of the one of runners 1 and 2 that moves in the opposite direction, so that the difference in distances decreases as we move either forward or backward in time. Since v_3 > v, runner 3 will cover a greater distance away from the origin than the other runner \blacksquare

    Lemma 6. Fix the time t= \frac{m}{v_1+v_2} at which r_{1,2}= \frac{1}{2} \left(1- \frac{ (v_1+v_2) \mod 2 }{ v_1+v_2} \right). Let r'_3= \frac{m v_3 \mod (v_1+v_2) }{v_1+v_2} , and let r_3= \min (r'_3, 1-r'_3)= \| r'_3\| be the distance of runner 3 from the origin. Then r_{1, 2} + r_3 > \frac{1}{2} .
    Proof. r_3 \geq \frac{1}{v_1+v_2}, r_{1,2}+r_3 \geq \frac{1}{2} \left(1- \frac{ (v_1+v_2) \mod 2 }{ v_1+v_2} \right) + \frac{1}{v_1+v_2} = \frac{1}{2} + \frac{2- (v_1+v_2) \mod 2 }{2 (v_1+v_2)} > \frac{1}{2} \blacksquare

    Combining Lemma 5 and Lemma 6 shows \max (r_{1,3}, r_{2,3}) > \frac{1}{4}.

Overall, \kappa_3= \frac{1}{4}, and it is attained only for the runners v_1=1, v_2=2, v_3=3.


Cantor diagonal argument is almost surely inconsistent

Here we construct a table consisting of realizations of Bernoulli processes; using the Cantor diagonal argument we construct a row that is not in the table, and finally we show that the constructed row is almost surely in the table a countably infinite number of times.

Consider a doubly infinite table X_{k,m}, where the index k \in \mathbb{N} enumerates rows and the index m \in \mathbb{N} enumerates columns. Each element X_{k,m} is an independent binary random variable with equally probable states 0 and 1. Suppose x_{k,m} is a realization of these random variables. We are going to fix it for the rest of the post.

Cantor diagonal argument Let the row y be given by y_{k} = 1-x_{k,k}. Since y is different from every row in at least one place, it is not in the table. Checking the diagonal row by row, we get a proof by induction.

Theorem There are countably many rows equal to y in the table with probability 1.
Proof 1. Select the subset of rows x_{k,\circ} whose first column has the value y_1: A^1=\{ x_{k,\circ} | x_{k,1}= y_1 \}. Since x_{k,m} is a realization of independent random variables, there is an infinite number of rows in A^1 with probability one. Moreover, viewing A^1 as a table, one can see that a_{k,1} = y_1, \forall k , and that a_{k,m}, m>1, \forall k , is a realization of equi-probable binary random variables.

Induction step. Suppose there is a set of countably many rows A^k such that a_{p,m}^k= y_m, \forall m \leq k, \forall p. We select from the table A^k the subset of rows A^{k+1}= \{ a^k_{p, \circ} | a^k_{p,k+1}=y_{k+1} \} . The set A^{k+1} consists of a countably infinite number of rows with probability 1. Moreover, the table A^{k+1} is divided into 2 parts. In the left part – columns from 1 to k+1 – all rows coincide with y_1 ... y_{k+1}. The right part – all columns starting from k+2 – is a realization of independent random variables.

Therefore, by induction, there is a set A^{\infty} that is countably infinite with probability 1, and each of its members is a row equal to y. QED.
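
A finite Matlab illustration of the selection step only (a finite table obviously cannot decide anything about the infinite statement; the table size is arbitrary):

% count the rows of a random binary table that agree with the diagonal-flipped
% row y on the first k columns; about K/2^k such rows are expected
K = 2^16;  M = 16;
X = randi([0 1], K, M);                         % a realization of the Bernoulli table
y = 1 - diag(X(1:M, 1:M))';                     % Cantor's diagonal row
for k = 1:8
    match = all(X(:,1:k) == repmat(y(1:k), K, 1), 2);
    fprintf('rows agreeing with y on the first %d columns: %6d (expected ~%d)\n', ...
            k, nnz(match), round(K/2^k));
end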

If one interprets the table x_{k,m} as a membership table, i.e. a set B_{k}= \{ e_m : x_{k,m} =1 \}, then by the theorem we have a countably infinite representation of the power set of a countably infinite set, whereas the Cantor diagonal argument says the opposite. Therefore, if one replaces the axiom of choice with the axiom of “free will” – there exists a Bernoulli process – one has contradictory statements about set cardinality. The axiom of choice is then deducible from the fact that all members of the sets A^k are ordered by the rows and by the particular realization of the random variables.

Applying weights to the columns, i.e. representing real numbers r_k=\sum_m x_{k,m} 2^{-m}, shows there are countably many reals, and moreover that there are countably many equal reals.


Nullspace of tautologies

Below is a Mathematica printout showing the usefulness of tautologies.
tautologies6vars4thOrder

Given x_k, k=1..6, and the equations x_k^2-1=0, one can expect a 64-dimensional nullspace in the space of monomials of some degree. The printout shows that in the usual Nullstellensatz case one needs 8th-order monomials, whereas it is sufficient to have tautology variables of equivalent degree 6. This is on the decoding side.
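
A rough Matlab illustration of where the 2^6 = 64 count comes from, under the assumption that it refers to evaluating monomials on the solution set of x_k^2-1=0, i.e. the 64 points of \{-1,1\}^6 (polynomials in the null space of this evaluation matrix vanish on all of \{-1,1\}^6):

n = 6;
V = 2*(dec2bin(0:2^n-1, n) - '0') - 1;          % the 64 points of {-1,1}^6
g = 0:n;
[e1,e2,e3,e4,e5,e6] = ndgrid(g,g,g,g,g,g);
E = [e1(:) e2(:) e3(:) e4(:) e5(:) e6(:)];
E = E(sum(E,2) <= n, :);                        % exponent vectors of degree <= 6
A = ones(size(V,1), size(E,1));
for j = 1:size(E,1)
    A(:,j) = prod(bsxfun(@power, V, E(j,:)), 2);    % evaluate monomial j at all points
end
for d = 1:n
    cols = sum(E,2) <= d;
    fprintf('degree <= %d: %3d monomials, rank of evaluation matrix %2d\n', ...
            d, nnz(cols), rank(A(:,cols)));
end
% the rank saturates at 2^6 = 64, the dimension of the quotient modulo x_k^2 - 1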

The encoding (problem equations) side still needs some work. What is obvious is that there is much more available than just multiplication by monomials. For example, take a quadratic encoding polynomial f= \sum a_{i,k} x_i x_k = 0, x_0= 1. In tautology variables it is a linear equation, but then one can split it into two halves on the right and left sides and square it. One can also take cubes; that is still possible since we are looking at equivalent 6th-order polynomials. One has exponentially many ways to split the encoding equation. The question here is how to extract the relevant information in, say, polynomial time.
