## P vs NP – an analogy between analogies

There are a few ingredients that may allow one to prove that P=NP, although most people believe the opposite. Here I sketch an outline.

1. Partition problem: given a multiset $a$ with $n$ entries, decide whether it can be divided into two multisets having the same sum. Formally, decide whether there exists a vector $x \in \{-1,1\}^n$ such that $\sum \limits_{k=1}^n a_k x_k=0$, or in vector notation $x^T a=0$. This problem is NP-complete.

2. It is possible to encode this problem as the minimization of a quartic polynomial:
$\min \limits_{x \in \mathbb{R}^n } \sum \limits_{k=1}^n \left( x_k^2-1\right)^2+ \left( \sum \limits_{k=1}^n a_k x_k \right)^2$
The minimum is zero when such an $x$ exists, and is greater than zero otherwise. Moreover, the non-zero minimum is bounded away from zero by a quantity depending inversely on the size of the vector $a$.
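As a sanity check on this encoding, here is a small sketch (the function names are mine) that evaluates the quartic over the sign vectors $x \in \{-1,1\}^n$; the restriction to sign vectors is for illustration only, since the true minimization runs over all real vectors.

```python
from itertools import product

def quartic(a, x):
    """Value of sum_k (x_k^2 - 1)^2 + (sum_k a_k x_k)^2 at the point x."""
    return sum((xk * xk - 1) ** 2 for xk in x) + sum(ak * xk for ak, xk in zip(a, x)) ** 2

def min_over_signs(a):
    """Minimum of the quartic over x in {-1, 1}^n: zero iff a perfect partition exists."""
    return min(quartic(a, x) for x in product((-1, 1), repeat=len(a)))

# [1, 2, 3] splits as 1 + 2 = 3, so the minimum is 0;
# [1, 1, 1, 1, 1] has odd total sum, so the minimum stays positive.
```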

3. A non-zero minimum exists if $\exists \lambda_k \geq 0, \gamma >0$ such that $\sum \limits_{k=1}^n \left( x_k^2-1\right)^2+ \left( \sum \limits_{k=1}^n a_k x_k \right)^2 = \sum \lambda_k \mbox{PSD}_k(x) +\gamma$, where the $\mbox{PSD}_k(x)$ are positive semidefinite polynomials (polynomials that are non-negative for all values of $x$), and the summation is taken over all PSD polynomials.

4. The set of PSD polynomials is convex. Therefore, in principle the problem is solvable in polynomial time. The difficulty is the generation of the rays of positive polynomials. For instance, it is easy to generate rays of PSD polynomials that are sums of squares of polynomials. On the other hand, there are plenty of examples of PSD polynomials that are not sums of squares (see, for example, Reznick, 2007). It is not easy to find all of them.

5. Polynomials of a fixed degree (4 in this case) form a finite-dimensional space spanned by all possible monomials. So there are not too many (polynomially many in the number of inputs) basis functions that can be used to show a non-zero minimum. The question is how to generate them.

6. Computer-science PSD polynomials: these are PSD polynomials arising from instances of computer-science problems that have no solution, for example the partition-problem polynomials from (2). For instance, the polynomial corresponding to $a= [1, 1, 1, 1, 1]$ is strictly positive. Here we need to generate enough PSD polynomials that certainly have no solution. Then we would have a complete basis for PSD polynomials and could solve the problem, which in this case would be an instance of linear programming (details for a similar problem can be found in Parrilo's 2000 thesis).

7. The partition problem has the nice property of a phase transition (Mertens, 2003): there is a specific control parameter such that below it the probability that a perfect partition exists is 1, and above it the probability is 0 for large $n$. Therefore, we have a tool to generate many examples of strictly positive polynomials, and hopefully to obtain a complete basis of positive polynomials with probability tending to 1.

To summarize: we use (7) to generate the basis from (2), and solve the linear programming problem in (3). With probability close to 1 we then obtain, in polynomial time, a certificate for an unsolvable problem.

## Lonely Runner Conjecture. General case.

We consider the Lonely Runner Conjecture (LRC).

Conjecture Suppose $n>1$ runners having distinct constant speeds $v_k > 0$ start at a common point (origin) and run laps on a circular track with circumference 1. Then, there is a time when no runner is closer than $\frac{1}{n+1}$ to the origin.

W.l.o.g. we can assume $v_k > v_m \forall k>m$.

One can formulate the LRC as follows. Suppose $l_k \in \mathbb{N}_0$ is the number of whole laps (including 0) that runner $k$ has completed on the track; then the claim is that $\exists t \in \mathbb{R}_+$ and $\exists l_k$ such that $l_k+\frac{ 1 }{n+1} \leq v_k t \leq l_k+ \frac{n}{n+1}$.
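This formulation is easy to probe numerically. Below is a minimal sketch (the helper names are mine); it assumes integer speeds and searches a rational time grid, so the grid denominator must be chosen fine enough for the given speeds.

```python
from fractions import Fraction

def dist_to_int(x):
    """Distance from a non-negative rational x to the nearest integer."""
    f = x - int(x)
    return min(f, 1 - f)

def lonely_time(speeds, denom=60):
    """Search t = m/denom for a time when every runner is at least
    1/(n+1) from the origin; returns the first such t, or None."""
    goal = Fraction(1, len(speeds) + 1)
    for m in range(1, denom):
        t = Fraction(m, denom)
        if all(dist_to_int(v * t) >= goal for v in speeds):
            return t
    return None
```

For speeds $(1,2)$ this finds $t=\frac{1}{3}$, and for $(1,2,3)$ it finds $t=\frac{1}{4}$, matching the bound $\frac{1}{n+1}$ exactly; a too-coarse grid (e.g. denominator 7 for speeds $(1,2)$) misses the only valid times.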

The case $n=1$ (two runners) is trivial: at time $t=\frac{1}{2 v_1}$ runner 1 is at distance exactly $\frac{1}{2}$ ($l_1= 0$).

The case $n=2$ (three runners) is a special case.

$\left\{ \begin{array}{ll} \frac{l_1+1/3}{v_1} \leq t, & t \leq \frac{l_1+2/3}{v_1} \\ \frac{l_2+1/3}{v_2} \leq t, & t \leq \frac{l_2+2/3}{v_2} \end{array} \right.$

Since $t \in \mathbb{R}_+$, we can eliminate it: a common $t$ exists iff every left-hand bound in the left column is no greater than every right-hand bound in the right column, i.e.
$\left\{ \begin{array}{l} \frac{l_1+1/3}{v_1} \leq \frac{l_1+2/3}{v_1} \\ \frac{l_1+1/3}{v_1} \leq \frac{l_2+2/3}{v_2} \\ \frac{l_2+1/3}{v_2} \leq \frac{l_1+2/3}{v_1}\\ \frac{l_2+1/3}{v_2} \leq \frac{l_2+2/3}{v_2} \end{array} \right.$

The first and the last inequalities hold trivially. From the second and third inequalities we express $l_2$:
$\frac{v_2}{v_1} \left( l_1 + \frac{1}{3} \right) -\frac{2}{3} \leq l_2 \leq \frac{v_2}{v_1} \left( l_1 + \frac{2}{3} \right) -\frac{1}{3}.$
In other words,
$\frac{v_2}{v_1} \left( l_1 + \frac{1}{3} \right) -\frac{2}{3} \leq l_2 \leq \frac{v_2}{v_1} \left( l_1 + \frac{1}{3} \right) -\frac{2}{3} + \frac{1}{3}\frac{v_2}{v_1} +\frac{1}{3}.$

There are 2 sub-cases.

1. Sub-case $\frac{v_2}{v_1} \geq 2.$
Then $\frac{1}{3}\frac{v_2}{v_1} +\frac{1}{3} \geq 1$, so for every $l_1$ there exists $l_2 \in \mathbb{N}_0$ satisfying the inequality.
2. Sub-case $\frac{v_2}{v_1} < 2 .$
In this case $l_1=l_2=0$ leads to (remember that $\frac{v_2}{v_1} > 1$)
$\frac{1}{3} \frac{ v_2}{v_1} - \frac{2}{3} \leq 0 \leq \frac{2}{3} \frac{v_2}{v_1} - \frac{1}{3}.$

The case $n=3$ (four runners) illustrates the general case.

$\left\{ \begin{array}{ll} \frac{l_1+1/4}{v_1} \leq t, & t \leq \frac{l_1+3/4}{v_1} \\ \frac{l_2+1/4}{v_2} \leq t, & t \leq \frac{l_2+3/4}{v_2} \\ \frac{l_3+1/4}{v_3} \leq t, & t \leq \frac{l_3+3/4}{v_3} \end{array} \right.$

Let’s express $l_2$ in terms of $l_1$
$\frac{v_2}{v_1} \left( l_1 + \frac{1}{4} \right) -\frac{3}{4} \leq l_2 \leq \frac{v_2}{v_1} \left( l_1 + \frac{3}{4} \right) -\frac{1}{4}.$
Rewriting it we obtain
$\frac{v_2}{v_1} \left( l_1 + \frac{1}{4} \right) -\frac{3}{4} \leq l_2 \leq \frac{v_2}{v_1} \left( l_1 + \frac{1}{4} \right) -\frac{3}{4} + \frac{1}{2}\frac{v_2}{v_1} +\frac{1}{2}.$
Since $\frac{v_2}{v_1} > 1$, we have $\frac{1}{2}\frac{v_2}{v_1} +\frac{1}{2} >1$. In other words, $\forall l_1\ \exists l_2 \in \mathbb{N}_0$ satisfying the inequality.

Now let us express $l_3$ in terms of $l_1$ and $l_2$:
$\left\{ \begin{array}{l} \frac{v_3}{v_1} \left( l_1 + \frac{1}{4} \right) -\frac{3}{4} \leq l_3 \leq \frac{v_3}{v_1} \left( l_1 + \frac{3}{4} \right) -\frac{1}{4} \\ \frac{v_3}{v_2} \left( l_2 + \frac{1}{4} \right) -\frac{3}{4} \leq l_3 \leq \frac{v_3}{v_2} \left( l_2 + \frac{3}{4} \right) -\frac{1}{4}. \end{array}\right.$

Substituting the bounds on $l_2$ obtained earlier:
$\left\{ \begin{array}{l} \frac{v_3}{v_1} \left( l_1 + \frac{1}{4} \right) -\frac{3}{4} \leq l_3 \leq \frac{v_3}{v_1} \left( l_1 + \frac{3}{4} \right) -\frac{1}{4} \\ \frac{v_3}{v_2} \left( \frac{v_2}{v_1} \left( l_1 + \frac{1}{4} \right) -\frac{3}{4} + \frac{1}{4} \right) -\frac{3}{4} \leq l_3 \leq \frac{v_3}{v_2} \left( \frac{v_2}{v_1} \left( l_1 + \frac{3}{4} \right) -\frac{1}{4} + \frac{3}{4} \right) -\frac{1}{4}. \end{array}\right.$
Collecting terms we obtain
$\left\{ \begin{array}{l} \frac{v_3}{v_1} \left( l_1 + \frac{1}{4} \right) -\frac{3}{4} \leq l_3 \leq \frac{v_3}{v_1} \left( l_1 + \frac{3}{4} \right) -\frac{1}{4} \\ \frac{v_3}{v_1} \left( l_1 + \frac{1}{4} \right) -\frac{3}{4} -\frac{1}{2} \frac{v_3}{v_2} \leq l_3 \leq \frac{v_3}{v_1} \left( l_1 + \frac{3}{4} \right)-\frac{1}{4}+ \frac{1}{2} \frac{v_3}{v_2}. \end{array}\right.$

We can see that the first inequality is always stronger than the second one (if $l_3$ satisfies the first inequality, it satisfies the second). But the first inequality is the same as the previous inequality for $l_2$ after relabelling. Therefore, $\forall l_1\ \exists l_2, l_3 \in \mathbb{N}_0$ satisfying the initial inequalities, and the LRC holds.

General case

We start with $2 n$ inequalities

$\left\{ \begin{array}{lll} \frac{l_k+\frac{1}{n+1}}{v_k} \leq t, & t \leq \frac{l_k+\frac{n}{n+1}}{v_k} & k=1..n \end{array} \right.$

Now we express $l_{k+m_1}$ ($m_1 \geq 1$) in terms of $l_k$, and $l_{k+m_2}$ ($m_2 > m_1$) in terms of $l_{k+m_1}$ and of $l_k$:
$\frac{v_{k+m_1}}{v_k} \left( l_k + \frac{1}{n+1} \right) -\frac{n}{n+1} \leq l_{k+m_1} \leq \frac{v_{k+m_1}}{v_k} \left( l_k + \frac{n}{n+1} \right) -\frac{1}{n+1}.$
$\frac{v_{k+m_2}}{v_{k+m_1}} \left( l_{k+m_1} + \frac{1}{n+1} \right) -\frac{n}{n+1} \leq l_{k+m_2} \leq \frac{v_{k+m_2}}{v_{k+m_1}} \left( l_{k+m_1} + \frac{n}{n+1} \right) -\frac{1}{n+1}.$
$\frac{v_{k+m_2}}{v_k} \left( l_k + \frac{1}{n+1} \right) -\frac{n}{n+1} \leq l_{k+m_2} \leq \frac{v_{k+m_2}}{v_k} \left( l_k + \frac{n}{n+1} \right) -\frac{1}{n+1}.$

Now we can substitute first inequalities into the second ones.
$\frac{v_{k+m_2}}{v_{k+m_1}} \left( \frac{v_{k+m_1}}{v_k} \left( l_k + \frac{1}{n+1} \right) -\frac{n}{n+1} + \frac{1}{n+1} \right) -\frac{n}{n+1} \leq l_{k+m_2} \leq \frac{v_{k+m_2}}{v_{k+m_1}} \left(\frac{v_{k+m_1}}{v_k} \left( l_k + \frac{n}{n+1} \right) -\frac{1}{n+1} + \frac{n}{n+1} \right) -\frac{1}{n+1}.$
Rearranging terms we obtain
$\frac{v_{k+m_2}}{v_k} \left( l_k + \frac{1}{n+1} \right) -\frac{n}{n+1} - \frac{n-1}{n+1} \frac{v_{k+m_2}}{v_{k+m_1}} \leq l_{k+m_2} \leq \frac{v_{k+m_2}}{v_k} \left( l_k + \frac{n}{n+1} \right) -\frac{1}{n+1} +\frac{n-1}{n+1} \frac{v_{k+m_2}}{v_{k+m_1}}.$
Compare it to
$\frac{v_{k+m_2}}{v_k} \left( l_k + \frac{1}{n+1} \right) -\frac{n}{n+1} \leq l_{k+m_2} \leq \frac{v_{k+m_2}}{v_k} \left( l_k + \frac{n}{n+1} \right) -\frac{1}{n+1}.$
The latter, direct inequality is always the stronger one: whenever it holds, the derived inequality holds as well.

Therefore, we are left with the following set of inequalities (they are satisfied if and only if the LRC holds):
$\frac{v_{k}}{v_1} \left( l_1 + \frac{1}{n+1} \right) -\frac{n}{n+1} \leq l_{k} \leq \frac{v_{k}}{v_1} \left( l_1 + \frac{n}{n+1} \right) -\frac{1}{n+1},\ \forall k=2..n.$

Let us rewrite it again:
$\frac{v_{k}}{v_1} \left( l_1 + \frac{1}{n+1} \right) -\frac{n}{n+1} \leq l_{k} \leq \frac{v_{k}}{v_1} \left( l_1 + \frac{1}{n+1} \right) -\frac{n}{n+1} + \frac{n-1}{n+1} \left( \frac{v_k}{v_1} +1 \right)$

$\frac{n-1}{n+1} \left( \frac{v_k}{v_1} +1 \right) > 2 \frac{n-1}{n+1} = 1+ \frac{n-3}{n+1} \geq 1$ if $n \geq 3 .$

It is not clear from this argument why $\frac{1}{n+1}$ is the tight distance.

## On the lonely runner conjecture III

We consider the Lonely Runner Conjecture (LRC).

Conjecture Suppose $n>1$ runners having distinct constant integer speeds $v_k$ start at a common point (origin) and run laps on a circular track with circumference 1. Then, there is a time when no runner is closer than $\frac{1}{n+1}$ to the origin.

The cases of $n=2,3$ runners are discussed here. Now we look at the case $n=4$.

Let $\| x \|$ denote the distance of $x$ to the nearest integer.

Case $v_k \mod 5 \neq 0,\ \forall k$. At time $t=\frac{1}{5}$ all runners are at least $\frac{1}{5}$ from the origin. Done.

Case $v_k= 5 r_k$, $k=1..3$, $v_4 \mod 5 \neq 0$. There is a time $t_1$ such that runners with speeds $r_k$, $k=1..3$, are all at least $\frac{1}{4}$ from the origin – see the case $n=3$. At time $t_2= \frac{t_1}{5}$ runners $v_1, v_2, v_3$ are at the same positions as runners $r_1, r_2, r_3$ at time $t_1$. If at time $t_2$ we have $\| v_4 t_2\| \geq \frac{1}{5}$, we are done. Otherwise, since $v_4$ and 5 are co-prime, $\exists m : \| v_4 \frac{m}{5} \|= \frac{2}{5}$, and hence $\| v_4 \left( t_2+\frac{m}{5} \right) \| \geq \frac{2}{5}-\frac{1}{5}=\frac{1}{5}$, while runners 1–3, whose speeds are multiples of 5, remain at the same positions. Done.

Case $v_k= 5 s_k, k=1,2, v_k \mod 5 \neq 0, k=3,4$.

Definition Sector $S_k= \left( \frac{k}{5}, \frac{k+1}{5} \right), k=0..4$.
There is a time $t_1$ such that runners with speeds $s_k$, $k=1,2$, are at least $\frac{1}{3}$ from the origin – see the case $n=2$. Since $v_3, v_4$ are co-prime with 5, at the times $t_1+\frac{m}{5}$, $m=0..4$, each of those runners visits all five sectors once, while runners 1 and 2 stay at the same positions. Of these 5 times, runner 3 is in the sectors $S_0, S_4$ at 2 of them, and runner 4 is in those sectors at 2 (possibly different) of them. Therefore, there is an $m$ with $\| v_k \left( t_1+\frac{m}{5} \right) \| \geq \frac{1}{5}$ for all $k$. Done.

The two previous cases follow from this post.
Now the difficult case.

Case $v_1= 5 s_1,\ v_k \mod 5 \neq 0, k=2,3,4$.

Rearrange the speeds so that $v_2+v_3 = 5 s_2$. If there is more than one way, choose the one with the maximum sum. We have 2 sub-cases: $s_1 \geq s_2$ and $s_1< s_2$.

1. $s_1 \geq s_2$
$\exists m:\ \| v_2 \frac{m}{5} \|=\| v_3 \frac{m}{5} \| = \frac{2}{5}$. Runner 1 is faster than both runners 2 and 3 (since $v_1 = 5 s_1 \geq 5 s_2 = v_2+v_3$). Therefore, they meet at a distance greater than $\frac{1}{5}$.

Now runner 4.

1. $\| v_4 \frac{m}{5} \| = \frac{2}{5} \Rightarrow$ this runner is slower than one of runners 2 and 3 (call that runner $r$), which is at the same position as runner 4. Therefore, runner 4 will meet runner 1 later than runner $r$ does, and runner $r$ meets runner 1 at a distance larger than $\frac{1}{5}$. Done.
2. $\| v_4 \frac{m}{5} \| = \frac{1}{5} \Rightarrow$ this runner has to move fast to reach runner 1 at distance $\frac{1}{5}$: it must cover at least $\frac{3}{5} + l$ of the lap. But looking in the opposite direction in time, over the same interval it covers the same distance, and so it is at least $\frac{2}{5}$ from the origin when runner 1 reaches distance $\frac{1}{5}$. If it moves even faster, then to meet runner 1 at $\frac{1}{5}$ it has to cover at least the whole lap, and therefore it meets runner 1 on the opposite side before runner 1 reaches $\frac{1}{5}$. Done.
2. $s_1< s_2$ This is the most interesting case.

At the times $t=\frac{m}{5s_2}$, $m=1..5s_2-1$, runners 2 and 3 are at the same distance from the origin, on opposite sides.

At these times runner 1 exhibits periodic motion: it makes $5 \gcd ( s_1,s_2)$ cycles, where $\gcd$ stands for the greatest common divisor. The number of distinct positions it attains, evenly distributed along the track, is $n_1=\frac{s_2}{\gcd ( s_1,s_2)} \geq 2$. If $n_1=2$, half of the time runner 1 is away from the origin; with larger $n_1$ the fraction is bigger, with limit $\frac{3}{5}$. Let $t_1 \in t$ be a time when runner 1 is distant; at the same time runners 2 and 3 are at the same distance from the origin, on opposite sides. Now at the times $t_1+\frac{m}{5}$, $m=0..4$, runner 1 stays at the same position, runners 2 and 3 remain at opposite positions relative to the origin, and runner 4 visits each sector once. Therefore, there are 2 of these moments when runner 4 is in $S_0 \cup S_4$, and, possibly different, 2 moments when runner 2 is in $S_0 \cup S_4$; on the other hand, when runner 2 is in sector 0, runner 3 is in sector 4 and vice versa. Therefore, there is an $m$ with $\| v_k \left( t_1 + \frac{m}{5} \right)\| \geq \frac{1}{5}$ for all $k$. Done.

$\blacksquare$
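The easy first case above is simple enough to check mechanically. The sketch below (helper names are mine) verifies that for integer speeds none of which is divisible by 5, the single time $t=\frac{1}{5}$ already keeps every runner at least $\frac{1}{5}$ from the origin.

```python
from fractions import Fraction

def dist_to_int(x):
    """Distance from a non-negative rational x to the nearest integer."""
    f = x - int(x)
    return min(f, 1 - f)

def first_case_works(speeds):
    """At t = 1/5, an integer speed v with v mod 5 != 0 lands at
    distance ||v/5|| in {1/5, 2/5} from the origin."""
    t = Fraction(1, 5)
    return all(dist_to_int(v * t) >= Fraction(1, 5) for v in speeds)
```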

PS. If you find an error or can offer a better exposition, leave a comment. If you do not want to be on the list of those talking to a crackpot trisector, say so in the comment; the comment will not appear publicly, but its content will be incorporated into the text.

PPS. There is a much shorter proof for $n=4$ in Barajas and Serra (2008). Algebraically it is short, but I think it lacks intuition. I have a feeling that the present approach is quite general, and that the cases $n=5$ and $n=6$ bring in new ingredients (non-prime $n+1$, and competition between 2 pairs of paired runners). Maybe we need to wait for the next new ingredient until $n=3 \times 5 -1 = 14$.


## On the lonely runner conjecture II

We consider the Lonely Runner Conjecture (LRC).

Conjecture Suppose $n>1$ runners having distinct constant integer speeds $v_k$ start at a common point (origin) and run laps on a circular track with circumference 1. Then, there is a time when no runner is closer than $\frac{1}{n+1}$ to the origin.

We first give trivial proofs for the cases of $n=2,3$ runners.

Let $\| x \|$ denote the distance of $x$ to the nearest integer.

Let $\kappa_n = \inf \max \limits_{0 \leq t \leq 1} \min \limits_{ 1 \leq i \leq n} \| v_i t \|$, where the infimum is taken over all $n$-tuples $v_1, \dots, v_n$ of distinct positive integers.

In terms of $\kappa_n$, the LRC states that $\kappa_n= \frac{1}{n+1}$.

Without loss of generality (wlog) we can assume $k < m \Leftrightarrow v_k < v_m$.

Case $n=2$. Wlog we can assume $v_1, v_2$ are relatively prime. At the times $t_m= \frac{m}{v_1+v_2}$, $m=1..v_1+v_2-1$, the two runners are at the same distance from the origin, on opposite sides of the lap, at distances that are multiples of $\frac{1}{v_1+v_2}$. Since the speeds are relatively prime, both runners visit all the points $\frac{m}{v_1+v_2}$, $m=1..v_1+v_2-1$. The largest common distance is $\frac{ \left[ \frac{v_1+v_2}{2} \right] }{v_1+v_2}$, where $\left[ \circ \right]$ denotes the integer part.

For example, for $v_1=1, v_2= 2$, at times $\frac{1}{3}, \frac{2}{3}$ the runners are at positions $(\frac{1}{3}, \frac{2}{3})$ and $(\frac{2}{3},\frac{1}{3})$, and the maximum distance from the origin is $\frac{\left[ \frac{1+2}{2}\right]}{3}= \frac{\left[1.5 \right]}{3}= \frac{1}{3}$.

Another example, $v_1= 2, v_2= 3$, the maximum distance is $\frac{2}{5} > \frac{1}{3}$.

If $v_1 \equiv v_2 \equiv 1 \pmod 2$, the maximum distance is $\frac{1}{2} > \frac{1}{3}$, attained at time $\frac{1}{2}$.

In general the maximum distance is $r_{1,2}= \frac{1}{2} \left(1- \frac{ (v_1+v_2) \mod 2 }{ v_1+v_2} \right)$, with the minimum attained for runners with speeds $v_1= 1, v_2= 2$. Therefore, $\kappa_2= \frac{1}{3}$.
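The closed form for $r_{1,2}$ can be cross-checked by brute force over the times $\frac{m}{v_1+v_2}$; the sketch below (my own names, assuming coprime integer speeds) does exactly that.

```python
from fractions import Fraction

def dist_to_int(x):
    """Distance from a non-negative rational x to the nearest integer."""
    f = x - int(x)
    return min(f, 1 - f)

def best_separation(v1, v2):
    """Max over t = m/(v1+v2) of the common distance of the two
    runners from the origin (they are equidistant at these times)."""
    s = v1 + v2
    return max(min(dist_to_int(Fraction(m * v1, s)),
                   dist_to_int(Fraction(m * v2, s))) for m in range(1, s))

def r12(v1, v2):
    """The closed form (1/2)(1 - ((v1+v2) mod 2)/(v1+v2))."""
    s = v1 + v2
    return Fraction(1, 2) * (1 - Fraction(s % 2, s))

# For every coprime pair the two values agree,
# e.g. (1,2) gives 1/3 and (2,3) gives 2/5.
```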

Case $n=3$.

• First we assume the simple case: $v_3 \mod ( v_1+v_2 )= 0$, i.e. $v_3= q(v_1+v_2)$, $q \in \mathbb{N}_+$.

At the times $t= \frac{m}{v_1+v_2}$ runner 3 is at the origin. Some time before such a moment it was at the same distance as runner 1, and some time after it will be at the same distance as runner 2 (the situation is reversed at times $1-t$). We are interested in what happens around the time when runners 1 and 2 are most distant from the origin.

For runners 1 and 3: they need to close the distance $r_{1,2}$ at combined speed $v_1+v_3$, so the time needed is $\frac{r_{1,2} }{v_1+v_3}$. In that time runner 3 covers $r_{1, 3}= \frac{v_3 r_{1,2} }{v_1+v_3}= r_{1,2} \frac{1}{1+v_1 / v_3 }$. Therefore, the closest point is reached when $\frac{v_1}{v_3}= \frac{v_1}{q (v_1+v_2) }$ is maximal.

$r_{1, 3}= \frac{q ( v_1+v_2 ) }{v_1+ q ( v_1+v_2 ) } \frac{1}{2} \left(1- \frac{ (v_1+v_2) \mod 2 }{ v_1+v_2} \right)$

If $(v_1+v_2) \mod 2 = 0$ then $r_{1,2}= \frac{1}{2}$, and since $v_2 > v_1$ we have $r_{2, 3} < r_{1, 3}$, so $r_{2,3}$ is the binding distance:

$r_{2, 3}= \frac{1}{2} \frac{q ( v_1+v_2 ) }{v_2+ q ( v_1+v_2 ) }$

Lemma 1. $\frac{q ( v_1+v_2 ) }{v_2+ q ( v_1+v_2 ) } > \frac{1}{2}$.
Proof. $\frac{q ( v_1+v_2 ) }{v_2+ q ( v_1+v_2 ) } - \frac{1}{2} =$ $\frac{1}{2} \frac{ 2 q ( v_1+v_2 ) - v_2 - q ( v_1+v_2 )}{v_2+ q ( v_1+v_2 ) } =$ $\frac{1}{2} \frac{ q v_1 + ( q - 1) v_2 }{v_2+ q ( v_1+v_2 ) } > 0$ $\blacksquare$

Now, consider the case when $(v_1+v_2) \mod 2 = 1$.

$r_{1, 3}= \frac{1}{2} \frac{q ( v_1+v_2 ) }{v_1+ q ( v_1+v_2 ) } \frac{ v_1+v_2 - 1 }{ v_1+v_2} = \frac{1}{2} \frac{q ( v_1+v_2 - 1 ) }{v_1+ q ( v_1+v_2 ) }$

Lemma 2. $\frac{q ( v_1+v_2 - 1 ) }{v_1+ q ( v_1+v_2 ) } \geq \frac{1}{2}$
Proof. $\frac{ ( v_1+v_2 - 1 ) }{v_1/q + v_1+v_2 } - \frac{1}{2} =$ $\frac{2 v_1+ 2 v_2 - 2 - v_1/q - v_1-v_2 }{2 \left[ v_1/ q+ ( v_1+v_2 ) \right] } =$ $\frac{ v_1 \left( 1- \frac{1}{q} \right)+ v_2 - 2 }{2 \left[ v_1/ q+ ( v_1+v_2 ) \right] }$. Since $v_2 \geq 2$ the numerator is non-negative, vanishing only when $v_2=2, v_1= 1, q= 1$ $\blacksquare$

Corollary 3. $r_{1, 3} \geq \frac{1}{4}$.
Corollary 4. $r_{1, 3} = \frac{1}{4}$ only for $v_1=1, v_2=2, v_3=3$.

• Case: $v_3 \mod ( v_1+v_2 ) \neq 0$.

Lemma 5. Fix a time $t= \frac{m}{v_1+v_2}$, $m \in \{1, ..., v_1+v_2-1 \}$. Let $r_3$ be the distance of runner 3 from the origin at the moment $t$, and let $r_{1,2} > r_3$ be the distance of runners 1 and 2 from the origin at the same time. The maximal distance at which runner 3 becomes equidistant with runner 1 or with runner 2 around time $t$ is greater than $\frac{r_3+r_{1, 2}}{2}$.
Proof. Runner 3 is running either toward or away from the origin, i.e. its distance to the origin either decreases or increases with time. Let $v$ be the speed of whichever of runners 1 and 2 moves in the opposite sense, so that the difference in distances decreases as we move either forward or backward in time. Since $v_3 > v$, runner 3 covers a greater distance away from the origin than the other runner $\blacksquare$

Lemma 6. Fix the time $t= \frac{m}{v_1+v_2}$ at which $r_{1,2}= \frac{1}{2} \left(1- \frac{ (v_1+v_2) \mod 2 }{ v_1+v_2} \right)$. Let $r'_3= \frac{m v_3 \mod (v_1+v_2) }{v_1+v_2}$, and let $r_3= \min (r'_3, 1-r'_3)= \| r'_3\|$ be the distance of runner 3 from the origin. Then $r_{1, 2} + r_3 > \frac{1}{2}$.
Proof. $r_3 \geq \frac{1}{v_1+v_2}$, so $r_{1,2}+r_3 \geq \frac{1}{2} \left(1- \frac{ (v_1+v_2) \mod 2 }{ v_1+v_2} \right) + \frac{1}{v_1+v_2} =$ $\frac{1}{2} + \frac{2- (v_1+v_2) \mod 2 }{2(v_1+v_2)} > \frac{1}{2}$ $\blacksquare$

Combining Lemma 5 and Lemma 6 shows $\max (r_{1,3}, r_{2,3}) > \frac{1}{4}$.

Overall, $\kappa_3= \frac{1}{4}$, with equality only for the runners $v_1=1, v_2=2, v_3=3$.
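These closed-form results are easy to confirm numerically on a rational time grid; the sketch below (helper names are mine) recovers the values for a few small speed tuples. The grid is exact here because the extremal times for these speeds are rationals with small denominators.

```python
from fractions import Fraction

def dist_to_int(x):
    """Distance from a non-negative rational x to the nearest integer."""
    f = x - int(x)
    return min(f, 1 - f)

def max_min_distance(speeds, denom=120):
    """Max over t = m/denom of the minimum distance of any runner
    from the origin; a lower bound for the true supremum."""
    return max(min(dist_to_int(Fraction(m * v, denom)) for v in speeds)
               for m in range(1, denom))
```

For the extremal triple $(1,2,3)$ this returns exactly $\frac{1}{4}$, while e.g. $(1,2,4)$ does strictly better, consistent with Corollary 4.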


## Cantor diagonal argument is almost surely inconsistent

Here we construct a table consisting of realizations of Bernoulli processes; using the Cantor diagonal argument we construct a row that is not in the table, and finally we argue that the constructed row is almost surely in the table a countably infinite number of times.

Consider a doubly infinite table $X_{k,m}$, where the index $k \in \mathbb{N}$ enumerates rows and the index $m \in \mathbb{N}$ enumerates columns. Each element $X_{k,m}$ is an independent binary random variable with equally probable states $0$ and $1$. Suppose $x_{k,m}$ is a realization of these random variables; we fix it for the rest of the post.

Cantor diagonal argument. Let the row $y$ be given by $y_{k} = 1-x_{k,k}$. Since $y$ differs from every row in at least one place, it is not in the table (formally, one checks row after row, by induction).

Theorem. There are countably many rows in the table equal to $y$, with probability 1.
Proof. Step 1. Select the subset of rows ($x_{k,\circ}$) whose first column has value $y_1$: $A^1=\{ x_{k,\circ} \mid x_{k,1}= y_1 \}$. Since $x_{k,m}$ is a realization of independent random variables, $A^1$ contains infinitely many rows with probability one. Moreover, viewing $A^1$ as a table, one can see that $a_{k,1} = y_1$ for all $k$, while $a_{k,m}$, $m>1$, is a realization of an equiprobable binary random variable.

Induction step. Suppose there is a countable set of rows $A^k$ such that $a_{p,m}^k= y_m$ for all $m \leq k$ and all $p$. From the table $A^k$ we select the subset of rows $A^{k+1}= \{ a^k_{p, \circ} \mid a^k_{p,k+1}=y_{k+1} \}$. The set $A^{k+1}$ consists of countably infinitely many rows with probability 1. Moreover, the table $A^{k+1}$ is divided into 2 parts: in the left part – columns 1 to $k+1$ – all rows coincide with $y_1 \dots y_{k+1}$; the right part – the columns starting from $k+2$ – is a realization of independent random variables.

Therefore, by induction, there is a set $A^{\infty}$ that is countably infinite with probability 1 and whose members are all rows equal to $y$. QED.
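The selection steps $A^1 \supset A^2 \supset \dots$ can be illustrated on a finite window of the table (the window size and seed below are arbitrary choices of mine): roughly a $2^{-k}$ fraction of the rows agrees with $y$ on the first $k$ columns.

```python
import random

random.seed(0)
ROWS, COLS = 4096, 12

# Finite window of the doubly infinite random table x_{k,m}.
table = [[random.randint(0, 1) for _ in range(COLS)] for _ in range(ROWS)]

# Diagonal row: y_k = 1 - x_{k,k}.
y = [1 - table[k][k] for k in range(COLS)]

def prefix_matches(k):
    """Number of rows agreeing with y on the first k columns (the set A^k)."""
    return sum(1 for row in table if row[:k] == y[:k])
```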

If one interprets the table $x_{k,m}$ as a membership table, i.e. a set $B_{k}= \{ e_m : x_{k,m} =1 \}$, then by the theorem we have a countably infinite representation of the power set of a countably infinite set, whereas the Cantor diagonal argument says the opposite. Therefore, if one replaces the axiom of choice with an axiom of "free will" – that a Bernoulli process exists – one has contradictory statements about set cardinality. The axiom of choice is then deducible from the fact that all members of the sets $A^k$ are ordered by row and by the particular realization of the random variables.

Applying weights to the columns, e.g. representing real numbers as $r=\sum_m x_{k,m} 2^{-m}$, would show that there are countably many reals, and moreover countably many equal reals.


## Nullspace of tautologies

Below is Mathematica printout showing the usefulness of tautologies.
tautologies6vars4thOrder

Given $x_k, k=1..6$, and the equations $x_k^2-1=0$, one can expect a 64-dimensional nullspace in the space of monomials of some degree. The printout shows that in the usual Nullstellensatz case one needs 8th-order monomials, whereas it is sufficient to have tautology variables of equivalent degree 6. This is on the decoding side.

The encoding (problem equations) side still needs some work. What is obvious is that there is much more available than just multiplication by monomials. For example, take a quadratic encoding polynomial $f= \sum a_{i,k} x_i x_k = 0$, $x_0= 1$. In tautology variables it is a linear equation, but one can also split it into two halves, move one to the right-hand side, and square both sides. One can also take cubes, which is still possible since we are looking at equivalent 6th-order polynomials. There are exponentially many ways to split an encoding equation. The question is how to extract the relevant information in, say, polynomial time.

## NP-complete problems and tautologies

Many NP-complete problems can be encoded as systems of polynomial equations. For example, in the partition problem one asks whether it is possible to divide a list of integers into two lists such that the sums of the elements of the two lists are equal. Given the list of elements $a \in \mathbb{Z}^n$, one looks for an element $x \in \{ \pm 1 \}^n$ such that $\sum \limits_{k=1}^n x_k a_k =0$.

To encode the problem as a system of polynomial equations over $\mathbb{C}$ one needs $n$ equations of the form $x_k^2-1=0$ – these ensure $x_k=\pm 1$ – together with the above-mentioned equation $\sum \limits_{k=1}^n x_k a_k =0$; in total, $n+1$ nonlinear equations. We can also change variables so that $x'_k= \frac{x_k+1}{2} \in \{0,1\}$; in some sense these variables may be easier for the later analysis.
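A tiny sketch (function names are mine) that enumerates the solutions of this system by brute force:

```python
from itertools import product

def partition_solutions(a):
    """All x in {-1, 1}^n with x_k^2 - 1 = 0 and sum_k a_k x_k = 0,
    i.e. all perfect partitions of the list a."""
    return [x for x in product((-1, 1), repeat=len(a))
            if sum(ak * xk for ak, xk in zip(a, x)) == 0]
```

For $a=(1,2,3)$ there are exactly two solutions, $\pm(1,1,-1)$: the sign-flip symmetry pairs up every solution.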

One way to solve this system is to linearise it: we define a new set of variables $y_{(k,m)}= x'_k x'_m,\ k,m=0..n$, and set $x'_0=1$. In terms of $y_{(k,m)}$ we have a system of linear equations. Unfortunately, we have lost the relationships between the terms: the linear system does not "know" that $y_{(2,3)}= x'_2 x'_3$, etc. On the other hand, if a solution of the original system exists, it must lie in the null space of the coefficient matrix. We will try to use this. Suppose we have a null space basis $U=[u_1,..., u_K] \in \mathbb{C}^{ \frac{(n+1)(n+2)}{2} \times K }$; then, if the system of equations has a solution, all monomials are simultaneously expressed as linear combinations of null space vectors, $y_{(k,m)}(c)= \sum \limits_{i=1}^K c_i u_{i,(k,m)}$. The dimensionality $K$ of the null space is exactly the number of quadratic monomials minus the number of equations.

So far we have looked only at second-order monomials. Going to higher-order monomials, we can see that $x_{k_1} x_{k_2} x_{k_3} x_{k_4}= (x_{k_1} x_{k_2}) (x_{k_3} x_{k_4})=(x_{k_1} x_{k_3}) (x_{k_2} x_{k_4})=(x_{k_1} x_{k_4})( x_{k_2} x_{k_3})$. What we wrote is a tautology – it is true independently of the values of $x_k$, and for any $k_1,...,k_4$. These tautologies can be written in terms of our new variables: $y_{(k_1,k_2)}y_{(k_3,k_4)}-y_{(k_1,k_3)}y_{(k_2,k_4)}=0$, $y_{(k_1,k_2)}y_{(k_3,k_4)}-y_{(k_1,k_4)}y_{(k_2,k_3)}=0$. (If $k_m = k_n$ for some $m \neq n$, the identity degenerates and we do not obtain a tautology.) In any case, we can write down all tautologies up to order 4 and their expressions in terms of the new variables $y$. Moreover, for each $y_{(k,m)}$ we have its linear expression in terms of the unknowns $c_i$. So if we expand the tautologies in terms of $c_i$, we obtain a new set of quadratic equations that must be satisfied if the original system has a solution.
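The degree-4 tautology is easy to verify exhaustively over the vertices (in fact it holds for arbitrary real values); variable names below are mine, with 0-based indices.

```python
from itertools import product

def y(x, i, j):
    """The quadratic monomial variable y_(i,j) = x_i x_j (0-based)."""
    return x[i] * x[j]

def degree4_tautology(x):
    """x1 x2 x3 x4 regrouped three ways gives equal products."""
    return y(x, 0, 1) * y(x, 2, 3) == y(x, 0, 2) * y(x, 1, 3) == y(x, 0, 3) * y(x, 1, 2)
```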

Now we already know what to do. We linearise the system of quadratic equations: introduce new variables $b_{(k,m)}= c_k c_m$, write down the system of linear equations, find the null space of its coefficient matrix, encode $b_{(k,m)}$ as a linear combination of the null-space basis, and finally encode the fourth-order monomials in terms of the coefficients over the new null-space basis. We can repeat this forever, just remembering to write down all quadratic tautologies in terms of the known monomials (e.g. $(x_1x_2x_3x_4)(x_5x_6x_7x_8)= (x_1x_2x_3x_5)(x_4x_6x_7x_8)=...$ and many more). What is interesting here is how $K$ changes over the iterations.

Ok, I will come back here later, when I cool down and have more time. Note that the number of tautologies is much larger (seemingly exponentially larger) than the number of monomials.

At each iteration we square the number of variables which encode monomials in terms of the null space. Now consider only monomials with $k_m \neq k_n\ \forall m \neq n$. For a monomial of degree $2k$ there are $\frac{1}{2}\binom{2k}{k}$ splittings, since we choose half of the variables, and the factor of one half comes from the commutativity of multiplication. The number of such monomials is $\binom{n}{2k}$. So, assuming the tautologies are linearly independent, we have $K_0=n^2$, $K_{m+1}=K_m^2-\frac{1}{2}\binom{2m}{m}\binom{n}{2m} =$ $K_m^2-\frac{1}{2}\binom{n-m}{m}\binom{n}{m} \approx$ $K_m^2-\frac{1}{2}\, n^{2m}/(m!)^2$. So the lower bound is asymptotically …


## Semiprime factorization and custom proof system

Suppose we have $N=pq$, where $p$ and $q$ are unknown odd primes. We can encode the factorization problem as a system of polynomial equations. For instance, $p= 1+ \sum_{k=1}^n 2^k x_k$, $q= 1+ \sum_{k=1}^n 2^k y_k$, where $n = \lfloor \log_2 N \rfloor$ is one less than the number of bits required to represent $N$, and $x_k, y_k$ are binary indeterminates. With a slight abuse of notation, we can write the following system of equations (here $x, y \in \mathbb{C}^n$)

$\left\{ \begin{array}{lll} f_k= &x_k^2-x_k & =0 \\ g_k= &y_k^2-y_k &=0 \\ h= &\left( 1+ \sum_{k=1}^n 2^k x_k \right) \left( 1+ \sum_{k=1}^n 2^k y_k \right) -N & = 0 \end{array} \right.$

This system has only two solutions for a semiprime – $x$ encoding $p$ and $y$ encoding $q$, and vice versa. Therefore, the Groebner basis will consist of $2n-1$ linear equations encoding the line passing through $(x,y)= (p,q)$ and $(x,y)=(q,p)$, and one quadratic equation selecting these two points along the line.

Therefore, we want to represent the linear part of the Groebner basis as a linear combination of our equations with polynomial coefficients – $c+ \sum_k \left( a_k x_k +b_k y_k \right) = P_1 h+ \sum_k \left( P_{2,k} f_k + P_{3,k} g_k \right)$ with $P_{\dots} \in \mathbb{C}[x_1,..., x_n, y_1, ..., y_n]$. Here, the coefficients of the polynomials $P$ are treated as indeterminates. For example, for $n=1$ the corresponding system reads

$\left\{ \begin{array}{lll} f_1= &x_1^2-x_1 & =0 \\ g_1= &y_1^2-y_1 &=0 \\ h= &\left( 1+ 2 x_1 \right) \left( 1+ 2 y_1 \right) -N & = 0 \end{array} \right.$

The linear basis element is expressed as

$\begin{array}{l} c+ a_1 x_1 +b_1 y_1 = \\ \left( r_{1,1} x_1 +r_{1,2} y_1 +r_{1,3} \right) (x_1^2-x_1)+ \\ \left( r_{2,1} x_1 +r_{2,2} y_1 +r_{2,3} \right) (y_1^2-y_1)+ \\ \left( r_{3,1} x_1 +r_{3,2} y_1 +r_{3,3} \right) (1+2 x_1+2 y_1+4 x_1 y_1-N) \end{array}$.

and we have the following system for coefficients of different monomials

$\left\{ \begin{array}{rrl} x_1^3: & r_{1,1}&=0 \\ x_1^2y_1: &r_{1,2} +4r_{3,1} &=0 \\ x_1^2:& r_{1,3} - r_{1,1} +2 r_{3,1} &=0\\ x_1y_1^2:& r_{2,1}+4r_{3,2} &=0\\ x_1y_1:& -r_{1,2}-r_{2,1} +2r_{3,1}+2r_{3,2}+4r_{3,3}&=0\\ x_1:& -a_1 -r_{1,3}-r_{3,1}(N-1)+2r_{3,3} &=0\\ y_1^3:& r_{2,2} &=0\\ y_1^2:& r_{2,3}-r_{2,2} +2r_{3,2}&=0\\ y_1:& -b_1 -r_{2,3} -r_{3,2} (N-1) +2r_{3,3} &=0\\ 1:& c+ r_{3,3} (N-1) &=0\\ \end{array} \right.$

$\left\{ \begin{array}{rrl} x_1^2y_1: &-r_{1,2} &=4r_{3,1} \\ x_1y_1^2:& -r_{2,1}&=4r_{3,2} \\ x_1^2:& -r_{1,3} &=2 r_{3,1}\\ y_1^2:& -r_{2,3} &=2r_{3,2}\\ x_1y_1:& 2r_{3,3}&=-3(r_{3,1}+r_{3,2})\\ x_1:& -r_{3,1}N-3r_{3,2} &=a_1\\ y_1:& -r_{3,2} N-3r_{3,1} &=b_1\\ 1:& 3(r_{3,1}+r_{3,2})(N-1) &=2c\\ \end{array} \right.$

Therefore,
$-(N r_{3,1}+3r_{3,2}) x_1 - ( N r_{3,2} +3r_{3,1} ) y_1 + \frac{ 3}{2}(r_{3,1}+r_{3,2})(N-1)= \\ \left( -4r_{3,1} y_1 -2r_{3,1} \right) (x_1^2-x_1)+ \\ \left( -4r_{3,2} x_1 -2r_{3,2} \right) (y_1^2-y_1)+ \\ \left( r_{3,1} x_1 +r_{3,2} y_1 - \frac{ 3}{2}(r_{3,1}+r_{3,2}) \right) (1+2 x_1+2 y_1+4 x_1 y_1-N)$

So, up to an overall sign, the left-hand side of the equation can be written as

$[r_{3,1}, r_{3,2}] \left[ \begin{array}{ccc} N &3 & \frac{ 3}{2}(1-N) \\ 3 &N &\frac{ 3}{2}(1-N) \end{array} \right] \left[ \begin{array}{c} x_1\\y_1\\1 \end{array} \right]$

What we can see here is that the matrix in the above expression has rank 2, unless $N=3$, in which case the rank is 1. When $N=3$ the nullspace of the matrix is spanned by

$\left[ \begin{array}{c} 1 \\ 0 \\ 1 \end{array} \right], \left[ \begin{array}{c} -1 \\ 1 \\ 0 \end{array} \right]$

which correspond to two solutions: $x_1= 1, y_1= 0 \rightarrow 3= 3 \cdot 1$ (the first vector), and $x_1= 0, y_1= 1 \rightarrow 3= 1 \cdot 3$ (the sum of the two vectors). Otherwise the nullspace is spanned by

$\left[ \begin{array}{c} \frac{3(N-1)}{2(N+3)} \\ \frac{3(N-1)}{2(N+3)} \\ 1 \end{array} \right]$

So, for $N=1$ we get $x_1=y_1= 0 \rightarrow 1= 1 \cdot 1$, and for $N=9$ we get $x_1= y_1= 1 \rightarrow 9= 3 \cdot 3$.
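A tiny numerical sketch of this case analysis (Python; the helper name `factor_bits` is mine, not from any library): read the single-bit factors of $N$ off the nullspace formulas above.

```python
from fractions import Fraction

def factor_bits(N):
    # Solutions (x1, y1) of N = (1 + 2*x1)(1 + 2*y1) with x1, y1 in {0, 1},
    # read off from the nullspace of the 2x3 matrix above.
    if N == 3:
        # rank drops to 1: a 2-dimensional nullspace, 3 = 3*1 and 3 = 1*3
        return [(1, 0), (0, 1)]
    x = Fraction(3 * (N - 1), 2 * (N + 3))  # nullspace component for N != 3
    if x == 0 or x == 1:
        return [(int(x), int(x))]
    return []  # N is not a product of two single-bit odd factors

assert factor_bits(9) == [(1, 1)]    # 9 = 3*3
assert factor_bits(1) == [(0, 0)]    # 1 = 1*1
assert factor_bits(5) == []          # 5 is not (1+2x)(1+2y) for bits x, y
```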

Overall, although we started with 9+3 indeterminates, only 2 of them turn out to be relevant to the problem. The question now is how this scales with $n$. Another question: is it at all possible to find an algorithm that selects only the relevant monomials in $P_{...}$?
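The $n=1$ bookkeeping is small enough to redo mechanically. A sketch (assuming sympy is available) that collects the monomial coefficients and solves for everything except the two free parameters $r_{3,1}, r_{3,2}$:

```python
import sympy as sp

x1, y1, N = sp.symbols('x1 y1 N')
a1, b1, c = sp.symbols('a1 b1 c')
r11, r12, r13, r21, r22, r23, r31, r32, r33 = sp.symbols(
    'r11 r12 r13 r21 r22 r23 r31 r32 r33')

f = x1**2 - x1
g = y1**2 - y1
h = (1 + 2*x1)*(1 + 2*y1) - N
lhs = c + a1*x1 + b1*y1
rhs = (r11*x1 + r12*y1 + r13)*f + (r21*x1 + r22*y1 + r23)*g \
    + (r31*x1 + r32*y1 + r33)*h

# one linear equation per monomial of x1, y1
eqs = sp.Poly(sp.expand(rhs - lhs), x1, y1).coeffs()
sol = sp.solve(eqs, [r11, r12, r13, r21, r22, r23, r33, a1, b1, c],
               dict=True)[0]

# everything is fixed by the two free parameters r31, r32
assert sp.expand(sol[a1] + N*r31 + 3*r32) == 0
assert sp.expand(sol[b1] + N*r32 + 3*r31) == 0
assert sp.expand(2*sol[c] - 3*(r31 + r32)*(N - 1)) == 0
```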

Let’s try to extend the analysis to the case $n=2$.
To be continued.

## Shor’s algorithm and period finding

Scott Aaronson gives an interesting interpretation of Shor’s quantum algorithm for factoring integers, aimed at laymen like me. What I could not find there is the reason why period finding is difficult classically; the only implicit reason is that many smart people have tried and did not find an algorithm. Here I want to examine the ideas of Shor’s algorithm in a classical setting. To be consistent with Scott’s notation, $N$ is the number we want to factor.

The main purpose of the quantum algorithm is to find the period of the function $r \mapsto x^r \mod N$, i.e. an $r$ with $x^r \equiv x \pmod N$, for some integer $x$. We need two ingredients. First, $r$ can be represented as a binary number $r= \sum_k 2^k b_k$, $b \in \{0, 1\}^n$. Second, the natural representation of $x \in \mathbb{Z}/N$ is $e^{2 \pi i \frac{x}{N}}$. The two together give a nice representation of the period as a solution of a system of polynomial equations.
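For comparison, the period itself is of course computable classically by brute force; the catch is that this takes time exponential in the bit length of $N$. A minimal sketch (the helper name `multiplicative_order` is mine):

```python
from math import gcd

def multiplicative_order(x, N):
    # Smallest r >= 1 with x**r ≡ 1 (mod N); exists whenever gcd(x, N) == 1.
    # Then x**(r+1) ≡ x (mod N), the form of periodicity used in the text.
    assert gcd(x, N) == 1
    r, acc = 1, x % N
    while acc != 1:
        acc = (acc * x) % N
        r += 1
    return r

assert multiplicative_order(2, 15) == 4   # 2^4 = 16 ≡ 1 (mod 15)
assert multiplicative_order(4, 15) == 2   # 4^2 = 16 ≡ 1 (mod 15)
```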

Following Scott’s presentation, we can write $e^{ 2 \pi i \frac{x}{N} }= e^{ 2 \pi i \frac{x^r}{N} } = e^{ 2 \pi i \frac{x ^{ \left( \sum_k 2^k b_k \right) } }{N} } = \prod_k e^{ 2 \pi i \frac{x ^{ \left( 2^k b_k \right) } }{N} } = \prod_k c_k$, where

$c_k= \left \{ \begin{array}{cc} 1, & b_k= 0 \\ e^{ 2 \pi i \frac{x ^{ 2^k } }{N} }, & b_k = 1 \end{array} \right.$

To encode the properties of $b_k$ we need the following equations

(1) $b_k (b_k-1)= 0$.

To encode $c_k$ we additionally need

(2) $C_k(x)= \left( e^{ 2 \pi i \frac{x ^{ 2^k } }{N} } -1 \right) b_k + 1$.

and the final equations encoding periodicity

(3) $\prod_k C_k(x_m) - c_{0,x_m} = 0$, where $c_{0,x_m}= e^{ 2 \pi i \frac{x_m}{N} }$.

We can write as many equations of the form (3) as we need, using different seed values $x$ (and of course we need to check whether the period is achieved by consecutive squaring). Otherwise we need to solve a complicated system of equations.

First we are going to rewrite eqns (3). We say a product term is in normal form for a set of indices $S$ if it is written as

$\phi_{0} \prod_{k \in S} (b_k-\phi_k)$

Next, we peel the onion. We are first interested in the action of $b_k+\xi$ on $b_k+\phi_k$ (reducing with $b_k^2= b_k$):

$(b_k+\xi)( b_k+\phi_k)= (1+\phi_k+\xi) b_k+ \phi_k \xi = (1+\phi_k+\xi) ( b_k+ \frac{\phi_k \xi}{1+\phi_k+\xi})$.

So if we choose $\xi= \zeta_k \frac{1+\phi_k}{\phi_k-\zeta_k}$ we obtain the form $b_k+\zeta_k$ up to a global multiplier. Therefore, we can convert any normal form into any other by applying this action to all variables.
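The reduction behind this step is easy to sanity-check with sympy (a sketch, assuming sympy is available; the identity holds modulo the bit constraint $b_k^2-b_k$):

```python
import sympy as sp

b, xi, phi = sp.symbols('b xi phi')
lhs = (b + xi) * (b + phi)
rhs = (1 + phi + xi) * b + phi * xi
# the two sides agree after reducing with the bit constraint b**2 - b = 0
assert sp.rem(sp.expand(lhs - rhs), b**2 - b, b) == 0
```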

Now consider two equations of the form (3).

$\begin{array}{ll} \phi_{0,1} \prod_k (b_k-\phi_{k,1}) - 1 & =0 \\ \phi_{0,2} \prod_k (b_k-\phi_{k,2}) - 1 & =0 \end{array}$.

We are going to act on $b_1$ in the first equation in order to obtain the corresponding term of the second equation, and we are going to act on all the remaining variables in the second equation in order to obtain the terms of the first. At the end we get

$\begin{array}{ll} \phi'_{0,1} (b_1-\phi_{1,2}) \prod_{k \neq 1} (b_k-\phi_{k,1})- (b_1-\xi_1)&=0 \\ \phi'_{0,2} (b_1-\phi_{1,2}) \prod_{k \neq 1} (b_k-\phi_{k,1})- \prod_{k \neq 1}(b_k-\xi_k)&=0 \end{array}$

The difference between the two is

$\phi_0 \prod_{k \neq 1}(b_k-\xi_k) - (b_1-\xi_1)=0$

For two seed values $x_1, x_2$ and two variables $b_p, b_q$ this gives the system

$\left \{ \begin{array}{lll} b_p \left( c_{0,2} \prod_k C_k(x_1) + c_{0,1} \prod_k C_k(x_2) \right) &= c_{0,2} \frac{ 1+c_{1,p} }{c_{1,p}} \prod_k C_k(x_1) - c_{0,2} \frac{ 1+c_{1,p}}{c_{1,p} } \prod_{k \neq p} C_k(x_1) - c_{0,1}\frac{ 1+c_{2,p} }{c_{2,p}} \prod_k C_k(x_2) + c_{0,1}\frac{ 1+c_{2,p}}{c_{2,p} } \prod_{k \neq p} C_k(x_2) & = 0\\ b_q \left( c_{0,2} \prod_k C_k(x_1) + c_{0,1} \prod_k C_k(x_2) \right) &= c_{0,2} \frac{ 1+c_{1,q} }{c_{1,q}} \prod_k C_k(x_1) - c_{0,2} \frac{ 1+c_{1,q}}{c_{1,q} } \prod_{k \neq q} C_k(x_1) - c_{0,1}\frac{ 1+c_{2,q} }{c_{2,q}} \prod_k C_k(x_2) + c_{0,1}\frac{ 1+c_{2,q}}{c_{2,q} } \prod_{k \neq q} C_k(x_2) & = 0 \\ c_{0,2} \prod_k C_k(x_1) - c_{0,1} \prod_k C_k(x_2) & =0 \end{array}\right.$

$\left \{ \begin{array}{ll} c_{0,2} \left( \frac{ 1+c_{1,p} }{c_{1,p}} - \frac{ 1+c_{2,p} }{c_{2,p}} \right) \prod_k C_k(x_1) - c_{0,2} \frac{ 1+c_{1,p}}{c_{1,p} } \prod_{k \neq p} C_k(x_1) + c_{0,1}\frac{ 1+c_{2,p}}{c_{2,p} } \prod_{k \neq p} C_k(x_2) & = 0 \\ c_{0,2} \left( \frac{ 1+c_{1,q} }{c_{1,q}} - \frac{ 1+c_{2,q} }{c_{2,q}} \right) \prod_k C_k(x_1) - c_{0,2} \frac{ 1+c_{1,q}}{c_{1,q} } \prod_{k \neq q} C_k(x_1) + c_{0,1}\frac{ 1+c_{2,q}}{c_{2,q} } \prod_{k \neq q} C_k(x_2) & = 0 \\ c_{0,2} \frac{ 1+c_{2,q} }{c_{2,q}} \prod_k C_k(x_1) - c_{0,1} \frac{ 1+c_{2,q} }{c_{2,q}} \prod_k C_k(x_2) & =0 \end{array}\right.$

In order to avoid possible multiplication by 0, we require $a \notin \{0,-1\}$. Let $\phi_{k,m}= e^{ 2 \pi i \frac{x_m ^{ 2^k } }{N} } -1$. Then

$(b_k +a) C_k(x_m) = \left( 1+a+\frac{1}{\phi_{k,m}}\right) C_k(x_m) - \left( 1+\frac{1}{\phi_{k,m}} \right)$,

$(b_n+a_1) \prod_k C_k(x_m) = \left( 1+a_1+\frac{1}{\phi_{n,m}}\right) \prod_k C_k(x_m) - \left( 1+\frac{1}{\phi_{n,m}} \right) \prod_{k \neq n} C_k(x_m)$,

$(b_n+a_2) (b_n+a_1) \prod_k C_k(x_m)$ $= (b_n+a_2) \left( 1+a_1+\frac{1}{\phi_{n,m}}\right) \prod_k C_k(x_m) - (b_n+a_2) \left( 1+\frac{1}{\phi_{n,m}} \right) \prod_{k \neq n} C_k(x_m)$ $= \left[ \left( 1+a_2+\frac{1}{\phi_{n,m}}\right) \left( 1+a_1+\frac{1}{\phi_{n,m}}\right) - \frac{1}{\phi_{n,m}} \left( 1+\frac{1}{\phi_{n,m}} \right) \right] \prod_k C_k(x_m)$ $- \left[ \left( 1+\frac{1}{\phi_{n,m}} \right) \left( 1+a_1+\frac{1}{\phi_{n,m}}\right) +\left( a_2-\frac{1}{\phi_{n,m}} \right) \left( 1+\frac{1}{\phi_{n,m}} \right) \right] \prod_{k \neq n} C_k(x_m)$
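The single-factor identity at the start of this chain can be verified the same way (a sympy sketch, assuming sympy is available; $\phi$ stands for $\phi_{k,m}$):

```python
import sympy as sp

b, a, phi = sp.symbols('b a phi')
C = phi * b + 1                      # C_k(x_m) written via phi = phi_{k,m}
lhs = (b + a) * C
rhs = (1 + a + 1/phi) * C - (1 + 1/phi)
# equal after reducing with the bit constraint b**2 - b = 0
assert sp.simplify(sp.rem(sp.expand(lhs - rhs), b**2 - b, b)) == 0
```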

—— >>> Still needs update <<< —-
Applying this twice to the second line in (4) with different constants $a_1, a_2$, we get

$\left \{ \begin{array}{ll} \mbox{const}_1 \prod_k C_k(x_1) +\mbox{const}_3 \prod_k C_k(x_m) + \mbox{const}_2 \prod_{k \neq n} C_k(x_1)+ \mbox{const}_4 \prod_{k \neq n} C_k(x_m) & = 0\\ \mbox{const}_5 \prod_k C_k(x_1) + \mbox{const}_7 \prod_k C_k(x_m)+ \mbox{const}_6 \prod_{k \neq n} C_k(x_1) + \mbox{const}_8 \prod_{k \neq n} C_k(x_m) & =0 \\ \mbox{const}_9 \prod_k C_k(x_1) - \mbox{const}_{10} \prod_k C_k(x_m) & =0 \end{array}\right.$

From these we can eliminate the terms $\prod_k C_k(x_1)$ and $\prod_k C_k(x_m)$. Therefore, we are left with a system like the second line of (4) with one variable fewer. Applying this recursively, we can get an equation in a single variable, find the solution, and back-substitute.

I leave the details to the experts: what values of $a_1, a_2$ should be chosen at each stage, the precision one has to keep for the constants (solving exactly may require an exponential number of terms), the possible linear dependencies when eliminating variables (e.g. the choice and number of seed values $x_m$), etc. Don't forget, I may have made a simple mistake, and this may be bullshit.

[Update] Since there is no interest, I unroll some statements. (19 Oct 2012)


## On the integer factorization and Nullstellensatz linear algebra: Fighting monomials.

The previous post shows the idea. The problem is that one needs to find polynomials of high degree to make the number of coefficients equal to the number of monomials; too many monomials have to be killed. Here is another way to hunt. Shor’s algorithm uses the Fourier transform, and fast integer multiplication also uses the discrete Fourier transform. Let’s try it here. This is only a sketch; it is not claimed to be correct or complete.

We have our binary-represented “could-be factors”

$\begin{array}{l} x= \sum \limits_{k=0}^{K_x} 2^k x_k,\\ y= \sum \limits_{k=0}^{K_y } 2^k y_k, \\ x_k^2-x_k=0,\\ y_k^2-y_k=0 \end{array}$.

Following the fast-multiplication recipe, we are going to pad our binary representations with zeros, so that now

$\begin{array}{ll} x= \sum \limits_{k=0}^{K_x+K_y} 2^k x_k, \\ y= \sum \limits_{k=0}^{K_y+K_x } 2^k y_k, \\x_k^2-x_k=0, & k=0..K_x, \\ x_k=0, & k> K_x, \\ y_k^2-y_k=0, & k=0..K_y, \\ y_k= 0, & k > K_y. \end{array}$

Now we want to apply the discrete Fourier transform to our factors (with $K= K_x+K_y$):

$\begin{array}{ll} x_k= \frac{1}{\sqrt{K}} \sum \limits_{m=0}^{K-1} e^{ \frac{2 \pi i}{K} k m} v_m \\ y_k= \frac{1}{\sqrt{K}} \sum \limits_{m=0}^{K-1} e^{ \frac{2 \pi i}{K} k m} u_m \\ \end{array}$

We need to keep the constraints:

$x_k^2-x_k= \left(\frac{1}{\sqrt{K}} \sum \limits_{m=0}^{K-1} e^{ \frac{2 \pi i}{K} k m} v_m \right) \left( \frac{1}{\sqrt{K}} \sum \limits_{p=0}^{K-1} e^{ \frac{2 \pi i}{K} k p} v_p \right) - \frac{1}{\sqrt{K}} \sum \limits_{m=0}^{K-1} e^{ \frac{2 \pi i}{K} k m} v_m$

Linear combinations of constraints are still constraints, so we can apply another Fourier transform to the constraints. Since it is a non-singular transformation, we are not losing any constraints.

$F(x_k(v))= \frac{1}{\sqrt{K}} \sum \limits_{k=0}^{K-1} e^{ -\frac{2 \pi i}{K} k n} (x_k^2-x_k)= \left( K^{-\frac{3}{2}} \sum \limits_{m, p, k=0}^{K-1} e^{ \frac{2 \pi i}{K} k(m+p-n)} v_m v_p \right) - \frac{1}{K} \sum \limits_{k, m=0}^{K-1} e^{ \frac{2 \pi i}{K} k (m-n) } v_m$

This means that each of the $K$ constraints has a single first-order term and exactly $K$ second-order terms. In total they contain all quadratic terms, distributed across the $K$ constraints so that $m+p \mod K = n$.
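A quick numerical sanity check of this claim (a Python/NumPy sketch for small $K$, using the unitary DFT convention from above): for a genuine bit vector the transformed constraints vanish, and the linear part is exactly the single term $v_n$.

```python
import numpy as np

K = 4
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=K).astype(complex)   # a random bit vector
w = np.exp(2j * np.pi / K)
# x_k = (1/sqrt(K)) * sum_m w**(k*m) * v_m  (unitary DFT pair)
Finv = np.array([[w ** (k * m) for m in range(K)] for k in range(K)]) / np.sqrt(K)
v = Finv.conj().T @ x                            # Fourier coefficients of the bits

for n in range(K):
    quad = sum(w ** (k * (m + p - n)) * v[m] * v[p]
               for m in range(K) for p in range(K) for k in range(K)) / K ** 1.5
    lin = sum(w ** (k * (m - n)) * v[m]
              for k in range(K) for m in range(K)) / K
    assert abs(lin - v[n]) < 1e-9    # the single first-order term is v_n
    assert abs(quad - lin) < 1e-9    # x_k**2 - x_k = 0 survives the transform
```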

Now the product $x y-n$ also contains only a number of terms linear in $K$, because under the transform the convolution becomes a point-wise product of the transformed factors:

$x y -n = \frac{1}{\sqrt{K}} \sum \limits_{m=0}^{K-1} 2^m \sum \limits_{k=0}^{K-1} e^{ -\frac{2 \pi i}{K} k m} v_k u_k - n= 0$

Now we are ready to construct a certificate, let’s say of degree 4. Moreover, we want to restrict the terms in the certificate to degree at most 2 in any single variable. Therefore, we have the following components.

$K-K_x$ terms of the form $\forall p=K_x..K-1, x_p(v) \left( \sum \limits_{k=0}^{K-1} \sum \limits_{m=k}^{K-1} \sum \limits_{n=0}^{K-1} c_{p,k,m,n}^X u_k u_m v_n \right)$ – giving $(K-K_x)*K*K (K+1)/2$ coefficients.

$K-K_y$ terms of the form $\forall p=K_y..K-1, y_p(u) \left( \sum \limits_{k=0}^{K-1} \sum \limits_{m=k}^{K-1} \sum \limits_{n=0}^{K-1} c_{p,k,m,n}^Y v_k v_m u_n \right)$ – giving $(K-K_y)*K*K (K+1)/2$ coefficients. Both of these conditions together give $K^3(K+1)/2$ coefficients.

$K_x-1$ terms representing the quadratic constraints for the unknown $x$ bits: $F(x_p(v)) \left( \sum \limits_{k=0}^{K-1} \sum \limits_{m=k}^{K-1} c_{p,k,m}^{X^2} u_k u_m \right)$

$K_y-1$ terms representing the quadratic constraints for the unknown $y$ bits: $F(y_p(u)) \left( \sum \limits_{k=0}^{K-1} \sum \limits_{m=k}^{K-1} c_{p,k,m}^{Y^2} v_k v_m \right)$

On the other hand, there are only $(K+1)^2*(K+2)^2/4$ monomials in the certificate. Therefore, there are enough coefficients, if they are linearly independent (which has to be shown), to test the feasibility of factoring.
If we were lucky enough to show this linear independence, and that it holds for all non-factorable $n$, we would have a polynomial algorithm for factorization, since the degree of the certificate can be bounded by 4, independently of $n$.
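A back-of-the-envelope tally of the counting above (Python; uses $K=K_x+K_y$ and the coefficient formulas exactly as stated, the function name is mine):

```python
def certificate_counts(Kx, Ky):
    # coefficients vs. monomials in the degree-4 certificate sketched above
    K = Kx + Ky
    coeffs = ((K - Kx) + (K - Ky)) * K * K * (K + 1) // 2   # the x_p(v), y_p(u) blocks
    coeffs += ((Kx - 1) + (Ky - 1)) * K * (K + 1) // 2      # the F(x_p), F(y_p) blocks
    monomials = (K + 1) ** 2 * (K + 2) ** 2 // 4
    return coeffs, monomials

coeffs, monomials = certificate_counts(8, 8)   # K = 16
assert coeffs >= monomials                     # enough coefficients to try to match
```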
