This algorithm solves for small roots of polynomials modulo any integer: given some polynomial $f$ of degree $d$ and any integer $N$, then if $f(x_0)\equiv 0\pmod N$ with $|x_0|<N^{\frac1d}$, this algorithm can find $x_0$ with time polynomial in $\log N$ and $d$. The key idea behind this algorithm is to construct a polynomial $h$ such that $h(x_0)=0$ in $\mathbb R$. As roots of polynomials over the reals can be found easily, this gives an easy way to find $x_0$. We shall introduce the Coppersmith algorithm in a few iterations, with each iteration approaching the $N^{\frac1d}$ bound.
We first consider a criterion for a root of a polynomial modulo $N$ to exist over the reals as well. Suppose $h$ is a polynomial of degree $d$. Define the norm of a polynomial $h(x)=\sum_ih_ix^i$ to be $\left\lVert h(x)\right\rVert=\sqrt{\sum_ih_i^2}$. Given $|x_0|\le X$ with $h(x_0)\equiv 0\pmod N$, if

$$\left\lVert h(xX)\right\rVert<\frac N{\sqrt{d+1}}$$
then $h(x_0)=0$ in $\mathbb R$. The proof is a relatively straightforward chain of inequalities (the middle step is Cauchy-Schwarz):

$$|h(x_0)|=\left|\sum_ih_ix_0^i\right|\le\sum_i|h_i|X^i\le\sqrt{d+1}\left\lVert h(xX)\right\rVert<N$$
and since $h(x_0)\equiv 0\pmod N$ implies $h(x_0)=kN$ for some integer $k$, we know that $h(x_0)$ must be $0$ to satisfy the inequality above.
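To make the criterion concrete, here is a quick numerical check; the modulus $N$, bound $X$, root $x_0$ and polynomial are toy values chosen purely for illustration:

```python
import math

# Toy parameters (made up for illustration, not from the text)
N = 1000003                 # modulus
X = 50                      # bound on |x0|
g = [-133, 12, 1]           # g(x) = x^2 + 12x - 133 = (x - 7)(x + 19)
x0 = 7                      # small root: g(7) = 0, hence g(7) ≡ 0 (mod N)

# Hypotheses: x0 is a root of g modulo N, |x0| <= X,
# and the norm of g(xX) is below N / sqrt(d + 1)
assert sum(c * x0**i for i, c in enumerate(g)) % N == 0 and abs(x0) <= X
norm = math.sqrt(sum((c * X**i) ** 2 for i, c in enumerate(g)))
assert norm < N / math.sqrt(len(g))

# Conclusion of the criterion: g(x0) is exactly 0 over the integers
assert sum(c * x0**i for i, c in enumerate(g)) == 0
print("criterion verified, norm =", round(norm, 1))
```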
With this, if we can find some polynomials $g_i$ such that $g_i(x_0)\equiv 0\pmod N$, and then some integer combination $h=\sum_ic_ig_i$ such that $\left\lVert h(xX)\right\rVert<\frac N{\sqrt{d+1}}$, then we can find $x_0$ easily. This gives a brief idea as to why lattices would be useful in such a problem.
To use lattices, notice that we can encode the polynomial $h(x)=\sum_{i=0}^dh_ix^i$ as the vector with components $\left(h_0,h_1X,h_2X^2,\dots,h_dX^d\right)$. In this way, adding polynomials and multiplying polynomials by numbers still makes sense. Let's suppose that $\deg f=d$ and $f$ is monic (otherwise multiply $f$ by $f_d^{-1}\bmod N$). Consider the polynomials $g_i(x)=Nx^i$ and consider the lattice $L$ generated by $g_i(xX)$, $0\le i<d$, and $f(xX)$. As a matrix, the basis vectors of our lattice would look like

$$\begin{pmatrix}N&&&&\\&NX&&&\\&&\ddots&&\\&&&NX^{d-1}&\\f_0&f_1X&\cdots&f_{d-1}X^{d-1}&X^d\end{pmatrix}$$
Let $h(xX)$ be an element of $L$. As every element in this lattice is some polynomial of the form $h(x)=cf(x)+N\sum_{i=0}^{d-1}c_ix^i$, if $h(xX)\in L$, then $h(x_0)\equiv 0\pmod N$. Furthermore, if $|x_0|\le X$ and a short vector $h(xX)$ has length less than $\frac N{\sqrt{d+1}}$, then we have $h(x_0)=0$ in $\mathbb R$.
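Concretely, the basis above can be assembled as rows of integers. The sketch below uses toy values of our own choosing for $N$, $X$ and a monic quadratic $f$ with a planted small root:

```python
# Build the (d+1) x (d+1) basis for the lattice {N x^i, f(x)}, with the
# x -> xX scaling. N, X and f are illustrative toy values, not from the text.
N, X = 1000003, 50
f = [N - 133, 12, 1]        # monic: f(7) = 49 + 84 + (N - 133) = N ≡ 0 (mod N)
d = len(f) - 1

rows = []
for i in range(d):          # the polynomials N * x^i, encoded as vectors
    row = [0] * (d + 1)
    row[i] = N * X**i
    rows.append(row)
rows.append([c * X**i for i, c in enumerate(f)])   # f(xX) itself

for r in rows:
    print(r)
```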
The volume of this lattice is $\operatorname{vol}(L)=N^dX^{\frac{d(d+1)}2}$ and the lattice has dimension $d+1$. By using the LLL algorithm, we can find a vector $v$ with length at most

$$|v|\le2^{\frac d4}\operatorname{vol}(L)^{\frac1{d+1}}=2^{\frac d4}N^{\frac d{d+1}}X^{\frac d2}$$
As long as $2^{\frac d4}N^{\frac d{d+1}}X^{\frac d2}<\frac N{\sqrt{d+1}}$, then by the above criterion we know that this vector has $x_0$ as a root over $\mathbb R$. This tells us that

$$X<\frac1{\sqrt2\,(d+1)^{\frac1d}}N^{\frac2{d(d+1)}}$$
While this isn't the bound that we want, this gives us an idea of what we can do to achieve this bound, i.e. add more vectors such that the length of the shortest vector decreases.
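For a sense of scale, the following snippet compares the two exponents in terms of how many bits of $x_0$ each bound allows, for a hypothetical 2048-bit modulus:

```python
# Bits of the root recoverable under each bound, for an illustrative
# 2048-bit modulus: the exponent 2/(d(d+1)) from this lattice vs the 1/d goal.
bits = 2048
for d in (2, 3, 5):
    print(d, round(bits * 2 / (d * (d + 1))), round(bits / d))
```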
One important observation to make is that any constant $c$ in front of $N^{\frac1d}$ does not matter, as we can simply brute force the top bits of our small root in $O(c)$ time. Hence we only need to get $X=cN^{\frac1d}$ for some fixed constant $c$.
In order to achieve this, notice that if $f(x_0)\equiv 0\pmod N$, then $f(x_0)^i\equiv 0\pmod{N^i}$. This loosens the inequality required for a polynomial to have $x_0$ as a small root as our modulus is now larger. With this in mind, consider the polynomials
$$g_{i,j}(x)=N^{m-i}x^jf(x)^i\qquad 0\le i<m,\ 0\le j<d$$

where we will determine $m$ later. Here $g_{i,j}(x_0)\equiv 0\pmod{N^m}$, hence we shall consider the lattice $L$ generated by $g_{i,j}(xX)$. As an example, if $d=2$ and $m=2$, we have

$$\begin{pmatrix}N^2&&&\\&N^2X&&\\Nf_0&Nf_1X&NX^2&\\&Nf_0X&Nf_1X^2&NX^3\end{pmatrix}$$
We have the following immediate computations of $\dim L$ and $\operatorname{vol}(L)$:

$$\dim L=dm\qquad\operatorname{vol}(L)=N^{\frac{dm(m+1)}2}X^{\frac{dm(dm-1)}2}$$
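The family $g_{i,j}$ is easy to generate mechanically. The sketch below builds the $dm$ rows and checks both the dimension and the divisibility $g_{i,j}(x_0)\equiv 0\pmod{N^m}$; the values of $N$, $X$, $m$, $f$ and the root $x_0$ are toy choices for illustration:

```python
# Build g_{i,j}(x) = N^(m-i) * x^j * f(x)^i for 0 <= i < m, 0 <= j < d,
# encoded with the x -> xX scaling. All parameters are made-up toy values.
N, X, m = 1000003, 50, 2
f = [N - 133, 12, 1]                 # monic quadratic with f(7) ≡ 0 (mod N)
d = len(f) - 1
x0 = 7

def polymul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

rows = []
for i in range(m):
    fi = [1]                         # fi = f(x)^i
    for _ in range(i):
        fi = polymul(fi, f)
    for j in range(d):
        g = [0] * j + [c * N**(m - i) for c in fi]   # N^(m-i) * x^j * f^i
        g += [0] * (d * m - len(g))  # pad every row to dimension d*m
        assert sum(c * x0**k for k, c in enumerate(g)) % N**m == 0
        rows.append([c * X**k for k, c in enumerate(g)])

print(len(rows), "vectors of dimension", len(rows[0]))
```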
hence when using the LLL algorithm, the shortest LLL basis vector $v$ has length

$$|v|\le2^{\frac{dm-1}4}\operatorname{vol}(L)^{\frac1{dm}}=2^{\frac{dm-1}4}N^{\frac{m+1}2}X^{\frac{dm-1}2}$$
and we need $|v|<\frac{N^m}{\sqrt{dm}}$ for $h(x_0)=0$ in $\mathbb R$. Hence we have

$$X<\frac1{\sqrt2}\,(dm)^{-\frac1{dm-1}}N^{\frac{m-1}{dm-1}}$$
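A quick computation shows the exponent of $N$ in this bound climbing toward $\frac1d$ as $m$ grows (here with $d=3$ as an illustrative choice):

```python
# The exponent (m - 1)/(dm - 1) of N in the bound tends to 1/d as m grows.
d = 3
for m in (1, 2, 5, 10, 100):
    print(m, (m - 1) / (d * m - 1))   # target: 1/3 for d = 3
```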
Since $\lim_{m\to\infty}\frac{m-1}{dm-1}=\frac1d$, this will achieve the bound that we want. However, as the LLL algorithm would take a very long time for big $m$, we typically choose a suitably large $m$ such that the algorithm is still polynomial in $\log N$ and $d$, and brute force the remaining bits.
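As a proof of concept, the sketch below runs the whole pipeline on a toy instance: a textbook (and deliberately slow) LLL reduction over exact rationals, applied to the first lattice we constructed, for a made-up monic quadratic with a planted small root modulo a made-up $N$. It is an illustration under these toy assumptions, not a production implementation:

```python
from fractions import Fraction

def lll(B, delta=Fraction(3, 4)):
    """Textbook LLL: Gram-Schmidt is recomputed from scratch at every step,
    which is slow but keeps the code easy to check for correctness."""
    B = [[Fraction(x) for x in row] for row in B]
    n = len(B)

    def gram_schmidt():
        Bs, mu = [], [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = list(B[i])
            for j in range(i):
                mu[i][j] = (sum(a * b for a, b in zip(B[i], Bs[j]))
                            / sum(b * b for b in Bs[j]))
                v = [a - mu[i][j] * b for a, b in zip(v, Bs[j])]
            Bs.append(v)
        return Bs, mu

    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):     # size-reduce b_k against b_j
            _, mu = gram_schmidt()
            q = round(mu[k][j])
            if q:
                B[k] = [a - q * b for a, b in zip(B[k], B[j])]
        Bs, mu = gram_schmidt()
        if (sum(b * b for b in Bs[k])
                >= (delta - mu[k][k - 1] ** 2) * sum(b * b for b in Bs[k - 1])):
            k += 1
        else:                              # Lovász condition failed: swap
            B[k], B[k - 1] = B[k - 1], B[k]
            k = max(k - 1, 1)
    return [[int(x) for x in row] for row in B]

# Toy instance: monic quadratic f with the planted small root x0 = 7 mod N.
N, X = 1000003, 50
f = [N - 133, 12, 1]
d = len(f) - 1

basis = []
for i in range(d):                         # N * x^i, scaled by x -> xX
    row = [0] * (d + 1)
    row[i] = N * X**i
    basis.append(row)
basis.append([c * X**i for i, c in enumerate(f)])

shortest = lll(basis)[0]
h = [c // X**i for i, c in enumerate(shortest)]   # unscale (exact division)

# h(x0) = 0 over the integers, so search |x| <= X for integer roots of h
# that are also roots of f modulo N.
roots = [x for x in range(-X, X + 1)
         if sum(c * x**i for i, c in enumerate(h)) == 0
         and sum(c * x**i for i, c in enumerate(f)) % N == 0]
print(roots)                               # contains the planted root 7
```

Brute-forcing the integer roots of $h$ over $[-X, X]$ is fine at this scale; a real implementation would factor $h$ over $\mathbb Q$ instead.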
1) We often see $N^{\frac1d-\epsilon}$ in literature. We shall now show where this mysterious $\epsilon$ comes from. The other term will appear in the next exercise. Typically, one sets $\epsilon=\frac1{\log_2N}$ to simplify calculations involving the LLL algorithm as $N^\epsilon=2$. Suppose we want $\frac{m-1}{dm-1}\ge\frac1d-\epsilon$, show that this gives us $m\ge\frac{d-1+d\epsilon}{d^2\epsilon}$.
2) We show that we can indeed find small roots less than $N^{\frac1d-\epsilon}$ in polynomial time. In the worst case, the length of the longest basis vector cannot exceed $\sqrt{dm}\,N^{m+1}$. Hence the LLL algorithm will run in at most $O\left((dm)^6\log^3\left(N^m\right)\right)$ time. Set $\epsilon=\frac1{\log_2N}$ and choose $m=\left\lceil\frac{d-1+d\epsilon}{d^2\epsilon}\right\rceil$, then $N^\epsilon=2$ is a constant, hence the number of bits needed to be brute forced is a constant. Since $dm=O(\log N)$ and $\log\left(N^m\right)=O\left(\frac{\log^2N}d\right)$, this gives us the approximate run time of $O\left(\frac{\log^{12}N}{d^3}\right)$.
3) We shall show that this is the best bound we can hope for using lattice methods. Suppose there exists some algorithm that finds roots less than $N^{\frac1d+\epsilon}$ in polynomial time. Then consider the case when $f(x)=x^d$ and $N=p^d$ for some prime $p$. Show that this forces the output to have a size too big for the algorithm to run in polynomial time, assuming the algorithm finds all small roots.