This algorithm finds small roots of polynomials modulo any integer: given a polynomial $f(x) \in \mathbb{Z}[x]$ of degree $d$ and any integer $N$, if $f(x_0) \equiv 0 \pmod N$ and $|x_0| < N^{\frac{1}{d}}$, the algorithm finds $x_0$ in time polynomial in $\log N$ and $d$. The key idea is to construct a polynomial $g(x)$ such that $g(x_0) = 0$ in $\mathbb{R}$. As roots of polynomials over the reals can be found easily, this gives an easy way to find $x_0$. We shall introduce the Coppersmith algorithm in a few iterations, with each iteration approaching the $N^{\frac{1}{d}}$ bound.
Polynomials in lattices
We first consider a criterion for a root of a polynomial modulo $N$ to also exist over the reals. Suppose $f(x) = \sum_{i=0}^d f_i x^i$ is a polynomial of degree $d$. Define the $\ell_2$ norm $\|f(x)\|_2$ of a polynomial to be $\sqrt{\sum_{i=0}^d f_i^2}$. Given $|x_0| < B$ and $f(x_0) \equiv 0 \pmod N$, if

$$\|f(Bx)\|_2 < \frac{N}{\sqrt{d+1}},$$

then $f(x_0) = 0$ in $\mathbb{R}$. The proof is a relatively straightforward chain of inequalities (the middle step is Cauchy-Schwarz):

$$|f(x_0)| = \left|\sum_{i=0}^d f_i x_0^i\right| \le \sum_{i=0}^d |f_i| B^i \le \sqrt{d+1}\,\|f(Bx)\|_2 < N,$$
and since $f(x_0) \equiv 0 \pmod N$ implies $f(x_0) = kN$ for some $k \in \mathbb{Z}$, we know that $k$ must be $0$ to satisfy the inequality above.
With this, if we can find polynomials $f_i$ such that $f_i(x_0) \equiv 0 \pmod N$, and coefficients $c_i$ such that $\left\|\sum_i c_i f_i(Bx)\right\|_2 \le \frac{N}{\sqrt{d+1}}$, then we can find $x_0$ easily. This gives a brief idea as to why lattices would be useful in such a problem.
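As a quick sanity check of this criterion, here is a small Python sketch with toy values of our own choosing (not from the text): we verify the norm condition for a polynomial with small roots modulo $N$, then confirm those roots vanish over the integers.

```python
# Toy check of the criterion: if f(x_0) = 0 (mod N), |x_0| < B and
# ||f(Bx)||_2 < N / sqrt(d+1), then f(x_0) = 0 over the integers.
# All numbers below are illustrative.

def l2_norm_sq(coeffs):
    """Squared l2 norm of the coefficient vector."""
    return sum(c * c for c in coeffs)

def scaled_coeffs(coeffs, B):
    """Coefficients of f(Bx): f_i -> f_i * B^i."""
    return [c * B**i for i, c in enumerate(coeffs)]

# f(x) = x^2 - 3x + 2 = (x - 1)(x - 2), N = 10007, B = 4, d = 2.
N, B = 10007, 4
f = [2, -3, 1]
d = len(f) - 1

# Criterion, squared to stay in exact integers: ||f(Bx)||^2 * (d+1) < N^2.
assert l2_norm_sq(scaled_coeffs(f, B)) * (d + 1) < N * N

# Hence every root mod N with |x_0| < B must be a root over Z; check directly.
roots = [x for x in range(-B + 1, B) if (x**2 - 3*x + 2) % N == 0]
assert all(x**2 - 3*x + 2 == 0 for x in roots)
```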
To use lattices, notice that we can encode the polynomial $f(x) = \sum_{i=0}^d f_i x^i$ as the vector with components $f_i$. In this way, adding polynomials and multiplying polynomials by scalars still makes sense. Let's suppose that $f(x_0) \equiv 0 \pmod N$, $|x_0| < B$ and $f_d = 1$ (otherwise multiply $f$ by $f_d^{-1} \bmod N$). Consider the polynomials $g_i(x) = N x^i$ and consider the lattice $L$ generated by $f(Bx)$ and $g_i(Bx)$, $0 \le i \le d - 1$. As a matrix, the basis vectors are

$$\begin{pmatrix} N & & & & \\ & NB & & & \\ & & \ddots & & \\ & & & NB^{d-1} & \\ f_0 & f_1 B & \cdots & f_{d-1} B^{d-1} & B^d \end{pmatrix}$$
Since this matrix is triangular, $\det L = N^d B^{\frac{d(d+1)}{2}}$, so the LLL algorithm finds a vector $v(Bx)$ of length at most $c_{\delta,d} N^{\frac{d}{d+1}} B^{\frac{d}{2}}$. As long as $c_{\delta,d} N^{\frac{d}{d+1}} B^{\frac{d}{2}} < \frac{N}{\sqrt{d+1}}$, then by the above criterion we know that this vector has $x_0$ as a root over $\mathbb{R}$. This tells us that

$$B < c_{\delta,d}' N^{\frac{2}{d(d+1)}}$$

for some constant $c_{\delta,d}'$ depending only on $\delta$ and $d$.
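The basis above is easy to write down explicitly. The following Python sketch (with illustrative values for $f$, $N$ and $B$; $f$ must be monic) builds it as integer row vectors and verifies the determinant used in the LLL bound:

```python
# Build the (d+1)-dimensional basis: rows g_i(Bx) = N*(Bx)^i for 0 <= i < d,
# followed by f(Bx). Values are illustrative, not a real cryptographic instance.

def basis_matrix(f, N, B):
    d = len(f) - 1
    rows = []
    for i in range(d):
        row = [0] * (d + 1)
        row[i] = N * B**i        # N*(Bx)^i has a single nonzero coefficient
        rows.append(row)
    rows.append([c * B**i for i, c in enumerate(f)])   # f(Bx)
    return rows

N, B = 10007, 4
f = [2, -3, 1]                   # x^2 - 3x + 2, monic
M = basis_matrix(f, N, B)
d = len(f) - 1

# The matrix is triangular, so det L = N^d * B^(d(d+1)/2).
det = 1
for i in range(d + 1):
    det *= M[i][i]
assert det == N**d * B**(d * (d + 1) // 2)
```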
While this isn't the $N^{\frac{1}{d}}$ bound that we want, it gives us an idea of what we can do to achieve this bound, i.e. add more vectors so that the length of the shortest vector decreases.
Achieving the $N^{\frac{1}{d}}$ bound
One important observation to make is that any constant factor in front of $N^{\frac{1}{d}}$ does not matter, as we can simply brute force the top bits of our small root in $O(1)$ time. Hence we only need to get $B = k N^{\frac{1}{d}}$ for some fixed constant $k$.
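The brute-force step can be sketched as follows (all names and parameters here are our own illustration): given any routine that finds roots with $|x| < B$, shifting $f$ covers $|x| < kB$ for a constant $k$ in $O(k)$ calls, since $x_0 = tB + y$ for some $|t| \le k$ and $|y| < B$. A naive search stands in for the lattice step.

```python
# Hypothetical sketch: extend a bounded root-finder from |x| < B to |x| < k*B
# by brute forcing the "top part" t of the root.

def eval_mod(f, x, N):
    return sum(c * x**i for i, c in enumerate(f)) % N

def find_root_small(f, N, B):
    """Stand-in for the lattice step: naive search over |x| < B."""
    for x in range(-B + 1, B):
        if eval_mod(f, x, N) == 0:
            return x
    return None

def shift(f, s):
    """Coefficients of f(x + s), low degree first, via Horner's scheme."""
    out = [f[-1]]
    for c in reversed(f[:-1]):   # out <- out * (x + s) + c
        out = ([c + s * out[0]]
               + [out[i - 1] + s * out[i] for i in range(1, len(out))]
               + [out[-1]])
    return out

def find_root(f, N, B, k):
    for t in range(-k, k + 1):   # O(k) calls to the bounded root-finder
        y = find_root_small(shift(f, t * B), N, B)
        if y is not None:
            return t * B + y
    return None

# Plant a root x0 = 300 > B = 128, recoverable with k = 3 shifts.
N, B, x0, a = 10007, 128, 300, 5000
f = [(-x0 * x0 - a * x0) % N, a, 1]    # monic quadratic with f(x0) = 0 mod N
assert find_root(f, N, B, 3) == x0
```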
In order to achieve this, notice that if $f(x_0) \equiv 0 \pmod N$, then $f(x_0)^h \equiv 0 \pmod{N^h}$. This loosens the inequality required for a polynomial to have $x_0$ as a small root, as our modulus is now larger. With this in mind, consider the polynomials

$$g_{i,j}(x) = N^{h-j} x^i f(x)^j, \qquad 0 \le i < d,\ 0 \le j < h,$$
where we will determine $h$ later. Here $g_{i,j}(x_0) \equiv 0 \pmod{N^h}$, hence we shall consider the lattice $L$ generated by the $g_{i,j}(Bx)$. As an example, if we have $d = 2$ and $h = 2$, the basis matrix is

$$\begin{pmatrix} N^2 & & & \\ & N^2 B & & \\ N f_0 & N f_1 B & N B^2 & \\ & N f_0 B & N f_1 B^2 & N B^3 \end{pmatrix}$$
We have the following immediate computations on $L$:

$$\dim L = dh, \qquad \det L = N^{\frac{dh(h+1)}{2}} B^{\frac{dh(dh-1)}{2}},$$
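These two quantities follow from the matrix being triangular: because $f$ is monic, the row for $g_{i,j}(Bx)$ has diagonal entry $N^{h-j} B^{i+jd}$. A quick arithmetic check in Python, with toy values of our own choosing:

```python
# Verify det L = N^(dh(h+1)/2) * B^(dh(dh-1)/2) by multiplying the diagonal
# entries N^(h-j) * B^(i + j*d) over 0 <= i < d, 0 <= j < h.

def det_diagonal(d, h, N, B):
    det = 1
    for j in range(h):
        for i in range(d):
            det *= N**(h - j) * B**(i + j * d)
    return det

d, h, N, B = 3, 4, 101, 7
assert det_diagonal(d, h, N, B) == N**(d * h * (h + 1) // 2) * B**(d * h * (d * h - 1) // 2)
```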
hence when using the LLL algorithm, the shortest LLL basis vector $v(Bx)$ has length

$$\|v(Bx)\|_2 \le \left(\frac{4}{4\delta-1}\right)^{\frac{dh-1}{4}} \left(\det L\right)^{\frac{1}{dh}} = \left(\frac{4}{4\delta-1}\right)^{\frac{dh-1}{4}} N^{\frac{h+1}{2}} B^{\frac{dh-1}{2}},$$
and we need $\|v(Bx)\|_2 < \frac{N^h}{\sqrt{dh}}$ for $v(x_0) = 0$. Hence we have

$$B < \left(\frac{4\delta-1}{4}\right)^{\frac{1}{2}} (dh)^{-\frac{1}{dh-1}} N^{\frac{h-1}{dh-1}}.$$
Since $\lim_{h\to\infty} \frac{h-1}{dh-1} = \frac{1}{d}$, this achieves the $N^{\frac{1}{d}}$ bound that we want. However, since the LLL algorithm would take a very long time for big $h$, we typically choose a suitably large $h$ such that the algorithm is still polynomial in $\log N$ and $d$, and brute force the remaining bits.
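Putting the pieces together, here is a self-contained toy run in Python: we plant a small root, build the $g_{i,j}(Bx)$ basis, reduce it with a minimal textbook LLL ($\delta = \frac{3}{4}$, exact `Fraction` arithmetic), and read the root off the shortest vector. The LLL below is our own unoptimized sketch and all parameters are illustrative, far too small to be cryptographically meaningful.

```python
from fractions import Fraction

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def lll(basis, delta=Fraction(3, 4)):
    """Minimal textbook LLL over exact rationals (slow but exact)."""
    b = [[Fraction(x) for x in row] for row in basis]
    n = len(b)

    def gram_schmidt():
        gs, mu = [], [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = b[i][:]
            for j in range(i):
                mu[i][j] = dot(b[i], gs[j]) / dot(gs[j], gs[j])
                v = [vi - mu[i][j] * gj for vi, gj in zip(v, gs[j])]
            gs.append(v)
        return gs, mu

    gs, mu = gram_schmidt()
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):            # size-reduce b_k
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
                gs, mu = gram_schmidt()
        if dot(gs[k], gs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(gs[k - 1], gs[k - 1]):
            k += 1                                # Lovász condition holds
        else:
            b[k - 1], b[k] = b[k], b[k - 1]       # swap and step back
            gs, mu = gram_schmidt()
            k = max(k - 1, 1)
    return [[int(x) for x in row] for row in b]

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def coppersmith_basis(f, N, B, h):
    """Rows g_{i,j}(Bx) = N^(h-j) (Bx)^i f(Bx)^j, 0 <= i < d, 0 <= j < h."""
    d = len(f) - 1
    rows, fj = [], [1]                            # fj holds f(x)^j
    for j in range(h):
        for i in range(d):
            g = [0] * i + [N**(h - j) * c for c in fj]
            g += [0] * (d * h - len(g))           # pad to degree dh - 1
            rows.append([c * B**k for k, c in enumerate(g)])
        fj = poly_mul(fj, f)
    return rows

# Plant a small root x0 of f(x) = x^2 + a*x + b mod N (toy parameters).
N, B, h = 10007, 8, 2
x0, a = 5, 123
f = [(-x0 * x0 - a * x0) % N, a, 1]

v = lll(coppersmith_basis(f, N, B, h))[0]         # shortest LLL vector v(Bx)
coeffs = [c // B**i for i, c in enumerate(v)]     # undo the scaling: v(x)
roots = [x for x in range(-B + 1, B)
         if sum(c * x**i for i, c in enumerate(coeffs)) == 0]
assert x0 in roots
```

Note that every lattice vector has its $k$-th coordinate divisible by $B^k$, so the division recovering $v(x)$ is exact; the final search over $|x| < B$ stands in for real root-finding over $\mathbb{R}$.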
1) We often see $h = \left\lceil \max\left(\frac{d + d\varepsilon - 1}{d^2\varepsilon}, \frac{7}{d}\right)\right\rceil$ in the literature. We shall now show where this mysterious $\frac{7}{d}$ comes from. The other term will appear in the next exercise. Typically, one sets $\delta = \frac{3}{4}$ to simplify calculations involving the LLL algorithm, as $\frac{4}{4\delta-1} = 2$. Suppose we want $B > \frac{1}{2} N^{\frac{h-1}{dh-1}}$; show that this gives us $dh \ge 7$.
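A quick numeric check of this exercise, under our own reading of the bound: with $\delta = \frac{3}{4}$ the constant in front of $N^{\frac{h-1}{dh-1}}$ is $\frac{1}{\sqrt{2}} (dh)^{-\frac{1}{dh-1}}$, and asking for it to be at least $\frac{1}{2}$ is equivalent to $dh \le 2^{\frac{dh-1}{2}}$, which first holds at $dh = 7$.

```python
# Check: (1/sqrt(2)) * (dh)^(-1/(dh-1)) >= 1/2  <=>  dh <= 2^((dh-1)/2).

def constant_ok(dh):
    return 2 ** ((dh - 1) / 2) >= dh

assert not any(constant_ok(dh) for dh in range(2, 7))   # fails for dh <= 6
assert all(constant_ok(dh) for dh in range(7, 100))     # holds from dh = 7 on
```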
2) We show that we can indeed find small roots less than $N^{\frac{1}{d}}$ in polynomial time. In the worst case, the longest basis vector cannot exceed $O\left(B^{dh-1} N^h\right)$, hence the LLL algorithm will run in at most $O\left(d^6 h^6 (d+h)^2 \log^2 N\right)$ time.
Set $B = \frac{1}{2} N^{\frac{1}{d} - \varepsilon}$ with $h = \left\lceil \frac{d + d\varepsilon - 1}{d^2\varepsilon} \right\rceil$ (so that $\frac{h-1}{dh-1} = \frac{1}{d} - \varepsilon$) and choose $\varepsilon = \frac{1}{\log N}$; then $N^\varepsilon$ is a constant, hence the number of bits that need to be brute forced is a constant. This gives us the approximate run time of $O\left(\left(d + \frac{\log N}{d}\right)^2 \log^8 N\right)$.
3) We shall show that this is the best bound we can hope for using lattice methods. Suppose there exists some algorithm that finds all roots less than $O(N^{\frac{1}{d} + \varepsilon})$ in polynomial time. Consider the case where $N = p^2$ and $f(x) = x^2 + px$. Show that there are too many small roots for the algorithm to output in polynomial time.
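The obstruction can be seen numerically with a toy prime of our own choosing: since $f(x) = x(x + p)$, a value is a root modulo $p^2$ exactly when $p$ divides it, so there are about $N^{\varepsilon}$ roots below $N^{\frac{1}{2} + \varepsilon}$.

```python
# With N = p^2 and f(x) = x^2 + p*x, the roots mod N are exactly the
# multiples of p: f(kp) = p^2 * k * (k + 1) = 0 (mod p^2) for every k,
# and conversely p^2 | x(x + p) forces p | x.

p = 101
N = p * p

def f(x):
    return x * x + p * x

roots = [x for x in range(N) if f(x) % N == 0]
assert roots == [k * p for k in range(p)]   # p roots below N, all multiples of p
```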