Extensions of the Coppersmith algorithm

The Coppersmith algorithm can be made even more general. There are two main extensions: first to an unknown modulus, then to multivariate polynomials.

Unknown modulus

This extension of Coppersmith allows one to find small roots modulo some unknown factor of a number. More specifically, suppose that we have some unknown factor $b$ of $N$ such that $b\geq N^\beta$, and some monic polynomial $f$ of degree $d$ such that $f(x_0)=0\pmod b$ for some $x_0<N^{\frac{\beta^2}d}$. Then we can find $x_0$ in time polynomial in $\log N$ and $d$.

One key reason why this is possible is that we don't need to know the modulus explicitly to certify a small root: since $b\geq N^\beta$, the condition $\left\lVert f(Bx)\right\rVert_2\leq\frac{N^\beta}{\sqrt{d+1}}\leq\frac b{\sqrt{d+1}}$ is sufficient for any root less than $B$ in absolute value to be a root over the integers. The algorithm here is extremely similar to the Coppersmith algorithm, except that we add more polynomials into the lattice. The polynomials that we will use are
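As a toy illustration of this criterion (all numbers below are arbitrary, chosen only for the example), the following sketch checks that a quadratic with a small modular root satisfies the norm bound, so its small root must also be a root over the integers:

```python
import math

# Howgrave-Graham-style criterion (toy numbers, not a real instance):
# if f(x0) = 0 (mod b), |x0| <= B and ||f(Bx)||_2 <= b / sqrt(d+1),
# then f(x0) = 0 over the integers -- without ever needing to know b.
def norm_fBx(coeffs, B):
    """l2 norm of f(Bx), where f has coefficients [a0, a1, ..., ad]."""
    return math.sqrt(sum((a * B**i) ** 2 for i, a in enumerate(coeffs)))

# f(x) = x^2 - 9x + 14 = (x - 2)(x - 7), so x0 = 2 is a root mod any b
coeffs = [14, -9, 1]
b, B, d = 1000, 3, 2

assert norm_fBx(coeffs, B) <= b / math.sqrt(d + 1)        # criterion holds
x0 = 2
assert sum(a * x0**i for i, a in enumerate(coeffs)) == 0  # root over the integers
```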

$$\begin{align*} g_{i,j}(x)&=N^{h-j}f(x)^jx^i&0\leq i<d,\ 0\leq j<h\\ g_{i,h}(x)&=f(x)^hx^i&0\leq i<t \end{align*}$$

The lattice $L$ generated by the coefficient vectors of these polynomials (evaluated at $Bx$) has dimension $n=dh+t$, and since $f$ is monic the basis matrix is triangular, giving

$$\det(L)=N^{\frac{dh(h+1)}2}B^{\frac{n(n-1)}2}$$

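Because the basis is triangular, the determinant is just the product of the diagonal entries $N^{h-j}B^{dj+i}$ of the $g_{i,j}(Bx)$ coefficient vectors. A quick sanity check with arbitrary toy parameters:

```python
# Toy parameters (arbitrary values, only for checking the determinant formula)
N, B, d, h, t = 1000003, 50, 3, 2, 4

n = d * h + t  # lattice dimension
det = 1
for j in range(h):           # rows from g_{i,j} = N^{h-j} f^j x^i
    for i in range(d):
        det *= N ** (h - j) * B ** (d * j + i)
for i in range(t):           # rows from g_{i,h} = f^h x^i
    det *= B ** (d * h + i)

# Matches det(L) = N^{dh(h+1)/2} * B^{n(n-1)/2}
assert det == N ** (d * h * (h + 1) // 2) * B ** (n * (n - 1) // 2)
```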
As we require the shortest vector to have length at most $\frac{N^{\beta h}}{\sqrt n}$, repeating the computations from the previous section, we obtain

$$B<\sqrt{\frac{4\delta-1}4}\left(\frac1n\right)^{\frac1{n-1}}N^{\frac{2\beta h}{n-1}-\frac{dh(h+1)}{n(n-1)}}$$

It turns out that the maximum of $\frac{2\beta h}{n-1}-\frac{dh(h+1)}{n(n-1)}$ is $\frac{\beta^2}d$ as $n,h\to\infty$. One way to achieve this is by setting $n=\frac d\beta h$, and we obtain

$$B<\sqrt{\frac{4\delta-1}4}\left(\frac1n\right)^{\frac1{n-1}}N^{\frac{\beta^2(h-1)}{dh-\beta}}$$

and this indeed achieves the $O\left(N^{\frac{\beta^2}d}\right)$ bound as $h\to\infty$. Similar to the Coppersmith algorithm, one chooses sufficiently large $h,n$ such that the remaining bits can be brute-forced in constant time while the algorithm still runs in polynomial time.


1) We show that the supremum of $\frac{2\beta h}{n-1}-\frac{dh(h+1)}{n(n-1)}$ is indeed $\frac{\beta^2}d$. We can assume that $d\geq2$, $h\geq1$, $0<\beta\leq1$, $n\geq2$. The expression factors as $\frac h{n-1}\left(2\beta-d\frac{h+1}n\right)$, and since $\frac h{n-1}<\frac{h+1}n$ whenever $n>h+1$, the supremum is approached as $h,n\to\infty$ with $\frac h{n-1},\frac{h+1}n\to x$. Hence we have reduced this to maximizing $2\beta x-dx^2$, which achieves its maximum of $\frac{\beta^2}d$ at $x=\frac\beta d$.
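The limiting behaviour can also be checked numerically. The sketch below (toy parameters $\beta=\frac12$, $d=2$, chosen arbitrarily) evaluates the exponent along the choice $n=\frac d\beta h$ and confirms it increases toward $\frac{\beta^2}d$ from below:

```python
# Numeric check (toy parameters) that the exponent
#   e(h, n) = 2*beta*h/(n-1) - d*h*(h+1)/(n*(n-1))
# approaches beta^2/d from below when n = (d/beta)*h.
beta, d = 0.5, 2

def exponent(h, n):
    return 2 * beta * h / (n - 1) - d * h * (h + 1) / (n * (n - 1))

limit = beta ** 2 / d  # = 0.125
vals = [exponent(h, int(d / beta * h)) for h in (5, 50, 500)]

assert all(v < limit for v in vals)   # always strictly below the limit
assert vals[0] < vals[1] < vals[2]    # increasing toward the limit
assert limit - vals[-1] < 1e-3        # close to beta^2/d for large h
```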
