In this section, we hope to bring some intuitive understanding to the LLL algorithm and how it works. The LLL algorithm is a lattice reduction algorithm, meaning it takes in a basis for some lattice and hopefully returns another basis for the same lattice with shorter basis vectors. Before introducing LLL reduction, we'll introduce two key algorithms that LLL is built from: Gram-Schmidt orthogonalization and Lagrange's algorithm (also known as Gaussian reduction). We give a brief overview of why these are used to build LLL.
As the volume of a lattice is fixed, and is given by the determinant of the basis vectors, whenever our basis vectors get shorter, they must, in some intuitive sense, become more orthogonal to each other in order for the determinant to remain the same. Hence, Gram-Schmidt orthogonalization is used as an approximation of what the shortest basis vectors could look like. However, the vectors that we get are in general not in the lattice, hence we only use this as a rough idea of what the shortest vectors would be like.
Lagrange's algorithm can be thought of as the GCD algorithm for two numbers generalized to lattices. It iteratively reduces the length of each vector by subtracting some integer multiple of one from another until we can't do so anymore. Such an algorithm actually gives the shortest possible vectors in 2 dimensions! Unfortunately, this algorithm may not terminate in higher dimensions, even in 3 dimensions. Hence, it needs to be modified a bit so that the algorithm halts.
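To make this concrete, here is a minimal sketch of Lagrange's algorithm for two dimensions in sage (the function name and the example vectors are our own, purely for illustration):

```python
# Lagrange (Gauss) reduction of a 2-dimensional lattice basis: repeatedly subtract
# an integer multiple of the shorter vector from the longer one, much like the
# Euclidean GCD algorithm, until the longer vector can no longer be shortened.
def lagrange_reduce(u, v):
    u, v = vector(QQ, u), vector(QQ, v)
    if u * u > v * v:
        u, v = v, u
    while True:
        m = round((v * u) / (u * u))   # integer multiple of u closest to the projection of v
        v = v - m * u
        if u * u <= v * v:
            return u, v
        u, v = v, u

print(lagrange_reduce([5, 16], [8, 27]))
```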
Gram-Schmidt orthogonalization is an algorithm that takes in a basis as an input and returns a basis where all vectors are orthogonal, i.e. at right angles. This new basis is defined as

$$b_i^*=b_i-\sum_{j=1}^{i-1}\mu_{i,j}b_j^*$$

where $\mu_{i,j}=\frac{\langle b_i,b_j^*\rangle}{\langle b_j^*,b_j^*\rangle}$ are the Gram-Schmidt coefficients.
One can immediately check that this new basis is orthogonal, meaning

$$\langle b_i^*,b_j^*\rangle=0\quad\text{for all }i\ne j$$
Let $B$ be the matrix whose $i$th row is given by $b_i$ and $B^*$ be the matrix whose $i$th row is given by $b_i^*$. Then the Gram-Schmidt orthogonalization gives us $B=\mu B^*$, where $\mu$ is the lower triangular matrix whose below-diagonal entries are the Gram-Schmidt coefficients $\mu_{i,j}$ and whose diagonal entries are all $1$. As an example, consider a basis of a subspace of $\mathbb{R}^n$.
Instead of doing the Gram-Schmidt orthogonalization by hand, we can get sage to do it for us:
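For instance, with a small made-up basis of three vectors in $\mathbb{Q}^4$ (the basis used in the original example may differ):

```python
# Rows of B are the basis vectors.
B = Matrix(QQ, [[1, 0, 1, 0],
                [1, 2, 0, 3],
                [0, 1, 1, 1]])

# gram_schmidt() returns a pair (Bstar, mu) satisfying B == mu * Bstar,
# where the rows of Bstar are pairwise orthogonal.
Bstar, mu = B.gram_schmidt()
print(Bstar)
print(mu)
```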
This outputs the two matrices $B^*$ and $\mu$.
One can quickly verify that $B=\mu B^*$ and that the rows of $B^*$ are orthogonal to each other.
A useful result is that

$$\operatorname{vol}\left(L\right)=\sqrt{\det\left(BB^T\right)}=\prod_{i=1}^n\left\|b_i^*\right\|\le\prod_{i=1}^n\left\|b_i\right\|$$
Intuitively, this tells us that the more orthogonal a basis of a lattice is, the shorter its vectors are, as the volume must remain constant.
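Continuing with the made-up basis from the sage snippet above, one can check this identity directly (working with squared volumes to stay in exact arithmetic):

```python
B = Matrix(QQ, [[1, 0, 1, 0],
                [1, 2, 0, 3],
                [0, 1, 1, 1]])
Bstar, mu = B.gram_schmidt()
# vol(L)^2 = det(B * B^T) equals the product of the squared Gram-Schmidt lengths
assert (B * B.transpose()).det() == prod(r * r for r in Bstar.rows())
```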
1) Show that the basis $b_1^*,b_2^*,\dots,b_n^*$ is orthogonal.
2) Verify that the output of sage is indeed correct.
3) Show that $BB^T=\mu B^*\left(B^*\right)^T\mu^T$ and that $B^*\left(B^*\right)^T$ is a diagonal matrix whose entries are $\left\|b_i^*\right\|^2$. Conclude that $\operatorname{vol}\left(L\right)=\prod_{i=1}^n\left\|b_i^*\right\|$.
4*) Given the Iwasawa decomposition $B=LDO$ where $L$ is a lower triangular matrix with $1$ on its diagonal, $D$ is a diagonal matrix with positive entries and $O$ an orthogonal matrix, meaning $OO^T=I$, show that $L=\mu$ and $DO=B^*$. Furthermore, prove that such a decomposition is unique.
There are a few issues that one may encounter when attempting to generalize Lagrange's algorithm to higher dimensions. Most importantly, one needs to figure out the proper way to swap the vectors around and when to terminate, ideally in polynomial time. A rough sketch of what the algorithm should look like is:
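The skeleton below is our own sage-flavoured rendering of this sketch; `size_reduce` and `should_swap` are placeholders that get pinned down in the rest of this section.

```python
# Rough skeleton only: how to size-reduce and when to swap are still open questions.
def reduction_sketch(basis):
    B = [vector(QQ, b) for b in basis]
    k = 1
    while k < len(B):
        size_reduce(B, k)             # reduce b_k by the earlier vectors (order: see below)
        if should_swap(B, k):         # swap condition, to be determined below
            B[k - 1], B[k] = B[k], B[k - 1]
            k = max(k - 1, 1)         # step back and re-examine the previous pair
        else:
            k += 1
    return B
```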
There are two things we need to figure out: in what order we should reduce the basis elements, and when we should swap. Ideally, we also want the basis to be ordered in a way such that the smallest basis vectors come first. Intuitively, it would also be better to reduce a vector by the larger vectors first before reducing by the smaller ones, a very vague analogy to filling up a jar with big stones first before putting in the sand. This leads us to the following size reduction algorithm:
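A sketch of that size reduction step (our own rendering; recomputing the Gram-Schmidt coefficients from scratch after every subtraction is wasteful, but it keeps the idea clear):

```python
# Size-reduce b_k against b_{k-1}, ..., b_0, i.e. the larger vectors first,
# subtracting the nearest integer multiple of each so that |mu_{k,j}| <= 1/2 afterwards.
def size_reduce(B, k):
    for j in range(k - 1, -1, -1):
        _, mu = Matrix(QQ, B).gram_schmidt()
        B[k] = B[k] - round(mu[k, j]) * B[j]
    return B
```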
Next, we need to figure out a swapping condition. Naively, we want

$$\left\|b_i\right\|\le\left\|b_{i+1}\right\|$$
for all $i$. However, such a condition does not guarantee termination in polynomial time. As short basis vectors should be almost orthogonal, we may also want to incorporate this notion. Concretely, we want $\mu_{i,j}$ to be somewhat small for all pairs $i>j$, i.e. we may want something like

$$\left|\mu_{i,j}\right|\le\frac12$$
However, since $\mu_{i,j}=\frac{\langle b_i,b_j^*\rangle}{\langle b_j^*,b_j^*\rangle}$, this condition is easily satisfied for a sufficiently long $b_j^*$, which is not what we want. The key idea, first noticed by Lovász, is to merge these two conditions into one, named the Lovász condition:

$$\delta\left\|b_{k-1}^*\right\|^2\le\left\|b_k^*\right\|^2+\mu_{k,k-1}^2\left\|b_{k-1}^*\right\|^2\qquad\text{for some }\frac14<\delta<1$$
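Putting the size reduction step and the Lovász condition together gives the LLL algorithm. The following is a minimal, deliberately unoptimised sketch of it in sage (in practice one would call the built-in `LLL()` method of an integer matrix instead):

```python
# Minimal LLL sketch with parameter delta (the original paper uses delta = 3/4).
# The Gram-Schmidt data is recomputed from scratch every time, so this is slow.
def lll_sketch(basis, delta=3/4):
    B = Matrix(QQ, basis).rows()
    k = 1
    while k < len(B):
        # size-reduce b_k by the earlier vectors, larger index first
        for j in range(k - 1, -1, -1):
            _, mu = Matrix(QQ, B).gram_schmidt()
            B[k] = B[k] - round(mu[k, j]) * B[j]
        Bstar, mu = Matrix(QQ, B).gram_schmidt()
        # Lovász condition for the pair (b_{k-1}, b_k)
        lhs = delta * (Bstar[k - 1] * Bstar[k - 1])
        rhs = Bstar[k] * Bstar[k] + mu[k, k - 1]**2 * (Bstar[k - 1] * Bstar[k - 1])
        if lhs <= rhs:
            k += 1
        else:
            B[k - 1], B[k] = B[k], B[k - 1]
            k = max(k - 1, 1)
    return Matrix(QQ, B)
```

For instance, `lll_sketch([[201, 37], [1648, 297]])` (a made-up input) should return noticeably shorter vectors spanning the same lattice.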
It turns out that using this condition, the algorithm above terminates in polynomial time! More specifically, it has a time complexity of $O\left(n^5m\log^3B\right)$, where we have $n$ basis vectors as a subset of $\mathbb{R}^m$ and $B$ is a bound for the largest norm of the $b_i$. The Lovász condition ensures that the lattice vectors are ordered roughly by size, and $\delta<1$ ensures that the algorithm terminates.
This follows the proof provided by the authors of the LLL paper. We first prove that the algorithm terminates by showing it swaps the vectors finitely many times. Let $n$ be the number of basis vectors, viewed as a subset of $\mathbb{R}^m$. Let $d_i$ be the volume of the lattice $L_i$ generated by $b_1,b_2,\dots,b_i$ at each step of the algorithm. We have $d_i=\prod_{j=1}^i\left\|b_j^*\right\|$. Now consider the quantity

$$D=\prod_{i=1}^nd_i=\prod_{i=1}^n\prod_{j=1}^i\left\|b_j^*\right\|$$
This quantity only changes whenever some $b_i^*$ changes, i.e. when swaps happen. Let's consider what happens when we swap $b_{k-1}$ and $b_k$. Recall the Gram-Schmidt algorithm:

$$b_i^*=b_i-\sum_{j=1}^{i-1}\mu_{i,j}b_j^*$$
From this, we see that when we swap $b_{k-1}$ and $b_k$, the vector $b_{k-1}^*$ is replaced by $b_k^*+\mu_{k,k-1}b_{k-1}^*$. Now using the Lovász condition, which must have failed for the swap to happen, we have $\left\|b_k^*+\mu_{k,k-1}b_{k-1}^*\right\|^2<\delta\left\|b_{k-1}^*\right\|^2$, hence the value of $\left\|b_{k-1}^*\right\|$ must decrease by a factor of at least $\sqrt\delta$, i.e. the new $d_{k-1}$ is less than $\sqrt\delta\,d_{k-1}$. All other $d_i$ must remain the same as the volume remains fixed when we swap basis vectors around. Hence at each swap, $D$ decreases by a factor of at least $\sqrt\delta$. This is why we need $\delta<1$. Now we are left with showing that $D$ is bounded from below, and then we are done.
Let $\lambda\left(L\right)$ be the length of the shortest (nonzero) vector in the lattice $L$. We can treat $d_i$ as the volume of the lattice $L_i$ generated by $b_1,b_2,\dots,b_i$. Let $\lambda\left(L_i\right)$ be the length of the shortest vector in the lattice $L_i$; since $L_i\subseteq L$, we have $\lambda\left(L_i\right)\ge\lambda\left(L\right)$. By using Minkowski's lattice point theorem, we have

$$\lambda\left(L_i\right)\le\sqrt i\cdot d_i^{\frac1i}$$
(Note that the exact constant $\sqrt i$ isn't particularly important, one can use an easier value like $n$.)
Hence we see that $d_i\ge\left(\frac{\lambda\left(L\right)}{\sqrt n}\right)^i$, and hence $D$ has a (loose) lower bound $\left(\frac{\lambda\left(L\right)}{\sqrt n}\right)^{\frac{n\left(n+1\right)}2}>0$, meaning that there are only finitely many swaps, in fact $O\left(n^2\log B\right)$ of them as we compute below. Since at each iteration, $k$ either increases by $1$ when there is no swap or decreases by at most $1$ when there is a swap, and $k$ ranges from $2$ to $n$, the number of times the loop runs must be at most $n$ plus twice the number of swaps, hence the algorithm terminates.
This proof also gives us a handle on the time complexity of the operation. Let $B$ be the length of the longest input basis vector. Since we have $d_i\le B^i$ and hence $D\le B^{\frac{n\left(n+1\right)}2}$ at the start, the algorithm loops $O\left(n^2\log B\right)$ times. The Gram-Schmidt orthogonalization is the most expensive part in the entire process, taking up $O\left(nm\right)$ arithmetic operations per iteration. By using classical algorithms for arithmetic, each operation takes $O\left(n^2\log^2B\right)$ time, as the numbers involved can be shown to have size $O\left(n\log B\right)$. From this, we deduce that the time complexity of the LLL algorithm is $O\left(n^5m\log^3B\right)$, a somewhat reasonable polynomial time algorithm.
Let $b_1,b_2,\dots,b_n$ be the output of the LLL algorithm. It turns out that we have the bound

$$\left\|b_1\right\|\le\left(\frac4{4\delta-1}\right)^{\frac{n-1}4}\operatorname{vol}\left(L\right)^{\frac1n}$$
which requires $\delta>\frac14$. Such bounds for the shortest vector will be elaborated in more detail in the section on reduced basis.
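As a quick sanity check of this bound, one can compare against sage's built-in `LLL()` on a random integer basis (the dimension and entry range below are arbitrary choices of our own):

```python
# Check ||b_1|| <= (4/(4*delta - 1))^((n-1)/4) * vol(L)^(1/n) on a random full-rank basis.
n = 6
delta = 0.75
B = random_matrix(ZZ, n, n, x=-100, y=100)
while B.det() == 0:                      # retry until the rows really form a basis
    B = random_matrix(ZZ, n, n, x=-100, y=100)
b1 = B.LLL(delta=delta)[0]               # first vector of the reduced basis
bound = RR(4 / (4*delta - 1))**((n - 1) / 4) * RR(abs(B.det()))**(1 / n)
assert RR(b1 * b1).sqrt() <= bound
```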
1) Implement the LLL in sage and experimentally verify that $D$ does indeed decrease by a factor of at least $\sqrt\delta$ at each swap.
2) Show that the time complexity analysis is correct, and indeed each loop takes at most $O\left(nm\right)$ arithmetic operations.