LLL reduction

Overview

There are a few issues that one may encounter when attempting to generalize Lagrange's algorithm to higher dimensions. Most importantly, one needs to figure out the proper way to swap the vectors around and when to terminate, ideally in polynomial time. A rough sketch of the algorithm looks like this:

def LLL(B):
    d = B.nrows()
    i = 1
    while i < d:
        size_reduce(B)
        if swap_condition(B):  # the condition is determined below
            i += 1
        else:
            B[i], B[i-1] = B[i-1], B[i]
            i = max(i-1, 1)
    return B

There are two things we need to figure out: in what order we should reduce the basis elements, and how we should know when to swap. Ideally, we also want the basis to be ordered such that the smallest basis vectors come first. Intuitively, it is also better to reduce a vector by the larger vectors first before reducing by the smaller vectors, a vague analogy to filling a jar with big stones before pouring in the sand. This leads us to the following size reduction algorithm:

def size_reduce(B):
    d = B.nrows()
    i = 1
    while i < d:
        Bs, M = B.gram_schmidt()
        for j in reversed(range(i)):
            B[i] -= round(M[i,j]) * B[j]
            Bs, M = B.gram_schmidt()
        i += 1
    return B
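To see size reduction in action outside of Sage, here is a minimal pure-Python sketch using exact rational arithmetic with `fractions.Fraction`; the `gram_schmidt` helper and the example basis are our own choices, not from the text. It checks two facts used later: size reduction leaves the Gram-Schmidt vectors $\mathcal B^*$ untouched, and afterwards every $|\mu_{i,j}|\leq\frac12$.

```python
from fractions import Fraction

def gram_schmidt(B):
    # Classical Gram-Schmidt over the rationals.
    # Returns (B*, mu) with mu[i][j] = <b_i, b*_j> / <b*_j, b*_j>.
    Bs, mu = [], []
    for i, b in enumerate(B):
        v = [Fraction(x) for x in b]
        row = []
        for j in range(i):
            m = sum(Fraction(x) * y for x, y in zip(b, Bs[j])) / \
                sum(y * y for y in Bs[j])
            row.append(m)
            v = [vi - m * yj for vi, yj in zip(v, Bs[j])]
        Bs.append(v)
        mu.append(row)
    return Bs, mu

def size_reduce(B):
    # Reduce b_i by b_{i-1}, ..., b_0 (largest index first), recomputing
    # mu after every subtraction; B* is never modified by these operations.
    B = [list(row) for row in B]
    for i in range(1, len(B)):
        _, mu = gram_schmidt(B)
        for j in reversed(range(i)):
            c = round(mu[i][j])
            B[i] = [x - c * y for x, y in zip(B[i], B[j])]
            _, mu = gram_schmidt(B)
    return B

B = [[1, 1, 1], [-1, 0, 2], [3, 5, 6]]   # an arbitrary example basis
R = size_reduce(B)
Bs_before, _ = gram_schmidt(B)
Bs_after, mu = gram_schmidt(R)
print(Bs_before == Bs_after)             # B* is unchanged by size reduction
print(all(abs(mu[i][j]) <= Fraction(1, 2)
          for i in range(len(R)) for j in range(i)))
```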

We can further improve this by optimizing the Gram-Schmidt computation, since this algorithm does not modify $\mathcal B^*$ at all. Furthermore, $\mu$ changes in a very predictable fashion, and when vectors are swapped one can write explicit formulas for how $\mathcal B^*$ changes as well.

Next, we need to figure out a swapping condition. Naively, we want

$$\left\lVert b_i\right\rVert\leq\left\lVert b_{i+1}\right\rVert$$

for all $i$. However, such a condition alone does not guarantee termination in polynomial time. Since short basis vectors should be almost orthogonal, we may also want to incorporate this notion. Concretely, we want $\left|\mu_{i,j}\right|$ to be somewhat small for all pairs $i,j$, i.e. we may want something like

$$\left|\mu_{i,j}\right|\leq c$$

However, since $\mu_{i,j}=\frac{\langle b_i,b_j^*\rangle}{\langle b_j^*,b_j^*\rangle}$, this condition is easily satisfied for a sufficiently long $b_j^*$, which is not what we want. The key idea, first noticed by Lovász, is to merge these two conditions into one, now called the Lovász condition:

$$\delta\left\lVert b_i^*\right\rVert^2\leq\left\lVert b_{i+1}^*+\mu_{i+1,i}b_i^*\right\rVert^2\qquad\delta\in\left(\tfrac14,1\right)$$

It turns out that using this condition, the algorithm above terminates in polynomial time! More specifically, it has a time complexity of $O\left(d^5n\log^3B\right)$, where we have $d$ basis vectors as a subset of $\mathbb R^n$ and $B$ is a bound on the largest norm of the $b_i$. The constraint $\frac14<\delta$ ensures that the lattice vectors are ordered roughly by size, and $\delta<1$ ensures that the algorithm terminates.
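Putting the pieces together, here is a hedged, pure-Python sketch of the full algorithm with the Lovász condition as the swap test. It recomputes the Gram-Schmidt data from scratch at every step (exactly the inefficiency discussed above), uses $\delta=\frac34$, and the example basis is our own; a real implementation would update $\mathcal B^*$ and $\mu$ incrementally.

```python
from fractions import Fraction

def gram_schmidt(B):
    # Returns (B*, mu) with mu[i][j] = <b_i, b*_j> / <b*_j, b*_j>.
    Bs, mu = [], []
    for i, b in enumerate(B):
        v = [Fraction(x) for x in b]
        row = []
        for j in range(i):
            m = sum(Fraction(x) * y for x, y in zip(b, Bs[j])) / \
                sum(y * y for y in Bs[j])
            row.append(m)
            v = [vi - m * yj for vi, yj in zip(v, Bs[j])]
        Bs.append(v)
        mu.append(row)
    return Bs, mu

def lll(B, delta=Fraction(3, 4)):
    B = [list(row) for row in B]
    d, i = len(B), 1
    while i < d:
        # size-reduce b_i against b_{i-1}, ..., b_0
        _, mu = gram_schmidt(B)
        for j in reversed(range(i)):
            c = round(mu[i][j])
            B[i] = [x - c * y for x, y in zip(B[i], B[j])]
            _, mu = gram_schmidt(B)
        # Lovasz condition:
        # delta * ||b*_{i-1}||^2 <= ||b*_i + mu_{i,i-1} b*_{i-1}||^2
        Bs, mu = gram_schmidt(B)
        w = [x + mu[i][i - 1] * y for x, y in zip(Bs[i], Bs[i - 1])]
        if delta * sum(y * y for y in Bs[i - 1]) <= sum(x * x for x in w):
            i += 1
        else:
            B[i], B[i - 1] = B[i - 1], B[i]
            i = max(i - 1, 1)
    return B

print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))
```

On termination every $|\mu_{i,j}|\leq\frac12$ and the Lovász condition holds for all consecutive pairs, which is exactly what the output checks in practice.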

Polynomial time proof

This follows the proof provided by the authors of the LLL paper. We first prove that the algorithm terminates by showing that it swaps the vectors finitely many times. Let $d$ be the number of basis vectors as a subset of $\mathbb R^n$, and let $d_i$ be the volume of the lattice generated by $\left\{b_j\right\}_{j=1}^i$ at each step of the algorithm. We have $d_i=\prod_{j=1}^i\left\lVert b_j^*\right\rVert$. Now consider the quantity

$$D=\prod_{i=1}^dd_i$$

This quantity only changes when some $b_i^*$ changes, i.e. when swaps happen. Let's consider what happens when we swap $b_i$ and $b_{i+1}$. Recall the Gram-Schmidt algorithm:

$$b_i^*=b_i-\sum_{j=1}^{i-1}\mu_{i,j}b_j^*\qquad\mu_{i,j}=\frac{\langle b_i,b_j^*\rangle}{\langle b_j^*,b_j^*\rangle}$$

From this, we see that when we swap $b_i$ and $b_{i+1}$, $b_i^*$ is replaced by $b_{i+1}^*+\mu_{i+1,i}b_i^*$. Since a swap only happens when the Lovász condition fails, we have $\left\lVert b_{i+1}^*+\mu_{i+1,i}b_i^*\right\rVert^2<\delta\left\lVert b_i^*\right\rVert^2$, hence the new value of $d_i$ is less than $\sqrt\delta$ times the old one. All other $d_j$, $j\neq i$, remain the same, as the volume of a lattice is unchanged when we swap basis vectors around. Hence at each swap, $D$ decreases by a factor of at least $\sqrt\delta$. This is why we need $\delta<1$. Now we are left with showing that $d_i$ is bounded from below, and then we are done.
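To make the drop in $d_i$ concrete, here is a tiny hand-picked 2D example (our own, not from the text) where the Lovász condition fails for $b_1=(4,0)$, $b_2=(1,1)$: here $b_1^*=b_1$ and $b_2^*+\mu_{2,1}b_1^*=b_2$, so the check and the shrink factor can be computed directly.

```python
from fractions import Fraction

b1, b2 = [4, 0], [1, 1]
dot = lambda u, v: sum(Fraction(x) * y for x, y in zip(u, v))
delta = Fraction(3, 4)

# In 2D, b1* = b1 and b2* + mu_{2,1} b1* = b2, so the Lovasz condition
# reads: delta * ||b1||^2 <= ||b2||^2, i.e. 12 <= 2, which fails.
print(delta * dot(b1, b1) <= dot(b2, b2))   # False -> we must swap

# Before the swap d_1 = ||b1|| = 4; after the swap d_1 = ||b2|| = sqrt(2).
# The squared ratio (new d_1 / old d_1)^2 = 2/16 = 1/8 is below delta,
# i.e. d_1 shrank by more than the factor sqrt(delta) the proof predicts.
ratio_sq = dot(b2, b2) / dot(b1, b1)
print(ratio_sq < delta)                     # True
```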

Let $\lambda_1(L)$ be the length of the shortest (nonzero) vector in a lattice $L$. We can treat $d_i$ as the volume of the lattice $L_i$ generated by $\left\{b_j\right\}_{j=1}^i$. Let $x_i$ be the shortest vector in $L_i$. By Minkowski's lattice point theorem, we have

$$\begin{aligned}\lambda_1(L)\leq\left\lVert x_i\right\rVert&\leq\underbrace{\frac2{\sqrt\pi}\Gamma\left(\frac i2+1\right)^{\frac1i}}_{C_i}d_i^{\frac1i}\\ d_i&\geq\frac{\lambda_1(L)^i}{C_i^i}=d_{i,\min}\end{aligned}$$

(Note that the exact value of $C_i$ isn't particularly important; one can use a simpler bound like $\sqrt i$.)
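One can quickly sanity-check that the simpler bound is valid, i.e. that $C_i\leq\sqrt i$, with a few lines of Python (the range checked here is our own choice):

```python
from math import gamma, pi, sqrt

def C(i):
    # C_i = (2/sqrt(pi)) * Gamma(i/2 + 1)^(1/i) from Minkowski's theorem
    return 2 / sqrt(pi) * gamma(i / 2 + 1) ** (1 / i)

# The exact constant never exceeds the simpler bound sqrt(i)
# (small epsilon absorbs floating point error; equality holds at i = 1).
print(all(C(i) <= sqrt(i) + 1e-12 for i in range(1, 101)))   # True
```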

Hence we see that $d_i$, and therefore $D$, has a (loose) lower bound $D_{\min}=\prod_{i=1}^dd_{i,\min}$, meaning that there are at most $S=\frac{2\log\left(D/D_{\min}\right)}{\log\left(1/\delta\right)}$ swaps. Since at each iteration $i$ either increases by $1$ when there is no swap or decreases by at most $1$ when there is a swap, and $i$ ranges from $1$ to $d$, the loop runs at most $2S+d$ times, hence the algorithm terminates.

This proof also gives us a handle on the time complexity of the algorithm. Let $B$ be the length of the longest input basis vector. Since $d_i\leq B^i$, we have $D\leq B^{\frac{d^2+d}2}$, so the algorithm loops $O\left(d^2\log B\right)$ times. The Gram-Schmidt orthogonalization is the most expensive part of each iteration, taking $O\left(d^2n\right)$ arithmetic operations. A careful analysis of the bit sizes of the numbers involved shows that classical arithmetic keeps each operation polynomial in $n$ and $\log B$, from which one obtains the $O\left(d^5n\log^3B\right)$ bound stated above, a somewhat reasonable polynomial time algorithm.

Let $b_i$ be the output of the LLL algorithm. It turns out that we have the bound

$$\left\lVert b_1\right\rVert\leq\left(\frac4{4\delta-1}\right)^{\frac{d-1}4}\operatorname{vol}(L)^{\frac1d}$$

which requires $\delta>\frac14$. Such bounds for the shortest vector will be elaborated in more detail in the section on reduced bases.

Exercises

1) Implement the LLL algorithm in Sage and experimentally verify that $D$ does indeed decrease by a factor of at least $\sqrt\delta$ at each swap.

2) Show that the time complexity analysis is correct, and that each loop indeed takes at most $O\left(d^2n\right)$ arithmetic operations.