We've established that the same lattice can be described by many different bases, and that some bases are much more useful than others. A "good" basis has short, nearly orthogonal vectors. A "bad" basis has long, highly skewed vectors. Lattice reduction is the process of transforming a given basis into a better one — ideally, into one whose vectors are as short and orthogonal as possible.

Why does this matter? Two reasons. First, most lattice problems become easy once you have a short basis. Second, the difficulty of lattice reduction — specifically, how good a reduced basis you can achieve in polynomial time — determines the concrete security of lattice-based cryptographic schemes. Security parameters are chosen so that no efficient reduction algorithm can produce a basis short enough to break the scheme.

Beginner's Intuition 💡

Imagine asking someone for walking directions to a store that is just 1 block North and 1 block East. A "good" set of directions says exactly that: "Go 1 block North, then 1 block East."

A "bad" set of directions might say: "Go 100 blocks North, then 99 blocks South, then 100 blocks East, then 99 blocks West." If you follow these instructions perfectly, you'll still end up at the exact same store, but the instructions are awful, confusing, and huge.

In lattice cryptography, the public key is intentionally given as a "bad" set of directions (a bad basis). Lattice reduction is the mathematical process of untangling those bad directions to find the simple, direct path (the good basis).

What Makes a Basis "Reduced"?

The quality of a basis is measured by its Gram–Schmidt vectors (introduced in §1.2). A perfectly orthogonal basis has Gram–Schmidt vectors equal to the basis vectors themselves — no projection, no shortening. As the basis becomes more skewed, the Gram–Schmidt vectors shrink. The ratio between consecutive Gram–Schmidt norms tells us how fast the basis "deteriorates" from one vector to the next.

A reduced basis, informally, is one where this deterioration is controlled — each Gram–Schmidt vector is not too much shorter than the previous one.

Size Reduction: The First Step

Before any serious reduction, we perform size reduction: subtract integer multiples of earlier basis vectors from later ones to make the Gram–Schmidt coefficients small. This is analogous to the elimination steps in Gaussian elimination, but done over integers.

Definition 19 — Size-Reduced Basis

A basis $B = (b_1, \dots, b_n)$ is size-reduced if all Gram–Schmidt coefficients satisfy:

$|\mu_{ij}| \leq \frac{1}{2} \quad \text{for all } i > j$

where $\mu_{ij} = \langle b_i, \tilde{b}_j \rangle / \|\tilde{b}_j\|^2$ are the Gram–Schmidt coefficients. Achieving size reduction is easy: for each pair $i > j$, subtract $\lfloor \mu_{ij} \rceil \cdot b_j$ from $b_i$. This keeps all lattice points intact (it's a unimodular operation) and bounds the off-diagonal Gram–Schmidt coefficients.
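
To make Definition 19 concrete, here is a minimal numpy sketch of size reduction; the function names and the toy basis are my own, and the repeated Gram–Schmidt recomputation trades efficiency for clarity.

```python
import numpy as np

def gram_schmidt(B):
    """Gram-Schmidt orthogonalization: returns B* (floats) and coefficients mu."""
    n = B.shape[0]
    Bs = np.zeros((n, B.shape[1]))
    mu = np.zeros((n, n))
    for i in range(n):
        Bs[i] = B[i]
        for j in range(i):
            mu[i, j] = np.dot(B[i], Bs[j]) / np.dot(Bs[j], Bs[j])
            Bs[i] -= mu[i, j] * Bs[j]
    return Bs, mu

def size_reduce(B):
    """Make all Gram-Schmidt coefficients satisfy |mu_ij| <= 1/2 (Definition 19)."""
    B = B.copy()
    n = B.shape[0]
    for i in range(1, n):
        for j in range(i - 1, -1, -1):       # highest index j first
            _, mu = gram_schmidt(B)          # recomputed for clarity, not speed
            B[i] -= round(mu[i, j]) * B[j]   # unimodular: the lattice is unchanged
    return B

B = np.array([[1, 0], [100, 1]])
R = size_reduce(B)
print(R)  # the reduced basis is the identity: [[1, 0], [0, 1]]
```

The basis [[1, 0], [100, 1]] is the "bad directions" example in miniature: a single rounding step recovers the standard basis of the same lattice.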

Size reduction alone isn't enough — we can have a size-reduced basis that is still heavily skewed. The next step is to ensure consecutive Gram–Schmidt vectors don't shrink too fast.

The LLL Algorithm

The LLL algorithm (Lenstra, Lenstra, Lovász, 1982) was the first polynomial-time algorithm to produce a provably short lattice basis. It transformed the field — before LLL, no efficient basis reduction with guarantees existed.

LLL repeatedly does two things: size-reduce the current basis, then check whether swapping adjacent basis vectors would improve the Gram–Schmidt profile. The swap criterion — called the Lovász condition — fails exactly when a swap would shorten the earlier Gram–Schmidt vector by a noticeable factor, so every swap strictly improves the profile.

Definition 20 — LLL-Reduced Basis

A basis is LLL-reduced with parameter $\delta \in (1/4, 1]$ (typically $\delta = 3/4$) if it is size-reduced and satisfies the Lovász condition for all consecutive pairs:

$\delta \|\tilde{b}_i\|^2 \leq \|\tilde{b}_{i+1}\|^2 + \mu_{i+1,i}^2 \|\tilde{b}_i\|^2$

Intuitively: the $(i+1)$-th Gram–Schmidt vector cannot be much shorter than the $i$-th one (adjusted for skew). If the condition is violated, swapping $b_i$ and $b_{i+1}$ strictly improves the profile.

Theorem 4 — LLL Guarantee (Lenstra–Lenstra–Lovász 1982)

For $\delta = 3/4$, the LLL algorithm terminates in polynomial time and the output basis $B' = (b_1', \dots, b_n')$ satisfies:

$\|b_1'\| \leq 2^{(n-1)/2} \cdot \lambda_1(\Lambda)$

$\|b_1'\| \leq 2^{(n-1)/4} \cdot \det(\Lambda)^{1/n}$

The first bound says the output vector is at most exponentially longer than the true shortest vector; the second compares it to the determinant. This exponential loss is the price of polynomial time — no known polynomial-time algorithm achieves substantially better approximation factors — but the base is small (about $2^{1/4} \approx 1.19$ per dimension in the determinant bound). In practice, LLL often does much better than the worst-case bound.
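
The loop described above (size-reduce, test the Lovász condition, swap) fits in a short numpy sketch. This is a textbook version for illustration only; the helper names and the example basis are mine, and production work uses libraries such as fplll that handle precision and performance.

```python
import numpy as np

def gram_schmidt(B):
    """Gram-Schmidt orthogonalization: returns B* (floats) and coefficients mu."""
    n = B.shape[0]
    Bs = np.zeros((n, B.shape[1]))
    mu = np.zeros((n, n))
    for i in range(n):
        Bs[i] = B[i]
        for j in range(i):
            mu[i, j] = np.dot(B[i], Bs[j]) / np.dot(Bs[j], Bs[j])
            Bs[i] -= mu[i, j] * Bs[j]
    return Bs, mu

def lll(B, delta=0.75):
    """Textbook LLL (Definition 20): size-reduce, swap when Lovász fails."""
    B = B.copy()
    n = B.shape[0]
    k = 1
    while k < n:
        # size-reduce b_k against b_{k-1}, ..., b_0 (Definition 19)
        for j in range(k - 1, -1, -1):
            _, mu = gram_schmidt(B)
            B[k] -= round(mu[k, j]) * B[j]
        Bs, mu = gram_schmidt(B)
        # Lovász condition with parameter delta
        if np.dot(Bs[k], Bs[k]) >= (delta - mu[k, k - 1] ** 2) * np.dot(Bs[k - 1], Bs[k - 1]):
            k += 1
        else:
            B[[k, k - 1]] = B[[k - 1, k]]  # swap adjacent vectors and back up
            k = max(k - 1, 1)
    return B

B = np.array([[201, 37], [1648, 297]])
R = lll(B)
print(R)  # a much shorter basis for the same lattice
```

The output can be verified directly: it is size-reduced, satisfies the Lovász condition, and spans the same lattice (the determinant is unchanged up to sign).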

Why LLL Terminates: The Potential Argument

The LLL algorithm terminates because each swap strictly decreases a carefully chosen potential function. Define the LLL potential as:

The Potential Function

$\Phi(B) = \prod_{i=1}^n \|\tilde{b}_i\|^{2(n-i+1)}$

Each swap multiplies $\Phi$ by a factor smaller than $\delta < 1$: a swap happens only when the Lovász condition fails, i.e. when $\|\tilde{b}_{i+1}\|^2 + \mu_{i+1,i}^2 \|\tilde{b}_i\|^2 < \delta \|\tilde{b}_i\|^2$, and this quantity is exactly the new value of $\|\tilde{b}_i\|^2$ after the swap. For an integer basis, $\Phi$ is a product of Gram determinants and hence a positive integer, so $\Phi \geq 1$ throughout; since it starts at a value bounded by a function of the input size, the algorithm terminates after a polynomial number of swaps. Each swap and size-reduction step is itself polynomial-time, giving overall polynomial complexity.
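
A quick numerical check of this argument (a numpy sketch; helper names and the toy basis are mine): take a 2-dimensional basis that violates the Lovász condition, perform the LLL swap, and observe the potential drop.

```python
import numpy as np

def gs_norms_sq(B):
    """Squared Gram-Schmidt norms of the rows of B."""
    n = B.shape[0]
    Bs = np.zeros((n, B.shape[1]))
    for i in range(n):
        Bs[i] = B[i]
        for j in range(i):
            Bs[i] -= np.dot(B[i], Bs[j]) / np.dot(Bs[j], Bs[j]) * Bs[j]
    return np.array([np.dot(b, b) for b in Bs])

def potential(B):
    """Phi(B) = prod_i ||b~_i||^(2(n-i+1)), written here with 0-based indices."""
    norms = gs_norms_sq(B)
    n = len(norms)
    return float(np.prod([norms[i] ** (n - i) for i in range(n)]))

# The basis (4,0),(1,1) violates the Lovász condition (delta = 3/4) at the
# first pair; swapping the two rows is exactly the LLL swap step.
before = potential(np.array([[4, 0], [1, 1]]))
after = potential(np.array([[1, 1], [4, 0]]))
print(before, after)  # Phi drops from 256.0 to 32.0
```

Here the swap shrinks $\Phi$ by a factor of 8, well below $\delta = 3/4$, illustrating why the number of swaps is bounded.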

Block Korkine–Zolotarev (BKZ) Reduction

LLL's weakness is that it only looks at consecutive pairs of basis vectors. To get shorter output, you need to look at larger blocks simultaneously. The BKZ algorithm generalizes LLL by solving an SVP instance inside every consecutive block of size $\beta$.

The tradeoff is stark: larger $\beta$ gives shorter output but requires exponential time (since solving SVP in a $\beta$-dimensional block is exponential in $\beta$). BKZ with $\beta = 2$ is equivalent to LLL; BKZ with $\beta = n$ solves SVP exactly.

Definition 21 — BKZ-β Reduction

The BKZ-$\beta$ algorithm processes the basis in a sliding window of size $\beta$. In each window, it:

1. Projects the current basis vectors onto the orthogonal complement of all previous Gram–Schmidt vectors.
2. Solves SVP in this projected $\beta$-dimensional sublattice.
3. Inserts the short vector found back into the full basis.
4. Re-size-reduces.

This is repeated over all windows until no more progress is possible. The output satisfies:

$\|b_1\| \approx \delta_\beta^n \cdot \det(\Lambda)^{1/n}$

where $\delta_\beta \approx \left( \frac{\beta}{2\pi e} \right)^{1/(2(\beta-1))}$ is the root Hermite factor achievable by BKZ-$\beta$.
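
Plugging numbers into this formula shows how slowly $\delta_\beta$ creeps toward 1 as $\beta$ grows (a sketch; the formula is asymptotic and is known to be only a rough guide for small block sizes, where simulators give different values).

```python
import math

def root_hermite(beta):
    """Asymptotic root Hermite factor delta_beta for BKZ-beta.
    Rough for small beta; practical estimates use a BKZ simulator instead."""
    return (beta / (2 * math.pi * math.e)) ** (1 / (2 * (beta - 1)))

for beta in (60, 150, 400, 677):
    print(beta, round(root_hermite(beta), 4))  # delta_beta shrinks slowly toward 1
```

Note the diminishing returns: going from $\beta = 150$ to $\beta = 400$ costs exponentially more yet only shaves a fraction off $\delta_\beta$.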

BKZ in Cryptanalysis

Security estimates for all lattice-based cryptosystems are computed by asking: what is the smallest block size $\beta$ for which BKZ-$\beta$ would recover the short vector underlying the scheme? The cost of BKZ with that $\beta$ is then the security level in bits.

BKZ cost model

Running BKZ-$\beta$ on an $n$-dimensional lattice costs approximately:

$T \approx n \cdot T_{\mathrm{SVP}}(\beta)$

where $T_{\mathrm{SVP}}(\beta)$ is the cost of solving SVP in dimension $\beta$. Using sieving as the SVP oracle: $T_{\mathrm{SVP}}(\beta) \approx 2^{0.292\beta}$ classically, $2^{0.265\beta}$ quantumly.
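
This cost model is a one-liner in code (a sketch; the dimension $n = 256$ and block size below are arbitrary examples, not parameters of any real scheme).

```python
import math

def bkz_cost_bits(n, beta, c=0.292):
    """log2 of the modeled cost of one BKZ tour, T ~ n * 2^(c*beta).
    c = 0.292: classical sieving; c = 0.265: quantum (values from the text)."""
    return math.log2(n) + c * beta

# one tour of BKZ-100 on a 256-dimensional lattice with a classical sieving oracle
print(round(bkz_cost_bits(256, 100), 1))  # -> 37.2
```

The SVP-oracle term $0.292 \cdot 100 \approx 29$ bits dominates, matching the "~$2^{29}$ per tour" figure for BKZ-100 quoted later in this section.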

Realistic BKZ behavior

The theoretical BKZ model assumes perfect SVP oracles and many tours through the basis. In practice, BKZ behavior is modeled by the BKZ simulator (Chen–Nguyen 2011): it predicts the Gram–Schmidt profile after BKZ more accurately than worst-case analysis. Modern cryptanalysis tools (like the Lattice Estimator) use the simulator for security estimates.

Progressive BKZ

In practice, cryptanalysts run BKZ with increasing block sizes: $\beta = 20, 30, 40, \dots$ — called progressive BKZ. Each increase yields marginally better output but costs exponentially more. The security level is the block size where the total cost exceeds the target (e.g., $2^{128}$).
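
Under the toy cost model $T \approx n \cdot 2^{0.292\beta}$ from above, the security-level calculation just described is a simple search (illustrative only; real estimates come from tools like the Lattice Estimator, and the parameters here are made up).

```python
import math

def min_beta_for_security(target_bits, n, c=0.292):
    """Smallest block size whose modeled BKZ cost n * 2^(c*beta) meets the target."""
    beta = 2
    while math.log2(n) + c * beta < target_bits:
        beta += 1
    return beta

# hypothetical 1024-dimensional lattice, 128-bit classical security target
print(min_beta_for_security(128, 1024))  # -> 405
```

A scheme whose short vector resists BKZ up to this block size would, under this simplified model, claim 128-bit security.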

The Geometric Series Assumption (GSA)

When analyzing BKZ output, practitioners often use the Geometric Series Assumption (GSA): after BKZ reduction, the Gram–Schmidt norms decrease in a geometric sequence. Formally:

Definition 22 — Geometric Series Assumption

After BKZ-$\beta$ reduction, the Gram–Schmidt log-norms $\log \|\tilde{b}_i\|$ lie approximately on a line with negative slope:

$\log \|\tilde{b}_i\| \approx \frac{1}{n}\log\det(\Lambda) + \left(\frac{n+1}{2} - i\right) \cdot 2\log\delta_\beta$

This geometric decay of Gram–Schmidt norms makes the output predictable and allows closed-form computation of the first vector's length, which is what matters for cryptanalysis. The GSA is empirically accurate for moderate BKZ block sizes.
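
The GSA line can be checked for self-consistency: since the product of the Gram–Schmidt norms equals the determinant, the predicted log-norms must sum to $\log\det(\Lambda)$. A sketch with made-up parameters:

```python
import math

def gsa_log_norms(n, log_det, delta_beta):
    """GSA-predicted log ||b~_i|| for i = 1..n (Definition 22)."""
    slope = 2 * math.log(delta_beta)
    return [log_det / n + ((n + 1) / 2 - i) * slope for i in range(1, n + 1)]

# illustrative values: n = 100, log det = 500, delta_beta = 1.01
n, log_det = 100, 500.0
profile = gsa_log_norms(n, log_det, 1.01)
print(round(sum(profile), 6))  # -> 500.0  (log-norms sum to log det)
print(round(profile[0], 3))    # predicted log ||b_1||
```

The first entry of the profile is the predicted $\log\|b_1\|$, which is the quantity cryptanalysis actually cares about.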

The Root Hermite Factor δ₀ and How It Measures Output Quality

After a reduction algorithm terminates, we want a single number that describes how good its output is. The root Hermite factor is that number. It measures the length of the first output basis vector relative to the lattice's "expected" spacing, normalized by dimension so that it is comparable across different lattice sizes.

Root Hermite Factor — Definition and Formula

Given a lattice $\Lambda$ in dimension $n$ with determinant $\det(\Lambda)$, and a reduction algorithm that outputs a first basis vector of length $\|b_1\|$, the root Hermite factor $\delta_0$ is:

$\delta_0 = \left( \frac{\|b_1\|}{\det(\Lambda)^{1/n}} \right)^{1/n}$

Equivalently, $\|b_1\| = \delta_0^n \cdot \det(\Lambda)^{1/n}$. This form makes the geometric interpretation clear: $\delta_0^n$ is the factor by which the output exceeds the lattice spacing $\det(\Lambda)^{1/n}$.

For BKZ-$\beta$ reduction, the achievable root Hermite factor is:

$\delta_\beta \approx \left( \frac{\beta}{2\pi e} \right)^{1/(2(\beta-1))}$

As $\beta \to n$, $\delta_\beta \to 1$ (approaching the true shortest vector). As $\beta \to 2$ (LLL), $\delta \approx 1.022$ in dimension 100. The formula shows a clear tradeoff: a smaller $\delta$ requires a larger $\beta$, which means exponentially more computation.

In practice, the root Hermite factor achieved by BKZ-677 is approximately $\delta \approx 1.0028$ — the value needed to break ML-KEM-768, requiring roughly $2^{182}$ classical operations.

HKZ, LLL, and BKZ: A Comparison

There is a spectrum of reduction algorithms, ranging from the computationally cheap but weak to the optimal but exponential. The three most important points on this spectrum are HKZ, LLL, and BKZ.

HKZ Reduction (Hermite–Korkine–Zolotarev)

HKZ reduction is the gold standard — the best possible fully reduced basis. A basis is HKZ-reduced if: (1) it is size-reduced, (2) the first vector is a shortest nonzero vector ($\|b_1\| = \lambda_1(\Lambda)$), and (3) the projection of the remaining vectors onto the orthogonal complement of $b_1$ is itself HKZ-reduced (applied recursively).

HKZ reduction achieves the best possible output quality but requires solving SVP exactly in each recursive step — taking time $2^{O(n)}$. It is computationally equivalent to solving SVP. Used as a theoretical baseline, not in practice for large $n$.

LLL Reduction

LLL is the practical workhorse — polynomial time with provable guarantees. It enforces a weak Lovász condition on consecutive pairs of Gram–Schmidt vectors. Its output satisfies $\|b_1\| \leq 2^{(n-1)/2} \cdot \lambda_1(\Lambda)$.

Root Hermite factor: $\delta \approx 1.022$ (dimension 100). Runtime: polynomial in $n$ and the bit-size of the input.

LLL is the default preprocessing step before any heavier reduction, and is used directly when speed matters more than output quality (e.g., in Coppersmith attacks on RSA).

BKZ Reduction

BKZ interpolates between LLL ($\beta = 2$) and HKZ ($\beta = n$). It solves SVP inside every window of size $\beta$ and achieves $\|b_1\| \approx \delta_\beta^n \cdot \det(\Lambda)^{1/n}$ with $\delta_\beta \approx \left( \frac{\beta}{2\pi e} \right)^{1/(2(\beta-1))}$.

Runtime: $\approx n \cdot 2^{0.292\beta}$ per tour with a sieving oracle. The tradeoff is stark — doubling $\beta$ roughly halves $\log \delta_\beta$ but squares the already exponential cost.

Modern variants (BKZ 2.0, progressive BKZ, pump-and-jump BKZ) incorporate better SVP oracles, preprocessing, and tour strategies to achieve better practical performance than the theoretical model predicts.

In summary: HKZ gives optimal output at exponential cost; LLL gives weak output at polynomial cost; BKZ covers all points in between. All post-quantum lattice security estimates are expressed in terms of the BKZ block size needed to break the scheme.

Summary: Reduction Algorithms and Their Guarantees

LLL (β = 2)

Time: $\mathrm{poly}(n, \log B)$ where $B$ is a bound on input size.

Output: $\|b_1\| \leq 2^{(n-1)/2} \lambda_1$.

Root Hermite factor: $\delta \approx 1.022$ (dim 100). Fast and simple — the workhorse for preprocessing.

BKZ-20

Time: seconds to minutes in practice.

Root Hermite factor: $\delta \approx 1.009$.

Output significantly better than LLL. Used in early stages of attacks. Not enough to break well-designed PQC schemes.

BKZ-100

Time: $\sim 2^{29}$ per tour.

Root Hermite factor: $\delta \approx 1.0045$.

Achieves good output quality while remaining feasible in practice. Used in actual cryptanalysis of weak instances.

BKZ-677 (Kyber-768 security)

Time: $\sim 2^{182}$ classical.

Root Hermite factor: $\delta \approx 1.0028$.

This is the block size needed to break ML-KEM-768. Far beyond any feasible computation — ensuring ~182-bit security.