Sieving algorithms solve the Shortest Vector Problem (SVP) by maintaining a large list of lattice vectors and repeatedly combining pairs (or tuples) to produce shorter ones. Starting from an exponentially large random sample of lattice vectors, sieving distills them down to a list containing a shortest vector.

Sieving currently achieves the best known asymptotic complexity for SVP — single-exponential in dimension — and is the algorithm of choice inside BKZ for solving the local SVP oracle. The cost of sieving directly determines the security level of deployed lattice schemes.

Beginner's Intuition 💡

Imagine trying to find the single shortest piece of spaghetti in an incredibly massive, tangled bowl. Instead of measuring every single piece one by one (which would take forever), Lattice Sieving takes a different approach.

It grabs handfuls of spaghetti, compares them to each other, snaps off the longer parts, and throws away the rest. It keeps combining and comparing the remaining pieces, "sieving" through the pile until only the absolute shortest piece is left. While this takes a massive amount of memory (you need a huge "bowl" to hold all the pieces you're comparing), it is currently the fastest known way to solve the Shortest Vector Problem, acting as the ultimate benchmark for lattice security.

The AKS Sieve (2001)

Ajtai, Kumar, and Sivakumar proved in 2001 that SVP can be solved in single-exponential time and space, the first such algorithm. The AKS sieve runs in:

$2^{cn + o(n)}$ time and space, for some constant $c$.

While not practical (the constant is large and the algorithm is complex), it established that SVP is not intrinsically harder than $2^{O(n)}$.

The Nguyen–Vidick Sieve (2008)

Nguyen and Vidick's sieve made sieving practical (the name "Gauss sieve" properly belongs to Micciancio and Voulgaris's later variant). It maintains a list of vectors and repeatedly reduces pairs:

For vectors $v, w$ in the list, if $\|v - w\| < \max(\|v\|, \|w\|)$ or $\|v + w\| < \max(\|v\|, \|w\|)$, replace the longer vector with the shorter combination. Continue until no further reductions are possible. The result is a pairwise Gauss-reduced list whose shortest element approximates $\lambda_1(\mathcal{L})$.

The sieve runs in heuristic time $2^{0.415n + o(n)}$ and space $2^{0.2075n + o(n)}$.
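
The pair-reduction rule can be sketched in a few lines of Python. This is a toy illustration on a tiny 2-dimensional example (brute force over all pairs; real sieves work with exponentially large lists and avoid the quadratic scan):

```python
import itertools

def norm2(v):
    """Squared Euclidean norm."""
    return sum(x * x for x in v)

def reduce_pairs(vectors):
    """Replace the longer vector of a pair (v, w) by the shorter of v - w
    and v + w, repeating until no pair can be reduced further."""
    vecs = [tuple(v) for v in vectors]
    changed = True
    while changed:
        changed = False
        for i, j in itertools.combinations(range(len(vecs)), 2):
            v, w = vecs[i], vecs[j]
            cand = min(
                (tuple(a - b for a, b in zip(v, w)),
                 tuple(a + b for a, b in zip(v, w))),
                key=norm2,
            )
            longer = i if norm2(v) >= norm2(w) else j
            # Never store the zero vector; only accept strict improvements.
            if 0 < norm2(cand) < norm2(vecs[longer]):
                vecs[longer] = cand
                changed = True
    return vecs
```

On the list [(5, 3), (4, 3)] the loop finds (1, 0), a shortest vector of the lattice those two vectors generate.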

The 2-Sieve and ListSieve

Micciancio and Voulgaris (2010) introduced the 2-sieve with provable guarantees, via the ListSieve technique. The key insight: sample vectors uniformly from a ball of radius $R$, then find all pairs within distance $\gamma R$ of each other, for a reduction factor $\gamma < 1$:

Replace the longer of $v$ and $w$ with $v - w$ (or $v + w$) if it is shorter. Repeat. The list converges to a shortest vector in time $2^{3.199n + o(n)}$ and space $2^{1.325n + o(n)}$ (provable bounds).

The BDGL Sieve (2016)

Becker, Ducas, Gama, and Laarhoven's 2016 sieve reduced the time exponent to approximately $0.292$ using angular locality-sensitive hashing (LSH), refined into locality-sensitive filtering. This is currently the best known classical algorithm:

$T = 2^{0.292n + o(n)}, \qquad M = 2^{0.2075n + o(n)}.$

The idea: instead of checking all pairs in the list, use LSH to quickly find pairs of vectors with a small angle (equivalently, for vectors of comparable norm, pairs whose sum or difference is short). Two vectors $v, w$ produce a short combination when:

$\frac{|\langle v, w \rangle|}{\|v\|\,\|w\|} > \frac{1}{2},$

i.e., the angle between $v$ and $w$ (or between $v$ and $-w$) is less than $\pi/3$. LSH partitions the unit sphere into caps; vectors in the same cap are likely to have a small angle and thus form a productive pair.
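
The cap idea can be sketched with random Gaussian cap centers. This toy Python version (not the actual BDGL filter construction; the bucket count and seed are hypothetical choices) routes each vector to its most-correlated center so that candidate pairs are searched only within buckets:

```python
import math
import random

def unit(v):
    """Scale v to unit length."""
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def lsh_buckets(vectors, num_caps, dim, seed=0):
    """Route each vector to the random cap center maximizing |<v, c>|;
    vectors at angle below pi/3 (up to sign) tend to share a bucket."""
    rng = random.Random(seed)
    centers = [unit(tuple(rng.gauss(0.0, 1.0) for _ in range(dim)))
               for _ in range(num_caps)]
    buckets = {}
    for v in vectors:
        best = max(range(num_caps),
                   key=lambda i: abs(dot(unit(v), centers[i])))
        buckets.setdefault(best, []).append(v)
    return buckets

def candidate_pairs(buckets):
    """Yield only pairs that share a bucket (a subset of all pairs)."""
    for bucket in buckets.values():
        for i in range(len(bucket)):
            for j in range(i + 1, len(bucket)):
                yield bucket[i], bucket[j]
```

A single hash table can miss close pairs whose angle straddles a cap boundary; real schemes rerandomize over many tables to recover them.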

Quantum Sieving

The BDGL sieve can be quantized using Grover's algorithm to speed up the pair-finding step. The quantum variant achieves:

$T = 2^{0.265n + o(n)}.$

This is why NIST post-quantum standards target classical security but must also account for this quantum speedup, requiring lattice dimensions large enough that even $2^{0.265n}$ exceeds the security target (e.g., $2^{128}$).
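
Plugging these exponents into a target security level gives a quick sanity check. A back-of-the-envelope Python sketch (leading-order exponents only; $o(n)$ terms and real attack overheads are ignored):

```python
import math

def min_dimension(cost_exponent, target_bits=128):
    """Smallest sieving dimension n with cost_exponent * n >= target_bits."""
    return math.ceil(target_bits / cost_exponent)

classical_n = min_dimension(0.292)  # BDGL classical sieve
quantum_n = min_dimension(0.265)    # Grover-accelerated sieve
```

Against a $2^{128}$ target, the classical exponent already requires dimension 439, and the quantum exponent pushes this to 484; deployed parameter sets add substantial margins on top.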

Sieving in Practice

Sieving becomes practical in dimensions above roughly 70–80 (below that crossover, enumeration dominates). Inside BKZ-$\beta$ with large $\beta$, the local SVP oracle is solved by a sieve in the projected $\beta$-dimensional sublattice. For $\beta \gtrsim 70$, sieving is faster than pruned enumeration.

The g6k library (see next section) provides a production-quality implementation of these sieves and is the tool of choice for lattice challenges and security analysis.

The Nguyen–Vidick Sieve: Precise Complexity

The Nguyen–Vidick (NV) sieve (2008) was the first practically efficient sieve with a clean heuristic analysis. Given a list of $N$ lattice vectors sampled uniformly from a ball of radius $R$, the sieve finds all pairs $v, w$ with $\|v - w\| \le \gamma R$ for a factor $\gamma < 1$ and replaces the longer vector by the difference. The list converges when no more reductions are possible.

The heuristic analysis proceeds by counting the expected number of pairs. In dimension $n$, two uniform random vectors from a ball of radius $R$ satisfy $\|v - w\| \le \gamma R$ with probability (up to polynomial factors):

$p(\gamma) \approx \left(\gamma^2\left(1 - \tfrac{\gamma^2}{4}\right)\right)^{n/2}.$

To ensure each vector has, in expectation, one productive partner, so that the list self-reduces, we need $N \cdot p(\gamma) \approx 1$, giving list size:

$N \approx \frac{1}{p(\gamma)} \;\longrightarrow\; \left(\tfrac{4}{3}\right)^{n/2} \approx 2^{0.2075n} \quad (\gamma \to 1).$

Processing all pairs naively takes $N^2$ time per pass, and polynomially many passes suffice, so the heuristic totals are:

$T = 2^{0.415n + o(n)}, \qquad M = 2^{0.2075n + o(n)},$

where the memory bound uses the fact that the reduced list after one pass again has roughly $(4/3)^{n/2}$ vectors. This tighter analysis was given by Pujol–Stehlé (2009).
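
These exponents are one line of arithmetic. The following Python check takes the heuristic pair probability to be $p(\gamma) \approx (\gamma^2(1 - \gamma^2/4))^{n/2}$ and recovers the space and time exponents at $\gamma \to 1$:

```python
import math

def pair_probability_exponent(gamma):
    """log2(p(gamma)) / n for the heuristic p ~ (gamma^2 (1 - gamma^2/4))^(n/2)."""
    return 0.5 * math.log2(gamma ** 2 * (1.0 - gamma ** 2 / 4.0))

# At gamma -> 1 the pair probability is (3/4)^(n/2), so the list size
# exponent is log2(4/3)/2 and a naive all-pairs pass squares it.
space_exp = -pair_probability_exponent(1.0)  # log2(N)/n ~ 0.2075
time_exp = 2.0 * space_exp                   # log2(N^2)/n ~ 0.415
```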

Tuple Sieving: k-Sieve for k > 2

Tuple sieves generalize pairwise sieving to combinations of $k$ vectors. The $k$-sieve (Herold–Kirshanova, 2017; Ducas, 2018) finds tuples $v_1, \dots, v_k$ such that $\|\varepsilon_1 v_1 + \cdots + \varepsilon_k v_k\|$ is shorter than every $\|v_i\|$, for some signs $\varepsilon_i \in \{\pm 1\}$.

The key improvement: a $k$-tuple can reduce vectors that no 2-subset can reduce. The 3-sieve (implemented in g6k) achieves heuristic complexity:

$T = 2^{0.3962n + o(n)}, \qquad M = 2^{0.1887n + o(n)}.$
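
To see why a triple can reduce where no pair can, take three unit vectors with pairwise inner product $-0.4$: each pairwise sum or difference has squared norm $2 \pm 0.8 > 1$, yet $\|v_1 + v_2 + v_3\|^2 = 3 + 6 \cdot (-0.4) = 0.6 < 1$. A small Python construction of such a triple (a hand-rolled illustration, not g6k's 3-sieve):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm2(v):
    """Squared Euclidean norm."""
    return dot(v, v)

def triple_with_pairwise_dot(c):
    """Three unit vectors in R^3, symmetric about the z-axis, whose pairwise
    inner products all equal c (feasible for -1/2 <= c <= 1)."""
    r2 = 2.0 * (1.0 - c) / 3.0  # squared radius of the circle the tips lie on
    r, h = math.sqrt(r2), math.sqrt(1.0 - r2)
    return [(r * math.cos(2.0 * math.pi * k / 3.0),
             r * math.sin(2.0 * math.pi * k / 3.0),
             h) for k in range(3)]
```

With $c = -0.4$, no 2-subset reduces (all $|c| < 1/2$), but the full sum is strictly shorter than every input vector.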

At first glance the 3-sieve seems slower than BDGL (which achieves $2^{0.292n}$). However, the 3-sieve needs less memory: $2^{0.1887n}$ against the 2-sieve's $2^{0.2075n}$. The correct comparison is at fixed memory: capped at $2^{0.1887n}$ memory, the 2-sieve can no longer attain its $2^{0.292n}$ optimum, and the 3-sieve yields a better time–memory trade-off curve for intermediate memory budgets.

Asymptotically, increasing $k$ pushes the memory exponent lower still, but the combinatorial cost of managing $k$-tuples grows rapidly with $k$, making $k = 3$ the practical sweet spot.

Time–Memory Trade-offs in Sieving

Sieving algorithms exhibit a fundamental time–memory trade-off: using less memory forces more computation, and vice versa. This trade-off is critical for security analysis because an adversary with a large memory budget can sieve faster.

For the BDGL sieve with a memory budget of $2^{\mu n}$ vectors, where $\mu \le 0.2075$, the time exponent increases continuously as $\mu$ shrinks.

At the extremes: with full memory ($\mu = 0.2075$) one achieves time $2^{0.292n}$, the BDGL optimum. With near-zero memory (a streaming sieve, essentially no database), time degrades toward $2^{cn}$ for a large constant $c$.

The low-memory sieve of Bai–Miller–Nguyen (2023) roughly halves the memory exponent, from $0.2075$ to about $0.104$, at the cost of a modest time increase. For large-scale attacks on NIST schemes (which require dimensions of roughly 500–1000), the memory required by BDGL would be $2^{100}$+ entries, far beyond any classical hardware. The low-memory variants are therefore more practically relevant for hardware-constrained adversaries.
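
The entry-count claim is direct arithmetic on the memory exponent. A rough Python sketch (the 16-bits-per-coordinate storage format is an assumption for illustration):

```python
import math

MEM_EXP = 0.2075  # heuristic log2(database size) / n for a full 2-sieve database

def entries_log2(n):
    """log2 of the number of vectors the database holds in dimension n."""
    return MEM_EXP * n

def storage_bits_log2(n, bits_per_coord=16):
    """log2 of total storage in bits, assuming n coordinates per vector."""
    return entries_log2(n) + math.log2(n * bits_per_coord)
```

Already at $n = 500$ this gives about $2^{103.8}$ entries and roughly $2^{117}$ bits; at $n = 1000$ the entry count alone is $2^{207.5}$.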

NIST security categories account for both time and memory requirements by requiring attacks to cost at least the category target (e.g., $2^{128}$) in both metrics simultaneously, ensuring that no time–memory trade-off brings a scheme within reach of a realistic adversary.