1.2 Vectors & Bases
How you describe a lattice — and why the same lattice can look easy or impossibly hard depending on how you describe it.
A lattice is built from a set of directions called basis vectors. Think of them like the axes on a map: once you have your directions, every location on the map can be described by how many steps you take along each axis. Change the axes, and the same locations now have different coordinates — but the terrain itself hasn't changed.
This is the key idea of this section: the same lattice can be described by many different sets of basis vectors. Some descriptions make the lattice easy to work with. Others make it look completely intractable. That gap is the foundation of lattice cryptography.
Beginner's Intuition 💡
Imagine trying to give someone walking directions to reach every intersection in Manhattan. The simplest way is to give them two basic steps: "Walk 1 block North" and "Walk 1 block East". These basic steps are your **basis vectors**.
However, you could instead give them completely different basic steps: "Walk 100 blocks North and 99 blocks East" and "Walk 99 blocks North and 98 blocks East". Believe it or not, by taking combinations of these complex steps forwards and backwards, you can still reach the exact same intersections!
In lattice math, the simple steps are a "Good Basis", and the complex steps are a "Bad Basis". The grid of intersections (the lattice) never changes—only the directions you use to describe it do.
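You can check this claim directly in code. A small sketch, treating each complicated step as a (north, east) vector — the pair (100, 99) and (99, 98) is chosen so that the trick actually works:

```python
# Each "complicated" step as a (north, east) vector.
v1 = (100, 99)   # 100 blocks North, 99 blocks East
v2 = (99, 98)    # 99 blocks North, 98 blocks East

def combine(a, b):
    """Take step v1 a times and step v2 b times (negative = walk backwards)."""
    return (a * v1[0] + b * v2[0], a * v1[1] + b * v2[1])

# Walk v1 backwards 98 times and v2 forwards 99 times: one block North.
print(combine(-98, 99))   # (1, 0)
# Walk v1 forwards 99 times and v2 backwards 100 times: one block East.
print(combine(99, -100))  # (0, 1)
```

Since both unit steps are reachable, every intersection is.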
The Basis Matrix
Rather than listing basis vectors one at a time, we stack them as columns of a single matrix. This matrix $B$ compactly encodes the entire shape of the lattice. Any lattice point is obtained by multiplying $B$ by a column vector of integers.
Definition 2 — Basis Matrix
Write the basis vectors as the columns of a matrix $B = [b_1 \mid b_2 \mid \cdots \mid b_n]$. Then the lattice is:
$\Lambda(B) = \{ Bx \mid x \in \mathbb{Z}^n \}$
In plain English: take any column of integers $x$, multiply it by $B$, and you get a lattice point. When the number of basis vectors equals the ambient dimension (a square matrix), we call the lattice full-rank.
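In code, the definition is nothing more than a matrix-vector product. A minimal sketch (the example basis is arbitrary) that enumerates the lattice points $Bx$ for small integer coordinate vectors:

```python
from itertools import product

B = [[2, 1],   # first row: x-components of b1 and b2
     [0, 3]]   # second row: y-components of b1 and b2

def mat_vec(B, x):
    """Multiply the basis matrix B by an integer coordinate vector x."""
    return tuple(sum(B[i][j] * x[j] for j in range(len(x)))
                 for i in range(len(B)))

# Lattice points Bx for all x with entries in {-1, 0, 1}.
points = sorted(mat_vec(B, x) for x in product([-1, 0, 1], repeat=2))
print(points)
```

Each coordinate vector $x$ labels exactly one lattice point; growing the coordinate range enumerates more of the infinite grid.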
Many Bases, One Lattice
Here is something surprising: a lattice has infinitely many valid bases. Consider a 2D lattice built from two vectors. You can swap one basis vector for the sum of both and get a completely different-looking pair of arrows — yet they generate exactly the same infinite grid of points.
The transformations that turn one valid basis into another are called unimodular matrices. They are integer matrices whose determinant is exactly +1 or −1. Multiplying a basis by a unimodular matrix reshuffles the basis vectors without adding or removing any lattice points.
Definition 3 — Equivalent Bases
Two bases $B$ and $B'$ generate the same lattice if and only if $B' = BU$ for some integer matrix $U$ with $\det(U) = \pm 1$. Such matrices are called unimodular.
Intuition: $U$ tells you how to mix the old basis vectors (with integer coefficients, no fractions) to get the new ones. The lattice points don't move — only the labels change.
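A quick sanity check of this equivalence in code: multiply a basis $B$ by a shear matrix $U$ with determinant 1 (example matrices chosen arbitrarily) and confirm both bases generate the same lattice points inside a small box:

```python
from itertools import product

B = [[2, 1], [0, 3]]            # columns are the basis vectors
U = [[1, 1], [0, 1]]            # integer shear with det = 1: unimodular

def mat_mul(A, C):
    return [[sum(A[i][k] * C[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def lattice_points(M, coord_range=6, box=4):
    """All points Mx, x in {-coord_range..coord_range}^2, inside a small box."""
    pts = set()
    for x in product(range(-coord_range, coord_range + 1), repeat=2):
        p = (M[0][0] * x[0] + M[0][1] * x[1],
             M[1][0] * x[0] + M[1][1] * x[1])
        if abs(p[0]) <= box and abs(p[1]) <= box:
            pts.add(p)
    return pts

B2 = mat_mul(B, U)              # a different-looking basis, same lattice
print(lattice_points(B) == lattice_points(B2))  # True
```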
Good Bases and Bad Bases
Not all bases are equally useful. A good basis has short, nearly perpendicular vectors. Given such a basis, you can quickly find nearby lattice points, estimate the shortest vector, and solve geometric problems efficiently. A bad basis has long, nearly parallel vectors that are highly skewed. The lattice is identical — but working with a bad basis feels like navigating with a wildly distorted map.
This asymmetry is the secret behind lattice-based trapdoor cryptography. A trusted party generates a short, well-behaved basis (their private key), then publishes a deliberately ugly, skewed basis derived from it (the public key). Anyone who only sees the ugly basis cannot efficiently work backwards to find the short one.
Gram–Schmidt Orthogonalization
To measure how "skewed" a basis is, we use a classical tool from linear algebra called Gram–Schmidt orthogonalization. The idea: given your basis vectors, iteratively subtract off their projections onto each other to produce a new set of perpendicular vectors. These perpendicular vectors are not a lattice basis (they're generally not integer combinations of the originals), but they reveal the underlying geometry of the basis.
Definition 4 — Gram–Schmidt Vectors
Start with $\tilde{b}_1 = b_1$. For each subsequent vector, subtract its projection onto all previous orthogonal vectors:
$\tilde{b}_i = b_i - \sum_{j < i} \frac{\langle b_i, \tilde{b}_j \rangle}{\|\tilde{b}_j\|^2} \cdot \tilde{b}_j$
The resulting vectors $\tilde{b}_1, \dots, \tilde{b}_n$ are mutually perpendicular. Their lengths measure how much each basis vector contributes in a "fresh" direction not already covered by previous vectors. Short Gram–Schmidt vectors mean a skewed basis; long ones mean a well-spread basis.
An important fact: the product of all Gram–Schmidt lengths always equals $|\det(B)|$, the volume of the fundamental cell of the lattice. So the total volume is fixed — what changes between bases is how evenly that volume is distributed across the Gram–Schmidt vectors.
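Definition 4 translates directly into code. A sketch using exact rational arithmetic (the example basis is arbitrary), which also verifies the volume fact just stated:

```python
from fractions import Fraction
from math import prod

def gram_schmidt(basis):
    """Gram-Schmidt over exact rationals; returns the orthogonalized vectors."""
    gs = []
    for b in basis:
        b = [Fraction(x) for x in b]
        for g in gs:
            # Subtract the projection of b onto the earlier orthogonal vector g.
            mu = sum(bi * gi for bi, gi in zip(b, g)) / sum(gi * gi for gi in g)
            b = [bi - mu * gi for bi, gi in zip(b, g)]
        gs.append(b)
    return gs

basis = [[3, 1], [2, 2]]        # rows are the basis vectors here
gs = gram_schmidt(basis)

# The two Gram-Schmidt vectors are perpendicular...
assert sum(x * y for x, y in zip(gs[0], gs[1])) == 0
# ...and the product of their squared lengths equals det(B)^2.
prod_sq = prod(sum(x * x for x in g) for g in gs)
det_B = 3 * 2 - 1 * 2
print(prod_sq == det_B ** 2)    # True
```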
The LLL Algorithm: Taming a Bad Basis
If you are given a bad basis, can you find a better one? Yes — the LLL algorithm (named after Lenstra, Lenstra, and Lovász, 1982) does exactly this in polynomial time. It won't find the best possible basis (that problem is as hard as SVP), but it finds one that is provably pretty good — short enough to be useful, efficient enough to run on large inputs.
What LLL guarantees
After running LLL, the first basis vector $b_1$ satisfies $\|b_1\| \leq 2^{(n-1)/2} \cdot \lambda_1$, where $\lambda_1$ is the true shortest vector length. In dimension 100, this worst-case factor is about $2^{49.5} \approx 10^{15}$ — LLL typically does far better in practice, but the guarantee is the point: good enough for many applications, not good enough to break well-designed lattice cryptography.
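Full LLL takes more code, but its two-dimensional ancestor, Lagrange–Gauss reduction, fits in a few lines and in 2D actually finds a shortest vector. A minimal integer-only sketch:

```python
def lagrange_gauss(u, v):
    """Reduce a 2D basis (u, v); returns a basis whose first vector is shortest."""
    def norm_sq(w):
        return w[0] * w[0] + w[1] * w[1]
    if norm_sq(u) > norm_sq(v):
        u, v = v, u
    while True:
        # Nearest integer multiple of u to subtract from v: round(<u,v>/<u,u>).
        m = round((u[0] * v[0] + u[1] * v[1]) / norm_sq(u))
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if norm_sq(v) >= norm_sq(u):
            return u, v
        u, v = v, u

# A deliberately skewed basis reduces back to unit-length steps.
print(lagrange_gauss((100, 99), (99, 98)))
```

LLL is essentially this size-reduce-and-swap loop generalized to $n$ dimensions, with a relaxed swap condition that keeps the running time polynomial.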
BKZ: stronger reduction
Block Korkine-Zolotarev (BKZ) is a family of algorithms that improve on LLL by solving SVP inside overlapping blocks of the basis. A larger block size $\beta$ gives shorter output vectors but runs exponentially slower in $\beta$. BKZ with large $\beta$ is the main tool in lattice cryptanalysis — used to estimate how hard a lattice problem actually is.
The trapdoor construction
In GPV signatures and similar schemes, the signer holds a short basis $S$ (the trapdoor). The public key is a related long basis $A = SU$. Signing uses $S$ to solve CVP efficiently; verification only needs the public $A$. Without $S$, solving CVP under $A$ is believed computationally infeasible.
The Dual Lattice
Every lattice has a companion called its dual lattice. The dual contains all vectors that have integer inner products with every vector in the original lattice. If the original lattice is "big" (coarsely spaced), the dual is "small" (finely spaced), and vice versa. The dual lattice plays a key role in the security proofs of LWE and in the analysis of Gaussian distributions over lattices, which we will encounter in §2.2.
Definition 5 — Dual Lattice
$\Lambda^\vee = \{ y \in \mathbb{R}^n \mid \langle y, x \rangle \in \mathbb{Z} \text{ for all } x \in \Lambda \}$
If $B$ is a basis for $\Lambda$, then the inverse transpose $B^{-\top}$ is a basis for $\Lambda^\vee$. The determinant of the dual is $\det(\Lambda^\vee) = 1/\det(\Lambda)$ — confirming that a denser lattice has a sparser dual, and vice versa.
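A small sketch of Definition 5 in code: compute $B^{-\top}$ for a 2×2 example basis with exact rationals, and check that every dual basis vector has integer inner product with every primal one:

```python
from fractions import Fraction

# Columns of B are the basis vectors b1 = (a, c) and b2 = (b, d).
a, b = 2, 1
c, d = 0, 3
det = Fraction(a * d - b * c)

# Inverse of B via the 2x2 formula, then transpose: B^{-T}.
Binv = [[ d / det, -b / det],
        [-c / det,  a / det]]
Bdual = [[Binv[0][0], Binv[1][0]],
         [Binv[0][1], Binv[1][1]]]

primal = [(a, c), (b, d)]                 # columns of B
dual = [(Bdual[0][0], Bdual[1][0]),       # columns of B^{-T}
        (Bdual[0][1], Bdual[1][1])]

# <d_i, b_j> is 1 when i == j and 0 otherwise — an integer either way.
ok = all(dual[i][0] * primal[j][0] + dual[i][1] * primal[j][1]
         == (1 if i == j else 0)
         for i in range(2) for j in range(2))
print(ok)  # True
```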
The Gram Matrix
Given a basis matrix $B$, the Gram matrix is the symmetric positive definite matrix $G = B^\top B$. Its entries are all pairwise inner products of the basis vectors: the diagonal entry $G_{ii} = \langle b_i, b_i \rangle = \|b_i\|^2$ is the squared length of the $i$-th basis vector, and the off-diagonal entry $G_{ij} = \langle b_i, b_j \rangle$ measures how much vectors $b_i$ and $b_j$ point in the same direction.
The Gram matrix encodes the complete metric geometry of the basis — all lengths and angles — in a single object. If $G_{ij} = 0$ for all $i \neq j$, the basis is perfectly orthogonal. The determinant of the Gram matrix gives the squared volume of the fundamental parallelepiped:
Gram Matrix — Key Formula
For a basis matrix $B$ with columns $b_1, \dots, b_n$:
$G = B^\top B, \qquad G_{ij} = \langle b_i, b_j \rangle, \qquad \det(G) = \det(B)^2$
The angle $\theta_{ij}$ between basis vectors satisfies $\cos\theta_{ij} = G_{ij} / \sqrt{G_{ii}\, G_{jj}}$. A basis is orthogonal if and only if $G$ is diagonal. If two bases $B$ and $B' = BU$ generate the same lattice, their Gram matrices are related by $G' = U^\top G U$ for a unimodular integer matrix $U$.
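A quick numeric check of these identities (the example basis is arbitrary; rows of `basis` are the basis vectors):

```python
basis = [[3, 1], [2, 2]]        # rows are the basis vectors here

# Gram matrix: all pairwise inner products.
G = [[sum(u * v for u, v in zip(bi, bj)) for bj in basis] for bi in basis]

det_B = basis[0][0] * basis[1][1] - basis[0][1] * basis[1][0]
det_G = G[0][0] * G[1][1] - G[0][1] * G[1][0]

print(G)                     # [[10, 8], [8, 8]]
print(det_G == det_B ** 2)   # True
```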
Volume of a Lattice
The volume (or determinant) of a lattice is one of its most fundamental invariants. It measures the "density" of lattice points: a small volume means points are close together; a large volume means they are sparsely distributed. Crucially, the volume does not depend on which basis you choose — it is an intrinsic property of the lattice itself.
Lattice Volume — Definition and Properties
For any basis matrix $B$ of $\Lambda$:
$\det(\Lambda) = |\det(B)|$
This equals the volume of the fundamental parallelepiped spanned by the basis vectors. It is invariant under basis change: if $B' = BU$ with $\det(U) = \pm 1$, then $|\det(B')| = |\det(B)|$.
For the integer lattice $\mathbb{Z}^n$: $\det(\mathbb{Z}^n) = 1$.
For a scaled lattice $c\Lambda$: $\det(c\Lambda) = c^n \det(\Lambda)$.
For the dual: $\det(\Lambda^\vee) = 1/\det(\Lambda)$.
The product of Gram–Schmidt norms also equals the volume: $\det(\Lambda) = \prod_{i=1}^{n} \|\tilde{b}_i\|$, which is how the volume is efficiently computed in practice.
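A quick numeric check that the volume survives a unimodular change of basis (example matrices chosen arbitrarily):

```python
def det2(M):
    """Determinant of a 2x2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def mul2(A, C):
    """2x2 matrix product."""
    return [[sum(A[i][k] * C[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

B = [[2, 1], [0, 3]]
U = [[3, 2], [1, 1]]            # det = 1: unimodular
same_volume = abs(det2(mul2(B, U))) == abs(det2(B))
print(same_volume)              # True
```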
Change of Basis and Unimodular Matrices
Since a lattice has infinitely many valid bases, it is important to characterize precisely when two matrices $B$ and $B'$ generate the same lattice. The answer is elegant: they are related by a unimodular matrix — an integer matrix with determinant $\pm 1$.
Unimodular Matrices and Basis Equivalence
An integer matrix $U$ is unimodular if $\det(U) = \pm 1$. Equivalently, $U$ is invertible over the integers — its inverse $U^{-1}$ also has integer entries.
The set of all unimodular $n \times n$ matrices forms a group under multiplication, denoted $\mathrm{GL}_n(\mathbb{Z})$. The elementary row and column operations over integers — adding an integer multiple of one row to another, or swapping two rows — all produce unimodular matrices.
In cryptography, a trapdoor consists of a short basis $S$ and a unimodular matrix $U$ such that $B = SU$ is a bad (long, skewed) public basis. Anyone who sees only $B$ cannot recover $S$ without solving the lattice reduction problem.
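A toy illustration of this idea (not a real cryptosystem — every matrix here is made up for the demo): hide a short secret basis behind a product of random determinant-1 shears, producing a skewed public basis for the very same lattice.

```python
import random

random.seed(0)

def mul2(A, C):
    """2x2 integer matrix product."""
    return [[sum(A[i][k] * C[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

S = [[1, 0], [0, 1]]            # the short, well-behaved secret basis
B = [row[:] for row in S]
for _ in range(8):
    k = random.randint(1, 5)
    U = random.choice([[[1, k], [0, 1]],
                       [[1, 0], [k, 1]]])   # shear matrices, det = 1
    B = mul2(B, U)

print(B)                             # long, skewed entries...
print(abs(det2(B)) == abs(det2(S)))  # ...but the same lattice volume: True
```

A product of unimodular matrices is unimodular, so $B$ really is another basis for the lattice of $S$ — it just no longer looks like one.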