In Part I, we looked at the problem we want attackers of UOV to solve. In Part II we had plenty of oil and vinegar, but did not really discuss the whole unbalanced part of the scheme. So in this third part of the three-part series, I will discuss why we are using Unbalanced Oil and Vinegar in particular, and look at the Kipnis–Shamir attack on the original Oil and Vinegar scheme.
The paper that presented this attack is formulated in the language of matrices, but in my opinion, and showing my biases as an algebraic geometer, one can’t fully understand the problem in this language, and I will instead present it in the language of bilinear forms that I have built up over the last two blog posts. Using bilinear forms is useful, since it allows us to present things without having to choose an explicit basis, which means that we do not really have to distinguish between the public and the private key all that much.
Preliminaries
Most of this section is a mini course in linear algebra and in particular bilinear forms and dual vector spaces. I will try to also formulate the attack purely in terms of matrices, so if this is going too far for you, you might still follow along with the attack, but as mentioned, using the language of bilinear forms in my opinion gives a deeper understanding.
First, as a quick refresher, as discussed in Part II, a UOV public key is a set of bilinear forms $\beta_1, \dots, \beta_m \colon V \times V \to \mathbb{F}_q$, which have the property that there exists a subspace $O \subseteq V$ such that $\beta_i(o, o') = 0$ for all $o, o' \in O$. This subspace $O$ is the private key we are looking for. The attack in question is a key recovery attack working with the public key only, so we don’t really need to repeat how signing works, and we will focus solely on how to recover this subspace given the bilinear forms, with Part II discussing the signature scheme.
We also need some additional vocabulary to talk about bilinear forms. Bilinear forms in positive characteristic sometimes behave somewhat weirdly, so it makes sense to prove stuff from scratch anyway.
First, we need to discuss non-degenerate bilinear forms. A bilinear form $\beta$ is called non-degenerate if, for every vector $v \neq 0$, there is at least one vector $w$ such that $\beta(v, w) \neq 0$. If you look at the matrix defining the bilinear form, this is equivalent to saying that this matrix has full rank.
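This equivalence is easy to check concretely: over a finite prime field, the matrix of the form has full rank exactly when its determinant is nonzero modulo $p$. Here is a minimal sketch in Python with SymPy; the field GF(7) and the example matrices are my own toy choices, not anything from the scheme.

```python
from sympy import Matrix

p = 7  # toy field GF(7); the matrices below are arbitrary examples

def is_nondegenerate(B, p):
    # non-degenerate <=> the matrix of the form has full rank,
    # i.e. its determinant is nonzero mod p
    return B.det() % p != 0

B1 = Matrix([[1, 2], [3, 4]])  # det = -2, nonzero mod 7: non-degenerate
B2 = Matrix([[1, 2], [2, 4]])  # det = 0: degenerate, e.g. v = (2, -1)
                               # pairs to 0 with every w
print(is_nondegenerate(B1, p), is_nondegenerate(B2, p))  # True False
```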
The reason we care about non-degenerate bilinear forms is because they allow us, on a finite dimensional vector space at least, to “transpose” vectors, i.e. turn column vectors into row vectors. If you know quantum physics, what a non-degenerate bilinear form does is turn a ket vector into a bra vector. If you don’t know quantum physics, don’t worry about it, those are just terms the quantum physicists invented because they didn’t know enough about bilinear forms and dual vector spaces.
To understand dual vector spaces, all you need to know here is that linear forms, i.e. linear functions that output a single scalar, themselves also form a vector space over the same field, since you can add linear forms and multiply them with scalars. Conceptually, these are literally just row vectors, as mentioned above, since those can be multiplied with column vectors to give elements of the base field, i.e. row vectors define all the linear forms that exist (for a finite dimensional vector space. Infinite dimensional vector spaces are very, very different, and to take back my comment on quantum physicists a bit, those are what quantum physics deals with, but still, understanding dual vector spaces would make their field a whole lot more approachable). We write $V^*$ for this dual vector space.
To tie this to our bilinear forms, the important thing a bilinear form allows you to do is to “curry”, or partially evaluate it, that is given a bilinear form $\beta$ and an element $v \in V$, we can look at the linear form $\beta(v, \cdot) \colon w \mapsto \beta(v, w)$.
And for non-degenerate forms, doing this partial evaluation gives you an extra property, namely that, if $a_1 \beta(v_1, \cdot) + \dots + a_k \beta(v_k, \cdot) = 0$, we know that $a_1 v_1 + \dots + a_k v_k = 0$ as well, since the linear form 0 is defined by being 0 for all vectors in the vector space, and linearity in the first argument means that $a_1 \beta(v_1, \cdot) + \dots + a_k \beta(v_k, \cdot) = \beta(a_1 v_1 + \dots + a_k v_k, \cdot)$, so by definition of non-degeneracy it follows that $a_1 v_1 + \dots + a_k v_k = 0$. What this means is that, if $v_1, \dots, v_k$ is linearly independent, then $\beta(v_1, \cdot), \dots, \beta(v_k, \cdot)$ is linearly independent as well. This proves the statement I made above of finite dimensional vector spaces having the same dimension as their dual, and means that every non-degenerate bilinear form gives us a way of mapping column vectors to row vectors. This might seem like a lot of work for literally just turning a page 90 degrees, but as I said, quantum physicists still pay the price for thinking they could turn papers better all by themselves.
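In coordinates, currying is just a matrix product: the linear form $\beta(v, \cdot)$ is the row vector $v^\top B$. A small illustration (the matrix is again my own toy example over GF(7)), showing that a basis stays linearly independent after currying with a non-degenerate form:

```python
from sympy import Matrix

p = 7
B = Matrix([[1, 2], [3, 4]])  # non-degenerate mod 7 (det = -2)

def curry(B, v, p):
    # partial evaluation: beta(v, .) is the row vector v^T B
    return (v.T * B).applyfunc(lambda a: a % p)

e1, e2 = Matrix([1, 0]), Matrix([0, 1])
rows = Matrix.vstack(curry(B, e1, p), curry(B, e2, p))
# the curried basis vectors are again linearly independent:
# the stacked rows have nonzero determinant mod p
print(rows.det() % p != 0)  # True
```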
Next, we need to talk about orthogonality. This term really only makes geometric sense in characteristic 0, but we can still use it algebraically: Two vectors $v, w$ are called orthogonal if $\beta(v, w) = 0$. For a subvectorspace $U \subseteq V$ we define $U^\perp = \{ w \in V \mid \beta(u, w) = 0 \text{ for all } u \in U \}$. With this definition, we can rephrase the trapdoor of our bilinear forms as there being a subspace $O$ such that $O \subseteq O^{\perp_i}$ for all $i$, where $\perp_i$ denotes the orthogonal complement with respect to $\beta_i$. Basically, a subspace that is perpendicular to itself in all bilinear forms of the public key.
While this is somewhat obviously nonsensical in a geometric way, we can rescue at least one property of perpendicularity into positive characteristic, and that is the dimension of the orthogonal space: For a non-degenerate bilinear form $\beta$ and a subvectorspace $U \subseteq V$, we have $\dim U^\perp = \dim V - \dim U$. To see this, take a basis $u_1, \dots, u_k$ of $U$. For every basis vector $u_i$, we again look at the linear form $\beta(u_i, \cdot)$. This gives us $k$ many linearly independent linear forms, all of which vanish on $U^\perp$. We can consider these forms together as a linear function $V \to \mathbb{F}_q^k$, and since they are linearly independent, we know that this linear function has full rank. Vanishing on all forms makes $U^\perp$ the kernel of this function, and by the rank–nullity theorem, this implies that $\dim U^\perp = \dim V - k$.
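The dimension count can be checked numerically: stack the forms $\beta(u_i, \cdot)$ as the rows of a matrix and apply rank–nullity mod $p$. A sketch with a hand-picked non-degenerate form over GF(7) (all values are my own toy example):

```python
from sympy import Matrix

p = 7

def rank_mod(M, p):
    # rank over GF(p) via Gaussian elimination
    rows = [[a % p for a in M.row(i)] for i in range(M.rows)]
    rank = 0
    for col in range(M.cols):
        piv = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], -1, p)
        rows[rank] = [inv * a % p for a in rows[rank]]
        for r in range(rank + 1, len(rows)):
            if rows[r][col]:
                f = rows[r][col]
                rows[r] = [(a - f * b) % p for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rank

n = 3
B = Matrix([[1, 2, 0], [0, 1, 3], [4, 0, 1]])  # det = 25, nonzero mod 7
U = Matrix([[1, 1, 0]])                        # basis of a 1-dim subspace U
A = U * B                                      # rows are the forms beta(u_i, .)
# U^perp is the kernel of A, so rank-nullity gives its dimension:
print(n - rank_mod(A, p))  # 2, i.e. dim V - dim U
```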
The Attack
With all of this out of the way, we can now look at the attack itself. So in the following, we have bilinear forms $\beta_1, \dots, \beta_m$, defined over an $n$-dimensional vector space $V$ with $n = 2m$, i.e. there are as many oil vectors as there are vinegar vectors, and everything is “balanced”. We also have a secret, $m$-dimensional subvectorspace $O$ such that $\beta_i(o, o') = 0$ for all $o, o' \in O$, as discussed above. One thing to note here is that $O \subseteq O^{\perp_i}$ with $\dim O^{\perp_i} = n - m = m = \dim O$, so in this balanced case, and only in this balanced case, we further get $O = O^{\perp_i}$, since a subvectorspace that has the same dimension as the space is equal to the space.
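Concretely, such a balanced instance is easy to build: if the upper-left $m \times m$ block of each matrix is zero, the forms vanish on $O = \mathrm{span}(e_1, \dots, e_m)$. A toy example over GF(7) with $n = 4$, $m = 2$ (matrices are my own; a real key would additionally hide $O$ behind a random change of basis, which does not matter for what follows):

```python
from sympy import Matrix

p, m = 7, 2  # toy parameters: n = 2m = 4, field GF(7)
# balanced OV instance: the zero upper-left m x m blocks mean
# both forms vanish on O = span(e1, e2)
B1 = Matrix([[0, 0, 1, 2], [0, 0, 3, 1], [1, 1, 0, 4], [2, 5, 6, 0]])
B2 = Matrix([[0, 0, 2, 1], [0, 0, 1, 1], [3, 1, 2, 0], [1, 2, 0, 5]])
O = [Matrix([1, 0, 0, 0]), Matrix([0, 1, 0, 0])]
print(all((u.T * B * v)[0] % p == 0 for B in (B1, B2) for u in O for v in O))  # True
```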
The main idea behind the Kipnis–Shamir attack is that bilinear forms are hard, but linear functions are easy, and that for two non-degenerate bilinear forms $\beta_i$ and $\beta_j$ there is exactly one linear function $M_{ij} \colon V \to V$ such that $\beta_i(v, w) = \beta_j(v, M_{ij} w)$ for all $v, w \in V$. We know this, because both $\beta_i$ and $\beta_j$ are non-degenerate, so both can map a basis of $V$ to a basis of the dual vector space $V^*$, so $M_{ij}$ is the map given by this base change. More concretely, if $\beta_i$ is represented by the matrix $B_i$, i.e. $\beta_i(v, w) = v^\top B_i w$, then $M_{ij}$, as a linear function, is represented by $B_j^{-1} B_i$, since $v^\top B_j (B_j^{-1} B_i) w = v^\top B_i w$. If we hadn’t done all this work about bilinear forms, we would now need to add a two line proof that this is invariant under base change. I instead opted for writing several paragraphs worth of linear algebra, which, as surely we will all agree, is far preferable.
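This map can be computed with a modular matrix inverse and checked pointwise. A sketch on a toy balanced instance over GF(7) (the matrices are my own example; SymPy's `inv_mod` does the inversion mod $p$):

```python
from sympy import Matrix

p = 7
# my toy example forms, both non-degenerate mod 7
B1 = Matrix([[0, 0, 1, 2], [0, 0, 3, 1], [1, 1, 0, 4], [2, 5, 6, 0]])
B2 = Matrix([[0, 0, 2, 1], [0, 0, 1, 1], [3, 1, 2, 0], [1, 2, 0, 5]])

# the unique linear map with beta_1(v, w) = beta_2(v, M w): M = B2^-1 B1
M = (B2.inv_mod(p) * B1).applyfunc(lambda a: a % p)

v, w = Matrix([1, 2, 3, 4]), Matrix([5, 0, 6, 1])
lhs = (v.T * B1 * w)[0] % p      # beta_1(v, w)
rhs = (v.T * B2 * M * w)[0] % p  # beta_2(v, M w)
print(lhs == rhs)  # True
```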
So far, so unremarkable. You have a bunch of bilinear forms, you can define this linear function. But now, we can ask what $M_{ij}(O)$ is. We know that for any vector $o \in O$, $\beta_i(v, o) = 0$ for all vectors $v \in O$, so we know that $\beta_j(v, M_{ij} o) = 0$ as well, so we know that $M_{ij} o \in O^{\perp_j}$. But as pointed out above, in the balanced case $O^{\perp_j} = O$, so $M_{ij} o \in O$. This means that $M_{ij}(O) \subseteq O$ (technically, I have not proven equality, but that follows pretty easily by applying non-degeneracy again). This means $M_{ij}$, restricted to the subvectorspace $O$, is an endomorphism.
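On a toy balanced instance (my own matrices over GF(7), with oil space $O = \mathrm{span}(e_1, e_2)$, visible as the zero upper-left blocks), one can watch the endomorphism property directly: the map sends oil vectors back into the oil space.

```python
from sympy import Matrix

p = 7
B1 = Matrix([[0, 0, 1, 2], [0, 0, 3, 1], [1, 1, 0, 4], [2, 5, 6, 0]])
B2 = Matrix([[0, 0, 2, 1], [0, 0, 1, 1], [3, 1, 2, 0], [1, 2, 0, 5]])
M = (B2.inv_mod(p) * B1).applyfunc(lambda a: a % p)

# O = span(e1, e2); M maps O into O, so images of oil vectors
# have no component in the last two coordinates
for o in (Matrix([1, 0, 0, 0]), Matrix([0, 1, 0, 0])):
    img = (M * o).applyfunc(lambda a: a % p)
    print(img[2] % p == 0 and img[3] % p == 0)  # True, True
```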
But for a linear function, restricting to a subvectorspace as an endomorphism is the same as saying that this subvectorspace is a direct sum of eigenspaces. And we can compute the eigenspaces of a linear function. By itself that is only moderately helpful, since we don’t know which eigenspaces of $M_{ij}$ are part of $O$ and which aren’t. But we don’t just have one $M_{ij}$, we have a linear function for every pair of bilinear forms. And we can intersect eigenspaces, and find the common eigenspace of multiple linear functions. Two random linear functions are very unlikely to share an eigenspace, and we have $\binom{m}{2}$ linear functions sharing an eigenspace. So that common eigenspace is just $O$. I describe how to compute that shared eigenspace below, but see the paper for more details.
To put it together, the attack algorithm works as follows:
1. Discard all singular matrices, as they correspond to degenerate bilinear forms.
2. Compute $M_{ij} = B_j^{-1} B_i$ for enough pairs $(i, j)$.
3. Take an arbitrary linear combination of the computed matrices.
4. Compute the characteristic polynomial of this linear combination and factorize it.
5. Due to the common eigenspace, the characteristic polynomial will always have a factor of degree $m$. But it might have more factors. If there are more factors, discard and try a new linear combination. If you do not find such a matrix and factorization, continue at step 7.
6. By Cayley–Hamilton, the kernel of a factor of the characteristic polynomial, evaluated at the matrix, is the corresponding eigenspace of the matrix. Since our matrix has a characteristic polynomial which only has two factors, one of the two eigenspaces is the oil space, which we can quickly verify.
7. Read the paper for why finding such a characteristic polynomial is very likely, so you never reach this step.
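The steps above can be sketched end to end on a toy instance. Everything below is my own minimal example ($n = 4$, $m = 2$ over GF(7), forms vanishing on $O = \mathrm{span}(e_1, e_2)$; a real key would hide $O$ behind a change of basis, which the attack does not care about). With only two forms there is a single $M$ and no linear combinations to try, so steps 2–6 collapse into one pass:

```python
from itertools import product
from sympy import Matrix, Poly, factor_list, symbols

p, n, m = 7, 4, 2  # toy parameters; real parameters are much bigger
x = symbols("x")

# two non-degenerate forms vanishing on O = span(e1, e2)
B1 = Matrix([[0, 0, 1, 2], [0, 0, 3, 1], [1, 1, 0, 4], [2, 5, 6, 0]])
B2 = Matrix([[0, 0, 2, 1], [0, 0, 1, 1], [3, 1, 2, 0], [1, 2, 0, 5]])

def nullspace_mod(A, p):
    # basis of ker(A) over GF(p), via Gaussian elimination
    rows = [[a % p for a in A.row(i)] for i in range(A.rows)]
    pivots, rank = [], 0
    for col in range(A.cols):
        piv = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], -1, p)
        rows[rank] = [inv * a % p for a in rows[rank]]
        for r in range(len(rows)):
            if r != rank and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(a - f * b) % p for a, b in zip(rows[r], rows[rank])]
        pivots.append(col)
        rank += 1
    basis = []
    for col in range(A.cols):
        if col in pivots:
            continue
        v = [0] * A.cols
        v[col] = 1
        for r, pc in enumerate(pivots):
            v[pc] = -rows[r][col] % p
        basis.append(Matrix(v))
    return basis

def poly_at_matrix(g, M, p):
    # evaluate the polynomial g at the matrix M mod p (Horner scheme)
    R = Matrix.zeros(M.rows)
    for c in Poly(g, x).all_coeffs():
        R = (M * R + int(c) * Matrix.eye(M.rows)).applyfunc(lambda a: a % p)
    return R

# step 2: M = B2^-1 B1 (only one pair here)
M = (B2.inv_mod(p) * B1).applyfunc(lambda a: a % p)

# steps 4-5: factor the characteristic polynomial mod p
_, factors = factor_list(M.charpoly(x).as_expr(), modulus=p)

# step 6: for every degree-m divisor g, test whether ker g(M) is the oil space
oil = None
for mults in product(*[range(k + 1) for _, k in factors]):
    g = 1
    for (f, _), e in zip(factors, mults):
        g *= f**e
    if Poly(g, x).degree() != m:
        continue
    cand = nullspace_mod(poly_at_matrix(g, M, p), p)
    if len(cand) == m and all(
        (u.T * B * v)[0] % p == 0 for B in (B1, B2) for u in cand for v in cand
    ):
        oil = cand
        break
print(oil)  # basis of the recovered oil space: e1 and e2
```

For this instance the characteristic polynomial splits into two distinct irreducible quadratics, and only one of the two resulting kernels makes every form vanish, which is the quick verification from step 6.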
A geometric interpretation
I have to admit, when I first read about this attack, I found it extremely devastating. A polynomial time attack on one member of a family of schemes is a very frightening thing to see, and while the attack clearly requires n = 2m to work (otherwise the whole eigenspace argument falls apart), I would at least like to see some form of argument why n > 2m does not just mitigate this specific attack, but fundamentally has a structure that is more complex than the balanced case.
I think I have found such an argument, but as a fair word of warning, what follows will be some take no prisoners, deep, deep end of the algebraic geometric pool. Well the shallow end of the deep end, but you get my drift. I will try to keep the argument understandable for more general audiences, but you’ll have to take a bunch of things I say at face value. The whole argument is part of a deeper investigation I did on UOV, but that should properly be a different blog post, or maybe a CFAIL paper or something like that.
With the disclaimers out of the way, here we go:
If an algebraic geometer is given $m$ polynomials, even if they are just quadratic forms, our basic instincts kick in, and we study the geometric object obtained by setting all these polynomials to zero. I will call this the oil-and-vinegar variety, i.e. $X = \{ v \in V \mid \beta_1(v, v) = \dots = \beta_m(v, v) = 0 \}$. Note that even though these polynomials are homogeneous, we need to keep our animalistic geometric urges in check and not look at the corresponding projective variety for now. This variety is a subvariety of $V$, and since it is given by $m$ polynomials, it, assuming the polynomials are a complete intersection, will be $(n - m)$-dimensional. For the non-geometers: This is similar to how having $m$ linear equations creates an $(n - m)$-dimensional vector space, with “complete intersection” being the equivalent of things being linearly independent, only that stuff can go wrong in a lot more interesting ways, and in general stuff bends a lot more. But importantly, if you take random polynomials, they are very likely to be a complete intersection, so if our oil and vinegar polynomials were not, that in itself would be remarkable. It’s unfortunately not really possible to check whether polynomials are a complete intersection without computing Gröbner bases, see Part I, but when I tried the tiny cases I could compute Gröbner bases for, things were a complete intersection up until $n = 2m$ (for $n < 2m$, it cannot be a complete intersection, as we will see). But I didn’t compute how likely complete intersection is in general.
The other thing to notice is that $O$, seen as a linear subvariety of $V$, is a subvariety of $X$, since the quadratic forms, by definition, evaluate to 0 on $O$. And $O$ is $m$-dimensional. This shows that for $n < 2m$, $X$ cannot be defined by polynomials in complete intersection, since its dimension $m$ is greater than $n - m$.
So in the balanced case, where $n = 2m$, $X$ is an $m$-dimensional algebraic variety, with an $m$-dimensional linear subvariety. In other words, $O$ is an irreducible component of $X$. The Kipnis–Shamir attack isolates this linear component from the other, nonlinear components of $X$.

Artist’s impression of the OV-variety for m = 1, n = 2. You can see the linear subspace as irreducible component. Note that this is a polynomial of degree 3, so it is not actually a valid OV-variety, but it would look boring otherwise.
In the unbalanced case however, $n > 2m$, so $\dim X = n - m > m = \dim O$. Which makes $O$ a closed subvariety of codimension greater than 0, i.e. $O$ lies on some, itself non-linear, irreducible component of $X$. It does at least conceptually make sense to me that there might be an algorithm to tease out a linear irreducible component, but that linear closed subvarieties of non-linear components are much harder to pry away.

Plot of an m = 1, n = 3 OV-variety. There are plenty of linear subspaces to choose from (which would not usually be the case), but they are all located on a single irreducible component which is decisively non-linear. It’s still not the greatest picture, since the plotting software limits me to three dimensions only.