Quaternion from two vectors: the final version

I published a first article about how to build a quaternion from two arbitrary direction vectors that will transform one of the directions into the other. That article deliberately omitted the special case where the two vectors face away from each other, which requires special treatment. So I wrote a second article explaining how to quickly pick an orthogonal vector.

Combining the results from the two articles should be straightforward. Here is the resulting, final function:

/* Build a unit quaternion representing the rotation
 * from u to v. The input vectors need not be normalised. */
quat quat::fromtwovectors(vec3 u, vec3 v)
{
    float norm_u_norm_v = sqrt(dot(u, u) * dot(v, v));
    float real_part = norm_u_norm_v + dot(u, v);
    vec3 w;
    if (real_part < 1.e-6f * norm_u_norm_v)
    {
        /* If u and v are exactly opposite, rotate 180 degrees
         * around an arbitrary orthogonal axis. Axis normalisation
         * can happen later, when we normalise the quaternion. */
        real_part = 0.0f;
        w = abs(u.x) > abs(u.z) ? vec3(-u.y, u.x, 0.f)
                                : vec3(0.f, -u.z, u.y);
    }
    else
    {
        /* Otherwise, build quaternion the standard way. */
        w = cross(u, v);
    }
    return normalize(quat(real_part, w.x, w.y, w.z));
}

If you know you are exclusively dealing with unit vectors, you can replace all occurrences of norm_u_norm_v with the value 1.0f in order to avoid a useless square root.
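For reference, here is what that unit-vector variant might look like. It is simply the substitution described above applied to the function; the name fromtwounitvectors is mine, pick whatever fits your codebase:

/* Build a unit quaternion representing the rotation
 * from u to v. Both input vectors must be normalised. */
quat quat::fromtwounitvectors(vec3 u, vec3 v)
{
    float real_part = 1.f + dot(u, v);
    vec3 w;
    if (real_part < 1.e-6f)
    {
        /* u and v are (almost) exactly opposite: rotate 180 degrees
         * around an arbitrary orthogonal axis, as above. */
        real_part = 0.0f;
        w = abs(u.x) > abs(u.z) ? vec3(-u.y, u.x, 0.f)
                                : vec3(0.f, -u.z, u.y);
    }
    else
    {
        w = cross(u, v);
    }
    return normalize(quat(real_part, w.x, w.y, w.z));
}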

On picking an orthogonal vector (and combing coconuts)

Consider the following problem:

Given a non-zero vector v in 3D space, find a non-zero vector w orthogonal to v (i.e. such that v.w = 0).

At first sight, this seems easy. For every possible v there are an infinite number of orthogonal vectors lying in a plane:

We just need to pick one. But it’s not as trivial as it may seem.

First attempts

The following vector satisfies v.w1 = 0:

$$\mathbf{v} = \begin{bmatrix}x \\ y \\ z \end{bmatrix}
\quad
\mathbf{w_1} = \begin{bmatrix}-y \\ x \\ 0 \end{bmatrix}
\quad
\mathbf{v} . \mathbf{w_1} = x * -y + y * x + z * 0 = 0
$$

Unfortunately that w1 vector can be zero, for instance when v = [0 0 1]. The original problem requires a non-zero vector.

Let’s try another one:

$$\mathbf{v} = \begin{bmatrix}x \\ y \\ z \end{bmatrix}
\quad
\mathbf{w_2} = \begin{bmatrix}0 \\ -z \\ y \end{bmatrix}
\quad
\mathbf{v} . \mathbf{w_2} = x * 0 + y * -z + z * y = 0
$$

Again, it works most of the time, except in some cases, for instance v = [1 0 0].

How about adding them together? Certainly w1 and w2 cannot be both zero at the same time!

$$\mathbf{v} = \begin{bmatrix}x \\ y \\ z \end{bmatrix}
\quad
\mathbf{w_3} = \begin{bmatrix}-y \\ x-z \\ y \end{bmatrix}
\quad
\mathbf{v} . \mathbf{w_3} = x * -y + y * (x-z) + z * y = 0
$$

Alas, this still doesn’t work all the time: it fails with v = [1 0 1].

Should we try harder? Isn’t there a linear combination of these that will work all the time?

Well, no. There isn’t.

You can’t comb a coconut

Sadly, the hairy ball theorem, a consequence of Brouwer’s fixed-point theorem, states that any continuous function f that maps the set of unit vectors to orthogonal vectors takes the value zero somewhere.

Whichever way you look at the sphere of unit vectors, a continuous tangent field will always have a zero point:

Most of the maths operations we usually do (adding, multiplying, taking square roots, etc.) are continuous. Fortunately, we are not limited to them: we can use a simple if() construct to create a discontinuity that will save us.

A very good implementation

Here is my personal implementation of the orthogonal vector problem. It is very good. It has excellent numerical stability. It only does one test. It is short. It is beautiful. I really hope you consider using it.

/* Always works if the input is non-zero.
 * Doesn’t require the input to be normalised.
 * Doesn’t normalise the output. */
vec3 orthogonal(vec3 v)
{
    return abs(v.x) > abs(v.z) ? vec3(-v.y, v.x, 0.0)
                               : vec3(0.0, -v.z, v.y);
}

Many implementations, such as Ogre3D’s, have additional tests or operations to perform this task, and use epsilons and normalised vectors to avoid singularities.

But our only test is actually enough: if |x|>|z|, it means the largest component in absolute value in v is either x or y. So we build a vector using x and y. Otherwise, the largest component in absolute value is either y or z. So we build a vector using y and z. That way, we ensure that the length of our returned vector never, ever comes near zero.
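If you want to convince yourself numerically, a quick standalone check such as the following will do; the tiny vec3 struct is only a stand-in for whatever vector type you normally use:

#include <cmath>
#include <cstdio>

/* Minimal stand-in vector type, just for this test. */
struct vec3 { float x, y, z; };

static float dot(vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

static vec3 orthogonal(vec3 v)
{
    return fabsf(v.x) > fabsf(v.z) ? vec3{ -v.y, v.x, 0.f }
                                   : vec3{ 0.f, -v.z, v.y };
}

int main()
{
    /* Axis-aligned vectors and the inputs that defeated w1, w2 and w3. */
    vec3 tests[] = { { 0, 0, 1 }, { 1, 0, 0 }, { 0, 1, 0 },
                     { 1, 0, 1 }, { 0.3f, -0.7f, 0.2f } };
    for (vec3 v : tests)
    {
        vec3 w = orthogonal(v);
        printf("v.w = %g, |w|^2 = %g\n", dot(v, w), dot(w, w));
    }
    return 0;
}

In every case the dot product is zero and |w|² stays well away from zero.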

Whether some given code will cause inefficient branching is often unclear. In our case, it is very likely that the ternary operator will actually be branchless, with some help from the compiler and the hardware.

That said, how about we try to get rid of the ternary operator, just for fun?

Going branchless for fun

Let’s see. We had these two candidate vectors, w1 and w2, which worked almost always except when specific values of v caused them to be zero. And whatever the constant k we may pick, a vector of the form w1 + k w2 will eventually take the value zero for some value of v, too, because of the hairy ball theorem.

Now here comes the trick. Instead of using a constant k, we use a function f(x,y,z). This is what our w vector looks like:

$$\mathbf{v} = \begin{bmatrix}x \\ y \\ z \end{bmatrix}
\quad
\mathbf{w} = \begin{bmatrix}-y \\ x - z * f(x,y,z) \\ y * f(x,y,z) \end{bmatrix}
\quad
\mathbf{v} . \mathbf{w} = \cdots = 0
$$

From now I shall assume that v is a unit vector.

What can cause w to be zero, then?

One necessary condition is y = 0. When y ≠ 0 we can choose anything we want for f(x,y,z), it’ll always give us a good orthogonal vector. This restricts the problem to the y = 0 circle, giving us the useful equality x² + z² = 1.

The other condition is x = z*f(x,y,z). So if we manage to build a function f such that f(x,y,z) never equals x/z, we win.

Using x² + z² = 1 we can plot all the possible values for x/z as a function of x. It will show us, for a given x, the values that f cannot take:

The almost vertical slopes you see go to infinity upwards and downwards. As expected, this prevents us from using a continuous function: it would obviously cross one of these walls at some point.

Well, let’s try a non-continuous function, then. What are our options?

  • fmod
  • floor, ceil, round
  • fract

Here is one that I came up with and which works reasonably well:

$$f(x,y,z) = \mathrm{fract}(|x| + \frac{1}{2})$$
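A couple of spot checks on the y = 0 circle (where x² + z² = 1, so the forbidden value is x/z = ±x/√(1 − x²)) give an idea of why this works:

$$\begin{array}{lll}
x = 0: & x/z = 0, & f = \mathrm{fract}(0.5) = 0.5 \\
x = \frac{1}{\sqrt{2}}: & x/z = \pm 1, & f = \mathrm{fract}(1.207\dots) \approx 0.207 \\
x \to 1: & x/z \to \pm\infty, & f \to 0.5
\end{array}$$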

Look how it nicely avoids x/z:

And finally, our resulting branchless code:

/* Requires the input to be normalised.
 * Doesn’t normalise the output. */
vec3 orthogonal(vec3 v)
{
    float k = fract(abs(v.x) + 0.5);
    return vec3(-v.y, v.x - k * v.z, k * v.y);
}

I find it highly unlikely that this second version will perform better than the branching one. However, I haven’t put much thought into it and maybe someone will come up with a much better solution using the ideas presented here. Have fun!

Beautiful maths simplification: quaternion from two vectors

In this article I would like to explain the thought process behind the derivation of a widely used formula: finding a quaternion representing the rotation between two 3D vectors. Nothing really new, but hopefully a few ideas could be reused at other times.

Note: the routine presented here is incomplete on purpose. For a version that can be used in production code, see the next article instead, Quaternion from two vectors: the final version.

Naive method

A rotation is best visualised using a rotation axis and an angle. Except in degenerate cases, the rotation axis can be obtained by computing the cross product of the two original vectors:

Then the angle can be obtained using the properties of the cross product and/or the dot product:

\begin{align}
\boldsymbol{u} . \boldsymbol{v} &= |\boldsymbol{u}| . |\boldsymbol{v}| . \cos\theta \\
||\boldsymbol{u} \times \boldsymbol{v}|| &= |\boldsymbol{u}| . |\boldsymbol{v}| . |\sin\theta|
\end{align}

Since θ is always between 0 and π, we only care about the dot product. This gives us some obvious code to create the quaternion (omitting corner cases such as θ = 0 for clarity):

quat quat::fromtwovectors(vec3 u, vec3 v)
{
    float cos_theta = dot(normalize(u), normalize(v));
    float angle = acos(cos_theta);
    vec3 w = normalize(cross(u, v));
    return quat::fromaxisangle(angle, w);
}

This is naive but it works. Googling for “quaternion from two vectors” shows this forum post and this SO question where this method or a variation thereof is suggested.

Looking under the hood

Let’s have a look at what happens when building the quaternion from an axis and an angle:

$$ q = \cos\frac{\theta}{2} + \boldsymbol{i}\sin\frac{\theta}{2}v_x + \boldsymbol{j}\sin\frac{\theta}{2}v_y + \boldsymbol{k}\sin\frac{\theta}{2}v_z $$

This means the code for quat::fromaxisangle would look somewhat like this:

quat quat::fromaxisangle(float angle, vec3 axis)
{
    float half_sin = sin(0.5f * angle);
    float half_cos = cos(0.5f * angle);
    return quat(half_cos,
                half_sin * axis.x,
                half_sin * axis.y,
                half_sin * axis.z);
}

Avoiding trigonometry

If you read Iñigo Quilez’s recent article about avoiding trigonometry you’ll have probably frowned at the fact that we computed θ from cos(θ), then computed sin(θ/2) and cos(θ/2).

Indeed, it happens that there is a much simpler way to do it; the half-angle formulas from precalculus tell us the following:

\begin{align}
\sin\frac{\theta}{2} &= \sqrt{\frac{1 - \cos\theta}{2}} \\
\cos\frac{\theta}{2} &= \sqrt{\frac{1 + \cos\theta}{2}}
\end{align}

This allows us to simplify our quaternion creation code:

quat quat::fromtwovectors(vec3 u, vec3 v)
{
    float cos_theta = dot(normalize(u), normalize(v));
    float half_cos = sqrt(0.5f * (1.f + cos_theta));
    float half_sin = sqrt(0.5f * (1.f - cos_theta));
    vec3 w = normalize(cross(u, v));
    return quat(half_cos,
                half_sin * w.x,
                half_sin * w.y,
                half_sin * w.z);
}

This is pretty nice. By using well known trigonometry formulas, we got rid of all trigonometry function calls!

Avoiding square roots

It happens that we can do slightly better. Note that we normalize three vectors: u, v and cross(u, v). That’s three square roots. The thing is, we already know the norm of w through this formula:

$$||\boldsymbol{u} \times \boldsymbol{v}|| = |\boldsymbol{u}| . |\boldsymbol{v}| . |\sin\theta|$$

And we know sin(θ) from precalculus again:

$$\sin\theta = 2 \sin\frac{\theta}{2} \cos\frac{\theta}{2}$$

Also, using the fact that sqrt(a)sqrt(b) = sqrt(ab) lets us perform one less square root.

We can therefore come up with the following performance improvement:

quat quat::fromtwovectors(vec3 u, vec3 v)
{
    float norm_u_norm_v = sqrt(sqlength(u) * sqlength(v));
    float cos_theta = dot(u, v) / norm_u_norm_v;
    float half_cos = sqrt(0.5f * (1.f + cos_theta));
    float half_sin = sqrt(0.5f * (1.f - cos_theta));
    vec3 w = cross(u, v) / (norm_u_norm_v * 2.f * half_sin * half_cos);
    return quat(half_cos,
                half_sin * w.x,
                half_sin * w.y,
                half_sin * w.z);
}

Oh wait! We divide by sin(θ/2) to compute w, then we multiply by sin(θ/2) again. This means we don’t even need that variable, and we can simplify even further:

quat quat::fromtwovectors(vec3 u, vec3 v)
{
    float norm_u_norm_v = sqrt(sqlength(u) * sqlength(v));
    float cos_theta = dot(u, v) / norm_u_norm_v;
    float half_cos = sqrt(0.5f * (1.f + cos_theta));
    vec3 w = cross(u, v) / (norm_u_norm_v * 2.f * half_cos);
    return quat(half_cos, w.x, w.y, w.z);
}

This is more or less the code used by the Ogre3D engine in OgreVector3.h, except they perform an additional normalisation step on the final result. This is mathematically useless, but due to numerical stability issues, it is probably safe to do so nonetheless.

This final normalisation step is actually an opportunity to simplify the code even further.

Improving on Ogre3D

We are down to two square roots and four divisions, plus quite a few mul/adds. Depending on the platform that we are running on, it is possible to simplify even further and improve performance. For instance, on many SIMD architectures, normalising a quaternion can be very fast.

This is the code we get if we multiply every component of the quaternion by 2.f * half_cos and let normalize() do the rest of the job:

quat quat::fromtwovectors(vec3 u, vec3 v)
{
    float norm_u_norm_v = sqrt(sqlength(u) * sqlength(v));
    float cos_theta = dot(u, v) / norm_u_norm_v;
    float half_cos = sqrt(0.5f * (1.f + cos_theta));
    vec3 w = cross(u, v) / norm_u_norm_v;
    return normalize(quat(2.f * half_cos * half_cos, w.x, w.y, w.z));
}

Now half_cos only appears in its squared form, and since it comes from a square root, we can simply omit that square root:

quat quat::fromtwovectors(vec3 u, vec3 v)
{
    float norm_u_norm_v = sqrt(sqlength(u) * sqlength(v));
    float cos_theta = dot(u, v) / norm_u_norm_v;
    vec3 w = cross(u, v) / norm_u_norm_v;
    return normalize(quat(1.f + cos_theta, w.x, w.y, w.z));
}

And using the same reasoning we can multiply every quaternion component by norm_u_norm_v:

quat quat::fromtwovectors(vec3 u, vec3 v)
{
    float norm_u_norm_v = sqrt(sqlength(u) * sqlength(v));
    vec3 w = cross(u, v);
    quat q = quat(norm_u_norm_v + dot(u, v), w.x, w.y, w.z);
    return normalize(q);
}

We are still doing two square roots and four divisions, some of which are hidden in normalize(), but the code is considerably shorter now.

Final form

If u and v can be enforced to be unit vectors, norm_u_norm_v can be omitted and simply replaced with 1.0f:

quat quat::fromtwovectors(vec3 u, vec3 v)
{
    vec3 w = cross(u, v);
    quat q = quat(1.f + dot(u, v), w.x, w.y, w.z);
    return normalize(q);
}
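As a quick sanity check, take u = [1 0 0] and v = [0 1 0]:

$$\mathbf{w} = \mathbf{u} \times \mathbf{v} = \begin{bmatrix}0 \\ 0 \\ 1\end{bmatrix}
\quad
q \propto (1 + \mathbf{u} . \mathbf{v},\, w_x, w_y, w_z) = (1, 0, 0, 1)
\quad
\mathrm{normalize}(q) = \left(\tfrac{\sqrt{2}}{2}, 0, 0, \tfrac{\sqrt{2}}{2}\right)
$$

That is cos(45°) + k sin(45°), i.e. a 90-degree rotation around the z axis, which is exactly what maps u onto v.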

Isn’t it beautiful, considering the sin(), cos() and acos() ridden mess we started with?

This algorithm can be found all over the Internet, but I do not know who first came up with it. Also, a lot of 3D engines (both publicly available and slightly more private) could benefit from it.

Update (06/01/2014)

In the comments below, Michael Norel provides the following improvement to the non-unit version. Since the values d = dot(u, v) and w = cross(u, v) are computed no matter what, the value sqlength(u) * sqlength(v) can be computed in a different way, i.e. as d * d + sqlength(w). The following code performs at least three fewer multiplications:

quat quat::fromtwovectors(vec3 u, vec3 v)
{
    vec3 w = cross(u, v);
    quat q = quat(dot(u, v), w.x, w.y, w.z);
    q.w += length(q);
    return normalize(q);
}

Also, Marc B. Reynolds notes that, in the unit version, the final normalisation factor is sqrt((1 + dot(u, v))² + sqlength(cross(u, v))) which reduces to sqrt(2 + 2 dot(u, v)) thanks to the sin² + cos² identity. It leads to the following possibly improved version:

quat quat::fromtwovectors(vec3 u, vec3 v)
{
    float m = sqrt(2.f + 2.f * dot(u, v));
    vec3 w = (1.f / m) * cross(u, v);
    return quat(0.5f * m, w.x, w.y, w.z);
}

Fast branchless RGB to HSV conversion in GLSL

Some time ago I devised an original algorithm to convert from RGB to HSV using very few CPU instructions and I wrote a small article about it.

When looking for a GLSL or HLSL conversion routine, I have found implementations of my own algorithm. However they were almost all straightforward, failing to take full advantage of the GPU’s advanced swizzling features.

So here it is, the best version I could come up with:

vec3 rgb2hsv(vec3 c)
{
    vec4 K = vec4(0.0, -1.0 / 3.0, 2.0 / 3.0, -1.0);
    vec4 p = mix(vec4(c.bg, K.wz), vec4(c.gb, K.xy), step(c.b, c.g));
    vec4 q = mix(vec4(p.xyw, c.r), vec4(c.r, p.yzx), step(p.x, c.r));
    float d = q.x - min(q.w, q.y);
    float e = 1.0e-10;
    return vec3(abs(q.z + (q.w - q.y) / (6.0 * d + e)), d / (q.x + e), q.x);
}

Update: Emil Persson suggests using the ternary operator explicitly to force compilers into using a fast conditional move instruction:

    vec4 p = c.g < c.b ? vec4(c.bg, K.wz) : vec4(c.gb, K.xy);
    vec4 q = c.r < p.x ? vec4(p.xyw, c.r) : vec4(c.r, p.yzx);

And because a lot of people get it wrong, too, here is the reverse operation in GLSL. It is the algorithm almost everyone uses (or should use):

vec3 hsv2rgb(vec3 c)
{
    vec4 K = vec4(1.0, 2.0 / 3.0, 1.0 / 3.0, 3.0);
    vec3 p = abs(fract(c.xxx + K.xyz) * 6.0 - K.www);
    return c.z * mix(K.xxx, clamp(p - K.xxx, 0.0, 1.0), c.y);
}

Porting to HLSL is straightforward: replace vec3 and vec4 with float3 and float4, mix with lerp, fract with frac, and clamp(…, 0.0, 1.0) with saturate(…).

A fast RGB to HSV floating point conversion

The operations typically performed to convert from RGB to HSV are the following:

  • find the largest RGB component
  • find the smallest RGB component
  • compute V and S
  • select the main circular sector for H
  • compute H

Here is, to my knowledge, the most commonly used RGB to HSV routine for floating point, with an extra minor optimisation (adding 1e-20f to divisors to avoid the need to care about divisions by zero):

static void RGB2HSV(float r, float g, float b,
                    float &h, float &s, float &v)
{
    float rgb_max = std::max(r, std::max(g, b));
    float rgb_min = std::min(r, std::min(g, b));
    float delta = rgb_max - rgb_min;
    s = delta / (rgb_max + 1e-20f);
    v = rgb_max;
    float hue;
    if (r == rgb_max)
        hue = (g - b) / (delta + 1e-20f);
    else if (g == rgb_max)
        hue = 2 + (b - r) / (delta + 1e-20f);
    else
        hue = 4 + (r - g) / (delta + 1e-20f);
    if (hue < 0)
        hue += 6.f;
    h = hue * (1.f / 6.f);
}

Several things seem worth noticing already:

  • Most of the complexity comes from the hue calculation.
  • Four min/max operations are performed to find rgb_max and rgb_min; however, sorting three values can be done with only 3 comparisons. This is not necessarily problematic because min/max could be wired in an efficient way depending on the CPU.
  • Two additional tests are performed to compare r and g to rgb_max; if rgb_max and rgb_min were computed using tests, this is a waste of time to compare them again.
  • Adding 6.f to the final hue value only has a 16.6% chance of happening.

The actual hue calculation depends on how r, g, and b are ordered:

$\operatorname{Hue_{0\dots 6}}(r,g,b)=\begin{cases}
    (g - b) / (r - b), & \text{if $r \ge g \ge b$}.\\
    6 + (g - b) / (r - g), & \text{if $r \ge b \ge g$}.\\
    2 + (b - r) / (g - r), & \text{if $g \ge b \ge r$}.\\
    2 + (b - r) / (g - b), & \text{if $g \ge r \ge b$}.\\
    4 + (r - g) / (b - g), & \text{if $b \ge r \ge g$}.\\
    4 + (r - g) / (b - r), & \text{if $b \ge g \ge r$}.\\
  \end{cases}$

But let’s rewrite this in terms of x, y and z, where x is the largest of (r,g,b), z is the smallest of the three, and y is in between:

$\operatorname{Hue_{0\dots 6}}(R,G,B)=\begin{cases}
    (y - z) / (x - z), & \text{if $r \ge g \ge b$}.\\
    6 + (z - y) / (x - z), & \text{if $r \ge b \ge g$}.\\
    2 + (y - z) / (x - z), & \text{if $g \ge b \ge r$}.\\
    2 + (z - y) / (x - z), & \text{if $g \ge r \ge b$}.\\
    4 + (y - z) / (x - z), & \text{if $b \ge r \ge g$}.\\
    4 + (z - y) / (x - z), & \text{if $b \ge g \ge r$}.\\
  \end{cases}$

There are a lot of similarities here. We can push it even further, using the fact that x ≥ z and y ≥ z by definition:

$\operatorname{Hue_{0\dots 6}}(R,G,B)=\left|K + \dfrac{y - z}{x - z}\right|,
 \text{with $K =\begin{cases}
    0, & \text{if $r \ge g \ge b$}.\\
    -6, & \text{if $r \ge b \ge g$}.\\
    2, & \text{if $g \ge b \ge r$}.\\
    -2, & \text{if $g \ge r \ge b$}.\\
    4, & \text{if $b \ge r \ge g$}.\\
    -4, & \text{if $b \ge g \ge r$}.\\
  \end{cases}$}$

That’s actually the same calculation! Only the hue offset K changes. The idea now is the following:

  • Sort the triplet (r,g,b) using comparisons
  • Build K while sorting the triplet
  • Perform the final calculation

Putting the idea into practice gives us the following code:

static void RGB2HSV(float r, float g, float b,
                    float &h, float &s, float &v)
{
    float K = 0.f;
    if (g < b)
    {
        float tmp = g; g = b; b = tmp;
        K = -1.f;
    }
    if (r < g)
    {
        float tmp = r; r = g; g = tmp;
        K = -2.f / 6.f - K;
    }
    if (g < b)
    {
        float tmp = g; g = b; b = tmp;
        K = -K;
    }
    float chroma = r - b;
    h = fabs(K + (g - b) / (6.f * chroma + 1e-20f));
    s = chroma / (r + 1e-20f);
    v = r;
}

You can check for yourself that the values for K listed above are properly generated by that function. There were many other ways to sort (r,g,b), but this specific one lets us do one final optimisation.
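For instance, take (r, g, b) = (0.2, 0.5, 0.4). This is the g ≥ b ≥ r case, so the hue offset should be K = 2, i.e. 2/6 once the hue is scaled down to [0, 1]. Tracing the first version of the function (ignoring the 1e-20f guards):

  • g < b? 0.5 < 0.4 is false; K stays 0.
  • r < g? 0.2 < 0.5 is true: swap, (r, g) becomes (0.5, 0.2), K = -2/6.
  • g < b? 0.2 < 0.4 is true: swap, (g, b) becomes (0.4, 0.2), K = +2/6.
  • chroma = 0.5 - 0.2 = 0.3, so h = |2/6 + (0.4 - 0.2) / (6 × 0.3)| = |1/3 + 1/9| = 4/9.

The direct formula gives the same result: 2 + (b - r)/(g - r) = 2 + 0.2/0.3 = 8/3 in the 0…6 range, which is 4/9 of a turn.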

We notice that the last swap effectively changes the sign of K and the sign of g - b. Since both are then added and passed to fabs(), the sign reversal can actually be omitted.

That additional trickery gives us this final code:

static void RGB2HSV(float r, float g, float b,
                    float &h, float &s, float &v)
{
    float K = 0.f;
    if (g < b)
    {
        std::swap(g, b);
        K = -1.f;
    }
    if (r < g)
    {
        std::swap(r, g);
        K = -2.f / 6.f - K;
    }
    float chroma = r - std::min(g, b);
    h = fabs(K + (g - b) / (6.f * chroma + 1e-20f));
    s = chroma / (r + 1e-20f);
    v = r;
}

That’s 2 tests and 1 std::min call instead of the previous 3 tests and 4 std::min/max calls. We really should see some kind of performance gain here.

And as expected, benchmarks indicate a performance increase of 25 to 40 % with a great variety of CPUs, compilers and compiler flags. The following graph (average nanoseconds per conversion) is on a Core i7-2600K CPU, using g++ 4.7.2 with -O3 -ffast-math:

Beyond De Bruijn: fast binary logarithm of a 10-bit number

Recently I needed a method for retrieving the binary logarithm of a 10-bit number (for the curious, it was for the purpose of converting between 32-bit and 16-bit floating point numbers).

Computing the binary logarithm is equivalent to knowing the position of the highest order set bit. For instance, log2(0x1) is 0 and log2(0x100) is 8.

One well known method for fast binary logarithm is presented at Bit Twiddling Hacks. It is a two-step method where first all lower bits are set to 1 and then a De Bruijn-like sequence is used to perform a table lookup:

int fastlog2(uint32_t v)
{
    static const int MultiplyDeBruijnBitPosition[32] =
    {
        0, 9, 1, 10, 13, 21, 2, 29, 11, 14, 16, 18, 22, 25, 3, 30,
        8, 12, 20, 28, 15, 17, 24, 7, 19, 27, 23, 6, 26, 5, 4, 31
    };
    v |= v >> 1;
    v |= v >> 2;
    v |= v >> 4;
    v |= v >> 8;
    v |= v >> 16;
    return MultiplyDeBruijnBitPosition[(uint32_t)(v * 0x07C4ACDDU) >> 27];
}

That is 12 integer operations and a table lookup.

Optimising

It should be obvious what the sequence of operations on v does: fill the integer with ones starting from the highest order bit. Here are a few examples of what happens to v at each step:

v            v |= v >> 1   v |= v >> 2   v |= v >> 4   v |= v >> 8   v |= v >> 16
0x0001       0x0001        0x0001        0x0001        0x0001        0x0001
0x0002       0x0003        0x0003        0x0003        0x0003        0x0003
0x0003       0x0003        0x0003        0x0003        0x0003        0x0003
0x0004       0x0006        0x0007        0x0007        0x0007        0x0007
0x0100       0x0180        0x01e0        0x01fe        0x01ff        0x01ff
0x80000000   0xc0000000    0xf0000000    0xff000000    0xffff0000    0xffffffff

There is one obvious optimisation available: since the input is only 10-bit, the last shift operation v |= v >> 16 can be omitted because the final value was already reached.

int fastlog2(uint32_t v)
{
    static const int MultiplyDeBruijnBitPosition[32] =
    {
        0, 9, 1, 10, 13, 21, 2, 29, 11, 14, 16, 18, 22, 25, 3, 30,
        8, 12, 20, 28, 15, 17, 24, 7, 19, 27, 23, 6, 26, 5, 4, 31
    };
    v |= v >> 1;
    v |= v >> 2;
    v |= v >> 4;
    v |= v >> 8;
    return MultiplyDeBruijnBitPosition[(uint32_t)(v * 0x07C4ACDDU) >> 27];
}

10 instructions instead of 12. Not really amazing, but worth mentioning.

Optimising more?

Could we do better? Now the last line is v |= v >> 8; and it is only useful to propagate the 9th and 10th bits to positions 1 and 2. What happens if we omit that line? Let’s see:

  • For most values of v, the expected value is obtained.
  • For values of v with a highest order bit at 9th position, 0x1fe could be obtained instead of 0x1ff.
  • For values of v with a highest order bit at 10th position, one of 0x3fc, 0x3fd or 0x3fe could be obtained instead of 0x3ff.

The list of possible output values would therefore be 0x1, 0x3, 0x7, 0xf, 0x1f, 0x3f, 0x7f, 0xff, 0x1fe, 0x1ff, 0x3fc, 0x3fd, 0x3fe, 0x3ff. What happens to these values when multiplying them with the De Bruijn sequence? Let's see:

v      (v * 0x07C4ACDDU) >> 27
0x1    0
0x3    2
0x7    6
0xf    14
0x1f   30
0x3f   29
0x7f   27
0xff   23
0x1fe  15
0x1ff  16
0x3fc  30
0x3fd  31
0x3fe  0
0x3ff  1

Damn! Two values are colliding. It looks like we cannot omit the last line after all.

Beyond De Bruijn

Let’s give another try at the problem. Usually De Bruijn sequences are built using either nontrivial algorithms, or brute force. Maybe we could find another sequence that has no collision? Or a sequence that is not a De Bruijn sequence but that works for our problem?

Well, let’s just brute force!

(2 seconds later)

int fastlog2(uint32_t v)
{
    static const int MagicTable[16] =
    {
        0, 1, 2, 8, -1, 3, 5, 9, 9, 7, 4, -1, 6, -1, -1, -1
    };
    v |= v >> 1;
    v |= v >> 2;
    v |= v >> 4;
    return MagicTable[(uint32_t)(v * 0x5a1a1a2u) >> 28];
}

Down to 8 instructions instead of 12. And the lookup table is now half the size!
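For the curious, such a search can be as naive as the following sketch: try 32-bit multipliers until the 14 possible input values no longer collide on table entries that require different answers.

#include <cstdint>
#include <cstdio>

int main()
{
    /* The 14 values that can reach the lookup after the three shifts,
     * and the binary logarithm each of them must map to. */
    static uint32_t const inputs[14] =
    {
        0x1, 0x3, 0x7, 0xf, 0x1f, 0x3f, 0x7f, 0xff,
        0x1fe, 0x1ff, 0x3fc, 0x3fd, 0x3fe, 0x3ff,
    };
    static int const expected[14] =
    {
        0, 1, 2, 3, 4, 5, 6, 7, 8, 8, 9, 9, 9, 9,
    };

    for (uint64_t magic = 1; magic <= 0xffffffffu; ++magic)
    {
        int table[16] = { -1, -1, -1, -1, -1, -1, -1, -1,
                          -1, -1, -1, -1, -1, -1, -1, -1 };
        bool ok = true;

        for (int i = 0; i < 14 && ok; ++i)
        {
            /* Same multiply-and-shift as in fastlog2(), modulo 2^32 */
            uint32_t index = (uint32_t)(inputs[i] * magic) >> 28;
            if (table[index] == -1)
                table[index] = expected[i];
            else if (table[index] != expected[i])
                ok = false; /* two inputs needing different logs collide */
        }

        if (ok)
        {
            printf("magic: 0x%08x\n", (uint32_t)magic);
            for (int i = 0; i < 16; ++i)
                printf("%d%s", table[i], i < 15 ? ", " : "\n");
            return 0;
        }
    }
    return 1;
}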

Conclusion

It is possible for multiply-and-shift techniques similar to the De Bruijn sequence algorithm to exist for a larger set of problems. Brute forcing the search is a totally valid method for 32-bit multiplication.

The code used for this article is included in the attached file.

Maths trick: doing fewer comparisons

Note: this is not an optimisation. It is just one more tool you should have in your toolbox when looking for optimisations. It may be useful.

This is the trick:

\[\min(x,y) = \dfrac{x + y - |x - y|}{2}\]
\[\max(x,y) = \dfrac{x + y + |x - y|}{2}\]

You can check for yourself that it is always true: when x > y, |x - y| is the same as x - y, etc.

What good is it for? There is often an implicit comparison in min or max. It might be interesting to replace it with a call to the branchless fabs.
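In code, that gives something along these lines (a small sketch; note that it trades the comparison for an addition/subtraction pair, so overflow and NaN behaviour differ from a plain comparison):

#include <cmath>

static inline float min_branchless(float x, float y)
{
    return 0.5f * (x + y - fabsf(x - y));
}

static inline float max_branchless(float x, float y)
{
    return 0.5f * (x + y + fabsf(x - y));
}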

Example usage

Consider the following code:

float a, b, c, d;
/* ... */
return (a > b) && (c > d);

That kind of code is often used, for instance in collision checks, where a lot of such tests are performed. This code does two comparisons. On some architectures, this means two branches. Not always something you want.

The test condition is equivalent to:

(a - b > 0) && (c - d > 0)

Now when are two given numbers both positive? That is if and only if the smallest is positive:

min(a - b, c - d) > 0

We may now use our trick:

(a - b) + (c - d) - |(a - b) - (c - d)| > 0

And so the code could be rewritten as such:

float a, b, c, d;
/* ... */
return (a - b) + (c - d) > fabsf((a - b) - (c - d));

We basically replaced the additional test with a call to fabsf and some additions/subtractions. It may be possible to reorganise the input data so that this second version performs better.

Announce: LolRemez 0.2 released

A new version of our high precision polynomial approximation solver, LolRemez 0.2, is available.

The changes, taking into account all the feedback users provided, are as follows:

  • significant performance and accuracy improvements thanks to various bugfixes and a better extrema finder for the error function.
  • user can now define accuracy of the final result.
  • exp, sin, cos and tan are now about 20% faster.
  • multiplying a real number by an integer power of two is now a virtually free operation.
  • fixed a rounding bug in the real number printing routine.

You can visit the software homepage to download LolRemez and, more importantly, the comprehensive documentation featuring a step-by-step tutorial.

Announce: LolRemez 0.1 released

In my previous article about the Remez exchange algorithm I said I was working on a Remez exchange toolkit for everyone to play with. Though it’s far from being full-featured, I already use it and I believe it is already extremely useful. So I decided to release LolRemez 0.1 to the masses.

You can visit the software homepage to download LolRemez and, more importantly, the comprehensive documentation featuring a step-by-step tutorial.

Better function approximations: Taylor vs. Remez

You may have once crossed this particular piece of magic:

$\sin(a) = a - \dfrac{a^3}{3!} + \dfrac{a^5}{5!} - \dfrac{a^7}{7!} + \dfrac{a^9}{9!} + \dots$

The right-hand side is the Taylor series of sin around 0. It converges very quickly to the actual value of sin(a). This allows a computer to compute the sine of a number with arbitrary precision.

And when I say it’s magic, it’s because it is! Some functions, called the entire functions, can be computed everywhere using one single formula! Other functions may require a different formula for different intervals; they are the analytic functions, a superset of the entire functions. In general, Taylor series are an extremely powerful tool to compute the value of a given function with very high accuracy, because for several common functions such as sin, tan or exp the terms of the series are easy to compute and, when implemented on a computer, can even be stored in a table at compile time.

Approximating sin with Taylor series

This is how one would approximate sin using 7 terms of its Taylor series on the [-π/2, π/2] interval. The more terms, the better the precision, but we’ll stop at 7 for now:

static double taylorsin(double x)
{
    static const
    double a0 =  1.0,
           a1 = -1.666666666666666666666666666666e-1,  /* -1/3! */
           a2 =  8.333333333333333333333333333333e-3,  /*  1/5! */
           a3 = -1.984126984126984126984126984126e-4,  /* -1/7! */
           a4 =  2.755731922398589065255731922398e-6,  /*  1/9! */
           a5 = -2.505210838544171877505210838544e-8,  /* -1/11! */
           a6 =  1.605904383682161459939237717015e-10; /*  1/13! */
    double x2 = x * x;
    return x * (a0 + x2 * (a1 + x2 * (a2 + x2
             * (a3 + x2 * (a4 + x2 * (a5 + x2 * a6))))));
}

And you may think…

“Oh wow that is awesome! So simple for such a difficult function. Also, since I read your masterpiece about polynomial evaluation I know how to improve that function so that it is very fast!”

Well, actually, no.


If you are approximating a function over an interval using its Taylor series then either you or the person who taught you is a fucking idiot because a Taylor series approximates a function near a fucking point, not over a fucking interval, and if you don’t understand why it’s important then please read on because that shit is gonna blow your mind.

Error measurement

Let’s have a look at how much error our approximation introduces. The formula for the absolute error is simple:

$\text{E}(x) = \left|\sin(x) - \text{taylorsin}(x)\right|$
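Measuring it is straightforward; a sketch like the following, compiled together with the taylorsin() function above, is enough to sample the error over the interval:

#include <cmath>
#include <cstdio>

int main()
{
    double max_error = 0.0;
    for (int i = 0; i <= 1000000; ++i)
    {
        /* Sample x uniformly over [-π/2, π/2] */
        double x = ((double)i / 1000000 - 0.5) * 3.141592653589793;
        double error = fabs(sin(x) - taylorsin(x));
        if (error > max_error)
            max_error = error;
    }
    printf("max error: %g\n", max_error);
    return 0;
}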

And this is what it looks like over our interval:

You can see that the error skyrockets near the edges of the [-π/2, π/2] interval.

“Well the usual way to fix this is to split the interval in two or more parts, and use a different Taylor series for each interval.”

Oh, really? Well, let’s see the error on [-π/4, π/4] instead:

I see no difference! The error is indeed smaller, but again, it becomes extremely large at the edges of the interval. And before you start suggesting reducing the interval even more, here is the error on [-π/8, π/8] now:

I hope this makes it clear that:

  • the further from the centre of the interval, the larger the error
  • the error distribution is very unbalanced
  • the maximum error on [-π/2, π/2] is about 6.63e-10

And now I am going to show you why that maximum error value is pathetic.

A better approximation

Consider this new function:

static double minimaxsin(double x)
{
    static const
    double a0 =  1.0,
           a1 = -1.666666666640169148537065260055e-1,
           a2 =  8.333333316490113523036717102793e-3,
           a3 = -1.984126600659171392655484413285e-4,
           a4 =  2.755690114917374804474016589137e-6,
           a5 = -2.502845227292692953118686710787e-8,
           a6 =  1.538730635926417598443354215485e-10;
    double x2 = x * x;
    return x * (a0 + x2 * (a1 + x2 * (a2 + x2
             * (a3 + x2 * (a4 + x2 * (a5 + x2 * a6))))));
}

It doesn’t look very different, right? Right. The values a0 to a6 are slightly different, but the rest of the code is strictly the same.

Yet what a difference it makes! Look at this error curve:

That new function makes it obvious that:

  • the error distribution is better spread over the interval
  • the maximum error on [-π/2, π/2] is about 4.96e-14

Check that last figure again. The new maximum error isn’t 10% better, or maybe twice as good. It is more than ten thousand times smaller!!

The minimax polynomial

The above coefficients describe a minimax polynomial: that is, the polynomial that minimises a given error when approximating a given function. I will not go into the mathematical details, but just remember this: if the function is sufficiently well-suited (as sin, tan, exp etc. are), then the minimax polynomial can be found.

The problem? It’s hard to find. The most popular algorithm to find it is the Remez exchange algorithm, and few people really seem to understand how it works (or there would be a lot less Taylor series). I am not going to explain it right now. Usually you need professional math tools such as Maple or Mathematica if you want to compute a minimax polynomial. The Boost library is a notable exception, though.

But you saw the results, so stop using Taylor series. Spending some time finding the minimax polynomial is definitely worth it. This is why I am working on a Remez framework that I will make public and free for everyone to use, modify and do what the fuck they want. In the meantime, if you have functions to numerically approximate, or Taylor-based implementations that you would like to improve, let me know in the comments! This will be great use cases for me.