Matrix multiplication using the FMA instruction

In my previous post, we did matrix multiplication using regular SSE/AVX instructions. In this post, we’ll implement matrix multiplication using the FMA (fused multiply-add) instruction, which takes three arguments and is able to multiply and add at the same time. (Think c = a*b + c.)

If you got here directly without reading the previous posts, note that this is just a somewhat naive implementation using transposed matrices (but without doing the transposition ourselves). (The intent of this article series is to show how to use SIMD instructions.) My previous post has a few links for people who need something really optimized.

Note that we’re using “FMA3” instructions, rather than FMA4 instructions, which only seem to be supported on some AMD processors. (The number indicates the number of arguments passed to the instruction. In the case of FMA4, the formula would be a = b*c + d.)

The first table below shows the operation of the first FMA instruction, where a is still 0 (as there is nothing to add on the very first instruction), and the second table the following FMA instruction, where the “addend” is the result in the first table.

Addend (a) 0 0 0 0
Factor 1 (b) 0.1 0.1 0.1 0.1
Factor 2 (c) 0.1 0.1 0.1 0.1
Result (a) 0.01 0.01 0.01 0.01

Addend (a) 0.01 0.01 0.01 0.01
Factor 1 (b) 0.1 0.1 0.1 0.1
Factor 2 (c) 0.1 0.1 0.1 0.1
Result (a) 0.02 0.02 0.02 0.02
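In intrinsics terms, each of those table rows corresponds to one _mm256_fmadd_ps call, where _mm256_fmadd_ps(b, c, a) computes b*c + a per lane. Here’s a minimal standalone sketch (my own, not taken from the benchmark code below) that reproduces the two steps above:

#include <x86intrin.h>
#include <stdio.h>

int main(void) {
    __m256 a = _mm256_setzero_ps();  // addend, starts at 0
    __m256 b = _mm256_set1_ps(0.1f); // factor 1
    __m256 c = _mm256_set1_ps(0.1f); // factor 2

    a = _mm256_fmadd_ps(b, c, a);    // first step:  every lane becomes 0.01
    a = _mm256_fmadd_ps(b, c, a);    // second step: every lane becomes 0.02

    float out[8];
    _mm256_storeu_ps(out, a);
    printf("%f\n", out[0]);          // prints 0.020000
    return 0;
}

(Compile with -mfma, otherwise gcc refuses to emit the intrinsic for the default target.)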

Here’s the code. I changed the square matrix size to 2048 to make measuring a bit easier.

#include <x86intrin.h>
#include <stdio.h>
#include <stdlib.h>

#define N 2048

float *matrix_a;
float *matrix_b;
float result[N][N];

void chunked_mm(int chunk, int n_chunks) {
    __m256 va, vb, vc;
    for (int i = chunk*(N/n_chunks); i < (chunk+1)*(N/n_chunks); i++) {
        for (int j = 0; j < N; j++) {
            float buffer[8] = { 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f };
            vc = _mm256_loadu_ps(buffer);
            for (int k = 0; k < N; k += 8) {
                // load
                va = _mm256_loadu_ps(matrix_a+(i*N)+k); // matrix_a[i][k]
                vb = _mm256_loadu_ps(matrix_b+(j*N)+k); // matrix_b[j][k]

                // fused multiply and add
                vc = _mm256_fmadd_ps(va, vb, vc);
            }
            //vc = _mm256_hadd_ps(vc, vc);
            _mm256_storeu_ps(buffer, vc);
            result[i][j] = buffer[0] + buffer[1] + buffer[2] + buffer[3] + buffer[4] + buffer[5] + buffer[6] + buffer[7];
            //result[i][j] = buffer[0] + buffer[2] + buffer[4] + buffer[6];
        }
    }
}

int main(int argc, char **argv) {
    // initialize matrix_a and matrix_b
    matrix_a = malloc(N*N*sizeof(float));
    matrix_b = malloc(N*N*sizeof(float));

    for (int i = 0; i < N*N; i++) {
        *(matrix_a+i) = 0.1f;
        *(matrix_b+i) = 0.2f;
    }
    // initialize result matrix
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++) {
            result[i][j] = 0.0f;
        }
    }

    #pragma omp parallel for
    for (int i = 0; i < 4; i++) {
        chunked_mm(i, 4);
    }
    
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++) {
            //printf("%f ", result[i][j]);
            printf("%x ", *(unsigned int*)&result[i][j]);
        }
        printf("\n");
    }
    
    return 0;
}
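For reference, a compile command along these lines should work for the code above (the file name is just an example; -mfma is needed for the FMA intrinsics and -fopenmp for the OpenMP pragma):

$ gcc -O4 -fopenmp -mfma -o mm_fma mm_fma.c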

Performance

Since the CPU used in the previous articles doesn’t support FMA (and we changed N), I’m re-benchmarking the AVX256 version on the new processor.

AVX256: 1.25 seconds
FMA: 1 second

Unfortunately this is borrowed hardware so I can’t play around with this too much, but the above result is pretty consistent.

Matrix multiplication using SIMD instructions

In my previous post, I tried various things to improve the performance of a matrix multiplication using compiler features.

# 20 seconds
gcc -Wall -o mm mm.c

# 1.182 seconds
gcc -g -O4 -fopenmp -fopt-info-optall-optimized -ftree-vectorize -mavx -o mm_autovectorized_openmp mm_autovectorized_openmp.c

However, -O4 -fopenmp using transposed matrices turned out faster (0.882 seconds) than -O4 -fopenmp and auto-vectorization using untransposed matrices. I couldn’t get auto-vectorization to work with the transposed matrices.

In this post, we’ll use simple SIMD instructions to optimize this further. It builds on my post from two days ago, where I explain how to use SIMD instructions for a very simple and synthetic example.

Note that much more can be done to optimize matrix multiplication than is described in this post. This post just explains the very basics. If you need more advanced algorithms, maybe look through these three links:

https://gist.github.com/nadavrot/5b35d44e8ba3dd718e595e40184d03f0 High Performance Matrix Multiplication

https://news.ycombinator.com/item?id=17164737 Hacker News discussion of above post

https://www.cs.utexas.edu/users/pingali/CS378/2008sp/papers/gotoPaper.pdf Anatomy of High-Performance Matrix Multiplication (academic paper)

Here’s an interesting article that implements high-performance matrix multiplication in just 100 lines, using FMA3: https://cs.stanford.edu/people/shadjis/blas.html

Using transposed matrices makes vectorizing matrix multiplication quite easy. Why? Well, remember that in our simple example, there were three steps:

  1. Loading data into SIMD registers
  2. Performing operations on corresponding operands in two SIMD registers
  3. Storing the result

The first step requires that the data to be loaded is laid out sequentially in memory.

Step 1: Loading data

Remember that the data load wanted a memory address where the four (or eight) float values are stored sequentially. Well, if we just transpose matrix B before we start multiplying, we can load its floats sequentially. So the code looks almost the same as in the baby steps post. To make things a bit easier, we will be using SSE for now.

va = _mm_loadu_ps(&(matrix_a[i][k]));
vb = _mm_loadu_ps(&(matrix_b[j][k]));

Step 2: Doing the calculations

All right. We have our floats loaded into two registers. In SSE, we have four floats per register:

Register 1 (va) 0.1 0.1 0.1 0.1
Register 2 (vb) 0.2 0.2 0.2 0.2

The first step is to multiply. In the baby steps post, we used _mm_add_ps to perform addition. Well, multiplication uses an intrinsic with a similar name: _mm_mul_ps. (The AVX version is _mm256_mul_ps.) So if we do:

 vresult = _mm_mul_ps(va, vb)

And we get:

vresult 0.02 0.02 0.02 0.02

Great! Now we just need to add the contents of vresult together! Unfortunately, there is no SIMD instruction that would add every component together to give us 0.08 as the output, given the above vresult as its only input.

From SSE3, there exists _mm_hadd_ps however, the “horizontal add” instruction (https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_hadd_ps&expand=2777), which takes two registers as input (you can use the same registers), and computes:

dst[31:0] := a[63:32] + a[31:0]
dst[63:32] := a[127:96] + a[95:64]
dst[95:64] := b[63:32] + b[31:0]
dst[127:96] := b[127:96] + b[95:64]

Here’s an example:

va 0.1 0.2 0.3 0.4
vb 0.5 0.6 0.7 0.8
vresult 0.3 0.7 1.1 1.5

Maybe you can already see that this is a bit odd – why does it want two registers as input, for starters? We wanted 0.1+0.2+0.3+0.4, which should be 1. Well, let’s see what happens when we use the same register for both inputs, and perform this operation twice!

va 0.1 0.2 0.3 0.4
va 0.1 0.2 0.3 0.4
vresult 0.3 0.7 0.3 0.7

vresult 0.3 0.7 0.3 0.7
vresult 0.3 0.7 0.3 0.7
vresult (new) 1 1 1 1

Yay, we did it! We got 1, which is the result of 0.1+0.2+0.3+0.4. (This works for SSE. We will talk about AVX later.) Here’s the code:

vresult = _mm_hadd_ps(vresult, vresult);
vresult = _mm_hadd_ps(vresult, vresult);

Step 3: Storing the result

Step 3 involves storing the result. We can of course just store the four floats into an array as before, but as they’re all the same, we’re really only interested in one of them. We could use _mm_extract_ps, which is capable of extracting any of the four floats. But we can do slightly better: we can just take the lowest float in the 128-bit register directly. There is an intrinsic for exactly this, _mm_cvtss_f32, so we can just write:

result[i][j] += _mm_cvtss_f32(vresult);

And that’s (assuming SSE3) four sub-operations of the matrix multiplication done in one go! Because we’re doing four ks at once, we have to change the inner loop to reflect that:

for (int k = 0; k < 1024; k += 4) {
    ...
}

So let’s see the code. In this example I’ve also decided to use malloc instead of stack arrays (except for result), so matrix_a[i][k] turns into matrix_a+(i*1024)+k.

#include <x86intrin.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    float *matrix_a = malloc(1024*1024*sizeof(float));
    float *matrix_b = malloc(1024*1024*sizeof(float));
    float result[1024][1024];
    __m128 va, vb, vresult;

    // initialize matrix_a and matrix_b
    for (int i = 0; i < 1048576; i++) {
        *(matrix_a+i) = 0.1f;
        *(matrix_b+i) = 0.2f;
    }
    // initialize result matrix
    for (int i = 0; i < 1024; i++) {
        for (int j = 0; j < 1024; j++) {
            result[i][j] = 0;
        }
    }

    for (int i = 0; i < 1024; i++) {
        for (int j = 0; j < 1024; j++) {
            for (int k = 0; k < 1024; k += 4) {
                // load
                va = _mm_loadu_ps(matrix_a+(i*1024)+k); // matrix_a[i][k]
                vb = _mm_loadu_ps(matrix_b+(j*1024)+k); // matrix_b[j][k]

                // multiply
                vresult = _mm_mul_ps(va, vb);

                // add
                vresult = _mm_hadd_ps(vresult, vresult);
                vresult = _mm_hadd_ps(vresult, vresult);

                // store
                result[i][j] += _mm_cvtss_f32(vresult);
            }
        }
    }
    
    for (int i = 0; i < 1024; i++) {
        for (int j = 0; j < 1024; j++) {
            printf("%f ", result[i][j]);
        }
        printf("\n");
    }
    
    return 0;
}
gcc -O4 -fopt-info-optall-optimized -msse3 -o sse_mm_unaligned sse_mm_unaligned.c
time ./sse_mm_unaligned > /dev/null

real    0m1.054s
user    0m1.044s
sys     0m0.008s

And the run time is about 1.054 seconds using a single thread. Note that we have to pass -msse3 to gcc, as vanilla SSE does not support the horizontal add instruction.

AVX

As mentioned earlier, the double-hadd method does not work for the AVX _mm256_hadd_ps intrinsic (https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm256_hadd_ps&expand=2778), which works like this:

dst[31:0] := a[63:32] + a[31:0]
dst[63:32] := a[127:96] + a[95:64]
dst[95:64] := b[63:32] + b[31:0]
dst[127:96] := b[127:96] + b[95:64]
dst[159:128] := a[191:160] + a[159:128]
dst[191:160] := a[255:224] + a[223:192]
dst[223:192] := b[191:160] + b[159:128]
dst[255:224] := b[255:224] + b[223:192]

Here’s a va-vb-table that shows what happens with AVX:

va 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8
vb 0.9 1 1.1 1.2 1.3 1.4 1.5 1.6
vresult 0.3 0.7 1.9 2.3 1.1 1.5 2.7 3.1

Here’s the first va-va table of the double-hadd method:

va 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8
va 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8
vresult 0.3 0.7 0.3 0.7 1.1 1.5 1.1 1.5

And the second vresult-vresult table:

vresult 0.3 0.7 0.3 0.7 1.1 1.5 1.1 1.5
vresult 0.3 0.7 0.3 0.7 1.1 1.5 1.1 1.5
vresult (new) 1 1 1 1 2.6 2.6 2.6 2.6

As you can see, we do not reach our expected result of 3.6 (0.1+0.2+…+0.8). (It behaves as if it were doing two SSE hadds completely independently of each other, one per 128-bit half.) There are various ways to get out of this problem, e.g. extract the two 128-bit halves from the 256-bit register, and then use SSE instructions. This is how you extract:

vlow = _mm256_extractf128_ps(va, 0);
vhigh = _mm256_extractf128_ps(va, 1);

The second argument indicates which half you want.

As an aside: instead of extracting the lower 128 bits and putting them in a register, we can also use a cast, _mm256_castps256_ps128 (https://software.intel.com/en-us/node/524181).

The lower 128-bits of the source vector are passed unchanged to the result. This intrinsic does not introduce extra moves to the generated code.

Anyway, let’s go with the extracted values first. So we have the following situation:

vlow 0.1 0.2 0.3 0.4
vhigh 0.5 0.6 0.7 0.8

And we want to add all these eight values together. So why don’t we just simply use our trusty _mm_add_ps(vlow, vhigh) first? This way we can do four of eight required additions, leaving us with the following 128-bit register:

vresult 0.6 0.8 1 1.2

And now we want to add up horizontally, so we use the double-_mm_hadd_ps method described above:

vresult 0.6 0.8 1 1.2
vresult 0.6 0.8 1 1.2
vresult 1.4 2.2 1.4 2.2

vresult 1.4 2.2 1.4 2.2
vresult 1.4 2.2 1.4 2.2
vresult 3.6 3.6 3.6 3.6
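Packaged up as a small helper (a sketch of my own; the name hsum256_ps isn’t from any library), the whole reduction looks like this:

// Sum all eight floats of an AVX register: add the two 128-bit halves,
// then do the double horizontal add described above.
// Needs x86intrin.h and -mavx (which also enables SSE3 for the hadds).
static inline float hsum256_ps(__m256 v) {
    __m128 vlow  = _mm256_castps256_ps128(v);   // lower 128 bits (no instruction emitted)
    __m128 vhigh = _mm256_extractf128_ps(v, 1); // upper 128 bits
    __m128 vsum  = _mm_add_ps(vlow, vhigh);     // four pairwise sums
    vsum = _mm_hadd_ps(vsum, vsum);
    vsum = _mm_hadd_ps(vsum, vsum);
    return _mm_cvtss_f32(vsum);                 // the lowest float now holds the total
}

And here’s the full program: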
#include <x86intrin.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    float *matrix_a = malloc(1024*1024*sizeof(float));
    float *matrix_b = malloc(1024*1024*sizeof(float));
    float result[1024][1024];
    __m256 va, vb, vtemp;
    __m128 vlow, vhigh, vresult;

    // initialize matrix_a and matrix_b
    for (int i = 0; i < 1048576; i++) {
        *(matrix_a+i) = 0.1f;
        *(matrix_b+i) = 0.2f;
    }
    // initialize result matrix
    for (int i = 0; i < 1024; i++) {
        for (int j = 0; j < 1024; j++) {
            result[i][j] = 0;
        }
    }

    for (int i = 0; i < 1024; i++) {
        for (int j = 0; j < 1024; j++) {
            for (int k = 0; k < 1024; k += 8) {
                // load
                va = _mm256_loadu_ps(matrix_a+(i*1024)+k); // matrix_a[i][k]
                vb = _mm256_loadu_ps(matrix_b+(j*1024)+k); // matrix_b[j][k]

                // multiply
                vtemp = _mm256_mul_ps(va, vb);

                // add
                // extract higher four floats
                vhigh = _mm256_extractf128_ps(vtemp, 1); // high 128
                // add higher four floats to lower floats
                vresult = _mm_add_ps(_mm256_castps256_ps128(vtemp), vhigh);
                // horizontal add of that result
                vresult = _mm_hadd_ps(vresult, vresult);
                // another horizontal add of that result
                vresult = _mm_hadd_ps(vresult, vresult);

                // store
                result[i][j] += _mm_cvtss_f32(vresult);
            }
        }
    }
    
    for (int i = 0; i < 1024; i++) {
        for (int j = 0; j < 1024; j++) {
            printf("%f ", result[i][j]);
        }
        printf("\n");
    }
    
    return 0;
}
$ gcc -O4 -fopt-info-optall-optimized -mavx -o avx256_mm_unaligned avx256_mm_unaligned.c 
$ time ./avx256_mm_unaligned > /dev/null

real    0m0.912s
user    0m0.904s
sys     0m0.004s

That is… a tiny bit faster. (Note that I’m running everything multiple times to make sure the difference isn’t just due to chance.) However, with AVX we are supposed to get twice the FLOPs, right? We’ll look at other optimizations of the vectorization in a later post. Before that, let’s add OpenMP into the mix.

OpenMP

Unfortunately, OpenMP’s #pragma omp parallel for sometimes doesn’t appear to do what you need it to do. Sticking this in front of the outer (i) loop reduces performance by half! However, we can be sure that this isn’t the processor “oversubscribing” the SIMD units, because if we run two instances of our program at the same time, both finish with almost the same run time we see with just a single instance:

$ time (./avx256_mm_unaligned & ./avx256_mm_unaligned; wait) > /dev/null
real    0m1.001s
user    0m0.988s
sys     0m0.008s

So we’ll use the same chunking trick that we used last time, and our result gets a little better: 0.753 seconds:

#include <x86intrin.h> // Need this in order to be able to use the AVX "intrinsics" (which provide access to instructions without writing assembly)
#include <stdio.h>
#include <stdlib.h>

float *matrix_a;
float *matrix_b;
float result[1024][1024];

void chunked_mm(int chunk, int n_chunks) {
    __m256 va, vb, vtemp;
    __m128 vlow, vhigh, vresult;
    for (int i = chunk*(1024/n_chunks); i < (chunk+1)*(1024/n_chunks); i++) {
        for (int j = 0; j < 1024; j++) {
            for (int k = 0; k < 1024; k += 8) {
                // load
                va = _mm256_loadu_ps(matrix_a+(i*1024)+k); // matrix_a[i][k]
                vb = _mm256_loadu_ps(matrix_b+(j*1024)+k); // matrix_b[j][k]

                // multiply
                vtemp = _mm256_mul_ps(va, vb);

                // add
                // extract higher four floats
                vhigh = _mm256_extractf128_ps(vtemp, 1); // high 128
                // add higher four floats to lower floats
                vresult = _mm_add_ps(_mm256_castps256_ps128(vtemp), vhigh);
                // horizontal add of that result
                vresult = _mm_hadd_ps(vresult, vresult);
                // another horizontal add of that result
                vresult = _mm_hadd_ps(vresult, vresult);

                // store
                result[i][j] += _mm_cvtss_f32(vresult);
            }
        }
    }
}

int main(int argc, char **argv) {
    // initialize matrix_a and matrix_b
    matrix_a = malloc(1024*1024*sizeof(float));
    matrix_b = malloc(1024*1024*sizeof(float));
    for (int i = 0; i < 1048576; i++) {
        *(matrix_a+i) = 0.1f;
        *(matrix_b+i) = 0.2f;
    }
    // initialize result matrix
    for (int i = 0; i < 1024; i++) {
        for (int j = 0; j < 1024; j++) {
            result[i][j] = 0;
        }
    }

    #pragma omp parallel for num_threads(4)
    for (int i = 0; i < 4; i++) {
        chunked_mm(i, 4);
    }
    
    for (int i = 0; i < 1024; i++) {
        for (int j = 0; j < 1024; j++) {
            printf("%f ", result[i][j]);
        }
        printf("\n");
    }
    
    return 0;
}
$ gcc -fopenmp -O4 -mavx -o avx256_mm_unaligned_openmp avx256_mm_unaligned_openmp.c
$ time ./avx256_mm_unaligned_openmp > /dev/null 

real    0m0.753s
user    0m1.332s
sys     0m0.008s

To be honest, with a 2 core/4 thread system, I would have expected better. Running multiple instances doesn’t increase the run time, and the previous version took only 1.27 times as long as this.

Re-evaluating our performance measurements

Array initialization will always take the same small amount of time, but printf("%f", ...) takes a non-constant amount of time and depends on the values. Let’s see what kind of timing we get when we change this to an %x format string.

printf("%x ", *(unsigned int*)&result[i][j]);
time ./avx256_mm_unaligned > /dev/null

real    0m0.488s
user    0m0.480s
sys     0m0.004s

time ./avx256_mm_unaligned_openmp > /dev/null

real    0m0.277s
user    0m0.832s
sys     0m0.008s

That sounds much better, both in absolute terms and in OpenMP terms. By the way, if we remove the matrix multiplication and only leave initialization and output, we still get an execution time of about 0.111 seconds. So it’s reasonably safe to say that our matrix multiplication takes about 0.377 seconds on a single thread. (I feel like I shot myself in the foot by measuring this with the shell’s time, rather than embedding the measurement in the code itself…)
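For reference, here’s a minimal sketch of how the measurement could be embedded in the code instead, using clock_gettime (this isn’t in the programs above; on older glibc you may need to link with -lrt):

#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    // ... the matrix multiplication would go here ...

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double seconds = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("matrix multiplication took %.3f seconds\n", seconds);
    return 0;
}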

Aligned accesses

To allow the use of the aligned _mm256_load_ps, allocate your memory like this:

    matrix_a = aligned_alloc(ALIGNMENT, 1024*1024*sizeof(float));
    matrix_b = aligned_alloc(ALIGNMENT, 1024*1024*sizeof(float));
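ALIGNMENT isn’t defined in the snippet above; _mm256_load_ps wants 32-byte aligned addresses, so 32 is the value to use. A minimal standalone sketch (compile with -mavx; aligned_alloc is C11):

#include <stdlib.h>
#include <stdio.h>
#include <x86intrin.h>

#define ALIGNMENT 32 // _mm256_load_ps requires 32-byte aligned addresses

int main(void) {
    float *matrix_a = aligned_alloc(ALIGNMENT, 1024*1024*sizeof(float));
    for (int i = 0; i < 1024*1024; i++) {
        matrix_a[i] = 0.1f;
    }

    __m256 va = _mm256_load_ps(matrix_a); // aligned load instead of _mm256_loadu_ps

    float out[8] __attribute__ ((aligned (32)));
    _mm256_store_ps(out, va);
    printf("%f\n", out[0]);

    free(matrix_a);
    return 0;
}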

Unfortunately, I didn’t notice a significant difference. (You may be able to shave off a few percent.)

Results

Here are the results, again:

                    AVX, no OpenMP   AVX, OpenMP   SSE, no OpenMP
Run time            0.488            0.277         0.59
Minus init/output   0.377            0.166         0.479

Matrix multiplication using gcc’s auto-vectorization

In my previous post, I tried to explain how to use SIMD instructions for a really simple (and artificial) example: just adding numbers in two vectors together. In this post, I’d like to take this just a little bit further and talk about matrix multiplication. In this post, we’re using gcc’s auto-vectorization. We’ll vectorize this ourselves in my next post.

If you’re here, you probably know what matrix multiplication is. It’s got a lot of uses, including graphics and neural networks.

We’ll keep our implementation simple by only supporting square matrices with n divisible by 16 (in the case of AVX). Our example will use n=1024. So before we do the vectorized implementation, let’s look at a general (“naive”) example:

#include <stdio.h>

int main(int argc, char **argv) {
    float matrix_a[1024][1024];
    float matrix_b[1024][1024];
    float result_matrix[1024][1024];
    
    // initialize arrays
    for (int i = 0; i < 1024; i++) {
        for (int j = 0; j < 1024; j++) {
            matrix_a[i][j] = 0.1f;
            matrix_b[i][j] = 0.2f;
            result_matrix[i][j] = 0.0f;
        }
    }

    for (int i = 0; i < 1024; i++) { // iterate over rows of matrix A/result matrix
        for (int j = 0; j < 1024; j++) { // iterate over columns of matrix B/result matrix
            for (int k = 0; k < 1024; k++) { // iterate over columns of matrix A and rows of matrix B
                result_matrix[i][j] += matrix_a[i][k]*matrix_b[k][j];
            }
        }
    }

    // output
    for (int i = 0; i < 1024; i++) {
        for (int j = 0; j < 1024; j++) {
            printf("%f ", result_matrix[i][j]);
        }
        printf("\n");
    }
}

To compile and run, execute the following commands:

$ gcc -Wall -o mm mm.c
$ ulimit -s 16384
$ time ./mm > mm_output

real    0m20.189s
user    0m20.016s
sys     0m0.072s

(Note that we are allocating the arrays on the stack rather than using malloc, so we need to raise the stack size a bit, otherwise we get an immediate segmentation fault.)

The reason matrix multiplication code can look a bit mysterious is that there are a lot of things that can be optimized. However, there is only one optimization that is required to get vectorization to work at all.

As you can see, when we access matrix_a, we access matrix_a[i][0], then matrix_a[i][1], matrix_a[i][2], matrix_a[i][3], and so on until we have hit the end. This is nice and sequential memory access, and is much faster than haphazard (“random”) accesses. In matrix_b, we have somewhat haphazard accesses. The first access is matrix_b[0][j], the second access is (in our example) 4096 bytes (1024 floats) away from the first, matrix_b[1][j], then another 4096 bytes away at matrix_b[2][j], etc. There is a 4096-byte gap between consecutive accesses. This kind of access is slow. It ruins the CPU’s caching system.

This is why matrix_b will often be transposed in matrix multiplication code. If you transpose the matrix, the rows become the columns and the columns become the rows, and you get nice and sequential access to matrix_b. (In our demonstration code, we are using square matrices with the same values everywhere, so we don’t actually have to do any copying work, as matrix_b is the same transposed or not. So all we have to do is swap the indices.)

            result_matrix[i][j] += matrix_a[i][k]*matrix_b[j][k];
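If the matrices weren’t filled with a single repeated value, we’d do an explicit transpose first. A sketch (matrix_b_T is a name I’m introducing just for this sketch); the O(n²) copy is cheap compared to the O(n³) multiplication:

static float matrix_b_T[1024][1024]; // transposed copy of matrix_b

// Rows of matrix_b_T are the columns of matrix_b, so the k-loop
// can then read matrix_b_T[j][k] sequentially.
for (int i = 0; i < 1024; i++) {
    for (int j = 0; j < 1024; j++) {
        matrix_b_T[i][j] = matrix_b[j][i];
    }
}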

So what kind of speed-up does this get us? The naive implementation takes 19-21 seconds on my system. The implementation with the transposed matrix takes 4 seconds! That’s a 5x speed-up!

Next, we’ll try to parallelize the outer for-loop using OpenMP. With OpenMP we just have to add #pragma omp parallel for in front of the loop, like this:

    #pragma omp parallel for
    for (int i = 0; i < 1024; i++) {

And then compile and run like this:

$ gcc -fopenmp -Wall -o mmT mmT.c
$ ulimit -s 16384
$ time ./mmT > /dev/null

real    0m2.939s
user    0m9.984s
sys     0m0.016s

Next, we’ll ask gcc to auto-vectorize! Curiously enough, gcc didn’t autovectorize the version with the transposed loop, so I’ve gathered results for -O4 without autovectorization for non-transposed, -O4 with autovectorization for non-transposed, and -O4 transposed:

            -O4 with SSE autovectorization   -O4 with AVX autovectorization   -O4 without autovectorization
Straight    2.99                             1.527                            8.921
Transposed  n/a                              n/a                              1.565

And here are the commands and some example output:

$ # -O4, no auto-vectorization, straight
$ gcc -g -O4 -fopt-info-optall-optimized -fno-tree-vectorize -o mm mm.c
mm.c:9:9: note: Loop 7 distributed: split to 1 loops and 1 library calls.
$ time ./mm > /dev/null

real    0m8.921s
user    0m8.912s
sys     0m0.004s
$ # -O4, SSE auto-vectorization, straight
$ gcc -g -O4 -fopt-info-optall-optimized -ftree-vectorize -o mm mm.c
mm.c:9:9: note: Loop 7 distributed: split to 1 loops and 1 library calls.
mm.c:18:9: note: loop vectorized
mm.c:9:9: note: loop vectorized
$ # -O4, AVX auto-vectorization, straight
$ gcc -g -O4 -fopt-info-optall-optimized -ftree-vectorize -mavx -o mm mm.c
$ # -O4, no auto-vectorization, transposed
$ gcc -g -O4 -fopt-info-optall-optimized -ftree-vectorize -o mmT mmT.c

Let’s add OpenMP into the mix:

            -O4 with AVX autovectorization and OpenMP   -O4 with OpenMP
Straight    1.18                                        5.568
Transposed  n/a                                         0.882

Just asking OpenMP to parallelize the i-loop makes the auto-vectorization break, but we can work around that by manually splitting the matrix multiplication into chunks. This is the full code:

#include <stdio.h>

#define N 1024

float matrix_a[N][N];
float matrix_b[N][N];
float result_matrix[N][N];

void chunked_mm(int chunk, int n_chunks) {
    for (int i = chunk*(N/n_chunks); i < (chunk+1)*(N/n_chunks); i++) {
        for (int j = 0; j < N; j++) {
            for (int k = 0; k < N; k++) {
                result_matrix[i][j] += matrix_a[i][k] * matrix_b[k][j];
            }
        }
    }
}

int main(int argc, char **argv) {
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++) {
            matrix_a[i][j] = 0.1f;
            matrix_b[i][j] = 0.2f;
            result_matrix[i][j] = 0.0f;
        }
    }
    #pragma omp parallel for
    for (int chunk = 0; chunk < 4; chunk++) {
        chunked_mm(chunk, 4);
    }
 
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++) {
            printf("%f ", result_matrix[i][j]);
        }
        printf("\n");
    }
}

Compile and run:

$ gcc -g -O4 -fopenmp -fopt-info-optall-optimized -ftree-vectorize -mavx -o mm_autovectorized_openmp mm_autovectorized_openmp.c 
mm_autovectorized_openmp.c:11:9: note: loop vectorized
mm_autovectorized_openmp.c:11:9: note: loop vectorized
mm_autovectorized_openmp.c:21:9: note: Loop 4 distributed: split to 1 loops and 1 library calls.
mm_autovectorized_openmp.c:21:9: note: loop vectorized
mm_autovectorized_openmp.c:21:9: note: loop peeled for vectorization to enhance alignment
mm_autovectorized_openmp.c:21:9: note: loop turned into non-loop; it never loops.
mm_autovectorized_openmp.c:21:9: note: loop with 7 iterations completely unrolled
mm_autovectorized_openmp.c:19:5: note: loop turned into non-loop; it never loops.
mm_autovectorized_openmp.c:19:5: note: loop with 7 iterations completely unrolled
$ time ./mm_autovectorized_openmp > /dev/null

real    0m1.182s
user    0m3.036s
sys     0m0.012s

From the user time being larger than the real time, we can tell that this was indeed running in multiple threads. Enclosing the parallel loop with something like:

for (int loop = 0; loop < 10; loop++) {
    #pragma omp parallel for
    for (int chunk = 0; chunk < 8; chunk++) {
        chunked_mm(chunk, 8);
    }
}

gives us a better measurement of how much improvement we get.

time ./mm_autovectorized_openmp > /dev/null

real    0m6.649s
user    0m23.572s
sys     0m0.012s

Anyway, the transpose still beats gcc’s auto-vectorization of the non-transposed code. I wish I could get gcc to auto-vectorize the transposed code, but alas.

In the next post we’ll vectorize this ourselves!

Baby steps in SIMD (SSE/AVX)

In case you have never used SIMD instructions, this post explores the real basics. For example: what is SIMD? SIMD stands for “Single instruction, multiple data”. We’re computing more than one “math problem” with a single instruction. CPUs have had instructions to do this for a long time. If you remember the “Pentium MMX” hype – that was the first time SIMD instructions came to the x86 architecture.

However, with some trickery, you can do some limited SIMD without actually using these instructions. Let’s say we want to add 1 to two values at the same time. If we put these two values right next to each other in memory, we can interpret them as a single larger datatype. That’s not all that straightforward to understand, so here’s an example: you can interpret two 8-bit values right next to each other as one 16-bit value, right? To increment both values at the same time, you do value + 0x0101, which is just one assembly instruction. So with no special instructions at all, on a 64-bit platform you can increment eight 8-bit values at the same time by adding 0x0101010101010101.
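Here’s that trick in C (a toy sketch; I’m packing the bytes into a uint64_t by hand):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    // Eight 8-bit counters packed into one 64-bit value.
    uint64_t counters = 0x0807060504030201ULL;

    // One 64-bit addition increments all eight bytes at once,
    // as long as no individual byte overflows into its neighbor.
    counters += 0x0101010101010101ULL;

    printf("%016llx\n", (unsigned long long)counters); // prints 0908070605040302
    return 0;
}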

Okay, that feels pretty hacky and unreliable. Once you’ve incremented a value 256 times, you’ll have spilled into the neighboring value! That’s pretty bad.

So SSE provides 128-bit registers that allow you to comfortably work on e.g. four 32-bit floats at the same time, without any spilling. AVX provides 256-bit registers, and AVX512 provides 512-bit registers. Woo! Unfortunately AVX512 isn’t widely available yet.

So how do you use this? Let’s start with SSE, though you’ll see that updating code to use AVX or AVX512 instead is pretty easy. We’ll look at some very basic example code to add two vectors together.

#include <xmmintrin.h> // Need this in order to be able to use the SSE "intrinsics" (which provide access to instructions without writing assembly)
#include <stdio.h>

int main(int argc, char **argv) {
    float a[4], b[4], result[4]; // a and b: input, result: output
    __m128 va, vb, vresult; // these vars will "point" to SIMD registers

    // initialize arrays (just {0,1,2,3})
    for (int i = 0; i < 4; i++) {
        a[i] = (float)i;
        b[i] = (float)i;
    }
    
    // load arrays into SIMD registers
    va = _mm_loadu_ps(a); // https://software.intel.com/en-us/node/524260
    vb = _mm_loadu_ps(b); // same

    // add them together
    vresult = _mm_add_ps(va, vb);

    // store contents of SIMD register into memory
    _mm_storeu_ps(result, vresult); // https://software.intel.com/en-us/node/524262

    // print out result
    for (int i = 0; i < 4; i++) {
        printf("%f\n", result[i]);
    }
}

That doesn’t seem so hard, does it? To access SIMD instructions without writing assembly code, we use something called “intrinsics”, which make the SIMD instructions look like regular C functions. Don’t worry though, these functions are inline and mostly just consist of the assembly instruction itself, so you probably won’t see any difference in performance.

In this example, we’re using three intrinsics, _mm_loadu_ps, _mm_add_ps, and _mm_storeu_ps. _mm_loadu_ps copies four float values from memory into the SSE register. We do this twice and are thus using two SSE registers. (We have 16 SSE registers available on 64-bit CPUs.) Then, we use _mm_add_ps to, in a single instruction, add the four floats in one register to the corresponding floats in the other register. (So we get a[0]+b[0], a[1]+b[1], a[2]+b[2], a[3]+b[3].) This is stored in a third SSE register. Using _mm_storeu_ps, we put the contents of this result register into the result float array.

We can compile and run this without any extra linking:

$ gcc -Wall -o sse_test sse_test.c 
$ ./sse_test
0.000000
2.000000
4.000000
6.000000

Wow, it worked!

_mm_loadu_ps/_mm_storeu_ps have sister functions without the ‘u’. These functions require memory alignment, which just means that the memory has to start at an address that is cleanly divisible by a certain number, which mostly increases performance (unless something unfortunate happens in the CPU caching department).

To get the alignment, we just declare the arrays like this:

    float a[4] __attribute__ ((aligned (16)));
    float b[4] __attribute__ ((aligned (16)));
    float result[4]  __attribute__ ((aligned (16)));

And then change all instances of _mm_loadu_ps/_mm_storeu_ps to _mm_load_ps/_mm_store_ps.  Intel’s documentation states that we need 16-byte alignment. And GCC’s syntax just looks a bit obscure. It’s described here: https://gcc.gnu.org/onlinedocs/gcc-6.4.0/gcc/Common-Variable-Attributes.html#Common-Variable-Attributes

Cool, that’s SSE. What about AVX? Well, it turns out that we just need to change the included header file, the array sizes and the names of the intrinsics! (Note that you can include all intrinsics available by doing #include <x86intrin.h> instead.)

So here’s the same thing using AVX, and with aligned memory accesses:

#include <immintrin.h> // Need this in order to be able to use the AVX "intrinsics" (which provide access to instructions without writing assembly)
#include <stdio.h>

int main(int argc, char **argv) {
    float a[8] __attribute__ ((aligned (32))); // Intel documentation states that we need 32-byte alignment to use _mm256_load_ps/_mm256_store_ps
    float b[8]  __attribute__ ((aligned (32))); // GCC's syntax makes this look harder than it is: https://gcc.gnu.org/onlinedocs/gcc-6.4.0/gcc/Common-Variable-Attributes.html#Common-Variable-Attributes
    float result[8]  __attribute__ ((aligned (32)));
    __m256 va, vb, vresult; // __m256 is a 256-bit datatype, so it can hold 8 32-bit floats

    // initialize arrays (just {0,1,2,3,4,5,6,7})
    for (int i = 0; i < 8; i++) {
        a[i] = (float)i;
        b[i] = (float)i;
    }

    // load arrays into SIMD registers
    va = _mm256_load_ps(a); // https://software.intel.com/en-us/node/694474
    vb = _mm256_load_ps(b); // same

    // add them together
    vresult = _mm256_add_ps(va, vb); // https://software.intel.com/en-us/node/523406

    // store contents of SIMD register into memory
    _mm256_store_ps(result, vresult); // https://software.intel.com/en-us/node/694665

    // print out result
    for (int i = 0; i < 8; i++) {
        printf("%f\n", result[i]);
    }
    
    return 0;
}

So let’s compile that:

gcc -Wall -o avx256_test_aligned avx256_test_aligned.c 
avx256_test_aligned.c: In function ‘main’:
avx256_test_aligned.c:15:8: warning: AVX vector return without AVX enabled changes the ABI [-Wpsabi]
     va = _mm256_load_ps(a); // https://software.intel.com/en-us/node/694474
     ~~~^~~~~~~~~~~~~~~~~~~
In file included from /usr/lib/gcc/x86_64-linux-gnu/6/include/immintrin.h:41:0,
                 from avx256_test_aligned.c:1:
/usr/lib/gcc/x86_64-linux-gnu/6/include/avxintrin.h:852:1: error: inlining failed in call to always_inline ‘_mm256_store_ps’: target specific option mismatch
 _mm256_store_ps (float *__P, __m256 __A)
 ^~~~~~~~~~~~~~~
avx256_test_aligned.c:18:5: note: called from here
...

Oh no, what happened? It didn’t complain when we used SSE instructions (perhaps because all CPUs of the implicitly selected architecture (x86_64) support SSE, which was first introduced a very long time ago), but it’s complaining that our use of AVX instructions is causing a “target-specific option mismatch”. That’s a bit cryptic, but it means that our target (“vanilla” x86_64) does not support AVX instructions. To make this work, we need to supply the -mavx option:

$ gcc -Wall -mavx -o avx256_test_aligned avx256_test_aligned.c 
$ ./avx256_test_aligned 
0.000000
2.000000
4.000000
6.000000
8.000000
10.000000
12.000000
14.000000

Nice! BTW, for AVX512, we just need to change the 256s to 512s and the array index 8s to 16s, and supply -mavx512f to gcc.
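For reference, the AVX512 version would look roughly like this (a sketch following those substitutions; note that the alignment requirement also grows, to 64 bytes):

#include <immintrin.h>
#include <stdio.h>

int main(int argc, char **argv) {
    float a[16] __attribute__ ((aligned (64))); // _mm512_load_ps wants 64-byte alignment
    float b[16] __attribute__ ((aligned (64)));
    float result[16] __attribute__ ((aligned (64)));
    __m512 va, vb, vresult; // __m512 holds 16 32-bit floats

    // initialize arrays (just {0,1,...,15})
    for (int i = 0; i < 16; i++) {
        a[i] = (float)i;
        b[i] = (float)i;
    }

    // load, add, store -- the same three steps as before
    va = _mm512_load_ps(a);
    vb = _mm512_load_ps(b);
    vresult = _mm512_add_ps(va, vb);
    _mm512_store_ps(result, vresult);

    for (int i = 0; i < 16; i++) {
        printf("%f\n", result[i]);
    }

    return 0;
}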

Addendum: if you execute the AVX512 code on a CPU that doesn’t support it, you get this:

gcc -mavx512f -Wall -o avx_test_aligned avx_test_aligned.c 
./avx_test_aligned
Illegal instruction

Second addendum: if you use the aligned instructions without actually aligning your arrays, you get this:

$ ./avx_with_bad_alignment
Segmentation fault

Let me know if you have any questions.

How to find out if an executable uses (e.g.) SIMD instructions (includes jq mini-tutorial!)

“Embarrassingly parallel” algorithms can often make use of SIMD instructions like those that came with the SSE and AVX extensions. In the Python world, numpy is a very popular package to work with arrays. One of the first things I wondered when I started using numpy was, “How optimized is numpy?” Some quick investigation shows that it’s multi-threaded, and some googling shows that it uses SIMD instructions: https://stackoverflow.com/questions/17109410/how-can-i-check-if-my-installed-numpy-is-compiled-with-sse-sse2-instruction-set

Now, it’s a bit tedious to grep for strings like VADDPD in the disassembly, so this post develops a nicer method.

For the impatient, here’s an unorthodox dirty one-liner (it creates a temporary file) that does this for you. It requires jq and internet access to download a database.

tempfile=`mktemp`; curl https://raw.githubusercontent.com/asmjit/asmdb/488b6d986964627f0b130b5265722dde8d93f11d/x86data.js | cpp | sed -n '/^{/,/^}/ { p }' | jq '[ .instructions | .[] | { (.[0]): .[4] } ] | add' > $tempfile; objdump --no-show-raw-insn -M intel -d /usr/lib/python2.7/dist-packages/numpy/core/*.so | awk '{print $2}' | grep -v : | sort | uniq | while read line; do echo -n "$line  "; output=$(jq "with_entries(select(.key | match(\"(^$line\\/|\\/$line\$|$line\\/|^$line\$)\"))) | to_entries | .[] | .value" $tempfile); if [ -z "$output" ]; then echo; else echo $output; fi; done > output_test; rm $tempfile

Note that it is not able to distinguish between e.g. AVX and AVX512. It always prints out the most advanced extension possible, so it will print out AVX512 if any AVX is used. If you want something better, check out the Node.js version at the bottom of this post.

And around this point we start the explanation for the less impatient readers: first of all, we need a database of CPU instructions, and a simple Google query brings up this: https://github.com/asmjit/asmdb (The following discussion is based on commit 488b6d986964627f0b130b5265722dde8d93f11d.)

This project is in JavaScript, and the data file isn’t quite in JSON, so let’s do some minor preprocessing first to make our database easier to use:

cpp x86data.js | sed -n '/^{/,/^}/ { p }' > json

cpp is the C preprocessor to remove comments (there are comments and even multi-line comments in the actual data). The sed bit looks for a line starting with a { and after that a line starting with a }, all the while printing out this whole block.

Next, we need to get a disassembly. Here’s an example for numpy’s .so files:

objdump --no-show-raw-insn -M intel -d /usr/lib/python2.7/dist-packages/numpy/core/*.so | grep -P "^ +[0-9a-z]+:" | awk '{print $2}' | sort | uniq > numpy_instructions

This will get us all instruction mnemonics used. We get a file like this:

adc
add
addpd
addps
addsd
addss
and
andnpd
andnps

Let’s go back to our data. Today, we’ll use jq as our main tool to get the job done (though it will be many times slower than if we wrote a simple script that loads the hash once and re-uses it for every input instruction). If we just want the instructions block, we could do this:

jq '.instructions' json > instructions

However, this tool is a real Swiss army knife. We can use the familiar concept of piping, and we can wrap things in arrays or hashes just by enclosing expressions in [] or {}. Here’s an entire command to get an array of hashes containing only the instruction and the corresponding extension from the json file:

jq '[ .instructions | .[] | {instruction: .[0], extension: .[4] } ]' json

.[] iterates over the array inside the instructions key. Every item in the array is piped to a bit of jq code that creates a hash with an instruction and an extension key, which correspond to array element 0 and 4 in the input data. So we get output like this:

[
  {
    "instruction": "aaa",
    "extension": "X86 Deprecated   OF=U SF=U ZF=U AF=W PF=U CF=W"
  },
  {
    "instruction": "aas",
    "extension": "X86 Deprecated   OF=U SF=U ZF=U AF=W PF=U CF=W"
  },
  .
  .
  .
]

Now we’re going to do something slightly naughty. The extension field isn’t the same for all instructions with the same mnemonic, as different opcodes with the same mnemonics have been added to the instruction set over time. However, we don’t need to be that precise IMO, so we’re just going to merge everything into an object like {“mnemonic”: “extension info”}. First, let’s get an array of hashes:

jq '[ .instructions | .[] | { (.[0]): .[4] } ]' json | head
[
  {
    "aaa": "X86 Deprecated   OF=U SF=U ZF=U AF=W PF=U CF=W"
  },
  {
    "aas": "X86 Deprecated   OF=U SF=U ZF=U AF=W PF=U CF=W"
  },
  {
    "aad": "X86 Deprecated   OF=U SF=W ZF=W AF=U PF=W CF=U"
  },
  .
  .
  .
]

Now we just need to pipe this into the add filter to merge this array of hashes/objects into a single hash/object:

jq '[ .instructions | .[] | { (.[0]): .[4] } ] | add' json > mnem2ext.json

And the result is:

{
  "aaa": "X86 Deprecated   OF=U SF=U ZF=U AF=W PF=U CF=W",
  "aas": "X86 Deprecated   OF=U SF=U ZF=U AF=W PF=U CF=W",
  "aad": "X86 Deprecated   OF=U SF=W ZF=W AF=U PF=W CF=U",
  "aam": "X86 Deprecated   OF=U SF=W ZF=W AF=U PF=W CF=U",
  "adc": "X64              OF=W SF=W ZF=W AF=W PF=W CF=X",
  "add": "X64              OF=W SF=W ZF=W AF=W PF=W CF=W",
  "and": "X64              OF=0 SF=W ZF=W AF=U PF=W CF=0",
  "arpl": "X86 ZF=W",
  "bndcl": "MPX X64",
  ...
}

Wee! But how do we access the information in this file? Well, with jq of course (not efficient though):

while read line; do echo -n "$line  "; jq ".$line" min.json; done < numpy_instructions

Here’s an extract from the output:

cvttpd2dq  "SSE2"
cvttps2dq  "SSE2"
cvttsd2si  "SSE2 X64"
cvttss2si  "SSE X64"
cwde  "ANY"
div  "X64              OF=U SF=U ZF=U AF=U PF=U CF=U"
divpd  "SSE2"
divps  "SSE"
divsd  "SSE2"
divss  "SSE"
fabs  "FPU              C0=U C1=0 C2=U C3=U"
fadd  "FPU              C0=U C1=W C2=U C3=U"

Such a nice mix of instructions. <3 We have a few problems though. Here are some instructions that couldn’t be resolved:

cmova  null
cmpneqss  null
ja  null
rep  null
seta  null
vcmplepd

A closer look at our database reveals that some instructions have slashes in them, like “cmova/cmovnbe”. These are aliases, so we should be able to detect these as well. jq sort of allows you to search for keys using a regex, though the syntax isn’t easy, and the bash escaping makes things a bit worse:

while read line; do echo -n "$line  "; jq "with_entries(select(.key | match(\"(^$line\\/|\\/$line\$|$line\\/|^$line\$)\")))" min.json; done < numpy_instructions > output

Things have gotten a bit slower again, and the rest of our output looks a bit different too:

xor  {
  "xor": "X64              OF=0 SF=W ZF=W AF=U PF=W CF=0"
}
xorpd  {
  "xorpd": "SSE2"
}
xorps  {
  "xorps": "SSE"
}

We can’t get rid of the echo, otherwise we’ll have no way to tell if jq is finding the mnemonic or not. So we’ll use jq to fix the format. Here’s an easy example:

echo '{ "b": "c" }' | jq 'to_entries[]'
[
  {
    "key": "b",
    "value": "c"
  }
]
echo '{ "b": "c" }' | jq 'to_entries | .[] | .value'
"c"

Here, we’re just converting the hash into an array of entries (as we did above with with_entries), and selecting only the .values. We can just pipe this within jq:

while read line; do echo -n "$line  "; jq "with_entries(select(.key | match(\"(^$line\\/|\\/$line\$|$line\\/|^$line\$)\"))) | to_entries | .[] | .value" min.json; done < numpy_instructions > output

However, we don’t get a newline when we didn’t find an instruction, so we work around this in bash:

while read line; do echo -n "$line  "; output=$(jq "with_entries(select(.key | match(\"(^$line\\/|\\/$line\$|$line\\/|^$line\$)\"))) | to_entries | .[] | .value" min.json); if [ -z "$output" ]; then echo; else echo $output; fi; done < numpy_instructions > output

That leaves mostly pseudo-instructions. The following pseudo-instructions are not included in this database but would indicate SSE2: CMPEQPD, CMPLTPD, CMPLEPD, CMPUNORDPD, CMPNEQPD, CMPNLTPD, CMPNLEPD, CMPORDPD. These all belong to the CMPPD instruction introduced in SSE2, as far as I can tell. (https://www.felixcloutier.com/x86/CMPPD.html#tbl-3-2) It would make sense to have them in the database in this case, but I think I’ll leave well enough alone for now though.

Anyway, doing something like awk '{print $2}' output | sed 's/"//g' | sort | uniq shows that my numpy version may use instructions from the following sets:

ANY
AVX
AVX2
AVX512_BW
AVX512_DQ
AVX512_F
CMOV
FPU
FPU_POP
FPU_PUSH
I486
MMX2
SSE
SSE2
SSE4_1
X64

Well, that’s great. Let’s package this up into a shell script so it’s a bit easier to use. Just stick it in a directory that has cpu_extensions.min.json in it and it’ll work.

#!/bin/bash

json_file=$(dirname $0)/cpu_extensions.min.json
objdump --no-show-raw-insn -M intel -d $* | grep -P "^ +[0-9a-z]+:" | awk '{print $2}' | sort | uniq | while read line; do
    echo -n "$line  "
    output=$(jq "with_entries(select(.key | match(\"(^$line\\/|\\/$line\$|$line\\/|^$line\$)\"))) | to_entries | .[] | .value" $json_file);
    if [ -z "$output" ];
        then echo;
    else
        echo $output | sed -e 's/"//g' -e 's/ .*//g'
    fi
done

Also, here’s a more efficient (O(n)) implementation in Node.js. It gets away with much less pre-processing; all you have to do is:

sed -n '/^{/,/^}/ { p }' x86data.js > cpu_extensions.json

However, it doesn’t execute objdump for you, so you have to call it like this:

show_cpu_extensions.js <(objdump --no-show-raw-insn -M intel -d /usr/lib/python2.7/dist-packages/numpy/core/*.so | grep -P "^ +[0-9a-z]+:" | awk '{print $2}' | sort | uniq)

I’ve also made it display all possible extensions.

#!/usr/bin/nodejs

var database_file;
var disassembly_file;

if (process.argv.length == 3) {
    // Use default database
    database_file = __dirname + "/cpu_extensions.json";
    disassembly_file = process.argv[2];
} else if (process.argv.length == 4) {
    database_file = process.argv[2];
    disassembly_file = process.argv[3];
} else {
    console.log("Usage: " + process.argv[1] + " [database] disassembly");
    console.log(process.argv);
    process.exit(1);
}

var fs = require("fs");
var readline = require("readline"); 
var mnem2ext = {};

var obj = JSON.parse(fs.readFileSync(database_file, "utf8"));
obj["instructions"].map(function(v, i) {
    var ext = v[4].replace(/ +[A-Z]+=.*/, "").replace(/  +.*/, "");

    if (v[0].match(/\//)) {
        v[0].split("/").forEach(function(v, i) {
            if (!mnem2ext[v]) {
                mnem2ext[v] = {};
            }
            mnem2ext[v][ext] = true;
        });
    } else {
        if (!mnem2ext[v[0]]) {
            mnem2ext[v[0]] = {};
        }
        mnem2ext[v[0]][ext] = true;
    }
});

var lineReader = require("readline").createInterface({input: fs.createReadStream(disassembly_file)});
lineReader.on("line", function(line) {
    console.log(line + ": " + (mnem2ext[line] ? Object.keys(mnem2ext[line]).join(", ") : undefined));
});

“Wrap marker” Thunderbird Extension

Yay, time for a new Thunderbird extension. Wrap Marker.

The code is up on GitHub.

This Thunderbird extension adds a word wrap marker (also called “ruler”, depending on what editor you’re using) to the text area in the compose window when you’re editing plain text emails. In effect, a vertical line indicating that you’re close to the 72/76/80-character mark. (You can change the position in about:config. The default is 76.)

It works by changing the entire editor’s (think “iframe”) designMode from "on" to "off", and adding a div with contenteditable="true" instead. If this changes how your compose text area behaves, I’d consider that a bug, so please let me know.

At the time of this writing (February 26, 2018), this extension is still kind of beta and not exactly “thoroughly tested”. It will be submitted to Thunderbird’s extension page once it’s been tested some more and maybe once it’s gotten some of the known bugs fixed. These include:

  • Quoted text in a reply isn’t blue.
  • Your cursor position preference isn’t honored. The cursor will always be in the upper left corner when you start a new reply.
  • This feature is disabled for HTML emails. I don’t think it’ll ever work for HTML emails.
  • You get scrollbars all the time (This is probably fixable. Forgot to fix.)

Backporting security fixes to old versions of the Linux kernel (Meltdown to 2.6.18) (Part 1)

In this post, I’ll give a quick overview over what it takes to backport a large patch (the KAISER patch to protect against Meltdown) to the Linux kernel to a version of the Linux kernel from around ten years ago. Note that this post only covers the main technique and the assembly portion of the patch.

First of all, one should think hard about whether this is necessary. Couldn’t you just run a newer kernel with older user space? The answer is, in most cases, yes, you could. As evidenced by our ability to run old Docker images with 10-year-old userland on modern kernels (perhaps adding vsyscall=emulate to the kernel command line), things often work just fine. However, you may run into problems if you’re running on bare metal. I’ve heard of people running a maintained 3.10 kernel on 10-year-old userland without much fuss. I’ve personally run a 64-bit kernel with 100% 32-bit userland (same kernel version, without X11).

However, some people may not be able to afford to re-test their whole setup with different kernel versions all the time, and that is why distributions usually backport pure security fixes from newer kernels to older kernels. The Linux kernel is constantly improved, and over time, the code base of the kernel version included in a specific stable version of a distribution, which may only get security fixes, tends to look pretty different from the current Linux kernel.

Now let’s pretend we have to backport a fix for the Meltdown vulnerability to Linux 2.6.18. First of all, we try very hard to come up with alternative ways to thwart this vulnerability. For 2.6.18, we come up empty-handed, but for earlier kernels, we may find the so-called 4G/4G patch.

This 4G/4G patch unfortunately never made it into the mainline kernel, but was adopted by Red Hat for inclusion in Red Hat Enterprise Linux up to version 4. So we could get our hands on a version of this patch for Linux 2.6.9, and perhaps forward-port this to 2.6.18. The patch at http://people.redhat.com/mingo/4g-patches/4g-2.6.6-B7 weighs in at around 4500 lines, and our foremost priority should be to find a patch with as few lines as possible.

The patch referenced in the original Meltdown paper weighs in at only 1000 lines, and is almost guaranteed to be very barebones. I’d say it would therefore make sense to attempt to backport this patch, and if we manage to do that, perhaps look at what the various distributors decided to do differently from what’s in this patch.

Before we start, it would probably make sense to find a couple of sentences that describe what the patch is supposed to do. It’s more than likely that we came across various descriptions of the patch when we were looking for a barebones patch to base our work on. LWN has a good introduction.

Preparations

We need the source tree of the target kernel version and the source kernel version extracted somewhere. The source kernel version can be had by doing:

$ git clone https://github.com/torvalds/linux.git
$ # cd / mv / etc.
$ git checkout v4.10-rc6

The target version in our case is over here: http://vault.centos.org/5.11/updates/SRPMS/kernel-2.6.18-419.el5.src.rpm. We need to extract this and apply all of the existing patches. I use a current version of Debian, and rpmbuild operates in ~/rpmbuild. So create this directory, and the directories SRPMS, RPMS, SPECS, SOURCES, BUILD, and BUILDROOT below it. Move the .src.rpm into the SOURCES directory, and issue the following commands.

$ cd ~/rpmbuild/SOURCES
$ rpm2cpio * | cpio -idmv
$ mv kernel.spec ../SPECS
$ cd ../SPECS
$ rpmbuild --nodeps -bp kernel.spec

Make sure you didn’t get any errors in the last step. Your patched kernel, ready to build from, is now inside ~/rpmbuild/BUILD/.

We’ll be making a lot of use of grep and git blame to backport patches. I usually use less to browse code quickly, or open it in an editor (usually kate and/or sublime) when I think I’ll need the file for a longer time. I have two monitors, but having more would help. I also have a bunch of paper to scribble stuff on. When you have a lot of terminal windows open just for the grepping, compiling and other things, you’ll probably find that giving the editor a monitor of its own helps.

You’ll find that you’ll have to read up on four-level page tables while creating the patch. Depending on the way you work, you might as well do that before you dig in.

Here are a few more less tips:

  • You likely already know that you can search files by hitting ‘/’
    • You can use the arrow keys to browse through your search history
    • You can disable regex search by hitting Ctrl-R
    • You can type -N followed by return to display line numbers

For debugging, I use the venerable Bochs.

Digging in

arch/x86/entry/entry_64.S and arch/x86/entry/entry_64_compat.S

We have something in arch/x86/entry/entry_64.S and arch/x86/entry/entry_64_compat.S. Okay, we’re adding a few macros (SWITCH_KERNEL_CR3_NO_STACK, SWITCH_USER_CR3, SWITCH_KERNEL_CR3). These macros all seem to be close to a macro called SWAPGS or SWAPGS_UNSAFE_STACK. The presence of “UNSAFE_STACK” also dictates which SWITCH_CR3 macro we’re using. Though nothing may make sense yet, these are all important observations.

On the old kernel, this path doesn’t exist at all, but we have a promising-sounding arch/x86_64/ path.

~/src/kernel/el5/linux-2.6.18.4$ find arch/x86_64/ -name *entry*
arch/x86_64/kernel/entry.S
arch/x86_64/ia32/ia32entry.S

Opening arch/x86_64/kernel/entry.S, we see code that looks similar on the whole. SWAPGS doesn’t exist, but swapgs (as a pure assembly instruction) does. So let’s figure out what SWAPGS is about:

~/src/kernel/git$ grep -rn SWAPGS
...
arch/x86/include/asm/irqflags.h:122:#define SWAPGS      swapgs
...
arch/x86/include/asm/paravirt.h:908:#define SWAPGS                                                              \
        PARA_SITE(PARA_PATCH(pv_cpu_ops, PV_CPU_swapgs), CLBR_NONE,     \
                  call PARA_INDIRECT(pv_cpu_ops+PV_CPU_swapgs)          \
                 )
...

At this point, we might have a hunch that SWAPGS was introduced with the intention to make the same entry code work for both real hardware/real virtualization and paravirtualization, and this is sufficiently confirmed when we git blame the file a bit:

$ git blame arch/x86/entry/entry_64.S
...
72fe485854429 arch/x86/kernel/entry_64.S (Glauber de Oliveira Costa 2008-01-30 13:32:08 +0100  143)     SWAPGS_UNSAFE_STACK
...
$ git show 72fe485854429
commit 72fe4858544292ad64600765cb78bc02298c6b1c
Author: Glauber de Oliveira Costa <gcosta@redhat.com>
Date:   Wed Jan 30 13:32:08 2008 +0100

    x86: replace privileged instructions with paravirt macros
    
    The assembly code in entry_64.S issues a bunch of privileged instructions,
    like cli, sti, swapgs, and others. Paravirt guests are forbidden to do so,
    and we then replace them with macros that will do the right thing.
...

When looking at the above git blame, there are a lot of lines affecting SWAPGS with different commit hashes, but this one is the oldest. We should be able to transfer the macro calls to the lines adjacent to the swapgs instructions. Fortunately, the number of swapgs instructions and the number of SWAPGS macro calls are almost the same in both kernels. With just the names (SWITCH_KERNEL_CR3) of the macros we don’t really know if this switches the kernel CR3 to the user CR3 or the other way round, and when you look at code that was accepted upstream or in distributions, you might see that the macro names have become easier to understand. So let’s dig into the macros, which are declared in the newly #included asm/kaiser.h.

asm/kaiser.h

asm/kaiser.h consists of assembly code (#ifdef __ASSEMBLY__) and C code (#else).  Assembly code in the Linux kernel uses AT&T syntax, which means that the first operands are the sources and the second operands the destinations. The macros look pretty clean (i.e., they are mostly pure assembly code), except for the use of something called PER_CPU_VAR. Modern processors have more than one core, and these cores operate independently. One core might be executing user land, and another core might be in the kernel or about to do the entry into the kernel.

Unfortunately, when we grep for PER_CPU_VAR in the old kernel code, we come up empty-handed:

src/kernel/el5/linux-2.6.18.4$ grep -r PER_CPU_VAR .
src/kernel/el5/linux-2.6.18.4$

Note that a case-insensitive grep comes up with ia64-specific (as in Itanium) code. grepping for PER_CPU, on the other hand, yields a lot of results. Even the KAISER patch itself contains DECLARE_PER_CPU and DEFINE_PER_CPU statements. However, the older kernel doesn’t have DECLARE_PER_CPU_SECTION or DEFINE_PER_CPU_SECTION.

~/src/kernel/git$ grep -r PER_CPU_SECTION . | grep define
./include/linux/percpu-defs.h:#define DECLARE_PER_CPU_SECTION(type, name, sec)                  \
... (More matches in the same file)

Now, we do a chain of git blames until we find something that we consider useful:

git blame include/linux/percpu-defs.h
git show 7c756e6e19e71
git blame 7c756e6e19e71^ -- include/linux/percpu-defs.h # start blaming from one before 7c756e6e19e71; don't forget the '--'
git show 5028eaa97dd1d
# Looks like 5028eaa97dd1d creates the file for the first time, and the definitions used to be in include/asm-generic/percpu.h
git blame 5028eaa97dd1d^ -- include/asm-generic/percpu.h
git show 9b8de7479d0db
git blame 9b8de7479d0db^ -- include/linux/percpu.h
git show 0bd74fa8e29dc

At this point, we finally found the commit that first introduced DEFINE_PER_CPU_SECTION, but this still depends on DEFINE_PER_CPU_PAGE_ALIGNED, which isn’t available yet in 2.6.18. So the search continues:

git blame 0bd74fa8e29dc^ -- include/linux/percpu.h
git show 63cc8c7515646

This commit indicates that DEFINE_PER_CPU_PAGE_ALIGNED was introduced to avoid wasting memory. I don’t believe we really need to care about this. Let’s trace PER_CPU_VAR next:

grep -r PER_CPU_VAR . | grep define
git blame ./arch/x86/include/asm/percpu.h
git show dd17c8f72993f
git blame dd17c8f72993f^ -- arch/x86/include/asm/percpu.h
git show 3334052a321ac

This commit unifies the percpu_32.h and percpu_64.h files into a single header file, and indicates that PER_CPU_VAR only existed in the 32-bit code paths. Instead, the 64-bit code had this, which we grep straight away:

DECLARE_PER_CPU(struct x8664_pda, pda);

~/src/kernel/el5/linux-2.6.18.4$ grep -r x8664_pda
...
include/asm-x86_64/pda.h:11:struct x8664_pda {
...
~/src/kernel/el5/linux-2.6.18.4$ less -N include/asm-x86_64/pda.h
...
     10 /* Per processor datastructure. %gs points to it while the kernel runs */ 
     11 struct x8664_pda {
     12         struct task_struct *pcurrent;   /* Current process */
     13         unsigned long data_offset;      /* Per cpu data offset from linker address */
     14         unsigned long kernelstack;  /* top of kernel stack for current */ 
     15         unsigned long oldrsp;       /* user rsp for system call */
     16 #if DEBUG_STKSZ > EXCEPTION_STKSZ
     17         unsigned long debugstack;   /* #DB/#BP stack. */
     18 #endif
     19         int irqcount;               /* Irq nesting counter. Starts with -1 */   
     20         int cpunumber;              /* Logical CPU number */
     21         char *irqstackptr;      /* top of irqstack */
     22         int nodenumber;             /* number of current node */
     23         unsigned int __softirq_pending;
     24         unsigned int __nmi_count;       /* number of NMI on this CPUs */
     25         int mmu_state;     
     26         struct mm_struct *active_mm;
     27         unsigned apic_timer_irqs;
     28 } ____cacheline_aligned_in_smp;
...

Interesting, this is a per-processor data structure? pda.h doesn’t exist in modern kernels anymore, but some additional googling confirms that, yes, we should be able to use this. I ended up adding unsafe_stack_register_backup to this struct. Through some additional code searching we can find out how to access members of the PDA structure (for assembly, there’s a hint at the top: %gs points to the structure when we’re in kernel space).
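As a hedged illustration of what member access looks like from C: pda.h provides read_pda()/write_pda() helpers that expand to %gs-relative moves. unsafe_stack_register_backup is the field I added, so treat the surrounding code as a sketch rather than the actual backport.

#include <asm/pda.h>

/* Sketch: save/restore a register value in the current CPU's PDA. */
static inline void save_backup_example(unsigned long value)
{
    write_pda(unsafe_stack_register_backup, value);
}

static inline unsigned long load_backup_example(void)
{
    return read_pda(unsafe_stack_register_backup);
}

On the assembly side, existing code such as entry.S appears to reach PDA members as %gs:pda_<member>, with the pda_ offsets generated from asm-offsets.c, so a new member presumably also needs an entry there.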

The rest of asm/kaiser.h consists entirely of C function prototypes, which we can just copy over. At this point, we have successfully backported about 37% of the entire patch. I used this git blame technique to backport the rest of the patch as well. It’s a lot of work: even without counting the time it takes to read through the Meltdown papers and the news to get a good overview of what needs to be done, it took me about two to three weeks (and well over a dozen rebuilds) to arrive at a still-broken patch that panics the system around PID number 370, which is long before you get to log in on the console.

KDE: Windows freeze or flicker but application doesn’t crash

I’m running KDE on two different systems; one of them exhibits the following problem very often, and the other just did for the first time:

Windows stop updating their content, and perhaps flicker a bit. Switching to a different window and back causes the window contents to be redrawn once, but they remain frozen afterwards, which means that the application itself has not crashed.

The following command fixes this:

kwin --replace

You can run this from the run command prompt (Alt+F2, also called Plasma search or krunner), or you could run it in a terminal. (You’d have to make sure the process doesn’t exit when you close the terminal, though.)

If everything appears to be frozen and you can’t get to the run command prompt, you could still switch to a console, log in, and try running the following:

DISPLAY=:0 kwin --replace

Both systems have integrated Intel graphics (quite different chipsets though) and run KDE 5.

The above commands fix the problem for the time being, and your open applications should not be affected by the change. I haven’t looked much into permanent fixes, but changing the rendering backend (System Settings → Display and Monitor → Compositor) may change how often the problem is triggered, or maybe even get rid of it altogether. (I felt that OpenGL 2.0 triggered the problem less often than OpenGL 3.1.)

I’ve noticed a fair amount of traffic to my KDE-related posts. If you run into any weird KDE problems that you don’t know how to fix, feel free to leave a comment and ask.

Meltdown / Spectre Kernel Patch Benchmarks on Older Systems

The Meltdown patch for the Linux kernel makes use of the relatively new PCID (process-context identifier) feature, which tags TLB entries with an address-space ID so that the now much more frequent CR3 switches don’t force a full TLB flush. I still sometimes use my old laptop, which contains a Core 2 Duo CPU (T7250) that does not support PCID, so I did a quick UnixBench run to see what kind of difference the absence of PCID makes. At the end of this article, I have a bonus “benchmark” for an alternative way to mitigate Meltdown: disabling the CPU’s caches. All my tests were performed on Debian Wheezy (currently oldstable) using kernel version 3.16.0-5-amd64.

First of all, here are another person’s results for a CPU that supports PCID. And since that’s in Japanese, here’s the important bit:

Test Before After Change (positive is better)
System Call Overhead 5391.9 4009.7 -25.63%

Now, my tests on the Penryn CPU:

Test Before After Change (positive is better)
Dhrystone 2 using register variables 3360.4 3414.1 +1.60%
Double-Precision Whetstone 724.1 724 -0.01%
Execl Throughput 1351.7 1222.9 -9.53%
File Copy 1024 bufsize 2000 maxblocks 1582 1244 -21.37%
File Copy 256 bufsize 500 maxblocks 1255.9 922.1 -26.58%
File Copy 4096 bufsize 8000 maxblocks 1982.4 1810.6 -8.67%
Pipe Throughput 1672.8 765.4 -54.24%
Pipe-based Context Switching 1108.3 671 -39.46%
Process Creation 1150 1025.3 -10.84%
Shell Scripts (1 concurrent) 1995.7 1909 -4.34%
Shell Scripts (8 concurrent) 1831.8 1743.3 -4.83%
System Call Overhead 1705.6 544.9 -68.05%
System Benchmarks Index Score 1535.8 1160.9 -24.41%

And the raw data in case you are interested:

Before updating:

Test Score Unit Time Iters. Baseline Index
Dhrystone 2 using register variables 39215974.0 lps 10.0 s 7 116700.0 3360.4
Double-Precision Whetstone 3982.6 MWIPS 9.9 s 7 55.0 724.1
Execl Throughput 5812.4 lps 29.2 s 2 43.0 1351.7
File Copy 1024 bufsize 2000 maxblocks 626453.0 KBps 30.0 s 2 3960.0 1582.0
File Copy 256 bufsize 500 maxblocks 207854.8 KBps 30.0 s 2 1655.0 1255.9
File Copy 4096 bufsize 8000 maxblocks 1149781.6 KBps 30.0 s 2 5800.0 1982.4
Pipe Throughput 2080979.1 lps 10.0 s 7 12440.0 1672.8
Pipe-based Context Switching 443337.7 lps 10.0 s 7 4000.0 1108.3
Process Creation 14490.3 lps 30.0 s 2 126.0 1150.0
Shell Scripts (1 concurrent) 8461.7 lpm 60.0 s 2 42.4 1995.7
Shell Scripts (8 concurrent) 1099.1 lpm 60.1 s 2 6.0 1831.8
System Call Overhead 2558469.9 lps 10.0 s 7 15000.0 1705.6
System Benchmarks Index Score: 1535.8

After updating:

Test Score Unit Time Iters. Baseline Index
Dhrystone 2 using register variables 39842314.8 lps 10.0 s 7 116700.0 3414.1
Double-Precision Whetstone 3982.0 MWIPS 9.8 s 7 55.0 724.0
Execl Throughput 5258.5 lps 30.0 s 2 43.0 1222.9
File Copy 1024 bufsize 2000 maxblocks 492638.1 KBps 30.0 s 2 3960.0 1244.0
File Copy 256 bufsize 500 maxblocks 152610.9 KBps 30.0 s 2 1655.0 922.1
File Copy 4096 bufsize 8000 maxblocks 1050156.7 KBps 30.0 s 2 5800.0 1810.6
Pipe Throughput 952188.4 lps 10.0 s 7 12440.0 765.4
Pipe-based Context Switching 268401.0 lps 10.0 s 7 4000.0 671.0
Process Creation 12918.3 lps 30.0 s 2 126.0 1025.3
Shell Scripts (1 concurrent) 8094.2 lpm 60.0 s 2 42.4 1909.0
Shell Scripts (8 concurrent) 1046.0 lpm 60.1 s 2 6.0 1743.3
System Call Overhead 817288.1 lps 10.0 s 7 15000.0 544.9
System Benchmarks Index Score: 1160.9

Now, on to mitigating Meltdown by switching off the CPU caches:

You wouldn’t even want to run UnixBench without CPU caches. Here’s a “simpler” benchmark that tells you why:

# time perl -e 'for (1..1000000) {}'

real 0m0.056s
user 0m0.052s
sys 0m0.000s
# insmod disable_cache.ko
# time perl -e 'for (1..1000000) {}' 

real 0m44.689s
user 0m40.044s
sys 0m0.520s
# rmmod disable_cache

That is, unless you enjoy working on a system that is some 800 times slower (44.7 s vs. 0.056 s). (Don’t try this in a GUI session.)

Nonetheless, here’s some code to disable the CPU caches. (Modified from https://www.linuxquestions.org/questions/linux-kernel-70/disabling-cpu-caches-936077/)

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/smp.h>

MODULE_LICENSE("Dual BSD/GPL");

/* Runs on one CPU: set CR0.CD (bit 30) to disable caching, then flush
 * the caches with wbinvd. */
void _disable_cache(void *p) {
    printk(KERN_ALERT "Disabling L1 and L2 caches on processor %d.\n", smp_processor_id());
    __asm__(".intel_syntax noprefix\n\t"
            "mov rax,cr0\n\t"
            "or rax,(1 << 30)\n\t"      /* CR0.CD = 1: caching disabled */
            "mov cr0,rax\n\t"
            "wbinvd\n\t"                /* write back and invalidate the caches */
            ".att_syntax noprefix\n\t"
            : : : "rax" );
}

/* Runs on one CPU: clear CR0.CD again to re-enable caching. */
void _enable_cache(void *p) {
    printk(KERN_ALERT "Enabling L1 and L2 caches on processor %d.\n", smp_processor_id());
    __asm__(".intel_syntax noprefix\n\t"
            "mov rax,cr0\n\t"
            "and rax,~(1 << 30)\n\t"    /* CR0.CD = 0: caching enabled */
            "mov cr0,rax\n\t"
            "wbinvd\n\t"
            ".att_syntax noprefix\n\t"
            : : : "rax" );
}

static int disable_cache_init(void)
{
    /* on_each_cpu() runs the function on every online CPU */
    on_each_cpu(_disable_cache, NULL, 1);
    return 0;
}

static void disable_cache_exit(void)
{
    on_each_cpu(_enable_cache, NULL, 1);
}

module_init(disable_cache_init);
module_exit(disable_cache_exit);

Makefile:

obj-m += disable_cache.o

all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

Note that you need to indent using tabs in a Makefile. CR0 can only be accessed from ring 0, which is why a kernel module is needed.
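To see that in action, here’s a small user-space sketch (file name and wording are mine). mov from CR0 is a privileged instruction, so executing it in ring 3 raises a general-protection fault and the kernel kills the process with a signal (typically SIGSEGV); the printf is never reached.

/* cr0_userspace.c -- illustrative only: this is expected to crash. */
#include <stdio.h>

int main(void) {
    unsigned long cr0;
    __asm__ volatile("mov %%cr0, %0" : "=r" (cr0)); /* privileged: #GP in ring 3 */
    printf("CR0 = %lx\n", cr0);                     /* never reached */
    return 0;
}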

Here’s some example code to just read the CR0 register on all CPUs:

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/smp.h>

MODULE_LICENSE("Dual BSD/GPL");

/* Runs on one CPU: read CR0 and extract the CD bit (bit 30). */
void cache_status(void *p) {
    long int cr0_30 = 0;
    __asm__(".intel_syntax noprefix\n\t"
            "mov %0, cr0\n\t"
            "and %0, (1 << 30)\n\t"     /* isolate CR0.CD */
            "shr %0, 30\n\t"            /* ... and shift it down to bit 0 */
            ".att_syntax noprefix\n\t"
            : "=r" (cr0_30));
    /* cr0_30 is now 0 (caches enabled) or 1 (caches disabled) */
    printk(KERN_INFO "Processor %d: CR0.CD = %ld\n", smp_processor_id(), cr0_30);
}

static int cache_status_init(void) {
    on_each_cpu(cache_status, NULL, 1);
    return 0;
}

/* Print the status once more on module unload. */
static void cache_status_exit(void) {
    on_each_cpu(cache_status, NULL, 1);
}

module_init(cache_status_init);
module_exit(cache_status_exit);

And the corresponding Makefile:

obj-m += cache_status.o

all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules


KDE: The Window Switcher installation is broken, resources are missing.

So I was highly displeased with the standard Breeze task switcher and thought I’d get a few new ones by clicking the star icon next to the drop-down menu where you select the task switcher. My recommendation is “Grid”. But when I try to use Grid, all I get is this error message:

The Window Switcher installation is broken, resources are missing.
Contact your distribution about this.

Hrm. So then I Google and look at code, waste time trying silly things, just to postpone this problem for another weekend. Well, it’s the next weekend now, and just when I’m about to dive back into the code… I restart X (i.e. re-login), and when I try to bring the message up one more time… it doesn’t appear anymore, and Grid and all the others are working! I try installing one more, and sure enough, it doesn’t work, but after one more re-login: tada. So the answer to this problem might be: restart your KDE session.

It’s still a bug though, but unfortunately I’m no longer interested in looking into it now. :(