Baby steps in SIMD (SSE/AVX)

In case you have never used SIMD instructions, this post explores the real basics. For example: what is SIMD? SIMD stands for “Single instruction, multiple data”. We’re computing more than one “math problem” with a single instruction. CPUs have had instructions to do this for a long time. If you remember the “Pentium MMX” hype – that was the first time SIMD instructions came to the x86 architecture.

However, with some trickery, you can do some limited SIMD without actually using these instructions. Let’s say we want to add 1 to two values at the same time. If we put these two values right next to each other in memory, we can interpret them as a single larger datatype. That’s not all that straightforward to understand, so here’s an example: you can interpret two 8-bit values right next to each other as one 16-bit value, right? To increment both values at the same time, you do value + 0x0101, which is just one assembly instruction. So with no special instructions at all, on a 64-bit platform you can increment eight 8-bit values at the same time by adding 0x0101010101010101.

Okay, that feels pretty hacky and unreliable. Once one of the 8-bit values is incremented past 255, it wraps around and the carry spills into the neighboring value! That’s pretty bad.

So SSE provides 128-bit registers that allow you to comfortably work on e.g. four 32-bit floats at the same time, without any spilling. AVX provides 256-bit registers, and AVX512 provides 512-bit registers. Woo! Unfortunately AVX512 isn’t widely available yet.

So how do you use this? Let’s start with SSE, though you’ll see that updating code to use AVX or AVX512 instead is pretty easy. We’ll look at some very basic example code to add two vectors together.

```#include <xmmintrin.h> // Need this in order to be able to use the SSE "intrinsics" (which provide access to instructions without writing assembly)
#include <stdio.h>

int main(int argc, char **argv) {
    float a[4], b[4], result[4]; // a and b: input, result: output
    __m128 va, vb, vresult; // these vars will "point" to SIMD registers

    // initialize arrays (just {0,1,2,3})
    for (int i = 0; i < 4; i++) {
        a[i] = (float)i;
        b[i] = (float)i;
    }

    // load arrays into SIMD registers
    va = _mm_loadu_ps(a);
    vb = _mm_loadu_ps(b);

    // add the four floats in va to the four floats in vb, in one instruction
    vresult = _mm_add_ps(va, vb);

    // store contents of SIMD register into memory
    _mm_storeu_ps(result, vresult); // https://software.intel.com/en-us/node/524262

    // print out result
    for (int i = 0; i < 4; i++) {
        printf("%f\n", result[i]);
    }

    return 0;
}```

That doesn’t seem so hard, does it? To access SIMD instructions without writing assembly code, we use something called “intrinsics”, which make the SIMD instructions look like regular C functions. Don’t worry though, these functions are inline and mostly just consist of the assembly instruction itself, so you probably won’t see any difference in performance.

In this example, we’re using three intrinsics, _mm_loadu_ps, _mm_add_ps, and _mm_storeu_ps. _mm_loadu_ps copies four float values from memory into the SSE register. We do this twice and are thus using two SSE registers. (We have 16 SSE registers available on 64-bit CPUs.) Then, we use _mm_add_ps to, in a single instruction, add the four floats in one register to the corresponding floats in the other register. (So we get a[0]+b[0], a[1]+b[1], a[2]+b[2], a[3]+b[3].) This is stored in a third SSE register. Using _mm_storeu_ps, we put the contents of this result register into the result float array.

We can compile and run this without any extra linking:

```$ gcc -Wall -o sse_test sse_test.c
$ ./sse_test
0.000000
2.000000
4.000000
6.000000```

Wow, it worked!

_mm_loadu_ps/_mm_storeu_ps have sister functions without the ‘u’ (which stands for “unaligned”). These functions require memory alignment: the memory has to start at an address that is cleanly divisible by a certain number. Aligned accesses are generally faster (unless something unfortunate happens in the CPU caching department).

To get the alignment, we just declare the arrays like this:

```    float a[4] __attribute__ ((aligned (16)));
    float b[4] __attribute__ ((aligned (16)));
    float result[4] __attribute__ ((aligned (16)));```

And then change all instances of _mm_loadu_ps/_mm_storeu_ps to _mm_load_ps/_mm_store_ps. Intel’s documentation states that we need 16-byte alignment for these. GCC’s attribute syntax just looks a bit obscure; it’s described here: https://gcc.gnu.org/onlinedocs/gcc-6.4.0/gcc/Common-Variable-Attributes.html#Common-Variable-Attributes

Cool, that’s SSE. What about AVX? Well, it turns out that we just need to change the included header file, the array sizes and the names of the intrinsics! (Note that you can include all intrinsics available by doing #include <x86intrin.h> instead.)

So here’s the same thing using AVX, and with aligned memory accesses:

```#include <immintrin.h> // Need this in order to be able to use the AVX "intrinsics" (which provide access to instructions without writing assembly)
#include <stdio.h>

int main(int argc, char **argv) {
    float a[8] __attribute__ ((aligned (32))); // Intel documentation states that we need 32-byte alignment to use _mm256_load_ps/_mm256_store_ps
    float b[8] __attribute__ ((aligned (32))); // GCC's syntax makes this look harder than it is: https://gcc.gnu.org/onlinedocs/gcc-6.4.0/gcc/Common-Variable-Attributes.html#Common-Variable-Attributes
    float result[8] __attribute__ ((aligned (32)));
    __m256 va, vb, vresult; // __m256 is a 256-bit datatype, so it can hold 8 32-bit floats

    // initialize arrays (just {0,1,2,3,4,5,6,7})
    for (int i = 0; i < 8; i++) {
        a[i] = (float)i;
        b[i] = (float)i;
    }

    // load arrays into SIMD registers
    va = _mm256_load_ps(a);
    vb = _mm256_load_ps(b);

    vresult = _mm256_add_ps(va, vb); // https://software.intel.com/en-us/node/523406

    // store contents of SIMD register into memory
    _mm256_store_ps(result, vresult); // https://software.intel.com/en-us/node/694665

    // print out result
    for (int i = 0; i < 8; i++) {
        printf("%f\n", result[i]);
    }

    return 0;
}```

So let’s compile that:

```$ gcc -Wall -o avx256_test_aligned avx256_test_aligned.c
avx256_test_aligned.c: In function ‘main’:
avx256_test_aligned.c:15:8: warning: AVX vector return without AVX enabled changes the ABI [-Wpsabi]
~~~^~~~~~~~~~~~~~~~~~~
In file included from /usr/lib/gcc/x86_64-linux-gnu/6/include/immintrin.h:41:0,
from avx256_test_aligned.c:1:
/usr/lib/gcc/x86_64-linux-gnu/6/include/avxintrin.h:852:1: error: inlining failed in call to always_inline ‘_mm256_store_ps’: target specific option mismatch
_mm256_store_ps (float *__P, __m256 __A)
^~~~~~~~~~~~~~~
avx256_test_aligned.c:18:5: note: called from here
...```

Oh no, what happened? It didn’t complain when we used SSE instructions (perhaps because all CPUs of the implicitly selected architecture (x86_64) support SSE, which was first introduced a very long time ago), but it’s complaining that our use of AVX instructions is causing a “target-specific option mismatch”. That’s a bit cryptic, but it means that our target (“vanilla” x86_64) does not support AVX instructions. To make this work, we need to supply the -mavx option:

```$ gcc -Wall -mavx -o avx256_test_aligned avx256_test_aligned.c
$ ./avx256_test_aligned
0.000000
2.000000
4.000000
6.000000
8.000000
10.000000
12.000000
14.000000```

Nice! BTW, for AVX512, we just need to change the 256s to 512s, the array sizes from 8 to 16 (and the alignment to 64 bytes), and supply -mavx512f to gcc.

Addendum: if you execute the AVX512 code on a CPU that doesn’t support it, you get this:

```$ gcc -mavx512f -Wall -o avx_test_aligned avx_test_aligned.c
$ ./avx_test_aligned
Illegal instruction```

Second addendum: if you use the aligned instructions without actually aligning your arrays, you get this:

```$ ./avx_with_bad_alignment
Segmentation fault```

Let me know if you have any questions.