I'm looking for a faster and trickier way to multiply two 4x4 matrices in C. My current research is focused on x86-64 assembly with SIMD extensions. So far, I've created a function which is about 6x faster than a naive C implementation, which has exceeded my expectations for the performance improvement. Unfortunately, this holds only when no optimization flags are used for compilation (GCC 4.7). With -O2, the C version becomes faster and my effort becomes meaningless.
I know that modern compilers use complex optimization techniques to produce nearly perfect code, usually faster than an ingenious piece of hand-crafted assembly. But in a minority of performance-critical cases, a human may try to fight the compiler for clock cycles, especially when some mathematics backed by a modern ISA can be exploited (as in my case).
My function looks as follows (AT&T syntax, GNU Assembler):
.text
.globl matrixMultiplyASM
.type matrixMultiplyASM, @function
matrixMultiplyASM:
    movaps    (%rdi), %xmm0         # fetch the first matrix (use four registers)
    movaps  16(%rdi), %xmm1
    movaps  32(%rdi), %xmm2
    movaps  48(%rdi), %xmm3
    xorq    %rcx, %rcx              # reset (forward) loop iterator
.ROW:
    movss   (%rsi), %xmm4           # Compute four values (one row) in parallel:
    shufps  $0x0, %xmm4, %xmm4      # 4x 4FP mul's, 3x 4FP add's, 6x mov's per row,
    mulps   %xmm0, %xmm4            # expressed in four sequences of 5 instructions,
    movaps  %xmm4, %xmm5            # executed 4 times for 1 matrix multiplication.
    addq    $0x4, %rsi

    movss   (%rsi), %xmm4           # movss + shufps comprise the _mm_set1_ps intrinsic
    shufps  $0x0, %xmm4, %xmm4
    mulps   %xmm1, %xmm4
    addps   %xmm4, %xmm5
    addq    $0x4, %rsi              # manual pointer arithmetic simplifies addressing

    movss   (%rsi), %xmm4
    shufps  $0x0, %xmm4, %xmm4
    mulps   %xmm2, %xmm4            # actual computation happens here
    addps   %xmm4, %xmm5
    addq    $0x4, %rsi

    movss   (%rsi), %xmm4           # one mulps operand fetched per sequence
    shufps  $0x0, %xmm4, %xmm4      # |
    mulps   %xmm3, %xmm4            # the other is already waiting in %xmm[0-3]
    addps   %xmm4, %xmm5
    addq    $0x4, %rsi              # 5 preceding comments stride among the 4 blocks

    movaps  %xmm5, (%rdx,%rcx)      # store the resulting row, actually, a column
    addq    $0x10, %rcx             # (matrices are stored in column-major order)
    cmpq    $0x40, %rcx
    jne     .ROW
    ret
.size matrixMultiplyASM, .-matrixMultiplyASM
It calculates a whole column of the resulting matrix per iteration, by processing four floats packed in 128-bit SSE registers. The full vectorisation is possible with a bit of math (operation reordering and aggregation) and mulps/addps instructions for parallel multiplication/addition of packs of four floats. The code reuses the registers meant for passing parameters (%rdi, %rsi, %rdx: GNU/Linux ABI), benefits from (inner) loop unrolling, and holds one matrix entirely in XMM registers to reduce memory reads. As you can see, I have researched the topic and taken my time to implement it as best I can.
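For reference, the same technique can also be written with SSE intrinsics instead of raw assembly. The sketch below is only an illustration of mine, not code from my project: it assumes mat4_t holds sixteen floats in column-major order with 16-byte alignment (which the movaps instructions above also require), and matrixMultiplyIntrinsics is a name I made up:

#include <xmmintrin.h>

/* Assumed layout: sixteen floats, column-major, 16-byte aligned. */
typedef struct { float m[16]; } __attribute__((aligned(16))) mat4_t;

void matrixMultiplyIntrinsics(const mat4_t *a, const mat4_t *b, mat4_t *r)
{
    /* Hold the whole first matrix in registers, as the assembly does. */
    __m128 a0 = _mm_load_ps(&a->m[0]);
    __m128 a1 = _mm_load_ps(&a->m[4]);
    __m128 a2 = _mm_load_ps(&a->m[8]);
    __m128 a3 = _mm_load_ps(&a->m[12]);

    for (int col = 0; col < 4; ++col) {
        /* Broadcast one scalar of b per step and accumulate one column. */
        __m128 res = _mm_mul_ps(a0, _mm_set1_ps(b->m[4 * col + 0]));
        res = _mm_add_ps(res, _mm_mul_ps(a1, _mm_set1_ps(b->m[4 * col + 1])));
        res = _mm_add_ps(res, _mm_mul_ps(a2, _mm_set1_ps(b->m[4 * col + 2])));
        res = _mm_add_ps(res, _mm_mul_ps(a3, _mm_set1_ps(b->m[4 * col + 3])));
        _mm_store_ps(&r->m[4 * col], res);   /* one result column per pass */
    }
}

With SSE, _mm_set1_ps from memory typically compiles to the same movss + shufps pair used above, so this version lets the compiler schedule an equivalent instruction mix itself.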
The naive C calculation conquering my code looks like this:
void matrixMultiplyNormal(mat4_t *mat_a, mat4_t *mat_b, mat4_t *mat_r) {
    for (unsigned int i = 0; i < 16; i += 4)
        for (unsigned int j = 0; j < 4; ++j)
            mat_r->m[i + j] = (mat_b->m[i + 0] * mat_a->m[j +  0])
                            + (mat_b->m[i + 1] * mat_a->m[j +  4])
                            + (mat_b->m[i + 2] * mat_a->m[j +  8])
                            + (mat_b->m[i + 3] * mat_a->m[j + 12]);
}
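A minimal way to time the two versions against each other could look like the following sketch. This is entirely my own scaffolding, not the harness I actually used: the iteration count, the clock choice, and the initial values are arbitrary, and it reuses the mat4_t typedef assumed in the intrinsics sketch above:

#include <stdio.h>
#include <time.h>

/* Uses the mat4_t typedef from the intrinsics sketch above. */
extern void matrixMultiplyASM(mat4_t *a, mat4_t *b, mat4_t *r);

static double seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);   /* may need -lrt on older glibc */
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(void)
{
    mat4_t a, b, r;
    for (int i = 0; i < 16; ++i) {
        a.m[i] = (float)i;
        b.m[i] = (float)(16 - i);
    }

    double t0 = seconds();
    for (long n = 0; n < 10000000L; ++n)
        matrixMultiplyASM(&a, &b, &r);     /* swap in matrixMultiplyNormal to compare */
    double t1 = seconds();

    /* Printing one element keeps the compiler from discarding the work. */
    printf("%.3f s, r(0,0) = %f\n", t1 - t0, r.m[0]);
    return 0;
}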
I have investigated the optimised assembly output of the above C code which, while keeping floats in XMM registers, does not involve any parallel operations: just scalar calculations, pointer arithmetic and conditional jumps. The compiler's code seems less deliberate, but it is still slightly more effective than my vectorised version, which I expected to be about 4x faster. I'm sure that the general idea is correct; programmers do similar things with rewarding results. But what is wrong here? Are there any register allocation or instruction scheduling issues I am not aware of? Do you know any x86-64 assembly tools or tricks to support my battle against the machine?