It's done this way so that different cores modifying different fields won't have to bounce the cache line containing both of them between their caches. In general, for a processor to access some data in memory, the entire cache line containing it must be in that processor's local cache. If it's modifying that data, that cache entry usually must be the only copy in any cache in the system (the Exclusive state in MESI/MOESI-style cache coherence protocols). When separate cores repeatedly modify different data that happens to live on the same cache line, and thus waste time moving that whole line back and forth, that's known as false sharing.
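As a minimal sketch of the effect (made-up names, assuming a 64-byte cache line), compare two counters hammered by two different threads:

```cpp
#include <atomic>
#include <thread>

// Both counters will typically land in the same cache line, so every
// increment on one core invalidates that line in the other core's cache:
// false sharing.
struct counters_shared_line {
    std::atomic<long> a{0};
    std::atomic<long> b{0};
};

// A 64-byte pad between them (the padding technique under discussion)
// guarantees the counters are at least a full line apart, so the cores
// stop fighting over a single line.
struct counters_padded {
    std::atomic<long> a{0};
    char              pad_[64];
    std::atomic<long> b{0};
};

int main() {
    counters_padded c;
    std::thread t1([&] { for (int i = 0; i < 10000000; ++i) c.a.fetch_add(1, std::memory_order_relaxed); });
    std::thread t2([&] { for (int i = 0; i < 10000000; ++i) c.b.fetch_add(1, std::memory_order_relaxed); });
    t1.join();
    t2.join();
    // Swapping counters_padded for counters_shared_line here typically makes
    // these loops measurably slower on a multi-core machine.
}
```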
In the particular example you give, one core can be enqueueing an entry (reading (shared) buffer_ and writing (exclusive) only enqueue_pos_) while another dequeues (shared buffer_ and exclusive dequeue_pos_) without either core stalling on a cache line owned by the other.
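For concreteness, the kind of layout under discussion looks roughly like the sketch below (a Vyukov-style bounded MPMC queue; the class name and the 64-byte cache line size are assumptions, and everything except the field layout is omitted):

```cpp
#include <atomic>
#include <cstddef>

static size_t const cacheline_size = 64;
typedef char cacheline_pad_t[cacheline_size];

template<typename T>
class mpmc_queue {
    // cells, constructor, enqueue() and dequeue() omitted for brevity
    cacheline_pad_t     pad0_;         // keeps the fields below off whatever
                                       // line precedes this object
    T*                  buffer_;       // read by producers and consumers (shared)
    size_t              buffer_mask_;  // read-only after construction (shared)
    cacheline_pad_t     pad1_;
    std::atomic<size_t> enqueue_pos_;  // written only when enqueueing
    cacheline_pad_t     pad2_;
    std::atomic<size_t> dequeue_pos_;  // written only when dequeueing
    cacheline_pad_t     pad3_;
};
```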
The padding at the beginning means that buffer_ and buffer_mask_ end up on the same cache line, rather than split across two lines and thus requiring double the memory traffic to access.
I'm unsure whether the technique is entirely portable. The assumption is that each cacheline_pad_t will itself be aligned to a 64-byte (its size) cache line boundary, and hence whatever follows it will be on the next cache line. So far as I know, the C and C++ language standards only require this of whole structures, so that they can live in arrays nicely without violating the alignment requirements of any of their members. (see comments)
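If one wanted that assumption to be checked rather than trusted, a compile-time assertion on the member offsets is one option; the struct and field names below are made up for illustration:

```cpp
#include <cstddef>
#include <type_traits>

typedef char cacheline_pad_t[64];

// Standalone sketch with illustrative names, not the code from the question.
struct padded_pair {
    cacheline_pad_t pad0_;
    long            first_;
    cacheline_pad_t pad1_;
    long            second_;
};

// offsetof is only guaranteed for standard-layout types, so check that first.
static_assert(std::is_standard_layout<padded_pair>::value,
              "offsetof needs a standard-layout type");
// Assert that the two hot fields start at least a full cache line apart.
static_assert(offsetof(padded_pair, second_) - offsetof(padded_pair, first_)
                  >= sizeof(cacheline_pad_t),
              "first_ and second_ could end up on the same cache line");
```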
The attribute approach would be more compiler-specific, but might cut the size of this structure in half, since the padding would be limited to rounding up each element to a full cache line. That could be quite beneficial if one had a lot of these.
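For instance, with C++11's alignas (or compiler-specific spellings such as GCC/Clang's __attribute__((aligned(64))) or MSVC's __declspec(align(64))), the compiler rounds each marked member up to a cache-line boundary itself; the sketch below mirrors the layout above and assumes a 64-byte line:

```cpp
#include <atomic>
#include <cstddef>

// Sketch of the attribute/alignas variant: each over-aligned member is padded
// out to a 64-byte boundary by the compiler, so the hot fields still sit on
// separate cache lines but no explicit pad arrays are written out.
template<typename T>
class mpmc_queue_aligned {
    alignas(64) T*                  buffer_;       // buffer_ and buffer_mask_ share one line
                size_t              buffer_mask_;
    alignas(64) std::atomic<size_t> enqueue_pos_;  // its own line
    alignas(64) std::atomic<size_t> dequeue_pos_;  // its own line
    // On a typical 64-bit target (8-byte pointers and size_t) this is
    // sizeof == 192, versus 288 for the explicitly padded sketch above.
};
```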
The same concept applies in C as well as C++.