
linux - Performance difference between IPC shared memory and thread memory

I frequently hear that accessing a shared memory segment between processes has no performance penalty compared to accessing process memory shared between threads. In other words, a multi-threaded application will not be faster than a set of processes using shared memory (excluding locking or other synchronization issues).

But I have my doubts:

1) shmat() maps the local process's virtual memory onto the shared segment. This translation has to be performed for each shared memory address and can represent a significant cost. In a multi-threaded application no extra translation is required: all VM addresses are converted to physical addresses, just as in a regular process that does not access shared memory.

2) The shared memory segment must be maintained somehow by the kernel. For example, when all processes attached to the segment have terminated, the segment is still up and can eventually be re-attached by newly started processes. There could be some overhead related to kernel operations on the shm segment.

Is a multi-process shared memory system as fast as a multi-threaded application?
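For concreteness, here is a minimal sketch of the System V shmget()/shmat() flow in question (the key 0x1234 and the segment size are arbitrary illustrative choices; a real program might derive the key with ftok()):

    /* Minimal sketch of the System V shared memory flow in question.
     * The key (0x1234) and size are arbitrary; error handling is minimal. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    int main(void)
    {
        /* Create (or open) a 4 KB segment. */
        int shmid = shmget(0x1234, 4096, IPC_CREAT | 0600);
        if (shmid == -1) { perror("shmget"); return 1; }

        /* Map it into this process's address space. */
        char *p = shmat(shmid, NULL, 0);
        if (p == (void *)-1) { perror("shmat"); return 1; }

        /* From here on, p behaves exactly like ordinary memory. */
        strcpy(p, "hello from shm");
        printf("%s\n", p);

        shmdt(p);                       /* detach from this process */
        shmctl(shmid, IPC_RMID, NULL);  /* mark segment for removal */
        return 0;
    }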


1 Answer


1) shmat() maps the local process's virtual memory onto the shared segment. This translation has to be performed for each shared memory address and can represent a significant cost, relative to the number of shm accesses. In a multi-threaded application there is no extra translation required: all VM addresses are converted to physical addresses, as in a regular process that does not access shared memory.

There is no overhead compared to regular memory access, aside from the initial cost to set up the shared pages - populating the page table in the process that calls shmat(). In most flavours of Linux that is one page-table entry (4 or 8 bytes) per 4 KB of shared memory.
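A rough way to check this is the benchmark sketch below (the 64 MB size and the 64-byte stride are illustrative assumptions, not fixed values): after the first touch has populated the page tables, walking a shmat()'d segment should take about the same time as walking a malloc()'d buffer of the same size.

    /* Benchmark sketch: post-setup access cost of shm vs. heap memory. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    #define SZ (64UL * 1024 * 1024)           /* 64 MB */

    static double walk(volatile char *buf)
    {
        struct timespec t0, t1;
        long sum = 0;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t i = 0; i < SZ; i += 64)   /* one read per cache line */
            sum += buf[i];
        clock_gettime(CLOCK_MONOTONIC, &t1);
        (void)sum;
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }

    int main(void)
    {
        int shmid = shmget(IPC_PRIVATE, SZ, IPC_CREAT | 0600);
        if (shmid == -1) { perror("shmget"); return 1; }
        char *shm = shmat(shmid, NULL, 0);
        char *heap = malloc(SZ);
        if (shm == (void *)-1 || heap == NULL) { perror("alloc"); return 1; }

        memset(shm, 1, SZ);    /* first touch: faults populate the page table */
        memset(heap, 1, SZ);

        printf("shm  walk: %.4f s\n", walk(shm));
        printf("heap walk: %.4f s\n", walk(heap));

        shmdt(shm);
        shmctl(shmid, IPC_RMID, NULL);
        free(heap);
        return 0;
    }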

It is (for all relevant purposes) the same cost whether the pages are allocated as shared memory or within the same process.

2) The shared memory segment must be maintained somehow by the kernel. I do not know what that "somehow" means in terms of performance, but, for example, when all processes attached to the shm have terminated, the segment is still up and can eventually be re-attached by newly started processes. There must be at least some degree of overhead related to the things the kernel needs to check during the lifetime of the shm segment.

Whether shared or not, each page of memory has a "struct page" attached to it, with some data about the page. One of the items is a reference count. When a page is given out to a process [whether it is through "shmat" or some other mechanism], the reference count is incremented. When it is freed through some means, the reference count is decremented. If the decremented count is zero, the page is actually freed - otherwise "nothing more happens to it".
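As a simplified illustration of that scheme (this is loosely modelled on the kernel's get_page()/put_page() helpers, and is NOT actual kernel code):

    /* Simplified model of per-page reference counting. */
    #include <stdbool.h>
    #include <stdio.h>

    struct page {
        int refcount;       /* how many users still hold this page */
        /* ... other per-page bookkeeping omitted ... */
    };

    static void get_page(struct page *pg)
    {
        pg->refcount++;     /* a process mapped the page (shmat, fork, ...) */
    }

    static bool put_page(struct page *pg)
    {
        /* A user released the page; free it only when nobody is left. */
        return --pg->refcount == 0;
    }

    int main(void)
    {
        struct page pg = { .refcount = 0 };
        get_page(&pg);      /* process A attaches */
        get_page(&pg);      /* process B attaches */
        printf("freed after A detaches? %d\n", put_page(&pg));  /* 0 */
        printf("freed after B detaches? %d\n", put_page(&pg));  /* 1 */
        return 0;
    }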

The overhead is basically zero compared to any other allocated memory. The same mechanism is used for pages for other purposes anyway - say, for example, a page is also used by the kernel and your process dies; the kernel needs to know not to free that page until it has been released by the kernel as well as by the user process.

The same thing happens when a "fork" is created. When a process is forked, the entire page table of the parent process is essentially copied into the child process, and all writable pages are made read-only. Whenever a write happens, the kernel takes a fault, which leads to that page being copied - so there are now two copies of the page, and the process doing the writing can modify its copy without affecting the other process. Once the child (or parent) process dies, all pages still owned by BOTH processes [such as the code space that never gets written, and probably a bunch of common data that never got touched, etc.] obviously can't be freed until BOTH processes are "dead". So again, the reference-counted pages come in useful here: we only count down the ref-count on each page, and when the ref-count reaches zero - that is, when all processes using that page have freed it - the page is actually returned as a "useful page".
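A small demo of that copy-on-write behaviour (the buffer contents here are arbitrary): the child's write faults and gets a private copy of the page, so the parent still sees the original data.

    /* Demo of copy-on-write after fork(): both processes start with the
     * same physical pages; the child's write triggers a private copy. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        char *buf = malloc(4096);
        strcpy(buf, "original");

        pid_t pid = fork();
        if (pid == 0) {                   /* child */
            strcpy(buf, "modified");      /* fault -> kernel copies the page */
            printf("child  sees: %s\n", buf);
            exit(0);
        }
        waitpid(pid, NULL, 0);
        printf("parent sees: %s\n", buf); /* still "original" */
        free(buf);
        return 0;
    }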

Exactly the same thing happens with shared libraries. If only one process uses a shared library, its pages are freed when that process ends. But if two, three or 100 processes use the same shared library, the code obviously has to stay in memory until it is no longer needed by any of them.

So, basically, all pages in the system are already reference counted. There is very little extra overhead.

