shared library address space - c++

While I was studying shared libraries, I read this statement:
Although the code of a shared library is shared among multiple
processes, its variables are not. Each process that uses the library
has its own copies of the global and static variables that are defined
within the library.
I just have a few questions.
Is the code of each process in a separate address space?
Is shared-library code in some global (unique) address space?
I am just a beginner, so please help me understand.
Thanks!

Shared libraries are loaded into a process by memory-mapping the file into some portion of the process's address-space. When multiple processes load the same library, the OS simply lets them share the same physical RAM.
Portions of the library that can be modified, such as static globals, are generally loaded in copy-on-write mode: when a write is attempted, a page fault occurs, the kernel responds by copying the affected page to another physical page of RAM (for that process only), the mapping is redirected to the new page, and finally the write operation completes.
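Here is a minimal sketch of that copy-on-write behavior using a private file mapping (the file name is made up, and error handling is kept minimal):

    // cow_demo.cpp -- demonstrate copy-on-write with a private mapping.
    // Assumes a file named "demo.dat" (at least one page long) exists.
    #include <sys/mman.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        int fd = open("demo.dat", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        // MAP_PRIVATE gives this process a copy-on-write view of the file.
        char* p = static_cast<char*>(
            mmap(nullptr, 4096, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0));
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        p[0] = 'X';  // page fault: the kernel copies this page for us alone
        std::printf("first byte in our mapping: %c\n", p[0]);
        // The file on disk, and any other process's mapping, are unchanged.

        munmap(p, 4096);
        close(fd);
        return 0;
    }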
To answer your specific points:
All processes have their own address space. The sharing of physical memory between processes is invisible to each process (unless they share memory deliberately via a shared-memory API).
All data and code live in physical RAM, which is a kind of address space. Most of the addresses you are likely to see, however, are virtual memory addresses belonging to the address space of one process or another, even if that "process" is the kernel.
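If you want to convince yourself that each process really gets its own copy of a library's globals, here is a small experiment (hypothetical file names; gcc/Linux build commands shown in comments):

    // counter.cpp -- build: g++ -shared -fPIC counter.cpp -o libcounter.so
    static int counter = 0;           // every process gets its own copy

    extern "C" int bump() { return ++counter; }

    // main.cpp -- build: g++ main.cpp -o main -L. -lcounter -Wl,-rpath,'$ORIGIN'
    #include <cstdio>
    extern "C" int bump();

    int main() {
        // However many instances of this program are running, each one
        // sees its counter start at 0: the code pages are shared, but
        // the writable data is copy-on-write, i.e. per process.
        for (int i = 0; i < 3; ++i)
            std::printf("%d\n", bump());   // always prints 1, 2, 3
        return 0;
    }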

Related

Does dlopen create multiple library instances?

I can't seem to find an answer to this after searching the net.
When I use dlopen, the first call seems to take longer than any call after that, even when the later calls come from other instances of the program.
Does dlopen load up the so into memory once and have the OS save it so that any following calls even from another instance of the program point to the same spot in memory?
So basically, do 3 instances of a program using a library mean 3 instances of the same .so are loaded into memory, or is there only one instance in memory?
Thanks
Does dlopen load up the so into memory once and have the OS save it so that any following calls even from another instance of the program point to the same spot in memory?
Multiple calls to dlopen from within a single process are guaranteed not to load the library more than once. From the man page:
If the same shared object is loaded again with dlopen(), the same
object handle is returned. The dynamic linker maintains reference
counts for object handles, so a dynamically loaded shared object is
not deallocated until dlclose() has been called on it as many times
as dlopen() has succeeded on it.
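A quick way to see that guarantee in action (a sketch assuming glibc's libm.so.6 is present; build with g++ demo.cpp -ldl):

    #include <dlfcn.h>
    #include <cstdio>

    int main() {
        void* h1 = dlopen("libm.so.6", RTLD_NOW);
        void* h2 = dlopen("libm.so.6", RTLD_NOW);
        if (!h1 || !h2) { std::fprintf(stderr, "%s\n", dlerror()); return 1; }

        std::printf("same handle: %s\n", h1 == h2 ? "yes" : "no");  // "yes"

        // The reference count is now 2, so the library stays loaded
        // until dlclose has been called twice.
        dlclose(h2);
        dlclose(h1);
        return 0;
    }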
When the first call to dlopen happens, the library is mmaped into the calling process. There are usually at least two separate mmap calls: the .text and .rodata sections (which usually reside in a single RO segment) are mapped read-only, the .data and .bss sections are mapped read-write.
A subsequent dlopen from another process performs the same mmaps. However the OS does not have to load any of the read-only data from disk -- it merely increments reference counts on the pages already loaded for the first dlopen call. That is the sharing in "shared library".
So basically, do 3 instances of a program using a library mean 3 instances of the same .so are loaded into memory, or is there only one instance in memory?
Depends on what you call an "instance".
Each process will have its own set of (dynamically allocated) runtime loader structures describing this library, and each set will contain an "instance" of the shared library (which can be loaded at a different address in each process). Each process will also have its own instance of the writable data (which uses copy-on-write semantics). But the read-only mappings will all occupy the same physical memory (though they can appear at different addresses in each of the processes).
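To observe the "different address in each process" part yourself, run two instances of a sketch like this (again assuming libm.so.6) and compare the printed addresses:

    #include <dlfcn.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        void* h = dlopen("libm.so.6", RTLD_NOW);
        if (!h) { std::fprintf(stderr, "%s\n", dlerror()); return 1; }

        // With ASLR, two processes will typically print different
        // addresses, even though the read-only pages behind them
        // occupy the same physical memory.
        std::printf("pid %d: cos is mapped at %p\n", (int)getpid(),
                    dlsym(h, "cos"));
        dlclose(h);
        return 0;
    }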

How do DLLs handle concurrency from multiple processes?

I understand from Eric Lippert's answer that "two processes can share non-private memory pages. If twenty processes all load the same DLL, the processes all share the memory pages for that code. They don't share virtual memory address space, they share memory."
Now, if the same DLL file on the hard disk, after being loaded into applications, shares the same physical memory (be it RAM or page files) but is mapped into different virtual memory address spaces, wouldn't that make it quite difficult to handle concurrency?
As I understand it, concurrency in C++ is mostly about threading: a process can start multiple threads, each of which can run on a separate core, so when different threads call into the DLL at the same time there can be data races, and we need mutexes, locks, signals, condition variables and so on.
But how does a DLL handle multiple processes? The same kind of data race can happen, can't it? What are the tools to handle that? Still the same toolset?
Now, if the same DLL file on the hard disk, after being loaded into applications, shares the same physical memory (be it RAM or page files) but is mapped into different virtual memory address spaces, wouldn't that make it quite difficult to handle concurrency?
As other answers have noted, the concurrency issues are of no concern if the shared memory is never written after it is initialized, which is typically the case for DLLs. If you are attempting to alter the code or resources in a DLL by writing into memory, odds are good you have a bad pointer somewhere and the best thing to do is to crash with an access violation.
However I wanted to also briefly follow up on your concern:
... mapped to different virtual memory address spaces ...
In practice we try very hard to avoid this happening because when it does, there can be a serious user-noticeable performance problem when loading code pages for the first time. (And of course a possible large increase in working set, which causes other performance problems.)
The code in a DLL often contains hard-coded virtual memory addresses, on the assumption that the code will be loaded into a known-at-compile-time virtual memory "base" address. If this assumption is violated at runtime -- because there's another DLL already there, for example -- then all those hard-coded addresses need to be patched at runtime, which is expensive.
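As an illustration, here is a Windows-specific sketch (user32.dll is just an example) that compares a DLL's preferred base address, recorded in its PE header, with the address the loader actually chose:

    #include <windows.h>
    #include <cstdio>

    int main() {
        HMODULE mod = LoadLibraryA("user32.dll");
        if (!mod) return 1;

        // The HMODULE is the actual load address; the preferred base
        // lives in the PE optional header.
        auto dos = reinterpret_cast<IMAGE_DOS_HEADER*>(mod);
        auto nt  = reinterpret_cast<IMAGE_NT_HEADERS*>(
            reinterpret_cast<BYTE*>(mod) + dos->e_lfanew);

        std::printf("loaded at:      %p\n", (void*)mod);
        std::printf("preferred base: %p\n", (void*)nt->OptionalHeader.ImageBase);
        // If the two differ, the loader had to relocate (rebase) the DLL.
        return 0;
    }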
If you want some historical details, see Raymond's article on the subject: https://blogs.msdn.microsoft.com/oldnewthing/20041217-00/?p=36953/
DLLs contain multiple "segments", and each segment has a descriptor telling Windows its characteristics. This is a 32-bit DWORD. Code segments obviously have the code bit set, and generally also the shareable bit. Read-only data can also be shareable, whereas writable data generally does not have the shareable flag.
Now, you can set an unusual combination of characteristics on an extra segment: writable and shareable. That is not the default, and indeed it might cause race conditions. So the final answer to your question is: the problem is chiefly avoided by the default characteristics of segments, and any DLL that has a segment with non-standard characteristics must deal with the self-inflicted problems.
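For completeness, this is roughly what such a non-default shared segment looks like with MSVC (a sketch; the section name ".shared" is arbitrary):

    #include <windows.h>

    // Variables must be explicitly initialized to land in the named segment.
    #pragma data_seg(".shared")
    volatile LONG g_usage_count = 0;   // one copy, seen by every process
    #pragma data_seg()

    // Mark the section Read/Write/Shared:
    #pragma comment(linker, "/SECTION:.shared,RWS")

    // Because several processes may write it concurrently, use
    // interlocked operations, e.g. InterlockedIncrement(&g_usage_count),
    // rather than plain reads and writes.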

How are DLLs mapped into the current program's virtual address space

When I load a DLL in program, how does that occur in memory? Does it get loaded into my Virtual Address Space? If it does, where are the text and data segments stored? I have a 32-bit program I'm maintaining, which uses a large part of the available heap for image processing routines, and I want to know how much I should worry about loading DLLs which themselves might use a lot of space.
Yes: everything that your process needs to access must be in its address space. This applies to your code and to your data as well.
Here you'll find more about the anatomy of process memory and address space,
and here it's explained that DLLs are loaded into the virtual address space.
Remark: the DLL might be shared between several processes: it is then loaded only once into memory by the OS. But every process using it could potentially see it at a different place in its own virtual address space (see also this SO answer about relative virtual addresses).
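If you want to measure how much of your address space a given DLL occupies, a sketch along these lines can help (Windows; link against psapi, and user32.dll is just an example):

    #include <windows.h>
    #include <psapi.h>
    #include <cstdio>

    int main() {
        HMODULE mod = LoadLibraryA("user32.dll");
        MODULEINFO mi = {};
        if (mod && GetModuleInformation(GetCurrentProcess(), mod,
                                        &mi, sizeof(mi))) {
            // Base address and total size of the mapped image in
            // this process's virtual address space.
            std::printf("base: %p  size: %lu bytes\n",
                        mi.lpBaseOfDll, (unsigned long)mi.SizeOfImage);
        }
        return 0;
    }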

Is shared virtual memory used when multiple processes read a file using file pointer in Linux?

I wrote a C++ program which reads a file using a file pointer, and I need to run multiple processes at the same time. Since the file can be huge (100MB~), I think I need to use shared memory to reduce memory usage across the processes. (For example, an IPC library like boost::interprocess::shared_memory_object.)
But is it really needed? I think that if multiple processes read the same file, then the virtual memory of each process is mapped to the same physical memory for the file through the page tables.
I read a Linux doc, and it said:
Shared Virtual Memory
Although virtual memory allows processes to have separate (virtual)
address spaces, there are times when you need processes to share
memory. For example there could be several processes in the system
running the bash command shell. Rather than have several copies of
bash, one in each processes virtual address space, it is better to
have only one copy in physical memory and all of the processes running
bash share it. Dynamic libraries are another common example of
executing code shared between several processes. Shared memory can
also be used as an Inter Process Communication (IPC) mechanism, with
two or more processes exchanging information via memory common to all
of them. Linux supports the Unix TM System V shared memory IPC.
Also, Wikipedia says:
In computer software, shared memory is either
a method of inter-process communication (IPC), i.e. a way of exchanging data between programs running at the same time. One process
will create an area in RAM which other processes can access, or
a method of conserving memory space by directing accesses to what would ordinarily be copies of a piece of data to a single instance
instead, by using virtual memory mappings or with explicit support of
the program in question. This is most often used for shared libraries
and for XIP.
Therefore, what I am really curious about is: is shared virtual memory supported at the OS level or not?
Thanks in advance.
Regarding your first question: if you want your data to be accessible by multiple processes without duplication, you'll definitely need some kind of shared storage.
In C++ I'd surely use boost's shared_memory_object. That's a valid option to share (large) data among processes and it has good documentation with examples (http://www.boost.org/doc/libs/1_55_0/doc/html/interprocess/sharedmemorybetweenprocesses.html).
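A minimal sketch of what that looks like (the segment name is arbitrary):

    #include <boost/interprocess/shared_memory_object.hpp>
    #include <boost/interprocess/mapped_region.hpp>
    #include <cstring>

    namespace bip = boost::interprocess;

    int main() {
        // Creating side:
        bip::shared_memory_object shm(bip::create_only, "MySharedFile",
                                      bip::read_write);
        shm.truncate(100 * 1024 * 1024);          // reserve ~100MB

        bip::mapped_region region(shm, bip::read_write);
        std::memset(region.get_address(), 0, region.get_size());

        // Other processes construct the object with bip::open_only and
        // map it the same way; they all see the same physical pages.
        bip::shared_memory_object::remove("MySharedFile");
        return 0;
    }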
Using mmap() is a more low-level approach usually used in C. To use it as an IPC you'll have to make the mapped region shared. From http://man7.org/linux/man-pages/man2/mmap.2.html:
MAP_SHARED
Share this mapping. Updates to the mapping are visible to
other processes that map this file, and are carried
through to the underlying file. The file may not actually
be updated until msync(2) or munmap() is called.
Also on that page there's an example of mapping a file to shared memory.
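For the multiple-readers case you describe, a read-only shared mapping is enough, and nothing is duplicated in RAM. A sketch (the file name is an example):

    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        int fd = open("big_input.dat", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        fstat(fd, &st);

        // PROT_READ + MAP_SHARED: every reader shares the same
        // page-cache pages; no per-process copies are made.
        const char* data = static_cast<const char*>(
            mmap(nullptr, st.st_size, PROT_READ, MAP_SHARED, fd, 0));
        if (data == MAP_FAILED) { perror("mmap"); return 1; }

        std::printf("first byte: %d\n", data[0]);
        munmap(const_cast<char*>(data), st.st_size);
        close(fd);
        return 0;
    }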
In either case there are at least two things to remember:
You need synchronization if there are multiple processes that modify the shared data.
You can't use pointers, only offsets from the beginning of the mapped region.
Here's an explanation from the boost docs:
If several processes map the same file/shared memory, the mapping address will be surely different in each process. Since each process might have used its address space in a different way (allocation of more or less dynamic memory, for example), there is no guarantee that the file/shared memory is going to be mapped in the same address.
If two processes map the same object in different addresses, this invalidates the use of pointers in that memory, since the pointer (which is an absolute address) would only make sense for the process that wrote it. The solution for this is to use offsets (distance) between objects instead of pointers: If two objects are placed in the same shared memory segment by one process, the address of each object will be different in another process but the distance between them (in bytes) will be the same.
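A bare-bones sketch of the offset idea (the helper names are made up; Boost.Interprocess packages the same trick as offset_ptr<T>):

    #include <cstddef>

    struct Node {
        int         value;
        std::size_t next;   // offset from the segment start, NOT a pointer
    };

    // 'base' is each process's own mapping address for the segment.
    inline Node* resolve(void* base, std::size_t offset) {
        return reinterpret_cast<Node*>(static_cast<char*>(base) + offset);
    }
    inline std::size_t to_offset(void* base, Node* n) {
        return static_cast<std::size_t>(
            reinterpret_cast<char*>(n) - static_cast<char*>(base));
    }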
Regarding the OS support: yes, shared memory is an OS-specific feature.
In Linux, mmap() is actually implemented in the kernel and modules and can be used to transfer data between user and kernel space.
Windows also has its specifics:
Windows shared memory creation is a bit different from portable shared memory creation: the size of the segment must be specified when creating the object and can't be specified through truncate like with the shared memory object. Take in care that when the last process attached to a shared memory is destroyed the shared memory is destroyed so there is no persistency with native windows shared memory.
Your question doesn't make sense.
I think I need to use shared memory. (For example, an IPC library like boost::interprocess::shared_memory_object.)
If you use shared memory, the memory is shared.
I think that if multiple processes read the same file, then the virtual memory of each process is mapped to the same physical memory for the file through the page tables.
Now you're talking about memory-mapped I/O. It isn't the same thing. However, it is more probably what you need in this situation.

How to get Shared Object in Shared Memory

Our app depends on an external, 3rd party-supplied configuration (including custom driving/decision making functions) loadable as .so file.
Independently, it cooperates with external CGI modules using a chunk of shared memory, where almost all of its volatile state is kept, so that the external modules can read it and modify it where applicable.
The problem is that the CGI modules require a lot of the permanent config data from the .so as well, and the main app performs a whole lot of entirely unnecessary copying between the two memory areas to make the data available. The idea is to load the whole shared object into shared memory and make it directly available to the CGI. The problem is: how?
dlopen and dlsym don't provide any facilities for assigning where to load the SO file.
we tried shmat(). It seems to work only until some external CGI actually tries to access the shared memory. Then the area pointed to appears just as private as if it was never shared. Maybe we're doing something wrong?
loading the .so in each script that needs it is out of the question. The sheer size of the structure, combined with the frequency of calls (some of the scripts are called once a second to generate live updates), and this being an embedded app make it a no-go.
simply memcpy()'ing the .so into shm is not good either - some structures and all functions are interconnected through pointers.
The first thing to bear in mind when using shared memory is that the same physical memory may well be mapped into the two processes' virtual address spaces at different addresses. This means that if pointers are used anywhere in your data structures, they are going to cause problems. Everything must work off an index or an offset to work correctly. To use shared memory, you will have to purge all the pointers from your code.
When loading a .so file, only one copy of the .so file code is loaded (hence the term shared object).
fork may also be your friend here. Most modern operating systems implement copy-on-write semantics. This means that when you fork, your data segments are only copied into separate physical memory when one process writes to the given data segment.
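A tiny sketch of that copy-on-write behavior after fork:

    #include <unistd.h>
    #include <sys/wait.h>
    #include <cstdio>

    int big_table[1024] = {42};   // physically shared until someone writes

    int main() {
        if (fork() == 0) {            // child
            big_table[0] = 99;        // page fault: kernel copies the page
            std::printf("child sees %d\n", big_table[0]);   // 99
            return 0;
        }
        wait(nullptr);
        std::printf("parent sees %d\n", big_table[0]);      // still 42
        return 0;
    }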
I suppose the easiest option would be to use a memory-mapped file, as Neil has already proposed. If this option does not fit well, an alternative could be to define a dedicated allocator. Here is a good paper about it: Creating STL Containers in Shared Memory.
There is also Ion Gaztañaga's excellent Boost.Interprocess library with shared_memory_object and related features. Ion has proposed the solution to the C++ standardization committee for a future TR: Memory Mapped Files And Shared Memory For C++,
which may indicate it's a solution worth considering.
Placing actual C++ objects in shared memory is very, very difficult, as you have found. I would strongly recommend you don't go that way - putting data that needs to be shared in shared memory or a memory mapped file is much simpler and likely to be much more robust.
You need to implement object serialization.
A serialization function will convert your object into bytes; then you can write the bytes into shared memory and have your CGI module deserialize the bytes back into an object.
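For a trivially copyable object, a minimal sketch of that approach could look like this (POSIX shared memory; the name and struct are hypothetical):

    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstring>

    struct Config {               // trivially copyable: safe to memcpy
        int    threshold;
        double gain;
        char   name[64];
    };

    int main() {
        Config cfg{10, 1.5, "default"};

        int fd = shm_open("/app_config", O_CREAT | O_RDWR, 0600);
        if (fd < 0) return 1;
        ftruncate(fd, sizeof(Config));

        void* p = mmap(nullptr, sizeof(Config), PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) return 1;

        // "Serialize": for a trivially copyable struct this is a memcpy;
        // anything holding pointers or virtual functions needs a real
        // encoding step instead.
        std::memcpy(p, &cfg, sizeof(cfg));

        // A CGI module would shm_open("/app_config") and copy it back out.
        munmap(p, sizeof(Config));
        close(fd);
        return 0;
    }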