I have an application which crashed while accessing shared memory.
typedef struct
{
...
} LinuxUserData;
LinuxUserData *ptrLinuxUserData;
fd = shm_open(shrSegName, O_CREAT|O_RDWR|O_EXCL, 0644);
ptrLinuxUserData = mmap(0, sizeof(LinuxUserData), PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);
But in the core file the memory is inaccessible, and gdb gives me an error:
(gdb) p *ptrLinuxUserData
Cannot access memory at address 0xeb80d050
This is probably because the core dump does not collect the contents of shared memory.
Also, the core(5) man page mentions: "Memory-mapped I/O pages such as frame buffer are never dumped, and virtual DSO pages are always dumped, regardless of the coredump_filter value."
Is there any way through which shared memory, or its details, can be collected in the core and accessed?
or
What is the appropriate way to debug and root-cause applications which dump core while accessing shared memory?
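One avenue worth checking (a sketch, not a guaranteed fix): on Linux, which mappings land in the core is controlled by /proc/<pid>/coredump_filter, and bit 3 enables dumping of file-backed shared mappings, which is what a shm_open object mapped with MAP_SHARED is. Assuming a kernel that supports the filter, the process can widen its own filter early in startup, before any crash:

#include <cstdio>

// Minimal sketch: ask the kernel to dump anonymous and file-backed
// mappings, private and shared (bits 0-3), plus ELF headers and private
// huge pages (bits 4-5). Bit 3 is the one shared memory mappings need.
static void widen_coredump_filter()
{
    std::FILE *f = std::fopen("/proc/self/coredump_filter", "w");
    if (f != NULL)
    {
        std::fputs("0x3f", f);
        std::fclose(f);
    }
}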
Related
I'm using Boost shared memory to share vectors across different processes. However, on some occasions, the consumer of the shared memory throws up this exception:
Unexpected exception: The volume for a file has been externally altered so that the opened file is no longer valid.
I have the proper synchronization mechanisms set in place. What could this error indicate?
SOLVED: The size of the memory hadn't been properly allocated upon creation by one of the processes.
"When a shared memory object is created, its size is 0. To set the size of the shared memory, the user must use the truncate function call, in a shared memory that has been opened with read-write attributes."
Source - Boost shared memory
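For context, a minimal sketch of a correct creation side, assuming illustrative names (the object must be sized with truncate before any region is mapped from it):

#include <boost/interprocess/shared_memory_object.hpp>
#include <boost/interprocess/mapped_region.hpp>

using namespace boost::interprocess;

int main()
{
    // The freshly created object has size 0.
    shared_memory_object shm(create_only, "MySharedMemory", read_write);

    // Size it before mapping; skipping this step leads to errors like
    // the one above when a consumer maps the zero-sized object.
    shm.truncate(1024);

    mapped_region region(shm, read_write); // maps the whole 1024 bytes
    return 0;
}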
It means the volume for a file has been externally altered. Look for other processes writing the file.
In other words, it means you do not have proper synchronization in place.
Do you use bip::managed_mapped_file::grow by any chance? The documentation states it only allows offline growing.
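For reference, "offline growing" means no process may have the file mapped while it grows. A sketch under that assumption (the filename is illustrative):

#include <boost/interprocess/managed_mapped_file.hpp>

using namespace boost::interprocess;

int main()
{
    // Must run while no process has "shared.bin" mapped ("offline").
    managed_mapped_file::grow("shared.bin", 65536); // add 64 KiB
    return 0;
}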
I am learning to work with shared memory in C++. I found that under Windows I need to use the CreateFileMapping and MapViewOfFile functions. I want to share an array of char, so part of my code is:
HANDLE hBuffer = CreateFileMapping(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE, 0, size, bufferName);
char * buffer = (char *) MapViewOfFile(hBuffer, FILE_MAP_ALL_ACCESS, 0, 0, size);
(there is checking for NULLs of course) and at the end of using shared memory I call:
UnmapViewOfFile(buffer); // returned true
CloseHandle(hBuffer); // returned true also
But in Resource Monitor I can see that no memory was released. When this is done several times, the application's allocated memory keeps increasing, but it is never released. What am I doing wrong? Or is there another function to release shared memory?
Thanks for answers.
Problem solved thanks to marcin_j:
Your code looks fine (well... you misspelled HANDLE). You can use procexp.exe from Sysinternals to find your HANDLE by name (if it is not found, then it was closed). Also observe on the Performance tab how the Virtual Size of your app changes; there is also a Handles count that should change accordingly.
Also, observe what happens after you execute memset(buffer, 0, size); after MapViewOfFile - this is actually when the system will commit memory and when your working set will rise.
My above comment is wrong: CreateFileMapping by default applies SEC_COMMIT, which commits memory. But I suppose your memory stays paged until memset is called; after that call the paged pages are moved into physical memory, which raises the working set... if I am not wrong...
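To make the life cycle concrete, here is a minimal sketch of the whole sequence (the buffer name and size are illustrative): the working set only grows once the pages are actually touched, and both UnmapViewOfFile and CloseHandle are needed before the system can release the section.

#include <windows.h>
#include <string.h>

int main()
{
    const DWORD size = 1 << 20; // 1 MiB, arbitrary for this sketch

    HANDLE hBuffer = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL,
                                        PAGE_READWRITE, 0, size, "MySharedBuffer");
    if (hBuffer == NULL)
        return 1;

    char * buffer = (char *) MapViewOfFile(hBuffer, FILE_MAP_ALL_ACCESS, 0, 0, size);
    if (buffer == NULL)
    {
        CloseHandle(hBuffer);
        return 1;
    }

    memset(buffer, 0, size); // touching the pages is what grows the working set

    UnmapViewOfFile(buffer); // release the view...
    CloseHandle(hBuffer);    // ...and then the mapping object itself
    return 0;
}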
I'm using shared memory (with a semaphore) for communication between two processes:
First, I open the shared memory object using the call:
int fd = shm_open("name") [http://linux.die.net/man/3/shm_open]
Second, I map this shared memory object into my address space using the call:
void* ptr = mmap(..fd..) [http://linux.die.net/man/2/mmap2]
However, I want to use epoll in conjunction with the shared memory file descriptor, so I would not use mmap anymore; instead, I would use epoll for monitoring, and then the read and write functions for direct access to the shared memory through fd (the shared memory file descriptor).
My question is: how does the speed of directly reading and writing the shared memory object compare with memcpy on the pointer returned by mmap?
read(fd, buffer) vs memcpy(des, source, size) //???
Hope to see your answer! Thanks!
read is a syscall and implies a privilege transition, which implies address space manipulation (MMU); the kernel then calls memcpy from the shared memory object into your provided buffer. It basically does the same thing you would do (call memcpy) but adds two expensive operations (the privilege transitions into and out of the kernel) and a cheap one (finding the source address).
We can conclude that read/write is very likely to be slower.
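A sketch contrasting the two paths (the function names and the up-front size handling are illustrative):

#include <sys/mman.h>
#include <unistd.h>
#include <string.h>

// Path 1: every read() crosses into the kernel and back (two privilege
// transitions), and the kernel does the copy into dst for you.
static ssize_t copy_via_read(int fd, char *dst, size_t size)
{
    lseek(fd, 0, SEEK_SET);
    return read(fd, dst, size);
}

// Path 2: one mmap() up front; after that, every copy is a plain
// userspace memcpy with no kernel involvement.
static int copy_via_mmap(int fd, char *dst, size_t size)
{
    void *src = mmap(NULL, size, PROT_READ, MAP_SHARED, fd, 0);
    if (src == MAP_FAILED)
        return -1;
    memcpy(dst, src, size);
    munmap(src, size);
    return 0;
}

In practice the mapping would be created once and reused, so the per-access cost reduces to the memcpy alone.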
For the following question, I am looking for an answer that is based on "pure" C/C++ fundamentals, so I would appreciate a non-Boost answer. Thanks.
I have an application (for example, a telecommunications infrastructure server) which will, when started, spawn several processes on a Linux environment (one for logging, one for Timer management, one for protocol messaging, one for message processing etc.). It is on an x86_64 environment on Gentoo. The thing is, I need a singleton to be able to be accessible from all the processes.
This is different from multi-threading using, say, POSIX threads on Linux, because the same address space is used by all POSIX threads, but that is not the case when multiple processes, created by the fork() function call, are used. When the same address space is used, the singleton is just the same address in all the threads, and the problem is trivially solved (using the well-known protections, which are old hat for everybody on SO). I do enjoy the protections offered to me by multiple processes generated via fork().
Going back to my problem, I feel like the correct way to approach this would be to create the singleton in shared memory, and then pass a handle to the shared memory into the calling tasks.
I imagine the following (SomeSingleton.h):
#include <unistd.h>
#include <fcntl.h>    // for O_CREAT, O_EXCL, O_RDWR
#include <sys/mman.h> // for shm_open, mmap
#... <usual includes>
#include "SomeGiantObject.h"
int size = 8192; // Enough to contain the SomeSingleton object
int shm_fd = shm_open ("/some_singleton_shm", O_CREAT | O_EXCL | O_RDWR, 0666);
int trunc_rc = ftruncate (shm_fd, size); // at file scope the call must initialize a variable
void *sharedMemoryLocationForSomeSingleton = mmap (NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, shm_fd, 0);
class SomeSingleton
{
public:
static SomeSingleton* getInstance ()
{
return reinterpret_cast<SomeSingleton*>(sharedMemoryLocationForSomeSingleton);
}
private:
SomeSingleton();
/*
Whole bunch of attributes that is shared across processes.
These attributes also should be in shared memory.
e.g., in the following
SomeGiantObject* obj;
obj should also be in shared memory.
*/
};
The getInstance() method returns the shared memory location for the SomeSingleton object.
My questions are as follows:
Is this a legitimate way to handle the problem? How have folks on SO handled this problem before?
For the code above to work, I envision a global declaration (static by definition) that points to the shared memory as shown before the class declaration.
Last, but not least, I know that on Linux the overheads of creating threads vs. processes are "relatively similar," but I was wondering why there is not much by way of multi-processing discussion on SO (gobs of multi-threading, though!). There isn't even a tag here! Has multi-processing (using fork()) fallen out of favor among the C++ coding community? Any insight on that is also appreciated. Also, may I request someone with a reputation > 1500 to create a "multi-processing" tag? Thanks.
If you create the shared memory region before forking, then it will be mapped at the same address in all peers.
You can use a custom allocator to place contained objects inside the shared region as well. This should probably be done before forking too, but be careful about repeated destructor calls (destructors that, e.g., flush buffers are fine, but anything that makes an object unusable should be skipped; just leak, and let the OS reclaim the memory after all processes close the shared memory handle).
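A minimal sketch of that suggestion, with illustrative struct contents and process count (real code needs error handling and a process-shared lock around the shared state):

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>
#include <new>

struct SomeSingleton
{
    int counter; // stands in for the shared state
};

int main()
{
    int fd = shm_open("/some_singleton_shm", O_CREAT | O_RDWR, 0666);
    if (fd < 0 || ftruncate(fd, sizeof(SomeSingleton)) != 0)
        return 1;
    void *mem = mmap(NULL, sizeof(SomeSingleton),
                     PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED)
        return 1;

    // Construct exactly once, before forking, so every child inherits the
    // mapping at the same address.
    SomeSingleton *instance = new (mem) SomeSingleton();

    for (int i = 0; i < 4; ++i)
    {
        if (fork() == 0)
        {
            instance->counter++; // needs process-shared synchronization in real code
            _exit(0);
        }
    }
    while (wait(NULL) > 0)
        ;
    // Skip ~SomeSingleton(); let the OS reclaim the mapping.
    shm_unlink("/some_singleton_shm");
    return 0;
}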
I have 2 questions/concerns about using shared memory. I'm using CreateFileMapping to create a shared memory area between two processes.
1) I understand that I need to call CloseHandle on every handle returned from a CreateFileMapping or OpenFileMapping call in order to release the memory. My question is: do all handles get closed appropriately and the memory deallocated by Windows XP/7 if the programs using the shared memory exit without calling CloseHandle? I.e., is there a possibility of a memory leak after all processes using the memory have exited?
2) I use MapViewOfFile to get a pointer to the memory. In one instance I've assumed that the shared memory will always exist in the context of a method, so I've saved the return value of MapViewOfFile as a pointer, closed the handle to the memory, and am just using the pointer to the shared memory (while still locking access to it). Is this safe, or should I call MapViewOfFile every time I access the shared memory?
Thanks,
Ian
1) Yes, all handles are closed when a process terminates, no matter if it dies or finishes nicely. No leaks here.
2) As long as you don't call UnmapViewOfFile, the memory will still be accessible to the process, even if the handle has been closed:
"Although an application may close the file handle used to create a file mapping object, the system holds the corresponding file open until the last view of the file is unmapped."
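A sketch of the pattern described in 2), with an illustrative helper name: the mapped view keeps the section alive after the handle is closed, until UnmapViewOfFile is called.

#include <windows.h>

// Open an existing mapping, keep only the pointer, and close the handle
// right away; the view itself keeps the section alive.
char *open_shared(const char *name, SIZE_T size)
{
    HANDLE h = OpenFileMappingA(FILE_MAP_ALL_ACCESS, FALSE, name);
    if (h == NULL)
        return NULL;
    char *p = (char *) MapViewOfFile(h, FILE_MAP_ALL_ACCESS, 0, 0, size);
    CloseHandle(h); // safe whether or not the map succeeded
    return p;       // the caller eventually calls UnmapViewOfFile(p)
}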