C++ Boost Creating Shared-Memory for two different processes

So, I'm trying to create a shared-memory segment in a C++ program so that I can, for example, write a single character into it and then read that character from another C++ program.
I've downloaded the Boost libraries, as I read they simplify this process.
Basically I have two questions: first, how do I write to the segment after it's created? Second, what should I write in the other program so it can identify the segment and read the information in it?
This is what I've got so far. It's not a lot, but I'm still new to this (first program):
#include "stdafx.h"
#include <boost/interprocess/windows_shared_memory.hpp>
#include <boost/interprocess/mapped_region.hpp>
int main(int argc, char *argv[])
{
using namespace boost::interprocess;
windows_shared_memory shared (create_only, "shm", read_write, 65536);
//created shared memory using the windows native library
mapped_region region (shared, read_write, 0 , 0 , (void*)0x3F000000);
//mapping it to a region using HEX
//Here I should write to the segment
return 0;
}
Thanks in advance. I'll be happy to provide any additional information needed.

The following is a slightly modified example based on the Boost documentation on shared memory.
Note: when using windows_shared_memory, keep in mind that the shared memory block is automatically destroyed when the last process that uses it exits. In the example below, that means that if the server exits before the client has a chance to open the shared memory block, the client will throw an exception.
Server side:
#include <boost/interprocess/windows_shared_memory.hpp>
#include <boost/interprocess/mapped_region.hpp>
#include <cstring>
#include <cstdlib>
#include <string>
#include <iostream>

int main(int argc, char *argv[])
{
    using namespace boost::interprocess;

    //Create a native windows shared memory object.
    windows_shared_memory shm(create_only, "shm", read_write, 65536);

    //Map the whole shared memory in this process
    mapped_region region(shm, read_write);

    //Write a character to the region
    char myChar = 'A';
    std::memset(region.get_address(), myChar, sizeof(myChar));

    //It's important that the server sticks around; otherwise the shared memory
    //block is destroyed and the client will throw an exception when trying to open it.
    std::cin.get();

    return 0;
}
Client side:
#include <boost/interprocess/windows_shared_memory.hpp>
#include <boost/interprocess/mapped_region.hpp>
#include <cstring>
#include <cstdlib>
#include <string>
#include <iostream>

int main(int argc, char *argv[])
{
    using namespace boost::interprocess;

    //Open the already created shared memory object.
    windows_shared_memory shm(open_only, "shm", read_only);

    //Map the whole shared memory in this process
    mapped_region region(shm, read_only);

    //Read the character from the region
    char *myChar = static_cast<char*>(region.get_address());
    std::cout << *myChar << std::endl;

    return 0;
}
Instead of memsetting raw bytes in shared memory, you'll probably be better off using Boost.Interprocess's higher-level facilities. The library is designed to simplify common interprocess communication and synchronization mechanisms and offers a wide range of them, including managed shared memory. For example, you could create a vector that lives directly in shared memory.
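For illustration, here is a minimal sketch along the lines of the Boost.Interprocess documentation, using a managed Windows shared memory segment to hold a vector of ints. The names "MySharedMemory" and "MyVector" are arbitrary placeholders; a second process would open the same segment with open_only and look the vector up by name.

#include <boost/interprocess/managed_windows_shared_memory.hpp>
#include <boost/interprocess/containers/vector.hpp>
#include <boost/interprocess/allocators/allocator.hpp>

int main()
{
    using namespace boost::interprocess;

    //Allocator that carves ints out of the managed shared memory segment
    typedef allocator<int, managed_windows_shared_memory::segment_manager> ShmemAllocator;
    typedef vector<int, ShmemAllocator> MyVector;

    //Create a managed (Windows) shared memory segment
    managed_windows_shared_memory segment(create_only, "MySharedMemory", 65536);

    //Construct the vector in shared memory; other processes can find it by name
    const ShmemAllocator alloc_inst(segment.get_segment_manager());
    MyVector *myvector = segment.construct<MyVector>("MyVector")(alloc_inst);

    //Fill it with some values
    for (int i = 0; i < 10; ++i)
        myvector->push_back(i);

    return 0;
}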

Related

A thrust problem: How can I copy a host_vector to device_vector with a customized permutation order?

I have an array on the host, and I want to transfer it to the device in a different order.
I have tried this toy code, compiled with nvc++ test.cpp -stdpar:
$ cat test.cpp
#include <iostream>
#include <iterator>
#include <array>
#include <thrust/iterator/permutation_iterator.h>
#include <thrust/copy.h>
#include <thrust/sequence.h>
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>

using Real = float;

int main(int argc, char* argv[]) {
    std::array<std::size_t, 6> idx{0, 1, 2, 3, 5, 4};
    thrust::host_vector<Real> hvec(6);
    thrust::sequence(hvec.begin(), hvec.end());

    typedef thrust::host_vector<Real>::iterator EleItor;
    typedef std::array<std::size_t, 6>::iterator IdxItor;
    thrust::permutation_iterator<EleItor, IdxItor> itor(hvec.begin(), idx.begin());

    thrust::device_vector<Real> test;
    thrust::copy(itor, itor+6, test); // error
    thrust::copy(itor, itor+6, std::ostream_iterator<Real>(std::cout, " "));
}
The problem is that thrust::copy does not allow copying from the host to the device like this. How can I get around this restriction?
According to the documentation, it is possible to use thrust::copy to copy from host to device, but you need to pass an iterator into the device vector:
//-----------------------------vvvvvvvv--
thrust::copy(itor, itor+6, test.begin());
Note that before doing so you need to allocate memory for the device vector.
Fortunately, thrust::device_vector has a constructor taking a size that allocates the required memory.
You can use something like:
thrust::device_vector<Real> test(hvec.size());
Edit (credit goes to @paleonix):
There is another constructor taking iterators, i.e. one can do both the allocation and the copy as initialization in one line, which also avoids unnecessarily initializing the memory to 0.0f.
thrust::device_vector<Real> test(itor, itor+6);
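Putting it together, a minimal corrected version of the program (an untested sketch, compiled the same way as in the question) could look like this:

#include <iostream>
#include <iterator>
#include <array>
#include <thrust/iterator/permutation_iterator.h>
#include <thrust/copy.h>
#include <thrust/sequence.h>
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>

using Real = float;

int main() {
    std::array<std::size_t, 6> idx{0, 1, 2, 3, 5, 4};
    thrust::host_vector<Real> hvec(6);
    thrust::sequence(hvec.begin(), hvec.end());

    // Permuted view of the host data
    auto itor = thrust::make_permutation_iterator(hvec.begin(), idx.begin());

    // Allocate and copy to the device in one step
    thrust::device_vector<Real> test(itor, itor + 6);

    // Print the device vector's contents
    thrust::copy(test.begin(), test.end(),
                 std::ostream_iterator<Real>(std::cout, " "));
    std::cout << '\n';
}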

Why does a bad_alloc message show up after running my code?

This code shows the message:
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Process returned 3 (0x3)   execution time : 0.331 s
I could not identify where the problem is. I am using Code::Blocks, and my PC has 8 GB of RAM.
#include <iostream>
#include<vector>
#include <stdio.h>
#include <cstdlib>
#include <ctime>
#include <fstream>
#include <sstream>
using namespace std;
int main(){
    ofstream bcrs_tensor;
    bcrs_tensor.open("bcrs_tensor_Binary", ios::out | ios::binary);

    int X = 6187, Y = 25, Z = 78, M = 33;
    int new_dimension_1, new_dimension_2, new_x_1, new_x_2;
    new_dimension_1 = X * Z;
    new_dimension_2 = Y * M;

    int* new_A = new int[ new_dimension_1 * new_dimension_2 ];

    vector<int> block_value, CO_BCRS, RO_BCRS;
    block_value.reserve(303092010);
    CO_BCRS.reserve(1554318);
    RO_BCRS.reserve(37124);

    cout << "size" << sizeof(block_value) << endl;
    return 0;
}
You are trying to allocate way more memory than your system has available for your app, so std::bad_alloc is being thrown, which you are not catching.
Assuming sizeof(int)=4 in your compiler, you are asking for:
1.48GB for new_A
1.12GB for block_value
5.92MB for CO_BCRS
145KB for RO_BCRS
For a total of 2.61GB.
Even though you have 8GB of RAM installed, your system does not have enough contiguous memory available to satisfy one of those allocations (i.e., if it is a 32-bit app, the whole process is limited to 2-3GB max, depending on configuration, memory manager implementation, etc., and a good chunk of that is reserved by the OS itself, so you can't use it for your code).
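As a quick sanity check of those numbers, a small stand-alone sketch (assuming sizeof(int) == 4) adds up the requested bytes:

#include <cstdio>

int main() {
    const long long X = 6187, Y = 25, Z = 78, M = 33;
    const long long bytes_new_A   = X * Z * Y * M * 4;  // 1,592,533,800 bytes (~1.48 GiB) for new_A
    const long long bytes_blocks  = 303092010LL * 4;    // ~1.13 GiB for block_value
    const long long bytes_co_bcrs = 1554318LL * 4;      // ~5.9 MiB for CO_BCRS
    const long long bytes_ro_bcrs = 37124LL * 4;        // ~145 KiB for RO_BCRS

    const long long total = bytes_new_A + bytes_blocks + bytes_co_bcrs + bytes_ro_bcrs;
    std::printf("total: %.2f GiB\n", total / (1024.0 * 1024.0 * 1024.0));  // ~2.61 GiB
    return 0;
}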

Is there an equivalent to the clone() syscall on macOS?

Like the one in Linux, where I can pass as parameters the function I want to execute in the child, the memory to be used, and so on. I've attached an example in which I'm trying to start a child process that executes the chld_func function using the memory allocated by stack_memory().
#include <iostream>
#include <sched.h>
#include <sys/types.h>
#include <unistd.h>
#include <sys/wait.h>

// ...

int main(int argc, char** argv)
{
    printf("Hello, World! (parent)\n");

    clone(chld_func, stack_memory(), SIGCHLD, 0);
    wait(nullptr);

    return EXIT_SUCCESS;
}
Maybe I could try to do something similar using fork(), but I don't know where to begin.
Thanks in advance!
As stated here and here, clone is specific to Linux.
The system calls available on macOS include fork and vfork, so you can use one of them.
See also this answer for some reasoning about clone versus fork, and read the man pages:
http://man7.org/linux/man-pages/man2/clone.2.html
http://man7.org/linux/man-pages/man2/vfork.2.html
http://man7.org/linux/man-pages/man2/fork.2.html
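Since the question mentions not knowing where to begin with fork(), here is a minimal, hypothetical sketch of the fork()-based equivalent of the clone() example above; chld_func is just a placeholder for whatever the child is supposed to do:

#include <cstdio>
#include <cstdlib>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static int chld_func()
{
    std::printf("Hello from the child (pid %d)\n", (int)getpid());
    return 0;
}

int main()
{
    std::printf("Hello, World! (parent)\n");

    pid_t pid = fork();
    if (pid < 0) {
        std::perror("fork");
        return EXIT_FAILURE;
    }
    if (pid == 0) {
        // Child: unlike clone(), fork() gives the child a copy-on-write copy of
        // the parent's address space, so no explicit stack needs to be supplied.
        std::exit(chld_func());
    }

    // Parent: wait for the child, as the original example did after clone().
    wait(nullptr);
    return EXIT_SUCCESS;
}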

How to avoid Segmentation fault: 11

I'm debugging a simple C++ program using gdb and see that I get an error when I try to initialize temp_grid. I compile it by running
g++ -Wall initial.cc -o initial
Is there a way to avoid this segmentation fault with something inside the code itself?
#include <iostream>
#include <array>
#include <valarray>
#include <stdlib.h>
#include <memory>

using namespace std;

int main()
{
    using std::array;

    array<array<float, 1024>, 1024> grid = {};

    // temp grid
    array<array<float, 1024>, 1024> temp_grid = {};

    return 0;
}
You are most likely overflowing the stack, which has relatively limited storage space for local variables. Try allocating them using dynamic storage (using new). For maximum robustness, use smart pointers (unique_ptr) to manage the pointers.
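Each of those arrays is 1024 * 1024 * sizeof(float) = 4 MiB, so the two of them together exceed typical default stack sizes. A minimal sketch of the suggestion above, putting the grids on the heap via std::unique_ptr (assumes C++14 or later for std::make_unique):

#include <array>
#include <memory>

int main()
{
    using Grid = std::array<std::array<float, 1024>, 1024>;

    // ~4 MiB each, allocated on the heap; value-initialized to zero,
    // just like the original `= {}`.
    auto grid      = std::make_unique<Grid>();
    auto temp_grid = std::make_unique<Grid>();

    // Access elements through the pointer.
    (*grid)[0][0] = 1.0f;
    (*temp_grid)[0][0] = (*grid)[0][0];

    return 0;
}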

Odd Memory Error -- Bad Alloc

Working on a WinPcap project, I'm trying to do some basic pointer and memory operations and am running into lots of errors.
I've included the two lines I'm trying to run along with the includes.
The same lines work just fine in another Visual Studio C++ project. This is the error I am getting:
Unhandled exception at 0x75a79617 in pktdump_ex.exe: Microsoft C++ exception: std::bad_alloc at memory location 0x0012f8e4.
#include <stdlib.h>
#include <stdio.h>
#include <iostream>
#include <string>
#include "DataTypes.h"
#include <sstream>
#include "EthernetLayer.h"
#include <pcap.h>
int* testPointer = new int[2];
delete[] testPointer;
EDIT:
Found out something useful.
The following code snippet is what is crashing the winpcap library.
EthernetStructPointers* testData;
testData = (EthernetStructPointers*)pkt_data;
EthernetStruct newData;
memcpy(newData.DEST_ADDRESS, testData->DEST_ADDRESS, 6);
These are the definitions of the structs.
struct EthernetStructPointers
{
    u_char DEST_ADDRESS[6];
    u_char SOURCE_ADDRESS[6];
    u_char TYPE[2];
};

struct EthernetStruct
{
    u_char DEST_ADDRESS[6];
    u_char SOURCE_ADDRESS[6];
    u_char TYPE[2];
    u_char* dataPointer;
    string DestAddress;
    string SourceAddress;
    string Type;
    int length;
};
My guess is that the free store is corrupted by one of the previous statements (perhaps by an incorrect use of the pcap interface), and you only learn of the error on the next memory allocation or release, when the memory manager detects the corruption and throws a bad_alloc.
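To illustrate the kind of thing that helps avoid such corruption, here is a hedged sketch of a more defensive way to pull the Ethernet header out of a pcap packet: check the captured length before reading, and memcpy into a plain header struct instead of keeping the pointer-cast. The handler name is arbitrary (it follows the standard pcap callback signature), and the struct mirrors the EthernetStructPointers definition above.

#include <cstring>
#include <pcap.h>

struct EthernetHeader
{
    u_char DEST_ADDRESS[6];
    u_char SOURCE_ADDRESS[6];
    u_char TYPE[2];
};

void packet_handler(u_char * /*user*/, const struct pcap_pkthdr *header,
                    const u_char *pkt_data)
{
    // Only touch the bytes pcap actually captured; reading past caplen is
    // undefined behaviour and can corrupt or crash the process.
    if (header->caplen < sizeof(EthernetHeader))
        return;

    EthernetHeader eth;
    std::memcpy(&eth, pkt_data, sizeof(eth));

    // ... use eth.DEST_ADDRESS, eth.SOURCE_ADDRESS, eth.TYPE here ...
}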
std::bad_alloc should be thrown when you try to new something and have run out of memory. Can you check how much free memory is available to your process?
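As a concrete (hypothetical) way to check that on Windows, GlobalMemoryStatusEx reports how much physical memory and virtual address space the process still has available:

#include <windows.h>
#include <cstdio>

int main()
{
    MEMORYSTATUSEX status = {};
    status.dwLength = sizeof(status);
    if (GlobalMemoryStatusEx(&status)) {
        std::printf("available physical memory: %llu MB\n",
                    status.ullAvailPhys / (1024 * 1024));
        std::printf("available virtual address space: %llu MB\n",
                    status.ullAvailVirtual / (1024 * 1024));
    }
    return 0;
}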