I'm new to epoll.
My code mostly works: epoll stores my file descriptor and waits until the file descriptor is "ready".
But for some reason it will not wake up until I press Enter (even though data has already arrived on the fd; after pressing Enter I immediately see all the data that was sent before).
After that one Enter it works as expected (no further Enter needed, and when the fd is ready again it wakes up again).
Here is the essence of my code:
int nEventCountReady = 0;
epoll_event event, events[EPOLL_MAX_EVENTS];
int epoll_fd = epoll_create1(0);
if(epoll_fd == -1)
{
std::cout << "Error: Failed to create EPoll" << std::endl;
return ;
}
event.events = EPOLLIN;
event.data.fd = myfd;
if(epoll_ctl(epoll_fd, EPOLL_CTL_ADD, 0, &event))
{
fprintf(stderr, "Failed to add file descriptor to epoll\n");
close(epoll_fd);
return ;
}
while(true)
{
std::cout << "Waiting for messages" << std::endl;
nEventCountReady = epoll_wait(epoll_fd, events, EPOLL_MAX_EVENTS, 30000); // <-- stuck here until Enter is pressed (first loop iteration only)
for(int i=0; i<nEventCountReady; i++)
{
msgrcv(events[i].data.fd, oIpCMessageContent, sizeof(SIPCMessageContent), 1, 0);
std::cout << oIpCMessageContent.buff << std::endl;
}
}
This
if(epoll_ctl(epoll_fd, EPOLL_CTL_ADD, 0, &event))
should probably be
if(epoll_ctl(epoll_fd, EPOLL_CTL_ADD, myfd, &event))
In the first form you tell epoll to monitor fd 0, which is typically standard input. That's why it waits for it, i.e. for your Enter.
Note that your original code works only by coincidence. It just happens that when you press Enter there is data in your myfd (and even if there is none, msgrcv blocks). And once you have pressed Enter it will wake up all the time, since epoll sees that stdin is ready but you never read from it.
Thanks to kamilCuk, I noticed that msgget doesn't return a file descriptor as I thought.
It returns a "System V message queue identifier".
And as freakish said before, System V message queues don't work with selectors like epoll.
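As a side note, on Linux a POSIX message queue opened with mq_open does give back a descriptor that epoll can monitor (this is Linux-specific behaviour, not guaranteed by POSIX). A minimal sketch, with /myqueue as a placeholder name:
#include <mqueue.h>
#include <sys/epoll.h>
#include <fcntl.h>

mqd_t mq = mq_open("/myqueue", O_RDONLY | O_CREAT, 0644, NULL);
int epoll_fd = epoll_create1(0);
epoll_event event = {};
event.events = EPOLLIN;
event.data.fd = mq;                             // on Linux an mqd_t is a plain file descriptor
epoll_ctl(epoll_fd, EPOLL_CTL_ADD, mq, &event);
// ... epoll_wait(), then mq_receive() on the ready descriptor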
I am developing a C++ class that calls the Windows API.
I am using semaphores for a task. Let's say I have two processes:
ProcessA has two semaphores:
Global\processA_receiving_semaphore
Global\processA_waiting_semaphore
ProcessB has two semaphores:
Global\processB_receiving_semaphore
Global\processB_waiting_semaphore
I have two threads in each process:
Sending thread in processA:
Wait on "Global\processB_waiting_semaphore"
// do something
Signal "Global\processB_receiving_semaphore"
Receiving thread on processB:
Wait on "Global\processB_receiving_semaphore"
// do something
Signal "Global\processB_waiting_semaphore
I removed ALL the code that releases "Global\processB_waiting_semaphore", but it can still be acquired. Calling WaitForSingleObject on that semaphore always returns immediately with a successful wait. I tried setting the timeout period to 0 and it still acquires the semaphore while NOTHING is releasing it.
The receiving semaphore has initial count = 0 and max count = 1 while the waiting semaphore has initial count = 1 and max count = 1.
Calling WaitForSingleObject on the receiving semaphore works great and blocks until it is released by the other process. The problem is with the waiting semaphore and I cannot figure out why. The code is very big and I have made sure the names of the semaphores are set correctly.
Is this a common issue? If you need more explanation please comment and I will modify the post.
EDIT: CODE ADDED:
Receiver semaphores:
bool intr_process_comm::create_rcvr_semaphores()
{
std::cout << "\n Creating semaphore: " << "Global\\" << this_name << "_rcvr_sem";
rcvr_sem = CreateSemaphore(NULL, 0, 1, ("Global\\" + this_name + "_rcvr_sem").c_str());
std::cout << "\n Creating semaphore: " << "Global\\" << this_name << "_wait_sem";
wait_sem = CreateSemaphore(NULL, 1, 1, ("Global\\" + this_name + "_wait_sem").c_str());
return (rcvr_sem && wait_sem);
}
Sender semaphores:
// this sender connects to the wait semaphore in the target process
sndr_sem = OpenSemaphore(SEMAPHORE_MODIFY_STATE, FALSE, ("Global\\" + target_name + "_wait_sem").c_str());
// this target connects to the receiver semaphore in the target process
trgt_sem = OpenSemaphore(SEMAPHORE_MODIFY_STATE, FALSE, ("Global\\" + target_name + "_rcvr_sem").c_str());
DWORD intr_process_locking::wait(unsigned long period)
{
return WaitForSingleObject(sndr_sem, period);
}
void intr_process_locking::signal()
{
ReleaseSemaphore(trgt_sem, 1, 0);
}
Receiving thread function:
void intr_process_comm::rcvr_thread_proc()
{
while (conn_state == intr_process_comm::opened) {
try {
// wait on rcvr_semaphore for an infinite time
WaitForSingleObject(rcvr_sem, INFINITE);
if (inner_release) // if the semaphore was released within this process
return;
// once signaled by another process, get the message
std::string msg_str((LPCSTR)hmf_mapview);
// signal one of the waiters that want to put messages
// in this process's memory area
//
// this doesn't change ANYTHING in execution, commented or not..
//ReleaseSemaphore(wait_sem, 1, 0);
// put this message in this process's queue
Msg msg = Msg::from_xml(msg_str);
if (msg.command == "connection")
process_connection_message(msg);
in_messages.enQ(msg);
//std::cout << "\n Message: \n"<< msg << "\n";
}
catch (std::exception e) {
std::cout << "\n Ran into trouble getting the message. Details: " << e.what();
}
}
}
Sending thread function:
void intr_process_comm::sndr_thread_proc()
{
while (conn_state == intr_process_comm::opened ||
(conn_state == intr_process_comm::closing && out_messages.size() > 0)
) {
// pull a message out of the queue
Msg msg = out_messages.deQ();
if (connections.find(msg.destination) == connections.end())
connections[msg.destination].connect(msg.destination);
if (connections[msg.destination].connect(msg.destination)
!= intr_process_locking::state::opened) {
blocked_messages[msg.destination].push_back(msg);
continue;
}
// THIS ALWAYS GETS WAIT_OBJECT_0 RESULT
DWORD wait_result = connections[msg.destination].wait(wait_timeout);
if (wait_result == WAIT_TIMEOUT) { // <---- THIS IS NEVER TRUE
out_messages.enQ(msg);
continue;
}
// do things here
// release the receiver semaphore in the other process
connections[msg.destination].signal();
}
}
To clarify some things:
trgt_sem in a sender is the rcvr_sem in the receiver.
sndr_sem in the sender is the wait_sem in the receiver.
To call WaitForSingleObject with some handle:
The handle must have the SYNCHRONIZE access right.
But you open the semaphore with SEMAPHORE_MODIFY_STATE access only. With that access you can call ReleaseSemaphore (that handle must have the SEMAPHORE_MODIFY_STATE access right), but a call to WaitForSingleObject fails with WAIT_FAILED, and GetLastError() afterwards returns ERROR_ACCESS_DENIED.
So if we want to call both ReleaseSemaphore and any wait function, we need SEMAPHORE_MODIFY_STATE | SYNCHRONIZE access on the handle, i.e. open with
OpenSemaphore(SEMAPHORE_MODIFY_STATE | SYNCHRONIZE, ...)
And of course, always checking API return values and error codes can save a lot of time.
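For example, a minimal sketch based on the question's own OpenSemaphore call, with the extra access right and the wait result actually checked (target_name and wait_timeout are the names used in the question):
// open with both rights so that ReleaseSemaphore and the wait functions work
sndr_sem = OpenSemaphore(SEMAPHORE_MODIFY_STATE | SYNCHRONIZE, FALSE,
                         ("Global\\" + target_name + "_wait_sem").c_str());
if (!sndr_sem)
    std::cout << "\n OpenSemaphore failed, error " << GetLastError();

DWORD wait_result = WaitForSingleObject(sndr_sem, wait_timeout);
if (wait_result == WAIT_FAILED)
    std::cout << "\n Wait failed, error " << GetLastError(); // ERROR_ACCESS_DENIED without SYNCHRONIZE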
If you set the timeout to 0, WaitForSingleObject always returns immediately. A successful WaitForSingleObject returns WAIT_OBJECT_0 (which happens to have the value 0); WFSO is not like most APIs where success is indicated by a non-zero return.
First off: this is not a Unix/Linux system. I am working on an IBM AS/400 V7R1 (C++ 98) and do not have access to fork(). Nevertheless, I do have spawnp() to start new child processes and the AS/400 supports the notion of process groups.
In my system, I have a "head" program that starts X number of children. This head calls accept() on incoming connections and immediately gives the socket away to one of the child processes via sendmsg(). The children are all sitting on recvmsg(). For the head program, it goes something like this:
rc = socketpair(AF_UNIX, SOCK_DGRAM, 0, pair_sd);
if (rc != 0) {
perror("socketpair() failed");
close(listen_sd);
exit(-1);
}
server_sd = pair_sd[0];
worker_sd = pair_sd[1];
// do some other stuff, set up arguments for spawnp()...
// ...
spawn_fdmap[0] = worker_sd;
for (int i = 0; i < numOfChildren; i++) {
pid = spawnp(spawn_argv[0], 1, spawn_fdmap, &inherit, spawn_argv, spawn_envp);
if (pid < 0) {
CERR << "errno=" << errno << ", " << strerror(errno) << endl;
CERR << "command line [";
for (int x = 0; spawn_argv[x] != 0; ++x) {
cerr << spawn_argv[x] << " ";
}
cerr << ']' << endl;
close(listen_sd);
exit(-1);
}
else {
CERR << "Child worker PID = " << pid << endl;
child_pids.push_back(pid);
}
}
// Close down the worker side of the socketpair.
close(worker_sd);
I've got a reason/scheme to start additional child processes after initial program start. I plan to send the head program some signal which would cause the spawnp() call to execute again. The "close(worker_sd)" has me concerned though. Can I call spawnp() again after I've closed the worker socket? It's just a number, after all. Is it OK to keep the worker_sd open?
Can I call spawnp() again after I've closed the worker socket?
After you called close on that socket, the file descriptor is no longer valid in this process.
You probably want a separate socketpair for each child process, so that messages from different child processes do not get interleaved/corrupted.
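A rough sketch of that idea, reusing the names from the question (inherit, spawn_argv, spawn_envp and spawn_fdmap are assumed to be set up as before; server_sds is a hypothetical container for the head-side ends):
std::vector<int> server_sds;                 // one head-side socket per child
for (int i = 0; i < numOfChildren; i++) {
    int pair_sd[2];
    if (socketpair(AF_UNIX, SOCK_DGRAM, 0, pair_sd) != 0) {
        perror("socketpair() failed");
        break;
    }
    spawn_fdmap[0] = pair_sd[1];             // each child inherits only its own worker end
    pid = spawnp(spawn_argv[0], 1, spawn_fdmap, &inherit, spawn_argv, spawn_envp);
    if (pid < 0) {
        perror("spawnp() failed");
        close(pair_sd[0]);
        close(pair_sd[1]);
        continue;
    }
    child_pids.push_back(pid);
    close(pair_sd[1]);                       // the head keeps only the server end
    server_sds.push_back(pair_sd[0]);
}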
I think calling socketpair() for every child is unnecessary, and it means having to keep track of additional sockets on the server side. What I found is that removing the close() on 'worker_sd' allows me to create as many additional child processes as I want. Closing it and creating a child process caused the new child to die when it tried to receive something from the parent. I felt this is what would happen, and it did.
I am currently working on a program that does IPC via POSIX message queues. I now need a function that removes every message from a queue. The problem is: my code deadlocks. Currently I am trying the following:
void clear_mq(std::string queue_name)
{
struct mq_attr mq_attrs = {0, 10, sizeof(uint8_t), 0};
mqd_t mq = ::mq_open(queue_name.c_str(), O_WRONLY | O_CREAT, 00644, &mq_attrs);
if (mq < 0)
{
std::cout << "Error opening Queue" << std::endl;
exit(-1);
}
struct mq_attr num_messages;
if (mq_getattr(mq, &num_messages) == -1)
{
std::cout << "Error!" << std::endl;
exit(-1);
}
while (num_messages.mq_curmsgs > 0)
{
uint8_t buf;
mq_receive(mq, (char *)&buf, sizeof(uint8_t), NULL);
if (mq_getattr(mq, &num_messages) == -1)
{
std::cout << "Error!" << std::endl;
exit(-1);
}
}
mq_close(mq);
}
Can anyone point out what I am doing wrong? I do not understand why the receive is blocking... At the moment I call clear_mq, no one else is in the receive block...
It could be that mq_receive() fails and you end up in an endless loop.
mq_receive() can fail for various reasons, e.g. the buffer provided must be at least as large as the queue's mq_msgsize.
You should check the return value of mq_receive() and exit the loop if it fails.
IMHO you have no deadlock. However, mq_receive blocks until it receives a message (see man mq_receive), because the queue was not opened with the O_NONBLOCK flag in mq_open.
Please also make sure you do not ignore the return value of mq_receive in the loop.
In case someone else has this problem:
When printing errno I get error 9 (EBADF, Bad file descriptor), which makes sense because the message queue is opened write-only while you are trying to read from it. When you open the queue with O_RDWR (see mq_open) it works.
A tip for debugging: use mq_timedreceive so that the call does not block forever and you can check the error.
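Putting the answers together, a minimal corrected sketch of clear_mq could look like this (O_RDWR so reading is allowed, O_NONBLOCK so draining an empty queue cannot block, and the receive buffer sized from mq_msgsize; treat it as an outline rather than the exact fix):
#include <mqueue.h>
#include <fcntl.h>
#include <cerrno>
#include <cstring>
#include <iostream>
#include <string>
#include <vector>

void clear_mq(const std::string& queue_name)
{
    mqd_t mq = ::mq_open(queue_name.c_str(), O_RDWR | O_CREAT | O_NONBLOCK, 00644, NULL);
    if (mq == (mqd_t)-1)
    {
        std::cout << "Error opening queue: " << std::strerror(errno) << std::endl;
        return;
    }
    struct mq_attr attrs;
    if (mq_getattr(mq, &attrs) == -1)
    {
        std::cout << "mq_getattr failed: " << std::strerror(errno) << std::endl;
        mq_close(mq);
        return;
    }
    // The buffer passed to mq_receive must be at least mq_msgsize bytes.
    std::vector<char> buf(attrs.mq_msgsize);
    while (mq_receive(mq, &buf[0], buf.size(), NULL) >= 0)
    {
        // discard the message
    }
    if (errno != EAGAIN) // EAGAIN just means the queue is now empty
        std::cout << "mq_receive failed: " << std::strerror(errno) << std::endl;
    mq_close(mq);
}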
I'm trying to create a UDP broadcast program to check for local game servers, but I'm having some trouble with the receiving end. Since the number of servers alive is unknown at all times, you need a loop that only exits when you stop it. So in this bit of code here:
while(1) // start a while loop
{
if(recvfrom(sd,buff,BUFFSZ,0,(struct sockaddr *)&peer,&psz) < 0) // recvfrom() function call
{
cout << red << "Fatal: Failed to receive data" << white << endl;
return;
}
else
{
cout << green << "Found Server :: " << white;
cout << yellow << inet_ntoa(peer.sin_addr), htons(peer.sin_port);
cout << endl;
}
}
I wish to run this recvfrom() loop until I press Ctrl + C. I've tried setting up handlers and such (from related questions), but they're either too complicated for me, or they're simple functions that just exit the program as a demonstration. Here's my problem:
The program blocks in recvfrom until it receives a datagram (my guess), so there's never a chance for it to specifically wait for input. How can I set up an event that will work into this nicely?
Thanks!
In the Ctrl-C handler, set a flag, and use that flag as the condition of the while loop.
Oh, and if you're not on a POSIX system where system calls can be interrupted by signals, you might want to make the socket non-blocking and use e.g. select (with a small timeout) to poll for data.
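On POSIX that can look roughly like this (a sketch reusing sd, buff, BUFFSZ, peer and psz from the question; the handler is installed without SA_RESTART so recvfrom is interrupted by the signal):
#include <csignal>
#include <cerrno>

volatile sig_atomic_t keep_running = 1;

void on_sigint(int) { keep_running = 0; }

// ...
struct sigaction sa = {};
sa.sa_handler = on_sigint;              // deliberately no SA_RESTART
sigaction(SIGINT, &sa, NULL);

while (keep_running)
{
    if (recvfrom(sd, buff, BUFFSZ, 0, (struct sockaddr *)&peer, &psz) < 0)
    {
        if (errno == EINTR)
            continue;                   // interrupted by Ctrl+C, the loop re-checks the flag
        // handle a real error
        break;
    }
    // handle the received datagram as before
}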
Windows has a couple of problems with a scheme like this. The major problem is that function calls cannot be interrupted by the Ctrl-C handler. Instead you have to poll for anything to receive in the loop, while also checking the "exit loop" flag.
It could be done something like this:
volatile bool ExitRecvLoop = false; // volatile: the handler runs on a different thread
BOOL CtrlHandler(DWORD type)
{
if (type == CTRL_C_EVENT)
{
ExitRecvLoop = true;
return TRUE;
}
return FALSE; // Call next handler
}
// ...
SetConsoleCtrlHandler((PHANDLER_ROUTINE) CtrlHandler, TRUE);
while (!ExitRecvLoop)
{
fd_set rs;
FD_ZERO(&rs);
FD_SET(sd, &rs);
timeval timeout = { 0, 1000 }; // One millisecond
if (select(sd + 1, &rs, NULL, NULL, &timeout) < 0)
{
// Handle error
}
else
{
if (FD_ISSET(sd, &rs))
{
// Data to receive, call `recvfrom`
}
}
}
You might have to make the socket non-blocking for this to work (see the ioctlsocket function for how to).
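For example (a small sketch; sd is the socket from the question):
u_long non_blocking = 1;
if (ioctlsocket(sd, FIONBIO, &non_blocking) != 0)
{
    // Handle error (see WSAGetLastError)
}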
Thread off your recvfrom() loop so that your main thread can wait for user input. When the user requests a stop, close the fd from the main thread; the recvfrom() will then return immediately with an error, allowing your receive thread to exit.
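A rough sketch of that approach with std::thread, reusing sd, buff, BUFFSZ, peer and psz from the question (it assumes, as the answer states, that closing the socket makes the blocked recvfrom return with an error; close(sd) would be closesocket(sd) on Windows):
#include <thread>
#include <atomic>

std::atomic<bool> stopping(false);

std::thread receiver([&]() {
    while (!stopping)
    {
        if (recvfrom(sd, buff, BUFFSZ, 0, (struct sockaddr *)&peer, &psz) < 0)
            break;                      // socket closed by the main thread (or a real error)
        // handle the received datagram as before
    }
});

std::cin.get();                         // main thread waits for the user
stopping = true;
close(sd);                              // unblocks recvfrom with an error
receiver.join();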
So I'm trying to build an async server... Here is a summary of what I have so far:
int sockfd;
int max;
fd_set socks;
set<int> conns;
bind();
listen(sockfd);
while(1){
FD_ZERO(&socks);
max = sockfd;
FD_SET(sockfd, &socks);
for(set<int>::iterator it=conns.begin(); it!=conns.end(); it++){
FD_SET(*it, &socks);
if(max < *it){
max = *it;
}
}
int res = select(max+1, &socks, NULL, NULL, NULL);
if(res < 0){
cerr << "ERROR with select" << endl;
break;
}else if(res){
if(FD_ISSET(sockfd, &socks)){
//new connection
int new_sockfd = accept();
conns.insert(new_sockfd);
}else{
for(set<int>::iterator it=conns.begin(); it!=conns.end(); it++){
if(FD_ISSET(*it, &socks)){
char buffer[256];
read(*it, buffer, 256);
cout << buffer << endl;
close(*it);
conns.erase(*it);
}
}
}
}
}
What ends up happening is: if I connect client-1 and then client-2, and then send data using client-2 and then client-1, it works.
However, if I connect client-1 and then client-2, and then try to send data using client-1 first, select() returns -1...
Help?
Take a look at the man page for select. The important part is:
Under the following conditions, pselect() and select() shall fail and set errno to:
EBADF
One or more of the file descriptor sets specified a file descriptor that is not a valid open file descriptor.
EINTR
The function was interrupted before any of the selected events occurred and before the timeout interval expired.
If SA_RESTART has been set for the interrupting signal, it is implementation-defined whether the function restarts or returns with [EINTR].
EINVAL
An invalid timeout interval was specified.
EINVAL
The nfds argument is less than 0 or greater than FD_SETSIZE.
EINVAL
One of the specified file descriptors refers to a STREAM or multiplexer that is linked (directly or indirectly) downstream from a multiplexer.
errno should tell you what is wrong.
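For example, in the error branch of the select call from the question:
if(res < 0){
perror("select");   // prints e.g. "select: Bad file descriptor" for EBADF
break;
}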
This is just a guess, but when you close the connection, your file descriptor becomes invalid. I guess the error from select should be EBADF.
I think the code erasing from the set is suspect: once you call conns.erase(*it), your iterator is invalid, and incrementing it leads to undefined behavior.
Changing your loop to something like the following should resolve the issue:
for(set<int>::iterator it=conns.begin(); it!=conns.end();)
{
set<int>::iterator cur = it++;
if(FD_ISSET(*cur, &socks)){
char buffer[256];
read(*cur, buffer, 256);
cout << buffer << endl;
close(*cur);
conns.erase(*cur);
}
}