Trigger from thread to main thread in XCB Event Loop - c++

Does anyone have any ideas on how I can get my main thread event loop which looks like:
const int MY_CUST_MSG(877);
xcb_generic_event_t *event;
while ((event = xcb_wait_for_event(connection))) {
    switch (event->response_type & ~0x80) {
    case MY_CUST_MSG:
        // do something
        break;
    default:
        // Unknown event type, ignore it
        debug_log("Unknown event: ", event->response_type);
    }
    free(event);
}
to react to a message from a secondary thread?

xcb_wait_for_event() waits for an event to be received from the server. You could send a message to yourself through the server, but I would suggest an alternative approach:
1. Use xcb_get_file_descriptor() to get the underlying file descriptor for the X connection.
2. Set up an internal pipe that your application can use to send messages to itself, between threads.
3. Use xcb_poll_for_event(), the non-blocking version of xcb_wait_for_event(), to check whether an event has already been read, and if so, handle it.
4. Do a non-blocking read on your internal pipe to check for any internal message from another thread.
5. If neither step 3 nor step 4 produced a message, use poll() to wait for either descriptor to become readable.
You will also need to call xcb_flush() to flush any pending requests manually, and xcb_connection_has_error() to check for a fatal connection error to the X server.
See the tutorial for more information.
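For illustration, a minimal sketch of that loop (assumptions: the pipe's read end was created with O_NONBLOCK, and handle_internal_message() is a hypothetical hook for your own messages):

#include <poll.h>
#include <unistd.h>
#include <xcb/xcb.h>

void event_loop(xcb_connection_t *connection, int pipe_read_fd)
{
    int xcb_fd = xcb_get_file_descriptor(connection);
    while (!xcb_connection_has_error(connection)) {
        // Drain any X events libxcb has already read and buffered.
        xcb_generic_event_t *event;
        while ((event = xcb_poll_for_event(connection))) {
            // dispatch on event->response_type & ~0x80, as in the loop above
            free(event);
        }
        // Drain any internal messages from other threads (pipe read end is O_NONBLOCK).
        char msg;
        while (read(pipe_read_fd, &msg, 1) == 1) {
            // handle_internal_message(msg);  // hypothetical handler
        }
        xcb_flush(connection);
        // Block until either the X connection or the pipe becomes readable.
        struct pollfd fds[2] = {
            { xcb_fd,       POLLIN, 0 },
            { pipe_read_fd, POLLIN, 0 },
        };
        poll(fds, 2, -1);
    }
}

A secondary thread then only needs to write a byte to the pipe's write end to wake the main loop.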

Related

How do I use select() and gRPC to create a server?

I need to use gRPC but in a single-threaded application (with additional socket channels). Naively, I'm thinking of using select() and depending on which file descriptor pops, calling gRPC to handle the message. My question is, can someone give me a rough (5-10 lines of code) outline skeleton on what I need to call after the select() pops?
Google's "hello world" example implies a thread pool in the synchronous case (which I can't use), and in the asynchronous case it shows the main loop blocking -- which doesn't work for me because I need to handle other socket operations.
You can't do it, at this point (and probably ever).
One of the big weaknesses of event loops, including direct use of select()/poll() style APIs, is that they aren't composable in any natural way short of direct integration between the two.
We could theoretically add such functionality for Linux -- exporting an epoll_fd with a timerfd which becomes readable if it would be productive to call into a completion queue, but doing so would impose substantial constraints and architectural overhead on the rest of the stack just to support this use case only on Linux. Everywhere else would require a background thread to manage that fd's readability.
This can be done using a gRPC async service along with grpc::Alarm to send any events that come from select or other polling APIs onto the gRPC completion queue. You can see an example using Epoll and gRPC together in this gist. The important functions are these two:
bool grpc_tick(grpc::ServerCompletionQueue& queue) {
    void* tag = nullptr;
    bool ok = false;
    auto next_status = queue.AsyncNext(&tag, &ok, std::chrono::system_clock::now());
    if (next_status == grpc::CompletionQueue::GOT_EVENT) {
        if (ok && tag) {
            static_cast<RequestProcessor*>(tag)->grpc_queue_tick();
        } else {
            std::cerr << "Not OK or bad tag: " << ok << "; " << tag << std::endl;
            return false;
        }
    }
    return next_status != grpc::CompletionQueue::SHUTDOWN;
}

bool tick_loops(int epoll, grpc::ServerCompletionQueue& queue) {
    // Pump epoll events over to gRPC's completion queue.
    epoll_event event{0};
    while (epoll_wait(epoll, &event, /*maxevents=*/1, /*timeout=*/0)) {
        grpc::Alarm alarm;
        alarm.Set(&queue, std::chrono::system_clock::now(), event.data.ptr);
        if (!grpc_tick(queue)) return false;
    }
    // Make sure gRPC gets at least 1 tick.
    return grpc_tick(queue);
}
Here you can see the tick_loops function repeatedly calls epoll_wait until no more events are returned. For each epoll event, a grpc::Alarm is constructed with the deadline set to right now. After that, the gRPC event loop is immediately pumped with grpc_tick.
Note that the grpc::Alarm instance MUST outlive its time on the completion queue. In a real-world application, the alarm should be somehow attached to the tag (event.data.ptr in this example) so it can be cleaned up in the completion callback.
The gRPC event loop is then pumped again to ensure that any non-epoll events are also processed.
Completion queues are thread safe, so you could also put the epoll pump on one thread and the gRPC pump on another. With this setup you would not need to set the polling timeouts for each to 0 as they are in this example. This would reduce CPU usage by limiting dry cycles of the event loop pumps.
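As a rough sketch of that two-thread arrangement (an assumption-laden outline, not code from the gist: run_epoll_pump and run_grpc_pump are illustrative names, while RequestProcessor and grpc_queue_tick are the tag type used in the gist code above):

#include <sys/epoll.h>
#include <grpcpp/alarm.h>
#include <grpcpp/grpcpp.h>

void run_epoll_pump(int epoll_fd, grpc::ServerCompletionQueue& queue) {
    epoll_event event{};
    // Block in epoll_wait (timeout -1) instead of spinning with a 0 timeout.
    while (epoll_wait(epoll_fd, &event, /*maxevents=*/1, /*timeout=*/-1) > 0) {
        // Forward the readiness notification to the gRPC thread via an alarm
        // that fires immediately. The alarm must outlive its queue entry; it
        // is leaked here for brevity -- a real program would own it in the tag.
        auto* alarm = new grpc::Alarm();
        alarm->Set(&queue, std::chrono::system_clock::now(), event.data.ptr);
    }
}

void run_grpc_pump(grpc::ServerCompletionQueue& queue) {
    void* tag = nullptr;
    bool ok = false;
    // Next() blocks until an RPC or alarm event is available, so no dry cycles.
    while (queue.Next(&tag, &ok)) {
        if (ok && tag) {
            static_cast<RequestProcessor*>(tag)->grpc_queue_tick();
        }
    }
}

Each pump would run on its own std::thread; shutting down the completion queue makes Next() return false once it is drained, letting the gRPC thread exit.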

How would one avoid race conditions from multiple threads of a server sending data to a client? C++

I was following a tutorial on youtube on building a chat program using winsock and c++. Unfortunately the tutorial never bothered to consider race conditions, and this causes many problems.
The tutorial had us open a new thread every time a new client connected to the chat server, which would handle receiving and processing data from that individual client.
void Server::ClientHandlerThread(int ID) //ID = the index in the SOCKET Connections array
{
    Packet PacketType;
    while (true)
    {
        if (!serverptr->GetPacketType(ID, PacketType)) //Get packet type
            break; //If there is an issue getting the packet type, exit this loop
        if (!serverptr->ProcessPacket(ID, PacketType)) //Process packet (packet type)
            break; //If there is an issue processing the packet, exit this loop
    }
    std::cout << "Lost connection to client ID: " << ID << std::endl;
}
When the client sends a message, the thread will process it and send it by first sending packet type, then sending the size of the message/packet, and finally sending the message.
bool Server::SendString(int ID, std::string & _string)
{
    if (!SendPacketType(ID, P_ChatMessage))
        return false;
    int bufferlength = _string.size();
    if (!SendInt(ID, bufferlength))
        return false;
    int RetnCheck = send(Connections[ID], _string.c_str(), bufferlength, NULL); //Send string buffer
    if (RetnCheck == SOCKET_ERROR)
        return false;
    return true;
}
The issue arises when two threads (two separate clients) try to send a message to the same ID (the same third client) at the same time. One thread may send the client the int packet type, so the client is now prepared to receive an int, but then the second thread sends a string (because that thread assumes the client is waiting for its string). The client cannot process this correctly, and the program becomes unusable.
How would I solve this issue?
One solution I had:
Rather than allow each thread to execute server commands on their own, they would set an input value. The main server thread would loop through all the input values from each thread and then execute the commands one by one.
However I am unsure this won't have problems of its own... If a client sends multiple messages within the time frame of a single server loop, only one of the messages will be sent (since the new message would overwrite the previous one). Of course there are ways around this, such as arrays of input or faster loops, but it still poses a problem.
Another issue I thought of was that a client with a lower ID would always end up having their message sent first each loop. This isn't that big of a deal, but if there were a situation, say a trivia game, where two clients entered the correct answer in the same loop, then the client with the lower ID would end up saying the answer "first" every time.
Thanks in advance.
If all I/O is being handled through a central server, a simple (but certainly not elegant) solution is to create a barrier around the I/O mechanisms to each client. In the simplest case this can just be a mutex. Associate that barrier with each client and anytime someone wants to send that client something (a complete message), lock the barrier. Unlock it when the complete message is handled. That way only one client can actually send something to another client at a time. In C++11, see std::mutex.
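As a minimal sketch of that idea, reusing the question's SendPacketType/SendInt/Connections helpers and assuming a hypothetical per-client mutex array (SendMutexes, sized by a made-up MAX_CONNECTIONS constant):

#include <mutex>

// Hypothetical: one mutex per client slot, parallel to the Connections array.
std::mutex SendMutexes[MAX_CONNECTIONS];

bool Server::SendString(int ID, std::string& _string)
{
    // Hold the lock for the whole type/length/payload sequence so another
    // thread cannot interleave its own packets to the same client.
    std::lock_guard<std::mutex> lock(SendMutexes[ID]);
    if (!SendPacketType(ID, P_ChatMessage))
        return false;
    int bufferlength = _string.size();
    if (!SendInt(ID, bufferlength))
        return false;
    return send(Connections[ID], _string.c_str(), bufferlength, 0) != SOCKET_ERROR;
}

Every code path that writes to a client (not just SendString) must take the same per-client lock, otherwise the interleaving can still happen.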

How to make sure that WSASend() will send the data?

WSASend() will return immediately whether or not the data has been sent. But how do I make sure the data will be sent? For example, I have a button in my UI that sends "Hello World!" when pressed. I want to make sure that when the user clicks this button, "Hello World!" will be sent at some point, but WSASend() could return WSAEWOULDBLOCK, indicating that the data will not be sent. So should I enclose WSASend() in a loop that does not exit until WSASend() returns 0 (success)?
Note: I am using IOCP.
should I enclose WSASend() in a loop that does not exit until WSASend() returns 0 (success)
Err.. NO!
Have the UI issue an overlapped WSASend request, complete with buffer/s and OVERLAPPED/s. If, by some miracle, it does actually return success immediately, (and I've never seen it), you're good.
If (when, more likely) it returns WSA_IO_PENDING, you can do nothing more in your UI button handler because GUI event handlers cannot wait. Graphical UIs are state machines - you must exit the button handler and return to the message input queue promptly. You can do some GUI stuff, if you want. Maybe disable the 'Send' button, or add some 'Message sent' text to a memo component. That's about it - you must then exit.
Some time later, the successful completion notification, (or failure notification), will get posted to the IOCP completion queue and a handler thread will get hold of it. Use PostMessage, QueueUserAPC or similar inter-thread comms mechanism to signal 'something', (eg. the buffer object used in the original WSASend), back to the UI thread so that it can take action/s on the returned result, eg. re-enabling the 'Send' button.
Yes, it can be seen as messy, but it is the only way you can do it that will work well.
Other approaches - polling loops, Application.DoEvents, timers etc are all horrible bodges.
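For illustration, a rough sketch of such a completion-handler thread, assuming the IOCP handle is passed as the thread parameter; WM_APP_SEND_DONE and hwndMain are invented names, not part of the answer:

#include <winsock2.h>
#include <windows.h>

const UINT WM_APP_SEND_DONE = WM_APP + 1; // hypothetical message ID
extern HWND hwndMain;                     // main window owned by the UI thread (assumed)

DWORD WINAPI IocpWorker(LPVOID param)
{
    HANDLE iocp = static_cast<HANDLE>(param);
    DWORD bytes = 0;
    ULONG_PTR key = 0;
    LPOVERLAPPED ov = nullptr;
    // Dequeue completion notifications posted when overlapped WSASend
    // (or WSARecv) operations finish. A production loop would also handle
    // the FALSE-with-non-null-ov case, which signals a failed I/O.
    while (GetQueuedCompletionStatus(iocp, &bytes, &key, &ov, INFINITE)) {
        // ov points at the OVERLAPPED embedded in the per-send buffer object;
        // hand it back to the UI thread, which can re-enable the 'Send'
        // button and release or reuse the buffer.
        PostMessage(hwndMain, WM_APP_SEND_DONE,
                    static_cast<WPARAM>(bytes),
                    reinterpret_cast<LPARAM>(ov));
    }
    return 0;
}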
Overlapped Socket I/O
If an overlapped operation completes immediately, WSASend returns a value of zero and the lpNumberOfBytesSent parameter is updated with the number of bytes sent. If the overlapped operation is successfully initiated and will complete later, WSASend returns SOCKET_ERROR and indicates error code WSA_IO_PENDING.
...
The error code WSA_IO_PENDING indicates that the overlapped operation has been successfully initiated and that completion will be indicated at a later time. Any other error code indicates that the overlapped operation was not successfully initiated and no completion indication will occur.
...
So as demonstrated in the docs, you don't need to enclose the call in a loop. Just check for SOCKET_ERROR: if the call fails with any error other than WSA_IO_PENDING, treat it as a real failure; otherwise the send has been successfully initiated and will complete later:
rc = WSASend(AcceptSocket, &DataBuf, 1,
             &SendBytes, 0, &SendOverlapped, NULL);
if ((rc == SOCKET_ERROR) &&
    (WSA_IO_PENDING != (err = WSAGetLastError()))) {
    printf("WSASend failed with error: %d\n", err);
    break;
}

Wait for data on COM port?

I'm looking for a way to get a Windows serial port to timeout until it has received data. It would be nice if there was some kind of event that triggered or a function to do exactly what I want.
This is my current implementation.
void waitforCom(unsigned char byte)
{
    while (true)
    {
        ClearCommError(serial_handle, &errors, &status);
        if (status.cbInQue > 0)
        {
            //check if correct byte
            break;
        }
    }
}
Another API call you could be using is WaitCommEvent().
http://msdn.microsoft.com/en-us/library/windows/desktop/aa363479(v=vs.85).aspx
This call can work asynchronously since it takes an OVERLAPPED object as a parameter. In your case you'd want to simply wait on the EV_RXCHAR event to let you know data has arrived:
OVERLAPPED o = {0};
o.hEvent = CreateEvent(NULL, FALSE, FALSE, NULL);

DWORD commEvent = 0;
SetCommMask(comPortHandle, EV_RXCHAR);
if (!WaitCommEvent(comPortHandle, &commEvent, &o))
{
    // Check GetLastError for ERROR_IO_PENDING; if I/O is pending then
    // use WaitForSingleObject() to determine when `o` is signaled, then check
    // the result. If a character arrived then perform your ReadFile.
}
Alternatively you could do the same thing by having a thread with an outstanding ReadFile call, but using the OVERLAPPED object instead of blocking as MSalters recommends.
I'm not really a specialist when it comes to WinAPI, but there's a whole article on the Microsoft Developer Network that covers the subject of serial communications. The article discusses waiting for data from a port, and it comes with an example.
At the winAPI level, for most applications you need to dedicate a thread to serial port input because ReadFile is a blocking call (but with a timeout). The most useful event you can get is having ReadFile return. Just put ReadFile in a loop in a thread and generate your own event or message to some other thread when ReadFile gets some data.
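A minimal sketch of that reader-thread approach, assuming the COM handle was opened without FILE_FLAG_OVERLAPPED and that notify_main_thread() is a hypothetical hook (for example a PostMessage to your main window):

#include <windows.h>

void notify_main_thread(const unsigned char* data, DWORD length); // hypothetical

DWORD WINAPI SerialReaderThread(LPVOID param)
{
    HANDLE serial_handle = static_cast<HANDLE>(param);
    unsigned char buffer[256];
    for (;;) {
        DWORD bytesRead = 0;
        // Blocks until data arrives or the port's COMMTIMEOUTS expire.
        if (!ReadFile(serial_handle, buffer, sizeof(buffer), &bytesRead, NULL))
            break; // port error or handle closed -- exit the thread
        if (bytesRead > 0)
            notify_main_thread(buffer, bytesRead);
    }
    return 0;
}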

How to set a timeout for receiving a message from a client on a server in non-blocking mode?

I have a server with 2 SOCKET connections to clients, and I have set the server to non-blocking mode so it doesn't stop when sending or receiving a message. I want to set a timeout for the SOCKET of each connection, but if I use the following code:
string getMessage(SOCKET connectedSocket, int time){
    string error = R_ERROR;
    // Using select in winsock
    fd_set set;
    timeval tm;
    FD_ZERO(&set);
    FD_SET(connectedSocket, &set);
    tm.tv_sec = time; // timeout in seconds
    tm.tv_usec = 0;   // 0 microseconds
    switch (select(connectedSocket, &set, 0, 0, &tm))
    {
    case 0:
        // timeout
        this->disconnect();
        break;
    case 1:
        // Can receive some data here
        return this->recvMessage();
        break;
    default:
        // error - handle appropriately.
        break;
    }
    return error;
}
My server is no longer non-blocking! I have to wait until the end of the 1st connection's timeout before I can get a message from the 2nd connection! That's not what I expect! So, is there any way to set a timeout in non-blocking mode? Or do I have to handle it myself?
select is a demultiplexing mechanism. While you are using it to determine when data is ready on a single socket or timeout, it was actually designed to return data ready status on many sockets (hence the fd_set). Conceptually, it is the same with poll, epoll and kqueue. Combined with non-blocking I/O, these mechanisms provide an application writer with the tools to implement a single threaded concurrent server.
In my opinion, your application does not need that kind of power. Your application will only be handling two connections, and you are already using one thread per connection. I believe leaving the socket in blocking I/O mode is more appropriate.
If you insist on non-blocking mode, my suggestion is to replace the select call with something else. Since what you want from select is an indication of read readiness or timeout for a single socket, you can achieve a similar effect with recv passed with appropriate parameters and with the appropriate timeout set on the socket.
tm.tv_sec = time;
tm.tv_usec = 0;
setsockopt(connectedSocket, SOL_SOCKET, SO_RCVTIMEO, (char *)&tm, sizeof(tm));

char c;
switch (recv(connectedSocket, &c, 1, MSG_PEEK|MSG_WAITALL)) {
case -1:
    if (errno == EAGAIN) {
        // handle timeout ...
    } else {
        // handle other error ...
    }
    break;
case 0: // FALLTHROUGH
default:
    // handle read ready ...
    break;
}
From man recv:
MSG_PEEK -- This flag causes the receive operation to return data from the beginning of the receive queue without removing that data from the queue. Thus, a subsequent receive call will return the same data.
MSG_WAITALL (since Linux 2.2) -- This flag requests that the operation block until the full request is satisfied. However, the call may still return less data than requested if a signal is caught, an error or disconnect occurs, or the next data to be received is of a different type than that returned.
As to why select behaves the way you observed: while the select call is thread-safe, it is likely guarded against reentrancy, so one thread's call to select will only proceed after another thread's call completes (the calls to select are serialized). This is in line with its function as a demultiplexer. Its purpose is to serve as a single arbiter for which connections are ready, and as such it wants to be controlled by a single thread.
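To illustrate that single-arbiter idea, here is a rough sketch in which one thread calls select() over both connected sockets with a shared timeout, so neither connection's wait blocks the other (connections[] and handleMessage() are invented names, not from the question):

#include <winsock2.h>

void handleMessage(SOCKET s); // hypothetical per-socket handler

void pollBothClients(SOCKET connections[2], int timeoutSeconds)
{
    fd_set readSet;
    FD_ZERO(&readSet);
    FD_SET(connections[0], &readSet);
    FD_SET(connections[1], &readSet);

    timeval tm;
    tm.tv_sec = timeoutSeconds;
    tm.tv_usec = 0;

    // The first parameter is ignored by Winsock's select().
    int ready = select(0, &readSet, NULL, NULL, &tm);
    if (ready > 0) {
        for (int i = 0; i < 2; ++i)
            if (FD_ISSET(connections[i], &readSet))
                handleMessage(connections[i]); // only sockets that are ready
    } else if (ready == 0) {
        // timeout: neither client sent anything within timeoutSeconds
    } else {
        // SOCKET_ERROR: inspect WSAGetLastError()
    }
}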