ZMQ wait for a message, have client wait for reply - C++

I'm trying to synchronise four clients with one server. Each client sends a message to the server when it is ready to move on; the server counts the requests and, once it has heard from every client, sends a message back telling them all to proceed.
What I've done so far is use REQ/REP:
while(1){
    int responses = 0;
    while(responses < numberOfCameras){
        for(int i = 0; i < numberOfCameras; i++){
            cout << "waiting" << endl;
            if(sockets[i]->recv(requests[i], ZMQ_NOBLOCK)){
                responses++;
                cout << "rx" << endl;
            }
        }
    }
    for(int i = 0; i < numberOfCameras; i++){
        cout << "tx" << endl;
        sockets[i]->send("k", 1);
        cout << "Sent" << endl;
    }
}
With more than one camera, this produces the expected error:
Operation cannot be accomplished in current state
Presumably because a REP socket cannot do anything else until it has replied to the outstanding REQ, right?
How can I modify this to work with multiple clients?
EDIT:
I have attempted to implement a less strict REQ/REP with PUSH/PULL. The meat of it is:
Server:
while(1){
    int responses = 0;
    while(responses < numberOfCameras){
        for(int i = 0; i < numberOfCameras; i++){
            cout << "waiting" << endl;
            if(REQSockets[i]->recv(requests[i], ZMQ_NOBLOCK)){
                responses++;
                cout << "rx" << endl;
            }
        }
    }
    boost::this_thread::sleep(boost::posix_time::milliseconds(200));
    for(int i = 0; i < numberOfCameras; i++){
        cout << "tx" << endl;
        REPSockets[i]->send("k", 1);
        cout << "Sent" << endl;
    }
    boost::this_thread::sleep(boost::posix_time::milliseconds(200));
}
Clients:
for (;;) {
    std::cout << "Requesting permission to capture" << std::endl;
    REQSocket.send("?", 1);
    // Get the reply.
    zmq::message_t reply;
    REPSocket.recv(&reply);
    std::cout << "Grabbed a frame" << std::endl;
    boost::this_thread::sleep(boost::posix_time::seconds(2));
}
I have printed out all of the ports and addresses to check that they're set correctly.
The server program hangs with the output:
...
waiting
rx
tx
This means the program is hanging on the send, but for the life of me I can't see why.
EDIT 2:
I have made a GitHub repo with a compilable example and a Linux makefile, and converted it back to REQ/REP. The issue is that the client doesn't accept the message from the server, but again, I don't know why.

The answer was to use two REQ/REP socket pairs, as in edit 2. I had made a stupid typo, writing "REQ" instead of "REP" in one of the variable usages, and hadn't noticed. I was therefore connecting and then binding the same socket.
I will leave the GitHub repo up as I think the question is long enough already.
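For anyone hitting the same "Operation cannot be accomplished in current state" error: the key constraint is that each REP socket must strictly alternate recv() and send(). A minimal sketch of a working barrier under that constraint (the endpoints and camera count are illustrative assumptions, not the repo's exact code):
// Sketch: a sync barrier with one REP socket per camera.  Each REP socket
// must alternate recv()/send(), so we remember which sockets already have
// an outstanding request this round and skip them, instead of calling
// recv() on them again (which raises the "current state" error).
#include <zmq.hpp>
#include <algorithm>
#include <string>
#include <vector>

int main() {
    const int numberOfCameras = 4;                              // illustrative
    zmq::context_t ctx(1);
    std::vector<zmq::socket_t> sockets;
    for (int i = 0; i < numberOfCameras; i++) {
        sockets.emplace_back(ctx, ZMQ_REP);
        sockets[i].bind("tcp://*:" + std::to_string(5550 + i)); // client i connects a REQ here
    }

    std::vector<bool> gotRequest(numberOfCameras);
    while (true) {
        std::fill(gotRequest.begin(), gotRequest.end(), false);
        int responses = 0;
        while (responses < numberOfCameras) {
            for (int i = 0; i < numberOfCameras; i++) {
                if (gotRequest[i]) continue;       // reply still owed: do not recv() again
                zmq::message_t req;
                if (sockets[i].recv(&req, ZMQ_NOBLOCK)) {
                    gotRequest[i] = true;
                    responses++;
                }
            }
        }
        // Every camera has checked in: release them all.
        for (int i = 0; i < numberOfCameras; i++)
            sockets[i].send("k", 1);
    }
}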

Related

App crashes when it takes too long to reply in a ZMQ REQ/REP pattern

I am writing a plugin that interfaces with a desktop application through a ZeroMQ REQ/REP socket pair. I can currently receive a request, but the application apparently crashes if a reply is not sent quickly enough.
I receive the request on a spawned thread and put it in a queue. This queue is processed on another thread, where the processing function is invoked periodically by the application.
The message is received and processed correctly, but the response cannot be sent until the next iteration of the function, as I cannot get the data from the application until then.
When the function is set up to send the response on the next iteration, the application crashes. However, if I send fake data as the response soon after receiving the request, within the first iteration, the application does not crash.
Constructing the socket
zmq::socket_t socket(m_context, ZMQ_REP);
socket.bind("tcp://*:" + std::to_string(port));
Receiving the message in the spawned thread
void ZMQReceiverV2::receiveRequests() {
    nInfo(*m_logger) << "Preparing to receive requests";
    while (m_isReceiving) {
        zmq::message_t zmq_msg;
        bool ok = m_respSocket.recv(&zmq_msg, ZMQ_NOBLOCK);
        if (ok) {
            // msg_str will be a binary string
            std::string msg_str;
            msg_str.assign(static_cast<char *>(zmq_msg.data()), zmq_msg.size());
            nInfo(*m_logger) << "Received the message: " << msg_str;
            std::pair<std::string, std::string> pair("", msg_str);
            // adding to message queue
            m_mutex.lock();
            m_messages.push(pair);
            m_mutex.unlock();
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
    nInfo(*m_logger) << "Done receiving requests";
}
Processing function on a separate thread
void ZMQReceiverV2::exportFrameAvailable()
{
    // checking messages
    // if the queue is not empty
    m_mutex.lock();
    if (!m_messages.empty()) {
        nInfo(*m_logger) << "Reading message in queue";
        smart_target::SMARTTargetCreateRequest id_msg;
        std::pair<std::string, std::string> pair = m_messages.front();
        std::string topic = pair.first;
        std::string msg_str = pair.second;
        processMsg(msg_str);
        // removing just read message
        m_messages.pop();
        //m_respSocket.send(zmq::message_t()); // won't crash if I reply here in this invocation
    }
    m_mutex.unlock();
    // sending back the ID that has just been made, for it to be mapped
    if (timeToSendReply()) {
        sendReply(); // will crash if I wait for this to be executed on the next invocation
    }
}
My research suggests that there is no time limit for the response to be sent, so this apparent timing issue is strange.
Is there something that I am missing that will let me send the response on the second iteration of the processing function?
Revision 1:
I have edited my code so that the responding socket only ever exists on one thread. Since I need to get information from the processing function to send, I created another queue, which is checked in the revised function running on its own thread.
void ZMQReceiverV2::receiveRequests() {
    zmq::socket_t socket = setupBindSocket(ZMQ_REP, 5557, "responder");
    nInfo(*m_logger) << "Preparing to receive requests";
    while (m_isReceiving) {
        zmq::message_t zmq_msg;
        bool ok = socket.recv(&zmq_msg, ZMQ_NOBLOCK);
        if (ok) {
            // does not crash if I call the send helper here
            // msg_str will be a binary string
            std::string msg_str;
            msg_str.assign(static_cast<char *>(zmq_msg.data()), zmq_msg.size());
            NLogger::nInfo(*m_logger) << "Received the message: " << msg_str;
            std::pair<std::string, std::string> pair("", msg_str);
            // adding to message queue
            m_mutex.lock();
            m_messages.push(pair);
            m_mutex.unlock();
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        if (!sendQueue.empty()) {
            sendEntityCreationMessage(socket, sendQueue.front());
            sendQueue.pop();
        }
    }
    nInfo(*m_logger) << "Done receiving requests";
    socket.close();
}
The function sendEntityCreationMessage() is a helper function that ultimately calls socket.send().
void ZMQReceiverV2::sendEntityCreationMessage(zmq::socket_t &socket, NUniqueID id) {
    socket.send(zmq::message_t());
}
This code seems to be following the thread safety guidelines for sockets. Any suggestions?
Q : "Is there something that I am missing"
Yes. The ZeroMQ evangelisation, called the Zen-of-Zero, has always promoted three rules: never try to share a Socket-instance, never try to block, and never expect the world to act as one wishes.
This said, avoid touching the same Socket-instance from any thread other than the one that instantiated and owns it.
Last, but not least, the REQ/REP Scalable Formal Communication Pattern Archetype is prone to fall into a deadlock: a mandatory two-step dance must be obeyed, in which each side keeps to the alternating sequence of .send()-.recv()-.send()-.recv()-... method calls; otherwise the pair of distributed Finite State Automata (FSA) will unsalvageably end up in a mutual self-deadlock state of the dFSA.
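In practice that means the REP side may defer its .send(), but must not call .recv() again before sending it. A minimal sketch of that discipline, with the reply handed over through a queue so only the owning thread ever touches the socket (the names, port, and queue type are illustrative assumptions, not the plugin's actual API):
#include <zmq.hpp>
#include <chrono>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// Sketch: strict REP-side alternation with a deferred reply.  The socket is
// only ever touched by the thread that owns it; other threads communicate
// through replyQueue (guarded by mtx).
void serveLoop(zmq::context_t &ctx, std::queue<std::string> &replyQueue, std::mutex &mtx)
{
    zmq::socket_t rep(ctx, ZMQ_REP);
    rep.bind("tcp://*:5557");
    bool awaitingReply = false;            // tracks which half of the dance we are in
    while (true) {
        if (!awaitingReply) {
            zmq::message_t req;
            if (rep.recv(&req, ZMQ_NOBLOCK))
                awaitingReply = true;      // must .send() before the next .recv()
        } else {
            std::lock_guard<std::mutex> lock(mtx);
            if (!replyQueue.empty()) {     // reply prepared by the processing thread
                const std::string &r = replyQueue.front();
                rep.send(r.data(), r.size());
                replyQueue.pop();
                awaitingReply = false;     // FSA back in the recv state
            }
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
}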
In case one is planning to professionally build on ZeroMQ, the best next step is to re-read Pieter HINTJENS' fabulous book "Code Connected: Volume 1". A hard read in places, yet definitely worth one's time, sweat, tears and effort.

Halting of a program the second time around a loop

I am trying to use an mbed LPC1768 to interface with a uCAM-III camera module. This requires a specific exchange of bytes between the mbed and the camera to sync correctly. The datasheet specifies that this can take up to 60 attempts to succeed.
I'm using the library found on this page https://os.mbed.com/users/ms523/notebook/ucam-development/ to help with the syncing process. It doesn't seem to sync correctly, though, and I get a response timeout in the Get_Response function. I added some printf calls to help with debugging and discovered that it will run the sync attempt once, return to the start of the loop, execute the first printf statement there, but then stop, and I cannot figure out why.
This is the output I get on the terminal. Even though there is no code between the "Trying to sync time %i" and "Sending sync chars to uCAM" printf calls, the second line never appears on the second time around the loop; the program just stops. No more lines are printed after this.
Trying to sync time 0
Sending sync chars to uCAM
Response Timeout
Sync failed - trying again
Trying to sync time 1
int uCam::Sync()
{
    // This will give 60 attempts to sync with the uCam module
    for (int i=0; i<60; i++) {
        printf("\n\rTrying to sync time %i ", i);
        // Send out the sync command
        printf("\n\rSending sync chars to uCAM");
        for (int j=0; j<6; j++) {
            _uCam.putc(SYNC[j]);
        }
        // Check if the response was an ACK
        if (Get_Response(_ACK, SYNC[1])) {
            printf("\n\rReceived ACK");
            // It was an ACK so now get the next response - it should be a sync
            if (Get_Response(_SYNC, 0x00)) {
                printf("\n\rGot ACK");
                // We need a small delay (1ms) between receiving the SYNC response and sending an ACK in return
                wait(0.001);
                printf("\n\rSending ACK for ACK");
                for (int k=0; k<6; k++) {
                    _uCam.putc(ACK[k]);
                }
                // Everything is now complete so return true
                printf("\n\rSynced");
                return(1);
            }
        }
        // Wait a while and try again
        printf("\n\rSync failed - trying again");
    }
    printf("\n\rExiting sync function");
    // Something went wrong so return false
    return(0);
}

C++ socket accept blocks cout

I'm quite new to C++/C in general, so excuse me if this question appears kinda stupid, but I'm really stuck here.
What I'm trying to do is a simple TCP server giving the user the opportunity to accept or decline incoming connections. I have a function named waitingForConnection() containing the main loop. It's called in the main function after the socket has been successfully bound and marked as passive.
What I'd expect is that, after the client connects to the server, the function handleConnection() is called, causing the main loop to wait until the function has finished and only then continue.
But what actually seems to happen is that the main loop continues to the next accept() call, which blocks the thread, before handleConnection() has completely executed. The test output "done" only becomes visible when the client connects a second time and the thread wakes up again.
To me it seems like the code is executed completely out of order, which I believe is not possible, since the whole thing should run on a single thread.
Console Output after first connection attempt ("n" is user input):
"Accept connection ? (y/n)"n
Console Output after second connection attempt:
"Accept connection ? (y/n)"n
"doneAccept connection ? (y/n)"
Please note that I'm not looking for a workaround using select() or something similar; I'm just trying to understand why the code does what it does, and maybe how to fix it by changing the structure.
int soc = socket(AF_INET, SOCK_STREAM, 0);

void handleConnection(int connectionSoc, sockaddr_in client){
    string choice;
    cout << "Accept connection ? (y/n)";
    cin >> choice;
    if(choice == "y" || choice == "Y"){
        //do something here
    }else{
        close(connectionSoc);
    }
    cout << "done";
}

void waitingForConnection(){
    while(running){
        sockaddr_in clientAddress;
        socklen_t length;
        length = sizeof(clientAddress);
        int soc1 = accept(soc, (struct sockaddr*)&clientAddress, &length);
        if(soc1 < 0){
            statusOutput("Connection failed");
        }else{
            handleConnection(soc1, clientAddress);
        }
    }
}
The problem is that output to std::cout is buffered, and you simply do not see it until the buffer is flushed. Since std::cin is tied to std::cout and automatically flushes it, you get your output on the next input. Change your output to:
std::cout << "done" << std::endl;
and you would get expected behavior.
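If the trailing newline is unwanted, an explicit flush gives the same visibility (a minimal illustration, not part of the original answer):
#include <iostream>

int main()
{
    std::cout << "done" << std::flush; // flush the buffer without emitting a newline
    // std::cerr is unit-buffered by default, so output there appears immediately:
    std::cerr << "done";
    return 0;
}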

Simplest QT TCP client

I would like to connect to a listening server and transmit some data. I looked at the available examples, but they seem to contain extra functions that are not very helpful to me (e.g. connect, fortune, etc.). This is the code I have so far:
QTcpSocket t;
t.connectToHost("127.0.0.1", 9000);
Assuming the server is listening and robust, what do I need to implement to send a data variable with datatype QByteArray?
Very simple with QTcpSocket. Begin as you did...
void MainWindow::connectTcp()
{
    QByteArray data; // <-- fill with data
    _pSocket = new QTcpSocket( this ); // <-- needs to be a member variable: QTcpSocket * _pSocket;
    connect( _pSocket, SIGNAL(readyRead()), SLOT(readTcpData()) );
    _pSocket->connectToHost("127.0.0.1", 9000);
    if( _pSocket->waitForConnected() ) {
        _pSocket->write( data );
    }
}

void MainWindow::readTcpData()
{
    QByteArray data = _pSocket->readAll();
}
Be aware, though, that when reading from the TcpSocket you may receive the data in more than one chunk: if the server sends you the string "123456", you may receive "123" and then "456". It is your responsibility to check whether the transmission is complete. Unfortunately, this almost always results in your class being stateful: it has to remember which transmission it is expecting, whether it has started already, and whether it is complete. So far, I haven't figured out an elegant way around that.
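One common workaround (a sketch under an assumed length-prefix framing, not part of the original answer; _pSocket, m_buffer, and handleMessage are illustrative names) is to prefix each message with its length and buffer bytes until a whole message has arrived:
#include <QTcpSocket>
#include <QByteArray>
#include <QtEndian>

// The sender writes a 4-byte big-endian length followed by the payload;
// the reader accumulates bytes until it holds at least one complete frame.
void MainWindow::readTcpData()
{
    m_buffer.append(_pSocket->readAll());      // m_buffer is a QByteArray member
    while (m_buffer.size() >= 4) {
        const quint32 len = qFromBigEndian<quint32>(
            reinterpret_cast<const uchar *>(m_buffer.constData()));
        if (m_buffer.size() < static_cast<int>(4 + len))
            break;                             // frame incomplete: wait for the next readyRead()
        QByteArray message = m_buffer.mid(4, len); // one complete message
        m_buffer.remove(0, 4 + len);
        handleMessage(message);                // illustrative handler
    }
}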
In my case I was reading XML data, and sometimes I would not get it all in one packet. Here is an elegant solution: waitForReadyRead() could also be given a timeout, with some extra error checking in case the timeout is reached. In my case I should never receive an incomplete XML document, but if that did happen, this would lock the thread up indefinitely without the timeout:
while(!xml.atEnd()) {
    QXmlStreamReader::TokenType t = xml.readNext();
    if(xml.error()) {
        if(xml.error() == QXmlStreamReader::PrematureEndOfDocumentError) {
            cout << "reading extra data" << endl;
            sock->waitForReadyRead();
            xml.addData(sock->readAll());
            cout << "extra data successful" << endl;
            continue;
        } else {
            break;
        }
    }
...
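As noted above, waitForReadyRead() can take a timeout so a truncated document cannot hang the thread forever; a sketch of that variant (the 5000 ms value is an illustrative choice):
if(xml.error() == QXmlStreamReader::PrematureEndOfDocumentError) {
    if(!sock->waitForReadyRead(5000)) { // returns false on timeout or socket error
        // timeout reached: report the error and abort the parse
        break;
    }
    xml.addData(sock->readAll());
    continue;
}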

Client application crash causes Server to crash? (C++)

I'm not sure if this is a known issue that I am running into, but I couldn't find a good search string that would give me any useful results.
Anyway, here's the basic rundown:
We've got a relatively simple application that takes data from a source (DB or file) and streams it over TCP to connected clients as new data comes in. It serves a relatively low number of clients; I would say at most 10 per server. Our rough design:
Client: connects to the server and blocks on read, with the read timeout set higher than the server's heartbeat message frequency.
Server: one listening thread accepts connections and then spawns a writer thread to read from the data source and write to the client. The writer thread is detached (we use boost::thread, so we just call .detach()). It blocks on writes indefinitely, but does check errno for errors before writing. We start the servers from a single Perl script, calling "fork" for each server process.
The problem(s):
At seemingly random times, the client will shut down with a "connection terminated (SUCCESFUL)" message, indicating that the remote server shut down the socket on purpose. However, when this happens the SERVER application ALSO closes, without any errors or anything. It just crashes.
Now, to complicate matters further, we have multiple instances of the server app started by a startup script, serving different files on different ports. When ONE of the servers crashes like this, ALL of them crash.
Both the server and client use the same "Connection" library, created in-house. It's mostly a C++ wrapper around the C socket calls.
Here is some rough code for the read and write functions in the Connection library:
int connectionTimeout_read = 60 * 60 * 1000;

int Socket::readUntil(char* buf, int amount) const
{
    int readyFds = epoll_wait(epfd, epEvents, 1, connectionTimeout_read);
    if(readyFds < 0)
    {
        status = convertFlagToStatus(errno);
        return 0;
    }
    if(readyFds == 0)
    {
        status = CONNECTION_TIMEOUT;
        return 0;
    }
    int fd = epEvents[0].data.fd;
    if(fd != socket)
    {
        status = CONNECTION_INCORRECT_SOCKET;
        return 0;
    }
    int rec = recv(fd, buf, amount, MSG_WAITALL);
    if(rec == 0)
        status = CONNECTION_CLOSED;
    else if(rec < 0)
        status = convertFlagToStatus(errno);
    else
        status = CONNECTION_NORMAL;
    lastReadBytes = rec;
    return rec;
}

int Socket::write(const void* buf, int size) const
{
    int readyFds = epoll_wait(epfd, epEvents, 1, -1);
    if(readyFds < 0)
    {
        status = convertFlagToStatus(errno);
        return 0;
    }
    if(readyFds == 0)
    {
        status = CONNECTION_TERMINATED;
        return 0;
    }
    int fd = epEvents[0].data.fd;
    if(fd != socket)
    {
        status = CONNECTION_INCORRECT_SOCKET;
        return 0;
    }
    if(epEvents[0].events != EPOLLOUT)
    {
        status = CONNECTION_CLOSED;
        return 0;
    }
    int bytesWrote = ::send(socket, buf, size, 0);
    if(bytesWrote < 0)
        status = convertFlagToStatus(errno);
    lastWriteBytes = bytesWrote;
    return bytesWrote;
}
Any help solving this mystery bug would be great! At the VERY least, I would like the server NOT to crash even if the client crashes (which is really strange to me, since there is no two-way communication).
Also, for reference, here is the server listening code:
while(server.getStatus() == connection::CONNECTION_NORMAL)
{
    connection::Socket s = server.listen();
    if(s.getStatus() != connection::CONNECTION_NORMAL)
    {
        fprintf(stdout, "failed to accept a socket. error: %s\n", connection::getStatusString(s.getStatus()));
    }
    DATASOURCE* dataSource;
    dataSource = open_datasource(XXXX); /* edited */
    if(dataSource == NULL)
    {
        fprintf(stdout, "FATAL ERROR. DATASOURCE NOT FOUND\n");
        return;
    }
    boost::thread fileSender(Sender(s, dataSource));
    fileSender.detach();
}
...And also here is the spawned child sending thread:
::signal(SIGPIPE, SIG_IGN);
//const int headerNeeds = 29;
const int BUFFERSIZE = 2000;
char buf[BUFFERSIZE];
bool running = true;
while(running)
{
    memset(buf, '\0', BUFFERSIZE * sizeof(char));
    unsigned int readBytes = 0;
    while((readBytes = read_datasource(buf, sizeof(unsigned char), BUFFERSIZE, dataSource)) == 0)
    {
        boost::this_thread::sleep(boost::posix_time::milliseconds(1000));
    }
    socket.write(buf, readBytes);
    if(socket.getStatus() != connection::CONNECTION_NORMAL)
        running = false;
}
fprintf(stdout, "socket error: %s\n", connection::getStatusString(socket.getStatus()));
socket.close();
fprintf(stdout, "sender exiting...\n");
Any insights would be welcome! Thanks in advance.
You've probably got it backwards: when the server crashes, the OS closes all of its sockets. So the server crash happens first and causes the client to get the disconnect message (a FIN flag in a TCP segment, actually); the crash is not a result of the socket closing.
Since you have multiple server processes crashing at the same time, I'd look at resources they share, and also at any scheduled tasks that all the servers would try to execute at the same time.
EDIT: You don't have a single client connecting to multiple servers, do you? Note that TCP connections are always bidirectional, so the server process does get feedback when a client disconnects. Some internet providers have even been caught injecting RST packets into connections that fail some test for suspicious traffic.
Write a signal handler. Make sure it uses only raw I/O functions to log problems (open, write, close; not fwrite, not printf).
Check return values. In particular, check for a negative return value from write on a socket, but check all of them.
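A minimal sketch of such a handler (illustrative only: the log path, exit convention, and set of signals are assumptions):
#include <fcntl.h>
#include <signal.h>
#include <string.h>
#include <unistd.h>

// Async-signal-safe crash logger: only raw calls (open/write/close, _exit)
// are used inside the handler; no printf, no malloc, no iostreams.
extern "C" void crashHandler(int sig)
{
    const char msg[] = "fatal signal caught\n";
    int fd = open("/tmp/server_crash.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd >= 0) {
        write(fd, msg, sizeof(msg) - 1);
        close(fd);
    }
    _exit(128 + sig); // async-signal-safe exit
}

int main()
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = crashHandler;
    sigaction(SIGSEGV, &sa, 0); // catch crashes...
    sigaction(SIGABRT, &sa, 0);
    signal(SIGPIPE, SIG_IGN);   // ...and ignore SIGPIPE from writes to dead sockets
    // ... run the server ...
    return 0;
}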
Thanks for all the comments and suggestions.
After looking through the code and adding the signal handling as Ben suggested, the applications themselves are far more stable. Thank you for all your input.
The original problem, however, was due to a rogue script that one of the admins was running as root, which would randomly kill certain processes on the server-side machine (I won't get into what it was actually trying to do; suffice it to say it was buggy).
Lesson learned: check the environment.
Thank you all for the advice.