Google Cloud PubSub StreamingPull hangs forever - C++

I have this simple code to pull messages from a Google PubSub subscription:
#include "google/pubsub/v1/pubsub.grpc.pb.h"
#include "google/pubsub/v1/pubsub.pb.h"
#include "grpc++/grpc++.h"
#include "base/logging.h"
int main() {
auto creds = grpc::GoogleDefaultCredentials();
auto stub = std::make_unique<google::pubsub::v1::Subscriber::Stub>(
grpc::CreateChannel("pubsub.googleapis.com", creds));
grpc::ClientContext context;
std::unique_ptr<
grpc::ClientReaderWriter<google::pubsub::v1::StreamingPullRequest,
google::pubsub::v1::StreamingPullResponse>>
stream(stub->StreamingPull(&context));
google::pubsub::v1::StreamingPullRequest request;
request.set_subscription("my_subscription");
request.set_stream_ack_deadline_seconds(10);
stream->Write(request);
google::pubsub::v1::StreamingPullResponse response;
size_t count = 0;
while (stream->Read(&response)) {
google::pubsub::v1::StreamingPullRequest ack_request;
for (const auto& message : response.received_messages()) {
ack_request.add_ack_ids(message.ack_id());
if (++count % 1000 == 0) {
LOG(Info, "count: " << count << " message_size: " << message.message().data().size());
}
}
stream->Write(ack_request);
}
return 0;
}
It turned out that while (stream->Read(&response)) doesn't run forever: it stops after ~30 minutes, and I don't know why. I tried wrapping the code in while (true) so messages would be pulled in an infinite loop, but the second iteration can't pull any messages (I can see in Google Cloud monitoring that messages are still coming in).
What is wrong with this code?
I know that GCP hasn't implemented a C++ client yet and that StreamingPull is a low-level API, but I don't want to wait until they ship one (it's unclear when that will happen), and I don't want to switch to another language (my application is in C++).

Streaming connections to GCP can be closed for a variety of reasons, e.g. transient network issues or maximum TTLs on connection lifetimes. A stream should not be expected to stay open indefinitely.
When stream->Read(&response) returns false, that's an indication that the stream has been closed. Your code should then recreate the stream to continue pulling messages. Note also that a grpc::ClientContext cannot be reused; each new stream needs a fresh one, which may be why the second iteration of your while (true) wrapper could not pull anything.
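A minimal sketch of that reconnection loop, under the same setup as the question (the project path in the subscription name is a placeholder, the question's LOG macro is reused, and the retry is deliberately unconditional):

// Sketch: recreate the stream whenever Read() returns false.
// A grpc::ClientContext cannot be reused, so a fresh one is built per attempt.
while (true) {
  grpc::ClientContext context;
  auto stream = stub->StreamingPull(&context);

  google::pubsub::v1::StreamingPullRequest request;
  request.set_subscription("projects/my-project/subscriptions/my_subscription");
  request.set_stream_ack_deadline_seconds(10);
  stream->Write(request);

  google::pubsub::v1::StreamingPullResponse response;
  while (stream->Read(&response)) {
    google::pubsub::v1::StreamingPullRequest ack_request;
    for (const auto& message : response.received_messages()) {
      ack_request.add_ack_ids(message.ack_id());
    }
    stream->Write(ack_request);
  }

  // Read() returned false: the stream is closed. Log why, then reconnect.
  grpc::Status status = stream->Finish();
  LOG(Info, "stream closed: " << status.error_message() << ", reconnecting");
}

In production you would likely add exponential backoff between attempts rather than reconnecting immediately.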

Related

Strange logging timestamps with chrono::sleep_until

I am testing my application's latency during UDP communication on Windows 10.
I send a message every 1 second and receive the response that the remote side sends back immediately.
Send thread
It runs every 1 second.
auto start = std::chrono::system_clock::now();
unsigned int count = 1;
while (destroyFlag.load(std::memory_order_acquire) == false)
{
    if (isReady() == false)
    {
        break;
    }
    /* to do */
    worker_();
    std::this_thread::sleep_until(start + std::chrono::milliseconds(interval_) * count++);
}
worker_()
The send thread calls this; it just sends the message and builds a log string.
socket_.send(address_);
logger_.log("," + std::string("Send") + "\n");
Receiver
When a message arrives, it creates a receive log string and flushes it to a file.
auto& queueData = socket_.getQueue();
while (queueData.size() > 0)
{
auto str = queueData.dequeue();
logger_.log(",Receive" + str + "\n");
logger_.flush();
}
I ran the test overnight and I can't figure out why I got this result.
[Chart of the measured times; x-axis: time of day (Hour_Minute_Second), y-axis: microseconds]
For a few hours it worked as expected, but after that the logged times gradually drifted, as if they had moved into a different time zone.
Does anyone know why this is happening?
Using std::chrono::steady_clock fixed it: it made my charts straight. Unlike system_clock, steady_clock is monotonic, so its time points are not shifted when Windows adjusts the wall clock during automatic time synchronization.
Alternatively, turn off Windows' automatic time synchronization.
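For reference, a minimal sketch of the send loop rebased onto std::chrono::steady_clock (destroyFlag, isReady, worker_ and interval_ are the names from the question):

// steady_clock wake-ups stay evenly spaced even when Windows adjusts
// the wall clock, because its time points are never stepped by NTP.
auto start = std::chrono::steady_clock::now();
unsigned int count = 1;
while (destroyFlag.load(std::memory_order_acquire) == false)
{
    if (isReady() == false)
    {
        break;
    }
    worker_();
    std::this_thread::sleep_until(start + std::chrono::milliseconds(interval_) * count++);
}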

App crashes when it takes too long to reply in a ZMQ REQ/REP pattern

I am writing a plugin that interfaces with a desktop application through a ZeroMQ REQ/REP request-reply communication archetype. I can currently receive a request, but the application seemingly crashes if a reply is not sent quickly enough.
I receive the request on a spawned thread and put it in a queue. This queue is processed in another thread, in which the processing function is invoked by the application periodically.
The message is correctly being received and processed, but the response cannot be sent until the next iteration of the function, as I cannot get the data from the application until then.
When this function is conditioned to send the response on the next iteration, the application will crash. However, if I send fake data as the response soon after receiving the request, in the first iteration, the application will not crash.
Constructing the socket
zmq::socket_t socket(m_context, ZMQ_REP);
socket.bind("tcp://*:" + std::to_string(port));
Receiving the message in the spawned thread
void ZMQReceiverV2::receiveRequests() {
    nInfo(*m_logger) << "Preparing to receive requests";
    while (m_isReceiving) {
        zmq::message_t zmq_msg;
        bool ok = m_respSocket.recv(&zmq_msg, ZMQ_NOBLOCK);
        if (ok) {
            // msg_str will be a binary string
            std::string msg_str;
            msg_str.assign(static_cast<char *>(zmq_msg.data()), zmq_msg.size());
            nInfo(*m_logger) << "Received the message: " << msg_str;
            std::pair<std::string, std::string> pair("", msg_str);
            // adding to message queue
            m_mutex.lock();
            m_messages.push(pair);
            m_mutex.unlock();
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
    nInfo(*m_logger) << "Done receiving requests";
}
Processing function on a separate thread
void ZMQReceiverV2::exportFrameAvailable() {
    // checking messages
    // if the queue is not empty
    m_mutex.lock();
    if (!m_messages.empty()) {
        nInfo(*m_logger) << "Reading message in queue";
        smart_target::SMARTTargetCreateRequest id_msg;
        std::pair<std::string, std::string> pair = m_messages.front();
        std::string topic = pair.first;
        std::string msg_str = pair.second;
        processMsg(msg_str);
        // removing just read message
        m_messages.pop();
        //m_respSocket.send(zmq::message_t()); // won't crash if I reply here in this invocation
    }
    m_mutex.unlock();
    // sending back the ID that has just been made, for it to be mapped
    if (timeToSendReply()) {
        sendReply(); // will crash if I wait for this to be executed on the next invocation
    }
}
My research shows that there is no time limit for the response to be sent, so this apparent timing issue is strange.
Is there something that I am missing that will let me send the response on the second iteration of the processing function?
Revision 1:
I have edited my code so that the responding socket only ever exists on one thread. Since I need to get information from the processing function to send, I created another queue, which is checked in the revised function running on its own thread.
void ZMQReceiverV2::receiveRequests() {
    zmq::socket_t socket = setupBindSocket(ZMQ_REP, 5557, "responder");
    nInfo(*m_logger) << "Preparing to receive requests";
    while (m_isReceiving) {
        zmq::message_t zmq_msg;
        bool ok = socket.recv(&zmq_msg, ZMQ_NOBLOCK);
        if (ok) {
            // does not crash if I call the send helper here
            // msg_str will be a binary string
            std::string msg_str;
            msg_str.assign(static_cast<char *>(zmq_msg.data()), zmq_msg.size());
            NLogger::nInfo(*m_logger) << "Received the message: " << msg_str;
            std::pair<std::string, std::string> pair("", msg_str);
            // adding to message queue
            m_mutex.lock();
            m_messages.push(pair);
            m_mutex.unlock();
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        if (!sendQueue.empty()) {
            sendEntityCreationMessage(socket, sendQueue.front());
            sendQueue.pop();
        }
    }
    nInfo(*m_logger) << "Done receiving requests";
    socket.close();
}
The function sendEntityCreationMessage() is a helper function that ultimately calls socket.send().
void ZMQReceiverV2::sendEntityCreationMessage(zmq::socket_t &socket, NUniqueID id) {
    socket.send(zmq::message_t());
}
This code seems to be following the thread safety guidelines for sockets. Any suggestions?
Q : "Is there something that I am missing"
Yes,the ZeroMQ evangelisation, called a Zen-of-Zero, since ever promotes never try to share a Socket-instance, never try to block and never expect the world to act as one wishes.
This said, avoid touching the same Socket-instance from any non-local thread, except the one that has instantiated and owns the socket.
Last, but not least, the REQ/REP-Scalable Formal Communication Pattern Archetype is prone to fall into a deadlock, as a mandatory two-step dance must be obeyed - where one must keep the alternating sequence of calling .send()-.recv()-.send()-.recv()-.send()-...-methods, otherwise the principally distributed-system tandem of Finite State Automata (FSA) will unsalvageably end up in a mutual self-deadlock state of the dFSA.
In case one is planning to professionally build on ZeroMQ, the best next step is to re-read the fabulous Pieter HINTJENS' book "Code Connected: Volume 1". A piece of a hard read, yet definitely worth one's time, sweat, tears & efforts put in.
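To make the first two rules concrete, here is a minimal sketch, reusing the setupBindSocket() helper and m_isReceiving flag from the revision (makeReply is a hypothetical stand-in for producing the real reply data): the REP socket is owned by the single thread that created it, and every recv() is answered by exactly one send() before the next recv() is attempted:

void ZMQReceiverV2::serveRequests() {
    // Created, used, and closed by this thread only -- never shared.
    zmq::socket_t socket = setupBindSocket(ZMQ_REP, 5557, "responder");
    while (m_isReceiving) {
        zmq::message_t request;
        if (!socket.recv(&request, ZMQ_NOBLOCK)) {
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
            continue;
        }
        std::string msg_str(static_cast<char *>(request.data()), request.size());
        // Produce the reply here, however long it takes: REP imposes no
        // deadline, only the strict recv -> send -> recv ordering.
        std::string reply = makeReply(msg_str);  // hypothetical helper
        zmq::message_t reply_msg(reply.size());
        memcpy(reply_msg.data(), reply.data(), reply.size());
        socket.send(reply_msg);
    }
    socket.close();
}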

Kafka C++ client taking a long time to receive a message

I am using the cppkafka library, a C++ wrapper around librdkafka, the Kafka C/C++ client library, for a very simple message-streaming task. My consumer class is behaving weirdly: it takes a rather long time to receive a message. More precisely, every time the receiving executable is run and kept running, the consumer receives the first batch of messages correctly, but subsequent messages take roughly 15 seconds to arrive. Does anyone understand what could lead to something like this (Kafka configuration, library-specific problems, or a mistake of mine)? A million thanks.
My receiving thread is as follows:
configuration_.set("group.id", 0);
consumer_ = std::make_unique<cppkafka::Consumer>(configuration_);
consumer_->subscribe({TopicTraits<trade::OrderRequest>::topic,
                      TopicTraits<trade::CancelRequest>::topic});
std::thread([this] {
    while (working_) {
        cppkafka::Message msg = consumer_->poll();
        if (msg) {
            if (msg.get_error()) {
                if (!msg.is_eof()) {
                    ERROR("error occurred while polling message: {}", msg.get_error());
                }
            } else {
                try {
                    Json j = Json::parse(msg.get_payload());
                    if (msg.get_topic() == TopicTraits<trade::OrderRequest>::topic) {
                        INFO("received [order_req], {}", msg.get_payload());
                        ReceiveOrderRequest(j.get<trade::OrderRequest>());
                    } else if (msg.get_topic() == TopicTraits<trade::CancelRequest>::topic) {
                        INFO("received [cancel_req], {}", msg.get_payload());
                        ReceiveCancelRequest(j.get<trade::CancelRequest>());
                    }
                } catch (const std::exception &e) {
                    ERROR("error occurred while handling incoming message, {}", e.what());
                }
            }
        }
    }
}).detach();
Two consumers with the same group id subscribing to different topics blocked poll()
After some research, I found the problem was related to one of the more fundamental configuration options of Kafka. My consumer was blocked in the call to poll(), and the direct cause was two consumers with the same group id subscribing to different topics. I reassigned the group id and the problem vanished.
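For illustration, a minimal sketch of that fix under the question's setup (the group name is a placeholder): every independent consumer process sets its own group.id before constructing the Consumer:

// A distinct group.id per independent consumer, so consumers subscribing
// to different topics no longer share one consumer group.
configuration_.set("group.id", "order_gateway_1");  // placeholder name
consumer_ = std::make_unique<cppkafka::Consumer>(configuration_);
consumer_->subscribe({TopicTraits<trade::OrderRequest>::topic,
                      TopicTraits<trade::CancelRequest>::topic});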

ZMQ wait for a message, have client wait for reply

I'm trying to synchronise 4 clients with one server. I want each client to send a message to the server when it is ready to move on; the server counts the requests it gets and, once it has heard from every client, sends a message back to the clients to say it's ready.
What I've done so far is use REQ/REP:
while(1){
    int responses = 0;
    while(responses < numberOfCameras){
        for(int i = 0; i < numberOfCameras; i++){
            cout << "waiting" << endl;
            if(sockets[i]->recv(requests[i], ZMQ_NOBLOCK)){
                responses++;
                cout << "rx" << endl;
            }
        }
    }
    for(int i = 0; i < numberOfCameras; i++){
        cout << "tx" << endl;
        sockets[i]->send("k", 1);
        cout << "Sent" << endl;
    }
}
With more than one camera, this produces the expected error:
Operation cannot be accomplished in current state
because a REP socket cannot do anything else until it has replied to the pending REQ, right?
How can I modify this to work with multiple clients?
EDIT:
I have attempted to implement a less strict REQ REP with PUSH PULL. The meat is:
Server:
while(1){
    int responses = 0;
    while(responses < numberOfCameras){
        for(int i = 0; i < numberOfCameras; i++){
            cout << "waiting" << endl;
            if(REQSockets[i]->recv(requests[i], ZMQ_NOBLOCK)){
                responses++;
                cout << "rx" << endl;
            }
        }
    }
    boost::this_thread::sleep(boost::posix_time::milliseconds(200));
    for(int i = 0; i < numberOfCameras; i++){
        cout << "tx" << endl;
        REPSockets[i]->send("k", 1);
        cout << "Sent" << endl;
    }
    boost::this_thread::sleep(boost::posix_time::milliseconds(200));
}
Clients:
for (;;) {
    std::cout << "Requesting permission to capture" << std::endl;
    REQSocket.send("?", 1);
    // Get the reply.
    zmq::message_t reply;
    REPSocket.recv(&reply);
    std::cout << "Grabbed a frame" << std::endl;
    boost::this_thread::sleep(boost::posix_time::seconds(2));
}
I have printed all of the ports and addresses to check that they're set correctly.
The server program hangs with the output:
...
waiting
rx
tx
This means that the program is hanging on the send, but I can't for the life of me see why.
EDIT 2:
I have made a GitHub repo with a compilable example and a Linux makefile, and converted it to use REQ/REP again. The issue is that the client doesn't accept the message from the server, but again, I don't know why.
The answer was to use two REQ/REP socket pairs, as in edit 2. I had made a stupid typo, "REQ" instead of "REP", in one of the variable usages and hadn't noticed; I was therefore connecting and then binding the same socket.
I will leave the GitHub repo up, as I think the question is long enough already.
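For anyone hitting the same mix-up, a minimal sketch of the intended wiring (context creation, addresses and ports are placeholders; the two halves live in separate processes and are shown together only for brevity): each socket of a REQ/REP pair is either bound or connected, never both:

// Server process: binds its end of the pair.
zmq::context_t context(1);
zmq::socket_t repSocket(context, ZMQ_REP);
repSocket.bind("tcp://*:5555");

// Client process: the matching socket connects -- it must not also bind.
zmq::socket_t reqSocket(context, ZMQ_REQ);
reqSocket.connect("tcp://serverhost:5555");

The typo described above amounted to reusing one socket variable for both roles, so a single socket ended up calling connect() and then bind().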

Client application crash causes Server to crash? (C++)

I'm not sure if this is a known issue that I am running into, but I couldn't find a search string that would give me any useful results.
Anyway, here's the basic rundown:
We've got a relatively simple application that takes data from a source (DB or file) and streams that data over TCP to connected clients as new data comes in. It's a relatively low number of clients; I would say at most 10 clients per server. The rough design:
Client: connects to the server, then blocks on read (with the timeout set higher than the server's heartbeat message frequency).
Server: one listening thread that accepts connections and then spawns a writer thread to read from the data source and write to the client. The writer thread is detached (using boost::thread, so we just call the .detach() function). It blocks on writes indefinitely, but does check errno for errors before writing. We start the servers using a single Perl script, calling "fork" for each server process.
The problem(s):
At seemingly random times, the client will shut down with a "connection terminated (SUCCESFUL)" message, indicating that the remote server shut down the socket on purpose. However, when this happens, the SERVER application ALSO closes, without any errors or anything. It just crashes.
To make matters worse, we have multiple instances of the server app being started by a startup script, running different files on different ports. When ONE of the servers crashes like this, ALL the servers crash out.
Both the server and the client use the same "Connection" library, created in-house. It's mostly a C++ wrapper for the C socket calls.
Here's some rough code for the read and write functions in the Connection library:
int connectionTimeout_read = 60 * 60 * 1000;

int Socket::readUntil(char* buf, int amount) const
{
    int readyFds = epoll_wait(epfd, epEvents, 1, connectionTimeout_read);
    if(readyFds < 0)
    {
        status = convertFlagToStatus(errno);
        return 0;
    }
    if(readyFds == 0)
    {
        status = CONNECTION_TIMEOUT;
        return 0;
    }
    int fd = epEvents[0].data.fd;
    if(fd != socket)
    {
        status = CONNECTION_INCORRECT_SOCKET;
        return 0;
    }
    int rec = recv(fd, buf, amount, MSG_WAITALL);
    if(rec == 0)
        status = CONNECTION_CLOSED;
    else if(rec < 0)
        status = convertFlagToStatus(errno);
    else
        status = CONNECTION_NORMAL;
    lastReadBytes = rec;
    return rec;
}
int Socket::write(const void* buf, int size) const
{
    int readyFds = epoll_wait(epfd, epEvents, 1, -1);
    if(readyFds < 0)
    {
        status = convertFlagToStatus(errno);
        return 0;
    }
    if(readyFds == 0)
    {
        status = CONNECTION_TERMINATED;
        return 0;
    }
    int fd = epEvents[0].data.fd;
    if(fd != socket)
    {
        status = CONNECTION_INCORRECT_SOCKET;
        return 0;
    }
    if(epEvents[0].events != EPOLLOUT)
    {
        status = CONNECTION_CLOSED;
        return 0;
    }
    int bytesWrote = ::send(socket, buf, size, 0);
    if(bytesWrote < 0)
        status = convertFlagToStatus(errno);
    lastWriteBytes = bytesWrote;
    return bytesWrote;
}
Any help solving this mystery bug would be great! At the VERY least, I would like the server NOT to crash even when a client crashes (which is really strange to me, since there is no two-way communication).
Also, for reference, here is the server listening code:
while(server.getStatus() == connection::CONNECTION_NORMAL)
{
    connection::Socket s = server.listen();
    if(s.getStatus() != connection::CONNECTION_NORMAL)
    {
        fprintf(stdout, "failed to accept a socket. error: %s\n", connection::getStatusString(s.getStatus()));
    }
    DATASOURCE* dataSource;
    dataSource = open_datasource(XXXX); /* edited */
    if(dataSource == NULL)
    {
        fprintf(stdout, "FATAL ERROR. DATASOURCE NOT FOUND\n");
        return;
    }
    boost::thread fileSender(Sender(s, dataSource));
    fileSender.detach();
}
...And also here is the spawned child sending thread:
::signal(SIGPIPE, SIG_IGN);
//const int headerNeeds = 29;
const int BUFFERSIZE = 2000;
char buf[BUFFERSIZE];
bool running = true;
while(running)
{
    memset(buf, '\0', BUFFERSIZE * sizeof(char));
    unsigned int readBytes = 0;
    while((readBytes = read_datasource(buf, sizeof(unsigned char), BUFFERSIZE, dataSource)) == 0)
    {
        boost::this_thread::sleep(boost::posix_time::milliseconds(1000));
    }
    socket.write(buf, readBytes);
    if(socket.getStatus() != connection::CONNECTION_NORMAL)
        running = false;
}
fprintf(stdout, "socket error: %s\n", connection::getStatusString(socket.getStatus()));
socket.close();
fprintf(stdout, "sender exiting...\n");
Any insights would be welcome! Thanks in advance.
You've probably got everything backwards: when the server crashes, the OS closes all of its sockets. So the server crash happens first and causes the client to get the disconnect message (a FIN flag in a TCP segment, actually); the crash is not a result of the socket closing.
Since you have multiple server processes crashing at the same time, I'd look at resources they share, and also at any scheduled tasks that all the servers would try to execute at the same time.
EDIT: You don't have a single client connecting to multiple servers, do you? Note that TCP connections are always bidirectional, so the server process does get feedback if a client disconnects. Some internet providers have even been caught generating RST packets on connections that fail some test for suspicious traffic.
Write a signal handler. Make sure it uses only raw I/O functions to log problems (open, write, close, not fwrite, not printf).
Check return values. Check for negative return value from write on a socket, but check all return values.
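A minimal sketch of the signal-handler suggestion, assuming POSIX (the caught signals, message text, and exit code are arbitrary choices): only async-signal-safe calls appear inside the handler:

#include <csignal>
#include <cstring>
#include <unistd.h>

// Async-signal-safe: uses only write() and _exit(), never printf/fwrite,
// which may allocate memory or take locks.
extern "C" void crashHandler(int sig) {
    const char msg[] = "fatal signal caught, exiting\n";
    ssize_t n = write(STDERR_FILENO, msg, sizeof(msg) - 1);
    (void)n;  // nothing useful to do if this write itself fails
    _exit(128 + sig);
}

int main() {
    struct sigaction sa;
    std::memset(&sa, 0, sizeof(sa));
    sa.sa_handler = crashHandler;
    sigaction(SIGSEGV, &sa, nullptr);
    sigaction(SIGABRT, &sa, nullptr);
    ::signal(SIGPIPE, SIG_IGN);  // writes to a dead socket then fail with EPIPE
    // ... server code ...
    return 0;
}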
Thanks for all the comments and suggestions.
After looking through the code and adding the signal handling as Ben suggested, the applications themselves are far more stable. Thank you for all your input.
The original problem, however, was due to a rogue script that one of the admins was running as root, which would randomly kill certain processes on the server-side machine (I won't get into what it was trying to do; safe to say it was buggy).
Lesson learned: check the environment.
Thank you all for the advice.