In my use case, the messages are small, but there are many of them and they arrive very fast. Sometimes my application has to respond. However, I want it to respond only once it has processed all the messages it has received.
I receive messages in the following way:
static int lws_event_callback(struct lws* conn, enum lws_callback_reasons reason, void* user, void* data, size_t len)
{
    switch (reason)
    {
    case LWS_CALLBACK_CLIENT_RECEIVE:
    {
        my_callback_client_receive(data);
        break;
    }
    }
    return 0;
}
I don't know much about the internals of libwebsockets, but conceptually I can imagine that by the time my_callback_client_receive is called, other messages have already arrived.
In my ideal world there would be a variable n, meaning "the number of messages waiting in a buffer to be processed", that I could pass into my callback:
case LWS_CALLBACK_CLIENT_RECEIVE:
{
    my_callback_client_receive(data, n);
    break;
}
So that I would know not to respond until I've processed as many messages as possible. Something like:
void my_callback_client_receive(void* data, int n)
{
    process(data);
    if (n > 1)
        return;          // this msg is not the most recent, keep processing
    send_response();     // nothing else is waiting, respond now
}
Is this or something like this possible with LWS?
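As far as I know, LWS does not expose such a pending-message count. One application-level approximation, sketched below purely under that assumption, is to process every RECEIVE callback, remember that a reply is owed, and send it only from the WRITEABLE callback requested via lws_callback_on_writable(). Here send_response() and my_callback_client_receive() are the hypothetical helpers from above, and whether every queued RECEIVE callback fires before the WRITEABLE one depends on the service loop, so this approximates rather than reproduces the ideal n:

static int response_pending = 0;

static int lws_event_callback(struct lws* conn, enum lws_callback_reasons reason, void* user, void* data, size_t len)
{
    switch (reason)
    {
    case LWS_CALLBACK_CLIENT_RECEIVE:
        my_callback_client_receive(data);   /* process this message */
        response_pending = 1;               /* a reply is now owed */
        lws_callback_on_writable(conn);     /* ask LWS to call back when writable */
        break;

    case LWS_CALLBACK_CLIENT_WRITEABLE:
        if (response_pending) {
            send_response(conn);            /* hypothetical helper wrapping lws_write() */
            response_pending = 0;
        }
        break;

    default:
        break;
    }
    return 0;
}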
I am writing a plugin that interfaces with a desktop application through a ZeroMQ REQ/REP request-reply communication archetype. I can currently receive a request, but the application seemingly crashes if a reply is not sent quickly enough.
I receive the request on a spawned thread and put it in a queue. This queue is processed in another thread, in which the processing function is invoked by the application periodically.
The message is being received and processed correctly, but the response cannot be sent until the next iteration of the function, as I cannot get the data from the application until then.
When this function is conditioned to send the response on the next iteration, the application crashes. However, if I send fake data as the response soon after receiving the request, in the first iteration, the application does not crash.
Constructing the socket
zmq::socket_t socket(m_context, ZMQ_REP);
socket.bind("tcp://*:" + std::to_string(port));
Receiving the message in the spawned thread
void ZMQReceiverV2::receiveRequests() {
nInfo(*m_logger) << "Preparing to receive requests";
while (m_isReceiving) {
zmq::message_t zmq_msg;
bool ok = m_respSocket.recv(&zmq_msg, ZMQ_NOBLOCK);
if (ok) {
// msg_str will be a binary string
std::string msg_str;
msg_str.assign(static_cast<char *>(zmq_msg.data()), zmq_msg.size());
nInfo(*m_logger) << "Received the message: " << msg_str;
std::pair<std::string, std::string> pair("", msg_str);
// adding to message queue
m_mutex.lock();
m_messages.push(pair);
m_mutex.unlock();
}
std::this_thread::sleep_for(std::chrono::milliseconds(100));
}
nInfo(*m_logger) << "Done receiving requests";
}
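As an aside (unrelated to the crash), the manual lock()/unlock() pair around the push can be tightened with std::lock_guard, which releases m_mutex even if push() throws; a minimal sketch of that change:

{
    std::lock_guard<std::mutex> lock(m_mutex);  // unlocks automatically at scope exit
    m_messages.push(pair);
}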
Processing function on separate thread
void ZMQReceiverV2::exportFrameAvailable() {
// checking messages
// if the queue is not empty
m_mutex.lock();
if (!m_messages.empty()) {
nInfo(*m_logger) << "Reading message in queue";
smart_target::SMARTTargetCreateRequest id_msg;
std::pair<std::string, std::string> pair = m_messages.front();
std::string topic = pair.first;
std::string msg_str = pair.second;
processMsg(msg_str);
// removing just read message
m_messages.pop();
//m_respSocket.send(zmq::message_t()); won't crash if I reply here in this invocation
}
m_mutex.unlock();
// sending back the ID that has just been made, for it to be mapped
if (timeToSendReply()) {
sendReply(); // will crash if I wait for this to be executed on the next invocation
}
}
My research suggests that there is no time limit for the response to be sent, so this apparent timing issue is strange.
Is there something that I am missing that will let me send the response on the second iteration of the processing function?
Revision 1:
I have edited my code so that the responding socket only ever exists on one thread. Since I need to get information from the processing function in order to send it, I created another queue, which is checked in the revised function running on its own thread.
void ZMQReceiverV2::receiveRequests() {
zmq::socket_t socket = setupBindSocket(ZMQ_REP, 5557, "responder");
nInfo(*m_logger) << "Preparing to receive requests";
while (m_isReceiving) {
zmq::message_t zmq_msg;
bool ok = socket.recv(&zmq_msg, ZMQ_NOBLOCK);
if (ok) {
// does not crash if I call send helper here
// msg_str will be a binary string
std::string msg_str;
msg_str.assign(static_cast<char *>(zmq_msg.data()), zmq_msg.size());
NLogger::nInfo(*m_logger) << "Received the message: " << msg_str;
std::pair<std::string, std::string> pair("", msg_str);
// adding to message queue
m_mutex.lock();
m_messages.push(pair);
m_mutex.unlock();
}
std::this_thread::sleep_for(std::chrono::milliseconds(100));
if (!sendQueue.empty()) {
sendEntityCreationMessage(socket, sendQueue.front());
sendQueue.pop();
}
}
nInfo(*m_logger) << "Done receiving requests";
socket.close();
}
The function sendEntityCreationMessage() is a helper function that ultimately calls socket.send().
void ZMQReceiverV2::sendEntityCreationMessage(zmq::socket_t &socket, NUniqueID id) {
socket.send(zmq::message_t());
}
This code seems to follow the thread-safety guidelines for sockets. Any suggestions?
Q : "Is there something that I am missing"
Yes, the ZeroMQ evangelisation, called the Zen-of-Zero, has always promoted: never try to share a Socket-instance, never try to block, and never expect the world to act as one wishes.
This said, avoid touching the same Socket-instance from any thread other than the one that instantiated and owns it.
Last but not least, the REQ/REP Scalable Formal Communication Pattern Archetype is prone to falling into a deadlock: a mandatory two-step dance must be obeyed, in which one keeps the strictly alternating sequence of .send()-.recv()-.send()-.recv()-... method calls; otherwise the principally distributed tandem of Finite State Automata (FSA) will unsalvageably end up in a mutual self-deadlock state of the dFSA.
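To make that mandatory alternation concrete, here is a minimal single-threaded REP-side sketch, using the same older cppzmq API as the question; the port, payload, and blocking recv are illustrative only:

#include <zmq.hpp>
#include <string>

int main() {
    zmq::context_t context(1);
    zmq::socket_t socket(context, ZMQ_REP);
    socket.bind("tcp://*:5557");

    while (true) {
        zmq::message_t request;
        socket.recv(&request);       // step 1: .recv() exactly one request
        // ... process for as long as needed; REQ/REP imposes no deadline,
        // but this REP socket may not .recv() again until it has sent
        // exactly one reply ...
        std::string reply = "ack";
        socket.send(zmq::message_t(reply.data(), reply.size()));  // step 2: .send() one reply
    }
}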
In case one is planning to professionally build on ZeroMQ, the best next step is to re-read Pieter HINTJENS' fabulous book "Code Connected: Volume 1". A hard read, yet definitely worth one's time, sweat, tears and effort.
I am trying to write an asynchronous server to handle multiple users at the same time. The server sits in the main thread listening for incoming data; in that same thread it receives the data (large images), creates a task to process it, submits the task to a thread pool, and goes back to listening for the next image. Here is the code (Handle contains the data processing that is performed on another thread):
while (true) {
cv::Mat data = ReceiveImage();
m_Pool.AddTask([=]() mutable {
Handle(std::move(data));
});
}
cv::Mat UDPServer::ReceiveImage() const {
...
try {
for (int i = 0; i < sz; i += num_bytes) {
num_bytes = ReceiveData((char*)&buf[0] + i, sz - i, from);
}
}
...
}
int UDPServer::ReceiveData(char* buf, int len, sockaddr_in& from) const {
socklen_t slen = sizeof(from);
int nReceivedBytes = recvfrom(m_Socket, buf, len, 0, (sockaddr*)&from, &slen);
if (nReceivedBytes == SOCKET_ERROR) {
throw std::runtime_error(RECEIVEFROM_ERROR.data());
}
return nReceivedBytes;
}
There is a problem with this approach: while data is being received from one user, another user can send his data, which will not be accepted.
A possible solution is to accept the data on a different thread. To do this, I want the main thread to receive ONLY a signal that data has arrived, and to hand the actual receiving over to another thread, which then submits the work to the thread pool. Something like Probe in MPI.
How can this be implemented with C++ sockets? I tried to find an answer on the internet, but nothing came of it. Or does anyone have a better solution to the problem?
TCP sockets work this way. There is a listened-to socket, call it P, and an actual communication socket, call it Q. The accept system call does this:
Q = accept(P, ...); // there are other parameters
// which are not important here
As soon as accept returns, you can launch an async task on Q and continue listening on P. The two jobs will not interfere with each other. If another request comes in while you are still grinding away on Q, accept will simply return another Q for another async task.
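A minimal sketch of that pattern (POSIX sockets, error handling mostly elided; Handle() is a hypothetical per-connection worker):

void serve(int p)                              // p: the listening socket
{
    for (;;) {
        int q = accept(p, nullptr, nullptr);   // blocks until a client connects
        if (q < 0)
            continue;                          // transient failure, keep listening
        std::thread([q] {
            Handle(q);                         // read/write on Q ...
            close(q);                          // ... then release it
        }).detach();                           // meanwhile the loop keeps listening on P
    }
}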
This whole idea doesn't work all that well for UDP, because there are no persistent connections: each packet is a communication session of its own. It doesn't make much sense to asynchronously read a packet from a socket; reading is an atomic operation, and packets are short enough. You can launch an asynchronous task to process each packet's data; there's nothing wrong with that. You can also try to implement asynchronous reading by polling the socket and launching an async task that reads the data as soon as it's ready, but this won't really simplify or speed up anything.
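For completeness, here is a sketch of that per-packet dispatch, reusing the question's members (m_Socket, m_Pool, SOCKET_ERROR); ReceiveLoop, MAX_DGRAM, and the commented-out Handle() call are made-up names:

void UDPServer::ReceiveLoop() const {
    constexpr int MAX_DGRAM = 65507;          // largest possible UDP payload
    for (;;) {
        std::vector<char> buf(MAX_DGRAM);
        sockaddr_in from{};
        socklen_t slen = sizeof(from);
        // One recvfrom() returns one whole datagram (or fails); there is
        // no partial read to resume later, which is why an "async read"
        // of a single packet buys nothing.
        int n = recvfrom(m_Socket, buf.data(), static_cast<int>(buf.size()), 0,
                         (sockaddr*)&from, &slen);
        if (n == SOCKET_ERROR)
            continue;                         // or report the error
        buf.resize(n);
        m_Pool.AddTask([data = std::move(buf)]() mutable {
            // Handle(std::move(data));       // process off the receive thread
        });
    }
}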
I am using the cppkafka library, a wrapper around librdkafka, which is in turn a C++ Kafka client, for a very simple message-streaming task. My consumer class is behaving weirdly, because it takes a rather long time to receive a message. More precisely, every time the receiving executable is run and kept running, the consumer receives the first batch of messages correctly, but subsequent messages take roughly 15 seconds to arrive. Does anyone understand what could lead to something like this (Kafka configuration, library-specific problems, or my own stupid faults)? A million thanks.
My receiving thread is as follows:
configuration_.set("group.id", 0);
consumer_ = std::make_unique<cppkafka::Consumer>(configuration_);
consumer_->subscribe({TopicTraits<trade::OrderRequest>::topic, TopicTraits<trade::CancelRequest>::topic});
std::thread([this] {
while (working_) {
cppkafka::Message msg = consumer_->poll();
if (msg) {
if (msg.get_error()) {
if (!msg.is_eof()) {
ERROR("error occurred while polling message: {}", msg.get_error());
}
} else {
try {
Json j = Json::parse(msg.get_payload());
if (msg.get_topic() == TopicTraits<trade::OrderRequest>::topic) {
INFO("received [order_req], {}", msg.get_payload());
ReceiveOrderRequest(j.get<trade::OrderRequest>());
} else if (msg.get_topic() == TopicTraits<trade::CancelRequest>::topic) {
INFO("received [cancel_req], {}", msg.get_payload());
ReceiveCancelRequest(j.get<trade::CancelRequest>());
}
} catch (const std::exception &e) {
ERROR("error occurred while handling incoming message, {}", e.what());
}
}
}
}
}).detach();
Two consumers with the same group id subscribing to different topics blocked poll()
After some research, I found that the problem was related to one of the more fundamental configuration options of Kafka. My consumer was blocked in the call to poll(), and the direct cause was two consumers with the same group id subscribing to different topics. I reassigned the group id and the problem vanished.
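A minimal sketch of the fix, with an illustrative broker address and group names; each consumer now joins its own group, so the two subscriptions no longer interfere:

#include <cppkafka/cppkafka.h>

cppkafka::Configuration order_cfg = {
    { "metadata.broker.list", "localhost:9092" },  // illustrative broker
    { "group.id", "order_req_group" }              // distinct group id
};
cppkafka::Configuration cancel_cfg = {
    { "metadata.broker.list", "localhost:9092" },
    { "group.id", "cancel_req_group" }             // different from the above
};

cppkafka::Consumer order_consumer(order_cfg);
cppkafka::Consumer cancel_consumer(cancel_cfg);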
I am using Code::Blocks with the MinGW compiler and the wxWidgets library.
I am writing a program that reads some data from a microcontroller by sending messages (using an index and subindex) and getting response messages with said data.
My plan was to send messages one by one, waiting for each response message and using __atomic int variables__ to check when the response message arrives.
This is my function for sending a message:
typedef std::chrono::high_resolution_clock Clock;
void sendSDO(int index, int subindex)
{
int nSent = 0;
atomic_index.store(index);
atomic_subindex.store(subindex);
canOpenClient->SDORead(index, subindex);
auto start = Clock::now();
nSentMessages++;
nSent++;
Sleep(10);
while ((atomic_index.load() != 0) && (atomic_subindex.load() != 0))
{
auto t = chrono::duration_cast<chrono::milliseconds>(Clock::now() - start);
if(t.count() > 20)
{
if (nSent > 5)
{
MainFrame->printTxt("[LOG] response not received\n");
return;
}
atomic_index.store(index);
atomic_subindex.store(subindex);
canOpenClient->SDORead(index, subindex);
nSentMessages++;
nSent++;
start = Clock::now();
}
}
}
In pseudocode: set the atomic ints to the index and subindex of the value I want to read from the microcontroller, then send the message to it with SDORead(), and if no response is received within 20 ms, send the message again, up to 5 times.
For receiving messages, I have a __separate thread__ with a callback function which is called when I get a response message from the controller:
void notifyEvent(unsigned char ev_type)
{
SDO_msg_t msg;
msg = canOpenClient->Cmd_CustomMessageGet(); //get response message
if(ev_type == CO_EVENT_SDO_READ)
{
if ((msg.index == atomic_index.load()) && (msg.subindex == atomic_subindex.load()))
{
//does stuff, like saves message data to set container
atomic_index.store(0);
atomic_subindex.store(0);
}
}
if (/* message data not in container */)
    printf("not in container!\n");
}
Here I set the same atomic int values back to 0 when the correct response message is received, and save the response message data.
I also have the variables nSentMessages and nReceivedMessages, which hold the number of messages sent and received. I check at the end whether these values are the same. Normally I wouldn't need this (since I wait for every response); I put it there as an extra safety measure.
Now onto the problem:
1) My problem is in the callback function notifyEvent(), where I presumably save the response message data to a container, but I still sometimes get "not in container!" from that if statement and I don't know why. (My container is just a normal std::set<EDSobject, cmp>; it's not atomic or anything, since I know there won't be simultaneous reads/writes to it from different threads.)
2) If you check my function sendSDO(), there is a line Sleep(10). The program works OK with it, but if I remove it, the program reports different values for nSentMessages and nReceivedMessages - 576 and 575. This happens every time I run the program and I don't understand why.
I am trying to send some data using a Boost socket.
The TCPClient class's role is to make a connection, and it can then send data through the sendMessage method.
When I execute the code below it does not work; however, it works when I debug it.
I think the problem is timing:
delete[] msg; runs before msg is actually sent (just my thought).
So I want to check whether msg has been sent or not, or find some other good way.
client main() code
TCPClient *client = new TCPClient(ip, port);
client->sendMessage((char *)msg, 64 + headerLength + bodyLength);
delete[] msg;
The code below is the sendMessage method.
void TCPClient::sendMessage(const char *message, int totalLength) throw(boost::system::system_error) {
if(false == isConnected())
setConnection();
boost::system::error_code error;
this->socket.get()->write_some(boost::asio::buffer(message, totalLength), error);
if(error){
//do something
}
}
Your sendMessage() function is written incorrectly. You cannot expect the socket to send all of your data at once; you need a loop where you try to send, check how many bytes were actually sent, offset the buffer (and update totalLength accordingly, of course) if necessary, and repeat until all data is sent, or stop if an error occurs. Your code tries to send only once, ignores the result, and assumes that if there is no error then all data was sent. That is not the case: a stream socket may send one, two, or any other number of bytes at a time, and your code needs to handle that.
Your code should be something like this:
while (totalLength) {
    boost::system::error_code error;
    auto sz = this->socket.get()->write_some(boost::asio::buffer(message, totalLength), error);
    if (error) {
        // do something and interrupt the loop
        break;
    }
    totalLength -= sz;   // bytes still left to send
    message += sz;       // advance past what was already sent
}
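As a side note, Boost.Asio already ships a composed operation, boost::asio::write(), which performs this very loop internally, calling write_some() until the whole buffer has been transferred or an error occurs; a minimal sketch, assuming this->socket holds a tcp::socket as in the question:

boost::system::error_code error;
std::size_t sent = boost::asio::write(*this->socket,
                                      boost::asio::buffer(message, totalLength),
                                      error);
if (error) {
    // handle the error; 'sent' holds the bytes written before the failure
}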