Best way to send data using libevent - C++

I have a multi-threaded C++ app using libevent. All of the receiving happens on the libevent thread, and a flag is then set so that the received data gets processed later. On the send, though, things go wrong. I am in a "main thread", the data to send is assembled, and then the following function is invoked:
int SocketWrapper::SendData( const U8* buffer, int length )
{
    if( m_useLibeventToSend )
    {
        bufferevent* bev = GetBufferEvent();
        struct evbuffer* outputBuffer = bufferevent_get_output( bev );
        evbuffer_lock( outputBuffer );
        int result = evbuffer_add( outputBuffer, buffer, length );
        evbuffer_unlock( outputBuffer );
        return result;
    }
    return send( m_socketId, (const char*)buffer, length, 0 );
}
This function crashes on occasion at the point of the evbuffer_add invocation, but 99.9% of the time it works fine. This smells like a concurrency bug, and it may be related to clients crashing or coming and going. I made sure that during the initial creation of the socket by libevent I did the following:
struct evbuffer* outputBuffer = bufferevent_get_output( GetBufferEvent() );
evbuffer_enable_locking( outputBuffer, NULL );
Do you have any notion of some other special initialization I should be doing? Should I not invoke SendData from my main thread, and instead send an event to the bufferevent so that the send happens on the same thread as libevent?
All design ideas are open. So far, my workaround is to not use libevent for the send, but to write directly to the socket.
This crash happens in both release and debug, VS 2008, libevent 2.0. It's deep in the library, so I will be resorting to including the .c files in my project to try to track down the problem, but maybe someone here knows instantly what's wrong. :-)
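For reference, locking the evbuffer alone may not be enough in libevent 2.0: threading support has to be switched on before the event_base is created, and the bufferevent itself has to be created thread-safe. A minimal sketch of that initialization on Windows (socketId stands in for the descriptor from the post):

#include <event2/event.h>
#include <event2/bufferevent.h>
#include <event2/thread.h>

// Enable libevent's locking layer BEFORE creating the event_base.
// On POSIX the equivalent call is evthread_use_pthreads().
evthread_use_windows_threads();

struct event_base* base = event_base_new();

// BEV_OPT_THREADSAFE gives the bufferevent its own locks, so
// bufferevent_write()/evbuffer_add() may be called from another thread.
struct bufferevent* bev = bufferevent_socket_new(
    base, socketId, BEV_OPT_CLOSE_ON_FREE | BEV_OPT_THREADSAFE );

Even with that in place, a bufferevent freed on the event thread (say, when a client disconnects) while SendData still holds its pointer would crash in exactly this way, which would fit the observed correlation with clients coming and going.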

Related

Synchronize access to boost async_write

I'm using a C library which rips PDF data and provides me with that data via callbacks. Two callbacks are used: one provides me with the job header and the other provides me with the ripped data in chunks ranging from 1 to 50 MB.
I'm then taking that data and sending it across the wire via TCP to someone who cares.
I'm using the boost async_write to send that data across the wire. I want to synchronize access to async_write so that a new chunk is not written until the previous chunk has finished sending.
The C callback functions:
void __stdcall HeaderCallback( void* data, int count )
{
    // The Send function is a member of my AsyncTcpClient class.
    // This is how I'm currently providing my API with the PDF data.
    client.Send( data, count );
}

void __stdcall DataCallback( void* data, int count )
{
    client.Send( data, count );
}
I receive the provided data in my AsyncTcpClient class's Send method.
void AsyncTcpClient::Send( void* buffer, size_t length )
{
    // Write to the remote server.
    boost::asio::async_write( _session->socket,
        boost::asio::buffer( (const char*)buffer, length ),
        [ this ]( boost::system::error_code const& error, std::size_t bytesTransferred )
        {
            if ( error )
            {
                _session->errorCode = error;
                OnRequestComplete( _session );
                return;
            }
            std::unique_lock<std::mutex> cancelLock( _session->cancelGuard );
            if ( _session->cancelled )
            {
                OnRequestComplete( _session );
                return;
            }
        } );
}
How can I synchronize access to the async_write function?
Using a mutex at the start of the Send function would be pointless, as async_write returns immediately.
It's also pointless to store the mutex in a unique_lock member variable and attempt to unlock it in the async_write callback lambda, as that will blow up.
How can I synchronize access to the async_write function without using a strand?
The first iteration of the program won't use a strand for synchronization; I will be implementing that later.
You should use an io_context::strand.
One example among many others, but that answer will help you.
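In the meantime, a common strand-free pattern is to queue the outgoing chunks and let each completion handler start the next async_write, so only one write is ever in flight on the socket. A minimal sketch under that assumption; the WriteQueue class, its members, and the plain tcp::socket are illustrative, not from the post:

#include <boost/asio.hpp>
#include <deque>
#include <mutex>
#include <vector>

class WriteQueue
{
public:
    explicit WriteQueue( boost::asio::ip::tcp::socket& socket )
        : m_socket( socket ) {}

    void Send( const void* data, std::size_t length )
    {
        // Copy the caller's bytes: async_write only keeps a view of the
        // buffer, and the ripping library may reuse its memory.
        std::vector<char> chunk( static_cast<const char*>( data ),
                                 static_cast<const char*>( data ) + length );
        std::lock_guard<std::mutex> lock( m_mutex );
        bool idle = m_queue.empty();
        m_queue.push_back( std::move( chunk ) );
        if ( idle )
            WriteNext(); // nothing in flight, start writing
    }

private:
    void WriteNext()
    {
        // At most one async_write is pending at any time.
        boost::asio::async_write( m_socket, boost::asio::buffer( m_queue.front() ),
            [ this ]( boost::system::error_code error, std::size_t /*bytesTransferred*/ )
            {
                std::lock_guard<std::mutex> lock( m_mutex );
                m_queue.pop_front();
                if ( !error && !m_queue.empty() )
                    WriteNext(); // chain the next pending chunk
            } );
    }

    boost::asio::ip::tcp::socket& m_socket;
    std::mutex m_mutex;
    std::deque<std::vector<char>> m_queue;
};

Because deque::push_back does not invalidate references to existing elements, the buffer handed to async_write stays valid until its completion handler pops it.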

C++ weird async behaviour

Note that I'm using boost's async, due to MinGW's lacking support for the standard threading classes.
So, I wanted to send a packet every 5 seconds and decided to use boost::async (std::async) for this purpose.
This is the function I use to send the packet (it actually just copies into a buffer; the real send happens in the main application loop - never mind that, it works fine outside the async method!):
m_sendBuf = new char[1024]; // allocate buffer
[..]
bool CNetwork::Send(const void* sourceBuffer, size_t size) {
    size_t bufDif = m_sendBufSize - m_sendInBufPos;
    if (size > bufDif) {
        return false;
    }
    memcpy(m_sendBuf + m_sendInBufPos, sourceBuffer, size);
    m_sendInBufPos += size;
    return true;
}
Packet sending code:
struct TestPacket {
    unsigned char type;
    int code;
};

void SendPacket() {
    TestPacket myPacket{};
    myPacket.type = 10;
    myPacket.code = 1234;
    Send(&myPacket, sizeof(myPacket));
}
Async code:
void StartPacketSending() {
    SendPacket();
    std::this_thread::sleep_for(std::chrono::seconds{5});
    StartPacketSending(); // recursive endless call
}

boost::async(boost::launch::async, &StartPacketSending);
Alright. So the thing is: when I call SendPacket() from the async method, the received packet is malformed on the server side and the data is different than specified. This doesn't happen when it's called outside the async call.
What is going on here? I'm out of ideas.
I think I have my head wrapped around what you are doing here. You are loading all unsent data into a buffer in one thread and then flushing it from a different thread. Even though the packets aren't overlapping (assuming they are consumed quickly enough), you still need to synchronize all the shared data.
m_sendBuf, m_sendInBufPos, and m_sendBufSize are all being read from the main thread, likely while memcpy or your buffer-size logic is running. I suspect you will have to use a proper queue to get your program to work as intended in the long run, but try protecting those variables with a mutex first.
Also, as other commenters have pointed out, unbounded recursion like that is not safe in C++ and will eventually overflow the stack, but that probably does not contribute to your malformed packets.
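A minimal sketch of that advice, using the member names from the post plus a hypothetical m_sendMutex member; whatever code flushes the buffer in the main loop must lock the same mutex before reading m_sendBuf and resetting m_sendInBufPos:

#include <chrono>
#include <cstring>
#include <mutex>
#include <thread>

bool CNetwork::Send(const void* sourceBuffer, size_t size) {
    std::lock_guard<std::mutex> lock(m_sendMutex); // hypothetical member guarding the buffer
    size_t bufDif = m_sendBufSize - m_sendInBufPos;
    if (size > bufDif) {
        return false;
    }
    memcpy(m_sendBuf + m_sendInBufPos, sourceBuffer, size);
    m_sendInBufPos += size;
    return true;
}

void StartPacketSending() {
    for (;;) { // a plain loop instead of unbounded recursion
        SendPacket();
        std::this_thread::sleep_for(std::chrono::seconds{5});
    }
}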

Buffer last received ZeroMQ message as class member

I'm trying to write a handler class that subscribes to messages published via ZeroMQ and buffers the last received message.
I tried doing this as follows. The method ReceivedMessage() is to be called by a wrapper application from a cyclically invoked function. Once it returns true, I tried to access the message using GetReceivedMessageData(). Unfortunately, it seems that the data is not saved properly in the member zmq_receivedMessage_.
I guess this is because zmq_receivedMessage_ is initialized with a fixed size, and the call zmq_subscriber_.recv(&zmq_receivedMessage_) does not automatically resize it?
What would be the easiest and most robust way to do this? The only way I can think of is using realloc() and memcpy() every time a new message is received. Or is there a simpler way?
#include <cstdint>
#include "zeromq_cpp/zmq.hpp"

class HandlerClass
{
public:
    /// @brief Initializes a HandlerClass instance.
    HandlerClass(std::string const& addr);

    /// @brief Gets the message data received via ZeroMq as pointer.
    void* GetReceivedMessageData();

    /// @brief Gets the message size received via ZeroMq as size_t.
    std::size_t GetReceivedMessageSize();

    /// @brief Returns true if a new, full message was received via ZeroMq, false otherwise.
    bool ReceivedMessage();

private:
    /// @brief A ZeroMq context object encapsulating functionality dealing with initialisation and termination.
    zmq::context_t zmq_context_;

    /// @brief A ZeroMq socket for subscribing to incoming messages.
    zmq::socket_t zmq_subscriber_;

    /// @brief The ZeroMq message that was received last. Might be empty if ReceivedMessage() was never true.
    zmq::message_t zmq_receivedMessage_;
};

HandlerClass::HandlerClass(std::string const& addr)
    : zmq_context_(1)
    , zmq_subscriber_(zmq_context_, ZMQ_SUB)
{
    zmq_subscriber_.setsockopt(ZMQ_IDENTITY, "HandlerSubscriber", 5);
    zmq_subscriber_.setsockopt(ZMQ_SUBSCRIBE, "", 0);
    zmq_subscriber_.setsockopt(ZMQ_RCVTIMEO, 5000);
    zmq_subscriber_.connect(addr);
}

void* HandlerClass::GetReceivedMessageData()
{
    return zmq_receivedMessage_.data();
}

std::size_t HandlerClass::GetReceivedMessageSize()
{
    return zmq_receivedMessage_.size();
}

bool HandlerClass::ReceivedMessage()
{
    int received_bytes = zmq_subscriber_.recv(&zmq_receivedMessage_);
    return received_bytes > 0;
}
One way would be a redesign with a Poller instance + ZMQ_CONFLATE.
Having zero context on the intended class use-cases, the original design seems to be a rather "mechanical" wrapper around a data-mover, not a slim MVP design that squeezes out the benefits the ZeroMQ Scalable Formal Communication Archetypes signalling/messaging framework already has built in.
The much smarter approach (and also a ZMQ_RCVHWM-safer one, which goes beyond the scope of this topic) would be not to always mechanically read each message from the ZeroMQ Context's domain of control, unless there is a real need to re-transmit such data from the HandlerClass somewhere further down the line.
Add a private Poller instance, which allows the data-flow mechanics to be redesigned: test for new message arrival with a non-destructive .poll() query (which also gives Real-Time / Event-Handling Loop stability control, never waiting longer than an ad-hoc .poll() timeout), while deferring any actual data move as late as possible, until the data indeed needs to flow outside of the HandlerClass instance, and not anywhere earlier.
HandlerClass::HandlerClass(std::string const& addr)
    : zmq_context_(1)
    , zmq_subscriber_(zmq_context_, ZMQ_SUB)
{
    zmq_subscriber_.setsockopt( ZMQ_IDENTITY, "HandlerSubscriber", 5 );
    zmq_subscriber_.connect( addr );
    zmq_subscriber_.setsockopt( ZMQ_SUBSCRIBE, "", 0 );
    zmq_subscriber_.setsockopt( ZMQ_LINGER, 0 );    // ALWAYS, READY 4 .term()
    zmq_subscriber_.setsockopt( ZMQ_CONFLATE, 1 );  // SMART: keep only the newest message
    zmq_subscriber_.setsockopt( ZMQ_TOS, T );       // WORTH DEPLOY & MANAGE ( T: a TOS value of your choice )
    zmq_subscriber_.setsockopt( ZMQ_RCVTIMEO, 5000 );
    // ------------------------------------------------- // ADD Poller-instance
    ...
    // ------------------------------------------------- // RTO
}
Nota Bene: in case the egress flow is also built on ZeroMQ infrastructure, there are time-saving API tools for zero-copy message re-marshalling into another ZeroMQ socket-transport -- ( almost ) for free -- cool, isn't it?
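For completeness, a rough sketch of the deferred, poll-based read this redesign implies, assuming the classic cppzmq zmq::poll() API (the 100 ms timeout is an arbitrary choice):

bool HandlerClass::ReceivedMessage()
{
    // Test for arrival without moving any data yet; with ZMQ_CONFLATE set,
    // the socket keeps just the newest message, so a subsequent recv()
    // always reads the latest state.
    zmq::pollitem_t items[] = {
        { static_cast<void*>(zmq_subscriber_), 0, ZMQ_POLLIN, 0 }
    };
    zmq::poll( items, 1, 100 ); // wait at most 100 ms

    if ( items[0].revents & ZMQ_POLLIN )
        return zmq_subscriber_.recv( &zmq_receivedMessage_ );
    return false;
}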

Sending data with libevent works just sometimes

While developing, it's very common that things either work or they don't. When sending data from my client to my server, it does not work every time, but in most cases it does. I am guessing that the kernel doesn't always send the buffer it has stored. Anyway, there has to be a way to work around this behaviour.
My client has a GUI, and it has to communicate with a server. Because threads don't work the way I want them to, I decided to use event_base_loop so that it just blocks until one packet is processed. After that, it can do GUI stuff so that the window won't freeze.
I am very certain that my sending fails, and NOT my reading, because my server never calls my read callback ("readcb").
I am calling the attached function from the main function like this:
int main(int argc, char **argv)
{
    // init stuff
    // connect to server
    sendPacket(bev);
}
I researched a lot about this, but I haven't found anything. For example, bufferevent_flush(bev, EV_WRITE, BEV_FLUSH) doesn't work with socket-based bufferevents (I even tried it out).
My current function for writing (in short form, simplified to a single packet):
void sendPacket(bufferevent* bev)
{
    // just data:
    const unsigned int msg_ = 12;
    char msg[msg_] = "01234567891";

    // send that packet:
    uint16_t packet_id = 1;
    bufferevent_write(bev, &packet_id, 2);
    bufferevent_write(bev, msg, msg_);

    // this part SHOULD make the data really get sent, but it does not every time:
    while (evbuffer_get_length(bufferevent_get_output(bev)) > 0)
    {
        event_base_loop(bufferevent_get_base(bev), EVLOOP_ONCE);
    }

    // this last one only to be really sure (that's why I use the second option):
    event_base_loop(bufferevent_get_base(bev), EVLOOP_NONBLOCK | EVLOOP_ONCE);
}
Thanks for your time, I would be lost without your help.
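One way to see whether the queued bytes ever actually leave the output buffer is to let libevent report it: register a write callback with bufferevent_setcb(), which fires once the output buffer has drained. A minimal sketch; read_cb and event_cb stand in for whatever callbacks the client already installs:

// Fires when the output buffer drains below its low watermark
// (0 by default, i.e. when it is completely empty).
static void write_cb(struct bufferevent* bev, void* ctx)
{
    if (evbuffer_get_length(bufferevent_get_output(bev)) == 0)
    {
        // every queued byte has been handed to the socket
    }
}

// during setup, after connecting:
bufferevent_setcb(bev, read_cb, write_cb, event_cb, NULL);
bufferevent_enable(bev, EV_READ | EV_WRITE);

If write_cb never fires, the event callback usually shows why, for example BEV_EVENT_ERROR, or a connect that never reached BEV_EVENT_CONNECTED.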

winsock, message oriented networking, and type-casting the buffer from recv

Okay, I actually don't have code as of yet because I'm just picking out a framework for the time being, but I'm still a little baffled about how I wish to go about this.
Server side, I wish to have a class where each instance has a socket and various information identifying each connection. Each object will have its own thread for receiving data. I understand how I'll be implementing most of that, but my confusion starts just as I get to the actual transfer of data between server and client. I'll want to have a bunch of different message structs for specific cases (for example CONNECT_MSG, DISCONNECT_MSG, POSTTEXT_MSG, etc.), and then all I have to do is have a char* point at that struct and pass it via the send() function.
But as I think on it, it gets a little complicated at that point. Any of those different message types could be sent, and on the receiving end you will have no idea what you should cast the incoming buffer to. What I was hoping to do is, in the thread of each connection object, have it block until it receives a packet with a message, then dump it into a single queue object managed by the server (mutexes will prevent greediness), and then have the server process each message in FIFO order, independent of the connection objects.
I haven't written anything yet, but let me write a little something to illustrate my setup.
#define CONNECT 1000

struct GENERIC_MESSAGE
{
    int id;
};

struct CONNECT_MESSAGE : public GENERIC_MESSAGE
{
    char m_username[32];
};

void Connection::Thread()
{
    while(1)
    {
        char buffer[MAX_BUFFER_SIZE]; // some constant (probably 2048)
        recv(m_socket, buffer, MAX_BUFFER_SIZE, 0);

        GENERIC_MESSAGE * msg = reinterpret_cast<GENERIC_MESSAGE *>(buffer);
        server->QueueMessage(msg);
    }
}

void Server::QueueMessage(GENERIC_MESSAGE * msg)
{
    messageQueue.push(msg);
}

void Server::Thread()
{
    while(1)
    {
        if(!messageQueue.empty())
            ProcessMessages();
        else
            Sleep(1);
    }
}

void Server::ProcessMessages()
{
    for(int i = 0; i < messageQueue.size(); i++)
    {
        switch(messageQueue.front()->id)
        {
            case CONNECT:
            {
                // the part I REALLY don't like
                CONNECT_MESSAGE * msg = static_cast<CONNECT_MESSAGE *>(messageQueue.front());
                // do the rest of the processing on connect
                break;
            }
            // other cases for the other message types
        }
        messageQueue.pop();
    }
}
Now, if you've been following up until now, you realize just how STUPID and fragile this is. It casts to the base class, passes that pointer to a queue, and then just assumes that the pointer is still valid from the other thread, and even then, that the remaining buffer after the pointer, holding the rest of the derived class, will still be valid for casting. I have yet to find a correct way of doing this. I am wide open for ANY suggestions, either for making this work or for an entirely different messaging design.
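One immediate fix for the lifetime part of the problem, separate from the protocol advice below: copy the received bytes into storage the queue owns, instead of queueing a pointer into a stack buffer that the next recv() call overwrites. A sketch; the std::vector-based QueueMessage overload is hypothetical:

#include <cstring>
#include <vector>

void Connection::Thread()
{
    char buffer[MAX_BUFFER_SIZE];
    while(1)
    {
        int received = recv(m_socket, buffer, MAX_BUFFER_SIZE, 0);
        if(received <= 0)
            break; // connection closed or failed

        // Copy the bytes so the queued message owns its storage and
        // stays valid after this stack frame and the next recv().
        std::vector<char> owned(buffer, buffer + received);
        server->QueueMessage(std::move(owned)); // hypothetical overload taking ownership
    }
}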
Before you write even a line of code, design the protocol that will be used on the wire. Decide what a message will consist of at the byte level. Decide who sends first, whether messages are acknowledged, how receivers identify message boundaries, and so on. Decide how the connection will be kept alive (if it will be), which side will close first, and so on. Then write the code around the specification.
Do not tightly associate how you store things in memory with how you send things on the wire. These are two very different things with two very different sets of requirements.
Of course, feel free to adjust the protocol specification as you write the code.
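As a concrete illustration of the byte-level part of such a specification, here is a sketch of one common choice: a fixed header carrying payload length and message type in network byte order, followed by the payload. The field sizes are an example, not something prescribed by the answer:

#include <cstdint>
#include <cstring>
#include <vector>
#include <winsock2.h> // htonl/htons

// Wire format (example): | uint32 payload length | uint16 message type | payload |
// The receiver first reads the fixed 6-byte header, then reads exactly
// `length` more bytes; that is how it identifies message boundaries.
std::vector<char> FrameMessage(uint16_t type, const void* payload, uint32_t length)
{
    std::vector<char> frame(6 + length);
    uint32_t lengthBE = htonl(length); // network byte order
    uint16_t typeBE   = htons(type);
    memcpy(frame.data(),     &lengthBE, 4);
    memcpy(frame.data() + 4, &typeBE,   2);
    memcpy(frame.data() + 6, payload,   length);
    return frame;
}

The receiving side then deserializes the payload field by field into whatever in-memory representation it prefers, which keeps the wire format and the memory layout independent, exactly as described above.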