C++ weird async behaviour

Note that I'm using boost::async due to the lack of threading-class support in MinGW.
I wanted to send a packet every 5 seconds and decided to use boost::async (std::async) for this purpose.
This is the function I use to send the packet (it actually just copies into the buffer; the real sending happens in the main application loop, and it works fine outside the async method!):
m_sendBuf = new char[1024]; // allocate send buffer
[..]
bool CNetwork::Send(const void* sourceBuffer, size_t size) {
    size_t bufDif = m_sendBufSize - m_sendInBufPos;
    if (size > bufDif) {
        return false; // not enough room left in the buffer
    }
    memcpy(m_sendBuf + m_sendInBufPos, sourceBuffer, size);
    m_sendInBufPos += size;
    return true;
}
Packet sending code:
struct TestPacket {
    unsigned char type;
    int code;
};
void SendPacket() {
    TestPacket myPacket{};
    myPacket.type = 10;
    myPacket.code = 1234;
    Send(&myPacket, sizeof(myPacket)); // &myPacket, not &TestPacket (which would not compile)
}
Async code:
void StartPacketSending() {
    SendPacket();
    std::this_thread::sleep_for(std::chrono::seconds{5});
    StartPacketSending(); // recursive endless call
}
boost::async(boost::launch::async, &StartPacketSending);
Alright. So the thing is, when I call SendPacket() from the async method, the received packet is malformed on the server side and the data differs from what was specified. This doesn't happen when it's called outside the async call.
What is going on here? I'm out of ideas.

I think I have my head wrapped around what you are doing here. You are loading all unsent data into a buffer in one thread and then flushing it in a different thread. Even though the packets aren't overlapping (assuming they are consumed quickly enough), you still need to synchronize all the shared data.
m_sendBuf, m_sendInBufPos, and m_sendBufSize are all being read from the main thread, likely while memcpy or your buffer-size logic is running. I suspect you will have to use a proper queue to get your program to work as intended in the long run, but try protecting those variables with a mutex first, as in the sketch below.
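A minimal sketch of that protection, assuming an m_sendMutex member is added to CNetwork (the Flush method here is a hypothetical stand-in for the sending done in the main loop):
#include <cstring>
#include <mutex>

class CNetwork {
    std::mutex m_sendMutex; // hypothetical: guards the three members below
    char*  m_sendBuf = nullptr;
    size_t m_sendBufSize = 0;
    size_t m_sendInBufPos = 0;
public:
    bool Send(const void* sourceBuffer, size_t size) {
        std::lock_guard<std::mutex> lock(m_sendMutex); // serializes with the flushing thread
        if (size > m_sendBufSize - m_sendInBufPos) {
            return false;
        }
        std::memcpy(m_sendBuf + m_sendInBufPos, sourceBuffer, size);
        m_sendInBufPos += size;
        return true;
    }
    // called from the main loop; takes the same lock before touching the buffer
    void Flush() {
        std::lock_guard<std::mutex> lock(m_sendMutex);
        // ... send m_sendBuf[0 .. m_sendInBufPos) on the socket, then reset:
        m_sendInBufPos = 0;
    }
};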
Also, as other commenters have pointed out, C++ gives no guarantee of tail-call optimization, so that endless recursion will eventually overflow the stack, but that probably does not contribute to your malformed packets. A plain loop (sketched below) avoids the issue.
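A loop-based version of StartPacketSending; the g_running flag is a hypothetical addition so the thread can also be stopped cleanly:
#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> g_running{true}; // hypothetical stop flag, not in the original code

void StartPacketSending() {
    while (g_running) {
        SendPacket();
        std::this_thread::sleep_for(std::chrono::seconds{5});
    }
}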

Related

Reading from one socket for several consumers asynchronously in one thread

I am implementing a connection multiplexer: a class which wraps a single connection in order to provide the ability to create so-called Stream-s over it. There can be dozens of such streams over one physical connection.
Messages sent over that connection are defined by a protocol and can be either service messages (congestion control, etc.), which are never seen by the clients, or data messages, which carry data for the streams; the target stream is identified in the header of the corresponding message.
I have encountered a problem when implementing the read method for a Stream. It must be blocking, but asynchronous: it returns a value (the data read, or the error that happened), but the request itself must go into some kind of async queue.
To implement asynchronous network IO we have used Boost's async_read, async_write, etc. with a completion token taken from another library. So a call to MyConnection::underlying_connection::read(size_t) is already asynchronous in the sense I described before.
One solution I have implemented is the function MyConnection::processFrame(), which reads from the connection, processes a message and, if it is a data message, puts the data into the corresponding stream's buffer. The function is to be called in a while loop by the stream's read. But in that case there can be more than one simultaneous call to async_read, which is UB. Also, this would mean that even service messages have to wait until some stream wants to read the data, which is not appropriate either.
Another solution I came up with is using future-s, but as I checked, their wait/get methods would block the whole thread (even with the deferred policy or a paired promise), which must be avoided too.
Below is a simplified example with only the methods needed to understand the question. This is the current implementation, which contains bugs.
struct LowLevelConnection {
    /// completion token of a third-party library - ufibers
    yield_t yield;
    /// boost::asio socket
    TcpSocket socket_;
    /// completely async (in one thread) method
    std::vector<uint8_t> read(size_t bytes) {
        std::vector<uint8_t> res(bytes); // must be sized (not merely reserved) to receive data
        boost::asio::async_read(socket_, boost::asio::buffer(res), yield);
        return res;
    }
};
struct MyConnection {
    /// the header is always of this length
    static constexpr uint32_t kHeaderSize = 12;
    /// underlying connection
    LowLevelConnection connection_;
    /// runs all the time the connection is up
    void readLoop() {
        while (connection_.isActive()) {
            auto msg = connection_.read(kHeaderSize); // (pseudocode: assume the header is parsed here)
            if (msg.type == SERVICE) { handleService(msg); continue; } // 'return' here would end the loop
            // this is a data message; read the rest of it
            auto data = connection_.read(msg.data_size);
            // put the data into the corresponding stream's buffer
            streams_.find(data.stream_id).buffer.put(data);
        }
    }
};
struct Stream {
    Buffer buffer;
    // also an async blocking method
    std::vector<uint8_t> read(uint32_t bytes) {
        // in a perfect scenario, this would look like this:
        async_wait([&]() { return buffer.size() >= bytes; });
        // return a subbuffer of 'bytes' size and remove those bytes
        return subbuffer...
    }
};
Thanks in advance for your answers!
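Since the async_wait above is pseudocode, here is a minimal sketch of the per-stream blocking wait using a plain mutex and condition variable; in the actual fiber-based setup the ufibers library's own primitives would have to be used instead, so treat this purely as an illustration of the shape of the solution:
#include <condition_variable>
#include <cstdint>
#include <mutex>
#include <vector>

struct Buffer {
    std::mutex m;
    std::condition_variable cv;
    std::vector<uint8_t> data;

    // called by the single read loop when a data message arrives
    void put(const std::vector<uint8_t>& chunk) {
        {
            std::lock_guard<std::mutex> lock(m);
            data.insert(data.end(), chunk.begin(), chunk.end());
        }
        cv.notify_all();
    }

    // blocks the caller until 'bytes' bytes are available, then removes and returns them
    std::vector<uint8_t> take(uint32_t bytes) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [&] { return data.size() >= bytes; });
        std::vector<uint8_t> out(data.begin(), data.begin() + bytes);
        data.erase(data.begin(), data.begin() + bytes);
        return out;
    }
};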

How to stop boost::asio async reads from getting mixed up?

I am using boost::asio::ip::tcp::socket to receive data. I need an interface which allows me to specify a buffer and call a completion handler once this buffer is filled asynchronously.
When reading from sockets, we can use the async_read_some method.
However, the async_read_some method may read less than the requested number of bytes, so it must call itself with the rest of the buffer if this happens. Here is my current approach:
template<typename CompletionHandler>
void read(boost::asio::ip::tcp::socket* sock, char* target, size_t size, CompletionHandler completionHandler){
    struct ReadHandler {
        boost::asio::ip::tcp::socket* sock;
        char* target;
        size_t size;
        CompletionHandler completionHandler;
        ReadHandler(boost::asio::ip::tcp::socket* sock, char* target, size_t size, CompletionHandler completionHandler)
            : sock(sock), target(target), size(size), completionHandler(completionHandler) {}
        // either request the remaining bytes or call the completion handler
        void operator()(const boost::system::error_code& error, std::size_t bytes_transferred){
            if(error){
                return;
            }
            if(bytes_transferred < size){
                // read incomplete; ask for the rest
                char* newTarg = target + bytes_transferred;
                size_t newSize = size - bytes_transferred;
                sock->async_read_some(boost::asio::buffer(newTarg, newSize),
                                      ReadHandler(sock, newTarg, newSize, completionHandler));
                return;
            } else {
                // read complete, call handler
                completionHandler();
            }
        }
    };
    // start first read ('sock', not 'this': this is a free function)
    sock->async_read_some(boost::asio::buffer(target, size), ReadHandler(sock, target, size, completionHandler));
}
So basically, we call async_read_some until the whole buffer is filled, then we call the completion handler. So far so good. However, I think things get mixed up once I call this method a second time before the first call has finished receiving:
void thisMayFail(boost::asio::ip::tcp::socket* sock){
    char* buffer1 = new char[128];
    char* buffer2 = new char[128];
    read(sock, buffer1, 128, [](){ std::cout << "Buffer 1 filled"; });
    read(sock, buffer2, 128, [](){ std::cout << "Buffer 2 filled"; });
}
Of course, the first 128 received bytes should go into the first buffer and the second 128 should go into the second. But as I understand it, that is not guaranteed to happen here:
Suppose the first async_read_some returns only 70 bytes; then it would issue a second async_read_some for the remaining 58 bytes. However, that read will be queued behind the second 128-byte read(!), so the first buffer will receive the first 70 bytes, the next 128 will go into the second buffer, and the final 58 will go into the first. I.e., in this case the second buffer would even be filled before the first is filled completely. This must not happen.
How to solve this? I know there is the async_read method, but its documentation says it is simply implemented by calling async_read_some multiple times, so it is basically the same as my read implementation and will not fix the problem.
You simply can't have two asynchronous read operations active at the same time: that's undefined behaviour.
You can:
- use the free functions async_read or async_read_until, which already have the higher-level semantics and loop calling the socket's async_read_some until a condition is matched or the buffer is full;
- use asynchronous operation chaining to sequence the next async read after the first: in short, you initiate the second boost::asio::async_read* call in the completion handler of the first (see the sketch after this list). Note that this also gives you the opportunity to act on transport errors first. Together with the free-function interface, this will both raise the abstraction level of the code and solve the problem (the problem was initiating two simultaneous read operations);
- use a strand in case you run multiple IO service threads; see Why do I need strand per connection when using boost::asio?
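A minimal sketch of the chaining approach, reusing the read() helper from the question: the second composed read is only started inside the completion handler of the first, so the two can never interleave.
void readTwoBuffers(boost::asio::ip::tcp::socket* sock) {
    char* buffer1 = new char[128];
    char* buffer2 = new char[128];
    // start the second read only once the first buffer is completely filled
    read(sock, buffer1, 128, [sock, buffer2]() {
        std::cout << "Buffer 1 filled";
        read(sock, buffer2, 128, []() {
            std::cout << "Buffer 2 filled";
        });
    });
}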

Sending data with libevent works just sometimes

While developing, it's common that things either work or they don't. When sending data from my client to my server, it does not work every time, although in most cases it does. I am guessing that the kernel probably doesn't send the buffer it has stored. Anyway, there has to be a way to work around this behaviour.
My client has a GUI and it has to communicate with a server. Because threads don't work the way I want them to, I decided to use event_base_loop so that it just blocks until one packet is processed. After that it can do GUI stuff so that the window won't freeze.
I am fairly certain that it is my sending that fails and NOT my reading, because my server never calls my read callback ("readcb").
I am calling the attached function from the main function like this:
int main(int argc, char **argv)
{
    // init stuff
    // connect to server
    sendPacket(bev);
}
I have researched this a lot but haven't found anything. For example, bufferevent_flush(bev, EV_WRITE, BEV_FLUSH) doesn't work with sockets (I even tried it out).
My current function for writing (in short form, simplified to one packet):
void sendPacket(bufferevent * bev)
{
    // just data:
    const unsigned int msg_ = 12;
    char msg[msg_] = "01234567891";
    // send that packet:
    uint16_t packet_id = 1;
    bufferevent_write(bev, &packet_id, 2);
    bufferevent_write(bev, msg, msg_);
    // this part SHOULD make the data really go out, but it does not every time:
    while (evbuffer_get_length(bufferevent_get_output(bev)) > 0)
    {
        event_base_loop(bufferevent_get_base(bev), EVLOOP_ONCE);
    }
    // this last one only to be really sure (that's why I use the second option):
    event_base_loop(bufferevent_get_base(bev), EVLOOP_NONBLOCK | EVLOOP_ONCE);
}
Thanks for your time, I would be lost without your help.
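For reference, libevent can also report when the output buffer has drained via a write callback instead of spinning the loop by hand; a minimal sketch, assuming the bufferevent setup from the question (the on_write_done name is hypothetical):
#include <event2/bufferevent.h>
#include <event2/event.h>

// invoked by libevent once the output buffer drains below the low
// watermark (0 by default, i.e. everything has been handed to the kernel)
static void on_write_done(struct bufferevent *bev, void *ctx)
{
    // safe to queue the next packet here
}

void setupCallbacks(struct bufferevent *bev)
{
    // keep your existing read/event callbacks; only the write callback is new here
    bufferevent_setcb(bev, NULL, on_write_done, NULL, NULL);
    bufferevent_enable(bev, EV_READ | EV_WRITE);
}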

winsock, message oriented networking, and type-casting the buffer from recv

Okay, I actually don't have code as of yet because I'm just picking out a framework for the time being, but I'm still a little baffled about how I wish to go about this.
Server side, I wish to have a class where each instance has a socket and various information identifying each connection. Each object will have its own thread for receiving data. I understand how I'll be implementing most of that, but my confusion starts just as I get to the actual transfer of data between server and client. I'll want to have a bunch of different message structs for specific cases (for example CONNECT_MSG, DISCONNECT_MSG, POSTTEXT_MSG, etc.), and then all I have to do is have a char * point at that struct and pass it via the send() function.
But as I think on it, it gets a little complicated at that point. Any of those different message types could be sent, and on the receiving end you will have no idea what to cast the incoming buffer to. What I was hoping to do is, in the thread of each connection object, have it block until it receives a packet with a message, then dump it into a single queue object managed by the server (mutexes will prevent greediness), and then have the server process each message in FIFO order, independent of the connection objects.
I haven't written anything yet, but let me write a little something to illustrate my setup.
#define CONNECT 1000
struct GENERIC_MESSAGE
{
    int id;
};
struct CONNECT_MESSAGE : public GENERIC_MESSAGE
{
    char m_username[32]; // type was missing in the original; fixed-size array for illustration
};
void Connection::Thread()
{
    while(1)
    {
        char buffer[MAX_BUFFER_SIZE]; // some constant (probably 2048)
        recv(m_socket, buffer, MAX_BUFFER_SIZE, 0); // (return value ignored here; real code must check it)
        GENERIC_MESSAGE * msg = reinterpret_cast<GENERIC_MESSAGE *>(buffer);
        server->QueueMessage(msg);
    }
}
void Server::QueueMessage(GENERIC_MESSAGE * msg)
{
    messageQueue.push(msg);
}
void Server::Thread()
{
    while(1)
    {
        if(!messageQueue.empty())
            ProcessMessages();
        else
            Sleep(1);
    }
}
void Server::ProcessMessages()
{
    while(!messageQueue.empty()) // a counting for-loop would skip entries as pop() shrinks the queue
    {
        switch(messageQueue.front()->id)
        {
            case CONNECT:
            {
                // the part I REALLY don't like
                CONNECT_MESSAGE * msg = static_cast<CONNECT_MESSAGE *>(messageQueue.front());
                // do the rest of the processing on connect
                break;
            }
            // other cases for the other message types
        }
        messageQueue.pop();
    }
}
Now if you've been following up until now, you realize just how STUPID and fragile this is. It casts to the base class, passes that pointer to a queue, and then just assumes the pointer is still valid from the other thread, and even then that the rest of the buffer beyond the pointer remains valid for casting to the derived class; I have yet to find a correct way of doing this. I am wide open for ANY suggestions, either for making this work or for an entirely different messaging design.
Before you write even a line of code, design the protocol that will be used on the wire. Decide what a message will consist of at the byte level. Decide who sends first, whether messages are acknowledged, how receivers identify message boundaries, and so on. Decide how the connection will be kept active (if it will be), which side will close first, and so on. Then write the code around that specification.
Do not tightly associate how you store things in memory with how you send things on the wire. These are two very different things with two very different sets of requirements.
Of course, feel free to adjust the protocol specification as you write the code.
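To make "don't couple memory layout to wire format" concrete, here is a minimal sketch of a length-and-type-prefixed frame with explicit serialization; every name in it is hypothetical, not part of the original design:
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// wire format: uint32_t payload_length | uint16_t message_type | payload bytes
// (native byte order in this sketch; a real protocol would pin the endianness)
enum : uint16_t { MSG_CONNECT = 1000 };

std::vector<uint8_t> encodeConnect(const std::string& username)
{
    std::vector<uint8_t> frame(4 + 2 + username.size());
    uint32_t len = static_cast<uint32_t>(username.size());
    uint16_t type = MSG_CONNECT;
    std::memcpy(frame.data(),     &len,  sizeof len);  // the receiver reads this first
    std::memcpy(frame.data() + 4, &type, sizeof type); // then dispatches on the type
    std::memcpy(frame.data() + 6, username.data(), username.size());
    return frame;
}

// The receiver reads exactly 6 header bytes, then exactly payload_length more,
// copies the payload into an owned object, and queues that object; no pointer
// into the recv buffer ever escapes the reading thread.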

Trouble tracking down a Bus Error/Seg Fault in C++ and Linux

I have a program that processes neural spike data that is broadcast in UDP packets on a local network.
My current program has two threads: a UI thread and a worker thread. The worker thread simply listens for data packets, parses them and makes them available to the UI thread for display and processing. My current implementation works just fine. However, for a variety of reasons, I'm trying to re-write the program in C++ using an object-oriented approach.
The current working program initializes the 2nd thread with:
pthread_t netThread;
net = NetCom::initUdpRx(host,port);
pthread_create(&netThread, NULL, getNetSpike, (void *)NULL);
Here is the getNetSpike function that is called by the new thread:
void *getNetSpike(void *ptr){
    while(true)
    {
        spike_net_t s;
        NetCom::rxSpike(net, &s);
        spikeBuff[writeIdx] = s;
        writeIdx = incrementIdx(writeIdx);
        nSpikes += 1;
        totalSpikesRead++;
    }
}
Now in my new OO version of the program I setup the 2nd thread in much the same way:
void SpikePlot::initNetworkRxThread(){
    pthread_t netThread;
    net = NetCom::initUdpRx(host, port);
    pthread_create(&netThread, NULL, networkThreadFunc, this);
}
However, because pthread_create takes a pointer to a free function and not a pointer to an object's member method, I needed to create this simple function that wraps the SpikePlot::getNetworkSpikePacket() method:
void *networkThreadFunc(void *ptr){
    SpikePlot *sp = reinterpret_cast<SpikePlot *>(ptr);
    while(true)
    {
        sp->getNetworkSpikePacket();
    }
}
Which then calls the getNetworkSpikePacket() method:
void SpikePlot::getNetworkSpikePacket(){
    spike_net_t s;
    NetCom::rxSpike(net, &s);
    spikeBuff[writeIdx] = s; // <--- SegFault/BusError occurs on this line
    writeIdx = incrementIdx(writeIdx);
    nSpikes += 1;
    totalSpikesRead++;
}
The code for the two implementations is nearly identical, but the 2nd (OO) implementation crashes with a SegFault or BusError after the first packet is read. Using printf I've narrowed down which line causes the error:
spikeBuff[writeIdx] = s;
and for the life of me I can't figure out why it's causing my program to crash.
What am I doing wrong here?
Update:
I define spikeBuff as a private member of the class:
class SpikePlot{
private:
    static int const MAX_SPIKE_BUFF_SIZE = 50;
    spike_net_t spikeBuff[MAX_SPIKE_BUFF_SIZE];
    ....
};
Then in the SpikePlot constructor I call:
bzero(&spikeBuff, sizeof(spikeBuff));
and set:
writeIdx =0;
Update 2: OK, something really weird is going on with my index variables. To test their sanity I changed getNetworkSpikePacket to:
void TetrodePlot::getNetworkSpikePacket(){
    printf("Before:writeIdx:%d nspikes:%d totSpike:%d\n", writeIdx, nSpikes, totalSpikesRead);
    spike_net_t s;
    NetCom::rxSpike(net, &s);
    //spikeBuff[writeIdx] = s;
    writeIdx++; // = incrementIdx(writeIdx);
    //if (writeIdx >= MAX_SPIKE_BUFF_SIZE)
    //    writeIdx = 0;
    nSpikes += 1;
    totalSpikesRead += 1;
    printf("After:writeIdx:%d nspikes:%d totSpike:%d\n\n", writeIdx, nSpikes, totalSpikesRead);
}
And I get the following output to the console:
Before:writeIdx:0 nspikes:0 totSpike:0
After:writeIdx:1 nspikes:32763 totSpike:2053729378
Before:writeIdx:1 nspikes:32763 totSpike:2053729378
After:writeIdx:1 nspikes:0 totSpike:1
Before:writeIdx:1 nspikes:0 totSpike:1
After:writeIdx:32768 nspikes:32768 totSpike:260289889
Before:writeIdx:32768 nspikes:32768 totSpike:260289889
After:writeIdx:32768 nspikes:32768 totSpike:260289890
This method is the only place where I update those variables (besides the constructor, where I set them to 0). All other uses of them are read-only.
I'm going to go out on a limb here and say all your problems are caused by zeroing out the spike_net_t array.
In C++ you must not zero out objects with non-POD (i.e. not plain 'struct-like') members: if you have an object that contains a complex member (a std::string, a vector, etc.), you cannot zero it out, as doing so destroys the initialization of the object done in the constructor.
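A minimal sketch of the safer alternative: value-initialize the array in the constructor's initializer list instead of bzero-ing the object (spike_net_t here stands in for the real type):
struct spike_net_t { /* fields as in the real project */ };

class SpikePlot {
private:
    static int const MAX_SPIKE_BUFF_SIZE = 50;
    spike_net_t spikeBuff[MAX_SPIKE_BUFF_SIZE];
    int writeIdx;
    int nSpikes;
    long totalSpikesRead;
public:
    // spikeBuff() value-initializes the whole array; no bzero needed, and
    // any non-POD members would keep their constructor-established state
    SpikePlot() : spikeBuff(), writeIdx(0), nSpikes(0), totalSpikesRead(0) {}
};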
This may be wrong, but...
You seem to have moved the wait-loop logic out of the method and into the static wrapper. With nothing holding the worker thread open, perhaps that thread terminates after the first time you wait for a UDP packet, so that the second time around, sp in the static wrapper points to an instance that has gone out of scope and been destructed?
Can you try to assert(sp) in the wrapper before calling its getNetworkSpikePacket()?
It looks like your reinterpret_cast might be causing some problems. When you call pthread_create, you are passing in "this" which is a SpikePlot*, but inside networkThreadFunc, you are casting it to a TetrodePlot*.
Are SpikePlot and TetrodePlot related? This isn't called out in what you've posted.
If you are allocating the spikeBuff array anywhere, make sure you allocate sufficient storage so that writeIdx is never an out-of-bounds index.
I'd also check that initNetworkRxThread is being called on an allocated instance of the SpikePlot object (and not just on a declared pointer); a sketch of one way to pin down the object's lifetime follows.
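For illustration, one way to guarantee the object outlives the worker thread is to keep the pthread_t as a member and join it before destruction; the member and flag names here are hypothetical additions (and a real implementation would use std::atomic or a pthread primitive rather than volatile):
#include <pthread.h>

class SpikePlot;
void *networkThreadFunc(void *ptr); // the wrapper from the question

class SpikePlot {
public:
    void initNetworkRxThread() {
        // keep the handle as a member instead of a local so it can be joined later
        pthread_create(&netThread_, NULL, networkThreadFunc, this);
    }
    ~SpikePlot() {
        stopRequested_ = true;          // ask the loop to exit
        pthread_join(netThread_, NULL); // thread is gone before the members are
    }
    bool stopRequested() const { return stopRequested_; }
    void getNetworkSpikePacket() { /* as in the question */ }
private:
    pthread_t netThread_;
    volatile bool stopRequested_ = false;
};

// the wrapper loops until asked to stop, instead of while(true)
void *networkThreadFunc(void *ptr) {
    SpikePlot *sp = static_cast<SpikePlot *>(ptr);
    while (!sp->stopRequested()) {
        sp->getNetworkSpikePacket();
    }
    return NULL;
}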