Sending data with libevent only works sometimes - C++

While developing, it's common for things to work only intermittently. When sending data from my client to my server, it works in most cases, but not every time. My guess is that the kernel doesn't always send the buffer it has stored. In any case, there has to be a way to work around this behaviour.
My client has a GUI and has to communicate with a server. Because threads don't work the way I want them to, I decided to use event_base_loop so that it only blocks until one packet is processed. After that it can do GUI work so that the window won't freeze.
I am fairly certain that it is my sending that fails, NOT my reading, because my server never calls my read callback ("readcb").
I call the function below from my main function like this:
int main(int argc, char **argv)
{
// init stuff
// connect to server
sendPacket(bev);
}
I have researched this a lot, but I can't find anything. For example, bufferevent_flush(bev, EV_WRITE, BEV_FLUSH) doesn't work with sockets (I even tried it out).
My current function for writing (in short form, simplified to one packet):
void sendPacket(bufferevent * bev)
{
// just data:
const unsigned int msg_ = 12;
char msg[msg_] = "01234567891";
// send that packet:
uint16_t packet_id = 1;
bufferevent_write(bev, &packet_id, 2);
bufferevent_write(bev, msg, msg_);
// and this part SHOULD make the data actually get sent, but it does not every time:
while (evbuffer_get_length(bufferevent_get_output(bev)) > 0)
{
event_base_loop(bufferevent_get_base(bev), EVLOOP_ONCE);
}
// this last call is only to be really sure (that's why I use the second option):
event_base_loop(bufferevent_get_base(bev), EVLOOP_NONBLOCK | EVLOOP_ONCE);
}
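For comparison, here is a minimal sketch of the callback-driven pattern libevent itself is built around: register a write callback with bufferevent_setcb, which libevent invokes once the output buffer has drained, and let the normal event loop do the sending instead of flushing manually. The callback name on_write_drained and the wrapper function are illustrative additions, not part of the original code:

#include <event2/bufferevent.h>
#include <event2/event.h>
#include <cstdint>

static void on_write_drained(struct bufferevent *bev, void *ctx)
{
    // Invoked by libevent once the output buffer is empty, i.e. everything
    // queued with bufferevent_write() has been handed to the kernel.
    // A good place to queue the next packet or update the GUI.
    (void)bev;
    (void)ctx;
}

void sendPacketAsync(struct bufferevent *bev)
{
    const char msg[12] = "01234567891";
    uint16_t packet_id = 1;

    // Keep your real readcb/eventcb in place of the nullptrs.
    bufferevent_setcb(bev, nullptr, on_write_drained, nullptr, nullptr);
    bufferevent_enable(bev, EV_READ | EV_WRITE);

    bufferevent_write(bev, &packet_id, sizeof(packet_id));
    bufferevent_write(bev, msg, sizeof(msg));
    // No manual flush loop: the running event loop (event_base_dispatch, or the
    // existing EVLOOP_ONCE calls) performs the actual send.
}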
Thanks for your time, I would be lost without your help.

Related

How to transfer IVectorView<InkStroke> in C++/CX

Scenario: I would like to exchange InkStrokes between devices running on the same LAN.
Step 1: collect all the strokes drawn on the screen on the client side:
// Send strokes to a remote server.
strokesToReplay = inkCanvas->InkPresenter->StrokeContainer->GetStrokes();
const int sz = sizeof(strokesToReplay);
char msgToSend[sz];
memcpy(msgToSend, &strokesToReplay, sz);
send(tcpClient.sockfd, msgToSend, sz, 0);
Step 2: receive the data on the server side:
// tcpServer is an instance of TCPServer, which contains
// a function that calls listen -> accept
// to establish a connection with the client
bytesRecv = tcpServer.recvFromClient();
// Received data is stored in TCPServer::buffer (char buffer[16384])
What I would like to do is cast the data in the buffer to an IVectorView, so that it is possible to iterate over the InkStrokes in it like this:
for (InkStroke^ inkStroke : buffer) {
... to do
}
But here is the question: how can I cast char * to IVectorView?
I've tried memcpy() and static_cast.
But since there is no proper memory allocated behind the IVectorView, memcpy() brings the whole program down.
static_cast() simply won't work with IVectorView.
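To illustrate why the raw copy cannot work, independent of the WinRT specifics: strokesToReplay is essentially a handle/reference, so sizeof(strokesToReplay) measures only that handle, and memcpy of it copies a pointer that is meaningless on the other machine. The stroke data itself has to be serialized value by value. A minimal plain-C++ sketch of that idea, using a made-up PointData struct rather than the real InkStroke API:

#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical flattened representation of one ink point.
struct PointData {
    float x;
    float y;
    float pressure;
};

// Serialize the *values* into a byte buffer that means the same thing on the
// other machine (endianness/padding concerns ignored for brevity).
std::vector<char> serialize(const std::vector<PointData>& points)
{
    uint32_t count = static_cast<uint32_t>(points.size());
    std::vector<char> bytes(sizeof(count) + count * sizeof(PointData));
    std::memcpy(bytes.data(), &count, sizeof(count));                 // length prefix
    if (count != 0)
        std::memcpy(bytes.data() + sizeof(count), points.data(),
                    count * sizeof(PointData));                       // payload
    return bytes;
}

// On the receiving side, rebuild objects from the bytes instead of casting the buffer.
std::vector<PointData> deserialize(const char* buffer, std::size_t len)
{
    uint32_t count = 0;
    if (len < sizeof(count)) return {};
    std::memcpy(&count, buffer, sizeof(count));
    if (len < sizeof(count) + count * sizeof(PointData)) return {};
    std::vector<PointData> points(count);
    if (count != 0)
        std::memcpy(points.data(), buffer + sizeof(count), count * sizeof(PointData));
    return points;
}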
Now I am thinking about copying the data into the clipboard, then calling the Microsoft-provided API that gets data from the clipboard and casts it into strokes automatically.
But I do not know whether that works or not...
Is there any advice that you guys can give me?
Thank you.

Creating a socket inside a thread C++

I've been given a problem to solve... here's how it goes:
I'm required to write two programs, client and server.
My client program is going to do some trivial task X, which creates a queue of size N.
Then the client program will create N threads, and these child threads will each create a socket, and send some information pertaining to X to the server.
The server then receives this information from the client, creates child processes to further process it, and sends it back to the client.
My main question is how to go about creating the socket INSIDE the thread.
#include <pthread.h>
#include <stdio.h>
#define NTHREADS 5
void *process_X(void *x_void_ptr)
{
//random
// do I create the socket here?
return NULL;
}
int main()
{
static int x = 0;
pthread_t tid[NTHREADS];
for(int i=0;i<NTHREADS;i++)
{
if(pthread_create(&tid[i], NULL, process_X, &x))
{
fprintf(stderr, "Error creating thread\n");
return 1;
}
}
// Wait for the other threads to finish.
for (int i = 0; i < NTHREADS; i++)
pthread_join(tid[i], NULL);
return 0;
}
Also, according to the information I've been given about creating sockets, I will be inputting the hostname and port number from the command line. So I will need to use argv[] too, and I don't know how to do that if the socket code won't be in the main function.
Any help greatly appreciated...
My main question is how to go about creating the socket INSIDE the thread.
Everything within your process_X function takes place in the new thread. So, you do indeed create the socket where your comment indicates.
So I will need to use argv[] too, and I don't know how to do that if the socket code won't be in the main function.
The last parameter of pthread_create is passed on into process_X, which is what x_void_ptr is. You can simply cast x_void_ptr to whatever type you need it to be.
I suggest parsing the CLI arguments in your main function, and arranging the data in a struct, which is then passed into process_X via pthread_create.
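A minimal sketch of that suggestion follows. The struct name thread_args and the use of getaddrinfo are illustrative choices, not something given in the question:

#include <netdb.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define NTHREADS 5

// Hostname/port parsed from argv in main() and shared (read-only) by all threads.
struct thread_args {
    const char *hostname;
    const char *port;
};

static void *process_X(void *arg)
{
    struct thread_args *args = (struct thread_args *)arg;

    // Each thread creates and connects its own socket.
    struct addrinfo hints, *res = NULL;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family   = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo(args->hostname, args->port, &hints, &res) != 0)
        return NULL;

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd >= 0) {
        if (connect(fd, res->ai_addr, res->ai_addrlen) == 0) {
            // ... send the information pertaining to X here ...
        }
        close(fd);
    }
    freeaddrinfo(res);
    return NULL;
}

int main(int argc, char **argv)
{
    if (argc < 3) {
        fprintf(stderr, "usage: %s hostname port\n", argv[0]);
        return 1;
    }

    struct thread_args args = { argv[1], argv[2] };   // parsed once, in main()
    pthread_t tid[NTHREADS];

    for (int i = 0; i < NTHREADS; i++)
        if (pthread_create(&tid[i], NULL, process_X, &args))
            return 1;

    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);
    return 0;
}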

C++ weird async behaviour

Note that I'm using boost::async, due to MinGW's lack of support for the standard threading classes.
So, I wanted to send a packet every 5 seconds and decided to use boost::async (std::async) for this purpose.
This is the function I use to send the packet (it actually just copies into the buffer; the real send happens in the main application loop - never mind that - it works fine outside the async method!)
m_sendBuf = new char[1024]; // allocate buffer
[..]
bool CNetwork::Send(const void* sourceBuffer, size_t size) {
size_t bufDif = m_sendBufSize - m_sendInBufPos;
if (size > bufDif) {
return false;
}
memcpy(m_sendBuf + m_sendInBufPos, sourceBuffer, size);
m_sendInBufPos += size;
return true;
}
Packet sending code:
struct TestPacket {
unsigned char type;
int code;
};
void SendPacket() {
TestPacket myPacket{};
myPacket.type = 10;
myPacket.code = 1234;
Send(&myPacket, sizeof(myPacket));
}
Async code:
void StartPacketSending() {
SendPacket();
std::this_thread::sleep_for(std::chrono::seconds{5});
StartPacketSending(); // Recursive endless call
}
boost::async(boost::launch::async, &StartPacketSending);
Alright. So the thing is, when I call SendPacket() from the async method, the received packet is malformed on the server side and the data is different from what was specified. This doesn't happen when it is called outside the async call.
What is going on here? I'm out of ideas.
I think I have my head wrapped around what you are doing here. You are loading all unsent data into the buffer in one thread and then flushing it in a different thread. Even though the packets aren't overlapping (assuming they are consumed quickly enough), you still have to synchronize all the shared data.
m_sendBuf, m_sendInBufPos, and m_sendBufSize are all being read from the main thread, likely while memcpy or your buffer-size logic is running. I suspect you will need a proper queue to make your program work as intended in the long run, but start by protecting those variables with a mutex.
Also, as other commenters have pointed out, the endless recursion in StartPacketSending will eventually overflow the stack (use a loop instead), but that probably does not contribute to your malformed packets.
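A minimal sketch of that suggestion, reusing the member names from the question; the std::mutex member m_sendMutex, the Flush helper, and the loop-based sender are additions for illustration, not part of the original class:

#include <chrono>
#include <cstring>
#include <mutex>
#include <thread>

class CNetwork {
public:
    bool Send(const void* sourceBuffer, size_t size) {
        std::lock_guard<std::mutex> lock(m_sendMutex);   // serialize access to the shared buffer
        size_t bufDif = m_sendBufSize - m_sendInBufPos;
        if (size > bufDif)
            return false;
        std::memcpy(m_sendBuf + m_sendInBufPos, sourceBuffer, size);
        m_sendInBufPos += size;
        return true;
    }

    // Called from the main application loop when it flushes the buffer.
    void Flush() {
        std::lock_guard<std::mutex> lock(m_sendMutex);
        // ... hand m_sendBuf[0 .. m_sendInBufPos) to the socket here, then reset ...
        m_sendInBufPos = 0;
    }

private:
    std::mutex m_sendMutex;
    char*  m_sendBuf      = new char[1024];
    size_t m_sendBufSize  = 1024;
    size_t m_sendInBufPos = 0;
};

// A plain loop avoids the ever-deepening recursion in StartPacketSending().
void StartPacketSending(CNetwork& network) {
    for (;;) {
        // SendPacket() from the question would call network.Send(&myPacket, sizeof(myPacket));
        (void)network;
        std::this_thread::sleep_for(std::chrono::seconds{5});
    }
}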

Halting an OSC UDP listener function. Xcode iOS application

I'm developing an application that sends OSC data via UDP between a host program and an iPad. Sending from the iPad to the host program was reasonably straightforward. However, receiving data from the host program on the iPad has been more troublesome.
I have got to the point where the iPad is RECEIVING the data, but it becomes stuck in the Run() packet-listener function. It was my understanding that Break() would interrupt the function and return to my main program, but it doesn't seem to work.
I am more experienced working with objective-c in an iOS environment, so it may be my rudimentary understanding of C++ that is letting me down.
Code below: (I have been using the oscPack library to produce this code. Available Here: http://www.rossbencina.com/code/oscpack)
In my main function (where the Break() calls don't appear to be working):
ExamplePacketListener listener;
UdpListeningReceiveSocket s(
IpEndpointName( IpEndpointName::ANY_ADDRESS, RECEIVEPORT ),
&listener );
s.RunUntilSigInt();
//s.Run();
s.Break();
s.AsynchronousBreak();
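For reference, a minimal sketch of the pattern oscpack expects: Run()/RunUntilSigInt() block the calling thread, so the Break() and AsynchronousBreak() calls placed after them never execute; they have to come from somewhere else, for example another thread (via AsynchronousBreak()) or from inside ProcessMessage (via Break()). The std::thread usage and the listenForAWhile wrapper below are illustrative assumptions; ExamplePacketListener and RECEIVEPORT are as shown in the question:

#include <chrono>
#include <thread>
#include "osc/OscPacketListener.h"
#include "ip/UdpSocket.h"

void listenForAWhile( ExamplePacketListener& listener )
{
    UdpListeningReceiveSocket s(
        IpEndpointName( IpEndpointName::ANY_ADDRESS, RECEIVEPORT ),
        &listener );

    // Run() blocks, so give it its own thread and keep the main/UI code going.
    std::thread listenThread( [&s]{ s.Run(); } );

    // ... rest of the program runs here ...
    std::this_thread::sleep_for( std::chrono::seconds{10} );

    // Designed to be called from a different thread; it makes Run() return.
    s.AsynchronousBreak();
    listenThread.join();
}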
My packet listener class is very similar to the oscPack example, with some argument modifications. This was another area I felt could be causing a problem: if my OSC messages aren't padded correctly, could that stop the Run() function from returning?
class ExamplePacketListener : public osc::OscPacketListener {
protected:
virtual void ProcessMessage( const osc::ReceivedMessage& m,
const IpEndpointName& remoteEndpoint )
{
(void) remoteEndpoint; // suppress unused parameter warning
try{
// example of parsing single messages. osc::OscPacketListener
// handles the bundle traversal.
if( std::strcmp( m.AddressPattern(), "/oscMasterSend" ) == 0 ){
// example #1 -- argument stream interface
osc::ReceivedMessageArgumentStream args = m.ArgumentStream();
bool a1;
args >> a1 >> osc::EndMessage;
NSLog(@"%d", a1);
}
} catch( osc::Exception& e ) {
// a malformed or mismatched message ends up here instead of escaping ProcessMessage
(void) e;
}
}
};
Any advice would be greatly appreciated.
Thanks,
Tom.

winsock, message oriented networking, and type-casting the buffer from recv

Okay, I actually don't have code yet because I'm just picking out a framework for the time being, but I'm still a little baffled about how I want to go about this.
Server side, I want a class where each instance has a socket and various information identifying the connection; each object will have its own thread for receiving data. I understand how I'll implement most of that, but my confusion starts when I get to the actual transfer of data between server and client. I want a bunch of different message structs for specific cases (for example CONNECT_MSG, DISCONNECT_MSG, POSTTEXT_MSG, etc.), and then all I have to do is point a char * at a struct and pass it to the send() function.
But as I think about it, it gets a little complicated at that point. Any of those message types could be sent, and on the receiving end you have no idea what to cast the incoming buffer to. What I was hoping to do is have each connection object's thread block until it receives a packet with a message, then dump it into a single queue object managed by the server (mutexes will prevent greediness), and then the server will process each message in FIFO order independently of the connection objects.
I haven't written anything yet, but let me write a little something to illustrate my setup.
#define CONNECT 1000
struct GENERIC_MESSAGE
{
int id;
};
struct CONNECT_MESSAGE : public GENERIC_MESSAGE
{
char m_username[32]; // some username field; the type was left unspecified in the sketch
};
void Connection::Thread()
{
while(1)
{
char buffer[MAX_BUFFER_SIZE]; // some constant(probably 2048)
recv(m_socket, buffer, MAX_BUFFER_SIZE, 0);
GENERIC_MESSAGE * msg = reinterpret_cast<GENERIC_MESSAGE *> (buffer);
server->queueMessage(msg);
}
}
void Server::QueueMessage(GENERIC_MESSAGE * msg)
{
messageQueue.push(msg);
}
void Server::Thread()
{
while(1)
{
if(!messageQueue.empty())
ProcessMessages();
else
Sleep(1);
}
}
void Server::ProcessMessages()
{
for(int i = 0; i < messageQueue.size(); i++)
{
switch(messageQueue.front()->id)
{
case CONNECT:
{
// the part i REALLY don't like
CONNECT_MESSAGE * msg = static_cast<CONNECT_MESSAGE *>(messageQueue.front() );
// do the rest of the processing on connect
break;
}
// other cases for the other message types
}
messageQueue.pop();
}
}
Now if you've been following up until now, you realize just how STUPID and fragile this is. It casts to the base class, passes that pointer to a queue, and then just assumes that the pointer is still valid from the other thread, and that the remaining buffer behind the pointer will still be valid later when it is cast to the derived class. I have yet to find a correct way of doing this. I am wide open to ANY suggestions, either for making this work or for an entirely different messaging design.
Before you write even a line of code, design the protocol that will be used on the wire. Decide what a message will consist of at the byte level. Decide who sends first, whether messages are acknowledged, how receivers identify message boundaries, and so on. Decide how the connection will be kept active (if it will be), which side will close first, and so on. Then write the code around the specification.
Do not tightly associate how you store things in memory with how you send things on the wire. These are two very different things with two very different sets of requirements.
Of course, feel free to adjust the protocol specification as you write the code.
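A minimal sketch of what such a wire specification can look like in code, assuming a simple [length][type][payload] framing and that both ends share byte order; the names Frame, recv_all, and receive_frame, and the exact field widths, are illustrative choices rather than part of the answer:

#include <winsock2.h>   // the question targets winsock; WSAStartup() is assumed to have run
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// Wire format, decided up front and independent of any in-memory struct layout:
//   uint32_t payload_length   (bytes that follow the header)
//   uint16_t message_type     (e.g. CONNECT = 1000)
//   payload_length bytes of payload
struct Frame {
    uint16_t type = 0;
    std::vector<char> payload;   // owned copy, safe to hand to another thread
};

// Read exactly n bytes; recv() may legally return fewer bytes than requested.
static bool recv_all(SOCKET sock, char* out, std::size_t n)
{
    std::size_t got = 0;
    while (got < n) {
        int r = recv(sock, out + got, static_cast<int>(n - got), 0);
        if (r <= 0) return false;          // connection closed or error
        got += static_cast<std::size_t>(r);
    }
    return true;
}

// Receive one complete, self-describing message. The receiver never guesses the
// message boundary and never keeps a pointer into a transient stack buffer.
static bool receive_frame(SOCKET sock, Frame& frame)
{
    uint32_t len = 0;
    if (!recv_all(sock, reinterpret_cast<char*>(&len), sizeof(len))) return false;
    if (!recv_all(sock, reinterpret_cast<char*>(&frame.type), sizeof(frame.type))) return false;

    frame.payload.resize(len);
    return len == 0 || recv_all(sock, frame.payload.data(), len);
}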