Best way to send data client server - c++

What is the best way to handle data that needs to be sent to the server? I have a multithreaded client, and every thread has data that needs to be sent to the server. But when I launch the server, packets are sometimes sent at the same time, so the data is not correct at that point.
I thought: let's make a stack that gets sent to the server every x ms. Is this a good way to do it?

You can use a message queue structure. There will be only one queue on the server, and every time a message arrives it is added to the end of the queue, so even if messages are sent at the same time they will be ordered. Then process the messages in the queue by dequeuing them. There are many open-source message queue implementations you can use, so you do not have to write one from scratch.
You also do not have to wait x milliseconds before sending data to the server with this structure, which will make your system faster.
Hope it helps

Open one socket per client thread. That way the server can tell which thread each message comes from, and everything is kept in order.

Related

Handling multiple clients simultaneously in C++ UDP server

I have developed a C++ UDP-based server application and I am in the process of implementing code to handle multiple clients simultaneously. I have the following understanding of how to handle multiple clients and want to fill in the knowledge gaps.
My step-wise understanding is as follows:
1. The UDP server listens on a specific port (say xxxx).
2. The server has a message queue. It can be an array, a linked list, a queue, or anything for that matter.
3. As soon as a request arrives at port xxxx, it is placed in the message queue.
4. After the message is queued, a new thread (let us call it the worker thread) is spawned; it picks up the queued message, which is then removed from the queue.
5. The worker thread knows the client's IP:port from the message header.
6. The worker thread processes the request and sends the response to the client's IP:port.
7. The client gets the response and the worker thread terminates.
Steps 3 to 7 take care of multiple clients being handled simultaneously.
Is my understanding sufficient? Where do I need improvement?
Thanks in advance
The client gets the response and the worker thread terminates.
The worker thread should terminate when it completes processing. There is no practical way for it to wait for an acknowledgement from the client.
The worker thread processes the request and sends the response to the client's IP:port
I think it will be better to place the response on a queue. The main server thread can check the queue and send any responses found there. This prevents race conditions when two worker threads overlap in their attempts to send responses.
The server has a message queue. It can be an array, a linked list, a queue, or anything for that matter.
It pretty much has to be a queue. The interesting question is the queueing priority. Initially FIFO will do. If your server becomes overloaded, you will need to consider alternatives. Perhaps it would be good to estimate the processing time required and do the fast jobs first. Or perhaps different clients deserve different priorities.
After putting it in the message queue a new thread (let us call it the worker thread) is spawned
This is fine initially. However, you will want to do some time profiling and determine if a thread pool would be advantageous.
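The receive-and-spawn flow from the steps above might look like this minimal POSIX sketch (loopback echo; on Windows the calls are the Winsock equivalents). The function names and the "ack:" reply format are illustrative assumptions, not part of the original question:

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <string>
#include <thread>

// Worker: echo a response back to the client's IP:port, which
// recvfrom() captured for us (steps 5-6 above).
void worker(int sock, std::string msg, sockaddr_in client, socklen_t len) {
    std::string reply = "ack:" + msg;
    sendto(sock, reply.data(), reply.size(), 0,
           reinterpret_cast<sockaddr*>(&client), len);
}

// Receive one request and hand it to a worker thread (steps 3-7 above).
void serve_one(int sock) {
    char buf[1500];
    sockaddr_in client{};
    socklen_t len = sizeof(client);
    ssize_t n = recvfrom(sock, buf, sizeof(buf), 0,
                         reinterpret_cast<sockaddr*>(&client), &len);
    if (n < 0) return;
    std::thread(worker, sock, std::string(buf, n), client, len).detach();
}
```

Note that because UDP is connectionless, the (address, port) pair returned by recvfrom() is the only identity the worker has for the client.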
Deeper Discussion of threading issues
The job processing must be done in a separate worker thread, so that a long job will not block the server from accepting connections from other clients. However, you should consider carefully whether or not you want to use multiple worker threads. Since you are placing the job requests on a queue, a single worker thread can be used to process them one by one.
PRO single thread
Simpler, more reliable code. The processing code must be thread safe for context switches back to the main thread. However, there will not be any context switches between job processing code. This makes it easier to design and debug the processing code. For example, if the jobs are updating a database, then you do not require any extra code to ensure the database is always consistent - just that consistency is guaranteed at the end of each job process.
Faster response for short jobs. If there are many short jobs submitted at the same time, your CPU can spend more cycles switching between jobs than actually doing useful processing.
CON single thread
A big job will block other jobs until it completes.
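The single-worker-thread variant discussed above can be sketched like this (class and member names are illustrative): jobs are posted to a queue and run strictly one at a time, so the job code itself needs no internal locking.

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// Single worker thread draining a job queue: jobs never run
// concurrently with each other.
class SingleWorker {
public:
    SingleWorker() : thread_([this] { run(); }) {}

    ~SingleWorker() {
        post({});          // empty job = sentinel telling the loop to stop
        thread_.join();
    }

    void post(std::function<void()> job) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            jobs_.push(std::move(job));
        }
        cv_.notify_one();
    }

private:
    void run() {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                cv_.wait(lock, [this] { return !jobs_.empty(); });
                job = std::move(jobs_.front());
                jobs_.pop();
            }
            if (!job) return;  // sentinel received
            job();             // jobs execute strictly one at a time
        }
    }

    std::queue<std::function<void()>> jobs_;
    std::mutex mutex_;
    std::condition_variable cv_;
    std::thread thread_;
};
```

Replacing the single thread with a pool of N such workers addresses the CON above, at the cost of making the job code itself responsible for thread safety.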

socket data emitter c++

I'm having some sync trouble with threads and sockets. I need one thread to receive incoming connections on a socket (and remember the client data so I can respond) and another thread to set up frames and send the current frame to the listed clients. So I was wondering if it's possible to (kind of) put my data frames into the server socket, so that everyone could just read the current frame from the socket without the server knowing.
The server would just spam its socket with some data, and clients would get the data without any server action. Is this possible? How?
I'm currently doing it in a pretty messed-up way which I don't like:
the server listens on one thread for incoming transmissions and, upon receiving one, adds the client data to a list;
on another thread the server sends data to all the clients on the list.
EDIT:
I want to send data to some kind of buffer from which clients are allowed to read. (A client doesn't have to read every message the server sends, just whatever the buffer contains at the moment of the client's request.) If possible, I don't want the server to even notice that clients are reading from the buffer.
Right now the threads are synchronized using unique_lock.
What you're describing is probably multicast. Specifically, IP multicast (I think).
Searching finds a number of useful resources. This one looks concise and includes code examples (although I'm not sure how current it is).
If you're only transmitting on a LAN, then broadcast will work too.

Websocket and reception of a message

I am trying to build a "messages" server over WebSocket using Boost.
Currently the server often sends large messages, or series of messages.
When I hit "send", it sends tons of data.
The difficulty is that when the server receives a command in a WebSocket message, such as "Stop" or "Pause", the command only takes effect once the previous message has finished. I am trying to interrupt the execution of the previous command.
I tried reading the buffer between sends of data, but it does not work. I also tried to check whether a command has been received using async_read_some.
I based my code on this example
http://www.codeproject.com/Articles/443660/Building-a-basic-HTML5-client-server-application
and HTTP server boost
http://www.boost.org/doc/libs/1_53_0/doc/html/boost_asio/examples.html
Do you have any idea? I have reworked my code several times, but I cannot get the new command to execute in real time; it only takes effect at the end.
Thank you
If the data has already been sent to the network adapter, there is very little you can do to alter the order of packets. The network adapter will send the packets as and when it gets round to it, in the order you've queued them.
If you want to be able to send "higher priority" messages, then don't send off all the data in one go, but hold it in a queue waiting for the device to accept more data, and if a high priority message comes in, send that before you send any of the other packets off.
Don't make the packets TOO small, but I guess if you make packets of a few kilobytes or so at a time, it will work quite swiftly and still allow good control over the flow.
Of course, this also requires that the receiver understands that there may be different "flows" or "streams" of information, and that a "pause" command means the previously sent stream will not receive anything until "resume" is sent. Obviously adjust this as needed for the behaviour you want, but you do need some way to interpret "STOP" as a command, not just as more data in the flow.
If you send a large message over the network as a single packet, then by the time the server has received all the data it may be too late for the stop message to take effect; you have no control until the data has been completely received.
It's better to implement a priority message queue. Send the message as small chunks from the client and reassemble it at the server, instead of sending a single large packet. Give message packets like stop (cancel) high priority. While receiving messages at the server end, if a high-priority message such as stop (cancel) arrives, you don't need to accept the remaining chunks; you can close the WebSocket connection at the server.
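A minimal sketch of such a priority message queue (the class, the two priority levels, and the tie-breaking scheme are illustrative assumptions): control messages outrank data chunks, and a sequence counter keeps FIFO order within a priority level.

```cpp
#include <mutex>
#include <queue>
#include <string>
#include <tuple>
#include <utility>

// Priority message queue: control messages (e.g. "stop") outrank data
// chunks, so they are seen even while a large transfer is still queued.
class PriorityMessageQueue {
public:
    enum Priority { DATA = 0, CONTROL = 1 };

    void push(Priority p, std::string msg) {
        std::lock_guard<std::mutex> lock(mutex_);
        // seq_ decreases, so earlier arrivals compare larger and pop
        // first within the same priority (FIFO tie-break).
        queue_.push(std::make_tuple(static_cast<int>(p), seq_--, std::move(msg)));
    }

    // Returns false when the queue is empty.
    bool pop(std::string& out) {
        std::lock_guard<std::mutex> lock(mutex_);
        if (queue_.empty()) return false;
        out = std::get<2>(queue_.top());
        queue_.pop();
        return true;
    }

private:
    // (priority, -arrival order, payload); larger tuples pop first.
    using Entry = std::tuple<int, long long, std::string>;
    std::priority_queue<Entry> queue_;
    std::mutex mutex_;
    long long seq_ = 0;
};
```

With this in place, the server's receive loop can check for a pending CONTROL message before continuing to accept data chunks.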
Read the thread Chunking WebSocket Transmission for more info.
As you are using Boost, have you looked at WebSocket++ (Boost/ASIO based)?

How to synchronize recv() when multithreading - C++ CRT

I have a server interacting with multiple clients, where the clients send messages to the server and the server reads them via the recv() method. The problem I am getting is that I am using WaitForSingleObject(handle, 10000 milliseconds) to make the server wait a few seconds while interacting with one client and then let the others access it, but then I start seeing the server answer clients with the wrong messages and getting blocked. So it looks like a synchronization issue.
So my question is (since I'm a beginner in C++): how can I ensure that every incoming message is received and replied to the right client, while allowing all the clients to interact with the server?
There are two alternatives.
The first is a pretty standard model: one thread per client. When a client connects, you start a thread to handle it.
The second approach doesn't require many threads. You should use WSARecv() on an overlapped socket instead of recv(). This way, you can have multiple receive operations open simultaneously, one per client, and wait on them all with WaitForMultipleObjects(). To be specific, you will wait on the event inside each WSAOVERLAPPED. Remember that WaitForMultipleObjects() has a limit on the number of wait objects; when it is exceeded, you will need to run another thread. The return code from WaitForMultipleObjects() tells you which client has sent data, so you can reply to it.
Or, as suggested above, you could probably use select() to figure out which socket has data.
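The select() approach can be sketched as below. This is a POSIX sketch that uses pipes to stand in for the per-client sockets (with Winsock, select() works the same way but only on sockets); the function name is illustrative:

```cpp
#include <sys/select.h>
#include <unistd.h>
#include <algorithm>
#include <vector>

// Block until at least one of several descriptors (one per client) has
// data, and return which ones are readable -- the select() approach.
std::vector<int> ready_clients(const std::vector<int>& fds) {
    fd_set readfds;
    FD_ZERO(&readfds);
    int maxfd = -1;
    for (int fd : fds) {
        FD_SET(fd, &readfds);
        maxfd = std::max(maxfd, fd);
    }
    std::vector<int> ready;
    if (select(maxfd + 1, &readfds, nullptr, nullptr, nullptr) > 0) {
        for (int fd : fds)
            if (FD_ISSET(fd, &readfds)) ready.push_back(fd);
    }
    return ready;
}
```

Because select() tells you exactly which descriptor is readable, each recv() can be matched unambiguously to its client, which avoids the wrong-message mixups described in the question.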

How to get a Win32 Thread to wait on a work queue and a socket?

I need a client networking thread that can respond both to new messages to be transmitted and to the receipt of new data from the network. I want to avoid having this thread run a polling loop; it should process only as needed.
The scenario is as follows:
A client application needs to communicate to a server via a protocol that is largely, but not entirely, synchronous. Typically, the client sends a message to the server and blocks until a response is received.
The server may process client requests asynchronously, in which case the response to the client
is not a result, but a notification that processing has begun. A result message is sent to the client at some point in the future, when the server has finished processing the client request.
The asynchronous result notifications can arrive at the client at any time. These notifications need to be processed when they are received, i.e. it is not possible to process a backlog only when the client transmits again.
The client's networking thread receives and processes notifications from the server, and transmits outgoing messages from the client.
To achieve this, I need to make a thread wake to perform processing either when network data is received OR when a message to transmit is enqueued into an input queue.
How can a thread wake to perform processing of an enqueued work item OR data from a socket?
I am interested primarily in using the plain Win32 APIs.
A minimal example or relevant tutorial would be very welcome!
An alternative to I/O Completion Ports for sockets is using WSAEventSelect to associate an event with the socket. Then as others have said, you just need to use another event (or some sort of waitable handle) to signal when an item has been added to your input queue, and use WaitForMultipleObjects to wait for either kind of event.
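On Win32 the pattern above is two event handles (one signalled when the queue gains an item, one associated with the socket via WSAEventSelect) passed to WaitForMultipleObjects. As a portable illustration of the same wake-on-either idea, here is a condition-variable sketch; the struct and its members are illustrative assumptions, and "socket readable" is simply a flag set by whatever detects incoming data:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>

// Portable sketch of "wake when there is outgoing work OR incoming data".
struct NetThreadState {
    std::mutex mutex;
    std::condition_variable cv;
    std::queue<std::string> outgoing;  // messages to transmit
    bool socket_readable = false;      // set by whoever detects RX data

    void post_message(std::string msg) {
        { std::lock_guard<std::mutex> l(mutex); outgoing.push(std::move(msg)); }
        cv.notify_one();
    }

    void signal_socket() {
        { std::lock_guard<std::mutex> l(mutex); socket_readable = true; }
        cv.notify_one();
    }

    // Blocks until there is either a queued message or socket data.
    // Returns true if a message was dequeued into 'msg', false if the
    // wake-up was for socket data.
    bool wait_for_work(std::string& msg) {
        std::unique_lock<std::mutex> l(mutex);
        cv.wait(l, [this] { return !outgoing.empty() || socket_readable; });
        if (!outgoing.empty()) {
            msg = std::move(outgoing.front());
            outgoing.pop();
            return true;
        }
        socket_readable = false;
        return false;
    }
};
```

In the Win32 version, the return value of WaitForMultipleObjects plays the role of wait_for_work's boolean: it tells the networking thread which of the two sources woke it.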
You can set up an I/O Completion Port for the handles and have your thread wait on the completion port:
http://technet.microsoft.com/en-us/sysinternals/bb963891.aspx
Actually, you can have multiple threads wait on the port (one thread per processor usually works well).
Following on from Michael's suggestion, I have some free code that provides a framework for IO Completion Port style socket stuff; and it includes an IOCP based work queue too. You should be able to grab some stuff from it to solve your problem from here.
Well, if both objects have standard Windows handles, you can have your client call WaitForMultipleObjects to wait on them.
You might want to investigate splitting the servicing of the network port off onto its own thread. That might simplify things greatly. However, it won't help if you just end up having to synchronize something else between that new thread and your main one.