Multiple writes on same socket - C++

I'm currently developing a server and some clients which communicate with each other through something like a proxy in the middle. The "proxy" keeps a socket open to every client and server on the system, which means I'm currently using threads to keep all the connections open. Every time a client decides to send a message, it uses its socket to the proxy and sends the message. The proxy then propagates the message to every other node using the respective socket.
As you can see, a node can receive messages when the proxy writes to its socket, and the same node may want to send messages by writing to that socket.
How do I guarantee that the content in the socket does not get overwritten? Do I have to use mutexes to lock access to the socket? What is a good practice for solving this problem?

Connections are bidirectional. Content going one way does not overwrite content going the other way, so no mutex is needed for this.
Besides, you couldn't use a mutex anyway, as the two ends of the connection are separate.
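For illustration, here is a minimal sketch of that full-duplex point: one thread reads from a connected socket while another writes to the same socket, with no mutex around the socket itself. It assumes a plain POSIX TCP socket that is already connected; names and error handling are simplified.

```cpp
// Sketch only: one node of the proxy setup described above.
// Assumes `fd` is a connected TCP socket (POSIX).
#include <string>
#include <sys/socket.h>
#include <thread>
#include <unistd.h>

void run_node(int fd) {
    // Reader thread: receives whatever the proxy forwards to this node.
    std::thread reader([fd] {
        char buf[4096];
        ssize_t n;
        while ((n = ::recv(fd, buf, sizeof(buf), 0)) > 0) {
            // ...handle the incoming message (n bytes in buf)...
        }
    });

    // Main thread: writes this node's own messages on the same socket.
    std::string msg = "hello from node";
    ::send(fd, msg.data(), msg.size(), 0);

    reader.join();
    ::close(fd);
}
```

Reads and writes use independent kernel buffers, so the two directions never clobber each other; locking only becomes relevant if several threads write in the same direction on the same socket and their messages must not interleave.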

Related

Boost::Beast Websocket Bidirectional Stream (C++)

I'm looking into using the Boost::Beast websocket library to create an asynchronous bidirectional pipe to pass data between a server and a client. I leveraged some code from the async example (I can post some at a later time if necessary; I don't have access to it now). I currently have a class which creates several threads running a SocketListener. When a client connects, it creates a Session shared_ptr to do the async read and write functions. The problem is that this session object only writes out when the client has sent me a message. I'm looking for an implementation that allows my server to write on demand to all the clients connected to it and also listen for incoming data from those connections.
Is this possible? Am I using the wrong technique for this? The other way I thought this might be achievable is to have an incoming websocket and an outgoing websocket. Incoming would allow a client to drop configurations for the server, and outgoing would just monitor a message queue and do an async write if a message is available.
Thanks!
Is this possible?
Yes
Am I using the wrong technique for this?
No
The other way I thought this might be achievable is to have an incoming websocket and an outgoing websocket.
That is not necessary, a websocket stream is full-duplex. You can read and write at the same time.
outgoing would just monitor a message queue and do an async write if a message is available.
This is the correct approach, but you can do that in the same Session object that also handles the reads.
Here's an example that reads continuously and can also write full-duplex: https://github.com/vinniefalco/CppCon2018
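As a rough illustration of that queued-write approach, here is a minimal sketch of a Session that reads continuously and drains a write queue on the same websocket stream. It assumes Boost.Beast (1.70 or later) and that the io_context is run on a single thread (or a strand); the class and member names are illustrative and error handling is reduced for brevity.

```cpp
#include <boost/asio.hpp>
#include <boost/beast/core.hpp>
#include <boost/beast/websocket.hpp>
#include <deque>
#include <memory>
#include <string>

namespace beast = boost::beast;
namespace net = boost::asio;
using tcp = net::ip::tcp;

class Session : public std::enable_shared_from_this<Session> {
    beast::websocket::stream<beast::tcp_stream> ws_;
    beast::flat_buffer read_buf_;
    std::deque<std::string> write_queue_;   // pending outbound messages

public:
    explicit Session(tcp::socket socket) : ws_(std::move(socket)) {}

    void start() {
        ws_.async_accept(
            [self = shared_from_this()](beast::error_code ec) {
                if (!ec) self->do_read();
            });
    }

    // Called by the server whenever it wants to push data to this client.
    void send(std::string msg) {
        // Hop onto the stream's executor so the queue is only touched there.
        net::post(ws_.get_executor(),
            [self = shared_from_this(), msg = std::move(msg)]() mutable {
                self->write_queue_.push_back(std::move(msg));
                if (self->write_queue_.size() == 1)   // no write in flight
                    self->do_write();
            });
    }

private:
    void do_read() {
        ws_.async_read(read_buf_,
            [self = shared_from_this()](beast::error_code ec, std::size_t) {
                if (ec) return;                       // closed or failed
                // ...handle the incoming message in read_buf_ here...
                self->read_buf_.consume(self->read_buf_.size());
                self->do_read();                      // keep listening
            });
    }

    void do_write() {
        ws_.async_write(net::buffer(write_queue_.front()),
            [self = shared_from_this()](beast::error_code ec, std::size_t) {
                if (ec) return;
                self->write_queue_.pop_front();
                if (!self->write_queue_.empty())
                    self->do_write();                 // drain the queue
            });
    }
};
```

The server's broadcast path just calls session->send(msg) for every connected Session; reads and writes run concurrently on the same stream, which is the full-duplex behaviour mentioned above.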

Set TCP client socket to non-blocking: Server vs client

I have a question regarding non-blocking sockets in TCP connections.
I have implemented two C++ classes, one for the TCP server and one for the client. The server has two socket file descriptors, one for the listening server socket and one for the connected client. The client has one socket file descriptor.
My server runs asynchronously and my client runs at a fixed rate. I would therefore like a non-blocking socket for sending data from the client to the server, so that the client can send data at a fixed rate without stalling and the server asynchronously reads all data that has been buffered in the meantime.
So my question is: does it make a difference whether I set the client socket to non-blocking in the client class or in the server class? (Using fcntl(this->newsockfd_, F_SETFL, fcntl(this->newsockfd_, F_GETFL, 0) | O_NONBLOCK), where this->newsockfd_ is the client's socket file descriptor in both classes.)
I tried this in my program and it seemed like setting the client socket to non-blocking in the client class didn't do the trick, but setting it in the server class did. However, I don't understand why this should make a difference.
If your socket is set to non-blocking mode, you will get just that: it will never block. But that does not mean your API calls will succeed.
There are buffers being used behind the scenes. If they are full, which in blocking mode would make the socket block, you instead get the return code EWOULDBLOCK, meaning your send has failed. You basically have to wait for the buffers to drain and then try again.
Your idea of sending at a fixed rate regardless of the rate at which the server receives is impossible. You cannot have a client sending at a fixed rate: the whole idea of TCP is that there is a constant negotiation between client and server, and the speed depends heavily on network conditions, congestion and the like.
Moving to non-blocking sockets creates some problems of its own. You have to detect that a send failed, check when the socket becomes writable again, store the bytes you tried to send, and reattempt the send as soon as the socket is writable.
There is a lot of difference, on both client and server, between working with blocking and non-blocking sockets. Non-blocking sockets are, in my opinion, harder to deal with: you very likely need the select API with a timeout to detect all the possible socket states. With blocking sockets you can just use the socket in a thread, and if the socket blocks, only that thread blocks with it; if your GUI is on a different thread, the GUI stays responsive.
Since your client is only sending data, the non-blocking setting will not affect it. According to the excellent beej.us guide on socket programming, only calls to accept() and recv() are affected by the non-blocking setting. Since only your server is calling these, you are seeing the change in your server code. If your client received data, the non-blocking setting would affect it too, and you would have to use select() to check whether there is data and read from it accordingly.
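As a sketch of the retry logic described in the first answer, here is one way a non-blocking send could be wrapped so that EWOULDBLOCK/EAGAIN is handled by waiting with select() until the socket is writable again. It assumes a connected POSIX TCP socket already set to O_NONBLOCK; the timeout is an arbitrary example and error handling is reduced.

```cpp
#include <cerrno>
#include <cstddef>
#include <sys/select.h>
#include <sys/socket.h>

// Sends `len` bytes, waiting whenever the kernel send buffer is full.
// Returns false on timeout or a real error.
bool send_all(int fd, const char* data, std::size_t len) {
    std::size_t sent = 0;
    while (sent < len) {
        ssize_t n = ::send(fd, data + sent, len - sent, 0);
        if (n > 0) {
            sent += static_cast<std::size_t>(n);
            continue;
        }
        if (n < 0 && (errno == EWOULDBLOCK || errno == EAGAIN)) {
            // Buffer full: wait until the socket becomes writable again.
            fd_set wfds;
            FD_ZERO(&wfds);
            FD_SET(fd, &wfds);
            timeval tv{5, 0};                        // 5-second timeout (example)
            if (::select(fd + 1, nullptr, &wfds, nullptr, &tv) <= 0)
                return false;                        // timeout or select error
            continue;
        }
        return false;                                // real send error
    }
    return true;
}
```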

Socket data emitter - C++

I'm having some sync trouble with threads and sockets. I need one thread to receive incoming connections on a socket (and remember the client data so I can respond) and another thread to set up frames and send the current frame to the listed clients. So I was wondering if it's possible to (kind of) put my data frames into the server socket, so that everyone could just read the current frame from the socket without the server knowing.
The server would just spam its socket with some data and the clients would get that data without any server action. Is this possible? How?
I'm currently doing it in a pretty messed-up way which I don't like:
the server is listening on one thread for incoming transmissions and, upon receiving one, adds the client's data to a list;
on another thread the server is sending data to all clients from that list.
EDIT:
I want to send data to some kind of buffer from which clients are allowed to read (a client doesn't have to read all the messages the server sends, just whatever the buffer contains at the moment of the client's request), and if possible I don't want the server to even notice that clients are reading from the buffer.
Right now the threads are synchronised using unique_lock.
What you're describing is probably multicast; specifically, IP multicast (I think).
Searching finds a number of useful resources. This one looks concise, and includes coded examples (although I'm not sure how current it is).
If you're only transmitting to a LAN then broadcast will work too.
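As a rough illustration of the multicast suggestion, here is a minimal sketch of an IP multicast sender on Linux. The group address 239.255.0.1 and port 30001 are arbitrary examples; the receiving side (which must join the group with IP_ADD_MEMBERSHIP) is not shown, and error handling is omitted.

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    int fd = ::socket(AF_INET, SOCK_DGRAM, 0);

    sockaddr_in group{};
    group.sin_family = AF_INET;
    group.sin_port = htons(30001);                     // example port
    ::inet_pton(AF_INET, "239.255.0.1", &group.sin_addr);

    // The server just "spams" frames to the group; clients that have joined
    // the group receive them without the server tracking any client list.
    std::string frame = "frame data";
    ::sendto(fd, frame.data(), frame.size(), 0,
             reinterpret_cast<sockaddr*>(&group), sizeof(group));

    ::close(fd);
    return 0;
}
```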

How to detect TCP client connection loss in C++

I have a TCP client/server, and I want to detect connection loss on the client side. My client has multiple interfaces and at any given time I am connected to the server through one of them. I want to know how to detect connection loss on the client side so that I can connect my TCP client to the server through another interface, and if all of them are down, store my data in text files. I googled it and have already seen keep-alive, but it's not what I want.
In case it is important: my project is on Linux and the code is in C++.
Try to read from the socket. When the socket closes, the read will fail, giving you simple detection. You can do this in a dedicated detection thread so that your main thread doesn't block.
TCP connections are designed to be error-correcting and not time-critical. That error correction includes network timeouts.
Reads and writes will not fail until the socket is actually closed, which may not be for a very long time.
The only way for a client to decide that a connection has timed out is for the client to detect that it hasn't received any messages for a specified time, and to close the socket manually.
That's what keep-alive messages are for.
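For reference, here is a minimal sketch of how TCP keep-alive could be enabled and tuned on Linux, as mentioned above. The interval values are arbitrary examples and error checking is omitted.

```cpp
#include <netinet/in.h>
#include <netinet/tcp.h>    // TCP_KEEPIDLE, TCP_KEEPINTVL, TCP_KEEPCNT (Linux)
#include <sys/socket.h>

void enable_keepalive(int fd) {
    int on = 1;
    ::setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));

    int idle  = 10;   // seconds of idleness before the first probe
    int intvl = 5;    // seconds between probes
    int count = 3;    // unanswered probes before the connection is declared dead
    ::setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,  sizeof(idle));
    ::setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl));
    ::setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT,   &count, sizeof(count));
}
```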
The best way that I found is to check the send buffer: if the buffer is empty, it means your TCP client sent the packet to the TCP server successfully and you can send the next packet. To check the buffer you can use SIOCOUTQ; it is very easy to use and shows how much data is left in your buffer.
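As a sketch of that suggestion: on Linux, the SIOCOUTQ ioctl reports how many bytes are still sitting in the kernel's send buffer for a socket. Error handling is reduced for brevity.

```cpp
#include <linux/sockios.h>   // SIOCOUTQ (Linux-specific)
#include <sys/ioctl.h>

// Returns the number of bytes still queued in the kernel send buffer,
// or -1 on error. Zero means everything written so far has left the host.
int unsent_bytes(int fd) {
    int pending = 0;
    if (::ioctl(fd, SIOCOUTQ, &pending) < 0)
        return -1;
    return pending;
}
```

Note that an empty send buffer only means the data has been transmitted and acknowledged by the peer's TCP stack, not that the application on the other side has actually read it.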

Multithreaded Server Issue

I am writing a server in Linux that is supposed to serve an API.
Initially, I wanted to make it multi-threaded on a single port, meaning that I'd have multiple threads working on the various requests received on that single port.
One of my friends told me that this is not the way it is supposed to work. He told me that when a request is received, I first have to follow a handshake procedure, create a thread that listens on some other port dedicated to that request, and then redirect the requesting client to the new port.
Theoretically it's very interesting, but I could not find any information on how to implement the handshake and do the redirection. Can someone help?
If I'm not wrong in interpreting your responses: once I create a multithreaded server with a main thread listening on a port that creates a new thread to handle each request, I'm essentially making it multithreaded on a single port?
Consider the scenario where I get a large number of requests every second. Isn't it true that every request on the port now has to wait for the "current" request to complete? If not, how would the communication still be done: say a browser sends a request, so the thread handling it has to first listen on the port, block it, process it, respond and then unblock it.
By this reasoning, even though I have "multithreads", all I'm using is one single thread at a time apart from the main thread, because the port is being blocked.
What your friend told you is similar to passive FTP: the client tells the server that it needs a connection, the server sends back a port number, and the client creates a data connection to that port.
But all you wanted is a multithreaded server, and all you need is one server socket listening for and accepting connections on a given port. As soon as the automatic TCP handshake is finished, you'll get a new socket from the accept function; that socket will be used for communication with the client that has just connected. Now you only have to create a new thread, passing that client socket to the thread function. In your server thread, you then call accept again in order to accept another connection.
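For illustration, here is a minimal sketch of that single-port pattern: one listening socket and a new thread per accepted client. It uses plain POSIX sockets; the port number is an arbitrary example, the handler just echoes, and error handling is reduced for brevity.

```cpp
#include <netinet/in.h>
#include <sys/socket.h>
#include <thread>
#include <unistd.h>

void handle_client(int client_fd) {
    char buf[4096];
    ssize_t n;
    while ((n = ::recv(client_fd, buf, sizeof(buf), 0)) > 0) {
        // ...parse the request and build a response here...
        ::send(client_fd, buf, static_cast<size_t>(n), 0);   // echo placeholder
    }
    ::close(client_fd);
}

int main() {
    int listen_fd = ::socket(AF_INET, SOCK_STREAM, 0);

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);                  // example port
    ::bind(listen_fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    ::listen(listen_fd, SOMAXCONN);

    for (;;) {
        // accept() hands back a new socket per client; the listening socket
        // stays free to accept the next connection immediately.
        int client_fd = ::accept(listen_fd, nullptr, nullptr);
        if (client_fd < 0) continue;
        std::thread(handle_client, client_fd).detach();
    }
}
```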
TCP/IP does the handshake for you; if you can't think of any reason to do an application-level handshake, then your application does not demand one.
An example of an application-specific handshake would be user authentication.
What your colleague is suggesting sounds like the way FTP works. This is not a good thing to do: the internet these days is more or less built around protocols that use a single port, and having a separate command port is bad. One of the reasons is that stateful firewalls aren't designed for multi-port applications; they have to be extended for each individual application that does things this way.
Look at ASIO's tutorial on async TCP. There, one part accepts connections on TCP and spawns handlers that each communicate with a single client. That's how TCP servers usually work (including HTTP/web, the most common TCP protocol).
You may disregard the asynchronous parts of ASIO if you're set on creating a thread per connection; they don't apply to your question. (Going fully async with one worker thread per core is nice, but it might not integrate well with the rest of your environment.)