TCP Server - Multi User File Upload - python-2.7

I'm struggling to figure out how to implement a TCP server in Python. I currently have a TCP server that can connect to one client at a time. The client can successfully communicate with the server and upload a file over a single socket. Currently it is single-threaded, so when one client connects it blocks the other clients from connecting until it is done.
I'm struggling with the design portion of making this multi-client friendly. If I'm uploading files concurrently, should there be multiple sockets? If I go with one socket, how do I differentiate data from different clients?
Can anyone give some advice on this?
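The usual answer is that you don't multiplex clients over one socket: the listening socket only accepts connections, and each `accept()` call returns a *new* socket dedicated to that client, so data from different clients can never mix. To stop one client from blocking the others, handle each connection in its own thread. A minimal sketch of that structure (Python 3 syntax; the same shape works in 2.7), where the handler just counts uploaded bytes and reports the total back — the function names are illustrative:

```python
import socket
import threading

def handle_client(conn, addr):
    # conn is the per-client socket returned by accept(); nothing here
    # is shared with other clients, so their data cannot interleave.
    total = 0
    while True:
        chunk = conn.recv(4096)
        if not chunk:                        # client closed its sending side
            break
        total += len(chunk)                  # a real server would write chunk to a file
    conn.sendall(str(total).encode())        # acknowledge with the byte count
    conn.close()

def make_server(host='127.0.0.1', port=0):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))                   # port=0 lets the OS pick a free port
    srv.listen(5)
    return srv

def accept_loop(srv):
    while True:
        conn, addr = srv.accept()            # blocks until a client connects
        t = threading.Thread(target=handle_client, args=(conn, addr))
        t.daemon = True
        t.start()                            # serve this client concurrently
```

With this structure the "differentiate data" question disappears: each upload lives on its own connection.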

Related

Best way to run TCP server alongside Django to gather IoT data

I have a Django app running on Elastic Beanstalk in AWS. Within my Django application I'd like to gather IoT data coming in via TCP/IP. Currently, I open the socket and switch it to listening from a view function. This leads to the problem that the socket closes or stops. Furthermore, the socket needs to listen on the port steadily even though data does not come in continuously.
What is a more elegant way to solve this problem? Is there any Django extension that moves the socket listening from a view into a background task? E.g. listen on the port every 60 seconds and create an object when data comes in?
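A common pattern is to move the listener out of the request/response cycle entirely: run it in a long-lived background thread (started from, say, a custom management command or a separate worker process, not a view) and hand received payloads to the rest of the application through a queue. A sketch of that idea with no Django dependency — `incoming` and `start_listener` are illustrative names, and the model-object creation is only indicated in a comment:

```python
import queue
import socket
import threading

# Hand-off point between the socket thread and application code
# (e.g. a loop that turns payloads into Django model objects).
incoming = queue.Queue()

def start_listener(host='0.0.0.0', port=9000):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(5)

    def loop():
        while True:                      # listen steadily, not once per request
            conn, _ = srv.accept()
            data = b''
            while True:
                chunk = conn.recv(4096)
                if not chunk:
                    break
                data += chunk
            conn.close()
            incoming.put(data)           # a consumer would create an object here

    t = threading.Thread(target=loop)
    t.daemon = True
    t.start()
    return srv.getsockname()[1]          # actual port (useful when port=0)
```

Because the thread owns the socket for the lifetime of the process, the "socket closes or stops" problem of opening it inside a view goes away.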

Are multiple boost::asio tcp channels faster than a single one?

In Linux, axel is generally faster than wget. The reason is that axel opens multiple channels (connections) to the source and downloads pieces of the file simultaneously.
So, the short version of my question is: Would doing the same with boost::asio make the connection transfer data faster?
Looking at these simple client and server examples, I could have a single client create multiple sessions and connect to the same server over several connections. In the communication protocol, I could make the client and server ready for such connections so that the data is split among all the connection channels.
Could someone please explain why this should or shouldn't work in the scenario I described?
Please ask for more details if you need it.
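Whether extra connections help depends on where the bottleneck is: they mainly pay off when a single flow is throttled (per-connection server limits, or a high bandwidth-delay product with small TCP windows), not on a link that one connection can already saturate. The splitting scheme itself works the same on any socket API. A sketch of it (in Python rather than boost::asio, purely for brevity) using a toy protocol where each connection requests one byte range and the client reassembles the slices; all names are illustrative:

```python
import socket
import threading

def run_slice_server(payload):
    # Toy protocol: the client sends b"offset length\n" and the server
    # replies with that slice of the payload, one request per connection.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(('127.0.0.1', 0))
    srv.listen(8)

    def loop():
        while True:
            conn, _ = srv.accept()
            req = conn.makefile('rb').readline()
            off, length = (int(x) for x in req.split())
            conn.sendall(payload[off:off + length])
            conn.close()

    t = threading.Thread(target=loop)
    t.daemon = True
    t.start()
    return srv.getsockname()[1]

def fetch_parallel(port, total, nchunks):
    # Open nchunks connections, each downloading one contiguous slice.
    parts = [b''] * nchunks
    size = -(-total // nchunks)              # ceiling division

    def worker(i):
        off = i * size
        n = min(size, total - off)
        c = socket.create_connection(('127.0.0.1', port))
        c.sendall(('%d %d\n' % (off, n)).encode())
        buf = b''
        while len(buf) < n:
            chunk = c.recv(4096)
            if not chunk:
                break
            buf += chunk
        c.close()
        parts[i] = buf

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(nchunks)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return b''.join(parts)                   # reassemble in order
```

The same request/reassemble structure maps directly onto multiple `boost::asio` sessions; only the transport code changes.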

Maximum number of TCP connections

I am doing a TCP client-server simulation. In the simulation, I have created 2 clients and 2 servers, and I have programmed read requests to go to server 1 and write requests to go to server 2. Thus, the client always renews its socket and makes a new connection to the servers.
However, after the client has made 66561 connections to the server, instead of sending request packets it just sends some empty ACK packets.
I expected both clients to be able to send up to millions of requests, but currently both clients are only able to send up to 13k requests. Can anyone give me tips or advice?
Nagle's algorithm
Solutions:
Do not use small packets in your application protocol
Use the socket option TCP_NODELAY on both the client and server side
Sounds like most previously created connections are still holding resources (not yet released by the system). From the information you give,
However, after the client has made 66561 times of connections to the server, instead of sending request packets, it will just simply send some empty ACK packets.
Looks like about 1000+ connections have been released, probably because their 2MSL (TIME_WAIT) timer expired. If this is the case, I suggest you explicitly close a connection before you create a new one.
Copying and pasting your client/server code would help the analysis.
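Both suggestions come down to one `setsockopt` and one explicit `close` in most socket APIs. A sketch in Python (the same options exist in C via `setsockopt(2)`); `make_request_socket` and `send_request` are illustrative names:

```python
import socket

def make_request_socket(host, port):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Disable Nagle's algorithm so small request packets go out immediately
    # instead of being coalesced while waiting for ACKs.
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    s.connect((host, port))
    return s

def send_request(host, port, payload):
    s = make_request_socket(host, port)
    try:
        s.sendall(payload)
    finally:
        s.close()   # release the connection explicitly rather than leaking it
```

Note that the side that calls `close()` first still leaves the connection in TIME_WAIT for 2MSL, so churning through tens of thousands of short-lived connections will eventually exhaust ephemeral ports regardless; reusing one long-lived connection per client avoids the problem entirely.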

Sharing sockets (WINSOCK) by sending them to each other between 2 servers

I am trying to write a distributed server system (consisting of server 1 = "main" and server 2 = "replacement" for now). Don't mind some dirty methods; it's just to achieve the basic function a distributed server would achieve.
Currently I have both servers running via SO_REUSEADDR and a TCP Socket (maybe UDP will solve my problem, but I wanna try it either way with TCP first) on the same machine.
The main server establishes a connection with the replacement server, and clients connect to it.
Now what I want to achieve: The main server should send the socket of the connecting clients to the replacement server, so in case the main server can't work anymore (timeout or what ever) the replacement server can work with the clients and send/recv with them.
The socket I send from main to the replacement server is the one I get with ClientSocket = ::accept(ListenSocket, NULL, NULL); where ClientSocket is a SOCKET (UINT_PTR).
The replacement server can't send/recv to the clients once the main server has been terminated midway.
Is that because each server, even though they run on the same port, needs its own ::connect from the clients?
EDIT: If my theory is true, this should be solved by using UDP instead of TCP as well, no?
EDIT2: With distributed server I mean a server which in case of a failure will be replaced by another without the clients task getting interrupted.
EDIT3: If there is a better and more simple way to do this, I'd like to know about that as well. I'm not too fond of sockets and server communication as of now that's how I came up with my idea to solve it.
You cannot send a socket between machines. A socket is an OS concept. It represents (in this case) a connection between the server and a client. This connection cannot be resumed on a different machine that has a different IP address because a TCP connection is defined to be established between a pair of addresses and a pair of ports.
The UINT_PTR on one machine means nothing to another machine. It is an opaque value. It is an OS handle.
UDP does not solve the problem, since the client needs to notice that the IP address it is communicating with has changed.
Even if you manage that you have the problem that a server failure kills all data on that server. The other server cannot resume with the exact same data. This is a basic problem of distributed computing. You cannot keep the state of all three machines completely in sync.
Make the clients tolerate interruptions by retrying. Make the servers stateless and put all data into a database.
This is a very hard problem to solve. Seek off-the-shelf solutions.

Is a port an exclusive resource in an xmlrpc multithreading server?

I'm writing an xmlrpc server with the XMLRPC++ library. I want to make it multi-threaded on a single port, meaning that I'd have multiple threads working on various requests received on a single port.
However, I'm not sure it'll work. Is the port an exclusive resource? When I receive a request and start a thread, is it possible to receive another request on the same port while the first thread is still working?