Client/server applications communicating over the command line with threads (C++)

I have two applications: one is a client, the other is a server. The server launches the client as a sub thread. The client then outputs its commands via its standard out. The server waits for a command and responds accordingly.
Basically client server via the standard out.
For example:
client >> Move north
Server >> Your new location is {2,3}
client >> Move north
Server >> Your new location is {2,2}
client >> Shoot east
Server >> Projectile 66638 heading east {3,2}
The problem is that I don't know how to connect the two applications together so the server can read and respond to the client application.
The reason that I would like to use the command line as the communication layer is that I want to keep the creation of the client as easy as possible.
Also, there may be more than one client at a time. The clients should be able to communicate with the server independently of each other (they should not be able to see each other's communications).
Currently I am launching the application via the CreateProcess() function. This function makes it easy to set up the initial command line parameters of the application, just not the communication afterwards.
My Question is:
How does a server application that launches a client application as a thread read/write the client's standard output?
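For what it's worth, reading a child process's standard output on Windows is typically done by redirecting it into an anonymous pipe at CreateProcess() time. A minimal sketch of that approach (the client binary name client.exe is an assumption; a second pipe attached to the child's stdin would carry the server's responses back):

    #include <windows.h>
    #include <cstdio>

    int main() {
        // Inheritable-handle security attributes for the pipe ends.
        SECURITY_ATTRIBUTES sa = { sizeof(sa), NULL, TRUE };
        HANDLE readEnd = NULL, writeEnd = NULL;
        CreatePipe(&readEnd, &writeEnd, &sa, 0);
        // The server keeps the read end; don't let the child inherit it.
        SetHandleInformation(readEnd, HANDLE_FLAG_INHERIT, 0);

        STARTUPINFOA si = { sizeof(si) };
        si.dwFlags = STARTF_USESTDHANDLES;
        si.hStdOutput = writeEnd;           // child's stdout goes into the pipe
        si.hStdError = writeEnd;
        si.hStdInput = GetStdHandle(STD_INPUT_HANDLE);

        PROCESS_INFORMATION pi;
        char cmd[] = "client.exe";          // hypothetical client binary
        if (!CreateProcessA(NULL, cmd, NULL, NULL,
                            TRUE /* inherit handles */, 0, NULL, NULL, &si, &pi))
            return 1;
        CloseHandle(writeEnd);              // so ReadFile sees EOF at child exit

        char buf[256];
        DWORD n = 0;
        while (ReadFile(readEnd, buf, sizeof(buf) - 1, &n, NULL) && n > 0) {
            buf[n] = '\0';
            printf("client >> %s", buf);    // e.g. "Move north"
        }
        WaitForSingleObject(pi.hProcess, INFINITE);
        return 0;
    }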

As the commenters above point out, Named Pipes (or sockets) are the way to go for this kind of solution, and it's two separate processes you probably want, not threads.
In Windows, the TransactNamedPipe() call helps you accomplish what you want: it writes a request message to the pipe and reads the response message in a single operation, which makes it easy to create a client that performs something very similar to (synchronous) remote procedure calls against a server.
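For illustration, a minimal sketch of the client side of such a transaction, assuming the server has already created the pipe with CreateNamedPipe() in message mode under the (made-up) name \\.\pipe\game:

    #include <windows.h>
    #include <cstdio>

    int main() {
        // Open the server's pipe; the name is a made-up example.
        HANDLE hPipe = CreateFileA("\\\\.\\pipe\\game",
                                   GENERIC_READ | GENERIC_WRITE,
                                   0, NULL, OPEN_EXISTING, 0, NULL);
        if (hPipe == INVALID_HANDLE_VALUE) return 1;

        // TransactNamedPipe() requires the pipe to be read in message mode.
        DWORD mode = PIPE_READMODE_MESSAGE;
        SetNamedPipeHandleState(hPipe, &mode, NULL, NULL);

        const char request[] = "Move north";
        char reply[256];
        DWORD bytesRead = 0;

        // Write the command and wait for the server's response in one call.
        if (TransactNamedPipe(hPipe, (LPVOID)request, sizeof(request),
                              reply, sizeof(reply), &bytesRead, NULL)) {
            printf("Server >> %.*s\n", (int)bytesRead, reply);
        }
        CloseHandle(hPipe);
        return 0;
    }

Since the server creates a separate pipe instance per connected client, each client only ever sees its own conversation, which also covers the isolation requirement in the question.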

Related

Writing to a specific already running process instead of creating a new instance of a process

I have written some code that calls an executable and passes arguments to it via the cmd line.
For example it will call
my_process.exe -A my_argument
What I want to happen is for my program my_process to always be running, looking for user input, and instead of creating a new instance of the process I want to write my data/arguments to the existing process.
I understand that how I am passing the parameters will change from the initial process start (argc, argv) to when using stdin.
I also know that I will have to change how I am calling the process but I am unsure of what I need to look into to get this done.
EDIT: so what I am trying to accomplish by doing all of this is below:
Website >> Web Service API >> Hardware API >> PLC
The website is on server A; the web service and Hardware API are on server B.
OS is Windows 10 Pro 64bit
PLC is a programmable logic controller
The website will send a post to my webservice. The webservice will call the Hardware API which in turn will write or read data to the PLC.
This works when doing a single POST, but when doing multiple POSTs, if the connection from the Hardware API to the PLC is still open, it will fault.
The connection between the Hardware API and the PLC is like a COM port, not like a socket (the programming manual is misleading on this point).
So what I was trying to do was keep my Web API the same but create another process that takes all the results from the Web API, puts them in a FIFO, and then pops them off to the Hardware API (which I will always have running and which will have a persistent connection to the PLC).
So really the Hardware API would always be running and be a single process that gets data passed to it. The Queue service would always be running and the Web API would pass the results over to it.
I have looked into the below:
https://www.boost.org/doc/libs/1_37_0/doc/html/interprocess.html
child/parent process/fork/file descriptors/dup/dup2
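Of the options above, a Boost.Interprocess message queue maps most directly onto the FIFO design described. A minimal sketch, with producer and consumer shown in one program for brevity (they would normally live in the two separate processes; the queue name plc_fifo, the sizes, and the message format are assumptions):

    #include <boost/interprocess/ipc/message_queue.hpp>
    #include <iostream>

    namespace bip = boost::interprocess;

    int main() {
        // Producer (Web API side): open or create the queue, push one command.
        bip::message_queue mq(bip::open_or_create, "plc_fifo",
                              100,    // max queued messages (assumption)
                              256);   // max message size in bytes (assumption)
        const char cmd[] = "-A my_argument";
        mq.send(cmd, sizeof(cmd), 0 /* priority */);

        // Consumer (Hardware API side): block until a command arrives.
        char buf[256];
        bip::message_queue::size_type received = 0;
        unsigned int priority = 0;
        mq.receive(buf, sizeof(buf), received, priority);
        std::cout << "popped: " << buf << std::endl;
        return 0;
    }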
Any thoughts or advice is greatly appreciated.

How to synchronize timers of two programs

I am making an application where I have a client and a server. The client will send some coordinates to the server, which will use those to move a robot. What I want is to synchronize the timers used for timestamping log data, so that I can compare the input vs the output. The communication is done through TCP/IP, and the client is written in C++ while the server is in RAPID (ABB's robot programming language). My problem is that the timers are not synched properly.
Right now the timers start when the connection between the two is established:
Server side:
ListenForConnection;
startTimer;
Client side:
connectToServer;
startTimer;
This does not work. Is there a technique to ensure that the timers are synchronized?
NB: The server can only be connected through LAN.
You need a protocol between client and server to pass the timestamp.
Right now, presumably you have a protocol for sending coordinates. You need to extend that somehow to allow one side to send timer information to the other side.
The easiest case is if you have two-way communication capability. In that case, the client does the following:
Connect to server
Keep asking until the server is there
Server says "yes I'm here, the time is 1:00"
The client starts sending coords
If the server has no way to send to the client, then the client needs to send a timestamp from time to time, which the server recognises as being a time, not a coordinate. The two will not be synched until this happens the first time.
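A minimal sketch of the client-side arithmetic for the two-way case, assuming the server replies with its current timer value in milliseconds, and that sampling the local clock before and after the exchange lets you split the round trip:

    #include <chrono>
    #include <cstdint>

    using Clock = std::chrono::steady_clock;

    // serverMs: the timer value the server reported in its reply.
    // t0: local clock sampled just before sending the request.
    // t1: local clock sampled just after receiving the reply.
    int64_t clockOffsetMs(int64_t serverMs, Clock::time_point t0,
                          Clock::time_point t1) {
        using namespace std::chrono;
        // Assume the server stamped its reply halfway through the round trip.
        int64_t localMidMs = duration_cast<milliseconds>(
            (t0 + (t1 - t0) / 2).time_since_epoch()).count();
        return serverMs - localMidMs;  // add this to local log timestamps
    }

A single exchange is usually good to within a few milliseconds on a LAN; repeating it and keeping the result from the exchange with the smallest round trip tightens the estimate (this is essentially what NTP does).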

Handing over an established TCP connection from one process to another

I am writing a simple web server in C++ that handles long-lived connections. However, I need to reload my web server from time to time. I wonder if there is a way to hand the established connections over from one process to another so that I can retain them after a reload.
Would it be enough to only pass file descriptors? What would happen to connection states?
Any similar open source project that does the same thing?
Any thoughts or ideas?
Thanks,
I really have no idea whether this is possible, but I think not. If you fork(), the child will "inherit" the descriptors, but I don't know whether they behave like they should (though I suspect that they do). And with forking, you can't run new code (can you?). Raw descriptor numbers are process-specific, so just passing the numbers to a new, unrelated process won't work, and the descriptors will be closed when your process terminates anyway.
One solution (in the absence of a simpler one,) is to break your server into two processes:
Front-end: A very simple process that just accepts the connections, keeps them open, and forwards any data it receives to the second process, and vice versa.
Server: The real web server, that does all the logic and processing, but does not communicate with the clients directly.
The first and second processes communicate via a simple protocol. One required feature of this protocol is that it must support the second process being terminated and relaunched.
Now, you can reload the actual server process without losing the client connections (since they are handled by the front-end process.) And since this front-end is extremely simple and probably has very few configurations and bugs, you rarely need to reload it at all. (I'm assuming that you need to reload your server process because it runs into bugs that need to be fixed or you need to change configurations and stuff.)
Another important and helpful feature that this system can have is to be able to transition between server processes "gradually". That is, you already have a front-end and a server running, but you decide to reload the server. You launch another server process that connects to the front-end (while the old server is still running and connected,) and the front-end process forwards all the new client connections to the new server process (or even all the new requests coming from the existing client connections.) And when the old server finishes processing all the requests that it has under processing, it gracefully and cleanly exits.
As I said, this is a solution you might want to try only if nothing easier and simpler is found.
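One addendum on the descriptor question itself: on Unix, an open descriptor can in fact be handed to an unrelated process by sending it over a Unix domain socket with an SCM_RIGHTS control message. The kernel duplicates the descriptor into the receiver, and since the TCP connection state lives in the kernel, the connection survives; application-level state (parse buffers, TLS sessions, etc.) still has to be serialized separately. A minimal sketch of the sending side, assuming unix_sock is a connected AF_UNIX socket between the old and new server processes (nginx uses a related trick, inheriting its listening sockets across exec, for its on-the-fly binary upgrade):

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    int send_fd(int unix_sock, int fd_to_pass) {
        char dummy = 'x';                   // must send at least one real byte
        struct iovec iov;
        iov.iov_base = &dummy;
        iov.iov_len = 1;

        char ctrl[CMSG_SPACE(sizeof(int))];
        memset(ctrl, 0, sizeof(ctrl));

        struct msghdr msg;
        memset(&msg, 0, sizeof(msg));
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = ctrl;
        msg.msg_controllen = sizeof(ctrl);

        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;       // "pass these descriptors"
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd_to_pass, sizeof(int));

        // The receiver gets a *new* descriptor number referring to the same
        // kernel socket when it calls recvmsg() and reads the control message.
        return (int)sendmsg(unix_sock, &msg, 0);
    }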

Routing sockets to another port

I have a system where I want to listen on a socket, wait for a client to connect, and then pass the connection to another application that I'll start as soon as the connection is established.
I do not have control over this other application and can only set the port where it will listen, but I want to have one process for each new client.
I've been searching for a solution, but I think I don't have the right terminology. I managed to find, in Richard Stevens' "Unix Network Programming", something about the AF_ROUTE family of sockets that may be combined with SOCK_RAW to route a connection to another IP and port. But there's very little documentation about how to use it, and it seems to require superuser privileges (which I want to avoid).
Maybe there's an easier solution but I'm probably using the wrong terms. Is it clear what I want to do?
I don't think you'll be able to just "pass" the socket like you want to, especially if you can't change and recompile "APP". A socket involves various administrative overhead (resource management, etc.) that is linked to the process that owns it. In addition, if you can't recompile APP, there is no way to make it bypass the steps involved in accepting a connection and simply have an already open connection "handed" to it by your router.
However, have you considered simply using the router as a pass-through? Basically, have your "Router" process connect via sockets to each "APP" process it spawns, and simply echo whatever it receives from the appropriate client to the appropriate APP, and vice versa for APP to client.
This does add overhead, and you will have to manage a small mapping to keep track of which clients go to which apps, but it might work (assuming the APP or client aren't basing any behavior off of the IP address they are connected to, etc). Assuming you can't recompile APP, there might not be too many other options.
The code for this is relatively simple. Your handler for data received from an APP just looks up the socket for the appropriate client in your mapping, and then does a non-blocking send of this data out on it. Likewise for the handler for data received from a client. Depending on how exactly the clients and APP behave, you may have to handle a bit of synchronization (if you receive from both simultaneously).
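A minimal sketch of that echo loop for one client/APP pair, using select() to wait on both sockets (socket setup and the client-to-APP mapping are left out):

    #include <sys/select.h>
    #include <sys/socket.h>
    #include <unistd.h>

    // Forward bytes between one client socket and its APP socket until
    // either side closes. Both sockets are assumed already connected.
    void pump(int client_fd, int app_fd) {
        char buf[4096];
        for (;;) {
            fd_set readable;
            FD_ZERO(&readable);
            FD_SET(client_fd, &readable);
            FD_SET(app_fd, &readable);
            int maxfd = (client_fd > app_fd ? client_fd : app_fd) + 1;
            if (select(maxfd, &readable, NULL, NULL, NULL) < 0)
                break;

            // Echo whatever arrived on one side out the other side; if both
            // are readable, the second one is picked up on the next iteration.
            int from = FD_ISSET(client_fd, &readable) ? client_fd : app_fd;
            int to = (from == client_fd) ? app_fd : client_fd;
            ssize_t n = recv(from, buf, sizeof(buf), 0);
            if (n <= 0)
                break;                      // peer closed, or error
            if (send(to, buf, (size_t)n, 0) != n)
                break;
        }
        close(client_fd);
        close(app_fd);
    }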

Linux socket programming testing

I am new to socket programming and I have written code for the server using epoll. Basically it waits to accept any client connection, and it sends data to the clients when data is available from other sources.
I need to test the performance of the server when there are multiple concurrent connection requests and when the server is sending data to multiple clients. The question I have is: how can I simulate concurrent connection requests from a number of clients? Do I create multiple threads, multiple processes, or some other arrangement to test it?
Thanks.
I usually create a client program that simulates one session with the server, find as many Linux boxes nearby as I can, and spin up a thousand instances on each machine from a command line like:
for i in {0..1000} ; do ./myprogram serverip & done
In some cases the protocol is text-based and the interaction or test is simple, so I don't have to write myprogram at all but can just use netcat, as in running nc serverip < input >/dev/null
If all you need is to test streaming data to your client, I'd start with netcat and work my way from there.
This approach is OK for my needs; I usually don't need to test more concurrency than a few thousand clients. If you need more scalability, you'll likely have to write the client simulator using I/O multiplexing (epoll) as well, having each instance simulate as many interactions as it can, and you'll have to run more controlled tests to find the limits of your client simulator.
Don't mix performance testing with functional testing, though. While you might get awesome performance in a test environment, a live environment can be very different: clients will misbehave, they will disconnect at the most inappropriate of times, they might send you malicious data, and they might be slow, leading to a buildup of internal queues on the server or leaving you with thousands of seemingly idle connections (running killall -STOP nc while in the middle of sending data to test clients might show you interesting things).
You can do it using two threads: one to send requests and the other to receive responses.
To simulate multiple clients you can create multiple sub-interfaces, using the ifconfig command in a script or using ioctl() in your C program.
For sending data from the clients, you can create an array of sockets, bind them to the different sub-interface IPs, and loop through them to send data, using select() or poll() to receive data from the server, as sketched below.
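A minimal sketch of that loop using poll(); the server address and port are assumptions, and the per-socket bind() to a sub-interface IP is indicated but omitted:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <poll.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main() {
        const int kClients = 100;           // number of simulated clients
        struct pollfd fds[kClients];

        struct sockaddr_in server;
        memset(&server, 0, sizeof(server));
        server.sin_family = AF_INET;
        server.sin_port = htons(5000);                        // assumed port
        inet_pton(AF_INET, "192.168.1.10", &server.sin_addr); // assumed IP

        for (int i = 0; i < kClients; ++i) {
            int s = socket(AF_INET, SOCK_STREAM, 0);
            // A bind() to one of the sub-interface IPs would go here.
            connect(s, (struct sockaddr *)&server, sizeof(server));
            send(s, "hello\n", 6, 0);       // one request per simulated client
            fds[i].fd = s;
            fds[i].events = POLLIN;
        }

        // Collect responses as they arrive.
        char buf[512];
        int remaining = kClients;
        while (remaining > 0) {
            poll(fds, kClients, -1);
            for (int i = 0; i < kClients; ++i) {
                if (fds[i].revents & POLLIN) {
                    recv(fds[i].fd, buf, sizeof(buf), 0);
                    close(fds[i].fd);
                    fds[i].fd = -1;         // poll() skips negative fds
                    --remaining;
                }
            }
        }
        return 0;
    }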