Monitor connection instances in a C++ program

In my project I'm using libmodbus.
I have a thread that uses a modbus_t connection to check the status of multiple RTU slaves.
I want to create another thread that checks the status of the Modbus connection itself.
If the connection fails, the first thread should be stopped.
Any idea or approach would help.
Thanks
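One common approach (a minimal sketch, not the poster's code): share a std::atomic<bool> between the two threads, protect the modbus_t with a mutex since libmodbus contexts are not thread-safe, and let the watchdog thread trip the flag when a probe read fails. The slave id, register address, and probe interval below are placeholders.
#include <modbus.h>
#include <atomic>
#include <chrono>
#include <mutex>
#include <thread>

std::atomic<bool> link_ok{true};
std::mutex ctx_mtx;                                         // libmodbus contexts are not thread-safe

void watchdog(modbus_t *ctx) {
    uint16_t reg;
    while (link_ok) {
        {
            std::lock_guard<std::mutex> lock(ctx_mtx);
            modbus_set_slave(ctx, 1);                       // probe one known slave (placeholder id)
            if (modbus_read_registers(ctx, 0, 1, &reg) == -1)
                link_ok = false;                            // tell the polling thread to stop
        }
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
}

void poll_slaves(modbus_t *ctx) {
    while (link_ok) {                                       // exits once the watchdog trips the flag
        std::lock_guard<std::mutex> lock(ctx_mtx);
        // ... read the status of each RTU slave here ...
    }
}
The polling thread simply checks the flag at the top of each iteration and exits cleanly once the watchdog clears it, after which the main thread can join both threads and reconnect.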

Related

gRPC C++ client blocking when attempting to connect channel on unreachable IP

I'm trying to enhance some client C++ code using gRPC to support failover between 2 LAN connections.
I'm unsure if I found a bug in gRPC, or more likely that I'm doing something wrong.
Both the server and client machines are on the same network with dual LAN connections, which I'll call LAN-A and LAN-B.
The server is listening on 0.0.0.0:5214, so it accepts connections on both LANs.
I tried creating the channel on the client with both IPs, using various load-balancing options, e.g.:
string all_endpoints = "ipv4:172.24.1.12:5214,10.23.50.123:5214";
grpc::ChannelArguments args;
args.SetLoadBalancingPolicyName("pick_first");
_chan = grpc::CreateCustomChannel(all_endpoints,
                                  grpc::InsecureChannelCredentials(),
                                  args);
_stub = std::move(Service::NewStub(_chan));
When I start up the client and server with all LAN connections functioning, everything works perfectly. However, if I kill one of the connections, or start the client with one of the connections down, gRPC seems to block forever on that subchannel. I would expect it to use the subchannel that is still functioning.
As an experiment, I implemented some code to only try to connect on 1 channel (the non-functioning one in this case), and then wait 5 seconds for a connection. If the deadline is exceeded, then we create a new channel and stub.
if (!_chan->WaitForConnected(std::chrono::system_clock::now() +
                             std::chrono::milliseconds(5000)))
{
    lan_failover();
}
The stub is a unique_ptr, so it should be destroyed; the channel is a shared_ptr. What I see is that I can successfully connect on my new channel, but when my code returns, gRPC takes over and blocks indefinitely on what appears to be an attempt to connect on the old channel. I would expect gRPC to close/delete this no-longer-used channel. I don't see any functions available in the C++ version, on the channel or globally, that would force the shutdown/closure of the channel.
I'm at a loss on how to get gRPC to stop trying to connect on failed channels, any help would be greatly appreciated.
Thanks!
Here is some gRPC debug output I see when I start up with the first load-balancing implementation I mentioned, with one of the two LANs not functioning (blocking forever):
https://pastebin.com/S5s9E4fA
You can enable keepalives. Example usage: https://github.com/grpc/grpc/blob/master/test/cpp/end2end/flaky_network_test.cc#L354-L358
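For reference, a minimal sketch of what enabling client-side keepalives might look like; the interval values are arbitrary placeholders, and make_channel is just a hypothetical helper around the same CreateCustomChannel call used above.
#include <grpcpp/grpcpp.h>
#include <memory>
#include <string>

std::shared_ptr<grpc::Channel> make_channel(const std::string& target) {
    grpc::ChannelArguments args;
    args.SetLoadBalancingPolicyName("pick_first");
    args.SetInt(GRPC_ARG_KEEPALIVE_TIME_MS, 10000);          // send a keepalive ping every 10 s
    args.SetInt(GRPC_ARG_KEEPALIVE_TIMEOUT_MS, 5000);        // consider the link dead after 5 s with no ack
    args.SetInt(GRPC_ARG_KEEPALIVE_PERMIT_WITHOUT_CALLS, 1); // ping even when there are no active RPCs
    args.SetInt(GRPC_ARG_HTTP2_MAX_PINGS_WITHOUT_DATA, 0);   // don't cap pings on an idle connection
    return grpc::CreateCustomChannel(target, grpc::InsecureChannelCredentials(), args);
}
With keepalives enabled, a dead subchannel is detected after the ping timeout instead of waiting on TCP to give up, which lets pick_first move on to the other address.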
Just wanted to let anyone know the problem wasn't with gRPC, but with the way our systems were configured with a SAN that was being written to. The SAN was mounted through the LAN connection I was using to test failover, and the process was actually blocking because it was trying to access that SAN. The stack trace was misleading because it showed the gRPC thread.

Boost Asio TCP Server Handling multiple clients

I am new to network programming and to using the Boost Asio library.
I successfully implemented a task for my requirement by modifying the Boost Asio "Blocking TCP Echo Server and Client" example, which performs transactions of operations between my client and server.
Now, I have a requirement where I need to connect multiple Clients with my Server.
I found some relevant links suggesting the usage of async_accept at the Server side.
So, I tried running the Boost Asio example: "Async TCP Echo Server" with the "Blocking TCP Echo client", where the server distinguishes the different clients and addresses them accordingly.
But my actual requirement is that, instead of the server completing the entire process for one client, it has to perform the same operations for the first client, then go to the second client and perform those operations, then come back to the first client, and continue in this order until all operations are complete.
Is there any way or idea that could help me achieve this flow using Boost Asio? Also, I'm just using the "Blocking TCP Echo Client", which has a normal connect() and not an async_connect(); is that a problem?
Also, is it possible to communicate between multiple clients through the server using Boost Asio?
Thanking you very much in advance!
There are two models for handling multiple clients concurrently on the server.
The first is to spawn a new thread for each client, with each thread handling its client synchronously: when an accept completes, you create a new worker thread and start the worker on the send and recv required by your protocol, while your main thread goes back to accepting new connections. The second model is to use asynchronous APIs on a single thread, all operating on a single io_service.
With async, you prime the pump with an async accept and then call io_service::run. When the accept completes, your callback runs. You then prime the pump again with further accepts (for more clients) and start async sends and recvs for the newly connected client. Since all sends and recvs are non-blocking, the only time your thread sleeps is when it has nothing to do; otherwise the io_service::run method takes care of everything for you.
If you are blocking on sends and recvs, though, you cannot process more than one client concurrently.
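To make the async model concrete, here is a minimal sketch (not the poster's code) of a single-threaded server where each accepted client gets its own session object and the per-client work is broken into async steps, so clients are naturally interleaved. The port and buffer size are placeholders, and newer Boost spells io_service as io_context.
#include <boost/asio.hpp>
#include <array>
#include <memory>

using boost::asio::ip::tcp;

class session : public std::enable_shared_from_this<session> {
public:
    explicit session(tcp::socket socket) : socket_(std::move(socket)) {}
    void start() { do_read(); }
private:
    void do_read() {
        auto self = shared_from_this();
        socket_.async_read_some(boost::asio::buffer(buf_),
            [this, self](boost::system::error_code ec, std::size_t n) {
                if (!ec) {
                    // ... perform one step of the per-client operations here, then:
                    do_read();                    // yields back so other clients get served
                }
            });
    }
    tcp::socket socket_;
    std::array<char, 1024> buf_;
};

void do_accept(tcp::acceptor& acceptor) {
    acceptor.async_accept([&acceptor](boost::system::error_code ec, tcp::socket socket) {
        if (!ec)
            std::make_shared<session>(std::move(socket))->start();
        do_accept(acceptor);                      // keep accepting further clients
    });
}

int main() {
    boost::asio::io_context io;
    tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 5000));
    do_accept(acceptor);
    io.run();                                     // everything runs on this one thread
}
Because every read and write is asynchronous, the single run() loop services whichever client is ready next, giving exactly the "first client, then second client, then back to the first" interleaving the question asks for; a blocking connect() on the client side is fine.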

start remote process and monitor status

I'm using Linux and C++. What is the best way to programmatically start a remote process on another machine and verify periodically that it is still running? I'll need to do this for multiple processes as well. Thank you
You can use the MPI interface to communicate between the master and the slaves via send and recv calls, which can be used to track their status periodically. An MPI tag can be assigned to each slave so you can track the responses from them. More info: https://en.wikipedia.org/?title=Message_Passing_Interface
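As a rough illustration of that suggestion (rank 0 acting as the master, the other ranks as slaves; the tag value is an arbitrary choice, and in practice you would repeat this exchange on a timer):
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int HEARTBEAT_TAG = 100;
    if (rank == 0) {                              // master: collect one heartbeat per slave
        for (int src = 1; src < size; ++src) {
            int beat = 0;
            MPI_Recv(&beat, 1, MPI_INT, src, HEARTBEAT_TAG, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            std::printf("slave %d is alive (beat %d)\n", src, beat);
        }
    } else {                                      // slave: report in
        int beat = 1;
        MPI_Send(&beat, 1, MPI_INT, 0, HEARTBEAT_TAG, MPI_COMM_WORLD);
    }
    MPI_Finalize();
}
Launched with something like "mpirun -np 4 ./monitor", the MPI runtime takes care of starting the remote processes, and a missing or delayed heartbeat tells the master a slave has died.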

ZeroMQ sending many to one

I have implemented a zmq-based library using PUSH/PULL on Windows. There is a server and up to 64 clients running over loopback. Each client can send to and receive from the server. There is a thread that waits for each client to connect on a PULL zmq socket. Clients can go away at any time.
The server is expected to go down at times and when it comes back up the clients need to reconnect to it.
The problem is that when nothing is connected I have 64 receive threads waiting for a connection. This shows up as a lot of connections in TCPView, and my colleagues tell me it looks like a performance/DDoS sort of attack.
So, to get around that issue, I'd like the clients to send some sort of heartbeat to the server ("hey, I'm here") on one socket. However, I can't see how to do that with zmq.
Any help would be appreciated.
I think the basic design of having 64 threads on the server waiting for external connections is flawed. Why not have a single 'master' thread binding the socket, which the external clients would connect to?
Internal to the server, you could still have 64 worker threads. Work would be distributed to the worker threads by the master thread. The communication between the master and the worker threads would be using zmq messages over the inproc transport.
What I have described are simple fan-in and fan-out patterns which are covered in the zmq guide. If you adopt this, most of the zmq code in the clients and workers would remain unchanged. You would have to write code for the master thread, but the zproxy class of CZMQ may work well for you (if you're using CZMQ).
So my advice is to get the basic design right before trying to add heartbeats. [Actually, I'm not sure how heartbeats would help your current problem.]
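As a rough sketch of that fan-in/fan-out layout using the plain libzmq C API (endpoints, worker count, and buffer size are placeholders; only the client-to-server direction is shown, and with CZMQ the zmq_proxy call would be the zproxy class instead):
#include <zmq.h>
#include <thread>
#include <vector>

void worker(void* ctx) {
    void* in = zmq_socket(ctx, ZMQ_PULL);
    zmq_connect(in, "inproc://workers");          // pick up work fanned out by the master
    char buf[256];
    while (zmq_recv(in, buf, sizeof(buf), 0) >= 0) {
        // ... handle one client message ...
    }
    zmq_close(in);
}

int main() {
    void* ctx = zmq_ctx_new();

    void* frontend = zmq_socket(ctx, ZMQ_PULL);   // all 64 clients PUSH to this one endpoint
    zmq_bind(frontend, "tcp://*:5555");

    void* backend = zmq_socket(ctx, ZMQ_PUSH);    // fan out to the worker threads
    zmq_bind(backend, "inproc://workers");

    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i)
        workers.emplace_back(worker, ctx);

    zmq_proxy(frontend, backend, nullptr);        // master thread shovels messages
    // (zmq_proxy only returns once the context is terminated)

    zmq_close(frontend);
    zmq_close(backend);
    zmq_ctx_destroy(ctx);
    for (auto& t : workers) t.join();
}
Externally this is a single listening socket regardless of how many clients exist, which removes the wall of idle connections that was worrying your colleagues.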

Multiple connections on the same port socket C++

I need to accept multiple connections to the same port.
I'm using sockets in C++, and I want to do something like SSH does.
I can run ssh user@machine "ls -lathrR /" and then run another command against the same machine, even while the first one is still running.
How can I do that?
Thanks.
What you want is a multithreaded socket server.
For this, you need a main thread that opens up a socket to listen to (and waits for incoming client connections). This has to go into a while loop of some sort.
Then, when a client connects to it, the accept() function will unblock and at that point you need to serve the client request by passing on the request to a thread that will deal with it.
The server side will loop back and wait for another connection whilst the previous thread carries on its task.
You can either create threads as you need, or use a thread pool which might be more efficient (saving on time initialising new threads).
Have a look here for some more details.
Look for multithreaded server socket on the web, specifically bind(), listen() and accept() from the server side.
You need to read up on ::listen() and ::accept().
The former will set up your socket for listening. You then need a loop (probably in its own thread) which uses ::accept() which will return each time a new connection arrives.
That loop should then spawn a new thread, to which you pass the file descriptor received from ::accept(); that thread then handles all I/O on that socket from then on.
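A minimal sketch of that loop (error handling omitted; the port and the echo handler are placeholders for whatever protocol you actually speak):
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <thread>

void handle_client(int fd) {
    char buf[1024];
    ssize_t n;
    while ((n = ::recv(fd, buf, sizeof(buf), 0)) > 0) {
        ::send(fd, buf, n, 0);                    // echo back; replace with your real protocol
    }
    ::close(fd);
}

int main() {
    int listener = ::socket(AF_INET, SOCK_STREAM, 0);

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(2222);
    ::bind(listener, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    ::listen(listener, SOMAXCONN);

    while (true) {
        int client = ::accept(listener, nullptr, nullptr);   // one new fd per connection
        std::thread(handle_client, client).detach();         // serve it in its own thread
    }
}
Each ::accept() returns a fresh file descriptor for that connection while the listening socket keeps accepting on the same port, which is exactly how one daemon handles many simultaneous sessions.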
Old question is old, but I feel no one who answered understood the OP's question.
You're misunderstanding how ssh works. When you send multiple commands/multiple connections to a server over ssh, there is actually only ONE program on the server you're connecting to that is receiving all those commands.
Sshd (the ssh daemon) runs on the server, and is a multithreaded socket server (see fduff's answer). This is the only program that listens on port 22, and handles all incoming ssh connections by itself.