C++ & Boost: I'm trying to find an example TCP program with a server that accepts connections from multiple clients

A chat program would be a good enough example.
Just need a server that can accept multiple connections from the clients, and the server needs to be able to send messages to individual clients.
I plan to turn this into a distributed computing program to work with multiple Neural Networks.

Asio is the Boost library that handles networking, and its documentation includes a chat server example that does exactly this.
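For a feel of the overall shape, here is a minimal sketch of an asynchronous Asio server that accepts any number of clients and can write to each one individually through its session. This is not the documentation's chat example itself; it assumes Boost 1.66+ (io_context and the move-accepting async_accept), the port is arbitrary, and it echoes back to the sender where a chat server would instead broadcast to the other sessions:

```cpp
#include <array>
#include <memory>
#include <boost/asio.hpp>

using boost::asio::ip::tcp;

// One session per connected client; the server can send to any individual
// client through its session's socket.
class session : public std::enable_shared_from_this<session> {
public:
    explicit session(tcp::socket socket) : socket_(std::move(socket)) {}
    void start() { do_read(); }

private:
    void do_read() {
        auto self = shared_from_this();
        socket_.async_read_some(boost::asio::buffer(data_),
            [this, self](boost::system::error_code ec, std::size_t n) {
                if (ec) return;                      // client disconnected
                // Echo back to this client only; a chat server would instead
                // forward the message to the other sessions.
                boost::asio::async_write(socket_, boost::asio::buffer(data_, n),
                    [this, self](boost::system::error_code ec, std::size_t) {
                        if (!ec) do_read();
                    });
            });
    }

    tcp::socket socket_;
    std::array<char, 1024> data_;
};

class server {
public:
    server(boost::asio::io_context& io, unsigned short port)
        : acceptor_(io, tcp::endpoint(tcp::v4(), port)) { do_accept(); }

private:
    void do_accept() {
        acceptor_.async_accept(
            [this](boost::system::error_code ec, tcp::socket socket) {
                if (!ec)
                    std::make_shared<session>(std::move(socket))->start();
                do_accept();                         // keep accepting more clients
            });
    }

    tcp::acceptor acceptor_;
};

int main() {
    boost::asio::io_context io;
    server s(io, 12345);                             // example port
    io.run();
}
```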

I cannot give you an example program, but to write a server these are the things you have to do:
1. The server listens on a port for connections.
2. A thread pool accepts the connections and serves the requests.
3. The server code is written in a thread-safe manner.
You have to use socket programming; a good guide for that is http://beej.us/guide/bgnet/.
You can use the Win32 API on Windows and POSIX sockets on Linux; a bare-bones POSIX sketch follows below.
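As a rough illustration of steps 1 and 2 on the POSIX/Linux side, a blocking single-threaded skeleton might look like this; the port and the echo behaviour are placeholders, and a real server would hand each accepted socket to a worker thread from the pool (which is where step 3's thread safety comes in):

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    // Step 1: create a socket, bind it to a port, and listen for connections.
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    if (listen_fd < 0) { perror("socket"); return 1; }

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(12345);                       // example port

    if (bind(listen_fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
        perror("bind"); return 1;
    }
    if (listen(listen_fd, SOMAXCONN) < 0) { perror("listen"); return 1; }

    // Step 2: accept connections; a real server would pass each accepted
    // socket to a worker thread instead of serving it inline.
    for (;;) {
        int client_fd = accept(listen_fd, nullptr, nullptr);
        if (client_fd < 0) { perror("accept"); continue; }

        char buf[512];
        ssize_t n = recv(client_fd, buf, sizeof(buf), 0);
        if (n > 0) send(client_fd, buf, static_cast<size_t>(n), 0);   // echo back
        close(client_fd);
    }
}
```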

Related

Are multiple boost::asio tcp channels faster than a single one?

On Linux, axel is generally faster than wget. The reason is that axel opens multiple channels (connections) to the source and downloads the pieces of a file simultaneously.
So, the short version of my question is: Would doing the same with boost::asio make the connection transfer data faster?
By looking at these simple examples of a client and a server, I could create multiple client instances and connect to the same server with multiple sessions. In the communication protocol, I can make the client and server ready for such connections, so that the data is split among all the connection channels.
Could someone please explain to me why this should or shouldn't work out based on the scenarios I drew?
Please ask for more details if you need it.

How to enable gRPC server to support just one client connection

I'm currently considering to use gRPC for basically inter-process communication between Java app (client) and C++ server. The RPC calls will use functionality from very old C++ code base which is definitely not thread-safe.
Normally the Java client will start multiple gRPC server instances and have just one connection with each server instance.
Is there any way to ensure on the gRPC server side that it accepts just one connection and refuses all other connection attempts? Otherwise I need to introduce some global lock in the RPC functions to have a 100% correct server implementation.
There are plans to provide additional server side APIs that will allow the server to decide whether or not to accept an incoming connection, but this is not done yet. For now, a lock is probably a reasonable option.
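Until such an API exists, the lock option can be as simple as a single mutex inside the RPC handlers. A minimal sketch is below; the Legacy service, RunRequest, and RunReply types are hypothetical stand-ins for whatever your generated .proto stubs actually provide:

```cpp
#include <mutex>
#include <grpcpp/grpcpp.h>
#include "legacy.grpc.pb.h"   // hypothetical generated header for a service named "Legacy"

class LegacyServiceImpl final : public Legacy::Service {
public:
    grpc::Status Run(grpc::ServerContext* /*context*/,
                     const RunRequest* request,
                     RunReply* reply) override {
        // One global lock: only one RPC at a time may enter the old,
        // non-thread-safe code base, regardless of how many clients connect.
        std::lock_guard<std::mutex> guard(legacy_mutex_);
        // ... call into the legacy C++ code here and fill in *reply ...
        (void)request;
        return grpc::Status::OK;
    }

private:
    std::mutex legacy_mutex_;
};
```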

Interprocess communication: one server and multiple clients

I have one "server" process running, which will fetch data over the network for other processes running on the same machine as the server process.
How should I transfer data from the local server process and the local clients?
For retrieval of network data by the server process, Boost.Asio as suggested by @radman is a good choice.
Between server and local clients, Boost.Interprocess would be more efficient as this is interprocess data transfer, not requiring network usage.
Each of these Boost libraries provides a ready-to-run wrapper around complex underlying Win32 APIs, so you will likely get a working solution faster by using the libraries than by building your own special-purpose code with equivalent function.
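As a sketch of the server-to-local-client leg, Boost.Interprocess's message_queue avoids the network stack entirely. The queue name, sizes, and message content below are arbitrary, and the two halves would normally live in separate processes rather than one main():

```cpp
#include <boost/interprocess/ipc/message_queue.hpp>
#include <iostream>

namespace bip = boost::interprocess;

int main() {
    // Server side: create the queue and post a message for local clients.
    bip::message_queue::remove("demo_queue");            // drop any stale queue
    bip::message_queue mq(bip::create_only, "demo_queue",
                          /*max_num_msg=*/100, /*max_msg_size=*/256);

    const char msg[] = "data fetched from the network";
    mq.send(msg, sizeof(msg), /*priority=*/0);

    // Client side (normally a separate process): open the queue and receive.
    bip::message_queue client(bip::open_only, "demo_queue");
    char buf[256];
    bip::message_queue::size_type received = 0;
    unsigned int priority = 0;
    client.receive(buf, sizeof(buf), received, priority);
    std::cout << "client got: " << buf << "\n";

    bip::message_queue::remove("demo_queue");
}
```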
You should check out Boost.Asio; it fits your problem and is solid.
Standard TCP sockets work fine for interprocess communications between multiple processes on the same machine or different machines. It's standard, supported on almost all platforms and in almost all programming languages. You should be able to find sample C++ code easily.
To connect to a socket on the same machine, use "localhost" as its name or 127.0.0.1 as its IP.
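For example, a client process on the same machine could connect over loopback roughly like this (a synchronous Boost.Asio sketch; the port is whatever your server listens on):

```cpp
#include <iostream>
#include <string>
#include <boost/asio.hpp>

int main() {
    boost::asio::io_context io;
    boost::asio::ip::tcp::socket socket(io);

    // Connect to a server on the same machine via the loopback address.
    socket.connect(boost::asio::ip::tcp::endpoint(
        boost::asio::ip::make_address("127.0.0.1"), 12345));   // example port

    std::string msg = "hello\n";
    boost::asio::write(socket, boost::asio::buffer(msg));

    char reply[128];
    std::size_t n = socket.read_some(boost::asio::buffer(reply));
    std::cout.write(reply, static_cast<std::streamsize>(n));
}
```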
I believe Windows has named pipes, which would work similarly to the suggestions in the other answers (especially @Irish's TCP sockets suggestion). See CreateNamedPipe() for details.
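A minimal sketch of the server side of such a pipe might look like this; the pipe name and buffer sizes are arbitrary and error handling is kept to a minimum:

```cpp
#include <windows.h>
#include <cstdio>

int main() {
    // Create a named pipe; clients open it as \\.\pipe\demo_pipe.
    HANDLE pipe = CreateNamedPipeA(
        "\\\\.\\pipe\\demo_pipe",
        PIPE_ACCESS_DUPLEX,
        PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
        PIPE_UNLIMITED_INSTANCES,
        512, 512,       // output / input buffer sizes
        0,              // default timeout
        nullptr);       // default security attributes
    if (pipe == INVALID_HANDLE_VALUE) { std::printf("CreateNamedPipe failed\n"); return 1; }

    // Block until a client connects, then echo one message back to it.
    if (ConnectNamedPipe(pipe, nullptr)) {
        char buf[512];
        DWORD read = 0, written = 0;
        if (ReadFile(pipe, buf, sizeof(buf), &read, nullptr))
            WriteFile(pipe, buf, read, &written, nullptr);
    }
    DisconnectNamedPipe(pipe);
    CloseHandle(pipe);
}
```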

Client and server

I would like to create a connection between two applications. Should I be using client-server, or is there another way of efficiently communicating between the two? Are there any premade C++ networking client/server libraries which are easy to use, reuse, and implement?
Application #1 <---> (Client) <---> (Server) <---> Application #2
Thanks!
Client / server is a generic architecture pattern (much like factory, delegation, inheritance, bridge are design patterns). What you probably want is a library to eliminate the tedium of packing and unpacking your data in a format that can be sent over the wire. I strongly recommend you take a look at the protocol buffers library, which is used extensively at Google and released as open source. It will automatically encode / decode data, and it makes it possible for programs written in different languages to send and receive messages of the same type with all the dirty work done for you automatically. Protobuf only deals with encoding, not actually sending and receiving. For that, you can use primitive sockets (strongly recommend against that) or the Boost.Asio asynchronous I/O library.
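To make the division of labour concrete, here is roughly what the C++ side looks like once protoc has generated code for a message. The Job message and its fields are hypothetical, and the resulting byte string would be sent over whatever transport you choose (raw sockets, Boost.Asio, etc.):

```cpp
#include <iostream>
#include <string>

// Hypothetical generated header for a message defined in, say, job.proto:
//   message Job { int32 id = 1; string payload = 2; }
#include "job.pb.h"

int main() {
    Job job;                                  // generated class
    job.set_id(42);
    job.set_payload("weights for network #3");

    // Encode to a byte string that can be written to a socket...
    std::string wire;
    job.SerializeToString(&wire);

    // ...and decode on the other end, possibly in a different language.
    Job received;
    if (received.ParseFromString(wire))
        std::cout << received.id() << ": " << received.payload() << "\n";
}
```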
I should add that you seem to be confused about the meaning of client and server, since in your diagram you have the application talking to a client which talks to a server which talks to another application. This is wrong. Your application is the client (or the server). Client / server is simply a role that your application takes on during the communication. An application is considered to be a client when it initiates a connection or a request, while an application is considered to be a server when it waits for and processes incoming requests. Client / server are simply terms to describe application behavior.
If you know the applications will be running on the same machine, you can use sockets, message queues, pipes, or shared memory. Which option you choose depends on a lot of factors.
There is a ton of example code for any of these strategies as well as libraries that will abstract away a lot of the details.
If they are running on different machines, you will want to communicate through sockets.
There's a tutorial here, with decent code samples.

Multithreaded Server Issue

I am writing a server in linux that is supposed to serve an API.
Initially, I wanted to make it multithreaded on a single port, meaning that I'd have multiple threads working on the various requests received on that single port.
One of my friends told me that this is not how it is supposed to work. He told me that when a request is received, I first have to follow a handshake procedure, create a thread that listens on some other port dedicated to that request, and then redirect the requesting client to the new port.
Theoretically, it's very interesting but I could not find any information on how to implement the handshake and do the redirection. Can someone help?
If I'm not wrong in interpreting your responses: once I create a multithreaded server with a main thread listening on a port and spawning a new thread to handle each request, I'm essentially making it multithreaded on a single port?
Consider the scenario where I get a large number of requests every second. Isn't it true that every request on the port now has to wait for the "current" request to complete? If not, how is the communication still done? Say a browser sends a request, so the thread handling it first has to listen on the port, block it, process the request, respond, and then unblock it.
By this reasoning, even though I have "multiple threads", I'm only ever using one thread at a time apart from the main thread, because the port is being blocked.
What your friend told you is similar to passive FTP - a client tells the server that it needs a connection, the server sends back the port number and the client creates a data connection to that port.
But all you want to do is a multithreaded server. All you need is one server socket listening for and accepting connections on a given port. As soon as the automatic TCP handshake is finished, you'll get a new socket from the accept function; that socket is used for communication with the client that has just connected. So now you only have to create a new thread, passing that client socket to the thread function. In your listening thread, you then call accept again in order to accept another connection.
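A sketch of that thread-per-connection pattern with POSIX sockets and std::thread might look like this; the port, the echo handler, and the minimal error handling are all placeholders:

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <thread>

// Runs in its own thread and talks to exactly one client over its own socket.
static void handle_client(int client_fd) {
    char buf[512];
    ssize_t n;
    while ((n = recv(client_fd, buf, sizeof(buf), 0)) > 0)
        send(client_fd, buf, static_cast<size_t>(n), 0);   // echo back
    close(client_fd);
}

int main() {
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(12345);                           // example port
    bind(listen_fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    listen(listen_fd, SOMAXCONN);

    // The main thread only accepts; every accepted socket gets its own thread,
    // so a slow client never blocks the listening port for the others.
    for (;;) {
        int client_fd = accept(listen_fd, nullptr, nullptr);
        if (client_fd >= 0)
            std::thread(handle_client, client_fd).detach();
    }
}
```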
TCP/IP does the handshake for you; if you can't think of any reason to do an application-level handshake, then your application does not demand it.
An example of an application specific handshake could be for user authentication.
What your colleague is suggesting sounds like the way FTP works. This is not a good thing to do: the internet these days is more or less used for protocols that use a single port, and having a separate command port is bad. One of the reasons is that stateful firewalls aren't designed for multi-port applications; they have to be extended for each individual application that does things this way.
Look at ASIO's tutorial on async TCP. There, one part accepts connections on TCP and spawns handlers that each communicate with a single client. That's how TCP servers usually work (including HTTP/web, the most common TCP protocol).
You may disregard the asynchronous stuff of ASIO if you're set on creating a thread per connection. It doesn't apply to your question. (Going fully async and have one worker-thread per core is nice, but it might not integrate well with the rest of your environment.)