I'm building a system that has 2 processes.
Process 1
This process is a Node.js program: a web server handling the incoming requests.
Process 2
This process is a C++ program.
Both processes are started automatically at boot with the help of rc.local.
Now, some specific requests received by Process 1 should be passed to Process 2.
For example, if Process 1 receives a POST request at the route /enqueue with a JSON body payload, Process 1 should stringify the JSON and pass it to Process 2.
When Process 2 receives the JSON, it should kill its worker thread and start a new thread with that JSON to perform the actual task. The worker thread should be killed regardless of whether it is still processing the previous JSON.
If both processes were Node.js applications, I could have forked Process 2 from Process 1 and used the following code.
process.on('message', function (message) {
  // implementation
});
...
process.send(data);
But my second process is a C++ app.
Any idea on how to implement it?
Note: Before flagging this question, please keep in mind I'm not looking for a full code. I just need the idea on how to do it.
You cannot use the Node.js messaging/eventing facility for this purpose, as it is specific to Node.
You will need to use the communication facilities of your operating system, such as Unix domain, TCP, or UDP sockets, or a messaging system that both processes can talk to, such as Redis or ZeroMQ.
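For example, since both processes run on the same machine, a Unix domain socket is a natural fit. Below is a minimal sketch of the Process 2 (C++) side; the socket path /tmp/enqueue.sock is an arbitrary choice for illustration, and Process 1 could connect to the same path with Node's built-in net module (net.connect('/tmp/enqueue.sock')).

// Process 2: accept stringified JSON from Process 1 over a Unix domain socket.
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0) { std::perror("socket"); return 1; }

    sockaddr_un addr{};
    addr.sun_family = AF_UNIX;
    std::strncpy(addr.sun_path, "/tmp/enqueue.sock", sizeof(addr.sun_path) - 1);
    unlink(addr.sun_path);  // remove a stale socket left by a previous run

    if (bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0 ||
        listen(fd, 5) < 0) {
        std::perror("bind/listen");
        return 1;
    }

    for (;;) {
        int client = accept(fd, nullptr, nullptr);
        if (client < 0) continue;

        char buf[4096];
        ssize_t n = read(client, buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            // buf now holds the stringified JSON from Process 1; this is
            // where you would kill the worker thread and start a new one
            // with this payload.
            std::printf("received: %s\n", buf);
        }
        close(client);
    }
}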
Related
I have a service written in C++ working in the background, and a client written in Python that actually calls functions in C++ (using pybind11), which talks to that service. In my client example I am creating 2 processes. After the fork, the client in the new child process is able to send requests via gRPC but does not receive the answer message back.
I read that there are problems with gRPC and forking in Python, but I am not sure how this can be avoided. Is creating a new stub for each object in each child process supposed to work?
The flow:
I make a request from the main process - getting an object from the server via pybind11+gRPC.
Then I fork 2 processes and send each one the object returned from the previous service call.
In the child process, I make another request using that object - the request is sent and the answer is created in the service, but I never receive it in the client.
I have developed a C++ UDP-based server application and I am in the process of implementing code to handle multiple clients simultaneously.
I have the following understanding regarding how to handle multiple clients and want to fill in the knowledge gaps.
My step-wise understanding is as follows:
1. The UDP server listens on a specific port (say xxxx).
2. The server has a message queue. It can be an array, a linked list, a queue, or anything for that matter.
3. As soon as a request arrives at port xxxx, it is placed in the message queue.
4. After putting it in the message queue, a new thread (let us call it the worker thread) is spawned, which picks up the queued message; the message is then removed from the queue.
5. The worker thread learns the client's IP:port from the message header.
6. The worker thread processes the request and sends the response to the client's IP:port.
7. The client gets the response and the worker thread terminates.
Steps 3 to 7 take care of multiple clients being handled simultaneously.
Is my understanding sufficient? Where do I need improvement?
Thanks in advance
The client gets the response and the worker thread terminates.
The worker thread should terminate when it completes processing. There is no practical way for it to wait for an acknowledgement from the client.
The worker thread processes the request and sends the response to the client's IP:port
I think it would be better to place the response on a queue. The main server thread can check the queue and send any responses found there. This prevents race conditions when two worker threads overlap in their attempts to send responses.
The server has a message queue. It can be an array, a linked list, a queue, or anything for that matter
It pretty much has to be a queue. The interesting question is what queue discipline to use. Initially, FIFO would do. If your server becomes overloaded, then you need to consider alternatives. Perhaps it would be good to estimate the processing time required and do the fast ones first, as in the sketch below. Or perhaps different clients deserve different priorities.
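For illustration, a shortest-estimated-job-first discipline could look like this; the Job struct and its estimated_cost field are assumptions for the example, and how you estimate the cost is up to you.

// Jobs with the smallest estimated processing time are served first.
#include <queue>
#include <string>
#include <vector>

struct Job {
    std::string payload;
    double estimated_cost;  // e.g. expected seconds of processing
};

struct CheaperFirst {
    bool operator()(const Job& a, const Job& b) const {
        return a.estimated_cost > b.estimated_cost;  // min-heap on cost
    }
};

std::priority_queue<Job, std::vector<Job>, CheaperFirst> job_queue;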
After putting it in the message queue, a new thread (let us call it the worker thread) is spawned
This is fine initially. However, you will want to do some time profiling and determine whether a thread pool would be advantageous.
Deeper discussion of threading issues
The job processing must be done in a separate worker thread, so that a long job will not block the server from accepting connections from other clients. However, you should consider carefully whether or not you want to use multiple worker threads. Since you are placing the job requests on a queue, a single worker thread can be used to process them one by one (a sketch of this design follows the PRO/CON lists below).
PRO single thread
Simpler, more reliable code. The processing code must be thread-safe with respect to context switches back to the main thread, but there will not be any context switches within the job-processing code itself. This makes it easier to design and debug the processing code. For example, if the jobs are updating a database, then you do not require any extra code to ensure the database is always consistent - just that consistency is guaranteed at the end of each job.
Faster response for short jobs. If there are many short jobs submitted at the same time, your CPU can spend more cycles switching between jobs than actually doing useful processing.
CON single thread
A big job will block other jobs until it completes.
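To make the single-thread design concrete, here is a minimal sketch combining the points above: the server thread enqueues jobs, one worker thread processes them in order, and finished responses go onto a second queue that the server thread drains and sends, so only one thread ever touches the socket. The function names and string payloads are placeholders.

#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>

std::queue<std::string> jobs, responses;
std::mutex m;
std::condition_variable cv;

void enqueue(std::string job) {          // called by the server thread (step 3)
    std::lock_guard<std::mutex> lock(m);
    jobs.push(std::move(job));
    cv.notify_one();
}

bool next_response(std::string& out) {   // polled by the server thread (step 6)
    std::lock_guard<std::mutex> lock(m);
    if (responses.empty()) return false;
    out = std::move(responses.front());
    responses.pop();
    return true;
}

void worker() {                          // start once: std::thread(worker).detach();
    for (;;) {
        std::string job;
        {
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [] { return !jobs.empty(); });
            job = std::move(jobs.front());
            jobs.pop();
        }
        std::string result = "done: " + job;  // stand-in for the real processing
        {
            std::lock_guard<std::mutex> lock(m);
            responses.push(std::move(result));
        }
    }
}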
I am working on a module that uses 10 queues to handle threads, each of which sends curl requests using the curl_easy interface (along with a lock) so that a single connection is maintained until the response is received. I want to enhance request handling by using the curl_multi interface, where curl requests are sent by a thread and handled in parallel.
I have created separate code to implement this. For instance, I created 3 threads, handled one by one; the first thread sends requests to curl_multi while it is running and transfers still exist, allocating resources via the curl_easy interface for each transfer.
I have gone through a lot of examples but cannot figure out how to implement this in C++. Also, because I have only recently learnt multithreading and curl concepts in C++, I need assistance with the approach.
I expect a single thread to be able to keep sending curl requests until the user stops sending them.
Update - I have added two threads, and each sends two requests simultaneously. curl_multi is handled via an array of curl_easy handles.
I want to keep it free of arrays, because that limits the number of requests.
Can it be made asynchronous, accepting all transfers and exiting only when the client/user does? Although there are plenty of curl_multi examples, I am still not clear on the implementation.
Reading the curl_multi documentation, it doesn't seem as though you have to create different threads for this, as it works via your multiple easy handles added to the multi handle object. You then call curl_multi_perform to start all transfers in a non-blocking way.
I expect a single thread to be able to keep sending curl requests until the user stops sending them.
I don't quite understand what you mean by this - do you mean that you just want to keep those connections alive until everything is transferred? If so, curl_multi already gives you info on the progress of your transfers, which can help you determine what to do.
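In case it helps, here is a minimal sketch of one thread driving several transfers with curl_multi; the URLs are placeholders, and a std::vector of handles stands in for the fixed-size array the question mentions.

#include <curl/curl.h>
#include <vector>

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURLM* multi = curl_multi_init();

    const char* urls[] = {"http://example.com/a", "http://example.com/b"};
    std::vector<CURL*> handles;  // grows with the number of transfers
    for (const char* url : urls) {
        CURL* easy = curl_easy_init();
        curl_easy_setopt(easy, CURLOPT_URL, url);
        curl_multi_add_handle(multi, easy);
        handles.push_back(easy);
    }

    int still_running = 0;
    do {
        curl_multi_perform(multi, &still_running);              // non-blocking drive
        if (still_running)
            curl_multi_wait(multi, nullptr, 0, 1000, nullptr);  // wait for activity
    } while (still_running);

    for (CURL* easy : handles) {
        curl_multi_remove_handle(multi, easy);
        curl_easy_cleanup(easy);
    }
    curl_multi_cleanup(multi);
    curl_global_cleanup();
}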
Hope it helps
I have developed a dummy Launch Daemon that keeps writing something to the console (syslog) every 5 minutes. Now I want to write an application that can communicate with this service. By communicating, I mean that the user should be able to set the logging frequency. For example, if the service is logging 'Hello world' every 5 minutes, the user should be able to change it to something else (say 2 minutes) and have the change reflected. Any idea how I should proceed in developing the application and facilitating interprocess communication between the daemon and the application? Thanks.
There are several ways:
Have a config file for your application that contains the logging frequency and any other parameters you need. The daemon then parses the file on startup to get its parameters. The daemon also installs a SIGHUP handler, and when it receives a SIGHUP it re-reads the values from the config file. The part that the user interacts with then just gets new parameters from the user, edits them into the config file, and sends a kill -HUP to the daemon's process id.
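A minimal sketch of that pattern (the config path /etc/mydaemon.conf and its one-integer format are assumptions for illustration):

#include <atomic>
#include <csignal>
#include <fstream>
#include <syslog.h>
#include <unistd.h>

static std::atomic<bool> reload{false};

extern "C" void on_sighup(int) { reload = true; }  // only set a flag in the handler

static int read_interval() {
    std::ifstream f("/etc/mydaemon.conf");  // assumed location and format
    int minutes = 5;                        // default if the file is missing
    f >> minutes;
    return minutes;
}

int main() {
    std::signal(SIGHUP, on_sighup);
    int minutes = read_interval();

    for (;;) {
        if (reload.exchange(false))
            minutes = read_interval();      // picked up after `kill -HUP <pid>`
        syslog(LOG_INFO, "Hello world");
        sleep(minutes * 60);                // SIGHUP interrupts the sleep early
    }
}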
The daemon creates a second thread that opens a socket and listens for new parameters; when any arrive, the thread updates variables shared with the main thread, which then continues with the new values. The part that interacts with the user asks the user for new parameters and sends them to the agreed port - you can use nc or netcat to get started and then later code it in C++.
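A sketch of that second approach (port 5555 and the one-integer wire format are arbitrary choices; error handling omitted for brevity). You could test it with: echo 2 | nc localhost 5555

#include <arpa/inet.h>
#include <atomic>
#include <cstdio>
#include <cstdlib>
#include <netinet/in.h>
#include <sys/socket.h>
#include <thread>
#include <unistd.h>

std::atomic<int> interval_minutes{5};  // shared between the two threads

void listener() {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = htons(5555);
    bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    listen(fd, 1);
    for (;;) {
        int c = accept(fd, nullptr, nullptr);
        char buf[32] = {};
        if (read(c, buf, sizeof(buf) - 1) > 0)
            interval_minutes = std::atoi(buf);  // main thread sees the new value
        close(c);
    }
}

int main() {
    std::thread(listener).detach();
    for (;;) {
        std::printf("Hello world\n");
        sleep(interval_minutes.load() * 60);  // new value applies after this sleep
    }
}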
There are many examples on the net about creating a simple thread pool such as Sample1 and Sample2
What I want to implement, though, is a separate thread pool for different kinds of tasks. For example, the app may have a pool of threads for processing incoming TCP connections (let's call this the network pool), and another pool for talking to a database (the database pool).
These incoming TCP requests might need information from the database. In that case, they will need to ask the threads from the database pool to perform the query and return the result asynchronously.
Is there a recommended way to do this using boost::asio? Would it be having one instance of io_service for each pool? And how should those threads communicate with each other (using boost)?
I understand that to explain all this, the code won't be short and trivial, but if possible some sort of pseudo code would be much appreciated.
Thanks!
Communication between threads / thread pools should be through thread-safe queues.
In your example, you would have a network thread pool for handling network connections, a process pool for executing the network requests, and a database connection/thread pool (one pool per database; one thread per database connection, though you could have multiple connections to the same database).
You would also need thread-safe queues: one for the network pool, one for the process pool, and one for each of the database pools.
Say you have a network request that needs to get information from the database. You would receive the request while executing on a network thread, and append the handler for the request onto the process queue.
The process handler (in a process thread) would see that the request needs something from the database, and so it would append a database request as well as a callback handler onto the appropriate database queue.
The appropriate database thread would pick up the request from the database queue, execute the query, get the results back, and add the results to the callback handler. The callback handler object with the database results would then be pushed onto the process queue.
The callback handler (in a process thread) would then continue executing the request, and possibly package a response message, which is then pushed onto the network queue.
The network handler (in a network thread) would then pick up the response message and deliver it (encoding as necessary).
An example of a thread safe queue can be found here.
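If that link ever goes stale, the core of such a queue is small. A minimal sketch (a condition-variable-based blocking queue, not the linked code itself):

#include <condition_variable>
#include <mutex>
#include <queue>

template <typename T>
class ThreadSafeQueue {
public:
    void push(T value) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(value));
        }
        cond_.notify_one();
    }

    T pop() {  // blocks until an item is available
        std::unique_lock<std::mutex> lock(mutex_);
        cond_.wait(lock, [this] { return !queue_.empty(); });
        T value = std::move(queue_.front());
        queue_.pop();
        return value;
    }

private:
    std::queue<T> queue_;
    std::mutex mutex_;
    std::condition_variable cond_;
};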
Albeit a little complicated, you can see an implementation of an application server that can handle what you're talking about here, although it may be overkill for what you're trying to do. The source code is fairly well documented, so you should be able to follow it and see what it's doing.
My example uses boost for asio (see the TCP Connection implementation within that same system), but it does not use boost io_service for handlers.