Multiple concurrent FastCGI applications on the same server - c++

I have a RESTful web service with a C++ API at the back end. I am using the FastCGI library to provide the REST interface. My C++ API has multiple functions that can be used independently. I am looking for a way to make it as fast as possible. Here are a few ideas I have considered:
1. Have one FastCGI application that receives the function to be executed, executes that function and returns the output. This way, API calls keep waiting until one 'function' completes, even when the next call is for a different, independent function.
2. Have multiple FastCGI applications, each having access to only one function from the API, each receiving inputs for that particular app and returning that app's outputs alone.
This way I can have concurrent calls to all the different functions, with a separate process queue for each function, instead of one generic process queue to a single FastCGI application handling calls to different independent functions.
While this looks like it would perform better, I am not sure whether it is possible to implement such a system - i.e. having many FastCGI apps running in parallel on the same server. If it is possible, can someone tell me how to implement it?

Each FastCGI application is a separate program, running in a loop and communicating with Apache in a binary protocol defined by the FastCGI specification. The only possible concurrency problems are the same ones you would experience running concurrent CGI or PHP requests, with just one exception: since FastCGI processes do not terminate, any limited resources have to be carefully managed. For example, if you only have a ten-client licence for a database server, you can't have eleven FastCGI processes using the database unless you manage connections better than the "open at start, let it close at the end" approach often used in CGI or PHP.
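That resource point can be sketched as a bounded connection pool held for the whole life of the long-running process. A minimal illustration in Python (the same idea applies to a C++ FastCGI process); `make_conn` and the limit of 2 are placeholders, not anything from the FastCGI library:

```python
import queue

class ConnectionPool:
    """Caps the number of live connections a long-lived process may hold."""

    def __init__(self, make_conn, max_conns):
        self._pool = queue.Queue(maxsize=max_conns)
        for _ in range(max_conns):
            self._pool.put(make_conn())

    def acquire(self, timeout=None):
        # Blocks (or times out) when every connection is checked out,
        # so the process can never exceed the licensed limit.
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        self._pool.put(conn)

# make_conn=object stands in for whatever really opens a DB connection.
pool = ConnectionPool(make_conn=object, max_conns=2)
c1 = pool.acquire()
c2 = pool.acquire()
try:
    pool.acquire(timeout=0.1)  # a third caller has to wait its turn
    exhausted = False
except queue.Empty:
    exhausted = True
pool.release(c1)  # hand the connection back instead of closing it
```

Requests borrow a connection per call and return it afterwards, which is exactly the discipline a never-terminating process needs that a die-after-each-request CGI script does not.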

Related

django async support - not fully understanding the main concept

What is the main concept of Django's ASGI? Is it that:
1. when there are multiple tasks to be done inside a view, it handles those tasks concurrently, reducing the view's response time; or
2. when there are multiple requests from multiple users at the same time, it handles those requests concurrently so users wait less in the queue?
Channels? WebSockets?
I'm trying to understand and use the ASGI concept but feel so lost.
Thanks.
ASGI provides an asynchronous/synchronous interface between Python applications and the web servers that front them. In a sense, because the interface itself handles requests concurrently, it works to reduce response time - it is one reason Django web servers respond notably quickly. Multiple tasks from multiple users are handled quickly and efficiently, but that is not the main concept.
Most importantly, ASGI provides a method for Python (and the Django framework) to interact with the web client actively. Where WSGI's original triumph was the request/response cycle, ASGI is the upgrade that lets Python hold a connection open (listening) and start asynchronous tasks outside the direct scope of what the application is currently doing. Thus, we can start those tasks, serve different values to the user, and continue those tasks in the background uninterrupted.
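At the code level, an ASGI application is just a coroutine called once per connection with `scope`, `receive` and `send`. A minimal sketch of that generic shape (this is plain ASGI, not Django-specific code; you would normally run it under a server such as uvicorn or daphne):

```python
async def app(scope, receive, send):
    # Called once per connection; scope["type"] says whether this is
    # an "http" request, a "websocket" connection, etc.
    assert scope["type"] == "http"
    await receive()  # consume the http.request event (body ignored here)
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    await send({"type": "http.response.body", "body": b"hello"})
```

Because every `send` and `receive` is awaited, the server can interleave many such coroutines on one event loop - which is the concurrent handling of tasks and users the question asks about.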

Multiprocess web server with ocaml

I want to make a web server with OCaml. It will have a REST interface and no dependencies (it just searches constant data loaded into RAM at process startup) and will serve read-only queries (which can be served from any node - the result will be the same).
I love OCaml; however, it has one limitation: it can only run one thread at a time.
I am thinking of scaling simply by putting nginx in front of it and load balancing across multiple process instances running on different ports on the same server.
I don't think I'm the only one running into this issue. What would be the best tool to keep a few OCaml processes running, ensure that any of them that crash are restarted, and give each a different port (to load balance between them)?
I was thinking about a standard Linux service, but I don't want to create, say, four hard-coded entries and call service start webserver1 on each of them.
Is there a strong requirement for multiple operating system processes? Otherwise, it seems like you could just use something like cohttp with either lwt or async to handle concurrent requests in the same OS process, using multiple threads (and an event-loop).
As you mentioned REST, you might be interested in ocaml-webmachine which is based on cohttp and comes with well-commented examples.
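If you do go the multi-process route the question describes, the nginx side of it is just an upstream block; the port numbers and process count here are assumptions:

```nginx
# Round-robin across several identical OCaml processes on local ports.
upstream ocaml_app {
    server 127.0.0.1:8081;
    server 127.0.0.1:8082;
    server 127.0.0.1:8083;
}

server {
    listen 80;
    location / {
        proxy_pass http://ocaml_app;
    }
}
```

Since the queries are read-only and any node gives the same result, plain round-robin with no session affinity is enough here.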

Django and Websockets: How to create, in the same process, an efficient project with both WSGI and websockets?

I'm trying to build a Django application with an asynchronous part: websockets. Just as a little challenge, I want to mount everything in the same process. I tried Socket.IO but couldn't manage to actually use sockets instead of long-polling (which killed my browser several times, until I gave up).
What I then tried was a not-so-maintained library based on gevent-websocket. However, it had many errors and was not easy to debug.
Now I am trying a Tornado approach, but AFAIK (please correct me if I'm wrong) integrating async code with a regular Django app wrapped in WSGIContainer (websockets going through Tornado, regular connections through Django) will be a true server killer if a resource is heavy or the Django ORM somehow gets slow in heavy operations.
I was thinking of moving to Twisted/Cyclone. Before I move from one architecture with such an issue to ANOTHER architecture with such an issue, I'd like to ask:
Does Tornado (and/or Twisted) schedule tasks the way gevent does? (I mean: when certain greenlets "block", control is yielded elsewhere, at least until the operation finishes.) I'm asking because (please correct me if I'm wrong) a regular Django view is not suitable for things like @inlineCallbacks, and will cause the whole server to block (including the websockets).
I'm new to async programming in Python, so there's a huge chance I have misinformation about more than one concept. Please help me clarify this before I switch.
Neither Tornado nor Twisted has anything like gevent's magic to run (some) blocking code with the performance characteristics of asynchronous code. Idiomatic use of either Tornado or Twisted will be visible throughout your app in the form of callbacks and/or Futures/Deferreds.
In general, since you'll need to run multiple python processes anyway due to the GIL, it's usually best to dedicate some processes to websockets with Tornado/Twisted and other processes to Django with the WSGI container of your choice (and then put nginx or haproxy in front so it looks like a single service to the outside world).
If you still want to combine django and an asynchronous service in the same process, the next best solution is to use threads. If you want the two to share one listening port, the listener must be a websocket-aware HTTP server that can spawn other threads for WSGI requests. Tornado does not yet have a solution for this, although one is planned for version 4.1 (https://github.com/tornadoweb/tornado/pull/1075). I believe Twisted's WSGI container does support running the WSGI workers in threads, but I don't have any experience with it myself. If you need them in the same process but do not need to share the same port, then you can simply run the IOLoop or Reactor in one thread and the WSGI container of your choice in another (with its associated worker threads).
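That last option (same process, separate ports, one thread each) can be sketched in a few lines of Python. This is only an illustration of the threading layout: `wsgi_app` stands in for Django, and a bare asyncio loop stands in for Tornado's IOLoop or Twisted's reactor:

```python
import asyncio
import threading
from urllib.request import urlopen
from wsgiref.simple_server import make_server

def wsgi_app(environ, start_response):
    # Plain synchronous WSGI application (the Django side).
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"from wsgi"]

# Thread 1: the event loop (the IOLoop/reactor side).
loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()

# Thread 2: the WSGI container, listening on its own port
# (port 0 asks the OS for any free port).
server = make_server("127.0.0.1", 0, wsgi_app)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The WSGI side can still hand work to the async side when needed.
fut = asyncio.run_coroutine_threadsafe(asyncio.sleep(0, result="pong"), loop)
async_result = fut.result(timeout=5)

# A normal HTTP request is served by the WSGI thread.
body = urlopen("http://127.0.0.1:%d/" % server.server_port).read()

server.shutdown()
loop.call_soon_threadsafe(loop.stop)
```

The two sides share memory but not a listening port; putting nginx or haproxy in front of both ports makes them look like one service, as described above.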

webservice dispatcher

Here is my problem: I have a C++ application that consists of a Qt GUI and quite a lot of back-end code. Currently it is linked into one executable and runs on Solaris. Now I would like to run the GUI on Windows and leave the rest of the code running on Solaris (porting it would be a huge effort). The interface between GUI and back end is pretty clean and consists of one C++ abstract class (which also uses some STL containers). This is the part I would like to turn into a webservice.
The problem is that our back-end code is not thread-safe, so I will need to run a separate process on Solaris for every GUI on Windows. However, for performance reasons I cannot start and finish a process for every request from the GUI.
This design means that I need to take care of several problems:
- there must be a single point of contact for the GUI code,
- the communication must happen with the instance started during the first call (it should either be routed, or the first call should return the address of the actual server instance),
- there must be some keep-alive messages sent between the GUI and the server process to manage the server process's lifetime (the server process cannot run forever).
Could you recommend a framework that would take care of these details (message routing/dispatching and lifetime management)?
You could technically configure Apache httpd to spawn a new instance per connection. The configuration also allows you to manage the time the processes stay alive when idle, and how many processes to leave running at a minimum. This would work well as long as the web service is stateless. A little weird, but technically feasible.
If you use something like gSoap, you can compile your C++ classes in Solaris directly into a gSoap mod and won't have to adapt it to any front-end like PHP or Java. It'll just plug into Apache httpd and start working.
Edit:
I just thought about it, and you could probably use HTTP 1.1 keep-alives to manage the life of the process too. Apache lets you configure how long it will allow the keep-alive to remain open, which keeps the thread/process for the connection active.
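The httpd knobs involved look roughly like this (prefork MPM directive names from Apache 2.4; the values are placeholders to tune):

```apacheconf
# Prefork MPM: one OS process per connection.
StartServers            2
# Minimum idle processes kept ready.
MinSpareServers         2
# Idle processes above this count are reaped.
MaxSpareServers         5
# Hard cap on concurrent processes.
MaxRequestWorkers      20
# 0 = never recycle a process for request count alone.
MaxConnectionsPerChild  0

# How long an idle keep-alive connection (and its process) stays open.
KeepAlive               On
KeepAliveTimeout        30
```

KeepAliveTimeout is what implements the "keep-alive manages the process lifetime" idea: while the GUI keeps the HTTP connection alive, the same process stays bound to it.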

Integrating C++ code with any web technology on Linux

I am writing a program in C++ and I need a web interface to control the program. Which approach would be efficient, and what would be the best programming language for it?
Your application will just have to listen for messages from the network that your web application sends to it.
Any web application, whatever its implementation language, can use sockets, so don't worry about the details; just make sure your application handles messages according to a protocol you have defined.
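Such a protocol can be as small as a length prefix per message. A sketch in Python (the C++ daemon would implement the same framing with `recv`/`send`; the command string is made up):

```python
import socket
import struct

# Tiny framing protocol: each message is a 4-byte big-endian length
# followed by that many bytes of payload.

def send_msg(sock, payload: bytes) -> None:
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exact(sock, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-message")
        buf += chunk
    return buf

def recv_msg(sock) -> bytes:
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)

# Demo over a local socket pair; the web app and the C++ daemon would
# each hold one end of a real TCP connection instead.
a, b = socket.socketpair()
send_msg(a, b"set_speed 42")
command = recv_msg(b)
```

Explicit framing avoids the classic TCP pitfall that one `send` on the wire may arrive split across several `recv` calls.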
Now, if you want to keep it all C++, you could use CPPCMS for your web application.
If it were Windows, I could advise you to register a COM component for your program. At least from ASP.NET it would be easily accessible.
You could try an in-memory exchange technique, such as reading/writing over a localhost socket connection. It does, however, require you to design an exchange protocol first.
Or exchange data via a database: your program writes/reads data from the database, and the web front-end reads/writes data to the database.
You could use a framework like Thrift to communicate between a PHP/Python/Ruby/whatever webapp and a C++ daemon, or you could even go the extra mile (probably harder than just using something like Thrift) and write language bindings for the scripting language of your choice.
Either of the two options gives you the ability to write web-facing code in a language more suitable for the task while keeping the "heavy lifting" in C++.
Did you take a look at Wt? It's a widget-centric C++ framework for web applications, has a solid MVC system, an ORM, ...
The Win32 API method.
MSDN - Getting Started with Winsock:
http://msdn.microsoft.com/en-us/library/ms738545%28v=VS.85%29.aspx
(Since you didn't specify an OS, we're assuming Windows)
This is not as simple as it seems!
There is a mismatch between your C++ program (which presumably is long-running, otherwise why would it need controlling?) and a typical web program, which starts up when it receives the HTTP request and dies once the reply is sent.
You could possibly use one of the Java based web servers where it is possible to have a long running task.
Alternatively you could use a database or other storage as the communication medium:
Your program periodically writes its current status to a well-known table; when a user invokes the control application, it reads the current status and presents an appropriate set of options to the user, which can then be stored in the DB and actioned by your program the next time it polls for requests.
This works better if you have a queuing mechanism available, as it can then be event-driven rather than polled.
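The polled variant of that scheme can be sketched against an in-memory SQLite database. Both sides are simulated in Python here, and the table and column names are made up; in reality the long-running side would be the C++ program polling the same tables:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE status (key TEXT PRIMARY KEY, value TEXT)")
db.execute(
    "CREATE TABLE commands (id INTEGER PRIMARY KEY, action TEXT,"
    " done INTEGER DEFAULT 0)"
)

# Long-running program: publish current status to the well-known table.
db.execute("INSERT OR REPLACE INTO status VALUES ('state', 'running')")

# Web front-end: read the status, then enqueue a command for the program.
(state,) = db.execute("SELECT value FROM status WHERE key='state'").fetchone()
db.execute("INSERT INTO commands (action) VALUES ('pause')")

# Long-running program, on its next poll: pick up and mark pending commands.
pending = db.execute("SELECT id, action FROM commands WHERE done=0").fetchall()
for cmd_id, action in pending:
    db.execute("UPDATE commands SET done=1 WHERE id=?", (cmd_id,))
```

With a queuing mechanism (or SQLite's update hooks in a richer DB) the final polling step becomes a notification instead of a periodic SELECT.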
Go PHP :) Have a look at PHP's Program execution Functions.