Writing a REST Service in C++ with Nginx

I'm a bit underwhelmed by the Nginx module documentation. I have a lot of C++ code and a REST service already running on Boost Beast, and I'd like to compare performance between Beast and NGINX's C(++) module interface, using a benchmark I'll write according to my needs.
I've seen this tutorial here: https://www.evanmiller.org/nginx-modules-guide.html
But I've thus far not seen a concise, short example to just get started.
Is there some hidden documentation? Alternatively, do you have an example showing how to use Nginx as a REST service in C(++)?

Short answer: Do not embed any application code into nginx.
Long answer:
You can write a new nginx module to help nginx do its job better, for example:
a new method of authentication,
or a new transport to the back end, such as shared memory.
Nginx was designed to serve static content, proxy requests, and do some filtering, such as modifying headers.
The main objective of nginx is to do these things as fast as possible while spending as few resources as possible.
It allows your application server to scale dynamically without affecting currently connected users.
Nginx is a good web server, but it was never designed to be an application server.
It does not make much sense to embed application logic into nginx just because it is written in C.
If you need the best of both worlds (proxy, static files, and REST server), then just use them both (nginx and Beast), with each having its own responsibility.
Nginx will take care of load balancing, encryption, and any other non-application-specific function, and the app server will do its own work.
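For illustration, here is a minimal sketch of the app-server half of that split: a synchronous Boost.Beast HTTP server on localhost, with nginx terminating TLS and proxying to it. The port, the nginx location, and the response body are illustrative assumptions, not anything prescribed by Beast or nginx.

```cpp
// Minimal sketch (assumes Boost >= 1.70). nginx would sit in front with
// something like:  location /api/ { proxy_pass http://127.0.0.1:8080; }
// One request per connection, no error handling, for brevity.
#include <boost/asio.hpp>
#include <boost/beast.hpp>

namespace beast = boost::beast;
namespace http  = beast::http;
namespace net   = boost::asio;
using tcp = net::ip::tcp;

int main() {
    net::io_context ioc;
    tcp::acceptor acceptor{ioc, {net::ip::make_address("127.0.0.1"), 8080}};
    for (;;) {
        tcp::socket socket{ioc};
        acceptor.accept(socket);

        beast::flat_buffer buffer;
        http::request<http::string_body> req;
        http::read(socket, buffer, req);

        http::response<http::string_body> res{http::status::ok, req.version()};
        res.set(http::field::content_type, "application/json");
        res.body() = R"({"status":"ok"})";
        res.prepare_payload();            // sets Content-Length
        http::write(socket, res);

        beast::error_code ec;
        socket.shutdown(tcp::socket::shutdown_send, ec);
    }
}
```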
Nginx's architecture is based on non-blocking network/file calls, and each worker serves all of its connections in a single thread; nginx does this well because it mostly just shuffles data back and forth.
If your application code can generate content fast and without blocking calls to external services, then you could embed your app into nginx, at the cost of losing scalability. And if some part of your app requires CPU-bound work or blocking calls, then you need to move such things off the main networking loop, which complicates things "a bit" (see the sketch below).
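To make that last point concrete, here is a schematic Boost.Asio sketch of moving CPU-bound work off the networking loop onto a worker pool; the pool size and the dummy workload are placeholder assumptions.

```cpp
// Schematic sketch: keep the event loop free by offloading heavy work
// to a thread pool (Boost.Asio; pool size and workload are placeholders).
#include <boost/asio.hpp>
#include <iostream>

namespace net = boost::asio;

int main() {
    net::io_context ioc;                    // the "networking" loop
    net::thread_pool workers{4};            // CPU-bound work goes here
    auto guard = net::make_work_guard(ioc); // keep ioc.run() alive

    net::post(workers, [&ioc, &guard] {
        long long sum = 0;                  // stand-in for heavy work
        for (long long i = 0; i < 100000000; ++i) sum += i;
        // hand the result back to the networking loop
        net::post(ioc, [sum, &guard] {
            std::cout << "result ready: " << sum << "\n";
            guard.reset();                  // allow the loop to exit
        });
    });

    ioc.run();       // the event loop stays responsive meanwhile
    workers.join();
}
```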
By embedding your logic into nginx you could probably save some microseconds and file handles on inter-process communication.
For a multi-user WebSocket app like a chat or a stock feed (i.e., an app with long-lived open connections) that could free up real resources, but for a REST app with fast responses it would not yield any gain.
Your REST app most likely uses SSL encryption, and that encryption adds far more microseconds (or milliseconds) to your response time than such an implementation could ever save.
My advice: leave nginx to do its own things and do not interfere with it.

Related

Why is *SGI + Nginx/HTTP considered the best practice for deploying web applications?

My friend recently asked me the following question: given that Django already has runserver, why wasn't it extended to be a production-ready, customer-facing HTTP server? What people do instead is set up a uwsgi server that speaks WSGI and exposes something that Nginx forwards traffic to by reverse proxying...
Based on what I know, many other languages use this pattern: there is a "simple" HTTP server meant for development, as well as an interface for *GI (ASGI/WSGI/FCGI/CGI) that a web server is supposed to reverse proxy to. What is the main reason those development servers never grow production-ready and instead assume the presence of another web server?
Here are some of my theories, but I'm not sure if I'm missing something more significant:
History: dynamic websites date back to Perl/PHP, both of which worked as a "dumb" CGI backend that was basically a filter processing an HTTP request (stdin) into a response (stdout). This architecture worked for some time and became a common pattern,
Performance: web applications are often written in languages that don't JIT, and having a web server written in such a language would introduce extra overhead when milliseconds matter. Also, this lets us speed up static file serving,
Security: Django's runserver is clearly described as potentially insecure, according to this quote:
DO NOT USE THIS SERVER IN A PRODUCTION SETTING. It has not gone through security audits or performance tests. (And that’s how it’s gonna stay. …)
The last point seems to suggest that writing a production-ready HTTP server is too complex to fit within Django's goals; what kind of edge cases would need to be supported to get there?
Are any of these points actually valid, or am I missing the elephant in the room here?
Because they don't want to get into the web server business, and I think that's a wise decision.
Creating, developing, and most importantly maintaining a web server is not a trivial thing. They couldn't simply write it once and then be done (in fact, that's pretty much what they did, and the result is runserver).
Rather than re-invent the wheel, they've chosen to leave it to those who do it best. They're not likely to match the stability and functionality of a proper web server by building one as a side project to support running Django applications. They're better off spending their time making Django better.
It's also consistent with the UNIX philosophy, but that's not necessary to get into here.

Setting up Nginx as a reverse proxy for Apache vs just Apache Event MPM

In the Django docs for setting up mod_wsgi, the tutorial notes:
Django doesn’t serve files itself; it leaves that job to whichever Web
server you choose.
We recommend using a separate Web server – i.e., one that’s not also
running Django – for serving media. Here are some good choices:
Nginx
A stripped-down version of Apache
I understand this might be due to wasted resources when Apache spawns new processes to serve each static file, which Nginx avoids. However, Apache's (newish?) Event MPM seems to act similarly to an Nginx instance handing off requests to an Apache worker MPM. Therefore I'd like to ask: instead of setting up Nginx as a reverse proxy for Apache, would using Apache's Event MPM be sufficient for serving static files?
Apache doesn't spawn a new process for each static file. Apache keeps persistent processes to handle concurrent and subsequent requests, just like nginx. The difference is that nginx uses a fully async model, whereas Apache relies on processes and/or threading for concurrency, although the Event MPM now uses an async model for initial request acceptance and keep-alive connections. For the majority of people, Apache alone is still a more than acceptable solution. So don't get ahead of yourself if you are just starting out and think you need a Google/Facebook-scale solution from the outset.
More important than a separate web server is that, if using Apache/mod_wsgi, you serve the static files under a different host name. That way you avoid heavyweight cookie information being sent with every static file request. You can do this using virtual hosts in Apache. Also ensure you are using daemon mode of mod_wsgi for running the Django application, as that is a better architecture and provides many more options for setting timeouts, so your application can recover from various situations that might otherwise cause the server to lock up when overloaded.
For a system that provides a better out-of-the-box configuration and experience than using Apache/mod_wsgi directly and configuring it yourself, look at using mod_wsgi-express.
https://pypi.python.org/pypi/mod_wsgi
http://blog.dscpl.com.au/2015/04/introducing-modwsgi-express.html
http://blog.dscpl.com.au/2015/04/using-modwsgi-express-with-django.html
http://blog.dscpl.com.au/2015/04/integrating-modwsgi-express-as-django.html
The advice about separating the web servers has two advantages. One is clearly outlined by Graham. The other is "predictable resource consumption".
The number of resources per HTML page differs. Leaving one web server to serve the application and the other to serve static resources has the advantage that you know exactly how many concurrent visitors you can serve: the MaxClients setting of Apache.
If this slows down the loading of images, keep in mind that such web servers need very few modules and no measurable amount of CPU power, so a one-core machine with SSD disks is all you need, and scaling is cheap.
As Graham indicates, it starts with a STATIC_URL that has a different hostname. Run it on the same server at the start. When scaling up, tie that hostname to a reverse proxy that serves from several image-server backend machines.

How to process requests and cookies using C++ with NGINX?

We are planning on migrating from IIS to Nginx to gain performance. Our web layer is very lightweight: for each request we read/set cookies and perform some very quick data cleanup before passing it down to very fast storage (Aerospike). Most requests take under 100ms, but we are experiencing inefficiencies due to IIS binding a thread to each request. We are processing A LOT of concurrent requests.
What's the best way to accomplish the same thing in Nginx? I know it would probably make sense to use C++ for most of my processing. Where do I take care of cookies, and can I do it with C++? How do I forward a request from Nginx down to a compiled C++ binary efficiently?
Thanks for your help!
You will need to write an nginx module. The basic module has to be written in C, but you should be able to write the main workhorse functions in C++. Unfortunately, the API for module development is not well documented, but Evan Miller's guide is your best resource.
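As a sketch of what such a C++ "workhorse" function could look like, here is a standalone cookie parser. The function name and interface are purely illustrative, not part of the nginx API; the idea is that a thin C module would hand it the raw Cookie header string.

```cpp
// Hypothetical C++ "workhorse": parse a raw Cookie header into key/value
// pairs. A thin C nginx module would pass the header string in; nothing
// here is nginx API, it is plain C++17.
#include <iostream>
#include <map>
#include <string>
#include <string_view>

std::map<std::string, std::string> parse_cookies(std::string_view header) {
    std::map<std::string, std::string> cookies;
    std::size_t pos = 0;
    while (pos < header.size()) {
        std::size_t end = header.find(';', pos);
        if (end == std::string_view::npos) end = header.size();
        std::string_view pair = header.substr(pos, end - pos);
        while (!pair.empty() && pair.front() == ' ')  // trim leading spaces
            pair.remove_prefix(1);
        std::size_t eq = pair.find('=');
        if (eq != std::string_view::npos)
            cookies.emplace(std::string(pair.substr(0, eq)),
                            std::string(pair.substr(eq + 1)));
        pos = end + 1;
    }
    return cookies;
}

int main() {
    for (const auto& [name, value] : parse_cookies("session=abc123; theme=dark"))
        std::cout << name << " = " << value << "\n";
}
```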

fcgi vs mod_fastcgi on apache server

I have an Apache server on which I am setting up FastCGI. I was contemplating whether to set up the tailor-made mod_fastcgi or the plain old cgi-fcgi.
mod_fastcgi doesn't seem to support the "multiplexing" features of FastCGI, and the web service I am building is a very high-traffic service with several thousand calls per minute that I want processed as quickly as possible.
Any suggestions or advice?
Indeed, mod_fastcgi does not support multiplexing. I suppose this is because the Apache web server handles concurrent processing itself. You've probably dealt with its various Multi-Processing Modules (MPMs) already...
Apache is highly optimized around the several (request) phases it provides. The various modules can hook in wherever you like, which makes Apache an excellent server for directly integrating high-performance and/or really complex applications (e.g., with custom modules in C, mod_perl, and so on) as modules themselves.
But both mod_fastcgi and cgi-fcgi are, IMHO, only used to provide response and/or filter handlers. Thus, many of the great features (configuration, mapping, post-request logging & cleanup...) provided with Apache simply go unused in such a setup.
Thus, if your application is built on top of FastCGI, I'd rather not recommend Apache, especially for high-performance applications under high load; one may prefer a more lightweight but fast HTTP daemon. There are plenty of alternatives, like nginx or lighttpd.
Usually one would use them as a proxy/balancer in front of the FastCGI processes, and as cache, SSL handler, and logging provider. Of course, Apache is also capable of these tasks, but it's somewhat like using a helicopter to direct traffic at an intersection...
Cheers!
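For completeness, here is a minimal sketch of such a standalone FastCGI worker in C++, assuming the libfcgi development library (fcgiapp.h) is installed; a front server like nginx or lighttpd would be pointed at its socket (e.g., via fastcgi_pass), and the response text is purely illustrative.

```cpp
// Minimal FastCGI responder sketch using libfcgi (compile: g++ app.cpp -lfcgi).
// A front server (nginx/lighttpd) proxies requests to this process's socket.
#include <fcgiapp.h>

int main() {
    FCGX_Init();
    FCGX_Request request;
    FCGX_InitRequest(&request, 0, 0);   // fd 0: socket inherited from spawner

    // Each accepted request is handled serially; no multiplexing here.
    while (FCGX_Accept_r(&request) == 0) {
        const char* uri = FCGX_GetParam("REQUEST_URI", request.envp);
        FCGX_FPrintF(request.out,
                     "Content-Type: text/plain\r\n\r\n"
                     "Hello from a FastCGI worker, you requested %s\n",
                     uri ? uri : "(unknown)");
        FCGX_Finish_r(&request);
    }
    return 0;
}
```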

Django node.js socket.io

I am trying to make a realtime messaging application. There will be two distinct servers (node.js and Django); when a user sends a message to another user, the message will be stored in the database, and then node.js will send the receiver a notification like "You have a new message!". For that I am planning to call a URL served by node.js, so node.js and Django will interact with each other. Also, what is the best way to send a message to a specific client? (I keep clients, with their IDs, in an associative array.)
What do you think about that? Is it efficient, or do you suggest a better way to do this?
Now that I understand more about what you're trying to do, here is my answer; just keep in mind that this only reflects my opinion, and I bet that many others would argue about it.
It all depends on how much traffic you expect your application to have. If it's not a high-traffic application, then runtime efficiency is insignificant compared to development efficiency, so choose the technology you feel most comfortable with.
If, though, you do aim for a high-traffic application, then I believe this setup is not a good one.
First of all, while HTTP-based communication between servers might seem comfortable, you are dealing with the overhead of HTTP on top of TCP (since HTTP is layered over TCP), so plain TCP sockets scale better. On the other hand, if you write the socket server in Python, you can run it in the same process as Django and just use it as an object from Django (you're entering the realm of threads here). But that's problematic if you have several web instances; again, it depends on how much traffic you expect.
As for your choice of implementation for the messaging server, I've never tested node.js, but I believe that in benchmark tests it won't compare to something written in Erlang or Java NIO. For example: JAVA AIO (NIO.2) VS NODEJS