How to run several apps on one EC2 instance?

It's probably related to this question: How to run more than one app on one instance of EC2
But that question only seemed to be talking about multiple node.js apps.
I am trying to learn several different things, so I'm building different websites to learn Ruby on Rails, LAMP, and node.js, along with my personal website and blog.
Is there any way to run all these on the same EC2 instance?

First, there's nothing EC2-specific about running multiple web apps on one box. You'll want to use nginx (or Apache) as a "reverse proxy": the web server listens on port 80 (and 443), your apps run on various other ports, and the server reads each incoming request's "Host" header to map it to a backend. That way, different DNS names/domains serve different content.
Here is how to set up nginx in reverse proxy mode: http://www.cyberciti.biz/tips/using-nginx-as-reverse-proxy.html
For each "back-end" app, you'll want to:
1) Allocate a port (3000 in this example)
2) Write an upstream stanza that tells nginx where your app listens
3) Write a (virtual) server stanza that maps the server name to the upstream location
For example:
upstream app1 {
    server 127.0.0.1:3000;  # App1's port
}

server {
    listen *:80;
    server_name app1.example.com;

    # You can put access_log / error_log directives here to break them
    # out of the common log.

    ## Send requests to the backend
    location / {
        proxy_pass http://app1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
I prefer to have Nginx in front of Apache for two reasons: 1) nginx can serve static files with much less memory, and 2) nginx buffers data to/from the client, so people on slow internet connections don't clog your back-ends.
When testing your config, run nginx -s reload to reload it, and curl -v -H "Host: app1.example.com" http://localhost/ to exercise a specific virtual host from your config.
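To tie this back to the original question (Rails, a LAMP site, and node.js on one instance), a minimal sketch of the whole layout might look like the following. All hostnames and ports here are hypothetical: it assumes Apache has been moved off port 80 to 8080, and that the Rails and node apps listen on 3000 and 3001.

```nginx
# Sketch only -- hostnames and ports are examples, not from the question.
upstream rails_app { server 127.0.0.1:3000; }
upstream node_app  { server 127.0.0.1:3001; }
upstream lamp_app  { server 127.0.0.1:8080; }  # Apache, moved off port 80

server {
    listen 80;
    server_name rails.example.com;
    location / {
        proxy_pass http://rails_app;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

server {
    listen 80;
    server_name blog.example.com;  # e.g. the blog on the LAMP stack
    location / {
        proxy_pass http://lamp_app;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

server {
    listen 80;
    server_name node.example.com;
    location / {
        proxy_pass http://node_app;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

With Apache reconfigured to Listen 8080, nginx owns ports 80/443 and each hostname routes to its own backend.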

Adding to @Brave's answer, I would like to share my nginx configuration for those looking for the exact syntax to implement this.
server {
    listen 80;
    server_name mysite.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:3000;
    }
}

server {
    listen 80;
    server_name api.mysite.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:4500;
    }
}
Just create two server blocks, each with a unique server_name, and mind the proxy_pass port in each block.

Related

AWS Elastic Beanstalk Docker app can't be reached on https

I have been trying to get my app to run on https. It is a single-instance, single-container Docker app that runs Dart code and serves on 8080. So far, the app runs on http perfectly. I do not have, nor want, a load balancer.
I have followed the directions here: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/https-singleinstance-docker.html and here: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/configuring-https-httpredirect.html. I also have it configured to connect to my site at "server.mysite.com", but I am getting a refused-to-connect error. I am somewhat of a noob at this, so if you need more information let me know.
The issue is that the instance is not listening on 443. It turns out that since I deployed on Amazon Linux 2, there is a different way of supplying the https.conf file that the docs have you create.
Here is a reference: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/platforms-linux-extend.html. Essentially, I made a folder in the project root (next to .ebextensions) and added a file at the path .platform/nginx/conf.d/https.conf with the contents the docs wanted, e.g.
server {
    # 'ssl on;' is deprecated in current nginx; enable TLS on the listen directive
    listen 443 ssl;
    server_name localhost;

    ssl_certificate /etc/pki/tls/certs/server.crt;
    ssl_certificate_key /etc/pki/tls/certs/server.key;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

How to get my Django REST API to interact with an Angular front-end hosted behind an nginx server

I am trying to launch my web app with Django, Angular, and Nginx. During the development phase I made services within Angular that send requests to 127.0.0.1:8000. I was able to get my Angular project to display over my domain name; however, when I try to log into my app over another network it won't work. Is this because I am pointing at 127.0.0.1:8000? Do I need to configure a web server gateway or API gateway for Django? Do I need to point the services in Angular to a different address? Or did I configure something wrong within Nginx? If anyone can help me I would greatly appreciate it.
upstream django_server {
    server 127.0.0.1:8000;
}

server {
    listen 80;
    listen 443 ssl;
    server_name example.com www.example.com;

    ssl_certificate C:/Certbot/live/example.com/fullchain.pem;
    ssl_certificate_key C:/Certbot/live/example.com/privkey.pem;

    root /nginx_test/www1/example.com;
    index index.html;

    location = /favicon.ico {
        return 204;
        access_log off;
        log_not_found off;
    }

    location /api-token/ {
        proxy_pass http://django_server/api-token/;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
I think the reason is your Angular service configuration. Instead of 127.0.0.1, change it to your REST API server's address.
As I understand it, when you open your app in the browser you load all the static files into your pc/laptop browser. Because of that, every time a front-end service fires it tries to get a response from your own laptop/pc instead of your backend server.
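One way to avoid hardcoding any host in the Angular services is to call a relative path such as /api/ and let nginx route it to Django on the same domain. A sketch, assuming the /api/ prefix and the backend port (neither is from the question):

```nginx
server {
    listen 80;
    server_name example.com;

    # Serve the built Angular app
    root /nginx_test/www1/example.com;
    index index.html;

    # Requests under /api/ go to Django; the Angular services can then use
    # relative URLs like /api/login/ instead of http://127.0.0.1:8000/
    location /api/ {
        proxy_pass http://127.0.0.1:8000/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Since the browser then talks only to the domain it loaded the app from, the same configuration works from any network.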

I have a problem deploying my Django project with Docker, nginx and gunicorn

If my containers are running on the server side and everything is okay, then to access a route in the browser should I put the IP of the nginx container with the domain in /etc/hosts, or will it work without that?
My nginx config:
server {
    listen 80;
    server_name my_domain;
    root /code/;
    error_log /var/log/nginx/new_error.log debug;

    location / {
        proxy_pass http://web:8000/;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /staticfiles/ {
        alias /code/staticfiles/;
    }

    location /mediafiles/ {
        alias /code/mediafiles/;
    }
}
where web is my Docker container running gunicorn
If your nginx container has its ports published using the flag -p <host_port>:<container_port>, you can reach the nginx service by adding the domain to your /etc/hosts file pointing to the IP of your localhost. However, if you didn't use this flag, you need to point to the IP of the nginx container instead. What's the difference? When you publish the ports, you can use the service even from outside the host where the container was deployed.
I hope this helps.

What's the purpose of setting "X-Forwarded-For" header in nginx

I have the following Nginx configuration for my Django application:
upstream api {
    server localhost:8000;
}

server {
    listen 80;

    location / {
        proxy_pass http://api;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /staticfiles {
        alias /app/static/;
    }
}
I based this config on a tutorial here. After some research, it looks like setting the Host header allows the Django API to determine the original client's IP address (instead of the IP address of the proxy).
What's the point of the X-Forwarded-For header? I see a field called $http_x_forwarded_for in the nginx logs, but I'm not sure whether it's related.
From the Mozilla docs
The X-Forwarded-For (XFF) header is a de-facto standard header for identifying the originating IP address of a client connecting to a web server through an HTTP proxy or a load balancer. When traffic is intercepted between clients and servers, server access logs contain the IP address of the proxy or load balancer only. To see the original IP address of the client, the X-Forwarded-For request header is used.
In fact, I think you have misunderstood the Host header: it does not carry the client's IP at all. It carries the hostname the client originally requested, so forwarding it lets Django see the public domain (for ALLOWED_HOSTS checks and URL building) instead of the upstream's internal name. The client's IP is exactly what X-Forwarded-For is for, and yes, $http_x_forwarded_for in the logs is that header.
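As a concrete illustration of where that log field comes from: the stock nginx.conf shipped with nginx defines a "main" log format along these lines (approximate; your distribution's file may differ):

```nginx
# This is where the $http_x_forwarded_for field in your access log comes from.
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for"';

access_log /var/log/nginx/access.log main;
```

Here $remote_addr is the peer that opened the TCP connection (the proxy, if one sits in between), while $http_x_forwarded_for simply echoes whatever X-Forwarded-For header arrived with the request.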

localhost in build_absolute_uri for Django with Nginx

On production I use the chain Django - uWSGI - Docker - Nginx. uWSGI listens on port 50012 and Nginx is configured with:
proxy_pass http://localhost:50012;
The Django process thinks that its host is localhost:50012 instead of the domain that Nginx listens on. So when the function build_absolute_uri is called, the result contains localhost:50012 instead of my domain. Is there a way to make Django use the correct host name when build_absolute_uri is called?
Notice: some libraries call build_absolute_uri implicitly (like social-django, for example), so avoiding this function is not a solution in my case.
The problem
When the public hostname you use to reach the proxy differs from the internal hostname of the application server, Django has no way to know which hostname was used in the original request unless the proxy passes this information along.
Possible Solutions
1) Set the proxy to pass along the original host
From MDN:
The X-Forwarded-Host (XFH) header is a de-facto standard header for identifying the original host requested by the client in the Host HTTP request header.
Host names and ports of reverse proxies (load balancers, CDNs) may differ from the origin server handling the request, in that case the X-Forwarded-Host header is useful to determine which Host was originally used.
There are two things you should do:
ensure all proxies in front of Django are passing along the X-Forwarded-Host header
turn on USE_X_FORWARDED_HOST in the settings
if the internal and external scheme differ as well, set SECURE_PROXY_SSL_HEADER to a meaningful value and set the server to send the corresponding header
When USE_X_FORWARDED_HOST is set to True in settings.py, HttpRequest.build_absolute_uri uses the X-Forwarded-Host header instead of request.META['HTTP_HOST'] or request.META['SERVER_NAME'].
I will not delve too much into the proxy setup part (as it is more related to professional network administration than to programming in the scope of this site) but for nginx it should be something like:
location / {
    ...
    proxy_set_header X-Forwarded-Host $host:$server_port;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    ...
    proxy_pass http://upstream:port;
}
This is probably the best solution, as it is fully dynamic: you don't have to change anything if the public scheme/hostname changes in the future.
If the internal and external scheme differ as well you may want to set SECURE_PROXY_SSL_HEADER in settings.py to something like this:
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
And then add the following to the server config:
proxy_set_header X-Forwarded-Proto https;
2) Use the same hostname for public and private servers
Let's say your public hostname is "host.example.com": you can add a line like this to your /etc/hosts (on Windows, %windir%\System32\drivers\etc\hosts):
127.0.0.1 host.example.com
Now you can use the hostname in the nginx config:
proxy_pass http://host.example.com:port;
When the internal and external scheme differ as well (external https, internal http), you may want to set SECURE_PROXY_SSL_HEADER as described in the first solution.
Every time the public hostname changes you will have to update the config but I guess this is OK for small projects.
I got mine working using proxy_redirect
Let's say you have a container or an upstream with the name app and you want it to return 127.0.0.1 as the host; then your config should include:
server {
    listen 80;

    location / {
        proxy_pass http://app:8000;
        proxy_redirect http://app:8000 http://127.0.0.1:8000;
    }
}
Here's my final config:
server {
    listen 80;

    location / {
        proxy_pass http://app:8000;
        proxy_set_header X-Forwarded-Host $host:$server_port;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_redirect http://app:8000 http://127.0.0.1:8000;
    }
}
Also check out this article for a detailed explanation: https://mattsegal.dev/nginx-django-reverse-proxy-config.html
I had the same problem as the question; I used nginx, gunicorn and Django in production without Docker.
get_current_site(request).domain
returned localhost, so I had an issue with the drf-yasg base URL. I solved it simply by adding
include proxy_params;
to the nginx conf.
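For context, on Debian/Ubuntu packages that included file lives at /etc/nginx/proxy_params and typically contains the following (contents may vary by distribution; this reflects what the Debian package ships):

```nginx
# /etc/nginx/proxy_params (Debian/Ubuntu) -- forwards the original host and
# client address to the backend
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
```

Passing Host $http_host is the part that fixes get_current_site(), since Django derives the domain from the Host header it receives.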