Nginx: different robots.txt for alternate domain - django

Summary
I have a single web app with an internal and an external domain pointing at it, and I want robots.txt to disallow all crawling of the internal domain but allow all crawling of the external domain.
Problem Detail
I have a simple Nginx server block that I use to proxy requests to a Django application (see below). As you can see, this server block responds to any domain (due to the lack of a server_name parameter). However, I'm wondering how to tell Nginx to serve a custom robots.txt file for specific domains.
More specifically, say the domains example.com and www.example.com should serve the default robots.txt file from the htdocs directory (since root /sites/mysite/htdocs is set and a robots.txt file is located at /sites/mysite/htdocs/robots.txt).
BUT, I also want the domain internal.example.com (which points at the same server as example.com) to serve a custom robots.txt file; I'd like that custom robots.txt to keep Google from indexing the internal domain.
I thought about duplicating the server block, specifying the following in one of the blocks, and then somehow overriding the robots.txt lookup in that block:
"server_name internal.example.com;"
But duplicating the whole server block just for this purpose doesn't seem very DRY.
I also thought about using an if statement to check whether the Host header contains the internal domain, and serving the custom robots.txt file that way. But the Nginx docs say If Is Evil.
What is a good approach for serving up a custom robots.txt file for an internal domain?
Thank you for your help.
Here is a code sample of the server block that I'm using.
upstream app_server {
    server unix:/sites/mysite/var/run/wsgi.socket fail_timeout=0;
}

server {
    listen 80;
    root /sites/mysite/htdocs;

    location / {
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Protocol $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app_server;
    }
}

You can use map to define a conditional variable. Add this outside your server block (at the http level):
map $host $robots_file {
    default robots.txt;
    internal.example.com internal-robots.txt;
}
Then the variable can be used with try_files inside your existing server block:
location = /robots.txt {
    try_files /$robots_file =404;
}
Now you can have two robots.txt files in your root:
robots.txt
internal-robots.txt
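For the internal file, a minimal sketch of a deny-everything robots.txt (standard robots.txt syntax; adjust it if you only want to block specific crawlers):

# /sites/mysite/htdocs/internal-robots.txt
User-agent: *
Disallow: /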

Related

How to get my Django REST API to interact with an Angular front-end hosted on an Nginx server

I am trying to launch my web app with Django, Angular, and Nginx. During the development phase I made services within Angular that send requests to 127.0.0.1:8000. I was able to get my Angular project to display over my domain name. However, when I try to log into my app over another network, it won't work. Is this because I am pointing at 127.0.0.1:8000? Do I need to configure a web server gateway or API gateway for Django? Do I need to point the services in Angular to a different address? Or did I configure something wrong within Nginx? If anyone can help me I would greatly appreciate it.
upstream django_server {
    server 127.0.0.1:8000;
}

server {
    listen 80;
    listen 443 ssl;
    server_name example.com www.example.com;

    ssl_certificate C:/Certbot/live/example.com/fullchain.pem;
    ssl_certificate_key C:/Certbot/live/example.com/privkey.pem;

    root /nginx_test/www1/example.com;
    index index.html;

    location = /favicon.ico {
        return 204;
        access_log off;
        log_not_found off;
    }

    location /api-token/ {
        proxy_pass http://django_server/api-token/;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
I think the cause is in your Angular service configuration. Instead of 127.0.0.1, try changing it to your REST API server's address.
As I understand it, when you open your app in the browser, all the static files are loaded into your PC/laptop browser. Because of that, every time a front-end service fires, it tries to get a response from your laptop/PC instead of from your backend server.
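One way to avoid hard-coding a backend address at all is to serve the API under the same domain and have Angular call relative URLs. A minimal sketch, assuming the Django routes are also mounted under /api/ (the /api/ prefix is an assumption; the django_server upstream name comes from the question's config):

location /api/ {
    # no URI after the upstream name, so the original /api/... path is passed through
    proxy_pass http://django_server;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}

With this in place, the Angular services can request /api/... instead of http://127.0.0.1:8000/..., so the same build works from any network.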

localhost in build_absolute_uri for Django with Nginx

On production I use the chain Django - uWSGI - Docker - Nginx. uWSGI listens on port 50012 and Nginx is configured as:
proxy_pass http://localhost:50012;
The Django process thinks that its host is localhost:50012 instead of the domain that Nginx listens on. So when the function build_absolute_uri is called, the result contains localhost:50012 instead of my domain. Is there a way to make Django use the custom host name when build_absolute_uri is called?
Notice: some libraries call build_absolute_uri implicitly (social-django, for example), so avoiding this function is not a solution in my case.
The problem
When the public hostname you use to reach the proxy differs from the internal hostname of the application server, Django has no way to know which hostname was used in the original request unless the proxy passes this information along.
Possible Solutions
1) Set the proxy to pass along the original host
From MDN:
The X-Forwarded-Host (XFH) header is a de-facto standard header for identifying the original host requested by the client in the Host HTTP request header.
Host names and ports of reverse proxies (load balancers, CDNs) may differ from the origin server handling the request, in that case the X-Forwarded-Host header is useful to determine which Host was originally used.
There are two things you should do:
ensure all proxies in front of Django are passing along the X-Forwarded-Host header
turn on USE_X_FORWARDED_HOST in the settings
if the internal and external scheme differ as well, set SECURE_PROXY_SSL_HEADER to a meaningful value and set the server to send the corresponding header
When USE_X_FORWARDED_HOST is set to True in settings.py, HttpRequest.build_absolute_uri uses the X-Forwarded-Host header instead of request.META['HTTP_HOST'] or request.META['SERVER_NAME'].
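On the Django side this boils down to a couple of settings (a minimal sketch; the hostname below is a placeholder for your real public domain):

# settings.py
USE_X_FORWARDED_HOST = True
ALLOWED_HOSTS = ['host.example.com']  # the public hostname(s) the proxy serves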
I will not delve too much into the proxy setup part (as it is more related to professional network administration than to programming in the scope of this site) but for nginx it should be something like:
location / {
    ...
    proxy_set_header X-Forwarded-Host $host:$server_port;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    ...
    proxy_pass http://upstream:port;
}
This is probably the best solution, as it is fully dynamic: you don't have to change anything if the public scheme/hostname changes in the future.
If the internal and external scheme differ as well you may want to set SECURE_PROXY_SSL_HEADER in settings.py to something like this:
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
And then add the following to the server config:
proxy_set_header X-Forwarded-Proto https;
2) Use the same hostname for public and private servers
Let's say your public hostname is "host.example.com": you can add a line like this to your /etc/hosts (on Windows, %windir%\System32\drivers\etc\hosts):
127.0.0.1 host.example.com
Now you can use the hostname in the nginx config:
proxy_pass http://host.example.com:port;
When the internal and external scheme differ as well (external https, internal http), you may want to set SECURE_PROXY_SSL_HEADER as described in the first solution.
Every time the public hostname changes you will have to update the config but I guess this is OK for small projects.
I got mine working using proxy_redirect
Let's say you have a container or an upstream with the name app and you want it to return 127.0.0.1 as the host. Then your config should include:
server {
    listen 80;

    location / {
        proxy_pass http://app:8000;
        proxy_redirect http://app:8000 http://127.0.0.1:8000;
    }
}
Here's my final config:
server {
    listen 80;

    location / {
        proxy_pass http://app:8000;
        proxy_set_header X-Forwarded-Host $host:$server_port;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_redirect http://app:8000 http://127.0.0.1:8000;
    }
}
Also check out this article for a detailed explanation: https://mattsegal.dev/nginx-django-reverse-proxy-config.html
I had the same problem as in the question; I was using Nginx, Gunicorn, and Django in production, without Docker.
get_current_site(request).domain
returned localhost, so I had an issue with the drf-yasg base URL. I solved it simply by adding
include proxy_params;
to the Nginx config.
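For reference, on Debian/Ubuntu the stock /etc/nginx/proxy_params file typically contains the forwarding headers discussed above (the exact contents are distribution-dependent, so check your own file):

proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;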

Nginx redirects to default page

I am setting up a domain for my Django/Gunicorn/Nginx server. It works fine with an IP address in server_name, but when I use the domain name instead, it serves the default Ubuntu Nginx page. My Nginx file looks like this (please note that I replaced my domain with example.com):
Path : /etc/nginx/sites-available/projectname
server {
    listen 80;
    server_name example.com;
    return 301 $scheme://www.example.com$request_uri;
}

server {
    listen 80;
    server_name www.example.com;
    client_max_body_size 4G;

    location = /favicon.ico {
        access_log off;
        log_not_found off;
    }

    location /static/ {
        root /path/to/static/dir;
    }

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://unix:/path/to/gunicorn.sock;
    }
}
I have run sudo nginx -t and sudo service nginx restart, but it has no effect. Please let me know if I am doing anything wrong.
1) Check how the main nginx.conf includes config files. If it includes the sites-enabled path, go to that directory and confirm there is a symlink pointing to this site's config file under sites-available (see the commands after this list). Alternatively, include the available sites directly in the Nginx config:
include /etc/nginx/sites-available/*;
2) Merge the two server blocks into one, with a rule that forwards non-www to www.
3) If that does not work, check for a DNS configuration problem: test from inside the server (e.g. over PuTTY/SSH) as well as from outside with a browser, to tell whether it is an Nginx problem or a DNS problem.
Note: changes to DNS name servers can take some hours to propagate and take effect for clients.
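A minimal sketch of the symlink check in step 1 (the path follows the question's /etc/nginx/sites-available/projectname; adjust the names to your setup):

# enable the site by symlinking it into sites-enabled
sudo ln -s /etc/nginx/sites-available/projectname /etc/nginx/sites-enabled/projectname
# make sure the default site is not shadowing it
sudo rm /etc/nginx/sites-enabled/default
# validate the config and reload
sudo nginx -t && sudo service nginx reload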

letsencrypt django webroot

I am trying to set up my Nginx and Django so that I can renew certificates.
However, something goes wrong with the webroot plugin.
in nginx:
location ~ /.well-known {
allow all;
}
But when I run the renewal command:
./letsencrypt-auto certonly -a webroot --agree-tos --renew-by-default --webroot-path=/home/sult/huppels -d huppels.nl -d www.huppels.nl
However, it seems that the cert renewal wants to retrieve a file from my server, because I get the following error:
The following errors were reported by the server:
Failed authorization procedure. www.huppels.nl (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: Invalid response from http://www.huppels.nl/.well-known/acme-challenge/some_long_hash [51.254.101.239]: 400
How do I make this work with Nginx or Django?
I have my Django app running with gunicorn. I followed the instructions here.
I made sure to include the proper location blocks:
location /static {
    alias /home/user/webapp;
}

location / {
    proxy_pass http://127.0.0.1:8000;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
Making sure to include any template location alias as well.
I set the .well-known location block like this:
location /.well-known {
    alias /home/user/webapp/.well-known;
}
This points it directly at the root of the webapp instead of using allow all.
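To confirm the challenge path is reachable before re-running the renewal, a quick sanity check (the webroot path follows the /home/user/webapp example above and the domain comes from the question; adjust both to your own setup):

mkdir -p /home/user/webapp/.well-known/acme-challenge
echo ok > /home/user/webapp/.well-known/acme-challenge/test
curl http://www.huppels.nl/.well-known/acme-challenge/test   # should print "ok"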
I did have to make sure that I only used the non-SSL server block until the certificate was generated; then I switched to a different Nginx config based on h5bp's Nginx configs.
Note: Make sure you have proper A records for your domain pointing to www if you are going to use h5bp to redirect to www.

How to run several apps on one EC2 instance?

It's probably related to this question: How to run more than one app on one instance of EC2
But that question only seemed to be talking about multiple node.js apps.
I am trying to learn several different things, so I'm building different websites to learn Ruby on Rails, LAMP, and node.js, along with my personal website and blog.
Is there any way to run all these on the same EC2 instance?
First, there's nothing EC2-specific about setting up multiple web apps on one box. You'll want to use nginx (or Apache) in "reverse proxy" mode: the web server listens on port 80 (and 443), and your apps run on various other ports. For each incoming request, the web server reads the "Host" header to map the request to a backend, so different DNS names/domains show different content.
Here is how to set up nginx in reverse proxy mode: http://www.cyberciti.biz/tips/using-nginx-as-reverse-proxy.html
For each "back-end" app, you'll want to:
1) Allocate a port (3000 in this example)
2) Write an upstream stanza that tells nginx where your app is
3) Write a (virtual) server stanza that maps the server name to the upstream location
For example:
upstream app1 {
    server 127.0.0.1:3000; # App1's port
}

server {
    listen *:80;
    server_name app1.example.com;

    # You can put access_log / error_log sections here to break them out of the common log.

    ## send request to backend
    location / {
        proxy_pass http://app1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
I prefer to have Nginx in front of Apache for two reasons: 1) nginx can serve static files with much less memory, and 2) nginx buffers data to/from the client, so people on slow internet connections don't clog your back-ends.
When testing your config, use nginx -s reload to reload the config, and curl -v -H "Host: app1.example.com" http://localhost/ to test a specific domain from your config.
Adding to @Brave's answer, I would like to share my Nginx configuration for those who are looking for the exact syntax.
server {
    listen 80;
    server_name mysite.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:3000;
    }
}

server {
    listen 80;
    server_name api.mysite.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:4500;
    }
}
Just create two server blocks, each with a unique server_name and its own proxied port.
Mind the proxy_pass in each block.
Thank you.