Nginx: Connection refused after removing port bindings from docker-compose file

Basically, I am trying to get access to MariaDB from the website. I needed to run the docker containers so that their network requests look as if they originate from the instance they run on (an AWS Ubuntu instance). For that, I used network_mode: "host" in my docker-compose file for each container. I also had to comment out the port bindings, since they are not compatible with host network mode. However, now I get a "Connection refused" error.
I believe I have to change something in the nginx config file, which is project.conf, but I am not sure what.
This is the docker-compose file:
version: '3'
services:
  app:
    restart: always
    build: ./app
    #ports:
    #  - "8501:8501"
    env_file:
      - .env
    command: streamlit run Main.py
    network_mode: "host"
  mariadb:
    image: mariadb:10.9.3
    #ports:
    #  - "3306:3306"
    volumes:
      - db_data:/var/lib/mysql
      - db_conf:/etc/mysql/conf.d
    environment:
      MYSQL_ROOT_PASSWORD: "xxx"
      MYSQL_DATABASE: "xxx"
      MYSQL_USER: "xxx"
      MYSQL_PASSWORD: "xxx"
      MYSQL_ROOT_HOST: "xxx"
    network_mode: "host"
  nginx:
    restart: always
    build: ./nginx
    #ports:
    #  - "80:80"
    depends_on:
      - app
      - mariadb

volumes:
  db_data:
  db_conf:
And, this is the project.conf file:
server {
    listen 80;
    server_name helloworld-st-app;

    location / {
        proxy_pass http://app:8501/;
    }
    location ^~ /static {
        proxy_pass http://app:8501/static/;
    }
    location ^~ /healthz {
        proxy_pass http://app:8501/healthz;
    }
    location ^~ /vendor {
        proxy_pass http://app:8501/vendor;
    }
    location /stream {
        proxy_pass http://app:8501/stream;
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 86400;
    }
}
I tried removing 8501 from the URL, but still nothing. Any help on this would be much appreciated. Thanks in advance.
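For what it's worth, with network_mode: "host" the containers share the host's network namespace, so Docker's service-name DNS is gone: nginx can no longer resolve app or mariadb, and each service has to be addressed on 127.0.0.1 and its own port. A minimal sketch of how project.conf could be adjusted under that assumption (8501 is the Streamlit port from the compose file; note the nginx service above would also need network_mode: "host", since its port binding is commented out):

# Sketch only: under host networking, upstreams are reached via localhost,
# not via Docker service names such as "app".
server {
    listen 80;
    server_name helloworld-st-app;

    location / {
        proxy_pass http://127.0.0.1:8501/;
    }
}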

Related

Uniting two virtual environments/servers/apps into one (Nginx/Django)

My project has two virtual environments, "main" and "test". I want to unite them on one server. I've been advised to use an nginx proxy to do this, but I'm not sure how, especially since each environment already has its own network.
The docker-compose .yml for the backend of one project (infra/main folder); the backend of the "test" project is similar:
version: "3.8"
services:
postgres:
image: postgres:13.3
container_name: postgres_main
restart: always
volumes:
- postgres_data_main:/var/lib/postgresql/data
ports:
- 5432:5432
env_file:
- .env-main
networks:
- main_db_network
backend:
<...>
depends_on:
- postgres
env_file:
- .env-main
networks:
- main_db_network
- main_swag_network
migrations:
<...>
networks:
main_db_network:
name: main_db_network
external: true
main_swag_network:
name: main_swag_network
external: true
volumes:
postgres_data_main:
name: postgres_data_main
static_value_main:
name: static_value_main
How do I set up a nginx_proxy to unite the two on one server?
You need to add a new nginx service, probably in a separate docker-compose file.
nginx.conf will look like:
upstream main {
    server backend:8000;  # name of the service in the compose file and its internal port
}
upstream test {
    server test-backend:8000;
}

server {
    listen 80;

    location /main {
        proxy_pass http://main;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /test {
        proxy_pass http://test;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
}
Or, instead of changing service names, you can differentiate the ports, e.g. map main as 8000:8000 and test as 8001:8000.
Dockerfile for nginx:
FROM nginx:1.19.0-alpine
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d
docker-compose.yml for serving Nginx
version: "3.8"
services:
nginx:
build: ./nginx
ports:
- "80:80"
networks:
- main_swag_network
- test_swag_network
networks:
main_swag_network:
external: true
test_swag_network:
external: true
It just needs to run nginx and be connected to both of the external networks defined in the test and main configs.
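As a usage sketch (the compose file paths are assumptions; the network names are taken from the configs above), the external networks have to exist before any of the stacks start:

# Sketch only: create the shared external networks once, then start the stacks.
docker network create main_swag_network
docker network create test_swag_network

docker-compose -f infra/main/docker-compose.yml up -d   # "main" backend
docker-compose -f infra/test/docker-compose.yml up -d   # "test" backend (path assumed)
docker-compose -f nginx/docker-compose.yml up -d        # the proxy defined above (path assumed)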

Nginx support multiple hostnames

I am working on my django + nginx + docker-compose project.
I want to access my site via its IP and via mysite.com.
Problem: the IP URL works, but mysite.com returns an error:
403 Forbidden (nginx)
My code - docker-compose.yml
services:
  django:
    build: ./project  # path to Dockerfile
    command: sh -c "
      sleep 3 && gunicorn --bind 0.0.0.0:8000 core_app.wsgi"
    ...
    expose:
      - 8000
    env_file:
      - ./.env
    depends_on:
      - db
  nginx:
    image: nginx:1.19.8-alpine
    depends_on:
      - django
    env_file:
      - ./.env
    ports:
      - "80:80"
    volumes:
      - ./project/nginx-conf.d/:/etc/nginx/conf.d
    ...
nginx-conf.conf
upstream app {
    server django:8000;
}

server {
    listen 80;
    server_name 127.0.0.1 mysite.com www.mysite.com;

    location / {
        proxy_pass http://django:8000;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /static/ {
        alias /var/www/html/static/;
    }
}
UPDATE
I tried replacing proxy_pass http://django:8000; with proxy_pass http://app;, but it didn't help.
The value of proxy_pass is incorrect.
When you're referencing an upstream group, you have to pass the name of the group to proxy_pass.
In your case, the name of the upstream group is "app", so the value of proxy_pass should look like this:
proxy_pass http://app;
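A minimal sketch of the adjusted location block, assuming the rest of the server block stays as posted above:

location / {
    proxy_pass http://app;  # name of the upstream group defined above
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;
    proxy_redirect off;
}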

How to serve phpMyAdmin to localhost/phpMyAdmin instead of localhost:8080 using nginx in docker

In my project, I am using Django and nginx, but I want to manage my cloud databases through phpmyadmin.
Django is working fine, but phpMyAdmin is not: it is being served by Apache at localhost:8080, when I want it served by nginx at localhost/phpmyadmin.
Here is the docker-compose.yml:
version: "3.9"
services:
web:
restart: always
build:
context: .
env_file:
- .env
volumes:
- ./project:/project
expose:
- 8000
nginx:
restart: always
build: ./nginx
volumes:
- ./static:/static
ports:
- 80:80
depends_on:
- web
phpmyadmin:
image: phpmyadmin/phpmyadmin:latest
restart: always
environment:
PMA_HOST: <host_address>
PMA_USER: <user>
PMA_PASSWORD: <password>
PMA_PORT: 3306
UPLOAD_LIMIT: 300M
ports:
- 8080:80
and nginx default.conf
upstream django {
    server web:8000;
}

server {
    listen 80;

    location / {
        proxy_pass http://django;
    }

    location /pma/ {
        proxy_pass http://localhost:8080/;
        proxy_buffering off;
    }

    location /static/ {
        alias /static/;
    }
}
I hope somebody will be able to tell me how to make nginx work as a reverse proxy for the phpMyAdmin docker container.
If some important information is missing please let me know.
You can access another docker container by its hostname and its internal port (not the exposed one).
A rewrite of the URL is also necessary.
location ~ \/pma {
    rewrite ^/pma(/.*)$ $1 break;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header Host $host;
    proxy_pass http://phpmyadmin;
}
I tested with this docker-compose.yml:
version: "3.9"
services:
nginx:
image: nginx:latest
volumes:
- ./templates:/etc/nginx/templates
ports:
- 80:80
phpmyadmin:
image: phpmyadmin/phpmyadmin:latest
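For completeness, that location block has to sit inside a server block. A minimal sketch of a config that could go into the mounted templates directory (the file name default.conf.template is an assumption; the official nginx image renders *.template files from /etc/nginx/templates into /etc/nginx/conf.d at startup):

# Sketch only: e.g. ./templates/default.conf.template (file name assumed)
server {
    listen 80;

    location ~ \/pma {
        rewrite ^/pma(/.*)$ $1 break;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_pass http://phpmyadmin;  # compose service name, internal port 80
    }
}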

NGINX doesn't work when proxy_set_header is set to $host

I've been setting up a simple docker-compose file for a Django application, in which I have 3 containers: the Django app, a Postgres container, and NGINX. I successfully set up both Django and Postgres and tested connecting directly to their containers, so the only thing left was to set up NGINX in the docker-compose file. I used the following NGINX default.conf file, from another template repository:
upstream django {
    server app:8000;
}

server {
    listen 80;
    server_name localhost;

    location / {
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        proxy_pass http://django;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }

    location /static/ {
        autoindex on;
        alias /static/;
    }

    location /media/ {
        autoindex on;
        alias /media/;
    }
}
And this was my docker-compose file:
version: "2"
services:
nginx:
image: nginx:latest
container_name: NGINX
ports:
- "80:80"
- "443:443"
volumes:
- ./test:/djangoapp/test
- ./config/nginx:/etc/nginx/conf.d
- ./test/static:/static
depends_on:
- app
app:
build: .
container_name: DJANGO
command: bash -c "./wait-for-it.sh db:5432 && python manage.py makemigrations && python manage.py migrate && gunicorn test.wsgi -b 0.0.0.0:8000"
depends_on:
- db
volumes:
- ./djangoapp/test:/djangoapp/test
- ./test/static:/static
expose:
- "8000"
env_file:
- ./config/djangoapp.env
db:
image: postgres:latest
container_name: POSTGRES
env_file:
- ./config/database.env
But for some reason I wasn't able to connect to the Django app at all via localhost:80 (the browser always threw a 502 error, and the container wasn't logging anything when I tried). After a lot of troubleshooting, I found out that the offending line was proxy_set_header Host $host;, and commenting it out let me successfully connect to the Django app via localhost. So the problem was that my NGINX configuration had to use the $proxy_host variable instead.
The problem is that I have no idea why that happened in the first place, because looking at this other question (Nginx: when to use proxy_set_header Host $host vs $proxy_host), I was supposed to use $host to proxy to my Django application, and other NGINX configuration examples also set up the Host header like that.
I may be missing something, as NGINX is a tad confusing for me, but I don't understand why I wasn't able to connect and why NGINX wasn't logging anything before I commented out that line.
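For reference, the two variables come from different places: $proxy_host is derived from the proxy_pass directive (here the upstream name, django), while $host comes from the request line or the client's Host header (here localhost). A minimal sketch illustrating the two variants, assuming the named-location setup above:

location @proxy_to_app {
    proxy_pass http://django;

    # Default (no override): nginx sends "Host: django", i.e. $proxy_host,
    # taken from the proxy_pass target.
    #proxy_set_header Host $proxy_host;

    # Override: forward whatever Host the client sent, e.g. "Host: localhost".
    #proxy_set_header Host $host;
}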

Access Pgadmin4 in Production within dockerized django application

I'm not sure how much sense it makes, but I have been learning docker to deploy a Django app with Gunicorn + Nginx on AWS.
So far it works fine; I have tested it in production.
My question is: how can I access pgAdmin4 now?
docker-compose.staging.yml
version: '3.8'

# networks:
#   public_network:
#     name: public_network
#     driver: bridge

services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.prod
    # image: <aws-account-id>.dkr.ecr.<aws-region>.amazonaws.com/django-ec2:web
    command: gunicorn djangotango.wsgi:application --bind 0.0.0.0:8000
    volumes:
      # - .:/home/app/web/
      - static_volume:/home/app/web/static
      - media_volume:/home/app/web/media
    expose:
      - 8000
    env_file:
      - ./.env.staging
    networks:
      service_network:
  db:
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - ./.env.staging.db
    networks:
      service_network:
    # depends_on:
    #   - web
  pgadmin:
    image: dpage/pgadmin4
    env_file:
      - ./.env.staging.db
    ports:
      - "8080:80"
    volumes:
      - pgadmin-data:/var/lib/pgadmin
    depends_on:
      - db
    links:
      - "db:pgsql-server"
    environment:
      - PGADMIN_DEFAULT_EMAIL=pgadmin4@pgadmin.org
      - PGADMIN_DEFAULT_PASSWORD=fakepassword
      - PGADMIN_LISTEN_PORT=80
    networks:
      service_network:
  nginx-proxy:
    build: nginx
    # image: <aws-account-id>.dkr.ecr.<aws-region>.amazonaws.com/django-ec2:nginx-proxy
    restart: always
    ports:
      - 443:443
      - 80:80
    networks:
      service_network:
    volumes:
      - static_volume:/home/app/web/static
      - media_volume:/home/app/web/media
      - certs:/etc/nginx/certs
      - html:/usr/share/nginx/html
      - vhost:/etc/nginx/vhost.d
      - /var/run/docker.sock:/tmp/docker.sock:ro
    labels:
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy"
    depends_on:
      - web
  nginx-proxy-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    env_file:
      - .env.staging.proxy-companion
    networks:
      service_network:
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - certs:/etc/nginx/certs
      - html:/usr/share/nginx/html
      - vhost:/etc/nginx/vhost.d
    depends_on:
      - nginx-proxy

networks:
  service_network:

volumes:
  postgres_data:
  pgadmin-data:
  static_volume:
  media_volume:
  certs:
  html:
  vhost:
I can access the django application through my domain name, like xyz.example.com. I have only shown the docker-compose here.
Locally, I can also access pgAdmin4 via localhost:8080.
Is it possible to do that in production? If yes, how?
I will eventually use AWS RDS for the database, but for now the database lives inside a docker container, so I'm wondering how to access it now.
I found some documentation:
https://www.pgadmin.org/docs/pgadmin4/development/container_deployment.html
The URL used to access your pgAdmin page would be configured in nginx, for example:
server {
    listen 80;
    return 301 https://$host$request_uri;
}

server {
    listen 443;
    server_name _;

    ssl_certificate /etc/nginx/server.cert;
    ssl_certificate_key /etc/nginx/server.key;
    ssl on;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    location /pgadmin4/ {
        proxy_set_header X-Script-Name /pgadmin4;
        proxy_set_header X-Scheme $scheme;
        proxy_set_header Host $host;
        proxy_pass http://localhost:5050/;
        proxy_redirect off;
    }
}
The important part here is the location /pgadmin4/ block proxying to localhost:5050. In your case, it would be localhost:8080.
It looks like in your other post you included your nginx config:
https://www.digitalocean.com/community/questions/no-live-upstream-while-connecting-to-upstream-jwilder-ngnix-proxy
upstream djangotango.meghaggarwal.com {
    server web:8000;
}

server {
    listen 80;
    listen 443;
    server_name djangotango.meghaggarwal.com;

    location / {
        proxy_pass http://djangotango.meghaggarwal.com;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /static/ {
        alias /home/app/web/static/;
        add_header Access-Control-Allow-Origin *;
    }

    location /media/ {
        alias /home/app/web/media/;
        add_header Access-Control-Allow-Origin *;
    }
}
I would suggest adding a section like this:
location /pgadmin4/ {
    proxy_set_header X-Script-Name /pgadmin4;
    proxy_set_header X-Scheme $scheme;
    proxy_set_header Host $host;
    proxy_pass http://localhost:8080/;
    proxy_redirect off;
}
It might not be the only configuration you need to add... I have only skimmed the documentation, but I am sure the link will help you more if this doesn't do the trick.
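One caveat to hedge on: inside the nginx-proxy container, http://localhost:8080 refers to the container itself, not to the host, so the snippet above only works if pgAdmin is actually reachable there. Following the earlier point about addressing other containers by service name and internal port, a variant under that assumption (the service name pgadmin and internal port 80 come from the compose file above, and the two services would need to share a network such as service_network):

location /pgadmin4/ {
    proxy_set_header X-Script-Name /pgadmin4;
    proxy_set_header X-Scheme $scheme;
    proxy_set_header Host $host;
    # Sketch only: reach the pgadmin service over the shared Docker network
    # by its compose service name and internal listen port.
    proxy_pass http://pgadmin:80/;
    proxy_redirect off;
}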