!UPDATE!
Big thanks to @ivan!
default.conf:
log_format include_id '$remote_addr - $remote_user [$time_local] $request_id "$request" '
                      '$status $body_bytes_sent "$http_referer" "$http_user_agent"';

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    ssl_certificate     /etc/nginx/conf.d/cert.pem;
    ssl_certificate_key /etc/nginx/conf.d/key.pem;

    location / {
        proxy_pass  http://sns-mock:9911;
        proxy_store /var/log/nginx/requests/mohsin.json;
    }
}
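For a purely local mock, the cert.pem and key.pem referenced above can be a self-signed pair, generated for example with the following (the subject and output paths are just an assumption matching the ./conf.d bind mount in the compose file below):

openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=localhost" \
    -keyout ./conf.d/key.pem -out ./conf.d/cert.pem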
docker-compose.yml:
version: '3.7'

services:
  sns-mock:
    image: s12v/sns
    container_name: sns-mock
    ports:
      - "9911:9911"
    networks:
      - default-nw
    volumes:
      - ./config/db.json:/etc/sns/db.json

  sns-mock-proxy:
    image: nginx:latest
    container_name: sns-mock-proxy
    ports:
      - "8443:443"
    networks:
      - default-nw
    volumes:
      - ./conf.d:/etc/nginx/conf.d
      - ./log/nginx:/var/log/nginx
      - ./log/nginx/requests:/var/log/nginx/requests
    depends_on:
      - sns-mock

networks:
  default-nw:
    external: true
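Since default-nw is declared external, docker-compose expects it to already exist; if it doesn't yet, create it once beforehand:

docker network create default-nw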
PHP test file:
<?php

use Aws\Sns\SnsClient;
use PHPUnit\Framework\TestCase;

class FooTest extends TestCase
{
    public function testFoo()
    {
        $snsClient = new SnsClient([
            // the proxy terminates TLS with a self-signed certificate, so use
            // an explicit https endpoint and skip peer verification locally
            'endpoint'    => 'https://localhost:8443',
            'region'      => 'eu-west-2',
            'version'     => 'latest',
            'credentials' => ['key' => 'dummy', 'secret' => 'dummy'], // dummy values for the local mock
            'http'        => ['verify' => false],
        ]);

        $result = $snsClient->publish([
            'Message'  => 'foo',
            'TopicArn' => 'arn:aws:sns:eu-west-2:123450000001:test-topic',
        ]);

        dd($result, 'ok');
    }
}
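To run it with a standard Composer/PHPUnit setup (the file path is an assumption):

vendor/bin/phpunit --filter testFoo tests/FooTest.php

Note that dd() is a Laravel helper; outside Laravel, var_dump() is the plain-PHP equivalent.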
Test result: (screenshot)

My project: (screenshot)
In my access.log file, I can see the request from the PHPUnit test, like this:
172.28.0.1 - - [23/May/2022:06:54:04 +0000] "POST / HTTP/1.1" 200 270 "-" "aws-sdk-php/3.222.17 OS/Linux/5.13.0-41-generic lang/php/8.1.6 GuzzleHttp/7" "-"
===================================================================================================================================
The general answer to your question is "No". However, some kind of workaround is possible.

Every request processed by nginx receives an internal request ID, available via the $request_id variable (16 random bytes, in hexadecimal). You can add that ID to your access log by defining your own custom log format:

log_format include_id '$remote_addr - $remote_user [$time_local] $request_id "$request" '
                      '$status $body_bytes_sent "$http_referer" "$http_user_agent"';

access_log /var/log/nginx/access.log include_id;

Next, in the same location where you have your proxy_pass directive, add a proxy_store one:

proxy_store /var/log/nginx/requests/$request_id.json;

I use a separate /var/log/nginx/requests directory here to keep your /var/log/nginx from turning into a mess. Of course, it should be created manually before you start the docker container, and it should be writable the same way as /var/log/nginx itself (including such things as the SELinux context, if SELinux is used on the host system). However, for testing purposes you can start with proxy_store /var/log/nginx/$request_id.json; instead.
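A sketch of that preparation on the host, assuming the bind mounts from the docker-compose file above and the official nginx image's default worker user (uid 101; verify for your image):

mkdir -p ./log/nginx/requests
# the official nginx image runs its workers as the "nginx" user (uid 101)
sudo chown 101:101 ./log/nginx/requests
# on an SELinux host, the directory also needs a container-accessible label, e.g.:
# sudo chcon -Rt svirt_sandbox_file_t ./log/nginx/requests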
The whole nginx config should look like this:
log_format include_id '$remote_addr - $remote_user [$time_local] $request_id "$request" '
                      '$status $body_bytes_sent "$http_referer" "$http_user_agent"';

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    access_log /var/log/nginx/access.log include_id;

    ssl_certificate     /etc/nginx/conf.d/cert.pem;
    ssl_certificate_key /etc/nginx/conf.d/key.pem;

    location / {
        proxy_pass  http://sns-mock:9911;
        proxy_store /var/log/nginx/requests/$request_id.json;
    }
}
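After editing the config, the proxy container can validate and pick it up without a restart (the container name comes from the compose file above):

docker exec sns-mock-proxy nginx -t
docker exec sns-mock-proxy nginx -s reload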
If you want requests to be available via plain HTTP too, you can use the same configuration in a plain-HTTP server block instead of the one you've shown in your question:
server {
    listen 80;
    listen [::]:80;

    access_log /var/log/nginx/access.log include_id;

    location / {
        proxy_pass  http://sns-mock:9911;
        proxy_store /var/log/nginx/requests/$request_id.json;
    }
}
or issue an HTTP-to-HTTPS redirect instead:
server {
    listen 80;
    listen [::]:80;
    return 308 https://$host$request_uri;
}
Now the response body for each request can be identified from the access log via its request ID. For an access log entry like
172.18.0.1 - - [20/May/2022:19:54:14 +0000] d6010d713b2dce3cd2713f1ea178e140 "POST / HTTP/1.1" 200 615 "-" "aws-sdk-php/3.222.17 OS/Linux/5.13.0-41-generic lang/php/8.1.6 GuzzleHttp/7" "-"
the response body will be available under the /var/log/nginx/requests directory, in this case as the d6010d713b2dce3cd2713f1ea178e140.json file.
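As a quick usage sketch against the include_id format above (where the request ID is the sixth space-separated field), the stored body for the most recent request can be pulled out with:

id=$(tail -n 1 /var/log/nginx/access.log | awk '{print $6}')
cat /var/log/nginx/requests/"$id".json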
Related
I have a CentOS 8 (Fedora) server running, and I'm trying to run my Django web app on it through Nginx.
The app runs on port 8000, and I want to access it in my browser through nginx (so port 80?).
These commands, run on the server itself, show my web app's HTML page fine:

curl http://127.0.0.1:8000
curl http://0.0.0.0:8000

but these show me the Nginx 502 Bad Gateway page:

curl http://0.0.0.0
curl http://127.0.0.1

There are no errors in the nginx log files.
This is my nginx.config:
events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    error_log  /var/log/nginx/error.log;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include      /etc/nginx/mime.types;
    default_type application/octet-stream;

    server_tokens off;

    server {
        listen      80;
        server_name $hostname;  # I tried this with an '_' also, no help

        location / {
            proxy_pass       http://0.0.0.0:8000;  # also tried with 127.0.0.1
            proxy_set_header Host $host;
        }
    }
}
Running nginx -T shows the config has been loaded.
Any advice on what to look for? (Perhaps my firewall is blocking it somehow? I don't know.)
Kind regards
Looks like it was a firewall issue; I needed to open the port to accept traffic:
iptables -I INPUT 1 -i eth0 -p tcp --dport 8000 -j ACCEPT
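Note that a bare iptables rule does not persist across reboots. If firewalld is managing the firewall (the CentOS 8 default), the persistent equivalent would be something like:

sudo firewall-cmd --permanent --add-port=8000/tcp
sudo firewall-cmd --reload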
I've got a Django application running on Azure App Service using NGINX.
My nginx.conf file is as follows:
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid       /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include      /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile          on;
    keepalive_timeout 65;

    fastcgi_max_temp_file_size 0;
    fastcgi_buffers            128 2048k;
    fastcgi_buffer_size        2048k;

    proxy_buffer_size       128k;
    proxy_buffers           4 256k;
    proxy_busy_buffers_size 256k;

    server {
        listen 8000;

        location / {
            include    uwsgi_params;
            uwsgi_pass unix:///tmp/uwsgi.sock;
        }

        location /static {
            alias /app/staticfiles;
        }
    }
}

daemon off;
Everything works fine, except for one particular API call where I include a token in the header (a typical bearer token); it returns a 502 error in Chrome (visible in the network tab).
However, when I make the same call from Postman, it returns the data correctly.
What could possibly be wrong here?
Thanks to @Selcuk's suggestion, I've managed to fix the above error by increasing the buffer-size in the uwsgi.ini file:
# uwsgi.ini file
buffer-size = 32768
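That matches the symptom: uwsgi's default request buffer is 4096 bytes, so a request whose headers carry a large bearer token can exceed it, which would also explain why the same call from Postman (with fewer and smaller headers) succeeds. For context, a minimal uwsgi.ini sketch (the socket path matches the nginx config above; the other values are assumptions):

[uwsgi]
# the socket that nginx's uwsgi_pass points at
socket = /tmp/uwsgi.sock
# default is 4096 bytes; raise it so large auth headers fit
buffer-size = 32768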
I have a Django app where users can upload files containing data they want displayed in the app. The app is containerised using Docker. In production I am trying to configure nginx to make this work, and as far as I can tell it is working to some extent.
The file does actually get uploaded, as I can see it in the container, and I can also download it from the app. The problem is that once the form has been submitted, it is supposed to redirect to another form where the user can assign stuff to the data in the app (not really relevant to the question). However, I am getting a 500 error instead.
I have taken a look at the nginx error logs and I am seeing:
[info] 8#8: *11 client closed connection while waiting for request, client: 192.168.0.1, server: 0.0.0.0:443
and
[info] 8#8: *14 client timed out (110: Operation timed out) while waiting for request, client: 192.168.0.1, server: 0.0.0.0:443
when the operation is performed.
I also want the media files to be persisted so they are in a docker volume.
I suspect the first log message may be the culprit, but is there a way to prevent this from happening, or is it just a poor connection on my end?
Here is my nginx conf:
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log debug;
pid       /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include      /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    proxy_headers_hash_bucket_size 52;

    client_body_buffer_size 1M;
    client_max_body_size    10M;

    gzip on;

    upstream app {
        server django:5000;
    }

    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name dali.vpt.co.uk;

        location / {
            return 301 https://$server_name$request_uri;
        }
    }

    server {
        listen 443 ssl;
        listen [::]:443 ssl;
        server_name dali.vpt.co.uk;

        ssl_certificate     /etc/nginx/ssl/cert.crt;
        ssl_certificate_key /etc/nginx/ssl/cert.key;

        location / {
            # checks for a static file; if not found, proxy to the app
            try_files $uri @proxy_to_app;
        }

        # cookiecutter-django app
        location @proxy_to_app {
            proxy_set_header X-Forwarded-Proto https;
            proxy_set_header X-Url-Scheme      $scheme;
            proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
            proxy_set_header Host              $http_host;
            proxy_redirect   off;
            proxy_pass       http://app;
        }

        location /media/ {
            autoindex on;
            alias /app/tdabc/media/;
        }
    }
}
and here is my docker-compose file:
version: '2'

volumes:
  production_postgres_data: {}
  production_postgres_backups: {}
  production_media: {}

services:
  django: &django
    build:
      context: .
      dockerfile: ./compose/production/django/Dockerfile
    image: production_django:0.0.1
    depends_on:
      - postgres
      - redis
    volumes:
      - .:/app
      - production_media:/app/tdabc/media
    env_file:
      - ./.envs/.production/.django
      - ./.envs/.production/.postgres
    command: /start.sh

  postgres:
    build:
      context: .
      dockerfile: ./compose/production/postgres/Dockerfile
    image: production_postgres:0.0.1
    volumes:
      - production_postgres_data:/var/lib/postgresql/data
      - production_postgres_backups:/backups
    env_file:
      - ./.envs/.production/.postgres

  nginx:
    build:
      context: .
      dockerfile: ./compose/production/nginx/Dockerfile
    image: production_nginx:0.0.1
    depends_on:
      - django
    volumes:
      - production_media:/app/tdabc/media
    ports:
      - "0.0.0.0:80:80"
      - "0.0.0.0:443:443"
Any help or insight into this problem would be much appreciated.
Thanks for your time.
Update
Another thing I should mention: when I run the app with my production settings but with DEBUG set to True, it works perfectly; the error only happens when DEBUG is set to False.
I am getting this error:
Restarting nginx: nginx: [emerg] duplicate "log_format" name "timed_combined" in /etc/nginx/sites-enabled/default:8
nginx: configuration file /etc/nginx/nginx.conf test failed
whenever I try to start or restart my nginx server. This did not happen before. Here are the first few lines of my /etc/nginx/sites-enabled/default:
# You may add here your
# server {
# ...
# }
# statements for each of your virtual hosts
log_format timed_combined '$remote_addr - $remote_user [$time_local] '
                          '"$request" $status $body_bytes_sent '
                          '"$http_referer" "$http_user_agent" $request_time';
The "timed_combined" log_format is predefined in sources; you need to use your name, something like:
log_format my_log '$remote_addr - $remote_user [$time_local] '
                  '"$request" $status $body_bytes_sent '
                  '"$http_referer" "$http_user_agent" $request_time';
After that, you need to redefine the access_log to use it:

access_log /path/to/access.log my_log;
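After that, validate the configuration and reload nginx (assuming a systemd-managed host):

sudo nginx -t
sudo systemctl reload nginx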
Hi, I am new to this project and I am having issues hosting it on a CentOS 7 EC2 instance.
I am getting this error when I hit my domain:
2017/02/17 05:53:35 [error] 27#27: *20 connect() failed (111: Connection refused) while connecting to upstream, client: xxx.xxx.xxx.xxx, server:myApp.io, request: "GET /favicon.ico HTTP/1.1", upstream: "http://172.18.0.7:5000/favicon.ico", host: "myApp.io", referrer: "https://myApp.io"
When I look at the logs
docker logs d381b6d093fa
sleep 5
build starting nginx config
replacing ___my.example.com___/myApp.io
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid       /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include      /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    upstream app {
        server django:5000;
    }

    server {
        listen 80;
        charset utf-8;
        server_name myApp.io;

        location /.well-known/acme-challenge {
            proxy_pass       http://certbot:80;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header X-Forwarded-Proto https;
        }

        location / {
            # checks for a static file; if not found, proxy to the app
            try_files $uri @proxy_to_app;
        }

        # cookiecutter-django app
        location @proxy_to_app {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect   off;
            proxy_pass       http://app;
        }
    }
}
.
Firing up nginx in the background.
Waiting for folder /etc/letsencrypt/live/myApp.io to exist
replacing ___my.example.com___/myApp.io
replacing ___NAMESERVER___/127.0.0.11
I made sure to add my IP address to the env file for allowed hosts.
When I look at the running containers, I get:
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3887c3465802 myApp_nginx "/bin/sh -c /start.sh" 3 minutes ago Up 3 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp myApp_nginx_1
91cbc2a2359d myApp_django "/entrypoint.sh /g..." 3 minutes ago Up 3 minutes myApp_django_1
My docker-compose.yml looks like:
version: '2'

volumes:
  postgres_data: {}
  postgres_backup: {}

services:
  postgres:
    build: ./compose/postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - postgres_backup:/backups
    env_file: .env

  django:
    build:
      context: .
      dockerfile: ./compose/django/Dockerfile
    user: django
    depends_on:
      - postgres
      - redis
    command: /gunicorn.sh
    env_file: .env

  nginx:
    build: ./compose/nginx
    depends_on:
      - django
      - certbot
    environment:
      - MY_DOMAIN_NAME=myApp.io
    ports:
      - "0.0.0.0:80:80"
      - "0.0.0.0:443:443"
    volumes:
      - /etc/letsencrypt:/etc/letsencrypt
      - /var/lib/letsencrypt:/var/lib/letsencrypt

  certbot:
    image: quay.io/letsencrypt/letsencrypt
    command: bash -c "sleep 6 && certbot certonly -n --standalone -d myApp.io --text --agree-tos --email morozovsdenis@gmail.com --server https://acme-v01.api.letsencrypt.org/directory --rsa-key-size 4096 --verbose --keep-until-expiring --standalone-supported-challenges http-01"
    entrypoint: ""
    volumes:
      - /etc/letsencrypt:/etc/letsencrypt
      - /var/lib/letsencrypt:/var/lib/letsencrypt
    ports:
      - "80"
      - "443"
    environment:
      - TERM=xterm

  redis:
    image: redis:latest

  celeryworker:
    build:
      context: .
      dockerfile: ./compose/django/Dockerfile
    user: django
    env_file: .env
    depends_on:
      - postgres
      - redis
    command: celery -A myApp.taskapp worker -l INFO

  celerybeat:
    build:
      context: .
      dockerfile: ./compose/django/Dockerfile
    user: django
    env_file: .env
    depends_on:
      - postgres
      - redis
    command: celery -A myApp.taskapp beat -l INFO
My .env file has the correct allowed host, which is my EC2 instance's IP address.
Any idea what I am doing incorrectly?
I faced the same issue a few months ago. Please have a look at this answer: the problem was with SELinux. It worked like a charm :)
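For anyone hitting the same thing, a quick way to confirm SELinux is the culprit (a sketch; adjust the paths to your own bind mounts):

# check whether SELinux is enforcing on the docker host
getenforce
# temporarily go permissive; if the upstream errors disappear, SELinux was blocking it
sudo setenforce 0
# a proper fix is to give bind-mounted directories a container-accessible label
sudo chcon -Rt svirt_sandbox_file_t /etc/letsencrypt /var/lib/letsencrypt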