I've deployed a dockerized Node app to Elastic Beanstalk, but instead of my app, the environment URL shows the default "Welcome to nginx on Amazon Linux!" page, which says it means the web server is successfully installed at the site.
It's using Docker running on 64bit Amazon Linux 2
From what I could find scouring the internet, the eb default nginx reverse proxy should forward to either port 8000, 8080, or 5000 (information varies).
I've confirmed my Docker app is running properly and, as a test to narrow down later, opened all three ports on the Docker container. I know that part is working because allowing incoming traffic for those ports in the EC2 security group successfully routes to my app via the EC2 public IP, e.g. 55.555.555.555:8080 or 55.555.555.555:5000.
Related answers suggest I can find the reverse proxy port in /etc/nginx/nginx.conf, /etc/nginx/conf.d/elasticbeanstalk-nginx-docker-upstream.conf (not found), or /etc/nginx/conf.d/elasticbeanstalk/00_application.conf (not found).
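One quick way to check where the proxy actually forwards is to dump the full effective nginx configuration on the instance (assuming shell access, e.g. via eb ssh):

# print the complete merged configuration, including all included files
sudo nginx -T | grep -n 'listen\|proxy_pass'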
Here's /etc/nginx/nginx.conf:
# For more information on configuration, see:
#   * Official English Documentation: http://nginx.org/en/docs/
#   * Official Russian Documentation: http://nginx.org/ru/docs/

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 4096;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80;
        listen [::]:80;
        server_name _;
        root /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        error_page 404 /404.html;
        location = /404.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}
I know I can add my own nginx config in .ebextensions, but I'm really trying to avoid it: I don't know what I'm doing, it's more to maintain, and shouldn't this just work out of the box?
Update:
I added

    location / {
        proxy_pass http://127.0.0.1:8080;
    }

to the nginx.conf server block on the EC2 instance, restarted the service, and now the EB URL correctly routes to my app.
How do I avoid having to do this by hand for every environment, or having to use a custom nginx config?
Since you are using Docker running on 64bit Amazon Linux 2, you should use the .platform folder to customize your nginx, as shown in the "Reverse proxy configuration" section of the docs.
Therefore, you could have the following .platform/nginx/conf.d/myconfig.conf with content:
location / {
    proxy_pass http://127.0.0.1:8080;
}
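For reference, the layout in the source bundle would look something like this (a sketch; "myconfig.conf" is an arbitrary name, since any .conf file in that folder is picked up, and the Dockerfile stands in for however your image is defined):

my-app/
├── Dockerfile
├── .platform/
│   └── nginx/
│       └── conf.d/
│           └── myconfig.conf
└── ...

Because the .platform folder ships with the application source bundle, the customization is applied to every environment you deploy it to, with no per-environment manual edits.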
I have an application built using Django and Angular, hosted on Cloud Foundry.
A partial URL with http fails to proceed; for example, http://www.example.com/home fails, but https://www.example.com/home works fine.
Likewise, http://www.example.com redirects to https://www.example.com, but when given a partial URL the redirection fails.
So I did some research on this issue and found that the nginx.conf file needs to be edited, but could not find more. Does it need to be uploaded with the Django application to Cloud Foundry?
Any guidance on this would be very helpful.
Output of cf push:
C:\***\dcms-api>cf push -b https://github.com/cloudfoundry/nginx-buildpack.git
Pushing from manifest to org DCMS / space Development as ***#***.com...
Using manifest file C:\***\dcms-api\manifest.yml
Getting app info...
Updating app with these attributes...
  name: dcms
  path: C:\***\dcms-api
  buildpacks:
    https://github.com/cloudfoundry/nginx-buildpack.git
  disk quota: 512M
  health check type: port
  instances: 1
  memory: 256M
  stack: cflinuxfs3
  env:
    ACCEPT_EULA
    DB_HOST
    DB_NAME
    DB_PORT
    DB_USER
    DB_USER_PASSWORD
    SYS_NAME
    SYS_PASSWORD
  routes:
    dcms***.com
Updating app dcms...
Mapping routes...
Comparing local files to remote cache...
Packaging files to upload...
Uploading files...
385.10 KiB / 385.10 KiB [=====================================================================] 100.00% 1s
Waiting for API to complete processing files...
Staging app and tracing logs...
Cell 9c6ffac8- creating container for instance eb1fe223-
Cell 9c6ffac8- successfully created container for instance eb1fe223-
Downloading app package...
Downloading build artifacts cache...
Downloaded app package (2M)
Downloaded build artifacts cache (108M)
-----> Download go 1.12.4
-----> Running go build supply
/tmp/buildpackdownloads/adf6125a52c1a65c9523985b5a87ec38 ~
-----> Nginx Buildpack version 1.1.9
-----> Supplying nginx
-----> No nginx version specified - using mainline => 1.17.10
-----> Installing nginx 1.17.10
Download
[https://buildpacks.cloudfoundry.org/dependencies/nginx/nginx_1.17.10_linux_x64_cflinuxfs3_2fe87dae.tgz]
**WARNING** nginx 1.17.x will no longer be available in new buildpacks released after 2020-05-01.
See: https://nginx.org/
**ERROR** nginx.conf file must be configured to respect the value of `{{port}}`
**ERROR** Could not validate nginx.conf: no {{port}} in nginx.conf
Failed to compile droplet: Failed to run all supply scripts: exit status 14
Exit status 223
Cell 9c6ffac8- stopping instance eb1fe223-
Cell 9c6ffac8- destroying container for instance eb1fe223-
Cell 9c6ffac8- successfully destroyed container for instance eb1fe223-
Error staging application: App staging failed in the buildpack compile phase
FAILED
And I'm unsure where to place the following nginx.conf:
worker_processes 1;
daemon off;

events { worker_connections 1024; }

http {
    log_format cloudfoundry '$http_x_forwarded_for - $http_referer - [$time_local] "$request" $status $body_bytes_sent';
    default_type application/octet-stream;
    include mime.types;
    sendfile on;
    gzip on;
    tcp_nopush on;
    keepalive_timeout 30;

    server {
        listen 8080;
        server_name apps1-bg-int.icloud.intel.com .apps1-bg-int.icloud.intel.com;

        location / {
            root /home/vcap/app/static/UI;
            index index.html;
            proxy_redirect off;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
        }
    }
}
Try:
    listen {{port}};
It works for me.
https://docs.cloudfoundry.org/buildpacks/nginx/index.html#port
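The staging error above is the buildpack enforcing exactly this: it refuses to stage unless nginx.conf references the {{port}} template variable, which it fills in with the platform-assigned port at launch. Applied to the config above, the server block would start like this (a sketch; note that the buildpack expects nginx.conf at the root of the directory you push):

server {
    # the buildpack substitutes the port Cloud Foundry assigns to the app instance
    listen {{port}};
    server_name apps1-bg-int.icloud.intel.com .apps1-bg-int.icloud.intel.com;
    ...
}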
I'm using Vagrant and VirtualBox for my Django environment. The Django environment uses nginx. Everything works fine, except that intermittently I'll see 502 Bad Gateway errors. When these errors happen, there is nothing in the nginx access.log or error.log. Here are my configurations.
Vagrantfile private network:
config.vm.network "private_network", ip: "192.168.33.10"
nginx.conf
server {
    listen 80 default_server;
    server_name _;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location / {
        proxy_set_header Host 192.168.33.10;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:8000;
    }
}
I'm not sure how to debug or fix this issue. Any ideas?
You can try python manage.py runserver 192.168.33.10:8000, since Django's runserver binds to 127.0.0.1 by default and nginx might have problems with that (if you do, point proxy_pass at the same address).
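A minimal sketch of the matched pair, assuming the private-network IP from the Vagrantfile above:

# inside the VM: bind the dev server to the private-network address
python manage.py runserver 192.168.33.10:8000

# nginx location block: point the proxy at the same address
location / {
    proxy_set_header Host 192.168.33.10;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://192.168.33.10:8000;
}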
I am trying to set up proxy_protocol in my nginx config. My server sits behind an AWS load balancer (ELB), and I have enabled Proxy Protocol on that for both ports 80 and 443.
However, this is what I get when I hit my server:
broken header: "��/��
'���\DW�Vc�A{����
�#��kj98���=5���g#32ED�</A
" while reading PROXY protocol, client: 172.31.12.223, server: 0.0.0.0:443
That is a direct copy paste from the nginx error log - wonky characters and all.
Here is a snip from my nginx config:
server {
    listen 80 proxy_protocol;
    set_real_ip_from 172.31.0.0/20; # Coming from ELB
    real_ip_header proxy_protocol;
    return 301 https://$http_host$request_uri;
}

server {
    listen 443 ssl proxy_protocol;
    server_name *.....com;

    ssl_certificate /etc/ssl/<....>;
    ssl_certificate_key /etc/ssl/<....>;
    ssl_prefer_server_ciphers On;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!DSS:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4;
    ssl_session_cache shared:SSL:10m;

    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;

    ssl_stapling on;
    ssl_stapling_verify on;
    ...
I can't find any help online about this issue. Other people have had broken header problems, but in those cases the bad headers are always readable; they don't look encoded like mine do.
Any ideas?
Two suggestions:
Verify that your ELB listener is configured to use TCP as the protocol, not HTTP. I have an LB config like the following that's routing to Nginx with proxy_protocol configured:
{
    "LoadBalancerName": "my-lb",
    "Listeners": [
        {
            "Protocol": "TCP",
            "LoadBalancerPort": 80,
            "InstanceProtocol": "TCP",
            "InstancePort": 80
        }
    ],
    "AvailabilityZones": [
        "us-east-1a",
        "us-east-1b",
        "us-east-1d",
        "us-east-1e"
    ],
    "SecurityGroups": [
        "sg-mysg"
    ]
}
You mentioned that you have enabled Proxy Protocol in the ELB, so I'm assuming you've followed AWS setup steps. If so, the ELB should be crafting the HTTP request correctly, with the first line something like PROXY TCP4 198.51.100.22 203.0.113.7 35646 80\r\n. However, if the request reaches Nginx without that PROXY ... line, it would cause exactly the problem you're seeing. You can reproduce this by hitting the EC2 DNS name directly in the browser, or by ssh'ing into the EC2 instance and trying something like curl localhost; you should then see a similar broken header error in the Nginx logs.
To find out whether it works with a correctly formed HTTP request you can use telnet:
$ telnet localhost 80
PROXY TCP4 198.51.100.22 203.0.113.7 35646 80
GET /index.html HTTP/1.1
Host: your-nginx-config-server_name
Connection: Keep-Alive
Then check the Nginx logs and see if you have the same broken header error. If not then the ELB is likely not sending the properly formatted PROXY request, and I'd suggest re-doing the ELB Proxy Protocol configuration, maybe with a new LB, to verify it's set up correctly.
I had a similar situation: nginx had proxy_protocol on, but the corresponding AWS ELB setting was not enabled, so I got a similar message. The solution is to edit the ELB settings to turn Proxy Protocol on.
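For reference, a sketch of doing that with the AWS CLI for a classic ELB (the names my-lb and my-ProxyProtocol-policy and the instance port 80 are placeholders):

# create a Proxy Protocol policy on the load balancer
aws elb create-load-balancer-policy \
    --load-balancer-name my-lb \
    --policy-name my-ProxyProtocol-policy \
    --policy-type-name ProxyProtocolPolicyType \
    --policy-attributes AttributeName=ProxyProtocol,AttributeValue=true

# attach the policy to the backend instance port that nginx listens on
aws elb set-load-balancer-policies-for-backend-server \
    --load-balancer-name my-lb \
    --instance-port 80 \
    --policy-names my-ProxyProtocol-policy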
I had this error and came across this ticket:
https://trac.nginx.org/nginx/ticket/886
which ultimately led me to figuring out that I had an unneeded proxy_protocol declaration in my nginx.conf file. I removed that and everything was working again.
Oddly enough, everything worked fine with nginx version 1.8.0, but when I upgraded to nginx version 1.8.1 is when I started seeing the error.
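In other words, the change was of this shape (a sketch: proxy_protocol on a listen directive only makes sense when something upstream actually sends the PROXY line):

# before: nginx waits for a PROXY line that nothing sends
listen 443 ssl proxy_protocol;

# after: plain listener
listen 443 ssl;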
I got this unreadable header issue too; here are the cause and the fix in my case.
Nginx was configured with use-proxy-protocol=true properly. It complained about the broken header solely because the AWS ELB did not add the required header (e.g. PROXY TCP4 198.51.100.22 203.0.113.7 35646 80) at all, so Nginx saw the encrypted HTTPS payload directly. That's why it printed out all the unreadable characters.
So, why didn't the AWS ELB add the PROXY header? It turned out I had used the wrong ports in the commands that enable the Proxy Protocol policy: the instance ports should be used instead of 80 and 443.
The ELB has the following port mapping.
80 -> 30440
443 -> 31772
The commands should be:

aws elb set-load-balancer-policies-for-backend-server \
    --load-balancer-name a19235ee9945011e9ac720a6c9a49806 \
    --instance-port 30440 \
    --policy-names ars-ProxyProtocol-policy

aws elb set-load-balancer-policies-for-backend-server \
    --load-balancer-name a19235ee9945011e9ac720a6c9a49806 \
    --instance-port 31772 \
    --policy-names ars-ProxyProtocol-policy
but I used 80 and 443 by mistake.
Hope this helps somebody.
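If you want to double-check which instance ports the policy actually landed on, something like this shows the backend policy attachments (a sketch using the same load balancer name as above):

# list the per-instance-port backend policies
aws elb describe-load-balancers \
    --load-balancer-names a19235ee9945011e9ac720a6c9a49806 \
    --query 'LoadBalancerDescriptions[0].BackendServerDescriptions'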
Stephen Karger's solution above is correct: you must make sure to configure your ELB to support Proxy Protocol. Here are the AWS docs for doing exactly that: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/enable-proxy-protocol.html. The docs are a bit daunting at first, so if you want you can skip to steps 3 and 4 under the "Enable Proxy Protocol Using the AWS CLI" section; those are the only steps necessary for enabling the proxy channeling. Additionally, as Stephen also suggested, you must make sure that your ELB is using TCP instead of HTTP or HTTPS, as neither will behave properly with ELB's proxy implementation. I also suggest moving your socket channel away from common ports like 80 and 443, just so you can still maintain those standardized connections with their default behavior. Of course, making that call is entirely dependent on how your app stack looks.
If it helps, you can use the npm package wscat to debug your websocket connections like so:

$ npm install -g wscat
$ wscat --connect ws://127.0.0.1

If the connection works locally, then it is for sure your load balancer. However, if it doesn't, there is almost definitely a problem with your socket host.
Additionally, a tool like nmap will aid you in discovering open ports. A nice checklist for debugging:
npm install -g wscat
# can you connect to it from within the server?
ssh ubuntu@69.69.69.69
wscat -c ws://127.0.0.1:80
# can you connect to it from outside the server?
exit
wscat -c ws://69.69.69.69:80
# if not, is your socket port open for business?
nmap -p 80 69.69.69.69
You can also use nmap from within your server to discover open ports. To install nmap on Ubuntu, simply sudo apt-get install nmap; on OSX, brew install nmap.
Here is a working config that I have, although it does not provide SSL support at the moment. In this configuration, port 80 feeds my Rails app, port 81 feeds a socket connection through my ELB, and port 82 is open for internal socket connections. Hope this helps somebody!! Anybody deploying with Rails, Unicorn, and Faye should find this helpful. :) Happy hacking!
# sets up deployed ruby on rails server
upstream unicorn {
    server unix:/path/to/unicorn/unicorn.sock fail_timeout=0;
}

# sets up Faye socket
upstream rack_upstream {
    server 127.0.0.1:9292;
}

# sets port 80 to proxy to rails app
server {
    listen 80 default_server;
    keepalive_timeout 300;
    client_max_body_size 4G;
    root /path/to/rails/public;
    try_files $uri/index.html $uri.html $uri @unicorn;

    location @unicorn {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_redirect off;
        proxy_pass http://unicorn;
        proxy_read_timeout 300s;
        proxy_send_timeout 300s;
    }

    error_page 500 502 503 504 /500.html;
    location = /500.html {
        root /path/to/rails/public;
    }
}

# open 81 to load balancers (external socket connection)
server {
    listen 81 proxy_protocol;
    server_name _;
    charset UTF-8;

    location / {
        proxy_pass http://rack_upstream;
        proxy_redirect off;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

# open 82 to internal network (internal socket connections)
server {
    listen 82;
    server_name _;
    charset UTF-8;

    location / {
        proxy_pass http://rack_upstream;
        proxy_redirect off;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
I'm trying to set up nginx with gunicorn, but I keep getting the "Welcome to nginx!" page. I am able to successfully listen on other ports (like 8080), but port 80 does not work at all.
server {
    listen 80;
    server_name host.ca www.host.ca;
    access_log /var/log/nginx/example2.log;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://127.0.0.1:8000;
    }
}
I'm running the server as root, and I can't see anything running on port 80.
Diagnosing the Problem
Make sure to check your logs (likely /var/log/nginx or some variant).
Check to see what might be hogging port 80:
sudo netstat -nlp | grep :80
Sites-enabled, port hogging
Then, make sure you have the Django site enabled in sites-enabled; delete any old symlink first if you created one:
rm /etc/nginx/sites-enabled/django
ln -s /etc/nginx/sites-available/django /etc/nginx/sites-enabled/django
Double check your /etc/nginx/nginx.conf to make sure it's loading sites-enabled and not loading some other default.
http {
    ...
    include /etc/nginx/sites-enabled/*;
}
After you do all this, shut down and restart the nginx service.
Either service nginx restart or service nginx stop && service nginx start
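It's also worth validating the config before restarting, so a typo doesn't leave nginx down (nginx -t only parses and tests the files; it doesn't reload anything):

# test the configuration, then restart only if it parses cleanly
sudo nginx -t && sudo service nginx restart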