Why do I get ElasticBeanstalk::ExternalInvocationError? - amazon-web-services

My app is built on Ruby on Rails and deployed as an Elastic Beanstalk app using Passenger. I am trying to add headers to the nginx server and restart it. Here is my config file, a script from the .ebextensions folder in AWS Elastic Beanstalk:
packages:
  yum:
    nginx: []
files:
  "/etc/nginx/conf.d/webapp.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      server {
        location /assets {
          alias /var/app/current/public/assets;
          gzip_static on;
          gzip on;
          expires max;
          add_header Cache-Control public;
        }
        location /public {
          alias /var/app/current/public;
          gzip_static on;
          gzip on;
          expires max;
          add_header Cache-Control public;
        }
      }
# This reloads the server, which will both make the changes take effect and make sure the config is valid when you deploy
container_commands:
  01_reload_nginx:
    command: "sudo service nginx reload"
However, I got this error:
[2017-12-13T06:23:48.635Z] ERROR [17344] : Command CMD-AppDeploy failed!
[2017-12-13T06:23:48.635Z] INFO [17344] : Command processor returning results:
{"status":"FAILURE","api_version":"1.0","results":[{"status":"FAILURE","msg":"container_command 01_reload_nginx in .ebextensions/01_elastic_beanstalk_webapp.config failed. For more detail, check /var/log/eb-activity.log using console or EB CLI","returncode":7,"events":[]}]}
/var/log/eb-activity.log:
[2017-12-13T06:23:48.584Z] INFO [17344] - [Application update fix-command-nginx-reload-hope#2/AppDeployStage0/EbExtensionPostBuild/Infra-EmbeddedPostBuild/postbuild_0_myapp_website/Command 01_reload_nginx] : Starting activity...
[2017-12-13T06:23:48.619Z] INFO [17344] - [Application update fix-command-nginx-reload-hope#2/AppDeployStage0/EbExtensionPostBuild/Infra-EmbeddedPostBuild/postbuild_0_myapp_website/Command 01_reload_nginx] : Activity execution failed, because: (ElasticBeanstalk::ExternalInvocationError)
[2017-12-13T06:23:48.619Z] INFO [17344] - [Application update fix-command-nginx-reload-hope#2/AppDeployStage0/EbExtensionPostBuild/Infra-EmbeddedPostBuild/postbuild_0_myapp_website/Command 01_reload_nginx] : Activity failed.
Although, if I SSH into the instance and execute sudo service nginx reload, it runs normally.
Any idea?
EDIT
$ cat /proc/version
Linux version 4.9.43-17.39.amzn1.x86_64 (mockbuild@gobi-build-64011) (gcc version 4.8.3 20140911 (Red Hat 4.8.3-9) (GCC) ) #1 SMP Fri Sep 15 23:39:41 UTC 2017
deploy command:
eb deploy my-app -v
headers of requested assets:
Connection: keep-alive
Content-Encoding: gzip
Content-Type: application/x-javascript
Date: Fri, 24 Aug 2018 11:03:50 GMT
ETag: W/"12cd8ea0-20db3"
Last-Modified: Mon, 31 Dec 1979 04:08:00 GMT
Server: nginx/1.12.1
Transfer-Encoding: chunked
Via: 1.1 8cc9957dff77c27e9931ab0aaf344ec9.cloudfront.net (CloudFront)
X-Amz-Cf-Id: 0NlE-FiGgzczadHYeK7HMMsDsGRmaB8Sefvo89phHWw3LSx01t5rgQ==
X-Cache: Miss from cloudfront
missing headers:
access-control-max-age: 3000
age: 48214
The updated conf file on the server:
$ cat /etc/nginx/conf.d/webapp.conf
server {
  location /assets {
    alias /var/app/current/public/assets;
    gzip_static on;
    gzip on;
    expires max;
    add_header Cache-Control public;
    add_header 'Access-Control-Allow-Origin' '*';
  }
  location /public {
    alias /var/app/current/public;
    gzip_static on;
    gzip on;
    expires max;
    add_header Cache-Control public;
    add_header 'Access-Control-Allow-Origin' '*';
  }
}
EDIT
service nginx configtest result:
$ sudo service nginx configtest
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

The command "sudo service nginx reload" is not necessary, as the NGINX service restarts automatically after every successful deployment. You can remove it from your config file.
You may be experiencing a delay in the expiration of your CDN's cache; try flushing it, or test against the EB URL directly.
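For example, you can bypass CloudFront and check the headers coming straight from the environment; the URL and asset path here are placeholders for your own:

curl -I http://your-env.elasticbeanstalk.com/assets/application.js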

I had similar issues and errors. Previously, I did need the container_commands for settings to take effect, but then during a big set of upgrades I started getting similar errors during deployment. Ultimately, I just needed to remove the container_commands and everything worked perfectly.
Remove this section from your .ebextensions scripts:
container_commands:
  01_reload_nginx:
    command: "sudo service nginx reload"
Note: you probably want to delete the comment line above it too.
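For reference, a minimal sketch of how the .ebextensions file from the question could look after the removal (everything else left unchanged):

packages:
  yum:
    nginx: []
files:
  "/etc/nginx/conf.d/webapp.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      # ... same server/location blocks as in the question ...
# no container_commands: nginx is reloaded automatically after a successful deployment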

Related

Why is AWS Elastic Beanstalk not passing the custom JWT?

I have successfully deployed a Next.js with Next-Auth project on AWS EB. Everything looks fine. However, I can't get past the sign-in form.
Here's what the browser console shows:
502 Bad Gateway: POST http://----.elasticbeanstalk.com/api/auth/callback/credentials? Uncaught (in promise) SyntaxError: JSON.parse: unexpected character at line 1 column 1 of the JSON data
Here's what the EB logs show:
upstream sent too big header while reading response header from upstream, client: x.x.x.x, server: , request: "POST /api/auth/callback/credentials? HTTP/1.1", upstream: "http://-----/api/auth/callback/credentials?", host: "----", referrer: "http://----.elasticbeanstalk.com/"
I've also added some console logs in the [...nextauth].tsx file to find the issue.
The console logs show fine all through the Providers.Credentials.
The console logs show fine all through the jwt callbacks.
But the console logs never appear in the session callback.
So it dies somewhere between jwt and session callback.
Here's the config that I have under .ebextensions/
.ebextensions/proxy_custom.config:
files:
  "/etc/nginx/conf.d/proxy_custom.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      proxy_buffer_size 128k;
      proxy_buffers 4 256k;
      proxy_busy_buffers_size 256k;
container_commands:
  01_reload_nginx:
    command: "sudo service nginx reload"
Here is an updated config (based on what I've used); you may want to try the directives individually to see which work best for you.
files:
  "/etc/nginx/conf.d/proxy_custom.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      large_client_header_buffers 4 32k;
      fastcgi_buffers 16 32k;
      fastcgi_buffer_size 32k;
      proxy_buffer_size 128k;
      proxy_buffers 4 256k;
      proxy_busy_buffers_size 256k;
container_commands:
  01_reload_nginx:
    command: "sudo service nginx reload"
Make sure to check what the actual header sizes of your requests are and adjust the buffer sizes accordingly. Run curl -s -w '%{size_header}' -o /dev/null https://example.com, replacing example.com with your service URL and adding request headers via -H if needed. This will give you the header size in bytes.
Don't set those buffers too high; use calculations specific to your app. Arbitrarily high values won't do your RAM any good, because those buffers are allocated per connection.
Reference: https://www.getpagespeed.com/server-setup/nginx/tuning-proxy_buffer_size-in-nginx
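As a rough back-of-the-envelope illustration of that per-connection cost (my own numbers, assuming the buffer values above):

# proxy_buffer_size             128k
# proxy_buffers      4 x 256k  1024k  (proxy_busy_buffers_size draws from this pool)
# total                       ~1.1 MB per proxied connection
# => around 1,000 concurrent proxied connections could need on the order of 1 GB of RAM for buffers alone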
The issue was caused by a cookie being too large to set.
The .ebextensions config was never being applied. This is because the app runs on Amazon Linux 2, and the configs need to go under .platform for Amazon Linux 2.
These are the lines that did it:
proxy_buffer_size 128k;
proxy_busy_buffers_size 256k;
proxy_buffers 4 256k;
This answer from another post helped narrow down the issue.
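A minimal sketch of the Amazon Linux 2 layout for those directives, assuming the same values as above (the file name is illustrative):

# .platform/nginx/conf.d/proxy_custom.conf
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;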

Access-Control-Allow-Origin cors error happening on NodeJS or Nginx

First of all: I have researched the topic around this question, and there are many similar questions, but I think the problem here is different:
No 'Access-Control-Allow-Origin' - Node / Apache Port Issue
We are getting this Access-Control-Allow-Origin CORS error only when we send a heavy request:
Access to XMLHttpRequest at 'https://backend.*****.es/xxxxxx' from
origin 'https://www.testing.*******.es' has been blocked by CORS
policy: No 'Access-Control-Allow-Origin' header is present on the
requested resource
The rest of the requests are working fine. I tried setting:
app.use((req, res, next) => {
  res.header('Access-Control-Allow-Origin', '*');
  next();
});
but I got the same error.
I am starting to think it could be related to nginx.
This is the architecture we are using:
NodeJs, Expressjs
Middlewares:
const LIMIT = '100mb';

const global = () => [
  morganMiddleware(),
  compression(),
  cors(),
  bodyParser.urlencoded({ extended: true, limit: LIMIT }),
  bodyParser.json({ limit: LIMIT }),
  helmet(),
];
Server: Nginx 12.14.1
Hosted on AWS Elastic Beanstalk
Let me know if anyone has any clue what could be happening, because I do not know whether it is coming from our Node.js server or from nginx. We have tested many solutions and are still checking out other options.
Locate the Amazon Linux 2 (AL2) platform type on your Elastic Beanstalk dashboard:
Create the following directory structure from the root directory of your App:
.platform/nginx/conf.d/
Create the following file:
.platform/nginx/conf.d/01_custom_ngix.conf
Add client_max_body_size and adjust its value in the 01_custom_ngix.conf file you created (the example here is 1000M = 1 GB), and make sure your AWS instance type is large enough (e.g. t2.medium):
client_max_body_size 1000M;
client_body_buffer_size 100M;
...
...
add_header 'Access-Control-Allow-Origin' '*' always;
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS' always;
Git Commit your changes.
Deploy your code:
> eb deploy
options: eb deploy --verbose --debug
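To confirm the headers are actually being served after the deploy, a quick check could look like this (the URLs are placeholders for your own front-end origin and environment):

curl -I -H "Origin: https://www.your-frontend.example" http://your-env.elasticbeanstalk.com/
# look for Access-Control-Allow-Origin and Access-Control-Allow-Methods in the response headers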
We are getting this error on our server:
2020/02/20 10:56:00 [error] 2784#0: *127 client intended to send too
large body: 4487648 bytes, client: 172.31.3.222, server: , request:
"PUT /cars/drafts/f7124841-f72c-4133-b49e-d9f709b4cf4d HTTP/1.1",
host: "backend.xxxxx.es", referrer:
"https://www.testing.xxxx.es/sellcar/photos/f7124841-f72c-4133-b49e-d9f709b4cf4d"
So although the error that appears on the front end is about Access-Control-Allow-Origin, it is actually related to a limit set in NGINX.
Now we just needed to figure out how to access our Elastic Beanstalk instance to change those variables, and that is already solved:
SSH to Elastic Beanstalk instance
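For example, with the EB CLI (the environment name is illustrative):

eb ssh my-env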
To solve it, create a file x.config inside .ebextensions and change these variables with the following YAML file:
files:
  "/etc/nginx/conf.d/01-client_max_body_size.conf":
    mode: 000644
    owner: root
    group: root
    content: |
      # MODIFIED VARIABLES ADDED BY 01-client_max_body_size.conf
      client_max_body_size 12m;
      client_body_buffer_size 16k;
container_commands:
  nginx_reload:
    command: sudo service nginx reload
Then redeploy your environment.
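To verify the new value was picked up after the redeploy, one option (my suggestion, not part of the original answer) is to dump the running nginx configuration on the instance:

sudo nginx -T | grep client_max_body_size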
Thank you very much all for your help!

Nginx proxy Amazon S3 resources

I'm performing some WPO (web performance optimization) tasks, and PageSpeed suggested that I leverage browser caching. I have improved it successfully for some static files on my Nginx server; however, my image files stored on Amazon S3 are still missing it.
I have read about an approach that updates each file in S3 to include some header metadata (Expires and Cache-Control). I don't think this is a good approach: I have thousands of files, so it is not feasible for me.
I think a more convenient approach is to configure my Nginx 1.6.0 server to proxy the S3 files. I have read about this, but I'm not skilled at all in server configuration, so I took a couple of examples from sites like this one: https://gist.github.com/benjaminbarbe/1961db5ffbaad57eff12
I added this location code inside my server block in my nginx config file:
#inside server block
location /mybucket.s3.amazonaws.com/ {
  proxy_http_version 1.1;
  proxy_set_header Host mybucket.s3.amazonaws.com;
  proxy_set_header Authorization '';
  proxy_hide_header x-amz-id-2;
  proxy_hide_header x-amz-request-id;
  proxy_hide_header Set-Cookie;
  proxy_ignore_headers "Set-Cookie";
  proxy_buffering off;
  proxy_intercept_errors on;
  proxy_pass http://mybucket.s3.amazonaws.com;
}
For sure, this is not working for me: no extra header is included in my responses. So my first thought is that the requests are not matching the location.
Accept-Ranges:bytes
Content-Length:90810
Content-Type:image/jpeg
Date:Fri, 23 Jun 2017 04:53:56 GMT
ETag:"4fd0be549fbcaf9b47c18a15146cdf16"
Last-Modified:Tue, 09 Jun 2015 09:47:13 GMT
Server:AmazonS3
x-amz-id-2:cKsq1qRra74DqVsTewh3P3sgzVUoPR8aAT2NFCuwA+JjCdDZfk7/7x/C0WPjBa51GEb4C8LyAIc=
x-amz-request-id:94EADB4EDD3DE1C1
Your approach of proxying S3 files via Nginx makes a lot of sense. It solves a number of problems and comes with extra benefits, such as masking URLs, proxy caching, and speeding up transfers by offloading SSL/TLS. You did it almost right; let me show what is left to make it perfect.
For sample queries I use the S3 bucket and an image URL mentioned in the public comment to the original question.
We start by inspecting the Amazon S3 file's headers:
curl -I http://yanpy.dev.s3.amazonaws.com/img/blog/sailing-routes-around-croatia-central-dalmatia-islands/yachts-anchored-paradise-cove-croatia-3.jpg
HTTP/1.1 200 OK
Date: Sun, 25 Jun 2017 17:49:10 GMT
Last-Modified: Wed, 21 Jun 2017 07:42:31 GMT
ETag: "37a907fc5dd7cfd0c428af78f09e95a9"
Expires: Fri, 21 Jul 2018 07:41:49 UTC
Accept-Ranges: bytes
Content-Type: binary/octet-stream
Content-Length: 378843
Server: AmazonS3
We can see that Cache-Control is missing, but conditional GET headers have already been configured. When we reuse ETag/Last-Modified (that's how a browser's client-side cache works), we get HTTP 304 along with an empty Content-Length. The interpretation is that the client (curl in our case) queries the resource saying that no data transfer is required unless the file has been modified on the server:
curl -I http://yanpy.dev.s3.amazonaws.com/img/blog/sailing-routes-around-croatia-central-dalmatia-islands/yachts-anchored-paradise-cove-croatia-3.jpg \
  --header "If-None-Match: 37a907fc5dd7cfd0c428af78f09e95a9"
HTTP/1.1 304 Not Modified
Date: Sun, 25 Jun 2017 17:53:33 GMT
Last-Modified: Wed, 21 Jun 2017 07:42:31 GMT
ETag: "37a907fc5dd7cfd0c428af78f09e95a9"
Expires: Fri, 21 Jul 2018 07:41:49 UTC
Server: AmazonS3
curl -I http://yanpy.dev.s3.amazonaws.com/img/blog/sailing-routes-around-croatia-central-dalmatia-islands/yachts-anchored-paradise-cove-croatia-3.jpg \
  --header "If-Modified-Since: Wed, 21 Jun 2017 07:42:31 GMT"
HTTP/1.1 304 Not Modified
Date: Sun, 25 Jun 2017 18:17:34 GMT
Last-Modified: Wed, 21 Jun 2017 07:42:31 GMT
ETag: "37a907fc5dd7cfd0c428af78f09e95a9"
Expires: Fri, 21 Jul 2018 07:41:49 UTC
Server: AmazonS3
"PageSpeed suggested to leverage browser caching" that means
that Cache-Control is missing. Nginx as a proxy for S3 files not only solves the problem of the missing headers but also saves traffic by using the Nginx proxy cache.
I use macOS, but the Nginx configuration works on Linux exactly the same way without modifications. Step by step:
1. Install Nginx:
brew update && brew install nginx
2. Set up Nginx to proxy the S3 bucket; see the configuration below.
3. Request the file via Nginx. Please take a look at the Server header: we see Nginx rather than Amazon S3 now:
curl -I http://localhost:8080/s3/img/blog/sailing-routes-around-croatia-central-dalmatia-islands/yachts-anchored-paradise-cove-croatia-3.jpg
HTTP/1.1 200 OK
Server: nginx/1.12.0
Date: Sun, 25 Jun 2017 18:30:26 GMT
Content-Type: binary/octet-stream
Content-Length: 378843
Connection: keep-alive
Last-Modified: Wed, 21 Jun 2017 07:42:31 GMT
ETag: "37a907fc5dd7cfd0c428af78f09e95a9"
Expires: Fri, 21 Jul 2018 07:41:49 UTC
Accept-Ranges: bytes
Cache-Control: max-age=31536000
4. Request the file using the Nginx proxy with a conditional GET:
curl -I http://localhost:8080/s3/img/blog/sailing-routes-around-croatia-central-dalmatia-islands/yachts-anchored-paradise-cove-croatia-3.jpg \
  --header "If-None-Match: 37a907fc5dd7cfd0c428af78f09e95a9"
HTTP/1.1 304 Not Modified
Server: nginx/1.12.0
Date: Sun, 25 Jun 2017 18:32:16 GMT
Connection: keep-alive
Last-Modified: Wed, 21 Jun 2017 07:42:31 GMT
ETag: "37a907fc5dd7cfd0c428af78f09e95a9"
Expires: Fri, 21 Jul 2018 07:41:49 UTC
Cache-Control: max-age=31536000
5. Request the file using the Nginx proxy cache. Please take a look at the X-Cache-Status header; its value is MISS until the cache is warmed up by the first request:
curl -I http://localhost:8080/s3_cached/img/blog/sailing-routes-around-croatia-central-dalmatia-islands/yachts-anchored-paradise-cove-croatia-3.jpg
HTTP/1.1 200 OK
Server: nginx/1.12.0
Date: Sun, 25 Jun 2017 18:40:45 GMT
Content-Type: binary/octet-stream
Content-Length: 378843
Connection: keep-alive
Last-Modified: Wed, 21 Jun 2017 07:42:31 GMT
ETag: "37a907fc5dd7cfd0c428af78f09e95a9"
Expires: Fri, 21 Jul 2018 07:41:49 UTC
Cache-Control: max-age=31536000
X-Cache-Status: HIT
Accept-Ranges: bytes
Based on the official Nginx documentation, here is an Nginx S3 configuration with optimised caching settings that supports the following options:
proxy_cache_revalidate instructs NGINX to use conditional GET requests when refreshing content from the origin servers
the updating parameter to the proxy_cache_use_stale directive instructs NGINX to deliver stale content when clients request an item while an update to it is being downloaded from the origin server, instead of forwarding repeated requests to the server
with proxy_cache_lock enabled, if multiple clients request a file that is not current in the cache (a MISS), only the first of those requests is allowed through to the origin server
Nginx configuration:
worker_processes 1;
daemon off;
error_log /dev/stdout info;
pid /usr/local/var/nginx/nginx.pid;

events {
  worker_connections 1024;
}

http {
  default_type text/html;
  access_log /dev/stdout;
  sendfile on;
  keepalive_timeout 65;
  proxy_cache_path /tmp/ levels=1:2 keys_zone=s3_cache:10m max_size=500m
                   inactive=60m use_temp_path=off;

  server {
    listen 8080;

    location /s3/ {
      proxy_http_version 1.1;
      proxy_set_header Connection "";
      proxy_set_header Authorization '';
      proxy_set_header Host yanpy.dev.s3.amazonaws.com;
      proxy_hide_header x-amz-id-2;
      proxy_hide_header x-amz-request-id;
      proxy_hide_header x-amz-meta-server-side-encryption;
      proxy_hide_header x-amz-server-side-encryption;
      proxy_hide_header Set-Cookie;
      proxy_ignore_headers Set-Cookie;
      proxy_intercept_errors on;
      add_header Cache-Control max-age=31536000;
      proxy_pass http://yanpy.dev.s3.amazonaws.com/;
    }

    location /s3_cached/ {
      proxy_cache s3_cache;
      proxy_http_version 1.1;
      proxy_set_header Connection "";
      proxy_set_header Authorization '';
      proxy_set_header Host yanpy.dev.s3.amazonaws.com;
      proxy_hide_header x-amz-id-2;
      proxy_hide_header x-amz-request-id;
      proxy_hide_header x-amz-meta-server-side-encryption;
      proxy_hide_header x-amz-server-side-encryption;
      proxy_hide_header Set-Cookie;
      proxy_ignore_headers Set-Cookie;
      proxy_cache_revalidate on;
      proxy_intercept_errors on;
      proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
      proxy_cache_lock on;
      proxy_cache_valid 200 304 60m;
      add_header Cache-Control max-age=31536000;
      add_header X-Cache-Status $upstream_cache_status;
      proxy_pass http://yanpy.dev.s3.amazonaws.com/;
    }
  }
}
Without knowing which modules your Nginx is compiled with, I can suggest two ways of adding Expires and Cache-Control headers to all files.
Nginx S3 proxy
This is what you asked about -- using Nginx to add expire, cache-control headers on S3 files.
Nginx needs the set-misc-nginx-module to support an Nginx S3 proxy and to change/add Expires and Cache-Control headers on the fly. There are standard guides covering everything from compilation to usage, good guides for nginx-extras on Ubuntu servers, and full guides with WordPress examples.
There are more S3 modules for extra features. Without those modules Nginx will not understand the relevant directives, and the config test (nginx -t) may pass even with a wrong config. set-misc-nginx-module is the minimum for your need. There is a better example of what you want in a GitHub gist.
As not everyone is used to compiling Nginx and the setup is slightly difficult, I am also describing how to set the Expires and Cache-Control headers for all files in one Amazon S3 bucket.
Amazon S3 Bucket Expires and Cache-Control Header
Also, it is possible to set Expires and Cache-Control headers for all objects in one AWS S3 bucket with a script or from the command line. There are several free libraries and scripts for this on GitHub, as well as Bucket Explorer, Amazon's own tools and Amazon's documentation. With the aws s3 cp CLI tool, the command looks like this:
aws s3 cp s3://mybucket/ s3://mybucket/ --recursive --metadata-directive REPLACE \
--expires 2027-09-01T00:00:00Z --acl public-read --cache-control max-age=2000000,public
From an architectural point of view, what you're trying to do is the wrong way to go about it:
Amazon S3 is presumably optimised to be a highly available cache; by introducing a hand-rolled proxying layer on top of it, you're merely introducing an unnecessary extra delay and a huge point of failure, and also losing all the benefits that would come out of S3
Your performance analysis with regards to the number of files is incorrect. If you have thousands of files on S3, the correct solution would be to write a one-time script to change the requisite attributes on S3, instead of hand-rolling a proxying mechanism that you don't fully understand, and that would be executed many times over (ad nauseam). Doing the proxying would likely be a band-aid, and, in reality, will likely decrease the performance, not increase it (even if you'd get to have a stateless automated tool tell you otherwise). Not to mention that it would also be an unnecessary resource drain, and may contribute to actual performance issues and heisenbugs down the line.
That said, if you're still up for proxying and adding the headers, the correct way to do so with nginx would be by using the expires directive.
E.g., you may place expires max; before or after your proxy_pass directive within the appropriate location.
The expires directive automatically takes care of setting a correct Cache-Control header for you, too; but you could also use add_header directive should you wish to add any custom response headers manually.
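A minimal sketch of what that could look like (not the answer author's exact config), reusing the bucket name from the question:

location /mybucket.s3.amazonaws.com/ {
  proxy_set_header Host mybucket.s3.amazonaws.com;
  expires max;  # sets a far-future Expires header and a matching Cache-Control on proxied responses
  proxy_pass http://mybucket.s3.amazonaws.com;
}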

nginx - duplicate upstream "app_server" in /etc/nginx/sites-enabled/django

I accidentally deleted the nginx conf file /etc/nginx/sites-enabled/django, then filled it with the same configuration settings. I got the following error:
Feb 02 12:56:53 solomon nginx[32004]: nginx: [emerg] duplicate upstream "app_server" in /etc/nginx/sites-enabled/django.save:1
Feb 02 12:56:53 solomon nginx[32004]: nginx: configuration file /etc/nginx/nginx.conf test failed
Feb 02 12:56:53 solomon systemd[1]: nginx.service: Control process exited, code=exited status=1
Feb 02 12:56:53 solomon sudo[31990]: pam_unix(sudo:session): session closed for user root
Feb 02 12:56:53 solomon systemd[1]: Failed to start A high performance web server and a reverse proxy server.
-- Subject: Unit nginx.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit nginx.service has failed.
--
-- The result is failed.
Feb 02 12:56:53 solomon systemd[1]: nginx.service: Unit entered failed state.
Feb 02 12:56:53 solomon systemd[1]: nginx.service: Failed with result 'exit-code'.
This is the configuration, which definitely worked before. Have I done something incorrectly?
upstream app_server {
  server 127.0.0.1:9000 fail_timeout=0;
}

server {
  listen 80 default_server;
  listen [::]:80 default_server ipv6only=on;

  root /usr/share/nginx/html;
  index index.html index.htm;

  client_max_body_size 4G;
  server_name _;

  keepalive_timeout 5;

  # Your Django project's media files - amend as required
  location /media {
    alias /home/django/django_project/django_project/media;
  }

  # your Django project's static files - amend as required
  location /static {
    alias /home/django/django_project/django_project/static;
  }

  location / {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://app_server;
  }
}
If you have other configuration files in the same directory (/etc/nginx/sites-enabled/) with the same upstream name 'app_server', then you get that duplicate upstream error.
So rename 'app_server' to some other name, run nginx -t to check for any errors, then restart nginx.
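A quick way to find where the duplicate definition lives (a sketch; adjust the paths to your layout):

grep -rn "upstream app_server" /etc/nginx/sites-enabled/ /etc/nginx/conf.d/
sudo nginx -t && sudo systemctl restart nginx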
If you type cat /etc/nginx/nginx.conf in your terminal, you will see these two lines:
include /etc/nginx/conf.d/*.conf;
which means Nginx loads all .conf files there (for example, if you are using dokku to deploy your app, you will see dokku.conf, which itself contains "include /home/dokku/*/nginx.conf;", so every /home/dokku/whatever-folder/nginx.conf gets loaded), and the second:
include /etc/nginx/sites-enabled/*;
This problem came up in my case because my configuration file was in /etc/nginx/conf.d and I also had a symbolic link to it in /etc/nginx/sites-enabled/. I'm not sure exactly how nginx loads the files, but apparently it was loaded twice. No matter what name I chose for the upstream, I got the duplicate error. Deleting the symbolic link fixed the problem.
If the same upstream name "app_server" is not found elsewhere, look for the same upstream name with a different letter case, like:
upstream app_server {
  server 127.0.0.1:9000;
}

upstream App_server {
  server 127.0.0.1:9000;
}
This may also cause conflicts. It may be down to how nginx handles upstream names, but I haven't found it in the documentation yet.

Sinatra app on AWS Beanstalk with docker and SQS

I've built a simple Sinatra app that listens for POST requests on localhost (incoming messages from AWS SQS) and configured a Dockerfile along with it for easy deployment.
Sinatra:
set :environment, 'staging'
set :bind, 'localhost'
set :port, '80'
before do
  request.body.rewind
  # request_payload = JSON.parse request.body.read
end

post '/' do
  # do stuff with payload
end
Dockerfile:
#https://dockerfile.github.io/#/ruby
FROM dockerfile/ruby
# Install dependencies
RUN apt-get update
RUN apt-get install postgresql-common postgresql-9.3 libpq-dev -y
# Copy the Gemfile and Gemfile.lock into the image to cache bundle install
# Temporarily set the working directory to where they are
WORKDIR /tmp
ADD ./Gemfile Gemfile
ADD ./Gemfile.lock Gemfile.lock
RUN bundle install
# Copy code into the image
ADD . /code
WORKDIR /code
# Open port 80
EXPOSE 80
# Clean up
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Default runtime command
CMD /code/launcher.rb
But I am getting these errors in the log files:
-------------------------------------
/var/log/nginx/error.log
-------------------------------------
2014/07/11 20:54:33 [error] 9023#0: *11 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 127.0.0.1, server: , request: "POST / HTTP/1.1", upstream: "http://127.0.0.1:12569/", host: "localhost"
-------------------------------------
/var/log/docker
-------------------------------------
2014/07/11 20:54:33 Can't forward traffic to backend tcp/172.17.0.8:80: dial tcp 172.17.0.8:80: connection refused
-------------------------------------
/var/log/aws-sqsd/default.log
-------------------------------------
2014-07-11T21:19:35Z http-err: d35bffd4-5c0b-4979-b046-5b42c7a990c0 (6) 502 - 0.023
-------------------------------------
/var/log/nginx/access.log
-------------------------------------
127.0.0.1 - - [11/Jul/2014:21:19:35 +0000] "POST / HTTP/1.1" 502 172 "-" "aws-sqsd/1.1"
-------------------------------------
/var/log/docker-ps.log
-------------------------------------
'docker ps' ran at Fri Jul 11 21:11:52 UTC 2014:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f3d8a8a3ffb6 aws_beanstalk/current-app:latest /bin/sh -c /code/bui About a minute ago Up About a minute 0.0.0.0:12529->80/tcp backstabbing_pare
Any ideas? I think it's something related to the port. I have tried others with no success...
The set :bind, 'localhost' instruction was the conflict.
Since the POST request comes from outside the Docker container, Sinatra was declining the connection.
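A minimal sketch of the fix, assuming the rest of the question's settings stay the same: bind to all interfaces so traffic forwarded by Docker's port mapping is accepted.

set :environment, 'staging'
set :bind, '0.0.0.0'   # accept connections forwarded by Docker's port mapping, not just those from inside the container
set :port, '80'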