I receive this error every time I add a new environment variable from the AWS Elastic Beanstalk panel:
AWS Beanstalk events:
2018-02-16 14:49:21 UTC-0200 INFO The environment was reverted to the previous configuration setting.
2018-02-16 14:48:49 UTC-0200 ERROR During an aborted deployment, some instances may have deployed the new application version. To ensure all instances are running the same version, re-deploy the appropriate application version.
2018-02-16 14:48:49 UTC-0200 ERROR Failed to deploy configuration.
2018-02-16 14:48:49 UTC-0200 ERROR Unsuccessful command execution on instance id(s) 'i-xxxxxxxxxxxxxx'. Aborting the operation.
2018-02-16 14:48:49 UTC-0200 INFO Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
eb-activity.log:
Successfully execute hooks in directory /opt/elasticbeanstalk/hooks/configdeploy/enact.
[2018-02-16T16:21:18.921Z] INFO [8550] - [Configuration update app-0_0_10-180216_141535#104/ConfigDeployStage1/ConfigDeployPostHook] : Starting activity...
[2018-02-16T16:21:18.921Z] INFO [8550] - [Configuration update app-0_0_10-180216_141535#104/ConfigDeployStage1/ConfigDeployPostHook/99_kill_default_nginx.sh] : Starting activity...
[2018-02-16T16:21:19.164Z] INFO [8550] - [Configuration update app-0_0_10-180216_141535#104/ConfigDeployStage1/ConfigDeployPostHook/99_kill_default_nginx.sh] : Activity execution failed, because: + rm -f /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
+ service nginx stop
Stopping nginx: /sbin/service: line 66: 8986 Killed env -i PATH="$PATH" TERM="$TERM" "${SERVICEDIR}/${SERVICE}" ${OPTIONS} (ElasticBeanstalk::ExternalInvocationError)
caused by: + rm -f /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
+ service nginx stop
Stopping nginx: /sbin/service: line 66: 8986 Killed env -i PATH="$PATH" TERM="$TERM" "${SERVICEDIR}/${SERVICE}" ${OPTIONS} (Executor::NonZeroExitStatus)
[2018-02-16T16:21:19.164Z] INFO [8550] - [Configuration update app-0_0_10-180216_141535#104/ConfigDeployStage1/ConfigDeployPostHook/99_kill_default_nginx.sh] : Activity failed.
[2018-02-16T16:21:19.165Z] INFO [8550] - [Configuration update app-0_0_10-180216_141535#104/ConfigDeployStage1/ConfigDeployPostHook] : Activity failed.
[2018-02-16T16:21:19.165Z] INFO [8550] - [Configuration update app-0_0_10-180216_141535#104/ConfigDeployStage1] : Activity failed.
[2018-02-16T16:21:19.165Z] INFO [8550] - [Configuration update app-0_0_10-180216_141535#104] : Completed activity. Result:
Configuration update – Command CMD-ConfigDeploy failed
Edit: Added stack-https.config file
eb-activity.log:
Command 01_copy_conf_file] : Activity execution failed, because: (ElasticBeanstalk::ExternalInvocationError)
Starting activity...
[2018-02-16T20:38:30.476Z] INFO [2536] - [Application deployment app-0_0_10-1-gb633-180216_175029#124/StartupStage0/EbExtensionPostBuild/Infra-EmbeddedPostBuild/postbuild_0_paneladm_api_stack_SampleApplication_W4FJ8W83X64B] : Starting activity...
[2018-02-16T20:38:32.456Z] INFO [2536] - [Application deployment app-0_0_10-1-gb633-180216_175029#124/StartupStage0/EbExtensionPostBuild/Infra-EmbeddedPostBuild/postbuild_0_paneladm_api_stack_SampleApplication_W4FJ8W83X64B/Command 00_removeconfig] : Starting activity...
[2018-02-16T20:38:32.463Z] INFO [2536] - [Application deployment app-0_0_10-1-gb633-180216_175029#124/StartupStage0/EbExtensionPostBuild/Infra-EmbeddedPostBuild/postbuild_0_paneladm_api_stack_SampleApplication_W4FJ8W83X64B/Command 00_removeconfig] : Completed activity.
[2018-02-16T20:38:34.493Z] INFO [2536] - [Application deployment app-0_0_10-1-gb633-180216_175029#124/StartupStage0/EbExtensionPostBuild/Infra-EmbeddedPostBuild/postbuild_0_paneladm_api_stack_SampleApplication_W4FJ8W83X64B/Command 01_copy_conf_file] : Starting activity...
[2018-02-16T20:38:34.538Z] INFO [2536] - [Application deployment app-0_0_10-1-gb633-180216_175029#124/StartupStage0/EbExtensionPostBuild/Infra-EmbeddedPostBuild/postbuild_0_paneladm_api_stack_SampleApplication_W4FJ8W83X64B/Command 01_copy_conf_file] : Activity execution failed, because: (ElasticBeanstalk::ExternalInvocationError)
I don't know if the problem is caused by my having previously removed the default elastic_beanstalk_proxy.conf file with the commands below:
Resources:
  sslSecurityGroupIngress:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: {"Fn::GetAtt" : ["AWSEBSecurityGroup", "GroupId"]}
      IpProtocol: tcp
      ToPort: 443
      FromPort: 443
      CidrIp: 0.0.0.0/0

files:
  /etc/letsencrypt/configs/http_proxy.pre:
    mode: "000644"
    owner: root
    group: root
    content: |
      # Elastic Beanstalk Managed
      upstream nodejs {
        server 127.0.0.1:8081;
        keepalive 256;
      }
      server {
        listen 8080;
        access_log /var/log/nginx/access.log main;
        location /.well-known {
          allow all;
          root /usr/share/nginx/html;
        }
        # Redirect non-https traffic to https.
        location / {
          if ($scheme != "https") {
            return 301 https://$host$request_uri;
          } # managed by Certbot
        }
      }

  # The Nginx config forces https, and is meant as an example only.
  /etc/letsencrypt/configs/https_custom.pos:
    mode: "000644"
    owner: root
    group: root
    content: |
      # HTTPS server
      server {
        listen 443 default ssl;
        server_name localhost;
        error_page 497 https://$host$request_uri;
        ssl_certificate /etc/letsencrypt/live/ebcert/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/ebcert/privkey.pem;
        ssl_session_timeout 5m;
        ssl_protocols TLSv1.1 TLSv1.2;
        ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
        ssl_prefer_server_ciphers on;
        if ($ssl_protocol = "") {
          rewrite ^ https://$host$request_uri? permanent;
        }
        location / {
          proxy_pass http://nodejs;
          proxy_set_header Connection "";
          proxy_http_version 1.1;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header Upgrade $http_upgrade;
          proxy_set_header Connection "upgrade";
        }
        gzip on;
        gzip_comp_level 4;
        gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
      }

  /etc/letsencrypt/configs/generate-cert.sh:
    mode: "000664"
    owner: root
    group: root
    content: |
      #!/bin/sh
      _EMAIL=
      _DOMAIN=
      while getopts ":e:d:" OPTION;
      do
        case "${OPTION}" in
          "e") _EMAIL="${OPTARG}";;
          "d") _DOMAIN="${OPTARG}";;
        esac
      done
      if [ -z "${_EMAIL}" ]; then
        echo "Param email isn't specified!"
      fi
      if [ -z "${_DOMAIN}" ]; then
        echo "Param domain isn't specified!"
      fi
      if [ -n "$_EMAIL" ] && [ -n "$_DOMAIN" ]; then
        cd /opt/certbot/
        ./certbot-auto certonly \
          --debug --non-interactive --email ${_EMAIL} \
          --webroot -w /usr/share/nginx/html --agree-tos -d ${_DOMAIN} --keep-until-expiring
      fi
      if [ $? -ne 0 ]
      then
        ERRORLOG="/var/log/letsencrypt/letsencrypt.log"
        echo "The Let's Encrypt cert has not been renewed!\n" >> $ERRORLOG
      else
        /etc/init.d/nginx reload
      fi
      exit 0

  /opt/elasticbeanstalk/hooks/configdeploy/post/99_kill_default_nginx.sh:
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/bash -xe
      rm -f /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
      service nginx stop
      service nginx start

packages:
  yum:
    epel-release: []

container_commands:
  00_removeconfig:
    command: "rm -f /tmp/deployment/config/#etc#nginx#conf.d#00_elastic_beanstalk_proxy.conf /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf"
  01_copy_conf_file:
    command: "cp /etc/letsencrypt/configs/http_proxy.pre /etc/nginx/conf.d/http_proxy.conf; /etc/init.d/nginx reload"
  02_createdir:
    command: "mkdir /opt/certbot || true"
  03_installcertbot:
    command: "wget https://dl.eff.org/certbot-auto -O /opt/certbot/certbot-auto"
  04_permission:
    command: "chmod a+x /opt/certbot/certbot-auto"
  05_getcert:
    command: "sudo sh /etc/letsencrypt/configs/generate-cert.sh -e ${CERT_EMAIL} -d ${CERT_DOMAIN}"
  06_link:
    command: "ln -sf /etc/letsencrypt/live/${CERT_DOMAIN} /etc/letsencrypt/live/ebcert"
  07_copy_ssl_conf_file:
    command: "cp /etc/letsencrypt/configs/https_custom.pos /etc/nginx/conf.d/https_custom.conf; /etc/init.d/nginx reload"
  08_cronjob_renew:
    command: "sudo sh /etc/letsencrypt/configs/generate-cert.sh -e ${CERT_EMAIL} -d ${CERT_DOMAIN}"
I'm doing this because I replace that file with my own proxy.conf file.
Please, I need your help.
References:
awslabs/elastic-beanstalk-samples/https-redirect-nodejs.config
AWS EBS - Environment Properties and Other Software Settings
I had this problem as well, and Amazon acknowledged the error in the documentation. Here is a working restart script that you can use in your .ebextensions config file.
/opt/elasticbeanstalk/hooks/configdeploy/post/99_kill_default_nginx.sh:
  mode: "000755"
  owner: root
  group: root
  content: |
    #!/bin/bash -xe
    rm -f /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
    status=`/sbin/status nginx`
    if [[ $status = *"start/running"* ]]; then
      echo "stopping nginx..."
      stop nginx
      echo "starting nginx..."
      start nginx
    else
      echo "nginx is not running... starting it..."
      start nginx
    fi
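Note that /sbin/status, stop, and start are upstart commands, so this hook applies to upstart-based platforms such as Amazon Linux 1. On a systemd-based platform (for example Amazon Linux 2), a minimal sketch of an equivalent hook might look like this (my own adaptation, not part of the original answer):
#!/bin/bash -xe
# Hypothetical systemd equivalent of the upstart hook above.
rm -f /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
if systemctl is-active --quiet nginx; then
  echo "restarting nginx..."
  systemctl restart nginx
else
  echo "nginx is not running... starting it..."
  systemctl start nginx
fi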
Related
I just started with Jelastic and I'm trying to create a container based on jelastic/nginxphp:1.20.2-php-8.0.13. The final goal is to integrate my Symfony development into a container I will run in Jelastic. As a first step I tried to run 'composer install' in my Dockerfile. It builds fine (no errors), but when I look into the container, the vendor directory is not there. If I rerun 'composer install' directly inside the container, the vendor directory is created as expected.
Here is the content of my Dockerfile:
FROM jelastic/nginxphp:1.20.2-php-8.0.13
# Set build arguments
ARG APP_ENV=prod
# Set main params
ENV APP_HOME /var/www/webroot
ENV APP_ENV $APP_ENV
COPY infra/jelastic/index.php $APP_HOME/ROOT/
# Get latest Composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
COPY symfony/composer.* $APP_HOME/
WORKDIR $APP_HOME
RUN set -xe \
&& if [ "$APP_ENV" = "prod" ]; then export ARGS="--no-dev"; fi \
&& composer install --prefer-dist --optimize-autoloader --classmap-authoritative --no-interaction --no-ansi $ARGS
RUN composer dump-autoload --classmap-authoritative
CMD service php-fpm start && nginx -g "daemon off;"
More globally, it seems the RUN instruction in the Dockerfile doesn't work as expected: I also tried to remove some files/directories, but in the end nothing is removed and no error is shown during the build.
Thanks in advance.
Jacques
Is the end goal to run this as a "certified container" within Jelastic, so you have access to Jelastic add-ons like Let's Encrypt and so on, or do you simply want to run a Docker image in Jelastic? For the latter, I would recommend using a more "standard" image as your base. – Damien
Following the recommendation of Damien, I have created a new Dockerfile based on a more standard base image. When testing on my development machine everything is fine, but when using the container in Jelastic, I see the following errors in the run.log file:
No valid login shell found for user nobody
2021-12-22 11:21:58,046 INFO Set uid to user 65534 succeeded
2021-12-22 11:21:58,081 CRIT could not write pidfile /run/supervisord.pid
2021-12-22 11:21:59,082 INFO spawnerr: unknown error making dispatchers for 'nginx': EACCES
2021-12-22 11:21:59,083 INFO spawnerr: unknown error making dispatchers for 'php-fpm': EACCES
2021-12-22 11:22:00,083 INFO gave up: nginx entered FATAL state, too many start retries too quickly
2021-12-22 11:22:00,084 INFO gave up: php-fpm entered FATAL state, too many start retries too quickly
Here are the files I'm using.
Dockerfile:
# 1st stage : build js & css
FROM node:14-alpine AS builder
WORKDIR /wamsbot
ENV WAMS_BASE_URL=http://127.0.0.1:8000
ARG NODE_ENV=production
ENV NODE_ENV $NODE_ENV
COPY symfony/package.json symfony/yarn.lock symfony/webpack.config.js ./
COPY symfony/assets ./assets
RUN mkdir -p public \
&& NODE_ENV=development yarn install \
&& yarn run build
FROM composer AS composer
# Copy the source directory and install the dependencies with composer
WORKDIR /wamsbot
COPY symfony/composer.* ./
# Run composer install to install the dependencies
RUN if [ "$APP_ENV" = "prod" ]; then export ARGS="--no-dev"; fi \
&& composer install --prefer-dist --optimize-autoloader --classmap-authoritative --no-interaction --no-ansi $ARGS
COPY symfony/ ./
RUN composer dump-autoload --classmap-authoritative
# continue stage build with the desired image and copy the source including the
# dependencies downloaded by composer
FROM alpine:3
# Install packages and remove default server definition
RUN apk --no-cache add \
curl \
nginx \
php8 \
php8-ctype \
php8-curl \
php8-dom \
php8-fpm \
php8-gd \
php8-intl \
php8-json \
php8-mbstring \
php8-mysqli \
php8-opcache \
php8-openssl \
php8-phar \
php8-session \
php8-simplexml \
php8-xml \
php8-tokenizer \
php8-xmlreader \
php8-zlib \
supervisor \
&& rm -f /etc/nginx/conf.d/default.conf
# Create symlink so programs depending on `php` still function
RUN ln -s /usr/bin/php8 /usr/bin/php
# Configure nginx
COPY infra/prod/config/nginx.conf /etc/nginx/nginx.conf
# Configure PHP-FPM
COPY infra/prod/config/fpm-pool.conf /etc/php8/php-fpm.d/www.conf
COPY infra/prod/config/php.ini /etc/php8/conf.d/custom.ini
# Configure supervisord
COPY infra/prod/config/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
# Setup document root
RUN mkdir -p /var/www/wamsbot
# Make sure files/folders needed by the processes are accessible when they run under the nobody user
RUN chown -R nobody.nobody /var/www/wamsbot \
&& chown -R nobody.nobody /run \
&& chown -R nobody.nobody /var/lib/nginx \
&& chown -R nobody.nobody /var/log/nginx
# Switch to use a non-root user from here on
USER nobody
# Add application
WORKDIR /var/www/wamsbot
COPY --chown=nobody symfony/ /var/www/wamsbot/
ARG APP_ENV=prod
ARG APP_DEBUG=0
ARG GOOGLE_APPLICATION_CREDENTIALS_PATH
ENV APP_ENV $APP_ENV
ENV APP_DEBUG $APP_DEBUG
COPY --from=composer --chown=nobody /wamsbot/ /var/www/wamsbot
COPY --from=builder --chown=nobody /wamsbot/public/build /var/www/wamsbot/public/build
# Copy key files
RUN mkdir -p /tmp/keys
COPY $GOOGLE_APPLICATION_CREDENTIALS_PATH /tmp/keys/google_key.json
# Memory limit increase is required by the dev image
RUN php -d memory_limit=256M bin/console cache:clear
RUN php bin/console assets:install --symlink --relative public \
&& rm -rf /var/www/wamsbot/assets
# Expose the port nginx is reachable on
EXPOSE 8080
# Let supervisord start nginx & php-fpm
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
# Configure a healthcheck to validate that everything is up&running
HEALTHCHECK --timeout=10s CMD curl --silent --fail http://127.0.0.1:8080/fpm-ping
fpm-pool.conf:
[global]
; Log to stderr
error_log = /dev/stderr
[www]
; The address on which to accept FastCGI requests.
; Valid syntaxes are:
; 'ip.add.re.ss:port' - to listen on a TCP socket to a specific IPv4 address on
; a specific port;
; '[ip:6:addr:ess]:port' - to listen on a TCP socket to a specific IPv6 address on
; a specific port;
; 'port' - to listen on a TCP socket to all addresses
; (IPv6 and IPv4-mapped) on a specific port;
; '/path/to/unix/socket' - to listen on a unix socket.
; Note: This value is mandatory.
listen = 127.0.0.1:9000
; Enable status page
pm.status_path = /fpm-status
; Ondemand process manager
pm = ondemand
; The number of child processes to be created when pm is set to 'static' and the
; maximum number of child processes when pm is set to 'dynamic' or 'ondemand'.
; This value sets the limit on the number of simultaneous requests that will be
; served. Equivalent to the ApacheMaxClients directive with mpm_prefork.
; Equivalent to the PHP_FCGI_CHILDREN environment variable in the original PHP
; CGI. The below defaults are based on a server without much resources. Don't
; forget to tweak pm.* to fit your needs.
; Note: Used when pm is set to 'static', 'dynamic' or 'ondemand'
; Note: This value is mandatory.
pm.max_children = 100
; The number of seconds after which an idle process will be killed.
; Note: Used only when pm is set to 'ondemand'
; Default Value: 10s
pm.process_idle_timeout = 10s;
; The number of requests each child process should execute before respawning.
; This can be useful to work around memory leaks in 3rd party libraries. For
; endless request processing specify '0'. Equivalent to PHP_FCGI_MAX_REQUESTS.
; Default Value: 0
pm.max_requests = 1000
; Make sure the FPM workers can reach the environment variables for configuration
clear_env = no
; Catch output from PHP
catch_workers_output = yes
; Remove the 'child 10 said into stderr' prefix in the log and only show the actual message
decorate_workers_output = no
; Enable ping page to use in healthcheck
ping.path = /fpm-ping
nginx.conf:
worker_processes auto;
error_log stderr warn;
pid /run/nginx.pid;

events {
  worker_connections 1024;
}

http {
  include mime.types;
  default_type application/octet-stream;

  # Define custom log format to include response times
  log_format main_timed '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for" '
                        '$request_time $upstream_response_time $pipe $upstream_cache_status';

  access_log /dev/stdout main_timed;
  error_log /dev/stderr notice;

  keepalive_timeout 65;

  # Write temporary files to /tmp so they can be created as a non-privileged user
  client_body_temp_path /tmp/client_temp;
  proxy_temp_path /tmp/proxy_temp_path;
  fastcgi_temp_path /tmp/fastcgi_temp;
  uwsgi_temp_path /tmp/uwsgi_temp;
  scgi_temp_path /tmp/scgi_temp;

  # Default server definition
  server {
    listen [::]:8080 default_server;
    listen 8080 default_server;
    server_name _;
    sendfile off;
    root /var/www/wamsbot/public;
    index index.php index.html;

    location / {
      # First attempt to serve request as file, then
      # as directory, then fall back to index.php
      try_files $uri $uri/ /index.php?q=$uri&$args;
    }

    # Redirect server error pages to the static page /50x.html
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
      root /var/lib/nginx/html;
    }

    # Pass the PHP scripts to PHP-FPM listening on 127.0.0.1:9000
    location ~ \.php$ {
      try_files $uri =404;
      fastcgi_split_path_info ^(.+\.php)(/.+)$;
      fastcgi_pass 127.0.0.1:9000;
      fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
      fastcgi_param SCRIPT_NAME $fastcgi_script_name;
      fastcgi_index index.php;
      include fastcgi_params;
    }

    location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
      expires 5d;
    }

    # Deny access to . files, for security
    location ~ /\. {
      log_not_found off;
      deny all;
    }

    # Allow fpm ping and status from localhost
    location ~ ^/(fpm-status|fpm-ping)$ {
      access_log off;
      allow 127.0.0.1;
      deny all;
      fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
      include fastcgi_params;
      fastcgi_pass 127.0.0.1:9000;
    }
  }

  gzip on;
  gzip_proxied any;
  gzip_types text/plain application/xml text/css text/js text/xml application/x-javascript text/javascript application/json application/xml+rss;
  gzip_vary on;
  gzip_disable "msie6";

  # Include other server configs
  include /etc/nginx/conf.d/*.conf;
}
php.ini:
[Date]
date.timezone="UTC"
supervisord.conf:
[supervisord]
nodaemon=true
logfile=/dev/null
logfile_maxbytes=0
pidfile=/run/supervisord.pid
user=nobody
[program:php-fpm]
command=php-fpm8 -F
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
autorestart=false
startretries=0
[program:nginx]
command=nginx -g 'daemon off;'
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
autorestart=false
startretries=0
I would appreciate some help. Thanks.
The mentioned behavior occurred because /var/www/webroot is declared as a volume for jelastic/nginxphp:1.20.2-php-8.0.13, so any changes made by a RUN command inside this directory during the build do not persist.
But files created by the COPY or ADD commands do persist, so the workaround is to use a multi-stage build:
On the first stage, you use the mentioned jelastic/nginxphp:1.20.2-php-8.0.13 image, put the symfony/composer.* files in any directory other than /var/www/webroot (for example /var/www/app) and run "composer install" in /var/www/app.
On the second stage, you use jelastic/nginxphp:1.20.2-php-8.0.13 again, and copy the content of the 'app' directory from the first stage using the following instruction:
COPY --from=0 /var/www/app /var/www/webroot/ROOT
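A minimal sketch of that two-stage Dockerfile, assuming the paths from the question (the /var/www/app directory and the trimmed composer flags are illustrative):
# Stage 0: run composer outside the volume path so its output persists
FROM jelastic/nginxphp:1.20.2-php-8.0.13
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
COPY symfony/composer.* /var/www/app/
WORKDIR /var/www/app
RUN composer install --prefer-dist --no-interaction --no-ansi

# Stage 1: COPY the result into the volume path; unlike RUN, files written
# by COPY persist even into a declared volume
FROM jelastic/nginxphp:1.20.2-php-8.0.13
COPY --from=0 /var/www/app /var/www/webroot/ROOT
CMD service php-fpm start && nginx -g "daemon off;"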
I have a Django app running in Docker containers (please see the docker-compose file and Dockerfile below). I have removed port exposure from my docker-compose file; however, when I deploy the code onto an Ubuntu server, I can still access the app via port 3000. I am also using nginx to do the proxying (see the nginx file below).
services:
  rabbitmq:
    restart: always
    image: rabbitmq:3.7
    ...
  db:
    restart: always
    image: mongo:4
    ...
  cam_dash:
    restart: always
    build: .
    command: python3 manage.py runserver 0.0.0.0:3000
    ...
  celery:
    restart: always
    build: .
    command: celery -A dashboard worker -l INFO -c 200
    ...
  celery_beat:
    restart: always
    build: .
    command: celery beat -A dashboard -l info --scheduler django_celery_beat.schedulers:DatabaseScheduler
    ...
FROM python:3.7
COPY requirements.txt /
RUN pip3 install -r /requirements.txt
ADD ./ /dashboard
WORKDIR /dashboard
COPY ./docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
EXPOSE 3000
server {
  listen 80 default_server;
  listen [::]:80 default_server;
  server_name _;
  return 301 https://$host$request_uri;
  root /var/www/html;
  index index.html;
}
server {
  listen 443;
  server_name camonitor.uct.ac.za;
  ssl on;
  ssl_certificate /etc/ssl/certs/wildcard.crt;
  ssl_certificate_key /etc/ssl/private/wildcard.key;
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  location / {
    root /var/www/html;
    index index.html;
  }
  location /dash/ {
    proxy_pass http://127.0.0.1:3000/dash/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
  }
...
I am expecting that if I try to access https://example.com:3000/dash/, it should not be accessible, while https://example.com/dash/ works just fine.
Thanks for the help.
You should prevent access to port 3000 using the system's firewall.
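Bear in mind that Docker publishes ports by inserting its own iptables rules, which are evaluated before host firewalls such as ufw, so the reliable place to block a published port is Docker's DOCKER-USER chain (available since Docker 17.06). A sketch, using the port from the question:
# Drop forwarded traffic to the published port. Host-local proxying
# (nginx -> 127.0.0.1:3000) is unaffected, because locally generated
# packets do not traverse this forwarding chain.
sudo iptables -I DOCKER-USER -p tcp --dport 3000 -j DROP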
I had the same issue hosting more than one web server on the same machine and proxying with Nginx. I solved it using this port configuration in docker-compose.yml, binding the port only to localhost; you could apply the same configuration to the Python server.
"127.0.0.1:3000:3000"
version: '3'
services:
  myService:
    image: "myService/myService:1"
    container_name: "myService"
    ports:
      - "127.0.0.1:3000:3000"
I'm deploying my Django/Nginx/Gunicorn webapp to an EC2 instance using docker-compose. The EC2 instance has a static IP that mywebapp.com / www.mywebapp.com point to, and I've completed the certbot verification (the site works on port 80 over HTTP), but now I'm trying to get it working over SSL.
Right now, HTTP (including loading static files) is working for me, and HTTPS dynamic content (from Django) is working, but static files are not. I think my nginx configuration is wonky.
I tried copying the location /static/ block to the SSL server context in the nginx conf file, but that caused SSL to stop working altogether, not just static files over SSL.
Here's the final docker-compose.yml:
services:
  certbot:
    entrypoint: /bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'
    image: certbot/certbot
    volumes:
      - /home/ec2-user/certbot/conf:/etc/letsencrypt:rw
      - /home/ec2-user/certbot/www:/var/www/certbot:rw
  nginx:
    command: /bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g "daemon off;"'
    depends_on:
      - web
    image: xxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/xxxxxxxx:latest
    ports:
      - 80:80/tcp
      - 443:443/tcp
    volumes:
      - /home/ec2-user/certbot/conf:/etc/letsencrypt:rw
      - static_volume:/usr/src/app/public:rw
      - /home/ec2-user/certbot/www:/var/www/certbot:rw
  web:
    entrypoint: gunicorn mywebapp.wsgi:application --bind 0.0.0.0:7000
    image: xxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/xxxxxxxx:latest
    volumes:
      - static_volume:/usr/src/app/public:rw
version: '3.0'
volumes:
  static_volume: {}
nginx.prod.conf:
upstream mywebapp {
  # web is the name of the service in the docker-compose.yml
  # 7000 is the port that gunicorn listens on
  server web:7000;
}
server {
  listen 80;
  server_name mywebapp;
  location / {
    proxy_pass http://mywebapp;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;
    proxy_redirect off;
  }
  location /static/ {
    alias /usr/src/app/public/;
  }
  location /.well-known/acme-challenge/ {
    root /var/www/certbot;
  }
}
server {
  # https://github.com/wmnnd/nginx-certbot/blob/master/data/nginx/app.conf
  listen 443 ssl;
  server_name mywebapp;
  server_tokens off;
  location / {
    proxy_pass http://mywebapp;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  }
  # generated with help of certbot
  ssl_certificate /etc/letsencrypt/live/mywebapp.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/mywebapp.com/privkey.pem;
  include /etc/letsencrypt/options-ssl-nginx.conf;
  ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
and finally the nginx service Dockerfile:
FROM nginx:1.15.12-alpine
RUN rm /etc/nginx/conf.d/default.conf
COPY ./nginx.prod.conf /etc/nginx/conf.d
I simply build and push to ECR on my local machine, then docker-compose pull and run with docker-compose up -d on the EC2 instance.
The error I see in docker-compose logs is:
nginx_1 | 2019/05/09 02:30:34 [error] 8#8: *1 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xx.xx.xx, server: mywebapp, request: "GET / HTTP/1.1", upstream: "http://192.168.111.3:7000/", host: "ec2-xxx-xxx-xxx-xxx.compute-1.amazonaws.com"
And I'm not sure what's going wrong. I'm trying to get both dynamic content (gunicorn) and static content (from: /usr/src/app/public) served correctly under HTTPS using the certs I've generated and verified.
Anyone know what I might be doing wrong?
Check your configuration file with nginx -T - are you seeing the correct configuration? Is your build process pulling in the correct conf?
It's helpful to just debug this on the remote machine - docker-compose exec nginx sh to get inside and tweak the conf from there and nginx -s reload. This will speed up your iteration cycles debugging an SSL issue.
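For example, the loop might look like this (a sketch; the service name comes from the question's compose file):
# Dump the configuration nginx actually loaded, from inside the container
docker-compose exec nginx nginx -T
# Get a shell inside the container to tweak the conf in place
docker-compose exec nginx sh
# After editing, validate and reload without restarting the container
nginx -t && nginx -s reload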
Hi, I am new to this project and I am having issues hosting it on a CentOS 7 EC2 instance.
I am getting this error when I hit my domain:
2017/02/17 05:53:35 [error] 27#27: *20 connect() failed (111: Connection refused) while connecting to upstream, client: xxx.xxx.xxx.xxx, server:myApp.io, request: "GET /favicon.ico HTTP/1.1", upstream: "http://172.18.0.7:5000/favicon.ico", host: "myApp.io", referrer: "https://myApp.io"
When I look at the logs
docker logs d381b6d093fa
sleep 5
build starting nginx config
replacing ___my.example.com___/myApp.io
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
  worker_connections 1024;
}
http {
  include /etc/nginx/mime.types;
  default_type application/octet-stream;
  log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                  '$status $body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for"';
  access_log /var/log/nginx/access.log main;
  sendfile on;
  #tcp_nopush on;
  keepalive_timeout 65;
  #gzip on;
  upstream app {
    server django:5000;
  }
  server {
    listen 80;
    charset utf-8;
    server_name myApp.io;
    location /.well-known/acme-challenge {
      proxy_pass http://certbot:80;
      proxy_set_header Host $host;
      proxy_set_header X-Forwarded-For $remote_addr;
      proxy_set_header X-Forwarded-Proto https;
    }
    location / {
      # checks for static file, if not found proxy to app
      try_files $uri @proxy_to_app;
    }
    # cookiecutter-django app
    location @proxy_to_app {
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header Host $http_host;
      proxy_redirect off;
      proxy_pass http://app;
    }
  }
}
.
Firing up nginx in the background.
Waiting for folder /etc/letsencrypt/live/myApp.io to exist
replacing ___my.example.com___/myApp.io
replacing ___NAMESERVER___/127.0.0.11
I made sure to add my IP address to the env file for allowed hosts.
When I look at running containers I get:
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3887c3465802 myApp_nginx "/bin/sh -c /start.sh" 3 minutes ago Up 3 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp myApp_nginx_1
91cbc2a2359d myApp_django "/entrypoint.sh /g..." 3 minutes ago Up 3 minutes myApp_django_1
My docker-compose.yml looks like:
version: '2'
volumes:
postgres_data: {}
postgres_backup: {}
services:
postgres:
build: ./compose/postgres
volumes:
- postgres_data:/var/lib/postgresql/data
- postgres_backup:/backups
env_file: .env
django:
build:
context: .
dockerfile: ./compose/django/Dockerfile
user: django
depends_on:
- postgres
- redis
command: /gunicorn.sh
env_file: .env
nginx:
build: ./compose/nginx
depends_on:
- django
- certbot
ports:
- "0.0.0.0:80:80"
environment:
- MY_DOMAIN_NAME=myApp.io
ports:
- "0.0.0.0:80:80"
- "0.0.0.0:443:443"
volumes:
- /etc/letsencrypt:/etc/letsencrypt
- /var/lib/letsencrypt:/var/lib/letsencrypt
certbot:
image: quay.io/letsencrypt/letsencrypt
command: bash -c "sleep 6 && certbot certonly -n --standalone -d myApp.io --text --agree-tos --email morozovsdenis#gmail.com --server https://acme-v01.api.letsencrypt.org/directory --rsa-key-size 4096 --verbose --keep-until-expiring --standalone-supported-challenges http-01"
entrypoint: ""
volumes:
- /etc/letsencrypt:/etc/letsencrypt
- /var/lib/letsencrypt:/var/lib/letsencrypt
ports:
- "80"
- "443"
environment:
- TERM=xterm
redis:
image: redis:latest
celeryworker:
build:
context: .
dockerfile: ./compose/django/Dockerfile
user: django
env_file: .env
depends_on:
- postgres
- redis
command: celery -A myApp.taskapp worker -l INFO
celerybeat:
build:
context: .
dockerfile: ./compose/django/Dockerfile
user: django
env_file: .env
depends_on:
- postgres
- redis
command: celery -A myApp.taskapp beat -l INFO
My .env file has the correct allowed host, which is my EC2 instance's IP address.
Any idea what I am doing incorrectly?
I faced the same issue a few months ago. Please have a look at this answer: the problem was with SELinux. It worked like a charm :)
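For reference, the usual SELinux fix on CentOS is to allow nginx (which runs in the httpd_t domain) to make outbound network connections to the upstream; a sketch, assuming that is what the linked answer describes:
# Check whether SELinux is enforcing
getenforce
# Allow httpd/nginx to make outbound network connections;
# -P makes the change persistent across reboots
sudo setsebool -P httpd_can_network_connect 1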
I'm using nginx with Django on Ubuntu 10.04. The problem is that when I restart nginx, I get this error.
sudo /etc/init.d/nginx restart
Restarting nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
configuration file /etc/nginx/nginx.conf test is successful
[emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
[emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
[emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
[emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
Also, I have tried stop and then start, but I still get the error.
Here's the output from lsof:
sudo lsof -i tcp:80
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
nginx 27141 root 6u IPv4 245906 0t0 TCP *:www (LISTEN)
nginx 27142 nobody 6u IPv4 245906 0t0 TCP *:www (LISTEN)
If I kill the process with PID 27141 it works. However, I would like to get to the bottom of why I can't just do a restart.
Here's the nginx.conf:
worker_processes 1;
user nobody nogroup;
pid /tmp/nginx.pid;
error_log /tmp/nginx.error.log;
events {
  worker_connections 1024;
  accept_mutex off;
}
http {
  include mime.types;
  default_type application/octet-stream;
  access_log /tmp/nginx.access.log combined;
  sendfile on;
  upstream app_server {
    # server unix:/tmp/gunicorn.sock fail_timeout=0;
    # For a TCP configuration:
    server 127.0.0.1:8000 fail_timeout=0;
  }
  server {
    listen 80 default;
    client_max_body_size 4G;
    server_name _;
    keepalive_timeout 5;
    # path for static files
    root /home/apps/venvs/app1/app1;
    location / {
      # checks for static file, if not found proxy to app
      try_files $uri @proxy_to_app;
    }
    location @proxy_to_app {
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header Host $http_host;
      proxy_redirect off;
      proxy_pass http://app_server;
    }
    error_page 500 502 503 504 /500.html;
    location = /500.html {
      root /path/to/app/current/public;
    }
  }
}
Any ideas?
Try:
$ sudo fuser -k 80/tcp ; sudo /etc/init.d/nginx restart
This worked for me:
sudo fuser -k 80/tcp
and then:
service nginx start
Source: https://rtcamp.com/tutorials/nginx/troubleshooting/emerg-bind-failed-98-address-already-in-use/
Daemontools starts nginx successfully; nginx then daemonizes, and daemontools tries to start it again, unsuccessfully, logging an error each time.
The solution to this problem is to disable daemon mode in the main section of the nginx.conf:
daemon off;
Site: http://wiki.nginx.org/CoreModule
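For reference, the directive belongs in the top-level (main) context, as in this minimal illustrative nginx.conf:
# Keep nginx in the foreground so daemontools can supervise it
daemon off;
events {
  worker_connections 1024;
}
http {
  # ... the rest of your configuration ...
}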
Tired of nginx restart issues and "address already in use" faults, I decided to make it work once and for all. I added just one line at the end of the stop and restart actions in the /etc/init.d/nginx file:
nginx -s quit
so it now looks like this (ensure that the nginx folder is in the PATH variable; otherwise, specify the full path):
stop)
    echo -n "Stopping $DESC: "
    start-stop-daemon --stop --quiet --pidfile /var/run/$NAME.pid \
        --exec $DAEMON || true
    echo "$NAME."
    nginx -s quit
    ;;
restart|force-reload)
    echo -n "Restarting $DESC: "
    start-stop-daemon --stop --quiet --pidfile \
        /var/run/$NAME.pid --exec $DAEMON || true
    nginx -s quit
    sleep 1
    test_nginx_config
    start-stop-daemon --start --quiet --pidfile \
        /var/run/$NAME.pid --exec $DAEMON -- $DAEMON_OPTS || true
    echo "$NAME."
    ;;
Hope that this solution will work for others.
Always test your config first; it will show syntax errors and duplicate directives and point you to them:
nginx -t
You will see output showing you what is causing the failure.
It's because you aren't restarting as root.
Change to root:
sudo -i
Restart:
service nginx restart
Or:
/etc/init.d/nginx restart