Let's Encrypt misconfiguration with multiple domains on ServerPilot / DigitalOcean

I've got serverpilot running on a digital ocean droplet.
Ubuntu 14.04
I've followed the tutorial: https://bjoernfranzen.com/how-to-set-up-a-letsencrypt-ssl-certificate-for-your-wordpress-website-on-a-digital-ocean-server-managed-with-a-serverpilot-free-account/
And it worked perfectly for the first domain.
The second domain following the same setup has issues.
Chrome says
"This server cannot prove that it is domain2; its security certificate is from domain1. This may be caused by a misconfiguration or an attacker intercepting your connection"

This error means that the SSL installation didn't succeed and something went wrong along the way. The shell script below automates the installation of Let's Encrypt SSL certificates on all of your ServerPilot apps.
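Before reinstalling anything, it can help to confirm which certificate nginx is actually serving for the second domain. This is just a quick check from any machine with openssl installed (domain2.com stands in for your second domain):
openssl s_client -connect domain2.com:443 -servername domain2.com </dev/null 2>/dev/null | openssl x509 -noout -subject -dates
If the subject still shows domain1, nginx has no SSL vhost for the second app and is falling back to the first one, which is what the script below sets up.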
To save users any trouble, I'm pasting the snippet here along with instructions on how to install the SSL:
#!/bin/bash
#################################################
#
# This script automates the installation
# of Let's Encrypt SSL certificates on
# your ServerPilot free plan
#
#################################################
theAction=$1
domainName=$2
appName=$3
spAppRoot="/srv/users/serverpilot/apps/$appName"
domainType=$4
spSSLDir="/etc/nginx-sp/vhosts.d/"
# Install Let's Encrypt libraries if not found
if ! hash letsencrypt 2>/dev/null; then
lecheck=$(eval "apt-cache show letsencrypt 2>&1")
if [[ "$lecheck" == *"No"* ]]
then
sudo wget --no-check-certificate https://dl.eff.org/certbot-auto &>/dev/null
sudo chmod a+x certbot-auto &>/dev/null
sudo mv certbot-auto /usr/local/bin/letsencrypt &>/dev/null
else
sudo apt-get install -y letsencrypt &>/dev/null
fi
fi
if [ -z "$theAction" ]
then
echo -e "\e[31mPlease specify the task. Should be either install or uninstall\e[39m"
exit
fi
if [ -z "$domainName" ]
then
echo -e "\e[31mPlease provide the domain name\e[39m"
exit
fi
if [ ! -d "$spAppRoot" ]
then
echo -e "\e[31mThe app name seems invalid as we didn't find its directory on your server\e[39m"
exit
fi
if [ -z "$appName" ]
then
echo -e "\e[31mPlease provide the app name\e[39m"
exit
fi
if [ "$theAction" == "uninstall" ]; then
sudo rm "$spSSLDir$appName-ssl.conf" &>/dev/null
sudo service nginx-sp reload
echo -e "\e[31mSSL has been removed. If you are seeing errors on your site, then please fix HTACCESS file and remove the rules that you added to force SSL\e[39m"
elif [ "$theAction" == "install" ]; then
if [ -z "$domainType" ]
then
echo -e "\e[31mPlease provide the type of the domain (either main or sub)\e[39m"
exit
fi
sudo service nginx-sp stop
echo -e "\e[32mChecks passed, press enter to continue\e[39m"
if [ "$domainType" == "main" ]; then
thecommand="letsencrypt certonly --register-unsafely-without-email --agree-tos -d $domainName -d www.$domainName"
elif [[ "$domainType" == "sub" ]]; then
thecommand="letsencrypt certonly --register-unsafely-without-email --agree-tos -d $domainName"
else
echo -e "\e[31mDomain type not provided. Should be either main or sub\e[39m"
exit
fi
output=$(eval "$thecommand" 2>&1 | xargs)
if [[ "$output" == *"too many requests"* ]]; then
echo "Let's Encrypt SSL limit reached. Please wait for a few days before obtaining more SSLs for $domainName"
elif [[ "$output" == *"Congratulations"* ]]; then
if [ "$domainType" == "main" ]; then
sudo echo "server {
listen 443 ssl;
listen [::]:443 ssl;
server_name
$domainName
www.$domainName
;
ssl on;
ssl_certificate /etc/letsencrypt/live/$domainName/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/$domainName/privkey.pem;
root $spAppRoot/public;
access_log /srv/users/serverpilot/log/$appName/dev_nginx.access.log main;
error_log /srv/users/serverpilot/log/$appName/dev_nginx.error.log;
proxy_set_header Host \$host;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-SSL on;
proxy_set_header X-Forwarded-Proto \$scheme;
include /etc/nginx-sp/vhosts.d/$appName.d/*.nonssl_conf;
include /etc/nginx-sp/vhosts.d/$appName.d/*.conf;
}" > "$spSSLDir$appName-ssl.conf"
elif [ "$domainType" == "sub" ]; then
sudo echo "server {
listen 443 ssl;
listen [::]:443 ssl;
server_name
$domainName
;
ssl on;
ssl_certificate /etc/letsencrypt/live/$domainName/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/$domainName/privkey.pem;
root $spAppRoot/public;
access_log /srv/users/serverpilot/log/$appName/dev_nginx.access.log main;
error_log /srv/users/serverpilot/log/$appName/dev_nginx.error.log;
proxy_set_header Host \$host;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-SSL on;
proxy_set_header X-Forwarded-Proto \$scheme;
include /etc/nginx-sp/vhosts.d/$appName.d/*.nonssl_conf;
include /etc/nginx-sp/vhosts.d/$appName.d/*.conf;
}" > "$spSSLDir$appName-ssl.conf"
fi
echo -e "\e[32mSSL should have been installed for $domainName with auto-renewal (via cron)\e[39m"
# Add a cron job for auto-ssl renewal
grep "sudo service nginx-sp stop && yes | letsencrypt renew &>/dev/null && service nginx-sp start && service nginx-sp reload" /etc/crontab || sudo echo "#monthly sudo service nginx-sp stop && yes | letsencrypt renew &>/dev/null && service nginx-sp start && service nginx-sp reload" >> /etc/crontab
elif [[ "$output" == *"Failed authorization procedure."* ]]; then
echo -e "\e[31m$domainName isn't being resolved to this server. Please check and update the DNS settings if necessary and try again when domain name points to this server\e[39m"
elif [[ ! $output ]]; then
# If no output, we will assume that a valid SSL already exists for this domain
# so we will just add the vhost
sudo echo "server {
listen 443 ssl;
listen [::]:443 ssl;
server_name
$domainName
www.$domainName
;
ssl on;
ssl_certificate /etc/letsencrypt/live/$domainName/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/$domainName/privkey.pem;
root $spAppRoot/public;
access_log /srv/users/serverpilot/log/$appName/dev_nginx.access.log main;
error_log /srv/users/serverpilot/log/$appName/dev_nginx.error.log;
proxy_set_header Host \$host;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-SSL on;
proxy_set_header X-Forwarded-Proto \$scheme;
include /etc/nginx-sp/vhosts.d/$appName.d/*.nonssl_conf;
include /etc/nginx-sp/vhosts.d/$appName.d/*.conf;
}" > "$spSSLDir$appName-ssl.conf"
echo -e "\e[32mSSL should have been installed for $domainName with auto-renewal (via cron)\e[39m"
grep "sudo service nginx-sp stop && yes | letsencrypt renew &>/dev/null && service nginx-sp start && service nginx-sp reload" /etc/crontab || sudo echo "#monthly sudo service nginx-sp stop && yes | letsencrypt renew &>/dev/null && service nginx-sp start && service nginx-sp reload" >> /etc/crontab
else
echo -e "\e[31mSomething unexpected occurred\e[39m"
fi
sudo service nginx-sp start && sudo service nginx-sp reload
else
echo -e "\e[31mTask cannot be identified. It should be either install or uninstall \e[39m"
fi
Usage:
First of all, copy this code to /usr/local/bin/rwssl and make it executable (chmod +x /usr/local/bin/rwssl). After that, you can run these commands to perform the actions:
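For example (using nano here is just one option; any editor works):
sudo nano /usr/local/bin/rwssl # paste the script above and save
sudo chmod +x /usr/local/bin/rwssl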
To install the SSL
For the main domain:
rwssl install example.com app_name main
For a subdomain:
rwssl install sub.example.com app_name sub
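To uninstall (remove the SSL vhost again), the script expects the same leading arguments:
rwssl uninstall example.com app_name
It is also worth test-driving renewal once after installation. Assuming the letsencrypt/certbot-auto wrapper the script installs, something like this should work (stopping nginx-sp first, as the cron entry does):
sudo service nginx-sp stop && sudo letsencrypt renew --dry-run; sudo service nginx-sp start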
P.S.: I'm the project owner.

Related

Nginx not serving static files and user-uploaded files in Django on Kubernetes

Hi, I am working on a Kubernetes deployment and can't get static files or user documents served. Everything runs behind an nginx ingress; requests reach Django and it works as expected, but I can't figure out why images and other documents can't be fetched via their URLs. The application is installed using Helm charts, and everything seems to be okay other than serving files.
FROM python:3.8.12-alpine3.15
ADD ./requirements.txt /app/requirements.txt
RUN set -ex \
&& apk add --no-cache --virtual .build-deps postgresql-dev build-base linux-headers jpeg-dev zlib-dev\
&& python -m venv /env \
&& /env/bin/pip install --upgrade pip \
&& /env/bin/pip install --no-cache-dir -r /app/requirements.txt \
&& runDeps="$(scanelf --needed --nobanner --recursive /env \
| awk '{ gsub(/,/, "\nso:", $2); print "so:" $2 }' \
| sort -u \
| xargs -r apk info --installed \
| sort -u)" \
&& apk add --virtual rundeps $runDeps \
&& apk del .build-deps
ADD ./ /app
WORKDIR /app
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
RUN apk add nginx
RUN rm /etc/nginx/http.d/default.conf
COPY helm/nginx.conf /etc/nginx/nginx.conf
ENTRYPOINT ["sh", "helm/run.sh"]
run.sh
nginx -g 'daemon on;'
gunicorn main_app.wsgi:application --bind 0.0.0.0:8080 --workers 3
nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
# default_type application/octet-stream;
access_log /var/log/nginx/access.log;
upstream django {
server 0.0.0.0:8080;
}
server {
include /etc/nginx/mime.types;
listen 8000;
location /static {
autoindex on;
alias /app/static/;
}
location / {
proxy_pass http://django;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_redirect off;
}
location /uploads {
autoindex on;
alias /app/uploads/;
}
# location ~* ^/[^/]+\.(?:gif|jpg|jpeg|pdf)$ {
# root /app/;
# try_files $uri =404;;;;
# }
}
}
Django settings file is like this:
STATIC_ROOT = BASE_DIR / 'static'
STATIC_URL = '/static/'
# media folder setting
MEDIA_URL = '/uploads/'
MEDIA_ROOT = BASE_DIR / 'uploads'
If i go to /admin url files are served but i get this error in a js file:
Uncaught SyntaxError: Unexpected token '<'
This was one hell of a ride to figure out. The following change in nginx.conf worked:
location your_ingresspath_here/static {
autoindex on;
alias /app/static/;
}
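The "Uncaught SyntaxError: Unexpected token '<'" is the tell-tale sign: the browser requested a .js file but received an HTML page back, meaning the static location never matched because the ingress path prefix wasn't part of it. If anything still returns 404 after the change, it's worth confirming the files actually exist at the aliased path inside the running pod; a quick check, with the pod name as a placeholder:
kubectl exec <your-django-pod> -- python manage.py collectstatic --noinput
kubectl exec <your-django-pod> -- ls /app/static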

Why can't I see my NGINX logs when my app is deployed to Azure App Services, but it works fine locally?

I have a Dockerized Django application orchestrated with Supervisor, which is not optimal but needed when hosting on Azure App Services, as their multi-app support with docker-compose is still in preview mode (i.e. beta).
Following best practices, I have configured each application within supervisord to emit its logs to STDOUT. This works fine when I build the Docker image locally, run it and check the docker logs. However, when I deploy it to Azure App Services and check the logs, my web application (Gunicorn) logs as expected, but the logs from NGINX don't appear at all.
I have tried different configurations in my Dockerfile for linking the log files generated by NGINX (linking to /dev/stdout and /dev/fd/1, for example) and I have also gone into the nginx.conf config and tried logging directly to /dev/stdout. Whatever I do, it works fine locally, but on Azure no NGINX logs show up. I've pasted the relevant configuration files below; the commented lines show the options I've tried. Hope someone can help me figure this one out.
EDIT:
I've also tried logging the NGINX output to a log file on the filesystem, which again works fine locally but not on Azure App Services. I tried deactivating the "user nginx" part in nginx.conf, as I thought it could have something to do with permissions, but that didn't help either.
EDIT 2:
I also tried creating the log files in my home directory in the web app on Azure, thinking it might have to do with not being able to create logs in other directories; again, it works locally, but the logs on Azure are empty.
Dockerfile
FROM python:3.8
ENV PYTHONUNBUFFERED 1
###################
# PACKAGE INSTALLS
###################
RUN apt-get update
RUN apt-get install -y pgbouncer
RUN apt-get update && apt-get install -y supervisor
RUN apt-get install nano
RUN apt-get install -y git
RUN apt-get install curl
# Supervisor-stdout for consolidating logs
RUN pip install git+https://github.com/coderanger/supervisor-stdout
###################
# AZURE SSH SETUP
###################
# Install OpenSSH and set the password for root to "Docker!". In this example, "apk add" is the install instruction for an Alpine Linux-based image.
RUN apt-get install -y --no-install-recommends openssh-server \
&& echo "root:Docker!" | chpasswd
# Copy the sshd_config file to the /etc/ssh/ directory
COPY ./bin/staging/sshd_config /etc/ssh/
# Copy and configure the ssh_setup file
RUN mkdir -p /tmp
COPY ./bin/staging/ssh_setup.sh /tmp
RUN chmod +x /tmp/ssh_setup.sh \
&& (sleep 1;/tmp/ssh_setup.sh 2>&1 > /dev/null)
##############
# NGINX SETUP
##############
ENV NGINX_VERSION 1.15.12-1~stretch
ENV NJS_VERSION 1.15.12.0.3.1-1~stretch
RUN set -x \
&& \
NGINX_GPGKEY=573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62; \
found=''; \
for server in \
hkp://keyserver.ubuntu.com:80 \
hkp://p80.pool.sks-keyservers.net:80 \
pgp.mit.edu \
; do \
echo "Fetching GPG key $NGINX_GPGKEY from $server"; \
apt-key adv --keyserver "$server" --keyserver-options timeout=10 --recv-keys "$NGINX_GPGKEY" && found=yes && break; \
done; \
test -z "$found" && echo >&2 "error: failed to fetch GPG key $NGINX_GPGKEY" && exit 1; \
apt-get remove --purge --auto-remove -y gnupg1 && rm -rf /var/lib/apt/lists/* \
&& dpkgArch="$(dpkg --print-architecture)" \
&& nginxPackages=" \
nginx=${NGINX_VERSION} \
nginx-module-xslt=${NGINX_VERSION} \
nginx-module-geoip=${NGINX_VERSION} \
nginx-module-image-filter=${NGINX_VERSION} \
nginx-module-njs=${NJS_VERSION} \
" \
&& echo "deb https://nginx.org/packages/mainline/debian/ stretch nginx" >> /etc/apt/sources.list.d/nginx.list \
&& apt-get update \
&& apt-get install --no-install-recommends --no-install-suggests -y \
$nginxPackages \
gettext-base \
&& rm -rf /var/lib/apt/lists/* /etc/apt/sources.list.d/nginx.list
COPY ./conf/nginx/staging.conf /etc/nginx/conf.d/default.conf
COPY ./conf/nginx/nginx.conf /etc/nginx/nginx.conf
# Linking logs to be able to print errors and logs to STDOUT
#RUN ln -sf /dev/stdout /var/log/nginx/access.log && ln -sf /dev/stderr /var/log/nginx/error.log
RUN ln -sf /dev/fd/1 /var/log/nginx/access.log && ln -sf /dev/fd/2 /var/log/nginx/error.log
##########################
# DJANGO APPLICATION SETUP
##########################
# install app
RUN mkdir /var/app && chown www-data:www-data /var/app
WORKDIR /var/app
COPY ./requirements.txt /var/app/
RUN pip install -r requirements.txt
COPY . /var/app/
#############
# SUPERVISORD
#############
COPY ./bin/staging/supervisord_main.conf /etc/supervisor/conf.d/supervisord_main.conf
COPY ./bin/staging/prefix-log /usr/local/bin/prefix-log
##########
# VOLUMES
##########
VOLUME /var/logs
########
# PORTS
########
# Expose ports (Added from previous dockerfile)
EXPOSE 80 2222
#########################
# SUPERCRONIC (CRON-TABS)
#########################
ENV SUPERCRONIC_URL=https://github.com/aptible/supercronic/releases/download/v0.1.12/supercronic-linux-amd64 \
SUPERCRONIC=supercronic-linux-amd64 \
SUPERCRONIC_SHA1SUM=048b95b48b708983effb2e5c935a1ef8483d9e3e
RUN curl -fsSLO "$SUPERCRONIC_URL" \
&& echo "${SUPERCRONIC_SHA1SUM} ${SUPERCRONIC}" | sha1sum -c - \
&& chmod +x "$SUPERCRONIC" \
&& mv "$SUPERCRONIC" "/usr/local/bin/${SUPERCRONIC}" \
&& ln -s "/usr/local/bin/${SUPERCRONIC}" /usr/local/bin/supercronic
#############
# PERMISSIONS
#############
RUN ["chmod", "+x", "/var/app/bin/staging/entrypoint_main.sh"]
RUN ["chmod", "+x", "/usr/local/bin/prefix-log"]
############
# ENTRYPOINT
############
ENTRYPOINT ["/var/app/bin/staging/entrypoint_main.sh"]
supervisord_main.conf
[supervisord]
logfile=/var/logs/supervisord.log ; main log file; default $CWD/supervisord.log
logfile_maxbytes=50MB ; max main logfile bytes b4 rotation; default 50MB
logfile_backups=10 ; # of main logfile backups; 0 means none, default 10
loglevel=info ; log level; default info; others: debug,warn,trace
pidfile=/var/logs/supervisord.pid
nodaemon=true ; Run interactively instead of daemonizing
# user=www-data
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[inet_http_server]
port = 127.0.0.1:9001
[supervisorctl]
serverurl = http://127.0.0.1:9001
#serverurl=unix:///var/run/supervisor.sock
[program:nginx]
#command=/usr/local/bin/prefix-log /usr/sbin/nginx -g "daemon off;"
command=/usr/sbin/nginx -g "daemon off;"
directory=./projectile/
autostart=true
autorestart=true
stdout_events_enabled=true
stderr_events_enabled=true
stdout_logfile = /dev/fd/1
stdout_logfile_maxbytes=0
stderr_logfile = /dev/fd/2
stderr_logfile_maxbytes=0
[program:ssh]
command=/usr/local/bin/prefix-log /usr/sbin/sshd -D
stdout_events_enabled=true
stderr_events_enabled=true
stdout_logfile = /dev/fd/1
stdout_logfile_maxbytes=0
stderr_logfile = /dev/fd/2
stderr_logfile_maxbytes=0
[program:web]
user=www-data
command=/usr/local/bin/prefix-log gunicorn --bind 0.0.0.0:8000 projectile.wsgi:application # Run each app through an SH script to prepend logs with the application name
#command=gunicorn --workers=%(ENV_WORKER_COUNT)s --bind 0.0.0.0:8000 myapp_project.wsgi:application
directory=./projectile/
autostart=true
autorestart=true
stdout_events_enabled=true
stderr_events_enabled=true
stdout_logfile = /dev/fd/1
stdout_logfile_maxbytes=0
stderr_logfile = /dev/fd/2
stderr_logfile_maxbytes=0
nginx.conf
user nginx;
worker_processes 2; # Set to number of CPU cores, 2 cores under Azure plan P1v3
error_log /var/log/nginx/error.log warn;
#error_log /dev/stdout warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
#access_log /dev/stdout main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
staging.conf
server {
listen 80 default_server;
error_log /dev/stdout info;
access_log /dev/stdout;
client_max_body_size 100M;
location /static {
root /var/app/ui/build;
}
location /site-static {
root /var;
}
location /media {
root /var;
}
location / {
root /var/app/ui/build; # try react build directory first, if file doesn't exist, route requests to django app
try_files $uri $uri/index.html $uri.html @app;
}
location @app {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto "https"; # assumes https already terminated by the load balancer in front of us
proxy_pass http://127.0.0.1:8000;
proxy_read_timeout 300;
proxy_buffering off;
}
}
Solved it. The issue was that the Azure App Service had the configuration setting WEBSITES_PORT=8000 set, which sent traffic straight to gunicorn, bypassing NGINX and therefore never producing any NGINX logs. Simply removing the setting fixed the issue.
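If you prefer the CLI to the portal, removing the setting can also be done with the Azure CLI; a sketch, with the app and resource group names as placeholders:
az webapp config appsettings delete --name <app-name> --resource-group <resource-group> --setting-names WEBSITES_PORT
az webapp restart --name <app-name> --resource-group <resource-group>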

Problem deploying Django to AWS EC2 using nginx

I want to connect my IP address to a domain name. When I open "my.ip.adress" in a browser the server responds, but "mydomain.com" doesn't work; I get a 404 error. On my hosting platform I linked "my.ip.adress" to the domain name, and I have waited the recommended 48 hours for the DNS to propagate.
I'm not sure about the configuration I did. Maybe my env file ".env-prod" is not being loaded and the pipeline breaks.
Could you help me?
the folder representation:
env/
myblog/
mysite/
settings.py
wsgi.py
…
scripts/
static/
my env file : .env-prod
export DEBUG=off
export SECRET_KEY='mysecretkey'
export ALLOWED_HOSTS="['my.ip.adress', 'mydomain.com', 'www.mydomain.com']"
export DATABASE_URL=postgres://user:password@db.example.com:5432/production_db?sslmode=require
I have this /etc/systemd/system/gunicorn.socket
[Unit]
Description=gunicorn socket
[Socket]
ListenStream=/run/gunicorn.sock
[Install]
WantedBy=sockets.target
I have /etc/systemd/system/gunicorn.service
[Unit]
Description=gunicorn daemon
Requires=gunicorn.socket
After=network.target
[Service]
User=ubuntu
Group=www-data
WorkingDirectory=/home/ubuntu/myblog
ExecStart=/home/ubuntu/env/bin/gunicorn \
--access-logfile - \
--workers 3 \
--bind unix:/run/gunicorn.sock \
mysite.wsgi:application
[Install]
WantedBy=multi-user.target
I do
sudo systemctl start gunicorn.socket
sudo systemctl enable gunicorn.socket
sudo systemctl status gunicorn.socket
I have also /etc/nginx/sites-available/myblog
server {
listen 80; server_name my.ip.adress; # could I have mydomain.com and www.mydomain.com there?
root /home/ubuntu/myblog/;
location /static {
alias /home/ubuntu/myblog/static/;
}
location / {
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_redirect off;
if (!-f $request_filename) {
proxy_pass http://127.0.0.1:8000;
break;
}
}
}
then I do
sudo nginx -t
sudo ln -s /etc/nginx/sites-available/myblog /etc/nginx/sites-enabled
sudo systemctl restart nginx
In my /etc/nginx/sites-available/default
server {
index index.html index.htm index.nginx-debian.html; # is it important to keep this line?
server_name mydomain.com www.mydomain.com;
location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/ =404;
}
listen [::]:443 ssl ipv6only=on; # managed by Certbot
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/www.mydomain.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/www.mydomain.com/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
I use supervisor and this is my configuration. To install it:
sudo apt-get install supervisor
/etc/supervisor/conf.d/myblog-gunicorn.conf
[program:myblog-gunicorn]
command = /home/ubuntu/env/bin/gunicorn mysite.wsgi:application
user = ubuntu
directory = /home/ubuntu/myblog
autostart = true
autorestart = true
then I do
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl status
To solve this problem, you only need to delete the /etc/nginx/sites-available/default file.
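In practice that means removing the catch-all default vhost (and its symlink in sites-enabled, if present), then testing and restarting nginx; a minimal sketch:
sudo rm -f /etc/nginx/sites-enabled/default /etc/nginx/sites-available/default
sudo nginx -t && sudo systemctl restart nginx
It may also help to add mydomain.com and www.mydomain.com to the server_name line of your myblog vhost so nginx matches requests for the domain.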

AWS Beanstalk nginx killed when trying to add a new environment variable

I receive this error every time I try to add a new environment variable from the AWS EBS panel:
AWS Beanstalk events:
2018-02-16 14:49:21 UTC-0200 INFO The environment was reverted to the previous configuration setting.
2018-02-16 14:48:49 UTC-0200 ERROR During an aborted deployment, some instances may have deployed the new application version. To ensure all instances are running the same version, re-deploy the appropriate application version.
2018-02-16 14:48:49 UTC-0200 ERROR Failed to deploy configuration.
2018-02-16 14:48:49 UTC-0200 ERROR Unsuccessful command execution on instance id(s) 'i-xxxxxxxxxxxxxx'. Aborting the operation.
2018-02-16 14:48:49 UTC-0200 INFO Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
eb-activity.log:
Successfully execute hooks in directory /opt/elasticbeanstalk/hooks/configdeploy/enact.
[2018-02-16T16:21:18.921Z] INFO [8550] - [Configuration update app-0_0_10-180216_141535#104/ConfigDeployStage1/ConfigDeployPostHook] : Starting activity...
[2018-02-16T16:21:18.921Z] INFO [8550] - [Configuration update app-0_0_10-180216_141535#104/ConfigDeployStage1/ConfigDeployPostHook/99_kill_default_nginx.sh] : Starting activity...
[2018-02-16T16:21:19.164Z] INFO [8550] - [Configuration update app-0_0_10-180216_141535#104/ConfigDeployStage1/ConfigDeployPostHook/99_kill_default_nginx.sh] : Activity execution failed, because: + rm -f /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
+ service nginx stop
Stopping nginx: /sbin/service: line 66: 8986 Killed env -i PATH="$PATH" TERM="$TERM" "${SERVICEDIR}/${SERVICE}" ${OPTIONS} (ElasticBeanstalk::ExternalInvocationError)
caused by: + rm -f /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
+ service nginx stop
Stopping nginx: /sbin/service: line 66: 8986 Killed env -i PATH="$PATH" TERM="$TERM" "${SERVICEDIR}/${SERVICE}" ${OPTIONS} (Executor::NonZeroExitStatus)
[2018-02-16T16:21:19.164Z] INFO [8550] - [Configuration update app-0_0_10-180216_141535#104/ConfigDeployStage1/ConfigDeployPostHook/99_kill_default_nginx.sh] : Activity failed.
[2018-02-16T16:21:19.165Z] INFO [8550] - [Configuration update app-0_0_10-180216_141535#104/ConfigDeployStage1/ConfigDeployPostHook] : Activity failed.
[2018-02-16T16:21:19.165Z] INFO [8550] - [Configuration update app-0_0_10-180216_141535#104/ConfigDeployStage1] : Activity failed.
[2018-02-16T16:21:19.165Z] INFO [8550] - [Configuration update app-0_0_10-180216_141535#104] : Completed activity. Result:
Configuration update - Command CMD-ConfigDeploy failed
Edit: Added stack-https.config file
eb-activity.log:
Command 01_copy_conf_file] : Activity execution failed, because: (ElasticBeanstalk::ExternalInvocationError
Starting activity...
[2018-02-16T20:38:30.476Z] INFO [2536] - [Application deployment app-0_0_10-1-gb633-180216_175029#124/StartupStage0/EbExtensionPostBuild/Infra-EmbeddedPostBuild/postbuild_0_paneladm_api_stack_SampleApplication_W4FJ8W83X64B] : Starting activity...
[2018-02-16T20:38:32.456Z] INFO [2536] - [Application deployment app-0_0_10-1-gb633-180216_175029#124/StartupStage0/EbExtensionPostBuild/Infra-EmbeddedPostBuild/postbuild_0_paneladm_api_stack_SampleApplication_W4FJ8W83X64B/Command 00_removeconfig] : Starting activity...
[2018-02-16T20:38:32.463Z] INFO [2536] - [Application deployment app-0_0_10-1-gb633-180216_175029#124/StartupStage0/EbExtensionPostBuild/Infra-EmbeddedPostBuild/postbuild_0_paneladm_api_stack_SampleApplication_W4FJ8W83X64B/Command 00_removeconfig] : Completed activity.
[2018-02-16T20:38:34.493Z] INFO [2536] - [Application deployment app-0_0_10-1-gb633-180216_175029#124/StartupStage0/EbExtensionPostBuild/Infra-EmbeddedPostBuild/postbuild_0_paneladm_api_stack_SampleApplication_W4FJ8W83X64B/Command 01_copy_conf_file] : Starting activity...
[2018-02-16T20:38:34.538Z] INFO [2536] - [Application deployment app-0_0_10-1-gb633-180216_175029#124/StartupStage0/EbExtensionPostBuild/Infra-EmbeddedPostBuild/postbuild_0_paneladm_api_stack_SampleApplication_W4FJ8W83X64B/Command 01_copy_conf_file] : Activity execution failed, because: (ElasticBeanstalk::ExternalInvocationError)
I don't know if the problem is because I previously removed the default elastic_beanstalk_proxy.conf file with my commands, as below:
Resources:
sslSecurityGroupIngress:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: {"Fn::GetAtt" : ["AWSEBSecurityGroup", "GroupId"]}
IpProtocol: tcp
ToPort: 443
FromPort: 443
CidrIp: 0.0.0.0/0
files:
/etc/letsencrypt/configs/http_proxy.pre:
mode: "000644"
owner: root
group: root
content: |
# Elastic Beanstalk Managed
upstream nodejs {
server 127.0.0.1:8081;
keepalive 256;
}
server {
listen 8080;
access_log /var/log/nginx/access.log main;
location /.well-known {
allow all;
root /usr/share/nginx/html;
}
# Redirect non-https traffic to https.
location / {
if ($scheme != "https") {
return 301 https://$host$request_uri;
} # managed by Certbot
}
}
# The Nginx config forces https, and is meant as an example only.
/etc/letsencrypt/configs/https_custom.pos:
mode: "000644"
owner: root
group: root
content: |
# HTTPS server
server {
listen 443 default ssl;
server_name localhost;
error_page 497 https://$host$request_uri;
ssl_certificate /etc/letsencrypt/live/ebcert/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/ebcert/privkey.pem;
ssl_session_timeout 5m;
ssl_protocols TLSv1.1 TLSv1.2;
ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
ssl_prefer_server_ciphers on;
if ($ssl_protocol = "") {
rewrite ^ https://$host$request_uri? permanent;
}
location / {
proxy_pass http://nodejs;
proxy_set_header Connection "";
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
gzip on;
gzip_comp_level 4;
gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
}
/etc/letsencrypt/configs/generate-cert.sh:
mode: "000664"
owner: root
group: root
content: |
#!/bin/sh
_EMAIL=
_DOMAIN=
while getopts ":e:d:" OPTION;
do
case "${OPTION}" in
"e") _EMAIL="${OPTARG}";;
"d") _DOMAIN="${OPTARG}";;
esac
done
if [ -z "${_EMAIL}" ]; then
echo "Param email isn't specified!"
fi
if [ -z "${_DOMAIN}" ]; then
echo "Param domain isn't specified!"
fi
if [ -n "$_EMAIL" ] && [ -n "$_DOMAIN" ]; then
cd /opt/certbot/
./certbot-auto certonly \
--debug --non-interactive --email ${_EMAIL} \
--webroot -w /usr/share/nginx/html --agree-tos -d ${_DOMAIN} --keep-until-expiring
fi
if [ $? -ne 0 ]
then
ERRORLOG="/var/log/letsencrypt/letsencrypt.log"
echo "The Let's Encrypt cert has not been renewed!\n" >> $ERRORLOG
else
/etc/init.d/nginx reload
fi
exit 0
/opt/elasticbeanstalk/hooks/configdeploy/post/99_kill_default_nginx.sh:
mode: "000755"
owner: root
group: root
content: |
#!/bin/bash -xe
rm -f /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
service nginx stop
service nginx start
packages:
yum:
epel-release: []
container_commands:
00_removeconfig:
command: "rm -f /tmp/deployment/config/#etc#nginx#conf.d#00_elastic_beanstalk_proxy.conf /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf"
01_copy_conf_file:
command: "cp /etc/letsencrypt/configs/http_proxy.pre /etc/nginx/conf.d/http_proxy.conf; /etc/init.d/nginx reload"
02_createdir:
command: "mkdir /opt/certbot || true"
03_installcertbot:
command: "wget https://dl.eff.org/certbot-auto -O /opt/certbot/certbot-auto"
04_permission:
command: "chmod a+x /opt/certbot/certbot-auto"
05_getcert:
command: "sudo sh /etc/letsencrypt/configs/generate-cert.sh -e ${CERT_EMAIL} -d ${CERT_DOMAIN}"
06_link:
command: "ln -sf /etc/letsencrypt/live/${CERT_DOMAIN} /etc/letsencrypt/live/ebcert"
07_copy_ssl_conf_file:
command: "cp /etc/letsencrypt/configs/https_custom.pos /etc/nginx/conf.d/https_custom.conf; /etc/init.d/nginx reload"
08_cronjob_renew:
command: "sudo sh /etc/letsencrypt/configs/generate-cert.sh -e ${CERT_EMAIL} -d ${CERT_DOMAIN}"
I'm doing this because I replace this file with my own proxy.conf file.
Please I need your help.
References:
awslabs/elastic-beanstalk-sampes/https-redirect-nodejs.config
AWS EBS - Environment Properties and Other Software Settings
I had this problem as well and Amazon acknowledged the error in the documentation. This is a working restart script that you can use in your .ebextensions config file.
/opt/elasticbeanstalk/hooks/configdeploy/post/99_kill_default_nginx.sh:
mode: "000755"
owner: root
group: root
content: |
#!/bin/bash -xe
rm -f /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
status=`/sbin/status nginx`
if [[ $status = *"start/running"* ]]; then
echo "stopping nginx..."
stop nginx
echo "starting nginx..."
start nginx
else
echo "nginx is not running... starting it..."
start nginx
fi
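After updating the hook in your .ebextensions config, redeploy so the fixed script lands on the instances (assuming the EB CLI is configured for this environment):
eb deploy
and then retry adding the environment variable from the console.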

Can't restart nginx

I'm using nginx with Django on Ubuntu 10.04. The problem is that when I restart nginx I get this error.
sudo /etc/init.d/nginx restart
Restarting nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
configuration file /etc/nginx/nginx.conf test is successful
[emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
[emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
[emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
[emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
Also, I have tried stop and then start but still get the error.
Here's the output from lsof:
sudo lsof -i tcp:80
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
nginx 27141 root 6u IPv4 245906 0t0 TCP *:www (LISTEN)
nginx 27142 nobody 6u IPv4 245906 0t0 TCP *:www (LISTEN)
If I kill the process with PID 27141 it works. However, I would like to get to the bottom
of why I can't just do a restart.
Here's the nginx.conf:
worker_processes 1;
user nobody nogroup;
pid /tmp/nginx.pid;
error_log /tmp/nginx.error.log;
events {
worker_connections 1024;
accept_mutex off;
}
http {
include mime.types;
default_type application/octet-stream;
access_log /tmp/nginx.access.log combined;
sendfile on;
upstream app_server {
# server unix:/tmp/gunicorn.sock fail_timeout=0;
# For a TCP configuration:
server 127.0.0.1:8000 fail_timeout=0;
}
server {
listen 80 default;
client_max_body_size 4G;
server_name _;
keepalive_timeout 5;
# path for static files
root /home/apps/venvs/app1/app1;
location / {
# checks for static file, if not found proxy to app
try_files $uri @proxy_to_app;
}
location @proxy_to_app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://app_server;
}
error_page 500 502 503 504 /500.html;
location = /500.html {
root /path/to/app/current/public;
}
}
}
Any ideas?
Try:
$ sudo fuser -k 80/tcp ; sudo /etc/init.d/nginx restart
This worked for me
sudo fuser -k 80/tcp
And then
service nginx start
Source: https://rtcamp.com/tutorials/nginx/troubleshooting/emerg-bind-failed-98-address-already-in-use/
Daemontools starts nginx successfully, then nginx daemonizes, and then daemontools tries to start nginx again, unsuccessfully, logging an error to the log.
The solution to this problem is to disable daemon mode in the main section of the nginx.conf:
daemon off;
Site: http://wiki.nginx.org/CoreModule
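For reference, daemon is a main-context directive, so it goes at the top level of nginx.conf, outside the events and http blocks:
# top of /etc/nginx/nginx.conf
daemon off; # keep nginx in the foreground so the supervising process stays in control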
Tired of nginx restart issues and "address already in use" faults, I decided to make it work once and for all.
I added just one line at the end of the stop and restart actions in the /etc/init.d/nginx file:
nginx -s quit
so it now looks like this (make sure the nginx binary is in the PATH variable, otherwise specify the full path):
stop)
echo -n "Stopping $DESC: "
start-stop-daemon --stop --quiet --pidfile /var/run/$NAME.pid \
--exec $DAEMON || true
echo "$NAME."
nginx -s quit
;;
restart|force-reload)
echo -n "Restarting $DESC: "
start-stop-daemon --stop --quiet --pidfile \
/var/run/$NAME.pid --exec $DAEMON || true
nginx -s quit
sleep 1
test_nginx_config
start-stop-daemon --start --quiet --pidfile \
/var/run/$NAME.pid --exec $DAEMON -- $DAEMON_OPTS || true
echo "$NAME."
;;
Hope that this solution will work for others.
Always test your config first; it will show syntax errors and duplicate directives and point you to them:
nginx -t
You will see logs there showing you what is causing the failure.
It's because you aren't restarting as root.
Change to root:
sudo -i
Restart:
service nginx restart
Or:
/etc/init.d/nginx restart