I have a single Docker container deployment to AWS Elastic Beanstalk.
When I visit the site, it returns a 502 error, which makes me think the port inside the Docker container is not exposed.
These are my settings:
Dockerrun.aws.json:
{
"AWSEBDockerrunVersion": "1",
"Volumes": [
{
"ContainerDirectory": "/var/app",
"HostDirectory": "/var/app"
}
],
"Logging": "/var/eb_log",
"Ports": [
{
"containerPort": 80
}
]
}
Dockerfile
FROM ubuntu:16.04
# Install Python Setuptools
RUN rm -fR /var/lib/apt/lists/*
RUN apt-get update
RUN apt-get install -y software-properties-common
RUN add-apt-repository ppa:jonathonf/python-3.6
RUN apt-get update && apt-get install -y python3-pip
RUN apt-get install -y python3.6
RUN apt-get install -y python3-dev
RUN apt-get install -y libpq-dev
RUN apt-get install -y libffi-dev
RUN apt-get install -y git
# Add and install Python modules
ADD requirements.txt /src/requirements.txt
RUN cd /src; pip3 install -r requirements.txt
# Bundle app source
ADD . /src
# Expose
EXPOSE 80
# Run
CMD ["python3", "/src/app.py"]
app.py
from flask import Flask
app = Flask(__name__)
@app.route("/")
def hello():
return "Hello World!"
# run the app.
if __name__ == "__main__":
# Setting debug to True enables debug output. This line should be
# removed before deploying a production app.
app.debug = False
app.run(port=80)
I see this in my docker-ps.log:
CONTAINER ID   IMAGE          COMMAND                 CREATED          STATUS          PORTS    NAMES
ead221e6d2c6   2eb62af087be   "python3 /src/app.py"   34 minutes ago   Up 34 minutes   80/tcp   peaceful_lamport
and:
/var/log/eb-docker/containers/eb-current-app/ead221e6d2c6-stdouterr.log
-------------------------------------
* Running on http://127.0.0.1:80/ (Press CTRL+C to quit)
and this error:
2017/07/06 05:57:36 [error] 15972#0: *10 connect() failed (111: Connection refused) while connecting to upstream, client: 172.5.154.225, server: , request: "GET / HTTP/1.1", upstream: "http://172.17.0.3:80/", host: "bot-platform.us-west-2.elasticbeanstalk.com"
What am I doing wrong?
After looking up your error, I think you could give the following solution a try. It seems that you have to edit the nginx config that Elastic Beanstalk generates. To do that, add a file nginx.config to the .ebextensions directory of your application bundle, with the following content:
files:
"/etc/nginx/conf.d/000_my_config.conf":
content: |
upstream nodejsserver {
server 127.0.0.1:8081;
keepalive 256;
}
server {
listen 8080;
location / {
proxy_pass http://nodejsserver;
proxy_set_header Connection "";
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location /myconfig {
proxy_pass http://my_proxy_pass_host;
}
}
You may have to adjust it a little (this example upstream points at a Node.js app on port 8081, while your container serves Flask on port 80), but this seems to be the proper way to solve your problem. If you search for your error you will find a lot of slightly different solutions for how to adjust nginx to resolve it.
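As an aside, the stdouterr log in the question (* Running on http://127.0.0.1:80/) is itself suspicious: Flask's development server binds to 127.0.0.1 by default, and a server bound to loopback inside a container is unreachable from nginx outside it, which matches the "connection refused" upstream error; app.run(host="0.0.0.0", port=80) binds all interfaces. The difference can be sketched with just the standard library:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hello World!"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # keep the demo quiet
        pass

# "0.0.0.0" listens on every interface; a server bound to "127.0.0.1"
# is only reachable from inside the same network namespace, which is
# why nginx on the host gets "connection refused" from the container.
server = HTTPServer(("0.0.0.0", 0), Hello)  # port 0 = pick a free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

reply = urllib.request.urlopen(f"http://127.0.0.1:{port}/").read()
server.shutdown()
```

Bound to "127.0.0.1" instead, the same request coming from another network namespace would be refused.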
I'm developing a simple chat application (based on django-private-chat2) with Django and django-channels. I want the application to be fully containerized and to use nginx for routing inside the container.
So I'm trying to connect through a WebSocket to a Redis server running inside a Docker container. I've tried many things (see below) but still can't get it to work. It is not unlikely that my general approach is wrong.
EDIT:
As @Zeitounator suggested, I've created an MCVE illustrating the problem. It's on my secondary GitHub here; it contains all configuration files for a minimal example (Dockerfile, docker-compose.yaml, nginx.conf, redis.conf, supervisord.ini, ...), and the folder 'tests' contains two tests illustrating that it works locally but not inside the container. The tests are to be run from the root directory.
I've added the important config code at the end.
I still believe my nginx configuration might be off; any help appreciated!
Here is what I've got so far.
The Redis server and the WebSocket connection work outside the Docker container.
Inside the Docker container I compile and run nginx with the 'HTTP Redis' module here;
this module is loaded via 'load_module *.so' inside nginx.conf, and I've verified that the module is loaded.
I've configured the Redis server inside the Docker container with 'bind 0.0.0.0' and 'protected-mode no'.
Inside nginx I then route all '/' traffic to a Django application running on port 8000.
I route all 'chat_ws/' traffic (from the WebSocket) to 127.0.0.1:6379, the Redis server (with nginx reverse proxying).
I've verified that the routing works properly (returning 404 with nginx on chat_ws addresses works).
I can connect to the Redis server through redis-cli on my machine when I use 'redis-cli -h DOCKER_CONTAINER_IP', so the redis_pass target also seems to work.
In the Django settings I've specified CHANNEL_LAYERS and set the Redis backend host to 127.0.0.1:6379 (which again works completely fine outside the Docker container).
But if I open the web page (through the Docker container) in my browser, everything works except the WebSocket connection to the Redis server.
I'm especially confused that the redis-cli connection to the container works fine but the WebSocket does not, even though it does work locally (outside the container).
(What I've been thinking:
maybe a WebSocket connection with redis_pass through nginx is generally problematic?
maybe the 'HTTP Redis' version is too old? But how do I debug this, since I don't see any log output from the module on nginx stdout.)
Any debugging recommendations are appreciated, as are ideas for different approaches. Also tell me if I should provide further information or share specific config files. Thanks in advance!
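One thing worth double-checking here: a WebSocket connection begins as an HTTP/1.1 Upgrade handshake, while redis_pass (as far as I can tell from the ngx_http_redis docs) only translates a request into a Redis GET for $redis_key, so it can never complete that handshake; with django-channels, the WebSocket is normally proxied to the ASGI server, which talks to Redis itself as the channel layer. The handshake the upstream must answer can be sketched with the standard library (the sample key and expected accept value are the ones from RFC 6455, section 1.3):

```python
import base64
import hashlib

# Fixed GUID defined by RFC 6455 for the WebSocket opening handshake.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def accept_key(client_key: str) -> str:
    """Sec-WebSocket-Accept value the server must send back for a client's
    Sec-WebSocket-Key header; a Redis endpoint can never produce this."""
    digest = hashlib.sha1((client_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# Sample key from RFC 6455, section 1.3.
print(accept_key("dGhlIHNhbXBsZSBub25jZQ=="))  # s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

If the upstream cannot produce this response header, the browser tears the connection down, even though plain Redis clients like redis-cli work fine against the same port.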
Dockerfile:
Installs requirements, compiles nginx with redis_pass, and starts everything.
FROM nginx:alpine AS builder
ADD ./requirements.txt /app/requirements.txt
RUN apk add --update --no-cache python3 py3-pip && ln -sf python3 /usr/bin/python
RUN set -ex \
&& apk add --no-cache --virtual .build-deps postgresql-dev build-base python3-dev python2-dev libffi-dev \
&& python3 -m venv /env \
&& python3 -m pip install --upgrade pip \
&& python3 -m pip install --no-cache --upgrade pip setuptools \
&& python3 -m ensurepip \
&& python3 -m pip install -r /app/requirements.txt \
&& runDeps="$(scanelf --needed --nobanner --recursive /env \
| awk '{ gsub(/,/, "\nso:", $2); print "so:" $2 }' \
| sort -u \
| xargs -r apk info --installed \
| sort -u)" \
&& apk add --virtual rundeps $runDeps \
&& apk del .build-deps
RUN apk add --no-cache build-base libressl-dev libffi-dev
# rest of code is mounted to the docker container in docker-compose ( only in dev, for local debugging )
WORKDIR /app
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
# Install Supervisord to start redis-server, gunicorn and nginx simultaneously
RUN apk add --no-cache supervisor
RUN apk add --no-cache redis
# Stuff needed to make custom nginx compilation
RUN apk add --no-cache --virtual .build-deps \
gcc \
libc-dev \
make \
openssl-dev \
pcre-dev \
zlib-dev \
linux-headers \
curl \
gnupg \
libxslt-dev \
gd-dev \
geoip-dev
RUN wget "http://nginx.org/download/nginx-${NGINX_VERSION}.tar.gz" -O nginx.tar.gz
# compile with HTTP redis for nginx
RUN wget "https://people.freebsd.org/~osa/ngx_http_redis-0.3.9.tar.gz" -O redis.tar.gz
# Compile ...
RUN mkdir /usr/src && \
CONFARGS=$(nginx -V 2>&1 | sed -n -e 's/^.*arguments: //p') \
tar -zxC /usr/src -f nginx.tar.gz && \
tar -xzvf "redis.tar.gz" && \
REDISDIR="$(pwd)/ngx_http_redis-0.3.9" && \
cd /usr/src/nginx-$NGINX_VERSION && \
./configure --with-compat $CONFARGS --add-dynamic-module=$REDISDIR && \
make && make install
COPY supervisord.ini /etc/supervisor.d/supervisord.ini
COPY ./nginx.conf /etc/nginx/nginx.conf
RUN mkdir /etc/redis
COPY ./redis.conf /etc/redis/redis.conf
EXPOSE 80
# Start all services (see ./supervisord.ini)
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor.d/supervisord.ini"]
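For reference, a supervisord.ini of the shape this CMD expects could look like the sketch below; the program names, the daphne invocation, and the module path mysite.asgi are assumptions for illustration, not taken from the repository (django-channels needs an ASGI server such as daphne rather than a plain WSGI gunicorn):

```ini
; supervisord.ini -- illustrative sketch; names and paths are assumptions
[supervisord]
nodaemon=true

[program:redis]
command=redis-server /etc/redis/redis.conf

[program:asgi]
; channels traffic (including WebSockets) is served by an ASGI server
command=/env/bin/daphne -b 127.0.0.1 -p 8000 mysite.asgi:application
directory=/app

[program:nginx]
; nginx.conf already contains "daemon off;"
command=nginx
```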
nginx.conf
daemon off;
user nginx;
worker_processes 1;
pid /var/run/nginx.pid;
load_module /usr/local/nginx/modules/ngx_http_redis_module.so;
events {
worker_connections 1024;
}
http {
access_log /dev/stdout;
upstream asgi {
server 127.0.0.1:8000 fail_timeout=0;
}
server {
listen 80;
server_name localhost;
client_max_body_size 4G;
location /chat_ws {
set $redis_key $uri;
redis_pass 127.0.0.1:6379;
error_page 404 = /fallback;
}
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_pass http://asgi;
}
}
}
redis.conf
bind 0.0.0.0
protected-mode no
docker-compose.yml
version: '3.7'
services:
nginxweb:
image: redis_nginx_docker
restart: always
build:
context: .
dockerfile: ./Dockerfile
ports:
- 8000:80
env_file:
- env
volumes:
- ./mcve:/app
Remaining configuration may be found in the git repo.
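When debugging a stack like this, it helps to test each hop separately: host to the published port (8000), and, inside the container, 127.0.0.1:8000 (Django) and 127.0.0.1:6379 (Redis). A small stdlib helper for the TCP part (hosts and ports are illustrative):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("localhost", 8000)  # is the published container port up?
```

Running this from both the host and inside the container (docker exec) narrows down which hop is actually refusing the connection.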
I have a Django project named MySite, and this project has some applications inside, as below:
- MySite
-- app
-- venv
-- media
-- django_project
--- wsgi.py
--- settings.py
--- urls.py
--- asgi.py
To deploy on AWS, I am in the gunicorn configuration phase. However, I am facing this error:
guni:gunicorn BACKOFF Exited too quickly (process log may have details)
However, the first time, the status is:
gunicorn STARTING
this is my gunicorn.conf:
[program:gunicorn]
directory=/home/ubuntu/MySite
command=/usr/bin/gunicorn --workers 3 --bind unix:/home/ubuntu/MySite/app.sock django_project.wsgi.application
autostart=true
autorestart=true
stderr_logfile=/var/log/gunicorn/gunicorn.err.log
stdout_logfile=/var/log/gunicorn/gunicorn.out.log
[group:guni]
program:gunicorn
In gunicorn.err.log it says the problem is:
usage: gunicorn [OPTIONS] [APP_MODULE]
gunicorn: error: unrecognized arguments: django_project.wsgi.application
when I try this:
gunicorn --bind 0.0.0.0:8000 django_project.wsgi:application
I get this error:
SyntaxError: invalid syntax
[2021-02-10 10:12:40 +0000] [6914] [INFO] Worker exiting (pid: 6914)
[2021-02-10 10:12:40 +0000] [6912] [INFO] Shutting down: Master
[2021-02-10 10:12:40 +0000] [6912] [INFO] Reason: Worker failed to boot.
The entire process to install and run gunicorn which I did:
**********************************START********************************
sudo apt-get upgrade -y
sudo apt-get update -y
1) Clone the git project
git clone https://github.com/XX/MyProj.git
2) cd /MySite ## there is a venv with django installed in
3) Activate venv
source venv/bin/activate
5) Install NGINX and GUNICORN
pip3 install gunicorn ## install without sudo..
sudo apt-get install nginx -y
pip install psycopg2-binary
6) Connect gunicorn (# Error: Worker failed to boot.)
gunicorn --bind 0.0.0.0:8000 django_project.wsgi:application
7) Install supervisor
sudo apt-get install -y supervisor ## This command holds the website after we logout
8) Config supervisor
cd /etc/supervisor/conf.d
sudo touch gunicorn.conf
9) ## In the file, do the following ##
[program:gunicorn]
directory=/home/ubuntu/MySite
command=/usr/bin/gunicorn --workers 3 --bind unix:/home/ubuntu/MySite/app.sock django_project.wsgi:application
autostart=true
autorestart=true
stderr_logfile=/var/log/gunicorn/gunicorn.err.log
stdout_logfile=/var/log/gunicorn/gunicorn.out.log
[group:guni]
Program:gunicorn
####endfile####
10) Config supervisor
cd /etc/supervisor/conf.d
sudo touch gunicorn.conf
11) Connect file to supervisor
sudo mkdir -p /var/log/gunicorn
sudo supervisorctl reread
sudo supervisorctl reread
sudo supervisorctl update
12) Check if gunicorn is running in the background
sudo supervisorctl status
**********************************END********************************
Your issue could be a combination of a few things:
Verify your Django settings.py is set up properly for deployment. Pay very close attention to the STATIC_ROOT and STATICFILES_DIRS variables, as they are critical to serving your project's static files.
Make sure your virtualenv is activated and you have run pip install -r requirements.txt.
Note: at this point, try to run your project against your server's public IP: python manage.py runserver server_public_ip:8000. A lot of people assume that, because their project ran locally, it will run on the server; something always goes wrong.
Make sure you run python manage.py collectstatic on your server. This collects your static files and makes a directory for them. Take note of the path it says it's going to copy them to; you're going to need it for your /static/ location block in your nginx sites-available configuration file.
Make sure your gunicorn.conf command variable points to your virtualenv path, and that it points to a .sock file that both gunicorn and nginx can access (sudo chown user:group). Here is an example:
[program:gunicorn]
command=/home/user/.virtualenvs/djangoproject/bin/gunicorn -w3 --bind unix:/etc/supervisor/socks/gunicorn.sock djangoproject.wsgi:application --log-level=info
directory=/home/user/djangoproject
numprocs=3
process_name=%(program_name)s_%(process_num)d
user=user
environment=LANG=en_US.UTF-8,LC_ALL=en_US.UTF-8,HOME="/home/user/djangoproject", USER="user"
autostart=true
autorestart=true
stderr_logfile=/var/log/supervisor/gunicorn.err.log
stdout_logfile=/var/log/supervisor/gunicorn.out.log
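On the settings.py point above, the static-files section that collectstatic and the nginx /static/ block both depend on usually looks like this fragment (the paths are illustrative, not from the question):

```python
# settings.py fragment -- illustrative paths
import os

# In a real settings.py this is derived from __file__; hardcoded here
# only so the fragment stands alone.
BASE_DIR = "/home/user/djangoproject"

STATIC_URL = "/static/"
STATIC_ROOT = os.path.join(BASE_DIR, "static")          # collectstatic copies files here
STATICFILES_DIRS = [os.path.join(BASE_DIR, "assets")]   # extra source directories
```

STATIC_ROOT is the directory nginx's /static/ alias should point at; STATICFILES_DIRS are only inputs to collectstatic.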
There are a couple of ways you can set up your nginx configuration files. Some docs have you do it all in the nginx.conf file; however, you should break it up into sites-available with a symbolic link to sites-enabled. Either way should get you the same result. Here is an example:
upstream django {
server unix:/etc/supervisor/socks/gunicorn.sock;
}
server {
listen 80;
server_name www.example.com;
client_max_body_size 4G;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_headers_hash_max_size 1024;
proxy_read_timeout 300s;
proxy_connect_timeout 75s;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log debug;
location /media/ {
autoindex off;
alias /home/user/djangoproject/media/;
}
location /static/ {
autoindex off;
alias /home/user/djangoproject/static/;
}
location / {
proxy_pass http://django;
proxy_redirect off;
}
}
Make sure that you run sudo service nginx restart after every nginx configuration change.
Make sure you run sudo supervisorctl reread, sudo supervisorctl update all, and sudo supervisorctl restart all after every supervisor configuration file change. In your case every gunicorn.conf file change.
Lastly, as a general rule, make sure all of your directory paths match their respective processes. For example: say your gunicorn command variable points to a sock file at /path/to/project/gunicorn.sock, but your nginx configuration has a proxy_pass to /etc/supervisor/socks/gunicorn.sock. nginx then doesn't know where to pass your requests, so gunicorn never sees them. Also, you can add this to your nginx location blocks so you can see, in your browser dev tools' response headers, where each request gets to: add_header X-debug-message "The port 80, / location was served from django" always;
Note: If you are getting the "Welcome to nginx" page, it means nginx doesn't know where to send the request. A lot of the time it's a static root directory path problem; there are other possible issues, but they're situational to how things are set up, so you'll have to debug with some trial and error. Also, try adding a location block for a known URL like http://example.com/login: if you get there, you know you have an nginx configuration issue. If you get 404 Not Found, then you most likely have a Django project problem or a gunicorn problem.
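The add_header suggestion above can also be checked from a small script instead of the browser dev tools; a stdlib-only sketch (the header name matches the example in this answer):

```python
import urllib.request

def where_served(url: str):
    """Return (status, X-debug-message) for a URL, so you can see which
    nginx location block actually answered the request."""
    req = urllib.request.Request(url, method="GET")
    with urllib.request.urlopen(req) as resp:
        return resp.status, resp.headers.get("X-debug-message")
```

A None header means the request never hit the location block you instrumented.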
This guide should work just fine :-)
Install dependencies
[user@machine]$ sudo apt update && sudo apt upgrade -y
[user@machine]$ sudo apt install python3-dev python3-pip supervisor nginx -y
[user@machine]$ sudo systemctl enable nginx.service
[user@machine]$ sudo systemctl restart nginx.service
[user@machine]$ sudo systemctl status nginx.service
Create new site
/etc/nginx/sites-available/<site-name>
server{
listen 80;
server_name <ip or domain>;
location = /favicon.ico {
access_log off;
log_not_found off;
}
location /static/ {
root /path/to/static/root;
}
# main django application
location / {
include proxy_params;
proxy_pass http://unix:/path/to/project/root/run.sock;
}
}
Create symbolic link
[user@machine]$ ln -s /etc/nginx/sites-available/<site-name> /etc/nginx/sites-enabled
Setup supervisor
/etc/supervisor/conf.d/<project-name>.conf
[program:web]
directory=/path/to/project/root
user=<user>
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/path/to/logs/gunicorn-error.log
command=/path/to/gunicorn --access-logfile - --workers 3 --bind unix:/path/to/project/root/run.sock <project-name>.wsgi:application
Then
[user@machine]$ sudo supervisorctl reread
[user@machine]$ sudo supervisorctl update
[user@machine]$ sudo supervisorctl status web
[user@machine]$ sudo systemctl restart nginx.service
The error tells you that the way you start gunicorn is wrong.
It looks like you are using supervisor. As per the docs, the correct syntax is:
[program:gunicorn]
command=/path/to/gunicorn main:application -c /path/to/gunicorn.conf.py
directory=/path/to/project
user=nobody
autostart=true
autorestart=true
redirect_stderr=true
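The one-character difference matters: gunicorn's app argument is module.path:callable, so django_project.wsgi:application works while django_project.wsgi.application is rejected as an unrecognized argument. Conceptually, gunicorn resolves the spec roughly like this simplified sketch (the real loader also handles expressions after the colon):

```python
import importlib

def load_app(spec: str):
    """Resolve a gunicorn-style "module.path:callable" app spec."""
    module_path, sep, attr = spec.partition(":")
    if not sep:
        raise ValueError(f"expected module.path:callable, got {spec!r}")
    module = importlib.import_module(module_path)
    return getattr(module, attr)

# stdlib stand-in for django_project.wsgi:application
app = load_app("wsgiref.simple_server:demo_app")
```

Everything before the colon must be importable, and everything after it must be an attribute of that module; a dotted path with no colon fails before gunicorn even tries to import anything.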
I have followed the DigitalOcean tutorial to deploy a django app at DigitalOcean, the guide is:
https://www.digitalocean.com/community/tutorials/how-to-deploy-a-local-django-app-to-a-vps
Question:
The problem is that when I go to the IP with the browser, I see the Welcome to nginx page and not my django app.
Tutorial important points
Regarding the tutorial: I have not seen the error the tutorial mentions ("server_names_hash, you should increase server_names_hash_bucket_size: 32").
Another important difference between what I did and the tutorial is that gunicorn_django --bind yourdomainorip.com:8001 did not work for me.
I use this statement to start gunicorn:
web: gunicorn --chdir code/computationalMarketing computationalMarketing.wsgi --log-file -
My configuration
At /etc/nginx/sites-enabled I have a symlink called computationalMarketing that refers to /etc/nginx/sites-available/computationalMarketing.
This file has the following lines:
server {
listen 127.0.0.1;
server_name 159.65.18.211;
error_log /var/log/nginx/localhost.error_log info;
root /var/www/localhost/htdocs;
location /static/ {
alias /opt/computationalMarketing/static/;
}
location / {
proxy_pass http://127.0.0.1:8001;
proxy_set_header X-Forwarded-Host $server_name;
proxy_set_header X-Real-IP $remote_addr;
add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
}
}
I have a virtualenv at /opt/computationalMarketing, and inside it I have another computationalMarketing folder with the Git repo files.
This repo has the following structure:
My installations are:
sudo pip3 install numpy==1.13.3
sudo pip3 install pandas==0.22.0
sudo pip3 install scikit-learn==0.19.1
sudo pip3 install pymysql==0.8.1
sudo pip3 install psycopg2==2.7.3.2
sudo pip3 install django==2.0.5
sudo pip3 install django-connection-url==0.1.2
sudo pip3 install whitenoise==3.3.1
sudo pip3 install gunicorn==19.7.1
The database is a Postgresql, which I can connect without problem.
Can anyone guess why I am seeing the nginx page and not my django app?
You've told nginx to listen for this particular config on the localhost only. Don't do that. Remove that listen line altogether.
There are a few other weird things in your question. The command you claim to be using to start gunicorn is a Procfile instruction; it's not something you could actually run at the command line. What command are you actually using to start gunicorn? Whatever you use, you need to tell it to serve on the same port that nginx is proxying to, which in your case is 8001.
Hi, I am trying to deploy my portfolio to Ubuntu Server 16.04 but I keep getting an internal server error.
To walk through what I have done: I created the instance and changed the HTTP and HTTPS security group settings to allow traffic from anywhere.
After that I launched the instance and ran these commands:
ubuntu@ip-172-31-41-27:~$ sudo apt-get update
ubuntu@ip-172-31-41-27:~$ sudo apt-get install python-pip python-dev nginx git
ubuntu@ip-172-31-41-27:~$ sudo apt-get update
ubuntu@ip-172-31-41-27:~$ sudo pip install virtualenv
ubuntu@ip-172-31-41-27:~/portfolio$ virtualenv venv --python=python3
ubuntu@ip-172-31-41-27:~/portfolio$ source venv/bin/activate
(venv) ubuntu@ip-172-31-41-27:~/portfolio$ pip install -r requirements.txt
(venv) ubuntu@ip-172-31-41-27:~/portfolio$ pip install django bcrypt django-extensions
(venv) ubuntu@ip-172-31-41-27:~/portfolio$ pip install gunicorn
I edited settings.py:
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = False
ALLOWED_HOSTS = ['52.14.89.55']
STATIC_ROOT = os.path.join(BASE_DIR, "static/")
then ran:
(venv) ubuntu@ip-172-31-41-27:~/portfolio$ python manage.py collectstatic
followed by:
ubuntu@ip-172-31-41-27:~/portfolio$ sudo vim /etc/systemd/system/gunicorn.service
where I add
[Unit]
Description=gunicorn daemon
After=network.target
[Service]
User=ubuntu
Group=www-data
WorkingDirectory=/home/ubuntu/portfolio
ExecStart=/home/ubuntu/portfolio/venv/bin/gunicorn --workers 3 --bind unix:/home/ubuntu/portfolio/portfolio.sock portfolio.wsgi:application
[Install]
WantedBy=multi-user.target
followed by reloading systemd and starting gunicorn:
ubuntu@ip-172-31-41-27:~/portfolio$ sudo systemctl daemon-reload
ubuntu@ip-172-31-41-27:~/portfolio$ sudo systemctl start gunicorn
ubuntu@ip-172-31-41-27:~/portfolio$ sudo systemctl enable gunicorn
finally,
ubuntu@54.162.31.253:~$ sudo vim /etc/nginx/sites-available/portfolio
adding
server {
listen 80;
server_name 52.14.89.55;
location = /favicon.ico { access_log off; log_not_found off; }
location /static/ {
root /home/ubuntu/portfolio;
}
location / {
include proxy_params;
proxy_pass http://unix:/home/ubuntu/portfolio/portfolio.sock;
}
}
creating the link
ubuntu@ip-172-31-41-27:/etc/nginx/sites-enabled$ sudo ln -s /etc/nginx/sites-available/portfolio /etc/nginx/sites-enabled
ubuntu@ip-172-31-41-27:/etc/nginx/sites-enabled$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
deleting the default, then restarting:
ubuntu@ip-172-31-41-27:/etc/nginx/sites-enabled$ sudo service nginx restart
EDIT: I changed the commands above to match exactly what I have done.
I just installed a Jenkins EC2 instance in AWS. I am trying to configure redirection from HTTP to HTTPS (i.e. from http://myjenkins.com to https://myjenkins.com). Do I configure this in AWS or in Jenkins? I only found https://aws.amazon.com/premiumsupport/knowledge-center/redirect-http-https-elb/, but it does not help much. Please advise. Thanks.
If you are trying to reach the Jenkins web UI on port 443, I would suggest using a web server like nginx to proxy requests to your Jenkins installation. That way you can have a fairly vanilla Jenkins installation and handle all of the SSL configuration and port redirection in nginx (which is much easier to do).
Here's an example outline of how you might accomplish what you are asking:
Set up your server and install Jenkins normally, serving on port 8080.
Install nginx and configure it to proxy "/" to port 8080 on localhost.
Install your SSL certs. Using certbot with Let's Encrypt makes this step pretty easy as it handles all of the SSL config for you. (Note that for the install to work, your Security Group will have to allow all traffic to access your instance while you're doing the install. You can make it more restrictive once everything is configured. You also need a URL that is publicly accessible for your SSL certs to be valid).
Access your site using the bare domain and look for it to be forwarded to https.
And here are the actual steps I used to get mine working on an Ubuntu EC2 VM (you might have to hum along to the tune of the install, but you will get the idea):
apt-get update
apt-get upgrade -y
apt-get install nginx -y
cd /etc/nginx/sites-enabled/
vim default (see config below)
systemctl restart nginx
wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | apt-key add -
echo "deb http://pkg.jenkins-ci.org/debian binary/" | tee -a /etc/apt/sources.list
add-apt-repository ppa:webupd8team/java -y
apt-get update
apt-get install oracle-java8-installer -y
apt-get install jenkins -y
systemctl status jenkins
cd /var/lib/jenkins/secrets/
cat initialAdminPassword
ufw enable
sudo add-apt-repository ppa:certbot/certbot
apt-get update
apt-get install python-certbot-nginx
ufw allow 'Nginx Full'
ufw allow OpenSSH
ufw status
certbot --nginx -d jenkins.example.com
Your default nginx config will look something like this:
server {
listen 80 default_server;
listen [::]:80 default_server;
root /var/www/html;
index index.html index.htm index.nginx-debian.html;
server_name jenkins.example.com;
location / {
proxy_pass http://localhost:8080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
if ($scheme != "https") {
return 301 https://$host$request_uri;
}
}
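The if block near the end is what performs the HTTP to HTTPS redirect; its decision logic can be written out as a plain function (illustrative only):

```python
def redirect(scheme: str, host: str, request_uri: str):
    """Mirror nginx's `if ($scheme != "https") return 301 ...` rule.
    Returns a (status, Location) pair, or None when no redirect applies."""
    if scheme != "https":
        return 301, f"https://{host}{request_uri}"
    return None
```

Plain-HTTP requests get a 301 pointing at the same host and path over HTTPS; requests that already arrived over HTTPS fall through to the proxy_pass block.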
When you run the certbot --nginx -d jenkins.example.com step, it will also insert some lines into your nginx config to set up the SSL and cert specifics.
After that, you should be good!
You need to configure Jenkins to use HTTPS inside your EC2 instance;
and if you are using a load balancer in front of the EC2 instance, you also need to configure the ELB to forward the port to HTTPS.