I have an Elastic Beanstalk application running on a single EC2 instance, and I need to install an SSL certificate during deployment, at which point the server can't yet be reached via the IP address given by the A record on the DNS. I would like to use Let's Encrypt with the certbot-dns-cloudflare plugin to automatically obtain and install a certificate. I have created a Cloudflare credentials file containing my Cloudflare API key so that the plugin can ask Cloudflare to create a DNS TXT record and use it for the domain-ownership validation.
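For reference, the credentials file (~/.secrets/certbot/cloudflare.ini here) uses the plugin's simple key/value format, something like this (placeholder values):
dns_cloudflare_email = <your-cf-email>
dns_cloudflare_api_key = <your-cf-api-key>
It should be readable only by its owner (chmod 600), or certbot will warn about unsafe permissions.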
I encountered a number of problems when attempting to install certbot using the method described at https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/SSL-on-an-instance.html#letsencrypt (the EPEL repositories don't contain certbot), and had better luck with the certbot-auto install method described at https://medium.com/@mohan08p/install-and-renew-lets-encrypt-ssl-on-amazon-ami-6d3e0a61693.
So my process so far is:
$ wget https://dl.eff.org/certbot-auto
$ chmod a+x certbot-auto
$ sudo ./certbot-auto --debug --install-only
This appears to get certbot installed and I see no error messages.
Next I do this:
$ cd /opt/eff.org/certbot/venv
$ source bin/activate
$ sudo pip install certbot-dns-cloudflare
... cut short for brevity ...
Collecting zope.event (from zope.component->certbot>=0.21.1->certbot-dns-cloudflare)
Downloading https://files.pythonhosted.org/packages/c5/96/361edb421a077a4c208b4a5c212737d78ae03ce67fbbcd01621c49f332d1/zope.event-4.4-py2.py3-none-any.whl
Collecting pycparser (from cffi!=1.11.3,>=1.7->cryptography>=0.8->acme>=0.21.1->certbot-dns-cloudflare)
Downloading https://files.pythonhosted.org/packages/68/9e/49196946aee219aead1290e00d1e7fdeab8567783e83e1b9ab5585e6206a/pycparser-2.19.tar.gz (158kB)
100% |################################| 163kB 7.9MB/s
Collecting zope.proxy (from zope.deferredimport>=4.2.1->zope.component->certbot>=0.21.1->certbot-dns-cloudflare)
Downloading https://files.pythonhosted.org/packages/7c/f5/e9ed65cdf8c93d24d7512ef89e21b241bc9ae75d90bc8608cc142f4c26f9/zope.proxy-4.3.1.tar.gz (43kB)
100% |################################| 51kB 12.1MB/s
Installing collected packages: funcsigs, pbr, six, mock, zope.interface, chardet, idna, certifi, urllib3, asn1crypto, enum34, pycparser, cffi, ipaddress, cryptography, PyOpenSSL, requests, requests-toolbelt, pytz, pyrfc3339, josepy, acme, future, parsedatetime, ConfigArgParse, zope.hookable, zope.proxy, zope.deferredimport, zope.deprecation, zope.event, zope.component, certbot, jsonlines, cloudflare, certbot-dns-cloudflare
Found existing installation: six 1.8.0
Uninstalling six-1.8.0:
Successfully uninstalled six-1.8.0
Found existing installation: chardet 2.0.1
DEPRECATION: Uninstalling a distutils installed project (chardet) has been deprecated and will be removed in a future version. This is due to the fact that uninstalling a distutils project will only partially uninstall the project.
Uninstalling chardet-2.0.1:
Successfully uninstalled chardet-2.0.1
Found existing installation: urllib3 1.8.2
Uninstalling urllib3-1.8.2:
Successfully uninstalled urllib3-1.8.2
Running setup.py install for pycparser ... done
Found existing installation: requests 1.2.3
Uninstalling requests-1.2.3:
Successfully uninstalled requests-1.2.3
Running setup.py install for future ... done
Running setup.py install for ConfigArgParse ... done
Running setup.py install for zope.hookable ... done
Running setup.py install for zope.proxy ... done
Running setup.py install for cloudflare ... done
Successfully installed ConfigArgParse-0.13.0 PyOpenSSL-18.0.0 acme-0.29.1 asn1crypto-0.24.0 certbot-0.29.1 certbot-dns-cloudflare-0.29.1 certifi-2018.11.29 cffi-1.11.5 chardet-3.0.4 cloudflare-2.1.0 cryptography-2.4.2 enum34-1.1.6 funcsigs-1.0.2 future-0.17.1 idna-2.8 ipaddress-1.0.22 josepy-1.1.0 jsonlines-1.2.0 mock-2.0.0 parsedatetime-2.4 pbr-5.1.1 pycparser-2.19 pyrfc3339-1.1 pytz-2018.7 requests-2.21.0 requests-toolbelt-0.8.0 six-1.12.0 urllib3-1.24.1 zope.component-4.5 zope.deferredimport-4.3 zope.deprecation-4.4.0 zope.event-4.4 zope.hookable-4.2.0 zope.interface-4.6.0 zope.proxy-4.3.1
You are using pip version 9.0.3, however version 18.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
In the listing I see indications that the cloudflare plugin was successfully installed. However, when I list the plugins I don't see it:
$ sudo ./certbot-auto plugins
Saving debug log to /var/log/letsencrypt/letsencrypt.log
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
* apache
Description: Apache Web Server plugin
Interfaces: IAuthenticator, IInstaller, IPlugin
Entry point: apache = certbot_apache.entrypoint:ENTRYPOINT
* nginx
Description: Nginx Web Server plugin
Interfaces: IAuthenticator, IInstaller, IPlugin
Entry point: nginx = certbot_nginx.configurator:NginxConfigurator
* standalone
Description: Spin up a temporary webserver
Interfaces: IAuthenticator, IPlugin
Entry point: standalone = certbot.plugins.standalone:Authenticator
* webroot
Description: Place files in webroot directory
Interfaces: IAuthenticator, IPlugin
Entry point: webroot = certbot.plugins.webroot:Authenticator
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Attempts to run certbot-auto using the plugin fail as follows:
$ sudo ./certbot-auto certonly --dns-cloudflare --dns-cloudflare-credentials ~/.secrets/certbot/cloudflare.ini --dns-cloudflare-propagation-seconds 60 -d my-domain.com
usage:
certbot-auto [SUBCOMMAND] [options] [-d DOMAIN] [-d DOMAIN] ...
Certbot can obtain and install HTTPS/TLS/SSL certificates. By default,
it will attempt to use a webserver both for obtaining and installing the
certificate.
certbot: error: unrecognized arguments: --dns-cloudflare-credentials /home/ec2-user/.secrets/certbot/cloudflare.ini --dns-cloudflare-propagation-seconds 60
Can anyone advise?
Thanks
This is what worked for me in the end:
$ wget https://dl.eff.org/certbot-auto
$ chmod a+x certbot-auto
$ sudo ./certbot-auto --debug --install-only
$ whereis certbot
certbot: /usr/local/bin/certbot
$ cd /opt/eff.org/certbot/venv
$ source bin/activate
$ sudo pip install certbot-dns-cloudflare
$ deactivate
$ sudo /usr/local/bin/certbot plugins
Saving debug log to /var/log/letsencrypt/letsencrypt.log
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
* dns-cloudflare
Description: Obtain certificates using a DNS TXT record (if you are using
Cloudflare for DNS).
Interfaces: IAuthenticator, IPlugin
Entry point: dns-cloudflare =
certbot_dns_cloudflare.dns_cloudflare:Authenticator
* standalone
Description: Spin up a temporary webserver
Interfaces: IAuthenticator, IPlugin
Entry point: standalone = certbot.plugins.standalone:Authenticator
* webroot
Description: Place files in webroot directory
Interfaces: IAuthenticator, IPlugin
Entry point: webroot = certbot.plugins.webroot:Authenticator
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
If you incorporate this into .ebextensions/01-packages/install-packages.conf, which runs as root, you'll need to add something that creates the following file containing your Cloudflare email and API key at /root/.secrets/certbot/cloudflare.ini:
$ sudo mkdir -p /root/.secrets/certbot
$ sudo chmod 700 /root/.secrets
$ sudo su
# printf 'dns_cloudflare_email = <your-cf-email>\ndns_cloudflare_api_key = <your-cf-api-key>\n' > /root/.secrets/certbot/cloudflare.ini
# printf 'A\nn\nn\n' | /usr/local/bin/certbot certonly --dns-cloudflare --dns-cloudflare-credentials ~/.secrets/certbot/cloudflare.ini --dns-cloudflare-propagation-seconds 60 -d my-domain.com
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator dns-cloudflare, Installer None
Obtaining a new certificate
Performing the following challenges:
dns-01 challenge for my-domain.com
Waiting 60 seconds for DNS changes to propagate
Waiting for verification...
Cleaning up challenges
IMPORTANT NOTES:
- Congratulations! Your certificate and chain have been saved at:
/etc/letsencrypt/live/my-domain.com/fullchain.pem
Your key file has been saved at:
/etc/letsencrypt/live/my-domain.com/privkey.pem
Your cert will expire on 2019-03-17. To obtain a new or tweaked
version of this certificate in the future, simply run certbot
again. To non-interactively renew *all* of your certificates, run
"certbot renew"
- If you like Certbot, please consider supporting our work by:
Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
Donating to EFF: https://eff.org/donate-le
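Since Let's Encrypt certificates are valid for 90 days, you'll probably want to automate renewal too; a sketch (the post-hook assumes Apache on Amazon Linux, adjust for your web server):
$ sudo crontab -e
0 0,12 * * * /usr/local/bin/certbot renew --quiet --post-hook "service httpd restart"
certbot renew is a no-op until the certificate is close to expiry, so running it twice a day is safe.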
I had the same issue trying to install the certbot Cloudflare plugin on Amazon Linux. I tried a few different things, but the following worked using pip:
sudo yum install -y python-pip
pip install --upgrade pip
pip install certbot-dns-cloudflare
For me certbot was installed in two locations: /usr/local/bin/certbot, which worked, and the default /usr/bin/certbot, which couldn't find the newly installed plugins.
I used which certbot, certbot plugins, and /usr/local/bin/certbot plugins to debug this.
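Concretely, something along these lines (the venv path assumes the certbot-auto layout from the answer above):
$ which -a certbot # every certbot on the PATH, in resolution order
$ /usr/local/bin/certbot plugins # should list dns-cloudflare
$ /opt/eff.org/certbot/venv/bin/pip show certbot-dns-cloudflare # confirms which environment actually has the plugin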
Hope this helps someone.
Related
So I have been at this for days now and it is driving me crazy. Based on other posts, I have set up the following cloudbuild.yaml:
steps:
- name: gcr.io/cloud-builders/docker
args:
- build
- -t
- gcr.io/${INSTANCE_NAME}
- .
- name: gcr.io/cloud-builders/docker
args:
- push
- gcr.io/${INSTANCE_NAME}
- name: 'gcr.io/${INSTANCE_NAME}'
entrypoint: sh
env:
- DATABASE_URL=postgresql://USER:PASSWORD@localhost/DATABASE?host=/cloudsql/CONNECTION_NAME
args:
- -c
- |
wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
chmod +x cloud_sql_proxy
./cloud_sql_proxy -instances=CONNECTION_NAME=tcp:5432 & sleep 3
npx prisma migrate deploy
- name: gcr.io/google.com/cloudsdktool/cloud-sdk
entrypoint: gcloud
args:
- run
- deploy
- backend
- --image
- gcr.io/${INSTANCE_NAME}
- --region
- europe-west1
images:
- gcr.io/${INSTANCE_NAME}
When running this, I am greeted by:
Step #2: 2023/02/05 13:00:49 Listening on 127.0.0.1:5432 for CONNECTION_NAME
Step #2: 2023/02/05 13:00:49 Ready for new connections
Step #2: 2023/02/05 13:00:49 Generated RSA key in 118.117245ms
Step #2: npm WARN exec The following package was not found and will be installed: prisma@4.9.0
Step #2: Prisma schema loaded from prisma/schema.prisma
Step #2: Datasource "db": PostgreSQL database "develop", schema "public" at "localhost"
Step #2:
Step #2: Error: P1001: Can't reach database server at `/cloudsql/CONNECTION_NAME`:`5432`
Step #2:
Step #2: Please make sure your database server is running at `/cloudsql/CONNECTION_NAME`:`5432`.
So even with the database URL hardcoded and the Cloud SQL proxy working, I am STILL getting this error. What am I missing?
Check the container name in the .env file and change it to postgres, as it replaces the name in the connection string, as discussed here.
Or try the following format if you don't want to hardcode the IP address:
DB_USER=dbuser
DB_PASS=dbpass
DB_HOST=localhost
DB_PORT=5432
CLOUD_SQL_CONNECTION_NAME=/cloudsql/gcp-project-id:europe-west3:db-instance-name
DATABASE_URL=postgres://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_BASE}?host=${CLOUD_SQL_CONNECTION_NAME}
If you have a public IP, you can also try connecting through a unix socket, as sketched below.
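Note that the proxy invocation in the question starts the proxy in TCP mode (-instances=CONNECTION_NAME=tcp:5432), while the ?host=/cloudsql/... part of the URL tells the client to use a unix socket. To match the socket-style URL, the proxy would be started without the tcp: suffix, roughly:
mkdir -p /cloudsql
./cloud_sql_proxy -dir=/cloudsql -instances=CONNECTION_NAME & sleep 3
The socket then appears at /cloudsql/CONNECTION_NAME, which is exactly what ?host=/cloudsql/CONNECTION_NAME expects.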
How can I deploy a directory to an FTP or SSH server, with a trigger and cloudbuild.yaml?
So far I can already generate a listing of the files which I'd like to upload:
steps:
- name: 'ubuntu'
entrypoint: 'bash'
args:
- '-c'
- |-
find $_UPLOAD_DIRNAME -exec echo {} >> batch.txt \;
cat ./batch.txt
env:
...
I've come to the conclusion that I don't want the FTP anti-pattern
and have therefore written an alternate SSH cloudbuild.yaml:
generate a new pair of RSA keys.
use the private key for SSH login.
recursively upload the directory with scp.
run remote commands with ssh.
It logs in as user root, therefore remote /etc/ssh/sshd_config needs PermitRootLogin yes.
My variable substitutions correspond to the env entries in the build step below.
And this would be the cloudbuild.yaml, which generally demonstrates how to set up SSH keys:
steps:
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk:latest'
entrypoint: 'bash'
args:
- '-c'
- |-
echo Deploying $_UPLOAD_DIRNAME @ $SHORT_SHA
gcloud config set compute/zone $_COMPUTE_ZONE
gcloud config set project $PROJECT_ID
mkdir -p /builder/home/.ssh
gcloud compute config-ssh
gcloud compute scp --ssh-key-expire-after=$_SSH_KEY_EXPIRE_AFTER --scp-flag="${_SSH_FLAG}" --recurse ./$_UPLOAD_DIRNAME $_COMPUTE_INSTANCE:$_REMOTE_PATH
gcloud compute ssh $_COMPUTE_INSTANCE --ssh-key-expire-after=$_SSH_KEY_EXPIRE_AFTER --ssh-flag="${_SSH_FLAG}" --command="${_SSH_COMMAND}"
env:
- '_COMPUTE_ZONE=$_COMPUTE_ZONE'
- '_COMPUTE_INSTANCE=$_COMPUTE_INSTANCE'
- '_UPLOAD_DIRNAME=$_UPLOAD_DIRNAME'
- '_REMOTE_PATH=$_REMOTE_PATH'
- '_SSH_FLAG=$_SSH_FLAG'
- '_SSH_COMMAND=$_SSH_COMMAND'
- '_SSH_KEY_EXPIRE_AFTER=$_SSH_KEY_EXPIRE_AFTER'
- 'PROJECT_ID=$PROJECT_ID'
- 'SHORT_SHA=$SHORT_SHA'
I've managed to deploy to FTP with ncftp:
first patch /etc/apt/sources.list.
then install ncftp with apt-get.
create the file ~/.ncftp with variable substitutions.
optional step: replace text in files with sed.
recursively upload the directory with ncftpput.
Here's my cloudbuild.yaml (it is working, but the next answer might offer a better solution):
steps:
- name: 'ubuntu'
entrypoint: 'bash'
args:
- '-c'
- |-
echo Deploying ${_UPLOAD_DIRNAME} @ ${SHORT_SHA}
echo to ftp://${_REMOTE_ADDRESS}${_REMOTE_PATH}
echo "deb http://archive.ubuntu.com/ubuntu/ focal universe" > /etc/apt/sources.list
apt-get update -y && apt-get install -y ncftp
cat << EOF > ~/.ncftp
host $_REMOTE_ADDRESS
user $_FTP_USERNAME
pass $_FTP_PASSWORD
EOF
# sed -i "s/##_GIT_COMMIT_##/${SHORT_SHA}/g" ./${_UPLOAD_DIRNAME}/plugin.php
ncftpput -f ~/.ncftp -R $_REMOTE_PATH $_UPLOAD_DIRNAME
env:
- '_UPLOAD_DIRNAME=$_UPLOAD_DIRNAME'
- '_REMOTE_ADDRESS=$_REMOTE_ADDRESS'
- '_REMOTE_PATH=$_REMOTE_PATH'
- '_FTP_USERNAME=$_FTP_USERNAME'
- '_FTP_PASSWORD=$_FTP_PASSWORD'
- 'SHORT_SHA=$SHORT_SHA'
Where _REMOTE_PATH is e.g. /wp-content/plugins (the variable requires at least one slash) and _UPLOAD_DIRNAME is the name of the directory within the local Git repository, with no slashes.
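One small addition: if the remote target directory might not exist yet, ncftpput can create it first with its -m flag, e.g.:
ncftpput -f ~/.ncftp -m -R $_REMOTE_PATH $_UPLOAD_DIRNAME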
I'm trying to run Elasticsearch with Docker on an AWS EC2 instance, but it stops a few seconds after starting. Does anyone have experience with what the problem could be?
This is my Elasticsearch config in the docker-compose.yaml:
elasticsearch:
build:
context: ./elasticsearch
args:
- ELK_VERSION=${ELK_VERSION}
volumes:
- elasticsearch:/usr/share/elasticsearch/data
environment:
- cluster.name=laradock-cluster
- node.name=laradock-node
- bootstrap.memory_lock=true
- discovery.type=single-node
- "ES_JAVA_OPTS=-Xms7g -Xmx7g"
- xpack.security.enabled=false
- xpack.monitoring.enabled=false
- xpack.watcher.enabled=false
- cluster.initial_master_nodes=laradock-node
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
ports:
- "${ELASTICSEARCH_HOST_HTTP_PORT}:9200"
- "${ELASTICSEARCH_HOST_TRANSPORT_PORT}:9300"
depends_on:
- php-fpm
networks:
- frontend
- backend
And this is my Dockerfile:
FROM docker.elastic.co/elasticsearch/elasticsearch:7.5.1
RUN /usr/share/elasticsearch/bin/elasticsearch-plugin install --batch discovery-ec2
EXPOSE 9200 9300
Also, I did sysctl -w vm.max_map_count=655360 on my AWS EC2 instance
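Note that a value set with sysctl -w does not survive a reboot; persisting it would look something like:
echo 'vm.max_map_count=655360' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p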
Note: my AWS EC2 instance runs Ubuntu 18.04.
Thanks
I am not sure about your docker-compose.yaml, since you don't reference it in your Dockerfile, but I was able to reproduce the issue. I launched the same Ubuntu 18.04 in my AWS account and used your Dockerfile to launch an ES docker container with the commands below:
docker build --tag=elasticsearch-custom .
docker run -ti -v /usr/share/elasticsearch/data elasticsearch-custom
And my docker container was also stopping just after starting up as shown below:
ubuntu#ip-172-31-32-95:~$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
03cde4a19389 elasticsearch-custom "/usr/local/bin/dock…" 33 seconds ago Exited (78) 6 seconds ago mystifying_napier
When I checked the console logs while starting the container, I found the error below:
ERROR: [1] bootstrap checks failed [1]: the default discovery settings
are unsuitable for production use; at least one of
[discovery.seed_hosts, discovery.seed_providers,
cluster.initial_master_nodes] must be configured
This is a well-known error and can be resolved simply by adding -e "discovery.type=single-node" to the docker run command. After adding this to the docker run command as below:
docker run -e "discovery.type=single-node" -ti -v /usr/share/elasticsearch/data elasticsearch-custom
it's running fine, as shown below:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
191fc3dceb5a elasticsearch-custom "/usr/local/bin/dock…" 8 minutes ago Up 8 minutes 9200/tcp, 9300/tcp recursing_elgamal
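If a container keeps exiting like this, the quickest way to see why is usually its logs and exit code, e.g. (container id taken from docker ps -a above):
docker logs 03cde4a19389
docker inspect -f '{{.State.ExitCode}}' 03cde4a19389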
I have a Django site which I would like to deploy to a DigitalOcean server every time a branch is merged to master. I have it mostly working and have followed this tutorial.
.travis.yml
language: python
python:
- '2.7'
env:
- DJANGO_VERSION=1.10.3
addons:
ssh_known_hosts: mywebsite.com
git:
depth: false
before_install:
- openssl aes-256-cbc -K *removed encryption details* -in travis_rsa.enc -out travis_rsa -d
- chmod 600 travis_rsa
install:
- pip install -r backend/requirements.txt
- pip install -q Django==$DJANGO_VERSION
before_script:
- cp backend/local.env backend/.env
script: python manage.py test
deploy:
skip_cleanup: true
provider: script
script: "./travis-deploy.sh"
on:
all_branches: true
travis-deploy.sh - runs when the travis 'deploy' task calls it
#!/bin/bash
# print outputs and exit on first failure
set -xe
if [ $TRAVIS_BRANCH == "master" ] ; then
# setup ssh agent, git config and remote
echo -e "Host mywebsite.com\n\tStrictHostKeyChecking no\n" >> ~/.ssh/config
eval "$(ssh-agent -s)"
ssh-add travis_rsa
git remote add deploy "travis@mywebsite.com:/home/dean/se_dockets"
git config user.name "Travis CI"
git config user.email "travis@mywebsite.com"
git add .
git status # debug
git commit -m "Deploy compressed files"
git push -f deploy HEAD:master
echo "Git Push Done"
ssh -i travis_rsa -o UserKnownHostsFile=/dev/null travis@mywebsite.com 'cd /home/dean/se_dockets/backend; echo hello; ./on_update.sh'
else
echo "No deploy script for branch '$TRAVIS_BRANCH'"
fi
Everything works fine until things get to the 'deploy' stage. I keep getting error messages like:
###########################################################
# WARNING: POSSIBLE DNS SPOOFING DETECTED! #
###########################################################
The ECDSA host key for mywebsite.com has changed,
and the key for the corresponding IP address *REDACTED FOR STACK OVERFLOW*
is unknown. This could either mean that
DNS SPOOFING is happening or the IP address for the host
and its host key have changed at the same time.
###########################################################
# WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! #
###########################################################
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
* REDACTED FOR STACK OVERFLOW *
Please contact your system administrator.
Add correct host key in /home/travis/.ssh/known_hosts to get rid of this message.
Offending RSA key in /home/travis/.ssh/known_hosts:11
remove with: ssh-keygen -f "/home/travis/.ssh/known_hosts" -R mywebsite.com
Password authentication is disabled to avoid man-in-the-middle attacks.
Keyboard-interactive authentication is disabled to avoid man-in-the-middle attacks.
Permission denied (publickey,password).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Script failed with status 128
INTERESTINGLY, if I re-run this job the git push command will succeed at pushing to the deploy remote (my server). However, the next step in the deploy script, which is to SSH into the server and run some post-update commands, will fail for the same reason (host fingerprint change or something). Or it will ask for travis@mywebsite.com's password (it has none) and hang on the input prompt.
Additionally, when I debug the Travis CI build and use the SSH URL you're given to SSH into the machine Travis CI runs on, I can SSH into my own server from it. However, it takes multiple tries to get around the errors.
So this seems to be a fluid problem, with state persisting from one build to the next on retries, causing different errors/outcomes.
As you can see in my .yml file and the deploy script, I have attempted to disable various host name checks and added the domain to known hosts, etc., all to no avail.
I know I have things 99% set up correctly as things do mostly succeed when I retry the job a few times.
Anyone seen this before?
Cheers,
Dean
I've struggled with this for quite some time.
I have a Django application and I'm trying to package it into containers.
The problem is that when I publish to a certain port (8001) the host refuses my connection.
$ docker-machine ip default
192.168.99.100
When I try to curl or reach by browser 192.168.99.100:8001, the connection is refused.
C:\Users\meow>curl 192.168.99.100:8001
curl: (7) Failed to connect to 192.168.99.100 port 8001: Connection refused
First remark: I'm using Docker Toolbox.
Let's start from the docker-compose.yml file.
version: '2'
services:
db:
build: ./MongoDocker
image: ockidocky_mongo
web:
build: ./DjangoDocker
image: ockidocky
#volumes: .:/srv
ports:
- 8001:8000
links:
- db
Second remark: this file originally gave me some trouble with permissions when building from scratch. To fix this, I built the images separately:
docker build -t ockidocky .
docker build -t ockidocky_mongo .
Here's the dockerfile for Mongo:
# Based on this tutorial. https://devops.profitbricks.com/tutorials/creating-a-mongodb-docker-container-with-an-attached-storage-volume/
# Removed some sudo here and there because they are useless in Docker for Windows
# Set the base image to use to Ubuntu
FROM ubuntu:latest
# Set the file maintainer
MAINTAINER meow
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10 && \
echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | tee /etc/apt/sources.list.d/mongodb.list && \
apt-get update && apt-get install -y mongodb-org
VOLUME ["/data/db"]
WORKDIR /data
EXPOSE 27017
#Edited with --smallfiles (Check this issue https://github.com/dockerfile/mongodb/issues/9)
CMD ["mongod", "--smallfiles"]
Dockerfile for Django is based on this other tutorial.
I won't include the code, but it works.
It's important to say that the last line is:
ENTRYPOINT ["/docker-entrypoint.sh"]
I changed the docker-entrypoint.sh to run without Gunicorn.
echo Start Apache server.
python manage.py runserver
At this point docker ps tells me that everything is up:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ddfdb20c2d7c ockidocky "/docker-entrypoint.s" 9 minutes ago Up 9 minutes 0.0.0.0:8001->8000/tcp ockidocky_web_1
2e2c2e7a5563 ockidocky_mongo "mongod --smallfiles" 2 hours ago Up 9 minutes 27017/tcp ockidocky_db_1
When I run docker inspect ockidocky, the ports section displays:
"Ports": {
"8000/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "8001"
}
]
},
Is this dependent on mounting volumes? That's one of the things I really can't figure out, and it gives me a lot of errors with Docker Toolbox.
As far as I can see everything worked fine during the build, and as far as I know the refused connection shouldn't depend on that.
EDIT:
After connecting to the container and listing the processes with ps -aux, this is what I see:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.7 3.0 218232 31340 ? Ssl 20:15 0:01 python manage.p
root 9 13.1 4.9 360788 50132 ? Sl 20:15 0:26 /usr/bin/python
root 15 0.0 0.2 18024 2596 ? Ss 20:15 0:00 /bin/bash
root 20 0.1 0.3 18216 3336 ? Ss 20:17 0:00 /bin/bash
root 33 0.0 0.2 34424 2884 ? R+ 20:18 0:00 ps -aux
P.S. Feel free to suggest how I can make this project easier for myself.
I solved the issue. I don't know why I had to specify the port on this line of docker-entrypoint.sh:
python manage.py runserver 0.0.0.0:8000
Now docker logs ockidocky_web_1 shows the usual django output messages.
If someone could give a proper explanation, I would be happy to edit and upvote.
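For what it's worth, the likely explanation: with no address argument, manage.py runserver binds to 127.0.0.1, which inside a container is the container's own loopback interface, so Docker's 0.0.0.0:8001->8000 mapping forwards traffic to a port nothing is listening on from the outside; 0.0.0.0 makes Django listen on all of the container's interfaces. You can verify what a process is bound to from inside the container, e.g. (ss may need the iproute2 package, depending on the image):
docker exec -it ockidocky_web_1 ss -tlnp
Before the fix this shows 127.0.0.1:8000; after it, 0.0.0.0:8000.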
I had the same problem; additionally, ALLOWED_HOSTS in Django's settings.py needs to include the docker machine IP.
It's mostly because of a failure in the service that is supposed to listen on your desired port (in your case, port 8001)!
If the networking checks pass and you don't have any problem with your network, check the service that should be listening on that port; in all likelihood it did not load or run successfully!
How to check depends on the service you are running, but most of the time docker logs YOUR_CONTAINER_ID will tell you more about the reason for the failure.