I am trying to run my django application in a docker container. I am using uWSGI to serve the django application, and also have a celery worker running in the background. These processes are started by supervisord.
The problem that I am having is that I am unable to see the application on the port that I would expect to see it on. I am exposing port 8080 and running the uwsgi process on 8080, but cannot find my application in a browser at the ip address $(boot2docker ip):8080. I just get Google Chrome's 'This webpage is not available'. (I am using a Mac, so I need to get the boot2docker ip address). The container is clearly running, and reports that my uwsgi and celery processes are both successfully running as well.
When I run docker exec CONTAINER_ID curl localhost:8080 I get a response like
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    21    0    21    0     0   3150      0 --:--:-- --:--:-- --:--:--  3500
... so it seems like the container is accepting connections on port 8080.
When I run docker exec CONTAINER_ID netstat -lpn |grep :8080 I get tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 11/uwsgi
When I run docker inspect CONTAINER_ID I get the following:
[{
"AppArmorProfile": "",
"Args": [
"-c",
"/home/docker/code/supervisor-app.conf"
],
"Config": {
"AttachStderr": true,
"AttachStdin": false,
"AttachStdout": true,
"Cmd": [
"supervisord",
"-c",
"/home/docker/code/supervisor-app.conf"
],
"CpuShares": 0,
"Cpuset": "",
"Domainname": "",
"Entrypoint": null,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"env=staging"
],
"ExposedPorts": {
"8080/tcp": {}
},
"Hostname": "21443d8a16df",
"Image": "vitru",
"Memory": 0,
"MemorySwap": 0,
"NetworkDisabled": false,
"OnBuild": null,
"OpenStdin": false,
"PortSpecs": null,
"StdinOnce": false,
"Tty": false,
"User": "",
"Volumes": null,
"WorkingDir": ""
},
"Created": "2014-12-27T01:00:22.390065668Z",
"Driver": "aufs",
"ExecDriver": "native-0.2",
"HostConfig": {
"Binds": null,
"CapAdd": null,
"CapDrop": null,
"ContainerIDFile": "",
"Devices": [],
"Dns": null,
"DnsSearch": null,
"ExtraHosts": null,
"Links": null,
"LxcConf": [],
"NetworkMode": "bridge",
"PortBindings": {},
"Privileged": false,
"PublishAllPorts": false,
"RestartPolicy": {
"MaximumRetryCount": 0,
"Name": ""
},
"SecurityOpt": null,
"VolumesFrom": null
},
"HostnamePath": "/mnt/sda1/var/lib/docker/containers/21443d8a16df8e2911ae66d5d31341728d76ae080e068a5bb1dd48863febb607/hostname",
"HostsPath": "/mnt/sda1/var/lib/docker/containers/21443d8a16df8e2911ae66d5d31341728d76ae080e068a5bb1dd48863febb607/hosts",
"Id": "21443d8a16df8e2911ae66d5d31341728d76ae080e068a5bb1dd48863febb607",
"Image": "de52fbada520519793e348b60b608f7db514eef7fd436df4542710184c1ecb7f",
"MountLabel": "",
"Name": "/suspicious_fermat",
"NetworkSettings": {
"Bridge": "docker0",
"Gateway": "172.17.42.1",
"IPAddress": "172.17.0.87",
"IPPrefixLen": 16,
"MacAddress": "02:42:ac:11:00:57",
"PortMapping": null,
"Ports": {
"8080/tcp": null
}
},
"Path": "supervisord",
"ProcessLabel": "",
"ResolvConfPath": "/mnt/sda1/var/lib/docker/containers/21443d8a16df8e2911ae66d5d31341728d76ae080e068a5bb1dd48863febb607/resolv.conf",
"State": {
"ExitCode": 0,
"FinishedAt": "0001-01-01T00:00:00Z",
"Paused": false,
"Pid": 16230,
"Restarting": false,
"Running": true,
"StartedAt": "2014-12-27T01:00:22.661588847Z"
},
"Volumes": {},
"VolumesRW": {}
}
]
As someone not terribly fluent in Docker, I'm not really sure what all that means. Maybe there is a clue in there as to why I cannot connect to my server?
Here is my Dockerfile, so you can see if I'm doing anything blatantly wrong in there.
FROM ubuntu:14.04
# Get most recent apt-get
RUN apt-get -y update
# Install python and other tools
RUN apt-get install -y tar git curl nano wget dialog net-tools build-essential
RUN apt-get install -y python3 python3-dev python-distribute
RUN apt-get install -y nginx supervisor
# Get Python3 version of pip
RUN apt-get -y install python3-setuptools
RUN easy_install3 pip
RUN pip install uwsgi
RUN apt-get install -y python-software-properties
# Install GEOS
RUN apt-get -y install binutils libproj-dev gdal-bin
# Install node.js
RUN apt-get install -y nodejs npm
# Install postgresql dependencies
RUN apt-get update && \
apt-get install -y postgresql libpq-dev && \
rm -rf /var/lib/apt/lists
# Install pylibmc dependencies
RUN apt-get update
RUN apt-get install -y libmemcached-dev zlib1g-dev libssl-dev
ADD . /home/docker/code
# Setup config files
RUN ln -s /home/docker/code/supervisor-app.conf /etc/supervisor/conf.d/
# Create virtualenv and run pip install
RUN pip install -r /home/docker/code/vitru/requirements.txt
# Create directory for logs
RUN mkdir -p /var/logs
# Set environment as staging
ENV env staging
EXPOSE 8080
# The supervisor conf file starts uwsgi on port 8080 and starts a celeryd worker
CMD ["supervisord", "-c", "/home/docker/code/supervisor-app.conf"]
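For reference, a hypothetical sketch of what such a supervisor-app.conf might contain. The real file is not shown in the question; the program names, module path, and the app name vitru are assumptions based on the image name and directory layout above.

```ini
; hypothetical sketch -- not the asker's actual file
[supervisord]
nodaemon=true

[program:uwsgi]
; serve the Django app over HTTP on port 8080
command=uwsgi --http :8080 --module vitru.wsgi
directory=/home/docker/code/vitru

[program:celery]
; background worker for the same project
command=celery -A vitru worker --loglevel=info
directory=/home/docker/code/vitru
```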
I believe the problem you have is that EXPOSE only makes the ports available between containers, not to the host system. See the docs here:
https://docs.docker.com/reference/builder/#expose
You need to "publish" the port via the -p flag of the docker run command:
https://docs.docker.com/reference/run/#expose-incoming-ports
There is a similar distinction in Fig, if you were using it, between expose and ports directives in the fig.yml file.
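For example, assuming the image is named vitru as shown in the inspect output above, publishing the container's port 8080 to host port 8080 might look like this (a sketch; adjust the image name and ports to your setup):

```shell
# -p HOST_PORT:CONTAINER_PORT publishes the port to the host,
# which EXPOSE alone does not do
docker run -d -p 8080:8080 vitru

# On a Mac with boot2docker, the app should then be reachable at:
#   http://$(boot2docker ip):8080
```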
Related
I am trying to deploy a Docker container to Elastic Beanstalk in AWS. I repeatedly get an error while doing so, and each time the error is related to the ENTRYPOINT that I specified in the dockerrun.aws.json. What am I doing wrong here?
The webapp uses Django, python3 and keras.
This is my Dockerfile content:
# reference: https://hub.docker.com/_/ubuntu/
FROM ubuntu:18.04
RUN apt-get update && apt-get install \
-y --no-install-recommends python3 python3-virtualenv
# Adds metadata to the image as a key value pair, for example: LABEL version="1.0"
LABEL maintainer="Amir Ashraff <amir.ashraff@gmail.com>"
##Set environment variables
ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m virtualenv --python=/usr/bin/python3 $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
# Install dependencies:
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Open Ports for Web App
EXPOSE 8000
WORKDIR /manage.py
COPY . /manage.py
RUN chmod +x /manage.py
ENTRYPOINT [ "python3" ]
CMD [ "python3", "manage.py runserver 0.0.0.0:8000" ]
And this is the dockerrun.aws.json content:
{
"AWSEBDockerrunVersion": "1",
"Ports": [
{
"ContainerPort": ""
}
],
"Volumes": [
{
"HostDirectory": "/~/aptos",
"ContainerDirectory": "/aptos/diabetes_retinopathy_recognition"
}
],
"Logging": "/aptos/diabetes_recognition",
"Entrypoint": "/opt/venv/bin/python3",
"Command": "python3 manage.py runserver 0.0.0.0:8000"
}
And this is the error from AWS logs:
Docker container quit unexpectedly on Tue Aug 20 13:03:47 UTC 2019:
/opt/venv/bin/python3: can't open file 'python3': [Errno 2] No such file
or directory.
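One plausible reading of that error (a sketch, not a verified fix for this app): when both an Entrypoint and a Command are set, Docker appends the Command to the Entrypoint, so the container effectively runs /opt/venv/bin/python3 python3 "manage.py runserver 0.0.0.0:8000", and Python then tries to open a file literally named python3. A CMD that avoids repeating the interpreter, with each argument as its own list element, might look like:

```dockerfile
# Hypothetical sketch: ENTRYPOINT supplies the interpreter,
# CMD supplies only the arguments, one per list element
ENTRYPOINT [ "python3" ]
CMD [ "manage.py", "runserver", "0.0.0.0:8000" ]
```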
Packer seems to exclude project-level SSH keys from the instance, even though I have set the block-project-ssh-keys value to false. The final command fails, but that user has an SSH key tied to the project.
Any ideas?
{
"builders": [
{
"type": "googlecompute",
"project_id": "mahamed901",
"source_image_family": "ubuntu-1804-lts",
"ssh_username": "packer",
"zone": "europe-west1-b",
"preemptible": "true",
"image_description": "Worker Node for Jenkins (Java + Docker)",
"disk_type": "pd-ssd",
"disk_size": "10",
"metadata": {"block-project-ssh-keys":"false"},
"image_name": "ubuntu1804-jenkins-docker-{{isotime | clean_image_name}}",
"image_family": "ubuntu1804-jenkins-worker"
}
],
"provisioners": [
{
"type": "shell",
"inline": [
"sudo apt update",
"#sudo apt upgrade -y",
"#sudo apt-get install -y git make default-jdk",
"#curl https://get.docker.com/ | sudo bash",
"uptime",
"sudo curl -L \"https://github.com/docker/compose/releases/download/1.23.1/docker-compose-$(uname -s)-$(uname -m)\" -o /usr/local/bin/docker-compose",
"sudo chmod +x /usr/local/bin/docker-compose",
"sleep 5",
"cat /etc/passwd",
"#sudo usermod -aG docker jenkins",
"#sudo docker ps",
"#rm ~/.ssh/authorized_keys"
]
}
]
}
This is controlled by the metadata option block-project-ssh-keys, which takes true or false.
See this issue.
(Also, the format of your metadata is wrong: it should be a JSON object, so remove the square brackets [ ].)
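For reference, a metadata block in the expected object form (no square brackets) might look like this inside the googlecompute builder, using the value from the question:

```json
"metadata": {
  "block-project-ssh-keys": "false"
}
```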
I'm attempting to build an AWS AMI using both Packer and Ansible to provision my AMI. I'm getting stuck on being able to copy some local files to my newly spun up EC2 instance using Ansible. I'm using the copy module in Ansible to do this. Here's what my Ansible code looks like:
- name: Testing copy of the local remote file
copy:
src: /tmp/test.test
dest: /tmp
Here's the error I get:
amazon-ebs: TASK [Testing copy of the local remote file] ***********************************
amazon-ebs: fatal: [127.0.0.1]: FAILED! => {"changed": false, "failed": true, "msg": "Unable to find '/tmp/test.test' in expected paths."}
I've verified that the file /tmp/test.test exists on my local machine from which Ansible is running.
For my hosts file I just have localhost in it, since Packer tells Ansible everything it needs to know about where to run its commands.
I'm not sure where to go from here or how to properly debug this error, so I'm hoping for a little help.
Here's what my Packer script looks like:
{
"variables": {
"aws_access_key": "{{env `access_key`}}",
"aws_secret_key": "{{env `secret_key`}}"
},
"builders": [{
"type": "amazon-ebs",
"access_key": "{{user `aws_access_key`}}",
"secret_key": "{{user `aws_secret_key`}}",
"region": "us-east-1",
"source_ami": "ami-116d857a",
"instance_type": "t2.micro",
"ssh_username": "admin",
"ami_name": "generic_jenkins_image",
"ami_description": "Testing AMI building with Packer",
"vpc_id": "xxxxxxxx",
"subnet_id": "xxxxxxxx",
"associate_public_ip_address": "true",
"tags": {"Environment" : "Dev", "Product": "SharedOperations"}
}],
"provisioners": [
{
"type": "shell",
"inline": [
"sleep 30",
"sudo rm -f /var/lib/dpkg/lock",
"sudo apt-get update -y --fix-missing",
"sudo apt-get -y install libpq-dev python-dev libxml2-dev libxslt1-dev libldap2-dev libsasl2-dev libffi-dev gcc build-essential python-pip",
"sudo pip install ansible"
]
},
{
"type": "ansible-local",
"playbook_file": "ansible/main.yml"
}
]
}
And here's my entire Ansible file:
---
- hosts: all
sudo: yes
tasks:
- name: Testing copy of the local remote file
copy:
src: /tmp/test.test
dest: /tmp
You are using the ansible-local provisioner, which runs the playbooks directly on the target ("local" in HashiCorp products like Vagrant and Packer describes the point of view of the provisioned machine).
The target does not have the /tmp/test.test file, hence you get the error.
You actually want to run the playbook using the regular Ansible provisioner.
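A sketch of what that provisioner block might look like (the ansible provisioner runs from the machine executing Packer, so files such as /tmp/test.test are resolved locally; the playbook path is taken from the question):

```json
{
  "type": "ansible",
  "playbook_file": "ansible/main.yml"
}
```

With this provisioner you also no longer need the inline shell step that installs pip and Ansible on the instance, since Ansible connects over SSH from the outside.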
I get an error when I launch crossbar 0.12.1 that I did not have with version 0.11:
[Controller 210] crossbar.error.invalid_configuration:
WSGI app module 'myproject.wsgi' import failed: No module named django -
Python search path was [u'/myproject', '/opt/crossbar/site-packages/crossbar/worker', '/opt/crossbar/bin', '/opt/crossbar/lib_pypy/extensions', '/opt/crossbar/lib_pypy', '/opt/crossbar/lib-python/2.7', '/opt/crossbar/lib-python/2.7/lib-tk', '/opt/crossbar/lib-python/2.7/plat-linux2', '/opt/crossbar/site-packages']
I have not changed anything other than updating crossbar.
My config.json is still the same, with the pythonpath of my project in the options:
{
"workers": [
{
"type": "router",
"options": {
"pythonpath": ["/myproject"]
},
"realms": [
{
"name": "realm1",
"roles": [
{
"name": "anonymous",
"permissions": [
{
"uri": "*",
"publish": true,
"subscribe": true,
"call": true,
"register": true
}
]
}
]
}
],
"transports": [
{
"type": "web",
"endpoint": {
"type": "tcp",
"port": 80
},
"paths": {
"/": {
"type": "wsgi",
"module": "myproject.wsgi",
"object": "application"
},
etc...
Do you have any ideas?
Thanks.
It seems that "pythonpath": ["/myproject"] replaces the other Python path entries coming from your dist-packages. Look for an option that appends /myproject instead of replacing the current path settings.
Or: add your project's path to the machine's Python path and don't give crossbar any pythonpath at all, so it will pick up the existing one.
Something like (depends on OS):
$ sudo nano /usr/lib/python2.7/dist-packages/myproject.pth
Then:
/home/username/path/to/myproject
I work with Docker in order to have a clean environment.
The Dockerfile here, http://crossbar.io/docs/Installation-on-Docker/ , seems broken:
ImportError: No module named setuptools_ext
----------------------------------------
Cleaning up...
Command python setup.py egg_info failed with error code 1 in /tmp/pip-build-VfPnRU/pynacl
Storing debug log for failure in /root/.pip/pip.log
The command '/bin/sh -c pip install crossbar[all]' returned a non-zero code: 1
It seems to be solved by adding:
RUN pip install --upgrade cffi
before RUN pip install crossbar[all].
With this environment, my problem is solved :)
I don't know why I got this error before, but it works now.
Many thanks to all here and to indexerror, the "French Python Stack Overflow" :)
http://indexerror.net/3380/crossbar-0-12-1-wsgi-error-no-module-named-django?show=3415
P.S.
Here is the clean Dockerfile I use:
FROM ubuntu
ENV APPNAME="monappli"
ADD requirements.txt /tmp/
RUN apt-get update
RUN apt-get install -y gcc build-essential python-dev python2.7-dev libxslt1-dev libssl-dev libxml2 libxml2-dev tesseract-ocr python-imaging libffi-dev libreadline-dev libbz2-dev libsqlite3-dev libncurses5-dev python-mysqldb python-pip
RUN cd /tmp/ && pip install -r requirements.txt
RUN pip install -U crossbar[all]
WORKDIR $APPNAME
CMD cd / && cd $APPNAME && python manage.py makemigrations && python manage.py migrate && crossbar start
With Django, Flask and/or all the dependencies you want in a file named "requirements.txt", in the same folder as the Dockerfile.
Example requirements.txt:
ipython
django
djangorestframework
djangorestframework-jwt
django-cors-headers
bottlenose
python-amazon-simple-product-api
python-dateutil
beautifulsoup4
datetime
mechanize
pytesseract
requests
I'm writing Chef wrappers around some of the built in OpsWorks cookbooks. I'm using Berkshelf to clone the OpsWorks cookbooks from their github repo.
This is my Berksfile:
source 'https://supermarket.getchef.com'
metadata
def opsworks_cookbook(name)
cookbook name, github: 'aws/opsworks-cookbooks', branch: 'release-chef-11.10', rel: name
end
%w(dependencies scm_helper mod_php5_apache2 ssh_users opsworks_agent_monit
opsworks_java gem_support opsworks_commons opsworks_initial_setup
opsworks_nodejs opsworks_aws_flow_ruby
deploy mysql memcached).each do |cb|
opsworks_cookbook cb
end
My metadata.rb:
depends 'deploy'
depends 'mysql'
depends 'memcached'
The problem is, when I try to override attributes that depend on the opsworks key in the node hash, I get a:
NoMethodError
-------------
undefined method `[]=' for nil:NilClass
OpsWorks has a whole bunch of pre-recipe dependencies that create these keys and do a lot of their setup. I'd like to find a way to either pull in those services and run them against my Kitchen instances or mock them in a way that I can actually test my recipes.
Is there a way to do this?
I would HIGHLY recommend you check out Mike Greiling's blog post Simplify OpsWorks Development With Packer and his GitHub repo opsworks-vm, which help you mock the entire OpsWorks stack, including the install of the OpsWorks agent, so you can also test app deploy recipes, multiple layers, multiple instances at the same time, etc. It is extremely impressive.
Quick Start on Ubuntu 14.04
NOTE: This can NOT be done from an ubuntu virtual machine because virtualbox does not support nested virtualization of 64-bit machines.
Install ChefDK
mkdir /tmp/packages && cd /tmp/packages
wget https://opscode-omnibus-packages.s3.amazonaws.com/ubuntu/12.04/x86_64/chefdk_0.8.1-1_amd64.deb
sudo dpkg -i chefdk_0.8.1-1_amd64.deb
cd /opt/chefdk/
chef verify
which ruby
echo 'eval "$(chef shell-init bash)"' >> ~/.bash_profile && source ~/.bash_profile
Install VirtualBox
echo 'deb http://download.virtualbox.org/virtualbox/debian vivid contrib' | sudo tee /etc/apt/sources.list.d/virtualbox.list
wget -q https://www.virtualbox.org/download/oracle_vbox.asc -O- | sudo apt-key add -
sudo apt-get update -qqy
sudo apt-get install virtualbox-5.0 dkms
Install Vagrant
cd /tmp/packages
wget https://dl.bintray.com/mitchellh/vagrant/vagrant_1.7.4_x86_64.deb
sudo dpkg -i vagrant_1.7.4_x86_64.deb
vagrant plugin install vagrant-berkshelf
vagrant plugin install vagrant-omnibus
vagrant plugin list
Install Packer
mkdir /opt/packer && cd /opt/packer
wget https://dl.bintray.com/mitchellh/packer/packer_0.8.6_linux_amd64.zip
unzip packer_0.8.6_linux_amd64.zip
echo 'PATH=$PATH:/opt/packer' >> ~/.bash_profile && source ~/.bash_profile
Build Mike Greiling's opsworks-vm virtualbox image using Packer
mkdir ~/packer && cd ~/packer
git clone https://github.com/pixelcog/opsworks-vm.git
cd opsworks-vm
rake build install
This will install a new virtualbox vm to ~/.vagrant.d/boxes/ubuntu1404-opsworks/
To mock a single opsworks instance, create a new Vagrantfile like so:
Vagrant.configure("2") do |config|
config.vm.box = "ubuntu1404-opsworks"
config.vm.provision :opsworks, type: 'shell', args: 'path/to/dna.json'
end
The dna.json file path is set relative to the Vagrantfile and should contain any JSON data you wish to send to OpsWorks Chef.
For example:
{
"deploy": {
"my-app": {
"application_type": "php",
"scm": {
"scm_type": "git",
"repository": "path/to/my-app"
}
}
},
"opsworks_custom_cookbooks": {
"enabled": true,
"scm": {
"repository": "path/to/my-cookbooks"
},
"recipes": [
"recipe[opsworks_initial_setup]",
"recipe[dependencies]",
"recipe[mod_php5_apache2]",
"recipe[deploy::default]",
"recipe[deploy::php]",
"recipe[my_custom_cookbook::configure]"
]
}
}
To mock multiple opsworks instances and include layers see his AWS OpsWorks "Getting Started" Example which includes the stack.json below.
Vagrantfile (for multiple instances)
Vagrant.configure("2") do |config|
config.vm.box = "ubuntu1404-opsworks"
# Create the php-app layer
config.vm.define "app" do |layer|
layer.vm.provision "opsworks", type:"shell", args:[
'ops/dna/stack.json',
'ops/dna/php-app.json'
]
# Forward port 80 so we can see our work
layer.vm.network "forwarded_port", guest: 80, host: 8080
layer.vm.network "private_network", ip: "10.10.10.10"
end
# Create the db-master layer
config.vm.define "db" do |layer|
layer.vm.provision "opsworks", type:"shell", args:[
'ops/dna/stack.json',
'ops/dna/db-master.json'
]
layer.vm.network "private_network", ip: "10.10.10.20"
end
end
stack.json
{
"opsworks": {
"layers": {
"php-app": {
"instances": {
"php-app1": {"private-ip": "10.10.10.10"}
}
},
"db-master": {
"instances": {
"db-master1": {"private-ip": "10.10.10.20"}
}
}
}
},
"deploy": {
"simple-php": {
"application_type": "php",
"document_root": "web",
"scm": {
"scm_type": "git",
"repository": "dev/simple-php"
},
"memcached": {},
"database": {
"host": "10.10.10.20",
"database": "simple-php",
"username": "root",
"password": "correcthorsebatterystaple",
"reconnect": true
}
}
},
"mysql": {
"server_root_password": "correcthorsebatterystaple",
"tunable": {"innodb_buffer_pool_size": "256M"}
},
"opsworks_custom_cookbooks": {
"enabled": true,
"scm": {
"repository": "ops/cookbooks"
}
}
}
For those not familiar with Vagrant: you just do a vagrant up to start the instance(s). Then you can modify your cookbook locally, and any changes can be applied by re-running Chef against the existing instance(s) with vagrant provision. You can do a vagrant destroy and vagrant up to start from scratch.
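The iteration loop just described, as commands:

```shell
vagrant up          # boot and provision the instance(s)
vagrant provision   # re-run Chef after editing cookbooks locally
vagrant destroy -f && vagrant up   # rebuild from scratch
```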
You'll have to do it manually. You can add arbitrary attributes to your .kitchen.yml; just go on an OpsWorks machine, log the values you need, and either use them directly or adapt them into workable test data.
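A sketch of that approach in .kitchen.yml; every value under opsworks below is a placeholder you would replace with data logged from a real OpsWorks instance, and the wrapper cookbook name is hypothetical:

```yaml
suites:
  - name: default
    run_list:
      - recipe[my_wrapper::default]   # hypothetical wrapper cookbook
    attributes:
      opsworks:                       # mock the keys OpsWorks recipes expect
        stack:
          name: test-stack            # placeholder value
        instance:
          layers: [php-app]           # placeholder value
```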