I'm a Docker novice and I'm currently working through this tutorial: https://docs.docker.com/compose/django/
There are several things that I don't understand:
The "code" folder is never created.
Once the container is launched with the "docker-compose up" command, how can I access the PostgreSQL command line?
In the tutorial, at the "Create a Django project" part, the first point is
"Change to the root of your project directory." But if I understand
correctly the tutorial, I'm already in this folder.
Can someone help me? Thanks in advance.
The "code" folder is never created.
The code folder is mounted into the Docker container from the current directory, according to docker-compose.yml. So when you run a command in the container there will be a /code directory inside it (you can verify that with docker-compose run web ls /code), but not on your local OS.
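For reference, this is roughly what the tutorial's docker-compose.yml looks like (a sketch, your exact file may differ slightly); the volumes entry is what does the mounting:

version: "3"
services:
  db:
    image: postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code      # mounts the current host directory at /code inside the container
    ports:
      - "8000:8000"
    depends_on:
      - db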
Once the container is launched with the "docker-compose up" command, how can I access the PostgreSQL command line?
You can open a psql shell inside the running db container:
docker-compose exec db psql -U postgres
Or via Django's dbshell:
docker-compose run web python manage.py dbshell
In the tutorial, at the "Create a Django project" part, the first point is "Change to the root of your project directory." But if I understand the tutorial correctly, I'm already in this folder.
If you're already in the project root directory, then you can skip that step, but always keep in mind that the current directory will be mounted to the /code dir in the container.
I got my Spring Boot app running in a Docker container built from a Dockerfile and hosted on an AWS Ubuntu instance.
Everything is working perfectly, except I have an image, a CSS file and a JS file that do not load. Upon inspecting the page, these files show a 404 Not Found error.
I have used WinSCP to upload my files to my AWS instance. The myApp folder is where my Dockerfile is and where I build my container.
Directory Structure is:
myApp
  -Dockerfile
  -target
    -myApp.jar
  -src
    -main
      -java
        -[all my code in respective subdirectories]
      -resources
        -static
          -my.js
          -my.css
          -my.jpg
        -templates
          -folder1
            -html
            -html2
          -folder2
            -html
            -html2
I am almost certain my problem lies in the docker container and my dockerfile.
Spring Boot automatically looks for static files in /src/main/resources/static. I'm thinking my docker container does not have this file structure.
Here is my Dockerfile
FROM ubuntu:latest
RUN apt-get update && apt-get install -y openjdk-14-jdk
WORKDIR /usr/local/bin/myApp
ADD . /src/main/resources/static
ADD target/myapp.jar .
ENTRYPOINT ["java", "-jar", "myApp.jar"]
When I build the container, it shows everything copied and built successfully, but the files cannot be reached. What is weird to me is that Spring Boot is still serving the correct templates from the templates folder. I am at a complete loss on this one. I have tried adding the resources individually in the Dockerfile and still no luck.
You are setting WORKDIR and then copying the .jar into that location using a relative path, but when you copy the other files (/src/main/resources/static) you use an absolute path, which completely destroys your folder structure, since those files are not copied into the folder referenced by your WORKDIR. You have probably forgotten the . (dot) in front of that path: it should be ./src/main/resources/static.
If you are not sure, run docker exec -it <container-id> bash to get into your running container and see what was copied where; fixing your Dockerfile should be easy from then on.
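As a sketch, the Dockerfile with that relative destination applied would look something like this (also note the jar name has to match between ADD and ENTRYPOINT; the question mixes myapp.jar and myApp.jar):

FROM ubuntu:latest
RUN apt-get update && apt-get install -y openjdk-14-jdk
WORKDIR /usr/local/bin/myApp
# relative destination, so the static files land under the WORKDIR
ADD . ./src/main/resources/static
# jar name must match the ENTRYPOINT below
ADD target/myApp.jar .
ENTRYPOINT ["java", "-jar", "myApp.jar"]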
My current objective is to have Travis deploy our Django+Docker-Compose project upon successful merge of a pull request to our Git master branch. I have done some work setting up our AWS CodeDeploy since Travis has builtin support for it. When I got to the AppSpec and actual deployment part, at first I tried to have an AfterInstall script do docker-compose build and then have an ApplicationStart script do docker-compose up. The containers that have images pulled from the web are our PostgreSQL container (named db, image aidanlister/postgres-hstore which is the usual postgres image plus the hstore extension), the Redis container (uses the redis image), and the Selenium container (image selenium/standalone-firefox). The other two containers, web and worker, which are the Django server and Celery worker respectively, use the same Dockerfile to build an image. The main command is:
CMD paver docker_run
which uses a pavement.py file:
from paver.easy import task
from paver.easy import sh

@task
def docker_run():
    migrate()
    collectStatic()
    updateRequirements()
    startServer()

@task
def migrate():
    sh('./manage.py makemigrations --noinput')
    sh('./manage.py migrate --noinput')

@task
def collectStatic():
    sh('./manage.py collectstatic --noinput')

# find any updates to existing packages, install any new packages
@task
def updateRequirements():
    sh('pip install --upgrade -r requirements.txt')

@task
def startServer():
    sh('./manage.py runserver 0.0.0.0:8000')
Here is what I (think I) need to make happen each time a pull request is merged:
Have Travis deploy changes using CodeDeploy, based on deploy section in .travis.yml tailored to our CodeDeploy setup
Start our Docker containers on AWS after successful deployment using our docker-compose.yml
How do I get this second step to happen? I'm pretty sure ECS is not what is needed here. Right now I can get Docker started with sudo service docker start, but I cannot get docker-compose up to succeed. Though deployments are reported as "successful", this is only because the docker-compose up command is run in the background by the Validate Service script. In fact, when I try to run docker-compose up manually while ssh'd into the EC2 instance, I get stuck building one of the containers, right before the CMD paver docker_run part of the Dockerfile.
This took a long time to work out, but I finally figured out a way to deploy a Django+Docker-Compose project with CodeDeploy without Docker-Machine or ECS.
One thing that was important was to make an alternate docker-compose.yml that excluded the selenium container--all it did was cause problems and was only useful for local testing. In addition, it was important to choose an instance type that could handle building containers. The reason why containers couldn't be built from our Dockerfile was that the instance simply did not have the memory to complete the build. Instead of a t1.micro instance, an m3.medium is what worked. It is also important to have sufficient disk space--8GB is far too small. To be safe, 256GB would be ideal.
It is important to have an AfterInstall script run service docker start while doing the necessary Docker installation and setup (including installing Docker-Compose). This explicitly starts the Docker daemon; without this command, you will get the error Could not connect to Docker daemon. When installing Docker-Compose, it is important to place it in /opt/bin/ so that the binary is used via /opt/bin/docker-compose. There are problems with placing it in /usr/local/bin (I don't remember exactly what problems, but it's related to the particular Linux distribution of the Amazon Linux AMI). The AfterInstall script needs to be run as root (runas: root in the appspec.yml AfterInstall section).
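As a rough sketch (the package name, Compose version and pinned release URL are illustrative, not the exact script I used), the AfterInstall script did something along these lines:

#!/bin/bash
# after_install.sh - run as root via the AfterInstall hook

# Install Docker on the Amazon Linux instance
yum install -y docker

# Explicitly start the Docker daemon; without this you get
# "Could not connect to Docker daemon"
service docker start

# Install Docker-Compose into /opt/bin (not /usr/local/bin)
mkdir -p /opt/bin
curl -L "https://github.com/docker/compose/releases/download/1.8.0/docker-compose-$(uname -s)-$(uname -m)" -o /opt/bin/docker-compose
chmod +x /opt/bin/docker-compose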
Additionally, the final phase of deployment, which is starting up the containers with docker-compose up (more specifically /opt/bin/docker-compose -f docker-compose-aws.yml up), needs to be run in the background with stdin and stdout redirected to /dev/null:
/opt/bin/docker-compose -f docker-compose-aws.yml up -d > /dev/null 2> /dev/null < /dev/null &
Otherwise, once the server is started, the deployment will hang because the final script command (in the ApplicationStart section of my appspec.yml in my case) doesn't exit. This will probably result in a deployment failure after the default deployment timeout of 1 hour.
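Tying it together, the relevant hooks in appspec.yml looked roughly like this (script names and the destination path are placeholders; the start script contains the backgrounded docker-compose line above):

version: 0.0
os: linux
files:
  - source: /
    destination: /home/ec2-user/app
hooks:
  AfterInstall:
    - location: scripts/after_install.sh
      timeout: 300
      runas: root
  ApplicationStart:
    - location: scripts/start_containers.sh
      timeout: 300
      runas: root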
If all goes well, then the site can finally be accessed at the instance's public DNS and port in your browser.
I've linked a git branch to my Elastic Beanstalk environment and using git aws.push it deploys correctly.
I've now added a .ebextensions directory which contains a config file that should create a couple of directories. However, nothing appears to be happening.
I understand that the .ebextensions directory should be copied across to the EC2 instance as well, but I'm not seeing it.
I've checked eb-tools.log and it's not mentioned in the upload.
Is there something additional that's required?
The script contains:
commands:
  cache:
    command: mkdir /tmp/cache
  items:
    command: mkdir /tmp/cache/items
  chmod:
    command: chmod -R 644 /tmp
You can find the run logs for this at /var/log/cfn-init.log.
In here I could see that the mkdir commands had worked initially but subsequently failed as the directory already existed.
It turns out that .ebextensions commands are run in alphabetical order by name, so I had to rename the commands to:
01command1:
02command2:
etc.
From this point on it worked fine.
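Applied to the config above, that gives something like this (using mkdir -p is my own tweak so the commands don't fail on redeploys when the directories already exist):

commands:
  01_cache:
    command: mkdir -p /tmp/cache
  02_items:
    command: mkdir -p /tmp/cache/items
  03_chmod:
    command: chmod -R 644 /tmp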
Something else that was confusing me is that the .ebextensions directory in my local git repo was not appearing in the target instance's deployment directory. This is because the directory is deleted once it has been run.
Double check that your local script file has a .config extension. I was having a similar problem because my local file was called .ebextensions/01_stuff.yaml and it was fixed once I renamed it to .ebextensions/01_stuff.config.
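In case it helps, the rename is just the following (git mv keeps the change tracked; plain mv works too if the file isn't committed yet):

git mv .ebextensions/01_stuff.yaml .ebextensions/01_stuff.config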
I'm trying to get the Neo4j on Heroku and Getting Started with Python on Heroku tutorials up and going. The Neo4j one works fine, but the Python one has problems.
For anyone else trying to follow this tutorial, I've recorded the
problems and my solutions to help you out as well.
This is all done on a Win7 x64 dev machine.
Q1) "virtualenv venv --distribute" - errors with:
'virtualenv' is not recognized as an internal or external command, operable program or batch file.
A1) The workaround is to fully qualify the path to:
"C:\Python27\Scripts\virtualenv venv --distribute"
Q2a) "foreman start" - errors with:
'foreman' is not recognized as an internal or external command, operable program or batch file.
A2) Looks like a path issue so I ran the line:
"set PATH=%PATH%;C:\Program Files (x86)\Heroku\ruby-1.9.2\bin\"
Q2b) "foreman start" now errors:
Bad file descriptor
{Ruby paths...}
A2b) Help?
So I can't run the app locally, but maybe still on the server, so moving on...
Q3) .gitignore - can't create this file on Windows.
A3) Clone another project and copy that file and edit.
Q4) "git push heroku master" - errors with:
Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights and the repository exists.
A4) Apparently I need to create new SSH keys (see Managing Your SSH Keys). Note again, fully qualify the path, as below, and then select the new key to add to Heroku.
"c:\Program Files (x86)\Git\bin\ssh-keygen.exe" -t rsa
Q5) Try "git push heroku master" the same with the Neo4j test app "flask-py2neo" - errors during the compile. Is this example current?
A5) Remove distribute from requirements.txt.
Any ideas?
I followed the directions outlined in the AWS documentation for creating an Elastic Beanstalk application, however after deploying my application via "eb start" the status was red. I checked the log files and learned that my requirements.txt file had an error in it (I used "=" where I should have used "=="). I fixed my requirements file, checked it into Git, and did a "git aws.push". This did not get my app running, and when the app auto-updated it gave me the same error. I figured an "eb stop" / "eb start" would do the trick (maybe a full manual restart would work?) but that didn't work either. I eventually had to delete my app and recreate it to get the old requirements.txt cleared out so that the new one could be used.
Is this expected behavior? I'm new to AWS Elastic Beanstalk and read through as much doc as I could however I couldn't find any footnotes describing behavior in a scenario like this.
Create a file like this:
# .ebextensions/always-update-pip.config
container_commands:
  keep-pip-up2date:
    command: pip install -r requirements.txt
After you have run git aws.push and the environment has been updated, take a snapshot of your logs. In /var/log/eb-tools.log you should see which pip requirements are being updated or installed and which requirements already exist.