Django / Docker / Remote Debug using PyDev

My setup is the following:
- a django server running in a docker with port mapping: 8090:8090
- Eclipse with PyDev
I want to be able to set breakpoints in PyDev (click on a line, then step through the code).
I found several articles, like
http://www.pydev.org/manual_adv_remote_debugger.html
but it's still not working.
1) Should I update manage.py to import pydevd? Which lines do I have to add, and do I have to copy the pysrc directory of the PyDev plugin into the Docker container so that the module import works?
2) Is port forwarding needed? Does the Python instance running inside Docker need access to the remote debug server on the host machine?
3) I found an article about PyCharm and remote debugging over SSH; is something similar possible with PyDev?
4) How do I "link" my local directory to the directory inside the Docker container?
[EDIT] I found the solution:
Copy the eclipse/path_to\pydev\plugins\org.python.pydev\pysrc directory to a place your Docker container can access.
Edit pysrc/pydevd_file_utils.py and add a directory mapping between your host and the Docker container, like:
PATHS_FROM_ECLIPSE_TO_PYTHON = [(r'C:/django', r'/.../lib/django'),
                                (r'C:/workspace/myapp', r'/var/www/myapp')]
You can list several tuples if you have several paths containing Python code.
Edit manage.py and add the following:
sys.path.append('/my_path/to_pysrc_/under_docker/pysrc')
import pydevd
pydevd.settrace(host='172.17.42.1') #IP of your host
In PyDev, go to Preferences > PyDev > Run/Debug and set "Port for remote debugger" to 5678.
In the Debug perspective, click on "Start the Pydev server".
In your Docker container, run: python manage.py runserver 0.0.0.0:8090 --noreload
(replace 8090 with your HTTP port)
In PyDev you will see that execution breaks right after the settrace call!
Now you can add breakpoints and use the PyDev debugger. :) Enjoy!
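For reference, a hedged consolidation of the manage.py change, guarded by an environment variable so the container can still start when no debug server is listening (the REMOTE_DEBUG flag is purely illustrative; the host, port and pysrc path are the values used above):

# manage.py (sketch)
import os
import sys

if os.environ.get('REMOTE_DEBUG') == '1':  # illustrative opt-in flag, not part of the original recipe
    sys.path.append('/my_path/to_pysrc_/under_docker/pysrc')
    import pydevd
    # 5678 matches "Port for remote debugger" in the PyDev preferences
    pydevd.settrace(host='172.17.42.1', port=5678)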

I had a similar issue: a Django project in Docker, connected to from PyCharm 145.1504 and 162.1120 via the Docker interpreter. Running the server works OK, but debugging gets stuck after PyCharm runs
/usr/bin/python2.7 -u /root/.pycharm_helpers/pydev/pydevd.py --multiproc --qt-support --client '0.0.0.0' --port 38324 --file /opt/project/manage.py runserver 0.0.0.0:8000.
I tried to find out why for a few days, then connected PyCharm to Docker over SSH instead, and everything works fine, both run and debug.

OK, from what you wrote I will assume you have a Django Docker container running on your local machine.
From inside your container (e.g. docker-compose exec <container name> bash to get into it):
pip install pydevd
In your code, put a programmatic breakpoint like this:
import pydevd; pydevd.settrace('docker.for.mac.localhost')
If you're not using Docker for Mac, you have to do a bit of work to get the IP of your machine from inside of your container, e.g. see this (a rough sketch follows after these steps).
Go to the Debug perspective and start the PyDev debug server.
Start your application or test.
... and you should see your views for stack, variables, etc., populate as the code stops at the breakpoint.
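On the point above about finding the host IP from inside the container when you are not on Docker for Mac, a rough sketch under the usual Linux assumption that the host is reachable at the container's default gateway (the Docker bridge):

# host_ip.py (sketch): read the default gateway from /proc/net/route inside the container
import socket
import struct

def default_gateway():
    with open('/proc/net/route') as fh:
        for line in fh.readlines()[1:]:
            fields = line.split()
            if fields[1] == '00000000':  # destination 0.0.0.0 -> default route
                return socket.inet_ntoa(struct.pack('<L', int(fields[2], 16)))

print(default_gateway())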
In Python 3.7 there is now a built-in breakpoint() function, which you can point at your favorite debugger using an environment variable (the default is pdb):
breakpoint()
It also takes arguments, so you can do:
breakpoint(host='docker.for.mac.localhost')
I found that a bit annoying to type, so I ended up putting a small module inside an app that looks like this:
# my_app/pydevd.py
import pydevd

def set_trace():
    pydevd.settrace('docker.for.mac.localhost')
I then set the environment variable for the builtin breakpoint (say in your docker-compose.yml):
PYTHONBREAKPOINT=my_app.pydevd.set_trace
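With that variable set, a bare breakpoint() call anywhere in the code is routed to set_trace(); for example, in a view (an illustrative view, not taken from the original answer):

# my_app/views.py (sketch)
from django.http import JsonResponse

def task_list(request):
    breakpoint()  # dispatched to my_app.pydevd.set_trace via PYTHONBREAKPOINT
    return JsonResponse({'tasks': []})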

Related

How to easily debug Redis and Django/Gunicorn when developing using Docker?

I'm not referring to more sophisticated debugging techniques, but to how to get access to the same kind of error messages that are normally directed to terminal tabs.
Basically I'm adopting Docker in a Django project also using Redis.
In the old way of working I opened a Linux terminal tab for Gunicorn like this: gunicorn --reload --bind 0.0.0.0:8001 myapp.wsgi:application
This tab kept running Gunicorn, and any Python error was shown in it, so I could see the problem and fix it.
I could also open a second tab for the Celery worker: celery -A myapp worker --pool=solo -l info
The same thing happened: the tab was occupied by Celery, any Python error in a task was shown there, and I could see the problem and correct the code.
My question is: using Docker, is there a way to make each container direct these same errors, which previously went to the screen, to log files so that I can debug my code when a Python error occurs?
What is the correct way to handle simple debugging during development using Docker containers?
After digging further into the Docker documentation I found a page that solves this problem: View logs for a container or service
Basically, the command docker logs CONTAINER_ID shows on screen exactly what we would see in the terminal running the application.
It works perfectly for seeing Django, Redis and Angular logs.
Just type:
docker logs CONTAINER_ID
Replace CONTAINER_ID with the real ID of the container whose logs you want to see.
To find the id type:
docker ps
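If some Django messages still do not show up in docker logs, one hedged option is a console handler in settings.py so everything goes to the container's output (a minimal sketch assuming standard Django logging; the level is illustrative):

# settings.py (sketch)
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {'class': 'logging.StreamHandler'},  # writes to the container's stderr
    },
    'root': {'handlers': ['console'], 'level': 'INFO'},
}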

How to "dockerize" Flask application?

I have a Flask application named rest.py and I have dockerized it, but it is not running.
#!flask/bin/python
from flask import Flask, jsonify

app = Flask(__name__)

tasks = [
    {
        'id': 1,
        'title': u'Buy groceries',
        'description': u'Milk, Cheese, Pizza, Fruit, Tylenol',
        'done': False
    }
]

@app.route('/tasks', methods=['GET'])
def get_tasks():
    return jsonify({'tasks': tasks})

if __name__ == '__main__':
    app.run(debug=True)
Dockerfile is as follows
FROM ubuntu
RUN apt-get update -y
RUN apt-get install -y python-dev python-pip
COPY . /rest
WORKDIR /rest
RUN pip install -r Req.txt
ENTRYPOINT ["python"]
CMD ["rest.py"]
I have built it using this command...
$ docker build -t flask-sample-one:latest .
...and when I run the container...
$ docker run -d -p 5000:5000 flask-sample-one
...it returns the following output:
7d1ccd4a4471284127a5f4579427dd106df499e15b868f39fa0ebce84c494a42
What am I doing wrong?
The output you get is the container ID. Check with docker ps whether it keeps running.
Use docker logs [container-id] to figure out what's going on inside.
Some problems I can find in your question:
Change the app.run line to app.run(host='0.0.0.0', debug=True). From the point of view of the container, its service needs to be reachable from outside, so it must not listen only on the loopback interface; bind it to 0.0.0.0, just as you would for a publicly available server running directly on a host.
Make sure that Flask gets installed. Your Dockerfile has to contain every command needed to get from a blank Ubuntu installation to a working application.
Please do not forget to deactivate debug if you ever expose this service on your host; debug mode in Flask lets visitors run arbitrary code if they can trigger an exception (it's a feature, not a bug). See the sketch at the end of this answer for one way to toggle it.
After that (and after building the container again [1]), try curl http://127.0.0.1:5000/tasks on the host. Let me know if it works; if not, there are other problems in your setup.
[1] You can improve the prototyping workflow with Flask's built-in reloader (enabled in debug mode) if you use a volume mount in your Docker container for the directory that contains your Python files; this lets you change the script on the host, reload in the browser, and see the result directly.
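On the debug-mode warning above, one hedged option is to read the flag from an environment variable instead of hard-coding it, so the same image runs with debug off unless you ask for it (the FLASK_DEBUG variable name here is only an illustration):

# rest.py (sketch)
import os
from flask import Flask

app = Flask(__name__)

if __name__ == '__main__':
    # debug only when the container is started with -e FLASK_DEBUG=1
    app.run(host='0.0.0.0', debug=os.environ.get('FLASK_DEBUG') == '1')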
I believe you need to strengthen your understanding of how Docker works; then you will be able to achieve your objective of dockerizing whatever application you want.
Here is an article which can give you some first steps.
An official HOWTO will also help you.
Some observations that might help you:
check if your Req.txt contains flask for installation
before dockerizing, check if your application is working
check your running containers with docker ps and see if your container is running
if it is running, test your application: curl http://127.0.0.1:5000/tasks
One more thing:
your JSON has an OBJECT with an ARRAY with just one ELEMENT
Is that what you want for your prototype?
Take a look at this doc about the JSON standard.

Getting ember to run under docker on Windows Quickstart

Working through this tutorial on setting up ember-cli in a Docker container:
http://www.rkblog.rk.edu.pl/w/p/setting-ember-cli-development-environment-ember-21/
Here are my steps:
Created docker-compose.yml in an empty folder on the host machine
Launched Docker Quickstart to get a terminal
Changed to the folder with the .yml
Ran the two docker-compose commands below from the terminal (added -d because without that you get a message that interactive mode is not supported)
Ran docker ps -a to verify that the container was running
Ran docker inspect CONTAINER_ID to find the ip address of the running container
Found the IP address at an odd location (172.17.0.2)
Attempted to access port 4200 on that IP from the browser on the host Windows machine, and also from the Docker command line via curl, but without success.
Ran docker ps -a and found that both containers that had been instantiated had exited.
Now if I try to start the container again it just exits immediately
docker-compose run -d --rm ember init
docker-compose run -d --rm ember server
What am I missing to get up and running? Do I need to open ports on the Default VM running in Virtualbox? How do I diagnose why the container keeps exiting?
First, I would suggest using docker-compose up; that is most likely what you want.
To see the logs for a detached container you can run docker logs <container name>. If there are any errors you'll see them there.
A likely cause of the "container exit" is that the process goes into the background. Docker requires a process to stay in the foreground, but many serve commands background themselves by default. To keep the process in the foreground you can sometimes use a flag like --foreground or --no-daemon, but I'm not sure whether one exists for ember.
If that flag doesn't exist, it's likely that ember server is just checking if stdin/stdout are connected to a tty. By default they are not. You can add these lines to your docker-compose.yml to fix it:
stdin_open: True
tty: True
OK, finally resolved it. The issue with module resolution may have been long-file-name handling on Windows, because after I moved the source folder to the root of the host I was able to get ember serve running under Windows.
Then, from the terminal window, I ran the commands to init and launch the ember server:
docker-compose run -d --rm ember init
docker-compose run -d --rm ember server
Then did:
docker-compose up -d
which launched the containers successfully, and then I was able to access the Ember page served up at the IP:port specified earlier in the comments:
http://192.168.99.100:4200/

AWS: CodeDeploy for a Docker Compose project?

My current objective is to have Travis deploy our Django+Docker-Compose project upon successful merge of a pull request to our Git master branch. I have done some work setting up our AWS CodeDeploy since Travis has builtin support for it. When I got to the AppSpec and actual deployment part, at first I tried to have an AfterInstall script do docker-compose build and then have an ApplicationStart script do docker-compose up. The containers that have images pulled from the web are our PostgreSQL container (named db, image aidanlister/postgres-hstore which is the usual postgres image plus the hstore extension), the Redis container (uses the redis image), and the Selenium container (image selenium/standalone-firefox). The other two containers, web and worker, which are the Django server and Celery worker respectively, use the same Dockerfile to build an image. The main command is:
CMD paver docker_run
which uses a pavement.py file:
from paver.easy import task
from paver.easy import sh

@task
def docker_run():
    migrate()
    collectStatic()
    updateRequirements()
    startServer()

@task
def migrate():
    sh('./manage.py makemigrations --noinput')
    sh('./manage.py migrate --noinput')

@task
def collectStatic():
    sh('./manage.py collectstatic --noinput')

# find any updates to existing packages, install any new packages
@task
def updateRequirements():
    sh('pip install --upgrade -r requirements.txt')

@task
def startServer():
    sh('./manage.py runserver 0.0.0.0:8000')
Here is what I (think I) need to make happen each time a pull request is merged:
Have Travis deploy changes using CodeDeploy, based on deploy section in .travis.yml tailored to our CodeDeploy setup
Start our Docker containers on AWS after successful deployment using our docker-compose.yml
How do I get this second step to happen? I'm pretty sure ECS is actually not what is needed here. My current status right now is that I can get Docker started with sudo service docker start but I cannot get docker-compose up to be successful. Though deployments are reported as "successful", this is only because the docker-compose up command is run in the background in the Validate Service section script. In fact, when I try to do docker-compose up manually when ssh'd into the EC2 instance, I get stuck building one of the containers, right before the CMD paver docker_run part of the Dockerfile.
This took a long time to work out, but I finally figured out a way to deploy a Django+Docker-Compose project with CodeDeploy without Docker-Machine or ECS.
One thing that was important was to make an alternate docker-compose.yml that excluded the selenium container; all it did was cause problems, and it was only useful for local testing. In addition, it was important to choose an instance type that could handle building containers. The reason containers couldn't be built from our Dockerfile was that the instance simply did not have the memory to complete the build. Instead of a t1.micro instance, an m3.medium is what worked. It is also important to have sufficient disk space; 8GB is far too small. To be safe, 256GB would be ideal.
It is important to have an AfterInstall script run service docker start when doing the necessary Docker installation and setup (including installing Docker-Compose). This explicitly starts the Docker daemon; without this command, you will get the error Could not connect to Docker daemon. When installing Docker-Compose, it is important to place it in /opt/bin/ so that the binary is used via /opt/bin/docker-compose. There are problems with placing it in /usr/local/bin (I don't remember exactly what problems, but they are related to the particular Linux distribution of the Amazon Linux AMI). The AfterInstall script needs to be run as root (runas: root in the appspec.yml AfterInstall section).
Additionally, the final phase of deployment, which is starting up the containers with docker-compose up (more specifically /opt/bin/docker-compose -f docker-compose-aws.yml up), needs to be run in the background with stdin and stdout redirected to /dev/null:
/opt/bin/docker-compose -f docker-compose-aws.yml up -d > /dev/null 2> /dev/null < /dev/null &
Otherwise, once the server is started, the deployment will hang because the final script command (in the ApplicationStart section of my appspec.yml in my case) doesn't exit. This will probably result in a deployment failure after the default deployment timeout of 1 hour.
If all goes well, then the site can finally be accessed at the instance's public DNS and port in your browser.

How to run a deploy command on remote host from PyCharm?

I am looking for a way to simplify remote deployment of a django application directly from PyCharm.
Even though deploying the files themselves works just fine with the remote host and upload settings, I was not able to find a way to run additional commands on the server side (like manage.py syncdb).
I am looking for a fully automated solution, one that would work at single click (or command).
I don't know much about PyCharm so maybe you could do something from the IDE, but I think you'll probably want to take a look at the fabric project (http://docs.fabfile.org/en/1.0.1/index.html)
It's a python deployment automation tool that's pretty great.
Here is one of my Fabric script files. Note that I make a lot of assumptions (this is my own setup that I use) that completely depend on how you want to set up your project, such as using virtualenv, pip, and South, as well as my own personal preferences for how to deploy and where to deploy to.
You'll likely want to rework or simplify it to meet your needs.
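As a rough illustration of the idea only (not the author's actual file; the host, paths and commands are made up), a minimal fabfile against the Fabric 1.x API linked above might look like this:

# fabfile.py (sketch, Fabric 1.x; host and paths are placeholders)
from fabric.api import cd, env, run

env.hosts = ['deploy@example.com']

def deploy():
    with cd('/srv/myproject'):
        run('git pull')
        run('pip install -r requirements.txt')
        run('./manage.py syncdb --noinput')
        run('./manage.py collectstatic --noinput')
        # restart however the app server is managed, e.g. for mod_wsgi:
        run('touch myproject/wsgi.py')

It would then be invoked from the project directory with fab deploy.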
You can use File > Settings > Tools > External Tools to run arbitrary external executables. You can write a small script that connects over SSH and issues a [set of] command(s); the configured tool can then be run from the IDE.
For example, in my Tornado-based project I run the instances using supervisord, which, according to an answer here, cannot restart upon code changes.
I ended up writing a small tool based on paramiko that connects via SSH and runs supervisorctl restart. The code is below:
import paramiko
from optparse import OptionParser

parser = OptionParser()
parser.add_option("-s",
                  action="store",
                  dest="server",
                  help="server where to execute the command")
parser.add_option("-u",
                  action="store",
                  dest="username")
parser.add_option("-p",
                  action="store",
                  dest="password")
(options, args) = parser.parse_args()

client = paramiko.SSHClient()
client.load_system_host_keys()
client.connect(hostname=options.server, port=22, username=options.username, password=options.password)

command = "supervisorctl reload"
(stdin, stdout, stderr) = client.exec_command(command)
for line in stdout.readlines():
    print line
client.close()
External Tool configuration in PyCharm:
program: <PYTHON_INTERPRETER>
parameters: <PATH_TO_SCRIPT> -s <SERVERNAME> -u <USERNAME> -p <PASSWORD>