Problem
I have CircleCI continuous integration set up for my Django application. I would like to use a standalone Chrome Selenium node container to run my UI tests. The following setup works locally:
Launch django server in the background:
python manage.py migrate && python manage.py runserver 0.0.0.0:8081 &
Run the webdriver container:
docker run --net='host' --name selenium -d -p 4444:4444 selenium/standalone-chrome
Run test:
from fabric.operations import local
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

@given('I have a web browser')
def browser():
    return webdriver.Remote(
        command_executor='http://127.0.0.1:4444/wd/hub',
        desired_capabilities=DesiredCapabilities.CHROME)

@when('I open the main page')
def view_main(browser):
    assert "backend.fail" in local("curl http://localhost:8081/", capture=True)
    browser.maximize_window()
    browser.get("http://localhost:8081/")
    return browser
curl http://localhost:8081/ runs in the context of the CircleCI shell and succeeds, whereas browser.get("http://localhost:8081/") runs from the Docker container hosting the webdriver and fails.
Question
How can I make my docker container see my django server on localhost:8081 on CircleCI?
Research
I read in the Docker documentation that --net=host puts the container on the same network stack as the host, and it works locally in a Vagrant VM.
I have looked at this question which explores communication between multiple Docker containers and this question which handles a general setup of Docker tests on CircleCI, but neither addresses the visibility of the host from a Docker container.
Answer:
After some digging and using CircleCI's excellent ssh debugging tool, it turned out that the reason for the failure was not --net='host' but rather the selenium container failing to start.
It failed silently because:
unable to start xvfb
unable to start xvfb
unable to start xvfb
unable to start xvfb
Then I googled "selenium circleci" and it turned out that CircleCI has a native xvfb running on :99 and chromedriver preinstalled:
CircleCI runs graphical programs in a virtual framebuffer, using xvfb.
This means programs like Selenium, Capybara, Jasmine, and other
testing tools which require a browser will work perfectly, just like
they do when you use them locally.
I changed my test to:
@given('I have a web browser')
def browser():
    try:
        # Use the standalone Selenium container when it is reachable...
        return webdriver.Remote(
            command_executor='http://127.0.0.1:4444/wd/hub',
            desired_capabilities=DesiredCapabilities.CHROME)
    except Exception:
        # ...otherwise fall back to CircleCI's preinstalled chromedriver.
        return webdriver.Chrome()
And now my tests pass.
Related
I am developing a web application with React for frontend and Django for backend. I use Webpack to watch for changes and bundle code for React apps.
The problem is that I have to run two commands concurrently, one for React and the other one for Django:
webpack --config webpack.config.js --watch
./manage.py runserver
Is there any way to customize the runserver command to execute the npm script, like npm run start:dev? When you use Node.js as a backend platform, you can do a similar job with npm run build:client && npm run start:server.
If you are already using webpack and Django, you might be interested in webpack-bundle-tracker and django-webpack-loader.
Basically, webpack-bundle-tracker creates a stats.json file each time the bundle is built, and django-webpack-loader reads that stats.json file so Django always serves the latest bundle. This stack lets you keep the server and client concerns separate.
There are a couple of posts out there explaining this pipeline.
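For reference, the Django side of that setup is just a small settings block. Here is a minimal sketch, assuming webpack-bundle-tracker is configured to write webpack-stats.json to the project root; the paths and bundle directory are placeholders, not a prescription:

# settings.py -- minimal django-webpack-loader wiring (a sketch; paths are assumptions)
import os

BASE_DIR = os.path.dirname(os.path.abspath(__file__))

INSTALLED_APPS = [
    # ...
    "webpack_loader",
]

WEBPACK_LOADER = {
    "DEFAULT": {
        # sub-folder of your static dirs where webpack emits its bundles
        "BUNDLE_DIR_NAME": "bundles/",
        # file written by webpack-bundle-tracker on every (re)build
        "STATS_FILE": os.path.join(BASE_DIR, "webpack-stats.json"),
    }
}

Templates then pull in the current bundle with {% load render_bundle from webpack_loader %} and {% render_bundle 'main' %}, so --watch rebuilds are picked up without restarting runserver.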
I'm two and a half years late, but here's a management command that implements the solution that OP wanted, rather than a redirection to another solution. It inherits from the staticfiles runserver and runs webpack concurrently in a thread.
Create this management command at <some_app>/management/commands/my_runserver.py:
import os
import subprocess
import threading

from django.contrib.staticfiles.management.commands.runserver import (
    Command as StaticFilesRunserverCommand,
)
from django.utils.autoreload import DJANGO_AUTORELOAD_ENV


class Command(StaticFilesRunserverCommand):
    """This command removes the need for two terminal windows when running runserver."""

    help = (
        "Starts a lightweight Web server for development and also serves static files. "
        "Also runs a webpack build worker in another thread."
    )

    def add_arguments(self, parser):
        super().add_arguments(parser)
        parser.add_argument(
            "--webpack-command",
            dest="wp_command",
            default="webpack --config webpack.config.js --watch",
            help="This webpack build command will be run in another thread (should probably have --watch).",
        )
        parser.add_argument(
            "--webpack-quiet",
            action="store_true",
            dest="wp_quiet",
            default=False,
            help="Suppress the output of the webpack build command.",
        )

    def run(self, **options):
        """Run the server with webpack in the background."""
        if os.environ.get(DJANGO_AUTORELOAD_ENV) != "true":
            self.stdout.write("Starting webpack build thread.")
            quiet = options["wp_quiet"]
            command = options["wp_command"]
            kwargs = {"shell": True}
            if quiet:
                # if --quiet, suppress webpack command's output:
                kwargs.update({"stdin": subprocess.PIPE, "stdout": subprocess.PIPE})
            wp_thread = threading.Thread(
                target=subprocess.run, args=(command,), kwargs=kwargs
            )
            wp_thread.start()
        super(Command, self).run(**options)
For anyone else trying to write a command that inherits from runserver, note that you need to check for the DJANGO_AUTORELOAD_ENV variable to make sure you don't create a new thread every time Django notices a .py file change. Webpack should be doing its own auto-reloading anyway.
Use the --webpack-command argument to change the webpack command that runs (for example, I use --webpack-command 'vue-cli-service build --watch').
Use --webpack-quiet to disable the command's output, as it can get messy.
If you really want to override the default runserver, rename the file to runserver.py and make sure the app it lives in comes before django.contrib.staticfiles in your settings module's INSTALLED_APPS.
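To illustrate that ordering requirement (the app name here is a placeholder), the settings would look roughly like:

# settings.py -- the app providing the custom runserver must come before staticfiles
INSTALLED_APPS = [
    "my_app",  # contains management/commands/runserver.py
    # ...
    "django.contrib.staticfiles",
]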
You shouldn't mess with the built-in management commands but you can make your own: https://docs.djangoproject.com/en/1.10/howto/custom-management-commands/.
In your place I'd leave runserver alone and create a separate command to run your custom (npm in this case) script, e.g. with os.execvp.
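A rough sketch of what such a command could look like; the command name, module path and npm script are assumptions for illustration:

# <some_app>/management/commands/frontend.py -- a sketch of the os.execvp approach
import os

from django.core.management.base import BaseCommand


class Command(BaseCommand):
    help = "Replace this process with the frontend build script (npm run start:dev)."

    def handle(self, *args, **options):
        # execvp never returns: the manage.py process becomes the npm process
        os.execvp("npm", ["npm", "run", "start:dev"])

You would still run the two commands separately (e.g. ./manage.py runserver and ./manage.py frontend), which is what keeps tools like pdb usable in the runserver process.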
In theory you could run two parallel subprocesses, one executing for example django.core.management.execute_from_command_line and a second running your script. But it would make using tools like pdb impossible (which makes work very hard).
The way I do it is to leverage Docker and Docker Compose. When I run docker-compose up -d, my database service, npm scripts, redis, etc. run in the background (I run runserver separately, but that's another topic).
I am building a Python+Django development environment using Docker. I defined Dockerfile files and services in docker-compose.yml for the web server (nginx) and database (postgres) containers, plus a container that runs our app using uwsgi. Since this is a dev environment, I am mounting the app code from the host system, so I can easily edit it in my IDE.
The question I have is where/how to run migrate command.
In case you don't know Django, the migrate command creates the database structure and later changes it as needed by the project. I have seen people run migrate as part of the compose command directive (command: python manage.py migrate && uwsgi --ini app.ini), but I do not want migrations to run on every container restart. I only want it to run once when I create the containers and never run again unless I rebuild.
Where/how would I do that?
Edit: there is now an open issue with the compose team. With any luck, one-time command containers will get supported by Compose. https://github.com/docker/compose/issues/1896
You cannot use RUN because, as you mentioned in the comments, your source is mounted while the container is running.
You cannot use CMD either, since you don't want it to run every time you restart the container.
I recommend running docker exec manually after starting the container. I do not think there is a way to automate this inside a Dockerfile or docker-compose because of the two reasons given above.
It sounds like what you need is a tool for managing project tasks. dobi is a tool designed to handle these tasks (disclaimer: I am the author of this tool).
You can see an example of how to run a migration here: https://github.com/dnephin/dobi/tree/master/examples/init-db-with-rails. The example uses rails, but it's basically the same idea as django.
You could set up a task called migrate which would run the command in a container and write the data to a volume. Then when you start your docker-compose containers, use that volume as the source for your database service.
https://github.com/docker/compose/issues/1896 is finally resolved by the new service profiles introduced with docker-compose 1.28.0. With profiles you can mark services to be started only in specific profiles:
services:
  nginx:
    # ...
  postgres:
    # ...
  uwsgi:
    # ...
  migrations:
    profiles: ["cli-only"]  # profile name chosen freely
    # ...

docker-compose up               # start only your app services, no migrations
docker-compose run migrations   # run migrations on-demand
docker exec -it container-name bash
Then you will be inside the container and can run any command you would normally run when developing without Docker.
I'm trying to streamline a deployment process to webfaction.com for my Django application. I have a master (working copy) branch and a development branch.
Currently I'm doing the following:
Make changes to my development branch in my local dev environment
When the changes are working, test with the local dev server, then merge into my master branch
git push so the code is in my remote repo (this has other issues, such as passwords and keys, which I've not quite solved yet; also, I don't believe it's possible to scp code to WebFaction, and I'm not really a fan of any of the FTP services I've used so far)
SSH into my webfaction server and do a git pull and git merge
Test to see if everything is still working (it never is)
Make any changes required to get everything working again
Commit any changes I've had to make to fix everything, then push back to the remote repo
Go back to my development environment and sync the code up with the production code
Rinse and repeat for the next feature
Obviously I've missed the efficient-development train; for the record, I've only been working with Django for a couple of months as a hobby project.
Can anyone suggest a django deployment process that would be more conducive to sane development?
I would strongly suggest Fabric to handle your deployments to WebFaction:
http://docs.fabfile.org/en/1.11/tutorial.html
By using Fabric you can deploy code and do other server side operations from your local terminal with no need to manually ssh to the server. First install Fabric:
pip install Fabric
Create fabfile.py in your project root folder. Here is an example fabfile that can get you started:
from fabric.api import task, env, run, cd
from fabric.context_managers import prefix

env.hosts = ('wf_username@wf_username.webfactional.com',)
env.forward_agent = True

MANAGEPY = '~/webapps/my_project/code/my_project/manage.py'
PY = '~/webapps/my_project/env/bin/python2.7'

@task
def deploy():
    with cd('~/webapps/my_project/code/'):
        with prefix('source production'):
            run('git pull --rebase origin master')
            run('pip install -r requirements.txt')
            run('{} {} migrate'.format(PY, MANAGEPY))
            run('{} {} collectstatic --noinput'.format(PY, MANAGEPY))
            run('touch my_project/my_project/wsgi.py')
You can run the fab task from your terminal with:
fab deploy
In my opinion, making code changes directly on the server is bad practice. Fabric can improve your development flow so that you make code edits only locally, then quickly deploy and test them.
The best and shortest way
In settings.py:
try:
    from production_settings import *
except ImportError as e:
    pass
You can override whatever is needed in production_settings.py; it should stay out of your version control, and you can still use git effectively.
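For illustration, production_settings.py might contain overrides along these lines; the values below are placeholders, not a prescription:

# production_settings.py -- kept out of version control; all values are placeholders
DEBUG = False
ALLOWED_HOSTS = ["myproject.example.com"]

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "myproject",
        "USER": "myproject_user",
        "PASSWORD": "change-me",
        "HOST": "localhost",
        "PORT": "5432",
    }
}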
I have a Django site running in Docker containers, which uses docker-compose to manage the various containers (database, nginx, etc.). There are a few Django tasks that I use for site maintenance via the Django manage.py command. The commands take the form of:
manage.py updateflickr --settings=mysite.myproj.prod
Running under docker-compose, they look like:
docker-compose run --rm app manage.py updateflickr --settings=mysite.myproj.prod
My problem is that when I try to run these same commands using Fabric, it appears that the settings file I am specifying is not being used. Django is returning database connection errors, which typically means that it is not getting the correct database information, or in this case the connection specified in mysite.myproj.prod.
My Fabric file looks like:
import os
from fabric.api import *

env.hosts = ['myserver.com']
env.user = "myuser"
env.key_filename = '~/.ssh/do_rsa'
env.shell = '/bin/bash -c'

@task
def updateflickr():
    run('docker-compose run --rm app python manage.py updateflickr --settings=mysite.myproj.prod')
I have also experimented with setting the DJANGO_SETTINGS_MODULE environment variable in my docker-compose.yml but am getting the same results. Finally, the last thing I tried was wrapping the command in a shell script. Same results: if I run it on the server, it runs fine; if I run the shell script from Fabric, I get database connection issues.
UPDATE
I am not so sure this is a question about Fabric so much as a question about how docker-compose runs. If I try the following:
ssh -t me@myserver.com 'docker-compose run --rm app python manage.py updateflickr --settings=mysite.myproj.prod'
I still get the same results. There must be something different about loading up an interactive shell versus just sending a command. I have tried ssh with and without the -t flag, because docker-compose might need a pty active.
We use Jenkins as our continuous integration system. We have two Django servers validated by Jenkins.
Jenkins successfully validates the first server. The second server depends on the first one, so at the end of the first server's validation we would like to launch the first server itself.
We are using Python, virtualenv and Django, and defined the Virtualenv Builder as follows:
pip install -r requirements.txt
rm -f .coverage
fab localhost test
coverage xml
nohup python manage.py runserver 9090 &
The issue is that the build never ends due to the nohup.
How can I launch the server after a successful build?
I had the same problem.
Ken,
I tried using Fabric, but again python manage.py runserver runs continuously, so the next command never starts.
And just a few minutes ago my colleague showed me how to use nohup together with Jenkins' BUILD_ID variable; like this, the build gets a Success and the Django server keeps running:
BUILD_ID=dontKillMe nohup python manage.py runserver host_server &
This worked for our Django project testing.
Since you are using fabric to test, I would recommend defining another fabric task, say, deploy, which you could call assuming the build succeeds.
Much like the call to fab completes for a successful build so that you reach the nohup line, I would expect the deploy task to return as well.
You may also want to consider making the server a service (either via an /etc/init.d-style script, or Upstart if you're on Ubuntu), and have the Fabric task stop the currently running one, copy over whatever new files it needs (or a similar process), and then restart it.
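As a rough sketch of that kind of task, assuming a Fabric 1.x fabfile like the one you already use for fab localhost test, and an init.d/Upstart-managed service; the service name, path and branch are assumptions:

# fabfile.py -- a sketch only; service name, paths and branch are assumptions
from fabric.api import task, run, sudo, cd

@task
def deploy():
    sudo("service myproject stop")              # stop the currently running server
    with cd("/srv/myproject"):
        run("git pull --rebase origin master")  # bring over the new code
        run("pip install -r requirements.txt")
    sudo("service myproject start")             # restart it under the service manager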
Assuming what you have above is a bash script or similar, you may also want to add set -e so that, in case any of the commands returns a non-success code, the script will fail and, in turn, fail the build.