Is it possible to configure an instance of django.test.client.Client to use a specific virtualenv instead of the OS python install? If so, how?
thanks!
Edit:
I'm using django.test.client.Client from a fabric deploy script, not from within Django itself. Fabric is installed in the virtualenv. So I'm doing something like this:
from django.test.client import Client
response = Client().get(url_path)
if response.status_code == 200:
    return response.content
else:
    # handle error
    pass
The test client doesn't know or care at all about virtualenvs or Python versions.
As long as you've activated the virtualenv at the time of running the tests, the version of Python within the virtualenv will be used.
The test client will use whatever environment Django itself is running in. If you load up a virtualenv with Django installed in it, any management commands will use that Django install.
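A quick way to confirm this from inside the deploy script is to print the interpreter and Django paths; both should resolve inside the virtualenv. A minimal sketch:
import sys
import django

# both paths should point into the virtualenv, e.g. .venv/bin/python
# and .venv/lib/pythonX.Y/site-packages/django/__init__.py
print(sys.executable)
print(django.__file__)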
I created a new Flask project in PyCharm, but I can't see how to run flask shell in the integrated PyCharm Python console window.
The name app is not defined when the console starts:
I can still run the flask shell command in the integrated PyCharm terminal, but it starts without code completion, hints, syntax checks, etc.
The Flask app is defined in the terminal:
Is there any way to start flask shell in the integrated PyCharm Python console?
You can use a combination of creating a request context for the shell (which flask shell does) and a custom starting script for the PyCharm console.
# Step 1: acquire a reference to your Flask App instance
import app
app_instance = app.create_app()
# I return the app instance from a factory method here, but it might be `app.app` for basic Flask projects
# Step 2: push test request context so that you can do stuff that needs App, like SQLAlchemy
ctx = app_instance.test_request_context()
ctx.push()
Now you should have the benefits of flask shell alongside the code completion and other features that the IPython shell provides, all from the default PyCharm Python console.
In my case (in my server.py I had app=Flask(__name__)) I used
from server import app
ctx = app.app_context()
ctx.push()
Then I added this under File | Settings | Build, Execution, Deployment | Console | Python Console, in the "Starting script" field.
This also works:
from app import app
ctx = app.test_request_context()
ctx.push()
More on the Flask docs page:
Running a Shell
To run an interactive Python shell you can use the shell command:
flask shell
This will start up an interactive Python shell, set up the correct application context and set up the local variables in the shell. This is done by invoking the Flask.make_shell_context() method of the application. By default you have access to your app and g.
It's just a normal interactive Python shell with some local variables already set up for you, not an IDE editor with code completion, hints, syntax checks, etc.
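If you also want your own objects preloaded the way flask shell preloads app and g, Flask's shell_context_processor hook is the documented place to register them. A minimal sketch:
from flask import Flask

app = Flask(__name__)

@app.shell_context_processor
def make_shell_context():
    # anything returned here is preloaded into `flask shell`;
    # typically you would add your db/session/model objects too
    return {"app": app}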
Now, to run the bot through Django, I first use the python manage.py runserver command and then follow the link to launch a view with my bot. Can you tell me if there is an easier way to start my bot automatically when starting a Django project?
Actually, you can use a management command to run your bot with something like
python manage.py runbot
All Django context, including the DB and settings, will be available.
Reference to management command page:
https://simpleisbetterthancomplex.com/tutorial/2018/08/27/how-to-create-custom-django-management-commands.html
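For reference, a minimal sketch of such a management command; the app name yourapp and the entry point run_my_bot are assumptions, and remember that both the management/ and commands/ directories need an __init__.py:
# yourapp/management/commands/runbot.py
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = "Starts the bot with the full Django context (settings, ORM) available."

    def handle(self, *args, **options):
        # hypothetical entry point for the bot; imported lazily so
        # Django is fully set up before the bot's own imports run
        from yourapp.bot import run_my_bot
        run_my_bot()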
Maybe it's a little late, but you can do the following:
Create a bot.py file in the same folder where manage.py is.
Inside bot.py, make sure you have the following:
import os
import django

# point Django at your settings module before calling setup()
os.environ['DJANGO_SETTINGS_MODULE'] = '{Folder where your settings are}.settings'
django.setup()
To run it, just type python bot.py.
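Once django.setup() has run, the rest of bot.py can use the settings and the ORM as usual; a minimal sketch, with the model import left as a commented placeholder:
# bot.py, continued after django.setup()
from django.conf import settings

def main():
    # settings and the ORM are now usable, e.g.:
    print(settings.DEBUG)
    # from yourapp.models import Subscriber  # hypothetical model
    # print(Subscriber.objects.count())

if __name__ == "__main__":
    main()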
It seems the best way to send emails from the django-allauth app asynchronously is to simply install django-celery-email. But the package warns that
This version requires the following versions: Python 2.7 and Python 3.5; Django 1.11, 2.1, and 2.2; Celery 4.0
I've only been using Python for several months and have never encountered a situation where two Python versions are needed on a project. And I'm using the official recommendation of pipenv for local development. A quick Google shows that it isn't possible to have two Python interpreters installed in one virtual environment. Since the plugin seems so popular, I wondered how others were setting it up? Apologies if I've missed something major that explains this.
A bonus answer would also take into account that I am using Docker, and the Docker image installs the Python packages like this:
RUN pipenv install --system --deploy --ignore-pipfile
Many thanks in advance.
I am pretty sure it is just an inaccurate description in the project docs; you need either Python 2.7 or Python >= 3.5 to be installed, not both.
In the end I didn't use django-celery-email. It's easy to send the emails generated by the django-allauth app without this package.
I used these resources -
https://github.com/anymail/django-anymail/issues/79
https://docs.djangoproject.com/en/2.2/topics/email/#defining-a-custom-email-backend
Basically you do this to get it working.
In settings.py define a CustomEmailBackend -
EMAIL_BACKEND = "users.backends.CustomEmailBackend"
In a backends.py file (users/backends.py here, matching the setting above) define the backend -
from django.core.mail.backends.base import BaseEmailBackend
from .tasks import async_send_messages

class CustomEmailBackend(BaseEmailBackend):
    def send_messages(self, email_messages):
        async_send_messages.delay(email_messages)
        return len(email_messages)
And this is the task -
from django.core.mail import get_connection
from abstract_base_user.celery import app

@app.task(retry_backoff=True, serializer="pickle")
def async_send_messages(email_messages):
    conn = get_connection(backend='anymail.backends.mailgun.EmailBackend')
    conn.send_messages(email_messages)
The celery django app should be set up in the standard way as defined at https://docs.celeryproject.org/en/latest/django/first-steps-with-django.html
And the Celery settings in settings.py need to include the pickle content type -
CELERY_ACCEPT_CONTENT = ['json', 'pickle']
Obviously you need to include your anymail settings and broker settings too. But this should be enough to get anybody started.
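With that wiring in place, ordinary Django mail calls are queued through Celery automatically; a quick sketch using placeholder addresses:
from django.core.mail import send_mail

# routed through EMAIL_BACKEND = "users.backends.CustomEmailBackend",
# which queues the message via async_send_messages.delay(...)
send_mail(
    "Subject",
    "Body text",
    "from@example.com",   # placeholder sender
    ["to@example.com"],   # placeholder recipient
)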
I am developing a web application with React for frontend and Django for backend. I use Webpack to watch for changes and bundle code for React apps.
The problem is that I have to run two commands concurrently, one for React and the other one for Django:
webpack --config webpack.config.js --watch
./manage.py runserver
Is there any way to customize the runserver command to execute the npm script, like npm run start:dev? When you use Node.js as a backend platform, you can do a similar job with npm run build:client && npm run start:server.
If you are already using webpack and Django, you might be interested in webpack-bundle-tracker and django-webpack-loader.
Basically, webpack-bundle-tracker will create a stats.json file each time the bundle is built, and django-webpack-loader reads that stats file so Django always references the latest bundles. This stack lets you separate the concerns between the server and the client.
There are a couple of posts out there explaining this pipeline.
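A rough sketch of the Django side of that setup, assuming the documented WEBPACK_LOADER settings of django-webpack-loader and placeholder paths:
# settings.py sketch; BASE_DIR is the usual constant Django generates in settings.py
import os

WEBPACK_LOADER = {
    "DEFAULT": {
        "BUNDLE_DIR_NAME": "bundles/",  # where webpack emits bundles under your static dir
        "STATS_FILE": os.path.join(BASE_DIR, "webpack-stats.json"),  # written by webpack-bundle-tracker
    }
}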
I'm two and a half years late, but here's a management command that implements the solution that OP wanted, rather than a redirection to another solution. It inherits from the staticfiles runserver and runs webpack concurrently in a thread.
Create this management command at <some_app>/management/commands/my_runserver.py:
import os
import subprocess
import threading

from django.contrib.staticfiles.management.commands.runserver import (
    Command as StaticFilesRunserverCommand,
)
from django.utils.autoreload import DJANGO_AUTORELOAD_ENV

class Command(StaticFilesRunserverCommand):
    """This command removes the need for two terminal windows when running runserver."""

    help = (
        "Starts a lightweight Web server for development and also serves static files. "
        "Also runs a webpack build worker in another thread."
    )

    def add_arguments(self, parser):
        super().add_arguments(parser)
        parser.add_argument(
            "--webpack-command",
            dest="wp_command",
            default="webpack --config webpack.config.js --watch",
            help="This webpack build command will be run in another thread (should probably have --watch).",
        )
        parser.add_argument(
            "--webpack-quiet",
            action="store_true",
            dest="wp_quiet",
            default=False,
            help="Suppress the output of the webpack build command.",
        )

    def run(self, **options):
        """Run the server with webpack in the background."""
        if os.environ.get(DJANGO_AUTORELOAD_ENV) != "true":
            self.stdout.write("Starting webpack build thread.")
            quiet = options["wp_quiet"]
            command = options["wp_command"]
            kwargs = {"shell": True}
            if quiet:
                # if --quiet, suppress webpack command's output:
                kwargs.update({"stdin": subprocess.PIPE, "stdout": subprocess.PIPE})
            wp_thread = threading.Thread(
                target=subprocess.run, args=(command,), kwargs=kwargs
            )
            wp_thread.start()
        super(Command, self).run(**options)
For anyone else trying to write a command that inherits from runserver, note that you need to check for the DJANGO_AUTORELOAD_ENV variable to make sure you don't create a new thread every time Django notices a .py file change. Webpack should be doing its own auto-reloading anyway.
Use the --webpack-command argument to change the webpack command that runs (for example, I use --webpack-command 'vue-cli-service build --watch').
Use --webpack-quiet to disable the command's output, as it can get messy.
If you really want to override the default runserver, rename the file to runserver.py and make sure the app it lives in comes before django.contrib.staticfiles in your settings module's INSTALLED_APPS.
You shouldn't mess with the built-in management commands but you can make your own: https://docs.djangoproject.com/en/1.10/howto/custom-management-commands/.
In your place I'd leave runserver as-is and create a separate command to run your custom (npm in this case) script, e.g. with os.execvp.
In theory you could run two parallel subprocesses, one that would execute for example django.core.management.execute_from_command_line and a second to run your script. But it would make using tools like pdb impossible (which makes work very hard).
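A minimal sketch of that os.execvp suggestion, assuming a hypothetical command name rundev and an npm script called start:dev:
# yourapp/management/commands/rundev.py (name and script are placeholders)
import os
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = "Replaces the current process with the npm dev script."

    def handle(self, *args, **options):
        # os.execvp never returns: the Django process becomes the npm process
        os.execvp("npm", ["npm", "run", "start:dev"])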
The way I do it is to leverage Docker and Docker Compose. Then, when I use docker-compose up -d, my database service, npm scripts, Redis, etc. run in the background (I run runserver separately, but that's another topic).
I'm trying to understand how the virtual environment gets invoked. The website I have been tasked to manage has a .venv directory. When I ssh into the site to work on it, I understand I need to activate it with source .venv/bin/activate. My question is: how does the web application invoke the virtual environment? How do I know it is using the .venv, not the global Python?
More detail: it's a Drupal website with Django kind of glommed onto it. Apache is the main server. I believe Django is served by gunicorn. The designer left town.
Okay, I've found how, in my case, the virtualenv was being invoked for Django.
BASE_DIR/run/gunicorn script has:
#GUNICORN='/usr/bin/gunicorn'
GUNICORN=".venv/bin/gunicorn"
GUNICORN_CONF="$BASE_DIR/run/gconf.py"
.....
$GUNICORN --config $GUNICORN_CONF --daemon --pid $PIDFILE $MODULE
So this takes us into the .venv where the gunicorn script starts with:
#!/media/disk/www/aavso_apps/.venv/bin/python
Voila
Just use the absolute path when calling Python in the virtualenv.
For example, if your virtualenv is located in /var/webapps/yoursite/env,
then you must call it as /var/webapps/yoursite/env/bin/python.
If you run just Django behind a reverse proxy, Django will use whatever Python environment belonged to the user that started the server, i.e. whatever the python command resolved to at launch. If you're using a management tool like Gunicorn, you can specify which environment to use in its configs, although Gunicorn itself requires you to activate the virtual environment (or call the venv's own gunicorn binary) before invoking it.
EDIT:
Since you're using Gunicorn, take a look at this: https://www.digitalocean.com/community/tutorials/how-to-deploy-python-wsgi-apps-using-gunicorn-http-server-behind-nginx
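For completeness, gunicorn config files like the gconf.py referenced in the answer above are plain Python; a minimal sketch with placeholder values, noting that the virtualenv itself is selected by which gunicorn binary you launch:
# gconf.py: a hypothetical minimal gunicorn config
import multiprocessing

bind = "127.0.0.1:8000"
workers = multiprocessing.cpu_count() * 2 + 1
# the interpreter/virtualenv is chosen by which gunicorn binary you launch,
# e.g. .venv/bin/gunicorn, whose shebang points at the venv's python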