Set up django-celery-email for local dev

It seems the best way to send emails from the django-allauth app asynchronously is to simply install django-celery-email. But the package warns that:
"This version requires the following versions: Python 2.7 and Python 3.5, Django 1.11, 2.1, and 2.2, Celery 4.0"
I've only been using Python for several months and have never encountered a situation where two Python versions are needed on one project. I'm using the official recommendation of pipenv for local development, and a quick Google shows that it isn't possible to have two Python interpreters installed in the same virtual environment. Since the package seems so popular, I wondered how others were setting it up? Apologies if I've missed something major that explains this.
A bonus answer would also take into account that I am using Docker, and the Docker image will install the Python packages like this:
RUN pipenv install --system --deploy --ignore-pipfile
Many thanks in advance.

I am pretty sure it is just an inaccurate description in the project docs, so you need either Python 2.7 or Python >= 3.5 to be installed, not both.
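For example, a pipenv-managed project only ever pins a single interpreter in its Pipfile; a minimal sketch (the Python version and package list here are placeholders, not taken from the question):
[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true

[packages]
django = "*"
django-celery-email = "*"

[requires]
python_version = "3.7"   # one interpreter only; 2.7 would satisfy the docs just as well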

In the end I didn't use django-celery-email. It's easy to send the emails generated by the django-allauth app without this package.
I used these resources -
https://github.com/anymail/django-anymail/issues/79
https://docs.djangoproject.com/en/2.2/topics/email/#defining-a-custom-email-backend
Basically you do this to get it working.
In settings.py point EMAIL_BACKEND at the custom backend -
EMAIL_BACKEND = "users.backends.CustomEmailBackend"
In users/backends.py (to match the users.backends path in the setting above) define the backend -
from django.core.mail.backends.base import BaseEmailBackend

from .tasks import async_send_messages


class CustomEmailBackend(BaseEmailBackend):
    def send_messages(self, email_messages):
        # Hand the messages off to the Celery task instead of sending them synchronously
        async_send_messages.delay(email_messages)
        return len(email_messages)
And this is the task -
from django.core.mail import get_connection

from abstract_base_user.celery import app


@app.task(retry_backoff=True, serializer="pickle")
def async_send_messages(email_messages):
    # Open a connection to the real (anymail/Mailgun) backend inside the worker and send
    conn = get_connection(backend='anymail.backends.mailgun.EmailBackend')
    conn.send_messages(email_messages)
The Celery app for the Django project should be set up in the standard way, as described at https://docs.celeryproject.org/en/latest/django/first-steps-with-django.html
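For reference, a minimal sketch of that standard setup, assuming the project package is called abstract_base_user (matching the import in the task above):
# abstract_base_user/celery.py -- follows the first-steps guide linked above
import os

from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'abstract_base_user.settings')

app = Celery('abstract_base_user')
# Read CELERY_* settings from the Django settings module
app.config_from_object('django.conf:settings', namespace='CELERY')
# Discover tasks.py modules in installed apps (including users/tasks.py above)
app.autodiscover_tasks()
The guide also imports this app in the package's __init__.py so it is loaded whenever Django starts.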
And the Celery settings in settings.py need to include the pickle content type -
CELERY_ACCEPT_CONTENT = ['json', 'pickle']
Obviously you need to include your anymail settings and broker settings too. But this should be enough to get anybody started.
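Those remaining settings look roughly like this; the broker URL, domain, and key below are placeholders, not values from the original setup:
# settings.py (sketch)
CELERY_BROKER_URL = "redis://localhost:6379/0"   # whatever broker you actually run

ANYMAIL = {
    "MAILGUN_API_KEY": "<your-mailgun-api-key>",
    "MAILGUN_SENDER_DOMAIN": "mg.example.com",
}
DEFAULT_FROM_EMAIL = "noreply@example.com"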

Related

What changes are needed to my django app when deploying to pythonanywhere? error points to nowhere

Deploying my Django website, which uses S3 as storage and runs fine locally, to PythonAnywhere gives a strange error I can't google a solution for:
"TypeError: a bytes-like object is required, not 'str'"
What am I doing wrong?
I've tried moving my environment variables (AWS keys, SECRET_KEY, etc.) out of settings.env and setting them directly in my settings.py, plus every suggestion I could find, but it's still the same :(
Here's my /var/www/username_pythonanywhere_com_wsgi.py:
# +++++++++++ DJANGO +++++++++++
# To use your own Django app use code like this:
import os
import sys

from dotenv import load_dotenv

project_folder = os.path.expanduser('~/portfolio_pa/WEB')  # adjust as appropriate
load_dotenv(os.path.join(project_folder, 'settings.env'))

# assuming your Django settings file is at '/home/myusername/mysite/mysite/settings.py'
path = '/home/corebots/portfolio_pa'
if path not in sys.path:
    sys.path.insert(0, path)

os.environ['DJANGO_SETTINGS_MODULE'] = 'WEB.settings'

## Uncomment the lines below depending on your Django version
###### then, for Django >=1.5:
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()

###### or, for older Django <=1.4
#import django.core.handlers.wsgi
#application = django.core.handlers.wsgi.WSGIHandler()
I'd expect the website to run fine just like it does locally.
The boto library doesn't have good Python 3 support. This particular issue is known in the boto bug tracker: https://github.com/boto/boto/issues/3837
The best way of fixing this is to use boto3, which has decent Python 3 support and is generally the best-supported AWS SDK for Python.
The reason it works on your local machine but not in production is that the PythonAnywhere setup seems to be using a proxy, which triggers the incompatible boto code. See the actual calling code: https://github.com/boto/boto/blob/master/boto/connection.py#L747
Your error traceback confirms this.
Unfortunately, I'm not familiar with django-photologue, but a brief look doesn't suggest that it strongly depends on boto3. Maybe I'm wrong.
I still think the best way is to go with boto3. As a backup strategy you can fork boto with a fix for this issue and install that instead of the official one from PyPI: https://github.com/boto/boto/pull/3699
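If the S3 integration goes through django-storages (an assumption here, not something stated in the question), switching to the boto3-based backend is mostly a settings change; a rough sketch with placeholder bucket and region:
# settings.py (sketch) - needs: pip install boto3 django-storages
import os

AWS_ACCESS_KEY_ID = os.environ['AWS_ACCESS_KEY_ID']
AWS_SECRET_ACCESS_KEY = os.environ['AWS_SECRET_ACCESS_KEY']
AWS_STORAGE_BUCKET_NAME = 'my-bucket'      # placeholder
AWS_S3_REGION_NAME = 'eu-west-1'           # placeholder

# Use the boto3-based backend from django-storages instead of the old boto one
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'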

ImportError when using Haystack 2.0.0 with Django 1.5 and Gunicorn WSGI

I use django-haystack 2.0.0 to index my site, and it has been working great until I upgraded to Django 1.5 and started using the WSGI interface. If I just use the django_gunicorn command it works great, but the Django documentation "highly recommends" I use the gunicorn command.
When I start my site with the gunicorn command, Haystack throws the following error on any page load:
ImportError: cannot import name signals
I have no problems importing signals from the Django or Python shells. I use virtualenv and install all packages locally inside that environment. My wsgi.py file looks just like the default one generated by Django, except that I add the local path to the Python path like so:
path = os.sep.join(os.path.abspath(__file__).split(os.sep)[:-2])
if path not in sys.path:
    sys.path.append(path)
Any help you could provide would be very appreciated, thank you!
I don't use gunicorn, but I had the same problem when I used the HAYSTACK_SIGNAL_PROCESSOR setting to point to a custom class that I wrote. That class imported one of my models, which eventually propagated up the import chain to import my settings module, thus causing a circular import.
When using a setting such as HAYSTACK_SIGNAL_PROCESSOR that points to a class, make sure that class stands alone and doesn't import, either directly or indirectly, the Django settings file.
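For illustration (this is not from the original answer), one way to keep such a class self-contained is to resolve the model lazily instead of importing it at module level; a rough sketch where the module path, app label, and model name are made up:
# myproject/search_signals.py (hypothetical module)
from django.db.models import signals

from haystack.signals import BaseSignalProcessor


class NoteSignalProcessor(BaseSignalProcessor):
    def setup(self):
        # Resolve the model lazily so importing this module never drags in the
        # project's models (and, indirectly, the settings module).
        # Django 1.5: django.db.models.get_model; Django >= 1.7: django.apps.apps.get_model
        from django.db.models import get_model
        Note = get_model('notes', 'Note')  # hypothetical app label / model
        signals.post_save.connect(self.handle_save, sender=Note)
        signals.post_delete.connect(self.handle_delete, sender=Note)

    def teardown(self):
        from django.db.models import get_model
        Note = get_model('notes', 'Note')
        signals.post_save.disconnect(self.handle_save, sender=Note)
        signals.post_delete.disconnect(self.handle_delete, sender=Note)
The corresponding setting would then be HAYSTACK_SIGNAL_PROCESSOR = 'myproject.search_signals.NoteSignalProcessor' (again, a hypothetical path).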

Openshift: Installing Django 1.5 causes Server 500 error

I have created a Django 1.3 application on OpenShift.
I wanted to upgrade to Django 1.5, so I updated setup.py to install Django 1.5:
#!/usr/bin/env python
from setuptools import setup

setup(
    name='<Application name>',
    version='1.0',
    description='',
    author='',
    author_email='',
    url='http://www.python.org/sigs/distutils-sig/',
    install_requires=['Django>=1.5'],
)
The server returns HTTP 500.
If setup.py has install_requires=['Django<=1.4'] it works fine.
How can I install Django 1.5 on OpenShift?
Update: I can see a GitHub commit where the install_requires for Django was changed from >=1.3 to <=1.4 to handle this same issue. But I still cannot figure out what caused that server 500, or how Django 1.5 can be installed on OpenShift.
It might come from your code. Did you check the backwards incompatibilities mentioned in the release notes (mainly ALLOWED_HOSTS now being required in your settings.py)?
It could also come from the {% url %} tag syntax change, see here.
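For reference, on Django 1.5 ALLOWED_HOSTS must list the domains the site serves once DEBUG is off, and the view name in {% url %} must now be quoted, e.g. {% url 'my_view' %} instead of {% url my_view %}. A settings sketch with a placeholder OpenShift hostname:
# settings.py - required on Django 1.5 when DEBUG = False
ALLOWED_HOSTS = ['myapp-myname.rhcloud.com']  # placeholder hostname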
When I installed a Django app on OpenShift, the Django version was 1.5.1. I think OpenShift installs the latest Django version, because the condition Django >= 1.4 means anything no lower than that version.
Here is a screenshot from when I installed the app.
I had the same issue. From your screenshot it looks like you're using Python 2.6?
Try Python 2.7 with this configuration in the application file:
#!/usr/bin/env python
import os
import sys

sys.path.append(os.path.join(os.environ['OPENSHIFT_REPO_DIR']))
os.environ['DJANGO_SETTINGS_MODULE'] = 'mywebsite.settings'

# Activate the gear's virtualenv so its packages are importable
virtenv = os.environ['OPENSHIFT_HOMEDIR'] + 'python/virtenv/'
os.environ['PYTHON_EGG_CACHE'] = os.path.join(virtenv, 'lib/python2.7/site-packages')
virtualenv = os.path.join(virtenv, 'bin/activate_this.py')
try:
    execfile(virtualenv, dict(__file__=virtualenv))
except IOError:
    pass
#
# IMPORTANT: Put any additional includes below this line. If placed above this
# line, it's possible required libraries won't be in your searchable path
#
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
And, as referred to by @Charles L, try setting ALLOWED_HOSTS in your settings.

Can Django's test client use a specific virtualenv?

Is it possible to configure an instance of django.test.client.Client to use a specific virtualenv instead of the OS python install? If so, how?
thanks!
Edit:
I'm using django.test.client.Client from a fabric deploy script, not from within Django itself. Fabric is installed in the virtualenv. So I'm doing something like this:
from django.test.client import Client

response = Client().get(url_path)
if response.status_code == 200:
    return response.content
else:
    # handle error
    pass
The test client doesn't know or care at all about virtualenvs or Python versions.
As long as you've activated the virtualenv at the time of running the tests, the version of Python within the virtualenv will be used.
The test client will use whatever environment Django itself is running in. If you load up a virtualenv with Django installed in it, any management commands will use that Django install.
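In other words, whichever interpreter runs the Fabric script is the one the test client uses. A rough sketch of a standalone script doing this on a recent Django version (the settings module name is a placeholder; on the older versions from this era, setting DJANGO_SETTINGS_MODULE alone was enough):
# deploy_check.py (sketch) - run with the virtualenv's python interpreter
import os

import django
from django.test.client import Client

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')  # placeholder
django.setup()  # required on Django >= 1.7 before using the test client standalone


def check_page(url_path):
    response = Client().get(url_path)
    if response.status_code != 200:
        raise RuntimeError('unexpected status %s for %s' % (response.status_code, url_path))
    return response.content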

Installing Celery with Django

I'm trying to install celery with django and I get the following error:
from celery.task import sets
ImportError: cannot import name sets
How can I fix that?
Installing a newer version of celery might work, or your install might not be complete.
But sets should be a part of celery.task.
At least, the latest docs indicate that this should be the case: http://ask.github.com/celery/reference/celery.task.sets.html
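A quick diagnostic (not a fix) to confirm which celery version is actually importable in that environment:
import celery
print(celery.__version__)      # confirm which version is on the path

from celery.task import sets   # reproduces the ImportError if the install is broken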
This looks like an issue that has already been reported: https://github.com/ask/celery/issues#issue/315
You might be interested in these two pages of the #celery IRC log:
http://botland.oebfare.com/logger/celery/2011/3/4/2/
http://botland.oebfare.com/logger/celery/2011/3/4/3/