I'm trying to use Django 1.1 on GAE, but when I uncomment
use_library('django', '1.1')
in this script
import os
os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'
from google.appengine.dist import use_library
#use_library('django', '1.1')
# Google App Engine imports.
from google.appengine.ext.webapp import util
# Force Django to reload its settings.
from django.conf import settings
settings._target = None
import django.core.handlers.wsgi
import django.core.signals
import django.db
import django.dispatch.dispatcher
# Unregister the rollback event handler.
django.dispatch.dispatcher.disconnect(
    django.db._rollback_on_exception,
    django.core.signals.got_request_exception)
def main():
    # Create a Django application for WSGI.
    application = django.core.handlers.wsgi.WSGIHandler()
    # Run the WSGI CGI handler with that application.
    util.run_wsgi_app(application)

if __name__ == "__main__":
    main()
I receive
AttributeError: 'module' object has no attribute 'disconnect'
What is going on?
From http://justinlilly.com/blog/2009/feb/06/django-app-engine-doc-fix/
For those setting up Django on Google App Engine on a version after the signals refactor, the following fix is needed for the code supplied by Google.
# Log errors.
django.dispatch.dispatcher.connect(
    log_exception, django.core.signals.got_request_exception)

# Unregister the rollback event handler.
django.dispatch.dispatcher.disconnect(
    django.db._rollback_on_exception,
    django.core.signals.got_request_exception)
becomes:
# Log errors.
django.dispatch.Signal.connect(
    django.core.signals.got_request_exception, log_exception)

# Unregister the rollback event handler.
django.dispatch.Signal.disconnect(
    django.core.signals.got_request_exception,
    django.db._rollback_on_exception)
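Since the refactor, each signal is an instance of django.dispatch.Signal, which is why the unbound-method calls above work; the same wiring can also be written as bound calls on the signal instance. A minimal sketch (log_exception is the same handler used in Google's snippet):

from django.core.signals import got_request_exception
import django.db

# Attach and detach receivers directly on the signal instance.
got_request_exception.connect(log_exception)
got_request_exception.disconnect(django.db._rollback_on_exception)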
I have a question about my Celery/RabbitMQ setup, since only the first function (debug_task()) that I have defined in celery.py is executed. The problem is that send_user_mail(randomNumber, email) is not working; debug_task works, so it is registered.
This is the Celery console output:
[2022-10-08 22:28:48,081: ERROR/MainProcess] Received unregistered task of type 'callservices.celery.send_proveedor_mail_new_orden'. The message has been ignored and discarded.

Did you remember to import the module containing this task? Or maybe you are using relative imports?
Why is it unregistered?
celery.py
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
from django.conf import settings
from django.core.mail import EmailMultiAlternatives, send_mail

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'callservices.settings')

app = Celery('tasks', broker='pyamqp://guest@localhost//')
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(settings.INSTALLED_APPS)

@app.task()
def debug_task():
    print("hi all")

@app.task()
def send_user_mail(randomNumber, email):
    subject = 'email validation - ServicesYA'
    cuerpo = "Your number is: " + str(randomNumber)
    send_mail(subject, cuerpo, 'xxx.ssmtp@xxx.com', [email], fail_silently=False)
    return 1
This is __init__.py
# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from celery import app as celery_app
__all__ = ('celery_app',)
and in settings.py I added this line:
BROKER_URL = "amqp://guest:guest@localhost:5672//"
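For reference, app.autodiscover_tasks(settings.INSTALLED_APPS) looks for a tasks.py module inside each of the listed packages, which matches the hint in the error message about importing the module containing the task. A minimal sketch of that conventional layout, using a hypothetical app package myapp:

# myapp/tasks.py -- autodiscover_tasks() imports this module by convention
from celery import shared_task
from django.core.mail import send_mail

@shared_task
def send_user_mail(randomNumber, email):
    subject = 'email validation - ServicesYA'
    send_mail(subject, "Your number is: " + str(randomNumber),
              'xxx.ssmtp@xxx.com', [email], fail_silently=False)
    return 1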
I'm following this repo, but I get this error:
Error: Import error cannot import name 'ProfileResource' from 'crowdfunding.models' (C:\_____\crowdfunding\models.py)
which is supposed to perform the import asynchronously. The problem is that it cannot detect my ProfileResource. I have specified in my settings.py that the resource should be retrieved from admin.py:
def resource():
    from crowdfunding.admin import ProfileResource
    return ProfileResource

IMPORT_EXPORT_CELERY_MODELS = {
    "Profile": {
        'app_label': 'crowdfunding',
        'model_name': 'Profile',
        'resource': resource,
    }
}
but it can't seem to do that.
My celery.py is this:
from __future__ import absolute_import, unicode_literals
import os
import sys
from celery import Celery
# sys.path.append("../")

# Set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mainapp.settings')
from django.conf import settings

app = Celery('mainapp',
             broker='amqp://guest:guest@localhost:15672//',
             # broker='localhost',
             # backend='rpc://',
             backend='db+sqlite:///db.sqlite3',
             # include=['crowdfunding.tasks']
             )
# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
# should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
and the broker and backend are working fine so it's just the config not being recognized. What could be the problem?
I believe the problem is that changes to the code are not applied to Celery automatically. Every time you change the source code, you need to restart the Celery worker by hand so that it picks up your changes, including the import path you configured in settings.py.
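For example, with the celery.py above, you would stop the running worker and start it again with something like:

celery -A mainapp worker --loglevel=info

(-A mainapp assumes the package holding celery.py is named mainapp, matching the DJANGO_SETTINGS_MODULE in the question.)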
I have a working Django app that I was able to get functioning on Heroku. It consists of a project named 'untitled' and an app named 'web', laid out as:
project_root
static
templates
untitled
--->__init__.py
--->settings.py
--->urls.py
--->wsgi.py
web
--->__init__.py
--->admin.py
--->apps.py
--->models.py
--->tests.py
--->urls.py
--->views.py
This is a fairly basic app that I can get working outside of GAE (locally and on Heroku); however, I'm getting stuck on the app.yaml and main.py requirements for GAE.
My app.yaml is:
application: seismic-interpretation-institute-py27
version: 1
runtime: python27
api_version: 1
threadsafe: true
handlers:
- url: /.*
  script: main.app

libraries:
- name: django
  version: "latest"
and my main.py (generated from PyCharm) is
import os,sys
import django.core.handlers.wsgi
import django.core.signals
import django.db
import django.dispatch.dispatcher
# Google App Engine imports.
from google.appengine.ext.webapp import util
# Force Django to reload its settings.
from django.conf import settings
settings._target = None
os.environ['DJANGO_SETTINGS_MODULE'] = 'untitled.settings'
# Unregister the rollback event handler.
django.dispatch.dispatcher.disconnect(
    django.db._rollback_on_exception,
    django.core.signals.got_request_exception)

def main():
    # Create a Django application for WSGI.
    application = django.core.handlers.wsgi.WSGIHandler()
    # Run the WSGI CGI handler with that application.
    util.run_wsgi_app(application)

if __name__ == '__main__':
    main()
Finally, when running locally, the output reports this error:

ImportError: Settings cannot be imported, because environment variable DJANGO_SETTINGS_MODULE is undefined.

It seems that this error is causing my problems, but I am not exactly sure how to fix it.
Try replacing
from django.conf import settings
settings._target = None
os.environ['DJANGO_SETTINGS_MODULE'] = 'untitled.settings'
with
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "untitled.settings")
from django.conf import settings
settings._target = None
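The order matters because django.conf.settings is a lazy object: DJANGO_SETTINGS_MODULE is read the first time a setting is accessed, not when the module is imported. A minimal sketch of that behavior, assuming the same project name:

import os

# Must run before the first settings attribute access, otherwise Django
# raises: ImportError: Settings cannot be imported, because environment
# variable DJANGO_SETTINGS_MODULE is undefined.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "untitled.settings")

from django.conf import settings
print(settings.DEBUG)  # the first access is what actually loads untitled.settings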
In my Flask server app, I wanted to split my routes into separate files, so I used Blueprint. However, this caused logging to fail within the constructor function used by a route. Can anyone see what I might have done wrong to cause this?
Simplified example ...
main.py ...
#!/usr/bin/python
import logging
import logging.handlers
from flask import Flask, Blueprint
from my_routes import *
logger = logging.getLogger("")
logger.setLevel(logging.DEBUG)
handler = logging.handlers.RotatingFileHandler("flask.log",
    maxBytes=3000000, backupCount=2)
formatter = logging.Formatter(
    '[%(asctime)s] {%(filename)s:%(lineno)d} %(levelname)s - %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logging.getLogger().addHandler(logging.StreamHandler())
logging.debug("started app")
app = Flask(__name__)
app.register_blueprint(api_v1_0)
if __name__ == '__main__':
    logging.info("Starting server")
    app.run(host="0.0.0.0", port=9000, debug=True)
my_routes.py ...
import logging
import logging.handlers
from flask import Flask, Blueprint
class Class1():
    def __init__(self):
        logging.debug("Class1.__init__()")  # This statement does not get logged
        self.prop1 = 11

    def method1(self):
        logging.debug("Class1.method1()")
        return self.prop1

obj1 = Class1()
api_v1_0 = Blueprint('api_v1_0', __name__)

@api_v1_0.route("/route1", methods=["GET"])
def route1():
    logging.debug("route1()")
    return str(obj1.method1())
You create an instance of Class1 at the global scope of my_routes.py, so the constructor runs when that module is imported, at the from my_routes import * line in main.py. That happens before your logging handler is configured, so there is nothing to log to at that point.
The solution is simple: move the import statement below the chunk of code that sets up the logging handler.
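A minimal sketch of the reordered main.py, keeping everything else the same:

#!/usr/bin/python
import logging
import logging.handlers
from flask import Flask

# Configure logging first, so module-level code in my_routes
# (such as Class1.__init__) has a handler to write to at import time.
logger = logging.getLogger("")
logger.setLevel(logging.DEBUG)
handler = logging.handlers.RotatingFileHandler("flask.log",
    maxBytes=3000000, backupCount=2)
handler.setFormatter(logging.Formatter(
    '[%(asctime)s] {%(filename)s:%(lineno)d} %(levelname)s - %(message)s'))
logger.addHandler(handler)
logging.getLogger().addHandler(logging.StreamHandler())

# Importing my_routes now runs Class1() with logging already configured.
from my_routes import *

app = Flask(__name__)
app.register_blueprint(api_v1_0)

if __name__ == '__main__':
    logging.info("Starting server")
    app.run(host="0.0.0.0", port=9000, debug=True)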
I've got a django app that's trying to call a celery task that will eventually be executed on some remote hosts. The task codebase is completely separate to the django project, so I'm using celery.execute.send_task and calling it from a post_delete model signal. The code looks a bit like this:
class MyModel(models.Model):
    @staticmethod
    def do_async_thing(sender, instance, **kwargs):
        celery.execute.send_task("tasks.do_my_thing", args=[instance.name])

signals.post_delete.connect(MyModel.do_async_thing, sender=MyModel)
I'm using the latest Django (1.6.1) and celery 3.1.7, so I understand that I don't need any extra module or app in my django project for it to be able to talk to celery. I've set BROKER_URL inside my settings.py to be the right url, amqp://user:password@host/vhost.
When this method fires, I get a Connection Refused error. There's no indication on the celery broker that any connection was attempted - I guess it's not seeing the BROKER_URL configuration and is trying to connect to localhost.
How do I make this work? What extra configuration does send_task need to know where the broker is?
So I discovered the answer, and it was to do with not reading the tutorial (http://docs.celeryproject.org/en/latest/django/first-steps-with-django.html) closely enough.
Specifically, I had the correct celery.py in place which I would have thought should have loaded the settings, but I'd missed the necessary changes to __init__.py in the django project, which wasn't hooking everything together.
My celery.py should be:
from __future__ import absolute_import
import os
from celery import Celery
from django.conf import settings
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')
app = Celery('myproject')
# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
and the __init__.py should be simply:
from __future__ import absolute_import
# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from .celery import app as celery_app
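Once the app is created in __init__.py like this, it becomes the default Celery app for the process, so send_task reads BROKER_URL from the Django settings. The signal handler can also target the configured app explicitly; a small sketch using the same task name as the question:

from celery import current_app

def do_async_thing(sender, instance, **kwargs):
    # current_app is the app configured in myproject/celery.py
    current_app.send_task("tasks.do_my_thing", args=[instance.name])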