I am trying to schedule a task, but when running the script I get:
ModuleNotFoundError: No module named 'clients'
Even though this is an installed app within my Django project.
I already added a shebang, trying both of these:
#!/usr/bin/venv python3.9
#!/usr/bin/python3.9
Either option gets me the same error message. I think my code for the scheduled task is fine, but here it is anyway:
cd /home/myusername/myproject/maintenance && workon venv && python3.9 mantenimientosemanal.py
Any help will be really appreciated. Thanks.
OK, after reading a lot: there is something related to WSGI and sys.path that I don't fully understand yet, but I found that this can be solved by creating a custom django-admin command and running python manage.py mycustomcommand as the scheduled task instead. Voila, it worked! Thanks, and I hope this helps others.
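For reference, here is a minimal sketch of such a custom management command; the app name, file path, and command name are assumptions for illustration. It lives in myapp/management/commands/mycustomcommand.py (with empty __init__.py files in the management and commands directories):

# myapp/management/commands/mycustomcommand.py -- minimal sketch;
# app and command names are assumptions.
from django.core.management.base import BaseCommand


class Command(BaseCommand):
    help = "Runs the weekly maintenance task."

    def handle(self, *args, **options):
        # Imports such as `from clients.models import ...` work here,
        # because manage.py has already configured settings and sys.path.
        self.stdout.write("Maintenance task finished.")

The scheduled task then becomes something like cd /home/myusername/myproject && workon venv && python manage.py mycustomcommand.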
I have read this Stack Overflow Q&A, but it did not work in my case.
In my scenario I push a function (submit_transaction_for_settlement(transaction_id)) to the Redis queue using the excellent package django-rq. The job of this function is to submit a transaction for settlement.
In the sandbox, whenever this function is executed I keep getting the same error: AttributeError: type object 'Configuration' has no attribute 'environment'.
I tried agf's proposal of instantiating a new gateway for each transaction inside my function, but it did not work!
Maybe this has something to do with the environment of the Redis queue or the worker environment?
def submit_transaction_for_settlement(transaction_id):
    from django.conf import settings
    from braintree import Configuration, BraintreeGateway

    # Build a fresh gateway per job instead of relying on global state.
    config = Configuration(environment=settings.BRAINTREE_ENVIRONMENT,
                           merchant_id=settings.BRAINTREE_MERCHANT_ID,
                           public_key=settings.BRAINTREE_PUBLIC_KEY,
                           private_key=settings.BRAINTREE_PRIVATE_KEY)
    gateway = BraintreeGateway(config=config)
    return gateway.transaction.submit_for_settlement(transaction_id)
Ahrg!
I hate the moments where I ask a question and minutes later find the solution myself!
The fault was in the command running the rqworker. I was using python manage.py rqworker --worker-class rq.SimpleWorker, because under Python 2.7 the plain python manage.py rqworker command had given me a different issue (or something else caused it).
After upgrading to Python 3.4, the plain command works like a charm!
So, running python manage.py rqworker did the trick, and no more such errors!
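For context, a minimal sketch of how such a job might be enqueued with django-rq (the call site is an assumption):

import django_rq

# Enqueue the settlement job on the default queue. The default worker
# started by `python manage.py rqworker` forks a fresh child process per
# job, whereas rq.SimpleWorker runs jobs in the main process without
# forking.
django_rq.enqueue(submit_transaction_for_settlement, transaction_id)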
When I try to run the cron job in Django using the below command
python manage.py runcrons
it shows an error like below:
$ python manage.py runcrons
No handlers could be found for logger "django_cron"
Does anyone have any idea about this error? Any help is appreciated.
It is kind of given in the error you get. You are missing a handler for the "django_cron" logger. See for example https://stackoverflow.com/a/7048543/1197616. Also have a look at the docs for Django, https://docs.djangoproject.com/en/dev/topics/logging/.
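For illustration, a minimal settings.py LOGGING configuration that attaches a console handler to the "django_cron" logger might look like this (handler name and level are assumptions):

# settings.py -- minimal sketch; handler name and level are assumptions.
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {'class': 'logging.StreamHandler'},
    },
    'loggers': {
        'django_cron': {'handlers': ['console'], 'level': 'INFO'},
    },
}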
Actually, the django-cron library does not require a 'django_cron' logger. I resolved the same problem by running the migrations for django_cron:
python manage.py migrate  # creates django_cron's tables, among any other pending migrations
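For context, django-cron jobs are plain classes whose run history is stored in the database, which is why the migrations matter. A minimal sketch (the class name and code value are assumptions):

# myapp/cron.py -- minimal sketch; the class name and `code` are assumptions.
from django_cron import CronJobBase, Schedule


class MyCronJob(CronJobBase):
    RUN_EVERY_MINS = 60  # run at most once an hour
    schedule = Schedule(run_every_mins=RUN_EVERY_MINS)
    code = 'myapp.my_cron_job'  # unique id, recorded in django_cron's tables

    def do(self):
        # The actual periodic work goes here.
        pass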
I'm using django==1.5.1 and I got the error below. Earlier I was using django==1.4.2 and didn't get such an error following the same tutorial in GSWD (I did not update the Django version in the middle of the project).
(edu-venv)vagrant@precise32:/vagrant/projects/kodeworms$ heroku run python manage.py syncdb
Running `python manage.py syncdb` attached to terminal... up, run.9132
ImproperlyConfigured: settings.DATABASES is improperly configured. Please supply the ENGINE value. Check settings documentation for more details.
This looks like I missed something specific to the django==1.5.1 version. Can someone help?
Someone gave this solution and it worked for me
Original Link: http://gettingstartedwithdjango.com/questions/1/error-in-heroku-run-python-managepy-syncdb/
If you type heroku config you'll get the heroku environment values.
Mine only showed:
HEROKU_POSTGRESQL_BRONZE_URL: postgres://tcmgahtgsrmufa:iyA2dKD5bnO4f7jyv6MSu4453g@ec2-54-225-68-241.compute-1.amazonaws.com:5432/d6oj663f28smnh
There was no DATABASE_URL, which dj_database_url.config needs. I then found out from https://devcenter.heroku.com/articles/heroku-postgresql that you need to promote this to DATABASE_URL. The command is: heroku pg:promote HEROKU_POSTGRESQL_RED_URL (replace with whatever environment variable your set-up is using).
So far so good. I came here to post as soon as I tried this out. I haven't rerun yet, but it should work.
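For reference, this is roughly how dj_database_url is typically wired up in settings.py, which is why the promoted DATABASE_URL matters (a minimal sketch, assuming the dj-database-url package):

# settings.py -- minimal sketch, assuming the dj-database-url package.
import dj_database_url

# Parses the DATABASE_URL environment variable into the complete
# DATABASES['default'] dict (ENGINE, NAME, USER, PASSWORD, HOST, PORT).
DATABASES = {'default': dj_database_url.config()}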
I'm using the django celery task queue, and it works fine in development, but not at all in WSGI production. Even more frustrating, it used to work in production, but I somehow broke it.
"sudo rabbitmqctl status" tells me that the RabbitMQ server is working. Everything also seems peachy in Django: objects are created and routed to the task manager without problems. But then their status just stays "queued" indefinitely. The way I've written my code, they should switch to "error" or "ready" as soon as anything gets returned from the celery task. So I assume there's something wrong with the queue.
Two related questions:
Any ideas what the problem might be?
How do I debug celery? Outside of the manage.py celeryd command, I'm not sure how to peer into its inner workings. Are there log files or something I can use?
Thanks!
PS - I've seen this question, but he seems to want to run celery from manage.py, not WSGI.
After much searching, the most complete answer I found for this question is here. These directions flesh out the skimpy official directions for daemonizing celeryd. I'll copy the gist here, but you should follow the link, because Michael has explained some parts in more detail.
The main idea is that you need scripts in three places:
/etc/init.d/celeryd
/etc/default/celeryd
myApp/settings.py
settings.py appears to be the same as in development mode, so if that's already set up, there are four steps to shifting to production:
1. Download the daemon script, since it's not included in the installation: https://github.com/celery/celery/tree/3.0/extra/generic-init.d/
2. Put it in /etc/init.d/celeryd.
3. Make a file in /etc/default/celeryd, and put the variables from http://docs.celeryproject.org/en/latest/tutorials/daemonizing.html#example-django-configuration into it (see the sketch below).
4. Start the script.
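As item 3 mentions, /etc/default/celeryd holds the worker's environment variables; a minimal sketch adapted from the linked Celery docs (paths, node name, and user are assumptions):

# /etc/default/celeryd -- minimal sketch; paths and names are assumptions.
# Name of nodes to start (a single worker here).
CELERYD_NODES="w1"
# Where to chdir at start: the directory containing manage.py.
CELERYD_CHDIR="/opt/myApp/"
# How to call "manage.py celeryd_multi" for a Django project.
CELERYD_MULTI="$CELERYD_CHDIR/manage.py celeryd_multi"
# Extra arguments to celeryd.
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# %n will be replaced with the node name.
CELERYD_LOG_FILE="/var/log/celery/%n.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
# Workers should run as an unprivileged user.
CELERYD_USER="celery"
CELERYD_GROUP="celery"
# Name of the Django settings module.
export DJANGO_SETTINGS_MODULE="settings"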
This solved my problem.
I think the reason you are not getting any response from celery is that the celeryd server might not be running. You can check with ps -ef | grep celeryd. To figure out what error occurs when celeryd starts, you might want to do the following.
In your settings.py file you can give the path to the celery log file: CELERYD_LOG_FILE = <path to the log file>.
And while running the celeryd server you can specify the log level: python manage.py celeryd -l DEBUG.
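Put together, a minimal sketch (the log path is an assumption; the level can also be set in settings instead of passing -l on the command line):

# settings.py -- minimal sketch; the log path is an assumption.
CELERYD_LOG_FILE = "/var/log/celery/celeryd.log"
CELERYD_LOG_LEVEL = "DEBUG"  # equivalent to passing `-l DEBUG`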
This is blowing my mind because it's probably an easy solution, but I can't figure out what could be causing this.
So I have a new dev box and am setting everything up. I installed virtualenv, created a new environment for my project under ~/.virtualenvs/projectname
Then, I cloned my project from github into my projects directory. Nothing fancy here. There are no .pyc files sitting around so it's a clean slate of code.
Then, I activated my virtualenv and installed Django via pip. All looks good so far.
Then, I run python manage.py syncdb within my project dir. This is where I get confused:
ImportError: No module named projectname
So I figured I might have some references to projectname within my code. So I grepped (acked, actually) through my code base and found nothing of the sort.
So now I'm at a loss, given this environment why am I getting an ImportError on a module named projectname that isn't referenced anywhere in my code?
I look forward to a solution. Thanks, guys!
Is projectname exactly (modulo suffix) the name of the directory the project is in? Wild guess, but I know Django does some things with the current directory…
Also, what is trying to import projectname? Do you get a traceback? If not, try running with python manage.py syncdb --traceback and see what happens.
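For what it's worth, when this bites, the stray reference often lives in the generated boilerplate rather than the app code; a sketch of the usual old-Django suspect lines (typical defaults, not confirmed from this question):

import os

# Old-style manage.py points Django at <projectname>.settings. If the
# project directory was renamed, or lacks an __init__.py, importing it
# fails with "ImportError: No module named projectname".
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'projectname.settings')

# settings.py carries similar dotted references, e.g.:
#     ROOT_URLCONF = 'projectname.urls'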