I have read this Stack Overflow Q&A but it did not work in my case.
In my scenario I push a function (submit_transaction_for_settlement(transaction_id)) to the redis queue using the excellent package django-rq. The job of this function is to submit a transaction for settlement.
In the sandbox, whenever this function is executed I keep getting the same error: AttributeError: type object 'Configuration' has no attribute 'environment'.
I tried agf's proposal of instantiating a new gateway for each transaction inside my function, but it did not work!
Maybe this has something to do with the environment of the redis queue or the worker environment?
def submit_transaction_for_settlement(transaction_id):
    from django.conf import settings
    from braintree import Configuration, BraintreeGateway

    config = Configuration(
        environment=settings.BRAINTREE_ENVIRONMENT,
        merchant_id=settings.BRAINTREE_MERCHANT_ID,
        public_key=settings.BRAINTREE_PUBLIC_KEY,
        private_key=settings.BRAINTREE_PRIVATE_KEY,
    )
    gateway = BraintreeGateway(config=config)
    result = gateway.transaction.submit_for_settlement(transaction_id)
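For comparison, the global configuration style that the Braintree SDK also supports (the style hinted at by the Configuration.environment attribute in the error) looks roughly like the sketch below. This is illustrative only, assuming sandbox credentials from settings, and is not the fix I eventually found:

    # A hedged sketch: configure Braintree once globally (e.g. in a module the
    # worker imports) instead of building a gateway per call.
    import braintree
    from django.conf import settings

    braintree.Configuration.configure(
        braintree.Environment.Sandbox,  # assumption: sandbox environment
        merchant_id=settings.BRAINTREE_MERCHANT_ID,
        public_key=settings.BRAINTREE_PUBLIC_KEY,
        private_key=settings.BRAINTREE_PRIVATE_KEY,
    )

    result = braintree.Transaction.submit_for_settlement(transaction_id)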
Ahrg!
I hate the moments where I ask a question and minutes later find the solution myself!
The fault was in the command running the rqworker. I was using python manage.py rqworker --worker-class rq.SimpleWorker because the default command, python manage.py rqworker, had given me a different issue under Python 2.7 (or something else caused that issue).
After upgrading to Python 3.4, the default command works like a charm!
So, running python manage.py rqworker did the trick, and no more such errors!
I hope I can get some help. I have installed django-crontab and created a cron task like the following inside cron.py in my_django_app in Django:
import redis

r = redis.Redis()  # connect to the local Redis instance

def hello():
    r.set("hello", "world")
The above code works perfectly when I run python manage.py crontab run (cron hash), and the key is set successfully inside Redis, but the task doesn't run automatically. Have you had any similar experience?
Note: I am using Linux, Python 3.9, and Django 3.2.3.
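For reference, a django-crontab setup along these lines is presumably what sits in settings.py (the schedule and dotted path here are illustrative, not taken from the question); with django-crontab the job only starts firing automatically after it has been registered with the system crontab via python manage.py crontab add:

    # A hedged sketch of the django-crontab wiring assumed above.
    # Requires 'django_crontab' in INSTALLED_APPS.
    CRONJOBS = [
        ('*/5 * * * *', 'my_django_app.cron.hello'),  # run hello() every five minutes
    ]

After adding the job, python manage.py crontab show should list it among the installed cron entries.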
I built a Django 1.9 project locally with sqlite3 as my default database. I have an application named Download which defines the DownloadedSongs table in models.py:
models.py
from __future__ import unicode_literals
from django.db import models


class DownloadedSongs(models.Model):
    song_name = models.CharField(max_length=255)
    song_artist = models.CharField(max_length=255)

    def __str__(self):
        return self.song_name + ' - ' + self.song_artist
Now, in order to deploy my local project to Heroku, I added the following lines at the bottom of my settings.py file:
import dj_database_url
DATABASES['default'] = dj_database_url.config()
My application has a form with a couple of text fields, and on submitting that form, the data gets inserted into the DownloadedSongs table. Now, when I deployed my project on Heroku and tried submitting this form, I got the following error:
Exception Type: ProgrammingError at /download/
Exception Value: relation "Download_downloadedsongs" does not exist
LINE 1: INSERT INTO "Download_downloadedsongs" ("song_name", "song_a...
This is what my requirements.txt file looks like:
beautifulsoup4==4.4.1
cssselect==0.9.1
dj-database-url==0.4.1
dj-static==0.0.6
Django==1.9
django-toolbelt==0.0.1
gunicorn==19.6.0
lxml==3.6.0
psycopg2==2.6.1
requests==2.10.0
static3==0.7.0
Also, I did try to run the following commands as well:
heroku run python manage.py makemigrations
heroku run python manage.py migrate
However, the issue still persists. What seems to be wrong here?
Make sure your local migrations folder and its contents are under git version control.
If not, add, commit & push them as follows (assuming you have a migrations folder under <myapp>, and your git remote is called 'heroku'):
git add <myapp>/migrations/*
git commit -m "Fix Heroku deployment"
git push heroku
Wait until the push is successful and you get the local prompt back.
Then log in to heroku and there execute migrate.
To do this in one execution environment, do not launch these as individual heroku commands, but launch a bash shell and execute both commands in there: (do not type the '~$', this represents the Heroku prompt)
heroku run bash
~$ ./manage.py migrate
~$ exit
You must not run makemigrations via heroku run. You must run it locally, and commit the result to git. Then you can deploy that code and run those generated migrations via heroku run python manage.py migrate.
The reason is that heroku run spins up a new dyno each time, with a new filesystem, so any migrations generated in the first command are lost by the time the second command runs. But in any case, migrations are part of your code, and must be in version control.
As Heroku's dynos don't have a filesystem that persists across deploys, a file-based database like SQLite3 isn't going to be suitable. It's a great DB for development/quick prototypes, though. https://stackoverflow.com/a/31395988/784648
So between deploys your entire SQLite database is going to be wiped; you should move to a dedicated database when you deploy to Heroku, I think. I know Heroku has a free tier for Postgres databases, which I'd recommend if you just want to test deployment to Heroku.
python manage.py makemigrations
python manage.py migrate
python manage.py migrate --run-syncdb
this worked for me.
I know this is old, but I had this issue and found this thread useful.
To sum up, the error can also appear when executing the migrations (which are supposed to create the needed relations in the DB), because recent versions of Django check your urls.py before running the migrations. In my case, and in many others' it seems, loading urls.py meant loading the views, and some views were class-based and had an attribute defined through get_object_or_404:
class CustomView(ParentCustomView):
    phase = get_object_or_404(Phase, code='C')
This is what was evaluated before the migrations actually ran, and caused the error. I fixed it by turning my view's attribute into a property:
class CustomView(ParentCustomView):
    @property
    def phase(self):
        return get_object_or_404(Phase, code='C')
You'll know quite easily if this is the problem you are encountering, as the Traceback will point you towards the problematic view.
Also this problem might not appear in development because you have migrated before creating the view.
I am trying to schedule a task, but when running the script I get:
ModuleNotFoundError: No module named 'clients'
This happens even though 'clients' is an installed app within my Django project.
I already added a hashbang/shebang as:
#!/usr/bin/venv python3.9
#!/usr/bin/python3.9
Either option gets me the same error message. I think my code for the scheduled task is fine, but here it is anyway:
cd /home/myusername/myproject/maintenance && workon venv && python3.9 mantenimientosemanal.py
Any help will be really appreciated. Thanks.
OK, after reading a lot: there is something related to WSGI and sys.path that I don't fully understand yet, but I found out this can be solved by creating a custom django-admin command and then running python manage.py mycustomcommand as the scheduled task, and voila! It worked! Thanks, and I hope this helps others.
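For anyone following the same route, a minimal custom command looks roughly like the sketch below. The app name, command name, and the imported maintenance module are illustrative guesses, not taken from the question:

    # myapp/management/commands/mycustomcommand.py
    # (the management/ and commands/ directories each need an __init__.py)
    from django.core.management.base import BaseCommand


    class Command(BaseCommand):
        help = "Run the weekly maintenance task"

        def handle(self, *args, **options):
            # Hypothetical import: assumes the weekly maintenance logic lives in a
            # run() function inside maintenance/mantenimientosemanal.py.
            from maintenance import mantenimientosemanal
            mantenimientosemanal.run()
            self.stdout.write(self.style.SUCCESS("Maintenance finished"))

The scheduled task then only needs to activate the virtualenv and call manage.py, e.g. something like cd /home/myusername/myproject && workon venv && python3.9 manage.py mycustomcommand.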
Django 3.0 is adding asgi / async support and with it a guard around making synchronous requests in an async context. Concurrently, IPython just added top level async/await support, which seems to be running the whole interpreter session inside of a default event loop.
Unfortunately the combination of these two great additions means that any Django ORM operation in a Jupyter notebook causes a SynchronousOnlyOperation exception:
SynchronousOnlyOperation: You cannot call this from an async context - use a thread or sync_to_async.
As the exception message says, it's possible to wrap each ORM call in a sync_to_async() like:
images = await sync_to_async(Image.objects.all)()
but it's not very convenient, especially for related fields which would usually be implicitly resolved on attribute lookup.
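To make the inconvenience concrete: fully evaluating a queryset means wrapping the evaluation itself, and related-field access needs its own wrapper too. A rough sketch for a notebook cell using asgiref (the Image model and its author foreign key are illustrative, not from my project):

    # Hedged sketch: every ORM touch needs its own sync_to_async wrapper.
    from asgiref.sync import sync_to_async

    images = await sync_to_async(list)(Image.objects.all())        # force queryset evaluation
    first_author = await sync_to_async(lambda: images[0].author)()  # related-field lookup hits the DB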
(I tried the %autoawait off magic but it didn't work; from a quick glance at the docs I'm assuming that's because ipykernel always runs in an asyncio loop.)
So is there a way to either disable the sync in async context check in django or run an ipykernel in a synchronous context?
For context: I wrote a data science package that uses django as a backend server but also exposes a jupyter based interface on top of the ORM that allows you to clean/annotate data, track machine learning experiments and run training jobs all in a jupyter notebook.
This works for me:
os.environ["DJANGO_ALLOW_ASYNC_UNSAFE"] = "true"
BTW, I start my notebook using the command
./manage.py shell_plus --notebook
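If you are not going through shell_plus, the same switch has to be flipped before the first ORM call. A hedged sketch of an equivalent first notebook cell (the settings module name is illustrative):

    # Hedged sketch of a first notebook cell when not using shell_plus --notebook;
    # "myproject.settings" stands in for your real settings module.
    import os
    import django

    os.environ["DJANGO_ALLOW_ASYNC_UNSAFE"] = "true"
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
    django.setup()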
Contrary to other answers I'd suggest just running from the shell as:
env DJANGO_ALLOW_ASYNC_UNSAFE=true ./manage.py shell_plus --notebook
and not modifying any config files or startup scripts.
The advantage of doing it like this is that these checks still seem useful to have enabled almost everywhere else (e.g. when debugging locally via runserver or when running tests). Disabling via files would easily disable these in too many places negating their advantage.
Note that most shells provide easy ways of recalling previously invoked command lines, e.g. in Bash or Zsh Ctrl+R followed by notebook would find the last time you ran something that had "notebook" in. Under the fish shell just type notebook and press the up arrow key to start a reverse search.
For now I plan on just using a forked version of django with a new setting to skip the async_unsafe check. Once the ORM gets async support I'll probably have to rewrite my project to support it and drop the flag.
EDIT: there's now a PR to add an env variable (DJANGO_ALLOW_ASYNC_UNSAFE) to disable the check (https://github.com/django/django/pull/12172)
I added os.environ["DJANGO_ALLOW_ASYNC_UNSAFE"] = "true" at the very bottom of my project's settings.py, and then ran the command
python3 manage.py shell_plus --notebook
(use python instead of python3 depending on how you invoke Python).
That's it. It worked for me.
When I try to run the cron job in Django using the command below,
python manage.py runcrons
it shows the following error:
$ python manage.py runcrons
No handlers could be found for logger "django_cron"
Does anyone have any idea about this error? Any help is appreciated.
It is kind of given in the error you get. You are missing a handler for the "django_cron" logger. See for example https://stackoverflow.com/a/7048543/1197616. Also have a look at the docs for Django, https://docs.djangoproject.com/en/dev/topics/logging/.
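A minimal way to silence that message is to give the logger somewhere to send its output. A hedged sketch of a LOGGING entry in settings.py (the handler and level are just examples):

    # Hedged sketch: route the "django_cron" logger to the console so the
    # "No handlers could be found" warning goes away.
    LOGGING = {
        'version': 1,
        'disable_existing_loggers': False,
        'handlers': {
            'console': {
                'class': 'logging.StreamHandler',
            },
        },
        'loggers': {
            'django_cron': {
                'handlers': ['console'],
                'level': 'INFO',
            },
        },
    }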
Actually the django-cron library does not require a 'django_cron' logger. I resolved the same problem by running the migrations of django_cron:
python manage.py migrate #migrate database