Heroku worker can't access database - django

I have a Django project hosted by Heroku with a (postgresql) database, an app that interacts with that database (on a web dyno), and a worker that should also interact with the database.
The app asks the worker to do long-running tasks in the background, like some database queries. However, I'm having trouble implementing this. I think the problem is that the worker is not authorized to access the database. The specific error I'm seeing is:
app[worker.1]: ImproperlyConfigured: Requested setting DEFAULT_INDEX_TABLESPACE, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.
This doesn't make sense to me as an error coming from the worker. The settings are loading fine; everything on the web dyno works as expected. The only thing I can figure is that the worker doesn't have access to the configured settings, including the database credentials.
How do I resolve this?
Edit:
Here's the Procfile:
web: gunicorn myapp.wsgi --log-file -
worker: python launchworker.py
As indicated there, the worker is launched as a Heroku worker process, and it isn't run through the Django project at all (as I understand it would be the case if it were a management command). This was the method recommended by Heroku.
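For reference, a standalone script like this has to bootstrap Django itself before it can touch the ORM; the web dyno gets that for free through myapp/wsgi.py. A minimal sketch of what the top of launchworker.py would need, assuming the settings module is myapp.settings (adjust to your real module name; the model is hypothetical):

# launchworker.py (sketch)
import os
import django

# Point at the same settings module the web dyno uses; "myapp.settings" is an assumption.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myapp.settings")
django.setup()  # Django 1.7+; on older versions setting the env var alone is enough

# Import models only after setup, otherwise you get ImproperlyConfigured.
from myapp.models import Job  # hypothetical model

def run():
    for job in Job.objects.filter(done=False):
        ...  # long-running database work goes here

if __name__ == "__main__":
    run()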

Related

Connect to Postgres on Heroku with Django properly

I'm new to Django and Heroku.
I'm confused about how I should connect to the Postgres database on Heroku from my Django app, given that all the credentials and DATABASE_URL can change.
Firstly, to connect to Postgres on Heroku I started by hardcoding the credentials as environment variables in my Heroku dashboard.
Then I figured out that this is bad practice, because the values can change.
I checked this guide by Heroku where they recommend adding this to settings:
DATABASES['default'] = dj_database_url.config(conn_max_age=600, ssl_require=True)
With that, I added my DATABASE_URL to my .env file, because otherwise the URL would be empty. Now I get all the correct database credentials in my DATABASES, the same as in my dashboard. So halfway there.
Then I deleted all the hardcoded environment variables from my Heroku dashboard.
Then when I tried to run heroku run python src/manage.py migrate -a myapp data, I received an error:
django.db.utils.OperationalError: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: No such file or directory
Is the server running locally and accepting connections on that socket?
As I understand it, the problem is that it can't connect to the database (maybe because I deleted environment variables).
From what I've seen on the internet, a lot of guides on migrating to Postgres on Heroku use the hardcoded environment variables approach, which is bad practice.
Heroku's own guide doesn't show specifically how we should connect to the database with dynamically updated credentials.
Please advise.
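One thing that may clear this up: on Heroku you don't hardcode DATABASE_URL at all. The Postgres add-on attaches it as a config var and keeps it current when credentials rotate, and dj_database_url.config() reads it from the environment each time a dyno boots; the .env file is only for local development. A sketch of the relevant settings, assuming a project that follows Heroku's Django guide (the local fallback URL is a made-up example):

# settings.py (sketch)
import dj_database_url

DATABASES = {
    "default": dj_database_url.config(
        # Hypothetical local fallback for development; on Heroku the
        # DATABASE_URL config var set by the Postgres add-on takes precedence.
        default="postgres://localhost:5432/mylocaldb",
        conn_max_age=600,
        ssl_require=True,
    )
}

The socket error in the question ("/var/run/postgresql/.s.PGSQL.5432") is usually what psycopg2 falls back to when no connection info is configured at all, i.e. DATABASE_URL wasn't visible to that process, rather than the credentials being wrong.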

Streamlit app inside django project at the same time

I'm kind of new to Heroku, but I have a question about dynos.
My situation is as follows. I'm running a main application in Django that uses the following dyno command,
"web: gunicorn DJANGO_WEBPAGE.wsgi --log-file -", and inside my main project there is a Streamlit app that runs with
"streamlit run geo_dashboard.py", but it seems I can't run both commands on the same dyno.
1.- Is there a way to accomplish this?
2.- Do they need to be on separate apps?
3.- In case of limitations due to being a free user, does the hobby plan cover it?
I've tried running my Procfile this way:
web: gunicorn DJANGO_WEBPAGE.wsgi --log-file - && web: sh setup.sh && streamlit run geo_dashboard.py
Even though I get no errors, only the Streamlit app appears, leaving the Django app shut down.
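For what it's worth, Heroku routes HTTP traffic only to the process type named web, and one dyno runs exactly one Procfile entry, so chaining two servers with && in a single web line won't keep both reachable. One common pattern is two Heroku apps, each with its own web process; a sketch, assuming the dashboard can live as its own app:

Procfile of the Django app (unchanged):
web: gunicorn DJANGO_WEBPAGE.wsgi --log-file -

Procfile of a separate Heroku app containing only the dashboard:
web: sh setup.sh && streamlit run geo_dashboard.py

Each app then gets its own web dyno, so the split itself doesn't require a plan upgrade.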

How to use Django logging with gunicorn

I have a Django 1.6 site running with gunicorn, managed by supervisor. During tests and runserver I have logging on the console, but with gunicorn the statements don't show up anywhere (not even ERROR level logs). They should be in /var/log/supervisor/foo-stderr---supervisor-51QcIl.log but they're not. I have celery running on a different machine using supervisor and its debug statements show up fine in its supervisor error file.
Edit:
Running gunicorn in the foreground shows that none of my error messages are being logged to stderr like they are when running manage.py. This is definitely a gunicorn problem and not a supervisor problem.
I got a response on GitHub:
https://github.com/benoitc/gunicorn/issues/708
Since you have passed disable_existing_loggers the Gunicorn loggers are disabled when Django loads your logging configuration. If you are setting this because you want to disable some default Django logging configuration, make sure you add back the gunicorn loggers, gunicorn.error and gunicorn.access with whatever handlers you want.
In /etc/supervisor/conf.d/your-app.conf you should set log paths:
stdout_logfile=/var/log/your-app.log
stderr_logfile=/var/log/your-app.log
First, in your supervisor config for the gunicorn script, be sure to define
stdout_logfile=/path/to/logfile.log
redirect_stderr=true
That will make stdout and stderr go to the same file.
Now, in your gunicorn script, be sure to call the process with the following argument:
gunicorn YourWSGIModule:app --log-level=critical
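To make the GitHub advice concrete, here is a sketch of a LOGGING setting that keeps gunicorn's loggers alive alongside Django's and sends everything to stderr, which supervisor then captures (logger names and levels are just examples):

# settings.py (sketch)
LOGGING = {
    "version": 1,
    # Don't wipe out gunicorn.error / gunicorn.access when Django configures logging.
    "disable_existing_loggers": False,
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",  # stderr, picked up by supervisor
        },
    },
    "loggers": {
        "django": {"handlers": ["console"], "level": "INFO"},
        "gunicorn.error": {"handlers": ["console"], "level": "INFO"},
        "gunicorn.access": {"handlers": ["console"], "level": "INFO"},
    },
}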

Permission problems prevent celery from running as daemon?

I'm currently having some trouble running Celery as a daemon. I use Apache to serve my Django application, so I set the uid and gid in the Celery settings to "www-data". There are two places I know of so far that need access permissions: /var/log/celery/*.log and /var/run/celery/*.pid, and I have already set them to be owned by "www-data". However, Celery won't start when I run sudo service celeryd start. If I get rid of the --uid and --gid options for the command, Celery does start, as user "root".
One other thing I noticed is that when I start Celery as "root", it puts some files (celery.bak, celery.dat, celery.dir) in my CELERYD_CHDIR, which is my Django application directory. I also changed the application directory to be owned by "www-data", but Celery still couldn't start. I copied all the settings files from another machine on which Celery runs fine, so I suppose it's not a problem with my settings. Does anyone have any clue? Thanks.
Su to the celery user and start Celery from the command line. Most likely it's an app log, not Celery's, that you need permissions for.
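If it helps, these are the values in /etc/default/celeryd (the config file read by the generic celeryd init script) that usually matter for this kind of permission failure; the paths and user here are assumptions, adjust them to your layout:

# /etc/default/celeryd (sketch)
CELERYD_USER="www-data"
CELERYD_GROUP="www-data"
CELERYD_CHDIR="/path/to/your/django/project"
CELERYD_LOG_FILE="/var/log/celery/%n.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
# The directories themselves (not just existing files) must be writable by www-data:
#   chown -R www-data:www-data /var/log/celery /var/run/celery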

Getting broker started with django-celery

This is my first time using Celery, so this might be a really easy question. I'm following the tutorial. I added BROKER_URL = "amqp://guest:guest@localhost:5672/" to my settings file. I added the simple task to my app. Now I start the worker process with
manage.py celeryd --loglevel=info --settings=settings
The --settings=settings is needed on Windows machines, otherwise django-celery can't find the settings.
I get
[Errno 10061] No connection could be made because the target machine actively refused it. Trying again in 2 seconds...
So it seems like the worker is not able to connect to the broker. Do I have to start the broker? Is it automatically started with manage.py runserver? Do I have to install something besides django-celery? Do I have to do something like manage.py runserver BROKER_URL?
Any pointers would be much appreciated.
You need to install a broker first, or try using the Django DB as the broker.
I don't recommend the Django DB in production, though. Redis is fine, but it may be a problem to run on Windows.
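To spell that out: celeryd is only the worker. It expects a broker (RabbitMQ by default) to already be running, and manage.py runserver does not start one. On Windows, the least-friction option for experimenting with the old django-celery package is to use the Django database as the broker; a sketch of the settings involved (this applies to the django-celery/kombu of that era, not current Celery):

# settings.py (sketch, old django-celery)
import djcelery
djcelery.setup_loader()

INSTALLED_APPS += (
    "djcelery",
    "kombu.transport.django",  # lets the Django DB act as the broker
)

BROKER_URL = "django://"  # no separate broker process needed

Then run manage.py syncdb so the kombu queue tables exist before starting celeryd.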