I am trying to launch a Django project but can't, because my local environment seems to be disconnected from the .env file in the root directory. I've never had this problem before and assume there is an easy fix, but I can't figure it out. When I run basic commands like runserver I get the following error:
RATE_LIMIT_DURATION = int(os.environ.get("RATE_LIMIT_DURATION"))
TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
In my .env file, I have defined the variable as:
RATE_LIMIT_DURATION = 10
I have already confirmed that my database is set up, and I am fairly sure my settings are otherwise in good shape.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": os.environ.get("DB_NAME"),
        "USER": os.environ.get("DB_USER"),
        "PASSWORD": os.environ.get("DB_PASSWORD"),
        "HOST": "localhost",
        "PORT": 8000,
    }
}
To load the .env file into my Django project, I just use load_dotenv from the python-dotenv library.
So basically, in settings.py I put:
from dotenv import load_dotenv
load_dotenv()
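For reference, this is roughly how I have tried loading it more explicitly (the BASE_DIR-based path and the fallback default below are assumptions for illustration, not part of my original setup):

import os
from pathlib import Path

from dotenv import load_dotenv

# Assumed layout: .env sits next to manage.py; adjust the path if yours differs
BASE_DIR = Path(__file__).resolve().parent.parent
load_dotenv(BASE_DIR / ".env")

# A fallback default avoids the NoneType crash while debugging why the variable is missing
RATE_LIMIT_DURATION = int(os.environ.get("RATE_LIMIT_DURATION", 10))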
Related
I want to migrate my db to Railway Postgres.
I put my DATABASE_URL as a env variable in prod.env file:
DATABASE_URL='postgresql://postgres:(my password here)F#containers-us-west-97.railway.app:6902/railway'
Here's how I import it in my prod settings file:
DATABASE_URL = os.getenv("DATABASE_URL")
DATABASES = {
    "default": dj_database_url.config(default=DATABASE_URL, conn_max_age=1800),
}
When I try to migrate the db:
./manage.py migrate --settings=app.settings.prod
I get an error:
django.core.exceptions.ImproperlyConfigured: settings.DATABASES is improperly configured. Please supply the ENGINE value. Check the settings documentation for more details.
I used the same approach when I migrated my DB to Heroku, and it worked well.
I checked while debugging that the correct DATABASE_URL value does reach the prod settings file.
I also added DATABASE_URL as a variable to my Railway project.
UPD. I tried hardcoding my DATABASE_URL in the settings file, and it worked. But again, even when I print my DATABASES after this code:
DATABASES = {
    "default": dj_database_url.config(default=DATABASE_URL, conn_max_age=1800),
}
I see that the values are correct.
How can I resolve this?
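One debugging sketch that might narrow it down (not a known fix): print exactly what dj_database_url parses out of the env value, since a missing ENGINE in the parsed dict is what triggers this error.

import os

import dj_database_url

raw_url = os.getenv("DATABASE_URL")
print(repr(raw_url))  # watch for stray quotes or whitespace picked up from prod.env

parsed = dj_database_url.parse(raw_url) if raw_url else {}
print(parsed.get("ENGINE"))  # an empty or missing ENGINE reproduces ImproperlyConfigured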
I'm aware this error has already been posted but the context of my issue is different. I have tried proposed solutions without success.
I'm using django-apscheduler to schedule some management commands, one of these uses ThreadPoolExecutor to speed up an otherwise lengthy process.
Unfortunately, it does not close connections, which eventually leads to:
exception:connection to server at "localhost" (::1), port 5432 failed: FATAL: sorry, too many clients already
I'm using PostgreSQL backend, and when I run the management command directly I can see in pgAdmin that the number of database sessions reduces after execution.
However, when run by django-apscheduler they accumulate and eventually cause FATAL: sorry, too many clients already.
I've tried calling close_old_connections() (from django.db import close_old_connections), but that doesn't seem to do anything.
Can someone point me in the right direction?
This is what my management command looks like:
import concurrent.futures

from django.core.management.base import BaseCommand

from data_model.models import DataModel


class Command(BaseCommand):
    help = "Do something interesting"

    def handle(self, *args, **kwargs):
        with concurrent.futures.ThreadPoolExecutor(10) as executor:
            list(executor.map(DataModel.objects.do_something_interesting))
I also tried adding an expiration time (CONN_MAX_AGE) to the DB config, but that didn't help:
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": env.str("POSTGRES_DB"),
        "USER": env.str("POSTGRES_USER"),
        "PASSWORD": env.str("POSTGRES_PASSWORD"),
        "HOST": env.str("POSTGRES_HOST"),
        "PORT": 5432,
        "CONN_MAX_AGE": 60,
    },
}
(pgAdmin screenshot omitted: management command execution completed where the arrow is.)
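For what it's worth, one direction I am considering (a sketch only; the per-object wrapper is an assumption about how the work could be split, not my actual manager method) is to close the thread-local DB connection at the end of each unit of work, since close_old_connections() only affects the thread that calls it:

import concurrent.futures

from django.core.management.base import BaseCommand
from django.db import connections

from data_model.models import DataModel


def _process(obj):
    try:
        obj.do_something_interesting()  # hypothetical per-object method, for illustration
    finally:
        # Django's connection handler is thread-local, so this closes whatever
        # connection this worker thread opened
        connections.close_all()


class Command(BaseCommand):
    help = "Do something interesting, closing thread-local connections afterwards"

    def handle(self, *args, **kwargs):
        with concurrent.futures.ThreadPoolExecutor(10) as executor:
            list(executor.map(_process, DataModel.objects.all()))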
I created environment variables for my Django project inside my pipenv virtual environment's bin/activate (Linux) or Scripts\activate (Windows) file. I made the necessary changes in the settings file, and I exited and re-activated the virtual environment, but I'm still getting a KeyError. (I'm working on a Windows machine.)
variables in settings.py
SECRET_KEY = os.environ['SECRET_KEY']
EMAIL_HOST_PASSWORD = os.environ['EMAIL_HOST_PASSWORD']
Environment variables in virtualenv\scripts\activate file:
export SECRET_KEY= "mysecretkey"
export EMAIL_HOST_PASSWORD= "mypassword"
error
File "C:\Users\Dell\.virtualenvs\team-272-SMES-Server-dSgdZ4Ig\lib\os.py", line 673, in __getitem__
raise KeyError(key) from None
KeyError: 'SECRET_KEY'
Make sure you have "SECRET_KEY" in your os.environ
Use this code to check if you have "SECRET_KEY" there:
import os
import pprint

# Get and print the user's environment variables
env_var = os.environ
print("User's environment variables:")
pprint.pprint(dict(env_var), width=1)
You are probably missing "SECRET_KEY" in the environment variable list. You can add a variable:
# importing os module
import os
# Add a new environment variable
os.environ['GeeksForGeeks'] = 'www.geeksforgeeks.org'
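If you would rather fail with a clearer message than a bare KeyError, here is a small sketch (the error wording is mine):

import os

SECRET_KEY = os.environ.get("SECRET_KEY")
if SECRET_KEY is None:
    # The variable never made it into the environment: check that the activate
    # script was re-run and that its syntax matches your shell (export vs set)
    raise RuntimeError("SECRET_KEY is not set in the environment")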
On a Windows server, I recommend creating a JSON (or YAML) file with all your database and app secrets. I personally prefer JSON, so an example of one is
{
    "SECRET_KEY": "...",
    "MYSQL_DBUSER": "jon",
    "MYSQL_PW": "...",
    ...
}
Then in your settings.py you should add something like:
import json

with open("config.json") as config_file:
    config = json.load(config_file)
Then, to load your project's secrets, index them by variable name:
SECRET_KEY = config['SECRET_KEY']
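If missing keys should fail loudly at startup, a small helper along these lines (my own sketch, not part of the original suggestion) can wrap the lookup:

from django.core.exceptions import ImproperlyConfigured


def get_secret(name):
    """Fetch a value from the config dict loaded above, failing loudly if it is missing."""
    try:
        return config[name]
    except KeyError:
        raise ImproperlyConfigured(f"Missing secret in config.json: {name}")


SECRET_KEY = get_secret("SECRET_KEY")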
I can't seem to access my env variables' values inside my app. I'm loading my env variables into my app with a .env file and the dotenv package; I learned about it here.
My .env
FLASK_ENV=development
FLASK_APP=app.py
DEBUG=ON
TESTING=False
I want to use the value of the TESTING variable inside my app and run certain code based on whether it is True or False.
How can I get these values? The docs say
Certain configuration values are also forwarded to the Flask object so
you can read and write them from there: app.testing = True
But I get module 'app' has no attribute 'testing'
When I log app by itself, I see a Flask object. When I log something like app.Flask, I see the env variables, but they appear like this, with no reference to the current value:
{'__name__': 'TESTING', 'get_converter': None}
I want to be able to do something like:
app.testing => False
app.FLASK_ENV => development
and then eventually:
if app.testing == True:
<do something>
PS - I know the app loads this .env file okay because if I remove the values the environment changes back to production, the default.
#settings.py
from pathlib import Path # python3 only
from dotenv import load_dotenv
load_dotenv(verbose=True)
env_path = Path('.') / '.env'
load_dotenv(dotenv_path=env_path)
import os
print(os.environ['TESTING'])
The equivalent in JS is process.env.
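For reference, a sketch of reading the value straight from the environment and converting it to a boolean myself (the set of accepted truthy strings is my own choice, not from the Flask docs):

import os

from dotenv import load_dotenv

load_dotenv()

# os.environ values are always strings, so the string "False" would still be truthy;
# convert explicitly before branching on it
TESTING = os.environ.get("TESTING", "False").lower() in ("1", "true", "yes", "on")

if TESTING:
    print("running test-only code")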
I have a problem with django channels.
My Django app was running perfectly with WSGI for HTTP requests.
I tried to migrate to Channels in order to allow websocket requests, and it turns out that after installing channels and running ASGI (daphne) and a worker, the server answers with error 503 and the browser displays error 504 (timeout) for the HTTP requests that were previously working (the admin page, for example).
I read all the tutorials I could find and I do not see what the problem could be. Moreover, if I run with "runserver", it works fine.
I have an Nginx in front of the app (on a separate server), working as proxy and loadbalancer.
I use Django 1.9.5 with asgi-redis>=0.10.0, channels>=0.17.0 and daphne>=0.15.0. The wsgi.py and asgi.py files are in the same folder. Redis is working.
The command I was previously using with WSGI (and which still works if I switch back to it) is:
uwsgi --http :8000 --master --enable-threads --module Cats.wsgi
The command that works using runserver is:
python manage.py runserver 0.0.0.0:8000
The commands that fail (for the same requests that work with the two commands above) are:
daphne -b 0.0.0.0 -p 8000 Cats.asgi:channel_layer
python manage.py runworker
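For completeness, Cats/asgi.py follows the standard Channels asgi.py from the docs of that era; shown here as a sketch, since the actual file isn't included (the settings module name is assumed from the project name):

# Cats/asgi.py
import os

from channels.asgi import get_channel_layer

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "Cats.settings")  # assumed settings module

channel_layer = get_channel_layer()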
Other info:
I added 'channels' to INSTALLED_APPS (in settings.py).
Other relevant settings.py info:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "ROUTING": "Cats.routing.app_routing",
        "CONFIG": {
            "hosts": [(os.environ['REDIS_HOST'], 6379)],
        },
    },
}
Cats/routing.py
from channels.routing import route, include
from main.routing import routing as main_routing
app_routing = [
    include(main_routing, path=r"^/ws/main"),
]
main/routing.py
from channels.routing import route, include
http_routing = [
]

stream_routing = [
    route('websocket.receive', 'main.consumers.ws_echo'),  # just for testing, once it works
]

routing = [
    include(stream_routing),
    include(http_routing),
]
main/consumers.py
# this consumer is just for testing, once things work
def ws_echo(message):
    message.reply_channel.send({
        'text': message.content['text'],
    })
Any idea what could be wrong? All help much appreciated, thank you!
EDIT:
I tried a new thing:
python manage.py runserver 0.0.0.0:8000 --noworker
python manage.py runworker
And this does not work, while python manage.py runserver 0.0.0.0:8000 was working...
Any idea that could help?
Channels will use default views for un-routed requests.
Assuming your JavaScript is used correctly, I suggest you use only your default Cats/routing.py file, as follows:
from channels.routing import route
from main.consumers import *

app_routing = [
    route('websocket.connect', ws_echo, path="/ws/main"),
]
Or, with reverse to help with your path:
from django.urls import reverse
from channels.routing import route
from main.consumers import *

app_routing = [
    route('websocket.connect', ws_echo, path=reverse('main view name')),
]
I also think your consumer should be changed. When the browser connects over websockets, the server should first handle adding the message's reply channel, something like:
import json

from channels import Group


def ws_echo(message):
    Group("notifications").add(message.reply_channel)
    Group("notifications").send({
        "text": json.dumps({'testkey': 'testvalue'})
    })
The send function should probably be called on a different event, and the "notifications" Group should probably be changed so there is a channel dedicated to each user, something like:
import json

from channels import Group
from channels.auth import channel_session_user_from_http


@channel_session_user_from_http
def ws_echo(message):
    Group("notify-private-%s" % message.user.id).add(message.reply_channel)
    Group("notify-private-%s" % message.user.id).send({
        "text": json.dumps({'testkey': 'testvalue'})
    })
If you're using heroku or dokku make sure you've properly set the "scale" to include the worker process. By default they will only run the web instance and not the worker!
For heroku
heroku ps:scale web=1:free worker=1:free
For dokku create a file named DOKKU_SCALE and add in:
web=1
worker=1
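For reference, a sketch of the Procfile both platforms read (process commands taken from the question's daphne/runworker lines; on Heroku the port normally comes from $PORT):

web: daphne -b 0.0.0.0 -p $PORT Cats.asgi:channel_layer
worker: python manage.py runworker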
See:
http://blog.codelv.com/2017/10/timouts-django-channels-on-dokku.html
https://blog.heroku.com/in_deep_with_django_channels_the_future_of_real_time_apps_in_django