Run manage.py with custom database configuration - django

My use case involves running python manage.py migrate with DATABASE_HOST=127.0.0.1 (since I use Cloud SQL Proxy). However, when the application is uploaded and is serving, the database URL needs to change to an actual remote URL.
Right now, when I wish to run the migration command, I upload a special settings.py file that contains the localhost URL. When I deploy to the cloud, I make sure to overwrite that file with a new one (essentially the same file, except the database URL is my remote DB URL) and then upload it.
Is there a better way to achieve this? Something like python manage.py --database_url=127.0.0.1 migrate?

Maybe you should try making a separate file, let's say local_settings.py, in the same directory as settings.py. In that file copy the ALLOWED_HOSTS = ["your IP"] setting (the same idea works for your database settings).
Then, at the bottom of your settings.py, import it inside a try/except block (from .local_settings import *, and pass on ImportError).
But keep the ALLOWED_HOSTS = [] in your settings.py as it is.
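A rough sketch of what I mean (values are placeholders; for your case you would override the database host here instead of, or as well as, ALLOWED_HOSTS):

# local_settings.py -- lives only on the machine where you run migrations; not deployed
ALLOWED_HOSTS = ["your IP"]
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'mydb',
        'USER': 'myuser',
        'PASSWORD': 'mypassword',
        'HOST': '127.0.0.1',  # where the Cloud SQL Proxy listens
        'PORT': '5432',
    }
}

# at the bottom of settings.py
try:
    from .local_settings import *
except ImportError:
    pass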
Hope it helps!

I used jq to modify the JSON file my settings read values from (setting DATABASE_HOST=127.0.0.1 for the proxy), and then put the original JSON file back once I was done running the migrations.
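Roughly like this (the file name and key are placeholders for my actual config):

cp config.json config.json.bak                                   # keep the original
jq '.DATABASE_HOST = "127.0.0.1"' config.json.bak > config.json  # point Django at the proxy
python manage.py migrate
mv config.json.bak config.json                                   # restore the original afterwards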

Passing Django ContentFile to an ImageField doesn't create an image file on the disk on Docker

The code below works locally when I don't use Docker containers. But when I try to run the same code/project in Docker, while everything else works great, the part of the code below doesn't work as expected and doesn't save the ContentFile as an image file to disk. The ImageField returns the path of the image, but the file doesn't actually exist.
from django.core.files.base import ContentFile
...
photo_as_bytearray = photo_file.get_as_bytearray() # returns bytearray
cf = ContentFile(photo_as_bytearray)
obj.photo.save('mynewfile.jpg', cf) # <<< doesn't create the file (only when running in Docker)
Not sure if there's something I need to change for Docker.
First of all, the origin of the problem was my Celery worker. It was using the same image as my web container, but...
While my web container has a volume associated with the web folder (./web:/path/to/app) to support hot reloading in the development environment, the Celery worker was not using this volume. I thought that using the same image (mywebimage:dev) for both would be enough, but it was not.
So now I'll edit my docker-compose file to make them both use the same (really the same) files, roughly like the sketch below. Celery was using a copy of the web directory baked in by the COPY statement in the Dockerfile, not the actual directory I work on and edit. So when the Celery worker created a file, it wasn't being created on the volume that holds the actual web directory.
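Here is a rough sketch of the relevant docker-compose pieces (service names, image tag, command and paths are just placeholders based on my setup):

services:
  web:
    image: mywebimage:dev
    volumes:
      - ./web:/path/to/app
  celery:
    image: mywebimage:dev
    command: celery -A myproject worker -l info   # placeholder project name
    volumes:
      - ./web:/path/to/app   # same bind mount, so files the worker writes land in the real web directory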
Hope that helps someone who made the same mistake I did.

Django initial data for built-in app

I'm starting to use the "redirects" app built into Django to replace some existing redirects I have in my urls.py. I was wondering if there was any generally accepted way to include initial data in the code base for other apps. For example, if it was for an app I created I could create a migration file that had a RunPython section where I could load some initial data. With a built-in or third-party app there doesn't seem to be any way to create a migration file to add initial data.
The best I can think of right now is to include a .sql file in my repository with the initial data and just manually import the data as I push the code to the different instances.
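For illustration, this is roughly what I mean for one of my own apps (app, model and field names are just placeholders):

from django.db import migrations

def load_initial_data(apps, schema_editor):
    # create whatever initial rows the app needs
    MyModel = apps.get_model('myapp', 'MyModel')
    MyModel.objects.get_or_create(name='initial value')

class Migration(migrations.Migration):

    dependencies = [
        ('myapp', '0001_initial'),
    ]

    operations = [
        migrations.RunPython(load_initial_data),
    ]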
You can do it by using fixtures.
Create a folder named fixtures in your app directory.
Use this command to create a JSON file with the data you want to use as initial data:
python manage.py dumpdata your_app_name.model_name --indent 2 > model_name.json
Copy this model_name.json into the fixtures folder.
Commit the code to the repo.
Then, after the migrate command, type this command to load the initial data:
python manage.py loaddata model_name.json
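For the built-in redirects app from the question, that would look roughly like this (the pk, site id and paths are made-up examples):

python manage.py dumpdata redirects --indent 2 > redirects.json

[
  {
    "model": "redirects.redirect",
    "pk": 1,
    "fields": {
      "site": 1,
      "old_path": "/old-page/",
      "new_path": "/new-page/"
    }
  }
]

python manage.py loaddata redirects.json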

Django url template tag incorrect for deployment subdirectory when using collectstatic

I am deploying my django project under a subdirectory of my site, e.g.
http://example.com/apps/[django urls]
The problem is when I run collectstatic, a particular plugin I am using (dajaxice) uses the {% url %} tag to create the appropriate javascript code. Since collectstatic doesn't know about the deployment subpath, the reverse lookup is to the root url instead of the subpath. For example, it should be:
/apps/dajaxice/my_func
instead of:
/dajaxice/my_func
Is there a good way to change the way collectstatic does the reverse url without hacking the plugin? The only thing I can think of is to have one url specification for collectstatic that includes the 'apps' subpath and another one that does not for everything else. However, I cannot figure out how to change the settings.py when using collectstatic only. Any suggestions or alternative solutions?
I finally found how to solve this problem. Dajaxice provides a setting to change the url prefix:
DAJAXICE_MEDIA_PREFIX = 'dajaxice'
By defining this setting to include 'apps' in the subpath, we can get the URL we need. The problem is that this prefix must only be altered for the collectstatic command and not when serving a webpage. Therein is the rub.
The solution is to create another settings.py file, we'll call it settings_cli.py. It will look like:
from settings import *
DAJAXICE_MEDIA_PREFIX = 'apps/dajaxice'
Next, we must load the new settings file when executing commands only by changing our manage.py file. Line 6 will now read:
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "MYPROJ.settings_cli")
where it formerly referred to "MYPROJ.settings".
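For context, a minimal manage.py of that era looks roughly like this after the change (MYPROJ is the placeholder project name used above):

#!/usr/bin/env python
import os
import sys

if __name__ == "__main__":
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "MYPROJ.settings_cli")

    from django.core.management import execute_from_command_line

    execute_from_command_line(sys.argv)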
Now, when we run:
python manage.py collectstatic
it will use our special prefix, but not affect the normal prefix needed when serving webpages.
For more info on multiple settings files, a good reference is http://www.djangobook.com/en/2.0/chapter12.html

Configuring postgresql database for local development in Django while using Heroku

I know there are a lot of questions floating around there relating to similar issues, but I think I have a specific flavor which hasn't been addressed yet. I'm attempting to create my local postgresql database so that I can do local development in addition to pushing to Heroku.
I have found basic answers on how to do this, for example (which I think is a wee bit outdated):
DATABASES = {'default': dj_database_url.config(default='postgres://fooname:barpass@localhost/dbname')}
This solves the "ENGINE" is not configured error. However, when I run 'python manage.py syncdb' I get the following error:
OperationalError: FATAL: password authentication failed for user "foo"
FATAL: password authentication failed for user "foo"
This happens for all conceivable combinations of username/pass. So my ubuntu username/pass, my heroku username/pass, etc. Also this happens if I just try to take out the Heroku component and build it locally as if I was using postgresql while following the tutorial. Since I don't have a database yet, what the heck do those username/pass values refer to? Is the problem exactly that, that I need to create a database first? If so how?
As a side note I know I could get the db from heroku using the process outlined here: Should I have my Postgres directory right next to my project folder? If so, how?
But assuming I were to do so, where would the new db live, how would django know how to access it, and would I have the same user/pass problems?
Thanks a bunch.
Assuming you have Postgres installed, connect via pgAdmin or psql and create a new user. Then create a new database with your new user as the owner (see the psql sketch at the end of this answer). Make sure you can connect via psql, as the new user, to the database. You will then need to set up an env variable in your postactivate file in your virtualenv's bin folder and save it. Here is what I have for the database:
export DATABASE_URL='postgres://{{username}}:{{password}}@localhost:5432/{{database}}'
Just a note: adding this value to your postactivate doesn't do anything by itself; the file is not run upon saving. You will either need to run the export at the $ prompt, or simply deactivate and reactivate your virtualenv.
Your settings.py should read from this env var:
DATABASES = {'default': dj_database_url.config()}
You will then configure Heroku with their CLI tool to use your production database when deployed. Something like:
heroku config:set DATABASE_URL={{production value here}}
(if you don't have Heroku's CLI tool installed, you will need to install it)
If you need to figure out exactly what value you need for your production database, you can get it by logging into Heroku's postgresql subdomain (at the time this is being written, it's https://postgres.heroku.com/), selecting the db from the list and looking at the "Connection Settings : URL" value.
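If you already have the Heroku CLI set up, you may also be able to read the same value from the command line with something like:
heroku config:get DATABASE_URL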
This way your same settings.py value will work for both local and production and you keep your usernames/passwords out of version control. They are just env config values.
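Going back to the first step, creating the local user and database in psql looks roughly like this (names and password are placeholders):

CREATE USER myappuser WITH PASSWORD 'myapppass';
CREATE DATABASE myappdb OWNER myappuser;
-- then check that you can connect as that user, e.g.:
-- psql -U myappuser -h localhost myappdb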

Django, boto, S3 and easy_thumbnails not working in production environment

I'm using Django, django-storages with S3 (boto) in combination with easy-thumbnails. On my local machine, everything works as expected: if the thumbnail doesn't exist, it gets created, uploaded to S3 and saved in the easy-thumbnails database tables. But when I push the code to my production server, it doesn't work: easy-thumbnails outputs an empty image src.
What I already noticed is that when I create the thumbnails on my local machine, the easy-thumbnails path uses backward slashes, while my Linux server needs forward slashes. If I change the slashes in the database, the thumbnails are shown on my Linux machine, but it is still not able to generate thumbnails on the Linux (production) machine.
The simple django-storages test fails:
>>> import django
>>> from django.core.files.storage import default_storage
>>> file = default_storage.open('storage_test', 'w')
Output:
django.core.exceptions.ImproperlyConfigured: Requested setting DEFAULT_FILE_STORAGE, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.
If I do:
>>> from base.settings import staging
>>> from django.conf import settings
>>> settings.configure(staging)
This works (I have a settings directory with 4 settings files: base.py, staging.py, development.py and production.py)
It seems that on my production server the settings file isn't loaded properly (although the rest of the website works fine). Even if I add THUMBNAIL_DEBUG = True to my settings file, easy-thumbnails' debugging still doesn't work (it does work on my local machine).
What can be the problem? I've been debugging for 10+ hours already.
Try refactoring your settings to use a more object-oriented structure. A good example is outlined by David Cramer from Disqus:
http://justcramer.com/2011/01/13/settings-in-django/
You'll put any server-specific settings in a local_settings.py file, and you can store a stripped-down version as example_local_settings.py within your repository.
You can still use separate settings files if you have a lot of settings specific to a staging or review server, but you wouldn't want to store complete database credentials in a code repo, so you'll have to customize the local_settings.py anyways. You can define which settings to include by adding imports at the top of local_settings.py:
from project.conf.settings.dev import *
Then, you can set your DJANGO_SETTINGS_MODULE to always point to the same place. This would be instead of calling settings.configure() as outlined in the Django docs:
https://docs.djangoproject.com/en/dev/topics/settings/#either-configure-or-django-settings-module-is-required
And that way, you know that your settings on your production server will definitely be imported, since local_settings.py is always imported.
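As a rough illustration of that layout (the module path and values are placeholders):

# example_local_settings.py -- stripped-down template kept in the repo;
# copy it to local_settings.py on each server and fill in the real values
from project.conf.settings.dev import *

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'CHANGEME',
        'USER': 'CHANGEME',
        'PASSWORD': 'CHANGEME',
        'HOST': 'localhost',
        'PORT': '',
    }
}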
First, try using:
python manage.py shell --settings=base.settings.staging
to load the shell with the correct settings module (note that --settings takes a dotted module path, not a file path), and then try to debug.
For some reason, S3 and easy thumbnails in the templating language didn't seem to get along with each other ... some path problem which probably could be solved at some point.
My solution (read: workaround) was to move the thumbnail generation into the model, inside the image field itself, for example:
avatar = ThumbnailerImageField(upload_to = avatar_file_name, resize_source=dict(size=(125, 125), crop="smart"), blank = True)
For the sake of completeness:
import os
from django.conf import settings

def avatar_file_name(instance, filename):
    # keep one avatar per user: <username>/avatar.<ext>, deleting any previous file
    path = "%s/avatar.%s" % (str(instance.user.username), filename.split('.')[1])
    if os.path.exists(settings.MEDIA_ROOT + path):
        os.remove(settings.MEDIA_ROOT + path)
    return path