Django model translation: store translations in the database or use gettext?

I'm in the middle of internationalizing (I18N) a Django website.
I've selected two potentially good Django apps:
django-modeltranslation, which modifies the DB schema to store translations
django-dbgettext, which inspects DB content to create .po files and uses gettext
From your point of view, what are the pros and cons of these two techniques?

If you want to let users of your app (or third-party translators) easily update the translations without code changes, then go for one of the solutions that stores the translations in the database.
If you instead want greater quality control (version control, several sets of eyes, etc.), then use gettext. With gettext you can also control which strings you want to translate.
Just my 2c.

django-modeltranslation is best for storing translated values: you go into the Django admin and enter the translated value there.
If you are using django-dbgettext, on the other hand, you don't need to enter any values in the Django admin; you can use Rosetta for that. If some value isn't being picked up for translation and you want it translated, you can register its model in "dbgettext_registration.py", then run "python manage.py dbgettext_export" followed by "python manage.py compilemessages".
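For reference, django-modeltranslation is typically configured through a translation.py module in the app. A minimal sketch, assuming a hypothetical NewsItem model with title and body fields:

# translation.py (NewsItem and its field names are assumptions)
from modeltranslation.translator import translator, TranslationOptions

from .models import NewsItem

class NewsItemTranslationOptions(TranslationOptions):
    fields = ('title', 'body')  # one extra column per language gets added, e.g. title_en, title_fr

translator.register(NewsItem, NewsItemTranslationOptions)

After updating the database schema to add the new columns, the admin exposes one field per configured language.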

http://packages.python.org/django-easymode/ combines the two:
http://packages.python.org/django-easymode/i18n/index.html
http://packages.python.org/django-easymode/i18n/translation.html
Gettext is used to translate large amounts of data, and the admin is used for day-to-day updates.

I would suggest you always use files for your translations. It's portable and doesn't have unknown impacts on DB performance (especially an issue when using "magic" packages that monkey-patch your DB schema).
This package looks simple and extensible: https://github.com/ecometrica/django-vinaigrette
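If I remember its README correctly (treat the exact call as an assumption rather than verified API), registering model fields for translation looks roughly like this:

# e.g. in models.py; Pepper and its fields are placeholders
import vinaigrette

vinaigrette.register(Pepper, ['name', 'description'])

The idea, per the project's description, is that the registered field values then get picked up into your regular .po files alongside template and Python strings.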

Related

Django: better (simpler) migration system alternative?

Is there an alternative to the built-in Django migration system which doesn't try to be intelligent?
Ideally, I would like the following:
Simple DSL for typical operations (add/change/remove column, add/change/remove table, add/change/remove index, etc., including raw SQL as a last resort); a rough sketch of what I mean is at the end of this list
Auto-revert for these operations (unless raw SQL is used)
Single folder with all migrations for the whole project (so that when I refactor some 'app', e.g. split it into two apps, I don't have to change the migrations). No boilerplate for specifying migration dependencies: each migration is assumed to depend on every preceding one.
No 'state' management. Just run the DSL statements.
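Something like this purely hypothetical sketch is what I have in mind (none of these names refer to an existing package):

# migrations/0042_add_published_at.py -- hypothetical DSL, for illustration only
from hypothetical_migrator import migration

with migration() as m:
    m.add_column('blog_post', 'published_at', 'timestamp NULL')
    m.add_index('blog_post', ['published_at'])
    # raw SQL as a last resort; not auto-revertible
    m.raw_sql("UPDATE blog_post SET published_at = created_at")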

How can flask-whooshalchemy index manually imported data?

I'm using flask-whooshalchemy on SQLite and manually imported a lot of data, and now Whoosh can't find any of it. I think it's because Whoosh hasn't indexed any of the data, right? How can I add a Whoosh index for that data manually?
You can try my fork: https://github.com/Revolution1/Flask-WhooshAlchemyPlus
Just run
$ pip install flask_whooshalchemyplus
and then:
from flask_whooshalchemyplus import index_all
index_all(app)
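For context, a fuller sketch of how this fits together; the model, its fields, and the config values here are assumptions:

# rebuild_index.py (illustrative; adjust to your own app and models)
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_whooshalchemyplus import index_all

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///app.db'
app.config['WHOOSH_BASE'] = 'whoosh'  # directory where the index files live

db = SQLAlchemy(app)

class Post(db.Model):
    __searchable__ = ['title', 'body']  # fields Whoosh should index
    id = db.Column(db.Integer, primary_key=True)
    title = db.Column(db.Text)
    body = db.Column(db.Text)

index_all(app)  # (re)indexes existing rows for models that declare __searchable__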
Have a look at https://gist.github.com/davb5/21fbffd7a7990f5e066c
I've just written this to solve the same issue - rebuild search indices after a bulk data import.
It won't work out of the box for anyone else (my "lib" import contains all of my third-party libraries, and you'll need to specify your Flask-SQLAlchemy models in the if __name__ == "__main__" block), but it should be enough to get you started.
As stated in the file comments, you should consider deleting your search.db folder (WHOOSH_BASE) as this script doesn't remove deleted data, only re-indexes the current data set.
I've found it much quicker to import all of my data using SQLAlchemy Core and then run this script than to import the data via the SQLAlchemy ORM with on-the-fly Whoosh index updates (44s vs 48m for my data set).
The code for the extension is pretty light; you can view it on GitHub. From looking at it, it appears to just watch for changes when SQLAlchemy flushes the session, so externally entered data won't be indexed automatically.
Depending on the amount of data, and if this is a one-off data load, it might be easiest to just delete the Whoosh index (by default a directory called 'whoosh_index'), as it looks like the extension will re-index everything if that index isn't found (see lines 154-165).

Django and Dynamic Example Data

I'm trying to find a way to easily generate an example/demonstration data set from initial_data.json in Django.
Essentially, the fixtures and initial_data.json do exactly what I need, except that the dates are static....
My app uses dates to display/sort otherwise easily generated information (comments, scores, etc.), and I'd like to create a thorough data set in order to demonstrate the app's functions to prospective clients; the problem arises with the dates. Even if I run syncdb (which automatically loads my initial_data.json), the dates are static, so all the information will relate to those specific dates rather than to today. As time passes, that data will become less visible in the app and will therefore not fully demonstrate its abilities to potential clients.
Is there an easy way to update the date information in initial_data.json so that the dates remain relevant to the current real date, and then run syncdb again with those new dates? (Assume that this is all on a local machine merely as a demonstration to clients, not on a server, production or otherwise.)
I hope this makes sense?!
You might be better off writing a function (maybe a management command) to generate some dummy data and save it to your (temporary?) database.
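A minimal sketch of such a management command; the Comment model, its field names, and the date offsets are assumptions:

# myapp/management/commands/load_demo_data.py
from datetime import timedelta

from django.core.management.base import BaseCommand
from django.utils import timezone

from myapp.models import Comment  # hypothetical model with text/created fields

class Command(BaseCommand):
    help = "Create demo data with dates relative to today"

    def handle(self, *args, **options):
        now = timezone.now()
        for days_ago in range(10):
            Comment.objects.create(
                text="Demo comment from %d day(s) ago" % days_ago,
                created=now - timedelta(days=days_ago),
            )
        self.stdout.write("Demo data loaded")

Running python manage.py load_demo_data before a demo then gives you data that is always anchored to the current date.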
OK, my solution was to use django-mockups: https://github.com/sorl/django-mockups
It adds random data to your tables (all of them, or only those specified by the user), respecting the field types (text, email, URL, etc.) and the max_length specified on those fields: it inserts lorem ipsum text, correctly formatted email addresses, and so on.
Very easy to use; it can be run through a cron job or manually as and when required. Perfect.

Storing important singular values in Django

So I'm working on a website where there are a couple important values that get used in various places throughout the site. For example, certain important dates, like the start and end dates for registration.
One way I could do this is to make a model that stores these values, but that sounds like overkill (since I'd only have one instance). Another way is to store these values in the settings.py file, but if I wanted to change them, it seems I would need to restart the web server for the changes to take effect. I was wondering what the best practice in Django is for handling this kind of thing.
You can store them in settings.py. While there is nothing wrong with this (you can even organize your settings into multiple files if you have too many custom settings), you're right that you cannot change them at runtime.
We were solving the same problem where I work and came up with a simple app called django-constance (you can get it from GitHub at https://github.com/comoga/django-constance). It lets you keep your settings in settings.py, but once you need them to be configurable at runtime, you can switch to a Redis data store with a Django admin frontend. You can even use the values from settings as the defaults. I suggest you try this app out.
The changes to your code are pretty minimal. As pasted from the docs, you initialize your dynamic settings like this:
CONSTANCE_CONFIG = {
    'MY_SETTINGS_KEY': (42, 'the answer to everything'),
}
And then, instead of importing settings from django.conf, you do this:
from constance import config

if config.MY_SETTINGS_KEY == 42:
    answer_the_question()
If you want a specific set of variables available in all of your templates, what you are looking for is context processors.
http://docs.djangoproject.com/en/dev/ref/templates/api/#writing-your-own-context-processors
More links
http://www.b-list.org/weblog/2006/jun/14/django-tips-template-context-processors/
http://blog.madpython.com/2010/04/07/django-context-processors-best-practice/
The code for your context processors can live anywhere in your project. You just have to add it to your settings.py under TEMPLATE_CONTEXT_PROCESSORS.
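A minimal sketch of such a context processor; the module path and the setting names are assumptions:

# myapp/context_processors.py (hypothetical module)
from django.conf import settings

def site_constants(request):
    # These values become available in every template rendered with RequestContext.
    return {
        'REGISTRATION_START': getattr(settings, 'REGISTRATION_START', None),
        'REGISTRATION_END': getattr(settings, 'REGISTRATION_END', None),
    }

You would then add 'myapp.context_processors.site_constants' to TEMPLATE_CONTEXT_PROCESSORS.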
You could define your constants in your settings.py, or even in a separate constants.py, and just do
from constants import *
However, as you mentioned, you would need to reload your server each time the settings are updated. I think you first need to figure out how often you will be changing these settings. Is it worth the extra effort to be able to reload the settings automatically?
If you wanted the updated settings to take effect automatically, you could do the following (a rough sketch follows after these steps):
Store the settings in the DB
Upon save/change, write the output to a file
Have settings.py / constants.py read that file
Reload the server
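A rough sketch of those steps; the model, the signal handler, and the file path are assumptions, not an existing package:

# myapp/models.py (illustrative only)
import json

from django.db import models
from django.db.models.signals import post_save
from django.dispatch import receiver

class SiteSetting(models.Model):
    key = models.CharField(max_length=100, unique=True)
    value = models.CharField(max_length=255)

@receiver(post_save, sender=SiteSetting)
def dump_settings(sender, **kwargs):
    # Write all settings to a file that settings.py / constants.py reads at startup.
    data = dict(SiteSetting.objects.values_list('key', 'value'))
    with open('/path/to/site_settings.json', 'w') as fh:
        json.dump(data, fh)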
In addition, have a look at the Mezzanine project, which allows you to update settings from the Django admin interface and will reload them as well.
See: http://mezzanine.jupo.org/docs/configuration.html
If the variables you need will be updated infrequently, I suggest just storing them in settings.py and adding a custom context processor.
If you are using source control such as Git, updating will be quite easy: you can just update the file and push to your server. For really simple reloading of the server, you could also create a post-receive hook for Git that automatically reloads the server when new code is pushed.
I would only suggest the other option if you are updating the settings fairly regularly.

What is a sane way to perform a radical Django Model migration in a production environment?

I have an existing Django web app that is in use. I have to radically migrate one key model to a completely new design, but I want to preserve all of the existing data for that model and migrate it to the new records in production when ready to deploy.
I can afford to bring my website down for a few hours one night and do whatever I need to do to migrate. What are some sane ways I can do this migration?
It seems any migration would need to:
1) Dump all of the existing data into some format, such as SQL, JSON, XML
2) Migrate the model to the new format
3) Reload the data into the new model using a conversion script
I also thought of trying to store all of the existing data in some other model called "OldModel" (if Model is the name of the existing model) and then migrating the data live.
There is a project to help with migrations that I've heard of: South.
Having said that, I admit we've not used it. We still plan our migrations using a file of SQL statements. Madness, I know, but it has the advantage of testability. You can run it as many times as necessary during development and staging testing before the "big deploy". It can be source controlled, diffed, etc. It can also, therefore, be called from a larger deployment script. Of course, we back up production before running it :-)
If your database does journaling, using the old-fashioned method has the added advantage that there is a transaction history that can be rolled back.
Experiments we've run with JSON, XML and "OldModel" -> "NewModel" style dumps have scaled pretty poorly. Mind you, YMMV... we have quite a large database. By using a script, you can run on your production database without having to offload or reload vast amounts of data. This way even a complicated migration can take seconds, rather than hours.
There are around 5 or 6 tools to help automate some portion of migrations. Several of them are listed in this question and I'll add the others just for completeness.
Next, see S. Lott's answer to this question about migration workflows for a great idea on using version numbers in the model name to make migrations easier, including structuring a standalone script to properly convert the tables. To my mind this is vastly superior to serializing the data for export and then trying to build your new tables by importing.
Finally, I haven't been able to think of a way to do a hot migration properly and haven't seen any hints from anywhere else either, so maintenance downtime is inevitable.
Make all migrations in steps!
If you need to add a field, go ahead and add it, with a default value or being optional. This is safe.
If you need to make an existing optional field required, give it a default first.
If you need to make an existing field with a default not have a default, drop the default after fixing all the code that creates instances.
If you need to change the type of a field, first add a new field that inherits its value from the current one. Then run a script to populate the new field on the existing instances (see the sketch below). Third, change all the code that uses the old field to use the new one. Finally, when no code is left using the original field, you can drop it.
For every situation there is a small step you can make. For every bigger change, you can break it down into little ones. This is one place iterative development pays off. Keep good backups in place and don't be afraid to push often! Make the small changes quickly to see if they work.
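A sketch of that populate script; MyModel, the field names, and the int() conversion are assumptions that depend on your data:

# One-off script: copy values from old_field into the newly added new_field.
from myapp.models import MyModel  # assumed model with both fields defined

for obj in MyModel.objects.filter(new_field__isnull=True).iterator():
    obj.new_field = int(obj.old_field)  # use whatever conversion your data needs
    obj.save()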
If you are more comfortable with the Django ORM than with raw SQL, you might consider using Model -> BackupModel -> TestModel -> Model, where all but the last step can be performed without dropping data.
def backup(InModel, OutModel):
    in_objs = InModel.objects.all()
    for obj in in_objs:
        out_obj = OutModel.convert_from(InModel, obj)
        out_obj.save()
Here, you would just make sure that all your models have convert_from methods implemented. These should all be trivial conversions except for BackupModel -> TestModel. In the other cases, nothing but the class would change, all data being identically preserved.
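For example, a trivial convert_from might look like the sketch below (field names are assumptions); only the BackupModel -> TestModel conversion would need real logic:

from django.db import models

class BackupModel(models.Model):
    title = models.CharField(max_length=200)
    created = models.DateTimeField()

    @classmethod
    def convert_from(cls, InModel, obj):
        # Identity copy: same fields, different table.
        return cls(title=obj.title, created=obj.created)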
The advantage to this is that before you go rewriting all your interfaces, you can play around with TestModel and make sure that your conversions were what you thought they'd be. If everything goes wrong, you convert from BackupModel->Model, and everything is okay. In a worst-case scenario, you give up on Django's ORM, run back to SQL, and simply rename all your tables that begin with backupmodel__* to model__* in your database.
Disclaimer: I've never done this.