Make Django first search message files (.mo) for current application - django

Django searches for message files (.mo) in the order documented here:
https://docs.djangoproject.com/en/dev/topics/i18n/translation/#how-django-discovers-translations
Is it possible to have Django first search for message files in the locale directory of the app that is currently being used?
I recently discovered a bug in a project that was caused by two apps having the same message id but different translations. All the apps have their own locale directory.

I'm afraid this is not how Django works, for various reasons:
A single phrase should have a single translation. Django aside, this is how gettext (the engine used for translation messages) behaves as well. Besides, it seems reasonable to expect a system to translate a single phrase consistently across all pages; it is still the same phrase in the original language, after all.
What would 'current application' mean? Is the current application the one generating the current view? The translation itself might come from a module of another application. How would Django decide which of these two applications is appropriate? Translations can also be lazy, which adds further complexity to such decisions.
I suggest that you reorder your applications in INSTALLED_APPS to match the translation you prefer, or define a new translation messages path via the LOCALE_PATHS setting to provide a translation of your own.
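For example, a project-level locale directory can be declared in settings.py (the path shown is illustrative); directories listed in LOCALE_PATHS take precedence over the apps' own locale directories:
# settings.py
import os

BASE_DIR = os.path.dirname(os.path.abspath(__file__))  # adjust to your project layout

LOCALE_PATHS = (
    os.path.join(BASE_DIR, 'locale'),
)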
Note: Technically, it is possible to override the default behaviour in django.utils.translation.trans_real by providing a custom implementation of that module (it is the translation() function in this module that implements the selection algorithm). You would then also override the Trans class originally defined in the django.utils.translation.__init__ module so that it returns your custom trans_real module (name this new class MyCustomTrans, for example), and explicitly set it as the translation class somewhere in your project's init module so that it loads early in the code:
from django.utils import translation
translation._trans = MyCustomTrans()
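For illustration only, the delegating class could look roughly like this; my_project.my_trans_real is a hypothetical module of your own that re-implements the functions of trans_real (including translation()) with the lookup order you want:
from my_project import my_trans_real  # hypothetical custom replacement for trans_real

class MyCustomTrans(object):
    def __getattr__(self, name):
        # Delegate every lookup (gettext, activate, ...) to the custom module,
        # mirroring what Django's own Trans class does with trans_real.
        return getattr(my_trans_real, name)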
Now your custom algorithm would be used instead, but this would require a lot of work and, in my opinion, is not worth the hassle.

Sitecore: get translation by language

I'm building an automated task in Sitecore 8.0 Update 2 to automatically send out some emails. These emails need to be in different languages.
I've always used this approach:
Sitecore.Globalization.Translate.TextByDomain("General Dictionary", "some-key");
However, when I try to use this code:
Sitecore.Globalization.Translate.TextByLanguage("some-key", Sitecore.Context.Language);
it doesn't work (even if I simply use the current context language).
I can find little to no documentation about this. What would I have to do to get this to work?
As @jammykam suggested, you may need to wrap your code in the SiteContextSwitcher class, since Sitecore.Context.Database might be referencing the Core database during task agent execution, because the context site is "scheduler".
So your code should be:
using (var switcher = new SiteContextSwitcher(SiteContextFactory.GetSiteContext("website")))
{
    // Rest of your code
}
I've checked the Sitecore.Globalization.Translate.TextByLanguage method and it works.
Could you please check that the translation exists in the 'core' database? Sitecore used to store dictionary items in the 'master' database, but it now stores translations in the 'core' database.
Switch shell to 'core' database.
Go to the Dictionary: /sitecore/system/Dictionary
Find your key by the first letter in the string: "some-key" -> "S".
Check that translation exists for this key.
Select DictionaryKey item > "some-key"
Switch between languages and check whether the translation exists.

In Django, how can I tell my project to read/write in distinct log directories at runtime?

I am working on a Django project that, along with having a database for its models and relations, writes to a log directory called activity_logs outside of the project directory to keep track of formatted user activities, one file for each user. This is an alternative, file-structure-based solution to having a database table carry this information, because it offloads some storage from the DB and makes such activities relatively easy to format and express. Perhaps some of you may recommend storing this kind of data in the database, which is fine, but I still believe there is a question here that I need help answering.
This Django project has multiple apps, each with an extensive test suite. Additionally, there is a logging.py file that encapsulates the logging functionality (writing/reading activities to/from log files), so both the test cases within the test suites and the view functions (and various other utility functions) use these logging functions to store user activities and retrieve them based on model relationships, emulating a user notification system. Since the logging module takes care of this logging, it needs to know where to write, and so we have a directory structure called activity_logs to which it writes user log files, creating one for each new user and deleting one for each user removed from the database. One of the newest changes we would like to make in this project is to create a separate logging directory for testing this logging functionality, something like test_activity_logs, so that writes for test users always go to the test directory and writes for real users go to the regular activity log directory, with no risk of confusing the two.
My problem is this: at runtime, how can I tell the system, from whichever entry point of execution (whether a view function call through the Django test Client object, a test case, an actual HTTP request made via a URL, etc.), when to look inside activity_logs and when to look inside test_activity_logs? It depends solely on whether I am generating new information for a real user or a test user, but a User is a User in our system, and I'm having trouble telling the functions that call the logging functions to write to the test log directory vs. the regular one. For example, one approach I am trying is to pass a keyword argument (kwarg) to the logging functions so that they know which directory to read from or write to, like so:
self.assertTrue(activity_has_been_logged(ACTIVITY_ACCOUNT_CREATED, user.get_profile(), use_test_activity_log_directory=True) == True)
The kwarg use_test_activity_log_directory=True tells the logging function activity_has_been_logged to read from the test activity log directory. Unfortunately, apart from being a little inflexible (but tolerable), this doesn't solve the case where the Django test client object sends a GET or POST request via a URL to a view function that writes activities to log files:
response = client.post(propose_match_url, post) #Can't write to test_log_directory if by default it writes to regular directory!
How do I get the client to pass this kwarg on to those view functions? I think it should be entirely possible, but I'm not sure whether fiddling with these kwargs is the best way; I could instead create a global variable in the project settings file, but that might cause trouble with race conditions around a shared mutable variable.
Your help would be great. Thanks in advance!
So I just solved this problem. The logging file hosting all the logging functionality is really the only place that needs to know where to look (either test_activity_logs or activity_logs), since all other components invoke functions from the logging module to write to and read from these directories. I added a boolean field called is_test to the UserProfile model class that determines whether to look in test_activity_logs (is_test=True) or activity_logs (is_test=False). That way, the logging module only needs to check the is_test field of its UserProfile parameter to determine where to perform its logging. Problem solved!
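A minimal sketch of that approach, with illustrative names for the profile model and the logging helpers:
# models.py
from django.contrib.auth.models import User
from django.db import models

class UserProfile(models.Model):
    user = models.OneToOneField(User, on_delete=models.CASCADE)
    # True only for profiles created by the test suite.
    is_test = models.BooleanField(default=False)

# logging module
import os

ACTIVITY_LOG_DIR = 'activity_logs'
TEST_ACTIVITY_LOG_DIR = 'test_activity_logs'

def get_log_directory(profile):
    # Every read/write helper routes through this single check.
    return TEST_ACTIVITY_LOG_DIR if profile.is_test else ACTIVITY_LOG_DIR

def log_activity(profile, line):
    path = os.path.join(get_log_directory(profile), '%s.log' % profile.user.username)
    with open(path, 'a') as f:
        f.write(line + '\n')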
Check out daemontools if you're on a *nix box, or launchd on OS X. Both can make sure your Django instance stays running in whatever mode you prefer (daemontools has a few more options for that) and can isolate a directory for logging stdout/stderr.
You can set environment variables for each instance to tell your code where log files and temporary files should be created; read them from os.environ, or simply use the current working directory as a base if using daemontools.
With daemontools, the directory is created for you automatically.
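For example, the logging code could pick up the instance-specific directory from the environment; the variable name ACTIVITY_LOG_DIR here is an assumption, not something daemontools defines:
import os

# Set per instance by daemontools/launchd; fall back to the regular directory.
LOG_DIR = os.environ.get('ACTIVITY_LOG_DIR', 'activity_logs')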

Accessing 't' (from r18n) in a rack-unit test of a Sinatra app

When using sinatra-r18n to handle internationalisation, the r18n lib exposes a variable t for use within your helpers, routes and templates, as per these instructions.
I have written a simple unit test using rack-unit to confirm that some of my pluralisations work but the test throws an error claiming t is nil.
I've tried referencing it via app.t, MySillyApp.t (where MySillyApp is the name of my Sinatra app), MySillyApp.settings.t etc and none of them give me access to the t I need.
What I am trying to achieve is a confirmation that my translation files include all the keys I need corresponding to plurals of various metric units my app needs to understand. Perhaps there is a more direct way of testing this without going via the Sinatra app itself. I'd welcome any insight here.
I had a similar task: checking localized strings in my Cucumber scenarios.
I've made a working example.
Here you can find how the strings get translated.
This file helps to understand how to add R18n support to your testing framework:
require 'r18n-core'
...
class SinCucR18nWorld
  ...
  include R18n::Helpers
end
As you can see, I'm using RSpec/Cucumber instead of rack/unit, sorry.

Opinion: Where to put this code in a Django app

Trying to figure out the best way to accomplish this. I have inherited a Django project that is pretty well done.
There are a number of pre-coded modules that a user can include in a page's left well in the admin (a page and a module are both models in this app), e.g. side links, ads, Constant Contact.
A new requirement involves inserting a module of internal links in the same well. These links are not associated with a page in the same way as the other modules; they are a separate many-to-many join, i.e. one link can be reused in a set across all the pages.
The template pseudocode is:
if page has modules:
    loop through modules:
        write the pre-coded content of the module
Since the links need to be in the same well as the modules, I have created a "link placeholder module" with a slug of link-placeholder.
The new pseudocode is:
if page has modules:
    loop through modules:
        if module.slug is "link-placeholder":
            loop through page.links and output each
        else:
            write pre-coded module
My question is: where is the best place to write this output for the links? As I see it, my options are:
Build the output in the template (easy, but it kind of gets messy; the code is nice and neat now)
Build a method on the page model that is called when the link placeholder is encountered, e.g. page.get_internal_link_output. Essentially this would query, build and return the internal link module output.
Do the same thing with a custom template tag.
I am leaning towards 2 or 3, but neither quite seems like the right place to do it. I guess I sometimes get a little confused about the best place to put code in Django apps, though I really do like the framework.
Thanks in advance for any advice.
I'd recommend using a custom template tag.
Writing the code directly in the template is not the right place for that much logic, and I don't believe a model should have template-specific methods added to it. It's better to have template-specific logic live in template-specific classes and functions (e.g. template tags).
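A minimal sketch of option 3, assuming the page-to-links relation is reachable as page.links and using made-up names for the tag module and sub-template:
# yourapp/templatetags/internal_links.py
from django import template

register = template.Library()

@register.inclusion_tag('modules/internal_links.html')
def render_internal_links(page):
    # Query the links joined to this page and hand them to a small sub-template.
    return {'links': page.links.all()}
In the page template, where the link-placeholder slug is matched, you would then just {% load internal_links %} and call {% render_internal_links page %}, keeping the query logic out of both the template and the model.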

Django model translation : store translations in database or use gettext?

I'm in the middle of a Django website's i18n process.
I've selected two potentially good Django apps:
django-modeltranslation, which modifies the DB schema to store translations
django-dbgettext, which inspects DB content to create .po files and uses gettext
From your point of view, what are the pros and cons of these two techniques?
If you want to let users of your app (or third-party translators) easily update the translations without code changes, then go for one of the solutions that stores the translations in the database.
If you instead want greater quality control (version control, several sets of eyes, etc.), then use gettext. With gettext you can also control which strings you want to translate.
Just my 2c.
django-modeltranslation is best for storing translated values: you go to the Django admin and enter the translated values there.
If you are using django-dbgettext instead, you don't need to enter any values in the Django admin; you can use Rosetta for that. If a value does not show up for translation and you want it translated, register the model in dbgettext_registration.py, then run "python manage.py dbgettext_export" followed by "python manage.py compilemessages".
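For reference, a django-modeltranslation registration looks roughly like this (the model and field names are made up); the per-language fields it adds are then edited in the Django admin:
# translation.py in your app
from modeltranslation.translator import translator, TranslationOptions
from myapp.models import Article

class ArticleTranslationOptions(TranslationOptions):
    fields = ('title', 'body')

translator.register(Article, ArticleTranslationOptions)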
http://packages.python.org/django-easymode/ combines the two:
http://packages.python.org/django-easymode/i18n/index.html
http://packages.python.org/django-easymode/i18n/translation.html
Gettext is used to translate large amounts of data, and the admin is used for day-to-day updates.
I would suggest you always use files for your translations. They are portable and don't have unknown impacts on DB performance (especially an issue when using "magic" packages that monkey-patch your DB schema).
This package looks simple and extensible: https://github.com/ecometrica/django-vinaigrette
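For reference, django-vinaigrette's registration is roughly this (model and field names are made up); it then pulls those field values into your regular .po files alongside your other gettext strings:
# typically run at app startup, e.g. in an AppConfig.ready() method
import vinaigrette
from myapp.models import Category

vinaigrette.register(Category, ['name'])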