What would be the fastest way to mark (I mean, marking them with the appropriate {% trans "" %} tag) all the strings (there are a lot) in the templates of a Django project that hasn't been i18n'd yet?
AFAIK, there is nothing faster than using PyCharm as said here. Is that right?
Check out https://github.com/rory/django-template-i18n-lint.
I used it as part of a 'code quality' unit test suite to ensure we don't leave strings untagged for i18n in a particular project (and also to help spot which strings needed fixing up when first starting the retrospective i18n process). There's also a -r option when you run it that should automatically make the changes.
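For reference, the 'code quality' check was just a test that runs the linter over the template directories and fails if it reports anything. A rough sketch, assuming the linter is invoked as a command-line script named django-template-i18n-lint (the exact entry point and arguments may differ in your setup):

```python
import subprocess
import unittest

class TemplateI18nLintTest(unittest.TestCase):
    def test_templates_have_no_untagged_strings(self):
        # Run the linter over the templates directory and fail on any
        # reported untagged string. Adjust the command to match how the
        # linter is actually installed/invoked in your project.
        result = subprocess.run(
            ["django-template-i18n-lint", "templates/"],
            capture_output=True, text=True,
        )
        self.assertEqual(
            result.stdout.strip(), "",
            msg="Untagged template strings found:\n" + result.stdout,
        )
```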
However, I didn't use that auto-fix option -- I wanted more control over things (e.g. blocktrans vs. trans), so I just kept running my tests and fixing the various places where the linter found missing i18n markup. I also did something similar for gettext imports in Python files, string markup in Python files, and named variable placeholders in strings, to make it less likely that translations would mess up word order.
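On the word-order point, named placeholders let translators move the variables around freely in the translated sentence. A minimal sketch (ugettext on Django versions of that era, plain gettext on current ones; the message text is just an example):

```python
from django.utils.translation import ugettext as _

def unread_message_notice(name, count):
    # Named placeholders: the translator can reorder %(name)s and
    # %(count)d in the translated string without breaking formatting,
    # which positional %s placeholders would not allow.
    return _("Hello %(name)s, you have %(count)d new messages.") % {
        "name": name,
        "count": count,
    }
```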
(The project that I retrospectively i18n'd was three years old and pretty large as a result -- it took five weeks to get it all straight. I hope yours takes less time.)
PS: template-i18n-lint also has a sibling: https://github.com/rory/python-pylint-i18n
django-registration is missing some translations for German.
See the GitHub search for "". The translations are there, but "broken".
I don't want to fork the project or change the localization files locally.
Is it possible to provide translations for some strings in my app/project?
You have already mentioned the two methods that are candidates for translating a string. If you do not want to fork the project or change its localization files, then I believe there is no other way of translating it.
Last-resort method: create an identically named translation file in your own project that includes the to-be-translated strings from django-registration, and add your translations there.
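This works because, on reasonably recent Django versions, the directories in LOCALE_PATHS are consulted before the translation catalogs shipped inside installed apps, so a catalog in your project can shadow django-registration's strings (you still need to run compilemessages). A minimal sketch of the settings side; the paths are just illustrative:

```python
# settings.py
import os

PROJECT_DIR = os.path.dirname(os.path.abspath(__file__))

# Django checks these directories first when resolving translations,
# so a msgid that also appears in django-registration's catalog is
# served from locale/de/LC_MESSAGES/django.po in the project instead.
LOCALE_PATHS = (
    os.path.join(PROJECT_DIR, "locale"),
)
```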
IMO the only way is to fork the project, run ./manage.py makemessages, and voila! Translations are there. Another thing you can do is try to contribute to this package by forking it first and then making a pull request! That's the beauty of open source!
I just got done reading the Django docs on internationalization and something is bugging me.
It seems like the default language strings are themselves used as the keys for retrieving translations. That's perfectly fine for short text, but for paragraphs, it seems like poor design. Now, whenever I change the English (default) text, my keys change and I need to manually update my .po file?
It seems like using keys like "INTRO_TEXT" and retrieving the default language from its own .po file is the right approach. How have others approached this problem, and what has worked well for you?
Yes, you will have to manually update the PO files, but most of the time it will be limited to removing the fuzzy marks on out-of-date translations (gettext marks a translation as fuzzy when its original version is modified).
I personally prefer this approach, as it keeps the text content in the source code, which makes it much more readable (especially for HTML). I also find it hard to come up with good, concise string identifiers; poorly named identifiers are a headache.
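To make the trade-off concrete, here is a rough sketch of the two styles (the identifier name is invented for illustration):

```python
from django.utils.translation import ugettext as _

# gettext/Django style: the English source text is the msgid. Changing
# the English wording changes the key, so the entry becomes fuzzy in
# every .po file and needs review.
intro = _("Welcome! Here you can track your orders and invoices.")

# Key-based style: a symbolic identifier is the msgid, and even the
# English text lives in its own catalog. The key stays stable, but the
# source code no longer shows the actual wording.
intro = _("INTRO_TEXT")
```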
Django-rosetta will be of great help if you do not want to edit PO files by hand or want to delegate translations to non-developers.
I have a reasonably simple Django (1.1) site where I need some basic interface and other texts to be translated between two languages. I've created the .po files using manage.py makemessages, translated them (using Poedit), and compiled the .mo files using manage.py compilemessages, as outlined in the i18n docs for Django.
But the problem is: most strings still show up in the original language...
I checked that the .po files actually contain all the strings.
I checked that the .mo files were freshly generated after the last translation effort.
The language does actually change when I switch using the getlang() method.
A few strings do end up being translated when I switch.
But most don't...
Not really sure where else to look... Is there any app that I can use to check whether the compiled .mo files are valid and complete, for instance? Could these strings be cached? (I'm not using any caching middleware.)
Found it!! While pulling my hair out trying to figure out what was causing my woes, I commented out django.middleware.locale.LocaleMiddleware from my MIDDLEWARE_CLASSES and refreshed the page in an attempt to try everything. Obviously that just turned off translation altogether, but when I turned it back on again, all my fine translated strings were showing up as they should have been all along.
So I'm guessing something, somewhere gets compiled/cached when you turn on the locale middleware, and the only way to refresh it is to turn it off and on. Restarting the server didn't help, so this is a bit counterintuitive, but who cares -- it works! :)
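For anyone hitting the same symptom, the position of LocaleMiddleware in the stack also matters. A rough sketch of the relevant part of a Django 1.1-era settings.py (the rest of the stack is omitted):

```python
# settings.py
MIDDLEWARE_CLASSES = (
    "django.contrib.sessions.middleware.SessionMiddleware",
    # LocaleMiddleware should come after SessionMiddleware (it uses the
    # session to pick the language) and before CommonMiddleware (which
    # needs an active language to resolve URLs).
    "django.middleware.locale.LocaleMiddleware",
    "django.middleware.common.CommonMiddleware",
    # ...
)
```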
I am building an application in Django that already uses a lot of hardcoded strings. They are mostly in templates, but some are also in JS files and a few can be found inside the code. Now, every time some string needs to be changed, people come to us and we have to waste time finding and changing it. How could I start cleaning this up and moving all those strings into separate files that could be edited by non-programmers?
We keep all hard-coded strings in a separate module. However, since you want users to modify the strings as they like, you would be better off keeping them in the database. I think a simple model with a key (an identifier for the string) and a value (the string itself) will do. Then you can build a simple page where the user selects a string by its identifier and updates it however they want.
As for how to use them in your apps, you can fetch all of them into a dict when your app starts (a proper place may be the init module) and look them up from there.
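A minimal sketch of that idea; the model and field names are invented, and in practice you would probably add caching or a reload hook:

```python
# strings/models.py
from django.db import models

class UIString(models.Model):
    # Identifier referenced from code/templates, e.g. "welcome_banner".
    key = models.CharField(max_length=100, unique=True)
    # The actual text, editable by non-programmers through the admin
    # or a simple edit view.
    value = models.TextField()


# Called once at startup (e.g. from the app's __init__ or a helper
# module) to build a plain dict for cheap lookups.
def load_strings():
    return dict(UIString.objects.values_list("key", "value"))
```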
What about using i18n services (gettext)? Even if you are not planning to localize your application, they provide an easy and standard way to separate strings from actual code.
Moreover, since PO is quite a common standard, there are plenty of tools to edit the resource files; one of them (also available on Windows) is Poedit.
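A rough sketch of what that looks like in plain Python, using gettext only to pull the display strings out of the logic (the domain name and file layout are just an example):

```python
import gettext
import os

# Load the "myapp" catalog from ./locale; fall back to the msgids
# themselves if no compiled .mo file is present yet.
LOCALE_DIR = os.path.join(os.path.dirname(__file__), "locale")
translation = gettext.translation("myapp", LOCALE_DIR, fallback=True)
_ = translation.gettext

def confirmation_message(filename):
    # The English text doubles as the lookup key; editors change the
    # wording in the .po file with Poedit instead of touching the code.
    return _("The file %(name)s was saved successfully.") % {"name": filename}
```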
I have a source code of about 500 files in about 10 directories. I need to refactor the directory structure - this includes changing the directory hierarchy or renaming some directories.
I am using svn for version control. There are two ways to refactor: one preserving svn history (using the svn move command) and the other without preserving it. I think refactoring while preserving svn history is a lot easier using Eclipse CDT and the SVN plugin (Visual Studio does not fit at all for directory restructuring).
But right now since the code is not released, we have the option to not preserve history.
Still, there remains the task of changing the include directives for header files wherever they are included. I am thinking of writing a small script in Python that receives a map from current filenames to new filenames and performs the rename wherever needed (using something like sed). Has anyone done this kind of directory refactoring? Do you know of good related tools?
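For what it's worth, here is a rough sketch of the kind of script described above; the mapping and file extensions are placeholders, and it is worth trying it on a scratch copy of the tree first:

```python
import os
import re

# Map from the old include path to the new one; fill in from the
# actual directory reshuffle.
RENAMES = {
    "old/dir/widget.h": "ui/widget.h",
    "util.h": "common/util.h",
}

INCLUDE_RE = re.compile(r'^(\s*#\s*include\s*[<"])([^>"]+)([>"])')

def rewrite_includes(root):
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith((".h", ".hpp", ".c", ".cc", ".cpp")):
                continue
            path = os.path.join(dirpath, name)
            with open(path) as f:
                lines = f.readlines()
            changed = False
            for i, line in enumerate(lines):
                m = INCLUDE_RE.match(line)
                if m and m.group(2) in RENAMES:
                    # Keep everything after the closing quote/bracket
                    # (trailing comments, newline) untouched.
                    lines[i] = m.group(1) + RENAMES[m.group(2)] + m.group(3) + line[m.end():]
                    changed = True
            if changed:
                with open(path, "w") as f:
                    f.writelines(lines)

if __name__ == "__main__":
    rewrite_includes("src")
```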
If you're having to rewrite the #includes to do this, you're doing it wrong. Change all your #includes to use a very simple directory structure, at most two levels deep, using a second level only to organize around architecture or OS dependencies (like sys/types.h).
Then change your makefiles to use -I include paths.
Voila. You'll never have to hack the code again for this, and compiles will blow up instantly if something goes wrong.
As for the history part, I personally find it easier to make a clean start when doing this sort of thing: archive the old one, make a new repository v2, and go from there. The counterargument is when there is a lot of history of changes, or lots of open issues against the existing code.
Oh, and you do have good tests, and you're not doing this with a release coming right up, right?
I would preserve the history, even if it takes a small amount of extra time. There's a lot of value in being able to read through commit logs and understand why function X is written in a weird way, or that this really is an off-by-one error because it was written by Oliver, who always gets that wrong.
The argument against preserving the history can be made in the following cases:
your code might have embarrassing things, like profanity and fighting among developers
you don't care about the commit history of your code, because it's not going to change or be maintained in the future
I did some directory refactoring like this last year on our code base. If your code is reasonably structured to begin with, you can do about 75-90% of the work with scripts written in your language of choice (I used Perl). In my case, we were moving from a set of files all in one big directory to a series of nested directories based on namespaces. So, a file that declared the class protocols::serialization::SerializerBase ended up located at src/protocols/serialization/SerializerBase. The mapping from the old name to the new name was trivial, so doing a find-and-replace on #includes in every source file in the tree was straightforward, although it was a big change. There were a couple of weird edge cases that we had to fix by hand, but that seemed a lot better than either doing everything by hand or writing our own C++ parser.
Hacking up a shell script to do the svn moves is trivial. In tcsh it's foreach F ( $FILES ) ... end to adjust a set of files. Perl and Python offer better utility.
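For example, a small Python sketch along those lines that drives svn move from an old-path-to-new-path mapping (the mapping is a placeholder; with dry_run=True it only prints the commands so you can review them first):

```python
import subprocess

# Old path -> new path inside the working copy; fill in from the
# planned directory layout.
MOVES = {
    "src/old/widget.cpp": "src/ui/widget.cpp",
    "src/old/widget.h": "src/ui/widget.h",
}

def move_files(dry_run=True):
    for old, new in MOVES.items():
        cmd = ["svn", "move", old, new]
        print(" ".join(cmd))
        if not dry_run:
            # svn move records the rename so the file's history is kept.
            subprocess.check_call(cmd)

if __name__ == "__main__":
    move_files(dry_run=True)
```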
It really is worth saving the history, especially when trying to track down some exotic bug. Those who do not learn from history are doomed to repeat it, or some such junk...
As for altering all the files... There was a similar question just the other day over at:
https://stackoverflow.com/questions/573430/c-include-header-path-change-windows-to-linux/573531#573531