How to prevent the Google Translate API from translating proper names that contain common words - google-cloud-platform

I am using Google Cloud Platform to create content in an Indian regional language. Some of the content contains building and society names made up of common words, like 'The Nest Glory'. After conversion through the Google Translate API, the building name should only be spelled out in the regional language; instead it is being literally translated. The result sounds funny, and users will never find that building.

The Cloud Translation API can be told to leave parts of the text untranslated by using either of the following HTML tags:
<span translate="no"> </span>
<span class="notranslate"> </span>
This functionality requires the source text to be submitted as HTML.
Access to the Cloud Translation API
How to Disable Google Translate from Translating Specific Words
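A minimal sketch of this approach, assuming the `google-cloud-translate` v2 client; the `wrap_no_translate` helper is my own, and the names to protect are passed in explicitly:

```python
import html

def wrap_no_translate(text, protected_names):
    """Wrap each protected name in a notranslate span so the
    Translation API leaves it untouched (requires HTML input)."""
    escaped = html.escape(text)
    for name in protected_names:
        escaped = escaped.replace(
            html.escape(name),
            '<span translate="no">%s</span>' % html.escape(name))
    return escaped

# The actual API call (needs credentials; shown for illustration only):
# from google.cloud import translate_v2 as translate
# client = translate.Client()
# result = client.translate(
#     wrap_no_translate("Flats available in The Nest Glory",
#                       ["The Nest Glory"]),
#     target_language="hi",
#     format_="html",   # source must be submitted as HTML
# )
```

Strip the spans back out of the translated HTML afterwards if you need plain text.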

After some searching, I think one older approach was the Transliteration API, which Google has deprecated:
Google Transliteration API Deprecated

translate="no"
class="notranslate"
In recent API versions these features no longer work.
That's a very complex thing to do: it needs semantic analysis as well as translation, and there is no real way to simply add that to your application. The best we can suggest is that you identify the names, remove them from the strings (or replace them with non-translated markers, as they may not be in anything like the same position in the translated messages) and re-instate them afterwards.
But ... that may well change the quality of the translation; as I said, Google Translate is pretty clever ... but it isn't at human-interpreter level yet.
If that doesn't work, you need to use a real-world human translator who is very familiar with the source and target languages.
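The remove-and-reinstate idea above can be sketched as follows; the marker format is an arbitrary assumption (anything without punctuation that the translator will pass through works):

```python
def protect_names(text, names):
    """Replace known proper names with numbered markers that a
    machine translator should pass through untouched."""
    mapping = {}
    for i, name in enumerate(names):
        marker = "XNAMEX%dX" % i   # hypothetical marker format
        mapping[marker] = name
        text = text.replace(name, marker)
    return text, mapping

def restore_names(text, mapping):
    """Re-instate the original names after translation."""
    for marker, name in mapping.items():
        text = text.replace(marker, name)
    return text
```

Because the marker may move around in the translated sentence, restoring by marker (not by position) is what makes this robust to word reordering.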

Related

Working with strings in GCP-Workflows and GCP-Admin

I'm integrating a project in GCP-Workflows with GCP-Admin, but I'm having trouble working with some data. When extracting a date, it is delivered in this format: 2020-12-28T11:20:05.000Z, so I can't turn the string into an int, and apparently there is no function like substring() in Workflows either. I need to use the date in an IF, checking whether it is greater or less than a reference.
How can I do this?
Workflows is missing some built-in functions for now. New ones are coming very soon, but I don't know whether they will solve your problem.
Anyway, with Workflows, the correct pattern when a built-in function isn't implemented is to call an endpoint, for example a Cloud Function or a Cloud Run service, which performs the transformation for you and returns the expected result.
Quite boring to do, but don't hesitate to open a feature request on the issue tracker; the product team is very reactive and loves user feedback!
The Workflows standard library now includes a text module with functions for searching (including regular expressions), splitting, substrings and case transformations.
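A sketch of the Cloud Function pattern suggested above: the parsing and comparison logic is plain Python, and the HTTP wrapper (names and request shape are my own assumptions) is what Workflows would call:

```python
from datetime import datetime, timezone

def parse_workflow_timestamp(value):
    """Parse a timestamp like '2020-12-28T11:20:05.000Z' into an
    aware datetime so it can be compared against a reference."""
    return datetime.strptime(value, "%Y-%m-%dT%H:%M:%S.%fZ").replace(
        tzinfo=timezone.utc)

def is_after(value, reference):
    """True if `value` is strictly later than `reference`."""
    return parse_workflow_timestamp(value) > parse_workflow_timestamp(reference)

# A Cloud Function entry point for Workflows might look like
# (hypothetical request shape, needs deployment):
# def compare_dates(request):
#     body = request.get_json()
#     return {"after": is_after(body["value"], body["reference"])}
```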

Deidentification using FPE Primitive Transform and ignoring certain characters

Is there a way to have DLP de-identification ignore certain characters? Currently, encrypting emails with a custom alphabet that includes the "-" sign ends up encrypting as shown below. Ideally the encrypted text would keep the format "XXX-XX-XXXX". I noticed there was a CharsToIgnore option, but I'm not sure where to put it; maybe in a metadata field of the API call (deidentify_content?) or somewhere else.
Thanks!
325-7959452
31424943-6
Officially, you should reconsider using FPE, as it isn't considered the best option by security standards. Only use it if you are dealing with a legacy system that is strict about formatting; otherwise deterministic encryption, another technique available in the API, is much better for emails and will give you better control.
As for FPE, there is no ability to skip characters; that feature only exists for masking.
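Since the API won't skip characters for FPE, one client-side workaround is to strip the separators yourself, run FPE over the remaining characters, and re-insert the separators afterwards. A sketch, where `call_dlp_fpe` stands in for your actual DLP deidentify call (hypothetical, not a real client method):

```python
def strip_separators(value, seps="-"):
    """Remove separator characters, remembering their positions so
    they can be re-inserted after format-preserving encryption."""
    positions = [(i, ch) for i, ch in enumerate(value) if ch in seps]
    stripped = "".join(ch for ch in value if ch not in seps)
    return stripped, positions

def reinsert_separators(value, positions):
    """Put the separators back at their original indices."""
    chars = list(value)
    for i, ch in positions:   # positions are in ascending order
        chars.insert(i, ch)
    return "".join(chars)

# stripped, pos = strip_separators("325-79-5945")
# encrypted = call_dlp_fpe(stripped)            # your DLP deidentify call
# result = reinsert_separators(encrypted, pos)  # keeps the XXX-XX-XXXX shape
```

This only preserves the shape if the FPE output has the same length as its input, which is the point of format-preserving encryption.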

Why does Google Vision recognize 'und' locale for images containing just numbers in text?

I've provided an image to the Google Cloud Vision OCR API to be annotated. The image just contained a phone number.
Google Cloud Vision said the locale of the text was 'und'. Does this mean undefined? I'm not finding any information in the documentation.
Indeed, 'und' is not among the language codes in the documentation. And since the image contained nothing but numbers, no language could be detected.
But the documentation also states that the Vision API uses BCP-47 identifiers, and 'und' is listed there as a non-specific language tag. You can also find the clarification that "The special value 'und' (Undetermined) has a 'Scope' of 'special'", where 'special' is defined as:
'special' - Indicates a special language code. These are subtags used
for identifying linguistic attributes not particularly associated with
a concrete language. These include codes for when the language is
undetermined or for non-linguistic content.
Therefore, "the 'und' (Undetermined) primary language subtag identifies linguistic content whose language is not determined".
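In code, checking for this case just means looking at the primary subtag of the returned BCP-47 tag. A small sketch; the commented Vision call needs credentials and is shown for illustration:

```python
def is_undetermined(bcp47_tag):
    """True when a BCP-47 tag's primary language subtag is 'und'
    (Undetermined), as Vision returns for text with no detectable
    language, e.g. an image containing only digits."""
    primary = bcp47_tag.split("-")[0].lower()
    return primary == "und"

# With the Vision client:
# from google.cloud import vision
# client = vision.ImageAnnotatorClient()
# response = client.text_detection(image=vision.Image(content=image_bytes))
# locale = response.text_annotations[0].locale
# if is_undetermined(locale):
#     ...  # treat as "no language detected"
```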

Google Translator Toolkit machine-translation issues

I am using python 2.7 & django 1.7.
When I use Google Translator Toolkit to machine-translate my .po files into another language (English to German), there are many errors due to the Django template variables used in my translation tags.
I understand that machine translation is not so great, but I only want to test my translation strings on my test pages.
Here is an example of a typical error of the machine-translated .po file translated from English (en) to German (de).
#. Translators: {{ site_name_lowercase }} is a variable that does not require translation.
#: .\templates\users\reset_password_email_html.txt:47
#: .\templates\users\reset_password_email_txt.txt:18
#, python-format
msgid ""
"Once you've returned to %(site_name_lowercase)s.com, we will give you "
"instructions to reset your password."
msgstr "Sobald du mit% (site_name_lowercase) s.com zurückgegeben haben, geben wir Ihnen Anweisungen, um Ihr Passwort zurückzusetzen."
The %(site_name_lowercase)s is machine-translated to % (site_name_lowercase) s and is often concatenated to the preceding word, as shown above.
I have hundreds of these type of errors and I estimate that a find & replace would take at least 7 hours. Plus if I makemessages and then translate the .po file again I would have to go through the find and replace again.
I am hoping that there is some type of undocumented rule in the Google Translator Toolkit that will allow the machine-translation to ignore the variables. I have read the Google Translator Toolkit docs and searched SO & Google, but did not find anything that would assist me.
Does anyone have any suggestions?
The %(site_name_lowercase)s is machine-translated to % (site_name_lowercase) s and is often concatenated to the preceding word, as shown above.
This is caused by tokenization prior to translation, followed by detokenization after translation: Google Translate tries to split the input before translating, then re-merge it afterwards. The variables you use are typically composed of characters that tokenizers use to detect token boundaries.
To avoid this sort of problem, you can pre-process your file and replace the offending variables with placeholders that do not have this issue - try out a couple of things, e.g. _VAR_PLACE_HOLDER_. It is important that you do not use any punctuation characters that may cause the tokenizer to split. After pre-processing, translate your newly generated file, then post-process by replacing the placeholders with their original values. Typically, your placeholder will be picked up as an Out-Of-Vocabulary (OOV) item and preserved during translation.
Try experimenting with including a sequence number (to keep track of your placeholders during post-processing), since word reordering may occur. There used to be a scientific API for Google Translate that gave you the token alignments; you could use those for post-processing as well.
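A minimal sketch of this pre/post-processing for python-format variables; the token format is my own choice (no punctuation, with a sequence number to survive reordering):

```python
import re

PY_FORMAT = re.compile(r"%\([^)]+\)s")

def protect_placeholders(text):
    """Swap python-format variables for numbered tokens without
    punctuation, so the tokenizer treats each as a single OOV word."""
    found = PY_FORMAT.findall(text)
    for i, var in enumerate(found):
        text = text.replace(var, "VARPLACEHOLDER%dX" % i, 1)
    return text, found

def restore_placeholders(text, found):
    """Put the original variables back after translation."""
    for i, var in enumerate(found):
        text = text.replace("VARPLACEHOLDER%dX" % i, var)
    return text
```

Run `protect_placeholders` over each msgid before uploading, and `restore_placeholders` over each translated msgstr afterwards.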
Note that this procedure will not give you the best translation output possible, as the language model will not recognize the placeholder. You can see this illustrated here (with placeholder, the token "gelezen" is in the wrong place):
https://translate.google.com/#en/nl/I%20have%20read%20SOME_VARIABLE_1%20some%20time%20ago%0AI%20have%20read%20a%20book%20some%20time%20ago
If you just want to test the system for your variables, and you do not care about the translation quality, this is the fastest way to go.
Should you decide to go for a better solution, you can solve this issue yourself by building your own machine translation system (it's fun, by the way; see http://www.statmt.org/moses/) and applying the procedure explained above, but with, for example, Part-Of-Speech tags to improve the language model. Note that you can use the alignment information as well.

Do you know of a good program for editing/translating resource (.rc) files?

I'm building a C++/MFC program in a multilingual environment. I have one main (national) language and three international languages. Every time I add a feature to the program I have to keep the international languages up-to-date with the national one. The resource editor in Visual Studio is not very helpful because I frequently end up leaving a string, dialog box, etc., untranslated.
I wonder if you guys know of a program that can edit resource (.rc) files and
Build a file that includes only the strings to be translated and their respective IDs and accepts the same (or similar) file in another language (this would be helpful since usually the translation is done by someone else), or
Handle the translations itself, allowing you to view the same string in different languages at the same time.
In my experience, internationalization requires a little more than translating strings. Many strings, when translated, require more space in a dialog. Because of this it's useful to be able to customize the dialogs for each language. Otherwise you have to create dialogs with extra space for the translated strings, which then look less than optimal when displayed in English.
Quite a while back I was using a translation tool for an MFC application but the company that produced the software stopped selling it. When I tried to find a reasonably priced replacement I did not find one.
Check out Lingobit Localizer. Expensive, but well worth it.
Here's a script I use to generate resource files for testing in different languages. It just parses a response from Babelfish, so clearly the translation will be about as high quality as that done by a drunken monkey, but it's useful for testing and such.
#!/bin/sh
# Expects $trfile (a file of key=value lines), $langs (e.g. en_de)
# and $agent (a browser User-Agent string) to be set beforehand.
while read -r i
do
key=`echo $i | sed 's/^\(.*\)=\(.*\)$/\1/g'`
value=`echo $i | sed 's/^\(.*\)=\(.*\)$/\2/g'`
url="http://babelfish.altavista.com/tr?doit=done&intl=1&tt=urltext&lp=$langs&btnTrTxt=Translate&trtext=$value"
wget -O foo.html -U "$agent" "$url" > /dev/null 2>&1
# Pull the translated text out of the result page.
tx=`grep "<td bgcolor=white class=s><div style=padding:10px;>" foo.html`
tx=`echo $tx | iconv -f latin1 -t utf-8 | sed 's/<td bgcolor=white class=s><div style=padding:10px;>\(.*\)<\/div><\/td>/\1/g'`
echo $key=$tx
done < "$trfile"
rm foo.html
Check out appTranslator; it's relatively cheap and works rather well. The guy developing it is really responsive to enhancement requests and bug reports, so you get really good support.
You might take a look at Sisulizer http://www.sisulizer.com. Expensive though. We're evaluating it for use at my company to manage the headache of ongoing translation. I read on their About page that the company was founded by people who left Multilizer and other similar companies.
If there isn't one, it would be pretty easy to loop through all the strings in a resource and compare them to the international resources. You could probably do this with a simple grid.
In the end we ended up building our own external tools to manage this. Our devs work in the English string table, and every automated build sends the strings that have been added, changed, or deleted to the translation manager. He can also run a report at any time against an old build to determine what is required for translation.
Check out RC-WinTrans. It's a commercial tool that my company uses. It basically imports our .RC files (or .resx files) into a database, which we send to a different office for translation. The tool can then export a translated .RC file (or .resx file) for each language from the database. It even has a basic dialog box editor so the translator can adjust the size of various controls in the dialog box to be sure the translated text fits.
It also accepts a number of command line arguments and has a COM automation interface so you can integrate it into a build process more easily. It works quite well for us and we literally have thousands and thousands of strings and dialog boxes, etc.
(We currently have version 7 so what I've said might be a little bit different than their latest version 8.)
Also try appTranslator: http://www.apptranslator.com/. It has a built-in resource editor so that translators can, for example, enlarge a text box when need be. It has separate versions for developers and translators, and much more.
We are using Multilizer (http://www.multilizer.com/) and although it is sometimes a bit tricky to use, in the end, with a bit of patience, it works pretty well.
We even have a translation web site where translators can download our projects and then upload the translations using Multilizer command-line features.
Managing localization and translations using .rc files and Visual Studio is not a good idea. It's much smarter (though counter-intuitive) to start localization through the exe. Read here why: http://www.apptranslator.com/misconceptions.html
I've written this recently, which integrates into VS:
https://github.com/ekkis/Powershell/blob/master/MT.ps1
largely because I was unsatisfied with the solutions out there. You'll need to get a client ID from M$ (but they give you 2M words/month of translation free - not bad).
ResxCrunch will be out soon, it will edit multiple resource files in multiple languages in one single table.