I have text content in different languages, such as Chinese, Hebrew, and so on. I am using the Google Translate API to convert the text into 'en'. The problem is that Google Translate fails when it encounters certain control characters, such as \x11 and \x01 (unable to display those characters here), and drops that set of records. Please suggest a safe way to do this conversion without dropping records.
data = ''.join(c for c in data if c.isprintable())
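A filter based on Unicode categories is a safer sketch: it removes control characters (category "Cc") such as \x11 and \x01 while keeping printable non-ASCII text (Chinese, Hebrew, ...) and ordinary whitespace intact. The function name here is illustrative:

```python
import unicodedata

def strip_control_chars(text):
    # Drop Unicode control characters (category "Cc") such as \x11 and \x01,
    # but keep all printable text, including non-ASCII letters.
    return "".join(c for c in text if unicodedata.category(c) != "Cc")

print(strip_control_chars("he\x11llo \x01world"))  # -> hello world
```

Unlike str.isprintable(), which also rejects newlines and tabs, this only removes the "Cc" category, so multi-line records survive the cleanup.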
So I am creating an app that lets you convert text to a desired language and submit it to AWS Polly. The issue is that when you start getting characters from other languages, such as the é sign or Japanese characters, polly.synthesize_speech doesn't like them and fails the call. How do you submit text for languages that use non-ASCII characters?
I am having trouble converting '\xc3\xd8\xe8\xa7\xc3\xb4\xd' (which is Thai text) to a readable format. I get this value from a smart card; it was working on Windows but not on Linux.
If I print in my Python console, I get:
����ô
I tried to follow some google hints but I am unable to accomplish my goal.
Any suggestion is appreciated.
Your text does not seem to be Unicode text. Instead, it looks like it is in one of the Thai encodings. Hence, you must know the encoding before printing the text.
For example, if we assume your data is encoded in TIS-620 (and the last character is \xd2 instead of \xd) then it will be "รุ่งรดา".
To work with non-Unicode byte strings in Python 2, you may try myString.decode("tis-620"). (sys.setdefaultencoding("tis-620") also exists in Python 2, but only after reload(sys), and it is generally discouraged.)
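In Python 3 the same idea applies to bytes objects. A minimal sketch, assuming the smart card really delivered TIS-620-encoded bytes and that the truncated final byte above was \xd2:

```python
# Raw bytes from the smart card; the final byte \xd2 is an assumption,
# since the value quoted in the question is truncated.
raw = b"\xc3\xd8\xe8\xa7\xc3\xb4\xd2"

# Decode using Thailand's TIS-620 single-byte encoding.
text = raw.decode("tis-620")
print(text)  # -> รุ่งรดา
```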
I've recently upgraded a CloudSearch instance from the 2011 to the 2013 API. Both instances have a field called sid, which is a text field containing a two-letter code followed by some digits e.g. LC12345. With the 2011 API, if I run a search like this:
q=12345*&return-fields=sid,name,desc
...I get back 1 result, which is great. But the sid of the result is LC12345 and that's the way it was indexed. The number 12345 does not appear anywhere else in any of the resulting document fields. I don't understand why it works. I can only assume that this type of query is looking for any terms in any fields that even contain the number 12345.
The reason I'm asking is because this functionality is now broken when I query using the 2013 API. I need to use the structured query parser, but even a comparable wildcard query using the simple parser is not working e.g.
q.parser=simple&q=12345*&return=sid,name,desc
...returns nothing, although the document is definitely there i.e. if I query for LC12345* it finds the document.
If I could figure out how to get the simple query working like it was before, that would at least get me started on how to do the same with the structured syntax.
Why it's not working
CloudSearch v1 (2011) had a different way of tokenizing mixed alpha+numeric strings. Here's the logic as described in the archived docs (emphasis mine).
If a string contains both alphabetic and numeric characters and is at least three and no more than nine characters long, the alphabetic and numeric portions of the string are treated as separate tokens. For example, the string DOC298 is tokenized into two terms: doc, 298
CloudSearch v2 (2013) text processing follows Unicode Text Segmentation, which does not specify that behavior:
Do not break within sequences of digits, or digits adjacent to letters (“3a”, or “A3”).
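The difference between the two tokenizers can be sketched in Python. This is a rough illustration of the documented 2011 rule, not the actual CloudSearch tokenizer:

```python
import re

def tokenize_2011(term):
    # Sketch of the 2011 rule: mixed alphanumeric strings of 3-9 characters
    # are split into separate letter and digit runs, lowercased.
    if term.isalnum() and not term.isalpha() and not term.isdigit() and 3 <= len(term) <= 9:
        return [t.lower() for t in re.findall(r"[A-Za-z]+|[0-9]+", term)]
    # The 2013 behavior (Unicode Text Segmentation) keeps the term whole.
    return [term.lower()]

print(tokenize_2011("DOC298"))   # -> ['doc', '298']
print(tokenize_2011("LC12345"))  # -> ['lc', '12345']
```

Under the 2011 rule, LC12345 indexes the standalone term 12345, which is why the prefix query 12345* matched; under 2013 segmentation the only indexed term is lc12345.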
Solution
You should just be able to search *12345 to get back results with any prefix. There may be some edge cases like getting back results you don't want (things with more preceding digits like AB99912345); I don't know enough about your data to say whether those are real concerns.
Another option would be to index the numeric prefix separately from the alphabetical suffix, but that's additional work that may be unnecessary.
I'm guessing you are using CloudSearch in English, so maybe this isn't your specific problem, but also watch out for stopwords in your search queries:
https://docs.aws.amazon.com/cloudsearch/latest/developerguide/configuring-analysis-schemes.html#stopwords
In your example, the word "jo" is a stop word in Danish and in other languages; each supported language has a dictionary of very common stop words. If you don't specify a language for your text field, it defaults to English. You can see them here: https://docs.aws.amazon.com/cloudsearch/latest/developerguide/text-processing.html#text-processing-settings
I am trying to import Excel sheets that contain Chinese characters into Stata 13.1. Following the guidelines at this link: "Chinese Characters in Stata", I am able to get Chinese characters read in Stata. For example, I have .dta files which contain Chinese characters, and these are displayed correctly. The issue is that when I try to import Excel sheets that contain Chinese characters, they are imported as "????" - a string of question marks of varying lengths. Is there a way to solve this issue?
Note: I am using Windows 8.1 but I think the method in the link above still applies.
It sounds like an issue with your file and not so much with Stata. Chinese characters are often (if not always) encoded as UTF-8, and it's possible that your Excel sheet didn't do this correctly. If you're not required to import from Excel directly, try opening the file in Excel, saving the sheet as a "*.csv" (Comma Separated Values) file, and make sure to select the option which asks for UTF-8 encoding. Then use insheet using "file.csv", names to get the file into Stata with the first row made into variable names.
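If Excel won't produce UTF-8 output, the CSV can be re-encoded before insheet. A hedged sketch: GB18030 is a common encoding for Chinese-locale Windows exports, but that is an assumption; check which encoding your file actually uses.

```python
def reencode(data: bytes, src_encoding: str = "gb18030") -> bytes:
    # Decode bytes from the assumed source encoding and re-encode as UTF-8.
    # Raises UnicodeDecodeError if the guess is wrong, which is a useful signal.
    return data.decode(src_encoding).encode("utf-8")

# In practice you would read the CSV file's bytes, reencode them, and write
# the result out as a new file for Stata to insheet.
sample = "中文".encode("gb18030")
print(reencode(sample).decode("utf-8"))  # -> 中文
```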
I have a C++ program that was written by a Russian-speaking developer, so it contains Cyrillic characters. When I open the sources, they are displayed as garbage. How do I solve this on Windows?
The actual problem is your IDE/editor doesn't display Cyrillic characters correctly. You solve this by changing the IDE/editor settings to use a font that contains Cyrillic characters - for example, Courier New if you're on Windows.
Well, assuming they've actually used standard ISO C++ and not some weird Russian variant, the language constructs and standard library calls will be in English (or its strange cousin, American).
The only thing you'll really need to convert are the strings (such as for user output or logging), code comments and variable names.
And even the comments and variable names may not have to change, though they may make the code harder to understand for a non-Russian reader.
If the code contains characters that your current editor doesn't understand, well, you need to get yourself an editor that does. Or get your Russian friends to turn it into English for you.
There is no separate Russian C++ programming language, so you just need to translate the strings into the other language, i.e. English. Care must be taken when processing input, since that is where handling of single characters appears.
A better approach would be to prepare a localization: read all strings from a resource or file. That way you can select the resource that matches your target language.
If you mean that the strings of the program are written in Russian and you want to add English texts, you first need to internationalize (i18n) your program, using a library like Gettext instead of static strings; then you need to add support for the English locale.
If you mean that the variables and the comments are in Russian and you want them in English, well.. find a translator ;)
Find a translator and give him the code.