How to convert this '\x5b\x5b\x5b' to this '[[[' - powerbi

It's easy in JavaScript, but how to convert in PQ?
Is there an easy way?

Actually, in JavaScript source code, '\x5b\x5b\x5b\x220BwP4bPODhDZVZzVcXdmZlVnenc\x22' compiles to the same value as '[[["0BwP4bPODhDZVZzVcXdmZlVnenc"' so there is no conversion.
JavaScript strings are counted sequences of UTF-16 code units. UTF-16 is an encoding for the Unicode Character set. A JavaScript literal string allows several types of escape sequences. One is \xHH, where HH is the hexadecimal number (0 to 255) for an ISO 8859-1 code unit. ISO 8859-1 is a subset of Unicode and has the same codepoints as the first 256 Unicode codepoints. UTF-16 encodes those codepoints to the same values.
Power Query strings are also counted sequences of UTF-16 code units. (As are strings in Java, C#, VB4/5/6, VBA, VBScript, VB, F#, …, for that matter.) So, your string is almost there, except for the escapes. We can convert the JavaScript literal string to a Power Query text value in a few steps.
JavaScript also has \uHHHH escapes, where HHHH is the hexadecimal number for a UTF-16 code unit. Because of the similarity between Unicode and ISO 8859-1, \xHH is effectively shorthand for \u00HH.
JSON simplifies JavaScript literal strings, allowing \uHHHH escapes but not \xHH. Power Query has data transformation functions for JSON. So, we need to convert JavaScript to JSON and then transform.
In a blank query, open the Advanced Editor and paste:
let
    // The JavaScript literal, with its \xHH escapes, pasted as-is
    JavaScriptLiteral = "\x5b\x5b\x5b\x220BwP4bPODhDZVZzVcXdmZlVnenc\x22",
    // Rewrite \xHH as \u00HH and wrap the result in quotes so it becomes a valid JSON string literal
    JsonLiteral = """" & Text.Replace(JavaScriptLiteral, "\x", "\u00") & """",
    // Let the JSON parser resolve the escapes
    Value = Json.Document(JsonLiteral)
in
    Value
(This will break if given a source with other escapes like "\\x" or with double quotes.)
You can, of course, turn this into a Power Query function if you need to apply it to more than one string, or you can convert it to a table, or ….

This seems to work, specific to your \x5b\x5b\x5b portion.
If you start with a table named Table1 as your source for your Power Query query, with your text in Column1: use Transform -> Replace Values to replace the escaped portion, then Add Column -> Custom Column, and finally extract the values in the new Custom column. (The original answer illustrated the Replace Values settings, the custom-column formula, and each intermediate result with screenshots.)

Related

I am building a program for Urdu language analysis, so how can I make my program accept a text file in the Urdu language in C++?

I am building a language analysis program. I have a program which counts the words in a text and gives the ratio of every word in the text as output, but this program cannot work on a file containing Urdu text. How can I make it work?
Encoding
Urdu may be presented in two¹ forms: Unicode and Code Page 868. This is convenient for you because the two ranges do not overlap. It is inconvenient because the Unicode code range is U+0600 – U+06FF, well outside the single-byte ASCII range, which means encoding is an issue:
CP-868 will encode each one as a single-byte value in the range 128–252
UTF-8 will encode each one as a two-byte sequence with bits 110x xxxx and 10xx xxxx
UTF-16 encodes every character as two-byte entities
UTF-32 encodes every character as four-byte entities
This means that you should be aware of encoding issues and, for an easy life, use UTF-16 internally (std::u16string), accepting files as (default) UTF-8 / CP-868, or as UTF-16/32 if there is a BOM indicating such; a detection sketch follows the footnote below.
Your other option is to simply require all input to be UTF-8 / CP-868.
¹ AFAIK. There may be other ways of storing Urdu text; commenters point out that there are in fact three forms.
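
As a rough illustration of "default to UTF-8 / CP-868, honour a BOM if present", here is a minimal sketch. The BOM byte values are standard; the enum and function names are just assumptions for this example, and the actual decoding into std::u16string is left out.

#include <vector>

enum class Encoding { Utf8OrCp868, Utf16LE, Utf16BE, Utf32LE, Utf32BE };

// Sketch: inspect the first bytes of the file for a byte order mark (BOM).
// UTF-32 must be checked before UTF-16, since the UTF-32 little-endian BOM
// starts with the same two bytes (FF FE). No BOM means the UTF-8 / CP-868 default.
Encoding detect_encoding(const std::vector<unsigned char>& b) {
    if (b.size() >= 4 && b[0] == 0xFF && b[1] == 0xFE && b[2] == 0x00 && b[3] == 0x00)
        return Encoding::Utf32LE;
    if (b.size() >= 4 && b[0] == 0x00 && b[1] == 0x00 && b[2] == 0xFE && b[3] == 0xFF)
        return Encoding::Utf32BE;
    if (b.size() >= 2 && b[0] == 0xFF && b[1] == 0xFE)
        return Encoding::Utf16LE;
    if (b.size() >= 2 && b[0] == 0xFE && b[1] == 0xFF)
        return Encoding::Utf16BE;
    // A UTF-8 BOM (EF BB BF) may also be present, but its absence changes nothing here.
    return Encoding::Utf8OrCp868;
}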
Word separation
As you know, the end of a word is generally marked with a special letter form.
So, all you need is a table of end-of-word letters listing letters in both the CP-868 range and the Unicode Arabic text range.
Then, every time you find a space or a letter in that table you know you have found the end of a word.
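
A minimal sketch of that check, assuming the text has already been converted to UTF-16 (std::u16string); the contents of end_of_word are placeholders that you would fill with the actual code points of the end-of-word letter forms (and, if you keep CP-868 data unconverted, their CP-868 values too).

#include <string>
#include <unordered_set>
#include <vector>

// Placeholder: fill with the code points of the end-of-word letter forms.
const std::unordered_set<char16_t> end_of_word = { /* ... */ };

bool is_space(char16_t c) { return c == u' ' || c == u'\t' || c == u'\n' || c == u'\r'; }

// Split UTF-16 text into words at spaces and at end-of-word letters.
std::vector<std::u16string> split_words(const std::u16string& text) {
    std::vector<std::u16string> words;
    std::u16string current;
    for (char16_t c : text) {
        if (is_space(c)) {
            if (!current.empty()) { words.push_back(current); current.clear(); }
        } else {
            current += c;    // the end-of-word letter stays part of the word
            if (end_of_word.count(c)) { words.push_back(current); current.clear(); }
        }
    }
    if (!current.empty()) words.push_back(current);
    return words;
}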
Histogram
As you read words, store them in a histogram. For C++ a map <u16string, size_t> will do. The actual content of each word does not matter.
After that you have all the information necessary to print stats about the text.
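
For example, a minimal sketch using std::map<std::u16string, std::size_t> and the split_words helper sketched above:

#include <cstddef>
#include <map>
#include <string>
#include <vector>

// Count each distinct word; the ratio of a word is its count over the total.
std::map<std::u16string, std::size_t> build_histogram(const std::vector<std::u16string>& words) {
    std::map<std::u16string, std::size_t> histogram;
    for (const std::u16string& w : words)
        ++histogram[w];
    return histogram;
}

double ratio(std::size_t count, std::size_t total) {
    return total ? static_cast<double>(count) / static_cast<double>(total) : 0.0;
}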
Edit
The approach presented above is designed to be simple at the cost of some correctness. If you are doing something for the workplace, for example, and assuming it matters, you should also consider:
Normalizing word forms
For example, the same word may be presented in standard Arabic text codes or using the Urdu-specific codes. If you do not convert to the Urdu equivalent characters then you will have two words that should compare equal but do not.
Use something internally consistent. I recommend UZT, as it is the most complete Urdu text representation. You will also need an additional lookup for the original text representation from the UZT representation.
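
As one hedged illustration of such a normalization pass (independent of UZT): map the standard Arabic code points onto their Urdu-preferred equivalents before counting. The three pairs below (Yeh → Farsi Yeh, Kaf → Keheh, Heh → Heh Goal) are common substitutions; the full table, and whether you route this through UZT, depends on your data.

#include <string>
#include <unordered_map>

// Map standard Arabic letters to the Urdu-specific code points so that
// visually identical words compare equal. Extend the table as needed.
std::u16string normalize_urdu(std::u16string word) {
    static const std::unordered_map<char16_t, char16_t> to_urdu = {
        { u'\u064A', u'\u06CC' },  // ARABIC LETTER YEH -> FARSI YEH
        { u'\u0643', u'\u06A9' },  // ARABIC LETTER KAF -> KEHEH
        { u'\u0647', u'\u06C1' },  // ARABIC LETTER HEH -> HEH GOAL
    };
    for (char16_t& c : word) {
        auto it = to_urdu.find(c);
        if (it != to_urdu.end()) c = it->second;
    }
    return word;
}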
Dictionaries
As complete a dictionary (as an unordered_set <u16string>) of words in Urdu as you can get.
This is how it is done with languages like Japanese, for example, to find breaks between words.
Then use the dictionary to find all the words you can, and fall back on letterform recognition and/or spaces for what remains.
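
A minimal sketch of that greedy longest-match idea; the dictionary loading, the maximum word length, and the single-code-unit fallback are all assumptions for the example (in real code you would fall back to the space / letterform rules above instead).

#include <algorithm>
#include <cstddef>
#include <string>
#include <unordered_set>
#include <vector>

// Greedy longest-match segmentation: at each position take the longest
// dictionary word that matches; otherwise emit a single code unit.
std::vector<std::u16string> segment(const std::u16string& text,
                                    const std::unordered_set<std::u16string>& dictionary,
                                    std::size_t max_word_len = 16) {
    std::vector<std::u16string> words;
    std::size_t i = 0;
    while (i < text.size()) {
        std::size_t best = 0;
        std::size_t limit = std::min(max_word_len, text.size() - i);
        for (std::size_t len = limit; len >= 1; --len) {
            if (dictionary.count(text.substr(i, len))) { best = len; break; }
        }
        if (best == 0) best = 1;   // fallback: single code unit
        words.push_back(text.substr(i, best));
        i += best;
    }
    return words;
}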

How to convert an accented character to hexadecimal Unicode in VBScript?

I'd like to create a .properties file to be used in a Java program from a VBScript. I'm going to use some strings in languages that use characters outside the ASCII map, so I need to replace these characters with their UTF codes. This would be \u0061 for a, \u0062 for b and so on.
Is there a way to get the UTF code for a char in VBScript?
VBScript has the AscW function that returns the Unicode (wide) code of the first character in the specified string.
Note that AscW returns the character code as a decimal number, so if you need it in a specific format, you'll have to write some additional code for that (and the problem is, VBScript doesn't have decent string formatting functions). For example, if you need the code formatted as \unnnn, you could use a function like this:
WScript.Echo ToUnicodeChar("✈")   ' prints \u2708

Function ToUnicodeChar(Char)
    ' Hex code of the first character, left-padded with zeroes to four digits
    str = Hex(AscW(Char))
    ToUnicodeChar = "\u" & String(4 - Len(str), "0") & str
End Function

URL encoding for multibyte character string in c++

I am trying to achieve URL encoding for some of my strings via C++. Strings can contain multibyte characters like ™, ®, ©, etc.
Input text: Something ™
Output should be: Something%20%E2%84%A2
I can achieve URL encoding and decoding in JS with encodeURIComponent and decodeURIComponent, but I have some native code in C++ and hence need to encode some text via C++.
Any help here would be a great relief for me.
It's not too hard to do manually if you can't find a library. First encode the string as UTF-8 (there are other posts on SO about using the standard library to do that if the string is in another encoding), then replace every byte with a value above 127, and every character that's restricted in URLs, with the percent encoding of that value (a percent sign followed by the two hexadecimal digits representing the byte's value).
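
Here is a minimal sketch of that, assuming the input std::string already holds UTF-8 bytes; the set of characters left unencoded follows the "unreserved" list from RFC 3986, which is a common, conservative choice.

#include <string>

// Percent-encode a UTF-8 string for use in a URL component.
// Unreserved characters pass through; every other byte, including the
// bytes of multibyte UTF-8 sequences, becomes %XX.
std::string url_encode(const std::string& utf8) {
    static const char* hex = "0123456789ABCDEF";
    std::string out;
    for (unsigned char c : utf8) {
        bool unreserved = (c >= 'A' && c <= 'Z') || (c >= 'a' && c <= 'z') ||
                          (c >= '0' && c <= '9') ||
                          c == '-' || c == '_' || c == '.' || c == '~';
        if (unreserved) {
            out += static_cast<char>(c);
        } else {
            out += '%';
            out += hex[c >> 4];
            out += hex[c & 0x0F];
        }
    }
    return out;
}

// url_encode("Something ™") gives "Something%20%E2%84%A2" when the source is saved as UTF-8.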

How do I remove words from multilingual text?

I have two versions of the same document (D, say) containing multilingual text (English and others):
I. One is encoded in ASCII with Unicode code points represented as character entity references (i.e. Unicode characters are of the form &#N, where N is the decimal value of the Unicode code point)
II. The other is encoded in UTF-8.
Q 1:
I have a separate list of words (encoded in UTF-8, and in more than one language), that I have to remove from the document D. How should I proceed?
Can I use regex to clean D? For doc type I, I believe I have to specify the whole &#N patterns for each word in the list when I form the regex.
Should the task be easier for doc type II, now that I can specify the non-English characters directly in the regex (my emacs is configured to use these non-English fonts) ?
Q 2:
I have a huge collections of such document D's. What should be the best algorithm to remove words from each of these documents? A table look-up is straight-forward but probably the slowest. Should I regex through each?
I suggest processing the entities first so that the two sorts of files look the same. When you’re done removing, put the first set back into their encoded form.
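
A rough sketch of the first half of that idea in C++ (decoding decimal character references such as &#1588; into UTF-8 bytes); re-encoding afterwards is the reverse walk, emitting &#N; for every code point above 127. Whether your references end in a semicolon is an assumption here; match whatever form your documents actually use.

#include <cctype>
#include <cstddef>
#include <string>

// Append one Unicode code point to a UTF-8 string.
void append_utf8(std::string& out, unsigned long cp) {
    if (cp < 0x80) {
        out += static_cast<char>(cp);
    } else if (cp < 0x800) {
        out += static_cast<char>(0xC0 | (cp >> 6));
        out += static_cast<char>(0x80 | (cp & 0x3F));
    } else if (cp < 0x10000) {
        out += static_cast<char>(0xE0 | (cp >> 12));
        out += static_cast<char>(0x80 | ((cp >> 6) & 0x3F));
        out += static_cast<char>(0x80 | (cp & 0x3F));
    } else {
        out += static_cast<char>(0xF0 | (cp >> 18));
        out += static_cast<char>(0x80 | ((cp >> 12) & 0x3F));
        out += static_cast<char>(0x80 | ((cp >> 6) & 0x3F));
        out += static_cast<char>(0x80 | (cp & 0x3F));
    }
}

// Replace decimal character references ("&#1588;") with their UTF-8 bytes.
std::string decode_references(const std::string& in) {
    std::string out;
    std::size_t i = 0;
    while (i < in.size()) {
        if (in[i] == '&' && i + 2 < in.size() && in[i + 1] == '#' &&
            std::isdigit(static_cast<unsigned char>(in[i + 2]))) {
            std::size_t j = i + 2;
            unsigned long cp = 0;
            while (j < in.size() && std::isdigit(static_cast<unsigned char>(in[j])))
                cp = cp * 10 + (in[j++] - '0');
            if (j < in.size() && in[j] == ';') {
                append_utf8(out, cp);
                i = j + 1;
                continue;
            }
        }
        out += in[i++];   // not a reference: copy the byte through unchanged
    }
    return out;
}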

Unicode Woes! Ms-Access 97 migration to Ms-Access 2007

The problem is categorized in two steps:
Problem Step 1. Access 97 db containing XML strings that are encoded in UTF-8.
The problem boils down to this: the Access 97 db contains XML strings that are encoded in UTF-8. So I created a patch tool for separate conversion of the XML strings from UTF-8 to Unicode. In order to convert a UTF-8 string to Unicode, I have used the function
MultiByteToWideChar(CP_UTF8, 0, PChar(OriginalName), -1, @newName, Size); (where newName is an array declared as "newName : Array[0..2048] of WideChar;").
This function works well in most of the cases; I have checked it with Spanish and Arabic characters, but when I am working on Greek and Chinese characters it is choking.
For some Greek characters like "Ευγ. ΚαÏαβιά" (as stored in Access-97), the resultant new string contains null characters in between, and when it is stored to a wide string the characters are getting clipped.
For some Chinese characters like "?¢»?µ?" (as stored in Access-97), the result is totally absurd like "?¢»?µ?".
Problem Step 2. Access 97 db text strings: the application GUI takes Unicode input and saves it in Access-97.
First I checked with Arabic and Spanish characters; it seemed then that no explicit character encoding is required. But again the problem comes with Greek and Chinese characters.
I tried the same function mentioned above for the text conversion (is it correct???); the result was again disappointing. The Spanish characters, which are OK without conversion, get their Unicode characters either lost or converted to regular ASCII alphabets.
The Greek and Chinese characters show similar behaviour as mentioned in step 1.
Please guide me. Am I taking the right approach? Is there some other way around???
Well Right now I am confused and full of Questions :)
There is no special requirement for working with Greek characters. The real problem is that the characters were stored in an encoding that Access doesn't recognize in the first place. When the application stored the UTF-8 values in the database it tried to convert every single byte to the equivalent byte in the database's codepage. Every character that had no correspondence in that encoding was replaced with '?'. That may mean that the Greek text is OK, while the Chinese text may be gone.
In order to convert the data to something readable you have to know the codepage they are stored in. Using this you can get the actual bytes and then convert them to Unicode.
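
For illustration, a minimal Win32 sketch of that two-step idea in C++, assuming (purely as an example) that the Greek strings were stored as code page 1253 bytes; substitute whatever ANSI code page the application actually wrote with, and note that anything already replaced by '?' cannot be recovered.

#include <string>
#include <windows.h>

// Reinterpret the bytes that were stored in the database under a known
// ANSI code page (e.g. 1253 for Greek) and convert them to UTF-16.
std::wstring ansi_to_wide(const std::string& bytes, UINT codePage) {
    if (bytes.empty()) return std::wstring();
    int len = MultiByteToWideChar(codePage, 0, bytes.data(),
                                  static_cast<int>(bytes.size()), nullptr, 0);
    std::wstring wide(len, L'\0');
    MultiByteToWideChar(codePage, 0, bytes.data(),
                        static_cast<int>(bytes.size()), &wide[0], len);
    return wide;
}

// Example: std::wstring greek = ansi_to_wide(rawBytesFromAccess, 1253);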