Finding whether a hex string is UTF-8 or UTF-16 - C++

I am new to C++.
I have a hex string read from a file.
Example - 657374696E65, which decodes to "estine" if it is UTF-8.
Sometimes I get a UTF-16 code string instead.
I need to find out programmatically whether the string is encoded in UTF-8 or UTF-16.
std::string input = "657374696E65";
std::string extract = input.substr(0, 4); // take four hex digits at a time
unsigned int x;
std::stringstream ss;
ss << std::hex << extract;
ss >> x;
Initially I take each 4-character substring, convert it to a number, and then to a wide string.
Sometimes I get UTF-8 too.
Can anyone help me find out whether I have to convert each 2 or each 4 hex characters to a byte value?

The first thing you should do before further processing is undo the hex encoding by putting raw bytes into a std::string or std::vector<unsigned char>. Then you can post-process your collection of bytes by UTF-8 or UTF-16 decoding into the string type your application needs.
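For example, a minimal hex-decoding sketch (the helper name is my own; it assumes the input has even length and contains only valid hex digits):
#include <string>
#include <vector>

// Decode a hex string such as "657374696E65" into raw bytes.
std::vector<unsigned char> decode_hex(const std::string& hex)
{
    std::vector<unsigned char> bytes;
    bytes.reserve(hex.size() / 2);
    for (std::size_t i = 0; i + 1 < hex.size(); i += 2) {
        // std::stoi parses two hex digits into one byte value
        bytes.push_back(static_cast<unsigned char>(
            std::stoi(hex.substr(i, 2), nullptr, 16)));
    }
    return bytes;
}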
There is no safe way to detect whether a string is UTF-8 or UTF-16. Microsoft tried to do so in a quite clever way in their IsTextUnicode function. The result was the misinterpretation of files containing the string "bush hid the facts" (without newline) in Notepad (e.g. on Windows XP).
If you can ensure that all UTF-16 strings you receive start with a byte order mark (BOM), use the BOM as an indicator for UTF-16.
If you are sure that your strings always contain (amongst other characters) US-ASCII characters, take the appearance of NUL bytes ('\x00') as an indicator for UTF-16.
This is one of the better heuristics Windows used: if the pattern \x0D\x0A (CR/LF) appears, detect the string as UTF-8. This prevents the "bush hid the facts" issue whenever the string contains a line break.
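A minimal sketch of these heuristics over the decoded bytes (the function name is made up for illustration; this is a guess, not a reliable detector):
#include <cstddef>
#include <vector>

enum class Guess { Utf8, Utf16LE, Utf16BE };

// Heuristic only: check for a BOM first, then use embedded NUL bytes
// as a hint for UTF-16 in mostly-ASCII text.
Guess guess_encoding(const std::vector<unsigned char>& b)
{
    if (b.size() >= 2) {
        if (b[0] == 0xFF && b[1] == 0xFE) return Guess::Utf16LE; // BOM FF FE
        if (b[0] == 0xFE && b[1] == 0xFF) return Guess::Utf16BE; // BOM FE FF
    }
    // ASCII characters encoded as UTF-16 produce a NUL in every other byte.
    for (std::size_t i = 0; i < b.size(); ++i)
        if (b[i] == 0x00)
            return (i % 2 == 1) ? Guess::Utf16LE : Guess::Utf16BE;
    return Guess::Utf8; // no NULs: plain ASCII/UTF-8 is the likelier guess
}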


Writing wide string to a file in byte mode stopped

I am writing out Unicode text (stored as a wstring) into a file in byte mode, but the string in the file ends before the "™" character is printed. Is "™" not Unicode, or am I doing something wrong?
wofstream output;
output.open("output.txt", ofstream::binary);
wstring a = L"ABC™";
output << a;
"™" is definitely Unicode. ofstream and wofstream do not write the text in UTF-8 format. You have to encode the output buffer in UTF-8 in order to see the results you're expecting. So, try using WideCharToMultiByte.
There is a common misconception about the iostream binary mode: that it is for reading/writing binary files. The iostream library works only with text files and only reads and writes text files. The only thing the "binary" mode changes is how NL (new line) characters are handled. In binary mode, no transformation occurs. In non-binary mode, writing LF characters ('\n') to a stream converts them to the platform-specific new line sequence (Unix -> LF, Windows -> CR LF ("\r\n"), Mac -> CR), while on reading, the platform-specific new line sequence is converted to a single LF ('\n') character.
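A quick illustration of that difference (on Windows, text.txt will contain the bytes 0x0D 0x0A where bin.txt has a single 0x0A):
#include <fstream>

int main()
{
    // Text mode: '\n' is translated to the platform newline sequence.
    std::ofstream text("text.txt");
    text << "a\nb";

    // Binary mode: '\n' is written as a single LF byte everywhere.
    std::ofstream bin("bin.txt", std::ios::binary);
    bin << "a\nb";
}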
For everything else, nothing changes, meaning a wofstream will always convert the Unicode wide character string to a single-byte or multi-byte character stream depending on the locale used by your process. If you have a locale of "en_US.utf8" on Linux, for example, it will be converted to UTF-8. Now, if the current locale does not have a representation for the ™ Unicode symbol, then either nothing or a '?' will be written to the file.
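On Windows, a minimal sketch of the WideCharToMultiByte route suggested above (error handling omitted):
#include <windows.h>
#include <fstream>
#include <string>

int main()
{
    std::wstring a = L"ABC\u2122"; // "ABC™"

    // Ask how many UTF-8 bytes are needed, then convert.
    int len = WideCharToMultiByte(CP_UTF8, 0, a.c_str(), -1,
                                  nullptr, 0, nullptr, nullptr);
    std::string utf8(len, '\0');
    WideCharToMultiByte(CP_UTF8, 0, a.c_str(), -1,
                        &utf8[0], len, nullptr, nullptr);
    utf8.resize(len - 1); // drop the trailing NUL that -1 made it count

    // Write the raw UTF-8 bytes with a narrow stream.
    std::ofstream out("output.txt", std::ios::binary);
    out.write(utf8.data(), static_cast<std::streamsize>(utf8.size()));
}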

How to convert unsigned hex values to corresponding unicode characters which should be written to file using c++ [duplicate]

This question already has answers here: UTF8 to/from wide char conversion in STL (8 answers). Closed 9 years ago.
I need to convert unsigned hex values to the corresponding Unicode characters, which should be written to a file using C++.
So far I have tried this:
unsigned short array[2]={0x20ac,0x20ab};
This should be converted to the corresponding characters in a file using C++.
It depends on which encoding you have chosen.
If you are using UTF-8 encoding, you need to first convert each Unicode character to the corresponding UTF-8 byte sequence and then write that byte sequence to the file.
The pseudo code will look like this:
EncodeCharToUTF8(charin, charout, &numbytes); // EncodeCharToUTF8(short, char*, int*);
WriteToFile(charout, numbytes);
If you are using UTF-16 encoding, you need to first write the BOM at the beginning of the file and then encode each character into a UTF-16 byte sequence (byte order matters here, little-endian or big-endian, depending on your BOM).
WriteToFile("\xFF\xFE", 2); // Write BOM
EncodeCharToUTF16(charin, charout, &numbytes); // EncodeCharToUTF16(short, char*, int*);
// Write the character.
WriteToFile(charout, numbytes);
UTF-32 is not recommended, although the steps are similar to UTF-16.
I think this should help you to start.
From your array, it seems that you are going to use UTF-16.
Write the UTF-16 BOM: 0xFFFE for little endian and 0xFEFF for big endian. After that, write each character according to the byte order of your machine.
I have given pseudo code here which you can flesh out; see the sketch below. Search more on encoding conversion.
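As an illustration, a minimal sketch of what an EncodeCharToUTF8-style routine might look like for BMP code points (the name follows the pseudo code above; surrogate pairs are deliberately not handled):
#include <cstdint>
#include <fstream>

// Encode one BMP code point (<= 0xFFFF, not a surrogate) as UTF-8.
// Returns the number of bytes written into out (1 to 3).
int EncodeCharToUTF8(std::uint16_t cp, unsigned char out[3])
{
    if (cp < 0x80) {                 // 1 byte: 0xxxxxxx
        out[0] = static_cast<unsigned char>(cp);
        return 1;
    }
    if (cp < 0x800) {                // 2 bytes: 110xxxxx 10xxxxxx
        out[0] = static_cast<unsigned char>(0xC0 | (cp >> 6));
        out[1] = static_cast<unsigned char>(0x80 | (cp & 0x3F));
        return 2;
    }
    // 3 bytes: 1110xxxx 10xxxxxx 10xxxxxx
    out[0] = static_cast<unsigned char>(0xE0 | (cp >> 12));
    out[1] = static_cast<unsigned char>(0x80 | ((cp >> 6) & 0x3F));
    out[2] = static_cast<unsigned char>(0x80 | (cp & 0x3F));
    return 3;
}

int main()
{
    unsigned short array[2] = { 0x20ac, 0x20ab }; // € and ₫
    std::ofstream file("out.txt", std::ios::binary);
    for (unsigned short c : array) {
        unsigned char buf[3];
        int n = EncodeCharToUTF8(c, buf);
        file.write(reinterpret_cast<const char*>(buf), n);
    }
}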
Actually you are facing two problems:
1. How to convert a buffer from UTF-8 encoding to UTF-16 encoding?
I suggest you use the Boost.Locale library.
Sample code can look like this:
#include <boost/locale.hpp>
#include <iostream>
#include <string>

std::string ansi = "This is what we want to convert";
try
{
    std::string utf8 = boost::locale::conv::to_utf<char>(ansi, "ISO-8859-1");
    std::wstring utf16 = boost::locale::conv::to_utf<wchar_t>(ansi, "ISO-8859-1");
    std::wstring utf16_2 = boost::locale::conv::utf_to_utf<wchar_t, char>(utf8);
}
catch (const boost::locale::conv::conversion_error& e)
{
    std::cout << "Failed to convert to Unicode!" << std::endl;
}
2. How to save a buffer to a file as UTF-16 encoding?
This involves writing a BOM (byte order mark) at the beginning of the file manually; you can find a reference here.
That means if you want to save a buffer encoded as UTF-8 to a Unicode file, you should first write the 3 bytes "EF BB BF" at the beginning of the output file; "FE FF" for big-endian UTF-16, "FF FE" for little-endian UTF-16.
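A minimal sketch of writing the BOM followed by UTF-16LE data (assuming a little-endian machine, so the array's in-memory bytes already match UTF-16LE):
#include <fstream>

int main()
{
    unsigned short array[2] = { 0x20ac, 0x20ab };

    std::ofstream file("out_utf16le.txt", std::ios::binary);
    file.write("\xFF\xFE", 2); // UTF-16LE BOM
    // On a little-endian machine the code units can be written directly.
    file.write(reinterpret_cast<const char*>(array), sizeof(array));
}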
If you still don't understand how the BOM works, just open Notepad, write some words, save the file with different "Encoding" options, and then open the saved file with a hex editor; you will see the BOM.
Hope it helps you!

Reading From A File Which Contains Unicode Characters

I have this huge file which contains Unicode strings at the beginning (the first ~10,000 characters or so).
I don't care about the Unicode part; the parts I'm interested in aren't Unicode. But whenever I try to read those parts I get '=', and if I load the entire file into a char array and write it to some temporary file (without altering the data) with ofstream, I get incorrect data: all I get is a text file filled with Í. If I remove the Unicode part manually, everything works fine. So it seems ifstream cannot deal with streams containing Unicode data; but if this assumption is true, is there any way to work on this file without introducing a new library to my project?
Thanks,
EDIT: Here's a sample; the program reads from a file which contains characters (some, not all) that can't be represented in ASCII.
ifstream inFile("somefile");
inFile.seekg(0, ios_base::end);
size_t size = inFile.tellg();
inFile.seekg(0, ios_base::beg);
char *book = new char[size];
inFile.read(book, size);
for (size_t i = 0; i < size; i++) {
    cout << book[i] << " " << i << endl; // book[i] will always be '='
}
ofstream outFile("TEST.txt");
outFile.write(book, size);
outFile.close();
Keith Thompson's question is very important. Depending on which Unicode encoding, writing a small C routine that reads (and discards) the Unicode characters can be trivial, or slightly more complex.
Supposing the encoding is UTF-8, you will have a problem determining when to stop discarding, because ASCII is a subset of UTF-8: any time you encounter an ASCII char you might be tempted to say "this is it, we're back in ASCII land", and the next char might still be outside the ASCII range.
So you need to read the file and determine where the last byte > 127 is. Anything after that is plain ASCII -- hopefully.
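A minimal sketch of that approach (assuming the file fits in memory and the interesting tail really is plain ASCII):
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

int main()
{
    std::ifstream in("somefile", std::ios::binary);
    std::vector<char> data((std::istreambuf_iterator<char>(in)),
                           std::istreambuf_iterator<char>());

    // Find the position just past the last byte outside the ASCII range.
    std::size_t start = 0;
    for (std::size_t i = 0; i < data.size(); ++i)
        if (static_cast<unsigned char>(data[i]) > 127)
            start = i + 1;

    // Everything from start on is (hopefully) the plain ASCII part.
    std::string ascii_tail(data.begin() + start, data.end());
}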
A text file is generally in just one encoding: UTF-8, UTF-16 (big or little endian), UTF-32 (big or little endian), ASCII, or another ANSI code page. Mixing encodings is only possible in some custom way.
That said, you will have to read both the data that you need and the data that you don't in the same encoding. If you know the format is UTF-8 you could, depending on what you are going to do with the data, read the file as a binary file into a char buffer piece by piece. Then you could use API(s) like strnextc (on Windows; an equivalent API must be available on other platforms) to move character by character through the buffer. Once you reach the end, you could move the remainder to the front of the buffer and load the rest of the buffer from the file.
In fact you could use the above approach in general for any encoding. But for UTF-16, you could try using wifstream - provided the endianness of the file and of the platform you are running on is the same. And you need to check whether the implementation of wifstream is good at handling a change in endianness and is able to take care of the BOM (byte order mark) - the 2-byte sequence ("FE FF" or "FF FE") that is generally present at the beginning of a file - let alone surrogate pairs.

how to get a single character from UTF-8 encoded URDU string written in a file?

I am working on Urdu-Hindi translation/transliteration. My objective is to translate an Urdu sentence into Hindi and vice versa; I am using Visual C++ 2010 with the C++ language. I have written an Urdu sentence in a text file saved in UTF-8 format. Now I want to get a single character, one by one, from that file so that I can work on converting it into its equivalent Hindi character. When I try to get a single character from the input file and write that single character to the output file, I get some unknown ugly-looking character placed in the output file. Kindly help me with proper code. My code is as follows:
#include <iostream>
#include <fstream>
#include <cwchar>
#include <cstdlib>
using namespace std;

int main()
{
    wchar_t arry[50];
    wifstream inputfile("input.dat", ios::in);
    wofstream outputfile("output.dat");
    if (!inputfile)
    {
        cerr << "File not open" << endl;
        exit(1);
    }
    // i am using this while just to make sure the copy-paste operation of
    // the written urdu text from one file to another works; when i try to
    // pick only one character from the file, it does not work
    while (!inputfile.eof())
    {
        inputfile >> arry;
    }
    // i want to get the urdu character placed at each index so that i can
    // work on it to convert it into its equivalent hindi character
    int i = 0;
    while (arry[i] != L'\0')
    {
        outputfile << arry[i] << endl;
        i++;
    }
    inputfile.close();
    outputfile.close();
    cout << "Hello world" << endl;
}
Assuming you are on Windows, the easiest way to get "useful" characters is to read a larger chunk of the file (for example a line, or the entire file) and convert it to UTF-16 using the MultiByteToWideChar function. Use the "pseudo" code page CP_UTF8. In many cases, decoding the UTF-16 isn't required, but I don't know about the languages you are referring to; if you expect non-BMP characters (with codes above 65535) you might want to consider decoding the UTF-16 (or decoding the UTF-8 yourself) to avoid having to deal with 2-word characters.
You can also write your own UTF-8 decoder, if you prefer. It's not complicated, and just requires some bit-juggling to extract the proper bits from the input bytes and assemble them into the final unicode value.
HINT: Windows also has a NormalizeString() function, which you can use to make sure the characters from the file are what you expect. This can be used to transform characters that have several representations in Unicode into their "canonical" representation.
EDIT: if you read up on UTF-8 encoding, you can easily see that you can read the first byte, figure out how many more bytes you need, read those as well, and pass the whole thing to MultiByteToWideChar or your own decoder (although your own decoder could just read from the file, of course). That way you could really read one character at a time.
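A minimal sketch of that byte-counting idea (the helper name is my own; it assumes well-formed UTF-8 and does not handle invalid sequences):
#include <fstream>
#include <string>

// Read one complete UTF-8 sequence from the stream into out.
// Returns false at end of file.
bool read_utf8_char(std::ifstream& in, std::string& out)
{
    int first = in.get();
    if (first == std::char_traits<char>::eof()) return false;
    out.assign(1, static_cast<char>(first));

    // The leading byte tells how many continuation bytes follow.
    int extra = 0;
    if ((first & 0xE0) == 0xC0) extra = 1;      // 110xxxxx
    else if ((first & 0xF0) == 0xE0) extra = 2; // 1110xxxx
    else if ((first & 0xF8) == 0xF0) extra = 3; // 11110xxx

    for (int i = 0; i < extra; ++i)
        out.push_back(static_cast<char>(in.get()));
    return true;
}
Each returned std::string then holds the bytes of exactly one character, ready to be handed to MultiByteToWideChar with CP_UTF8 (open the file with std::ios::binary to avoid newline translation).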
'w' classes do not read and write UTF-8. They read and write UTF-16. If your file is in UTF-8, reading it with this code will produce gibberish.
You will need to read it as bytes and then convert it, or write it in UTF-16 in the first place.

Output data not the same as input data

I'm doing some file I/O and created the test below, but I thought testoutput2.txt would be the same as testinputdata.txt after running it?
testinputdata.txt:
some plain
text
data with
a number
42.0
testoutput2.txt (in some editors it's on separate lines, but in others it's all on one line):
some plain
਍ऀ琀攀砀琀ഀഀ
data with
਍ 愀  渀甀洀戀攀爀ഀഀ
42.0
#include <fstream>
#include <vector>

int main()
{
    // Read plain text data
    std::ifstream filein("testinputdata.txt");
    filein.seekg(0, std::ios::end);
    std::streampos length = filein.tellg();
    filein.seekg(0, std::ios::beg);
    std::vector<char> datain(length);
    filein.read(&datain[0], length);
    filein.close();

    // Write data
    std::ofstream fileoutBinary("testoutput.dat");
    fileoutBinary.write(&datain[0], datain.size());
    fileoutBinary.close();

    // Read file
    std::ifstream filein2("testoutput.dat");
    std::vector<char> datain2;
    filein2.seekg(0, std::ios::end);
    length = filein2.tellg();
    filein2.seekg(0, std::ios::beg);
    datain2.resize(length);
    filein2.read(&datain2[0], datain2.size());
    filein2.close();

    // Write data
    std::ofstream fileout("testoutput2.txt");
    fileout.write(&datain2[0], datain2.size());
    fileout.close();
}
It's working fine on my side. I have run your program on VC++ 6.0 and checked the output in Notepad and MS Word. Can you specify the name of the editor where you are facing the problem?
You can't read Unicode text into a std::vector<char>. The char data type only works with narrow strings, and my guess is that the text file you're reading in (testinputdata.txt) was saved with either UTF-8 or UTF-16 encoding.
Try using the wchar_t type for your characters instead. It is specifically designed to work with "wide" (or Unicode) characters.
Thou shalt verify thy input was successful! Although this would sort you out, you should also note that the number of bytes in the file has no direct relationship to the number of characters being read: there can be fewer characters than bytes (think of a Unicode character that needs multiple bytes in UTF-8 encoding) or vice versa (although the latter doesn't happen with any of the Unicode encodings). All you are experiencing is that read() couldn't read as many characters as you asked it to, but write() happily wrote the junk you gave it.
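A minimal sketch of that verification, using gcount() to see how much was actually read (and opening in binary mode so newline translation doesn't shrink the count):
#include <cstddef>
#include <fstream>
#include <iostream>
#include <vector>

int main()
{
    std::ifstream filein("testinputdata.txt", std::ios::binary);
    filein.seekg(0, std::ios::end);
    std::streamsize length = filein.tellg();
    filein.seekg(0, std::ios::beg);

    std::vector<char> datain(static_cast<std::size_t>(length));
    filein.read(datain.data(), length);

    // gcount() reports how many characters the last read really produced.
    if (filein.gcount() != length)
        std::cerr << "short read: " << filein.gcount()
                  << " of " << length << " bytes\n";
}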