C++ fstreams: open a file with a UTF-16 name

At first I built my project on Linux, and it was built around streams.
When I started porting to Windows I ran into some problems.
The name of the file I want to open is in UTF-16 encoding.
I try to open it using fstream:
QString source; // the UTF-16 filename (its content was shown in an image in the original post)
char *op = (char *) source.data(); // reinterprets the UTF-16 data as a narrow string
fstream stream(op, std::ios::in | std::ios::binary);
But the file cannot be opened.
When I check it with
if(!stream.is_open())
{} // I always find that it is not open, yet the file does exist.
I tried to do it with wfstream, but the result is the same, because wfstream also accepts only a char * filename. As I understand it, this is because a UTF-16 string passed as char * is truncated at the first zero byte, so only one symbol of the file's name is sent and the file is never found. I know that wfstream in Visual Studio can accept a wchar_t * as the name, but my compiler of choice is MinGW, and it does not have such a constructor signature.
Is there any way to do it with STL streams?
ADDITION
That string can contain not only ASCII symbols; it can contain Russian, German and Chinese symbols simultaneously. I don't want to limit myself to ASCII or the local encoding.
NEXT ADDITION
Also the data can be different, not only ASCII, otherwise I wouldn't bother with Unicode at all.
Thanks in advance!

Boost.Filesystem, especially its fstream.hpp header, may help.
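A minimal sketch of that approach (the filename here is a hypothetical example; boost::filesystem::path accepts a wide string, and on Windows the file is opened through the native wide-character API):
#include <boost/filesystem/path.hpp>
#include <boost/filesystem/fstream.hpp>

int main()
{
    // construct the path from a UTF-16 (wide) literal
    boost::filesystem::path p(L"\u0438.txt"); // "и.txt", not representable in many local code pages
    boost::filesystem::ifstream in(p, std::ios::in | std::ios::binary);
    return in.is_open() ? 0 : 1; // 0 if the file opened
}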

If you are using MSVC and its implementation of the C++ standard library, something like this should work:
QString source; // the UTF-16 filename, as above
const wchar_t *op = reinterpret_cast<const wchar_t *>(source.utf16()); // QChar is 16-bit, as is wchar_t on Windows
fstream stream(op, std::ios::in | std::ios::binary);
This works because the Microsoft C++ implementation has an extension that allows an fstream to be opened with a wide character string.

Convert the UTF-16 string using WideCharToMultiByte with CP_ACP before passing the filename to fstream.
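A sketch of that conversion (the helper name is mine; note the caveat raised elsewhere in this thread that CP_ACP can only represent filenames expressible in the active code page):
#include <windows.h>
#include <string>

// Narrow a UTF-16 filename to the active code page (CP_ACP).
// This only round-trips when every character is representable in CP_ACP.
std::string NarrowToAcp(const std::wstring &wide)
{
    int len = WideCharToMultiByte(CP_ACP, 0, wide.c_str(), -1,
                                  NULL, 0, NULL, NULL); // query the size first
    std::string narrow(len, '\0');
    WideCharToMultiByte(CP_ACP, 0, wide.c_str(), -1,
                        &narrow[0], len, NULL, NULL);
    narrow.resize(len - 1); // drop the trailing NUL the API wrote
    return narrow;
}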

Related

std::fstream different behavior on msvc and g++ with utf-8

std::string path("path.txt");
std::fstream f(path);
f.imbue(std::locale(std::locale::empty(), new std::codecvt_utf8<wchar_t>));
std::string lcpath;
f >> lcpath;
Reading UTF-8 text from path.txt with the MSVC compiler on Windows fails, in the sense that lcpath does not end up holding the path as UTF-8.
The code below works correctly on Linux when compiled with g++.
std::string path("path.txt");
std::fstream ff;
ff.open(path.c_str());
std::string lcpath;
ff >> lcpath;
Does fstream on Windows (MSVC) assume ASCII only by default?
In the first snippet, if I change string to wstring and fstream to wfstream, lcpath gets the correct value on Windows as well.
EDIT: If I convert the read lcpath using MultiByteToWideChar(), I get the correct representation. But why can't I read a UTF-8 string directly into std::string on Windows?
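For reference, a sketch of the conversion the edit describes (CP_UTF8 to UTF-16; the helper name is mine):
#include <windows.h>
#include <string>

// Convert UTF-8 bytes to a UTF-16 std::wstring via the Win32 API.
std::wstring Utf8ToWide(const std::string &utf8)
{
    int len = MultiByteToWideChar(CP_UTF8, 0, utf8.c_str(), -1, NULL, 0);
    std::wstring wide(len, L'\0');
    MultiByteToWideChar(CP_UTF8, 0, utf8.c_str(), -1, &wide[0], len);
    wide.resize(len - 1); // drop the trailing NUL
    return wide;
}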
Imbuing an already-opened file can be problematic:
http://www.cplusplus.com/reference/fstream/filebuf/imbue/
If loc is not the same locale as currently used by the file stream buffer, either the internal position pointer points to the beginning of the file, or its encoding is not state-dependent. Otherwise, it causes undefined behavior.
The problem here is that when a file is opened and it has a BOM marker, the BOM will usually be read from the file by the currently installed locale. Thus the position pointer is no longer at the beginning of the file and we have undefined behavior.
To make sure your locale is set correctly, you must do it before opening the file:
std::fstream f;
// imbue before open(), so the facet is in place before any data is read
f.imbue(std::locale(std::locale::empty(), new std::codecvt_utf8<wchar_t>));
std::string path("path.txt");
f.open(path);
std::string lcpath;
f >> lcpath;

CStdioFile problems with encoding on read file

I can't read a file correctly using CStdioFile.
I open notepad.exe, type àèìòùáéíóú and save it twice: once with the encoding set to ANSI (really CP-1252) and once as UTF-8.
Then I try to read it from MFC with the following block of code:
BOOL ReadAllFileContent(const CString &FilePath, CString *fileContent)
{
    CString sLine;
    BOOL isSuccess = false;
    CStdioFile input;
    isSuccess = input.Open(FilePath, CFile::modeRead);
    if (isSuccess) {
        while (input.ReadString(sLine)) {
            fileContent->Append(sLine);
        }
        input.Close();
    }
    return isSuccess;
}
When I call it with the ANSI file I get the expected result, àèìòùáéíóú,
but when I try to read the UTF-8 encoded file I get the mangled result à èìòùáéíóú.
I would like my function to work with all files regardless of their encoding.
What do I need to implement?
EDIT:
Unfortunately, in the real app the files come from an external app, so changing the file encoding isn't an option. I must be able to read both UTF-8 and CP-1252 files.
Any file is valid ANSI; what Notepad calls ANSI is really the Windows-1252 encoding.
I've figured out a way to read UTF-8 and CP-1252 correctly, based on the example provided here. Although it works, I need to pass the file encoding, which I don't know in advance.
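Since classic Notepad writes a BOM when saving UTF-8, one way to decide the encoding at runtime is to sniff the first bytes (a hedged sketch; BOM-less UTF-8 files would still be misclassified as CP-1252):
#include <fstream>

enum TextEncoding { ENC_UTF8, ENC_CP1252 };

// Peek at the first three bytes; an EF BB BF prefix marks a UTF-8 BOM.
TextEncoding DetectEncoding(const char *path)
{
    std::ifstream f(path, std::ios::binary);
    unsigned char b[3] = { 0, 0, 0 };
    f.read(reinterpret_cast<char *>(b), 3);
    if (f.gcount() == 3 && b[0] == 0xEF && b[1] == 0xBB && b[2] == 0xBF)
        return ENC_UTF8;   // UTF-8 BOM found
    return ENC_CP1252;     // default to Notepad's "ANSI"
}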
Thanks!
I personally use the class advertised here:
https://www.codeproject.com/Articles/7958/CTextFileDocument
It has excellent support for reading and writing text files of various encodings, including Unicode in its various flavours.
I have not had a problem with it.

Windows usage of char * functions with UTF-16

I am porting an application from Linux to Windows.
On Linux I use the libmagic library, which I would not be glad to get rid of on Windows.
The problem is that I need to pass the name of a file that is held in UTF-16 encoding to this function:
int magic_load(magic_t cookie, const char *filename);
Unfortunately it accepts only const char *filename. My first idea was to convert the UTF-16 string to the local encoding, but there are problems: the string can contain e.g. Chinese symbols while the local encoding is Russian.
As a result we would get trash in the output, and the program would not reach its aim.
Converting to UTF-8 doesn't help either, because this is Windows, and Windows holds file names in UTF-16.
But I somehow need to make that function able to open a file with a Unicode name.
I came up with only one very, very bad solution:
1. I have a filename.
2. I can copy the file with the Unicode name to a file with an ASCII name like "1.mp3".
3. Open it with libmagic functions and get what I want.
4. Remove the temporary file.
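(A sketch of those steps, assuming libmagic's magic_file() as the probing call and a hypothetical temporary name:)
#include <windows.h>
#include <magic.h>

// Copy to an ASCII-named temp file, probe it, then delete it.
const char *ProbeViaTempCopy(magic_t cookie, const wchar_t *utf16Name)
{
    const wchar_t *tmp = L"probe.tmp";      // hypothetical ASCII temp name
    if (!CopyFileW(utf16Name, tmp, FALSE))  // FALSE: overwrite if present
        return NULL;
    const char *result = magic_file(cookie, "probe.tmp");
    DeleteFileW(tmp);
    return result;                          // file-type description or NULL
}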
But I understand how bad this solution is and how it could slow my application down, so I wonder: perhaps there are better ways to do it?
Thanks in advance for any tips, 'cause I'm really confused by it.
Use 8.3 file names to access the files.
In addition to long file names up to 255 characters in length, Windows also generates an MS-DOS-compatible (short) file name in 8.3 format.
http://support.microsoft.com/kb/142982
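A sketch of that approach (GetShortPathNameW is the Win32 call that returns the 8.3 alias; note that short-name generation can be disabled per volume, in which case this fails):
#include <windows.h>
#include <string>

// Get the 8.3 (short) name of a file and narrow it for char*-only APIs.
std::string ShortAnsiName(const std::wstring &longName)
{
    wchar_t shortBuf[MAX_PATH];
    DWORD n = GetShortPathNameW(longName.c_str(), shortBuf, MAX_PATH);
    if (n == 0 || n >= MAX_PATH)
        return std::string();    // no short name available

    // The short name only uses characters safe in the ANSI code page.
    char narrow[MAX_PATH];
    WideCharToMultiByte(CP_ACP, 0, shortBuf, -1, narrow, MAX_PATH,
                        NULL, NULL);
    return std::string(narrow);
}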

fstream::open() Unicode or Non-Ascii characters don't work (with std::ios::out) on Windows

In a C++ project, I want to open a file with fstream::open(), which seems to be a major problem: the Windows build of my program fails miserably.
File "ä" (UTF-8 0xC3 0xA4)
std::string s = ...;
// convert s from UTF-8 to the local 8-bit encoding (see below)
std::fstream f;
f.open(s.c_str(), std::ios::binary | std::ios::in); // works (f.is_open() == true)
f.close();
f.open(s.c_str(), std::ios::binary | std::ios::in | std::ios::out); // doesn't work
The string s is UTF-8 encoded and is then converted from UTF-8 to Latin-1 (0xE4). I'm using Qt, so: QString::fromUtf8(s.c_str()).toLocal8Bit().constData().
Why can I open the file for reading, but not for writing?
File "и" (UTF-8 0xD0 0xB8)
The same code doesn't work at all.
It seems this character doesn't fit in the Windows-1252 charset. How can I open such an fstream (I'm not using MSVC, so I have no fstream::open(const wchar_t*, ios_base::openmode))?
In Microsoft's implementation of the standard library, there is a non-standard extension (overload) that provides Unicode support via UTF-16 encoded strings.
Just pass a UTF-16 encoded std::wstring to fstream::open(). This is the only way to make it work with fstream.
You can read more on what I find to be the easiest way to support unicode on windows here: http://utf8everywhere.org/
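For illustration, a minimal sketch of that extension (it compiles only with Microsoft's standard library; the filename is a hypothetical example):
#include <fstream>
#include <string>

int main()
{
    std::wstring name = L"\u0438.txt"; // "и.txt", not representable in CP-1252
    std::fstream f;
    // MSVC-only overload taking a wide filename
    f.open(name.c_str(), std::ios::binary | std::ios::in | std::ios::out);
    return f.is_open() ? 0 : 1;
}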
Using the standard APIs (such as std::fstream) on Windows, you can only open a file if the filename can be encoded using the currently set "ANSI Codepage" (CP_ACP).
This means that there can be files which simply cannot be opened using these APIs on Windows. Unless Microsoft implements support for setting CP_ACP to CP_UTF8, this cannot be done using Microsoft's CRT or C++ standard library implementation.
(Windows has had a feature called "short" filenames where, when enabled, every file on the drive had an ASCII filename that can be used via standard APIs. However this feature is going away so it does not represent a viable solution.)
Update: Windows 10 has since added support for setting the code page to UTF-8.

UCS-2LE text file parsing

I have a text file which was created using some Microsoft reporting tool. The text file includes the BOM 0xFFFE at the beginning and then ASCII character output with NUL bytes between the characters (i.e. "F.i.e.l.d.1."). I can use iconv to convert this to UTF-8 using UCS-2LE as the input format and UTF-8 as the output format; it works great.
My problem is that I want to read lines from the UCS-2LE file into strings, parse out the field values, and then write them out to an ASCII text file (i.e. Field1 Field2). I have tried both the string and wstring-based versions of getline: while they read the string from the file, functions like substr(start, length) interpret the string as 8-bit values, so the start and length values are off.
How do I read the UCS-2LE data into a C++ string and extract the data values? I have looked at Boost and ICU as well as numerous Google searches but have not found anything that works. What am I missing here? Please help!
My example code looks like this:
wifstream srcFile;
srcFile.open(argv[1], ios_base::in | ios_base::binary);
..
..
wstring srcBuf;
..
..
while( getline(srcFile, srcBuf) )
{
    wstring field1;
    field1 = srcBuf.substr(12, 12);
    ...
    ...
}
So if, for example, srcBuf contains "W.e. t.h.i.n.k. i.n. g.e.n.e.r.a.l.i.t.i.e.s." then the substr() above returns ".k. i.n. g.e" instead of "g.e.n.e.r.a.l.i.t.i.e.s.".
What I want is to read in the string and process it without having to worry about the multi-byte representation. Does anybody have an example of using Boost (or something else) to read these strings from the file and convert them to a fixed-width representation for internal use?
BTW, I am on a Mac using Eclipse and gcc. Is it possible my STL does not understand wide character strings?
Thanks!
Having spent some good hours tackling this question, here are my conclusions:
Reading a UTF-16 (or UCS-2LE) file is manageable in C++11; see How do I write a UTF-8 encoded string to a file in Windows, in C++.
std::codecvt_utf16 is part of C++11, so one can use it directly (see the sketch below for a code sample).
However, on older compilers (e.g. MSVC 2008), you can use locale and a custom codecvt facet/"recipe", as very nicely exemplified in this answer to Writing UTF16 to file in binary mode.
Alternatively, one can also try this method of reading, though it did not work in my case: the output was missing lines, which were replaced by garbage chars.
I wasn't able to get this done in my pre-C++11 compiler and had to resort to scripting it in Ruby and spawning a process (it's just in a test, so I think that kind of complication is OK there) to execute my task.
Hope this spares others some time, happy to help.
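A minimal sketch of the C++11 route (codecvt_utf16 configured for little-endian input with BOM consumption; the filename is a hypothetical example, and the facet was deprecated in C++17 but still works):
#include <fstream>
#include <codecvt>
#include <locale>
#include <string>

int main()
{
    std::wifstream src;
    // imbue before opening, so the facet is active from the first read
    src.imbue(std::locale(src.getloc(),
        new std::codecvt_utf16<wchar_t, 0x10FFFF,
            std::codecvt_mode(std::little_endian | std::consume_header)>));
    src.open("report.txt", std::ios::in | std::ios::binary); // hypothetical file
    std::wstring line;
    while (std::getline(src, line))
    {
        // indexes now count characters, not bytes
        std::wstring field1 = line.substr(12, 12);
    }
    return 0;
}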
substr works fine for me on Linux with g++ 4.3.3. The program
#include <string>
#include <iostream>
using namespace std;

int main()
{
    wstring s1 = L"Hello, world";
    wstring s2 = s1.substr(3,5);
    wcout << s2 << endl;
}
prints "lo, w" as it should.
However, the file reading probably does something different from what you expect. It converts the file from the locale encoding to wchar_t, which causes each byte to become its own wchar_t. I don't think the standard library supports reading UTF-16 into wchar_t.