Write strings received via socket to the input of a process - c++

I have an application on the Windows platform that receives remote commands from applications running on the Linux platform.
The Linux applications are experiencing difficulties accessing directories or files that contain accented characters: they send the command to access such files/directories and the response is always "directory/file not found".
I think the two applications are using different code pages. I say this because I previously had problems in the Linux applications: directories and files with accented names came out as strange symbols in std::cout, and after I added SetConsoleOutputCP(CP_UTF8) to the Windows application the problem was solved and the paths containing accents became readable. Does this mean that the Linux application uses code page 65001? In any case, the problem persists when sending strings containing paths to directories/files: whenever the Linux application tries to access a path containing accented characters, it fails.
I'll try to show how the two applications communicate.
Windows Side:
In short, this is the part where the client receives the message from the Linux application and then writes what was received to the process's input. When the written path contains accented characters, the process reports in its output that the path cannot be found.
BYTE buffer[4096];
DWORD BytesWritten;

// Receive the command from the Linux application over TLS
int ret = SSL_read(stI->ssl, (char*)buffer, sizeof(buffer));
if (ret <= 0)
    break;

// Forward the received bytes to the child process's stdin
if (!WriteFile(stI->hStdIn, buffer, ret, &BytesWritten, NULL))
    break;
And then it reads the output of the process and sends the content to the Linux application.
BYTE buffer[4096];
DWORD BytesAvailable, BytesRead;

// Note: BytesAvailable is used here without being set in this snippet;
// presumably an earlier PeekNamedPipe call (not shown) fills it in.
if (!ReadFile(stI->hStdOut, buffer, min(sizeof(buffer), BytesAvailable), &BytesRead, NULL))
    break;

// Send the number of bytes actually read (the posted code sent
// BytesAvailable, which may not match what ReadFile returned)
ret = SSL_write(stI->ssl, (char*)buffer, BytesRead);
if (ret <= 0)
    break;
Linux Side:
This part is very basic: the application reads user input and then sends it to the Windows application.
std::string inputBuffer;
ZH->console_input(inputBuffer, 33); // This function only controls the input and output of data with termios.
inputBuffer += '\n'; // To simulate an Enter keypress in the Windows application

// Send the typed path to the Windows application
SSL_write(session_data.ssl, inputBuffer.c_str(), inputBuffer.size());
The receiving part is basically the same as in the Windows application: it receives the data into a char buffer and then prints it to the screen with std::cout.
The only difference is that the socket is set to non-blocking and I use the select function.
Any suggestions on how to solve this problem?

Your best bet is to use proper Unicode encodings. Windows tends to use UTF-16 (two bytes per character for most text, four for the rest); Linux, on the other hand, uses UTF-8, which uses a single byte per character for ASCII and two to four bytes for non-ASCII characters. If you do a proper conversion between Windows UTF-16 and UTF-8, things should work correctly.
C++11 and Boost do provide some Unicode support, but for gold standard support, take a look at ICU.
Sockets, however, just transmit bytes, so they have nothing to do with Unicode conversions.
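As an illustration, here is a minimal sketch of such a conversion on the Windows side using MultiByteToWideChar and WideCharToMultiByte; the helper names are invented for this example, and error handling is omitted:
#include <string>
#include <windows.h>

// Convert the UTF-8 bytes travelling over the socket to the UTF-16
// strings that Windows APIs expect, and back again.
std::wstring utf8_to_wide(const std::string& s)
{
    int len = MultiByteToWideChar(CP_UTF8, 0, s.c_str(), (int)s.size(), NULL, 0);
    std::wstring out(len, L'\0');
    MultiByteToWideChar(CP_UTF8, 0, s.c_str(), (int)s.size(), &out[0], len);
    return out;
}

std::string wide_to_utf8(const std::wstring& s)
{
    int len = WideCharToMultiByte(CP_UTF8, 0, s.c_str(), (int)s.size(), NULL, 0, NULL, NULL);
    std::string out(len, '\0');
    WideCharToMultiByte(CP_UTF8, 0, s.c_str(), (int)s.size(), &out[0], len, NULL, NULL);
    return out;
}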

Related

Can't display 'ä' in GLFW window title

glfwSetWindowTitle(win, "Nämen");
Becomes "N?men", where '?' is in a little black, twisted square, indicating that the character could not be displayed.
How do I display 'ä'?
If you want to use non-ASCII letters in the window title, then the string has to be utf-8 encoded.
GLFW: Window title:
The window title is a regular C string using the UTF-8 encoding. This means for example that, as long as your source file is encoded as UTF-8, you can use any Unicode characters.
If you see a little black, twisted square, then this indicates that the ä is encoded with some ISO encoding that is not UTF-8, maybe something like Latin-1. To fix this, open the file in an editor in which you can change the encoding of the file, change it to UTF-8 (without BOM), and fix the ä in the title.
It seems like the GLFW implementation does not work according to the specification in this case. Probably the function still uses Latin-1 instead of UTF-8.
I had the same problem on GLFW 3.3 Windows 64 bit precompiled binaries and fixed it like this:
// Requires #define GLFW_EXPOSE_NATIVE_WIN32 before including <GLFW/glfw3native.h>
SetWindowTextA(glfwGetWin32Window(win), "Nämen");
The issue does not lie within GLFW but within the compiler. Encodings are handled by the major compilers as follows:
Good guy clang assumes that every file is encoded in UTF-8
Trusty gcc checks the system's settings[1] and falls back on UTF-8 when it fails to determine one.
MSVC checks for a BOM and uses the detected encoding; otherwise it assumes that the file is encoded using the user's current code page[2].
You can determine your current code page by simply running chcp in your (Windows) console or PowerShell. For me, on a fresh install of Windows 11, it yields "850" (language: English; keyboard: German), which stands for Code Page 850.
To fix this issue you have several solutions:
Change your system's code page. Arguably the worst solution.
Prefix your strings with u8, escape all Unicode literals and convert the string to wide char before passing it to Win32 functions; e.g.:
const char* title = u8"\u0421\u043b\u0430\u0432\u0430\u0020\u0423\u043a\u0440\u0430\u0457\u043d\u0456\u0021";
// Note: up to C++17 a u8 literal is const char*; in C++20 it becomes
// const char8_t*, so a cast (or /Zc:char8_t-) would be needed there.
// This conversion is actually performed by GLFW; see footnote [3]
const int l = MultiByteToWideChar(CP_UTF8, 0, title, -1, NULL, 0);
wchar_t* buf = (wchar_t*)_malloca(l * sizeof(wchar_t)); // cast needed in C++
MultiByteToWideChar(CP_UTF8, 0, title, -1, buf, l);
SetWindowTextW(hWnd, buf);
_freea(buf);
Save your source files with UTF-8 encoding WITH BOM. This allows you to write your strings without having to escape them. You'll still need to convert the string to a wide char string using the method seen above.
Specify the /utf-8 flag when compiling; this has the same effect as the previous solution, but you don't need the BOM anymore.
The solutions stated above still require you to convert your good and nice string to a big chunky wide string.
Another option would be to provide a manifest[4] with the activeCodePage set to UTF-8. This way all Win32 functions with the A-suffix (e.g. SetWindowTextA) accept and properly handle UTF-8 strings, provided the running system is Windows version 1903 or newer.
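For reference, a minimal sketch of such a manifest (the activeCodePage element is documented by Microsoft; how you embed the manifest depends on your build system):
<?xml version="1.0" encoding="UTF-8"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <application>
    <windowsSettings>
      <activeCodePage xmlns="http://schemas.microsoft.com/SMI/2019/WindowsSettings">UTF-8</activeCodePage>
    </windowsSettings>
  </application>
</assembly>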
TL;DR
Compile your application with the /utf-8 flag active.
IMPORTANT: This works for the Win32 APIs. This doesn't let you magically write Unicode emojis to the console like a hipster JS developer.
[1] I suppose it reads the LC_ALL setting on Linux. In the last 6 years I have never seen a distribution that does NOT specify UTF-8. However, take this information with a grain of salt; I might be entirely wrong about how gcc handles this now.
[2] If no byte-order mark is found, it assumes that the source file is encoded in the current user code page [...].
[3] GLFW performs the conversion as seen here.
[4] More about Win32 ANSI APIs, manifests and UTF-8 can be found here.

Get strings in the right encoding in c++

I asked a similar question before.
But I am still having trouble with encodings in C++.
I'll try to describe the problem as well as possible.
I have a C++ client communicating with a C# service over TCP.
Now I need to display the messages from the service in a MessageBox (Win32 API).
The bytes sent by the C# service are UTF-8 encoded.
Important to know: the C++ client will only be running on Windows systems.
This is the code to receive the bytes and to display the Text:
char buffer[1024];
int receivedBytes = recv(socketHandle, buffer, sizeof(buffer) - 1, 0);
char str[1024]; // fixed size instead of a VLA, which is not standard C++
for (int index = 0; index < receivedBytes; index++)
{
    str[index] = buffer[index];
}
str[receivedBytes] = '\0'; // null-terminate before displaying
MessageBox(mainWindow, (LPCTSTR)str, (LPCTSTR)"Fehler", MB_OK | MB_ICONERROR);
If the text contains characters like üäö, they are not shown correctly in the message box.
What can I do to receive the message as a UTF-8 string in C++?
Is there a possibility to convert the char[] to a UTF-8 string?
Thx for helping
Tobi
If you want to display Unicode characters in Windows, you need to translate the UTF-8 string into UTF-16 (formerly UCS-2), as this is the Unicode encoding Windows handles natively. You do that with the MultiByteToWideChar function.
Also make sure that #define UNICODE is set before you include the Windows headers, so that MessageBox resolves to MessageBoxW, or use MessageBoxW explicitly.
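Applied to the snippet above, a minimal sketch (reusing buffer, receivedBytes and mainWindow from the question; error handling omitted) could look like this:
#include <string>
#include <windows.h>

// Null-terminate the received bytes, then convert UTF-8 -> UTF-16
buffer[receivedBytes] = '\0';
int len = MultiByteToWideChar(CP_UTF8, 0, buffer, receivedBytes, NULL, 0);
std::wstring wide(len, L'\0');
MultiByteToWideChar(CP_UTF8, 0, buffer, receivedBytes, &wide[0], len);

// Call the wide-character API explicitly so no ANSI round trip happens
MessageBoxW(mainWindow, wide.c_str(), L"Fehler", MB_OK | MB_ICONERROR);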

to unicode or not to unicode

I'm getting a value from the registry. This value might have double byte characters in it.
I will later have to transfer this across the network to a C# client to display. C# is all unicode.
The registry function returns an MBCS string if you call the non-Unicode variant.
What should I use?
std::string result(cbData, '\0');
RegQueryValueExA(h_sub_key, "DisplayName", NULL, NULL, (LPBYTE)&result[0], &cbData);
or
std::string result(cbData, '\0');
RegQueryValueExW(h_sub_key, L"DisplayName", NULL, NULL, (LPBYTE)&result[0], &cbData);
Using Unicode whenever possible will make your life easier. The registry contains Unicode natively and converts to MBCS on the fly when you use RegQueryValueExA; why would you want to do an unneeded conversion?
Converting to UTF-8 from UTF-16 might make sense for information going over the network, but if you control both ends of the connection it wouldn't be necessary.
No, that's not the way it works. The string you get back from the first snippet is encoded according to the current system code page. It could be a double-byte encoding; it could be anything. Big problem, of course: the C# code on the other end of that Internet connection has no way to guess what the code page might be.
So do not use the first snippet. The second one gets you the string in UTF-16, the native encoding used in Windows; result needs to be a std::wstring. UTF-16 is also the encoding used by C#, so you could send the binary string, although that's typically not a good idea; XML is popular. It is up to you to define the wire format.
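Putting both answers together, a minimal sketch (assuming h_sub_key is an open key holding a REG_SZ value; error handling omitted) that reads the value with the wide API and then converts UTF-16 to UTF-8 for the wire:
#include <string>
#include <windows.h>

DWORD cbData = 0;
// First call asks for the required size in bytes
RegQueryValueExW(h_sub_key, L"DisplayName", NULL, NULL, NULL, &cbData);
std::wstring value(cbData / sizeof(wchar_t), L'\0');
RegQueryValueExW(h_sub_key, L"DisplayName", NULL, NULL, (LPBYTE)&value[0], &cbData);

// REG_SZ data usually includes a trailing null; drop it
if (!value.empty() && value.back() == L'\0')
    value.pop_back();

// Convert UTF-16 -> UTF-8 for the network
int len = WideCharToMultiByte(CP_UTF8, 0, value.c_str(), (int)value.size(), NULL, 0, NULL, NULL);
std::string utf8(len, '\0');
WideCharToMultiByte(CP_UTF8, 0, value.c_str(), (int)value.size(), &utf8[0], len, NULL, NULL);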

Can't read unicode (japanese) from a file

Hi, I have a file containing Japanese text, saved as a Unicode file.
I need to read from the file and display the information to the standard output.
I am using Visual Studio 2008.
int main()
{
    wstring line;
    wifstream myfile("D:\sample.txt"); //file containing japanese characters, saved as unicode file
    //myfile.imbue(locale("Japanese_Japan"));
    if (!myfile)
        cout << "While opening a file an error is encountered" << endl;
    else
        cout << "File is successfully opened" << endl;
    //wcout.imbue(locale("Japanese_Japan"));
    while (myfile.good())
    {
        getline(myfile, line);
        wcout << line << endl;
    }
    myfile.close();
    system("PAUSE");
    return 0;
}
This program generates some random output and I don't see any japanese text on the screen.
Oh boy. Welcome to the Fun, Fun world of character encodings.
The first thing you need to know is that your console is not Unicode on Windows. The only way you'll ever see Japanese characters in a console application is if you set your non-Unicode (ANSI) locale to Japanese. This will also make backslashes look like yen symbols and break paths containing European accented characters for programs using the ANSI Windows API (which was supposed to have been deprecated when Windows XP came around, but which people still use to this day...)
So first thing you'll want to do is build a GUI program instead. But I'll leave that as an exercise to the interested reader.
Second, there are a lot of ways to represent text. You first need to figure out the encoding in use. Is it UTF-8? UTF-16 (and if so, little or big endian?) Shift-JIS? EUC-JP? You can only use a wstream to read directly if the file is in little-endian UTF-16, and even then you need to futz with its internal buffer. Anything other than UTF-16 and you'll get unreadable junk. And this is all only the case on Windows; other OSes may have a different wstream representation. It's best not to use wstreams at all, really.
So, let's assume it's not UTF-16 (for full generality). In this case you must read it as a char stream, not using a wstream. You must then convert this character string into UTF-16 (assuming you're using Windows; other OSes tend to use UTF-8 char*s). On Windows this can be done with MultiByteToWideChar. Make sure you pass in the right code page value; CP_ACP or CP_OEMCP are almost always the wrong answer.
Now, you may be wondering how to determine which code page (i.e., character encoding) is correct. The short answer is: you don't. There is no prima facie way of looking at a text string and saying which encoding it is. Sure, there may be hints; e.g., if you see a byte order mark, chances are it's whatever variant of Unicode makes that mark. But in general, you have to be told by the user, or make an attempt to guess and rely on the user to correct you if you're wrong, or select a fixed character set and not attempt to support any others.
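As a rough sketch of that approach, assuming the file turns out to be UTF-8 (error handling omitted):
#include <fstream>
#include <sstream>
#include <string>
#include <windows.h>

// Read the file as raw bytes - no wstream involved
std::ifstream file("D:\\sample.txt", std::ios::binary);
std::ostringstream ss;
ss << file.rdbuf();
std::string bytes = ss.str();

// Convert with the correct code page (CP_ACP/CP_OEMCP would be wrong here)
int len = MultiByteToWideChar(CP_UTF8, 0, bytes.c_str(), (int)bytes.size(), NULL, 0);
std::wstring text(len, L'\0');
MultiByteToWideChar(CP_UTF8, 0, bytes.c_str(), (int)bytes.size(), &text[0], len);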
Someone here had the same problem with Russian characters (he's using basic_ifstream<wchar_t>, which should be the same as wifstream according to this page). In the comments of that question they also link to this, which should help you further.
If I understood everything correctly, it seems that wifstream reads the characters correctly, but your program tries to convert them to whatever locale your program is running in.
Two errors:
std::wifstream(L"D:\\sample.txt");
And do not mix cout and wcout.
Also check that your file is encoded in UTF-16, little-endian. If it is not, you will have trouble reading it.
wfstream uses wfilebuf for the actual reading and writing of the data. wfilebuf defaults to using a char buffer internally which means that the text in the file is assumed narrow, and converted to wide before you see it. Since the text was actually wide, you get a mess.
The solution is to replace the wfilebuf buffer with a wide one.
You probably also need to open the file as binary.
const size_t bufsize = 128;
wchar_t buffer[bufsize];
wifstream myfile("D:\\sample.txt", ios::binary);
myfile.rdbuf()->pubsetbuf(buffer, bufsize);
Make sure the buffer outlives the stream object!
See details here: http://msdn.microsoft.com/en-us/library/tzf8k3z8(v=VS.80).aspx

_wfopen equivalent under Mac OS X

I'm looking for the equivalent of Windows' _wfopen() under Mac OS X. Any idea?
I need this in order to port a Windows library that uses wchar* for its File interface. As this is intended to be a cross-platform library, I am unable to rely on how the client application will get the file path and give it to the library.
The POSIX APIs in Mac OS X are usable with UTF-8 strings. In order to convert a wchar_t string to UTF-8, it is possible to use the CoreFoundation framework from Mac OS X.
Here is a class that wraps a UTF-8 string generated from a wchar_t string.
class Utf8
{
public:
    Utf8(const wchar_t* wsz): m_utf8(NULL)
    {
        // OS X uses 32-bit wchar
        const int bytes = wcslen(wsz) * sizeof(wchar_t);
        // comp_bLittleEndian is in the lib I use in order to detect PowerPC/Intel
        CFStringEncoding encoding = comp_bLittleEndian ? kCFStringEncodingUTF32LE
                                                       : kCFStringEncodingUTF32BE;
        CFStringRef str = CFStringCreateWithBytesNoCopy(NULL,
                                                        (const UInt8*)wsz, bytes,
                                                        encoding, false,
                                                        kCFAllocatorNull);
        const int bytesUtf8 = CFStringGetMaximumSizeOfFileSystemRepresentation(str);
        m_utf8 = new char[bytesUtf8];
        CFStringGetFileSystemRepresentation(str, m_utf8, bytesUtf8);
        CFRelease(str);
    }

    ~Utf8()
    {
        if (m_utf8)
        {
            delete[] m_utf8;
        }
    }

public:
    operator const char*() const { return m_utf8; }

private:
    char* m_utf8;
};
Usage:
const wchar_t* wsz = L"Here is some Unicode content: éà€œæ";
const Utf8 utf8 = wsz;
FILE* file = fopen(utf8, "r");
This will work for reading or writing files.
You just want to open a file handle using a path that may contain Unicode characters, right? Just pass the path in filesystem representation to fopen.
If the path came from the stock Mac OS X frameworks (for example, an Open panel whether Carbon or Cocoa), you won't need to do any conversion on it and will be able to use it as-is.
If you're generating part of the path yourself, you should create a CFStringRef from your path and then get that in filesystem representation to pass to POSIX APIs like open or fopen.
Generally speaking, you won't have to do a lot of that for most applications. For example, many applications may have auxiliary data files stored in the user's Application Support directory, but as long as the names of those files are ASCII, and you use standard Mac OS X APIs to locate the user's Application Support directory, you don't need to do a bunch of paranoid conversion of a path constructed with those two components.
Edited to add: I would strongly caution against arbitrarily converting everything to UTF-8 using something like wcstombs because filesystem encoding is not necessarily identical to the generated UTF-8. Mac OS X and Windows both use specific (but different) canonical decomposition rules for the encoding used in filesystem paths.
For example, they need to decide whether "é" will be stored as one or two code units (either LATIN SMALL LETTER E WITH ACUTE or LATIN SMALL LETTER E followed by COMBINING ACUTE ACCENT). These will result in two different — and different-length — byte sequences, and both Mac OS X and Windows work to avoid putting multiple files with the same name (as the user perceives them) in the same directory.
The rules for how to perform this canonical decomposition can get pretty hairy, so rather than try to implement it yourself it's best to leave it to the functions the system frameworks have provided for you to do the heavy lifting.
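As a rough sketch of the CFStringRef route (hypothetical path; error handling trimmed), CFStringGetFileSystemRepresentation takes care of the canonical decomposition for you:
#include <CoreFoundation/CoreFoundation.h>
#include <limits.h>
#include <stdio.h>

// Build a CFString from a UTF-8 path fragment, then get the
// filesystem representation to hand to fopen/open.
CFStringRef path = CFStringCreateWithCString(kCFAllocatorDefault,
                                             "/tmp/éxample.txt",
                                             kCFStringEncodingUTF8);
char fsRep[PATH_MAX];
if (CFStringGetFileSystemRepresentation(path, fsRep, sizeof(fsRep)))
{
    FILE* f = fopen(fsRep, "r"); // filesystem representation is safe for POSIX calls
    if (f) fclose(f);
}
CFRelease(path);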
#JKP:
Not all functions in Mac OS X accept UTF-8, but filenames and filepaths may be UTF-8; thus all POSIX functions dealing with file access (open, fopen, stat, etc.) accept UTF-8.
See here. Quote:
How a file name looks at the API level depends on the API. Current Carbon APIs handle file names as an array of UTF-16 characters; POSIX ones handle them as an array of UTF-8, which is why UTF-8 works well in Terminal. How it's stored on disk depends on the disk format; HFS+ uses UTF-16, but that's not important in most cases.
Some other POSIX functions handle UTF-8 as well, e.g. functions dealing with user names, group names or user passwords use UTF-8 to store the information (thus a user name can be Japanese and your password can be Chinese, no problem).
But not all of them handle UTF-8. For string functions, a UTF-8 string is just a normal C string, and characters above 126 have no special meaning; they don't understand the concept of multiple bytes (chars in C) forming a single Unicode character. How other APIs handle a char* pointer passed to them differs from API to API. However, as a rule of thumb you can say:
Either the function only accepts C strings with pure ASCII characters (only in the range 0 to 126), or it accepts UTF-8. Usually a function does not accept characters above 126 and then interpret them in some encoding other than UTF-8; if that really were the case, it would be documented, and there would have to be a way to pass the encoding along with the string.
If you're using Cocoa it's fairly easy with NSString. Just load the UTF-16 data using -initWithBytes:length:encoding: (or perhaps -initWithCString:encoding:) and then get a UTF-8 version by calling UTF8String on the result. Then just call fopen with your new UTF-8 string as the parameter.
You can definitely call fopen with a UTF-8 string, regardless of language - can't help with C++ on OSX though - sorry.
I have read a file name from a UTF-8 configuration file through wifstream (it uses a wchar_t buffer).
The Mac implementation is different from Linux and Windows.
wifstream reads each byte from the file into a separate wchar_t cell of the buffer, so we end up with three empty bytes per character, even though open requires a char string. The programmer can therefore use the wcstombs function to convert the wide-character string to a multi-byte string.
The API supports UTF-8. For a better understanding, use a memory watcher and a hex editor on your file.
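For what it's worth, a minimal sketch of the wcstombs conversion mentioned above (the result depends on the current C locale, so set one first; note the caution against wcstombs for filesystem paths earlier on this page):
#include <clocale>
#include <cstdlib>
#include <string>

std::string to_multibyte(const std::wstring& wide)
{
    std::setlocale(LC_ALL, "");  // use the environment's locale, e.g. a UTF-8 one
    std::size_t len = std::wcstombs(NULL, wide.c_str(), 0);
    if (len == (std::size_t)-1)
        return std::string();    // contains a character the locale cannot represent
    std::string narrow(len + 1, '\0');
    std::wcstombs(&narrow[0], wide.c_str(), len + 1);
    narrow.resize(len);          // drop the extra slot used for the terminator
    return narrow;
}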