MFC Dialog Data Exchange (DDX): comma instead of point as decimal separator - C++

To initialize the controls in my dialogs and to gather user input, I'm using DDX. How can I change the program to display float numbers with a comma instead of a point (preferably without changing the locale)?
The program has the "C" locale set; if I change the locale, I have to take care of every atof and sprintf call (the library for getting/setting the float numbers in the underlying MySQL database expects strings with a point as the decimal separator).
So far, I can only think of changing the locale and then using a stringstream with imbue (found here), but maybe there's a way without changing the locale.
Thanks for your help!

This is a locale-specific thing; you will probably need to handle it using a locale.
Note that DDX is for initializing control objects, so that your control variable member declarations stay in sync with the values you chose in your resource file (or however you initialized the dialog the controls reside on).
Edit: Some controls like CComboBox and CListBox have a SetLocale method, but I've never used it, so I'm not sure how well it works, and it's not available on all controls.
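For completeness, the imbue idea mentioned in the question can be used without ever touching the global "C" locale: imbue only a local stream with a custom numpunct facet. A minimal sketch (the facet and the FormatForDisplay helper are illustrative names, not from any answer here):

#include <locale>
#include <sstream>
#include <string>

// Facet that swaps the decimal point for a comma (display purposes only).
struct CommaDecimal : std::numpunct<char> {
    char do_decimal_point() const override { return ','; }
};

// Format a float for the dialog without changing the global "C" locale.
std::string FormatForDisplay(double value)
{
    std::ostringstream oss;
    // The locale object takes ownership of the facet (it is reference counted).
    oss.imbue(std::locale(oss.getloc(), new CommaDecimal));
    oss << value;
    return oss.str();
}

Parsing the comma-formatted input back works the same way with an imbued istringstream, so the atof/sprintf calls talking to the MySQL layer remain unaffected.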

Related

How to manipulate the terminal output buffer directly

I want to write a game in the Linux terminal (in C/C++), so first I should be able to print the characters I want to it. I tried printf(), but it seems a little inconvenient. I think there should be a character buffer for the output characters of a terminal. Is there any way to directly manipulate that buffer?
Thanks a lot.
It works in a rather different manner.
A terminal is nothing but a character device, which means it is practically unbuffered. Despite this, you can still manipulate the screen position with appropriate sequences of characters, called "escape sequences". For example, if you issue the \e[A sequence (0x1B 0x5B 0x41), the cursor goes one line up while leaving the characters intact, while if you issue \e[10;10H (0x1B 0x5B 0x31 0x30 0x3B 0x31 0x30 0x48), your cursor will go to column 10 of row 10 (exactly what you want). After you have moved the cursor, the next character you write goes to that position. For further information on escape sequences, look at this link.
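As a minimal illustration of these sequences (assuming an ANSI/VT100-compatible terminal emulator; this snippet is not part of the original answer):

#include <cstdio>

int main()
{
    std::printf("\x1b[2J");     // clear the screen
    std::printf("\x1b[10;10H"); // move the cursor to row 10, column 10
    std::printf("D");           // the next character is written at that position
    std::printf("\x1b[A");      // move the cursor one line up
    std::fflush(stdout);        // make sure the sequences reach the terminal
    return 0;
}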
Another important thing to know about is the dimensions of your terminal. ioctl can inform you about the size of the terminal window:
#include <stdio.h>
#include <sys/ioctl.h>
#include <termios.h>
#include <unistd.h>

int main ()
{
    struct winsize ws;

    // Ask the terminal driver for the current window dimensions.
    if (ioctl (STDOUT_FILENO, TIOCGWINSZ, &ws) == -1)
        return 1;

    printf ("Rows: %d, Cols: %d\n", ws.ws_row, ws.ws_col);
    return 0;
}
Note that the technique mentioned above is a solution to send commands to the terminal emulator connected to your pseudo terminal device. That is, the terminal device itself remains unbuffered, the commands are interpreted by the terminal emulator.
You might want to use the setbuf function, which allows you to tell printf which buffer to use. You can use your own buffer and control its contents.
However, this is the wrong approach, for two reasons.
First, it won't save you work compared to printf(), fwrite() and putchar().
Second, and more importantly, even these functions won't help you. From your comment it's clear that you want to manipulate a character on the screen, for example, replace a '.' (empty floor) by a 'D' (Dragon) when that Dragon approaches. You can't do this by manipulating the output buffer of printf(). Once the '.' is displayed, the output buffer has been flushed to the terminal, and manipulating that buffer has no effect. The terminal has received a copy of that buffer and has displayed what the data in the buffer instructed it to display. In order to change what is displayed, you have to send new commands.
And this is exactly what ncurses does for you. It keeps track of the state of the terminal, the current content, the cursor position and all the nasty details, like how to make a character appear bold.
You won't succeed with printf. That's hopeless. You need to learn what ncurses can do for you, and then everything else is easy.
TLDR: use ncurses
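For instance, the Dragon example from above might look roughly like this in ncurses (a sketch, compiled with -lncurses; the details are mine, not the answer's):

#include <ncurses.h>

int main()
{
    initscr();             // enter curses mode and take over the screen
    curs_set(0);           // hide the cursor
    mvaddch(10, 10, '.');  // draw an empty floor tile at row 10, column 10
    refresh();             // push the pending changes to the terminal
    getch();               // wait for a key press
    mvaddch(10, 10, 'D');  // the Dragon arrives: overwrite the same cell
    refresh();
    getch();
    endwin();              // restore the terminal to its previous state
    return 0;
}

ncurses diffs its internal screen state against what the terminal currently shows and emits only the escape sequences needed to update the changed cells.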
This answer focuses on the why.
Why?
There probably is a way to modify the buffer used by the terminal emulator on your system, given that you have sufficient privileges to write into the respective memory and maybe even to modify other system resources, as required.
As terminals historically were distinct, isolated, physical devices rather than being conceptually emulated in software, you couldn't access them in any way other than sending them data.
(I mean, you could always print a message locally to instruct a human to take a screwdriver and physically mess around with the physical terminal device, but that's not how humans wanted to solve the contemporary issue of "how do I change the cursor position and rewrite characters on my (and potentially any other connected) terminal?".)
As others have pointed out, most physical terminals were (at some point) built to give special meaning to certain input sequences, instead of printing them, which makes them escape sequences in this context, according to how wikipedia.org defines them, that is.
A behavioral convention in how to respond to certain input sequences emerged (presumably for the sake of interoperability or for reasons of market predominance) and got standardized as ANSI escape codes.
Those input sequences survived the transition from physical terminal devices to their emulated counterparts, and even though you could probably manipulate the terminal emulator's memory using system calls, libraries such as ncurses allow you to easily make use of said ANSI escape codes in your console application.
Also, using such libraries is the obvious solution for making your application work remotely:
Yes, technically, you could ssh into another system (or get access in any other, more obscure way that works for you) and cause system calls or any other event that would interfere with the terminal emulator in the desired way.
But firstly, I doubt most users would want to grant you the privilege of modifying their terminal emulator's memory merely to enjoy your text adventure.
Also, interoperability would suffer, as you couldn't easily support that one user who still has a VT100 and insists on using it to play your game.

Change Console Code Page in Windows C++

I'm trying to output UTF-8 characters in the Windows command line. I can't seem to get the function SetConsoleOutputCP to work. I also heard that you have to change the font to "Lucida Console" for it to work, but I can't get that working either. Can someone please provide me with a short example of how to use these functions to correctly output UTF-8 characters to the console?
Also I heard that those functions don't work in Windows XP, is there a better alternative to those functions which will work in Windows XP?
[I know this question is old and was about Windows XP, but it still seemed like a good place to drop this information so I (and maybe others) can find it again in the future.]
Support for Unicode in CMD windows has improved in newer versions of Windows. This program will work on Windows 10.
#include <iostream>
#include <Windows.h>

class UTF8CodePage {
public:
    UTF8CodePage() : m_old_code_page(::GetConsoleOutputCP()) {
        ::SetConsoleOutputCP(CP_UTF8);
    }
    ~UTF8CodePage() { ::SetConsoleOutputCP(m_old_code_page); }

private:
    UINT m_old_code_page;
};

int main() {
    UTF8CodePage use_utf8;
    const char *text = u8"This text is in UTF-8. ¡Olé! 佻\n";
    std::cout << text;
    return 0;
}
I made an RAII class to ensure the code page is restored because it would be rude to leave the code page changed if the user had purposely selected a specific one. All the Windows-specific code (SetConsoleOutputCP) is contained within that class. The definition of the use_utf8 variable in main changes the code page to UTF-8, and that code page will stay in effect until the variable is destructed at the end of the scope.
Note that I used the u8 prefix on the string literal, which is a newer feature of C++ to ensure that the string is encoded using UTF-8 regardless of the encoding used for the source file. You don't have to use that feature if you have another way to make a string of valid UTF-8 text.
You still have to be sure that the CMD window is using a font that supports the glyphs you need. I don't think there's a way to get font linking automatically.
But this will at least show the replacement character if the font is missing the glyph. For example, in my window, the ¡Olé! looks right but the CJK glyph is shown approximately like �. If the user copies that replacement character, the clipboard will receive the original glyph, so they can paste it into other programs without any loss of fidelity.
Note that command line parameters you get from main's argv will be in the original code page. One way to work around this is to get the unconverted "wide" command line with GetCommandLineW, convert it to UTF-8 with WideCharToMultiByte, and then parse it yourself. Alternatively, you can pass the result of GetCommandLineW to CommandLineToArgvW, which will parse it, and then convert each argument to UTF-8.
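A hedged sketch of that second route (the GetUtf8Args helper is a name I made up; error handling is minimal):

#include <windows.h>
#include <shellapi.h>   // CommandLineToArgvW; link with Shell32.lib
#include <string>
#include <vector>

std::vector<std::string> GetUtf8Args()
{
    int argc = 0;
    LPWSTR *argv = CommandLineToArgvW(GetCommandLineW(), &argc);
    std::vector<std::string> args;
    for (int i = 0; i < argc; ++i) {
        // First call computes the required buffer size (includes the NUL).
        int len = WideCharToMultiByte(CP_UTF8, 0, argv[i], -1,
                                      nullptr, 0, nullptr, nullptr);
        std::string arg(len, '\0');
        WideCharToMultiByte(CP_UTF8, 0, argv[i], -1,
                            &arg[0], len, nullptr, nullptr);
        arg.resize(len - 1);  // drop the trailing NUL
        args.push_back(arg);
    }
    LocalFree(argv);  // CommandLineToArgvW allocates with LocalAlloc
    return args;
}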
Finally, note that changing the code page affects only the output. If you input text from the user, it arrives encoded using the original code page (often called the OEM code page).
TODO: Figure out input. SetConsoleCP isn't doing what I think the documentation says it should do.
The Windows console doesn't play nice with Unicode, and particularly not with UTF-8.
Setting the console code page to UTF-8 won't work.
One approach is to use MultiByteToWideChar() (or something else) to convert the text to UTF-16, then WideCharToMultiByte() (or something else) to convert it to a localised ISO encoding. Then set the console code page to that ISO code page.
It's ugly, but it sort of works.
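A sketch of that two-step conversion, under the assumption that the input is valid UTF-8 (the Utf8ToConsoleCP name is mine):

#include <windows.h>
#include <string>

std::string Utf8ToConsoleCP(const std::string &utf8)
{
    // Step 1: UTF-8 -> UTF-16.
    int wlen = MultiByteToWideChar(CP_UTF8, 0, utf8.c_str(), -1, nullptr, 0);
    std::wstring wide(wlen, L'\0');
    MultiByteToWideChar(CP_UTF8, 0, utf8.c_str(), -1, &wide[0], wlen);

    // Step 2: UTF-16 -> the console's current output code page.
    UINT cp = GetConsoleOutputCP();
    int len = WideCharToMultiByte(cp, 0, wide.c_str(), -1,
                                  nullptr, 0, nullptr, nullptr);
    std::string out(len, '\0');
    WideCharToMultiByte(cp, 0, wide.c_str(), -1,
                        &out[0], len, nullptr, nullptr);
    out.resize(len - 1);  // drop the trailing NUL
    return out;
}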
In short: SetConsoleOutputCP(CP_UTF8) and cout/wcout don't work together by default.
Though the Windows CRT supports UTF-8 output, a robust way to output UTF-8 characters to the console is to convert them into the console's current code page, especially if you want to use cout/wcout.
The standard high-level functions of basic_ostream do not work properly with UTF-8 by default.
I've seen usage of MultiByteToWideChar and WideCharToMultiByte with CP_OEMCP and CP_UTF8 parameters.
You may set up your application environment, including the console font, via SetCurrentConsoleFontEx, but it works only from Vista and Server 2008 onward.
Also, check this about cout and the console.
_setmode and wprintf work together as well, but this may lead to crashes in non-wide-char functions.
The problem occurs because there is a difference between the code page Windows uses in your console and the encoding of your source code text file.
Qt uses UTF-8 by default, but another editor may use a different encoding, so you must verify which one you're using.
To change to UTF-8, use:
#include <windows.h>

SetConsoleOutputCP(CP_UTF8);  // call this early, e.g. at the top of main()

Windows version of wcswidth_l

I have some text to write to the Windows console, and I need to know its real width in columns. wcswidth_l seems to be the best option on platforms that have it (though mbswidth_l() would be better, since I have no desire to use wchar_t, but for some reason it doesn't exist). But in addition to other platforms, I need something that works on Windows. Although it's unlikely that there's a portable solution, I don't know of any solution at all on Windows. I think the console has an API for getting the cursor position and such, so I could write the text out and check the change in position. That would be accurate, I guess, but writing out extra output isn't acceptable at all.
How does one go about getting the column width of a string or character on Windows?
Edit:
wcswidth_l returns the number of console columns used to display a string. Some characters take up one column and others, e.g. Japanese characters, take up two.
As an example, the 'column width' of "a あ" is four: 'a' is one, ' ' is one, and 'あ' is two (assuming the console is set up to actually display non-ASCII characters, that is). It would also be nice if the API supported strings using code page 65001 (UTF-8).
First of all, the Windows Console API is located here.
Secondly, is the function you're looking for GetConsoleFontSize?
I'll try to quickly type an example in a second.
EDIT: Here you go. Forgive me if there's a small error. I actually found it was even easier. GetCurrentConsoleFont fills in a COORD structure on the way to you getting the index to pass to GetConsoleFontSize, so a step is saved :)
#define _WIN32_WINNT 0x0501 // XP; 0x0601 = Windows 7
#include <windows.h>

int main()
{
    HANDLE hStdOutput = GetStdHandle(STD_OUTPUT_HANDLE);
    CONSOLE_FONT_INFO cfi;
    GetCurrentConsoleFont(hStdOutput, FALSE, &cfi);
    // cfi.dwFontSize.X == x size
    // cfi.dwFontSize.Y == y size
}
EDIT:
If you don't mind invisible output, you can use CreateConsoleScreenBuffer to pretty much have an invisible console window at your command while leaving yours unaffected. GetConsoleScreenBufferInfoEx will tell you the cursor position, at which point you can use WriteConsole to write to your buffer (invisibly), and check the cursor location again versus the number of characters actually written. Note that checking the cursor location beforehand would not require clearing the screen to use this method.
If you cannot afford to do extra output, visible or invisible, I'm not sure there really is a possibility.
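For what it's worth, the invisible-buffer idea could look roughly like this (a sketch under the assumption that the text fits the buffer; MeasureColumns is a hypothetical name):

#include <windows.h>

int MeasureColumns(const wchar_t *text, DWORD length)
{
    // An offscreen buffer: it is never made active, so nothing is displayed.
    HANDLE buf = CreateConsoleScreenBuffer(GENERIC_READ | GENERIC_WRITE, 0,
                                           nullptr, CONSOLE_TEXTMODE_BUFFER,
                                           nullptr);
    if (buf == INVALID_HANDLE_VALUE)
        return -1;

    CONSOLE_SCREEN_BUFFER_INFO before, after;
    GetConsoleScreenBufferInfo(buf, &before);   // cursor starts at (0, 0)

    DWORD written = 0;
    WriteConsoleW(buf, text, length, &written, nullptr);

    GetConsoleScreenBufferInfo(buf, &after);
    CloseHandle(buf);

    // Columns consumed = cursor movement, accounting for line wraps.
    return (after.dwCursorPosition.Y - before.dwCursorPosition.Y) * after.dwSize.X
         + (after.dwCursorPosition.X - before.dwCursorPosition.X);
}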
Portable approach
Since the width of characters depends more on the characters themselves than on the system on which they are displayed (OK, there might be exceptions, but they should be rather rare), one can use a separate function to do that (on Windows too). This requires Unicode characters, as that makes it much easier to analyze the width of strings, but one can surely write a wrapper to convert between encodings.
Available implementation
Here is a suitable and portable implementation, which one can plug into one's application and fall back to on Windows.

Windows active code page

I'm using a certain library (IBM's WebSphere MQ) with an API that is supposed to return a remote server's character set.
After some debugging, it seems as though the return value of this function call returns the active code page of my machine. I saw this by looking at the return value of the function call and the result of running chcp in a command line - both returned 862. When I changed the language in the Control Panel->Regional and Language Options->Advanced tab to something else, both values changed again, which verified my suspicion.
My question is, what is the value that chcp returns? What Win32 API gets/sets it? How does it relate to locales? (trying to change the global locale in a C++ application using std::locale::global had no impact on it apparently).
CHCP returns the OEM codepage (OEMCP). The API is Get/SetConsoleCP.
You can set the C++ locale to ".OCP" to match this locale.
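That is (a one-line sketch; ".OCP" is a Windows-CRT-specific locale string):

#include <clocale>

int main()
{
    // ".OCP" selects the system's current OEM code page, matching CHCP.
    std::setlocale(LC_ALL, ".OCP");
    return 0;
}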
Locales mostly identify languages, and given that historically there were not so many codepages (many languages' alphabets don't differ greatly from 26-letter Latin), several languages can be "mapped" to the same codepage. As I remember, there is no direct conversion function, but I managed it with a statistical approach:
For any given locale I collected the words of that language that I could obtain from the system (LOCALE_SMONTHNAME1..LOCALE_SMONTHNAME12, LOCALE_SNATIVELANGNAME etc.) in Unicode.
I called the WideCharToMultiByte function for every string, trying to convert it to the codepage's one-byte encoding:
WideCharToMultiByte(CodePage, WC_NO_BEST_FIT_CHARS, ..., &DefChar, &DefUsed);
If DefUsed was set during the process, it basically meant that this language is not compatible with this codepage.
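That check might look roughly like this (a reconstruction of the described approach, not the author's original code; LanguageFitsCodePage is a made-up name):

#include <windows.h>

bool LanguageFitsCodePage(LCID locale, UINT codePage)
{
    // LOCALE_SMONTHNAME1..LOCALE_SMONTHNAME12 are consecutive LCTYPE values.
    for (LCTYPE month = LOCALE_SMONTHNAME1; month <= LOCALE_SMONTHNAME12; ++month) {
        wchar_t name[64];
        if (!GetLocaleInfoW(locale, month, name, 64))
            continue;
        BOOL defaultUsed = FALSE;
        char buffer[128];
        // Only valid for ANSI/OEM code pages (not CP_UTF8/CP_UTF7).
        WideCharToMultiByte(codePage, WC_NO_BEST_FIT_CHARS, name, -1,
                            buffer, sizeof(buffer), nullptr, &defaultUsed);
        if (defaultUsed)      // some character had no exact representation,
            return false;     // so this language does not fit the code page
    }
    return true;
}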

Multi-byte character set in MFC application

I have an MFC application to which I want to add internationalization support. The project is configured to use the "multi-byte character set" (the "Unicode character set" is not an option in my situation).
Now, I would expect the CWnd::OnChar() function to send me multi-byte characters if I set my keyboard to some foreign language, but it doesn't seem to work that way. The OnChar() function always sends me a 1-byte character in its nChar variable.
I thought that the _getmbcp() function would give me the current code page for the application, but this function always returns 0.
Any advice would be appreciated.
Any help here? Multibyte Functions in Microsoft C Run-time
As far as changing the default code page:
The default code page for a user (for WinXP - not sure how it is on Vista) is set in the "Regional and Languages options" Control Panel applet on the "Advanced" tab.
The "Language for non-Unicode programs" sets the default code page for the current user. Unfortunately it does not actually tell you the codepage number it's configuring - it just gives the language (which might be further specified with a region variant). This meakes sense from an end-user perspective, because I think codepage numbers have no meaning to 99.999% of end users. You need to reboot for a change to take effect. If you use regmon to determine what changes you could probably come up with something that specifies the default codepage somewhat easier.
Microsoft also has an unsupported utility, AppLocale, for testing localization that changes the codepage for particular applications: http://www.microsoft.com/globaldev/tools/apploc.mspx
Also you can change the code page for a thread by calling SetThreadLocale() - but you also have to call the C runtime's setlocale() function because some CRT functions don't talk to the Win API locale functions (and vice versa). See "Windows SetThreadLocale and CRT setlocale" by Chris Grimes for details.
As always in non-Unicode scenarios, you'll get a reliable result only if the system locale (aka "language for non-Unicode applications" in the Control Panel) is set accordingly. If not, don't expect anything good.
For example, if the system locale is Chinese Traditional, you'll receive 2 successive WM_CHAR messages (one for each byte, assuming the user composed a 2-byte character).
isleadbyte() should help you determine whether a 2nd byte is coming soon (see the sketch below).
If your system locale is NOT set to Chinese, don't expect to receive correct messages even when using a Chinese keyboard/IME. The misleading part is that some scenarios work. E.g., using a Greek keyboard, you'll receive WM_CHAR char values based on the Greek codepage even if your system locale is Latin-based. But you should really stay away from trying to cope with such scenarios: success is not guaranteed and will likely vary according to Windows version and locale.
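A sketch of that lead-byte handling in an MFC OnChar handler (the CMyWnd class and the handler body are illustrative, not from the original answer):

void CMyWnd::OnChar(UINT nChar, UINT nRepCnt, UINT nFlags)
{
    static BYTE leadByte = 0;  // pending lead byte, if any

    if (leadByte == 0 && isleadbyte((int)(BYTE)nChar)) {
        leadByte = (BYTE)nChar;   // first half of a DBCS character: stash it
        return;                   // and wait for the trail byte
    }

    char mbChar[3] = { 0 };
    if (leadByte != 0) {
        mbChar[0] = (char)leadByte;  // complete the 2-byte character
        mbChar[1] = (char)nChar;
        leadByte = 0;
    } else {
        mbChar[0] = (char)nChar;     // plain single-byte character
    }
    // ... use mbChar (a NUL-terminated multi-byte string) ...

    CWnd::OnChar(nChar, nRepCnt, nFlags);
}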
As MikeB wrote, MS AppLocale is your friend to make basic tests.
[ad] and appTranslator is your friend if you need to translate your UI [/ad]
For _getmbcp, MSDN says "A return value of 0 indicates that a single byte code page is in use." That sure makes it not very useful. Try one of these: GetUserDefaultLCID, GetSystemDefaultLCID, GetACP. (Now why is there no "user" equivalent for GetACP?)
Anyway, if you want _getmbcp to return an actual value, set your system default language to Chinese, Japanese, or Korean.
There is actually a very simple (but weird) way to force the OnChar function to send Unicode characters to the application even if it's configured for the multi-byte character set:
SetWindowLongW( m_hWnd, GWL_WNDPROC, GetWindowLong( m_hWnd, GWL_WNDPROC ) );
Simply calling the Unicode version of SetWindowLong forces the application to receive Unicode characters.