ATL macros only work on the development computer - C++

I use ATL macros like A2T and A2CW.
On the development computer everything works fine. When I use the application (built with Visual Studio 2008 Pro) on another computer, the output of the ATL macro conversion is not readable.
I hope someone can help me solve this problem. My application is finished; only the ATL conversion macros are a problem at the moment.
Thanks in advance.

The A2X macros use the current code page to convert strings. If you are converting literal strings (or data you distribute with the application) that were created with the dev computer's code page in mind, and the other computer has a different code page set, they will end up as gibberish. In that case you can use APIs that let you specify explicitly which code page you are converting from. The A2X macros should really only be used for content that comes as input from the user, where the code page may vary, not for data whose code page is known ahead of time.
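A minimal sketch of that explicit-code-page approach, using MultiByteToWideChar instead of A2W; the helper name and the example code page 1252 are illustrative assumptions, not from the question:

    // Convert with an explicit code page instead of A2W, which always
    // uses the thread's current ANSI code page (CP_ACP).
    #include <windows.h>
    #include <string>

    std::wstring AnsiToWide(const char* src, UINT codePage /* e.g. 1252 */)
    {
        int len = MultiByteToWideChar(codePage, 0, src, -1, NULL, 0);
        if (len <= 0)
            return std::wstring();
        std::wstring wide(len, L'\0');
        MultiByteToWideChar(codePage, 0, src, -1, &wide[0], len);
        wide.resize(len - 1); // drop the terminating NUL counted by -1
        return wide;
    }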

Related

MFC CEdit converts non-ASCII characters to ASCII

We have an MFC Windows application, originally written in VC++ 6 and updated over the years for newer IDEs; it is currently developed in VS2017.
The application is built with MBCS (not Unicode). Trying to switch to Unicode causes 3806 compile errors, and that is probably just the tip of the iceberg.
However, we want to be able to run the application with a different code page, e.g. 1250 (Central European).
I tried to build a small test application and managed to get it to work with special characters (čćšđž). I did this by setting the dialog font to Microsoft Sans Serif with code page 1250.
The same approach in our application does not work. Note: dialogs in our application are created dynamically, and the font is set using SetFont.
There is a difference in how the special characters are treated in these two applications.
In the test application, the special characters are displayed in the edit control, and GetWindowText retrieves the right bytes. However, trying to enter characters from other languages renders them as "????".
In our application, all special characters are rendered properly, but GetWindowText (or WM_GETTEXT) converts the special characters to their closest ASCII counterparts (čćđ -> ccd).
I believe that the edit control in our application displays Unicode text, but GetWindowText converts it to ASCII.
Does anyone have any idea what is happening here, and how I might solve it?
Note: I know how to convert a project to Unicode. We are choosing not to commit resources to that at the moment, as it would probably take weeks or months. The question is how I might get this to work with MBCS, and why the edit control is converting Č to C.
I believe it is absolutely possible to port the application to other languages/code pages; you only need to modify the .rc (resource) files, basically having one resource file per language, which you may want to do anyway, as strings in menus and string tables will be in a different language. As far as the application part is concerned, that is actually the only change needed.
The other part is the system you are running it on. A window can be Unicode or non-Unicode. You can see this with the Spy++ utility: it tells you whether a window (procedure) is Unicode or not (Window Properties, General tab). While Unicode windows work properly, non-Unicode ones have to convert the text from/to Unicode and MBCS when getting or setting it. The conversion is based on the system (default) code page, which can only be set globally (for the whole machine), not per application or window. And of course, setting the font's code page is not enough (in my opinion it is not needed at all, if you are running the application on a machine with the "correct" code page). That is, for non-Unicode applications, only one code page will work properly; the others won't.
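As an aside, what Spy++ shows on that General tab can also be queried in code; a minimal sketch (the helper function is my illustration, not from the answer):

    // IsWindowUnicode() reports whether a window's procedure is
    // Unicode-aware, i.e. what Spy++ shows in Window Properties.
    #include <windows.h>
    #include <stdio.h>

    void ReportWindowKind(HWND hwnd)
    {
        printf("Window is %s\n",
               IsWindowUnicode(hwnd) ? "Unicode" : "ANSI (MBCS)");
    }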
I can see two options:
If you only need to update a small number of controls, it may be possible to change just these controls to Unicode and use the "wide" versions of the get/set window-text functions or messages; you will have to convert the text between Unicode and your desired code page (see the sketch below). It requires writing some code, but has the advantage that the conversion is independent of the system default code page; e.g. you can keep the code page in a configuration file, in the registry, or as a command-line option (in the application's shortcut). Some control types can be changed to Unicode, others can't, so please check the documentation. I used this technique successfully in an MBCS application displaying/editing translated strings in many different languages, but I only had one control, a list view, which by the way offers the LVM_SETUNICODEFORMAT message, thus allowing for Unicode text even in an MBCS application.
The easiest method is to simply run the application as is, but then it will only work on machines with the proper default code page, as most non-Unicode applications do.
The system default code page can be changed by setting the "Language for non-Unicode programs" option, available in the regional settings, Administrative tab; this requires a reboot. Changing the Windows UI language changes this option as well, but by setting the option directly you don't need to change the UI language; e.g. you can have an English UI and an East European code page.
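Here is a minimal sketch of the first option: reading a Unicode-format control's text through the wide message API in an MBCS build and converting it to a configurable code page. The helper name and the example code page 1250 are assumptions:

    // Read text from a Unicode-format control via the wide API, then
    // convert it to the desired code page (independent of CP_ACP).
    #include <windows.h>
    #include <string>

    std::string GetControlTextInCodePage(HWND hCtrl, UINT codePage /* e.g. 1250 */)
    {
        int len = (int)SendMessageW(hCtrl, WM_GETTEXTLENGTH, 0, 0);
        std::wstring wide(len + 1, L'\0');
        SendMessageW(hCtrl, WM_GETTEXT, (WPARAM)(len + 1), (LPARAM)&wide[0]);
        wide.resize(len);

        int bytes = WideCharToMultiByte(codePage, 0, wide.c_str(), -1,
                                        NULL, 0, NULL, NULL);
        if (bytes <= 0)
            return std::string();
        std::string narrow(bytes, '\0');
        WideCharToMultiByte(codePage, 0, wide.c_str(), -1,
                            &narrow[0], bytes, NULL, NULL);
        narrow.resize(bytes - 1); // drop the terminating NUL counted by -1
        return narrow;
    }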
See a very similar post here.
Late to the party:
In our application, all special characters are rendered properly, but GetWindowText (or WM_GETTEXT) converts the special characters to their closest ASCII counterparts (čćđ -> ccd).
That sounds like the ES_OEMCONVERT flag has been set for the control:
Converts text entered in the edit control. The text is converted from the Windows character set to the OEM character set and then back to the Windows character set. This ensures proper character conversion when the application calls the CharToOem function to convert a Windows string in the edit control to OEM characters. This style is most useful for edit controls that contain file names that will be used on file systems that do not support Unicode.
To change this style after the control has been created, use SetWindowLong.
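A minimal sketch of clearing that style on an existing edit control (the handle variable is illustrative; in MFC you could pass myEdit.GetSafeHwnd()):

    // Remove ES_OEMCONVERT so the edit control stops round-tripping
    // its text through the OEM character set.
    #include <windows.h>

    void ClearOemConvert(HWND hEdit)
    {
        LONG style = GetWindowLong(hEdit, GWL_STYLE);
        SetWindowLong(hEdit, GWL_STYLE, style & ~ES_OEMCONVERT);
    }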

Visual C++/MFC: getting Japanese characters to work without UNICODE

I have software originally developed 20 years ago in Visual C++ using MFC without UNICODE. Strings are currently held either in char[] or CString, and it works on English and Japanese Windows PCs until Japanese characters are used, as these tend to get converted to strange characters or empty boxes.
Setting UNICODE is presumably the way forward but would require a massive code change, whereas quite a lot already works simply by setting the system locale to Japan (the "Language for non-Unicode programs" setting). I have no idea how Windows does this, but some Japanese things now work on my English Windows PC, e.g. I can open and save Japanese filenames with no code changes. And in Japan they set the system locale to English and again much works, but not everything.
I get the impression the problems are due to using a font that doesn't include Japanese characters. Currently I am using Arial / MS Sans Serif with the charset set to ANSI_CHARSET or DEFAULT_CHARSET. Is there a different font I should be using, or can I extend these fonts to include Japanese characters? Or am I barking up the wrong tree, in which case what do I do next? I am very new to all this, unfortunately...
That's a common question (OK, I guess not so common any more in 2015, as MBCS programs are luckily a dying breed - I still maintain several, though...).
Either way, I'm afraid that, depending on your definition of 'working', you'll have to bite the bullet and convert to a Unicode build to get this working. If you can't make a business case for that, then you'll have to set the right locale (or worse, have the user set the 'right' one), test what works and what doesn't, and ask more specific questions about what doesn't.
If your goal is to make one application that correctly displays strings in various encodings in the 'right' way regardless of the locale settings on the computer, compatible with every input data set / database content and without the user having to be aware of encoding issues, then you're out of luck with an MBCS build.
A font missing characters is most likely not the problem. Before you go any further and/or ask more questions, you should read http://www.joelonsoftware.com/articles/Unicode.html, read it again, sleep on it, read it again, and explain to somebody else what the relationship is between 'encoding', 'locale', 'character set', 'font' and 'Unicode code point', because only once you can do that can you decide how to move your application forward. Sorry, it's not what you want to hear, but it's the reality when you've been tasked with internationalization.

How to correctly display characters from different languages?

I am finishing an application in Visual C++/Windows API that uses the MySQL C Connector.
The whole application code uses ANSI, and the MySQL C Connector is in ANSI too.
This program will be used on Polish and German computers with Windows XP/Vista/7 or 8.
I want to correctly display German umlauts and Polish accented characters in:
DialogBox controls (strings are loaded from language files)
Generated XHTML documents
Strings retrieved from the MySQL database, displayed in controls and in XHTML documents
I have heard about MultiByteToWideChar and the Unicode functions (MessageBoxW etc.), but the application code is nearly finished, and converting would be a lot of work...
How can I get the character encoding right with the least work and time?
Maybe by changing the system code page for non-Unicode programs?
First, of course: which code set is MySQL returning? Or perhaps: which code set was used when writing the data into the database? Other than that, I don't think you'll be able to avoid using either wide characters or multibyte characters: for single-byte characters, German would use ISO 8859-1 (Windows code page 1252) or ISO 8859-15, and Polish ISO 8859-2 (Windows code page 1250). But what are you doing with the characters in your own code? You may be able to get away with UTF-8 (code page 65001) without many changes. The real question is where the characters originally come from (although it might not be too difficult to translate them into UTF-8 immediately at the source); I don't think that Windows respects the code page for input.
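A minimal sketch of that UTF-8 route, assuming the MySQL C connector: ask the connection for UTF-8 results, keep the strings as UTF-8 internally, and convert to UTF-16 only at the Windows UI boundary. The helper names are illustrative:

    #include <windows.h>
    #include <mysql.h>
    #include <string>

    // Request UTF-8 results regardless of the server/table character set.
    // mysql_set_character_set() returns 0 on success.
    bool UseUtf8(MYSQL* conn)
    {
        return mysql_set_character_set(conn, "utf8") == 0;
    }

    // Convert a UTF-8 string to UTF-16 for the wide Windows APIs.
    std::wstring Utf8ToWide(const std::string& utf8)
    {
        int len = MultiByteToWideChar(CP_UTF8, 0, utf8.c_str(), -1, NULL, 0);
        if (len <= 0)
            return std::wstring();
        std::wstring wide(len, L'\0');
        MultiByteToWideChar(CP_UTF8, 0, utf8.c_str(), -1, &wide[0], len);
        wide.resize(len - 1); // drop the terminating NUL counted by -1
        return wide;
    }

For example, MessageBoxW(NULL, Utf8ToWide(text).c_str(), L"Info", MB_OK) would then display the Polish or German characters correctly regardless of the system code page.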
Although it doesn't help you much to know it, you're dealing with an almost impossible problem, since so much depends on things outside your program: things like the encoding of the display font, or the keyboard driver, for example. In fact, it's not rare for programs to display one thing on the screen and something different when outputting to the printer, or to display one thing on the screen but something different when the data is written to a file and read with another program. The situation is improving: modern Unix and the Internet are gradually (very gradually) standardizing on UTF-8, everywhere and for everything, and Windows normally uses UTF-16 for everything that is pure Windows (but needs to support UTF-8 for the Internet). But even using the platform standard won't help if the human client has installed (and is using) fonts which don't have the characters you need.

Pseudographical environment in the Windows Command Prompt

Actually, I'm thinking of creating a cool interface for my programming assignment, so I have been searching for how to create such an effect; below is the image.
The Question
1.) What is needed to create a program that runs pseudographics (semigraphics, or whatever it is called) with menus like a BIOS setup wizard? I often see programs that run in a console but have a graphical-looking interface, for example a blue environment, where the user can use the keyboard to choose settings from a menu.
Thanks for spending the time to read my question.
It's called a text-based user interface (TUI). There are several libraries for doing this. I think this is what you're looking for. :)
Cross platform, Interactive text-based interface with command completion
http://www.gnu.org/s/ncurses/
Ncurses (or maybe PDCurses on Windows) is probably what you need.
In the days of 16-bit Windows, console windows supported ANSI escape sequences (via the ansi.sys driver), but they no longer do.
For the apparent line graphics you need a platform-specific solution anyway, so I recommend just writing an abstraction (functions, a class) over the Windows console API functions.
The line graphics are done using characters from the original IBM PC character set, code page 437. At first you can just hardcode the various patterns. To make it seem more like line drawing from the code's perspective, so to speak, you'll have to abstract things again. As I remember, there is some partial but not complete system in the original code page 437 character codes. But for the Windows console you will need to use the Unicode character codes, which probably do not preserve the original system, so perhaps just define a map where these graphics characters are placed more systematically.
Possibly that already exists.
If you don't care about portability, the Windows API for this can be found here. It supplies all the functions you need, without having to ship additional libraries with your application.
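A minimal sketch of that approach, drawing a BIOS-style blue box with the console API and Unicode box-drawing characters (the colors and layout values are arbitrary assumptions):

    #include <windows.h>

    int main()
    {
        HANDLE out = GetStdHandle(STD_OUTPUT_HANDLE);

        // White text on blue, like the classic BIOS setup look.
        SetConsoleTextAttribute(out, BACKGROUND_BLUE | FOREGROUND_RED |
                                     FOREGROUND_GREEN | FOREGROUND_BLUE);

        // Unicode equivalents of the code page 437 box-drawing glyphs.
        const wchar_t* box[] = {
            L"\u2554\u2550\u2550\u2550\u2550\u2550\u2550\u2557",   // top border
            L"\u2551 Menu \u2551",                                 // body row
            L"\u255A\u2550\u2550\u2550\u2550\u2550\u2550\u255D",   // bottom border
        };

        for (int row = 0; row < 3; ++row) {
            COORD pos = { 10, (SHORT)(5 + row) };
            SetConsoleCursorPosition(out, pos);
            DWORD written;
            WriteConsoleW(out, box[row], lstrlenW(box[row]), &written, NULL);
        }
        return 0;
    }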
You can also look into graphics.h, a non-standard Borland extension that a lot of older graphical games used. It goes far beyond the normal limits of the console, but it is only supported on 16-bit systems, support for which Microsoft has all but removed from Windows. You'd also need an ancient Borland compiler, or an emulator, if you want the original look and feel.

How to use resources in VC++?

I am using VC 9 and I want to support Russian in my application, so I created Russian resource strings. They display correctly, but only because my system has a Russian language setting; without it, every character displays as junk (the Russian code page is 1251). I also built a DLL from the Russian resource file. If the application loads that DLL from the installed location, it works fine.
But when I change the computer's setting to English and load that DLL from the application, dialogs and message boxes show junk characters. Shouldn't the application read from the DLL, not from the computer's language setting? The problem I am facing is how to make a language-independent DLL. Is there any code or setting for this?
By far the easiest solution is to stick to Unicode.
Windows is Unicode internally. (Almost) every API function exists in two variants, FooA and FooW. The FooA variant converts chars to wchar_ts before calling FooW, and the exact conversion is defined by the code page.
Now, if you use Unicode, there is no such conversion and no code page. If the user enters ж (U+0436), it is stored as wchar_t(0x0436) and never converted. If your resource contains ж in Unicode, it too is never converted.
If the strings you want to display cannot be represented in the system code page, the only solution is Unicode.
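A minimal sketch of that Unicode route, calling the wide (W) API directly so no code-page conversion happens regardless of the system locale (the strings here are arbitrary examples):

    #include <windows.h>

    int main()
    {
        // The Cyrillic literal is stored as UTF-16 and passed straight
        // to the W variant; the A variant would first convert it through
        // the system code page and produce junk on a non-Russian system.
        // (Save the source file as UTF-8 with BOM so the compiler reads
        // the literal correctly.)
        MessageBoxW(NULL, L"Привет, мир!", L"Пример", MB_OK);
        return 0;
    }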