Can't find window in C++

I'm pretty new to C++ and I'm making a hack for Assassin's Creed Odyssey; however, I can't get my code to find the window.
HWND hwnd = FindWindowA(NULL, "Assassin's Creed® Odyssey");
Window name(s): (screenshot of the window titles omitted)
EDIT: Got it to work. The apostrophe is different from a normal one; for whoever needs it, the correct window name is Assassin’s Creed® Odyssey (the apostrophe is the right single quotation mark, U+2019).

Ahh. FindWindowA expects an ANSI string. To use the ® character on Microsoft Windows, you should use the wide-character variant and a wide string literal:
HWND hwnd = FindWindowW(NULL, L"Assassin's Creed® Odyssey");
Do check that you are using the correct type of apostrophe. I can't tell from the screen capture whether it is an ANSI apostrophe or a right single quote.
The L before the " indicates that it is a wide string literal. On Windows this is essentially UTF-16, though Microsoft Windows doesn't quite meet all of the UTF-16 standard's requirements.
See https://en.cppreference.com/w/cpp/language/string_literal for more information on the various types of string literals.
This Microsoft article, https://learn.microsoft.com/en-us/windows/desktop/learnwin32/working-with-strings, explains the difference between the ANSI functions (ending in 'A') and the Unicode functions (ending in 'W').
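If there is any doubt about the source file's encoding, the non-ASCII characters can be spelled out with universal character names so the literal is unambiguous. A minimal sketch (the title is the one from the question's edit; \u2019 is the right single quotation mark and \u00AE is ®):

#include <windows.h>

int main()
{
    // U+2019 (right single quote) and U+00AE (®) written as escapes,
    // independent of the source file's encoding
    HWND hwnd = FindWindowW(nullptr, L"Assassin\u2019s Creed\u00AE Odyssey");
    if (hwnd == nullptr)
    {
        // window not found; GetLastError() may give more detail
        return 1;
    }
    return 0;
}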

Related

Understanding Multibyte/Unicode

I'm just getting back into programming C++, MFC, Unicode. A lot has changed over the past 20 years.
Code from another project compiled just fine, but produced errors when I pasted it into my code. It took me a day and a half of wasted time to solve the function call below:
CString CFileOperation::ChangeFileName(CString sFileName)
{
    char drive[MAX_PATH], dir[MAX_PATH], name[MAX_PATH], ext[MAX_PATH];
    _splitpath_s(sFileName, drive, dir, name, ext); // error
    // ... other code
}
After reading the help, I changed the CString sFileName to use a cast:
_splitpath_s((LPCTSTR)sFileName, drive, dir, name, ext); // error
This created an error too. So then I used GetBuffer(), which is really the same as above.
char* s = sFileName.GetBuffer(300);
_splitpath_s(s, drive, dir, name, ext); // same error for the 3rd time
sFileName.ReleaseBuffer();
At this point I was pretty upset, but finally realized that I needed to convert the CString to ASCII (I think because my project is set up as Unicode).
Hence:
CT2A strAscii(sFileName); // convert CString to ASCII, for _splitpath_s()
then use strAscii.m_psz in the call to _splitpath_s().
This finally worked. So after all this, to make a long story short, I need help focusing on:
1. Unicode vs. multi-byte (library calls)
2. Which variable types to use
I'm willing to purchase another book; please recommend one.
Also, is there a way to filter the help in VS2015 so that when I'm on a variable and press F1, it only shows help for Unicode and for converting old code or multi-byte code to Unicode?
Hope this is not too confusing, but I have some catching up to do. Be patient if my verbiage is not perfect.
Thanks in advance.
The documentation of _splitpath lists a Unicode (wchar_t based) version _wsplitpath. That's the one you should be using. Don't convert to ASCII or Windows ANSI, that will in general lose information and not produce a valid path when you recombine the pieces.
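A minimal sketch of the question's function using the wide version (assuming a Unicode project, where CString is wchar_t based; the secure CRT's template overload deduces the array sizes):

CString CFileOperation::ChangeFileName(CString sFileName)
{
    wchar_t drive[MAX_PATH], dir[MAX_PATH], name[MAX_PATH], ext[MAX_PATH];
    _wsplitpath_s(sFileName, drive, dir, name, ext); // template overload, no sizes needed
    // ... recombine or modify the pieces as needed
}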
Modern Windows programming is Unicode based.
A Visual Studio C++ project is Unicode-based by default; in particular, it defines the macro symbol UNICODE, which affects the declarations from <windows.h>.
All supported versions of Windows use Unicode internally throughout, and your application should, too. Windows uses UTF-16 encoding.
To make your application Unicode-enabled you need to perform the following steps:
Set your project's Character Set to "Use Unicode Character Set" (if it's currently set to "Use Multi-Byte Character Set"). This is not strictly required, but it deals with those cases where you aren't using the Unicode version explicitly.
Use wchar_t (in place of char or TCHAR) for your strings.
Use wide character string literals (L"..." in place of "...").
Use CStringW (in place of CStringA or CString) in an MFC project.
Explicitly call the Unicode version of the CRT (e.g. wcslen in place of strlen or _tcslen).
Explicitly call the Unicode version of any Windows API call where it exists (e.g. CreateWindowExW in place of CreateWindowExA or CreateWindowEx).
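Putting a few of these steps together, a minimal sketch (the strings and the message-box usage are illustrative, not from the question):

#include <windows.h>
#include <cwchar>

int wmain()                                       // wide entry point for a console app
{
    const wchar_t* text = L"Unicode \u00AE demo"; // wide string literal
    size_t n = wcslen(text);                      // wide CRT function instead of strlen
    MessageBoxW(nullptr, text, L"Demo", MB_OK);   // explicit W version of the API
    return n > 0 ? 0 : 1;
}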
Try using _tsplitpath_s and TCHAR.
So the final code looks something like:
CString CFileOperation::ChangeFileName(CString sFileName)
{
    TCHAR drive[MAX_PATH], dir[MAX_PATH], name[MAX_PATH], ext[MAX_PATH];
    _tsplitpath_s(sFileName, drive, dir, name, ext);
    // ... other code
}
This lets the compiler select the correct character width at build time, depending on the project settings.
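For reference, _tsplitpath_s and TCHAR are preprocessor mappings from <tchar.h>:

#include <tchar.h>
// with _UNICODE defined:  TCHAR -> wchar_t, _tsplitpath_s -> _wsplitpath_s
// without _UNICODE:       TCHAR -> char,    _tsplitpath_s -> _splitpath_s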

Can I specify the necessary code page for an individual string variable in the Watch1 window?

Visual Studio 2015, C++ language, debugging.
In the Watch1 window I look at the values of my string variables of the wchar_t* and char* types. The first of them is Unicode and the second is ANSI (the CP_OEMCP code page). The text of the wchar_t* variable displays correctly in the Watch1 window, but the text of the char* variable is unreadable. Can I specify the necessary code page for an individual string variable in the Watch1 window? I want to see both of my string values correctly in the Watch1 window.
Maybe for such cases there exists some syntax, similar to $err,hr (the text of the last error, which was obtained via the GetLastError() function).
UPD: screenshot added (not reproduced here).
The console window shows the right output, but in the memory view and in the Watch1 window I see an unreadable string for my ansiText variable.
The problem is that the original string (starting with hex values 8D A0 A6) is not on the Windows-1251 (Windows Cyrillic) code page, but on the OEM 866 code page. These two are different, and Visual Studio expects Windows-1251, because that's the system's code page (the code page used for non-Unicode applications).
It is not possible to specify a code page when you watch a string in the debugger. Everything inside should be Unicode anyway, or at least UTF-8, and for those two there are format specifiers, su and s8. See MSDN for all format specifiers.
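For example (hypothetical variable names; these are expressions typed into the Watch window, not C++ code):
wideText,su — display the buffer as a UTF-16 string
utf8Text,s8 — display the buffer as a UTF-8 string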
What you can do is have the following function integrated into the code; when you want to see some non-ANSI (or non-CP_ACP, to be precise) string, just call this function with the string and the code page as parameters (but use the function only once in the Watch window, since it shares a static buffer):
LPCWSTR ViewString(LPCSTR szString, UINT nCodePage)
{
    static WCHAR szTemp[1024];
    MultiByteToWideChar(nCodePage, 0, szString, -1, szTemp, 1024);
    return szTemp;
}
So, in your case, in the Watch window instead of (char*)ansiText there would be ViewString(ansiText, 866). Also, note that this is not actually "ANSI text", but "OEM text".
I don't know exactly what your program is supposed to do, but I would convert all non-Unicode strings to Unicode at the earliest point in the code (right where you get a non-Unicode string), and always work with Unicode strings in your code. To convert an OEM 866 string to Unicode you can use the function MultiByteToWideChar with the CodePage parameter = 866.
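A minimal sketch of such a conversion at the program boundary (assuming the input is a NUL-terminated OEM 866 string; the function name is illustrative):

#include <windows.h>
#include <string>

std::wstring OemToWide(const char* oemText)
{
    // first call computes the required size, including the terminating NUL
    int len = MultiByteToWideChar(866, 0, oemText, -1, nullptr, 0);
    if (len <= 0)
        return std::wstring();
    std::wstring result(len, L'\0');
    MultiByteToWideChar(866, 0, oemText, -1, &result[0], len);
    result.resize(len - 1); // drop the terminating NUL counted by -1
    return result;
}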

C++ Lithuanian language, how to get more than ASCII

I am trying to use Lithuanian in my C++ application, but every attempt has been unsuccessful.
The multi-byte character set is used. I have tried everything I could think of; I am new to C++ and have never tried to do anything in Lithuanian before.
I tried every setlocale variant: setlocale(LC_ALL, "en_US.utf8");, setlocale(LC_ALL, "Lithuanian"); ...
I researched for 2 hours and didn't find proper examples or a solution.
I have an average-sized project which needs Lithuanian text from a database, and it can't handle most of "ĄČĘĖĮŠŲŪąčęėįšųū".
Compiler: Visual Studio 2013
Database: sqlite3
I can't even get simple strings (defined by myself) to work and output as Lithuanian in a Win32 application.
On Windows, use wide character strings (UTF-16 encoding [1], wchar_t type) for internal text handling, and preferably UTF-8 for external text files and networking.
Note that Visual C++ will translate narrow text literals from the source encoding to Windows ANSI, which is a platform-dependent, usually single-byte encoding (you can check which one via the GetACP API function); i.e., Visual C++ has the platform-specific Windows ANSI as its narrow C++ execution character set.
But also note that for an app restricted to non-Windows platforms, i.e. Unix-land, it makes practical sense to do everything in UTF-8, based on the char type.
For the database communication you may need to translate to and from the program's internal text representation.
This depends on what the database interface requires, which is not stated.
Example for console output in Windows:
#include <iostream>
#include <stdio.h>   // _fileno
#include <fcntl.h>   // _O_WTEXT
#include <io.h>      // _setmode

auto main() -> int
{
    _setmode( _fileno( stdout ), _O_WTEXT );
    using namespace std;
    wcout << L"ĄČĘĖĮŠŲŪąčęėįšųū" << endl;
}
To make this compile by default with g++, the source code encoding needs to be UTF-8. Then, to make it produce correct results with Visual C++, the source code encoding needs to be UTF-8 with BOM, which happily is also accepted by modern versions of g++. Otherwise the Visual C++ compiler will assume the Windows ANSI encoding and produce an incorrect UTF-16 string.
Not coincidentally, this is the default meaning of UTF-8 in Windows, e.g. in the Notepad editor, namely UTF-8 with BOM.
But note that while in Windows the problem is that the main system compiler requires a BOM for UTF-8, in Unix-land the problem is the opposite: many old tools can't handle the BOM (for example, even MinGW g++ 4.9.1 isn't entirely up to speed: it sometimes includes the BOM bytes, then incorrectly interpreted, in error messages).
[1] On other platforms wide character text can be encoded in other ways, e.g. with UTF-32. In fact the Windows convention is in direct conflict with the C and C++ standards, which require that a single wchar_t be able to encode any character in the extended character set. However, this requirement was, AFAIK, imposed after Windows adopted UTF-16, so the fault probably lies with the politics of the C and C++ standardization process, not yet another Microsoft'ism.
Complexity of internationalisation
There are several related but distinct topics here, and mismatches between them make a trial-and-error approach very tedious:
type used for storing strings and chars: Windows uses wchar_t by default, but most APIs also have char-based equivalent functions
character set encoding: this defines how the chars stored in the type are to be understood, for example Unicode (UTF-8, UTF-16, UTF-32), 7-bit ASCII, or 8-bit ANSI. On Windows, by default, it is UTF-16 for wchar_t and a Windows ANSI code page for char
locale: defines, among other things, the character set assumptions used when processing strings. This permits language-independent functions like isalpha(c, loc), islower(c, loc), ispunct(c, loc) to find out whether a given character is alphabetic, lower case, or punctuation, for example to break user text down into words. C++ offers portable functions here (see the sketch after this list).
output code page or font: used to show a character to the user. This assumes that the font shows the characters using the same character set used in the code internals.
source code encoding: for example, your editor could assume an ANSI encoding with the Windows-1252 character set.
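A minimal sketch of the locale-based classification mentioned above (the locale name "lt-LT" is an assumption; accepted names vary by platform and CRT version, and the constructor throws if the name is unsupported):

#include <iostream>
#include <locale>

int main()
{
    std::locale loc("lt-LT");   // Lithuanian locale; name is platform-dependent
    wchar_t ch = L'\u0104';     // Ą
    std::cout << std::boolalpha
              << std::isalpha(ch, loc) << '\n'; // classified using the locale's rules
}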
Most typical errors
Problem number one is Win32 console output, as Unicode is not well supported by the console. But that is not your problem here.
Another cause of mismatch is the encoding used by your text editor. It might not be Unicode, but a Windows code page. In that case, you type "Č" and the editor displays it as such, but it might use the Windows-1257 encoding for Lithuanian and store 0xC8 in the file. If you then display this literal with a Windows Unicode function, it will interpret 0xC8 as "Latin capital E with grave" and print something else, as the right Unicode code point for "Č" is 0x010C!
It can be even worse: the compiler may have its own assumptions about the character set encoding and convert your literals into Unicode using false assumptions (it happened to me when I used some exotic code generation switch).
What to do?
To figure out what goes wrong, proceed by elimination:
First, for plain Windows, use the native Unicode setting. OK, it's UTF-16 and wchar_t instead of UTF-8, and thus comes with some drawbacks, but it's native and well supported.
Then use explicit Unicode encoding in literals, for example TEXT("\u010C") instead of TEXT("Č"). This avoids editor/compiler mismatches.
If it's still not the right character, make sure that your font FULLY supports Unicode. The default system font, for instance, doesn't, while most others do. You can easily check with the Windows font panel (WindowsKey+R, "fonts", then click on "search char") to display the character table of your font.
Set fonts explicitly in your code.
For example, a very tiny experiment:
...
case WM_PAINT:
{
    hdc = BeginPaint(hWnd, &ps);
    auto hf = CreateFont(24, 0, 0, 0, 0, TRUE, 0, 0, 0, 0, 0, 0, 0, L"Times New Roman");
    auto hfOld = SelectObject(hdc, hf); // if you comment this out, € and Č won't display
    TextOut(hdc, 50, 50, L"Test with éç € \u010C special chars", 30);
    SelectObject(hdc, hfOld);
    DeleteObject(hf);
    EndPaint(hWnd, &ps);
    break;
}

Set Unicode text on MFC form controls in Multi-Byte Char Set application

I have a Multi-Byte Char Set MFC Windows application. Now I need to display international single-byte ASCII characters on Windows controls. I can't use the ASCII characters directly, because displaying them correctly requires the Windows locale to be set to the adequate country. I need to display the characters correctly under all Windows locales. For this purpose I must convert ASCII to Unicode. I can display the required international characters in MessageBoxW, but how do I display them on Windows MFC controls using SetWindowText?
To show a Unicode string in MessageBoxW I construct it in a wstring:
WORD g [] = {0x105,0x106,0x107,0x108,0x109,0x110,0x111,0x112,0x113,0x114,0x115,0x116,0x117,0x118,0x119,0x120};
wstring gg (reinterpret_cast<wchar_t*>(g),15);
MessageBoxW(NULL, gg.c_str() , gg.c_str() , MB_ICONEXCLAMATION | MB_OK);
Setting the MFC form control text:
class MyFrm : public CDialogEx
{
    virtual BOOL OnInitDialog();
};
...
BOOL MyFrm::OnInitDialog()
{
    GetDlgItem(IDC_EDIT_TICKET_NUMBER)->SetWindowText( ??? );
}
Is it possible to somehow convert the wstring gg to a CString and show Unicode chars on a window control?
You could try casting your CDialogEx 'this' object to HWND and then explicitly call the Win32 API to set the text using wide chars. Your code would look something like this:
BOOL MyFrm::OnInitDialog()
{
    SetDlgItemTextW((HWND)(*this), IDC_EDIT_TICKET_NUMBER, gg.c_str());
    return TRUE; // TRUE unless you set the focus to a control
}
But as I mentioned earlier, Unicode is supported starting from Windows XP, and using ASCII is really not a good idea unless you're targeting those very, very old OSes before it. Passing ASCII strings nowadays causes ALL of them to first be converted into Unicode by the Win32 API. So it is a better idea to switch your project entirely to UNICODE.
First, note that you can simply initialize a std::wstring directly with your Unicode hex character data, without any ugly, useless reinterpret_cast<wchar_t*>, etc.
Instead of this:
WORD g [] = {0x105,0x106,0x107,0x108,...,0x120};
wstring gg (reinterpret_cast<wchar_t*>(g),15);
just consider this:
wstring text = L"\x0105\x0106\x0107\x0108...\x0120";
The latter seems much cleaner to me.
Second, if you want to pass an instance of std::wstring to an MFC method that expects a const wchar_t* input string pointer, just use the wstring::c_str() method.
In addition, the best suggestion I can give you is to just port your app to Unicode.
ASCII/MBCS should be considered a programming model of the past for MFC; it brings lots of problems when you want to write "international" code.
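A minimal sketch combining these suggestions (assuming a Unicode build; IDC_EDIT_TICKET_NUMBER and the sample characters come from the question):

BOOL MyFrm::OnInitDialog()
{
    CDialogEx::OnInitDialog();

    std::wstring gg = L"\x0105\x0106\x0107"; // sample Unicode characters
    CStringW text(gg.c_str());               // std::wstring -> CStringW
    ::SetWindowTextW(GetDlgItem(IDC_EDIT_TICKET_NUMBER)->GetSafeHwnd(), text);

    return TRUE;
}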

CEdit::GetLine() windows 7

I have the following segment of code where m_edit is a CEdit control:
TCHAR lpsz[MAX_PATH+1];
// get the edit box text
m_edit.GetLine(0, lpsz, MAX_PATH);
This works perfectly on computers running Windows XP and earlier. I have not tested it on Vista, but on Windows 7, lpsz gets junk Unicode characters inserted into it (as well as the actual text, sometimes). Any idea as to what is going on here?
Since you're using MFC, why aren't you taking advantage of its CString class? That's one of the reasons many programmers were drawn to MFC: it makes working with strings so much easier.
For example, you could simply write:
int len = m_edit.LineLength(m_edit.LineIndex(0));
CString path;
LPTSTR p = path.GetBuffer(len);
m_edit.GetLine(0, p, len);
path.ReleaseBuffer(len); // pass the length: GetLine does not null-terminate the copied line
(The above code is tested to work fine on Windows 7.)
Note that the copied line does not contain a null-termination character (see the "Remarks" section in the documentation). That could explain the nonsense characters you're seeing in later versions of Windows.
It's not null-terminated. You need to do this:
int count = m_edit.GetLine(0, lpsz, MAX_PATH);
lpsz[count] = 0;