WM_COPYDATA won't deliver my string correctly - C++

I tried to use WM_COPYDATA to send a string from one window to another. The message gets received perfectly by my receiving window, except the string I send does not stay intact.
Here is my code in the sending application:
HWND wndsend = 0;
wndsend = FindWindowA(0, "Receiving window");
if (wndsend == 0)
{
    printf("Couldn't find window.");
}
TCHAR* lpszString = (TCHAR*)"De string is ontvangen";
COPYDATASTRUCT cds;
cds.dwData = 1;
cds.cbData = sizeof(lpszString);
cds.lpData = (TCHAR*)lpszString;
SendMessage(wndsend, WM_COPYDATA, (WPARAM)hwnd, (LPARAM)(LPVOID)&cds);
And this is the code in the receiving application:
case WM_COPYDATA:
    COPYDATASTRUCT* pcds;
    pcds = (COPYDATASTRUCT*)lParam;
    if (pcds->dwData == 1)
    {
        TCHAR* lpszString;
        lpszString = (TCHAR*)(pcds->lpData);
        MessageBox(0, lpszString, TEXT("clicked"), MB_OK | MB_ICONINFORMATION);
    }
    return 0;
Now what happens is that the message box that gets called displays Chinese characters.
My guess is that I didn't convert it right, or that I don't actually send the string but just a pointer to it, which would give the receiving window completely different data. I don't know how to fix it, though.

sizeof(lpszString) is the size of the pointer, but you need the size in bytes of the buffer. You need to use:
sizeof(TCHAR)*(_tcsclen(lpszString)+1)
The code that reads the string should take care not to read off the end of the buffer by reading the value of cbData that is supplied to it.
Remember that sizeof evaluates at compile time. Keep that thought to the front of your mind when you use it and if ever you find yourself using sizeof with something that you know to be dynamic, take a step back.
As an extra, free, piece of advice I suggest that you stop using TCHAR and pick one character set. I would recommend Unicode. So, use wchar_t in place of TCHAR. You are already building a Unicode app.
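Put together, a corrected sender might look like this (a sketch, assuming a Unicode build; wndsend and hwnd are the handles from the question):
const wchar_t* lpszString = L"De string is ontvangen";
COPYDATASTRUCT cds;
cds.dwData = 1;
cds.cbData = (DWORD)(sizeof(wchar_t) * (wcslen(lpszString) + 1)); // size in bytes, terminator included
cds.lpData = (PVOID)lpszString;
SendMessage(wndsend, WM_COPYDATA, (WPARAM)hwnd, (LPARAM)&cds);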

Also, lpData is a pointer to the actual data, and cbData should be the size of that data, but you are actually setting it to the size of the pointer. Set it to the length of the string instead (and probably include the terminating 0 character too): strlen(lpszString) + 1.
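In code, that would be something like the following (a sketch; lpszString here is assumed to be a plain char string):
cds.cbData = (DWORD)(strlen(lpszString) + 1); // bytes, including the terminating 0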

Related

How to send WM_COPYDATA from C++ to AutoHotKey?

Trying to SendMessage with WM_COPYDATA from a C++ application to an AutoHotkey script.
I tried to follow the example found in the docs:
https://learn.microsoft.com/en-us/windows/win32/dataxchg/using-data-copy
Then I did:
HWND htarget_window = FindWindow(NULL, L"MyGui");
std::string str = "Hello World";
COPYDATASTRUCT cds;
cds.dwData = 1;
cds.lpData = (PVOID) str.c_str();
cds.cbData = strlen((char*)cds.lpData);
auto Response = SendMessage(htarget_window, WM_COPYDATA, (WPARAM)htarget_window, (LPARAM)&cds);
And in the Autohotkey script:
OnMessage(0x4a, "Receive_WM_COPYDATA")
Receive_WM_COPYDATA(wParam, lParam) {
    ; Retrieve the CopyDataStruct's lpData member.
    StringAddress := NumGet(lParam + 2*A_PtrSize)
    ; Copy the string out of the structure.
    Data := StrGet(StringAddress)
    MsgBox Received the following string: %Data%
}
The message is being received, but the output is a string of garbled characters, when it should be: Hello World.
I have also checked GetLastError() after the SendMessage and it returned 0.
I must be doing something wrong inside of the COPYDATASTRUCT.
AutoHotkey x64.
Your use of StrGet() is wrong:
You are not including the std::string's null terminator in the sent data, but you are also not passing the value of the COPYDATASTRUCT::cbData field to StrGet(), so it is going to look for a null terminator that does not exist. You need to specify the length that is in the COPYDATASTRUCT::cbData field, eg:
StringLen := NumGet(lParam + A_PtrSize, "int");
StringAddress := NumGet(lParam + 2*A_PtrSize);
Data := StrGet(StringAddress, StringLen, Encoding);
More importantly, you are not specifying an Encoding for StrGet(), so it is going to interpret the raw data in whatever the native encoding of the script is (see A_IsUnicode). Don't do that. Be explicit about the encoding used by the C++ code. If the std::string holds a UTF-8 string, specify "UTF-8". If the std::string holds a string in the user's default ANSI locale, specify "CP0". And so on. What you are seeing happen is commonly known as Mojibake, which happens when single-byte character data is mis-interpreted in the wrong encoding.
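For completeness, the sending side can also include the terminator so the receiver has a usable bound either way (a sketch; hwnd_sender is a hypothetical handle to the sender's own window):
std::string str = "Hello World";        // ANSI/UTF-8 bytes
COPYDATASTRUCT cds;
cds.dwData = 1;
cds.lpData = (PVOID)str.c_str();
cds.cbData = (DWORD)(str.size() + 1);   // this time include the null terminator
SendMessage(htarget_window, WM_COPYDATA, (WPARAM)hwnd_sender, (LPARAM)&cds);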

Why do GetServiceDisplayNameW() and GetServiceDisplayNameA() return different required buffer sizes in characters?

Here is some sample code (sample code only, kept simple: no error handling, no closing of handles, and so on):
SC_HANDLE hSCManager = ::OpenSCManager(nullptr, nullptr, 0);
DWORD buffSize = 0;
::GetServiceDisplayName(hSCManager, m_serviceName, nullptr, &buffSize);
LPTSTR buff = new TCHAR[++buffSize];
VERIFY(::GetServiceDisplayName(hSCManager, m_serviceName, buff, &buffSize));
My sample service has the display name of "notepad starter" (15 characters).
Switching between build configurations, GetServiceDisplayName() returns a buffer size of 30 under ANSI (GetServiceDisplayNameA) and 15 under UNICODE (GetServiceDisplayNameW).
The documentation for this API says it returns the buffer size in characters, excluding the null terminator (it is not well documented, but I expect the buffer size to include the null terminator on the second call).
Why is it returning different buffer sizes in different build configurations?
First of all, GetServiceDisplayName takes a handle to the service control manager database (hSCManager) as its first parameter, not a handle to a service (hService), so you do not need to open the service for this task. You also do not need SC_MANAGER_ALL_ACCESS here; 0 is enough.
Your main error, however, comes next. You allocate the buffer with new TCHAR[buffSize + 1], i.e. buffSize + 1 characters, and this is correct: GetServiceDisplayName returns the size of the service's display name excluding the null-terminating character, so we need one extra character of space for the terminating 0.
The error is in the next line, in &buffSize: the last parameter, lpcchBuffer, must contain the size of the buffer in characters, i.e. exactly the buffer size you allocated. But you allocated buffSize + 1 characters and passed buffSize. So the code must be:
if (SC_HANDLE hSCManager = OpenSCManagerW(nullptr, nullptr, 0))
{
    DWORD cch = 0;
    if (!GetServiceDisplayNameW(hSCManager, m_serviceName, nullptr, &cch))
    {
        if (GetLastError() == ERROR_INSUFFICIENT_BUFFER)
        {
            PWSTR buff = (PWSTR)alloca(++cch * sizeof(WCHAR));
            if (GetServiceDisplayNameW(hSCManager, m_serviceName, buff, &cch))
            {
                DbgPrint("%S\n", buff);
            }
        }
    }
    CloseServiceHandle(hSCManager);
}
So in your code you must replace buffSize + 1 with ++buffSize.
As for the ANSI version, GetServiceDisplayNameA: there really is an error in the API implementation here. If the buffer size in characters is not big enough, it returns how many bytes the Unicode service name requires, excluding the null-terminating character. If the buffer is big enough, it does not update lpcchBuffer at all. This is yet another argument for never using the A versions of the API, and always using W.
I think the correct answer came six months later (I saw it only three years after that) from Raymond Chen:
Why is it reporting a required buffer size larger than what it actually needs?
Because character set conversion is hard.
When you call the GetServiceDisplayNameA function (ANSI version), it forwards the call to the GetServiceDisplayNameW function (Unicode version). If the Unicode version says, "Sorry, that buffer is too small; it needs to be big enough to hold N Unicode characters," the ANSI version doesn't know how many ANSI characters that translates to. A single Unicode character could expand to as many as two ANSI characters in the case where the ANSI code page is DBCS. The GetServiceDisplayNameA function plays it safe and takes the worst-case scenario that the service display name consists completely of Unicode characters which require two ANSI characters to represent. That's why it over-reports the buffer size.
devblogs.microsoft.com/oldnewthing/20180606-00/?p=98925
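A minimal sketch of the behavior described above, assuming a service whose display name is "notepad starter" (15 characters); "MyService" is a hypothetical service key name and the commented values are illustrative:
SC_HANDLE hSCManager = OpenSCManagerW(nullptr, nullptr, 0);
DWORD cchW = 0, cchA = 0;
// Both size-query calls fail with ERROR_INSUFFICIENT_BUFFER and report a required size.
GetServiceDisplayNameW(hSCManager, L"MyService", nullptr, &cchW); // cchW == 15 (characters, no terminator)
GetServiceDisplayNameA(hSCManager, "MyService", nullptr, &cchA);  // cchA == 30 (worst case: 2 ANSI chars per Unicode char)
CloseServiceHandle(hSCManager);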

Correct extraction of int from an edit window in WinAPI for logic testing

I have an edit box with the ES_NUMBER attribute, and on a button press I am attempting to check whether the value of the edit box is between 2 and 15 (inclusive).
Having checked on StackOverflow, I found the strong recommendation to use strtol() as opposed to atoi(); however, neither has successfully allowed me to perform the necessary check. Please see the current code below.
char buff[1024];
GetWindowText(hWndNoOfTeams, (LPWSTR)buff, 1024);
int i;
i = strtol(buff, NULL, 10);
if ((i > 1) && (i < 16)) {
    MessageBox(hWnd, (LPCWSTR)buff, L"MSGBOX", MB_OK);
} else {
    MessageBox(hWnd, L"The number of teams must be greater than 1 and less than 16.", L"MSGBOX", MB_OK);
};
The test works correctly between 0 and 9; however, beyond that it always presents the second message box. I suspect the issue lies in the method of extracting the integer from the string, as the buff array contains the correct value for all inputs.
Apologies if I have missed something that ought to be glaringly obvious.
GetWindowText(hWndNoOfTeams, (LPWSTR)buff, 1024);
Never cast anything to LPWSTR or LPSTR or anything related to them. The compiler is telling you that GetWindowText() expects a wide character string, which is an array of WCHARs, not an array of chars. The wide character APIs are selected by default, as all new Windows programs should be Unicode-aware.
There are separate conversion routines for wide strings, such as wcstol().
Look up the UNICODE and _UNICODE macros and Unicode handling on Windows for more information.
You are using the Unicode version of the Win32 API functions, so you need to use a Unicode character buffer and the Unicode version of strtol() to match:
WCHAR buff[1024] = {0};
GetWindowText(hWndNoOfTeams, buff, 1024);
int i = wcstol(buff, NULL, 10);
if ((i > 1) && (i < 16)) {
    MessageBox(hWnd, buff, L"MSGBOX", MB_OK);
} else {
    MessageBox(hWnd, L"The number of teams must be greater than 1 and less than 16.", L"MSGBOX", MB_OK);
}

Calling WriteFile twice

bool sendMessageToGraphics(char* msg)
{
    //char ea[] = "SSS";
    char* chRequest = msg; // Client -> Server
    DWORD cbBytesWritten, cbRequestBytes;
    // Send one message to the pipe.
    cbRequestBytes = sizeof(TCHAR) * (lstrlen(chRequest) + 1);
    if (*msg - '8' == 0)
    {
        char new_msg[1024] = { 0 };
        string answer = "0" + '\0';
        copy(answer.begin(), answer.end(), new_msg);
        char* request = new_msg;
        WriteFile(hPipe, request, cbRequestBytes, &cbRequestBytes, NULL);
    }
    BOOL bResult = WriteFile( // Write to the pipe.
        hPipe,            // Handle of the pipe
        chRequest,        // Message to be written
        cbRequestBytes,   // Number of bytes to write
        &cbBytesWritten,  // Number of bytes written
        NULL);            // Not overlapped
    if (!bResult /*Failed*/ || cbRequestBytes != cbBytesWritten /*Failed*/)
    {
        _tprintf(_T("WriteFile failed w/err 0x%08lx\n"), GetLastError());
        return false;
    }
    _tprintf(_T("Sends %ld bytes; Message: \"%s\"\n"),
        cbBytesWritten, chRequest);
    return true;
}
After the first WriteFile runs (in the case of '8'), the second WriteFile call doesn't work right. Can someone see why?
The sendMessageToGraphics function needs to send a move to a chess board.
There are 2 problems in your code:
First of all, there's a (minor) problem where you initialize a string in your conditional statement. You initialize it as so:
string answer = "0" + '\0';
This does not do what you think it does. It will invoke the operator+ using const char* and char as its argument types. This will perform pointer addition, adding the value of '\0' to where your constant is stored. Since '\0' will be converted to the integer value of 0, it will not add anything to the constant. But your string ends up not having a '\0' terminator. You could solve this by changing the statement to:
string answer = std::string("0") + '\0';
But the real problem lies in the way you use your size variables. You first initialize the size variable to the string length of your input variable (including the terminating '\0' character). Then, in your conditional statement, you create a new string which you pass to WriteFile, yet you still use the original size. This may cause a buffer overrun, which is undefined behavior. You also set your size variable to however many bytes were actually written to the file, and then reuse that same value in the next call without ever checking it, which can cause further problems.
The easiest way to change this, is to make sure your sizes are set up correctly. For example, instead of the first call, you could do this:
WriteFile(hPipe, request, answer.size(), &cbBytesWritten, NULL);
Then check the return value of WriteFile and the value of cbBytesWritten before you make the next call to WriteFile; that way you know your first call succeeded too.
Also, do not forget to remove the sizeof(TCHAR) part of your size calculation. You never use TCHAR in your code: your input is a regular char*, and so is the string you use in your conditional. I would also advise replacing WriteFile with WriteFileA to make it clear you are writing such characters.
Last of all, make sure your server is actually reading bytes from the handle you write to. If your server does not read from the handle, the WriteFile function will freeze until it can write to the handle again.
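Putting those suggestions together, the branch for '8' might look like this (a sketch; hPipe is assumed to be a valid pipe handle):
if (*msg == '8')
{
    std::string answer = std::string("0") + '\0'; // "0" plus an embedded terminator, 2 bytes
    DWORD bytesWritten = 0;
    BOOL ok = WriteFileA(hPipe, answer.data(), (DWORD)answer.size(), &bytesWritten, NULL);
    if (!ok || bytesWritten != answer.size())
        return false; // check the first write before attempting the second
}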

MultiByteToWideChar or WideCharToMultiByte and txt files

I'm trying to write a universal text editor which can open ANSI and Unicode files and display them in an edit control. Do I need to repeatedly call ReadFile() if I determine that the text is ANSI? I can't figure out how to perform this task. My attempt below does not work; it displays '?' characters in the edit control.
LARGE_INTEGER fSize;
GetFileSizeEx(hFile, &fSize);
int bufferLen = fSize.QuadPart / sizeof(TCHAR) + 1;
TCHAR* buffer = new TCHAR[bufferLen];
buffer[0] = _T('\0');
DWORD wasRead = 0;
ReadFile(hFile, buffer, fSize.QuadPart, &wasRead, NULL);
buffer[wasRead / sizeof(TCHAR)] = _T('\0');
if (!IsTextUnicode(buffer, bufferLen, NULL))
{
    CHAR* ansiBuffer = new CHAR[bufferLen];
    ansiBuffer[0] = '\0';
    WideCharToMultiByte(CP_ACP, 0, buffer, bufferLen, ansiBuffer, bufferLen, NULL, NULL);
    SetWindowTextA(edit, ansiBuffer);
    delete[] ansiBuffer;
}
else
    SetWindowText(edit, buffer);
CloseHandle(hFile);
delete[] buffer;
There are a few buffer length errors and oddities, but here's your big problem: you are calling WideCharToMultiByte incorrectly. That function is meant to receive UTF-16 encoded text as input, but when IsTextUnicode returns false it means the buffer is not UTF-16 encoded.
The following is basically what you need:
if (!IsTextUnicode(buffer, bufferLen * sizeof(TCHAR), NULL))
    SetWindowTextA(edit, (char*)buffer);
Note that I've fixed the length parameter to IsTextUnicode.
For what it is worth, I think I'd read in to a buffer of char. That would remove the need for the sizeof(TCHAR). In fact I'd stop using TCHAR altogether. This program should be Unicode all the way - TCHAR is what you use when you compile for both NT and 9x variants of Windows. You aren't compiling for 9x anymore I imagine.
So I'd probably code it like this:
char* buffer = new char[filesize + 2]; // +2 for UTF-16 null terminator
DWORD wasRead = 0;
ReadFile(hFile, buffer, filesize, &wasRead, NULL);
// add error checking for ReadFile, including that wasRead == filesize
buffer[filesize] = '\0';
buffer[filesize + 1] = '\0';
if (IsTextUnicode(buffer, filesize, NULL))
    SetWindowText(edit, (wchar_t*)buffer);
else
    SetWindowTextA(edit, buffer);
delete[] buffer;
Note also that this code makes no allowance for the possibility of receiving UTF-8 encoded text. If you want to handle that, you'd need to take your char buffer and send it through MultiByteToWideChar using CP_UTF8.
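That branch might look something like this (a sketch, reusing buffer and filesize from the code above; actually detecting UTF-8, e.g. via a BOM check, is left out):
int wlen = MultiByteToWideChar(CP_UTF8, 0, buffer, (int)filesize, NULL, 0); // query required length
wchar_t* wbuffer = new wchar_t[wlen + 1];
MultiByteToWideChar(CP_UTF8, 0, buffer, (int)filesize, wbuffer, wlen);
wbuffer[wlen] = L'\0';
SetWindowText(edit, wbuffer);
delete[] wbuffer;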