Killing old process with BroadcastSystemMessage - C++

I have an application (native C++, Windows) that must not run more than once on the same machine. The behavior I want to implement is this: when a second instance of the application is started, the first one stops running.
To do so I want to use the WinAPI function BroadcastSystemMessage(), something like the example below.
When the application starts, it sends:
BroadcastSystemMessage(BSF_POSTMESSAGE, &dwRecepients, 0x666, 0, 0);
But when I run my application in debug mode, it doesn't hit
case 0x666:
int iClose = 0 + 1;
break;
when I start another instance. Other messages (WM_KEYDOWN, WM_ACTIVATE, and others) are handled correctly.
What am I doing wrong?

In order to broadcast a custom message you need to create an id for it with the RegisterWindowMessage function, for example:
UINT msg666 = RegisterWindowMessage(L"custom_devil_message");
and use it both in the sending and receiving code:
// sending code
BroadcastSystemMessage(BSF_POSTMESSAGE, &dwRecepients, msg666, 0, 0);
// receiving code - in the window procedure, before or instead of the switch
// (a registered message id is not a compile-time constant, so it cannot be a case label;
//  "message" below is the UINT parameter of your window procedure)
if (message == msg666)
{
    int iClose = 0 + 1;
}
Remember that window messages do not work for console applications (a plain console app has no window or message loop to receive them).
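Putting it together, a rough sketch of the whole pattern might look like the following; the message string, the function names, and the choice to exit via PostQuitMessage are illustrative assumptions, not taken from the question:

#include <windows.h>

// The same registration string produces the same message id in every process.
static const UINT g_msgCloseOldInstance = RegisterWindowMessageW(L"MyApp_CloseOldInstance");

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    // A registered id is not a compile-time constant, so compare at run time.
    if (msg == g_msgCloseOldInstance)
    {
        PostQuitMessage(0);   // the older instance shuts itself down
        return 0;
    }
    return DefWindowProcW(hwnd, msg, wParam, lParam);
}

void CloseOlderInstances()
{
    DWORD dwRecipients = BSM_APPLICATIONS;
    // Post the registered message to the top-level windows of running applications.
    // Call this before the new instance creates its own window, so it does not
    // receive (and react to) its own broadcast.
    BroadcastSystemMessageW(BSF_POSTMESSAGE, &dwRecipients, g_msgCloseOldInstance, 0, 0);
}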

The broadcast started working after I changed the message ID to WM_APPCOMMAND + 10. In the end it didn't help, though, because BroadcastSystemMessage() doesn't deliver messages to tabs in a browser, which is my case. Also, I couldn't find the range of message IDs that are allowed to be sent with BroadcastSystemMessage().

Related

Crashing when calling QTcpSocket::setSocketDescriptor()

My project uses QTcpSocket and the function setSocketDescriptor(). The code is very ordinary:
QTcpSocket *socket = new QTcpSocket();
socket->setSocketDescriptor(this->m_socketDescriptor);
This code worked fine most of the time, until I ran a performance test on Windows Server 2016 and a crash occurred. I debugged it with the crash dump; here is the log:
0000004f`ad1ff4e0 : ucrtbase!abort+0x4e
00000000`6ed19790 : Qt5Core!qt_logging_to_console+0x15a
000001b7`79015508 : Qt5Core!QMessageLogger::fatal+0x6d
0000004f`ad1ff0f0 : Qt5Core!QEventDispatcherWin32::installMessageHook+0xc0
00000000`00000000 : Qt5Core!QEventDispatcherWin32::createInternalHwnd+0xf3
000001b7`785b0000 : Qt5Core!QEventDispatcherWin32::registerSocketNotifier+0x13e
000001b7`7ad57580 : Qt5Core!QSocketNotifier::QSocketNotifier+0xf9
00000000`00000001 : Qt5Network!QLocalSocket::socketDescriptor+0x4cf7
00000000`00000000 : Qt5Network!QAbstractSocket::setSocketDescriptor+0x256
In the stderr log, I see these messages:
CreateWindow() for QEventDispatcherWin32 internal window failed (Not enough storage is available to process this command.)
Qt: INTERNAL ERROR: failed to install GetMessage hook: 8, Not enough storage is available to process this command.
Here is the function in the Qt codebase where the code stopped:
void QEventDispatcherWin32::installMessageHook()
{
    Q_D(QEventDispatcherWin32);
    if (d->getMessageHook)
        return;
    // setup GetMessage hook needed to drive our posted events
    d->getMessageHook = SetWindowsHookEx(WH_GETMESSAGE, (HOOKPROC) qt_GetMessageHook, NULL, GetCurrentThreadId());
    if (Q_UNLIKELY(!d->getMessageHook)) {
        int errorCode = GetLastError();
        qFatal("Qt: INTERNAL ERROR: failed to install GetMessage hook: %d, %s",
               errorCode, qPrintable(qt_error_string(errorCode)));
    }
}
I did some research; the error "Not enough storage is available to process this command." may mean that the OS (Windows) does not have enough resources to process this function (SetWindowsHookEx) and fails to create the hook, so Qt raises a fatal error and my app is killed.
I tested this on Windows Server 2019 and the app works fine; no crashes appear.
I just want to understand the meaning of the error message (stderr), because I don't really know what "Not enough storage" refers to. Is it perhaps a limit or a bug in Windows Server 2016? If so, is there any way to overcome this issue on Windows Server 2016?
The error 'Not enough storage is available to process this command' usually occurs on Windows servers when a registry value is set incorrectly, or when the configuration was not set correctly after a recent reset or reinstallation.
Below is a verified procedure for this issue (a programmatic sketch follows the steps):
Click Start > Run, type regedit, and press Enter.
Navigate to the key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\LanmanServer\Parameters.
Locate IRPStackSize.
If this value does not exist, right-click the Parameters key, click New > DWORD Value, and type IRPStackSize as the name.
The name of the value must match the capitalization above exactly.
Right-click IRPStackSize and click Modify.
Select Decimal, enter a value higher than 15 (the maximum is 50 decimal), and click OK.
Close the registry editor and restart your computer.
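If you prefer to apply the change from code rather than through regedit, a minimal sketch using the Win32 registry API might look like this; the value 32 is just an example within the 16..50 range, the program must run elevated, and a reboot is still required afterwards:

#include <windows.h>
#include <iostream>

int main()
{
    HKEY hKey = nullptr;
    // Open (or create) the LanmanServer Parameters key with write access.
    LSTATUS rc = RegCreateKeyExW(HKEY_LOCAL_MACHINE,
                                 L"SYSTEM\\CurrentControlSet\\Services\\LanmanServer\\Parameters",
                                 0, nullptr, 0, KEY_SET_VALUE, nullptr, &hKey, nullptr);
    if (rc != ERROR_SUCCESS)
    {
        std::cerr << "RegCreateKeyExW failed: " << rc << "\n";
        return 1;
    }

    // IRPStackSize is a REG_DWORD; 32 is an example value in the 16..50 range.
    DWORD value = 32;
    rc = RegSetValueExW(hKey, L"IRPStackSize", 0, REG_DWORD,
                        reinterpret_cast<const BYTE*>(&value), sizeof(value));
    if (rc != ERROR_SUCCESS)
        std::cerr << "RegSetValueExW failed: " << rc << "\n";

    RegCloseKey(hKey);
    return rc == ERROR_SUCCESS ? 0 : 1;
}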
Reference
After researching for a few days, I was finally able to configure the Windows Server 2016 registry setting that prevents the crash.
So basically it is a limitation of the OS itself, called the desktop heap limitation.
https://learn.microsoft.com/en-us/troubleshoot/windows-server/performance/desktop-heap-limitation-out-of-memory
(The funny thing is that the error message says "Not enough storage is available to process this command", but the real problem turned out to be the desktop heap limitation.)
For the solution, follow the steps in this link: https://learn.microsoft.com/en-us/troubleshoot/system-center/orchestrator/increase-maximum-number-concurrent-policy-instances
I increased the third parameter of SharedSection to 2048 and it fixed the issue.
Summary steps:
Desktop Heap for the non-interactive desktops is identified by the third parameter of the SharedSection= segment of the following registry value:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager\SubSystems\Windows
The default data for this registry value will look something like the following:
%SystemRoot%\system32\csrss.exe ObjectDirectory=\Windows SharedSection=1024,3072,512 Windows=On SubSystemType=Windows ServerDll=basesrv,1 ServerDll=winsrv:UserServerDllInitialization,3 ServerDll=winsrv:ConServerDllInitialization,2 ProfileControl=Off MaxRequestThreads=16
The value to be entered as the third parameter of the SharedSection= segment should be based on the calculation:
(number of desired concurrent policies) * 10 = (third parameter value)
Example: if you want 200 concurrent policy instances, then 200 * 10 = 2000; rounding up to a nice memory number gives you 2048 as the third parameter, resulting in the following update to the registry value:
SharedSection=1024,3072,2048

C++ Disable Delayed Ack on Windows

I am trying to replicate a real-time application on a Windows computer so I can debug and make changes more easily, but I ran into an issue with delayed ACK. I have already disabled Nagle and confirmed that it improves the speed a bit. When sending lots of small packets, Windows doesn't ACK right away and delays the ACK by 200 ms. Doing more research about it, I came across this. The problem with changing the registry value is that it affects the whole system rather than just the application I am working with. Is there any way to disable delayed ACK on Windows, like TCP_QUICKACK from Linux, using setsockopt? I tried hard-coding option 12, but got WSAEINVAL from WSAGetLastError.
I saw a dev on GitHub mention using SIO_TCP_SET_ACK_FREQUENCY, but I didn't see any example of how to actually use it.
So I tried the following:
#define SIO_TCP_SET_ACK_FREQUENCY _WSAIOW(IOC_VENDOR,23)
result = WSAIoctl(sock, SIO_TCP_SET_ACK_FREQUENCY, 0, 0, info, sizeof(info), &bytes, 0, 0);
and I got WSAEFAULT as an error code. Please help!
I've seen several references online suggesting that TCP_QUICKACK may actually be supported by Winsock via setsockopt() (option 12), even though it is NOT documented or officially confirmed anywhere.
But, regarding your actual question, your use of SIO_TCP_SET_ACK_FREQUENCY is failing because you are not providing any input buffer to WSAIoctl() to set the actual frequency value. Try something like this:
int freq = 1; // can be 1..255, default is 2
result = WSAIoctl(sock, SIO_TCP_SET_ACK_FREQUENCY, &freq, sizeof(freq), NULL, 0, &bytes, NULL, NULL);
Note that SIO_TCP_SET_ACK_FREQUENCY is available in Windows 7 / Server 2008 R2 and later.
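For reference, a more complete, self-contained sketch of that call might look like the following (the #define mirrors the one in the question, and error handling is kept minimal):

#include <winsock2.h>
#include <cstdio>
#pragma comment(lib, "ws2_32.lib")

// Same vendor-defined control code as in the question; it is not in older SDK headers.
#ifndef SIO_TCP_SET_ACK_FREQUENCY
#define SIO_TCP_SET_ACK_FREQUENCY _WSAIOW(IOC_VENDOR, 23)
#endif

// Sets the TCP ACK frequency on a connected socket; 1 effectively disables delayed ACK.
bool DisableDelayedAck(SOCKET sock)
{
    int freq = 1;              // valid range is 1..255, the system default is 2
    DWORD bytesReturned = 0;
    int rc = WSAIoctl(sock, SIO_TCP_SET_ACK_FREQUENCY,
                      &freq, sizeof(freq),   // the frequency goes in the *input* buffer
                      nullptr, 0,            // no output buffer is needed
                      &bytesReturned, nullptr, nullptr);
    if (rc == SOCKET_ERROR)
    {
        std::fprintf(stderr, "WSAIoctl(SIO_TCP_SET_ACK_FREQUENCY) failed: %d\n", WSAGetLastError());
        return false;
    }
    return true;
}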

libircclient: Selective connection absolutely impossible to debug

I'm not usually the type to post a question; I'd rather search for why something doesn't work first, but this time I did everything I could and I just can't figure out what is wrong.
So here's the thing:
I'm currently programming an IRC Bot, and I'm using libircclient, a small C library to handle IRC connections. It's working pretty great, it does the job and is kinda easy to use, but ...
I'm connecting to two different servers, and so I'm using the custom networking loop, which uses the select function. On my personal computer, there's no problem with this loop, and everything works great.
But (Here's the problem), on my remote server, where the bot will be hosted, I can connect to one server but not the other.
I tried to debug everything I could. I even went through the sources of libircclient to see how it works, and put some printfs where I could, and I can see where the failure comes from, but I don't understand why it happens.
So here's the code for the server (the irc_session_t objects are encapsulated, but it's normally fairly easy to understand. Feel free to ask for more information if you want):
// Connect the first session
first.connect();
// Connect the osu! session
second.connect();
// Initialize sockets sets
fd_set sockets, out_sockets;
// Initialize sockets count
int sockets_count;
// Initialize timeout struct
struct timeval timeout;
// Set running as true
running = true;
// While the server is running (Which means always)
while (running)
{
    // First session has disconnected
    if (!first.connected())
        // Reconnect it
        first.connect();
    // Second session has disconnected
    if (!second.connected())
        // Reconnect it
        second.connect();
    // Reset timeout values
    timeout.tv_sec = 1;
    timeout.tv_usec = 0;
    // Reset sockets count
    sockets_count = 0;
    // Reset sockets and out sockets
    FD_ZERO(&sockets);
    FD_ZERO(&out_sockets);
    // Add sessions descriptors
    irc_add_select_descriptors(first.session(), &sockets, &out_sockets, &sockets_count);
    irc_add_select_descriptors(second.session(), &sockets, &out_sockets, &sockets_count);
    // Select something. If it went wrong
    int available = select(sockets_count + 1, &sockets, &out_sockets, NULL, &timeout);
    // Error
    if (available < 0)
        // Error
        Utils::throw_error("Server", "run", "Something went wrong when selecting a socket");
    // We have a socket
    if (available > 0)
    {
        // If there was something wrong when processing the first session
        if (irc_process_select_descriptors(first.session(), &sockets, &out_sockets))
            // Error
            Utils::throw_error("Server", "run", Utils::string_format("Error with the first session: %s", first.get_error()));
        // If there was something wrong when processing the second session
        if (irc_process_select_descriptors(second.session(), &sockets, &out_sockets))
            // Error
            Utils::throw_error("Server", "run", Utils::string_format("Error with the second session: %s", second.get_error()));
    }
The problem in this code is that this line:
irc_process_select_descriptors(second.session(), &sockets, &out_sockets)
always returns an error the first time it's called, and only for one server. The weird thing is that on my Windows computer it works perfectly, while on the Ubuntu server it just doesn't, and I can't understand why.
I did some in-depth debug, and I saw that libircclient does this:
if (session->state == LIBIRC_STATE_CONNECTING && FD_ISSET(session->sock, out_set))
And this is where everything goes wrong. The session state is correctly set to LIBIRC_STATE_CONNECTING, but the second condition, FD_ISSET(session->sock, out_set), always returns false. It returns true for the first session, but never for the second.
The two servers are irc.twitch.tv:6667 and irc.ppy.sh:6667. The servers are correctly set, and the server passwords are correct too, since everything works fine on my personal computer.
Sorry for the very long post.
Thanks in advance !
Alright, after some hours of debugging, I finally found the problem.
So while a session is connecting, it is in the LIBIRC_STATE_CONNECTING state, and when irc_process_select_descriptors is called, it checks this:
if (session->state == LIBIRC_STATE_CONNECTING && FD_ISSET(session->sock, out_set))
The problem is that select() alters the socket sets, clearing the descriptors that did not become ready.
So if the socket wasn't ready yet when irc_process_select_descriptors was called, FD_ISSET returns 0, because select() removed that descriptor from the set.
I fixed it by just writing
if (session->state == LIBIRC_STATE_CONNECTING)
{
    if (!FD_ISSET(session->sock, out_set))
        return 0;
    ...
}
So it makes the program wait until the server has actually sent us something.
Sorry for not having checked everything!

How to refresh logon screensaver parameter changes?

I have a Windows service that may change the timeout on the logon screensaver in Windows (as described here.) To do that I change the following registry key to the timeout in seconds:
HKEY_USERS\.DEFAULT\Control Panel\Desktop\ScreenSaveTimeOut
The issue is: how do I make the OS "read" or refresh the actual screensaver timeout after the registry value above changes?
In my experience it is reliably refreshed only when I reboot the system, but in my case I need the change to apply without a reboot.
EDIT_1: After the suggestion below I tried, as far as I can tell, every possible combination of flags for the following:
DWORD bsmInfo1 = BSM_ALLDESKTOPS;
DWORD dwFlgs = BSF_FORCEIFHUNG | BSF_IGNORECURRENTTASK | BSF_NOTIMEOUTIFNOTHUNG | BSF_SENDNOTIFYMESSAGE;
int nbsm1 = ::BroadcastSystemMessage(dwFlgs, &bsmInfo1, WM_SETTINGCHANGE, 0, (LPARAM)L"Windows");
DWORD bsmInfo2 = BSM_ALLDESKTOPS;
int nbsm2 = ::BroadcastSystemMessage(dwFlgs, &bsmInfo2, WM_SETTINGCHANGE, 0, (LPARAM)L"WindowsThemeElement");
to no avail :( I receive 1 as a result from both calls but it has no effect.
I was able to resolve this.
If your service is running in the same session as the logon screensaver then you can call SystemParametersInfo with the SPI_SETSCREENSAVETIMEOUT flag.
SystemParametersInfo broadcasts the WM_SETTINGCHANGE message to all top level windows to indicate that a parameter has changed. If your code isn't running in the correct session then you could use BroadcastSystemMessage with the BSM_ALLDESKTOPS flag to deliver the WM_SETTINGCHANGE message. However, this does require the SE_TCB_NAME privilege, so your code would have to be running as SYSTEM.
I haven't actually tried this cross-session, so I can't guarantee that it works.
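For the same-session case, a minimal sketch of that call could look like this; SPIF_UPDATEINIFILE persists the new value and SPIF_SENDCHANGE broadcasts WM_SETTINGCHANGE, but whether this reaches the logon screensaver from a service is exactly the open question above:

#include <windows.h>

// Applies a new screensaver timeout (in seconds) for the calling session and
// broadcasts WM_SETTINGCHANGE so other components can pick up the change.
bool ApplyScreenSaverTimeout(UINT seconds)
{
    return SystemParametersInfoW(SPI_SETSCREENSAVETIMEOUT, seconds, nullptr,
                                 SPIF_UPDATEINIFILE | SPIF_SENDCHANGE) != FALSE;
}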

C++ network programming in Linux: Server Questions

I am learning network programming using C/C++ and I have created a TCP server that is supposed to respond in specific ways to messages from a client. To do this, I created a class that the server passes the message to; it returns a string to report back to the client.
Here is my problem: sometimes it reports the correct string back, and other times it just repeats what I sent to the message handler, even though nowhere in the code do I return what was passed in. So I am wondering: am I receiving the message correctly?
Secondly, I am unsure how to keep a connection open in a while loop to continually pass messages back and forth. You can see how I did it in the code below, but I am pretty sure this is incorrect; any help on this would be great. Thanks!
if (!fork())
{ // this is the child process
    close(sockfd); // child doesn't need the listener
    while ((numbytes = recv(new_fd, buf, MAXDATASIZE-1, 0)) > 0)
    {
        //numbytes = recv(new_fd, buf, MAXDATASIZE-1, 0);
        buf[numbytes-1] = '\0';
        const char* temp = ash.handleMessage(buf).c_str();
        int size_of_temp = ash.handleMessage(buf).length();
        send(new_fd, temp, size_of_temp, 0);
        //send(new_fd, temp, size_of_temp+1, 0);
    }
}//end if
Please excuse my ghetto code
Handles the message
Class Method handler uses
If you're learning about sockets, you should also know that you can't assume that what you send() as a "complete message" will be delivered as a complete message.
If you send() a large piece of data from your client, you might need multiple recv() calls on the server (or vice versa) to read it all. This is a big difference from how files usually work...
If you're designing your own protocol, you can opt to also send the message's length, like [LEN][message]. An easy example: if the strings you send are limited to 256 bytes, you can start by send()ing a short representing the string's length.
Or, easier, decide to use line feeds (newline, \n) to terminate messages. Then the protocol would look like
"msg1\nmsg2\n"
and then you would recv() and append the data until you get a newline; a rough sketch of that loop follows below. This is all I can muster right now; there are a lot of great examples on the internet, but I would recommend getting the source of some "real" program and looking at how it handles its networking.
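As a rough sketch of that newline-delimited receive loop (the function name and buffer size are mine, not from the question), the receiving side could accumulate bytes until it finds a '\n':

#include <string>
#include <sys/types.h>
#include <sys/socket.h>

// Reads from the socket until a full '\n'-terminated message is available.
// Returns false on disconnect or error; bytes after the newline stay in
// `pending` so the next call picks up where this one left off.
bool recv_line(int fd, std::string& pending, std::string& message)
{
    for (;;) {
        std::string::size_type pos = pending.find('\n');
        if (pos != std::string::npos) {
            message = pending.substr(0, pos);  // one complete message, without the '\n'
            pending.erase(0, pos + 1);         // keep whatever followed the newline
            return true;
        }
        char buf[512];
        ssize_t n = recv(fd, buf, sizeof(buf), 0);
        if (n <= 0)
            return false;                      // peer closed the connection or an error occurred
        pending.append(buf, static_cast<std::string::size_type>(n));
    }
}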
You are calling handleMessage twice. You didn't post the code, but it looks like you're returning a std::string; note also that calling c_str() on that temporary string leaves temp pointing at destroyed memory once the statement ends. It might be better to do:
string temp = ash.handleMessage(buf);
int size_of_temp = temp.length();
This would avoid repeating any action that takes place in handleMessage.
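Applied to the loop in the question, that fix might look roughly like this (keeping the original behavior of overwriting the last received byte, which I'm assuming was meant to strip a trailing newline):

while ((numbytes = recv(new_fd, buf, MAXDATASIZE-1, 0)) > 0)
{
    buf[numbytes-1] = '\0';                      // as before: drop the trailing byte and terminate
    std::string reply = ash.handleMessage(buf);  // call the handler exactly once, keep the string alive
    send(new_fd, reply.c_str(), reply.length(), 0);
}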