I have an executable (neural network simulation software) developed in C++ and wxWidgets, compiled with Visual Studio 2010. On my development machine (Windows 7 64-bit) it was running very slowly, taking 130 seconds instead of 14 seconds for the same run. I finally discovered that it runs fine if I simply change the filename of the exe. To hunt for the cause I've turned off antivirus (Microsoft Security Essentials) and searched the registry and event logs, but found nothing. It doesn't matter whether it's compiled against the wxWidgets DLL or the static library, 32-bit or 64-bit.
What could possibly be the problem? Processor affinity and priority are all normal. The only clue is that the slow process also uses more memory (130 MB instead of 90 MB).
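Behaviour that changes when only the EXE's filename changes often points at Windows' application-compatibility (shim) engine, which matches processes by executable name. One thing worth ruling out is whether a compatibility layer is being injected into the process. A minimal sketch of such a check; note this only catches layers applied via the Compatibility tab or the AppCompatFlags\Layers registry key, which surface through the __COMPAT_LAYER environment variable, not every database-driven shim:

    // Check whether Windows reports a compatibility layer for this process.
    #include <windows.h>
    #include <iostream>

    int main()
    {
        char layer[256] = {};
        DWORD len = GetEnvironmentVariableA("__COMPAT_LAYER", layer, sizeof(layer));
        if (len > 0 && len < sizeof(layer))
            std::cout << "Compatibility layer active: " << layer << '\n';
        else
            std::cout << "No compatibility layer reported.\n";
        return 0;
    }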
Related
A Windows Qt C++ application sometimes crashes; it seems to happen more often on slower machines.
On my main development machine (a 2019 MacBook Pro running Windows via Boot Camp) it almost never happens.
But on another laptop, an Intel Core i5 1.60GHz with 8GB of RAM, it happens pretty much every time.
So much so that I replicated the development environment there to run the Qt Creator project in debug mode.
That gave me the following picture.
What is the disassembler telling me?
Disassembler (RtlIsZeroMemory)
How can I further debug this?
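A crash inside RtlIsZeroMemory during an allocation or free is usually ntdll's heap code discovering corruption that happened earlier, which would also fit it reproducing only on some machines. Two ways to catch the corruption closer to its source: enable full page heap with gflags from the Debugging Tools for Windows (gflags /p /enable yourapp.exe /full, where yourapp.exe is a placeholder name), or, if you are building with MSVC, turn on the CRT debug heap's continuous checking. A minimal sketch of the latter (debug builds only):

    // Make the MSVC debug-CRT heap validate itself on every
    // allocation/free, so heap corruption trips an assertion at the
    // corrupting call instead of much later inside ntdll.
    #include <crtdbg.h>

    int main()
    {
        _CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF | _CRTDBG_CHECK_ALWAYS_DF);

        // ... run the program as usual; a heap buffer overrun should
        // now be reported at the next allocation call after it occurs.
        return 0;
    }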
We have a minimum system requirement of Server 2012 and Windows 8.1.
We have programs written in C++.
Some programs in our setup don't use the newer DLLs yet and will still start on Windows 7 or Windows Server 2008 R2. Others use the newer DLLs and fail to start, showing a message that such a DLL wasn't found.
Is there a way to prevent a program from starting on an old Windows system, with a manifest or something similar? I would prefer that the user doesn't see a "DLL not found" message.
The best way would be for the Windows loader itself to tell the user that the executable doesn't match this OS.
I know this works for the OS's own executables: copy a newer EXE from Windows 10 to an old Windows version and it refuses to start, even if no new version-specific DLLs are used.
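That loader behaviour comes from the minimum subsystem version stored in the PE header, and as far as I know you can raise it yourself with the linker's /SUBSYSTEM option (in Visual Studio it is also exposed as Linker > System > Minimum Required Version). A sketch, assuming MSVC's link.exe, where 6.02 corresponds to Windows 8 / Server 2012 and 6.03 to Windows 8.1 / Server 2012 R2:

    link /SUBSYSTEM:WINDOWS,6.02 myprogram.obj ...

An older Windows should then refuse to start the EXE with its own "not compatible with this version of Windows" message, before any DLL resolution happens, which avoids the "DLL not found" dialog entirely.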
We're currently beta testing a Windows application which is built with the latest Visual Studio in C++ and runs on Windows 10. The application dynamically links the VC++ libraries (static linkage is not an option for us).
On 75% of our testers' machines (including all our dev machines) the application works out of the box after installing, but on the others it does not start, failing (presumably) while loading dynamic system libraries, since it never triggers any kind of exception that would write a minidump the way runtime errors do.
Some of these users got errors about missing runtime DLLs, which were solved by installing the latest VC++ 2017 redistributable; however, the application still would not run.
One user has also checked the library dependencies with the Dependencies tool (https://github.com/lucasg/Dependencies), but his results show nothing strange - there is no obvious difference between the output on a working machine and on his own. There are a few question marks (see screenshot: missing modules as shown in Dependencies) next to some UCRT subdependencies but they are there on working machines as well so I presume they are false positives.
I've also tried to deploy the forty-odd relevant UCRT and VC++ DLLs as an app-local deployment next to the executable, but it still wouldn't open on the affected machines (I might have missed some relevant ones, or they were still being resolved from the System32 folder).
How would you debug such a problem, given that we cannot reproduce it locally (it works out of the box on two completely new devices with a fresh Windows 10 install and no build environment) and there is very little information on what might be going wrong with the library loads?
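When the loader fails this silently, two things might narrow it down on an affected machine: enable "show loader snaps" (gflags /i yourapp.exe +sls, assuming the Debugging Tools for Windows are installed; yourapp.exe is a placeholder) and read the loader's trace in a debugger's output window, or ship a tiny probe that loads each runtime DLL directly and reports the Win32 error code. A sketch of the latter; the DLL list is illustrative, not exhaustive:

    // Probe: try to load each runtime DLL and report the error code,
    // to pinpoint which dependency actually fails on the bad machines.
    #include <windows.h>
    #include <cstdio>

    int main()
    {
        const char* dlls[] = {
            "vcruntime140.dll", "msvcp140.dll",
            "api-ms-win-crt-runtime-l1-1-0.dll", "ucrtbase.dll"
        };
        for (const char* name : dlls) {
            HMODULE h = LoadLibraryA(name);
            if (h) {
                std::printf("OK      %s\n", name);
                FreeLibrary(h);
            } else {
                std::printf("FAILED  %s (Win32 error %lu)\n", name, GetLastError());
            }
        }
        return 0;
    }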
0xC000001D is the illegal-instruction exception code.
Either you are targeting instruction sets like AVX2 or SSE4.1 that the customer's CPU doesn't support, or the executable is corrupted (e.g. downloaded in text mode instead of binary mode).
For best possible portability, do not specify /arch:AVX or /arch:AVX2 when compiling with VC++. The compiler will then target the baseline instruction set available on the given architecture (x86 or x86_64 with SSE2).
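If you do want an AVX2 code path, the usual pattern is to gate it behind a runtime check so unsupported CPUs fail gracefully instead of dying with 0xC000001D. A sketch using MSVC's CPUID/XGETBV intrinsics:

    // Detect AVX2: requires CPUID leaf 7, the AVX and OSXSAVE bits,
    // OS-enabled YMM state (XCR0), and the AVX2 feature bit itself.
    #include <intrin.h>
    #include <immintrin.h>
    #include <cstdio>

    static bool cpu_has_avx2()
    {
        int regs[4];
        __cpuid(regs, 0);
        if (regs[0] < 7) return false;               // no CPUID leaf 7

        __cpuid(regs, 1);
        bool osxsave = (regs[2] & (1 << 27)) != 0;   // OS uses XSAVE
        bool avx     = (regs[2] & (1 << 28)) != 0;
        if (!osxsave || !avx) return false;

        // OS must save XMM (bit 1) and YMM (bit 2) state on context switch.
        if ((_xgetbv(0) & 0x6) != 0x6) return false;

        __cpuidex(regs, 7, 0);
        return (regs[1] & (1 << 5)) != 0;            // EBX bit 5 = AVX2
    }

    int main()
    {
        std::printf("AVX2 %s\n", cpu_has_avx2() ? "supported" : "not supported");
        return 0;
    }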
I have a rather large codebase that I've inherited, and I'm kind of stuck in the past for the moment. I'm working in Visual C++ 6 on Windows 7 (32-bit); however, I'm targeting an XP machine (Service Pack 2). Corporate doesn't see the ROI of upgrading it to .NET, and I've got about as much pull as a Mini Cooper towing a train.
With that said, I did seemingly successfully install VC++6 (without XP compatibility) on my Win7 machine and I can compile and run fine. However, when I try to deploy my release build to my XP machine, it crashes (while it does not crash on Win7). If, however, I build the same code on the XP machine directly, it'll work fine. Running VC++6 on my Win7 machine in XP compatibility mode crashes the IDE upon opening of my workspace.
The only thing I can possibly think of is that the code makes extensive use of ActiveX controls and the registry. I'm not sure if maybe there's some Win7 specific registry modifications that are being made or vice-versa. Then again, I know very little about the registry; I'm definitely much more comfortable working in a Unix environment when coding for pleasure, especially when I code in C/C++.
Here's a screenshot of the error I'm getting when it crashes. I'm imagining it's got something to do with ActiveX registration.
No, this isn't ActiveX-related at all. This is your bog-standard, 1980s-style assert. As you would have noticed, had you looked at winocc.cpp line 279.
I've finally managed to run the QtCreator debugger on Windows after struggling with the Comodo Firewall incompatibilities.
I was hoping to switch from an older version of Qt and Visual C++ to the newest version of Qt and QtCreator, but the debugger performance is atrocious.
I have created a simple GUI with one window that does nothing but display itself. After starting up, QtCreator takes ~60MB of RAM (private bytes in Sysinternals Process Explorer).
When I start debugging, GDB uses 180MB; once I start examining the main window pointer, it jumps to 313MB. Every time I try to inspect something, one of the cores jumps to 100% usage and I have to wait a few seconds for the information to show. This is just a toy program, and I'm afraid that the real program I want to switch over will be much worse.
Is this kind of performance normal for MinGW? Would changing to the latest MinGW release improve things?
The Visual C++ IDE + debugger + a real-world program take only about 100MB of RAM, and examining local variables is instantaneous.
Yesterday I built a copy of the Qt 4.5.2 libraries using MSVC 2008 and am using the QtCreator 1.2 MS CDB (Microsoft Console Debugger) support. It seems much faster than gdb. Building Qt for MSVC takes a few hours, but it might be worth trying.
Also, that means smaller Qt DLLs and EXEs as the MS compiler/linker is much better at removing unused code. Some of the Qt DLLs are less than half the size of their MinGW equivalents. Rumour has it that the C++ code the MS compiler generates is faster too.
I had to work with QtCreator a month ago. Its performance is awful; after 30 minutes of working with it, it starts to respond very slowly to everything. Maybe that's because it's still early in its development.