I'm making an application in C++ to list all the applications the user has launched in the past few hours. One of the methods I'm trying to implement reads explorer's memory, which contains a list of recently executed executables.
For example, in Process Hacker I would simply go into explorer's memory, search for pcaclient, double-click on the first result, and I have my data.
Here's my code for the C++ version (the built-in code formatting didn't work properly for me):
https://hastebin.com/pukojobika.cpp
However, when I restart explorer, the memory address changes every time (in the case of the code above, 0x127d02d0). How would I go about searching for the PCACLIENT string in memory and making sure that the resulting memory address contains a string longer than 16 characters?
Thanks.
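A sketch of the search-and-filter step, assuming a region of explorer's memory has already been copied into a local buffer with ReadProcessMemory; the exact spelling/case of the pattern and the helper's name are assumptions, not a known API:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Search one buffer (already copied out of explorer's memory with
// ReadProcessMemory) for the pattern, and keep only the matches whose
// NUL-terminated string data is longer than 16 characters.
std::vector<std::size_t> FindPcaClient(const std::vector<unsigned char>& buf)
{
    static const char pattern[] = "PcaClient";   // assumed spelling/case
    const std::size_t patLen = sizeof(pattern) - 1;

    std::vector<std::size_t> hits;
    auto it = buf.begin();
    for (;;)
    {
        it = std::search(it, buf.end(), pattern, pattern + patLen);
        if (it == buf.end())
            break;

        std::size_t offset = static_cast<std::size_t>(it - buf.begin());

        // Measure the NUL-terminated string starting at the match and
        // keep the hit only if it is longer than 16 characters.
        std::size_t len = 0;
        while (offset + len < buf.size() && buf[offset + len] != '\0')
            ++len;
        if (len > 16)
            hits.push_back(offset);

        ++it; // continue scanning after this match
    }
    return hits;
}
```

Because the region's base address changes whenever explorer restarts, scanning for the pattern like this (rather than hard-coding 0x127d02d0) is what lets the approach survive a restart.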
I am working with an open-source application with a Qt UI that processes large (often 500MB+) XML files. It is in general poorly written from a memory perspective, as it stores the entirety of the data parsed from all files in memory rather than processing and then closing them. I suspect it was written this way to be more responsive (we didn't write it), but it's always been a "RAM hog". However, this past April 2022 it worked quite passably on a Windows 10 workstation.
Now, in Oct 2022, the very same .exe file uses so much RAM on the same machine with the same size files that it slows to a crawl and is virtually unusable. So I suspect a change in Windows and/or the machine that somehow changes how Qt handles opening files. In particular, looking at the memory usage, it looks suspiciously as if, when the user selects multiple files, it's trying to invoke the file-handler function on them all concurrently rather than one at a time. This would be helpful if the parsing were CPU-limited, but it is a disaster in our case, where RAM is by far the limiting factor.
Each file parse requires building a DOM tree that's somewhat larger than the file itself, but then the code extracts the necessary data and populates a data structure that is smaller than the file (maybe 0.75x the size). The scope of the DOM tree is limited to the function called on file open, so back when we first compiled this app, if you selected 10 files, it would build the first DOM tree and then populate the corresponding data structure, after which the memory for the DOM tree would be released and only the data structure would "live on". Then the next DOM tree would be built, leading to a "sawtooth" pattern of RAM use with a drop when each file finished parsing, and with the peak usage never more than one DOM tree plus the data structures already populated (see the sketch below). Now, the same .exe uses about 2x more RAM than the sum of ALL the files put together before even the first parse finishes.
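To make the original (working) pattern concrete, here is a minimal sketch of that per-file flow, assuming Qt's DOM classes; populateDataStructure is a hypothetical stand-in for the extraction step:

```cpp
#include <QDomDocument>
#include <QFile>
#include <QStringList>

// Hypothetical stand-in for the extraction step: in the real app this
// copies the needed data out of the DOM into the smaller, long-lived
// data structure.
static void populateDataStructure(const QDomDocument& doc)
{
    (void)doc;
}

static void parseFiles(const QStringList& paths)
{
    for (const QString& path : paths)
    {
        QFile file(path);
        if (!file.open(QIODevice::ReadOnly))
            continue;

        // The DOM tree is scoped to this loop iteration: once the data
        // has been extracted, doc is destroyed and its memory can be
        // reclaimed before the next file is parsed, which is what
        // produces the "sawtooth" RAM profile.
        QDomDocument doc;
        if (doc.setContent(&file))
            populateDataStructure(doc);
    }
}
```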
As I said, it's the same .exe, which was compiled on a Windows 7 machine in early 2022 but worked on this Windows 10 desktop as late as April 2022 without such exorbitant RAM usage. In fact, other tasks invoked from the GUI are also slower now, I suspect for the same fundamental reason. On the Windows 7 machine where it was originally compiled, it seems to run the same as it always did. Is there any good explanation for this? How would it be fixed within the application code?
I'm trying to write a utility that allows me to read the memory of a process that is currently running in Windows. I have used CreateToolhelp32Snapshot to build a current PID list for all programs running on the computer, and I can open a handle via OpenProcess with the PROCESS_VM_READ flag without any issues. The roadblock I am running into is that the ReadProcessMemory function of the Windows API fails to read anything if the base address given is not currently readable. That being said, what method can I use to determine the readable sections of a process?
My only idea is that I could call ReadProcessMemory repeatedly, starting at the midway point of (size of process in memory)/2, and continue until I find the specific location that will allow me to read, but I believe this would be terribly inefficient for large processes (O(n)), and even if it is the only user-mode option, how would I even find the total size of the process in memory?
If this question is not meant for Stack Overflow, let me know and I will close it; please do not down-vote me, I have been attempting to solve this problem myself for several hours now.
You can call VirtualQueryEx for each range of pages in the address space to find out if the address is in use. If the other process is not suspended, there will obviously be a chance that a page's status changes between your query and read operations.
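For illustration, a minimal sketch of that loop, assuming the handle was opened with PROCESS_QUERY_INFORMATION | PROCESS_VM_READ; it walks the address space region by region and reads only the committed, readable ones:

```cpp
#include <windows.h>
#include <cstdio>
#include <vector>

// Walk the target's address space with VirtualQueryEx and copy out every
// committed, readable region with ReadProcessMemory.
void DumpReadableRegions(HANDLE hProcess)
{
    MEMORY_BASIC_INFORMATION mbi;
    unsigned char* addr = nullptr;

    while (VirtualQueryEx(hProcess, addr, &mbi, sizeof(mbi)) == sizeof(mbi))
    {
        const bool readable = mbi.State == MEM_COMMIT &&
                              !(mbi.Protect & PAGE_NOACCESS) &&
                              !(mbi.Protect & PAGE_GUARD);
        if (readable)
        {
            std::vector<unsigned char> buffer(mbi.RegionSize);
            SIZE_T bytesRead = 0;

            // The region can change protection or vanish between the query
            // and the read, so an occasional failure here is expected and
            // should simply be skipped.
            if (ReadProcessMemory(hProcess, mbi.BaseAddress,
                                  buffer.data(), mbi.RegionSize, &bytesRead))
            {
                std::printf("read %Iu bytes at %p\n", bytesRead, mbi.BaseAddress);
                // ... scan buffer here ...
            }
        }
        addr = static_cast<unsigned char*>(mbi.BaseAddress) + mbi.RegionSize;
    }
}
```

This also sidesteps the "total size" question: the loop simply stops when VirtualQueryEx fails past the last region, so you never need to know the size of the process's memory up front.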
For a project, I've created a C++ program that performs a greedy algorithm on a certain set of data. I have about 100 data sets (stored in individual files). I've tested each of these files manually, and my program gave me a result each time.
Now I want to "batch process" these 100 data sets, because I may have more to do in the near future. So I've created another C++ application that basically loops and calls my other program using system( cmd ); a sketch of that driver loop follows.
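For context, the driver is essentially this kind of loop (a sketch; the executable and data-file names are hypothetical placeholders):

```cpp
#include <cstdio>
#include <cstdlib>
#include <string>

int main()
{
    for (int i = 1; i <= 100; ++i)
    {
        // "greedy.exe" and the data-file naming scheme are placeholders.
        std::string cmd = "greedy.exe data" + std::to_string(i) + ".txt";
        int rc = std::system(cmd.c_str());
        if (rc != 0)
            std::fprintf(stderr, "data set %d: exited with status %d\n", i, rc);
    }
    return 0;
}
```

Checking the value system() returns at least records which runs failed, which helps narrow down whether the crashes follow particular data sets.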
Now, that program works fine, but my other program, which was previously tested, now crashes during that "batch processing". Even weirder, it doesn't crash each time, or even with the same data set.
I've been at it for the past ~6 hours and I can't find what could be wrong. One of my friends suggested that maybe system() launches the other program again so quickly that it doesn't have time to free the proper memory space, but I find that hard to believe.
Thanks!
EDIT:
I'm running on Windows 7 Professional using Visual Studio 2012
Let's say I open some application or process, do some work with it, and then close it.
I need to know whether this application caused any memory leaks,
i.e. used up some heap memory and did not release it properly.
Can I get these statistics somehow? I'm using Visual Studio (for development) under Windows.
I would also be interested in knowing this information for any 3rd-party application.
When an application closes, all resources are automatically released by Windows.
A quick & dirty tool to get an indication of memory/resource leaks inside an application is Perfmon.
The actions executed by an application can cause other processes to use more memory: SQL Server can grow its cache, maybe you have opened Word or Explorer, the Windows Search engine might kick in because you saved some file, the virus scanner can become more active, etc.
Have a look at _CrtSetDbgFlag:
http://msdn.microsoft.com/en-us/library/5at7yxcs(v=VS.100).aspx
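For example, a minimal sketch of enabling the CRT debug heap's automatic leak report (debug builds only, i.e. /MDd or /MTd):

```cpp
#include <crtdbg.h>

int main()
{
    // Read the current debug-heap flags and add the leak check: when the
    // process exits, a report of unfreed allocations is written to the
    // debugger's output window.
    int flags = _CrtSetDbgFlag(_CRTDBG_REPORT_FLAG);
    _CrtSetDbgFlag(flags | _CRTDBG_LEAK_CHECK_DF);

    int* leaked = new int[16]; // deliberately never freed, to trigger a report
    (void)leaked;
    return 0;
}
```

Note that this only works for applications you build yourself; for a third-party process, a tool like Perfmon (mentioned above) is the usual route.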
A colleague has been trying to reduce the memory footprint of a 32-bit app running on 64-bit Vista and has noticed some weird behaviour in the reported size of the private working set.
He made some changes and recompiled the app. Then he ran the app and loaded a data file; Task Manager reported that the private working set was 98 MB. He then simply renamed the app to 'fred.exe'. Now, when he runs fred.exe and loads the same data file, the private working set is reported as 125 MB. Rename the file back to its original name, repeat, and the private working set is back to 98 MB.
Does anyone know what causes this?
This usually happens during full moons.
Did he remember to sacrifice a chicken to Ba'al-ze-Bool, the god of memory?
Vista is doing some smart stuff with application caching (SuperFetch). As I understand it, this is done by application name.
In your case, I'm assuming Vista detected that "originalName.EXE" never benefitted from a large working set, so trimming it to 98 MB helps other apps. The new "fred.exe" on the other hand still gets the default treatment.
The "working set" of an application is (roughly) how much of the application's virtual memory space is currently available to be used. This value fluctuates for many reasons depending on what else is going on in the machine, and does not really reflect the actual memory footprint of the process. It certainly doesn't depend on the name of the executable.
On the other hand, the "private bytes" value is the most useful for measuring the memory footprint of an application. This value reflects the total amount of memory the application has asked to allocate, and is not dependent on how much of the application's working set happens to be swapped in at the time.
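For completeness, a hedged sketch of reading both numbers for the current process with the documented GetProcessMemoryInfo API (link against psapi.lib):

```cpp
#include <windows.h>
#include <psapi.h>   // GetProcessMemoryInfo
#include <cstdio>

int main()
{
    PROCESS_MEMORY_COUNTERS_EX pmc = {};
    if (GetProcessMemoryInfo(GetCurrentProcess(),
                             reinterpret_cast<PROCESS_MEMORY_COUNTERS*>(&pmc),
                             sizeof(pmc)))
    {
        // WorkingSetSize fluctuates with overall memory pressure, while
        // PrivateUsage ("private bytes") tracks what the app has requested.
        std::printf("working set:   %Iu KB\n", pmc.WorkingSetSize / 1024);
        std::printf("private bytes: %Iu KB\n", pmc.PrivateUsage / 1024);
    }
    return 0;
}
```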