I have been trying to find a memory leak in a server. I leave it running and use the top command to watch the VIRT memory field for its PID while I connect clients to it. I notice that every time I run something that connects to the server, the field increases by about 5 MB, suggesting there is a leak somewhere. I want to set a breakpoint in gdb on any call to operator new, connect a client, and see whether any calls to new come from an unexpected backtrace that I haven't already checked for a corresponding delete. How does one set such a breakpoint?
I am on Linux, by the way.
I made a small debugging application specifically to modify a protected executable. When using it, I can hand off the debugging by suspending the process, calling DebugActiveProcessStop, attaching x64dbg to the process, and resuming.
The target I'm debugging performs a checksum on itself in 488 inlined places. With my debugging application I breakpoint all of these via a byte-pattern search; when a breakpoint hits, I intend to place a hardware breakpoint after the checksum instruction and then undo all of my memory modifications. When the hardware breakpoint hits, I read some pertinent registers, store them in a file, and patch that checksum to always pass without performing the checksum calculation the next time it is called.
The issue is that when I call ContinueDebugEvent from my debugger to get to the hardware breakpoint, the target always crashes after a few steps with an access violation (0xC0000005). While testing, I found that if I suspend the process instead of calling ContinueDebugEvent, hand the debugging over to x64dbg, and let it call ContinueDebugEvent, the target runs fine. I debugged x64dbg with x64dbg to see what it does differently, and the call it makes appears to have identical parameters to my call to ContinueDebugEvent: just the process ID, the thread ID, and 0x10002 (DBG_CONTINUE).
Does anyone know how or why the target would crash when both of our debuggers are making the same function call?
I have compiled my .cpp file via a makefile, and I run it via this makefile too.
This multi-threaded application also uses 99% of the CPU. I am using Ubuntu 16.04.1 LTS as my OS.
After three days of running, the application suddenly stopped, and I see this unexpected error message on the terminal.
Makefile:: recipe for target 'myMain' failed
make: *** myMain Killed
There is no other error message: the application failed without any exception text. And I am fairly confident in the programs I write (as far as failing goes), although no one writes a foolproof application.
I have never seen a make: *** something Killed message before, either.
Unfortunately, this is a case which I cannot easily repeat again and again to see what is wrong.
I am wondering whether make or Ubuntu has any mechanism to kill an application that runs for a long time and consumes a huge amount of resources?
Update
Thanks to user Basile Starynkevitch, this is the result I received from dmesg:
[351059.556308] Out of memory: Kill process 2794 (main) score 882 or sacrifice child
[351059.556318] Killed process 2794 (main) total-vm:30432908kB, anon-rss:13530324kB, file-rss:0kB
It's most likely that your program was the victim of the Linux kernel's OOM Killer. See also this question and answers.
Out of memory: Kill process
Most likely you were running as a user whose environment was restricted by the resource limits listed by the ulimit -a command (either memory or the number of processes). Once a hard limit is hit, the process is killed by the Linux kernel.
If you've got enough memory, it's possible to raise these limits (e.g. ulimit -Sv for virtual memory); otherwise you need to add more RAM to your machine or some extra swap space.
For more details about this behaviour, see: Kernel - Out Of Memory Management.
When the machine is low on memory, old page frames are reclaimed, but even after reclaiming pages the kernel may find that it cannot free enough of them to satisfy a request, even when scanning at the highest priority. If it fails to free enough page frames, out_of_memory() is called to decide whether the system is out of memory and needs to kill a process.
My CodedUI test has a memory leak and I want to better identify the source of that leak. Do you know a way to monitor the QTAgent process that runs the CodedUI test?
Not sure if you need the agent to be running, but Microsoft advises the following to find memory leaks:
Launch Performance Monitor by typing perfmon into Start > Run. Click on Performance Monitor and then the green plus icon.
Add the following counters for your process here:
Process-->Private Bytes
Process-->Virtual Bytes
To save the log data, right-click Performance Monitor in the left panel and click New > Data Collector Set. Name it and save it somewhere, then in the last step check Start this data collector set now.
This will give you a log file for your process.
To read the data use both graphs:
The Private Bytes counter indicates the total amount of memory that a process has allocated, not including memory shared with other processes.
The Virtual Bytes counter indicates the current size of the virtual address space that the process is using.
After this try using UMDH to find the source of the problem.
UPDATE 3/27/2013
It would appear that I am not leaking memory; Wt is just not keeping a persistent session every time F5 is hit or a new user connects. Basically, the old session gets deleted and a new one is made every time F5 is hit or a new user connects from another machine. I have read some parts of the documentation that mention making the session persistent, so that when a user reloads the page or a different user connects, they all see the same content. However, I have not been able to get it working yet. I think it is a function call or a setting in the wt_config.xml file. I will update if I make any other progress.
ORIGINAL POST
So my question is: how do I clean up memory in Wt so that every time the user presses F5 on the page, the memory use stays the same in the task manager?
OK, so I am working with Wt (pronounced "witty"), and I have noticed that my server application consumes more memory every time the user hits F5 to refresh the page, which looks to me like a memory leak, even though I followed the same pattern as Wt's most basic applications...
So I went back to the most basic Wt app I could find, the hello application; its code, and a working example, can be found here (http://www.webtoolkit.eu/wt/examples/) if you have not built the project yourself.
Once I ran the example on my machine and hit F5 on the page, the memory in my task manager increased.
My likely suspect is this function below.
WApplication *createApplication(const WEnvironment& env)
{
    /*
     * You could read information from the environment to decide whether
     * the user has permission to start a new application
     */
    return new HelloApplication(env);
}
It gets called every time F5 is hit and creates a new instance of HelloApplication, which inherits from WApplication.
Some things I have tried to remedy the situation that have not worked include: keeping two pointers to the HelloApplication so I can delete the old one every time a new one is allocated; calling the quit() function and then deleting the pointer; and just calling quit(). I have also looked around the Wt documentation site (http://www.webtoolkit.eu/wt/doc/reference/html/index.html) for more detailed information on the class and its methods, but have not come up with anything that worked.
I ask that anyone responding please be as detailed as possible in how to handle the cleanup of the memory. An example would be much appreciated, thanks in advance!
You must also be aware that, as of Wt 3.3.0, sessions are only cleaned up as long as requests are being received (see this reply from a Wt developer). To overcome this limitation, the developer suggests using something similar to the following code.
static bool terminating = false;

void callRepeatedly(boost::function<void()> function, int seconds)
{
    if (!terminating) {
        Wt::WServer::instance()->ioService().schedule(
            seconds * 1000, boost::bind(callRepeatedly, function, seconds));
    }
    function();
}

int main(int argc, char** argv)
{
    ...
    callRepeatedly(boost::bind(&Wt::WServer::expireSessions, &server), 60);
    Wt::WServer::waitForShutdown();
    terminating = true;
    server.stop();
    ...
}
The manual of WApplication says that you create the application when the createApplication callback is called, and that Wt deletes it when quit() is called or when the session times out. The default session timeout is 10 minutes (set in wt_config.xml), so that may be the reason your memory consumption grows initially when pressing F5.
http://www.webtoolkit.eu/wt/doc/reference/html/classWt_1_1WApplication.html#details
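For reference, the session timeout can be shortened in wt_config.xml so abandoned sessions are reclaimed sooner. The snippet below is a sketch based on the sample configuration shipped with Wt; check the element names and nesting against the file bundled with your Wt version:

```xml
<!-- wt_config.xml: reclaim abandoned sessions (e.g. after F5)
     after 60 seconds instead of the default 600. -->
<server>
  <application-settings location="*">
    <session-timeout>60</session-timeout>
  </application-settings>
</server>
```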
Something different that explains what you see: memory consumption reported by the operating system is not a reliable way to determine whether an application leaks memory, since free() does not necessarily return memory to the OS. Use proper memory-checking tools, such as valgrind.
I'm trying to debug proftpd in order to better understand this exploit: http://www.phrack.org/issues.html?issue=67&id=7. The vulnerable code is in mod_sql.c. I have tried to set a breakpoint on the sql_prepare_where function (which is where the heap overflow happens) and then issue the USER ... and PASS ... commands, but the breakpoint is never triggered.
To find out why, I set breakpoints on hundreds of lines of mod_sql.c and then launched the program (with full debugging options). Some breakpoints are triggered (sql_setuserinfo, set_sqlauthenticate, get_auth_entry, ...), but only at the very beginning of startup; once the program enters its main loop, nothing breakpoint-related happens (while proftpd's log shows that the USER and PASS commands are dispatched to mod_sql.c).
Would anyone know what I'm missing?
[It's possible I am missing something essential about GDB; I am learning as I go :)]
Server programs often use a "separate process for each connection" model, where after a successful accept, the parent forks a child to handle the current connection and goes back to accepting more connections.
I don't know for sure, but if proftpd uses that model, it would explain exactly the symptoms you've described.
You can ask GDB to debug the child instead of the parent, by using (gdb) set follow-fork-mode child.