How to find the CPU utilization of a single thread within a process - C++

I am looking for a tool to find the CPU utilization of a single thread within a process, in VC++.
It would be great if anyone could point me to such a tool.
It would be even better if you could also show how to do it programmatically.
Thank you in advance.

Perhaps using GetThreadTimes would help?
To elaborate: if the thread belongs to another process, it would be something (not tested) along the lines of:
// Returns true if the thread times could be queried and the results are usable,
// false otherwise. Error handling is minimal; consider throwing detailed
// exceptions instead of returning a simple boolean.
bool get_remote_thread_times(DWORD thread_id, FILETIME & kernel_time, FILETIME & user_time)
{
    FILETIME creation_time = { 0 };
    FILETIME exit_time = { 0 };

    // OpenThread returns NULL (not INVALID_HANDLE_VALUE) on failure.
    HANDLE thread_handle = OpenThread(THREAD_QUERY_INFORMATION, FALSE, thread_id);
    if (thread_handle == NULL) return false;

    bool success = GetThreadTimes(thread_handle, &creation_time, &exit_time, &kernel_time, &user_time) != 0;
    CloseHandle(thread_handle);
    return success;
}
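For reference, here is a minimal, untested sketch showing how the FILETIMEs returned above could be turned into a utilization figure by sampling twice over an interval; the helper name and the sampling interval are illustrative, not part of any existing API:

#include <windows.h>

// Convert a FILETIME (100-nanosecond units) into a plain 64-bit tick count.
static ULONGLONG to_ticks(const FILETIME & ft)
{
    ULARGE_INTEGER v;
    v.LowPart = ft.dwLowDateTime;
    v.HighPart = ft.dwHighDateTime;
    return v.QuadPart;
}

// Sample the thread's kernel + user time twice and return the share of one
// CPU (in percent) it consumed during the interval. Uses the
// get_remote_thread_times() helper from the snippet above; returns a
// negative value on failure.
double sample_thread_cpu_percent(DWORD thread_id, DWORD interval_ms)
{
    FILETIME k1, u1, k2, u2;
    if (!get_remote_thread_times(thread_id, k1, u1)) return -1.0;
    Sleep(interval_ms);
    if (!get_remote_thread_times(thread_id, k2, u2)) return -1.0;

    ULONGLONG used = (to_ticks(k2) + to_ticks(u2)) - (to_ticks(k1) + to_ticks(u1));
    ULONGLONG interval_ticks = static_cast<ULONGLONG>(interval_ms) * 10000; // 1 ms = 10,000 * 100 ns
    return 100.0 * static_cast<double>(used) / static_cast<double>(interval_ticks);
}

Note that this is relative to a single core; divide by the number of processors if you want a system-wide percentage.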

Try using Process Explorer (a tool); it's pretty useful.
http://download.cnet.com/Process-Explorer/3000-2094_4-10223605.html

I'm sure you're asking about Windows here, but for completeness' sake, I'll describe one way this can be done on Unix systems.
The /proc file system contains information about all of the running processes on your machine. In this directory you'll find a subdirectory for every process on the system (named by pid); inside each of these directories is a file called stat. Look at 'man proc' and search for the "stat" entry. This file contains a bunch of information, and a couple of its fields can be used to determine how much user and kernel mode time the process has consumed.
With this knowledge in hand, look for a subdirectory of the process called "task". In there you'll find an entry for each thread of the process, and if you cd into those, you'll find that each one has its own stat file with per-thread times. A minimal parsing sketch follows below.
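To make that concrete, here is a minimal, untested sketch that parses a thread's per-thread stat file on a Linux-style /proc; the field positions (utime = field 14, stime = field 15, both in clock ticks) follow proc(5), and the function name is just a placeholder:

#include <fstream>
#include <sstream>
#include <string>
#include <unistd.h>

// Read the user/kernel CPU time of one thread (tid) of a process (pid)
// from /proc/<pid>/task/<tid>/stat. Times are in clock ticks; divide by
// sysconf(_SC_CLK_TCK) to get seconds.
bool read_thread_times(pid_t pid, pid_t tid, long & utime_ticks, long & stime_ticks)
{
    std::ostringstream path;
    path << "/proc/" << pid << "/task/" << tid << "/stat";
    std::ifstream stat_file(path.str().c_str());
    if (!stat_file) return false;

    std::string line;
    std::getline(stat_file, line);

    // The second field (comm) can contain spaces and parentheses, so skip
    // everything up to the last ')' before splitting the remaining fields.
    std::string::size_type pos = line.rfind(')');
    if (pos == std::string::npos) return false;
    std::istringstream rest(line.substr(pos + 2));

    std::string skip;
    for (int field = 3; field <= 13; ++field) rest >> skip; // fields 3..13
    rest >> utime_ticks >> stime_ticks;                     // fields 14 and 15
    return !rest.fail();
}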

Related

Can a process be limited on how much physical memory it uses?

I'm working on an application, which has the tendency to use excessive amounts of memory, so I'd like to reduce this.
I know this is possible for a Java program, by adding a maximum heap size parameter when starting the Java program (e.g. java.exe ... -Xmx4g), but here I'm dealing with an executable on a Windows 10 system, so this is not applicable.
The title of this post refers to this URL, which mentions a way to do this, but which also states:
Maximum Working Set. Indicates the maximum amount of working set assigned to the process. However, this number is ignored by Windows unless a hard limit has been configured for the process by a resource management application.
Meanwhile I can confirm that the following lines of code indeed don't have any impact on the memory usage of my program:
HANDLE h_jobObject = CreateJobObject(NULL, L"Jobobject");
if (!AssignProcessToJobObject(h_jobObject, OpenProcess(PROCESS_ALL_ACCESS, FALSE, GetCurrentProcessId())))
{
    throw "COULD NOT ASSIGN SELF TO JOB OBJECT!:";
}
JOBOBJECT_EXTENDED_LIMIT_INFORMATION tagJobBase = { 0 };
tagJobBase.BasicLimitInformation.MaximumWorkingSetSize = 1; // far too small, just to see what happens
BOOL bSuc = SetInformationJobObject(h_jobObject, JobObjectExtendedLimitInformation, (LPVOID)&tagJobBase, sizeof(tagJobBase));
bSuc ends up true here; is there anything else I should expect?
On top of this, the mentioned tools (resource management applications, like Hyper-V) don't seem to work on my Windows 10 system.
Besides that, there seems to be another post about this subject, "Is there any way to force the WorkingSet of a process to be 1GB in C++?", but the results there seem to be negative too.
For a good understanding: I'm working in C++, so the solutions proposed in that URL are not applicable.
So now I'm stuck with the simple question: is there a way, implementable in C++, to limit the memory usage of the current process, running on Windows 10?
Does anybody have an idea?
Thanks in advance

What are the possible causes of "BUG: scheduling while atomic?"

There is another process continuously creating files that need processing by this code.
This code constantly scans the file-system for new files that need processing by comparing the contents of the file-system against a sqlite database that contains the processing results - one record for each file. This process is running at nice -n 19 so as not to interfere with the creation of new files by the other process.
It all works perfectly for a large number (>1k) of files, but then blows up with BUG: scheduling while atomic.
According to this
"Scheduling while atomic" indicates that you've tried to sleep
somewhere that you shouldn't
But the only sleep in the code is like this
void doFiles(void) {
for (...) { // for each file in the file-system
... // check database - do processing if needed
}
sleep(1);
}
int main(int argc, char *argv[], char *envp[]) {
while (true) doFiles();
return -1;
}
The code will hit this sleep after it has checked every file in the file-system against the database. The process needs to be repeated since new files will be added from time to time. There is no multi-threading in this code. Are there other possible causes for "BUG: scheduling while atomic" besides a misplaced sleep?
Edit: additional error output:
note: mirlin[1083] exited with preempt_count 1
BUG: scheduling while atomic: mirlin/1083/0x40000002
Modules linked in: g_cdc_ms musb_hdrc nop_usb_xceiv irqk edmak dm365mmap cmemk
Backtrace:
[<c002a5a0>] (dump_backtrace+0x0/0x110) from [<c028e56c>] (dump_stack+0x18/0x1c)
r6:c1099460 r5:c04ea000 r4:00000000 r3:20000013
[<c028e554>] (dump_stack+0x0/0x1c) from [<c00337b8>] (__schedule_bug+0x58/0x64)
[<c0033760>] (__schedule_bug+0x0/0x64) from [<c028e864>] (schedule+0x84/0x378)
r4:c10992c0 r3:00000000
[<c028e7e0>] (schedule+0x0/0x378) from [<c0033a80>] (__cond_resched+0x28/0x38)
[<c0033a58>] (__cond_resched+0x0/0x38) from [<c028ec6c>] (_cond_resched+0x34/0x44)
r4:00013000 r3:00000001
[<c028ec38>] (_cond_resched+0x0/0x44) from [<c0082f64>] (unmap_vmas+0x570/0x620)
[<c00829f4>] (unmap_vmas+0x0/0x620) from [<c0085c10>] (exit_mmap+0xc0/0x1ec)
[<c0085b50>] (exit_mmap+0x0/0x1ec) from [<c0037610>] (mmput+0x40/0xfc)
r9:00000001 r8:80000005 r6:c04ea000 r5:00000000 r4:c0427300
[<c00375d0>] (mmput+0x0/0xfc) from [<c003b5e4>] (exit_mm+0x150/0x158)
r5:c10992c0 r4:c0427300
[<c003b494>] (exit_mm+0x0/0x158) from [<c003cd44>] (do_exit+0x198/0x67c)
r7:c03120d1 r6:c10992c0 r5:0000000b r4:c10992c0
...
As others have said, you can sleep() anytime you want to in user code.
Looks like a problem with a driver on your platform. The driver may not actually call sleep() or schedule(), but often it will make a call to a kernel function which will, in turn, call one of these.
This also looks like it is using memory-mapped file I/O on an embedded TI ARM processor.
This error was caused by a bad build.
A clean build by itself did not help.
A fresh checkout and build was required to resolve this issue.

How to judge another program's result via C++?

I've got a series of C++ source files and I want to write another program to judge whether they run correctly (feed them input and compare their output with the standard/expected output). So how do I:
call/spawn another program, and give a file to be its standard input
limit the time and memory usage of the child process (maybe the setrlimit thing? are there any examples?)
do not let the process read/write any files
use a file to be its standard output
compare the output with the standard output.
I think the 2nd and 3rd are the core parts of this problem. Is there any way to do this?
PS: the system is Linux.
To do this right, you probably want to spawn the child program with fork, not system.
This allows you to do a few things. First of all, you can set up some pipes to the parent process so the parent can supply the input to the child, and capture the output from the child to compare to the expected result.
Second, it will let you call seteuid (or one of its close relatives like setreuid) to set the child process to run under a (very) limited user account, to prevent it from writing to files. In the child (where fork returns zero), before calling exec, you'll also want to call setrlimit to limit the child's CPU usage.
Just to be clear: rather than directing the child's output to a file, then comparing that to the expected output, I'd capture the child's output directly via a pipe to the parent. From there the parent can write the data to a file if desired, but can also compare the output directly to what's expected, without going through a file.
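To sketch that approach (untested, with the program path, input file and CPU limit as placeholders): fork, set up the redirections and limits in the child, exec the program, and read its output through the pipe in the parent. Error handling is kept minimal.

#include <fcntl.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <string>

// Run `path` with `input_path` as its standard input and a CPU limit of
// `cpu_seconds`; capture everything it writes to standard output.
std::string run_limited(const char * path, const char * input_path, int cpu_seconds)
{
    int out_pipe[2];
    if (pipe(out_pipe) != 0) return "";

    pid_t child = fork();
    if (child == 0) {
        // Child: stdin comes from the input file, stdout goes into the pipe.
        int in_fd = open(input_path, O_RDONLY);
        dup2(in_fd, STDIN_FILENO);
        dup2(out_pipe[1], STDOUT_FILENO);
        close(in_fd);
        close(out_pipe[0]);
        close(out_pipe[1]);

        // Limit CPU time; RLIMIT_AS / RLIMIT_DATA can be added for memory.
        struct rlimit cpu_limit = { (rlim_t)cpu_seconds, (rlim_t)cpu_seconds };
        setrlimit(RLIMIT_CPU, &cpu_limit);

        // A seteuid()/setuid() call to a restricted account would go here.
        execl(path, path, (char *)NULL);
        _exit(127); // exec failed
    }

    // Parent: read the child's output and wait for it to finish.
    close(out_pipe[1]);
    std::string output;
    char buf[4096];
    ssize_t n;
    while ((n = read(out_pipe[0], buf, sizeof(buf))) > 0)
        output.append(buf, n);
    close(out_pipe[0]);

    int status = 0;
    waitpid(child, &status, 0); // exit status / signals can be inspected here
    return output;
}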
std::string command = "/bin/local/app < my_input.txt > my_output_file.txt 2> my_error_file.txt";
int rv = std::system( command.c_str() );
1) The system function from the C standard library (<cstdlib>) allows you to execute a program (basically as if invoked from a shell). Note that this approach is inherently insecure, so only use it in a trusted environment.
2) You will need to use threads to be able to achieve this. There are a number of thread libraries available for C++, but I cannot give you a recommendation.
[After edit in OP's post]
3) This one is harder. You either have to write a wrapper that monitors read/write access to files or do some Linux/Unix privilege magic to prevent it from accessing files.
4) You can redirect the output of a program (that it thinks goes to the standard output) by adding > outFile.txt after the way you would normally invoke the program (see 1)) -- e.g. otherapp > out.txt
5) You could run diff on the saved file (from 4)) against the "golden standard"/expected output captured in another file. Or use some other method that better fits your needs (for example, you don't care about certain formatting as long as the "content" is there). -- This part is really dependent on your needs; diff does a basic comparison job well. A small sketch of checking diff's result programmatically follows below.
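If you do go through files, one untested shortcut (the file names are placeholders) is to let diff itself do the comparison and just check its exit status, since diff exits with 0 when the files are identical:

#include <cstdlib>
#include <string>

// Returns true if the two files have identical contents, judged by diff's
// exit status (0 means "no differences"). On POSIX, std::system() returns
// the shell's wait status, so 0 here means diff exited with 0.
bool outputs_match(const std::string & expected_path, const std::string & actual_path)
{
    std::string cmd = "diff -q " + expected_path + " " + actual_path;
    return std::system(cmd.c_str()) == 0;
}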

How to detect file leak and the corresponding code in Solaris?

How can I detect a file leak and the corresponding stack on Solaris? I see this information is reported well by valgrind on Linux. Please let me know if there are any tools on Solaris as well.
On Linux you can use strace to log all file open and close calls. Then you can analyse the log for resource leaks: the number of open calls should match the number of close calls. If this is not true, then you have a leak. On Solaris there is a similar tool, DTrace.
You can, in Solaris, look at the currently open file descriptors of a process by simply using the pfiles command. If you want to track files being opened/closed, truss (the Solaris equivalent of strace) comes to mind, with a filter for file-related syscalls (truss -t open,close, but there are others that create file descriptors).
If you find that the pfiles output grows, first identify whether what you're leaking are ordinary files or things like sockets / pipes. If it's leaking ordinary files, then a dtrace script can be used; the following is a base for your own experiments. I currently don't have a Solaris system at hand to try it out and refine it, see below.
#!/usr/bin/dtrace -s
syscall::open:entry { self->t = ustack(); }
syscall::open:return /arg0 >= 0/ { trackedfds[arg0] = self->t; }
syscall::open:return { self->t = 0; }
syscall::close:entry { self->t = arg0; }
syscall::close:return /arg0 >= 0/ { trackedfds[self->t] = 0; }
syscall::close:return { self->t = 0; }
END { printa(trackedfds); }
This builds an associative array, indexed by file descriptor number, whose contents are the user-side stack trace at the time of the open() system call. On a successful close, the entry for the given file descriptor number is discarded, and when the program exits (or the script is stopped) the remaining contents of that associative array are printed; if anything is left, that's a candidate for a leak.
Note that the END {} probe might not be the correct place for this; proc:::exit or something similar may be required. It depends on when exactly this triggers, before or after the cleanup done at program teardown (exiting / killing a program closes all its file descriptors, which would erase the trackedfds[] array). That's why I said above that this is a starting point; I can't check without a Solaris system.

Win32 C/C++ checking if two instances of the same program use the same arguments

I have an application and I want to be able to check if (for instance) two instances of it used the same arguments on execution. To make it clearer:
myapp 1 2
myapp 1 3
This isn't a Singleton design pattern problem, as I can have more than one instance running. I thought about checking the running processes, but it seems that I can only get the process name, and that doesn't help me.
Writing a file on startup and then having other instances check if that file exists isn't viable due to abnormal program termination which would leave me hanging.
In Linux I solved this by checking /proc/pid/cmdline and parsing the information there.
Does anyone have any idea if I can do something similar on windows?
Cheers
You can do this via WMI's Win32_Process class.
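For completeness, here is a rough, untested sketch of querying Win32_Process from C++; it follows the usual IWbemLocator / IWbemServices pattern, error handling is stripped, and "myapp.exe" is a placeholder process name:

#include <comdef.h>
#include <Wbemidl.h>
#include <iostream>
#pragma comment(lib, "wbemuuid.lib")

// Print the command line of every running "myapp.exe" process via WMI.
int main()
{
    CoInitializeEx(0, COINIT_MULTITHREADED);
    CoInitializeSecurity(NULL, -1, NULL, NULL, RPC_C_AUTHN_LEVEL_DEFAULT,
                         RPC_C_IMP_LEVEL_IMPERSONATE, NULL, EOAC_NONE, NULL);

    IWbemLocator *locator = NULL;
    CoCreateInstance(CLSID_WbemLocator, 0, CLSCTX_INPROC_SERVER,
                     IID_IWbemLocator, (LPVOID *)&locator);

    IWbemServices *services = NULL;
    locator->ConnectServer(_bstr_t(L"ROOT\\CIMV2"), NULL, NULL, 0, 0, 0, 0, &services);
    CoSetProxyBlanket(services, RPC_C_AUTHN_WINNT, RPC_C_AUTHZ_NONE, NULL,
                      RPC_C_AUTHN_LEVEL_CALL, RPC_C_IMP_LEVEL_IMPERSONATE, NULL, EOAC_NONE);

    IEnumWbemClassObject *results = NULL;
    services->ExecQuery(_bstr_t(L"WQL"),
                        _bstr_t(L"SELECT CommandLine FROM Win32_Process WHERE Name = 'myapp.exe'"),
                        WBEM_FLAG_FORWARD_ONLY | WBEM_FLAG_RETURN_IMMEDIATELY, NULL, &results);

    IWbemClassObject *obj = NULL;
    ULONG returned = 0;
    while (results->Next(WBEM_INFINITE, 1, &obj, &returned) == S_OK && returned != 0) {
        VARIANT cmdline;
        obj->Get(L"CommandLine", 0, &cmdline, NULL, NULL);
        if (cmdline.vt == VT_BSTR)
            std::wcout << cmdline.bstrVal << std::endl;
        VariantClear(&cmdline);
        obj->Release();
    }

    results->Release();
    services->Release();
    locator->Release();
    CoUninitialize();
    return 0;
}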
You want wmic.exe. Try something like:
wmic.exe process list | findstr myapp.exe
Then sort it / parse it / whatever you need to do.
wmic is really a great tool to have.
I ended up using this script instead of filling up my code with WMI calls:
wmic process where "name='cmd.exe'" get CommandLine > list.txt
Works great!
Cheers and thank you, Seth and Reed.
After some thinking I decided to do things a bit more simply...
Implementing a named mutex and checking for its existence does the trick. As I needed to check whether instances started with the same parameters, and not just whether the same application was started, I simply had to decide on the mutex name at runtime!
so...
char cmdstr[64];
HANDLE m_hMutex;
DWORD m_dwLastError;
bool found_other = false;

// Build the mutex name from the arguments so that only instances
// started with the same parameters collide.
sprintf(cmdstr, "myapp_%i_%i", arg1, arg2);
m_hMutex = CreateMutexA(NULL, FALSE, cmdstr);   // ANSI variant, since cmdstr is a char buffer
m_dwLastError = GetLastError();
if (ERROR_ALREADY_EXISTS == m_dwLastError)
{
    found_other = true;
}
And that's it! No parsing, no WMI, no Windows SDK...
Cheers to you all!