CPU usage gradually increases in .NET Core web service

I have a .NET Core web service whose CPU usage seems to increase slowly over time:
on the first day it won't go past 10%, on the second day it can go up to 20%, and so on.
Using the top command in Linux, all of my web services occasionally show up (probably when a request is made) and then disappear.
This specific process, after running for a while, just stays there, constantly consuming CPU even when no request has been made.
The API still works fine; it seems like some threads just keep hanging and consuming CPU. Last time I checked I had 5 threads that consumed 3-4% CPU each and didn't die for some reason.
My guess is that in some specific scenario a thread just stays alive and keeps consuming CPU.
The app runs on an Ubuntu machine. My first step was to create a dump file with ProcDump so I could analyze those threads and maybe find where they are hanging.
ProcDump generates a huge 21 GB file, and trying to analyze it with lldb throws an out-of-memory exception. I even tried transferring it to a Windows machine to debug with WinDbg, but no luck there either, as it couldn't open the file.
Since there is no specific exception or anything, I can't really share any piece of code, as I have no idea where the issue is... I'm just hoping for some suggestions that might help me get to a solution, or at least understand where the problem is.
Thanks a lot for reading, cheers.

You could try using something like JetBrains' dotMemory; they also have a fairly high-level but helpful guide: https://www.jetbrains.com/help/dotmemory/How_to_Find_a_Memory_Leak.html. It's also worth checking your startup file and double-checking that the services you've registered are used in the correct way, i.e. not added as scoped when they should be transient, or even singletons, etc.
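For what it's worth, here is a rough sketch of the three lifetimes in ConfigureServices. The service names (ICacheService, IOrderRepository, IEmailSender) are made up purely for illustration:

    using Microsoft.Extensions.DependencyInjection;

    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            // Singleton: one instance shared for the whole application lifetime.
            services.AddSingleton<ICacheService, CacheService>();

            // Scoped: one instance per HTTP request.
            services.AddScoped<IOrderRepository, OrderRepository>();

            // Transient: a new instance every time the service is resolved.
            services.AddTransient<IEmailSender, EmailSender>();
        }
    }

Registering something with a longer lifetime than intended (or caching it in a static field) is a common way to end up with state and background work that quietly accumulates.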

So I've been at it for a while.
Eventually I found out that my problem was with HttpClient.
Probably some bad mix of a static class and creating new instances of HttpClient was causing the issue I explained above.
I solved it by using HttpClientFactory, as explained here:
https://learn.microsoft.com/en-us/aspnet/core/fundamentals/http-requests?view=aspnetcore-2.1
Lesson learned :)
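For anyone landing here later, a minimal sketch of what that change can look like (the client name "backend" and the base address are placeholders; the real guidance is in the link above):

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;
    using Microsoft.Extensions.DependencyInjection;

    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            // Register a named client once; the factory pools and recycles the
            // underlying handlers instead of a static class new-ing up HttpClient.
            services.AddHttpClient("backend", client =>
            {
                client.BaseAddress = new Uri("https://example.com/");
            });
        }
    }

    public class BackendCaller
    {
        private readonly IHttpClientFactory _clientFactory;

        public BackendCaller(IHttpClientFactory clientFactory)
        {
            _clientFactory = clientFactory;
        }

        public Task<string> GetAsync(string path)
        {
            // Cheap to call per request; the expensive message handler is reused.
            HttpClient client = _clientFactory.CreateClient("backend");
            return client.GetStringAsync(path);
        }
    }

The HttpClient instances themselves are cheap to create this way, while the pooled handlers underneath are reused and recycled by the factory.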

A little late, but ProcDump for Linux just added .NET Core 3 support, which generates much more manageably sized core dumps. It automatically detects whether the target process is .NET Core and does the right thing (i.e., no need to specify switches).

Related

Why does my program run faster on first launch than on next launches?

I have been working for 2.5 years on a personal flight-sim project in my leisure time, written in C++ and using OpenGL, on a Windows 7 PC.
I recently had to move to Windows 10. The hardware is exactly the same. I reinstalled Code::Blocks.
It turns out that on the first launch of my project after the system starts, performance is OK, similar to what I used to see with Windows 7. But the second, third, and all subsequent launches give me lower performance, with significantly less fluidity in the frame rate compared to the first run, detectable by eye. This never happened with Windows 7.
Every time I start my system, the first run is fast and the next ones are slower.
I had a look at the Task Manager while doing some runs. The first run is handled by one of the 4 cores of my CPU (Core i5-6500) at approximately 85%. For the next runs, the load is spread across the 4 cores. During those slower runs on 4 cores, I tried to modify the affinity and direct my program to only one core, without any significant improvement in performance. The selected core was working at full load, though.
My C++ code doesn't explicitly use any threading functions at this stage. From my modest programmer's point of view, there is only one main thread, run in main(). In the Task Manager, I can see that some 10 to 14 threads are alive when my program runs. I guess (wrongly?) that they are implicitly created by the use of joysticks, TrackIR, or other communication tasks with the GPU...
Could it come from memory not being correctly freed when my program stops? I thought Windows would free it properly, even if I forgot some 'delete' calls after using 'new'.
Has anyone encountered a similar situation? Any explanations come to mind?
Any suggestions to better understand these facts? Obviously, my ultimate goal is to have a consistent performance level regardless of the number of launches.
(Screenshots of the first and second runs as viewed in Task Manager were attached here but are not included.)
Well, I ran into problems when switching clients to Windows 10 at my work too. Here are a few I encountered, all because Windows 10 changed the scheduling of processes, creating a lot of issues:
1. Older Windows non-blocking thread-synchronization techniques no longer work.
A well-placed Sleep() sometimes helps. By the way, similar problems were encountered when switching from Windows 2000 to Windows XP.
2. Huge slowdowns and frequent freezes of a few seconds in older single-threaded apps.
Usually setting the affinity to a single core solves this. You can do this in Task Manager just to check, and if it helps you can do it in code too. Here is an example of how to do it with WinAPI: Cache size estimation on your system?
3. Messed-up driver timings causing zombie processes, or even total freezes and/or BSODs.
I deal with USB in my work, and it's sometimes a nightmare on Windows 10. On top of all this, Windows 10 tends to force the wrong drivers onto devices (graphics cards, custom USB systems, etc.).
4. Apps are automatically frozen/closed if they do not respond to their WndProc in time.
In Windows 10 the timeout is much, much smaller than in older versions. If this is your case, you can try running in compatibility mode for an older Windows (set in the icon's properties on the desktop), although that does not help with #1 and #2, or change the app's code to speed up the response. For example, in VCL you can call ProcessMessages from inside blocking code to remedy this... Or you can use threads for the heavy lifting... just be careful with rendering and with WinAPI, as calling some WinAPI functions (anything window/visual related) from outside the main thread causes havoc...
On top of all this, old IDEs (especially for MCUs) don't work properly anymore, and the new ones are usually much worse to work with (or unusable because of missing functionality that was present in older versions), so I have stayed faithful to Windows 7 for development purposes.
If none of the above helps, then try to log the time some of your tasks need... it might show you which part of the code is the problem. I usually do this using a timing graph like this:
Both the x and y axes are time, and each task has its own color and row in the graph. The graph scrolls in time (to the left in my case) and has a changeable time scale. The numbers show the actual and max (or sliding average) values...
This way I can see if some task is taking too much time or even overlapping its next execution; peaks are also nicely visible, and it all runs at runtime without any debug tools, which might change the behavior of execution.

How do I debug lower-level file access exceptions/crashes in unmanaged C++ code?

I'm currently working on trying to resolve a crash/exception in an unmanaged C++ application.
The application crashes with some predictability. The program basically processes a high volume of files combined with running a bunch of queries through the Access DB.
It's definitely occurring during a file access. The error message is:
"failed reading. Network name is no longer available."
It always seems to be crashing in the same lower-level file access code.
It's doing a lower-level library Seek(), then a Read(). The exception occurs during the read.
To further complicate things, we can only get the errors to occur when we're running a disk-balancing utility. The utility essentially examines file access history and moves the more frequently/recently used files to faster storage, while files that are used less frequently are moved to a slower retrieval area. I don't fully understand the architecture of this particular storage device, but essentially it has an area for "fast" retrieval and one for "archived/slower" retrieval.
The issues are more easily/predictably reproducible when the utility app is started and stopped several times. According to the disk manufacturer, we should be able to run the utility in the background without affecting the behaviour of the client's main application.
Any suggestions on how to proceed here? There are theories floating around that it's somehow related to latency on the storage device. Is there a way to prove or disprove that? We've written a small sample app that basically goes out and accesses/reads a whole mess of files on the drive. We've (so far) been unable to reproduce the issue, even running with SmartPools. My thought for pushing the latency theory is to have multiple apps reading volumes of files from disk while running the utility application.
The memory usage and CPU usage do not look out of line in Task Manager.
Thoughts? This is turning into a bit of a hairball.
Thanks,
JohnB
Grab your debug binaries.
Set up Application Verifier and add your application to its list.
Hopefully wait for a crash dump.
Put that through WinDbg.
Try the command: !avrf
See what you get...

"Failed to resume in time" crash log

I am trying to figure out a "Failed to resume in time" problem. On one of our testers' devices (an iPhone 4S with the latest OS) it happens very frequently, whereas on my own device it doesn't seem to happen at all.
Anyway, I got a few crash logs. I am unable to trace the root cause, though. I understand that the issue might be:
1. A process holding up the main thread for too long.
2. A memory issue.
I don't think memory is much of an issue, since it seems to happen when the user leaves the main menu and comes back. Nothing much is happening in the main menu, so it probably is a task that runs too long.
Here is an excerpt from the crash log:
Can somebody help me or guide me on how I can trace the cause of the issue? Is there any way to turn off the watchdog timer (probably not, huh?)? Also, what does the highlighted thread refer to?
I have already checked my applicationDidBecomeActive & applicationWillEnterForeground to make sure there is nothing going on there.
To my knowledge there are no synchronous calls being made at this point. Does Reachability use synchronous calls to check for internet access? How can I check for that?
I am not making any large data transfers upon resume.
I notice that Game Center automatically logs in, or checks for login, upon resuming the app. Is there any way to prevent this? Could this possibly cause a timeout issue?
I tried doing a time profile, but I am not able to understand how to use it for analysis. If you can provide a good resource for that, that would be amazing.
Thanks!!!
You're currently in "trying to find the issue" mode. You should switch to "trying to find out how much of an issue this really is" mode.
So go find another 4S (actually, as many as you can) to rule out that it's a device-specific issue. If it happens on every 4S, it should be easier to pinpoint. If not, have someone else look over it and discuss possible causes. The peer-programming approach often helps when you're stuck in a dead-end situation.
If the issue is only on that one device, you might want to check whether it's broken (or jailbroken) or simply needs a hard reboot (hold power and home for 10+ seconds).
If it only happens on some devices but not all, try to find what they have in common. This could be language/locale, dictation, or practically any kind of setting the user might have changed. If necessary, write a logger that logs as many settings as possible to your (web) server, so you can compare settings one by one and quickly discard those that aren't in sync.
If only very few devices are affected, you could also ignore the issue and hope that additional crash logs from users will reveal the key to it.
Finally, there's always the option to disable suspending on exit and instead terminate the app when the home button is pressed (as it was pre-iOS 4). Unless, of course, the app has to run in the background.

How to find why my application becomes unresponsive at inaccessible customer sites

My application is deployed at customer sites that I cannot access and that have no internet connection.
There are complaints that at several sites, once a week or so, the application becomes unresponsive, so the operators need to kill and restart it.
We have been unable to observe it at our own site.
Is there something I can do that may help me find the problem?
It is a VC2008 Win32 MFC application.
The application is quite complex and includes many threads, synchronization mechanisms, database access, HMI, communication channels...
Note: the customer can send us log files.
Note: the application does not crash, it just hangs. Since I don't know the nature of the problem, I have no way of knowing programmatically that something went wrong (or do I?).
I have had great success with ADPlus and WinDbg in the past. You may want to check them out. In particular, check out the hang mode in ADPlus.
I would start with some questions: is the CPU hogged during these unresponsive times? Is there a specific process that's hogging it? (You can use PerfMon to get the answers.) Depending on the answers, I would probably proceed by taking a dump of the process at that point (ProcDump by Sysinternals is great for this purpose) and investigating it offline.
In a similar situation on a non-Windows platform, we have the capability to gather system dumps. Get a thread dump of the entire system for off-site analysis; this makes it quite easy to find deadlocks. For slowdowns, rather than complete stops, a single dump is not enough. Then we need a sequence of dumps, and some good luck.
Another, rather messier, technique is to have enough tracing, and enough fine-grained control of tracing in the app. Then turn on some of the trace and hope to spot where the delays are happening.
My experience with finding bugs in installations on the other side of the planet shows three helpful techniques: logging, logging, and logging.
What do the log files your customers sent you say? If they aren't detailed enough, send them a version that logs more. Use binary approximation to home in on the error.
To know where the process is hung, it is best to start with the stack trace at that instant.
Now, since your program is installed remotely and you can't access it, you can write a monitoring program that periodically checks the stack of your program and logs it. This information, along with your logging mechanism, will make things easier to identify and debug.
Since I am not a Windows programmer, I don't know much about the availability of such tools on Windows; however, I think you need something similar to this: http://www.codeproject.com/KB/threads/StackWalker.aspx

How to read 3rd party application's variables from memory?

I'm trying to read variables from memory, variables that don't belong to my own program. For instance, let's say I have an Adobe Shockwave (.dcr) application running in the browser and I want to read different variables from it. How is that done? Do I need to hook the process? But it's running under a virtual machine, so I don't know how to do it.
This task is pretty much trivial in normal Win32 applications (as it is mainly just CBT hooking / subclassing), but as I mentioned before, I've got no idea how it's done with Flash/Shockwave.
I'm using C++ (VS9) as my development environment, in case you wish to know.
Any hints would be highly appreciated, so thank you in advance.
Best regards,
nhaa123
If you're trying to do it manually, just for one or two experiments, it's easy.
Try a tool like Cheat Engine, which is a free, quick and simple process peeker. Basically, it scans the process's memory space for given key values; you can then filter those initial search hits further. You can also change the values you find, live. The link above shows a quick example of using it to find a score or money value in a game and editing it live as the game runs.
Without debug binaries/DLLs of the apps, your only chance is asking some hackers.
Normally you can connect to a process with a debugger, but without the debugging symbols of the binaries you don't see any variable names, just memory addresses.
Furthermore, the Flash/Shockwave code runs inside a sandbox in the browser to prevent security holes from manipulated Flash code. So you don't have a real chance of getting access to the running Flash code / to the plugin executing the Flash code, unless you have a manipulated version of such a plugin.
So your task is quite hard to solve without resorting to less-than-legal methods. The next hard thing is the virtual machine; this could be solved by implementing your app as a client/server solution, where the "inspector"/watchdog runs as a server inside the virtual machine and the client requesting the variable status/content runs on your normal host. The communication could be done over a simple socket connection.
If you have the chance to write your own Flash/Shockwave plugin, you might be able to see the contents of variables.
Sorry, that I cannot help you any further.
ciao,
3DH