I have a C++ app that runs on FreeBSD. I have a problem, and I guess it's about memory.
Normally my app works fine for 3-4 days. I sometimes check the top command over SSH, and SIZE always increases by 4 MB. The increase is not regular (sometimes it grows 4 MB after 5 minutes, sometimes after 30 minutes, etc.).
My application is 32-bit (yes, it's an old project); if SIZE reaches 4 GB my app dumps core, crashes, and goes down.
I would like to learn: what exactly is included in the SIZE value?
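A rough sketch of one way to log the growth over time instead of watching top by hand (the PID and interval are placeholders, not from the real setup):

import subprocess, time, datetime

PID = 1234          # placeholder: the PID of the C++ app
INTERVAL = 60       # seconds between samples

while True:
    # FreeBSD ps: vsz is the virtual size (what top shows as SIZE), rss is the resident set, both in KiB
    out = subprocess.check_output(["ps", "-o", "vsz=", "-o", "rss=", "-p", str(PID)])
    vsz_kb, rss_kb = out.decode().split()
    print("%s SIZE=%s KiB RSS=%s KiB" % (datetime.datetime.now(), vsz_kb, rss_kb))
    time.sleep(INTERVAL)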
Image from SSH top command
One of my hosts runs macOS Catalina, and it constantly runs out of disk space...
I have scheduled tasks running there and every day it uploads files into /Users/labuser/myfolder and removes older files from that folder.
After digging through folders I found that /System/Volumes/Data/Users/labuser/myfolder takes 90% of occupied space on my host.
Is there a way to disable this feature on Catalina and stop it from growing /System/Volumes/Data/... ?
/Users/labuser/myfolder is equivalent to the folder with /System/Volumes/Data/ prepended. macOS 10.15 Catalina added firmlinks (some more description here), but actually from a practical perspective (to the user) these are one and the same.
Thus, your problem has nothing to do with a "feature" on Catalina; rather it has to do with the amount of data you're storing and backing up from /Users/labuser/myfolder.
Whether you use ncdu or another disk usage tool, that is what will solve your problem of finding out why you're consuming all of your disk space.
One other relevant point is that because these are "symlinked" (called firmlinks by Apple), some disk inventory apps don't know how to handle this and end up in a recursion scenario when trying to understand total disk usage. I've seen this behavior with ncdu also. That being said, if you run the disk inventory on a subfolder of /System/Volumes/Data/, e.g.:
cd /Users
ncdu
It should avoid these issues.
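If ncdu isn't an option, here is a minimal Python sketch of the same idea (the starting path is just an example): it tallies sizes per top-level folder, and because it starts under /Users rather than at /, it never descends into /System/Volumes/Data at all.

import os, sys

root = sys.argv[1] if len(sys.argv) > 1 else "/Users"    # example starting point

totals = {}
for dirpath, dirnames, filenames in os.walk(root):        # os.walk does not follow symlinks by default
    size = 0
    for name in filenames:
        try:
            size += os.lstat(os.path.join(dirpath, name)).st_size
        except OSError:
            pass                                           # skip files that vanish or are unreadable
    rel = os.path.relpath(dirpath, root)
    top = root if rel == "." else os.path.join(root, rel.split(os.sep)[0])
    totals[top] = totals.get(top, 0) + size

for path, size in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print("%10.1f MB  %s" % (size / 1024.0 / 1024.0, path))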
Python: 2.7
Ubuntu: 18.04
matplotlib: 2.2.2
I have a client GUI that gets information from a server and displays it. I see a memory leak and a change in CPU consumption over time. The picture below shows the change in CPU and memory utilization after restarting the client with the GUI (~25 seconds from the right, aligned with a spike in network traffic).
The CPU graph has a dip in utilization, showing that CPU usage is different before and after the restart of the program.
The Memory graph shows a large drop in the memory utilization and then slight increase due to initialization of the same program.
The Network graph has a spike because the client requests all data from the server for visualization.
I suspect it has something to do with matplotlib. I have 7 figures that I redraw every 3 seconds.
I have added an image of my GUI. The middle 4 graphs are the history charts. However, I am binning all data points into 300 bins, since I have ~300 pixels in that area. The binning is done in a separate thread. The data arrays (2 x 1,000,000 points: time and value) that store the information are created at their full size from the very beginning to ensure that I don't have any memory runaway problem when my datasets grow. I do not expect the datasets to grow beyond that, since the typical experiment runs at 0.1-0.01 Hz, which would take several million seconds to reach the end.
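Conceptually, the preallocation and binning are something like this rough sketch (array sizes and names are illustrative, not my exact code):

import numpy as np

N_MAX = 1000000                     # preallocated capacity, never grows
times = np.zeros(N_MAX)             # filled in as data arrives
values = np.zeros(N_MAX)
n_points = 0                        # how many slots are actually used so far

def bin_for_plot(n_bins=300):
    """Reduce the used part of the arrays to ~n_bins points for drawing."""
    used_t = times[:n_points]
    used_v = values[:n_points]
    if n_points <= n_bins:
        return used_t, used_v
    edges = np.linspace(used_t[0], used_t[-1], n_bins + 1)
    idx = np.clip(np.digitize(used_t, edges) - 1, 0, n_bins - 1)
    binned_t = 0.5 * (edges[:-1] + edges[1:])               # bin centers
    binned_v = np.array([used_v[idx == i].mean() if np.any(idx == i) else np.nan
                         for i in range(n_bins)])
    return binned_t, binned_v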
Question: If it is Matplotlib, what can I do? If it is not, what else could it be?
added Sept 6 2018:
I thought of adding another example. Here is the screenshot of CPU and memory usage after I closed the GUI. The code ran for ~ 3 days. Python 2.7, Ubuntu 18.04.1.
Thank you, everyone, for the helpful comments.
After some struggle, I have figured out a way to solve the problem. Unfortunately, I made several changes in my code, so I cannot say definitively what actually helped.
Here is what was done:
All charting is done in a separate thread. The image is saved in a buffer as a byte stream with io.BytesIO() and later passed to the GUI. This was important for me to solve another problem (the GUI freezes while charting with matplotlib).
Create a new figure (figure = Figure(figsize=(7,8), dpi=80)) each time you generate the plot. Previously I had been reusing the same figure (self.figure = Figure(figsize=(7,8), dpi=80)).
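A stripped-down sketch of what the charting thread does now (the exact figure setup and the GUI hand-off are simplified here):

import io
from matplotlib.figure import Figure
from matplotlib.backends.backend_agg import FigureCanvasAgg

def render_chart(x, y):
    # a brand-new Figure every call instead of reusing self.figure
    figure = Figure(figsize=(7, 8), dpi=80)
    canvas = FigureCanvasAgg(figure)
    ax = figure.add_subplot(111)
    ax.plot(x, y)
    buf = io.BytesIO()          # byte stream handed over to the GUI thread
    canvas.print_png(buf)
    return buf.getvalue()       # the Figure goes out of scope and can be garbage collected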
My first question on Stack Overflow.
I'm running CF10 Enterprise on Windows Server 2003 (AMD Opteron 2.30 GHz, 4 GB RAM). I'm using cfindex action="update" to index over 1k PDFs.
I'm getting JVM memory errors and the page is being killed when it runs as a scheduled task in the early hours of the morning.
This is the code in the page:
<cfindex collection="pdfs" action="update" type="path" extensions=".pdf" recurse="yes" urlpath="/site/files/" key="D:\Inetpub\wwwroot\site\files">
jvm.config contents:
java.home=s:\ColdFusion10\jre
application.home=s:\ColdFusion10\cfusion
java.args=-server -Xms256m -Xmx1024m -XX:MaxPermSize=192m -XX:+UseParallelGC -Xbatch -Dcoldfusion.home={application.home} -Dcoldfusion.rootDir={application.home} -Dcoldfusion.libPath={application.home}/lib -Dorg.apache.coyote.USE_CUSTOM_STATUS_MSG_IN_HEADER=true -Dcoldfusion.jsafe.defaultalgo=FIPS186Random -Dcoldfusion.classPath={application.home}/lib/updates,{application.home}/lib,{application.home}/lib/axis2,{application.home}/gateway/lib/,{application.home}/wwwroot/WEB-INF/flex/jars,{application.home}/wwwroot/WEB-INF/cfform/jars
java.library.path={application.home}/lib,{application.home}/jintegra/bin,{application.home}/jintegra/bin/international,{application.home}/lib/oosdk/classes/win
java.class.path={application.home}/lib/oosdk/lib,{application.home}/lib/oosdk/classes
I've also tried going higher than 1024 MB for -Xmx; however, CF would not restart until I took it back to 1024 MB.
Could it be a rogue PDF, or do I need more RAM on the server?
Thanks in advance
I would say you probably need more RAM. CF10 64-bit with 4 gigs of RAM is pretty paltry. As an experiment, why don't you try indexing half of the files, then try the other half (or divide them up however is appropriate)? If in each case the process completes and memory use remains normal or recovers to normal, then there is your answer: you have hit a ceiling on RAM.
Meanwhile, more information would be helpful. Can you post your JVM settings (the contents of your jvm.config file)? If you are using the default heap size (512 MB) then you may have room (not much, but a little) to increase it. Keep in mind that it is the max heap size, and not the size of physical RAM, that constrains your CF engine, though obviously your heap must fit within said RAM.
Also keep in mind that Solr runs in its own JVM with its own settings. Check out this post for information on that, though I suspect it is your CF heap that is being overrun.
I have an application that mmaps a large number of files. 3000+ or so. It also uses about 75 worker threads. The application is written in a mix of Java and C++, with the Java server code calling out to C++ via JNI.
It frequently, though not predictably, runs out of file descriptors. I have upped the limits in /etc/security/limits.conf to:
* hard nofile 131072
/proc/sys/fs/file-max is 101752. The system is a Linode VPS running Ubuntu 8.04 LTS with kernel 2.6.35.4.
Opens fail from both the Java and C++ parts of the code after a certain point. Netstat doesn't show a large number of open sockets ("netstat -n | wc -l" is under 500). The number of open files in either lsof or /proc/{pid}/fd is about the expected 2000-5000.
This has had me grasping at straws for a few weeks (not constantly, but in flashes of fear and loathing every time I start getting notifications of things going boom).
There are a couple other loose threads that have me wondering if they offer any insight:
Since the process has about 75 threads, if the mmapped files were somehow taking up one file descriptor per thread, then the numbers would add up. That said, doing a recursive count on the things in /proc/{pid}/task/*/fd currently lists 215575 fds (a rough sketch of the kind of counting I mean is below), so it would seem that it should already be hitting the limits and it's not, so that seems unlikely.
Apache + Passenger are also running on the same box, and come in second for the largest number of file descriptors, but even with children none of those processes weigh in at over 10k descriptors.
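For reference, the counting mentioned above is roughly this kind of thing (a quick sketch, not the exact commands I used):

import os, glob, resource

pid = os.getpid()                                   # substitute the server's PID here

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
per_process = len(os.listdir("/proc/%d/fd" % pid))
# threads share one fd table, so counting under task/*/fd multiplies the
# per-process number by roughly the thread count
per_task = len(glob.glob("/proc/%d/task/*/fd/*" % pid))

print("nofile soft/hard limit: %d / %d" % (soft, hard))
print("fds in /proc/<pid>/fd: %d" % per_process)
print("fds under /proc/<pid>/task/*/fd: %d" % per_task)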
I'm unsure where to go from there. Obviously something's making the app hit its limits, but I'm completely blank for what to check next. Any thoughts?
So, from all I can tell, this appears to have been an issue specific to Ubuntu 8.04. After upgrading to 10.04, one month has gone by without a single instance of this problem. The configuration didn't change, so I'm led to believe that this must have been a kernel bug.
Your setup uses a huge chunk of code that may be guilty of leaking too: the JVM. Maybe you can switch between the Sun and the open-source JVMs as a way to check whether that code is not by chance guilty. There are also different garbage collector strategies available for the JVM. Using a different one, or different heap sizes, will cause more or fewer garbage collections (which in Java includes the closing of descriptors).
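For example, switching collectors or resizing the heap on a Sun JVM of that era is a startup-flag change, something like (flags and sizes shown purely as an illustration, not a recommendation for your exact version):

java ... -XX:+UseConcMarkSweepGC -Xms512m -Xmx1024m ...   (or -XX:+UseSerialGC / -XX:+UseParallelGC)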
I know it's kind of far-fetched, but it seems like you've already followed all the other options ;)
For the last few years we've been randomly seeing this message in the output logs when running scheduled tasks in ColdFusion:
Recursion too deep; the stack overflowed.
The code inside the task that is being called can vary, but in this case it's VERY simple code that does nothing but reset a counter in the database and then send me an email to tell me it was successful. But I've seen it happen with all kinds of code, so I'm pretty sure it's not the code that's causing this problem.
It even has an empty application.cfm/cfc to block any other code from being called.
The only other time we see this is when we are restarting CF and we are attempting to view a page before the service has fully started.
The error rarely happens, but now we have some rather critical scheduled tasks that cause issues if they don't run. (Hence I'm posting here for help)
Memory usage is fine. The task that ran just before it reported over 80% free memory. Monitoring memory through the night doesn't show any out-of-the-ordinary spikes. The machine has 4 gigs of memory and nothing else running on it but the OS and CF. We recently tried to reinstall CF to resolve the problem, but it did not help. It happens on several of our other servers as well.
This is an internal server, so usage at 3am should be nonexistent. There are no other scheduled tasks being run at that time.
We've been seeing this on our CF7, CF8, and CF9 boxes (fully patched).
The current box in question info:
CF version: 9,0,1,274733
Edition: Enterprise
OS: Windows 2003 Server
Java Version: 1.6.0_17
Min JVM Heap: 1024
Max JVM Heap: 1024
Min Perm Size: 64m
Max Perm Size: 384m
Server memory: 4gb
Quad core machine that rarely sees more than 5% CPU usage
JVM settings:
-server -Dsun.io.useCanonCaches=false -XX:PermSize=64m -XX:MaxPermSize=384m -XX:+UseParallelGC -XX:+AggressiveHeap -Dcoldfusion.rootDir={application.home}/../
-Dcoldfusion.libPath={application.home}/../lib
-Doracle.jdbc.V8Compatible=true
Here is the incredibly complex code that failed to run last night, but has been running for years, and will most likely run tomorrow:
<cfquery datasource="common_app">
update import_counters
set current_count = 0
</cfquery>
<cfmail subject="Counters reset" to="my#email.com" from="my#email.com"></cfmail>
If I missed anything let me know. Thank you!
We had this issue for a while after our server was upgraded to ColdFusion 9. The fix seems to be in this technote from Adobe on jRun 4: http://kb2.adobe.com/cps/950/950218dc.html
You probably need to make some adjustments to permissions as noted in the technote.
Have you tried reducing the size of your heap from 1024 to, say, 800-something? You say there is over 80% of memory left available, so if possible I would look at reducing the max.
Is it a 32- or 64-bit OS? When assigning the heap space you have to take into consideration all the overhead of the JVM (stack, libraries, etc.) so that you don't go over the OS limit for the process.
What you could try is to set the Minimum JVM Heap Size to the same as your Maximum JVM Heap Size (MB) within your CF Administrator.
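In terms of the underlying JVM arguments, that corresponds to giving -Xms and -Xmx the same value, for example (size chosen only to match the 1024 MB heap in the question):

-Xms1024m -Xmx1024m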
Also update the JVM to the latest update (21), or at least update 20.
In the past I've always upgraded the JVM whenever something wacky started happening, as that usually solved the problem.