My humble concern is the following:
Setup:
Purchase Windows 10, and install it
Download and install Visual Studio 2017, Community, version 15.2
Configure it for C++
Create a new C++ project, Win32 app with basic settings (without ATL, non-console)
Build and debug run (see how an empty window appears)
Observe process memory (within VS2017 or other tool)
My observations:
The executable itself is small (150 kB). The app itself, when running, starts by taking 2 MB of memory. Without touching it, the memory consumption changes; sometimes it grows, sometimes it decreases (my maximum is now 3 MB after a few minutes to 30 minutes of observation). You can even minimize the window at start and just observe the memory consumption, either in Visual Studio or in a performance monitor. I cannot see anything in the I/O bytes, though I cannot be sure.
My questions are:
What is taking so much memory?
Why does the memory usage vary over time without any user interaction?
Thanks!
Is the memory usage of 3 MB of an empty app really OK for you?
Yes, that is ok for me.
If it is, could you explain to me why it is so?
Because I have 4 GB (or, in a 64-bit process, several terabytes) of virtual address space to spare.
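If you want to see what those numbers actually measure, here is a minimal sketch (assuming a plain MSVC project; not part of the original repro steps) that prints the process's working set and private bytes via PSAPI. The figure that wanders up and down on its own is usually the working set, which the OS trims and grows regardless of what your code does.

// Minimal sketch (assumed plain MSVC project, not the original repro):
// print the current process's memory counters via PSAPI.
#include <windows.h>
#include <psapi.h>
#include <cstdio>

#pragma comment(lib, "psapi.lib")

int main()
{
    PROCESS_MEMORY_COUNTERS_EX pmc = {};
    if (GetProcessMemoryInfo(GetCurrentProcess(),
                             reinterpret_cast<PROCESS_MEMORY_COUNTERS*>(&pmc),
                             sizeof(pmc)))
    {
        // Working set: physical RAM currently mapped into the process.
        // The OS trims and grows this on its own, which is why the number
        // in Task Manager / Visual Studio varies without user interaction.
        std::printf("Working set:   %zu KB\n",
                    static_cast<size_t>(pmc.WorkingSetSize) / 1024);
        // Private bytes: committed memory the process actually asked for.
        std::printf("Private bytes: %zu KB\n",
                    static_cast<size_t>(pmc.PrivateUsage) / 1024);
    }
    return 0;
}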
Related
I have an application written in VC++ (Windows Forms) that interacts with various hardware such as A/D cards, GPIB, D/A, etc. My customer ran the application on-site and found that it crashes after a few seconds. I asked him to monitor memory growth through Task Manager, and indeed the memory was growing, so it looks like a memory leak. Now I want to find where exactly in my code I am not correctly freeing/allocating memory, but I do not have access to the on-site PC. I have to do this on my own PC, which does not have the A/D and other hardware. Is there any software that can take my exe and point out the functions/code lines causing the problem, without actually executing my exe?
My exe would not run since I do not have those hardware.
I used SmartBear AQtime for this task. It is a profiler which can also profile heap allocations. In the report you get a list of objects which are still alive, together with the line where they were created.
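If you can stub out the hardware calls and run the build on your own PC, another option is the Visual C++ CRT debug heap. Below is a minimal sketch of that technique (it assumes a Debug build, and only native C/C++ allocations are reported this way):

// Minimal sketch of CRT debug-heap leak reporting (MSVC, Debug build).
// _CRTDBG_MAP_ALLOC records file and line for malloc/free; leaks made
// with operator new are still listed, just without the source line
// unless new is redefined to its debug placement form.
#define _CRTDBG_MAP_ALLOC
#include <stdlib.h>
#include <crtdbg.h>

int main()
{
    // Dump every block that is still allocated when the process exits.
    _CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF);

    char* leaked = static_cast<char*>(malloc(128)); // never freed, on purpose
    (void)leaked;
    return 0; // the leak report appears in the debugger Output window
}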
My first question on Stack Overflow.
I'm running CF10 Enterprise on Windows 2003 Server (AMD Opteron 2.30 GHz, 4 GB RAM). I'm using cfindex action="update" to index over 1,000 PDFs.
I'm getting JVM memory errors and the page is killed when it runs as a scheduled task in the early hours of the morning.
This is the code in the page :
<cfindex collection="pdfs" action="update" type="path" extensions=".pdf" recurse="yes" urlpath="/site/files/" key="D:\Inetpub\wwwroot\site\files">
JVM.config contents
java.home=s:\ColdFusion10\jre
application.home=s:\ColdFusion10\cfusion
java.args=-server -Xms256m -Xmx1024m -XX:MaxPermSize=192m -XX:+UseParallelGC -Xbatch -Dcoldfusion.home={application.home} -Dcoldfusion.rootDir={application.home} -Dcoldfusion.libPath={application.home}/lib -Dorg.apache.coyote.USE_CUSTOM_STATUS_MSG_IN_HEADER=true -Dcoldfusion.jsafe.defaultalgo=FIPS186Random -Dcoldfusion.classPath={application.home}/lib/updates,{application.home}/lib,{application.home}/lib/axis2,{application.home}/gateway/lib/,{application.home}/wwwroot/WEB-INF/flex/jars,{application.home}/wwwroot/WEB-INF/cfform/jars
java.library.path={application.home}/lib,{application.home}/jintegra/bin,{application.home}/jintegra/bin/international,{application.home}/lib/oosdk/classes/win
java.class.path={application.home}/lib/oosdk/lib,{application.home}/lib/oosdk/classes
I've also tried going higher than 1024 MB for -Xmx; however, CF would not restart until I took it back to 1024 MB.
Could it be a rogue PDF, or do I need more RAM on the server?
Thanks in advance
I would say you probably need more RAM. CF10 64-bit with 4 GB of RAM is pretty paltry. As an experiment, why don't you try indexing half of the files, then the other half (or divide them up however is appropriate)? If in each case the process completes and memory use remains normal or recovers to normal, then there is your answer: you have hit a ceiling on RAM.
Meanwhile, more information would be helpful. Can you post your JVM settings (the contents of your jvm.config file)? If you are using the default heap size (512 MB) then you may have room (not much, but a little) to increase it. Keep in mind that it is the max heap size, not the size of physical RAM, that constrains your CF engine - though obviously your heap must fit within said RAM.
Also keep in mind that Solr runs in its own JVM with its own settings. Check out this post for information on that - though I suspect it is your CF heap that is being overrun.
I'm developing a Windows game that needs a lot of small, different images, which I put in resources.qrc; in total they are 14 MB.
When I try to compile, the only errors are "out of memory allocating 134 MB" and "cc1plus.exe not found".
How can I handle this?
Windows 7 SP1 x86, 4 GB RAM
Qt 5.7.0
I had the same problem when I added a big file to the resources in Qt. I got the error:
cc1plus.exe:-1: error: out of memory allocating 1073745919 bytes
Solution:
Add CONFIG += resources_big into the *.pro file.
I took it here: cc1plus.exe: out of memory | 60MB encrypted resource file
Don't put them in the .qrc; keep them as individual files (or create a new .qrc file for each image) and just load them on application startup. Qt generates a qrc_XXXXX.cpp file in which it embeds the binary data of ALL the resources in the resource file XXXXX as one char array (yes, ONE array for your 14 MB of images, i.e. 14,680,064 bytes written as hex (0xXX) bytes into a single file... it will be big!). Quite possibly the poor compiler just chokes on it...
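As a rough sketch of the load-at-startup idea (the directory name and the loadImages() helper below are just assumptions for illustration), you can ship the images next to the executable and read them with QPixmap instead of compiling them into the binary:

// Hypothetical sketch: load every PNG from an "images" directory that
// ships next to the executable, instead of embedding them via resources.qrc.
#include <QApplication>
#include <QCoreApplication>
#include <QDir>
#include <QHash>
#include <QPixmap>

static QHash<QString, QPixmap> loadImages(const QString &dirPath)
{
    QHash<QString, QPixmap> images;
    QDir dir(dirPath);
    const QStringList files = dir.entryList(QStringList() << "*.png", QDir::Files);
    for (const QString &file : files)
        images.insert(file, QPixmap(dir.filePath(file))); // decoded here, at startup
    return images;
}

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    const QHash<QString, QPixmap> images =
        loadImages(QCoreApplication::applicationDirPath() + "/images");
    // ... hand the pixmaps to the game's widgets/scenes here ...
    Q_UNUSED(images);
    return app.exec();
}

The trade-off is that the images become loose files users can see and modify, but the compiler never has to swallow a 14 MB char array.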
Well, I had this problem too, but in my situation putting all the resources into the .exe was necessary.
After this error I bought additional RAM (the project is very important), bringing my machine from 6 GB to 12 GB.
But I was very surprised when the error didn't disappear :) After some googling I finally found the answer. The problem is a memory limit of the cc1plus.exe executable. So, in the case of Qt, this problem can be solved with these steps (for Windows 7, MinGW32 4.9.2; for other setups you probably just need to change the paths):
If your OS is 32-bit, then in cmd (as Admin) run bcdedit /set IncreaseUserVa 3072
Install masm32;
Open cmd (as administrator too);
Run cd C:\Qt\Tools\mingw492_32\libexec\gcc\i686-w64-mingw32\4.9.2
Run C:\masm32\bin\editbin.exe /LARGEADDRESSAWARE cc1plus.exe
That's all.
Don't forget the obvious either: The message might actually be true and you really don't have enough memory, or the memory can't be made available for the process that needs it.
I've got 16GB of RAM on my system which ought to be plenty for my small application. "It can't possibly be that." I thought... but my machine hadn't been restarted in weeks.
A system restart is all it took to fix this error for me.
I'm using Visual Studio 2010 to write and debug a small program. The problem is, whenever I start the application through Visual Studio 2010, the process of my application produces page faults in the range of 100,000 per second, which slows down the program by a factor of 10 or more. When I start the generated executable from the file system, no page faults are generated after start-up is complete. This happens with both the debug and the (all optimizations allowed) release build. No exceptions are thrown.
The compiled program is around 200 KiB and, when executed, holds around 10 MiB of data with over 4 GiB of memory available. Only the main thread and the thread of the logging framework are running. The data is loaded once at the start; after that only the results are stored in newly allocated memory and written to the log at the end.
There does not seem to be a lot of disk activity, and the Windows Resource Monitor indicates no hard faults, while Task Manager shows ever-increasing numbers. I know that some performance loss is to be expected when using an IDE, but this seems a little excessive. Any advice?
Edit:
Note: I was able to get the count down to about half by cutting down on (de-)allocating new memory.
It seems the debugger is at fault. If I do not attach it, the program behaves as expected. Although I'm still wondering why it would provoke such a high number of page faults that it slows down all builds considerably.
Page faults are normal. It's part of the process of allocating memory. This is nothing to worry about.
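As a small illustration of why that is (a sketch assuming Windows/MSVC, not related to the asker's project): committed memory is only backed by physical pages on first touch, and every first touch of a page is counted as a soft page fault even though nothing is read from disk.

// Sketch (assuming Windows/MSVC): commit a block, then touch each page.
// Every first touch of a 4 KiB page is one demand-zero ("soft") page
// fault; Task Manager counts these even though there is no disk I/O.
#include <windows.h>
#include <cstdio>

int main()
{
    const SIZE_T size = 64 * 1024 * 1024; // 64 MiB
    char* p = static_cast<char*>(
        VirtualAlloc(nullptr, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE));
    if (!p)
        return 1;

    for (SIZE_T offset = 0; offset < size; offset += 4096)
        p[offset] = 1; // first touch -> soft page fault, no disk access

    VirtualFree(p, 0, MEM_RELEASE);
    std::printf("touched %zu pages\n", static_cast<size_t>(size / 4096));
    return 0;
}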
For the last few years we've been randomly seeing this message in the output logs when running scheduled tasks in ColdFusion:
Recursion too deep; the stack overflowed.
The code inside the task that is being called can vary, but in this case it's VERY simple code that does nothing but reset a counter in the database and then send me an email to tell me it was successful. But I've seen it happen with all kinds of code, so I'm pretty sure it's not the code that's causing this problem.
It even has an empty Application.cfm/cfc to prevent any other code from being called.
The only other time we see this is when we are restarting CF and we are attempting to view a page before the service has fully started.
The error rarely happens, but now we have some rather critical scheduled tasks that cause issues if they don't run. (Hence I'm posting here for help)
Memory usage is fine. The task that ran just before it reported over 80% free memory. Monitoring memory through the night doesn't show any out-of-the-ordinary spikes. The machine has 4 gigs of memory and nothing else running on it but the OS and CF. We recently tried to reinstall CF to resolve the problem, but it did not help. It happens on several of our other servers as well.
This is an internal server, so usage at 3am should be nonexistent. There are no other scheduled tasks being run at that time.
We've been seeing this on our CF7, CF8, and CF9 boxes (fully patched).
The current box in question info:
CF version: 9,0,1,274733
Edition: Enterprise
OS: Windows 2003 Server
Java Version: 1.6.0_17
Min JVM Heap: 1024 MB
Max JVM Heap: 1024 MB
Min Perm Size: 64m
Max Perm Size: 384m
Server memory: 4 GB
Quad core machine that rarely sees more than 5% CPU usage
JVM settings:
-server -Dsun.io.useCanonCaches=false -XX:PermSize=64m -XX:MaxPermSize=384m -XX:+UseParallelGC -XX:+AggressiveHeap -Dcoldfusion.rootDir={application.home}/../
-Dcoldfusion.libPath={application.home}/../lib
-Doracle.jdbc.V8Compatible=true
Here is the incredibly complex code that failed to run last night, but has been running for years, and will most likely run tomorrow:
<cfquery datasource="common_app">
update import_counters
set current_count = 0
</cfquery>
<cfmail subject="Counters reset" to="my#email.com" from="my#email.com"></cfmail>
If I missed anything let me know. Thank you!
We had this issue for a while after our server was upgraded to ColdFusion 9. The fix seems to be in this technote from Adobe on jRun 4: http://kb2.adobe.com/cps/950/950218dc.html
You probably need to make some adjustments to permissions as noted in the technote.
Have you tried reducing the size of your heap from 1024 to, say, 800 or so? You say there is over 80% of memory left available, so if possible I would look at reducing the max.
Is it a 32- or 64-bit OS? When assigning the heap space you have to take into account all the overhead of the JVM (stack, libraries, etc.) so that you don't go over the OS limit for the process.
What you could try is to set the Minimum JVM Heap Size to the same value as your Maximum JVM Heap Size (MB) within your CF Administrator.
Also update the JVM to the latest update (update 21 of Java 1.6), or at least update 20.
In the past I've always upgraded the JVM whenever something wacky started happening, as that usually solved the problem.