I have been able to set the default huge page size to 1 GB using the GRUB command line in /etc/default/grub, however it seems that I can't set the number of hugepages greater than 12 no matter how I do it (either boot parameters or sysctl). It looks like the DirectMap1G value is exactly 13 times the size of my hugepagesize (DirectMap1G=13631488kB, and hugepagesize=1048576kB). Is there any way to increase the DirectMap1G value, if that is what is limiting the number of hugepages? Thanks.
DirectMap1G is a kernel-space measure: it reports how much of the kernel's direct mapping of physical memory is backed by 1 GB pages, not how many hugepages are available to applications.
DPDK is a userspace library, so instead you need to reserve hugepages for userspace usage, as described in the DPDK Getting Started Guide.
So the correct kernel boot options would be something like the following:
default_hugepagesz=1G hugepagesz=1G hugepages=4
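For completeness, here is a minimal sketch of the whole sequence on a Debian/Ubuntu-style GRUB setup (the /mnt/huge mount point is just an illustrative choice; DPDK can also use the default /dev/hugepages mount):
GRUB_CMDLINE_LINUX_DEFAULT="default_hugepagesz=1G hugepagesz=1G hugepages=4"   # in /etc/default/grub
update-grub && reboot                                # apply the new kernel command line
grep -i huge /proc/meminfo                           # HugePages_Total should now show 4
mkdir -p /mnt/huge
mount -t hugetlbfs -o pagesize=1G nodev /mnt/huge    # expose the reserved pages to userspace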
I'm making a custom operating system, and I'm running into some memory issues: recently, characters that the kernel prints have stopped appearing on screen.
I've attributed the characters not appearing on screen to there not being enough kernel memory, as this code runs from the kernel. I'm very new to asm, C++, and OS development as a whole, so there could be a lot of things I've done to cause this issue. One reason I believe this is a memory-related issue is that when I shortened my kernel.cpp file, the characters suddenly appeared (though if I add too much back, some disappear). Here is a link to my GitHub repository with my code: https://github.com/Subwaey/KaiOS
Your boot loader only loads 2 sectors (1024 bytes) of the kernel into memory.
You could increase this a little (temporarily) by changing the mov dh, 2 at line 15 of boot.asm to a larger value.
This has limits (e.g. limited to 255 sectors, and possibly limited by the number of sectors per track on the disk). To break that limit you have to split the load into multiple reads (e.g. if there are 18 sectors per track, then loading a 123 KiB kernel as 14 reads of not more than 18 sectors each).
The next limit is caused by real mode only having easy access to the first 640 KiB of RAM (and some of that is used for BIOS data, etc.). For this reason (and because most kernels grow to be much larger than 640 KiB) you want a loop that loads the next piece of the kernel's file into a buffer, switches to protected mode, copies that piece elsewhere (e.g. starting at 0x0100000, where there's likely to be a lot more contiguous RAM you can use), then switches back to real mode; repeating until the whole kernel is loaded and copied.
Of course, hard-coding the "number of sectors in the kernel's file" directly into the boot loader is fragile. A better approach would be to have some kind of header at/near the start of the kernel's file that contains the size of the kernel; the boot loader loads the header and uses it to determine how many sectors it needs to read.
Also, I suspect you don't actually have a separate file for the kernel - it's just a single file containing "boot loader + kernel". This is likely to be a major problem when you try to get the kernel to boot in different scenarios (from partitioned hard disks, where the boot loader has to care about which partition it was installed in; from CD-ROM or the network, where it's all completely different; etc.).
The docs for this option don't specify a default: https://uwsgi-docs.readthedocs.io/en/latest/Options.html#thread-stacksize.
I'm considering adding thread-stacksize=512 to my uWSGI config since it seems to resolve a segfault issue I've been having, but I want to know what the original setting was first.
Edit: Through trial and error, I ended up using 128 for the stack size. At 64, I was seeing my specific issue. I'm going to assume the default is 64 or less.
uWSGI gets it from the operating system:
1 MB default stack size on Windows
8 MB default stack size on Unix/Linux platforms
On Linux you can check this using the ulimit command:
ulimit -s
-> output: 8192 (in KiB)
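For reference, the override the asker settled on would look something like this in an ini-style uWSGI config (a sketch only; the value appears to be interpreted in KB, consistent with the 512/128/64 figures discussed above):
[uwsgi]
; thread stack size; if omitted, the OS default (ulimit -s, typically 8192 on Linux) applies
thread-stacksize = 128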
I started a ColdFusion 11 instance using CommandBox. I wanted to alter a setting in the CF Administrator under Server Settings => Settings, namely
Maximum number of POST request parameters.
I keep getting the error In memory file system limit cannot be more than JVM max heap size.
How can I quickly get rid of this error as it has nothing to do with the setting I want to modify?
I uncheck Enable In-Memory File System but this changes nothing.
I set Memory Limit for In-Memory Virtual File System to 1 and get the error message In-Memory File System limit per Application must be numeric, less than In-Memory Virtual File System memory limit, and greater than or equal to 1.
To set my parameter I eventually used cfconfig set postParametersLimit=1000.
I assume this is a bug in Adobe ColdFusion, because CommandBox doesn't set a max heap size by default! It's just whatever your OS wants to give it. Try setting an explicit max in CommandBox.
https://commandbox.ortusbooks.com/embedded-server/configuring-your-server/jvm-args#heapsize
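For example (a sketch only; 1024 MB is just an illustrative figure), you can pin the max heap from the OS shell with CommandBox, which stores jvm.heapSize in the server's server.json:
box server set jvm.heapSize=1024    # max heap in MB
box server restart                  # restart so the new JVM arg takes effect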
Also, FWIW, I'm not sure if the max post size setting works on a J2EE install of CF. It may rely on their hacked-up Tomcat version, so you may want to test.
My first question on Stack Overflow:
I'm running CF10 Enterprise on Windows Server 2003 (AMD Opteron 2.30 GHz) with 4 GB of RAM. I'm using cfindex action="update" to index over 1,000 PDFs.
I'm getting JVM memory errors and the page is being killed when it's run as a scheduled task in the early hours of the morning.
This is the code in the page:
<cfindex collection="pdfs" action="update" type="path" extensions=".pdf" recurse="yes" urlpath="/site/files/" key="D:\Inetpub\wwwroot\site\files">
jvm.config contents:
java.home=s:\ColdFusion10\jre
application.home=s:\ColdFusion10\cfusion
java.args=-server -Xms256m -Xmx1024m -XX:MaxPermSize=192m -XX:+UseParallelGC -Xbatch -Dcoldfusion.home={application.home} -Dcoldfusion.rootDir={application.home} -Dcoldfusion.libPath={application.home}/lib -Dorg.apache.coyote.USE_CUSTOM_STATUS_MSG_IN_HEADER=true -Dcoldfusion.jsafe.defaultalgo=FIPS186Random -Dcoldfusion.classPath={application.home}/lib/updates,{application.home}/lib,{application.home}/lib/axis2,{application.home}/gateway/lib/,{application.home}/wwwroot/WEB-INF/flex/jars,{application.home}/wwwroot/WEB-INF/cfform/jars
java.library.path={application.home}/lib,{application.home}/jintegra/bin,{application.home}/jintegra/bin/international,{application.home}/lib/oosdk/classes/win
java.class.path={application.home}/lib/oosdk/lib,{application.home}/lib/oosdk/classes
I've also tried going higher than 1024 MB for -Xmx, however CF would not restart until I took it back to 1024 MB.
Could it be a rogue PDF, or do I need more RAM on the server?
Thanks in advance
I would say you probably need more RAM. 64-bit CF10 with 4 GB of RAM is pretty paltry. As an experiment, why don't you try indexing half of the files, then the other half (or divide them up however is appropriate)? If in each case the process completes and memory use remains normal (or recovers to normal), then there is your answer: you have hit a ceiling on RAM.
Meanwhile, more information would be helpful. Can you post your JVM settings (the contents of your jvm.config file)? If you are using the default heap size (512 MB) then you may have room (not much, but a little) to increase it. Keep in mind it is the max heap size, and not the size of physical RAM, that constrains your CF engine - though obviously your heap must fit within said RAM.
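If you do find headroom, the change is just to the -Xmx value in the java.args line of jvm.config quoted above; a sketch (the 1536m figure is purely illustrative, and values much above 1024m generally require a 64-bit JVM, which matches the restart failures described in the question):
java.args=-server -Xms256m -Xmx1536m -XX:MaxPermSize=192m ... (remaining arguments unchanged)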
Also keep in mind that Solr runs in its own JVM with its own settings. Check out this post for information on that - though I suspect it is your CF heap that is being overrun.
For the last few years we've been randomly seeing this message in the output logs when running scheduled tasks in ColdFusion:
Recursion too deep; the stack overflowed.
The code inside the task that is being called can vary, but in this case it's VERY simple code that does nothing but reset a counter in the database and then send me an email to tell me it was successful. But I've seen it happen with all kinds of code, so I'm pretty sure it's not the code that's causing this problem.
It even has an empty application.cfm/cfc to block any other code being called.
The only other time we see this is when we are restarting CF and we are attempting to view a page before the service has fully started.
The error rarely happens, but now we have some rather critical scheduled tasks that cause issues if they don't run. (Hence I'm posting here for help)
Memory usage is fine. The task that ran just before it reported over 80% free memory. Monitoring memory through the night doesn't show any out-of-the-ordinary spikes. The machine has 4 gigs of memory and nothing else running on it but the OS and CF. We recently tried to reinstall CF to resolve the problem, but it did not help. It happens on several of our other servers as well.
This is an internal server, so usage at 3am should be nonexistent. There are no other scheduled tasks being run at that time.
We've been seeing this on our CF7, CF8, and CF9 boxes (fully patched).
The current box in question info:
CF version: 9,0,1,274733
Edition: Enterprise
OS: Windows 2003 Server
Java Version: 1.6.0_17
Min JVM Heap: 1024 MB
Max JVM Heap: 1024 MB
Min Perm Size: 64m
Max Perm Size: 384m
Server memory: 4 GB
Quad core machine that rarely sees more than 5% CPU usage
JVM settings:
-server -Dsun.io.useCanonCaches=false -XX:PermSize=64m -XX:MaxPermSize=384m -XX:+UseParallelGC -XX:+AggressiveHeap -Dcoldfusion.rootDir={application.home}/../
-Dcoldfusion.libPath={application.home}/../lib
-Doracle.jdbc.V8Compatible=true
Here is the incredibly complex code that failed to run last night, but has been running for years, and will most likely run tomorrow:
<cfquery datasource="common_app">
update import_counters
set current_count = 0
</cfquery>
<cfmail subject="Counters reset" to="my#email.com" from="my#email.com"></cfmail>
If I missed anything let me know. Thank you!
We had this issue for a while after our server was upgraded to ColdFusion 9. The fix seems to be in this technote from Adobe on jRun 4: http://kb2.adobe.com/cps/950/950218dc.html
You probably need to make some adjustments to permissions as noted in the technote.
Have you tried reducing the size of your heap from 1024 MB to, say, 800-something? You say there is over 80% of memory left available, so if possible I would look at reducing the max.
Is it a 32-bit or 64-bit OS? When assigning the heap space you have to take into consideration all the overhead of the JVM (thread stacks, libraries, etc.) so that you don't go over the OS limit for the process.
What you could try is to set the Minimum JVM Heap Size to the same as your Maximum JVM Heap Size (MB) within your CF Administrator.
Also, update the JVM to the latest update (1.6.0_21) or at least 1.6.0_20.
In the past I've always upgraded the JVM whenever something wacky started happening, as that usually solved the problem.
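As a concrete illustration of the equal-min/max suggestion (a sketch built from the specs listed above, not a verified fix), the java.args line in jvm.config would carry explicit, matching heap bounds:
-server -Xms1024m -Xmx1024m -XX:PermSize=64m -XX:MaxPermSize=384m -XX:+UseParallelGC ... (remaining existing -D arguments unchanged)
If you do this, it is also worth checking whether -XX:+AggressiveHeap from the current settings (which lets the JVM auto-tune heap sizing) should remain; that call is outside what the answer above covers.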