How can I determine the physical RAM installed on a computer? (Windows) - C++

How can I get the physical RAM installed in my computer using C++ on Windows?
I mean not only the capacity figures that GlobalMemoryStatusEx() returns, but also the number of used memory slots, the memory type (like DDR1/DDR2/DDR3), the slot type (DIMM/SO-DIMM), and the clock rate of the memory bus.
Do I need to use SMBIOS? Or is there another way to get this info?

On my machine, most of the information you request is available through WMI. Take a look at the Win32_PhysicalMemory and related classes.
For example, the output of wmic memorychip on my machine is:
C:\>wmic memorychip
Attributes BankLabel Capacity Caption ConfiguredClockSpeed ConfiguredVoltage CreationClassName DataWidth Description DeviceLocator FormFactor HotSwappable InstallDate InterleaveDataDepth InterleavePosition Manufacturer MaxVoltage MemoryType MinVoltage Model Name OtherIdentifyingInfo PartNumber PositionInRow PoweredOn Removable Replaceable SerialNumber SKU SMBIOSMemoryType Speed Status Tag TotalWidth TypeDetail Version
2 BANK 0 17179869184 Physical Memory 2133 1200 Win32_PhysicalMemory 64 Physical Memory ChannelA-DIMM0 12 Samsung 0 0 0 Physical Memory M471A2K43BB1-CPB 15741117 26 2133 Physical Memory 0 64 128
2 BANK 2 17179869184 Physical Memory 2133 1200 Win32_PhysicalMemory 64 Physical Memory ChannelB-DIMM0 12 Samsung 0 0 0 Physical Memory M471A2K43BB1-CPB 21251413 26 2133 Physical Memory 2 64 128
As noted in the link above, FormFactor 12 is SODIMM.
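If you would rather query Win32_PhysicalMemory programmatically than shell out to wmic, the usual route is COM/WMI. Below is a minimal sketch of that query (my own illustration, with error handling omitted and only a few of the properties shown above picked out):

#include <windows.h>
#include <comdef.h>
#include <wbemidl.h>
#include <iostream>
#pragma comment(lib, "wbemuuid.lib")

int main()
{
    // Initialize COM and set default security for local WMI access.
    CoInitializeEx(nullptr, COINIT_MULTITHREADED);
    CoInitializeSecurity(nullptr, -1, nullptr, nullptr,
                         RPC_C_AUTHN_LEVEL_DEFAULT, RPC_C_IMP_LEVEL_IMPERSONATE,
                         nullptr, EOAC_NONE, nullptr);

    IWbemLocator* locator = nullptr;
    CoCreateInstance(CLSID_WbemLocator, nullptr, CLSCTX_INPROC_SERVER,
                     IID_IWbemLocator, reinterpret_cast<void**>(&locator));

    IWbemServices* services = nullptr;
    locator->ConnectServer(_bstr_t(L"ROOT\\CIMV2"), nullptr, nullptr,
                           nullptr, 0, nullptr, nullptr, &services);
    CoSetProxyBlanket(services, RPC_C_AUTHN_WINNT, RPC_C_AUTHZ_NONE, nullptr,
                      RPC_C_AUTHN_LEVEL_CALL, RPC_C_IMP_LEVEL_IMPERSONATE,
                      nullptr, EOAC_NONE);

    IEnumWbemClassObject* rows = nullptr;
    services->ExecQuery(_bstr_t(L"WQL"),
                        _bstr_t(L"SELECT * FROM Win32_PhysicalMemory"),
                        WBEM_FLAG_FORWARD_ONLY | WBEM_FLAG_RETURN_IMMEDIATELY,
                        nullptr, &rows);

    IWbemClassObject* row = nullptr;
    ULONG fetched = 0;
    while (rows->Next(WBEM_INFINITE, 1, &row, &fetched) == S_OK && fetched) {
        VARIANT v;
        row->Get(L"Capacity", 0, &v, nullptr, nullptr);   // uint64 arrives as a BSTR
        std::wcout << L"Capacity:   " << v.bstrVal << L" bytes\n";
        VariantClear(&v);
        row->Get(L"FormFactor", 0, &v, nullptr, nullptr); // 8 == DIMM, 12 == SODIMM
        std::wcout << L"FormFactor: " << v.intVal << L"\n";
        VariantClear(&v);
        row->Get(L"Speed", 0, &v, nullptr, nullptr);      // MHz
        std::wcout << L"Speed:      " << v.intVal << L"\n";
        VariantClear(&v);
        row->Release();
    }
    rows->Release();
    services->Release();
    locator->Release();
    CoUninitialize();
}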
Notably missing from the WMI data are the voltages (which you didn't ask for, but which are usually of interest) and the MemoryType, whose documentation on MSDN is outdated, while the recent SMBIOS specs from DMTF include enum values for DDR4, etc.
Therefore, you would probably have to resort to reading the SMBIOS tables more or less by hand. See: How to get memory information (RAM type, e.g. DDR,DDR2,DDR3?) with WMI/C++
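On Windows you can get at the raw tables through GetSystemFirmwareTable with the 'RSMB' provider. Here is a rough sketch of walking the Type 17 (Memory Device) structures; the offsets are taken from the DMTF SMBIOS spec, and the field selection is illustrative rather than complete (the Size field, for instance, needs extra handling for modules of 32 GB and up):

#include <windows.h>
#include <cstdio>
#include <vector>

// Buffer layout returned by GetSystemFirmwareTable('RSMB', ...).
struct RawSMBIOSData {
    BYTE  Used20CallingMethod;
    BYTE  SMBIOSMajorVersion;
    BYTE  SMBIOSMinorVersion;
    BYTE  DmiRevision;
    DWORD Length;
    BYTE  TableData[1];   // actually variable-length
};

int main()
{
    // First call reports the required size, second call fetches the blob.
    UINT size = GetSystemFirmwareTable('RSMB', 0, nullptr, 0);
    std::vector<BYTE> buf(size);
    GetSystemFirmwareTable('RSMB', 0, buf.data(), size);

    auto* raw = reinterpret_cast<RawSMBIOSData*>(buf.data());
    const BYTE* p   = raw->TableData;
    const BYTE* end = p + raw->Length;

    while (p + 4 <= end) {
        BYTE type = p[0], len = p[1];
        if (type == 127) break;            // end-of-table marker
        if (type == 17 && len >= 0x17) {   // Type 17: Memory Device
            // Offsets per the SMBIOS spec; a Size of 0x7FFF means
            // "see the extended-size field", and 0 means empty slot.
            WORD sizeField  = *reinterpret_cast<const WORD*>(p + 0x0C);
            BYTE formFactor = p[0x0E];     // 0x09 = DIMM, 0x0D = SODIMM
            BYTE memType    = p[0x12];     // e.g. 0x1A = DDR4
            WORD speed      = *reinterpret_cast<const WORD*>(p + 0x15);
            std::printf("size=%u MB form=0x%02X type=0x%02X speed=%u MT/s\n",
                        sizeField, formFactor, memType, speed);
        }
        // Skip the formatted area, then the string-set, which is
        // terminated by a double NUL.
        const BYTE* q = p + len;
        while (q + 1 < end && (q[0] != 0 || q[1] != 0)) ++q;
        p = q + 2;
    }
}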

Question about the Cortex-M3 vector table placement

I am trying to understand the placement of the vector table for Cortex-M3 processor.
According to the Cortex-M3 arch ref manual, the reset behavior is like this (some parts are omitted):
So we can see that the vector table location comes from the VTOR (Vector Table Offset Register).
According to the Cortex-M3 tech ref manual, the VTOR is defined as:
So we can see it has a reset value of 0x0. Based on the above two points, the Cortex-M3 processor expects a vector table at the absolute address 0x0 in the Code area after reset.
But in my MDK uVision IDE, I see my application is placed in the IROM1 area, which starts at 0x8000000, which is within the 0.5G Code memory area according to the Cortex-M3 memory map.
And since it has the Startup button checked, I guess that means the IROM1 area should contain the vector table (please correct me if I am wrong about this).
So I think the vector table should lie at the beginning of the IROM1 area, i.e. 0x8000000. And indeed it does. The pic below shows that the beginning of IROM1 holds the vector table's first entry, the initial SP value.
And what's stranger, the VTOR register (at 0xE000ED08) still holds a value of 0x0:
So, how could my vector table be found with a 0x0 VTOR reset value?
And just out of curiosity, I checked the memory content at 0x0; it contains exactly the same vector table content as IROM1. So who did this magic copy?
ADD 1 - 4:39 PM 10/9/2020
I guess there must be something I don't know about the Startup check box in the pic below.
ADD 2 - 5:09 PM 10/9/2020
Thanks to @RealtimeRik and @domen. I downloaded the datasheet for the STM32F103x8/xB (https://www.st.com/resource/en/datasheet/stm32f103c8.pdf). In section 4, Memory mapping, I saw the diagram below:
So it seems the [0x0, 0x8000000) range does get aliased to somewhere else. But I haven't found how to determine where it is aliased to...
ADD 3 - 5:39 PM 10/9/2020
Now I found it!
I downloaded the STM32Fxxx reference manual (btw it's really huge).
In section 3.4 Boot configuration, it specifies the boot mode configured through the BOOT[1:0] pins.
And with different boot mode, different address aliasing is used:
Depending on the selected boot mode, main Flash memory, system memory or SRAM is accessible as follows:
Boot from main Flash memory: the main Flash memory is aliased in the boot memory space (0x0000 0000), but still accessible from its original memory space (0x0800 0000). In other words, the Flash memory contents can be accessed starting from address 0x0000 0000 or 0x0800 0000.
Boot from system memory: the system memory is aliased in the boot memory space (0x0000 0000), but still accessible from its original memory space (0x1FFF B000 in connectivity line devices, 0x1FFF F000 in other devices).
Boot from the embedded SRAM: SRAM is accessible only at address 0x2000 0000.
What I saw is boot from main Flash memory.
Well, finally I can explain why 0x0800 0000 is chosen...
ADD 4 - 3:19 PM 10/15/2020
The placement/expectation of the interrupt vector table at the address 0 is similar to the IA32 processor in real mode...
There is no "Magic Copy". 0x00000000 is aliased to 0x08000000.
The actual memory is physically located at 0x08000000 but can also be accessed at 0x00000000.
If you look in the processor-specific reference manual you should find this in the memory map section.
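As a quick sanity check you can read VTOR and compare the first table entries through both addresses at runtime. A minimal bare-metal sketch (my own, assuming an STM32F103 booting from main flash; the address constants come from the question, and dereferencing address 0 relies on the usual bare-metal toolchain behavior rather than strict C++):

#include <cstdint>

// Confirm that VTOR is 0 and that the words at 0x00000000 (boot alias)
// match the words at 0x08000000 (main flash), i.e. the same vector
// table is visible at both addresses.
int main()
{
    volatile uint32_t* const vtor  = reinterpret_cast<volatile uint32_t*>(0xE000ED08u);
    volatile uint32_t* const alias = reinterpret_cast<volatile uint32_t*>(0x00000000u);
    volatile uint32_t* const flash = reinterpret_cast<volatile uint32_t*>(0x08000000u);

    uint32_t vtor_value = *vtor;                 // expect 0x0 after reset
    bool same_sp    = (alias[0] == flash[0]);    // initial SP entry
    bool same_reset = (alias[1] == flash[1]);    // reset handler entry

    // On real hardware you would print these over UART or inspect them
    // in the debugger; returning a flag keeps the sketch simple.
    return (vtor_value == 0 && same_sp && same_reset) ? 0 : 1;
}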

Is the maximum size of an item in a type 19 file configurable?

A WRITEBLK command fails when the item reaches 2GB in size (item is truncated to 2147483647 bytes).
Using cat I was able to create an item larger than 2GB in the same directory, but opening it in UV gave a corrupt (negative) value for STATUS<4> (Number of bytes available to read).
uv 11.1.4
64bit Linux on a VM
64BIT_FILES = 1
You can make UniVerse files 32-bit or 64-bit (regardless of the OS), so you can do a FILEINFO call to see whether the file is actually 64-bit (even if the account is 64-bit).
My guess is that there is a file system limitation on the file size. In the Rocket UniVerse documentation (page 927) it says:
If the device runs out of disk space, WRITEBLK takes the ELSE clause and returns –4 to the STATUS function.
Generally only 32-bit systems would have a hard limit at 2 GB, but maybe there is some kind of 32-bit process running in your 64-bit virtual machine that is producing the same effect. See here for a few leads: https://unix.stackexchange.com/questions/274380/file-size-limit

Limiting Java 8 Memory Consumption

I'm running three Java 8 JVMs on a 64-bit Ubuntu VM which was built from a minimal install, with nothing extra running other than the three JVMs. The VM itself has 2GB of memory and each JVM was limited by -Xmx512M, which I assumed would be fine as there would be a couple of hundred MB to spare.
A few weeks ago, one crashed and the hs_err_pid dump showed:
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 196608 bytes for committing reserved memory.
# Possible reasons:
# The system is out of physical RAM or swap space
# In 32 bit mode, the process size limit was hit
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Use 64 bit Java on a 64 bit OS
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
I restarted the JVM with a reduced heap size of 384MB and so far everything is fine. However, when I now look at the VM using the ps command and sort by descending RSS size, I see
RSS %MEM VSZ PID CMD
708768 35.4 2536124 29568 java -Xms64m -Xmx512m ...
542776 27.1 2340996 12934 java -Xms64m -Xmx384m ...
387336 19.3 2542336 6788 java -Xms64m -Xmx512m ...
12128 0.6 288120 1239 /usr/lib/snapd/snapd
4564 0.2 21476 27132 -bash
3524 0.1 5724 1235 /sbin/iscsid
3184 0.1 37928 1 /sbin/init
3032 0.1 27772 28829 ps ax -o rss,pmem,vsz,pid,cmd --sort -rss
3020 0.1 652988 1308 /usr/bin/lxcfs /var/lib/lxcfs/
2936 0.1 274596 1237 /usr/lib/accountsservice/accounts-daemon
..
..
and the free command shows
total used free shared buff/cache available
Mem: 1952 1657 80 20 213 41
Swap: 0 0 0
Taking the first process as an example, there is an RSS size of 708768 KB even though the heap limit would be 524288 KB (512*1024).
I am aware that extra memory is used over and above the JVM heap, but the question is: how can I control this to ensure I do not run out of memory again? I am trying to set the heap size for each JVM as large as I can without crashing them.
Or is there a good general guideline on how to set JVM heap size in relation to overall memory availability?
There does not appear to be a way of controlling how much extra memory the JVM will use over the heap. However by monitoring the application over a period of time, a good estimate of this amount can be obtained. If the overall consumption of the java process is higher than desired, then the heap size can be reduced. Further monitoring is needed to see if this impacts performance.
Continuing with the example above and using the command ps ax -o rss,pmem,vsz,pid,cmd --sort -rss we see usage as of today is
RSS %MEM VSZ PID CMD
704144 35.2 2536124 29568 java -Xms64m -Xmx512m ...
429504 21.4 2340996 12934 java -Xms64m -Xmx384m ...
367732 18.3 2542336 6788 java -Xms64m -Xmx512m ...
13872 0.6 288120 1239 /usr/lib/snapd/snapd
..
..
These java processes are all running the same application but with different data sets. The first process (29568) has stayed stable using about 190M beyond the heap limit while the second (12934) has reduced from 156M to 35M. The total memory usage of the third has stayed well under the heap size which suggests the heap limit could be reduced.
It would seem that allowing 200MB of extra non-heap memory per Java process here would be more than enough, as that gives 600MB of leeway in total. Subtracting this from 2GB leaves 1400MB, so the three -Xmx values combined should be less than this amount.
As you will glean from reading the article pointed out in a comment by Fairoz, there are many different ways in which the JVM can use non-heap memory. One that is measurable, though, is the thread stack size. The default for a JVM can be found on Linux using java -XX:+PrintFlagsFinal -version | grep ThreadStackSize. In the case above it is 1MB, and as there are about 25 threads, we can safely say that at least 25MB extra will always be required.

GlobalMemoryStatusEx() gives total virtual memory as 127 TeraByte

Why does GlobalMemoryStatusEx() give such a huge total virtual memory? Does it take into account all the page files that can be created?
System details:
Windows 8.1, 64 bit Process, x64 Processor
#include <windows.h>
#include <iostream>

int main()
{
    MEMORYSTATUSEX mex;
    mex.dwLength = sizeof(mex);
    GlobalMemoryStatusEx(&mex);
    std::cout << mex.ullTotalVirtual << " " << mex.ullAvailVirtual;
}
140737488224256 140737478111232
EDIT:
I got the same result on Windows 10. I am interested in knowing how this 127 TB figure comes up. Why does the system not take into account that I don't have 127 TB of space on my disk?
A 32-bit process (on an x64 system) shows only 2GB, which is the user-mode address limit for a 32-bit process. Why does it not take page files into account in the case of a 32-bit process?
Not quite: ullTotalVirtual reports the size of the user-mode virtual address space available to the calling process, which is fixed by the architecture, not by your disk. On x64 Windows 8.1/10 that is roughly 128 TB (2^47 bytes); for a 32-bit process it is the 2 GB user partition, which is why you see 2GB there. Address space is only a reservation, so page file size does not enter into it; the commit limit that does depend on page files is reported separately in ullTotalPageFile. From MSDN:
You can use GlobalMemoryStatusEx() to determine how much memory your application can allocate without severely impacting other applications.
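To see the distinction in one place, here is a small sketch (my own illustration) printing the physical, page-file and address-space totals side by side:

#include <windows.h>
#include <iostream>

int main()
{
    MEMORYSTATUSEX mex{};
    mex.dwLength = sizeof(mex);
    if (!GlobalMemoryStatusEx(&mex)) return 1;

    // ullTotalPhys and ullTotalPageFile depend on installed RAM and
    // page file settings; ullTotalVirtual depends only on the address
    // space granted to this process (about 128 TB for a 64-bit process
    // on Windows 8.1+, 2 GB for a default 32-bit process).
    std::cout << "Physical:  " << mex.ullTotalPhys     << "\n"
              << "Page file: " << mex.ullTotalPageFile << "\n"
              << "Virtual:   " << mex.ullTotalVirtual  << "\n";
}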

Not enough space to cache rdd in memory warning

I am running a Spark job, and I got a Not enough space to cache rdd_128_17000 in memory warning. However, the attached file obviously says only 90.8 G out of 719.3 G is used. Why is that? Thanks!
15/10/16 02:19:41 WARN storage.MemoryStore: Not enough space to cache rdd_128_17000 in memory! (computed 21.4 GB so far)
15/10/16 02:19:41 INFO storage.MemoryStore: Memory use = 4.1 GB (blocks) + 21.2 GB (scratch space shared across 1 thread(s)) = 25.2 GB. Storage limit = 36.0 GB.
15/10/16 02:19:44 WARN storage.MemoryStore: Not enough space to cache rdd_129_17000 in memory! (computed 9.4 GB so far)
15/10/16 02:19:44 INFO storage.MemoryStore: Memory use = 4.1 GB (blocks) + 30.6 GB (scratch space shared across 1 thread(s)) = 34.6 GB. Storage limit = 36.0 GB.
15/10/16 02:25:37 INFO metrics.MetricsSaver: 1001 MetricsLockFreeSaver 339 comitted 11 matured S3WriteBytes values
15/10/16 02:29:00 INFO s3n.MultipartUploadOutputStream: uploadPart /mnt1/var/lib/hadoop/s3/959a772f-d03a-41fd-bc9d-6d5c5b9812a1-0000 134217728 bytes md5: qkQ8nlvC8COVftXkknPE3A== md5hex: aa443c9e5bc2f023957ed5e49273c4dc
15/10/16 02:38:15 INFO s3n.MultipartUploadOutputStream: uploadPart /mnt/var/lib/hadoop/s3/959a772f-d03a-41fd-bc9d-6d5c5b9812a1-0001 134217728 bytes md5: RgoGg/yJpqzjIvD5DqjCig== md5hex: 460a0683fc89a6ace322f0f90ea8c28a
15/10/16 02:42:20 INFO metrics.MetricsSaver: 2001 MetricsLockFreeSaver 339 comitted 10 matured S3WriteBytes values
This is likely to be caused by the configuration of spark.storage.memoryFraction being too low. Spark will only use this fraction of the allocated memory to cache RDDs.
Try one of the following:
increasing the storage fraction
rdd.persist(StorageLevel.MEMORY_ONLY_SER) to reduce memory usage by serializing the RDD data
rdd.persist(StorageLevel.MEMORY_AND_DISK) to partially persist onto disk if memory limits are reached.
This could be due to the following issue if you're loading lots of avro files:
https://mail-archives.apache.org/mod_mbox/spark-user/201510.mbox/%3CCANx3uAiJqO4qcTXePrUofKhO3N9UbQDJgNQXPYGZ14PWgfG5Aw#mail.gmail.com%3E
With a PR in progress at:
https://github.com/databricks/spark-avro/pull/95
I have a Spark-based batch application (a JAR with a main() method, not written by me; I'm not a Spark expert) that I run in local mode without spark-submit, spark-shell, or spark-defaults.conf. When I tried to use the IBM JRE (as one of my customers does) instead of the Oracle JRE (same machine and same data), I started getting those warnings.
Since the memory store is a fraction of the heap (see the page that Jacob suggested in his comment), I checked the heap size: the IBM JRE uses a different strategy to decide the default heap size, and it was too small, so I simply added appropriate -Xms and -Xmx params and the problem disappeared: now the batch works fine with both the IBM and Oracle JREs.
My usage scenario is not typical, I know, however I hope this can help someone.
