I was working with LMDB++ (the C++ wrapper for LMDB) and I got this error:
terminate called after throwing an instance of 'lmdb::map_full_error'
what(): mdb_put: MDB_MAP_FULL: Environment mapsize limit reached
Some googling told me that the default map_size is set low in LMDB. How do I go about increasing map_size?
The default LMDB map size is 10 MiB, which is indeed too small for most uses.
To set the LMDB map size using the C++ wrapper, call lmdb::env::set_mapsize() right after creating the LMDB environment and before opening the environment or creating your first transaction.
Here's a basic example that increases the map size to 1 GiB:
/* Create and open the LMDB environment: */
auto env = lmdb::env::create();
env.set_mapsize(1UL * 1024UL * 1024UL * 1024UL);
env.open("./example.mdb", 0, 0664);
If you are calculating a large map size as in the example above, take care to include the appropriate type suffix (UL or ULL) on your integer literals. Otherwise you may encounter silent integer overflow and be left wondering why the map size did not increase to what you expected.
See also the documentation for LMDB's underlying C function mdb_env_set_mapsize() for the authoritative word on how the map size works.
I have an uncommon but in my eyes reasonable use case:
I have to build two STM32 firmware images: a boot loader and an application (using the latest Eclipse CDT-based IDE from STMicroelectronics, "STM32CubeIDE").
My constraints are mostly low power consumption, not security, so my only requirement is data integrity for the DFU (Device Firmware Upgrade) scenario; for this I implemented a CRC32 check over the complete FW images. The tricky part is that the firmware itself contains its actual size within a C-struct at a fixed offset address 0x200 in code memory (the benefit of this design is that the complete code memory does not have to be transmitted, while the FW is still always protected by the CRC32):
The layout of a firmware is something like this:
<ISR Table> <FW-Header#FixedAddress0x200> <RestFWCode> " + CRC32
FW Header contains FW size
The complete FW size, which is used by the bootloader to flash the application, is the stored FW size (see 1.) + 4 bytes of the appended CRC32
For my implementation I need to replace a memory area within the "FW Header" area with the actual FW size (which is only available after the build process).
For this I made a Python script which patches the binary "*.bin" file. However, it seems that Eclipse/GDB uses the ELF file for debugging, which looks much more complicated to custom-patch than the binary image; I found no easy way to replace the actual FW size and append the 4 bytes of the CRC32 there.
Therefore, I thought the easiest way would be to patch the code memory right after the firmware is loaded by the debugger.
I successfully tested a command-line tool from ST which can manipulate arbitrary memory, even code memory (my code memory starts at 0x08000000, the application at offset 0x4000, and the FW header at offset 0x200 -> 0x08004200):
ST-LINK_CLI.exe -c SWD -w32 0x08004200 0xAABBCCDD
(see: https://www.st.com/resource/en/user_manual/cd00262073.pdf)
My problem is that I don't know how to initiate this simple EXE call right before the debugger attaches to the MCU... I tried "Debug Configuration" -> "Startup" -> "Run Commands", but without success...
Does anybody know a way how to achieve this?
Running a program before starting a debug session can be done using Eclipse's "Launch Group" feature, located under Debug Configurations (top menu -> Run -> Debug Configurations).
However, before doing that, go to Project Properties -> Builders and add your program invocation there: the path to the executable plus its arguments. Make sure it is NOT checked, so that it doesn't run when you build your project. Then go to the Launch Groups described above and create a group that contains the program you defined in the Builders section, followed by your regular debug session, which should already be available in the list.
I am going to recommend a different workflow to achieve the same image format:
<ISR Table> <FW-Header#FixedAddress0x200> <RestFWCode> " + CRC32
Once in place, your build process will be:
Build FW image to produce a raw binary, without a CRC-32 value
Calculate CRC-32 on the raw binary image with a third-party tool
Insert the calculated CRC-32 into linker script, rebuild FW image
Note: For STM32CubeIDE, you will want to have your own *.elf project that includes the STM32CubeMX project as a static library. Otherwise, STM32CubeMX will overwrite your linker script each time it generates new code.
Since each project tends to have a slightly different linker script, I am going to demonstrate using a .test_crc sector. Insert the following into your *.ld linker script:
.test_crc :
{
_image_start = .;
BYTE( 0x31)
BYTE( 0x32)
BYTE( 0x33)
BYTE( 0x34)
BYTE( 0x35)
BYTE( 0x36)
BYTE( 0x37)
BYTE( 0x38)
BYTE( 0x39)
/* FW Image Header */
LONG( _image_end - _image_start ) /* Size of FW image */
LONG( _image_start ) /* Base address to load FW image */
/* Place this at the end of the FW image */
_image_end = .;
_IMAGE_CRC = ABSOLUTE(0x0); /* Using CRC-32C (aka zip checksum) */
/*LONG( (-_IMAGE_CRC) - 1 ) /* Uncomment to append value, comment to calculate new value */
} >FLASH
Add the following as a post-build step in STM32CubeIDE (generates the raw binary image):
arm-none-eabi-objcopy -S -O binary -j .test_crc ${ProjName}.elf ${ProjName}.bin
Now you're ready to test/evaluate the process:
Rebuild your project to generate a *.bin file.
Using a third-party tool, calculate a CRC-32 checksum. I use the 7-Zip command-line interface to generate the CRC-32C value for the *.bin file.
Append the calculated CRC-32C. In the linker script, set the 0x0 in _IMAGE_CRC = ABSOLUTE(0x0) to match your calculated value, then uncomment the following:
LONG( (-_IMAGE_CRC) - 1 ) /* Uncomment to append value, comment to calculate new value */
Rebuild your image and run the third-party CRC utility; it should now report 0xFFFFFFFF as the CRC-32C value.
When you are ready to apply this to your actual FW image, do the following:
Change the post-build step to dump the full binary: arm-none-eabi-objcopy -S -O binary ${ProjName}.elf ${ProjName}.bin
Move _image_start = .; in front of your vector table.
Move the following after your vector table:
/* FW Image Header */
LONG( _image_end - _image_start ) /* Size of FW image */
LONG( _image_start ) /* Base address to load FW image */
Move the following to end of the last sector:
/* Place this at the end of the FW image */
_image_end = .;
_IMAGE_CRC = ABSOLUTE(0x0); /* Using CRC-32C (aka zip checksum) */
/*LONG( (-_IMAGE_CRC) - 1 ) /* Uncomment to append value, comment to calculate new value */
You may find that you actually do not need the CRC value built into the image, and can just append the CRC value to the *.bin. Then provide the *.bin to your bootloader. The *.bin will still contain the load address and size of FW image, +4 bytes for the appended CRC value.
I have an application that queries a specific folder for its contents at a quick interval. Up to now I was using FindFirstFile, but even with a search pattern applied I feel there will be performance problems in the future, since the folder can get pretty big; in fact it's not in my hands to restrict it at all.
Then I decided to give FindFirstFileEx a chance, in combination with some tips from this question.
My exact call is the following:
const char* search_path = "somepath/*.*";
WIN32_FIND_DATA fd;
HANDLE hFind = ::FindFirstFileEx(search_path, FindExInfoBasic, &fd, FindExSearchNameMatch, NULL, FIND_FIRST_EX_LARGE_FETCH);
Now I get pretty good performance, but what about compatibility? My application requires Windows Vista+, but the documentation says the following regarding FIND_FIRST_EX_LARGE_FETCH:
This value is not supported until Windows Server 2008 R2 and Windows 7.
I can compile and run it fine on my Windows 7, but what happens if someone runs this on a Vista machine? Does the function fall back to 0 (the default) in this case? Is it safe not to test against the operating system version?
UPDATE:
I said above that I feel the performance is not good. In fact, my numbers on a fixed set of files (about 100 of them) are the following:
FindFirstFile -> 22 ms
FindFirstFile -> 4 ms (using a specific pattern; however, all files may be wanted)
FindFirstFileEx -> 1 ms (no matter patterns or full list)
What worries me is what will happen if the folder grows to, say, 50k files. That's about 500x bigger and still not unrealistic. It would mean about 11 seconds per query for an application polling at 25 fps (it's graphical).
Just tested under WinXP (compiled under Win7): you get error 0x57 ("The parameter is incorrect") when ::FindFirstFileEx() is called with FIND_FIRST_EX_LARGE_FETCH. You should check the Windows version and choose the value of the additional parameter dynamically.
FindExInfoBasic is also not supported before Windows Server 2008 R2 and Windows 7; you will get the same run-time 0x57 error from that value too. It must be changed to an alternative if the binary is run under an older Windows version.
First of all: periodically querying a specific folder for its contents at a quick interval is not the best solution, I think.
You should call ReadDirectoryChangesW instead. As a result you will not need to do periodic queries; you will get notifications when files in the directory change. The best way is to bind the directory handle with BindIoCompletionCallback or CreateThreadpoolIo, and make the first call to ReadDirectoryChangesW directly. Then, whenever there is a change, your callback will be invoked automatically; after you process the data, call ReadDirectoryChangesW again from the callback, until you get STATUS_NOTIFY_CLEANUP (with BindIoCompletionCallback) or ERROR_NOTIFY_CLEANUP (with CreateThreadpoolIo) in the callback (which means you closed the directory handle to stop notifications) or some error.
After this (the first call to ReadDirectoryChangesW) you need to run a FindFirstFileEx/FindNextFile loop, but only once, and handle all returned files as if they were FILE_ACTION_ADDED notifications.
Now about performance and compatibility.
All of this is for information only; I am not recommending that you use it or avoid it.
If you need this, look at ZwQueryDirectoryFile; it gives you a very big performance win. You only need to open the file handle once, not on every query as with FindFirstFileEx.

The main thing is the ReturnSingleEntry parameter; this is the key point. You need to set it to FALSE and pass a large enough buffer in FileInformation. If ReturnSingleEntry is TRUE, the function returns only one file per call, so if the folder contains N files you will need to call ZwQueryDirectoryFile N times. With ReturnSingleEntry == FALSE you can get all the files in a single call, if the buffer is large enough. In any case you seriously reduce the number of round trips to the kernel, which is a very costly operation: one query returning N files is much faster than N queries.

FIND_FIRST_EX_LARGE_FETCH does exactly this (it sets ReturnSingleEntry to FALSE), but in the current implementation (I checked this on the latest Windows 10) the system does so only in the FindNextFile calls; the first call, inside FindFirstFileEx, for some unknown reason still uses ReturnSingleEntry == TRUE. So there will be at least 2 calls to ZwQueryDirectoryFile where a single call would have been possible (if the buffer is large enough, of course). If you use ZwQueryDirectoryFile directly, you control the buffer size: you can allocate, say, 1 MB once and then reuse it in the periodic queries, without reallocation. You cannot control how large a buffer FindFirstFileEx uses with FIND_FIRST_EX_LARGE_FETCH (in the current implementation it is 64 KB, a quite reasonable value).

You also have a much richer choice of FileInformationClass: a less informative info class means less data, and the function works faster.
What about compatibility? This exists and has worked, with full functionality, from at least Windows 2000 up to the latest Windows 10. (The documentation says "Available starting with Windows XP"; however, in ntifs.h it is declared under #if (NTDDI_VERSION >= NTDDI_WIN2K), and it really was already in Windows 2000. No matter: XP support is more than enough now.)
But isn't this undocumented, unsupported, kernel-mode only, with no lib file?
It is documented and, as you can see, available for both user mode and kernel mode. How do you think FindFirstFile[Ex]/FindNextFile work? They call ZwQueryDirectoryFile; there is no other way. All calls to the kernel go through ntdll.dll; this is fundamental. (Yes, it is still possible that ntdll.dll will one day stop exporting by name and export by ordinal only, to show what is really unsupported.) A lib file exists, even two, ntdll.lib and ntdllp.lib (the latter exposes more APIs than the former), in any WDK. Where is it declared? #include <ntifs.h>. That header conflicts with #include <windows.h>; yes, it conflicts, but if you include ntifs.h in a namespace with some tricks, it is possible to avoid the conflicts.
I am trying to make a small kernel for the 80386 processor, mainly for learning purposes, and I want to get the full memory map of the available RAM.
I have read that it is possible and better to do so with the help of GRUB than directly querying the BIOS.
Can anybody tell me how to do it?
In particular: to use BIOS functionality in real mode we issue BIOS interrupts and get the desired values in registers. What is the equivalent when we want to use GRUB-provided facilities?
Here is the process I use in my kernel (note that this is 32bit). In my bootstrap assembly file, I tell GRUB to provide me with a memory map:
.set MEMINFO, 1 << 1 # Get memory map from GRUB
Then, GRUB loads the address of the multiboot info structure into ebx for you (this structure contains the address of the memory map). Then I call into C code to handle the actual iteration and processing of the memory map. I do something like this to iterate over the map:
/* Macro to get the next entry in the memory map. Each entry's size field
   does not count the size field itself, hence the extra sizeof(uint32_t). */
#define MMAP_NEXT(m) \
    ((multiboot_memory_map_t*)((uint32_t)(m) + (m)->size + sizeof(uint32_t)))

void read_mmap(multiboot_info_t* mbt) {
    multiboot_memory_map_t* mmap = (multiboot_memory_map_t*) mbt->mmap_addr;
    /* Iterate over the memory map */
    while ((uint32_t)mmap < mbt->mmap_addr + mbt->mmap_length) {
        /* process the current memory map entry */
        mmap = MMAP_NEXT(mmap);
    }
}
where multiboot_info_t and multiboot_memory_map_t are defined as in the Gnu multiboot.h file. As Andrew Medico posted in the comments, here is a great link for getting started with this.
I have a program that creates a file mapping. The CreateFileMapping call succeeds: m_hMap = CreateFileMapping(m_hFile, 0, dwProtect, 0, m_dwMapSize, NULL);. But the subsequent call to MapViewOfFile(m_hMap, dwViewAccess, 0, 0, 0) fails with error code 8, which is ERROR_NOT_ENOUGH_MEMORY ("Not enough storage is available to process this command").
So I'm not totally understanding what the MapViewOfFile does for me, and how to fix the situation.
some numbers...
m_dwMapSize = 453427200
dwProtect = PAGE_READWRITE;
dwViewAccess = FILE_MAP_ALL_ACCESS;
I think my page size is 65536
When dealing with a very large file, it is recommended to read and process it in small pieces; MapViewOfFile is the function used to map each piece into memory.
Look at http://msdn.microsoft.com/en-us/library/windows/desktop/aa366761(v=vs.85).aspx: MapViewOfFile takes an offset precisely so that a very large file can be read in pieces. Very large mapping requests often fail due to address-space fragmentation and related reasons.
If you are running a 32-bit process on a 64-bit processor, the system can give it a full 4 GB address space when the LARGEADDRESSAWARE bit is set.
Go to Configuration Properties -> Linker -> System and set "Enable Large Addresses" (/LARGEADDRESSAWARE) to Yes.
I need to retrieve the total amount of RAM present in a system and the total RAM currently being used, so I can calculate a percentage. This is similar to: Retrieve system information on MacOS X?
However, in that question the best answer suggests how to get RAM by reading from:
/usr/bin/vm_stat
Due to the nature of my program, I found out that I cannot read from that file; I require a method that provides RAM info without simply opening a file and reading from it. I am looking for something based on function calls, preferably something like getTotalRam() and getRamInUse().
I obviously do not expect it to be that simple, but I am looking for a solution other than reading from a file.
I am running Mac OS X Snow Leopard, but would preferably get a solution that would work across all current Mac OS X Platforms (i.e. Lion).
Solutions can be in C++, C or Obj-C; however, C++ would be the best possible solution in my case, so if possible please provide it in C++.
Getting the machine's physical memory is simple with sysctl:
#include <sys/types.h>
#include <sys/sysctl.h>

int mib[] = { CTL_HW, HW_MEMSIZE };
int64_t value = 0;
size_t length = sizeof(value);

if (-1 == sysctl(mib, 2, &value, &length, NULL, 0)) {
    // An error occurred
}
// Physical memory size in bytes is now in value
VM stats are only slightly trickier:
#include <mach/mach.h>

mach_msg_type_number_t count = HOST_VM_INFO_COUNT;
vm_statistics_data_t vmstat;

if (KERN_SUCCESS != host_statistics(mach_host_self(), HOST_VM_INFO, (host_info_t)&vmstat, &count)) {
    // An error occurred
}
You can then use the data in vmstat to get the information you'd like:
double total = vmstat.wire_count + vmstat.active_count + vmstat.inactive_count + vmstat.free_count;
double wired = vmstat.wire_count / total;
double active = vmstat.active_count / total;
double inactive = vmstat.inactive_count / total;
double free = vmstat.free_count / total;
There is also a 64-bit version of the interface.
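For reference, a sketch of that 64-bit variant (macOS-only, so untested here); it follows the same pattern with HOST_VM_INFO64 and 64-bit-wide page counts:

```cpp
#include <mach/mach.h>

// 64-bit variant of the snippet above.
vm_statistics64_data_t vmstat64;
mach_msg_type_number_t count64 = HOST_VM_INFO64_COUNT;
if (KERN_SUCCESS != host_statistics64(mach_host_self(), HOST_VM_INFO64,
                                      (host_info64_t)&vmstat64, &count64)) {
    // An error occurred
}
// vmstat64.wire_count, vmstat64.active_count, etc. are now 64-bit counters
```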
You're not supposed to read from /usr/bin/vm_stat; rather, you're supposed to run it: it is a program. Look at the first four lines of its output:
Pages free: 1880145.
Pages active: 49962.
Pages inactive: 43609.
Pages wired down: 123353.
Add the numbers in the right column and multiply by the system page size (as returned by getpagesize()), and you get the total amount of physical memory in the system, in bytes.
vm_stat isn't setuid on Mac OS, so I assume there is a non-privileged API somewhere to access this information and that vm_stat is using it. But I don't know what that interface is.
You can figure out the answer to this question by looking at the source of the top command. You can download the source from http://opensource.apple.com/. The 10.7.2 source is available as an archive here or in browsable form here. I recommend downloading the archive and opening top.xcodeproj so you can use Xcode to find definitions (command-clicking in Xcode is very useful).
The top command displays physical memory (RAM) numbers after the label "PhysMem". Searching the project for that string, we find it in the function update_physmem in globalstats.c. It computes the used and free memory numbers from the vm_stat member of struct libtop_tsamp_t.
You can command-click on "vm_stat" to find its declaration as a member of libtop_tsamp_t in libtop.h. It is declared as type vm_statistics_data_t. Command-clicking that jumps to its definition in /usr/include/mach/vm_statistics.h.
Searching the project for "vm_stat", we find that it is filled in by function libtop_tsamp_update_vm_stats in libtop.c:
mach_msg_type_number_t count = sizeof(tsamp->vm_stat) / sizeof(natural_t);
kr = host_statistics(libtop_port, HOST_VM_INFO, (host_info_t)&tsamp->vm_stat, &count);
if (kr != KERN_SUCCESS) {
return kr;
}
You will need to figure out how libtop_port is set if you want to call host_statistics. I'm sure you can figure that out for yourself.
It's been 4 years but I just wanted to add some extra info on calculating total RAM.
To get the total RAM, we should also consider Pages occupied by compressor and Pages speculative, in addition to Kyle Jones's answer.
You can check out this post for where the problem occurs.