Recently, I've come across this video that shows how to use mmap() with file I/O. However, I can't find the video of his that documents the function itself. I don't understand what it is, why it exists, or how it relates to files.
Too much of the jargon is flying over my head to make sense of it. I had the same problem with sites like Wikipedia.
Files are arrays of bytes stored in a filesystem.
"Memory" in this case is an array of bytes stored in RAM.
Memory mapping is something that an operating system does: it gives some range of bytes in a process's address space a special meaning.
A memory-mapped file is generally a file in the file system that the operating system has mapped to some range of bytes in the memory of a process. When the process writes to that memory, the operating system takes care that the bytes are written to the file, and when the process reads from that memory, the operating system takes care that the file is read.
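A minimal sketch of what this looks like in practice, assuming a POSIX system and an existing, non-empty file (the name "data.bin" is just an illustration):

    #include <fcntl.h>     // open
    #include <sys/mman.h>  // mmap, munmap
    #include <sys/stat.h>  // fstat
    #include <unistd.h>    // close
    #include <cstdio>

    int main() {
        int fd = open("data.bin", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        // Ask the OS to map the whole file into this process's address space.
        char* p = static_cast<char*>(mmap(nullptr, st.st_size,
                                          PROT_READ | PROT_WRITE,
                                          MAP_SHARED, fd, 0));
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        // Reading the array reads the file...
        std::printf("first byte: 0x%02x\n", static_cast<unsigned char>(p[0]));
        // ...and writing the array writes the file (the OS flushes it back).
        p[0] = 'X';

        munmap(p, st.st_size);
        close(fd);
    }

Once the mapping is established there are no read()/write() calls at all; ordinary loads and stores through p are the file I/O.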
As far as I know, a memory address like 0x0 in a program is the beginning of its code segment in its virtual address space. Is it possible to simply read what's in there from within the program? What about checking things like the size of the stack/heap? If this isn't possible in C/C++ programs, is it possible in assembly?
Edit: I find memory allocation and management interesting. I'm asking out of curiosity. I like the idea of being able to see what's in every address of my program's virtual address space. When I mentioned stack/heap size, I meant those of the program, too.
First, check /proc/<pid>/maps, assuming you are running Linux. This will show you a list of allocated regions and the permissions for each region (or VMA, technically). Check out this answer. The permissions are 'rwx' or a subset of these, representing readable, writable, and executable. For regions that are readable, you can craft a pointer in C/C++ using a uintptr_t and then read through it.
Basically, you can dump all of your readable regions using simple pointers.
By the way, in virtually all C binaries, the region starting at address 0x0 will be unmapped, so that dereferencing a NULL pointer leads to a segfault.
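A rough sketch of that idea, assuming Linux (error handling trimmed; %lx matches the unsigned long addresses used here):

    #include <cstdio>
    #include <cstdint>

    int main() {
        std::FILE* maps = std::fopen("/proc/self/maps", "r");
        if (!maps) return 1;

        char line[512];
        while (std::fgets(line, sizeof line, maps)) {
            unsigned long start = 0, end = 0;
            char perms[5] = {0};
            // Each line starts like: 559d1a2b3000-559d1a2b4000 r-xp ...
            if (std::sscanf(line, "%lx-%lx %4s", &start, &end, perms) != 3)
                continue;
            if (perms[0] != 'r')  // skip regions we are not allowed to read
                continue;
            // Craft a pointer from the raw address and read through it.
            const unsigned char* p = reinterpret_cast<const unsigned char*>(
                static_cast<uintptr_t>(start));
            std::printf("%s  %lx-%lx  first byte: %02x\n",
                        perms, start, end, p[0]);
        }
        std::fclose(maps);
    }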
What is the performance penalty when accessing a data structure if it is located:
In the same process's memory block.
In a shared memory block (including locking, but supposing no other processes access it for a significant amount of time).
I am interested in approximate comparison values (e.g. percentages) for access, read, and write.
All of your process memory is mmaped. It does not matter whether one or more processes map the same physical pages of memory; there is no difference in access speed in this regard.
What matters is whether the memory is located on the local or a remote NUMA node.
See the NUMA benchmarks in Challenges of Memory Management on Modern NUMA Systems.
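For illustration, here is a sketch of how such a shared block is typically obtained on a POSIX system (shm_open may need -lrt on older Linux; the name "/demo_shm" is made up). The point is that once mapped, the pointer is used with ordinary loads and stores, just like private memory:

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        const size_t kSize = 4096;
        // Create (or open) a named shared memory object.
        int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
        if (fd < 0) { perror("shm_open"); return 1; }
        if (ftruncate(fd, kSize) < 0) { perror("ftruncate"); return 1; }

        char* p = static_cast<char*>(mmap(nullptr, kSize,
                                          PROT_READ | PROT_WRITE,
                                          MAP_SHARED, fd, 0));
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        // From here on it is just memory: the same instructions, and
        // therefore the same access speed, as any other mapped page.
        std::strcpy(p, "hello from shared memory");
        std::printf("%s\n", p);

        munmap(p, kSize);
        close(fd);
        shm_unlink("/demo_shm");
    }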
I came across the term "main memory" data structures. For example here and here and here and here. Googling did not give me a definite answer as to what that actually means. I got some references suggesting it means the data structures that are used to store data in persistent memory, i.e. the hard disk. If so, then as I have read, binary trees are used to store data on hard disks. If so, then std::map, which uses a binary tree, would be one candidate for a main memory data structure. What are other examples?
First, your understanding of "main memory" is wrong: main memory refers to the system RAM; a hard drive is secondary storage. And then there are specialized on-chip caches that a program generally has little or no control over.
That being said, all of the STL containers are limited to being memory hosted. Depending on the OS, this can include having parts swapped out of main memory to disk as part of virtual memory, but that, too, a program has little or no control over. And such disk backing lasts only as long as the program is active; it is not persisted after termination.
And while B-trees are a good candidate for being persisted to disk-backed storage, a usual binary tree is not. In this context, B-tree and binary tree are not the same thing.
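To make the distinction concrete, here is a small sketch using std::map, which lives entirely in the process's main memory and vanishes when the process exits:

    #include <cstdio>
    #include <map>
    #include <string>

    int main() {
        // std::map is a main memory data structure: its tree nodes
        // (typically a red-black tree) are heap allocations inside
        // the process's address space.
        std::map<std::string, int> counts;
        counts["apples"] = 3;
        counts["pears"] = 5;

        for (const auto& [name, n] : counts)
            std::printf("%s -> %d\n", name.c_str(), n);

        // When the process terminates, the tree is gone. Persisting it
        // would require explicit serialization to disk, or a structure
        // designed for disk such as a B-tree.
    }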
I was hoping someone had some schooling they could lay down about the whole heap and stack ordeal. I am trying to make a program that would create about 20,000 instances of just one union, and some day I may want to implement a much larger program. Beyond my current project, which consists of a maximum of just 20,000 unions stored wherever C++ will allocate them, do you think I could up the ante into the millions (approximately 1,360,000 or so) while retaining a reasonable return speed on function calls? And how do you think it will handle 20,000?
The heap is an area used for dynamic memory allocation.
It's usually used to allocate space for collections whose size varies, and/or to allocate large amounts of memory. It is definitely not a CPU register.
Beyond this, I think there is no guarantee of what the heap is.
It may be RAM, may be processor cache, even HDD storage; the OS and hardware decide what it will be in a particular case.
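To give a sense of scale, here is a sketch (the union's members are made up for illustration) showing that even millions of small unions are a trivial heap allocation on a modern machine:

    #include <cstdio>
    #include <vector>

    // A made-up 4-byte union, just for illustration.
    union Value {
        int i;
        float f;
    };

    int main() {
        // One heap allocation holding 20,000 unions contiguously (~80 KB).
        std::vector<Value> small(20000);
        small[0].i = 42;

        // 1,360,000 unions of this size is only ~5.4 MB on the heap.
        std::vector<Value> big(1360000);
        big.back().f = 3.14f;

        std::printf("%d %f\n", small[0].i, big.back().f);
    }

Access through the vector is a pointer dereference either way; the count matters far less than the access pattern (sequential scans are cache-friendly, scattered individual allocations are not).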
I have been trying to understand how a buffer is constructed. As I understand it, a buffer is a hardware construct, much as logic gates are (please correct me if I am wrong). So I was wondering whether a buffer is a location/block always fixed by the hardware manufacturer, or whether it could be any location reserved by the software/OS. I mean any buffer, i.e. a data buffer, a cache buffer, etc.
Apologies if my question is a bit vague. I am just trying to understand how a buffer is implemented and at what level.
A buffer is simply a temporary storage facility for passing data between subsystems. The nature of that buffer (and definition of subsystems) depends on how and where it is used.
Hardware (such as a CPU) may implement a memory cache, which is a type of buffer. Being in hardware, the size is pretty much fixed, but the actual size depends on the hardware design.
(Generically) In software a buffer is typically a chunk of memory reserved by the application that is used to temporarily store data generated by a producer and passed to a consumer for processing. It can be a static (fixed) size or expanded/contracted dynamically. It really depends on the application needs and is defined by the developer/designer.
A buffer is typically used for passing data between software and hardware. The most familiar being I/O. Because I/O is typically slow, data is usually buffered in some way to allow the software to continue running without having to wait for the I/O subsystem to finish.
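As a concrete software example, here is the classic I/O buffering pattern: read a file in large buffered chunks rather than one byte at a time (the file name and the 64 KiB size are arbitrary choices):

    #include <cstdio>
    #include <vector>

    int main() {
        std::FILE* f = std::fopen("input.txt", "rb");
        if (!f) return 1;

        // The buffer: a chunk of memory reserved by the application to
        // carry data from the (slow) I/O subsystem to the program.
        std::vector<char> buffer(64 * 1024);

        std::size_t total = 0, n = 0;
        while ((n = std::fread(buffer.data(), 1, buffer.size(), f)) > 0)
            total += n;  // process buffer[0..n) here

        std::printf("read %zu bytes\n", total);
        std::fclose(f);
    }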