I have been trying to understand how a buffer is constructed. As I understand it, a buffer is a hardware construct, much as logic gates are (please correct me if I am wrong). So I was wondering whether a buffer is a location/block always fixed by the hardware manufacturer, or whether it could be any location reserved by the software/OS. I mean any buffer, i.e. a data buffer, a cache buffer, etc.
Apologies if my question is a bit vague. I am just trying to understand how a buffer is implemented and at what level.
A buffer is simply a temporary storage facility for passing data between subsystems. The nature of that buffer (and definition of subsystems) depends on how and where it is used.
Hardware (such as a CPU) may implement a memory cache, which is a type of buffer. Being in hardware, its size is pretty much fixed, but the actual size depends on the hardware design.
(Generically) In software a buffer is typically a chunk of memory reserved by the application that is used to temporarily store data generated by a producer and passed to a consumer for processing. It can be a static (fixed) size or expanded/contracted dynamically. It really depends on the application needs and is defined by the developer/designer.
A buffer is typically used for passing data between software and hardware. The most familiar being I/O. Because I/O is typically slow, data is usually buffered in some way to allow the software to continue running without having to wait for the I/O subsystem to finish.
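For illustration, here is a minimal sketch of the software case in C++ (the file name "input.dat" and the 4 KB size are arbitrary placeholders): the application reserves a chunk of memory as the buffer, the I/O subsystem acts as the producer filling it, and the program acts as the consumer draining it.

#include <cstddef>
#include <vector>
#include <fstream>
#include <iostream>

int main() {
    std::ifstream in("input.dat", std::ios::binary);
    std::vector<char> buffer(4096);   // the buffer: 4 KB of memory reserved by the application

    std::size_t total = 0;
    while (in) {
        // Producer: the I/O subsystem fills the buffer (possibly only partially at EOF).
        in.read(buffer.data(), static_cast<std::streamsize>(buffer.size()));
        std::streamsize got = in.gcount();
        if (got <= 0) break;

        // Consumer: process buffer[0 .. got-1] here.
        total += static_cast<std::size_t>(got);
    }
    std::cout << "Read " << total << " bytes\n";
}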
I'm working on software using a library with huge memory usage (e.g. LightGBM).
My data-science software has the ability to reduce RAM usage dynamically when data is not being used, and to reload it from disk when necessary; to sum up, a kind of advanced, configurable swap.
Therefore, when I call external code, I expect its memory usage to follow roughly the same requirements.
When working on a huge dataset, memory usage can grow well beyond the available memory; the idea is to limit memory usage to avoid being stuck at 100% memory usage.
I don't want to modify the memory management inside LightGBM's code, because that would mean pinning a specific version and re-adapting the code each time I want to update. So, in my software, can I programmatically restrict (and later release) the physical RAM usage of my application, to force swapping?
Expected pseudo-code:
some_function_before();
some_API::please_use_swap(/*threshold=*/16);
some_process_with_heavily_memory_usage();
some_API::end_requirement();
some_function_after();
If there is another approach to solve this, I'll take it, of course.
Thanks.
There's such an API on Windows: SetProcessWorkingSetSize. You state how much physical RAM you want to use; the rest could be paged out.
As is normal, this is just a hint. Windows may decide that there's plenty of RAM and ignore your hint altogether.
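A minimal sketch of how that call might be wrapped around a memory-heavy section (Windows only, 64-bit build assumed; the 1 GB / 16 GB figures are example values, not anything taken from LightGBM):

#include <windows.h>
#include <iostream>

int main() {
    HANDLE self = GetCurrentProcess();

    // Hint that at most ~16 GB of this process should stay resident in RAM.
    SIZE_T minWorkingSet = 1ull  * 1024 * 1024 * 1024;  //  1 GB
    SIZE_T maxWorkingSet = 16ull * 1024 * 1024 * 1024;  // 16 GB
    if (!SetProcessWorkingSetSize(self, minWorkingSet, maxWorkingSet)) {
        std::cerr << "SetProcessWorkingSetSize failed: " << GetLastError() << '\n';
    }

    // ... the memory-heavy work (e.g. the LightGBM call) would go here ...

    // Passing (SIZE_T)-1 for both values asks Windows to trim the working set
    // as much as possible once the heavy section is over.
    SetProcessWorkingSetSize(self, (SIZE_T)-1, (SIZE_T)-1);
    return 0;
}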
Recently, I came across this video showing how to use mmap() with file I/O. However, I can't find the video of his that documents the function. I don't understand what it is, why it exists, or how it relates to files.
Too much of the jargon is flying over my head to make sense of it. I had the same problem with sites like Wikipedia.
Files are arrays of bytes stored in a filesystem.
"Memory" in this case is an array of bytes stored in RAM.
Memory mapping is something that an operating system does: it gives some range of bytes in a process's memory a special meaning.
A memory-mapped file is generally a file in the file system that the operating system has mapped to some range of bytes in the memory of a process. When the process writes to that memory, the operating system takes care that the bytes are written to the file, and when the process reads from that memory, the operating system takes care that the bytes are read from the file.
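A minimal POSIX sketch (the file name "example.txt" is just a placeholder): after mmap() the file contents can be read through an ordinary pointer, and the operating system fills those pages from the file on demand.

#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int fd = open("example.txt", O_RDONLY);
    if (fd == -1) { perror("open"); return 1; }

    struct stat sb;
    if (fstat(fd, &sb) == -1) { perror("fstat"); return 1; }

    // Ask the OS to map the whole file, read-only, into our address space.
    char* data = static_cast<char*>(
        mmap(nullptr, sb.st_size, PROT_READ, MAP_PRIVATE, fd, 0));
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    // The file now looks like ordinary memory: data[0] .. data[sb.st_size - 1].
    fwrite(data, 1, sb.st_size, stdout);

    munmap(data, sb.st_size);
    close(fd);
    return 0;
}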
I came across the term "main memory" data structures. For example here and here and here and here. Googling did not give me a definite answer as to what it actually means. I found some references suggesting it means data structures used to store data in persistent storage, i.e. on the hard disk. If so, then, as I have read, binary trees are used to store data on hard disks. If so, then std::map, which uses a binary tree, would be one candidate for a main-memory data structure. What are other examples?
First, your understanding of "main memory" is wrong: main memory refers to the system RAM, while a hard drive is secondary storage. And then there are specialized on-chip caches that a program generally has little or no control over.
That being said, all of the STL containers are limited to being memory hosted. Depending on the OS, this can include having parts swapped out of main memory to disk as part of virtual memory, but that, too, is something a program has little or no control over. Such disk backing lasts only as long as the program is active; it is not persisted after termination.
And while B-trees are a good candidate for being persisted to disk-backed storage, a plain binary tree is not. In this context, a B-tree and a binary tree are not the same thing.
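For example (a minimal sketch, with a placeholder file name), an std::map lives entirely in RAM, and persisting its contents is a separate, explicit step:

#include <map>
#include <string>
#include <fstream>

int main() {
    std::map<std::string, int> counts;   // a main-memory data structure: lives entirely in RAM
    counts["apples"] = 3;
    counts["pears"]  = 7;

    // The map disappears when the program exits; writing it to disk must be done explicitly.
    std::ofstream out("counts.txt");
    for (const auto& [key, value] : counts)
        out << key << ' ' << value << '\n';
    return 0;
}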
So, I'm currently working on a project where Protocol Buffers is used extensively, mainly as a way to store complex objects in a key-value database.
Would a migration to Flat Buffers provide a considerable benefit in terms of performance?
More generally, is there ever a good reason to use Protocol Buffers instead of FlatBuffers?
Protocol buffers are optimized for space consumption on the wire, so for archival and storage, they are very efficient. However, the complex encoding is expensive to parse, and so they are computationally expensive, and the C++ API makes heavy use of dynamic allocations. Flat buffers, on the other hand, are optimized for efficient parsing and in-memory representation (e.g. offering zero-copy views of the data in some cases).
It depends on your use case which of those aspects is more important to you.
Quoting from the FlatBuffers page:
Why not use Protocol Buffers, or .. ?
Protocol Buffers is indeed relatively similar to FlatBuffers, with the primary difference being that FlatBuffers does not need a parsing/unpacking step to a secondary representation before you can access data, often coupled with per-object memory allocation. The code is an order of magnitude bigger, too. Protocol Buffers has neither optional text import/export nor schema language features like unions.
I was hoping someone had some schooling they could lay down about the whole heap and stack ordeal. I am trying to make a program that would create about 20,000 instances of just one union, and if that works out, some day I may want to implement a much larger program. Other than my current project consisting of a maximum of just 20,000 unions stored wherever C++ will allocate them, do you think I could up the ante into the millions (approximately 1,360,000 or so) while retaining a reasonable return speed on function calls? And how do you think it will handle 20,000?
The heap is an area used for dynamic memory allocation.
It's usually used to allocate space for collections of variable size, and/or to allocate large amounts of memory. It's definitely not a CPU register.
Beyond this, I don't think there is any guarantee about what the heap physically is.
It may be RAM, processor cache, or even HDD storage (via swap). Let the OS and hardware decide what it will be in a particular case.
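To put the numbers in perspective, 20,000 small unions is a trivial amount of heap memory. A minimal sketch (the union layout here is made up for illustration):

#include <vector>
#include <iostream>

union Value {
    int   i;
    float f;
    char  bytes[8];
};

int main() {
    // std::vector keeps its elements in one heap-allocated block: ~160 KB here.
    std::vector<Value> values(20000);
    values[0].i = 42;

    std::cout << "Elements: " << values.size()
              << ", approx. bytes on the heap: "
              << values.size() * sizeof(Value) << '\n';
    return 0;
}

Even a few million such unions would only be tens of megabytes, which modern hardware handles easily; access speed depends far more on how you use them (locality, allocation pattern) than on the raw count.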