UART stream packetisation; stream or vector? - c++

I am writing some code to interface an STM32H7 with a BM64 Bluetooth module over UART.
The BM64 expects binary data in bytes; in general:
1. Start word (0xAA)
2-3. Payload length
4. Message ID
5-n. Payload
n+1. Checksum
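For concreteness, assembling such a frame might look like the sketch below. The big-endian length field and the two's-complement checksum over length, ID and payload are assumptions here; check them against the BM64 datasheet.

#include <cstddef>
#include <cstdint>

// Sketch of a BM64-style frame builder into a caller-provided buffer.
// Byte order of the length field and the checksum rule are assumptions.
std::size_t build_frame(std::uint8_t msg_id,
                        const std::uint8_t* payload, std::uint16_t len,
                        std::uint8_t* out, std::size_t out_cap)
{
    const std::size_t total = 1 + 2 + 1 + len + 1;   // start + len + id + payload + csum
    if (out_cap < total) return 0;                   // caller's buffer too small

    std::size_t i = 0;
    out[i++] = 0xAA;                                 // start byte
    out[i++] = static_cast<std::uint8_t>(len >> 8);  // length, high byte first (assumed)
    out[i++] = static_cast<std::uint8_t>(len & 0xFF);
    out[i++] = msg_id;

    std::uint8_t sum = out[1] + out[2] + out[3];
    for (std::uint16_t k = 0; k < len; ++k) {
        out[i++] = payload[k];
        sum += payload[k];
    }
    out[i++] = static_cast<std::uint8_t>(0x100 - sum);  // two's-complement checksum (assumed)
    return i;                                            // bytes written
}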
My question is around best practice for message queuing, namely:
A custom iostream, message vectors inside an interface class, or something else?
My understanding so far, please correct if wrong or add if something missed:
A custom iostream has the huge benefit of concise usage, inline with cout etc. It is very usable and clean, and most likely portable, at least in principle, to other devices on this project operating on other UART ports. The disadvantage is that writing a custom streambuf is a relatively large amount of work, and I'm not sure what to use as an "endl" equivalent (I can't use null or '\n' as a terminator, since either byte may legitimately occur in a binary message).
Vectors seem a bit dirty to me, and particularly for embedded work the dynamic allocations could be stealing a lot of memory unless I ruthlessly spend cycles on resize() and reserve(). However, a vector of messages (defined as either a class or a struct) would be very quick and easy to do.
Is there another solution? Note, I'd prefer not to use arrays, i.e. passing around buffer pointers and buffer lengths.
What would you suggest in this application?

On bare-metal systems I prefer fixed-size buffers sized for the maximum possible payload: two of them, statically allocated, one being filled while the other is being sent in parallel, switching over when each is finished. All kinds of dynamic memory allocation end in memory fragmentation, especially if the buffer sizes jitter.
Even if your system has an MMU, it may be a good idea not to do much dynamic heap allocation at all. I have often used a hand-written block-pool memory manager to get rid of long-term fragmentation and late allocation failures.
If you are worried about using more RAM than currently needed, think again: if RAM is so tight that you can't afford the maximum buffer size up front, your system may fail exactly when it really does need that maximum. That is never acceptable in embedded work. Which is a good argument for keeping all memory allocated more or less statically, as long as the worst case can really occur "at some point in the future". :-)
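A minimal sketch of that ping-pong scheme, assuming a hypothetical non-blocking uart_start_dma_tx() provided by your HAL and a tx_busy flag cleared from its completion interrupt:

#include <array>
#include <atomic>
#include <cstddef>
#include <cstdint>

constexpr std::size_t kMaxFrame = 64;            // maximum possible frame size

// Two statically allocated buffers: while one is handed to the UART/DMA
// for transmission, the other is free to be filled with the next frame.
static std::array<std::uint8_t, kMaxFrame> buffers[2];
static std::size_t fill_index = 0;               // buffer currently being filled
static std::atomic<bool> tx_busy{false};         // cleared from the TX-complete ISR

// Hypothetical HAL call: starts a background (DMA) transmission.
void uart_start_dma_tx(const std::uint8_t* data, std::size_t len);

std::uint8_t* current_fill_buffer() { return buffers[fill_index].data(); }

// Called when the frame in the fill buffer is complete.
bool submit_frame(std::size_t len)
{
    if (tx_busy.load()) return false;            // previous frame still going out
    tx_busy.store(true);
    uart_start_dma_tx(buffers[fill_index].data(), len);
    fill_index ^= 1;                             // switch over: fill the other one
    return true;
}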

Related

How to allocate a large dynamic array in C++?

So I am currently trying to allocate dynamically a large array of elements in C++ (using "new"). Obviously, when "large" becomes too large (>4GB), my program crashes with a "bad_alloc" exception because it can't find such a large chunk of memory available.
I could allocate each element of my array separately and then store the pointers to these elements in a separate array. However, time is critical in my application so I would like to avoid as much cache misses as I can. I could also group some of these elements into blocks but what would be the best size for such a block?
My question is then: what is the best way (time-wise) to dynamically allocate a large array of elements such that the elements do not have to be stored contiguously but must be accessible by index (using [])? This array is never going to be resized; no elements are going to be inserted into it or deleted from it.
I thought I could use std::deque for this purpose, knowing that the elements of an std::deque might or might not be stored contiguously in memory but I read there are concerns about the extra memory this container takes?
Thank you for your help on this!
If your problem is that you actually run out of memory, allocating fairly small blocks (as deque does) is not going to help; the overhead of tracking the allocations will only make the situation worse. You need to rethink your implementation so that you can deal with the data in blocks that still fit in memory. For such problems, on x86- or x64-based hardware I would suggest blocks of at least 2 megabytes (the large-page size).
Obviously, when "large" becomes too large (>4GB), my program crashes with a "bad_alloc" exception because it can't find such a large chunk of memory available.
You should be using a 64-bit CPU and OS at this point; allocating a huge contiguous chunk of memory should not be a problem unless you are actually running out of memory. It is possible that you are building a 32-bit program, in which case you won't be able to allocate more than 4 GB; build a 64-bit application instead.
If you want something better than plain operator new, your question is OS-specific. Look at the API provided by your OS: on POSIX systems look for mmap, and on Windows for VirtualAlloc.
There are multiple problems with large allocations:
For security reasons the OS kernel never gives you pages filled with leftover garbage; all new memory is zero-initialized. This means you don't have to initialize that memory yourself, as long as zeroes are exactly what you want.
The OS gives you real memory lazily, on first access. If you are processing a large array, you might waste a lot of time taking page faults. To avoid this you can use MAP_POPULATE on Linux. On Windows you can try PrefetchVirtualMemory (but I am not sure it can do the job). This makes the initial allocation slower, but should decrease the total time spent in the kernel.
Working with large chunks of memory wastes slots in the Translation Lookaside Buffer (TLB). Depending on your memory access pattern, this can cause a noticeable slowdown. To avoid it you can try using large pages (mmap with MAP_HUGETLB, MAP_HUGE_2MB, MAP_HUGE_1GB on Linux; VirtualAlloc with MEM_LARGE_PAGES on Windows); there is a sketch of the mmap variant at the end of this answer. Using large pages is not easy, as they are usually not available by default. They also cannot be swapped out (they are always "locked in memory"), so using them requires privileges.
If you don't want to use OS-specific functions, the best you can find in C++ is std::calloc. Unlike std::malloc or operator new, it returns zero-initialized memory, so you can probably avoid wasting time initializing it. Other than that, there is nothing special about the function; it is simply the closest you can get while staying within standard C++.
There are no standard containers designed to handle large allocations; moreover, all the standard containers are quite bad at handling those situations.
Some OSes (like Linux) overcommit memory; others (like Windows) do not. Windows might refuse to give you memory if it knows it won't be able to satisfy your request later. To avoid this you might want to increase your page file. Windows needs to reserve that space on disk beforehand, but that does not mean it will use it (start swapping). Since actual memory is given to programs lazily, a lot of memory may be reserved for applications without ever actually being given to them.
If increasing the page file is too inconvenient, you can try creating a large file and mapping it into memory. The file will serve as a "page file" for your memory. See CreateFileMapping and MapViewOfFile.
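Pulling the Linux-specific points together, a pre-faulted large allocation with an optional huge-page attempt might look like this sketch (MAP_HUGETLB fails unless huge pages have been configured on the system, hence the fallback):

#include <sys/mman.h>
#include <cstddef>
#include <cstdio>

// Sketch: allocate `size` bytes directly from the kernel, pre-faulted
// (MAP_POPULATE) and, if available, backed by huge pages.
void* alloc_large(std::size_t size)
{
    const int flags = MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE;

    // Try huge pages first; the size should be a multiple of the huge-page size.
    void* p = mmap(nullptr, size, PROT_READ | PROT_WRITE,
                   flags | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED) {
        // Fall back to normal 4 KB pages.
        p = mmap(nullptr, size, PROT_READ | PROT_WRITE, flags, -1, 0);
    }
    return (p == MAP_FAILED) ? nullptr : p;  // memory arrives zero-filled
}

int main()
{
    const std::size_t size = std::size_t{1} << 30;   // 1 GiB
    void* p = alloc_large(size);
    std::printf("allocated at %p\n", p);
    if (p) munmap(p, size);
}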
The answer to this question is extremely application- and platform-dependent. These days, if you just need a small integer factor more than 4GB, you use a 64-bit machine, if possible. Sometimes reducing the size of the elements in the array is possible as well (e.g. using 16-bit fixed-point or half-float instead of 32-bit float).
Beyond this, you are either looking at sparse arrays or out-of-core techniques. Sparse arrays are used when you are not actually storing elements at all locations in the array. There are many possible implementations and which is best depends on both the distribution of the data and the access pattern of the algorithm. See Eigen for example.
Out-of-core involves explicitly reading and writing parts of the array to/from disk. This used to be fairly common, but people work pretty hard to avoid doing this now. Applications that really require such are often built on top of a database or similar to handle the data management. In scientific computing, one ends up needing to distribute the compute as well as the data storage so there's a lot of complexity around that as well. For important problems the entire design may be driven by having good locality of reference.
Any sparse data structure will have overhead in how much space it takes. This can be fairly low, but it means you have to be careful if you actually have a dense array and are simply looking to avoid memory fragmentation.
If your problem can be broken into smaller pieces that only access part of the array at a time, and the main issue is memory fragmentation making it hard to allocate one large block, then breaking the array into pieces, effectively adding an outer vector of pointers, is a good bet; a sketch follows below. If you have random access to an array larger than 4 gigabytes and no way to localize the accesses, 64-bit is the way to go.
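A sketch of that outer-vector-of-pointers layout, using the 2 MB block size suggested earlier:

#include <cstddef>
#include <memory>
#include <vector>

// Sketch: a large array stored as fixed-size blocks, indexable with [].
// Elements are not contiguous across blocks, but lookups stay O(1).
template <typename T, std::size_t BlockBytes = 2 * 1024 * 1024>
class ChunkedArray {
    static constexpr std::size_t kPerBlock = BlockBytes / sizeof(T);
    std::vector<std::unique_ptr<T[]>> blocks_;
    std::size_t size_;
public:
    explicit ChunkedArray(std::size_t n) : size_(n) {
        const std::size_t nblocks = (n + kPerBlock - 1) / kPerBlock;
        blocks_.reserve(nblocks);
        for (std::size_t b = 0; b < nblocks; ++b)
            blocks_.emplace_back(new T[kPerBlock]{});   // zero-initialized
    }
    T& operator[](std::size_t i) {
        return blocks_[i / kPerBlock][i % kPerBlock];
    }
    std::size_t size() const { return size_; }
};

Indexing costs one division and one modulo (just shifts if the per-block count is a power of two), and each block is small enough to allocate even from a fragmented heap.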
Depending on what you need the memory for and how much speed matters, and if you're using Linux, you can always try using mmap to simulate a sort of swap. It might be slower, but you can map very large sizes. See Mmap() an entire large file.

Loading large amount of binary data into RAM

My application needs to load from MegaBytes to dozens of GigaBytes of binary data (multiple files) into RAM. After some search, I decided to use std::vector<unsigned char> for this purpose, although I am not sure it's the best choice.
I would use one vector per file. Since the application knows each file's size in advance, it would call reserve() to allocate memory for it. Sometimes the application might need to read a file fully and sometimes only part of it, and vector's iterators are nice for that. It may need to unload a file from RAM and put another in its place; std::vector::swap() and std::vector::shrink_to_fit() would be very useful. I don't want the hard work of dealing with low-level memory allocation stuff (otherwise I would go with C).
I've some questions:
The application must load as many files from a list into RAM as it can. How would it know whether there is enough memory space to load one more file? Should it call reserve() and look for errors? How? The reference only says reserve() throws an exception when the requested size is greater than std::vector::max_size().
Is std::vector<unsigned char> applicable for getting such a large amount of binary data into RAM? I'm worried about std::vector::max_size, since its reference says the value depends on system or implementation limitations. I presume the system limitation is free RAM; is that right? So, no problem there. But what about implementation limitations? Is there anything about particular implementations that could prevent me from doing what I want? If so, please give me an alternative.
And what if I want to use the entire RAM space except N gigabytes? Is the best way really to use sysinfo() and deduce from free RAM whether it is possible to load each file?
Obs.: This section of the application must get the best performance possible (low processing time/CPU usage and RAM consumption). I would appreciate your help.
How would it know if there is enough memory space to load one more file?
You wouldn't know beforehand. Wrap the loading process in try/catch. If memory runs out, a std::bad_alloc will be thrown (assuming you use the default allocators). Assume that memory is sufficient in the loading code, and deal with the lack of memory in the exception handler.
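A sketch of that pattern; load_file() here is a hypothetical helper that reads a whole file into a vector:

#include <cstddef>
#include <fstream>
#include <iostream>
#include <new>
#include <stdexcept>
#include <string>
#include <vector>

// Hypothetical helper: read a whole file into a vector. The vector
// constructor below may throw std::bad_alloc when memory runs out.
std::vector<unsigned char> load_file(const std::string& path)
{
    std::ifstream in(path, std::ios::binary | std::ios::ate);
    if (!in) throw std::runtime_error("cannot open " + path);
    const std::streamsize size = in.tellg();
    in.seekg(0);
    std::vector<unsigned char> data(static_cast<std::size_t>(size));
    in.read(reinterpret_cast<char*>(data.data()), size);
    return data;
}

void load_as_many_as_possible(const std::vector<std::string>& paths,
                              std::vector<std::vector<unsigned char>>& loaded)
{
    for (const auto& path : paths) {
        try {
            loaded.push_back(load_file(path));    // assume memory suffices
        } catch (const std::bad_alloc&) {
            std::cerr << "out of memory, stopped before " << path << '\n';
            break;                                // deal with the shortage here
        }
    }
}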
But what about implementation limitations?
...
Is there anything about particular implementations that could prevent me from doing what I want?
You can check std::vector::max_size at run time to verify.
If the program is compiled with a 64 bit word size, then it is quite likely that the vector has sufficient max_size for a few hundred gigabytes.
This section of the application must get the best performance possible
This conflicts with
I don't want the hard work of dealing with low-level memory allocation stuff
But in case low level memory stuff is worth it for the performance, you could memory-map the file into memory.
I've read in some SO questions that exceptions should be avoided in applications that need high performance, preferring return values, errno, etc.
Unfortunately for you, non-throwing memory allocation is not an option if you use the standard containers. If you are allergic to exceptions, then you must use another implementation of a vector - or whatever container you decide to use. You don't need any container with mmap, though.
Won't handling exceptions break performance?
Luckily for you, run time cost of exceptions is insignificant compared to reading hundreds of gigabytes from disk.
Might it be better to run sysinfo() and check free RAM before loading a file?
A sysinfo call may very well be slower than handling an exception (I haven't measured; that is just a conjecture), and it won't tell you about process-specific limits that may exist.
Also, it looks hard and costly to repeatedly try to load a file, catch an exception, and then try to load a smaller file (does that require recursion?)
No recursion is needed. You can use it if you prefer; it can be written as a tail call, which can be optimized away.
About memory mapping: I took a look at it some time ago and found it tedious to deal with. It would require using C's open() and all that stuff, and saying goodbye to std::fstream.
Once you have mapped the memory, it is easier to use than std::fstream. You can skip the copying into vector part, and simply use the mapped memory as if it was an array that already exists in memory.
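For example, on POSIX the mapped file really can be used like an array that is already in memory (a sketch, with only minimal error handling):

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cstdio>

int main(int argc, char** argv)
{
    if (argc < 2) return 1;
    const int fd = open(argv[1], O_RDONLY);
    if (fd < 0) return 1;

    struct stat st;
    if (fstat(fd, &st) != 0) return 1;

    // Map the whole file read-only: no copy into a vector needed.
    void* p = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);                                  // the mapping survives close()
    if (p == MAP_FAILED) return 1;

    // Use it like an array that already exists in memory; pages are
    // faulted in on demand as they are touched.
    const auto* data = static_cast<const unsigned char*>(p);
    unsigned long sum = 0;
    for (off_t i = 0; i < st.st_size; ++i) sum += data[i];
    std::printf("byte sum: %lu\n", sum);

    munmap(p, st.st_size);
}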
It looks like the best way of partially reading a file using std::fstream is to derive from std::streambuf
I don't see why you would need to derive anything. Just use std::basic_fstream::seekg() to skip to the part that you wish to read.
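For example, a partial read with nothing derived:

#include <cstddef>
#include <fstream>
#include <string>
#include <vector>

// Read `count` bytes starting at `offset`; no custom streambuf required.
std::vector<unsigned char> read_slice(const std::string& path,
                                      std::streamoff offset, std::size_t count)
{
    std::ifstream in(path, std::ios::binary);
    in.seekg(offset);                                  // skip to the region
    std::vector<unsigned char> buf(count);
    in.read(reinterpret_cast<char*>(buf.data()),
            static_cast<std::streamsize>(count));
    buf.resize(static_cast<std::size_t>(in.gcount())); // shrink on short read
    return buf;
}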
As an addition to #user2097303's answer, I want to add that vector guarantees contiguous allocation. For long-running applications this can result in memory fragmentation, until in the end no sufficiently large contiguous block of memory is available anymore, even though plenty of space is free between blocks.
Therefore it may be a good idea to store your data in a deque instead.

Is it possible to determine how much space is available on the stack?

I'm architecting a small software engine and I'd like to make extensive use of the stack for rapid iteration over large number sets. But then it occurred to me that this might be a bad idea, since the stack isn't as large a memory store as the heap. But I am attracted to the stack's speed and its freedom from dynamic-allocation coding practices.
Is there a way to find out how far I can push the stack on a given platform? I am looking mainly at mobile devices but the issue could come up on any platform.
On *nix, use getrlimit:
RLIMIT_STACK
The maximum size of the process stack, in bytes. Upon reaching this limit, a SIGSEGV signal is generated. To handle this signal, a process must employ an alternate signal stack (sigaltstack(2)).
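For example:

#include <sys/resource.h>
#include <cstdio>

int main()
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_STACK, &rl) == 0) {
        // rlim_cur is the soft limit in bytes; RLIM_INFINITY means unlimited.
        if (rl.rlim_cur == RLIM_INFINITY)
            std::printf("stack size: unlimited\n");
        else
            std::printf("stack size: %llu bytes\n",
                        static_cast<unsigned long long>(rl.rlim_cur));
    }
}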
On Windows, use VirtualQuery:
For the first call, pass it the address of any value on the stack to get the base address and size, in bytes, of the committed stack space. On an x86 machine where the stack grows downwards, subtract the size from the base address and VirtualQuery again: this will give you the size of the space reserved for the stack (assuming you're not precisely on the limit of stack size at the time). Summing the two naturally gives you the total stack size.
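A sketch following that recipe (Windows-only, and it assumes you are not right at the stack limit):

#include <windows.h>
#include <cstddef>
#include <cstdio>

int main()
{
    int probe = 0;                         // any value that lives on the stack
    MEMORY_BASIC_INFORMATION mbi;

    // First call: the region containing `probe` is the committed stack.
    VirtualQuery(&probe, &mbi, sizeof mbi);
    const char* base = static_cast<const char*>(mbi.BaseAddress);
    const SIZE_T committed = mbi.RegionSize;

    // Second call, per the recipe above: query below the committed region
    // to find the space still reserved for stack growth.
    MEMORY_BASIC_INFORMATION res;
    VirtualQuery(base - committed, &res, sizeof res);

    std::printf("committed: %zu, reserved: %zu, total: %zu bytes\n",
                static_cast<std::size_t>(committed),
                static_cast<std::size_t>(res.RegionSize),
                static_cast<std::size_t>(committed + res.RegionSize));
}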
There is no platform-independent method, since stack size is logically left to the implementation and the host system: on an embedded mini-SoC there are fewer resources to distribute than on a 128GB RAM server. You can, however, influence the stack size of a specific thread on all OSes with API-specific calls.
A possible portable solution is to write an allocator yourself.
You do not have to use the process stack; just simulate it in the heap.
Allocate a large amount of memory in the beginning, and write a stack allocator on top of it to use it while allocating.
Google 'Allocator Requirements' for information on how to achieve this in C++.
I'm not sure whether the term 'stack allocator' is canonical, but I mean that you have to put stack-like restrictions on where allocation and deallocation may happen.
Since you said that your algorithm is suited to this pattern, I think it'd be easy.
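A minimal sketch of such a simulated stack: a bump allocator over one big heap block, with the stack-like restriction that frees happen in LIFO order.

#include <cassert>
#include <cstddef>
#include <cstdlib>

// Sketch: a "stack" simulated in the heap. Allocation bumps a pointer;
// deallocation must happen in reverse (LIFO) order, like a real stack.
// A real implementation would check for malloc failure.
class SimulatedStack {
    char* base_;
    std::size_t capacity_;
    std::size_t top_ = 0;
public:
    explicit SimulatedStack(std::size_t bytes)
        : base_(static_cast<char*>(std::malloc(bytes))), capacity_(bytes) {}
    ~SimulatedStack() { std::free(base_); }

    void* push(std::size_t n) {
        // Round up so every allocation stays maximally aligned.
        n = (n + alignof(std::max_align_t) - 1) & ~(alignof(std::max_align_t) - 1);
        if (top_ + n > capacity_) return nullptr;    // "stack overflow"
        void* p = base_ + top_;
        top_ += n;
        return p;
    }
    void pop_to(void* p) {                           // LIFO restriction
        assert(p >= base_ && p <= base_ + top_);
        top_ = static_cast<char*>(p) - base_;
    }
};

SimulatedStack s(64 * 1024 * 1024); then gives you a 64 MB "stack" regardless of what the OS granted the real one.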
In standard C++, definitely not. In a portable way, probably not. On a particular OS, sometimes. If nothing else, you could open your own executable and inspect the headers of the executable file to see its stack size. [The next problem is of course "how much of the stack was used before this bit of code", which can be difficult to determine.]
If you run the code in a separate thread, many of the (low-level) thread interfaces allow you to specify a stack or stack size, e.g. POSIX threads' pthread_attr_setstacksize or MS _beginthread. Again, you don't know EXACTLY how much space has been used before the actual thread code is reached, but it's probably not a huge amount.
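For example, with POSIX threads (pthread_attr_setstacksize is the standard call; the 64 MB figure is arbitrary):

#include <pthread.h>
#include <cstdio>

void* worker(void*)
{
    // Free to use deep recursion / large locals up to ~64 MB here.
    return nullptr;
}

int main()
{
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, 64 * 1024 * 1024);  // 64 MB stack

    pthread_t t;
    pthread_create(&t, &attr, worker, nullptr);
    pthread_join(t, nullptr);
    pthread_attr_destroy(&attr);
}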
Of course, in an embedded system (e.g. a mobile phone), the stack size is typically quite small: 4K, 12K, or 64KB is normal, and in some systems it is a lot smaller than that.
Another potential problem is that you can't really know how much space is ACTUALLY used on the stack. You can measure after the fact in a compiled system, and of course if you have a stack-local array int array[25];, you know it takes up at least 25 * sizeof(int); but there may be padding, the compiler saves registers on the stack, and so on.
Edit, as an afterthought:
I also don't really see much benefit in having two code-paths:
if (enough_stack_space_for_something)
use_stack_based_algorithm();
else
use_heap_based_algorithm();
This would add a fair amount of extra overhead, and more code is generally not a good plan in an embedded/mobile system.
Edit 2: Also, if allocating memory is a major part of the runtime, perhaps look at why that is; would creating objects in blocks help, for example?
To expand on the answers already given about why there is no portable way to do this, the entire concept of an actual stack is not part of the standard. You could write a C or C++ runtime that doesn't use a stack at all other than the function call records (which might internally be a linked list or something else).
The stack is an implementation detail of a particular machine/OS/compiler. Hence any technique to access stack metrics will be specific to machine/OS/compiler.
While this is not an actual answer to your specific question (Niels covered that quite well), it is advice for your problem domain: just allocate a large chunk of memory in the heap. There's no reason, aside from convenience, that the "real" stack is any different. Highly recursive (non-tail-recursive) algorithms often need to do this to ensure they have a virtually unbounded "stack". Scripting languages that want to give a runtime error/exception rather than crash the host application also often do this. To be efficient about it, you can either implement a "split stack" (like a std::deque would give you) or you can just be sure to preallocate a stack big enough for your needs.
There's no standard way to do it from within the language. I'm not even aware of a documented extension that is able to query.
However, some compilers have options to set the stack size, and a platform may specify what it does when launching a process, and/or provide ways to set the stack size of a new thread, maybe even to manipulate an existing one.
On small platforms it's usual to know the whole memory size, put all the data segments at one end, reserve a fixed-size arena for the heap (possibly zero bytes), and let the rest be stack, approaching from the other side.

Reading from a socket into a buffer

This question might seem simple, but I think it's not so trivial. Or maybe I'm overthinking this, but I'd still like to know.
Let's imagine we have to read data from a TCP socket until we encounter some special character. The data has to be saved somewhere. We don't know the size of the data, so we don't know how large to make our buffer. What are the possible options in this case?
Extend the buffer as more data arrives using realloc. This approach raises a few questions. What are the performance implications of using realloc? It may move memory around, so if there's a lot of data in the buffer (and there can be a lot of data), we'll spend a lot of time moving bytes around. How much should we extend the buffer size? Do we double it every time? If yes, what about all the wasted space? If we call realloc later with a smaller size, will it truncate the unused bytes?
Allocate new buffers in constant-size chunks and chain them together. This would work much like the deque container from the C++ standard library, allowing new data to be appended quickly. It also raises some questions, like how big the blocks should be and what to do with the unused space, but at least it has good performance.
What is your opinion on this? Which of these two approaches is better? Maybe there is some other approach I haven't considered?
P.S.:
Personally, I'm leaning more towards the second solution, because I think it can be made pretty fast if we "recycle" the blocks instead of doing dynamic allocations every time a block is needed. The only problem I can see with it is that it hurts locality, but I don't think that it's terribly important for my purposes (processing HTTP-like requests).
Thanks
I'd prefer the second variant. You may also consider using just one raw buffer and processing the received data before you receive the next batch from the socket, i.e. start processing the data before you encounter the special character.
In any case I would not recommend using raw memory and realloc; use std::vector, which manages its own reallocation, or std::array as a fixed-size buffer.
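A sketch of the vector-based variant with POSIX recv(); the 0x03 delimiter would be a stand-in for your special character:

#include <sys/socket.h>
#include <sys/types.h>
#include <algorithm>
#include <vector>

// Sketch: read from socket `fd` until `delim` is seen, letting std::vector
// do the growing (geometric growth, so repeated inserts stay cheap).
// Bytes received after the delimiter stay in `msg`; a real parser would
// carry them over to the next message.
bool read_until(int fd, unsigned char delim, std::vector<unsigned char>& msg)
{
    unsigned char chunk[4096];                     // fixed-size receive chunk
    for (;;) {
        const ssize_t n = recv(fd, chunk, sizeof chunk, 0);
        if (n <= 0) return false;                  // error or peer closed
        msg.insert(msg.end(), chunk, chunk + n);
        if (std::find(chunk, chunk + n, delim) != chunk + n)
            return true;                           // delimiter arrived
    }
}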
You may also be interested in Boost.Asio's socket_iostreams, which provide another abstraction layer above the raw buffer.
Method 2 sounds better; however, there may be significant ramifications for your parser. Once you find your special marker, dealing with non-contiguous buffers while parsing HTTP requests may end up being more costly or complex than reallocating a large buffer (method 1). Net-net: if your parser is trivial, go with 2; if not, go with 1.

Understanding the efficiency of an std::string

I'm trying to learn a little bit more about C++ strings.
consider
const char* cstring = "hello";
std::string string(cstring);
and
std::string string("hello");
Am I correct in assuming that both store "hello" in the .data section of an application and the bytes are then copied to another area on the heap where the pointer managed by the std::string can access them?
How could I efficiently store a really, really long string? I'm thinking of an application that reads data in from a socket stream. I fear concatenating many times. I could imagine using a linked list and traversing it.
Strings have intimidated me for far too long!
Any links, tips, explanations, further details, would be extremely helpful.
I have stored strings in the tens or hundreds of MB range without issue. Naturally, the size will be primarily limited by your available (contiguous) memory/address space.
If you are going to be appending/concatenating, a few things may help efficiency-wise: if possible, use the reserve() member function to pre-allocate space. Even a rough idea of how big the final size might be will save unnecessary reallocations as the string grows.
Additionally, many string implementations use "exponential growth", meaning that they grow by some percentage rather than by a fixed byte count. For example, an implementation might simply double the capacity any time additional space is needed. Growing exponentially makes performing lots of concatenations much more efficient. (The exact details depend on your STL implementation.)
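For example, a sketch combining both points (the expected_total estimate is whatever your application can guess up front):

#include <cstddef>
#include <string>

// Sketch: build one large string from many pieces. reserve() makes it a
// single allocation when the estimate is right; if the estimate is low,
// the string still grows geometrically rather than per-append.
std::string assemble(const std::string* pieces, std::size_t n,
                     std::size_t expected_total)
{
    std::string result;
    result.reserve(expected_total);   // pre-allocate, as suggested above
    for (std::size_t i = 0; i < n; ++i)
        result += pieces[i];          // usually appends without reallocating
    return result;
}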
Finally, another option (if your library supports it) is the rope<> template: ropes are similar to strings, except that they are much more efficient when performing operations on very large strings. In particular, "ropes are allocated in small chunks, significantly reducing memory fragmentation problems introduced by large blocks". There are additional details in SGI's STL guide.
Since you're reading the string from a socket, you can reuse the same packet buffers and chain them together to represent the huge string. This will avoid any needless copying and is probably the most efficient solution possible. I seem to remember that the ACE library provides such a mechanism. I'll try to find it.
EDIT: ACE has ACE_Message_Block that allows you to store large messages in a linked-list fashion. You almost need to read the C++ Network Programming books to make sense of this colossal library. The free tutorials on the ACE website really suck.
I bet Boost.Asio must be capable of doing the same thing as ACE's message blocks. Boost.Asio now seems to have a larger mindshare than ACE, so I suggest looking for a solution within Boost.Asio first. If anyone can enlighten us about a Boost.Asio solution, that would be great!
It's about time I try writing a simple client-server app using Boost.Asio to see what all the fuss is about.
I don't think efficiency should be the issue. Both will perform well enough.
The deciding factor here is encapsulation. std::string is a far better abstraction than char * could ever be. Encapsulating pointer arithmetic is a good thing.
A lot of people thought long and hard to come up with std::string. I think failing to use it for unfounded efficiency reasons is foolish. Stick to the better abstraction and encapsulation.
As you probably know, an std::string is really just another name for basic_string<char>.
That said, it is a sequence container, and its memory is allocated contiguously. It's possible to get an exception from a std::string if you try to make one bigger than the largest contiguous block you can allocate. That threshold is typically considerably less than the total available memory, due to memory fragmentation.
I've seen problems allocating contiguous memory when trying to allocate large contiguous 3D image buffers, for instance. But in my experience those issues didn't appear until allocations on the order of 100MB or so, on Windows XP Pro for example.
Are your strings this big?