I have to write an audio buffer that is filled/read progressively.
For now I'm using
m_outputBuffer.erase(
m_outputBuffer.begin(),
m_outputBuffer.begin()+read_samples);
when read_samples samples have been read from the buffer (I have to clear it to free RAM).
But I know erase() is very expensive, so what alternative do I have, considering I basically only have to move the pointer to the first element of my buffer and free the beginning?
std::deque appears to be the container best suited for something like this. Like std::vector, std::deque is a random-access container, but unlike std::vector it has (amortized) constant-time insertion and deletion at the beginning of the container as well as at the end.
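For illustration, a minimal sketch of that pattern, assuming the buffer holds float samples; the helper names are placeholders, only m_outputBuffer comes from the question:

#include <cstddef>
#include <deque>

std::deque<float> m_outputBuffer;

// Producer side: append newly decoded samples at the back.
void appendSamples(const float* samples, std::size_t count) {
    m_outputBuffer.insert(m_outputBuffer.end(), samples, samples + count);
}

// Consumer side: drop the samples that have already been played.
// Erasing at the front of a deque is cheap, unlike on a vector.
void discardReadSamples(std::size_t read_samples) {
    m_outputBuffer.erase(m_outputBuffer.begin(),
                         m_outputBuffer.begin() + read_samples);
}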
Finally I'm still using
m_outputBuffer.erase(
m_outputBuffer.begin(),
m_outputBuffer.begin()+read_samples);
as it's really efficient: the erasing is done in one chunk, and the remaining data is relocated to the beginning of my vector, so no pointers change.
Don't use C++ for this. Write it in C, which is of course also a subset of C++.
The buffer consists of a region of memory and two pointers, one to the start position and one to the end. When data comes in, you add it at the end pointer, wrapping around when you reach the end of the region. When data goes out, you increment the read pointer. You never need to delete or erase data. If the buffer overflows, that likely means something has gone wrong and you need to shut down the system - expanding it will just prolong the crash process.
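A minimal fixed-size ring buffer along those lines might look like this (the element type, capacity, and names are arbitrary choices for illustration; no overflow recovery is attempted):

#include <cstddef>
#include <cstdint>

struct RingBuffer {
    static constexpr std::size_t Capacity = 65536;
    uint8_t data[Capacity];
    std::size_t readPos  = 0;   // next sample to read
    std::size_t writePos = 0;   // next free slot
    std::size_t count    = 0;   // number of samples currently stored

    // Returns false on overflow instead of growing the buffer.
    bool push(uint8_t sample) {
        if (count == Capacity) return false;      // buffer full: treat as an error
        data[writePos] = sample;
        writePos = (writePos + 1) % Capacity;
        ++count;
        return true;
    }

    bool pop(uint8_t& sample) {
        if (count == 0) return false;             // buffer empty
        sample = data[readPos];
        readPos = (readPos + 1) % Capacity;
        --count;
        return true;
    }
};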
A little bit of background first (skip ahead to the actual question if you're bored by this).
I'm trying to glue two pieces of code together. One is a JSON/YML library that makes heavy use of a custom string view object, the other is a piece of code from the early 2000s.
I've been seeing weird behavior for a long time, until I traced it down to a memory issue, namely that the string views I construct in the JSON/YML library take a const char* as a constructor argument and assume that the memory location of that char array stays constant over the lifetime of the string view. However, some of the std::string objects on which I construct these views are temporary, so that's just not true and the string view ends up pointing at garbage.
Now, I thought I was being smart and constructed a cache in the form of a std::vector that would hold all the temporary strings, I would construct the string views on these and only clear the cache in the end - easy.
However, I was still seeing garbled strings every now and then, until I found the reason: sometimes, when pushing things to the vector beyond the preallocated size, the vector would be moved to a different memory location, invalidating all the string views. For now, I've settled on preallocating a cache size that is large enough to avoid any conceivable moving of the vector, but I can see this causing severe and untraceable problems in the future for very large runs. So here's my question:
How can I construct a std::vector<std::string> or any other string container that either avoids being moved in memory altogether, or at least throws an error message if that happens?
Of course, if you feel that I'm going about this whole issue in the wrong way fundamentally, please also let me know how I should deal with this issue instead.
If you're interested, the two pieces of code in question are RapidYAML and the CERN Statistics Library ROOT.
My answer from a similar question: Any way to update pointer/reference value when vector changes capability?
If you store objects in your vector as std::unique_ptr or std::shared_ptr, you can get an observing pointer to the underlying object with std::unique_ptr::get() (or a reference if you dereference the smart pointer). This way, even though the memory location of the smart pointer changes upon resizing, the observing pointer points to the same object and thus the same memory location.
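A minimal sketch of that indirection (cacheString is a hypothetical helper name): the vector may still reallocate, but each string lives at a stable heap address owned by its unique_ptr.

#include <memory>
#include <string>
#include <utility>
#include <vector>

std::vector<std::unique_ptr<std::string>> cache;

const char* cacheString(std::string temporary) {
    // push_back may move the unique_ptrs, but never the strings they own.
    cache.push_back(std::make_unique<std::string>(std::move(temporary)));
    return cache.back()->c_str();   // stays valid until the string is destroyed
}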
[...] sometimes, when pushing things to the vector beyond the preallocated size, the vector would be moved to a different memory location, invalidating all the string views.
The reason is that std::vector is required to store its data contiguously in memory. So, if you exceed the current capacity of the vector when adding an element, it will allocate a new, bigger block of memory and move all the data there.
What you are subject to is called iterator invalidation.
How can I construct a std::vector or any other string container that either avoids being moved in memory altogether, or at least throws an error message if that happens?
You have at least 3 easy solutions:
If your cache size is supposed to be fixed and is known at compile-time, I would advise you to use std::array instead.
If your cache size is supposed to be fixed but not necessarily known at compile time, I would advise you to reserve() the required capacity of your std::vector, so that you have the guarantee that it will be big enough to never need reallocation (see the sketch after this list).
If your cache size may change, I would advise you to use std::list instead. It is implemented as a (usually doubly) linked list, which guarantees that the elements will not be relocated in memory.
But since the elements are not stored contiguously in memory, you'll lose random access (i.e. you'll need to iterate over the list in order to find an element).
Of course there probably are other solutions (I do not claim this answer to be exhaustive), but these will let you change almost nothing in your code (only the container) and keep your string views from being invalidated.
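As a minimal sketch of the second option (maxEntries is a placeholder for whatever cache size you know in advance, and the string-view construction is only hinted at in the comment):

#include <cstddef>
#include <string>
#include <vector>

// Reserve the full capacity up front so push_back never reallocates.
std::vector<std::string> makeCache(std::size_t maxEntries) {
    std::vector<std::string> cache;
    cache.reserve(maxEntries);
    return cache;
}

// As long as at most maxEntries strings are pushed, the stored strings never
// move, so the pointers handed to the string views stay valid, e.g.:
//   cache.push_back(temporaryString);
//   view = make_string_view(cache.back().c_str(), cache.back().size());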
Perhaps use a std::list. Accessing elements is slower (at least when iterating), but the memory location of each element stays constant. The reason for both is that it does not use contiguous memory.
Alternatively, create a wrapper that wraps a pointer to a string that has been created with "new". That address will also be constant. EDIT: Somehow I managed to miss that what I've just described is pretty much a smart pointer minus automated deletion ;)
Sadly, on a classical OS at least, it is impossible to grow a vector while being sure its contents will stay at the same place.
There is realloc(), which tries to keep the data in place, but as you can read in its documentation there is no guarantee of that; only the OS decides.
To solve your problem, you need the concept of a pool, here a pool of strings, that handles the lifetime of your strings.
You may get away with a simple std::list of strings, but that leads to poor data locality and a lot of independent allocations, which is bad for performance. The same problems apply to smart pointers.
So if you care about performance, in my opinion the implementation in your case need not be far from your current one. Because you cannot resize the vector, you should prefer a std::array of a fixed size that you decide at compile time. Then, whenever you need more room, you can create a new one to expand your memory capacity. This is easily implemented with something like a std::list of std::array (see the sketch below).
I don't know if it applies here, but you must be careful if your application can create an unbounded number of strings during its execution, as that may lead to an ever-growing memory pool and eventually to memory problems. To fix that, you may ensure that the strings you no longer use can be reused or freed. Sadly I cannot help you much here, as those rules depend on your application.
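A sketch of that chunked pool under those assumptions (the class name, chunk size, and the add() interface are all invented for illustration):

#include <array>
#include <cstddef>
#include <list>
#include <string>
#include <utility>

// Strings live inside fixed-size std::array blocks; the std::list only ever
// appends new blocks, so no existing string is ever relocated.
class StringPool {
    static constexpr std::size_t ChunkSize = 256;
    std::list<std::array<std::string, ChunkSize>> chunks;
    std::size_t usedInLastChunk = ChunkSize;   // forces allocation on first add

public:
    // Stores a copy of s and returns a pointer that stays valid for the
    // lifetime of the pool.
    const std::string* add(std::string s) {
        if (usedInLastChunk == ChunkSize) {
            chunks.emplace_back();             // new chunk; nothing moves
            usedInLastChunk = 0;
        }
        std::string& slot = chunks.back()[usedInLastChunk++];
        slot = std::move(s);
        return &slot;
    }
};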
Is there a better way to copy the contents of a std::deque into a byte-array? It seems like there should be something in the STL for doing this.
// Generate byte-array to transmit
uint8_t * i2c_message = new uint8_t[_tx.size()];

if ( !i2c_message ) {
    errno = ENOMEM;
    ::perror("ERROR: FirmataI2c::endTransmission - Failed to allocate memory!");
} else {
    size_t i = 0;

    // Load byte-array
    for ( const auto & data_byte : _tx ) {
        i2c_message[i++] = data_byte;
    }

    // Transmit data
    _marshaller.sendSysex(firmata::I2C_REQUEST, _tx.size(), i2c_message);
    _stream.flush();

    delete[] i2c_message;
}
I'm looking for suggestions for either space or speed or both...
EDIT: It should be noted that _marshaller.sendSysex() cannot throw.
FOLLOW UP:
I thought it would be worth recapping everything, because the comments are pretty illuminating (except for the flame war). :-P
The answer to the question as asked...
Use std::copy
The bigger picture:
Instead of simply increasing the raw performance of the code, it was worth considering adding robustness and longevity to the code base.
I had overlooked RAII - Resource Acquisition is Initialization. By going in the other direction and taking a slight performance hit, I could get big gains in resiliency (as pointed out by #PaulMcKenzie and #WhozCraig). In fact, I could even insulate my code from changes in a dependency!
Final Solution:
In this case, I actually have access to (and the ability to change) the larger code base - often not the case. I reevaluated* the benefit I was gaining from using a std::deque and swapped the entire underlying container to a std::vector, thus avoiding the performance hit of copying between containers and gaining the benefits of contiguous data and RAII.
*I chose a std::deque because I always have to push_front two bytes to finalize my byte-array before sending. However, since it is always two bytes, I was able to pad the vector with two dummy bytes up front and overwrite them later by random access - O(1) time.
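A rough sketch of that padding trick; the class is a simplified stand-in reusing the _tx name from the question, and finalizeHeader plus the meaning of the two header bytes are assumptions:

#include <cstdint>
#include <vector>

class FirmataI2cSketch {
    std::vector<uint8_t> _tx;
public:
    FirmataI2cSketch() : _tx(2, 0) {}          // two dummy bytes reserved for the header

    void appendPayload(uint8_t b) { _tx.push_back(b); }

    // Overwrite the placeholders by index once the header values are known:
    // O(1) random access instead of a push_front.
    void finalizeHeader(uint8_t first, uint8_t second) {
        _tx[0] = first;
        _tx[1] = second;
    }
};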
Embrace the C++ standard library. Assuming _tx is really a std::deque<uint8_t>, one way to do this is simply:
std::vector<uint8_t> msg(_tx.cbegin(), _tx.cend());
_marshaller.sendSysex(firmata::I2C_REQUEST, msg.size(), msg.data());
This allocates the proper size contiguous buffer, copies the contents from the source iterator pair, and then invokes your send operation. The vector will be automatically cleaned up on scope-exit, and an exception will be thrown if the allocation for building it somehow fails.
The standard library provides a plethora of ways to toss data around, especially given iterators that mark where to start, and where to stop. Might as well use that to your advantage. Additionally, letting RAII handle the ownership and cleanup of entities like this rather than manual memory management is nearly always a better approach, and should be encouraged.
In all, if you need contiguity (and judging by the looks of that send-call, that's exactly why you're doing this), then copying from non-contiguous to contiguous space is pretty much your only option, and that takes space and copy-time. There is not much you can do to avoid that. I suppose peeking into the implementation specifics of std::deque and possibly doing something like stacking send-calls would be possible, but I seriously doubt there would be any reward, and the only savings would likely evaporate in the multiple send invocations.
Finally, there is another option that may be worth considering. Look at the source of all of this. Is a std::deque really warranted? Certainly you're building that container somewhere else. If you can do that build operation as efficiently, or nearly so, using a std::vector, then this entire problem goes away, as you can just send that.
For example, if you knew (provably) that your std::deque would never be larger than some size N, you could pre-size a std::vector or similar contiguous RAII-protected allocation to be 2*N in size, start a fore and an aft iterator pair in the middle, and either prepend data by walking the fore iterator backward or append data by walking the aft iterator forward. In the end, your data will be contiguous between fore and aft, and the send is all that remains. No copies would be required, though extra space still is. This all hinges on knowing the maximum message size with certainty. If that is available to you, it may be an idea worth profiling.
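A rough sketch of that idea, assuming a provable upper bound maxSize on the message length (the class and member names are invented):

#include <cstddef>
#include <cstdint>
#include <vector>

class MiddleOutBuffer {
    std::vector<uint8_t> storage;
    std::size_t fore, aft;                 // valid data lives in [fore, aft)

public:
    explicit MiddleOutBuffer(std::size_t maxSize)
        : storage(2 * maxSize), fore(maxSize), aft(maxSize) {}

    void prepend(uint8_t b) { storage[--fore] = b; }   // walk backward from the middle
    void append(uint8_t b)  { storage[aft++] = b; }    // walk forward from the middle

    const uint8_t* data() const { return storage.data() + fore; }
    std::size_t size() const    { return aft - fore; }
};

// Usage idea: build the payload with append(), add the two leading bytes with
// prepend(), then call _marshaller.sendSysex(..., buf.size(), buf.data()).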
I'm using libcurl (an HTTP transfer library) with C++ and trying to download files from remote HTTP servers. As the file is downloaded, my callback function is called multiple times (e.g. every 10 kB) to hand me buffer data.
Basically I need something like a "string buffer", a data structure to append a char buffer to an existing string. In C, I allocate (malloc) a char* and then, as new buffers come, I realloc and memcpy so that I can easily copy my buffer into the resized array.
In C++, there are multiple solutions to achieve this:
I can keep using malloc, realloc, memcpy but I'm pretty sure that they are not recommended in C++.
I can use vector<char>.
I can use stringstream.
My use case is: I'll append a few thousand items (chars) at a time, and after it all finishes (the download is completed), I will read all of it at once. I may need options like seek in the future (easy to achieve with the array solution (1)), but that's low priority for now.
What should I use?
I'd go for stringstream. Just insert into it as you receive the data, and when you're done you can extract a full std::string from it. I don't see why you'd want to seek into an array; anyway, if you know the block size, you can calculate where in the string the corresponding block went.
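A minimal sketch of that, assuming the libcurl write callback ultimately hands you a char pointer and a byte count (onData and finish are invented names):

#include <cstddef>
#include <sstream>
#include <string>

std::ostringstream buffer;

void onData(const char* data, std::size_t size) {
    // Append the received chunk as raw bytes.
    buffer.write(data, static_cast<std::streamsize>(size));
}

std::string finish() {
    return buffer.str();                   // the whole download as one string
}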
I'm not sure if many will agree with this, but for that use case I would actually use a linked list, with each node containing an arbitrarily large array of chars allocated using new. My reasoning being:
Items are added in large chunks, one chunk at a time, at the back.
I assume this could use quite a large amount of space, so you avoid reallocation events when a vector would otherwise need more space.
Since items are read sequentially, the penalty of linked lists being unidirectional doesn't affect you.
Should seeking through the list become a priority, this wouldn't work though. If it's ultimately not a lot of data, I honestly think a vector would be fine, despite not being the most efficient structure.
If you just need to append char buffers, you can also simply use std::string and its member function append. On top of that, stringstream gives you formatting functionality, so you can add numbers, padding, etc., but from your description you appear not to need that.
I would use vector<char>. But they will all work even with a seek, so your question is really one of style and there are no definitive answers there.
I think I'd use a deque<char>. Same interface as vector, and vector would do, but vector needs to copy the whole data each time an append exceeds its existing capacity. Growth is exponential, but you'd still expect about log N reallocations, where N is the number of equal-sized blocks of data you append. Deque doesn't reallocate, so it's the container of choice in cases where a vector would need to reallocate several times.
Assuming the callback is handed a char* buffer and length, the code to copy and append the data is simple enough:
mydeque.insert(mydeque.end(), buf, buf + len);
To get a string at the end, if you want one:
std::string mystring(mydeque.begin(), mydeque.end());
I'm not exactly sure what you mean by seek, but obviously deque can be accessed by index or iterator, same as vector.
Another possibility, though, is that if you expect a content-length at the start of the download, you could use a vector and reserve() enough space for the data before you start, which avoids reallocation. That depends on what HTTP requests you're making, and to what servers, since some HTTP responses will use chunked encoding and won't provide the size up front.
Create your own Buffer class to abstract away the details of the storage. If I were you I would likely implement the buffer based on std::vector<char>.
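One possible shape for such a Buffer class, backed by std::vector<char> as suggested (the interface is illustrative, not prescriptive):

#include <cstddef>
#include <string>
#include <vector>

class Buffer {
    std::vector<char> data_;

public:
    // Append a raw chunk, e.g. from the download callback.
    void append(const char* chunk, std::size_t size) {
        data_.insert(data_.end(), chunk, chunk + size);
    }

    const char* data() const { return data_.data(); }
    std::size_t size() const { return data_.size(); }

    // Read everything at once when the download is complete.
    std::string str() const { return std::string(data_.begin(), data_.end()); }
};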
I'm wrapping up user space linux socket functionality in some C++ for an embedded system (yes, this is probably reinventing the wheel again).
I want to offer a read and write implementation using a vector.
Doing the write is pretty easy, I can just pass &myvec[0] and avoid unnecessary copying. I'd like to do the same and read directly into a vector, rather than reading into a char buffer then copying all that into a newly created vector.
Now, I know how much data I want to read, and I can allocate appropriately (vec.reserve()). I can also read into &myvec[0], though this is probably a VERY BAD IDEA. Obviously doing this doesn't allow myvec.size() to return anything sensible. Is there any way of doing this that:
Doesn't completely feel yucky from a safety/C++ perspective
Doesn't involve two copies of the data block - once from kernel to user space and once from a C char * style buffer into a C++ vector.
Use resize() instead of reserve(). This will set the vector's size correctly -- and after that, &myvec[0] is, as usual, guaranteed to point to a contiguous block of memory.
Edit: Using &myvec[0] as a pointer to the underlying array for both reading and writing is safe and guaranteed to work by the C++ standard. Here's what Herb Sutter has to say:
So why do people continually ask whether the elements of a std::vector (or std::array) are stored contiguously? The most likely reason is that they want to know if they can cough up pointers to the internals to share the data, either to read or to write, with other code that deals in C arrays. That’s a valid use, and one important enough to guarantee in the standard.
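A minimal sketch of that approach, assuming a POSIX socket descriptor and a known byte count (error handling is deliberately thin, and the function name is invented):

#include <cstddef>
#include <vector>
#include <unistd.h>   // read()

// resize() makes the elements real, so buf.data() (equivalently &buf[0]) is a
// valid contiguous buffer the kernel can read straight into, and buf.size()
// is meaningful.
std::vector<char> readFromSocket(int fd, std::size_t bytesExpected) {
    std::vector<char> buf;
    buf.resize(bytesExpected);                    // resize, not reserve
    ssize_t n = ::read(fd, buf.data(), buf.size());
    if (n < 0) n = 0;                             // real code would report the error
    buf.resize(static_cast<std::size_t>(n));      // keep only what was actually read
    return buf;
}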
I'll just add a short clarification, because the answer was already given. resize() with an argument greater than the current size will add elements to the collection and value-initialize them. If you create
std::vector<unsigned char> v;
and then resize
v.resize(someSize);
All the unsigned chars will get initialized to 0. By the way, you can do the same with a constructor:
std::vector<unsigned char> v(someSize);
So theoretically it may be a little bit slower than a raw array, but if the alternative is to copy the array anyway, it's better.
reserve() only prepares the memory, so that no reallocation is needed if new elements are added to the collection, but you can't access that memory yet.
You have to get the information about the number of elements written to your vector from somewhere else (e.g. the return value of the read call); the vector itself won't know anything about it.
Assuming it's a POD struct, call resize rather than reserve. You can define an empty default constructor if you really don't want the data zeroed out before you fill the vector.
It's somewhat low level, but the semantics of construction of POD structs is purposely murky. If memmove is allowed to copy-construct them, I don't see why a socket-read shouldn't.
EDIT: ah, bytes, not a struct. Well, you can use the same trick, and define a struct with just a char and a default constructor which neglects to initialize it… if I'm guessing correctly that you care, and that's why you wanted to call reserve instead of resize in the first place.
If you want the vector to reflect the amount of data read, call resize() twice. Once before the read, to give yourself space to read into. Once again after the read, to set the size of the vector to the number of bytes actually read. reserve() is no good, since calling reserve doesn't give you permission to access the memory allocated for the capacity.
The first resize() will zero the elements of the vector, but this is unlikely to create much of a performance overhead. If it does then you could try Potatoswatter's suggestion, or you could give up on the size of the vector reflecting the size of the data read, and instead just resize() it once, then re-use it exactly as you would an allocated buffer in C.
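If you take the re-use route, a minimal sketch might look like this (assuming a POSIX descriptor; the buffer size and names are arbitrary):

#include <vector>
#include <unistd.h>   // read()

// The vector is sized (and zeroed) once; its size() stays fixed and the
// return value of read() tells you how many bytes are actually valid.
std::vector<char> recvBuffer(64 * 1024);

ssize_t receiveChunk(int fd) {
    return ::read(fd, recvBuffer.data(), recvBuffer.size());
}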
Performance-wise, if you're reading from a socket in user mode, most likely you can easily handle data as fast as it comes in. Maybe not if you're connecting to another machine on a gigabit LAN, or if your machine is frequently running 100% CPU or 100% memory bandwidth. A bit of extra copying or memsetting is no big deal if you are eventually going to block on a read call anyway.
Like you, I'd want to avoid the extra copy in user-space, but not for performance reasons, just because if I don't do it, I don't have to write the code for it...
I'm currently using vectors as C-style arrays to send and receive data through Winsock.
I have a std::vector and I'm using that as my 'byte array'.
The problem is, I'm using two vectors, one for each send, and one for each recv, but what I'm doing seems to be fairly inefficient.
Example:
std::string EndBody("\r\n.\r\n");
std::fill(m_SendBuffer.begin(),m_SendBuffer.end(),0);
std::copy(EndBody.begin(),EndBody.end(),m_SendBuffer.begin());
SendData();
SendData just calls send the appropriate number of times and ensures everything works as it should.
Anyway, unless I zero out the vector before each use, I get errors with stuff overlapping. Is there a more efficient way for me to do what I'm doing? Zeroing out the entire buffer on each call seems horribly inefficient.
Thanks.
You can use m_SendBuffer.clear();
otherwise the end() method would not know the real size of the buffer.
clear() is not a very expensive method to call. Unless you're working on some 486 or something, it shouldn't affect your performance.
Seems like the other posters are focusing on the cost of clearing the buffer, or the size of the buffer. Yet you don't really need to clear or zero out the whole buffer, or know its size, for what you're doing. The 'errors with stuff overlapping' are a problem with SendData, which you've not posted the code for. Presumably SendData doesn't know how much of the buffer it needs to send unless the data within it is zero-terminated. If that assumption is correct, all you have to do is zero-terminate the data correctly:
std::copy(EndBody.begin(),EndBody.end(),m_SendBuffer.begin());
m_SendBuffer[EndBody.size()] = 0;
SendData();
Wouldn't calling clear mean the vector gets a new size of 0? If the OP is using the vector as a large chunk of memory, then they'd have to call resize after clear to ensure the appropriate space is available for calls to send and recv.
Calling clear then resize on the vector would be around the same as just filling it with zeros, would it not?
See vector::clear, vector::resize, and std::fill.
As far as I understand the STL docs, calling clear simply sets the .end() value to be the same as .begin() and sets the size to zero, which is instant.
It doesn't change the amount of memory allocated or where the memory is (any iterator will obviously be invalid, but the data tends to linger!). The .capacity() doesn't change and neither does the data stored there, as you have already discovered. If you are always using .begin(), .end() and STL iterators to access the area, this won't matter.
Don't forget, member variables of a class aren't initialised unless you include them in your initialisation list. Adding m_SendBuffer(BUFSIZE, 0) there might do the trick.
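A minimal sketch of that, assuming BUFSIZE is the fixed buffer size the class already uses (the class name and the BUFSIZE value here are illustrative):

#include <cstddef>
#include <vector>

static const std::size_t BUFSIZE = 4096;        // placeholder value

class Connection {
    std::vector<char> m_SendBuffer;
public:
    // Allocated and zero-filled once, in the initialisation list.
    Connection() : m_SendBuffer(BUFSIZE, 0) {}
};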