Which dynamic container type to use? - c++

I'm writing code for a router (aka gateway), and as I receive and send packets I need a container type that can support the logic of a router. When receiving a packet I want to place it at the end of the dynamic container (from here on known as DC). When taking a packet out of the DC for processing I want to take it from the front of the DC.
Any suggestion on which one to use?
I've heard that a vector would be a good idea, but I'm not quite sure whether vectors are dynamic.
EDIT: The type of element that it should contain is a raw packet of type "unsigned char *". How would I write the code for the DC to contain such a type?

std::deque<unsigned char *> is the obvious choice here, since it supports efficient FIFO semantics (use push_back and pop_front, or push_front and pop_back; the performance should be the same).
In my experience the std::queue (which is a container adapter normally built over std::deque) is not worth the effort, it only restricts the interface without adding anything useful.
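A minimal sketch of that FIFO usage with std::deque (the PacketFifo wrapper and its ownership assumption are illustrative, not part of the answer):

```cpp
#include <cstddef>
#include <deque>

// Minimal sketch: a FIFO of raw packet pointers using std::deque.
// Assumes the router owns the packet buffers and frees them after
// processing; the wrapper name is a hypothetical choice.
struct PacketFifo {
    std::deque<unsigned char*> packets;

    void enqueue(unsigned char* pkt) { packets.push_back(pkt); }

    // Returns nullptr when the queue is empty.
    unsigned char* dequeue() {
        if (packets.empty()) return nullptr;
        unsigned char* pkt = packets.front();
        packets.pop_front();
        return pkt;
    }

    std::size_t size() const { return packets.size(); }
};
```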

For a router, you should probably use a fixed-size custom container (probably based around std::array or a C array). You can then add some logic to use it as a circular buffer. The fixed size is extremely important because you need to handle the scenario where packets arrive faster than you can send them out. When you reach your size limit, you then have to start dropping packets.
With dynamically resizable containers, you may end up running out of memory or introducing unacceptable amounts of latency into the system.
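A sketch of what such a fixed-size container might look like, assuming a simple drop-newest policy when full (the RingBuffer name, the capacity, and the policy are illustrative choices, not from the answer):

```cpp
#include <array>
#include <cstddef>

// Hypothetical fixed-size circular buffer over std::array that
// drops new elements when full, bounding both memory use and
// queueing latency.
template <typename T, std::size_t N>
class RingBuffer {
    std::array<T, N> slots{};
    std::size_t head = 0;   // next slot to read
    std::size_t count = 0;  // number of stored elements

public:
    bool full() const { return count == N; }
    bool empty() const { return count == 0; }

    // Returns false (element dropped) when the buffer is full.
    bool push(const T& value) {
        if (full()) return false;
        slots[(head + count) % N] = value;
        ++count;
        return true;
    }

    // Precondition: !empty().
    T pop() {
        T value = slots[head];
        head = (head + 1) % N;
        --count;
        return value;
    }
};
```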

You can use std::queue. You insert elements at the end using push() and remove elements from the front using pop(). front() returns the front element.
To store unsigned char* elements, you'd declare a queue like this:
std::queue<unsigned char*> packetQueue;
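For example, a minimal (hypothetical) use of that interface:

```cpp
#include <queue>

// Minimal usage of std::queue's push/pop/front as described above.
// The packet contents here are placeholders.
int demo() {
    std::queue<unsigned char*> packetQueue;
    unsigned char first = 0xAA, second = 0xBB;

    packetQueue.push(&first);   // insert at the back
    packetQueue.push(&second);

    unsigned char* next = packetQueue.front();  // oldest element
    packetQueue.pop();                          // remove it
    return next == &first;      // FIFO: first in, first out
}
```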

Related

Using STL vector as a FIFO container for byte data

I have a thread running that reads a stream of bytes from a serial port. It does this continuously in the background, and reads from the stream come in at different, separate times. I store the data in a container like so:
using ByteVector = std::vector<std::uint8_t>;
ByteVector receive_queue;
When data comes in from the serial port, I append it to the end of the byte queue:
ByteVector read_bytes = serial_port->ReadBytes(100); // read 100 bytes; returns as a "ByteVector"
receive_queue.insert(receive_queue.end(), read_bytes.begin(), read_bytes.end());
When I am ready to read data in the receive queue, I remove it from the front:
unsigned read_bytes = 100;
// Read 100 bytes from the front of the vector by using indices or iterators, then:
receive_queue.erase(receive_queue.begin(), receive_queue.begin() + read_bytes);
This isn't the full code, but gives a good idea of how I'm utilizing the vector for this data streaming mechanism.
My main concern with this implementation is the removal from the front, which requires shifting each element removed (I'm not sure how optimized erase() is for vector, but in the worst case, each element removal results in a shift of the entire vector). On the flip side, vectors are candidates for CPU cache locality because of the contiguous nature of the data (but CPU cache usage is not guaranteed).
I've thought of maybe using boost::circular_buffer, but I'm not sure if it's the right tool for the job.
I have not yet coded an upper-limit for the growth of the receive queue, however I could easily do a reserve(MAX_RECEIVE_BYTES) somewhere, and make sure that size() is never greater than MAX_RECEIVE_BYTES as I continue to append to the back of it.
Is this approach generally OK? If not, what performance concerns are there? What container would be more appropriate here?
Erasing from the front of a vector one element at a time can be quite slow, especially if the buffer is large (unless you can reorder elements, which you cannot with a FIFO queue).
A circular buffer is an excellent, perhaps ideal data structure for a FIFO queue of fixed size. But there is no implementation in the standard library. You'll have to implement it yourself or use a third party implementation such as the Boost one you've discovered.
The standard library provides a high level structure for a growing FIFO queue: std::queue. For a lower level data structure, the double ended queue is a good choice (std::deque, which is the default underlying container of std::queue).
On the flip side, vectors are candidates for CPU cache locality because of the contiguous nature of the data (but this is not guaranteed).
The contiguous storage of std::vector is guaranteed. A fixed circular buffer also has contiguous storage.
I'm not sure what is guaranteed about cache locality of std::deque but it is typically quite good in practice as the typical implementation is a linked list of arrays.
Performance of the vector approach will be poor, which may or may not matter. Taking from the head entails a cascade of moves. But the STL has queues for exactly this purpose; just use one.

asio: best way to store a message to be broadcast

I want to make a buffer of characters, write to it using sprintf, then pass it to multiple calls of async_write() (i.e. distribute it to a set of clients). My question is what is the best data structure to use for this? If there are compromises then I guess the priorities for defining "best" would be:
fewer CPU cycles
code clarity
less memory usage
Here is what I have currently, that appears to work:
void broadcast(){
    char buf[512];
    sprintf(buf,"Hello %s","World!");
    boost::shared_ptr<std::string> msg(new std::string(buf));
    msg->append(1,0); // NUL byte at the end
    for(std::vector< boost::shared_ptr<client_session> >::iterator i=clients.begin();
        i!=clients.end();++i)
        (*i)->write(buf);
}
Then:
void client_session::write(boost::shared_ptr<std::string> msg){
    if(!socket->is_open())return;
    boost::asio::async_write(*socket,
        boost::asio::buffer(*msg),
        boost::bind(&client_session::handle_write, shared_from_this(),_1,_2,msg)
    );
}
NOTES:
Typical message size is going to be less than 64 bytes; the 512 buffer size is just paranoia.
I pass a NUL byte to mark the end of each message; this is part of the protocol.
msg has to out-live my first code snippet (an asio requirement), hence the use of a shared pointer.
I think I can do better than this on all my criteria. I wondered about using boost::shared_array? Or creating an asio::buffer (wrapped in a smart pointer) directly from my char buf[512]? But reading the docs on these and other choices left me overwhelmed with all the possibilities.
Also, in my current code I pass msg as a parameter to handle_write(), to ensure the smart pointer is not released until handle_write() is reached. That is required isn't it?
UPDATE: If you can argue that it is better overall, I'm open to replacing sprintf with a std::stringstream or similar. The point of the question is that I need to compose a message and then broadcast it, and I want to do this efficiently.
UPDATE #2 (Feb 26 2012): I appreciate the trouble people have gone to post answers, but I feel none of them has really answered the question. No-one has posted code showing a better way, nor given any numbers to support them. In fact I'm getting the impression that people think the current approach is as good as it gets.
First of all, note that you are passing your raw buffer instead of your message to the write function; I assume you did not mean to do that?
If you're planning to send plain-text messages, you could simply use std::string and std::stringstream to begin with, no need to pass fixed-size arrays.
If you need to do some more binary/bytewise formatting, I would certainly start by replacing that fixed-size array with a vector of chars. In that case I also wouldn't take the round trip of converting it to a string first, but construct the asio buffer directly from the byte vector. If you do not have to work with a predefined protocol, an even better solution is to use something like Protocol Buffers or Thrift or any viable alternative. That way you do not have to worry about things like endianness, repetition, variable-length items, backwards compatibility, and so on.
The shared_ptr trick is indeed necessary; you do need to store the data that is referenced by the buffer somewhere until the buffer is consumed. Do not forget there are alternatives that could be clearer, like storing it simply in the client_session object itself. However, whether this is feasible depends a bit on how your messaging objects are constructed ;).
You could store a std::list<boost::shared_ptr<std::string> > in your client_session object, and have client_session::write() do a push_back() on it. I think that is cleverly avoiding the functionality of boost.asio, though.
As I understand it, you need to send the same messages to many clients, so the implementation will be a bit more complicated.
I would recommend preparing a message as a boost::shared_ptr<std::string> (as @KillianDS recommended) to avoid additional memory usage and copying from your char buf[512] (which is not safe in any case: you cannot be sure how your program will evolve in the future and whether this capacity will always be sufficient).
Then push this message onto each client's internal std::queue. If the queue is empty and no write is pending (for this particular client; use a boolean flag to check this), pop the message from the queue and async_write it to the socket, passing the shared_ptr as a parameter to the completion handler (the functor that you pass to async_write). Once the completion handler is called you can take the next message from the queue. The shared_ptr reference counter will keep the message alive until the last client has successfully sent it to its socket.
In addition I would recommend limiting the maximum queue size to slow down message creation when the network is too slow.
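A hypothetical sketch of that per-client queue pattern. The real code would call boost::asio::async_write; here a synchronous stub invokes the completion handler immediately so the control flow (queue, pending flag, shared_ptr lifetime) is visible. All names (pending, write_in_progress, sent_log) are illustrative:

```cpp
#include <functional>
#include <memory>
#include <queue>
#include <string>
#include <vector>

struct client_session {
    std::queue<std::shared_ptr<std::string>> pending;
    bool write_in_progress = false;
    std::vector<std::string> sent_log;  // stands in for the socket

    void write(std::shared_ptr<std::string> msg) {
        pending.push(std::move(msg));
        if (!write_in_progress) start_next_write();
    }

private:
    void start_next_write() {
        write_in_progress = true;
        std::shared_ptr<std::string> msg = pending.front();
        // In real code: boost::asio::async_write(socket, buffer(*msg), handler).
        // Capturing the shared_ptr keeps the message alive until completion.
        fake_async_write(*msg, [this, msg] { handle_write(); });
    }

    void handle_write() {
        pending.pop();
        if (!pending.empty()) start_next_write();
        else write_in_progress = false;
    }

    void fake_async_write(const std::string& data, std::function<void()> done) {
        sent_log.push_back(data);
        done();
    }
};
```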
EDIT
Usually sprintf is more efficient, at the cost of safety. If performance is critical and std::stringstream is a bottleneck, you can still use sprintf with std::string:
std::string buf(512, '\0');
sprintf(&buf[0],"Hello %s","World!");
Please note, prior to C++11 std::string was not guaranteed to store its data in a contiguous memory block, as opposed to std::vector; C++11 made contiguity a guarantee. Practically, all popular implementations of std::string use contiguous memory anyway. Alternatively, you can use std::vector<char> in the example above.

Resizable char buffer container type for C++

I'm using libcurl (HTTP transfer library) with C++ and trying to download files from remote HTTP servers. As file is downloaded, my callback function is called multiple times (e.g. every 10 kb) to send me buffer data.
Basically I need something like a "string buffer", a data structure to append char buffers to an existing string. In C, I allocate (malloc) a char* and then as new buffers come in, I realloc and memcpy so that I can easily copy my buffer into the resized array.
In C++, there are multiple solutions to achieve this.
I can keep using malloc, realloc, memcpy but I'm pretty sure that they are not recommended in C++.
I can use vector<char>.
I can use stringstream.
My use case is: I'll append a few thousand items (chars) at a time, and after it all finishes (the download is complete), I will read all of it at once. But I may need options like seek in the future (easy to achieve in the array solution (1)), though that is low priority now.
What should I use?
I'd go for stringstream. Just insert into it as you receive the data, and when you're done you can extract a full std::string from it. I don't see why you'd want to seek into an array? Anyway, if you know the block size, you can calculate where in the string the corresponding block went.
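A small sketch of that accumulation pattern (the chunk contents are placeholders for libcurl callback data, and the function name is illustrative):

```cpp
#include <sstream>
#include <string>

// Append each chunk as it arrives, then extract the whole body once
// at the end.
std::string download() {
    std::ostringstream body;

    const char chunk1[] = "Hello, ";
    const char chunk2[] = "world!";
    body.write(chunk1, sizeof(chunk1) - 1);  // append raw bytes
    body.write(chunk2, sizeof(chunk2) - 1);

    return body.str();  // one full copy at the end
}
```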
I'm not sure if many will agree with this, but for that use case I would actually use a linked list, with each node containing an arbitrarily large array of char that were allocated using new. My reasoning being:
Items are added in large chunks at a time, one at a time at the back.
I assume this could use quite a large amount of space, so you avoid reallocation events when a vector would otherwise need more space.
Since items are read sequentially, the penalty of link lists being unidirectional doesn't affect you.
Should seeking through the list become a priority, this wouldn't work, though. If it's not a lot of data in the end, I honestly think a vector would be fine, despite not being the most efficient structure.
If you just need to append char buffers, you can also simply use std::string and its member function append. On top of that, stringstream gives you formatting functionality, so you can add numbers, padding, etc., but from your description you don't appear to need that.
I would use vector<char>. But they will all work even with a seek, so your question is really one of style and there are no definitive answers there.
I think I'd use a deque<char>. Same interface as vector, and vector would do, but vector needs to copy the whole data each time an append exceeds its existing capacity. Growth is exponential, but you'd still expect about log N reallocations, where N is the number of equal-sized blocks of data you append. Deque doesn't reallocate, so it's the container of choice in cases where a vector would need to reallocate several times.
Assuming the callback is handed a char* buffer and length, the code to copy and append the data is simple enough:
mydeque.insert(mydeque.end(), buf, buf + len);
To get a string at the end, if you want one:
std::string mystring(mydeque.begin(), mydeque.end());
I'm not exactly sure what you mean by seek, but obviously deque can be accessed by index or iterator, same as vector.
Another possibility, though, is that if you expect a content-length at the start of the download, you could use a vector and reserve() enough space for the data before you start, which avoids reallocation. That depends on what HTTP requests you're making, and to what servers, since some HTTP responses will use chunked encoding and won't provide the size up front.
Create your own Buffer class to abstract away the details of the storage. If I were you I would likely implement the buffer based on std::vector<char>.
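A sketch of what such a Buffer class might look like, assuming std::vector<char> as the backing store (the class and method names are illustrative):

```cpp
#include <cstddef>
#include <vector>

// Thin wrapper over std::vector<char> that hides the storage
// details behind append() and data()/size().
class Buffer {
    std::vector<char> bytes;

public:
    void append(const char* data, std::size_t len) {
        bytes.insert(bytes.end(), data, data + len);
    }

    const char* data() const { return bytes.data(); }
    std::size_t size() const { return bytes.size(); }
};
```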

Output binary buffer with STL

I'm trying to use something that could best be described as a binary output queue. In short, one thread will fill a queue with binary data and another will pop this data from the queue, sending it to a client socket.
What's the best way to do this with STL? I'm looking for something like std::queue but for many items at a time.
Thanks
What does "binary data" mean? Just memory buffers? Do you want to be able push/pop one buffer at a time? Then you should wrap a buffer into a class, or use std::vector<char>, and push/pop them onto std::deque.
I've needed this sort of thing for a network communications system in a multi-threaded environment.
In my case I just wrapped std::queue with an object that handled locking (std::queue is not thread-safe, generally speaking). The objects in the queue were just very lightweight wrappers over char*-style arrays.
Those wrappers also provided the following member functions which I find extremely useful.
insertByte(unsigned int location, char value)
insertWord(unsigned int location, int value)
insertLong(unsigned int location, long value)
getByte/Word/Long(unsigned int location)
These were particularly useful in this context, since the word and long values had to be byteswapped, and I could isolate that issue to the class that actually handled it at the end.
There were some slightly strange things we were doing with "larger than 4 byte" chunks of the binary data, which I thought at the time would prevent us from using std::vector, although these days I would just use it and play around with &vector[x].
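A minimal sketch of the locking wrapper described above: std::queue guarded by a mutex, holding lightweight message buffers. The byte-level insertByte/insertWord helpers from the answer are omitted, and the class name is illustrative:

```cpp
#include <mutex>
#include <queue>
#include <vector>

class LockedQueue {
    std::queue<std::vector<char>> messages;
    mutable std::mutex lock;

public:
    void push(std::vector<char> msg) {
        std::lock_guard<std::mutex> guard(lock);
        messages.push(std::move(msg));
    }

    // Returns false when the queue is empty.
    bool pop(std::vector<char>& out) {
        std::lock_guard<std::mutex> guard(lock);
        if (messages.empty()) return false;
        out = std::move(messages.front());
        messages.pop();
        return true;
    }
};
```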

Which STL container should I use for a FIFO?

Which STL container would fit my needs best? I basically have a container 10 elements wide in which I continually push_back new elements while pop_front-ing the oldest element (about a million times).
I am currently using a std::deque for the task, but was wondering if a std::list would be more efficient since it wouldn't need to reallocate (or maybe I'm mistaking a std::deque for a std::vector?). Or is there an even more efficient container for my needs?
P.S. I don't need random access
Since there are a myriad of answers, you might be confused, but to summarize:
Use a std::queue. The reason for this is simple: it is a FIFO structure. You want FIFO, you use a std::queue.
It makes your intent clear to anybody else, and even yourself. A std::list or std::deque does not. A list can insert and remove anywhere, which is not what a FIFO structure is supposed to do, and a deque can add and remove from either end, which is also something a FIFO structure cannot do.
This is why you should use a queue.
Now, you asked about performance. Firstly, always remember this important rule of thumb: Good code first, performance last.
The reason for this is simple: people who strive for performance before cleanliness and elegance almost always finish last. Their code becomes a slop of mush, because they've abandoned all that is good in order to really get nothing out of it.
By writing good, readable code first, most of your performance problems will solve themselves. And if you later find your performance is lacking, it's now easy to add a profiler to your nice, clean code and find out where the problem is.
That all said, std::queue is only an adapter. It provides the safe interface, but uses a different container on the inside. You can choose this underlying container, and this allows a good deal of flexibility.
So, which underlying container should you use? We know that std::list and std::deque both provide the necessary functions (push_back(), pop_front(), and front()), so how do we decide?
First, understand that allocating (and deallocating) memory is not a quick thing to do, generally, because it involves going out to the OS and asking it to do something. A list has to allocate memory every single time something is added, and deallocate it when it goes away.
A deque, on the other hand, allocates in chunks. It will allocate less often than a list. Think of it as a list, but each memory chunk can hold multiple nodes. (Of course, I'd suggest that you really learn how it works.)
So, with that alone a deque should perform better, because it doesn't deal with memory as often. Mixed with the fact you're handling data of constant size, it probably won't have to allocate after the first pass through the data, whereas a list will be constantly allocating and deallocating.
A second thing to understand is cache performance. Going out to RAM is slow, so when the CPU really needs to, it makes the best out of this time by taking a chunk of memory back with it, into cache. Because a deque allocates in memory chunks, it's likely that accessing an element in this container will cause the CPU to bring back the rest of the container as well. Now any further accesses to the deque will be speedy, because the data is in cache.
This is unlike a list, where the data is allocated one at a time. This means that data could be spread out all over the place in memory, and cache performance will be bad.
So, considering that, a deque should be a better choice. This is why it is the default container when using a queue. That all said, this is still only a (very) educated guess: you'll have to profile this code, using a deque in one test and list in the other to really know for certain.
But remember: get the code working with a clean interface, then worry about performance.
John raises the concern that wrapping a list or deque will cause a performance decrease. Once again, neither he nor I can say for certain without profiling it, but chances are that the compiler will inline the calls that the queue makes. That is, when you say queue.push(), it will really just say queue.container.push_back(), skipping the function call completely.
Once again, this is only an educated guess, but using a queue will not degrade performance, when compared to using the underlying container raw. Like I've said before, use the queue, because it's clean, easy to use, and safe, and if it really becomes a problem profile and test.
Check out std::queue. It wraps an underlying container type, and the default container is std::deque.
Where performance really matters, check out the Boost circular buffer library.
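To illustrate the underlying-container point, a small (hypothetical) example of picking a different container for std::queue:

```cpp
#include <list>
#include <queue>

// std::queue's second template argument defaults to std::deque but
// can be any container providing push_back/pop_front/front.
int demo() {
    std::queue<int> on_deque;            // std::queue<int, std::deque<int>>
    std::queue<int, std::list<int>> on_list;

    on_deque.push(1);
    on_list.push(2);

    int a = on_deque.front();
    int b = on_list.front();
    on_deque.pop();
    on_list.pop();
    return a + b;
}
```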
I continually push_back new elements
while pop_front ing the oldest element
(about a million time).
A million is really not a big number in computing. As others have suggested, use a std::queue as your first solution. In the unlikely event of it being too slow, identify the bottleneck using a profiler (do not guess!) and re-implement using a different container with the same interface.
Why not std::queue? All it has is push (at the back) and pop (at the front).
A queue is probably a simpler interface than a deque but for such a small list, the difference in performance is probably negligible.
Same goes for list. It's just down to a choice of what API you want.
Use a std::queue, but be cognizant of the performance tradeoffs of the two standard Container classes.
By default, std::queue is an adapter on top of std::deque. Typically, that'll give good performance where you have a small number of queues containing a large number of entries, which is arguably the common case.
However, don't be blind to the implementation of std::deque. Specifically:
"...deques typically have large minimal memory cost; a deque holding just one element has to allocate its full internal array (e.g. 8 times the object size on 64-bit libstdc++; 16 times the object size or 4096 bytes, whichever is larger, on 64-bit libc++)."
To net that out, presuming that a queue entry is something that you'd want to queue, i.e., reasonably small in size, then if you have 4 queues, each containing 30,000 entries, the std::deque implementation will be the option of choice. Conversely, if you have 30,000 queues, each containing 4 entries, then more than likely the std::list implementation will be optimal, as you'll never amortize the std::deque overhead in that scenario.
You'll read a lot of opinions about how cache is king, how Stroustrup hates linked lists, etc., and all of that is true, under certain conditions. Just don't accept it on blind faith, because in our second scenario there, it's quite unlikely that the default std::deque implementation will perform well. Evaluate your usage and measure.
This case is simple enough that you can just write your own. Here is something that works well for micro-controller situations where STL use takes too much space. It is a nice way to pass data and a signal from an interrupt handler to your main loop.
// FIFO with circular buffer
#include <cstdint>

#define fifo_size 4

class Fifo {
    uint8_t buff[fifo_size];
    int writePtr = 0;
    int readPtr = 0;
public:
    // Drops the value (returns false) when the buffer is full.
    bool put(uint8_t val) {
        if (writePtr - readPtr >= fifo_size) return false;
        buff[writePtr % fifo_size] = val;
        writePtr++;
        return true;
    }
    // Returns 0 when the buffer is empty.
    uint8_t get() {
        uint8_t val = 0;
        if (readPtr < writePtr) {
            val = buff[readPtr % fifo_size];
            readPtr++;
            // rebase both pointers together to avoid integer overflow;
            // subtracting the same amount keeps the indices consistent
            if (readPtr >= fifo_size) {
                writePtr -= fifo_size;
                readPtr -= fifo_size;
            }
        }
        return val;
    }
    int count() { return writePtr - readPtr; }
};